Community Data Science Workshops (Spring 2014)/Reflections

This page hosts reflections on organization and curriculum and is written for anybody interested in organizing their own CDSW. This includes future versions of ourselves.
 
In their feedback, both mentors and students suggested that the workshops were a huge success. Students said that they learned an enormous amount and benefited enormously. Mentors were also generally very excited about running similar projects in the future. That said, we all felt there were many ways to improve on the projects.
 
== Structure ==
Our organization and the curriculum for Sessions 0 and 1 were borrowed from the Boston Python Workshop. Session 0 was a three-hour evening session to install software. The other sessions were all day-long sessions (10am to 4pm) broken up into the following schedule:
 
* Morning, 10am-noon: A 2-hour lecture
* Lunch, noon-1pm
* Afternoon, 1pm-3:30pm: Practice working on projects in 3 breakout sessions
* Wrap-up, 3:30pm-4pm: Next steps and upcoming opportunities
 
We did not take roll or even track how many people were present. Our feeling was that nearly every student who came to the first week (Sessions 0 and 1) came to Session 2. Retention between the second two sessions was much worse, with perhaps only 60% of the full group returning for Session 3. We attribute this both to poor timing (the weekend before finals at UW) and to the long gap between the sessions.
 
=== Morning Lectures ===
 
Benjamin Mako Hill gave all three of the two-hour lectures. All of the lectures involved the teacher working through material in an interactive Python interpreter with students following along on their own computers. In general, the lectures were well received by students.
 
Concerns with the lectures included the feeling that:
* If students got lost, it could be very hard to catch up given how the interactive session tended to build on earlier steps.
* There were often more mentors than needed in the morning sessions meaning that many mentors were idle.
* As the lectures progressed and the work and tasks became more complex, working in the interactive interpreter became increasingly difficult — particularly for very long programs.
 
To address these concerns, we've suggested the following changes:
 
* Break up the lecture into at least two parts. Between those parts, include a short (10-15 minute) exercise. This will break things up, allow mentors to be of more help, and give students who fell behind a chance to catch up. It will also allow students to grab coffee and such.
* Record the lectures so that students can catch up after the fact.
* Arrange for some mentors to arrive after noon if they'd prefer.
In the afternoon, we broke into small groups to work on projects. In each session, we tried to have two projects on different topics for learners with different interests and a third project which was self-directed.
 
In Sessions 1 and 2, the self-directed projects were based on working through examples from Code Academy that we had put together and aggregated from material already online. In the Code Academy room, students could work at their own pace and there were mentors on hand to work with them. In Session 3, we did not use Code Academy but instead had a room that was devoted to students working with mentors on data science projects of their choosing. In this case, because of issues with the student-to-mentor ratio, we asked that students only participate in this session if they felt they could be self-sufficient and were willing to work on their own 70-80% of the time with mentor help the rest of the time.
 
In all other breakout sessions, students would download a prepared example in the form of a zip file or tar.gz file. In each case, these projects would include:
On average, the sessions involved about one third interactive lecture, where the lead mentor would walk through one or more of the examples explaining the code in detail.
 
For most of the sessions, however, the lead mentor would present a list of increasingly difficult challenges which would be listed for the entire group (often in comments in the source code of an example project).
 
Learners would work on these challenges at their own pace, working with mentors for help. If the group was stuck on a concept or tool, the lead mentor would bring the group back together to walk through the concept using the project in the full group.
* http://repl.it looks intriguing but is perhaps neither ready enough nor "real" enough
* Emphasize more strongly that Windows users ''need'' to come to Session 0.
* Change the Code Academy lessons to remove or replace the HTML example. Users who knew HTML already were often confused because printing "<b>foo</b>" did not result in actually bolded text. This was just the wrong choice for a simple string concatenation example.
* Add some text to emphasize the difference between the Python shell and the system shell. Students were confused about this until the end.
* Add a new check-off step that includes the following: create a file, save it, and run it.
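The proposed check-off step can be sketched as a couple of system-shell commands. This is a minimal sketch: the filename <code>hello.py</code> and the printed message are invented for illustration, and the point is that these commands run in the ''system'' shell, not at the Python <code>>>></code> prompt.

```shell
# Run these in the system shell (Terminal / Command Prompt), NOT at the
# Python ">>>" prompt. The filename hello.py is just an example.
echo 'print("hello from a file")' > hello.py   # create and save a file
python3 hello.py                               # run it from the system shell
```

Walking through this once makes the difference between the two shells concrete: the system shell launches programs, while the Python shell evaluates Python expressions.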
In terms of the afternoon sessions, we felt that the Colorwall example was ''way'' too complicated. It introduced many features and concepts that nobody had seen up front.
 
The Wordplay example was much better in this regard. In particular, what we liked about Wordplay was that it was broken up into a series of small example projects that each did one small thing.
 
This provided us with an opportunity to walk through the example and then pose challenges to students to do something concrete. Students could look through their example programs and build up from there. We felt that this was much more useful than in Colorwall where there were several large conceptual hurdles.
== Session 2: Learning APIs ==
 
Mentors and students felt that this was the most successful and effective session — surprisingly, even more so than the most widely tested BPW session.
 
=== Morning Lecture ===
 
The morning lecture was well received — if delivered too quickly by Benjamin Mako Hill. Unsurprisingly, the example of PlaceKitten as an API was an enormous hit.
 
Generally speaking, explaining what APIs are is difficult. In particular, it's useful to explicitly say that we are focused on web APIs and that APIs are protocols or languages. Learners frequently wanted to ask questions like, "Where in the program is the API?" The API, of course, is the protocol that describes what a client can ask for and what they can expect to receive back. Preparing a concise answer to this question ahead of time is worthwhile.
 
Although there was some debate among the mentors, if there is one thing we might remove from the curriculum for a future session, it might be JSON. The reason it seemed less useful is that most of the APIs that most learners plan to use (e.g., Twitter) already have Python interfaces in the form of modules. In this sense, spending a quarter of a lecture learning how to parse JSON objects seems like a poor use of time. On the other hand, spending time looking at JSON objects provides practice thinking about more complex data structures (e.g., nested lists and dictionaries), which is something that ''is'' necessary and that students will otherwise not be prepared for.
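The tradeoff above is easy to show concretely: parsing even a small JSON document immediately exercises nested lists and dictionaries. A minimal sketch, with field names invented for illustration (they do not come from any real API):

```python
import json

# Parse a small JSON string (like one a web API might return) into
# nested Python dictionaries and lists. All field names are made up.
response_text = '''
{
  "user": {"name": "ada", "followers": 3},
  "tweets": [
    {"text": "hello world", "retweets": 2},
    {"text": "learning APIs", "retweets": 5}
  ]
}
'''

data = json.loads(response_text)  # str -> nested dicts and lists

print(data["user"]["name"])                         # dict inside a dict
print(data["tweets"][1]["text"])                    # dict inside a list
print(sum(t["retweets"] for t in data["tweets"]))   # iterate the nested list
```

Even this toy example requires students to chain index and key lookups, which is exactly the practice with nested structures that the JSON material provides.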
 
=== Afternoon Sessions ===
In our session, more than two thirds of students were interested in learning Twitter, and the session was heavily attended.
 
In Twitter, discoverability on the Tweepy objects was a challenge. Users will have an object, but it's not easy to introspect those objects and see what's there in the same way you can with a JSON object. This came as a surprise to us and required some real-time consultation with the Tweepy documentation.
 
The Wikipedia session ended up spending very little time working with the example code we had prepared. Instead, we worked directly from examples in the morning and wrote code almost from scratch while looking directly at the API.
 
Our session focused on building a version of the game Catfishing. Essentially, we set out to write a program that would get a list of categories for a set of articles, randomly select an article, and then show categories back to the user to have them "guess" the article. We modified the program to not include obvious giveaways (e.g., to remove categories that include the answer itself as a substring).
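The giveaway-filtering step can be sketched offline. This is not the code we wrote in the session, and the article and category data below are invented; it only illustrates the "remove categories that contain the answer as a substring" rule:

```python
import random

# Offline sketch of the Catfishing filtering logic: pick a random
# article from a prepared mapping of titles to categories, then hide
# any category that contains the answer itself as a substring.
# All article/category data here is invented for illustration.
ARTICLES = {
    "Ada Lovelace": ["1815 births", "English mathematicians",
                     "Cultural depictions of Ada Lovelace"],
    "Grace Hopper": ["1906 births", "American computer scientists",
                     "Grace Hopper Celebration"],
}

def pick_clues(articles, rng=random):
    """Return a (title, clues) pair with giveaway categories removed."""
    title = rng.choice(sorted(articles))
    clues = [c for c in articles[title] if title.lower() not in c.lower()]
    return title, clues

title, clues = pick_clues(ARTICLES)
print("Guess the article from:", clues)
```

In the real game, the mapping would come from the Wikipedia API's category queries rather than a hard-coded dictionary.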
 
Both sessions worked well and received good feedback.
 
In future sessions, we might like to focus on other APIs including, perhaps, APIs that do not include modules, which would provide a stronger non-pedagogical reason to focus on reading and learning JSON.
 
Simple APIs might have been a good example of something we could do as a small group exercise between parts of the lecture.
 
== Session 3: Data Analysis and Visualization ==
Our goal was to get learners as close to independence as possible, but we felt that most learners didn't make it all the way. In a sense, our final session seemed to end the class a little bit on a low point, in the sense that many users had learned enough that they were able to work but not enough that they weren't struggling enormously in the process.
 
One suggestion is to add an additional optional session with no lecture or planned projects. Learners could come and mentors would be there to work with them on ''their'' projects. Of course, we want everybody to be able to come, so we should also create a set of "random" projects for folks who don't have them.
 
 
* The spacing between sessions was too much. In part, this was due to the fact that we were creating curriculum as we went. Next time, we will try to do the sessions every other week (e.g., 3 sessions in 5 weeks).
 
* The breaks for lunch were a bit too long. We took 1-hour breaks, but 45 minutes would have been enough for everybody. Learners were interested in getting back in action.
 
* The general structure of the entire curriculum was not as clear as it might have been. This was at least in part because the details of what we would teach in the later sessions were not finished, but it led to questions. In the future, we should present this clearly up front.
 
* We did not have enough mentors with experience using Python in Windows. We had many skilled GNU/Linux users and ''zero'' students running GNU/Linux. Most of the mentors used Mac OS X and most of the learners ran Windows.
The rooms were free.
 
With a total budget on the order of $2000-2500, I think you could easily run a similar set of 3.5 day-long sessions.
 