Python Workshops for Beginners/Saturday November 15th lecture

Material for the lecture
For the lecture, you will need two files. Download both of them to your computer by right-clicking (or control-clicking) on each link and choosing Save as or Save link as. Keep track of where you put the files:


 * http://mako.cc/teaching/2014/cdsw/build_hpwp_dataset.py
 * http://nada.com.washington.edu/~mako/hp_wiki.csv

Overview of the day

 * Lecture
 * New tools!
 * Our philosophy around data visualization
 * We're going to walk through some analysis of edits to Harry Potter articles on Wikipedia, start to finish
 * We'll focus on manipulating data in Python
 * Visualizing things in Google Docs
 * Lunch (not Pizza!)
 * Project based work
 * Project- and challenge-based continuation of the morning's work, focusing on Google Docs
 * Matplotlib!
 * Room for you to work on your projects!
 * Wrap-up!

Lecture outline

 * Four things in Python I have to teach you:
 * Functions
 * while loops
 * break
 * string.join
 * My philosophy about data analysis: use the tools you have
 * Walk-through of the analysis
 * Look at the dataset and/or open it in a spreadsheet
 * Load data into Python
 * review of opening files
 * the csv module and the csv.reader function
 * csv.DictReader
 * Basic counting
 * Answer question: What proportion of edits to Wikipedia Harry Potter articles are minor?
 * Count the number of minor edits and calculate proportion
 * Answer question: What proportion of edits to Wikipedia Harry Potter articles are made by "anonymous" contributors?
 * Count the number of anonymous edits and calculate proportion
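The outline above can be sketched end to end in a few lines. This is a minimal sketch, not the lecture's actual code: the column names (title, user, timestamp, anon, minor), the "1" values marking minor and anonymous edits, and the inline sample standing in for the downloaded hp_wiki.csv are all assumptions; check the real file's header row before relying on them.

```python
import csv
from io import StringIO

# In the lecture you would open the downloaded file directly:
#   edits = list(csv.DictReader(open("hp_wiki.csv")))
# Here we use a small inline sample; the real column names may differ.
sample = StringIO(
    "title,user,timestamp,anon,minor\n"
    "Harry Potter,Alice,2014-11-01 10:00:00,,1\n"
    "Hermione Granger,127.0.0.1,2014-11-01 11:30:00,1,0\n"
    "Harry Potter,Bob,2014-11-02 09:15:00,,1\n"
    "Ron Weasley,127.0.0.1,2014-11-02 12:00:00,1,0\n"
)
edits = list(csv.DictReader(sample))   # each row becomes a dictionary

def proportion_where(rows, field, value):
    """Return the fraction of rows whose field equals value."""
    matches = 0
    for row in rows:
        if row[field] == value:
            matches = matches + 1
    return matches / len(rows)

# What proportion of edits are minor?  (assumes "1" marks minor edits)
print(proportion_where(edits, "minor", "1"))   # prints: 0.5

# What proportion are anonymous?  (assumes "1" marks anonymous edits)
print(proportion_where(edits, "anon", "1"))    # prints: 0.5

# str.join glues a list of strings into one comma-separated string
print(", ".join(sorted(set(row["title"] for row in edits))))
```

Notice the pattern: once counting lives in a function, answering the second question is a one-line change of arguments rather than a second copy of the loop.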

We mostly worked on these questions in the afternoon:


 * More advanced counting
 * Answer question: What are the most edited articles on Harry Potter?
 * Count the number of edits per article
 * Answer question: Who are the most active editors on articles in Harry Potter?
 * Count the number of edits per user
 * Looking at time series data
 * "Bin" data by day to generate the trend line
 * Exporting and visualizing data
 * Export dataset on edits over time
 * Export dataset on articles over users
 * Load data into Google Docs
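The afternoon steps follow one pattern: count into a dictionary, then write the dictionary out as CSV for Google Docs. Here is a hedged sketch under the same assumptions as before (field names like title, user, and timestamp are guesses at the real file's columns, and the output goes to an in-memory buffer rather than a real file):

```python
import csv
from io import StringIO

# Small stand-in for the rows of hp_wiki.csv; field names are assumptions.
edits = [
    {"title": "Harry Potter", "user": "Alice", "timestamp": "2014-11-01 10:00:00"},
    {"title": "Harry Potter", "user": "Bob",   "timestamp": "2014-11-01 11:30:00"},
    {"title": "Ron Weasley",  "user": "Alice", "timestamp": "2014-11-02 09:15:00"},
    {"title": "Harry Potter", "user": "Alice", "timestamp": "2014-11-02 12:00:00"},
]

# Count edits per article (the same pattern, keyed on "user",
# answers the most-active-editors question)
edits_per_article = {}
for edit in edits:
    title = edit["title"]
    if title not in edits_per_article:
        edits_per_article[title] = 0
    edits_per_article[title] = edits_per_article[title] + 1
print(edits_per_article)   # {'Harry Potter': 3, 'Ron Weasley': 1}

# "Bin" edits by day: the date is the first 10 characters of the timestamp
edits_per_day = {}
for edit in edits:
    day = edit["timestamp"][0:10]
    edits_per_day[day] = edits_per_day.get(day, 0) + 1
print(edits_per_day)       # {'2014-11-01': 2, '2014-11-02': 2}

# Export the daily counts as CSV.  With a real file you would use
# something like open("hp_edits_by_day.csv", "w") instead of StringIO.
output = StringIO()
writer = csv.writer(output)
writer.writerow(["day", "edits"])
for day in sorted(edits_per_day.keys()):
    writer.writerow([day, edits_per_day[day]])
print(output.getvalue())   # import this CSV into Google Docs to chart it
```

The exported file has one header row and one row per day, which is exactly the shape Google Docs expects for building a trend line.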