Studying Wikipedia Page Protections

This notebook provides a tutorial for how to study page protections on Wikipedia, either via the MediaWiki dumps or via the API. It has three stages:

Accessing the Page Protection Dumps

This is an example of how to parse the MediaWiki dumps and determine what sorts of edit protections are applied to a given Wikipedia article.

1. Extract data from the dump file and save in a database connection

After exploring the dump file, we extract the page-restriction records and load them into a local database.

Using the sqlite3 library, create a connection object that represents a database, and create the page_restrictions table with the same attributes observed in the CREATE TABLE statement in the dump file, as in the sketch below.
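
A minimal sketch, assuming a local SQLite file named page_protections.db (the name is arbitrary). The column list mirrors MediaWiki's page_restrictions schema; adjust it to match the CREATE TABLE statement you actually see in your dump if it differs.

```python
import sqlite3

# Connect to (and create, if needed) a local SQLite database file.
conn = sqlite3.connect('page_protections.db')
cur = conn.cursor()

# Mirror the columns from the dump's CREATE TABLE statement.
cur.execute("""
CREATE TABLE IF NOT EXISTS page_restrictions (
    pr_page INTEGER NOT NULL,     -- ID of the protected page
    pr_type TEXT NOT NULL,        -- restricted action, e.g. 'edit' or 'move'
    pr_level TEXT NOT NULL,       -- group required, e.g. 'autoconfirmed' or 'sysop'
    pr_cascade INTEGER NOT NULL,  -- 1 if protection cascades to transcluded pages
    pr_user INTEGER,              -- legacy column, unused
    pr_expiry TEXT,               -- expiry timestamp or 'infinity'
    pr_id INTEGER PRIMARY KEY     -- unique row ID
);
""")
conn.commit()
```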

2. Open the dump file and iterate over it to fill the table page_restrictions

After opening the dump file, iterate over it until you find the INSERT INTO statements, then execute them against the database connection.
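
A minimal sketch, assuming a gzipped dump at the hypothetical path enwiki-latest-page_restrictions.sql.gz and the `conn` object created above. SQLite happens to accept the backquoted identifiers and multi-row VALUES lists that the MySQL-format dump uses, which is what makes direct execution possible here; page_restrictions values are simple tokens, so MySQL-style string escaping should not be an issue, but other tables would need real parsing.

```python
import gzip

DUMP_PATH = 'enwiki-latest-page_restrictions.sql.gz'  # hypothetical path

with gzip.open(DUMP_PATH, 'rt', encoding='utf-8', errors='replace') as f:
    for line in f:
        # Each data line in the dump is one large multi-row INSERT statement.
        if line.startswith('INSERT INTO'):
            conn.execute(line)
conn.commit()
```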

Inspect the page_restrictions table
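
For a quick sanity check that the load worked (assuming `cur` from above):

```python
# Count rows and peek at a few records.
print(cur.execute('SELECT COUNT(*) FROM page_restrictions').fetchone())
for row in cur.execute('SELECT * FROM page_restrictions LIMIT 5'):
    print(row)
```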

3. Save page_restrictions table into a Pandas DataFrame

Save data into a Pandas DataFrame.

The Pandas library provides high-performance, easy-to-use data structures and data analysis tools.
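
A one-liner suffices here, assuming the `conn` object from the previous step:

```python
import pandas as pd

# Pull the whole table into a DataFrame through the open connection.
df = pd.read_sql_query('SELECT * FROM page_restrictions', conn)
```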

4. Inspect the DataFrame

As the previous table shows, for each pr_page (page id) there can be more than one record, apparently one for each type of protection that the page has.
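
For example, to verify the one-record-per-protection-type pattern (assuming `df` from above):

```python
# Overall shape and the distribution of records per page.
print(df.shape)
print(df.head())
print(df['pr_page'].value_counts().head())  # pages with multiple pr_type rows
```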

Accessing the Page Protection APIs

The Page Protection API can be a much simpler way to access data about page protections if you already know which articles you are interested in and the number of articles is relatively small (e.g., hundreds or low thousands).

NOTE: the APIs are up-to-date while the MediaWiki dumps are always at least several days behind -- i.e., they are snapshots at specific points in time -- so the data you get from the dumps might differ from the API if a page's protections have changed in the intervening days.

1. Select 10 random page IDs from the dump file.

2. Query the API for those 10 random page IDs.

3. Examine the API results and compare them to the data from the dump file.

Create a new DataFrame from the dump file with only the 10 random IDs.

Create a DataFrame with the API data, as in the sketch below, which covers all three steps.
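
A sketch of steps 1-3, assuming `df` and `pd` from the dump section, the English Wikipedia endpoint, and the third-party `requests` library; the User-Agent string is a placeholder. The API's prop=info with inprop=protection returns each page's current protection settings.

```python
import requests

# 1. Select 10 random page IDs from the dump data.
sample_ids = df['pr_page'].drop_duplicates().sample(10, random_state=0).tolist()

# 2. Query the API for those pages' current protections.
resp = requests.get(
    'https://en.wikipedia.org/w/api.php',
    params={
        'action': 'query',
        'prop': 'info',
        'inprop': 'protection',
        'pageids': '|'.join(str(i) for i in sample_ids),
        'format': 'json',
    },
    headers={'User-Agent': 'page-protections-tutorial (example)'},
)
pages = resp.json()['query']['pages']

# 3a. Dump-side DataFrame restricted to the sampled IDs.
dump_df = df[df['pr_page'].isin(sample_ids)]

# 3b. API-side DataFrame: one row per (page, protection) pair. Pages whose
#     protections have lapsed since the dump simply contribute no rows.
api_df = pd.DataFrame([
    {'pr_page': int(pid),
     'pr_type': p['type'],
     'pr_level': p['level'],
     'pr_expiry': p['expiry']}
    for pid, page in pages.items()
    for p in page.get('protection', [])
])

print(dump_df.sort_values('pr_page'))
print(api_df.sort_values('pr_page') if not api_df.empty else 'no current protections')
```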

Example Analyses of Page Protection Data

Here we show some examples of things we can do with the data that we gathered about the protections for various Wikipedia articles. You'll want to come up with some questions to ask of the data as well, which might require gathering additional data beyond the protection records.

Descriptive statistics

TODO: give an overview of basic details about page protections and any conclusions you reach based on the analyses you do below
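
As one possible starting point (not the full analysis), counting protection types and levels in the dump-derived DataFrame:

```python
# How common is each protection type, and which levels dominate?
print(df['pr_type'].value_counts())
print(df.groupby(['pr_type', 'pr_level']).size().sort_values(ascending=False))
print((df['pr_expiry'] == 'infinity').mean())  # share of indefinite protections
```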

Predictive Model

TODO: Train and evaluate a predictive model on the data you gathered for the above descriptive statistics. Describe what you learned from the model or how it would be useful.
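
A toy baseline to get started, assuming scikit-learn is installed and `df` and `pd` from above; predicting whether a protection is indefinite from its type and level is a stand-in for whatever target you choose, and richer features (page views, article age, etc.) would make the task more interesting.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# One-hot encode the categorical features; target = indefinite protection.
X = pd.get_dummies(df[['pr_type', 'pr_level']])
y = (df['pr_expiry'] == 'infinity').astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print('held-out accuracy:', model.score(X_test, y_test))
```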

Future Analyses

TODO: Describe any additional analyses you can think of that would be interesting (and why) -- even if you are not sure how to do them.