Lab 3 - Collaboration Networks

Professor Brian Keegan
Department of Information Science, CU Boulder
This notebook is copyrighted and made available under the Apache License v2.0.

This is the third of five lab notebooks that will explore how to do some introductory data extraction and analysis from Wikipedia data. This lab extends the methods from the prior lab, which analyzed a single article's revision history, and uses network science methods to analyze the networks of coauthorship and hyperlinks. You do not need to be fluent in these methods to complete the lab, but there are many options for extending the analyses we do here with more advanced queries and scripting methods.

Acknowledgements
I'd like to thank the Wikimedia Foundation for the PAWS system and related Wikitech infrastructure that this workbook runs within. Yuvi Panda, Aaron Halfaker, Jonathan Morgan, and Dario Taraborelli have all provided crucial support and feedback.

Confirm that basic Python commands work

Import modules and setup environment

Load up all the libraries we'll need to connect to the database, retrieve information for analysis, and visualize results.
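
As a reference, here is one plausible set of imports for this lab; it is a sketch based on the tools named in this notebook (pandas, networkx, pymysql for the database, requests for the API, matplotlib for plots), not necessarily the exact cell the course used.

import pandas as pd
import networkx as nx
import pymysql                     # client for the MySQL replica databases
import requests                    # for calls to the MediaWiki API
import matplotlib.pyplot as plt
%matplotlib inline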

Define the name of the article you want to use for the rest of the lab.
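
For example, using the article mentioned later in this lab (any English Wikipedia title will work here; this particular value is only an illustration):

# The article the rest of the lab revolves around; substitute any title you like
page_title = 'September 11 attacks'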

Define helper functions

Start off by using two functions from Lab 2.

Retrieve the content of the page via API
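
Lab 2's implementation is not reproduced here; below is a minimal sketch of such a function against the MediaWiki API. The function name and the choice of request parameters are assumptions for illustration, not necessarily what Lab 2 used.

import requests

def get_page_content(title):
    """Return the wikitext of the latest revision of a page via the MediaWiki API."""
    params = {'action': 'query',
              'prop': 'revisions',
              'rvprop': 'content',
              'titles': title,
              'format': 'json',
              'formatversion': 2}
    r = requests.get('https://en.wikipedia.org/w/api.php', params=params).json()
    return r['query']['pages'][0]['revisions'][0]['content']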

Calculate communities within the network
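
Again, the Lab 2 function itself is not shown; the sketch below assumes a python-igraph graph and uses the multilevel (Louvain) algorithm, which is one reasonable choice rather than necessarily the notebook's own method.

import igraph as ig

def calculate_communities(ig_graph, name_attr='name'):
    """Louvain (multilevel) community detection on an undirected igraph graph.
    Returns a dictionary mapping each vertex label to a community id."""
    weights = 'weight' if 'weight' in ig_graph.es.attributes() else None
    partition = ig_graph.community_multilevel(weights=weights)
    # Fall back to vertex indices if the graph has no naming attribute
    labels = ig_graph.vs[name_attr] if name_attr in ig_graph.vs.attributes() else list(range(ig_graph.vcount()))
    return {labels[v]: cid for cid, members in enumerate(partition) for v in members}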

Data retrieval

Retrieve the host name, username, and (encrypted) password associated with your account so you can login to the database.

Connect to the system using the credentials from above, then select the database for the English Wikipedia.
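
A sketch of both steps on PAWS, where the replica credentials live in a configuration file in your home directory. The host name and file path below are assumptions based on the standard Wikitech setup and may differ in your environment.

import os
import pymysql

# ~/.my.cnf on PAWS holds the replica user name and (encrypted) password
conn = pymysql.connect(host='enwiki.labsdb',                       # assumed replica host name
                       read_default_file=os.path.expanduser('~/.my.cnf'),
                       database='enwiki_p',                        # English Wikipedia replica
                       charset='utf8')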

Define functions to retrieve data from the MySQL database

Define a function get_user_revision_counts to return, for each user who edited a single article, the number of revisions they made and the timestamps of their first and last edits.
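
A sketch of what such a function might look like, querying the revision and page tables on the replica. The SQL column names follow the pre-actor-migration MediaWiki schema, which is an assumption about the database version this notebook targets.

import pandas as pd

def get_user_revision_counts(page_title, conn):
    """Return one row per user who edited the page, with their revision count
    and the timestamps of their first and last edits."""
    query = """
        SELECT rev_user_text AS user,
               COUNT(*) AS edits,
               MIN(rev_timestamp) AS min_timestamp,
               MAX(rev_timestamp) AS max_timestamp
        FROM revision
        JOIN page ON rev_page = page_id
        WHERE page_namespace = 0 AND page_title = %(title)s
        GROUP BY rev_user_text
        ORDER BY edits DESC
    """
    df = pd.read_sql_query(query, conn, params={'title': page_title.replace(' ', '_')})
    df['page'] = page_title
    return df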

Check the results to see how this function works.

Write a function get_neighbors_all_revisions to take a single page, get all of its outlinks, and for each of these outlinks get the user revision counts. This will return a large DataFrame containing every user who ever edited any of these articles and the number of revisions they made.
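
One possible implementation is sketched below, pulling the outlinks from the pagelinks table and reusing get_user_revision_counts for each linked page. The table join and column names are assumptions about the replica schema.

def get_page_outlinks(page_title, conn):
    """Return the main-namespace page titles linked from page_title."""
    query = """
        SELECT pl_title
        FROM pagelinks
        JOIN page ON pl_from = page_id
        WHERE page_namespace = 0 AND pl_namespace = 0 AND page_title = %(title)s
    """
    links = pd.read_sql_query(query, conn, params={'title': page_title.replace(' ', '_')})
    # Replica columns come back as bytes; decode and restore spaces in titles
    return [(t.decode('utf8') if isinstance(t, bytes) else t).replace('_', ' ')
            for t in links['pl_title']]

def get_neighbors_all_revisions(page_title, conn):
    """Get user revision counts for a page and every page it links to."""
    all_pages = [page_title] + get_page_outlinks(page_title, conn)
    return pd.concat([get_user_revision_counts(p, conn) for p in all_pages],
                     ignore_index=True)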

Now run the get_neighbors_all_revisions function to get not only all the page outlinks but also all the revisions for every page in the dataset.

This step may take more than a minute, depending on the number of articles and number of revisions made to them.
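
The call itself is a single line; the DataFrame name all_rev_count_df is chosen to match the graph-construction code later in this notebook.

# Revision counts for the seed article and everything it links to
all_rev_count_df = get_neighbors_all_revisions(page_title, conn)
all_rev_count_df.head()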

Convert edgelist data into a network

We can convert this edgelist into a networkx bipartite graph in several steps. First we store the unique names of all the users and pages as references for subsequent analysis. Then we use the from_pandas_dataframe function to take the edgelist DataFrame (all_rev_count_df) and turn it into a directed graph (order of the connections matters) with the users as the source and the page titles as the target. The nodes in the resulting graph should then be labelled as users or pages so that we can compare them more clearly in the visualization.

However, the amount of data in the network exceeds the available memory we have in the PAWS environment (~1 GB). The code for constructing the whole network graph is below, but not in an executable form, to prevent a kernel crash and the loss of your progress so far.

# Convert the edgelist_df into a networkx graph,
# preserving source, target, and weights
collab_g = nx.from_pandas_dataframe(all_rev_count_df, source='user', target='page',
                                    edge_attr=['edits','min_timestamp','max_timestamp'],
                                    create_using=nx.DiGraph())

# Label the nodes in the network as being users or pages using the lists from above
collab_g.add_nodes_from(collab_users, nodetype='user')
collab_g.add_nodes_from(collab_pages, nodetype='page')

# Write the graph out to file so we can visualize it in Gephi
nx.write_gexf(collab_g, 'collaboration_{0}.gexf'.format(page_title.replace(' ','_')))

Try an alternate version that excludes users with 'bot' in their username and users who made only one revision. You may get a SettingWithCopyWarning that pops up as a red box, but the step should still have worked successfully if the number of users, pages, and edges prints out at the bottom.
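
A sketch of that filtering step is below. The 'u:' prefix added to user names is an assumption inferred from the pendant-removal code later in the notebook (which looks for 'u:' in node names), and the direct assignment to a filtered DataFrame is the kind of operation that triggers the SettingWithCopyWarning mentioned above.

# Drop users with 'bot' in their name and users who made only one revision
nobot_df = all_rev_count_df[~all_rev_count_df['user'].str.lower().str.contains('bot', na=False)]
nobot_df = nobot_df[nobot_df['edits'] > 1]

# Prefix user names so they cannot collide with page titles
# (this assignment is the likely source of the SettingWithCopyWarning)
nobot_df['user'] = 'u:' + nobot_df['user']

collab_nobot_users = nobot_df['user'].unique()
collab_nobot_pages = nobot_df['page'].unique()

collab_nobot_g = nx.from_pandas_dataframe(nobot_df, source='user', target='page',
                                          edge_attr=['edits','min_timestamp','max_timestamp'],
                                          create_using=nx.DiGraph())
collab_nobot_g.add_nodes_from(collab_nobot_users, nodetype='user')
collab_nobot_g.add_nodes_from(collab_nobot_pages, nodetype='page')

print(len(collab_nobot_users), len(collab_nobot_pages), collab_nobot_g.number_of_edges())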

Project bipartite network into a page-page graph

Project the two-mode/bipartite graph (having users connected to pages) into a weighted one-mode graph where pages are connected to pages if they share users in common.
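
A sketch of the page-side projection, using networkx's weighted bipartite projection on the no-bot graph built above. weighted_projected_graph is applied to an undirected copy here; the output variable and file names are placeholders.

# Page nodes in the no-bot collaboration graph
collab_nobot_page_nodes = [n for n, d in collab_nobot_g.nodes(data=True)
                           if d.get('nodetype') == 'page']

# Pages are connected if they share users; edge weights count the shared users
page_collaboration_g = nx.bipartite.weighted_projected_graph(collab_nobot_g.to_undirected(),
                                                             collab_nobot_page_nodes)
page_collaboration_g.number_of_nodes(), page_collaboration_g.number_of_edges()

# Write out for visualization in Gephi
nx.write_gexf(page_collaboration_g, 'cwpg_pages_nobot_{0}.gexf'.format(page_title.replace(' ', '_')))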

You could similarly do this for users, but it requires much more memory than we have available within PAWS. You can remove the pendants (nodes having a degree of one) to shrink the graph, but it still generates a super-hairball-y graph.

This code is not executable because it will almost certainly cause your notebook to run out of memory and crash. I provide the code in case you want to run it locally on other data.

# Compute (un-normalized) degrees by rescaling networkx's degree centrality
num_of_nodes = len(collab_nobot_g) - 1
collab_nobot_dc = {u: dc*num_of_nodes for u, dc in nx.degree_centrality(collab_nobot_g).items()}

# Identify low-degree ("pendant") user nodes
pendant_users = [node for node, degree in collab_nobot_dc.items() if 'u:' in node and degree < 3]

# Copy the graph and drop the pendant users
cg_no_pendants = collab_nobot_g.copy()
cg_no_pendants.remove_nodes_from(pendant_users)

# Project the bipartite graph onto the remaining user nodes
cg_no_pendants_users = [n for n, d in cg_no_pendants.nodes_iter(data=True) if d['nodetype'] == 'user']
user_collaboration_g = nx.bipartite.weighted_projected_graph(cg_no_pendants, cg_no_pendants_users)
user_collaboration_g.number_of_nodes(), user_collaboration_g.number_of_edges()

# Write the user-user projection out to file for visualization in Gephi
nx.write_gexf(user_collaboration_g, 'cwpg_users_cg_nobot_{0}.gexf'.format(page_title.replace(' ','_')))

Extract the backbone of the graph using the method from Serrano, et al. (2009). This method retains the most statistically-significant weighted edges for each node based on a cut-off value of alpha. Rather than using a global thresholding strategy (e.g., dropping all edges below a certain weight) it retains the most important edges for each node.
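
The notebook's own backbone function is not reproduced here; a minimal sketch of the Serrano et al. disparity filter for a weighted, undirected networkx graph follows. An edge is kept when its normalized weight p_ij = w_ij / s_i is significant for at least one endpoint, i.e. when (1 - p_ij)^(k_i - 1) < alpha.

def extract_backbone(g, alpha=0.01):
    """Disparity filter (Serrano et al. 2009) for a weighted, undirected graph.
    Keeps an edge if it is statistically significant for at least one endpoint."""
    backbone = nx.Graph()
    for node in g:
        k = g.degree(node)
        if k < 2:
            continue                                   # pendants have no filterable edges
        strength = sum(d['weight'] for _, _, d in g.edges(node, data=True))
        for _, neighbor, d in g.edges(node, data=True):
            p_ij = d['weight'] / strength              # normalized edge weight at this node
            if (1 - p_ij) ** (k - 1) < alpha:          # significance test for this edge
                backbone.add_edge(node, neighbor, weight=d['weight'])
    return backbone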

Load the data for the projected backbone (0.01 threshold) collaboration graph.

Run the function on the igraph version of the backbone (0.01 threshold) collaboration network.
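
A sketch of the loading and conversion steps. The file name and format are placeholders for whatever backbone file was produced earlier; igraph does not read GEXF, so the sketch round-trips through GraphML, where node identifiers end up in the 'id' vertex attribute.

# Load the 0.01-threshold backbone graph (placeholder file name)
backbone_g = nx.read_gexf('backbone_0.01_collaboration.gexf')

# Convert to igraph via GraphML
nx.write_graphml(backbone_g, 'backbone_0.01_collaboration.graphml')
backbone_ig = ig.Graph.Read_GraphML('backbone_0.01_collaboration.graphml')

# Run the community-detection function defined near the top of the notebook.
# This may take a minute or more on a large graph.
backbone_communities = calculate_communities(backbone_ig, name_attr='id')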

This may take a minute or more, since these are intensive calculations.

Compute descriptive statistics for the collaboration network

Identify the most well-connected nodes

Compute the directed degree centralities. The network is constructed such that users contribute to articles. It should be impossible for articles to contribute to articles, or users to contribute to users, or articles to contribute to users.

Following this enforced direction, in-degree values for articles should be non-zero and reflect the number of users who contributed to them while it should be zero for users. The out-degree values for users should be non-zero and reflect the number of articles they contributed to while it should be zero for pages.

Convert the in- and out-degree dictionaries from above into a DataFrame called "cg_degree_df" for collaboration degree centrality DataFrame. Also create new columns in cg_degree_df that label whether the row is for a page or a user to help with filtering later on.
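
A sketch of that step, computed here on the no-bot graph built above (the notebook may use a different graph object):

# In- and out-degree for every node in the directed collaboration graph
in_degree_d = dict(collab_nobot_g.in_degree())
out_degree_d = dict(collab_nobot_g.out_degree())

cg_degree_df = pd.DataFrame({'in_degree': pd.Series(in_degree_d),
                             'out_degree': pd.Series(out_degree_d)})

# Label each row as a page or a user to help with filtering later on
nodetypes = {n: d.get('nodetype') for n, d in collab_nobot_g.nodes(data=True)}
cg_degree_df['nodetype'] = pd.Series(nodetypes)
cg_degree_df['page'] = cg_degree_df['nodetype'] == 'page'
cg_degree_df['user'] = cg_degree_df['nodetype'] == 'user'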

Look at the nodes with the highest in-degree: these are articles and the number of unique users who contributed to them.

Look at the nodes with the highest out-degree: these are users and the number of unique articles (in the set) they contributed to.
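
Both views are one-line filters on cg_degree_df; the column and label names follow the sketch above.

# Articles with the most unique contributors
cg_degree_df[cg_degree_df['nodetype'] == 'page'].sort_values('in_degree', ascending=False).head(10)

# Users who contributed to the most articles in the set
cg_degree_df[cg_degree_df['nodetype'] == 'user'].sort_values('out_degree', ascending=False).head(10)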

Are there any articles with only a single editor?

Look at the articles that only have a single contributor in the set of hyperlinked articles. Some of these look like major topics, but they are not the main pages themselves; rather, they are redirects that point to another page elsewhere. For instance, an article like "September 11, 2001 attacks" exists but actually points to "September 11 attacks". The former page is an example of a redirect that has its own revision history, but typically an extremely short one.
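
A sketch of the filter for those single-contributor articles, again using the cg_degree_df columns assumed above:

# Articles in the set whose in-degree is exactly one, i.e. a single contributor
single_editor_pages = cg_degree_df[(cg_degree_df['nodetype'] == 'page') &
                                   (cg_degree_df['in_degree'] == 1)]
single_editor_pages.head(20)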