# Lab 2 - Hyperlink Networks¶

Professor Brian Keegan
Department of Information Science, CU Boulder

This is the second of five lab notebooks that will explore how to do some introductory data extraction and analysis from Wikipedia data. This lab will extend the methods in the prior lab about analyzing a single article's revision histories and use network science methods to analyze the networks of hyperlinks around a single article. You do not need to be fluent in either to complete the lab, but there are many options for extending the analyses we do here by using more advanced queries and scripting methods.

Acknowledgements
I'd like to thank the Wikimedia Foundation for the PAWS system and related Wikitech infrastructure that this workbook runs within. Yuvi Panda, Aaron Halfaker, Jonathan Morgan, and Dario Taraborelli have all provided crucial support and feedback.

## Import modules and setup environment¶

Load up all the libraries we'll need to connect to the database, retrieve information for analysis, and visualize results.

Define the name of the article you want to use for the rest of the lab.

The first part of this lab will examine the hyperlinks among Wikipedia articles.

### Retrieve the content of the page via API¶

Write a function that takes an article title and returns the list of links in the body of the article. Note that the reason we don't use the "pagelinks" table in MySQL or the "links" parameter in the API is that this includes links within templates. Articles with templates link to each other forming over-dense clusters in the resulting networks. We only want the links appearing in the body of the text.

We pass a request to the API, which returns a JSON-formatted string containing the HTML of the page. We use BeautifulSoup to parse through the HTML tree and extract the non-template links and return them as a list.
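The template-filtering step can be sketched as follows. This is an illustrative reconstruction, not the lab's exact code: the helper name `extract_body_wikilinks` and the `navbox` class used to spot template markup are assumptions.

```python
from bs4 import BeautifulSoup

def extract_body_wikilinks(html):
    """Extract article links from rendered page HTML, skipping links
    that live inside navigation templates (here assumed to use the
    "navbox" class)."""
    soup = BeautifulSoup(html, 'html.parser')
    # Remove template-generated markup so its links are excluded
    for div in soup.find_all('div', class_='navbox'):
        div.decompose()
    links = []
    for a in soup.find_all('a', href=True):
        href = a['href']
        # Keep only internal article links; ':' filters out special
        # namespaces like File: and Category:
        if href.startswith('/wiki/') and ':' not in href:
            links.append(href[len('/wiki/'):])
    return links

html = '''
<p>See <a href="/wiki/Network_science">Network science</a> and
<a href="/wiki/Graph_theory">Graph theory</a>.</p>
<div class="navbox"><a href="/wiki/Template_link">Template link</a></div>
<a href="/wiki/File:Example.png">File link</a>
'''
print(extract_body_wikilinks(html))  # ['Network_science', 'Graph_theory']
```

The template link and the File: link are both dropped, which is exactly why the body-text links give a less dense network than the "pagelinks" table would.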

Run an example article, showing the first 10 articles.

You could write a recursive function like recursively_get_hyperlink_network that would crawl the hyperlink network out to an arbitrary distance, but this becomes exorbitantly expensive at any depth greater than 1.

Here's an example function, but it is not executable, to prevent you from harming yourself. :)

```python
def recursively_get_hyperlink_network(seed_page, depth):
    neighbors = {}
    if depth < 0:
        return neighbors
    neighbors[seed_page] = get_page_outlinks(seed_page)
    for neighbor in neighbors[seed_page]:
        neighbors[neighbor] = recursively_get_hyperlink_network(neighbor, depth - 1)
    return neighbors
```

Instead, define a simple function to get the 1.5-step ego hyperlink network. The "ego" is the seed page you start from, the "alters" are the neighbors that the ego links out to. We also get the alters of the alters (2nd order alters), but only include these 2nd order connections if they link to 1st order alters. In other words, the 1.5-step ego hyperlink network are all the pages linked from the seed page and the connections among this set of articles.
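The 1.5-step ego logic can be sketched like this. The toy link dictionary and the helper names are hypothetical stand-ins for the lab's live API calls:

```python
import networkx as nx

def get_ego_hyperlink_network(seed, get_outlinks):
    """Build the 1.5-step ego network: the seed, its alters, and only
    those alter-to-alter links that stay inside the ego set."""
    g = nx.DiGraph()
    alters = get_outlinks(seed)
    ego_set = set(alters) | {seed}
    for alter in alters:
        g.add_edge(seed, alter)
    for alter in alters:
        for target in get_outlinks(alter):
            if target in ego_set:  # discard 2nd-order links leaving the ego set
                g.add_edge(alter, target)
    return g

# Toy link structure standing in for live article outlinks (hypothetical data)
toy_links = {'A': ['B', 'C'], 'B': ['C', 'Z'], 'C': ['A'], 'Z': []}
g = get_ego_hyperlink_network('A', lambda p: toy_links.get(p, []))
print(sorted(g.edges()))  # [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'A')]
```

Note that B's link to Z is dropped because Z is not in the ego set. The resulting graph object could then be saved to disk with, e.g., `nx.write_gexf`.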

Run this on an example article and save the resulting graph object to disk.

This step could take more than a minute depending on the number of links and size of the neighboring pages.

### Play the Wikipedia Game!¶

Using only the hyperlinks on the article, try to get from the first article to the second article.

No cheating!

After you've played the game a few times, see what an optimal shortest path is. You may get an error indicating there is no shortest path, in which case, try a new pair of nodes.

The shortest path length is the number of steps in the path connecting two nodes in the fewest steps. This is related to the "small world" effect where everyone in the world is just a few handshakes from each other. It's rare to find complex networks where the longest shortest path is above 5. Nodes that are this far from each other are likely about very unrelated topics.

If there are no paths greater than 5, lower the path_length_threshold from 5 to 4.

The long_path_lengths dictionary below is populated by computing all the shortest path lengths between nodes in the network and only keeping those paths that are longer than 5 steps from each other. In a directed graph like our hyperlink network, it's important to follow the direction of the arrows: if page A links to page B but page B doesn't link to page A, then we can't make a shortest path from B to A, we have to find another path.
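A minimal sketch of that computation on a toy directed chain (the graph and threshold value here are illustrative, not the lab's network):

```python
import networkx as nx

# Toy directed graph standing in for the hyperlink network (hypothetical)
g = nx.DiGraph([('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'E'), ('E', 'F')])

path_length_threshold = 4  # lowered from 5 for this tiny example

# All-pairs shortest path lengths, keeping only the long ones
long_path_lengths = {}
for source, lengths in nx.all_pairs_shortest_path_length(g):
    for target, length in lengths.items():
        if length > path_length_threshold:
            long_path_lengths[(source, target)] = length

print(long_path_lengths)  # {('A', 'F'): 5}
```

Because the edges all point one way, (A, F) appears but (F, A) does not: there is simply no directed path back, which is the direction-following behavior described above.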

The shortest path between the articles can be identified using the shortest_path function and supplying the graph and the names of two nodes.
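On a small hypothetical graph, `nx.shortest_path` looks like this, including the no-path error mentioned above:

```python
import networkx as nx

g = nx.DiGraph([('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'A')])

# Follows edge direction: A links straight to C...
print(nx.shortest_path(g, 'A', 'C'))  # ['A', 'C']
# ...but B can only reach A by going through C
print(nx.shortest_path(g, 'B', 'A'))  # ['B', 'C', 'A']

g.add_node('Z')  # an isolated page with no links
try:
    nx.shortest_path(g, 'A', 'Z')
except nx.NetworkXNoPath:
    print('no path')  # the "no shortest path" error; try a new pair of nodes
```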

Test out different combinations of articles from the long_path_lengths to find the articles that are farthest apart by entering different article names for page1 and page2.

Look at the nodes with the highest in-degree: other pages in the network point to this page.

Look at the nodes with the highest out-degree: these pages point to many other pages.

Look at the nodes that have no links out.

Look at nodes that have a single link in. These are also known as (in-)pendants. If there are none, it should appear as an empty series.

Look at the nodes with a single link out. These are also known as (out-)pendants. If there are none, it should appear as an empty series.
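All of these degree lookups can be done from one pandas DataFrame. The toy graph below is a hypothetical stand-in for the hyperlink network, and `degree_df` mirrors the DataFrame the lab builds:

```python
import networkx as nx
import pandas as pd

# Toy hyperlink graph (hypothetical stand-in for the real network)
g = nx.DiGraph([('A', 'B'), ('C', 'B'), ('D', 'B'), ('B', 'A'), ('B', 'E')])

# Assemble in- and out-degree for every node into one DataFrame
degree_df = pd.DataFrame({'in_degree': dict(g.in_degree()),
                          'out_degree': dict(g.out_degree())})

# Highest in-degree: pages many others point to
print(degree_df.sort_values('in_degree', ascending=False).head())
# Nodes with no links out
print(degree_df.query('out_degree == 0').index.tolist())
# (in-)pendants: nodes with exactly one inbound link
print(degree_df.query('in_degree == 1').index.tolist())
```

Here B has the highest in-degree, E is a dead end with no outlinks, and A and E are in-pendants. An empty list in any of these queries is the "empty series" case mentioned above.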

Given a page, what are the neighbors that link in to it? Assign a specific article title to the page1 variable by replacing the np.random.choice(degree_df.index)
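In a networkx DiGraph, the inbound neighbors of a page come from `predecessors`. A minimal sketch on a toy graph, with a concrete title standing in for the random choice:

```python
import networkx as nx

g = nx.DiGraph([('A', 'B'), ('C', 'B'), ('B', 'A')])

page1 = 'B'  # a specific article title in place of np.random.choice(...)
print(sorted(g.predecessors(page1)))  # ['A', 'C']
```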

### Calculate communities within the network¶

Define a function to compute node community memberships for multiple community detection algorithms within igraph. The output is a dictionary of dictionaries where the top-level key is the name of the algorithm and returns a second-level dictionary keyed by the page name with values being the community membership value. Documentation and details about these algorithms can be found under the igraph graph-class documentation.

Not included in the comparative_community_detector function are two additional community detection algorithms that are too intensive or are not working properly. They're documented below if you ever care to explore in the future.

```python
# Uses up a ton of memory and crashes kernel immediately
ig_hg_optimal_modularity = hyperlink_g.community_optimal_modularity().membership
ig_hg_optimal_modularity_labels = dict(zip(ig_hg.vs['id'], ig_hg_optimal_modularity))
pd.Series(ig_hg_optimal_modularity_labels).value_counts().head(10)

# Lumps everyone into a single community
ig_hg_label_propagation = hyperlink_g.community_label_propagation(initial=range(ig_hg_d.vcount()), fixed=[False]*ig_hg_d.vcount()).membership
ig_hg_label_propagation_labels = dict(zip(ig_hg_d.vs['id'], ig_hg_label_propagation))
pd.Series(ig_hg_label_propagation_labels).value_counts().head(10)
```

Here we need to shift from using the networkx library to using the igraph library. The former is built purely in Python, which makes it easier to use but somewhat slower, while the latter is a "wrapper" that lets us write in Python but does the calculations in much-faster C code behind the scenes.

Run the function on the igraph version of the hyperlink network.

This may take a minute or more since these are intensive calculations