Cloud9

A Hadoop toolkit for working with big data

Warning: It is strongly recommended that you first complete the word count tutorial before trying this exercise.

This exercise is a simple extension of the word count demo: in the first part, you'll count bigrams, and in the second, you'll compute bigram relative frequencies. For both parts, feel free to use the Hadoop data types in the lintools-datatypes package.

Part I: Count the bigrams

Take the word count example edu.umd.cloud9.example.simple.DemoWordCount and extend it to count bigrams. Bigrams are simply sequences of two consecutive words. For example, the previous sentence contains the following bigrams: "Bigrams are", "are simply", "simply sequences", "sequences of", etc.

Work with the sample collection included in Cloud9: the Bible and the complete works of Shakespeare. Don't worry about doing anything fancy in terms of tokenization; it's fine to continue using Java's StringTokenizer.
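To make the extension concrete, here is a minimal sketch of what the mapper might look like in the style of DemoWordCount. The class name and details are illustrative, not the solution code; the summing reducer from the word count demo can be reused unchanged.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BigramCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text bigram = new Text();

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    String prev = null;
    while (itr.hasMoreTokens()) {
      String cur = itr.nextToken();
      if (prev != null) {
        // Two consecutive words form one bigram, keyed as "prev cur".
        bigram.set(prev + " " + cur);
        context.write(bigram, ONE);
      }
      prev = cur;  // slide the window forward by one word
    }
  }
}
```

Note that this sketch only pairs words within a single line of input; whether to count bigrams across line boundaries is up to you.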

Questions to answer:

  1. How many unique bigrams are there?
  2. List the top ten most frequent bigrams and their counts.
  3. What fraction of all bigram occurrences do the top ten bigrams account for? That is, what is the cumulative frequency of the top ten bigrams?
  4. How many bigrams appear only once?
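One way to answer these questions is with a small local pass over the job's output. The sketch below is purely illustrative: it assumes the default TextOutputFormat (tab-separated bigram and count) with the part files merged into a single file whose path is passed as the first argument, and the class name is made up.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class BigramStats {
  public static void main(String[] args) throws Exception {
    long unique = 0, total = 0, singletons = 0;
    // Min-heap keeps the ten most frequent bigrams seen so far.
    PriorityQueue<Map.Entry<Long, String>> top =
        new PriorityQueue<>(Comparator.comparingLong(Map.Entry::getKey));
    try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
      String line;
      while ((line = in.readLine()) != null) {
        String[] parts = line.split("\t");
        long count = Long.parseLong(parts[1]);
        unique++;
        total += count;
        if (count == 1) singletons++;
        top.add(new SimpleEntry<>(count, parts[0]));
        if (top.size() > 10) top.poll();  // evict the smallest
      }
    }
    List<Map.Entry<Long, String>> topTen = new ArrayList<>(top);
    topTen.sort(Map.Entry.<Long, String>comparingByKey().reversed());
    long topTotal = 0;
    for (Map.Entry<Long, String> e : topTen) {
      System.out.println(e.getValue() + "\t" + e.getKey());
      topTotal += e.getKey();
    }
    System.out.println("unique bigrams: " + unique);
    System.out.println("bigrams appearing once: " + singletons);
    System.out.println("top-ten cumulative frequency: " + (double) topTotal / total);
  }
}
```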

Part II: From bigram counts to relative frequencies

Extend your program to compute bigram relative frequencies, i.e., how likely you are to observe a word given the preceding word. The output of the code should be a table of values for F(W_n | W_{n-1}).

Hint: to compute F(B|A), count up the number of occurrences of the bigram "A B", and then divide by the number of occurrences of all the bigrams that start with "A".
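One common way to arrange this in a single job is the order-inversion pattern: for each bigram "A B", have the mapper emit both the key "A B" and a special marginal key "A *", each with a count of 1. Because "*" sorts before letters, the marginal reaches the reducer before any real bigram starting with "A", provided a custom partitioner routes all keys sharing a first word to the same reducer. The sketch below shows one possible shape for that partitioner and reducer; the class names are illustrative, and this is not the solution code.

```java
// Each class would live in its own source file.
import java.io.IOException;

import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;

// Route all keys sharing a first word to the same reducer, so that the
// marginal "A *" and every "A B" meet in one place.
public class FirstWordPartitioner extends Partitioner<Text, IntWritable> {
  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String firstWord = key.toString().split(" ")[0];
    return (firstWord.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}

public class RelativeFrequencyReducer
    extends Reducer<Text, IntWritable, Text, FloatWritable> {
  private float marginal = 0.0f;
  private final FloatWritable frequency = new FloatWritable();

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    if (key.toString().endsWith(" *")) {
      // The marginal "A *" carries the count of all bigrams starting with A;
      // thanks to the sort order it arrives before any "A B". This assumes no
      // token sorts before '*'; a pair type with explicit ordering is more robust.
      marginal = sum;
    } else {
      frequency.set(sum / marginal);  // F(B | A) = count(A B) / count(A *)
      context.write(key, frequency);
    }
  }
}
```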

Questions to answer:

  1. What are the five most frequent words following the word "light"? What is the frequency of observing each word?
  2. Same question, except for the word "contain".
  3. If there are a total of N words in your vocabulary, then there are a total of N^2 possible values for F(W_n | W_{n-1}): in theory, every word can follow every other word (including itself). What fraction of these values are non-zero? In other words, what proportion of all possible events are actually observed? To give a concrete example, let's say that following the word "happy", you only observe 100 different words in the text collection. This means that N - 100 words are never seen after "happy" (perhaps the distribution of happiness is quite limited?).

Solutions

When you're ready, the solutions to this exercise are located here.