SI485i, Fall 2012

Lab 1: Twitter Hashtag Segmentation

Motivation

In this lab you will take strings of continuous characters and segment them into words. Segmenting text is an important task for many foreign languages, such as Chinese and Arabic. Even in English, where whitespace typically separates words, genres like social media often mash words together into strings that are incomprehensible (to computers, at least). Hashtags are one particular example: #segmentingisfun.

Objective

You are given a set of hashtags. Your goal is to segment them into tokens using an English dictionary and the MaxMatch algorithm in your textbook.

#bigbangtheory -> big bang theory
#chickensoup -> chicken soup
#running -> running
#30times -> 30 times
#neverstop -> never stop

I am providing you with two incomplete source files: Segment.java and Evaluate.java. Your task is to fill in the missing code.

Code and Data

Look in /courses/nchamber/nlp/lab1/. You will edit Segment.java and Evaluate.java for this lab. You will also need Levenshtein.java as helper code. Copy these to your own lab1 directory. There is also a dictionary file, bigwordlist.txt.

Part One: MaxMatch

Implement the MaxMatch algorithm as described in your textbook (Jurafsky and Martin). Briefly, the algorithm starts at the first character and finds the longest matching known English word. It then continues from the end of that word, repeating the lookup until the string is exhausted. You will obviously need an English dictionary: use the one on the shared drive at /courses/nchamber/nlp/lab1/bigwordlist.txt. It contains a list of words and how often each occurred in a large corpus. Note that the list is very noisy (as is much NLP data).
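
If it helps to see the control flow, here is a minimal sketch in Java (illustrative only, not the required solution). It shows a per-hashtag helper you might call from maxMatch(String filename), and it assumes each line of the dictionary file starts with a word, optionally followed by its count; adjust the parsing if bigwordlist.txt is laid out differently:

    import java.io.*;
    import java.util.*;

    public class MaxMatchSketch {

        // Load the word list into a set. Assumes each line starts with a
        // word, optionally followed by its corpus count.
        static Set<String> loadDictionary(String path) throws IOException {
            Set<String> dict = new HashSet<String>();
            BufferedReader in = new BufferedReader(new FileReader(path));
            for (String line; (line = in.readLine()) != null; ) {
                String[] parts = line.trim().split("\\s+");
                if (parts.length > 0 && !parts[0].isEmpty())
                    dict.add(parts[0].toLowerCase());
            }
            in.close();
            return dict;
        }

        // Greedy MaxMatch: repeatedly take the longest dictionary word that
        // starts at position i; if nothing matches, emit a single character
        // and move on.
        static String segment(String tag, Set<String> dict) {
            tag = tag.toLowerCase();  // the dictionary entries were lowercased
            StringBuilder out = new StringBuilder();
            int i = 0;
            while (i < tag.length()) {
                int end = tag.length();
                while (end > i + 1 && !dict.contains(tag.substring(i, end)))
                    end--;
                if (out.length() > 0) out.append(' ');
                out.append(tag.substring(i, end));
                i = end;
            }
            return out.toString();
        }
    }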

Implement MaxMatch in Segment.java, inside the maxMatch(String filename) method. This method should simply output your segmentation guesses, one per line, and nothing else. Do not output the '#' character; output just your segmented words (as in the Objective section's examples above). Run it on the provided list of hashtags at /courses/nchamber/nlp/lab1/hashtags-train.txt. The code is easy to run: java Segment max <dictionary-path> <hashtags-path>

You then need to subjectively study your output. Scan through it and see what kinds of errors have appeared, then answer the following question:

  1. When MaxMatch is incorrect, what types of failures do you see and what causes each? List the types you observe with an example for each. Hint: you should identify at least 3 different types of errors.

Part Two: Empirical Evaluation

We now want to objectively evaluate our performance, so Part Two has you compute a score for your system. Note that the lab directory also has the files hashtags-dev.txt and hashtags-dev-gold.txt. These contain more hashtags, along with the correct answers (the gold segmentations). You will now compare your system's output to these gold answers.

The textbook describes the minimum edit distance, a score of how different two strings are, computed by counting the number of changes needed to transform one into the other. Lucky for you, I'm providing code that computes this for you free of charge. Your task is to use this distance metric to compute a Word Error Rate (WER) between your guesses and the gold answers. WER is just the length-normalized minimum edit distance (i.e., the minimum edit distance divided by the length of the correct segmentation string). Use my provided code Levenshtein.java to compute the edit distance score (create a Levenshtein object first, then call score):

Levenshtein.score(String a, String b);

Your task is to take my completed Levenshtein scorer and build a working WER evaluation that compares your output against the gold answers and returns the average WER across the test set. Do this in Evaluate.java, in the avgWER(guessPath, goldPath) and WER(guess, gold) methods. Note that avgWER() needs to read two files: (1) your output from Part One, and (2) my provided gold answers. So you'll want to pipe Part One's output to a file. Then run the code: java Evaluate <guesses-file> <gold-file>
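
As a sanity check on the formula: if your guess is "bigbang theory" and the gold answer is "big bang theory", the edit distance is 1 (one missing space) and the gold string is 15 characters long, so that hashtag's WER is 1/15 ≈ 0.067. Here is a minimal sketch of how the two methods might look, assuming Levenshtein.score() returns the raw edit distance as a number; again, it is illustrative, not the required solution:

    import java.io.*;

    public class EvaluateSketch {

        // The lab's provided scorer; assumed here to return the raw edit
        // distance between two strings as a number.
        static Levenshtein lev = new Levenshtein();

        // WER = minimum edit distance / length of the correct segmentation.
        static double WER(String guess, String gold) {
            return lev.score(guess, gold) / (double) gold.length();
        }

        // Read the guesses and gold answers in parallel, one segmentation
        // per line, and average the per-line WER scores.
        static double avgWER(String guessPath, String goldPath) throws IOException {
            BufferedReader guesses = new BufferedReader(new FileReader(guessPath));
            BufferedReader golds = new BufferedReader(new FileReader(goldPath));
            double sum = 0.0;
            int n = 0;
            String guess, gold;
            while ((guess = guesses.readLine()) != null
                    && (gold = golds.readLine()) != null) {
                sum += WER(guess.trim(), gold.trim());
                n++;
            }
            guesses.close();
            golds.close();
            return (n == 0) ? 0.0 : sum / n;
        }
    }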

  1. What is the average WER of MaxMatch on your test data? (This is one number: the average WER across all test hashtags.) Your number should be small, as in far less than 1.0; an average WER of 1.0 would mean your guesses required, on average, roughly one edit per character of the correct answers.

Part Three: Improve MaxMatch

Using your analysis from Parts One and Two, you will now improve the algorithm. Your goal is to improve the WER score, and the sky's the limit on what you can try. Points will be awarded for the number of different improvements, their creativity, and the overall WER improvement. Do not change your maxMatch() method: copy that code into improvedMaxMatch() and make your improvements there. You can then run the system with: java Segment better <dictionary-path> <hashtags-path>
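
To give a flavor of what an improvement can look like (illustrative only; the credit is in your own ideas): if numbers are missing from the word list, tags like #30times will fail. A hypothetical drop-in replacement for the dict.contains(...) test in the MaxMatch sketch from Part One handles that one case:

    // Accept a prefix if it is a dictionary word OR a run of digits,
    // so the "30" in #30times can match.
    static boolean knownToken(String s, java.util.Set<String> dict) {
        return dict.contains(s) || s.matches("\\d+");
    }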

  1. List the improvements you made.
  2. What is the final WER of your improved algorithm?

What to turn in

  1. A single text file with answers to the four questions in Parts 1, 2, and 3. Save it in your lab1 directory.
  2. Completed Segment.java and Evaluate.java files. You must complete the WER(), avgWER(), loadDictionary(), maxMatch(), and improvedMaxMatch() functions. Do not change the behavior of main(). Segment.java's command line takes a mode token ("max" or "better") that causes maxMatch or improvedMaxMatch to be called, followed by the dictionary path and a text file with one hashtag per line (as in the commands above). Your code should open and read the hashtag file, then output your segmentations, one per line. Do not print anything except the segmentations. The number of lines output must equal the number of lines input.

How to turn in

Use the submit script: /courses/nchamber/submit

Create a directory for your lab called lab1 (all lowercase). When you are ready to turn in your code, execute the following command from the directory one level above lab1:
    /courses/nchamber/submit  lab1

Double-check the output from this script to make sure your lab was submitted. If the script fails and you do not notice, that is your responsibility. The script prints "Submission successful." at the very end of its output; if you don't see this, your lab was not submitted.

Competition!

Your submissions will be automatically tested (once or twice a day). It is thus very important that you follow the output instructions above, or your final WER score will be incorrect. If all goes well, the results will appear on the current standings page, and you will be able to see instantly how you rank against your classmates. You are encouraged to submit early and to resubmit as many times as you want. However, the system updates your score only once or twice a day, so you may have to wait until the next day to see your standing.

This is all very beta, and might fail miserably. I will post final performance after the due date if the auto-grading fails.

Grading

Part One: 20 pts
  1. MaxMatch correctly implemented: 14 pts
  2. Failure analysis: 6 pts

Part Two: 10 pts

Part Three: 10 pts
  1. Multiple improvements made: 6 pts
  2. Final WER reported and performance improved: 4 pts (0 pts if the WER does not significantly improve)

Compiling Penalty (your code does not compile): -10 pts
Output Penalty (your output does not conform to the requirements): -5 pts
Extra Credit: best performance in the class, or especially creative or numerous MaxMatch improvements

Total: 40 pts