SI486m, Spring 2015
Due date: the start of class, Mar 5
Before we build full PCFG parsers, this lab will introduce you to some important aspects of English syntax, and you will estimate basic probabilities for a few rules that we will loosely define. Next week, you will work with an actual parser and learn real PCFGs.
Your first task is to write some CFG rules for Verb Phrases (VP). Verbs come in many forms, and your job is to focus on a single verb, leave. You must write grammar rules (e.g., VP -> VBG NP) for each of the following tenses and grammatical aspects using the Penn Treebank POS tagset.
Each of the above 7 verb categories should have its own unique VP -> X X rule! Open a text file and write the 7 VP grammar rules. You will be graded on how accurately your rules capture the forms and on how well they exclude other forms. IMPORTANT: assume the verb leave takes one noun argument, so each rule should include the appropriate NP!
Now estimate the probability of each rule, e.g., P(VP -> VBG NP), using the same Twitter dataset from the last lab (/courses/nchamber/nlp/lab4/data/tweets).
Step 1: Use zgrep (highly recommended; see below for zgrep tips) or write Java code (not recommended: it is a lot more work; see below for code setup) to search for and count VP occurrences, and output the count of each. Write the count next to each of your 7 grammar rules. Note that the future and present forms both contain "leave", so take care not to double count across tenses.
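The counting step can be sketched with zgrep as below. The patterns and the mini-file are only illustrative (a hypothetical three-tweet sample, not the course data, and sample verb forms rather than all 7); the point is the trick for not double counting: subtract the future matches from the bare "leave" matches with grep -v.

```shell
# Hypothetical mini-dataset standing in for the course tweets; the real
# files live under /courses/nchamber/nlp/lab4/data/tweets.
printf '%s\n' "i will leave the base" "leaving the base now" "i leave the base" | gzip > sample.gz

# Progressive form: lines containing "leaving"
zgrep -c "\bleaving\b" sample.gz        # -> 1

# Future form: lines containing "will leave"
zgrep -c "\bwill leave\b" sample.gz     # -> 1

# Simple present: lines with "leave" but NOT "will leave",
# so the future tweets are not counted twice
zgrep "\bleave\b" sample.gz | grep -vc "will leave"   # -> 1
```

On the real data, drop the printf line and point zgrep at the tweet files directly.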
Step 2: Calculate the probability of each rule. Remember that P(VP->VBG NP) = P(VBG NP | VP). Your probabilities should of course sum to one! Write the probabilities next to each of your 7 grammar rules.
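Since P(VP -> X | VP) is just each rule's count divided by the total count over all 7 rules, the normalization can be done with a one-line awk. The rule names and counts below are made up for illustration; substitute your own zgrep counts.

```shell
# Hypothetical counts for 7 VP rules (rule, raw count) -- NOT real answers,
# just placeholder names and numbers to show the arithmetic.
cat > counts.txt <<'EOF'
VP->VBP_NP 40
VP->VBD_NP 25
VP->MD_VB_NP 10
VP->VBG_NP 15
VP->VBN_NP 5
VP->VBZ_NP 3
VP->TO_VB_NP 2
EOF

# P(rule) = count(rule) / total; the printed probabilities sum to 1.
awk '{c[$1]=$2; t+=$2} END {for (r in c) printf "%s %.3f\n", r, c[r]/t}' counts.txt
```

A quick sanity check is to sum the second column of the output and confirm it is 1.000.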
After you finish the verb leave, pick a different verb that is frequent in English, and compute the probabilities just based on that verb. The VP rules should be unchanged from 'leave' if you did this correctly (except to substitute your new verb in, of course)! Put these in the same text file below your 'leave' rules, write the new verb's counts and the probabilities.
Asking questions in English is fairly formulaic: there are relatively well-defined rules for transforming a normal English sentence into a wh-question. Take this sentence as an example:
"I ate the bread" -> "What did I eat?"
Below are several sentences with a phrase in bold. Your task is to remove that phrase and ask about it using a wh-word.
Step 1: Rewrite the sentence as a question.
Step 2: Draw a parse tree for the question.
Step 3: Come up with the transformation rules that morph the sentence into the question (e.g., "remove the NP and put 'what' at the beginning of the sentence").
Step 4: Find examples in the Twitter data that start with the same wh-words. List at least 5 examples each, and check whether they match your transformation rules. If not, fix your rules.
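Finding tweets that start with a given wh-word is a one-liner with zgrep and the ^ anchor. The mini-file below is a hypothetical stand-in for the course data, and the "what + auxiliary" pattern is just one example shape your transformation rules might predict.

```shell
# Hypothetical mini-file standing in for the course tweets.
printf '%s\n' "What did he say" "what a day" "what do you want" "Where did you go" | gzip > tweets.gz

# List tweets beginning with "what" (case-insensitive); on the real data,
# pipe through `head -5` to grab your 5 examples.
zgrep -i "^what " tweets.gz

# Of those, count the ones matching a predicted wh + auxiliary shape.
zgrep -i "^what " tweets.gz | grep -ic "^what \(did\|do\|does\)\b"   # -> 2
```

Note that "what a day" starts with the wh-word but is not a question of the predicted shape; tweets like that are exactly the cases that tell you whether your transformation rules need fixing.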
If you wish, PDF template here, Word template here.
Use zgrep on the zipped files. Don't unzip them.
zgrep " apple ": This searches for 'apple' with spaces on either side.
zgrep "\bapple\b": Even better. Searches for 'apple' with word boundaries on both sides. In other words, if it starts or ends a sentence, or has punctuation, apple will still match!
zgrep "pattern" | wc -l: This searches for your pattern, and then pipes the lines to wc. wc counts lines for you. It couldn't be easier!
Create a new lab5 directory. You can do this lab in Java, or just use grep and other Unix tools like wc. If you want to use Java, reuse the base code from Lab 4: copy lab4/java/ to your lab5 directory (cp -R /courses/nchamber/nlp/lab4/java lab5/). The Datasets.java class will give you your tweets one at a time, and you can search their strings! See Lab 4's description for more code details.
No auto-submit. Print it out, staple, and hand in on the due date.
Verb Phrases (30 pts)
Wh-Questions (25 pts)
Total: 55 pts