SI425, Fall 2017

Lab 5: Ease into Probabilistic Syntax

Due date: the start of class, Oct 19


Before we build full PCFG parsers, this lab will introduce you to some important aspects of English syntax, and you will estimate basic probabilities over a few rules that we will loosely define. Next week, you will work with an actual parser and learn real PCFGs.

1. Build a Grammar: Verb Tense and Aspect

Your first task is to write some CFG rules for Verb Phrases (VP). Verbs come in many forms, and your job is to focus on a single verb: leave. You must write grammar rules (e.g., VP -> VBG NP) for each of the following tenses and grammatical aspects, using the Penn Treebank POS tagset.

  1. Present tense: "leave", "leaves"
  2. Present perfect: "has left", "have left"
  3. Present progressive: "is leaving", "are leaving", "am leaving"
  4. Past tense: "left"
  5. Past perfect: "had left"
  6. Past progressive: "was leaving", "were leaving"
  7. Future: "will leave"

Each of the above 7 verb categories should have its own unique VP -> X X rule! Open a text file and write the 7 VP grammar rules. You will be graded on how accurately your rules capture these forms and how well they exclude forms that should not match. IMPORTANT: assume the verb leave takes one noun argument, so your rules should include the appropriate NP!
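One possible layout for your text file (the rule shown is just the example tag sequence from above, not an answer; the count and probability columns get filled in during parts 2 and 3):

```
VP -> VBG NP      count: ___    prob: ___
(one rule per tense/aspect category, 7 lines total)
```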

2. Add Probability to your Rules

Now estimate the probability P(VP -> VBG NP) of each rule, using the same Twitter dataset from the last lab (/courses/nchamber/nlp/lab4/data/tweets).

Step 1: Use zgrep (highly recommended; see the zgrep tips below) or write Java code (not recommended: it is much more work; see the code setup below) to search for and count VP occurrences, and output the count of each. Write the count next to each of your 7 grammar rules. Note that the future and present forms both contain "leave", so take care not to double count across tenses. For example:

zgrep -i "leaving" /courses/nchamber/nlp/lab4/data/tweets/20111020.txt.gz > leaving.txt

This ignores case and searches for "leaving" as a string in the zipped file, redirecting the output to a file. Even better, search for your word with word boundaries to avoid matching substrings of bigger words:

zgrep -i "\bleaving\b" /courses/nchamber/nlp/lab4/data/tweets/20111020.txt.gz > leaving.txt

You can then use tools like wc to count the matches:

wc -l leaving.txt
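The zgrep-and-wc recipe can be wrapped in a small loop over verb forms. This is only a sketch: it builds a tiny throwaway sample file (sample.txt.gz and its contents are made up) so the commands run as shown; point the pattern at the real tweet files instead.

```shell
# Sketch only: a tiny gzipped sample stands in for the real tweet files.
printf 'i am leaving now\nshe leaves today\nthey left early\nwe will leave soon\n' | gzip > sample.txt.gz

# Count lines matching each form (zgrep -c counts matching lines, a fine
# approximation unless many tweets repeat a form on one line).
# Note: a bare "\bleave\b" count also matches "will leave" lines, so
# subtract the future count from it to avoid double counting.
for form in "leaves" "leaving" "left" "will leave"; do
  echo "$form: $(zgrep -ic "\b${form}\b" sample.txt.gz)"
done
```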

Step 2: Calculate the probability of each rule. Remember that P(VP -> VBG NP) = P(VBG NP | VP). Your 7 probabilities should, of course, sum to one! Write the probabilities next to each of your 7 grammar rules.
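The normalization in Step 2 is just count(rule) / total count over all 7 rules. A sketch with invented counts (the counts.txt file and every number in it are placeholders; substitute the counts you actually gathered):

```shell
# Invented counts, one rule category per line: "name count".
cat > counts.txt <<'EOF'
present 120
present_perfect 15
present_progressive 40
past 200
past_perfect 5
past_progressive 10
future 30
EOF

# P(rule | VP) = count(rule) / total count; the printed column sums to 1.
awk '{c[$1]=$2; total+=$2} END {for (r in c) printf "%s %.4f\n", r, c[r]/total}' counts.txt
```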

3. Repeat for a second Verb

After you finish the verb leave, pick a different verb that is frequent in English, and compute the probabilities based on that verb alone. If you did this correctly, the VP rules should be unchanged from 'leave' (except for substituting in your new verb, of course)! Put these in the same text file below your 'leave' rules, along with the new verb's counts and probabilities.

4. Wh-Question Syntax

Forming questions in English is fairly systematic: there are relatively well-defined rules to transform a normal English sentence into a wh-question. Take this sentence as an example:

"I ate the bread" -> "What did I eat?"

Below are several sentences, each with a phrase in bold. Your task is to remove that phrase and ask about it using a wh-word. Step one is to rewrite the sentence as a question. Step two is to draw a parse tree for the question. Step three is to write down the transformation rules that morph the sentence into the question (e.g., "remove the NP and put 'what' at the beginning of the sentence"). Step four is to find examples in the Twitter data that start with the same wh-words. List at least 5 examples each, and see if they match your transformation rules. If not, fix your rules.

  1. John picked up the chair.
  2. I am going to the big mall tomorrow.
  3. Susan decided to leave John.
  4. Susan decided to leave John.
  5. We thought about eating the burritos. (not a wh-question; make it a yes/no question)
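For step four, the same zgrep tricks will find candidate questions: anchor the pattern at the start of the line with ^. A sketch (tweets.txt.gz and its sample lines are made up; run this against the real tweet files):

```shell
# Sketch: a tiny gzipped sample stands in for the real tweet files.
printf 'what did you eat\ni like pizza\nwhere are we going\nWhat a day\n' | gzip > tweets.txt.gz

# Pull tweets that begin with a given wh-word (swap in who/where/when/why/how).
zgrep -i "^what\b" tweets.txt.gz | head -5
```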

If you wish, PDF template here, Word template here.

Helpful POS Tags

Helpful grep

Use zgrep on the zipped files. Don't unzip them.

zgrep " apple ": This searches for 'apple' with spaces on either side.

zgrep "\bapple\b": Even better. Searches for 'apple' with word boundaries on both sides. In other words, 'apple' will still match even when it starts or ends a line or sits next to punctuation!

zgrep "pattern" file.gz | wc -l: This searches for your pattern in file.gz, then pipes the matching lines to wc, which counts them for you. It couldn't be easier!

Helpful Code Setup (only needed if you don't use zgrep)

Create a new lab5 directory. You can do this lab in Java, or you can just use zgrep and Unix tools like wc. If you want to use Java, reuse the base code from Lab 4: copy lab4/java/ to your lab5 directory (cp -R /courses/nchamber/nlp/lab4/java lab5/). The class will hand you your tweets one at a time, and you can search their strings! See Lab 4's description for more code details.

What to turn in

  1. One page with two grammars of VP rules: one for the verb leave and one for your verb of choice. Both should have probabilities attached to the rules, as well as the raw counts for each rule, so we can see how you computed your probabilities (and for partial credit). Print this out.
  2. Printouts of your Wh-Question answers (5 sheets). Use the templates linked above for easiest formatting.

How to turn in

No auto-submit. Print it out, staple, and hand in on the due date.


Verb Phrases (30 pts)

Wh-Questions (25 pts)

Total: 55 pts