Class 7: Scanning


Reading
Section 2.2 of Programming Language Pragmatics.

Homework
Note: this is a very important homework. Plan on spending some time to figure it out.

Look at the file hwCalc.ypp. Download it to your Unix account, and compile and run like this:
> bison hwCalc.ypp
> g++ -o hwCalc hwCalc.tab.cpp
> ./hwCalc
3*(-7 + 12/3);
-9
Some changes to a language can be handled completely by the scanner. Your job is to modify hwCalc.ypp so it can accept integers input in either decimal or binary. Specifically, a string of 0's and 1's followed immediately by a "b" (no whitespace or anything in between) is an integer literal in binary. So, for example:
> ./hwCalc
10 + 3;
13 
10b + 3;
5
  1. Draw a modified version of the finite automaton below that takes binary integer constants into account.
  2. Modify hwCalc.ypp to implement this feature. Only yylex() needs to change to make this happen!

Turn In: A printout of your bison file and a printout of a screen capture of a session in which you do some calculations, including the "10 + 3" and "10b + 3" above.

Homework review
  1. Collect and discuss homework. In the last problem, what happens if we add
    factor -> NUM:NUM:NUM
    
    to our grammar? Is there a difference if we instead add
    term -> factor:factor:factor
    
    Which is better? Why?

Scanning / tokenization / lexical analysis
Recall, scanning is the process of breaking up a stream of characters into a stream of tokens. This will usually include stripping out comments, inserting "include" files, etc. If you're reading from an input stream like stdin, the fundamental operation is something like getNextChar(), i.e. a function that returns to the caller the next character from the input stream. For scanning, the fundamental operation is something like getNextToken(), i.e. a function that returns the next token from the input stream.

However, what type of object should "a token" actually be? Usually, you give a constant int value to each kind of token, and that id is certainly returned. However, for at least some tokens you need more than the id. For example, a NUM token needs the actual characters that were lumped together as a NUM, or at least the int or double they define. For a string constant expression, you may prefer not to have the literal text, but rather the text after replacing multi-character escape sequences like \n with their appropriate char value. So usually, the scanner will have to return several pieces of information from getNextToken().
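For example, a scanner might package those pieces together in a struct. This is just a hypothetical sketch --- the names Token, TokenKind, and getNextToken are made up for illustration, not part of any library:

#include <string>

//-- Hypothetical packaging of "several pieces of information":
//-- an id for the token kind plus whatever extra data it carries.
enum TokenKind { NUM, OPA, OPM, LP, RP, STOP, END_OF_INPUT };

struct Token {
  TokenKind kind;     //-- the constant int id: which token is this?
  int val;            //-- for NUM: the number the characters define
  char sym;           //-- for OPA/OPM: which operator, e.g. '+' or '*'
  std::string text;   //-- the characters themselves, escapes replaced
};

Token getNextToken(); //-- "the fundamental operation" for scanning

As we'll see below, bison's convention is different: yylex() returns the int id itself and passes the extra data out through the global yylval.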

Finite automata and scanning: hand-rolled approach
Recall that in theory of computing we had a lab about using the parser generator bison to create a simple infix calculator from a grammar for arithmetic expressions. There we pretty much ignored scanning. Now let's do it right. Here is a bison file that has everything but the definition of yylex(), which is bison's name for "getNextToken()".
%{
#include <iostream>
#include <string>
#include <cstdlib> //-- I need this for atoi
using namespace std;

//-- Lexer prototype required by bison, aka getNextToken()
int yylex(); 
int yyerror(const char *p) { cerr << "Error!" << endl; return 0; }
%}

//-- GRAMMAR SYMBOL DECLARATIONS
%union {
  int val; 
  char sym;
};
%token <val> NUM
%token <sym> OPA OPM LP RP STOP
%type  <val> exp term sfactor factor res

//-- GRAMMAR RULES 
%%
res: exp STOP { cout << $1 << endl; }
;

exp: exp OPA term     { $$ = ($2 == '+' ? $1 + $3 : $1 - $3); }
| term                { $$ = $1; }
;

term: term OPM factor { $$ = ($2 == '*' ? $1 * $3 : $1 / $3); }
| sfactor             { $$ = $1; }
;

sfactor: OPA factor   { $$ = ($1 == '+' ? $2 : -$2); }
| factor              { $$ = $1; }
;

factor: NUM           { $$ = $1; }
| LP exp RP           { $$ = $2; }
;

%%
//-- FUNCTION DEFINITIONS
int main()
{
  while(1) yyparse();
  return 0;
}

All we need to do is to provide a yylex() function that recognizes the tokens NUM, OPA (i.e. + and -), OPM (i.e. * and /), STOP (i.e. ;), LP (i.e. "(") and RP (")"). Oh, and it should also treat whitespace appropriately. Instead of sitting at the keyboard and trying to hack something together, let's see if we can construct a FA that recognizes tokens and base our implementation on it.
int yylex()
{
  bool found = false;
  int state = 0;
  string val = "";
  while(!found)
  {
    int c = cin.get(); //-- int, not char, so the EOF test below is reliable
    switch(state)
    {
    case 0:
      switch(c) {
      case '0': case '1': case '2': case '3': case '4':
      case '5': case '6': case '7': case '8': case '9':
	val += c; state = 1; break;
      case '+': case '-': val += c; state = 2; break;
      case '*': case '/': val += c; state = 3; break;
      case ';': val += c; state = 4; break;
      case '(': val += c; state = 5; break;
      case ')': val += c; state = 6; break;
      case ' ': case '\t': case '\n': break;
      case EOF: exit(0); break;
      default: found = true; }
      break;
    case 1:
      switch(c) {
      case '0':case '1':case '2':case '3':case '4':
      case '5':case '6':case '7':case '8': case '9':
	val += c; state = 1; break;
      default:
	cin.putback(c);
	found = true; }
      break;
    case 2: case 3: case 4: case 5: case 6:
      cin.putback(c);
      found = true;
      break;
    }
  }
  
  switch(state) {
  case 0: return 0; // unrecognized char: 0 tells bison "end of input"
  case 1: yylval.val = atoi(val.c_str()); return NUM;
  case 2: yylval.sym = val[0];            return OPA;
  case 3: yylval.sym = val[0];            return OPM;
  case 4:                                 return STOP;
  case 5:                                 return LP;
  case 6:                                 return RP; }
  return 0; // unreachable, but keeps the compiler happy
}
One thing that may require a bit of explanation: if you read a character with cin.get(), you can "unread" it with cin.putback(). That's a kind of convenient trick here. The complete program is available to view or to download.

Some interesting questions to ask at this point:

Maximal munch
Suppose we have ** for exponentiation and * for multiplication, with EXP and OPM as the associated tokens. How does the lexer decide whether to lex ** as EXP or as OPM OPM? Or what about ***? How should that lex? The usual rule of thumb in language design is to follow the rule of "maximal munch": whichever token matches the most characters is the one you take. Thus, *** lexes to EXP OPM. Similarly, if "for" matches a LOOP token, then "formula" lexes as the ID token with value formula, not as LOOP followed by an ID matching "mula". Maximal munch also matches our need to lex 4.13 as a single REALNUM token rather than as INT DOT INT or something similar.

The "maximal munch" rule requires a little bit of care. From the implementation perspective we run the input characters through a DFA, collecting the characters as they are read in a buffer (the variable val in the above). We don't stop and emit a token just because we hit an accepting state. We stop when we hit a "missing" transition, i.e. a state/char combination for which there is no outgoing arrow, and rewind until we hit the last accepting state seen prior to the missing transition. This backing up requires that we put some characters back into the input stream --- that we pretend we hadn't read them after all. That is, in fact, what the putback function in the example is doing.

C++'s putback function is only guaranteed to work correctly for putting back a single character between reads. So, in general, we need to deal with this ourselves. We keep a buffer buff of unread characters, and when reading the next character, instead of doing

char c = cin.get();
... we check the buffer first.
if buff is empty, c = cin.get(); otherwise c is the next char from buff
Then, when we need to "putback" multiple characters, we stick them on the end of buff. Implementing this in a correct and robust way is a bit tricky. You need a circular, extensible buffer, which you should know about, but which is pretty easy to mess up.

Here's an example of a language construct that might require more than one "putback" operation: suppose that for some language "-" is a token, and "-->" is a token, but "--" is not. If we read "--@", what do we do? We have to emit "-" and put the two characters "-@" back.
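Here's a sketch of the buffered approach applied to that example. The helper names myGet and myPutback are made up, and a std::vector used as a stack stands in for the circular, extensible buffer: characters go back on the end of buff and are re-read last-in-first-out, so we put them back in reverse reading order.

#include <iostream>
#include <string>
#include <vector>
using namespace std;

vector<char> buff; //-- characters that have been "unread" (a stack)

int myGet()
{
  if(buff.empty()) return cin.get();
  char c = buff.back(); buff.pop_back(); return c;
}

void myPutback(char c) { buff.push_back(c); }

//-- Lex "-" vs "-->", where "--" is NOT a token. On input "--@" we
//-- must emit "-" and put back the two characters "-@".
string nextDashToken()
{
  int c = myGet();
  if(c != '-') { if(c != EOF) myPutback((char)c); return ""; }
  int c2 = myGet();
  if(c2 != '-') { if(c2 != EOF) myPutback((char)c2); return "-"; }
  int c3 = myGet();
  if(c3 == '>') return "-->";
  //-- "--" then something else: rewind to the last accepting state,
  //-- which was right after the first '-'. Push back '@' then '-',
  //-- so they are re-read as '-' then '@'.
  if(c3 != EOF) myPutback((char)c3);
  myPutback('-');  //-- a second putback: this is where cin alone fails
  return "-";
}

int main() //-- input "--@" prints "-" twice
{
  string t;
  while((t = nextDashToken()) != "") cout << t << endl;
  return 0;
}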

Handling lexical errors
Normally we like to handle errors in a compiler/interpreter in a reasonable way. What's reasonable? Well, you should give an error message that helps the user track down the problem, and it's nice if you don't puke on the first error, but continue on and try to process the rest of the input. After all, if there are several errors it's nice to see messages for all of them at once.

The general strategy for scanners is to throw away characters until you get something you're able to make a token out of. You should output an error message telling the user what got thrown away, of course, and if you're especially kind, you keep track of line numbers (and possibly character positions within a line) to help the user out. In our simple calculator example, each input is very small, usually just a single line, so just printing out the offending characters along with an error message is probably sufficient. In fact, all we really need is to add an extra transition from state 0 back to itself for all "other" characters, with the associated action of printing an error message, and the effect will be what we want.
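In terms of the yylex() above, that's just a change to state 0's default case: instead of setting found = true (which ends up returning 0 to bison), we complain and stay in state 0. A sketch of the change --- only the default case differs from the original:

    case 0:
      switch(c) {
      //-- ... all the other cases stay exactly as before ...
      default:
        //-- "other" character: report it, throw it away, and stay in
        //-- state 0 so scanning continues with the next character
        cerr << "skipping unexpected character '" << (char)c << "'" << endl;
        break; }
      break;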


Christopher W Brown
Last modified: Wed Sep 9 09:48:55 EDT 2009