Written by Federico Tomassetti
in ANTLR, Parsing

    This post is part of a series. The goal of the series is to describe how to create a useful language and all the supporting tools.

    1. Building a lexer
    2. Building a parser
    3. Creating an editor with syntax highlighting
    4. Build an editor with autocompletion
    5. Mapping the parse tree to the abstract syntax tree
    6. Model to model transformations
    7. Validation
    8. Generating bytecode

    After writing this series of posts I refined my method, expanded it, and clarified it in the book titled How to create pragmatic, lightweight languages

    Code

    The code is available on GitHub. The code described in this post is associated with the tag 02_parser

    A change to the lexer

    With respect to the lexer we saw in the first article we need to make one minor change: we want to keep recognizing the whitespace, but we do not want to consider it in our parser rules. So we will instruct the lexer to throw all the whitespace tokens away instead of passing them to the parser. To do that we need to change one line in SandyLexer.g4:

    WS                 : [\t ]+ -> skip ;

    Thanks to Joseph for reporting this one!

    The parser

    The parser is simply defined as an ANTLR grammar. We have previously built a separate lexer. Here we reuse those terminals (NEWLINE, VAR, ID, etc.) to build rules such as statement, assignment, etc.

    Here is our new ANTLR grammar:

    parser grammar SandyParser;
    
    options { tokenVocab=SandyLexer; }
    
    sandyFile : lines=line+ ;
    
    line      : statement (NEWLINE | EOF) ;
    
    statement : varDeclaration # varDeclarationStatement
              | assignment     # assignmentStatement ;
    
    varDeclaration : VAR assignment ;
    
    assignment : ID ASSIGN expression ;
    
    expression : left=expression operator=(DIVISION|ASTERISK) right=expression # binaryOperation
               | left=expression operator=(PLUS|MINUS) right=expression        # binaryOperation
               | LPAREN expression RPAREN # parenExpression
               | ID                       # varReference
               | MINUS expression         # minusExpression
               | INTLIT                   # intLiteral
               | DECLIT                   # decimalLiteral ;

    • we reuse the existing lexer (tokenVocab=SandyLexer)
    • we start by defining the rule representing the whole file: sandyFile. It is defined as a list of at least one line
    • each line is composed of a statement terminated either by a newline or by the end of the file
    • a statement can be either a varDeclaration or an assignment
    • an expression can be defined in many different ways. The order is important because it determines operator precedence: multiplication is listed before addition, so it binds more tightly
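
    To make the rules concrete, here is a small (hypothetical) example of Sandy code that this grammar accepts. Each line holds one statement; the last statement can be terminated by the end of the file instead of a newline:

    var a = 1 + 2 * 3
    a = a - 4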

    To build it we simply run ./gradlew generateGrammarSource. Please refer to the build.gradle file in the repository or take a look at the previous post in the series.

    Testing

    Ok, we have defined our parser; now we need to test it. In general, I think we need to test a parser in three ways:

    • Verify that all the code we need to parse is parsed without errors
    • Ensure that code containing errors is not parsed
    • Verify that the shape of the resulting AST is the one we expect

    In practice, the first point is the one on which I tend to insist the most. If you are building a parser for an existing language the best way to test your parser is to try parsing as much code as you can, verifying that all the errors found correspond to actual errors in the original code and not errors in the parser. Typically I iterate over this step multiple times to complete my grammars.

    The second and third points are refinements on which I work once I am sure my grammar can recognize everything.
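
    For the second point (code containing errors must not be parsed), one possible approach is to count the syntax errors reported while parsing. The following is only a sketch, not code from the repository: it assumes the generated SandyLexer and SandyParser are on the classpath and uses ANTLR's BaseErrorListener.

    ```kotlin
    import me.tomassetti.langsandbox.SandyLexer
    import me.tomassetti.langsandbox.SandyParser
    import org.antlr.v4.runtime.*

    // Hypothetical helper: parse a string and return how many syntax errors
    // the parser reported. Valid code should yield 0, broken code at least 1.
    fun syntaxErrors(code: String): Int {
        var errors = 0
        val parser = SandyParser(CommonTokenStream(SandyLexer(ANTLRInputStream(code))))
        parser.removeErrorListeners() // silence the default console listener
        parser.addErrorListener(object : BaseErrorListener() {
            override fun syntaxError(recognizer: Recognizer<*, *>?, offendingSymbol: Any?,
                                     line: Int, charPositionInLine: Int,
                                     msg: String?, e: RecognitionException?) {
                errors++
            }
        })
        parser.sandyFile()
        return errors
    }
    ```

    A test could then assert, for example, that syntaxErrors("var a = 1") returns 0 while syntaxErrors("var a =") returns a value greater than zero.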

    In this simple case, we will write simple test cases to cover the first and the third point: we will verify that some examples are parsed and we will verify that the AST produced is the one we want.

    It is a bit cumbersome to verify that the AST produced is the one you want. There are different ways to do that, but in this case I chose to generate a string representation of the AST and verify it is the same as the expected one. It is an indirect way of testing that the AST is the one I want, but it is much easier for simple cases like this one.

    This is how we produce a string representation of the AST:

    package me.tomassetti.sandy
    
    import org.antlr.v4.runtime.ParserRuleContext
    import org.antlr.v4.runtime.tree.TerminalNode
    import java.util.*
    
    // Each AST element is wrapped as an element
    // We can dump each element to a String
    abstract class ParseTreeElement {
        abstract fun multiLineString(indentation : String = ""): String
    }
    
    // To dump a leaf (which corresponds to a Terminal) we just write
    // T[...] and inside the square brackets we write the text corresponding
    // to the terminal
    class ParseTreeLeaf(val text: String) : ParseTreeElement() {
        override fun toString(): String{
            return "T[$text]"
        }
    
        override fun multiLineString(indentation : String): String = "${indentation}T[$text]\n"
    }
    
    // For nodes things are slightly more complex: 
    // we need to first print the name of the node, then in the next lines
    // we print the children, recursively. While printing the children
    // we increase the indentation
    class ParseTreeNode(val name: String) : ParseTreeElement() {
        val children = LinkedList<ParseTreeElement>()
        fun child(c : ParseTreeElement) : ParseTreeNode {
            children.add(c)
            return this
        }
    
        override fun toString(): String {
            return "Node($name) $children"
        }
    
        override fun multiLineString(indentation : String): String {
            val sb = StringBuilder()
            sb.append("${indentation}$name\n")
            children.forEach { c -> sb.append(c.multiLineString(indentation + "  ")) }
            return sb.toString()
        }
    }
    
    // Given an AST node we wrap all the parts as elements:
    // the terminals will be Leaf elements and the non-terminals
    // will be Node elements.
    // Once we have wrapped those elements we can produce a string for them
    fun toParseTree(node: ParserRuleContext) : ParseTreeNode {
        val res = ParseTreeNode(node.javaClass.simpleName.removeSuffix("Context"))
        node.children.forEach { c ->
            when (c) {
                is ParserRuleContext -> res.child(toParseTree(c))
                is TerminalNode -> res.child(ParseTreeLeaf(c.text))
            }
        }
        return res
    }
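
    To see the dump format in isolation, here is a self-contained sketch: it re-declares simplified versions of the classes above (so it can run on its own) and builds by hand the tree for the hypothetical input var a = 1:

    ```kotlin
    // Self-contained sketch: simplified re-declarations of the classes above,
    // just to show the dump format produced by multiLineString()
    abstract class ParseTreeElement {
        abstract fun multiLineString(indentation: String = ""): String
    }

    class ParseTreeLeaf(val text: String) : ParseTreeElement() {
        override fun multiLineString(indentation: String) = "${indentation}T[$text]\n"
    }

    class ParseTreeNode(val name: String) : ParseTreeElement() {
        val children = mutableListOf<ParseTreeElement>()
        fun child(c: ParseTreeElement): ParseTreeNode {
            children.add(c)
            return this
        }
        override fun multiLineString(indentation: String): String {
            val sb = StringBuilder()
            sb.append("${indentation}$name\n")
            children.forEach { sb.append(it.multiLineString(indentation + "  ")) }
            return sb.toString()
        }
    }

    fun main() {
        // Tree corresponding to "var a = 1", built by hand
        val tree = ParseTreeNode("VarDeclaration")
            .child(ParseTreeLeaf("var"))
            .child(ParseTreeNode("Assignment")
                .child(ParseTreeLeaf("a"))
                .child(ParseTreeLeaf("="))
                .child(ParseTreeNode("IntLiteral").child(ParseTreeLeaf("1"))))
        print(tree.multiLineString())
    }
    ```

    Running it prints:

    VarDeclaration
      T[var]
      Assignment
        T[a]
        T[=]
        IntLiteral
          T[1]
    
    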

    And these are some test cases:

    package me.tomassetti.sandy
    
    import me.tomassetti.langsandbox.SandyLexer
    import me.tomassetti.langsandbox.SandyParser
    import org.antlr.v4.runtime.ANTLRInputStream
    import org.antlr.v4.runtime.CommonTokenStream
    import java.io.*
    import java.util.*
    import org.junit.Test as test
    import kotlin.test.*
    
    class SandyParserTest {
        
        fun lexerForResource(resourceName: String) = SandyLexer(ANTLRInputStream(this.javaClass.getResourceAsStream("/${resourceName}.sandy")))
    
        fun parseResource(resourceName: String) : SandyParser.SandyFileContext = SandyParser(CommonTokenStream(lexerForResource(resourceName))).sandyFile()
    
        @test fun parseAdditionAssignment() {
            assertEquals(
    """SandyFile
      Line
        AssignmentStatement
          Assignment
            T[a]
            T[=]
            BinaryOperation
              IntLiteral
                T[1]
              T[+]
              IntLiteral
                T[2]
        T[<EOF>]
    """,
                    toParseTree(parseResource("addition_assignment")).multiLineString())
        }
    
        @test fun parseSimplestVarDecl() {
            assertEquals(
    """SandyFile
      Line
        VarDeclarationStatement
          VarDeclaration
            T[var]
            Assignment
              T[a]
              T[=]
              IntLiteral
                T[1]
        T[<EOF>]
    """,
                    toParseTree(parseResource("simplest_var_decl")).multiLineString())
        }
    
        @test fun parsePrecedenceExpressions() {
            assertEquals(
    """SandyFile
      Line
        VarDeclarationStatement
          VarDeclaration
            T[var]
            Assignment
              T[a]
              T[=]
              BinaryOperation
                BinaryOperation
                  IntLiteral
                    T[1]
                  T[+]
                  BinaryOperation
                    IntLiteral
                      T[2]
                    T[*]
                    IntLiteral
                      T[3]
                T[-]
                IntLiteral
                  T[4]
        T[<EOF>]
    """,
                    toParseTree(parseResource("precedence_expression")).multiLineString())
        }
    
    
    }

    Simple, isn’t it?

    Conclusions

    We have seen how to build a simple lexer and a simple parser. Many tutorials stop there. We are instead going to move on and build more tools from our lexer and parser. We laid the foundations, we now have to move to the rest of the infrastructure. Things will start to get interesting.

    In the next post, we will see how to build an editor with syntax highlighting for our language.

    Some of the functionalities presented in this tutorial have been later extracted into a library we use in our open-source and commercial projects in Language Engineering. The library is named Kolasu and it is open-source. It may be useful for your projects. You can find it here: https://github.com/strumenta/kolasu
