HTML is a markup language with a simple structure, and it would be quite easy to build a parser for it with a parser generator. Actually, you may not even need to do that: if you choose a popular parser generator, like ANTLR, there are already grammars ready to be used.

HTML is so popular that there is an even better option: using a library. A library is easier to use and usually provides more features, such as a way to create an HTML document or easy navigation through the parsed document. For example, it typically comes with CSS/jQuery-like selectors to find nodes according to their position in the hierarchy.

The goal of this article is to help you find the right library to process HTML. Whichever language you are using, Java, C#, Python, or JavaScript, we have you covered.

We are not going to cover libraries for more specific tasks, such as article extractors or web scrapers, like Goose. They typically have more restricted uses, while in this article we focus on general-purpose libraries to process HTML.

Parsing HTML

Java

Let’s start with the Java libraries to process HTML.

Lagarto and Jerry

Jodd is a set of Java micro frameworks, tools and utilities

Among the many Jodd components there are Lagarto, an HTML parser, and Jerry, defined as jQuery in Java. Other components cover related tasks: for instance, CSSelly, a parser for CSS-selector strings that powers Jerry, and StripHtml, which reduces the size of HTML documents.

Lagarto works as a traditional parser rather than a typical library: you implement a visitor, and the parser calls the proper method each time it encounters a tag or a piece of text. Lagarto is quite basic; it just does parsing. Even the building of the (DOM) tree is delegated to an extension, aptly called DOMBuilder.
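
To give an idea of the visitor approach, here is a minimal sketch. It assumes the EmptyTagVisitor base class and the parse() entry point described in the Lagarto documentation; check the exact signatures for the version you use.

import jodd.lagarto.EmptyTagVisitor;
import jodd.lagarto.LagartoParser;
import jodd.lagarto.Tag;

public class PrintTags {
    public static void main(String[] args) {
        // assumption: the parser accepts the HTML content directly
        LagartoParser parser = new LagartoParser("<p>Hello <b>world</b></p>");
        parser.parse(new EmptyTagVisitor() {
            @Override
            public void tag(Tag tag) {
                // called for every tag encountered
                System.out.println("tag: " + tag.getName());
            }

            @Override
            public void text(CharSequence text) {
                // called for every piece of text
                System.out.println("text: " + text);
            }
        });
    }
}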

While Lagarto could be very useful for advanced parsing tasks, usually you will want to use Jerry. Jerry tries to stay as close as possible to jQuery, but only to its static and HTML manipulation parts. It does not implement animations or ajax calls. Behind the scenes Jerry uses Lagarto and CSSelly, but it is much easier to use. Also, you are probably already familiar with jQuery.

The documentation of Jerry is good and includes a few examples, such as the following one.

// from the documentation 
public class ChangeGooglePage
{
    public static void main(String[] args) throws IOException
    {
        // download the page super-efficiently
        File file = new File(SystemUtil.getTempDir(), "google.html");
        NetUtil.downloadFile("http://google.com", file);

        // create Jerry, i.e. document context
        Jerry doc = Jerry.jerry(FileUtil.readString(file));

        // remove div for toolbar
        doc.$("div#mngb").detach();
        // replace logo with html content
        doc.$("div#lga").html("<b>Google</b>");

        // produce clean html...
        String newHtml = doc.html();
        // ...and save it to file system
        FileUtil.writeString(
            new File(SystemUtil.getTempDir(), "google2.html"),
            newHtml);
    }
}

HTMLCleaner

HTMLCleaner is a parser that is mainly designed to clean HTML for further processing. As the documentation explains:

HtmlCleaner is an open source HTML parser written in Java. HTML found on the Web is usually dirty, ill-formed and unsuitable for further processing. For any serious consumption of such documents, it is necessary to first clean up the mess and bring some order to the tags, attributes and ordinary text. For any given HTML document, HtmlCleaner reorders individual elements and produces well-formed XML. By default, it follows similar rules that the most of web browsers use in order to create the Document Object Model. However, you can provide custom tag and rule sets for tag filtering and balancing.

This explanation also reveals the age of the project, given that in the last few years the broken-HTML problem has become much less prominent than it used to be. Still, the library is updated and maintained. The disadvantage of using HTMLCleaner is that the interface feels a bit dated and can be clunky when you need to manipulate HTML.

The advantage is that it works well even on old HTML documents. It can also write the documents in XML or pretty HTML (i.e., with the correct indentation). If you need JDOM and a product that supports XPath, or you even like XML, look no further.
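
As a quick taste of the XPath support, here is a minimal sketch (error handling omitted; evaluateXPath throws a checked XPatherException that real code would have to handle):

HtmlCleaner cleaner = new HtmlCleaner();
TagNode root = cleaner.clean("<div><a href='/one'>One</a><a href='/two'>Two</a></div>");

// evaluateXPath returns a plain Object[]; the matches are TagNode instances
Object[] links = root.evaluateXPath("//a[@href]");
for (Object match : links) {
    System.out.println(((TagNode) match).getAttributeByName("href"));
}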

The documentation offers a few examples and an API reference, but nothing more. The following example comes from it.

HtmlCleaner cleaner = new HtmlCleaner();
final String siteUrl = "http://www.themoscowtimes.com/";
 
TagNode node = cleaner.clean(new URL(siteUrl));
 
// traverse whole DOM and update images to absolute URLs
node.traverse(new TagNodeVisitor() {
    public boolean visit(TagNode tagNode, HtmlNode htmlNode) {
        if (htmlNode instanceof TagNode) {
            TagNode tag = (TagNode) htmlNode;
            String tagName = tag.getName();
            if ("img".equals(tagName)) {
                String src = tag.getAttributeByName("src");
                if (src != null) {
                    tag.setAttribute("src", Utils.fullUrl(siteUrl, src));
                }
            }
        } else if (htmlNode instanceof CommentNode) {
            CommentNode comment = ((CommentNode) htmlNode); 
            comment.getContent().append(" -- By HtmlCleaner");
        }
        // tells visitor to continue traversing the DOM tree
        return true;
    }
});
 
SimpleHtmlSerializer serializer = 
    new SimpleHtmlSerializer(cleaner.getProperties());
serializer.writeToFile(node, "c:/temp/themoscowtimes.html");

Jsoup

jsoup is a Java library for working with real-world HTML

Jsoup is a library with a long history, but a modern attitude:

  • it can handle old and bad HTML, but it is also equipped for HTML5
  • it has powerful support for manipulation, with CSS selectors, DOM traversal, and easy addition or removal of HTML
  • it can clean HTML, both to protect against XSS attacks and in the sense that it improves structure and formatting

There is little more to say about jsoup, because it does everything you need from an HTML parser and even more (e.g., cleaning HTML documents). Its API can be very concise.
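
For instance, the cleaning feature boils down to a single call. A minimal sketch (in recent jsoup versions the Whitelist class has been renamed Safelist):

import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class CleanHtml {
    public static void main(String[] args) {
        String unsafe = "<p><a href='http://example.com/' onclick='stealCookies()'>Link</a></p>";
        // keep only a basic set of safe tags and attributes
        String safe = Jsoup.clean(unsafe, Whitelist.basic());
        System.out.println(safe);
        // => <p><a href="http://example.com/" rel="nofollow">Link</a></p>
    }
}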

The following example fetches an HTML document directly from a URL and selects a few links. Note also a nice option: the abs:href attribute key resolves the absolute URL even when the href attribute references a relative one. This works because the base URI is set implicitly when you fetch the document with the connect method.

Document doc = Jsoup.connect("http://en.wikipedia.org/")
               .userAgent("Mozilla")
               .get();

Elements newsHeadlines = doc.select("#mp-itn b a");

print("nLinks: (%d)", newsHeadlines.size());
for (Element link : newsHeadlines) {
   print(" * a: <%s>  (%s)", link.attr("abs:href"), trim(link.text(), 35));
}

The documentation lacks a tutorial, but it provides a cookbook, which essentially fulfills the same function, and an API reference. There is also an online interactive demo that shows how jsoup parses an HTML document.

C#

Let’s move on to the C# libraries to process HTML.

AngleSharp

The ultimate angle brackets parser library parsing HTML5, MathML, SVG and CSS to construct a DOM based on the official W3C specifications.

AngleSharp is quite simply the default choice whenever you need a modern HTML parser for a C# project. In fact, it does not just parse HTML5, but also its most used companions: CSS and SVG. There is also an extension to integrate scripting in the context of parsing HTML documents, both C# and JavaScript (based on Jint). This means that you can parse HTML documents after they have been modified by JavaScript, whether it is the JavaScript included in the page or a script you add yourself.

AngleSharp fully supports modern conventions for easy manipulation, like CSS selectors and jQuery-like constructs. It is also well integrated in the .NET world, with LINQ support over DOM elements. It has also evolved into something more than a parser:

The DOM exposed by AngleSharp is fully functional and interactive. Handle DOM events in your code

The following example, from the documentation, shows a few features of AngleSharp.

var parser = new HtmlParser();
var document = parser.Parse("<ul><li>First item<li>Second item<li class='blue'>Third item!<li class='blue red'>Last item!</ul>");

//Do something with LINQ
var blueListItemsLinq = document.All.Where(m => m.LocalName == "li" && m.ClassList.Contains("blue"));

//Or directly with CSS selectors
var blueListItemsCssSelector = document.QuerySelectorAll("li.blue");

Console.WriteLine("Comparing both ways ...");

Console.WriteLine();
Console.WriteLine("LINQ:");

foreach (var item in blueListItemsLinq)
    Console.WriteLine(item.Text());

Console.WriteLine();
Console.WriteLine("CSS:");

foreach (var item in blueListItemsCssSelector)
    Console.WriteLine(item.Text());

The documentation may contain all the information you need, but it could certainly use better organization. For the most part it is delivered within the GitHub project, but there are also older tutorials on CodeProject, by the author of the library.

HtmlAgilityPack

HtmlAgilityPack was once considered the default choice for HTML parsing with C#, although some say only for the lack of better alternatives, because the quality of the code was low. In any case, it was essentially abandoned for a few years, until it was revived by ZZZ Projects.

The revival has improved the quality of the code and made the documentation accessible. However, the library still has an old mindset: it supports XSLT and XPath, but not CSS selectors. XSLT was very useful 10 years ago, but CSS selectors are what modern HTML parsing calls for.

If you need things like XPath, HtmlAgilityPack is probably your best choice. In other cases, I do not think it is the best option right now, unless you are already using it.

// Load an HTML document
var url = "http://html-agility-pack.net/";
var web = new HtmlWeb();
var doc = web.Load(url);

// Get value with XPath
var value = doc.DocumentNode
    .SelectNodes("//td/input")
    .First()
    .Attributes["value"].Value;

Python

Now it is the turn of the Python libraries.

HTML Parser of the Standard Library

The Python standard library is quite rich and even includes an HTML parser. The bad news is that it works like a simple, traditional parser, so there are no advanced functionalities geared towards handling HTML. The parser essentially provides a visitor with basic callbacks to handle the data inside tags and the beginning and ending of tags.

from html.parser import HTMLParser

class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("Encountered a start tag:", tag)

    def handle_endtag(self, tag):
        print("Encountered an end tag :", tag)

    def handle_data(self, data):
        print("Encountered some data  :", data)

parser = MyHTMLParser()
parser.feed('<html><head><title>Test</title></head>'
            '<body><h1>Parse me!</h1></body></html>')

It works, but it does not really offer anything better than a parser generated by ANTLR or any other generic parser generator.

Html5lib

html5lib is a pure-python library for parsing HTML. It is designed to conform to the WHATWG HTML specification, as is implemented by all major web browsers.

Html5lib is considered a good library for parsing HTML5, but a very slow one, partially because it is written in Python and not in C, like some of the alternatives.

By default, parsing produces an ElementTree tree, but the library can be set to create a DOM tree based on xml.dom.minidom. Html5lib also provides walkers that simplify traversing the tree, and serializers.
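
For example, here is a minimal sketch of switching the tree builder (the treebuilder argument also accepts "lxml"):

import html5lib

# default: an ElementTree tree
etree_doc = html5lib.parse("<p>Hello</p>")

# the same document as an xml.dom.minidom tree
dom_doc = html5lib.parse("<p>Hello</p>", treebuilder="dom")
print(dom_doc.getElementsByTagName("p")[0].firstChild.data)  # Hello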

The following example shows the parser, walker and serializer in action.

import html5lib
element = html5lib.parse('<p xml:lang="pl">Witam wszystkich')
walker = html5lib.getTreeWalker("etree")
stream = walker(element)
s = html5lib.serializer.HTMLSerializer()
output = s.serialize(stream)
for item in output:
  print("%r" % item)

# Output
# '<p'
# ' '
# 'xml:lang'
# '='
# 'pl'
# '>'
# 'Witam wszystkich'

The documentation is sparse.

Html5-parser

Html5-parser is a parser for Python, but written in C. It is also just a parser that produces a tree. It exposes literally one function, named parse. The documentation compares it to html5lib, claiming that it is 30x faster.

To produce the output tree it relies, by default, on the lxml library. The same library also allows you to pretty print the output. Html5-parser even refers to lxml's documentation to explain how to navigate the resulting tree.

from html5_parser import parse
from lxml.etree import tostring
root = parse(some_html)
print(tostring(root))

Lxml

lxml is the most feature-rich and easy-to-use library for processing XML and HTML in the Python language.

Lxml is probably the most used low-level parsing library for Python, because of its speed, reliability, and features. It is written in Cython, but it relies mostly on the C libraries libxml2 and libxslt. This does not mean that it is only a low-level library, though; it is also used as a foundation by other HTML libraries.

The library is designed to work with the ElementTree API, a container for storing XML documents in memory. If you are not familiar with it, the important thing to know is that it represents an old-school way of dealing with (X)HTML: basically, you are going to search with XPath and work as if it were the golden age of XML.

Fortunately, there is also a specific package for HTML, lxml.html, that provides a few features specifically for parsing HTML. The most important one is support for CSS selectors, which makes it easy to find elements.
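
Here is a minimal sketch of the two ways of querying (cssselect relies on the separate cssselect package):

from lxml.html import fromstring

doc = fromstring('<div><p class="intro">Hello</p><p>World</p></div>')

# CSS selectors, provided by lxml.html through the cssselect package
for p in doc.cssselect('p.intro'):
    print(p.text_content())  # Hello

# the equivalent XPath query
print(doc.xpath('//p[@class="intro"]/text()'))  # ['Hello']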

There are also many other features, for example:

  • it can submit forms
  • it provides an internal DSL to create HTML documents
  • it can remove unwanted elements from the input, such as script content or CSS style annotations (i.e., it can clean HTML in the semantic sense, eliminating foreign elements)

In short: it can do many things, but not always in the easiest way you can imagine.

from urllib.request import urlopen
from lxml.html import fromstring
url = 'http://microformats.org/'
content = urlopen(url).read()
doc = fromstring(content)
doc.make_links_absolute(url)

# [..]

# some handy functions for microformats

def get_text(el, class_name):
    els = el.find_class(class_name)
    if els:
        return els[0].text_content()
    else:
        return ''
def get_value(el):
    return get_text(el, 'value') or el.text_content()
def get_all_texts(el, class_name):
    return [e.text_content() for e in el.find_class(class_name)]
def parse_addresses(el):
    # Ideally this would parse street, etc.
    return el.find_class('adr')

# the parsing:

for el in doc.find_class('hcard'):
    card = Card()
    card.el = el
    card.fn = get_text(el, 'fn')
    card.tels = []
    for tel_el in el.find_class('tel'):
        card.tels.append(Phone(get_value(tel_el),
                               get_all_texts(tel_el, 'type')))
    card.addresses = parse_addresses(el)

The documentation is very thorough, and it is also available as a single 512-page PDF. It contains everything you can think of: tutorials, examples, explanations of the concepts used in the library…

AdvancedHTMLParser

AdvancedHTMLParser is a Python parser that aims to reproduce the behavior of raw JavaScript in Python. By raw JavaScript I mean without jQuery or CSS selector syntax. So, it builds a DOM-like representation that you can interact with.

If it works in HTML javascript on a tag element, it should work on an AdvancedTag element with python.

The parser also adds a few features of its own. For instance, it supports direct modification of attributes (e.g., tag.id = "nope") instead of using the JavaScript-like syntax (e.g., the setAttribute function). It can also perform a basic validation of an HTML document (i.e., check for missing closing tokens) and output prettified HTML.

The most important addition, though, is the support for advanced search and filtering methods for tags. The find method searches values and attributes, while filter is more advanced. The latter depends on another library, called QueryableList, which is described as ORM-style filtering for any list of items. It is not as powerful as XPath or CSS selectors, and it does not use a syntax familiar from HTML manipulation; however, it is similar to the one used for database queries. It certainly simplifies your job compared to a raw parser.

The documentation is good enough, though it consists just of what you find in the README of the GitHub project and the following example in the source code.

#!/usr/bin/env python

import AdvancedHTMLParser

if __name__ == '__main__':

    parser = AdvancedHTMLParser.AdvancedHTMLParser()

    parser.parseStr('''
    # html text here
     ''')

    # Get all items by name
    items = parser.getElementsByName('items')
    
    print ( "Items less than $4.00: ")
    print ( "-----------------------n")
    
    for item in items:
        priceEm = item.getElementsByName('price')[0]

        priceValue = round(float(priceEm.innerHTML.strip()), 2)
        if priceValue < 4.00:
            name = priceEm.getPeersByName('itemName')[0].innerHTML.strip()

            print ( "%s - $%.2f" %(name, priceValue) )

# OUTPUT:
# Items less than $4.00: 
# -----------------------
# 
# Sponges - $1.96
# Turtles - $3.55
# Coop - $1.44
# Pudding Cups - $1.60

Beautiful Soup

Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work.

As the description on its website reminds you, Beautiful Soup is technically not a parser: it uses one of a few parsers behind the scenes, like the standard Python parser or lxml. However, in practical terms, if you are using Python and you need to parse HTML, you probably want something like Beautiful Soup to work with it.

Beautiful Soup is the go-to library when you need an easy way to parse HTML documents. In terms of features it might not provide everything you can think of, but it probably provides everything you actually need.

While you can navigate the parse tree yourself, using standard functions to move around it (e.g., next_element, find_parent), you are probably going to use the simpler methods it provides.

The first is CSS selectors, to easily select the needed elements of the document. But there are also simple functions to find elements by name, or direct access to the tags (e.g., soup.title). Both are quite powerful, but the first will be more familiar to users of JavaScript, while the other is more pythonic.

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(html_doc, 'html.parser')

# it finds all nodes satisfying the regular expression
# and having the matching id
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">three</a>]

# CSS selectors
soup.select("p > a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie"  id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

There are also functions to manipulate the document and easily add or remove elements. For instance, there are functions to wrap an element inside a provided one, or to do the inverse operation.

Beautiful Soup also provides functions to pretty print the output or to get only the text of the HTML document.
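
A minimal sketch of these manipulation helpers:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>I linked to <i>example.com</i></p>", "html.parser")

# wrap the <i> element in a new <b> tag, then remove the wrapper again
soup.i.wrap(soup.new_tag("b"))
soup.b.unwrap()

# pretty printing and plain-text extraction
print(soup.prettify())
print(soup.get_text())  # I linked to example.com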

The documentation is great: there are explanations and plenty of examples for all features. There is no official tutorial, but given the quality of the documentation one is not really needed.

JavaScript

Of course, we also need to look at JavaScript libraries to process HTML. We are going to distinguish between parsing HTML in the browser and doing it in Node.js.

Browser

The browser automatically parses the current HTML document, which means that a parser is always included.

Plain JavaScript or jQuery

HTML parsing is implicit in JavaScript, since the language was basically created to manipulate the DOM: the browser automatically parses HTML for you and makes it accessible in the form of a DOM. This also means that you can access the same functionality yourself. The easiest way is to parse HTML into a new element of the current document, but you can also create a new document altogether.

You can pick between plain JavaScript and the jQuery library. jQuery offers great support for CSS selectors, plus a few selectors of its own, to easily find DOM elements. Parsing HTML is also made easier: you just need a single function, parseHTML.
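
A minimal sketch of both approaches:

// plain JavaScript: let the browser parse a fragment inside a detached element
var container = document.createElement('div');
container.innerHTML = '<p class="greeting">Hello <b>world</b></p>';
console.log(container.querySelector('.greeting').textContent); // "Hello world"

// jQuery: parseHTML returns an array of DOM nodes
var nodes = $.parseHTML('<p class="greeting">Hello <b>world</b></p>');
console.log($(nodes).filter('.greeting').text()); // "Hello world"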

The library does other things besides making it easier to manipulate the DOM, such as dealing with forms and asynchronous calls to the server. Given the environment in which it runs, it is also easy to add elements to the page and have them automatically parsed.

jQuery may be the most popular library in existence partly because it also deals with compatibility issues between different browsers. You might start using it because all the examples around the web are in jQuery rather than plain JavaScript, but then you keep using it because plain JavaScript is actually less portable between browsers: there are inconsistencies in the APIs and behavior of different browsers, which this wonderful library masks.

DOMParser

The native DOM manipulation capabilities of JavaScript and jQuery are great for simple parsing of HTML fragments. However, if you actually need to parse a complete HTML or XML source into a DOM document programmatically, there is a better solution: DOMParser. It is classified as an experimental feature, but it is available in all modern browsers.

var parser = new DOMParser();
var doc = parser.parseFromString(stringContainingXMLSource, "application/xml");
// returns a Document, but not a SVGDocument nor a HTMLDocument

parser = new DOMParser();
doc = parser.parseFromString(stringContainingSVGSource, "image/svg+xml");
// returns a SVGDocument, which also is a Document.

parser = new DOMParser();
doc = parser.parseFromString(stringContainingHTMLSource, "text/html");
// returns a HTMLDocument, which also is a Document.

By using DOMParser you can easily parse an HTML document into a standalone one. Otherwise, you usually have to resort to tricking the browser into parsing it for you, for instance by adding a new element to the current document.

Node.js

While Node.js can easily work with the web, it does not make parsing functionality like that of the browser easily accessible. In this sense, when it comes to parsing, JavaScript in Node.js works like a traditional language: you have to take care of it yourself.

Cheerio

Fast, flexible, and lean implementation of core jQuery designed specifically for the server.

There is little more to say about Cheerio than that it is jQuery on the server. It should be obvious, but we are going to state it anyway: it looks like jQuery, but there is no browser. This means that Cheerio parses HTML and makes it easy to manipulate, but it does not make things happen. It does not interpret the HTML as if it were in the browser, both in the sense that it might parse things differently from a browser and in the sense that the results of the parsing are not sent directly to the user. If you need those functionalities, you will have to take care of them yourself.

The library also includes a few jQuery utility functions, such as slice and eq, to manipulate ranges. It can serialize the names and values of form elements into an array, but it cannot submit them to the server, as jQuery can. That is because Node.js runs on the server.
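
A minimal sketch of these utilities:

var cheerio = require('cheerio');
var $ = cheerio.load(
  '<form><input name="user" value="alice"/></form>' +
  '<ul><li>one</li><li>two</li><li>three</li></ul>'
);

// jQuery-style helpers to work with ranges of elements
console.log($('li').eq(1).text());        // "two"
console.log($('li').slice(1).length);     // 2

// serialize form fields into an array of { name, value } pairs
console.log($('form').serializeArray());  // [ { name: 'user', value: 'alice' } ]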

The developer created this library because he wanted a lightweight alternative to jsdom, one that was also quicker and less strict in parsing. The last quality is needed to parse real, messy websites.

The syntax and usage of Cheerio should be very familiar to any JavaScript developer.

var cheerio = require('cheerio'),
    $ = cheerio.load('<h3 class = "title">I am here!</h3>');

$('h3.title').text('There is nobody here!');
$('h3').attr('id', 'in_hiding');

$.html();
//=> <h3 class = "title" id = "in_hiding">There is nobody here!</h3>

The documentation is limited to the long README of the project, but that is probably all that you need.

Jsdom

 jsdom is a pure-JavaScript implementation of many web standards, notably the WHATWG DOM and HTML Standards, for use with Node.js. In general, the goal of the project is to emulate enough of a subset of a web browser to be useful for testing and scraping real-world web applications.

So jsdom is more than an HTML parser: it works as a browser. In the context of parsing, this means that it automatically adds the necessary tags if you omit them from the data you are trying to parse. For instance, if there were no html tag, it would implicitly add it, just like a browser does.

The fact that it supports the DOM standard means that a jsdom object has familiar properties, such as document or window, and that manipulating the DOM feels like using plain JavaScript.

You can also optionally specify a few properties, like the URL of the document, the referrer, or the user agent. The URL is particularly useful if you need to resolve links that contain relative URLs.
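
A minimal sketch of passing those options (the option names follow the jsdom README; check them against your version):

const { JSDOM } = require("jsdom");

// relative links resolve against the url option
const dom = new JSDOM('<a href="/about">About</a>', {
  url: "https://example.org/home",
  referrer: "https://example.com/",
  userAgent: "MyBot/1.0"
});

console.log(dom.window.document.querySelector("a").href);
// => "https://example.org/about"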

Since it is not really related to parsing, we just mention that jsdom has a (virtual) console, support for cookies, and so on: in short, everything you need to simulate a browser environment. It can also deal with external resources, even JavaScript scripts, which means that it can load and execute them if you ask it to. Note, however, that there are security risks in doing so, just like when you execute any external code. All of this comes with a number of caveats that you should read about in the documentation.

One important thing to notice is that you can alter the environment before the parsing happens. For instance, you can add JavaScript libraries that simulate functionalities not supported by the jsdom parser. These libraries are usually called shims.

const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const dom = new JSDOM('<!DOCTYPE html><p>Goodbye world</p>');

console.log(dom.window.document.querySelector("p").textContent);
// => "Goodbye world"

The documentation is good enough. It might seem surprisingly short given the scope of the project, but it can get away with little, because you can find documentation for using the DOM elsewhere.

Htmlparser2 and related libraries

Felix Böhm has created a few libraries for parsing HTML (plus XML and RSS) and CSS selectors, and for building a DOM. They are successful and good enough to power the Cheerio library. The libraries can be used separately, but they also work together.

The HTML parser is quick, but it is also really basic. The following example shows that it just lets you execute functions when a tag or a text element is encountered.

// from the documentation
var htmlparser = require("htmlparser2");
var parser = new htmlparser.Parser({
	onopentag: function(name, attribs){
		if(name === "script" && attribs.type === "text/javascript"){
			console.log("JS! Hooray!");
		}
	},
	ontext: function(text){
		console.log("-->", text);
	}
}, {decodeEntities: true});
parser.write("Xyz <script type='text/javascript'>var foo = '<<bar>>';</ script>");
parser.end();
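
To build a tree instead of just reacting to events, the parser can be paired with the DomHandler that htmlparser2 re-exports. A minimal sketch (the node layout follows the domhandler package, so treat the property names as an assumption to verify):

var htmlparser = require("htmlparser2");

var handler = new htmlparser.DomHandler(function (error, dom) {
    if (error) {
        console.error(error);
    } else {
        // dom is an array of plain node objects; navigating it is up to you (or domutils)
        console.log(dom[0].name);              // "p"
        console.log(dom[0].children[0].data);  // "Hello world"
    }
});

var parser = new htmlparser.Parser(handler);
parser.write("<p>Hello world</p>");
parser.end();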

They are powerful and great if you need to do advanced and complex manipulation of HTML documents. However, even together, they are somewhat clunky to use if you just want to parse HTML and do some basic manipulation of the DOM. In part this is due to the features themselves: for instance, the DOM library just builds the DOM; there are no helpers to manipulate it. In fact, to manipulate the DOM you need yet another library, called domutils, for which there is literally zero documentation.

However, the real issue is that, though they work together, they do not provide functionality on top of each other; they just work alongside each other. They are mostly designed for advanced parsing needs. For example, if you want to build a word processor that uses HTML behind the scenes, these are great. Otherwise, you are probably going to look somewhere else.

This difficulty of use is compounded by the limited documentation. The only well-documented part is the CSS selectors engine.

Parse5

parse5 provides nearly everything you may need when dealing with HTML.

Parse5 is a library meant to be used to build other tools, but it can also be used to parse HTML directly for simple tasks. However, it is somewhat limited in this second regard, as the following example shows.

const parse5 = require('parse5');

const document = parse5.parse('<!DOCTYPE html><html><head></head><body>Hi there!</body></html>');

console.log(document.childNodes[1].tagName); //=> 'html'

It is easy to use, but the issue is that it does not provide the methods that the browser gives you to manipulate the DOM (e.g., getElementById).

The difficulty is also increased by the limited documentation: it is basically a series of questions answered with an API reference (e.g., “I need to parse a HTML string” => use the parse5.parse method). So, it is feasible to use it for simple DOM manipulation, but you are probably not going to want to.

On the other hand, parse5 lists an impressive series of projects that adopt it: jsdom, Angular 2, and Polymer. So, if you need a reliable foundation for advanced manipulation or parsing of HTML, it is clearly a great choice.

Summary

We have seen a few libraries for Java, C#, Python, and JavaScript. You might be surprised that, despite the popularity of HTML, there are usually few mature choices for each language. That is because while HTML is very popular and structurally simple, providing support for all the multiple standards is hard work.

On top of that, many actual HTML documents out there are malformed according to the standard, yet they still work in the browser, so they must work with your library, too. Add to this the need for an easy way to traverse an HTML document, and the shortage is readily explained: quite simply, people expect a library that parses HTML to do much more than just parse HTML.

While there might not always be that many choices, luckily there is always at least one good choice available for all the languages we have considered.

Read more:

To discover more about how to write a Python parser, you can read Parsing In Python: Tools And Libraries

To understand how to use ANTLR, you can read The ANTLR Mega Tutorial

To discover more about parsing in JavaScript, you can read Parsing in JavaScript: Tools and Libraries

For a full understanding of how parsers work, see A Guide to Parsing: Algorithms and Terminology

To discover more about parsing in Java, you can read Parsing in Java: Tools and Libraries

To discover more about parsing in C#, you can read Parsing in C#: Tools and Libraries

To discover more about parsing SQL, you can read Parsing SQL