Pygments: updated to 2.7.0.

author      Detlev Offenbach <detlev@die-offenbachs.de>
date        Tue, 15 Sep 2020 19:09:05 +0200
changeset   7701:25f42e208e08
parent      7700:a3cf077a8db3
child       7702:f8b97639deb5

docs/changelog
eric6.e4p
eric6/ThirdParty/Pygments/pygments/AUTHORS
eric6/ThirdParty/Pygments/pygments/CHANGES
eric6/ThirdParty/Pygments/pygments/LICENSE
eric6/ThirdParty/Pygments/pygments/PKG-INFO
eric6/ThirdParty/Pygments/pygments/__init__.py
eric6/ThirdParty/Pygments/pygments/__main__.py
eric6/ThirdParty/Pygments/pygments/cmdline.py
eric6/ThirdParty/Pygments/pygments/console.py
eric6/ThirdParty/Pygments/pygments/filter.py
eric6/ThirdParty/Pygments/pygments/filters/__init__.py
eric6/ThirdParty/Pygments/pygments/formatter.py
eric6/ThirdParty/Pygments/pygments/formatters/__init__.py
eric6/ThirdParty/Pygments/pygments/formatters/_mapping.py
eric6/ThirdParty/Pygments/pygments/formatters/bbcode.py
eric6/ThirdParty/Pygments/pygments/formatters/html.py
eric6/ThirdParty/Pygments/pygments/formatters/img.py
eric6/ThirdParty/Pygments/pygments/formatters/irc.py
eric6/ThirdParty/Pygments/pygments/formatters/latex.py
eric6/ThirdParty/Pygments/pygments/formatters/other.py
eric6/ThirdParty/Pygments/pygments/formatters/rtf.py
eric6/ThirdParty/Pygments/pygments/formatters/svg.py
eric6/ThirdParty/Pygments/pygments/formatters/terminal.py
eric6/ThirdParty/Pygments/pygments/formatters/terminal256.py
eric6/ThirdParty/Pygments/pygments/lexer.py
eric6/ThirdParty/Pygments/pygments/lexers/__init__.py
eric6/ThirdParty/Pygments/pygments/lexers/_asy_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_cl_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_cocoa_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_csound_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_lasso_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_lua_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_mapping.py
eric6/ThirdParty/Pygments/pygments/lexers/_mql_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_mysql_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_openedge_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_php_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_postgres_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_scilab_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_sourcemod_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_stan_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_stata_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_tsql_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_usd_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_vbscript_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/_vim_builtins.py
eric6/ThirdParty/Pygments/pygments/lexers/actionscript.py
eric6/ThirdParty/Pygments/pygments/lexers/agile.py
eric6/ThirdParty/Pygments/pygments/lexers/algebra.py
eric6/ThirdParty/Pygments/pygments/lexers/ambient.py
eric6/ThirdParty/Pygments/pygments/lexers/ampl.py
eric6/ThirdParty/Pygments/pygments/lexers/apl.py
eric6/ThirdParty/Pygments/pygments/lexers/archetype.py
eric6/ThirdParty/Pygments/pygments/lexers/arrow.py
eric6/ThirdParty/Pygments/pygments/lexers/asm.py
eric6/ThirdParty/Pygments/pygments/lexers/automation.py
eric6/ThirdParty/Pygments/pygments/lexers/bare.py
eric6/ThirdParty/Pygments/pygments/lexers/basic.py
eric6/ThirdParty/Pygments/pygments/lexers/bibtex.py
eric6/ThirdParty/Pygments/pygments/lexers/boa.py
eric6/ThirdParty/Pygments/pygments/lexers/business.py
eric6/ThirdParty/Pygments/pygments/lexers/c_cpp.py
eric6/ThirdParty/Pygments/pygments/lexers/c_like.py
eric6/ThirdParty/Pygments/pygments/lexers/capnproto.py
eric6/ThirdParty/Pygments/pygments/lexers/chapel.py
eric6/ThirdParty/Pygments/pygments/lexers/clean.py
eric6/ThirdParty/Pygments/pygments/lexers/compiled.py
eric6/ThirdParty/Pygments/pygments/lexers/configs.py
eric6/ThirdParty/Pygments/pygments/lexers/console.py
eric6/ThirdParty/Pygments/pygments/lexers/crystal.py
eric6/ThirdParty/Pygments/pygments/lexers/csound.py
eric6/ThirdParty/Pygments/pygments/lexers/css.py
eric6/ThirdParty/Pygments/pygments/lexers/d.py
eric6/ThirdParty/Pygments/pygments/lexers/dalvik.py
eric6/ThirdParty/Pygments/pygments/lexers/data.py
eric6/ThirdParty/Pygments/pygments/lexers/devicetree.py
eric6/ThirdParty/Pygments/pygments/lexers/diff.py
eric6/ThirdParty/Pygments/pygments/lexers/dotnet.py
eric6/ThirdParty/Pygments/pygments/lexers/dsls.py
eric6/ThirdParty/Pygments/pygments/lexers/dylan.py
eric6/ThirdParty/Pygments/pygments/lexers/ecl.py
eric6/ThirdParty/Pygments/pygments/lexers/eiffel.py
eric6/ThirdParty/Pygments/pygments/lexers/elm.py
eric6/ThirdParty/Pygments/pygments/lexers/email.py
eric6/ThirdParty/Pygments/pygments/lexers/erlang.py
eric6/ThirdParty/Pygments/pygments/lexers/esoteric.py
eric6/ThirdParty/Pygments/pygments/lexers/ezhil.py
eric6/ThirdParty/Pygments/pygments/lexers/factor.py
eric6/ThirdParty/Pygments/pygments/lexers/fantom.py
eric6/ThirdParty/Pygments/pygments/lexers/felix.py
eric6/ThirdParty/Pygments/pygments/lexers/floscript.py
eric6/ThirdParty/Pygments/pygments/lexers/forth.py
eric6/ThirdParty/Pygments/pygments/lexers/fortran.py
eric6/ThirdParty/Pygments/pygments/lexers/foxpro.py
eric6/ThirdParty/Pygments/pygments/lexers/freefem.py
eric6/ThirdParty/Pygments/pygments/lexers/functional.py
eric6/ThirdParty/Pygments/pygments/lexers/gdscript.py
eric6/ThirdParty/Pygments/pygments/lexers/go.py
eric6/ThirdParty/Pygments/pygments/lexers/grammar_notation.py
eric6/ThirdParty/Pygments/pygments/lexers/graph.py
eric6/ThirdParty/Pygments/pygments/lexers/graphics.py
eric6/ThirdParty/Pygments/pygments/lexers/haskell.py
eric6/ThirdParty/Pygments/pygments/lexers/haxe.py
eric6/ThirdParty/Pygments/pygments/lexers/hdl.py
eric6/ThirdParty/Pygments/pygments/lexers/hexdump.py
eric6/ThirdParty/Pygments/pygments/lexers/html.py
eric6/ThirdParty/Pygments/pygments/lexers/idl.py
eric6/ThirdParty/Pygments/pygments/lexers/igor.py
eric6/ThirdParty/Pygments/pygments/lexers/inferno.py
eric6/ThirdParty/Pygments/pygments/lexers/installers.py
eric6/ThirdParty/Pygments/pygments/lexers/int_fiction.py
eric6/ThirdParty/Pygments/pygments/lexers/iolang.py
eric6/ThirdParty/Pygments/pygments/lexers/j.py
eric6/ThirdParty/Pygments/pygments/lexers/javascript.py
eric6/ThirdParty/Pygments/pygments/lexers/julia.py
eric6/ThirdParty/Pygments/pygments/lexers/jvm.py
eric6/ThirdParty/Pygments/pygments/lexers/lisp.py
eric6/ThirdParty/Pygments/pygments/lexers/make.py
eric6/ThirdParty/Pygments/pygments/lexers/markup.py
eric6/ThirdParty/Pygments/pygments/lexers/math.py
eric6/ThirdParty/Pygments/pygments/lexers/matlab.py
eric6/ThirdParty/Pygments/pygments/lexers/mime.py
eric6/ThirdParty/Pygments/pygments/lexers/ml.py
eric6/ThirdParty/Pygments/pygments/lexers/modeling.py
eric6/ThirdParty/Pygments/pygments/lexers/modula2.py
eric6/ThirdParty/Pygments/pygments/lexers/monte.py
eric6/ThirdParty/Pygments/pygments/lexers/mosel.py
eric6/ThirdParty/Pygments/pygments/lexers/ncl.py
eric6/ThirdParty/Pygments/pygments/lexers/nimrod.py
eric6/ThirdParty/Pygments/pygments/lexers/nit.py
eric6/ThirdParty/Pygments/pygments/lexers/nix.py
eric6/ThirdParty/Pygments/pygments/lexers/oberon.py
eric6/ThirdParty/Pygments/pygments/lexers/objective.py
eric6/ThirdParty/Pygments/pygments/lexers/ooc.py
eric6/ThirdParty/Pygments/pygments/lexers/other.py
eric6/ThirdParty/Pygments/pygments/lexers/parasail.py
eric6/ThirdParty/Pygments/pygments/lexers/parsers.py
eric6/ThirdParty/Pygments/pygments/lexers/pascal.py
eric6/ThirdParty/Pygments/pygments/lexers/pawn.py
eric6/ThirdParty/Pygments/pygments/lexers/perl.py
eric6/ThirdParty/Pygments/pygments/lexers/php.py
eric6/ThirdParty/Pygments/pygments/lexers/pointless.py
eric6/ThirdParty/Pygments/pygments/lexers/pony.py
eric6/ThirdParty/Pygments/pygments/lexers/praat.py
eric6/ThirdParty/Pygments/pygments/lexers/prolog.py
eric6/ThirdParty/Pygments/pygments/lexers/promql.py
eric6/ThirdParty/Pygments/pygments/lexers/python.py
eric6/ThirdParty/Pygments/pygments/lexers/qvt.py
eric6/ThirdParty/Pygments/pygments/lexers/r.py
eric6/ThirdParty/Pygments/pygments/lexers/rdf.py
eric6/ThirdParty/Pygments/pygments/lexers/rebol.py
eric6/ThirdParty/Pygments/pygments/lexers/resource.py
eric6/ThirdParty/Pygments/pygments/lexers/ride.py
eric6/ThirdParty/Pygments/pygments/lexers/rnc.py
eric6/ThirdParty/Pygments/pygments/lexers/roboconf.py
eric6/ThirdParty/Pygments/pygments/lexers/robotframework.py
eric6/ThirdParty/Pygments/pygments/lexers/ruby.py
eric6/ThirdParty/Pygments/pygments/lexers/rust.py
eric6/ThirdParty/Pygments/pygments/lexers/sas.py
eric6/ThirdParty/Pygments/pygments/lexers/scdoc.py
eric6/ThirdParty/Pygments/pygments/lexers/scripting.py
eric6/ThirdParty/Pygments/pygments/lexers/sgf.py
eric6/ThirdParty/Pygments/pygments/lexers/shell.py
eric6/ThirdParty/Pygments/pygments/lexers/sieve.py
eric6/ThirdParty/Pygments/pygments/lexers/slash.py
eric6/ThirdParty/Pygments/pygments/lexers/smalltalk.py
eric6/ThirdParty/Pygments/pygments/lexers/smv.py
eric6/ThirdParty/Pygments/pygments/lexers/snobol.py
eric6/ThirdParty/Pygments/pygments/lexers/solidity.py
eric6/ThirdParty/Pygments/pygments/lexers/special.py
eric6/ThirdParty/Pygments/pygments/lexers/sql.py
eric6/ThirdParty/Pygments/pygments/lexers/stata.py
eric6/ThirdParty/Pygments/pygments/lexers/supercollider.py
eric6/ThirdParty/Pygments/pygments/lexers/tcl.py
eric6/ThirdParty/Pygments/pygments/lexers/templates.py
eric6/ThirdParty/Pygments/pygments/lexers/teraterm.py
eric6/ThirdParty/Pygments/pygments/lexers/testing.py
eric6/ThirdParty/Pygments/pygments/lexers/text.py
eric6/ThirdParty/Pygments/pygments/lexers/textedit.py
eric6/ThirdParty/Pygments/pygments/lexers/textfmts.py
eric6/ThirdParty/Pygments/pygments/lexers/theorem.py
eric6/ThirdParty/Pygments/pygments/lexers/tnt.py
eric6/ThirdParty/Pygments/pygments/lexers/trafficscript.py
eric6/ThirdParty/Pygments/pygments/lexers/typoscript.py
eric6/ThirdParty/Pygments/pygments/lexers/unicon.py
eric6/ThirdParty/Pygments/pygments/lexers/urbi.py
eric6/ThirdParty/Pygments/pygments/lexers/usd.py
eric6/ThirdParty/Pygments/pygments/lexers/varnish.py
eric6/ThirdParty/Pygments/pygments/lexers/verification.py
eric6/ThirdParty/Pygments/pygments/lexers/web.py
eric6/ThirdParty/Pygments/pygments/lexers/webidl.py
eric6/ThirdParty/Pygments/pygments/lexers/webmisc.py
eric6/ThirdParty/Pygments/pygments/lexers/whiley.py
eric6/ThirdParty/Pygments/pygments/lexers/x10.py
eric6/ThirdParty/Pygments/pygments/lexers/xorg.py
eric6/ThirdParty/Pygments/pygments/lexers/yang.py
eric6/ThirdParty/Pygments/pygments/lexers/zig.py
eric6/ThirdParty/Pygments/pygments/modeline.py
eric6/ThirdParty/Pygments/pygments/plugin.py
eric6/ThirdParty/Pygments/pygments/regexopt.py
eric6/ThirdParty/Pygments/pygments/scanner.py
eric6/ThirdParty/Pygments/pygments/sphinxext.py
eric6/ThirdParty/Pygments/pygments/style.py
eric6/ThirdParty/Pygments/pygments/styles/__init__.py
eric6/ThirdParty/Pygments/pygments/styles/abap.py
eric6/ThirdParty/Pygments/pygments/styles/algol.py
eric6/ThirdParty/Pygments/pygments/styles/algol_nu.py
eric6/ThirdParty/Pygments/pygments/styles/arduino.py
eric6/ThirdParty/Pygments/pygments/styles/autumn.py
eric6/ThirdParty/Pygments/pygments/styles/borland.py
eric6/ThirdParty/Pygments/pygments/styles/bw.py
eric6/ThirdParty/Pygments/pygments/styles/colorful.py
eric6/ThirdParty/Pygments/pygments/styles/default.py
eric6/ThirdParty/Pygments/pygments/styles/emacs.py
eric6/ThirdParty/Pygments/pygments/styles/friendly.py
eric6/ThirdParty/Pygments/pygments/styles/fruity.py
eric6/ThirdParty/Pygments/pygments/styles/igor.py
eric6/ThirdParty/Pygments/pygments/styles/inkpot.py
eric6/ThirdParty/Pygments/pygments/styles/lovelace.py
eric6/ThirdParty/Pygments/pygments/styles/manni.py
eric6/ThirdParty/Pygments/pygments/styles/monokai.py
eric6/ThirdParty/Pygments/pygments/styles/murphy.py
eric6/ThirdParty/Pygments/pygments/styles/native.py
eric6/ThirdParty/Pygments/pygments/styles/paraiso_dark.py
eric6/ThirdParty/Pygments/pygments/styles/paraiso_light.py
eric6/ThirdParty/Pygments/pygments/styles/pastie.py
eric6/ThirdParty/Pygments/pygments/styles/perldoc.py
eric6/ThirdParty/Pygments/pygments/styles/rainbow_dash.py
eric6/ThirdParty/Pygments/pygments/styles/rrt.py
eric6/ThirdParty/Pygments/pygments/styles/sas.py
eric6/ThirdParty/Pygments/pygments/styles/solarized.py
eric6/ThirdParty/Pygments/pygments/styles/stata_dark.py
eric6/ThirdParty/Pygments/pygments/styles/stata_light.py
eric6/ThirdParty/Pygments/pygments/styles/tango.py
eric6/ThirdParty/Pygments/pygments/styles/trac.py
eric6/ThirdParty/Pygments/pygments/styles/vim.py
eric6/ThirdParty/Pygments/pygments/styles/vs.py
eric6/ThirdParty/Pygments/pygments/styles/xcode.py
eric6/ThirdParty/Pygments/pygments/token.py
eric6/ThirdParty/Pygments/pygments/unistring.py
eric6/ThirdParty/Pygments/pygments/util.py
--- a/docs/changelog	Tue Sep 15 18:46:58 2020 +0200
+++ b/docs/changelog	Tue Sep 15 19:09:05 2020 +0200
@@ -5,6 +5,8 @@
 - Editor
   -- added an outline widget showing the structure of the editor source code
      and allowing to navigate in the code
+- Third Party packages
+  -- updated Pygments to 2.7.0
 
 Version 20.9:
 - bug fixes
@@ -49,7 +51,7 @@
 - MicroPython
   -- added support for Calliope mini
 - Third Party packages
-  -- updated Pygments to 2.3.1
+  -- updated Pygments to 2.6.1
 
 Version 20.4:
 - bug fixes
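
The changelog entries above record the bundled Pygments moving from 2.3.1 to 2.6.1 and now to 2.7.0. A minimal sketch of checking such version strings against a required minimum (a hypothetical helper for illustration only, not part of eric6 or Pygments) compares them as integer tuples rather than as strings, since lexicographic comparison would misorder e.g. "2.10.0" against "2.7.0":

```python
def parse_version(v):
    """Turn a dotted version string like '2.7.0' into (2, 7, 0)."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(found, required="2.7.0"):
    """True if the found version is at least the required one."""
    return parse_version(found) >= parse_version(required)

print(meets_minimum("2.6.1"))   # previous bundled copy -> False
print(meets_minimum("2.7.0"))   # this update -> True
print(meets_minimum("2.10.0"))  # tuple comparison handles two-digit parts -> True
```

Plain string comparison would have reported "2.10.0" < "2.7.0"; the tuple form avoids that pitfall.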
--- a/eric6.e4p	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6.e4p	Tue Sep 15 19:09:05 2020 +0200
@@ -1016,6 +1016,7 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_lua_builtins.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_mapping.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_mql_builtins.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/_mysql_builtins.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_openedge_builtins.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_php_builtins.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/_postgres_builtins.py</Source>
@@ -1034,8 +1035,10 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/ampl.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/apl.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/archetype.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/arrow.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/asm.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/automation.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/bare.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/basic.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/bibtex.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/boa.py</Source>
@@ -1054,6 +1057,7 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/d.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/dalvik.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/data.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/devicetree.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/diff.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/dotnet.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/dsls.py</Source>
@@ -1074,6 +1078,7 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/foxpro.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/freefem.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/functional.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/gdscript.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/go.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/grammar_notation.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/graph.py</Source>
@@ -1118,9 +1123,11 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/pawn.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/perl.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/php.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/pointless.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/pony.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/praat.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/prolog.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/promql.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/python.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/qvt.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/r.py</Source>
@@ -1156,6 +1163,7 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/textedit.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/textfmts.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/theorem.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/tnt.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/trafficscript.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/typoscript.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/unicon.py</Source>
@@ -1169,6 +1177,7 @@
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/whiley.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/x10.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/xorg.py</Source>
+    <Source>eric6/ThirdParty/Pygments/pygments/lexers/yang.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/lexers/zig.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/modeline.py</Source>
     <Source>eric6/ThirdParty/Pygments/pygments/plugin.py</Source>
@@ -2103,9 +2112,6 @@
     <Other>eric6/APIs/MicroPython/circuitpython.api</Other>
     <Other>eric6/APIs/MicroPython/microbit.api</Other>
     <Other>eric6/APIs/MicroPython/micropython.api</Other>
-    <Other>eric6/APIs/Python/zope-2.10.7.api</Other>
-    <Other>eric6/APIs/Python/zope-2.11.2.api</Other>
-    <Other>eric6/APIs/Python/zope-3.3.1.api</Other>
     <Other>eric6/APIs/Python3/PyQt4.bas</Other>
     <Other>eric6/APIs/Python3/PyQt5.bas</Other>
     <Other>eric6/APIs/Python3/PyQtChart.bas</Other>
@@ -2113,6 +2119,9 @@
     <Other>eric6/APIs/Python3/QScintilla2.bas</Other>
     <Other>eric6/APIs/Python3/eric6.api</Other>
     <Other>eric6/APIs/Python3/eric6.bas</Other>
+    <Other>eric6/APIs/Python/zope-2.10.7.api</Other>
+    <Other>eric6/APIs/Python/zope-2.11.2.api</Other>
+    <Other>eric6/APIs/Python/zope-3.3.1.api</Other>
     <Other>eric6/APIs/QSS/qss.api</Other>
     <Other>eric6/APIs/Ruby/Ruby-1.8.7.api</Other>
     <Other>eric6/APIs/Ruby/Ruby-1.8.7.bas</Other>
--- a/eric6/ThirdParty/Pygments/pygments/AUTHORS	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/AUTHORS	Tue Sep 15 19:09:05 2020 +0200
@@ -1,231 +1,237 @@
-Pygments is written and maintained by Georg Brandl <georg@python.org>.
-
-Major developers are Tim Hatch <tim@timhatch.com> and Armin Ronacher
-<armin.ronacher@active-4.com>.
-
-Other contributors, listed alphabetically, are:
-
-* Sam Aaron -- Ioke lexer
-* Ali Afshar -- image formatter
-* Thomas Aglassinger -- Easytrieve, JCL, Rexx, Transact-SQL and VBScript
-  lexers
-* Muthiah Annamalai -- Ezhil lexer
-* Kumar Appaiah -- Debian control lexer
-* Andreas Amann -- AppleScript lexer
-* Timothy Armstrong -- Dart lexer fixes
-* Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers
-* Jeremy Ashkenas -- CoffeeScript lexer
-* José Joaquín Atria -- Praat lexer
-* Stefan Matthias Aust -- Smalltalk lexer
-* Lucas Bajolet -- Nit lexer
-* Ben Bangert -- Mako lexers
-* Max Battcher -- Darcs patch lexer
-* Thomas Baruchel -- APL lexer
-* Tim Baumann -- (Literate) Agda lexer
-* Paul Baumgart, 280 North, Inc. -- Objective-J lexer
-* Michael Bayer -- Myghty lexers
-* Thomas Beale -- Archetype lexers
-* John Benediktsson -- Factor lexer
-* Trevor Bergeron -- mIRC formatter
-* Vincent Bernat -- LessCSS lexer
-* Christopher Bertels -- Fancy lexer
-* Sébastien Bigaret -- QVT Operational lexer
-* Jarrett Billingsley -- MiniD lexer
-* Adam Blinkinsop -- Haskell, Redcode lexers
-* Stéphane Blondon -- SGF and Sieve lexers
-* Frits van Bommel -- assembler lexers
-* Pierre Bourdon -- bugfixes
-* Martijn Braam -- Kernel log lexer
-* Matthias Bussonnier -- ANSI style handling for terminal-256 formatter
-* chebee7i -- Python traceback lexer improvements
-* Hiram Chirino -- Scaml and Jade lexers
-* Mauricio Caceres -- SAS and Stata lexers.
-* Ian Cooper -- VGL lexer
-* David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers
-* Leaf Corcoran -- MoonScript lexer
-* Christopher Creutzig -- MuPAD lexer
-* Daniël W. Crompton -- Pike lexer
-* Pete Curry -- bugfixes
-* Bryan Davis -- EBNF lexer
-* Bruno Deferrari -- Shen lexer
-* Giedrius Dubinskas -- HTML formatter improvements
-* Owen Durni -- Haxe lexer
-* Alexander Dutton, Oxford University Computing Services -- SPARQL lexer
-* James Edwards -- Terraform lexer
-* Nick Efford -- Python 3 lexer
-* Sven Efftinge -- Xtend lexer
-* Artem Egorkine -- terminal256 formatter
-* Matthew Fernandez -- CAmkES lexer
-* Michael Ficarra -- CPSA lexer
-* James H. Fisher -- PostScript lexer
-* William S. Fulton -- SWIG lexer
-* Carlos Galdino -- Elixir and Elixir Console lexers
-* Michael Galloy -- IDL lexer
-* Naveen Garg -- Autohotkey lexer
-* Simon Garnotel -- FreeFem++ lexer
-* Laurent Gautier -- R/S lexer
-* Alex Gaynor -- PyPy log lexer
-* Richard Gerkin -- Igor Pro lexer
-* Alain Gilbert -- TypeScript lexer
-* Alex Gilding -- BlitzBasic lexer
-* GitHub, Inc -- DASM16, Augeas, TOML, and Slash lexers
-* Bertrand Goetzmann -- Groovy lexer
-* Krzysiek Goj -- Scala lexer
-* Rostyslav Golda -- FloScript lexer
-* Andrey Golovizin -- BibTeX lexers
-* Matt Good -- Genshi, Cheetah lexers
-* Michał Górny -- vim modeline support
-* Alex Gosse -- TrafficScript lexer
-* Patrick Gotthardt -- PHP namespaces support
-* Olivier Guibe -- Asymptote lexer
-* Phil Hagelberg -- Fennel lexer
-* Florian Hahn -- Boogie lexer
-* Martin Harriman -- SNOBOL lexer
-* Matthew Harrison -- SVG formatter
-* Steven Hazel -- Tcl lexer
-* Dan Michael Heggø -- Turtle lexer
-* Aslak Hellesøy -- Gherkin lexer
-* Greg Hendershott -- Racket lexer
-* Justin Hendrick -- ParaSail lexer
-* Jordi Gutiérrez Hermoso -- Octave lexer
-* David Hess, Fish Software, Inc. -- Objective-J lexer
-* Varun Hiremath -- Debian control lexer
-* Rob Hoelz -- Perl 6 lexer
-* Doug Hogan -- Mscgen lexer
-* Ben Hollis -- Mason lexer
-* Max Horn -- GAP lexer
-* Alastair Houghton -- Lexer inheritance facility
-* Tim Howard -- BlitzMax lexer
-* Dustin Howett -- Logos lexer
-* Ivan Inozemtsev -- Fantom lexer
-* Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session,
-  MSDOS session, BC, WDiff
-* Brian R. Jackson -- Tea lexer
-* Christian Jann -- ShellSession lexer
-* Dennis Kaarsemaker -- sources.list lexer
-* Dmitri Kabak -- Inferno Limbo lexer
-* Igor Kalnitsky -- vhdl lexer
-* Colin Kennedy - USD lexer
-* Alexander Kit -- MaskJS lexer
-* Pekka Klärck -- Robot Framework lexer
-* Gerwin Klein -- Isabelle lexer
-* Eric Knibbe -- Lasso lexer
-* Stepan Koltsov -- Clay lexer
-* Adam Koprowski -- Opa lexer
-* Benjamin Kowarsch -- Modula-2 lexer
-* Domen Kožar -- Nix lexer
-* Oleh Krekel -- Emacs Lisp lexer
-* Alexander Kriegisch -- Kconfig and AspectJ lexers
-* Marek Kubica -- Scheme lexer
-* Jochen Kupperschmidt -- Markdown processor
-* Gerd Kurzbach -- Modelica lexer
-* Jon Larimer, Google Inc. -- Smali lexer
-* Olov Lassus -- Dart lexer
-* Matt Layman -- TAP lexer
-* Kristian Lyngstøl -- Varnish lexers
-* Sylvestre Ledru -- Scilab lexer
-* Chee Sing Lee -- Flatline lexer
-* Mark Lee -- Vala lexer
-* Valentin Lorentz -- C++ lexer improvements
-* Ben Mabey -- Gherkin lexer
-* Angus MacArthur -- QML lexer
-* Louis Mandel -- X10 lexer
-* Louis Marchand -- Eiffel lexer
-* Simone Margaritelli -- Hybris lexer
-* Kirk McDonald -- D lexer
-* Gordon McGregor -- SystemVerilog lexer
-* Stephen McKamey -- Duel/JBST lexer
-* Brian McKenna -- F# lexer
-* Charles McLaughlin -- Puppet lexer
-* Kurt McKee -- Tera Term macro lexer
-* Lukas Meuser -- BBCode formatter, Lua lexer
-* Cat Miller -- Pig lexer
-* Paul Miller -- LiveScript lexer
-* Hong Minhee -- HTTP lexer
-* Michael Mior -- Awk lexer
-* Bruce Mitchener -- Dylan lexer rewrite
-* Reuben Morais -- SourcePawn lexer
-* Jon Morton -- Rust lexer
-* Paulo Moura -- Logtalk lexer
-* Mher Movsisyan -- DTD lexer
-* Dejan Muhamedagic -- Crmsh lexer
-* Ana Nelson -- Ragel, ANTLR, R console lexers
-* Kurt Neufeld -- Markdown lexer
-* Nam T. Nguyen -- Monokai style
-* Jesper Noehr -- HTML formatter "anchorlinenos"
-* Mike Nolta -- Julia lexer
-* Jonas Obrist -- BBCode lexer
-* Edward O'Callaghan -- Cryptol lexer
-* David Oliva -- Rebol lexer
-* Pat Pannuto -- nesC lexer
-* Jon Parise -- Protocol buffers and Thrift lexers
-* Benjamin Peterson -- Test suite refactoring
-* Ronny Pfannschmidt -- BBCode lexer
-* Dominik Picheta -- Nimrod lexer
-* Andrew Pinkham -- RTF Formatter Refactoring
-* Clément Prévost -- UrbiScript lexer
-* Tanner Prynn -- cmdline -x option and loading lexers from files
-* Oleh Prypin -- Crystal lexer (based on Ruby lexer)
-* Xidorn Quan -- Web IDL lexer
-* Elias Rabel -- Fortran fixed form lexer
-* raichoo -- Idris lexer
-* Kashif Rasul -- CUDA lexer
-* Nathan Reed -- HLSL lexer
-* Justin Reidy -- MXML lexer
-* Norman Richards -- JSON lexer
-* Corey Richardson -- Rust lexer updates
-* Lubomir Rintel -- GoodData MAQL and CL lexers
-* Andre Roberge -- Tango style
-* Georg Rollinger -- HSAIL lexer
-* Michiel Roos -- TypoScript lexer
-* Konrad Rudolph -- LaTeX formatter enhancements
-* Mario Ruggier -- Evoque lexers
-* Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements
-* Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers
-* Matteo Sasso -- Common Lisp lexer
-* Joe Schafer -- Ada lexer
-* Ken Schutte -- Matlab lexers
-* René Schwaiger -- Rainbow Dash style
-* Sebastian Schweizer -- Whiley lexer
-* Tassilo Schweyer -- Io, MOOCode lexers
-* Ted Shaw -- AutoIt lexer
-* Joerg Sieker -- ABAP lexer
-* Robert Simmons -- Standard ML lexer
-* Kirill Simonov -- YAML lexer
-* Corbin Simpson -- Monte lexer
-* Alexander Smishlajev -- Visual FoxPro lexer
-* Steve Spigarelli -- XQuery lexer
-* Jerome St-Louis -- eC lexer
-* Camil Staps -- Clean and NuSMV lexers; Solarized style
-* James Strachan -- Kotlin lexer
-* Tom Stuart -- Treetop lexer
-* Colin Sullivan -- SuperCollider lexer
-* Ben Swift -- Extempore lexer
-* Edoardo Tenani -- Arduino lexer
-* Tiberius Teng -- default style overhaul
-* Jeremy Thurgood -- Erlang, Squid config lexers
-* Brian Tiffin -- OpenCOBOL lexer
-* Bob Tolbert -- Hy lexer
-* Matthias Trute -- Forth lexer
-* Erick Tryzelaar -- Felix lexer
-* Alexander Udalov -- Kotlin lexer improvements
-* Thomas Van Doren -- Chapel lexer
-* Daniele Varrazzo -- PostgreSQL lexers
-* Abe Voelker -- OpenEdge ABL lexer
-* Pepijn de Vos -- HTML formatter CTags support
-* Matthias Vallentin -- Bro lexer
-* Benoît Vinot -- AMPL lexer
-* Linh Vu Hong -- RSL lexer
-* Nathan Weizenbaum -- Haml and Sass lexers
-* Nathan Whetsell -- Csound lexers
-* Dietmar Winkler -- Modelica lexer
-* Nils Winter -- Smalltalk lexer
-* Davy Wybiral -- Clojure lexer
-* Whitney Young -- ObjectiveC lexer
-* Diego Zamboni -- CFengine3 lexer
-* Enrique Zamudio -- Ceylon lexer
-* Alex Zimin -- Nemerle lexer
-* Rob Zimmerman -- Kal lexer
-* Vincent Zurczak -- Roboconf lexer
-
-Many thanks for all contributions!
+Pygments is written and maintained by Georg Brandl <georg@python.org>.
+
+Major developers are Tim Hatch <tim@timhatch.com> and Armin Ronacher
+<armin.ronacher@active-4.com>.
+
+Other contributors, listed alphabetically, are:
+
+* Sam Aaron -- Ioke lexer
+* Ali Afshar -- image formatter
+* Thomas Aglassinger -- Easytrieve, JCL, Rexx, Transact-SQL and VBScript
+  lexers
+* Muthiah Annamalai -- Ezhil lexer
+* Kumar Appaiah -- Debian control lexer
+* Andreas Amann -- AppleScript lexer
+* Timothy Armstrong -- Dart lexer fixes
+* Jeffrey Arnold -- R/S, Rd, BUGS, Jags, and Stan lexers
+* Jeremy Ashkenas -- CoffeeScript lexer
+* José Joaquín Atria -- Praat lexer
+* Stefan Matthias Aust -- Smalltalk lexer
+* Lucas Bajolet -- Nit lexer
+* Ben Bangert -- Mako lexers
+* Max Battcher -- Darcs patch lexer
+* Thomas Baruchel -- APL lexer
+* Tim Baumann -- (Literate) Agda lexer
+* Paul Baumgart, 280 North, Inc. -- Objective-J lexer
+* Michael Bayer -- Myghty lexers
+* Thomas Beale -- Archetype lexers
+* John Benediktsson -- Factor lexer
+* Trevor Bergeron -- mIRC formatter
+* Vincent Bernat -- LessCSS lexer
+* Christopher Bertels -- Fancy lexer
+* Sébastien Bigaret -- QVT Operational lexer
+* Jarrett Billingsley -- MiniD lexer
+* Adam Blinkinsop -- Haskell, Redcode lexers
+* Stéphane Blondon -- SGF and Sieve lexers
+* Frits van Bommel -- assembler lexers
+* Pierre Bourdon -- bugfixes
+* Martijn Braam -- Kernel log lexer, BARE lexer
+* Matthias Bussonnier -- ANSI style handling for terminal-256 formatter
+* chebee7i -- Python traceback lexer improvements
+* Hiram Chirino -- Scaml and Jade lexers
+* Mauricio Caceres -- SAS and Stata lexers
+* Ian Cooper -- VGL lexer
+* David Corbett -- Inform, Jasmin, JSGF, Snowball, and TADS 3 lexers
+* Leaf Corcoran -- MoonScript lexer
+* Christopher Creutzig -- MuPAD lexer
+* Daniël W. Crompton -- Pike lexer
+* Pete Curry -- bugfixes
+* Bryan Davis -- EBNF lexer
+* Bruno Deferrari -- Shen lexer
+* Giedrius Dubinskas -- HTML formatter improvements
+* Owen Durni -- Haxe lexer
+* Alexander Dutton, Oxford University Computing Services -- SPARQL lexer
+* James Edwards -- Terraform lexer
+* Nick Efford -- Python 3 lexer
+* Sven Efftinge -- Xtend lexer
+* Artem Egorkine -- terminal256 formatter
+* Matthew Fernandez -- CAmkES lexer
+* Paweł Fertyk -- GDScript lexer, HTML formatter improvements
+* Michael Ficarra -- CPSA lexer
+* James H. Fisher -- PostScript lexer
+* William S. Fulton -- SWIG lexer
+* Carlos Galdino -- Elixir and Elixir Console lexers
+* Michael Galloy -- IDL lexer
+* Naveen Garg -- Autohotkey lexer
+* Simon Garnotel -- FreeFem++ lexer
+* Laurent Gautier -- R/S lexer
+* Alex Gaynor -- PyPy log lexer
+* Richard Gerkin -- Igor Pro lexer
+* Alain Gilbert -- TypeScript lexer
+* Alex Gilding -- BlitzBasic lexer
+* GitHub, Inc -- DASM16, Augeas, TOML, and Slash lexers
+* Bertrand Goetzmann -- Groovy lexer
+* Krzysiek Goj -- Scala lexer
+* Rostyslav Golda -- FloScript lexer
+* Andrey Golovizin -- BibTeX lexers
+* Matt Good -- Genshi, Cheetah lexers
+* Michał Górny -- vim modeline support
+* Alex Gosse -- TrafficScript lexer
+* Patrick Gotthardt -- PHP namespaces support
+* Olivier Guibe -- Asymptote lexer
+* Phil Hagelberg -- Fennel lexer
+* Florian Hahn -- Boogie lexer
+* Martin Harriman -- SNOBOL lexer
+* Matthew Harrison -- SVG formatter
+* Steven Hazel -- Tcl lexer
+* Dan Michael Heggø -- Turtle lexer
+* Aslak Hellesøy -- Gherkin lexer
+* Greg Hendershott -- Racket lexer
+* Justin Hendrick -- ParaSail lexer
+* Jordi Gutiérrez Hermoso -- Octave lexer
+* David Hess, Fish Software, Inc. -- Objective-J lexer
+* Varun Hiremath -- Debian control lexer
+* Rob Hoelz -- Perl 6 lexer
+* Doug Hogan -- Mscgen lexer
+* Ben Hollis -- Mason lexer
+* Max Horn -- GAP lexer
+* Alastair Houghton -- Lexer inheritance facility
+* Tim Howard -- BlitzMax lexer
+* Dustin Howett -- Logos lexer
+* Ivan Inozemtsev -- Fantom lexer
+* Hiroaki Itoh -- Shell console rewrite, Lexers for PowerShell session,
+  MSDOS session, BC, WDiff
+* Brian R. Jackson -- Tea lexer
+* Christian Jann -- ShellSession lexer
+* Dennis Kaarsemaker -- sources.list lexer
+* Dmitri Kabak -- Inferno Limbo lexer
+* Igor Kalnitsky -- vhdl lexer
+* Colin Kennedy -- USD lexer
+* Alexander Kit -- MaskJS lexer
+* Pekka Klärck -- Robot Framework lexer
+* Gerwin Klein -- Isabelle lexer
+* Eric Knibbe -- Lasso lexer
+* Stepan Koltsov -- Clay lexer
+* Adam Koprowski -- Opa lexer
+* Benjamin Kowarsch -- Modula-2 lexer
+* Domen Kožar -- Nix lexer
+* Oleh Krekel -- Emacs Lisp lexer
+* Alexander Kriegisch -- Kconfig and AspectJ lexers
+* Marek Kubica -- Scheme lexer
+* Jochen Kupperschmidt -- Markdown processor
+* Gerd Kurzbach -- Modelica lexer
+* Jon Larimer, Google Inc. -- Smali lexer
+* Olov Lassus -- Dart lexer
+* Matt Layman -- TAP lexer
+* Kristian Lyngstøl -- Varnish lexers
+* Sylvestre Ledru -- Scilab lexer
+* Chee Sing Lee -- Flatline lexer
+* Mark Lee -- Vala lexer
+* Valentin Lorentz -- C++ lexer improvements
+* Ben Mabey -- Gherkin lexer
+* Angus MacArthur -- QML lexer
+* Louis Mandel -- X10 lexer
+* Louis Marchand -- Eiffel lexer
+* Simone Margaritelli -- Hybris lexer
+* Kirk McDonald -- D lexer
+* Gordon McGregor -- SystemVerilog lexer
+* Stephen McKamey -- Duel/JBST lexer
+* Brian McKenna -- F# lexer
+* Charles McLaughlin -- Puppet lexer
+* Kurt McKee -- Tera Term macro lexer, PostgreSQL updates, MySQL overhaul
+* Lukas Meuser -- BBCode formatter, Lua lexer
+* Cat Miller -- Pig lexer
+* Paul Miller -- LiveScript lexer
+* Hong Minhee -- HTTP lexer
+* Michael Mior -- Awk lexer
+* Bruce Mitchener -- Dylan lexer rewrite
+* Reuben Morais -- SourcePawn lexer
+* Jon Morton -- Rust lexer
+* Paulo Moura -- Logtalk lexer
+* Mher Movsisyan -- DTD lexer
+* Dejan Muhamedagic -- Crmsh lexer
+* Ana Nelson -- Ragel, ANTLR, R console lexers
+* Kurt Neufeld -- Markdown lexer
+* Nam T. Nguyen -- Monokai style
+* Jesper Noehr -- HTML formatter "anchorlinenos"
+* Mike Nolta -- Julia lexer
+* Avery Nortonsmith -- Pointless lexer
+* Jonas Obrist -- BBCode lexer
+* Edward O'Callaghan -- Cryptol lexer
+* David Oliva -- Rebol lexer
+* Pat Pannuto -- nesC lexer
+* Jon Parise -- Protocol buffers and Thrift lexers
+* Benjamin Peterson -- Test suite refactoring
+* Ronny Pfannschmidt -- BBCode lexer
+* Dominik Picheta -- Nimrod lexer
+* Andrew Pinkham -- RTF Formatter Refactoring
+* Clément Prévost -- UrbiScript lexer
+* Tanner Prynn -- cmdline -x option and loading lexers from files
+* Oleh Prypin -- Crystal lexer (based on Ruby lexer)
+* Xidorn Quan -- Web IDL lexer
+* Elias Rabel -- Fortran fixed form lexer
+* raichoo -- Idris lexer
+* Daniel Ramirez -- GDScript lexer
+* Kashif Rasul -- CUDA lexer
+* Nathan Reed -- HLSL lexer
+* Justin Reidy -- MXML lexer
+* Norman Richards -- JSON lexer
+* Corey Richardson -- Rust lexer updates
+* Lubomir Rintel -- GoodData MAQL and CL lexers
+* Andre Roberge -- Tango style
+* Georg Rollinger -- HSAIL lexer
+* Michiel Roos -- TypoScript lexer
+* Konrad Rudolph -- LaTeX formatter enhancements
+* Mario Ruggier -- Evoque lexers
+* Miikka Salminen -- Lovelace style, Hexdump lexer, lexer enhancements
+* Stou Sandalski -- NumPy, FORTRAN, tcsh and XSLT lexers
+* Matteo Sasso -- Common Lisp lexer
+* Joe Schafer -- Ada lexer
+* Max Schillinger -- TiddlyWiki5 lexer
+* Ken Schutte -- Matlab lexers
+* René Schwaiger -- Rainbow Dash style
+* Sebastian Schweizer -- Whiley lexer
+* Tassilo Schweyer -- Io, MOOCode lexers
+* Pablo Seminario -- PromQL lexer
+* Ted Shaw -- AutoIt lexer
+* Joerg Sieker -- ABAP lexer
+* Robert Simmons -- Standard ML lexer
+* Kirill Simonov -- YAML lexer
+* Corbin Simpson -- Monte lexer
+* Alexander Smishlajev -- Visual FoxPro lexer
+* Steve Spigarelli -- XQuery lexer
+* Jerome St-Louis -- eC lexer
+* Camil Staps -- Clean and NuSMV lexers; Solarized style
+* James Strachan -- Kotlin lexer
+* Tom Stuart -- Treetop lexer
+* Colin Sullivan -- SuperCollider lexer
+* Ben Swift -- Extempore lexer
+* Edoardo Tenani -- Arduino lexer
+* Tiberius Teng -- default style overhaul
+* Jeremy Thurgood -- Erlang, Squid config lexers
+* Brian Tiffin -- OpenCOBOL lexer
+* Bob Tolbert -- Hy lexer
+* Matthias Trute -- Forth lexer
+* Erick Tryzelaar -- Felix lexer
+* Alexander Udalov -- Kotlin lexer improvements
+* Thomas Van Doren -- Chapel lexer
+* Daniele Varrazzo -- PostgreSQL lexers
+* Abe Voelker -- OpenEdge ABL lexer
+* Pepijn de Vos -- HTML formatter CTags support
+* Matthias Vallentin -- Bro lexer
+* Benoît Vinot -- AMPL lexer
+* Linh Vu Hong -- RSL lexer
+* Nathan Weizenbaum -- Haml and Sass lexers
+* Nathan Whetsell -- Csound lexers
+* Dietmar Winkler -- Modelica lexer
+* Nils Winter -- Smalltalk lexer
+* Davy Wybiral -- Clojure lexer
+* Whitney Young -- ObjectiveC lexer
+* Diego Zamboni -- CFengine3 lexer
+* Enrique Zamudio -- Ceylon lexer
+* Alex Zimin -- Nemerle lexer
+* Rob Zimmerman -- Kal lexer
+* Vincent Zurczak -- Roboconf lexer
+* Hubert Gruniaux -- C and C++ lexer improvements
+
+Many thanks for all contributions!
--- a/eric6/ThirdParty/Pygments/pygments/CHANGES	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/CHANGES	Tue Sep 15 19:09:05 2020 +0200
@@ -1,1457 +1,1520 @@
-Pygments changelog
-==================
-
-Since 2.5.0, issue numbers refer to the tracker at
-<https://github.com/pygments/pygments/issues>,
-pull request numbers to the requests at
-<https://github.com/pygments/pygments/pulls>.
-
-
-Version 2.6.1
--------------
-(released March 8, 2020)
-
-- This release fixes a packaging issue. No functional changes.
-
-Version 2.6
------------
-(released March 8, 2020)
-
-- Running Pygments on Python 2.x is no longer supported.
-  (The Python 2 lexer still exists.)
-
-- Added lexers:
-
-  * Linux kernel logs (PR#1310)
-  * LLVM MIR (PR#1361)
-  * MiniScript (PR#1397)
-  * Mosel (PR#1287, PR#1326)
-  * Parsing Expression Grammar (PR#1336)
-  * ReasonML (PR#1386)
-  * Ride (PR#1319, PR#1321)
-  * Sieve (PR#1257)
-  * USD (PR#1290)
-  * WebIDL (PR#1309)
-
-- Updated lexers:
-
-  * Apache2 (PR#1378)
-  * Chapel (PR#1357)
-  * CSound (PR#1383)
-  * D (PR#1375, PR#1362)
-  * Idris (PR#1360)
-  * Perl6/Raku lexer (PR#1344)
-  * Python3 (PR#1382, PR#1385)
-  * Rust: Updated lexer to cover more builtins (mostly macros) and miscellaneous
-  new syntax (PR#1320)
-  * SQL: Add temporal support keywords (PR#1402)
-
-- The 256-color/true-color terminal formatters now support the italic attribute
-  in styles (PR#1288)
-- Support HTTP 2/3 header (PR#1308)
-- Support missing reason in HTTP header (PR#1322)
-- Boogie/Silver: support line continuations and triggers, move contract keywords
-  to separate category (PR#1299)
-- GAS: support C-style comments (PR#1291)
-- Fix names in S lexer (PR#1330, PR#1333)
-- Fix numeric literals in Ada (PR#1334)
-- Recognize ``.mjs`` files as Javascript (PR#1392)
-- Recognize ``.eex`` files as Elixir (PR#1387)
-- Fix ``re.MULTILINE`` usage (PR#1388)
-- Recognize ``pipenv`` and ``poetry`` dependency & lock files (PR#1376)
-- Improve font search on Windows (#1247)
-- Remove unused script block (#1401)
-
-Version 2.5.2
--------------
-(released November 29, 2019)
-
-- Fix incompatibility with some setuptools versions (PR#1316)
-
-- Fix lexing of ReST field lists (PR#1279)
-- Fix lexing of Matlab keywords as field names (PR#1282)
-- Recognize double-quoted strings in Matlab (PR#1278)
-- Avoid slow backtracking in Vim lexer (PR#1312)
-- Fix Scala highlighting of types (PR#1315)
-- Highlight field lists more consistently in ReST (PR#1279)
-- Fix highlighting Matlab keywords in field names (PR#1282)
-- Recognize Matlab double quoted strings (PR#1278)
-- Add some Terraform keywords
-- Update Modelica lexer to 3.4
-- Update Crystal examples
-
-
-Version 2.5.1
--------------
-(released November 26, 2019)
-
-- This release fixes a packaging issue. No functional changes.
-
-
-Version 2.5.0
--------------
-(released November 26, 2019)
-
-- Added lexers:
-
-  * Email (PR#1246)
-  * Erlang, Elixir shells (PR#823, #1521)
-  * Notmuch (PR#1264)
-  * `Scdoc <https://git.sr.ht/~sircmpwn/scdoc>`_ (PR#1268)
-  * `Solidity <https://solidity.readthedocs.io/>`_ (#1214)
-  * `Zeek <https://www.zeek.org>`_ (new name for Bro) (PR#1269)
-  * `Zig <https://ziglang.org/>`_ (PR#820)
-
-- Updated lexers:
-
-  * Apache2 Configuration (PR#1251)
-  * Bash sessions (#1253)
-  * CSound (PR#1250)
-  * Dart
-  * Dockerfile
-  * Emacs Lisp
-  * Handlebars (PR#773)
-  * Java (#1101, #987)
-  * Logtalk (PR#1261)
-  * Matlab (PR#1271)
-  * Praat (PR#1277)
-  * Python3 (PR#1255, PR#1400)
-  * Ruby
-  * YAML (#1528)
-  * Velocity
-
-- Added styles:
-
-  * Inkpot (PR#1276)
-
-- The ``PythonLexer`` class is now an alias for the former ``Python3Lexer``.
-  The old ``PythonLexer`` is available as ``Python2Lexer``.  Same change has
-  been done for the ``PythonTracebackLexer``.  The ``python3`` option for
-  the ``PythonConsoleLexer`` is now true by default.
-
-- Bump ``NasmLexer`` priority over ``TasmLexer`` for ``.asm`` files
-  (fixes #1326)
-- Default font in the ``ImageFormatter`` has been updated (#928, PR#1245)
-- Test suite switched to py.test, removed nose dependency (#1490)
-- Reduce ``TeraTerm`` lexer score -- it used to match nearly all languages
-  (#1256)
-- Treat ``Skylark``/``Starlark`` files as Python files (PR#1259)
-- Image formatter: actually respect ``line_number_separator`` option
-
-- Add LICENSE file to wheel builds
-- Agda: fix lambda highlighting
-- Dart: support ``@`` annotations
-- Dockerfile: accept ``FROM ... AS`` syntax
-- Emacs Lisp: add more string functions
-- GAS: accept registers in directive arguments
-- Java: make structural punctuation (braces, parens, colon, comma) ``Punctuation``, not ``Operator`` (#987)
-- Java: support ``var`` contextual keyword (#1101)
-- Matlab: Fix recognition of ``function`` keyword (PR#1271)
-- Python: recognize ``.jy`` filenames (#976)
-- Python: recognize ``f`` string prefix (#1156)
-- Ruby: support squiggly heredocs
-- Shell sessions: recognize Virtualenv prompt (PR#1266)
-- Velocity: support silent reference syntax
-
-
-Version 2.4.2
--------------
-(released May 28, 2019)
-
-- Fix encoding error when guessing lexer with given ``encoding`` option
-  (#1438)
-
-
-Version 2.4.1
--------------
-(released May 24, 2019)
-
-- Updated lexers:
-
-  * Coq (#1430)
-  * MSDOS Session (PR#734)
-  * NASM (#1517)
-  * Objective-C (PR#813, #1508)
-  * Prolog (#1511)
-  * TypeScript (#1515)
-
-- Support CSS variables in stylesheets (PR#814, #1356)
-- Fix F# lexer name (PR#709)
-- Fix ``TerminalFormatter`` using bold for bright text (#1480)
-
-
-Version 2.4.0
--------------
-(released May 8, 2019)
-
-- Added lexers:
-
-  * Augeas (PR#807)
-  * BBC Basic (PR#806)
-  * Boa (PR#756)
-  * Charm++ CI (PR#788)
-  * DASM16 (PR#807)
-  * FloScript (PR#750)
-  * FreeFem++ (PR#785)
-  * Hspec (PR#790)
-  * Pony (PR#627)
-  * SGF (PR#780)
-  * Slash (PR#807)
-  * Slurm (PR#760)
-  * Tera Term Language (PR#749)
-  * TOML (PR#807)
-  * Unicon (PR#731)
-  * VBScript (PR#673)
-
-- Updated lexers:
-
-  * Apache2 (PR#766)
-  * Cypher (PR#746)
-  * LLVM (PR#792)
-  * Makefiles (PR#766)
-  * PHP (#1482)
-  * Rust
-  * SQL (PR#672)
-  * Stan (PR#774)
-  * Stata (PR#800)
-  * Terraform (PR#787)
-  * YAML
-
-- Add solarized style (PR#708)
-- Add support for Markdown reference-style links (PR#753)
-- Add license information to generated HTML/CSS files (#1496)
-- Change ANSI color names (PR#777)
-- Fix catastrophic backtracking in the bash lexer (#1494)
-- Fix documentation failing to build using Sphinx 2.0 (#1501)
-- Fix incorrect links in the Lisp and R lexer documentation (PR#775)
-- Fix rare unicode errors on Python 2.7 (PR#798, #1492)
-- Fix lexers popping from an empty stack (#1506)
-- TypoScript uses ``.typoscript`` now (#1498)
-- Updated Trove classifiers and ``pip`` requirements (PR#799)
-
-
-
-Version 2.3.1
--------------
-(released Dec 16, 2018)
-
-- Updated lexers:
-
-  * ASM (PR#784)
-  * Chapel (PR#735)
-  * Clean (PR#621)
-  * CSound (PR#684)
-  * Elm (PR#744)
-  * Fortran (PR#747)
-  * GLSL (PR#740)
-  * Haskell (PR#745)
-  * Hy (PR#754)
-  * Igor Pro (PR#764)
-  * PowerShell (PR#705)
-  * Python (PR#720, #1299, PR#715)
-  * SLexer (PR#680)
-  * YAML (PR#762, PR#724)
-
-- Fix invalid string escape sequences
-- Fix `FutureWarning` introduced by regex changes in Python 3.7
-
-
-Version 2.3.0
--------------
-(released Nov 25, 2018)
-
-- Added lexers:
-
-  * Fennel (PR#783)
-  * HLSL (PR#675)
-
-- Updated lexers:
-
-  * Dockerfile (PR#714)
-
-- Minimum Python versions changed to 2.7 and 3.5
-- Added support for Python 3.7 generator changes (PR#772)
-- Fix incorrect token type in SCSS for single-quote strings (#1322)
-- Use `terminal256` formatter if `TERM` contains `256` (PR#666)
-- Fix incorrect handling of GitHub style fences in Markdown (PR#741, #1389)
-- Fix `%a` not being highlighted in Python3 strings (PR#727)
-
-
-Version 2.2.0
--------------
-(released Jan 22, 2017)
-
-- Added lexers:
-
-  * AMPL
-  * TypoScript (#1173)
-  * Varnish config (PR#554)
-  * Clean (PR#503)
-  * WDiff (PR#513)
-  * Flatline (PR#551)
-  * Silver (PR#537)
-  * HSAIL (PR#518)
-  * JSGF (PR#546)
-  * NCAR command language (PR#536)
-  * Extempore (PR#530)
-  * Cap'n Proto (PR#595)
-  * Whiley (PR#573)
-  * Monte (PR#592)
-  * Crystal (PR#576)
-  * Snowball (PR#589)
-  * CapDL (PR#579)
-  * NuSMV (PR#564)
-  * SAS, Stata (PR#593)
-
-- Added the ability to load lexer and formatter classes directly from files
-  with the `-x` command line option and the `lexers.load_lexer_from_file()`
-  and `formatters.load_formatter_from_file()` functions. (PR#559)
-
-- Added `lexers.find_lexer_class_by_name()`. (#1203)
-
-- Added new token types and lexing for magic methods and variables in Python
-  and PHP.
-
-- Added a new token type for string affixes and lexing for them in Python, C++
-  and Postgresql lexers.
-
-- Added a new token type for heredoc (and similar) string delimiters and
-  lexing for them in C++, Perl, PHP, Postgresql and Ruby lexers.
-
-- Styles can now define colors with ANSI colors for use in the 256-color
-  terminal formatter. (PR#531)
-
-- Improved the CSS lexer. (#1083, #1130)
-
-- Added "Rainbow Dash" style. (PR#623)
-
-- Delay loading `pkg_resources`, which takes a long while to import. (PR#690)
-
-
-Version 2.1.3
--------------
-(released Mar 2, 2016)
-
-- Fixed regression in Bash lexer (PR#563)
-
-
-Version 2.1.2
--------------
-(released Feb 29, 2016)
-
-- Fixed Python 3 regression in image formatter (#1215)
-- Fixed regression in Bash lexer (PR#562)
-
-
-Version 2.1.1
--------------
-(relased Feb 14, 2016)
-
-- Fixed Jython compatibility (#1205)
-- Fixed HTML formatter output with leading empty lines (#1111)
-- Added a mapping table for LaTeX encodings and added utf8 (#1152)
-- Fixed image formatter font searching on Macs (#1188)
-- Fixed deepcopy-ing of Token instances (#1168)
-- Fixed Julia string interpolation (#1170)
-- Fixed statefulness of HttpLexer between get_tokens calls
-- Many smaller fixes to various lexers
-
-
-Version 2.1
------------
-(released Jan 17, 2016)
-
-- Added lexers:
-
-  * Emacs Lisp (PR#431)
-  * Arduino (PR#442)
-  * Modula-2 with multi-dialect support (#1090)
-  * Fortran fixed format (PR#213)
-  * Archetype Definition language (PR#483)
-  * Terraform (PR#432)
-  * Jcl, Easytrieve (PR#208)
-  * ParaSail (PR#381)
-  * Boogie (PR#420)
-  * Turtle (PR#425)
-  * Fish Shell (PR#422)
-  * Roboconf (PR#449)
-  * Test Anything Protocol (PR#428)
-  * Shen (PR#385)
-  * Component Pascal (PR#437)
-  * SuperCollider (PR#472)
-  * Shell consoles (Tcsh, PowerShell, MSDOS) (PR#479)
-  * Elm and J (PR#452)
-  * Crmsh (PR#440)
-  * Praat (PR#492)
-  * CSound (PR#494)
-  * Ezhil (PR#443)
-  * Thrift (PR#469)
-  * QVT Operational (PR#204)
-  * Hexdump (PR#508)
-  * CAmkES Configuration (PR#462)
-
-- Added styles:
-
-  * Lovelace (PR#456)
-  * Algol and Algol-nu (#1090)
-
-- Added formatters:
-
-  * IRC (PR#458)
-  * True color (24-bit) terminal ANSI sequences (#1142)
-    (formatter alias: "16m")
-
-- New "filename" option for HTML formatter (PR#527).
-
-- Improved performance of the HTML formatter for long lines (PR#504).
-
-- Updated autopygmentize script (PR#445).
-
-- Fixed style inheritance for non-standard token types in HTML output.
-
-- Added support for async/await to Python 3 lexer.
-
-- Rewrote linenos option for TerminalFormatter (it's better, but slightly
-  different output than before) (#1147).
-
-- Javascript lexer now supports most of ES6 (#1100).
-
-- Cocoa builtins updated for iOS 8.1 (PR#433).
-
-- Combined BashSessionLexer and ShellSessionLexer, new version should support
-  the prompt styles of either.
-
-- Added option to pygmentize to show a full traceback on exceptions.
-
-- Fixed incomplete output on Windows and Python 3 (e.g. when using iPython
-  Notebook) (#1153).
-
-- Allowed more traceback styles in Python console lexer (PR#253).
-
-- Added decorators to TypeScript (PR#509).
-
-- Fix highlighting of certain IRC logs formats (#1076).
-
-
-Version 2.0.2
--------------
-(released Jan 20, 2015)
-
-- Fix Python tracebacks getting duplicated in the console lexer (#1068).
-
-- Backquote-delimited identifiers are now recognized in F# (#1062).
-
-
-Version 2.0.1
--------------
-(released Nov 10, 2014)
-
-- Fix an encoding issue when using ``pygmentize`` with the ``-o`` option.
-
-
-Version 2.0
------------
-(released Nov 9, 2014)
-
-- Default lexer encoding is now "guess", i.e. UTF-8 / Locale / Latin1 is
-  tried in that order.
-
-- Major update to Swift lexer (PR#410).
-
-- Multiple fixes to lexer guessing in conflicting cases:
-
-  * recognize HTML5 by doctype
-  * recognize XML by XML declaration
-  * don't recognize C/C++ as SystemVerilog
-
-- Simplified regexes and builtin lists.
-
-
-Version 2.0rc1
---------------
-(released Oct 16, 2014)
-
-- Dropped Python 2.4 and 2.5 compatibility.  This is in favor of single-source
-  compatibility between Python 2.6, 2.7 and 3.3+.
-
-- New website and documentation based on Sphinx (finally!)
-
-- Lexers added:
-
-  * APL (#969)
-  * Agda and Literate Agda (PR#203)
-  * Alloy (PR#355)
-  * AmbientTalk
-  * BlitzBasic (PR#197)
-  * ChaiScript (PR#24)
-  * Chapel (PR#256)
-  * Cirru (PR#275)
-  * Clay (PR#184)
-  * ColdFusion CFC (PR#283)
-  * Cryptol and Literate Cryptol (PR#344)
-  * Cypher (PR#257)
-  * Docker config files
-  * EBNF (PR#193)
-  * Eiffel (PR#273)
-  * GAP (PR#311)
-  * Golo (PR#309)
-  * Handlebars (PR#186)
-  * Hy (PR#238)
-  * Idris and Literate Idris (PR#210)
-  * Igor Pro (PR#172)
-  * Inform 6/7 (PR#281)
-  * Intel objdump (PR#279)
-  * Isabelle (PR#386)
-  * Jasmin (PR#349)
-  * JSON-LD (PR#289)
-  * Kal (PR#233)
-  * Lean (PR#399)
-  * LSL (PR#296)
-  * Limbo (PR#291)
-  * Liquid (#977)
-  * MQL (PR#285)
-  * MaskJS (PR#280)
-  * Mozilla preprocessors
-  * Mathematica (PR#245)
-  * NesC (PR#166)
-  * Nit (PR#375)
-  * Nix (PR#267)
-  * Pan
-  * Pawn (PR#211)
-  * Perl 6 (PR#181)
-  * Pig (PR#304)
-  * Pike (PR#237)
-  * QBasic (PR#182)
-  * Red (PR#341)
-  * ResourceBundle (#1038)
-  * Rexx (PR#199)
-  * Rql (PR#251)
-  * Rsl
-  * SPARQL (PR#78)
-  * Slim (PR#366)
-  * Swift (PR#371)
-  * Swig (PR#168)
-  * TADS 3 (PR#407)
-  * Todo.txt todo lists
-  * Twig (PR#404)
-
-- Added a helper to "optimize" regular expressions that match one of many
-  literal words; this can save 20% and more lexing time with lexers that
-  highlight many keywords or builtins.
-
-- New styles: "xcode" and "igor", similar to the default highlighting of
-  the respective IDEs.
-
-- The command-line "pygmentize" tool now tries a little harder to find the
-  correct encoding for files and the terminal (#979).
-
-- Added "inencoding" option for lexers to override "encoding" analogous
-  to "outencoding" (#800).
-
-- Added line-by-line "streaming" mode for pygmentize with the "-s" option.
-  (PR#165)  Only fully works for lexers that have no constructs spanning
-  lines!
-
-- Added an "envname" option to the LaTeX formatter to select a replacement
-  verbatim environment (PR#235).
-
-- Updated the Makefile lexer to yield a little more useful highlighting.
-
-- Lexer aliases passed to ``get_lexer_by_name()`` are now case-insensitive.
-
-- File name matching in lexers and formatters will now use a regex cache
-  for speed (PR#205).
-
-- Pygments will now recognize "vim" modelines when guessing the lexer for
-  a file based on content (PR#118).
-
-- Major restructure of the ``pygments.lexers`` module namespace.  There are now
-  many more modules with less lexers per module.  Old modules are still around
-  and re-export the lexers they previously contained.
-
-- The NameHighlightFilter now works with any Name.* token type (#790).
-
-- Python 3 lexer: add new exceptions from PEP 3151.
-
-- Opa lexer: add new keywords (PR#170).
-
-- Julia lexer: add keywords and underscore-separated number
-  literals (PR#176).
-
-- Lasso lexer: fix method highlighting, update builtins. Fix
-  guessing so that plain XML isn't always taken as Lasso (PR#163).
-
-- Objective C/C++ lexers: allow "@" prefixing any expression (#871).
-
-- Ruby lexer: fix lexing of Name::Space tokens (#860) and of symbols
-  in hashes (#873).
-
-- Stan lexer: update for version 2.4.0 of the language (PR#162, PR#255, PR#377).
-
-- JavaScript lexer: add the "yield" keyword (PR#196).
-
-- HTTP lexer: support for PATCH method (PR#190).
-
-- Koka lexer: update to newest language spec (PR#201).
-
-- Haxe lexer: rewrite and support for Haxe 3 (PR#174).
-
-- Prolog lexer: add different kinds of numeric literals (#864).
-
-- F# lexer: rewrite with newest spec for F# 3.0 (#842), fix a bug with
-  dotted chains (#948).
-
-- Kotlin lexer: general update (PR#271).
-
-- Rebol lexer: fix comment detection and analyse_text (PR#261).
-
-- LLVM lexer: update keywords to v3.4 (PR#258).
-
-- PHP lexer: add new keywords and binary literals (PR#222).
-
-- external/markdown-processor.py updated to newest python-markdown (PR#221).
-
-- CSS lexer: some highlighting order fixes (PR#231).
-
-- Ceylon lexer: fix parsing of nested multiline comments (#915).
-
-- C family lexers: fix parsing of indented preprocessor directives (#944).
-
-- Rust lexer: update to 0.9 language version (PR#270, PR#388).
-
-- Elixir lexer: update to 0.15 language version (PR#392).
-
-- Fix swallowing incomplete tracebacks in Python console lexer (#874).
-
-
-Version 1.6
------------
-(released Feb 3, 2013)
-
-- Lexers added:
-
-  * Dylan console (PR#149)
-  * Logos (PR#150)
-  * Shell sessions (PR#158)
-
-- Fix guessed lexers not receiving lexer options (#838).
-
-- Fix unquoted HTML attribute lexing in Opa (#841).
-
-- Fixes to the Dart lexer (PR#160).
-
-
-Version 1.6rc1
---------------
-(released Jan 9, 2013)
-
-- Lexers added:
-
-  * AspectJ (PR#90)
-  * AutoIt (PR#122)
-  * BUGS-like languages (PR#89)
-  * Ceylon (PR#86)
-  * Croc (new name for MiniD)
-  * CUDA (PR#75)
-  * Dg (PR#116)
-  * IDL (PR#115)
-  * Jags (PR#89)
-  * Julia (PR#61)
-  * Kconfig (#711)
-  * Lasso (PR#95, PR#113)
-  * LiveScript (PR#84)
-  * Monkey (PR#117)
-  * Mscgen (PR#80)
-  * NSIS scripts (PR#136)
-  * OpenCOBOL (PR#72)
-  * QML (PR#123)
-  * Puppet (PR#133)
-  * Racket (PR#94)
-  * Rdoc (PR#99)
-  * Robot Framework (PR#137)
-  * RPM spec files (PR#124)
-  * Rust (PR#67)
-  * Smali (Dalvik assembly)
-  * SourcePawn (PR#39)
-  * Stan (PR#89)
-  * Treetop (PR#125)
-  * TypeScript (PR#114)
-  * VGL (PR#12)
-  * Visual FoxPro (#762)
-  * Windows Registry (#819)
-  * Xtend (PR#68)
-
-- The HTML formatter now supports linking to tags using CTags files, when the
-  python-ctags package is installed (PR#87).
-
-- The HTML formatter now has a "linespans" option that wraps every line in a
-  <span> tag with a specific id (PR#82).
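A minimal sketch of the ``linespans`` option; the id prefix ``"line"`` is our choice, not mandated by the formatter:

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Each source line is wrapped in <span id="line-1">, <span id="line-2">, ...
html = highlight('x = 1\ny = 2\n', PythonLexer(),
                 HtmlFormatter(linespans='line'))
```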
-
-- When deriving a lexer from another lexer with token definitions, definitions
-  for states not in the child lexer are now inherited.  If you override a state
-  in the child lexer, an "inherit" keyword has been added to insert the base
-  state at that position (PR#141).
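A sketch of the ``inherit`` keyword described above, using ``pygments.lexer.inherit``; the two toy lexers are invented for illustration:

```python
from pygments.lexer import RegexLexer, inherit
from pygments.token import Keyword, Text

class BaseLexer(RegexLexer):
    tokens = {
        'root': [
            (r'\bbase\b', Keyword),
            (r'\s+', Text),
            (r'\w+', Text),
        ],
    }

class ChildLexer(BaseLexer):
    # Overriding 'root' would normally discard the base definitions;
    # `inherit` splices them back in at this position.
    tokens = {
        'root': [
            (r'\bchild\b', Keyword.Pseudo),
            inherit,
        ],
    }

toks = list(ChildLexer().get_tokens('base child'))
```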
-
-- The C family lexers now inherit token definitions from a common base class,
-  removing code duplication (PR#141).
-
-- Use "colorama" on Windows for console color output (PR#142).
-
-- Fix Template Haskell highlighting (PR#63).
-
-- Fix some S/R lexer errors (PR#91).
-
-- Fix a bug in the Prolog lexer with names that start with 'is' (#810).
-
-- Rewrite Dylan lexer, add Dylan LID lexer (PR#147).
-
-- Add a Java quickstart document (PR#146).
-
-- Add an "external/autopygmentize" file that can be used as .lessfilter (#802).
-
-
-Version 1.5
------------
-(codename Zeitdilatation, released Mar 10, 2012)
-
-- Lexers added:
-
-  * Awk (#630)
-  * Fancy (#633)
-  * PyPy Log
-  * eC
-  * Nimrod
-  * Nemerle (#667)
-  * F# (#353)
-  * Groovy (#501)
-  * PostgreSQL (#660)
-  * DTD
-  * Gosu (#634)
-  * Octave (PR#22)
-  * Standard ML (PR#14)
-  * CFengine3 (#601)
-  * Opa (PR#37)
-  * HTTP sessions (PR#42)
-  * JSON (PR#31)
-  * SNOBOL (PR#30)
-  * MoonScript (PR#43)
-  * ECL (PR#29)
-  * Urbiscript (PR#17)
-  * OpenEdge ABL (PR#27)
-  * SystemVerilog (PR#35)
-  * Coq (#734)
-  * PowerShell (#654)
-  * Dart (#715)
-  * Fantom (PR#36)
-  * Bro (PR#5)
-  * NewLISP (PR#26)
-  * VHDL (PR#45)
-  * Scilab (#740)
-  * Elixir (PR#57)
-  * Tea (PR#56)
-  * Kotlin (PR#58)
-
-- Fix Python 3 terminal highlighting with pygmentize (#691).
-
-- In the LaTeX formatter, escape special &, < and > chars (#648).
-
-- In the LaTeX formatter, fix display problems for styles with token
-  background colors (#670).
-
-- Enhancements to the Squid conf lexer (#664).
-
-- Several fixes to the reStructuredText lexer (#636).
-
-- Recognize methods in the ObjC lexer (#638).
-
-- Fix Lua "class" highlighting: it does not have classes (#665).
-
-- Fix degenerate regex in Scala lexer (#671) and highlighting bugs (#713, #708).
-
-- Fix number pattern order in Ocaml lexer (#647).
-
-- Fix generic type highlighting in ActionScript 3 (#666).
-
-- Fixes to the Clojure lexer (PR#9).
-
-- Fix degenerate regex in Nemerle lexer (#706).
-
-- Fix infinite looping in CoffeeScript lexer (#729).
-
-- Fix crashes and analysis with ObjectiveC lexer (#693, #696).
-
-- Add some Fortran 2003 keywords.
-
-- Fix Boo string regexes (#679).
-
-- Add "rrt" style (#727).
-
-- Fix infinite looping in Darcs Patch lexer.
-
-- Lots of misc fixes to character-eating bugs and ordering problems in many
-  different lexers.
-
-
-Version 1.4
------------
-(codename Unschärfe, released Jan 03, 2011)
-
-- Lexers added:
-
-  * Factor (#520)
-  * PostScript (#486)
-  * Verilog (#491)
-  * BlitzMax Basic (#478)
-  * Ioke (#465)
-  * Java properties, split out of the INI lexer (#445)
-  * Scss (#509)
-  * Duel/JBST
-  * XQuery (#617)
-  * Mason (#615)
-  * GoodData (#609)
-  * SSP (#473)
-  * Autohotkey (#417)
-  * Google Protocol Buffers
-  * Hybris (#506)
-
-- Do not fail in analyse_text methods (#618).
-
-- Performance improvements in the HTML formatter (#523).
-
-- With the ``noclasses`` option in the HTML formatter, some styles
-  present in the stylesheet were not added as inline styles.
-
-- Four fixes to the Lua lexer (#480, #481, #482, #497).
-
-- More context-sensitive Gherkin lexer with support for more i18n translations.
-
-- Support new OO keywords in Matlab lexer (#521).
-
-- Small fix in the CoffeeScript lexer (#519).
-
-- A bugfix for backslashes in OCaml strings (#499).
-
-- Fix unicode/raw docstrings in the Python lexer (#489).
-
-- Allow PIL to work without PIL.pth (#502).
-
-- Allow seconds as a unit in CSS (#496).
-
-- Support ``application/javascript`` as a JavaScript mime type (#504).
-
-- Support `Offload <https://offload.codeplay.com/>`_ C++ Extensions as
-  keywords in the C++ lexer (#484).
-
-- Escape more characters in LaTeX output (#505).
-
-- Update Haml/Sass lexers to version 3 (#509).
-
-- Small PHP lexer string escaping fix (#515).
-
-- Support comments before preprocessor directives, and unsigned/
-  long long literals in C/C++ (#613, #616).
-
-- Support line continuations in the INI lexer (#494).
-
-- Fix lexing of Dylan string and char literals (#628).
-
-- Fix class/procedure name highlighting in VB.NET lexer (#624).
-
-
-Version 1.3.1
--------------
-(bugfix release, released Mar 05, 2010)
-
-- The ``pygmentize`` script was missing from the distribution.
-
-
-Version 1.3
------------
-(codename Schneeglöckchen, released Mar 01, 2010)
-
-- Added the ``ensurenl`` lexer option, which can be used to suppress the
-  automatic addition of a newline to the lexer input.
-
-- Lexers added:
-
-  * Ada
-  * Coldfusion
-  * Modula-2
-  * Haxe
-  * R console
-  * Objective-J
-  * Haml and Sass
-  * CoffeeScript
-
-- Enhanced reStructuredText highlighting.
-
-- Added support for PHP 5.3 namespaces in the PHP lexer.
-
-- Added a bash completion script for `pygmentize`, to the external/
-  directory (#466).
-
-- Fixed a bug in `do_insertions()` used for multi-lexer languages.
-
-- Fixed a Ruby regex highlighting bug (#476).
-
-- Fixed regex highlighting bugs in Perl lexer (#258).
-
-- Add small enhancements to the C lexer (#467) and Bash lexer (#469).
-
-- Small fixes for the Tcl, Debian control file, Nginx config,
-  Smalltalk, Objective-C, Clojure, Lua lexers.
-
-- Gherkin lexer: Fixed single apostrophe bug and added new i18n keywords.
-
-
-Version 1.2.2
--------------
-(bugfix release, released Jan 02, 2010)
-
-* Removed a backwards incompatibility in the LaTeX formatter that caused
-  Sphinx to produce invalid commands when writing LaTeX output (#463).
-
-* Fixed a forever-backtracking regex in the BashLexer (#462).
-
-
-Version 1.2.1
--------------
-(bugfix release, released Jan 02, 2010)
-
-* Fixed mishandling of an ellipsis in place of the frames in a Python
-  console traceback, resulting in clobbered output.
-
-
-Version 1.2
------------
-(codename Neujahr, released Jan 01, 2010)
-
-- Dropped Python 2.3 compatibility.
-
-- Lexers added:
-
-  * Asymptote
-  * Go
-  * Gherkin (Cucumber)
-  * CMake
-  * Ooc
-  * Coldfusion
-  * Haxe
-  * R console
-
-- Added options for rendering LaTeX in source code comments in the
-  LaTeX formatter (#461).
-
-- Updated the Logtalk lexer.
-
-- Added `line_number_start` option to image formatter (#456).
-
-- Added `hl_lines` and `hl_color` options to image formatter (#457).
-
-- Fixed the HtmlFormatter's handling of noclasses=True to not output any
-  classes (#427).
-
-- Added the Monokai style (#453).
-
-- Fixed LLVM lexer identifier syntax and added new keywords (#442).
-
-- Fixed the PythonTracebackLexer to handle non-traceback data in header or
-  trailer, and support more partial tracebacks that start on line 2 (#437).
-
-- Fixed the CLexer to not highlight ternary statements as labels.
-
-- Fixed lexing of some Ruby quoting peculiarities (#460).
-
-- A few ASM lexer fixes (#450).
-
-
-Version 1.1.1
--------------
-(bugfix release, released Sep 15, 2009)
-
-- Fixed the BBCode lexer (#435).
-
-- Added support for new Jinja2 keywords.
-
-- Fixed test suite failures.
-
-- Added Gentoo-specific suffixes to Bash lexer.
-
-
-Version 1.1
------------
-(codename Brillouin, released Sep 11, 2009)
-
-- Ported Pygments to Python 3.  This needed a few changes in the way
-  encodings are handled; they may affect corner cases when used with
-  Python 2 as well.
-
-- Lexers added:
-
-  * Antlr/Ragel, thanks to Ana Nelson
-  * (Ba)sh shell
-  * Erlang shell
-  * GLSL
-  * Prolog
-  * Evoque
-  * Modelica
-  * Rebol
-  * MXML
-  * Cython
-  * ABAP
-  * ASP.net (VB/C#)
-  * Vala
-  * Newspeak
-
-- Fixed the LaTeX formatter's output so that output generated for one style
-  can be used with the style definitions of another (#384).
-
-- Added "anchorlinenos" and "noclobber_cssfile" (#396) options to HTML
-  formatter.
-
-- Support multiline strings in Lua lexer.
-
-- Rewrite of the JavaScript lexer by Pumbaa80 to better support regular
-  expression literals (#403).
-
-- When pygmentize is asked to highlight a file for which multiple lexers
-  match the filename, use the analyse_text guessing engine to determine the
-  winner (#355).
-
-- Fixed minor bugs in the JavaScript lexer (#383), the Matlab lexer (#378),
-  the Scala lexer (#392), the INI lexer (#391), the Clojure lexer (#387)
-  and the AS3 lexer (#389).
-
-- Fixed three Perl heredoc lexing bugs (#379, #400, #422).
-
-- Fixed a bug in the image formatter which misdetected lines (#380).
-
-- Fixed bugs lexing extended Ruby strings and regexes.
-
-- Fixed a bug when lexing git diffs.
-
-- Fixed a bug lexing the empty commit in the PHP lexer (#405).
-
-- Fixed a bug causing Python numbers to be mishighlighted as floats (#397).
-
-- Fixed a bug when backslashes are used in odd locations in Python (#395).
-
-- Fixed various bugs in Matlab and S-Plus lexers, thanks to Winston Chang (#410,
-  #411, #413, #414) and fmarc (#419).
-
-- Fixed a bug in Haskell single-line comment detection (#426).
-
-- Added new-style reStructuredText directive for docutils 0.5+ (#428).
-
-
-Version 1.0
------------
-(codename Dreiundzwanzig, released Nov 23, 2008)
-
-- Don't use join(splitlines()) when converting newlines to ``\n``,
-  because that doesn't keep all newlines at the end when the
-  ``stripnl`` lexer option is False.
-
-- Added ``-N`` option to command-line interface to get a lexer name
-  for a given filename.
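The rough Python-level equivalent of ``pygmentize -N`` is a filename lookup; ``'setup.py'`` is just an example filename:

```python
from pygments.lexers import get_lexer_for_filename

# Maps a filename to the lexer that would be used for it.
lexer = get_lexer_for_filename('setup.py')
```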
-
-- Added Tango style, written by Andre Roberge for the Crunchy project.
-
-- Added Python3TracebackLexer and ``python3`` option to
-  PythonConsoleLexer.
-
-- Fixed a few bugs in the Haskell lexer.
-
-- Fixed PythonTracebackLexer to be able to recognize SyntaxError and
-  KeyboardInterrupt (#360).
-
-- Provide one formatter class per image format, so that surprises like::
-
-    pygmentize -f gif -o foo.gif foo.py
-
-  creating a PNG file are avoided.
-
-- Actually use the `font_size` option of the image formatter.
-
-- Fixed the numpy lexer so that it no longer matches `*.py` files.
-
-- Fixed HTML formatter so that text options can be Unicode
-  strings (#371).
-
-- Unified Diff lexer supports the "udiff" alias now.
-
-- Fixed a few issues in Scala lexer (#367).
-
-- RubyConsoleLexer now supports simple prompt mode (#363).
-
-- JavascriptLexer is smarter about what constitutes a regex (#356).
-
-- Add an AppleScript lexer, thanks to Andreas Amann (#330).
-
-- Make the codetags more strict about matching words (#368).
-
-- NginxConfLexer is a little more accurate on mimetypes and
-  variables (#370).
-
-
-Version 0.11.1
---------------
-(released Aug 24, 2008)
-
-- Fixed a Jython compatibility issue in pygments.unistring (#358).
-
-
-Version 0.11
-------------
-(codename Straußenei, released Aug 23, 2008)
-
-Many thanks go to Tim Hatch for writing or integrating most of the bug
-fixes and new features.
-
-- Lexers added:
-
-  * Nasm-style assembly language, thanks to delroth
-  * YAML, thanks to Kirill Simonov
-  * ActionScript 3, thanks to Pierre Bourdon
-  * Cheetah/Spitfire templates, thanks to Matt Good
-  * Lighttpd config files
-  * Nginx config files
-  * Gnuplot plotting scripts
-  * Clojure
-  * POV-Ray scene files
-  * Sqlite3 interactive console sessions
-  * Scala source files, thanks to Krzysiek Goj
-
-- Lexers improved:
-
-  * C lexer highlights standard library functions now and supports C99
-    types.
-  * Bash lexer now correctly highlights heredocs without preceding
-    whitespace.
-  * Vim lexer now highlights hex colors properly and knows a couple
-    more keywords.
-  * Irc logs lexer now handles xchat's default time format (#340) and
-    correctly highlights lines ending in ``>``.
-  * Support more delimiters for perl regular expressions (#258).
-  * ObjectiveC lexer now supports 2.0 features.
-
-- Added "Visual Studio" style.
-
-- Updated markdown processor to Markdown 1.7.
-
-- Support roman/sans/mono style defs and use them in the LaTeX
-  formatter.
-
-- The RawTokenFormatter is no longer registered to ``*.raw`` and it's
-  documented that tokenization with this lexer may raise exceptions.
-
-- New option ``hl_lines`` to HTML formatter, to highlight certain
-  lines.
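A small sketch of ``hl_lines``; the two-line snippet is invented for illustration:

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Line 2 is wrapped in a <span class="hll"> by the formatter.
html = highlight('a = 1\nb = 2\n', PythonLexer(), HtmlFormatter(hl_lines=[2]))
```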
-
-- New option ``prestyles`` to HTML formatter.
-
-- New option *-g* to pygmentize, to allow lexer guessing based on
-  filetext (can be slowish, so file extensions are still checked
-  first).
-
-- ``guess_lexer()`` now makes its decision much faster due to a cache
-  of whether data is xml-like (a check which is used in several
-  versions of ``analyse_text()``).  Several lexers also have more
-  accurate ``analyse_text()`` now.
-
-
-Version 0.10
-------------
-(codename Malzeug, released May 06, 2008)
-
-- Lexers added:
-
-  * Io
-  * Smalltalk
-  * Darcs patches
-  * Tcl
-  * Matlab
-  * Matlab sessions
-  * FORTRAN
-  * XSLT
-  * tcsh
-  * NumPy
-  * Python 3
-  * S, S-plus, R statistics languages
-  * Logtalk
-
-- In the LatexFormatter, the *commandprefix* option is now by default
-  'PY' instead of 'C', since the latter resulted in several collisions
-  with other packages.  Also, the special meaning of the *arg*
-  argument to ``get_style_defs()`` was removed.
-
-- Added ImageFormatter, to format code as PNG, JPG, GIF or BMP.
-  (Needs the Python Imaging Library.)
-
-- Support doc comments in the PHP lexer.
-
-- Handle format specifications in the Perl lexer.
-
-- Fix comment handling in the Batch lexer.
-
-- Add more file name extensions for the C++, INI and XML lexers.
-
-- Fixes in the IRC and MuPad lexers.
-
-- Fix function and interface name highlighting in the Java lexer.
-
-- Fix at-rule handling in the CSS lexer.
-
-- Handle KeyboardInterrupts gracefully in pygmentize.
-
-- Added BlackWhiteStyle.
-
-- Bash lexer now correctly highlights math, does not require
-  whitespace after semicolons, and correctly highlights boolean
-  operators.
-
-- Makefile lexer is now capable of handling BSD and GNU make syntax.
-
-
-Version 0.9
------------
-(codename Herbstzeitlose, released Oct 14, 2007)
-
-- Lexers added:
-
-  * Erlang
-  * ActionScript
-  * Literate Haskell
-  * Common Lisp
-  * Various assembly languages
-  * Gettext catalogs
-  * Squid configuration
-  * Debian control files
-  * MySQL-style SQL
-  * MOOCode
-
-- Lexers improved:
-
-  * Greatly improved the Haskell and OCaml lexers.
-  * Improved the Bash lexer's handling of nested constructs.
-  * The C# and Java lexers exhibited abysmal performance with some
-    input code; this should now be fixed.
-  * The IRC logs lexer is now able to colorize weechat logs too.
-  * The Lua lexer now recognizes multi-line comments.
-  * Fixed bugs in the D and MiniD lexer.
-
-- The encoding handling of the command line mode (pygmentize) was
-  enhanced. You shouldn't get UnicodeErrors from it anymore if you
-  don't give an encoding option.
-
-- Added a ``-P`` option to the command line mode which can be used to
-  give options whose values contain commas or equals signs.
-
-- Added 256-color terminal formatter.
-
-- Added an experimental SVG formatter.
-
-- Added the ``lineanchors`` option to the HTML formatter, thanks to
-  Ian Charnas for the idea.
-
-- Gave the line numbers table a CSS class in the HTML formatter.
-
-- Added a Vim 7-like style.
-
-
-Version 0.8.1
--------------
-(released Jun 27, 2007)
-
-- Fixed POD highlighting in the Ruby lexer.
-
-- Fixed Unicode class and namespace name highlighting in the C# lexer.
-
-- Fixed Unicode string prefix highlighting in the Python lexer.
-
-- Fixed a bug in the D and MiniD lexers.
-
-- Fixed the included MoinMoin parser.
-
-
-Version 0.8
------------
-(codename Maikäfer, released May 30, 2007)
-
-- Lexers added:
-
-  * Haskell, thanks to Adam Blinkinsop
-  * Redcode, thanks to Adam Blinkinsop
-  * D, thanks to Kirk McDonald
-  * MuPad, thanks to Christopher Creutzig
-  * MiniD, thanks to Jarrett Billingsley
-  * Vim Script, by Tim Hatch
-
-- The HTML formatter now has a second line-numbers mode in which it
-  will just integrate the numbers in the same ``<pre>`` tag as the
-  code.
-
-- The `CSharpLexer` now is Unicode-aware, which means that it has an
-  option that can be set so that it correctly lexes Unicode
-  identifiers allowed by the C# specs.
-
-- Added a `RaiseOnErrorTokenFilter` that raises an exception when the
-  lexer generates an error token, and a `VisibleWhitespaceFilter` that
-  converts whitespace (spaces, tabs, newlines) into visible
-  characters.
-
-- Fixed the `do_insertions()` helper function to yield correct
-  indices.
-
-- The ReST lexer now automatically highlights source code blocks in
-  ".. sourcecode:: language" and ".. code:: language" directive
-  blocks.
-
-- Improved the default style (thanks to Tiberius Teng). The old
-  default is still available as the "emacs" style (which was an alias
-  before).
-
-- The `get_style_defs` method of HTML formatters now uses the
-  `cssclass` option as the default selector if it was given.
-
-- Improved the ReST and Bash lexers a bit.
-
-- Fixed a few bugs in the Makefile and Bash lexers, thanks to Tim
-  Hatch.
-
-- Fixed a bug in the command line code that disallowed ``-O`` options
-  when using the ``-S`` option.
-
-- Fixed a bug in the `RawTokenFormatter`.
-
-
-Version 0.7.1
--------------
-(released Feb 15, 2007)
-
-- Fixed little highlighting bugs in the Python, Java, Scheme and
-  Apache Config lexers.
-
-- Updated the included manpage.
-
-- Included a built version of the documentation in the source tarball.
-
-
-Version 0.7
------------
-(codename Faschingskrapfn, released Feb 14, 2007)
-
-- Added a MoinMoin parser that uses Pygments. With it, you get
-  Pygments highlighting in Moin Wiki pages.
-
-- Changed the exception raised if no suitable lexer, formatter etc. is
-  found in one of the `get_*_by_*` functions to a custom exception,
-  `pygments.util.ClassNotFound`. It is, however, a subclass of
-  `ValueError` in order to retain backwards compatibility.
-
-- Added a `-H` command line option which can be used to get the
-  docstring of a lexer, formatter or filter.
-
-- Made the handling of lexers and formatters more consistent. The
-  aliases and filename patterns of formatters are now attributes on
-  them.
-
-- Added an OCaml lexer, thanks to Adam Blinkinsop.
-
-- Made the HTML formatter more flexible, and easily subclassable in
-  order to make it easy to implement custom wrappers, e.g. alternate
-  line number markup. See the documentation.
-
-- Added an `outencoding` option to all formatters, making it possible
-  to override the `encoding` (which is used by lexers and formatters)
-  when using the command line interface. Also, if using the terminal
-  formatter and the output file is a terminal and has an encoding
-  attribute, use it if no encoding is given.
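A minimal sketch of the `outencoding` option from the library side, assuming the current API: with an output encoding set, ``highlight()`` returns encoded bytes instead of a str.

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import TerminalFormatter

# outencoding overrides any `encoding` option for the output side only.
out = highlight('x = 1\n', PythonLexer(),
                TerminalFormatter(outencoding='utf-8'))
```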
-
-- Made it possible to just drop style modules into the `styles`
-  subpackage of the Pygments installation.
-
-- Added a "state" keyword argument to the `using` helper.
-
-- Added a `commandprefix` option to the `LatexFormatter` which allows
-  controlling how the command names are constructed.
-
-- Added quite a few new lexers, thanks to Tim Hatch:
-
-  * Java Server Pages
-  * Windows batch files
-  * Trac Wiki markup
-  * Python tracebacks
-  * ReStructuredText
-  * Dylan
-  * and the Befunge esoteric programming language (yay!)
-
-- Added Mako lexers by Ben Bangert.
-
-- Added "fruity" style, another dark background originally vim-based
-  theme.
-
-- Added sources.list lexer by Dennis Kaarsemaker.
-
-- Added token stream filters, and a pygmentize option to use them.
-
-- Changed the behavior of the `in` operator for tokens.
-
-- Added mimetypes for all lexers.
-
-- Fixed some problems lexing Python strings.
-
-- Fixed tickets: #167, #178, #179, #180, #185, #201.
-
-
-Version 0.6
------------
-(codename Zimtstern, released Dec 20, 2006)
-
-- Added option for the HTML formatter to write the CSS to an external
-  file in "full document" mode.
-
-- Added RTF formatter.
-
-- Added Bash and Apache configuration lexers (thanks to Tim Hatch).
-
-- Improved guessing methods for various lexers.
-
-- Added `@media` support to CSS lexer (thanks to Tim Hatch).
-
-- Added a Groff lexer (thanks to Tim Hatch).
-
-- License change to BSD.
-
-- Added lexers for the Myghty template language.
-
-- Added a Scheme lexer (thanks to Marek Kubica).
-
-- Added some functions to iterate over existing lexers, formatters and
-  filters.
-
-- The HtmlFormatter's `get_style_defs()` can now take a list as an
-  argument to generate CSS with multiple prefixes.
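A sketch of the list-argument form; the two selectors are invented for illustration:

```python
from pygments.formatters import HtmlFormatter

# One stylesheet whose rules are emitted under several CSS prefixes.
css = HtmlFormatter().get_style_defs(['.highlight', '.syntax'])
```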
-
-- Support for guessing input encoding added.
-
-- Encoding support added: all processing is now done with Unicode
-  strings, input and output are converted from and optionally to byte
-  strings (see the ``encoding`` option of lexers and formatters).
-
-- Some improvements in the C(++) lexers handling comments and line
-  continuations.
-
-
-Version 0.5.1
--------------
-(released Oct 30, 2006)
-
-- Fixed traceback in ``pygmentize -L`` (thanks to Piotr Ozarowski).
-
-
-Version 0.5
------------
-(codename PyKleur, released Oct 30, 2006)
-
-- Initial public release.
+Pygments changelog
+==================
+
+Since 2.5.0, issue numbers refer to the tracker at
+<https://github.com/pygments/pygments/issues>,
+pull request numbers to the requests at
+<https://github.com/pygments/pygments/pulls>.
+
+
+Version 2.7.0
+-------------
+(released September 12, 2020)
+
+- Added lexers:
+
+  * Arrow (PR#1481, PR#1499)
+  * BARE (PR#1488)
+  * Devicetree (PR#1434)
+  * F* (PR#1409)
+  * GDScript (PR#1457)
+  * Pointless (PR#1494)
+  * PromQL (PR#1506)
+  * PsySH (PR#1438)
+  * Singularity (PR#1285)
+  * TiddlyWiki5 (PR#1390)
+  * TNT (PR#1414)
+  * YANG (PR#1408, PR#1428)
+
+- Updated lexers:
+
+  * APL (PR#1503)
+  * C++ (PR#1350, which also fixes: #1222, #996, #906, #828, #1162, #1166,
+    #1396)
+  * Chapel (PR#1423)
+  * CMake (#1491)
+  * CSound (#1509)
+  * Cython (PR#1507)
+  * Dart (PR#1449)
+  * Fennel (PR#1535)
+  * Fortran (PR#1442)
+  * GAS (PR#1530)
+  * HTTP (PR#1432, #1520, PR#1521)
+  * Inform 6 (PR#1461)
+  * Javascript (PR#1533)
+  * JSON (#1065, PR#1528)
+  * Lean (PR#1415)
+  * Matlab (PR#1399)
+  * Markdown (#1492, PR#1495)
+  * MySQL (#975, #1063, #1453, PR#1527)
+  * NASM (PR#1465)
+  * Nim (PR#1426)
+  * PostgreSQL (PR#1513)
+  * PowerShell (PR#1398, PR#1497)
+  * Protobuf (PR#1505)
+  * Robot (PR#1480)
+  * SQL (PR#1402)
+  * SystemVerilog (PR#1436, PR#1452, PR#1454, PR#1460, PR#1462, PR#1463, PR#1464, PR#1471, #1496, PR#1504)
+  * TeraTerm (PR#1337)
+  * XML (#1502)
+
+- Added a new filter for math symbols (PR#1406)
+- The Kconfig lexer will match Kconfig derivative names now (PR#1458)
+- Improved HTML formatter output (PR#1500)
+- ``.markdown`` is now recognized as an extension for Markdown files (PR#1476)
+- Fixed line number colors for Solarized (PR#1477, #1356)
+- Improvements to exception handling (PR#1478)
+- Improvements to tests (PR#1532, PR#1533, PR#1539)
+- Various code cleanups (PR#1536, PR#1537, PR#1538)
+
+
+Version 2.6.1
+-------------
+(released March 8, 2020)
+
+- This release fixes a packaging issue. No functional changes.
+
+
+Version 2.6
+-----------
+(released March 8, 2020)
+
+- Running Pygments on Python 2.x is no longer supported.
+  (The Python 2 lexer still exists.)
+
+- Added lexers:
+
+  * Linux kernel logs (PR#1310)
+  * LLVM MIR (PR#1361)
+  * MiniScript (PR#1397)
+  * Mosel (PR#1287, PR#1326)
+  * Parsing Expression Grammar (PR#1336)
+  * ReasonML (PR#1386)
+  * Ride (PR#1319, PR#1321)
+  * Sieve (PR#1257)
+  * USD (PR#1290)
+  * WebIDL (PR#1309)
+
+- Updated lexers:
+
+  * Apache2 (PR#1378)
+  * Chapel (PR#1357)
+  * CSound (PR#1383)
+  * D (PR#1375, PR#1362)
+  * Idris (PR#1360)
+  * Perl6/Raku lexer (PR#1344)
+  * Python3 (PR#1382, PR#1385)
+  * Rust: Updated lexer to cover more builtins (mostly macros) and miscellaneous
+    new syntax (PR#1320)
+  * SQL: Add temporal support keywords (PR#1402)
+
+- The 256-color/true-color terminal formatters now support the italic attribute
+  in styles (PR#1288)
+- Support HTTP 2/3 header (PR#1308)
+- Support missing reason in HTTP header (PR#1322)
+- Boogie/Silver: support line continuations and triggers, move contract keywords
+  to separate category (PR#1299)
+- GAS: support C-style comments (PR#1291)
+- Fix names in S lexer (PR#1330, PR#1333)
+- Fix numeric literals in Ada (PR#1334)
+- Recognize ``.mjs`` files as Javascript (PR#1392)
+- Recognize ``.eex`` files as Elixir (PR#1387)
+- Fix ``re.MULTILINE`` usage (PR#1388)
+- Recognize ``pipenv`` and ``poetry`` dependency & lock files (PR#1376)
+- Improve font search on Windows (#1247)
+- Remove unused script block (#1401)
+
+
+Version 2.5.2
+-------------
+(released November 29, 2019)
+
+- Fix incompatibility with some setuptools versions (PR#1316)
+
+- Fix lexing of ReST field lists (PR#1279)
+- Fix lexing of Matlab keywords as field names (PR#1282)
+- Recognize double-quoted strings in Matlab (PR#1278)
+- Avoid slow backtracking in Vim lexer (PR#1312)
+- Fix Scala highlighting of types (PR#1315)
+- Add some Terraform keywords
+- Update Modelica lexer to 3.4
+- Update Crystal examples
+
+
+Version 2.5.1
+-------------
+(released November 26, 2019)
+
+- This release fixes a packaging issue. No functional changes.
+
+
+Version 2.5.0
+-------------
+(released November 26, 2019)
+
+- Added lexers:
+
+  * Email (PR#1246)
+  * Erlang, Elixir shells (PR#823, #1521)
+  * Notmuch (PR#1264)
+  * `Scdoc <https://git.sr.ht/~sircmpwn/scdoc>`_ (PR#1268)
+  * `Solidity <https://solidity.readthedocs.io/>`_ (#1214)
+  * `Zeek <https://www.zeek.org>`_ (new name for Bro) (PR#1269)
+  * `Zig <https://ziglang.org/>`_ (PR#820)
+
+- Updated lexers:
+
+  * Apache2 Configuration (PR#1251)
+  * Bash sessions (#1253)
+  * CSound (PR#1250)
+  * Dart
+  * Dockerfile
+  * Emacs Lisp
+  * Handlebars (PR#773)
+  * Java (#1101, #987)
+  * Logtalk (PR#1261)
+  * Matlab (PR#1271)
+  * Praat (PR#1277)
+  * Python3 (PR#1255, PR#1400)
+  * Ruby
+  * YAML (#1528)
+  * Velocity
+
+- Added styles:
+
+  * Inkpot (PR#1276)
+
+- The ``PythonLexer`` class is now an alias for the former ``Python3Lexer``.
+  The old ``PythonLexer`` is available as ``Python2Lexer``.  Same change has
+  been done for the ``PythonTracebackLexer``.  The ``python3`` option for
+  the ``PythonConsoleLexer`` is now true by default.
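A sketch of the rename as seen from user code, assuming Pygments 2.5 or later:

```python
from pygments.lexers import PythonLexer, Python2Lexer

lexer3 = PythonLexer()   # now lexes Python 3 source by default
lexer2 = Python2Lexer()  # the former PythonLexer
```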
+
+- Bump ``NasmLexer`` priority over ``TasmLexer`` for ``.asm`` files
+  (fixes #1326)
+- Default font in the ``ImageFormatter`` has been updated (#928, PR#1245)
+- Test suite switched to py.test, removed nose dependency (#1490)
+- Reduce ``TeraTerm`` lexer score -- it used to match nearly all languages
+  (#1256)
+- Treat ``Skylark``/``Starlark`` files as Python files (PR#1259)
+- Image formatter: actually respect ``line_number_separator`` option
+
+- Add LICENSE file to wheel builds
+- Agda: fix lambda highlighting
+- Dart: support ``@`` annotations
+- Dockerfile: accept ``FROM ... AS`` syntax
+- Emacs Lisp: add more string functions
+- GAS: accept registers in directive arguments
+- Java: make structural punctuation (braces, parens, colon, comma) ``Punctuation``, not ``Operator`` (#987)
+- Java: support ``var`` contextual keyword (#1101)
+- Matlab: Fix recognition of ``function`` keyword (PR#1271)
+- Python: recognize ``.jy`` filenames (#976)
+- Python: recognize ``f`` string prefix (#1156)
+- Ruby: support squiggly heredocs
+- Shell sessions: recognize Virtualenv prompt (PR#1266)
+- Velocity: support silent reference syntax
+
+
+Version 2.4.2
+-------------
+(released May 28, 2019)
+
+- Fix encoding error when guessing lexer with given ``encoding`` option
+  (#1438)
+
+
+Version 2.4.1
+-------------
+(released May 24, 2019)
+
+- Updated lexers:
+
+  * Coq (#1430)
+  * MSDOS Session (PR#734)
+  * NASM (#1517)
+  * Objective-C (PR#813, #1508)
+  * Prolog (#1511)
+  * TypeScript (#1515)
+
+- Support CSS variables in stylesheets (PR#814, #1356)
+- Fix F# lexer name (PR#709)
+- Fix ``TerminalFormatter`` using bold for bright text (#1480)
+
+
+Version 2.4.0
+-------------
+(released May 8, 2019)
+
+- Added lexers:
+
+  * Augeas (PR#807)
+  * BBC Basic (PR#806)
+  * Boa (PR#756)
+  * Charm++ CI (PR#788)
+  * DASM16 (PR#807)
+  * FloScript (PR#750)
+  * FreeFem++ (PR#785)
+  * Hspec (PR#790)
+  * Pony (PR#627)
+  * SGF (PR#780)
+  * Slash (PR#807)
+  * Slurm (PR#760)
+  * Tera Term Language (PR#749)
+  * TOML (PR#807)
+  * Unicon (PR#731)
+  * VBScript (PR#673)
+
+- Updated lexers:
+
+  * Apache2 (PR#766)
+  * Cypher (PR#746)
+  * LLVM (PR#792)
+  * Makefiles (PR#766)
+  * PHP (#1482)
+  * Rust
+  * SQL (PR#672)
+  * Stan (PR#774)
+  * Stata (PR#800)
+  * Terraform (PR#787)
+  * YAML
+
+- Add solarized style (PR#708)
+- Add support for Markdown reference-style links (PR#753)
+- Add license information to generated HTML/CSS files (#1496)
+- Change ANSI color names (PR#777)
+- Fix catastrophic backtracking in the bash lexer (#1494)
+- Fix documentation failing to build using Sphinx 2.0 (#1501)
+- Fix incorrect links in the Lisp and R lexer documentation (PR#775)
+- Fix rare unicode errors on Python 2.7 (PR#798, #1492)
+- Fix lexers popping from an empty stack (#1506)
+- TypoScript uses ``.typoscript`` now (#1498)
+- Updated Trove classifiers and ``pip`` requirements (PR#799)
+
+
+Version 2.3.1
+-------------
+(released Dec 16, 2018)
+
+- Updated lexers:
+
+  * ASM (PR#784)
+  * Chapel (PR#735)
+  * Clean (PR#621)
+  * CSound (PR#684)
+  * Elm (PR#744)
+  * Fortran (PR#747)
+  * GLSL (PR#740)
+  * Haskell (PR#745)
+  * Hy (PR#754)
+  * Igor Pro (PR#764)
+  * PowerShell (PR#705)
+  * Python (PR#720, #1299, PR#715)
+  * SLexer (PR#680)
+  * YAML (PR#762, PR#724)
+
+- Fix invalid string escape sequences
+- Fix `FutureWarning` introduced by regex changes in Python 3.7
+
+
+Version 2.3.0
+-------------
+(released Nov 25, 2018)
+
+- Added lexers:
+
+  * Fennel (PR#783)
+  * HLSL (PR#675)
+
+- Updated lexers:
+
+  * Dockerfile (PR#714)
+
+- Minimum Python versions changed to 2.7 and 3.5
+- Added support for Python 3.7 generator changes (PR#772)
+- Fix incorrect token type in SCSS for single-quote strings (#1322)
+- Use `terminal256` formatter if `TERM` contains `256` (PR#666)
+- Fix incorrect handling of GitHub style fences in Markdown (PR#741, #1389)
+- Fix `%a` not being highlighted in Python3 strings (PR#727)
+
+
+Version 2.2.0
+-------------
+(released Jan 22, 2017)
+
+- Added lexers:
+
+  * AMPL
+  * TypoScript (#1173)
+  * Varnish config (PR#554)
+  * Clean (PR#503)
+  * WDiff (PR#513)
+  * Flatline (PR#551)
+  * Silver (PR#537)
+  * HSAIL (PR#518)
+  * JSGF (PR#546)
+  * NCAR command language (PR#536)
+  * Extempore (PR#530)
+  * Cap'n Proto (PR#595)
+  * Whiley (PR#573)
+  * Monte (PR#592)
+  * Crystal (PR#576)
+  * Snowball (PR#589)
+  * CapDL (PR#579)
+  * NuSMV (PR#564)
+  * SAS, Stata (PR#593)
+
+- Added the ability to load lexer and formatter classes directly from files
+  with the `-x` command line option and the `lexers.load_lexer_from_file()`
+  and `formatters.load_formatter_from_file()` functions. (PR#559)
+
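The file-loading feature above can be sketched as follows (a minimal, hypothetical example: it assumes Pygments 2.2+ is importable, and the lexer file and class name are invented for illustration):

```python
# Sketch: load a lexer class straight from a file with
# pygments.lexers.load_lexer_from_file() (added in 2.2).
# The lexer source written here is a hypothetical minimal example.
import os
import tempfile

from pygments.lexers import load_lexer_from_file

LEXER_SRC = '''\
from pygments.lexer import RegexLexer
from pygments.token import Text

class MyLexer(RegexLexer):
    name = "MyLang"
    tokens = {"root": [(r".+", Text)]}
'''

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "mylexer.py")
    with open(path, "w") as f:
        f.write(LEXER_SRC)
    # The second argument names the class to pick out of the file.
    lexer = load_lexer_from_file(path, "MyLexer")

print(lexer.name)
```

The command-line counterpart described above would be along the lines of ``pygmentize -x -l mylexer.py:MyLexer``.
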
+- Added `lexers.find_lexer_class_by_name()`. (#1203)
+
+- Added new token types and lexing for magic methods and variables in Python
+  and PHP.
+
+- Added a new token type for string affixes and lexing for them in Python, C++
+  and Postgresql lexers.
+
+- Added a new token type for heredoc (and similar) string delimiters and
+  lexing for them in C++, Perl, PHP, Postgresql and Ruby lexers.
+
+- Styles can now define colors with ANSI colors for use in the 256-color
+  terminal formatter. (PR#531)
+
+- Improved the CSS lexer. (#1083, #1130)
+
+- Added "Rainbow Dash" style. (PR#623)
+
+- Delay loading `pkg_resources`, which takes a long while to import. (PR#690)
+
+
+Version 2.1.3
+-------------
+(released Mar 2, 2016)
+
+- Fixed regression in Bash lexer (PR#563)
+
+
+Version 2.1.2
+-------------
+(released Feb 29, 2016)
+
+- Fixed Python 3 regression in image formatter (#1215)
+- Fixed regression in Bash lexer (PR#562)
+
+
+Version 2.1.1
+-------------
+(released Feb 14, 2016)
+
+- Fixed Jython compatibility (#1205)
+- Fixed HTML formatter output with leading empty lines (#1111)
+- Added a mapping table for LaTeX encodings and added utf8 (#1152)
+- Fixed image formatter font searching on Macs (#1188)
+- Fixed deepcopy-ing of Token instances (#1168)
+- Fixed Julia string interpolation (#1170)
+- Fixed statefulness of HttpLexer between get_tokens calls
+- Many smaller fixes to various lexers
+
+
+Version 2.1
+-----------
+(released Jan 17, 2016)
+
+- Added lexers:
+
+  * Emacs Lisp (PR#431)
+  * Arduino (PR#442)
+  * Modula-2 with multi-dialect support (#1090)
+  * Fortran fixed format (PR#213)
+  * Archetype Definition language (PR#483)
+  * Terraform (PR#432)
+  * Jcl, Easytrieve (PR#208)
+  * ParaSail (PR#381)
+  * Boogie (PR#420)
+  * Turtle (PR#425)
+  * Fish Shell (PR#422)
+  * Roboconf (PR#449)
+  * Test Anything Protocol (PR#428)
+  * Shen (PR#385)
+  * Component Pascal (PR#437)
+  * SuperCollider (PR#472)
+  * Shell consoles (Tcsh, PowerShell, MSDOS) (PR#479)
+  * Elm and J (PR#452)
+  * Crmsh (PR#440)
+  * Praat (PR#492)
+  * CSound (PR#494)
+  * Ezhil (PR#443)
+  * Thrift (PR#469)
+  * QVT Operational (PR#204)
+  * Hexdump (PR#508)
+  * CAmkES Configuration (PR#462)
+
+- Added styles:
+
+  * Lovelace (PR#456)
+  * Algol and Algol-nu (#1090)
+
+- Added formatters:
+
+  * IRC (PR#458)
+  * True color (24-bit) terminal ANSI sequences (#1142)
+    (formatter alias: "16m")
+
+- New "filename" option for HTML formatter (PR#527).
+
+- Improved performance of the HTML formatter for long lines (PR#504).
+
+- Updated autopygmentize script (PR#445).
+
+- Fixed style inheritance for non-standard token types in HTML output.
+
+- Added support for async/await to Python 3 lexer.
+
+- Rewrote linenos option for TerminalFormatter (it's better, but slightly
+  different output than before) (#1147).
+
+- Javascript lexer now supports most of ES6 (#1100).
+
+- Cocoa builtins updated for iOS 8.1 (PR#433).
+
+- Combined BashSessionLexer and ShellSessionLexer, new version should support
+  the prompt styles of either.
+
+- Added option to pygmentize to show a full traceback on exceptions.
+
+- Fixed incomplete output on Windows and Python 3 (e.g. when using IPython
+  Notebook) (#1153).
+
+- Allowed more traceback styles in Python console lexer (PR#253).
+
+- Added decorators to TypeScript (PR#509).
+
+- Fix highlighting of certain IRC logs formats (#1076).
+
+
+Version 2.0.2
+-------------
+(released Jan 20, 2015)
+
+- Fix Python tracebacks getting duplicated in the console lexer (#1068).
+
+- Backquote-delimited identifiers are now recognized in F# (#1062).
+
+
+Version 2.0.1
+-------------
+(released Nov 10, 2014)
+
+- Fix an encoding issue when using ``pygmentize`` with the ``-o`` option.
+
+
+Version 2.0
+-----------
+(released Nov 9, 2014)
+
+- Default lexer encoding is now "guess", i.e. UTF-8 / Locale / Latin1 is
+  tried in that order.
+
+- Major update to Swift lexer (PR#410).
+
+- Multiple fixes to lexer guessing in conflicting cases:
+
+  * recognize HTML5 by doctype
+  * recognize XML by XML declaration
+  * don't recognize C/C++ as SystemVerilog
+
+- Simplified regexes and builtin lists.
+
+
+Version 2.0rc1
+--------------
+(released Oct 16, 2014)
+
+- Dropped Python 2.4 and 2.5 compatibility.  This is in favor of single-source
+  compatibility between Python 2.6, 2.7 and 3.3+.
+
+- New website and documentation based on Sphinx (finally!)
+
+- Lexers added:
+
+  * APL (#969)
+  * Agda and Literate Agda (PR#203)
+  * Alloy (PR#355)
+  * AmbientTalk
+  * BlitzBasic (PR#197)
+  * ChaiScript (PR#24)
+  * Chapel (PR#256)
+  * Cirru (PR#275)
+  * Clay (PR#184)
+  * ColdFusion CFC (PR#283)
+  * Cryptol and Literate Cryptol (PR#344)
+  * Cypher (PR#257)
+  * Docker config files
+  * EBNF (PR#193)
+  * Eiffel (PR#273)
+  * GAP (PR#311)
+  * Golo (PR#309)
+  * Handlebars (PR#186)
+  * Hy (PR#238)
+  * Idris and Literate Idris (PR#210)
+  * Igor Pro (PR#172)
+  * Inform 6/7 (PR#281)
+  * Intel objdump (PR#279)
+  * Isabelle (PR#386)
+  * Jasmin (PR#349)
+  * JSON-LD (PR#289)
+  * Kal (PR#233)
+  * Lean (PR#399)
+  * LSL (PR#296)
+  * Limbo (PR#291)
+  * Liquid (#977)
+  * MQL (PR#285)
+  * MaskJS (PR#280)
+  * Mozilla preprocessors
+  * Mathematica (PR#245)
+  * NesC (PR#166)
+  * Nit (PR#375)
+  * Nix (PR#267)
+  * Pan
+  * Pawn (PR#211)
+  * Perl 6 (PR#181)
+  * Pig (PR#304)
+  * Pike (PR#237)
+  * QBasic (PR#182)
+  * Red (PR#341)
+  * ResourceBundle (#1038)
+  * Rexx (PR#199)
+  * Rql (PR#251)
+  * Rsl
+  * SPARQL (PR#78)
+  * Slim (PR#366)
+  * Swift (PR#371)
+  * Swig (PR#168)
+  * TADS 3 (PR#407)
+  * Todo.txt todo lists
+  * Twig (PR#404)
+
+- Added a helper to "optimize" regular expressions that match one of many
+  literal words; this can save 20% and more lexing time with lexers that
+  highlight many keywords or builtins.
+
+- New styles: "xcode" and "igor", similar to the default highlighting of
+  the respective IDEs.
+
+- The command-line "pygmentize" tool now tries a little harder to find the
+  correct encoding for files and the terminal (#979).
+
+- Added "inencoding" option for lexers to override "encoding" analogous
+  to "outencoding" (#800).
+
+- Added line-by-line "streaming" mode for pygmentize with the "-s" option.
+  (PR#165)  Only fully works for lexers that have no constructs spanning
+  lines!
+
+- Added an "envname" option to the LaTeX formatter to select a replacement
+  verbatim environment (PR#235).
+
+- Updated the Makefile lexer to yield a little more useful highlighting.
+
+- Lexer aliases passed to ``get_lexer_by_name()`` are now case-insensitive.
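A quick sketch of the case-insensitive lookup (assumes Pygments 2.0+ is importable):

```python
# Sketch: alias lookup ignores case, so differently-cased aliases
# resolve to the same lexer class.
from pygments.lexers import get_lexer_by_name

classes = {type(get_lexer_by_name(a)) for a in ("python", "Python", "PYTHON")}
print(len(classes))  # all aliases map to a single lexer class
```
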
+
+- File name matching in lexers and formatters will now use a regex cache
+  for speed (PR#205).
+
+- Pygments will now recognize "vim" modelines when guessing the lexer for
+  a file based on content (PR#118).
+
+- Major restructure of the ``pygments.lexers`` module namespace.  There are now
+  many more modules with less lexers per module.  Old modules are still around
+  and re-export the lexers they previously contained.
+
+- The NameHighlightFilter now works with any Name.* token type (#790).
+
+- Python 3 lexer: add new exceptions from PEP 3151.
+
+- Opa lexer: add new keywords (PR#170).
+
+- Julia lexer: add keywords and underscore-separated number
+  literals (PR#176).
+
+- Lasso lexer: fix method highlighting, update builtins. Fix
+  guessing so that plain XML isn't always taken as Lasso (PR#163).
+
+- Objective C/C++ lexers: allow "@" prefixing any expression (#871).
+
+- Ruby lexer: fix lexing of Name::Space tokens (#860) and of symbols
+  in hashes (#873).
+
+- Stan lexer: update for version 2.4.0 of the language (PR#162, PR#255, PR#377).
+
+- JavaScript lexer: add the "yield" keyword (PR#196).
+
+- HTTP lexer: support for PATCH method (PR#190).
+
+- Koka lexer: update to newest language spec (PR#201).
+
+- Haxe lexer: rewrite and support for Haxe 3 (PR#174).
+
+- Prolog lexer: add different kinds of numeric literals (#864).
+
+- F# lexer: rewrite with newest spec for F# 3.0 (#842), fix a bug with
+  dotted chains (#948).
+
+- Kotlin lexer: general update (PR#271).
+
+- Rebol lexer: fix comment detection and analyse_text (PR#261).
+
+- LLVM lexer: update keywords to v3.4 (PR#258).
+
+- PHP lexer: add new keywords and binary literals (PR#222).
+
+- external/markdown-processor.py updated to newest python-markdown (PR#221).
+
+- CSS lexer: some highlighting order fixes (PR#231).
+
+- Ceylon lexer: fix parsing of nested multiline comments (#915).
+
+- C family lexers: fix parsing of indented preprocessor directives (#944).
+
+- Rust lexer: update to 0.9 language version (PR#270, PR#388).
+
+- Elixir lexer: update to 0.15 language version (PR#392).
+
+- Fix swallowing incomplete tracebacks in Python console lexer (#874).
+
+
+Version 1.6
+-----------
+(released Feb 3, 2013)
+
+- Lexers added:
+
+  * Dylan console (PR#149)
+  * Logos (PR#150)
+  * Shell sessions (PR#158)
+
+- Fix guessed lexers not receiving lexer options (#838).
+
+- Fix unquoted HTML attribute lexing in Opa (#841).
+
+- Fixes to the Dart lexer (PR#160).
+
+
+Version 1.6rc1
+--------------
+(released Jan 9, 2013)
+
+- Lexers added:
+
+  * AspectJ (PR#90)
+  * AutoIt (PR#122)
+  * BUGS-like languages (PR#89)
+  * Ceylon (PR#86)
+  * Croc (new name for MiniD)
+  * CUDA (PR#75)
+  * Dg (PR#116)
+  * IDL (PR#115)
+  * Jags (PR#89)
+  * Julia (PR#61)
+  * Kconfig (#711)
+  * Lasso (PR#95, PR#113)
+  * LiveScript (PR#84)
+  * Monkey (PR#117)
+  * Mscgen (PR#80)
+  * NSIS scripts (PR#136)
+  * OpenCOBOL (PR#72)
+  * QML (PR#123)
+  * Puppet (PR#133)
+  * Racket (PR#94)
+  * Rdoc (PR#99)
+  * Robot Framework (PR#137)
+  * RPM spec files (PR#124)
+  * Rust (PR#67)
+  * Smali (Dalvik assembly)
+  * SourcePawn (PR#39)
+  * Stan (PR#89)
+  * Treetop (PR#125)
+  * TypeScript (PR#114)
+  * VGL (PR#12)
+  * Visual FoxPro (#762)
+  * Windows Registry (#819)
+  * Xtend (PR#68)
+
+- The HTML formatter now supports linking to tags using CTags files, when the
+  python-ctags package is installed (PR#87).
+
+- The HTML formatter now has a "linespans" option that wraps every line in a
+  <span> tag with a specific id (PR#82).
+
+- When deriving a lexer from another lexer with token definitions, definitions
+  for states not in the child lexer are now inherited.  If you override a state
+  in the child lexer, an "inherit" keyword has been added to insert the base
+  state at that position (PR#141).
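
A minimal sketch of the inheritance mechanism described above (assumes Pygments 1.6+ is importable; the two lexers are invented for illustration):

```python
# Sketch: a child lexer overrides "root" and uses the `inherit`
# placeholder to splice the parent's rules back in at that position.
from pygments.lexer import RegexLexer, inherit
from pygments.token import Keyword, Text

class BaseLexer(RegexLexer):
    name = "Base"
    tokens = {"root": [(r"\w+", Text), (r"\s+", Text)]}

class ChildLexer(BaseLexer):
    name = "Child"
    tokens = {
        "root": [
            (r"\bspecial\b", Keyword),
            inherit,  # BaseLexer's "root" rules continue here
        ],
    }

toks = list(ChildLexer().get_tokens("special word"))
print(toks[0])  # the overriding rule wins for "special"
```
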
+
+- The C family lexers now inherit token definitions from a common base class,
+  removing code duplication (PR#141).
+
+- Use "colorama" on Windows for console color output (PR#142).
+
+- Fix Template Haskell highlighting (PR#63).
+
+- Fix some S/R lexer errors (PR#91).
+
+- Fix a bug in the Prolog lexer with names that start with 'is' (#810).
+
+- Rewrite Dylan lexer, add Dylan LID lexer (PR#147).
+
+- Add a Java quickstart document (PR#146).
+
+- Add a "external/autopygmentize" file that can be used as .lessfilter (#802).
+
+
+Version 1.5
+-----------
+(codename Zeitdilatation, released Mar 10, 2012)
+
+- Lexers added:
+
+  * Awk (#630)
+  * Fancy (#633)
+  * PyPy Log
+  * eC
+  * Nimrod
+  * Nemerle (#667)
+  * F# (#353)
+  * Groovy (#501)
+  * PostgreSQL (#660)
+  * DTD
+  * Gosu (#634)
+  * Octave (PR#22)
+  * Standard ML (PR#14)
+  * CFengine3 (#601)
+  * Opa (PR#37)
+  * HTTP sessions (PR#42)
+  * JSON (PR#31)
+  * SNOBOL (PR#30)
+  * MoonScript (PR#43)
+  * ECL (PR#29)
+  * Urbiscript (PR#17)
+  * OpenEdge ABL (PR#27)
+  * SystemVerilog (PR#35)
+  * Coq (#734)
+  * PowerShell (#654)
+  * Dart (#715)
+  * Fantom (PR#36)
+  * Bro (PR#5)
+  * NewLISP (PR#26)
+  * VHDL (PR#45)
+  * Scilab (#740)
+  * Elixir (PR#57)
+  * Tea (PR#56)
+  * Kotlin (PR#58)
+
+- Fix Python 3 terminal highlighting with pygmentize (#691).
+
+- In the LaTeX formatter, escape special &, < and > chars (#648).
+
+- In the LaTeX formatter, fix display problems for styles with token
+  background colors (#670).
+
+- Enhancements to the Squid conf lexer (#664).
+
+- Several fixes to the reStructuredText lexer (#636).
+
+- Recognize methods in the ObjC lexer (#638).
+
+- Fix Lua "class" highlighting: it does not have classes (#665).
+
+- Fix degenerate regex in Scala lexer (#671) and highlighting bugs (#713, #708).
+
+- Fix number pattern order in Ocaml lexer (#647).
+
+- Fix generic type highlighting in ActionScript 3 (#666).
+
+- Fixes to the Clojure lexer (PR#9).
+
+- Fix degenerate regex in Nemerle lexer (#706).
+
+- Fix infinite looping in CoffeeScript lexer (#729).
+
+- Fix crashes and analysis with ObjectiveC lexer (#693, #696).
+
+- Add some Fortran 2003 keywords.
+
+- Fix Boo string regexes (#679).
+
+- Add "rrt" style (#727).
+
+- Fix infinite looping in Darcs Patch lexer.
+
+- Lots of misc fixes to character-eating bugs and ordering problems in many
+  different lexers.
+
+
+Version 1.4
+-----------
+(codename Unschärfe, released Jan 03, 2011)
+
+- Lexers added:
+
+  * Factor (#520)
+  * PostScript (#486)
+  * Verilog (#491)
+  * BlitzMax Basic (#478)
+  * Ioke (#465)
+  * Java properties, split out of the INI lexer (#445)
+  * Scss (#509)
+  * Duel/JBST
+  * XQuery (#617)
+  * Mason (#615)
+  * GoodData (#609)
+  * SSP (#473)
+  * Autohotkey (#417)
+  * Google Protocol Buffers
+  * Hybris (#506)
+
+- Do not fail in analyse_text methods (#618).
+
+- Performance improvements in the HTML formatter (#523).
+
+- With the ``noclasses`` option in the HTML formatter, some styles
+  present in the stylesheet were not added as inline styles.
+
+- Four fixes to the Lua lexer (#480, #481, #482, #497).
+
+- More context-sensitive Gherkin lexer with support for more i18n translations.
+
+- Support new OO keywords in Matlab lexer (#521).
+
+- Small fix in the CoffeeScript lexer (#519).
+
+- A bugfix for backslashes in ocaml strings (#499).
+
+- Fix unicode/raw docstrings in the Python lexer (#489).
+
+- Allow PIL to work without PIL.pth (#502).
+
+- Allow seconds as a unit in CSS (#496).
+
+- Support ``application/javascript`` as a JavaScript mime type (#504).
+
+- Support `Offload <https://offload.codeplay.com/>`_ C++ Extensions as
+  keywords in the C++ lexer (#484).
+
+- Escape more characters in LaTeX output (#505).
+
+- Update Haml/Sass lexers to version 3 (#509).
+
+- Small PHP lexer string escaping fix (#515).
+
+- Support comments before preprocessor directives, and unsigned/
+  long long literals in C/C++ (#613, #616).
+
+- Support line continuations in the INI lexer (#494).
+
+- Fix lexing of Dylan string and char literals (#628).
+
+- Fix class/procedure name highlighting in VB.NET lexer (#624).
+
+
+Version 1.3.1
+-------------
+(bugfix release, released Mar 05, 2010)
+
+- The ``pygmentize`` script was missing from the distribution.
+
+
+Version 1.3
+-----------
+(codename Schneeglöckchen, released Mar 01, 2010)
+
+- Added the ``ensurenl`` lexer option, which can be used to suppress the
+  automatic addition of a newline to the lexer input.
+
+- Lexers added:
+
+  * Ada
+  * Coldfusion
+  * Modula-2
+  * Haxe
+  * R console
+  * Objective-J
+  * Haml and Sass
+  * CoffeeScript
+
+- Enhanced reStructuredText highlighting.
+
+- Added support for PHP 5.3 namespaces in the PHP lexer.
+
+- Added a bash completion script for `pygmentize`, to the external/
+  directory (#466).
+
+- Fixed a bug in `do_insertions()` used for multi-lexer languages.
+
+- Fixed a Ruby regex highlighting bug (#476).
+
+- Fixed regex highlighting bugs in Perl lexer (#258).
+
+- Add small enhancements to the C lexer (#467) and Bash lexer (#469).
+
+- Small fixes for the Tcl, Debian control file, Nginx config,
+  Smalltalk, Objective-C, Clojure, Lua lexers.
+
+- Gherkin lexer: Fixed single apostrophe bug and added new i18n keywords.
+
+
+Version 1.2.2
+-------------
+(bugfix release, released Jan 02, 2010)
+
+* Removed a backwards incompatibility in the LaTeX formatter that caused
+  Sphinx to produce invalid commands when writing LaTeX output (#463).
+
+* Fixed a forever-backtracking regex in the BashLexer (#462).
+
+
+Version 1.2.1
+-------------
+(bugfix release, released Jan 02, 2010)
+
+* Fixed mishandling of an ellipsis in place of the frames in a Python
+  console traceback, resulting in clobbered output.
+
+
+Version 1.2
+-----------
+(codename Neujahr, released Jan 01, 2010)
+
+- Dropped Python 2.3 compatibility.
+
+- Lexers added:
+
+  * Asymptote
+  * Go
+  * Gherkin (Cucumber)
+  * CMake
+  * Ooc
+  * Coldfusion
+  * Haxe
+  * R console
+
+- Added options for rendering LaTeX in source code comments in the
+  LaTeX formatter (#461).
+
+- Updated the Logtalk lexer.
+
+- Added `line_number_start` option to image formatter (#456).
+
+- Added `hl_lines` and `hl_color` options to image formatter (#457).
+
+- Fixed the HtmlFormatter's handling of noclasses=True to not output any
+  classes (#427).
+
+- Added the Monokai style (#453).
+
+- Fixed LLVM lexer identifier syntax and added new keywords (#442).
+
+- Fixed the PythonTracebackLexer to handle non-traceback data in header or
+  trailer, and support more partial tracebacks that start on line 2 (#437).
+
+- Fixed the CLexer to not highlight ternary statements as labels.
+
+- Fixed lexing of some Ruby quoting peculiarities (#460).
+
+- A few ASM lexer fixes (#450).
+
+
+Version 1.1.1
+-------------
+(bugfix release, released Sep 15, 2009)
+
+- Fixed the BBCode lexer (#435).
+
+- Added support for new Jinja2 keywords.
+
+- Fixed test suite failures.
+
+- Added Gentoo-specific suffixes to Bash lexer.
+
+
+Version 1.1
+-----------
+(codename Brillouin, released Sep 11, 2009)
+
+- Ported Pygments to Python 3.  This needed a few changes in the way
+  encodings are handled; they may affect corner cases when used with
+  Python 2 as well.
+
+- Lexers added:
+
+  * Antlr/Ragel, thanks to Ana Nelson
+  * (Ba)sh shell
+  * Erlang shell
+  * GLSL
+  * Prolog
+  * Evoque
+  * Modelica
+  * Rebol
+  * MXML
+  * Cython
+  * ABAP
+  * ASP.net (VB/C#)
+  * Vala
+  * Newspeak
+
+- Fixed the LaTeX formatter's output so that output generated for one style
+  can be used with the style definitions of another (#384).
+
+- Added "anchorlinenos" and "noclobber_cssfile" (#396) options to HTML
+  formatter.
+
+- Support multiline strings in Lua lexer.
+
+- Rewrite of the JavaScript lexer by Pumbaa80 to better support regular
+  expression literals (#403).
+
+- When pygmentize is asked to highlight a file for which multiple lexers
+  match the filename, use the analyse_text guessing engine to determine the
+  winner (#355).
+
+- Fixed minor bugs in the JavaScript lexer (#383), the Matlab lexer (#378),
+  the Scala lexer (#392), the INI lexer (#391), the Clojure lexer (#387)
+  and the AS3 lexer (#389).
+
+- Fixed three Perl heredoc lexing bugs (#379, #400, #422).
+
+- Fixed a bug in the image formatter which misdetected lines (#380).
+
+- Fixed bugs lexing extended Ruby strings and regexes.
+
+- Fixed a bug when lexing git diffs.
+
+- Fixed a bug lexing the empty commit in the PHP lexer (#405).
+
+- Fixed a bug causing Python numbers to be mishighlighted as floats (#397).
+
+- Fixed a bug when backslashes are used in odd locations in Python (#395).
+
+- Fixed various bugs in Matlab and S-Plus lexers, thanks to Winston Chang (#410,
+  #411, #413, #414) and fmarc (#419).
+
+- Fixed a bug in Haskell single-line comment detection (#426).
+
+- Added new-style reStructuredText directive for docutils 0.5+ (#428).
+
+
+Version 1.0
+-----------
+(codename Dreiundzwanzig, released Nov 23, 2008)
+
+- Don't use join(splitlines()) when converting newlines to ``\n``,
+  because that doesn't keep all newlines at the end when the
+  ``stripnl`` lexer option is False.
+
+- Added ``-N`` option to command-line interface to get a lexer name
+  for a given filename.
+
+- Added Tango style, written by Andre Roberge for the Crunchy project.
+
+- Added Python3TracebackLexer and ``python3`` option to
+  PythonConsoleLexer.
+
+- Fixed a few bugs in the Haskell lexer.
+
+- Fixed PythonTracebackLexer to be able to recognize SyntaxError and
+  KeyboardInterrupt (#360).
+
+- Provide one formatter class per image format, so that surprises like::
+
+    pygmentize -f gif -o foo.gif foo.py
+
+  creating a PNG file are avoided.
+
+- Actually use the `font_size` option of the image formatter.
+
+- Fixed the numpy lexer so that it no longer registers for `*.py` files.
+
+- Fixed HTML formatter so that text options can be Unicode
+  strings (#371).
+
+- Unified Diff lexer supports the "udiff" alias now.
+
+- Fixed a few issues in Scala lexer (#367).
+
+- RubyConsoleLexer now supports simple prompt mode (#363).
+
+- JavascriptLexer is smarter about what constitutes a regex (#356).
+
+- Add Applescript lexer, thanks to Andreas Amann (#330).
+
+- Make the codetags more strict about matching words (#368).
+
+- NginxConfLexer is a little more accurate on mimetypes and
+  variables (#370).
+
+
+Version 0.11.1
+--------------
+(released Aug 24, 2008)
+
+- Fixed a Jython compatibility issue in pygments.unistring (#358).
+
+
+Version 0.11
+------------
+(codename Straußenei, released Aug 23, 2008)
+
+Many thanks go to Tim Hatch for writing or integrating most of the bug
+fixes and new features.
+
+- Lexers added:
+
+  * Nasm-style assembly language, thanks to delroth
+  * YAML, thanks to Kirill Simonov
+  * ActionScript 3, thanks to Pierre Bourdon
+  * Cheetah/Spitfire templates, thanks to Matt Good
+  * Lighttpd config files
+  * Nginx config files
+  * Gnuplot plotting scripts
+  * Clojure
+  * POV-Ray scene files
+  * Sqlite3 interactive console sessions
+  * Scala source files, thanks to Krzysiek Goj
+
+- Lexers improved:
+
+  * C lexer highlights standard library functions now and supports C99
+    types.
+  * Bash lexer now correctly highlights heredocs without preceding
+    whitespace.
+  * Vim lexer now highlights hex colors properly and knows a couple
+    more keywords.
+  * Irc logs lexer now handles xchat's default time format (#340) and
+    correctly highlights lines ending in ``>``.
+  * Support more delimiters for perl regular expressions (#258).
+  * ObjectiveC lexer now supports 2.0 features.
+
+- Added "Visual Studio" style.
+
+- Updated markdown processor to Markdown 1.7.
+
+- Support roman/sans/mono style defs and use them in the LaTeX
+  formatter.
+
+- The RawTokenFormatter is no longer registered to ``*.raw`` and it's
+  documented that tokenization with this lexer may raise exceptions.
+
+- New option ``hl_lines`` to HTML formatter, to highlight certain
+  lines.
+
+- New option ``prestyles`` to HTML formatter.
+
+- New option *-g* to pygmentize, to allow lexer guessing based on
+  filetext (can be slowish, so file extensions are still checked
+  first).
+
+- ``guess_lexer()`` now makes its decision much faster due to a cache
+  of whether data is xml-like (a check which is used in several
+  versions of ``analyse_text()``).  Several lexers also have more
+  accurate ``analyse_text()`` now.
+
+
+Version 0.10
+------------
+(codename Malzeug, released May 06, 2008)
+
+- Lexers added:
+
+  * Io
+  * Smalltalk
+  * Darcs patches
+  * Tcl
+  * Matlab
+  * Matlab sessions
+  * FORTRAN
+  * XSLT
+  * tcsh
+  * NumPy
+  * Python 3
+  * S, S-plus, R statistics languages
+  * Logtalk
+
+- In the LatexFormatter, the *commandprefix* option is now by default
+  'PY' instead of 'C', since the latter resulted in several collisions
+  with other packages.  Also, the special meaning of the *arg*
+  argument to ``get_style_defs()`` was removed.
+
+- Added ImageFormatter, to format code as PNG, JPG, GIF or BMP.
+  (Needs the Python Imaging Library.)
+
+- Support doc comments in the PHP lexer.
+
+- Handle format specifications in the Perl lexer.
+
+- Fix comment handling in the Batch lexer.
+
+- Add more file name extensions for the C++, INI and XML lexers.
+
+- Fixes in the IRC and MuPad lexers.
+
+- Fix function and interface name highlighting in the Java lexer.
+
+- Fix at-rule handling in the CSS lexer.
+
+- Handle KeyboardInterrupts gracefully in pygmentize.
+
+- Added BlackWhiteStyle.
+
+- Bash lexer now correctly highlights math, does not require
+  whitespace after semicolons, and correctly highlights boolean
+  operators.
+
+- Makefile lexer is now capable of handling BSD and GNU make syntax.
+
+
+Version 0.9
+-----------
+(codename Herbstzeitlose, released Oct 14, 2007)
+
+- Lexers added:
+
+  * Erlang
+  * ActionScript
+  * Literate Haskell
+  * Common Lisp
+  * Various assembly languages
+  * Gettext catalogs
+  * Squid configuration
+  * Debian control files
+  * MySQL-style SQL
+  * MOOCode
+
+- Lexers improved:
+
+  * Greatly improved the Haskell and OCaml lexers.
+  * Improved the Bash lexer's handling of nested constructs.
+  * The C# and Java lexers exhibited abysmal performance with some
+    input code; this should now be fixed.
+  * The IRC logs lexer is now able to colorize weechat logs too.
+  * The Lua lexer now recognizes multi-line comments.
+  * Fixed bugs in the D and MiniD lexer.
+
+- The encoding handling of the command line mode (pygmentize) was
+  enhanced. You shouldn't get UnicodeErrors from it anymore if you
+  don't give an encoding option.
+
+- Added a ``-P`` option to the command line mode which can be used to
+  give options whose values contain commas or equals signs.
+
+- Added 256-color terminal formatter.
+
+- Added an experimental SVG formatter.
+
+- Added the ``lineanchors`` option to the HTML formatter, thanks to
+  Ian Charnas for the idea.
+
+- Gave the line numbers table a CSS class in the HTML formatter.
+
+- Added a Vim 7-like style.
+
+
+Version 0.8.1
+-------------
+(released Jun 27, 2007)
+
+- Fixed POD highlighting in the Ruby lexer.
+
+- Fixed Unicode class and namespace name highlighting in the C# lexer.
+
+- Fixed Unicode string prefix highlighting in the Python lexer.
+
+- Fixed a bug in the D and MiniD lexers.
+
+- Fixed the included MoinMoin parser.
+
+
+Version 0.8
+-----------
+(codename Maikäfer, released May 30, 2007)
+
+- Lexers added:
+
+  * Haskell, thanks to Adam Blinkinsop
+  * Redcode, thanks to Adam Blinkinsop
+  * D, thanks to Kirk McDonald
+  * MuPad, thanks to Christopher Creutzig
+  * MiniD, thanks to Jarrett Billingsley
+  * Vim Script, by Tim Hatch
+
+- The HTML formatter now has a second line-numbers mode in which it
+  will just integrate the numbers in the same ``<pre>`` tag as the
+  code.
+
+- The `CSharpLexer` now is Unicode-aware, which means that it has an
+  option that can be set so that it correctly lexes Unicode
+  identifiers allowed by the C# specs.
+
+- Added a `RaiseOnErrorTokenFilter` that raises an exception when the
+  lexer generates an error token, and a `VisibleWhitespaceFilter` that
+  converts whitespace (spaces, tabs, newlines) into visible
+  characters.
+
+- Fixed the `do_insertions()` helper function to yield correct
+  indices.
+
+- The ReST lexer now automatically highlights source code blocks in
+  ".. sourcecode:: language" and ".. code:: language" directive
+  blocks.
+
+- Improved the default style (thanks to Tiberius Teng). The old
+  default is still available as the "emacs" style (which was an alias
+  before).
+
+- The `get_style_defs` method of HTML formatters now uses the
+  `cssclass` option as the default selector if it was given.
+
+- Improved the ReST and Bash lexers a bit.
+
+- Fixed a few bugs in the Makefile and Bash lexers, thanks to Tim
+  Hatch.
+
+- Fixed a bug in the command line code that disallowed ``-O`` options
+  when using the ``-S`` option.
+
+- Fixed a bug in the `RawTokenFormatter`.
+
+
+Version 0.7.1
+-------------
+(released Feb 15, 2007)
+
+- Fixed little highlighting bugs in the Python, Java, Scheme and
+  Apache Config lexers.
+
+- Updated the included manpage.
+
+- Included a built version of the documentation in the source tarball.
+
+
+Version 0.7
+-----------
+(codename Faschingskrapfn, released Feb 14, 2007)
+
+- Added a MoinMoin parser that uses Pygments. With it, you get
+  Pygments highlighting in Moin Wiki pages.
+
+- Changed the exception raised if no suitable lexer, formatter etc. is
+  found in one of the `get_*_by_*` functions to a custom exception,
+  `pygments.util.ClassNotFound`. It is, however, a subclass of
+  `ValueError` in order to retain backwards compatibility.
+
+- Added a `-H` command line option which can be used to get the
+  docstring of a lexer, formatter or filter.
+
+- Made the handling of lexers and formatters more consistent. The
+  aliases and filename patterns of formatters are now attributes on
+  them.
+
+- Added an OCaml lexer, thanks to Adam Blinkinsop.
+
+- Made the HTML formatter more flexible, and easily subclassable in
+  order to make it easy to implement custom wrappers, e.g. alternate
+  line number markup. See the documentation.
+
+- Added an `outencoding` option to all formatters, making it possible
+  to override the `encoding` (which is used by lexers and formatters)
+  when using the command line interface. Also, if using the terminal
+  formatter and the output file is a terminal and has an encoding
+  attribute, use it if no encoding is given.
+
+- Made it possible to just drop style modules into the `styles`
+  subpackage of the Pygments installation.
+
+- Added a "state" keyword argument to the `using` helper.
+
+- Added a `commandprefix` option to the `LatexFormatter` which allows
+  to control how the command names are constructed.
+
+- Added quite a few new lexers, thanks to Tim Hatch:
+
+  * Java Server Pages
+  * Windows batch files
+  * Trac Wiki markup
+  * Python tracebacks
+  * ReStructuredText
+  * Dylan
+  * and the Befunge esoteric programming language (yay!)
+
+- Added Mako lexers by Ben Bangert.
+
+- Added "fruity" style, another dark background originally vim-based
+  theme.
+
+- Added sources.list lexer by Dennis Kaarsemaker.
+
+- Added token stream filters, and a pygmentize option to use them.
+
+- Changed the behavior of the `in` operator for tokens.
+
+- Added mimetypes for all lexers.
+
+- Fixed some problems lexing Python strings.
+
+- Fixed tickets: #167, #178, #179, #180, #185, #201.
+
+
+Version 0.6
+-----------
+(codename Zimtstern, released Dec 20, 2006)
+
+- Added an option for the HTML formatter to write the CSS to an external
+  file in "full document" mode.
+
+- Added RTF formatter.
+
+- Added Bash and Apache configuration lexers (thanks to Tim Hatch).
+
+- Improved guessing methods for various lexers.
+
+- Added `@media` support to CSS lexer (thanks to Tim Hatch).
+
+- Added a Groff lexer (thanks to Tim Hatch).
+
+- License change to BSD.
+
+- Added lexers for the Myghty template language.
+
+- Added a Scheme lexer (thanks to Marek Kubica).
+
+- Added some functions to iterate over existing lexers, formatters and
+  filters.
+
+- The HtmlFormatter's `get_style_defs()` can now take a list as an
+  argument to generate CSS with multiple prefixes.
+
+- Support for guessing input encoding added.
+
+- Encoding support added: all processing is now done with Unicode
+  strings, input and output are converted from and optionally to byte
+  strings (see the ``encoding`` option of lexers and formatters).
+
+- Some improvements in the C(++) lexers handling comments and line
+  continuations.
+
+
+Version 0.5.1
+-------------
+(released Oct 30, 2006)
+
+- Fixed traceback in ``pygmentize -L`` (thanks to Piotr Ozarowski).
+
+
+Version 0.5
+-----------
+(codename PyKleur, released Oct 30, 2006)
+
+- Initial public release.
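The `ClassNotFound` change described in the 0.7 notes above can be illustrated with a short sketch. `ClassNotFound` and `get_lexer_by_name` below are stand-ins mirroring `pygments.util.ClassNotFound` and the `get_*_by_*` helpers, not imports of the real ones; the point is why subclassing `ValueError` retains backwards compatibility:

```python
# Sketch of the 0.7 API change: the get_*_by_* helpers raise a dedicated
# exception that is still a ValueError. ClassNotFound here is a stand-in
# mirroring pygments.util.ClassNotFound, not the real class.
class ClassNotFound(ValueError):
    """Raised when no lexer/formatter/filter matches the request."""

def get_lexer_by_name(alias):
    # Stand-in for pygments.lexers.get_lexer_by_name with an unknown alias.
    raise ClassNotFound('no lexer for alias %r found' % alias)

try:
    get_lexer_by_name('no-such-lexer')
except ValueError as err:
    # Pre-0.7 callers that caught ValueError still work unchanged.
    print(type(err).__name__)  # ClassNotFound
```

Code written against the old `ValueError` contract therefore needs no changes, while new code can catch `ClassNotFound` specifically.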
--- a/eric6/ThirdParty/Pygments/pygments/LICENSE	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/LICENSE	Tue Sep 15 19:09:05 2020 +0200
@@ -1,25 +1,25 @@
-Copyright (c) 2006-2019 by the respective authors (see AUTHORS file).
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
-* Redistributions of source code must retain the above copyright
-  notice, this list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright
-  notice, this list of conditions and the following disclaimer in the
-  documentation and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+Copyright (c) 2006-2020 by the respective authors (see AUTHORS file).
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+* Redistributions of source code must retain the above copyright
+  notice, this list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright
+  notice, this list of conditions and the following disclaimer in the
+  documentation and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--- a/eric6/ThirdParty/Pygments/pygments/PKG-INFO	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/PKG-INFO	Tue Sep 15 19:09:05 2020 +0200
@@ -1,6 +1,6 @@
 Metadata-Version: 1.2
 Name: Pygments
-Version: 2.6.1
+Version: 2.7.0
 Summary: Pygments is a syntax highlighting package written in Python.
 Home-page: https://pygments.org/
 Author: Georg Brandl
@@ -22,7 +22,7 @@
         * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image     formats that PIL supports and ANSI sequences
         * it is usable as a command-line tool and as a library
         
-        :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
+        :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
         :license: BSD, see LICENSE for details.
         
 Keywords: syntax highlighting
--- a/eric6/ThirdParty/Pygments/pygments/__init__.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/__init__.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,85 +1,85 @@
-# -*- coding: utf-8 -*-
-"""
-    Pygments
-    ~~~~~~~~
-
-    Pygments is a syntax highlighting package written in Python.
-
-    It is a generic syntax highlighter for general use in all kinds of software
-    such as forum systems, wikis or other applications that need to prettify
-    source code. Highlights are:
-
-    * a wide range of common languages and markup formats is supported
-    * special attention is paid to details, increasing quality by a fair amount
-    * support for new languages and formats are added easily
-    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
-      formats that PIL supports, and ANSI sequences
-    * it is usable as a command-line tool and as a library
-    * ... and it highlights even Brainfuck!
-
-    The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``.
-
-    .. _Pygments master branch:
-       https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-import sys
-from io import StringIO, BytesIO
-
-__version__ = '2.6.1'
-__docformat__ = 'restructuredtext'
-
-__all__ = ['lex', 'format', 'highlight']
-
-
-def lex(code, lexer):
-    """
-    Lex ``code`` with ``lexer`` and return an iterable of tokens.
-    """
-    try:
-        return lexer.get_tokens(code)
-    except TypeError as err:
-        if (isinstance(err.args[0], str) and
-            ('unbound method get_tokens' in err.args[0] or
-             'missing 1 required positional argument' in err.args[0])):
-            raise TypeError('lex() argument must be a lexer instance, '
-                            'not a class')
-        raise
-
-
-def format(tokens, formatter, outfile=None):  # pylint: disable=redefined-builtin
-    """
-    Format a tokenlist ``tokens`` with the formatter ``formatter``.
-
-    If ``outfile`` is given and a valid file object (an object
-    with a ``write`` method), the result will be written to it, otherwise
-    it is returned as a string.
-    """
-    try:
-        if not outfile:
-            realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO()
-            formatter.format(tokens, realoutfile)
-            return realoutfile.getvalue()
-        else:
-            formatter.format(tokens, outfile)
-    except TypeError as err:
-        if (isinstance(err.args[0], str) and
-            ('unbound method format' in err.args[0] or
-             'missing 1 required positional argument' in err.args[0])):
-            raise TypeError('format() argument must be a formatter instance, '
-                            'not a class')
-        raise
-
-
-def highlight(code, lexer, formatter, outfile=None):
-    """
-    Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``.
-
-    If ``outfile`` is given and a valid file object (an object
-    with a ``write`` method), the result will be written to it, otherwise
-    it is returned as a string.
-    """
-    return format(lex(code, lexer), formatter, outfile)
-
+# -*- coding: utf-8 -*-
+"""
+    Pygments
+    ~~~~~~~~
+
+    Pygments is a syntax highlighting package written in Python.
+
+    It is a generic syntax highlighter for general use in all kinds of software
+    such as forum systems, wikis or other applications that need to prettify
+    source code. Highlights are:
+
+    * a wide range of common languages and markup formats is supported
+    * special attention is paid to details, increasing quality by a fair amount
+    * support for new languages and formats are added easily
+    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
+      formats that PIL supports, and ANSI sequences
+    * it is usable as a command-line tool and as a library
+    * ... and it highlights even Brainfuck!
+
+    The `Pygments master branch`_ is installable with ``easy_install Pygments==dev``.
+
+    .. _Pygments master branch:
+       https://github.com/pygments/pygments/archive/master.zip#egg=Pygments-dev
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+import sys
+from io import StringIO, BytesIO
+
+__version__ = '2.7.0'
+__docformat__ = 'restructuredtext'
+
+__all__ = ['lex', 'format', 'highlight']
+
+
+def lex(code, lexer):
+    """
+    Lex ``code`` with ``lexer`` and return an iterable of tokens.
+    """
+    try:
+        return lexer.get_tokens(code)
+    except TypeError as err:
+        if (isinstance(err.args[0], str) and
+            ('unbound method get_tokens' in err.args[0] or
+             'missing 1 required positional argument' in err.args[0])):
+            raise TypeError('lex() argument must be a lexer instance, '
+                            'not a class')
+        raise
+
+
+def format(tokens, formatter, outfile=None):  # pylint: disable=redefined-builtin
+    """
+    Format a tokenlist ``tokens`` with the formatter ``formatter``.
+
+    If ``outfile`` is given and a valid file object (an object
+    with a ``write`` method), the result will be written to it, otherwise
+    it is returned as a string.
+    """
+    try:
+        if not outfile:
+            realoutfile = getattr(formatter, 'encoding', None) and BytesIO() or StringIO()
+            formatter.format(tokens, realoutfile)
+            return realoutfile.getvalue()
+        else:
+            formatter.format(tokens, outfile)
+    except TypeError as err:
+        if (isinstance(err.args[0], str) and
+            ('unbound method format' in err.args[0] or
+             'missing 1 required positional argument' in err.args[0])):
+            raise TypeError('format() argument must be a formatter instance, '
+                            'not a class')
+        raise
+
+
+def highlight(code, lexer, formatter, outfile=None):
+    """
+    Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``.
+
+    If ``outfile`` is given and a valid file object (an object
+    with a ``write`` method), the result will be written to it, otherwise
+    it is returned as a string.
+    """
+    return format(lex(code, lexer), formatter, outfile)
+
--- a/eric6/ThirdParty/Pygments/pygments/__main__.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/__main__.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,18 +1,18 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.__main__
-    ~~~~~~~~~~~~~~~~~
-
-    Main entry point for ``python -m pygments``.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import sys
-import pygments.cmdline
-
-try:
-    sys.exit(pygments.cmdline.main(sys.argv))
-except KeyboardInterrupt:
-    sys.exit(1)
+# -*- coding: utf-8 -*-
+"""
+    pygments.__main__
+    ~~~~~~~~~~~~~~~~~
+
+    Main entry point for ``python -m pygments``.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import sys
+import pygments.cmdline
+
+try:
+    sys.exit(pygments.cmdline.main(sys.argv))
+except KeyboardInterrupt:
+    sys.exit(1)
--- a/eric6/ThirdParty/Pygments/pygments/cmdline.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/cmdline.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,575 +1,582 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.cmdline
-    ~~~~~~~~~~~~~~~~
-
-    Command line interface.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-import getopt
-from textwrap import dedent
-
-from pygments import __version__, highlight
-from pygments.util import ClassNotFound, OptionError, docstring_headline, \
-    guess_decode, guess_decode_from_terminal, terminal_encoding, \
-    UnclosingTextIOWrapper
-from pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
-    load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
-from pygments.lexers.special import TextLexer
-from pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
-from pygments.formatters import get_all_formatters, get_formatter_by_name, \
-    load_formatter_from_file, get_formatter_for_filename, find_formatter_class
-from pygments.formatters.terminal import TerminalFormatter
-from pygments.formatters.terminal256 import Terminal256Formatter
-from pygments.filters import get_all_filters, find_filter_class
-from pygments.styles import get_all_styles, get_style_by_name
-
-
-USAGE = """\
-Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
-          [-O <options>] [-P <option=value>] [-s] [-v] [-x] [-o <outfile>] [<infile>]
-
-       %s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
-       %s -L [<which> ...]
-       %s -N <filename>
-       %s -H <type> <name>
-       %s -h | -V
-
-Highlight the input file and write the result to <outfile>.
-
-If no input file is given, use stdin, if -o is not given, use stdout.
-
-If -s is passed, lexing will be done in "streaming" mode, reading and
-highlighting one line at a time.  This will only work properly with
-lexers that have no constructs spanning multiple lines!
-
-<lexer> is a lexer name (query all lexer names with -L). If -l is not
-given, the lexer is guessed from the extension of the input file name
-(this obviously doesn't work if the input is stdin).  If -g is passed,
-attempt to guess the lexer from the file contents, or pass through as
-plain text if this fails (this can work for stdin).
-
-Likewise, <formatter> is a formatter name, and will be guessed from
-the extension of the output file name. If no output file is given,
-the terminal formatter will be used by default.
-
-The additional option -x allows custom lexers and formatters to be
-loaded from a .py file relative to the current working directory. For
-example, ``-l ./customlexer.py -x``. By default, this option expects a
-file with a class named CustomLexer or CustomFormatter; you can also
-specify your own class name with a colon (``-l ./lexer.py:MyLexer``).
-Users should be very careful not to use this option with untrusted files,
-because it will import and run them.
-
-With the -O option, you can give the lexer and formatter a comma-
-separated list of options, e.g. ``-O bg=light,python=cool``.
-
-The -P option adds lexer and formatter options like the -O option, but
-you can only give one option per -P. That way, the option value may
-contain commas and equals signs, which it can't with -O, e.g.
-``-P "heading=Pygments, the Python highlighter"``.
-
-With the -F option, you can add filters to the token stream, you can
-give options in the same way as for -O after a colon (note: there must
-not be spaces around the colon).
-
-The -O, -P and -F options can be given multiple times.
-
-With the -S option, print out style definitions for style <style>
-for formatter <formatter>. The argument given by -a is formatter
-dependent.
-
-The -L option lists lexers, formatters, styles or filters -- set
-`which` to the thing you want to list (e.g. "styles"), or omit it to
-list everything.
-
-The -N option guesses and prints out a lexer name based solely on
-the given filename. It does not take input or highlight anything.
-If no specific lexer can be determined "text" is returned.
-
-The -H option prints detailed help for the object <name> of type <type>,
-where <type> is one of "lexer", "formatter" or "filter".
-
-The -s option processes lines one at a time until EOF, rather than
-waiting to process the entire file.  This only works for stdin, and
-is intended for streaming input such as you get from 'tail -f'.
-Example usage: "tail -f sql.log | pygmentize -s -l sql"
-
-The -v option prints a detailed traceback on unhandled exceptions,
-which is useful for debugging and bug reports.
-
-The -h option prints this help.
-The -V option prints the package version.
-"""
-
-
-def _parse_options(o_strs):
-    opts = {}
-    if not o_strs:
-        return opts
-    for o_str in o_strs:
-        if not o_str.strip():
-            continue
-        o_args = o_str.split(',')
-        for o_arg in o_args:
-            o_arg = o_arg.strip()
-            try:
-                o_key, o_val = o_arg.split('=', 1)
-                o_key = o_key.strip()
-                o_val = o_val.strip()
-            except ValueError:
-                opts[o_arg] = True
-            else:
-                opts[o_key] = o_val
-    return opts
-
-
-def _parse_filters(f_strs):
-    filters = []
-    if not f_strs:
-        return filters
-    for f_str in f_strs:
-        if ':' in f_str:
-            fname, fopts = f_str.split(':', 1)
-            filters.append((fname, _parse_options([fopts])))
-        else:
-            filters.append((f_str, {}))
-    return filters
-
-
-def _print_help(what, name):
-    try:
-        if what == 'lexer':
-            cls = get_lexer_by_name(name)
-            print("Help on the %s lexer:" % cls.name)
-            print(dedent(cls.__doc__))
-        elif what == 'formatter':
-            cls = find_formatter_class(name)
-            print("Help on the %s formatter:" % cls.name)
-            print(dedent(cls.__doc__))
-        elif what == 'filter':
-            cls = find_filter_class(name)
-            print("Help on the %s filter:" % name)
-            print(dedent(cls.__doc__))
-        return 0
-    except (AttributeError, ValueError):
-        print("%s not found!" % what, file=sys.stderr)
-        return 1
-
-
-def _print_list(what):
-    if what == 'lexer':
-        print()
-        print("Lexers:")
-        print("~~~~~~~")
-
-        info = []
-        for fullname, names, exts, _ in get_all_lexers():
-            tup = (', '.join(names)+':', fullname,
-                   exts and '(filenames ' + ', '.join(exts) + ')' or '')
-            info.append(tup)
-        info.sort()
-        for i in info:
-            print(('* %s\n    %s %s') % i)
-
-    elif what == 'formatter':
-        print()
-        print("Formatters:")
-        print("~~~~~~~~~~~")
-
-        info = []
-        for cls in get_all_formatters():
-            doc = docstring_headline(cls)
-            tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
-                   '(filenames ' + ', '.join(cls.filenames) + ')' or '')
-            info.append(tup)
-        info.sort()
-        for i in info:
-            print(('* %s\n    %s %s') % i)
-
-    elif what == 'filter':
-        print()
-        print("Filters:")
-        print("~~~~~~~~")
-
-        for name in get_all_filters():
-            cls = find_filter_class(name)
-            print("* " + name + ':')
-            print("    %s" % docstring_headline(cls))
-
-    elif what == 'style':
-        print()
-        print("Styles:")
-        print("~~~~~~~")
-
-        for name in get_all_styles():
-            cls = get_style_by_name(name)
-            print("* " + name + ':')
-            print("    %s" % docstring_headline(cls))
-
-
-def main_inner(popts, args, usage):
-    opts = {}
-    O_opts = []
-    P_opts = []
-    F_opts = []
-    for opt, arg in popts:
-        if opt == '-O':
-            O_opts.append(arg)
-        elif opt == '-P':
-            P_opts.append(arg)
-        elif opt == '-F':
-            F_opts.append(arg)
-        opts[opt] = arg
-
-    if opts.pop('-h', None) is not None:
-        print(usage)
-        return 0
-
-    if opts.pop('-V', None) is not None:
-        print('Pygments version %s, (c) 2006-2019 by Georg Brandl.' % __version__)
-        return 0
-
-    # handle ``pygmentize -L``
-    L_opt = opts.pop('-L', None)
-    if L_opt is not None:
-        if opts:
-            print(usage, file=sys.stderr)
-            return 2
-
-        # print version
-        main(['', '-V'])
-        if not args:
-            args = ['lexer', 'formatter', 'filter', 'style']
-        for arg in args:
-            _print_list(arg.rstrip('s'))
-        return 0
-
-    # handle ``pygmentize -H``
-    H_opt = opts.pop('-H', None)
-    if H_opt is not None:
-        if opts or len(args) != 2:
-            print(usage, file=sys.stderr)
-            return 2
-
-        what, name = args  # pylint: disable=unbalanced-tuple-unpacking
-        if what not in ('lexer', 'formatter', 'filter'):
-            print(usage, file=sys.stderr)
-            return 2
-
-        return _print_help(what, name)
-
-    # parse -O options
-    parsed_opts = _parse_options(O_opts)
-    opts.pop('-O', None)
-
-    # parse -P options
-    for p_opt in P_opts:
-        try:
-            name, value = p_opt.split('=', 1)
-        except ValueError:
-            parsed_opts[p_opt] = True
-        else:
-            parsed_opts[name] = value
-    opts.pop('-P', None)
-
-    # encodings
-    inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding'))
-    outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding'))
-
-    # handle ``pygmentize -N``
-    infn = opts.pop('-N', None)
-    if infn is not None:
-        lexer = find_lexer_class_for_filename(infn)
-        if lexer is None:
-            lexer = TextLexer
-
-        print(lexer.aliases[0])
-        return 0
-
-    # handle ``pygmentize -S``
-    S_opt = opts.pop('-S', None)
-    a_opt = opts.pop('-a', None)
-    if S_opt is not None:
-        f_opt = opts.pop('-f', None)
-        if not f_opt:
-            print(usage, file=sys.stderr)
-            return 2
-        if opts or args:
-            print(usage, file=sys.stderr)
-            return 2
-
-        try:
-            parsed_opts['style'] = S_opt
-            fmter = get_formatter_by_name(f_opt, **parsed_opts)
-        except ClassNotFound as err:
-            print(err, file=sys.stderr)
-            return 1
-
-        print(fmter.get_style_defs(a_opt or ''))
-        return 0
-
-    # if no -S is given, -a is not allowed
-    if a_opt is not None:
-        print(usage, file=sys.stderr)
-        return 2
-
-    # parse -F options
-    F_opts = _parse_filters(F_opts)
-    opts.pop('-F', None)
-
-    allow_custom_lexer_formatter = False
-    # -x: allow custom (eXternal) lexers and formatters
-    if opts.pop('-x', None) is not None:
-        allow_custom_lexer_formatter = True
-
-    # select lexer
-    lexer = None
-
-    # given by name?
-    lexername = opts.pop('-l', None)
-    if lexername:
-        # custom lexer, located relative to user's cwd
-        if allow_custom_lexer_formatter and '.py' in lexername:
-            try:
-                filename = None
-                name = None
-                if ':' in lexername:
-                    filename, name = lexername.rsplit(':', 1)
-
-                    if '.py' in name:
-                        # This can happen on Windows: If the lexername is
-                        # C:\lexer.py -- return to normal load path in that case
-                        name = None
-
-                if filename and name:
-                    lexer = load_lexer_from_file(filename, name,
-                                                 **parsed_opts)
-                else:
-                    lexer = load_lexer_from_file(lexername, **parsed_opts)
-            except ClassNotFound as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-        else:
-            try:
-                lexer = get_lexer_by_name(lexername, **parsed_opts)
-            except (OptionError, ClassNotFound) as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-
-    # read input code
-    code = None
-
-    if args:
-        if len(args) > 1:
-            print(usage, file=sys.stderr)
-            return 2
-
-        if '-s' in opts:
-            print('Error: -s option not usable when input file specified',
-                  file=sys.stderr)
-            return 2
-
-        infn = args[0]
-        try:
-            with open(infn, 'rb') as infp:
-                code = infp.read()
-        except Exception as err:
-            print('Error: cannot read infile:', err, file=sys.stderr)
-            return 1
-        if not inencoding:
-            code, inencoding = guess_decode(code)
-
-        # do we have to guess the lexer?
-        if not lexer:
-            try:
-                lexer = get_lexer_for_filename(infn, code, **parsed_opts)
-            except ClassNotFound as err:
-                if '-g' in opts:
-                    try:
-                        lexer = guess_lexer(code, **parsed_opts)
-                    except ClassNotFound:
-                        lexer = TextLexer(**parsed_opts)
-                else:
-                    print('Error:', err, file=sys.stderr)
-                    return 1
-            except OptionError as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-
-    elif '-s' not in opts:  # treat stdin as full file (-s support is later)
-        # read code from terminal, always in binary mode since we want to
-        # decode ourselves and be tolerant with it
-        code = sys.stdin.buffer.read()  # use .buffer to get a binary stream
-        if not inencoding:
-            code, inencoding = guess_decode_from_terminal(code, sys.stdin)
-            # else the lexer will do the decoding
-        if not lexer:
-            try:
-                lexer = guess_lexer(code, **parsed_opts)
-            except ClassNotFound:
-                lexer = TextLexer(**parsed_opts)
-
-    else:  # -s option needs a lexer with -l
-        if not lexer:
-            print('Error: when using -s a lexer has to be selected with -l',
-                  file=sys.stderr)
-            return 2
-
-    # process filters
-    for fname, fopts in F_opts:
-        try:
-            lexer.add_filter(fname, **fopts)
-        except ClassNotFound as err:
-            print('Error:', err, file=sys.stderr)
-            return 1
-
-    # select formatter
-    outfn = opts.pop('-o', None)
-    fmter = opts.pop('-f', None)
-    if fmter:
-        # custom formatter, located relative to user's cwd
-        if allow_custom_lexer_formatter and '.py' in fmter:
-            try:
-                filename = None
-                name = None
-                if ':' in fmter:
-                    # Same logic as above for custom lexer
-                    filename, name = fmter.rsplit(':', 1)
-
-                    if '.py' in name:
-                        name = None
-
-                if filename and name:
-                    fmter = load_formatter_from_file(filename, name,
-                                    **parsed_opts)
-                else:
-                    fmter = load_formatter_from_file(fmter, **parsed_opts)
-            except ClassNotFound as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-        else:
-            try:
-                fmter = get_formatter_by_name(fmter, **parsed_opts)
-            except (OptionError, ClassNotFound) as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-
-    if outfn:
-        if not fmter:
-            try:
-                fmter = get_formatter_for_filename(outfn, **parsed_opts)
-            except (OptionError, ClassNotFound) as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-        try:
-            outfile = open(outfn, 'wb')
-        except Exception as err:
-            print('Error: cannot open outfile:', err, file=sys.stderr)
-            return 1
-    else:
-        if not fmter:
-            if '256' in os.environ.get('TERM', ''):
-                fmter = Terminal256Formatter(**parsed_opts)
-            else:
-                fmter = TerminalFormatter(**parsed_opts)
-        outfile = sys.stdout.buffer
-
-    # determine output encoding if not explicitly selected
-    if not outencoding:
-        if outfn:
-            # output file? use lexer encoding for now (can still be None)
-            fmter.encoding = inencoding
-        else:
-            # else use terminal encoding
-            fmter.encoding = terminal_encoding(sys.stdout)
-
-    # provide coloring under Windows, if possible
-    if not outfn and sys.platform in ('win32', 'cygwin') and \
-       fmter.name in ('Terminal', 'Terminal256'):  # pragma: no cover
-        # unfortunately colorama doesn't support binary streams on Py3
-        outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding)
-        fmter.encoding = None
-        try:
-            import colorama.initialise
-        except ImportError:
-            pass
-        else:
-            outfile = colorama.initialise.wrap_stream(
-                outfile, convert=None, strip=None, autoreset=False, wrap=True)
-
-    # When using the LaTeX formatter and the option `escapeinside` is
-    # specified, we need a special lexer which collects escaped text
-    # before running the chosen language lexer.
-    escapeinside = parsed_opts.get('escapeinside', '')
-    if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter):
-        left = escapeinside[0]
-        right = escapeinside[1]
-        lexer = LatexEmbeddedLexer(left, right, lexer)
-
-    # ... and do it!
-    if '-s' not in opts:
-        # process whole input as per normal...
-        highlight(code, lexer, fmter, outfile)
-        return 0
-    else:
-        # line by line processing of stdin (eg: for 'tail -f')...
-        try:
-            while 1:
-                line = sys.stdin.buffer.readline()
-                if not line:
-                    break
-                if not inencoding:
-                    line = guess_decode_from_terminal(line, sys.stdin)[0]
-                highlight(line, lexer, fmter, outfile)
-                if hasattr(outfile, 'flush'):
-                    outfile.flush()
-            return 0
-        except KeyboardInterrupt:  # pragma: no cover
-            return 0
-
-
-def main(args=sys.argv):
-    """
-    Main command line entry point.
-    """
-    usage = USAGE % ((args[0],) * 6)
-
-    try:
-        popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:vhVHgsx")
-    except getopt.GetoptError:
-        print(usage, file=sys.stderr)
-        return 2
-
-    try:
-        return main_inner(popts, args, usage)
-    except Exception:
-        if '-v' in dict(popts):
-            print(file=sys.stderr)
-            print('*' * 65, file=sys.stderr)
-            print('An unhandled exception occurred while highlighting.',
-                  file=sys.stderr)
-            print('Please report the whole traceback to the issue tracker at',
-                  file=sys.stderr)
-            print('<https://github.com/pygments/pygments/issues>.',
-                  file=sys.stderr)
-            print('*' * 65, file=sys.stderr)
-            print(file=sys.stderr)
-            raise
-        import traceback
-        info = traceback.format_exception(*sys.exc_info())
-        msg = info[-1].strip()
-        if len(info) >= 3:
-            # extract relevant file and position info
-            msg += '\n   (f%s)' % info[-2].split('\n')[0].strip()[1:]
-        print(file=sys.stderr)
-        print('*** Error while highlighting:', file=sys.stderr)
-        print(msg, file=sys.stderr)
-        print('*** If this is a bug you want to report, please rerun with -v.',
-              file=sys.stderr)
-        return 1
+# -*- coding: utf-8 -*-
+"""
+    pygments.cmdline
+    ~~~~~~~~~~~~~~~~
+
+    Command line interface.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import os
+import sys
+import getopt
+from textwrap import dedent
+
+from pygments import __version__, highlight
+from pygments.util import ClassNotFound, OptionError, docstring_headline, \
+    guess_decode, guess_decode_from_terminal, terminal_encoding, \
+    UnclosingTextIOWrapper
+from pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
+    load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
+from pygments.lexers.special import TextLexer
+from pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
+from pygments.formatters import get_all_formatters, get_formatter_by_name, \
+    load_formatter_from_file, get_formatter_for_filename, find_formatter_class
+from pygments.formatters.terminal import TerminalFormatter
+from pygments.formatters.terminal256 import Terminal256Formatter
+from pygments.filters import get_all_filters, find_filter_class
+from pygments.styles import get_all_styles, get_style_by_name
+
+
+USAGE = """\
+Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
+          [-O <options>] [-P <option=value>] [-s] [-v] [-x] [-o <outfile>] [<infile>]
+
+       %s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
+       %s -L [<which> ...]
+       %s -N <filename>
+       %s -H <type> <name>
+       %s -h | -V
+
+Highlight the input file and write the result to <outfile>.
+
+If no input file is given, use stdin; if -o is not given, use stdout.
+
+If -s is passed, lexing will be done in "streaming" mode, reading and
+highlighting one line at a time.  This will only work properly with
+lexers that have no constructs spanning multiple lines!
+
+<lexer> is a lexer name (query all lexer names with -L). If -l is not
+given, the lexer is guessed from the extension of the input file name
+(this obviously doesn't work if the input is stdin).  If -g is passed,
+attempt to guess the lexer from the file contents, or pass through as
+plain text if this fails (this can work for stdin).
+
+Likewise, <formatter> is a formatter name, and will be guessed from
+the extension of the output file name. If no output file is given,
+the terminal formatter will be used by default.
+
+The additional option -x allows custom lexers and formatters to be
+loaded from a .py file relative to the current working directory. For
+example, ``-l ./customlexer.py -x``. By default, this option expects a
+file with a class named CustomLexer or CustomFormatter; you can also
+specify your own class name with a colon (``-l ./lexer.py:MyLexer``).
+Users should be very careful not to use this option with untrusted files,
+because it will import and run them.
+
+With the -O option, you can give the lexer and formatter a comma-
+separated list of options, e.g. ``-O bg=light,python=cool``.
+
+The -P option adds lexer and formatter options like the -O option, but
+you can only give one option per -P. That way, the option value may
+contain commas and equals signs, which it can't with -O, e.g.
+``-P "heading=Pygments, the Python highlighter"``.
+
+With the -F option, you can add filters to the token stream; options can
+be given in the same way as for -O after a colon (note: there must not
+be spaces around the colon).
+
+The -O, -P and -F options can be given multiple times.
+
+With the -S option, print out style definitions for style <style>
+for formatter <formatter>. The argument given by -a is formatter
+dependent.
+
+The -L option lists lexers, formatters, styles or filters -- set
+`which` to the thing you want to list (e.g. "styles"), or omit it to
+list everything.
+
+The -N option guesses and prints out a lexer name based solely on
+the given filename. It does not take input or highlight anything.
+If no specific lexer can be determined, "text" is returned.
+
+The -H option prints detailed help for the object <name> of type <type>,
+where <type> is one of "lexer", "formatter" or "filter".
+
+The -s option processes lines one at a time until EOF, rather than
+waiting to process the entire file.  This only works for stdin, and
+is intended for streaming input such as you get from 'tail -f'.
+Example usage: "tail -f sql.log | pygmentize -s -l sql"
+
+The -v option prints a detailed traceback on unhandled exceptions,
+which is useful for debugging and bug reports.
+
+The -h option prints this help.
+The -V option prints the package version.
+"""
+
+
+def _parse_options(o_strs):
+    opts = {}
+    if not o_strs:
+        return opts
+    for o_str in o_strs:
+        if not o_str.strip():
+            continue
+        o_args = o_str.split(',')
+        for o_arg in o_args:
+            o_arg = o_arg.strip()
+            try:
+                o_key, o_val = o_arg.split('=', 1)
+                o_key = o_key.strip()
+                o_val = o_val.strip()
+            except ValueError:
+                opts[o_arg] = True
+            else:
+                opts[o_key] = o_val
+    return opts
+
+
+def _parse_filters(f_strs):
+    filters = []
+    if not f_strs:
+        return filters
+    for f_str in f_strs:
+        if ':' in f_str:
+            fname, fopts = f_str.split(':', 1)
+            filters.append((fname, _parse_options([fopts])))
+        else:
+            filters.append((f_str, {}))
+    return filters
+
+
+def _print_help(what, name):
+    try:
+        if what == 'lexer':
+            cls = get_lexer_by_name(name)
+            print("Help on the %s lexer:" % cls.name)
+            print(dedent(cls.__doc__))
+        elif what == 'formatter':
+            cls = find_formatter_class(name)
+            print("Help on the %s formatter:" % cls.name)
+            print(dedent(cls.__doc__))
+        elif what == 'filter':
+            cls = find_filter_class(name)
+            print("Help on the %s filter:" % name)
+            print(dedent(cls.__doc__))
+        return 0
+    except (AttributeError, ValueError):
+        print("%s not found!" % what, file=sys.stderr)
+        return 1
+
+
+def _print_list(what):
+    if what == 'lexer':
+        print()
+        print("Lexers:")
+        print("~~~~~~~")
+
+        info = []
+        for fullname, names, exts, _ in get_all_lexers():
+            tup = (', '.join(names)+':', fullname,
+                   exts and '(filenames ' + ', '.join(exts) + ')' or '')
+            info.append(tup)
+        info.sort()
+        for i in info:
+            print(('* %s\n    %s %s') % i)
+
+    elif what == 'formatter':
+        print()
+        print("Formatters:")
+        print("~~~~~~~~~~~")
+
+        info = []
+        for cls in get_all_formatters():
+            doc = docstring_headline(cls)
+            tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
+                   '(filenames ' + ', '.join(cls.filenames) + ')' or '')
+            info.append(tup)
+        info.sort()
+        for i in info:
+            print(('* %s\n    %s %s') % i)
+
+    elif what == 'filter':
+        print()
+        print("Filters:")
+        print("~~~~~~~~")
+
+        for name in get_all_filters():
+            cls = find_filter_class(name)
+            print("* " + name + ':')
+            print("    %s" % docstring_headline(cls))
+
+    elif what == 'style':
+        print()
+        print("Styles:")
+        print("~~~~~~~")
+
+        for name in get_all_styles():
+            cls = get_style_by_name(name)
+            print("* " + name + ':')
+            print("    %s" % docstring_headline(cls))
+
+
+def main_inner(popts, args, usage):
+    opts = {}
+    O_opts = []
+    P_opts = []
+    F_opts = []
+    for opt, arg in popts:
+        if opt == '-O':
+            O_opts.append(arg)
+        elif opt == '-P':
+            P_opts.append(arg)
+        elif opt == '-F':
+            F_opts.append(arg)
+        opts[opt] = arg
+
+    if opts.pop('-h', None) is not None:
+        print(usage)
+        return 0
+
+    if opts.pop('-V', None) is not None:
+        print('Pygments version %s, (c) 2006-2020 by Georg Brandl.' % __version__)
+        return 0
+
+    # handle ``pygmentize -L``
+    L_opt = opts.pop('-L', None)
+    if L_opt is not None:
+        if opts:
+            print(usage, file=sys.stderr)
+            return 2
+
+        # print version
+        main(['', '-V'])
+        if not args:
+            args = ['lexer', 'formatter', 'filter', 'style']
+        for arg in args:
+            _print_list(arg.rstrip('s'))
+        return 0
+
+    # handle ``pygmentize -H``
+    H_opt = opts.pop('-H', None)
+    if H_opt is not None:
+        if opts or len(args) != 2:
+            print(usage, file=sys.stderr)
+            return 2
+
+        what, name = args  # pylint: disable=unbalanced-tuple-unpacking
+        if what not in ('lexer', 'formatter', 'filter'):
+            print(usage, file=sys.stderr)
+            return 2
+
+        return _print_help(what, name)
+
+    # parse -O options
+    parsed_opts = _parse_options(O_opts)
+    opts.pop('-O', None)
+
+    # parse -P options
+    for p_opt in P_opts:
+        try:
+            name, value = p_opt.split('=', 1)
+        except ValueError:
+            parsed_opts[p_opt] = True
+        else:
+            parsed_opts[name] = value
+    opts.pop('-P', None)
+
+    # encodings
+    inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding'))
+    outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding'))
+
+    # handle ``pygmentize -N``
+    infn = opts.pop('-N', None)
+    if infn is not None:
+        lexer = find_lexer_class_for_filename(infn)
+        if lexer is None:
+            lexer = TextLexer
+
+        print(lexer.aliases[0])
+        return 0
+
+    # handle ``pygmentize -S``
+    S_opt = opts.pop('-S', None)
+    a_opt = opts.pop('-a', None)
+    if S_opt is not None:
+        f_opt = opts.pop('-f', None)
+        if not f_opt:
+            print(usage, file=sys.stderr)
+            return 2
+        if opts or args:
+            print(usage, file=sys.stderr)
+            return 2
+
+        try:
+            parsed_opts['style'] = S_opt
+            fmter = get_formatter_by_name(f_opt, **parsed_opts)
+        except ClassNotFound as err:
+            print(err, file=sys.stderr)
+            return 1
+
+        print(fmter.get_style_defs(a_opt or ''))
+        return 0
+
+    # if no -S is given, -a is not allowed
+    if a_opt is not None:
+        print(usage, file=sys.stderr)
+        return 2
+
+    # parse -F options
+    F_opts = _parse_filters(F_opts)
+    opts.pop('-F', None)
+
+    allow_custom_lexer_formatter = False
+    # -x: allow custom (eXternal) lexers and formatters
+    if opts.pop('-x', None) is not None:
+        allow_custom_lexer_formatter = True
+
+    # select lexer
+    lexer = None
+
+    # given by name?
+    lexername = opts.pop('-l', None)
+    if lexername:
+        # custom lexer, located relative to user's cwd
+        if allow_custom_lexer_formatter and '.py' in lexername:
+            try:
+                filename = None
+                name = None
+                if ':' in lexername:
+                    filename, name = lexername.rsplit(':', 1)
+
+                    if '.py' in name:
+                        # This can happen on Windows: If the lexername is
+                        # C:\lexer.py -- return to normal load path in that case
+                        name = None
+
+                if filename and name:
+                    lexer = load_lexer_from_file(filename, name,
+                                                 **parsed_opts)
+                else:
+                    lexer = load_lexer_from_file(lexername, **parsed_opts)
+            except ClassNotFound as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+        else:
+            try:
+                lexer = get_lexer_by_name(lexername, **parsed_opts)
+            except (OptionError, ClassNotFound) as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+
+    # read input code
+    code = None
+
+    if args:
+        if len(args) > 1:
+            print(usage, file=sys.stderr)
+            return 2
+
+        if '-s' in opts:
+            print('Error: -s option not usable when input file specified',
+                  file=sys.stderr)
+            return 2
+
+        infn = args[0]
+        try:
+            with open(infn, 'rb') as infp:
+                code = infp.read()
+        except Exception as err:
+            print('Error: cannot read infile:', err, file=sys.stderr)
+            return 1
+        if not inencoding:
+            code, inencoding = guess_decode(code)
+
+        # do we have to guess the lexer?
+        if not lexer:
+            try:
+                lexer = get_lexer_for_filename(infn, code, **parsed_opts)
+            except ClassNotFound as err:
+                if '-g' in opts:
+                    try:
+                        lexer = guess_lexer(code, **parsed_opts)
+                    except ClassNotFound:
+                        lexer = TextLexer(**parsed_opts)
+                else:
+                    print('Error:', err, file=sys.stderr)
+                    return 1
+            except OptionError as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+
+    elif '-s' not in opts:  # treat stdin as full file (-s support is later)
+        # read code from terminal, always in binary mode since we want to
+        # decode ourselves and be tolerant with it
+        code = sys.stdin.buffer.read()  # use .buffer to get a binary stream
+        if not inencoding:
+            code, inencoding = guess_decode_from_terminal(code, sys.stdin)
+            # else the lexer will do the decoding
+        if not lexer:
+            try:
+                lexer = guess_lexer(code, **parsed_opts)
+            except ClassNotFound:
+                lexer = TextLexer(**parsed_opts)
+
+    else:  # -s option needs a lexer with -l
+        if not lexer:
+            print('Error: when using -s a lexer has to be selected with -l',
+                  file=sys.stderr)
+            return 2
+
+    # process filters
+    for fname, fopts in F_opts:
+        try:
+            lexer.add_filter(fname, **fopts)
+        except ClassNotFound as err:
+            print('Error:', err, file=sys.stderr)
+            return 1
+
+    # select formatter
+    outfn = opts.pop('-o', None)
+    fmter = opts.pop('-f', None)
+    if fmter:
+        # custom formatter, located relative to user's cwd
+        if allow_custom_lexer_formatter and '.py' in fmter:
+            try:
+                filename = None
+                name = None
+                if ':' in fmter:
+                    # Same logic as above for custom lexer
+                    filename, name = fmter.rsplit(':', 1)
+
+                    if '.py' in name:
+                        name = None
+
+                if filename and name:
+                    fmter = load_formatter_from_file(filename, name,
+                                                     **parsed_opts)
+                else:
+                    fmter = load_formatter_from_file(fmter, **parsed_opts)
+            except ClassNotFound as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+        else:
+            try:
+                fmter = get_formatter_by_name(fmter, **parsed_opts)
+            except (OptionError, ClassNotFound) as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+
+    if outfn:
+        if not fmter:
+            try:
+                fmter = get_formatter_for_filename(outfn, **parsed_opts)
+            except (OptionError, ClassNotFound) as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+        try:
+            outfile = open(outfn, 'wb')
+        except Exception as err:
+            print('Error: cannot open outfile:', err, file=sys.stderr)
+            return 1
+    else:
+        if not fmter:
+            if '256' in os.environ.get('TERM', ''):
+                fmter = Terminal256Formatter(**parsed_opts)
+            else:
+                fmter = TerminalFormatter(**parsed_opts)
+        outfile = sys.stdout.buffer
+
+    # determine output encoding if not explicitly selected
+    if not outencoding:
+        if outfn:
+            # output file? use lexer encoding for now (can still be None)
+            fmter.encoding = inencoding
+        else:
+            # else use terminal encoding
+            fmter.encoding = terminal_encoding(sys.stdout)
+
+    # provide coloring under Windows, if possible
+    if not outfn and sys.platform in ('win32', 'cygwin') and \
+       fmter.name in ('Terminal', 'Terminal256'):  # pragma: no cover
+        # unfortunately colorama doesn't support binary streams on Py3
+        outfile = UnclosingTextIOWrapper(outfile, encoding=fmter.encoding)
+        fmter.encoding = None
+        try:
+            import colorama.initialise
+        except ImportError:
+            pass
+        else:
+            outfile = colorama.initialise.wrap_stream(
+                outfile, convert=None, strip=None, autoreset=False, wrap=True)
+
+    # When using the LaTeX formatter and the option `escapeinside` is
+    # specified, we need a special lexer which collects escaped text
+    # before running the chosen language lexer.
+    escapeinside = parsed_opts.get('escapeinside', '')
+    if len(escapeinside) == 2 and isinstance(fmter, LatexFormatter):
+        left = escapeinside[0]
+        right = escapeinside[1]
+        lexer = LatexEmbeddedLexer(left, right, lexer)
+
+    # ... and do it!
+    if '-s' not in opts:
+        # process whole input as per normal...
+        try:
+            highlight(code, lexer, fmter, outfile)
+        finally:
+            if outfn:
+                outfile.close()
+        return 0
+    else:
+        # line by line processing of stdin (eg: for 'tail -f')...
+        try:
+            while 1:
+                line = sys.stdin.buffer.readline()
+                if not line:
+                    break
+                if not inencoding:
+                    line = guess_decode_from_terminal(line, sys.stdin)[0]
+                highlight(line, lexer, fmter, outfile)
+                if hasattr(outfile, 'flush'):
+                    outfile.flush()
+            return 0
+        except KeyboardInterrupt:  # pragma: no cover
+            return 0
+        finally:
+            if outfn:
+                outfile.close()
+
+
+def main(args=sys.argv):
+    """
+    Main command line entry point.
+    """
+    usage = USAGE % ((args[0],) * 6)
+
+    try:
+        popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:vhVHgsx")
+    except getopt.GetoptError:
+        print(usage, file=sys.stderr)
+        return 2
+
+    try:
+        return main_inner(popts, args, usage)
+    except Exception:
+        if '-v' in dict(popts):
+            print(file=sys.stderr)
+            print('*' * 65, file=sys.stderr)
+            print('An unhandled exception occurred while highlighting.',
+                  file=sys.stderr)
+            print('Please report the whole traceback to the issue tracker at',
+                  file=sys.stderr)
+            print('<https://github.com/pygments/pygments/issues>.',
+                  file=sys.stderr)
+            print('*' * 65, file=sys.stderr)
+            print(file=sys.stderr)
+            raise
+        import traceback
+        info = traceback.format_exception(*sys.exc_info())
+        msg = info[-1].strip()
+        if len(info) >= 3:
+            # extract relevant file and position info
+            msg += '\n   (f%s)' % info[-2].split('\n')[0].strip()[1:]
+        print(file=sys.stderr)
+        print('*** Error while highlighting:', file=sys.stderr)
+        print(msg, file=sys.stderr)
+        print('*** If this is a bug you want to report, please rerun with -v.',
+              file=sys.stderr)
+        return 1
--- a/eric6/ThirdParty/Pygments/pygments/console.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/console.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,71 +1,71 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.console
-    ~~~~~~~~~~~~~~~~
-
-    Format colored console output.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-esc = "\x1b["
-
-codes = {}
-codes[""] = ""
-codes["reset"] = esc + "39;49;00m"
-
-codes["bold"] = esc + "01m"
-codes["faint"] = esc + "02m"
-codes["standout"] = esc + "03m"
-codes["underline"] = esc + "04m"
-codes["blink"] = esc + "05m"
-codes["overline"] = esc + "06m"
-
-dark_colors = ["black", "red", "green", "yellow", "blue",
-               "magenta", "cyan", "gray"]
-light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue",
-                "brightmagenta", "brightcyan", "white"]
-
-x = 30
-for d, l in zip(dark_colors, light_colors):
-    codes[d] = esc + "%im" % x
-    codes[l] = esc + "%im" % (60 + x)
-    x += 1
-
-del d, l, x
-
-codes["white"] = codes["bold"]
-
-
-def reset_color():
-    return codes["reset"]
-
-
-def colorize(color_key, text):
-    return codes[color_key] + text + codes["reset"]
-
-
-def ansiformat(attr, text):
-    """
-    Format ``text`` with a color and/or some attributes::
-
-        color       normal color
-        *color*     bold color
-        _color_     underlined color
-        +color+     blinking color
-    """
-    result = []
-    if attr[:1] == attr[-1:] == '+':
-        result.append(codes['blink'])
-        attr = attr[1:-1]
-    if attr[:1] == attr[-1:] == '*':
-        result.append(codes['bold'])
-        attr = attr[1:-1]
-    if attr[:1] == attr[-1:] == '_':
-        result.append(codes['underline'])
-        attr = attr[1:-1]
-    result.append(codes[attr])
-    result.append(text)
-    result.append(codes['reset'])
-    return ''.join(result)
+# -*- coding: utf-8 -*-
+"""
+    pygments.console
+    ~~~~~~~~~~~~~~~~
+
+    Format colored console output.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+esc = "\x1b["
+
+codes = {}
+codes[""] = ""
+codes["reset"] = esc + "39;49;00m"
+
+codes["bold"] = esc + "01m"
+codes["faint"] = esc + "02m"
+codes["standout"] = esc + "03m"
+codes["underline"] = esc + "04m"
+codes["blink"] = esc + "05m"
+codes["overline"] = esc + "06m"
+
+dark_colors = ["black", "red", "green", "yellow", "blue",
+               "magenta", "cyan", "gray"]
+light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue",
+                "brightmagenta", "brightcyan", "white"]
+
+x = 30
+for d, l in zip(dark_colors, light_colors):
+    codes[d] = esc + "%im" % x
+    codes[l] = esc + "%im" % (60 + x)
+    x += 1
+
+del d, l, x
+
+codes["white"] = codes["bold"]
+
+
+def reset_color():
+    return codes["reset"]
+
+
+def colorize(color_key, text):
+    return codes[color_key] + text + codes["reset"]
+
+
+def ansiformat(attr, text):
+    """
+    Format ``text`` with a color and/or some attributes::
+
+        color       normal color
+        *color*     bold color
+        _color_     underlined color
+        +color+     blinking color
+    """
+    result = []
+    if attr[:1] == attr[-1:] == '+':
+        result.append(codes['blink'])
+        attr = attr[1:-1]
+    if attr[:1] == attr[-1:] == '*':
+        result.append(codes['bold'])
+        attr = attr[1:-1]
+    if attr[:1] == attr[-1:] == '_':
+        result.append(codes['underline'])
+        attr = attr[1:-1]
+    result.append(codes[attr])
+    result.append(text)
+    result.append(codes['reset'])
+    return ''.join(result)
--- a/eric6/ThirdParty/Pygments/pygments/filter.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/filter.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,74 +1,72 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.filter
-    ~~~~~~~~~~~~~~~
-
-    Module that implements the default filter.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-
-def apply_filters(stream, filters, lexer=None):
-    """
-    Use this method to apply an iterable of filters to
-    a stream. If lexer is given it's forwarded to the
-    filter, otherwise the filter receives `None`.
-    """
-    def _apply(filter_, stream):
-        for token in filter_.filter(lexer, stream):
-            yield token
-    for filter_ in filters:
-        stream = _apply(filter_, stream)
-    return stream
-
-
-def simplefilter(f):
-    """
-    Decorator that converts a function into a filter::
-
-        @simplefilter
-        def lowercase(self, lexer, stream, options):
-            for ttype, value in stream:
-                yield ttype, value.lower()
-    """
-    return type(f.__name__, (FunctionFilter,), {
-        '__module__': getattr(f, '__module__'),
-        '__doc__': f.__doc__,
-        'function': f,
-    })
-
-
-class Filter:
-    """
-    Default filter. Subclass this class or use the `simplefilter`
-    decorator to create own filters.
-    """
-
-    def __init__(self, **options):
-        self.options = options
-
-    def filter(self, lexer, stream):
-        raise NotImplementedError()
-
-
-class FunctionFilter(Filter):
-    """
-    Abstract class used by `simplefilter` to create simple
-    function filters on the fly. The `simplefilter` decorator
-    automatically creates subclasses of this class for
-    functions passed to it.
-    """
-    function = None
-
-    def __init__(self, **options):
-        if not hasattr(self, 'function'):
-            raise TypeError('%r used without bound function' %
-                            self.__class__.__name__)
-        Filter.__init__(self, **options)
-
-    def filter(self, lexer, stream):
-        # pylint: disable=not-callable
-        for ttype, value in self.function(lexer, stream, self.options):
-            yield ttype, value
+# -*- coding: utf-8 -*-
+"""
+    pygments.filter
+    ~~~~~~~~~~~~~~~
+
+    Module that implements the default filter.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+
+def apply_filters(stream, filters, lexer=None):
+    """
+    Use this method to apply an iterable of filters to
+    a stream. If lexer is given it's forwarded to the
+    filter, otherwise the filter receives `None`.
+    """
+    def _apply(filter_, stream):
+        yield from filter_.filter(lexer, stream)
+    for filter_ in filters:
+        stream = _apply(filter_, stream)
+    return stream
+
+
+def simplefilter(f):
+    """
+    Decorator that converts a function into a filter::
+
+        @simplefilter
+        def lowercase(self, lexer, stream, options):
+            for ttype, value in stream:
+                yield ttype, value.lower()
+    """
+    return type(f.__name__, (FunctionFilter,), {
+        '__module__': getattr(f, '__module__'),
+        '__doc__': f.__doc__,
+        'function': f,
+    })
+
+
+class Filter:
+    """
+    Default filter. Subclass this class or use the `simplefilter`
+    decorator to create your own filters.
+    """
+
+    def __init__(self, **options):
+        self.options = options
+
+    def filter(self, lexer, stream):
+        raise NotImplementedError()
+
+
+class FunctionFilter(Filter):
+    """
+    Abstract class used by `simplefilter` to create simple
+    function filters on the fly. The `simplefilter` decorator
+    automatically creates subclasses of this class for
+    functions passed to it.
+    """
+    function = None
+
+    def __init__(self, **options):
+        if not hasattr(self, 'function'):
+            raise TypeError('%r used without bound function' %
+                            self.__class__.__name__)
+        Filter.__init__(self, **options)
+
+    def filter(self, lexer, stream):
+        # pylint: disable=not-callable
+        yield from self.function(lexer, stream, self.options)
--- a/eric6/ThirdParty/Pygments/pygments/filters/__init__.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/filters/__init__.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,351 +1,938 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.filters
-    ~~~~~~~~~~~~~~~~
-
-    Module containing filter lookup functions and default
-    filters.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import re
-
-from pygments.token import String, Comment, Keyword, Name, Error, Whitespace, \
-    string_to_tokentype
-from pygments.filter import Filter
-from pygments.util import get_list_opt, get_int_opt, get_bool_opt, \
-    get_choice_opt, ClassNotFound, OptionError
-from pygments.plugin import find_plugin_filters
-
-
-def find_filter_class(filtername):
-    """Lookup a filter by name. Return None if not found."""
-    if filtername in FILTERS:
-        return FILTERS[filtername]
-    for name, cls in find_plugin_filters():
-        if name == filtername:
-            return cls
-    return None
-
-
-def get_filter_by_name(filtername, **options):
-    """Return an instantiated filter.
-
-    Options are passed to the filter initializer if wanted.
-    Raise a ClassNotFound if not found.
-    """
-    cls = find_filter_class(filtername)
-    if cls:
-        return cls(**options)
-    else:
-        raise ClassNotFound('filter %r not found' % filtername)
-
-
-def get_all_filters():
-    """Return a generator of all filter names."""
-    for name in FILTERS:
-        yield name
-    for name, _ in find_plugin_filters():
-        yield name
-
-
-def _replace_special(ttype, value, regex, specialttype,
-                     replacefunc=lambda x: x):
-    last = 0
-    for match in regex.finditer(value):
-        start, end = match.start(), match.end()
-        if start != last:
-            yield ttype, value[last:start]
-        yield specialttype, replacefunc(value[start:end])
-        last = end
-    if last != len(value):
-        yield ttype, value[last:]
-
-
-class CodeTagFilter(Filter):
-    """Highlight special code tags in comments and docstrings.
-
-    Options accepted:
-
-    `codetags` : list of strings
-       A list of strings that are flagged as code tags.  The default is to
-       highlight ``XXX``, ``TODO``, ``BUG`` and ``NOTE``.
-    """
-
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        tags = get_list_opt(options, 'codetags',
-                            ['XXX', 'TODO', 'BUG', 'NOTE'])
-        self.tag_re = re.compile(r'\b(%s)\b' % '|'.join([
-            re.escape(tag) for tag in tags if tag
-        ]))
-
-    def filter(self, lexer, stream):
-        regex = self.tag_re
-        for ttype, value in stream:
-            if ttype in String.Doc or \
-               ttype in Comment and \
-               ttype not in Comment.Preproc:
-                for sttype, svalue in _replace_special(ttype, value, regex,
-                                                       Comment.Special):
-                    yield sttype, svalue
-            else:
-                yield ttype, value
-
-
-class KeywordCaseFilter(Filter):
-    """Convert keywords to lowercase or uppercase or capitalize them, which
-    means first letter uppercase, rest lowercase.
-
-    This can be useful e.g. if you highlight Pascal code and want to adapt the
-    code to your styleguide.
-
-    Options accepted:
-
-    `case` : string
-       The casing to convert keywords to. Must be one of ``'lower'``,
-       ``'upper'`` or ``'capitalize'``.  The default is ``'lower'``.
-    """
-
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        case = get_choice_opt(options, 'case',
-                              ['lower', 'upper', 'capitalize'], 'lower')
-        self.convert = getattr(str, case)
-
-    def filter(self, lexer, stream):
-        for ttype, value in stream:
-            if ttype in Keyword:
-                yield ttype, self.convert(value)
-            else:
-                yield ttype, value
-
-
-class NameHighlightFilter(Filter):
-    """Highlight a normal Name (and Name.*) token with a different token type.
-
-    Example::
-
-        filter = NameHighlightFilter(
-            names=['foo', 'bar', 'baz'],
-            tokentype=Name.Function,
-        )
-
-    This would highlight the names "foo", "bar" and "baz"
-    as functions. `Name.Function` is the default token type.
-
-    Options accepted:
-
-    `names` : list of strings
-      A list of names that should be given the different token type.
-      There is no default.
-    `tokentype` : TokenType or string
-      A token type or a string containing a token type name that is
-      used for highlighting the strings in `names`.  The default is
-      `Name.Function`.
-    """
-
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        self.names = set(get_list_opt(options, 'names', []))
-        tokentype = options.get('tokentype')
-        if tokentype:
-            self.tokentype = string_to_tokentype(tokentype)
-        else:
-            self.tokentype = Name.Function
-
-    def filter(self, lexer, stream):
-        for ttype, value in stream:
-            if ttype in Name and value in self.names:
-                yield self.tokentype, value
-            else:
-                yield ttype, value
-
-
-class ErrorToken(Exception):
-    pass
-
-
-class RaiseOnErrorTokenFilter(Filter):
-    """Raise an exception when the lexer generates an error token.
-
-    Options accepted:
-
-    `excclass` : Exception class
-      The exception class to raise.
-      The default is `pygments.filters.ErrorToken`.
-
-    .. versionadded:: 0.8
-    """
-
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        self.exception = options.get('excclass', ErrorToken)
-        try:
-            # issubclass() will raise TypeError if first argument is not a class
-            if not issubclass(self.exception, Exception):
-                raise TypeError
-        except TypeError:
-            raise OptionError('excclass option is not an exception class')
-
-    def filter(self, lexer, stream):
-        for ttype, value in stream:
-            if ttype is Error:
-                raise self.exception(value)
-            yield ttype, value
-
-
-class VisibleWhitespaceFilter(Filter):
-    """Convert tabs, newlines and/or spaces to visible characters.
-
-    Options accepted:
-
-    `spaces` : string or bool
-      If this is a one-character string, spaces will be replaced by this string.
-      If it is another true value, spaces will be replaced by ``·`` (unicode
-      MIDDLE DOT).  If it is a false value, spaces will not be replaced.  The
-      default is ``False``.
-    `tabs` : string or bool
-      The same as for `spaces`, but the default replacement character is ``»``
-      (unicode RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK).  The default value
-      is ``False``.  Note: this will not work if the `tabsize` option for the
-      lexer is nonzero, as tabs will already have been expanded then.
-    `tabsize` : int
-      If tabs are to be replaced by this filter (see the `tabs` option), this
-      is the total number of characters that a tab should be expanded to.
-      The default is ``8``.
-    `newlines` : string or bool
-      The same as for `spaces`, but the default replacement character is ``¶``
-      (unicode PILCROW SIGN).  The default value is ``False``.
-    `wstokentype` : bool
-      If true, give whitespace the special `Whitespace` token type.  This allows
-      styling the visible whitespace differently (e.g. greyed out), but it can
-      disrupt background colors.  The default is ``True``.
-
-    .. versionadded:: 0.8
-    """
-
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        for name, default in [('spaces',   u'·'),
-                              ('tabs',     u'»'),
-                              ('newlines', u'¶')]:
-            opt = options.get(name, False)
-            if isinstance(opt, str) and len(opt) == 1:
-                setattr(self, name, opt)
-            else:
-                setattr(self, name, (opt and default or ''))
-        tabsize = get_int_opt(options, 'tabsize', 8)
-        if self.tabs:
-            self.tabs += ' ' * (tabsize - 1)
-        if self.newlines:
-            self.newlines += '\n'
-        self.wstt = get_bool_opt(options, 'wstokentype', True)
-
-    def filter(self, lexer, stream):
-        if self.wstt:
-            spaces = self.spaces or u' '
-            tabs = self.tabs or u'\t'
-            newlines = self.newlines or u'\n'
-            regex = re.compile(r'\s')
-
-            def replacefunc(wschar):
-                if wschar == ' ':
-                    return spaces
-                elif wschar == '\t':
-                    return tabs
-                elif wschar == '\n':
-                    return newlines
-                return wschar
-
-            for ttype, value in stream:
-                for sttype, svalue in _replace_special(ttype, value, regex,
-                                                       Whitespace, replacefunc):
-                    yield sttype, svalue
-        else:
-            spaces, tabs, newlines = self.spaces, self.tabs, self.newlines
-            # simpler processing
-            for ttype, value in stream:
-                if spaces:
-                    value = value.replace(' ', spaces)
-                if tabs:
-                    value = value.replace('\t', tabs)
-                if newlines:
-                    value = value.replace('\n', newlines)
-                yield ttype, value
-
-
-class GobbleFilter(Filter):
-    """Gobbles source code lines (eats initial characters).
-
-    This filter drops the first ``n`` characters off every line of code.  This
-    may be useful when the source code fed to the lexer is indented by a fixed
-    amount of space that isn't desired in the output.
-
-    Options accepted:
-
-    `n` : int
-       The number of characters to gobble.
-
-    .. versionadded:: 1.2
-    """
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-        self.n = get_int_opt(options, 'n', 0)
-
-    def gobble(self, value, left):
-        if left < len(value):
-            return value[left:], 0
-        else:
-            return u'', left - len(value)
-
-    def filter(self, lexer, stream):
-        n = self.n
-        left = n  # How many characters left to gobble.
-        for ttype, value in stream:
-            # Remove ``left`` tokens from first line, ``n`` from all others.
-            parts = value.split('\n')
-            (parts[0], left) = self.gobble(parts[0], left)
-            for i in range(1, len(parts)):
-                (parts[i], left) = self.gobble(parts[i], n)
-            value = u'\n'.join(parts)
-
-            if value != '':
-                yield ttype, value
-
-
-class TokenMergeFilter(Filter):
-    """Merges consecutive tokens with the same token type in the output
-    stream of a lexer.
-
-    .. versionadded:: 1.2
-    """
-    def __init__(self, **options):
-        Filter.__init__(self, **options)
-
-    def filter(self, lexer, stream):
-        current_type = None
-        current_value = None
-        for ttype, value in stream:
-            if ttype is current_type:
-                current_value += value
-            else:
-                if current_type is not None:
-                    yield current_type, current_value
-                current_type = ttype
-                current_value = value
-        if current_type is not None:
-            yield current_type, current_value
-
-
-FILTERS = {
-    'codetagify':     CodeTagFilter,
-    'keywordcase':    KeywordCaseFilter,
-    'highlight':      NameHighlightFilter,
-    'raiseonerror':   RaiseOnErrorTokenFilter,
-    'whitespace':     VisibleWhitespaceFilter,
-    'gobble':         GobbleFilter,
-    'tokenmerge':     TokenMergeFilter,
-}
+# -*- coding: utf-8 -*-
+"""
+    pygments.filters
+    ~~~~~~~~~~~~~~~~
+
+    Module containing filter lookup functions and default
+    filters.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import re
+
+from pygments.token import String, Comment, Keyword, Name, Error, Whitespace, \
+    string_to_tokentype
+from pygments.filter import Filter
+from pygments.util import get_list_opt, get_int_opt, get_bool_opt, \
+    get_choice_opt, ClassNotFound, OptionError
+from pygments.plugin import find_plugin_filters
+
+
+def find_filter_class(filtername):
+    """Lookup a filter by name. Return None if not found."""
+    if filtername in FILTERS:
+        return FILTERS[filtername]
+    for name, cls in find_plugin_filters():
+        if name == filtername:
+            return cls
+    return None
+
+
+def get_filter_by_name(filtername, **options):
+    """Return an instantiated filter.
+
+    Options are passed to the filter initializer if wanted.
+    Raise a ClassNotFound if not found.
+    """
+    cls = find_filter_class(filtername)
+    if cls:
+        return cls(**options)
+    else:
+        raise ClassNotFound('filter %r not found' % filtername)
+
+
+def get_all_filters():
+    """Return a generator of all filter names."""
+    yield from FILTERS
+    for name, _ in find_plugin_filters():
+        yield name
+
+
+def _replace_special(ttype, value, regex, specialttype,
+                     replacefunc=lambda x: x):
+    last = 0
+    for match in regex.finditer(value):
+        start, end = match.start(), match.end()
+        if start != last:
+            yield ttype, value[last:start]
+        yield specialttype, replacefunc(value[start:end])
+        last = end
+    if last != len(value):
+        yield ttype, value[last:]
+
+
+class CodeTagFilter(Filter):
+    """Highlight special code tags in comments and docstrings.
+
+    Options accepted:
+
+    `codetags` : list of strings
+       A list of strings that are flagged as code tags.  The default is to
+       highlight ``XXX``, ``TODO``, ``BUG`` and ``NOTE``.
+    """
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        tags = get_list_opt(options, 'codetags',
+                            ['XXX', 'TODO', 'BUG', 'NOTE'])
+        self.tag_re = re.compile(r'\b(%s)\b' % '|'.join([
+            re.escape(tag) for tag in tags if tag
+        ]))
+
+    def filter(self, lexer, stream):
+        regex = self.tag_re
+        for ttype, value in stream:
+            if ttype in String.Doc or \
+               ttype in Comment and \
+               ttype not in Comment.Preproc:
+                yield from _replace_special(ttype, value, regex, Comment.Special)
+            else:
+                yield ttype, value
+
+
+class SymbolFilter(Filter):
+    """Convert mathematical symbols such as \\<longrightarrow> in Isabelle
+    or \\longrightarrow in LaTeX into Unicode characters.
+
+    This is mostly useful for HTML or console output when you want to
+    approximate the source rendering you'd see in an IDE.
+
+    Options accepted:
+
+    `lang` : string
+       The symbol language. Must be one of ``'isabelle'`` or
+       ``'latex'``.  The default is ``'isabelle'``.
+    """
+
+    latex_symbols = {
+        '\\alpha'                : '\U000003b1',
+        '\\beta'                 : '\U000003b2',
+        '\\gamma'                : '\U000003b3',
+        '\\delta'                : '\U000003b4',
+        '\\varepsilon'           : '\U000003b5',
+        '\\zeta'                 : '\U000003b6',
+        '\\eta'                  : '\U000003b7',
+        '\\vartheta'             : '\U000003b8',
+        '\\iota'                 : '\U000003b9',
+        '\\kappa'                : '\U000003ba',
+        '\\lambda'               : '\U000003bb',
+        '\\mu'                   : '\U000003bc',
+        '\\nu'                   : '\U000003bd',
+        '\\xi'                   : '\U000003be',
+        '\\pi'                   : '\U000003c0',
+        '\\varrho'               : '\U000003c1',
+        '\\sigma'                : '\U000003c3',
+        '\\tau'                  : '\U000003c4',
+        '\\upsilon'              : '\U000003c5',
+        '\\varphi'               : '\U000003c6',
+        '\\chi'                  : '\U000003c7',
+        '\\psi'                  : '\U000003c8',
+        '\\omega'                : '\U000003c9',
+        '\\Gamma'                : '\U00000393',
+        '\\Delta'                : '\U00000394',
+        '\\Theta'                : '\U00000398',
+        '\\Lambda'               : '\U0000039b',
+        '\\Xi'                   : '\U0000039e',
+        '\\Pi'                   : '\U000003a0',
+        '\\Sigma'                : '\U000003a3',
+        '\\Upsilon'              : '\U000003a5',
+        '\\Phi'                  : '\U000003a6',
+        '\\Psi'                  : '\U000003a8',
+        '\\Omega'                : '\U000003a9',
+        '\\leftarrow'            : '\U00002190',
+        '\\longleftarrow'        : '\U000027f5',
+        '\\rightarrow'           : '\U00002192',
+        '\\longrightarrow'       : '\U000027f6',
+        '\\Leftarrow'            : '\U000021d0',
+        '\\Longleftarrow'        : '\U000027f8',
+        '\\Rightarrow'           : '\U000021d2',
+        '\\Longrightarrow'       : '\U000027f9',
+        '\\leftrightarrow'       : '\U00002194',
+        '\\longleftrightarrow'   : '\U000027f7',
+        '\\Leftrightarrow'       : '\U000021d4',
+        '\\Longleftrightarrow'   : '\U000027fa',
+        '\\mapsto'               : '\U000021a6',
+        '\\longmapsto'           : '\U000027fc',
+        '\\relbar'               : '\U00002500',
+        '\\Relbar'               : '\U00002550',
+        '\\hookleftarrow'        : '\U000021a9',
+        '\\hookrightarrow'       : '\U000021aa',
+        '\\leftharpoondown'      : '\U000021bd',
+        '\\rightharpoondown'     : '\U000021c1',
+        '\\leftharpoonup'        : '\U000021bc',
+        '\\rightharpoonup'       : '\U000021c0',
+        '\\rightleftharpoons'    : '\U000021cc',
+        '\\leadsto'              : '\U0000219d',
+        '\\downharpoonleft'      : '\U000021c3',
+        '\\downharpoonright'     : '\U000021c2',
+        '\\upharpoonleft'        : '\U000021bf',
+        '\\upharpoonright'       : '\U000021be',
+        '\\restriction'          : '\U000021be',
+        '\\uparrow'              : '\U00002191',
+        '\\Uparrow'              : '\U000021d1',
+        '\\downarrow'            : '\U00002193',
+        '\\Downarrow'            : '\U000021d3',
+        '\\updownarrow'          : '\U00002195',
+        '\\Updownarrow'          : '\U000021d5',
+        '\\langle'               : '\U000027e8',
+        '\\rangle'               : '\U000027e9',
+        '\\lceil'                : '\U00002308',
+        '\\rceil'                : '\U00002309',
+        '\\lfloor'               : '\U0000230a',
+        '\\rfloor'               : '\U0000230b',
+        '\\flqq'                 : '\U000000ab',
+        '\\frqq'                 : '\U000000bb',
+        '\\bot'                  : '\U000022a5',
+        '\\top'                  : '\U000022a4',
+        '\\wedge'                : '\U00002227',
+        '\\bigwedge'             : '\U000022c0',
+        '\\vee'                  : '\U00002228',
+        '\\bigvee'               : '\U000022c1',
+        '\\forall'               : '\U00002200',
+        '\\exists'               : '\U00002203',
+        '\\nexists'              : '\U00002204',
+        '\\neg'                  : '\U000000ac',
+        '\\Box'                  : '\U000025a1',
+        '\\Diamond'              : '\U000025c7',
+        '\\vdash'                : '\U000022a2',
+        '\\models'               : '\U000022a8',
+        '\\dashv'                : '\U000022a3',
+        '\\surd'                 : '\U0000221a',
+        '\\le'                   : '\U00002264',
+        '\\ge'                   : '\U00002265',
+        '\\ll'                   : '\U0000226a',
+        '\\gg'                   : '\U0000226b',
+        '\\lesssim'              : '\U00002272',
+        '\\gtrsim'               : '\U00002273',
+        '\\lessapprox'           : '\U00002a85',
+        '\\gtrapprox'            : '\U00002a86',
+        '\\in'                   : '\U00002208',
+        '\\notin'                : '\U00002209',
+        '\\subset'               : '\U00002282',
+        '\\supset'               : '\U00002283',
+        '\\subseteq'             : '\U00002286',
+        '\\supseteq'             : '\U00002287',
+        '\\sqsubset'             : '\U0000228f',
+        '\\sqsupset'             : '\U00002290',
+        '\\sqsubseteq'           : '\U00002291',
+        '\\sqsupseteq'           : '\U00002292',
+        '\\cap'                  : '\U00002229',
+        '\\bigcap'               : '\U000022c2',
+        '\\cup'                  : '\U0000222a',
+        '\\bigcup'               : '\U000022c3',
+        '\\sqcup'                : '\U00002294',
+        '\\bigsqcup'             : '\U00002a06',
+        '\\sqcap'                : '\U00002293',
+        '\\Bigsqcap'             : '\U00002a05',
+        '\\setminus'             : '\U00002216',
+        '\\propto'               : '\U0000221d',
+        '\\uplus'                : '\U0000228e',
+        '\\bigplus'              : '\U00002a04',
+        '\\sim'                  : '\U0000223c',
+        '\\doteq'                : '\U00002250',
+        '\\simeq'                : '\U00002243',
+        '\\approx'               : '\U00002248',
+        '\\asymp'                : '\U0000224d',
+        '\\cong'                 : '\U00002245',
+        '\\equiv'                : '\U00002261',
+        '\\Join'                 : '\U000022c8',
+        '\\bowtie'               : '\U00002a1d',
+        '\\prec'                 : '\U0000227a',
+        '\\succ'                 : '\U0000227b',
+        '\\preceq'               : '\U0000227c',
+        '\\succeq'               : '\U0000227d',
+        '\\parallel'             : '\U00002225',
+        '\\mid'                  : '\U000000a6',
+        '\\pm'                   : '\U000000b1',
+        '\\mp'                   : '\U00002213',
+        '\\times'                : '\U000000d7',
+        '\\div'                  : '\U000000f7',
+        '\\cdot'                 : '\U000022c5',
+        '\\star'                 : '\U000022c6',
+        '\\circ'                 : '\U00002218',
+        '\\dagger'               : '\U00002020',
+        '\\ddagger'              : '\U00002021',
+        '\\lhd'                  : '\U000022b2',
+        '\\rhd'                  : '\U000022b3',
+        '\\unlhd'                : '\U000022b4',
+        '\\unrhd'                : '\U000022b5',
+        '\\triangleleft'         : '\U000025c3',
+        '\\triangleright'        : '\U000025b9',
+        '\\triangle'             : '\U000025b3',
+        '\\triangleq'            : '\U0000225c',
+        '\\oplus'                : '\U00002295',
+        '\\bigoplus'             : '\U00002a01',
+        '\\otimes'               : '\U00002297',
+        '\\bigotimes'            : '\U00002a02',
+        '\\odot'                 : '\U00002299',
+        '\\bigodot'              : '\U00002a00',
+        '\\ominus'               : '\U00002296',
+        '\\oslash'               : '\U00002298',
+        '\\dots'                 : '\U00002026',
+        '\\cdots'                : '\U000022ef',
+        '\\sum'                  : '\U00002211',
+        '\\prod'                 : '\U0000220f',
+        '\\coprod'               : '\U00002210',
+        '\\infty'                : '\U0000221e',
+        '\\int'                  : '\U0000222b',
+        '\\oint'                 : '\U0000222e',
+        '\\clubsuit'             : '\U00002663',
+        '\\diamondsuit'          : '\U00002662',
+        '\\heartsuit'            : '\U00002661',
+        '\\spadesuit'            : '\U00002660',
+        '\\aleph'                : '\U00002135',
+        '\\emptyset'             : '\U00002205',
+        '\\nabla'                : '\U00002207',
+        '\\partial'              : '\U00002202',
+        '\\flat'                 : '\U0000266d',
+        '\\natural'              : '\U0000266e',
+        '\\sharp'                : '\U0000266f',
+        '\\angle'                : '\U00002220',
+        '\\copyright'            : '\U000000a9',
+        '\\textregistered'       : '\U000000ae',
+        '\\textonequarter'       : '\U000000bc',
+        '\\textonehalf'          : '\U000000bd',
+        '\\textthreequarters'    : '\U000000be',
+        '\\textordfeminine'      : '\U000000aa',
+        '\\textordmasculine'     : '\U000000ba',
+        '\\euro'                 : '\U000020ac',
+        '\\pounds'               : '\U000000a3',
+        '\\yen'                  : '\U000000a5',
+        '\\textcent'             : '\U000000a2',
+        '\\textcurrency'         : '\U000000a4',
+        '\\textdegree'           : '\U000000b0',
+    }
+
+    isabelle_symbols = {
+        '\\<zero>'                 : '\U0001d7ec',
+        '\\<one>'                  : '\U0001d7ed',
+        '\\<two>'                  : '\U0001d7ee',
+        '\\<three>'                : '\U0001d7ef',
+        '\\<four>'                 : '\U0001d7f0',
+        '\\<five>'                 : '\U0001d7f1',
+        '\\<six>'                  : '\U0001d7f2',
+        '\\<seven>'                : '\U0001d7f3',
+        '\\<eight>'                : '\U0001d7f4',
+        '\\<nine>'                 : '\U0001d7f5',
+        '\\<A>'                    : '\U0001d49c',
+        '\\<B>'                    : '\U0000212c',
+        '\\<C>'                    : '\U0001d49e',
+        '\\<D>'                    : '\U0001d49f',
+        '\\<E>'                    : '\U00002130',
+        '\\<F>'                    : '\U00002131',
+        '\\<G>'                    : '\U0001d4a2',
+        '\\<H>'                    : '\U0000210b',
+        '\\<I>'                    : '\U00002110',
+        '\\<J>'                    : '\U0001d4a5',
+        '\\<K>'                    : '\U0001d4a6',
+        '\\<L>'                    : '\U00002112',
+        '\\<M>'                    : '\U00002133',
+        '\\<N>'                    : '\U0001d4a9',
+        '\\<O>'                    : '\U0001d4aa',
+        '\\<P>'                    : '\U0001d4ab',
+        '\\<Q>'                    : '\U0001d4ac',
+        '\\<R>'                    : '\U0000211b',
+        '\\<S>'                    : '\U0001d4ae',
+        '\\<T>'                    : '\U0001d4af',
+        '\\<U>'                    : '\U0001d4b0',
+        '\\<V>'                    : '\U0001d4b1',
+        '\\<W>'                    : '\U0001d4b2',
+        '\\<X>'                    : '\U0001d4b3',
+        '\\<Y>'                    : '\U0001d4b4',
+        '\\<Z>'                    : '\U0001d4b5',
+        '\\<a>'                    : '\U0001d5ba',
+        '\\<b>'                    : '\U0001d5bb',
+        '\\<c>'                    : '\U0001d5bc',
+        '\\<d>'                    : '\U0001d5bd',
+        '\\<e>'                    : '\U0001d5be',
+        '\\<f>'                    : '\U0001d5bf',
+        '\\<g>'                    : '\U0001d5c0',
+        '\\<h>'                    : '\U0001d5c1',
+        '\\<i>'                    : '\U0001d5c2',
+        '\\<j>'                    : '\U0001d5c3',
+        '\\<k>'                    : '\U0001d5c4',
+        '\\<l>'                    : '\U0001d5c5',
+        '\\<m>'                    : '\U0001d5c6',
+        '\\<n>'                    : '\U0001d5c7',
+        '\\<o>'                    : '\U0001d5c8',
+        '\\<p>'                    : '\U0001d5c9',
+        '\\<q>'                    : '\U0001d5ca',
+        '\\<r>'                    : '\U0001d5cb',
+        '\\<s>'                    : '\U0001d5cc',
+        '\\<t>'                    : '\U0001d5cd',
+        '\\<u>'                    : '\U0001d5ce',
+        '\\<v>'                    : '\U0001d5cf',
+        '\\<w>'                    : '\U0001d5d0',
+        '\\<x>'                    : '\U0001d5d1',
+        '\\<y>'                    : '\U0001d5d2',
+        '\\<z>'                    : '\U0001d5d3',
+        '\\<AA>'                   : '\U0001d504',
+        '\\<BB>'                   : '\U0001d505',
+        '\\<CC>'                   : '\U0000212d',
+        '\\<DD>'                   : '\U0001d507',
+        '\\<EE>'                   : '\U0001d508',
+        '\\<FF>'                   : '\U0001d509',
+        '\\<GG>'                   : '\U0001d50a',
+        '\\<HH>'                   : '\U0000210c',
+        '\\<II>'                   : '\U00002111',
+        '\\<JJ>'                   : '\U0001d50d',
+        '\\<KK>'                   : '\U0001d50e',
+        '\\<LL>'                   : '\U0001d50f',
+        '\\<MM>'                   : '\U0001d510',
+        '\\<NN>'                   : '\U0001d511',
+        '\\<OO>'                   : '\U0001d512',
+        '\\<PP>'                   : '\U0001d513',
+        '\\<QQ>'                   : '\U0001d514',
+        '\\<RR>'                   : '\U0000211c',
+        '\\<SS>'                   : '\U0001d516',
+        '\\<TT>'                   : '\U0001d517',
+        '\\<UU>'                   : '\U0001d518',
+        '\\<VV>'                   : '\U0001d519',
+        '\\<WW>'                   : '\U0001d51a',
+        '\\<XX>'                   : '\U0001d51b',
+        '\\<YY>'                   : '\U0001d51c',
+        '\\<ZZ>'                   : '\U00002128',
+        '\\<aa>'                   : '\U0001d51e',
+        '\\<bb>'                   : '\U0001d51f',
+        '\\<cc>'                   : '\U0001d520',
+        '\\<dd>'                   : '\U0001d521',
+        '\\<ee>'                   : '\U0001d522',
+        '\\<ff>'                   : '\U0001d523',
+        '\\<gg>'                   : '\U0001d524',
+        '\\<hh>'                   : '\U0001d525',
+        '\\<ii>'                   : '\U0001d526',
+        '\\<jj>'                   : '\U0001d527',
+        '\\<kk>'                   : '\U0001d528',
+        '\\<ll>'                   : '\U0001d529',
+        '\\<mm>'                   : '\U0001d52a',
+        '\\<nn>'                   : '\U0001d52b',
+        '\\<oo>'                   : '\U0001d52c',
+        '\\<pp>'                   : '\U0001d52d',
+        '\\<qq>'                   : '\U0001d52e',
+        '\\<rr>'                   : '\U0001d52f',
+        '\\<ss>'                   : '\U0001d530',
+        '\\<tt>'                   : '\U0001d531',
+        '\\<uu>'                   : '\U0001d532',
+        '\\<vv>'                   : '\U0001d533',
+        '\\<ww>'                   : '\U0001d534',
+        '\\<xx>'                   : '\U0001d535',
+        '\\<yy>'                   : '\U0001d536',
+        '\\<zz>'                   : '\U0001d537',
+        '\\<alpha>'                : '\U000003b1',
+        '\\<beta>'                 : '\U000003b2',
+        '\\<gamma>'                : '\U000003b3',
+        '\\<delta>'                : '\U000003b4',
+        '\\<epsilon>'              : '\U000003b5',
+        '\\<zeta>'                 : '\U000003b6',
+        '\\<eta>'                  : '\U000003b7',
+        '\\<theta>'                : '\U000003b8',
+        '\\<iota>'                 : '\U000003b9',
+        '\\<kappa>'                : '\U000003ba',
+        '\\<lambda>'               : '\U000003bb',
+        '\\<mu>'                   : '\U000003bc',
+        '\\<nu>'                   : '\U000003bd',
+        '\\<xi>'                   : '\U000003be',
+        '\\<pi>'                   : '\U000003c0',
+        '\\<rho>'                  : '\U000003c1',
+        '\\<sigma>'                : '\U000003c3',
+        '\\<tau>'                  : '\U000003c4',
+        '\\<upsilon>'              : '\U000003c5',
+        '\\<phi>'                  : '\U000003c6',
+        '\\<chi>'                  : '\U000003c7',
+        '\\<psi>'                  : '\U000003c8',
+        '\\<omega>'                : '\U000003c9',
+        '\\<Gamma>'                : '\U00000393',
+        '\\<Delta>'                : '\U00000394',
+        '\\<Theta>'                : '\U00000398',
+        '\\<Lambda>'               : '\U0000039b',
+        '\\<Xi>'                   : '\U0000039e',
+        '\\<Pi>'                   : '\U000003a0',
+        '\\<Sigma>'                : '\U000003a3',
+        '\\<Upsilon>'              : '\U000003a5',
+        '\\<Phi>'                  : '\U000003a6',
+        '\\<Psi>'                  : '\U000003a8',
+        '\\<Omega>'                : '\U000003a9',
+        '\\<bool>'                 : '\U0001d539',
+        '\\<complex>'              : '\U00002102',
+        '\\<nat>'                  : '\U00002115',
+        '\\<rat>'                  : '\U0000211a',
+        '\\<real>'                 : '\U0000211d',
+        '\\<int>'                  : '\U00002124',
+        '\\<leftarrow>'            : '\U00002190',
+        '\\<longleftarrow>'        : '\U000027f5',
+        '\\<rightarrow>'           : '\U00002192',
+        '\\<longrightarrow>'       : '\U000027f6',
+        '\\<Leftarrow>'            : '\U000021d0',
+        '\\<Longleftarrow>'        : '\U000027f8',
+        '\\<Rightarrow>'           : '\U000021d2',
+        '\\<Longrightarrow>'       : '\U000027f9',
+        '\\<leftrightarrow>'       : '\U00002194',
+        '\\<longleftrightarrow>'   : '\U000027f7',
+        '\\<Leftrightarrow>'       : '\U000021d4',
+        '\\<Longleftrightarrow>'   : '\U000027fa',
+        '\\<mapsto>'               : '\U000021a6',
+        '\\<longmapsto>'           : '\U000027fc',
+        '\\<midarrow>'             : '\U00002500',
+        '\\<Midarrow>'             : '\U00002550',
+        '\\<hookleftarrow>'        : '\U000021a9',
+        '\\<hookrightarrow>'       : '\U000021aa',
+        '\\<leftharpoondown>'      : '\U000021bd',
+        '\\<rightharpoondown>'     : '\U000021c1',
+        '\\<leftharpoonup>'        : '\U000021bc',
+        '\\<rightharpoonup>'       : '\U000021c0',
+        '\\<rightleftharpoons>'    : '\U000021cc',
+        '\\<leadsto>'              : '\U0000219d',
+        '\\<downharpoonleft>'      : '\U000021c3',
+        '\\<downharpoonright>'     : '\U000021c2',
+        '\\<upharpoonleft>'        : '\U000021bf',
+        '\\<upharpoonright>'       : '\U000021be',
+        '\\<restriction>'          : '\U000021be',
+        '\\<Colon>'                : '\U00002237',
+        '\\<up>'                   : '\U00002191',
+        '\\<Up>'                   : '\U000021d1',
+        '\\<down>'                 : '\U00002193',
+        '\\<Down>'                 : '\U000021d3',
+        '\\<updown>'               : '\U00002195',
+        '\\<Updown>'               : '\U000021d5',
+        '\\<langle>'               : '\U000027e8',
+        '\\<rangle>'               : '\U000027e9',
+        '\\<lceil>'                : '\U00002308',
+        '\\<rceil>'                : '\U00002309',
+        '\\<lfloor>'               : '\U0000230a',
+        '\\<rfloor>'               : '\U0000230b',
+        '\\<lparr>'                : '\U00002987',
+        '\\<rparr>'                : '\U00002988',
+        '\\<lbrakk>'               : '\U000027e6',
+        '\\<rbrakk>'               : '\U000027e7',
+        '\\<lbrace>'               : '\U00002983',
+        '\\<rbrace>'               : '\U00002984',
+        '\\<guillemotleft>'        : '\U000000ab',
+        '\\<guillemotright>'       : '\U000000bb',
+        '\\<bottom>'               : '\U000022a5',
+        '\\<top>'                  : '\U000022a4',
+        '\\<and>'                  : '\U00002227',
+        '\\<And>'                  : '\U000022c0',
+        '\\<or>'                   : '\U00002228',
+        '\\<Or>'                   : '\U000022c1',
+        '\\<forall>'               : '\U00002200',
+        '\\<exists>'               : '\U00002203',
+        '\\<nexists>'              : '\U00002204',
+        '\\<not>'                  : '\U000000ac',
+        '\\<box>'                  : '\U000025a1',
+        '\\<diamond>'              : '\U000025c7',
+        '\\<turnstile>'            : '\U000022a2',
+        '\\<Turnstile>'            : '\U000022a8',
+        '\\<tturnstile>'           : '\U000022a9',
+        '\\<TTurnstile>'           : '\U000022ab',
+        '\\<stileturn>'            : '\U000022a3',
+        '\\<surd>'                 : '\U0000221a',
+        '\\<le>'                   : '\U00002264',
+        '\\<ge>'                   : '\U00002265',
+        '\\<lless>'                : '\U0000226a',
+        '\\<ggreater>'             : '\U0000226b',
+        '\\<lesssim>'              : '\U00002272',
+        '\\<greatersim>'           : '\U00002273',
+        '\\<lessapprox>'           : '\U00002a85',
+        '\\<greaterapprox>'        : '\U00002a86',
+        '\\<in>'                   : '\U00002208',
+        '\\<notin>'                : '\U00002209',
+        '\\<subset>'               : '\U00002282',
+        '\\<supset>'               : '\U00002283',
+        '\\<subseteq>'             : '\U00002286',
+        '\\<supseteq>'             : '\U00002287',
+        '\\<sqsubset>'             : '\U0000228f',
+        '\\<sqsupset>'             : '\U00002290',
+        '\\<sqsubseteq>'           : '\U00002291',
+        '\\<sqsupseteq>'           : '\U00002292',
+        '\\<inter>'                : '\U00002229',
+        '\\<Inter>'                : '\U000022c2',
+        '\\<union>'                : '\U0000222a',
+        '\\<Union>'                : '\U000022c3',
+        '\\<squnion>'              : '\U00002294',
+        '\\<Squnion>'              : '\U00002a06',
+        '\\<sqinter>'              : '\U00002293',
+        '\\<Sqinter>'              : '\U00002a05',
+        '\\<setminus>'             : '\U00002216',
+        '\\<propto>'               : '\U0000221d',
+        '\\<uplus>'                : '\U0000228e',
+        '\\<Uplus>'                : '\U00002a04',
+        '\\<noteq>'                : '\U00002260',
+        '\\<sim>'                  : '\U0000223c',
+        '\\<doteq>'                : '\U00002250',
+        '\\<simeq>'                : '\U00002243',
+        '\\<approx>'               : '\U00002248',
+        '\\<asymp>'                : '\U0000224d',
+        '\\<cong>'                 : '\U00002245',
+        '\\<smile>'                : '\U00002323',
+        '\\<equiv>'                : '\U00002261',
+        '\\<frown>'                : '\U00002322',
+        '\\<Join>'                 : '\U000022c8',
+        '\\<bowtie>'               : '\U00002a1d',
+        '\\<prec>'                 : '\U0000227a',
+        '\\<succ>'                 : '\U0000227b',
+        '\\<preceq>'               : '\U0000227c',
+        '\\<succeq>'               : '\U0000227d',
+        '\\<parallel>'             : '\U00002225',
+        '\\<bar>'                  : '\U000000a6',
+        '\\<plusminus>'            : '\U000000b1',
+        '\\<minusplus>'            : '\U00002213',
+        '\\<times>'                : '\U000000d7',
+        '\\<div>'                  : '\U000000f7',
+        '\\<cdot>'                 : '\U000022c5',
+        '\\<star>'                 : '\U000022c6',
+        '\\<bullet>'               : '\U00002219',
+        '\\<circ>'                 : '\U00002218',
+        '\\<dagger>'               : '\U00002020',
+        '\\<ddagger>'              : '\U00002021',
+        '\\<lhd>'                  : '\U000022b2',
+        '\\<rhd>'                  : '\U000022b3',
+        '\\<unlhd>'                : '\U000022b4',
+        '\\<unrhd>'                : '\U000022b5',
+        '\\<triangleleft>'         : '\U000025c3',
+        '\\<triangleright>'        : '\U000025b9',
+        '\\<triangle>'             : '\U000025b3',
+        '\\<triangleq>'            : '\U0000225c',
+        '\\<oplus>'                : '\U00002295',
+        '\\<Oplus>'                : '\U00002a01',
+        '\\<otimes>'               : '\U00002297',
+        '\\<Otimes>'               : '\U00002a02',
+        '\\<odot>'                 : '\U00002299',
+        '\\<Odot>'                 : '\U00002a00',
+        '\\<ominus>'               : '\U00002296',
+        '\\<oslash>'               : '\U00002298',
+        '\\<dots>'                 : '\U00002026',
+        '\\<cdots>'                : '\U000022ef',
+        '\\<Sum>'                  : '\U00002211',
+        '\\<Prod>'                 : '\U0000220f',
+        '\\<Coprod>'               : '\U00002210',
+        '\\<infinity>'             : '\U0000221e',
+        '\\<integral>'             : '\U0000222b',
+        '\\<ointegral>'            : '\U0000222e',
+        '\\<clubsuit>'             : '\U00002663',
+        '\\<diamondsuit>'          : '\U00002662',
+        '\\<heartsuit>'            : '\U00002661',
+        '\\<spadesuit>'            : '\U00002660',
+        '\\<aleph>'                : '\U00002135',
+        '\\<emptyset>'             : '\U00002205',
+        '\\<nabla>'                : '\U00002207',
+        '\\<partial>'              : '\U00002202',
+        '\\<flat>'                 : '\U0000266d',
+        '\\<natural>'              : '\U0000266e',
+        '\\<sharp>'                : '\U0000266f',
+        '\\<angle>'                : '\U00002220',
+        '\\<copyright>'            : '\U000000a9',
+        '\\<registered>'           : '\U000000ae',
+        '\\<hyphen>'               : '\U000000ad',
+        '\\<inverse>'              : '\U000000af',
+        '\\<onequarter>'           : '\U000000bc',
+        '\\<onehalf>'              : '\U000000bd',
+        '\\<threequarters>'        : '\U000000be',
+        '\\<ordfeminine>'          : '\U000000aa',
+        '\\<ordmasculine>'         : '\U000000ba',
+        '\\<section>'              : '\U000000a7',
+        '\\<paragraph>'            : '\U000000b6',
+        '\\<exclamdown>'           : '\U000000a1',
+        '\\<questiondown>'         : '\U000000bf',
+        '\\<euro>'                 : '\U000020ac',
+        '\\<pounds>'               : '\U000000a3',
+        '\\<yen>'                  : '\U000000a5',
+        '\\<cent>'                 : '\U000000a2',
+        '\\<currency>'             : '\U000000a4',
+        '\\<degree>'               : '\U000000b0',
+        '\\<amalg>'                : '\U00002a3f',
+        '\\<mho>'                  : '\U00002127',
+        '\\<lozenge>'              : '\U000025ca',
+        '\\<wp>'                   : '\U00002118',
+        '\\<wrong>'                : '\U00002240',
+        '\\<struct>'               : '\U000022c4',
+        '\\<acute>'                : '\U000000b4',
+        '\\<index>'                : '\U00000131',
+        '\\<dieresis>'             : '\U000000a8',
+        '\\<cedilla>'              : '\U000000b8',
+        '\\<hungarumlaut>'         : '\U000002dd',
+        '\\<some>'                 : '\U000003f5',
+        '\\<newline>'              : '\U000023ce',
+        '\\<open>'                 : '\U00002039',
+        '\\<close>'                : '\U0000203a',
+        '\\<here>'                 : '\U00002302',
+        '\\<^sub>'                 : '\U000021e9',
+        '\\<^sup>'                 : '\U000021e7',
+        '\\<^bold>'                : '\U00002759',
+        '\\<^bsub>'                : '\U000021d8',
+        '\\<^esub>'                : '\U000021d9',
+        '\\<^bsup>'                : '\U000021d7',
+        '\\<^esup>'                : '\U000021d6',
+    }
+
+    lang_map = {'isabelle' : isabelle_symbols, 'latex' : latex_symbols}
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        lang = get_choice_opt(options, 'lang',
+                              ['isabelle', 'latex'], 'isabelle')
+        self.symbols = self.lang_map[lang]
+
+    def filter(self, lexer, stream):
+        for ttype, value in stream:
+            if value in self.symbols:
+                yield ttype, self.symbols[value]
+            else:
+                yield ttype, value
+
+
+class KeywordCaseFilter(Filter):
+    """Convert keywords to lowercase or uppercase or capitalize them, which
+    means first letter uppercase, rest lowercase.
+
+    This can be useful e.g. if you highlight Pascal code and want to adapt the
+    code to your styleguide.
+
+    Options accepted:
+
+    `case` : string
+       The casing to convert keywords to. Must be one of ``'lower'``,
+       ``'upper'`` or ``'capitalize'``.  The default is ``'lower'``.
+    """
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        case = get_choice_opt(options, 'case',
+                              ['lower', 'upper', 'capitalize'], 'lower')
+        self.convert = getattr(str, case)
+
+    def filter(self, lexer, stream):
+        for ttype, value in stream:
+            if ttype in Keyword:
+                yield ttype, self.convert(value)
+            else:
+                yield ttype, value
+
+
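The stream-rewriting pattern used by KeywordCaseFilter can be sketched independently of Pygments. In this toy version, plain strings stand in for token types (an assumption for illustration; real Pygments token types are singleton objects), but the `getattr(str, case)` dispatch is the same trick the filter uses:

```python
def keyword_case(stream, case='lower'):
    """Yield (ttype, value) pairs with keyword values re-cased."""
    convert = getattr(str, case)  # str.lower, str.upper or str.capitalize
    for ttype, value in stream:
        if ttype == 'Keyword':
            yield ttype, convert(value)
        else:
            yield ttype, value

tokens = [('Keyword', 'BEGIN'), ('Name', 'x'), ('Keyword', 'End')]
print(list(keyword_case(tokens)))
# → [('Keyword', 'begin'), ('Name', 'x'), ('Keyword', 'end')]
```

Because the bound `str` method is looked up once, no per-token branching on the case option is needed inside the loop.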
+class NameHighlightFilter(Filter):
+    """Highlight a normal Name (and Name.*) token with a different token type.
+
+    Example::
+
+        filter = NameHighlightFilter(
+            names=['foo', 'bar', 'baz'],
+            tokentype=Name.Function,
+        )
+
+    This would highlight the names "foo", "bar" and "baz"
+    as functions. `Name.Function` is the default token type.
+
+    Options accepted:
+
+    `names` : list of strings
+      A list of names that should be given the specified token type.
+      There is no default.
+    `tokentype` : TokenType or string
+      A token type or a string containing a token type name that is
+      used for highlighting the strings in `names`.  The default is
+      `Name.Function`.
+    """
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        self.names = set(get_list_opt(options, 'names', []))
+        tokentype = options.get('tokentype')
+        if tokentype:
+            self.tokentype = string_to_tokentype(tokentype)
+        else:
+            self.tokentype = Name.Function
+
+    def filter(self, lexer, stream):
+        for ttype, value in stream:
+            if ttype in Name and value in self.names:
+                yield self.tokentype, value
+            else:
+                yield ttype, value
+
+
+class ErrorToken(Exception):
+    pass
+
+
+class RaiseOnErrorTokenFilter(Filter):
+    """Raise an exception when the lexer generates an error token.
+
+    Options accepted:
+
+    `excclass` : Exception class
+      The exception class to raise.
+      The default is `pygments.filters.ErrorToken`.
+
+    .. versionadded:: 0.8
+    """
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        self.exception = options.get('excclass', ErrorToken)
+        try:
+            # issubclass() will raise TypeError if first argument is not a class
+            if not issubclass(self.exception, Exception):
+                raise TypeError
+        except TypeError:
+            raise OptionError('excclass option is not an exception class')
+
+    def filter(self, lexer, stream):
+        for ttype, value in stream:
+            if ttype is Error:
+                raise self.exception(value)
+            yield ttype, value
+
+
+class VisibleWhitespaceFilter(Filter):
+    """Convert tabs, newlines and/or spaces to visible characters.
+
+    Options accepted:
+
+    `spaces` : string or bool
+      If this is a one-character string, spaces will be replaced by this string.
+      If it is another true value, spaces will be replaced by ``·`` (unicode
+      MIDDLE DOT).  If it is a false value, spaces will not be replaced.  The
+      default is ``False``.
+    `tabs` : string or bool
+      The same as for `spaces`, but the default replacement character is ``»``
+      (unicode RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK).  The default value
+      is ``False``.  Note: this will not work if the `tabsize` option for the
+      lexer is nonzero, as tabs will already have been expanded then.
+    `tabsize` : int
+      If tabs are to be replaced by this filter (see the `tabs` option), this
+      is the total number of characters that a tab should be expanded to.
+      The default is ``8``.
+    `newlines` : string or bool
+      The same as for `spaces`, but the default replacement character is ``¶``
+      (unicode PILCROW SIGN).  The default value is ``False``.
+    `wstokentype` : bool
+      If true, give whitespace the special `Whitespace` token type.  This allows
+      styling the visible whitespace differently (e.g. greyed out), but it can
+      disrupt background colors.  The default is ``True``.
+
+    .. versionadded:: 0.8
+    """
+
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        for name, default in [('spaces',   '·'),
+                              ('tabs',     '»'),
+                              ('newlines', '¶')]:
+            opt = options.get(name, False)
+            if isinstance(opt, str) and len(opt) == 1:
+                setattr(self, name, opt)
+            else:
+                setattr(self, name, (opt and default or ''))
+        tabsize = get_int_opt(options, 'tabsize', 8)
+        if self.tabs:
+            self.tabs += ' ' * (tabsize - 1)
+        if self.newlines:
+            self.newlines += '\n'
+        self.wstt = get_bool_opt(options, 'wstokentype', True)
+
+    def filter(self, lexer, stream):
+        if self.wstt:
+            spaces = self.spaces or ' '
+            tabs = self.tabs or '\t'
+            newlines = self.newlines or '\n'
+            regex = re.compile(r'\s')
+
+            def replacefunc(wschar):
+                if wschar == ' ':
+                    return spaces
+                elif wschar == '\t':
+                    return tabs
+                elif wschar == '\n':
+                    return newlines
+                return wschar
+
+            for ttype, value in stream:
+                yield from _replace_special(ttype, value, regex, Whitespace,
+                                            replacefunc)
+        else:
+            spaces, tabs, newlines = self.spaces, self.tabs, self.newlines
+            # simpler processing
+            for ttype, value in stream:
+                if spaces:
+                    value = value.replace(' ', spaces)
+                if tabs:
+                    value = value.replace('\t', tabs)
+                if newlines:
+                    value = value.replace('\n', newlines)
+                yield ttype, value
+
+
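The simpler (non-token-type) path of VisibleWhitespaceFilter boils down to a few `str.replace` calls. A standalone sketch using the filter's default marker characters:

```python
def show_whitespace(text, spaces='·', tabs='»', newlines='¶'):
    """Make whitespace visible (simple path, no token-type handling)."""
    if spaces:
        text = text.replace(' ', spaces)
    if tabs:
        text = text.replace('\t', tabs)
    if newlines:
        # the filter appends a real '\n' to the marker so line breaks survive
        text = text.replace('\n', newlines + '\n')
    return text

print(show_whitespace('def f():\n\treturn 1'))
```

Keeping the literal newline after the pilcrow mirrors what the filter does in `__init__` (`self.newlines += '\n'`), so the output still breaks into lines.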
+class GobbleFilter(Filter):
+    """Gobbles source code lines (eats initial characters).
+
+    This filter drops the first ``n`` characters off every line of code.  This
+    may be useful when the source code fed to the lexer is indented by a fixed
+    amount of space that isn't desired in the output.
+
+    Options accepted:
+
+    `n` : int
+       The number of characters to gobble.
+
+    .. versionadded:: 1.2
+    """
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+        self.n = get_int_opt(options, 'n', 0)
+
+    def gobble(self, value, left):
+        if left < len(value):
+            return value[left:], 0
+        else:
+            return '', left - len(value)
+
+    def filter(self, lexer, stream):
+        n = self.n
+        left = n  # How many characters left to gobble.
+        for ttype, value in stream:
+            # Remove ``left`` characters from first line, ``n`` from all others.
+            parts = value.split('\n')
+            (parts[0], left) = self.gobble(parts[0], left)
+            for i in range(1, len(parts)):
+                (parts[i], left) = self.gobble(parts[i], n)
+            value = '\n'.join(parts)
+
+            if value != '':
+                yield ttype, value
+
+
+class TokenMergeFilter(Filter):
+    """Merges consecutive tokens with the same token type in the output
+    stream of a lexer.
+
+    .. versionadded:: 1.2
+    """
+    def __init__(self, **options):
+        Filter.__init__(self, **options)
+
+    def filter(self, lexer, stream):
+        current_type = None
+        current_value = None
+        for ttype, value in stream:
+            if ttype is current_type:
+                current_value += value
+            else:
+                if current_type is not None:
+                    yield current_type, current_value
+                current_type = ttype
+                current_value = value
+        if current_type is not None:
+            yield current_type, current_value
+
+
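The merging logic above can be exercised on its own. A standalone sketch, again with plain strings as stand-in token types (real Pygments token types are compared with `is`):

```python
def merge_tokens(stream):
    """Coalesce consecutive (ttype, value) pairs that share a token type."""
    cur_type, cur_value = None, ''
    for ttype, value in stream:
        if ttype == cur_type:
            cur_value += value
        else:
            if cur_type is not None:
                yield cur_type, cur_value
            cur_type, cur_value = ttype, value
    if cur_type is not None:
        yield cur_type, cur_value

print(list(merge_tokens([('Text', 'a'), ('Text', 'b'), ('Name', 'x')])))
# → [('Text', 'ab'), ('Name', 'x')]
```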
+FILTERS = {
+    'codetagify':     CodeTagFilter,
+    'keywordcase':    KeywordCaseFilter,
+    'highlight':      NameHighlightFilter,
+    'raiseonerror':   RaiseOnErrorTokenFilter,
+    'whitespace':     VisibleWhitespaceFilter,
+    'gobble':         GobbleFilter,
+    'tokenmerge':     TokenMergeFilter,
+    'symbols':        SymbolFilter,
+}
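Filters registered in this mapping are usually looked up by short name (e.g. via `lexer.add_filter('keywordcase', case='upper')`) rather than instantiated directly. A minimal standalone sketch of that name-to-class dispatch — `UppercaseFilter` and `REGISTRY` are hypothetical stand-ins for a real filter class and the `FILTERS` dict:

```python
class UppercaseFilter:
    """Hypothetical filter class standing in for a real Filter subclass."""
    def __init__(self, **options):
        self.options = options

REGISTRY = {'uppercase': UppercaseFilter}  # plays the role of FILTERS

def get_filter_by_name(name, **options):
    """Look the class up by its short name and instantiate it with options."""
    cls = REGISTRY.get(name)
    if cls is None:
        raise KeyError('filter %r not found' % name)
    return cls(**options)

flt = get_filter_by_name('uppercase', case='upper')
print(type(flt).__name__, flt.options)  # → UppercaseFilter {'case': 'upper'}
```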
--- a/eric6/ThirdParty/Pygments/pygments/formatter.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/formatter.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,95 +1,95 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatter
-    ~~~~~~~~~~~~~~~~~~
-
-    Base formatter class.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import codecs
-
-from pygments.util import get_bool_opt
-from pygments.styles import get_style_by_name
-
-__all__ = ['Formatter']
-
-
-def _lookup_style(style):
-    if isinstance(style, str):
-        return get_style_by_name(style)
-    return style
-
-
-class Formatter:
-    """
-    Converts a token stream to text.
-
-    Options accepted:
-
-    ``style``
-        The style to use, can be a string or a Style subclass
-        (default: "default"). Not used by e.g. the
-        TerminalFormatter.
-    ``full``
-        Tells the formatter to output a "full" document, i.e.
-        a complete self-contained document. This doesn't have
-        any effect for some formatters (default: false).
-    ``title``
-        If ``full`` is true, the title that should be used to
-        caption the document (default: '').
-    ``encoding``
-        If given, must be an encoding name. This will be used to
-        convert the Unicode token strings to byte strings in the
-        output. If it is "" or None, Unicode strings will be written
-        to the output file, which most file-like objects do not
-        support (default: None).
-    ``outencoding``
-        Overrides ``encoding`` if given.
-    """
-
-    #: Name of the formatter
-    name = None
-
-    #: Shortcuts for the formatter
-    aliases = []
-
-    #: fn match rules
-    filenames = []
-
-    #: If True, this formatter outputs Unicode strings when no encoding
-    #: option is given.
-    unicodeoutput = True
-
-    def __init__(self, **options):
-        self.style = _lookup_style(options.get('style', 'default'))
-        self.full = get_bool_opt(options, 'full', False)
-        self.title = options.get('title', '')
-        self.encoding = options.get('encoding', None) or None
-        if self.encoding in ('guess', 'chardet'):
-            # can happen for e.g. pygmentize -O encoding=guess
-            self.encoding = 'utf-8'
-        self.encoding = options.get('outencoding') or self.encoding
-        self.options = options
-
-    def get_style_defs(self, arg=''):
-        """
-        Return the style definitions for the current style as a string.
-
-        ``arg`` is an additional argument whose meaning depends on the
-        formatter used. Note that ``arg`` can also be a list or tuple
-        for some formatters like the html formatter.
-        """
-        return ''
-
-    def format(self, tokensource, outfile):
-        """
-        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
-        tuples and write it into ``outfile``.
-        """
-        if self.encoding:
-            # wrap the outfile in a StreamWriter
-            outfile = codecs.lookup(self.encoding)[3](outfile)
-        return self.format_unencoded(tokensource, outfile)
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatter
+    ~~~~~~~~~~~~~~~~~~
+
+    Base formatter class.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import codecs
+
+from pygments.util import get_bool_opt
+from pygments.styles import get_style_by_name
+
+__all__ = ['Formatter']
+
+
+def _lookup_style(style):
+    if isinstance(style, str):
+        return get_style_by_name(style)
+    return style
+
+
+class Formatter:
+    """
+    Converts a token stream to text.
+
+    Options accepted:
+
+    ``style``
+        The style to use, can be a string or a Style subclass
+        (default: "default"). Not used by e.g. the
+        TerminalFormatter.
+    ``full``
+        Tells the formatter to output a "full" document, i.e.
+        a complete self-contained document. This doesn't have
+        any effect for some formatters (default: false).
+    ``title``
+        If ``full`` is true, the title that should be used to
+        caption the document (default: '').
+    ``encoding``
+        If given, must be an encoding name. This will be used to
+        convert the Unicode token strings to byte strings in the
+        output. If it is "" or None, Unicode strings will be written
+        to the output file, which most file-like objects do not
+        support (default: None).
+    ``outencoding``
+        Overrides ``encoding`` if given.
+    """
+
+    #: Name of the formatter
+    name = None
+
+    #: Shortcuts for the formatter
+    aliases = []
+
+    #: fn match rules
+    filenames = []
+
+    #: If True, this formatter outputs Unicode strings when no encoding
+    #: option is given.
+    unicodeoutput = True
+
+    def __init__(self, **options):
+        self.style = _lookup_style(options.get('style', 'default'))
+        self.full = get_bool_opt(options, 'full', False)
+        self.title = options.get('title', '')
+        self.encoding = options.get('encoding', None) or None
+        if self.encoding in ('guess', 'chardet'):
+            # can happen for e.g. pygmentize -O encoding=guess
+            self.encoding = 'utf-8'
+        self.encoding = options.get('outencoding') or self.encoding
+        self.options = options
+
+    def get_style_defs(self, arg=''):
+        """
+        Return the style definitions for the current style as a string.
+
+        ``arg`` is an additional argument whose meaning depends on the
+        formatter used. Note that ``arg`` can also be a list or tuple
+        for some formatters like the html formatter.
+        """
+        return ''
+
+    def format(self, tokensource, outfile):
+        """
+        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
+        tuples and write it into ``outfile``.
+        """
+        if self.encoding:
+            # wrap the outfile in a StreamWriter
+            outfile = codecs.lookup(self.encoding)[3](outfile)
+        return self.format_unencoded(tokensource, outfile)
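The Formatter contract shown above — subclasses implement `format_unencoded()`, while `format()` optionally wraps the output file in a codecs StreamWriter — can be illustrated with a toy subclass. `NullFormatter` here is a hypothetical example written without importing Pygments, not part of the library:

```python
import codecs
import io

class NullFormatter:
    """Toy class obeying the same contract as pygments' Formatter:
    format() handles the encoding wrapper, subclasses only need to
    implement format_unencoded()."""

    def __init__(self, **options):
        self.encoding = options.get('encoding') or None

    def format(self, tokensource, outfile):
        if self.encoding:
            # codecs.lookup(...)[3] is the StreamWriter class
            outfile = codecs.lookup(self.encoding)[3](outfile)
        return self.format_unencoded(tokensource, outfile)

    def format_unencoded(self, tokensource, outfile):
        for _ttype, value in tokensource:  # drop token types, keep the text
            outfile.write(value)

buf = io.StringIO()
NullFormatter().format([('Token.Text', 'hello '), ('Token.Name', 'world')], buf)
print(buf.getvalue())  # → hello world
```

With an `encoding` option set, the same formatter writes bytes to a binary stream, since the StreamWriter encodes each `write()` call.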
--- a/eric6/ThirdParty/Pygments/pygments/formatters/__init__.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/formatters/__init__.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,154 +1,154 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters
-    ~~~~~~~~~~~~~~~~~~~
-
-    Pygments formatters.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import re
-import sys
-import types
-import fnmatch
-from os.path import basename
-
-from pygments.formatters._mapping import FORMATTERS
-from pygments.plugin import find_plugin_formatters
-from pygments.util import ClassNotFound
-
-__all__ = ['get_formatter_by_name', 'get_formatter_for_filename',
-           'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS)
-
-_formatter_cache = {}  # classes by name
-_pattern_cache = {}
-
-
-def _fn_matches(fn, glob):
-    """Return whether the supplied file name fn matches pattern filename."""
-    if glob not in _pattern_cache:
-        pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob))
-        return pattern.match(fn)
-    return _pattern_cache[glob].match(fn)
-
-
-def _load_formatters(module_name):
-    """Load a formatter (and all others in the module too)."""
-    mod = __import__(module_name, None, None, ['__all__'])
-    for formatter_name in mod.__all__:
-        cls = getattr(mod, formatter_name)
-        _formatter_cache[cls.name] = cls
-
-
-def get_all_formatters():
-    """Return a generator for all formatter classes."""
-    # NB: this returns formatter classes, not info like get_all_lexers().
-    for info in FORMATTERS.values():
-        if info[1] not in _formatter_cache:
-            _load_formatters(info[0])
-        yield _formatter_cache[info[1]]
-    for _, formatter in find_plugin_formatters():
-        yield formatter
-
-
-def find_formatter_class(alias):
-    """Lookup a formatter by alias.
-
-    Returns None if not found.
-    """
-    for module_name, name, aliases, _, _ in FORMATTERS.values():
-        if alias in aliases:
-            if name not in _formatter_cache:
-                _load_formatters(module_name)
-            return _formatter_cache[name]
-    for _, cls in find_plugin_formatters():
-        if alias in cls.aliases:
-            return cls
-
-
-def get_formatter_by_name(_alias, **options):
-    """Lookup and instantiate a formatter by alias.
-
-    Raises ClassNotFound if not found.
-    """
-    cls = find_formatter_class(_alias)
-    if cls is None:
-        raise ClassNotFound("no formatter found for name %r" % _alias)
-    return cls(**options)
-
-
-def load_formatter_from_file(filename, formattername="CustomFormatter",
-                             **options):
-    """Load a formatter from a file.
-
-    This method expects a file located relative to the current working
-    directory, which contains a class named CustomFormatter. By default,
-    it expects the Formatter to be named CustomFormatter; you can specify
-    your own class name as the second argument to this function.
-
-    Users should be very careful with the input, because this method
-    is equivalent to running eval on the input file.
-
-    Raises ClassNotFound if there are any problems importing the Formatter.
-
-    .. versionadded:: 2.2
-    """
-    try:
-        # This empty dict will contain the namespace for the exec'd file
-        custom_namespace = {}
-        with open(filename, 'rb') as f:
-            exec(f.read(), custom_namespace)
-        # Retrieve the class `formattername` from that namespace
-        if formattername not in custom_namespace:
-            raise ClassNotFound('no valid %s class found in %s' %
-                                (formattername, filename))
-        formatter_class = custom_namespace[formattername]
-        # And finally instantiate it with the options
-        return formatter_class(**options)
-    except IOError as err:
-        raise ClassNotFound('cannot read %s: %s' % (filename, err))
-    except ClassNotFound:
-        raise
-    except Exception as err:
-        raise ClassNotFound('error when loading custom formatter: %s' % err)
-
-
-def get_formatter_for_filename(fn, **options):
-    """Lookup and instantiate a formatter by filename pattern.
-
-    Raises ClassNotFound if not found.
-    """
-    fn = basename(fn)
-    for modname, name, _, filenames, _ in FORMATTERS.values():
-        for filename in filenames:
-            if _fn_matches(fn, filename):
-                if name not in _formatter_cache:
-                    _load_formatters(modname)
-                return _formatter_cache[name](**options)
-    for cls in find_plugin_formatters():
-        for filename in cls.filenames:
-            if _fn_matches(fn, filename):
-                return cls(**options)
-    raise ClassNotFound("no formatter found for file name %r" % fn)
-
-
-class _automodule(types.ModuleType):
-    """Automatically import formatters."""
-
-    def __getattr__(self, name):
-        info = FORMATTERS.get(name)
-        if info:
-            _load_formatters(info[0])
-            cls = _formatter_cache[info[1]]
-            setattr(self, name, cls)
-            return cls
-        raise AttributeError(name)
-
-
-oldmod = sys.modules[__name__]
-newmod = _automodule(__name__)
-newmod.__dict__.update(oldmod.__dict__)
-sys.modules[__name__] = newmod
-del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters
+    ~~~~~~~~~~~~~~~~~~~
+
+    Pygments formatters.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import re
+import sys
+import types
+import fnmatch
+from os.path import basename
+
+from pygments.formatters._mapping import FORMATTERS
+from pygments.plugin import find_plugin_formatters
+from pygments.util import ClassNotFound
+
+__all__ = ['get_formatter_by_name', 'get_formatter_for_filename',
+           'get_all_formatters', 'load_formatter_from_file'] + list(FORMATTERS)
+
+_formatter_cache = {}  # classes by name
+_pattern_cache = {}
+
+
+def _fn_matches(fn, glob):
+    """Return whether the supplied file name fn matches pattern filename."""
+    if glob not in _pattern_cache:
+        pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob))
+        return pattern.match(fn)
+    return _pattern_cache[glob].match(fn)
+
+
+def _load_formatters(module_name):
+    """Load a formatter (and all others in the module too)."""
+    mod = __import__(module_name, None, None, ['__all__'])
+    for formatter_name in mod.__all__:
+        cls = getattr(mod, formatter_name)
+        _formatter_cache[cls.name] = cls
+
+
+def get_all_formatters():
+    """Return a generator for all formatter classes."""
+    # NB: this returns formatter classes, not info like get_all_lexers().
+    for info in FORMATTERS.values():
+        if info[1] not in _formatter_cache:
+            _load_formatters(info[0])
+        yield _formatter_cache[info[1]]
+    for _, formatter in find_plugin_formatters():
+        yield formatter
+
+
+def find_formatter_class(alias):
+    """Lookup a formatter by alias.
+
+    Returns None if not found.
+    """
+    for module_name, name, aliases, _, _ in FORMATTERS.values():
+        if alias in aliases:
+            if name not in _formatter_cache:
+                _load_formatters(module_name)
+            return _formatter_cache[name]
+    for _, cls in find_plugin_formatters():
+        if alias in cls.aliases:
+            return cls
+
+
+def get_formatter_by_name(_alias, **options):
+    """Lookup and instantiate a formatter by alias.
+
+    Raises ClassNotFound if not found.
+    """
+    cls = find_formatter_class(_alias)
+    if cls is None:
+        raise ClassNotFound("no formatter found for name %r" % _alias)
+    return cls(**options)
+
+
+def load_formatter_from_file(filename, formattername="CustomFormatter",
+                             **options):
+    """Load a formatter from a file.
+
+    This method expects a file located relative to the current working
+    directory, which contains a class named CustomFormatter. By default,
+    it expects the Formatter to be named CustomFormatter; you can specify
+    your own class name as the second argument to this function.
+
+    Users should be very careful with the input, because this method
+    is equivalent to running eval on the input file.
+
+    Raises ClassNotFound if there are any problems importing the Formatter.
+
+    .. versionadded:: 2.2
+    """
+    try:
+        # This empty dict will contain the namespace for the exec'd file
+        custom_namespace = {}
+        with open(filename, 'rb') as f:
+            exec(f.read(), custom_namespace)
+        # Retrieve the class `formattername` from that namespace
+        if formattername not in custom_namespace:
+            raise ClassNotFound('no valid %s class found in %s' %
+                                (formattername, filename))
+        formatter_class = custom_namespace[formattername]
+        # And finally instantiate it with the options
+        return formatter_class(**options)
+    except IOError as err:
+        raise ClassNotFound('cannot read %s: %s' % (filename, err))
+    except ClassNotFound:
+        raise
+    except Exception as err:
+        raise ClassNotFound('error when loading custom formatter: %s' % err)
+
+
+def get_formatter_for_filename(fn, **options):
+    """Lookup and instantiate a formatter by filename pattern.
+
+    Raises ClassNotFound if not found.
+    """
+    fn = basename(fn)
+    for modname, name, _, filenames, _ in FORMATTERS.values():
+        for filename in filenames:
+            if _fn_matches(fn, filename):
+                if name not in _formatter_cache:
+                    _load_formatters(modname)
+                return _formatter_cache[name](**options)
+    for cls in find_plugin_formatters():
+        for filename in cls.filenames:
+            if _fn_matches(fn, filename):
+                return cls(**options)
+    raise ClassNotFound("no formatter found for file name %r" % fn)
+
+
+class _automodule(types.ModuleType):
+    """Automatically import formatters."""
+
+    def __getattr__(self, name):
+        info = FORMATTERS.get(name)
+        if info:
+            _load_formatters(info[0])
+            cls = _formatter_cache[info[1]]
+            setattr(self, name, cls)
+            return cls
+        raise AttributeError(name)
+
+
+oldmod = sys.modules[__name__]
+newmod = _automodule(__name__)
+newmod.__dict__.update(oldmod.__dict__)
+sys.modules[__name__] = newmod
+del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
--- a/eric6/ThirdParty/Pygments/pygments/formatters/_mapping.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/formatters/_mapping.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,83 +1,83 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters._mapping
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter mapping definitions. This file is generated by itself. Everytime
-    you change something on a builtin formatter definition, run this script from
-    the formatters folder to update it.
-
-    Do not alter the FORMATTERS dictionary by hand.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-FORMATTERS = {
-    'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
-    'BmpImageFormatter': ('pygments.formatters.img', 'img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    'GifImageFormatter': ('pygments.formatters.img', 'img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass` option."),
-    'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'),
-    'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
-    'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
-    'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
-    'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'),
-    'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file.  This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
-    'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console.  Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
-    'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'),
-    'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console.  Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
-    'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.')
-}
-
-if __name__ == '__main__':  # pragma: no cover
-    import sys
-    import os
-
-    # lookup formatters
-    found_formatters = []
-    imports = []
-    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
-    from pygments.util import docstring_headline
-
-    for root, dirs, files in os.walk('.'):
-        for filename in files:
-            if filename.endswith('.py') and not filename.startswith('_'):
-                module_name = 'pygments.formatters%s.%s' % (
-                    root[1:].replace('/', '.'), filename[:-3])
-                print(module_name)
-                module = __import__(module_name, None, None, [''])
-                for formatter_name in module.__all__:
-                    formatter = getattr(module, formatter_name)
-                    found_formatters.append(
-                        '%r: %r' % (formatter_name,
-                                    (module_name,
-                                     formatter.name,
-                                     tuple(formatter.aliases),
-                                     tuple(formatter.filenames),
-                                     docstring_headline(formatter))))
-    # sort them to make the diff minimal
-    found_formatters.sort()
-
-    # extract useful sourcecode from this file
-    with open(__file__) as fp:
-        content = fp.read()
-        # replace crnl to nl for Windows.
-        #
-        # Note that, originally, contributers should keep nl of master
-        # repository, for example by using some kind of automatic
-        # management EOL, like `EolExtension
-        #  <https://www.mercurial-scm.org/wiki/EolExtension>`.
-        content = content.replace("\r\n", "\n")
-    header = content[:content.find('FORMATTERS = {')]
-    footer = content[content.find("if __name__ == '__main__':"):]
-
-    # write new file
-    with open(__file__, 'w') as fp:
-        fp.write(header)
-        fp.write('FORMATTERS = {\n    %s\n}\n\n' % ',\n    '.join(found_formatters))
-        fp.write(footer)
-
-    print ('=== %d formatters processed.' % len(found_formatters))
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters._mapping
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Formatter mapping definitions. This file is generated by itself. Every time
+    you change something in a builtin formatter definition, run this script from
+    the formatters folder to update it.
+
+    Do not alter the FORMATTERS dictionary by hand.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+FORMATTERS = {
+    'BBCodeFormatter': ('pygments.formatters.bbcode', 'BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
+    'BmpImageFormatter': ('pygments.formatters.img', 'img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    'GifImageFormatter': ('pygments.formatters.img', 'img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass` option."),
+    'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'),
+    'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
+    'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
+    'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
+    'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'),
+    'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file.  This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
+    'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console.  Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
+    'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'),
+    'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console.  Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
+    'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.')
+}
+
+if __name__ == '__main__':  # pragma: no cover
+    import sys
+    import os
+
+    # lookup formatters
+    found_formatters = []
+    imports = []
+    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
+    from pygments.util import docstring_headline
+
+    for root, dirs, files in os.walk('.'):
+        for filename in files:
+            if filename.endswith('.py') and not filename.startswith('_'):
+                module_name = 'pygments.formatters%s.%s' % (
+                    root[1:].replace('/', '.'), filename[:-3])
+                print(module_name)
+                module = __import__(module_name, None, None, [''])
+                for formatter_name in module.__all__:
+                    formatter = getattr(module, formatter_name)
+                    found_formatters.append(
+                        '%r: %r' % (formatter_name,
+                                    (module_name,
+                                     formatter.name,
+                                     tuple(formatter.aliases),
+                                     tuple(formatter.filenames),
+                                     docstring_headline(formatter))))
+    # sort them to make the diff minimal
+    found_formatters.sort()
+
+    # extract useful sourcecode from this file
+    with open(__file__) as fp:
+        content = fp.read()
+        # Replace CRLF with LF for Windows.
+        #
+        # Note that contributors should normally keep the newlines of the
+        # master repository, for example by using some kind of automatic
+        # EOL management, like the `EolExtension
+        #  <https://www.mercurial-scm.org/wiki/EolExtension>`.
+        content = content.replace("\r\n", "\n")
+    header = content[:content.find('FORMATTERS = {')]
+    footer = content[content.find("if __name__ == '__main__':"):]
+
+    # write new file
+    with open(__file__, 'w') as fp:
+        fp.write(header)
+        fp.write('FORMATTERS = {\n    %s\n}\n\n' % ',\n    '.join(found_formatters))
+        fp.write(footer)
+
+    print('=== %d formatters processed.' % len(found_formatters))
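Each `FORMATTERS` entry above is a 5-tuple: module name, formatter name, alias tuple, filename-glob tuple, and a one-line description. An alias lookup over that shape can be sketched with a toy mapping (the entries and function name here are illustrative):

```python
# Toy mapping with the same 5-tuple shape as pygments.formatters._mapping.FORMATTERS:
# (module name, formatter name, aliases, filename globs, description).
FORMATTERS = {
    'HtmlFormatter': ('pygments.formatters.html', 'HTML', ('html',),
                      ('*.html', '*.htm'), 'Format tokens as HTML 4 tags.'),
    'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'),
                      ('*.txt',), 'Output the text unchanged.'),
}


def find_by_alias(alias):
    """Return the class name whose alias tuple contains `alias`, or None."""
    for cls_name, (_mod, _name, aliases, _globs, _doc) in FORMATTERS.items():
        if alias in aliases:
            return cls_name
    return None


print(find_by_alias('html'))  # HtmlFormatter
print(find_by_alias('null'))  # NullFormatter
```

This is the same scan `find_formatter_class` performs over the real mapping before falling back to plugin formatters.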
--- a/eric6/ThirdParty/Pygments/pygments/formatters/bbcode.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/formatters/bbcode.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,109 +1,109 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.bbcode
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    BBcode formatter.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-
-from pygments.formatter import Formatter
-from pygments.util import get_bool_opt
-
-__all__ = ['BBCodeFormatter']
-
-
-class BBCodeFormatter(Formatter):
-    """
-    Format tokens with BBcodes. These formatting codes are used by many
-    bulletin boards, so you can highlight your sourcecode with pygments before
-    posting it there.
-
-    This formatter has no support for background colors and borders, as there
-    are no common BBcode tags for that.
-
-    Some board systems (e.g. phpBB) don't support colors in their [code] tag,
-    so you can't use the highlighting together with that tag.
-    Text in a [code] tag usually is shown with a monospace font (which this
-    formatter can do with the ``monofont`` option) and no spaces (which you
-    need for indentation) are removed.
-
-    Additional options accepted:
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``).
-
-    `codetag`
-        If set to true, put the output into ``[code]`` tags (default:
-        ``false``)
-
-    `monofont`
-        If set to true, add a tag to show the code with a monospace font
-        (default: ``false``).
-    """
-    name = 'BBCode'
-    aliases = ['bbcode', 'bb']
-    filenames = []
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-        self._code = get_bool_opt(options, 'codetag', False)
-        self._mono = get_bool_opt(options, 'monofont', False)
-
-        self.styles = {}
-        self._make_styles()
-
-    def _make_styles(self):
-        for ttype, ndef in self.style:
-            start = end = ''
-            if ndef['color']:
-                start += '[color=#%s]' % ndef['color']
-                end = '[/color]' + end
-            if ndef['bold']:
-                start += '[b]'
-                end = '[/b]' + end
-            if ndef['italic']:
-                start += '[i]'
-                end = '[/i]' + end
-            if ndef['underline']:
-                start += '[u]'
-                end = '[/u]' + end
-            # there are no common BBcodes for background-color and border
-
-            self.styles[ttype] = start, end
-
-    def format_unencoded(self, tokensource, outfile):
-        if self._code:
-            outfile.write('[code]')
-        if self._mono:
-            outfile.write('[font=monospace]')
-
-        lastval = ''
-        lasttype = None
-
-        for ttype, value in tokensource:
-            while ttype not in self.styles:
-                ttype = ttype.parent
-            if ttype == lasttype:
-                lastval += value
-            else:
-                if lastval:
-                    start, end = self.styles[lasttype]
-                    outfile.write(''.join((start, lastval, end)))
-                lastval = value
-                lasttype = ttype
-
-        if lastval:
-            start, end = self.styles[lasttype]
-            outfile.write(''.join((start, lastval, end)))
-
-        if self._mono:
-            outfile.write('[/font]')
-        if self._code:
-            outfile.write('[/code]')
-        if self._code or self._mono:
-            outfile.write('\n')
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters.bbcode
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    BBcode formatter.
+
+    :copyright: Copyright 2006-2020 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+
+from pygments.formatter import Formatter
+from pygments.util import get_bool_opt
+
+__all__ = ['BBCodeFormatter']
+
+
+class BBCodeFormatter(Formatter):
+    """
+    Format tokens with BBcodes. These formatting codes are used by many
+    bulletin boards, so you can highlight your sourcecode with pygments before
+    posting it there.
+
+    This formatter has no support for background colors and borders, as there
+    are no common BBcode tags for that.
+
+    Some board systems (e.g. phpBB) don't support colors in their [code] tag,
+    so you can't use the highlighting together with that tag.
+    Text in a [code] tag is usually shown with a monospace font (which this
+    formatter can do with the ``monofont`` option), and spaces (which you
+    need for indentation) are not removed.
+
+    Additional options accepted:
+
+    `style`
+        The style to use, can be a string or a Style subclass (default:
+        ``'default'``).
+
+    `codetag`
+        If set to true, put the output into ``[code]`` tags (default:
+        ``false``)
+
+    `monofont`
+        If set to true, add a tag to show the code with a monospace font
+        (default: ``false``).
+    """
+    name = 'BBCode'
+    aliases = ['bbcode', 'bb']
+    filenames = []
+
+    def __init__(self, **options):
+        Formatter.__init__(self, **options)
+        self._code = get_bool_opt(options, 'codetag', False)
+        self._mono = get_bool_opt(options, 'monofont', False)
+
+        self.styles = {}
+        self._make_styles()
+
+    def _make_styles(self):
+        for ttype, ndef in self.style:
+            start = end = ''
+            if ndef['color']:
+                start += '[color=#%s]' % ndef['color']
+                end = '[/color]' + end
+            if ndef['bold']:
+                start += '[b]'
+                end = '[/b]' + end
+            if ndef['italic']:
+                start += '[i]'
+                end = '[/i]' + end
+            if ndef['underline']:
+                start += '[u]'
+                end = '[/u]' + end
+            # there are no common BBcodes for background-color and border
+
+            self.styles[ttype] = start, end
+
+    def format_unencoded(self, tokensource, outfile):
+        if self._code:
+            outfile.write('[code]')
+        if self._mono:
+            outfile.write('[font=monospace]')
+
+        lastval = ''
+        lasttype = None
+
+        for ttype, value in tokensource:
+            while ttype not in self.styles:
+                ttype = ttype.parent
+            if ttype == lasttype:
+                lastval += value
+            else:
+                if lastval:
+                    start, end = self.styles[lasttype]
+                    outfile.write(''.join((start, lastval, end)))
+                lastval = value
+                lasttype = ttype
+
+        if lastval:
+            start, end = self.styles[lasttype]
+            outfile.write(''.join((start, lastval, end)))
+
+        if self._mono:
+            outfile.write('[/font]')
+        if self._code:
+            outfile.write('[/code]')
+        if self._code or self._mono:
+            outfile.write('\n')
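The `_make_styles` method above builds a `(start, end)` BBCode tag pair per token type, appending open tags to `start` and prepending the matching close tags to `end` so they nest correctly. A stdlib-only sketch of that assembly (the function name and style dict are illustrative, not the Pygments API):

```python
def bbcode_tags(ndef):
    """Build (start, end) BBCode tags from a style entry, as _make_styles does."""
    start = end = ''
    if ndef.get('color'):
        start += '[color=#%s]' % ndef['color']
        end = '[/color]' + end  # prepend so close tags nest inside-out
    if ndef.get('bold'):
        start += '[b]'
        end = '[/b]' + end
    if ndef.get('italic'):
        start += '[i]'
        end = '[/i]' + end
    return start, end


start, end = bbcode_tags({'color': '00FF00', 'bold': True})
print(start + 'def' + end)  # [color=#00FF00][b]def[/b][/color]
```

Prepending each close tag keeps the innermost open tag closed first, which is why `[/b]` lands before `[/color]` in the output.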
--- a/eric6/ThirdParty/Pygments/pygments/formatters/html.py	Tue Sep 15 18:46:58 2020 +0200
+++ b/eric6/ThirdParty/Pygments/pygments/formatters/html.py	Tue Sep 15 19:09:05 2020 +0200
@@ -1,880 +1,931 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.html
-    ~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter for HTML output.
-
-    :copyright: Copyright 2006-2019 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-import os.path
-from io import StringIO
-
-from pygments.formatter import Formatter
-from pygments.token import Token, Text, STANDARD_TYPES
-from pygments.util import get_bool_opt, get_int_opt, get_list_opt
-
-try:
-    import ctags
-except ImportError:
-    ctags = None
-
-__all__ = ['HtmlFormatter']
-
-
-_escape_html_table = {
-    ord('&'): u'&amp;',
-    ord('<'): u'&lt;',
-    ord('>'): u'&gt;',
-    ord('"'): u'&quot;',
-    ord("'"): u'&#39;',
-}
-
-
-def escape_html(text, table=_escape_html_table):
-    """Escape &, <, > as well as single and double quotes for HTML."""
-    return text.translate(table)
-
-
-def webify(color):
-    if color.startswith('calc') or color.startswith('var'):
-        return color
-    else:
-        return '#' + color
-
-
-def _get_ttype_class(ttype):
-    fname = STANDARD_TYPES.get(ttype)
-    if fname:
-        return fname
-    aname = ''
-    while fname is None:
-        aname = '-' + ttype[-1] + aname
-        ttype = ttype.parent
-        fname = STANDARD_TYPES.get(ttype)
-    return fname + aname
-
-
-CSSFILE_TEMPLATE = '''\
-/*
-generated by Pygments <https://pygments.org/>
-Copyright 2006-2019 by the Pygments team.
-Licensed under the BSD license, see LICENSE for details.
-*/
-td.linenos { background-color: #f0f0f0; padding-right: 10px; }
-span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
-pre { line-height: 125%%; }
-%(styledefs)s
-'''
-
-DOC_HEADER = '''\
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
-   "http://www.w3.org/TR/html4/strict.dtd">
-<!--
-generated by Pygments <https://pygments.org/>
-Copyright 2006-2019 by the Pygments team.
-Licensed under the BSD license, see LICENSE for details.
--->
-<html>
-<head>
-  <title>%(title)s</title>
-  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
-  <style type="text/css">
-''' + CSSFILE_TEMPLATE + '''
-  </style>
-</head>
-<body>
-<h2>%(title)s</h2>
-
-'''
-
-DOC_HEADER_EXTERNALCSS = '''\
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
-   "http://www.w3.org/TR/html4/strict.dtd">
-
-<html>
-<head>
-  <title>%(title)s</title>
-  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
-  <link rel="stylesheet" href="%(cssfile)s" type="text/css">
-</head>
-<body>
-<h2>%(title)s</h2>
-
-'''
-
-DOC_FOOTER = '''\
-</body>
-</html>
-'''
-
-
-class HtmlFormatter(Formatter):
-    r"""
-    Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped
-    in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass`
-    option.
-
-    If the `linenos` option is set to ``"table"``, the ``<pre>`` is
-    additionally wrapped inside a ``<table>`` which has one row and two
-    cells: one containing the line numbers and one containing the code.
-    Example:
-
-    .. sourcecode:: html
-
-        <div class="highlight" >
-        <table><tr>
-          <td class="linenos" title="click to toggle"
-            onclick="with (this.firstChild.style)
-                     { display = (display == '') ? 'none' : '' }">
-            <pre>1
-            2</pre>
-          </td>
-          <td class="code">
-            <pre><span class="Ke">def </span><span class="NaFu">foo</span>(bar):
-              <span class="Ke">pass</span>
-            </pre>
-          </td>
-        </tr></table></div>
-
-    (whitespace added to improve clarity).
-
-    Wrapping can be disabled using the `nowrap` option.
-
-    A list of lines can be specified using the `hl_lines` option to make these
-    lines highlighted (as of Pygments 0.11).
-
-    With the `full` option, a complete HTML 4 document is output, including
-    the style definitions inside a ``<style>`` tag, or in a separate file if
-    the `cssfile` option is given.
-
-    When `tagsfile` is set to the path of a ctags index file, it is used to
-    generate hyperlinks from names to their definition.  You must enable
-    `lineanchors` and run ctags with the `-n` option for this to work.  The
-    `python-ctags` module from PyPI must be installed to use this feature;
-    otherwise a `RuntimeError` will be raised.
-
-    The `get_style_defs(arg='')` method of a `HtmlFormatter` returns a string
-    containing CSS rules for the CSS classes used by the formatter. The
-    argument `arg` can be used to specify additional CSS selectors that
-    are prepended to the classes. A call `fmter.get_style_defs('td .code')`
-    would result in the following CSS classes:
-
-    .. sourcecode:: css
-
-        td .code .kw { font-weight: bold; color: #00FF00 }
-        td .code .cm { color: #999999 }
-        ...
-
-    If you have Pygments 0.6 or higher, you can also pass a list or tuple to the
-    `get_style_defs()` method to request multiple prefixes for the tokens:
-
-    .. sourcecode:: python
-
-        formatter.get_style_defs(['div.syntax pre', 'pre.syntax'])
-
-    The output would then look like this:
-
-    .. sourcecode:: css
-
-        div.syntax pre .kw,
-        pre.syntax .kw { font-weight: bold; color: #00FF00 }
-        div.syntax pre .cm,
-        pre.syntax .cm { color: #999999 }
-        ...
-
-    Additional options accepted:
-
-    `nowrap`
-        If set to ``True``, don't wrap the tokens at all, not even inside a ``<pre>``
-        tag. This disables most other options (default: ``False``).
-
-    `full`
-        Tells the formatter to output a "full" document, i.e. a complete
-        self-contained document (default: ``False``).
-
-    `title`
-        If `full` is true, the title that should be used to caption the
-        document (default: ``''``).
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``). This option has no effect if the `cssfile`
-        and `noclobber_cssfile` option are given and the file specified in
-        `cssfile` exists.
-
-    `noclasses`
-        If set to true, token ``<span>`` tags will not use CSS classes, but
-        inline styles. This is not recommended for larger pieces of code since
-        it increases output size by quite a bit (default: ``False``).
-
-    `classprefix`
-        Since the token types use relatively short class names, they may clash
-        with some of your own class names. In this case you can use the
-        `classprefix` option to give a string to prepend to all Pygments-generated
-        CSS class names for token types.
-        Note that this option also affects the output of `get_style_defs()`.
-
-    `cssclass`
-        CSS class for the wrapping ``<div>`` tag (default: ``'highlight'``).
-        If you set this option, the default selector for `get_style_defs()`
-        will be this class.
-
-        .. versionadded:: 0.9
-           If you select the ``'table'`` line numbers, the wrapping table will
-           have a CSS class of this string plus ``'table'``, the default is
-           accordingly ``'highlighttable'``.
-
-    `cssstyles`
-        Inline CSS styles for the wrapping ``<div>`` tag (default: ``''``).
-
-    `prestyles`
-        Inline CSS styles for the ``<pre>`` tag (default: ``''``).
-
-        .. versionadded:: 0.11
-
-    `cssfile`
-        If the `full` option is true and this option is given, it must be the
-        name of an external file. If the filename does not include an absolute
-        path, the file's path will be assumed to be relative to the main output
-        file's path, if the latter can be found. The stylesheet is then written
-        to this file instead of the HTML file.
-
-        .. versionadded:: 0.6
-
-    `noclobber_cssfile`
-        If `cssfile` is given and the specified file exists, the css file will
-        not be overwritten. This allows the use of the `full` option in
-        combination with a user specified css file. Default is ``False``.
-
-        .. versionadded:: 1.1
-
-    `linenos`
-        If set to ``'table'``, output line numbers as a table with two cells,
-        one containing the line numbers, the other the whole code.  This is
-        copy-and-paste-friendly, but may cause alignment problems with some
-        browsers or fonts.  If set to ``'inline'``, the line numbers will be
-        integrated in the ``<pre>`` tag that contains the code (that setting
-        is *new in Pygments 0.8*).
-
-        For compatibility with Pygments 0.7 and earlier, every true value
-        except ``'inline'`` means the same as ``'table'`` (in particular, that
-        means also ``True``).
-
-        The default value is ``False``, which means no line numbers at all.
-
-        **Note:** with the default ("table") line number mechanism, the line
-        numbers and code can have different line heights in Internet Explorer
-        unless you give the enclosing ``<pre>`` tags an explicit ``line-height``
-        CSS property (you get the default line spacing with ``line-height:
-        125%``).
-
-    `hl_lines`
-        Specify a list of lines to be highlighted.
-
-        .. versionadded:: 0.11
-
-    `linenostart`
-        The line number for the first line (default: ``1``).
-
-    `linenostep`
-        If set to a number n > 1, only every nth line number is printed.
-
-    `linenospecial`
-        If set to a number n > 0, every nth line number is given the CSS
-        class ``"special"`` (default: ``0``).
-
-    `nobackground`
-        If set to ``True``, the formatter won't output the background color
-        for the wrapping element (this automatically defaults to ``False``
-        when there is no wrapping element [e.g. no argument for the
-        `get_style_defs` method given]) (default: ``False``).
-
-        .. versionadded:: 0.6
-
-    `lineseparator`
-        This string is output between lines of code. It defaults to ``"\n"``,
-        which is enough to break a line inside ``<pre>`` tags, but you can
-        e.g. set it to ``"<br>"`` to get HTML line breaks.
-
-        .. versionadded:: 0.7
-
-    `lineanchors`
-        If set to a nonempty string, e.g. ``foo``, the formatter will wrap each
-        output line in an anchor tag with a ``name`` of ``foo-linenumber``.
-        This allows easy linking to certain lines.
-
-        .. versionadded:: 0.9
-
-    `linespans`
-        If set to a nonempty string, e.g. ``foo``, the formatter will wrap each
-        output line in a span tag with an ``id`` of ``foo-linenumber``.
-        This allows easy access to lines via JavaScript.
-
-        .. versionadded:: 1.6
-
-    `anchorlinenos`
-        If set to ``True``, line numbers will be wrapped in ``<a>`` tags. Used
-        in combination with `linenos` and `lineanchors`.
-
-    `tagsfile`
-        If set to the path of a ctags file, wrap names in anchor tags that
-        link to their definitions. `lineanchors` should be used, and the
-        tags file should specify line numbers (see the `-n` option to ctags).
-
-        .. versionadded:: 1.6
-
-    `tagurlformat`
-        A string formatting pattern used to generate links to ctags definitions.
-        Available variables are `%(path)s`, `%(fname)s` and `%(fext)s`.
-        Defaults to an empty string, resulting in just `#prefix-number` links.
-
-        .. versionadded:: 1.6
-
-    `filename`
-        A string used to generate a filename when rendering ``<pre>`` blocks,
-        for example if displaying source code.
-
-        .. versionadded:: 2.1
-
-    `wrapcode`
-        Wrap the code inside ``<pre>`` blocks using ``<code>``, as recommended
-        by the HTML5 specification.
-
-        .. versionadded:: 2.4
-
-
-    **Subclassing the HTML formatter**
-
-    .. versionadded:: 0.7
-
-    The HTML formatter is now built in a way that allows easy subclassing, thus
-    customizing the output HTML code. The `format()` method calls
-    `self._format_lines()` which returns a generator that yields tuples of ``(1,
-    line)``, where the ``1`` indicates that the ``line`` is a line of the
-    formatted source code.
-
-    If the `nowrap` option is set, the generator is simply iterated over and
-    the resulting HTML is output.
-
-    Otherwise, `format()` calls `self.wrap()`, which wraps the generator with
-    other generators. These may add some HTML code to the one generated by
-    `_format_lines()`, either by modifying the lines generated by the latter,
-    then yielding them again with ``(1, line)``, and/or by yielding other HTML
-    code before or after the lines, with ``(0, html)``. The distinction between
-    source lines and other code makes it possible to wrap the generator multiple
-    times.
-
-    The default `wrap()` implementation adds a ``<div>`` and a ``<pre>`` tag.
-
-    A custom `HtmlFormatter` subclass could look like this:
-
-    .. sourcecode:: python
-
-        class CodeHtmlFormatter(HtmlFormatter):
-
-            def wrap(self, source, outfile):
-                return self._wrap_code(source)
-
-            def _wrap_code(self, source):
-                yield 0, '<code>'
-                for i, t in source:
-                    if i == 1:
-                        # it's a line of formatted code
-                        t += '<br>'
-                    yield i, t
-                yield 0, '</code>'
-
-    This results in wrapping the formatted lines with a ``<code>`` tag, where the
-    source lines are broken using ``<br>`` tags.
-
-    After calling `wrap()`, the `format()` method also adds the "line numbers"
-    and/or "full document" wrappers if the respective options are set. Then, all
-    HTML yielded by the wrapped generator is output.
-    """
-
-    name = 'HTML'
-    aliases = ['html']
-    filenames = ['*.html', '*.htm']
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-        self.title = self._decodeifneeded(self.title)
-        self.nowrap = get_bool_opt(options, 'nowrap', False)
-        self.noclasses = get_bool_opt(options, 'noclasses', False)
-        self.classprefix = options.get('classprefix', '')
-        self.cssclass = self._decodeifneeded(options.get('cssclass', 'highlight'))
-        self.cssstyles = self._decodeifneeded(options.get('cssstyles', ''))
-        self.prestyles = self._decodeifneeded(options.get('prestyles', ''))
-        self.cssfile = self._decodeifneeded(options.get('cssfile', ''))
-        self.noclobber_cssfile = get_bool_opt(options, 'noclobber_cssfile', False)
-        self.tagsfile = self._decodeifneeded(options.get('tagsfile', ''))
-        self.tagurlformat = self._decodeifneeded(options.get('tagurlformat', ''))
-        self.filename = self._decodeifneeded(options.get('filename', ''))
-        self.wrapcode = get_bool_opt(options, 'wrapcode', False)
-
-        if self.tagsfile:
-            if not ctags:
-                raise RuntimeError('The "ctags" package must be installed '
-                                   'to be able to use the "tagsfile" feature.')
-            self._ctags = ctags.CTags(self.tagsfile)
-
-        linenos = options.get('linenos', False)
-        if linenos == 'inline':
-            self.linenos = 2
-        elif linenos:
-            # compatibility with <= 0.7
-            self.linenos = 1
-        else:
-            self.linenos = 0
-        self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
-        self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
-        self.linenospecial = abs(get_int_opt(options, 'linenospecial', 0))
-        self.nobackground = get_bool_opt(options, 'nobackground', False)
-        self.lineseparator = options.get('lineseparator', u'\n')
-        self.lineanchors = options.get('lineanchors', '')
-        self.linespans = options.get('linespans', '')
-        self.anchorlinenos = options.get('anchorlinenos', False)
-        self.hl_lines = set()
-        for lineno in get_list_opt(options, 'hl_lines', []):
-            try:
-                self.hl_lines.add(int(lineno))
-            except ValueError:
-                pass
-
-        self._create_stylesheet()
-
-    def _get_css_class(self, ttype):
-        """Return the css class of this token type prefixed with
-        the classprefix option."""
-        ttypeclass = _get_ttype_class(ttype)
-        if ttypeclass:
-            return self.classprefix + ttypeclass
-        return ''
-
-    def _get_css_classes(self, ttype):
-        """Return the css classes of this token type prefixed with
-        the classprefix option."""
-        cls = self._get_css_class(ttype)
-        while ttype not in STANDARD_TYPES:
-            ttype = ttype.parent
-            cls = self._get_css_class(ttype) + ' ' + cls
-        return cls
-
-    def _create_stylesheet(self):
-        t2c = self.ttype2class = {Token: ''}
-        c2s = self.class2style = {}
-        for ttype, ndef in self.style:
-            name = self._get_css_class(ttype)
-            style = ''
-            if ndef['color']:
-                style += 'color: %s; ' % webify(ndef['color'])
-            if ndef['bold']:
-                style += 'font-weight: bold; '
-            if ndef['italic']:
-                style += 'font-style: italic; '
-            if ndef['underline']:
-                style += 'text-decoration: underline; '
-            if ndef['bgcolor']:
-                style += 'background-color: %s; ' % webify(ndef['bgcolor'])
-            if ndef['border']:
-                style += 'border: 1px solid %s; ' % webify(ndef['border'])
-            if style:
-                t2c[ttype] = name
-                # save len(ttype) to enable ordering the styles by
-                # hierarchy (necessary for CSS cascading rules!)
-                c2s[name] = (style[:-2], ttype, len(ttype))
-
-    def get_style_defs(self, arg=None):
-        """
-        Return CSS style definitions for the classes produced by the current
-        highlighting style. ``arg`` can be a string or list of selectors to
-        insert before the token type classes.
-        """
-        if arg is None:
-            arg = ('cssclass' in self.options and '.'+self.cssclass or '')
-        if isinstance(arg, str):
-            args = [arg]
-        else:
-            args = list(arg)
-
-        def prefix(cls):
-            if cls:
-                cls = '.' + cls
-            tmp = []
-            for arg in args:
-                tmp.append((arg and arg + ' ' or '') + cls)
-            return ', '.join(tmp)
-
-        styles = [(level, ttype, cls, style)
-                  for cls, (style, ttype, level) in self.class2style.items()
-                  if cls and style]
-        styles.sort()
-        lines = ['%s { %s } /* %s */' % (prefix(cls), style, repr(ttype)[6:])
-                 for (level, ttype, cls, style) in styles]
-        if arg and not self.nobackground and \
-           self.style.background_color is not None:
-            text_style = ''
-            if Text in self.ttype2class:
-                text_style = ' ' + self.class2style[self.ttype2class[Text]][0]
-            lines.insert(0, '%s { background: %s;%s }' %
-                         (prefix(''), self.style.background_color, text_style))
-        if self.style.highlight_color is not None:
-            lines.insert(0, '%s.hll { background-color: %s }' %
-                         (prefix(''), self.style.highlight_color))
-        return '\n'.join(lines)
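The sort in `get_style_defs` is not cosmetic: rules are ordered by the depth of their token type (the `len(ttype)` stored in `class2style`) so that more specific classes are emitted later and win under CSS's "last rule wins" behaviour for selectors of equal specificity. A minimal standalone sketch of that ordering trick (class names and colors here are made up for illustration):

```python
# (depth, css_class, style) -- depth plays the role of len(ttype)
rules = [
    (3, 'sd', 'color: #dd2200'),   # e.g. Literal.String.Doc
    (1, 'l',  'color: #333333'),   # e.g. Literal
    (2, 's',  'color: #bb4444'),   # e.g. Literal.String
]

# sorting on depth first guarantees parent rules precede child rules
lines = ['.%s { %s }' % (cls, style)
         for depth, cls, style in sorted(rules)]
print('\n'.join(lines))
# → .l { color: #333333 }
#   .s { color: #bb4444 }
#   .sd { color: #dd2200 }
```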
-
-    def _decodeifneeded(self, value):
-        if isinstance(value, bytes):
-            if self.encoding:
-                return value.decode(self.encoding)
-            return value.decode()
-        return value
-
-    def _wrap_full(self, inner, outfile):
-        if self.cssfile:
-            if os.path.isabs(self.cssfile):
-                # it's an absolute filename
-                cssfilename = self.cssfile
-            else:
-                try:
-                    filename = outfile.name
-                    if not filename or filename[0] == '<':
-                        # pseudo files, e.g. name == '<fdopen>'
-                        raise AttributeError
-                    cssfilename = os.path.join(os.path.dirname(filename),
-                                               self.cssfile)
-                except AttributeError:
-                    print('Note: Cannot determine output file name, '
-                          'using current directory as base for the CSS file name',
-                          file=sys.stderr)
-                    cssfilename = self.cssfile
-            # write the CSS file unless noclobber_cssfile is set and the
-            # file already exists
-            try:
-                if not os.path.exists(cssfilename) or not self.noclobber_cssfile:
-                    with open(cssfilename, "w") as cf:
-                        cf.write(CSSFILE_TEMPLATE %
-                                 {'styledefs': self.get_style_defs('body')})
-            except IOError as err:
-                err.strerror = 'Error writing CSS file: ' + err.strerror
-                raise
-
-            yield 0, (DOC_HEADER_EXTERNALCSS %
-                      dict(title=self.title,
-                           cssfile=self.cssfile,
-                           encoding=self.encoding))
-        else:
-            yield 0, (DOC_HEADER %
-                      dict(title=self.title,
-                           styledefs=self.get_style_defs('body'),
-                           encoding=self.encoding))
-
-        for t, line in inner:
-            yield t, line
-        yield 0, DOC_FOOTER
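The path handling at the top of `_wrap_full` can be summarized in a small standalone helper (the name `resolve_cssfile` is ours, not Pygments'): absolute CSS paths are kept as given, relative ones are joined onto the output file's directory, and pseudo file names such as `'<stdout>'` fall back to the current directory.

```python
import os

def resolve_cssfile(cssfile, outname):
    # mirrors the logic above, with the AttributeError dance flattened out
    if os.path.isabs(cssfile):
        return cssfile                 # absolute path: use as-is
    if not outname or outname.startswith('<'):
        return cssfile                 # pseudo file, e.g. '<fdopen>'
    # relative to the directory of the main output file
    return os.path.join(os.path.dirname(outname), cssfile)

print(resolve_cssfile('style.css', os.path.join('docs', 'index.html')))
```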
-
-    def _wrap_tablelinenos(self, inner):
-        dummyoutfile = StringIO()
-        lncount = 0
-        for t, line in inner:
-            if t:
-                lncount += 1
-            dummyoutfile.write(line)
-
-        fl = self.linenostart
-        mw = len(str(lncount + fl - 1))
-        sp = self.linenospecial
-        st = self.linenostep
-        la = self.lineanchors
-        aln = self.anchorlinenos
-        nocls = self.noclasses
-        if sp:
-            lines = []
-
-            for i in range(fl, fl+lncount):
-                if i % st == 0:
-                    if i % sp == 0:
-                        if aln:
-                            lines.append('<a href="#%s-%d" class="special">%*d</a>' %
-                                         (la, i, mw, i))
-                        else:
-                            lines.append('<span class="special">%*d</span>' % (mw, i))
-                    else:
-                        if aln:
-                            lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
-                        else:
-                            lines.append('%*d' % (mw, i))
-                else:
-                    lines.append('')
-            ls = '\n'.join(lines)
-        else:
-            lines = []
-            for i in range(fl, fl+lncount):
-                if i % st == 0:
-                    if aln:
-                        lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
-                    else:
-                        lines.append('%*d' % (mw, i))
-                else:
-                    lines.append('')
-            ls = '\n'.join(lines)
-
-        # in case you wonder about the seemingly redundant <div> here: since the
-        # content in the other cell also is wrapped in a div, some browsers in
-        # some configurations seem to mess up the formatting...
-        if nocls:
-            yield 0, ('<table class="%stable">' % self.cssclass +
-                      '<tr><td><div class="linenodiv" '
-                      'style="background-color: #f0f0f0; padding-right: 10px">'
-                      '<pre style="line-height: 125%">' +
-                      ls + '</pre></div></td><td class="code">')
-        else:
-            yield 0, ('<table class="%stable">' % self.cssclass +
-                      '<tr><td class="linenos"><div class="linenodiv"><pre>' +
-                      ls + '</pre></div></td><td class="code">')
-        yield 0, dummyoutfile.getvalue()
-        yield 0, '</td></tr></table>'
-
-    def _wrap_inlinelinenos(self, inner):
-        # need a list of lines since we need the width of a single number :(
-        lines = list(inner)
-        sp = self.linenospecial
-        st = self.linenostep
-        num = self.linenostart
-        mw = len(str(len(lines) + num - 1))
-
-        if self.noclasses:
-            if sp:
-                for t, line in lines:
-                    if num % sp == 0:
-                        style = 'background-color: #ffffc0; padding: 0 5px 0 5px'
-                    else:
-                        style = 'background-color: #f0f0f0; padding: 0 5px 0 5px'
-                    yield 1, '<span style="%s">%*s </span>' % (
-                        style, mw, (num % st and ' ' or num)) + line
-                    num += 1
-            else:
-                for t, line in lines:
-                    yield 1, ('<span style="background-color: #f0f0f0; '
-                              'padding: 0 5px 0 5px">%*s </span>' % (
-                                  mw, (num % st and ' ' or num)) + line)
-                    num += 1
-        elif sp:
-            for t, line in lines:
-                yield 1, '<span class="lineno%s">%*s </span>' % (
-                    num % sp == 0 and ' special' or '', mw,
-                    (num % st and ' ' or num)) + line
-                num += 1
-        else:
-            for t, line in lines:
-                yield 1, '<span class="lineno">%*s </span>' % (
-                    mw, (num % st and ' ' or num)) + line
-                num += 1
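The gutter arithmetic shared by the line-number wrappers above is easy to see in isolation: `mw` is the width of the largest line number, every label is right-aligned to that width with `%*s`, and with `linenostep > 1` only every n-th line gets a number while the rest get blank padding. A standalone sketch (the `gutter` helper is illustrative, not a Pygments function):

```python
def gutter(count, start=1, step=1):
    width = len(str(count + start - 1))          # the "mw" computation above
    out = []
    for num in range(start, start + count):
        # the equivalent of: num % st and ' ' or num
        label = num if num % step == 0 else ''
        out.append('%*s' % (width, label))
    return out

print(gutter(12, start=1, step=5))
# only lines 5 and 10 get a number; all entries are 2 characters wide
```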
-
-    def _wrap_lineanchors(self, inner):
-        s = self.lineanchors
-        # subtract 1 since we have to increment i *before* yielding
-        i = self.linenostart - 1
-        for t, line in inner:
-            if t:
-                i += 1
-                yield 1, '<a name="%s-%d"></a>' % (s, i) + line
-            else:
-                yield 0, line
-
-    def _wrap_linespans(self, inner):
-        s = self.linespans
-        i = self.linenostart - 1
-        for t, line in inner:
-            if t:
-                i += 1
-                yield 1, '<span id="%s-%d">%s</span>' % (s, i, line)
-            else:
-                yield 0, line
-
-    def _wrap_div(self, inner):
-        style = []
-        if (self.noclasses and not self.nobackground and
-                self.style.background_color is not None):
-            style.append('background: %s' % (self.style.background_color,))
-        if self.cssstyles:
-            style.append(self.cssstyles)
-        style = '; '.join(style)
-
-        yield 0, ('<div' + (self.cssclass and ' class="%s"' % self.cssclass) +
-                  (style and (' style="%s"' % style)) + '>')
-        for tup in inner:
-            yield tup
-        yield 0, '</div>\n'
-
-    def _wrap_pre(self, inner):
-        style = []
-        if self.prestyles:
-            style.append(self.prestyles)
-        if self.noclasses:
-            style.append('line-height: 125%')
-        style = '; '.join(style)
-
-        if self.filename:
-            yield 0, ('<span class="filename">' + self.filename + '</span>')
-
-        # the empty span here is to keep leading empty lines from being
-        # ignored by HTML parsers
-        yield 0, ('<pre' + (style and ' style="%s"' % style) + '><span></span>')
-        for tup in inner:
-            yield tup
-        yield 0, '</pre>'
-
-    def _wrap_code(self, inner):
-        yield 0, '<code>'
-        for tup in inner:
-            yield tup
-        yield 0, '</code>'
-
-    def _format_lines(self, tokensource):
-        """
-        Just format the tokens, without any wrapping tags.
-        Yield individual lines.
-        """
-        nocls = self.noclasses
-        lsep = self.lineseparator
-        # for <span style=""> lookup only
-        getcls = self.ttype2class.get
-        c2s = self.class2style
-        escape_table = _escape_html_table
-        tagsfile = self.tagsfile
-
-        lspan = ''
-        line = []
-        for ttype, value in tokensource:
-            if nocls:
-                cclass = getcls(ttype)
-                while cclass is None:
-                    ttype = ttype.parent
-                    cclass = getcls(ttype)
-                cspan = cclass and '<span style="%s">' % c2s[cclass][0] or ''
-            else:
-                cls = self._get_css_classes(ttype)
-                cspan = cls and '<span class="%s">' % cls or ''
-
-            parts = value.translate(escape_table).split('\n')
-
-            if tagsfile and ttype in Token.Name:
-                filename, linenumber = self._lookup_ctag(value)
-                if linenumber:
-                    base, filename = os.path.split(filename)
-                    if base:
-                        base += '/'
-                    filename, extension = os.path.splitext(filename)
-                    url = self.tagurlformat % {'path': base, 'fname': filename,
-                                               'fext': extension}
-                    parts[0] = "<a href=\"%s#%s-%d\">%s" % \
-                        (url, self.lineanchors, linenumber, parts[0])
-                    parts[-1] = parts[-1] + "</a>"
-
-            # for all but the last line
-            for part in parts[:-1]:
-                if line:
-                    if lspan != cspan:
-                        line.extend(((lspan and '</span>'), cspan, part,
-                                     (cspan and '</span>'), lsep))
-                    else:  # both are the same
-                        line.extend((part, (lspan and '</span>'), lsep))
-                    yield 1, ''.join(line)
-                    line = []
-                elif part:
-                    yield 1, ''.join((cspan, part, (cspan and '</span>'), lsep))
-                else:
-                    yield 1, lsep
-            # for the last line
-            if line and parts[-1]:
-                if lspan != cspan:
-                    line.extend(((lspan and '</span>'), cspan, parts[-1]))
-                    lspan = cspan
-                else:
-                    line.append(parts[-1])
-            elif parts[-1]:
-                line = [cspan, parts[-1]]
-                lspan = cspan
-            # else we neither have to open a new span nor set lspan
-
-        if line:
-            line.extend(((lspan and '</span>'), lsep))
-            yield 1, ''.join(line)
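The `lspan`/`cspan` bookkeeping in `_format_lines` implements one optimization: consecutive tokens that would open an identical `<span>` tag are merged into a single span instead of closing and reopening it for every token. A standalone sketch of just that merging idea, stripped of the line-splitting logic (the `merge_spans` helper is ours, for illustration):

```python
def merge_spans(tokens):
    # tokens: (open_tag, text) pairs; open_tag == '' means unstyled text
    out, last = [], ''
    for span, text in tokens:
        if span != last:
            if last:
                out.append('</span>')  # close the previous span, if any
            out.append(span)           # open the new one ('' opens nothing)
            last = span
        out.append(text)
    if last:
        out.append('</span>')          # close a span left dangling at the end
    return ''.join(out)

print(merge_spans([('<span class="k">', 'if'), ('', ' '),
                   ('<span class="n">', 'x'), ('<span class="n">', 'y')]))
# → <span class="k">if</span> <span class="n">xy</span>
```

Note how the two `"n"` tokens share one span in the output; on real token streams this noticeably shrinks the generated HTML.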
-
-    def _lookup_ctag(self, token):
-        entry = ctags.TagEntry()
-        if self._ctags.find(entry, token, 0):
-            return entry['file'], entry['lineNumber']
-        else:
-            return None, None
-
-    def _highlight_lines(self, tokensource):
-        """
-        Highlight the lines specified in the `hl_lines` option by
-        post-processing the token stream coming from `_format_lines`.
-        """
-        hls = self.hl_lines
-
-        for i, (t, value) in enumerate(tokensource):
-            if t != 1:
-                yield t, value
-                continue  # pass non-source chunks through exactly once
-            if i + 1 in hls:  # i + 1 because Python indexes start at 0
-                if self.noclasses:
-                    style = ''
-                    if self.style.highlight_color is not None:
-                        style = (' style="background-color: %s"' %
-                                 (self.style.highlight_color,))
-                    yield 1, '<span%s>%s</span>' % (style, value)
-                else:
-                    yield 1, '<span class="hll">%s</span>' % value
-            else:
-                yield 1, value
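This post-processing pass is simple enough to demonstrate standalone: given a stream of `(1, line)` tuples, wrap the 1-based line numbers listed in `hl_lines` in a marker span (the `hll` class matches the one emitted by `get_style_defs`; the `highlight_lines` helper below is a sketch, not the Pygments method itself).

```python
def highlight_lines(source, hl_lines):
    for i, (flag, line) in enumerate(source):
        # i + 1 because enumerate is 0-based but hl_lines is 1-based
        if flag == 1 and i + 1 in hl_lines:
            yield 1, '<span class="hll">%s</span>' % line
        else:
            yield flag, line

stream = [(1, 'x = 1\n'), (1, 'y = 2\n'), (1, 'z = 3\n')]
print(list(highlight_lines(stream, {2})))
# only the second line is wrapped in the "hll" span
```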
-
-    def wrap(self, source, outfile):