Updated Pygments to version 1.4.0.

author:    Detlev Offenbach <detlev@die-offenbachs.de>
date:      Wed, 05 Jan 2011 15:46:19 +0100
changeset: 808:8f85926125ef
parent:    805:83ca4d1ff648
child:     811:2ed99614dbf4

APIs/Python3/eric5.api
DebugClients/Ruby/Completer.rb
Documentation/Help/source.qch
Documentation/Help/source.qhp
Documentation/Source/eric5.QScintilla.Lexers.LexerPygments.html
QScintilla/Lexers/LexerPygments.py
ThirdParty/Pygments/pygments/AUTHORS
ThirdParty/Pygments/pygments/CHANGES
ThirdParty/Pygments/pygments/PKG-INFO
ThirdParty/Pygments/pygments/__init__.py
ThirdParty/Pygments/pygments/cmdline.py
ThirdParty/Pygments/pygments/formatter.py
ThirdParty/Pygments/pygments/formatters/_mapping.py
ThirdParty/Pygments/pygments/formatters/html.py
ThirdParty/Pygments/pygments/formatters/img.py
ThirdParty/Pygments/pygments/formatters/latex.py
ThirdParty/Pygments/pygments/formatters/other.py
ThirdParty/Pygments/pygments/lexer.py
ThirdParty/Pygments/pygments/lexers/__init__.py
ThirdParty/Pygments/pygments/lexers/_luabuiltins.py
ThirdParty/Pygments/pygments/lexers/_mapping.py
ThirdParty/Pygments/pygments/lexers/_phpbuiltins.py
ThirdParty/Pygments/pygments/lexers/agile.py
ThirdParty/Pygments/pygments/lexers/compiled.py
ThirdParty/Pygments/pygments/lexers/dotnet.py
ThirdParty/Pygments/pygments/lexers/functional.py
ThirdParty/Pygments/pygments/lexers/hdl.py
ThirdParty/Pygments/pygments/lexers/math.py
ThirdParty/Pygments/pygments/lexers/other.py
ThirdParty/Pygments/pygments/lexers/special.py
ThirdParty/Pygments/pygments/lexers/templates.py
ThirdParty/Pygments/pygments/lexers/text.py
ThirdParty/Pygments/pygments/lexers/web.py
ThirdParty/Pygments/pygments/style.py
ThirdParty/Pygments/pygments/styles/__init__.py
ThirdParty/Pygments/pygments/token.py
ThirdParty/Pygments/pygments/unistring.py
ThirdParty/Pygments/pygments/util.py
changelog
eric5.e4p
i18n/eric5_cs.ts
i18n/eric5_de.qm
i18n/eric5_de.ts
i18n/eric5_en.ts
i18n/eric5_es.ts
i18n/eric5_fr.ts
i18n/eric5_it.ts
i18n/eric5_ru.ts
i18n/eric5_tr.ts
i18n/eric5_zh_CN.GB2312.ts
--- a/APIs/Python3/eric5.api	Tue Jan 04 17:37:48 2011 +0100
+++ b/APIs/Python3/eric5.api	Wed Jan 05 15:46:19 2011 +0100
@@ -5205,6 +5205,7 @@
 eric5.QScintilla.Lexers.LexerProperties.LexerProperties?1(parent=None)
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.canStyle?4()
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.defaultColor?4(style)
+eric5.QScintilla.Lexers.LexerPygments.LexerPygments.defaultEolFill?4(style)
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.defaultFont?4(style)
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.defaultKeywords?4(kwSet)
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.defaultPaper?4(style)
@@ -5216,8 +5217,8 @@
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.styleBitsNeeded?4()
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments.styleText?4(start, end)
 eric5.QScintilla.Lexers.LexerPygments.LexerPygments?1(parent = None, name = "")
-eric5.QScintilla.Lexers.LexerPygments.PYGMENTS_ERROR?7
 eric5.QScintilla.Lexers.LexerPygments.PYGMENTS_INSERTED?7
+eric5.QScintilla.Lexers.LexerPygments.PYGMENTS_PUNCTUATION?7
 eric5.QScintilla.Lexers.LexerPygments.TOKEN_MAP?7
 eric5.QScintilla.Lexers.LexerPython.LexerPython.autoCompletionWordSeparators?4()
 eric5.QScintilla.Lexers.LexerPython.LexerPython.defaultKeywords?4(kwSet)
--- a/DebugClients/Ruby/Completer.rb	Tue Jan 04 17:37:48 2011 +0100
+++ b/DebugClients/Ruby/Completer.rb	Wed Jan 05 15:46:19 2011 +0100
@@ -135,7 +135,7 @@
         # Global variable
             candidates = global_variables.grep(Regexp.new(Regexp.quote($1)))
 
-#        when /^(\$?(\.?[^.]+)+)\.([^.]*)$/
+##        when /^(\$?(\.?[^.]+)+)\.([^.]*)$/
         when /^((\.?[^.]+)+)\.([^.]*)$/
         # variable
             receiver = $1
@@ -179,7 +179,7 @@
 
         else
             candidates = eval("methods | private_methods | local_variables | self.class.constants", @binding)
-              
+            
             (candidates|ReservedWords).grep(/^#{Regexp.quote(input)}/)
         end
     end
Binary file Documentation/Help/source.qch has changed
--- a/Documentation/Help/source.qhp	Tue Jan 04 17:37:48 2011 +0100
+++ b/Documentation/Help/source.qhp	Wed Jan 05 15:46:19 2011 +0100
@@ -9811,6 +9811,7 @@
       <keyword name="LexerPygments.__guessLexer" id="LexerPygments.__guessLexer" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.__guessLexer" />
       <keyword name="LexerPygments.canStyle" id="LexerPygments.canStyle" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.canStyle" />
       <keyword name="LexerPygments.defaultColor" id="LexerPygments.defaultColor" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.defaultColor" />
+      <keyword name="LexerPygments.defaultEolFill" id="LexerPygments.defaultEolFill" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.defaultEolFill" />
       <keyword name="LexerPygments.defaultFont" id="LexerPygments.defaultFont" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.defaultFont" />
       <keyword name="LexerPygments.defaultKeywords" id="LexerPygments.defaultKeywords" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.defaultKeywords" />
       <keyword name="LexerPygments.defaultPaper" id="LexerPygments.defaultPaper" ref="eric5.QScintilla.Lexers.LexerPygments.html#LexerPygments.defaultPaper" />
--- a/Documentation/Source/eric5.QScintilla.Lexers.LexerPygments.html	Tue Jan 04 17:37:48 2011 +0100
+++ b/Documentation/Source/eric5.QScintilla.Lexers.LexerPygments.html	Wed Jan 05 15:46:19 2011 +0100
@@ -26,7 +26,7 @@
 </p>
 <h3>Global Attributes</h3>
 <table>
-<tr><td>PYGMENTS_ERROR</td></tr><tr><td>PYGMENTS_INSERTED</td></tr><tr><td>TOKEN_MAP</td></tr>
+<tr><td>PYGMENTS_INSERTED</td></tr><tr><td>PYGMENTS_PUNCTUATION</td></tr><tr><td>TOKEN_MAP</td></tr>
 </table>
 <h3>Classes</h3>
 <table>
@@ -68,6 +68,9 @@
 <td><a href="#LexerPygments.defaultColor">defaultColor</a></td>
 <td>Public method to get the default foreground color for a style.</td>
 </tr><tr>
+<td><a href="#LexerPygments.defaultEolFill">defaultEolFill</a></td>
+<td>Public method to get the default fill to eol flag.</td>
+</tr><tr>
 <td><a href="#LexerPygments.defaultFont">defaultFont</a></td>
 <td>Public method to get the default font for a style.</td>
 </tr><tr>
@@ -152,6 +155,21 @@
 <dd>
 foreground color (QColor)
 </dd>
+</dl><a NAME="LexerPygments.defaultEolFill" ID="LexerPygments.defaultEolFill"></a>
+<h4>LexerPygments.defaultEolFill</h4>
+<b>defaultEolFill</b>(<i>style</i>)
+<p>
+        Public method to get the default fill to eol flag.
+</p><dl>
+<dt><i>style</i></dt>
+<dd>
+style number (integer)
+</dd>
+</dl><dl>
+<dt>Returns:</dt>
+<dd>
+fill to eol flag (boolean)
+</dd>
 </dl><a NAME="LexerPygments.defaultFont" ID="LexerPygments.defaultFont"></a>
 <h4>LexerPygments.defaultFont</h4>
 <b>defaultFont</b>(<i>style</i>)
--- a/QScintilla/Lexers/LexerPygments.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/QScintilla/Lexers/LexerPygments.py	Wed Jan 05 15:46:19 2011 +0100
@@ -56,55 +56,80 @@
 PYGMENTS_PROMPT, \
 PYGMENTS_OUTPUT, \
 PYGMENTS_TRACEBACK, \
-PYGMENTS_ERROR              = list(range(40, 47))
+PYGMENTS_ERROR, \
+PYGMENTS_MULTILINECOMMENT, \
+PYGMENTS_PROPERTY, \
+PYGMENTS_CHAR, \
+PYGMENTS_HEREDOC, \
+PYGMENTS_PUNCTUATION        = list(range(40, 52))
 
 #-----------------------------------------------------------------------------#
 
 TOKEN_MAP = {
-    Token.Comment:                   PYGMENTS_COMMENT,
-    Token.Comment.Preproc:           PYGMENTS_PREPROCESSOR,
+    Token.Comment:                  PYGMENTS_COMMENT,
+    Token.Comment.Preproc:          PYGMENTS_PREPROCESSOR,
+    Token.Comment.Multiline:        PYGMENTS_MULTILINECOMMENT,
+    Token.Comment.Single:           PYGMENTS_COMMENT,
+    Token.Comment.Special:          PYGMENTS_COMMENT,
 
-    Token.Keyword:                   PYGMENTS_KEYWORD,
-    Token.Keyword.Pseudo:            PYGMENTS_PSEUDOKEYWORD,
-    Token.Keyword.Type:              PYGMENTS_TYPEKEYWORD,
+    Token.Keyword:                  PYGMENTS_KEYWORD,
+    Token.Keyword.Pseudo:           PYGMENTS_PSEUDOKEYWORD,
+    Token.Keyword.Type:             PYGMENTS_TYPEKEYWORD,
+    Token.Keyword.Namespace:        PYGMENTS_KEYWORD,
 
-    Token.Operator:                  PYGMENTS_OPERATOR,
-    Token.Operator.Word:             PYGMENTS_WORD,
+    Token.Operator:                 PYGMENTS_OPERATOR,
+    Token.Operator.Word:            PYGMENTS_WORD,
 
-    Token.Name.Builtin:              PYGMENTS_BUILTIN,
-    Token.Name.Function:             PYGMENTS_FUNCTION,
-    Token.Name.Class:                PYGMENTS_CLASS,
-    Token.Name.Namespace:            PYGMENTS_NAMESPACE,
-    Token.Name.Exception:            PYGMENTS_EXCEPTION,
-    Token.Name.Variable:             PYGMENTS_VARIABLE,
-    Token.Name.Constant:             PYGMENTS_CONSTANT,
-    Token.Name.Label:                PYGMENTS_LABEL,
-    Token.Name.Entity:               PYGMENTS_ENTITY,
-    Token.Name.Attribute:            PYGMENTS_ATTRIBUTE,
-    Token.Name.Tag:                  PYGMENTS_TAG,
-    Token.Name.Decorator:            PYGMENTS_DECORATOR,
+    Token.Name:                     PYGMENTS_DEFAULT,
+    Token.Name.Builtin:             PYGMENTS_BUILTIN,
+    Token.Name.Builtin.Pseudo:      PYGMENTS_BUILTIN,
+    Token.Name.Function:            PYGMENTS_FUNCTION,
+    Token.Name.Class:               PYGMENTS_CLASS,
+    Token.Name.Namespace:           PYGMENTS_NAMESPACE,
+    Token.Name.Exception:           PYGMENTS_EXCEPTION,
+    Token.Name.Variable:            PYGMENTS_VARIABLE,
+    Token.Name.Variable.Class:      PYGMENTS_VARIABLE,
+    Token.Name.Variable.Global:     PYGMENTS_VARIABLE,
+    Token.Name.Variable.Instance:   PYGMENTS_VARIABLE,
+    Token.Name.Constant:            PYGMENTS_CONSTANT,
+    Token.Name.Label:               PYGMENTS_LABEL,
+    Token.Name.Entity:              PYGMENTS_ENTITY,
+    Token.Name.Attribute:           PYGMENTS_ATTRIBUTE,
+    Token.Name.Tag:                 PYGMENTS_TAG,
+    Token.Name.Decorator:           PYGMENTS_DECORATOR,
+    Token.Name.Property:            PYGMENTS_PROPERTY,
 
-    Token.String:                    PYGMENTS_STRING,
-    Token.String.Doc:                PYGMENTS_DOCSTRING,
-    Token.String.Interpol:           PYGMENTS_SCALAR,
-    Token.String.Escape:             PYGMENTS_ESCAPE,
-    Token.String.Regex:              PYGMENTS_REGEX,
-    Token.String.Symbol:             PYGMENTS_SYMBOL,
-    Token.String.Other:              PYGMENTS_OTHER,
-    Token.Number:                    PYGMENTS_NUMBER,
+    Token.String:                   PYGMENTS_STRING,
+    Token.String.Char:              PYGMENTS_CHAR,
+    Token.String.Doc:               PYGMENTS_DOCSTRING,
+    Token.String.Interpol:          PYGMENTS_SCALAR,
+    Token.String.Escape:            PYGMENTS_ESCAPE,
+    Token.String.Regex:             PYGMENTS_REGEX,
+    Token.String.Symbol:            PYGMENTS_SYMBOL,
+    Token.String.Other:             PYGMENTS_OTHER,
+    Token.String.Heredoc:           PYGMENTS_HEREDOC,
+    
+    Token.Number:                   PYGMENTS_NUMBER,
+    Token.Number.Float:             PYGMENTS_NUMBER,
+    Token.Number.Hex:               PYGMENTS_NUMBER,
+    Token.Number.Integer:           PYGMENTS_NUMBER,
+    Token.Number.Integer.Long:      PYGMENTS_NUMBER,
+    Token.Number.Oct:               PYGMENTS_NUMBER,
 
-    Token.Generic.Heading:           PYGMENTS_HEADING,
-    Token.Generic.Subheading:        PYGMENTS_SUBHEADING,
-    Token.Generic.Deleted:           PYGMENTS_DELETED,
-    Token.Generic.Inserted:          PYGMENTS_INSERTED,
-    Token.Generic.Error:             PYGMENTS_GENERIC_ERROR,
-    Token.Generic.Emph:              PYGMENTS_EMPHASIZE,
-    Token.Generic.Strong:            PYGMENTS_STRONG,
-    Token.Generic.Prompt:            PYGMENTS_PROMPT,
-    Token.Generic.Output:            PYGMENTS_OUTPUT,
-    Token.Generic.Traceback:         PYGMENTS_TRACEBACK,
+    Token.Punctuation:              PYGMENTS_PUNCTUATION,
 
-    Token.Error:                     PYGMENTS_ERROR, 
+    Token.Generic.Heading:          PYGMENTS_HEADING,
+    Token.Generic.Subheading:       PYGMENTS_SUBHEADING,
+    Token.Generic.Deleted:          PYGMENTS_DELETED,
+    Token.Generic.Inserted:         PYGMENTS_INSERTED,
+    Token.Generic.Error:            PYGMENTS_GENERIC_ERROR,
+    Token.Generic.Emph:             PYGMENTS_EMPHASIZE,
+    Token.Generic.Strong:           PYGMENTS_STRONG,
+    Token.Generic.Prompt:           PYGMENTS_PROMPT,
+    Token.Generic.Output:           PYGMENTS_OUTPUT,
+    Token.Generic.Traceback:        PYGMENTS_TRACEBACK,
+
+    Token.Error:                    PYGMENTS_ERROR, 
 }
 
 #-----------------------------------------------------------------------------#
@@ -125,88 +150,106 @@
         self.__pygmentsName = name
         
         self.descriptions = {
-            PYGMENTS_DEFAULT       : self.trUtf8("Default"), 
-            PYGMENTS_COMMENT       : self.trUtf8("Comment"), 
-            PYGMENTS_PREPROCESSOR  : self.trUtf8("Preprocessor"), 
-            PYGMENTS_KEYWORD       : self.trUtf8("Keyword"), 
-            PYGMENTS_PSEUDOKEYWORD : self.trUtf8("Pseudo Keyword"), 
-            PYGMENTS_TYPEKEYWORD   : self.trUtf8("Type Keyword"), 
-            PYGMENTS_OPERATOR      : self.trUtf8("Operator"), 
-            PYGMENTS_WORD          : self.trUtf8("Word"), 
-            PYGMENTS_BUILTIN       : self.trUtf8("Builtin"), 
-            PYGMENTS_FUNCTION      : self.trUtf8("Function or method name"), 
-            PYGMENTS_CLASS         : self.trUtf8("Class name"), 
-            PYGMENTS_NAMESPACE     : self.trUtf8("Namespace"), 
-            PYGMENTS_EXCEPTION     : self.trUtf8("Exception"), 
-            PYGMENTS_VARIABLE      : self.trUtf8("Identifier"), 
-            PYGMENTS_CONSTANT      : self.trUtf8("Constant"), 
-            PYGMENTS_LABEL         : self.trUtf8("Label"), 
-            PYGMENTS_ENTITY        : self.trUtf8("Entity"), 
-            PYGMENTS_ATTRIBUTE     : self.trUtf8("Attribute"), 
-            PYGMENTS_TAG           : self.trUtf8("Tag"), 
-            PYGMENTS_DECORATOR     : self.trUtf8("Decorator"), 
-            PYGMENTS_STRING        : self.trUtf8("String"), 
-            PYGMENTS_DOCSTRING     : self.trUtf8("Documentation string"), 
-            PYGMENTS_SCALAR        : self.trUtf8("Scalar"), 
-            PYGMENTS_ESCAPE        : self.trUtf8("Escape"), 
-            PYGMENTS_REGEX         : self.trUtf8("Regular expression"), 
-            PYGMENTS_SYMBOL        : self.trUtf8("Symbol"), 
-            PYGMENTS_OTHER         : self.trUtf8("Other string"), 
-            PYGMENTS_NUMBER        : self.trUtf8("Number"), 
-            PYGMENTS_HEADING       : self.trUtf8("Heading"), 
-            PYGMENTS_SUBHEADING    : self.trUtf8("Subheading"), 
-            PYGMENTS_DELETED       : self.trUtf8("Deleted"), 
-            PYGMENTS_INSERTED      : self.trUtf8("Inserted"), 
-            PYGMENTS_GENERIC_ERROR : self.trUtf8("Generic error"), 
-            PYGMENTS_EMPHASIZE     : self.trUtf8("Emphasized text"), 
-            PYGMENTS_STRONG        : self.trUtf8("Strong text"), 
-            PYGMENTS_PROMPT        : self.trUtf8("Prompt"), 
-            PYGMENTS_OUTPUT        : self.trUtf8("Output"), 
-            PYGMENTS_TRACEBACK     : self.trUtf8("Traceback"), 
-            PYGMENTS_ERROR         : self.trUtf8("Error"), 
+            PYGMENTS_DEFAULT            : self.trUtf8("Default"), 
+            PYGMENTS_COMMENT            : self.trUtf8("Comment"), 
+            PYGMENTS_PREPROCESSOR       : self.trUtf8("Preprocessor"), 
+            PYGMENTS_KEYWORD            : self.trUtf8("Keyword"), 
+            PYGMENTS_PSEUDOKEYWORD      : self.trUtf8("Pseudo Keyword"), 
+            PYGMENTS_TYPEKEYWORD        : self.trUtf8("Type Keyword"), 
+            PYGMENTS_OPERATOR           : self.trUtf8("Operator"), 
+            PYGMENTS_WORD               : self.trUtf8("Word"), 
+            PYGMENTS_BUILTIN            : self.trUtf8("Builtin"), 
+            PYGMENTS_FUNCTION           : self.trUtf8("Function or method name"), 
+            PYGMENTS_CLASS              : self.trUtf8("Class name"), 
+            PYGMENTS_NAMESPACE          : self.trUtf8("Namespace"), 
+            PYGMENTS_EXCEPTION          : self.trUtf8("Exception"), 
+            PYGMENTS_VARIABLE           : self.trUtf8("Identifier"), 
+            PYGMENTS_CONSTANT           : self.trUtf8("Constant"), 
+            PYGMENTS_LABEL              : self.trUtf8("Label"), 
+            PYGMENTS_ENTITY             : self.trUtf8("Entity"), 
+            PYGMENTS_ATTRIBUTE          : self.trUtf8("Attribute"), 
+            PYGMENTS_TAG                : self.trUtf8("Tag"), 
+            PYGMENTS_DECORATOR          : self.trUtf8("Decorator"), 
+            PYGMENTS_STRING             : self.trUtf8("String"), 
+            PYGMENTS_DOCSTRING          : self.trUtf8("Documentation string"), 
+            PYGMENTS_SCALAR             : self.trUtf8("Scalar"), 
+            PYGMENTS_ESCAPE             : self.trUtf8("Escape"), 
+            PYGMENTS_REGEX              : self.trUtf8("Regular expression"), 
+            PYGMENTS_SYMBOL             : self.trUtf8("Symbol"), 
+            PYGMENTS_OTHER              : self.trUtf8("Other string"), 
+            PYGMENTS_NUMBER             : self.trUtf8("Number"), 
+            PYGMENTS_HEADING            : self.trUtf8("Heading"), 
+            PYGMENTS_SUBHEADING         : self.trUtf8("Subheading"), 
+            PYGMENTS_DELETED            : self.trUtf8("Deleted"), 
+            PYGMENTS_INSERTED           : self.trUtf8("Inserted"), 
+            PYGMENTS_GENERIC_ERROR      : self.trUtf8("Generic error"), 
+            PYGMENTS_EMPHASIZE          : self.trUtf8("Emphasized text"), 
+            PYGMENTS_STRONG             : self.trUtf8("Strong text"), 
+            PYGMENTS_PROMPT             : self.trUtf8("Prompt"), 
+            PYGMENTS_OUTPUT             : self.trUtf8("Output"), 
+            PYGMENTS_TRACEBACK          : self.trUtf8("Traceback"), 
+            PYGMENTS_ERROR              : self.trUtf8("Error"), 
+            PYGMENTS_MULTILINECOMMENT   : self.trUtf8("Comment block"),
+            PYGMENTS_PROPERTY           : self.trUtf8("Property"),
+            PYGMENTS_CHAR               : self.trUtf8("Character"),
+            PYGMENTS_HEREDOC            : self.trUtf8("Here document"),
+            PYGMENTS_PUNCTUATION        : self.trUtf8("Punctuation"),
         }
         
         self.defaultColors = {
-            PYGMENTS_DEFAULT       : QColor("#000000"), 
-            PYGMENTS_COMMENT       : QColor("#408080"), 
-            PYGMENTS_PREPROCESSOR  : QColor("#BC7A00"), 
-            PYGMENTS_KEYWORD       : QColor("#008000"), 
-            PYGMENTS_PSEUDOKEYWORD : QColor("#008000"), 
-            PYGMENTS_TYPEKEYWORD   : QColor("#B00040"), 
-            PYGMENTS_OPERATOR      : QColor("#666666"), 
-            PYGMENTS_WORD          : QColor("#AA22FF"), 
-            PYGMENTS_BUILTIN       : QColor("#008000"), 
-            PYGMENTS_FUNCTION      : QColor("#0000FF"), 
-            PYGMENTS_CLASS         : QColor("#0000FF"), 
-            PYGMENTS_NAMESPACE     : QColor("#0000FF"), 
-            PYGMENTS_EXCEPTION     : QColor("#D2413A"), 
-            PYGMENTS_VARIABLE      : QColor("#19177C"), 
-            PYGMENTS_CONSTANT      : QColor("#880000"), 
-            PYGMENTS_LABEL         : QColor("#A0A000"), 
-            PYGMENTS_ENTITY        : QColor("#999999"), 
-            PYGMENTS_ATTRIBUTE     : QColor("#7D9029"), 
-            PYGMENTS_TAG           : QColor("#008000"), 
-            PYGMENTS_DECORATOR     : QColor("#AA22FF"), 
-            PYGMENTS_STRING        : QColor("#BA2121"), 
-            PYGMENTS_DOCSTRING     : QColor("#BA2121"), 
-            PYGMENTS_SCALAR        : QColor("#BB6688"), 
-            PYGMENTS_ESCAPE        : QColor("#BB6622"), 
-            PYGMENTS_REGEX         : QColor("#BB6688"), 
-            PYGMENTS_SYMBOL        : QColor("#19177C"), 
-            PYGMENTS_OTHER         : QColor("#008000"), 
-            PYGMENTS_NUMBER        : QColor("#666666"), 
-            PYGMENTS_HEADING       : QColor("#000080"), 
-            PYGMENTS_SUBHEADING    : QColor("#800080"), 
-            PYGMENTS_DELETED       : QColor("#A00000"), 
-            PYGMENTS_INSERTED      : QColor("#00A000"), 
-            PYGMENTS_GENERIC_ERROR : QColor("#FF0000"), 
-            PYGMENTS_PROMPT        : QColor("#000080"), 
-            PYGMENTS_OUTPUT        : QColor("#808080"), 
-            PYGMENTS_TRACEBACK     : QColor("#0040D0"), 
+            PYGMENTS_DEFAULT            : QColor("#000000"), 
+            PYGMENTS_COMMENT            : QColor("#408080"), 
+            PYGMENTS_PREPROCESSOR       : QColor("#BC7A00"), 
+            PYGMENTS_KEYWORD            : QColor("#008000"), 
+            PYGMENTS_PSEUDOKEYWORD      : QColor("#008000"), 
+            PYGMENTS_TYPEKEYWORD        : QColor("#B00040"), 
+            PYGMENTS_OPERATOR           : QColor("#666666"), 
+            PYGMENTS_WORD               : QColor("#AA22FF"), 
+            PYGMENTS_BUILTIN            : QColor("#008000"), 
+            PYGMENTS_FUNCTION           : QColor("#0000FF"), 
+            PYGMENTS_CLASS              : QColor("#0000FF"), 
+            PYGMENTS_NAMESPACE          : QColor("#0000FF"), 
+            PYGMENTS_EXCEPTION          : QColor("#D2413A"), 
+            PYGMENTS_VARIABLE           : QColor("#19177C"), 
+            PYGMENTS_CONSTANT           : QColor("#880000"), 
+            PYGMENTS_LABEL              : QColor("#A0A000"), 
+            PYGMENTS_ENTITY             : QColor("#999999"), 
+            PYGMENTS_ATTRIBUTE          : QColor("#7D9029"), 
+            PYGMENTS_TAG                : QColor("#008000"), 
+            PYGMENTS_DECORATOR          : QColor("#AA22FF"), 
+            PYGMENTS_STRING             : QColor("#BA2121"), 
+            PYGMENTS_DOCSTRING          : QColor("#BA2121"), 
+            PYGMENTS_SCALAR             : QColor("#BB6688"), 
+            PYGMENTS_ESCAPE             : QColor("#BB6622"), 
+            PYGMENTS_REGEX              : QColor("#BB6688"), 
+            PYGMENTS_SYMBOL             : QColor("#19177C"), 
+            PYGMENTS_OTHER              : QColor("#008000"), 
+            PYGMENTS_NUMBER             : QColor("#666666"), 
+            PYGMENTS_HEADING            : QColor("#000080"), 
+            PYGMENTS_SUBHEADING         : QColor("#800080"), 
+            PYGMENTS_DELETED            : QColor("#A00000"), 
+            PYGMENTS_INSERTED           : QColor("#00A000"), 
+            PYGMENTS_GENERIC_ERROR      : QColor("#FF0000"), 
+            PYGMENTS_PROMPT             : QColor("#000080"), 
+            PYGMENTS_OUTPUT             : QColor("#808080"), 
+            PYGMENTS_TRACEBACK          : QColor("#0040D0"), 
+            PYGMENTS_MULTILINECOMMENT   : QColor("#007F00"),
+            PYGMENTS_PROPERTY           : QColor("#00A0E0"),
+            PYGMENTS_CHAR               : QColor("#7F007F"),
+            PYGMENTS_HEREDOC            : QColor("#7F007F"),
+            PYGMENTS_PUNCTUATION        : QColor("#000000"),
         }
         
         self.defaultPapers = {
-            PYGMENTS_ERROR         : QColor("#FF0000"), 
+            PYGMENTS_ERROR              : QColor("#FF0000"), 
+            PYGMENTS_MULTILINECOMMENT   : QColor("#A8FFA8"),
+            PYGMENTS_HEREDOC            : QColor("#DDD0DD"),
+        }
+        
+        self.defaultEolFill = {
+            PYGMENTS_ERROR              : True, 
+            PYGMENTS_MULTILINECOMMENT   : True,
+            PYGMENTS_HEREDOC            : True, 
         }
     
     def language(self):
@@ -261,7 +304,7 @@
         @param style style number (integer)
         @return font (QFont)
         """
-        if style in [PYGMENTS_COMMENT, PYGMENTS_PREPROCESSOR]:
+        if style in [PYGMENTS_COMMENT, PYGMENTS_PREPROCESSOR, PYGMENTS_MULTILINECOMMENT]:
             if Utilities.isWindowsPlatform():
                 f = QFont("Comic Sans MS", 9)
             else:
@@ -270,7 +313,7 @@
                 f.setItalic(True)
             return f
         
-        if style in [PYGMENTS_STRING]:
+        if style in [PYGMENTS_STRING, PYGMENTS_CHAR]:
             if Utilities.isWindowsPlatform():
                 return QFont("Comic Sans MS", 10)
             else:
@@ -292,6 +335,18 @@
         
         return LexerContainer.defaultFont(self, style)
     
+    def defaultEolFill(self, style):
+        """
+        Public method to get the default fill to eol flag.
+        
+        @param style style number (integer)
+        @return fill to eol flag (boolean)
+        """
+        try:
+            return self.defaultEolFill[style]
+        except KeyError:
+            return LexerContainer.defaultEolFill(self, style)
+        
     def styleBitsNeeded(self):
         """
         Public method to get the number of style bits needed by the lexer.
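The TOKEN_MAP changes in the diff above map concrete Pygments token types (such as the newly handled `Token.Comment.Multiline` and `Token.String.Heredoc`) onto eric5 style numbers. As an illustration only — the `TokenType` stand-in, the constants, and the `style_for_token` helper below are hypothetical sketches, not code from this changeset — such a map is typically consulted with a fallback through each token's parent types, which is why adding an entry like `Token.Comment.Multiline` refines rather than replaces the generic `Token.Comment` style:

```python
# Minimal stand-in for pygments' token hierarchy: each attribute access
# creates a child token type whose .parent points back at its creator.
class TokenType(tuple):
    parent = None

    def __getattr__(self, name):
        if name.startswith('_'):
            raise AttributeError(name)
        child = TokenType(self + (name,))
        child.parent = self
        setattr(self, name, child)  # cache so repeated access yields the same object
        return child

Token = TokenType()

# Hypothetical style numbers, loosely modeled on the constants in this diff.
PYGMENTS_DEFAULT = 0
PYGMENTS_COMMENT = 1
PYGMENTS_MULTILINECOMMENT = 45

TOKEN_MAP = {
    Token.Comment:           PYGMENTS_COMMENT,
    Token.Comment.Multiline: PYGMENTS_MULTILINECOMMENT,
}

def style_for_token(token):
    """Walk up the token hierarchy until a mapped style is found."""
    while token is not None:
        if token in TOKEN_MAP:
            return TOKEN_MAP[token]
        token = token.parent
    return PYGMENTS_DEFAULT
```

With this fallback, `Token.Comment.Single` (no entry of its own) still resolves to the comment style via its parent, while the new `Token.Comment.Multiline` entry takes precedence for block comments.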
--- a/ThirdParty/Pygments/pygments/AUTHORS	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/AUTHORS	Wed Jan 05 15:46:19 2011 +0100
@@ -5,6 +5,7 @@
 
 Other contributors, listed alphabetically, are:
 
+* Sam Aaron -- Ioke lexer
 * Kumar Appaiah -- Debian control lexer
 * Ali Afshar -- image formatter
 * Andreas Amann -- AppleScript lexer
@@ -14,15 +15,19 @@
 * Max Battcher -- Darcs patch lexer
 * Paul Baumgart, 280 North, Inc. -- Objective-J lexer
 * Michael Bayer -- Myghty lexers
+* John Benediktsson -- Factor lexer
 * Jarrett Billingsley -- MiniD lexer
 * Adam Blinkinsop -- Haskell, Redcode lexers
 * Frits van Bommel -- assembler lexers
 * Pierre Bourdon -- bugfixes
+* Hiram Chirino -- Scaml and Jade lexers
 * Christopher Creutzig -- MuPAD lexer
 * Pete Curry -- bugfixes
 * Owen Durni -- haXe lexer
 * Nick Efford -- Python 3 lexer
 * Artem Egorkine -- terminal256 formatter
+* James H. Fisher -- PostScript lexer
+* Naveen Garg - Autohotkey lexer
 * Laurent Gautier -- R/S lexer
 * Krzysiek Goj -- Scala lexer
 * Matt Good -- Genshi, Cheetah lexers
@@ -33,6 +38,8 @@
 * Aslak Hellesøy -- Gherkin lexer
 * David Hess, Fish Software, Inc. -- Objective-J lexer
 * Varun Hiremath -- Debian control lexer
+* Ben Hollis -- Mason lexer
+* Tim Howard -- BlitzMax lexer
 * Dennis Kaarsemaker -- sources.list lexer
 * Benjamin Kowarsch -- Modula-2 lexer
 * Marek Kubica -- Scheme lexer
@@ -40,7 +47,9 @@
 * Gerd Kurzbach -- Modelica lexer
 * Mark Lee -- Vala lexer
 * Ben Mabey -- Gherkin lexer
+* Simone Margaritelli -- Hybris lexer
 * Kirk McDonald -- D lexer
+* Stephen McKamey -- Duel/JBST lexer
 * Lukas Meuser -- BBCode formatter, Lua lexer
 * Paulo Moura -- Logtalk lexer
 * Ana Nelson -- Ragel, ANTLR, R console lexers
@@ -48,9 +57,11 @@
 * Jesper Noehr -- HTML formatter "anchorlinenos"
 * Jonas Obrist -- BBCode lexer
 * David Oliva -- Rebol lexer
+* Jon Parise -- Protocol buffers lexer
 * Ronny Pfannschmidt -- BBCode lexer
 * Benjamin Peterson -- Test suite refactoring
 * Justin Reidy -- MXML lexer
+* Lubomir Rintel -- GoodData MAQL and CL lexers
 * Andre Roberge -- Tango style
 * Konrad Rudolph -- LaTeX formatter enhancements
 * Mario Ruggier -- Evoque lexers
@@ -61,6 +72,7 @@
 * Tassilo Schweyer -- Io, MOOCode lexers
 * Joerg Sieker -- ABAP lexer
 * Kirill Simonov -- YAML lexer
+* Steve Spigarelli -- XQuery lexer
 * Tiberius Teng -- default style overhaul
 * Jeremy Thurgood -- Erlang, Squid config lexers
 * Erick Tryzelaar -- Felix lexer
--- a/ThirdParty/Pygments/pygments/CHANGES	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/CHANGES	Wed Jan 05 15:46:19 2011 +0100
@@ -1,11 +1,73 @@
 Pygments changelog
 ==================
 
-Issue numbers refer to the tracker at http://dev.pocoo.org/projects/pygments/.
+Issue numbers refer to the tracker at
+http://bitbucket.org/birkenfeld/pygments-main/issues.
 
 Version 1.4
 -----------
-(in development)
+(codename Unschärfe, released Jan 03, 2010)
+
+- Lexers added:
+
+  * Factor (#520)
+  * PostScript (#486)
+  * Verilog (#491)
+  * BlitzMax Basic (#478)
+  * Ioke (#465)
+  * Java properties, split out of the INI lexer (#445)
+  * Scss (#509)
+  * Duel/JBST
+  * XQuery (#617)
+  * Mason (#615)
+  * GoodData (#609)
+  * SSP (#473)
+  * Autohotkey (#417)
+  * Google Protocol Buffers
+  * Hybris (#506)
+
+- Do not fail in analyse_text methods (#618).
+
+- Performance improvements in the HTML formatter (#523).
+
+- With the ``noclasses`` option in the HTML formatter, some styles
+  present in the stylesheet were not added as inline styles.
+
+- Four fixes to the Lua lexer (#480, #481, #482, #497).
+
+- More context-sensitive Gherkin lexer with support for more i18n translations.
+
+- Support new OO keywords in Matlab lexer (#521).
+
+- Small fix in the CoffeeScript lexer (#519).
+
+- A bugfix for backslashes in ocaml strings (#499).
+
+- Fix unicode/raw docstrings in the Python lexer (#489).
+
+- Allow PIL to work without PIL.pth (#502).
+
+- Allow seconds as a unit in CSS (#496).
+
+- Support ``application/javascript`` as a JavaScript mime type (#504).
+
+- Support `Offload <http://offload.codeplay.com>`_ C++ Extensions as
+  keywords in the C++ lexer (#484).
+
+- Escape more characters in LaTeX output (#505).
+
+- Update Haml/Sass lexers to version 3 (#509).
+
+- Small PHP lexer string escaping fix (#515).
+
+- Support comments before preprocessor directives, and unsigned/
+  long long literals in C/C++ (#613, #616).
+
+- Support line continuations in the INI lexer (#494).
+
+- Fix lexing of Dylan string and char literals (#628).
+
+- Fix class/procedure name highlighting in VB.NET lexer (#624).
 
 
 Version 1.3.1
--- a/ThirdParty/Pygments/pygments/PKG-INFO	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/PKG-INFO	Wed Jan 05 15:46:19 2011 +0100
@@ -1,35 +1,35 @@
 Metadata-Version: 1.0
 Name: Pygments
-Version: 1.3.1
+Version: 1.4
 Summary: Pygments is a syntax highlighting package written in Python.
 Home-page: http://pygments.org/
 Author: Georg Brandl
 Author-email: georg@python.org
 License: BSD License
 Description: 
-        Pygments
-        ~~~~~~~~
+            Pygments
+            ~~~~~~~~
         
-        Pygments is a syntax highlighting package written in Python.
+            Pygments is a syntax highlighting package written in Python.
         
-        It is a generic syntax highlighter for general use in all kinds of software
-        such as forum systems, wikis or other applications that need to prettify
-        source code. Highlights are:
+            It is a generic syntax highlighter for general use in all kinds of software
+            such as forum systems, wikis or other applications that need to prettify
+            source code. Highlights are:
         
-        * a wide range of common languages and markup formats is supported
-        * special attention is paid to details, increasing quality by a fair amount
-        * support for new languages and formats are added easily
-        * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image       formats that PIL supports and ANSI sequences
-        * it is usable as a command-line tool and as a library
-        * ... and it highlights even Brainfuck!
+            * a wide range of common languages and markup formats is supported
+            * special attention is paid to details, increasing quality by a fair amount
+            * support for new languages and formats is added easily
+            * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image formats that PIL supports, and ANSI sequences
+            * it is usable as a command-line tool and as a library
+            * ... and it highlights even Brainfuck!
         
-        The `Pygments tip`_ is installable with ``easy_install Pygments==dev``.
+            The `Pygments tip`_ is installable with ``easy_install Pygments==dev``.
         
-        .. _Pygments tip:
-        http://dev.pocoo.org/hg/pygments-main/archive/tip.tar.gz#egg=Pygments-dev
+            .. _Pygments tip:
+               http://bitbucket.org/birkenfeld/pygments-main/get/tip.zip#egg=Pygments-dev
         
-        :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-        :license: BSD, see LICENSE for details.
+            :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+            :license: BSD, see LICENSE for details.
         
 Keywords: syntax highlighting
 Platform: any
--- a/ThirdParty/Pygments/pygments/__init__.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/__init__.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,91 +1,91 @@
-# -*- coding: utf-8 -*-
-"""
-    Pygments
-    ~~~~~~~~
-
-    Pygments is a syntax highlighting package written in Python.
-
-    It is a generic syntax highlighter for general use in all kinds of software
-    such as forum systems, wikis or other applications that need to prettify
-    source code. Highlights are:
-
-    * a wide range of common languages and markup formats is supported
-    * special attention is paid to details, increasing quality by a fair amount
-    * support for new languages and formats are added easily
-    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
-      formats that PIL supports, and ANSI sequences
-    * it is usable as a command-line tool and as a library
-    * ... and it highlights even Brainfuck!
-
-    The `Pygments tip`_ is installable with ``easy_install Pygments==dev``.
-
-    .. _Pygments tip:
-       http://dev.pocoo.org/hg/pygments-main/archive/tip.tar.gz#egg=Pygments-dev
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-__version__ = '1.3.1'
-__docformat__ = 'restructuredtext'
-
-__all__ = ['lex', 'format', 'highlight']
-
-
-import sys
-
-from pygments.util import StringIO, BytesIO
-
-
-def lex(code, lexer):
-    """
-    Lex ``code`` with ``lexer`` and return an iterable of tokens.
-    """
-    try:
-        return lexer.get_tokens(code)
-    except TypeError as err:
-        if isinstance(err.args[0], str) and \
-           'unbound method get_tokens' in err.args[0]:
-            raise TypeError('lex() argument must be a lexer instance, '
-                            'not a class')
-        raise
-
-
-def format(tokens, formatter, outfile=None):
-    """
-    Format a tokenlist ``tokens`` with the formatter ``formatter``.
-
-    If ``outfile`` is given and a valid file object (an object
-    with a ``write`` method), the result will be written to it, otherwise
-    it is returned as a string.
-    """
-    try:
-        if not outfile:
-            #print formatter, 'using', formatter.encoding
-            realoutfile = formatter.encoding and BytesIO() or StringIO()
-            formatter.format(tokens, realoutfile)
-            return realoutfile.getvalue()
-        else:
-            formatter.format(tokens, outfile)
-    except TypeError as err:
-        if isinstance(err.args[0], str) and \
-           'unbound method format' in err.args[0]:
-            raise TypeError('format() argument must be a formatter instance, '
-                            'not a class')
-        raise
-
-
-def highlight(code, lexer, formatter, outfile=None):
-    """
-    Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``.
-
-    If ``outfile`` is given and a valid file object (an object
-    with a ``write`` method), the result will be written to it, otherwise
-    it is returned as a string.
-    """
-    return format(lex(code, lexer), formatter, outfile)
-
-
-if __name__ == '__main__':
-    from pygments.cmdline import main
-    sys.exit(main(sys.argv))
+# -*- coding: utf-8 -*-
+"""
+    Pygments
+    ~~~~~~~~
+
+    Pygments is a syntax highlighting package written in Python.
+
+    It is a generic syntax highlighter for general use in all kinds of software
+    such as forum systems, wikis or other applications that need to prettify
+    source code. Highlights are:
+
+    * a wide range of common languages and markup formats is supported
+    * special attention is paid to details, increasing quality by a fair amount
+    * support for new languages and formats is added easily
+    * a number of output formats, presently HTML, LaTeX, RTF, SVG, all image
+      formats that PIL supports, and ANSI sequences
+    * it is usable as a command-line tool and as a library
+    * ... and it highlights even Brainfuck!
+
+    The `Pygments tip`_ is installable with ``easy_install Pygments==dev``.
+
+    .. _Pygments tip:
+       http://bitbucket.org/birkenfeld/pygments-main/get/tip.zip#egg=Pygments-dev
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+__version__ = '1.4'
+__docformat__ = 'restructuredtext'
+
+__all__ = ['lex', 'format', 'highlight']
+
+
+import sys
+
+from pygments.util import StringIO, BytesIO
+
+
+def lex(code, lexer):
+    """
+    Lex ``code`` with ``lexer`` and return an iterable of tokens.
+    """
+    try:
+        return lexer.get_tokens(code)
+    except TypeError as err:
+        if isinstance(err.args[0], str) and \
+           'unbound method get_tokens' in err.args[0]:
+            raise TypeError('lex() argument must be a lexer instance, '
+                            'not a class')
+        raise
+
+
+def format(tokens, formatter, outfile=None):
+    """
+    Format a tokenlist ``tokens`` with the formatter ``formatter``.
+
+    If ``outfile`` is given and a valid file object (an object
+    with a ``write`` method), the result will be written to it, otherwise
+    it is returned as a string.
+    """
+    try:
+        if not outfile:
+            #print formatter, 'using', formatter.encoding
+            realoutfile = formatter.encoding and BytesIO() or StringIO()
+            formatter.format(tokens, realoutfile)
+            return realoutfile.getvalue()
+        else:
+            formatter.format(tokens, outfile)
+    except TypeError as err:
+        if isinstance(err.args[0], str) and \
+           'unbound method format' in err.args[0]:
+            raise TypeError('format() argument must be a formatter instance, '
+                            'not a class')
+        raise
+
+
+def highlight(code, lexer, formatter, outfile=None):
+    """
+    Lex ``code`` with ``lexer`` and format it with the formatter ``formatter``.
+
+    If ``outfile`` is given and a valid file object (an object
+    with a ``write`` method), the result will be written to it, otherwise
+    it is returned as a string.
+    """
+    return format(lex(code, lexer), formatter, outfile)
+
+
+if __name__ == '__main__':
+    from pygments.cmdline import main
+    sys.exit(main(sys.argv))
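The three helpers re-added above compose into the package's one-call API: ``highlight()`` is just ``format(lex(code, lexer), formatter)``. A minimal usage sketch (assuming Pygments itself is importable; note that lexer and formatter must be *instances*, not classes, which is exactly what the ``TypeError`` guards above enforce):

```python
from pygments import highlight, lex
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

code = "def f(x):\n    return x + 1\n"

# no outfile and no formatter encoding: format() buffers into a
# StringIO, so the result comes back as a str
html = highlight(code, PythonLexer(), HtmlFormatter())
assert '<div class="highlight">' in html

# with an encoding set, format() buffers into a BytesIO instead
# and the result is bytes (the BytesIO/StringIO switch above)
raw = highlight(code, PythonLexer(), HtmlFormatter(encoding='utf-8'))
assert isinstance(raw, bytes)

# lex() alone yields (tokentype, value) pairs
tokens = list(lex(code, PythonLexer()))
assert tokens[0][1] == 'def'
```

The ``BytesIO``-vs-``StringIO`` choice mirrors the ``formatter.encoding and BytesIO() or StringIO()`` expression in ``format()`` above: an encoding implies byte output.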
--- a/ThirdParty/Pygments/pygments/cmdline.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/cmdline.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,430 +1,430 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.cmdline
-    ~~~~~~~~~~~~~~~~
-
-    Command line interface.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-import sys
-import getopt
-from textwrap import dedent
-
-from pygments import __version__, highlight
-from pygments.util import ClassNotFound, OptionError, docstring_headline
-from pygments.lexers import get_all_lexers, get_lexer_by_name, get_lexer_for_filename, \
-     find_lexer_class, guess_lexer, TextLexer
-from pygments.formatters import get_all_formatters, get_formatter_by_name, \
-     get_formatter_for_filename, find_formatter_class, \
-     TerminalFormatter  # pylint:disable-msg=E0611
-from pygments.filters import get_all_filters, find_filter_class
-from pygments.styles import get_all_styles, get_style_by_name
-
-
-USAGE = """\
-Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
-          [-O <options>] [-P <option=value>] [-o <outfile>] [<infile>]
-
-       %s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
-       %s -L [<which> ...]
-       %s -N <filename>
-       %s -H <type> <name>
-       %s -h | -V
-
-Highlight the input file and write the result to <outfile>.
-
-If no input file is given, use stdin, if -o is not given, use stdout.
-
-<lexer> is a lexer name (query all lexer names with -L). If -l is not
-given, the lexer is guessed from the extension of the input file name
-(this obviously doesn't work if the input is stdin).  If -g is passed,
-attempt to guess the lexer from the file contents, or pass through as
-plain text if this fails (this can work for stdin).
-
-Likewise, <formatter> is a formatter name, and will be guessed from
-the extension of the output file name. If no output file is given,
-the terminal formatter will be used by default.
-
-With the -O option, you can give the lexer and formatter a comma-
-separated list of options, e.g. ``-O bg=light,python=cool``.
-
-The -P option adds lexer and formatter options like the -O option, but
-you can only give one option per -P. That way, the option value may
-contain commas and equals signs, which it can't with -O, e.g.
-``-P "heading=Pygments, the Python highlighter".
-
-With the -F option, you can add filters to the token stream, you can
-give options in the same way as for -O after a colon (note: there must
-not be spaces around the colon).
-
-The -O, -P and -F options can be given multiple times.
-
-With the -S option, print out style definitions for style <style>
-for formatter <formatter>. The argument given by -a is formatter
-dependent.
-
-The -L option lists lexers, formatters, styles or filters -- set
-`which` to the thing you want to list (e.g. "styles"), or omit it to
-list everything.
-
-The -N option guesses and prints out a lexer name based solely on
-the given filename. It does not take input or highlight anything.
-If no specific lexer can be determined "text" is returned.
-
-The -H option prints detailed help for the object <name> of type <type>,
-where <type> is one of "lexer", "formatter" or "filter".
-
-The -h option prints this help.
-The -V option prints the package version.
-"""
-
-
-def _parse_options(o_strs):
-    opts = {}
-    if not o_strs:
-        return opts
-    for o_str in o_strs:
-        if not o_str:
-            continue
-        o_args = o_str.split(',')
-        for o_arg in o_args:
-            o_arg = o_arg.strip()
-            try:
-                o_key, o_val = o_arg.split('=')
-                o_key = o_key.strip()
-                o_val = o_val.strip()
-            except ValueError:
-                opts[o_arg] = True
-            else:
-                opts[o_key] = o_val
-    return opts
-
-
-def _parse_filters(f_strs):
-    filters = []
-    if not f_strs:
-        return filters
-    for f_str in f_strs:
-        if ':' in f_str:
-            fname, fopts = f_str.split(':', 1)
-            filters.append((fname, _parse_options([fopts])))
-        else:
-            filters.append((f_str, {}))
-    return filters
-
-
-def _print_help(what, name):
-    try:
-        if what == 'lexer':
-            cls = find_lexer_class(name)
-            print("Help on the %s lexer:" % cls.name)
-            print(dedent(cls.__doc__))
-        elif what == 'formatter':
-            cls = find_formatter_class(name)
-            print("Help on the %s formatter:" % cls.name)
-            print(dedent(cls.__doc__))
-        elif what == 'filter':
-            cls = find_filter_class(name)
-            print("Help on the %s filter:" % name)
-            print(dedent(cls.__doc__))
-    except AttributeError:
-        print("%s not found!" % what, file=sys.stderr)
-
-
-def _print_list(what):
-    if what == 'lexer':
-        print()
-        print("Lexers:")
-        print("~~~~~~~")
-
-        info = []
-        for fullname, names, exts, _ in get_all_lexers():
-            tup = (', '.join(names)+':', fullname,
-                   exts and '(filenames ' + ', '.join(exts) + ')' or '')
-            info.append(tup)
-        info.sort()
-        for i in info:
-            print(('* %s\n    %s %s') % i)
-
-    elif what == 'formatter':
-        print()
-        print("Formatters:")
-        print("~~~~~~~~~~~")
-
-        info = []
-        for cls in get_all_formatters():
-            doc = docstring_headline(cls)
-            tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
-                   '(filenames ' + ', '.join(cls.filenames) + ')' or '')
-            info.append(tup)
-        info.sort()
-        for i in info:
-            print(('* %s\n    %s %s') % i)
-
-    elif what == 'filter':
-        print()
-        print("Filters:")
-        print("~~~~~~~~")
-
-        for name in get_all_filters():
-            cls = find_filter_class(name)
-            print("* " + name + ':')
-            print("    %s" % docstring_headline(cls))
-
-    elif what == 'style':
-        print()
-        print("Styles:")
-        print("~~~~~~~")
-
-        for name in get_all_styles():
-            cls = get_style_by_name(name)
-            print("* " + name + ':')
-            print("    %s" % docstring_headline(cls))
-
-
-def main(args=sys.argv):
-    """
-    Main command line entry point.
-    """
-    # pylint: disable-msg=R0911,R0912,R0915
-
-    usage = USAGE % ((args[0],) * 6)
-
-    try:
-        popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:hVHg")
-    except getopt.GetoptError as err:
-        print(usage, file=sys.stderr)
-        return 2
-    opts = {}
-    O_opts = []
-    P_opts = []
-    F_opts = []
-    for opt, arg in popts:
-        if opt == '-O':
-            O_opts.append(arg)
-        elif opt == '-P':
-            P_opts.append(arg)
-        elif opt == '-F':
-            F_opts.append(arg)
-        opts[opt] = arg
-
-    if not opts and not args:
-        print(usage)
-        return 0
-
-    if opts.pop('-h', None) is not None:
-        print(usage)
-        return 0
-
-    if opts.pop('-V', None) is not None:
-        print('Pygments version %s, (c) 2006-2008 by Georg Brandl.' % __version__)
-        return 0
-
-    # handle ``pygmentize -L``
-    L_opt = opts.pop('-L', None)
-    if L_opt is not None:
-        if opts:
-            print(usage, file=sys.stderr)
-            return 2
-
-        # print version
-        main(['', '-V'])
-        if not args:
-            args = ['lexer', 'formatter', 'filter', 'style']
-        for arg in args:
-            _print_list(arg.rstrip('s'))
-        return 0
-
-    # handle ``pygmentize -H``
-    H_opt = opts.pop('-H', None)
-    if H_opt is not None:
-        if opts or len(args) != 2:
-            print(usage, file=sys.stderr)
-            return 2
-
-        what, name = args
-        if what not in ('lexer', 'formatter', 'filter'):
-            print(usage, file=sys.stderr)
-            return 2
-
-        _print_help(what, name)
-        return 0
-
-    # parse -O options
-    parsed_opts = _parse_options(O_opts)
-    opts.pop('-O', None)
-
-    # parse -P options
-    for p_opt in P_opts:
-        try:
-            name, value = p_opt.split('=', 1)
-        except ValueError:
-            parsed_opts[p_opt] = True
-        else:
-            parsed_opts[name] = value
-    opts.pop('-P', None)
-
-    # handle ``pygmentize -N``
-    infn = opts.pop('-N', None)
-    if infn is not None:
-        try:
-            lexer = get_lexer_for_filename(infn, **parsed_opts)
-        except ClassNotFound as err:
-            lexer = TextLexer()
-        except OptionError as err:
-            print('Error:', err, file=sys.stderr)
-            return 1
-
-        print(lexer.aliases[0])
-        return 0
-
-    # handle ``pygmentize -S``
-    S_opt = opts.pop('-S', None)
-    a_opt = opts.pop('-a', None)
-    if S_opt is not None:
-        f_opt = opts.pop('-f', None)
-        if not f_opt:
-            print(usage, file=sys.stderr)
-            return 2
-        if opts or args:
-            print(usage, file=sys.stderr)
-            return 2
-
-        try:
-            parsed_opts['style'] = S_opt
-            fmter = get_formatter_by_name(f_opt, **parsed_opts)
-        except ClassNotFound as err:
-            print(err, file=sys.stderr)
-            return 1
-
-        arg = a_opt or ''
-        try:
-            print(fmter.get_style_defs(arg))
-        except Exception as err:
-            print('Error:', err, file=sys.stderr)
-            return 1
-        return 0
-
-    # if no -S is given, -a is not allowed
-    if a_opt is not None:
-        print(usage, file=sys.stderr)
-        return 2
-
-    # parse -F options
-    F_opts = _parse_filters(F_opts)
-    opts.pop('-F', None)
-
-    # select formatter
-    outfn = opts.pop('-o', None)
-    fmter = opts.pop('-f', None)
-    if fmter:
-        try:
-            fmter = get_formatter_by_name(fmter, **parsed_opts)
-        except (OptionError, ClassNotFound) as err:
-            print('Error:', err, file=sys.stderr)
-            return 1
-
-    if outfn:
-        if not fmter:
-            try:
-                fmter = get_formatter_for_filename(outfn, **parsed_opts)
-            except (OptionError, ClassNotFound) as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-        try:
-            outfile = open(outfn, 'wb')
-        except Exception as err:
-            print('Error: cannot open outfile:', err, file=sys.stderr)
-            return 1
-    else:
-        if not fmter:
-            fmter = TerminalFormatter(**parsed_opts)
-        outfile = sys.stdout
-
-    # select lexer
-    lexer = opts.pop('-l', None)
-    if lexer:
-        try:
-            lexer = get_lexer_by_name(lexer, **parsed_opts)
-        except (OptionError, ClassNotFound) as err:
-            print('Error:', err, file=sys.stderr)
-            return 1
-
-    if args:
-        if len(args) > 1:
-            print(usage, file=sys.stderr)
-            return 2
-
-        infn = args[0]
-        try:
-            code = open(infn, 'rb').read()
-        except Exception as err:
-            print('Error: cannot read infile:', err, file=sys.stderr)
-            return 1
-
-        if not lexer:
-            try:
-                lexer = get_lexer_for_filename(infn, code, **parsed_opts)
-            except ClassNotFound as err:
-                if '-g' in opts:
-                    try:
-                        lexer = guess_lexer(code)
-                    except ClassNotFound:
-                        lexer = TextLexer()
-                else:
-                    print('Error:', err, file=sys.stderr)
-                    return 1
-            except OptionError as err:
-                print('Error:', err, file=sys.stderr)
-                return 1
-
-    else:
-        if '-g' in opts:
-            code = sys.stdin.read()
-            try:
-                lexer = guess_lexer(code)
-            except ClassNotFound:
-                lexer = TextLexer()
-        elif not lexer:
-            print('Error: no lexer name given and reading ' + \
-                                'from stdin (try using -g or -l <lexer>)', file=sys.stderr)
-            return 2
-        else:
-            code = sys.stdin.read()
-
-    # No encoding given? Use latin1 if output file given,
-    # stdin/stdout encoding otherwise.
-    # (This is a compromise, I'm not too happy with it...)
-    if 'encoding' not in parsed_opts and 'outencoding' not in parsed_opts:
-        if outfn:
-            # encoding pass-through
-            fmter.encoding = 'latin1'
-        else:
-            if sys.version_info < (3,):
-                # use terminal encoding; Python 3's terminals already do that
-                lexer.encoding = getattr(sys.stdin, 'encoding',
-                                         None) or 'ascii'
-                fmter.encoding = getattr(sys.stdout, 'encoding',
-                                         None) or 'ascii'
-
-    # ... and do it!
-    try:
-        # process filters
-        for fname, fopts in F_opts:
-            lexer.add_filter(fname, **fopts)
-        highlight(code, lexer, fmter, outfile)
-    except Exception as err:
-        import traceback
-        info = traceback.format_exception(*sys.exc_info())
-        msg = info[-1].strip()
-        if len(info) >= 3:
-            # extract relevant file and position info
-            msg += '\n   (f%s)' % info[-2].split('\n')[0].strip()[1:]
-        print(file=sys.stderr)
-        print('*** Error while highlighting:', file=sys.stderr)
-        print(msg, file=sys.stderr)
-        return 1
-
-    return 0
+# -*- coding: utf-8 -*-
+"""
+    pygments.cmdline
+    ~~~~~~~~~~~~~~~~
+
+    Command line interface.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+import sys
+import getopt
+from textwrap import dedent
+
+from pygments import __version__, highlight
+from pygments.util import ClassNotFound, OptionError, docstring_headline
+from pygments.lexers import get_all_lexers, get_lexer_by_name, get_lexer_for_filename, \
+     find_lexer_class, guess_lexer, TextLexer
+from pygments.formatters import get_all_formatters, get_formatter_by_name, \
+     get_formatter_for_filename, find_formatter_class, \
+     TerminalFormatter  # pylint:disable-msg=E0611
+from pygments.filters import get_all_filters, find_filter_class
+from pygments.styles import get_all_styles, get_style_by_name
+
+
+USAGE = """\
+Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
+          [-O <options>] [-P <option=value>] [-o <outfile>] [<infile>]
+
+       %s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
+       %s -L [<which> ...]
+       %s -N <filename>
+       %s -H <type> <name>
+       %s -h | -V
+
+Highlight the input file and write the result to <outfile>.
+
+If no input file is given, use stdin; if -o is not given, use stdout.
+
+<lexer> is a lexer name (query all lexer names with -L). If -l is not
+given, the lexer is guessed from the extension of the input file name
+(this obviously doesn't work if the input is stdin).  If -g is passed,
+attempt to guess the lexer from the file contents, or pass through as
+plain text if this fails (this can work for stdin).
+
+Likewise, <formatter> is a formatter name, and will be guessed from
+the extension of the output file name. If no output file is given,
+the terminal formatter will be used by default.
+
+With the -O option, you can give the lexer and formatter a comma-
+separated list of options, e.g. ``-O bg=light,python=cool``.
+
+The -P option adds lexer and formatter options like the -O option, but
+you can only give one option per -P. That way, the option value may
+contain commas and equals signs, which it can't with -O, e.g.
+``-P "heading=Pygments, the Python highlighter"``.
+
+With the -F option, you can add filters to the token stream; you can
+give options in the same way as for -O after a colon (note: there must
+not be spaces around the colon).
+
+The -O, -P and -F options can be given multiple times.
+
+With the -S option, print out style definitions for style <style>
+for formatter <formatter>. The argument given by -a is formatter
+dependent.
+
+The -L option lists lexers, formatters, styles or filters -- set
+`which` to the thing you want to list (e.g. "styles"), or omit it to
+list everything.
+
+The -N option guesses and prints out a lexer name based solely on
+the given filename. It does not take input or highlight anything.
+If no specific lexer can be determined, "text" is returned.
+
+The -H option prints detailed help for the object <name> of type <type>,
+where <type> is one of "lexer", "formatter" or "filter".
+
+The -h option prints this help.
+The -V option prints the package version.
+"""
+
+
+def _parse_options(o_strs):
+    opts = {}
+    if not o_strs:
+        return opts
+    for o_str in o_strs:
+        if not o_str:
+            continue
+        o_args = o_str.split(',')
+        for o_arg in o_args:
+            o_arg = o_arg.strip()
+            try:
+                o_key, o_val = o_arg.split('=')
+                o_key = o_key.strip()
+                o_val = o_val.strip()
+            except ValueError:
+                opts[o_arg] = True
+            else:
+                opts[o_key] = o_val
+    return opts
+
+
+def _parse_filters(f_strs):
+    filters = []
+    if not f_strs:
+        return filters
+    for f_str in f_strs:
+        if ':' in f_str:
+            fname, fopts = f_str.split(':', 1)
+            filters.append((fname, _parse_options([fopts])))
+        else:
+            filters.append((f_str, {}))
+    return filters
+
+
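The two private parsers above define the ``-O`` syntax (comma-separated ``key=value`` pairs, bare words becoming boolean flags) and the ``-F`` syntax (filter name, optionally followed by ``:`` and ``-O``-style options). A standalone transcription of that logic, with hypothetical names, illustrates the resulting data shapes:

```python
def parse_options(o_strs):
    """Parse "-O bg=light,python=cool" style option strings."""
    opts = {}
    for o_str in o_strs or []:
        for o_arg in o_str.split(','):
            o_arg = o_arg.strip()
            if not o_arg:
                continue
            try:
                key, val = o_arg.split('=')
            except ValueError:
                # bare word (or stray '='s): treat as a boolean flag
                opts[o_arg] = True
            else:
                opts[key.strip()] = val.strip()
    return opts

def parse_filters(f_strs):
    """Parse "-F name:opt=val,..." style filter strings."""
    filters = []
    for f_str in f_strs or []:
        if ':' in f_str:
            fname, fopts = f_str.split(':', 1)
            filters.append((fname, parse_options([fopts])))
        else:
            filters.append((f_str, {}))
    return filters

print(parse_options(['bg=light,python=cool']))
print(parse_filters(['whitespace:spaces=True', 'raiseonerror']))
```

Note that option values stay strings ("True", not ``True``); interpreting them is left to the lexer or formatter that receives them.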
+def _print_help(what, name):
+    try:
+        if what == 'lexer':
+            cls = find_lexer_class(name)
+            print("Help on the %s lexer:" % cls.name)
+            print(dedent(cls.__doc__))
+        elif what == 'formatter':
+            cls = find_formatter_class(name)
+            print("Help on the %s formatter:" % cls.name)
+            print(dedent(cls.__doc__))
+        elif what == 'filter':
+            cls = find_filter_class(name)
+            print("Help on the %s filter:" % name)
+            print(dedent(cls.__doc__))
+    except AttributeError:
+        print("%s not found!" % what, file=sys.stderr)
+
+
+def _print_list(what):
+    if what == 'lexer':
+        print()
+        print("Lexers:")
+        print("~~~~~~~")
+
+        info = []
+        for fullname, names, exts, _ in get_all_lexers():
+            tup = (', '.join(names)+':', fullname,
+                   exts and '(filenames ' + ', '.join(exts) + ')' or '')
+            info.append(tup)
+        info.sort()
+        for i in info:
+            print(('* %s\n    %s %s') % i)
+
+    elif what == 'formatter':
+        print()
+        print("Formatters:")
+        print("~~~~~~~~~~~")
+
+        info = []
+        for cls in get_all_formatters():
+            doc = docstring_headline(cls)
+            tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
+                   '(filenames ' + ', '.join(cls.filenames) + ')' or '')
+            info.append(tup)
+        info.sort()
+        for i in info:
+            print(('* %s\n    %s %s') % i)
+
+    elif what == 'filter':
+        print()
+        print("Filters:")
+        print("~~~~~~~~")
+
+        for name in get_all_filters():
+            cls = find_filter_class(name)
+            print("* " + name + ':')
+            print("    %s" % docstring_headline(cls))
+
+    elif what == 'style':
+        print()
+        print("Styles:")
+        print("~~~~~~~")
+
+        for name in get_all_styles():
+            cls = get_style_by_name(name)
+            print("* " + name + ':')
+            print("    %s" % docstring_headline(cls))
+
+
+def main(args=sys.argv):
+    """
+    Main command line entry point.
+    """
+    # pylint: disable-msg=R0911,R0912,R0915
+
+    usage = USAGE % ((args[0],) * 6)
+
+    try:
+        popts, args = getopt.getopt(args[1:], "l:f:F:o:O:P:LS:a:N:hVHg")
+    except getopt.GetoptError as err:
+        print(usage, file=sys.stderr)
+        return 2
+    opts = {}
+    O_opts = []
+    P_opts = []
+    F_opts = []
+    for opt, arg in popts:
+        if opt == '-O':
+            O_opts.append(arg)
+        elif opt == '-P':
+            P_opts.append(arg)
+        elif opt == '-F':
+            F_opts.append(arg)
+        opts[opt] = arg
+
+    if not opts and not args:
+        print(usage)
+        return 0
+
+    if opts.pop('-h', None) is not None:
+        print(usage)
+        return 0
+
+    if opts.pop('-V', None) is not None:
+        print('Pygments version %s, (c) 2006-2010 by Georg Brandl.' % __version__)
+        return 0
+
+    # handle ``pygmentize -L``
+    L_opt = opts.pop('-L', None)
+    if L_opt is not None:
+        if opts:
+            print(usage, file=sys.stderr)
+            return 2
+
+        # print version
+        main(['', '-V'])
+        if not args:
+            args = ['lexer', 'formatter', 'filter', 'style']
+        for arg in args:
+            _print_list(arg.rstrip('s'))
+        return 0
+
+    # handle ``pygmentize -H``
+    H_opt = opts.pop('-H', None)
+    if H_opt is not None:
+        if opts or len(args) != 2:
+            print(usage, file=sys.stderr)
+            return 2
+
+        what, name = args
+        if what not in ('lexer', 'formatter', 'filter'):
+            print(usage, file=sys.stderr)
+            return 2
+
+        _print_help(what, name)
+        return 0
+
+    # parse -O options
+    parsed_opts = _parse_options(O_opts)
+    opts.pop('-O', None)
+
+    # parse -P options
+    for p_opt in P_opts:
+        try:
+            name, value = p_opt.split('=', 1)
+        except ValueError:
+            parsed_opts[p_opt] = True
+        else:
+            parsed_opts[name] = value
+    opts.pop('-P', None)
+
+    # handle ``pygmentize -N``
+    infn = opts.pop('-N', None)
+    if infn is not None:
+        try:
+            lexer = get_lexer_for_filename(infn, **parsed_opts)
+        except ClassNotFound as err:
+            lexer = TextLexer()
+        except OptionError as err:
+            print('Error:', err, file=sys.stderr)
+            return 1
+
+        print(lexer.aliases[0])
+        return 0
+
+    # handle ``pygmentize -S``
+    S_opt = opts.pop('-S', None)
+    a_opt = opts.pop('-a', None)
+    if S_opt is not None:
+        f_opt = opts.pop('-f', None)
+        if not f_opt:
+            print(usage, file=sys.stderr)
+            return 2
+        if opts or args:
+            print(usage, file=sys.stderr)
+            return 2
+
+        try:
+            parsed_opts['style'] = S_opt
+            fmter = get_formatter_by_name(f_opt, **parsed_opts)
+        except ClassNotFound as err:
+            print(err, file=sys.stderr)
+            return 1
+
+        arg = a_opt or ''
+        try:
+            print(fmter.get_style_defs(arg))
+        except Exception as err:
+            print('Error:', err, file=sys.stderr)
+            return 1
+        return 0
+
+    # if no -S is given, -a is not allowed
+    if a_opt is not None:
+        print(usage, file=sys.stderr)
+        return 2
+
+    # parse -F options
+    F_opts = _parse_filters(F_opts)
+    opts.pop('-F', None)
+
+    # select formatter
+    outfn = opts.pop('-o', None)
+    fmter = opts.pop('-f', None)
+    if fmter:
+        try:
+            fmter = get_formatter_by_name(fmter, **parsed_opts)
+        except (OptionError, ClassNotFound) as err:
+            print('Error:', err, file=sys.stderr)
+            return 1
+
+    if outfn:
+        if not fmter:
+            try:
+                fmter = get_formatter_for_filename(outfn, **parsed_opts)
+            except (OptionError, ClassNotFound) as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+        try:
+            outfile = open(outfn, 'wb')
+        except Exception as err:
+            print('Error: cannot open outfile:', err, file=sys.stderr)
+            return 1
+    else:
+        if not fmter:
+            fmter = TerminalFormatter(**parsed_opts)
+        outfile = sys.stdout
+
+    # select lexer
+    lexer = opts.pop('-l', None)
+    if lexer:
+        try:
+            lexer = get_lexer_by_name(lexer, **parsed_opts)
+        except (OptionError, ClassNotFound) as err:
+            print('Error:', err, file=sys.stderr)
+            return 1
+
+    if args:
+        if len(args) > 1:
+            print(usage, file=sys.stderr)
+            return 2
+
+        infn = args[0]
+        try:
+            code = open(infn, 'rb').read()
+        except Exception as err:
+            print('Error: cannot read infile:', err, file=sys.stderr)
+            return 1
+
+        if not lexer:
+            try:
+                lexer = get_lexer_for_filename(infn, code, **parsed_opts)
+            except ClassNotFound as err:
+                if '-g' in opts:
+                    try:
+                        lexer = guess_lexer(code)
+                    except ClassNotFound:
+                        lexer = TextLexer()
+                else:
+                    print('Error:', err, file=sys.stderr)
+                    return 1
+            except OptionError as err:
+                print('Error:', err, file=sys.stderr)
+                return 1
+
+    else:
+        if '-g' in opts:
+            code = sys.stdin.read()
+            try:
+                lexer = guess_lexer(code)
+            except ClassNotFound:
+                lexer = TextLexer()
+        elif not lexer:
+            print('Error: no lexer name given and reading ' + \
+                                'from stdin (try using -g or -l <lexer>)', file=sys.stderr)
+            return 2
+        else:
+            code = sys.stdin.read()
+
+    # No encoding given? Use latin1 if output file given,
+    # stdin/stdout encoding otherwise.
+    # (This is a compromise, I'm not too happy with it...)
+    if 'encoding' not in parsed_opts and 'outencoding' not in parsed_opts:
+        if outfn:
+            # encoding pass-through
+            fmter.encoding = 'latin1'
+        else:
+            if sys.version_info < (3,):
+                # use terminal encoding; Python 3's terminals already do that
+                lexer.encoding = getattr(sys.stdin, 'encoding',
+                                         None) or 'ascii'
+                fmter.encoding = getattr(sys.stdout, 'encoding',
+                                         None) or 'ascii'
+
+    # ... and do it!
+    try:
+        # process filters
+        for fname, fopts in F_opts:
+            lexer.add_filter(fname, **fopts)
+        highlight(code, lexer, fmter, outfile)
+    except Exception as err:
+        import traceback
+        info = traceback.format_exception(*sys.exc_info())
+        msg = info[-1].strip()
+        if len(info) >= 3:
+            # extract relevant file and position info
+            msg += '\n   (f%s)' % info[-2].split('\n')[0].strip()[1:]
+        print(file=sys.stderr)
+        print('*** Error while highlighting:', file=sys.stderr)
+        print(msg, file=sys.stderr)
+        return 1
+
+    return 0
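The ``-P`` handling above splits each ``name=value`` argument once on ``=`` and treats a bare name as a boolean flag. A standalone sketch of that parsing logic, re-implemented here purely for illustration (the function name is made up; this is not part of Pygments itself):

```python
def parse_p_opts(p_opts):
    """Turn a list of '-P' arguments into an options dict."""
    parsed = {}
    for p_opt in p_opts:
        try:
            # split only on the first '=' so values may contain '='
            name, value = p_opt.split('=', 1)
        except ValueError:
            # no '=' present: treat the bare name as a True flag
            parsed[p_opt] = True
        else:
            parsed[name] = value
    return parsed

print(parse_p_opts(['full', 'style=native', 'title=My=Doc']))
# {'full': True, 'style': 'native', 'title': 'My=Doc'}
```

Note that ``split('=', 1)`` raises ``ValueError`` only at tuple unpacking when no ``=`` is found, which is exactly the case the ``except`` branch above handles.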
--- a/ThirdParty/Pygments/pygments/formatter.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatter.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,92 +1,92 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatter
-    ~~~~~~~~~~~~~~~~~~
-
-    Base formatter class.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import codecs
-
-from pygments.util import get_bool_opt
-from pygments.styles import get_style_by_name
-
-__all__ = ['Formatter']
-
-
-def _lookup_style(style):
-    if isinstance(style, str):
-        return get_style_by_name(style)
-    return style
-
-
-class Formatter(object):
-    """
-    Converts a token stream to text.
-
-    Options accepted:
-
-    ``style``
-        The style to use, can be a string or a Style subclass
-        (default: "default"). Not used by e.g. the
-        TerminalFormatter.
-    ``full``
-        Tells the formatter to output a "full" document, i.e.
-        a complete self-contained document. This doesn't have
-        any effect for some formatters (default: false).
-    ``title``
-        If ``full`` is true, the title that should be used to
-        caption the document (default: '').
-    ``encoding``
-        If given, must be an encoding name. This will be used to
-        convert the Unicode token strings to byte strings in the
-        output. If it is "" or None, Unicode strings will be written
-        to the output file, which most file-like objects do not
-        support (default: None).
-    ``outencoding``
-        Overrides ``encoding`` if given.
-    """
-
-    #: Name of the formatter
-    name = None
-
-    #: Shortcuts for the formatter
-    aliases = []
-
-    #: fn match rules
-    filenames = []
-
-    #: If True, this formatter outputs Unicode strings when no encoding
-    #: option is given.
-    unicodeoutput = True
-
-    def __init__(self, **options):
-        self.style = _lookup_style(options.get('style', 'default'))
-        self.full  = get_bool_opt(options, 'full', False)
-        self.title = options.get('title', '')
-        self.encoding = options.get('encoding', None) or None
-        self.encoding = options.get('outencoding', None) or self.encoding
-        self.options = options
-
-    def get_style_defs(self, arg=''):
-        """
-        Return the style definitions for the current style as a string.
-
-        ``arg`` is an additional argument whose meaning depends on the
-        formatter used. Note that ``arg`` can also be a list or tuple
-        for some formatters like the html formatter.
-        """
-        return ''
-
-    def format(self, tokensource, outfile):
-        """
-        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
-        tuples and write it into ``outfile``.
-        """
-        if self.encoding:
-            # wrap the outfile in a StreamWriter
-            outfile = codecs.lookup(self.encoding)[3](outfile)
-        return self.format_unencoded(tokensource, outfile)
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatter
+    ~~~~~~~~~~~~~~~~~~
+
+    Base formatter class.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import codecs
+
+from pygments.util import get_bool_opt
+from pygments.styles import get_style_by_name
+
+__all__ = ['Formatter']
+
+
+def _lookup_style(style):
+    if isinstance(style, str):
+        return get_style_by_name(style)
+    return style
+
+
+class Formatter(object):
+    """
+    Converts a token stream to text.
+
+    Options accepted:
+
+    ``style``
+        The style to use, can be a string or a Style subclass
+        (default: "default"). Not used by e.g. the
+        TerminalFormatter.
+    ``full``
+        Tells the formatter to output a "full" document, i.e.
+        a complete self-contained document. This doesn't have
+        any effect for some formatters (default: false).
+    ``title``
+        If ``full`` is true, the title that should be used to
+        caption the document (default: '').
+    ``encoding``
+        If given, must be an encoding name. This will be used to
+        convert the Unicode token strings to byte strings in the
+        output. If it is "" or None, Unicode strings will be written
+        to the output file, which most file-like objects do not
+        support (default: None).
+    ``outencoding``
+        Overrides ``encoding`` if given.
+    """
+
+    #: Name of the formatter
+    name = None
+
+    #: Shortcuts for the formatter
+    aliases = []
+
+    #: fn match rules
+    filenames = []
+
+    #: If True, this formatter outputs Unicode strings when no encoding
+    #: option is given.
+    unicodeoutput = True
+
+    def __init__(self, **options):
+        self.style = _lookup_style(options.get('style', 'default'))
+        self.full  = get_bool_opt(options, 'full', False)
+        self.title = options.get('title', '')
+        self.encoding = options.get('encoding', None) or None
+        self.encoding = options.get('outencoding', None) or self.encoding
+        self.options = options
+
+    def get_style_defs(self, arg=''):
+        """
+        Return the style definitions for the current style as a string.
+
+        ``arg`` is an additional argument whose meaning depends on the
+        formatter used. Note that ``arg`` can also be a list or tuple
+        for some formatters like the html formatter.
+        """
+        return ''
+
+    def format(self, tokensource, outfile):
+        """
+        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
+        tuples and write it into ``outfile``.
+        """
+        if self.encoding:
+            # wrap the outfile in a StreamWriter
+            outfile = codecs.lookup(self.encoding)[3](outfile)
+        return self.format_unencoded(tokensource, outfile)
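The ``format()`` method above wraps ``outfile`` via ``codecs.lookup(self.encoding)[3]``: index 3 of a ``CodecInfo`` tuple is the codec's ``StreamWriter`` class, which encodes Unicode text on the fly as it is written. A minimal standalone sketch of that pattern (the helper name is an illustration, not Pygments API):

```python
import codecs
import io

def wrap_with_encoding(outfile, encoding):
    """Wrap a binary file object so Unicode text is encoded on write."""
    if encoding:
        # CodecInfo is (encode, decode, StreamReader, StreamWriter)
        writer_cls = codecs.lookup(encoding)[3]
        return writer_cls(outfile)
    return outfile

buf = io.BytesIO()
out = wrap_with_encoding(buf, 'utf-8')
out.write('héllo')          # accepts str, writes encoded bytes
print(buf.getvalue())       # b'h\xc3\xa9llo'
```

When ``encoding`` is empty or ``None``, the stream is returned unchanged, mirroring the "Unicode strings will be written to the output file" behavior documented for the ``encoding`` option above.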
--- a/ThirdParty/Pygments/pygments/formatters/_mapping.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatters/_mapping.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,92 +1,92 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters._mapping
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter mapping definitions. This file is generated by itself. Every time
-    you change something in a builtin formatter definition, run this script from
-    the formatters folder to update it.
-
-    Do not alter the FORMATTERS dictionary by hand.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-from pygments.util import docstring_headline
-
-# start
-from pygments.formatters.bbcode import BBCodeFormatter
-from pygments.formatters.html import HtmlFormatter
-from pygments.formatters.img import BmpImageFormatter
-from pygments.formatters.img import GifImageFormatter
-from pygments.formatters.img import ImageFormatter
-from pygments.formatters.img import JpgImageFormatter
-from pygments.formatters.latex import LatexFormatter
-from pygments.formatters.other import NullFormatter
-from pygments.formatters.other import RawTokenFormatter
-from pygments.formatters.rtf import RtfFormatter
-from pygments.formatters.svg import SvgFormatter
-from pygments.formatters.terminal import TerminalFormatter
-from pygments.formatters.terminal256 import Terminal256Formatter
-
-FORMATTERS = {
-    BBCodeFormatter: ('BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
-    BmpImageFormatter: ('img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    GifImageFormatter: ('img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    HtmlFormatter: ('HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass` option."),
-    ImageFormatter: ('img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    JpgImageFormatter: ('img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
-    LatexFormatter: ('LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
-    NullFormatter: ('Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
-    RawTokenFormatter: ('Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
-    RtfFormatter: ('RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft\xc2\xae Word\xc2\xae documents.'),
-    SvgFormatter: ('SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file.  This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
-    Terminal256Formatter: ('Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
-    TerminalFormatter: ('Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.')
-}
-
-if __name__ == '__main__':
-    import sys
-    import os
-
-    # lookup formatters
-    found_formatters = []
-    imports = []
-    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
-    for filename in os.listdir('.'):
-        if filename.endswith('.py') and not filename.startswith('_'):
-            module_name = 'pygments.formatters.%s' % filename[:-3]
-            print(module_name)
-            module = __import__(module_name, None, None, [''])
-            for formatter_name in module.__all__:
-                imports.append((module_name, formatter_name))
-                formatter = getattr(module, formatter_name)
-                found_formatters.append(
-                    '%s: %r' % (formatter_name,
-                                (formatter.name,
-                                 tuple(formatter.aliases),
-                                 tuple(formatter.filenames),
-                                 docstring_headline(formatter))))
-    # sort them, that should make the diff files for svn smaller
-    found_formatters.sort()
-    imports.sort()
-
-    # extract useful sourcecode from this file
-    f = open(__file__)
-    try:
-        content = f.read()
-    finally:
-        f.close()
-    header = content[:content.find('# start')]
-    footer = content[content.find("if __name__ == '__main__':"):]
-
-    # write new file
-    f = open(__file__, 'w')
-    f.write(header)
-    f.write('# start\n')
-    f.write('\n'.join(['from %s import %s' % imp for imp in imports]))
-    f.write('\n\n')
-    f.write('FORMATTERS = {\n    %s\n}\n\n' % ',\n    '.join(found_formatters))
-    f.write(footer)
-    f.close()
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters._mapping
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Formatter mapping definitions. This file is generated by itself. Every time
+    you change something in a builtin formatter definition, run this script from
+    the formatters folder to update it.
+
+    Do not alter the FORMATTERS dictionary by hand.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+from pygments.util import docstring_headline
+
+# start
+from pygments.formatters.bbcode import BBCodeFormatter
+from pygments.formatters.html import HtmlFormatter
+from pygments.formatters.img import BmpImageFormatter
+from pygments.formatters.img import GifImageFormatter
+from pygments.formatters.img import ImageFormatter
+from pygments.formatters.img import JpgImageFormatter
+from pygments.formatters.latex import LatexFormatter
+from pygments.formatters.other import NullFormatter
+from pygments.formatters.other import RawTokenFormatter
+from pygments.formatters.rtf import RtfFormatter
+from pygments.formatters.svg import SvgFormatter
+from pygments.formatters.terminal import TerminalFormatter
+from pygments.formatters.terminal256 import Terminal256Formatter
+
+FORMATTERS = {
+    BBCodeFormatter: ('BBCode', ('bbcode', 'bb'), (), 'Format tokens with BBcodes. These formatting codes are used by many bulletin boards, so you can highlight your sourcecode with pygments before posting it there.'),
+    BmpImageFormatter: ('img_bmp', ('bmp', 'bitmap'), ('*.bmp',), 'Create a bitmap image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    GifImageFormatter: ('img_gif', ('gif',), ('*.gif',), 'Create a GIF image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    HtmlFormatter: ('HTML', ('html',), ('*.html', '*.htm'), "Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass` option."),
+    ImageFormatter: ('img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    JpgImageFormatter: ('img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
+    LatexFormatter: ('LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
+    NullFormatter: ('Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
+    RawTokenFormatter: ('Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
+    RtfFormatter: ('RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft\xc2\xae Word\xc2\xae documents.'),
+    SvgFormatter: ('SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file.  This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
+    Terminal256Formatter: ('Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
+    TerminalFormatter: ('Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.')
+}
+
+if __name__ == '__main__':
+    import sys
+    import os
+
+    # lookup formatters
+    found_formatters = []
+    imports = []
+    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
+    for filename in os.listdir('.'):
+        if filename.endswith('.py') and not filename.startswith('_'):
+            module_name = 'pygments.formatters.%s' % filename[:-3]
+            print(module_name)
+            module = __import__(module_name, None, None, [''])
+            for formatter_name in module.__all__:
+                imports.append((module_name, formatter_name))
+                formatter = getattr(module, formatter_name)
+                found_formatters.append(
+                    '%s: %r' % (formatter_name,
+                                (formatter.name,
+                                 tuple(formatter.aliases),
+                                 tuple(formatter.filenames),
+                                 docstring_headline(formatter))))
+    # sort them, that should make the diff files for svn smaller
+    found_formatters.sort()
+    imports.sort()
+
+    # extract useful sourcecode from this file
+    f = open(__file__)
+    try:
+        content = f.read()
+    finally:
+        f.close()
+    header = content[:content.find('# start')]
+    footer = content[content.find("if __name__ == '__main__':"):]
+
+    # write new file
+    f = open(__file__, 'w')
+    f.write(header)
+    f.write('# start\n')
+    f.write('\n'.join(['from %s import %s' % imp for imp in imports]))
+    f.write('\n\n')
+    f.write('FORMATTERS = {\n    %s\n}\n\n' % ',\n    '.join(found_formatters))
+    f.write(footer)
+    f.close()
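The ``__main__`` block above regenerates the very file it lives in by slicing the source on two markers: everything before ``# start`` is kept as the header, everything from the ``if __name__`` guard onward as the footer, and the generated imports plus the ``FORMATTERS`` dict are rewritten in between. A toy sketch of that split on a small string, standing in for the real file contents:

```python
content = (
    '"""docstring"""\n'
    '# start\n'
    'OLD_GENERATED = 1\n'
    "if __name__ == '__main__':\n"
    '    regenerate()\n'
)

# keep everything before the '# start' marker ...
header = content[:content.find('# start')]
# ... and everything from the __main__ guard on
footer = content[content.find("if __name__ == '__main__':"):]

# splice freshly generated code between the two preserved parts
new_file = header + '# start\n' + 'NEW_GENERATED = 2\n' + footer
print(new_file)
```

Only the region between the markers is replaced, so the module docstring and the regeneration logic itself survive every rewrite.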
--- a/ThirdParty/Pygments/pygments/formatters/html.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatters/html.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,723 +1,750 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.html
-    ~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter for HTML output.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import os
-import sys
-import io
-
-from pygments.formatter import Formatter
-from pygments.token import Token, Text, STANDARD_TYPES
-from pygments.util import get_bool_opt, get_int_opt, get_list_opt, bytes
-
-
-__all__ = ['HtmlFormatter']
-
-
-def escape_html(text):
-    """Escape &, <, > as well as single and double quotes for HTML."""
-    return text.replace('&', '&amp;').  \
-                replace('<', '&lt;').   \
-                replace('>', '&gt;').   \
-                replace('"', '&quot;'). \
-                replace("'", '&#39;')
-
-
-def get_random_id():
-    """Return a random id for javascript fields."""
-    from random import random
-    from time import time
-    try:
-        from hashlib import sha1 as sha
-    except ImportError:
-        import sha
-        sha = sha.new
-    return sha('%s|%s' % (random(), time())).hexdigest()
-
-
-def _get_ttype_class(ttype):
-    fname = STANDARD_TYPES.get(ttype)
-    if fname:
-        return fname
-    aname = ''
-    while fname is None:
-        aname = '-' + ttype[-1] + aname
-        ttype = ttype.parent
-        fname = STANDARD_TYPES.get(ttype)
-    return fname + aname
-
-
-CSSFILE_TEMPLATE = '''\
-td.linenos { background-color: #f0f0f0; padding-right: 10px; }
-span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
-pre { line-height: 125%%; }
-%(styledefs)s
-'''
-
-DOC_HEADER = '''\
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
-   "http://www.w3.org/TR/html4/strict.dtd">
-
-<html>
-<head>
-  <title>%(title)s</title>
-  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
-  <style type="text/css">
-''' + CSSFILE_TEMPLATE + '''
-  </style>
-</head>
-<body>
-<h2>%(title)s</h2>
-
-'''
-
-DOC_HEADER_EXTERNALCSS = '''\
-<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
-   "http://www.w3.org/TR/html4/strict.dtd">
-
-<html>
-<head>
-  <title>%(title)s</title>
-  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
-  <link rel="stylesheet" href="%(cssfile)s" type="text/css">
-</head>
-<body>
-<h2>%(title)s</h2>
-
-'''
-
-DOC_FOOTER = '''\
-</body>
-</html>
-'''
-
-
-class HtmlFormatter(Formatter):
-    r"""
-    Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped
-    in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass`
-    option.
-
-    If the `linenos` option is set to ``"table"``, the ``<pre>`` is
-    additionally wrapped inside a ``<table>`` which has one row and two
-    cells: one containing the line numbers and one containing the code.
-    Example:
-
-    .. sourcecode:: html
-
-        <div class="highlight" >
-        <table><tr>
-          <td class="linenos" title="click to toggle"
-            onclick="with (this.firstChild.style)
-                     { display = (display == '') ? 'none' : '' }">
-            <pre>1
-            2</pre>
-          </td>
-          <td class="code">
-            <pre><span class="Ke">def </span><span class="NaFu">foo</span>(bar):
-              <span class="Ke">pass</span>
-            </pre>
-          </td>
-        </tr></table></div>
-
-    (whitespace added to improve clarity).
-
-    Wrapping can be disabled using the `nowrap` option.
-
-    A list of lines can be specified using the `hl_lines` option to make these
-    lines highlighted (as of Pygments 0.11).
-
-    With the `full` option, a complete HTML 4 document is output, including
-    the style definitions inside a ``<style>`` tag, or in a separate file if
-    the `cssfile` option is given.
-
-    The `get_style_defs(arg='')` method of a `HtmlFormatter` returns a string
-    containing CSS rules for the CSS classes used by the formatter. The
-    argument `arg` can be used to specify additional CSS selectors that
-    are prepended to the classes. A call `fmter.get_style_defs('td .code')`
-    would result in the following CSS classes:
-
-    .. sourcecode:: css
-
-        td .code .kw { font-weight: bold; color: #00FF00 }
-        td .code .cm { color: #999999 }
-        ...
-
-    If you have Pygments 0.6 or higher, you can also pass a list or tuple to the
-    `get_style_defs()` method to request multiple prefixes for the tokens:
-
-    .. sourcecode:: python
-
-        formatter.get_style_defs(['div.syntax pre', 'pre.syntax'])
-
-    The output would then look like this:
-
-    .. sourcecode:: css
-
-        div.syntax pre .kw,
-        pre.syntax .kw { font-weight: bold; color: #00FF00 }
-        div.syntax pre .cm,
-        pre.syntax .cm { color: #999999 }
-        ...
-
-    Additional options accepted:
-
-    `nowrap`
-        If set to ``True``, don't wrap the tokens at all, not even inside a ``<pre>``
-        tag. This disables most other options (default: ``False``).
-
-    `full`
-        Tells the formatter to output a "full" document, i.e. a complete
-        self-contained document (default: ``False``).
-
-    `title`
-        If `full` is true, the title that should be used to caption the
-        document (default: ``''``).
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``). This option has no effect if the `cssfile`
-        and `noclobber_cssfile` option are given and the file specified in
-        `cssfile` exists.
-
-    `noclasses`
-        If set to true, token ``<span>`` tags will not use CSS classes, but
-        inline styles. This is not recommended for larger pieces of code since
-        it increases output size by quite a bit (default: ``False``).
-
-    `classprefix`
-        Since the token types use relatively short class names, they may clash
-        with some of your own class names. In this case you can use the
-        `classprefix` option to give a string to prepend to all Pygments-generated
-        CSS class names for token types.
-        Note that this option also affects the output of `get_style_defs()`.
-
-    `cssclass`
-        CSS class for the wrapping ``<div>`` tag (default: ``'highlight'``).
-        If you set this option, the default selector for `get_style_defs()`
-        will be this class.
-
-        *New in Pygments 0.9:* If you select the ``'table'`` line numbers, the
-        wrapping table will have a CSS class of this string plus ``'table'``,
-        the default is accordingly ``'highlighttable'``.
-
-    `cssstyles`
-        Inline CSS styles for the wrapping ``<div>`` tag (default: ``''``).
-
-    `prestyles`
-        Inline CSS styles for the ``<pre>`` tag (default: ``''``).  *New in
-        Pygments 0.11.*
-
-    `cssfile`
-        If the `full` option is true and this option is given, it must be the
-        name of an external file. If the filename does not include an absolute
-        path, the file's path will be assumed to be relative to the main output
-        file's path, if the latter can be found. The stylesheet is then written
-        to this file instead of the HTML file. *New in Pygments 0.6.*
-
-    `noclobber_cssfile`
-        If `cssfile` is given and the specified file exists, the css file will
-        not be overwritten. This allows the use of the `full` option in
-        combination with a user specified css file. Default is ``False``.
-        *New in Pygments 1.1.*
-
-    `linenos`
-        If set to ``'table'``, output line numbers as a table with two cells,
-        one containing the line numbers, the other the whole code.  This is
-        copy-and-paste-friendly, but may cause alignment problems with some
-        browsers or fonts.  If set to ``'inline'``, the line numbers will be
-        integrated in the ``<pre>`` tag that contains the code (that setting
-        is *new in Pygments 0.8*).
-
-        For compatibility with Pygments 0.7 and earlier, every true value
-        except ``'inline'`` means the same as ``'table'`` (in particular, that
-        means also ``True``).
-
-        The default value is ``False``, which means no line numbers at all.
-
-        **Note:** with the default ("table") line number mechanism, the line
-        numbers and code can have different line heights in Internet Explorer
-        unless you give the enclosing ``<pre>`` tags an explicit ``line-height``
-        CSS property (you get the default line spacing with ``line-height:
-        125%``).
-
-    `hl_lines`
-        Specify a list of lines to be highlighted.  *New in Pygments 0.11.*
-
-    `linenostart`
-        The line number for the first line (default: ``1``).
-
-    `linenostep`
-        If set to a number n > 1, only every nth line number is printed.
-
-    `linenospecial`
-        If set to a number n > 0, every nth line number is given the CSS
-        class ``"special"`` (default: ``0``).
-
-    `nobackground`
-        If set to ``True``, the formatter won't output the background color
-        for the wrapping element (this automatically defaults to ``False``
-        when there is no wrapping element [eg: no argument for the
-        `get_syntax_defs` method given]) (default: ``False``). *New in
-        Pygments 0.6.*
-
-    `lineseparator`
-        This string is output between lines of code. It defaults to ``"\n"``,
-        which is enough to break a line inside ``<pre>`` tags, but you can
-        e.g. set it to ``"<br>"`` to get HTML line breaks. *New in Pygments
-        0.7.*
-
-    `lineanchors`
-        If set to a nonempty string, e.g. ``foo``, the formatter will wrap each
-        output line in an anchor tag with a ``name`` of ``foo-linenumber``.
-        This allows easy linking to certain lines. *New in Pygments 0.9.*
-
-    `anchorlinenos`
-        If set to ``True``, the formatter will wrap line numbers in ``<a>``
-        tags. Used in combination with `linenos` and `lineanchors`.
-
-
-    **Subclassing the HTML formatter**
-
-    *New in Pygments 0.7.*
-
-    The HTML formatter is now built in a way that allows easy subclassing, thus
-    customizing the output HTML code. The `format()` method calls
-    `self._format_lines()` which returns a generator that yields tuples of ``(1,
-    line)``, where the ``1`` indicates that the ``line`` is a line of the
-    formatted source code.
-
-    If the `nowrap` option is set, the generator is iterated over and the
-    resulting HTML is output.
-
-    Otherwise, `format()` calls `self.wrap()`, which wraps the generator with
-    other generators. These may add some HTML code to the one generated by
-    `_format_lines()`, either by modifying the lines generated by the latter,
-    then yielding them again with ``(1, line)``, and/or by yielding other HTML
-    code before or after the lines, with ``(0, html)``. The distinction between
-    source lines and other code makes it possible to wrap the generator multiple
-    times.
-
-    The default `wrap()` implementation adds a ``<div>`` and a ``<pre>`` tag.
-
-    A custom `HtmlFormatter` subclass could look like this:
-
-    .. sourcecode:: python
-
-        class CodeHtmlFormatter(HtmlFormatter):
-
-            def wrap(self, source, outfile):
-                return self._wrap_code(source)
-
-            def _wrap_code(self, source):
-                yield 0, '<code>'
-                for i, t in source:
-                    if i == 1:
-                        # it's a line of formatted code
-                        t += '<br>'
-                    yield i, t
-                yield 0, '</code>'
-
-    This results in wrapping the formatted lines with a ``<code>`` tag, where the
-    source lines are broken using ``<br>`` tags.
-
-    After calling `wrap()`, the `format()` method also adds the "line numbers"
-    and/or "full document" wrappers if the respective options are set. Then, all
-    HTML yielded by the wrapped generator is output.
-    """
-
-    name = 'HTML'
-    aliases = ['html']
-    filenames = ['*.html', '*.htm']
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-        self.title = self._decodeifneeded(self.title)
-        self.nowrap = get_bool_opt(options, 'nowrap', False)
-        self.noclasses = get_bool_opt(options, 'noclasses', False)
-        self.classprefix = options.get('classprefix', '')
-        self.cssclass = self._decodeifneeded(options.get('cssclass', 'highlight'))
-        self.cssstyles = self._decodeifneeded(options.get('cssstyles', ''))
-        self.prestyles = self._decodeifneeded(options.get('prestyles', ''))
-        self.cssfile = self._decodeifneeded(options.get('cssfile', ''))
-        self.noclobber_cssfile = get_bool_opt(options, 'noclobber_cssfile', False)
-
-        linenos = options.get('linenos', False)
-        if linenos == 'inline':
-            self.linenos = 2
-        elif linenos:
-            # compatibility with <= 0.7
-            self.linenos = 1
-        else:
-            self.linenos = 0
-        self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
-        self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
-        self.linenospecial = abs(get_int_opt(options, 'linenospecial', 0))
-        self.nobackground = get_bool_opt(options, 'nobackground', False)
-        self.lineseparator = options.get('lineseparator', '\n')
-        self.lineanchors = options.get('lineanchors', '')
-        self.anchorlinenos = options.get('anchorlinenos', False)
-        self.hl_lines = set()
-        for lineno in get_list_opt(options, 'hl_lines', []):
-            try:
-                self.hl_lines.add(int(lineno))
-            except ValueError:
-                pass
-
-        self._class_cache = {}
-        self._create_stylesheet()
-
-    def _get_css_class(self, ttype):
-        """Return the css class of this token type prefixed with
-        the classprefix option."""
-        if ttype in self._class_cache:
-            return self._class_cache[ttype]
-        return self.classprefix + _get_ttype_class(ttype)
-
-    def _create_stylesheet(self):
-        t2c = self.ttype2class = {Token: ''}
-        c2s = self.class2style = {}
-        cp = self.classprefix
-        for ttype, ndef in self.style:
-            name = cp + _get_ttype_class(ttype)
-            style = ''
-            if ndef['color']:
-                style += 'color: #%s; ' % ndef['color']
-            if ndef['bold']:
-                style += 'font-weight: bold; '
-            if ndef['italic']:
-                style += 'font-style: italic; '
-            if ndef['underline']:
-                style += 'text-decoration: underline; '
-            if ndef['bgcolor']:
-                style += 'background-color: #%s; ' % ndef['bgcolor']
-            if ndef['border']:
-                style += 'border: 1px solid #%s; ' % ndef['border']
-            if style:
-                t2c[ttype] = name
-                # save len(ttype) to enable ordering the styles by
-                # hierarchy (necessary for CSS cascading rules!)
-                c2s[name] = (style[:-2], ttype, len(ttype))
-
-    def get_style_defs(self, arg=None):
-        """
-        Return CSS style definitions for the classes produced by the current
-        highlighting style. ``arg`` can be a string or list of selectors to
-        insert before the token type classes.
-        """
-        if arg is None:
-            arg = ('cssclass' in self.options and '.'+self.cssclass or '')
-        if isinstance(arg, str):
-            args = [arg]
-        else:
-            args = list(arg)
-
-        def prefix(cls):
-            if cls:
-                cls = '.' + cls
-            tmp = []
-            for arg in args:
-                tmp.append((arg and arg + ' ' or '') + cls)
-            return ', '.join(tmp)
-
-        styles = [(level, ttype, cls, style)
-                  for cls, (style, ttype, level) in self.class2style.items()
-                  if cls and style]
-        styles.sort()
-        lines = ['%s { %s } /* %s */' % (prefix(cls), style, repr(ttype)[6:])
-                 for (level, ttype, cls, style) in styles]
-        if arg and not self.nobackground and \
-           self.style.background_color is not None:
-            text_style = ''
-            if Text in self.ttype2class:
-                text_style = ' ' + self.class2style[self.ttype2class[Text]][0]
-            lines.insert(0, '%s { background: %s;%s }' %
-                         (prefix(''), self.style.background_color, text_style))
-        if self.style.highlight_color is not None:
-            lines.insert(0, '%s.hll { background-color: %s }' %
-                         (prefix(''), self.style.highlight_color))
-        return '\n'.join(lines)
-
-    def _decodeifneeded(self, value):
-        if isinstance(value, bytes):
-            if self.encoding:
-                return value.decode(self.encoding)
-            return value.decode()
-        return value
-
-    def _wrap_full(self, inner, outfile):
-        if self.cssfile:
-            if os.path.isabs(self.cssfile):
-                # it's an absolute filename
-                cssfilename = self.cssfile
-            else:
-                try:
-                    filename = outfile.name
-                    if not filename or filename[0] == '<':
-                        # pseudo files, e.g. name == '<fdopen>'
-                        raise AttributeError
-                    cssfilename = os.path.join(os.path.dirname(filename),
-                                               self.cssfile)
-                except AttributeError:
-                    print('Note: Cannot determine output file name, ' \
-                          'using current directory as base for the CSS file name', file=sys.stderr)
-                    cssfilename = self.cssfile
-            # write CSS file only if noclobber_cssfile isn't given as an option.
-            try:
-                if not os.path.exists(cssfilename) or not self.noclobber_cssfile:
-                    cf = open(cssfilename, "w")
-                    cf.write(CSSFILE_TEMPLATE %
-                            {'styledefs': self.get_style_defs('body')})
-                    cf.close()
-            except IOError as err:
-                err.strerror = 'Error writing CSS file: ' + err.strerror
-                raise
-
-            yield 0, (DOC_HEADER_EXTERNALCSS %
-                      dict(title     = self.title,
-                           cssfile   = self.cssfile,
-                           encoding  = self.encoding))
-        else:
-            yield 0, (DOC_HEADER %
-                      dict(title     = self.title,
-                           styledefs = self.get_style_defs('body'),
-                           encoding  = self.encoding))
-
-        for t, line in inner:
-            yield t, line
-        yield 0, DOC_FOOTER
-
-    def _wrap_tablelinenos(self, inner):
-        dummyoutfile = io.StringIO()
-        lncount = 0
-        for t, line in inner:
-            if t:
-                lncount += 1
-            dummyoutfile.write(line)
-
-        fl = self.linenostart
-        mw = len(str(lncount + fl - 1))
-        sp = self.linenospecial
-        st = self.linenostep
-        la = self.lineanchors
-        aln = self.anchorlinenos
-        if sp:
-            lines = []
-
-            for i in range(fl, fl+lncount):
-                if i % st == 0:
-                    if i % sp == 0:
-                        if aln:
-                            lines.append('<a href="#%s-%d" class="special">%*d</a>' %
-                                         (la, i, mw, i))
-                        else:
-                            lines.append('<span class="special">%*d</span>' % (mw, i))
-                    else:
-                        if aln:
-                            lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
-                        else:
-                            lines.append('%*d' % (mw, i))
-                else:
-                    lines.append('')
-            ls = '\n'.join(lines)
-        else:
-            lines = []
-            for i in range(fl, fl+lncount):
-                if i % st == 0:
-                    if aln:
-                        lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
-                    else:
-                        lines.append('%*d' % (mw, i))
-                else:
-                    lines.append('')
-            ls = '\n'.join(lines)
-
-        # in case you wonder about the seemingly redundant <div> here: since the
-        # content in the other cell also is wrapped in a div, some browsers in
-        # some configurations seem to mess up the formatting...
-        yield 0, ('<table class="%stable">' % self.cssclass +
-                  '<tr><td class="linenos"><div class="linenodiv"><pre>' +
-                  ls + '</pre></div></td><td class="code">')
-        yield 0, dummyoutfile.getvalue()
-        yield 0, '</td></tr></table>'
-
-    def _wrap_inlinelinenos(self, inner):
-        # need a list of lines since we need the width of a single number :(
-        lines = list(inner)
-        sp = self.linenospecial
-        st = self.linenostep
-        num = self.linenostart
-        mw = len(str(len(lines) + num - 1))
-
-        if sp:
-            for t, line in lines:
-                yield 1, '<span class="lineno%s">%*s</span> ' % (
-                    num%sp == 0 and ' special' or '', mw,
-                    (num%st and ' ' or num)) + line
-                num += 1
-        else:
-            for t, line in lines:
-                yield 1, '<span class="lineno">%*s</span> ' % (
-                    mw, (num%st and ' ' or num)) + line
-                num += 1
-
-    def _wrap_lineanchors(self, inner):
-        s = self.lineanchors
-        i = 0
-        for t, line in inner:
-            if t:
-                i += 1
-                yield 1, '<a name="%s-%d"></a>' % (s, i) + line
-            else:
-                yield 0, line
-
-    def _wrap_div(self, inner):
-        style = []
-        if (self.noclasses and not self.nobackground and
-            self.style.background_color is not None):
-            style.append('background: %s' % (self.style.background_color,))
-        if self.cssstyles:
-            style.append(self.cssstyles)
-        style = '; '.join(style)
-
-        yield 0, ('<div' + (self.cssclass and ' class="%s"' % self.cssclass)
-                  + (style and (' style="%s"' % style)) + '>')
-        for tup in inner:
-            yield tup
-        yield 0, '</div>\n'
-
-    def _wrap_pre(self, inner):
-        style = []
-        if self.prestyles:
-            style.append(self.prestyles)
-        if self.noclasses:
-            style.append('line-height: 125%')
-        style = '; '.join(style)
-
-        yield 0, ('<pre' + (style and ' style="%s"' % style) + '>')
-        for tup in inner:
-            yield tup
-        yield 0, '</pre>'
-
-    def _format_lines(self, tokensource):
-        """
-        Just format the tokens, without any wrapping tags.
-        Yield individual lines.
-        """
-        nocls = self.noclasses
-        lsep = self.lineseparator
-        # for <span style=""> lookup only
-        getcls = self.ttype2class.get
-        c2s = self.class2style
-
-        lspan = ''
-        line = ''
-        for ttype, value in tokensource:
-            if nocls:
-                cclass = getcls(ttype)
-                while cclass is None:
-                    ttype = ttype.parent
-                    cclass = getcls(ttype)
-                cspan = cclass and '<span style="%s">' % c2s[cclass][0] or ''
-            else:
-                cls = self._get_css_class(ttype)
-                cspan = cls and '<span class="%s">' % cls or ''
-
-            parts = escape_html(value).split('\n')
-
-            # for all but the last line
-            for part in parts[:-1]:
-                if line:
-                    if lspan != cspan:
-                        line += (lspan and '</span>') + cspan + part + \
-                                (cspan and '</span>') + lsep
-                    else: # both are the same
-                        line += part + (lspan and '</span>') + lsep
-                    yield 1, line
-                    line = ''
-                elif part:
-                    yield 1, cspan + part + (cspan and '</span>') + lsep
-                else:
-                    yield 1, lsep
-            # for the last line
-            if line and parts[-1]:
-                if lspan != cspan:
-                    line += (lspan and '</span>') + cspan + parts[-1]
-                    lspan = cspan
-                else:
-                    line += parts[-1]
-            elif parts[-1]:
-                line = cspan + parts[-1]
-                lspan = cspan
-            # else we neither have to open a new span nor set lspan
-
-        if line:
-            yield 1, line + (lspan and '</span>') + lsep
-
-    def _highlight_lines(self, tokensource):
-        """
-        Highlight the lines specified in the `hl_lines` option by
-        post-processing the token stream coming from `_format_lines`.
-        """
-        hls = self.hl_lines
-
-        for i, (t, value) in enumerate(tokensource):
-            if t != 1:
-                yield t, value
-            if i + 1 in hls: # i + 1 because Python indexes start at 0
-                if self.noclasses:
-                    style = ''
-                    if self.style.highlight_color is not None:
-                        style = (' style="background-color: %s"' %
-                                 (self.style.highlight_color,))
-                    yield 1, '<span%s>%s</span>' % (style, value)
-                else:
-                    yield 1, '<span class="hll">%s</span>' % value
-            else:
-                yield 1, value
-
-    def wrap(self, source, outfile):
-        """
-        Wrap the ``source``, which is a generator yielding
-        individual lines, in custom generators. See docstring
-        for `format`. Can be overridden.
-        """
-        return self._wrap_div(self._wrap_pre(source))
-
-    def format_unencoded(self, tokensource, outfile):
-        """
-        The formatting process uses several nested generators; which of
-        them are used is determined by the user's options.
-
-        Each generator should take at least one argument, ``inner``,
-        and wrap the pieces of text generated by this.
-
-        Always yield 2-tuples: (code, text). If "code" is 1, the text
-        is part of the original tokensource being highlighted; if it's
-        0, the text is some piece of wrapping. This makes it possible to
-        use several different wrappers that process the original source
-        linewise, e.g. line number generators.
-        """
-        source = self._format_lines(tokensource)
-        if self.hl_lines:
-            source = self._highlight_lines(source)
-        if not self.nowrap:
-            if self.linenos == 2:
-                source = self._wrap_inlinelinenos(source)
-            if self.lineanchors:
-                source = self._wrap_lineanchors(source)
-            source = self.wrap(source, outfile)
-            if self.linenos == 1:
-                source = self._wrap_tablelinenos(source)
-            if self.full:
-                source = self._wrap_full(source, outfile)
-
-        for t, piece in source:
-            outfile.write(piece)
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters.html
+    ~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Formatter for HTML output.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import os
+import sys
+import io
+
+from pygments.formatter import Formatter
+from pygments.token import Token, Text, STANDARD_TYPES
+from pygments.util import get_bool_opt, get_int_opt, get_list_opt, bytes
+
+
+__all__ = ['HtmlFormatter']
+
+
+_escape_html_table = {
+    ord('&'): '&amp;',
+    ord('<'): '&lt;',
+    ord('>'): '&gt;',
+    ord('"'): '&quot;',
+    ord("'"): '&#39;',
+}
+
+def escape_html(text, table=_escape_html_table):
+    """Escape &, <, > as well as single and double quotes for HTML."""
+    return text.translate(table)
+
+def get_random_id():
+    """Return a random id for javascript fields."""
+    from random import random
+    from time import time
+    try:
+        from hashlib import sha1 as sha
+    except ImportError:
+        import sha
+        sha = sha.new
+    return sha('%s|%s' % (random(), time())).hexdigest()
+
+
+def _get_ttype_class(ttype):
+    fname = STANDARD_TYPES.get(ttype)
+    if fname:
+        return fname
+    aname = ''
+    while fname is None:
+        aname = '-' + ttype[-1] + aname
+        ttype = ttype.parent
+        fname = STANDARD_TYPES.get(ttype)
+    return fname + aname
+
+
+CSSFILE_TEMPLATE = '''\
+td.linenos { background-color: #f0f0f0; padding-right: 10px; }
+span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
+pre { line-height: 125%%; }
+%(styledefs)s
+'''
+
+DOC_HEADER = '''\
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
+   "http://www.w3.org/TR/html4/strict.dtd">
+
+<html>
+<head>
+  <title>%(title)s</title>
+  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
+  <style type="text/css">
+''' + CSSFILE_TEMPLATE + '''
+  </style>
+</head>
+<body>
+<h2>%(title)s</h2>
+
+'''
+
+DOC_HEADER_EXTERNALCSS = '''\
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
+   "http://www.w3.org/TR/html4/strict.dtd">
+
+<html>
+<head>
+  <title>%(title)s</title>
+  <meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
+  <link rel="stylesheet" href="%(cssfile)s" type="text/css">
+</head>
+<body>
+<h2>%(title)s</h2>
+
+'''
+
+DOC_FOOTER = '''\
+</body>
+</html>
+'''
+
+
+class HtmlFormatter(Formatter):
+    r"""
+    Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped
+    in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass`
+    option.
+
+    If the `linenos` option is set to ``"table"``, the ``<pre>`` is
+    additionally wrapped inside a ``<table>`` which has one row and two
+    cells: one containing the line numbers and one containing the code.
+    Example:
+
+    .. sourcecode:: html
+
+        <div class="highlight" >
+        <table><tr>
+          <td class="linenos" title="click to toggle"
+            onclick="with (this.firstChild.style)
+                     { display = (display == '') ? 'none' : '' }">
+            <pre>1
+            2</pre>
+          </td>
+          <td class="code">
+            <pre><span class="Ke">def </span><span class="NaFu">foo</span>(bar):
+              <span class="Ke">pass</span>
+            </pre>
+          </td>
+        </tr></table></div>
+
+    (whitespace added to improve clarity).
+
+    Wrapping can be disabled using the `nowrap` option.
+
+    A list of lines can be specified using the `hl_lines` option to make these
+    lines highlighted (as of Pygments 0.11).
+
+    With the `full` option, a complete HTML 4 document is output, including
+    the style definitions inside a ``<style>`` tag, or in a separate file if
+    the `cssfile` option is given.
+
+    The `get_style_defs(arg='')` method of a `HtmlFormatter` returns a string
+    containing CSS rules for the CSS classes used by the formatter. The
+    argument `arg` can be used to specify additional CSS selectors that
+    are prepended to the classes. A call `fmter.get_style_defs('td .code')`
+    would result in the following CSS classes:
+
+    .. sourcecode:: css
+
+        td .code .kw { font-weight: bold; color: #00FF00 }
+        td .code .cm { color: #999999 }
+        ...
+
+    If you have Pygments 0.6 or higher, you can also pass a list or tuple to the
+    `get_style_defs()` method to request multiple prefixes for the tokens:
+
+    .. sourcecode:: python
+
+        formatter.get_style_defs(['div.syntax pre', 'pre.syntax'])
+
+    The output would then look like this:
+
+    .. sourcecode:: css
+
+        div.syntax pre .kw,
+        pre.syntax .kw { font-weight: bold; color: #00FF00 }
+        div.syntax pre .cm,
+        pre.syntax .cm { color: #999999 }
+        ...
+
+    Additional options accepted:
+
+    `nowrap`
+        If set to ``True``, don't wrap the tokens at all, not even inside a ``<pre>``
+        tag. This disables most other options (default: ``False``).
+
+    `full`
+        Tells the formatter to output a "full" document, i.e. a complete
+        self-contained document (default: ``False``).
+
+    `title`
+        If `full` is true, the title that should be used to caption the
+        document (default: ``''``).
+
+    `style`
+        The style to use, can be a string or a Style subclass (default:
+        ``'default'``). This option has no effect if the `cssfile`
+        and `noclobber_cssfile` option are given and the file specified in
+        `cssfile` exists.
+
+    `noclasses`
+        If set to true, token ``<span>`` tags will not use CSS classes, but
+        inline styles. This is not recommended for larger pieces of code since
+        it increases output size by quite a bit (default: ``False``).
+
+    `classprefix`
+        Since the token types use relatively short class names, they may clash
+        with some of your own class names. In this case you can use the
+        `classprefix` option to give a string to prepend to all Pygments-generated
+        CSS class names for token types.
+        Note that this option also affects the output of `get_style_defs()`.
+
+    `cssclass`
+        CSS class for the wrapping ``<div>`` tag (default: ``'highlight'``).
+        If you set this option, the default selector for `get_style_defs()`
+        will be this class.
+
+        *New in Pygments 0.9:* If you select the ``'table'`` line numbers, the
+        wrapping table will have a CSS class of this string plus ``'table'``;
+        the default is accordingly ``'highlighttable'``.
+
+    `cssstyles`
+        Inline CSS styles for the wrapping ``<div>`` tag (default: ``''``).
+
+    `prestyles`
+        Inline CSS styles for the ``<pre>`` tag (default: ``''``).  *New in
+        Pygments 0.11.*
+
+    `cssfile`
+        If the `full` option is true and this option is given, it must be the
+        name of an external file. If the filename does not include an absolute
+        path, the file's path will be assumed to be relative to the main output
+        file's path, if the latter can be found. The stylesheet is then written
+        to this file instead of the HTML file. *New in Pygments 0.6.*
+
+    `noclobber_cssfile`
+        If `cssfile` is given and the specified file exists, the CSS file will
+        not be overwritten. This allows the use of the `full` option in
+        combination with a user-specified CSS file. Default is ``False``.
+        *New in Pygments 1.1.*
+
+    `linenos`
+        If set to ``'table'``, output line numbers as a table with two cells,
+        one containing the line numbers, the other the whole code.  This is
+        copy-and-paste-friendly, but may cause alignment problems with some
+        browsers or fonts.  If set to ``'inline'``, the line numbers will be
+        integrated in the ``<pre>`` tag that contains the code (that setting
+        is *new in Pygments 0.8*).
+
+        For compatibility with Pygments 0.7 and earlier, every true value
+        except ``'inline'`` means the same as ``'table'`` (in particular, that
+        means also ``True``).
+
+        The default value is ``False``, which means no line numbers at all.
+
+        **Note:** with the default ("table") line number mechanism, the line
+        numbers and code can have different line heights in Internet Explorer
+        unless you give the enclosing ``<pre>`` tags an explicit ``line-height``
+        CSS property (you get the default line spacing with ``line-height:
+        125%``).
+
+    `hl_lines`
+        Specify a list of lines to be highlighted.  *New in Pygments 0.11.*
+
+    `linenostart`
+        The line number for the first line (default: ``1``).
+
+    `linenostep`
+        If set to a number n > 1, only every nth line number is printed.
+
+    `linenospecial`
+        If set to a number n > 0, every nth line number is given the CSS
+        class ``"special"`` (default: ``0``).
+
+    `nobackground`
+        If set to ``True``, the formatter won't output the background color
+        for the wrapping element (this automatically defaults to ``False``
+        when there is no wrapping element [e.g. no argument for the
+        `get_style_defs` method given]) (default: ``False``). *New in
+        Pygments 0.6.*
+
+    `lineseparator`
+        This string is output between lines of code. It defaults to ``"\n"``,
+        which is enough to break a line inside ``<pre>`` tags, but you can
+        e.g. set it to ``"<br>"`` to get HTML line breaks. *New in Pygments
+        0.7.*
+
+    `lineanchors`
+        If set to a nonempty string, e.g. ``foo``, the formatter will wrap each
+        output line in an anchor tag with a ``name`` of ``foo-linenumber``.
+        This allows easy linking to certain lines. *New in Pygments 0.9.*
+
+    `anchorlinenos`
+        If set to ``True``, the formatter will wrap line numbers in ``<a>``
+        tags. Used in combination with `linenos` and `lineanchors`.
+
+
+    **Subclassing the HTML formatter**
+
+    *New in Pygments 0.7.*
+
+    The HTML formatter is now built in a way that allows easy subclassing, thus
+    customizing the output HTML code. The `format()` method calls
+    `self._format_lines()` which returns a generator that yields tuples of ``(1,
+    line)``, where the ``1`` indicates that the ``line`` is a line of the
+    formatted source code.
+
+    If the `nowrap` option is set, the generator is iterated over and the
+    resulting HTML is output.
+
+    Otherwise, `format()` calls `self.wrap()`, which wraps the generator with
+    other generators. These may add some HTML code to the one generated by
+    `_format_lines()`, either by modifying the lines generated by the latter,
+    then yielding them again with ``(1, line)``, and/or by yielding other HTML
+    code before or after the lines, with ``(0, html)``. The distinction between
+    source lines and other code makes it possible to wrap the generator multiple
+    times.
+
+    The default `wrap()` implementation adds a ``<div>`` and a ``<pre>`` tag.
+
+    A custom `HtmlFormatter` subclass could look like this:
+
+    .. sourcecode:: python
+
+        class CodeHtmlFormatter(HtmlFormatter):
+
+            def wrap(self, source, outfile):
+                return self._wrap_code(source)
+
+            def _wrap_code(self, source):
+                yield 0, '<code>'
+                for i, t in source:
+                    if i == 1:
+                        # it's a line of formatted code
+                        t += '<br>'
+                    yield i, t
+                yield 0, '</code>'
+
+    This results in wrapping the formatted lines with a ``<code>`` tag, where the
+    source lines are broken using ``<br>`` tags.
+
+    After calling `wrap()`, the `format()` method also adds the "line numbers"
+    and/or "full document" wrappers if the respective options are set. Then, all
+    HTML yielded by the wrapped generator is output.
+    """
+
+    name = 'HTML'
+    aliases = ['html']
+    filenames = ['*.html', '*.htm']
+
+    def __init__(self, **options):
+        Formatter.__init__(self, **options)
+        self.title = self._decodeifneeded(self.title)
+        self.nowrap = get_bool_opt(options, 'nowrap', False)
+        self.noclasses = get_bool_opt(options, 'noclasses', False)
+        self.classprefix = options.get('classprefix', '')
+        self.cssclass = self._decodeifneeded(options.get('cssclass', 'highlight'))
+        self.cssstyles = self._decodeifneeded(options.get('cssstyles', ''))
+        self.prestyles = self._decodeifneeded(options.get('prestyles', ''))
+        self.cssfile = self._decodeifneeded(options.get('cssfile', ''))
+        self.noclobber_cssfile = get_bool_opt(options, 'noclobber_cssfile', False)
+
+        linenos = options.get('linenos', False)
+        if linenos == 'inline':
+            self.linenos = 2
+        elif linenos:
+            # compatibility with <= 0.7
+            self.linenos = 1
+        else:
+            self.linenos = 0
+        self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
+        self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
+        self.linenospecial = abs(get_int_opt(options, 'linenospecial', 0))
+        self.nobackground = get_bool_opt(options, 'nobackground', False)
+        self.lineseparator = options.get('lineseparator', '\n')
+        self.lineanchors = options.get('lineanchors', '')
+        self.anchorlinenos = options.get('anchorlinenos', False)
+        self.hl_lines = set()
+        for lineno in get_list_opt(options, 'hl_lines', []):
+            try:
+                self.hl_lines.add(int(lineno))
+            except ValueError:
+                pass
+
+        self._create_stylesheet()
+
+    def _get_css_class(self, ttype):
+        """Return the css class of this token type prefixed with
+        the classprefix option."""
+        ttypeclass = _get_ttype_class(ttype)
+        if ttypeclass:
+            return self.classprefix + ttypeclass
+        return ''
+
+    def _create_stylesheet(self):
+        t2c = self.ttype2class = {Token: ''}
+        c2s = self.class2style = {}
+        for ttype, ndef in self.style:
+            name = self._get_css_class(ttype)
+            style = ''
+            if ndef['color']:
+                style += 'color: #%s; ' % ndef['color']
+            if ndef['bold']:
+                style += 'font-weight: bold; '
+            if ndef['italic']:
+                style += 'font-style: italic; '
+            if ndef['underline']:
+                style += 'text-decoration: underline; '
+            if ndef['bgcolor']:
+                style += 'background-color: #%s; ' % ndef['bgcolor']
+            if ndef['border']:
+                style += 'border: 1px solid #%s; ' % ndef['border']
+            if style:
+                t2c[ttype] = name
+                # save len(ttype) to enable ordering the styles by
+                # hierarchy (necessary for CSS cascading rules!)
+                c2s[name] = (style[:-2], ttype, len(ttype))
+
+    def get_style_defs(self, arg=None):
+        """
+        Return CSS style definitions for the classes produced by the current
+        highlighting style. ``arg`` can be a string or list of selectors to
+        insert before the token type classes.
+        """
+        if arg is None:
+            arg = ('cssclass' in self.options and '.'+self.cssclass or '')
+        if isinstance(arg, str):
+            args = [arg]
+        else:
+            args = list(arg)
+
+        def prefix(cls):
+            if cls:
+                cls = '.' + cls
+            tmp = []
+            for arg in args:
+                tmp.append((arg and arg + ' ' or '') + cls)
+            return ', '.join(tmp)
+
+        styles = [(level, ttype, cls, style)
+                  for cls, (style, ttype, level) in self.class2style.items()
+                  if cls and style]
+        styles.sort()
+        lines = ['%s { %s } /* %s */' % (prefix(cls), style, repr(ttype)[6:])
+                 for (level, ttype, cls, style) in styles]
+        if arg and not self.nobackground and \
+           self.style.background_color is not None:
+            text_style = ''
+            if Text in self.ttype2class:
+                text_style = ' ' + self.class2style[self.ttype2class[Text]][0]
+            lines.insert(0, '%s { background: %s;%s }' %
+                         (prefix(''), self.style.background_color, text_style))
+        if self.style.highlight_color is not None:
+            lines.insert(0, '%s.hll { background-color: %s }' %
+                         (prefix(''), self.style.highlight_color))
+        return '\n'.join(lines)
+
+    def _decodeifneeded(self, value):
+        if isinstance(value, bytes):
+            if self.encoding:
+                return value.decode(self.encoding)
+            return value.decode()
+        return value
+
+    def _wrap_full(self, inner, outfile):
+        if self.cssfile:
+            if os.path.isabs(self.cssfile):
+                # it's an absolute filename
+                cssfilename = self.cssfile
+            else:
+                try:
+                    filename = outfile.name
+                    if not filename or filename[0] == '<':
+                        # pseudo files, e.g. name == '<fdopen>'
+                        raise AttributeError
+                    cssfilename = os.path.join(os.path.dirname(filename),
+                                               self.cssfile)
+                except AttributeError:
+                    print('Note: Cannot determine output file name, ' \
+                          'using current directory as base for the CSS file name', file=sys.stderr)
+                    cssfilename = self.cssfile
+            # write CSS file only if noclobber_cssfile isn't given as an option.
+            try:
+                if not os.path.exists(cssfilename) or not self.noclobber_cssfile:
+                    cf = open(cssfilename, "w")
+                    cf.write(CSSFILE_TEMPLATE %
+                            {'styledefs': self.get_style_defs('body')})
+                    cf.close()
+            except IOError as err:
+                err.strerror = 'Error writing CSS file: ' + err.strerror
+                raise
+
+            yield 0, (DOC_HEADER_EXTERNALCSS %
+                      dict(title     = self.title,
+                           cssfile   = self.cssfile,
+                           encoding  = self.encoding))
+        else:
+            yield 0, (DOC_HEADER %
+                      dict(title     = self.title,
+                           styledefs = self.get_style_defs('body'),
+                           encoding  = self.encoding))
+
+        for t, line in inner:
+            yield t, line
+        yield 0, DOC_FOOTER
+
+    def _wrap_tablelinenos(self, inner):
+        dummyoutfile = io.StringIO()
+        lncount = 0
+        for t, line in inner:
+            if t:
+                lncount += 1
+            dummyoutfile.write(line)
+
+        fl = self.linenostart
+        mw = len(str(lncount + fl - 1))
+        sp = self.linenospecial
+        st = self.linenostep
+        la = self.lineanchors
+        aln = self.anchorlinenos
+        nocls = self.noclasses
+        if sp:
+            lines = []
+
+            for i in range(fl, fl+lncount):
+                if i % st == 0:
+                    if i % sp == 0:
+                        if aln:
+                            lines.append('<a href="#%s-%d" class="special">%*d</a>' %
+                                         (la, i, mw, i))
+                        else:
+                            lines.append('<span class="special">%*d</span>' % (mw, i))
+                    else:
+                        if aln:
+                            lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
+                        else:
+                            lines.append('%*d' % (mw, i))
+                else:
+                    lines.append('')
+            ls = '\n'.join(lines)
+        else:
+            lines = []
+            for i in range(fl, fl+lncount):
+                if i % st == 0:
+                    if aln:
+                        lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
+                    else:
+                        lines.append('%*d' % (mw, i))
+                else:
+                    lines.append('')
+            ls = '\n'.join(lines)
+
+        # in case you wonder about the seemingly redundant <div> here: since the
+        # content in the other cell also is wrapped in a div, some browsers in
+        # some configurations seem to mess up the formatting...
+        if nocls:
+            yield 0, ('<table class="%stable">' % self.cssclass +
+                      '<tr><td><div class="linenodiv" '
+                      'style="background-color: #f0f0f0; padding-right: 10px">'
+                      '<pre style="line-height: 125%">' +
+                      ls + '</pre></div></td><td class="code">')
+        else:
+            yield 0, ('<table class="%stable">' % self.cssclass +
+                      '<tr><td class="linenos"><div class="linenodiv"><pre>' +
+                      ls + '</pre></div></td><td class="code">')
+        yield 0, dummyoutfile.getvalue()
+        yield 0, '</td></tr></table>'
+
+    def _wrap_inlinelinenos(self, inner):
+        # need a list of lines since we need the width of a single number :(
+        lines = list(inner)
+        sp = self.linenospecial
+        st = self.linenostep
+        num = self.linenostart
+        mw = len(str(len(lines) + num - 1))
+
+        if self.noclasses:
+            if sp:
+                for t, line in lines:
+                    if num%sp == 0:
+                        style = 'background-color: #ffffc0; padding: 0 5px 0 5px'
+                    else:
+                        style = 'background-color: #f0f0f0; padding: 0 5px 0 5px'
+                    yield 1, '<span style="%s">%*s</span> ' % (
+                        style, mw, (num%st and ' ' or num)) + line
+                    num += 1
+            else:
+                for t, line in lines:
+                    yield 1, ('<span style="background-color: #f0f0f0; '
+                              'padding: 0 5px 0 5px">%*s</span> ' % (
+                              mw, (num%st and ' ' or num)) + line)
+                    num += 1
+        elif sp:
+            for t, line in lines:
+                yield 1, '<span class="lineno%s">%*s</span> ' % (
+                    num%sp == 0 and ' special' or '', mw,
+                    (num%st and ' ' or num)) + line
+                num += 1
+        else:
+            for t, line in lines:
+                yield 1, '<span class="lineno">%*s</span> ' % (
+                    mw, (num%st and ' ' or num)) + line
+                num += 1
+
+    def _wrap_lineanchors(self, inner):
+        s = self.lineanchors
+        i = 0
+        for t, line in inner:
+            if t:
+                i += 1
+                yield 1, '<a name="%s-%d"></a>' % (s, i) + line
+            else:
+                yield 0, line
+
+    def _wrap_div(self, inner):
+        style = []
+        if (self.noclasses and not self.nobackground and
+            self.style.background_color is not None):
+            style.append('background: %s' % (self.style.background_color,))
+        if self.cssstyles:
+            style.append(self.cssstyles)
+        style = '; '.join(style)
+
+        yield 0, ('<div' + (self.cssclass and ' class="%s"' % self.cssclass)
+                  + (style and (' style="%s"' % style)) + '>')
+        for tup in inner:
+            yield tup
+        yield 0, '</div>\n'
+
+    def _wrap_pre(self, inner):
+        style = []
+        if self.prestyles:
+            style.append(self.prestyles)
+        if self.noclasses:
+            style.append('line-height: 125%')
+        style = '; '.join(style)
+
+        yield 0, ('<pre' + (style and ' style="%s"' % style) + '>')
+        for tup in inner:
+            yield tup
+        yield 0, '</pre>'
+
+    def _format_lines(self, tokensource):
+        """
+        Just format the tokens, without any wrapping tags.
+        Yield individual lines.
+        """
+        nocls = self.noclasses
+        lsep = self.lineseparator
+        # for <span style=""> lookup only
+        getcls = self.ttype2class.get
+        c2s = self.class2style
+        escape_table = _escape_html_table
+
+        lspan = ''
+        line = ''
+        for ttype, value in tokensource:
+            if nocls:
+                cclass = getcls(ttype)
+                while cclass is None:
+                    ttype = ttype.parent
+                    cclass = getcls(ttype)
+                cspan = cclass and '<span style="%s">' % c2s[cclass][0] or ''
+            else:
+                cls = self._get_css_class(ttype)
+                cspan = cls and '<span class="%s">' % cls or ''
+
+            parts = value.translate(escape_table).split('\n')
+
+            # for all but the last line
+            for part in parts[:-1]:
+                if line:
+                    if lspan != cspan:
+                        line += (lspan and '</span>') + cspan + part + \
+                                (cspan and '</span>') + lsep
+                    else: # both are the same
+                        line += part + (lspan and '</span>') + lsep
+                    yield 1, line
+                    line = ''
+                elif part:
+                    yield 1, cspan + part + (cspan and '</span>') + lsep
+                else:
+                    yield 1, lsep
+            # for the last line
+            if line and parts[-1]:
+                if lspan != cspan:
+                    line += (lspan and '</span>') + cspan + parts[-1]
+                    lspan = cspan
+                else:
+                    line += parts[-1]
+            elif parts[-1]:
+                line = cspan + parts[-1]
+                lspan = cspan
+            # else we neither have to open a new span nor set lspan
+
+        if line:
+            yield 1, line + (lspan and '</span>') + lsep
+
+    def _highlight_lines(self, tokensource):
+        """
+        Highlight the lines specified in the `hl_lines` option by
+        post-processing the token stream coming from `_format_lines`.
+        """
+        hls = self.hl_lines
+
+        for i, (t, value) in enumerate(tokensource):
+            if t != 1:
+                yield t, value
+            if i + 1 in hls: # i + 1 because Python indexes start at 0
+                if self.noclasses:
+                    style = ''
+                    if self.style.highlight_color is not None:
+                        style = (' style="background-color: %s"' %
+                                 (self.style.highlight_color,))
+                    yield 1, '<span%s>%s</span>' % (style, value)
+                else:
+                    yield 1, '<span class="hll">%s</span>' % value
+            else:
+                yield 1, value
+
+    def wrap(self, source, outfile):
+        """
+        Wrap the ``source``, which is a generator yielding
+        individual lines, in custom generators. See docstring
+        for `format`. Can be overridden.
+        """
+        return self._wrap_div(self._wrap_pre(source))
+
+    def format_unencoded(self, tokensource, outfile):
+        """
+        The formatting process uses several nested generators; which of
+        them are used is determined by the user's options.
+
+        Each generator should take at least one argument, ``inner``,
+        and wrap the pieces of text generated by this.
+
+        Always yield 2-tuples: (code, text). If "code" is 1, the text
+        is part of the original tokensource being highlighted, if it's
+        0, the text is some piece of wrapping. This makes it possible to
+        use several different wrappers that process the original source
+        linewise, e.g. line number generators.
+        """
+        source = self._format_lines(tokensource)
+        if self.hl_lines:
+            source = self._highlight_lines(source)
+        if not self.nowrap:
+            if self.linenos == 2:
+                source = self._wrap_inlinelinenos(source)
+            if self.lineanchors:
+                source = self._wrap_lineanchors(source)
+            source = self.wrap(source, outfile)
+            if self.linenos == 1:
+                source = self._wrap_tablelinenos(source)
+            if self.full:
+                source = self._wrap_full(source, outfile)
+
+        for t, piece in source:
+            outfile.write(piece)
--- a/ThirdParty/Pygments/pygments/formatters/img.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatters/img.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,553 +1,553 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.img
-    ~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter for Pixmap output.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import sys
-from subprocess import getstatusoutput
-
-from pygments.formatter import Formatter
-from pygments.util import get_bool_opt, get_int_opt, \
-     get_list_opt, get_choice_opt
-
-# Import this carefully
-try:
-    import Image, ImageDraw, ImageFont
-    pil_available = True
-except ImportError:
-    pil_available = False
-
-try:
-    import winreg
-except ImportError:
-    _winreg = None
-
-__all__ = ['ImageFormatter', 'GifImageFormatter', 'JpgImageFormatter',
-           'BmpImageFormatter']
-
-
-# For some unknown reason every font calls it something different
-STYLES = {
-    'NORMAL':     ['', 'Roman', 'Book', 'Normal', 'Regular', 'Medium'],
-    'ITALIC':     ['Oblique', 'Italic'],
-    'BOLD':       ['Bold'],
-    'BOLDITALIC': ['Bold Oblique', 'Bold Italic'],
-}
-
-# A sane default for modern systems
-DEFAULT_FONT_NAME_NIX = 'Bitstream Vera Sans Mono'
-DEFAULT_FONT_NAME_WIN = 'Courier New'
-
-
-class PilNotAvailable(ImportError):
-    """When Python imaging library is not available"""
-
-
-class FontNotFound(Exception):
-    """When there are no usable fonts specified"""
-
-
-class FontManager(object):
-    """
-    Manages a set of fonts: normal, italic, bold, etc...
-    """
-
-    def __init__(self, font_name, font_size=14):
-        self.font_name = font_name
-        self.font_size = font_size
-        self.fonts = {}
-        self.encoding = None
-        if sys.platform.startswith('win'):
-            if not font_name:
-                self.font_name = DEFAULT_FONT_NAME_WIN
-            self._create_win()
-        else:
-            if not font_name:
-                self.font_name = DEFAULT_FONT_NAME_NIX
-            self._create_nix()
-
-    def _get_nix_font_path(self, name, style):
-        exit, out = getstatusoutput('fc-list "%s:style=%s" file' %
-                                    (name, style))
-        if not exit:
-            lines = out.splitlines()
-            if lines:
-                path = lines[0].strip().strip(':')
-                return path
-
-    def _create_nix(self):
-        for name in STYLES['NORMAL']:
-            path = self._get_nix_font_path(self.font_name, name)
-            if path is not None:
-                self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
-                break
-        else:
-            raise FontNotFound('No usable fonts named: "%s"' %
-                               self.font_name)
-        for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
-            for stylename in STYLES[style]:
-                path = self._get_nix_font_path(self.font_name, stylename)
-                if path is not None:
-                    self.fonts[style] = ImageFont.truetype(path, self.font_size)
-                    break
-            else:
-                if style == 'BOLDITALIC':
-                    self.fonts[style] = self.fonts['BOLD']
-                else:
-                    self.fonts[style] = self.fonts['NORMAL']
-
-    def _lookup_win(self, key, basename, styles, fail=False):
-        for suffix in ('', ' (TrueType)'):
-            for style in styles:
-                try:
-                    valname = '%s%s%s' % (basename, style and ' '+style, suffix)
-                    val, _ = winreg.QueryValueEx(key, valname)
-                    return val
-                except EnvironmentError:
-                    continue
-        else:
-            if fail:
-                raise FontNotFound('Font %s (%s) not found in registry' %
-                                   (basename, styles[0]))
-            return None
-
-    def _create_win(self):
-        try:
-            key = winreg.OpenKey(
-                winreg.HKEY_LOCAL_MACHINE,
-                r'Software\Microsoft\Windows NT\CurrentVersion\Fonts')
-        except EnvironmentError:
-            try:
-                key = winreg.OpenKey(
-                    winreg.HKEY_LOCAL_MACHINE,
-                    r'Software\Microsoft\Windows\CurrentVersion\Fonts')
-            except EnvironmentError:
-                raise FontNotFound('Can\'t open Windows font registry key')
-        try:
-            path = self._lookup_win(key, self.font_name, STYLES['NORMAL'], True)
-            self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
-            for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
-                path = self._lookup_win(key, self.font_name, STYLES[style])
-                if path:
-                    self.fonts[style] = ImageFont.truetype(path, self.font_size)
-                else:
-                    if style == 'BOLDITALIC':
-                        self.fonts[style] = self.fonts['BOLD']
-                    else:
-                        self.fonts[style] = self.fonts['NORMAL']
-        finally:
-            winreg.CloseKey(key)
-
-    def get_char_size(self):
-        """
-        Get the character size.
-        """
-        return self.fonts['NORMAL'].getsize('M')
-
-    def get_font(self, bold, oblique):
-        """
-        Get the font based on bold and italic flags.
-        """
-        if bold and oblique:
-            return self.fonts['BOLDITALIC']
-        elif bold:
-            return self.fonts['BOLD']
-        elif oblique:
-            return self.fonts['ITALIC']
-        else:
-            return self.fonts['NORMAL']
-
-
-class ImageFormatter(Formatter):
-    """
-    Create a PNG image from source code. This uses the Python Imaging Library to
-    generate a pixmap from the source code.
-
-    *New in Pygments 0.10.*
-
-    Additional options accepted:
-
-    `image_format`
-        An image format to output to that is recognised by PIL, these include:
-
-        * "PNG" (default)
-        * "JPEG"
-        * "BMP"
-        * "GIF"
-
-    `line_pad`
-        The extra spacing (in pixels) between each line of text.
-
-        Default: 2
-
-    `font_name`
-        The font name to be used as the base font from which others, such as
-        bold and italic fonts will be generated.  This really should be a
-        monospace font to look sane.
-
-        Default: "Bitstream Vera Sans Mono"
-
-    `font_size`
-        The font size in points to be used.
-
-        Default: 14
-
-    `image_pad`
-        The padding, in pixels to be used at each edge of the resulting image.
-
-        Default: 10
-
-    `line_numbers`
-        Whether line numbers should be shown: True/False
-
-        Default: True
-
-    `line_number_start`
-        The line number of the first line.
-
-        Default: 1
-
-    `line_number_step`
-        The step used when printing line numbers.
-
-        Default: 1
-
-    `line_number_bg`
-        The background colour (in "#123456" format) of the line number bar, or
-        None to use the style background color.
-
-        Default: "#eed"
-
-    `line_number_fg`
-        The text color of the line numbers (in "#123456"-like format).
-
-        Default: "#886"
-
-    `line_number_chars`
-        The number of columns of line numbers allowable in the line number
-        margin.
-
-        Default: 2
-
-    `line_number_bold`
-        Whether line numbers will be bold: True/False
-
-        Default: False
-
-    `line_number_italic`
-        Whether line numbers will be italicized: True/False
-
-        Default: False
-
-    `line_number_separator`
-        Whether a line will be drawn between the line number area and the
-        source code area: True/False
-
-        Default: True
-
-    `line_number_pad`
-        The horizontal padding (in pixels) between the line number margin, and
-        the source code area.
-
-        Default: 6
-
-    `hl_lines`
-        Specify a list of lines to be highlighted.  *New in Pygments 1.2.*
-
-        Default: empty list
-
-    `hl_color`
-        Specify the color for highlighting lines.  *New in Pygments 1.2.*
-
-        Default: highlight color of the selected style
-    """
-
-    # Required by the pygments mapper
-    name = 'img'
-    aliases = ['img', 'IMG', 'png']
-    filenames = ['*.png']
-
-    unicodeoutput = False
-
-    default_image_format = 'png'
-
-    def __init__(self, **options):
-        """
-        See the class docstring for explanation of options.
-        """
-        if not pil_available:
-            raise PilNotAvailable(
-                'Python Imaging Library is required for this formatter')
-        Formatter.__init__(self, **options)
-        # Read the style
-        self.styles = dict(self.style)
-        if self.style.background_color is None:
-            self.background_color = '#fff'
-        else:
-            self.background_color = self.style.background_color
-        # Image options
-        self.image_format = get_choice_opt(
-            options, 'image_format', ['png', 'jpeg', 'gif', 'bmp'],
-            self.default_image_format, normcase=True)
-        self.image_pad = get_int_opt(options, 'image_pad', 10)
-        self.line_pad = get_int_opt(options, 'line_pad', 2)
-        # The fonts
-        fontsize = get_int_opt(options, 'font_size', 14)
-        self.fonts = FontManager(options.get('font_name', ''), fontsize)
-        self.fontw, self.fonth = self.fonts.get_char_size()
-        # Line number options
-        self.line_number_fg = options.get('line_number_fg', '#886')
-        self.line_number_bg = options.get('line_number_bg', '#eed')
-        self.line_number_chars = get_int_opt(options,
-                                        'line_number_chars', 2)
-        self.line_number_bold = get_bool_opt(options,
-                                        'line_number_bold', False)
-        self.line_number_italic = get_bool_opt(options,
-                                        'line_number_italic', False)
-        self.line_number_pad = get_int_opt(options, 'line_number_pad', 6)
-        self.line_numbers = get_bool_opt(options, 'line_numbers', True)
-        self.line_number_separator = get_bool_opt(options,
-                                        'line_number_separator', True)
-        self.line_number_step = get_int_opt(options, 'line_number_step', 1)
-        self.line_number_start = get_int_opt(options, 'line_number_start', 1)
-        if self.line_numbers:
-            self.line_number_width = (self.fontw * self.line_number_chars +
-                                   self.line_number_pad * 2)
-        else:
-            self.line_number_width = 0
-        self.hl_lines = []
-        hl_lines_str = get_list_opt(options, 'hl_lines', [])
-        for line in hl_lines_str:
-            try:
-                self.hl_lines.append(int(line))
-            except ValueError:
-                pass
-        self.hl_color = options.get('hl_color',
-                                    self.style.highlight_color) or '#f90'
-        self.drawables = []
-
-    def get_style_defs(self, arg=''):
-        raise NotImplementedError('The -S option is meaningless for the image '
-                                  'formatter. Use -O style=<stylename> instead.')
-
-    def _get_line_height(self):
-        """
-        Get the height of a line.
-        """
-        return self.fonth + self.line_pad
-
-    def _get_line_y(self, lineno):
-        """
-        Get the Y coordinate of a line number.
-        """
-        return lineno * self._get_line_height() + self.image_pad
-
-    def _get_char_width(self):
-        """
-        Get the width of a character.
-        """
-        return self.fontw
-
-    def _get_char_x(self, charno):
-        """
-        Get the X coordinate of a character position.
-        """
-        return charno * self.fontw + self.image_pad + self.line_number_width
-
-    def _get_text_pos(self, charno, lineno):
-        """
-        Get the actual position for a character and line position.
-        """
-        return self._get_char_x(charno), self._get_line_y(lineno)
-
-    def _get_linenumber_pos(self, lineno):
-        """
-        Get the actual position for the start of a line number.
-        """
-        return (self.image_pad, self._get_line_y(lineno))
-
-    def _get_text_color(self, style):
-        """
-        Get the correct color for the token from the style.
-        """
-        if style['color'] is not None:
-            fill = '#' + style['color']
-        else:
-            fill = '#000'
-        return fill
-
-    def _get_style_font(self, style):
-        """
-        Get the correct font for the style.
-        """
-        return self.fonts.get_font(style['bold'], style['italic'])
-
-    def _get_image_size(self, maxcharno, maxlineno):
-        """
-        Get the required image size.
-        """
-        return (self._get_char_x(maxcharno) + self.image_pad,
-                self._get_line_y(maxlineno + 0) + self.image_pad)
-
-    def _draw_linenumber(self, posno, lineno):
-        """
-        Remember a line number drawable to paint later.
-        """
-        self._draw_text(
-            self._get_linenumber_pos(posno),
-            str(lineno).rjust(self.line_number_chars),
-            font=self.fonts.get_font(self.line_number_bold,
-                                     self.line_number_italic),
-            fill=self.line_number_fg,
-        )
-
-    def _draw_text(self, pos, text, font, **kw):
-        """
-        Remember a single drawable tuple to paint later.
-        """
-        self.drawables.append((pos, text, font, kw))
-
-    def _create_drawables(self, tokensource):
-        """
-        Create drawables for the token content.
-        """
-        lineno = charno = maxcharno = 0
-        for ttype, value in tokensource:
-            while ttype not in self.styles:
-                ttype = ttype.parent
-            style = self.styles[ttype]
-            # TODO: make sure tab expansion happens earlier in the chain.  It
-            # really ought to be done on the input, as to do it right here is
-            # quite complex.
-            value = value.expandtabs(4)
-            lines = value.splitlines(True)
-            #print lines
-            for i, line in enumerate(lines):
-                temp = line.rstrip('\n')
-                if temp:
-                    self._draw_text(
-                        self._get_text_pos(charno, lineno),
-                        temp,
-                        font = self._get_style_font(style),
-                        fill = self._get_text_color(style)
-                    )
-                    charno += len(temp)
-                    maxcharno = max(maxcharno, charno)
-                if line.endswith('\n'):
-                    # add a line for each extra line in the value
-                    charno = 0
-                    lineno += 1
-        self.maxcharno = maxcharno
-        self.maxlineno = lineno
-
-    def _draw_line_numbers(self):
-        """
-        Create drawables for the line numbers.
-        """
-        if not self.line_numbers:
-            return
-        for p in range(self.maxlineno):
-            n = p + self.line_number_start
-            if (n % self.line_number_step) == 0:
-                self._draw_linenumber(p, n)
-
-    def _paint_line_number_bg(self, im):
-        """
-        Paint the line number background on the image.
-        """
-        if not self.line_numbers:
-            return
-        if self.line_number_fg is None:
-            return
-        draw = ImageDraw.Draw(im)
-        recth = im.size[-1]
-        rectw = self.image_pad + self.line_number_width - self.line_number_pad
-        draw.rectangle([(0, 0),
-                        (rectw, recth)],
-             fill=self.line_number_bg)
-        draw.line([(rectw, 0), (rectw, recth)], fill=self.line_number_fg)
-        del draw
-
-    def format(self, tokensource, outfile):
-        """
-        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
-        tuples and write it into ``outfile``.
-
-        This implementation calculates where it should draw each token on the
-        pixmap, then calculates the required pixmap size and draws the items.
-        """
-        self._create_drawables(tokensource)
-        self._draw_line_numbers()
-        im = Image.new(
-            'RGB',
-            self._get_image_size(self.maxcharno, self.maxlineno),
-            self.background_color
-        )
-        self._paint_line_number_bg(im)
-        draw = ImageDraw.Draw(im)
-        # Highlight
-        if self.hl_lines:
-            x = self.image_pad + self.line_number_width - self.line_number_pad + 1
-            recth = self._get_line_height()
-            rectw = im.size[0] - x
-            for linenumber in self.hl_lines:
-                y = self._get_line_y(linenumber - 1)
-                draw.rectangle([(x, y), (x + rectw, y + recth)],
-                               fill=self.hl_color)
-        for pos, value, font, kw in self.drawables:
-            draw.text(pos, value, font=font, **kw)
-        im.save(outfile, self.image_format.upper())
-
-
-# Add one formatter per format, so that the "-f gif" option gives the correct result
-# when used in pygmentize.
-
-class GifImageFormatter(ImageFormatter):
-    """
-    Create a GIF image from source code. This uses the Python Imaging Library to
-    generate a pixmap from the source code.
-
-    *New in Pygments 1.0.* (You could create GIF images before by passing a
-    suitable `image_format` option to the `ImageFormatter`.)
-    """
-
-    name = 'img_gif'
-    aliases = ['gif']
-    filenames = ['*.gif']
-    default_image_format = 'gif'
-
-
-class JpgImageFormatter(ImageFormatter):
-    """
-    Create a JPEG image from source code. This uses the Python Imaging Library to
-    generate a pixmap from the source code.
-
-    *New in Pygments 1.0.* (You could create JPEG images before by passing a
-    suitable `image_format` option to the `ImageFormatter`.)
-    """
-
-    name = 'img_jpg'
-    aliases = ['jpg', 'jpeg']
-    filenames = ['*.jpg']
-    default_image_format = 'jpeg'
-
-
-class BmpImageFormatter(ImageFormatter):
-    """
-    Create a bitmap image from source code. This uses the Python Imaging Library to
-    generate a pixmap from the source code.
-
-    *New in Pygments 1.0.* (You could create bitmap images before by passing a
-    suitable `image_format` option to the `ImageFormatter`.)
-    """
-
-    name = 'img_bmp'
-    aliases = ['bmp', 'bitmap']
-    filenames = ['*.bmp']
-    default_image_format = 'bmp'
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters.img
+    ~~~~~~~~~~~~~~~~~~~~~~~
+
+    Formatter for Pixmap output.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import sys
+from subprocess import getstatusoutput
+
+from pygments.formatter import Formatter
+from pygments.util import get_bool_opt, get_int_opt, \
+     get_list_opt, get_choice_opt
+
+# Import this carefully
+try:
+    from PIL import Image, ImageDraw, ImageFont
+    pil_available = True
+except ImportError:
+    pil_available = False
+
+try:
+    import winreg
+except ImportError:
+    winreg = None
+
+__all__ = ['ImageFormatter', 'GifImageFormatter', 'JpgImageFormatter',
+           'BmpImageFormatter']
+
+
+# For some unknown reason every font calls it something different
+STYLES = {
+    'NORMAL':     ['', 'Roman', 'Book', 'Normal', 'Regular', 'Medium'],
+    'ITALIC':     ['Oblique', 'Italic'],
+    'BOLD':       ['Bold'],
+    'BOLDITALIC': ['Bold Oblique', 'Bold Italic'],
+}
+
+# A sane default for modern systems
+DEFAULT_FONT_NAME_NIX = 'Bitstream Vera Sans Mono'
+DEFAULT_FONT_NAME_WIN = 'Courier New'
+
+
+class PilNotAvailable(ImportError):
+    """Raised when the Python Imaging Library is not available"""
+
+
+class FontNotFound(Exception):
+    """Raised when no usable fonts are specified"""
+
+
+class FontManager(object):
+    """
+    Manages a set of fonts: normal, italic, bold, etc...
+    """
+
+    def __init__(self, font_name, font_size=14):
+        self.font_name = font_name
+        self.font_size = font_size
+        self.fonts = {}
+        self.encoding = None
+        if sys.platform.startswith('win'):
+            if not font_name:
+                self.font_name = DEFAULT_FONT_NAME_WIN
+            self._create_win()
+        else:
+            if not font_name:
+                self.font_name = DEFAULT_FONT_NAME_NIX
+            self._create_nix()
+
+    def _get_nix_font_path(self, name, style):
+        exit, out = getstatusoutput('fc-list "%s:style=%s" file' %
+                                    (name, style))
+        if not exit:
+            lines = out.splitlines()
+            if lines:
+                path = lines[0].strip().strip(':')
+                return path
+
+    def _create_nix(self):
+        for name in STYLES['NORMAL']:
+            path = self._get_nix_font_path(self.font_name, name)
+            if path is not None:
+                self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
+                break
+        else:
+            raise FontNotFound('No usable fonts named: "%s"' %
+                               self.font_name)
+        for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
+            for stylename in STYLES[style]:
+                path = self._get_nix_font_path(self.font_name, stylename)
+                if path is not None:
+                    self.fonts[style] = ImageFont.truetype(path, self.font_size)
+                    break
+            else:
+                if style == 'BOLDITALIC':
+                    self.fonts[style] = self.fonts['BOLD']
+                else:
+                    self.fonts[style] = self.fonts['NORMAL']
+
+    def _lookup_win(self, key, basename, styles, fail=False):
+        for suffix in ('', ' (TrueType)'):
+            for style in styles:
+                try:
+                    valname = '%s%s%s' % (basename, style and ' '+style, suffix)
+                    val, _ = winreg.QueryValueEx(key, valname)
+                    return val
+                except EnvironmentError:
+                    continue
+        else:
+            if fail:
+                raise FontNotFound('Font %s (%s) not found in registry' %
+                                   (basename, styles[0]))
+            return None
+
+    def _create_win(self):
+        try:
+            key = winreg.OpenKey(
+                winreg.HKEY_LOCAL_MACHINE,
+                r'Software\Microsoft\Windows NT\CurrentVersion\Fonts')
+        except EnvironmentError:
+            try:
+                key = winreg.OpenKey(
+                    winreg.HKEY_LOCAL_MACHINE,
+                    r'Software\Microsoft\Windows\CurrentVersion\Fonts')
+            except EnvironmentError:
+                raise FontNotFound('Can\'t open Windows font registry key')
+        try:
+            path = self._lookup_win(key, self.font_name, STYLES['NORMAL'], True)
+            self.fonts['NORMAL'] = ImageFont.truetype(path, self.font_size)
+            for style in ('ITALIC', 'BOLD', 'BOLDITALIC'):
+                path = self._lookup_win(key, self.font_name, STYLES[style])
+                if path:
+                    self.fonts[style] = ImageFont.truetype(path, self.font_size)
+                else:
+                    if style == 'BOLDITALIC':
+                        self.fonts[style] = self.fonts['BOLD']
+                    else:
+                        self.fonts[style] = self.fonts['NORMAL']
+        finally:
+            winreg.CloseKey(key)
+
+    def get_char_size(self):
+        """
+        Get the character size.
+        """
+        return self.fonts['NORMAL'].getsize('M')
+
+    def get_font(self, bold, oblique):
+        """
+        Get the font based on bold and italic flags.
+        """
+        if bold and oblique:
+            return self.fonts['BOLDITALIC']
+        elif bold:
+            return self.fonts['BOLD']
+        elif oblique:
+            return self.fonts['ITALIC']
+        else:
+            return self.fonts['NORMAL']
+
+
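The style-fallback behaviour in `_create_nix` and `_create_win` above (a missing bold-italic variant degrades to bold, any other missing style degrades to normal) can be sketched standalone. This is a minimal illustration of that fallback rule only; `pick_font` is a hypothetical helper, not part of Pygments:

```python
def pick_font(fonts, style):
    """Return the best available font variant, mirroring FontManager's
    fallback: missing BOLDITALIC degrades to BOLD, anything else missing
    degrades to NORMAL."""
    if style in fonts:
        return fonts[style]
    if style == 'BOLDITALIC' and 'BOLD' in fonts:
        return fonts['BOLD']
    # NORMAL is guaranteed to exist; FontNotFound is raised otherwise
    return fonts['NORMAL']
```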
+class ImageFormatter(Formatter):
+    """
+    Create a PNG image from source code. This uses the Python Imaging Library to
+    generate a pixmap from the source code.
+
+    *New in Pygments 0.10.*
+
+    Additional options accepted:
+
+    `image_format`
+        An image format to write that is recognised by PIL; options include:
+
+        * "PNG" (default)
+        * "JPEG"
+        * "BMP"
+        * "GIF"
+
+    `line_pad`
+        The extra spacing (in pixels) between each line of text.
+
+        Default: 2
+
+    `font_name`
+        The font name to be used as the base font, from which variants such
+        as bold and italic will be generated.  This really should be a
+        monospace font to look sane.
+
+        Default: "Bitstream Vera Sans Mono"
+
+    `font_size`
+        The font size in points to be used.
+
+        Default: 14
+
+    `image_pad`
+        The padding (in pixels) to be used at each edge of the resulting image.
+
+        Default: 10
+
+    `line_numbers`
+        Whether line numbers should be shown: True/False
+
+        Default: True
+
+    `line_number_start`
+        The line number of the first line.
+
+        Default: 1
+
+    `line_number_step`
+        The step used when printing line numbers.
+
+        Default: 1
+
+    `line_number_bg`
+        The background colour (in "#123456" format) of the line number bar, or
+        None to use the style background color.
+
+        Default: "#eed"
+
+    `line_number_fg`
+        The text color of the line numbers (in "#123456"-like format).
+
+        Default: "#886"
+
+    `line_number_chars`
+        The number of columns of line numbers allowable in the line number
+        margin.
+
+        Default: 2
+
+    `line_number_bold`
+        Whether line numbers will be bold: True/False
+
+        Default: False
+
+    `line_number_italic`
+        Whether line numbers will be italicized: True/False
+
+        Default: False
+
+    `line_number_separator`
+        Whether a line will be drawn between the line number area and the
+        source code area: True/False
+
+        Default: True
+
+    `line_number_pad`
+        The horizontal padding (in pixels) between the line number margin and
+        the source code area.
+
+        Default: 6
+
+    `hl_lines`
+        Specify a list of lines to be highlighted.  *New in Pygments 1.2.*
+
+        Default: empty list
+
+    `hl_color`
+        Specify the color for highlighting lines.  *New in Pygments 1.2.*
+
+        Default: highlight color of the selected style
+    """
+
+    # Required by the pygments mapper
+    name = 'img'
+    aliases = ['img', 'IMG', 'png']
+    filenames = ['*.png']
+
+    unicodeoutput = False
+
+    default_image_format = 'png'
+
+    def __init__(self, **options):
+        """
+        See the class docstring for explanation of options.
+        """
+        if not pil_available:
+            raise PilNotAvailable(
+                'Python Imaging Library is required for this formatter')
+        Formatter.__init__(self, **options)
+        # Read the style
+        self.styles = dict(self.style)
+        if self.style.background_color is None:
+            self.background_color = '#fff'
+        else:
+            self.background_color = self.style.background_color
+        # Image options
+        self.image_format = get_choice_opt(
+            options, 'image_format', ['png', 'jpeg', 'gif', 'bmp'],
+            self.default_image_format, normcase=True)
+        self.image_pad = get_int_opt(options, 'image_pad', 10)
+        self.line_pad = get_int_opt(options, 'line_pad', 2)
+        # The fonts
+        fontsize = get_int_opt(options, 'font_size', 14)
+        self.fonts = FontManager(options.get('font_name', ''), fontsize)
+        self.fontw, self.fonth = self.fonts.get_char_size()
+        # Line number options
+        self.line_number_fg = options.get('line_number_fg', '#886')
+        self.line_number_bg = options.get('line_number_bg', '#eed')
+        self.line_number_chars = get_int_opt(options,
+                                        'line_number_chars', 2)
+        self.line_number_bold = get_bool_opt(options,
+                                        'line_number_bold', False)
+        self.line_number_italic = get_bool_opt(options,
+                                        'line_number_italic', False)
+        self.line_number_pad = get_int_opt(options, 'line_number_pad', 6)
+        self.line_numbers = get_bool_opt(options, 'line_numbers', True)
+        self.line_number_separator = get_bool_opt(options,
+                                        'line_number_separator', True)
+        self.line_number_step = get_int_opt(options, 'line_number_step', 1)
+        self.line_number_start = get_int_opt(options, 'line_number_start', 1)
+        if self.line_numbers:
+            self.line_number_width = (self.fontw * self.line_number_chars +
+                                   self.line_number_pad * 2)
+        else:
+            self.line_number_width = 0
+        self.hl_lines = []
+        hl_lines_str = get_list_opt(options, 'hl_lines', [])
+        for line in hl_lines_str:
+            try:
+                self.hl_lines.append(int(line))
+            except ValueError:
+                pass
+        self.hl_color = options.get('hl_color',
+                                    self.style.highlight_color) or '#f90'
+        self.drawables = []
+
+    def get_style_defs(self, arg=''):
+        raise NotImplementedError('The -S option is meaningless for the image '
+                                  'formatter. Use -O style=<stylename> instead.')
+
+    def _get_line_height(self):
+        """
+        Get the height of a line.
+        """
+        return self.fonth + self.line_pad
+
+    def _get_line_y(self, lineno):
+        """
+        Get the Y coordinate of a line number.
+        """
+        return lineno * self._get_line_height() + self.image_pad
+
+    def _get_char_width(self):
+        """
+        Get the width of a character.
+        """
+        return self.fontw
+
+    def _get_char_x(self, charno):
+        """
+        Get the X coordinate of a character position.
+        """
+        return charno * self.fontw + self.image_pad + self.line_number_width
+
+    def _get_text_pos(self, charno, lineno):
+        """
+        Get the actual position for a character and line position.
+        """
+        return self._get_char_x(charno), self._get_line_y(lineno)
+
+    def _get_linenumber_pos(self, lineno):
+        """
+        Get the actual position for the start of a line number.
+        """
+        return (self.image_pad, self._get_line_y(lineno))
+
+    def _get_text_color(self, style):
+        """
+        Get the correct color for the token from the style.
+        """
+        if style['color'] is not None:
+            fill = '#' + style['color']
+        else:
+            fill = '#000'
+        return fill
+
+    def _get_style_font(self, style):
+        """
+        Get the correct font for the style.
+        """
+        return self.fonts.get_font(style['bold'], style['italic'])
+
+    def _get_image_size(self, maxcharno, maxlineno):
+        """
+        Get the required image size.
+        """
+        return (self._get_char_x(maxcharno) + self.image_pad,
+                self._get_line_y(maxlineno + 0) + self.image_pad)
+
+    def _draw_linenumber(self, posno, lineno):
+        """
+        Remember a line number drawable to paint later.
+        """
+        self._draw_text(
+            self._get_linenumber_pos(posno),
+            str(lineno).rjust(self.line_number_chars),
+            font=self.fonts.get_font(self.line_number_bold,
+                                     self.line_number_italic),
+            fill=self.line_number_fg,
+        )
+
+    def _draw_text(self, pos, text, font, **kw):
+        """
+        Remember a single drawable tuple to paint later.
+        """
+        self.drawables.append((pos, text, font, kw))
+
+    def _create_drawables(self, tokensource):
+        """
+        Create drawables for the token content.
+        """
+        lineno = charno = maxcharno = 0
+        for ttype, value in tokensource:
+            while ttype not in self.styles:
+                ttype = ttype.parent
+            style = self.styles[ttype]
+            # TODO: make sure tab expansion happens earlier in the chain.  It
+            # really ought to be done on the input, as to do it right here is
+            # quite complex.
+            value = value.expandtabs(4)
+            lines = value.splitlines(True)
+            for i, line in enumerate(lines):
+                temp = line.rstrip('\n')
+                if temp:
+                    self._draw_text(
+                        self._get_text_pos(charno, lineno),
+                        temp,
+                        font=self._get_style_font(style),
+                        fill=self._get_text_color(style)
+                    )
+                    charno += len(temp)
+                    maxcharno = max(maxcharno, charno)
+                if line.endswith('\n'):
+                    # add a line for each extra line in the value
+                    charno = 0
+                    lineno += 1
+        self.maxcharno = maxcharno
+        self.maxlineno = lineno
+
+    def _draw_line_numbers(self):
+        """
+        Create drawables for the line numbers.
+        """
+        if not self.line_numbers:
+            return
+        for p in range(self.maxlineno):
+            n = p + self.line_number_start
+            if (n % self.line_number_step) == 0:
+                self._draw_linenumber(p, n)
+
+    def _paint_line_number_bg(self, im):
+        """
+        Paint the line number background on the image.
+        """
+        if not self.line_numbers:
+            return
+        if self.line_number_fg is None:
+            return
+        draw = ImageDraw.Draw(im)
+        recth = im.size[-1]
+        rectw = self.image_pad + self.line_number_width - self.line_number_pad
+        draw.rectangle([(0, 0),
+                        (rectw, recth)],
+             fill=self.line_number_bg)
+        draw.line([(rectw, 0), (rectw, recth)], fill=self.line_number_fg)
+        del draw
+
+    def format(self, tokensource, outfile):
+        """
+        Format ``tokensource``, an iterable of ``(tokentype, tokenstring)``
+        tuples and write it into ``outfile``.
+
+        This implementation calculates where it should draw each token on the
+        pixmap, then calculates the required pixmap size and draws the items.
+        """
+        self._create_drawables(tokensource)
+        self._draw_line_numbers()
+        im = Image.new(
+            'RGB',
+            self._get_image_size(self.maxcharno, self.maxlineno),
+            self.background_color
+        )
+        self._paint_line_number_bg(im)
+        draw = ImageDraw.Draw(im)
+        # Highlight
+        if self.hl_lines:
+            x = self.image_pad + self.line_number_width - self.line_number_pad + 1
+            recth = self._get_line_height()
+            rectw = im.size[0] - x
+            for linenumber in self.hl_lines:
+                y = self._get_line_y(linenumber - 1)
+                draw.rectangle([(x, y), (x + rectw, y + recth)],
+                               fill=self.hl_color)
+        for pos, value, font, kw in self.drawables:
+            draw.text(pos, value, font=font, **kw)
+        im.save(outfile, self.image_format.upper())
+
+
+# Add one formatter per format, so that the "-f gif" option gives the correct result
+# when used in pygmentize.
+
+class GifImageFormatter(ImageFormatter):
+    """
+    Create a GIF image from source code. This uses the Python Imaging Library to
+    generate a pixmap from the source code.
+
+    *New in Pygments 1.0.* (You could create GIF images before by passing a
+    suitable `image_format` option to the `ImageFormatter`.)
+    """
+
+    name = 'img_gif'
+    aliases = ['gif']
+    filenames = ['*.gif']
+    default_image_format = 'gif'
+
+
+class JpgImageFormatter(ImageFormatter):
+    """
+    Create a JPEG image from source code. This uses the Python Imaging Library to
+    generate a pixmap from the source code.
+
+    *New in Pygments 1.0.* (You could create JPEG images before by passing a
+    suitable `image_format` option to the `ImageFormatter`.)
+    """
+
+    name = 'img_jpg'
+    aliases = ['jpg', 'jpeg']
+    filenames = ['*.jpg']
+    default_image_format = 'jpeg'
+
+
+class BmpImageFormatter(ImageFormatter):
+    """
+    Create a bitmap image from source code. This uses the Python Imaging Library to
+    generate a pixmap from the source code.
+
+    *New in Pygments 1.0.* (You could create bitmap images before by passing a
+    suitable `image_format` option to the `ImageFormatter`.)
+    """
+
+    name = 'img_bmp'
+    aliases = ['bmp', 'bitmap']
+    filenames = ['*.bmp']
+    default_image_format = 'bmp'
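The pixel arithmetic behind `_get_char_x`, `_get_line_y` and `_get_image_size` in the new `img.py` is plain monospace-grid math. A self-contained sketch, using the formatter's default paddings; the `Layout` class is a hypothetical stand-in, not the Pygments class:

```python
class Layout:
    """Monospace grid arithmetic, mirroring ImageFormatter's geometry
    helpers (defaults: 8x14 px glyphs, 10 px image pad, 2 px line pad)."""
    def __init__(self, fontw=8, fonth=14, image_pad=10, line_pad=2,
                 line_number_width=0):
        self.fontw, self.fonth = fontw, fonth
        self.image_pad, self.line_pad = image_pad, line_pad
        self.line_number_width = line_number_width

    def char_x(self, charno):
        # columns are fontw pixels wide, offset by the image padding
        # plus the line-number margin
        return charno * self.fontw + self.image_pad + self.line_number_width

    def line_y(self, lineno):
        # each row is the glyph height plus the inter-line padding
        return lineno * (self.fonth + self.line_pad) + self.image_pad

    def image_size(self, maxcharno, maxlineno):
        # pad the right and bottom edges to match the top-left padding
        return (self.char_x(maxcharno) + self.image_pad,
                self.line_y(maxlineno) + self.image_pad)
```

Under these defaults, a 10-column, 3-line drawable area yields a 100x68 pixel image.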
--- a/ThirdParty/Pygments/pygments/formatters/latex.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatters/latex.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,354 +1,363 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.latex
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Formatter for LaTeX fancyvrb output.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-from pygments.formatter import Formatter
-from pygments.token import Token, STANDARD_TYPES
-from pygments.util import get_bool_opt, get_int_opt, StringIO
-
-
-__all__ = ['LatexFormatter']
-
-
-def escape_tex(text, commandprefix):
-    return text.replace('\\', '\x00'). \
-                replace('{', '\x01'). \
-                replace('}', '\x02'). \
-                replace('^', '\x03'). \
-                replace('_', '\x04'). \
-                replace('\x00', r'\%sZbs{}' % commandprefix). \
-                replace('\x01', r'\%sZob{}' % commandprefix). \
-                replace('\x02', r'\%sZcb{}' % commandprefix). \
-                replace('\x03', r'\%sZca{}' % commandprefix). \
-                replace('\x04', r'\%sZus{}' % commandprefix)
-
-
-DOC_TEMPLATE = r'''
-\documentclass{%(docclass)s}
-\usepackage{fancyvrb}
-\usepackage{color}
-\usepackage[%(encoding)s]{inputenc}
-%(preamble)s
-
-%(styledefs)s
-
-\begin{document}
-
-\section*{%(title)s}
-
-%(code)s
-\end{document}
-'''
-
-## Small explanation of the mess below :)
-#
-# The previous version of the LaTeX formatter just assigned a command to
-# each token type defined in the current style.  That obviously is
-# problematic if the highlighted code is produced for a different style
-# than the style commands themselves.
-#
-# This version works much like the HTML formatter which assigns multiple
-# CSS classes to each <span> tag, from the most specific to the least
-# specific token type, thus falling back to the parent token type if one
-# is not defined.  Here, the classes are there too and use the same short
-# forms given in token.STANDARD_TYPES.
-#
-# Highlighted code now only uses one custom command, which by default is
-# \PY and selectable by the commandprefix option (and in addition the
-# escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for
-# backwards compatibility purposes).
-#
-# \PY has two arguments: the classes, separated by +, and the text to
-# render in that style.  The classes are resolved into the respective
-# style commands by magic, which serves to ignore unknown classes.
-#
-# The magic macros are:
-# * \PY@it, \PY@bf, etc. are unconditionally wrapped around the text
-#   to render in \PY@do.  Their definition determines the style.
-# * \PY@reset resets \PY@it etc. to do nothing.
-# * \PY@toks parses the list of classes, using magic inspired by the
-#   keyval package (but modified to use plusses instead of commas
-#   because fancyvrb redefines commas inside its environments).
-# * \PY@tok processes one class, calling the \PY@tok@classname command
-#   if it exists.
-# * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style
-#   for its class.
-# * \PY resets the style, parses the classnames and then calls \PY@do.
-
-STYLE_TEMPLATE = r'''
-\makeatletter
-\def\%(cp)s@reset{\let\%(cp)s@it=\relax \let\%(cp)s@bf=\relax%%
-    \let\%(cp)s@ul=\relax \let\%(cp)s@tc=\relax%%
-    \let\%(cp)s@bc=\relax \let\%(cp)s@ff=\relax}
-\def\%(cp)s@tok#1{\csname %(cp)s@tok@#1\endcsname}
-\def\%(cp)s@toks#1+{\ifx\relax#1\empty\else%%
-    \%(cp)s@tok{#1}\expandafter\%(cp)s@toks\fi}
-\def\%(cp)s@do#1{\%(cp)s@bc{\%(cp)s@tc{\%(cp)s@ul{%%
-    \%(cp)s@it{\%(cp)s@bf{\%(cp)s@ff{#1}}}}}}}
-\def\%(cp)s#1#2{\%(cp)s@reset\%(cp)s@toks#1+\relax+\%(cp)s@do{#2}}
-
-%(styles)s
-
-\def\%(cp)sZbs{\char`\\}
-\def\%(cp)sZus{\char`\_}
-\def\%(cp)sZob{\char`\{}
-\def\%(cp)sZcb{\char`\}}
-\def\%(cp)sZca{\char`\^}
-%% for compatibility with earlier versions
-\def\%(cp)sZat{@}
-\def\%(cp)sZlb{[}
-\def\%(cp)sZrb{]}
-\makeatother
-'''
-
-
-def _get_ttype_name(ttype):
-    fname = STANDARD_TYPES.get(ttype)
-    if fname:
-        return fname
-    aname = ''
-    while fname is None:
-        aname = ttype[-1] + aname
-        ttype = ttype.parent
-        fname = STANDARD_TYPES.get(ttype)
-    return fname + aname
-
-
-class LatexFormatter(Formatter):
-    r"""
-    Format tokens as LaTeX code. This needs the `fancyvrb` and `color`
-    standard packages.
-
-    Without the `full` option, code is formatted as one ``Verbatim``
-    environment, like this:
-
-    .. sourcecode:: latex
-
-        \begin{Verbatim}[commandchars=@\[\]]
-        @PY[k][def ]@PY[n+nf][foo](@PY[n][bar]):
-            @PY[k][pass]
-        \end{Verbatim}
-
-    The special command used here (``@PY``) and all the other macros it needs
-    are output by the `get_style_defs` method.
-
-    With the `full` option, a complete LaTeX document is output, including
-    the command definitions in the preamble.
-
-    The `get_style_defs()` method of a `LatexFormatter` returns a string
-    containing ``\def`` commands defining the macros needed inside the
-    ``Verbatim`` environments.
-
-    Additional options accepted:
-
-    `style`
-        The style to use, can be a string or a Style subclass (default:
-        ``'default'``).
-
-    `full`
-        Tells the formatter to output a "full" document, i.e. a complete
-        self-contained document (default: ``False``).
-
-    `title`
-        If `full` is true, the title that should be used to caption the
-        document (default: ``''``).
-
-    `docclass`
-        If the `full` option is enabled, this is the document class to use
-        (default: ``'article'``).
-
-    `preamble`
-        If the `full` option is enabled, this can be further preamble commands,
-        e.g. ``\usepackage`` (default: ``''``).
-
-    `linenos`
-        If set to ``True``, output line numbers (default: ``False``).
-
-    `linenostart`
-        The line number for the first line (default: ``1``).
-
-    `linenostep`
-        If set to a number n > 1, only every nth line number is printed.
-
-    `verboptions`
-        Additional options given to the Verbatim environment (see the *fancyvrb*
-        docs for possible values) (default: ``''``).
-
-    `commandprefix`
-        The LaTeX commands used to produce colored output are constructed
-        using this prefix and some letters (default: ``'PY'``).
-        *New in Pygments 0.7.*
-
-        *New in Pygments 0.10:* the default is now ``'PY'`` instead of ``'C'``.
-
-    `texcomments`
-        If set to ``True``, enables LaTeX comment lines.  That is, LaTeX markup
-        in comment tokens is not escaped so that LaTeX can render it (default:
-        ``False``).  *New in Pygments 1.2.*
-
-    `mathescape`
-        If set to ``True``, enables LaTeX math mode escape in comments. That
-        is, ``'$...$'`` inside a comment will trigger math mode (default:
-        ``False``).  *New in Pygments 1.2.*
-    """
-    name = 'LaTeX'
-    aliases = ['latex', 'tex']
-    filenames = ['*.tex']
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-        self.docclass = options.get('docclass', 'article')
-        self.preamble = options.get('preamble', '')
-        self.linenos = get_bool_opt(options, 'linenos', False)
-        self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
-        self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
-        self.verboptions = options.get('verboptions', '')
-        self.nobackground = get_bool_opt(options, 'nobackground', False)
-        self.commandprefix = options.get('commandprefix', 'PY')
-        self.texcomments = get_bool_opt(options, 'texcomments', False)
-        self.mathescape = get_bool_opt(options, 'mathescape', False)
-
-        self._create_stylesheet()
-
-
-    def _create_stylesheet(self):
-        t2n = self.ttype2name = {Token: ''}
-        c2d = self.cmd2def = {}
-        cp = self.commandprefix
-
-        def rgbcolor(col):
-            if col:
-                return ','.join(['%.2f' %(int(col[i] + col[i + 1], 16) / 255.0)
-                                 for i in (0, 2, 4)])
-            else:
-                return '1,1,1'
-
-        for ttype, ndef in self.style:
-            name = _get_ttype_name(ttype)
-            cmndef = ''
-            if ndef['bold']:
-                cmndef += r'\let\$$@bf=\textbf'
-            if ndef['italic']:
-                cmndef += r'\let\$$@it=\textit'
-            if ndef['underline']:
-                cmndef += r'\let\$$@ul=\underline'
-            if ndef['roman']:
-                cmndef += r'\let\$$@ff=\textrm'
-            if ndef['sans']:
-                cmndef += r'\let\$$@ff=\textsf'
-            if ndef['mono']:
-                cmndef += r'\let\$$@ff=\textsf'
-            if ndef['color']:
-                cmndef += (r'\def\$$@tc##1{\textcolor[rgb]{%s}{##1}}' %
-                           rgbcolor(ndef['color']))
-            if ndef['border']:
-                cmndef += (r'\def\$$@bc##1{\fcolorbox[rgb]{%s}{%s}{##1}}' %
-                           (rgbcolor(ndef['border']),
-                            rgbcolor(ndef['bgcolor'])))
-            elif ndef['bgcolor']:
-                cmndef += (r'\def\$$@bc##1{\colorbox[rgb]{%s}{##1}}' %
-                           rgbcolor(ndef['bgcolor']))
-            if cmndef == '':
-                continue
-            cmndef = cmndef.replace('$$', cp)
-            t2n[ttype] = name
-            c2d[name] = cmndef
-
-    def get_style_defs(self, arg=''):
-        """
-        Return the command sequences needed to define the commands
-        used to format text in the verbatim environment. ``arg`` is ignored.
-        """
-        cp = self.commandprefix
-        styles = []
-        for name, definition in self.cmd2def.items():
-            styles.append(r'\def\%s@tok@%s{%s}' % (cp, name, definition))
-        return STYLE_TEMPLATE % {'cp': self.commandprefix,
-                                 'styles': '\n'.join(styles)}
-
-    def format_unencoded(self, tokensource, outfile):
-        # TODO: add support for background colors
-        t2n = self.ttype2name
-        cp = self.commandprefix
-
-        if self.full:
-            realoutfile = outfile
-            outfile = StringIO()
-
-        outfile.write(r'\begin{Verbatim}[commandchars=\\\{\}')
-        if self.linenos:
-            start, step = self.linenostart, self.linenostep
-            outfile.write(',numbers=left' +
-                          (start and ',firstnumber=%d' % start or '') +
-                          (step and ',stepnumber=%d' % step or ''))
-        if self.mathescape or self.texcomments:
-            outfile.write(r',codes={\catcode`\$=3\catcode`\^=7\catcode`\_=8}')
-        if self.verboptions:
-            outfile.write(',' + self.verboptions)
-        outfile.write(']\n')
-
-        for ttype, value in tokensource:
-            if ttype in Token.Comment:
-                if self.texcomments:
-                    # Try to guess comment starting lexeme and escape it ...
-                    start = value[0:1]
-                    for i in range(1, len(value)):
-                        if start[0] != value[i]:
-                            break
-                        start += value[i]
-
-                    value = value[len(start):]
-                    start = escape_tex(start, self.commandprefix)
-
-                    # ... but do not escape inside comment.
-                    value = start + value
-                elif self.mathescape:
-                    # Only escape parts not inside a math environment.
-                    parts = value.split('$')
-                    in_math = False
-                    for i, part in enumerate(parts):
-                        if not in_math:
-                            parts[i] = escape_tex(part, self.commandprefix)
-                        in_math = not in_math
-                    value = '$'.join(parts)
-                else:
-                    value = escape_tex(value, self.commandprefix)
-            else:
-                value = escape_tex(value, self.commandprefix)
-            styles = []
-            while ttype is not Token:
-                try:
-                    styles.append(t2n[ttype])
-                except KeyError:
-                    # not in current style
-                    styles.append(_get_ttype_name(ttype))
-                ttype = ttype.parent
-            styleval = '+'.join(reversed(styles))
-            if styleval:
-                spl = value.split('\n')
-                for line in spl[:-1]:
-                    if line:
-                        outfile.write("\\%s{%s}{%s}" % (cp, styleval, line))
-                    outfile.write('\n')
-                if spl[-1]:
-                    outfile.write("\\%s{%s}{%s}" % (cp, styleval, spl[-1]))
-            else:
-                outfile.write(value)
-
-        outfile.write('\\end{Verbatim}\n')
-
-        if self.full:
-            realoutfile.write(DOC_TEMPLATE %
-                dict(docclass  = self.docclass,
-                     preamble  = self.preamble,
-                     title     = self.title,
-                     encoding  = self.encoding or 'latin1',
-                     styledefs = self.get_style_defs(),
-                     code      = outfile.getvalue()))
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters.latex
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Formatter for LaTeX fancyvrb output.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+from pygments.formatter import Formatter
+from pygments.token import Token, STANDARD_TYPES
+from pygments.util import get_bool_opt, get_int_opt, StringIO
+
+
+__all__ = ['LatexFormatter']
+
+
+def escape_tex(text, commandprefix):
+    return text.replace('\\', '\x00'). \
+                replace('{', '\x01'). \
+                replace('}', '\x02'). \
+                replace('\x00', r'\%sZbs{}' % commandprefix). \
+                replace('\x01', r'\%sZob{}' % commandprefix). \
+                replace('\x02', r'\%sZcb{}' % commandprefix). \
+                replace('^', r'\%sZca{}' % commandprefix). \
+                replace('_', r'\%sZus{}' % commandprefix). \
+                replace('#', r'\%sZsh{}' % commandprefix). \
+                replace('%', r'\%sZpc{}' % commandprefix). \
+                replace('$', r'\%sZdl{}' % commandprefix). \
+                replace('~', r'\%sZti{}' % commandprefix)
+
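The order of replacements in `escape_tex` is the whole point: replacing `\` directly with a macro call that itself contains braces would let the later `{`/`}` passes mangle the text just inserted. A minimal self-contained sketch of this placeholder trick (the `Zbs`/`Zob`/`Zcb` macro names below are illustrative stand-ins for the real `commandprefix`-based names):

```python
# Minimal sketch of the placeholder trick used by escape_tex above.
# The three brace-related characters are first swapped for control
# bytes that cannot occur in highlighted text, and only expanded into
# their macro forms after all direct replacements are done.
def escape_min(text):
    return (text.replace('\\', '\x00')
                .replace('{', '\x01')
                .replace('}', '\x02')
                .replace('\x00', r'\Zbs{}')
                .replace('\x01', r'\Zob{}')
                .replace('\x02', r'\Zcb{}'))

print(escape_min(r'\foo{bar}'))  # \Zbs{}foo\Zob{}bar\Zcb{}
```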
+
+DOC_TEMPLATE = r'''
+\documentclass{%(docclass)s}
+\usepackage{fancyvrb}
+\usepackage{color}
+\usepackage[%(encoding)s]{inputenc}
+%(preamble)s
+
+%(styledefs)s
+
+\begin{document}
+
+\section*{%(title)s}
+
+%(code)s
+\end{document}
+'''
+
+## Small explanation of the mess below :)
+#
+# The previous version of the LaTeX formatter just assigned a command to
+# each token type defined in the current style.  That obviously is
+# problematic if the highlighted code is produced for a different style
+# than the style commands themselves.
+#
+# This version works much like the HTML formatter which assigns multiple
+# CSS classes to each <span> tag, from the most specific to the least
+# specific token type, thus falling back to the parent token type if one
+# is not defined.  Here, the classes are there too and use the same short
+# forms given in token.STANDARD_TYPES.
+#
+# Highlighted code now only uses one custom command, which by default is
+# \PY and selectable by the commandprefix option (and in addition the
+# escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for
+# backwards compatibility purposes).
+#
+# \PY has two arguments: the classes, separated by +, and the text to
+# render in that style.  The classes are resolved into the respective
+# style commands by magic, which serves to ignore unknown classes.
+#
+# The magic macros are:
+# * \PY@it, \PY@bf, etc. are unconditionally wrapped around the text
+#   to render in \PY@do.  Their definition determines the style.
+# * \PY@reset resets \PY@it etc. to do nothing.
+# * \PY@toks parses the list of classes, using magic inspired by the
+#   keyval package (but modified to use plusses instead of commas
+#   because fancyvrb redefines commas inside its environments).
+# * \PY@tok processes one class, calling the \PY@tok@classname command
+#   if it exists.
+# * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style
+#   for its class.
+# * \PY resets the style, parses the classnames and then calls \PY@do.
+#
+# Tip: to read this code, print it out in substituted form using e.g.
+# >>> print STYLE_TEMPLATE % {'cp': 'PY'}
+
+STYLE_TEMPLATE = r'''
+\makeatletter
+\def\%(cp)s@reset{\let\%(cp)s@it=\relax \let\%(cp)s@bf=\relax%%
+    \let\%(cp)s@ul=\relax \let\%(cp)s@tc=\relax%%
+    \let\%(cp)s@bc=\relax \let\%(cp)s@ff=\relax}
+\def\%(cp)s@tok#1{\csname %(cp)s@tok@#1\endcsname}
+\def\%(cp)s@toks#1+{\ifx\relax#1\empty\else%%
+    \%(cp)s@tok{#1}\expandafter\%(cp)s@toks\fi}
+\def\%(cp)s@do#1{\%(cp)s@bc{\%(cp)s@tc{\%(cp)s@ul{%%
+    \%(cp)s@it{\%(cp)s@bf{\%(cp)s@ff{#1}}}}}}}
+\def\%(cp)s#1#2{\%(cp)s@reset\%(cp)s@toks#1+\relax+\%(cp)s@do{#2}}
+
+%(styles)s
+
+\def\%(cp)sZbs{\char`\\}
+\def\%(cp)sZus{\char`\_}
+\def\%(cp)sZob{\char`\{}
+\def\%(cp)sZcb{\char`\}}
+\def\%(cp)sZca{\char`\^}
+\def\%(cp)sZsh{\char`\#}
+\def\%(cp)sZpc{\char`\%%}
+\def\%(cp)sZdl{\char`\$}
+\def\%(cp)sZti{\char`\~}
+%% for compatibility with earlier versions
+\def\%(cp)sZat{@}
+\def\%(cp)sZlb{[}
+\def\%(cp)sZrb{]}
+\makeatother
+'''
+
+
+def _get_ttype_name(ttype):
+    fname = STANDARD_TYPES.get(ttype)
+    if fname:
+        return fname
+    aname = ''
+    while fname is None:
+        aname = ttype[-1] + aname
+        ttype = ttype.parent
+        fname = STANDARD_TYPES.get(ttype)
+    return fname + aname
+
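The fallback walk in `_get_ttype_name` can be exercised without pygments at all. The toy `TType` below is a stand-in, not the real `pygments.token` API: all the function relies on is that a token type is a tuple of path names with a `.parent` link.

```python
# Toy stand-in for pygments token types: a tuple of path names plus a
# .parent link, which is all _get_ttype_name above depends on.
class TType(tuple):
    parent = None

def subtype(parent, *path):
    t = TType(path)
    t.parent = parent
    return t

Token = subtype(None)
Name = subtype(Token, 'Name')
Custom = subtype(Name, 'Name', 'Custom')   # no short name registered

STANDARD_TYPES = {Token: '', Name: 'n'}

def get_ttype_name(ttype):
    # Walk towards the root until a registered short name is found,
    # accumulating the unregistered tail as a suffix.
    fname = STANDARD_TYPES.get(ttype)
    if fname:
        return fname
    aname = ''
    while fname is None:
        aname = ttype[-1] + aname
        ttype = ttype.parent
        fname = STANDARD_TYPES.get(ttype)
    return fname + aname

print(get_ttype_name(Custom))  # nCustom
```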
+
+class LatexFormatter(Formatter):
+    r"""
+    Format tokens as LaTeX code. This needs the `fancyvrb` and `color`
+    standard packages.
+
+    Without the `full` option, code is formatted as one ``Verbatim``
+    environment, like this:
+
+    .. sourcecode:: latex
+
+        \begin{Verbatim}[commandchars=\\{\}]
+        \PY{k}{def }\PY{n+nf}{foo}(\PY{n}{bar}):
+            \PY{k}{pass}
+        \end{Verbatim}
+
+    The special command used here (``\PY``) and all the other macros it needs
+    are output by the `get_style_defs` method.
+
+    With the `full` option, a complete LaTeX document is output, including
+    the command definitions in the preamble.
+
+    The `get_style_defs()` method of a `LatexFormatter` returns a string
+    containing ``\def`` commands defining the macros needed inside the
+    ``Verbatim`` environments.
+
+    Additional options accepted:
+
+    `style`
+        The style to use, can be a string or a Style subclass (default:
+        ``'default'``).
+
+    `full`
+        Tells the formatter to output a "full" document, i.e. a complete
+        self-contained document (default: ``False``).
+
+    `title`
+        If `full` is true, the title that should be used to caption the
+        document (default: ``''``).
+
+    `docclass`
+        If the `full` option is enabled, this is the document class to use
+        (default: ``'article'``).
+
+    `preamble`
+        If the `full` option is enabled, this can be further preamble commands,
+        e.g. ``\usepackage`` (default: ``''``).
+
+    `linenos`
+        If set to ``True``, output line numbers (default: ``False``).
+
+    `linenostart`
+        The line number for the first line (default: ``1``).
+
+    `linenostep`
+        If set to a number n > 1, only every nth line number is printed.
+
+    `verboptions`
+        Additional options given to the Verbatim environment (see the *fancyvrb*
+        docs for possible values) (default: ``''``).
+
+    `commandprefix`
+        The LaTeX commands used to produce colored output are constructed
+        using this prefix and some letters (default: ``'PY'``).
+        *New in Pygments 0.7.*
+
+        *New in Pygments 0.10:* the default is now ``'PY'`` instead of ``'C'``.
+
+    `texcomments`
+        If set to ``True``, enables LaTeX comment lines.  That is, LaTeX markup
+        in comment tokens is not escaped so that LaTeX can render it (default:
+        ``False``).  *New in Pygments 1.2.*
+
+    `mathescape`
+        If set to ``True``, enables LaTeX math mode escape in comments. That
+        is, ``'$...$'`` inside a comment will trigger math mode (default:
+        ``False``).  *New in Pygments 1.2.*
+    """
+    name = 'LaTeX'
+    aliases = ['latex', 'tex']
+    filenames = ['*.tex']
+
+    def __init__(self, **options):
+        Formatter.__init__(self, **options)
+        self.docclass = options.get('docclass', 'article')
+        self.preamble = options.get('preamble', '')
+        self.linenos = get_bool_opt(options, 'linenos', False)
+        self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
+        self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
+        self.verboptions = options.get('verboptions', '')
+        self.nobackground = get_bool_opt(options, 'nobackground', False)
+        self.commandprefix = options.get('commandprefix', 'PY')
+        self.texcomments = get_bool_opt(options, 'texcomments', False)
+        self.mathescape = get_bool_opt(options, 'mathescape', False)
+
+        self._create_stylesheet()
+
+
+    def _create_stylesheet(self):
+        t2n = self.ttype2name = {Token: ''}
+        c2d = self.cmd2def = {}
+        cp = self.commandprefix
+
+        def rgbcolor(col):
+            if col:
+                return ','.join(['%.2f' % (int(col[i] + col[i + 1], 16) / 255.0)
+                                 for i in (0, 2, 4)])
+            else:
+                return '1,1,1'
+
+        for ttype, ndef in self.style:
+            name = _get_ttype_name(ttype)
+            cmndef = ''
+            if ndef['bold']:
+                cmndef += r'\let\$$@bf=\textbf'
+            if ndef['italic']:
+                cmndef += r'\let\$$@it=\textit'
+            if ndef['underline']:
+                cmndef += r'\let\$$@ul=\underline'
+            if ndef['roman']:
+                cmndef += r'\let\$$@ff=\textrm'
+            if ndef['sans']:
+                cmndef += r'\let\$$@ff=\textsf'
+            if ndef['mono']:
+                cmndef += r'\let\$$@ff=\textsf'
+            if ndef['color']:
+                cmndef += (r'\def\$$@tc##1{\textcolor[rgb]{%s}{##1}}' %
+                           rgbcolor(ndef['color']))
+            if ndef['border']:
+                cmndef += (r'\def\$$@bc##1{\fcolorbox[rgb]{%s}{%s}{##1}}' %
+                           (rgbcolor(ndef['border']),
+                            rgbcolor(ndef['bgcolor'])))
+            elif ndef['bgcolor']:
+                cmndef += (r'\def\$$@bc##1{\colorbox[rgb]{%s}{##1}}' %
+                           rgbcolor(ndef['bgcolor']))
+            if cmndef == '':
+                continue
+            cmndef = cmndef.replace('$$', cp)
+            t2n[ttype] = name
+            c2d[name] = cmndef
+
+    def get_style_defs(self, arg=''):
+        """
+        Return the command sequences needed to define the commands
+        used to format text in the verbatim environment. ``arg`` is ignored.
+        """
+        cp = self.commandprefix
+        styles = []
+        for name, definition in self.cmd2def.items():
+            styles.append(r'\def\%s@tok@%s{%s}' % (cp, name, definition))
+        return STYLE_TEMPLATE % {'cp': self.commandprefix,
+                                 'styles': '\n'.join(styles)}
+
+    def format_unencoded(self, tokensource, outfile):
+        # TODO: add support for background colors
+        t2n = self.ttype2name
+        cp = self.commandprefix
+
+        if self.full:
+            realoutfile = outfile
+            outfile = StringIO()
+
+        outfile.write(r'\begin{Verbatim}[commandchars=\\\{\}')
+        if self.linenos:
+            start, step = self.linenostart, self.linenostep
+            outfile.write(',numbers=left' +
+                          (start and ',firstnumber=%d' % start or '') +
+                          (step and ',stepnumber=%d' % step or ''))
+        if self.mathescape or self.texcomments:
+            outfile.write(r',codes={\catcode`\$=3\catcode`\^=7\catcode`\_=8}')
+        if self.verboptions:
+            outfile.write(',' + self.verboptions)
+        outfile.write(']\n')
+
+        for ttype, value in tokensource:
+            if ttype in Token.Comment:
+                if self.texcomments:
+                    # Try to guess comment starting lexeme and escape it ...
+                    start = value[0:1]
+                    for i in range(1, len(value)):
+                        if start[0] != value[i]:
+                            break
+                        start += value[i]
+
+                    value = value[len(start):]
+                    start = escape_tex(start, self.commandprefix)
+
+                    # ... but do not escape inside comment.
+                    value = start + value
+                elif self.mathescape:
+                    # Only escape parts not inside a math environment.
+                    parts = value.split('$')
+                    in_math = False
+                    for i, part in enumerate(parts):
+                        if not in_math:
+                            parts[i] = escape_tex(part, self.commandprefix)
+                        in_math = not in_math
+                    value = '$'.join(parts)
+                else:
+                    value = escape_tex(value, self.commandprefix)
+            else:
+                value = escape_tex(value, self.commandprefix)
+            styles = []
+            while ttype is not Token:
+                try:
+                    styles.append(t2n[ttype])
+                except KeyError:
+                    # not in current style
+                    styles.append(_get_ttype_name(ttype))
+                ttype = ttype.parent
+            styleval = '+'.join(reversed(styles))
+            if styleval:
+                spl = value.split('\n')
+                for line in spl[:-1]:
+                    if line:
+                        outfile.write("\\%s{%s}{%s}" % (cp, styleval, line))
+                    outfile.write('\n')
+                if spl[-1]:
+                    outfile.write("\\%s{%s}{%s}" % (cp, styleval, spl[-1]))
+            else:
+                outfile.write(value)
+
+        outfile.write('\\end{Verbatim}\n')
+
+        if self.full:
+            realoutfile.write(DOC_TEMPLATE %
+                dict(docclass  = self.docclass,
+                     preamble  = self.preamble,
+                     title     = self.title,
+                     encoding  = self.encoding or 'latin1',
+                     styledefs = self.get_style_defs(),
+                     code      = outfile.getvalue()))
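The `mathescape` branch in `format_unencoded` rests on a simple invariant: after splitting on `$`, even-indexed parts are outside math mode and odd-indexed parts are inside it. A self-contained sketch, with a trivial stand-in escaper where the formatter itself calls `escape_tex`:

```python
# Sketch of the mathescape branch above: only the even-indexed
# (non-math) parts are escaped before rejoining with '$'.
def escape_outside_math(value, esc):
    parts = value.split('$')
    in_math = False
    for i, part in enumerate(parts):
        if not in_math:
            parts[i] = esc(part)
        in_math = not in_math
    return '$'.join(parts)

# Trivial stand-in escaper; the real code uses escape_tex.
caret = lambda s: s.replace('^', r'\Zca{}')
print(escape_outside_math('area: $x^2$ in m^2', caret))
```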
--- a/ThirdParty/Pygments/pygments/formatters/other.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/formatters/other.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,117 +1,117 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.formatters.other
-    ~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Other formatters: NullFormatter, RawTokenFormatter.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-from pygments.formatter import Formatter
-from pygments.util import OptionError, get_choice_opt, b
-from pygments.token import Token
-from pygments.console import colorize
-
-__all__ = ['NullFormatter', 'RawTokenFormatter']
-
-
-class NullFormatter(Formatter):
-    """
-    Output the text unchanged without any formatting.
-    """
-    name = 'Text only'
-    aliases = ['text', 'null']
-    filenames = ['*.txt']
-
-    def format(self, tokensource, outfile):
-        enc = self.encoding
-        for ttype, value in tokensource:
-            if enc:
-                outfile.write(value.encode(enc))
-            else:
-                outfile.write(value)
-
-
-class RawTokenFormatter(Formatter):
-    r"""
-    Format tokens as a raw representation for storing token streams.
-
-    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
-    be converted to a token stream with the `RawTokenLexer`, described in the
-    `lexer list <lexers.txt>`_.
-
-    Only two options are accepted:
-
-    `compress`
-        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
-        compression algorithm after encoding (default: ``''``).
-    `error_color`
-        If set to a color name, highlight error tokens using that color.  If
-        set but with no value, defaults to ``'red'``.
-        *New in Pygments 0.11.*
-
-    """
-    name = 'Raw tokens'
-    aliases = ['raw', 'tokens']
-    filenames = ['*.raw']
-
-    unicodeoutput = False
-
-    def __init__(self, **options):
-        Formatter.__init__(self, **options)
-        if self.encoding:
-            raise OptionError('the raw formatter does not support the '
-                              'encoding option')
-        self.encoding = 'ascii'  # let pygments.format() do the right thing
-        self.compress = get_choice_opt(options, 'compress',
-                                       ['', 'none', 'gz', 'bz2'], '')
-        self.error_color = options.get('error_color', None)
-        if self.error_color is True:
-            self.error_color = 'red'
-        if self.error_color is not None:
-            try:
-                colorize(self.error_color, '')
-            except KeyError:
-                raise ValueError("Invalid color %r specified" %
-                                 self.error_color)
-
-    def format(self, tokensource, outfile):
-        try:
-            outfile.write(b(''))
-        except TypeError:
-            raise TypeError('The raw tokens formatter needs a binary '
-                            'output file')
-        if self.compress == 'gz':
-            import gzip
-            outfile = gzip.GzipFile('', 'wb', 9, outfile)
-            def write(text):
-                outfile.write(text.encode())
-            flush = outfile.flush
-        elif self.compress == 'bz2':
-            import bz2
-            compressor = bz2.BZ2Compressor(9)
-            def write(text):
-                outfile.write(compressor.compress(text.encode()))
-            def flush():
-                outfile.write(compressor.flush())
-                outfile.flush()
-        else:
-            def write(text):
-                outfile.write(text.encode())
-            flush = outfile.flush
-
-        lasttype = None
-        lastval = ''
-        if self.error_color:
-            for ttype, value in tokensource:
-                line = "%s\t%r\n" % (ttype, value)
-                if ttype is Token.Error:
-                    write(colorize(self.error_color, line))
-                else:
-                    write(line)
-        else:
-            for ttype, value in tokensource:
-                write("%s\t%r\n" % (ttype, value))
-        flush()
+# -*- coding: utf-8 -*-
+"""
+    pygments.formatters.other
+    ~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Other formatters: NullFormatter, RawTokenFormatter.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+from pygments.formatter import Formatter
+from pygments.util import OptionError, get_choice_opt, b
+from pygments.token import Token
+from pygments.console import colorize
+
+__all__ = ['NullFormatter', 'RawTokenFormatter']
+
+
+class NullFormatter(Formatter):
+    """
+    Output the text unchanged without any formatting.
+    """
+    name = 'Text only'
+    aliases = ['text', 'null']
+    filenames = ['*.txt']
+
+    def format(self, tokensource, outfile):
+        enc = self.encoding
+        for ttype, value in tokensource:
+            if enc:
+                outfile.write(value.encode(enc))
+            else:
+                outfile.write(value)
+
+
+class RawTokenFormatter(Formatter):
+    r"""
+    Format tokens as a raw representation for storing token streams.
+
+    The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
+    be converted to a token stream with the `RawTokenLexer`, described in the
+    `lexer list <lexers.txt>`_.
+
+    Only two options are accepted:
+
+    `compress`
+        If set to ``'gz'`` or ``'bz2'``, compress the output with the given
+        compression algorithm after encoding (default: ``''``).
+    `error_color`
+        If set to a color name, highlight error tokens using that color.  If
+        set but with no value, defaults to ``'red'``.
+        *New in Pygments 0.11.*
+
+    """
+    name = 'Raw tokens'
+    aliases = ['raw', 'tokens']
+    filenames = ['*.raw']
+
+    unicodeoutput = False
+
+    def __init__(self, **options):
+        Formatter.__init__(self, **options)
+        if self.encoding:
+            raise OptionError('the raw formatter does not support the '
+                              'encoding option')
+        self.encoding = 'ascii'  # let pygments.format() do the right thing
+        self.compress = get_choice_opt(options, 'compress',
+                                       ['', 'none', 'gz', 'bz2'], '')
+        self.error_color = options.get('error_color', None)
+        if self.error_color is True:
+            self.error_color = 'red'
+        if self.error_color is not None:
+            try:
+                colorize(self.error_color, '')
+            except KeyError:
+                raise ValueError("Invalid color %r specified" %
+                                 self.error_color)
+
+    def format(self, tokensource, outfile):
+        try:
+            outfile.write(b(''))
+        except TypeError:
+            raise TypeError('The raw tokens formatter needs a binary '
+                            'output file')
+        if self.compress == 'gz':
+            import gzip
+            outfile = gzip.GzipFile('', 'wb', 9, outfile)
+            def write(text):
+                outfile.write(text.encode())
+            flush = outfile.flush
+        elif self.compress == 'bz2':
+            import bz2
+            compressor = bz2.BZ2Compressor(9)
+            def write(text):
+                outfile.write(compressor.compress(text.encode()))
+            def flush():
+                outfile.write(compressor.flush())
+                outfile.flush()
+        else:
+            def write(text):
+                outfile.write(text.encode())
+            flush = outfile.flush
+
+        lasttype = None
+        lastval = ''
+        if self.error_color:
+            for ttype, value in tokensource:
+                line = "%s\t%r\n" % (ttype, value)
+                if ttype is Token.Error:
+                    write(colorize(self.error_color, line))
+                else:
+                    write(line)
+        else:
+            for ttype, value in tokensource:
+                write("%s\t%r\n" % (ttype, value))
+        flush()
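The `tokentype<TAB>repr(value)` lines written by `RawTokenFormatter` round-trip cleanly. A sketch of both directions (token types are plain strings here for illustration; the real formatter stringifies actual token objects the same way, and the reverse direction is what pygments' `RawTokenLexer` does):

```python
import ast

# One "tokentype<TAB>repr(value)" line per token, as in the format()
# loop above.
tokens = [('Token.Keyword', 'def'), ('Token.Text', ' '),
          ('Token.Name.Function', 'foo')]

dump = ''.join('%s\t%r\n' % (ttype, value) for ttype, value in tokens)

# Reading it back: split on the first tab, then safely evaluate the
# repr of the token string.
parsed = [(line.split('\t', 1)[0], ast.literal_eval(line.split('\t', 1)[1]))
          for line in dump.splitlines()]

print(parsed == tokens)  # True
```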
--- a/ThirdParty/Pygments/pygments/lexer.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/lexer.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,658 +1,675 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.lexer
-    ~~~~~~~~~~~~~~
-
-    Base lexer classes.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-import re
-
-from pygments.filter import apply_filters, Filter
-from pygments.filters import get_filter_by_name
-from pygments.token import Error, Text, Other, _TokenType
-from pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
-     make_analysator
-import collections
-
-
-__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
-           'LexerContext', 'include', 'bygroups', 'using', 'this']
-
-
-_default_analyse = staticmethod(lambda x: 0.0)
-
-
-class LexerMeta(type):
-    """
-    This metaclass automagically converts ``analyse_text`` methods into
-    static methods which always return float values.
-    """
-
-    def __new__(cls, name, bases, d):
-        if 'analyse_text' in d:
-            d['analyse_text'] = make_analysator(d['analyse_text'])
-        return type.__new__(cls, name, bases, d)
-
-
-class Lexer(object, metaclass=LexerMeta):
-    """
-    Lexer for a specific language.
-
-    Basic options recognized:
-    ``stripnl``
-        Strip leading and trailing newlines from the input (default: True).
-    ``stripall``
-        Strip all leading and trailing whitespace from the input
-        (default: False).
-    ``ensurenl``
-        Make sure that the input ends with a newline (default: True).  This
-        is required for some lexers that consume input linewise.
-        *New in Pygments 1.3.*
-    ``tabsize``
-        If given and greater than 0, expand tabs in the input (default: 0).
-    ``encoding``
-        If given, must be an encoding name. This encoding will be used to
-        convert the input string to Unicode, if it is not already a Unicode
-        string (default: ``'latin1'``).
-        Can also be ``'guess'`` to use a simple UTF-8 / Latin1 detection, or
-        ``'chardet'`` to use the chardet library, if it is installed.
-    """
-
-    #: Name of the lexer
-    name = None
-
-    #: Shortcuts for the lexer
-    aliases = []
-
-    #: fn match rules
-    filenames = []
-
-    #: fn alias filenames
-    alias_filenames = []
-
-    #: mime types
-    mimetypes = []
-
-    def __init__(self, **options):
-        self.options = options
-        self.stripnl = get_bool_opt(options, 'stripnl', True)
-        self.stripall = get_bool_opt(options, 'stripall', False)
-        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
-        self.tabsize = get_int_opt(options, 'tabsize', 0)
-        self.encoding = options.get('encoding', 'latin1')
-        # self.encoding = options.get('inencoding', None) or self.encoding
-        self.filters = []
-        for filter_ in get_list_opt(options, 'filters', ()):
-            self.add_filter(filter_)
-
-    def __repr__(self):
-        if self.options:
-            return '<pygments.lexers.%s with %r>' % (self.__class__.__name__,
-                                                     self.options)
-        else:
-            return '<pygments.lexers.%s>' % self.__class__.__name__
-
-    def add_filter(self, filter_, **options):
-        """
-        Add a new stream filter to this lexer.
-        """
-        if not isinstance(filter_, Filter):
-            filter_ = get_filter_by_name(filter_, **options)
-        self.filters.append(filter_)
-
-    def analyse_text(text):
-        """
-        Has to return a float between ``0`` and ``1`` that indicates
-        if a lexer wants to highlight this text. Used by ``guess_lexer``.
-        If this method returns ``0`` it won't highlight it in any case;
-        if it returns ``1``, highlighting with this lexer is guaranteed.
-
-        The `LexerMeta` metaclass automatically wraps this function so
-        that it works like a static method (no ``self`` or ``cls``
-        parameter) and the return value is automatically converted to
-        `float`. If the return value is an object that is boolean `False`,
-        it's the same as if the return value was ``0.0``.
-        """
-
-    def get_tokens(self, text, unfiltered=False):
-        """
-        Return an iterable of (tokentype, value) pairs generated from
-        `text`. If `unfiltered` is set to `True`, the filtering mechanism
-        is bypassed even if filters are defined.
-
-        Also preprocesses the text, i.e. expands tabs and strips it
-        if wanted, and applies registered filters.
-        """
-        if not isinstance(text, str):
-            if self.encoding == 'guess':
-                try:
-                    text = text.decode('utf-8')
-                    if text.startswith('\ufeff'):
-                        text = text[len('\ufeff'):]
-                except UnicodeDecodeError:
-                    text = text.decode('latin1')
-            elif self.encoding == 'chardet':
-                try:
-                    import chardet
-                except ImportError:
-                    raise ImportError('To enable chardet encoding guessing, '
-                                      'please install the chardet library '
-                                      'from http://chardet.feedparser.org/')
-                enc = chardet.detect(text)
-                text = text.decode(enc['encoding'])
-            else:
-                text = text.decode(self.encoding)
-        # text now *is* a unicode string
-        text = text.replace('\r\n', '\n')
-        text = text.replace('\r', '\n')
-        if self.stripall:
-            text = text.strip()
-        elif self.stripnl:
-            text = text.strip('\n')
-        if self.tabsize > 0:
-            text = text.expandtabs(self.tabsize)
-        if self.ensurenl and not text.endswith('\n'):
-            text += '\n'
-
-        def streamer():
-            for i, t, v in self.get_tokens_unprocessed(text):
-                yield t, v
-        stream = streamer()
-        if not unfiltered:
-            stream = apply_filters(stream, self.filters, self)
-        return stream
-
-    def get_tokens_unprocessed(self, text):
-        """
-        Return an iterable of (tokentype, value) pairs.
-        In subclasses, implement this method as a generator to
-        maximize effectiveness.
-        """
-        raise NotImplementedError
-
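The preprocessing performed by `Lexer.get_tokens` can be condensed into a standalone helper. The option names below mirror the lexer options documented above; the `'guess'`/`'chardet'` decoding branches are left out of this sketch.

```python
# The input normalization from Lexer.get_tokens above: normalize line
# endings, strip, expand tabs, and guarantee a trailing newline.
def preprocess(text, tabsize=0, stripnl=True, stripall=False, ensurenl=True):
    text = text.replace('\r\n', '\n').replace('\r', '\n')
    if stripall:
        text = text.strip()
    elif stripnl:
        text = text.strip('\n')
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'
    return text

print(repr(preprocess('\nif x:\r\n\tpass\r', tabsize=4)))
```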
-
-class DelegatingLexer(Lexer):
-    """
-    This lexer takes two lexers as arguments: a root lexer and
-    a language lexer. First, everything is scanned using the language
-    lexer; afterwards, all ``Other`` tokens are lexed using the root
-    lexer.
-
-    The lexers from the ``template`` lexer package use this base lexer.
-    """
-
-    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
-        self.root_lexer = _root_lexer(**options)
-        self.language_lexer = _language_lexer(**options)
-        self.needle = _needle
-        Lexer.__init__(self, **options)
-
-    def get_tokens_unprocessed(self, text):
-        buffered = ''
-        insertions = []
-        lng_buffer = []
-        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
-            if t is self.needle:
-                if lng_buffer:
-                    insertions.append((len(buffered), lng_buffer))
-                    lng_buffer = []
-                buffered += v
-            else:
-                lng_buffer.append((i, t, v))
-        if lng_buffer:
-            insertions.append((len(buffered), lng_buffer))
-        return do_insertions(insertions,
-                             self.root_lexer.get_tokens_unprocessed(buffered))
-
-
-#-------------------------------------------------------------------------------
-# RegexLexer and ExtendedRegexLexer
-#
-
-
-class include(str):
-    """
-    Indicates that a state should include rules from another state.
-    """
-    pass
-
-
-class combined(tuple):
-    """
-    Indicates a state combined from multiple states.
-    """
-
-    def __new__(cls, *args):
-        return tuple.__new__(cls, args)
-
-    def __init__(self, *args):
-        # tuple.__init__ doesn't do anything
-        pass
-
-
-class _PseudoMatch(object):
-    """
-    A pseudo match object constructed from a string.
-    """
-
-    def __init__(self, start, text):
-        self._text = text
-        self._start = start
-
-    def start(self, arg=None):
-        return self._start
-
-    def end(self, arg=None):
-        return self._start + len(self._text)
-
-    def group(self, arg=None):
-        if arg:
-            raise IndexError('No such group')
-        return self._text
-
-    def groups(self):
-        return (self._text,)
-
-    def groupdict(self):
-        return {}
-
-
-def bygroups(*args):
-    """
-    Callback that yields multiple actions for each group in the match.
-    """
-    def callback(lexer, match, ctx=None):
-        for i, action in enumerate(args):
-            if action is None:
-                continue
-            elif type(action) is _TokenType:
-                data = match.group(i + 1)
-                if data:
-                    yield match.start(i + 1), action, data
-            else:
-                if ctx:
-                    ctx.pos = match.start(i + 1)
-                for item in action(lexer, _PseudoMatch(match.start(i + 1),
-                                   match.group(i + 1)), ctx):
-                    if item:
-                        yield item
-        if ctx:
-            ctx.pos = match.end()
-    return callback
-
-
-class _This(object):
-    """
-    Special singleton used for indicating the caller class.
-    Used by ``using``.
-    """
-this = _This()
-
-
-def using(_other, **kwargs):
-    """
-    Callback that processes the match with a different lexer.
-
-    The keyword arguments are forwarded to the lexer, except `state` which
-    is handled separately.
-
-    `state` specifies the state that the new lexer will start in, and can
-    be an enumerable such as ('root', 'inline', 'string') or a simple
-    string which is assumed to be on top of the root state.
-
-    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
-    """
-    gt_kwargs = {}
-    if 'state' in kwargs:
-        s = kwargs.pop('state')
-        if isinstance(s, (list, tuple)):
-            gt_kwargs['stack'] = s
-        else:
-            gt_kwargs['stack'] = ('root', s)
-
-    if _other is this:
-        def callback(lexer, match, ctx=None):
-            # if keyword arguments are given the callback
-            # function has to create a new lexer instance
-            if kwargs:
-                # XXX: cache that somehow
-                kwargs.update(lexer.options)
-                lx = lexer.__class__(**kwargs)
-            else:
-                lx = lexer
-            s = match.start()
-            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
-                yield i + s, t, v
-            if ctx:
-                ctx.pos = match.end()
-    else:
-        def callback(lexer, match, ctx=None):
-            # XXX: cache that somehow
-            kwargs.update(lexer.options)
-            lx = _other(**kwargs)
-
-            s = match.start()
-            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
-                yield i + s, t, v
-            if ctx:
-                ctx.pos = match.end()
-    return callback
-
-
-class RegexLexerMeta(LexerMeta):
-    """
-    Metaclass for RegexLexer, creates the self._tokens attribute from
-    self.tokens on the first instantiation.
-    """
-
-    def _process_state(cls, unprocessed, processed, state):
-        assert type(state) is str, "wrong state name %r" % state
-        assert state[0] != '#', "invalid state name %r" % state
-        if state in processed:
-            return processed[state]
-        tokens = processed[state] = []
-        rflags = cls.flags
-        for tdef in unprocessed[state]:
-            if isinstance(tdef, include):
-                # it's a state reference
-                assert tdef != state, "circular state reference %r" % state
-                tokens.extend(cls._process_state(unprocessed, processed, str(tdef)))
-                continue
-
-            assert type(tdef) is tuple, "wrong rule def %r" % tdef
-
-            try:
-                rex = re.compile(tdef[0], rflags).match
-            except Exception as err:
-                raise ValueError("uncompilable regex %r in state %r of %r: %s" %
-                                 (tdef[0], state, cls, err))
-
-            assert type(tdef[1]) is _TokenType or isinstance(tdef[1], collections.Callable), \
-                   'token type must be simple type or callable, not %r' % (tdef[1],)
-
-            if len(tdef) == 2:
-                new_state = None
-            else:
-                tdef2 = tdef[2]
-                if isinstance(tdef2, str):
-                    # an existing state
-                    if tdef2 == '#pop':
-                        new_state = -1
-                    elif tdef2 in unprocessed:
-                        new_state = (tdef2,)
-                    elif tdef2 == '#push':
-                        new_state = tdef2
-                    elif tdef2[:5] == '#pop:':
-                        new_state = -int(tdef2[5:])
-                    else:
-                        assert False, 'unknown new state %r' % tdef2
-                elif isinstance(tdef2, combined):
-                    # combine a new state from existing ones
-                    new_state = '_tmp_%d' % cls._tmpname
-                    cls._tmpname += 1
-                    itokens = []
-                    for istate in tdef2:
-                        assert istate != state, 'circular state ref %r' % istate
-                        itokens.extend(cls._process_state(unprocessed,
-                                                          processed, istate))
-                    processed[new_state] = itokens
-                    new_state = (new_state,)
-                elif isinstance(tdef2, tuple):
-                    # push more than one state
-                    for state in tdef2:
-                        assert (state in unprocessed or
-                                state in ('#pop', '#push')), \
-                               'unknown new state ' + state
-                    new_state = tdef2
-                else:
-                    assert False, 'unknown new state def %r' % tdef2
-            tokens.append((rex, tdef[1], new_state))
-        return tokens
-
-    def process_tokendef(cls, name, tokendefs=None):
-        processed = cls._all_tokens[name] = {}
-        tokendefs = tokendefs or cls.tokens[name]
-        for state in list(list(tokendefs.keys())):
-            cls._process_state(tokendefs, processed, state)
-        return processed
-
-    def __call__(cls, *args, **kwds):
-        if not hasattr(cls, '_tokens'):
-            cls._all_tokens = {}
-            cls._tmpname = 0
-            if hasattr(cls, 'token_variants') and cls.token_variants:
-                # don't process yet
-                pass
-            else:
-                cls._tokens = cls.process_tokendef('', cls.tokens)
-
-        return type.__call__(cls, *args, **kwds)
-
-
-class RegexLexer(Lexer, metaclass=RegexLexerMeta):
-    """
-    Base for simple stateful regular expression-based lexers.
-    Simplifies the lexing process so that you need only
-    provide a list of states and regular expressions.
-    """
-
-    #: Flags for compiling the regular expressions.
-    #: Defaults to MULTILINE.
-    flags = re.MULTILINE
-
-    #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}``
-    #:
-    #: The initial state is 'root'.
-    #: ``new_state`` can be omitted to signify no state transition.
-    #: If it is a string, the state is pushed on the stack and changed.
-    #: If it is a tuple of strings, all states are pushed on the stack and
-    #: the current state will be the topmost.
-    #: It can also be ``combined('state1', 'state2', ...)``
-    #: to signify a new, anonymous state combined from the rules of two
-    #: or more existing ones.
-    #: Furthermore, it can be '#pop' to signify going back one step in
-    #: the state stack, or '#push' to push the current state on the stack
-    #: again.
-    #:
-    #: The tuple can also be replaced with ``include('state')``, in which
-    #: case the rules from the state named by the string are included in the
-    #: current one.
-    tokens = {}
-
-    def get_tokens_unprocessed(self, text, stack=('root',)):
-        """
-        Split ``text`` into (tokentype, text) pairs.
-
-        ``stack`` is the inital stack (default: ``['root']``)
-        """
-        pos = 0
-        tokendefs = self._tokens
-        statestack = list(stack)
-        statetokens = tokendefs[statestack[-1]]
-        while 1:
-            for rexmatch, action, new_state in statetokens:
-                m = rexmatch(text, pos)
-                if m:
-                    if type(action) is _TokenType:
-                        yield pos, action, m.group()
-                    else:
-                        for item in action(self, m):
-                            yield item
-                    pos = m.end()
-                    if new_state is not None:
-                        # state transition
-                        if isinstance(new_state, tuple):
-                            for state in new_state:
-                                if state == '#pop':
-                                    statestack.pop()
-                                elif state == '#push':
-                                    statestack.append(statestack[-1])
-                                else:
-                                    statestack.append(state)
-                        elif isinstance(new_state, int):
-                            # pop
-                            del statestack[new_state:]
-                        elif new_state == '#push':
-                            statestack.append(statestack[-1])
-                        else:
-                            assert False, "wrong state def: %r" % new_state
-                        statetokens = tokendefs[statestack[-1]]
-                    break
-            else:
-                try:
-                    if text[pos] == '\n':
-                        # at EOL, reset state to "root"
-                        pos += 1
-                        statestack = ['root']
-                        statetokens = tokendefs['root']
-                        yield pos, Text, '\n'
-                        continue
-                    yield pos, Error, text[pos]
-                    pos += 1
-                except IndexError:
-                    break
-
-
-class LexerContext(object):
-    """
-    A helper object that holds lexer position data.
-    """
-
-    def __init__(self, text, pos, stack=None, end=None):
-        self.text = text
-        self.pos = pos
-        self.end = end or len(text) # end=0 not supported ;-)
-        self.stack = stack or ['root']
-
-    def __repr__(self):
-        return 'LexerContext(%r, %r, %r)' % (
-            self.text, self.pos, self.stack)
-
-
-class ExtendedRegexLexer(RegexLexer):
-    """
-    A RegexLexer that uses a context object to store its state.
-    """
-
-    def get_tokens_unprocessed(self, text=None, context=None):
-        """
-        Split ``text`` into (tokentype, text) pairs.
-        If ``context`` is given, use this lexer context instead.
-        """
-        tokendefs = self._tokens
-        if not context:
-            ctx = LexerContext(text, 0)
-            statetokens = tokendefs['root']
-        else:
-            ctx = context
-            statetokens = tokendefs[ctx.stack[-1]]
-            text = ctx.text
-        while 1:
-            for rexmatch, action, new_state in statetokens:
-                m = rexmatch(text, ctx.pos, ctx.end)
-                if m:
-                    if type(action) is _TokenType:
-                        yield ctx.pos, action, m.group()
-                        ctx.pos = m.end()
-                    else:
-                        for item in action(self, m, ctx):
-                            yield item
-                        if not new_state:
-                            # altered the state stack?
-                            statetokens = tokendefs[ctx.stack[-1]]
-                    # CAUTION: callback must set ctx.pos!
-                    if new_state is not None:
-                        # state transition
-                        if isinstance(new_state, tuple):
-                            ctx.stack.extend(new_state)
-                        elif isinstance(new_state, int):
-                            # pop
-                            del ctx.stack[new_state:]
-                        elif new_state == '#push':
-                            ctx.stack.append(ctx.stack[-1])
-                        else:
-                            assert False, "wrong state def: %r" % new_state
-                        statetokens = tokendefs[ctx.stack[-1]]
-                    break
-            else:
-                try:
-                    if ctx.pos >= ctx.end:
-                        break
-                    if text[ctx.pos] == '\n':
-                        # at EOL, reset state to "root"
-                        ctx.pos += 1
-                        ctx.stack = ['root']
-                        statetokens = tokendefs['root']
-                        yield ctx.pos, Text, '\n'
-                        continue
-                    yield ctx.pos, Error, text[ctx.pos]
-                    ctx.pos += 1
-                except IndexError:
-                    break
-
-
-def do_insertions(insertions, tokens):
-    """
-    Helper for lexers which must combine the results of several
-    sublexers.
-
-    ``insertions`` is a list of ``(index, itokens)`` pairs.
-    Each ``itokens`` iterable should be inserted at position
-    ``index`` into the token stream given by the ``tokens``
-    argument.
-
-    The result is a combined token stream.
-
-    TODO: clean up the code here.
-    """
-    insertions = iter(insertions)
-    try:
-        index, itokens = next(insertions)
-    except StopIteration:
-        # no insertions
-        for item in tokens:
-            yield item
-        return
-
-    realpos = None
-    insleft = True
-
-    # iterate over the token stream where we want to insert
-    # the tokens from the insertion list.
-    for i, t, v in tokens:
-        # first iteration. store the postition of first item
-        if realpos is None:
-            realpos = i
-        oldi = 0
-        while insleft and i + len(v) >= index:
-            tmpval = v[oldi:index - i]
-            yield realpos, t, tmpval
-            realpos += len(tmpval)
-            for it_index, it_token, it_value in itokens:
-                yield realpos, it_token, it_value
-                realpos += len(it_value)
-            oldi = index - i
-            try:
-                index, itokens = next(insertions)
-            except StopIteration:
-                insleft = False
-                break  # not strictly necessary
-        yield realpos, t, v[oldi:]
-        realpos += len(v) - oldi
-
-    # leftover tokens
-    while insleft:
-        # no normal tokens, set realpos to zero
-        realpos = realpos or 0
-        for p, t, v in itokens:
-            yield realpos, t, v
-            realpos += len(v)
-        try:
-            index, itokens = next(insertions)
-        except StopIteration:
-            insleft = False
-            break  # not strictly necessary
-
+# -*- coding: utf-8 -*-
+"""
+    pygments.lexer
+    ~~~~~~~~~~~~~~
+
+    Base lexer classes.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+import re
+
+from pygments.filter import apply_filters, Filter
+from pygments.filters import get_filter_by_name
+from pygments.token import Error, Text, Other, _TokenType
+from pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
+     make_analysator
+
+
+__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
+           'LexerContext', 'include', 'bygroups', 'using', 'this']
+
+
+_default_analyse = staticmethod(lambda x: 0.0)
+
+
+class LexerMeta(type):
+    """
+    This metaclass automagically converts ``analyse_text`` methods into
+    static methods which always return float values.
+    """
+
+    def __new__(cls, name, bases, d):
+        if 'analyse_text' in d:
+            d['analyse_text'] = make_analysator(d['analyse_text'])
+        return type.__new__(cls, name, bases, d)
+
+
+class Lexer(object, metaclass=LexerMeta):
+    """
+    Lexer for a specific language.
+
+    Basic options recognized:
+    ``stripnl``
+        Strip leading and trailing newlines from the input (default: True).
+    ``stripall``
+        Strip all leading and trailing whitespace from the input
+        (default: False).
+    ``ensurenl``
+        Make sure that the input ends with a newline (default: True).  This
+        is required for some lexers that consume input linewise.
+        *New in Pygments 1.3.*
+    ``tabsize``
+        If given and greater than 0, expand tabs in the input (default: 0).
+    ``encoding``
+        If given, must be an encoding name. This encoding will be used to
+        convert the input string to Unicode, if it is not already a Unicode
+        string (default: ``'latin1'``).
+        Can also be ``'guess'`` to use a simple UTF-8 / Latin1 detection, or
+        ``'chardet'`` to use the chardet library, if it is installed.
+    """
+
+    #: Name of the lexer
+    name = None
+
+    #: Shortcuts for the lexer
+    aliases = []
+
+    #: fn match rules
+    filenames = []
+
+    #: fn alias filenames
+    alias_filenames = []
+
+    #: mime types
+    mimetypes = []
+
+    def __init__(self, **options):
+        self.options = options
+        self.stripnl = get_bool_opt(options, 'stripnl', True)
+        self.stripall = get_bool_opt(options, 'stripall', False)
+        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
+        self.tabsize = get_int_opt(options, 'tabsize', 0)
+        self.encoding = options.get('encoding', 'latin1')
+        # self.encoding = options.get('inencoding', None) or self.encoding
+        self.filters = []
+        for filter_ in get_list_opt(options, 'filters', ()):
+            self.add_filter(filter_)
+
+    def __repr__(self):
+        if self.options:
+            return '<pygments.lexers.%s with %r>' % (self.__class__.__name__,
+                                                     self.options)
+        else:
+            return '<pygments.lexers.%s>' % self.__class__.__name__
+
+    def add_filter(self, filter_, **options):
+        """
+        Add a new stream filter to this lexer.
+        """
+        if not isinstance(filter_, Filter):
+            filter_ = get_filter_by_name(filter_, **options)
+        self.filters.append(filter_)
+
+    def analyse_text(text):
+        """
+        Has to return a float between ``0`` and ``1`` that indicates
+        if a lexer wants to highlight this text. Used by ``guess_lexer``.
+        If this method returns ``0`` it won't highlight it in any case; if
+        it returns ``1``, highlighting with this lexer is guaranteed.
+
+        The `LexerMeta` metaclass automatically wraps this function so
+        that it works like a static method (no ``self`` or ``cls``
+        parameter) and the return value is automatically converted to
+        `float`. If the return value is an object that is boolean `False`
+        it's the same as if the return value was ``0.0``.
+        """
+
+    def get_tokens(self, text, unfiltered=False):
+        """
+        Return an iterable of (tokentype, value) pairs generated from
+        `text`. If `unfiltered` is set to `True`, the filtering mechanism
+        is bypassed even if filters are defined.
+
+        Also preprocess the text, i.e. expand tabs and strip it if
+        wanted, and apply registered filters.
+        """
+        if not isinstance(text, str):
+            if self.encoding == 'guess':
+                try:
+                    text = text.decode('utf-8')
+                    if text.startswith('\ufeff'):
+                        text = text[len('\ufeff'):]
+                except UnicodeDecodeError:
+                    text = text.decode('latin1')
+            elif self.encoding == 'chardet':
+                try:
+                    import chardet
+                except ImportError:
+                    raise ImportError('To enable chardet encoding guessing, '
+                                      'please install the chardet library '
+                                      'from http://chardet.feedparser.org/')
+                enc = chardet.detect(text)
+                text = text.decode(enc['encoding'])
+            else:
+                text = text.decode(self.encoding)
+        # text now *is* a unicode string
+        text = text.replace('\r\n', '\n')
+        text = text.replace('\r', '\n')
+        if self.stripall:
+            text = text.strip()
+        elif self.stripnl:
+            text = text.strip('\n')
+        if self.tabsize > 0:
+            text = text.expandtabs(self.tabsize)
+        if self.ensurenl and not text.endswith('\n'):
+            text += '\n'
+
+        def streamer():
+            for i, t, v in self.get_tokens_unprocessed(text):
+                yield t, v
+        stream = streamer()
+        if not unfiltered:
+            stream = apply_filters(stream, self.filters, self)
+        return stream
+
+    def get_tokens_unprocessed(self, text):
+        """
+        Return an iterable of (tokentype, value) pairs.
+        In subclasses, implement this method as a generator to
+        maximize effectiveness.
+        """
+        raise NotImplementedError
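The input normalization performed in ``get_tokens`` above can be sketched standalone; the ``preprocess`` helper name is illustrative (not part of the Pygments API) and the defaults mirror the options documented in the class docstring:

```python
# Standalone sketch of the preprocessing done by Lexer.get_tokens.
# The helper name is illustrative; defaults mirror the documented options.
def preprocess(text, stripnl=True, stripall=False, tabsize=0, ensurenl=True):
    text = text.replace('\r\n', '\n').replace('\r', '\n')  # unify newlines
    if stripall:
        text = text.strip()            # strip all surrounding whitespace
    elif stripnl:
        text = text.strip('\n')        # strip only surrounding newlines
    if tabsize > 0:
        text = text.expandtabs(tabsize)
    if ensurenl and not text.endswith('\n'):
        text += '\n'                   # linewise lexers rely on this
    return text

normalized = preprocess('\r\nprint(1)\r')   # becomes 'print(1)\n'
```

Note the order: newline stripping happens before tab expansion, and the trailing newline is appended last, so ``ensurenl`` always wins over ``stripnl``.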
+
+
+class DelegatingLexer(Lexer):
+    """
+    This lexer takes two lexers as arguments: a root lexer and
+    a language lexer. First everything is scanned using the language
+    lexer, afterwards all ``Other`` tokens are lexed using the root
+    lexer.
+
+    The lexers from the ``template`` lexer package use this base lexer.
+    """
+
+    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
+        self.root_lexer = _root_lexer(**options)
+        self.language_lexer = _language_lexer(**options)
+        self.needle = _needle
+        Lexer.__init__(self, **options)
+
+    def get_tokens_unprocessed(self, text):
+        buffered = ''
+        insertions = []
+        lng_buffer = []
+        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
+            if t is self.needle:
+                if lng_buffer:
+                    insertions.append((len(buffered), lng_buffer))
+                    lng_buffer = []
+                buffered += v
+            else:
+                lng_buffer.append((i, t, v))
+        if lng_buffer:
+            insertions.append((len(buffered), lng_buffer))
+        return do_insertions(insertions,
+                             self.root_lexer.get_tokens_unprocessed(buffered))
+
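The buffering step in ``DelegatingLexer.get_tokens_unprocessed`` can be illustrated standalone. The token stream below is fabricated for the example, with the plain string ``'Other'`` standing in for the ``Other`` token type:

```python
# Standalone sketch of DelegatingLexer's buffering: text carrying the
# needle token ('Other' here) is concatenated for the root lexer, while
# every other token is queued with its offset for later reinsertion.
stream = [(0, 'Other', 'print('), (6, 'Var', 'x'), (7, 'Other', ')')]

buffered, insertions, lng_buffer = '', [], []
for i, t, v in stream:
    if t == 'Other':
        if lng_buffer:
            # flush queued language tokens at the current buffer offset
            insertions.append((len(buffered), lng_buffer))
            lng_buffer = []
        buffered += v
    else:
        lng_buffer.append((i, t, v))
if lng_buffer:
    insertions.append((len(buffered), lng_buffer))

# buffered is now 'print()' and insertions records that the Var token
# belongs at offset 6 of the buffered text
```

The root lexer then lexes ``buffered`` as one contiguous string, and ``do_insertions`` splices the queued tokens back in at the recorded offsets.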
+
+#-------------------------------------------------------------------------------
+# RegexLexer and ExtendedRegexLexer
+#
+
+
+class include(str):
+    """
+    Indicates that a state should include rules from another state.
+    """
+    pass
+
+
+class combined(tuple):
+    """
+    Indicates a state combined from multiple states.
+    """
+
+    def __new__(cls, *args):
+        return tuple.__new__(cls, args)
+
+    def __init__(self, *args):
+        # tuple.__init__ doesn't do anything
+        pass
+
+
+class _PseudoMatch(object):
+    """
+    A pseudo match object constructed from a string.
+    """
+
+    def __init__(self, start, text):
+        self._text = text
+        self._start = start
+
+    def start(self, arg=None):
+        return self._start
+
+    def end(self, arg=None):
+        return self._start + len(self._text)
+
+    def group(self, arg=None):
+        if arg:
+            raise IndexError('No such group')
+        return self._text
+
+    def groups(self):
+        return (self._text,)
+
+    def groupdict(self):
+        return {}
+
+
+def bygroups(*args):
+    """
+    Callback that yields multiple actions for each group in the match.
+    """
+    def callback(lexer, match, ctx=None):
+        for i, action in enumerate(args):
+            if action is None:
+                continue
+            elif type(action) is _TokenType:
+                data = match.group(i + 1)
+                if data:
+                    yield match.start(i + 1), action, data
+            else:
+                if ctx:
+                    ctx.pos = match.start(i + 1)
+                for item in action(lexer, _PseudoMatch(match.start(i + 1),
+                                   match.group(i + 1)), ctx):
+                    if item:
+                        yield item
+        if ctx:
+            ctx.pos = match.end()
+    return callback
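A standalone sketch of the per-group splitting that ``bygroups`` performs; plain strings stand in for token types, and ``bygroups_demo`` is an illustrative helper, not the real callback:

```python
import re

# Standalone sketch of the bygroups idea: one rule matches, and each
# capture group is emitted as its own (position, token, text) triple.
def bygroups_demo(pattern, text, group_tokens):
    m = re.match(pattern, text)
    out = []
    for i, tok in enumerate(group_tokens, 1):
        if m.group(i):  # empty groups are skipped, as in bygroups
            out.append((m.start(i), tok, m.group(i)))
    return out

triples = bygroups_demo(r'(\w+)(\s*)(=)', 'answer = 42',
                        ['Name', 'Whitespace', 'Operator'])
# triples: [(0, 'Name', 'answer'), (6, 'Whitespace', ' '), (7, 'Operator', '=')]
```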
+
+
+class _This(object):
+    """
+    Special singleton used for indicating the caller class.
+    Used by ``using``.
+    """
+this = _This()
+
+
+def using(_other, **kwargs):
+    """
+    Callback that processes the match with a different lexer.
+
+    The keyword arguments are forwarded to the lexer, except `state` which
+    is handled separately.
+
+    `state` specifies the state that the new lexer will start in, and can
+    be an enumerable such as ('root', 'inline', 'string') or a simple
+    string which is assumed to be on top of the root state.
+
+    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
+    """
+    gt_kwargs = {}
+    if 'state' in kwargs:
+        s = kwargs.pop('state')
+        if isinstance(s, (list, tuple)):
+            gt_kwargs['stack'] = s
+        else:
+            gt_kwargs['stack'] = ('root', s)
+
+    if _other is this:
+        def callback(lexer, match, ctx=None):
+            # if keyword arguments are given the callback
+            # function has to create a new lexer instance
+            if kwargs:
+                # XXX: cache that somehow
+                kwargs.update(lexer.options)
+                lx = lexer.__class__(**kwargs)
+            else:
+                lx = lexer
+            s = match.start()
+            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
+                yield i + s, t, v
+            if ctx:
+                ctx.pos = match.end()
+    else:
+        def callback(lexer, match, ctx=None):
+            # XXX: cache that somehow
+            kwargs.update(lexer.options)
+            lx = _other(**kwargs)
+
+            s = match.start()
+            for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs):
+                yield i + s, t, v
+            if ctx:
+                ctx.pos = match.end()
+    return callback
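The essential work of the callbacks returned by ``using`` is re-lexing the matched text with another lexer and shifting every emitted position by the start of the enclosing match. A standalone sketch, with a trivial word "lexer" standing in for the delegate:

```python
import re

# Trivial inner "lexer" for the sketch: one Name token per word.
def sub_lex(text):
    for m in re.finditer(r'\w+', text):
        yield m.start(), 'Name', m.group()

# Lex only the braced region, then offset positions into the full text,
# mirroring the `yield i + s, t, v` loop in the using() callbacks.
outer = re.search(r'\{(.*)\}', 'pre {foo bar} post')
s = outer.start(1)
shifted = [(i + s, t, v) for i, t, v in sub_lex(outer.group(1))]
# shifted: [(5, 'Name', 'foo'), (9, 'Name', 'bar')]
```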
+
+
+class RegexLexerMeta(LexerMeta):
+    """
+    Metaclass for RegexLexer, creates the self._tokens attribute from
+    self.tokens on the first instantiation.
+    """
+
+    def _process_regex(cls, regex, rflags):
+        """Preprocess the regular expression component of a token definition."""
+        return re.compile(regex, rflags).match
+
+    def _process_token(cls, token):
+        """Preprocess the token component of a token definition."""
+        assert type(token) is _TokenType or hasattr(token, '__call__'), \
+               'token type must be simple type or callable, not %r' % (token,)
+        return token
+
+    def _process_new_state(cls, new_state, unprocessed, processed):
+        """Preprocess the state transition action of a token definition."""
+        if isinstance(new_state, str):
+            # an existing state
+            if new_state == '#pop':
+                return -1
+            elif new_state in unprocessed:
+                return (new_state,)
+            elif new_state == '#push':
+                return new_state
+            elif new_state[:5] == '#pop:':
+                return -int(new_state[5:])
+            else:
+                assert False, 'unknown new state %r' % new_state
+        elif isinstance(new_state, combined):
+            # combine a new state from existing ones
+            tmp_state = '_tmp_%d' % cls._tmpname
+            cls._tmpname += 1
+            itokens = []
+            for istate in new_state:
+                assert istate != new_state, 'circular state ref %r' % istate
+                itokens.extend(cls._process_state(unprocessed,
+                                                  processed, istate))
+            processed[tmp_state] = itokens
+            return (tmp_state,)
+        elif isinstance(new_state, tuple):
+            # push more than one state
+            for istate in new_state:
+                assert (istate in unprocessed or
+                        istate in ('#pop', '#push')), \
+                       'unknown new state ' + istate
+            return new_state
+        else:
+            assert False, 'unknown new state def %r' % new_state
+
+    def _process_state(cls, unprocessed, processed, state):
+        """Preprocess a single state definition."""
+        assert type(state) is str, "wrong state name %r" % state
+        assert state[0] != '#', "invalid state name %r" % state
+        if state in processed:
+            return processed[state]
+        tokens = processed[state] = []
+        rflags = cls.flags
+        for tdef in unprocessed[state]:
+            if isinstance(tdef, include):
+                # it's a state reference
+                assert tdef != state, "circular state reference %r" % state
+                tokens.extend(cls._process_state(unprocessed, processed,
+                                                 str(tdef)))
+                continue
+
+            assert type(tdef) is tuple, "wrong rule def %r" % tdef
+
+            try:
+                rex = cls._process_regex(tdef[0], rflags)
+            except Exception as err:
+                raise ValueError("uncompilable regex %r in state %r of %r: %s" %
+                                 (tdef[0], state, cls, err))
+
+            token = cls._process_token(tdef[1])
+
+            if len(tdef) == 2:
+                new_state = None
+            else:
+                new_state = cls._process_new_state(tdef[2],
+                                                   unprocessed, processed)
+
+            tokens.append((rex, token, new_state))
+        return tokens
+
+    def process_tokendef(cls, name, tokendefs=None):
+        """Preprocess a dictionary of token definitions."""
+        processed = cls._all_tokens[name] = {}
+        tokendefs = tokendefs or cls.tokens[name]
+        for state in list(tokendefs.keys()):
+            cls._process_state(tokendefs, processed, state)
+        return processed
+
+    def __call__(cls, *args, **kwds):
+        """Instantiate cls after preprocessing its token definitions."""
+        if not hasattr(cls, '_tokens'):
+            cls._all_tokens = {}
+            cls._tmpname = 0
+            if hasattr(cls, 'token_variants') and cls.token_variants:
+                # don't process yet
+                pass
+            else:
+                cls._tokens = cls.process_tokendef('', cls.tokens)
+
+        return type.__call__(cls, *args, **kwds)
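The rule tuples this metaclass compiles drive the state-stack loop of ``RegexLexer`` below. A toy standalone replica of that loop, using string token names and a simplified rule table for illustration only (it handles push and ``'#pop'`` but omits ``'#push'``, tuples, and integer pops):

```python
import re

# Toy rule table: (regex, token, new_state); None means no transition.
rules = {
    'root': [
        (r'"', 'Punct', 'string'),    # push the 'string' state
        (r'[a-z]+', 'Name', None),
        (r'\s+', 'Text', None),
    ],
    'string': [
        (r'[^"]+', 'String', None),
        (r'"', 'Punct', '#pop'),      # closing quote pops back to 'root'
    ],
}

def lex(text):
    pos, stack, out = 0, ['root'], []
    while pos < len(text):
        for pat, tok, new in rules[stack[-1]]:
            m = re.compile(pat).match(text, pos)
            if m:
                out.append((pos, tok, m.group()))
                pos = m.end()
                if new == '#pop':
                    stack.pop()
                elif new is not None:
                    stack.append(new)
                break
        else:
            out.append((pos, 'Error', text[pos]))
            pos += 1
    return out

result = lex('say "hi"')
```

The real ``get_tokens_unprocessed`` additionally precompiles the regexes (via this metaclass), resets to ``'root'`` at unmatched newlines, and supports the full transition vocabulary.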
+
+
+class RegexLexer(Lexer, metaclass=RegexLexerMeta):
+    """
+    Base for simple stateful regular expression-based lexers.
+    Simplifies the lexing process so that you need only
+    provide a list of states and regular expressions.
+    """
+
+    #: Flags for compiling the regular expressions.
+    #: Defaults to MULTILINE.
+    flags = re.MULTILINE
+
+    #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}``
+    #:
+    #: The initial state is 'root'.
+    #: ``new_state`` can be omitted to signify no state transition.
+    #: If it is a string, the state is pushed on the stack and changed.
+    #: If it is a tuple of strings, all states are pushed on the stack and
+    #: the current state will be the topmost.
+    #: It can also be ``combined('state1', 'state2', ...)``
+    #: to signify a new, anonymous state combined from the rules of two
+    #: or more existing ones.
+    #: Furthermore, it can be '#pop' to signify going back one step in
+    #: the state stack, or '#push' to push the current state on the stack
+    #: again.
+    #:
+    #: The tuple can also be replaced with ``include('state')``, in which
+    #: case the rules from the state named by the string are included in the
+    #: current one.
+    tokens = {}
+
+    def get_tokens_unprocessed(self, text, stack=('root',)):
+        """
+        Split ``text`` into (tokentype, text) pairs.
+
+        ``stack`` is the initial stack (default: ``['root']``)
+        """
+        pos = 0
+        tokendefs = self._tokens
+        statestack = list(stack)
+        statetokens = tokendefs[statestack[-1]]
+        while 1:
+            for rexmatch, action, new_state in statetokens:
+                m = rexmatch(text, pos)
+                if m:
+                    if type(action) is _TokenType:
+                        yield pos, action, m.group()
+                    else:
+                        for item in action(self, m):
+                            yield item
+                    pos = m.end()
+                    if new_state is not None:
+                        # state transition
+                        if isinstance(new_state, tuple):
+                            for state in new_state:
+                                if state == '#pop':
+                                    statestack.pop()
+                                elif state == '#push':
+                                    statestack.append(statestack[-1])
+                                else:
+                                    statestack.append(state)
+                        elif isinstance(new_state, int):
+                            # pop
+                            del statestack[new_state:]
+                        elif new_state == '#push':
+                            statestack.append(statestack[-1])
+                        else:
+                            assert False, "wrong state def: %r" % new_state
+                        statetokens = tokendefs[statestack[-1]]
+                    break
+            else:
+                try:
+                    if text[pos] == '\n':
+                        # at EOL, reset state to "root"
+                        pos += 1
+                        statestack = ['root']
+                        statetokens = tokendefs['root']
+                        yield pos, Text, '\n'
+                        continue
+                    yield pos, Error, text[pos]
+                    pos += 1
+                except IndexError:
+                    break
+
+
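The state-transition rules documented in the ``tokens`` attribute above (state push, tuple push, ``'#pop'``, ``'#push'``) can be illustrated with a miniature standalone version of the matching loop. This is a simplified sketch with hypothetical rule and token names, not the Pygments API:

```python
import re

# Each state maps to (compiled pattern, tokentype, new_state) rules;
# new_state may be None (stay), a state name (push), or '#pop'.
RULES = {
    'root': [
        (re.compile(r'"'), 'Punct', 'string'),   # push 'string'
        (re.compile(r'\w+'), 'Word', None),
        (re.compile(r'\s+'), 'Space', None),
    ],
    'string': [
        (re.compile(r'"'), 'Punct', '#pop'),     # return to 'root'
        (re.compile(r'[^"]+'), 'Str', None),
    ],
}

def tokenize(text):
    pos, stack, out = 0, ['root'], []
    while pos < len(text):
        for rex, ttype, new_state in RULES[stack[-1]]:
            m = rex.match(text, pos)
            if m:
                out.append((pos, ttype, m.group()))
                pos = m.end()
                if new_state == '#pop':
                    stack.pop()
                elif new_state is not None:
                    stack.append(new_state)
                break
        else:
            # no rule matched: emit an error token and advance one char
            out.append((pos, 'Error', text[pos]))
            pos += 1
    return out
```

The real ``get_tokens_unprocessed`` above adds the newline reset, tuple pushes, and integer pops on top of the same loop shape.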
+class LexerContext(object):
+    """
+    A helper object that holds lexer position data.
+    """
+
+    def __init__(self, text, pos, stack=None, end=None):
+        self.text = text
+        self.pos = pos
+        self.end = end or len(text) # end=0 not supported ;-)
+        self.stack = stack or ['root']
+
+    def __repr__(self):
+        return 'LexerContext(%r, %r, %r)' % (
+            self.text, self.pos, self.stack)
+
+
+class ExtendedRegexLexer(RegexLexer):
+    """
+    A RegexLexer that uses a context object to store its state.
+    """
+
+    def get_tokens_unprocessed(self, text=None, context=None):
+        """
+        Split ``text`` into (tokentype, text) pairs.
+        If ``context`` is given, use this lexer context instead.
+        """
+        tokendefs = self._tokens
+        if not context:
+            ctx = LexerContext(text, 0)
+            statetokens = tokendefs['root']
+        else:
+            ctx = context
+            statetokens = tokendefs[ctx.stack[-1]]
+            text = ctx.text
+        while 1:
+            for rexmatch, action, new_state in statetokens:
+                m = rexmatch(text, ctx.pos, ctx.end)
+                if m:
+                    if type(action) is _TokenType:
+                        yield ctx.pos, action, m.group()
+                        ctx.pos = m.end()
+                    else:
+                        for item in action(self, m, ctx):
+                            yield item
+                        if not new_state:
+                            # altered the state stack?
+                            statetokens = tokendefs[ctx.stack[-1]]
+                    # CAUTION: callback must set ctx.pos!
+                    if new_state is not None:
+                        # state transition
+                        if isinstance(new_state, tuple):
+                            ctx.stack.extend(new_state)
+                        elif isinstance(new_state, int):
+                            # pop
+                            del ctx.stack[new_state:]
+                        elif new_state == '#push':
+                            ctx.stack.append(ctx.stack[-1])
+                        else:
+                            assert False, "wrong state def: %r" % new_state
+                        statetokens = tokendefs[ctx.stack[-1]]
+                    break
+            else:
+                try:
+                    if ctx.pos >= ctx.end:
+                        break
+                    if text[ctx.pos] == '\n':
+                        # at EOL, reset state to "root"
+                        ctx.pos += 1
+                        ctx.stack = ['root']
+                        statetokens = tokendefs['root']
+                        yield ctx.pos, Text, '\n'
+                        continue
+                    yield ctx.pos, Error, text[ctx.pos]
+                    ctx.pos += 1
+                except IndexError:
+                    break
+
+
+def do_insertions(insertions, tokens):
+    """
+    Helper for lexers which must combine the results of several
+    sublexers.
+
+    ``insertions`` is a list of ``(index, itokens)`` pairs.
+    Each ``itokens`` iterable should be inserted at position
+    ``index`` into the token stream given by the ``tokens``
+    argument.
+
+    The result is a combined token stream.
+
+    TODO: clean up the code here.
+    """
+    insertions = iter(insertions)
+    try:
+        index, itokens = next(insertions)
+    except StopIteration:
+        # no insertions
+        for item in tokens:
+            yield item
+        return
+
+    realpos = None
+    insleft = True
+
+    # iterate over the token stream where we want to insert
+    # the tokens from the insertion list.
+    for i, t, v in tokens:
+        # first iteration: store the position of the first item
+        if realpos is None:
+            realpos = i
+        oldi = 0
+        while insleft and i + len(v) >= index:
+            tmpval = v[oldi:index - i]
+            yield realpos, t, tmpval
+            realpos += len(tmpval)
+            for it_index, it_token, it_value in itokens:
+                yield realpos, it_token, it_value
+                realpos += len(it_value)
+            oldi = index - i
+            try:
+                index, itokens = next(insertions)
+            except StopIteration:
+                insleft = False
+                break  # not strictly necessary
+        yield realpos, t, v[oldi:]
+        realpos += len(v) - oldi
+
+    # leftover tokens
+    while insleft:
+        # no normal tokens, set realpos to zero
+        realpos = realpos or 0
+        for p, t, v in itokens:
+            yield realpos, t, v
+            realpos += len(v)
+        try:
+            index, itokens = next(insertions)
+        except StopIteration:
+            insleft = False
+            break  # not strictly necessary
+
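The splicing contract of ``do_insertions`` can be shown with a simplified standalone version that skips the real helper's ``realpos`` bookkeeping and yields only ``(tokentype, value)`` pairs. The helper name ``merge_insertions`` is hypothetical:

```python
def merge_insertions(insertions, tokens):
    # tokens: iterable of (index, tokentype, value)
    # insertions: list of (index, [(tokentype, value), ...]);
    # each sub-stream is spliced into the token whose span covers index.
    out = []
    ins = iter(insertions)
    nxt = next(ins, None)
    for i, t, v in tokens:
        while nxt is not None and i <= nxt[0] <= i + len(v):
            cut = nxt[0] - i          # split point inside this token
            if cut:
                out.append((t, v[:cut]))
            out.extend(nxt[1])        # spliced sub-stream
            v, i = v[cut:], nxt[0]
            nxt = next(ins, None)
        if v:
            out.append((t, v))
    return out
```

The real function additionally tracks running positions so the combined stream stays index-consistent, and drains leftover insertions after the main stream ends.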
--- a/ThirdParty/Pygments/pygments/lexers/__init__.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/lexers/__init__.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,226 +1,226 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.lexers
-    ~~~~~~~~~~~~~~~
-
-    Pygments lexers.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-import sys
-import types
-import fnmatch
-from os.path import basename
-
-from pygments.lexers._mapping import LEXERS
-from pygments.plugin import find_plugin_lexers
-from pygments.util import ClassNotFound, bytes
-
-
-__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
-           'guess_lexer'] + list(LEXERS.keys())
-
-_lexer_cache = {}
-
-
-def _load_lexers(module_name):
-    """
-    Load a lexer (and all others in the module too).
-    """
-    mod = __import__(module_name, None, None, ['__all__'])
-    for lexer_name in mod.__all__:
-        cls = getattr(mod, lexer_name)
-        _lexer_cache[cls.name] = cls
-
-
-def get_all_lexers():
-    """
-    Return a generator of tuples in the form ``(name, aliases,
-    filenames, mimetypes)`` of all know lexers.
-    """
-    for item in LEXERS.values():
-        yield item[1:]
-    for lexer in find_plugin_lexers():
-        yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes
-
-
-def find_lexer_class(name):
-    """
-    Lookup a lexer class by name. Return None if not found.
-    """
-    if name in _lexer_cache:
-        return _lexer_cache[name]
-    # lookup builtin lexers
-    for module_name, lname, aliases, _, _ in LEXERS.values():
-        if name == lname:
-            _load_lexers(module_name)
-            return _lexer_cache[name]
-    # continue with lexers from setuptools entrypoints
-    for cls in find_plugin_lexers():
-        if cls.name == name:
-            return cls
-
-
-def get_lexer_by_name(_alias, **options):
-    """
-    Get a lexer by an alias.
-    """
-    # lookup builtin lexers
-    for module_name, name, aliases, _, _ in LEXERS.values():
-        if _alias in aliases:
-            if name not in _lexer_cache:
-                _load_lexers(module_name)
-            return _lexer_cache[name](**options)
-    # continue with lexers from setuptools entrypoints
-    for cls in find_plugin_lexers():
-        if _alias in cls.aliases:
-            return cls(**options)
-    raise ClassNotFound('no lexer for alias %r found' % _alias)
-
-
-def get_lexer_for_filename(_fn, code=None, **options):
-    """
-    Get a lexer for a filename.  If multiple lexers match the filename
-    pattern, use ``analyze_text()`` to figure out which one is more
-    appropriate.
-    """
-    matches = []
-    fn = basename(_fn)
-    for modname, name, _, filenames, _ in LEXERS.values():
-        for filename in filenames:
-            if fnmatch.fnmatch(fn, filename):
-                if name not in _lexer_cache:
-                    _load_lexers(modname)
-                matches.append(_lexer_cache[name])
-    for cls in find_plugin_lexers():
-        for filename in cls.filenames:
-            if fnmatch.fnmatch(fn, filename):
-                matches.append(cls)
-
-    if sys.version_info > (3,) and isinstance(code, bytes):
-        # decode it, since all analyse_text functions expect unicode
-        code = code.decode('latin1')
-
-    def get_rating(cls):
-        # The class _always_ defines analyse_text because it's included in
-        # the Lexer class.  The default implementation returns None which
-        # gets turned into 0.0.  Run scripts/detect_missing_analyse_text.py
-        # to find lexers which need it overridden.
-        d = cls.analyse_text(code)
-        #print "Got %r from %r" % (d, cls)
-        return d
-
-    if code:
-        matches.sort(key=get_rating)
-    if matches:
-        #print "Possible lexers, after sort:", matches
-        return matches[-1](**options)
-    raise ClassNotFound('no lexer for filename %r found' % _fn)
-
-
-def get_lexer_for_mimetype(_mime, **options):
-    """
-    Get a lexer for a mimetype.
-    """
-    for modname, name, _, _, mimetypes in LEXERS.values():
-        if _mime in mimetypes:
-            if name not in _lexer_cache:
-                _load_lexers(modname)
-            return _lexer_cache[name](**options)
-    for cls in find_plugin_lexers():
-        if _mime in cls.mimetypes:
-            return cls(**options)
-    raise ClassNotFound('no lexer for mimetype %r found' % _mime)
-
-
-def _iter_lexerclasses():
-    """
-    Return an iterator over all lexer classes.
-    """
-    for module_name, name, _, _, _ in LEXERS.values():
-        if name not in _lexer_cache:
-            _load_lexers(module_name)
-        yield _lexer_cache[name]
-    for lexer in find_plugin_lexers():
-        yield lexer
-
-
-def guess_lexer_for_filename(_fn, _text, **options):
-    """
-    Lookup all lexers that handle those filenames primary (``filenames``)
-    or secondary (``alias_filenames``). Then run a text analysis for those
-    lexers and choose the best result.
-
-    usage::
-
-        >>> from pygments.lexers import guess_lexer_for_filename
-        >>> guess_lexer_for_filename('hello.html', '<%= @foo %>')
-        <pygments.lexers.templates.RhtmlLexer object at 0xb7d2f32c>
-        >>> guess_lexer_for_filename('hello.html', '<h1>{{ title|e }}</h1>')
-        <pygments.lexers.templates.HtmlDjangoLexer object at 0xb7d2f2ac>
-        >>> guess_lexer_for_filename('style.css', 'a { color: <?= $link ?> }')
-        <pygments.lexers.templates.CssPhpLexer object at 0xb7ba518c>
-    """
-    fn = basename(_fn)
-    primary = None
-    matching_lexers = set()
-    for lexer in _iter_lexerclasses():
-        for filename in lexer.filenames:
-            if fnmatch.fnmatch(fn, filename):
-                matching_lexers.add(lexer)
-                primary = lexer
-        for filename in lexer.alias_filenames:
-            if fnmatch.fnmatch(fn, filename):
-                matching_lexers.add(lexer)
-    if not matching_lexers:
-        raise ClassNotFound('no lexer for filename %r found' % fn)
-    if len(matching_lexers) == 1:
-        return matching_lexers.pop()(**options)
-    result = []
-    for lexer in matching_lexers:
-        rv = lexer.analyse_text(_text)
-        if rv == 1.0:
-            return lexer(**options)
-        result.append((rv, lexer))
-    result.sort()
-    if not result[-1][0] and primary is not None:
-        return primary(**options)
-    return result[-1][1](**options)
-
-
-def guess_lexer(_text, **options):
-    """
-    Guess a lexer by strong distinctions in the text (eg, shebang).
-    """
-    best_lexer = [0.0, None]
-    for lexer in _iter_lexerclasses():
-        rv = lexer.analyse_text(_text)
-        if rv == 1.0:
-            return lexer(**options)
-        if rv > best_lexer[0]:
-            best_lexer[:] = (rv, lexer)
-    if not best_lexer[0] or best_lexer[1] is None:
-        raise ClassNotFound('no lexer matching the text found')
-    return best_lexer[1](**options)
-
-
-class _automodule(types.ModuleType):
-    """Automatically import lexers."""
-
-    def __getattr__(self, name):
-        info = LEXERS.get(name)
-        if info:
-            _load_lexers(info[0])
-            cls = _lexer_cache[info[1]]
-            setattr(self, name, cls)
-            return cls
-        raise AttributeError(name)
-
-
-oldmod = sys.modules['pygments.lexers']
-newmod = _automodule('pygments.lexers')
-newmod.__dict__.update(oldmod.__dict__)
-sys.modules['pygments.lexers'] = newmod
-del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
+# -*- coding: utf-8 -*-
+"""
+    pygments.lexers
+    ~~~~~~~~~~~~~~~
+
+    Pygments lexers.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+import sys
+import types
+import fnmatch
+from os.path import basename
+
+from pygments.lexers._mapping import LEXERS
+from pygments.plugin import find_plugin_lexers
+from pygments.util import ClassNotFound, bytes
+
+
+__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class',
+           'guess_lexer'] + list(LEXERS.keys())
+
+_lexer_cache = {}
+
+
+def _load_lexers(module_name):
+    """
+    Load a lexer (and all others in the module too).
+    """
+    mod = __import__(module_name, None, None, ['__all__'])
+    for lexer_name in mod.__all__:
+        cls = getattr(mod, lexer_name)
+        _lexer_cache[cls.name] = cls
+
+
+def get_all_lexers():
+    """
+    Return a generator of tuples in the form ``(name, aliases,
+    filenames, mimetypes)`` of all known lexers.
+    """
+    for item in LEXERS.values():
+        yield item[1:]
+    for lexer in find_plugin_lexers():
+        yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes
+
+
+def find_lexer_class(name):
+    """
+    Lookup a lexer class by name. Return None if not found.
+    """
+    if name in _lexer_cache:
+        return _lexer_cache[name]
+    # lookup builtin lexers
+    for module_name, lname, aliases, _, _ in LEXERS.values():
+        if name == lname:
+            _load_lexers(module_name)
+            return _lexer_cache[name]
+    # continue with lexers from setuptools entrypoints
+    for cls in find_plugin_lexers():
+        if cls.name == name:
+            return cls
+
+
+def get_lexer_by_name(_alias, **options):
+    """
+    Get a lexer by an alias.
+    """
+    # lookup builtin lexers
+    for module_name, name, aliases, _, _ in LEXERS.values():
+        if _alias in aliases:
+            if name not in _lexer_cache:
+                _load_lexers(module_name)
+            return _lexer_cache[name](**options)
+    # continue with lexers from setuptools entrypoints
+    for cls in find_plugin_lexers():
+        if _alias in cls.aliases:
+            return cls(**options)
+    raise ClassNotFound('no lexer for alias %r found' % _alias)
+
+
+def get_lexer_for_filename(_fn, code=None, **options):
+    """
+    Get a lexer for a filename.  If multiple lexers match the filename
+    pattern, use ``analyse_text()`` to figure out which one is more
+    appropriate.
+    """
+    matches = []
+    fn = basename(_fn)
+    for modname, name, _, filenames, _ in LEXERS.values():
+        for filename in filenames:
+            if fnmatch.fnmatch(fn, filename):
+                if name not in _lexer_cache:
+                    _load_lexers(modname)
+                matches.append(_lexer_cache[name])
+    for cls in find_plugin_lexers():
+        for filename in cls.filenames:
+            if fnmatch.fnmatch(fn, filename):
+                matches.append(cls)
+
+    if sys.version_info > (3,) and isinstance(code, bytes):
+        # decode it, since all analyse_text functions expect unicode
+        code = code.decode('latin1')
+
+    def get_rating(cls):
+        # The class _always_ defines analyse_text because it's included in
+        # the Lexer class.  The default implementation returns None which
+        # gets turned into 0.0.  Run scripts/detect_missing_analyse_text.py
+        # to find lexers which need it overridden.
+        d = cls.analyse_text(code)
+        #print "Got %r from %r" % (d, cls)
+        return d
+
+    if code:
+        matches.sort(key=get_rating)
+    if matches:
+        #print "Possible lexers, after sort:", matches
+        return matches[-1](**options)
+    raise ClassNotFound('no lexer for filename %r found' % _fn)
+
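The selection step in ``get_lexer_for_filename`` above relies on a stable sort by rating, so that the highest-rated candidate wins and registration order breaks ties. A standalone sketch with a hypothetical helper name:

```python
def best_match(matches, code, rate):
    """Sort candidates by an analyse_text-style rating in [0.0, 1.0]
    and pick the highest; list sorting is stable, so equal ratings
    preserve the original candidate order."""
    if code:
        matches = sorted(matches, key=lambda cls: rate(cls, code))
    if not matches:
        raise LookupError('no matching candidate')
    return matches[-1]
```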
+
+def get_lexer_for_mimetype(_mime, **options):
+    """
+    Get a lexer for a mimetype.
+    """
+    for modname, name, _, _, mimetypes in LEXERS.values():
+        if _mime in mimetypes:
+            if name not in _lexer_cache:
+                _load_lexers(modname)
+            return _lexer_cache[name](**options)
+    for cls in find_plugin_lexers():
+        if _mime in cls.mimetypes:
+            return cls(**options)
+    raise ClassNotFound('no lexer for mimetype %r found' % _mime)
+
+
+def _iter_lexerclasses():
+    """
+    Return an iterator over all lexer classes.
+    """
+    for module_name, name, _, _, _ in LEXERS.values():
+        if name not in _lexer_cache:
+            _load_lexers(module_name)
+        yield _lexer_cache[name]
+    for lexer in find_plugin_lexers():
+        yield lexer
+
+
+def guess_lexer_for_filename(_fn, _text, **options):
+    """
+    Look up all lexers that handle the given filename as a primary
+    (``filenames``) or secondary (``alias_filenames``) pattern.  Then run
+    a text analysis with those lexers and choose the best result.
+
+    usage::
+
+        >>> from pygments.lexers import guess_lexer_for_filename
+        >>> guess_lexer_for_filename('hello.html', '<%= @foo %>')
+        <pygments.lexers.templates.RhtmlLexer object at 0xb7d2f32c>
+        >>> guess_lexer_for_filename('hello.html', '<h1>{{ title|e }}</h1>')
+        <pygments.lexers.templates.HtmlDjangoLexer object at 0xb7d2f2ac>
+        >>> guess_lexer_for_filename('style.css', 'a { color: <?= $link ?> }')
+        <pygments.lexers.templates.CssPhpLexer object at 0xb7ba518c>
+    """
+    fn = basename(_fn)
+    primary = None
+    matching_lexers = set()
+    for lexer in _iter_lexerclasses():
+        for filename in lexer.filenames:
+            if fnmatch.fnmatch(fn, filename):
+                matching_lexers.add(lexer)
+                primary = lexer
+        for filename in lexer.alias_filenames:
+            if fnmatch.fnmatch(fn, filename):
+                matching_lexers.add(lexer)
+    if not matching_lexers:
+        raise ClassNotFound('no lexer for filename %r found' % fn)
+    if len(matching_lexers) == 1:
+        return matching_lexers.pop()(**options)
+    result = []
+    for lexer in matching_lexers:
+        rv = lexer.analyse_text(_text)
+        if rv == 1.0:
+            return lexer(**options)
+        result.append((rv, lexer))
+    result.sort()
+    if not result[-1][0] and primary is not None:
+        return primary(**options)
+    return result[-1][1](**options)
+
+
+def guess_lexer(_text, **options):
+    """
+    Guess a lexer by strong distinctions in the text (e.g., a shebang line).
+    """
+    best_lexer = [0.0, None]
+    for lexer in _iter_lexerclasses():
+        rv = lexer.analyse_text(_text)
+        if rv == 1.0:
+            return lexer(**options)
+        if rv > best_lexer[0]:
+            best_lexer[:] = (rv, lexer)
+    if not best_lexer[0] or best_lexer[1] is None:
+        raise ClassNotFound('no lexer matching the text found')
+    return best_lexer[1](**options)
+
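The scan in ``guess_lexer`` above follows a common pattern: return early on a perfect rating, otherwise keep the best candidate seen so far. A generic sketch (hypothetical name):

```python
def guess_best(candidates, text, rate):
    """Return the first candidate rated 1.0, else the highest-rated
    one; raise if nothing scored above zero."""
    best_score, best = 0.0, None
    for cand in candidates:
        rv = rate(cand, text)
        if rv == 1.0:           # perfect match short-circuits the scan
            return cand
        if rv > best_score:
            best_score, best = rv, cand
    if best is None:
        raise LookupError('no candidate matched the text')
    return best
```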
+
+class _automodule(types.ModuleType):
+    """Automatically import lexers."""
+
+    def __getattr__(self, name):
+        info = LEXERS.get(name)
+        if info:
+            _load_lexers(info[0])
+            cls = _lexer_cache[info[1]]
+            setattr(self, name, cls)
+            return cls
+        raise AttributeError(name)
+
+
+oldmod = sys.modules['pygments.lexers']
+newmod = _automodule('pygments.lexers')
+newmod.__dict__.update(oldmod.__dict__)
+sys.modules['pygments.lexers'] = newmod
+del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types
--- a/ThirdParty/Pygments/pygments/lexers/_luabuiltins.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/lexers/_luabuiltins.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,256 +1,249 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.lexers._luabuiltins
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    This file contains the names and modules of lua functions
-    It is able to re-generate itself, but for adding new functions you
-    probably have to add some callbacks (see function module_callbacks).
-
-    Do not edit the MODULES dict by hand.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-MODULES = {'basic': ['_G',
-           '_VERSION',
-           'assert',
-           'collectgarbage',
-           'dofile',
-           'error',
-           'getfenv',
-           'getmetatable',
-           'ipairs',
-           'load',
-           'loadfile',
-           'loadstring',
-           'next',
-           'pairs',
-           'pcall',
-           'print',
-           'rawequal',
-           'rawget',
-           'rawset',
-           'select',
-           'setfenv',
-           'setmetatable',
-           'tonumber',
-           'tostring',
-           'type',
-           'unpack',
-           'xpcall'],
- 'coroutine': ['coroutine.create',
-               'coroutine.resume',
-               'coroutine.running',
-               'coroutine.status',
-               'coroutine.wrap',
-               'coroutine.yield'],
- 'debug': ['debug.debug',
-           'debug.getfenv',
-           'debug.gethook',
-           'debug.getinfo',
-           'debug.getlocal',
-           'debug.getmetatable',
-           'debug.getregistry',
-           'debug.getupvalue',
-           'debug.setfenv',
-           'debug.sethook',
-           'debug.setlocal',
-           'debug.setmetatable',
-           'debug.setupvalue',
-           'debug.traceback'],
- 'io': ['file:close',
-        'file:flush',
-        'file:lines',
-        'file:read',
-        'file:seek',
-        'file:setvbuf',
-        'file:write',
-        'io.close',
-        'io.flush',
-        'io.input',
-        'io.lines',
-        'io.open',
-        'io.output',
-        'io.popen',
-        'io.read',
-        'io.tmpfile',
-        'io.type',
-        'io.write'],
- 'math': ['math.abs',
-          'math.acos',
-          'math.asin',
-          'math.atan2',
-          'math.atan',
-          'math.ceil',
-          'math.cosh',
-          'math.cos',
-          'math.deg',
-          'math.exp',
-          'math.floor',
-          'math.fmod',
-          'math.frexp',
-          'math.huge',
-          'math.ldexp',
-          'math.log10',
-          'math.log',
-          'math.max',
-          'math.min',
-          'math.modf',
-          'math.pi',
-          'math.pow',
-          'math.rad',
-          'math.random',
-          'math.randomseed',
-          'math.sinh',
-          'math.sin',
-          'math.sqrt',
-          'math.tanh',
-          'math.tan'],
- 'modules': ['module',
-             'require',
-             'package.cpath',
-             'package.loaded',
-             'package.loadlib',
-             'package.path',
-             'package.preload',
-             'package.seeall'],
- 'os': ['os.clock',
-        'os.date',
-        'os.difftime',
-        'os.execute',
-        'os.exit',
-        'os.getenv',
-        'os.remove',
-        'os.rename',
-        'os.setlocale',
-        'os.time',
-        'os.tmpname'],
- 'string': ['string.byte',
-            'string.char',
-            'string.dump',
-            'string.find',
-            'string.format',
-            'string.gmatch',
-            'string.gsub',
-            'string.len',
-            'string.lower',
-            'string.match',
-            'string.rep',
-            'string.reverse',
-            'string.sub',
-            'string.upper'],
- 'table': ['table.concat',
-           'table.insert',
-           'table.maxn',
-           'table.remove',
-           'table.sort']}
-
-if __name__ == '__main__':
-    import re
-    import urllib.request, urllib.parse, urllib.error
-    import pprint
-
-    # you can't generally find out what module a function belongs to if you
-    # have only its name. Because of this, here are some callback functions
-    # that recognize if a gioven function belongs to a specific module
-    def module_callbacks():
-        def is_in_coroutine_module(name):
-            return name.startswith('coroutine.')
-
-        def is_in_modules_module(name):
-            if name in ['require', 'module'] or name.startswith('package'):
-                return True
-            else:
-                return False
-
-        def is_in_string_module(name):
-            return name.startswith('string.')
-
-        def is_in_table_module(name):
-            return name.startswith('table.')
-
-        def is_in_math_module(name):
-            return name.startswith('math')
-
-        def is_in_io_module(name):
-            return name.startswith('io.') or name.startswith('file:')
-
-        def is_in_os_module(name):
-            return name.startswith('os.')
-
-        def is_in_debug_module(name):
-            return name.startswith('debug.')
-
-        return {'coroutine': is_in_coroutine_module,
-                'modules': is_in_modules_module,
-                'string': is_in_string_module,
-                'table': is_in_table_module,
-                'math': is_in_math_module,
-                'io': is_in_io_module,
-                'os': is_in_os_module,
-                'debug': is_in_debug_module}
-
-
-
-    def get_newest_version():
-        f = urllib.request.urlopen('http://www.lua.org/manual/')
-        r = re.compile(r'^<A HREF="(\d\.\d)/">Lua \1</A>')
-        for line in f:
-            m = r.match(line)
-            if m is not None:
-                return m.groups()[0]
-
-    def get_lua_functions(version):
-        f = urllib.request.urlopen('http://www.lua.org/manual/%s/' % version)
-        r = re.compile(r'^<A HREF="manual.html#pdf-(.+)">\1</A>')
-        functions = []
-        for line in f:
-            m = r.match(line)
-            if m is not None:
-                functions.append(m.groups()[0])
-        return functions
-
-    def get_function_module(name):
-        for mod, cb in module_callbacks().items():
-            if cb(name):
-                return mod
-        if '.' in name:
-            return name.split('.')[0]
-        else:
-            return 'basic'
-
-    def regenerate(filename, modules):
-        f = open(filename)
-        try:
-            content = f.read()
-        finally:
-            f.close()
-
-        header = content[:content.find('MODULES = {')]
-        footer = content[content.find("if __name__ == '__main__':"):]
-
-
-        f = open(filename, 'w')
-        f.write(header)
-        f.write('MODULES = %s\n\n' % pprint.pformat(modules))
-        f.write(footer)
-        f.close()
-
-    def run():
-        version = get_newest_version()
-        print('> Downloading function index for Lua %s' % version)
-        functions = get_lua_functions(version)
-        print('> %d functions found:' % len(functions))
-
-        modules = {}
-        for full_function_name in functions:
-            print('>> %s' % full_function_name)
-            m = get_function_module(full_function_name)
-            modules.setdefault(m, []).append(full_function_name)
-
-        regenerate(__file__, modules)
-
-
-    run()
+# -*- coding: utf-8 -*-
+"""
+    pygments.lexers._luabuiltins
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+    This file contains the names and modules of Lua functions.
+    It is able to regenerate itself, but for adding new functions you
+    probably have to add some callbacks (see function module_callbacks).
+
+    Do not edit the MODULES dict by hand.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+MODULES = {'basic': ['_G',
+           '_VERSION',
+           'assert',
+           'collectgarbage',
+           'dofile',
+           'error',
+           'getfenv',
+           'getmetatable',
+           'ipairs',
+           'load',
+           'loadfile',
+           'loadstring',
+           'next',
+           'pairs',
+           'pcall',
+           'print',
+           'rawequal',
+           'rawget',
+           'rawset',
+           'select',
+           'setfenv',
+           'setmetatable',
+           'tonumber',
+           'tostring',
+           'type',
+           'unpack',
+           'xpcall'],
+ 'coroutine': ['coroutine.create',
+               'coroutine.resume',
+               'coroutine.running',
+               'coroutine.status',
+               'coroutine.wrap',
+               'coroutine.yield'],
+ 'debug': ['debug.debug',
+           'debug.getfenv',
+           'debug.gethook',
+           'debug.getinfo',
+           'debug.getlocal',
+           'debug.getmetatable',
+           'debug.getregistry',
+           'debug.getupvalue',
+           'debug.setfenv',
+           'debug.sethook',
+           'debug.setlocal',
+           'debug.setmetatable',
+           'debug.setupvalue',
+           'debug.traceback'],
+ 'io': ['io.close',
+        'io.flush',
+        'io.input',
+        'io.lines',
+        'io.open',
+        'io.output',
+        'io.popen',
+        'io.read',
+        'io.tmpfile',
+        'io.type',
+        'io.write'],
+ 'math': ['math.abs',
+          'math.acos',
+          'math.asin',
+          'math.atan2',
+          'math.atan',
+          'math.ceil',
+          'math.cosh',
+          'math.cos',
+          'math.deg',
+          'math.exp',
+          'math.floor',
+          'math.fmod',
+          'math.frexp',
+          'math.huge',
+          'math.ldexp',
+          'math.log10',
+          'math.log',
+          'math.max',
+          'math.min',
+          'math.modf',
+          'math.pi',
+          'math.pow',
+          'math.rad',
+          'math.random',
+          'math.randomseed',
+          'math.sinh',
+          'math.sin',
+          'math.sqrt',
+          'math.tanh',
+          'math.tan'],
+ 'modules': ['module',
+             'require',
+             'package.cpath',
+             'package.loaded',
+             'package.loadlib',
+             'package.path',
+             'package.preload',
+             'package.seeall'],
+ 'os': ['os.clock',
+        'os.date',
+        'os.difftime',
+        'os.execute',
+        'os.exit',
+        'os.getenv',
+        'os.remove',
+        'os.rename',
+        'os.setlocale',
+        'os.time',
+        'os.tmpname'],
+ 'string': ['string.byte',
+            'string.char',
+            'string.dump',
+            'string.find',
+            'string.format',
+            'string.gmatch',
+            'string.gsub',
+            'string.len',
+            'string.lower',
+            'string.match',
+            'string.rep',
+            'string.reverse',
+            'string.sub',
+            'string.upper'],
+ 'table': ['table.concat',
+           'table.insert',
+           'table.maxn',
+           'table.remove',
+           'table.sort']}
+
+if __name__ == '__main__':
+    import re
+    import urllib.request, urllib.parse, urllib.error
+    import pprint
+
+    # You can't generally find out what module a function belongs to if you
+    # have only its name. Because of this, here are some callback functions
+    # that recognize if a given function belongs to a specific module.
+    def module_callbacks():
+        def is_in_coroutine_module(name):
+            return name.startswith('coroutine.')
+
+        def is_in_modules_module(name):
+            return name in ['require', 'module'] or name.startswith('package')
+
+        def is_in_string_module(name):
+            return name.startswith('string.')
+
+        def is_in_table_module(name):
+            return name.startswith('table.')
+
+        def is_in_math_module(name):
+            return name.startswith('math')
+
+        def is_in_io_module(name):
+            return name.startswith('io.')
+
+        def is_in_os_module(name):
+            return name.startswith('os.')
+
+        def is_in_debug_module(name):
+            return name.startswith('debug.')
+
+        return {'coroutine': is_in_coroutine_module,
+                'modules': is_in_modules_module,
+                'string': is_in_string_module,
+                'table': is_in_table_module,
+                'math': is_in_math_module,
+                'io': is_in_io_module,
+                'os': is_in_os_module,
+                'debug': is_in_debug_module}
+
+
+
+    def get_newest_version():
+        f = urllib.request.urlopen('http://www.lua.org/manual/')
+        r = re.compile(r'^<A HREF="(\d\.\d)/">Lua \1</A>')
+        for line in f:
+            # urlopen() yields bytes under Python 3; decode before matching
+            m = r.match(line.decode('utf-8'))
+            if m is not None:
+                return m.groups()[0]
+
+    def get_lua_functions(version):
+        f = urllib.request.urlopen('http://www.lua.org/manual/%s/' % version)
+        r = re.compile(r'^<A HREF="manual.html#pdf-(.+)">\1</A>')
+        functions = []
+        for line in f:
+            # urlopen() yields bytes under Python 3; decode before matching
+            m = r.match(line.decode('utf-8'))
+            if m is not None:
+                functions.append(m.groups()[0])
+        return functions
+
+    def get_function_module(name):
+        for mod, cb in module_callbacks().items():
+            if cb(name):
+                return mod
+        if '.' in name:
+            return name.split('.')[0]
+        else:
+            return 'basic'
+
+    def regenerate(filename, modules):
+        f = open(filename)
+        try:
+            content = f.read()
+        finally:
+            f.close()
+
+        header = content[:content.find('MODULES = {')]
+        footer = content[content.find("if __name__ == '__main__':"):]
+
+
+        f = open(filename, 'w')
+        f.write(header)
+        f.write('MODULES = %s\n\n' % pprint.pformat(modules))
+        f.write(footer)
+        f.close()
+
+    def run():
+        version = get_newest_version()
+        print('> Downloading function index for Lua %s' % version)
+        functions = get_lua_functions(version)
+        print('> %d functions found:' % len(functions))
+
+        modules = {}
+        for full_function_name in functions:
+            print('>> %s' % full_function_name)
+            m = get_function_module(full_function_name)
+            modules.setdefault(m, []).append(full_function_name)
+
+        regenerate(__file__, modules)
+
+
+    run()
--- a/ThirdParty/Pygments/pygments/lexers/_mapping.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/lexers/_mapping.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,234 +1,255 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.lexers._mapping
-    ~~~~~~~~~~~~~~~~~~~~~~~~
-
-    Lexer mapping defintions. This file is generated by itself. Everytime
-    you change something on a builtin lexer defintion, run this script from
-    the lexers folder to update it.
-
-    Do not alter the LEXERS dictionary by hand.
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-LEXERS = {
-    'ABAPLexer': ('pygments.lexers.other', 'ABAP', ('abap',), ('*.abap',), ('text/x-abap',)),
-    'ActionScript3Lexer': ('pygments.lexers.web', 'ActionScript 3', ('as3', 'actionscript3'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
-    'ActionScriptLexer': ('pygments.lexers.web', 'ActionScript', ('as', 'actionscript'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
-    'AdaLexer': ('pygments.lexers.compiled', 'Ada', ('ada', 'ada95ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)),
-    'AntlrActionScriptLexer': ('pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-as', 'antlr-actionscript'), ('*.G', '*.g'), ()),
-    'AntlrCSharpLexer': ('pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()),
-    'AntlrCppLexer': ('pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()),
-    'AntlrJavaLexer': ('pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()),
-    'AntlrLexer': ('pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()),
-    'AntlrObjectiveCLexer': ('pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()),
-    'AntlrPerlLexer': ('pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()),
-    'AntlrPythonLexer': ('pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()),
-    'AntlrRubyLexer': ('pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()),
-    'ApacheConfLexer': ('pygments.lexers.text', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)),
-    'AppleScriptLexer': ('pygments.lexers.other', 'AppleScript', ('applescript',), ('*.applescript',), ()),
-    'AsymptoteLexer': ('pygments.lexers.other', 'Asymptote', ('asy', 'asymptote'), ('*.asy',), ('text/x-asymptote',)),
-    'BBCodeLexer': ('pygments.lexers.text', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
-    'BaseMakefileLexer': ('pygments.lexers.text', 'Makefile', ('basemake',), (), ()),
-    'BashLexer': ('pygments.lexers.other', 'Bash', ('bash', 'sh', 'ksh'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass'), ('application/x-sh', 'application/x-shellscript')),
-    'BashSessionLexer': ('pygments.lexers.other', 'Bash Session', ('console',), ('*.sh-session',), ('application/x-shell-session',)),
-    'BatchLexer': ('pygments.lexers.other', 'Batchfile', ('bat',), ('*.bat', '*.cmd'), ('application/x-dos-batch',)),
-    'BefungeLexer': ('pygments.lexers.other', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)),
-    'BooLexer': ('pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)),
-    'BrainfuckLexer': ('pygments.lexers.other', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)),
-    'CLexer': ('pygments.lexers.compiled', 'C', ('c',), ('*.c', '*.h'), ('text/x-chdr', 'text/x-csrc')),
-    'CMakeLexer': ('pygments.lexers.text', 'CMake', ('cmake',), ('*.cmake',), ('text/x-cmake',)),
-    'CObjdumpLexer': ('pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)),
-    'CSharpAspxLexer': ('pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
-    'CSharpLexer': ('pygments.lexers.dotnet', 'C#', ('csharp', 'c#'), ('*.cs',), ('text/x-csharp',)),
-    'CheetahHtmlLexer': ('pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire'), (), ('text/html+cheetah', 'text/html+spitfire')),
-    'CheetahJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Cheetah', ('js+cheetah', 'javascript+cheetah', 'js+spitfire', 'javascript+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')),
-    'CheetahLexer': ('pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')),
-    'CheetahXmlLexer': ('pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')),
-    'ClojureLexer': ('pygments.lexers.agile', 'Clojure', ('clojure', 'clj'), ('*.clj',), ('text/x-clojure', 'application/x-clojure')),
-    'CoffeeScriptLexer': ('pygments.lexers.web', 'CoffeeScript', ('coffee-script', 'coffeescript'), ('*.coffee',), ('text/coffeescript',)),
-    'ColdfusionHtmlLexer': ('pygments.lexers.templates', 'Coldufsion HTML', ('cfm',), ('*.cfm', '*.cfml', '*.cfc'), ('application/x-coldfusion',)),
-    'ColdfusionLexer': ('pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()),
-    'CommonLispLexer': ('pygments.lexers.functional', 'Common Lisp', ('common-lisp', 'cl'), ('*.cl', '*.lisp', '*.el'), ('text/x-common-lisp',)),
-    'CppLexer': ('pygments.lexers.compiled', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx'), ('text/x-c++hdr', 'text/x-c++src')),
-    'CppObjdumpLexer': ('pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)),
-    'CssDjangoLexer': ('pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), (), ('text/css+django', 'text/css+jinja')),
-    'CssErbLexer': ('pygments.lexers.templates', 'CSS+Ruby', ('css+erb', 'css+ruby'), (), ('text/css+ruby',)),
-    'CssGenshiLexer': ('pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)),
-    'CssLexer': ('pygments.lexers.web', 'CSS', ('css',), ('*.css',), ('text/css',)),
-    'CssPhpLexer': ('pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)),
-    'CssSmartyLexer': ('pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)),
-    'CythonLexer': ('pygments.lexers.compiled', 'Cython', ('cython', 'pyx'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')),
-    'DLexer': ('pygments.lexers.compiled', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)),
-    'DObjdumpLexer': ('pygments.lexers.asm', 'd-objdump', ('d-objdump',), ('*.d-objdump',), ('text/x-d-objdump',)),
-    'DarcsPatchLexer': ('pygments.lexers.text', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()),
-    'DebianControlLexer': ('pygments.lexers.text', 'Debian Control file', ('control',), ('control',), ()),
-    'DelphiLexer': ('pygments.lexers.compiled', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas',), ('text/x-pascal',)),
-    'DiffLexer': ('pygments.lexers.text', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')),
-    'DjangoLexer': ('pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')),
-    'DylanLexer': ('pygments.lexers.compiled', 'Dylan', ('dylan',), ('*.dylan',), ('text/x-dylan',)),
-    'ErbLexer': ('pygments.lexers.templates', 'ERB', ('erb',), (), ('application/x-ruby-templating',)),
-    'ErlangLexer': ('pygments.lexers.functional', 'Erlang', ('erlang',), ('*.erl', '*.hrl'), ('text/x-erlang',)),
-    'ErlangShellLexer': ('pygments.lexers.functional', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)),
-    'EvoqueHtmlLexer': ('pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), ('*.html',), ('text/html+evoque',)),
-    'EvoqueLexer': ('pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)),
-    'EvoqueXmlLexer': ('pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), ('*.xml',), ('application/xml+evoque',)),
-    'FelixLexer': ('pygments.lexers.compiled', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)),
-    'FortranLexer': ('pygments.lexers.compiled', 'Fortran', ('fortran',), ('*.f', '*.f90'), ('text/x-fortran',)),
-    'GLShaderLexer': ('pygments.lexers.compiled', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)),
-    'GasLexer': ('pygments.lexers.asm', 'GAS', ('gas',), ('*.s', '*.S'), ('text/x-gas',)),
-    'GenshiLexer': ('pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')),
-    'GenshiTextLexer': ('pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')),
-    'GettextLexer': ('pygments.lexers.text', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')),
-    'GherkinLexer': ('pygments.lexers.other', 'Gherkin', ('Cucumber', 'cucumber', 'Gherkin', 'gherkin'), ('*.feature',), ('text/x-gherkin',)),
-    'GnuplotLexer': ('pygments.lexers.other', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)),
-    'GoLexer': ('pygments.lexers.compiled', 'Go', ('go',), ('*.go',), ('text/x-gosrc',)),
-    'GroffLexer': ('pygments.lexers.text', 'Groff', ('groff', 'nroff', 'man'), ('*.[1234567]', '*.man'), ('application/x-troff', 'text/troff')),
-    'HamlLexer': ('pygments.lexers.web', 'Haml', ('haml', 'HAML'), ('*.haml',), ('text/x-haml',)),
-    'HaskellLexer': ('pygments.lexers.functional', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)),
-    'HaxeLexer': ('pygments.lexers.web', 'haXe', ('hx', 'haXe'), ('*.hx',), ('text/haxe',)),
-    'HtmlDjangoLexer': ('pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja'), (), ('text/html+django', 'text/html+jinja')),
-    'HtmlGenshiLexer': ('pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)),
-    'HtmlLexer': ('pygments.lexers.web', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')),
-    'HtmlPhpLexer': ('pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')),
-    'HtmlSmartyLexer': ('pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), ('text/html+smarty',)),
-    'IniLexer': ('pygments.lexers.text', 'INI', ('ini', 'cfg'), ('*.ini', '*.cfg', '*.properties'), ('text/x-ini',)),
-    'IoLexer': ('pygments.lexers.agile', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)),
-    'IrcLogsLexer': ('pygments.lexers.text', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)),
-    'JavaLexer': ('pygments.lexers.compiled', 'Java', ('java',), ('*.java',), ('text/x-java',)),
-    'JavascriptDjangoLexer': ('pygments.lexers.templates', 'JavaScript+Django/Jinja', ('js+django', 'javascript+django', 'js+jinja', 'javascript+jinja'), (), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')),
-    'JavascriptErbLexer': ('pygments.lexers.templates', 'JavaScript+Ruby', ('js+erb', 'javascript+erb', 'js+ruby', 'javascript+ruby'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')),
-    'JavascriptGenshiLexer': ('pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')),
-    'JavascriptLexer': ('pygments.lexers.web', 'JavaScript', ('js', 'javascript'), ('*.js',), ('application/x-javascript', 'text/x-javascript', 'text/javascript')),
-    'JavascriptPhpLexer': ('pygments.lexers.templates', 'JavaScript+PHP', ('js+php', 'javascript+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')),
-    'JavascriptSmartyLexer': ('pygments.lexers.templates', 'JavaScript+Smarty', ('js+smarty', 'javascript+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')),
-    'JspLexer': ('pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)),
-    'LighttpdConfLexer': ('pygments.lexers.text', 'Lighttpd configuration file', ('lighty', 'lighttpd'), (), ('text/x-lighttpd-conf',)),
-    'LiterateHaskellLexer': ('pygments.lexers.functional', 'Literate Haskell', ('lhs', 'literate-haskell'), ('*.lhs',), ('text/x-literate-haskell',)),
-    'LlvmLexer': ('pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)),
-    'LogtalkLexer': ('pygments.lexers.other', 'Logtalk', ('logtalk',), ('*.lgt',), ('text/x-logtalk',)),
-    'LuaLexer': ('pygments.lexers.agile', 'Lua', ('lua',), ('*.lua',), ('text/x-lua', 'application/x-lua')),
-    'MOOCodeLexer': ('pygments.lexers.other', 'MOOCode', ('moocode',), ('*.moo',), ('text/x-moocode',)),
-    'MakefileLexer': ('pygments.lexers.text', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)),
-    'MakoCssLexer': ('pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)),
-    'MakoHtmlLexer': ('pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)),
-    'MakoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Mako', ('js+mako', 'javascript+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')),
-    'MakoLexer': ('pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)),
-    'MakoXmlLexer': ('pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)),
-    'MatlabLexer': ('pygments.lexers.math', 'Matlab', ('matlab', 'octave'), ('*.m',), ('text/matlab',)),
-    'MatlabSessionLexer': ('pygments.lexers.math', 'Matlab session', ('matlabsession',), (), ()),
-    'MiniDLexer': ('pygments.lexers.agile', 'MiniD', ('minid',), ('*.md',), ('text/x-minidsrc',)),
-    'ModelicaLexer': ('pygments.lexers.other', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)),
-    'Modula2Lexer': ('pygments.lexers.compiled', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)),
-    'MoinWikiLexer': ('pygments.lexers.text', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)),
-    'MuPADLexer': ('pygments.lexers.math', 'MuPAD', ('mupad',), ('*.mu',), ()),
-    'MxmlLexer': ('pygments.lexers.web', 'MXML', ('mxml',), ('*.mxml',), ()),
-    'MySqlLexer': ('pygments.lexers.other', 'MySQL', ('mysql',), (), ('text/x-mysql',)),
-    'MyghtyCssLexer': ('pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)),
-    'MyghtyHtmlLexer': ('pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)),
-    'MyghtyJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Myghty', ('js+myghty', 'javascript+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')),
-    'MyghtyLexer': ('pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)),
-    'MyghtyXmlLexer': ('pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)),
-    'NasmLexer': ('pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM'), ('text/x-nasm',)),
-    'NewspeakLexer': ('pygments.lexers.other', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)),
-    'NginxConfLexer': ('pygments.lexers.text', 'Nginx configuration file', ('nginx',), (), ('text/x-nginx-conf',)),
-    'NumPyLexer': ('pygments.lexers.math', 'NumPy', ('numpy',), (), ()),
-    'ObjdumpLexer': ('pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)),
-    'ObjectiveCLexer': ('pygments.lexers.compiled', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m',), ('text/x-objective-c',)),
-    'ObjectiveJLexer': ('pygments.lexers.web', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)),
-    'OcamlLexer': ('pygments.lexers.compiled', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)),
-    'OcamlLexer': ('pygments.lexers.functional', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)),
-    'OocLexer': ('pygments.lexers.compiled', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)),
-    'PerlLexer': ('pygments.lexers.agile', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm'), ('text/x-perl', 'application/x-perl')),
-    'PhpLexer': ('pygments.lexers.web', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]'), ('text/x-php',)),
-    'PovrayLexer': ('pygments.lexers.other', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)),
-    'PrologLexer': ('pygments.lexers.compiled', 'Prolog', ('prolog',), ('*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)),
-    'Python3Lexer': ('pygments.lexers.agile', 'Python 3', ('python3', 'py3'), (), ('text/x-python3', 'application/x-python3')),
-    'Python3TracebackLexer': ('pygments.lexers.agile', 'Python 3.0 Traceback', ('py3tb',), ('*.py3tb',), ('text/x-python3-traceback',)),
-    'PythonConsoleLexer': ('pygments.lexers.agile', 'Python console session', ('pycon',), (), ('text/x-python-doctest',)),
-    'PythonLexer': ('pygments.lexers.agile', 'Python', ('python', 'py'), ('*.py', '*.pyw', '*.sc', 'SConstruct', 'SConscript', '*.tac'), ('text/x-python', 'application/x-python')),
-    'PythonTracebackLexer': ('pygments.lexers.agile', 'Python Traceback', ('pytb',), ('*.pytb',), ('text/x-python-traceback',)),
-    'RConsoleLexer': ('pygments.lexers.math', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()),
-    'RagelCLexer': ('pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()),
-    'RagelCppLexer': ('pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()),
-    'RagelDLexer': ('pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()),
-    'RagelEmbeddedLexer': ('pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()),
-    'RagelJavaLexer': ('pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()),
-    'RagelLexer': ('pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()),
-    'RagelObjectiveCLexer': ('pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()),
-    'RagelRubyLexer': ('pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()),
-    'RawTokenLexer': ('pygments.lexers.special', 'Raw token data', ('raw',), (), ('application/x-pygments-tokens',)),
-    'RebolLexer': ('pygments.lexers.other', 'REBOL', ('rebol',), ('*.r', '*.r3'), ('text/x-rebol',)),
-    'RedcodeLexer': ('pygments.lexers.other', 'Redcode', ('redcode',), ('*.cw',), ()),
-    'RhtmlLexer': ('pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)),
-    'RstLexer': ('pygments.lexers.text', 'reStructuredText', ('rst', 'rest', 'restructuredtext'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')),
-    'RubyConsoleLexer': ('pygments.lexers.agile', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)),
-    'RubyLexer': ('pygments.lexers.agile', 'Ruby', ('rb', 'ruby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx'), ('text/x-ruby', 'application/x-ruby')),
-    'SLexer': ('pygments.lexers.math', 'S', ('splus', 's', 'r'), ('*.S', '*.R'), ('text/S-plus', 'text/S', 'text/R')),
-    'SassLexer': ('pygments.lexers.web', 'Sass', ('sass', 'SASS'), ('*.sass',), ('text/x-sass',)),
-    'ScalaLexer': ('pygments.lexers.compiled', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)),
-    'SchemeLexer': ('pygments.lexers.functional', 'Scheme', ('scheme', 'scm'), ('*.scm',), ('text/x-scheme', 'application/x-scheme')),
-    'SmalltalkLexer': ('pygments.lexers.other', 'Smalltalk', ('smalltalk', 'squeak'), ('*.st',), ('text/x-smalltalk',)),
-    'SmartyLexer': ('pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)),
-    'SourcesListLexer': ('pygments.lexers.text', 'Debian Sourcelist', ('sourceslist', 'sources.list'), ('sources.list',), ()),
-    'SqlLexer': ('pygments.lexers.other', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)),
-    'SqliteConsoleLexer': ('pygments.lexers.other', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)),
-    'SquidConfLexer': ('pygments.lexers.text', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)),
-    'TclLexer': ('pygments.lexers.agile', 'Tcl', ('tcl',), ('*.tcl',), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')),
-    'TcshLexer': ('pygments.lexers.other', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)),
-    'TexLexer': ('pygments.lexers.text', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')),
-    'TextLexer': ('pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)),
-    'ValaLexer': ('pygments.lexers.compiled', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)),
-    'VbNetAspxLexer': ('pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
-    'VbNetLexer': ('pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')),
-    'VimLexer': ('pygments.lexers.text', 'VimL', ('vim',), ('*.vim', '.vimrc'), ('text/x-vim',)),
-    'XmlDjangoLexer': ('pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), (), ('application/xml+django', 'application/xml+jinja')),
-    'XmlErbLexer': ('pygments.lexers.templates', 'XML+Ruby', ('xml+erb', 'xml+ruby'), (), ('application/xml+ruby',)),
-    'XmlLexer': ('pygments.lexers.web', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml', 'application/xsl+xml', 'application/xslt+xml')),
-    'XmlPhpLexer': ('pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)),
-    'XmlSmartyLexer': ('pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)),
-    'XsltLexer': ('pygments.lexers.web', 'XSLT', ('xslt',), ('*.xsl', '*.xslt'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml', 'application/xsl+xml', 'application/xslt+xml')),
-    'YamlLexer': ('pygments.lexers.text', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',))
-}
-
-if __name__ == '__main__':
-    import sys
-    import os
-
-    # lookup lexers
-    found_lexers = []
-    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
-    for filename in os.listdir('.'):
-        if filename.endswith('.py') and not filename.startswith('_'):
-            module_name = 'pygments.lexers.%s' % filename[:-3]
-            print(module_name)
-            module = __import__(module_name, None, None, [''])
-            for lexer_name in module.__all__:
-                lexer = getattr(module, lexer_name)
-                found_lexers.append(
-                    '%r: %r' % (lexer_name,
-                                (module_name,
-                                 lexer.name,
-                                 tuple(lexer.aliases),
-                                 tuple(lexer.filenames),
-                                 tuple(lexer.mimetypes))))
-    # sort them, that should make the diff files for svn smaller
-    found_lexers.sort()
-
-    # extract useful sourcecode from this file
-    f = open(__file__)
-    try:
-        content = f.read()
-    finally:
-        f.close()
-    header = content[:content.find('LEXERS = {')]
-    footer = content[content.find("if __name__ == '__main__':"):]
-
-    # write new file
-    f = open(__file__, 'w')
-    f.write(header)
-    f.write('LEXERS = {\n    %s\n}\n\n' % ',\n    '.join(found_lexers))
-    f.write(footer)
-    f.close()
+# -*- coding: utf-8 -*-
+"""
+    pygments.lexers._mapping
+    ~~~~~~~~~~~~~~~~~~~~~~~~
+
+    Lexer mapping definitions. This file is generated by itself. Every time
+    you change a builtin lexer definition, run this script from the lexers
+    folder to update it.
+
+    Do not alter the LEXERS dictionary by hand.
+
+    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+LEXERS = {
+    'ABAPLexer': ('pygments.lexers.other', 'ABAP', ('abap',), ('*.abap',), ('text/x-abap',)),
+    'ActionScript3Lexer': ('pygments.lexers.web', 'ActionScript 3', ('as3', 'actionscript3'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
+    'ActionScriptLexer': ('pygments.lexers.web', 'ActionScript', ('as', 'actionscript'), ('*.as',), ('application/x-actionscript', 'text/x-actionscript', 'text/actionscript')),
+    'AdaLexer': ('pygments.lexers.compiled', 'Ada', ('ada', 'ada95ada2005'), ('*.adb', '*.ads', '*.ada'), ('text/x-ada',)),
+    'AntlrActionScriptLexer': ('pygments.lexers.parsers', 'ANTLR With ActionScript Target', ('antlr-as', 'antlr-actionscript'), ('*.G', '*.g'), ()),
+    'AntlrCSharpLexer': ('pygments.lexers.parsers', 'ANTLR With C# Target', ('antlr-csharp', 'antlr-c#'), ('*.G', '*.g'), ()),
+    'AntlrCppLexer': ('pygments.lexers.parsers', 'ANTLR With CPP Target', ('antlr-cpp',), ('*.G', '*.g'), ()),
+    'AntlrJavaLexer': ('pygments.lexers.parsers', 'ANTLR With Java Target', ('antlr-java',), ('*.G', '*.g'), ()),
+    'AntlrLexer': ('pygments.lexers.parsers', 'ANTLR', ('antlr',), (), ()),
+    'AntlrObjectiveCLexer': ('pygments.lexers.parsers', 'ANTLR With ObjectiveC Target', ('antlr-objc',), ('*.G', '*.g'), ()),
+    'AntlrPerlLexer': ('pygments.lexers.parsers', 'ANTLR With Perl Target', ('antlr-perl',), ('*.G', '*.g'), ()),
+    'AntlrPythonLexer': ('pygments.lexers.parsers', 'ANTLR With Python Target', ('antlr-python',), ('*.G', '*.g'), ()),
+    'AntlrRubyLexer': ('pygments.lexers.parsers', 'ANTLR With Ruby Target', ('antlr-ruby', 'antlr-rb'), ('*.G', '*.g'), ()),
+    'ApacheConfLexer': ('pygments.lexers.text', 'ApacheConf', ('apacheconf', 'aconf', 'apache'), ('.htaccess', 'apache.conf', 'apache2.conf'), ('text/x-apacheconf',)),
+    'AppleScriptLexer': ('pygments.lexers.other', 'AppleScript', ('applescript',), ('*.applescript',), ()),
+    'AsymptoteLexer': ('pygments.lexers.other', 'Asymptote', ('asy', 'asymptote'), ('*.asy',), ('text/x-asymptote',)),
+    'AutohotkeyLexer': ('pygments.lexers.other', 'autohotkey', ('ahk',), ('*.ahk', '*.ahkl'), ('text/x-autohotkey',)),
+    'BBCodeLexer': ('pygments.lexers.text', 'BBCode', ('bbcode',), (), ('text/x-bbcode',)),
+    'BaseMakefileLexer': ('pygments.lexers.text', 'Makefile', ('basemake',), (), ()),
+    'BashLexer': ('pygments.lexers.other', 'Bash', ('bash', 'sh', 'ksh'), ('*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass'), ('application/x-sh', 'application/x-shellscript')),
+    'BashSessionLexer': ('pygments.lexers.other', 'Bash Session', ('console',), ('*.sh-session',), ('application/x-shell-session',)),
+    'BatchLexer': ('pygments.lexers.other', 'Batchfile', ('bat',), ('*.bat', '*.cmd'), ('application/x-dos-batch',)),
+    'BefungeLexer': ('pygments.lexers.other', 'Befunge', ('befunge',), ('*.befunge',), ('application/x-befunge',)),
+    'BlitzMaxLexer': ('pygments.lexers.compiled', 'BlitzMax', ('blitzmax', 'bmax'), ('*.bmx',), ('text/x-bmx',)),
+    'BooLexer': ('pygments.lexers.dotnet', 'Boo', ('boo',), ('*.boo',), ('text/x-boo',)),
+    'BrainfuckLexer': ('pygments.lexers.other', 'Brainfuck', ('brainfuck', 'bf'), ('*.bf', '*.b'), ('application/x-brainfuck',)),
+    'CLexer': ('pygments.lexers.compiled', 'C', ('c',), ('*.c', '*.h'), ('text/x-chdr', 'text/x-csrc')),
+    'CMakeLexer': ('pygments.lexers.text', 'CMake', ('cmake',), ('*.cmake', 'CMakeLists.txt'), ('text/x-cmake',)),
+    'CObjdumpLexer': ('pygments.lexers.asm', 'c-objdump', ('c-objdump',), ('*.c-objdump',), ('text/x-c-objdump',)),
+    'CSharpAspxLexer': ('pygments.lexers.dotnet', 'aspx-cs', ('aspx-cs',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
+    'CSharpLexer': ('pygments.lexers.dotnet', 'C#', ('csharp', 'c#'), ('*.cs',), ('text/x-csharp',)),
+    'CheetahHtmlLexer': ('pygments.lexers.templates', 'HTML+Cheetah', ('html+cheetah', 'html+spitfire'), (), ('text/html+cheetah', 'text/html+spitfire')),
+    'CheetahJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Cheetah', ('js+cheetah', 'javascript+cheetah', 'js+spitfire', 'javascript+spitfire'), (), ('application/x-javascript+cheetah', 'text/x-javascript+cheetah', 'text/javascript+cheetah', 'application/x-javascript+spitfire', 'text/x-javascript+spitfire', 'text/javascript+spitfire')),
+    'CheetahLexer': ('pygments.lexers.templates', 'Cheetah', ('cheetah', 'spitfire'), ('*.tmpl', '*.spt'), ('application/x-cheetah', 'application/x-spitfire')),
+    'CheetahXmlLexer': ('pygments.lexers.templates', 'XML+Cheetah', ('xml+cheetah', 'xml+spitfire'), (), ('application/xml+cheetah', 'application/xml+spitfire')),
+    'ClojureLexer': ('pygments.lexers.agile', 'Clojure', ('clojure', 'clj'), ('*.clj',), ('text/x-clojure', 'application/x-clojure')),
+    'CoffeeScriptLexer': ('pygments.lexers.web', 'CoffeeScript', ('coffee-script', 'coffeescript'), ('*.coffee',), ('text/coffeescript',)),
+    'ColdfusionHtmlLexer': ('pygments.lexers.templates', 'Coldfusion HTML', ('cfm',), ('*.cfm', '*.cfml', '*.cfc'), ('application/x-coldfusion',)),
+    'ColdfusionLexer': ('pygments.lexers.templates', 'cfstatement', ('cfs',), (), ()),
+    'CommonLispLexer': ('pygments.lexers.functional', 'Common Lisp', ('common-lisp', 'cl'), ('*.cl', '*.lisp', '*.el'), ('text/x-common-lisp',)),
+    'CppLexer': ('pygments.lexers.compiled', 'C++', ('cpp', 'c++'), ('*.cpp', '*.hpp', '*.c++', '*.h++', '*.cc', '*.hh', '*.cxx', '*.hxx'), ('text/x-c++hdr', 'text/x-c++src')),
+    'CppObjdumpLexer': ('pygments.lexers.asm', 'cpp-objdump', ('cpp-objdump', 'c++-objdumb', 'cxx-objdump'), ('*.cpp-objdump', '*.c++-objdump', '*.cxx-objdump'), ('text/x-cpp-objdump',)),
+    'CssDjangoLexer': ('pygments.lexers.templates', 'CSS+Django/Jinja', ('css+django', 'css+jinja'), (), ('text/css+django', 'text/css+jinja')),
+    'CssErbLexer': ('pygments.lexers.templates', 'CSS+Ruby', ('css+erb', 'css+ruby'), (), ('text/css+ruby',)),
+    'CssGenshiLexer': ('pygments.lexers.templates', 'CSS+Genshi Text', ('css+genshitext', 'css+genshi'), (), ('text/css+genshi',)),
+    'CssLexer': ('pygments.lexers.web', 'CSS', ('css',), ('*.css',), ('text/css',)),
+    'CssPhpLexer': ('pygments.lexers.templates', 'CSS+PHP', ('css+php',), (), ('text/css+php',)),
+    'CssSmartyLexer': ('pygments.lexers.templates', 'CSS+Smarty', ('css+smarty',), (), ('text/css+smarty',)),
+    'CythonLexer': ('pygments.lexers.compiled', 'Cython', ('cython', 'pyx'), ('*.pyx', '*.pxd', '*.pxi'), ('text/x-cython', 'application/x-cython')),
+    'DLexer': ('pygments.lexers.compiled', 'D', ('d',), ('*.d', '*.di'), ('text/x-dsrc',)),
+    'DObjdumpLexer': ('pygments.lexers.asm', 'd-objdump', ('d-objdump',), ('*.d-objdump',), ('text/x-d-objdump',)),
+    'DarcsPatchLexer': ('pygments.lexers.text', 'Darcs Patch', ('dpatch',), ('*.dpatch', '*.darcspatch'), ()),
+    'DebianControlLexer': ('pygments.lexers.text', 'Debian Control file', ('control',), ('control',), ()),
+    'DelphiLexer': ('pygments.lexers.compiled', 'Delphi', ('delphi', 'pas', 'pascal', 'objectpascal'), ('*.pas',), ('text/x-pascal',)),
+    'DiffLexer': ('pygments.lexers.text', 'Diff', ('diff', 'udiff'), ('*.diff', '*.patch'), ('text/x-diff', 'text/x-patch')),
+    'DjangoLexer': ('pygments.lexers.templates', 'Django/Jinja', ('django', 'jinja'), (), ('application/x-django-templating', 'application/x-jinja')),
+    'DuelLexer': ('pygments.lexers.web', 'Duel', ('duel', 'Duel Engine', 'Duel View', 'JBST', 'jbst', 'JsonML+BST'), ('*.duel', '*.jbst'), ('text/x-duel', 'text/x-jbst')),
+    'DylanLexer': ('pygments.lexers.compiled', 'Dylan', ('dylan',), ('*.dylan', '*.dyl'), ('text/x-dylan',)),
+    'ErbLexer': ('pygments.lexers.templates', 'ERB', ('erb',), (), ('application/x-ruby-templating',)),
+    'ErlangLexer': ('pygments.lexers.functional', 'Erlang', ('erlang',), ('*.erl', '*.hrl'), ('text/x-erlang',)),
+    'ErlangShellLexer': ('pygments.lexers.functional', 'Erlang erl session', ('erl',), ('*.erl-sh',), ('text/x-erl-shellsession',)),
+    'EvoqueHtmlLexer': ('pygments.lexers.templates', 'HTML+Evoque', ('html+evoque',), ('*.html',), ('text/html+evoque',)),
+    'EvoqueLexer': ('pygments.lexers.templates', 'Evoque', ('evoque',), ('*.evoque',), ('application/x-evoque',)),
+    'EvoqueXmlLexer': ('pygments.lexers.templates', 'XML+Evoque', ('xml+evoque',), ('*.xml',), ('application/xml+evoque',)),
+    'FactorLexer': ('pygments.lexers.agile', 'Factor', ('factor',), ('*.factor',), ('text/x-factor',)),
+    'FelixLexer': ('pygments.lexers.compiled', 'Felix', ('felix', 'flx'), ('*.flx', '*.flxh'), ('text/x-felix',)),
+    'FortranLexer': ('pygments.lexers.compiled', 'Fortran', ('fortran',), ('*.f', '*.f90'), ('text/x-fortran',)),
+    'GLShaderLexer': ('pygments.lexers.compiled', 'GLSL', ('glsl',), ('*.vert', '*.frag', '*.geo'), ('text/x-glslsrc',)),
+    'GasLexer': ('pygments.lexers.asm', 'GAS', ('gas',), ('*.s', '*.S'), ('text/x-gas',)),
+    'GenshiLexer': ('pygments.lexers.templates', 'Genshi', ('genshi', 'kid', 'xml+genshi', 'xml+kid'), ('*.kid',), ('application/x-genshi', 'application/x-kid')),
+    'GenshiTextLexer': ('pygments.lexers.templates', 'Genshi Text', ('genshitext',), (), ('application/x-genshi-text', 'text/x-genshi')),
+    'GettextLexer': ('pygments.lexers.text', 'Gettext Catalog', ('pot', 'po'), ('*.pot', '*.po'), ('application/x-gettext', 'text/x-gettext', 'text/gettext')),
+    'GherkinLexer': ('pygments.lexers.other', 'Gherkin', ('Cucumber', 'cucumber', 'Gherkin', 'gherkin'), ('*.feature',), ('text/x-gherkin',)),
+    'GnuplotLexer': ('pygments.lexers.other', 'Gnuplot', ('gnuplot',), ('*.plot', '*.plt'), ('text/x-gnuplot',)),
+    'GoLexer': ('pygments.lexers.compiled', 'Go', ('go',), ('*.go',), ('text/x-gosrc',)),
+    'GoodDataCLLexer': ('pygments.lexers.other', 'GoodData-CL', ('gooddata-cl',), ('*.gdc',), ('text/x-gooddata-cl',)),
+    'GroffLexer': ('pygments.lexers.text', 'Groff', ('groff', 'nroff', 'man'), ('*.[1234567]', '*.man'), ('application/x-troff', 'text/troff')),
+    'HamlLexer': ('pygments.lexers.web', 'Haml', ('haml', 'HAML'), ('*.haml',), ('text/x-haml',)),
+    'HaskellLexer': ('pygments.lexers.functional', 'Haskell', ('haskell', 'hs'), ('*.hs',), ('text/x-haskell',)),
+    'HaxeLexer': ('pygments.lexers.web', 'haXe', ('hx', 'haXe'), ('*.hx',), ('text/haxe',)),
+    'HtmlDjangoLexer': ('pygments.lexers.templates', 'HTML+Django/Jinja', ('html+django', 'html+jinja'), (), ('text/html+django', 'text/html+jinja')),
+    'HtmlGenshiLexer': ('pygments.lexers.templates', 'HTML+Genshi', ('html+genshi', 'html+kid'), (), ('text/html+genshi',)),
+    'HtmlLexer': ('pygments.lexers.web', 'HTML', ('html',), ('*.html', '*.htm', '*.xhtml', '*.xslt'), ('text/html', 'application/xhtml+xml')),
+    'HtmlPhpLexer': ('pygments.lexers.templates', 'HTML+PHP', ('html+php',), ('*.phtml',), ('application/x-php', 'application/x-httpd-php', 'application/x-httpd-php3', 'application/x-httpd-php4', 'application/x-httpd-php5')),
+    'HtmlSmartyLexer': ('pygments.lexers.templates', 'HTML+Smarty', ('html+smarty',), (), ('text/html+smarty',)),
+    'HybrisLexer': ('pygments.lexers.other', 'Hybris', ('hybris', 'hy'), ('*.hy', '*.hyb'), ('text/x-hybris', 'application/x-hybris')),
+    'IniLexer': ('pygments.lexers.text', 'INI', ('ini', 'cfg'), ('*.ini', '*.cfg'), ('text/x-ini',)),
+    'IoLexer': ('pygments.lexers.agile', 'Io', ('io',), ('*.io',), ('text/x-iosrc',)),
+    'IokeLexer': ('pygments.lexers.agile', 'Ioke', ('ioke', 'ik'), ('*.ik',), ('text/x-iokesrc',)),
+    'IrcLogsLexer': ('pygments.lexers.text', 'IRC logs', ('irc',), ('*.weechatlog',), ('text/x-irclog',)),
+    'JadeLexer': ('pygments.lexers.web', 'Jade', ('jade', 'JADE'), ('*.jade',), ('text/x-jade',)),
+    'JavaLexer': ('pygments.lexers.compiled', 'Java', ('java',), ('*.java',), ('text/x-java',)),
+    'JavascriptDjangoLexer': ('pygments.lexers.templates', 'JavaScript+Django/Jinja', ('js+django', 'javascript+django', 'js+jinja', 'javascript+jinja'), (), ('application/x-javascript+django', 'application/x-javascript+jinja', 'text/x-javascript+django', 'text/x-javascript+jinja', 'text/javascript+django', 'text/javascript+jinja')),
+    'JavascriptErbLexer': ('pygments.lexers.templates', 'JavaScript+Ruby', ('js+erb', 'javascript+erb', 'js+ruby', 'javascript+ruby'), (), ('application/x-javascript+ruby', 'text/x-javascript+ruby', 'text/javascript+ruby')),
+    'JavascriptGenshiLexer': ('pygments.lexers.templates', 'JavaScript+Genshi Text', ('js+genshitext', 'js+genshi', 'javascript+genshitext', 'javascript+genshi'), (), ('application/x-javascript+genshi', 'text/x-javascript+genshi', 'text/javascript+genshi')),
+    'JavascriptLexer': ('pygments.lexers.web', 'JavaScript', ('js', 'javascript'), ('*.js',), ('application/javascript', 'application/x-javascript', 'text/x-javascript', 'text/javascript')),
+    'JavascriptPhpLexer': ('pygments.lexers.templates', 'JavaScript+PHP', ('js+php', 'javascript+php'), (), ('application/x-javascript+php', 'text/x-javascript+php', 'text/javascript+php')),
+    'JavascriptSmartyLexer': ('pygments.lexers.templates', 'JavaScript+Smarty', ('js+smarty', 'javascript+smarty'), (), ('application/x-javascript+smarty', 'text/x-javascript+smarty', 'text/javascript+smarty')),
+    'JspLexer': ('pygments.lexers.templates', 'Java Server Page', ('jsp',), ('*.jsp',), ('application/x-jsp',)),
+    'LighttpdConfLexer': ('pygments.lexers.text', 'Lighttpd configuration file', ('lighty', 'lighttpd'), (), ('text/x-lighttpd-conf',)),
+    'LiterateHaskellLexer': ('pygments.lexers.functional', 'Literate Haskell', ('lhs', 'literate-haskell'), ('*.lhs',), ('text/x-literate-haskell',)),
+    'LlvmLexer': ('pygments.lexers.asm', 'LLVM', ('llvm',), ('*.ll',), ('text/x-llvm',)),
+    'LogtalkLexer': ('pygments.lexers.other', 'Logtalk', ('logtalk',), ('*.lgt',), ('text/x-logtalk',)),
+    'LuaLexer': ('pygments.lexers.agile', 'Lua', ('lua',), ('*.lua', '*.wlua'), ('text/x-lua', 'application/x-lua')),
+    'MOOCodeLexer': ('pygments.lexers.other', 'MOOCode', ('moocode',), ('*.moo',), ('text/x-moocode',)),
+    'MakefileLexer': ('pygments.lexers.text', 'Makefile', ('make', 'makefile', 'mf', 'bsdmake'), ('*.mak', 'Makefile', 'makefile', 'Makefile.*', 'GNUmakefile'), ('text/x-makefile',)),
+    'MakoCssLexer': ('pygments.lexers.templates', 'CSS+Mako', ('css+mako',), (), ('text/css+mako',)),
+    'MakoHtmlLexer': ('pygments.lexers.templates', 'HTML+Mako', ('html+mako',), (), ('text/html+mako',)),
+    'MakoJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Mako', ('js+mako', 'javascript+mako'), (), ('application/x-javascript+mako', 'text/x-javascript+mako', 'text/javascript+mako')),
+    'MakoLexer': ('pygments.lexers.templates', 'Mako', ('mako',), ('*.mao',), ('application/x-mako',)),
+    'MakoXmlLexer': ('pygments.lexers.templates', 'XML+Mako', ('xml+mako',), (), ('application/xml+mako',)),
+    'MaqlLexer': ('pygments.lexers.other', 'MAQL', ('maql',), ('*.maql',), ('text/x-gooddata-maql', 'application/x-gooddata-maql')),
+    'MasonLexer': ('pygments.lexers.templates', 'Mason', ('mason',), ('*.m', '*.mhtml', '*.mc', '*.mi', 'autohandler', 'dhandler'), ('application/x-mason',)),
+    'MatlabLexer': ('pygments.lexers.math', 'Matlab', ('matlab', 'octave'), ('*.m',), ('text/matlab',)),
+    'MatlabSessionLexer': ('pygments.lexers.math', 'Matlab session', ('matlabsession',), (), ()),
+    'MiniDLexer': ('pygments.lexers.agile', 'MiniD', ('minid',), ('*.md',), ('text/x-minidsrc',)),
+    'ModelicaLexer': ('pygments.lexers.other', 'Modelica', ('modelica',), ('*.mo',), ('text/x-modelica',)),
+    'Modula2Lexer': ('pygments.lexers.compiled', 'Modula-2', ('modula2', 'm2'), ('*.def', '*.mod'), ('text/x-modula2',)),
+    'MoinWikiLexer': ('pygments.lexers.text', 'MoinMoin/Trac Wiki markup', ('trac-wiki', 'moin'), (), ('text/x-trac-wiki',)),
+    'MuPADLexer': ('pygments.lexers.math', 'MuPAD', ('mupad',), ('*.mu',), ()),
+    'MxmlLexer': ('pygments.lexers.web', 'MXML', ('mxml',), ('*.mxml',), ()),
+    'MySqlLexer': ('pygments.lexers.other', 'MySQL', ('mysql',), (), ('text/x-mysql',)),
+    'MyghtyCssLexer': ('pygments.lexers.templates', 'CSS+Myghty', ('css+myghty',), (), ('text/css+myghty',)),
+    'MyghtyHtmlLexer': ('pygments.lexers.templates', 'HTML+Myghty', ('html+myghty',), (), ('text/html+myghty',)),
+    'MyghtyJavascriptLexer': ('pygments.lexers.templates', 'JavaScript+Myghty', ('js+myghty', 'javascript+myghty'), (), ('application/x-javascript+myghty', 'text/x-javascript+myghty', 'text/javascript+mygthy')),
+    'MyghtyLexer': ('pygments.lexers.templates', 'Myghty', ('myghty',), ('*.myt', 'autodelegate'), ('application/x-myghty',)),
+    'MyghtyXmlLexer': ('pygments.lexers.templates', 'XML+Myghty', ('xml+myghty',), (), ('application/xml+myghty',)),
+    'NasmLexer': ('pygments.lexers.asm', 'NASM', ('nasm',), ('*.asm', '*.ASM'), ('text/x-nasm',)),
+    'NewspeakLexer': ('pygments.lexers.other', 'Newspeak', ('newspeak',), ('*.ns2',), ('text/x-newspeak',)),
+    'NginxConfLexer': ('pygments.lexers.text', 'Nginx configuration file', ('nginx',), (), ('text/x-nginx-conf',)),
+    'NumPyLexer': ('pygments.lexers.math', 'NumPy', ('numpy',), (), ()),
+    'ObjdumpLexer': ('pygments.lexers.asm', 'objdump', ('objdump',), ('*.objdump',), ('text/x-objdump',)),
+    'ObjectiveCLexer': ('pygments.lexers.compiled', 'Objective-C', ('objective-c', 'objectivec', 'obj-c', 'objc'), ('*.m',), ('text/x-objective-c',)),
+    'ObjectiveJLexer': ('pygments.lexers.web', 'Objective-J', ('objective-j', 'objectivej', 'obj-j', 'objj'), ('*.j',), ('text/x-objective-j',)),
+    'OcamlLexer': ('pygments.lexers.functional', 'OCaml', ('ocaml',), ('*.ml', '*.mli', '*.mll', '*.mly'), ('text/x-ocaml',)),
+    'OocLexer': ('pygments.lexers.compiled', 'Ooc', ('ooc',), ('*.ooc',), ('text/x-ooc',)),
+    'PerlLexer': ('pygments.lexers.agile', 'Perl', ('perl', 'pl'), ('*.pl', '*.pm'), ('text/x-perl', 'application/x-perl')),
+    'PhpLexer': ('pygments.lexers.web', 'PHP', ('php', 'php3', 'php4', 'php5'), ('*.php', '*.php[345]'), ('text/x-php',)),
+    'PostScriptLexer': ('pygments.lexers.other', 'PostScript', ('postscript',), ('*.ps', '*.eps'), ('application/postscript',)),
+    'PovrayLexer': ('pygments.lexers.other', 'POVRay', ('pov',), ('*.pov', '*.inc'), ('text/x-povray',)),
+    'PrologLexer': ('pygments.lexers.compiled', 'Prolog', ('prolog',), ('*.prolog', '*.pro', '*.pl'), ('text/x-prolog',)),
+    'PropertiesLexer': ('pygments.lexers.text', 'Properties', ('properties',), ('*.properties',), ('text/x-java-properties',)),
+    'ProtoBufLexer': ('pygments.lexers.other', 'Protocol Buffer', ('protobuf',), ('*.proto',), ()),
+    'Python3Lexer': ('pygments.lexers.agile', 'Python 3', ('python3', 'py3'), (), ('text/x-python3', 'application/x-python3')),
+    'Python3TracebackLexer': ('pygments.lexers.agile', 'Python 3.0 Traceback', ('py3tb',), ('*.py3tb',), ('text/x-python3-traceback',)),
+    'PythonConsoleLexer': ('pygments.lexers.agile', 'Python console session', ('pycon',), (), ('text/x-python-doctest',)),
+    'PythonLexer': ('pygments.lexers.agile', 'Python', ('python', 'py'), ('*.py', '*.pyw', '*.sc', 'SConstruct', 'SConscript', '*.tac'), ('text/x-python', 'application/x-python')),
+    'PythonTracebackLexer': ('pygments.lexers.agile', 'Python Traceback', ('pytb',), ('*.pytb',), ('text/x-python-traceback',)),
+    'RConsoleLexer': ('pygments.lexers.math', 'RConsole', ('rconsole', 'rout'), ('*.Rout',), ()),
+    'RagelCLexer': ('pygments.lexers.parsers', 'Ragel in C Host', ('ragel-c',), ('*.rl',), ()),
+    'RagelCppLexer': ('pygments.lexers.parsers', 'Ragel in CPP Host', ('ragel-cpp',), ('*.rl',), ()),
+    'RagelDLexer': ('pygments.lexers.parsers', 'Ragel in D Host', ('ragel-d',), ('*.rl',), ()),
+    'RagelEmbeddedLexer': ('pygments.lexers.parsers', 'Embedded Ragel', ('ragel-em',), ('*.rl',), ()),
+    'RagelJavaLexer': ('pygments.lexers.parsers', 'Ragel in Java Host', ('ragel-java',), ('*.rl',), ()),
+    'RagelLexer': ('pygments.lexers.parsers', 'Ragel', ('ragel',), (), ()),
+    'RagelObjectiveCLexer': ('pygments.lexers.parsers', 'Ragel in Objective C Host', ('ragel-objc',), ('*.rl',), ()),
+    'RagelRubyLexer': ('pygments.lexers.parsers', 'Ragel in Ruby Host', ('ragel-ruby', 'ragel-rb'), ('*.rl',), ()),
+    'RawTokenLexer': ('pygments.lexers.special', 'Raw token data', ('raw',), (), ('application/x-pygments-tokens',)),
+    'RebolLexer': ('pygments.lexers.other', 'REBOL', ('rebol',), ('*.r', '*.r3'), ('text/x-rebol',)),
+    'RedcodeLexer': ('pygments.lexers.other', 'Redcode', ('redcode',), ('*.cw',), ()),
+    'RhtmlLexer': ('pygments.lexers.templates', 'RHTML', ('rhtml', 'html+erb', 'html+ruby'), ('*.rhtml',), ('text/html+ruby',)),
+    'RstLexer': ('pygments.lexers.text', 'reStructuredText', ('rst', 'rest', 'restructuredtext'), ('*.rst', '*.rest'), ('text/x-rst', 'text/prs.fallenstein.rst')),
+    'RubyConsoleLexer': ('pygments.lexers.agile', 'Ruby irb session', ('rbcon', 'irb'), (), ('text/x-ruby-shellsession',)),
+    'RubyLexer': ('pygments.lexers.agile', 'Ruby', ('rb', 'ruby', 'duby'), ('*.rb', '*.rbw', 'Rakefile', '*.rake', '*.gemspec', '*.rbx', '*.duby'), ('text/x-ruby', 'application/x-ruby')),
+    'SLexer': ('pygments.lexers.math', 'S', ('splus', 's', 'r'), ('*.S', '*.R'), ('text/S-plus', 'text/S', 'text/R')),
+    'SassLexer': ('pygments.lexers.web', 'Sass', ('sass', 'SASS'), ('*.sass',), ('text/x-sass',)),
+    'ScalaLexer': ('pygments.lexers.compiled', 'Scala', ('scala',), ('*.scala',), ('text/x-scala',)),
+    'ScamlLexer': ('pygments.lexers.web', 'Scaml', ('scaml', 'SCAML'), ('*.scaml',), ('text/x-scaml',)),
+    'SchemeLexer': ('pygments.lexers.functional', 'Scheme', ('scheme', 'scm'), ('*.scm',), ('text/x-scheme', 'application/x-scheme')),
+    'ScssLexer': ('pygments.lexers.web', 'SCSS', ('scss',), ('*.scss',), ('text/x-scss',)),
+    'SmalltalkLexer': ('pygments.lexers.other', 'Smalltalk', ('smalltalk', 'squeak'), ('*.st',), ('text/x-smalltalk',)),
+    'SmartyLexer': ('pygments.lexers.templates', 'Smarty', ('smarty',), ('*.tpl',), ('application/x-smarty',)),
+    'SourcesListLexer': ('pygments.lexers.text', 'Debian Sourcelist', ('sourceslist', 'sources.list'), ('sources.list',), ()),
+    'SqlLexer': ('pygments.lexers.other', 'SQL', ('sql',), ('*.sql',), ('text/x-sql',)),
+    'SqliteConsoleLexer': ('pygments.lexers.other', 'sqlite3con', ('sqlite3',), ('*.sqlite3-console',), ('text/x-sqlite3-console',)),
+    'SquidConfLexer': ('pygments.lexers.text', 'SquidConf', ('squidconf', 'squid.conf', 'squid'), ('squid.conf',), ('text/x-squidconf',)),
+    'SspLexer': ('pygments.lexers.templates', 'Scalate Server Page', ('ssp',), ('*.ssp',), ('application/x-ssp',)),
+    'TclLexer': ('pygments.lexers.agile', 'Tcl', ('tcl',), ('*.tcl',), ('text/x-tcl', 'text/x-script.tcl', 'application/x-tcl')),
+    'TcshLexer': ('pygments.lexers.other', 'Tcsh', ('tcsh', 'csh'), ('*.tcsh', '*.csh'), ('application/x-csh',)),
+    'TexLexer': ('pygments.lexers.text', 'TeX', ('tex', 'latex'), ('*.tex', '*.aux', '*.toc'), ('text/x-tex', 'text/x-latex')),
+    'TextLexer': ('pygments.lexers.special', 'Text only', ('text',), ('*.txt',), ('text/plain',)),
+    'ValaLexer': ('pygments.lexers.compiled', 'Vala', ('vala', 'vapi'), ('*.vala', '*.vapi'), ('text/x-vala',)),
+    'VbNetAspxLexer': ('pygments.lexers.dotnet', 'aspx-vb', ('aspx-vb',), ('*.aspx', '*.asax', '*.ascx', '*.ashx', '*.asmx', '*.axd'), ()),
+    'VbNetLexer': ('pygments.lexers.dotnet', 'VB.net', ('vb.net', 'vbnet'), ('*.vb', '*.bas'), ('text/x-vbnet', 'text/x-vba')),
+    'VelocityHtmlLexer': ('pygments.lexers.templates', 'HTML+Velocity', ('html+velocity',), (), ('text/html+velocity',)),
+    'VelocityLexer': ('pygments.lexers.templates', 'Velocity', ('velocity',), ('*.vm', '*.fhtml'), ()),
+    'VelocityXmlLexer': ('pygments.lexers.templates', 'XML+Velocity', ('xml+velocity',), (), ('application/xml+velocity',)),
+    'VerilogLexer': ('pygments.lexers.hdl', 'verilog', ('v',), ('*.v', '*.sv'), ('text/x-verilog',)),
+    'VimLexer': ('pygments.lexers.text', 'VimL', ('vim',), ('*.vim', '.vimrc'), ('text/x-vim',)),
+    'XQueryLexer': ('pygments.lexers.web', 'XQuery', ('xquery', 'xqy'), ('*.xqy', '*.xquery'), ('text/xquery', 'application/xquery')),
+    'XmlDjangoLexer': ('pygments.lexers.templates', 'XML+Django/Jinja', ('xml+django', 'xml+jinja'), (), ('application/xml+django', 'application/xml+jinja')),
+    'XmlErbLexer': ('pygments.lexers.templates', 'XML+Ruby', ('xml+erb', 'xml+ruby'), (), ('application/xml+ruby',)),
+    'XmlLexer': ('pygments.lexers.web', 'XML', ('xml',), ('*.xml', '*.xsl', '*.rss', '*.xslt', '*.xsd', '*.wsdl'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml', 'application/xsl+xml', 'application/xslt+xml')),
+    'XmlPhpLexer': ('pygments.lexers.templates', 'XML+PHP', ('xml+php',), (), ('application/xml+php',)),
+    'XmlSmartyLexer': ('pygments.lexers.templates', 'XML+Smarty', ('xml+smarty',), (), ('application/xml+smarty',)),
+    'XsltLexer': ('pygments.lexers.web', 'XSLT', ('xslt',), ('*.xsl', '*.xslt'), ('text/xml', 'application/xml', 'image/svg+xml', 'application/rss+xml', 'application/atom+xml', 'application/xsl+xml', 'application/xslt+xml')),
+    'YamlLexer': ('pygments.lexers.text', 'YAML', ('yaml',), ('*.yaml', '*.yml'), ('text/x-yaml',))
+}
+
+if __name__ == '__main__':
+    import sys
+    import os
+
+    # lookup lexers
+    found_lexers = []
+    sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
+    for filename in os.listdir('.'):
+        if filename.endswith('.py') and not filename.startswith('_'):
+            module_name = 'pygments.lexers.%s' % filename[:-3]
+            print(module_name)
+            module = __import__(module_name, None, None, [''])
+            for lexer_name in module.__all__:
+                lexer = getattr(module, lexer_name)
+                found_lexers.append(
+                    '%r: %r' % (lexer_name,
+                                (module_name,
+                                 lexer.name,
+                                 tuple(lexer.aliases),
+                                 tuple(lexer.filenames),
+                                 tuple(lexer.mimetypes))))
+    # sort them, that should make the diff files for svn smaller
+    found_lexers.sort()
+
+    # extract useful sourcecode from this file
+    f = open(__file__)
+    try:
+        content = f.read()
+    finally:
+        f.close()
+    header = content[:content.find('LEXERS = {')]
+    footer = content[content.find("if __name__ == '__main__':"):]
+
+    # write new file
+    f = open(__file__, 'w')
+    f.write(header)
+    f.write('LEXERS = {\n    %s\n}\n\n' % ',\n    '.join(found_lexers))
+    f.write(footer)
+    f.close()
--- a/ThirdParty/Pygments/pygments/lexers/_phpbuiltins.py	Tue Jan 04 17:37:48 2011 +0100
+++ b/ThirdParty/Pygments/pygments/lexers/_phpbuiltins.py	Wed Jan 05 15:46:19 2011 +0100
@@ -1,3389 +1,3389 @@
-# -*- coding: utf-8 -*-
-"""
-    pygments.lexers._phpbuiltins
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-    This file loads the function names and their modules from the
-    php webpage and generates itself.
-
-    Do not alter the MODULES dict by hand!
-
-    WARNING: the generation transfers quite much data over your
-             internet connection. don't run that at home, use
-             a server ;-)
-
-    :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-
-MODULES = {'.NET': ['dotnet_load'],
- 'APD': ['apd_breakpoint',
-         'apd_callstack',
-         'apd_clunk',
-         'apd_continue',
-         'apd_croak',
-         'apd_dump_function_table',
-         'apd_dump_persistent_resources',
-         'apd_dump_regular_resources',
-         'apd_echo',
-         'apd_get_active_symbols',
-         'apd_set_pprof_trace',
-         'apd_set_session',
-         'apd_set_session_trace',
-         'apd_set_socket_session_trace',
-         'override_function',
-         'rename_function'],
- 'Apache': ['apache_child_terminate',
-            'apache_get_modules',
-            'apache_get_version',
-            'apache_getenv',
-            'apache_lookup_uri',
-            'apache_note',
-            'apache_request_headers',
-            'apache_reset_timeout',
-            'apache_response_headers',
-            'apache_setenv',
-            'ascii2ebcdic',
-            'ebcdic2ascii',
-            'getallheaders',
-            'virtual'],
- 'Arrays': ['array',
-            'array_change_key_case',
-            'array_chunk',
-            'array_combine',
-            'array_count_values',
-            'array_diff',
-            'array_diff_assoc',
-            'array_diff_key',
-            'array_diff_uassoc',
-            'array_diff_ukey',
-            'array_fill',
-            'array_filter',
-            'array_flip',
-            'array_intersect',
-            'array_intersect_assoc',
-            'array_intersect_key',
-            'array_intersect_uassoc',
-            'array_intersect_ukey',
-            'array_key_exists',
-            'array_keys',
-            'array_map',
-            'array_merge',
-            'array_merge_recursive',
-            'array_multisort',
-            'array_pad',
-            'array_pop',
-            'array_push',
-            'array_rand',
-            'array_reduce',
-            'array_reverse',
-            'array_search',
-            'array_shift',
-            'array_slice',
-            'array_splice',
-            'array_sum',
-            'array_udiff',
-            'array_udiff_assoc',
-            'array_udiff_uassoc',
-            'array_uintersect',
-            'array_uintersect_assoc',
-            'array_uintersect_uassoc',
-            'array_unique',
-            'array_unshift',
-            'array_values',
-            'array_walk',
-            'array_walk_recursive',
-            'arsort',
-            'asort',
-            'compact',
-            'count',
-            'current',
-            'each',
-            'end',
-            'extract',
-            'in_array',
-            'key',
-            'krsort',
-            'ksort',
-            'list',
-            'natcasesort',
-            'natsort',
-            'next',
-            'pos',
-            'prev',
-            'range',
-            'reset',
-            'rsort',
-            'shuffle',
-            'sizeof',
-            'sort',
-            'uasort',
-            'uksort',
-            'usort'],
- 'Aspell': ['aspell_check',
-            'aspell_check_raw',
-            'aspell_new',
-            'aspell_suggest'],
- 'BC math': ['bcadd',
-             'bccomp',
-             'bcdiv',
-             'bcmod',
-             'bcmul',
-             'bcpow',
-             'bcpowmod',
-             'bcscale',
-             'bcsqrt',
-             'bcsub'],
- 'Bzip2': ['bzclose',
-           'bzcompress',
-           'bzdecompress',
-           'bzerrno',
-           'bzerror',
-           'bzerrstr',
-           'bzflush',
-           'bzopen',
-           'bzread',
-           'bzwrite'],
- 'CCVS': ['ccvs_add',
-          'ccvs_auth',
-          'ccvs_command',
-          'ccvs_count',
-          'ccvs_delete',
-          'ccvs_done',
-          'ccvs_init',
-          'ccvs_lookup',
-          'ccvs_new',
-          'ccvs_report',
-          'ccvs_return',
-          'ccvs_reverse',
-          'ccvs_sale',
-          'ccvs_status',
-          'ccvs_textvalue',
-          'ccvs_void'],
- 'COM': ['com_addref',
-         'com_create_guid',
-         'com_event_sink',
-         'com_get',
-         'com_get_active_object',
-         'com_invoke',
-         'com_isenum',
-         'com_load',
-         'com_load_typelib',
-         'com_message_pump',
-         'com_print_typeinfo',
-         'com_propget',
-         'com_propput',
-         'com_propset',
-         'com_release',
-         'com_set',
-         'variant_abs',
-         'variant_add',
-         'variant_and',
-         'variant_cast',
-         'variant_cat',
-         'variant_cmp',
-         'variant_date_from_timestamp',
-         'variant_date_to_timestamp',
-         'variant_div',
-         'variant_eqv',
-         'variant_fix',
-         'variant_get_type',
-         'variant_idiv',
-         'variant_imp',
-         'variant_int',
-         'variant_mod',
-         'variant_mul',
-         'variant_neg',
-         'variant_not',
-         'variant_or',
-         'variant_pow',
-         'variant_round',
-         'variant_set',
-         'variant_set_type',
-         'variant_sub',
-         'variant_xor'],
- 'CURL': ['curl_close',
-          'curl_copy_handle',
-          'curl_errno',
-          'curl_error',
-          'curl_exec',
-          'curl_getinfo',
-          'curl_init',
-          'curl_multi_add_handle',
-          'curl_multi_close',
-          'curl_multi_exec',
-          'curl_multi_getcontent',
-          'curl_multi_info_read',
-          'curl_multi_init',
-          'curl_multi_remove_handle',
-          'curl_multi_select',
-          'curl_setopt',
-          'curl_version'],
- 'Calendar': ['cal_days_in_month',
-              'cal_from_jd',
-              'cal_info',
-              'cal_to_jd',
-              'easter_date',
-              'easter_days',
-              'frenchtojd',
-              'gregoriantojd',
-              'jddayofweek',
-              'jdmonthname',
-              'jdtofrench',
-              'jdtogregorian',
-              'jdtojewish',
-              'jdtojulian',
-              'jdtounix',
-              'jewishtojd',
-              'juliantojd',
-              'unixtojd'],
- 'Classes/Objects': ['call_user_method',
-                     'call_user_method_array',
-                     'class_exists',
-                     'get_class',
-                     'get_class_methods',
-                     'get_class_vars',
-                     'get_declared_classes',
-                     'get_declared_interfaces',
-                     'get_object_vars',
-                     'get_parent_class',
-                     'interface_exists',
-                     'is_a',
-                     'is_subclass_of',
-                     'method_exists'],
- 'Classkit': ['classkit_import',
-              'classkit_method_add',
-              'classkit_method_copy',
-              'classkit_method_redefine',
-              'classkit_method_remove',
-              'classkit_method_rename'],
- 'ClibPDF': ['cpdf_add_annotation',
-             'cpdf_add_outline',
-             'cpdf_arc',
-             'cpdf_begin_text',
-             'cpdf_circle',
-             'cpdf_clip',
-             'cpdf_close',
-             'cpdf_closepath',
-             'cpdf_closepath_fill_stroke',
-             'cpdf_closepath_stroke',
-             'cpdf_continue_text',
-             'cpdf_curveto',
-             'cpdf_end_text',
-             'cpdf_fill',
-             'cpdf_fill_stroke',
-             'cpdf_finalize',
-             'cpdf_finalize_page',
-             'cpdf_global_set_document_limits',
-             'cpdf_import_jpeg',
-             'cpdf_lineto',
-             'cpdf_moveto',
-             'cpdf_newpath',
-             'cpdf_open',
-             'cpdf_output_buffer',
-             'cpdf_page_init',
-             'cpdf_place_inline_image',
-             'cpdf_rect',
-             'cpdf_restore',
-             'cpdf_rlineto',
-             'cpdf_rmoveto',
-             'cpdf_rotate',
-             'cpdf_rotate_text',
-             'cpdf_save',
-             'cpdf_save_to_file',
-             'cpdf_scale',
-             'cpdf_set_action_url',
-             'cpdf_set_char_spacing',
-             'cpdf_set_creator',
-             'cpdf_set_current_page',
-             'cpdf_set_font',
-             'cpdf_set_font_directories',
-             'cpdf_set_font_map_file',
-             'cpdf_set_horiz_scaling',
-             'cpdf_set_keywords',
-             'cpdf_set_leading',
-             'cpdf_set_page_animation',
-             'cpdf_set_subject',
-             'cpdf_set_text_matrix',
-             'cpdf_set_text_pos',
-             'cpdf_set_text_rendering',
-             'cpdf_set_text_rise',
-             'cpdf_set_title',
-             'cpdf_set_viewer_preferences',
-             'cpdf_set_word_spacing',
-             'cpdf_setdash',
-             'cpdf_setflat',
-             'cpdf_setgray',
-             'cpdf_setgray_fill',
-             'cpdf_setgray_stroke',
-             'cpdf_setlinecap',
-             'cpdf_setlinejoin',
-             'cpdf_setlinewidth',
-             'cpdf_setmiterlimit',
-             'cpdf_setrgbcolor',
-             'cpdf_setrgbcolor_fill',
-             'cpdf_setrgbcolor_stroke',
-             'cpdf_show',
-             'cpdf_show_xy',
-             'cpdf_stringwidth',
-             'cpdf_stroke',
-             'cpdf_text',
-             'cpdf_translate'],
- 'Crack': ['crack_check',
-           'crack_closedict',
-           'crack_getlastmessage',
-           'crack_opendict'],
- 'Cybercash': ['cybercash_base64_decode',
-               'cybercash_base64_encode',
-               'cybercash_decr',
-               'cybercash_encr'],
- 'Cyrus IMAP': ['cyrus_authenticate',
-                'cyrus_bind',
-                'cyrus_close',
-                'cyrus_connect',
-                'cyrus_query',
-                'cyrus_unbind'],
- 'DB++': ['dbplus_add',
-          'dbplus_aql',
-          'dbplus_chdir',
-          'dbplus_close',
-          'dbplus_curr',
-          'dbplus_errcode',
-          'dbplus_errno',
-          'dbplus_find',
-          'dbplus_first',
-          'dbplus_flush',
-          'dbplus_freealllocks',
-          'dbplus_freelock',
-          'dbplus_freerlocks',
-          'dbplus_getlock',
-          'dbplus_getunique',
-          'dbplus_info',
-          'dbplus_last',
-          'dbplus_lockrel',
-          'dbplus_next',
-          'dbplus_open',
-          'dbplus_prev',
-          'dbplus_rchperm',
-          'dbplus_rcreate',
-          'dbplus_rcrtexact',
-          'dbplus_rcrtlike',
-          'dbplus_resolve',
-          'dbplus_restorepos',
-          'dbplus_rkeys',
-          'dbplus_ropen',
-          'dbplus_rquery',
-          'dbplus_rrename',
-          'dbplus_rsecindex',
-          'dbplus_runlink',
-          'dbplus_rzap',
-          'dbplus_savepos',
-          'dbplus_setindex',
-          'dbplus_setindexbynumber',
-          'dbplus_sql',
-          'dbplus_tcl',
-          'dbplus_tremove',
-          'dbplus_undo',
-          'dbplus_undoprepare',
-          'dbplus_unlockrel',
-          'dbplus_unselect',
-          'dbplus_update',
-          'dbplus_xlockrel',
-          'dbplus_xunlockrel'],
- 'DBM': ['dblist',
-         'dbmclose',
-         'dbmdelete',
-         'dbmexists',
-         'dbmfetch',
-         'dbmfirstkey',
-         'dbminsert',
-         'dbmnextkey',
-         'dbmopen',
-         'dbmreplace'],
- 'DOM': ['dom_import_simplexml'],
- 'DOM XML': ['domxml_new_doc',
-             'domxml_open_file',
-             'domxml_open_mem',
-             'domxml_version',
-             'domxml_xmltree',
-             'domxml_xslt_stylesheet',
-             'domxml_xslt_stylesheet_doc',
-             'domxml_xslt_stylesheet_file',
-             'xpath_eval',
-             'xpath_eval_expression',
-             'xpath_new_context',
-             'xptr_eval',
-             'xptr_new_context'],
- 'Date/Time': ['checkdate',
-               'date',
-               'date_sunrise',
-               'date_sunset',
-               'getdate',
-               'gettimeofday',
-               'gmdate',
-               'gmmktime',
-               'gmstrftime',
-               'idate',
-               'localtime',
-               'microtime',
-               'mktime',
-               'strftime',
-               'strptime',
-               'strtotime',
-               'time'],
- 'Direct IO': ['dio_close',
-               'dio_fcntl',
-               'dio_open',
-               'dio_read',
-               'dio_seek',
-               'dio_stat',
-               'dio_tcsetattr',
-               'dio_truncate',
-               'dio_write'],
- 'Directories': ['chdir',
-                 'chroot',
-                 'closedir',
-                 'getcwd',
-                 'opendir',
-                 'readdir',
-                 'rewinddir',
-                 'scandir'],
- 'Errors and Logging': ['debug_backtrace',
-                        'debug_print_backtrace',
-                        'error_log',
-                        'error_reporting',
-                        'restore_error_handler',
-                        'restore_exception_handler',
-                        'set_error_handler',
-                        'set_exception_handler',
-                        'trigger_error',
-                        'user_error'],
- 'Exif': ['exif_imagetype',
-          'exif_read_data',
-          'exif_tagname',
-          'exif_thumbnail',
-          'read_exif_data'],
- 'FDF': ['fdf_add_doc_javascript',
-         'fdf_add_template',
-         'fdf_close',
-         'fdf_create',
-         'fdf_enum_values',
-         'fdf_errno',
-         'fdf_error',
-         'fdf_get_ap',
-         'fdf_get_attachment',
-         'fdf_get_encoding',
-         'fdf_get_file',
-         'fdf_get_flags',
-         'fdf_get_opt',
-         'fdf_get_status',
-         'fdf_get_value',
-         'fdf_get_version',
-         'fdf_header',
-         'fdf_next_field_name',
-         'fdf_open',
-         'fdf_open_string',
-         'fdf_remove_item',
-         'fdf_save',
-         'fdf_save_string',
-         'fdf_set_ap',
-         'fdf_set_encoding',
-         'fdf_set_file',
-         'fdf_set_flags',
-         'fdf_set_javascript_action',
-         'fdf_set_on_import_javascript',
-         'fdf_set_opt',
-         'fdf_set_status',
-         'fdf_set_submit_form_action',
-         'fdf_set_target_frame',
-         'fdf_set_value',
-         'fdf_set_version'],
- 'FTP': ['ftp_alloc',
-         'ftp_cdup',
-         'ftp_chdir',
-         'ftp_chmod',
-         'ftp_close',
-         'ftp_connect',
-         'ftp_delete',
-         'ftp_exec',
-         'ftp_fget',
-         'ftp_fput',
-         'ftp_get',
-         'ftp_get_option',
-         'ftp_login',
-         'ftp_mdtm',
-         'ftp_mkdir',
-         'ftp_nb_continue',
-         'ftp_nb_fget',
-         'ftp_nb_fput',
-         'ftp_nb_get',
-         'ftp_nb_put',
-         'ftp_nlist',
-         'ftp_pasv',
-         'ftp_put',
-         'ftp_pwd',
-         'ftp_quit',
-         'ftp_raw',
-         'ftp_rawlist',
-         'ftp_rename',
-         'ftp_rmdir',
-         'ftp_set_option',
-         'ftp_site',
-         'ftp_size',
-         'ftp_ssl_connect',
-         'ftp_systype'],
- 'Filesystem': ['basename',
-                'chgrp',
-                'chmod',
-                'chown',
-                'clearstatcache',
-                'copy',
-                'delete',
-                'dirname',
-                'disk_free_space',
-                'disk_total_space',
-                'diskfreespace',
-                'fclose',
-                'feof',
-                'fflush',
-                'fgetc',
-                'fgetcsv',
-                'fgets',
-                'fgetss',
-                'file',
-                'file_exists',
-                'file_get_contents',
-                'file_put_contents',
-                'fileatime',
-                'filectime',
-                'filegroup',
-                'fileinode',
-                'filemtime',
-                'fileowner',
-                'fileperms',
-                'filesize',
-                'filetype',
-                'flock',
-                'fnmatch',
-                'fopen',
-                'fpassthru',
-                'fputcsv',
-                'fputs',
-                'fread',
-                'fscanf',
-                'fseek',
-                'fstat',
-                'ftell',
-                'ftruncate',
-                'fwrite',
-                'glob',
-                'is_dir',
-                'is_executable',
-                'is_file',
-                'is_link',
-                'is_readable',
-                'is_uploaded_file',
-                'is_writable',
-                'is_writeable',
-                'link',
-                'linkinfo',
-                'lstat',
-                'mkdir',
-                'move_uploaded_file',
-                'parse_ini_file',
-                'pathinfo',
-                'pclose',
-                'popen',
-                'readfile',
-                'readlink',
-                'realpath',
-                'rename',
-                'rewind',
-                'rmdir',
-                'set_file_buffer',
-                'stat',
-                'symlink',
-                'tempnam',
-                'tmpfile',
-                'touch',
-                'umask',
-                'unlink'],
- 'Firebird/InterBase': ['ibase_add_user',
-                        'ibase_affected_rows',
-                        'ibase_backup',
-                        'ibase_blob_add',
-                        'ibase_blob_cancel',
-                        'ibase_blob_close',
-                        'ibase_blob_create',
-                        'ibase_blob_echo',
-                        'ibase_blob_get',
-                        'ibase_blob_import',
-                        'ibase_blob_info',
-                        'ibase_blob_open',
-                        'ibase_close',
-                        'ibase_commit',
-                        'ibase_commit_ret',
-                        'ibase_connect',
-                        'ibase_db_info',
-                        'ibase_delete_user',
-                        'ibase_drop_db',
-                        'ibase_errcode',
-                        'ibase_errmsg',
-                        'ibase_execute',
-                        'ibase_fetch_assoc',
-                        'ibase_fetch_object',
-                        'ibase_fetch_row',
-                        'ibase_field_info',
-                        'ibase_free_event_handler',
-                        'ibase_free_query',
-                        'ibase_free_result',
-                        'ibase_gen_id',
-                        'ibase_maintain_db',
-                        'ibase_modify_user',
-                        'ibase_name_result',
-                        'ibase_num_fields',
-                        'ibase_num_params',
-                        'ibase_param_info',
-                        'ibase_pconnect',
-                        'ibase_prepare',
-                        'ibase_query',
-                        'ibase_restore',
-                        'ibase_rollback',
-                        'ibase_rollback_ret',
-                        'ibase_server_info',
-                        'ibase_service_attach',
-                        'ibase_service_detach',
-                        'ibase_set_event_handler',
-                        'ibase_timefmt',
-                        'ibase_trans',
-                        'ibase_wait_event'],
- 'FriBiDi': ['fribidi_log2vis'],
- 'FrontBase': ['fbsql_affected_rows',
-               'fbsql_autocommit',
-               'fbsql_blob_size',
-               'fbsql_change_user',
-               'fbsql_clob_size',
-               'fbsql_close',
-               'fbsql_commit',
-               'fbsql_connect',
-               'fbsql_create_blob',
-               'fbsql_create_clob',
-               'fbsql_create_db',
-               'fbsql_data_seek',
-               'fbsql_database',
-               'fbsql_database_password',
-               'fbsql_db_query',
-               'fbsql_db_status',
-               'fbsql_drop_db',
-               'fbsql_errno',
-               'fbsql_error',
-               'fbsql_fetch_array',
-               'fbsql_fetch_assoc',
-               'fbsql_fetch_field',
-               'fbsql_fetch_lengths',
-               'fbsql_fetch_object',
-               'fbsql_fetch_row',
-               'fbsql_field_flags',
-               'fbsql_field_len',
-               'fbsql_field_name',
-               'fbsql_field_seek',
-               'fbsql_field_table',
-               'fbsql_field_type',
-               'fbsql_free_result',
-               'fbsql_get_autostart_info',
-               'fbsql_hostname',
-               'fbsql_insert_id',
-               'fbsql_list_dbs',
-               'fbsql_list_fields',
-               'fbsql_list_tables',
-               'fbsql_next_result',
-               'fbsql_num_fields',
-               'fbsql_num_rows',
-               'fbsql_password',
-               'fbsql_pconnect',
-               'fbsql_query',
-               'fbsql_read_blob',
-               'fbsql_read_clob',
-               'fbsql_result',
-               'fbsql_rollback',
-               'fbsql_select_db',
-               'fbsql_set_lob_mode',
-               'fbsql_set_password',
-               'fbsql_set_transaction',
-               'fbsql_start_db',
-               'fbsql_stop_db',
-               'fbsql_tablename',
-               'fbsql_username',
-               'fbsql_warnings'],
- 'Function handling': ['call_user_func',
-                       'call_user_func_array',
-                       'create_function',
-                       'func_get_arg',
-                       'func_get_args',
-                       'func_num_args',
-                       'function_exists',
-                       'get_defined_functions',
-                       'register_shutdown_function',
-                       'register_tick_function',
-                       'unregister_tick_function'],
- 'GMP': ['gmp_abs',
-         'gmp_add',
-         'gmp_and',
-         'gmp_clrbit',
-         'gmp_cmp',
-         'gmp_com',
-         'gmp_div',
-         'gmp_div_q',
-         'gmp_div_qr',
-         'gmp_div_r',
-         'gmp_divexact',