changeset 5133:b7fe69c6cb1c
branch:      jsonrpc
parent:      5132:a094eee9f862
child:       5134:4a4212a6f40c
author:      Detlev Offenbach <detlev@die-offenbachs.de>
date:        Sat, 03 Sep 2016 18:12:12 +0200

Renamed the Python 2 debugger package to Python2.

DebugClients/Python/AsyncFile.py
DebugClients/Python/AsyncIO.py
DebugClients/Python/DCTestResult.py
DebugClients/Python/DebugBase.py
DebugClients/Python/DebugClient.py
DebugClients/Python/DebugClientBase.py
DebugClients/Python/DebugClientCapabilities.py
DebugClients/Python/DebugClientThreads.py
DebugClients/Python/DebugConfig.py
DebugClients/Python/DebugProtocol.py
DebugClients/Python/DebugThread.py
DebugClients/Python/DebugUtilities.py
DebugClients/Python/FlexCompleter.py
DebugClients/Python/PyProfile.py
DebugClients/Python/__init__.py
DebugClients/Python/coverage/__init__.py
DebugClients/Python/coverage/__main__.py
DebugClients/Python/coverage/annotate.py
DebugClients/Python/coverage/backunittest.py
DebugClients/Python/coverage/backward.py
DebugClients/Python/coverage/bytecode.py
DebugClients/Python/coverage/cmdline.py
DebugClients/Python/coverage/collector.py
DebugClients/Python/coverage/config.py
DebugClients/Python/coverage/control.py
DebugClients/Python/coverage/data.py
DebugClients/Python/coverage/debug.py
DebugClients/Python/coverage/doc/AUTHORS.txt
DebugClients/Python/coverage/doc/CHANGES.rst
DebugClients/Python/coverage/doc/LICENSE.txt
DebugClients/Python/coverage/doc/README.rst
DebugClients/Python/coverage/env.py
DebugClients/Python/coverage/execfile.py
DebugClients/Python/coverage/files.py
DebugClients/Python/coverage/html.py
DebugClients/Python/coverage/misc.py
DebugClients/Python/coverage/monkey.py
DebugClients/Python/coverage/parser.py
DebugClients/Python/coverage/phystokens.py
DebugClients/Python/coverage/pickle2json.py
DebugClients/Python/coverage/plugin.py
DebugClients/Python/coverage/plugin_support.py
DebugClients/Python/coverage/python.py
DebugClients/Python/coverage/pytracer.py
DebugClients/Python/coverage/report.py
DebugClients/Python/coverage/results.py
DebugClients/Python/coverage/summary.py
DebugClients/Python/coverage/templite.py
DebugClients/Python/coverage/test_helpers.py
DebugClients/Python/coverage/version.py
DebugClients/Python/coverage/xmlreport.py
DebugClients/Python/eric6dbgstub.py
DebugClients/Python/getpass.py
DebugClients/Python2/AsyncFile.py
DebugClients/Python2/AsyncIO.py
DebugClients/Python2/DCTestResult.py
DebugClients/Python2/DebugBase.py
DebugClients/Python2/DebugClient.py
DebugClients/Python2/DebugClientBase.py
DebugClients/Python2/DebugClientCapabilities.py
DebugClients/Python2/DebugClientThreads.py
DebugClients/Python2/DebugConfig.py
DebugClients/Python2/DebugProtocol.py
DebugClients/Python2/DebugThread.py
DebugClients/Python2/DebugUtilities.py
DebugClients/Python2/FlexCompleter.py
DebugClients/Python2/PyProfile.py
DebugClients/Python2/__init__.py
DebugClients/Python2/coverage/__init__.py
DebugClients/Python2/coverage/__main__.py
DebugClients/Python2/coverage/annotate.py
DebugClients/Python2/coverage/backunittest.py
DebugClients/Python2/coverage/backward.py
DebugClients/Python2/coverage/bytecode.py
DebugClients/Python2/coverage/cmdline.py
DebugClients/Python2/coverage/collector.py
DebugClients/Python2/coverage/config.py
DebugClients/Python2/coverage/control.py
DebugClients/Python2/coverage/data.py
DebugClients/Python2/coverage/debug.py
DebugClients/Python2/coverage/doc/AUTHORS.txt
DebugClients/Python2/coverage/doc/CHANGES.rst
DebugClients/Python2/coverage/doc/LICENSE.txt
DebugClients/Python2/coverage/doc/README.rst
DebugClients/Python2/coverage/env.py
DebugClients/Python2/coverage/execfile.py
DebugClients/Python2/coverage/files.py
DebugClients/Python2/coverage/html.py
DebugClients/Python2/coverage/misc.py
DebugClients/Python2/coverage/monkey.py
DebugClients/Python2/coverage/parser.py
DebugClients/Python2/coverage/phystokens.py
DebugClients/Python2/coverage/pickle2json.py
DebugClients/Python2/coverage/plugin.py
DebugClients/Python2/coverage/plugin_support.py
DebugClients/Python2/coverage/python.py
DebugClients/Python2/coverage/pytracer.py
DebugClients/Python2/coverage/report.py
DebugClients/Python2/coverage/results.py
DebugClients/Python2/coverage/summary.py
DebugClients/Python2/coverage/templite.py
DebugClients/Python2/coverage/test_helpers.py
DebugClients/Python2/coverage/version.py
DebugClients/Python2/coverage/xmlreport.py
DebugClients/Python2/eric6dbgstub.py
DebugClients/Python2/getpass.py
Debugger/DebuggerInterfacePython2.py
Documentation/Source/eric6.DebugClients.Python.AsyncFile.html
Documentation/Source/eric6.DebugClients.Python.DCTestResult.html
Documentation/Source/eric6.DebugClients.Python.DebugBase.html
Documentation/Source/eric6.DebugClients.Python.DebugClient.html
Documentation/Source/eric6.DebugClients.Python.DebugClientBase.html
Documentation/Source/eric6.DebugClients.Python.DebugClientCapabilities.html
Documentation/Source/eric6.DebugClients.Python.DebugClientThreads.html
Documentation/Source/eric6.DebugClients.Python.DebugConfig.html
Documentation/Source/eric6.DebugClients.Python.DebugThread.html
Documentation/Source/eric6.DebugClients.Python.FlexCompleter.html
Documentation/Source/eric6.DebugClients.Python.PyProfile.html
Documentation/Source/eric6.DebugClients.Python.eric6dbgstub.html
Documentation/Source/eric6.DebugClients.Python.getpass.html
Documentation/Source/index-eric6.DebugClients.Python.html
eric6.e4p
--- a/DebugClients/Python/AsyncFile.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,339 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing an asynchronous file like socket interface for the
-debugger.
-"""
-
-import socket
-
-from DebugUtilities import prepareJsonCommand
-
-
-def AsyncPendingWrite(file):
-    """
-    Module function to check for data to be written.
-    
-    @param file The file object to be checked (file)
-    @return Flag indicating if there is data waiting (int)
-    """
-    try:
-        pending = file.pendingWrite()
-    except Exception:
-        pending = 0
-
-    return pending
-
-
-class AsyncFile(object):
-    """
-    Class wrapping a socket object with a file interface.
-    """
-    maxtries = 10
-    maxbuffersize = 1024 * 1024 * 4
-    
-    def __init__(self, sock, mode, name):
-        """
-        Constructor
-        
-        @param sock the socket object being wrapped
-        @param mode mode of this file (string)
-        @param name name of this file (string)
-        """
-        # Initialise the attributes.
-        self.closed = False
-        self.sock = sock
-        self.mode = mode
-        self.name = name
-        self.nWriteErrors = 0
-        self.encoding = "utf-8"
-
-        self.wpending = u''
-
-    def __checkMode(self, mode):
-        """
-        Private method to check the mode.
-        
-        This method checks, if an operation is permitted according to
-        the mode of the file. If it is not, an IOError is raised.
-        
-        @param mode the mode to be checked (string)
-        @exception IOError raised to indicate a bad file descriptor
-        """
-        if mode != self.mode:
-            raise IOError('[Errno 9] Bad file descriptor')
-
-    def __nWrite(self, n):
-        """
-        Private method to write a specific number of pending bytes.
-        
-        @param n the number of bytes to be written (int)
-        """
-        if n:
-            try:
-                buf = self.wpending[:n]
-                try:
-                    buf = buf.encode('utf-8', 'backslashreplace')
-                except (UnicodeEncodeError, UnicodeDecodeError):
-                    pass
-                self.sock.sendall(buf)
-                self.wpending = self.wpending[n:]
-                self.nWriteErrors = 0
-            except socket.error:
-                self.nWriteErrors += 1
-                if self.nWriteErrors > self.maxtries:
-                    self.wpending = u''  # delete all output
-
-    def pendingWrite(self):
-        """
-        Public method that returns the number of bytes waiting to be written.
-        
-        @return the number of bytes to be written (int)
-        """
-        return self.wpending.rfind('\n') + 1
-
-    def close(self, closeit=False):
-        """
-        Public method to close the file.
-        
-        @param closeit flag to indicate a close ordered by the debugger code
-            (boolean)
-        """
-        if closeit and not self.closed:
-            self.flush()
-            self.sock.close()
-            self.closed = True
-
-    def flush(self):
-        """
-        Public method to write all pending bytes.
-        """
-        self.__nWrite(len(self.wpending))
-
-    def isatty(self):
-        """
-        Public method to indicate whether a tty interface is supported.
-        
-        @return always false
-        """
-        return False
-
-    def fileno(self):
-        """
-        Public method returning the file number.
-        
-        @return file number (int)
-        """
-        try:
-            return self.sock.fileno()
-        except socket.error:
-            return -1
-
-    def readable(self):
-        """
-        Public method to check, if the stream is readable.
-        
-        @return flag indicating a readable stream (boolean)
-        """
-        return self.mode == "r"
-    
-    def read_p(self, size=-1):
-        """
-        Public method to read bytes from this file.
-        
-        @param size maximum number of bytes to be read (int)
-        @return the bytes read (any)
-        """
-        self.__checkMode('r')
-
-        if size < 0:
-            size = 20000
-
-        return self.sock.recv(size).decode('utf8', 'backslashreplace')
-
-    def read(self, size=-1):
-        """
-        Public method to read bytes from this file.
-        
-        @param size maximum number of bytes to be read (int)
-        @return the bytes read (any)
-        """
-        self.__checkMode('r')
-
-        buf = raw_input()
-        if size >= 0:
-            buf = buf[:size]
-        return buf
-
-    def readline_p(self, size=-1):
-        """
-        Public method to read a line from this file.
-        
-        <b>Note</b>: This method will not block and may return
-        only a part of a line if that is all that is available.
-        
-        @param size maximum number of bytes to be read (int)
-        @return one line of text up to size bytes (string)
-        """
-        self.__checkMode('r')
-
-        if size < 0:
-            size = 20000
-
-        # The integration of the debugger client event loop and the connection
-        # to the debugger relies on the two lines of the debugger command being
-        # delivered as two separate events.  Therefore we make sure we only
-        # read a line at a time.
-        line = self.sock.recv(size, socket.MSG_PEEK)
-
-        eol = line.find(b'\n')
-
-        if eol >= 0:
-            size = eol + 1
-        else:
-            size = len(line)
-
-        # Now we know how big the line is, read it for real.
-        return self.sock.recv(size).decode('utf8', 'backslashreplace')
-
-    def readlines(self, sizehint=-1):
-        """
-        Public method to read all lines from this file.
-        
-        @param sizehint hint of the numbers of bytes to be read (int)
-        @return list of lines read (list of strings)
-        """
-        self.__checkMode('r')
-
-        lines = []
-        room = sizehint
-
-        line = self.readline_p(room)
-        linelen = len(line)
-
-        while linelen > 0:
-            lines.append(line)
-
-            if sizehint >= 0:
-                room = room - linelen
-
-                if room <= 0:
-                    break
-
-            line = self.readline_p(room)
-            linelen = len(line)
-
-        return lines
-
-    def readline(self, sizehint=-1):
-        """
-        Public method to read one line from this file.
-        
-        @param sizehint hint of the numbers of bytes to be read (int)
-        @return one line read (string)
-        """
-        self.__checkMode('r')
-
-        line = raw_input() + '\n'
-        if sizehint >= 0:
-            line = line[:sizehint]
-        return line
-        
-    def seekable(self):
-        """
-        Public method to check, if the stream is seekable.
-        
-        @return flag indicating a seekable stream (boolean)
-        """
-        return False
-    
-    def seek(self, offset, whence=0):
-        """
-        Public method to move the filepointer.
-        
-        @param offset offset to seek for
-        @param whence where to seek from
-        @exception IOError This method is not supported and always raises an
-        IOError.
-        """
-        raise IOError('[Errno 29] Illegal seek')
-
-    def tell(self):
-        """
-        Public method to get the filepointer position.
-        
-        @exception IOError This method is not supported and always raises an
-        IOError.
-        """
-        raise IOError('[Errno 29] Illegal seek')
-
-    def truncate(self, size=-1):
-        """
-        Public method to truncate the file.
-        
-        @param size size to truncate to (integer)
-        @exception IOError This method is not supported and always raises an
-        IOError.
-        """
-        raise IOError('[Errno 29] Illegal seek')
-
-    def writable(self):
-        """
-        Public method to check, if a stream is writable.
-        
-        @return flag indicating a writable stream (boolean)
-        """
-        return self.mode == "w"
-    
-    def write(self, s):
-        """
-        Public method to write a string to the file.
-        
-        @param s bytes to be written (string)
-        """
-        self.__checkMode('w')
-        
-        cmd = prepareJsonCommand("ClientOutput", {
-            "text": s,
-        })
-        self.write_p(cmd)
-    
-    def write_p(self, s):
-        """
-        Public method to write a string to the file.
-        
-        @param s text to be written (string)
-        @exception socket.error raised to indicate too many send attempts
-        """
-        self.__checkMode('w')
-        tries = 0
-        if not self.wpending:
-            self.wpending = s
-        elif len(self.wpending) + len(s) > self.maxbuffersize:
-            # flush wpending if it is too big
-            while self.wpending:
-                # if we have a persistent error in sending the data, an
-                # exception will be raised in __nWrite
-                self.flush()
-                tries += 1
-                if tries > self.maxtries:
-                    raise socket.error("Too many attempts to send data")
-            self.wpending = s
-        else:
-            self.wpending += s
-        self.__nWrite(self.pendingWrite())
-
-    def writelines(self, lines):
-        """
-        Public method to write a list of strings to the file.
-        
-        @param lines list of texts to be written (list of string)
-        """
-        self.write("".join(lines))
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
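The removed AsyncFile implements a bounded, line-oriented write buffer: `pendingWrite()` counts characters up to the last newline, and `__nWrite()` sends only that many, so partial lines stay buffered until completed. A minimal standalone sketch of that buffering policy (the `LineBuffer` class is hypothetical, not part of the changeset):

```python
class LineBuffer:
    """Sketch of AsyncFile's line-oriented write buffering.

    Data accumulates in a string buffer; only complete lines (up to
    and including the last newline) are handed to the send callback,
    mirroring AsyncFile.pendingWrite() / AsyncFile.__nWrite().
    """

    def __init__(self, send, maxbuffersize=1024 * 1024 * 4):
        self.send = send            # callable that transmits the data
        self.pending = ""
        self.maxbuffersize = maxbuffersize

    def pending_write(self):
        # Number of characters up to and including the last newline.
        return self.pending.rfind("\n") + 1

    def write(self, s):
        if len(self.pending) + len(s) > self.maxbuffersize:
            # Buffer overflow: flush everything before accepting more.
            self.send(self.pending)
            self.pending = ""
        self.pending += s
        n = self.pending_write()
        if n:
            self.send(self.pending[:n])
            self.pending = self.pending[n:]


sent = []
buf = LineBuffer(sent.append)
buf.write("partial")        # no newline yet: nothing is sent
buf.write(" line\nnext")    # complete line flushed, tail kept back
```

The real class additionally retries on `socket.error` and drops the buffer after `maxtries` failures; this sketch omits that error handling.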
--- a/DebugClients/Python/AsyncIO.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,88 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing a base class of an asynchronous interface for the debugger.
-"""
-
-# TODO: delete this file
-class AsyncIO(object):
-    """
-    Class implementing asynchronous reading and writing.
-    """
-    def __init__(self):
-        """
-        Constructor
-        """
-        # There is no connection yet.
-        self.disconnect()
-
-    def disconnect(self):
-        """
-        Public method to disconnect any current connection.
-        """
-        self.readfd = None
-        self.writefd = None
-
-    def setDescriptors(self, rfd, wfd):
-        """
-        Public method called to set the descriptors for the connection.
-        
-        @param rfd file descriptor of the input file (int)
-        @param wfd file descriptor of the output file (int)
-        """
-        self.rbuf = ''
-        self.readfd = rfd
-
-        self.wbuf = ''
-        self.writefd = wfd
-
-    def readReady(self, fd):
-        """
-        Public method called when there is data ready to be read.
-        
-        @param fd file descriptor of the file that has data to be read (int)
-        """
-        try:
-            got = self.readfd.readline_p()
-        except Exception:
-            return
-
-        if len(got) == 0:
-            self.sessionClose()
-            return
-
-        self.rbuf = self.rbuf + got
-
-        # Call handleLine for the line if it is complete.
-        eol = self.rbuf.find('\n')
-
-        while eol >= 0:
-            s = self.rbuf[:eol + 1]
-            self.rbuf = self.rbuf[eol + 1:]
-            self.handleLine(s)
-            eol = self.rbuf.find('\n')
-
-    def writeReady(self, fd):
-        """
-        Public method called when we are ready to write data.
-        
-        @param fd file descriptor of the file that has data to be written (int)
-        """
-        self.writefd.write(self.wbuf)
-        self.writefd.flush()
-        self.wbuf = ''
-
-    def write(self, s):
-        """
-        Public method to write a string.
-        
-        @param s the data to be written (string)
-        """
-        self.wbuf = self.wbuf + s
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
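The deleted AsyncIO.readReady splits whatever arrived on the socket into complete newline-terminated lines and dispatches each one to `handleLine`, keeping any unterminated remainder for the next read. That splitting loop can be sketched as a standalone function (the name `split_lines` is illustrative, not from the source):

```python
def split_lines(rbuf, got, handle):
    """Sketch of AsyncIO.readReady's line dispatch (simplified).

    Appends newly read data to the buffer, calls handle() once per
    complete newline-terminated line, and returns the unterminated
    remainder to carry over into the next read.
    """
    rbuf += got
    eol = rbuf.find("\n")
    while eol >= 0:
        handle(rbuf[:eol + 1])      # line including its newline
        rbuf = rbuf[eol + 1:]
        eol = rbuf.find("\n")
    return rbuf


lines = []
rest = split_lines("", "a\nb\nc", lines.append)
```

Keeping the trailing `"c"` buffered is what lets a protocol command arrive split across several reads without being mis-parsed.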
--- a/DebugClients/Python/DCTestResult.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,131 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing a TestResult derivative for the eric6 debugger.
-"""
-
-import select
-from unittest import TestResult
-
-
-class DCTestResult(TestResult):
-    """
-    A TestResult derivative to work with eric6's debug client.
-    
-    For more details see unittest.py of the standard python distribution.
-    """
-    def __init__(self, dbgClient):
-        """
-        Constructor
-        
-        @param dbgClient reference to the debug client
-        @type DebugClientBase
-        """
-        TestResult.__init__(self)
-        self.__dbgClient = dbgClient
-        
-    def addFailure(self, test, err):
-        """
-        Public method called if a test failed.
-        
-        @param test Reference to the test object
-        @param err The error traceback
-        """
-        TestResult.addFailure(self, test, err)
-        tracebackLines = self._exc_info_to_string(err, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTTestFailed", {
-            "testname": str(test),
-            "traceback": tracebackLines,
-            "id": test.id(),
-        })
-        
-    def addError(self, test, err):
-        """
-        Public method called if a test errored.
-        
-        @param test Reference to the test object
-        @param err The error traceback
-        """
-        TestResult.addError(self, test, err)
-        tracebackLines = self._exc_info_to_string(err, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTTestErrored", {
-            "testname": str(test),
-            "traceback": tracebackLines,
-            "id": test.id(),
-        })
-        
-    def addSkip(self, test, reason):
-        """
-        Public method called if a test was skipped.
-        
-        @param test reference to the test object
-        @param reason reason for skipping the test (string)
-        """
-        TestResult.addSkip(self, test, reason)
-        self.__dbgClient.sendJsonCommand("ResponseUTTestSkipped", {
-            "testname": str(test),
-            "reason": reason,
-            "id": test.id(),
-        })
-        
-    def addExpectedFailure(self, test, err):
-        """
-        Public method called if a test failed as expected.
-        
-        @param test reference to the test object
-        @param err error traceback
-        """
-        TestResult.addExpectedFailure(self, test, err)
-        tracebackLines = self._exc_info_to_string(err, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTTestFailedExpected", {
-            "testname": str(test),
-            "traceback": tracebackLines,
-            "id": test.id(),
-        })
-        
-    def addUnexpectedSuccess(self, test):
-        """
-        Public method called if a test succeeded unexpectedly.
-        
-        @param test reference to the test object
-        """
-        TestResult.addUnexpectedSuccess(self, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTTestSucceededUnexpected", {
-            "testname": str(test),
-            "id": test.id(),
-        })
-        
-    def startTest(self, test):
-        """
-        Public method called at the start of a test.
-        
-        @param test Reference to the test object
-        """
-        TestResult.startTest(self, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTStartTest", {
-            "testname": str(test),
-            "description": test.shortDescription(),
-        })
-
-    def stopTest(self, test):
-        """
-        Public method called at the end of a test.
-        
-        @param test Reference to the test object
-        """
-        TestResult.stopTest(self, test)
-        self.__dbgClient.sendJsonCommand("ResponseUTStopTest", {})
-        
-        # ensure that pending input is processed
-        rrdy, wrdy, xrdy = select.select(
-            [self.__dbgClient.readstream], [], [], 0.01)
-
-        if self.__dbgClient.readstream in rrdy:
-            self.__dbgClient.readReady(self.__dbgClient.readstream)
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
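Each DCTestResult hook forwards the unittest outcome to the IDE as a named JSON command via `sendJsonCommand`. On the jsonrpc branch these commands are JSON-RPC-style notifications; a hedged sketch of the message shape (the envelope fields are an assumption based on the branch name, and `prepare_json_command` is a hypothetical stand-in for `DebugUtilities.prepareJsonCommand`):

```python
import json


def prepare_json_command(method, params):
    """Hypothetical stand-in for DebugUtilities.prepareJsonCommand:
    wrap a method name and a parameter dictionary as one
    newline-terminated JSON-RPC 2.0 notification.  The exact envelope
    used by the jsonrpc branch is not shown in this diff."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
    }) + "\n"


# The payload mirrors what DCTestResult.addSkip() sends.
msg = prepare_json_command("ResponseUTTestSkipped", {
    "testname": "test_example",
    "reason": "not supported here",
    "id": "suite.test_example",
})
```

The trailing newline matters: the AsyncFile/AsyncIO layer above delivers data to the peer one complete line at a time, so each command must occupy exactly one line.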
--- a/DebugClients/Python/DebugBase.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,905 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing the debug base class.
-"""
-
-import sys
-import bdb
-import os
-import types
-import atexit
-import inspect
-import ctypes
-from inspect import CO_GENERATOR
-
-gRecursionLimit = 64
-
-
-def printerr(s):
-    """
-    Module function used for debugging the debug client.
-    
-    @param s data to be printed
-    """
-    sys.__stderr__.write('%s\n' % unicode(s))
-    sys.__stderr__.flush()
-
-
-def setRecursionLimit(limit):
-    """
-    Module function to set the recursion limit.
-    
-    @param limit recursion limit (integer)
-    """
-    global gRecursionLimit
-    gRecursionLimit = limit
-
-
-class DebugBase(bdb.Bdb):
-    """
-    Class implementing base class of the debugger.
-
-    Provides simple wrapper methods around bdb for the 'owning' client to
-    call to step etc.
-    """
-    def __init__(self, dbgClient):
-        """
-        Constructor
-        
-        @param dbgClient the owning client
-        """
-        bdb.Bdb.__init__(self)
-
-        self._dbgClient = dbgClient
-        self._mainThread = True
-        
-        self.breaks = self._dbgClient.breakpoints
-        
-        self.__event = ""
-        self.__isBroken = ""
-        self.cFrame = None
-        
-        # current frame we are at
-        self.currentFrame = None
-        
-        # frame that we are stepping in, can be different than currentFrame
-        self.stepFrame = None
-        
-        # provide a hook to perform a hard breakpoint
-        # Use it like this:
-        # if hasattr(sys, 'breakpoint'): sys.breakpoint()
-        sys.breakpoint = self.set_trace
-        
-        # initialize parent
-        bdb.Bdb.reset(self)
-        
-        self.__recursionDepth = -1
-        self.setRecursionDepth(inspect.currentframe())
-    
-    def getCurrentFrame(self):
-        """
-        Public method to return the current frame.
-        
-        @return the current frame
-        """
-        return self.currentFrame
-    
-    def getFrameLocals(self, frmnr=0):
-        """
-        Public method to return the locals dictionary of the current frame
-        or a frame below.
-        
-        @keyparam frmnr distance of frame to get locals dictionary of. 0 is
-            the current frame (int)
-        @return locals dictionary of the frame
-        """
-        f = self.currentFrame
-        while f is not None and frmnr > 0:
-            f = f.f_back
-            frmnr -= 1
-        return f.f_locals
-    
-    def storeFrameLocals(self, frmnr=0):
-        """
-        Public method to store the locals into the frame, so an access to
-        frame.f_locals returns the last data.
-        
-        @keyparam frmnr distance of frame to store locals dictionary to. 0 is
-            the current frame (int)
-        """
-        cf = self.currentFrame
-        while cf is not None and frmnr > 0:
-            cf = cf.f_back
-            frmnr -= 1
-        ctypes.pythonapi.PyFrame_LocalsToFast(
-            ctypes.py_object(cf),
-            ctypes.c_int(0))
-    
-    def step(self, traceMode):
-        """
-        Public method to perform a step operation in this thread.
-        
-        @param traceMode If it is non-zero, then the step is a step into,
-              otherwise it is a step over.
-        """
-        self.stepFrame = self.currentFrame
-        
-        if traceMode:
-            self.currentFrame = None
-            self.set_step()
-        else:
-            self.set_next(self.currentFrame)
-    
-    def stepOut(self):
-        """
-        Public method to perform a step out of the current call.
-        """
-        self.stepFrame = self.currentFrame
-        self.set_return(self.currentFrame)
-    
-    def go(self, special):
-        """
-        Public method to resume the thread.
-
-        It resumes the thread stopping only at breakpoints or exceptions.
-        
-        @param special flag indicating a special continue operation
-        """
-        self.currentFrame = None
-        self.set_continue(special)
-    
-    def setRecursionDepth(self, frame):
-        """
-        Public method to determine the current recursion depth.
-        
-        @param frame The current stack frame.
-        """
-        self.__recursionDepth = 0
-        while frame is not None:
-            self.__recursionDepth += 1
-            frame = frame.f_back
-    
-    def profile(self, frame, event, arg):
-        """
-        Public method used to trace function calls and returns
-        independently of the debugger trace function.
-        
-        @param frame current stack frame.
-        @param event trace event (string)
-        @param arg arguments
-        @exception RuntimeError raised to indicate too many recursions
-        """
-        if event == 'return':
-            self.cFrame = frame.f_back
-            self.__recursionDepth -= 1
-            self.__sendCallTrace(event, frame, self.cFrame)
-        elif event == 'call':
-            self.__sendCallTrace(event, self.cFrame, frame)
-            self.cFrame = frame
-            self.__recursionDepth += 1
-            if self.__recursionDepth > gRecursionLimit:
-                raise RuntimeError(
-                    'maximum recursion depth exceeded\n'
-                    '(offending frame is two down the stack)')
-    
-    def __sendCallTrace(self, event, fromFrame, toFrame):
-        """
-        Private method to send a call/return trace.
-        
-        @param event trace event (string)
-        @param fromFrame originating frame (frame)
-        @param toFrame destination frame (frame)
-        """
-        if self._dbgClient.callTraceEnabled:
-            if not self.__skip_it(fromFrame) and not self.__skip_it(toFrame):
-                if event in ["call", "return"]:
-                    fr = fromFrame
-                    # TODO: change from and to info to a dictionary
-                    fromStr = "%s:%s:%s" % (
-                        self._dbgClient.absPath(self.fix_frame_filename(fr)),
-                        fr.f_lineno,
-                        fr.f_code.co_name)
-                    fr = toFrame
-                    toStr = "%s:%s:%s" % (
-                        self._dbgClient.absPath(self.fix_frame_filename(fr)),
-                        fr.f_lineno,
-                        fr.f_code.co_name)
-                    self._dbgClient.sendCallTrace(event, fromStr, toStr)
-    
-    def trace_dispatch(self, frame, event, arg):
-        """
-        Public method reimplemented from bdb.py to do some special things.
-        
-        This specialty is to check the connection to the debug server
-        for new events (i.e. new breakpoints) while we are going through
-        the code.
-        
-        @param frame The current stack frame.
-        @param event The trace event (string)
-        @param arg The arguments
-        @return local trace function
-        """
-        if self.quitting:
-            return  # None
-        
-        # give the client a chance to push through new break points.
-        self._dbgClient.eventPoll()
-        
-        self.__event = event
-        self.__isBroken = False
-        
-        if event == 'line':
-            return self.dispatch_line(frame)
-        if event == 'call':
-            return self.dispatch_call(frame, arg)
-        if event == 'return':
-            return self.dispatch_return(frame, arg)
-        if event == 'exception':
-            return self.dispatch_exception(frame, arg)
-        if event == 'c_call':
-            return self.trace_dispatch
-        if event == 'c_exception':
-            return self.trace_dispatch
-        if event == 'c_return':
-            return self.trace_dispatch
-        print 'DebugBase.trace_dispatch: unknown debugging event:', repr(event) # __IGNORE_WARNING__
-        return self.trace_dispatch
-
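`trace_dispatch()` fans out on the event names CPython's tracing machinery delivers. The contract it implements - the global trace function is called for `'call'` and returns the local trace function, which then receives `'line'` and `'return'` - can be exercised with a minimal tracer (illustrative Python 3 sketch):

```python
import sys

events = []

def tracer(frame, event, arg):
    # CPython invokes this for 'call' on each new frame; returning it
    # installs it as that frame's local trace function, which then
    # receives 'line', 'return' and 'exception' events - the same
    # contract trace_dispatch() implements above.
    events.append(event)
    return tracer

def work():
    x = 1
    return x + 1

sys.settrace(tracer)
work()
sys.settrace(None)
```
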
-    def dispatch_line(self, frame):
-        """
-        Public method reimplemented from bdb.py to do some special things.
-        
-        This speciality is to check the connection to the debug server
-        for new events (i.e. new breakpoints) while we are going through
-        the code.
-        
-        @param frame The current stack frame.
-        @return local trace function
-        @exception bdb.BdbQuit raised to indicate the end of the debug session
-        """
-        if self.stop_here(frame) or self.break_here(frame):
-            self.user_line(frame)
-            if self.quitting:
-                raise bdb.BdbQuit
-        return self.trace_dispatch
-
-    def dispatch_return(self, frame, arg):
-        """
-        Public method reimplemented from bdb.py to handle passive mode cleanly.
-        
-        @param frame The current stack frame.
-        @param arg The arguments
-        @return local trace function
-        @exception bdb.BdbQuit raised to indicate the end of the debug session
-        """
-        if self.stop_here(frame) or frame == self.returnframe:
-            # Ignore return events in generator except when stepping.
-            if self.stopframe and frame.f_code.co_flags & CO_GENERATOR:
-                return self.trace_dispatch
-            self.user_return(frame, arg)
-            if self.quitting and not self._dbgClient.passive:
-                raise bdb.BdbQuit
-        return self.trace_dispatch
-
-    def dispatch_exception(self, frame, arg):
-        """
-        Public method reimplemented from bdb.py to always call user_exception.
-        
-        @param frame The current stack frame.
-        @param arg The arguments
-        @return local trace function
-        @exception bdb.BdbQuit raised to indicate the end of the debug session
-        """
-        if not self.__skip_it(frame):
-            # When stepping with next/until/return in a generator frame,
-            # skip the internal StopIteration exception (with no traceback)
-            # triggered by a subiterator run with the 'yield from'
-            # statement.
-            if not (frame.f_code.co_flags & CO_GENERATOR and
-                    arg[0] is StopIteration and arg[2] is None):
-                self.user_exception(frame, arg)
-                if self.quitting:
-                    raise bdb.BdbQuit
-        
-        # Stop at the StopIteration or GeneratorExit exception when the user
-        # has set stopframe in a generator by issuing a return command, or a
-        # next/until command at the last statement in the generator before the
-        # exception.
-        elif (self.stopframe and frame is not self.stopframe and
-                self.stopframe.f_code.co_flags & CO_GENERATOR and
-                arg[0] in (StopIteration, GeneratorExit)):
-            self.user_exception(frame, arg)
-            if self.quitting:
-                raise bdb.BdbQuit
-        
-        return self.trace_dispatch
-
-    def set_trace(self, frame=None):
-        """
-        Public method reimplemented from bdb.py to do some special setup.
-        
-        @param frame frame to start debugging from
-        """
-        bdb.Bdb.set_trace(self, frame)
-        sys.setprofile(self.profile)
-    
-    def set_continue(self, special):
-        """
-        Public method reimplemented from bdb.py to always get informed of
-        exceptions.
-        
-        @param special flag indicating a special continue operation
-        """
-        # Modified version of the one found in bdb.py
-        # Here we only set a new stop frame if it is a normal continue.
-        if not special:
-            self._set_stopinfo(self.botframe, None)
-        else:
-            self._set_stopinfo(self.stopframe, None)
-
-    def set_quit(self):
-        """
-        Public method to quit.
-        
-        It wraps call to bdb to clear the current frame properly.
-        """
-        self.currentFrame = None
-        sys.setprofile(None)
-        bdb.Bdb.set_quit(self)
-    
-    def fix_frame_filename(self, frame):
-        """
-        Public method used to fixup the filename for a given frame.
-        
-        The logic employed here is that if a module was loaded
-        from a .pyc file, then the correct .py to operate with
-        should be in the same path as the .pyc. The reason this
-        logic is needed is that when a .pyc file is generated, the
-        filename embedded and thus what is readable in the code object
-        of the frame object is the fully qualified filepath when the
-        pyc is generated. If files are moved from machine to machine
-        this can break debugging as the .pyc will refer to the .py
-        on the original machine. Another case might be sharing
-        code over a network... This logic deals with that.
-        
-        @param frame the frame object
-        @return fixed up file name (string)
-        """
-        # get module name from __file__
-        if '__file__' in frame.f_globals and \
-           frame.f_globals['__file__'] and \
-           frame.f_globals['__file__'] == frame.f_code.co_filename:
-            root, ext = os.path.splitext(frame.f_globals['__file__'])
-            if ext in ['.pyc', '.py', '.py2', '.pyo']:
-                fixedName = root + '.py'
-                if os.path.exists(fixedName):
-                    return fixedName
-                
-                fixedName = root + '.py2'
-                if os.path.exists(fixedName):
-                    return fixedName
-
-        return frame.f_code.co_filename
-
-    def set_watch(self, cond, temporary=0):
-        """
-        Public method to set a watch expression.
-        
-        @param cond expression of the watch expression (string)
-        @param temporary flag indicating a temporary watch expression (boolean)
-        """
-        bp = bdb.Breakpoint("Watch", 0, temporary, cond)
-        if cond.endswith('??created??') or cond.endswith('??changed??'):
-            bp.condition, bp.special = cond.split()
-        else:
-            bp.condition = cond
-            bp.special = ""
-        bp.values = {}
-        if "Watch" not in self.breaks:
-            self.breaks["Watch"] = 1
-        else:
-            self.breaks["Watch"] += 1
-    
-    def clear_watch(self, cond):
-        """
-        Public method to clear a watch expression.
-        
-        @param cond expression of the watch expression to be cleared (string)
-        """
-        try:
-            possibles = bdb.Breakpoint.bplist["Watch", 0]
-            for i in range(0, len(possibles)):
-                b = possibles[i]
-                if b.cond == cond:
-                    b.deleteMe()
-                    self.breaks["Watch"] -= 1
-                    if self.breaks["Watch"] == 0:
-                        del self.breaks["Watch"]
-                    break
-        except KeyError:
-            pass
-    
-    def get_watch(self, cond):
-        """
-        Public method to get a watch expression.
-        
-        @param cond expression of the watch expression to be retrieved
-            (string)
-        @return reference to the watch point
-        """
-        possibles = bdb.Breakpoint.bplist["Watch", 0]
-        for i in range(0, len(possibles)):
-            b = possibles[i]
-            if b.cond == cond:
-                return b
-    
-    def __do_clearWatch(self, cond):
-        """
-        Private method called to clear a temporary watch expression.
-        
-        @param cond expression of the watch expression to be cleared (string)
-        """
-        self.clear_watch(cond)
-        self._dbgClient.sendClearTemporaryWatch(cond)
-
-    def __effective(self, frame):
-        """
-        Private method to determine whether a watch expression is effective.
-        
-        @param frame the current execution frame
-        @return tuple of watch expression and a flag indicating that a
-            temporary watch expression may be deleted (bdb.Breakpoint, boolean)
-        """
-        possibles = bdb.Breakpoint.bplist["Watch", 0]
-        for i in range(0, len(possibles)):
-            b = possibles[i]
-            if b.enabled == 0:
-                continue
-            if not b.cond:
-                # watch expression without expression shouldn't occur,
-                # just ignore it
-                continue
-            try:
-                val = eval(b.condition, frame.f_globals, frame.f_locals)
-                if b.special:
-                    if b.special == '??created??':
-                        if b.values[frame][0] == 0:
-                            b.values[frame][0] = 1
-                            b.values[frame][1] = val
-                            return (b, True)
-                        else:
-                            continue
-                    b.values[frame][0] = 1
-                    if b.special == '??changed??':
-                        if b.values[frame][1] != val:
-                            b.values[frame][1] = val
-                            if b.values[frame][2] > 0:
-                                b.values[frame][2] -= 1
-                                continue
-                            else:
-                                return (b, True)
-                        else:
-                            continue
-                    continue
-                if val:
-                    if b.ignore > 0:
-                        b.ignore -= 1
-                        continue
-                    else:
-                        return (b, True)
-            except Exception:
-                if b.special:
-                    try:
-                        b.values[frame][0] = 0
-                    except KeyError:
-                        b.values[frame] = [0, None, b.ignore]
-                continue
-        return (None, False)
-    
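The core of `__effective()` is evaluating the watch condition in the frame's namespaces while swallowing evaluation errors. That kernel can be sketched on its own (illustrative helper name, not eric API):

```python
def watch_fires(condition, frame_globals, frame_locals):
    # Evaluate a watch condition in the frame's namespaces, swallowing
    # errors the way __effective() does: a broken expression never
    # interrupts the debuggee, it simply never fires.
    try:
        return bool(eval(condition, frame_globals, frame_locals))
    except Exception:
        return False
```
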
-    def break_here(self, frame):
-        """
-        Public method reimplemented from bdb.py to fix the filename from the
-        frame.
-        
-        See fix_frame_filename for more info.
-        
-        @param frame the frame object
-        @return flag indicating the break status (boolean)
-        """
-        filename = self.canonic(self.fix_frame_filename(frame))
-        if filename not in self.breaks and "Watch" not in self.breaks:
-            return False
-        
-        if filename in self.breaks:
-            lineno = frame.f_lineno
-            if lineno not in self.breaks[filename]:
-                # The line itself has no breakpoint, but maybe the line is the
-                # first line of a function with breakpoint set by function
-                # name.
-                lineno = frame.f_code.co_firstlineno
-            if lineno in self.breaks[filename]:
-                # flag says ok to delete temp. breakpoint
-                (bp, flag) = bdb.effective(filename, lineno, frame)
-                if bp:
-                    self.currentbp = bp.number
-                    if (flag and bp.temporary):
-                        self.__do_clear(filename, lineno)
-                    return True
-        
-        if "Watch" in self.breaks:
-            # flag says ok to delete temp. watch
-            (bp, flag) = self.__effective(frame)
-            if bp:
-                self.currentbp = bp.number
-                if (flag and bp.temporary):
-                    self.__do_clearWatch(bp.cond)
-                return True
-        
-        return False
-
-    def break_anywhere(self, frame):
-        """
-        Public method reimplemented from bdb.py to do some special things.
-        
-        This speciality is to fix the filename from the frame
-        (see fix_frame_filename for more info).
-        
-        @param frame the frame object
-        @return flag indicating the break status (boolean)
-        """
-        return \
-            self.canonic(self.fix_frame_filename(frame)) in self.breaks or \
-            ("Watch" in self.breaks and self.breaks["Watch"])
-
-    def get_break(self, filename, lineno):
-        """
-        Public method reimplemented from bdb.py to get the first breakpoint of
-        a particular line.
-        
-        Because eric6 supports only one breakpoint per line, this overwritten
-        method will return this one and only breakpoint.
-        
-        @param filename filename of the bp to retrieve (string)
-        @param lineno linenumber of the bp to retrieve (integer)
-        @return breakpoint or None, if there is no bp
-        """
-        filename = self.canonic(filename)
-        return filename in self.breaks and \
-            lineno in self.breaks[filename] and \
-            bdb.Breakpoint.bplist[filename, lineno][0] or None
-    
-    def __do_clear(self, filename, lineno):
-        """
-        Private method called to clear a temporary breakpoint.
-        
-        @param filename name of the file the bp belongs to
-        @param lineno linenumber of the bp
-        """
-        self.clear_break(filename, lineno)
-        self._dbgClient.sendClearTemporaryBreakpoint(filename, lineno)
-
-    def getStack(self):
-        """
-        Public method to get the stack.
-        
-        @return list of lists with file name (string), line number (integer)
-            and function name (string)
-        """
-        fr = self.cFrame
-        stack = []
-        while fr is not None:
-            fname = self._dbgClient.absPath(self.fix_frame_filename(fr))
-            if not fname.startswith("<"):
-                fline = fr.f_lineno
-                ffunc = fr.f_code.co_name
-                
-                if ffunc == '?':
-                    ffunc = ''
-            
-                if ffunc and not ffunc.startswith("<"):
-                    argInfo = inspect.getargvalues(fr)
-                    try:
-                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
-                                                        argInfo[2], argInfo[3])
-                    except Exception:
-                        fargs = ""
-                else:
-                    fargs = ""
-                
-                stack.append([fname, fline, ffunc, fargs])
-            
-            if fr == self._dbgClient.mainFrame:
-                fr = None
-            else:
-                fr = fr.f_back
-        
-        return stack
-    
-    def user_line(self, frame):
-        """
-        Public method reimplemented to handle the program about to execute a
-        particular line.
-        
-        @param frame the frame object
-        """
-        line = frame.f_lineno
-
-        # We never stop on line 0.
-        if line == 0:
-            return
-
-        fn = self._dbgClient.absPath(self.fix_frame_filename(frame))
-
-        # See if we are skipping at the start of a newly loaded program.
-        if self._dbgClient.mainFrame is None:
-            if fn != self._dbgClient.getRunning():
-                return
-            fr = frame
-            while (fr is not None and
-                   fr.f_code not in [
-                        self._dbgClient.handleLine.func_code,
-                        self._dbgClient.handleJsonCommand.func_code]):
-                self._dbgClient.mainFrame = fr
-                fr = fr.f_back
-
-        self.currentFrame = frame
-        
-        fr = frame
-        stack = []
-        while fr is not None:
-            # Reset the trace function so we can be sure
-            # to trace all functions up the stack... This gets around
-            # problems where an exception/breakpoint has occurred
-            # but we had disabled tracing along the way via a None
-            # return from dispatch_call
-            fr.f_trace = self.trace_dispatch
-            fname = self._dbgClient.absPath(self.fix_frame_filename(fr))
-            if not fname.startswith("<"):
-                fline = fr.f_lineno
-                ffunc = fr.f_code.co_name
-                
-                if ffunc == '?':
-                    ffunc = ''
-                
-                if ffunc and not ffunc.startswith("<"):
-                    argInfo = inspect.getargvalues(fr)
-                    try:
-                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
-                                                        argInfo[2], argInfo[3])
-                    except Exception:
-                        fargs = ""
-                else:
-                    fargs = ""
-                
-                stack.append([fname, fline, ffunc, fargs])
-            
-            if fr == self._dbgClient.mainFrame:
-                fr = None
-            else:
-                fr = fr.f_back
-        
-        self.__isBroken = True
-        
-        self._dbgClient.sendResponseLine(stack)
-        self._dbgClient.eventLoop()
-
-    def user_exception(self, frame, (exctype, excval, exctb), unhandled=0):
-        """
-        Public method reimplemented to report an exception to the debug server.
-        
-        @param frame the frame object
-        @param exctype the type of the exception
-        @param excval data about the exception
-        @param exctb traceback for the exception
-        @param unhandled flag indicating an uncaught exception
-        """
-        if exctype in [GeneratorExit, StopIteration]:
-            # ignore these
-            return
-        
-        if exctype in [SystemExit, bdb.BdbQuit]:
-            atexit._run_exitfuncs()
-            if excval is None:
-                exitcode = 0
-                message = ""
-            elif isinstance(excval, (unicode, str)):
-                exitcode = 1
-                message = excval
-            elif isinstance(excval, int):
-                exitcode = excval
-                message = ""
-            elif isinstance(excval, SystemExit):
-                code = excval.code
-                if isinstance(code, (unicode, str)):
-                    exitcode = 1
-                    message = code
-                elif isinstance(code, int):
-                    exitcode = code
-                    message = ""
-                else:
-                    exitcode = 1
-                    message = str(code)
-            else:
-                exitcode = 1
-                message = str(excval)
-            self._dbgClient.progTerminated(exitcode, message)
-            return
-        
-        if exctype in [SyntaxError, IndentationError]:
-            try:
-                message, (filename, lineno, charno, text) = excval
-            except ValueError:
-                message = ""
-                filename = ""
-                lineno = 0
-                charno = 0
-                realSyntaxError = True
-            else:
-                realSyntaxError = os.path.exists(filename)
-            
-            if realSyntaxError:
-                self._dbgClient.sendSyntaxError(
-                    message, filename, lineno, charno)
-                self._dbgClient.eventLoop()
-                return
-        
-        if type(exctype) in [types.ClassType,   # Python up to 2.4
-                             types.TypeType]:   # Python 2.5+
-            exctype = exctype.__name__
-        
-        if excval is None:
-            excval = ''
-        
-        if unhandled:
-            exctypetxt = "unhandled %s" % unicode(exctype)
-        else:
-            exctypetxt = unicode(exctype)
-        try:
-            excvaltxt = unicode(excval).encode(self._dbgClient.getCoding())
-        except TypeError:
-            excvaltxt = str(excval)
-        
-        stack = []
-        if exctb:
-            frlist = self.__extract_stack(exctb)
-            frlist.reverse()
-            
-            self.currentFrame = frlist[0]
-            
-            for fr in frlist:
-                filename = self._dbgClient.absPath(self.fix_frame_filename(fr))
-                
-                if os.path.basename(filename).startswith("DebugClient") or \
-                   os.path.basename(filename) == "bdb.py":
-                    break
-                
-                linenr = fr.f_lineno
-                ffunc = fr.f_code.co_name
-                
-                if ffunc == '?':
-                    ffunc = ''
-                
-                if ffunc and not ffunc.startswith("<"):
-                    argInfo = inspect.getargvalues(fr)
-                    try:
-                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
-                                                        argInfo[2], argInfo[3])
-                    except Exception:
-                        fargs = ""
-                else:
-                    fargs = ""
-                
-                stack.append([filename, linenr, ffunc, fargs])
-        
-        self._dbgClient.sendException(exctypetxt, excvaltxt, stack)
-        
-        if exctb is None:
-            return
-        
-        self._dbgClient.eventLoop()
-    
-    def __extract_stack(self, exctb):
-        """
-        Private member to return a list of stack frames.
-        
-        @param exctb exception traceback
-        @return list of stack frames
-        """
-        tb = exctb
-        stack = []
-        while tb is not None:
-            stack.append(tb.tb_frame)
-            tb = tb.tb_next
-        tb = None
-        return stack
-
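The traceback walk in `__extract_stack()` - follow `tb_next` links and collect each `tb_frame` - works the same way in any CPython; an illustrative Python 3 sketch:

```python
import sys

def extract_frames(tb):
    # Collect the frames along a traceback, innermost last,
    # as __extract_stack() does before reversing the list.
    frames = []
    while tb is not None:
        frames.append(tb.tb_frame)
        tb = tb.tb_next
    return frames

def boom():
    raise ValueError("boom")

try:
    boom()
except ValueError:
    frames = extract_frames(sys.exc_info()[2])
```
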
-    def user_return(self, frame, retval):
-        """
-        Public method reimplemented to report program termination to the
-        debug server.
-        
-        @param frame the frame object
-        @param retval the return value of the program
-        """
-        # The program has finished if we have just left the first frame.
-        if frame == self._dbgClient.mainFrame and \
-                self._mainThread:
-            atexit._run_exitfuncs()
-            self._dbgClient.progTerminated(retval)
-        elif frame is not self.stepFrame:
-            self.stepFrame = None
-            self.user_line(frame)
-
-    def stop_here(self, frame):
-        """
-        Public method reimplemented to filter out debugger files.
-        
-        Tracing is turned off for files that are part of the
-        debugger that are called from the application being debugged.
-        
-        @param frame the frame object
-        @return flag indicating whether the debugger should stop here
-        """
-        if self.__skip_it(frame):
-            return False
-        return bdb.Bdb.stop_here(self, frame)
-
-    def __skip_it(self, frame):
-        """
-        Private method to filter out debugger files.
-        
-        Tracing is turned off for files that are part of the
-        debugger that are called from the application being debugged.
-        
-        @param frame the frame object
-        @return flag indicating whether the debugger should skip this frame
-        """
-        if frame is None:
-            return True
-        
-        fn = self.fix_frame_filename(frame)
-
-        # Eliminate things like <string> and <stdin>.
-        if fn[0] == '<':
-            return True
-
-        #XXX - think of a better way to do this.  It's only a convenience for
-        #debugging the debugger - when the debugger code is in the current
-        #directory.
-        if os.path.basename(fn) in [
-            'AsyncFile.py', 'DCTestResult.py',
-            'DebugBase.py', 'DebugClient.py',
-            'DebugClientBase.py',
-            'DebugClientCapabilities.py',
-            'DebugClientThreads.py',
-            'DebugConfig.py', 'DebugThread.py',
-            'DebugUtilities.py', 'FlexCompleter.py',
-            'PyProfile.py'] or \
-           os.path.dirname(fn).endswith("coverage"):
-            return True
-
-        if self._dbgClient.shouldSkip(fn):
-            return True
-        
-        return False
-    
-    def isBroken(self):
-        """
-        Public method to return the broken state of the debugger.
-        
-        @return flag indicating the broken state (boolean)
-        """
-        return self.__isBroken
-    
-    def getEvent(self):
-        """
-        Public method to return the last debugger event.
-        
-        @return last debugger event (string)
-        """
-        return self.__event
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
--- a/DebugClients/Python/DebugClient.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,39 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing a Qt free version of the debug client.
-"""
-
-from DebugBase import DebugBase
-import DebugClientBase
-
-
-class DebugClient(DebugClientBase.DebugClientBase, DebugBase):
-    """
-    Class implementing the client side of the debugger.
-    
-    This variant of the debugger implements the standard debugger client
-    by subclassing all relevant base classes.
-    """
-    def __init__(self):
-        """
-        Constructor
-        """
-        DebugClientBase.DebugClientBase.__init__(self)
-        
-        DebugBase.__init__(self, self)
-        
-        self.variant = 'Standard'
-
-# We are normally called by the debugger to execute directly.
-
-if __name__ == '__main__':
-    debugClient = DebugClient()
-    debugClient.main()
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
--- a/DebugClients/Python/DebugClientBase.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,2284 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing a debug client base class.
-"""
-
-import sys
-import socket
-import select
-import codeop
-import traceback
-import os
-import time
-import imp
-import re
-import atexit
-import signal
-import inspect
-
-
-import DebugClientCapabilities
-from DebugBase import setRecursionLimit, printerr   # __IGNORE_WARNING__
-from AsyncFile import AsyncFile, AsyncPendingWrite
-from DebugConfig import ConfigVarTypeStrings
-from FlexCompleter import Completer
-from DebugUtilities import prepareJsonCommand
-
-
-DebugClientInstance = None
-
-###############################################################################
-
-
-def DebugClientRawInput(prompt="", echo=1):
-    """
-    Replacement for the standard raw_input builtin.
-    
-    This function works with the split debugger.
-    
-    @param prompt prompt to be shown. (string)
-    @param echo flag indicating echoing of the input (boolean)
-    @return result of the raw_input() call
-    """
-    if DebugClientInstance is None or not DebugClientInstance.redirect:
-        return DebugClientOrigRawInput(prompt)
-
-    return DebugClientInstance.raw_input(prompt, echo)
-
-# Use our own raw_input().
-try:
-    DebugClientOrigRawInput = __builtins__.__dict__['raw_input']
-    __builtins__.__dict__['raw_input'] = DebugClientRawInput
-except (AttributeError, KeyError):
-    import __main__
-    DebugClientOrigRawInput = __main__.__builtins__.__dict__['raw_input']
-    __main__.__builtins__.__dict__['raw_input'] = DebugClientRawInput
-
-###############################################################################
-
-
-def DebugClientInput(prompt=""):
-    """
-    Replacement for the standard input builtin.
-    
-    This function works with the split debugger.
-    
-    @param prompt prompt to be shown (string)
-    @return result of the input() call
-    """
-    if DebugClientInstance is None or DebugClientInstance.redirect == 0:
-        return DebugClientOrigInput(prompt)
-
-    return DebugClientInstance.input(prompt)
-
-# Use our own input().
-try:
-    DebugClientOrigInput = __builtins__.__dict__['input']
-    __builtins__.__dict__['input'] = DebugClientInput
-except (AttributeError, KeyError):
-    import __main__
-    DebugClientOrigInput = __main__.__builtins__.__dict__['input']
-    __main__.__builtins__.__dict__['input'] = DebugClientInput
-
-###############################################################################
-
-
-def DebugClientFork():
-    """
-    Replacement for the standard os.fork().
-    
-    @return result of the fork() call
-    """
-    if DebugClientInstance is None:
-        return DebugClientOrigFork()
-    
-    return DebugClientInstance.fork()
-
-# use our own fork().
-if 'fork' in dir(os):
-    DebugClientOrigFork = os.fork
-    os.fork = DebugClientFork
-
-###############################################################################
-
-
-def DebugClientClose(fd):
-    """
-    Replacement for the standard os.close(fd).
-    
-    @param fd open file descriptor to be closed (integer)
-    """
-    if DebugClientInstance is None:
-        DebugClientOrigClose(fd)
-        return
-    
-    DebugClientInstance.close(fd)
-
-# use our own close().
-if 'close' in dir(os):
-    DebugClientOrigClose = os.close
-    os.close = DebugClientClose
-
-###############################################################################
-
-
-def DebugClientSetRecursionLimit(limit):
-    """
-    Replacement for the standard sys.setrecursionlimit(limit).
-    
-    @param limit recursion limit (integer)
-    """
-    rl = max(limit, 64)
-    setRecursionLimit(rl)
-    DebugClientOrigSetRecursionLimit(rl + 64)
-
-# use our own setrecursionlimit().
-if 'setrecursionlimit' in dir(sys):
-    DebugClientOrigSetRecursionLimit = sys.setrecursionlimit
-    sys.setrecursionlimit = DebugClientSetRecursionLimit
-    DebugClientSetRecursionLimit(sys.getrecursionlimit())
-
-###############################################################################
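The recursion-limit wrapper above enforces a floor of 64 and then pads the real interpreter limit, so the debugger's own trace frames do not eat into the user program's recursion budget. A minimal sketch of that logic; `DEBUGGER_HEADROOM` and `set_limit_with_headroom` are illustrative names:

```python
import sys

_orig_set = sys.setrecursionlimit
_saved = sys.getrecursionlimit()

DEBUGGER_HEADROOM = 64  # extra frames reserved for the debugger's own calls

def set_limit_with_headroom(limit):
    # Enforce a small floor, then pad the real limit so the trace
    # machinery has room on top of the user-visible budget.
    effective = max(limit, 64)
    _orig_set(effective + DEBUGGER_HEADROOM)
    return effective

requested = set_limit_with_headroom(1000)
actual = sys.getrecursionlimit()
# requested == 1000, actual == 1064

_orig_set(_saved)  # restore the interpreter's previous limit
```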
-
-
-class DebugClientBase(object):
-    """
-    Class implementing the client side of the debugger.
-
-    It provides access to the Python interpreter from a debugger running in
-    another process, whether or not the Qt event loop is running.
-
-    The protocol between the debugger and the client assumes that there will
-    be a single source of debugger commands and a single source of Python
-    statements.  Commands and statements are always exactly one line and may
-    be interspersed.
-
-    The protocol is as follows.  First the client opens a connection to the
-    debugger and then sends a series of one line commands.  A command is
-    either >Load<, >Step<, >StepInto<, ... or a Python statement.
-    See DebugProtocol.py for a listing of valid protocol tokens.
-
-    A Python statement consists of the statement to execute, followed (in a
-    separate line) by >OK?<. If the statement was incomplete then the
-    response is >Continue<. If there was an exception then the response
-    is >Exception<. Otherwise the response is >OK<. The reason
-    for the >OK?< part is to provide a sentinel (i.e. the responding
-    >OK<) after any possible output as a result of executing the command.
-
-    The client may send any other lines at any other time which should be
-    interpreted as program output.
-
-    If the debugger closes the session there is no response from the client.
-    The client may close the session at any time as a result of the script
-    being debugged closing or crashing.
-    
-    <b>Note</b>: This class is meant to be subclassed by individual
-    DebugClient classes. Do not instantiate it directly.
-    """
-    clientCapabilities = DebugClientCapabilities.HasAll
-    
-    def __init__(self):
-        """
-        Constructor
-        """
-        self.breakpoints = {}
-        self.redirect = True
-        self.__receiveBuffer = ""
-
-        # The next couple of members are needed for the threaded version.
-        # For this base class they contain static values for the
-        # non-threaded debugger.
-        
-        # dictionary of all threads running
-        self.threads = {}
-        
-        # the "current" thread, basically the thread we are at a
-        # breakpoint for.
-        self.currentThread = self
-        
-        # special objects representing the main scripts thread and frame
-        self.mainThread = self
-        self.mainFrame = None
-        self.framenr = 0
-        
-        # The context to run the debugged program in.
-        self.debugMod = imp.new_module('__main__')
-        self.debugMod.__dict__['__builtins__'] = __builtins__
-
-        # The list of complete lines to execute.
-        self.buffer = ''
-        
-        # The list of regexp objects to filter variables against
-        self.globalsFilterObjects = []
-        self.localsFilterObjects = []
-
-        self._fncache = {}
-        self.dircache = []
-        self.mainProcStr = None     # used for the passive mode
-        self.passive = False        # used to indicate the passive mode
-        self.running = None
-        self.test = None
-        self.tracePython = False
-        self.debugging = False
-        
-        self.fork_auto = False
-        self.fork_child = False
-
-        self.readstream = None
-        self.writestream = None
-        self.errorstream = None
-        self.pollingDisabled = False
-        
-        self.callTraceEnabled = False
-        self.__newCallTraceEnabled = False
-        
-        self.skipdirs = sys.path[:]
-        
-        self.variant = 'You should not see this'
-        
-        # commandline completion stuff
-        self.complete = Completer(self.debugMod.__dict__).complete
-        
-        if sys.hexversion < 0x2020000:
-            self.compile_command = codeop.compile_command
-        else:
-            self.compile_command = codeop.CommandCompiler()
-        
-        self.coding_re = re.compile(r"coding[:=]\s*([-\w_.]+)")
-        self.defaultCoding = 'utf-8'
-        self.__coding = self.defaultCoding
-        self.noencoding = False
-
-    def getCoding(self):
-        """
-        Public method to return the current coding.
-        
-        @return codec name (string)
-        """
-        return self.__coding
-        
-    def __setCoding(self, filename):
-        """
-        Private method to set the coding used by a python file.
-        
-        @param filename name of the file to inspect (string)
-        """
-        if self.noencoding:
-            self.__coding = sys.getdefaultencoding()
-        else:
-            default = 'latin-1'
-            try:
-                f = open(filename, 'rb')
-                # read the first and second line
-                text = f.readline()
-                text = "%s%s" % (text, f.readline())
-                f.close()
-            except IOError:
-                self.__coding = default
-                return
-            
-            for l in text.splitlines():
-                m = self.coding_re.search(l)
-                if m:
-                    self.__coding = m.group(1)
-                    return
-            self.__coding = default
-
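`__setCoding` above scans only the first two lines of the source file for a PEP 263 style declaration, falling back to `latin-1`. The same detection can be sketched standalone; `detect_coding` is an illustrative helper (the real method reads the lines from the file itself), and the regex is the one assigned to `self.coding_re` in the constructor:

```python
import re

# Same pattern as self.coding_re in the constructor (PEP 263 cookie).
coding_re = re.compile(r"coding[:=]\s*([-\w_.]+)")

def detect_coding(header, default="latin-1"):
    # Only the first two lines are significant, mirroring __setCoding.
    for line in header.splitlines()[:2]:
        m = coding_re.search(line)
        if m:
            return m.group(1)
    return default

enc = detect_coding("#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n")
# enc == 'utf-8'
```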
-    def attachThread(self, target=None, args=None, kwargs=None, mainThread=0):
-        """
-        Public method to set up a thread for DebugClient to debug.
-        
-        If mainThread is non-zero, then we are attaching to the already
-        started main thread of the app and the rest of the args are ignored.
-        
-        This is just an empty function and is overridden in the threaded
-        debugger.
-        
-        @param target the start function of the target thread (i.e. the user
-            code)
-        @param args arguments to pass to target
-        @param kwargs keyword arguments to pass to target
-        @param mainThread non-zero, if we are attaching to the already
-              started main thread of the app
-        """
-        if self.debugging:
-            sys.setprofile(self.profile)
-    
-    def __dumpThreadList(self):
-        """
-        Private method to send the list of threads.
-        """
-        threadList = []
-        if self.threads and self.currentThread:
-            # indication for the threaded debugger
-            currentId = self.currentThread.get_ident()
-            for t in self.threads.values():
-                d = {}
-                d["id"] = t.get_ident()
-                d["name"] = t.get_name()
-                d["broken"] = t.isBroken()
-                threadList.append(d)
-        else:
-            currentId = -1
-            d = {}
-            d["id"] = -1
-            d["name"] = "MainThread"
-            if hasattr(self, "isBroken"):
-                d["broken"] = self.isBroken()
-            else:
-                d["broken"] = False
-            threadList.append(d)
-        
-        self.sendJsonCommand("ResponseThreadList", {
-            "currentID": currentId,
-            "threadList": threadList,
-        })
-    
-    def raw_input(self, prompt, echo):
-        """
-        Public method to implement raw_input() using the event loop.
-        
-        @param prompt the prompt to be shown (string)
-        @param echo Flag indicating echoing of the input (boolean)
-        @return the entered string
-        """
-        self.sendJsonCommand("RequestRaw", {
-            "prompt": prompt,
-            "echo": echo,
-        })
-        self.eventLoop(True)
-        return self.rawLine
-
-    def input(self, prompt):
-        """
-        Public method to implement input() using the event loop.
-        
-        @param prompt the prompt to be shown (string)
-        @return the entered string evaluated as a Python expression
-        """
-        return eval(self.raw_input(prompt, 1))
-        
-    def sessionClose(self, exit=True):
-        """
-        Public method to close the session with the debugger and optionally
-        terminate.
-        
-        @param exit flag indicating to terminate (boolean)
-        """
-        try:
-            self.set_quit()
-        except Exception:
-            pass
-
-        self.debugging = False
-        
-        # make sure we close down our end of the socket
-        # might be overkill as normally stdin, stdout and stderr
-        # SHOULD be closed on exit, but it does not hurt to do it here
-        self.readstream.close(True)
-        self.writestream.close(True)
-        self.errorstream.close(True)
-
-        if exit:
-            # Ok, go away.
-            sys.exit()
-
-    def handleLine(self, line):
-        """
-        Public method to handle the receipt of a complete line.
-
-        It first looks for a valid protocol token at the start of the line.
-        Thereafter it tries to execute the lines accumulated so far.
-        
-        @param line the received line
-        """
-        # Remove any newline.
-        if line[-1] == '\n':
-            line = line[:-1]
-
-##        printerr(line)          ##debug
-        
-        self.handleJsonCommand(line)
-    
-    def handleJsonCommand(self, jsonStr):
-        """
-        Public method to handle a command serialized as a JSON string.
-        
-        @param jsonStr string containing the command received from the IDE
-        @type str
-        """
-        import json
-        
-        try:
-            commandDict = json.loads(jsonStr.strip())
-        except ValueError as err:      # Python 2 json raises ValueError
-            printerr(str(err))
-            return
-        
-        method = commandDict["method"]
-        params = commandDict["params"]
-        
-        if method == "RequestVariables":
-            self.__dumpVariables(
-                params["frameNumber"], params["scope"], params["filters"])
-        
-        elif method == "RequestVariable":
-            self.__dumpVariable(
-                params["variable"], params["frameNumber"],
-                params["scope"], params["filters"])
-        
-        elif method == "RequestThreadList":
-            self.__dumpThreadList()
-        
-        elif method == "RequestThreadSet":
-            if params["threadID"] in self.threads:
-                self.setCurrentThread(params["threadID"])
-                self.sendJsonCommand("ResponseThreadSet", {})
-                stack = self.currentThread.getStack()
-                self.sendJsonCommand("ResponseStack", {
-                    "stack": stack,
-                })
-        
-        elif method == "RequestCapabilities":
-            self.sendJsonCommand("ResponseCapabilities", {
-                "capabilities": self.__clientCapabilities(),
-                "clientType": "Python3"
-            })
-        
-        elif method == "RequestBanner":
-            self.sendJsonCommand("ResponseBanner", {
-                "version": "Python {0}".format(sys.version),
-                "platform": socket.gethostname(),
-                "dbgclient": self.variant,
-            })
-        
-        elif method == "RequestSetFilter":
-            self.__generateFilterObjects(params["scope"], params["filter"])
-        
-        elif method == "RequestCallTrace":
-            if self.debugging:
-                self.callTraceEnabled = params["enable"]
-            else:
-                self.__newCallTraceEnabled = params["enable"]
-                # remember for later
-        
-        elif method == "RequestEnvironment":
-            for key, value in params["environment"].items():
-                if key.endswith("+"):
-                    if key[:-1] in os.environ:
-                        os.environ[key[:-1]] += value
-                    else:
-                        os.environ[key[:-1]] = value
-                else:
-                    os.environ[key] = value
-        
-        elif method == "RequestLoad":
-            self._fncache = {}
-            self.dircache = []
-            sys.argv = []
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            self.__setCoding(params["filename"])
-            sys.argv.append(params["filename"])
-            sys.argv.extend(params["argv"])
-            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
-            if params["workdir"] == '':
-                os.chdir(sys.path[1])
-            else:
-                os.chdir(params["workdir"])
-            
-            self.running = sys.argv[0]
-            self.mainFrame = None
-            self.debugging = True
-            
-            self.fork_auto = params["autofork"]
-            self.fork_child = params["forkChild"]
-            
-            self.threads.clear()
-            self.attachThread(mainThread=True)
-            
-            # set the system exception handling function to ensure that
-            # we report on all unhandled exceptions
-            sys.excepthook = self.__unhandled_exception
-            self.__interceptSignals()
-            
-            # clear all old breakpoints, they'll get set after we have
-            # started
-            self.mainThread.clear_all_breaks()
-            
-            self.mainThread.tracePython = params["traceInterpreter"]
-            
-            # This will eventually enter a local event loop.
-            self.debugMod.__dict__['__file__'] = self.running
-            sys.modules['__main__'] = self.debugMod
-            self.callTraceEnabled = self.__newCallTraceEnabled
-            res = self.mainThread.run(
-                'execfile(' + repr(self.running) + ')',
-                self.debugMod.__dict__)
-            self.progTerminated(res)
-
-        elif method == "RequestRun":
-            sys.argv = []
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            self.__setCoding(params["filename"])
-            sys.argv.append(params["filename"])
-            sys.argv.extend(params["argv"])
-            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
-            if params["workdir"] == '':
-                os.chdir(sys.path[1])
-            else:
-                os.chdir(params["workdir"])
-
-            self.running = sys.argv[0]
-            self.mainFrame = None
-            self.botframe = None
-            
-            self.fork_auto = params["autofork"]
-            self.fork_child = params["forkChild"]
-            
-            self.threads.clear()
-            self.attachThread(mainThread=True)
-            
-            # set the system exception handling function to ensure that
-            # we report on all unhandled exceptions
-            sys.excepthook = self.__unhandled_exception
-            self.__interceptSignals()
-            
-            self.mainThread.tracePython = False
-            
-            self.debugMod.__dict__['__file__'] = sys.argv[0]
-            sys.modules['__main__'] = self.debugMod
-            res = 0
-            try:
-                execfile(sys.argv[0], self.debugMod.__dict__)
-            except SystemExit as exc:
-                res = exc.code
-                atexit._run_exitfuncs()
-            self.writestream.flush()
-            self.progTerminated(res)
-
-        elif method == "RequestCoverage":
-            from coverage import coverage
-            sys.argv = []
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            self.__setCoding(params["filename"])
-            sys.argv.append(params["filename"])
-            sys.argv.extend(params["argv"])
-            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
-            if params["workdir"] == '':
-                os.chdir(sys.path[1])
-            else:
-                os.chdir(params["workdir"])
-            
-            # set the system exception handling function to ensure that
-            # we report on all unhandled exceptions
-            sys.excepthook = self.__unhandled_exception
-            self.__interceptSignals()
-            
-            # generate a coverage object
-            self.cover = coverage(
-                auto_data=True,
-                data_file="%s.coverage" % os.path.splitext(sys.argv[0])[0])
-            
-            if params["erase"]:
-                self.cover.erase()
-            sys.modules['__main__'] = self.debugMod
-            self.debugMod.__dict__['__file__'] = sys.argv[0]
-            self.running = sys.argv[0]
-            res = 0
-            self.cover.start()
-            try:
-                execfile(sys.argv[0], self.debugMod.__dict__)
-            except SystemExit as exc:
-                res = exc.code
-                atexit._run_exitfuncs()
-            self.cover.stop()
-            self.cover.save()
-            self.writestream.flush()
-            self.progTerminated(res)
-        
-        elif method == "RequestProfile":
-            sys.setprofile(None)
-            import PyProfile
-            sys.argv = []
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            self.__setCoding(params["filename"])
-            sys.argv.append(params["filename"])
-            sys.argv.extend(params["argv"])
-            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
-            if params["workdir"] == '':
-                os.chdir(sys.path[1])
-            else:
-                os.chdir(params["workdir"])
-
-            # set the system exception handling function to ensure that
-            # we report on all unhandled exceptions
-            sys.excepthook = self.__unhandled_exception
-            self.__interceptSignals()
-            
-            # generate a profile object
-            self.prof = PyProfile.PyProfile(sys.argv[0])
-            
-            if params["erase"]:
-                self.prof.erase()
-            self.debugMod.__dict__['__file__'] = sys.argv[0]
-            sys.modules['__main__'] = self.debugMod
-            self.running = sys.argv[0]
-            res = 0
-            try:
-                self.prof.run('execfile(%r)' % sys.argv[0])
-            except SystemExit as exc:
-                res = exc.code
-                atexit._run_exitfuncs()
-            self.prof.save()
-            self.writestream.flush()
-            self.progTerminated(res)
-        
-        elif method == "ExecuteStatement":
-            if self.buffer:
-                self.buffer = self.buffer + '\n' + params["statement"]
-            else:
-                self.buffer = params["statement"]
-
-            try:
-                code = self.compile_command(self.buffer, self.readstream.name)
-            except (OverflowError, SyntaxError, ValueError):
-                # Report the exception
-                sys.last_type, sys.last_value, sys.last_traceback = \
-                    sys.exc_info()
-                self.sendJsonCommand("ClientOutput", {
-                    "text": "".join(traceback.format_exception_only(
-                        sys.last_type, sys.last_value))
-                })
-                self.buffer = ''
-            else:
-                if code is None:
-                    self.sendJsonCommand("ResponseContinue", {})
-                    return
-                else:
-                    self.buffer = ''
-
-                    try:
-                        if self.running is None:
-                            exec code in self.debugMod.__dict__
-                        else:
-                            if self.currentThread is None:
-                                # program has terminated
-                                self.running = None
-                                _globals = self.debugMod.__dict__
-                                _locals = _globals
-                            else:
-                                cf = self.currentThread.getCurrentFrame()
-                                # program has terminated
-                                if cf is None:
-                                    self.running = None
-                                    _globals = self.debugMod.__dict__
-                                    _locals = _globals
-                                else:
-                                    frmnr = self.framenr
-                                    while cf is not None and frmnr > 0:
-                                        cf = cf.f_back
-                                        frmnr -= 1
-                                    _globals = cf.f_globals
-                                    _locals = \
-                                        self.currentThread.getFrameLocals(
-                                            self.framenr)
-                            # reset sys.stdout to our redirector
-                            # (unconditionally)
-                            if "sys" in _globals:
-                                __stdout = _globals["sys"].stdout
-                                _globals["sys"].stdout = self.writestream
-                                exec code in _globals, _locals
-                                _globals["sys"].stdout = __stdout
-                            elif "sys" in _locals:
-                                __stdout = _locals["sys"].stdout
-                                _locals["sys"].stdout = self.writestream
-                                exec code in _globals, _locals
-                                _locals["sys"].stdout = __stdout
-                            else:
-                                exec code in _globals, _locals
-                            
-                            self.currentThread.storeFrameLocals(self.framenr)
-                    except SystemExit, exc:
-                        self.progTerminated(exc.code)
-                    except Exception:
-                        # Report the exception and the traceback
-                        tlist = []
-                        try:
-                            exc_type, exc_value, exc_tb = sys.exc_info()
-                            sys.last_type = exc_type
-                            sys.last_value = exc_value
-                            sys.last_traceback = exc_tb
-                            tblist = traceback.extract_tb(exc_tb)
-                            del tblist[:1]
-                            tlist = traceback.format_list(tblist)
-                            if tlist:
-                                tlist.insert(
-                                    0, "Traceback (innermost last):\n")
-                                tlist.extend(traceback.format_exception_only(
-                                    exc_type, exc_value))
-                        finally:
-                            tblist = exc_tb = None
-
-                        self.sendJsonCommand("ClientOutput", {
-                            "text": "".join(tlist)
-                        })
-            
-            self.sendJsonCommand("ResponseOK", {})
-        
-        elif method == "RequestStep":
-            self.currentThread.step(True)
-            self.eventExit = True
-
-        elif method == "RequestStepOver":
-            self.currentThread.step(False)
-            self.eventExit = True
-        
-        elif method == "RequestStepOut":
-            self.currentThread.stepOut()
-            self.eventExit = True
-        
-        elif method == "RequestStepQuit":
-            if self.passive:
-                self.progTerminated(42)
-            else:
-                self.set_quit()
-                self.eventExit = True
-        
-        elif method == "RequestContinue":
-            self.currentThread.go(params["special"])
-            self.eventExit = True
-        
-        elif method == "RawInput":
-            # If we are handling raw mode input then break out of the current
-            # event loop.
-            self.rawLine = params["input"]
-            self.eventExit = True
-        
-        elif method == "RequestBreakpoint":
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            if params["setBreakpoint"]:
-                if params["condition"] in ['None', '']:
-                    params["condition"] = None
-                elif params["condition"] is not None:
-                    try:
-                        compile(params["condition"], '<string>', 'eval')
-                    except SyntaxError:
-                        self.sendJsonCommand("ResponseBPConditionError", {
-                            "filename": params["filename"],
-                            "line": params["line"],
-                        })
-                        return
-                self.mainThread.set_break(
-                    params["filename"], params["line"], params["temporary"],
-                    params["condition"])
-            else:
-                self.mainThread.clear_break(params["filename"], params["line"])
-        
-        elif method == "RequestBreakpointEnable":
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            bp = self.mainThread.get_break(params["filename"], params["line"])
-            if bp is not None:
-                if params["enable"]:
-                    bp.enable()
-                else:
-                    bp.disable()
-        
-        elif method == "RequestBreakpointIgnore":
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            bp = self.mainThread.get_break(params["filename"], params["line"])
-            if bp is not None:
-                bp.ignore = params["count"]
-        
-        elif method == "RequestWatch":
-            if params["setWatch"]:
-                if not params["condition"].endswith(
-                        ('??created??', '??changed??')):
-                    try:
-                        compile(params["condition"], '<string>', 'eval')
-                    except SyntaxError:
-                        self.sendJsonCommand("ResponseWatchConditionError", {
-                            "condition": params["condition"],
-                        })
-                        return
-                self.mainThread.set_watch(
-                    params["condition"], params["temporary"])
-            else:
-                self.mainThread.clear_watch(params["condition"])
-        
-        elif method == "RequestWatchEnable":
-            wp = self.mainThread.get_watch(params["condition"])
-            if wp is not None:
-                if params["enable"]:
-                    wp.enable()
-                else:
-                    wp.disable()
-        
-        elif method == "RequestWatchIgnore":
-            wp = self.mainThread.get_watch(params["condition"])
-            if wp is not None:
-                wp.ignore = params["count"]
-        
-        elif method == "RequestShutdown":
-            self.sessionClose()
-        
-        elif method == "RequestCompletion":
-            self.__completionList(params["text"])
-        
-        elif method == "RequestUTPrepare":
-            params["filename"] = params["filename"].encode(
-                sys.getfilesystemencoding())
-            sys.path.insert(
-                0, os.path.dirname(os.path.abspath(params["filename"])))
-            os.chdir(sys.path[0])
-            
-            # set the system exception handling function to ensure that
-            # we report on all unhandled exceptions
-            sys.excepthook = self.__unhandled_exception
-            self.__interceptSignals()
-            
-            try:
-                import unittest
-                utModule = __import__(params["testname"])
-                try:
-                    if params["failed"]:
-                        self.test = unittest.defaultTestLoader\
-                            .loadTestsFromNames(params["failed"], utModule)
-                    else:
-                        self.test = unittest.defaultTestLoader\
-                            .loadTestsFromName(params["testfunctionname"],
-                                               utModule)
-                except AttributeError:
-                    self.test = unittest.defaultTestLoader\
-                        .loadTestsFromModule(utModule)
-            except Exception:
-                exc_type, exc_value, exc_tb = sys.exc_info()
-                self.sendJsonCommand("ResponseUTPrepared", {
-                    "count": 0,
-                    "exception": exc_type.__name__,
-                    "message": str(exc_value),
-                })
-                return
-            
-            # generate a coverage object
-            if params["coverage"]:
-                from coverage import coverage
-                self.cover = coverage(
-                    auto_data=True,
-                    data_file="%s.coverage" % \
-                        os.path.splitext(params["coveragefile"])[0])
-                if params["coverageerase"]:
-                    self.cover.erase()
-            else:
-                self.cover = None
-            
-            self.sendJsonCommand("ResponseUTPrepared", {
-                "count": self.test.countTestCases(),
-                "exception": "",
-                "message": "",
-            })
-        
-        elif method == "RequestUTRun":
-            from DCTestResult import DCTestResult
-            self.testResult = DCTestResult(self)
-            if self.cover:
-                self.cover.start()
-            self.test.run(self.testResult)
-            if self.cover:
-                self.cover.stop()
-                self.cover.save()
-            self.sendJsonCommand("ResponseUTFinished", {})
-        
-        elif method == "RequestUTStop":
-            self.testResult.stop()
-        
-        elif method == "ResponseForkTo":
-            # this results from a separate event loop
-            self.fork_child = (params["target"] == 'child')
-            self.eventExit = True
-    
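Every branch of `handleJsonCommand` above dispatches on the same envelope: a JSON object with a `"method"` string and a `"params"` dictionary, one message per line. A sketch of such a message; the concrete filename and line number are illustrative, the method name is one of those handled above:

```python
import json

# A message in the shape handleJsonCommand expects from the IDE.
msg = json.dumps({
    "method": "RequestBreakpointEnable",
    "params": {"filename": "test.py", "line": 12, "enable": True},
})

# Mirrors the parsing at the top of handleJsonCommand.
commandDict = json.loads(msg.strip())
method = commandDict["method"]
params = commandDict["params"]
# method == 'RequestBreakpointEnable', params['line'] == 12
```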
-    def sendJsonCommand(self, method, params):
-        """
-        Public method to send a single command or response to the IDE.
-        
-        @param method command or response command name to be sent
-        @type str
-        @param params dictionary of named parameters for the command or
-            response
-        @type dict
-        """
-        cmd = prepareJsonCommand(method, params)
-        
-        self.writestream.write_p(cmd)
-        self.writestream.flush()
-    
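The commands above travel to the IDE as newline-terminated JSON strings. As a rough sketch of that framing (the helper name `prepare_json_command` and the exact message envelope are assumptions, not this client's verbatim code):

```python
import json


def prepare_json_command(method, params):
    """Serialize a method/params pair as one newline-terminated
    JSON-RPC 2.0 message (hypothetical stand-in for the client's
    prepareJsonCommand helper)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
    }) + "\n"


# One complete message per line; the peer splits the stream on '\n'.
cmd = prepare_json_command("ResponseClearWatch", {"condition": "x > 3"})
decoded = json.loads(cmd)
```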
-    def sendClearTemporaryBreakpoint(self, filename, lineno):
-        """
-        Public method to signal the deletion of a temporary breakpoint.
-        
-        @param filename name of the file the bp belongs to
-        @type str
-        @param lineno linenumber of the bp
-        @type int
-        """
-        self.sendJsonCommand("ResponseClearBreakpoint", {
-            "filename": filename,
-            "line": lineno
-        })
-    
-    def sendClearTemporaryWatch(self, condition):
-        """
-        Public method to signal the deletion of a temporary watch expression.
-        
-        @param condition condition of the watch expression to be cleared
-        @type str
-        """
-        self.sendJsonCommand("ResponseClearWatch", {
-            "condition": condition,
-        })
-    
-    def sendResponseLine(self, stack):
-        """
-        Public method to send the current call stack.
-        
-        @param stack call stack
-        @type list
-        """
-        self.sendJsonCommand("ResponseLine", {
-            "stack": stack,
-        })
-    
-    def sendCallTrace(self, event, fromStr, toStr):
-        """
-        Public method to send a call trace entry.
-        
-        @param event trace event (call or return)
-        @type str
-        @param fromStr pre-formatted origin info
-        @type str
-        @param toStr pre-formatted target info
-        @type str
-        """
-        self.sendJsonCommand("CallTrace", {
-            "event": event[0],
-            "from": fromStr,
-            "to": toStr,
-        })
-    
-    def sendException(self, exceptionType, exceptionMessage, stack):
-        """
-        Public method to send information for an exception.
-        
-        @param exceptionType type of exception raised
-        @type str
-        @param exceptionMessage message of the exception
-        @type str
-        @param stack stack trace information
-        @type list
-        """
-        self.sendJsonCommand("ResponseException", {
-            "type": exceptionType,
-            "message": exceptionMessage,
-            "stack": stack,
-        })
-    
-    def sendSyntaxError(self, message, filename, lineno, charno):
-        """
-        Public method to send information for a syntax error.
-        
-        @param message syntax error message
-        @type str
-        @param filename name of the faulty file
-        @type str
-        @param lineno line number info
-        @type int
-        @param charno character number info
-        @type int
-        """
-        self.sendJsonCommand("ResponseSyntax", {
-            "message": message,
-            "filename": filename,
-            "linenumber": lineno,
-            "characternumber": charno,
-        })
-    
-    def sendPassiveStartup(self, filename, exceptions):
-        """
-        Public method to send the passive start information.
-        
-        @param filename name of the script
-        @type str
-        @param exceptions flag to enable exception reporting of the IDE
-        @type bool
-        """
-        self.sendJsonCommand("PassiveStartup", {
-            "filename": filename,
-            "exceptions": exceptions,
-        })
-
-    def __clientCapabilities(self):
-        """
-        Private method to determine the client's capabilities.
-        
-        @return client capabilities (integer)
-        """
-        try:
-            import PyProfile    # __IGNORE_WARNING__
-            try:
-                del sys.modules['PyProfile']
-            except KeyError:
-                pass
-            return self.clientCapabilities
-        except ImportError:
-            return (
-                self.clientCapabilities & ~DebugClientCapabilities.HasProfiler)
-    
-    def readReady(self, stream):
-        """
-        Public method called when there is data ready to be read.
-        
-        @param stream file like object that has data to be read
-        """
-        try:
-            got = stream.readline_p()
-        except Exception:
-            return
-
-        if len(got) == 0:
-            self.sessionClose()
-            return
-
-        self.__receiveBuffer = self.__receiveBuffer + got
-        
-        # Call handleLine for the line if it is complete.
-        eol = self.__receiveBuffer.find('\n')
-        while eol >= 0:
-            line = self.__receiveBuffer[:eol + 1]
-            self.__receiveBuffer = self.__receiveBuffer[eol + 1:]
-            self.handleLine(line)
-            eol = self.__receiveBuffer.find('\n')
-
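The buffering in readReady can be exercised in isolation. A minimal sketch of the same split-on-newline logic (function and variable names are illustrative, not the client's API):

```python
def split_complete_lines(buffer, chunk):
    """Append a received chunk to the buffer and return
    (complete_lines, remaining_buffer), mirroring the loop in
    readReady that hands each full line to handleLine."""
    buffer += chunk
    lines = []
    eol = buffer.find('\n')
    while eol >= 0:
        # Keep the trailing newline, as readReady does.
        lines.append(buffer[:eol + 1])
        buffer = buffer[eol + 1:]
        eol = buffer.find('\n')
    return lines, buffer
```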
-    def writeReady(self, stream):
-        """
-        Public method called when we are ready to write data.
-        
-        @param stream file like object that has data to be written
-        """
-        stream.write_p("")
-        stream.flush()
-
-    def __interact(self):
-        """
-        Private method to interact with the debugger.
-        """
-        global DebugClientInstance
-
-        DebugClientInstance = self
-        self.__receiveBuffer = ""
-
-        if not self.passive:
-            # At this point simulate an event loop.
-            self.eventLoop()
-
-    def eventLoop(self, disablePolling=False):
-        """
-        Public method implementing our event loop.
-        
-        @param disablePolling flag indicating to enter an event loop with
-            polling disabled (boolean)
-        """
-        self.eventExit = None
-        self.pollingDisabled = disablePolling
-
-        while self.eventExit is None:
-            wrdy = []
-
-            if self.writestream.nWriteErrors > self.writestream.maxtries:
-                break
-            
-            if AsyncPendingWrite(self.writestream):
-                wrdy.append(self.writestream)
-
-            if AsyncPendingWrite(self.errorstream):
-                wrdy.append(self.errorstream)
-            
-            try:
-                rrdy, wrdy, xrdy = select.select([self.readstream], wrdy, [])
-            except (select.error, KeyboardInterrupt, socket.error):
-                # just carry on
-                continue
-
-            if self.readstream in rrdy:
-                self.readReady(self.readstream)
-
-            if self.writestream in wrdy:
-                self.writeReady(self.writestream)
-
-            if self.errorstream in wrdy:
-                self.writeReady(self.errorstream)
-
-        self.eventExit = None
-        self.pollingDisabled = False
-
-    def eventPoll(self):
-        """
-        Public method to poll for events like 'set break point'.
-        """
-        if self.pollingDisabled:
-            return
-        
-        # the choice of a ~0.5 second poll interval is arbitrary.
-        lasteventpolltime = getattr(self, 'lasteventpolltime', time.time())
-        now = time.time()
-        if now - lasteventpolltime < 0.5:
-            self.lasteventpolltime = lasteventpolltime
-            return
-        else:
-            self.lasteventpolltime = now
-
-        wrdy = []
-        if AsyncPendingWrite(self.writestream):
-            wrdy.append(self.writestream)
-
-        if AsyncPendingWrite(self.errorstream):
-            wrdy.append(self.errorstream)
-        
-        # immediate return if nothing is ready.
-        try:
-            rrdy, wrdy, xrdy = select.select([self.readstream], wrdy, [], 0)
-        except (select.error, KeyboardInterrupt, socket.error):
-            return
-
-        if self.readstream in rrdy:
-            self.readReady(self.readstream)
-
-        if self.writestream in wrdy:
-            self.writeReady(self.writestream)
-
-        if self.errorstream in wrdy:
-            self.writeReady(self.errorstream)
-        
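Both eventLoop and eventPoll rely on select.select, with a zero timeout in the polling case so the call returns immediately when nothing is ready. A self-contained illustration using an ordinary socket pair rather than the client's AsyncFile wrappers:

```python
import select
import socket

# A connected pair stands in for the debugger connection; the real
# client wraps such a socket in AsyncFile objects.
a, b = socket.socketpair()

# Nothing sent yet: a zero-timeout poll reports no readable streams,
# matching eventPoll's "immediate return if nothing is ready".
readable, _, _ = select.select([a], [], [], 0)
empty_before = readable

b.sendall(b"RequestUTStop\n")

# With data pending, the same select flags the socket as readable
# (a short timeout is used here only to keep the demo robust).
readable, _, _ = select.select([a], [], [], 1.0)
ready_after = a in readable

a.close()
b.close()
```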
-    def connectDebugger(self, port, remoteAddress=None, redirect=1):
-        """
-        Public method to establish a session with the debugger.
-        
-        It opens a network connection to the debugger, connects it to stdin,
-        stdout and stderr and saves these file objects in case the application
-        being debugged redirects them itself.
-        
-        @param port the port number to connect to (int)
-        @param remoteAddress the network address of the debug server host
-            (string)
-        @param redirect flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-        """
-        if remoteAddress is None:
-            remoteAddress = "127.0.0.1"
-        elif "@@i" in remoteAddress:
-            remoteAddress = remoteAddress.split("@@i")[0]
-        sock = socket.create_connection((remoteAddress, port))
-
-        self.readstream = AsyncFile(sock, sys.stdin.mode, sys.stdin.name)
-        self.writestream = AsyncFile(sock, sys.stdout.mode, sys.stdout.name)
-        self.errorstream = AsyncFile(sock, sys.stderr.mode, sys.stderr.name)
-        
-        if redirect:
-            sys.stdin = self.readstream
-            sys.stdout = self.writestream
-            sys.stderr = self.errorstream
-        self.redirect = redirect
-        
-        # attach to the main thread here
-        self.attachThread(mainThread=1)
-
-    def __unhandled_exception(self, exctype, excval, exctb):
-        """
-        Private method called to report an uncaught exception.
-        
-        @param exctype the type of the exception
-        @param excval data about the exception
-        @param exctb traceback for the exception
-        """
-        self.mainThread.user_exception(None, (exctype, excval, exctb), True)
-    
-    def __interceptSignals(self):
-        """
-        Private method to intercept common signals.
-        """
-        for signum in [
-            signal.SIGABRT,                 # abnormal termination
-            signal.SIGFPE,                  # floating point exception
-            signal.SIGILL,                  # illegal instruction
-            signal.SIGSEGV,                 # segmentation violation
-        ]:
-            signal.signal(signum, self.__signalHandler)
-    
-    def __signalHandler(self, signalNumber, stackFrame):
-        """
-        Private method to handle signals.
-        
-        @param signalNumber number of the signal to be handled
-        @type int
-        @param stackFrame current stack frame
-        @type frame object
-        """
-        if signalNumber == signal.SIGABRT:
-            message = "Abnormal Termination"
-        elif signalNumber == signal.SIGFPE:
-            message = "Floating Point Exception"
-        elif signalNumber == signal.SIGILL:
-            message = "Illegal Instruction"
-        elif signalNumber == signal.SIGSEGV:
-            message = "Segmentation Violation"
-        else:
-            message = "Unknown Signal '%d'" % signalNumber
-        
-        filename = self.absPath(stackFrame)
-        
-        linenr = stackFrame.f_lineno
-        ffunc = stackFrame.f_code.co_name
-        
-        if ffunc == '?':
-            ffunc = ''
-        
-        if ffunc and not ffunc.startswith("<"):
-            argInfo = inspect.getargvalues(stackFrame)
-            try:
-                fargs = inspect.formatargvalues(
-                    argInfo.args, argInfo.varargs,
-                    argInfo.keywords, argInfo.locals)
-            except Exception:
-                fargs = ""
-        else:
-            fargs = ""
-        
-        self.sendJsonCommand("ResponseSignal", {
-            "message": message,
-            "filename": filename,
-            "linenumber": linenr,
-            "function": ffunc,
-            "arguments": fargs,
-        })
-    
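The if/elif chain that maps signal numbers to messages can be checked without actually raising a signal. A table-driven equivalent (a sketch only; the real handler also captures file, line, and argument information from the stack frame):

```python
import signal

# Mirrors the message table in __signalHandler.
_MESSAGES = {
    signal.SIGABRT: "Abnormal Termination",
    signal.SIGFPE: "Floating Point Exception",
    signal.SIGILL: "Illegal Instruction",
    signal.SIGSEGV: "Segmentation Violation",
}


def signal_message(signum):
    """Return the human-readable message for a handled signal,
    falling back to the generic form for anything else."""
    return _MESSAGES.get(signum, "Unknown Signal '%d'" % signum)
```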
-    def absPath(self, fn):
-        """
-        Public method to convert a filename to an absolute name.
-
-        sys.path is used as a set of possible prefixes. The name stays
-        relative if a file could not be found.
-        
-        @param fn filename (string)
-        @return the converted filename (string)
-        """
-        if os.path.isabs(fn):
-            return fn
-
-        # Check the cache.
-        if fn in self._fncache:
-            return self._fncache[fn]
-
-        # Search sys.path.
-        for p in sys.path:
-            afn = os.path.abspath(os.path.join(p, fn))
-            nafn = os.path.normcase(afn)
-
-            if os.path.exists(nafn):
-                self._fncache[fn] = afn
-                d = os.path.dirname(afn)
-                if (d not in sys.path) and (d not in self.dircache):
-                    self.dircache.append(d)
-                return afn
-
-        # Search the additional directory cache
-        for p in self.dircache:
-            afn = os.path.abspath(os.path.join(p, fn))
-            nafn = os.path.normcase(afn)
-            
-            if os.path.exists(nafn):
-                self._fncache[fn] = afn
-                return afn
-                
-        # Nothing found.
-        return fn
-
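The prefix search in absPath reduces to a small cache-backed lookup. A sketch under the assumption that only existence on disk matters (names are illustrative; the real method also maintains a directory cache):

```python
import os
import tempfile


def resolve(fn, prefixes, cache={}):
    """Resolve a relative filename against a list of prefix
    directories, caching hits like absPath does (the mutable
    default is used deliberately as a module-level cache here)."""
    if os.path.isabs(fn):
        return fn
    if fn in cache:
        return cache[fn]
    for p in prefixes:
        afn = os.path.abspath(os.path.join(p, fn))
        if os.path.exists(afn):
            cache[fn] = afn
            return afn
    return fn  # stays relative when nothing matches


d = tempfile.mkdtemp()
open(os.path.join(d, "mod.py"), "w").close()
hit = resolve("mod.py", [d])
miss = resolve("nosuch.py", [d])
```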
-    def shouldSkip(self, fn):
-        """
-        Public method to check if a file should be skipped.
-        
-        @param fn filename to be checked
-        @return flag indicating whether fn represents a file to be
-            skipped (boolean)
-        """
-        if self.mainThread.tracePython:     # trace into Python library
-            return False
-            
-        # Eliminate anything that is part of the Python installation.
-        afn = self.absPath(fn)
-        for d in self.skipdirs:
-            if afn.startswith(d):
-                return True
-        
-        # special treatment for paths containing site-packages or dist-packages
-        for part in ["site-packages", "dist-packages"]:
-            if part in afn:
-                return True
-        
-        return False
-        
-    def getRunning(self):
-        """
-        Public method to return the main script we are currently running.
-        
-        @return flag indicating a running debug session (boolean)
-        """
-        return self.running
-
-    def progTerminated(self, status, message=""):
-        """
-        Public method to tell the debugger that the program has terminated.
-        
-        @param status return status
-        @type int
-        @param message status message
-        @type str
-        """
-        if status is None:
-            status = 0
-        elif not isinstance(status, int):
-            message = str(status)
-            status = 1
-
-        if self.running:
-            self.set_quit()
-            self.running = None
-            self.sendJsonCommand("ResponseExit", {
-                "status": status,
-                "message": message,
-            })
-        
-        # reset coding
-        self.__coding = self.defaultCoding
-
-    def __dumpVariables(self, frmnr, scope, filter):
-        """
-        Private method to return the variables of a frame to the debug server.
-        
-        @param frmnr distance of frame reported on. 0 is the current frame
-            (int)
-        @param scope 1 to report global variables, 0 for local variables (int)
-        @param filter the indices of variable types to be filtered (list of
-            int)
-        """
-        if self.currentThread is None:
-            return
-        
-        if scope == 0:
-            self.framenr = frmnr
-        
-        f = self.currentThread.getCurrentFrame()
-        
-        while f is not None and frmnr > 0:
-            f = f.f_back
-            frmnr -= 1
-        
-        if f is None:
-            if scope:
-                dict = self.debugMod.__dict__
-            else:
-                scope = -1
-        elif scope:
-            dict = f.f_globals
-        elif f.f_globals is f.f_locals:
-            scope = -1
-        else:
-            dict = f.f_locals
-            
-        varlist = []
-        
-        if scope != -1:
-            keylist = dict.keys()
-            
-            vlist = self.__formatVariablesList(keylist, dict, scope, filter)
-            varlist.extend(vlist)
-            
-        self.sendJsonCommand("ResponseVariables", {
-            "scope": scope,
-            "variables": varlist,
-        })
-    
-    def __dumpVariable(self, var, frmnr, scope, filter):
-        """
-        Private method to return the variables of a frame to the debug server.
-        
-        @param var list encoded name of the requested variable
-            (list of strings)
-        @param frmnr distance of frame reported on. 0 is the current frame
-            (int)
-        @param scope 1 to report global variables, 0 for local variables (int)
-        @param filter the indices of variable types to be filtered
-            (list of int)
-        """
-        if self.currentThread is None:
-            return
-        
-        f = self.currentThread.getCurrentFrame()
-        
-        while f is not None and frmnr > 0:
-            f = f.f_back
-            frmnr -= 1
-        
-        if f is None:
-            if scope:
-                dict = self.debugMod.__dict__
-            else:
-                scope = -1
-        elif scope:
-            dict = f.f_globals
-        elif f.f_globals is f.f_locals:
-            scope = -1
-        else:
-            dict = f.f_locals
-        
-        varlist = []
-        
-        if scope != -1:
-            # search the correct dictionary
-            i = 0
-            rvar = var[:]
-            dictkeys = None
-            obj = None
-            isDict = False
-            formatSequences = False
-            access = ""
-            oaccess = ""
-            odict = dict
-            
-            qtVariable = False
-            qvar = None
-            qvtype = ""
-            
-            while i < len(var):
-                if len(dict):
-                    udict = dict
-                ndict = {}
-                # this has to be in line with VariablesViewer.indicators
-                if var[i][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
-                    if i + 1 == len(var):
-                        if var[i][:-2] == '...':
-                            dictkeys = [var[i - 1]]
-                        else:
-                            dictkeys = [var[i][:-2]]
-                        formatSequences = True
-                        if not access and not oaccess:
-                            if var[i][:-2] == '...':
-                                access = '["%s"]' % var[i - 1]
-                                dict = odict
-                            else:
-                                access = '["%s"]' % var[i][:-2]
-                        else:
-                            if var[i][:-2] == '...':
-                                if oaccess:
-                                    access = oaccess
-                                else:
-                                    access = '%s[%s]' % (access, var[i - 1])
-                                dict = odict
-                            else:
-                                if oaccess:
-                                    access = '%s[%s]' % (oaccess, var[i][:-2])
-                                    oaccess = ''
-                                else:
-                                    access = '%s[%s]' % (access, var[i][:-2])
-                        if var[i][-2:] == "{}":         # __IGNORE_WARNING__
-                            isDict = True
-                        break
-                    else:
-                        if not access:
-                            if var[i][:-2] == '...':
-                                access = '["%s"]' % var[i - 1]
-                                dict = odict
-                            else:
-                                access = '["%s"]' % var[i][:-2]
-                        else:
-                            if var[i][:-2] == '...':
-                                access = '%s[%s]' % (access, var[i - 1])
-                                dict = odict
-                            else:
-                                if oaccess:
-                                    access = '%s[%s]' % (oaccess, var[i][:-2])
-                                    oaccess = ''
-                                else:
-                                    access = '%s[%s]' % (access, var[i][:-2])
-                else:
-                    if access:
-                        if oaccess:
-                            access = '%s[%s]' % (oaccess, var[i])
-                        else:
-                            access = '%s[%s]' % (access, var[i])
-                        if var[i - 1][:-2] == '...':
-                            oaccess = access
-                        else:
-                            oaccess = ''
-                        try:
-                            exec 'mdict = dict%s.__dict__' % access
-                            ndict.update(mdict)     # __IGNORE_WARNING__
-                            exec 'obj = dict%s' % access
-                            if "PyQt4." in str(type(obj)) or \
-                                    "PyQt5." in str(type(obj)):
-                                qtVariable = True
-                                qvar = obj
-                                qvtype = ("%s" % type(qvar))[1:-1]\
-                                    .split()[1][1:-1]
-                        except Exception:
-                            pass
-                        try:
-                            exec 'mcdict = dict%s.__class__.__dict__' % access
-                            ndict.update(mcdict)     # __IGNORE_WARNING__
-                            if mdict and "sipThis" not in mdict.keys():  # __IGNORE_WARNING__
-                                del rvar[0:2]
-                                access = ""
-                        except Exception:
-                            pass
-                        try:
-                            cdict = {}
-                            exec 'slv = dict%s.__slots__' % access
-                            for v in slv:   # __IGNORE_WARNING__
-                                try:
-                                    exec 'cdict[v] = dict%s.%s' % (access, v)
-                                except Exception:
-                                    pass
-                            ndict.update(cdict)
-                            exec 'obj = dict%s' % access
-                            access = ""
-                            if "PyQt4." in str(type(obj)) or \
-                                    "PyQt5." in str(type(obj)):
-                                qtVariable = True
-                                qvar = obj
-                                qvtype = ("%s" % type(qvar))[1:-1]\
-                                    .split()[1][1:-1]
-                        except Exception:
-                            pass
-                    else:
-                        try:
-                            ndict.update(dict[var[i]].__dict__)
-                            ndict.update(dict[var[i]].__class__.__dict__)
-                            del rvar[0]
-                            obj = dict[var[i]]
-                            if "PyQt4." in str(type(obj)) or \
-                                    "PyQt5." in str(type(obj)):
-                                qtVariable = True
-                                qvar = obj
-                                qvtype = ("%s" % type(qvar))[1:-1]\
-                                    .split()[1][1:-1]
-                        except Exception:
-                            pass
-                        try:
-                            cdict = {}
-                            slv = dict[var[i]].__slots__
-                            for v in slv:
-                                try:
-                                    exec 'cdict[v] = dict[var[i]].%s' % v
-                                except Exception:
-                                    pass
-                            ndict.update(cdict)
-                            obj = dict[var[i]]
-                            if "PyQt4." in str(type(obj)) or \
-                                    "PyQt5." in str(type(obj)):
-                                qtVariable = True
-                                qvar = obj
-                                qvtype = ("%s" % type(qvar))[1:-1]\
-                                    .split()[1][1:-1]
-                        except Exception:
-                            pass
-                    odict = dict
-                    dict = ndict
-                i += 1
-            
-            if qtVariable:
-                vlist = self.__formatQtVariable(qvar, qvtype)
-            elif ("sipThis" in dict.keys() and len(dict) == 1) or \
-                    (len(dict) == 0 and len(udict) > 0):
-                if access:
-                    exec 'qvar = udict%s' % access
-                # this has to be in line with VariablesViewer.indicators
-                elif rvar and rvar[0][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
-                    exec 'qvar = udict["%s"][%s]' % (rvar[0][:-2], rvar[1])
-                else:
-                    qvar = udict[var[-1]]
-                qvtype = ("%s" % type(qvar))[1:-1].split()[1][1:-1]
-                if qvtype.startswith(("PyQt4", "PyQt5")):
-                    vlist = self.__formatQtVariable(qvar, qvtype)
-                else:
-                    vlist = []
-            else:
-                qtVariable = False
-                if len(dict) == 0 and len(udict) > 0:
-                    if access:
-                        exec 'qvar = udict%s' % access
-                    # this has to be in line with VariablesViewer.indicators
-                    elif rvar and rvar[0][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
-                        exec 'qvar = udict["%s"][%s]' % (rvar[0][:-2], rvar[1])
-                    else:
-                        qvar = udict[var[-1]]
-                    qvtype = ("%s" % type(qvar))[1:-1].split()[1][1:-1]
-                    if qvtype.startswith(("PyQt4", "PyQt5")):
-                        qtVariable = True
-                
-                if qtVariable:
-                    vlist = self.__formatQtVariable(qvar, qvtype)
-                else:
-                    # format the dictionary found
-                    if dictkeys is None:
-                        dictkeys = dict.keys()
-                    else:
-                        # treatment for sequences and dictionaries
-                        if access:
-                            exec "dict = dict%s" % access
-                        else:
-                            dict = dict[dictkeys[0]]
-                        if isDict:
-                            dictkeys = dict.keys()
-                        else:
-                            dictkeys = range(len(dict))
-                    vlist = self.__formatVariablesList(
-                        dictkeys, dict, scope, filter, formatSequences)
-            varlist.extend(vlist)
-        
-            if obj is not None and not formatSequences:
-                try:
-                    if unicode(repr(obj)).startswith('{'):
-                        varlist.append(('...', 'dict', "%d" % len(obj.keys())))
-                    elif unicode(repr(obj)).startswith('['):
-                        varlist.append(('...', 'list', "%d" % len(obj)))
-                    elif unicode(repr(obj)).startswith('('):
-                        varlist.append(('...', 'tuple', "%d" % len(obj)))
-                except Exception:
-                    pass
-        
-        self.sendJsonCommand("ResponseVariable", {
-            "scope": scope,
-            "variable": var,
-            "variables": varlist,
-        })
-        
-    def __formatQtVariable(self, value, vtype):
-        """
-        Private method to produce a formatted output of a simple Qt4/Qt5 type.
-        
-        @param value variable to be formatted
-        @param vtype type of the variable to be formatted (string)
-        @return list of formatted variables. Each variable entry is a
-            tuple of three elements: the variable name, its type and
-            its value.
-        """
-        qttype = vtype.split('.')[-1]
-        varlist = []
-        if qttype == 'QChar':
-            varlist.append(("", "QChar", "%s" % unichr(value.unicode())))
-            varlist.append(("", "int", "%d" % value.unicode()))
-        elif qttype == 'QByteArray':
-            varlist.append(("hex", "QByteArray", "%s" % value.toHex()))
-            varlist.append(("base64", "QByteArray", "%s" % value.toBase64()))
-            varlist.append(("percent encoding", "QByteArray",
-                            "%s" % value.toPercentEncoding()))
-        elif qttype == 'QString':
-            varlist.append(("", "QString", "%s" % value))
-        elif qttype == 'QStringList':
-            for i in range(value.count()):
-                varlist.append(("%d" % i, "QString", "%s" % value[i]))
-        elif qttype == 'QPoint':
-            varlist.append(("x", "int", "%d" % value.x()))
-            varlist.append(("y", "int", "%d" % value.y()))
-        elif qttype == 'QPointF':
-            varlist.append(("x", "float", "%g" % value.x()))
-            varlist.append(("y", "float", "%g" % value.y()))
-        elif qttype == 'QRect':
-            varlist.append(("x", "int", "%d" % value.x()))
-            varlist.append(("y", "int", "%d" % value.y()))
-            varlist.append(("width", "int", "%d" % value.width()))
-            varlist.append(("height", "int", "%d" % value.height()))
-        elif qttype == 'QRectF':
-            varlist.append(("x", "float", "%g" % value.x()))
-            varlist.append(("y", "float", "%g" % value.y()))
-            varlist.append(("width", "float", "%g" % value.width()))
-            varlist.append(("height", "float", "%g" % value.height()))
-        elif qttype == 'QSize':
-            varlist.append(("width", "int", "%d" % value.width()))
-            varlist.append(("height", "int", "%d" % value.height()))
-        elif qttype == 'QSizeF':
-            varlist.append(("width", "float", "%g" % value.width()))
-            varlist.append(("height", "float", "%g" % value.height()))
-        elif qttype == 'QColor':
-            varlist.append(("name", "str", "%s" % value.name()))
-            r, g, b, a = value.getRgb()
-            varlist.append(("rgba", "int", "%d, %d, %d, %d" % (r, g, b, a)))
-            h, s, v, a = value.getHsv()
-            varlist.append(("hsva", "int", "%d, %d, %d, %d" % (h, s, v, a)))
-            c, m, y, k, a = value.getCmyk()
-            varlist.append(
-                ("cmyka", "int", "%d, %d, %d, %d, %d" % (c, m, y, k, a)))
-        elif qttype == 'QDate':
-            varlist.append(("", "QDate", "%s" % value.toString()))
-        elif qttype == 'QTime':
-            varlist.append(("", "QTime", "%s" % value.toString()))
-        elif qttype == 'QDateTime':
-            varlist.append(("", "QDateTime", "%s" % value.toString()))
-        elif qttype == 'QDir':
-            varlist.append(("path", "str", "%s" % value.path()))
-            varlist.append(
-                ("absolutePath", "str", "%s" % value.absolutePath()))
-            varlist.append(
-                ("canonicalPath", "str", "%s" % value.canonicalPath()))
-        elif qttype == 'QFile':
-            varlist.append(("fileName", "str", "%s" % value.fileName()))
-        elif qttype == 'QFont':
-            varlist.append(("family", "str", "%s" % value.family()))
-            varlist.append(("pointSize", "int", "%d" % value.pointSize()))
-            varlist.append(("weight", "int", "%d" % value.weight()))
-            varlist.append(("bold", "bool", "%s" % value.bold()))
-            varlist.append(("italic", "bool", "%s" % value.italic()))
-        elif qttype == 'QUrl':
-            varlist.append(("url", "str", "%s" % value.toString()))
-            varlist.append(("scheme", "str", "%s" % value.scheme()))
-            varlist.append(("user", "str", "%s" % value.userName()))
-            varlist.append(("password", "str", "%s" % value.password()))
-            varlist.append(("host", "str", "%s" % value.host()))
-            varlist.append(("port", "int", "%d" % value.port()))
-            varlist.append(("path", "str", "%s" % value.path()))
-        elif qttype == 'QModelIndex':
-            varlist.append(("valid", "bool", "%s" % value.isValid()))
-            if value.isValid():
-                varlist.append(("row", "int", "%s" % value.row()))
-                varlist.append(("column", "int", "%s" % value.column()))
-                varlist.append(
-                    ("internalId", "int", "%s" % value.internalId()))
-                varlist.append(
-                    ("internalPointer", "void *", "%s" %
-                     value.internalPointer()))
-        elif qttype == 'QRegExp':
-            varlist.append(("pattern", "str", "%s" % value.pattern()))
-        
-        # GUI stuff
-        elif qttype == 'QAction':
-            varlist.append(("name", "str", "%s" % value.objectName()))
-            varlist.append(("text", "str", "%s" % value.text()))
-            varlist.append(("icon text", "str", "%s" % value.iconText()))
-            varlist.append(("tooltip", "str", "%s" % value.toolTip()))
-            varlist.append(("whatsthis", "str", "%s" % value.whatsThis()))
-            varlist.append(
-                ("shortcut", "str", "%s" % value.shortcut().toString()))
-        elif qttype == 'QKeySequence':
-            varlist.append(("value", "", "%s" % value.toString()))
-            
-        # XML stuff
-        elif qttype == 'QDomAttr':
-            varlist.append(("name", "str", "%s" % value.name()))
-            varlist.append(("value", "str", "%s" % value.value()))
-        elif qttype == 'QDomCharacterData':
-            varlist.append(("data", "str", "%s" % value.data()))
-        elif qttype == 'QDomComment':
-            varlist.append(("data", "str", "%s" % value.data()))
-        elif qttype == "QDomDocument":
-            varlist.append(("text", "str", "%s" % value.toString()))
-        elif qttype == 'QDomElement':
-            varlist.append(("tagName", "str", "%s" % value.tagName()))
-            varlist.append(("text", "str", "%s" % value.text()))
-        elif qttype == 'QDomText':
-            varlist.append(("data", "str", "%s" % value.data()))
-            
-        # Networking stuff
-        elif qttype == 'QHostAddress':
-            varlist.append(
-                ("address", "QHostAddress", "%s" % value.toString()))
-            
-        return varlist
-        
-    def __formatVariablesList(self, keylist, dict, scope, filter=[],
-                              formatSequences=0):
-        """
-        Private method to produce a formatted variables list.
-        
-        The dictionary passed in is scanned. Variables are only
-        added to the list if their type is not contained in the
-        filter list and their name does not match any of the filter
-        expressions. The formatted variables list (a list of tuples
-        of 3 values) is returned.
-        
-        @param keylist keys of the dictionary
-        @param dict the dictionary to be scanned
-        @param scope 1 to filter using the globals filter, 0 using the locals
-            filter (int).
-            Variables are only added to the list if their name does not
-            match any of the filter expressions.
-        @param filter the indices of variable types to be filtered. Variables
-            are only added to the list if their type is not contained in the
-            filter list.
-        @param formatSequences flag indicating that sequence or dictionary
-            variables should be formatted. If it is 0 (or False), just the
-            number of items contained in these variables is returned.
-            (boolean)
-        @return list of formatted variables. Each variable entry is a
-            tuple of three elements: the variable name, its type and
-            its value.
-        """
-        varlist = []
-        if scope:
-            patternFilterObjects = self.globalsFilterObjects
-        else:
-            patternFilterObjects = self.localsFilterObjects
-        
-        for key in keylist:
-            # filter based on the filter pattern
-            matched = False
-            for pat in patternFilterObjects:
-                if pat.match(unicode(key)):
-                    matched = True
-                    break
-            if matched:
-                continue
-            
-            # filter hidden attributes (filter #0)
-            if 0 in filter and unicode(key)[:2] == '__':
-                continue
-            
-            # special handling for '__builtins__' (it's way too big)
-            if key == '__builtins__':
-                rvalue = '<module __builtin__ (built-in)>'
-                valtype = 'module'
-            else:
-                value = dict[key]
-                valtypestr = ("%s" % type(value))[1:-1]
-                    
-                if valtypestr.split(' ', 1)[0] == 'class':
-                    # handle new-style classes of Python 2.2+
-                    if ConfigVarTypeStrings.index('instance') in filter:
-                        continue
-                    valtype = valtypestr
-                else:
-                    valtype = valtypestr[6:-1]
-                    try:
-                        if ConfigVarTypeStrings.index(valtype) in filter:
-                            continue
-                    except ValueError:
-                        if valtype == "classobj":
-                            if ConfigVarTypeStrings.index(
-                                    'instance') in filter:
-                                continue
-                        elif valtype == "sip.methoddescriptor":
-                            if ConfigVarTypeStrings.index(
-                                    'instance method') in filter:
-                                continue
-                        elif valtype == "sip.enumtype":
-                            if ConfigVarTypeStrings.index('class') in filter:
-                                continue
-                        elif not valtype.startswith("PySide") and \
-                                ConfigVarTypeStrings.index('other') in filter:
-                            continue
-                    
-                try:
-                    if valtype not in ['list', 'tuple', 'dict']:
-                        rvalue = repr(value)
-                        if valtype.startswith('class') and \
-                           rvalue[0] in ['{', '(', '[']:
-                            rvalue = ""
-                    else:
-                        if valtype == 'dict':
-                            rvalue = "%d" % len(value.keys())
-                        else:
-                            rvalue = "%d" % len(value)
-                except Exception:
-                    rvalue = ''
-                
-            if formatSequences:
-                if unicode(key) == key:
-                    key = "'%s'" % key
-                else:
-                    key = unicode(key)
-            varlist.append((key, valtype, rvalue))
-        
-        return varlist
-        
-    def __generateFilterObjects(self, scope, filterString):
-        """
-        Private slot to convert a filter string to a list of filter objects.
-        
-        @param scope 1 to generate filter for global variables, 0 for local
-            variables (int)
-        @param filterString string of filter patterns separated by ';'
-        """
-        patternFilterObjects = []
-        for pattern in filterString.split(';'):
-            patternFilterObjects.append(re.compile('^%s$' % pattern))
-        if scope:
-            self.globalsFilterObjects = patternFilterObjects[:]
-        else:
-            self.localsFilterObjects = patternFilterObjects[:]
-        
-    def __completionList(self, text):
-        """
-        Private slot to handle the request for a commandline completion list.
-        
-        @param text the text to be completed (string)
-        """
-        completerDelims = ' \t\n`~!@#$%^&*()-=+[{]}\\|;:\'",<>/?'
-        
-        completions = set()
-        # find position of last delim character
-        pos = -1
-        while pos >= -len(text):
-            if text[pos] in completerDelims:
-                if pos == -1:
-                    text = ''
-                else:
-                    text = text[pos + 1:]
-                break
-            pos -= 1
-        
-        # Get local and global completions
-        try:
-            localdict = self.currentThread.getFrameLocals(self.framenr)
-            localCompleter = Completer(localdict).complete
-            self.__getCompletionList(text, localCompleter, completions)
-        except AttributeError:
-            pass
-        self.__getCompletionList(text, self.complete, completions)
-        
-        self.sendJsonCommand("ResponseCompletion", {
-            "completions": list(completions),
-            "text": text,
-        })
-
-    def __getCompletionList(self, text, completer, completions):
-        """
-        Private method to create a completions list.
-        
-        @param text text to complete (string)
-        @param completer completer method
-        @param completions set where to add new completions strings (set)
-        """
-        state = 0
-        try:
-            comp = completer(text, state)
-        except Exception:
-            comp = None
-        while comp is not None:
-            completions.add(comp)
-            state += 1
-            try:
-                comp = completer(text, state)
-            except Exception:
-                comp = None
-
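The loop above calls the completer with an increasing state index until it returns None. The same protocol is implemented by the standard library's rlcompleter, so the pattern can be sketched in plain Python 3 (the namespace and names here are made up for illustration; the deleted client uses its own FlexCompleter, not rlcompleter):

```python
import rlcompleter

def completion_list(text, namespace):
    # Call the completer with increasing state values until it returns
    # None, collecting each completion; the same loop as
    # __getCompletionList above.
    completer = rlcompleter.Completer(namespace).complete
    completions = set()
    state = 0
    while True:
        try:
            comp = completer(text, state)
        except Exception:
            comp = None
        if comp is None:
            break
        completions.add(comp)
        state += 1
    return completions

# 'spam' and 'spare' are hypothetical names in a fake namespace.
names = completion_list('sp', {'spam': 1, 'spare': 2})
```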
-    def startDebugger(self, filename=None, host=None, port=None,
-                      enableTrace=True, exceptions=True, tracePython=False,
-                      redirect=True):
-        """
-        Public method used to start the remote debugger.
-        
-        @param filename the program to be debugged (string)
-        @param host hostname of the debug server (string)
-        @param port port number of the debug server (int)
-        @param enableTrace flag to enable the tracing function (boolean)
-        @param exceptions flag to enable exception reporting of the IDE
-            (boolean)
-        @param tracePython flag to enable tracing into the Python library
-            (boolean)
-        @param redirect flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-        """
-        global debugClient
-        if host is None:
-            host = os.getenv('ERICHOST', 'localhost')
-        if port is None:
-            port = os.getenv('ERICPORT', 42424)
-        
-        remoteAddress = self.__resolveHost(host)
-        self.connectDebugger(port, remoteAddress, redirect)
-        if filename is not None:
-            self.running = os.path.abspath(filename)
-        else:
-            try:
-                self.running = os.path.abspath(sys.argv[0])
-            except IndexError:
-                self.running = None
-        if self.running:
-            self.__setCoding(self.running)
-        self.passive = True
-        self.sendPassiveStartup(self.running, exceptions)
-        self.__interact()
-        
-        # setup the debugger variables
-        self._fncache = {}
-        self.dircache = []
-        self.mainFrame = None
-        self.debugging = True
-        
-        self.attachThread(mainThread=True)
-        self.mainThread.tracePython = tracePython
-        
-        # set the system exception handling function to ensure that
-        # we report on all unhandled exceptions
-        sys.excepthook = self.__unhandled_exception
-        self.__interceptSignals()
-        
-        # now start debugging
-        if enableTrace:
-            self.mainThread.set_trace()
-        
-    def startProgInDebugger(self, progargs, wd='', host=None,
-                            port=None, exceptions=True, tracePython=False,
-                            redirect=True):
-        """
-        Public method used to start the remote debugger.
-        
-        @param progargs commandline for the program to be debugged
-            (list of strings)
-        @param wd working directory for the program execution (string)
-        @param host hostname of the debug server (string)
-        @param port port number of the debug server (int)
-        @param exceptions flag to enable exception reporting of the IDE
-            (boolean)
-        @param tracePython flag to enable tracing into the Python library
-            (boolean)
-        @param redirect flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-        """
-        if host is None:
-            host = os.getenv('ERICHOST', 'localhost')
-        if port is None:
-            port = os.getenv('ERICPORT', 42424)
-        
-        remoteAddress = self.__resolveHost(host)
-        self.connectDebugger(port, remoteAddress, redirect)
-        
-        self._fncache = {}
-        self.dircache = []
-        sys.argv = progargs[:]
-        sys.argv[0] = os.path.abspath(sys.argv[0])
-        sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
-        if wd == '':
-            os.chdir(sys.path[1])
-        else:
-            os.chdir(wd)
-        self.running = sys.argv[0]
-        self.__setCoding(self.running)
-        self.mainFrame = None
-        self.debugging = True
-        
-        self.passive = True
-        self.sendPassiveStartup(self.running, exceptions)
-        self.__interact()
-        
-        self.attachThread(mainThread=1)
-        self.mainThread.tracePython = tracePython
-        
-        # set the system exception handling function to ensure that
-        # we report on all unhandled exceptions
-        sys.excepthook = self.__unhandled_exception
-        self.__interceptSignals()
-        
-        # This will eventually enter a local event loop.
-        # Note the use of repr() to quote self.running. This is needed on
-        # Windows, where backslash is the path separator; the backslashes
-        # would otherwise be inadvertently stripped away during the eval,
-        # causing IOErrors if self.running were passed as a plain str.
-        self.debugMod.__dict__['__file__'] = self.running
-        sys.modules['__main__'] = self.debugMod
-        res = self.mainThread.run('execfile(' + repr(self.running) + ')',
-                                  self.debugMod.__dict__)
-        self.progTerminated(res)
-
-    def run_call(self, scriptname, func, *args):
-        """
-        Public method used to start the remote debugger and call a function.
-        
-        @param scriptname name of the script to be debugged (string)
-        @param func function to be called
-        @param *args arguments being passed to func
-        @return result of the function call
-        """
-        self.startDebugger(scriptname, enableTrace=0)
-        res = self.mainThread.runcall(func, *args)
-        self.progTerminated(res)
-        return res
-        
-    def __resolveHost(self, host):
-        """
-        Private method to resolve a hostname to an IP address.
-        
-        @param host hostname of the debug server (string)
-        @return IP address (string)
-        """
-        try:
-            host, version = host.split("@@")
-        except ValueError:
-            version = 'v4'
-        if version == 'v4':
-            family = socket.AF_INET
-        else:
-            family = socket.AF_INET6
-        return socket.getaddrinfo(host, None, family,
-                                  socket.SOCK_STREAM)[0][4][0]
-        
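The method above accepts an optional protocol suffix on the host name. A self-contained Python 3 sketch of the same resolution logic (the original file is Python 2; the behavior shown is otherwise the same):

```python
import socket

def resolve_host(host):
    # An optional '@@v6' suffix selects IPv6; plain names and
    # 'name@@v4' resolve to an IPv4 address, mirroring __resolveHost.
    try:
        host, version = host.split("@@")
    except ValueError:
        version = 'v4'
    family = socket.AF_INET if version == 'v4' else socket.AF_INET6
    return socket.getaddrinfo(host, None, family,
                              socket.SOCK_STREAM)[0][4][0]

print(resolve_host("127.0.0.1"))  # -> 127.0.0.1
```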
-    def main(self):
-        """
-        Public method implementing the main method.
-        """
-        if '--' in sys.argv:
-            args = sys.argv[1:]
-            host = None
-            port = None
-            wd = ''
-            tracePython = False
-            exceptions = True
-            redirect = True
-            while args[0]:
-                if args[0] == '-h':
-                    host = args[1]
-                    del args[0]
-                    del args[0]
-                elif args[0] == '-p':
-                    port = int(args[1])
-                    del args[0]
-                    del args[0]
-                elif args[0] == '-w':
-                    wd = args[1]
-                    del args[0]
-                    del args[0]
-                elif args[0] == '-t':
-                    tracePython = True
-                    del args[0]
-                elif args[0] == '-e':
-                    exceptions = False
-                    del args[0]
-                elif args[0] == '-n':
-                    redirect = False
-                    del args[0]
-                elif args[0] == '--no-encoding':
-                    self.noencoding = True
-                    del args[0]
-                elif args[0] == '--fork-child':
-                    self.fork_auto = True
-                    self.fork_child = True
-                    del args[0]
-                elif args[0] == '--fork-parent':
-                    self.fork_auto = True
-                    self.fork_child = False
-                    del args[0]
-                elif args[0] == '--':
-                    del args[0]
-                    break
-                else:   # unknown option
-                    del args[0]
-            if not args:
-                print "No program given. Aborting!"     # __IGNORE_WARNING__
-            else:
-                if not self.noencoding:
-                    self.__coding = self.defaultCoding
-                self.startProgInDebugger(args, wd, host, port,
-                                         exceptions=exceptions,
-                                         tracePython=tracePython,
-                                         redirect=redirect)
-        else:
-            if sys.argv[1] == '--no-encoding':
-                self.noencoding = True
-                del sys.argv[1]
-            if sys.argv[1] == '':
-                del sys.argv[1]
-            try:
-                port = int(sys.argv[1])
-            except (ValueError, IndexError):
-                port = -1
-            try:
-                redirect = int(sys.argv[2])
-            except (ValueError, IndexError):
-                redirect = True
-            try:
-                ipOrHost = sys.argv[3]
-                if ':' in ipOrHost:
-                    remoteAddress = ipOrHost
-                elif ipOrHost[0] in '0123456789':
-                    remoteAddress = ipOrHost
-                else:
-                    remoteAddress = self.__resolveHost(ipOrHost)
-            except Exception:
-                remoteAddress = None
-            sys.argv = ['']
-            if '' not in sys.path:
-                sys.path.insert(0, '')
-            if port >= 0:
-                if not self.noencoding:
-                    self.__coding = self.defaultCoding
-                self.connectDebugger(port, remoteAddress, redirect)
-                self.__interact()
-            else:
-                print "No network port given. Aborting..."  # __IGNORE_WARNING__
-        
-    def fork(self):
-        """
-        Public method implementing a fork routine deciding which branch to
-        follow.
-        
-        @return process ID (integer)
-        """
-        if not self.fork_auto:
-            self.sendJsonCommand("RequestForkTo", {})
-            self.eventLoop(True)
-        pid = DebugClientOrigFork()
-        if pid == 0:
-            # child
-            if not self.fork_child:
-                sys.settrace(None)
-                sys.setprofile(None)
-                self.sessionClose(0)
-        else:
-            # parent
-            if self.fork_child:
-                sys.settrace(None)
-                sys.setprofile(None)
-                self.sessionClose(0)
-        return pid
-        
-    def close(self, fd):
-        """
-        Public method implementing a close method as a replacement for
-        os.close().
-        
-        It prevents the debugger connections from being closed.
-        
-        @param fd file descriptor to be closed (integer)
-        """
-        if fd in [self.readstream.fileno(), self.writestream.fileno(),
-                  self.errorstream.fileno()]:
-            return
-        
-        DebugClientOrigClose(fd)
-        
-    def __getSysPath(self, firstEntry):
-        """
-        Private slot to calculate a path list including the PYTHONPATH
-        environment variable.
-        
-        @param firstEntry entry to be put first in sys.path (string)
-        @return path list for use as sys.path (list of strings)
-        """
-        sysPath = [path for path in os.environ.get("PYTHONPATH", "")
-                   .split(os.pathsep)
-                   if path not in sys.path] + sys.path[:]
-        if "" in sysPath:
-            sysPath.remove("")
-        sysPath.insert(0, firstEntry)
-        sysPath.insert(0, '')
-        return sysPath
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
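__getSysPath above merges PYTHONPATH entries with the existing sys.path and puts the working directory and the script directory first. A Python 3 sketch of the same construction (the directory name in the example is made up):

```python
import os
import sys

def build_sys_path(first_entry):
    # PYTHONPATH entries not already on sys.path, then the existing
    # sys.path; '' is removed and re-inserted at the front, with the
    # script directory directly behind it, as in __getSysPath.
    path = [p for p in os.environ.get("PYTHONPATH", "").split(os.pathsep)
            if p not in sys.path] + sys.path[:]
    if "" in path:
        path.remove("")
    path.insert(0, first_entry)
    path.insert(0, '')
    return path

# '/tmp/script-dir' is a hypothetical script directory.
p = build_sys_path('/tmp/script-dir')
assert p[:2] == ['', '/tmp/script-dir']
```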
--- a/DebugClients/Python/DebugClientCapabilities.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module defining the debug clients capabilities.
-"""
-
-HasDebugger = 0x0001
-HasInterpreter = 0x0002
-HasProfiler = 0x0004
-HasCoverage = 0x0008
-HasCompleter = 0x0010
-HasUnittest = 0x0020
-HasShell = 0x0040
-
-HasAll = HasDebugger | HasInterpreter | HasProfiler | \
-    HasCoverage | HasCompleter | HasUnittest | HasShell
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
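The capability constants above are single-bit flags meant to be OR-ed together; the IDE checks a client's reported capabilities with a bitwise AND. A minimal sketch of that usage (the `supports` helper is hypothetical, not part of eric):

```python
# Capability bit flags, as defined in DebugClientCapabilities above.
HasDebugger = 0x0001
HasInterpreter = 0x0002
HasProfiler = 0x0004
HasCoverage = 0x0008
HasCompleter = 0x0010
HasUnittest = 0x0020
HasShell = 0x0040
HasAll = (HasDebugger | HasInterpreter | HasProfiler |
          HasCoverage | HasCompleter | HasUnittest | HasShell)

def supports(capabilities, flag):
    # True if the reported capability bitmask includes the flag.
    return bool(capabilities & flag)

caps = HasDebugger | HasCompleter
assert supports(caps, HasDebugger)
assert not supports(caps, HasProfiler)
assert HasAll == 0x007F
```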
--- a/DebugClients/Python/DebugClientThreads.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,200 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing the multithreaded version of the debug client.
-"""
-
-import thread
-import sys
-
-from DebugThread import DebugThread
-import DebugClientBase
-
-
-def _debugclient_start_new_thread(target, args, kwargs={}):
-    """
-    Module function used to allow for debugging of multiple threads.
-    
-    The way it works is that below, we reset thread.start_new_thread to
-    this function object, thus providing a hook for us to see when
-    threads are started. From here we forward the request onto the
-    DebugClient which will create a DebugThread object to allow tracing
-    of the thread then start up the thread. These actions are always
-    performed in order to allow dropping into debug mode.
-    
-    See DebugClientThreads.attachThread and DebugThread.DebugThread in
-    DebugThread.py
-    
-    @param target the start function of the target thread (i.e. the user code)
-    @param args arguments to pass to target
-    @param kwargs keyword arguments to pass to target
-    @return The identifier of the created thread
-    """
-    if DebugClientBase.DebugClientInstance is not None:
-        return DebugClientBase.DebugClientInstance.attachThread(
-            target, args, kwargs)
-    else:
-        return _original_start_thread(target, args, kwargs)
-    
-# make thread hooks available to system
-_original_start_thread = thread.start_new_thread
-thread.start_new_thread = _debugclient_start_new_thread
-
-# Note: import threading here AFTER the above hook, as threading caches
-#       thread.start_new_thread.
-from threading import RLock
-
-
-class DebugClientThreads(DebugClientBase.DebugClientBase):
-    """
-    Class implementing the client side of the debugger.
-
-    This variant of the debugger implements a threaded debugger client
-    by subclassing all relevant base classes.
-    """
-    def __init__(self):
-        """
-        Constructor
-        """
-        DebugClientBase.DebugClientBase.__init__(self)
-        
-        # protection lock for synchronization
-        self.clientLock = RLock()
-        
-        # the "current" thread, basically the thread we are at a breakpoint
-        # for.
-        self.currentThread = None
-        
-        # special objects representing the main scripts thread and frame
-        self.mainThread = None
-        self.mainFrame = None
-        
-        self.variant = 'Threaded'
-
-    def attachThread(self, target=None, args=None, kwargs=None, mainThread=0):
-        """
-        Public method to set up a thread for the DebugClient to debug.
-        
-        If mainThread is non-zero, then we are attaching to the already
-        started main thread of the app and the rest of the args are ignored.
-        
-        @param target the start function of the target thread (i.e. the
-            user code)
-        @param args arguments to pass to target
-        @param kwargs keyword arguments to pass to target
-        @param mainThread non-zero, if we are attaching to the already
-              started main thread of the app
-        @return The identifier of the created thread
-        """
-        try:
-            self.lockClient()
-            newThread = DebugThread(self, target, args, kwargs, mainThread)
-            ident = -1
-            if mainThread:
-                ident = thread.get_ident()
-                self.mainThread = newThread
-                if self.debugging:
-                    sys.setprofile(newThread.profile)
-            else:
-                ident = _original_start_thread(newThread.bootstrap, ())
-                if self.mainThread is not None:
-                    self.tracePython = self.mainThread.tracePython
-            newThread.set_ident(ident)
-            self.threads[newThread.get_ident()] = newThread
-        finally:
-            self.unlockClient()
-        return ident
-    
-    def threadTerminated(self, dbgThread):
-        """
-        Public method called when a DebugThread has exited.
-        
-        @param dbgThread the DebugThread that has exited
-        """
-        try:
-            self.lockClient()
-            try:
-                del self.threads[dbgThread.get_ident()]
-            except KeyError:
-                pass
-        finally:
-            self.unlockClient()
-            
-    def lockClient(self, blocking=1):
-        """
-        Public method to acquire the lock for this client.
-        
-        @param blocking flag indicating a blocking lock
-        @return flag indicating successful locking
-        """
-        if blocking:
-            self.clientLock.acquire()
-        else:
-            return self.clientLock.acquire(blocking)
-        
-    def unlockClient(self):
-        """
-        Public method to release the lock for this client.
-        """
-        try:
-            self.clientLock.release()
-        except AssertionError:
-            pass
-        
-    def setCurrentThread(self, id):
-        """
-        Public method to set the current thread.
-
-        @param id the id the current thread should be set to.
-        """
-        try:
-            self.lockClient()
-            if id is None:
-                self.currentThread = None
-            else:
-                self.currentThread = self.threads[id]
-        finally:
-            self.unlockClient()
-    
-    def eventLoop(self, disablePolling=False):
-        """
-        Public method implementing our event loop.
-        
-        @param disablePolling flag indicating to enter an event loop with
-            polling disabled (boolean)
-        """
-        # make sure we set the current thread appropriately
-        threadid = thread.get_ident()
-        self.setCurrentThread(threadid)
-        
-        DebugClientBase.DebugClientBase.eventLoop(self, disablePolling)
-        
-        self.setCurrentThread(None)
-
-    def set_quit(self):
-        """
-        Public method to do a 'set quit' on all threads.
-        """
-        try:
-            locked = self.lockClient(0)
-            try:
-                for key in self.threads.keys():
-                    self.threads[key].set_quit()
-            except Exception:
-                pass
-        finally:
-            if locked:
-                self.unlockClient()
-
-# We are normally called by the debugger to execute directly.
-
-if __name__ == '__main__':
-    debugClient = DebugClientThreads()
-    debugClient.main()
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702, E402
--- a/DebugClients/Python/DebugConfig.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,23 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module defining type strings for the different Python types.
-"""
-
-ConfigVarTypeStrings = [
-    '__', 'NoneType', 'type',
-    'bool', 'int', 'long', 'float', 'complex',
-    'str', 'unicode', 'tuple', 'list',
-    'dict', 'dict-proxy', 'set', 'file', 'xrange',
-    'slice', 'buffer', 'class', 'instance',
-    'instance method', 'property', 'generator',
-    'function', 'builtin_function_or_method', 'code', 'module',
-    'ellipsis', 'traceback', 'frame', 'other'
-]
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
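ConfigVarTypeStrings is consulted with list.index() to turn a value's type name into a filter index, as __formatVariablesList does in DebugClientBase. A Python 3 sketch of that lookup ('long', 'unicode' and a few other entries exist only as Python 2 types; unknown names fall back to 'other' here, while the real code special-cases several more):

```python
ConfigVarTypeStrings = [
    '__', 'NoneType', 'type',
    'bool', 'int', 'long', 'float', 'complex',
    'str', 'unicode', 'tuple', 'list',
    'dict', 'dict-proxy', 'set', 'file', 'xrange',
    'slice', 'buffer', 'class', 'instance',
    'instance method', 'property', 'generator',
    'function', 'builtin_function_or_method', 'code', 'module',
    'ellipsis', 'traceback', 'frame', 'other',
]

def type_filter_index(value):
    # Map a value to its index in the type table; names not in the
    # table count as 'other', a simplification of the real fallbacks.
    name = type(value).__name__
    try:
        return ConfigVarTypeStrings.index(name)
    except ValueError:
        return ConfigVarTypeStrings.index('other')

assert ConfigVarTypeStrings[type_filter_index(42)] == 'int'
assert ConfigVarTypeStrings[type_filter_index([1, 2])] == 'list'
```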
--- a/DebugClients/Python/DebugProtocol.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,88 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module defining the debug protocol tokens.
-"""
-# TODO: delete this file
-# The address used for debugger/client communications.
-DebugAddress = '127.0.0.1'
-
-# The protocol "words".
-RequestOK = '>OK?<'
-RequestEnv = '>Environment<'
-RequestCapabilities = '>Capabilities<'
-RequestLoad = '>Load<'
-RequestRun = '>Run<'
-RequestCoverage = '>Coverage<'
-RequestProfile = '>Profile<'
-RequestContinue = '>Continue<'
-RequestStep = '>Step<'
-RequestStepOver = '>StepOver<'
-RequestStepOut = '>StepOut<'
-RequestStepQuit = '>StepQuit<'
-RequestBreak = '>Break<'
-RequestBreakEnable = '>EnableBreak<'
-RequestBreakIgnore = '>IgnoreBreak<'
-RequestWatch = '>Watch<'
-RequestWatchEnable = '>EnableWatch<'
-RequestWatchIgnore = '>IgnoreWatch<'
-RequestVariables = '>Variables<'
-RequestVariable = '>Variable<'
-RequestSetFilter = '>SetFilter<'
-RequestThreadList = '>ThreadList<'
-RequestThreadSet = '>ThreadSet<'
-RequestEval = '>Eval<'
-RequestExec = '>Exec<'
-RequestShutdown = '>Shutdown<'
-RequestBanner = '>Banner<'
-RequestCompletion = '>Completion<'
-RequestUTPrepare = '>UTPrepare<'
-RequestUTRun = '>UTRun<'
-RequestUTStop = '>UTStop<'
-RequestForkTo = '>ForkTo<'
-RequestForkMode = '>ForkMode<'
-
-ResponseOK = '>OK<'
-ResponseCapabilities = RequestCapabilities
-ResponseContinue = '>Continue<'
-ResponseException = '>Exception<'
-ResponseSyntax = '>SyntaxError<'
-ResponseSignal = '>Signal<'
-ResponseExit = '>Exit<'
-ResponseLine = '>Line<'
-ResponseRaw = '>Raw<'
-ResponseClearBreak = '>ClearBreak<'
-ResponseBPConditionError = '>BPConditionError<'
-ResponseClearWatch = '>ClearWatch<'
-ResponseWPConditionError = '>WPConditionError<'
-ResponseVariables = RequestVariables
-ResponseVariable = RequestVariable
-ResponseThreadList = RequestThreadList
-ResponseThreadSet = RequestThreadSet
-ResponseStack = '>CurrentStack<'
-ResponseBanner = RequestBanner
-ResponseCompletion = RequestCompletion
-ResponseUTPrepared = '>UTPrepared<'
-ResponseUTStartTest = '>UTStartTest<'
-ResponseUTStopTest = '>UTStopTest<'
-ResponseUTTestFailed = '>UTTestFailed<'
-ResponseUTTestErrored = '>UTTestErrored<'
-ResponseUTTestSkipped = '>UTTestSkipped<'
-ResponseUTTestFailedExpected = '>UTTestFailedExpected<'
-ResponseUTTestSucceededUnexpected = '>UTTestSucceededUnexpected<'
-ResponseUTFinished = '>UTFinished<'
-ResponseForkTo = RequestForkTo
-
-PassiveStartup = '>PassiveStartup<'
-
-RequestCallTrace = '>CallTrace<'
-CallTrace = '>CallTrace<'
-
-EOT = '>EOT<\n'
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
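The deleted words above framed eric's old line-based debugger protocol: a request token such as `>Run<` followed by its payload and terminated by `EOT`. A minimal sketch of that style of framing (the helper names `frame_request` and `parse_request` are invented for illustration; the exact wire layout of the legacy protocol may differ):

```python
# Sketch of token-plus-payload framing terminated by an EOT marker.
RequestRun = '>Run<'
EOT = '>EOT<\n'


def frame_request(word, payload=''):
    """Build one framed message: token, payload, newline, EOT marker."""
    return '{0}{1}\n{2}'.format(word, payload, EOT)


def parse_request(message):
    """Split a framed message back into (token, payload)."""
    body = message[:-len(EOT)] if message.endswith(EOT) else message
    word, _, payload = body.rstrip('\n').partition('<')
    return word + '<', payload


msg = frame_request(RequestRun, 'script.py')
assert parse_request(msg) == (RequestRun, 'script.py')
```

The jsonrpc branch this changeset belongs to replaces exactly this kind of ad-hoc framing with JSON-RPC messages.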
--- a/DebugClients/Python/DebugThread.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,134 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing the debug thread.
-"""
-
-import bdb
-import sys
-
-from DebugBase import DebugBase
-
-
-class DebugThread(DebugBase):
-    """
-    Class implementing a debug thread.
-
-    It represents a thread in the Python interpreter that we are tracing.
-
-    Provides simple wrapper methods around bdb for the 'owning' client to
-    call in order to step, continue, etc.
-    """
-    def __init__(self, dbgClient, targ=None, args=None, kwargs=None,
-                 mainThread=False):
-        """
-        Constructor
-        
-        @param dbgClient the owning client
-        @param targ the target method in the run thread
-        @param args  arguments to be passed to the thread
-        @param kwargs arguments to be passed to the thread
-        @param mainThread False if this thread is not the main script's thread
-        """
-        DebugBase.__init__(self, dbgClient)
-        
-        self._target = targ
-        self._args = args
-        self._kwargs = kwargs
-        self._mainThread = mainThread
-        # _threadRunning tracks the execution state of the client code;
-        # it will always be False for the main thread, as that is tracked
-        # by DebugClientThreads and Bdb...
-        self._threadRunning = False
-        
-        self.__ident = None  # id of this thread.
-        self.__name = ""
-        self.tracePython = False
-    
-    def set_ident(self, id):
-        """
-        Public method to set the id for this thread.
-        
-        @param id id for this thread (int)
-        """
-        self.__ident = id
-    
-    def get_ident(self):
-        """
-        Public method to return the id of this thread.
-        
-        @return the id of this thread (int)
-        """
-        return self.__ident
-    
-    def get_name(self):
-        """
-        Public method to return the name of this thread.
-        
-        @return name of this thread (string)
-        """
-        return self.__name
-    
-    def traceThread(self):
-        """
-        Public method to setup tracing for this thread.
-        """
-        self.set_trace()
-        if not self._mainThread:
-            self.set_continue(0)
-    
-    def bootstrap(self):
-        """
-        Public method to bootstrap the thread.
-        
-        It wraps the call to the user function to enable tracing
-        beforehand.
-        """
-        try:
-            try:
-                self._threadRunning = True
-                self.traceThread()
-                self._target(*self._args, **self._kwargs)
-            except bdb.BdbQuit:
-                pass
-        finally:
-            self._threadRunning = False
-            self.quitting = True
-            self._dbgClient.threadTerminated(self)
-            sys.settrace(None)
-            sys.setprofile(None)
-    
-    def trace_dispatch(self, frame, event, arg):
-        """
-        Public method wrapping the trace_dispatch of bdb.py.
-        
-        It wraps the call to dispatch tracing into
-        bdb to make sure we have locked the client to prevent multiple
-        threads from entering the client event loop.
-        
-        @param frame The current stack frame.
-        @param event The trace event (string)
-        @param arg The arguments
-        @return local trace function
-        """
-        try:
-            self._dbgClient.lockClient()
-            # if this thread came out of a lock, and we are quitting
-            # and we are still running, then get rid of tracing for this thread
-            if self.quitting and self._threadRunning:
-                sys.settrace(None)
-                sys.setprofile(None)
-            import threading
-            self.__name = threading.currentThread().getName()
-            retval = DebugBase.trace_dispatch(self, frame, event, arg)
-        finally:
-            self._dbgClient.unlockClient()
-        
-        return retval
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
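The deleted `DebugThread.bootstrap()` above relies on the fact that `sys.settrace()` installs a trace function for the calling thread only, and that it must be removed again in a `finally` block. A small sketch of that per-thread tracing pattern (the names `make_bootstrap`, `work`, and the `events` list are illustrative, not part of eric's API):

```python
# Per-thread tracing: the worker installs its own trace function before
# running the target and removes it on exit, mirroring bootstrap() above.
import sys
import threading

events = []


def make_bootstrap(target, *args):
    def bootstrap():
        def tracer(frame, event, arg):
            # record which thread produced which trace event
            events.append((threading.current_thread().name, event))
            return tracer
        sys.settrace(tracer)        # affects only this thread
        try:
            target(*args)
        finally:
            sys.settrace(None)      # always uninstall tracing on exit
    return bootstrap


def work():
    return 1 + 1


t = threading.Thread(target=make_bootstrap(work), name="Worker")
t.start()
t.join()
assert ("Worker", "call") in events
```

The real class additionally serializes entry into the client event loop with `lockClient()`/`unlockClient()`, since several traced threads may hit the trace dispatcher concurrently.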
--- a/DebugClients/Python/DebugUtilities.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,34 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing utility functions for the debug client.
-"""
-
-
-def prepareJsonCommand(method, params):
-    """
-    Function to prepare a single command or response for transmission to
-    the IDE.
-    
-    @param method command or response name to be sent
-    @type str
-    @param params dictionary of named parameters for the command or response
-    @type dict
-    @return prepared JSON command or response string
-    @rtype str
-    """
-    import json
-    
-    commandDict = {
-        "jsonrpc": "2.0",
-        "method": method,
-        "params": params,
-    }
-    return json.dumps(commandDict) + '\n'
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M702
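The removed `prepareJsonCommand()` wraps each command or response in a JSON-RPC 2.0 object terminated by a newline. A round-trip sketch of that framing; `parseJsonCommand` is a hypothetical IDE-side counterpart, not a function from the eric sources:

```python
import json


def prepareJsonCommand(method, params):
    """Same shape as the removed helper: a JSON-RPC 2.0 notification."""
    commandDict = {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
    }
    return json.dumps(commandDict) + '\n'


def parseJsonCommand(line):
    """Hypothetical receiving side: decode one newline-framed message."""
    cmd = json.loads(line)
    return cmd["method"], cmd["params"]


wire = prepareJsonCommand("RequestRun", {"filename": "script.py"})
method, params = parseJsonCommand(wire)
assert method == "RequestRun"
assert params["filename"] == "script.py"
```

Newline framing keeps the receiver trivial: read a line, decode it, dispatch on `method`.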
--- a/DebugClients/Python/FlexCompleter.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,275 +0,0 @@
-# -*- coding: utf-8 -*-
-
-"""
-Word completion for the eric6 shell.
-
-<h4>NOTE for eric6 variant</h4>
-
-    This version is a re-implementation of FlexCompleter
-    as found in the PyQwt package. It is modified to work with the eric6 debug
-    clients.
-
-
-<h4>NOTE for the PyQwt variant</h4>
-
-    This version is a re-implementation of FlexCompleter
-    with readline support for PyQt&sip-3.6 and earlier.
-
-    Full readline support is present in PyQt&sip-snapshot-20030531 and later.
-
-
-<h4>NOTE for FlexCompleter</h4>
-
-    This version is a re-implementation of rlcompleter with
-    selectable namespace.
-
-    The problem with rlcompleter is that it's hardwired to work with
-    __main__.__dict__, and in some cases one may have 'sandboxed' namespaces.
-    So this class is a ripoff of rlcompleter, with the namespace to work in as
-    an optional parameter.
-    
-    This class can be used just like rlcompleter, but the Completer class now
-    has a constructor with the optional 'namespace' parameter.
-    
-    A patch has been submitted to Python@sourceforge for these changes to go in
-    the standard Python distribution.
-
-
-<h4>Original rlcompleter documentation</h4>
-
-    This requires the latest extension to the readline module. The
-    completer completes keywords, built-ins and globals in __main__; when
-    completing NAME.NAME..., it evaluates (!) the expression up to the
-    last dot and completes its attributes.
-    
-    It's very cool to do "import string" type "string.", hit the
-    completion key (twice), and see the list of names defined by the
-    string module!
-    
-    Tip: to use the tab key as the completion key, call
-    
-    'readline.parse_and_bind("tab: complete")'
-    
-    <b>Notes</b>:
-    <ul>
-    <li>
-    Exceptions raised by the completer function are *ignored* (and
-    generally cause the completion to fail).  This is a feature -- since
-    readline sets the tty device in raw (or cbreak) mode, printing a
-    traceback wouldn't work well without some complicated hoopla to save,
-    reset and restore the tty state.
-    </li>
-    <li>
-    The evaluation of the NAME.NAME... form may cause arbitrary
-    application defined code to be executed if an object with a
-    __getattr__ hook is found.  Since it is the responsibility of the
-    application (or the user) to enable this feature, I consider this an
-    acceptable risk.  More complicated expressions (e.g. function calls or
-    indexing operations) are *not* evaluated.
-    </li>
-    <li>
-    GNU readline is also used by the built-in functions input() and
-    raw_input(), and thus these also benefit/suffer from the completer
-    features.  Clearly an interactive application can benefit by
-    specifying its own completer function and using raw_input() for all
-    its input.
-    </li>
-    <li>
-    When the original stdin is not a tty device, GNU readline is never
-    used, and this module (and the readline module) are silently inactive.
-    </li>
-    </ul>
-"""
-
-#*****************************************************************************
-#
-# Since this file is essentially a minimally modified copy of the rlcompleter
-# module which is part of the standard Python distribution, I assume that the
-# proper procedure is to maintain its copyright as belonging to the Python
-# Software Foundation:
-#
-#       Copyright (C) 2001 Python Software Foundation, www.python.org
-#
-#  Distributed under the terms of the Python Software Foundation license.
-#
-#  Full text available at:
-#
-#                  http://www.python.org/2.1/license.html
-#
-#*****************************************************************************
-
-import __builtin__
-import __main__
-
-__all__ = ["Completer"]
-
-
-class Completer(object):
-    """
-    Class implementing the command line completer object.
-    """
-    def __init__(self, namespace=None):
-        """
-        Constructor
-
-        Completer([namespace]) -> completer instance.
-
-        If unspecified, the default namespace where completions are performed
-        is __main__ (technically, __main__.__dict__). Namespaces should be
-        given as dictionaries.
-
-        Completer instances should be used as the completion mechanism of
-        readline via the set_completer() call:
-
-        readline.set_completer(Completer(my_namespace).complete)
-        
-        @param namespace namespace for the completer
-        @exception TypeError raised to indicate a wrong namespace structure
-        """
-        if namespace and not isinstance(namespace, dict):
-            raise TypeError('namespace must be a dictionary')
-
-        # Don't bind to namespace quite yet, but flag whether the user wants a
-        # specific namespace or to use __main__.__dict__. This will allow us
-        # to bind to __main__.__dict__ at completion time, not now.
-        if namespace is None:
-            self.use_main_ns = 1
-        else:
-            self.use_main_ns = 0
-            self.namespace = namespace
-
-    def complete(self, text, state):
-        """
-        Public method to return the next possible completion for 'text'.
-
-        This is called successively with state == 0, 1, 2, ... until it
-        returns None.  The completion should begin with 'text'.
-        
-        @param text The text to be completed. (string)
-        @param state The state of the completion. (integer)
-        @return The possible completions as a list of strings.
-        """
-        if self.use_main_ns:
-            self.namespace = __main__.__dict__
-            
-        if state == 0:
-            if "." in text:
-                self.matches = self.attr_matches(text)
-            else:
-                self.matches = self.global_matches(text)
-        try:
-            return self.matches[state]
-        except IndexError:
-            return None
-
-    def _callable_postfix(self, val, word):
-        """
-        Protected method to check for a callable.
-        
-        @param val value to check (object)
-        @param word word to amend (string)
-        @return amended word (string)
-        """
-        if hasattr(val, '__call__'):
-            word = word + "("
-        return word
-
-    def global_matches(self, text):
-        """
-        Public method to compute matches when text is a simple name.
-
-        @param text The text to be completed. (string)
-        @return A list of all keywords, built-in functions and names currently
-        defined in self.namespace that match.
-        """
-        import keyword
-        matches = []
-        n = len(text)
-        for word in keyword.kwlist:
-            if word[:n] == text:
-                matches.append(word)
-        for nspace in [__builtin__.__dict__, self.namespace]:
-            for word, val in nspace.items():
-                if word[:n] == text and word != "__builtins__":
-                    matches.append(self._callable_postfix(val, word))
-        return matches
-
-    def attr_matches(self, text):
-        """
-        Public method to compute matches when text contains a dot.
-
-        Assuming the text is of the form NAME.NAME....[NAME], and is
-        evaluatable in self.namespace, it will be evaluated and its attributes
-        (as revealed by dir()) are used as possible completions.  (For class
-        instances, class members are also considered.)
-
-        <b>WARNING</b>: this can still invoke arbitrary C code, if an object
-        with a __getattr__ hook is evaluated.
-        
-        @param text The text to be completed. (string)
-        @return A list of all matches.
-        """
-        import re
-
-        # Testing. This is the original code:
-        # m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text)
-
-        # Modified to catch [] in expressions:
-        # m = re.match(r"([\w\[\]]+(\.[\w\[\]]+)*)\.(\w*)", text)
-
-        # Another option, seems to work great. Catches things like ''.<tab>
-        m = re.match(r"(\S+(\.\w+)*)\.(\w*)", text)
-
-        if not m:
-            return []
-        expr, attr = m.group(1, 3)
-        try:
-            thisobject = eval(expr, self.namespace)
-        except Exception:
-            return []
-
-        # get the content of the object, except __builtins__
-        words = dir(thisobject)
-        if "__builtins__" in words:
-            words.remove("__builtins__")
-
-        if hasattr(thisobject, '__class__'):
-            words.append('__class__')
-            words = words + get_class_members(thisobject.__class__)
-        matches = []
-        n = len(attr)
-        for word in words:
-            try:
-                if word[:n] == attr and hasattr(thisobject, word):
-                    val = getattr(thisobject, word)
-                    word = self._callable_postfix(
-                        val, "%s.%s" % (expr, word))
-                    matches.append(word)
-            except Exception:
-                # some badly behaved objects pollute dir() with non-strings,
-                # which cause the completion to fail.  This way we skip the
-                # bad entries and can still continue processing the others.
-                pass
-        return matches
-
-
-def get_class_members(klass):
-    """
-    Module function to retrieve the class members.
-    
-    @param klass The class object to be analysed.
-    @return A list of all names defined in the class.
-    """
-    # PyQwt's hack for PyQt&sip-3.6 and earlier
-    if hasattr(klass, 'getLazyNames'):
-        return klass.getLazyNames()
-    # vanilla Python stuff
-    ret = dir(klass)
-    if hasattr(klass, '__bases__'):
-        for base in klass.__bases__:
-            ret = ret + get_class_members(base)
-    return ret
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702, M111
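The namespace-aware completer pattern described above did eventually land in the standard library: Python's own `rlcompleter.Completer` accepts the same optional namespace dictionary, so the usage can be demonstrated with it directly. The `sandbox` names below are invented example data:

```python
# Drive a Completer over a sandboxed namespace the way readline would:
# call complete(text, state) with state 0, 1, 2, ... until it returns None.
import rlcompleter

sandbox = {"answer": 42, "animal": "cat"}
completer = rlcompleter.Completer(sandbox)

matches = []
state = 0
while True:
    m = completer.complete("an", state)
    if m is None:
        break
    matches.append(m)
    state += 1

assert "answer" in matches
assert "animal" in matches
```

This is exactly the calling convention `readline.set_completer()` expects, as noted in the constructor docstring above.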
--- a/DebugClients/Python/PyProfile.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,176 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-
-"""
-Module defining additions to the standard Python profile.py.
-"""
-
-import os
-import marshal
-import profile
-import atexit
-import pickle
-
-
-class PyProfile(profile.Profile):
-    """
-    Class extending the standard Python profiler with additional methods.
-    
-    This class extends the standard Python profiler by the functionality to
-    save the collected timing data in a timing cache, to restore these data
-    on subsequent calls, to store a profile dump to a standard filename and
-    to erase these caches.
-    """
-    def __init__(self, basename, timer=None, bias=None):
-        """
-        Constructor
-        
-        @param basename name of the script to be profiled (string)
-        @param timer function defining the timing calculation
-        @param bias calibration value (float)
-        """
-        try:
-            profile.Profile.__init__(self, timer, bias)
-        except TypeError:
-            profile.Profile.__init__(self, timer)
-        
-        self.dispatch = self.__class__.dispatch
-        
-        basename = os.path.splitext(basename)[0]
-        self.profileCache = "%s.profile" % basename
-        self.timingCache = "%s.timings" % basename
-        
-        self.__restore()
-        atexit.register(self.save)
-        
-    def __restore(self):
-        """
-        Private method to restore the timing data from the timing cache.
-        """
-        if not os.path.exists(self.timingCache):
-            return
-            
-        try:
-            cache = open(self.timingCache, 'rb')
-            timings = marshal.load(cache)
-            cache.close()
-            if isinstance(timings, dict):
-                self.timings = timings
-        except Exception:
-            pass
-        
-    def save(self):
-        """
-        Public method to store the collected profile data.
-        """
-        # dump the raw timing data
-        cache = open(self.timingCache, 'wb')
-        marshal.dump(self.timings, cache)
-        cache.close()
-        
-        # dump the profile data
-        self.dump_stats(self.profileCache)
-        
-    def dump_stats(self, file):
-        """
-        Public method to dump the statistics data.
-        
-        @param file name of the file to write to (string)
-        """
-        try:
-            f = open(file, 'wb')
-        except EnvironmentError:
-            return
-        try:
-            self.create_stats()
-            pickle.dump(self.stats, f, 2)
-        except (EnvironmentError, pickle.PickleError):
-            pass
-        finally:
-            f.close()
-
-    def erase(self):
-        """
-        Public method to erase the collected timing data.
-        """
-        self.timings = {}
-        if os.path.exists(self.timingCache):
-            os.remove(self.timingCache)
-
-    def fix_frame_filename(self, frame):
-        """
-        Public method used to fixup the filename for a given frame.
-        
-        The logic employed here is that if a module was loaded
-        from a .pyc file, then the correct .py to operate with
-        should be in the same path as the .pyc. The reason this
-        logic is needed is that when a .pyc file is generated, the
-        filename embedded and thus what is readable in the code object
-        of the frame object is the fully qualified filepath when the
-        pyc is generated. If files are moved from machine to machine
-        this can break debugging as the .pyc will refer to the .py
-        on the original machine. Another case might be sharing
-        code over a network... This logic deals with that.
-        
-        @param frame the frame object
-        @return fixed up file name (string)
-        """
-        # get module name from __file__
-        if not isinstance(frame, profile.Profile.fake_frame) and \
-                '__file__' in frame.f_globals:
-            root, ext = os.path.splitext(frame.f_globals['__file__'])
-            if ext in ['.pyc', '.py', '.py2', '.pyo']:
-                fixedName = root + '.py'
-                if os.path.exists(fixedName):
-                    return fixedName
-                
-                fixedName = root + '.py2'
-                if os.path.exists(fixedName):
-                    return fixedName
-
-        return frame.f_code.co_filename
-
-    def trace_dispatch_call(self, frame, t):
-        """
-        Public method used to trace functions calls.
-        
-        This is a variant of the one found in the standard Python
-        profile.py calling fix_frame_filename above.
-        
-        @param frame reference to the call frame
-        @param t arguments of the call
-        @return flag indicating a handled call
-        """
-        if self.cur and frame.f_back is not self.cur[-2]:
-            rpt, rit, ret, rfn, rframe, rcur = self.cur
-            if not isinstance(rframe, profile.Profile.fake_frame):
-                assert rframe.f_back is frame.f_back, ("Bad call", rfn,
-                                                       rframe, rframe.f_back,
-                                                       frame, frame.f_back)
-                self.trace_dispatch_return(rframe, 0)
-                assert (self.cur is None or
-                        frame.f_back is self.cur[-2]), ("Bad call",
-                                                        self.cur[-3])
-        fcode = frame.f_code
-        fn = (self.fix_frame_filename(frame),
-              fcode.co_firstlineno, fcode.co_name)
-        self.cur = (t, 0, 0, fn, frame, self.cur)
-        timings = self.timings
-        if fn in timings:
-            cc, ns, tt, ct, callers = timings[fn]
-            timings[fn] = cc, ns + 1, tt, ct, callers
-        else:
-            timings[fn] = 0, 0, 0, 0, {}
-        return 1
-    
-    dispatch = {
-        "call": trace_dispatch_call,
-        "exception": profile.Profile.trace_dispatch_exception,
-        "return": profile.Profile.trace_dispatch_return,
-        "c_call": profile.Profile.trace_dispatch_c_call,
-        "c_exception": profile.Profile.trace_dispatch_return,
-        # the C function returned
-        "c_return": profile.Profile.trace_dispatch_return,
-    }
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
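The persistence idea behind `PyProfile.dump_stats()` above can be exercised with the standard library alone: profile a call, dump the stats to a file, and restore them later with `pstats`. The `fib` function and the file name are illustrative:

```python
# Round-trip profile data through a stats file, as PyProfile does with
# its "<basename>.profile" cache.
import cProfile
import os
import pstats
import tempfile


def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)


prof = cProfile.Profile()
prof.runcall(fib, 10)

path = os.path.join(tempfile.mkdtemp(), "script.profile")
prof.dump_stats(path)          # like self.dump_stats(self.profileCache)

stats = pstats.Stats(path)     # restore on a later run
assert stats.total_calls > 0
```

Restoring the raw timings dict across runs (the `.timings` cache) is the part that goes beyond the standard profiler; `marshal` is used for that above because the timing values are plain builtin types.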
--- a/DebugClients/Python/__init__.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,13 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Package implementing the Python debugger.
-
-It consists of different kinds of debug clients.
-"""
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/__init__.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,38 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Code coverage measurement for Python.
-
-Ned Batchelder
-http://nedbatchelder.com/code/coverage
-
-"""
-
-from coverage.version import __version__, __url__, version_info
-
-from coverage.control import Coverage, process_startup
-from coverage.data import CoverageData
-from coverage.misc import CoverageException
-from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
-from coverage.pytracer import PyTracer
-
-# Backward compatibility.
-coverage = Coverage
-
-# On Windows, we encode and decode deep enough that something goes wrong and
-# the encodings.utf_8 module is loaded and then unloaded, I don't know why.
-# Adding a reference here prevents it from being unloaded.  Yuk.
-import encodings.utf_8
-
-# Because of the "from coverage.control import fooey" lines at the top of the
-# file, there's an entry for coverage.coverage in sys.modules, mapped to None.
-# This makes some inspection tools (like pydoc) unable to find the class
-# coverage.coverage.  So remove that entry.
-import sys
-try:
-    del sys.modules['coverage.coverage']
-except KeyError:
-    pass
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/__main__.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,11 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Coverage.py's main entry point."""
-
-import sys
-from coverage.cmdline import main
-sys.exit(main())
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/annotate.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,106 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Source file annotation for coverage.py."""
-
-import io
-import os
-import re
-
-from coverage.files import flat_rootname
-from coverage.misc import isolate_module
-from coverage.report import Reporter
-
-os = isolate_module(os)
-
-
-class AnnotateReporter(Reporter):
-    """Generate annotated source files showing line coverage.
-
-    This reporter creates annotated copies of the measured source files. Each
-    .py file is copied as a .py,cover file, with a left-hand margin annotating
-    each line::
-
-        > def h(x):
-        -     if 0:   #pragma: no cover
-        -         pass
-        >     if x == 1:
-        !         a = 1
-        >     else:
-        >         a = 2
-
-        > h(2)
-
-    Executed lines use '>', lines not executed use '!', lines excluded from
-    consideration use '-'.
-
-    """
-
-    def __init__(self, coverage, config):
-        super(AnnotateReporter, self).__init__(coverage, config)
-        self.directory = None
-
-    blank_re = re.compile(r"\s*(#|$)")
-    else_re = re.compile(r"\s*else\s*:\s*(#|$)")
-
-    def report(self, morfs, directory=None):
-        """Run the report.
-
-        See `coverage.report()` for arguments.
-
-        """
-        self.report_files(self.annotate_file, morfs, directory)
-
-    def annotate_file(self, fr, analysis):
-        """Annotate a single file.
-
-        `fr` is the FileReporter for the file to annotate.
-
-        """
-        statements = sorted(analysis.statements)
-        missing = sorted(analysis.missing)
-        excluded = sorted(analysis.excluded)
-
-        if self.directory:
-            dest_file = os.path.join(self.directory, flat_rootname(fr.relative_filename()))
-            if dest_file.endswith("_py"):
-                dest_file = dest_file[:-3] + ".py"
-            dest_file += ",cover"
-        else:
-            dest_file = fr.filename + ",cover"
-
-        with io.open(dest_file, 'w', encoding='utf8') as dest:
-            i = 0
-            j = 0
-            covered = True
-            source = fr.source()
-            for lineno, line in enumerate(source.splitlines(True), start=1):
-                while i < len(statements) and statements[i] < lineno:
-                    i += 1
-                while j < len(missing) and missing[j] < lineno:
-                    j += 1
-                if i < len(statements) and statements[i] == lineno:
-                    covered = j >= len(missing) or missing[j] > lineno
-                if self.blank_re.match(line):
-                    dest.write(u'  ')
-                elif self.else_re.match(line):
-                    # Special logic for lines containing only 'else:'.
-                    if i >= len(statements) and j >= len(missing):
-                        dest.write(u'! ')
-                    elif i >= len(statements) or j >= len(missing):
-                        dest.write(u'> ')
-                    elif statements[i] == missing[j]:
-                        dest.write(u'! ')
-                    else:
-                        dest.write(u'> ')
-                elif lineno in excluded:
-                    dest.write(u'- ')
-                elif covered:
-                    dest.write(u'> ')
-                else:
-                    dest.write(u'! ')
-
-                dest.write(line)
-
-#
-# eflag: FileType = Python2
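The margin convention documented in `AnnotateReporter` ('>' executed, '!' missed, '-' excluded, blank for non-statements) can be sketched in isolation. The function and the line sets below are invented inputs, not real coverage data or coverage.py API:

```python
# Toy annotator reproducing the left-hand margin described above.
def annotate(source_lines, statements, missing, excluded=()):
    out = []
    for lineno, line in enumerate(source_lines, start=1):
        if not line.strip() or line.lstrip().startswith('#'):
            prefix = '  '          # blank or comment line
        elif lineno in excluded:
            prefix = '- '          # excluded from consideration
        elif lineno in missing:
            prefix = '! '          # statement, never executed
        elif lineno in statements:
            prefix = '> '          # statement, executed
        else:
            prefix = '  '
        out.append(prefix + line)
    return out


src = ["def h(x):", "    if x == 1:", "        a = 1", "    return a"]
annotated = annotate(src, statements={1, 2, 4}, missing={3})
assert annotated[2].startswith('! ')
```

The real reporter additionally special-cases `else:` lines, since they are not statements of their own but should inherit a marker from their branch.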
--- a/DebugClients/Python/coverage/backunittest.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,45 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Implementations of unittest features from the future."""
-
-# Use unittest2 if it's available, otherwise unittest.  This gives us
-# back-ported features for 2.6.
-try:
-    import unittest2 as unittest
-except ImportError:
-    import unittest
-
-
-def unittest_has(method):
-    """Does `unittest.TestCase` have `method` defined?"""
-    return hasattr(unittest.TestCase, method)
-
-
-class TestCase(unittest.TestCase):
-    """Just like unittest.TestCase, but with assert methods added.
-
-    Designed to be compatible with 3.1 unittest.  Methods are only defined if
-    `unittest` doesn't have them.
-
-    """
-    # pylint: disable=missing-docstring
-
-    # Many Pythons have this method defined.  But PyPy3 has a bug with it
-    # somehow (https://bitbucket.org/pypy/pypy/issues/2092), so always use our
-    # own implementation that works everywhere, at least for the ways we're
-    # calling it.
-    def assertCountEqual(self, s1, s2):
-        """Assert these have the same elements, regardless of order."""
-        self.assertEqual(sorted(s1), sorted(s2))
-
-    if not unittest_has('assertRaisesRegex'):
-        def assertRaisesRegex(self, *args, **kwargs):
-            return self.assertRaisesRegexp(*args, **kwargs)
-
-    if not unittest_has('assertRegex'):
-        def assertRegex(self, *args, **kwargs):
-            return self.assertRegexpMatches(*args, **kwargs)
-
-#
-# eflag: FileType = Python2
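The `unittest_has()` helper above is plain `hasattr` feature detection, which generalizes to any class whose API drifts across versions. A minimal sketch (on Python 3 the `assertRaisesRegex` spelling exists, which is what the check below relies on):

```python
# Feature-detect methods on unittest.TestCase, as backunittest.py does,
# then define fallbacks only for the ones that are missing.
import unittest


def unittest_has(method):
    """Does unittest.TestCase define `method`?"""
    return hasattr(unittest.TestCase, method)


assert unittest_has('assertRaisesRegex')
assert not unittest_has('definitelyNotATestCaseMethod')
```

Guarding each fallback with `if not unittest_has(...)` keeps the shims inert on interpreters that already provide the modern names.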
--- a/DebugClients/Python/coverage/backward.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,175 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Add things to old Pythons so I can pretend they are newer."""
-
-# This file does lots of tricky stuff, so disable a bunch of pylint warnings.
-# pylint: disable=redefined-builtin
-# pylint: disable=unused-import
-# pylint: disable=no-name-in-module
-
-import sys
-
-from coverage import env
-
-
-# Pythons 2 and 3 differ on where to get StringIO.
-try:
-    from cStringIO import StringIO
-except ImportError:
-    from io import StringIO
-
-# In py3, ConfigParser was renamed to the more-standard configparser
-try:
-    import configparser
-except ImportError:
-    import ConfigParser as configparser
-
-# What's a string called?
-try:
-    string_class = basestring
-except NameError:
-    string_class = str
-
-# What's a Unicode string called?
-try:
-    unicode_class = unicode
-except NameError:
-    unicode_class = str
-
-# Where do pickles come from?
-try:
-    import cPickle as pickle
-except ImportError:
-    import pickle
-
-# range or xrange?
-try:
-    range = xrange
-except NameError:
-    range = range
-
-# shlex.quote is new, but there's an undocumented implementation in "pipes",
-# who knew!?
-try:
-    from shlex import quote as shlex_quote
-except ImportError:
-    # Useful function, available under a different (undocumented) name
-    # in Python versions earlier than 3.3.
-    from pipes import quote as shlex_quote
-
-# A function to iterate listlessly over a dict's items.
-try:
-    {}.iteritems
-except AttributeError:
-    def iitems(d):
-        """Produce the items from dict `d`."""
-        return d.items()
-else:
-    def iitems(d):
-        """Produce the items from dict `d`."""
-        return d.iteritems()
-
-# Getting the `next` function from an iterator is different in 2 and 3.
-try:
-    iter([]).next
-except AttributeError:
-    def iternext(seq):
-        """Get the `next` function for iterating over `seq`."""
-        return iter(seq).__next__
-else:
-    def iternext(seq):
-        """Get the `next` function for iterating over `seq`."""
-        return iter(seq).next
-
-# Python 3.x is picky about bytes and strings, so provide methods to
-# get them right, and make them no-ops in 2.x
-if env.PY3:
-    def to_bytes(s):
-        """Convert string `s` to bytes."""
-        return s.encode('utf8')
-
-    def binary_bytes(byte_values):
-        """Produce a byte string with the ints from `byte_values`."""
-        return bytes(byte_values)
-
-    def bytes_to_ints(bytes_value):
-        """Turn a bytes object into a sequence of ints."""
-        # In Python 3, iterating bytes gives ints.
-        return bytes_value
-
-else:
-    def to_bytes(s):
-        """Convert string `s` to bytes (no-op in 2.x)."""
-        return s
-
-    def binary_bytes(byte_values):
-        """Produce a byte string with the ints from `byte_values`."""
-        return "".join(chr(b) for b in byte_values)
-
-    def bytes_to_ints(bytes_value):
-        """Turn a bytes object into a sequence of ints."""
-        for byte in bytes_value:
-            yield ord(byte)
-
-
-try:
-    # In Python 2.x, the builtins were in __builtin__
-    BUILTINS = sys.modules['__builtin__']
-except KeyError:
-    # In Python 3.x, they're in builtins
-    BUILTINS = sys.modules['builtins']
-
-
-# imp was deprecated in Python 3.3
-try:
-    import importlib
-    import importlib.util
-    imp = None
-except ImportError:
-    importlib = None
-
-# We only want to use importlib if it has everything we need.
-try:
-    importlib_util_find_spec = importlib.util.find_spec
-except Exception:
-    import imp
-    importlib_util_find_spec = None
-
-# What is the .pyc magic number for this version of Python?
-try:
-    PYC_MAGIC_NUMBER = importlib.util.MAGIC_NUMBER
-except AttributeError:
-    PYC_MAGIC_NUMBER = imp.get_magic()
-
-
-def import_local_file(modname, modfile=None):
-    """Import a local file as a module.
-
-    Opens a file in the current directory named `modname`.py, imports it
-    as `modname`, and returns the module object.  `modfile` is the file to
-    import if it isn't in the current directory.
-
-    """
-    try:
-        from importlib.machinery import SourceFileLoader
-    except ImportError:
-        SourceFileLoader = None
-
-    if modfile is None:
-        modfile = modname + '.py'
-    if SourceFileLoader:
-        mod = SourceFileLoader(modname, modfile).load_module()
-    else:
-        for suff in imp.get_suffixes():                 # pragma: part covered
-            if suff[0] == '.py':
-                break
-
-        with open(modfile, 'r') as f:
-            # pylint: disable=undefined-loop-variable
-            mod = imp.load_module(modname, f, modfile, suff)
-
-    return mod
-
-#
-# eflag: FileType = Python2
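The backward.py shims deleted above all follow one idiom: probe for a name or attribute in a try/except rather than branching on sys.version_info. A minimal sketch of that pattern (the specific shims shown here mirror two of the ones above; it runs unchanged on Python 2 and 3):

```python
# Feature detection in the style of coverage.py's backward.py:
# try the Python 2 spelling first, fall back to the Python 3 one.
try:
    string_class = basestring       # Python 2: common base of str and unicode
except NameError:
    string_class = str              # Python 3: str is the only string type

try:
    from cStringIO import StringIO  # Python 2: fast C implementation
except ImportError:
    from io import StringIO        # Python 3: cStringIO is gone

buf = StringIO()
buf.write(u"hello")
# True on both major versions, whichever branch was taken above.
is_string = isinstance(buf.getvalue(), string_class)
```

The advantage over version checks is that each shim tests the exact capability it needs, so it keeps working on interpreters (PyPy, Jython) whose version numbers do not track CPython's feature set.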
--- a/DebugClients/Python/coverage/bytecode.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,25 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Bytecode manipulation for coverage.py"""
-
-import types
-
-
-class CodeObjects(object):
-    """Iterate over all the code objects in `code`."""
-    def __init__(self, code):
-        self.stack = [code]
-
-    def __iter__(self):
-        while self.stack:
-            # We're going to return the code object on the stack, but first
-            # push its children for later returning.
-            code = self.stack.pop()
-            for c in code.co_consts:
-                if isinstance(c, types.CodeType):
-                    self.stack.append(c)
-            yield code
-
-#
-# eflag: FileType = Python2
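The deleted CodeObjects helper is a small depth-first walk over nested code objects: a module's compiled code stores the code object of each function it defines in `co_consts`, recursively. A self-contained sketch of the same traversal:

```python
import types

class CodeObjects(object):
    """Iterate over `code` and every code object nested inside it."""
    def __init__(self, code):
        self.stack = [code]

    def __iter__(self):
        while self.stack:
            # Yield the code object on the stack, but first push its
            # children (code objects among its constants) for later.
            code = self.stack.pop()
            for const in code.co_consts:
                if isinstance(const, types.CodeType):
                    self.stack.append(const)
            yield code

# A nested function compiles to a code object stored in its parent's co_consts.
source = "def outer():\n    def inner():\n        pass\n"
module_code = compile(source, "<demo>", "exec")
names = [c.co_name for c in CodeObjects(module_code)]
```

Here `names` covers the module body plus both function bodies, which is how coverage.py enumerates every block of bytecode in a file without executing it.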
--- a/DebugClients/Python/coverage/cmdline.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,766 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Command-line support for coverage.py."""
-
-import glob
-import optparse
-import os.path
-import sys
-import textwrap
-import traceback
-
-from coverage import env
-from coverage.collector import CTracer
-from coverage.execfile import run_python_file, run_python_module
-from coverage.misc import CoverageException, ExceptionDuringRun, NoSource
-from coverage.debug import info_formatter, info_header
-
-
-class Opts(object):
-    """A namespace class for individual options we'll build parsers from."""
-
-    append = optparse.make_option(
-        '-a', '--append', action='store_true',
-        help="Append coverage data to .coverage; otherwise it is started clean with each run.",
-    )
-    branch = optparse.make_option(
-        '', '--branch', action='store_true',
-        help="Measure branch coverage in addition to statement coverage.",
-    )
-    CONCURRENCY_CHOICES = [
-        "thread", "gevent", "greenlet", "eventlet", "multiprocessing",
-    ]
-    concurrency = optparse.make_option(
-        '', '--concurrency', action='store', metavar="LIB",
-        choices=CONCURRENCY_CHOICES,
-        help=(
-            "Properly measure code using a concurrency library. "
-            "Valid values are: %s."
-        ) % ", ".join(CONCURRENCY_CHOICES),
-    )
-    debug = optparse.make_option(
-        '', '--debug', action='store', metavar="OPTS",
-        help="Debug options, separated by commas",
-    )
-    directory = optparse.make_option(
-        '-d', '--directory', action='store', metavar="DIR",
-        help="Write the output files to DIR.",
-    )
-    fail_under = optparse.make_option(
-        '', '--fail-under', action='store', metavar="MIN", type="int",
-        help="Exit with a status of 2 if the total coverage is less than MIN.",
-    )
-    help = optparse.make_option(
-        '-h', '--help', action='store_true',
-        help="Get help on this command.",
-    )
-    ignore_errors = optparse.make_option(
-        '-i', '--ignore-errors', action='store_true',
-        help="Ignore errors while reading source files.",
-    )
-    include = optparse.make_option(
-        '', '--include', action='store',
-        metavar="PAT1,PAT2,...",
-        help=(
-            "Include only files whose paths match one of these patterns. "
-            "Accepts shell-style wildcards, which must be quoted."
-        ),
-    )
-    pylib = optparse.make_option(
-        '-L', '--pylib', action='store_true',
-        help=(
-            "Measure coverage even inside the Python installed library, "
-            "which isn't done by default."
-        ),
-    )
-    show_missing = optparse.make_option(
-        '-m', '--show-missing', action='store_true',
-        help="Show line numbers of statements in each module that weren't executed.",
-    )
-    skip_covered = optparse.make_option(
-        '--skip-covered', action='store_true',
-        help="Skip files with 100% coverage.",
-    )
-    omit = optparse.make_option(
-        '', '--omit', action='store',
-        metavar="PAT1,PAT2,...",
-        help=(
-            "Omit files whose paths match one of these patterns. "
-            "Accepts shell-style wildcards, which must be quoted."
-        ),
-    )
-    output_xml = optparse.make_option(
-        '-o', '', action='store', dest="outfile",
-        metavar="OUTFILE",
-        help="Write the XML report to this file. Defaults to 'coverage.xml'",
-    )
-    parallel_mode = optparse.make_option(
-        '-p', '--parallel-mode', action='store_true',
-        help=(
-            "Append the machine name, process id and random number to the "
-            ".coverage data file name to simplify collecting data from "
-            "many processes."
-        ),
-    )
-    module = optparse.make_option(
-        '-m', '--module', action='store_true',
-        help=(
-            "<pyfile> is an importable Python module, not a script path, "
-            "to be run as 'python -m' would run it."
-        ),
-    )
-    rcfile = optparse.make_option(
-        '', '--rcfile', action='store',
-        help="Specify configuration file.  Defaults to '.coveragerc'",
-    )
-    source = optparse.make_option(
-        '', '--source', action='store', metavar="SRC1,SRC2,...",
-        help="A list of packages or directories of code to be measured.",
-    )
-    timid = optparse.make_option(
-        '', '--timid', action='store_true',
-        help=(
-            "Use a simpler but slower trace method.  Try this if you get "
-            "seemingly impossible results!"
-        ),
-    )
-    title = optparse.make_option(
-        '', '--title', action='store', metavar="TITLE",
-        help="A text string to use as the title on the HTML.",
-    )
-    version = optparse.make_option(
-        '', '--version', action='store_true',
-        help="Display version information and exit.",
-    )
-
-
-class CoverageOptionParser(optparse.OptionParser, object):
-    """Base OptionParser for coverage.py.
-
-    Problems don't exit the program.
-    Defaults are initialized for all options.
-
-    """
-
-    def __init__(self, *args, **kwargs):
-        super(CoverageOptionParser, self).__init__(
-            add_help_option=False, *args, **kwargs
-            )
-        self.set_defaults(
-            action=None,
-            append=None,
-            branch=None,
-            concurrency=None,
-            debug=None,
-            directory=None,
-            fail_under=None,
-            help=None,
-            ignore_errors=None,
-            include=None,
-            module=None,
-            omit=None,
-            parallel_mode=None,
-            pylib=None,
-            rcfile=True,
-            show_missing=None,
-            skip_covered=None,
-            source=None,
-            timid=None,
-            title=None,
-            version=None,
-            )
-
-        self.disable_interspersed_args()
-        self.help_fn = self.help_noop
-
-    def help_noop(self, error=None, topic=None, parser=None):
-        """No-op help function."""
-        pass
-
-    class OptionParserError(Exception):
-        """Used to stop the optparse error handler ending the process."""
-        pass
-
-    def parse_args_ok(self, args=None, options=None):
-        """Call optparse.parse_args, but return a triple:
-
-        (ok, options, args)
-
-        """
-        try:
-            options, args = \
-                super(CoverageOptionParser, self).parse_args(args, options)
-        except self.OptionParserError:
-            return False, None, None
-        return True, options, args
-
-    def error(self, msg):
-        """Override optparse.error so sys.exit doesn't get called."""
-        self.help_fn(msg)
-        raise self.OptionParserError
-
-
-class GlobalOptionParser(CoverageOptionParser):
-    """Command-line parser for coverage.py global option arguments."""
-
-    def __init__(self):
-        super(GlobalOptionParser, self).__init__()
-
-        self.add_options([
-            Opts.help,
-            Opts.version,
-        ])
-
-
-class CmdOptionParser(CoverageOptionParser):
-    """Parse one of the new-style commands for coverage.py."""
-
-    def __init__(self, action, options=None, defaults=None, usage=None, description=None):
-        """Create an OptionParser for a coverage.py command.
-
-        `action` is the slug to put into `options.action`.
-        `options` is a list of Option's for the command.
-        `defaults` is a dict of default value for options.
-        `usage` is the usage string to display in help.
-        `description` is the description of the command, for the help text.
-
-        """
-        if usage:
-            usage = "%prog " + usage
-        super(CmdOptionParser, self).__init__(
-            usage=usage,
-            description=description,
-        )
-        self.set_defaults(action=action, **(defaults or {}))
-        if options:
-            self.add_options(options)
-        self.cmd = action
-
-    def __eq__(self, other):
-        # A convenience equality, so that I can put strings in unit test
-        # results, and they will compare equal to objects.
-        return (other == "<CmdOptionParser:%s>" % self.cmd)
-
-    def get_prog_name(self):
-        """Override of an undocumented function in optparse.OptionParser."""
-        program_name = super(CmdOptionParser, self).get_prog_name()
-
-        # Include the sub-command for this parser as part of the command.
-        return "%(command)s %(subcommand)s" % {'command': program_name, 'subcommand': self.cmd}
-
-
-GLOBAL_ARGS = [
-    Opts.debug,
-    Opts.help,
-    Opts.rcfile,
-    ]
-
-CMDS = {
-    'annotate': CmdOptionParser(
-        "annotate",
-        [
-            Opts.directory,
-            Opts.ignore_errors,
-            Opts.include,
-            Opts.omit,
-            ] + GLOBAL_ARGS,
-        usage="[options] [modules]",
-        description=(
-            "Make annotated copies of the given files, marking statements that are executed "
-            "with > and statements that are missed with !."
-        ),
-    ),
-
-    'combine': CmdOptionParser(
-        "combine",
-        GLOBAL_ARGS,
-        usage="<path1> <path2> ... <pathN>",
-        description=(
-            "Combine data from multiple coverage files collected "
-            "with 'run -p'.  The combined results are written to a single "
-            "file representing the union of the data. The positional "
-            "arguments are data files or directories containing data files. "
-            "If no paths are provided, data files in the default data file's "
-            "directory are combined."
-        ),
-    ),
-
-    'debug': CmdOptionParser(
-        "debug", GLOBAL_ARGS,
-        usage="<topic>",
-        description=(
-            "Display information on the internals of coverage.py, "
-            "for diagnosing problems. "
-            "Topics are 'data' to show a summary of the collected data, "
-            "or 'sys' to show installation information."
-        ),
-    ),
-
-    'erase': CmdOptionParser(
-        "erase", GLOBAL_ARGS,
-        usage=" ",
-        description="Erase previously collected coverage data.",
-    ),
-
-    'help': CmdOptionParser(
-        "help", GLOBAL_ARGS,
-        usage="[command]",
-        description="Describe how to use coverage.py",
-    ),
-
-    'html': CmdOptionParser(
-        "html",
-        [
-            Opts.directory,
-            Opts.fail_under,
-            Opts.ignore_errors,
-            Opts.include,
-            Opts.omit,
-            Opts.title,
-            ] + GLOBAL_ARGS,
-        usage="[options] [modules]",
-        description=(
-            "Create an HTML report of the coverage of the files.  "
-            "Each file gets its own page, with the source decorated to show "
-            "executed, excluded, and missed lines."
-        ),
-    ),
-
-    'report': CmdOptionParser(
-        "report",
-        [
-            Opts.fail_under,
-            Opts.ignore_errors,
-            Opts.include,
-            Opts.omit,
-            Opts.show_missing,
-            Opts.skip_covered,
-            ] + GLOBAL_ARGS,
-        usage="[options] [modules]",
-        description="Report coverage statistics on modules."
-    ),
-
-    'run': CmdOptionParser(
-        "run",
-        [
-            Opts.append,
-            Opts.branch,
-            Opts.concurrency,
-            Opts.include,
-            Opts.module,
-            Opts.omit,
-            Opts.pylib,
-            Opts.parallel_mode,
-            Opts.source,
-            Opts.timid,
-            ] + GLOBAL_ARGS,
-        usage="[options] <pyfile> [program options]",
-        description="Run a Python program, measuring code execution."
-    ),
-
-    'xml': CmdOptionParser(
-        "xml",
-        [
-            Opts.fail_under,
-            Opts.ignore_errors,
-            Opts.include,
-            Opts.omit,
-            Opts.output_xml,
-            ] + GLOBAL_ARGS,
-        usage="[options] [modules]",
-        description="Generate an XML report of coverage results."
-    ),
-}
-
-
-OK, ERR, FAIL_UNDER = 0, 1, 2
-
-
-class CoverageScript(object):
-    """The command-line interface to coverage.py."""
-
-    def __init__(self, _covpkg=None, _run_python_file=None,
-                 _run_python_module=None, _help_fn=None, _path_exists=None):
-        # _covpkg is for dependency injection, so we can test this code.
-        if _covpkg:
-            self.covpkg = _covpkg
-        else:
-            import coverage
-            self.covpkg = coverage
-
-        # For dependency injection:
-        self.run_python_file = _run_python_file or run_python_file
-        self.run_python_module = _run_python_module or run_python_module
-        self.help_fn = _help_fn or self.help
-        self.path_exists = _path_exists or os.path.exists
-        self.global_option = False
-
-        self.coverage = None
-
-        self.program_name = os.path.basename(sys.argv[0])
-        if env.WINDOWS:
-            # entry_points={'console_scripts':...} on Windows makes files
-            # called coverage.exe, coverage3.exe, and coverage-3.5.exe. These
-            # invoke coverage-script.py, coverage3-script.py, and
-            # coverage-3.5-script.py.  argv[0] is the .py file, but we want to
-            # get back to the original form.
-            auto_suffix = "-script.py"
-            if self.program_name.endswith(auto_suffix):
-                self.program_name = self.program_name[:-len(auto_suffix)]
-
-    def command_line(self, argv):
-        """The bulk of the command line interface to coverage.py.
-
-        `argv` is the argument list to process.
-
-        Returns 0 if all is well, 1 if something went wrong.
-
-        """
-        # Collect the command-line options.
-        if not argv:
-            self.help_fn(topic='minimum_help')
-            return OK
-
-        # The command syntax we parse depends on the first argument.  Global
-        # switch syntax always starts with an option.
-        self.global_option = argv[0].startswith('-')
-        if self.global_option:
-            parser = GlobalOptionParser()
-        else:
-            parser = CMDS.get(argv[0])
-            if not parser:
-                self.help_fn("Unknown command: '%s'" % argv[0])
-                return ERR
-            argv = argv[1:]
-
-        parser.help_fn = self.help_fn
-        ok, options, args = parser.parse_args_ok(argv)
-        if not ok:
-            return ERR
-
-        # Handle help and version.
-        if self.do_help(options, args, parser):
-            return OK
-
-        # Check for conflicts and problems in the options.
-        if not self.args_ok(options, args):
-            return ERR
-
-        # We need to be able to import from the current directory, because
-        # plugins may try, for example, to read Django settings.
-        sys.path[0] = ''
-
-        # Listify the list options.
-        source = unshell_list(options.source)
-        omit = unshell_list(options.omit)
-        include = unshell_list(options.include)
-        debug = unshell_list(options.debug)
-
-        # Do something.
-        self.coverage = self.covpkg.coverage(
-            data_suffix=options.parallel_mode,
-            cover_pylib=options.pylib,
-            timid=options.timid,
-            branch=options.branch,
-            config_file=options.rcfile,
-            source=source,
-            omit=omit,
-            include=include,
-            debug=debug,
-            concurrency=options.concurrency,
-            )
-
-        if options.action == "debug":
-            return self.do_debug(args)
-
-        elif options.action == "erase":
-            self.coverage.erase()
-            return OK
-
-        elif options.action == "run":
-            return self.do_run(options, args)
-
-        elif options.action == "combine":
-            self.coverage.load()
-            data_dirs = args or None
-            self.coverage.combine(data_dirs)
-            self.coverage.save()
-            return OK
-
-        # Remaining actions are reporting, with some common options.
-        report_args = dict(
-            morfs=unglob_args(args),
-            ignore_errors=options.ignore_errors,
-            omit=omit,
-            include=include,
-            )
-
-        self.coverage.load()
-
-        total = None
-        if options.action == "report":
-            total = self.coverage.report(
-                show_missing=options.show_missing,
-                skip_covered=options.skip_covered, **report_args)
-        elif options.action == "annotate":
-            self.coverage.annotate(
-                directory=options.directory, **report_args)
-        elif options.action == "html":
-            total = self.coverage.html_report(
-                directory=options.directory, title=options.title,
-                **report_args)
-        elif options.action == "xml":
-            outfile = options.outfile
-            total = self.coverage.xml_report(outfile=outfile, **report_args)
-
-        if total is not None:
-            # Apply the command line fail-under options, and then use the config
-            # value, so we can get fail_under from the config file.
-            if options.fail_under is not None:
-                self.coverage.set_option("report:fail_under", options.fail_under)
-
-            if self.coverage.get_option("report:fail_under"):
-
-                # Total needs to be rounded, but be careful of 0 and 100.
-                if 0 < total < 1:
-                    total = 1
-                elif 99 < total < 100:
-                    total = 99
-                else:
-                    total = round(total)
-
-                if total >= self.coverage.get_option("report:fail_under"):
-                    return OK
-                else:
-                    return FAIL_UNDER
-
-        return OK
-
-    def help(self, error=None, topic=None, parser=None):
-        """Display an error message, or the named topic."""
-        assert error or topic or parser
-        if error:
-            print(error)
-            print("Use '%s help' for help." % (self.program_name,))
-        elif parser:
-            print(parser.format_help().strip())
-        else:
-            help_params = dict(self.covpkg.__dict__)
-            help_params['program_name'] = self.program_name
-            if CTracer is not None:
-                help_params['extension_modifier'] = 'with C extension'
-            else:
-                help_params['extension_modifier'] = 'without C extension'
-            help_msg = textwrap.dedent(HELP_TOPICS.get(topic, '')).strip()
-            if help_msg:
-                print(help_msg.format(**help_params))
-            else:
-                print("Don't know topic %r" % topic)
-
-    def do_help(self, options, args, parser):
-        """Deal with help requests.
-
-        Return True if it handled the request, False if not.
-
-        """
-        # Handle help.
-        if options.help:
-            if self.global_option:
-                self.help_fn(topic='help')
-            else:
-                self.help_fn(parser=parser)
-            return True
-
-        if options.action == "help":
-            if args:
-                for a in args:
-                    parser = CMDS.get(a)
-                    if parser:
-                        self.help_fn(parser=parser)
-                    else:
-                        self.help_fn(topic=a)
-            else:
-                self.help_fn(topic='help')
-            return True
-
-        # Handle version.
-        if options.version:
-            self.help_fn(topic='version')
-            return True
-
-        return False
-
-    def args_ok(self, options, args):
-        """Check for conflicts and problems in the options.
-
-        Returns True if everything is OK, or False if not.
-
-        """
-        if options.action == "run" and not args:
-            self.help_fn("Nothing to do.")
-            return False
-
-        return True
-
-    def do_run(self, options, args):
-        """Implementation of 'coverage run'."""
-
-        if options.append and self.coverage.get_option("run:parallel"):
-            self.help_fn("Can't append to data files in parallel mode.")
-            return ERR
-
-        if not self.coverage.get_option("run:parallel"):
-            if not options.append:
-                self.coverage.erase()
-
-        # Run the script.
-        self.coverage.start()
-        code_ran = True
-        try:
-            if options.module:
-                self.run_python_module(args[0], args)
-            else:
-                filename = args[0]
-                self.run_python_file(filename, args)
-        except NoSource:
-            code_ran = False
-            raise
-        finally:
-            self.coverage.stop()
-            if code_ran:
-                if options.append:
-                    data_file = self.coverage.get_option("run:data_file")
-                    if self.path_exists(data_file):
-                        self.coverage.combine(data_paths=[data_file])
-                self.coverage.save()
-
-        return OK
-
-    def do_debug(self, args):
-        """Implementation of 'coverage debug'."""
-
-        if not args:
-            self.help_fn("What information would you like: data, sys?")
-            return ERR
-
-        for info in args:
-            if info == 'sys':
-                sys_info = self.coverage.sys_info()
-                print(info_header("sys"))
-                for line in info_formatter(sys_info):
-                    print(" %s" % line)
-            elif info == 'data':
-                self.coverage.load()
-                data = self.coverage.data
-                print(info_header("data"))
-                print("path: %s" % self.coverage.data_files.filename)
-                if data:
-                    print("has_arcs: %r" % data.has_arcs())
-                    summary = data.line_counts(fullpath=True)
-                    filenames = sorted(summary.keys())
-                    print("\n%d files:" % len(filenames))
-                    for f in filenames:
-                        line = "%s: %d lines" % (f, summary[f])
-                        plugin = data.file_tracer(f)
-                        if plugin:
-                            line += " [%s]" % plugin
-                        print(line)
-                else:
-                    print("No data collected")
-            else:
-                self.help_fn("Don't know what you mean by %r" % info)
-                return ERR
-
-        return OK
-
-
-def unshell_list(s):
-    """Turn a command-line argument into a list."""
-    if not s:
-        return None
-    if env.WINDOWS:
-        # When running coverage.py as coverage.exe, some of the behavior
-        # of the shell is emulated: wildcards are expanded into a list of
-        # file names.  So you have to single-quote patterns on the command
-        # line, but (not) helpfully, the single quotes are included in the
-        # argument, so we have to strip them off here.
-        s = s.strip("'")
-    return s.split(',')
-
-
-def unglob_args(args):
-    """Interpret shell wildcards for platforms that need it."""
-    if env.WINDOWS:
-        globbed = []
-        for arg in args:
-            if '?' in arg or '*' in arg:
-                globbed.extend(glob.glob(arg))
-            else:
-                globbed.append(arg)
-        args = globbed
-    return args
-
-
-HELP_TOPICS = {
-    'help': """\
-        Coverage.py, version {__version__} {extension_modifier}
-        Measure, collect, and report on code coverage in Python programs.
-
-        usage: {program_name} <command> [options] [args]
-
-        Commands:
-            annotate    Annotate source files with execution information.
-            combine     Combine a number of data files.
-            erase       Erase previously collected coverage data.
-            help        Get help on using coverage.py.
-            html        Create an HTML report.
-            report      Report coverage stats on modules.
-            run         Run a Python program and measure code execution.
-            xml         Create an XML report of coverage results.
-
-        Use "{program_name} help <command>" for detailed help on any command.
-        For full documentation, see {__url__}
-    """,
-
-    'minimum_help': """\
-        Code coverage for Python.  Use '{program_name} help' for help.
-    """,
-
-    'version': """\
-        Coverage.py, version {__version__} {extension_modifier}
-        Documentation at {__url__}
-    """,
-}
-
-
-def main(argv=None):
-    """The main entry point to coverage.py.
-
-    This is installed as the script entry point.
-
-    """
-    if argv is None:
-        argv = sys.argv[1:]
-    try:
-        status = CoverageScript().command_line(argv)
-    except ExceptionDuringRun as err:
-        # An exception was caught while running the product code.  The
-        # sys.exc_info() return tuple is packed into an ExceptionDuringRun
-        # exception.
-        traceback.print_exception(*err.args)
-        status = ERR
-    except CoverageException as err:
-        # A controlled error inside coverage.py: print the message to the user.
-        print(err)
-        status = ERR
-    except SystemExit as err:
-        # The user called `sys.exit()`.  Exit with their argument, if any.
-        if err.args:
-            status = err.args[0]
-        else:
-            status = None
-    return status
-
-#
-# eflag: FileType = Python2
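The CMDS table deleted above drives a simple dispatch: the first command-line argument selects a preconfigured OptionParser, and an unknown command is an error. A stripped-down, hypothetical sketch of that pattern (the real command_line() wires in many more options, defaults, and actions):

```python
import optparse

OK, ERR = 0, 1  # exit statuses, as in cmdline.py

def make_parser(action, usage, description):
    """Build a parser for one subcommand, recording its action slug."""
    parser = optparse.OptionParser(
        usage="%prog " + usage, description=description, add_help_option=False)
    parser.set_defaults(action=action)
    return parser

# One parser per subcommand, keyed by the command word.
CMDS = {
    "erase": make_parser(
        "erase", " ", "Erase previously collected coverage data."),
    "report": make_parser(
        "report", "[options] [modules]",
        "Report coverage statistics on modules."),
}

def dispatch(argv):
    """Look up the parser for argv[0] and parse the remaining arguments."""
    parser = CMDS.get(argv[0])
    if parser is None:
        return ERR  # unknown command
    options, args = parser.parse_args(argv[1:])
    return OK

statuses = [dispatch(["erase"]), dispatch(["nosuchcommand"])]
```

Keeping one parser object per subcommand lets each command own its usage string and defaults, while shared options (the GLOBAL_ARGS list above) are appended to every command's option list.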
--- a/DebugClients/Python/coverage/collector.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,364 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Raw data collector for coverage.py."""
-
-import os
-import sys
-
-from coverage import env
-from coverage.backward import iitems
-from coverage.files import abs_file
-from coverage.misc import CoverageException, isolate_module
-from coverage.pytracer import PyTracer
-
-os = isolate_module(os)
-
-
-try:
-    # Use the C extension code when we can, for speed.
-    from coverage.tracer import CTracer, CFileDisposition   # pylint: disable=no-name-in-module
-except ImportError:
-    # Couldn't import the C extension, maybe it isn't built.
-    if os.getenv('COVERAGE_TEST_TRACER') == 'c':
-        # During testing, we use the COVERAGE_TEST_TRACER environment variable
-        # to indicate that we've fiddled with the environment to test this
-        # fallback code.  If we thought we had a C tracer, but couldn't import
-        # it, then exit quickly and clearly instead of dribbling confusing
-        # errors. I'm using sys.exit here instead of an exception because an
-        # exception here causes all sorts of other noise in unittest.
-        sys.stderr.write("*** COVERAGE_TEST_TRACER is 'c' but can't import CTracer!\n")
-        sys.exit(1)
-    CTracer = None
-
-
-class FileDisposition(object):
-    """A simple value type for recording what to do with a file."""
-    pass
-
-
-def should_start_context(frame):
-    """Who-Tests-What hack: Determine whether this frame begins a new who-context."""
-    fn_name = frame.f_code.co_name
-    if fn_name.startswith("test"):
-        return fn_name
-
-
-class Collector(object):
-    """Collects trace data.
-
-    Creates a Tracer object for each thread, since they track stack
-    information.  Each Tracer points to the same shared data, contributing
-    traced data points.
-
-    When the Collector is started, it creates a Tracer for the current thread,
-    and installs a function to create Tracers for each new thread started.
-    When the Collector is stopped, all active Tracers are stopped.
-
-    Threads started while the Collector is stopped will never have Tracers
-    associated with them.
-
-    """
-
-    # The stack of active Collectors.  Collectors are added here when started,
-    # and popped when stopped.  Collectors on the stack are paused when not
-    # the top, and resumed when they become the top again.
-    _collectors = []
-
-    def __init__(self, should_trace, check_include, timid, branch, warn, concurrency):
-        """Create a collector.
-
-        `should_trace` is a function taking a file name and returning a
-        `coverage.FileDisposition` object.
-
-        `check_include` is a function taking a file name and a frame. It returns
-        a boolean: True if the file should be traced, False if not.
-
-        If `timid` is true, then a slower, simpler trace function will be
-        used.  This is important for some environments where manipulation of
-        tracing functions makes the faster, more sophisticated trace function
-        not operate properly.
-
-        If `branch` is true, then branches will be measured.  This involves
-        collecting data on which statements followed each other (arcs).  Use
-        `get_arc_data` to get the arc data.
-
-        `warn` is a warning function, taking a single string message argument,
-        to be used if a warning needs to be issued.
-
-        `concurrency` is a string indicating the concurrency library in use.
-        Valid values are "greenlet", "eventlet", "gevent", or "thread" (the
-        default).
-
-        """
-        self.should_trace = should_trace
-        self.check_include = check_include
-        self.warn = warn
-        self.branch = branch
-        self.threading = None
-        self.concurrency = concurrency
-
-        self.concur_id_func = None
-
-        try:
-            if concurrency == "greenlet":
-                import greenlet
-                self.concur_id_func = greenlet.getcurrent
-            elif concurrency == "eventlet":
-                import eventlet.greenthread     # pylint: disable=import-error,useless-suppression
-                self.concur_id_func = eventlet.greenthread.getcurrent
-            elif concurrency == "gevent":
-                import gevent                   # pylint: disable=import-error,useless-suppression
-                self.concur_id_func = gevent.getcurrent
-            elif concurrency == "thread" or not concurrency:
-                # It's important to import threading only if we need it.  If
-                # it's imported early, and the program being measured uses
-                # gevent, then gevent's monkey-patching won't work properly.
-                import threading
-                self.threading = threading
-            else:
-                raise CoverageException("Don't understand concurrency=%s" % concurrency)
-        except ImportError:
-            raise CoverageException(
-                "Couldn't trace with concurrency=%s, the module isn't installed." % concurrency
-            )
-
-        # Who-Tests-What is just a hack at the moment, so turn it on with an
-        # environment variable.
-        self.wtw = int(os.getenv('COVERAGE_WTW', 0))
-
-        self.reset()
-
-        if timid:
-            # Being timid: use the simple Python trace function.
-            self._trace_class = PyTracer
-        else:
-            # Being fast: use the C Tracer if it is available, else the Python
-            # trace function.
-            self._trace_class = CTracer or PyTracer
-
-        if self._trace_class is CTracer:
-            self.file_disposition_class = CFileDisposition
-            self.supports_plugins = True
-        else:
-            self.file_disposition_class = FileDisposition
-            self.supports_plugins = False
-
-    def __repr__(self):
-        return "<Collector at 0x%x: %s>" % (id(self), self.tracer_name())
-
-    def tracer_name(self):
-        """Return the class name of the tracer we're using."""
-        return self._trace_class.__name__
-
-    def reset(self):
-        """Clear collected data, and prepare to collect more."""
-        # A dictionary mapping file names to dicts with line number keys (if not
-        # branch coverage), or mapping file names to dicts with line number
-        # pairs as keys (if branch coverage).
-        self.data = {}
-
-        # A dict mapping contexts to data dictionaries.
-        self.contexts = {}
-        self.contexts[None] = self.data
-
-        # A dictionary mapping file names to file tracer plugin names that will
-        # handle them.
-        self.file_tracers = {}
-
-        # The .should_trace_cache attribute is a cache from file names to
-        # coverage.FileDisposition objects, or None.  When a file is first
-        # considered for tracing, a FileDisposition is obtained from
-        # Coverage.should_trace.  Its .trace attribute indicates whether the
-        # file should be traced or not.  If it should be, a plugin with dynamic
-        # file names can decide not to trace it based on the dynamic file name
-        # being excluded by the inclusion rules, in which case the
-        # FileDisposition will be replaced by None in the cache.
-        if env.PYPY:
-            import __pypy__                     # pylint: disable=import-error
-            # Alex Gaynor said:
-            # should_trace_cache is a strictly growing key: once a key is in
-            # it, it never changes.  Further, the keys used to access it are
-            # generally constant, given sufficient context. That is to say, at
-            # any given point _trace() is called, pypy is able to know the key.
-            # This is because the key is determined by the physical source code
-            # line, and that's invariant with the call site.
-            #
-            # This property of a dict with immutable keys, combined with
-            # call-site-constant keys is a match for PyPy's module dict,
-            # which is optimized for such workloads.
-            #
-            # This gives a 20% benefit on the workload described at
-            # https://bitbucket.org/pypy/pypy/issue/1871/10x-slower-than-cpython-under-coverage
-            self.should_trace_cache = __pypy__.newdict("module")
-        else:
-            self.should_trace_cache = {}
-
-        # Our active Tracers.
-        self.tracers = []
-
-    def _start_tracer(self):
-        """Start a new Tracer object, and store it in self.tracers."""
-        tracer = self._trace_class()
-        tracer.data = self.data
-        tracer.trace_arcs = self.branch
-        tracer.should_trace = self.should_trace
-        tracer.should_trace_cache = self.should_trace_cache
-        tracer.warn = self.warn
-
-        if hasattr(tracer, 'concur_id_func'):
-            tracer.concur_id_func = self.concur_id_func
-        elif self.concur_id_func:
-            raise CoverageException(
-                "Can't support concurrency=%s with %s, only threads are supported" % (
-                    self.concurrency, self.tracer_name(),
-                )
-            )
-
-        if hasattr(tracer, 'file_tracers'):
-            tracer.file_tracers = self.file_tracers
-        if hasattr(tracer, 'threading'):
-            tracer.threading = self.threading
-        if hasattr(tracer, 'check_include'):
-            tracer.check_include = self.check_include
-        if self.wtw:
-            if hasattr(tracer, 'should_start_context'):
-                tracer.should_start_context = should_start_context
-            if hasattr(tracer, 'switch_context'):
-                tracer.switch_context = self.switch_context
-
-        fn = tracer.start()
-        self.tracers.append(tracer)
-
-        return fn
-
-    # The trace function has to be set individually on each thread before
-    # execution begins.  Ironically, the only support the threading module has
-    # for running code before the thread main is the tracing function.  So we
-    # install this as a trace function, and the first time it's called, it does
-    # the real trace installation.
-
-    def _installation_trace(self, frame, event, arg):
-        """Called on new threads, installs the real tracer."""
-        # Remove ourselves as the trace function.
-        sys.settrace(None)
-        # Install the real tracer.
-        fn = self._start_tracer()
-        # Invoke the real trace function with the current event, to be sure
-        # not to lose an event.
-        if fn:
-            fn = fn(frame, event, arg)
-        # Return the new trace function to continue tracing in this scope.
-        return fn
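The installation trampoline above is a general pattern: a throwaway trace function that, on its first event, removes itself and installs the real tracer, forwarding the event so nothing is lost. A minimal sketch of the pattern with hypothetical names, recording trace events for a small function:

```python
import sys

events = []

def real_tracer(frame, event, arg):
    """The actual trace function; records events and stays installed."""
    events.append(event)
    return real_tracer

def installation_trace(frame, event, arg):
    """One-shot trampoline: uninstall self, install the real tracer."""
    sys.settrace(None)          # remove ourselves as the trace function
    sys.settrace(real_tracer)   # install the real tracer
    # Forward the current event so it isn't lost.
    return real_tracer(frame, event, arg)

def traced():
    x = 1
    y = x + 1
    return y

sys.settrace(installation_trace)
traced()
sys.settrace(None)
print(events)  # includes a 'call' event followed by 'line' events
```

The same trick works per-thread via `threading.settrace`, which is how the collector jump-starts tracing in threads it did not create.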
-
-    def start(self):
-        """Start collecting trace information."""
-        if self._collectors:
-            self._collectors[-1].pause()
-
-        # Check to see whether we had a fullcoverage tracer installed. If so,
-        # get the stack frames it stashed away for us.
-        traces0 = []
-        fn0 = sys.gettrace()
-        if fn0:
-            tracer0 = getattr(fn0, '__self__', None)
-            if tracer0:
-                traces0 = getattr(tracer0, 'traces', [])
-
-        try:
-            # Install the tracer on this thread.
-            fn = self._start_tracer()
-        except:
-            if self._collectors:
-                self._collectors[-1].resume()
-            raise
-
-        # If _start_tracer succeeded, then we add ourselves to the global
-        # stack of collectors.
-        self._collectors.append(self)
-
-        # Replay all the events from fullcoverage into the new trace function.
-        for args in traces0:
-            (frame, event, arg), lineno = args
-            try:
-                fn(frame, event, arg, lineno=lineno)
-            except TypeError:
-                raise Exception("fullcoverage must be run with the C trace function.")
-
-        # Install our installation tracer in threading, to jump start other
-        # threads.
-        if self.threading:
-            self.threading.settrace(self._installation_trace)
-
-    def stop(self):
-        """Stop collecting trace information."""
-        assert self._collectors
-        assert self._collectors[-1] is self, (
-            "Expected current collector to be %r, but it's %r" % (self, self._collectors[-1])
-        )
-
-        self.pause()
-        self.tracers = []
-
-        # Remove this Collector from the stack, and resume the one underneath
-        # (if any).
-        self._collectors.pop()
-        if self._collectors:
-            self._collectors[-1].resume()
-
-    def pause(self):
-        """Pause tracing, but be prepared to `resume`."""
-        for tracer in self.tracers:
-            tracer.stop()
-            stats = tracer.get_stats()
-            if stats:
-                print("\nCoverage.py tracer stats:")
-                for k in sorted(stats.keys()):
-                    print("%20s: %s" % (k, stats[k]))
-        if self.threading:
-            self.threading.settrace(None)
-
-    def resume(self):
-        """Resume tracing after a `pause`."""
-        for tracer in self.tracers:
-            tracer.start()
-        if self.threading:
-            self.threading.settrace(self._installation_trace)
-        else:
-            self._start_tracer()
-
-    def switch_context(self, new_context):
-        """Who-Tests-What hack: switch to a new who-context."""
-        # Make a new data dict, or find the existing one, and switch all the
-        # tracers to use it.
-        data = self.contexts.setdefault(new_context, {})
-        for tracer in self.tracers:
-            tracer.data = data
-
-    def save_data(self, covdata):
-        """Save the collected data to a `CoverageData`.
-
-        Also resets the collector.
-
-        """
-        def abs_file_dict(d):
-            """Return a dict like d, but with keys modified by `abs_file`."""
-            return dict((abs_file(k), v) for k, v in iitems(d))
-
-        if self.branch:
-            covdata.add_arcs(abs_file_dict(self.data))
-        else:
-            covdata.add_lines(abs_file_dict(self.data))
-        covdata.add_file_tracers(abs_file_dict(self.file_tracers))
-
-        if self.wtw:
-            # Just a hack, so just hack it.
-            import pprint
-            out_file = "coverage_wtw_{:06}.py".format(os.getpid())
-            with open(out_file, "w") as wtw_out:
-                pprint.pprint(self.contexts, wtw_out)
-
-        self.reset()
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/config.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,368 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Config file for coverage.py"""
-
-import collections
-import os
-import re
-import sys
-
-from coverage.backward import configparser, iitems, string_class
-from coverage.misc import CoverageException, isolate_module
-
-os = isolate_module(os)
-
-
-class HandyConfigParser(configparser.RawConfigParser):
-    """Our specialization of ConfigParser."""
-
-    def __init__(self, section_prefix):
-        configparser.RawConfigParser.__init__(self)
-        self.section_prefix = section_prefix
-
-    def read(self, filename):
-        """Read a file name as UTF-8 configuration data."""
-        kwargs = {}
-        if sys.version_info >= (3, 2):
-            kwargs['encoding'] = "utf-8"
-        return configparser.RawConfigParser.read(self, filename, **kwargs)
-
-    def has_option(self, section, option):
-        section = self.section_prefix + section
-        return configparser.RawConfigParser.has_option(self, section, option)
-
-    def has_section(self, section):
-        section = self.section_prefix + section
-        return configparser.RawConfigParser.has_section(self, section)
-
-    def options(self, section):
-        section = self.section_prefix + section
-        return configparser.RawConfigParser.options(self, section)
-
-    def get_section(self, section):
-        """Get the contents of a section, as a dictionary."""
-        d = {}
-        for opt in self.options(section):
-            d[opt] = self.get(section, opt)
-        return d
-
-    def get(self, section, *args, **kwargs):
-        """Get a value, replacing environment variables also.
-
-        The arguments are the same as `RawConfigParser.get`, but in the found
-        value, ``$WORD`` or ``${WORD}`` are replaced by the value of the
-        environment variable ``WORD``.
-
-        Returns the finished value.
-
-        """
-        section = self.section_prefix + section
-        v = configparser.RawConfigParser.get(self, section, *args, **kwargs)
-        def dollar_replace(m):
-            """Called for each $replacement."""
-            # Only one of the groups will have matched, just get its text.
-            word = next(w for w in m.groups() if w is not None)     # pragma: part covered
-            if word == "$":
-                return "$"
-            else:
-                return os.environ.get(word, '')
-
-        dollar_pattern = r"""(?x)   # Use extended regex syntax
-            \$(?:                   # A dollar sign, then
-            (?P<v1>\w+) |           #   a plain word,
-            {(?P<v2>\w+)} |         #   or a {-wrapped word,
-            (?P<char>[$])           #   or a dollar sign.
-            )
-            """
-        v = re.sub(dollar_pattern, dollar_replace, v)
-        return v
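The `$WORD` / `${WORD}` / `$$` substitution performed by `get` can be exercised on its own. A small self-contained sketch using the same verbose-mode pattern (`expand` is a hypothetical wrapper name):

```python
import os
import re

dollar_pattern = r"""(?x)       # extended regex syntax
    \$(?:                       # a dollar sign, then
    (?P<v1>\w+) |               #   a plain word,
    {(?P<v2>\w+)} |             #   or a {-wrapped word,
    (?P<char>[$])               #   or a second dollar sign.
    )
    """

def expand(value):
    """Replace $WORD and ${WORD} with env values, and $$ with a literal $."""
    def dollar_replace(m):
        # Only one of the named groups matched; take its text.
        word = next(w for w in m.groups() if w is not None)
        if word == "$":
            return "$"
        return os.environ.get(word, '')
    return re.sub(dollar_pattern, dollar_replace, value)

os.environ["COV_DEMO"] = "/tmp/data"
print(expand("file = $COV_DEMO/out.txt"))    # file = /tmp/data/out.txt
print(expand("cost = $$5"))                  # cost = $5
```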
-
-    def getlist(self, section, option):
-        """Read a list of strings.
-
-        The value of `section` and `option` is treated as a comma- and newline-
-        separated list of strings.  Each value is stripped of whitespace.
-
-        Returns the list of strings.
-
-        """
-        value_list = self.get(section, option)
-        values = []
-        for value_line in value_list.split('\n'):
-            for value in value_line.split(','):
-                value = value.strip()
-                if value:
-                    values.append(value)
-        return values
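The comma-and-newline list format accepted by `getlist` can be parsed standalone; the sketch below lifts the same splitting logic out of the config parser (`parse_list` is a hypothetical name):

```python
def parse_list(value_list):
    """Split a config value on commas and newlines, stripping whitespace."""
    values = []
    for value_line in value_list.split('\n'):
        for value in value_line.split(','):
            value = value.strip()
            if value:
                values.append(value)
    return values

raw = """
    tests/*,
    */migrations/*
    setup.py
"""
print(parse_list(raw))  # ['tests/*', '*/migrations/*', 'setup.py']
```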
-
-    def getregexlist(self, section, option):
-        """Read a list of full-line regexes.
-
-        The value of `section` and `option` is treated as a newline-separated
-        list of regexes.  Each value is stripped of whitespace.
-
-        Returns the list of strings.
-
-        """
-        line_list = self.get(section, option)
-        value_list = []
-        for value in line_list.splitlines():
-            value = value.strip()
-            try:
-                re.compile(value)
-            except re.error as e:
-                raise CoverageException(
-                    "Invalid [%s].%s value %r: %s" % (section, option, value, e)
-                )
-            if value:
-                value_list.append(value)
-        return value_list
-
-
-# The default line exclusion regexes.
-DEFAULT_EXCLUDE = [
-    r'(?i)#\s*pragma[:\s]?\s*no\s*cover',
-]
-
-# The default partial branch regexes, to be modified by the user.
-DEFAULT_PARTIAL = [
-    r'(?i)#\s*pragma[:\s]?\s*no\s*branch',
-]
-
-# The default partial branch regexes, based on Python semantics.
-# These are any Python branching constructs that can't actually execute all
-# their branches.
-DEFAULT_PARTIAL_ALWAYS = [
-    'while (True|1|False|0):',
-    'if (True|1|False|0):',
-]
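These defaults are matched against individual source lines; any line matching an exclusion regex is dropped from reporting. A quick check of which lines the default `no cover` pragma regex actually hits (note it is case-insensitive and tolerant of spacing):

```python
import re

DEFAULT_EXCLUDE = [
    r'(?i)#\s*pragma[:\s]?\s*no\s*cover',
]

exclude_re = re.compile('|'.join(DEFAULT_EXCLUDE))

lines = [
    'return x  # pragma: no cover',
    'raise AssertionError  # PRAGMA NO COVER',
    'return x  # covered normally',
]
for line in lines:
    print(bool(exclude_re.search(line)), line)
```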
-
-
-class CoverageConfig(object):
-    """Coverage.py configuration.
-
-    The attributes of this class are the various settings that control the
-    operation of coverage.py.
-
-    """
-    def __init__(self):
-        """Initialize the configuration attributes to their defaults."""
-        # Metadata about the config.
-        self.attempted_config_files = []
-        self.config_files = []
-
-        # Defaults for [run]
-        self.branch = False
-        self.concurrency = None
-        self.cover_pylib = False
-        self.data_file = ".coverage"
-        self.debug = []
-        self.note = None
-        self.parallel = False
-        self.plugins = []
-        self.source = None
-        self.timid = False
-
-        # Defaults for [report]
-        self.exclude_list = DEFAULT_EXCLUDE[:]
-        self.fail_under = 0
-        self.ignore_errors = False
-        self.include = None
-        self.omit = None
-        self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:]
-        self.partial_list = DEFAULT_PARTIAL[:]
-        self.precision = 0
-        self.show_missing = False
-        self.skip_covered = False
-
-        # Defaults for [html]
-        self.extra_css = None
-        self.html_dir = "htmlcov"
-        self.html_title = "Coverage report"
-
-        # Defaults for [xml]
-        self.xml_output = "coverage.xml"
-        self.xml_package_depth = 99
-
-        # Defaults for [paths]
-        self.paths = {}
-
-        # Options for plugins
-        self.plugin_options = {}
-
-    MUST_BE_LIST = ["omit", "include", "debug", "plugins"]
-
-    def from_args(self, **kwargs):
-        """Read config values from `kwargs`."""
-        for k, v in iitems(kwargs):
-            if v is not None:
-                if k in self.MUST_BE_LIST and isinstance(v, string_class):
-                    v = [v]
-                setattr(self, k, v)
-
-    def from_file(self, filename, section_prefix=""):
-        """Read configuration from a .rc file.
-
-        `filename` is a file name to read.
-
-        Returns True or False, whether the file could be read.
-
-        """
-        self.attempted_config_files.append(filename)
-
-        cp = HandyConfigParser(section_prefix)
-        try:
-            files_read = cp.read(filename)
-        except configparser.Error as err:
-            raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
-        if not files_read:
-            return False
-
-        self.config_files.extend(files_read)
-
-        try:
-            for option_spec in self.CONFIG_FILE_OPTIONS:
-                self._set_attr_from_config_option(cp, *option_spec)
-        except ValueError as err:
-            raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
-
-        # Check that there are no unrecognized options.
-        all_options = collections.defaultdict(set)
-        for option_spec in self.CONFIG_FILE_OPTIONS:
-            section, option = option_spec[1].split(":")
-            all_options[section].add(option)
-
-        for section, options in iitems(all_options):
-            if cp.has_section(section):
-                for unknown in set(cp.options(section)) - options:
-                    if section_prefix:
-                        section = section_prefix + section
-                    raise CoverageException(
-                        "Unrecognized option '[%s] %s=' in config file %s" % (
-                            section, unknown, filename
-                        )
-                    )
-
-        # [paths] is special
-        if cp.has_section('paths'):
-            for option in cp.options('paths'):
-                self.paths[option] = cp.getlist('paths', option)
-
-        # plugins can have options
-        for plugin in self.plugins:
-            if cp.has_section(plugin):
-                self.plugin_options[plugin] = cp.get_section(plugin)
-
-        return True
-
-    CONFIG_FILE_OPTIONS = [
-        # These are *args for _set_attr_from_config_option:
-        #   (attr, where, type_="")
-        #
-        #   attr is the attribute to set on the CoverageConfig object.
-        #   where is the section:name to read from the configuration file.
-        #   type_ is the optional type to apply, by using .getTYPE to read the
-        #       configuration value from the file.
-
-        # [run]
-        ('branch', 'run:branch', 'boolean'),
-        ('concurrency', 'run:concurrency'),
-        ('cover_pylib', 'run:cover_pylib', 'boolean'),
-        ('data_file', 'run:data_file'),
-        ('debug', 'run:debug', 'list'),
-        ('include', 'run:include', 'list'),
-        ('note', 'run:note'),
-        ('omit', 'run:omit', 'list'),
-        ('parallel', 'run:parallel', 'boolean'),
-        ('plugins', 'run:plugins', 'list'),
-        ('source', 'run:source', 'list'),
-        ('timid', 'run:timid', 'boolean'),
-
-        # [report]
-        ('exclude_list', 'report:exclude_lines', 'regexlist'),
-        ('fail_under', 'report:fail_under', 'int'),
-        ('ignore_errors', 'report:ignore_errors', 'boolean'),
-        ('include', 'report:include', 'list'),
-        ('omit', 'report:omit', 'list'),
-        ('partial_always_list', 'report:partial_branches_always', 'regexlist'),
-        ('partial_list', 'report:partial_branches', 'regexlist'),
-        ('precision', 'report:precision', 'int'),
-        ('show_missing', 'report:show_missing', 'boolean'),
-        ('skip_covered', 'report:skip_covered', 'boolean'),
-
-        # [html]
-        ('extra_css', 'html:extra_css'),
-        ('html_dir', 'html:directory'),
-        ('html_title', 'html:title'),
-
-        # [xml]
-        ('xml_output', 'xml:output'),
-        ('xml_package_depth', 'xml:package_depth', 'int'),
-    ]
-
-    def _set_attr_from_config_option(self, cp, attr, where, type_=''):
-        """Set an attribute on self if it exists in the ConfigParser."""
-        section, option = where.split(":")
-        if cp.has_option(section, option):
-            method = getattr(cp, 'get' + type_)
-            setattr(self, attr, method(section, option))
-
-    def get_plugin_options(self, plugin):
-        """Get a dictionary of options for the plugin named `plugin`."""
-        return self.plugin_options.get(plugin, {})
-
-    def set_option(self, option_name, value):
-        """Set an option in the configuration.
-
-        `option_name` is a colon-separated string indicating the section and
-        option name.  For example, the ``branch`` option in the ``[run]``
-        section of the config file would be indicated with `"run:branch"`.
-
-        `value` is the new value for the option.
-
-        """
-
-        # Check all the hard-coded options.
-        for option_spec in self.CONFIG_FILE_OPTIONS:
-            attr, where = option_spec[:2]
-            if where == option_name:
-                setattr(self, attr, value)
-                return
-
-        # See if it's a plugin option.
-        plugin_name, _, key = option_name.partition(":")
-        if key and plugin_name in self.plugins:
-            self.plugin_options.setdefault(plugin_name, {})[key] = value
-            return
-
-        # If we get here, we didn't find the option.
-        raise CoverageException("No such option: %r" % option_name)
-
-    def get_option(self, option_name):
-        """Get an option from the configuration.
-
-        `option_name` is a colon-separated string indicating the section and
-        option name.  For example, the ``branch`` option in the ``[run]``
-        section of the config file would be indicated with `"run:branch"`.
-
-        Returns the value of the option.
-
-        """
-
-        # Check all the hard-coded options.
-        for option_spec in self.CONFIG_FILE_OPTIONS:
-            attr, where = option_spec[:2]
-            if where == option_name:
-                return getattr(self, attr)
-
-        # See if it's a plugin option.
-        plugin_name, _, key = option_name.partition(":")
-        if key and plugin_name in self.plugins:
-            return self.plugin_options.get(plugin_name, {}).get(key)
-
-        # If we get here, we didn't find the option.
-        raise CoverageException("No such option: %r" % option_name)
-
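Both `set_option` and `get_option` resolve a colon-separated name by first scanning the hard-coded option table and only then treating it as `plugin:key` via `str.partition`. A reduced sketch of that dispatch (class and option names hypothetical, trimmed to two options):

```python
class MiniConfig(object):
    """Sketch of the colon-separated option dispatch used above."""
    CONFIG_FILE_OPTIONS = [
        ('branch', 'run:branch', 'boolean'),
        ('fail_under', 'report:fail_under', 'int'),
    ]

    def __init__(self):
        self.branch = False
        self.fail_under = 0
        self.plugins = ['myplugin']
        self.plugin_options = {}

    def set_option(self, option_name, value):
        # Hard-coded options first.
        for option_spec in self.CONFIG_FILE_OPTIONS:
            attr, where = option_spec[:2]
            if where == option_name:
                setattr(self, attr, value)
                return
        # Otherwise try plugin:key.
        plugin_name, _, key = option_name.partition(":")
        if key and plugin_name in self.plugins:
            self.plugin_options.setdefault(plugin_name, {})[key] = value
            return
        raise ValueError("No such option: %r" % option_name)

cfg = MiniConfig()
cfg.set_option("run:branch", True)
cfg.set_option("myplugin:color", "blue")
print(cfg.branch, cfg.plugin_options)
```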
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/control.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,1202 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Core control stuff for coverage.py."""
-
-import atexit
-import inspect
-import os
-import platform
-import re
-import sys
-import traceback
-
-from coverage import env, files
-from coverage.annotate import AnnotateReporter
-from coverage.backward import string_class, iitems
-from coverage.collector import Collector
-from coverage.config import CoverageConfig
-from coverage.data import CoverageData, CoverageDataFiles
-from coverage.debug import DebugControl
-from coverage.files import TreeMatcher, FnmatchMatcher
-from coverage.files import PathAliases, find_python_files, prep_patterns
-from coverage.files import ModuleMatcher, abs_file
-from coverage.html import HtmlReporter
-from coverage.misc import CoverageException, bool_or_none, join_regex
-from coverage.misc import file_be_gone, isolate_module
-from coverage.monkey import patch_multiprocessing
-from coverage.plugin import FileReporter
-from coverage.plugin_support import Plugins
-from coverage.python import PythonFileReporter
-from coverage.results import Analysis, Numbers
-from coverage.summary import SummaryReporter
-from coverage.xmlreport import XmlReporter
-
-os = isolate_module(os)
-
-# Pypy has some unusual stuff in the "stdlib".  Consider those locations
-# when deciding where the stdlib is.
-try:
-    import _structseq
-except ImportError:
-    _structseq = None
-
-
-class Coverage(object):
-    """Programmatic access to coverage.py.
-
-    To use::
-
-        from coverage import Coverage
-
-        cov = Coverage()
-        cov.start()
-        #.. call your code ..
-        cov.stop()
-        cov.html_report(directory='covhtml')
-
-    """
-    def __init__(
-        self, data_file=None, data_suffix=None, cover_pylib=None,
-        auto_data=False, timid=None, branch=None, config_file=True,
-        source=None, omit=None, include=None, debug=None,
-        concurrency=None,
-    ):
-        """
-        `data_file` is the base name of the data file to use, defaulting to
-        ".coverage".  `data_suffix` is appended (with a dot) to `data_file` to
-        create the final file name.  If `data_suffix` is simply True, then a
-        suffix is created with the machine and process identity included.
-
-        `cover_pylib` is a boolean determining whether Python code installed
-        with the Python interpreter is measured.  This includes the Python
-        standard library and any packages installed with the interpreter.
-
-        If `auto_data` is true, then any existing data file will be read when
-        coverage measurement starts, and data will be saved automatically when
-        measurement stops.
-
-        If `timid` is true, then a slower and simpler trace function will be
-        used.  This is important for some environments where manipulation of
-        tracing functions breaks the faster trace function.
-
-        If `branch` is true, then branch coverage will be measured in addition
-        to the usual statement coverage.
-
-        `config_file` determines what configuration file to read:
-
-            * If it is ".coveragerc", it is interpreted as if it were True,
-              for backward compatibility.
-
-            * If it is a string, it is the name of the file to read.  If the
-              file can't be read, it is an error.
-
-            * If it is True, then a few standard file names are tried
-              (".coveragerc", "setup.cfg").  It is not an error for these
-              files not to be found.
-
-            * If it is False, then no configuration file is read.
-
-        `source` is a list of file paths or package names.  Only code located
-        in the trees indicated by the file paths or package names will be
-        measured.
-
-        `include` and `omit` are lists of file name patterns. Files that match
-        `include` will be measured, files that match `omit` will not.  Each
-        will also accept a single string argument.
-
-        `debug` is a list of strings indicating what debugging information is
-        desired.
-
-        `concurrency` is a string indicating the concurrency library being used
-        in the measured code.  Without this, coverage.py will get incorrect
-        results.  Valid strings are "greenlet", "eventlet", "gevent",
-        "multiprocessing", or "thread" (the default).
-
-        .. versionadded:: 4.0
-            The `concurrency` parameter.
-
-        """
-        # Build our configuration from a number of sources:
-        # 1: defaults:
-        self.config = CoverageConfig()
-
-        # 2: from the rcfile, .coveragerc or setup.cfg file:
-        if config_file:
-            did_read_rc = False
-            # Some API users were specifying ".coveragerc" to mean the same as
-            # True, so make it so.
-            if config_file == ".coveragerc":
-                config_file = True
-            specified_file = (config_file is not True)
-            if not specified_file:
-                config_file = ".coveragerc"
-
-            did_read_rc = self.config.from_file(config_file)
-
-            if not did_read_rc:
-                if specified_file:
-                    raise CoverageException(
-                        "Couldn't read '%s' as a config file" % config_file
-                        )
-                self.config.from_file("setup.cfg", section_prefix="coverage:")
-
-        # 3: from environment variables:
-        env_data_file = os.environ.get('COVERAGE_FILE')
-        if env_data_file:
-            self.config.data_file = env_data_file
-        debugs = os.environ.get('COVERAGE_DEBUG')
-        if debugs:
-            self.config.debug.extend(debugs.split(","))
-
-        # 4: from constructor arguments:
-        self.config.from_args(
-            data_file=data_file, cover_pylib=cover_pylib, timid=timid,
-            branch=branch, parallel=bool_or_none(data_suffix),
-            source=source, omit=omit, include=include, debug=debug,
-            concurrency=concurrency,
-            )
-
-        self._debug_file = None
-        self._auto_data = auto_data
-        self._data_suffix = data_suffix
-
-        # The matchers for _should_trace.
-        self.source_match = None
-        self.source_pkgs_match = None
-        self.pylib_match = self.cover_match = None
-        self.include_match = self.omit_match = None
-
-        # Is it ok for no data to be collected?
-        self._warn_no_data = True
-        self._warn_unimported_source = True
-
-        # A record of all the warnings that have been issued.
-        self._warnings = []
-
-        # Other instance attributes, set later.
-        self.omit = self.include = self.source = None
-        self.source_pkgs = None
-        self.data = self.data_files = self.collector = None
-        self.plugins = None
-        self.pylib_dirs = self.cover_dirs = None
-        self.data_suffix = self.run_suffix = None
-        self._exclude_re = None
-        self.debug = None
-
-        # State machine variables:
-        # Have we initialized everything?
-        self._inited = False
-        # Have we started collecting and not stopped it?
-        self._started = False
-        # Have we measured some data and not harvested it?
-        self._measured = False
-
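The constructor above layers configuration from four sources, each overriding the one before it: built-in defaults, then the rcfile (`.coveragerc` or `setup.cfg`), then environment variables, then constructor arguments. A minimal sketch of that precedence (hypothetical `layered_config` helper, not the real `CoverageConfig`):

```python
import os

# Hypothetical sketch of the four-layer precedence used above:
# defaults < config file < environment < constructor arguments.
def layered_config(defaults, rcfile, args):
    config = dict(defaults)          # 1: defaults
    config.update(rcfile)            # 2: .coveragerc / setup.cfg values
    env_data_file = os.environ.get("COVERAGE_FILE")  # 3: environment
    if env_data_file:
        config["data_file"] = env_data_file
    # 4: constructor args win, but only when actually passed (not None)
    config.update({k: v for k, v in args.items() if v is not None})
    return config

cfg = layered_config(
    {"data_file": ".coverage", "branch": False},
    {"branch": True},
    {"data_file": None, "timid": True},
)
```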
-    def _init(self):
-        """Set all the initial state.
-
-        This is called by the public methods to initialize state. This lets us
-        construct a :class:`Coverage` object, then tweak its state before this
-        function is called.
-
-        """
-        if self._inited:
-            return
-
-        # Create and configure the debugging controller. COVERAGE_DEBUG_FILE
-        # is an environment variable, the name of a file to append debug logs
-        # to.
-        if self._debug_file is None:
-            debug_file_name = os.environ.get("COVERAGE_DEBUG_FILE")
-            if debug_file_name:
-                self._debug_file = open(debug_file_name, "a")
-            else:
-                self._debug_file = sys.stderr
-        self.debug = DebugControl(self.config.debug, self._debug_file)
-
-        # Load plugins
-        self.plugins = Plugins.load_plugins(self.config.plugins, self.config, self.debug)
-
-        # _exclude_re is a dict that maps exclusion list names to compiled
-        # regexes.
-        self._exclude_re = {}
-        self._exclude_regex_stale()
-
-        files.set_relative_directory()
-
-        # The source argument can be directories or package names.
-        self.source = []
-        self.source_pkgs = []
-        for src in self.config.source or []:
-            if os.path.exists(src):
-                self.source.append(files.canonical_filename(src))
-            else:
-                self.source_pkgs.append(src)
-
-        self.omit = prep_patterns(self.config.omit)
-        self.include = prep_patterns(self.config.include)
-
-        concurrency = self.config.concurrency
-        if concurrency == "multiprocessing":
-            patch_multiprocessing()
-            concurrency = None
-
-        self.collector = Collector(
-            should_trace=self._should_trace,
-            check_include=self._check_include_omit_etc,
-            timid=self.config.timid,
-            branch=self.config.branch,
-            warn=self._warn,
-            concurrency=concurrency,
-            )
-
-        # Early warning if we aren't going to be able to support plugins.
-        if self.plugins.file_tracers and not self.collector.supports_plugins:
-            self._warn(
-                "Plugin file tracers (%s) aren't supported with %s" % (
-                    ", ".join(
-                        plugin._coverage_plugin_name
-                            for plugin in self.plugins.file_tracers
-                        ),
-                    self.collector.tracer_name(),
-                    )
-                )
-            for plugin in self.plugins.file_tracers:
-                plugin._coverage_enabled = False
-
-        # Suffixes are a bit tricky.  We want to use the data suffix only when
-        # collecting data, not when combining data.  So we save it as
-        # `self.run_suffix` now, and promote it to `self.data_suffix` if we
-        # find that we are collecting data later.
-        if self._data_suffix or self.config.parallel:
-            if not isinstance(self._data_suffix, string_class):
-                # if data_suffix=True, use .machinename.pid.random
-                self._data_suffix = True
-        else:
-            self._data_suffix = None
-        self.data_suffix = None
-        self.run_suffix = self._data_suffix
-
-        # Create the data file.  We do this at construction time so that the
-        # data file will be written into the directory where the process
-        # started rather than wherever the process eventually chdir'd to.
-        self.data = CoverageData(debug=self.debug)
-        self.data_files = CoverageDataFiles(basename=self.config.data_file, warn=self._warn)
-
-        # The directories for files considered "installed with the interpreter".
-        self.pylib_dirs = set()
-        if not self.config.cover_pylib:
-            # Look at where some standard modules are located. That's the
-            # indication for "installed with the interpreter". In some
-            # environments (virtualenv, for example), these modules may be
-            # spread across a few locations. Look at all the candidate modules
-            # we've imported, and take all the different ones.
-            for m in (atexit, inspect, os, platform, re, _structseq, traceback):
-                if m is not None and hasattr(m, "__file__"):
-                    self.pylib_dirs.add(self._canonical_dir(m))
-            if _structseq and not hasattr(_structseq, '__file__'):
-                # PyPy 2.4 has no __file__ in the builtin modules, but the code
-                # objects still have the file names.  So dig into one to find
-                # the path to exclude.
-                structseq_new = _structseq.structseq_new
-                try:
-                    structseq_file = structseq_new.func_code.co_filename
-                except AttributeError:
-                    structseq_file = structseq_new.__code__.co_filename
-                self.pylib_dirs.add(self._canonical_dir(structseq_file))
-
-        # To avoid tracing the coverage.py code itself, we skip anything
-        # located where we are.
-        self.cover_dirs = [self._canonical_dir(__file__)]
-        if env.TESTING:
-            # When testing, we use PyContracts, which should be considered
-            # part of coverage.py, and it uses six. Exclude those directories
-            # just as we exclude ourselves.
-            import contracts, six
-            for mod in [contracts, six]:
-                self.cover_dirs.append(self._canonical_dir(mod))
-
-        # Set the reporting precision.
-        Numbers.set_precision(self.config.precision)
-
-        atexit.register(self._atexit)
-
-        self._inited = True
-
-        # Create the matchers we need for _should_trace
-        if self.source or self.source_pkgs:
-            self.source_match = TreeMatcher(self.source)
-            self.source_pkgs_match = ModuleMatcher(self.source_pkgs)
-        else:
-            if self.cover_dirs:
-                self.cover_match = TreeMatcher(self.cover_dirs)
-            if self.pylib_dirs:
-                self.pylib_match = TreeMatcher(self.pylib_dirs)
-        if self.include:
-            self.include_match = FnmatchMatcher(self.include)
-        if self.omit:
-            self.omit_match = FnmatchMatcher(self.omit)
-
-        # The user may want to debug things, show info if desired.
-        wrote_any = False
-        if self.debug.should('config'):
-            config_info = sorted(self.config.__dict__.items())
-            self.debug.write_formatted_info("config", config_info)
-            wrote_any = True
-
-        if self.debug.should('sys'):
-            self.debug.write_formatted_info("sys", self.sys_info())
-            for plugin in self.plugins:
-                header = "sys: " + plugin._coverage_plugin_name
-                info = plugin.sys_info()
-                self.debug.write_formatted_info(header, info)
-            wrote_any = True
-
-        if wrote_any:
-            self.debug.write_formatted_info("end", ())
-
-    def _canonical_dir(self, morf):
-        """Return the canonical directory of the module or file `morf`."""
-        morf_filename = PythonFileReporter(morf, self).filename
-        return os.path.split(morf_filename)[0]
-
-    def _source_for_file(self, filename):
-        """Return the source file for `filename`.
-
-        Given a file name being traced, return the best guess as to the source
-        file to attribute it to.
-
-        """
-        if filename.endswith(".py"):
-            # .py files are themselves source files.
-            return filename
-
-        elif filename.endswith((".pyc", ".pyo")):
-            # Bytecode files probably have source files near them.
-            py_filename = filename[:-1]
-            if os.path.exists(py_filename):
-                # Found a .py file, use that.
-                return py_filename
-            if env.WINDOWS:
-                # On Windows, it could be a .pyw file.
-                pyw_filename = py_filename + "w"
-                if os.path.exists(pyw_filename):
-                    return pyw_filename
-            # Didn't find source, but it's probably the .py file we want.
-            return py_filename
-
-        elif filename.endswith("$py.class"):
-            # Jython is easy to guess.
-            return filename[:-9] + ".py"
-
-        # No idea, just use the file name as-is.
-        return filename
-
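The suffix handling in `_source_for_file` can be re-sketched as a standalone function (this is an illustrative rewrite, not the actual coverage.py helper; the `exists` parameter stands in for `os.path.exists` so the logic is testable):

```python
import os

# Re-sketch of the suffix logic above: map a traced file name back to
# its likely source file.
def source_for(filename, is_windows=False, exists=os.path.exists):
    if filename.endswith(".py"):
        return filename               # already a source file
    if filename.endswith((".pyc", ".pyo")):
        py = filename[:-1]            # strip the trailing "c"/"o"
        if exists(py):
            return py
        if is_windows and exists(py + "w"):
            return py + "w"           # Windows .pyw scripts
        return py                     # best guess even if it doesn't exist
    if filename.endswith("$py.class"):
        return filename[:-9] + ".py"  # Jython bytecode
    return filename                   # no idea, use as-is
```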
-    def _name_for_module(self, module_globals, filename):
-        """Get the name of the module for a set of globals and file name.
-
-        For configurability's sake, we allow __main__ modules to be matched by
-        their importable name.
-
-        If loaded via runpy (aka -m), we can usually recover the "original"
-        full dotted module name, otherwise, we resort to interpreting the
-        file name to get the module's name.  In the case that the module name
-        can't be determined, None is returned.
-
-        """
-        dunder_name = module_globals.get('__name__', None)
-
-        if isinstance(dunder_name, str) and dunder_name != '__main__':
-            # This is the usual case: an imported module.
-            return dunder_name
-
-        loader = module_globals.get('__loader__', None)
-        for attrname in ('fullname', 'name'):   # attribute renamed in py3.2
-            if hasattr(loader, attrname):
-                fullname = getattr(loader, attrname)
-            else:
-                continue
-
-            if isinstance(fullname, str) and fullname != '__main__':
-                # Module loaded via: runpy -m
-                return fullname
-
-        # Script as first argument to Python command line.
-        inspectedname = inspect.getmodulename(filename)
-        if inspectedname is not None:
-            return inspectedname
-        else:
-            return dunder_name
-
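The fallback order in `_name_for_module` (prefer `__name__` unless it is `"__main__"`, then the loader's name, then the file name) can be condensed into a short sketch. `name_for_module` here is an illustrative simplification; `inspect.getmodulename` is the real stdlib call used above:

```python
import inspect

# Sketch of the fallback order above: __name__ (unless "__main__"),
# then the loader's full name, then the file name itself.
def name_for_module(module_globals, filename):
    dunder_name = module_globals.get("__name__")
    if isinstance(dunder_name, str) and dunder_name != "__main__":
        return dunder_name            # the usual case: an imported module
    loader = module_globals.get("__loader__")
    fullname = getattr(loader, "name", None)  # "fullname" before py3.2
    if isinstance(fullname, str) and fullname != "__main__":
        return fullname               # module run via `python -m`
    return inspect.getmodulename(filename) or dunder_name
```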
-    def _should_trace_internal(self, filename, frame):
-        """Decide whether to trace execution in `filename`, with a reason.
-
-        This function is called from the trace function.  As each new file name
-        is encountered, this function determines whether it is traced or not.
-
-        Returns a FileDisposition object.
-
-        """
-        original_filename = filename
-        disp = _disposition_init(self.collector.file_disposition_class, filename)
-
-        def nope(disp, reason):
-            """Simple helper to make it easy to return NO."""
-            disp.trace = False
-            disp.reason = reason
-            return disp
-
-        # Compiled Python files have two file names: frame.f_code.co_filename is
-        # the file name at the time the .pyc was compiled.  The second name is
-        # __file__, which is where the .pyc was actually loaded from.  Since
-        # .pyc files can be moved after compilation (for example, by being
-        # installed), we look for __file__ in the frame and prefer it to the
-        # co_filename value.
-        dunder_file = frame.f_globals.get('__file__')
-        if dunder_file:
-            filename = self._source_for_file(dunder_file)
-            if original_filename and not original_filename.startswith('<'):
-                orig = os.path.basename(original_filename)
-                if orig != os.path.basename(filename):
-                    # Files shouldn't be renamed when moved. This happens when
-                    # exec'ing code.  If it seems like something is wrong with
-                    # the frame's file name, then just use the original.
-                    filename = original_filename
-
-        if not filename:
-            # Empty string is pretty useless.
-            return nope(disp, "empty string isn't a file name")
-
-        if filename.startswith('memory:'):
-            return nope(disp, "memory isn't traceable")
-
-        if filename.startswith('<'):
-            # Lots of non-file execution is represented with artificial
-            # file names like "<string>", "<doctest readme.txt[0]>", or
-            # "<exec_function>".  Don't ever trace these executions, since we
-            # can't do anything with the data later anyway.
-            return nope(disp, "not a real file name")
-
-        # pyexpat does a dumb thing, calling the trace function explicitly from
-        # C code with a C file name.
-        if re.search(r"[/\\]Modules[/\\]pyexpat.c", filename):
-            return nope(disp, "pyexpat lies about itself")
-
-        # Jython reports the .class file to the tracer, use the source file.
-        if filename.endswith("$py.class"):
-            filename = filename[:-9] + ".py"
-
-        canonical = files.canonical_filename(filename)
-        disp.canonical_filename = canonical
-
-        # Try the plugins, see if they have an opinion about the file.
-        plugin = None
-        for plugin in self.plugins.file_tracers:
-            if not plugin._coverage_enabled:
-                continue
-
-            try:
-                file_tracer = plugin.file_tracer(canonical)
-                if file_tracer is not None:
-                    file_tracer._coverage_plugin = plugin
-                    disp.trace = True
-                    disp.file_tracer = file_tracer
-                    if file_tracer.has_dynamic_source_filename():
-                        disp.has_dynamic_filename = True
-                    else:
-                        disp.source_filename = files.canonical_filename(
-                            file_tracer.source_filename()
-                        )
-                    break
-            except Exception:
-                self._warn(
-                    "Disabling plugin %r due to an exception:" % (
-                        plugin._coverage_plugin_name
-                    )
-                )
-                traceback.print_exc()
-                plugin._coverage_enabled = False
-                continue
-        else:
-            # No plugin wanted it: it's Python.
-            disp.trace = True
-            disp.source_filename = canonical
-
-        if not disp.has_dynamic_filename:
-            if not disp.source_filename:
-                raise CoverageException(
-                    "Plugin %r didn't set source_filename for %r" %
-                    (plugin, disp.original_filename)
-                )
-            reason = self._check_include_omit_etc_internal(
-                disp.source_filename, frame,
-            )
-            if reason:
-                nope(disp, reason)
-
-        return disp
-
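The plugin dispatch above relies on Python's `for`/`else`: the `else` branch runs only when the loop completes without hitting `break`, which is how "no plugin wanted it: it's Python" is reached. A minimal demonstration of that control flow (hypothetical handler names):

```python
# for/else: the else clause runs only if no `break` fired in the loop.
def first_claiming(handlers, filename):
    for handler in handlers:
        claimed = handler(filename)
        if claimed:
            result = "plugin:" + claimed
            break                     # a handler took the file
    else:
        result = "python"             # no handler claimed the file
    return result
```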
-    def _check_include_omit_etc_internal(self, filename, frame):
-        """Check a file name against the include, omit, etc, rules.
-
-        Returns a string or None.  String means, don't trace, and is the reason
-        why.  None means no reason found to not trace.
-
-        """
-        modulename = self._name_for_module(frame.f_globals, filename)
-
-        # If the user specified source or include, then that's authoritative
-        # about the outer bound of what to measure and we don't have to apply
-        # any canned exclusions. If they didn't, then we have to exclude the
-        # stdlib and coverage.py directories.
-        if self.source_match:
-            if self.source_pkgs_match.match(modulename):
-                if modulename in self.source_pkgs:
-                    self.source_pkgs.remove(modulename)
-                return None  # There's no reason to skip this file.
-
-            if not self.source_match.match(filename):
-                return "falls outside the --source trees"
-        elif self.include_match:
-            if not self.include_match.match(filename):
-                return "falls outside the --include trees"
-        else:
-            # If we aren't supposed to trace installed code, then check if this
-            # is near the Python standard library and skip it if so.
-            if self.pylib_match and self.pylib_match.match(filename):
-                return "is in the stdlib"
-
-            # We exclude the coverage.py code itself, since a little of it
-            # will be measured otherwise.
-            if self.cover_match and self.cover_match.match(filename):
-                return "is part of coverage.py"
-
-        # Check the file against the omit pattern.
-        if self.omit_match and self.omit_match.match(filename):
-            return "is inside an --omit pattern"
-
-        # No reason found to skip this file.
-        return None
-
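The include/omit patterns checked above are fnmatch-style globs (the `FnmatchMatcher` wraps them). A rough sketch of the decision order, using the stdlib `fnmatch` module and a hypothetical `skip_reason` helper: include bounds what is measured, and omit always excludes.

```python
import fnmatch

# Rough sketch of the rule order above: include bounds the measurement,
# omit always excludes; None means "no reason to skip".
def skip_reason(filename, include=None, omit=None):
    if include and not any(fnmatch.fnmatch(filename, p) for p in include):
        return "falls outside the --include trees"
    if omit and any(fnmatch.fnmatch(filename, p) for p in omit):
        return "is inside an --omit pattern"
    return None
```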
-    def _should_trace(self, filename, frame):
-        """Decide whether to trace execution in `filename`.
-
-        Calls `_should_trace_internal`, and returns the FileDisposition.
-
-        """
-        disp = self._should_trace_internal(filename, frame)
-        if self.debug.should('trace'):
-            self.debug.write(_disposition_debug_msg(disp))
-        return disp
-
-    def _check_include_omit_etc(self, filename, frame):
-        """Check a file name against the include/omit/etc, rules, verbosely.
-
-        Returns a boolean: True if the file should be traced, False if not.
-
-        """
-        reason = self._check_include_omit_etc_internal(filename, frame)
-        if self.debug.should('trace'):
-            if not reason:
-                msg = "Including %r" % (filename,)
-            else:
-                msg = "Not including %r: %s" % (filename, reason)
-            self.debug.write(msg)
-
-        return not reason
-
-    def _warn(self, msg):
-        """Use `msg` as a warning."""
-        self._warnings.append(msg)
-        if self.debug.should('pid'):
-            msg = "[%d] %s" % (os.getpid(), msg)
-        sys.stderr.write("Coverage.py warning: %s\n" % msg)
-
-    def get_option(self, option_name):
-        """Get an option from the configuration.
-
-        `option_name` is a colon-separated string indicating the section and
-        option name.  For example, the ``branch`` option in the ``[run]``
-        section of the config file would be indicated with `"run:branch"`.
-
-        Returns the value of the option.
-
-        .. versionadded:: 4.0
-
-        """
-        return self.config.get_option(option_name)
-
-    def set_option(self, option_name, value):
-        """Set an option in the configuration.
-
-        `option_name` is a colon-separated string indicating the section and
-        option name.  For example, the ``branch`` option in the ``[run]``
-        section of the config file would be indicated with ``"run:branch"``.
-
-        `value` is the new value for the option.  This should be a Python
-        value where appropriate.  For example, use True for booleans, not the
-        string ``"True"``.
-
-        As an example, calling::
-
-            cov.set_option("run:branch", True)
-
-        has the same effect as this configuration file::
-
-            [run]
-            branch = True
-
-        .. versionadded:: 4.0
-
-        """
-        self.config.set_option(option_name, value)
-
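Both `get_option` and `set_option` address options with a colon-separated `"section:option"` string. How such a name splits into its two parts can be sketched like this (`split_option_name` is an illustrative helper, not the actual config code):

```python
# Sketch of splitting a colon-separated option name like "run:branch"
# into a section and an option.
def split_option_name(option_name):
    section, _, option = option_name.partition(":")
    if not option:
        raise ValueError("expected 'section:option', got %r" % option_name)
    return section, option
```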
-    def use_cache(self, usecache):
-        """Obsolete method."""
-        self._init()
-        if not usecache:
-            self._warn("use_cache(False) is no longer supported.")
-
-    def load(self):
-        """Load previously-collected coverage data from the data file."""
-        self._init()
-        self.collector.reset()
-        self.data_files.read(self.data)
-
-    def start(self):
-        """Start measuring code coverage.
-
-        Coverage measurement actually occurs in functions called after
-        :meth:`start` is invoked.  Statements in the same scope as
-        :meth:`start` won't be measured.
-
-        Once you invoke :meth:`start`, you must also call :meth:`stop`
-        eventually, or your process might not shut down cleanly.
-
-        """
-        self._init()
-        if self.run_suffix:
-            # Calling start() means we're running code, so use the run_suffix
-            # as the data_suffix when we eventually save the data.
-            self.data_suffix = self.run_suffix
-        if self._auto_data:
-            self.load()
-
-        self.collector.start()
-        self._started = True
-        self._measured = True
-
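The lifecycle methods above are driven by the three state-machine flags declared in the constructor: `_inited`, `_started`, and `_measured`. A toy model of that flag dance (not the real class) makes the transitions explicit:

```python
# Toy model of the three state flags used above: _inited (everything
# set up), _started (collecting now), _measured (data not yet harvested).
class Lifecycle:
    def __init__(self):
        self._inited = self._started = self._measured = False

    def _init(self):
        self._inited = True           # idempotent setup

    def start(self):
        self._init()
        self._started = True
        self._measured = True         # data will exist until harvested

    def stop(self):
        self._started = False

    def get_data(self):
        self._measured = False        # harvesting resets the flag
        return []

cov = Lifecycle()
cov.start()
cov.stop()
```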
-    def stop(self):
-        """Stop measuring code coverage."""
-        if self._started:
-            self.collector.stop()
-        self._started = False
-
-    def _atexit(self):
-        """Clean up on process shutdown."""
-        if self._started:
-            self.stop()
-        if self._auto_data:
-            self.save()
-
-    def erase(self):
-        """Erase previously-collected coverage data.
-
-        This removes the in-memory data collected in this session as well as
-        discarding the data file.
-
-        """
-        self._init()
-        self.collector.reset()
-        self.data.erase()
-        self.data_files.erase(parallel=self.config.parallel)
-
-    def clear_exclude(self, which='exclude'):
-        """Clear the exclude list."""
-        self._init()
-        setattr(self.config, which + "_list", [])
-        self._exclude_regex_stale()
-
-    def exclude(self, regex, which='exclude'):
-        """Exclude source lines from execution consideration.
-
-        A number of lists of regular expressions are maintained.  Each list
-        selects lines that are treated differently during reporting.
-
-        `which` determines which list is modified.  The "exclude" list selects
-        lines that are not considered executable at all.  The "partial" list
-        indicates lines with branches that are not taken.
-
-        `regex` is a regular expression.  The regex is added to the specified
-        list.  If any of the regexes in the list is found in a line, the line
-        is marked for special treatment during reporting.
-
-        """
-        self._init()
-        excl_list = getattr(self.config, which + "_list")
-        excl_list.append(regex)
-        self._exclude_regex_stale()
-
-    def _exclude_regex_stale(self):
-        """Drop all the compiled exclusion regexes, a list was modified."""
-        self._exclude_re.clear()
-
-    def _exclude_regex(self, which):
-        """Return a compiled regex for the given exclusion list."""
-        if which not in self._exclude_re:
-            excl_list = getattr(self.config, which + "_list")
-            self._exclude_re[which] = join_regex(excl_list)
-        return self._exclude_re[which]
-
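`join_regex` (defined elsewhere in coverage.py) merges a list of exclusion regexes into one pattern. A plausible sketch of that joining, wrapping each regex in a non-capturing group so alternation precedence stays safe:

```python
import re

# Plausible sketch of joining exclusion regexes into one pattern; each
# piece is wrapped in (?:...) so "|" binds the whole piece, not a suffix.
def join_regex(regexes):
    return "|".join("(?:%s)" % r for r in regexes)

exclude_re = re.compile(
    join_regex([r"#\s*pragma: no cover", r"raise NotImplementedError"])
)
```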
-    def get_exclude_list(self, which='exclude'):
-        """Return a list of excluded regex patterns.
-
-        `which` indicates which list is desired.  See :meth:`exclude` for the
-        lists that are available, and their meaning.
-
-        """
-        self._init()
-        return getattr(self.config, which + "_list")
-
-    def save(self):
-        """Save the collected coverage data to the data file."""
-        self._init()
-        self.get_data()
-        self.data_files.write(self.data, suffix=self.data_suffix)
-
-    def combine(self, data_paths=None):
-        """Combine together a number of similarly-named coverage data files.
-
-        All coverage data files whose name starts with `data_file` (from the
-        coverage() constructor) will be read, and combined together into the
-        current measurements.
-
-        `data_paths` is a list of files or directories from which data should
-        be combined. If no list is passed, then the data files from the
-        directory indicated by the current data file (probably the current
-        directory) will be combined.
-
-        .. versionadded:: 4.0
-            The `data_paths` parameter.
-
-        """
-        self._init()
-        self.get_data()
-
-        aliases = None
-        if self.config.paths:
-            aliases = PathAliases()
-            for paths in self.config.paths.values():
-                result = paths[0]
-                for pattern in paths[1:]:
-                    aliases.add(pattern, result)
-
-        self.data_files.combine_parallel_data(self.data, aliases=aliases, data_paths=data_paths)
-
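In the `[paths]` handling above, the first path of each entry is the canonical result and the remaining paths are patterns remapped onto it. A simplified sketch of that remapping (hypothetical `remap` helper; the real `PathAliases` also handles globbing and separators):

```python
# Simplified sketch of PathAliases-style remapping: a matching pattern
# prefix is rewritten to the canonical result path.
def remap(filename, aliases):
    for pattern, result in aliases:
        if filename.startswith(pattern):
            return result + filename[len(pattern):]
    return filename                   # no alias applies

aliases = [
    ("/jenkins/build/src/", "src/"),  # CI checkout -> local tree
    ("C:\\work\\src\\", "src/"),
]
```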
-    def get_data(self):
-        """Get the collected data and reset the collector.
-
-        Also warn about various problems collecting data.
-
-        Returns a :class:`coverage.CoverageData`, the collected coverage data.
-
-        .. versionadded:: 4.0
-
-        """
-        self._init()
-        if not self._measured:
-            return self.data
-
-        self.collector.save_data(self.data)
-
-        # If there are still entries in the source_pkgs list, then we never
-        # encountered those packages.
-        if self._warn_unimported_source:
-            for pkg in self.source_pkgs:
-                if pkg not in sys.modules:
-                    self._warn("Module %s was never imported." % pkg)
-                elif not (
-                    hasattr(sys.modules[pkg], '__file__') and
-                    os.path.exists(sys.modules[pkg].__file__)
-                ):
-                    self._warn("Module %s has no Python source." % pkg)
-                else:
-                    self._warn("Module %s was previously imported, but not measured." % pkg)
-
-        # Find out if we got any data.
-        if not self.data and self._warn_no_data:
-            self._warn("No data was collected.")
-
-        # Find files that were never executed at all.
-        for src in self.source:
-            for py_file in find_python_files(src):
-                py_file = files.canonical_filename(py_file)
-
-                if self.omit_match and self.omit_match.match(py_file):
-                    # Turns out this file was omitted, so don't pull it back
-                    # in as unexecuted.
-                    continue
-
-                self.data.touch_file(py_file)
-
-        if self.config.note:
-            self.data.add_run_info(note=self.config.note)
-
-        self._measured = False
-        return self.data
-
-    # Backward compatibility with version 1.
-    def analysis(self, morf):
-        """Like `analysis2` but doesn't return excluded line numbers."""
-        f, s, _, m, mf = self.analysis2(morf)
-        return f, s, m, mf
-
-    def analysis2(self, morf):
-        """Analyze a module.
-
-        `morf` is a module or a file name.  It will be analyzed to determine
-        its coverage statistics.  The return value is a 5-tuple:
-
-        * The file name for the module.
-        * A list of line numbers of executable statements.
-        * A list of line numbers of excluded statements.
-        * A list of line numbers of statements not run (missing from
-          execution).
-        * A readable formatted string of the missing line numbers.
-
-        The analysis uses the source file itself and the current measured
-        coverage data.
-
-        """
-        self._init()
-        analysis = self._analyze(morf)
-        return (
-            analysis.filename,
-            sorted(analysis.statements),
-            sorted(analysis.excluded),
-            sorted(analysis.missing),
-            analysis.missing_formatted(),
-            )
-
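The fifth element returned above comes from `missing_formatted()`, which renders missing line numbers as a compact range string. A sketch of that kind of formatting (hypothetical `format_lines` helper, not the actual `Analysis` code):

```python
# Render sorted line numbers as compact ranges, e.g. "1-3, 7, 10-11".
def format_lines(lines):
    parts, start, prev = [], None, None
    for n in sorted(lines):
        if start is None:
            start = prev = n          # open the first range
        elif n == prev + 1:
            prev = n                  # extend the current range
        else:
            parts.append("%d" % start if start == prev
                         else "%d-%d" % (start, prev))
            start = prev = n          # open a new range
    if start is not None:
        parts.append("%d" % start if start == prev
                     else "%d-%d" % (start, prev))
    return ", ".join(parts)
```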
-    def _analyze(self, it):
-        """Analyze a single morf or code unit.
-
-        Returns an `Analysis` object.
-
-        """
-        self.get_data()
-        if not isinstance(it, FileReporter):
-            it = self._get_file_reporter(it)
-
-        return Analysis(self.data, it)
-
-    def _get_file_reporter(self, morf):
-        """Get a FileReporter for a module or file name."""
-        plugin = None
-        file_reporter = "python"
-
-        if isinstance(morf, string_class):
-            abs_morf = abs_file(morf)
-            plugin_name = self.data.file_tracer(abs_morf)
-            if plugin_name:
-                plugin = self.plugins.get(plugin_name)
-
-        if plugin:
-            file_reporter = plugin.file_reporter(abs_morf)
-            if file_reporter is None:
-                raise CoverageException(
-                    "Plugin %r did not provide a file reporter for %r." % (
-                        plugin._coverage_plugin_name, morf
-                    )
-                )
-
-        if file_reporter == "python":
-            file_reporter = PythonFileReporter(morf, self)
-
-        return file_reporter
-
-    def _get_file_reporters(self, morfs=None):
-        """Get a list of FileReporters for a list of modules or file names.
-
-        For each module or file name in `morfs`, find a FileReporter.  Return
-        the list of FileReporters.
-
-        If `morfs` is a single module or file name, this returns a list of one
-        FileReporter.  If `morfs` is empty or None, then the list of all files
-        measured is used to find the FileReporters.
-
-        """
-        if not morfs:
-            morfs = self.data.measured_files()
-
-        # Be sure we have a list.
-        if not isinstance(morfs, (list, tuple)):
-            morfs = [morfs]
-
-        file_reporters = []
-        for morf in morfs:
-            file_reporter = self._get_file_reporter(morf)
-            file_reporters.append(file_reporter)
-
-        return file_reporters
-
-    def report(
-        self, morfs=None, show_missing=None, ignore_errors=None,
-        file=None,                  # pylint: disable=redefined-builtin
-        omit=None, include=None, skip_covered=None,
-    ):
-        """Write a summary report to `file`.
-
-        Each module in `morfs` is listed, with counts of statements, executed
-        statements, missing statements, and a list of lines missed.
-
-        `include` is a list of file name patterns.  Files that match will be
-        included in the report. Files matching `omit` will not be included in
-        the report.
-
-        Returns a float, the total percentage covered.
-
-        """
-        self.get_data()
-        self.config.from_args(
-            ignore_errors=ignore_errors, omit=omit, include=include,
-            show_missing=show_missing, skip_covered=skip_covered,
-            )
-        reporter = SummaryReporter(self, self.config)
-        return reporter.report(morfs, outfile=file)
-
-    def annotate(
-        self, morfs=None, directory=None, ignore_errors=None,
-        omit=None, include=None,
-    ):
-        """Annotate a list of modules.
-
-        Each module in `morfs` is annotated.  The source is written to a new
-        file, named with a ",cover" suffix, with each line prefixed with a
-        marker to indicate the coverage of the line.  Covered lines have ">",
-        excluded lines have "-", and missing lines have "!".
-
-        See :meth:`report` for other arguments.
-
-        """
-        self.get_data()
-        self.config.from_args(
-            ignore_errors=ignore_errors, omit=omit, include=include
-            )
-        reporter = AnnotateReporter(self, self.config)
-        reporter.report(morfs, directory=directory)
-
-    def html_report(self, morfs=None, directory=None, ignore_errors=None,
-                    omit=None, include=None, extra_css=None, title=None):
-        """Generate an HTML report.
-
-        The HTML is written to `directory`.  The file "index.html" is the
-        overview starting point, with links to more detailed pages for
-        individual modules.
-
-        `extra_css` is a path to a file of other CSS to apply on the page.
-        It will be copied into the HTML directory.
-
-        `title` is a text string (not HTML) to use as the title of the HTML
-        report.
-
-        See :meth:`report` for other arguments.
-
-        Returns a float, the total percentage covered.
-
-        """
-        self.get_data()
-        self.config.from_args(
-            ignore_errors=ignore_errors, omit=omit, include=include,
-            html_dir=directory, extra_css=extra_css, html_title=title,
-            )
-        reporter = HtmlReporter(self, self.config)
-        return reporter.report(morfs)
-
-    def xml_report(
-        self, morfs=None, outfile=None, ignore_errors=None,
-        omit=None, include=None,
-    ):
-        """Generate an XML report of coverage results.
-
-        The report is compatible with Cobertura reports.
-
-        Each module in `morfs` is included in the report.  `outfile` is the
-        path to write the file to, "-" will write to stdout.
-
-        See :meth:`report` for other arguments.
-
-        Returns a float, the total percentage covered.
-
-        """
-        self.get_data()
-        self.config.from_args(
-            ignore_errors=ignore_errors, omit=omit, include=include,
-            xml_output=outfile,
-            )
-        file_to_close = None
-        delete_file = False
-        if self.config.xml_output:
-            if self.config.xml_output == '-':
-                outfile = sys.stdout
-            else:
-                # Ensure that the output directory is created; done here
-                # because this report pre-opens the output file.
-                # HTMLReport does this using the Report plumbing because
-                # its task is more complex, being multiple files.
-                output_dir = os.path.dirname(self.config.xml_output)
-                if output_dir and not os.path.isdir(output_dir):
-                    os.makedirs(output_dir)
-                open_kwargs = {}
-                if env.PY3:
-                    open_kwargs['encoding'] = 'utf8'
-                outfile = open(self.config.xml_output, "w", **open_kwargs)
-                file_to_close = outfile
-        try:
-            reporter = XmlReporter(self, self.config)
-            return reporter.report(morfs, outfile=outfile)
-        except CoverageException:
-            delete_file = True
-            raise
-        finally:
-            if file_to_close:
-                file_to_close.close()
-                if delete_file:
-                    file_be_gone(self.config.xml_output)
-
-    def sys_info(self):
-        """Return a list of (key, value) pairs showing internal information."""
-
-        import coverage as covmod
-
-        self._init()
-
-        ft_plugins = []
-        for ft in self.plugins.file_tracers:
-            ft_name = ft._coverage_plugin_name
-            if not ft._coverage_enabled:
-                ft_name += " (disabled)"
-            ft_plugins.append(ft_name)
-
-        info = [
-            ('version', covmod.__version__),
-            ('coverage', covmod.__file__),
-            ('cover_dirs', self.cover_dirs),
-            ('pylib_dirs', self.pylib_dirs),
-            ('tracer', self.collector.tracer_name()),
-            ('plugins.file_tracers', ft_plugins),
-            ('config_files', self.config.attempted_config_files),
-            ('configs_read', self.config.config_files),
-            ('data_path', self.data_files.filename),
-            ('python', sys.version.replace('\n', '')),
-            ('platform', platform.platform()),
-            ('implementation', platform.python_implementation()),
-            ('executable', sys.executable),
-            ('cwd', os.getcwd()),
-            ('path', sys.path),
-            ('environment', sorted(
-                ("%s = %s" % (k, v))
-                for k, v in iitems(os.environ)
-                if k.startswith(("COV", "PY"))
-            )),
-            ('command_line', " ".join(getattr(sys, 'argv', ['???']))),
-            ]
-
-        matcher_names = [
-            'source_match', 'source_pkgs_match',
-            'include_match', 'omit_match',
-            'cover_match', 'pylib_match',
-            ]
-
-        for matcher_name in matcher_names:
-            matcher = getattr(self, matcher_name)
-            if matcher:
-                matcher_info = matcher.info()
-            else:
-                matcher_info = '-none-'
-            info.append((matcher_name, matcher_info))
-
-        return info
-
-
-# FileDisposition "methods": FileDisposition is a pure value object, so it can
-# be implemented in either C or Python.  Acting on them is done with these
-# functions.
-
-def _disposition_init(cls, original_filename):
-    """Construct and initialize a new FileDisposition object."""
-    disp = cls()
-    disp.original_filename = original_filename
-    disp.canonical_filename = original_filename
-    disp.source_filename = None
-    disp.trace = False
-    disp.reason = ""
-    disp.file_tracer = None
-    disp.has_dynamic_filename = False
-    return disp
-
-
-def _disposition_debug_msg(disp):
-    """Make a nice debug message of what the FileDisposition is doing."""
-    if disp.trace:
-        msg = "Tracing %r" % (disp.original_filename,)
-        if disp.file_tracer:
-            msg += ": will be traced by %r" % disp.file_tracer
-    else:
-        msg = "Not tracing %r: %s" % (disp.original_filename, disp.reason)
-    return msg
-
-
-def process_startup():
-    """Call this at Python start-up to perhaps measure coverage.
-
-    If the environment variable COVERAGE_PROCESS_START is defined, coverage
-    measurement is started.  The value of the variable is the config file
-    to use.
-
-    There are two ways to configure your Python installation to invoke this
-    function when Python starts:
-
-    #. Create or append to sitecustomize.py to add these lines::
-
-        import coverage
-        coverage.process_startup()
-
-    #. Create a .pth file in your Python installation containing::
-
-        import coverage; coverage.process_startup()
-
-    Returns the :class:`Coverage` instance that was started, or None if it was
-    not started by this call.
-
-    """
-    cps = os.environ.get("COVERAGE_PROCESS_START")
-    if not cps:
-        # No request for coverage, nothing to do.
-        return None
-
-    # This function can be called more than once in a process. This happens
-    # because some virtualenv configurations make the same directory visible
-    # twice in sys.path.  This means that the .pth file will be found twice,
-    # and executed twice, executing this function twice.  We set a global
-    # flag (an attribute on this function) to indicate that coverage.py has
-    # already been started, so we can avoid doing it twice.
-    #
-    # https://bitbucket.org/ned/coveragepy/issue/340/keyerror-subpy has more
-    # details.
-
-    if hasattr(process_startup, "done"):
-        # We've annotated this function before, so we must have already
-        # started coverage.py in this process.  Nothing to do.
-        return None
-
-    process_startup.done = True
-    cov = Coverage(config_file=cps, auto_data=True)
-    cov.start()
-    cov._warn_no_data = False
-    cov._warn_unimported_source = False
-
-    return cov
-
-#
-# eflag: FileType = Python2
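The once-per-process guard in `process_startup` above (a `done` attribute set on the function object itself) can be sketched with the standard library alone. This is a minimal sketch, not the real API: `process_startup_sketch` is a hypothetical stand-in, and returning the config file name substitutes for starting an actual `Coverage` instance:

```python
import os

def process_startup_sketch():
    """Mirror the guard in process_startup: run at most once per process."""
    cps = os.environ.get("COVERAGE_PROCESS_START")
    if not cps:
        # No request for coverage, nothing to do.
        return None
    if hasattr(process_startup_sketch, "done"):
        # Already started in this process (e.g. the .pth file ran twice).
        return None
    process_startup_sketch.done = True
    return cps  # stand-in for the started Coverage instance
```

The attribute-on-function flag works here because a `.pth` file executed twice still calls the same function object, so the second call sees the marker set by the first.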
--- a/DebugClients/Python/coverage/data.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,771 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Coverage data for coverage.py."""
-
-import glob
-import itertools
-import json
-import optparse
-import os
-import os.path
-import random
-import re
-import socket
-
-from coverage import env
-from coverage.backward import iitems, string_class
-from coverage.debug import _TEST_NAME_FILE
-from coverage.files import PathAliases
-from coverage.misc import CoverageException, file_be_gone, isolate_module
-
-os = isolate_module(os)
-
-
-class CoverageData(object):
-    """Manages collected coverage data, including file storage.
-
-    This class is the public supported API to the data coverage.py collects
-    during program execution.  It includes information about what code was
-    executed. It does not include information from the analysis phase, to
-    determine what lines could have been executed, or what lines were not
-    executed.
-
-    .. note::
-
-        The file format is not documented or guaranteed.  It will change in
-        the future, in possibly complicated ways.  Do not read coverage.py
-        data files directly.  Use this API to avoid disruption.
-
-    There are a number of kinds of data that can be collected:
-
-    * **lines**: the line numbers of source lines that were executed.
-      These are always available.
-
-    * **arcs**: pairs of source and destination line numbers for transitions
-      between source lines.  These are only available if branch coverage was
-      used.
-
-    * **file tracer names**: the module names of the file tracer plugins that
-      handled each file in the data.
-
-    * **run information**: information about the program execution.  This is
-      written during "coverage run", and then accumulated during "coverage
-      combine".
-
-    Lines, arcs, and file tracer names are stored for each source file. File
-    names in this API are case-sensitive, even on platforms with
-    case-insensitive file systems.
-
-    To read a coverage.py data file, use :meth:`read_file`, or
-    :meth:`read_fileobj` if you have an already-opened file.  You can then
-    access the line, arc, or file tracer data with :meth:`lines`, :meth:`arcs`,
-    or :meth:`file_tracer`.  Run information is available with
-    :meth:`run_infos`.
-
-    The :meth:`has_arcs` method indicates whether arc data is available.  You
-    can get a list of the files in the data with :meth:`measured_files`.
-    A summary of the line data is available from :meth:`line_counts`.  As with
-    most Python containers, you can determine if there is any data at all by
-    using this object as a boolean value.
-
-
-    Most data files will be created by coverage.py itself, but you can use
-    methods here to create data files if you like.  The :meth:`add_lines`,
-    :meth:`add_arcs`, and :meth:`add_file_tracers` methods add data, in ways
-    that are convenient for coverage.py.  The :meth:`add_run_info` method adds
-    key-value pairs to the run information.
-
-    To add a file without any measured data, use :meth:`touch_file`.
-
-    You write to a named file with :meth:`write_file`, or to an already opened
-    file with :meth:`write_fileobj`.
-
-    You can clear the data in memory with :meth:`erase`.  Two data collections
-    can be combined by using :meth:`update` on one :class:`CoverageData`,
-    passing it the other.
-
-    """
-
-    # The data file format is JSON, with these keys:
-    #
-    #     * lines: a dict mapping file names to lists of line numbers
-    #       executed::
-    #
-    #         { "file1": [17,23,45], "file2": [1,2,3], ... }
-    #
-    #     * arcs: a dict mapping file names to lists of line number pairs::
-    #
-    #         { "file1": [[17,23], [17,25], [25,26]], ... }
-    #
-    #     * file_tracers: a dict mapping file names to plugin names::
-    #
-    #         { "file1": "django.coverage", ... }
-    #
-    #     * runs: a list of dicts of information about the coverage.py runs
-    #       contributing to the data::
-    #
-    #         [ { "brief_sys": "CPython 2.7.10 Darwin" }, ... ]
-    #
-    # Only one of `lines` or `arcs` will be present: with branch coverage, data
-    # is stored as arcs. Without branch coverage, it is stored as lines.  The
-    # line data is easily recovered from the arcs: it is all the first elements
-    # of the pairs that are greater than zero.
-
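The key layout described in the comment above can be exercised with nothing but the standard library. This is a sketch of the on-disk shape only (the format is explicitly private and unsupported for direct reading); the prefix string is the `_GO_AWAY` sentinel defined later in this class:

```python
import json

GO_AWAY = "!coverage.py: This is a private format, don't read it directly!"

# A payload using the documented keys (line data, so no "arcs" key).
payload = {
    "lines": {"file1": [17, 23, 45], "file2": [1, 2, 3]},
    "file_tracers": {"file1": "django.coverage"},
    "runs": [{"brief_sys": "CPython 2.7.10 Darwin"}],
}

# Writing mirrors write_fileobj: the warning prefix, then the JSON body.
blob = GO_AWAY + json.dumps(payload)

# Reading mirrors _read_raw_data: verify the prefix, then parse the rest.
assert blob.startswith(GO_AWAY)
data = json.loads(blob[len(GO_AWAY):])
```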
-    def __init__(self, debug=None):
-        """Create a CoverageData.
-
-        `debug` is a `DebugControl` object for writing debug messages.
-
-        """
-        self._debug = debug
-
-        # A map from canonical Python source file name to a dictionary in
-        # which there's an entry for each line number that has been
-        # executed:
-        #
-        #   { 'filename1.py': [12, 47, 1001], ... }
-        #
-        self._lines = None
-
-        # A map from canonical Python source file name to a dictionary with an
-        # entry for each pair of line numbers forming an arc:
-        #
-        #   { 'filename1.py': [(12,14), (47,48), ... ], ... }
-        #
-        self._arcs = None
-
-        # A map from canonical source file name to a plugin module name:
-        #
-        #   { 'filename1.py': 'django.coverage', ... }
-        #
-        self._file_tracers = {}
-
-        # A list of dicts of information about the coverage.py runs.
-        self._runs = []
-
-    def __repr__(self):
-        return "<{klass} lines={lines} arcs={arcs} tracers={tracers} runs={runs}>".format(
-            klass=self.__class__.__name__,
-            lines="None" if self._lines is None else "{{{0}}}".format(len(self._lines)),
-            arcs="None" if self._arcs is None else "{{{0}}}".format(len(self._arcs)),
-            tracers="{{{0}}}".format(len(self._file_tracers)),
-            runs="[{0}]".format(len(self._runs)),
-        )
-
-    ##
-    ## Reading data
-    ##
-
-    def has_arcs(self):
-        """Does this data have arcs?
-
-        Arc data is only available if branch coverage was used during
-        collection.
-
-        Returns a boolean.
-
-        """
-        return self._has_arcs()
-
-    def lines(self, filename):
-        """Get the list of lines executed for a file.
-
-        If the file was not measured, returns None.  A file might be measured,
-        and have no lines executed, in which case an empty list is returned.
-
-        If the file was executed, returns a list of integers, the line numbers
-        executed in the file. The list is in no particular order.
-
-        """
-        if self._arcs is not None:
-            arcs = self._arcs.get(filename)
-            if arcs is not None:
-                all_lines = itertools.chain.from_iterable(arcs)
-                return list(set(l for l in all_lines if l > 0))
-        elif self._lines is not None:
-            return self._lines.get(filename)
-        return None
-
-    def arcs(self, filename):
-        """Get the list of arcs executed for a file.
-
-        If the file was not measured, returns None.  A file might be measured,
-        and have no arcs executed, in which case an empty list is returned.
-
-        If the file was executed, returns a list of 2-tuples of integers. Each
-        pair is a starting line number and an ending line number for a
-        transition from one line to another. The list is in no particular
-        order.
-
-        Negative numbers have special meaning.  If the starting line number is
-        -N, it represents an entry to the code object that starts at line N.
-        If the ending line number is -N, it's an exit from the code object that
-        starts at line N.
-
-        """
-        if self._arcs is not None:
-            if filename in self._arcs:
-                return self._arcs[filename]
-        return None
-
-    def file_tracer(self, filename):
-        """Get the plugin name of the file tracer for a file.
-
-        Returns the name of the plugin that handles this file.  If the file was
-        measured, but didn't use a plugin, then "" is returned.  If the file
-        was not measured, then None is returned.
-
-        """
-        # Because the vast majority of files involve no plugin, we don't store
-        # them explicitly in self._file_tracers.  Check the measured data
-        # instead to see if it was a known file with no plugin.
-        if filename in (self._arcs or self._lines or {}):
-            return self._file_tracers.get(filename, "")
-        return None
-
-    def run_infos(self):
-        """Return the list of dicts of run information.
-
-        For data collected during a single run, this will be a one-element
-        list.  If data has been combined, there will be one element for each
-        original data file.
-
-        """
-        return self._runs
-
-    def measured_files(self):
-        """A list of all files that have been measured."""
-        return list(self._arcs or self._lines or {})
-
-    def line_counts(self, fullpath=False):
-        """Return a dict summarizing the line coverage data.
-
-        Keys are based on the file names, and values are the number of executed
-        lines.  If `fullpath` is true, then the keys are the full pathnames of
-        the files, otherwise they are the basenames of the files.
-
-        Returns a dict mapping file names to counts of lines.
-
-        """
-        summ = {}
-        if fullpath:
-            filename_fn = lambda f: f
-        else:
-            filename_fn = os.path.basename
-        for filename in self.measured_files():
-            summ[filename_fn(filename)] = len(self.lines(filename))
-        return summ
-
-    def __nonzero__(self):
-        return bool(self._lines or self._arcs)
-
-    __bool__ = __nonzero__
-
-    def read_fileobj(self, file_obj):
-        """Read the coverage data from the given file object.
-
-        Should only be used on an empty CoverageData object.
-
-        """
-        data = self._read_raw_data(file_obj)
-
-        self._lines = self._arcs = None
-
-        if 'lines' in data:
-            self._lines = data['lines']
-        if 'arcs' in data:
-            self._arcs = dict(
-                (fname, [tuple(pair) for pair in arcs])
-                for fname, arcs in iitems(data['arcs'])
-            )
-        self._file_tracers = data.get('file_tracers', {})
-        self._runs = data.get('runs', [])
-
-        self._validate()
-
-    def read_file(self, filename):
-        """Read the coverage data from `filename` into this object."""
-        if self._debug and self._debug.should('dataio'):
-            self._debug.write("Reading data from %r" % (filename,))
-        try:
-            with self._open_for_reading(filename) as f:
-                self.read_fileobj(f)
-        except Exception as exc:
-            raise CoverageException(
-                "Couldn't read data from '%s': %s: %s" % (
-                    filename, exc.__class__.__name__, exc,
-                )
-            )
-
-    _GO_AWAY = "!coverage.py: This is a private format, don't read it directly!"
-
-    @classmethod
-    def _open_for_reading(cls, filename):
-        """Open a file appropriately for reading data."""
-        return open(filename, "r")
-
-    @classmethod
-    def _read_raw_data(cls, file_obj):
-        """Read the raw data from a file object."""
-        go_away = file_obj.read(len(cls._GO_AWAY))
-        if go_away != cls._GO_AWAY:
-            raise CoverageException("Doesn't seem to be a coverage.py data file")
-        return json.load(file_obj)
-
-    @classmethod
-    def _read_raw_data_file(cls, filename):
-        """Read the raw data from a file, for debugging."""
-        with cls._open_for_reading(filename) as f:
-            return cls._read_raw_data(f)
-
-    ##
-    ## Writing data
-    ##
-
-    def add_lines(self, line_data):
-        """Add measured line data.
-
-        `line_data` is a dictionary mapping file names to dictionaries::
-
-            { filename: { lineno: None, ... }, ...}
-
-        """
-        if self._debug and self._debug.should('dataop'):
-            self._debug.write("Adding lines: %d files, %d lines total" % (
-                len(line_data), sum(len(lines) for lines in line_data.values())
-            ))
-        if self._has_arcs():
-            raise CoverageException("Can't add lines to existing arc data")
-
-        if self._lines is None:
-            self._lines = {}
-        for filename, linenos in iitems(line_data):
-            if filename in self._lines:
-                new_linenos = set(self._lines[filename])
-                new_linenos.update(linenos)
-                linenos = new_linenos
-            self._lines[filename] = list(linenos)
-
-        self._validate()
-
-    def add_arcs(self, arc_data):
-        """Add measured arc data.
-
-        `arc_data` is a dictionary mapping file names to dictionaries::
-
-            { filename: { (l1,l2): None, ... }, ...}
-
-        """
-        if self._debug and self._debug.should('dataop'):
-            self._debug.write("Adding arcs: %d files, %d arcs total" % (
-                len(arc_data), sum(len(arcs) for arcs in arc_data.values())
-            ))
-        if self._has_lines():
-            raise CoverageException("Can't add arcs to existing line data")
-
-        if self._arcs is None:
-            self._arcs = {}
-        for filename, arcs in iitems(arc_data):
-            if filename in self._arcs:
-                new_arcs = set(self._arcs[filename])
-                new_arcs.update(arcs)
-                arcs = new_arcs
-            self._arcs[filename] = list(arcs)
-
-        self._validate()
-
-    def add_file_tracers(self, file_tracers):
-        """Add per-file plugin information.
-
-        `file_tracers` is { filename: plugin_name, ... }
-
-        """
-        if self._debug and self._debug.should('dataop'):
-            self._debug.write("Adding file tracers: %d files" % (len(file_tracers),))
-
-        existing_files = self._arcs or self._lines or {}
-        for filename, plugin_name in iitems(file_tracers):
-            if filename not in existing_files:
-                raise CoverageException(
-                    "Can't add file tracer data for unmeasured file '%s'" % (filename,)
-                )
-            existing_plugin = self._file_tracers.get(filename)
-            if existing_plugin is not None and plugin_name != existing_plugin:
-                raise CoverageException(
-                    "Conflicting file tracer name for '%s': %r vs %r" % (
-                        filename, existing_plugin, plugin_name,
-                    )
-                )
-            self._file_tracers[filename] = plugin_name
-
-        self._validate()
-
-    def add_run_info(self, **kwargs):
-        """Add information about the run.
-
-        Keywords are arbitrary, and are stored in the run dictionary. Values
-        must be JSON serializable.  You may use this function more than once,
-        but repeated keywords overwrite each other.
-
-        """
-        if self._debug and self._debug.should('dataop'):
-            self._debug.write("Adding run info: %r" % (kwargs,))
-        if not self._runs:
-            self._runs = [{}]
-        self._runs[0].update(kwargs)
-        self._validate()
-
-    def touch_file(self, filename):
-        """Ensure that `filename` appears in the data, empty if needed."""
-        if self._debug and self._debug.should('dataop'):
-            self._debug.write("Touching %r" % (filename,))
-        if not self._has_arcs() and not self._has_lines():
-            raise CoverageException("Can't touch files in an empty CoverageData")
-
-        if self._has_arcs():
-            where = self._arcs
-        else:
-            where = self._lines
-        where.setdefault(filename, [])
-
-        self._validate()
-
-    def write_fileobj(self, file_obj):
-        """Write the coverage data to `file_obj`."""
-
-        # Create the file data.
-        file_data = {}
-
-        if self._has_arcs():
-            file_data['arcs'] = self._arcs
-
-        if self._has_lines():
-            file_data['lines'] = self._lines
-
-        if self._file_tracers:
-            file_data['file_tracers'] = self._file_tracers
-
-        if self._runs:
-            file_data['runs'] = self._runs
-
-        # Write the data to the file.
-        file_obj.write(self._GO_AWAY)
-        json.dump(file_data, file_obj)
-
-    def write_file(self, filename):
-        """Write the coverage data to `filename`."""
-        if self._debug and self._debug.should('dataio'):
-            self._debug.write("Writing data to %r" % (filename,))
-        with open(filename, 'w') as fdata:
-            self.write_fileobj(fdata)
-
-    def erase(self):
-        """Erase the data in this object."""
-        self._lines = None
-        self._arcs = None
-        self._file_tracers = {}
-        self._runs = []
-        self._validate()
-
-    def update(self, other_data, aliases=None):
-        """Update this data with data from another `CoverageData`.
-
-        If `aliases` is provided, it's a `PathAliases` object that is used to
-        re-map paths to match the local machine's.
-
-        """
-        if self._has_lines() and other_data._has_arcs():
-            raise CoverageException("Can't combine arc data with line data")
-        if self._has_arcs() and other_data._has_lines():
-            raise CoverageException("Can't combine line data with arc data")
-
-        aliases = aliases or PathAliases()
-
-        # _file_tracers: only have a string, so they have to agree.
-        # Have to do these first, so that our examination of self._arcs and
-        # self._lines won't be confused by data updated from other_data.
-        for filename in other_data.measured_files():
-            other_plugin = other_data.file_tracer(filename)
-            filename = aliases.map(filename)
-            this_plugin = self.file_tracer(filename)
-            if this_plugin is None:
-                if other_plugin:
-                    self._file_tracers[filename] = other_plugin
-            elif this_plugin != other_plugin:
-                raise CoverageException(
-                    "Conflicting file tracer name for '%s': %r vs %r" % (
-                        filename, this_plugin, other_plugin,
-                    )
-                )
-
-        # _runs: add the new runs to these runs.
-        self._runs.extend(other_data._runs)
-
-        # _lines: merge dicts.
-        if other_data._has_lines():
-            if self._lines is None:
-                self._lines = {}
-            for filename, file_lines in iitems(other_data._lines):
-                filename = aliases.map(filename)
-                if filename in self._lines:
-                    lines = set(self._lines[filename])
-                    lines.update(file_lines)
-                    file_lines = list(lines)
-                self._lines[filename] = file_lines
-
-        # _arcs: merge dicts.
-        if other_data._has_arcs():
-            if self._arcs is None:
-                self._arcs = {}
-            for filename, file_arcs in iitems(other_data._arcs):
-                filename = aliases.map(filename)
-                if filename in self._arcs:
-                    arcs = set(self._arcs[filename])
-                    arcs.update(file_arcs)
-                    file_arcs = list(arcs)
-                self._arcs[filename] = file_arcs
-
-        self._validate()
-
-    ##
-    ## Miscellaneous
-    ##
-
-    def _validate(self):
-        """If we are in paranoid mode, validate that everything is right."""
-        if env.TESTING:
-            self._validate_invariants()
-
-    def _validate_invariants(self):
-        """Validate internal invariants."""
-        # Only one of _lines or _arcs should exist.
-        assert not(self._has_lines() and self._has_arcs()), (
-            "Shouldn't have both _lines and _arcs"
-        )
-
-        # _lines should be a dict of lists of ints.
-        if self._has_lines():
-            for fname, lines in iitems(self._lines):
-                assert isinstance(fname, string_class), "Key in _lines shouldn't be %r" % (fname,)
-                assert all(isinstance(x, int) for x in lines), (
-                    "_lines[%r] shouldn't be %r" % (fname, lines)
-                )
-
-        # _arcs should be a dict of lists of pairs of ints.
-        if self._has_arcs():
-            for fname, arcs in iitems(self._arcs):
-                assert isinstance(fname, string_class), "Key in _arcs shouldn't be %r" % (fname,)
-                assert all(isinstance(x, int) and isinstance(y, int) for x, y in arcs), (
-                    "_arcs[%r] shouldn't be %r" % (fname, arcs)
-                )
-
-        # _file_tracers should have only non-empty strings as values.
-        for fname, plugin in iitems(self._file_tracers):
-            assert isinstance(fname, string_class), (
-                "Key in _file_tracers shouldn't be %r" % (fname,)
-            )
-            assert plugin and isinstance(plugin, string_class), (
-                "_file_tracers[%r] shouldn't be %r" % (fname, plugin)
-            )
-
-        # _runs should be a list of dicts.
-        for val in self._runs:
-            assert isinstance(val, dict)
-            for key in val:
-                assert isinstance(key, string_class), "Key in _runs shouldn't be %r" % (key,)
-
-    def add_to_hash(self, filename, hasher):
-        """Contribute `filename`'s data to the `hasher`.
-
-        `hasher` is a `coverage.misc.Hasher` instance to be updated with
-        the file's data.  It should only get the results data, not the run
-        data.
-
-        """
-        if self._has_arcs():
-            hasher.update(sorted(self.arcs(filename) or []))
-        else:
-            hasher.update(sorted(self.lines(filename) or []))
-        hasher.update(self.file_tracer(filename))
-
-    ##
-    ## Internal
-    ##
-
-    def _has_lines(self):
-        """Do we have data in self._lines?"""
-        return self._lines is not None
-
-    def _has_arcs(self):
-        """Do we have data in self._arcs?"""
-        return self._arcs is not None
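The recovery of line data from arcs performed by `lines()` above (chain all pair members, keep the ones greater than zero, de-duplicate) can be shown in isolation; `lines_from_arcs` is a hypothetical helper name, not part of this class:

```python
import itertools

def lines_from_arcs(arcs):
    """Recover executed lines from arc pairs, as CoverageData.lines() does:
    chain all pair members and keep the positive ones, de-duplicated."""
    all_lines = itertools.chain.from_iterable(arcs)
    return sorted(l for l in set(all_lines) if l > 0)

# (-1, 17) is an entry to the code object starting at line 17;
# (25, -17) is an exit from that same code object.
arcs = [(-1, 17), (17, 23), (23, 25), (25, -17)]
```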
-
-
-class CoverageDataFiles(object):
-    """Manage the use of coverage data files."""
-
-    def __init__(self, basename=None, warn=None):
-        """Create a CoverageDataFiles to manage data files.
-
-        `warn` is the warning function to use.
-
-        `basename` is the name of the file to use for storing data.
-
-        """
-        self.warn = warn
-        # Construct the file name that will be used for data storage.
-        self.filename = os.path.abspath(basename or ".coverage")
-
-    def erase(self, parallel=False):
-        """Erase the data from the file storage.
-
-        If `parallel` is true, then also deletes data files created from the
-        basename by parallel-mode.
-
-        """
-        file_be_gone(self.filename)
-        if parallel:
-            data_dir, local = os.path.split(self.filename)
-            localdot = local + '.*'
-            pattern = os.path.join(os.path.abspath(data_dir), localdot)
-            for filename in glob.glob(pattern):
-                file_be_gone(filename)
-
-    def read(self, data):
-        """Read the coverage data."""
-        if os.path.exists(self.filename):
-            data.read_file(self.filename)
-
-    def write(self, data, suffix=None):
-        """Write the collected coverage data to a file.
-
-        `suffix` is a suffix to append to the base file name. This can be used
-        for multiple or parallel execution, so that many coverage data files
-        can exist simultaneously.  A dot will be used to join the base name and
-        the suffix.
-
-        """
-        filename = self.filename
-        if suffix is True:
-            # If data_suffix was a simple true value, then make a suffix with
-            # plenty of distinguishing information.  We do this here in
-            # `save()` at the last minute so that the pid will be correct even
-            # if the process forks.
-            extra = ""
-            if _TEST_NAME_FILE:                             # pragma: debugging
-                with open(_TEST_NAME_FILE) as f:
-                    test_name = f.read()
-                extra = "." + test_name
-            suffix = "%s%s.%s.%06d" % (
-                socket.gethostname(), extra, os.getpid(),
-                random.randint(0, 999999)
-            )
-
-        if suffix:
-            filename += "." + suffix
-        data.write_file(filename)
-
-    def combine_parallel_data(self, data, aliases=None, data_paths=None):
-        """Combine a number of data files together.
-
-        Treat `self.filename` as a file prefix, and combine the data from all
-        of the data files starting with that prefix plus a dot.
-
-        If `aliases` is provided, it's a `PathAliases` object that is used to
-        re-map paths to match the local machine's.
-
-        If `data_paths` is provided, it is a list of directories or files to
-        combine.  Directories are searched for files that start with
-        `self.filename` plus dot as a prefix, and those files are combined.
-
-        If `data_paths` is not provided, then the directory portion of
-        `self.filename` is used as the directory to search for data files.
-
-        Every data file found and combined is then deleted from disk. If a file
-        cannot be read, a warning will be issued, and the file will not be
-        deleted.
-
-        """
-        # Because of the os.path.abspath in the constructor, data_dir will
-        # never be an empty string.
-        data_dir, local = os.path.split(self.filename)
-        localdot = local + '.*'
-
-        data_paths = data_paths or [data_dir]
-        files_to_combine = []
-        for p in data_paths:
-            if os.path.isfile(p):
-                files_to_combine.append(os.path.abspath(p))
-            elif os.path.isdir(p):
-                pattern = os.path.join(os.path.abspath(p), localdot)
-                files_to_combine.extend(glob.glob(pattern))
-            else:
-                raise CoverageException("Couldn't combine from non-existent path '%s'" % (p,))
-
-        for f in files_to_combine:
-            new_data = CoverageData()
-            try:
-                new_data.read_file(f)
-            except CoverageException as exc:
-                if self.warn:
-                    # The CoverageException has the file name in it, so just
-                    # use the message as the warning.
-                    self.warn(str(exc))
-            else:
-                data.update(new_data, aliases=aliases)
-                file_be_gone(f)
-
-
-def canonicalize_json_data(data):
-    """Canonicalize our JSON data so it can be compared."""
-    for fname, lines in iitems(data.get('lines', {})):
-        data['lines'][fname] = sorted(lines)
-    for fname, arcs in iitems(data.get('arcs', {})):
-        data['arcs'][fname] = sorted(arcs)
-
-
-def pretty_data(data):
-    """Format data as JSON, but as nicely as possible.
-
-    Returns a string.
-
-    """
-    # Start with a basic JSON dump.
-    out = json.dumps(data, indent=4, sort_keys=True)
-    # But pairs of numbers shouldn't be split across lines...
-    out = re.sub(r"\[\s+(-?\d+),\s+(-?\d+)\s+]", r"[\1, \2]", out)
-    # Trailing spaces mess with tests, get rid of them.
-    out = re.sub(r"(?m)\s+$", "", out)
-    return out
-
-
-def debug_main(args):
-    """Dump the raw data from data files.
-
-    Run this as::
-
-        $ python -m coverage.data [FILE]
-
-    """
-    parser = optparse.OptionParser()
-    parser.add_option(
-        "-c", "--canonical", action="store_true",
-        help="Sort data into a canonical order",
-    )
-    options, args = parser.parse_args(args)
-
-    for filename in (args or [".coverage"]):
-        print("--- {0} ------------------------------".format(filename))
-        data = CoverageData._read_raw_data_file(filename)
-        if options.canonical:
-            canonicalize_json_data(data)
-        print(pretty_data(data))
-
-
-if __name__ == '__main__':
-    import sys
-    debug_main(sys.argv[1:])
-
-#
-# eflag: FileType = Python2
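The removed `write()` method above composes per-process data file names so that parallel runs never collide: a bare `True` suffix expands to hostname, pid, and a random number, and any suffix is joined to the base name with a dot. A minimal sketch of that naming scheme, assuming a hypothetical helper `data_filename` that is not part of coverage.py's API:

```python
import os
import random
import socket

def data_filename(basename=".coverage", suffix=None):
    """Compose a coverage-style data file name.

    Mirrors the scheme in the removed CoverageDataFiles.write():
    a literal True suffix becomes "<host>.<pid>.<random>", and any
    suffix is joined to the base name with a dot.
    """
    filename = os.path.abspath(basename)
    if suffix is True:
        # Build the suffix at the last minute so the pid is correct
        # even if the process forked after configuration.
        suffix = "%s.%s.%06d" % (
            socket.gethostname(), os.getpid(), random.randint(0, 999999)
        )
    if suffix:
        filename += "." + suffix
    return filename
```

`combine_parallel_data()` later globs for `basename + '.*'`, which is why every suffixed name must share the dot-joined prefix.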
--- a/DebugClients/Python/coverage/debug.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,109 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Control of and utilities for debugging."""
-
-import inspect
-import os
-import sys
-
-from coverage.misc import isolate_module
-
-os = isolate_module(os)
-
-
-# When debugging, it can be helpful to force some options, especially when
-# debugging the configuration mechanisms you usually use to control debugging!
-# This is a list of forced debugging options.
-FORCED_DEBUG = []
-
-# A hack for debugging testing in sub-processes.
-_TEST_NAME_FILE = ""    # "/tmp/covtest.txt"
-
-
-class DebugControl(object):
-    """Control and output for debugging."""
-
-    def __init__(self, options, output):
-        """Configure the options and output file for debugging."""
-        self.options = options
-        self.output = output
-
-    def __repr__(self):
-        return "<DebugControl options=%r output=%r>" % (self.options, self.output)
-
-    def should(self, option):
-        """Decide whether to output debug information in category `option`."""
-        return (option in self.options or option in FORCED_DEBUG)
-
-    def write(self, msg):
-        """Write a line of debug output."""
-        if self.should('pid'):
-            msg = "pid %5d: %s" % (os.getpid(), msg)
-        self.output.write(msg+"\n")
-        if self.should('callers'):
-            dump_stack_frames(out=self.output)
-        self.output.flush()
-
-    def write_formatted_info(self, header, info):
-        """Write a sequence of (label,data) pairs nicely."""
-        self.write(info_header(header))
-        for line in info_formatter(info):
-            self.write(" %s" % line)
-
-
-def info_header(label):
-    """Make a nice header string."""
-    return "--{0:-<60s}".format(" "+label+" ")
-
-
-def info_formatter(info):
-    """Produce a sequence of formatted lines from info.
-
-    `info` is a sequence of pairs (label, data).  The produced lines are
-    nicely formatted, ready to print.
-
-    """
-    info = list(info)
-    if not info:
-        return
-    label_len = max(len(l) for l, _d in info)
-    for label, data in info:
-        if data == []:
-            data = "-none-"
-        if isinstance(data, (list, set, tuple)):
-            prefix = "%*s:" % (label_len, label)
-            for e in data:
-                yield "%*s %s" % (label_len+1, prefix, e)
-                prefix = ""
-        else:
-            yield "%*s: %s" % (label_len, label, data)
-
-
-def short_stack(limit=None):                                # pragma: debugging
-    """Return a string summarizing the call stack.
-
-    The string is multi-line, with one line per stack frame. Each line shows
-    the function name, the file name, and the line number:
-
-        ...
-        start_import_stop : /Users/ned/coverage/trunk/tests/coveragetest.py @95
-        import_local_file : /Users/ned/coverage/trunk/tests/coveragetest.py @81
-        import_local_file : /Users/ned/coverage/trunk/coverage/backward.py @159
-        ...
-
-    `limit` is the number of frames to include, defaulting to all of them.
-
-    """
-    stack = inspect.stack()[limit:0:-1]
-    return "\n".join("%30s : %s @%d" % (t[3], t[1], t[2]) for t in stack)
-
-
-def dump_stack_frames(limit=None, out=None):                # pragma: debugging
-    """Print a summary of the stack to stdout, or some place else."""
-    out = out or sys.stdout
-    out.write(short_stack(limit=limit))
-    out.write("\n")
-
-#
-# eflag: FileType = Python2
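The `info_formatter()` generator deleted above right-aligns labels to the longest one, spreads list values over continuation lines, and renders an empty list as `-none-`. A self-contained sketch of the same formatting logic (reproduced here for illustration, not the installed module):

```python
def info_formatter(info):
    """Yield aligned "label: data" lines.

    Labels are right-padded to the widest label; list/set/tuple data
    is emitted one element per line with the label only on the first
    line; an empty list is shown as "-none-".
    """
    info = list(info)
    if not info:
        return
    label_len = max(len(label) for label, _ in info)
    for label, data in info:
        if data == []:
            data = "-none-"
        if isinstance(data, (list, set, tuple)):
            # Print the label once, then blank-prefix the rest.
            prefix = "%*s:" % (label_len, label)
            for e in data:
                yield "%*s %s" % (label_len + 1, prefix, e)
                prefix = ""
        else:
            yield "%*s: %s" % (label_len, label, data)
```

`DebugControl.write_formatted_info()` pairs this with `info_header()` to dump `(label, data)` sequences such as tracer diagnostics.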
--- a/DebugClients/Python/coverage/doc/AUTHORS.txt	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,73 +0,0 @@
-Coverage.py was originally written by Gareth Rees, and since 2004 has been
-extended and maintained by Ned Batchelder.
-
-Other contributions have been made by:
-
-Adi Roiban
-Alex Gaynor
-Alexander Todorov
-Anthony Sottile
-Arcadiy Ivanov
-Ben Finney
-Bill Hart
-Brandon Rhodes
-Brett Cannon
-Buck Evan
-Carl Gieringer
-Catherine Proulx
-Chris Adams
-Chris Rose
-Christian Heimes
-Christine Lytwynec
-Christoph Zwerschke
-Conrad Ho
-Danek Duvall
-Danny Allen
-David Christian
-David Stanek
-Detlev Offenbach
-Devin Jeanpierre
-Dmitry Shishov
-Dmitry Trofimov
-Eduardo Schettino
-Edward Loper
-Geoff Bache
-George Paci
-George Song
-Greg Rogers
-Guillaume Chazarain
-Ilia Meerovich
-Imri Goldberg
-Ionel Cristian Mărieș
-JT Olds
-Jessamyn Smith
-Jon Chappell
-Joseph Tate
-Julian Berman
-Krystian Kichewko
-Leonardo Pistone
-Lex Berezhny
-Marc Abramowitz
-Marcus Cobden
-Mark van der Wal
-Martin Fuzzey
-Matthew Desmarais
-Max Linke
-Mickie Betz
-Noel O'Boyle
-Pablo Carballo
-Patrick Mezard
-Peter Portante
-Rodrigue Cloutier
-Roger Hu
-Ross Lawley
-Sandra Martocchia
-Sigve Tjora
-Stan Hu
-Stefan Behnel
-Steve Leonard
-Steve Peak
-Ted Wexler
-Titus Brown
-Yury Selivanov
-Zooko Wilcox-O'Hearn
--- a/DebugClients/Python/coverage/doc/CHANGES.rst	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,1654 +0,0 @@
-.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-.. For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-==============================
-Change history for Coverage.py
-==============================
-
-
-Version 4.1 --- 2016-05-21
---------------------------
-
-- The internal attribute `Reporter.file_reporters` was removed in 4.1b3.  It
-  should have come as no surprise that there were third-party tools out there
-  using that attribute.  It has been restored, but with a deprecation warning.
-
-
-Version 4.1b3 --- 2016-05-10
-----------------------------
-
-- When running your program, execution can jump from an ``except X:`` line to
-  some other line when an exception other than ``X`` happens.  This jump is no
-  longer considered a branch when measuring branch coverage.
-
-- When measuring branch coverage, ``yield`` statements that were never resumed
-  were incorrectly marked as missing, as reported in `issue 440`_.  This is now
-  fixed.
-
-- During branch coverage of single-line callables like lambdas and generator
-  expressions, coverage.py can now distinguish between them never being called,
-  or being called but not completed.  Fixes `issue 90`_, `issue 460`_ and
-  `issue 475`_.
-
-- The HTML report now has a map of the file along the rightmost edge of the
-  page, giving an overview of where the missed lines are.  Thanks, Dmitry
-  Shishov.
-
-- The HTML report now uses different monospaced fonts, favoring Consolas over
-  Courier.  Along the way, `issue 472`_ about not properly handling one-space
-  indents was fixed.  The index page also has slightly different styling, to
-  try to make the clickable detail pages more apparent.
-
-- Missing branches reported with ``coverage report -m`` will now say ``->exit``
-  for missed branches to the exit of a function, rather than a negative number.
-  Fixes `issue 469`_.
-
-- ``coverage --help`` and ``coverage --version`` now mention which tracer is
-  installed, to help diagnose problems. The docs mention which features need
-  the C extension. (`issue 479`_)
-
-- Officially support PyPy 5.1, which required no changes, just updates to the
-  docs.
-
-- The `Coverage.report` function had two parameters with non-None defaults,
-  which have been changed.  `show_missing` used to default to True, but now
-  defaults to None.  If you had been calling `Coverage.report` without
-  specifying `show_missing`, you'll need to explicitly set it to True to keep
-  the same behavior.  `skip_covered` used to default to False. It is now None,
-  which doesn't change the behavior.  This fixes `issue 485`_.
-
-- It's never been possible to pass a namespace module to one of the analysis
-  functions, but now at least we raise a more specific error message, rather
-  than getting confused. (`issue 456`_)
-
-- The `coverage.process_startup` function now returns the `Coverage` instance
-  it creates, as suggested in `issue 481`_.
-
-- Make a small tweak to how we compare threads, to avoid buggy custom
-  comparison code in thread classes. (`issue 245`_)
-
-.. _issue 90: https://bitbucket.org/ned/coveragepy/issues/90/lambda-expression-confuses-branch
-.. _issue 245: https://bitbucket.org/ned/coveragepy/issues/245/change-solution-for-issue-164
-.. _issue 440: https://bitbucket.org/ned/coveragepy/issues/440/yielded-twisted-failure-marked-as-missed
-.. _issue 456: https://bitbucket.org/ned/coveragepy/issues/456/coverage-breaks-with-implicit-namespaces
-.. _issue 460: https://bitbucket.org/ned/coveragepy/issues/460/confusing-html-report-for-certain-partial
-.. _issue 469: https://bitbucket.org/ned/coveragepy/issues/469/strange-1-line-number-in-branch-coverage
-.. _issue 472: https://bitbucket.org/ned/coveragepy/issues/472/html-report-indents-incorrectly-for-one
-.. _issue 475: https://bitbucket.org/ned/coveragepy/issues/475/generator-expression-is-marked-as-not
-.. _issue 479: https://bitbucket.org/ned/coveragepy/issues/479/clarify-the-need-for-the-c-extension
-.. _issue 481: https://bitbucket.org/ned/coveragepy/issues/481/asyncioprocesspoolexecutor-tracing-not
-.. _issue 485: https://bitbucket.org/ned/coveragepy/issues/485/coveragereport-ignores-show_missing-and
-
-
-Version 4.1b2 --- 2016-01-23
-----------------------------
-
-- Problems with the new branch measurement in 4.1 beta 1 were fixed:
-
-  - Class docstrings were considered executable.  Now they no longer are.
-
-  - ``yield from`` and ``await`` were considered returns from functions, since
-    they could transfer control to the caller.  This produced unhelpful "missing
-    branch" reports in a number of circumstances.  Now they no longer are
-    considered returns.
-
-  - In unusual situations, a missing branch to a negative number was reported.
-    This has been fixed, closing `issue 466`_.
-
-- The XML report now produces correct package names for modules found in
-  directories specified with ``source=``.  Fixes `issue 465`_.
-
-- ``coverage report`` won't produce trailing whitespace.
-
-.. _issue 465: https://bitbucket.org/ned/coveragepy/issues/465/coveragexml-produces-package-names-with-an
-.. _issue 466: https://bitbucket.org/ned/coveragepy/issues/466/impossible-missed-branch-to-a-negative
-
-
-Version 4.1b1 --- 2016-01-10
-----------------------------
-
-- Branch analysis has been rewritten: it used to be based on bytecode, but now
-  uses AST analysis.  This has changed a number of things:
-
-  - More code paths are now considered runnable, especially in
-    ``try``/``except`` structures.  This may mean that coverage.py will
-    identify more code paths as uncovered.  This could either raise or lower
-    your overall coverage number.
-
-  - Python 3.5's ``async`` and ``await`` keywords are properly supported,
-    fixing `issue 434`_.
-
-  - Some long-standing branch coverage bugs were fixed:
-
-    - `issue 129`_: functions with only a docstring for a body would
-      incorrectly report a missing branch on the ``def`` line.
-
-    - `issue 212`_: code in an ``except`` block could be incorrectly marked as
-      a missing branch.
-
-    - `issue 146`_: context managers (``with`` statements) in a loop or ``try``
-      block could confuse the branch measurement, reporting incorrect partial
-      branches.
-
-    - `issue 422`_: in Python 3.5, an actual partial branch could be marked as
-      complete.
-
-- Pragmas to disable coverage measurement can now be used on decorator lines,
-  and they will apply to the entire function or class being decorated.  This
-  implements the feature requested in `issue 131`_.
-
-- Multiprocessing support is now available on Windows.  Thanks, Rodrigue
-  Cloutier.
-
-- Files with two encoding declarations are properly supported, fixing
-  `issue 453`_. Thanks, Max Linke.
-
-- Non-ascii characters in regexes in the configuration file worked in 3.7, but
-  stopped working in 4.0.  Now they work again, closing `issue 455`_.
-
-- Form-feed characters would prevent accurate determination of the beginning of
-  statements in the rest of the file.  This is now fixed, closing `issue 461`_.
-
-.. _issue 129: https://bitbucket.org/ned/coveragepy/issues/129/misleading-branch-coverage-of-empty
-.. _issue 131: https://bitbucket.org/ned/coveragepy/issues/131/pragma-on-a-decorator-line-should-affect
-.. _issue 146: https://bitbucket.org/ned/coveragepy/issues/146/context-managers-confuse-branch-coverage
-.. _issue 212: https://bitbucket.org/ned/coveragepy/issues/212/coverage-erroneously-reports-partial
-.. _issue 422: https://bitbucket.org/ned/coveragepy/issues/422/python35-partial-branch-marked-as-fully
-.. _issue 434: https://bitbucket.org/ned/coveragepy/issues/434/indexerror-in-python-35
-.. _issue 453: https://bitbucket.org/ned/coveragepy/issues/453/source-code-encoding-can-only-be-specified
-.. _issue 455: https://bitbucket.org/ned/coveragepy/issues/455/unusual-exclusions-stopped-working-in
-.. _issue 461: https://bitbucket.org/ned/coveragepy/issues/461/multiline-asserts-need-too-many-pragma
-
-
-Version 4.0.3 --- 2015-11-24
-----------------------------
-
-- Fixed a mysterious problem that manifested in different ways: sometimes
-  hanging the process (`issue 420`_), sometimes making database connections
-  fail (`issue 445`_).
-
-- The XML report now has correct ``<source>`` elements when using a
-  ``--source=`` option somewhere besides the current directory.  This fixes
-  `issue 439`_. Thanks, Arcadiy Ivanov.
-
-- Fixed an unusual edge case of detecting source encodings, described in
-  `issue 443`_.
-
-- Help messages that mention the command to use now properly use the actual
-  command name, which might be different than "coverage".  Thanks to Ben
-  Finney, this closes `issue 438`_.
-
-.. _issue 420: https://bitbucket.org/ned/coveragepy/issues/420/coverage-40-hangs-indefinitely-on-python27
-.. _issue 438: https://bitbucket.org/ned/coveragepy/issues/438/parameterise-coverage-command-name
-.. _issue 439: https://bitbucket.org/ned/coveragepy/issues/439/incorrect-cobertura-file-sources-generated
-.. _issue 443: https://bitbucket.org/ned/coveragepy/issues/443/coverage-gets-confused-when-encoding
-.. _issue 445: https://bitbucket.org/ned/coveragepy/issues/445/django-app-cannot-connect-to-cassandra
-
-
-Version 4.0.2 --- 2015-11-04
-----------------------------
-
-- More work on supporting unusually encoded source. Fixed `issue 431`_.
-
-- Files or directories with non-ASCII characters are now handled properly,
-  fixing `issue 432`_.
-
-- Setting a trace function with sys.settrace was broken by a change in 4.0.1,
-  as reported in `issue 436`_.  This is now fixed.
-
-- Officially support PyPy 4.0, which required no changes, just updates to the
-  docs.
-
-.. _issue 431: https://bitbucket.org/ned/coveragepy/issues/431/couldnt-parse-python-file-with-cp1252
-.. _issue 432: https://bitbucket.org/ned/coveragepy/issues/432/path-with-unicode-characters-various
-.. _issue 436: https://bitbucket.org/ned/coveragepy/issues/436/disabled-coverage-ctracer-may-rise-from
-
-
-Version 4.0.1 --- 2015-10-13
-----------------------------
-
-- When combining data files, unreadable files will now generate a warning
-  instead of failing the command.  This is more in line with the older
-  coverage.py v3.7.1 behavior, which silently ignored unreadable files.
-  Prompted by `issue 418`_.
-
-- The --skip-covered option would skip reporting on 100% covered files, but
-  also skipped them when calculating total coverage.  This was wrong, it should
-  only remove lines from the report, not change the final answer.  This is now
-  fixed, closing `issue 423`_.
-
-- In 4.0, the data file recorded a summary of the system on which it was run.
-  Combined data files would keep all of those summaries.  This could lead to
-  enormous data files consisting of mostly repetitive useless information. That
-  summary is now gone, fixing `issue 415`_.  If you want summary information,
-  get in touch, and we'll figure out a better way to do it.
-
-- Test suites that mocked os.path.exists would experience strange failures, due
-  to coverage.py using their mock inadvertently.  This is now fixed, closing
-  `issue 416`_.
-
-- Importing a ``__init__`` module explicitly would lead to an error:
-  ``AttributeError: 'module' object has no attribute '__path__'``, as reported
-  in `issue 410`_.  This is now fixed.
-
-- Code that uses ``sys.settrace(sys.gettrace())`` used to incur a more than 2x
-  speed penalty.  Now there's no penalty at all. Fixes `issue 397`_.
-
-- Pyexpat C code will no longer be recorded as a source file, fixing
-  `issue 419`_.
-
-- The source kit now contains all of the files needed to have a complete source
-  tree, re-fixing `issue 137`_ and closing `issue 281`_.
-
-.. _issue 281: https://bitbucket.org/ned/coveragepy/issues/281/supply-scripts-for-testing-in-the
-.. _issue 397: https://bitbucket.org/ned/coveragepy/issues/397/stopping-and-resuming-coverage-with
-.. _issue 410: https://bitbucket.org/ned/coveragepy/issues/410/attributeerror-module-object-has-no
-.. _issue 415: https://bitbucket.org/ned/coveragepy/issues/415/repeated-coveragedataupdates-cause
-.. _issue 416: https://bitbucket.org/ned/coveragepy/issues/416/mocking-ospathexists-causes-failures
-.. _issue 418: https://bitbucket.org/ned/coveragepy/issues/418/json-parse-error
-.. _issue 419: https://bitbucket.org/ned/coveragepy/issues/419/nosource-no-source-for-code-path-to-c
-.. _issue 423: https://bitbucket.org/ned/coveragepy/issues/423/skip_covered-changes-reported-total
-
-
-Version 4.0 --- 2015-09-20
---------------------------
-
-No changes from 4.0b3
-
-
-Version 4.0b3 --- 2015-09-07
-----------------------------
-
-- Reporting on an unmeasured file would fail with a traceback.  This is now
-  fixed, closing `issue 403`_.
-
-- The Jenkins ShiningPanda plugin looks for an obsolete file name to find the
-  HTML reports to publish, so it was failing under coverage.py 4.0.  Now we
-  create that file if we are running under Jenkins, to keep things working
-  smoothly. `issue 404`_.
-
-- Kits used to include tests and docs, but didn't install them anywhere, or
-  provide all of the supporting tools to make them useful.  Kits no longer
-  include tests and docs.  If you were using them from the older packages, get
-  in touch and help me understand how.
-
-.. _issue 403: https://bitbucket.org/ned/coveragepy/issues/403/hasherupdate-fails-with-typeerror-nonetype
-.. _issue 404: https://bitbucket.org/ned/coveragepy/issues/404/shiningpanda-jenkins-plugin-cant-find-html
-
-
-
-Version 4.0b2 --- 2015-08-22
-----------------------------
-
-- 4.0b1 broke ``--append`` creating new data files.  This is now fixed, closing
-  `issue 392`_.
-
-- ``py.test --cov`` can write empty data, then touch files due to ``--source``,
-  which made coverage.py mistakenly force the data file to record lines instead
-  of arcs.  This would lead to a "Can't combine line data with arc data" error
-  message.  This is now fixed, and changed some method names in the
-  CoverageData interface.  Fixes `issue 399`_.
-
-- `CoverageData.read_fileobj` and `CoverageData.write_fileobj` replace the
-  `.read` and `.write` methods, and are now properly inverses of each other.
-
-- When using ``report --skip-covered``, a message will now be included in the
-  report output indicating how many files were skipped, and if all files are
-  skipped, coverage.py won't accidentally scold you for having no data to
-  report.  Thanks, Krystian Kichewko.
-
-- A new conversion utility has been added:  ``python -m coverage.pickle2json``
-  will convert v3.x pickle data files to v4.x JSON data files.  Thanks,
-  Alexander Todorov.  Closes `issue 395`_.
-
-- A new version identifier is available, `coverage.version_info`, a plain tuple
-  of values similar to `sys.version_info`_.
-
-.. _issue 392: https://bitbucket.org/ned/coveragepy/issues/392/run-append-doesnt-create-coverage-file
-.. _issue 395: https://bitbucket.org/ned/coveragepy/issues/395/rfe-read-pickled-files-as-well-for
-.. _issue 399: https://bitbucket.org/ned/coveragepy/issues/399/coverageexception-cant-combine-line-data
-.. _sys.version_info: https://docs.python.org/3/library/sys.html#sys.version_info
-
-
-Version 4.0b1 --- 2015-08-02
-----------------------------
-
-- Coverage.py is now licensed under the Apache 2.0 license.  See NOTICE.txt for
-  details.  Closes `issue 313`_.
-
-- The data storage has been completely revamped.  The data file is now
-  JSON-based instead of a pickle, closing `issue 236`_.  The `CoverageData`
-  class is now a public supported documented API to the data file.
-
-- A new configuration option, ``[run] note``, lets you set a note that will be
-  stored in the `runs` section of the data file.  You can use this to annotate
-  the data file with any information you like.
-
-- Unrecognized configuration options will now print an error message and stop
-  coverage.py.  This should help prevent configuration mistakes from passing
-  silently.  Finishes `issue 386`_.
-
-- In parallel mode, ``coverage erase`` will now delete all of the data files,
-  fixing `issue 262`_.
-
-- Coverage.py now accepts a directory name for ``coverage run`` and will run a
-  ``__main__.py`` found there, just like Python will.  Fixes `issue 252`_.
-  Thanks, Dmitry Trofimov.
-
-- The XML report now includes a ``missing-branches`` attribute.  Thanks, Steve
-  Peak.  This is not a part of the Cobertura DTD, so the XML report no longer
-  references the DTD.
-
-- Missing branches in the HTML report now have a bit more information in the
-  right-hand annotations.  Hopefully this will make their meaning clearer.
-
-- All the reporting functions now behave the same if no data had been
-  collected, exiting with a status code of 1.  Fixed ``fail_under`` to be
-  applied even when the report is empty.  Thanks, Ionel Cristian Mărieș.
-
-- Plugins are now initialized differently.  Instead of looking for a class
-  called ``Plugin``, coverage.py looks for a function called ``coverage_init``.
-
-- A file-tracing plugin can now ask to have built-in Python reporting by
-  returning `"python"` from its `file_reporter()` method.
-
-- Code that was executed with `exec` would be mis-attributed to the file that
-  called it.  This is now fixed, closing `issue 380`_.
-
-- The ability to use item access on `Coverage.config` (introduced in 4.0a2) has
-  been changed to a more explicit `Coverage.get_option` and
-  `Coverage.set_option` API.
-
-- The ``Coverage.use_cache`` method is no longer supported.
-
-- The private method ``Coverage._harvest_data`` is now called
-  ``Coverage.get_data``, and returns the ``CoverageData`` containing the
-  collected data.
-
-- The project is consistently referred to as "coverage.py" throughout the code
-  and the documentation, closing `issue 275`_.
-
-- Combining data files with an explicit configuration file was broken in 4.0a6,
-  but now works again, closing `issue 385`_.
-
-- ``coverage combine`` now accepts files as well as directories.
-
-- The speed is back to 3.7.1 levels, after having slowed down due to plugin
-  support, finishing up `issue 387`_.
-
-.. _issue 236: https://bitbucket.org/ned/coveragepy/issues/236/pickles-are-bad-and-you-should-feel-bad
-.. _issue 252: https://bitbucket.org/ned/coveragepy/issues/252/coverage-wont-run-a-program-with
-.. _issue 262: https://bitbucket.org/ned/coveragepy/issues/262/when-parallel-true-erase-should-erase-all
-.. _issue 275: https://bitbucket.org/ned/coveragepy/issues/275/refer-consistently-to-project-as-coverage
-.. _issue 313: https://bitbucket.org/ned/coveragepy/issues/313/add-license-file-containing-2-3-or-4
-.. _issue 380: https://bitbucket.org/ned/coveragepy/issues/380/code-executed-by-exec-excluded-from
-.. _issue 385: https://bitbucket.org/ned/coveragepy/issues/385/coverage-combine-doesnt-work-with-rcfile
-.. _issue 386: https://bitbucket.org/ned/coveragepy/issues/386/error-on-unrecognised-configuration
-.. _issue 387: https://bitbucket.org/ned/coveragepy/issues/387/performance-degradation-from-371-to-40
-
-.. 40 issues closed in 4.0 below here
-
-
-Version 4.0a6 --- 2015-06-21
-----------------------------
-
-- Python 3.5b2 and PyPy 2.6.0 are supported.
-
-- The original module-level function interface to coverage.py is no longer
-  supported.  You must now create a ``coverage.Coverage`` object, and use
-  methods on it.
-
-- The ``coverage combine`` command now accepts any number of directories as
-  arguments, and will combine all the data files from those directories.  This
-  means you don't have to copy the files to one directory before combining.
-  Thanks, Christine Lytwynec.  Finishes `issue 354`_.
-
-- Branch coverage couldn't properly handle certain extremely long files. This
-  is now fixed (`issue 359`_).
-
-- Branch coverage didn't understand yield statements properly.  Mickie Betz
-  persisted in pursuing this despite Ned's pessimism.  Fixes `issue 308`_ and
-  `issue 324`_.
-
-- The COVERAGE_DEBUG environment variable can be used to set the ``[run] debug``
-  configuration option to control what internal operations are logged.
-
-- HTML reports were truncated at formfeed characters.  This is now fixed
-  (`issue 360`_).  It's always fun when the problem is due to a `bug in the
-  Python standard library <http://bugs.python.org/issue19035>`_.
-
-- Files with incorrect encoding declaration comments are no longer ignored by
-  the reporting commands, fixing `issue 351`_.
-
-- HTML reports now include a timestamp in the footer, closing `issue 299`_.
-  Thanks, Conrad Ho.
-
-- HTML reports now begrudgingly use double-quotes rather than single quotes,
-  because there are "software engineers" out there writing tools that read HTML
-  and somehow have no idea that single quotes exist.  Capitulates to the absurd
-  `issue 361`_.  Thanks, Jon Chappell.
-
-- The ``coverage annotate`` command now handles non-ASCII characters properly,
-  closing `issue 363`_.  Thanks, Leonardo Pistone.
-
-- Drive letters on Windows were not normalized correctly, now they are. Thanks,
-  Ionel Cristian Mărieș.
-
-- Plugin support had some bugs fixed, closing `issue 374`_ and `issue 375`_.
-  Thanks, Stefan Behnel.
-
-.. _issue 299: https://bitbucket.org/ned/coveragepy/issue/299/inserted-created-on-yyyy-mm-dd-hh-mm-in
-.. _issue 308: https://bitbucket.org/ned/coveragepy/issue/308/yield-lambda-branch-coverage
-.. _issue 324: https://bitbucket.org/ned/coveragepy/issue/324/yield-in-loop-confuses-branch-coverage
-.. _issue 351: https://bitbucket.org/ned/coveragepy/issue/351/files-with-incorrect-encoding-are-ignored
-.. _issue 354: https://bitbucket.org/ned/coveragepy/issue/354/coverage-combine-should-take-a-list-of
-.. _issue 359: https://bitbucket.org/ned/coveragepy/issue/359/xml-report-chunk-error
-.. _issue 360: https://bitbucket.org/ned/coveragepy/issue/360/html-reports-get-confused-by-l-in-the-code
-.. _issue 361: https://bitbucket.org/ned/coveragepy/issue/361/use-double-quotes-in-html-output-to
-.. _issue 363: https://bitbucket.org/ned/coveragepy/issue/363/annotate-command-hits-unicode-happy-fun
-.. _issue 374: https://bitbucket.org/ned/coveragepy/issue/374/c-tracer-lookups-fail-in
-.. _issue 375: https://bitbucket.org/ned/coveragepy/issue/375/ctracer_handle_return-reads-byte-code
-
-
-Version 4.0a5 --- 2015-02-16
-----------------------------
-
-- Plugin support is now implemented in the C tracer instead of the Python
-  tracer. This greatly improves the speed of tracing projects using plugins.
-
-- Coverage.py now always adds the current directory to sys.path, so that
-  plugins can import files in the current directory (`issue 358`_).
-
-- If the `config_file` argument to the Coverage constructor is specified as
-  ".coveragerc", it is treated as if it were True.  This means setup.cfg is
-  also examined, and a missing file is not considered an error (`issue 357`_).
-
-- Wildly experimental: support for measuring processes started by the
-  multiprocessing module.  To use, set ``--concurrency=multiprocessing``,
-  either on the command line or in the .coveragerc file (`issue 117`_). Thanks,
-  Eduardo Schettino.  Currently, this does not work on Windows.
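-
-  A minimal sketch of the configuration-file form described above (the
-  ``concurrency`` key under ``[run]`` mirrors the command-line switch)::
-
-      # .coveragerc
-      [run]
-      concurrency = multiprocessing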
-
-- A new warning is possible, if a desired file isn't measured because it was
-  imported before coverage.py was started (`issue 353`_).
-
-- The `coverage.process_startup` function will now start coverage measurement
-  only once, no matter how many times it is called.  This fixes problems due
-  to unusual virtualenv configurations (`issue 340`_).
-
-- Added 3.5.0a1 to the list of supported CPython versions.
-
-.. _issue 117: https://bitbucket.org/ned/coveragepy/issue/117/enable-coverage-measurement-of-code-run-by
-.. _issue 340: https://bitbucket.org/ned/coveragepy/issue/340/keyerror-subpy
-.. _issue 353: https://bitbucket.org/ned/coveragepy/issue/353/40a3-introduces-an-unexpected-third-case
-.. _issue 357: https://bitbucket.org/ned/coveragepy/issue/357/behavior-changed-when-coveragerc-is
-.. _issue 358: https://bitbucket.org/ned/coveragepy/issue/358/all-coverage-commands-should-adjust
-
-
-Version 4.0a4 --- 2015-01-25
-----------------------------
-
-- Plugins can now provide sys_info for debugging output.
-
-- Started plugins documentation.
-
-- Prepared to move the docs to readthedocs.org.
-
-
-Version 4.0a3 --- 2015-01-20
-----------------------------
-
-- Reports now use file names with extensions.  Previously, a report would
-  describe a/b/c.py as "a/b/c".  Now it is shown as "a/b/c.py".  This allows
-  for better support of non-Python files, and also fixed `issue 69`_.
-
-- The XML report now reports each directory as a package again.  This was a bad
-  regression, I apologize.  This was reported in `issue 235`_, which is now
-  fixed.
-
-- A new configuration option for the XML report: ``[xml] package_depth``
-  controls which directories are identified as packages in the report.
-  Directories deeper than this depth are not reported as packages.
-  The default is that all directories are reported as packages.
-  Thanks, Lex Berezhny.
-
-- When looking for the source for a frame, check if the file exists. On
-  Windows, .pyw files are no longer recorded as .py files. Along the way, this
-  fixed `issue 290`_.
-
-- Empty files are now reported as 100% covered in the XML report, not 0%
-  covered (`issue 345`_).
-
-- Regexes in the configuration file are now compiled as soon as they are read,
-  to provide error messages earlier (`issue 349`_).
-
-.. _issue 69: https://bitbucket.org/ned/coveragepy/issue/69/coverage-html-overwrite-files-that-doesnt
-.. _issue 235: https://bitbucket.org/ned/coveragepy/issue/235/package-name-is-missing-in-xml-report
-.. _issue 290: https://bitbucket.org/ned/coveragepy/issue/290/running-programmatically-with-pyw-files
-.. _issue 345: https://bitbucket.org/ned/coveragepy/issue/345/xml-reports-line-rate-0-for-empty-files
-.. _issue 349: https://bitbucket.org/ned/coveragepy/issue/349/bad-regex-in-config-should-get-an-earlier
-
-
-Version 4.0a2 --- 2015-01-14
-----------------------------
-
-- Officially support PyPy 2.4, and PyPy3 2.4.  Drop support for
-  CPython 3.2 and older versions of PyPy.  The code won't work on CPython 3.2.
-  It will probably still work on older versions of PyPy, but I'm not testing
-  against them.
-
-- Plugins!
-
-- The original command line switches (`-x` to run a program, etc) are no
-  longer supported.
-
-- A new option: `coverage report --skip-covered` will reduce the number of
-  files reported by skipping files with 100% coverage.  Thanks, Krystian
-  Kichewko.  This means that empty `__init__.py` files will be skipped, since
-  they are 100% covered, closing `issue 315`_.
-
-- You can now specify the ``--fail-under`` option in the ``.coveragerc`` file
-  as the ``[report] fail_under`` option.  This closes `issue 314`_.
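-
-  For example (the threshold value here is only illustrative)::
-
-      # .coveragerc
-      [report]
-      fail_under = 85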
-
-- The ``COVERAGE_OPTIONS`` environment variable is no longer supported.  It was
-  a hack for ``--timid`` before configuration files were available.
-
-- The HTML report now has filtering.  Type text into the Filter box on the
-  index page, and only modules with that text in the name will be shown.
-  Thanks, Danny Allen.
-
-- The textual report and the HTML report used to report partial branches
-  differently for no good reason.  Now the text report's "missing branches"
-  column is a "partial branches" column so that both reports show the same
-  numbers.  This closes `issue 342`_.
-
-- If you specify a ``--rcfile`` that cannot be read, you will get an error
-  message.  Fixes `issue 343`_.
-
-- The ``--debug`` switch can now be used on any command.
-
-- You can now programmatically adjust the configuration of coverage.py by
-  setting items on `Coverage.config` after construction.
-
-- A module run with ``-m`` can be used as the argument to ``--source``, fixing
-  `issue 328`_.  Thanks, Buck Evan.
-
-- The regex for matching exclusion pragmas has been fixed to allow more kinds
-  of whitespace, fixing `issue 334`_.
-
-- Made some PyPy-specific tweaks to improve speed under PyPy.  Thanks, Alex
-  Gaynor.
-
-- In some cases, with a source file missing a final newline, coverage.py would
-  count statements incorrectly.  This is now fixed, closing `issue 293`_.
-
-- The status.dat file that HTML reports use to avoid re-creating files that
-  haven't changed is now a JSON file instead of a pickle file.  This obviates
-  `issue 287`_ and `issue 237`_.
-
-.. _issue 237: https://bitbucket.org/ned/coveragepy/issue/237/htmlcov-with-corrupt-statusdat
-.. _issue 287: https://bitbucket.org/ned/coveragepy/issue/287/htmlpy-doesnt-specify-pickle-protocol
-.. _issue 293: https://bitbucket.org/ned/coveragepy/issue/293/number-of-statement-detection-wrong-if-no
-.. _issue 314: https://bitbucket.org/ned/coveragepy/issue/314/fail_under-param-not-working-in-coveragerc
-.. _issue 315: https://bitbucket.org/ned/coveragepy/issue/315/option-to-omit-empty-files-eg-__init__py
-.. _issue 328: https://bitbucket.org/ned/coveragepy/issue/328/misbehavior-in-run-source
-.. _issue 334: https://bitbucket.org/ned/coveragepy/issue/334/pragma-not-recognized-if-tab-character
-.. _issue 342: https://bitbucket.org/ned/coveragepy/issue/342/console-and-html-coverage-reports-differ
-.. _issue 343: https://bitbucket.org/ned/coveragepy/issue/343/an-explicitly-named-non-existent-config
-
-
-Version 4.0a1 --- 2014-09-27
-----------------------------
-
-- Python versions supported are now CPython 2.6, 2.7, 3.2, 3.3, and 3.4, and
-  PyPy 2.2.
-
-- Gevent, eventlet, and greenlet are now supported, closing `issue 149`_.
-  The ``concurrency`` setting specifies the concurrency library in use.  Huge
-  thanks to Peter Portante for initial implementation, and to Joe Jevnik for
-  the final insight that completed the work.
-
-- Options are now also read from a setup.cfg file, if any.  Sections are
-  prefixed with "coverage:", so the ``[run]`` options will be read from the
-  ``[coverage:run]`` section of setup.cfg.  Finishes `issue 304`_.
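-
-  A sketch of the prefixing described above (``branch`` is an ordinary
-  ``[run]`` option, chosen here only as an example)::
-
-      # setup.cfg
-      [coverage:run]
-      branch = True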
-
-- The ``report -m`` command can now show missing branches when reporting on
-  branch coverage.  Thanks, Steve Leonard. Closes `issue 230`_.
-
-- The XML report now contains a <source> element, fixing `issue 94`_.  Thanks
-  Stan Hu.
-
-- The class defined in the coverage module is now called ``Coverage`` instead
-  of ``coverage``, though the old name still works, for backward compatibility.
-
-- The ``fail-under`` value is now rounded the same as reported results,
-  preventing paradoxical results, fixing `issue 284`_.
-
-- The XML report will now create the output directory if need be, fixing
-  `issue 285`_.  Thanks, Chris Rose.
-
-- HTML reports no longer raise UnicodeDecodeError if a Python file has
-  undecodable characters, fixing `issue 303`_ and `issue 331`_.
-
-- The annotate command will now annotate all files, not just ones relative to
-  the current directory, fixing `issue 57`_.
-
-- The coverage module no longer causes deprecation warnings on Python 3.4 by
-  importing the imp module, fixing `issue 305`_.
-
-- Encoding declarations in source files are only considered if they are truly
-  comments.  Thanks, Anthony Sottile.
-
-.. _issue 57: https://bitbucket.org/ned/coveragepy/issue/57/annotate-command-fails-to-annotate-many
-.. _issue 94: https://bitbucket.org/ned/coveragepy/issue/94/coverage-xml-doesnt-produce-sources
-.. _issue 149: https://bitbucket.org/ned/coveragepy/issue/149/coverage-gevent-looks-broken
-.. _issue 230: https://bitbucket.org/ned/coveragepy/issue/230/show-line-no-for-missing-branches-in
-.. _issue 284: https://bitbucket.org/ned/coveragepy/issue/284/fail-under-should-show-more-precision
-.. _issue 285: https://bitbucket.org/ned/coveragepy/issue/285/xml-report-fails-if-output-file-directory
-.. _issue 303: https://bitbucket.org/ned/coveragepy/issue/303/unicodedecodeerror
-.. _issue 304: https://bitbucket.org/ned/coveragepy/issue/304/attempt-to-get-configuration-from-setupcfg
-.. _issue 305: https://bitbucket.org/ned/coveragepy/issue/305/pendingdeprecationwarning-the-imp-module
-.. _issue 331: https://bitbucket.org/ned/coveragepy/issue/331/failure-of-encoding-detection-on-python2
-
-
-Version 3.7.1 --- 2013-12-13
-----------------------------
-
-- Improved the speed of HTML report generation by about 20%.
-
-- Fixed the mechanism for finding OS-installed static files for the HTML report
-  so that it will actually find OS-installed static files.
-
-
-Version 3.7 --- 2013-10-06
---------------------------
-
-- Added the ``--debug`` switch to ``coverage run``.  It accepts a list of
-  options indicating the type of internal activity to log to stderr.
-
-- Improved the branch coverage facility, fixing `issue 92`_ and `issue 175`_.
-
-- Running code with ``coverage run -m`` now behaves more like Python does,
-  setting sys.path properly, which fixes `issue 207`_ and `issue 242`_.
-
-- Coverage.py can now run .pyc files directly, closing `issue 264`_.
-
-- Coverage.py properly supports .pyw files, fixing `issue 261`_.
-
-- Omitting files within a tree specified with the ``source`` option would
-  cause them to be incorrectly marked as unexecuted, as described in
-  `issue 218`_.  This is now fixed.
-
-- When specifying paths to alias together during data combining, you can now
-  specify relative paths, fixing `issue 267`_.
-
-- Most file paths can now be specified with username expansion (``~/src``, or
-  ``~build/src``, for example), and with environment variable expansion
-  (``build/$BUILDNUM/src``).
-
-- Trying to create an XML report with no files to report on used to cause a
-  ZeroDivisionError, but no longer does, fixing `issue 250`_.
-
-- When running a threaded program under the Python tracer, coverage.py no
-  longer issues a spurious warning about the trace function changing: "Trace
-  function changed, measurement is likely wrong: None."  This fixes `issue
-  164`_.
-
-- Static files necessary for HTML reports are found in system-installed places,
-  to ease OS-level packaging of coverage.py.  Closes `issue 259`_.
-
-- Source files with encoding declarations, but a blank first line, were not
-  decoded properly.  Now they are.  Thanks, Roger Hu.
-
-- The source kit now includes the ``__main__.py`` file in the root coverage
-  directory, fixing `issue 255`_.
-
-.. _issue 92: https://bitbucket.org/ned/coveragepy/issue/92/finally-clauses-arent-treated-properly-in
-.. _issue 164: https://bitbucket.org/ned/coveragepy/issue/164/trace-function-changed-warning-when-using
-.. _issue 175: https://bitbucket.org/ned/coveragepy/issue/175/branch-coverage-gets-confused-in-certain
-.. _issue 207: https://bitbucket.org/ned/coveragepy/issue/207/run-m-cannot-find-module-or-package-in
-.. _issue 242: https://bitbucket.org/ned/coveragepy/issue/242/running-a-two-level-package-doesnt-work
-.. _issue 218: https://bitbucket.org/ned/coveragepy/issue/218/run-command-does-not-respect-the-omit-flag
-.. _issue 250: https://bitbucket.org/ned/coveragepy/issue/250/uncaught-zerodivisionerror-when-generating
-.. _issue 255: https://bitbucket.org/ned/coveragepy/issue/255/directory-level-__main__py-not-included-in
-.. _issue 259: https://bitbucket.org/ned/coveragepy/issue/259/allow-use-of-system-installed-third-party
-.. _issue 261: https://bitbucket.org/ned/coveragepy/issue/261/pyw-files-arent-reported-properly
-.. _issue 264: https://bitbucket.org/ned/coveragepy/issue/264/coverage-wont-run-pyc-files
-.. _issue 267: https://bitbucket.org/ned/coveragepy/issue/267/relative-path-aliases-dont-work
-
-
-Version 3.6 --- 2013-01-05
---------------------------
-
-- Added a page to the docs about troublesome situations, closing `issue 226`_,
-  and added some info to the TODO file, closing `issue 227`_.
-
-.. _issue 226: https://bitbucket.org/ned/coveragepy/issue/226/make-readme-section-to-describe-when
-.. _issue 227: https://bitbucket.org/ned/coveragepy/issue/227/update-todo
-
-
-Version 3.6b3 --- 2012-12-29
-----------------------------
-
-- Beta 2 broke the nose plugin. It's fixed again, closing `issue 224`_.
-
-.. _issue 224: https://bitbucket.org/ned/coveragepy/issue/224/36b2-breaks-nosexcover
-
-
-Version 3.6b2 --- 2012-12-23
-----------------------------
-
-- Coverage.py runs on Python 2.3 and 2.4 again. It was broken in 3.6b1.
-
-- The C extension is optionally compiled using a different, more widely-used
-  technique, taking another stab at fixing `issue 80`_ once and for all.
-
-- Combining data files would create entries for phantom files if used with
-  ``source`` and path aliases.  It no longer does.
-
-- ``debug sys`` now shows the configuration file path that was read.
-
-- If an oddly-behaved package claims that code came from an empty-string
-  file name, coverage.py no longer associates it with the directory name,
-  fixing `issue 221`_.
-
-.. _issue 221: https://bitbucket.org/ned/coveragepy/issue/221/coveragepy-incompatible-with-pyratemp
-
-
-Version 3.6b1 --- 2012-11-28
-----------------------------
-
-- Wildcards in ``include=`` and ``omit=`` arguments were not handled properly
-  in reporting functions, though they were when running.  Now they are handled
-  uniformly, closing `issue 143`_ and `issue 163`_.  **NOTE**: it is possible
-  that your configurations may now be incorrect.  If you use ``include`` or
-  ``omit`` during reporting, whether on the command line, through the API, or
-  in a configuration file, please check carefully that you were not relying on
-  the old broken behavior.
-
-- The **report**, **html**, and **xml** commands now accept a ``--fail-under``
-  switch that indicates in the exit status whether the coverage percentage was
-  less than a particular value.  Closes `issue 139`_.
-
-- The reporting functions coverage.report(), coverage.html_report(), and
-  coverage.xml_report() now all return a float, the total percentage covered
-  measurement.
-
-- The HTML report's title can now be set in the configuration file, with the
-  ``--title`` switch on the command line, or via the API.
-
-- Configuration files now support substitution of environment variables, using
-  syntax like ``${WORD}``.  Closes `issue 97`_.
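-
-  For instance (the ``data_file`` setting and the path are illustrative)::
-
-      # .coveragerc
-      [run]
-      data_file = ${HOME}/.coverage_data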
-
-- Embarrassingly, the ``[xml] output=`` setting in the .coveragerc file simply
-  didn't work.  Now it does.
-
-- The XML report now consistently uses file names for the file name attribute,
-  rather than sometimes using module names.  Fixes `issue 67`_.
-  Thanks, Marcus Cobden.
-
-- Coverage percentage metrics are now computed slightly differently under
-  branch coverage.  This means that completely unexecuted files will now
-  correctly have 0% coverage, fixing `issue 156`_.  This also means that your
-  total coverage numbers will generally now be lower if you are measuring
-  branch coverage.
-
-- When installing, now in addition to creating a "coverage" command, two new
-  aliases are also installed.  A "coverage2" or "coverage3" command will be
-  created, depending on whether you are installing in Python 2.x or 3.x.
-  A "coverage-X.Y" command will also be created corresponding to your specific
-  version of Python.  Closes `issue 111`_.
-
-- The coverage.py installer no longer tries to bootstrap setuptools or
-  Distribute.  You must have one of them installed first, as `issue 202`_
-  recommended.
-
-- The coverage.py kit now includes docs (closing `issue 137`_) and tests.
-
-- On Windows, files are now reported in their correct case, fixing `issue 89`_
-  and `issue 203`_.
-
-- If a file is missing during reporting, the path shown in the error message
-  is now correct, rather than an incorrect path in the current directory.
-  Fixes `issue 60`_.
-
-- Running an HTML report in Python 3 in the same directory as an old Python 2
-  HTML report would fail with a UnicodeDecodeError. This issue (`issue 193`_)
-  is now fixed.
-
-- Fixed yet another error trying to parse non-Python files as Python, this
-  time an IndentationError, closing `issue 82`_ for the fourth time...
-
-- If `coverage xml` fails because there is no data to report, it used to
-  create a zero-length XML file.  Now it doesn't, fixing `issue 210`_.
-
-- Jython files now work with the ``--source`` option, fixing `issue 100`_.
-
-- Running coverage.py under a debugger is unlikely to work, but it shouldn't
-  fail with "TypeError: 'NoneType' object is not iterable".  Fixes `issue
-  201`_.
-
-- On some Linux distributions, when installed with the OS package manager,
-  coverage.py would report its own code as part of the results.  Now it won't,
-  fixing `issue 214`_, though this will take some time to be repackaged by the
-  operating systems.
-
-- Docstrings for the legacy singleton methods are more helpful.  Thanks Marius
-  Gedminas.  Closes `issue 205`_.
-
-- The pydoc tool can now show documentation for the class `coverage.coverage`.
-  Closes `issue 206`_.
-
-- Added a page to the docs about contributing to coverage.py, closing
-  `issue 171`_.
-
-- When coverage.py ended unsuccessfully, it may have reported odd errors like
-  ``'NoneType' object has no attribute 'isabs'``.  It no longer does,
-  so kiss `issue 153`_ goodbye.
-
-.. _issue 60: https://bitbucket.org/ned/coveragepy/issue/60/incorrect-path-to-orphaned-pyc-files
-.. _issue 67: https://bitbucket.org/ned/coveragepy/issue/67/xml-report-filenames-may-be-generated
-.. _issue 89: https://bitbucket.org/ned/coveragepy/issue/89/on-windows-all-packages-are-reported-in
-.. _issue 97: https://bitbucket.org/ned/coveragepy/issue/97/allow-environment-variables-to-be
-.. _issue 100: https://bitbucket.org/ned/coveragepy/issue/100/source-directive-doesnt-work-for-packages
-.. _issue 111: https://bitbucket.org/ned/coveragepy/issue/111/when-installing-coverage-with-pip-not
-.. _issue 137: https://bitbucket.org/ned/coveragepy/issue/137/provide-docs-with-source-distribution
-.. _issue 139: https://bitbucket.org/ned/coveragepy/issue/139/easy-check-for-a-certain-coverage-in-tests
-.. _issue 143: https://bitbucket.org/ned/coveragepy/issue/143/omit-doesnt-seem-to-work-in-coverage
-.. _issue 153: https://bitbucket.org/ned/coveragepy/issue/153/non-existent-filename-triggers
-.. _issue 156: https://bitbucket.org/ned/coveragepy/issue/156/a-completely-unexecuted-file-shows-14
-.. _issue 163: https://bitbucket.org/ned/coveragepy/issue/163/problem-with-include-and-omit-filename
-.. _issue 171: https://bitbucket.org/ned/coveragepy/issue/171/how-to-contribute-and-run-tests
-.. _issue 193: https://bitbucket.org/ned/coveragepy/issue/193/unicodedecodeerror-on-htmlpy
-.. _issue 201: https://bitbucket.org/ned/coveragepy/issue/201/coverage-using-django-14-with-pydb-on
-.. _issue 202: https://bitbucket.org/ned/coveragepy/issue/202/get-rid-of-ez_setuppy-and
-.. _issue 203: https://bitbucket.org/ned/coveragepy/issue/203/duplicate-filenames-reported-when-filename
-.. _issue 205: https://bitbucket.org/ned/coveragepy/issue/205/make-pydoc-coverage-more-friendly
-.. _issue 206: https://bitbucket.org/ned/coveragepy/issue/206/pydoc-coveragecoverage-fails-with-an-error
-.. _issue 210: https://bitbucket.org/ned/coveragepy/issue/210/if-theres-no-coverage-data-coverage-xml
-.. _issue 214: https://bitbucket.org/ned/coveragepy/issue/214/coveragepy-measures-itself-on-precise
-
-
-Version 3.5.3 --- 2012-09-29
-----------------------------
-
-- Line numbers in the HTML report line up better with the source lines, fixing
-  `issue 197`_, thanks Marius Gedminas.
-
-- When specifying a directory as the source= option, the directory itself no
-  longer needs to have a ``__init__.py`` file, though its sub-directories do,
-  to be considered as source files.
-
-- Files encoded as UTF-8 with a BOM are now properly handled, fixing
-  `issue 179`_.  Thanks, Pablo Carballo.
-
-- Fixed more cases of non-Python files being reported as Python source, and
-  then not being able to parse them as Python.  Closes `issue 82`_ (again).
-  Thanks, Julian Berman.
-
-- Fixed memory leaks under Python 3, thanks, Brett Cannon. Closes `issue 147`_.
-
-- Optimized .pyo files may not have been handled correctly, `issue 195`_.
-  Thanks, Marius Gedminas.
-
-- Certain unusually named file paths could have been mangled during reporting,
-  `issue 194`_.  Thanks, Marius Gedminas.
-
-- Try to do a better job of the impossible task of detecting when we can't
-  build the C extension, fixing `issue 183`_.
-
-- Testing is now done with `tox`_, thanks, Marc Abramowitz.
-
-.. _issue 147: https://bitbucket.org/ned/coveragepy/issue/147/massive-memory-usage-by-ctracer
-.. _issue 179: https://bitbucket.org/ned/coveragepy/issue/179/htmlreporter-fails-when-source-file-is
-.. _issue 183: https://bitbucket.org/ned/coveragepy/issue/183/install-fails-for-python-23
-.. _issue 194: https://bitbucket.org/ned/coveragepy/issue/194/filelocatorrelative_filename-could-mangle
-.. _issue 195: https://bitbucket.org/ned/coveragepy/issue/195/pyo-file-handling-in-codeunit
-.. _issue 197: https://bitbucket.org/ned/coveragepy/issue/197/line-numbers-in-html-report-do-not-align
-.. _tox: http://tox.readthedocs.org/
-
-
-Version 3.5.2 --- 2012-05-04
-----------------------------
-
-No changes since 3.5.2b1
-
-
-Version 3.5.2b1 --- 2012-04-29
-------------------------------
-
-- The HTML report has slightly tweaked controls: the buttons at the top of
-  the page are color-coded to the source lines they affect.
-
-- Custom CSS can be applied to the HTML report by specifying a CSS file as
-  the ``extra_css`` configuration value in the ``[html]`` section.
-
-- Source files with custom encodings declared in a comment at the top are now
-  properly handled during reporting on Python 2.  Python 3 always handled them
-  properly.  This fixes `issue 157`_.
-
-- Backup files left behind by editors are no longer collected by the source=
-  option, fixing `issue 168`_.
-
-- If a file doesn't parse properly as Python, we don't report it as an error
-  if the file name seems like maybe it wasn't meant to be Python.  This is a
-  pragmatic fix for `issue 82`_.
-
-- The ``-m`` switch on ``coverage report``, which includes missing line numbers
-  in the summary report, can now be specified as ``show_missing`` in the
-  config file.  Closes `issue 173`_.
-
-- When running a module with ``coverage run -m <modulename>``, certain details
-  of the execution environment weren't the same as for
-  ``python -m <modulename>``.  This had the unfortunate side-effect of making
-  ``coverage run -m unittest discover`` not work if you had tests in a
-  directory named "test".  This fixes `issue 155`_ and `issue 142`_.
-
-- Now the exit status of your product code is properly used as the process
-  status when running ``python -m coverage run ...``.  Thanks, JT Olds.
-
-- When installing into pypy, we no longer attempt (and fail) to compile
-  the C tracer function, closing `issue 166`_.
-
-.. _issue 142: https://bitbucket.org/ned/coveragepy/issue/142/executing-python-file-syspath-is-replaced
-.. _issue 155: https://bitbucket.org/ned/coveragepy/issue/155/cant-use-coverage-run-m-unittest-discover
-.. _issue 157: https://bitbucket.org/ned/coveragepy/issue/157/chokes-on-source-files-with-non-utf-8
-.. _issue 166: https://bitbucket.org/ned/coveragepy/issue/166/dont-try-to-compile-c-extension-on-pypy
-.. _issue 168: https://bitbucket.org/ned/coveragepy/issue/168/dont-be-alarmed-by-emacs-droppings
-.. _issue 173: https://bitbucket.org/ned/coveragepy/issue/173/theres-no-way-to-specify-show-missing-in
-
-
-Version 3.5.1 --- 2011-09-23
-----------------------------
-
-- The ``[paths]`` feature unfortunately didn't work in real world situations
-  where you wanted to, you know, report on the combined data.  Now all paths
-  stored in the combined file are canonicalized properly.
-
-
-Version 3.5.1b1 --- 2011-08-28
-------------------------------
-
-- When combining data files from parallel runs, you can now instruct
-  coverage.py about which directories are equivalent on different machines.  A
-  ``[paths]`` section in the configuration file lists paths that are to be
-  considered equivalent.  Finishes `issue 17`_.
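-
-  A hedged sketch (the directory names are hypothetical; the first path is
-  the canonical one that the others are remapped onto)::
-
-      # .coveragerc
-      [paths]
-      source =
-          src/
-          /jenkins/build/*/src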
-
-- for-else constructs are understood better, and don't cause erroneous partial
-  branch warnings.  Fixes `issue 122`_.
-
-- Branch coverage for ``with`` statements is improved, fixing `issue 128`_.
-
-- The number of partial branches reported on the HTML summary page was
-  different than the number reported on the individual file pages.  This is
-  now fixed.
-
-- An explicit include directive to measure files in the Python installation
-  wouldn't work because of the standard library exclusion.  Now the include
-  directive takes precedence, and the files will be measured.  Fixes
-  `issue 138`_.
-
-- The HTML report now handles Unicode characters in Python source files
-  properly.  This fixes `issue 124`_ and `issue 144`_. Thanks, Devin
-  Jeanpierre.
-
-- In order to help the core developers measure the test coverage of the
-  standard library, Brandon Rhodes devised an aggressive hack to trick Python
-  into running some coverage.py code before anything else in the process.
-  See the coverage/fullcoverage directory if you are interested.
-
-.. _issue 17: http://bitbucket.org/ned/coveragepy/issue/17/support-combining-coverage-data-from
-.. _issue 122: http://bitbucket.org/ned/coveragepy/issue/122/for-else-always-reports-missing-branch
-.. _issue 124: http://bitbucket.org/ned/coveragepy/issue/124/no-arbitrary-unicode-in-html-reports-in
-.. _issue 128: http://bitbucket.org/ned/coveragepy/issue/128/branch-coverage-of-with-statement-in-27
-.. _issue 138: http://bitbucket.org/ned/coveragepy/issue/138/include-should-take-precedence-over-is
-.. _issue 144: http://bitbucket.org/ned/coveragepy/issue/144/failure-generating-html-output-for
-
-
-Version 3.5 --- 2011-06-29
---------------------------
-
-- The HTML report hotkeys now behave slightly differently when the current
-  chunk isn't visible at all:  a chunk on the screen will be selected,
-  instead of the old behavior of jumping to the literal next chunk.
-  The hotkeys now work in Google Chrome.  Thanks, Guido van Rossum.
-
-
-Version 3.5b1 --- 2011-06-05
-----------------------------
-
-- The HTML report now has hotkeys.  Try ``n``, ``s``, ``m``, ``x``, ``b``,
-  ``p``, and ``c`` on the overview page to change the column sorting.
-  On a file page, ``r``, ``m``, ``x``, and ``p`` toggle the run, missing,
-  excluded, and partial line markings.  You can navigate the highlighted
-  sections of code by using the ``j`` and ``k`` keys for next and previous.
-  The ``1`` (one) key jumps to the first highlighted section in the file,
-  and ``0`` (zero) scrolls to the top of the file.
-
-- The ``--omit`` and ``--include`` switches now interpret their values more
-  usefully.  If the value starts with a wildcard character, it is used as-is.
-  If it does not, it is interpreted relative to the current directory.
-  Closes `issue 121`_.
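-
-  For example (the patterns are illustrative): the first pattern below starts
-  with a wildcard and is used as-is, while the second is interpreted relative
-  to the current directory::
-
-      # .coveragerc
-      [run]
-      omit =
-          */vendor/*
-          setup.py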
-
-- Partial branch warnings can now be pragma'd away.  The configuration option
-  ``partial_branches`` is a list of regular expressions.  Lines matching any of
-  those expressions will never be marked as a partial branch.  In addition,
-  there's a built-in list of regular expressions marking statements which should
-  never be marked as partial.  This list includes ``while True:``, ``while 1:``,
-  ``if 1:``, and ``if 0:``.
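-
-  A minimal example (the pragma text is only an illustrative regex)::
-
-      # .coveragerc
-      [report]
-      partial_branches =
-          pragma: no branch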
-
-- The ``coverage()`` constructor accepts single strings for the ``omit=`` and
-  ``include=`` arguments, adapting to a common error in programmatic use.
-
-- Modules can now be run directly using ``coverage run -m modulename``, to
-  mirror Python's ``-m`` flag.  Closes `issue 95`_, thanks, Brandon Rhodes.
-
-- ``coverage run`` didn't emulate Python accurately in one small detail: the
-  current directory inserted into ``sys.path`` was relative rather than
-  absolute. This is now fixed.
-
-- HTML reporting is now incremental: a record is kept of the data that
-  produced the HTML reports, and only files whose data has changed will
-  be generated.  This should make most HTML reporting faster.
-
-- Pathological code execution could disable the trace function behind our
-  backs, leading to incorrect code measurement.  Now if this happens,
-  coverage.py will issue a warning, at least alerting you to the problem.
-  Closes `issue 93`_.  Thanks to Marius Gedminas for the idea.
-
-- The C-based trace function now behaves properly when saved and restored
-  with ``sys.gettrace()`` and ``sys.settrace()``.  This fixes `issue 125`_
-  and `issue 123`_.  Thanks, Devin Jeanpierre.
-
-- Source files are now opened with Python 3.2's ``tokenize.open()`` where
-  possible, to get the best handling of Python source files with encodings.
-  Closes `issue 107`_, thanks, Brett Cannon.
-
-- Syntax errors in supposed Python files can now be ignored during reporting
-  with the ``-i`` switch just like other source errors.  Closes `issue 115`_.
-
-- Installation from source now succeeds on machines without a C compiler,
-  closing `issue 80`_.
-
-- Coverage.py can now be run directly from a working tree by specifying
-  the directory name to python:  ``python coverage_py_working_dir run ...``.
-  Thanks, Brett Cannon.
-
-- A little bit of Jython support: `coverage run` can now measure Jython
-  execution by adapting when $py.class files are traced. Thanks, Adi Roiban.
-  Jython still doesn't provide the Python libraries needed to make
-  coverage reporting work, unfortunately.
-
-- Internally, files are now closed explicitly, fixing `issue 104`_.  Thanks,
-  Brett Cannon.
-
-.. _issue 80: https://bitbucket.org/ned/coveragepy/issue/80/is-there-a-duck-typing-way-to-know-we-cant
-.. _issue 93: http://bitbucket.org/ned/coveragepy/issue/93/copying-a-mock-object-breaks-coverage
-.. _issue 95: https://bitbucket.org/ned/coveragepy/issue/95/run-subcommand-should-take-a-module-name
-.. _issue 104: https://bitbucket.org/ned/coveragepy/issue/104/explicitly-close-files
-.. _issue 107: https://bitbucket.org/ned/coveragepy/issue/107/codeparser-not-opening-source-files-with
-.. _issue 115: https://bitbucket.org/ned/coveragepy/issue/115/fail-gracefully-when-reporting-on-file
-.. _issue 121: https://bitbucket.org/ned/coveragepy/issue/121/filename-patterns-are-applied-stupidly
-.. _issue 123: https://bitbucket.org/ned/coveragepy/issue/123/pyeval_settrace-used-in-way-that-breaks
-.. _issue 125: https://bitbucket.org/ned/coveragepy/issue/125/coverage-removes-decoratortoolss-tracing
-
-
-Version 3.4 --- 2010-09-19
---------------------------
-
-- The XML report is now sorted by package name, fixing `issue 88`_.
-
-- Programs that exited with ``sys.exit()`` with no argument weren't handled
-  properly, producing a coverage.py stack trace.  That is now fixed.
-
-.. _issue 88: http://bitbucket.org/ned/coveragepy/issue/88/xml-report-lists-packages-in-random-order
-
-
-Version 3.4b2 --- 2010-09-06
-----------------------------
-
-- Completely unexecuted files can now be included in coverage results, reported
-  as 0% covered.  This only happens if the --source option is specified, since
-  coverage.py needs guidance about where to look for source files.
-
-- The XML report output now properly includes a percentage for branch coverage,
-  fixing `issue 65`_ and `issue 81`_.
-
-- Coverage percentages are now displayed uniformly across reporting methods.
-  Previously, different reports could round percentages differently.  Also,
-  percentages are only reported as 0% or 100% if they are truly 0 or 100, and
-  are rounded otherwise.  Fixes `issue 41`_ and `issue 70`_.
-
-- The precision of reported coverage percentages can be set with the
-  ``[report] precision`` config file setting.  Completes `issue 16`_.
-
-- Threads derived from ``threading.Thread`` with an overridden `run` method
-  would report no coverage for the `run` method.  This is now fixed, closing
-  `issue 85`_.
-
-.. _issue 16: http://bitbucket.org/ned/coveragepy/issue/16/allow-configuration-of-accuracy-of-percentage-totals
-.. _issue 41: http://bitbucket.org/ned/coveragepy/issue/41/report-says-100-when-it-isnt-quite-there
-.. _issue 65: http://bitbucket.org/ned/coveragepy/issue/65/branch-option-not-reported-in-cobertura
-.. _issue 70: http://bitbucket.org/ned/coveragepy/issue/70/text-report-and-html-report-disagree-on-coverage
-.. _issue 81: http://bitbucket.org/ned/coveragepy/issue/81/xml-report-does-not-have-condition-coverage-attribute-for-lines-with-a
-.. _issue 85: http://bitbucket.org/ned/coveragepy/issue/85/threadrun-isnt-measured
-
-
-Version 3.4b1 --- 2010-08-21
-----------------------------
-
-- BACKWARD INCOMPATIBILITY: the ``--omit`` and ``--include`` switches now take
-  file patterns rather than file prefixes, closing `issue 34`_ and `issue 36`_.
-
-- BACKWARD INCOMPATIBILITY: the `omit_prefixes` argument is gone throughout
-  coverage.py, replaced with `omit`, a list of file name patterns suitable for
-  `fnmatch`.  A parallel argument `include` controls what files are included.
-
-- The run command now has a ``--source`` switch, a list of directories or
-  module names.  If provided, coverage.py will only measure execution in those
-  source files.
-
-- Various warnings are printed to stderr for problems encountered during data
-  measurement: if a ``--source`` module has no Python source to measure, or is
-  never encountered at all, or if no data is collected.
-
-- The reporting commands (report, annotate, html, and xml) now have an
-  ``--include`` switch to restrict reporting to modules matching those file
-  patterns, similar to the existing ``--omit`` switch. Thanks, Zooko.
-
-- The run command now supports ``--include`` and ``--omit`` to control what
-  modules it measures. This can speed execution and reduce the amount of data
-  during reporting. Thanks, Zooko.
-
-- Since coverage.py 3.1, using the Python trace function has been slower than
-  it needs to be.  A cache of tracing decisions was broken, but has now been
-  fixed.
-
-- Python 2.7 and 3.2 have introduced new opcodes that are now supported.
-
-- Python files with no statements, for example, empty ``__init__.py`` files,
-  are now reported as having zero statements instead of one.  Fixes `issue 1`_.
-
-- Reports now have a column of missed line counts rather than executed line
-  counts, since developers should focus on reducing the missed lines to zero,
-  rather than increasing the executed lines to varying targets.  Once
-  suggested, this seemed blindingly obvious.
-
-- Line numbers in HTML source pages are clickable, linking directly to that
-  line, which is highlighted on arrival.  Added a link back to the index page
-  at the bottom of each HTML page.
-
-- Programs that call ``os.fork`` will properly collect data from both the child
-  and parent processes.  Use ``coverage run -p`` to get two data files that can
-  be combined with ``coverage combine``.  Fixes `issue 56`_.
-
-- Coverage.py is now runnable as a module: ``python -m coverage``.  Thanks,
-  Brett Cannon.
-
-- When measuring code running in a virtualenv, most of the system library was
-  being measured when it shouldn't have been.  This is now fixed.
-
-- Doctest text files are no longer recorded in the coverage data, since they
-  can't be reported anyway.  Fixes `issue 52`_ and `issue 61`_.
-
-- Jinja HTML templates compile into Python code using the HTML file name,
-  which confused coverage.py.  Now these files are no longer traced, fixing
-  `issue 82`_.
-
-- Source files can have more than one dot in them (foo.test.py), and will be
-  treated properly while reporting.  Fixes `issue 46`_.
-
-- Source files with DOS line endings are now properly tokenized for syntax
-  coloring on non-DOS machines.  Fixes `issue 53`_.
-
-- Unusual code structure that confused exits from methods with exits from
-  classes is now properly analyzed.  See `issue 62`_.
-
-- Asking for an HTML report with no files now shows a nice error message rather
-  than a cryptic failure ('int' object is unsubscriptable). Fixes `issue 59`_.
-
-.. _issue 1:  http://bitbucket.org/ned/coveragepy/issue/1/empty-__init__py-files-are-reported-as-1-executable
-.. _issue 34: http://bitbucket.org/ned/coveragepy/issue/34/enhanced-omit-globbing-handling
-.. _issue 36: http://bitbucket.org/ned/coveragepy/issue/36/provide-regex-style-omit
-.. _issue 46: http://bitbucket.org/ned/coveragepy/issue/46
-.. _issue 53: http://bitbucket.org/ned/coveragepy/issue/53
-.. _issue 52: http://bitbucket.org/ned/coveragepy/issue/52/doctesttestfile-confuses-source-detection
-.. _issue 56: http://bitbucket.org/ned/coveragepy/issue/56
-.. _issue 61: http://bitbucket.org/ned/coveragepy/issue/61/annotate-i-doesnt-work
-.. _issue 62: http://bitbucket.org/ned/coveragepy/issue/62
-.. _issue 59: http://bitbucket.org/ned/coveragepy/issue/59/html-report-fails-with-int-object-is
-.. _issue 82: http://bitbucket.org/ned/coveragepy/issue/82/tokenerror-when-generating-html-report
-
-
-Version 3.3.1 --- 2010-03-06
-----------------------------
-
-- Using `parallel=True` in a .coveragerc file prevented reporting, but now does
-  not, fixing `issue 49`_.
-
-- When running your code with "coverage run", if you call `sys.exit()`,
-  coverage.py will exit with that status code, fixing `issue 50`_.
-
-.. _issue 49: http://bitbucket.org/ned/coveragepy/issue/49
-.. _issue 50: http://bitbucket.org/ned/coveragepy/issue/50
-
-
-Version 3.3 --- 2010-02-24
---------------------------
-
-- Settings are now read from a .coveragerc file.  A specific file can be
-  specified on the command line with --rcfile=FILE.  The name of the file can
-  be programmatically set with the `config_file` argument to the coverage()
-  constructor, or reading a config file can be disabled with
-  `config_file=False`.
-
-- Fixed a problem with nested loops having their branch possibilities
-  mischaracterized: `issue 39`_.
-
-- Added coverage.process_start to enable coverage measurement when Python
-  starts.
-
-- Parallel data file names now have a random number appended to them in
-  addition to the machine name and process id.
-
-- Parallel data files combined with "coverage combine" are deleted after
-  they're combined, to clean up unneeded files.  Fixes `issue 40`_.
-
-- Exceptions thrown from product code run with "coverage run" are now displayed
-  without internal coverage.py frames, so the output is the same as when the
-  code is run without coverage.py.
-
-- The `data_suffix` argument to the coverage constructor is now appended with
-  an added dot rather than simply appended, so that .coveragerc files will not
-  be confused for data files.
-
-- Python source files that don't end with a newline can now be executed, fixing
-  `issue 47`_.
-
-- Added an AUTHORS.txt file.
-
-.. _issue 39: http://bitbucket.org/ned/coveragepy/issue/39
-.. _issue 40: http://bitbucket.org/ned/coveragepy/issue/40
-.. _issue 47: http://bitbucket.org/ned/coveragepy/issue/47
-
-
-Version 3.2 --- 2009-12-05
---------------------------
-
-- Added a ``--version`` option on the command line.
-
-
-Version 3.2b4 --- 2009-12-01
-----------------------------
-
-- Branch coverage improvements:
-
-  - The XML report now includes branch information.
-
-- Click-to-sort HTML report columns are now persisted in a cookie.  Viewing
-  a report will first sort it the way you last had a coverage report sorted.
-  Thanks, `Chris Adams`_.
-
-- On Python 3.x, setuptools has been replaced by `Distribute`_.
-
-.. _Distribute: http://packages.python.org/distribute/
-
-
-Version 3.2b3 --- 2009-11-23
-----------------------------
-
-- Fixed a memory leak in the C tracer that was introduced in 3.2b1.
-
-- Branch coverage improvements:
-
-  - Branches to excluded code are ignored.
-
-- The table of contents in the HTML report is now sortable: click the headers
-  on any column.  Thanks, `Chris Adams`_.
-
-.. _Chris Adams: http://improbable.org/chris/
-
-
-Version 3.2b2 --- 2009-11-19
-----------------------------
-
-- Branch coverage improvements:
-
-  - Classes are no longer incorrectly marked as branches: `issue 32`_.
-
-  - "except" clauses with types are no longer incorrectly marked as branches:
-    `issue 35`_.
-
-- Fixed some problems syntax coloring sources with line continuations and
-  source with tabs: `issue 30`_ and `issue 31`_.
-
-- The --omit option now works much better than before, fixing `issue 14`_ and
-  `issue 33`_.  Thanks, Danek Duvall.
-
-.. _issue 14: http://bitbucket.org/ned/coveragepy/issue/14
-.. _issue 30: http://bitbucket.org/ned/coveragepy/issue/30
-.. _issue 31: http://bitbucket.org/ned/coveragepy/issue/31
-.. _issue 32: http://bitbucket.org/ned/coveragepy/issue/32
-.. _issue 33: http://bitbucket.org/ned/coveragepy/issue/33
-.. _issue 35: http://bitbucket.org/ned/coveragepy/issue/35
-
-
-Version 3.2b1 --- 2009-11-10
-----------------------------
-
-- Branch coverage!
-
-- XML reporting has file paths that let Cobertura find the source code.
-
-- The tracer code has changed; it's a few percent faster.
-
-- Some exceptions reported by the command line interface have been cleaned up
-  so that tracebacks inside coverage.py aren't shown.  Fixes `issue 23`_.
-
-.. _issue 23: http://bitbucket.org/ned/coveragepy/issue/23
-
-
-Version 3.1 --- 2009-10-04
---------------------------
-
-- Source code can now be read from eggs.  Thanks, Ross Lawley.  Fixes
-  `issue 25`_.
-
-.. _issue 25: http://bitbucket.org/ned/coveragepy/issue/25
-
-
-Version 3.1b1 --- 2009-09-27
-----------------------------
-
-- Python 3.1 is now supported.
-
-- Coverage.py has a new command line syntax with sub-commands.  This expands
-  the possibilities for adding features and options in the future.  The old
-  syntax is still supported.  Try "coverage help" to see the new commands.
-  Thanks to Ben Finney for early help.
-
-- Added an experimental "coverage xml" command for producing coverage reports
-  in a Cobertura-compatible XML format.  Thanks, Bill Hart.
-
-- Added the --timid option to enable a simpler slower trace function that works
-  for DecoratorTools projects, including TurboGears.  Fixed `issue 12`_ and
-  `issue 13`_.
-
-- HTML reports show modules from other directories.  Fixed `issue 11`_.
-
-- HTML reports now display syntax-colored Python source.
-
-- Programs that change directory will still write .coverage files in the
-  directory where execution started.  Fixed `issue 24`_.
-
-- Added a "coverage debug" command for getting diagnostic information about the
-  coverage.py installation.
-
-.. _issue 11: http://bitbucket.org/ned/coveragepy/issue/11
-.. _issue 12: http://bitbucket.org/ned/coveragepy/issue/12
-.. _issue 13: http://bitbucket.org/ned/coveragepy/issue/13
-.. _issue 24: http://bitbucket.org/ned/coveragepy/issue/24
-
-
-Version 3.0.1 --- 2009-07-07
-----------------------------
-
-- Removed the recursion limit in the tracer function.  Previously, code that
-  ran more than 500 frames deep would crash. Fixed `issue 9`_.
-
-- Fixed a bizarre problem involving pyexpat, whereby lines following XML parser
-  invocations could be overlooked.  Fixed `issue 10`_.
-
-- On Python 2.3, coverage.py could mis-measure code with exceptions being
-  raised.  This is now fixed.
-
-- The coverage.py code itself will now not be measured by coverage.py, and no
-  coverage.py modules will be mentioned in the nose --with-cover plug-in.
-  Fixed `issue 8`_.
-
-- When running source files, coverage.py now opens them in universal newline
-  mode just like Python does.  This lets it run Windows files on Mac, for
-  example.
-
-.. _issue 9: http://bitbucket.org/ned/coveragepy/issue/9
-.. _issue 10: http://bitbucket.org/ned/coveragepy/issue/10
-.. _issue 8: http://bitbucket.org/ned/coveragepy/issue/8
-
-
-Version 3.0 --- 2009-06-13
---------------------------
-
-- Fixed the way the Python library was ignored.  Too much code was being
-  excluded the old way.
-
-- Tabs are now properly converted in HTML reports.  Previously indentation was
-  lost.  Fixed `issue 6`_.
-
-- Nested modules now get a proper flat_rootname.  Thanks, Christian Heimes.
-
-.. _issue 6: http://bitbucket.org/ned/coveragepy/issue/6
-
-
-Version 3.0b3 --- 2009-05-16
-----------------------------
-
-- Added parameters to coverage.__init__ for options that had been set on the
-  coverage object itself.
-
-- Added clear_exclude() and get_exclude_list() methods for programmatic
-  manipulation of the exclude regexes.
-
-- Added coverage.load() to read previously-saved data from the data file.
-
-- Improved the finding of code files.  For example, .pyc files that have been
-  installed after compiling are now located correctly.  Thanks, Detlev
-  Offenbach.
-
-- When using the object API (that is, constructing a coverage() object), data
-  is no longer saved automatically on process exit.  You can re-enable it with
-  the auto_data=True parameter on the coverage() constructor. The module-level
-  interface still uses automatic saving.
-
-
-Version 3.0b --- 2009-04-30
----------------------------
-
-HTML reporting, and continued refactoring.
-
-- HTML reports and annotation of source files: use the new -b (browser) switch.
-  Thanks to George Song for code, inspiration and guidance.
-
-- Code in the Python standard library is not measured by default.  If you need
-  to measure standard library code, use the -L command-line switch during
-  execution, or the cover_pylib=True argument to the coverage() constructor.
-
-- Source annotation into a directory (-a -d) behaves differently.  The
-  annotated files are named with their hierarchy flattened so that same-named
-  files from different directories no longer collide.  Also, only files in the
-  current tree are included.
-
-- coverage.annotate_file is no longer available.
-
-- Programs executed with -x now behave more as they should, for example,
-  __file__ has the correct value.
-
-- .coverage data files have a new pickle-based format designed for better
-  extensibility.
-
-- Removed the undocumented cache_file argument to coverage.usecache().
-
-
-Version 3.0b1 --- 2009-03-07
-----------------------------
-
-Major overhaul.
-
-- Coverage.py is now a package rather than a module.  Functionality has been
-  split into classes.
-
-- The trace function is implemented in C for speed.  Coverage.py runs are now
-  much faster.  Thanks to David Christian for productive micro-sprints and
-  other encouragement.
-
-- Executable lines are identified by reading the line number tables in the
-  compiled code, removing a great deal of complicated analysis code.
-
-- Precisely which lines are considered executable has changed in some cases.
-  Therefore, your coverage stats may also change slightly.
-
-- The singleton coverage object is only created if the module-level functions
-  are used.  This maintains the old interface while allowing better
-  programmatic use of Coverage.py.
-
-- The minimum supported Python version is 2.3.
-
-
-Version 2.85 --- 2008-09-14
----------------------------
-
-- Add support for finding source files in eggs. Don't check for
-  morf's being instances of ModuleType, instead use duck typing so that
-  pseudo-modules can participate. Thanks, Imri Goldberg.
-
-- Use os.realpath as part of the fixing of file names so that symlinks won't
-  confuse things. Thanks, Patrick Mezard.
-
-
-Version 2.80 --- 2008-05-25
----------------------------
-
-- Open files in rU mode to avoid line ending craziness. Thanks, Edward Loper.
-
-
-Version 2.78 --- 2007-09-30
----------------------------
-
-- Don't try to predict whether a file is Python source based on the extension.
-  Extension-less files are often Python scripts. Instead, simply parse the file
-  and catch the syntax errors. Hat tip to Ben Finney.
-
-
-Version 2.77 --- 2007-07-29
----------------------------
-
-- Better packaging.
-
-
-Version 2.76 --- 2007-07-23
----------------------------
-
-- Now Python 2.5 is *really* fully supported: the body of the new with
-  statement is counted as executable.
-
-
-Version 2.75 --- 2007-07-22
----------------------------
-
-- Python 2.5 now fully supported. The method of dealing with multi-line
-  statements is now less sensitive to the exact line that Python reports during
-  execution. Pass statements are handled specially so that their disappearance
-  during execution won't throw off the measurement.
-
-
-Version 2.7 --- 2007-07-21
---------------------------
-
-- "#pragma: nocover" is excluded by default.
-
-- Properly ignore docstrings and other constant expressions that appear in the
-  middle of a function, a problem reported by Tim Leslie.
-
-- coverage.erase() shouldn't clobber the exclude regex. Change how parallel
-  mode is invoked, and fix erase() so that it erases the cache when called
-  programmatically.
-
-- In reports, ignore code executed from strings, since we can't do anything
-  useful with it anyway.
-
-- Better file handling on Linux, thanks Guillaume Chazarain.
-
-- Better shell support on Windows, thanks Noel O'Boyle.
-
-- Python 2.2 support maintained, thanks Catherine Proulx.
-
-- Minor changes to avoid lint warnings.
-
-
-Version 2.6 --- 2006-08-23
---------------------------
-
-- Applied Joseph Tate's patch for function decorators.
-
-- Applied Sigve Tjora and Mark van der Wal's fixes for argument handling.
-
-- Applied Geoff Bache's parallel mode patch.
-
-- Refactorings to improve testability. Fixes to command-line logic for parallel
-  mode and collect.
-
-
-Version 2.5 --- 2005-12-04
---------------------------
-
-- Call threading.settrace so that all threads are measured. Thanks, Martin
-  Fuzzey.
-
-- Add a file argument to report so that reports can be captured to a different
-  destination.
-
-- Coverage.py can now measure itself.
-
-- Adapted Greg Rogers' patch for using relative file names, and sorting and
-  omitting files to report on.
-
-
-Version 2.2 --- 2004-12-31
---------------------------
-
-- Allow for keyword arguments in the module global functions. Thanks, Allen.
-
-
-Version 2.1 --- 2004-12-14
---------------------------
-
-- Return 'analysis' to its original behavior and add 'analysis2'. Add a global
-  for 'annotate', and factor it, adding 'annotate_file'.
-
-
-Version 2.0 --- 2004-12-12
---------------------------
-
-Significant code changes.
-
-- Finding executable statements has been rewritten so that docstrings and
-  other quirks of Python execution aren't mistakenly identified as missing
-  lines.
-
-- Lines can be excluded from consideration, even entire suites of lines.
-
-- The file system cache of covered lines can be disabled programmatically.
-
-- Modernized the code.
-
-
-Earlier History
----------------
-
-2001-12-04 GDR Created.
-
-2001-12-06 GDR Added command-line interface and source code annotation.
-
-2001-12-09 GDR Moved design and interface to separate documents.
-
-2001-12-10 GDR Open cache file as binary on Windows. Allow simultaneous -e and
--x, or -a and -r.
-
-2001-12-12 GDR Added command-line help. Cache analysis so that it only needs to
-be done once when you specify -a and -r.
-
-2001-12-13 GDR Improved speed while recording. Portable between Python 1.5.2
-and 2.1.1.
-
-2002-01-03 GDR Module-level functions work correctly.
-
-2002-01-07 GDR Update sys.path when running a file with the -x option, so that
-it matches the value the program would get if it were run on its own.
--- a/DebugClients/Python/coverage/doc/LICENSE.txt	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,177 +0,0 @@
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
--- a/DebugClients/Python/coverage/doc/README.rst	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,77 +0,0 @@
-.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-.. For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-===========
-Coverage.py
-===========
-
-Code coverage testing for Python.
-
-|  |license| |versions| |status| |docs|
-|  |ci-status| |win-ci-status| |codecov|
-|  |kit| |format| |downloads|
-
-Coverage.py measures code coverage, typically during test execution. It uses
-the code analysis tools and tracing hooks provided in the Python standard
-library to determine which lines are executable, and which have been executed.
-
-Coverage.py runs on CPython 2.6, 2.7, and 3.3 through 3.6; PyPy 4.0 and 5.1;
-and PyPy3 2.4.
-
-Documentation is on `Read the Docs <http://coverage.readthedocs.io>`_.
-Code repository and issue tracker are on `Bitbucket <http://bitbucket.org/ned/coveragepy>`_,
-with a mirrored repository on `GitHub <https://github.com/nedbat/coveragepy>`_.
-
-**New in 4.1:** much-improved branch coverage.
-
-New in 4.0: ``--concurrency``, plugins for non-Python files, setup.cfg
-support, ``--skip-covered``, HTML filtering, and more than 50 issues closed.
-
-
-Getting Started
----------------
-
-See the `quick start <http://coverage.readthedocs.io/#quick-start>`_
-section of the docs.
-
-
-License
--------
-
-Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0.
-For details, see https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt.
-
-
-.. |ci-status| image:: https://travis-ci.org/nedbat/coveragepy.svg?branch=master
-    :target: https://travis-ci.org/nedbat/coveragepy
-    :alt: Build status
-.. |win-ci-status| image:: https://ci.appveyor.com/api/projects/status/bitbucket/ned/coveragepy?svg=true
-    :target: https://ci.appveyor.com/project/nedbat/coveragepy
-    :alt: Windows build status
-.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
-    :target: http://coverage.readthedocs.io
-    :alt: Documentation
-.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master
-    :target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master
-    :alt: Requirements status
-.. |kit| image:: https://badge.fury.io/py/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: PyPI status
-.. |format| image:: https://img.shields.io/pypi/format/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: Kit format
-.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: Weekly PyPI downloads
-.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: Python versions supported
-.. |status| image:: https://img.shields.io/pypi/status/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: Package stability
-.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
-    :target: https://pypi.python.org/pypi/coverage
-    :alt: License
-.. |codecov| image:: http://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master
-    :target: http://codecov.io/github/nedbat/coveragepy?branch=master
-    :alt: Coverage!
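The README above notes that coverage.py relies on the tracing hooks in the Python standard library to discover which lines were executed. A minimal sketch of that mechanism using only `sys.settrace` (the `tracer`/`demo` names are illustrative, not coverage.py's own; the real tool also uses a C tracer and static analysis):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    """Record every executed line; returning `tracer` keeps tracing
    active inside the called function's frame."""
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def demo():
    x = 1
    if x > 0:
        x += 1
    return x

sys.settrace(tracer)
result = demo()
sys.settrace(None)

print(result)             # 2
print(len(executed) > 0)  # True
```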
--- a/DebugClients/Python/coverage/env.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,35 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Determine facts about the environment."""
-
-import os
-import sys
-
-# Operating systems.
-WINDOWS = sys.platform == "win32"
-LINUX = sys.platform == "linux2"
-
-# Python implementations.
-PYPY = '__pypy__' in sys.builtin_module_names
-
-# Python versions.
-PYVERSION = sys.version_info
-PY2 = PYVERSION < (3, 0)
-PY3 = PYVERSION >= (3, 0)
-
-# Coverage.py specifics.
-
-# Are we using the C-implemented trace function?
-C_TRACER = os.getenv('COVERAGE_TEST_TRACER', 'c') == 'c'
-
-# Are we coverage-measuring ourselves?
-METACOV = os.getenv('COVERAGE_COVERAGE', '') != ''
-
-# Are we running our test suite?
-# Even when running tests, you can use COVERAGE_TESTING=0 to disable the
-# test-specific behavior like contracts.
-TESTING = os.getenv('COVERAGE_TESTING', '') == 'True'
-
-#
-# eflag: FileType = Python2
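The deleted env.py above computes interpreter facts once at import time and reads feature toggles from environment variables. A minimal sketch of the same pattern (`MYTOOL_DEBUG` is a hypothetical variable, not one coverage.py uses):

```python
import os
import sys

# Interpreter facts, computed once at import time.
WINDOWS = sys.platform == "win32"
PYPY = '__pypy__' in sys.builtin_module_names
PY2 = sys.version_info < (3, 0)

# A feature toggle read from the environment, so behavior can be
# flipped without code changes (hypothetical variable name).
DEBUG = os.getenv('MYTOOL_DEBUG', '') == 'True'

print(type(WINDOWS) is bool)  # True
```

Computing these once keeps hot code paths free of repeated `sys.platform` string comparisons.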
--- a/DebugClients/Python/coverage/execfile.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,242 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Execute files of Python code."""
-
-import marshal
-import os
-import sys
-import types
-
-from coverage.backward import BUILTINS
-from coverage.backward import PYC_MAGIC_NUMBER, imp, importlib_util_find_spec
-from coverage.misc import ExceptionDuringRun, NoCode, NoSource, isolate_module
-from coverage.phystokens import compile_unicode
-from coverage.python import get_python_source
-
-os = isolate_module(os)
-
-
-class DummyLoader(object):
-    """A shim for the pep302 __loader__, emulating pkgutil.ImpLoader.
-
-    Currently only implements the .fullname attribute
-    """
-    def __init__(self, fullname, *_args):
-        self.fullname = fullname
-
-
-if importlib_util_find_spec:
-    def find_module(modulename):
-        """Find the module named `modulename`.
-
-        Returns the file path of the module, and the name of the enclosing
-        package.
-        """
-        try:
-            spec = importlib_util_find_spec(modulename)
-        except ImportError as err:
-            raise NoSource(str(err))
-        if not spec:
-            raise NoSource("No module named %r" % (modulename,))
-        pathname = spec.origin
-        packagename = spec.name
-        if pathname.endswith("__init__.py") and not modulename.endswith("__init__"):
-            mod_main = modulename + ".__main__"
-            spec = importlib_util_find_spec(mod_main)
-            if not spec:
-                raise NoSource(
-                    "No module named %s; "
-                    "%r is a package and cannot be directly executed"
-                    % (mod_main, modulename)
-                )
-            pathname = spec.origin
-            packagename = spec.name
-        packagename = packagename.rpartition(".")[0]
-        return pathname, packagename
-else:
-    def find_module(modulename):
-        """Find the module named `modulename`.
-
-        Returns the file path of the module, and the name of the enclosing
-        package.
-        """
-        openfile = None
-        glo, loc = globals(), locals()
-        try:
-            # Search for the module - inside its parent package, if any - using
-            # standard import mechanics.
-            if '.' in modulename:
-                packagename, name = modulename.rsplit('.', 1)
-                package = __import__(packagename, glo, loc, ['__path__'])
-                searchpath = package.__path__
-            else:
-                packagename, name = None, modulename
-                searchpath = None  # "top-level search" in imp.find_module()
-            openfile, pathname, _ = imp.find_module(name, searchpath)
-
-            # Complain if this is a magic non-file module.
-            if openfile is None and pathname is None:
-                raise NoSource(
-                    "module does not live in a file: %r" % modulename
-                    )
-
-            # If `modulename` is actually a package, not a mere module, then we
-            # pretend to be Python 2.7 and try running its __main__.py script.
-            if openfile is None:
-                packagename = modulename
-                name = '__main__'
-                package = __import__(packagename, glo, loc, ['__path__'])
-                searchpath = package.__path__
-                openfile, pathname, _ = imp.find_module(name, searchpath)
-        except ImportError as err:
-            raise NoSource(str(err))
-        finally:
-            if openfile:
-                openfile.close()
-
-        return pathname, packagename
-
-
-def run_python_module(modulename, args):
-    """Run a Python module, as though with ``python -m name args...``.
-
-    `modulename` is the name of the module, possibly a dot-separated name.
-    `args` is the argument array to present as sys.argv, including the first
-    element naming the module being executed.
-
-    """
-    pathname, packagename = find_module(modulename)
-
-    pathname = os.path.abspath(pathname)
-    args[0] = pathname
-    run_python_file(pathname, args, package=packagename, modulename=modulename, path0="")
-
-
-def run_python_file(filename, args, package=None, modulename=None, path0=None):
-    """Run a Python file as if it were the main program on the command line.
-
-    `filename` is the path to the file to execute, it need not be a .py file.
-    `args` is the argument array to present as sys.argv, including the first
-    element naming the file being executed.  `package` is the name of the
-    enclosing package, if any.
-
-    `modulename` is the name of the module the file was run as.
-
-    `path0` is the value to put into sys.path[0].  If it's None, then this
-    function will decide on a value.
-
-    """
-    if modulename is None and sys.version_info >= (3, 3):
-        modulename = '__main__'
-
-    # Create a module to serve as __main__
-    old_main_mod = sys.modules['__main__']
-    main_mod = types.ModuleType('__main__')
-    sys.modules['__main__'] = main_mod
-    main_mod.__file__ = filename
-    if package:
-        main_mod.__package__ = package
-    if modulename:
-        main_mod.__loader__ = DummyLoader(modulename)
-
-    main_mod.__builtins__ = BUILTINS
-
-    # Set sys.argv properly.
-    old_argv = sys.argv
-    sys.argv = args
-
-    if os.path.isdir(filename):
-        # Running a directory means running the __main__.py file in that
-        # directory.
-        my_path0 = filename
-
-        for ext in [".py", ".pyc", ".pyo"]:
-            try_filename = os.path.join(filename, "__main__" + ext)
-            if os.path.exists(try_filename):
-                filename = try_filename
-                break
-        else:
-            raise NoSource("Can't find '__main__' module in '%s'" % filename)
-    else:
-        my_path0 = os.path.abspath(os.path.dirname(filename))
-
-    # Set sys.path correctly.
-    old_path0 = sys.path[0]
-    sys.path[0] = path0 if path0 is not None else my_path0
-
-    try:
-        # Make a code object somehow.
-        if filename.endswith((".pyc", ".pyo")):
-            code = make_code_from_pyc(filename)
-        else:
-            code = make_code_from_py(filename)
-
-        # Execute the code object.
-        try:
-            exec(code, main_mod.__dict__)
-        except SystemExit:
-            # The user called sys.exit().  Just pass it along to the upper
-            # layers, where it will be handled.
-            raise
-        except:
-            # Something went wrong while executing the user code.
-            # Get the exc_info, and pack them into an exception that we can
-            # throw up to the outer loop.  We peel one layer off the traceback
-            # so that the coverage.py code doesn't appear in the final printed
-            # traceback.
-            typ, err, tb = sys.exc_info()
-
-            # PyPy3 weirdness.  If I don't access __context__, then somehow it
-            # is non-None when the exception is reported at the upper layer,
-            # and a nested exception is shown to the user.  This getattr fixes
-            # it somehow? https://bitbucket.org/pypy/pypy/issue/1903
-            getattr(err, '__context__', None)
-
-            raise ExceptionDuringRun(typ, err, tb.tb_next)
-    finally:
-        # Restore the old __main__, argv, and path.
-        sys.modules['__main__'] = old_main_mod
-        sys.argv = old_argv
-        sys.path[0] = old_path0
-
-
-def make_code_from_py(filename):
-    """Get source from `filename` and make a code object of it."""
-    # Open the source file.
-    try:
-        source = get_python_source(filename)
-    except (IOError, NoSource):
-        raise NoSource("No file to run: '%s'" % filename)
-
-    code = compile_unicode(source, filename, "exec")
-    return code
-
-
-def make_code_from_pyc(filename):
-    """Get a code object from a .pyc file."""
-    try:
-        fpyc = open(filename, "rb")
-    except IOError:
-        raise NoCode("No file to run: '%s'" % filename)
-
-    with fpyc:
-        # First four bytes are a version-specific magic number.  It has to
-        # match or we won't run the file.
-        magic = fpyc.read(4)
-        if magic != PYC_MAGIC_NUMBER:
-            raise NoCode("Bad magic number in .pyc file")
-
-        # Skip the junk in the header that we don't need.
-        fpyc.read(4)            # Skip the moddate.
-        if sys.version_info >= (3, 3):
-            # 3.3 added another long to the header (size), skip it.
-            fpyc.read(4)
-
-        # The rest of the file is the code object we want.
-        code = marshal.load(fpyc)
-
-    return code
-
-#
-# eflag: FileType = Python2
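The `run_python_file()` function above installs a fresh `__main__` module, swaps in the target's `sys.argv`, executes the compiled code, and restores interpreter state afterwards. A simplified sketch of that sequence (no .pyc, package, or sys.path handling; `run_source_as_main` is an illustrative name):

```python
import sys
import types

def run_source_as_main(source, filename, argv):
    """Execute `source` as if it were the main program, restoring
    the interpreter's __main__ and argv afterwards."""
    old_main = sys.modules['__main__']
    old_argv = sys.argv

    # Create a module to serve as __main__, as run_python_file() does.
    main_mod = types.ModuleType('__main__')
    main_mod.__file__ = filename
    sys.modules['__main__'] = main_mod
    sys.argv = list(argv)
    try:
        code = compile(source, filename, "exec")
        exec(code, main_mod.__dict__)
        return main_mod
    finally:
        # Restore the old __main__ and argv unconditionally.
        sys.modules['__main__'] = old_main
        sys.argv = old_argv

mod = run_source_as_main("answer = 6 * 7\n", "<demo>", ["demo"])
print(mod.answer)  # 42
```

The `try`/`finally` mirrors the original: state is restored even if the executed code raises.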
--- a/DebugClients/Python/coverage/files.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,381 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""File wrangling."""
-
-import fnmatch
-import ntpath
-import os
-import os.path
-import posixpath
-import re
-import sys
-
-from coverage import env
-from coverage.backward import unicode_class
-from coverage.misc import contract, CoverageException, join_regex, isolate_module
-
-
-os = isolate_module(os)
-
-
-def set_relative_directory():
-    """Set the directory that `relative_filename` will be relative to."""
-    global RELATIVE_DIR, CANONICAL_FILENAME_CACHE
-
-    # The absolute path to our current directory.
-    RELATIVE_DIR = os.path.normcase(abs_file(os.curdir) + os.sep)
-
-    # Cache of results of calling the canonical_filename() method, to
-    # avoid duplicating work.
-    CANONICAL_FILENAME_CACHE = {}
-
-
-def relative_directory():
-    """Return the directory that `relative_filename` is relative to."""
-    return RELATIVE_DIR
-
-
-@contract(returns='unicode')
-def relative_filename(filename):
-    """Return the relative form of `filename`.
-
-    The file name will be relative to the directory that was current when
-    `set_relative_directory` was called.
-
-    """
-    fnorm = os.path.normcase(filename)
-    if fnorm.startswith(RELATIVE_DIR):
-        filename = filename[len(RELATIVE_DIR):]
-    return unicode_filename(filename)
-
-
-@contract(returns='unicode')
-def canonical_filename(filename):
-    """Return a canonical file name for `filename`.
-
-    An absolute path with no redundant components and normalized case.
-
-    """
-    if filename not in CANONICAL_FILENAME_CACHE:
-        if not os.path.isabs(filename):
-            for path in [os.curdir] + sys.path:
-                if path is None:
-                    continue
-                f = os.path.join(path, filename)
-                if os.path.exists(f):
-                    filename = f
-                    break
-        cf = abs_file(filename)
-        CANONICAL_FILENAME_CACHE[filename] = cf
-    return CANONICAL_FILENAME_CACHE[filename]
-
-
-def flat_rootname(filename):
-    """A base for a flat file name to correspond to this file.
-
-    Useful for writing files about the code where you want all the files in
-    the same directory, but need to differentiate same-named files from
-    different directories.
-
-    For example, the file a/b/c.py will map to 'a_b_c_py'.
-
-    """
-    name = ntpath.splitdrive(filename)[1]
-    return re.sub(r"[\\/.:]", "_", name)
-
-
-if env.WINDOWS:
-
-    _ACTUAL_PATH_CACHE = {}
-    _ACTUAL_PATH_LIST_CACHE = {}
-
-    def actual_path(path):
-        """Get the actual path of `path`, including the correct case."""
-        if env.PY2 and isinstance(path, unicode_class):
-            path = path.encode(sys.getfilesystemencoding())
-        if path in _ACTUAL_PATH_CACHE:
-            return _ACTUAL_PATH_CACHE[path]
-
-        head, tail = os.path.split(path)
-        if not tail:
-            # This means head is the drive spec: normalize it.
-            actpath = head.upper()
-        elif not head:
-            actpath = tail
-        else:
-            head = actual_path(head)
-            if head in _ACTUAL_PATH_LIST_CACHE:
-                files = _ACTUAL_PATH_LIST_CACHE[head]
-            else:
-                try:
-                    files = os.listdir(head)
-                except OSError:
-                    files = []
-                _ACTUAL_PATH_LIST_CACHE[head] = files
-            normtail = os.path.normcase(tail)
-            for f in files:
-                if os.path.normcase(f) == normtail:
-                    tail = f
-                    break
-            actpath = os.path.join(head, tail)
-        _ACTUAL_PATH_CACHE[path] = actpath
-        return actpath
-
-else:
-    def actual_path(filename):
-        """The actual path for non-Windows platforms."""
-        return filename
-
-
-if env.PY2:
-    @contract(returns='unicode')
-    def unicode_filename(filename):
-        """Return a Unicode version of `filename`."""
-        if isinstance(filename, str):
-            encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
-            filename = filename.decode(encoding, "replace")
-        return filename
-else:
-    @contract(filename='unicode', returns='unicode')
-    def unicode_filename(filename):
-        """Return a Unicode version of `filename`."""
-        return filename
-
-
-@contract(returns='unicode')
-def abs_file(filename):
-    """Return the absolute normalized form of `filename`."""
-    path = os.path.expandvars(os.path.expanduser(filename))
-    path = os.path.abspath(os.path.realpath(path))
-    path = actual_path(path)
-    path = unicode_filename(path)
-    return path
-
-
-RELATIVE_DIR = None
-CANONICAL_FILENAME_CACHE = None
-set_relative_directory()
-
-
-def isabs_anywhere(filename):
-    """Is `filename` an absolute path on any OS?"""
-    return ntpath.isabs(filename) or posixpath.isabs(filename)
-
-
-def prep_patterns(patterns):
-    """Prepare the file patterns for use in a `FnmatchMatcher`.
-
-    If a pattern starts with a wildcard, it is used as a pattern
-    as-is.  If it does not start with a wildcard, then it is made
-    absolute with the current directory.
-
-    If `patterns` is None, an empty list is returned.
-
-    """
-    prepped = []
-    for p in patterns or []:
-        if p.startswith(("*", "?")):
-            prepped.append(p)
-        else:
-            prepped.append(abs_file(p))
-    return prepped
-
-
-class TreeMatcher(object):
-    """A matcher for files in a tree."""
-    def __init__(self, directories):
-        self.dirs = list(directories)
-
-    def __repr__(self):
-        return "<TreeMatcher %r>" % self.dirs
-
-    def info(self):
-        """A list of strings for displaying when dumping state."""
-        return self.dirs
-
-    def match(self, fpath):
-        """Does `fpath` indicate a file in one of our trees?"""
-        for d in self.dirs:
-            if fpath.startswith(d):
-                if fpath == d:
-                    # This is the same file!
-                    return True
-                if fpath[len(d)] == os.sep:
-                    # This is a file in the directory
-                    return True
-        return False
-
-
-class ModuleMatcher(object):
-    """A matcher for modules in a tree."""
-    def __init__(self, module_names):
-        self.modules = list(module_names)
-
-    def __repr__(self):
-        return "<ModuleMatcher %r>" % (self.modules)
-
-    def info(self):
-        """A list of strings for displaying when dumping state."""
-        return self.modules
-
-    def match(self, module_name):
-        """Does `module_name` indicate a module in one of our packages?"""
-        if not module_name:
-            return False
-
-        for m in self.modules:
-            if module_name.startswith(m):
-                if module_name == m:
-                    return True
-                if module_name[len(m)] == '.':
-                    # This is a module in the package
-                    return True
-
-        return False
-
-
-class FnmatchMatcher(object):
-    """A matcher for files by file name pattern."""
-    def __init__(self, pats):
-        self.pats = pats[:]
-        # fnmatch is platform-specific. On Windows, it does the Windows thing
-        # of treating / and \ as equivalent. But on other platforms, we need to
-        # take care of that ourselves.
-        fnpats = (fnmatch.translate(p) for p in pats)
-        fnpats = (p.replace(r"\/", r"[\\/]") for p in fnpats)
-        if env.WINDOWS:
-            # Windows is also case-insensitive.  BTW: the regex docs say that
-            # flags like (?i) have to be at the beginning, but fnmatch puts
-            # them at the end, and having two there seems to work fine.
-            fnpats = (p + "(?i)" for p in fnpats)
-        self.re = re.compile(join_regex(fnpats))
-
-    def __repr__(self):
-        return "<FnmatchMatcher %r>" % self.pats
-
-    def info(self):
-        """A list of strings for displaying when dumping state."""
-        return self.pats
-
-    def match(self, fpath):
-        """Does `fpath` match one of our file name patterns?"""
-        return self.re.match(fpath) is not None
-
-
-def sep(s):
-    """Find the path separator used in this string, or os.sep if none."""
-    sep_match = re.search(r"[\\/]", s)
-    if sep_match:
-        the_sep = sep_match.group(0)
-    else:
-        the_sep = os.sep
-    return the_sep
-
-
-class PathAliases(object):
-    """A collection of aliases for paths.
-
-    When combining data files from remote machines, often the paths to source
-    code are different, for example, due to OS differences, or because of
-    serialized checkouts on continuous integration machines.
-
-    A `PathAliases` object tracks a list of pattern/result pairs, and can
-    map a path through those aliases to produce a unified path.
-
-    """
-    def __init__(self):
-        self.aliases = []
-
-    def add(self, pattern, result):
-        """Add the `pattern`/`result` pair to the list of aliases.
-
-        `pattern` is an `fnmatch`-style pattern.  `result` is a simple
-        string.  When mapping paths, if a path starts with a match against
-        `pattern`, then that match is replaced with `result`.  This models
-        isomorphic source trees being rooted at different places on two
-        different machines.
-
-        `pattern` can't end with a wildcard component, since that would
-        match an entire tree, and not just its root.
-
-        """
-        # The pattern can't end with a wildcard component.
-        pattern = pattern.rstrip(r"\/")
-        if pattern.endswith("*"):
-            raise CoverageException("Pattern must not end with wildcards.")
-        pattern_sep = sep(pattern)
-
-        # The pattern is meant to match a filepath.  Let's make it absolute
-        # unless it already is, or is meant to match any prefix.
-        if not pattern.startswith('*') and not isabs_anywhere(pattern):
-            pattern = abs_file(pattern)
-        pattern += pattern_sep
-
-        # Make a regex from the pattern.  fnmatch always adds a \Z to
-        # match the whole string, which we don't want.
-        regex_pat = fnmatch.translate(pattern).replace(r'\Z(', '(')
-
-        # We want */a/b.py to match on Windows too, so change slash to match
-        # either separator.
-        regex_pat = regex_pat.replace(r"\/", r"[\\/]")
-        # We want case-insensitive matching, so add that flag.
-        regex = re.compile(r"(?i)" + regex_pat)
-
-        # Normalize the result: it must end with a path separator.
-        result_sep = sep(result)
-        result = result.rstrip(r"\/") + result_sep
-        self.aliases.append((regex, result, pattern_sep, result_sep))
-
-    def map(self, path):
-        """Map `path` through the aliases.
-
-        `path` is checked against all of the patterns.  The first pattern to
-        match is used to replace the root of the path with the result root.
-        Only one pattern is ever used.  If no patterns match, `path` is
-        returned unchanged.
-
-        The separator style in the result is made to match that of the result
-        in the alias.
-
-        Returns the mapped path.  If a mapping has happened, this is a
-        canonical path.  If no mapping has happened, it is the original value
-        of `path` unchanged.
-
-        """
-        for regex, result, pattern_sep, result_sep in self.aliases:
-            m = regex.match(path)
-            if m:
-                new = path.replace(m.group(0), result)
-                if pattern_sep != result_sep:
-                    new = new.replace(pattern_sep, result_sep)
-                new = canonical_filename(new)
-                return new
-        return path
-
-
-def find_python_files(dirname):
-    """Yield all of the importable Python files in `dirname`, recursively.
-
-    To be importable, the files have to be in a directory with a __init__.py,
-    except for `dirname` itself, which isn't required to have one.  The
-    assumption is that `dirname` was specified directly, so the user knows
-    best, but sub-directories are checked for a __init__.py to be sure we only
-    find the importable files.
-
-    """
-    for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)):
-        if i > 0 and '__init__.py' not in filenames:
-            # If a directory doesn't have __init__.py, then it isn't
-            # importable and neither are its files
-            del dirnames[:]
-            continue
-        for filename in filenames:
-            # We're only interested in files that look like reasonable Python
-            # files: Must end with .py or .pyw, and must not have certain funny
-            # characters that probably mean they are editor junk.
-            if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename):
-                yield os.path.join(dirpath, filename)
-
-#
-# eflag: FileType = Python2
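`FnmatchMatcher` above compiles fnmatch-style patterns into one regex, rewriting the path separator so that `/` in a pattern also matches `\` on Windows. A self-contained sketch of that trick (`pattern_to_regex` is an illustrative name; older Pythons escape the slash in `fnmatch.translate()` output, which is why the original replaces `r"\/"`, and this sketch normalizes both forms):

```python
import fnmatch
import re

def pattern_to_regex(pat):
    """Compile an fnmatch-style pattern into a regex that accepts
    either / or \\ as the path separator."""
    rx = fnmatch.translate(pat)
    # Normalize escaped slashes from older Pythons to bare ones,
    # then let either separator match.
    rx = rx.replace(r"\/", "/").replace("/", r"[\\/]")
    return re.compile(rx)

rx = pattern_to_regex("*/tests/*.py")
print(bool(rx.match("project/tests/test_x.py")))    # True
print(bool(rx.match("project\\tests\\test_x.py")))  # True
print(bool(rx.match("project/src/mod.py")))         # False
```

Joining many such regexes with `|`, as `join_regex()` does in the original, lets one `re.match` call test a path against every pattern at once.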
--- a/DebugClients/Python/coverage/html.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,438 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""HTML reporting for coverage.py."""
-
-import datetime
-import json
-import os
-import shutil
-
-import coverage
-from coverage import env
-from coverage.backward import iitems
-from coverage.files import flat_rootname
-from coverage.misc import CoverageException, Hasher, isolate_module
-from coverage.report import Reporter
-from coverage.results import Numbers
-from coverage.templite import Templite
-
-os = isolate_module(os)
-
-
-# Static files are looked for in a list of places.
-STATIC_PATH = [
-    # The place Debian puts system Javascript libraries.
-    "/usr/share/javascript",
-
-    # Our htmlfiles directory.
-    os.path.join(os.path.dirname(__file__), "htmlfiles"),
-]
-
-
-def data_filename(fname, pkgdir=""):
-    """Return the path to a data file of ours.
-
-    The file is searched for on `STATIC_PATH`, and the first place it's found,
-    is returned.
-
-    Each directory in `STATIC_PATH` is searched as-is, and also, if `pkgdir`
-    is provided, at that sub-directory.
-
-    """
-    tried = []
-    for static_dir in STATIC_PATH:
-        static_filename = os.path.join(static_dir, fname)
-        if os.path.exists(static_filename):
-            return static_filename
-        else:
-            tried.append(static_filename)
-        if pkgdir:
-            static_filename = os.path.join(static_dir, pkgdir, fname)
-            if os.path.exists(static_filename):
-                return static_filename
-            else:
-                tried.append(static_filename)
-    raise CoverageException(
-        "Couldn't find static file %r from %r, tried: %r" % (fname, os.getcwd(), tried)
-    )
-
-
-def read_data(fname):
-    """Return the contents of a data file of ours."""
-    with open(data_filename(fname)) as data_file:
-        return data_file.read()
-
-
-def write_html(fname, html):
-    """Write `html` to `fname`, properly encoded."""
-    with open(fname, "wb") as fout:
-        fout.write(html.encode('ascii', 'xmlcharrefreplace'))
-
-
-class HtmlReporter(Reporter):
-    """HTML reporting."""
-
-    # These files will be copied from the htmlfiles directory to the output
-    # directory.
-    STATIC_FILES = [
-        ("style.css", ""),
-        ("jquery.min.js", "jquery"),
-        ("jquery.debounce.min.js", "jquery-debounce"),
-        ("jquery.hotkeys.js", "jquery-hotkeys"),
-        ("jquery.isonscreen.js", "jquery-isonscreen"),
-        ("jquery.tablesorter.min.js", "jquery-tablesorter"),
-        ("coverage_html.js", ""),
-        ("keybd_closed.png", ""),
-        ("keybd_open.png", ""),
-    ]
-
-    def __init__(self, cov, config):
-        super(HtmlReporter, self).__init__(cov, config)
-        self.directory = None
-        title = self.config.html_title
-        if env.PY2:
-            title = title.decode("utf8")
-        self.template_globals = {
-            'escape': escape,
-            'pair': pair,
-            'title': title,
-            '__url__': coverage.__url__,
-            '__version__': coverage.__version__,
-        }
-        self.source_tmpl = Templite(read_data("pyfile.html"), self.template_globals)
-
-        self.coverage = cov
-
-        self.files = []
-        self.has_arcs = self.coverage.data.has_arcs()
-        self.status = HtmlStatus()
-        self.extra_css = None
-        self.totals = Numbers()
-        self.time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M')
-
-    def report(self, morfs):
-        """Generate an HTML report for `morfs`.
-
-        `morfs` is a list of modules or file names.
-
-        """
-        assert self.config.html_dir, "must give a directory for html reporting"
-
-        # Read the status data.
-        self.status.read(self.config.html_dir)
-
-        # Check that this run used the same settings as the last run.
-        m = Hasher()
-        m.update(self.config)
-        these_settings = m.hexdigest()
-        if self.status.settings_hash() != these_settings:
-            self.status.reset()
-            self.status.set_settings_hash(these_settings)
-
-        # The user may have extra CSS they want copied.
-        if self.config.extra_css:
-            self.extra_css = os.path.basename(self.config.extra_css)
-
-        # Process all the files.
-        self.report_files(self.html_file, morfs, self.config.html_dir)
-
-        if not self.files:
-            raise CoverageException("No data to report.")
-
-        # Write the index file.
-        self.index_file()
-
-        self.make_local_static_report_files()
-        return self.totals.n_statements and self.totals.pc_covered
-
-    def make_local_static_report_files(self):
-        """Make local instances of static files for HTML report."""
-        # The files we provide must always be copied.
-        for static, pkgdir in self.STATIC_FILES:
-            shutil.copyfile(
-                data_filename(static, pkgdir),
-                os.path.join(self.directory, static)
-            )
-
-        # The user may have extra CSS they want copied.
-        if self.extra_css:
-            shutil.copyfile(
-                self.config.extra_css,
-                os.path.join(self.directory, self.extra_css)
-            )
-
-    def file_hash(self, source, fr):
-        """Compute a hash that changes if the file needs to be re-reported."""
-        m = Hasher()
-        m.update(source)
-        self.coverage.data.add_to_hash(fr.filename, m)
-        return m.hexdigest()
-
-    def html_file(self, fr, analysis):
-        """Generate an HTML file for one source file."""
-        source = fr.source()
-
-        # Find out if the file on disk is already correct.
-        rootname = flat_rootname(fr.relative_filename())
-        this_hash = self.file_hash(source.encode('utf-8'), fr)
-        that_hash = self.status.file_hash(rootname)
-        if this_hash == that_hash:
-            # Nothing has changed to require the file to be reported again.
-            self.files.append(self.status.index_info(rootname))
-            return
-
-        self.status.set_file_hash(rootname, this_hash)
-
-        # Get the numbers for this file.
-        nums = analysis.numbers
-
-        if self.has_arcs:
-            missing_branch_arcs = analysis.missing_branch_arcs()
-            arcs_executed = analysis.arcs_executed()
-
-        # These classes determine which lines are highlighted by default.
-        c_run = "run hide_run"
-        c_exc = "exc"
-        c_mis = "mis"
-        c_par = "par " + c_run
-
-        lines = []
-
-        for lineno, line in enumerate(fr.source_token_lines(), start=1):
-            # Figure out how to mark this line.
-            line_class = []
-            annotate_html = ""
-            annotate_long = ""
-            if lineno in analysis.statements:
-                line_class.append("stm")
-            if lineno in analysis.excluded:
-                line_class.append(c_exc)
-            elif lineno in analysis.missing:
-                line_class.append(c_mis)
-            elif self.has_arcs and lineno in missing_branch_arcs:
-                line_class.append(c_par)
-                shorts = []
-                longs = []
-                for b in missing_branch_arcs[lineno]:
-                    if b < 0:
-                        shorts.append("exit")
-                    else:
-                        shorts.append(b)
-                    longs.append(fr.missing_arc_description(lineno, b, arcs_executed))
-                # 202F is NARROW NO-BREAK SPACE.
-                # 219B is RIGHTWARDS ARROW WITH STROKE.
-                short_fmt = "%s&#x202F;&#x219B;&#x202F;%s"
-                annotate_html = ",&nbsp;&nbsp; ".join(short_fmt % (lineno, d) for d in shorts)
-
-                if len(longs) == 1:
-                    annotate_long = longs[0]
-                else:
-                    annotate_long = "%d missed branches: %s" % (
-                        len(longs),
-                        ", ".join("%d) %s" % (num, ann_long)
-                            for num, ann_long in enumerate(longs, start=1)),
-                    )
-            elif lineno in analysis.statements:
-                line_class.append(c_run)
-
-            # Build the HTML for the line.
-            html = []
-            for tok_type, tok_text in line:
-                if tok_type == "ws":
-                    html.append(escape(tok_text))
-                else:
-                    tok_html = escape(tok_text) or '&nbsp;'
-                    html.append(
-                        '<span class="%s">%s</span>' % (tok_type, tok_html)
-                    )
-
-            lines.append({
-                'html': ''.join(html),
-                'number': lineno,
-                'class': ' '.join(line_class) or "pln",
-                'annotate': annotate_html,
-                'annotate_long': annotate_long,
-            })
-
-        # Write the HTML page for this file.
-        html = self.source_tmpl.render({
-            'c_exc': c_exc,
-            'c_mis': c_mis,
-            'c_par': c_par,
-            'c_run': c_run,
-            'has_arcs': self.has_arcs,
-            'extra_css': self.extra_css,
-            'fr': fr,
-            'nums': nums,
-            'lines': lines,
-            'time_stamp': self.time_stamp,
-        })
-
-        html_filename = rootname + ".html"
-        html_path = os.path.join(self.directory, html_filename)
-        write_html(html_path, html)
-
-        # Save this file's information for the index file.
-        index_info = {
-            'nums': nums,
-            'html_filename': html_filename,
-            'relative_filename': fr.relative_filename(),
-        }
-        self.files.append(index_info)
-        self.status.set_index_info(rootname, index_info)
-
-    def index_file(self):
-        """Write the index.html file for this report."""
-        index_tmpl = Templite(read_data("index.html"), self.template_globals)
-
-        self.totals = sum(f['nums'] for f in self.files)
-
-        html = index_tmpl.render({
-            'has_arcs': self.has_arcs,
-            'extra_css': self.extra_css,
-            'files': self.files,
-            'totals': self.totals,
-            'time_stamp': self.time_stamp,
-        })
-
-        write_html(os.path.join(self.directory, "index.html"), html)
-
-        # Write the latest hashes for next time.
-        self.status.write(self.directory)
-
-
-class HtmlStatus(object):
-    """The status information we keep to support incremental reporting."""
-
-    STATUS_FILE = "status.json"
-    STATUS_FORMAT = 1
-
-    #           pylint: disable=wrong-spelling-in-comment,useless-suppression
-    #  The data looks like:
-    #
-    #  {
-    #      'format': 1,
-    #      'settings': '540ee119c15d52a68a53fe6f0897346d',
-    #      'version': '4.0a1',
-    #      'files': {
-    #          'cogapp___init__': {
-    #              'hash': 'e45581a5b48f879f301c0f30bf77a50c',
-    #              'index': {
-    #                  'html_filename': 'cogapp___init__.html',
-    #                  'name': 'cogapp/__init__',
-    #                  'nums': <coverage.results.Numbers object at 0x10ab7ed0>,
-    #              }
-    #          },
-    #          ...
-    #          'cogapp_whiteutils': {
-    #              'hash': '8504bb427fc488c4176809ded0277d51',
-    #              'index': {
-    #                  'html_filename': 'cogapp_whiteutils.html',
-    #                  'name': 'cogapp/whiteutils',
-    #                  'nums': <coverage.results.Numbers object at 0x10ab7d90>,
-    #              }
-    #          },
-    #      },
-    #  }
-
-    def __init__(self):
-        self.reset()
-
-    def reset(self):
-        """Initialize to empty."""
-        self.settings = ''
-        self.files = {}
-
-    def read(self, directory):
-        """Read the last status in `directory`."""
-        usable = False
-        try:
-            status_file = os.path.join(directory, self.STATUS_FILE)
-            with open(status_file, "r") as fstatus:
-                status = json.load(fstatus)
-        except (IOError, ValueError):
-            usable = False
-        else:
-            usable = True
-            if status['format'] != self.STATUS_FORMAT:
-                usable = False
-            elif status['version'] != coverage.__version__:
-                usable = False
-
-        if usable:
-            self.files = {}
-            for filename, fileinfo in iitems(status['files']):
-                fileinfo['index']['nums'] = Numbers(*fileinfo['index']['nums'])
-                self.files[filename] = fileinfo
-            self.settings = status['settings']
-        else:
-            self.reset()
-
-    def write(self, directory):
-        """Write the current status to `directory`."""
-        status_file = os.path.join(directory, self.STATUS_FILE)
-        files = {}
-        for filename, fileinfo in iitems(self.files):
-            fileinfo['index']['nums'] = fileinfo['index']['nums'].init_args()
-            files[filename] = fileinfo
-
-        status = {
-            'format': self.STATUS_FORMAT,
-            'version': coverage.__version__,
-            'settings': self.settings,
-            'files': files,
-        }
-        with open(status_file, "w") as fout:
-            json.dump(status, fout)
-
-        # Older versions of ShiningPanda look for the old name, status.dat.
-        # Accommodate them if we are running under Jenkins.
-        # https://issues.jenkins-ci.org/browse/JENKINS-28428
-        if "JENKINS_URL" in os.environ:
-            with open(os.path.join(directory, "status.dat"), "w") as dat:
-                dat.write("https://issues.jenkins-ci.org/browse/JENKINS-28428\n")
-
-    def settings_hash(self):
-        """Get the hash of the coverage.py settings."""
-        return self.settings
-
-    def set_settings_hash(self, settings):
-        """Set the hash of the coverage.py settings."""
-        self.settings = settings
-
-    def file_hash(self, fname):
-        """Get the hash of `fname`'s contents."""
-        return self.files.get(fname, {}).get('hash', '')
-
-    def set_file_hash(self, fname, val):
-        """Set the hash of `fname`'s contents."""
-        self.files.setdefault(fname, {})['hash'] = val
-
-    def index_info(self, fname):
-        """Get the information for index.html for `fname`."""
-        return self.files.get(fname, {}).get('index', {})
-
-    def set_index_info(self, fname, info):
-        """Set the information for index.html for `fname`."""
-        self.files.setdefault(fname, {})['index'] = info
-
-
-# Helpers for templates and generating HTML
-
-def escape(t):
-    """HTML-escape the text in `t`.
-
-    This is only suitable for HTML text, not attributes.
-
-    """
-    # Convert HTML special chars into HTML entities.
-    return t.replace("&", "&amp;").replace("<", "&lt;")
-
-
-def pair(ratio):
-    """Format a pair of numbers so JavaScript can read them in an attribute."""
-    return "%s %s" % ratio
-
-#
-# eflag: FileType = Python2
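The two template helpers at the end of the deleted html.py are easy to exercise in isolation. Note that the order of the `replace` calls in `escape` matters: `&` must be converted first, or the `&lt;` produced for `<` would itself be re-escaped. A minimal standalone sketch (the sample strings are illustrative, not from the report templates):

```python
def escape(t):
    """HTML-escape the text in `t` (suitable for text content, not attributes)."""
    # "&" is replaced first so the "&lt;" emitted for "<" is not double-escaped.
    return t.replace("&", "&amp;").replace("<", "&lt;")


def pair(ratio):
    """Format a (numerator, denominator) pair for a JavaScript-readable attribute."""
    return "%s %s" % ratio


print(escape("if a < b & b < c:"))   # if a &lt; b &amp; b &lt; c:
print(pair((12, 34)))                # 12 34
```

Escaping only `&` and `<` is enough here because the output lands in HTML element text, where `>` and quotes have no special meaning.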
--- a/DebugClients/Python/coverage/misc.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,259 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Miscellaneous stuff for coverage.py."""
-
-import errno
-import hashlib
-import inspect
-import locale
-import os
-import sys
-import types
-
-from coverage import env
-from coverage.backward import string_class, to_bytes, unicode_class
-
-ISOLATED_MODULES = {}
-
-
-def isolate_module(mod):
-    """Copy a module so that we are isolated from aggressive mocking.
-
-    If a test suite mocks os.path.exists (for example), and then we need to use
-    it during the test, everything will get tangled up if we use their mock.
-    Making a copy of the module when we import it will isolate coverage.py from
-    those complications.
-    """
-    if mod not in ISOLATED_MODULES:
-        new_mod = types.ModuleType(mod.__name__)
-        ISOLATED_MODULES[mod] = new_mod
-        for name in dir(mod):
-            value = getattr(mod, name)
-            if isinstance(value, types.ModuleType):
-                value = isolate_module(value)
-            setattr(new_mod, name, value)
-    return ISOLATED_MODULES[mod]
-
-os = isolate_module(os)
-
-
-# Use PyContracts for assertion testing on parameters and returns, but only if
-# we are running our own test suite.
-if env.TESTING:
-    from contracts import contract              # pylint: disable=unused-import
-    from contracts import new_contract as raw_new_contract
-
-    def new_contract(*args, **kwargs):
-        """A proxy for contracts.new_contract that doesn't mind happening twice."""
-        try:
-            return raw_new_contract(*args, **kwargs)
-        except ValueError:
-            # During meta-coverage, this module is imported twice, and
-            # PyContracts doesn't like redefining contracts. It's OK.
-            pass
-
-    # Define contract words that PyContract doesn't have.
-    new_contract('bytes', lambda v: isinstance(v, bytes))
-    if env.PY3:
-        new_contract('unicode', lambda v: isinstance(v, unicode_class))
-else:                                           # pragma: not covered
-    # We aren't using real PyContracts, so just define a no-op decorator as a
-    # stunt double.
-    def contract(**unused):
-        """Dummy no-op implementation of `contract`."""
-        return lambda func: func
-
-    def new_contract(*args_unused, **kwargs_unused):
-        """Dummy no-op implementation of `new_contract`."""
-        pass
-
-
-def nice_pair(pair):
-    """Make a nice string representation of a pair of numbers.
-
-    If the numbers are equal, just return the number, otherwise return the pair
-    with a dash between them, indicating the range.
-
-    """
-    start, end = pair
-    if start == end:
-        return "%d" % start
-    else:
-        return "%d-%d" % (start, end)
-
-
-def format_lines(statements, lines):
-    """Nicely format a list of line numbers.
-
-    Format a list of line numbers for printing by coalescing groups of lines as
-    long as the lines represent consecutive statements.  This will coalesce
-    even if there are gaps between statements.
-
-    For example, if `statements` is [1,2,3,4,5,10,11,12,13,14] and
-    `lines` is [1,2,5,10,11,13,14] then the result will be "1-2, 5-11, 13-14".
-
-    """
-    pairs = []
-    i = 0
-    j = 0
-    start = None
-    statements = sorted(statements)
-    lines = sorted(lines)
-    while i < len(statements) and j < len(lines):
-        if statements[i] == lines[j]:
-            if start is None:
-                start = lines[j]
-            end = lines[j]
-            j += 1
-        elif start:
-            pairs.append((start, end))
-            start = None
-        i += 1
-    if start:
-        pairs.append((start, end))
-    ret = ', '.join(map(nice_pair, pairs))
-    return ret
-
-
-def expensive(fn):
-    """A decorator to indicate that a method shouldn't be called more than once.
-
-    Normally, this does nothing.  During testing, this raises an exception if
-    called more than once.
-
-    """
-    if env.TESTING:
-        attr = "_once_" + fn.__name__
-
-        def _wrapped(self):
-            """Inner function that checks the cache."""
-            if hasattr(self, attr):
-                raise Exception("Shouldn't have called %s more than once" % fn.__name__)
-            setattr(self, attr, True)
-            return fn(self)
-        return _wrapped
-    else:
-        return fn
-
-
-def bool_or_none(b):
-    """Return bool(b), but preserve None."""
-    if b is None:
-        return None
-    else:
-        return bool(b)
-
-
-def join_regex(regexes):
-    """Combine a list of regexes into one that matches any of them."""
-    return "|".join("(?:%s)" % r for r in regexes)
-
-
-def file_be_gone(path):
-    """Remove a file, and don't get annoyed if it doesn't exist."""
-    try:
-        os.remove(path)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
-
-
-def output_encoding(outfile=None):
-    """Determine the encoding to use for output written to `outfile` or stdout."""
-    if outfile is None:
-        outfile = sys.stdout
-    encoding = (
-        getattr(outfile, "encoding", None) or
-        getattr(sys.__stdout__, "encoding", None) or
-        locale.getpreferredencoding()
-    )
-    return encoding
-
-
-class Hasher(object):
-    """Hashes Python data into md5."""
-    def __init__(self):
-        self.md5 = hashlib.md5()
-
-    def update(self, v):
-        """Add `v` to the hash, recursively if needed."""
-        self.md5.update(to_bytes(str(type(v))))
-        if isinstance(v, string_class):
-            self.md5.update(to_bytes(v))
-        elif isinstance(v, bytes):
-            self.md5.update(v)
-        elif v is None:
-            pass
-        elif isinstance(v, (int, float)):
-            self.md5.update(to_bytes(str(v)))
-        elif isinstance(v, (tuple, list)):
-            for e in v:
-                self.update(e)
-        elif isinstance(v, dict):
-            keys = v.keys()
-            for k in sorted(keys):
-                self.update(k)
-                self.update(v[k])
-        else:
-            for k in dir(v):
-                if k.startswith('__'):
-                    continue
-                a = getattr(v, k)
-                if inspect.isroutine(a):
-                    continue
-                self.update(k)
-                self.update(a)
-
-    def hexdigest(self):
-        """Retrieve the hex digest of the hash."""
-        return self.md5.hexdigest()
-
-
-def _needs_to_implement(that, func_name):
-    """Helper to raise NotImplementedError in interface stubs."""
-    if hasattr(that, "_coverage_plugin_name"):
-        thing = "Plugin"
-        name = that._coverage_plugin_name
-    else:
-        thing = "Class"
-        klass = that.__class__
-        name = "{klass.__module__}.{klass.__name__}".format(klass=klass)
-
-    raise NotImplementedError(
-        "{thing} {name!r} needs to implement {func_name}()".format(
-            thing=thing, name=name, func_name=func_name
-            )
-        )
-
-
-class CoverageException(Exception):
-    """An exception specific to coverage.py."""
-    pass
-
-
-class NoSource(CoverageException):
-    """We couldn't find the source for a module."""
-    pass
-
-
-class NoCode(NoSource):
-    """We couldn't find any code at all."""
-    pass
-
-
-class NotPython(CoverageException):
-    """A source file turned out not to be parsable Python."""
-    pass
-
-
-class ExceptionDuringRun(CoverageException):
-    """An exception happened while running customer code.
-
-    Construct it with three arguments, the values from `sys.exc_info`.
-
-    """
-    pass
-
-#
-# eflag: FileType = Python2
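The coalescing behavior of `format_lines` in the deleted misc.py is subtle: a run of missing lines merges across line numbers that are not statements at all (5 through 11 below), but breaks where an intervening statement was executed (3 and 4). A condensed, behavior-equivalent sketch reproducing the docstring's own example:

```python
def nice_pair(pair):
    """Render a (start, end) pair as "start" or "start-end"."""
    start, end = pair
    return "%d" % start if start == end else "%d-%d" % (start, end)


def format_lines(statements, lines):
    """Coalesce `lines` into ranges, merging across gaps in `statements`."""
    pairs = []
    i = j = 0
    start = end = None
    statements = sorted(statements)
    lines = sorted(lines)
    while i < len(statements) and j < len(lines):
        if statements[i] == lines[j]:
            # This statement is one of the lines to report: extend the run.
            if start is None:
                start = lines[j]
            end = lines[j]
            j += 1
        elif start:
            # An executed statement interrupts the run: close it out.
            pairs.append((start, end))
            start = None
        i += 1
    if start:
        pairs.append((start, end))
    return ', '.join(map(nice_pair, pairs))


# 5, 10 and 11 coalesce into "5-11" because lines 6-9 are not statements;
# 2 and 5 do not merge because statements 3 and 4 sit between them.
print(format_lines([1, 2, 3, 4, 5, 10, 11, 12, 13, 14],
                   [1, 2, 5, 10, 11, 13, 14]))   # 1-2, 5-11, 13-14
```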
--- a/DebugClients/Python/coverage/monkey.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,83 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Monkey-patching to make coverage.py work right in some cases."""
-
-import multiprocessing
-import multiprocessing.process
-import sys
-
-# An attribute that will be set on modules to indicate that they have been
-# monkey-patched.
-PATCHED_MARKER = "_coverage$patched"
-
-if sys.version_info >= (3, 4):
-    klass = multiprocessing.process.BaseProcess
-else:
-    klass = multiprocessing.Process
-
-original_bootstrap = klass._bootstrap
-
-
-class ProcessWithCoverage(klass):
-    """A replacement for multiprocess.Process that starts coverage."""
-    def _bootstrap(self):
-        """Wrapper around _bootstrap to start coverage."""
-        from coverage import Coverage
-        cov = Coverage(data_suffix=True)
-        cov.start()
-        try:
-            return original_bootstrap(self)
-        finally:
-            cov.stop()
-            cov.save()
-
-
-class Stowaway(object):
-    """An object to pickle, so when it is unpickled, it can apply the monkey-patch."""
-    def __getstate__(self):
-        return {}
-
-    def __setstate__(self, state_unused):
-        patch_multiprocessing()
-
-
-def patch_multiprocessing():
-    """Monkey-patch the multiprocessing module.
-
-    This enables coverage measurement of processes started by multiprocessing.
-    This is wildly experimental!
-
-    """
-    if hasattr(multiprocessing, PATCHED_MARKER):
-        return
-
-    if sys.version_info >= (3, 4):
-        klass._bootstrap = ProcessWithCoverage._bootstrap
-    else:
-        multiprocessing.Process = ProcessWithCoverage
-
-    # When spawning processes rather than forking them, we have no state in the
-    # new process.  We sneak in there with a Stowaway: we stuff one of our own
-    # objects into the data that gets pickled and sent to the sub-process. When
-    # the Stowaway is unpickled, its __setstate__ method is called, which
-    # re-applies the monkey-patch.
-    # Windows only spawns, so this is needed to keep Windows working.
-    try:
-        from multiprocessing import spawn           # pylint: disable=no-name-in-module
-        original_get_preparation_data = spawn.get_preparation_data
-    except (ImportError, AttributeError):
-        pass
-    else:
-        def get_preparation_data_with_stowaway(name):
-            """Get the original preparation data, and also insert our stowaway."""
-            d = original_get_preparation_data(name)
-            d['stowaway'] = Stowaway()
-            return d
-
-        spawn.get_preparation_data = get_preparation_data_with_stowaway
-
-    setattr(multiprocessing, PATCHED_MARKER, True)
-
-#
-# eflag: FileType = Python2
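The Stowaway trick in the deleted monkey.py relies on a pickle protocol detail: `__setstate__` runs on the *unpickling* side, i.e. in the spawned child process. A self-contained sketch of just that mechanism, with a module-level flag standing in for `patch_multiprocessing()` (the `FLAGS` dict and the truthy state are illustrative assumptions, not the real API; the real Stowaway's `__getstate__` returns `{}`):

```python
import pickle

FLAGS = {"patched": False}


class Stowaway(object):
    """Carries no data; exists only to run code when unpickled."""
    def __getstate__(self):
        # The pickle docs note that __setstate__ is skipped when the
        # pickled state is falsy, so this sketch returns a non-empty dict.
        return {"apply_patch": True}

    def __setstate__(self, state_unused):
        FLAGS["patched"] = True   # stand-in for patch_multiprocessing()


# What spawn's preparation data would carry to the child process.
payload = {"data": [1, 2, 3], "stowaway": Stowaway()}
blob = pickle.dumps(payload)

FLAGS["patched"] = False   # simulate a fresh child with no patch applied
pickle.loads(blob)         # unpickling triggers Stowaway.__setstate__
print(FLAGS["patched"])    # True
```

This is why the patch survives a spawn (as on Windows) even though the child shares no memory with the parent: the patch re-applies itself as a side effect of deserializing the preparation data.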
--- a/DebugClients/Python/coverage/parser.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,1034 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Code parsing for coverage.py."""
-
-import ast
-import collections
-import os
-import re
-import token
-import tokenize
-
-from coverage import env
-from coverage.backward import range    # pylint: disable=redefined-builtin
-from coverage.backward import bytes_to_ints, string_class
-from coverage.bytecode import CodeObjects
-from coverage.debug import short_stack
-from coverage.misc import contract, new_contract, nice_pair, join_regex
-from coverage.misc import CoverageException, NoSource, NotPython
-from coverage.phystokens import compile_unicode, generate_tokens, neuter_encoding_declaration
-
-
-class PythonParser(object):
-    """Parse code to find executable lines, excluded lines, etc.
-
-    This information is all based on static analysis: no code execution is
-    involved.
-
-    """
-    @contract(text='unicode|None')
-    def __init__(self, text=None, filename=None, exclude=None):
-        """
-        Source can be provided as `text`, the text itself, or `filename`, from
-        which the text will be read.  Excluded lines are those that match
-        `exclude`, a regex.
-
-        """
-        assert text or filename, "PythonParser needs either text or filename"
-        self.filename = filename or "<code>"
-        self.text = text
-        if not self.text:
-            from coverage.python import get_python_source
-            try:
-                self.text = get_python_source(self.filename)
-            except IOError as err:
-                raise NoSource(
-                    "No source for code: '%s': %s" % (self.filename, err)
-                )
-
-        self.exclude = exclude
-
-        # The text lines of the parsed code.
-        self.lines = self.text.split('\n')
-
-        # The normalized line numbers of the statements in the code. Exclusions
-        # are taken into account, and statements are adjusted to their first
-        # lines.
-        self.statements = set()
-
-        # The normalized line numbers of the excluded lines in the code,
-        # adjusted to their first lines.
-        self.excluded = set()
-
-        # The raw_* attributes are only used in this class, and in
-        # lab/parser.py to show how this class is working.
-
-        # The line numbers that start statements, as reported by the line
-        # number table in the bytecode.
-        self.raw_statements = set()
-
-        # The raw line numbers of excluded lines of code, as marked by pragmas.
-        self.raw_excluded = set()
-
-        # The line numbers of class and function definitions.
-        self.raw_classdefs = set()
-
-        # The line numbers of docstring lines.
-        self.raw_docstrings = set()
-
-        # Internal detail, used by lab/parser.py.
-        self.show_tokens = False
-
-        # A dict mapping line numbers to lexical statement starts for
-        # multi-line statements.
-        self._multiline = {}
-
-        # Lazily-created ByteParser, arc data, and missing arc descriptions.
-        self._byte_parser = None
-        self._all_arcs = None
-        self._missing_arc_fragments = None
-
-    @property
-    def byte_parser(self):
-        """Create a ByteParser on demand."""
-        if not self._byte_parser:
-            self._byte_parser = ByteParser(self.text, filename=self.filename)
-        return self._byte_parser
-
-    def lines_matching(self, *regexes):
-        """Find the lines matching one of a list of regexes.
-
-        Returns a set of line numbers, the lines that contain a match for one
-        of the regexes in `regexes`.  The entire line needn't match, just a
-        part of it.
-
-        """
-        combined = join_regex(regexes)
-        if env.PY2:
-            combined = combined.decode("utf8")
-        regex_c = re.compile(combined)
-        matches = set()
-        for i, ltext in enumerate(self.lines, start=1):
-            if regex_c.search(ltext):
-                matches.add(i)
-        return matches
-
-    def _raw_parse(self):
-        """Parse the source to find the interesting facts about its lines.
-
-        A handful of attributes are updated.
-
-        """
-        # Find lines which match an exclusion pattern.
-        if self.exclude:
-            self.raw_excluded = self.lines_matching(self.exclude)
-
-        # Tokenize, to find excluded suites, to find docstrings, and to find
-        # multi-line statements.
-        indent = 0
-        exclude_indent = 0
-        excluding = False
-        excluding_decorators = False
-        prev_toktype = token.INDENT
-        first_line = None
-        empty = True
-        first_on_line = True
-
-        tokgen = generate_tokens(self.text)
-        for toktype, ttext, (slineno, _), (elineno, _), ltext in tokgen:
-            if self.show_tokens:                # pragma: not covered
-                print("%10s %5s %-20r %r" % (
-                    tokenize.tok_name.get(toktype, toktype),
-                    nice_pair((slineno, elineno)), ttext, ltext
-                ))
-            if toktype == token.INDENT:
-                indent += 1
-            elif toktype == token.DEDENT:
-                indent -= 1
-            elif toktype == token.NAME:
-                if ttext == 'class':
-                    # Class definitions look like branches in the bytecode, so
-                    # we need to exclude them.  The simplest way is to note the
-                    # lines with the 'class' keyword.
-                    self.raw_classdefs.add(slineno)
-            elif toktype == token.OP:
-                if ttext == ':':
-                    should_exclude = (elineno in self.raw_excluded) or excluding_decorators
-                    if not excluding and should_exclude:
-                        # Start excluding a suite.  We trigger off of the colon
-                        # token so that the #pragma comment will be recognized on
-                        # the same line as the colon.
-                        self.raw_excluded.add(elineno)
-                        exclude_indent = indent
-                        excluding = True
-                        excluding_decorators = False
-                elif ttext == '@' and first_on_line:
-                    # A decorator.
-                    if elineno in self.raw_excluded:
-                        excluding_decorators = True
-                    if excluding_decorators:
-                        self.raw_excluded.add(elineno)
-            elif toktype == token.STRING and prev_toktype == token.INDENT:
-                # Strings that are first on an indented line are docstrings.
-                # (a trick from trace.py in the stdlib.) This works for
-                # 99.9999% of cases.  For the rest (!) see:
-                # http://stackoverflow.com/questions/1769332/x/1769794#1769794
-                self.raw_docstrings.update(range(slineno, elineno+1))
-            elif toktype == token.NEWLINE:
-                if first_line is not None and elineno != first_line:
-                    # We're at the end of a line, and we've ended on a
-                    # different line than the first line of the statement,
-                    # so record a multi-line range.
-                    for l in range(first_line, elineno+1):
-                        self._multiline[l] = first_line
-                first_line = None
-                first_on_line = True
-
-            if ttext.strip() and toktype != tokenize.COMMENT:
-                # A non-whitespace token.
-                empty = False
-                if first_line is None:
-                    # The token is not whitespace, and is the first in a
-                    # statement.
-                    first_line = slineno
-                    # Check whether to end an excluded suite.
-                    if excluding and indent <= exclude_indent:
-                        excluding = False
-                    if excluding:
-                        self.raw_excluded.add(elineno)
-                    first_on_line = False
-
-            prev_toktype = toktype
-
-        # Find the starts of the executable statements.
-        if not empty:
-            self.raw_statements.update(self.byte_parser._find_statements())
-
-    def first_line(self, line):
-        """Return the first line number of the statement including `line`."""
-        return self._multiline.get(line, line)
-
-    def first_lines(self, lines):
-        """Map the line numbers in `lines` to the correct first line of the
-        statement.
-
-        Returns a set of the first lines.
-
-        """
-        return set(self.first_line(l) for l in lines)
-
-    def translate_lines(self, lines):
-        """Implement `FileReporter.translate_lines`."""
-        return self.first_lines(lines)
-
-    def translate_arcs(self, arcs):
-        """Implement `FileReporter.translate_arcs`."""
-        return [(self.first_line(a), self.first_line(b)) for (a, b) in arcs]
-
-    def parse_source(self):
-        """Parse source text to find executable lines, excluded lines, etc.
-
-        Sets the .excluded and .statements attributes, normalized to the first
-        line of multi-line statements.
-
-        """
-        try:
-            self._raw_parse()
-        except (tokenize.TokenError, IndentationError) as err:
-            if hasattr(err, "lineno"):
-                lineno = err.lineno         # IndentationError
-            else:
-                lineno = err.args[1][0]     # TokenError
-            raise NotPython(
-                u"Couldn't parse '%s' as Python source: '%s' at line %d" % (
-                    self.filename, err.args[0], lineno
-                )
-            )
-
-        self.excluded = self.first_lines(self.raw_excluded)
-
-        ignore = self.excluded | self.raw_docstrings
-        starts = self.raw_statements - ignore
-        self.statements = self.first_lines(starts) - ignore
-
-    def arcs(self):
-        """Get information about the arcs available in the code.
-
-        Returns a set of line number pairs.  Line numbers have been normalized
-        to the first line of multi-line statements.
-
-        """
-        if self._all_arcs is None:
-            self._analyze_ast()
-        return self._all_arcs
-
-    def _analyze_ast(self):
-        """Run the AstArcAnalyzer and save its results.
-
-        `_all_arcs` is the set of arcs in the code.
-
-        """
-        aaa = AstArcAnalyzer(self.text, self.raw_statements, self._multiline)
-        aaa.analyze()
-
-        self._all_arcs = set()
-        for l1, l2 in aaa.arcs:
-            fl1 = self.first_line(l1)
-            fl2 = self.first_line(l2)
-            if fl1 != fl2:
-                self._all_arcs.add((fl1, fl2))
-
-        self._missing_arc_fragments = aaa.missing_arc_fragments
-
-    def exit_counts(self):
-        """Get a count of exits from that each line.
-
-        Excluded lines are excluded.
-
-        """
-        exit_counts = collections.defaultdict(int)
-        for l1, l2 in self.arcs():
-            if l1 < 0:
-                # Don't ever report -1 as a line number
-                continue
-            if l1 in self.excluded:
-                # Don't report excluded lines as line numbers.
-                continue
-            if l2 in self.excluded:
-                # Arcs to excluded lines shouldn't count.
-                continue
-            exit_counts[l1] += 1
-
-        # Class definitions have one extra exit, so remove one for each:
-        for l in self.raw_classdefs:
-            # Ensure key is there: class definitions can include excluded lines.
-            if l in exit_counts:
-                exit_counts[l] -= 1
-
-        return exit_counts
-
-    def missing_arc_description(self, start, end, executed_arcs=None):
-        """Provide an English sentence describing a missing arc."""
-        if self._missing_arc_fragments is None:
-            self._analyze_ast()
-
-        actual_start = start
-
-        if (
-            executed_arcs and
-            end < 0 and end == -start and
-            (end, start) not in executed_arcs and
-            (end, start) in self._missing_arc_fragments
-        ):
-            # It's a one-line callable, and we never even started it,
-            # and we have a message about not starting it.
-            start, end = end, start
-
-        fragment_pairs = self._missing_arc_fragments.get((start, end), [(None, None)])
-
-        msgs = []
-        for fragment_pair in fragment_pairs:
-            smsg, emsg = fragment_pair
-
-            if emsg is None:
-                if end < 0:
-                    # Hmm, maybe we have a one-line callable, let's check.
-                    if (-end, end) in self._missing_arc_fragments:
-                        return self.missing_arc_description(-end, end)
-                    emsg = "didn't jump to the function exit"
-                else:
-                    emsg = "didn't jump to line {lineno}"
-            emsg = emsg.format(lineno=end)
-
-            msg = "line {start} {emsg}".format(start=actual_start, emsg=emsg)
-            if smsg is not None:
-                msg += ", because {smsg}".format(smsg=smsg.format(lineno=actual_start))
-
-            msgs.append(msg)
-
-        return " or ".join(msgs)
-
-
-class ByteParser(object):
-    """Parse bytecode to understand the structure of code."""
-
-    @contract(text='unicode')
-    def __init__(self, text, code=None, filename=None):
-        self.text = text
-        if code:
-            self.code = code
-        else:
-            try:
-                self.code = compile_unicode(text, filename, "exec")
-            except SyntaxError as synerr:
-                raise NotPython(
-                    u"Couldn't parse '%s' as Python source: '%s' at line %d" % (
-                        filename, synerr.msg, synerr.lineno
-                    )
-                )
-
-        # Alternative Python implementations don't always provide all the
-        # attributes on code objects that we need to do the analysis.
-        for attr in ['co_lnotab', 'co_firstlineno', 'co_consts']:
-            if not hasattr(self.code, attr):
-                raise CoverageException(
-                    "This implementation of Python doesn't support code analysis.\n"
-                    "Run coverage.py under CPython for this command."
-                )
-
-    def child_parsers(self):
-        """Iterate over all the code objects nested within this one.
-
-        The iteration includes `self` as its first value.
-
-        """
-        children = CodeObjects(self.code)
-        return (ByteParser(self.text, code=c) for c in children)
-
-    def _bytes_lines(self):
-        """Map byte offsets to line numbers in `code`.
-
-        Uses co_lnotab described in Python/compile.c to map byte offsets to
-        line numbers.  Produces a sequence: (b0, l0), (b1, l1), ...
-
-        Only byte offsets that correspond to line numbers are included in the
-        results.
-
-        """
-        # Adapted from dis.py in the standard library.
-        byte_increments = bytes_to_ints(self.code.co_lnotab[0::2])
-        line_increments = bytes_to_ints(self.code.co_lnotab[1::2])
-
-        last_line_num = None
-        line_num = self.code.co_firstlineno
-        byte_num = 0
-        for byte_incr, line_incr in zip(byte_increments, line_increments):
-            if byte_incr:
-                if line_num != last_line_num:
-                    yield (byte_num, line_num)
-                    last_line_num = line_num
-                byte_num += byte_incr
-            line_num += line_incr
-        if line_num != last_line_num:
-            yield (byte_num, line_num)
-
-    def _find_statements(self):
-        """Find the statements in `self.code`.
-
-        Produce a sequence of line numbers that start statements.  Recurses
-        into all code objects reachable from `self.code`.
-
-        """
-        for bp in self.child_parsers():
-            # Get all of the lineno information from this code.
-            for _, l in bp._bytes_lines():
-                yield l
-
-
-#
-# AST analysis
-#
-
-class LoopBlock(object):
-    """A block on the block stack representing a `for` or `while` loop."""
-    def __init__(self, start):
-        self.start = start
-        self.break_exits = set()
-
-
-class FunctionBlock(object):
-    """A block on the block stack representing a function definition."""
-    def __init__(self, start, name):
-        self.start = start
-        self.name = name
-
-
-class TryBlock(object):
-    """A block on the block stack representing a `try` block."""
-    def __init__(self, handler_start=None, final_start=None):
-        self.handler_start = handler_start
-        self.final_start = final_start
-        self.break_from = set()
-        self.continue_from = set()
-        self.return_from = set()
-        self.raise_from = set()
-
-
-class ArcStart(collections.namedtuple("Arc", "lineno, cause")):
-    """The information needed to start an arc.
-
-    `lineno` is the line number the arc starts from.  `cause` is a fragment
-    used as the startmsg for AstArcAnalyzer.missing_arc_fragments.
-
-    """
-    def __new__(cls, lineno, cause=None):
-        return super(ArcStart, cls).__new__(cls, lineno, cause)
-
-
-# Define contract words that PyContracts doesn't have.
-# ArcStarts is for a list or set of ArcStart objects.
-new_contract('ArcStarts', lambda seq: all(isinstance(x, ArcStart) for x in seq))
-
-
-class AstArcAnalyzer(object):
-    """Analyze source text with an AST to find executable code paths."""
-
-    @contract(text='unicode', statements=set)
-    def __init__(self, text, statements, multiline):
-        self.root_node = ast.parse(neuter_encoding_declaration(text))
-        # TODO: I think this is happening in too many places.
-        self.statements = set(multiline.get(l, l) for l in statements)
-        self.multiline = multiline
-
-        if int(os.environ.get("COVERAGE_ASTDUMP", 0)):      # pragma: debugging
-            # Dump the AST so that failing tests have helpful output.
-            print("Statements: {}".format(self.statements))
-            print("Multiline map: {}".format(self.multiline))
-            ast_dump(self.root_node)
-
-        self.arcs = set()
-
-        # A map from arc pairs to a pair of sentence fragments: (startmsg, endmsg).
-        # For an arc from line 17, they should be usable like:
-        #    "Line 17 {endmsg}, because {startmsg}"
-        self.missing_arc_fragments = collections.defaultdict(list)
-        self.block_stack = []
-
-        self.debug = bool(int(os.environ.get("COVERAGE_TRACK_ARCS", 0)))
-
-    def analyze(self):
-        """Examine the AST tree from `root_node` to determine possible arcs.
-
-        This sets the `arcs` attribute to be a set of (from, to) line number
-        pairs.
-
-        """
-        for node in ast.walk(self.root_node):
-            node_name = node.__class__.__name__
-            code_object_handler = getattr(self, "_code_object__" + node_name, None)
-            if code_object_handler is not None:
-                code_object_handler(node)
-
-    def add_arc(self, start, end, smsg=None, emsg=None):
-        """Add an arc, including message fragments to use if it is missing."""
-        if self.debug:
-            print("\nAdding arc: ({}, {}): {!r}, {!r}".format(start, end, smsg, emsg))
-            print(short_stack(limit=6))
-        self.arcs.add((start, end))
-
-        if smsg is not None or emsg is not None:
-            self.missing_arc_fragments[(start, end)].append((smsg, emsg))
-
-    def nearest_blocks(self):
-        """Yield the blocks in nearest-to-farthest order."""
-        return reversed(self.block_stack)
-
-    @contract(returns=int)
-    def line_for_node(self, node):
-        """What is the right line number to use for this node?
-
-        This dispatches to _line__Node functions where needed.
-
-        """
-        node_name = node.__class__.__name__
-        handler = getattr(self, "_line__" + node_name, None)
-        if handler is not None:
-            return handler(node)
-        else:
-            return node.lineno
-
-    def _line__Assign(self, node):
-        return self.line_for_node(node.value)
-
-    def _line__Dict(self, node):
-        # Python 3.5 changed how dict literals are made.
-        if env.PYVERSION >= (3, 5) and node.keys:
-            if node.keys[0] is not None:
-                return node.keys[0].lineno
-            else:
-                # Unpacked dict literals `{**{'a':1}}` have None as the key,
-                # use the value in that case.
-                return node.values[0].lineno
-        else:
-            return node.lineno
-
-    def _line__List(self, node):
-        if node.elts:
-            return self.line_for_node(node.elts[0])
-        else:
-            return node.lineno
-
-    def _line__Module(self, node):
-        if node.body:
-            return self.line_for_node(node.body[0])
-        else:
-            # Modules have no line number; they always start at 1.
-            return 1
-
-    OK_TO_DEFAULT = set([
-        "Assign", "Assert", "AugAssign", "Delete", "Exec", "Expr", "Global",
-        "Import", "ImportFrom", "Nonlocal", "Pass", "Print",
-    ])
-
-    @contract(returns='ArcStarts')
-    def add_arcs(self, node):
-        """Add the arcs for `node`.
-
-        Return a set of ArcStarts, exits from this node to the next.
-
-        """
-        node_name = node.__class__.__name__
-        handler = getattr(self, "_handle__" + node_name, None)
-        if handler is not None:
-            return handler(node)
-
-        if 0:
-            node_name = node.__class__.__name__
-            if node_name not in self.OK_TO_DEFAULT:
-                print("*** Unhandled: {0}".format(node))
-        return set([ArcStart(self.line_for_node(node), cause=None)])
-
-    @contract(returns='ArcStarts')
-    def add_body_arcs(self, body, from_start=None, prev_starts=None):
-        """Add arcs for the body of a compound statement.
-
-        `body` is the body node.  `from_start` is a single `ArcStart` that can
-        be the previous line in flow before this body.  `prev_starts` is a set
-        of ArcStarts that can be the previous line.  Only one of them should be
-        given.
-
-        Returns a set of ArcStarts, the exits from this body.
-
-        """
-        if prev_starts is None:
-            prev_starts = set([from_start])
-        for body_node in body:
-            lineno = self.line_for_node(body_node)
-            first_line = self.multiline.get(lineno, lineno)
-            if first_line not in self.statements:
-                continue
-            for prev_start in prev_starts:
-                self.add_arc(prev_start.lineno, lineno, prev_start.cause)
-            prev_starts = self.add_arcs(body_node)
-        return prev_starts
-
-    def is_constant_expr(self, node):
-        """Is this a compile-time constant?"""
-        node_name = node.__class__.__name__
-        if node_name in ["NameConstant", "Num"]:
-            return True
-        elif node_name == "Name":
-            if env.PY3 and node.id in ["True", "False", "None"]:
-                return True
-        return False
-
-    # tests to write:
-    # TODO: while EXPR:
-    # TODO: while False:
-    # TODO: listcomps hidden deep in other expressions
-    # TODO: listcomps hidden in lists: x = [[i for i in range(10)]]
-    # TODO: nested function definitions
-
-    @contract(exits='ArcStarts')
-    def process_break_exits(self, exits):
-        """Add arcs due to jumps from `exits` being breaks."""
-        for block in self.nearest_blocks():
-            if isinstance(block, LoopBlock):
-                block.break_exits.update(exits)
-                break
-            elif isinstance(block, TryBlock) and block.final_start is not None:
-                block.break_from.update(exits)
-                break
-
-    @contract(exits='ArcStarts')
-    def process_continue_exits(self, exits):
-        """Add arcs due to jumps from `exits` being continues."""
-        for block in self.nearest_blocks():
-            if isinstance(block, LoopBlock):
-                for xit in exits:
-                    self.add_arc(xit.lineno, block.start, xit.cause)
-                break
-            elif isinstance(block, TryBlock) and block.final_start is not None:
-                block.continue_from.update(exits)
-                break
-
-    @contract(exits='ArcStarts')
-    def process_raise_exits(self, exits):
-        """Add arcs due to jumps from `exits` being raises."""
-        for block in self.nearest_blocks():
-            if isinstance(block, TryBlock):
-                if block.handler_start is not None:
-                    for xit in exits:
-                        self.add_arc(xit.lineno, block.handler_start, xit.cause)
-                    break
-                elif block.final_start is not None:
-                    block.raise_from.update(exits)
-                    break
-            elif isinstance(block, FunctionBlock):
-                for xit in exits:
-                    self.add_arc(
-                        xit.lineno, -block.start, xit.cause,
-                        "didn't except from function '{0}'".format(block.name),
-                    )
-                break
-
-    @contract(exits='ArcStarts')
-    def process_return_exits(self, exits):
-        """Add arcs due to jumps from `exits` being returns."""
-        for block in self.nearest_blocks():
-            if isinstance(block, TryBlock) and block.final_start is not None:
-                block.return_from.update(exits)
-                break
-            elif isinstance(block, FunctionBlock):
-                for xit in exits:
-                    self.add_arc(
-                        xit.lineno, -block.start, xit.cause,
-                        "didn't return from function '{0}'".format(block.name),
-                    )
-                break
-
-    ## Handlers
-
-    @contract(returns='ArcStarts')
-    def _handle__Break(self, node):
-        here = self.line_for_node(node)
-        break_start = ArcStart(here, cause="the break on line {lineno} wasn't executed")
-        self.process_break_exits([break_start])
-        return set()
-
-    @contract(returns='ArcStarts')
-    def _handle_decorated(self, node):
-        """Add arcs for things that can be decorated (classes and functions)."""
-        last = self.line_for_node(node)
-        if node.decorator_list:
-            for dec_node in node.decorator_list:
-                dec_start = self.line_for_node(dec_node)
-                if dec_start != last:
-                    self.add_arc(last, dec_start)
-                    last = dec_start
-            # The definition line may have been missed, but we should have it
-            # in `self.statements`.  For some constructs, `line_for_node` is
-            # not what we'd think of as the first line in the statement, so map
-            # it to the first one.
-            body_start = self.line_for_node(node.body[0])
-            body_start = self.multiline.get(body_start, body_start)
-            for lineno in range(last+1, body_start):
-                if lineno in self.statements:
-                    self.add_arc(last, lineno)
-                    last = lineno
-        # The body is handled in collect_arcs.
-        return set([ArcStart(last, cause=None)])
-
-    _handle__ClassDef = _handle_decorated
-
-    @contract(returns='ArcStarts')
-    def _handle__Continue(self, node):
-        here = self.line_for_node(node)
-        continue_start = ArcStart(here, cause="the continue on line {lineno} wasn't executed")
-        self.process_continue_exits([continue_start])
-        return set()
-
-    @contract(returns='ArcStarts')
-    def _handle__For(self, node):
-        start = self.line_for_node(node.iter)
-        self.block_stack.append(LoopBlock(start=start))
-        from_start = ArcStart(start, cause="the loop on line {lineno} never started")
-        exits = self.add_body_arcs(node.body, from_start=from_start)
-        # Any exit from the body will go back to the top of the loop.
-        for xit in exits:
-            self.add_arc(xit.lineno, start, xit.cause)
-        my_block = self.block_stack.pop()
-        exits = my_block.break_exits
-        from_start = ArcStart(start, cause="the loop on line {lineno} didn't complete")
-        if node.orelse:
-            else_exits = self.add_body_arcs(node.orelse, from_start=from_start)
-            exits |= else_exits
-        else:
-            # no else clause: exit from the for line.
-            exits.add(from_start)
-        return exits
-
-    _handle__AsyncFor = _handle__For
-
-    _handle__FunctionDef = _handle_decorated
-    _handle__AsyncFunctionDef = _handle_decorated
-
-    @contract(returns='ArcStarts')
-    def _handle__If(self, node):
-        start = self.line_for_node(node.test)
-        from_start = ArcStart(start, cause="the condition on line {lineno} was never true")
-        exits = self.add_body_arcs(node.body, from_start=from_start)
-        from_start = ArcStart(start, cause="the condition on line {lineno} was never false")
-        exits |= self.add_body_arcs(node.orelse, from_start=from_start)
-        return exits
-
-    @contract(returns='ArcStarts')
-    def _handle__Raise(self, node):
-        here = self.line_for_node(node)
-        raise_start = ArcStart(here, cause="the raise on line {lineno} wasn't executed")
-        self.process_raise_exits([raise_start])
-        # `raise` statement jumps away, no exits from here.
-        return set()
-
-    @contract(returns='ArcStarts')
-    def _handle__Return(self, node):
-        here = self.line_for_node(node)
-        return_start = ArcStart(here, cause="the return on line {lineno} wasn't executed")
-        self.process_return_exits([return_start])
-        # `return` statement jumps away, no exits from here.
-        return set()
-
-    @contract(returns='ArcStarts')
-    def _handle__Try(self, node):
-        if node.handlers:
-            handler_start = self.line_for_node(node.handlers[0])
-        else:
-            handler_start = None
-
-        if node.finalbody:
-            final_start = self.line_for_node(node.finalbody[0])
-        else:
-            final_start = None
-
-        try_block = TryBlock(handler_start=handler_start, final_start=final_start)
-        self.block_stack.append(try_block)
-
-        start = self.line_for_node(node)
-        exits = self.add_body_arcs(node.body, from_start=ArcStart(start, cause=None))
-
-        # We're done with the `try` body, so this block no longer handles
-        # exceptions. We keep the block so the `finally` clause can pick up
-        # flows from the handlers and `else` clause.
-        if node.finalbody:
-            try_block.handler_start = None
-            if node.handlers:
-                # If there are `except` clauses, then raises in the try body
-                # will already jump to them.  Start this set over for raises in
-                # `except` and `else`.
-                try_block.raise_from = set([])
-        else:
-            self.block_stack.pop()
-
-        handler_exits = set()
-
-        if node.handlers:
-            last_handler_start = None
-            for handler_node in node.handlers:
-                handler_start = self.line_for_node(handler_node)
-                if last_handler_start is not None:
-                    self.add_arc(last_handler_start, handler_start)
-                last_handler_start = handler_start
-                from_cause = "the exception caught by line {lineno} didn't happen"
-                from_start = ArcStart(handler_start, cause=from_cause)
-                handler_exits |= self.add_body_arcs(handler_node.body, from_start=from_start)
-
-        if node.orelse:
-            exits = self.add_body_arcs(node.orelse, prev_starts=exits)
-
-        exits |= handler_exits
-
-        if node.finalbody:
-            self.block_stack.pop()
-            final_from = (                  # You can get to the `finally` clause from:
-                exits |                         # the exits of the body or `else` clause,
-                try_block.break_from |          # or a `break`,
-                try_block.continue_from |       # or a `continue`,
-                try_block.raise_from |          # or a `raise`,
-                try_block.return_from           # or a `return`.
-            )
-
-            exits = self.add_body_arcs(node.finalbody, prev_starts=final_from)
-            if try_block.break_from:
-                break_exits = self._combine_finally_starts(try_block.break_from, exits)
-                self.process_break_exits(break_exits)
-            if try_block.continue_from:
-                continue_exits = self._combine_finally_starts(try_block.continue_from, exits)
-                self.process_continue_exits(continue_exits)
-            if try_block.raise_from:
-                raise_exits = self._combine_finally_starts(try_block.raise_from, exits)
-                self.process_raise_exits(raise_exits)
-            if try_block.return_from:
-                return_exits = self._combine_finally_starts(try_block.return_from, exits)
-                self.process_return_exits(return_exits)
-
-        return exits
-
-    def _combine_finally_starts(self, starts, exits):
-        """Helper for building the cause of `finally` branches."""
-        causes = []
-        for lineno, cause in sorted(starts):
-            if cause is not None:
-                causes.append(cause.format(lineno=lineno))
-        cause = " or ".join(causes)
-        exits = set(ArcStart(ex.lineno, cause) for ex in exits)
-        return exits
-
-    @contract(returns='ArcStarts')
-    def _handle__TryExcept(self, node):
-        # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get
-        # TryExcept, it means there was no finally, so fake it, and treat as
-        # a general Try node.
-        node.finalbody = []
-        return self._handle__Try(node)
-
-    @contract(returns='ArcStarts')
-    def _handle__TryFinally(self, node):
-        # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get
-        # TryFinally, see if there's a TryExcept nested inside. If so, merge
-        # them. Otherwise, fake fields to complete a Try node.
-        node.handlers = []
-        node.orelse = []
-
-        first = node.body[0]
-        if first.__class__.__name__ == "TryExcept" and node.lineno == first.lineno:
-            assert len(node.body) == 1
-            node.body = first.body
-            node.handlers = first.handlers
-            node.orelse = first.orelse
-
-        return self._handle__Try(node)
-
-    @contract(returns='ArcStarts')
-    def _handle__While(self, node):
-        constant_test = self.is_constant_expr(node.test)
-        start = to_top = self.line_for_node(node.test)
-        if constant_test:
-            to_top = self.line_for_node(node.body[0])
-        self.block_stack.append(LoopBlock(start=start))
-        from_start = ArcStart(start, cause="the condition on line {lineno} was never true")
-        exits = self.add_body_arcs(node.body, from_start=from_start)
-        for xit in exits:
-            self.add_arc(xit.lineno, to_top, xit.cause)
-        exits = set()
-        my_block = self.block_stack.pop()
-        exits.update(my_block.break_exits)
-        from_start = ArcStart(start, cause="the condition on line {lineno} was never false")
-        if node.orelse:
-            else_exits = self.add_body_arcs(node.orelse, from_start=from_start)
-            exits |= else_exits
-        else:
-            # No `else` clause: you can exit from the start.
-            if not constant_test:
-                exits.add(from_start)
-        return exits
-
-    @contract(returns='ArcStarts')
-    def _handle__With(self, node):
-        start = self.line_for_node(node)
-        exits = self.add_body_arcs(node.body, from_start=ArcStart(start))
-        return exits
-
-    _handle__AsyncWith = _handle__With
-
-    def _code_object__Module(self, node):
-        start = self.line_for_node(node)
-        if node.body:
-            exits = self.add_body_arcs(node.body, from_start=ArcStart(-start))
-            for xit in exits:
-                self.add_arc(xit.lineno, -start, xit.cause, "didn't exit the module")
-        else:
-            # Empty module.
-            self.add_arc(-start, start)
-            self.add_arc(start, -start)
-
-    def _code_object__FunctionDef(self, node):
-        start = self.line_for_node(node)
-        self.block_stack.append(FunctionBlock(start=start, name=node.name))
-        exits = self.add_body_arcs(node.body, from_start=ArcStart(-start))
-        self.process_return_exits(exits)
-        self.block_stack.pop()
-
-    _code_object__AsyncFunctionDef = _code_object__FunctionDef
-
-    def _code_object__ClassDef(self, node):
-        start = self.line_for_node(node)
-        self.add_arc(-start, start)
-        exits = self.add_body_arcs(node.body, from_start=ArcStart(start))
-        for xit in exits:
-            self.add_arc(
-                xit.lineno, -start, xit.cause,
-                "didn't exit the body of class '{0}'".format(node.name),
-            )
-
-    def _make_oneline_code_method(noun):     # pylint: disable=no-self-argument
-        """A function to make methods for one-line callable _code_object__ methods."""
-        def _code_object__oneline_callable(self, node):
-            start = self.line_for_node(node)
-            self.add_arc(-start, start, None, "didn't run the {0} on line {1}".format(noun, start))
-            self.add_arc(
-                start, -start, None,
-                "didn't finish the {0} on line {1}".format(noun, start),
-            )
-        return _code_object__oneline_callable
-
-    _code_object__Lambda = _make_oneline_code_method("lambda")
-    _code_object__GeneratorExp = _make_oneline_code_method("generator expression")
-    _code_object__DictComp = _make_oneline_code_method("dictionary comprehension")
-    _code_object__SetComp = _make_oneline_code_method("set comprehension")
-    if env.PY3:
-        _code_object__ListComp = _make_oneline_code_method("list comprehension")
-
-
-SKIP_DUMP_FIELDS = ["ctx"]
-
-def _is_simple_value(value):
-    """Is `value` simple enough to be displayed on a single line?"""
-    return (
-        value in [None, [], (), {}, set()] or
-        isinstance(value, (string_class, int, float))
-    )
-
-# TODO: a test of ast_dump?
-def ast_dump(node, depth=0):
-    """Dump the AST for `node`.
-
-    This recursively walks the AST, printing a readable version.
-
-    """
-    indent = " " * depth
-    if not isinstance(node, ast.AST):
-        print("{0}<{1} {2!r}>".format(indent, node.__class__.__name__, node))
-        return
-
-    lineno = getattr(node, "lineno", None)
-    if lineno is not None:
-        linemark = " @ {0}".format(node.lineno)
-    else:
-        linemark = ""
-    head = "{0}<{1}{2}".format(indent, node.__class__.__name__, linemark)
-
-    named_fields = [
-        (name, value)
-        for name, value in ast.iter_fields(node)
-        if name not in SKIP_DUMP_FIELDS
-    ]
-    if not named_fields:
-        print("{0}>".format(head))
-    elif len(named_fields) == 1 and _is_simple_value(named_fields[0][1]):
-        field_name, value = named_fields[0]
-        print("{0} {1}: {2!r}>".format(head, field_name, value))
-    else:
-        print(head)
-        if 0:
-            print("{0}# mro: {1}".format(
-                indent, ", ".join(c.__name__ for c in node.__class__.__mro__[1:]),
-            ))
-        next_indent = indent + "    "
-        for field_name, value in named_fields:
-            prefix = "{0}{1}:".format(next_indent, field_name)
-            if _is_simple_value(value):
-                print("{0} {1!r}".format(prefix, value))
-            elif isinstance(value, list):
-                print("{0} [".format(prefix))
-                for n in value:
-                    ast_dump(n, depth + 8)
-                print("{0}]".format(next_indent))
-            else:
-                print(prefix)
-                ast_dump(value, depth + 8)
-
-        print("{0}>".format(indent))
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/phystokens.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,297 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Better tokenizing for coverage.py."""
-
-import codecs
-import keyword
-import re
-import sys
-import token
-import tokenize
-
-from coverage import env
-from coverage.backward import iternext
-from coverage.misc import contract
-
-
-def phys_tokens(toks):
-    """Return all physical tokens, even line continuations.
-
-    tokenize.generate_tokens() doesn't return a token for the backslash that
-    continues lines.  This wrapper provides those tokens so that we can
-    re-create a faithful representation of the original source.
-
-    Returns the same values as generate_tokens().
-
-    """
-    last_line = None
-    last_lineno = -1
-    last_ttype = None
-    for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
-        if last_lineno != elineno:
-            if last_line and last_line.endswith("\\\n"):
-                # We are at the beginning of a new line, and the last line
-                # ended with a backslash.  We probably have to inject a
-                # backslash token into the stream. Unfortunately, there's more
-                # to figure out.  This code::
-                #
-                #   usage = """\
-                #   HEY THERE
-                #   """
-                #
-                # triggers this condition, but the token text is::
-                #
-                #   '"""\\\nHEY THERE\n"""'
-                #
-                # so we need to figure out if the backslash is already in the
-                # string token or not.
-                inject_backslash = True
-                if last_ttype == tokenize.COMMENT:
-                    # Comments like this \
-                    # should never result in a new token.
-                    inject_backslash = False
-                elif ttype == token.STRING:
-                    if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\':
-                        # It's a multi-line string and the first line ends with
-                        # a backslash, so we don't need to inject another.
-                        inject_backslash = False
-                if inject_backslash:
-                    # Figure out what column the backslash is in.
-                    ccol = len(last_line.split("\n")[-2]) - 1
-                    # Yield the token, with a fake token type.
-                    yield (
-                        99999, "\\\n",
-                        (slineno, ccol), (slineno, ccol+2),
-                        last_line
-                        )
-            last_line = ltext
-            last_ttype = ttype
-        yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext
-        last_lineno = elineno
-
-
-@contract(source='unicode')
-def source_token_lines(source):
-    """Generate a series of lines, one for each line in `source`.
-
-    Each line is a list of pairs, each pair is a token::
-
-        [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
-
-    Each pair has a token class, and the token text.
-
-    If you concatenate all the token texts, and then join them with newlines,
-    you should have your original `source` back, with two differences:
-    trailing whitespace is not preserved, and a final line with no newline
-    is indistinguishable from a final line with a newline.
-
-    """
-
-    ws_tokens = set([token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL])
-    line = []
-    col = 0
-
-    source = source.expandtabs(8).replace('\r\n', '\n')
-    tokgen = generate_tokens(source)
-
-    for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen):
-        mark_start = True
-        for part in re.split('(\n)', ttext):
-            if part == '\n':
-                yield line
-                line = []
-                col = 0
-                mark_end = False
-            elif part == '':
-                mark_end = False
-            elif ttype in ws_tokens:
-                mark_end = False
-            else:
-                if mark_start and scol > col:
-                    line.append(("ws", u" " * (scol - col)))
-                    mark_start = False
-                tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3]
-                if ttype == token.NAME and keyword.iskeyword(ttext):
-                    tok_class = "key"
-                line.append((tok_class, part))
-                mark_end = True
-            scol = 0
-        if mark_end:
-            col = ecol
-
-    if line:
-        yield line
-
-
-class CachedTokenizer(object):
-    """A one-element cache around tokenize.generate_tokens.
-
-    When reporting, coverage.py tokenizes files twice, once to find the
-    structure of the file, and once to syntax-color it.  Tokenizing is
-    expensive, and easily cached.
-
-    This is a one-element cache so that our twice-in-a-row tokenizing doesn't
-    actually tokenize twice.
-
-    """
-    def __init__(self):
-        self.last_text = None
-        self.last_tokens = None
-
-    @contract(text='unicode')
-    def generate_tokens(self, text):
-        """A stand-in for `tokenize.generate_tokens`."""
-        if text != self.last_text:
-            self.last_text = text
-            readline = iternext(text.splitlines(True))
-            self.last_tokens = list(tokenize.generate_tokens(readline))
-        return self.last_tokens
-
-# Create our generate_tokens cache as a callable replacement function.
-generate_tokens = CachedTokenizer().generate_tokens
-
-
-COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)
-
-@contract(source='bytes')
-def _source_encoding_py2(source):
-    """Determine the encoding for `source`, according to PEP 263.
-
-    `source` is a byte string, the text of the program.
-
-    Returns a string, the name of the encoding.
-
-    """
-    assert isinstance(source, bytes)
-
-    # Do this so the detect_encoding code we copied will work.
-    readline = iternext(source.splitlines(True))
-
-    # This is mostly code adapted from Py3.2's tokenize module.
-
-    def _get_normal_name(orig_enc):
-        """Imitates get_normal_name in tokenizer.c."""
-        # Only care about the first 12 characters.
-        enc = orig_enc[:12].lower().replace("_", "-")
-        if re.match(r"^utf-8($|-)", enc):
-            return "utf-8"
-        if re.match(r"^(latin-1|iso-8859-1|iso-latin-1)($|-)", enc):
-            return "iso-8859-1"
-        return orig_enc
-
-    # From detect_encoding():
-    # It detects the encoding from the presence of a UTF-8 BOM or an encoding
-    # cookie as specified in PEP-0263.  If both a BOM and a cookie are present,
-    # but disagree, a SyntaxError will be raised.  If the encoding cookie is an
-    # invalid charset, raise a SyntaxError.  Note that if a UTF-8 BOM is found,
-    # 'utf-8-sig' is returned.
-
-    # If no encoding is specified, then the default will be returned.
-    default = 'ascii'
-
-    bom_found = False
-    encoding = None
-
-    def read_or_stop():
-        """Get the next source line, or ''."""
-        try:
-            return readline()
-        except StopIteration:
-            return ''
-
-    def find_cookie(line):
-        """Find an encoding cookie in `line`."""
-        try:
-            line_string = line.decode('ascii')
-        except UnicodeDecodeError:
-            return None
-
-        matches = COOKIE_RE.findall(line_string)
-        if not matches:
-            return None
-        encoding = _get_normal_name(matches[0])
-        try:
-            codec = codecs.lookup(encoding)
-        except LookupError:
-            # This behavior mimics the Python interpreter
-            raise SyntaxError("unknown encoding: " + encoding)
-
-        if bom_found:
-            # codecs in 2.3 were raw tuples of functions, assume the best.
-            codec_name = getattr(codec, 'name', encoding)
-            if codec_name != 'utf-8':
-                # This behavior mimics the Python interpreter
-                raise SyntaxError('encoding problem: utf-8')
-            encoding += '-sig'
-        return encoding
-
-    first = read_or_stop()
-    if first.startswith(codecs.BOM_UTF8):
-        bom_found = True
-        first = first[3:]
-        default = 'utf-8-sig'
-    if not first:
-        return default
-
-    encoding = find_cookie(first)
-    if encoding:
-        return encoding
-
-    second = read_or_stop()
-    if not second:
-        return default
-
-    encoding = find_cookie(second)
-    if encoding:
-        return encoding
-
-    return default
-
-
-@contract(source='bytes')
-def _source_encoding_py3(source):
-    """Determine the encoding for `source`, according to PEP 263.
-
-    `source` is a byte string: the text of the program.
-
-    Returns a string, the name of the encoding.
-
-    """
-    readline = iternext(source.splitlines(True))
-    return tokenize.detect_encoding(readline)[0]
-
-
-if env.PY3:
-    source_encoding = _source_encoding_py3
-else:
-    source_encoding = _source_encoding_py2
-
-
-@contract(source='unicode')
-def compile_unicode(source, filename, mode):
-    """Just like the `compile` builtin, but works on any Unicode string.
-
-    Python 2's compile() builtin has a stupid restriction: if the source string
-    is Unicode, then it may not have an encoding declaration in it.  Why not?
-    Who knows!  It also decodes to utf8, and then tries to interpret those utf8
-    bytes according to the encoding declaration.  Why? Who knows!
-
-    This function neuters the coding declaration, and compiles it.
-
-    """
-    source = neuter_encoding_declaration(source)
-    if env.PY2 and isinstance(filename, unicode):
-        filename = filename.encode(sys.getfilesystemencoding(), "replace")
-    code = compile(source, filename, mode)
-    return code
-
-
-@contract(source='unicode', returns='unicode')
-def neuter_encoding_declaration(source):
-    """Return `source`, with any encoding declaration neutered."""
-    source = COOKIE_RE.sub("# (deleted declaration)", source, count=2)
-    return source
-
-#
-# eflag: FileType = Python2
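The encoding-sniffing logic deleted above follows PEP 263: check for a UTF-8 BOM, then look for a `coding:` cookie on the first two lines. A minimal, self-contained sketch of that cookie matching (reusing the same `COOKIE_RE` pattern shown above; the helper name `sniff_encoding` is invented for illustration and is not part of coverage.py's API):

```python
import codecs
import re

# Same cookie pattern as the removed module's COOKIE_RE.
COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)

def sniff_encoding(source):
    """Return the declared encoding of `source` (bytes), or 'ascii'."""
    if source.startswith(codecs.BOM_UTF8):
        # A UTF-8 BOM wins, mirroring the 'utf-8-sig' default above.
        return "utf-8-sig"
    # Per PEP 263, only the first two lines may carry an encoding cookie.
    head = b"\n".join(source.splitlines()[:2]).decode("ascii", "replace")
    match = COOKIE_RE.search(head)
    return match.group(1) if match else "ascii"

print(sniff_encoding(b"# -*- coding: iso-8859-1 -*-\nx = 1\n"))  # iso-8859-1
print(sniff_encoding(b"x = 1\n"))                                # ascii
```

Unlike the full `_source_encoding_py2` above, this sketch skips codec validation and BOM/cookie conflict checks; it only shows the lookup order.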
--- a/DebugClients/Python/coverage/pickle2json.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,50 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Convert pickle to JSON for coverage.py."""
-
-from coverage.backward import pickle
-from coverage.data import CoverageData
-
-
-def pickle_read_raw_data(cls_unused, file_obj):
-    """Replacement for CoverageData._read_raw_data."""
-    return pickle.load(file_obj)
-
-
-def pickle2json(infile, outfile):
-    """Convert a coverage.py 3.x pickle data file to a 4.x JSON data file."""
-    try:
-        old_read_raw_data = CoverageData._read_raw_data
-        CoverageData._read_raw_data = pickle_read_raw_data
-
-        covdata = CoverageData()
-
-        with open(infile, 'rb') as inf:
-            covdata.read_fileobj(inf)
-
-        covdata.write_file(outfile)
-    finally:
-        CoverageData._read_raw_data = old_read_raw_data
-
-
-if __name__ == "__main__":
-    from optparse import OptionParser
-
-    parser = OptionParser(usage="usage: %s [options]" % __file__)
-    parser.description = "Convert .coverage files from pickle to JSON format"
-    parser.add_option(
-        "-i", "--input-file", action="store", default=".coverage",
-        help="Name of input file. Default .coverage",
-    )
-    parser.add_option(
-        "-o", "--output-file", action="store", default=".coverage",
-        help="Name of output file. Default .coverage",
-    )
-
-    (options, args) = parser.parse_args()
-
-    pickle2json(options.input_file, options.output_file)
-
-#
-# eflag: FileType = Python2
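The removed `pickle2json()` relies on a temporary monkey-patch: it swaps `CoverageData._read_raw_data` for a pickle-based reader inside a `try`/`finally` so the original is always restored. A generic, self-contained sketch of that pattern (the `Reader` class here is invented for illustration, not coverage.py code):

```python
class Reader(object):
    """Stand-in for a class whose behavior we patch temporarily."""
    @staticmethod
    def read(data):
        return data.upper()

def with_replacement(data):
    # Save the original, install the replacement, and restore it in
    # a finally block -- the same shape pickle2json() uses above.
    original = Reader.read
    Reader.read = staticmethod(lambda d: d.lower())
    try:
        return Reader.read(data)
    finally:
        Reader.read = original  # restored even if the body raises

print(with_replacement("MiXeD"))  # mixed
print(Reader.read("MiXeD"))       # MIXED (original behavior is back)
```

The `finally` clause is the important part: without it, an exception while reading the old-format file would leave `CoverageData` permanently patched.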
--- a/DebugClients/Python/coverage/plugin.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,399 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Plugin interfaces for coverage.py"""
-
-from coverage import files
-from coverage.misc import contract, _needs_to_implement
-
-
-class CoveragePlugin(object):
-    """Base class for coverage.py plugins.
-
-    To write a coverage.py plugin, create a module with a subclass of
-    :class:`CoveragePlugin`.  You will override methods in your class to
-    participate in various aspects of coverage.py's processing.
-
-    Currently the only plugin type is a file tracer, for implementing
-    measurement support for non-Python files.  File tracer plugins implement
-    the :meth:`file_tracer` method to claim files and the :meth:`file_reporter`
-    method to report on those files.
-
-    Any plugin can optionally implement :meth:`sys_info` to provide debugging
-    information about their operation.
-
-    Coverage.py will store its own information on your plugin object, using
-    attributes whose names start with ``_coverage_``.  Don't be startled.
-
-    To register your plugin, define a function called `coverage_init` in your
-    module::
-
-        def coverage_init(reg, options):
-            reg.add_file_tracer(MyPlugin())
-
-    You use the `reg` parameter passed to your `coverage_init` function to
-    register your plugin object.  It has one method, `add_file_tracer`, which
-    takes a newly created instance of your plugin.
-
-    If your plugin takes options, the `options` parameter is a dictionary of
-    your plugin's options from the coverage.py configuration file.  Use them
-    however you want to configure your object before registering it.
-
-    """
-
-    def file_tracer(self, filename):        # pylint: disable=unused-argument
-        """Get a :class:`FileTracer` object for a file.
-
-        Every Python source file is offered to the plugin to give it a chance
-        to take responsibility for tracing the file.  If your plugin can handle
-        the file, then return a :class:`FileTracer` object.  Otherwise return
-        None.
-
-        There is no way to register your plugin for particular files.  Instead,
-        this method is invoked for all files, and the plugin decides whether it
-        can trace the file or not.  Be prepared for `filename` to refer to all
-        kinds of files that have nothing to do with your plugin.
-
-        The file name will be a Python file being executed.  There are two
-        broad categories of behavior for a plugin, depending on the kind of
-        files your plugin supports:
-
-        * Static file names: each of your original source files has been
-          converted into a distinct Python file.  Your plugin is invoked with
-          the Python file name, and it maps it back to its original source
-          file.
-
-        * Dynamic file names: all of your source files are executed by the same
-          Python file.  In this case, your plugin implements
-          :meth:`FileTracer.dynamic_source_filename` to provide the actual
-          source file for each execution frame.
-
-        `filename` is a string, the path to the file being considered.  This is
-        the absolute real path to the file.  If you are comparing to other
-        paths, be sure to take this into account.
-
-        Returns a :class:`FileTracer` object to use to trace `filename`, or
-        None if this plugin cannot trace this file.
-
-        """
-        return None
-
-    def file_reporter(self, filename):      # pylint: disable=unused-argument
-        """Get the :class:`FileReporter` class to use for a file.
-
-        This will only be invoked for files for which :meth:`file_tracer`
-        returned non-None.  It's an error to return None from this method.
-
-        Returns a :class:`FileReporter` object to use to report on `filename`.
-
-        """
-        _needs_to_implement(self, "file_reporter")
-
-    def sys_info(self):
-        """Get a list of information useful for debugging.
-
-        This method will be invoked for ``--debug=sys``.  Your
-        plugin can return any information it wants to be displayed.
-
-        Returns a list of pairs: `[(name, value), ...]`.
-
-        """
-        return []
-
-
-class FileTracer(object):
-    """Support needed for files during the execution phase.
-
-    You may construct this object from :meth:`CoveragePlugin.file_tracer` any
-    way you like.  A natural choice would be to pass the file name given to
-    `file_tracer`.
-
-    `FileTracer` objects should only be created in the
-    :meth:`CoveragePlugin.file_tracer` method.
-
-    See :ref:`howitworks` for details of the different coverage.py phases.
-
-    """
-
-    def source_filename(self):
-        """The source file name for this file.
-
-        This may be any file name you like.  A key responsibility of a plugin
-        is to own the mapping from Python execution back to whatever source
-        file name was originally the source of the code.
-
-        See :meth:`CoveragePlugin.file_tracer` for details about static and
-        dynamic file names.
-
-        Returns the file name to credit with this execution.
-
-        """
-        _needs_to_implement(self, "source_filename")
-
-    def has_dynamic_source_filename(self):
-        """Does this FileTracer have dynamic source file names?
-
-        FileTracers can provide dynamically determined file names by
-        implementing :meth:`dynamic_source_filename`.  Invoking that method
-        is expensive, so coverage.py uses the result of this method to
-        decide whether it needs to invoke
-        :meth:`dynamic_source_filename` at all.
-
-        See :meth:`CoveragePlugin.file_tracer` for details about static and
-        dynamic file names.
-
-        Returns True if :meth:`dynamic_source_filename` should be called to get
-        dynamic source file names.
-
-        """
-        return False
-
-    def dynamic_source_filename(self, filename, frame):     # pylint: disable=unused-argument
-        """Get a dynamically computed source file name.
-
-        Some plugins need to compute the source file name dynamically for each
-        frame.
-
-        This function will not be invoked if
-        :meth:`has_dynamic_source_filename` returns False.
-
-        Returns the source file name for this frame, or None if this frame
-        shouldn't be measured.
-
-        """
-        return None
-
-    def line_number_range(self, frame):
-        """Get the range of source line numbers for a given a call frame.
-
-        The call frame is examined, and the source line number in the original
-        file is returned.  The return value is a pair of numbers, the starting
-        line number and the ending line number, both inclusive.  For example,
-        returning (5, 7) means that lines 5, 6, and 7 should be considered
-        executed.
-
-        This function might decide that the frame doesn't indicate any lines
-        from the source file were executed.  Return (-1, -1) in this case to
-        tell coverage.py that no lines should be recorded for this frame.
-
-        """
-        lineno = frame.f_lineno
-        return lineno, lineno
-
-
-class FileReporter(object):
-    """Support needed for files during the analysis and reporting phases.
-
-    See :ref:`howitworks` for details of the different coverage.py phases.
-
-    `FileReporter` objects should only be created in the
-    :meth:`CoveragePlugin.file_reporter` method.
-
-    There are many methods here, but only :meth:`lines` is required, to provide
-    the set of executable lines in the file.
-
-    """
-
-    def __init__(self, filename):
-        """Simple initialization of a `FileReporter`.
-
-        The `filename` argument is the path to the file being reported.  This
-        will be available as the `.filename` attribute on the object.  Other
-        method implementations on this base class rely on this attribute.
-
-        """
-        self.filename = filename
-
-    def __repr__(self):
-        return "<{0.__class__.__name__} filename={0.filename!r}>".format(self)
-
-    def relative_filename(self):
-        """Get the relative file name for this file.
-
-        This file path will be displayed in reports.  The default
-        implementation will supply the actual project-relative file path.  You
-        only need to supply this method if you have an unusual syntax for file
-        paths.
-
-        """
-        return files.relative_filename(self.filename)
-
-    @contract(returns='unicode')
-    def source(self):
-        """Get the source for the file.
-
-        Returns a Unicode string.
-
-        The base implementation simply reads the `self.filename` file and
-        decodes it as UTF8.  Override this method if your file isn't readable
-        as a text file, or if you need other encoding support.
-
-        """
-        with open(self.filename, "rb") as f:
-            return f.read().decode("utf8")
-
-    def lines(self):
-        """Get the executable lines in this file.
-
-        Your plugin must determine which lines in the file were possibly
-        executable.  This method returns a set of those line numbers.
-
-        Returns a set of line numbers.
-
-        """
-        _needs_to_implement(self, "lines")
-
-    def excluded_lines(self):
-        """Get the excluded executable lines in this file.
-
-        Your plugin can use any method it likes to allow the user to exclude
-        executable lines from consideration.
-
-        Returns a set of line numbers.
-
-        The base implementation returns the empty set.
-
-        """
-        return set()
-
-    def translate_lines(self, lines):
-        """Translate recorded lines into reported lines.
-
-        Some file formats will want to report lines slightly differently than
-        they are recorded.  For example, Python records the last line of a
-        multi-line statement, but reports are nicer if they mention the first
-        line.
-
-        Your plugin can optionally define this method to perform these kinds
-        of adjustments.
-
-        `lines` is a sequence of integers, the recorded line numbers.
-
-        Returns a set of integers, the adjusted line numbers.
-
-        The base implementation returns the numbers unchanged.
-
-        """
-        return set(lines)
-
-    def arcs(self):
-        """Get the executable arcs in this file.
-
-        To support branch coverage, your plugin needs to be able to indicate
-        possible execution paths, as a set of line number pairs.  Each pair is
-        a `(prev, next)` pair indicating that execution can transition from the
-        `prev` line number to the `next` line number.
-
-        Returns a set of pairs of line numbers.  The default implementation
-        returns an empty set.
-
-        """
-        return set()
-
-    def no_branch_lines(self):
-        """Get the lines excused from branch coverage in this file.
-
-        Your plugin can use any method it likes to allow the user to exclude
-        lines from consideration of branch coverage.
-
-        Returns a set of line numbers.
-
-        The base implementation returns the empty set.
-
-        """
-        return set()
-
-    def translate_arcs(self, arcs):
-        """Translate recorded arcs into reported arcs.
-
-        Similar to :meth:`translate_lines`, but for arcs.  `arcs` is a set of
-        line number pairs.
-
-        Returns a set of line number pairs.
-
-        The default implementation returns `arcs` unchanged.
-
-        """
-        return arcs
-
-    def exit_counts(self):
-        """Get a count of exits from that each line.
-
-        To determine which lines are branches, coverage.py looks for lines that
-        have more than one exit.  This function creates a dict mapping each
-        executable line number to a count of how many exits it has.
-
-        To be honest, this feels wrong, and should be refactored.  Let me know
-        if you attempt to implement this method in your plugin...
-
-        """
-        return {}
-
-    def missing_arc_description(self, start, end, executed_arcs=None):     # pylint: disable=unused-argument
-        """Provide an English sentence describing a missing arc.
-
-        The `start` and `end` arguments are the line numbers of the missing
-        arc. Negative numbers indicate entering or exiting code objects.
-
-        The `executed_arcs` argument is a set of line number pairs, the arcs
-        that were executed in this file.
-
-        By default, this simply returns the string "Line {start} didn't jump
-        to line {end}".
-
-        """
-        return "Line {start} didn't jump to line {end}".format(start=start, end=end)
-
-    def source_token_lines(self):
-        """Generate a series of tokenized lines, one for each line in `source`.
-
-        These tokens are used for syntax-colored reports.
-
-        Each line is a list of pairs, each pair is a token::
-
-            [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
-
-        Each pair has a token class, and the token text.  The token classes
-        are:
-
-        * ``'com'``: a comment
-        * ``'key'``: a keyword
-        * ``'nam'``: a name, or identifier
-        * ``'num'``: a number
-        * ``'op'``: an operator
-        * ``'str'``: a string literal
-        * ``'txt'``: some other kind of text
-
-        If you concatenate all the token texts, and then join them with
-        newlines, you should have your original source back.
-
-        The default implementation simply returns each line tagged as
-        ``'txt'``.
-
-        """
-        for line in self.source().splitlines():
-            yield [('txt', line)]
-
-    # Annoying comparison operators. Py3k wants __lt__ etc, and Py2k needs all
-    # of them defined.
-
-    def __eq__(self, other):
-        return isinstance(other, FileReporter) and self.filename == other.filename
-
-    def __ne__(self, other):
-        return not (self == other)
-
-    def __lt__(self, other):
-        return self.filename < other.filename
-
-    def __le__(self, other):
-        return self.filename <= other.filename
-
-    def __gt__(self, other):
-        return self.filename > other.filename
-
-    def __ge__(self, other):
-        return self.filename >= other.filename
-
-#
-# eflag: FileType = Python2
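The `CoveragePlugin`/`FileTracer` interfaces removed above can be illustrated with a minimal file-tracer sketch. All class names and the `.tmpl` extension here are invented for illustration; a real plugin would subclass the actual `coverage.CoveragePlugin` and `coverage.FileTracer` classes and register via `coverage_init`:

```python
class FakeTemplateTracer(object):
    """Stands in for a FileTracer subclass (static file names)."""
    def __init__(self, filename):
        self._filename = filename

    def source_filename(self):
        return self._filename

    def has_dynamic_source_filename(self):
        # Static mapping: one Python file per source file.
        return False

    def line_number_range(self, frame):
        lineno = frame.f_lineno
        return lineno, lineno

class FakeTemplatePlugin(object):
    """Stands in for a CoveragePlugin subclass."""
    def file_tracer(self, filename):
        # Claim only files our hypothetical template engine produces;
        # return None for everything else, as the docstring requires.
        if filename.endswith(".tmpl"):
            return FakeTemplateTracer(filename)
        return None

def coverage_init(reg, options):
    # Registration hook coverage.py would call on the plugin module.
    reg.add_file_tracer(FakeTemplatePlugin())

plugin = FakeTemplatePlugin()
print(plugin.file_tracer("page.tmpl"))  # a FakeTemplateTracer instance
print(plugin.file_tracer("page.py"))    # None
```

Returning `None` from `file_tracer()` for unclaimed files is the whole routing mechanism: coverage.py offers every file to every plugin and keeps the first non-None answer.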
--- a/DebugClients/Python/coverage/plugin_support.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,250 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Support for plugins."""
-
-import os
-import os.path
-import sys
-
-from coverage.misc import CoverageException, isolate_module
-from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
-
-os = isolate_module(os)
-
-
-class Plugins(object):
-    """The currently loaded collection of coverage.py plugins."""
-
-    def __init__(self):
-        self.order = []
-        self.names = {}
-        self.file_tracers = []
-
-        self.current_module = None
-        self.debug = None
-
-    @classmethod
-    def load_plugins(cls, modules, config, debug=None):
-        """Load plugins from `modules`.
-
-        Returns a list of loaded and configured plugins.
-
-        """
-        plugins = cls()
-        plugins.debug = debug
-
-        for module in modules:
-            plugins.current_module = module
-            __import__(module)
-            mod = sys.modules[module]
-
-            coverage_init = getattr(mod, "coverage_init", None)
-            if not coverage_init:
-                raise CoverageException(
-                    "Plugin module %r didn't define a coverage_init function" % module
-                )
-
-            options = config.get_plugin_options(module)
-            coverage_init(plugins, options)
-
-        plugins.current_module = None
-        return plugins
-
-    def add_file_tracer(self, plugin):
-        """Add a file tracer plugin.
-
-        `plugin` is an instance of a third-party plugin class.  It must
-        implement the :meth:`CoveragePlugin.file_tracer` method.
-
-        """
-        self._add_plugin(plugin, self.file_tracers)
-
-    def add_noop(self, plugin):
-        """Add a plugin that does nothing.
-
-        This is only useful for testing the plugin support.
-
-        """
-        self._add_plugin(plugin, None)
-
-    def _add_plugin(self, plugin, specialized):
-        """Add a plugin object.
-
-        `plugin` is a :class:`CoveragePlugin` instance to add.  `specialized`
-        is a list to append the plugin to.
-
-        """
-        plugin_name = "%s.%s" % (self.current_module, plugin.__class__.__name__)
-        if self.debug and self.debug.should('plugin'):
-            self.debug.write("Loaded plugin %r: %r" % (self.current_module, plugin))
-            labelled = LabelledDebug("plugin %r" % (self.current_module,), self.debug)
-            plugin = DebugPluginWrapper(plugin, labelled)
-
-        # pylint: disable=attribute-defined-outside-init
-        plugin._coverage_plugin_name = plugin_name
-        plugin._coverage_enabled = True
-        self.order.append(plugin)
-        self.names[plugin_name] = plugin
-        if specialized is not None:
-            specialized.append(plugin)
-
-    def __nonzero__(self):
-        return bool(self.order)
-
-    __bool__ = __nonzero__
-
-    def __iter__(self):
-        return iter(self.order)
-
-    def get(self, plugin_name):
-        """Return a plugin by name."""
-        return self.names[plugin_name]
-
-
-class LabelledDebug(object):
-    """A Debug writer, but with labels for prepending to the messages."""
-
-    def __init__(self, label, debug, prev_labels=()):
-        self.labels = list(prev_labels) + [label]
-        self.debug = debug
-
-    def add_label(self, label):
-        """Add a label to the writer, and return a new `LabelledDebug`."""
-        return LabelledDebug(label, self.debug, self.labels)
-
-    def message_prefix(self):
-        """The prefix to use on messages, combining the labels."""
-        prefixes = self.labels + ['']
-        return ":\n".join("  "*i+label for i, label in enumerate(prefixes))
-
-    def write(self, message):
-        """Write `message`, but with the labels prepended."""
-        self.debug.write("%s%s" % (self.message_prefix(), message))
-
-
-class DebugPluginWrapper(CoveragePlugin):
-    """Wrap a plugin, and use debug to report on what it's doing."""
-
-    def __init__(self, plugin, debug):
-        super(DebugPluginWrapper, self).__init__()
-        self.plugin = plugin
-        self.debug = debug
-
-    def file_tracer(self, filename):
-        tracer = self.plugin.file_tracer(filename)
-        self.debug.write("file_tracer(%r) --> %r" % (filename, tracer))
-        if tracer:
-            debug = self.debug.add_label("file %r" % (filename,))
-            tracer = DebugFileTracerWrapper(tracer, debug)
-        return tracer
-
-    def file_reporter(self, filename):
-        reporter = self.plugin.file_reporter(filename)
-        self.debug.write("file_reporter(%r) --> %r" % (filename, reporter))
-        if reporter:
-            debug = self.debug.add_label("file %r" % (filename,))
-            reporter = DebugFileReporterWrapper(filename, reporter, debug)
-        return reporter
-
-    def sys_info(self):
-        return self.plugin.sys_info()
-
-
-class DebugFileTracerWrapper(FileTracer):
-    """A debugging `FileTracer`."""
-
-    def __init__(self, tracer, debug):
-        self.tracer = tracer
-        self.debug = debug
-
-    def _show_frame(self, frame):
-        """A short string identifying a frame, for debug messages."""
-        return "%s@%d" % (
-            os.path.basename(frame.f_code.co_filename),
-            frame.f_lineno,
-        )
-
-    def source_filename(self):
-        sfilename = self.tracer.source_filename()
-        self.debug.write("source_filename() --> %r" % (sfilename,))
-        return sfilename
-
-    def has_dynamic_source_filename(self):
-        has = self.tracer.has_dynamic_source_filename()
-        self.debug.write("has_dynamic_source_filename() --> %r" % (has,))
-        return has
-
-    def dynamic_source_filename(self, filename, frame):
-        dyn = self.tracer.dynamic_source_filename(filename, frame)
-        self.debug.write("dynamic_source_filename(%r, %s) --> %r" % (
-            filename, self._show_frame(frame), dyn,
-        ))
-        return dyn
-
-    def line_number_range(self, frame):
-        pair = self.tracer.line_number_range(frame)
-        self.debug.write("line_number_range(%s) --> %r" % (self._show_frame(frame), pair))
-        return pair
-
-
-class DebugFileReporterWrapper(FileReporter):
-    """A debugging `FileReporter`."""
-
-    def __init__(self, filename, reporter, debug):
-        super(DebugFileReporterWrapper, self).__init__(filename)
-        self.reporter = reporter
-        self.debug = debug
-
-    def relative_filename(self):
-        ret = self.reporter.relative_filename()
-        self.debug.write("relative_filename() --> %r" % (ret,))
-        return ret
-
-    def lines(self):
-        ret = self.reporter.lines()
-        self.debug.write("lines() --> %r" % (ret,))
-        return ret
-
-    def excluded_lines(self):
-        ret = self.reporter.excluded_lines()
-        self.debug.write("excluded_lines() --> %r" % (ret,))
-        return ret
-
-    def translate_lines(self, lines):
-        ret = self.reporter.translate_lines(lines)
-        self.debug.write("translate_lines(%r) --> %r" % (lines, ret))
-        return ret
-
-    def translate_arcs(self, arcs):
-        ret = self.reporter.translate_arcs(arcs)
-        self.debug.write("translate_arcs(%r) --> %r" % (arcs, ret))
-        return ret
-
-    def no_branch_lines(self):
-        ret = self.reporter.no_branch_lines()
-        self.debug.write("no_branch_lines() --> %r" % (ret,))
-        return ret
-
-    def exit_counts(self):
-        ret = self.reporter.exit_counts()
-        self.debug.write("exit_counts() --> %r" % (ret,))
-        return ret
-
-    def arcs(self):
-        ret = self.reporter.arcs()
-        self.debug.write("arcs() --> %r" % (ret,))
-        return ret
-
-    def source(self):
-        ret = self.reporter.source()
-        self.debug.write("source() --> %d chars" % (len(ret),))
-        return ret
-
-    def source_token_lines(self):
-        ret = list(self.reporter.source_token_lines())
-        self.debug.write("source_token_lines() --> %d tokens" % (len(ret),))
-        return ret
-
-#
-# eflag: FileType = Python2
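The `LabelledDebug.message_prefix()` method removed above indents each nested label by two extra spaces per level. A self-contained copy of just that logic, to show the resulting prefix shape:

```python
def message_prefix(labels):
    # Same expression as LabelledDebug.message_prefix() above:
    # each deeper label gains two leading spaces, and a trailing
    # indented stub is left for the message itself.
    prefixes = list(labels) + ['']
    return ":\n".join("  " * i + label for i, label in enumerate(prefixes))

print(repr(message_prefix(["plugin 'p'", "file 'f.py'"])))
# "plugin 'p':\n  file 'f.py':\n    "
```

So a message written through a two-label writer appears under both labels, indented four spaces, which is how the `--debug=plugin` output stays readable when tracers nest inside plugins.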
--- a/DebugClients/Python/coverage/python.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,208 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Python source expertise for coverage.py"""
-
-import os.path
-import types
-import zipimport
-
-from coverage import env, files
-from coverage.misc import (
-    contract, CoverageException, expensive, NoSource, join_regex, isolate_module,
-)
-from coverage.parser import PythonParser
-from coverage.phystokens import source_token_lines, source_encoding
-from coverage.plugin import FileReporter
-
-os = isolate_module(os)
-
-
-@contract(returns='bytes')
-def read_python_source(filename):
-    """Read the Python source text from `filename`.
-
-    Returns bytes.
-
-    """
-    with open(filename, "rb") as f:
-        return f.read().replace(b"\r\n", b"\n").replace(b"\r", b"\n")
-
-
-@contract(returns='unicode')
-def get_python_source(filename):
-    """Return the source code, as unicode."""
-    base, ext = os.path.splitext(filename)
-    if ext == ".py" and env.WINDOWS:
-        exts = [".py", ".pyw"]
-    else:
-        exts = [ext]
-
-    for ext in exts:
-        try_filename = base + ext
-        if os.path.exists(try_filename):
-            # A regular text file: open it.
-            source = read_python_source(try_filename)
-            break
-
-        # Maybe it's in a zip file?
-        source = get_zip_bytes(try_filename)
-        if source is not None:
-            break
-    else:
-        # Couldn't find source.
-        raise NoSource("No source for code: '%s'." % filename)
-
-    # Replace \f because of http://bugs.python.org/issue19035
-    source = source.replace(b'\f', b' ')
-    source = source.decode(source_encoding(source), "replace")
-
-    # Python code should always end with a line with a newline.
-    if source and source[-1] != '\n':
-        source += '\n'
-
-    return source
-
-
-@contract(returns='bytes|None')
-def get_zip_bytes(filename):
-    """Get data from `filename` if it is a zip file path.
-
-    Returns the bytestring data read from the zip file, or None if no zip file
-    could be found or `filename` isn't in it.  The data returned will be
-    an empty string if the file is empty.
-
-    """
-    markers = ['.zip'+os.sep, '.egg'+os.sep]
-    for marker in markers:
-        if marker in filename:
-            parts = filename.split(marker)
-            try:
-                zi = zipimport.zipimporter(parts[0]+marker[:-1])
-            except zipimport.ZipImportError:
-                continue
-            try:
-                data = zi.get_data(parts[1])
-            except IOError:
-                continue
-            return data
-    return None
-
-
-class PythonFileReporter(FileReporter):
-    """Report support for a Python file."""
-
-    def __init__(self, morf, coverage=None):
-        self.coverage = coverage
-
-        if hasattr(morf, '__file__'):
-            filename = morf.__file__
-        elif isinstance(morf, types.ModuleType):
-            # A module should have had .__file__, otherwise we can't use it.
-            # This could be a PEP-420 namespace package.
-            raise CoverageException("Module {0} has no file".format(morf))
-        else:
-            filename = morf
-
-        filename = files.unicode_filename(filename)
-
-        # .pyc files should always refer to a .py instead.
-        if filename.endswith(('.pyc', '.pyo')):
-            filename = filename[:-1]
-        elif filename.endswith('$py.class'):   # Jython
-            filename = filename[:-9] + ".py"
-
-        super(PythonFileReporter, self).__init__(files.canonical_filename(filename))
-
-        if hasattr(morf, '__name__'):
-            name = morf.__name__
-            name = name.replace(".", os.sep) + ".py"
-            name = files.unicode_filename(name)
-        else:
-            name = files.relative_filename(filename)
-        self.relname = name
-
-        self._source = None
-        self._parser = None
-        self._statements = None
-        self._excluded = None
-
-    @contract(returns='unicode')
-    def relative_filename(self):
-        return self.relname
-
-    @property
-    def parser(self):
-        """Lazily create a :class:`PythonParser`."""
-        if self._parser is None:
-            self._parser = PythonParser(
-                filename=self.filename,
-                exclude=self.coverage._exclude_regex('exclude'),
-            )
-            self._parser.parse_source()
-        return self._parser
-
-    def lines(self):
-        """Return the line numbers of statements in the file."""
-        return self.parser.statements
-
-    def excluded_lines(self):
-        """Return the line numbers of excluded statements in the file."""
-        return self.parser.excluded
-
-    def translate_lines(self, lines):
-        return self.parser.translate_lines(lines)
-
-    def translate_arcs(self, arcs):
-        return self.parser.translate_arcs(arcs)
-
-    @expensive
-    def no_branch_lines(self):
-        no_branch = self.parser.lines_matching(
-            join_regex(self.coverage.config.partial_list),
-            join_regex(self.coverage.config.partial_always_list)
-            )
-        return no_branch
-
-    @expensive
-    def arcs(self):
-        return self.parser.arcs()
-
-    @expensive
-    def exit_counts(self):
-        return self.parser.exit_counts()
-
-    def missing_arc_description(self, start, end, executed_arcs=None):
-        return self.parser.missing_arc_description(start, end, executed_arcs)
-
-    @contract(returns='unicode')
-    def source(self):
-        if self._source is None:
-            self._source = get_python_source(self.filename)
-        return self._source
-
-    def should_be_python(self):
-        """Does it seem like this file should contain Python?
-
-        This is used to decide if a file reported as part of the execution of
-        a program was really likely to have contained Python in the first
-        place.
-
-        """
-        # Get the file extension.
-        _, ext = os.path.splitext(self.filename)
-
-        # Anything named *.py* should be Python.
-        if ext.startswith('.py'):
-            return True
-        # A file with no extension should be Python.
-        if not ext:
-            return True
-        # Everything else is probably not Python.
-        return False
-
-    def source_token_lines(self):
-        return source_token_lines(self.source())
-
-#
-# eflag: FileType = Python2
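`read_python_source` in the removed file normalizes line endings before decoding, so line numbers stay consistent across platforms. A sketch of that normalization (the helper name is illustrative):

```python
def normalize_eol(data):
    """Fold Windows (\r\n) and old-Mac (\r) endings to \n.

    Order matters: replace \r\n first so no lone \r is left behind
    to be turned into a spurious extra newline.
    """
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

normalized = normalize_eol(b"a\r\nb\rc\n")
```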
--- a/DebugClients/Python/coverage/pytracer.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,158 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Raw data collector for coverage.py."""
-
-import dis
-import sys
-
-from coverage import env
-
-# We need the YIELD_VALUE opcode below, in a comparison-friendly form.
-YIELD_VALUE = dis.opmap['YIELD_VALUE']
-if env.PY2:
-    YIELD_VALUE = chr(YIELD_VALUE)
-
-
-class PyTracer(object):
-    """Python implementation of the raw data tracer."""
-
-    # Because of poor implementations of trace-function-manipulating tools,
-    # the Python trace function must be kept very simple.  In particular, there
-    # must be only one function ever set as the trace function, both through
-    # sys.settrace, and as the return value from the trace function.  Put
-    # another way, the trace function must always return itself.  It cannot
-    # swap in other functions, or return None to avoid tracing a particular
-    # frame.
-    #
-    # The trace manipulator that introduced this restriction is DecoratorTools,
-    # which sets a trace function, and then later restores the pre-existing one
-    # by calling sys.settrace with a function it found in the current frame.
-    #
-    # Systems that use DecoratorTools (or similar trace manipulations) must use
-    # PyTracer to get accurate results.  The command-line --timid argument is
-    # used to force the use of this tracer.
-
-    def __init__(self):
-        # Attributes set from the collector:
-        self.data = None
-        self.trace_arcs = False
-        self.should_trace = None
-        self.should_trace_cache = None
-        self.warn = None
-        # The threading module to use, if any.
-        self.threading = None
-
-        self.cur_file_dict = []
-        self.last_line = [0]
-
-        self.data_stack = []
-        self.last_exc_back = None
-        self.last_exc_firstlineno = 0
-        self.thread = None
-        self.stopped = False
-
-    def __repr__(self):
-        return "<PyTracer at 0x{0:0x}: {1} lines in {2} files>".format(
-            id(self),
-            sum(len(v) for v in self.data.values()),
-            len(self.data),
-        )
-
-    def _trace(self, frame, event, arg_unused):
-        """The trace function passed to sys.settrace."""
-
-        if self.stopped:
-            return
-
-        if self.last_exc_back:
-            if frame == self.last_exc_back:
-                # Someone forgot a return event.
-                if self.trace_arcs and self.cur_file_dict:
-                    pair = (self.last_line, -self.last_exc_firstlineno)
-                    self.cur_file_dict[pair] = None
-                self.cur_file_dict, self.last_line = self.data_stack.pop()
-            self.last_exc_back = None
-
-        if event == 'call':
-            # Entering a new function context.  Decide if we should trace
-            # in this file.
-            self.data_stack.append((self.cur_file_dict, self.last_line))
-            filename = frame.f_code.co_filename
-            disp = self.should_trace_cache.get(filename)
-            if disp is None:
-                disp = self.should_trace(filename, frame)
-                self.should_trace_cache[filename] = disp
-
-            self.cur_file_dict = None
-            if disp.trace:
-                tracename = disp.source_filename
-                if tracename not in self.data:
-                    self.data[tracename] = {}
-                self.cur_file_dict = self.data[tracename]
-            # The call event is really a "start frame" event, and happens for
-            # function calls and re-entering generators.  The f_lasti field is
-            # -1 for calls, and a real offset for generators.  Use <0 as the
-            # line number for calls, and the real line number for generators.
-            if frame.f_lasti < 0:
-                self.last_line = -frame.f_code.co_firstlineno
-            else:
-                self.last_line = frame.f_lineno
-        elif event == 'line':
-            # Record an executed line.
-            if self.cur_file_dict is not None:
-                lineno = frame.f_lineno
-                if self.trace_arcs:
-                    self.cur_file_dict[(self.last_line, lineno)] = None
-                else:
-                    self.cur_file_dict[lineno] = None
-                self.last_line = lineno
-        elif event == 'return':
-            if self.trace_arcs and self.cur_file_dict:
-                # Record an arc leaving the function, but beware that a
-                # "return" event might just mean yielding from a generator.
-                bytecode = frame.f_code.co_code[frame.f_lasti]
-                if bytecode != YIELD_VALUE:
-                    first = frame.f_code.co_firstlineno
-                    self.cur_file_dict[(self.last_line, -first)] = None
-            # Leaving this function, pop the filename stack.
-            self.cur_file_dict, self.last_line = self.data_stack.pop()
-        elif event == 'exception':
-            self.last_exc_back = frame.f_back
-            self.last_exc_firstlineno = frame.f_code.co_firstlineno
-        return self._trace
-
-    def start(self):
-        """Start this Tracer.
-
-        Return a Python function suitable for use with sys.settrace().
-
-        """
-        if self.threading:
-            self.thread = self.threading.currentThread()
-        sys.settrace(self._trace)
-        self.stopped = False
-        return self._trace
-
-    def stop(self):
-        """Stop this Tracer."""
-        self.stopped = True
-        if self.threading and self.thread.ident != self.threading.currentThread().ident:
-            # Called on a different thread than started us: we can't unhook
-            # ourselves, but we've set the flag that we should stop, so we
-            # won't do any more tracing.
-            return
-
-        if self.warn:
-            if sys.gettrace() != self._trace:
-                msg = "Trace function changed, measurement is likely wrong: %r"
-                self.warn(msg % (sys.gettrace(),))
-
-        sys.settrace(None)
-
-    def get_stats(self):
-        """Return a dictionary of statistics, or None."""
-        return None
-
-#
-# eflag: FileType = Python2
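The core idea of the removed `PyTracer` is a trace function registered with `sys.settrace` that records executed line numbers and always returns itself, as the comments in the class explain. A stripped-down illustration (recording line offsets within one function only; not coverage.py's actual tracer):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record 'line' events for the sample function only, as offsets
    # from its def line.
    if event == "line" and frame.f_code.co_name == "sample":
        executed.add(frame.f_lineno - sample.__code__.co_firstlineno)
    # Always return the trace function itself, never None.
    return tracer

def sample():
    x = 1          # offset 1
    y = x + 1      # offset 2
    return y       # offset 3

sys.settrace(tracer)
result = sample()
sys.settrace(None)
```

The `call` event (not shown) is where the real tracer decides whether the current file should be traced at all.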
--- a/DebugClients/Python/coverage/report.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,104 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Reporter foundation for coverage.py."""
-
-import os
-import warnings
-
-from coverage.files import prep_patterns, FnmatchMatcher
-from coverage.misc import CoverageException, NoSource, NotPython, isolate_module
-
-os = isolate_module(os)
-
-
-class Reporter(object):
-    """A base class for all reporters."""
-
-    def __init__(self, coverage, config):
-        """Create a reporter.
-
-        `coverage` is the coverage instance. `config` is an instance of
-        CoverageConfig, for controlling all sorts of behavior.
-
-        """
-        self.coverage = coverage
-        self.config = config
-
-        # The directory into which to place the report, used by some derived
-        # classes.
-        self.directory = None
-
-        # Our method find_file_reporters used to set an attribute that other
-        # code could read.  That's been refactored away, but some third parties
-        # were using that attribute.  We'll continue to support it in a noisy
-        # way for now.
-        self._file_reporters = []
-
-    @property
-    def file_reporters(self):
-        """Keep .file_reporters working for private-grabbing tools."""
-        warnings.warn(
-            "Report.file_reporters will no longer be available in Coverage.py 4.2",
-            DeprecationWarning,
-        )
-        return self._file_reporters
-
-    def find_file_reporters(self, morfs):
-        """Find the FileReporters we'll report on.
-
-        `morfs` is a list of modules or file names.
-
-        Returns a list of FileReporters.
-
-        """
-        reporters = self.coverage._get_file_reporters(morfs)
-
-        if self.config.include:
-            matcher = FnmatchMatcher(prep_patterns(self.config.include))
-            reporters = [fr for fr in reporters if matcher.match(fr.filename)]
-
-        if self.config.omit:
-            matcher = FnmatchMatcher(prep_patterns(self.config.omit))
-            reporters = [fr for fr in reporters if not matcher.match(fr.filename)]
-
-        self._file_reporters = sorted(reporters)
-        return self._file_reporters
-
-    def report_files(self, report_fn, morfs, directory=None):
-        """Run a reporting function on a number of morfs.
-
-        `report_fn` is called for each relative morf in `morfs`.  It is called
-        as::
-
-            report_fn(file_reporter, analysis)
-
-        where `file_reporter` is the `FileReporter` for the morf, and
-        `analysis` is the `Analysis` for the morf.
-
-        """
-        file_reporters = self.find_file_reporters(morfs)
-
-        if not file_reporters:
-            raise CoverageException("No data to report.")
-
-        self.directory = directory
-        if self.directory and not os.path.exists(self.directory):
-            os.makedirs(self.directory)
-
-        for fr in file_reporters:
-            try:
-                report_fn(fr, self.coverage._analyze(fr))
-            except NoSource:
-                if not self.config.ignore_errors:
-                    raise
-            except NotPython:
-                # Only report errors for .py files, and only if we didn't
-                # explicitly suppress those errors.
-                # NotPython is only raised by PythonFileReporter, which has a
-                # should_be_python() method.
-                if fr.should_be_python() and not self.config.ignore_errors:
-                    raise
-
-#
-# eflag: FileType = Python2
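The include/omit filtering in `find_file_reporters` above can be approximated with the stdlib `fnmatch` module standing in for `FnmatchMatcher` (a sketch, not the real implementation):

```python
import fnmatch

def filter_files(filenames, include=None, omit=None):
    """Keep files matching any include pattern, then drop omit matches."""
    if include:
        filenames = [f for f in filenames
                     if any(fnmatch.fnmatch(f, p) for p in include)]
    if omit:
        filenames = [f for f in filenames
                     if not any(fnmatch.fnmatch(f, p) for p in omit)]
    return filenames

files = ["pkg/a.py", "pkg/tests/test_a.py", "other/b.py"]
kept = filter_files(files, include=["pkg/*"], omit=["pkg/tests/*"])
```

Note that `fnmatch`'s `*` also matches path separators, which is why the omit pattern is needed to exclude the tests subdirectory.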
--- a/DebugClients/Python/coverage/results.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,274 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Results of coverage measurement."""
-
-import collections
-
-from coverage.backward import iitems
-from coverage.misc import format_lines
-
-
-class Analysis(object):
-    """The results of analyzing a FileReporter."""
-
-    def __init__(self, data, file_reporter):
-        self.data = data
-        self.file_reporter = file_reporter
-        self.filename = self.file_reporter.filename
-        self.statements = self.file_reporter.lines()
-        self.excluded = self.file_reporter.excluded_lines()
-
-        # Identify missing statements.
-        executed = self.data.lines(self.filename) or []
-        executed = self.file_reporter.translate_lines(executed)
-        self.missing = self.statements - executed
-
-        if self.data.has_arcs():
-            self._arc_possibilities = sorted(self.file_reporter.arcs())
-            self.exit_counts = self.file_reporter.exit_counts()
-            self.no_branch = self.file_reporter.no_branch_lines()
-            n_branches = self.total_branches()
-            mba = self.missing_branch_arcs()
-            n_partial_branches = sum(len(v) for k,v in iitems(mba) if k not in self.missing)
-            n_missing_branches = sum(len(v) for k,v in iitems(mba))
-        else:
-            self._arc_possibilities = []
-            self.exit_counts = {}
-            self.no_branch = set()
-            n_branches = n_partial_branches = n_missing_branches = 0
-
-        self.numbers = Numbers(
-            n_files=1,
-            n_statements=len(self.statements),
-            n_excluded=len(self.excluded),
-            n_missing=len(self.missing),
-            n_branches=n_branches,
-            n_partial_branches=n_partial_branches,
-            n_missing_branches=n_missing_branches,
-        )
-
-    def missing_formatted(self):
-        """The missing line numbers, formatted nicely.
-
-        Returns a string like "1-2, 5-11, 13-14".
-
-        """
-        return format_lines(self.statements, self.missing)
-
-    def has_arcs(self):
-        """Were arcs measured in this result?"""
-        return self.data.has_arcs()
-
-    def arc_possibilities(self):
-        """Returns a sorted list of the arcs in the code."""
-        return self._arc_possibilities
-
-    def arcs_executed(self):
-        """Returns a sorted list of the arcs actually executed in the code."""
-        executed = self.data.arcs(self.filename) or []
-        executed = self.file_reporter.translate_arcs(executed)
-        return sorted(executed)
-
-    def arcs_missing(self):
-        """Returns a sorted list of the arcs in the code not executed."""
-        possible = self.arc_possibilities()
-        executed = self.arcs_executed()
-        missing = (
-            p for p in possible
-                if p not in executed
-                    and p[0] not in self.no_branch
-        )
-        return sorted(missing)
-
-    def arcs_missing_formatted(self):
-        """The missing branch arcs, formatted nicely.
-
-        Returns a string like "1->2, 1->3, 16->20". Omits any mention of
-        branches from missing lines, so if line 17 is missing, then 17->18
-        won't be included.
-
-        """
-        arcs = self.missing_branch_arcs()
-        missing = self.missing
-        line_exits = sorted(iitems(arcs))
-        pairs = []
-        for line, exits in line_exits:
-            for ex in sorted(exits):
-                if line not in missing:
-                    pairs.append("%d->%s" % (line, (ex if ex > 0 else "exit")))
-        return ', '.join(pairs)
-
-    def arcs_unpredicted(self):
-        """Returns a sorted list of the executed arcs missing from the code."""
-        possible = self.arc_possibilities()
-        executed = self.arcs_executed()
-        # Exclude arcs here which connect a line to itself.  They can occur
-        # in executed data in some cases.  This is where they can cause
-        # trouble, and here is where it's the least burden to remove them.
-        # Also, generators can somehow cause arcs from "enter" to "exit", so
-        # make sure we have at least one positive value.
-        unpredicted = (
-            e for e in executed
-                if e not in possible
-                    and e[0] != e[1]
-                    and (e[0] > 0 or e[1] > 0)
-        )
-        return sorted(unpredicted)
-
-    def branch_lines(self):
-        """Returns a list of line numbers that have more than one exit."""
-        return [l1 for l1,count in iitems(self.exit_counts) if count > 1]
-
-    def total_branches(self):
-        """How many total branches are there?"""
-        return sum(count for count in self.exit_counts.values() if count > 1)
-
-    def missing_branch_arcs(self):
-        """Return arcs that weren't executed from branch lines.
-
-        Returns {l1:[l2a,l2b,...], ...}
-
-        """
-        missing = self.arcs_missing()
-        branch_lines = set(self.branch_lines())
-        mba = collections.defaultdict(list)
-        for l1, l2 in missing:
-            if l1 in branch_lines:
-                mba[l1].append(l2)
-        return mba
-
-    def branch_stats(self):
-        """Get stats about branches.
-
-        Returns a dict mapping line numbers to a tuple:
-        (total_exits, taken_exits).
-        """
-
-        missing_arcs = self.missing_branch_arcs()
-        stats = {}
-        for lnum in self.branch_lines():
-            exits = self.exit_counts[lnum]
-            try:
-                missing = len(missing_arcs[lnum])
-            except KeyError:
-                missing = 0
-            stats[lnum] = (exits, exits - missing)
-        return stats
-
-
-class Numbers(object):
-    """The numerical results of measuring coverage.
-
-    This holds the basic statistics from `Analysis`, and is used to roll
-    up statistics across files.
-
-    """
-    # A global to determine the precision on coverage percentages, the number
-    # of decimal places.
-    _precision = 0
-    _near0 = 1.0              # These will change when _precision is changed.
-    _near100 = 99.0
-
-    def __init__(self, n_files=0, n_statements=0, n_excluded=0, n_missing=0,
-                    n_branches=0, n_partial_branches=0, n_missing_branches=0
-                    ):
-        self.n_files = n_files
-        self.n_statements = n_statements
-        self.n_excluded = n_excluded
-        self.n_missing = n_missing
-        self.n_branches = n_branches
-        self.n_partial_branches = n_partial_branches
-        self.n_missing_branches = n_missing_branches
-
-    def init_args(self):
-        """Return a list for __init__(*args) to recreate this object."""
-        return [
-            self.n_files, self.n_statements, self.n_excluded, self.n_missing,
-            self.n_branches, self.n_partial_branches, self.n_missing_branches,
-        ]
-
-    @classmethod
-    def set_precision(cls, precision):
-        """Set the number of decimal places used to report percentages."""
-        assert 0 <= precision < 10
-        cls._precision = precision
-        cls._near0 = 1.0 / 10**precision
-        cls._near100 = 100.0 - cls._near0
-
-    @property
-    def n_executed(self):
-        """Returns the number of executed statements."""
-        return self.n_statements - self.n_missing
-
-    @property
-    def n_executed_branches(self):
-        """Returns the number of executed branches."""
-        return self.n_branches - self.n_missing_branches
-
-    @property
-    def pc_covered(self):
-        """Returns a single percentage value for coverage."""
-        if self.n_statements > 0:
-            numerator, denominator = self.ratio_covered
-            pc_cov = (100.0 * numerator) / denominator
-        else:
-            pc_cov = 100.0
-        return pc_cov
-
-    @property
-    def pc_covered_str(self):
-        """Returns the percent covered, as a string, without a percent sign.
-
-        Note that "0" is only returned when the value is truly zero, and "100"
-        is only returned when the value is truly 100.  Rounding can never
-        result in either "0" or "100".
-
-        """
-        pc = self.pc_covered
-        if 0 < pc < self._near0:
-            pc = self._near0
-        elif self._near100 < pc < 100:
-            pc = self._near100
-        else:
-            pc = round(pc, self._precision)
-        return "%.*f" % (self._precision, pc)
-
-    @classmethod
-    def pc_str_width(cls):
-        """How many characters wide can pc_covered_str be?"""
-        width = 3   # "100"
-        if cls._precision > 0:
-            width += 1 + cls._precision
-        return width
-
-    @property
-    def ratio_covered(self):
-        """Return a numerator and denominator for the coverage ratio."""
-        numerator = self.n_executed + self.n_executed_branches
-        denominator = self.n_statements + self.n_branches
-        return numerator, denominator
-
-    def __add__(self, other):
-        nums = Numbers()
-        nums.n_files = self.n_files + other.n_files
-        nums.n_statements = self.n_statements + other.n_statements
-        nums.n_excluded = self.n_excluded + other.n_excluded
-        nums.n_missing = self.n_missing + other.n_missing
-        nums.n_branches = self.n_branches + other.n_branches
-        nums.n_partial_branches = (
-            self.n_partial_branches + other.n_partial_branches
-            )
-        nums.n_missing_branches = (
-            self.n_missing_branches + other.n_missing_branches
-            )
-        return nums
-
-    def __radd__(self, other):
-        # Implementing 0+Numbers allows us to sum() a list of Numbers.
-        if other == 0:
-            return self
-        return NotImplemented
-
-#
-# eflag: FileType = Python2
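`Numbers.pc_covered_str` above implements a deliberate display rule: "0" and "100" are reserved for truly zero and truly complete coverage, so near-misses are clamped just inside the range instead of being rounded across it. A standalone sketch of that rule:

```python
def pc_str(pc, precision=0):
    """Format a coverage percentage without ever rounding to 0 or 100."""
    near0 = 1.0 / 10 ** precision
    near100 = 100.0 - near0
    if 0 < pc < near0:
        pc = near0          # tiny but nonzero: show the smallest step
    elif near100 < pc < 100:
        pc = near100        # almost complete: never display "100"
    else:
        pc = round(pc, precision)
    return "%.*f" % (precision, pc)
```

So with the default precision, 99.6% renders as "99", not "100".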
--- a/DebugClients/Python/coverage/summary.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,124 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Summary reporting"""
-
-import sys
-
-from coverage import env
-from coverage.report import Reporter
-from coverage.results import Numbers
-from coverage.misc import NotPython, CoverageException, output_encoding
-
-
-class SummaryReporter(Reporter):
-    """A reporter for writing the summary report."""
-
-    def __init__(self, coverage, config):
-        super(SummaryReporter, self).__init__(coverage, config)
-        self.branches = coverage.data.has_arcs()
-
-    def report(self, morfs, outfile=None):
-        """Writes a report summarizing coverage statistics per module.
-
-        `outfile` is a file object to write the summary to. It must be opened
-        for native strings (bytes on Python 2, Unicode on Python 3).
-
-        """
-        file_reporters = self.find_file_reporters(morfs)
-
-        # Prepare the formatting strings
-        max_name = max([len(fr.relative_filename()) for fr in file_reporters] + [5])
-        fmt_name = u"%%- %ds  " % max_name
-        fmt_err = u"%s   %s: %s"
-        fmt_skip_covered = u"\n%s file%s skipped due to complete coverage."
-
-        header = (fmt_name % "Name") + u" Stmts   Miss"
-        fmt_coverage = fmt_name + u"%6d %6d"
-        if self.branches:
-            header += u" Branch BrPart"
-            fmt_coverage += u" %6d %6d"
-        width100 = Numbers.pc_str_width()
-        header += u"%*s" % (width100+4, "Cover")
-        fmt_coverage += u"%%%ds%%%%" % (width100+3,)
-        if self.config.show_missing:
-            header += u"   Missing"
-            fmt_coverage += u"   %s"
-        rule = u"-" * len(header)
-
-        if outfile is None:
-            outfile = sys.stdout
-
-        def writeout(line):
-            """Write a line to the output, adding a newline."""
-            if env.PY2:
-                line = line.encode(output_encoding())
-            outfile.write(line.rstrip())
-            outfile.write("\n")
-
-        # Write the header
-        writeout(header)
-        writeout(rule)
-
-        total = Numbers()
-        skipped_count = 0
-
-        for fr in file_reporters:
-            try:
-                analysis = self.coverage._analyze(fr)
-                nums = analysis.numbers
-                total += nums
-
-                if self.config.skip_covered:
-                    # Don't report on 100% files.
-                    no_missing_lines = (nums.n_missing == 0)
-                    no_missing_branches = (nums.n_partial_branches == 0)
-                    if no_missing_lines and no_missing_branches:
-                        skipped_count += 1
-                        continue
-
-                args = (fr.relative_filename(), nums.n_statements, nums.n_missing)
-                if self.branches:
-                    args += (nums.n_branches, nums.n_partial_branches)
-                args += (nums.pc_covered_str,)
-                if self.config.show_missing:
-                    missing_fmtd = analysis.missing_formatted()
-                    if self.branches:
-                        branches_fmtd = analysis.arcs_missing_formatted()
-                        if branches_fmtd:
-                            if missing_fmtd:
-                                missing_fmtd += ", "
-                            missing_fmtd += branches_fmtd
-                    args += (missing_fmtd,)
-                writeout(fmt_coverage % args)
-            except Exception:
-                report_it = not self.config.ignore_errors
-                if report_it:
-                    typ, msg = sys.exc_info()[:2]
-                    # NotPython is only raised by PythonFileReporter, which has a
-                    # should_be_python() method.
-                    if typ is NotPython and not fr.should_be_python():
-                        report_it = False
-                if report_it:
-                    writeout(fmt_err % (fr.relative_filename(), typ.__name__, msg))
-
-        if total.n_files > 1:
-            writeout(rule)
-            args = ("TOTAL", total.n_statements, total.n_missing)
-            if self.branches:
-                args += (total.n_branches, total.n_partial_branches)
-            args += (total.pc_covered_str,)
-            if self.config.show_missing:
-                args += ("",)
-            writeout(fmt_coverage % args)
-
-        if not total.n_files and not skipped_count:
-            raise CoverageException("No data to report.")
-
-        if self.config.skip_covered and skipped_count:
-            writeout(fmt_skip_covered % (skipped_count, 's' if skipped_count > 1 else ''))
-
-        return total.n_statements and total.pc_covered
-
-#
-# eflag: FileType = Python2
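The summary reporter above builds its table with printf-style format strings whose name column is sized from the longest filename. A simplified sketch of that construction (function name is illustrative):

```python
def build_formats(filenames, branches=False):
    """Derive the header and row format for the summary table."""
    # Name column is at least 5 wide, for the "Name" header itself.
    max_name = max([len(f) for f in filenames] + [5])
    fmt_name = "%%- %ds  " % max_name      # left-justified name column
    header = (fmt_name % "Name") + " Stmts   Miss"
    fmt_row = fmt_name + "%6d %6d"
    if branches:
        header += " Branch BrPart"
        fmt_row += " %6d %6d"
    return header, fmt_row

header, fmt_row = build_formats(["a.py", "pkg/b.py"])
row = fmt_row % ("a.py", 10, 2)
```

The double-`%` escaping lets the first interpolation bake the column width into the format string used for every row.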
--- a/DebugClients/Python/coverage/templite.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,293 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""A simple Python template renderer, for a nano-subset of Django syntax.
-
-For a detailed discussion of this code, see this chapter from 500 Lines:
-http://aosabook.org/en/500L/a-template-engine.html
-
-"""
-
-# Coincidentally named the same as http://code.activestate.com/recipes/496702/
-
-import re
-
-from coverage import env
-
-
-class TempliteSyntaxError(ValueError):
-    """Raised when a template has a syntax error."""
-    pass
-
-
-class TempliteValueError(ValueError):
-    """Raised when an expression won't evaluate in a template."""
-    pass
-
-
-class CodeBuilder(object):
-    """Build source code conveniently."""
-
-    def __init__(self, indent=0):
-        self.code = []
-        self.indent_level = indent
-
-    def __str__(self):
-        return "".join(str(c) for c in self.code)
-
-    def add_line(self, line):
-        """Add a line of source to the code.
-
-        Indentation and newline will be added for you, don't provide them.
-
-        """
-        self.code.extend([" " * self.indent_level, line, "\n"])
-
-    def add_section(self):
-        """Add a section, a sub-CodeBuilder."""
-        section = CodeBuilder(self.indent_level)
-        self.code.append(section)
-        return section
-
-    INDENT_STEP = 4      # PEP8 says so!
-
-    def indent(self):
-        """Increase the current indent for following lines."""
-        self.indent_level += self.INDENT_STEP
-
-    def dedent(self):
-        """Decrease the current indent for following lines."""
-        self.indent_level -= self.INDENT_STEP
-
-    def get_globals(self):
-        """Execute the code, and return a dict of globals it defines."""
-        # A check that the caller really finished all the blocks they started.
-        assert self.indent_level == 0
-        # Get the Python source as a single string.
-        python_source = str(self)
-        # Execute the source, defining globals, and return them.
-        global_namespace = {}
-        exec(python_source, global_namespace)
-        return global_namespace
-
-
-class Templite(object):
-    """A simple template renderer, for a nano-subset of Django syntax.
-
-    Supported constructs are extended variable access::
-
-        {{var.modifier.modifier|filter|filter}}
-
-    loops::
-
-        {% for var in list %}...{% endfor %}
-
-    and ifs::
-
-        {% if var %}...{% endif %}
-
-    Comments are within curly-hash markers::
-
-        {# This will be ignored #}
-
-    Any of these constructs can have a hyphen at the end (`-}}`, `-%}`, `-#}`),
-    which will collapse the whitespace following the tag.
-
-    Construct a Templite with the template text, then use `render` against a
-    dictionary context to create a finished string::
-
-        templite = Templite('''
-            <h1>Hello {{name|upper}}!</h1>
-            {% for topic in topics %}
-                <p>You are interested in {{topic}}.</p>
-            {% endfor %}
-            ''',
-            {'upper': str.upper},
-        )
-        text = templite.render({
-            'name': "Ned",
-            'topics': ['Python', 'Geometry', 'Juggling'],
-        })
-
-    """
-    def __init__(self, text, *contexts):
-        """Construct a Templite with the given `text`.
-
-        `contexts` are dictionaries of values to use for future renderings.
-        These are good for filters and global values.
-
-        """
-        self.context = {}
-        for context in contexts:
-            self.context.update(context)
-
-        self.all_vars = set()
-        self.loop_vars = set()
-
-        # We construct a function in source form, then compile it and hold onto
-        # it, and execute it to render the template.
-        code = CodeBuilder()
-
-        code.add_line("def render_function(context, do_dots):")
-        code.indent()
-        vars_code = code.add_section()
-        code.add_line("result = []")
-        code.add_line("append_result = result.append")
-        code.add_line("extend_result = result.extend")
-        if env.PY2:
-            code.add_line("to_str = unicode")
-        else:
-            code.add_line("to_str = str")
-
-        buffered = []
-
-        def flush_output():
-            """Force `buffered` to the code builder."""
-            if len(buffered) == 1:
-                code.add_line("append_result(%s)" % buffered[0])
-            elif len(buffered) > 1:
-                code.add_line("extend_result([%s])" % ", ".join(buffered))
-            del buffered[:]
-
-        ops_stack = []
-
-        # Split the text to form a list of tokens.
-        tokens = re.split(r"(?s)({{.*?}}|{%.*?%}|{#.*?#})", text)
-
-        squash = False
-
-        for token in tokens:
-            if token.startswith('{'):
-                start, end = 2, -2
-                squash = (token[-3] == '-')
-                if squash:
-                    end = -3
-
-                if token.startswith('{#'):
-                    # Comment: ignore it and move on.
-                    continue
-                elif token.startswith('{{'):
-                    # An expression to evaluate.
-                    expr = self._expr_code(token[start:end].strip())
-                    buffered.append("to_str(%s)" % expr)
-                elif token.startswith('{%'):
-                    # Action tag: split into words and parse further.
-                    flush_output()
-
-                    words = token[start:end].strip().split()
-                    if words[0] == 'if':
-                        # An if statement: evaluate the expression to determine if.
-                        if len(words) != 2:
-                            self._syntax_error("Don't understand if", token)
-                        ops_stack.append('if')
-                        code.add_line("if %s:" % self._expr_code(words[1]))
-                        code.indent()
-                    elif words[0] == 'for':
-                        # A loop: iterate over expression result.
-                        if len(words) != 4 or words[2] != 'in':
-                            self._syntax_error("Don't understand for", token)
-                        ops_stack.append('for')
-                        self._variable(words[1], self.loop_vars)
-                        code.add_line(
-                            "for c_%s in %s:" % (
-                                words[1],
-                                self._expr_code(words[3])
-                            )
-                        )
-                        code.indent()
-                    elif words[0].startswith('end'):
-                        # Endsomething.  Pop the ops stack.
-                        if len(words) != 1:
-                            self._syntax_error("Don't understand end", token)
-                        end_what = words[0][3:]
-                        if not ops_stack:
-                            self._syntax_error("Too many ends", token)
-                        start_what = ops_stack.pop()
-                        if start_what != end_what:
-                            self._syntax_error("Mismatched end tag", end_what)
-                        code.dedent()
-                    else:
-                        self._syntax_error("Don't understand tag", words[0])
-            else:
-                # Literal content.  If it isn't empty, output it.
-                if squash:
-                    token = token.lstrip()
-                if token:
-                    buffered.append(repr(token))
-
-        if ops_stack:
-            self._syntax_error("Unmatched action tag", ops_stack[-1])
-
-        flush_output()
-
-        for var_name in self.all_vars - self.loop_vars:
-            vars_code.add_line("c_%s = context[%r]" % (var_name, var_name))
-
-        code.add_line('return "".join(result)')
-        code.dedent()
-        self._render_function = code.get_globals()['render_function']
-
-    def _expr_code(self, expr):
-        """Generate a Python expression for `expr`."""
-        if "|" in expr:
-            pipes = expr.split("|")
-            code = self._expr_code(pipes[0])
-            for func in pipes[1:]:
-                self._variable(func, self.all_vars)
-                code = "c_%s(%s)" % (func, code)
-        elif "." in expr:
-            dots = expr.split(".")
-            code = self._expr_code(dots[0])
-            args = ", ".join(repr(d) for d in dots[1:])
-            code = "do_dots(%s, %s)" % (code, args)
-        else:
-            self._variable(expr, self.all_vars)
-            code = "c_%s" % expr
-        return code
-
-    def _syntax_error(self, msg, thing):
-        """Raise a syntax error using `msg`, and showing `thing`."""
-        raise TempliteSyntaxError("%s: %r" % (msg, thing))
-
-    def _variable(self, name, vars_set):
-        """Track that `name` is used as a variable.
-
-        Adds the name to `vars_set`, a set of variable names.
-
-        Raises a syntax error if `name` is not a valid name.
-
-        """
-        if not re.match(r"[_a-zA-Z][_a-zA-Z0-9]*$", name):
-            self._syntax_error("Not a valid name", name)
-        vars_set.add(name)
-
-    def render(self, context=None):
-        """Render this template by applying it to `context`.
-
-        `context` is a dictionary of values to use in this rendering.
-
-        """
-        # Make the complete context we'll use.
-        render_context = dict(self.context)
-        if context:
-            render_context.update(context)
-        return self._render_function(render_context, self._do_dots)
-
-    def _do_dots(self, value, *dots):
-        """Evaluate dotted expressions at run-time."""
-        for dot in dots:
-            try:
-                value = getattr(value, dot)
-            except AttributeError:
-                try:
-                    value = value[dot]
-                except (TypeError, KeyError):
-                    raise TempliteValueError(
-                        "Couldn't evaluate %r.%s" % (value, dot)
-                    )
-            if callable(value):
-                value = value()
-        return value
-
-#
-# eflag: FileType = Python2
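The removed templite.py drives its whole parser from a single regex split that keeps the tag delimiters as tokens. A minimal, self-contained sketch of that tokenizing step (the regex is copied from the deleted code; `tokenize` is a hypothetical helper name, not part of the original API):

```python
import re

# The token-splitting regex from the removed templite.py: the capture
# group makes re.split() keep the {{...}}, {%...%} and {#...#} tags
# as separate tokens alongside the literal text between them.
TOKEN_RE = r"(?s)({{.*?}}|{%.*?%}|{#.*?#})"

def tokenize(text):
    """Split template text into literal and tag tokens, dropping empties."""
    return [tok for tok in re.split(TOKEN_RE, text) if tok]

tokens = tokenize("Hello {{name}}!{% if x %}yes{% endif %}{# note #}")
# -> ['Hello ', '{{name}}', '!', '{% if x %}', 'yes', '{% endif %}', '{# note #}']
```

The compile step in `Templite.__init__` then dispatches on the first two characters of each token, exactly as the deleted `for token in tokens:` loop does.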
--- a/DebugClients/Python/coverage/test_helpers.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,393 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""Mixin classes to help make good tests."""
-
-import atexit
-import collections
-import contextlib
-import os
-import random
-import shutil
-import sys
-import tempfile
-import textwrap
-
-from coverage.backunittest import TestCase
-from coverage.backward import StringIO, to_bytes
-
-
-class Tee(object):
-    """A file-like that writes to all the file-likes it has."""
-
-    def __init__(self, *files):
-        """Make a Tee that writes to all the files in `files`."""
-        self._files = files
-        if hasattr(files[0], "encoding"):
-            self.encoding = files[0].encoding
-
-    def write(self, data):
-        """Write `data` to all the files."""
-        for f in self._files:
-            f.write(data)
-
-    def flush(self):
-        """Flush the data on all the files."""
-        for f in self._files:
-            f.flush()
-
-    if 0:
-        # Use this if you need to use a debugger, though it makes some tests
-        # fail, I'm not sure why...
-        def __getattr__(self, name):
-            return getattr(self._files[0], name)
-
-
-@contextlib.contextmanager
-def change_dir(new_dir):
-    """Change directory, and then change back.
-
-    Use as a context manager, it will give you the new directory, and later
-    restore the old one.
-
-    """
-    old_dir = os.getcwd()
-    os.chdir(new_dir)
-    try:
-        yield os.getcwd()
-    finally:
-        os.chdir(old_dir)
-
-
-@contextlib.contextmanager
-def saved_sys_path():
-    """Save sys.path, and restore it later."""
-    old_syspath = sys.path[:]
-    try:
-        yield
-    finally:
-        sys.path = old_syspath
-
-
-def setup_with_context_manager(testcase, cm):
-    """Use a contextmanager to setUp a test case.
-
-    If you have a context manager you like::
-
-        with ctxmgr(a, b, c) as v:
-            # do something with v
-
-    and you want to have that effect for a test case, call this function from
-    your setUp, and it will start the context manager for your test, and end it
-    when the test is done::
-
-        def setUp(self):
-            self.v = setup_with_context_manager(self, ctxmgr(a, b, c))
-
-        def test_foo(self):
-            # do something with self.v
-
-    """
-    val = cm.__enter__()
-    testcase.addCleanup(cm.__exit__, None, None, None)
-    return val
-
-
-class ModuleAwareMixin(TestCase):
-    """A test case mixin that isolates changes to sys.modules."""
-
-    def setUp(self):
-        super(ModuleAwareMixin, self).setUp()
-
-        # Record sys.modules here so we can restore it in cleanup_modules.
-        self.old_modules = list(sys.modules)
-        self.addCleanup(self.cleanup_modules)
-
-    def cleanup_modules(self):
-        """Remove any new modules imported during the test run.
-
-        This lets us import the same source files for more than one test.
-
-        """
-        for m in [m for m in sys.modules if m not in self.old_modules]:
-            del sys.modules[m]
-
-
-class SysPathAwareMixin(TestCase):
-    """A test case mixin that isolates changes to sys.path."""
-
-    def setUp(self):
-        super(SysPathAwareMixin, self).setUp()
-        setup_with_context_manager(self, saved_sys_path())
-
-
-class EnvironmentAwareMixin(TestCase):
-    """A test case mixin that isolates changes to the environment."""
-
-    def setUp(self):
-        super(EnvironmentAwareMixin, self).setUp()
-
-        # Record environment variables that we changed with set_environ.
-        self.environ_undos = {}
-
-        self.addCleanup(self.cleanup_environ)
-
-    def set_environ(self, name, value):
-        """Set an environment variable `name` to be `value`.
-
-        The environment variable is set, and a record is kept that it was set,
-        so that `cleanup_environ` can restore its original value.
-
-        """
-        if name not in self.environ_undos:
-            self.environ_undos[name] = os.environ.get(name)
-        os.environ[name] = value
-
-    def cleanup_environ(self):
-        """Undo all the changes made by `set_environ`."""
-        for name, value in self.environ_undos.items():
-            if value is None:
-                del os.environ[name]
-            else:
-                os.environ[name] = value
-
-
-class StdStreamCapturingMixin(TestCase):
-    """A test case mixin that captures stdout and stderr."""
-
-    def setUp(self):
-        super(StdStreamCapturingMixin, self).setUp()
-
-        # Capture stdout and stderr so we can examine them in tests.
-        # nose keeps stdout from littering the screen, so we can safely Tee it,
-        # but it doesn't capture stderr, so we don't want to Tee stderr to the
-        # real stderr, since it will interfere with our nice field of dots.
-        old_stdout = sys.stdout
-        self.captured_stdout = StringIO()
-        sys.stdout = Tee(sys.stdout, self.captured_stdout)
-
-        old_stderr = sys.stderr
-        self.captured_stderr = StringIO()
-        sys.stderr = self.captured_stderr
-
-        self.addCleanup(self.cleanup_std_streams, old_stdout, old_stderr)
-
-    def cleanup_std_streams(self, old_stdout, old_stderr):
-        """Restore stdout and stderr."""
-        sys.stdout = old_stdout
-        sys.stderr = old_stderr
-
-    def stdout(self):
-        """Return the data written to stdout during the test."""
-        return self.captured_stdout.getvalue()
-
-    def stderr(self):
-        """Return the data written to stderr during the test."""
-        return self.captured_stderr.getvalue()
-
-
-class DelayedAssertionMixin(TestCase):
-    """A test case mixin that provides a `delayed_assertions` context manager.
-
-    Use it like this::
-
-        with self.delayed_assertions():
-            self.assertEqual(x, y)
-            self.assertEqual(z, w)
-
-    All of the assertions will run.  The failures will be displayed at the end
-    of the with-statement.
-
-    NOTE: this only works with some assertions.  These are known to work:
-
-        - `assertEqual(str, str)`
-
-        - `assertMultilineEqual(str, str)`
-
-    """
-    def __init__(self, *args, **kwargs):
-        super(DelayedAssertionMixin, self).__init__(*args, **kwargs)
-        # This mixin only works with assert methods that call `self.fail`.  In
-        # Python 2.7, `assertEqual` didn't, but we can do what Python 3 does,
-        # and use `assertMultiLineEqual` for comparing strings.
-        self.addTypeEqualityFunc(str, 'assertMultiLineEqual')
-        self._delayed_assertions = None
-
-    @contextlib.contextmanager
-    def delayed_assertions(self):
-        """The context manager: assert that we didn't collect any assertions."""
-        self._delayed_assertions = []
-        old_fail = self.fail
-        self.fail = self._delayed_fail
-        try:
-            yield
-        finally:
-            self.fail = old_fail
-        if self._delayed_assertions:
-            if len(self._delayed_assertions) == 1:
-                self.fail(self._delayed_assertions[0])
-            else:
-                self.fail(
-                    "{0} failed assertions:\n{1}".format(
-                        len(self._delayed_assertions),
-                        "\n".join(self._delayed_assertions),
-                    )
-                )
-
-    def _delayed_fail(self, msg=None):
-        """The stand-in for TestCase.fail during delayed_assertions."""
-        self._delayed_assertions.append(msg)
-
-
-class TempDirMixin(SysPathAwareMixin, ModuleAwareMixin, TestCase):
-    """A test case mixin that creates a temp directory and files in it.
-
-    Includes SysPathAwareMixin and ModuleAwareMixin, because making and using
-    temp directories like this will also need that kind of isolation.
-
-    """
-
-    # Our own setting: most of these tests run in their own temp directory.
-    # Set this to False in your subclass if you don't want a temp directory
-    # created.
-    run_in_temp_dir = True
-
-    # Set this if you aren't creating any files with make_file, but still want
-    # the temp directory.  This will stop the test behavior checker from
-    # complaining.
-    no_files_in_temp_dir = False
-
-    def setUp(self):
-        super(TempDirMixin, self).setUp()
-
-        if self.run_in_temp_dir:
-            # Create a temporary directory.
-            self.temp_dir = self.make_temp_dir("test_cover")
-            self.chdir(self.temp_dir)
-
-            # Modules should be importable from this temp directory.  We don't
-            # use '' because we make lots of different temp directories and
-            # nose's caching importer can get confused.  The full path prevents
-            # problems.
-            sys.path.insert(0, os.getcwd())
-
-        class_behavior = self.class_behavior()
-        class_behavior.tests += 1
-        class_behavior.temp_dir = self.run_in_temp_dir
-        class_behavior.no_files_ok = self.no_files_in_temp_dir
-
-        self.addCleanup(self.check_behavior)
-
-    def make_temp_dir(self, slug="test_cover"):
-        """Make a temp directory that is cleaned up when the test is done."""
-        name = "%s_%08d" % (slug, random.randint(0, 99999999))
-        temp_dir = os.path.join(tempfile.gettempdir(), name)
-        os.makedirs(temp_dir)
-        self.addCleanup(shutil.rmtree, temp_dir)
-        return temp_dir
-
-    def chdir(self, new_dir):
-        """Change directory, and change back when the test is done."""
-        old_dir = os.getcwd()
-        os.chdir(new_dir)
-        self.addCleanup(os.chdir, old_dir)
-
-    def check_behavior(self):
-        """Check that we did the right things."""
-
-        class_behavior = self.class_behavior()
-        if class_behavior.test_method_made_any_files:
-            class_behavior.tests_making_files += 1
-
-    def make_file(self, filename, text="", newline=None):
-        """Create a file for testing.
-
-        `filename` is the relative path to the file, including directories if
-        desired, which will be created if need be.
-
-        `text` is the content to create in the file, a native string (bytes in
-        Python 2, unicode in Python 3).
-
-        If `newline` is provided, it is a string that will be used as the line
-        endings in the created file, otherwise the line endings are as provided
-        in `text`.
-
-        Returns `filename`.
-
-        """
-        # Tests that call `make_file` should be run in a temp environment.
-        assert self.run_in_temp_dir
-        self.class_behavior().test_method_made_any_files = True
-
-        text = textwrap.dedent(text)
-        if newline:
-            text = text.replace("\n", newline)
-
-        # Make sure the directories are available.
-        dirs, _ = os.path.split(filename)
-        if dirs and not os.path.exists(dirs):
-            os.makedirs(dirs)
-
-        # Create the file.
-        with open(filename, 'wb') as f:
-            f.write(to_bytes(text))
-
-        return filename
-
-    # We run some tests in temporary directories, because they may need to make
-    # files for the tests. But this is expensive, so we can change per-class
-    # whether a temp directory is used or not.  It's easy to forget to set that
-    # option properly, so we track information about what the tests did, and
-    # then report at the end of the process on test classes that were set
-    # wrong.
-
-    class ClassBehavior(object):
-        """A value object to store per-class."""
-        def __init__(self):
-            self.tests = 0
-            self.skipped = 0
-            self.temp_dir = True
-            self.no_files_ok = False
-            self.tests_making_files = 0
-            self.test_method_made_any_files = False
-
-    # Map from class to info about how it ran.
-    class_behaviors = collections.defaultdict(ClassBehavior)
-
-    @classmethod
-    def report_on_class_behavior(cls):
-        """Called at process exit to report on class behavior."""
-        for test_class, behavior in cls.class_behaviors.items():
-            bad = ""
-            if behavior.tests <= behavior.skipped:
-                bad = ""
-            elif behavior.temp_dir and behavior.tests_making_files == 0:
-                if not behavior.no_files_ok:
-                    bad = "Inefficient"
-            elif not behavior.temp_dir and behavior.tests_making_files > 0:
-                bad = "Unsafe"
-
-            if bad:
-                if behavior.temp_dir:
-                    where = "in a temp directory"
-                else:
-                    where = "without a temp directory"
-                print(
-                    "%s: %s ran %d tests, %d made files %s" % (
-                        bad,
-                        test_class.__name__,
-                        behavior.tests,
-                        behavior.tests_making_files,
-                        where,
-                    )
-                )
-
-    def class_behavior(self):
-        """Get the ClassBehavior instance for this test."""
-        return self.class_behaviors[self.__class__]
-
-# When the process ends, find out about bad classes.
-atexit.register(TempDirMixin.report_on_class_behavior)
-
-#
-# eflag: FileType = Python2
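Several of the removed mixins rely on the same save/restore pattern that `change_dir` shows most clearly. A standalone sketch of how that context manager behaves (a temporary directory is used purely for illustration):

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def change_dir(new_dir):
    """Change directory, then restore the old one on exit
    (mirrors the helper in the removed test_helpers.py)."""
    old_dir = os.getcwd()
    os.chdir(new_dir)
    try:
        yield os.getcwd()
    finally:
        os.chdir(old_dir)

start = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    with change_dir(tmp) as cwd:
        # Inside the block the process really is in the new directory.
        assert os.getcwd() == cwd
# Afterwards the original working directory is restored.
assert os.getcwd() == start
```

The `setup_with_context_manager` helper above applies the same idea to any context manager by pairing `__enter__` with an `addCleanup` of `__exit__`.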
--- a/DebugClients/Python/coverage/version.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,36 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""The version and URL for coverage.py"""
-# This file is exec'ed in setup.py, don't import anything!
-
-# Same semantics as sys.version_info.
-version_info = (4, 1, 0, 'final', 0)
-
-
-def _make_version(major, minor, micro, releaselevel, serial):
-    """Create a readable version string from version_info tuple components."""
-    assert releaselevel in ['alpha', 'beta', 'candidate', 'final']
-    version = "%d.%d" % (major, minor)
-    if micro:
-        version += ".%d" % (micro,)
-    if releaselevel != 'final':
-        short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
-        version += "%s%d" % (short, serial)
-    return version
-
-
-def _make_url(major, minor, micro, releaselevel, serial):
-    """Make the URL people should start at for this version of coverage.py."""
-    url = "https://coverage.readthedocs.io"
-    if releaselevel != 'final':
-        # For pre-releases, use a version-specific URL.
-        url += "/en/coverage-" + _make_version(major, minor, micro, releaselevel, serial)
-    return url
-
-
-__version__ = _make_version(*version_info)
-__url__ = _make_url(*version_info)
-
-#
-# eflag: FileType = Python2
--- a/DebugClients/Python/coverage/xmlreport.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,221 +0,0 @@
-# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
-# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
-
-"""XML reporting for coverage.py"""
-
-import os
-import os.path
-import sys
-import time
-import xml.dom.minidom
-
-from coverage import env
-from coverage import __url__, __version__, files
-from coverage.backward import iitems
-from coverage.misc import isolate_module
-from coverage.report import Reporter
-
-os = isolate_module(os)
-
-
-DTD_URL = (
-    'https://raw.githubusercontent.com/cobertura/web/'
-    'f0366e5e2cf18f111cbd61fc34ef720a6584ba02'
-    '/htdocs/xml/coverage-03.dtd'
-)
-
-
-def rate(hit, num):
-    """Return the fraction of `hit`/`num`, as a string."""
-    if num == 0:
-        return "1"
-    else:
-        return "%.4g" % (float(hit) / num)
-
-
-class XmlReporter(Reporter):
-    """A reporter for writing Cobertura-style XML coverage results."""
-
-    def __init__(self, coverage, config):
-        super(XmlReporter, self).__init__(coverage, config)
-
-        self.source_paths = set()
-        if config.source:
-            for src in config.source:
-                if os.path.exists(src):
-                    self.source_paths.add(files.canonical_filename(src))
-        self.packages = {}
-        self.xml_out = None
-        self.has_arcs = coverage.data.has_arcs()
-
-    def report(self, morfs, outfile=None):
-        """Generate a Cobertura-compatible XML report for `morfs`.
-
-        `morfs` is a list of modules or file names.
-
-        `outfile` is a file object to write the XML to.
-
-        """
-        # Initial setup.
-        outfile = outfile or sys.stdout
-
-        # Create the DOM that will store the data.
-        impl = xml.dom.minidom.getDOMImplementation()
-        self.xml_out = impl.createDocument(None, "coverage", None)
-
-        # Write header stuff.
-        xcoverage = self.xml_out.documentElement
-        xcoverage.setAttribute("version", __version__)
-        xcoverage.setAttribute("timestamp", str(int(time.time()*1000)))
-        xcoverage.appendChild(self.xml_out.createComment(
-            " Generated by coverage.py: %s " % __url__
-            ))
-        xcoverage.appendChild(self.xml_out.createComment(" Based on %s " % DTD_URL))
-
-        # Call xml_file for each file in the data.
-        self.report_files(self.xml_file, morfs)
-
-        xsources = self.xml_out.createElement("sources")
-        xcoverage.appendChild(xsources)
-
-        # Populate the XML DOM with the source info.
-        for path in sorted(self.source_paths):
-            xsource = self.xml_out.createElement("source")
-            xsources.appendChild(xsource)
-            txt = self.xml_out.createTextNode(path)
-            xsource.appendChild(txt)
-
-        lnum_tot, lhits_tot = 0, 0
-        bnum_tot, bhits_tot = 0, 0
-
-        xpackages = self.xml_out.createElement("packages")
-        xcoverage.appendChild(xpackages)
-
-        # Populate the XML DOM with the package info.
-        for pkg_name, pkg_data in sorted(iitems(self.packages)):
-            class_elts, lhits, lnum, bhits, bnum = pkg_data
-            xpackage = self.xml_out.createElement("package")
-            xpackages.appendChild(xpackage)
-            xclasses = self.xml_out.createElement("classes")
-            xpackage.appendChild(xclasses)
-            for _, class_elt in sorted(iitems(class_elts)):
-                xclasses.appendChild(class_elt)
-            xpackage.setAttribute("name", pkg_name.replace(os.sep, '.'))
-            xpackage.setAttribute("line-rate", rate(lhits, lnum))
-            if self.has_arcs:
-                branch_rate = rate(bhits, bnum)
-            else:
-                branch_rate = "0"
-            xpackage.setAttribute("branch-rate", branch_rate)
-            xpackage.setAttribute("complexity", "0")
-
-            lnum_tot += lnum
-            lhits_tot += lhits
-            bnum_tot += bnum
-            bhits_tot += bhits
-
-        xcoverage.setAttribute("line-rate", rate(lhits_tot, lnum_tot))
-        if self.has_arcs:
-            branch_rate = rate(bhits_tot, bnum_tot)
-        else:
-            branch_rate = "0"
-        xcoverage.setAttribute("branch-rate", branch_rate)
-
-        # Use the DOM to write the output file.
-        out = self.xml_out.toprettyxml()
-        if env.PY2:
-            out = out.encode("utf8")
-        outfile.write(out)
-
-        # Return the total percentage.
-        denom = lnum_tot + bnum_tot
-        if denom == 0:
-            pct = 0.0
-        else:
-            pct = 100.0 * (lhits_tot + bhits_tot) / denom
-        return pct
-
-    def xml_file(self, fr, analysis):
-        """Add to the XML report for a single file."""
-
-        # Create the 'lines' and 'package' XML elements, which
-        # are populated later.  Note that a package == a directory.
-        filename = fr.filename.replace("\\", "/")
-        for source_path in self.source_paths:
-            if filename.startswith(source_path.replace("\\", "/") + "/"):
-                rel_name = filename[len(source_path)+1:]
-                break
-        else:
-            rel_name = fr.relative_filename()
-
-        dirname = os.path.dirname(rel_name) or "."
-        dirname = "/".join(dirname.split("/")[:self.config.xml_package_depth])
-        package_name = dirname.replace("/", ".")
-
-        if rel_name != fr.filename:
-            self.source_paths.add(fr.filename[:-len(rel_name)].rstrip(r"\/"))
-        package = self.packages.setdefault(package_name, [{}, 0, 0, 0, 0])
-
-        xclass = self.xml_out.createElement("class")
-
-        xclass.appendChild(self.xml_out.createElement("methods"))
-
-        xlines = self.xml_out.createElement("lines")
-        xclass.appendChild(xlines)
-
-        xclass.setAttribute("name", os.path.relpath(rel_name, dirname))
-        xclass.setAttribute("filename", fr.relative_filename().replace("\\", "/"))
-        xclass.setAttribute("complexity", "0")
-
-        branch_stats = analysis.branch_stats()
-        missing_branch_arcs = analysis.missing_branch_arcs()
-
-        # For each statement, create an XML 'line' element.
-        for line in sorted(analysis.statements):
-            xline = self.xml_out.createElement("line")
-            xline.setAttribute("number", str(line))
-
-            # Q: can we get info about the number of times a statement is
-            # executed?  If so, that should be recorded here.
-            xline.setAttribute("hits", str(int(line not in analysis.missing)))
-
-            if self.has_arcs:
-                if line in branch_stats:
-                    total, taken = branch_stats[line]
-                    xline.setAttribute("branch", "true")
-                    xline.setAttribute(
-                        "condition-coverage",
-                        "%d%% (%d/%d)" % (100*taken/total, taken, total)
-                        )
-                if line in missing_branch_arcs:
-                    annlines = ["exit" if b < 0 else str(b) for b in missing_branch_arcs[line]]
-                    xline.setAttribute("missing-branches", ",".join(annlines))
-            xlines.appendChild(xline)
-
-        class_lines = len(analysis.statements)
-        class_hits = class_lines - len(analysis.missing)
-
-        if self.has_arcs:
-            class_branches = sum(t for t, k in branch_stats.values())
-            missing_branches = sum(t - k for t, k in branch_stats.values())
-            class_br_hits = class_branches - missing_branches
-        else:
-            class_branches = 0.0
-            class_br_hits = 0.0
-
-        # Finalize the statistics that are collected in the XML DOM.
-        xclass.setAttribute("line-rate", rate(class_hits, class_lines))
-        if self.has_arcs:
-            branch_rate = rate(class_br_hits, class_branches)
-        else:
-            branch_rate = "0"
-        xclass.setAttribute("branch-rate", branch_rate)
-
-        package[0][rel_name] = xclass
-        package[1] += class_hits
-        package[2] += class_lines
-        package[3] += class_br_hits
-        package[4] += class_branches
-
-#
-# eflag: FileType = Python2
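The chunk above fills the `line-rate` and `branch-rate` attributes via a `rate()` helper. As a minimal sketch (the exact formatting in coverage.py's own helper may differ), the computation looks like this:

```python
def rate(hit, num):
    # Sketch of the rate() helper used to fill "line-rate"/"branch-rate"
    # above: the fraction of `num` items that were hit, as a short string.
    # A zero denominator counts as fully covered.
    if num == 0:
        return "1"
    return "%.4g" % (float(hit) / num)

# 80 of 100 statements executed -> line-rate "0.8"
assert rate(80, 100) == "0.8"
```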
--- a/DebugClients/Python/eric6dbgstub.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,95 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing a debugger stub for remote debugging.
-"""
-
-import os
-import sys
-import distutils.sysconfig
-
-from eric6config import getConfig
-
-debugger = None
-__scriptname = None
-
-modDir = distutils.sysconfig.get_python_lib(True)
-ericpath = os.getenv('ERICDIR', getConfig('ericDir'))
-
-if ericpath not in sys.path:
-    sys.path.insert(-1, ericpath)
-    
-
-def initDebugger(kind="standard"):
-    """
-    Module function to initialize a debugger for remote debugging.
-    
-    @param kind type of debugger ("standard" or "threads")
-    @return flag indicating success (boolean)
-    @exception ValueError raised to indicate an invalid debugger kind
-        was requested
-    """
-    global debugger
-    res = 1
-    try:
-        if kind == "standard":
-            import DebugClient
-            debugger = DebugClient.DebugClient()
-        elif kind == "threads":
-            import DebugClientThreads
-            debugger = DebugClientThreads.DebugClientThreads()
-        else:
-            raise ValueError
-    except ImportError:
-        debugger = None
-        res = 0
-        
-    return res
-
-
-def runcall(func, *args):
-    """
-    Module function mimicing the Pdb interface.
-    
-    @param func function to be called (function object)
-    @param *args arguments being passed to func
-    @return the function result
-    """
-    global debugger, __scriptname
-    return debugger.run_call(__scriptname, func, *args)
-    
-
-def setScriptname(name):
-    """
-    Module function to set the scriptname to be reported back to the IDE.
-    
-    @param name absolute pathname of the script (string)
-    """
-    global __scriptname
-    __scriptname = name
-
-
-def startDebugger(enableTrace=True, exceptions=True,
-                  tracePython=False, redirect=True):
-    """
-    Module function used to start the remote debugger.
-    
-    @keyparam enableTrace flag to enable the tracing function (boolean)
-    @keyparam exceptions flag to enable exception reporting of the IDE
-        (boolean)
-    @keyparam tracePython flag to enable tracing into the Python library
-        (boolean)
-    @keyparam redirect flag indicating redirection of stdin, stdout and
-        stderr (boolean)
-    """
-    global debugger
-    if debugger:
-        debugger.startDebugger(enableTrace=enableTrace, exceptions=exceptions,
-                               tracePython=tracePython, redirect=redirect)
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
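The removed stub above dispatches on a debugger "kind" and forwards `runcall()` to the selected client. A hypothetical, self-contained sketch of that dispatch pattern (the `_DummyDebugger` class and client table are invented; only the `initDebugger()`/`runcall` shape mirrors the interface shown):

```python
# Hypothetical sketch of the stub's dispatch pattern; _DummyDebugger is
# invented and stands in for DebugClient/DebugClientThreads.

class _DummyDebugger(object):
    def run_call(self, scriptname, func, *args):
        # A real debug client would install its trace function first.
        return func(*args)


debugger = None


def initDebugger(kind="standard"):
    # Pick a client class by kind, as the removed stub does.
    global debugger
    clients = {"standard": _DummyDebugger, "threads": _DummyDebugger}
    if kind not in clients:
        raise ValueError("unknown debugger kind: %r" % kind)
    debugger = clients[kind]()
    return 1


initDebugger()
assert debugger.run_call("demo.py", sum, [1, 2, 3]) == 6
```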
--- a/DebugClients/Python/getpass.py	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,57 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (c) 2004 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
-#
-
-"""
-Module implementing utilities to get a password and/or the current user name.
-
-getpass(prompt) - prompt for a password, with echo turned off
-getuser() - get the user name from the environment or password database
-
-This module is a replacement for the one found in the Python distribution. It
-is to provide a debugger compatible variant of the a.m. functions.
-"""
-
-__all__ = ["getpass", "getuser"]
-
-
-def getuser():
-    """
-    Function to get the username from the environment or password database.
-
-    First try various environment variables, then the password
-    database.  This works on Windows as long as USERNAME is set.
-    
-    @return username (string)
-    """
-    # this is copied from the oroginal getpass.py
-    
-    import os
-
-    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
-        user = os.environ.get(name)
-        if user:
-            return user
-
-    # If this fails, the exception will "explain" why
-    import pwd
-    return pwd.getpwuid(os.getuid())[0]
-
-
-def getpass(prompt='Password: '):
-    """
-    Function to prompt for a password, with echo turned off.
-    
-    @param prompt Prompt to be shown to the user (string)
-    @return Password entered by the user (string)
-    """
-    return raw_input(prompt, 0)
-    
-unix_getpass = getpass
-win_getpass = getpass
-default_getpass = getpass
-
-#
-# eflag: FileType = Python2
-# eflag: noqa = M601, M702
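The removed `getuser()` above tries a fixed list of environment variables before consulting the password database. A testable sketch of that fallback chain (the function name and the explicit `environ` parameter are introduced here for illustration):

```python
def getuser_from_env(environ):
    # Mirrors the lookup order of the removed getuser() above, with the
    # environment passed in so the fallback chain can be exercised
    # without touching os.environ.
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = environ.get(name)
        if user:
            return user
    return None  # the real function falls back to the pwd database here

assert getuser_from_env({'USERNAME': 'alice'}) == 'alice'
assert getuser_from_env({'LOGNAME': 'bob', 'USERNAME': 'alice'}) == 'bob'
```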
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/AsyncFile.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,339 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing an asynchronous file like socket interface for the
+debugger.
+"""
+
+import socket
+
+from DebugUtilities import prepareJsonCommand
+
+
+def AsyncPendingWrite(file):
+    """
+    Module function to check for data to be written.
+    
+    @param file The file object to be checked (file)
+    @return Flag indicating if there is data wating (int)
+    """
+    try:
+        pending = file.pendingWrite()
+    except Exception:
+        pending = 0
+
+    return pending
+
+
+class AsyncFile(object):
+    """
+    Class wrapping a socket object with a file interface.
+    """
+    maxtries = 10
+    maxbuffersize = 1024 * 1024 * 4
+    
+    def __init__(self, sock, mode, name):
+        """
+        Constructor
+        
+        @param sock the socket object being wrapped
+        @param mode mode of this file (string)
+        @param name name of this file (string)
+        """
+        # Initialise the attributes.
+        self.closed = False
+        self.sock = sock
+        self.mode = mode
+        self.name = name
+        self.nWriteErrors = 0
+        self.encoding = "utf-8"
+
+        self.wpending = u''
+
+    def __checkMode(self, mode):
+        """
+        Private method to check the mode.
+        
+        This method checks whether an operation is permitted according to
+        the mode of the file. If it is not, an IOError is raised.
+        
+        @param mode the mode to be checked (string)
+        @exception IOError raised to indicate a bad file descriptor
+        """
+        if mode != self.mode:
+            raise IOError('[Errno 9] Bad file descriptor')
+
+    def __nWrite(self, n):
+        """
+        Private method to write a specific number of pending bytes.
+        
+        @param n the number of bytes to be written (int)
+        """
+        if n:
+            try:
+                buf = self.wpending[:n]
+                try:
+                    buf = buf.encode('utf-8', 'backslashreplace')
+                except (UnicodeEncodeError, UnicodeDecodeError):
+                    pass
+                self.sock.sendall(buf)
+                self.wpending = self.wpending[n:]
+                self.nWriteErrors = 0
+            except socket.error:
+                self.nWriteErrors += 1
+                if self.nWriteErrors > self.maxtries:
+                    self.wpending = u''  # delete all output
+
+    def pendingWrite(self):
+        """
+        Public method that returns the number of bytes waiting to be written.
+        
+        @return the number of bytes to be written (int)
+        """
+        return self.wpending.rfind('\n') + 1
+
+    def close(self, closeit=False):
+        """
+        Public method to close the file.
+        
+        @param closeit flag to indicate a close ordered by the debugger code
+            (boolean)
+        """
+        if closeit and not self.closed:
+            self.flush()
+            self.sock.close()
+            self.closed = True
+
+    def flush(self):
+        """
+        Public method to write all pending bytes.
+        """
+        self.__nWrite(len(self.wpending))
+
+    def isatty(self):
+        """
+        Public method to indicate whether a tty interface is supported.
+        
+        @return always false
+        """
+        return False
+
+    def fileno(self):
+        """
+        Public method returning the file number.
+        
+        @return file number (int)
+        """
+        try:
+            return self.sock.fileno()
+        except socket.error:
+            return -1
+
+    def readable(self):
+        """
+        Public method to check whether the stream is readable.
+        
+        @return flag indicating a readable stream (boolean)
+        """
+        return self.mode == "r"
+    
+    def read_p(self, size=-1):
+        """
+        Public method to read bytes from this file.
+        
+        @param size maximum number of bytes to be read (int)
+        @return the bytes read (any)
+        """
+        self.__checkMode('r')
+
+        if size < 0:
+            size = 20000
+
+        return self.sock.recv(size).decode('utf8', 'backslashreplace')
+
+    def read(self, size=-1):
+        """
+        Public method to read bytes from this file.
+        
+        @param size maximum number of bytes to be read (int)
+        @return the bytes read (any)
+        """
+        self.__checkMode('r')
+
+        buf = raw_input()
+        if size >= 0:
+            buf = buf[:size]
+        return buf
+
+    def readline_p(self, size=-1):
+        """
+        Public method to read a line from this file.
+        
+        <b>Note</b>: This method will not block and may return
+        only a part of a line if that is all that is available.
+        
+        @param size maximum number of bytes to be read (int)
+        @return one line of text up to size bytes (string)
+        """
+        self.__checkMode('r')
+
+        if size < 0:
+            size = 20000
+
+        # The integration of the debugger client event loop and the connection
+        # to the debugger relies on the two lines of the debugger command being
+        # delivered as two separate events.  Therefore we make sure we only
+        # read a line at a time.
+        line = self.sock.recv(size, socket.MSG_PEEK)
+
+        eol = line.find(b'\n')
+
+        if eol >= 0:
+            size = eol + 1
+        else:
+            size = len(line)
+
+        # Now we know how big the line is, read it for real.
+        return self.sock.recv(size).decode('utf8', 'backslashreplace')
+
+    def readlines(self, sizehint=-1):
+        """
+        Public method to read all lines from this file.
+        
+        @param sizehint hint of the number of bytes to be read (int)
+        @return list of lines read (list of strings)
+        """
+        self.__checkMode('r')
+
+        lines = []
+        room = sizehint
+
+        line = self.readline_p(room)
+        linelen = len(line)
+
+        while linelen > 0:
+            lines.append(line)
+
+            if sizehint >= 0:
+                room = room - linelen
+
+                if room <= 0:
+                    break
+
+            line = self.readline_p(room)
+            linelen = len(line)
+
+        return lines
+
+    def readline(self, sizehint=-1):
+        """
+        Public method to read one line from this file.
+        
+        @param sizehint hint of the number of bytes to be read (int)
+        @return one line read (string)
+        """
+        self.__checkMode('r')
+
+        line = raw_input() + '\n'
+        if sizehint >= 0:
+            line = line[:sizehint]
+        return line
+        
+    def seekable(self):
+        """
+        Public method to check whether the stream is seekable.
+        
+        @return flag indicating a seekable stream (boolean)
+        """
+        return False
+    
+    def seek(self, offset, whence=0):
+        """
+        Public method to move the filepointer.
+        
+        @param offset offset to seek for
+        @param whence where to seek from
+        @exception IOError This method is not supported and always raises an
+        IOError.
+        """
+        raise IOError('[Errno 29] Illegal seek')
+
+    def tell(self):
+        """
+        Public method to get the filepointer position.
+        
+        @exception IOError This method is not supported and always raises an
+        IOError.
+        """
+        raise IOError('[Errno 29] Illegal seek')
+
+    def truncate(self, size=-1):
+        """
+        Public method to truncate the file.
+        
+        @param size size to truncate to (integer)
+        @exception IOError This method is not supported and always raises an
+        IOError.
+        """
+        raise IOError('[Errno 29] Illegal seek')
+
+    def writable(self):
+        """
+        Public method to check whether a stream is writable.
+        
+        @return flag indicating a writable stream (boolean)
+        """
+        return self.mode == "w"
+    
+    def write(self, s):
+        """
+        Public method to write a string to the file.
+        
+        @param s bytes to be written (string)
+        """
+        self.__checkMode('w')
+        
+        cmd = prepareJsonCommand("ClientOutput", {
+            "text": s,
+        })
+        self.write_p(cmd)
+    
+    def write_p(self, s):
+        """
+        Public method to write a string to the file.
+        
+        @param s text to be written (string)
+        @exception socket.error raised to indicate too many send attempts
+        """
+        self.__checkMode('w')
+        tries = 0
+        if not self.wpending:
+            self.wpending = s
+        elif len(self.wpending) + len(s) > self.maxbuffersize:
+            # flush wpending if it is too big
+            while self.wpending:
+                # if we have a persistent error in sending the data, an
+                # exception will be raised in __nWrite
+                self.flush()
+                tries += 1
+                if tries > self.maxtries:
+                    raise socket.error("Too many attempts to send data")
+            self.wpending = s
+        else:
+            self.wpending += s
+        self.__nWrite(self.pendingWrite())
+
+    def writelines(self, lines):
+        """
+        Public method to write a list of strings to the file.
+        
+        @param lines list of texts to be written (list of string)
+        """
+        self.write("".join(lines))
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
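The `readline_p()` method above relies on `MSG_PEEK` to size a line before consuming it, so two commands sent back to back are still delivered as two separate reads. A runnable sketch of that peek-then-read idiom over a local socket pair (Unix `socketpair()` assumed):

```python
import socket

def recv_line(sock, limit=20000):
    # Same idiom as readline_p() above: peek at the pending data, find
    # the newline, then consume exactly one line.
    peeked = sock.recv(limit, socket.MSG_PEEK)
    eol = peeked.find(b'\n')
    size = eol + 1 if eol >= 0 else len(peeked)
    return sock.recv(size)

a, b = socket.socketpair()
a.sendall(b'first\nsecond\n')
assert recv_line(b) == b'first\n'   # only one line is consumed
assert recv_line(b) == b'second\n'  # the second arrives separately
a.close()
b.close()
```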
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/AsyncIO.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,88 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing a base class of an asynchronous interface for the debugger.
+"""
+
+# TODO: delete this file
+class AsyncIO(object):
+    """
+    Class implementing asynchronous reading and writing.
+    """
+    def __init__(self):
+        """
+        Constructor
+        """
+        # There is no connection yet.
+        self.disconnect()
+
+    def disconnect(self):
+        """
+        Public method to disconnect any current connection.
+        """
+        self.readfd = None
+        self.writefd = None
+
+    def setDescriptors(self, rfd, wfd):
+        """
+        Public method called to set the descriptors for the connection.
+        
+        @param rfd file descriptor of the input file (int)
+        @param wfd file descriptor of the output file (int)
+        """
+        self.rbuf = ''
+        self.readfd = rfd
+
+        self.wbuf = ''
+        self.writefd = wfd
+
+    def readReady(self, fd):
+        """
+        Public method called when there is data ready to be read.
+        
+        @param fd file descriptor of the file that has data to be read (int)
+        """
+        try:
+            got = self.readfd.readline_p()
+        except Exception:
+            return
+
+        if len(got) == 0:
+            self.sessionClose()
+            return
+
+        self.rbuf = self.rbuf + got
+
+        # Call handleLine for the line if it is complete.
+        eol = self.rbuf.find('\n')
+
+        while eol >= 0:
+            s = self.rbuf[:eol + 1]
+            self.rbuf = self.rbuf[eol + 1:]
+            self.handleLine(s)
+            eol = self.rbuf.find('\n')
+
+    def writeReady(self, fd):
+        """
+        Public method called when we are ready to write data.
+        
+        @param fd file descriptor of the file that has data to be written (int)
+        """
+        self.writefd.write(self.wbuf)
+        self.writefd.flush()
+        self.wbuf = ''
+
+    def write(self, s):
+        """
+        Public method to write a string.
+        
+        @param s the data to be written (string)
+        """
+        self.wbuf = self.wbuf + s
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
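`readReady()` above accumulates received data in `rbuf` and hands complete lines to `handleLine()`, keeping any partial trailing line for the next read. That buffering logic, extracted into a standalone function (the name is introduced here for illustration):

```python
def split_complete_lines(buf, chunk):
    # Mirrors readReady() above: append the newly received chunk, then
    # peel off every complete '\n'-terminated line, keeping a partial
    # trailing line in the buffer for the next read.
    buf = buf + chunk
    lines = []
    eol = buf.find('\n')
    while eol >= 0:
        lines.append(buf[:eol + 1])
        buf = buf[eol + 1:]
        eol = buf.find('\n')
    return buf, lines

rest, lines = split_complete_lines('', 'one\ntwo\npart')
assert lines == ['one\n', 'two\n']
assert rest == 'part'  # carried over until the rest of the line arrives
```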
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DCTestResult.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,131 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing a TestResult derivative for the eric6 debugger.
+"""
+
+import select
+from unittest import TestResult
+
+
+class DCTestResult(TestResult):
+    """
+    A TestResult derivative to work with eric6's debug client.
+    
+    For more details see unittest.py of the standard python distribution.
+    """
+    def __init__(self, dbgClient):
+        """
+        Constructor
+        
+        @param dbgClient reference to the debug client
+        @type DebugClientBase
+        """
+        TestResult.__init__(self)
+        self.__dbgClient = dbgClient
+        
+    def addFailure(self, test, err):
+        """
+        Public method called if a test failed.
+        
+        @param test Reference to the test object
+        @param err The error traceback
+        """
+        TestResult.addFailure(self, test, err)
+        tracebackLines = self._exc_info_to_string(err, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTTestFailed", {
+            "testname": str(test),
+            "traceback": tracebackLines,
+            "id": test.id(),
+        })
+        
+    def addError(self, test, err):
+        """
+        Public method called if a test errored.
+        
+        @param test Reference to the test object
+        @param err The error traceback
+        """
+        TestResult.addError(self, test, err)
+        tracebackLines = self._exc_info_to_string(err, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTTestErrored", {
+            "testname": str(test),
+            "traceback": tracebackLines,
+            "id": test.id(),
+        })
+        
+    def addSkip(self, test, reason):
+        """
+        Public method called if a test was skipped.
+        
+        @param test reference to the test object
+        @param reason reason for skipping the test (string)
+        """
+        TestResult.addSkip(self, test, reason)
+        self.__dbgClient.sendJsonCommand("ResponseUTTestSkipped", {
+            "testname": str(test),
+            "reason": reason,
+            "id": test.id(),
+        })
+        
+    def addExpectedFailure(self, test, err):
+        """
+        Public method called if a test failed as expected.
+        
+        @param test reference to the test object
+        @param err error traceback
+        """
+        TestResult.addExpectedFailure(self, test, err)
+        tracebackLines = self._exc_info_to_string(err, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTTestFailedExpected", {
+            "testname": str(test),
+            "traceback": tracebackLines,
+            "id": test.id(),
+        })
+        
+    def addUnexpectedSuccess(self, test):
+        """
+        Public method called if a test succeeded unexpectedly.
+        
+        @param test reference to the test object
+        """
+        TestResult.addUnexpectedSuccess(self, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTTestSucceededUnexpected", {
+            "testname": str(test),
+            "id": test.id(),
+        })
+        
+    def startTest(self, test):
+        """
+        Public method called at the start of a test.
+        
+        @param test Reference to the test object
+        """
+        TestResult.startTest(self, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTStartTest", {
+            "testname": str(test),
+            "description": test.shortDescription(),
+        })
+
+    def stopTest(self, test):
+        """
+        Public method called at the end of a test.
+        
+        @param test Reference to the test object
+        """
+        TestResult.stopTest(self, test)
+        self.__dbgClient.sendJsonCommand("ResponseUTStopTest", {})
+        
+        # ensure that pending input is processed
+        rrdy, wrdy, xrdy = select.select(
+            [self.__dbgClient.readstream], [], [], 0.01)
+
+        if self.__dbgClient.readstream in rrdy:
+            self.__dbgClient.readReady(self.__dbgClient.readstream)
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
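Every hook in `DCTestResult` follows the same pattern: call the base class, then emit an event to the IDE. A minimal sketch of that pattern using the standard `unittest.TestResult` API (events are appended to a list here instead of being sent via `sendJsonCommand`; the class names are invented):

```python
import unittest

class RecordingResult(unittest.TestResult):
    # Sketch of DCTestResult's forwarding pattern: each hook calls the
    # base class first, then emits an event.
    def __init__(self):
        unittest.TestResult.__init__(self)
        self.events = []

    def startTest(self, test):
        unittest.TestResult.startTest(self, test)
        self.events.append(("start", str(test)))

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        self.events.append(("failed", self._exc_info_to_string(err, test)))


class _Demo(unittest.TestCase):
    def test_fails(self):
        self.assertEqual(1, 2)


result = RecordingResult()
unittest.TestLoader().loadTestsFromTestCase(_Demo).run(result)
assert [kind for kind, _ in result.events] == ["start", "failed"]
```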
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugBase.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,905 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing the debug base class.
+"""
+
+import sys
+import bdb
+import os
+import types
+import atexit
+import inspect
+import ctypes
+from inspect import CO_GENERATOR
+
+gRecursionLimit = 64
+
+
+def printerr(s):
+    """
+    Module function used for debugging the debug client.
+    
+    @param s data to be printed
+    """
+    sys.__stderr__.write('%s\n' % unicode(s))
+    sys.__stderr__.flush()
+
+
+def setRecursionLimit(limit):
+    """
+    Module function to set the recursion limit.
+    
+    @param limit recursion limit (integer)
+    """
+    global gRecursionLimit
+    gRecursionLimit = limit
+
+
+class DebugBase(bdb.Bdb):
+    """
+    Class implementing base class of the debugger.
+
+    Provides simple wrapper methods around bdb for the 'owning' client to
+    call to step etc.
+    """
+    def __init__(self, dbgClient):
+        """
+        Constructor
+        
+        @param dbgClient the owning client
+        """
+        bdb.Bdb.__init__(self)
+
+        self._dbgClient = dbgClient
+        self._mainThread = True
+        
+        self.breaks = self._dbgClient.breakpoints
+        
+        self.__event = ""
+        self.__isBroken = False
+        self.cFrame = None
+        
+        # current frame we are at
+        self.currentFrame = None
+        
+        # frame that we are stepping in, can be different than currentFrame
+        self.stepFrame = None
+        
+        # provide a hook to perform a hard breakpoint
+        # Use it like this:
+        # if hasattr(sys, 'breakpoint'): sys.breakpoint()
+        sys.breakpoint = self.set_trace
+        
+        # initialize parent
+        bdb.Bdb.reset(self)
+        
+        self.__recursionDepth = -1
+        self.setRecursionDepth(inspect.currentframe())
+    
+    def getCurrentFrame(self):
+        """
+        Public method to return the current frame.
+        
+        @return the current frame
+        """
+        return self.currentFrame
+    
+    def getFrameLocals(self, frmnr=0):
+        """
+        Public method to return the locals dictionary of the current frame
+        or a frame below.
+        
+        @keyparam frmnr distance of frame to get locals dictionary of. 0 is
+            the current frame (int)
+        @return locals dictionary of the frame
+        """
+        f = self.currentFrame
+        while f is not None and frmnr > 0:
+            f = f.f_back
+            frmnr -= 1
+        return f.f_locals
+    
+    def storeFrameLocals(self, frmnr=0):
+        """
+        Public method to store the locals into the frame, so an access to
+        frame.f_locals returns the last data.
+        
+        @keyparam frmnr distance of frame to store locals dictionary to. 0 is
+            the current frame (int)
+        """
+        cf = self.currentFrame
+        while cf is not None and frmnr > 0:
+            cf = cf.f_back
+            frmnr -= 1
+        ctypes.pythonapi.PyFrame_LocalsToFast(
+            ctypes.py_object(cf),
+            ctypes.c_int(0))
+    
+    def step(self, traceMode):
+        """
+        Public method to perform a step operation in this thread.
+        
+        @param traceMode If it is non-zero, then the step is a step into,
+              otherwise it is a step over.
+        """
+        self.stepFrame = self.currentFrame
+        
+        if traceMode:
+            self.currentFrame = None
+            self.set_step()
+        else:
+            self.set_next(self.currentFrame)
+    
+    def stepOut(self):
+        """
+        Public method to perform a step out of the current call.
+        """
+        self.stepFrame = self.currentFrame
+        self.set_return(self.currentFrame)
+    
+    def go(self, special):
+        """
+        Public method to resume the thread.
+
+        It resumes the thread stopping only at breakpoints or exceptions.
+        
+        @param special flag indicating a special continue operation
+        """
+        self.currentFrame = None
+        self.set_continue(special)
+    
+    def setRecursionDepth(self, frame):
+        """
+        Public method to determine the current recursion depth.
+        
+        @param frame The current stack frame.
+        """
+        self.__recursionDepth = 0
+        while frame is not None:
+            self.__recursionDepth += 1
+            frame = frame.f_back
+    
+    def profile(self, frame, event, arg):
+        """
+        Public method used to trace calls and returns independently of the
+        debugger trace function.
+        
+        @param frame current stack frame.
+        @param event trace event (string)
+        @param arg arguments
+        @exception RuntimeError raised to indicate too many recursions
+        """
+        if event == 'return':
+            self.cFrame = frame.f_back
+            self.__recursionDepth -= 1
+            self.__sendCallTrace(event, frame, self.cFrame)
+        elif event == 'call':
+            self.__sendCallTrace(event, self.cFrame, frame)
+            self.cFrame = frame
+            self.__recursionDepth += 1
+            if self.__recursionDepth > gRecursionLimit:
+                raise RuntimeError(
+                    'maximum recursion depth exceeded\n'
+                    '(offending frame is two down the stack)')
+    
+    def __sendCallTrace(self, event, fromFrame, toFrame):
+        """
+        Private method to send a call/return trace.
+        
+        @param event trace event (string)
+        @param fromFrame originating frame (frame)
+        @param toFrame destination frame (frame)
+        """
+        if self._dbgClient.callTraceEnabled:
+            if not self.__skip_it(fromFrame) and not self.__skip_it(toFrame):
+                if event in ["call", "return"]:
+                    fr = fromFrame
+                    # TODO: change from and to info to a dictionary
+                    fromStr = "%s:%s:%s" % (
+                        self._dbgClient.absPath(self.fix_frame_filename(fr)),
+                        fr.f_lineno,
+                        fr.f_code.co_name)
+                    fr = toFrame
+                    toStr = "%s:%s:%s" % (
+                        self._dbgClient.absPath(self.fix_frame_filename(fr)),
+                        fr.f_lineno,
+                        fr.f_code.co_name)
+                    self._dbgClient.sendCallTrace(event, fromStr, toStr)
+    
+    def trace_dispatch(self, frame, event, arg):
+        """
+        Public method reimplemented from bdb.py to do some special things.
+        
+        This specialty is to check the connection to the debug server
+        for new events (i.e. new breakpoints) while we are going through
+        the code.
+        
+        @param frame The current stack frame.
+        @param event The trace event (string)
+        @param arg The arguments
+        @return local trace function
+        """
+        if self.quitting:
+            return  # None
+        
+        # give the client a chance to push through new break points.
+        self._dbgClient.eventPoll()
+        
+        self.__event = event
+        self.__isBroken = False
+        
+        if event == 'line':
+            return self.dispatch_line(frame)
+        if event == 'call':
+            return self.dispatch_call(frame, arg)
+        if event == 'return':
+            return self.dispatch_return(frame, arg)
+        if event == 'exception':
+            return self.dispatch_exception(frame, arg)
+        if event == 'c_call':
+            return self.trace_dispatch
+        if event == 'c_exception':
+            return self.trace_dispatch
+        if event == 'c_return':
+            return self.trace_dispatch
+        print 'DebugBase.trace_dispatch: unknown debugging event:', repr(event) # __IGNORE_WARNING__
+        return self.trace_dispatch
+
+    def dispatch_line(self, frame):
+        """
+        Public method reimplemented from bdb.py to do some special things.
+        
+        This specialty is to check the connection to the debug server
+        for new events (i.e. new breakpoints) while we are going through
+        the code.
+        
+        @param frame The current stack frame.
+        @return local trace function
+        @exception bdb.BdbQuit raised to indicate the end of the debug session
+        """
+        if self.stop_here(frame) or self.break_here(frame):
+            self.user_line(frame)
+            if self.quitting:
+                raise bdb.BdbQuit
+        return self.trace_dispatch
+
+    def dispatch_return(self, frame, arg):
+        """
+        Public method reimplemented from bdb.py to handle passive mode cleanly.
+        
+        @param frame The current stack frame.
+        @param arg The arguments
+        @return local trace function
+        @exception bdb.BdbQuit raised to indicate the end of the debug session
+        """
+        if self.stop_here(frame) or frame == self.returnframe:
+            # Ignore return events in generator except when stepping.
+            if self.stopframe and frame.f_code.co_flags & CO_GENERATOR:
+                return self.trace_dispatch
+            self.user_return(frame, arg)
+            if self.quitting and not self._dbgClient.passive:
+                raise bdb.BdbQuit
+        return self.trace_dispatch
+
+    def dispatch_exception(self, frame, arg):
+        """
+        Public method reimplemented from bdb.py to always call user_exception.
+        
+        @param frame The current stack frame.
+        @param arg The arguments
+        @return local trace function
+        @exception bdb.BdbQuit raised to indicate the end of the debug session
+        """
+        if not self.__skip_it(frame):
+            # When stepping with next/until/return in a generator frame,
+            # skip the internal StopIteration exception (with no traceback)
+            # triggered by a subiterator run with the 'yield from'
+            # statement.
+            if not (frame.f_code.co_flags & CO_GENERATOR and
+                    arg[0] is StopIteration and arg[2] is None):
+                self.user_exception(frame, arg)
+                if self.quitting:
+                    raise bdb.BdbQuit
+        
+        # Stop at the StopIteration or GeneratorExit exception when the user
+        # has set stopframe in a generator by issuing a return command, or a
+        # next/until command at the last statement in the generator before the
+        # exception.
+        elif (self.stopframe and frame is not self.stopframe and
+                self.stopframe.f_code.co_flags & CO_GENERATOR and
+                arg[0] in (StopIteration, GeneratorExit)):
+            self.user_exception(frame, arg)
+            if self.quitting:
+                raise bdb.BdbQuit
+        
+        return self.trace_dispatch
+
+    def set_trace(self, frame=None):
+        """
+        Public method reimplemented from bdb.py to do some special setup.
+        
+        @param frame frame to start debugging from
+        """
+        bdb.Bdb.set_trace(self, frame)
+        sys.setprofile(self.profile)
+    
+    def set_continue(self, special):
+        """
+        Public method reimplemented from bdb.py to always get informed of
+        exceptions.
+        
+        @param special flag indicating a special continue operation
+        """
+        # Modified version of the one found in bdb.py
+        # Here we only set a new stop frame if it is a normal continue.
+        if not special:
+            self._set_stopinfo(self.botframe, None)
+        else:
+            self._set_stopinfo(self.stopframe, None)
+
+    def set_quit(self):
+        """
+        Public method to quit.
+        
+        It wraps call to bdb to clear the current frame properly.
+        """
+        self.currentFrame = None
+        sys.setprofile(None)
+        bdb.Bdb.set_quit(self)
+    
+    def fix_frame_filename(self, frame):
+        """
+        Public method used to fixup the filename for a given frame.
+        
+        The logic employed here is that if a module was loaded
+        from a .pyc file, then the correct .py to operate with
+        should be in the same path as the .pyc. The reason this
+        logic is needed is that when a .pyc file is generated, the
+        filename embedded in it (and thus readable from the frame's
+        code object) is the fully qualified filepath at the time the
+        .pyc was generated. If files are moved from machine to machine,
+        this can break debugging, as the .pyc will refer to the .py
+        on the original machine. Another case might be sharing
+        code over a network... This logic deals with that.
+        
+        @param frame the frame object
+        @return fixed up file name (string)
+        """
+        # get module name from __file__
+        if '__file__' in frame.f_globals and \
+           frame.f_globals['__file__'] and \
+           frame.f_globals['__file__'] == frame.f_code.co_filename:
+            root, ext = os.path.splitext(frame.f_globals['__file__'])
+            if ext in ['.pyc', '.py', '.py2', '.pyo']:
+                fixedName = root + '.py'
+                if os.path.exists(fixedName):
+                    return fixedName
+                
+                fixedName = root + '.py2'
+                if os.path.exists(fixedName):
+                    return fixedName
+
+        return frame.f_code.co_filename
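The extension fixup above can be exercised on its own. A minimal sketch (the helper name and sample paths are hypothetical; the `__file__` guard from the method is omitted, and the filesystem check is injectable so the rule can be tested without real files):

```python
import os

def fix_filename(filename, exists=os.path.exists):
    # Mirror the fixup above: map a compiled-file path back to the .py
    # (or .py2) source sitting alongside it, if such a file exists.
    root, ext = os.path.splitext(filename)
    if ext in ('.pyc', '.py', '.py2', '.pyo'):
        for candidate in (root + '.py', root + '.py2'):
            if exists(candidate):
                return candidate
    return filename
```

Injecting `exists` keeps the candidate order (`.py` before `.py2`) visible and testable.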
+
+    def set_watch(self, cond, temporary=0):
+        """
+        Public method to set a watch expression.
+        
+        @param cond expression of the watch expression (string)
+        @param temporary flag indicating a temporary watch expression (boolean)
+        """
+        bp = bdb.Breakpoint("Watch", 0, temporary, cond)
+        if cond.endswith('??created??') or cond.endswith('??changed??'):
+            bp.condition, bp.special = cond.split()
+        else:
+            bp.condition = cond
+            bp.special = ""
+        bp.values = {}
+        if "Watch" not in self.breaks:
+            self.breaks["Watch"] = 1
+        else:
+            self.breaks["Watch"] += 1
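The marker handling in set_watch separates the expression from its `??created??`/`??changed??` suffix. A standalone sketch of that parse (hypothetical function name; it uses `rsplit` so expressions containing spaces survive, which the plain `split()` above would not):

```python
def parse_watch(cond):
    # A watch spec is either a bare expression or an expression followed
    # by a ??created??/??changed?? marker, separated by whitespace.
    if cond.endswith('??created??') or cond.endswith('??changed??'):
        # rsplit on the last whitespace keeps spaces inside the
        # expression intact (a deviation from the split() above).
        condition, special = cond.rsplit(None, 1)
    else:
        condition, special = cond, ""
    return condition, special
```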
+    
+    def clear_watch(self, cond):
+        """
+        Public method to clear a watch expression.
+        
+        @param cond expression of the watch expression to be cleared (string)
+        """
+        try:
+            possibles = bdb.Breakpoint.bplist["Watch", 0]
+            for b in possibles:
+                if b.cond == cond:
+                    b.deleteMe()
+                    self.breaks["Watch"] -= 1
+                    if self.breaks["Watch"] == 0:
+                        del self.breaks["Watch"]
+                    break
+        except KeyError:
+            pass
+    
+    def get_watch(self, cond):
+        """
+        Public method to get a watch expression.
+        
+        @param cond expression of the watch expression to be retrieved (string)
+        @return reference to the watch point
+        """
+        possibles = bdb.Breakpoint.bplist["Watch", 0]
+        for b in possibles:
+            if b.cond == cond:
+                return b
+    
+    def __do_clearWatch(self, cond):
+        """
+        Private method called to clear a temporary watch expression.
+        
+        @param cond expression of the watch expression to be cleared (string)
+        """
+        self.clear_watch(cond)
+        self._dbgClient.sendClearTemporaryWatch(cond)
+
+    def __effective(self, frame):
+        """
+        Private method to determine whether a watch expression is effective.
+        
+        @param frame the current execution frame
+        @return tuple of the watch expression and a flag indicating that a
+            temporary watch expression may be deleted (bdb.Breakpoint, boolean)
+        """
+        possibles = bdb.Breakpoint.bplist["Watch", 0]
+        for b in possibles:
+            if b.enabled == 0:
+                continue
+            if not b.cond:
+                # watch expression without expression shouldn't occur,
+                # just ignore it
+                continue
+            try:
+                val = eval(b.condition, frame.f_globals, frame.f_locals)
+                if b.special:
+                    if b.special == '??created??':
+                        if b.values[frame][0] == 0:
+                            b.values[frame][0] = 1
+                            b.values[frame][1] = val
+                            return (b, True)
+                        else:
+                            continue
+                    b.values[frame][0] = 1
+                    if b.special == '??changed??':
+                        if b.values[frame][1] != val:
+                            b.values[frame][1] = val
+                            if b.values[frame][2] > 0:
+                                b.values[frame][2] -= 1
+                                continue
+                            else:
+                                return (b, True)
+                        else:
+                            continue
+                    continue
+                if val:
+                    if b.ignore > 0:
+                        b.ignore -= 1
+                        continue
+                    else:
+                        return (b, True)
+            except Exception:
+                if b.special:
+                    try:
+                        b.values[frame][0] = 0
+                    except KeyError:
+                        b.values[frame] = [0, None, b.ignore]
+                continue
+        return (None, False)
+    
+    def break_here(self, frame):
+        """
+        Public method reimplemented from bdb.py to fix the filename from the
+        frame.
+        
+        See fix_frame_filename for more info.
+        
+        @param frame the frame object
+        @return flag indicating the break status (boolean)
+        """
+        filename = self.canonic(self.fix_frame_filename(frame))
+        if filename not in self.breaks and "Watch" not in self.breaks:
+            return False
+        
+        if filename in self.breaks:
+            lineno = frame.f_lineno
+            if lineno not in self.breaks[filename]:
+                # The line itself has no breakpoint, but maybe the line is the
+                # first line of a function with breakpoint set by function
+                # name.
+                lineno = frame.f_code.co_firstlineno
+            if lineno in self.breaks[filename]:
+                # flag says ok to delete temp. breakpoint
+                (bp, flag) = bdb.effective(filename, lineno, frame)
+                if bp:
+                    self.currentbp = bp.number
+                    if (flag and bp.temporary):
+                        self.__do_clear(filename, lineno)
+                    return True
+        
+        if "Watch" in self.breaks:
+            # flag says ok to delete temp. watch
+            (bp, flag) = self.__effective(frame)
+            if bp:
+                self.currentbp = bp.number
+                if (flag and bp.temporary):
+                    self.__do_clearWatch(bp.cond)
+                return True
+        
+        return False
+
+    def break_anywhere(self, frame):
+        """
+        Public method reimplemented from bdb.py to do some special things.
+        
+        The speciality is to fix the filename from the frame
+        (see fix_frame_filename for more info).
+        
+        @param frame the frame object
+        @return flag indicating the break status (boolean)
+        """
+        return \
+            self.canonic(self.fix_frame_filename(frame)) in self.breaks or \
+            ("Watch" in self.breaks and self.breaks["Watch"])
+
+    def get_break(self, filename, lineno):
+        """
+        Public method reimplemented from bdb.py to get the first breakpoint of
+        a particular line.
+        
+        Because eric6 supports only one breakpoint per line, this overwritten
+        method will return this one and only breakpoint.
+        
+        @param filename filename of the bp to retrieve (string)
+        @param lineno linenumber of the bp to retrieve (integer)
+        @return breakpoint or None, if there is no bp
+        """
+        filename = self.canonic(filename)
+        return filename in self.breaks and \
+            lineno in self.breaks[filename] and \
+            bdb.Breakpoint.bplist[filename, lineno][0] or None
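The return statement above relies on the Python 2 `cond and value or None` idiom. A spelled-out equivalent (hypothetical names), which reads more directly and cannot misfire if the stored value were ever falsy:

```python
def first_bp(breaks, bplist, filename, lineno):
    # Spelled-out form of the 'a and b and c or None' chain in
    # get_break(): return the one breakpoint for the line, or None.
    if filename in breaks and lineno in breaks[filename]:
        return bplist[(filename, lineno)][0]
    return None
```

The and/or idiom yields `None` whenever the final value is falsy; the explicit `if` avoids that pitfall, which is why later Python code prefers it.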
+    
+    def __do_clear(self, filename, lineno):
+        """
+        Private method called to clear a temporary breakpoint.
+        
+        @param filename name of the file the bp belongs to
+        @param lineno linenumber of the bp
+        """
+        self.clear_break(filename, lineno)
+        self._dbgClient.sendClearTemporaryBreakpoint(filename, lineno)
+
+    def getStack(self):
+        """
+        Public method to get the stack.
+        
+        @return list of lists with file name (string), line number (integer)
+            and function name (string)
+        """
+        fr = self.cFrame
+        stack = []
+        while fr is not None:
+            fname = self._dbgClient.absPath(self.fix_frame_filename(fr))
+            if not fname.startswith("<"):
+                fline = fr.f_lineno
+                ffunc = fr.f_code.co_name
+                
+                if ffunc == '?':
+                    ffunc = ''
+            
+                if ffunc and not ffunc.startswith("<"):
+                    argInfo = inspect.getargvalues(fr)
+                    try:
+                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
+                                                        argInfo[2], argInfo[3])
+                    except Exception:
+                        fargs = ""
+                else:
+                    fargs = ""
+                
+                stack.append([fname, fline, ffunc, fargs])
+            
+            if fr == self._dbgClient.mainFrame:
+                fr = None
+            else:
+                fr = fr.f_back
+        
+        return stack
+    
+    def user_line(self, frame):
+        """
+        Public method reimplemented to handle the program about to execute a
+        particular line.
+        
+        @param frame the frame object
+        """
+        line = frame.f_lineno
+
+        # We never stop on line 0.
+        if line == 0:
+            return
+
+        fn = self._dbgClient.absPath(self.fix_frame_filename(frame))
+
+        # See if we are skipping at the start of a newly loaded program.
+        if self._dbgClient.mainFrame is None:
+            if fn != self._dbgClient.getRunning():
+                return
+            fr = frame
+            while (fr is not None and
+                   fr.f_code not in [
+                        self._dbgClient.handleLine.func_code,
+                        self._dbgClient.handleJsonCommand.func_code]):
+                self._dbgClient.mainFrame = fr
+                fr = fr.f_back
+
+        self.currentFrame = frame
+        
+        fr = frame
+        stack = []
+        while fr is not None:
+            # Reset the trace function so we can be sure
+            # to trace all functions up the stack... This gets around
+            # problems where an exception/breakpoint has occurred
+            # but we had disabled tracing along the way via a None
+            # return from dispatch_call
+            fr.f_trace = self.trace_dispatch
+            fname = self._dbgClient.absPath(self.fix_frame_filename(fr))
+            if not fname.startswith("<"):
+                fline = fr.f_lineno
+                ffunc = fr.f_code.co_name
+                
+                if ffunc == '?':
+                    ffunc = ''
+                
+                if ffunc and not ffunc.startswith("<"):
+                    argInfo = inspect.getargvalues(fr)
+                    try:
+                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
+                                                        argInfo[2], argInfo[3])
+                    except Exception:
+                        fargs = ""
+                else:
+                    fargs = ""
+                
+                stack.append([fname, fline, ffunc, fargs])
+            
+            if fr == self._dbgClient.mainFrame:
+                fr = None
+            else:
+                fr = fr.f_back
+        
+        self.__isBroken = True
+        
+        self._dbgClient.sendResponseLine(stack)
+        self._dbgClient.eventLoop()
+
+    def user_exception(self, frame, (exctype, excval, exctb), unhandled=0):
+        """
+        Public method reimplemented to report an exception to the debug server.
+        
+        @param frame the frame object
+        @param exctype the type of the exception
+        @param excval data about the exception
+        @param exctb traceback for the exception
+        @param unhandled flag indicating an uncaught exception
+        """
+        if exctype in [GeneratorExit, StopIteration]:
+            # ignore these
+            return
+        
+        if exctype in [SystemExit, bdb.BdbQuit]:
+            atexit._run_exitfuncs()
+            if excval is None:
+                exitcode = 0
+                message = ""
+            elif isinstance(excval, (unicode, str)):
+                exitcode = 1
+                message = excval
+            elif isinstance(excval, int):
+                exitcode = excval
+                message = ""
+            elif isinstance(excval, SystemExit):
+                code = excval.code
+                if isinstance(code, (unicode, str)):
+                    exitcode = 1
+                    message = code
+                elif isinstance(code, int):
+                    exitcode = code
+                    message = ""
+                else:
+                    exitcode = 1
+                    message = str(code)
+            else:
+                exitcode = 1
+                message = str(excval)
+            self._dbgClient.progTerminated(exitcode, message)
+            return
+        
+        if exctype in [SyntaxError, IndentationError]:
+            try:
+                message, (filename, lineno, charno, text) = excval
+                realSyntaxError = True
+            except ValueError:
+                message = ""
+                filename = ""
+                lineno = 0
+                charno = 0
+                realSyntaxError = False
+            
+            if realSyntaxError:
+                self._dbgClient.sendSyntaxError(
+                    message, filename, lineno, charno)
+                self._dbgClient.eventLoop()
+                return
+        
+        if type(exctype) in [types.ClassType,   # Python up to 2.4
+                             types.TypeType]:   # Python 2.5+
+            exctype = exctype.__name__
+        
+        if excval is None:
+            excval = ''
+        
+        if unhandled:
+            exctypetxt = "unhandled %s" % unicode(exctype)
+        else:
+            exctypetxt = unicode(exctype)
+        try:
+            excvaltxt = unicode(excval).encode(self._dbgClient.getCoding())
+        except TypeError:
+            excvaltxt = str(excval)
+        
+        stack = []
+        if exctb:
+            frlist = self.__extract_stack(exctb)
+            frlist.reverse()
+            
+            self.currentFrame = frlist[0]
+            
+            for fr in frlist:
+                filename = self._dbgClient.absPath(self.fix_frame_filename(fr))
+                
+                if os.path.basename(filename).startswith("DebugClient") or \
+                   os.path.basename(filename) == "bdb.py":
+                    break
+                
+                linenr = fr.f_lineno
+                ffunc = fr.f_code.co_name
+                
+                if ffunc == '?':
+                    ffunc = ''
+                
+                if ffunc and not ffunc.startswith("<"):
+                    argInfo = inspect.getargvalues(fr)
+                    try:
+                        fargs = inspect.formatargvalues(argInfo[0], argInfo[1],
+                                                        argInfo[2], argInfo[3])
+                    except Exception:
+                        fargs = ""
+                else:
+                    fargs = ""
+                
+                stack.append([filename, linenr, ffunc, fargs])
+        
+        self._dbgClient.sendException(exctypetxt, excvaltxt, stack)
+        
+        if exctb is None:
+            return
+        
+        self._dbgClient.eventLoop()
+    
+    def __extract_stack(self, exctb):
+        """
+        Private method to return a list of stack frames.
+        
+        @param exctb exception traceback
+        @return list of stack frames
+        """
+        tb = exctb
+        stack = []
+        while tb is not None:
+            stack.append(tb.tb_frame)
+            tb = tb.tb_next
+        tb = None
+        return stack
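The same walk can be reproduced with any caught exception; frames come out outermost first, which is why user_exception reverses the list before reporting:

```python
import sys

def extract_stack(exctb):
    # Walk tb_next from the outermost traceback entry inward,
    # collecting the frame of each entry (outermost first).
    stack = []
    tb = exctb
    while tb is not None:
        stack.append(tb.tb_frame)
        tb = tb.tb_next
    return stack

def boom():
    raise ValueError("demo")

try:
    boom()
except ValueError:
    frames = extract_stack(sys.exc_info()[2])
# frames[0] is the catching frame, frames[-1] the frame that raised
```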
+
+    def user_return(self, frame, retval):
+        """
+        Public method reimplemented to report program termination to the
+        debug server.
+        
+        @param frame the frame object
+        @param retval the return value of the program
+        """
+        # The program has finished if we have just left the first frame.
+        if frame == self._dbgClient.mainFrame and \
+                self._mainThread:
+            atexit._run_exitfuncs()
+            self._dbgClient.progTerminated(retval)
+        elif frame is not self.stepFrame:
+            self.stepFrame = None
+            self.user_line(frame)
+
+    def stop_here(self, frame):
+        """
+        Public method reimplemented to filter out debugger files.
+        
+        Tracing is turned off for files that are part of the
+        debugger that are called from the application being debugged.
+        
+        @param frame the frame object
+        @return flag indicating whether the debugger should stop here
+        """
+        if self.__skip_it(frame):
+            return False
+        return bdb.Bdb.stop_here(self, frame)
+
+    def __skip_it(self, frame):
+        """
+        Private method to filter out debugger files.
+        
+        Tracing is turned off for files that are part of the
+        debugger that are called from the application being debugged.
+        
+        @param frame the frame object
+        @return flag indicating whether the debugger should skip this frame
+        """
+        if frame is None:
+            return True
+        
+        fn = self.fix_frame_filename(frame)
+
+        # Eliminate things like <string> and <stdin>.
+        if fn[0] == '<':
+            return True
+
+        #XXX - think of a better way to do this.  It's only a convenience for
+        #debugging the debugger - when the debugger code is in the current
+        #directory.
+        if os.path.basename(fn) in [
+            'AsyncFile.py', 'DCTestResult.py',
+            'DebugBase.py', 'DebugClient.py',
+            'DebugClientBase.py',
+            'DebugClientCapabilities.py',
+            'DebugClientThreads.py',
+            'DebugConfig.py', 'DebugThread.py',
+            'DebugUtilities.py', 'FlexCompleter.py',
+            'PyProfile.py'] or \
+           os.path.dirname(fn).endswith("coverage"):
+            return True
+
+        if self._dbgClient.shouldSkip(fn):
+            return True
+        
+        return False
+    
+    def isBroken(self):
+        """
+        Public method to return the broken state of the debugger.
+        
+        @return flag indicating the broken state (boolean)
+        """
+        return self.__isBroken
+    
+    def getEvent(self):
+        """
+        Public method to return the last debugger event.
+        
+        @return last debugger event (string)
+        """
+        return self.__event
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugClient.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,39 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing a Qt free version of the debug client.
+"""
+
+from DebugBase import DebugBase
+import DebugClientBase
+
+
+class DebugClient(DebugClientBase.DebugClientBase, DebugBase):
+    """
+    Class implementing the client side of the debugger.
+    
+    This variant of the debugger implements the standard debugger client
+    by subclassing all relevant base classes.
+    """
+    def __init__(self):
+        """
+        Constructor
+        """
+        DebugClientBase.DebugClientBase.__init__(self)
+        
+        DebugBase.__init__(self, self)
+        
+        self.variant = 'Standard'
+
+# We are normally called by the debugger to execute directly.
+
+if __name__ == '__main__':
+    debugClient = DebugClient()
+    debugClient.main()
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugClientBase.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,2284 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing a debug client base class.
+"""
+
+import sys
+import socket
+import select
+import codeop
+import traceback
+import os
+import time
+import imp
+import re
+import atexit
+import signal
+import inspect
+
+
+import DebugClientCapabilities
+from DebugBase import setRecursionLimit, printerr   # __IGNORE_WARNING__
+from AsyncFile import AsyncFile, AsyncPendingWrite
+from DebugConfig import ConfigVarTypeStrings
+from FlexCompleter import Completer
+from DebugUtilities import prepareJsonCommand
+
+
+DebugClientInstance = None
+
+###############################################################################
+
+
+def DebugClientRawInput(prompt="", echo=1):
+    """
+    Replacement for the standard raw_input builtin.
+    
+    This function works with the split debugger.
+    
+    @param prompt prompt to be shown (string)
+    @param echo flag indicating echoing of the input (boolean)
+    @return result of the raw_input() call
+    """
+    if DebugClientInstance is None or not DebugClientInstance.redirect:
+        return DebugClientOrigRawInput(prompt)
+
+    return DebugClientInstance.raw_input(prompt, echo)
+
+# Use our own raw_input().
+try:
+    DebugClientOrigRawInput = __builtins__.__dict__['raw_input']
+    __builtins__.__dict__['raw_input'] = DebugClientRawInput
+except (AttributeError, KeyError):
+    import __main__
+    DebugClientOrigRawInput = __main__.__builtins__.__dict__['raw_input']
+    __main__.__builtins__.__dict__['raw_input'] = DebugClientRawInput
+
+###############################################################################
+
+
+def DebugClientInput(prompt=""):
+    """
+    Replacement for the standard input builtin.
+    
+    This function works with the split debugger.
+    
+    @param prompt prompt to be shown (string)
+    @return result of the input() call
+    """
+    if DebugClientInstance is None or not DebugClientInstance.redirect:
+        return DebugClientOrigInput(prompt)
+
+    return DebugClientInstance.input(prompt)
+
+# Use our own input().
+try:
+    DebugClientOrigInput = __builtins__.__dict__['input']
+    __builtins__.__dict__['input'] = DebugClientInput
+except (AttributeError, KeyError):
+    import __main__
+    DebugClientOrigInput = __main__.__builtins__.__dict__['input']
+    __main__.__builtins__.__dict__['input'] = DebugClientInput
+
+###############################################################################
+
+
+def DebugClientFork():
+    """
+    Replacement for the standard os.fork().
+    
+    @return result of the fork() call
+    """
+    if DebugClientInstance is None:
+        return DebugClientOrigFork()
+    
+    return DebugClientInstance.fork()
+
+# use our own fork().
+if 'fork' in dir(os):
+    DebugClientOrigFork = os.fork
+    os.fork = DebugClientFork
+
+###############################################################################
+
+
+def DebugClientClose(fd):
+    """
+    Replacement for the standard os.close(fd).
+    
+    @param fd open file descriptor to be closed (integer)
+    """
+    if DebugClientInstance is None:
+        DebugClientOrigClose(fd)
+        return
+    
+    DebugClientInstance.close(fd)
+
+# use our own close().
+if 'close' in dir(os):
+    DebugClientOrigClose = os.close
+    os.close = DebugClientClose
+
+###############################################################################
+
+
+def DebugClientSetRecursionLimit(limit):
+    """
+    Replacement for the standard sys.setrecursionlimit(limit).
+    
+    @param limit recursion limit (integer)
+    """
+    rl = max(limit, 64)
+    setRecursionLimit(rl)
+    DebugClientOrigSetRecursionLimit(rl + 64)
+
+# use our own setrecursionlimit().
+if 'setrecursionlimit' in dir(sys):
+    DebugClientOrigSetRecursionLimit = sys.setrecursionlimit
+    sys.setrecursionlimit = DebugClientSetRecursionLimit
+    DebugClientSetRecursionLimit(sys.getrecursionlimit())
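DebugClientSetRecursionLimit clamps the requested limit to at least 64 and then grants the interpreter 64 extra frames of headroom for the debugger's own machinery. The arithmetic in isolation (function and parameter names are hypothetical):

```python
def debugger_recursion_limits(requested, minimum=64, headroom=64):
    # The debugger enforces a floor on the limit it tracks, then asks
    # the interpreter for extra frames so its trace machinery has room.
    tracked = max(requested, minimum)
    interpreter = tracked + headroom
    return tracked, interpreter
```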
+
+###############################################################################
+
+
+class DebugClientBase(object):
+    """
+    Class implementing the client side of the debugger.
+
+    It provides access to the Python interpreter from a debugger running in
+    another process whether or not the Qt event loop is running.
+
+    The protocol between the debugger and the client assumes that there will be
+    a single source of debugger commands and a single source of Python
+    statements.  Commands and statement are always exactly one line and may be
+    interspersed.
+
+    The protocol is as follows.  First the client opens a connection to the
+    debugger and then sends a series of one line commands.  A command is either
+    &gt;Load&lt;, &gt;Step&lt;, &gt;StepInto&lt;, ... or a Python statement.
+    See DebugProtocol.py for a listing of valid protocol tokens.
+
+    A Python statement consists of the statement to execute, followed (in a
+    separate line) by &gt;OK?&lt;. If the statement was incomplete then the
+    response is &gt;Continue&lt;. If there was an exception then the response
+    is &gt;Exception&lt;. Otherwise the response is &gt;OK&lt;. The reason
+    for the &gt;OK?&lt; part is to provide a sentinel (i.e. the responding
+    &gt;OK&lt;) after any possible output as a result of executing the command.
+
+    The client may send any other lines at any other time which should be
+    interpreted as program output.
+
+    If the debugger closes the session there is no response from the client.
+    The client may close the session at any time as a result of the script
+    being debugged closing or crashing.
+    
+    <b>Note</b>: This class is meant to be subclassed by individual
+    DebugClient classes. Do not instantiate it directly.
+    """
+    clientCapabilities = DebugClientCapabilities.HasAll
+    
+    def __init__(self):
+        """
+        Constructor
+        """
+        self.breakpoints = {}
+        self.redirect = True
+        self.__receiveBuffer = ""
+
+        # The next couple of members are needed for the threaded version.
+        # For this base class they contain static values for the non threaded
+        # debugger
+        
+        # dictionary of all threads running
+        self.threads = {}
+        
+        # the "current" thread, basically the thread we are at a
+        # breakpoint for.
+        self.currentThread = self
+        
+        # special objects representing the main script's thread and frame
+        self.mainThread = self
+        self.mainFrame = None
+        self.framenr = 0
+        
+        # The context to run the debugged program in.
+        self.debugMod = imp.new_module('__main__')
+        self.debugMod.__dict__['__builtins__'] = __builtins__
+
+        # The buffer of complete lines to execute.
+        self.buffer = ''
+        
+        # The list of regexp objects to filter variables against
+        self.globalsFilterObjects = []
+        self.localsFilterObjects = []
+
+        self._fncache = {}
+        self.dircache = []
+        self.mainProcStr = None     # used for the passive mode
+        self.passive = False        # used to indicate the passive mode
+        self.running = None
+        self.test = None
+        self.tracePython = False
+        self.debugging = False
+        
+        self.fork_auto = False
+        self.fork_child = False
+
+        self.readstream = None
+        self.writestream = None
+        self.errorstream = None
+        self.pollingDisabled = False
+        
+        self.callTraceEnabled = False
+        self.__newCallTraceEnabled = False
+        
+        self.skipdirs = sys.path[:]
+        
+        self.variant = 'You should not see this'
+        
+        # commandline completion stuff
+        self.complete = Completer(self.debugMod.__dict__).complete
+        
+        if sys.hexversion < 0x2020000:
+            self.compile_command = codeop.compile_command
+        else:
+            self.compile_command = codeop.CommandCompiler()
+        
+        self.coding_re = re.compile(r"coding[:=]\s*([-\w_.]+)")
+        self.defaultCoding = 'utf-8'
+        self.__coding = self.defaultCoding
+        self.noencoding = False
+
+    def getCoding(self):
+        """
+        Public method to return the current coding.
+        
+        @return codec name (string)
+        """
+        return self.__coding
+        
+    def __setCoding(self, filename):
+        """
+        Private method to set the coding used by a python file.
+        
+        @param filename name of the file to inspect (string)
+        """
+        if self.noencoding:
+            self.__coding = sys.getdefaultencoding()
+        else:
+            default = 'latin-1'
+            try:
+                f = open(filename, 'rb')
+                # read the first and second line
+                text = f.readline()
+                text = "%s%s" % (text, f.readline())
+                f.close()
+            except IOError:
+                self.__coding = default
+                return
+            
+            for l in text.splitlines():
+                m = self.coding_re.search(l)
+                if m:
+                    self.__coding = m.group(1)
+                    return
+            self.__coding = default
+
+    def attachThread(self, target=None, args=None, kwargs=None, mainThread=0):
+        """
+        Public method to set up a thread for DebugClient to debug.
+        
+        If mainThread is non-zero, then we are attaching to the already
+        started main thread of the app and the rest of the args are ignored.
+        
+        This is just an empty function and is overridden in the threaded
+        debugger.
+        
+        @param target the start function of the target thread (i.e. the user
+            code)
+        @param args arguments to pass to target
+        @param kwargs keyword arguments to pass to target
+        @param mainThread non-zero, if we are attaching to the already
+            started main thread of the app
+        """
+        if self.debugging:
+            sys.setprofile(self.profile)
+    
+    def __dumpThreadList(self):
+        """
+        Private method to send the list of threads.
+        """
+        threadList = []
+        if self.threads and self.currentThread:
+            # indication for the threaded debugger
+            currentId = self.currentThread.get_ident()
+            for t in self.threads.values():
+                d = {}
+                d["id"] = t.get_ident()
+                d["name"] = t.get_name()
+                d["broken"] = t.isBroken()
+                threadList.append(d)
+        else:
+            currentId = -1
+            d = {}
+            d["id"] = -1
+            d["name"] = "MainThread"
+            if hasattr(self, "isBroken"):
+                d["broken"] = self.isBroken()
+            else:
+                d["broken"] = False
+            threadList.append(d)
+        
+        self.sendJsonCommand("ResponseThreadList", {
+            "currentID": currentId,
+            "threadList": threadList,
+        })
+    
+    def raw_input(self, prompt, echo):
+        """
+        Public method to implement raw_input() using the event loop.
+        
+        @param prompt the prompt to be shown (string)
+        @param echo Flag indicating echoing of the input (boolean)
+        @return the entered string
+        """
+        self.sendJsonCommand("RequestRaw", {
+            "prompt": prompt,
+            "echo": echo,
+        })
+        self.eventLoop(True)
+        return self.rawLine
+
+    def input(self, prompt):
+        """
+        Public method to implement input() using the event loop.
+        
+        @param prompt the prompt to be shown (string)
+        @return the entered string evaluated as a Python expression
+        """
+        return eval(self.raw_input(prompt, 1))
+        
+    def sessionClose(self, exit=True):
+        """
+        Public method to close the session with the debugger and optionally
+        terminate.
+        
+        @param exit flag indicating to terminate (boolean)
+        """
+        try:
+            self.set_quit()
+        except Exception:
+            pass
+
+        self.debugging = False
+        
+        # make sure we close down our end of the socket
+        # might be overkill as normally stdin, stdout and stderr
+        # SHOULD be closed on exit, but it does not hurt to do it here
+        self.readstream.close(True)
+        self.writestream.close(True)
+        self.errorstream.close(True)
+
+        if exit:
+            # Ok, go away.
+            sys.exit()
+
+    def handleLine(self, line):
+        """
+        Public method to handle the receipt of a complete line.
+
+        It first looks for a valid protocol token at the start of the line.
+        Thereafter it tries to execute the lines accumulated so far.
+        
+        @param line the received line
+        """
+        # Remove any newline.
+        if line[-1] == '\n':
+            line = line[:-1]
+
+##        printerr(line)          ##debug
+        
+        self.handleJsonCommand(line)
+    
+    def handleJsonCommand(self, jsonStr):
+        """
+        Public method to handle a command serialized as a JSON string.
+        
+        @param jsonStr string containing the command received from the IDE
+        @type str
+        """
+        import json
+        
+        try:
+            commandDict = json.loads(jsonStr.strip())
+        except ValueError as err:
+            # Python 2's json module raises ValueError on malformed input;
+            # json.JSONDecodeError only exists from Python 3.5 onwards.
+            printerr(str(err))
+            return
+        
+        method = commandDict["method"]
+        params = commandDict["params"]
+        
+        if method == "RequestVariables":
+            self.__dumpVariables(
+                params["frameNumber"], params["scope"], params["filters"])
+        
+        elif method == "RequestVariable":
+            self.__dumpVariable(
+                params["variable"], params["frameNumber"],
+                params["scope"], params["filters"])
+        
+        elif method == "RequestThreadList":
+            self.__dumpThreadList()
+        
+        elif method == "RequestThreadSet":
+            if params["threadID"] in self.threads:
+                self.setCurrentThread(params["threadID"])
+                self.sendJsonCommand("ResponseThreadSet", {})
+                stack = self.currentThread.getStack()
+                self.sendJsonCommand("ResponseStack", {
+                    "stack": stack,
+                })
+        
+        elif method == "RequestCapabilities":
+            self.sendJsonCommand("ResponseCapabilities", {
+                "capabilities": self.__clientCapabilities(),
+                "clientType": "Python3"
+            })
+        
+        elif method == "RequestBanner":
+            self.sendJsonCommand("ResponseBanner", {
+                "version": "Python {0}".format(sys.version),
+                "platform": socket.gethostname(),
+                "dbgclient": self.variant,
+            })
+        
+        elif method == "RequestSetFilter":
+            self.__generateFilterObjects(params["scope"], params["filter"])
+        
+        elif method == "RequestCallTrace":
+            if self.debugging:
+                self.callTraceEnabled = params["enable"]
+            else:
+                self.__newCallTraceEnabled = params["enable"]
+                # remember for later
+        
+        elif method == "RequestEnvironment":
+            for key, value in params["environment"].items():
+                if key.endswith("+"):
+                    if key[:-1] in os.environ:
+                        os.environ[key[:-1]] += value
+                    else:
+                        os.environ[key[:-1]] = value
+                else:
+                    os.environ[key] = value
+        
+        elif method == "RequestLoad":
+            self._fncache = {}
+            self.dircache = []
+            sys.argv = []
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            self.__setCoding(params["filename"])
+            sys.argv.append(params["filename"])
+            sys.argv.extend(params["argv"])
+            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
+            if params["workdir"] == '':
+                os.chdir(sys.path[1])
+            else:
+                os.chdir(params["workdir"])
+            
+            self.running = sys.argv[0]
+            self.mainFrame = None
+            self.debugging = True
+            
+            self.fork_auto = params["autofork"]
+            self.fork_child = params["forkChild"]
+            
+            self.threads.clear()
+            self.attachThread(mainThread=True)
+            
+            # set the system exception handling function to ensure that
+            # we report on all unhandled exceptions
+            sys.excepthook = self.__unhandled_exception
+            self.__interceptSignals()
+            
+            # clear all old breakpoints, they'll get set after we have
+            # started
+            self.mainThread.clear_all_breaks()
+            
+            self.mainThread.tracePython = params["traceInterpreter"]
+            
+            # This will eventually enter a local event loop.
+            self.debugMod.__dict__['__file__'] = self.running
+            sys.modules['__main__'] = self.debugMod
+            self.callTraceEnabled = self.__newCallTraceEnabled
+            res = self.mainThread.run(
+                'execfile(' + repr(self.running) + ')',
+                self.debugMod.__dict__)
+            self.progTerminated(res)
+
+        elif method == "RequestRun":
+            sys.argv = []
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            self.__setCoding(params["filename"])
+            sys.argv.append(params["filename"])
+            sys.argv.extend(params["argv"])
+            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
+            if params["workdir"] == '':
+                os.chdir(sys.path[1])
+            else:
+                os.chdir(params["workdir"])
+
+            self.running = sys.argv[0]
+            self.mainFrame = None
+            self.botframe = None
+            
+            self.fork_auto = params["autofork"]
+            self.fork_child = params["forkChild"]
+            
+            self.threads.clear()
+            self.attachThread(mainThread=True)
+            
+            # set the system exception handling function to ensure that
+            # we report on all unhandled exceptions
+            sys.excepthook = self.__unhandled_exception
+            self.__interceptSignals()
+            
+            self.mainThread.tracePython = False
+            
+            self.debugMod.__dict__['__file__'] = sys.argv[0]
+            sys.modules['__main__'] = self.debugMod
+            res = 0
+            try:
+                execfile(sys.argv[0], self.debugMod.__dict__)
+            except SystemExit as exc:
+                res = exc.code
+                atexit._run_exitfuncs()
+            self.writestream.flush()
+            self.progTerminated(res)
+
+        elif method == "RequestCoverage":
+            from coverage import coverage
+            sys.argv = []
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            self.__setCoding(params["filename"])
+            sys.argv.append(params["filename"])
+            sys.argv.extend(params["argv"])
+            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
+            if params["workdir"] == '':
+                os.chdir(sys.path[1])
+            else:
+                os.chdir(params["workdir"])
+            
+            # set the system exception handling function to ensure that
+            # we report on all unhandled exceptions
+            sys.excepthook = self.__unhandled_exception
+            self.__interceptSignals()
+            
+            # generate a coverage object
+            self.cover = coverage(
+                auto_data=True,
+                data_file="%s.coverage" % os.path.splitext(sys.argv[0])[0])
+            
+            if params["erase"]:
+                self.cover.erase()
+            sys.modules['__main__'] = self.debugMod
+            self.debugMod.__dict__['__file__'] = sys.argv[0]
+            self.running = sys.argv[0]
+            res = 0
+            self.cover.start()
+            try:
+                execfile(sys.argv[0], self.debugMod.__dict__)
+            except SystemExit as exc:
+                res = exc.code
+                atexit._run_exitfuncs()
+            self.cover.stop()
+            self.cover.save()
+            self.writestream.flush()
+            self.progTerminated(res)
+        
+        elif method == "RequestProfile":
+            sys.setprofile(None)
+            import PyProfile
+            sys.argv = []
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            self.__setCoding(params["filename"])
+            sys.argv.append(params["filename"])
+            sys.argv.extend(params["argv"])
+            sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
+            if params["workdir"] == '':
+                os.chdir(sys.path[1])
+            else:
+                os.chdir(params["workdir"])
+
+            # set the system exception handling function to ensure that
+            # we report on all unhandled exceptions
+            sys.excepthook = self.__unhandled_exception
+            self.__interceptSignals()
+            
+            # generate a profile object
+            self.prof = PyProfile.PyProfile(sys.argv[0])
+            
+            if params["erase"]:
+                self.prof.erase()
+            self.debugMod.__dict__['__file__'] = sys.argv[0]
+            sys.modules['__main__'] = self.debugMod
+            self.running = sys.argv[0]
+            res = 0
+            try:
+                self.prof.run('execfile(%r)' % sys.argv[0])
+            except SystemExit as exc:
+                res = exc.code
+                atexit._run_exitfuncs()
+            self.prof.save()
+            self.writestream.flush()
+            self.progTerminated(res)
+        
+        elif method == "ExecuteStatement":
+            if self.buffer:
+                self.buffer = self.buffer + '\n' + params["statement"]
+            else:
+                self.buffer = params["statement"]
+
+            try:
+                code = self.compile_command(self.buffer, self.readstream.name)
+            except (OverflowError, SyntaxError, ValueError):
+                # Report the exception
+                sys.last_type, sys.last_value, sys.last_traceback = \
+                    sys.exc_info()
+                self.sendJsonCommand("ClientOutput", {
+                    "text": "".join(traceback.format_exception_only(
+                        sys.last_type, sys.last_value))
+                })
+                self.buffer = ''
+            else:
+                if code is None:
+                    self.sendJsonCommand("ResponseContinue", {})
+                    return
+                else:
+                    self.buffer = ''
+
+                    try:
+                        if self.running is None:
+                            exec code in self.debugMod.__dict__
+                        else:
+                            if self.currentThread is None:
+                                # program has terminated
+                                self.running = None
+                                _globals = self.debugMod.__dict__
+                                _locals = _globals
+                            else:
+                                cf = self.currentThread.getCurrentFrame()
+                                # program has terminated
+                                if cf is None:
+                                    self.running = None
+                                    _globals = self.debugMod.__dict__
+                                    _locals = _globals
+                                else:
+                                    frmnr = self.framenr
+                                    while cf is not None and frmnr > 0:
+                                        cf = cf.f_back
+                                        frmnr -= 1
+                                    _globals = cf.f_globals
+                                    _locals = \
+                                        self.currentThread.getFrameLocals(
+                                            self.framenr)
+                            # reset sys.stdout to our redirector
+                            # (unconditionally)
+                            if "sys" in _globals:
+                                __stdout = _globals["sys"].stdout
+                                _globals["sys"].stdout = self.writestream
+                                exec code in _globals, _locals
+                                _globals["sys"].stdout = __stdout
+                            elif "sys" in _locals:
+                                __stdout = _locals["sys"].stdout
+                                _locals["sys"].stdout = self.writestream
+                                exec code in _globals, _locals
+                                _locals["sys"].stdout = __stdout
+                            else:
+                                exec code in _globals, _locals
+                            
+                            self.currentThread.storeFrameLocals(self.framenr)
+                    except SystemExit as exc:
+                        self.progTerminated(exc.code)
+                    except Exception:
+                        # Report the exception and the traceback
+                        tlist = []
+                        try:
+                            exc_type, exc_value, exc_tb = sys.exc_info()
+                            sys.last_type = exc_type
+                            sys.last_value = exc_value
+                            sys.last_traceback = exc_tb
+                            tblist = traceback.extract_tb(exc_tb)
+                            del tblist[:1]
+                            tlist = traceback.format_list(tblist)
+                            if tlist:
+                                tlist.insert(
+                                    0, "Traceback (innermost last):\n")
+                                tlist.extend(traceback.format_exception_only(
+                                    exc_type, exc_value))
+                        finally:
+                            tblist = exc_tb = None
+
+                        self.sendJsonCommand("ClientOutput", {
+                            "text": "".join(tlist)
+                        })
+            
+            self.sendJsonCommand("ResponseOK", {})
+        
+        elif method == "RequestStep":
+            self.currentThread.step(True)
+            self.eventExit = True
+
+        elif method == "RequestStepOver":
+            self.currentThread.step(False)
+            self.eventExit = True
+        
+        elif method == "RequestStepOut":
+            self.currentThread.stepOut()
+            self.eventExit = True
+        
+        elif method == "RequestStepQuit":
+            if self.passive:
+                self.progTerminated(42)
+            else:
+                self.set_quit()
+                self.eventExit = True
+        
+        elif method == "RequestContinue":
+            self.currentThread.go(params["special"])
+            self.eventExit = True
+        
+        elif method == "RawInput":
+            # If we are handling raw mode input then break out of the current
+            # event loop.
+            self.rawLine = params["input"]
+            self.eventExit = True
+        
+        elif method == "RequestBreakpoint":
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            if params["setBreakpoint"]:
+                if params["condition"] in ['None', '']:
+                    params["condition"] = None
+                elif params["condition"] is not None:
+                    try:
+                        compile(params["condition"], '<string>', 'eval')
+                    except SyntaxError:
+                        self.sendJsonCommand("ResponseBPConditionError", {
+                            "filename": params["filename"],
+                            "line": params["line"],
+                        })
+                        return
+                self.mainThread.set_break(
+                    params["filename"], params["line"], params["temporary"],
+                    params["condition"])
+            else:
+                self.mainThread.clear_break(params["filename"], params["line"])
+        
+        elif method == "RequestBreakpointEnable":
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            bp = self.mainThread.get_break(params["filename"], params["line"])
+            if bp is not None:
+                if params["enable"]:
+                    bp.enable()
+                else:
+                    bp.disable()
+        
+        elif method == "RequestBreakpointIgnore":
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            bp = self.mainThread.get_break(params["filename"], params["line"])
+            if bp is not None:
+                bp.ignore = params["count"]
+        
+        elif method == "RequestWatch":
+            if params["setWatch"]:
+                if not params["condition"].endswith(
+                        ('??created??', '??changed??')):
+                    try:
+                        compile(params["condition"], '<string>', 'eval')
+                    except SyntaxError:
+                        self.sendJsonCommand("ResponseWatchConditionError", {
+                            "condition": params["condition"],
+                        })
+                        return
+                self.mainThread.set_watch(
+                    params["condition"], params["temporary"])
+            else:
+                self.mainThread.clear_watch(params["condition"])
+        
+        elif method == "RequestWatchEnable":
+            wp = self.mainThread.get_watch(params["condition"])
+            if wp is not None:
+                if params["enable"]:
+                    wp.enable()
+                else:
+                    wp.disable()
+        
+        elif method == "RequestWatchIgnore":
+            wp = self.mainThread.get_watch(params["condition"])
+            if wp is not None:
+                wp.ignore = params["count"]
+        
+        elif method == "RequestShutdown":
+            self.sessionClose()
+        
+        elif method == "RequestCompletion":
+            self.__completionList(params["text"])
+        
+        elif method == "RequestUTPrepare":
+            params["filename"] = params["filename"].encode(
+                sys.getfilesystemencoding())
+            sys.path.insert(
+                0, os.path.dirname(os.path.abspath(params["filename"])))
+            os.chdir(sys.path[0])
+            
+            # set the system exception handling function to ensure that
+            # we report on all unhandled exceptions
+            sys.excepthook = self.__unhandled_exception
+            self.__interceptSignals()
+            
+            try:
+                import unittest
+                utModule = __import__(params["testname"])
+                try:
+                    if params["failed"]:
+                        self.test = unittest.defaultTestLoader\
+                            .loadTestsFromNames(params["failed"], utModule)
+                    else:
+                        self.test = unittest.defaultTestLoader\
+                            .loadTestsFromName(params["testfunctionname"],
+                                               utModule)
+                except AttributeError:
+                    self.test = unittest.defaultTestLoader\
+                        .loadTestsFromModule(utModule)
+            except Exception:
+                exc_type, exc_value, exc_tb = sys.exc_info()
+                self.sendJsonCommand("ResponseUTPrepared", {
+                    "count": 0,
+                    "exception": exc_type.__name__,
+                    "message": str(exc_value),
+                })
+                return
+            
+            # generate a coverage object
+            if params["coverage"]:
+                from coverage import coverage
+                self.cover = coverage(
+                    auto_data=True,
+                    data_file="%s.coverage" % \
+                        os.path.splitext(params["coveragefile"])[0])
+                if params["coverageerase"]:
+                    self.cover.erase()
+            else:
+                self.cover = None
+            
+            self.sendJsonCommand("ResponseUTPrepared", {
+                "count": self.test.countTestCases(),
+                "exception": "",
+                "message": "",
+            })
+        
+        elif method == "RequestUTRun":
+            from DCTestResult import DCTestResult
+            self.testResult = DCTestResult(self)
+            if self.cover:
+                self.cover.start()
+            self.test.run(self.testResult)
+            if self.cover:
+                self.cover.stop()
+                self.cover.save()
+            self.sendJsonCommand("ResponseUTFinished", {})
+        
+        elif method == "RequestUTStop":
+            self.testResult.stop()
+        
+        elif method == "ResponseForkTo":
+            # this results from a separate event loop
+            self.fork_child = (params["target"] == 'child')
+            self.eventExit = True
+    
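The `RequestEnvironment` branch of `handleJsonCommand` above treats a key ending in `"+"` as "append to the existing variable" rather than "replace it". A minimal sketch of that merge rule, using a plain dict in place of `os.environ` (`merge_environment` is an illustrative name, not part of the client):

```python
def merge_environment(environ, updates):
    """Apply IDE-supplied updates; a trailing '+' on a key appends."""
    for key, value in updates.items():
        if key.endswith("+"):
            name = key[:-1]
            # Append when the variable already exists, otherwise create it.
            environ[name] = environ.get(name, "") + value
        else:
            environ[key] = value
    return environ

env = merge_environment({"PATH": "/usr/bin"},
                        {"PATH+": ":/opt/bin", "LANG": "C"})
```

After the call, `env["PATH"]` is `"/usr/bin:/opt/bin"` and `env["LANG"]` is `"C"`, matching the in-place behavior of the branch above.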
+    def sendJsonCommand(self, method, params):
+        """
+        Public method to send a single command or response to the IDE.
+        
+        @param method command or response command name to be sent
+        @type str
+        @param params dictionary of named parameters for the command or
+            response
+        @type dict
+        """
+        cmd = prepareJsonCommand(method, params)
+        
+        self.writestream.write_p(cmd)
+        self.writestream.flush()
+    
+    def sendClearTemporaryBreakpoint(self, filename, lineno):
+        """
+        Public method to signal the deletion of a temporary breakpoint.
+        
+        @param filename name of the file the bp belongs to
+        @type str
+        @param lineno linenumber of the bp
+        @type int
+        """
+        self.sendJsonCommand("ResponseClearBreakpoint", {
+            "filename": filename,
+            "line": lineno
+        })
+    
+    def sendClearTemporaryWatch(self, condition):
+        """
+        Public method to signal the deletion of a temporary watch expression.
+        
+        @param condition condition of the watch expression to be cleared
+        @type str
+        """
+        self.sendJsonCommand("ResponseClearWatch", {
+            "condition": condition,
+        })
+    
+    def sendResponseLine(self, stack):
+        """
+        Public method to send the current call stack.
+        
+        @param stack call stack
+        @type list
+        """
+        self.sendJsonCommand("ResponseLine", {
+            "stack": stack,
+        })
+    
+    def sendCallTrace(self, event, fromStr, toStr):
+        """
+        Public method to send a call trace entry.
+        
+        @param event trace event (call or return)
+        @type str
+        @param fromStr pre-formatted origin info
+        @type str
+        @param toStr pre-formatted target info
+        @type str
+        """
+        self.sendJsonCommand("CallTrace", {
+            "event": event[0],
+            "from": fromStr,
+            "to": toStr,
+        })
+    
+    def sendException(self, exceptionType, exceptionMessage, stack):
+        """
+        Public method to send information for an exception.
+        
+        @param exceptionType type of exception raised
+        @type str
+        @param exceptionMessage message of the exception
+        @type str
+        @param stack stack trace information
+        @type list
+        """
+        self.sendJsonCommand("ResponseException", {
+            "type": exceptionType,
+            "message": exceptionMessage,
+            "stack": stack,
+        })
+    
+    def sendSyntaxError(self, message, filename, lineno, charno):
+        """
+        Public method to send information for a syntax error.
+        
+        @param message syntax error message
+        @type str
+        @param filename name of the faulty file
+        @type str
+        @param lineno line number info
+        @type int
+        @param charno character number info
+        @type int
+        """
+        self.sendJsonCommand("ResponseSyntax", {
+            "message": message,
+            "filename": filename,
+            "linenumber": lineno,
+            "characternumber": charno,
+        })
+    
+    def sendPassiveStartup(self, filename, exceptions):
+        """
+        Public method to send the passive start information.
+        
+        @param filename name of the script
+        @type str
+        @param exceptions flag to enable exception reporting of the IDE
+        @type bool
+        """
+        self.sendJsonCommand("PassiveStartup", {
+            "filename": filename,
+            "exceptions": exceptions,
+        })
+
+    def __clientCapabilities(self):
+        """
+        Private method to determine the client's capabilities.
+        
+        @return client capabilities (integer)
+        """
+        try:
+            import PyProfile    # __IGNORE_WARNING__
+            try:
+                del sys.modules['PyProfile']
+            except KeyError:
+                pass
+            return self.clientCapabilities
+        except ImportError:
+            return (
+                self.clientCapabilities & ~DebugClientCapabilities.HasProfiler)
+    
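`__clientCapabilities` above clears the profiler bit with `& ~HasProfiler` when `PyProfile` cannot be imported. The bit-flag arithmetic in isolation, with hypothetical flag values (the real constants live in `DebugClientCapabilities`):

```python
# Hypothetical flag values for illustration; the real ones are defined
# in the DebugClientCapabilities module.
HasDebugger = 0x0001
HasProfiler = 0x0004

capabilities = HasDebugger | HasProfiler

# Clearing a single flag with "& ~flag", as done on ImportError above.
without_profiler = capabilities & ~HasProfiler

assert without_profiler & HasDebugger        # debugger bit survives
assert not (without_profiler & HasProfiler)  # profiler bit is gone
```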
+    def readReady(self, stream):
+        """
+        Public method called when there is data ready to be read.
+        
+        @param stream file-like object that has data to be read
+        """
+        try:
+            got = stream.readline_p()
+        except Exception:
+            return
+
+        if len(got) == 0:
+            self.sessionClose()
+            return
+
+        self.__receiveBuffer = self.__receiveBuffer + got
+        
+        # Call handleLine for the line if it is complete.
+        eol = self.__receiveBuffer.find('\n')
+        while eol >= 0:
+            line = self.__receiveBuffer[:eol + 1]
+            self.__receiveBuffer = self.__receiveBuffer[eol + 1:]
+            self.handleLine(line)
+            eol = self.__receiveBuffer.find('\n')
+
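`readReady` accumulates raw socket data in `__receiveBuffer` and repeatedly peels off complete `'\n'`-terminated lines before dispatching each one. The same loop, isolated into a pure function (`split_complete_lines` is an illustrative name, not part of the client):

```python
def split_complete_lines(buffer, chunk):
    """Append chunk to buffer; return (complete_lines, remainder)."""
    buffer += chunk
    lines = []
    eol = buffer.find('\n')
    while eol >= 0:
        # Keep the trailing newline, as handleLine expects to strip it.
        lines.append(buffer[:eol + 1])
        buffer = buffer[eol + 1:]
        eol = buffer.find('\n')
    return lines, buffer

lines, rest = split_complete_lines("", '{"a":1}\n{"b":')
```

Here one complete JSON line is extracted while the partial second command stays buffered until more data arrives, which is exactly why the client survives commands split across reads.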
+    def writeReady(self, stream):
+        """
+        Public method called when we are ready to write data.
+        
+        @param stream file-like object that has data to be written
+        """
+        stream.write_p("")
+        stream.flush()
+
+    def __interact(self):
+        """
+        Private method to interact with the debugger.
+        """
+        global DebugClientInstance
+
+        DebugClientInstance = self
+        self.__receiveBuffer = ""
+
+        if not self.passive:
+            # At this point simulate an event loop.
+            self.eventLoop()
+
+    def eventLoop(self, disablePolling=False):
+        """
+        Public method implementing our event loop.
+        
+        @param disablePolling flag indicating to enter an event loop with
+            polling disabled (boolean)
+        """
+        self.eventExit = None
+        self.pollingDisabled = disablePolling
+
+        while self.eventExit is None:
+            wrdy = []
+
+            if self.writestream.nWriteErrors > self.writestream.maxtries:
+                break
+            
+            if AsyncPendingWrite(self.writestream):
+                wrdy.append(self.writestream)
+
+            if AsyncPendingWrite(self.errorstream):
+                wrdy.append(self.errorstream)
+            
+            try:
+                rrdy, wrdy, xrdy = select.select([self.readstream], wrdy, [])
+            except (select.error, KeyboardInterrupt, socket.error):
+                # just carry on
+                continue
+
+            if self.readstream in rrdy:
+                self.readReady(self.readstream)
+
+            if self.writestream in wrdy:
+                self.writeReady(self.writestream)
+
+            if self.errorstream in wrdy:
+                self.writeReady(self.errorstream)
+
+        self.eventExit = None
+        self.pollingDisabled = False
+
+    def eventPoll(self):
+        """
+        Public method to poll for events like 'set break point'.
+        """
+        if self.pollingDisabled:
+            return
+        
+        # the choice of a ~0.5 second poll interval is arbitrary.
+        lasteventpolltime = getattr(self, 'lasteventpolltime', time.time())
+        now = time.time()
+        if now - lasteventpolltime < 0.5:
+            self.lasteventpolltime = lasteventpolltime
+            return
+        else:
+            self.lasteventpolltime = now
+
+        wrdy = []
+        if AsyncPendingWrite(self.writestream):
+            wrdy.append(self.writestream)
+
+        if AsyncPendingWrite(self.errorstream):
+            wrdy.append(self.errorstream)
+        
+        # immediate return if nothing is ready.
+        try:
+            rrdy, wrdy, xrdy = select.select([self.readstream], wrdy, [], 0)
+        except (select.error, KeyboardInterrupt, socket.error):
+            return
+
+        if self.readstream in rrdy:
+            self.readReady(self.readstream)
+
+        if self.writestream in wrdy:
+            self.writeReady(self.writestream)
+
+        if self.errorstream in wrdy:
+            self.writeReady(self.errorstream)
+        
+    def connectDebugger(self, port, remoteAddress=None, redirect=1):
+        """
+        Public method to establish a session with the debugger.
+        
+        It opens a network connection to the debugger, connects it to stdin,
+        stdout and stderr and saves these file objects in case the application
+        being debugged redirects them itself.
+        
+        @param port the port number to connect to (int)
+        @param remoteAddress the network address of the debug server host
+            (string)
+        @param redirect flag indicating redirection of stdin, stdout and
+            stderr (boolean)
+        """
+        if remoteAddress is None:
+            remoteAddress = "127.0.0.1"
+        elif "@@i" in remoteAddress:
+            remoteAddress = remoteAddress.split("@@i")[0]
+        sock = socket.create_connection((remoteAddress, port))
+
+        self.readstream = AsyncFile(sock, sys.stdin.mode, sys.stdin.name)
+        self.writestream = AsyncFile(sock, sys.stdout.mode, sys.stdout.name)
+        self.errorstream = AsyncFile(sock, sys.stderr.mode, sys.stderr.name)
+        
+        if redirect:
+            sys.stdin = self.readstream
+            sys.stdout = self.writestream
+            sys.stderr = self.errorstream
+        self.redirect = redirect
+        
+        # attach to the main thread here
+        self.attachThread(mainThread=1)
+
+    def __unhandled_exception(self, exctype, excval, exctb):
+        """
+        Private method called to report an uncaught exception.
+        
+        @param exctype the type of the exception
+        @param excval data about the exception
+        @param exctb traceback for the exception
+        """
+        self.mainThread.user_exception(None, (exctype, excval, exctb), True)
+    
+    def __interceptSignals(self):
+        """
+        Private method to intercept common signals.
+        """
+        for signum in [
+            signal.SIGABRT,                 # abnormal termination
+            signal.SIGFPE,                  # floating point exception
+            signal.SIGILL,                  # illegal instruction
+            signal.SIGSEGV,                 # segmentation violation
+        ]:
+            signal.signal(signum, self.__signalHandler)
+    
+    def __signalHandler(self, signalNumber, stackFrame):
+        """
+        Private method to handle signals.
+        
+        @param signalNumber number of the signal to be handled
+        @type int
+        @param stackFrame current stack frame
+        @type frame object
+        """
+        if signalNumber == signal.SIGABRT:
+            message = "Abnormal Termination"
+        elif signalNumber == signal.SIGFPE:
+            message = "Floating Point Exception"
+        elif signalNumber == signal.SIGILL:
+            message = "Illegal Instruction"
+        elif signalNumber == signal.SIGSEGV:
+            message = "Segmentation Violation"
+        else:
+            message = "Unknown Signal '%d'" % signalNumber
+        
+        filename = self.absPath(stackFrame)
+        
+        linenr = stackFrame.f_lineno
+        ffunc = stackFrame.f_code.co_name
+        
+        if ffunc == '?':
+            ffunc = ''
+        
+        if ffunc and not ffunc.startswith("<"):
+            argInfo = inspect.getargvalues(stackFrame)
+            try:
+                fargs = inspect.formatargvalues(
+                    argInfo.args, argInfo.varargs,
+                    argInfo.keywords, argInfo.locals)
+            except Exception:
+                fargs = ""
+        else:
+            fargs = ""
+        
+        self.sendJsonCommand("ResponseSignal", {
+            "message": message,
+            "filename": filename,
+            "linenumber": linenr,
+            "function": ffunc,
+            "arguments": fargs,
+        })
+    
+    def absPath(self, fn):
+        """
+        Public method to convert a filename to an absolute name.
+
+        sys.path is used as a set of possible prefixes. The name stays
+        relative if a file could not be found.
+        
+        @param fn filename (string)
+        @return the converted filename (string)
+        """
+        if os.path.isabs(fn):
+            return fn
+
+        # Check the cache.
+        if fn in self._fncache:
+            return self._fncache[fn]
+
+        # Search sys.path.
+        for p in sys.path:
+            afn = os.path.abspath(os.path.join(p, fn))
+            nafn = os.path.normcase(afn)
+
+            if os.path.exists(nafn):
+                self._fncache[fn] = afn
+                d = os.path.dirname(afn)
+                if (d not in sys.path) and (d not in self.dircache):
+                    self.dircache.append(d)
+                return afn
+
+        # Search the additional directory cache
+        for p in self.dircache:
+            afn = os.path.abspath(os.path.join(p, fn))
+            nafn = os.path.normcase(afn)
+            
+            if os.path.exists(nafn):
+                self._fncache[fn] = afn
+                return afn
+                
+        # Nothing found.
+        return fn
+
+    def shouldSkip(self, fn):
+        """
+        Public method to check if a file should be skipped.
+        
+        @param fn filename to be checked
+        @return True if fn represents a file we are 'skipping',
+            False otherwise. (boolean)
+        """
+        if self.mainThread.tracePython:     # trace into Python library
+            return False
+            
+        # Eliminate anything that is part of the Python installation.
+        afn = self.absPath(fn)
+        for d in self.skipdirs:
+            if afn.startswith(d):
+                return True
+        
+        # special treatment for paths containing site-packages or dist-packages
+        for part in ["site-packages", "dist-packages"]:
+            if part in afn:
+                return True
+        
+        return False
+        
+    def getRunning(self):
+        """
+        Public method to return the main script we are currently running.
+        
+        @return flag indicating a running debug session (boolean)
+        """
+        return self.running
+
+    def progTerminated(self, status, message=""):
+        """
+        Public method to tell the debugger that the program has terminated.
+        
+        @param status return status
+        @type int
+        @param message status message
+        @type str
+        """
+        if status is None:
+            status = 0
+        elif not isinstance(status, int):
+            message = str(status)
+            status = 1
+
+        if self.running:
+            self.set_quit()
+            self.running = None
+            self.sendJsonCommand("ResponseExit", {
+                "status": status,
+                "message": message,
+            })
+        
+        # reset coding
+        self.__coding = self.defaultCoding
+
+    def __dumpVariables(self, frmnr, scope, filter):
+        """
+        Private method to return the variables of a frame to the debug server.
+        
+        @param frmnr distance of frame reported on. 0 is the current frame
+            (int)
+        @param scope 1 to report global variables, 0 for local variables (int)
+        @param filter the indices of variable types to be filtered (list of
+            int)
+        """
+        if self.currentThread is None:
+            return
+        
+        if scope == 0:
+            self.framenr = frmnr
+        
+        f = self.currentThread.getCurrentFrame()
+        
+        while f is not None and frmnr > 0:
+            f = f.f_back
+            frmnr -= 1
+        
+        if f is None:
+            if scope:
+                dict = self.debugMod.__dict__
+            else:
+                scope = -1
+        elif scope:
+            dict = f.f_globals
+        elif f.f_globals is f.f_locals:
+            scope = -1
+        else:
+            dict = f.f_locals
+            
+        varlist = []
+        
+        if scope != -1:
+            keylist = dict.keys()
+            
+            vlist = self.__formatVariablesList(keylist, dict, scope, filter)
+            varlist.extend(vlist)
+            
+        self.sendJsonCommand("ResponseVariables", {
+            "scope": scope,
+            "variables": varlist,
+        })
+    
+    def __dumpVariable(self, var, frmnr, scope, filter):
+        """
+        Private method to return the requested variable of a frame to the
+        debug server.
+        
+        @param var list encoded name of the requested variable
+            (list of strings)
+        @param frmnr distance of frame reported on. 0 is the current frame
+            (int)
+        @param scope 1 to report global variables, 0 for local variables (int)
+        @param filter the indices of variable types to be filtered
+            (list of int)
+        """
+        if self.currentThread is None:
+            return
+        
+        f = self.currentThread.getCurrentFrame()
+        
+        while f is not None and frmnr > 0:
+            f = f.f_back
+            frmnr -= 1
+        
+        if f is None:
+            if scope:
+                dict = self.debugMod.__dict__
+            else:
+                scope = -1
+        elif scope:
+            dict = f.f_globals
+        elif f.f_globals is f.f_locals:
+            scope = -1
+        else:
+            dict = f.f_locals
+        
+        varlist = []
+        
+        if scope != -1:
+            # search the correct dictionary
+            i = 0
+            rvar = var[:]
+            dictkeys = None
+            obj = None
+            isDict = False
+            formatSequences = False
+            access = ""
+            oaccess = ""
+            odict = dict
+            
+            qtVariable = False
+            qvar = None
+            qvtype = ""
+            
+            while i < len(var):
+                if len(dict):
+                    udict = dict
+                ndict = {}
+                # this has to be in line with VariablesViewer.indicators
+                if var[i][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
+                    if i + 1 == len(var):
+                        if var[i][:-2] == '...':
+                            dictkeys = [var[i - 1]]
+                        else:
+                            dictkeys = [var[i][:-2]]
+                        formatSequences = True
+                        if not access and not oaccess:
+                            if var[i][:-2] == '...':
+                                access = '["%s"]' % var[i - 1]
+                                dict = odict
+                            else:
+                                access = '["%s"]' % var[i][:-2]
+                        else:
+                            if var[i][:-2] == '...':
+                                if oaccess:
+                                    access = oaccess
+                                else:
+                                    access = '%s[%s]' % (access, var[i - 1])
+                                dict = odict
+                            else:
+                                if oaccess:
+                                    access = '%s[%s]' % (oaccess, var[i][:-2])
+                                    oaccess = ''
+                                else:
+                                    access = '%s[%s]' % (access, var[i][:-2])
+                        if var[i][-2:] == "{}":         # __IGNORE_WARNING__
+                            isDict = True
+                        break
+                    else:
+                        if not access:
+                            if var[i][:-2] == '...':
+                                access = '["%s"]' % var[i - 1]
+                                dict = odict
+                            else:
+                                access = '["%s"]' % var[i][:-2]
+                        else:
+                            if var[i][:-2] == '...':
+                                access = '%s[%s]' % (access, var[i - 1])
+                                dict = odict
+                            else:
+                                if oaccess:
+                                    access = '%s[%s]' % (oaccess, var[i][:-2])
+                                    oaccess = ''
+                                else:
+                                    access = '%s[%s]' % (access, var[i][:-2])
+                else:
+                    if access:
+                        if oaccess:
+                            access = '%s[%s]' % (oaccess, var[i])
+                        else:
+                            access = '%s[%s]' % (access, var[i])
+                        if var[i - 1][:-2] == '...':
+                            oaccess = access
+                        else:
+                            oaccess = ''
+                        try:
+                            exec 'mdict = dict%s.__dict__' % access
+                            ndict.update(mdict)     # __IGNORE_WARNING__
+                            exec 'obj = dict%s' % access
+                            if "PyQt4." in str(type(obj)) or \
+                                    "PyQt5." in str(type(obj)):
+                                qtVariable = True
+                                qvar = obj
+                                qvtype = ("%s" % type(qvar))[1:-1]\
+                                    .split()[1][1:-1]
+                        except Exception:
+                            pass
+                        try:
+                            exec 'mcdict = dict%s.__class__.__dict__' % access
+                            ndict.update(mcdict)     # __IGNORE_WARNING__
+                            if mdict and "sipThis" not in mdict.keys():  # __IGNORE_WARNING__
+                                del rvar[0:2]
+                                access = ""
+                        except Exception:
+                            pass
+                        try:
+                            cdict = {}
+                            exec 'slv = dict%s.__slots__' % access
+                            for v in slv:   # __IGNORE_WARNING__
+                                try:
+                                    exec 'cdict[v] = dict%s.%s' % (access, v)
+                                except Exception:
+                                    pass
+                            ndict.update(cdict)
+                            exec 'obj = dict%s' % access
+                            access = ""
+                            if "PyQt4." in str(type(obj)) or \
+                                    "PyQt5." in str(type(obj)):
+                                qtVariable = True
+                                qvar = obj
+                                qvtype = ("%s" % type(qvar))[1:-1]\
+                                    .split()[1][1:-1]
+                        except Exception:
+                            pass
+                    else:
+                        try:
+                            ndict.update(dict[var[i]].__dict__)
+                            ndict.update(dict[var[i]].__class__.__dict__)
+                            del rvar[0]
+                            obj = dict[var[i]]
+                            if "PyQt4." in str(type(obj)) or \
+                                    "PyQt5." in str(type(obj)):
+                                qtVariable = True
+                                qvar = obj
+                                qvtype = ("%s" % type(qvar))[1:-1]\
+                                    .split()[1][1:-1]
+                        except Exception:
+                            pass
+                        try:
+                            cdict = {}
+                            slv = dict[var[i]].__slots__
+                            for v in slv:
+                                try:
+                                    exec 'cdict[v] = dict[var[i]].%s' % v
+                                except Exception:
+                                    pass
+                            ndict.update(cdict)
+                            obj = dict[var[i]]
+                            if "PyQt4." in str(type(obj)) or \
+                                    "PyQt5." in str(type(obj)):
+                                qtVariable = True
+                                qvar = obj
+                                qvtype = ("%s" % type(qvar))[1:-1]\
+                                    .split()[1][1:-1]
+                        except Exception:
+                            pass
+                    odict = dict
+                    dict = ndict
+                i += 1
+            
+            if qtVariable:
+                vlist = self.__formatQtVariable(qvar, qvtype)
+            elif ("sipThis" in dict.keys() and len(dict) == 1) or \
+                    (len(dict) == 0 and len(udict) > 0):
+                if access:
+                    exec 'qvar = udict%s' % access
+                # this has to be in line with VariablesViewer.indicators
+                elif rvar and rvar[0][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
+                    exec 'qvar = udict["%s"][%s]' % (rvar[0][:-2], rvar[1])
+                else:
+                    qvar = udict[var[-1]]
+                qvtype = ("%s" % type(qvar))[1:-1].split()[1][1:-1]
+                if qvtype.startswith(("PyQt4", "PyQt5")):
+                    vlist = self.__formatQtVariable(qvar, qvtype)
+                else:
+                    vlist = []
+            else:
+                qtVariable = False
+                if len(dict) == 0 and len(udict) > 0:
+                    if access:
+                        exec 'qvar = udict%s' % access
+                    # this has to be in line with VariablesViewer.indicators
+                    elif rvar and rvar[0][-2:] in ["[]", "()", "{}"]:   # __IGNORE_WARNING__
+                        exec 'qvar = udict["%s"][%s]' % (rvar[0][:-2], rvar[1])
+                    else:
+                        qvar = udict[var[-1]]
+                    qvtype = ("%s" % type(qvar))[1:-1].split()[1][1:-1]
+                    if qvtype.startswith(("PyQt4", "PyQt5")):
+                        qtVariable = True
+                
+                if qtVariable:
+                    vlist = self.__formatQtVariable(qvar, qvtype)
+                else:
+                    # format the dictionary found
+                    if dictkeys is None:
+                        dictkeys = dict.keys()
+                    else:
+                        # treatment for sequences and dictionaries
+                        if access:
+                            exec "dict = dict%s" % access
+                        else:
+                            dict = dict[dictkeys[0]]
+                        if isDict:
+                            dictkeys = dict.keys()
+                        else:
+                            dictkeys = range(len(dict))
+                    vlist = self.__formatVariablesList(
+                        dictkeys, dict, scope, filter, formatSequences)
+            varlist.extend(vlist)
+        
+            if obj is not None and not formatSequences:
+                try:
+                    if unicode(repr(obj)).startswith('{'):
+                        varlist.append(('...', 'dict', "%d" % len(obj.keys())))
+                    elif unicode(repr(obj)).startswith('['):
+                        varlist.append(('...', 'list', "%d" % len(obj)))
+                    elif unicode(repr(obj)).startswith('('):
+                        varlist.append(('...', 'tuple', "%d" % len(obj)))
+                except Exception:
+                    pass
+        
+        self.sendJsonCommand("ResponseVariable", {
+            "scope": scope,
+            "variable": var,
+            "variables": varlist,
+        })
+        
+    def __formatQtVariable(self, value, vtype):
+        """
+        Private method to produce a formatted output of a simple Qt4/Qt5 type.
+        
+        @param value variable to be formatted
+        @param vtype type of the variable to be formatted (string)
+        @return list of formatted variables. Each variable entry is a tuple
+            of three elements, the variable name, its type and value.
+        """
+        qttype = vtype.split('.')[-1]
+        varlist = []
+        if qttype == 'QChar':
+            varlist.append(("", "QChar", "%s" % unichr(value.unicode())))
+            varlist.append(("", "int", "%d" % value.unicode()))
+        elif qttype == 'QByteArray':
+            varlist.append(("hex", "QByteArray", "%s" % value.toHex()))
+            varlist.append(("base64", "QByteArray", "%s" % value.toBase64()))
+            varlist.append(("percent encoding", "QByteArray",
+                            "%s" % value.toPercentEncoding()))
+        elif qttype == 'QString':
+            varlist.append(("", "QString", "%s" % value))
+        elif qttype == 'QStringList':
+            for i in range(value.count()):
+                varlist.append(("%d" % i, "QString", "%s" % value[i]))
+        elif qttype == 'QPoint':
+            varlist.append(("x", "int", "%d" % value.x()))
+            varlist.append(("y", "int", "%d" % value.y()))
+        elif qttype == 'QPointF':
+            varlist.append(("x", "float", "%g" % value.x()))
+            varlist.append(("y", "float", "%g" % value.y()))
+        elif qttype == 'QRect':
+            varlist.append(("x", "int", "%d" % value.x()))
+            varlist.append(("y", "int", "%d" % value.y()))
+            varlist.append(("width", "int", "%d" % value.width()))
+            varlist.append(("height", "int", "%d" % value.height()))
+        elif qttype == 'QRectF':
+            varlist.append(("x", "float", "%g" % value.x()))
+            varlist.append(("y", "float", "%g" % value.y()))
+            varlist.append(("width", "float", "%g" % value.width()))
+            varlist.append(("height", "float", "%g" % value.height()))
+        elif qttype == 'QSize':
+            varlist.append(("width", "int", "%d" % value.width()))
+            varlist.append(("height", "int", "%d" % value.height()))
+        elif qttype == 'QSizeF':
+            varlist.append(("width", "float", "%g" % value.width()))
+            varlist.append(("height", "float", "%g" % value.height()))
+        elif qttype == 'QColor':
+            varlist.append(("name", "str", "%s" % value.name()))
+            r, g, b, a = value.getRgb()
+            varlist.append(("rgba", "int", "%d, %d, %d, %d" % (r, g, b, a)))
+            h, s, v, a = value.getHsv()
+            varlist.append(("hsva", "int", "%d, %d, %d, %d" % (h, s, v, a)))
+            c, m, y, k, a = value.getCmyk()
+            varlist.append(
+                ("cmyka", "int", "%d, %d, %d, %d, %d" % (c, m, y, k, a)))
+        elif qttype == 'QDate':
+            varlist.append(("", "QDate", "%s" % value.toString()))
+        elif qttype == 'QTime':
+            varlist.append(("", "QTime", "%s" % value.toString()))
+        elif qttype == 'QDateTime':
+            varlist.append(("", "QDateTime", "%s" % value.toString()))
+        elif qttype == 'QDir':
+            varlist.append(("path", "str", "%s" % value.path()))
+            varlist.append(
+                ("absolutePath", "str", "%s" % value.absolutePath()))
+            varlist.append(
+                ("canonicalPath", "str", "%s" % value.canonicalPath()))
+        elif qttype == 'QFile':
+            varlist.append(("fileName", "str", "%s" % value.fileName()))
+        elif qttype == 'QFont':
+            varlist.append(("family", "str", "%s" % value.family()))
+            varlist.append(("pointSize", "int", "%d" % value.pointSize()))
+            varlist.append(("weight", "int", "%d" % value.weight()))
+            varlist.append(("bold", "bool", "%s" % value.bold()))
+            varlist.append(("italic", "bool", "%s" % value.italic()))
+        elif qttype == 'QUrl':
+            varlist.append(("url", "str", "%s" % value.toString()))
+            varlist.append(("scheme", "str", "%s" % value.scheme()))
+            varlist.append(("user", "str", "%s" % value.userName()))
+            varlist.append(("password", "str", "%s" % value.password()))
+            varlist.append(("host", "str", "%s" % value.host()))
+            varlist.append(("port", "int", "%d" % value.port()))
+            varlist.append(("path", "str", "%s" % value.path()))
+        elif qttype == 'QModelIndex':
+            varlist.append(("valid", "bool", "%s" % value.isValid()))
+            if value.isValid():
+                varlist.append(("row", "int", "%s" % value.row()))
+                varlist.append(("column", "int", "%s" % value.column()))
+                varlist.append(
+                    ("internalId", "int", "%s" % value.internalId()))
+                varlist.append(
+                    ("internalPointer", "void *", "%s" %
+                     value.internalPointer()))
+        elif qttype == 'QRegExp':
+            varlist.append(("pattern", "str", "%s" % value.pattern()))
+        
+        # GUI stuff
+        elif qttype == 'QAction':
+            varlist.append(("name", "str", "%s" % value.objectName()))
+            varlist.append(("text", "str", "%s" % value.text()))
+            varlist.append(("icon text", "str", "%s" % value.iconText()))
+            varlist.append(("tooltip", "str", "%s" % value.toolTip()))
+            varlist.append(("whatsthis", "str", "%s" % value.whatsThis()))
+            varlist.append(
+                ("shortcut", "str", "%s" % value.shortcut().toString()))
+        elif qttype == 'QKeySequence':
+            varlist.append(("value", "", "%s" % value.toString()))
+            
+        # XML stuff
+        elif qttype == 'QDomAttr':
+            varlist.append(("name", "str", "%s" % value.name()))
+            varlist.append(("value", "str", "%s" % value.value()))
+        elif qttype == 'QDomCharacterData':
+            varlist.append(("data", "str", "%s" % value.data()))
+        elif qttype == 'QDomComment':
+            varlist.append(("data", "str", "%s" % value.data()))
+        elif qttype == "QDomDocument":
+            varlist.append(("text", "str", "%s" % value.toString()))
+        elif qttype == 'QDomElement':
+            varlist.append(("tagName", "str", "%s" % value.tagName()))
+            varlist.append(("text", "str", "%s" % value.text()))
+        elif qttype == 'QDomText':
+            varlist.append(("data", "str", "%s" % value.data()))
+            
+        # Networking stuff
+        elif qttype == 'QHostAddress':
+            varlist.append(
+                ("address", "QHostAddress", "%s" % value.toString()))
+            
+        return varlist
+        
+    def __formatVariablesList(self, keylist, dict, scope, filter=[],
+                              formatSequences=0):
+        """
+        Private method to produce a formatted variables list.
+        
+        The dictionary passed in is scanned. Variables are only added to
+        the list, if their type is not contained in the filter list and
+        their name doesn't match any of the filter expressions. The
+        formatted variables list (a list of tuples of 3 values) is returned.
+        
+        @param keylist keys of the dictionary
+        @param dict the dictionary to be scanned
+        @param scope 1 to filter using the globals filter, 0 using the locals
+            filter (int).
+            Variables are only added to the list, if their names do not match
+            any of the filter expressions.
+        @param filter the indices of variable types to be filtered. Variables
+            are only added to the list, if their type is not contained in the
+            filter list.
+        @param formatSequences flag indicating that sequence or dictionary
+            variables should be formatted. If it is 0 (or false), just the
+            number of items contained in these variables is returned (boolean).
+        @return list of formatted variables. Each variable entry is a tuple
+            of three elements, the variable name, its type and value.
+        """
+        varlist = []
+        if scope:
+            patternFilterObjects = self.globalsFilterObjects
+        else:
+            patternFilterObjects = self.localsFilterObjects
+        
+        for key in keylist:
+            # filter based on the filter pattern
+            matched = False
+            for pat in patternFilterObjects:
+                if pat.match(unicode(key)):
+                    matched = True
+                    break
+            if matched:
+                continue
+            
+            # filter hidden attributes (filter #0)
+            if 0 in filter and unicode(key)[:2] == '__':
+                continue
+            
+            # special handling for '__builtins__' (it's way too big)
+            if key == '__builtins__':
+                rvalue = '<module __builtin__ (built-in)>'
+                valtype = 'module'
+            else:
+                value = dict[key]
+                valtypestr = ("%s" % type(value))[1:-1]
+                    
+                if valtypestr.split(' ', 1)[0] == 'class':
+                    # handle new class type of python 2.2+
+                    if ConfigVarTypeStrings.index('instance') in filter:
+                        continue
+                    valtype = valtypestr
+                else:
+                    valtype = valtypestr[6:-1]
+                    try:
+                        if ConfigVarTypeStrings.index(valtype) in filter:
+                            continue
+                    except ValueError:
+                        if valtype == "classobj":
+                            if ConfigVarTypeStrings.index(
+                                    'instance') in filter:
+                                continue
+                        elif valtype == "sip.methoddescriptor":
+                            if ConfigVarTypeStrings.index(
+                                    'instance method') in filter:
+                                continue
+                        elif valtype == "sip.enumtype":
+                            if ConfigVarTypeStrings.index('class') in filter:
+                                continue
+                        elif not valtype.startswith("PySide") and \
+                                ConfigVarTypeStrings.index('other') in filter:
+                            continue
+                    
+                try:
+                    if valtype not in ['list', 'tuple', 'dict']:
+                        rvalue = repr(value)
+                        if valtype.startswith('class') and \
+                           rvalue[0] in ['{', '(', '[']:
+                            rvalue = ""
+                    else:
+                        if valtype == 'dict':
+                            rvalue = "%d" % len(value.keys())
+                        else:
+                            rvalue = "%d" % len(value)
+                except Exception:
+                    rvalue = ''
+                
+            if formatSequences:
+                if unicode(key) == key:
+                    key = "'%s'" % key
+                else:
+                    key = unicode(key)
+            varlist.append((key, valtype, rvalue))
+        
+        return varlist
+        
+    def __generateFilterObjects(self, scope, filterString):
+        """
+        Private slot to convert a filter string to a list of filter objects.
+        
+        @param scope 1 to generate filter for global variables, 0 for local
+            variables (int)
+        @param filterString string of filter patterns separated by ';'
+        """
+        patternFilterObjects = []
+        for pattern in filterString.split(';'):
+            patternFilterObjects.append(re.compile('^%s$' % pattern))
+        if scope:
+            self.globalsFilterObjects = patternFilterObjects[:]
+        else:
+            self.localsFilterObjects = patternFilterObjects[:]
+        
+    def __completionList(self, text):
+        """
+        Private slot to handle the request for a commandline completion list.
+        
+        @param text the text to be completed (string)
+        """
+        completerDelims = ' \t\n`~!@#$%^&*()-=+[{]}\\|;:\'",<>/?'
+        
+        completions = set()
+        # find position of last delim character
+        pos = -1
+        while pos >= -len(text):
+            if text[pos] in completerDelims:
+                if pos == -1:
+                    text = ''
+                else:
+                    text = text[pos + 1:]
+                break
+            pos -= 1
+        
+        # Get local and global completions
+        try:
+            localdict = self.currentThread.getFrameLocals(self.framenr)
+            localCompleter = Completer(localdict).complete
+            self.__getCompletionList(text, localCompleter, completions)
+        except AttributeError:
+            pass
+        self.__getCompletionList(text, self.complete, completions)
+        
+        self.sendJsonCommand("ResponseCompletion", {
+            "completions": list(completions),
+            "text": text,
+        })
+
+    def __getCompletionList(self, text, completer, completions):
+        """
+        Private method to create a completions list.
+        
+        @param text text to complete (string)
+        @param completer completer method
+        @param completions set where to add new completions strings (set)
+        """
+        state = 0
+        try:
+            comp = completer(text, state)
+        except Exception:
+            comp = None
+        while comp is not None:
+            completions.add(comp)
+            state += 1
+            try:
+                comp = completer(text, state)
+            except Exception:
+                comp = None
+
+    def startDebugger(self, filename=None, host=None, port=None,
+                      enableTrace=True, exceptions=True, tracePython=False,
+                      redirect=True):
+        """
+        Public method used to start the remote debugger.
+        
+        @param filename the program to be debugged (string)
+        @param host hostname of the debug server (string)
+        @param port portnumber of the debug server (int)
+        @param enableTrace flag to enable the tracing function (boolean)
+        @param exceptions flag to enable exception reporting of the IDE
+            (boolean)
+        @param tracePython flag to enable tracing into the Python library
+            (boolean)
+        @param redirect flag indicating redirection of stdin, stdout and
+            stderr (boolean)
+        """
+        global debugClient
+        if host is None:
+            host = os.getenv('ERICHOST', 'localhost')
+        if port is None:
+            port = os.getenv('ERICPORT', 42424)
+        
+        remoteAddress = self.__resolveHost(host)
+        self.connectDebugger(port, remoteAddress, redirect)
+        if filename is not None:
+            self.running = os.path.abspath(filename)
+        else:
+            try:
+                self.running = os.path.abspath(sys.argv[0])
+            except IndexError:
+                self.running = None
+        if self.running:
+            self.__setCoding(self.running)
+        self.passive = True
+        self.sendPassiveStartup(self.running, exceptions)
+        self.__interact()
+        
+        # setup the debugger variables
+        self._fncache = {}
+        self.dircache = []
+        self.mainFrame = None
+        self.debugging = True
+        
+        self.attachThread(mainThread=True)
+        self.mainThread.tracePython = tracePython
+        
+        # set the system exception handling function to ensure that
+        # we report on all unhandled exceptions
+        sys.excepthook = self.__unhandled_exception
+        self.__interceptSignals()
+        
+        # now start debugging
+        if enableTrace:
+            self.mainThread.set_trace()
+        
+    def startProgInDebugger(self, progargs, wd='', host=None,
+                            port=None, exceptions=True, tracePython=False,
+                            redirect=True):
+        """
+        Public method used to start the remote debugger.
+        
+        @param progargs commandline for the program to be debugged
+            (list of strings)
+        @param wd working directory for the program execution (string)
+        @param host hostname of the debug server (string)
+        @param port portnumber of the debug server (int)
+        @param exceptions flag to enable exception reporting of the IDE
+            (boolean)
+        @param tracePython flag to enable tracing into the Python library
+            (boolean)
+        @param redirect flag indicating redirection of stdin, stdout and
+            stderr (boolean)
+        """
+        if host is None:
+            host = os.getenv('ERICHOST', 'localhost')
+        if port is None:
+            port = os.getenv('ERICPORT', 42424)
+        
+        remoteAddress = self.__resolveHost(host)
+        self.connectDebugger(port, remoteAddress, redirect)
+        
+        self._fncache = {}
+        self.dircache = []
+        sys.argv = progargs[:]
+        sys.argv[0] = os.path.abspath(sys.argv[0])
+        sys.path = self.__getSysPath(os.path.dirname(sys.argv[0]))
+        if wd == '':
+            os.chdir(sys.path[1])
+        else:
+            os.chdir(wd)
+        self.running = sys.argv[0]
+        self.__setCoding(self.running)
+        self.mainFrame = None
+        self.debugging = True
+        
+        self.passive = True
+        self.sendPassiveStartup(self.running, exceptions)
+        self.__interact()
+        
+        self.attachThread(mainThread=True)
+        self.mainThread.tracePython = tracePython
+        
+        # set the system exception handling function to ensure that
+        # we report on all unhandled exceptions
+        sys.excepthook = self.__unhandled_exception
+        self.__interceptSignals()
+        
+        # This will eventually enter a local event loop.
+        # Note the use of repr() on self.running. This is needed on
+        # Windows, where backslash is the path separator. The backslashes
+        # would otherwise be stripped away during the eval, causing
+        # IOErrors if self.running were passed as a normal str.
+        self.debugMod.__dict__['__file__'] = self.running
+        sys.modules['__main__'] = self.debugMod
+        res = self.mainThread.run('execfile(' + repr(self.running) + ')',
+                                  self.debugMod.__dict__)
+        self.progTerminated(res)
+
+    def run_call(self, scriptname, func, *args):
+        """
+        Public method used to start the remote debugger and call a function.
+        
+        @param scriptname name of the script to be debugged (string)
+        @param func function to be called
+        @param *args arguments being passed to func
+        @return result of the function call
+        """
+        self.startDebugger(scriptname, enableTrace=False)
+        res = self.mainThread.runcall(func, *args)
+        self.progTerminated(res)
+        return res
+        
+    def __resolveHost(self, host):
+        """
+        Private method to resolve a hostname to an IP address.
+        
+        @param host hostname of the debug server (string)
+        @return IP address (string)
+        """
+        try:
+            host, version = host.split("@@")
+        except ValueError:
+            version = 'v4'
+        if version == 'v4':
+            family = socket.AF_INET
+        else:
+            family = socket.AF_INET6
+        return socket.getaddrinfo(host, None, family,
+                                  socket.SOCK_STREAM)[0][4][0]
+        
+    def main(self):
+        """
+        Public method implementing the main method.
+        """
+        if '--' in sys.argv:
+            args = sys.argv[1:]
+            host = None
+            port = None
+            wd = ''
+            tracePython = False
+            exceptions = True
+            redirect = True
+            while args[0]:
+                if args[0] == '-h':
+                    host = args[1]
+                    del args[0]
+                    del args[0]
+                elif args[0] == '-p':
+                    port = int(args[1])
+                    del args[0]
+                    del args[0]
+                elif args[0] == '-w':
+                    wd = args[1]
+                    del args[0]
+                    del args[0]
+                elif args[0] == '-t':
+                    tracePython = True
+                    del args[0]
+                elif args[0] == '-e':
+                    exceptions = False
+                    del args[0]
+                elif args[0] == '-n':
+                    redirect = False
+                    del args[0]
+                elif args[0] == '--no-encoding':
+                    self.noencoding = True
+                    del args[0]
+                elif args[0] == '--fork-child':
+                    self.fork_auto = True
+                    self.fork_child = True
+                    del args[0]
+                elif args[0] == '--fork-parent':
+                    self.fork_auto = True
+                    self.fork_child = False
+                    del args[0]
+                elif args[0] == '--':
+                    del args[0]
+                    break
+                else:   # unknown option
+                    del args[0]
+            if not args:
+                print "No program given. Aborting!"     # __IGNORE_WARNING__
+            else:
+                if not self.noencoding:
+                    self.__coding = self.defaultCoding
+                self.startProgInDebugger(args, wd, host, port,
+                                         exceptions=exceptions,
+                                         tracePython=tracePython,
+                                         redirect=redirect)
+        else:
+            if sys.argv[1] == '--no-encoding':
+                self.noencoding = True
+                del sys.argv[1]
+            if sys.argv[1] == '':
+                del sys.argv[1]
+            try:
+                port = int(sys.argv[1])
+            except (ValueError, IndexError):
+                port = -1
+            try:
+                redirect = int(sys.argv[2])
+            except (ValueError, IndexError):
+                redirect = True
+            try:
+                ipOrHost = sys.argv[3]
+                if ':' in ipOrHost:
+                    remoteAddress = ipOrHost
+                elif ipOrHost[0] in '0123456789':
+                    remoteAddress = ipOrHost
+                else:
+                    remoteAddress = self.__resolveHost(ipOrHost)
+            except Exception:
+                remoteAddress = None
+            sys.argv = ['']
+            if '' not in sys.path:
+                sys.path.insert(0, '')
+            if port >= 0:
+                if not self.noencoding:
+                    self.__coding = self.defaultCoding
+                self.connectDebugger(port, remoteAddress, redirect)
+                self.__interact()
+            else:
+                print "No network port given. Aborting..."  # __IGNORE_WARNING__
+        
+    def fork(self):
+        """
+        Public method implementing a fork routine deciding which branch to
+        follow.
+        
+        @return process ID (integer)
+        """
+        if not self.fork_auto:
+            self.sendJsonCommand("RequestForkTo", {})
+            self.eventLoop(True)
+        pid = DebugClientOrigFork()
+        if pid == 0:
+            # child
+            if not self.fork_child:
+                sys.settrace(None)
+                sys.setprofile(None)
+                self.sessionClose(0)
+        else:
+            # parent
+            if self.fork_child:
+                sys.settrace(None)
+                sys.setprofile(None)
+                self.sessionClose(0)
+        return pid
+        
+    def close(self, fd):
+        """
+        Public method implementing a close method as a replacement for
+        os.close().
+        
+        It prevents the debugger connections from being closed.
+        
+        @param fd file descriptor to be closed (integer)
+        """
+        if fd in [self.readstream.fileno(), self.writestream.fileno(),
+                  self.errorstream.fileno()]:
+            return
+        
+        DebugClientOrigClose(fd)
+        
+    def __getSysPath(self, firstEntry):
+        """
+        Private slot to calculate a path list including the PYTHONPATH
+        environment variable.
+        
+        @param firstEntry entry to be put first in sys.path (string)
+        @return path list for use as sys.path (list of strings)
+        """
+        sysPath = [path for path in os.environ.get("PYTHONPATH", "")
+                   .split(os.pathsep)
+                   if path not in sys.path] + sys.path[:]
+        if "" in sysPath:
+            sysPath.remove("")
+        sysPath.insert(0, firstEntry)
+        sysPath.insert(0, '')
+        return sysPath
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
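The `__resolveHost` method above supports an optional `@@` suffix on the hostname (e.g. `myhost@@v6`) to select the address family. A minimal standalone sketch of that convention (written in Python 3 syntax for illustration; the file itself targets Python 2, and the function name here is ours, not eric's):

```python
import socket


def resolve_host(host):
    """Resolve a host spec of the form 'name' or 'name@@v6' to an IP.

    Mirrors the '@@' convention used by __resolveHost: the suffix
    selects the address family, defaulting to IPv4.
    """
    try:
        host, version = host.split("@@")
    except ValueError:
        # no '@@' present -> unpacking fails -> default to IPv4
        version = 'v4'
    family = socket.AF_INET if version == 'v4' else socket.AF_INET6
    # getaddrinfo returns (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the numeric address string.
    return socket.getaddrinfo(host, None, family,
                              socket.SOCK_STREAM)[0][4][0]
```

A bare `resolve_host('localhost')` resolves over `AF_INET`, while `resolve_host('localhost@@v6')` would resolve over `AF_INET6`.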
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugClientCapabilities.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module defining the debug clients capabilities.
+"""
+
+HasDebugger = 0x0001
+HasInterpreter = 0x0002
+HasProfiler = 0x0004
+HasCoverage = 0x0008
+HasCompleter = 0x0010
+HasUnittest = 0x0020
+HasShell = 0x0040
+
+HasAll = HasDebugger | HasInterpreter | HasProfiler | \
+    HasCoverage | HasCompleter | HasUnittest | HasShell
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
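The capability constants above form a bit mask, so the IDE can test individual features with a bitwise AND. A small sketch of how such a mask might be queried (the helper name is ours, not part of the eric API):

```python
# Bit-flag capability values as defined in DebugClientCapabilities.py.
HasDebugger = 0x0001
HasInterpreter = 0x0002
HasProfiler = 0x0004
HasCoverage = 0x0008
HasCompleter = 0x0010
HasUnittest = 0x0020
HasShell = 0x0040

HasAll = (HasDebugger | HasInterpreter | HasProfiler |
          HasCoverage | HasCompleter | HasUnittest | HasShell)


def has_capability(mask, capability):
    """Return True if the given capability bit is set in mask."""
    return bool(mask & capability)
```

Because each value occupies a distinct bit, a client can advertise any subset, e.g. `HasAll & ~HasProfiler` for a client without profiler support.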
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugClientThreads.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,200 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2003 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing the multithreaded version of the debug client.
+"""
+
+import thread
+import sys
+
+from DebugThread import DebugThread
+import DebugClientBase
+
+
+def _debugclient_start_new_thread(target, args, kwargs={}):
+    """
+    Module function used to allow for debugging of multiple threads.
+    
+    The way it works is that below, we reset thread.start_new_thread to
+    this function object. Thus, providing a hook for us to see when
+    threads are started. From here we forward the request onto the
+    DebugClient which will create a DebugThread object to allow tracing
+    of the thread then start up the thread. These actions are always
+    performed in order to allow dropping into debug mode.
+    
+    See DebugClientThreads.attachThread and DebugThread.DebugThread in
+    DebugThread.py
+    
+    @param target the start function of the target thread (i.e. the user code)
+    @param args arguments to pass to target
+    @param kwargs keyword arguments to pass to target
+    @return The identifier of the created thread
+    """
+    if DebugClientBase.DebugClientInstance is not None:
+        return DebugClientBase.DebugClientInstance.attachThread(
+            target, args, kwargs)
+    else:
+        return _original_start_thread(target, args, kwargs)
+    
+# make thread hooks available to system
+_original_start_thread = thread.start_new_thread
+thread.start_new_thread = _debugclient_start_new_thread
+
+# Note: import threading here AFTER above hook, as threading caches
+#       thread.start_new_thread.
+from threading import RLock
+
+
+class DebugClientThreads(DebugClientBase.DebugClientBase):
+    """
+    Class implementing the client side of the debugger.
+
+    This variant of the debugger implements a threaded debugger client
+    by subclassing all relevant base classes.
+    """
+    def __init__(self):
+        """
+        Constructor
+        """
+        DebugClientBase.DebugClientBase.__init__(self)
+        
+        # protection lock for synchronization
+        self.clientLock = RLock()
+        
+        # the "current" thread, basically the thread we are at a breakpoint
+        # for.
+        self.currentThread = None
+        
+        # special objects representing the main scripts thread and frame
+        self.mainThread = None
+        self.mainFrame = None
+        
+        self.variant = 'Threaded'
+
+    def attachThread(self, target=None, args=None, kwargs=None, mainThread=0):
+        """
+        Public method to set up a thread for DebugClient to debug.
+        
+        If mainThread is non-zero, then we are attaching to the already
+        started main thread of the app and the rest of the args are ignored.
+        
+        @param target the start function of the target thread (i.e. the
+            user code)
+        @param args arguments to pass to target
+        @param kwargs keyword arguments to pass to target
+        @param mainThread non-zero, if we are attaching to the already
+              started main thread of the app
+        @return The identifier of the created thread
+        """
+        try:
+            self.lockClient()
+            newThread = DebugThread(self, target, args, kwargs, mainThread)
+            ident = -1
+            if mainThread:
+                ident = thread.get_ident()
+                self.mainThread = newThread
+                if self.debugging:
+                    sys.setprofile(newThread.profile)
+            else:
+                ident = _original_start_thread(newThread.bootstrap, ())
+                if self.mainThread is not None:
+                    self.tracePython = self.mainThread.tracePython
+            newThread.set_ident(ident)
+            self.threads[newThread.get_ident()] = newThread
+        finally:
+            self.unlockClient()
+        return ident
+    
+    def threadTerminated(self, dbgThread):
+        """
+        Public method called when a DebugThread has exited.
+        
+        @param dbgThread the DebugThread that has exited
+        """
+        try:
+            self.lockClient()
+            try:
+                del self.threads[dbgThread.get_ident()]
+            except KeyError:
+                pass
+        finally:
+            self.unlockClient()
+            
+    def lockClient(self, blocking=1):
+        """
+        Public method to acquire the lock for this client.
+        
+        @param blocking flag indicating a blocking lock
+        @return flag indicating successful locking
+        """
+        if blocking:
+            self.clientLock.acquire()
+        else:
+            return self.clientLock.acquire(blocking)
+        
+    def unlockClient(self):
+        """
+        Public method to release the lock for this client.
+        """
+        try:
+            self.clientLock.release()
+        except AssertionError:
+            pass
+        
+    def setCurrentThread(self, id):
+        """
+        Public method to set the current thread.
+
+        @param id the id the current thread should be set to.
+        """
+        try:
+            self.lockClient()
+            if id is None:
+                self.currentThread = None
+            else:
+                self.currentThread = self.threads[id]
+        finally:
+            self.unlockClient()
+    
+    def eventLoop(self, disablePolling=False):
+        """
+        Public method implementing our event loop.
+        
+        @param disablePolling flag indicating to enter an event loop with
+            polling disabled (boolean)
+        """
+        # make sure we set the current thread appropriately
+        threadid = thread.get_ident()
+        self.setCurrentThread(threadid)
+        
+        DebugClientBase.DebugClientBase.eventLoop(self, disablePolling)
+        
+        self.setCurrentThread(None)
+
+    def set_quit(self):
+        """
+        Public method to do a 'set quit' on all threads.
+        """
+        try:
+            locked = self.lockClient(0)
+            try:
+                for key in self.threads.keys():
+                    self.threads[key].set_quit()
+            except Exception:
+                pass
+        finally:
+            if locked:
+                self.unlockClient()
+
+# We are normally called by the debugger to execute directly.
+
+if __name__ == '__main__':
+    debugClient = DebugClientThreads()
+    debugClient.main()
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702, E402
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugConfig.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module defining type strings for the different Python types.
+"""
+
+ConfigVarTypeStrings = [
+    '__', 'NoneType', 'type',
+    'bool', 'int', 'long', 'float', 'complex',
+    'str', 'unicode', 'tuple', 'list',
+    'dict', 'dict-proxy', 'set', 'file', 'xrange',
+    'slice', 'buffer', 'class', 'instance',
+    'instance method', 'property', 'generator',
+    'function', 'builtin_function_or_method', 'code', 'module',
+    'ellipsis', 'traceback', 'frame', 'other'
+]
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
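The variable filters the IDE sends are lists of indices into `ConfigVarTypeStrings`; `DebugClientBase` looks up a value's type name in this list to decide whether to hide it. A sketch of that lookup (the `is_filtered` helper is ours; the fallback to `'other'` approximates, in simplified form, what `DebugClientBase` does for unrecognised type names):

```python
ConfigVarTypeStrings = [
    '__', 'NoneType', 'type',
    'bool', 'int', 'long', 'float', 'complex',
    'str', 'unicode', 'tuple', 'list',
    'dict', 'dict-proxy', 'set', 'file', 'xrange',
    'slice', 'buffer', 'class', 'instance',
    'instance method', 'property', 'generator',
    'function', 'builtin_function_or_method', 'code', 'module',
    'ellipsis', 'traceback', 'frame', 'other',
]


def is_filtered(valtype, filter_indices):
    """Return True if a variable of the given type name should be
    hidden, given the index list the IDE sends."""
    try:
        idx = ConfigVarTypeStrings.index(valtype)
    except ValueError:
        # unknown type names fall back to the catch-all 'other' entry
        idx = ConfigVarTypeStrings.index('other')
    return idx in filter_indices
```

So a filter of `[ConfigVarTypeStrings.index('module')]` hides module objects from the variables viewer while leaving everything else visible.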
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugProtocol.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,88 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module defining the debug protocol tokens.
+"""
+# TODO: delete this file
+# The address used for debugger/client communications.
+DebugAddress = '127.0.0.1'
+
+# The protocol "words".
+RequestOK = '>OK?<'
+RequestEnv = '>Environment<'
+RequestCapabilities = '>Capabilities<'
+RequestLoad = '>Load<'
+RequestRun = '>Run<'
+RequestCoverage = '>Coverage<'
+RequestProfile = '>Profile<'
+RequestContinue = '>Continue<'
+RequestStep = '>Step<'
+RequestStepOver = '>StepOver<'
+RequestStepOut = '>StepOut<'
+RequestStepQuit = '>StepQuit<'
+RequestBreak = '>Break<'
+RequestBreakEnable = '>EnableBreak<'
+RequestBreakIgnore = '>IgnoreBreak<'
+RequestWatch = '>Watch<'
+RequestWatchEnable = '>EnableWatch<'
+RequestWatchIgnore = '>IgnoreWatch<'
+RequestVariables = '>Variables<'
+RequestVariable = '>Variable<'
+RequestSetFilter = '>SetFilter<'
+RequestThreadList = '>ThreadList<'
+RequestThreadSet = '>ThreadSet<'
+RequestEval = '>Eval<'
+RequestExec = '>Exec<'
+RequestShutdown = '>Shutdown<'
+RequestBanner = '>Banner<'
+RequestCompletion = '>Completion<'
+RequestUTPrepare = '>UTPrepare<'
+RequestUTRun = '>UTRun<'
+RequestUTStop = '>UTStop<'
+RequestForkTo = '>ForkTo<'
+RequestForkMode = '>ForkMode<'
+
+ResponseOK = '>OK<'
+ResponseCapabilities = RequestCapabilities
+ResponseContinue = '>Continue<'
+ResponseException = '>Exception<'
+ResponseSyntax = '>SyntaxError<'
+ResponseSignal = '>Signal<'
+ResponseExit = '>Exit<'
+ResponseLine = '>Line<'
+ResponseRaw = '>Raw<'
+ResponseClearBreak = '>ClearBreak<'
+ResponseBPConditionError = '>BPConditionError<'
+ResponseClearWatch = '>ClearWatch<'
+ResponseWPConditionError = '>WPConditionError<'
+ResponseVariables = RequestVariables
+ResponseVariable = RequestVariable
+ResponseThreadList = RequestThreadList
+ResponseThreadSet = RequestThreadSet
+ResponseStack = '>CurrentStack<'
+ResponseBanner = RequestBanner
+ResponseCompletion = RequestCompletion
+ResponseUTPrepared = '>UTPrepared<'
+ResponseUTStartTest = '>UTStartTest<'
+ResponseUTStopTest = '>UTStopTest<'
+ResponseUTTestFailed = '>UTTestFailed<'
+ResponseUTTestErrored = '>UTTestErrored<'
+ResponseUTTestSkipped = '>UTTestSkipped<'
+ResponseUTTestFailedExpected = '>UTTestFailedExpected<'
+ResponseUTTestSucceededUnexpected = '>UTTestSucceededUnexpected<'
+ResponseUTFinished = '>UTFinished<'
+ResponseForkTo = RequestForkTo
+
+PassiveStartup = '>PassiveStartup<'
+
+RequestCallTrace = '>CallTrace<'
+CallTrace = '>CallTrace<'
+
+EOT = '>EOT<\n'
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugThread.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,134 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing the debug thread.
+"""
+
+import bdb
+import sys
+
+from DebugBase import DebugBase
+
+
+class DebugThread(DebugBase):
+    """
+    Class implementing a debug thread.
+
+    It represents a thread in the python interpreter that we are tracing.
+    
+    Provides simple wrapper methods around bdb for the 'owning' client to
+    call to step etc.
+    """
+    def __init__(self, dbgClient, targ=None, args=None, kwargs=None,
+                 mainThread=False):
+        """
+        Constructor
+        
+        @param dbgClient the owning client
+        @param targ the target method in the run thread
+        @param args  arguments to be passed to the thread
+        @param kwargs arguments to be passed to the thread
+        @param mainThread False if this thread is not the main script's thread
+        """
+        DebugBase.__init__(self, dbgClient)
+        
+        self._target = targ
+        self._args = args
+        self._kwargs = kwargs
+        self._mainThread = mainThread
+        # thread running tracks execution state of client code
+        # it will always be False for the main thread, as that is tracked
+        # by DebugClientThreads and Bdb...
+        self._threadRunning = False
+        
+        self.__ident = None  # id of this thread.
+        self.__name = ""
+        self.tracePython = False
+    
+    def set_ident(self, id):
+        """
+        Public method to set the id for this thread.
+        
+        @param id id for this thread (int)
+        """
+        self.__ident = id
+    
+    def get_ident(self):
+        """
+        Public method to return the id of this thread.
+        
+        @return the id of this thread (int)
+        """
+        return self.__ident
+    
+    def get_name(self):
+        """
+        Public method to return the name of this thread.
+        
+        @return name of this thread (string)
+        """
+        return self.__name
+    
+    def traceThread(self):
+        """
+        Public method to set up tracing for this thread.
+        """
+        self.set_trace()
+        if not self._mainThread:
+            self.set_continue(0)
+    
+    def bootstrap(self):
+        """
+        Public method to bootstrap the thread.
+        
+        It wraps the call to the user function to enable tracing
+        beforehand.
+        """
+        try:
+            try:
+                self._threadRunning = True
+                self.traceThread()
+                self._target(*self._args, **self._kwargs)
+            except bdb.BdbQuit:
+                pass
+        finally:
+            self._threadRunning = False
+            self.quitting = True
+            self._dbgClient.threadTerminated(self)
+            sys.settrace(None)
+            sys.setprofile(None)
+    
+    def trace_dispatch(self, frame, event, arg):
+        """
+        Public method wrapping the trace_dispatch of bdb.py.
+        
+        It wraps the call to dispatch tracing into
+        bdb to make sure we have locked the client to prevent multiple
+        threads from entering the client event loop.
+        
+        @param frame The current stack frame.
+        @param event The trace event (string)
+        @param arg The arguments
+        @return local trace function
+        """
+        try:
+            self._dbgClient.lockClient()
+            # if this thread came out of a lock, and we are quitting
+            # and we are still running, then get rid of tracing for this thread
+            if self.quitting and self._threadRunning:
+                sys.settrace(None)
+                sys.setprofile(None)
+            import threading
+            self.__name = threading.currentThread().getName()
+            retval = DebugBase.trace_dispatch(self, frame, event, arg)
+        finally:
+            self._dbgClient.unlockClient()
+        
+        return retval
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
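`DebugThread.trace_dispatch` above serialises all trace callbacks through the client lock so that only one thread at a time can enter the client event loop. The core of that pattern, reduced to a sketch (names are ours; eric additionally disables tracing when quitting):

```python
import threading

client_lock = threading.RLock()


def locked_dispatch(dispatch, *args):
    """Run a trace dispatch callback while holding the client lock,
    mirroring the lock/finally-unlock structure of trace_dispatch."""
    with client_lock:
        return dispatch(*args)
```

Using an `RLock` (rather than a plain `Lock`) matters here: trace callbacks can re-enter client code on the same thread, and a non-reentrant lock would deadlock.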
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/DebugUtilities.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,34 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing utility functions for the debug client.
+"""
+
+
+def prepareJsonCommand(method, params):
+    """
+    Function to prepare a single command or response for transmission to
+    the IDE.
+    
+    @param method command or response name to be sent
+    @type str
+    @param params dictionary of named parameters for the command or response
+    @type dict
+    @return prepared JSON command or response string
+    @rtype str
+    """
+    import json
+    
+    commandDict = {
+        "jsonrpc": "2.0",
+        "method": method,
+        "params": params,
+    }
+    return json.dumps(commandDict) + '\n'
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M702
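`prepareJsonCommand` frames every debugger message as a newline-terminated JSON-RPC 2.0 notification, which is what gives this branch its name. A Python 3 sketch of the same framing, with the matching decode step a receiver would perform (the function name follows PEP 8 here; the file above uses `prepareJsonCommand`):

```python
import json


def prepare_json_command(method, params):
    """Build a newline-terminated JSON-RPC 2.0 notification string,
    matching the shape produced by prepareJsonCommand above."""
    command_dict = {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
    }
    return json.dumps(command_dict) + '\n'
```

On the receiving side, the trailing `'\n'` lets the peer split the stream into messages before handing each line to `json.loads`.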
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/FlexCompleter.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,275 @@
+# -*- coding: utf-8 -*-
+
+"""
+Word completion for the eric6 shell.
+
+<h4>NOTE for eric6 variant</h4>
+
+    This version is a re-implementation of FlexCompleter
+    as found in the PyQwt package. It is modified to work with the eric6 debug
+    clients.
+
+
+<h4>NOTE for the PyQwt variant</h4>
+
+    This version is a re-implementation of FlexCompleter
+    with readline support for PyQt&sip-3.6 and earlier.
+
+    Full readline support is present in PyQt&sip-snapshot-20030531 and later.
+
+
+<h4>NOTE for FlexCompleter</h4>
+
+    This version is a re-implementation of rlcompleter with
+    selectable namespace.
+
+    The problem with rlcompleter is that it's hardwired to work with
+    __main__.__dict__, and in some cases one may have 'sandboxed' namespaces.
+    So this class is a ripoff of rlcompleter, with the namespace to work in as
+    an optional parameter.
+    
+    This class can be used just like rlcompleter, but the Completer class now
+    has a constructor with the optional 'namespace' parameter.
+    
+    A patch has been submitted to Python@sourceforge for these changes to go in
+    the standard Python distribution.
+
+
+<h4>Original rlcompleter documentation</h4>
+
+    This requires the latest extension to the readline module.  The
+    completer completes keywords, built-ins and globals in __main__; when
+    completing NAME.NAME..., it evaluates (!) the expression up to the last
+    dot and completes its attributes.
+    
+    It's very cool to do "import string", type "string.", hit the
+    completion key (twice), and see the list of names defined by the
+    string module!
+    
+    Tip: to use the tab key as the completion key, call
+    
+    'readline.parse_and_bind("tab: complete")'
+    
+    <b>Notes</b>:
+    <ul>
+    <li>
+    Exceptions raised by the completer function are *ignored* (and
+    generally cause the completion to fail).  This is a feature -- since
+    readline sets the tty device in raw (or cbreak) mode, printing a
+    traceback wouldn't work well without some complicated hoopla to save,
+    reset and restore the tty state.
+    </li>
+    <li>
+    The evaluation of the NAME.NAME... form may cause arbitrary
+    application defined code to be executed if an object with a
+    __getattr__ hook is found.  Since it is the responsibility of the
+    application (or the user) to enable this feature, I consider this an
+    acceptable risk.  More complicated expressions (e.g. function calls or
+    indexing operations) are *not* evaluated.
+    </li>
+    <li>
+    GNU readline is also used by the built-in functions input() and
+    raw_input(), and thus these also benefit/suffer from the completer
+    features.  Clearly an interactive application can benefit by
+    specifying its own completer function and using raw_input() for all
+    its input.
+    </li>
+    <li>
+    When the original stdin is not a tty device, GNU readline is never
+    used, and this module (and the readline module) are silently inactive.
+    </li>
+    </ul>
+"""
+
+#*****************************************************************************
+#
+# Since this file is essentially a minimally modified copy of the rlcompleter
+# module which is part of the standard Python distribution, I assume that the
+# proper procedure is to maintain its copyright as belonging to the Python
+# Software Foundation:
+#
+#       Copyright (C) 2001 Python Software Foundation, www.python.org
+#
+#  Distributed under the terms of the Python Software Foundation license.
+#
+#  Full text available at:
+#
+#                  http://www.python.org/2.1/license.html
+#
+#*****************************************************************************
+
+import __builtin__
+import __main__
+
+__all__ = ["Completer"]
+
+
+class Completer(object):
+    """
+    Class implementing the command line completer object.
+    """
+    def __init__(self, namespace=None):
+        """
+        Constructor
+
+        Completer([namespace]) -> completer instance.
+
+        If unspecified, the default namespace where completions are performed
+        is __main__ (technically, __main__.__dict__). Namespaces should be
+        given as dictionaries.
+
+        Completer instances should be used as the completion mechanism of
+        readline via the set_completer() call:
+
+        readline.set_completer(Completer(my_namespace).complete)
+        
+        @param namespace namespace for the completer
+        @exception TypeError raised to indicate a wrong namespace structure
+        """
+        if namespace and not isinstance(namespace, dict):
+            raise TypeError('namespace must be a dictionary')
+
+        # Don't bind to namespace quite yet, but flag whether the user wants a
+        # specific namespace or to use __main__.__dict__. This will allow us
+        # to bind to __main__.__dict__ at completion time, not now.
+        if namespace is None:
+            self.use_main_ns = 1
+        else:
+            self.use_main_ns = 0
+            self.namespace = namespace
+
+    def complete(self, text, state):
+        """
+        Public method to return the next possible completion for 'text'.
+
+        This is called successively with state == 0, 1, 2, ... until it
+        returns None.  The completion should begin with 'text'.
+        
+        @param text The text to be completed. (string)
+        @param state The state of the completion. (integer)
+        @return The possible completions as a list of strings.
+        """
+        if self.use_main_ns:
+            self.namespace = __main__.__dict__
+            
+        if state == 0:
+            if "." in text:
+                self.matches = self.attr_matches(text)
+            else:
+                self.matches = self.global_matches(text)
+        try:
+            return self.matches[state]
+        except IndexError:
+            return None
+
+    def _callable_postfix(self, val, word):
+        """
+        Protected method to check for a callable.
+        
+        @param val value to check (object)
+        @param word word to amend (string)
+        @return amended word (string)
+        """
+        if hasattr(val, '__call__'):
+            word = word + "("
+        return word
+
+    def global_matches(self, text):
+        """
+        Public method to compute matches when text is a simple name.
+
+        @param text The text to be completed. (string)
+        @return A list of all keywords, built-in functions and names currently
+        defined in self.namespace that match.
+        """
+        import keyword
+        matches = []
+        n = len(text)
+        for word in keyword.kwlist:
+            if word[:n] == text:
+                matches.append(word)
+        for nspace in [__builtin__.__dict__, self.namespace]:
+            for word, val in nspace.items():
+                if word[:n] == text and word != "__builtins__":
+                    matches.append(self._callable_postfix(val, word))
+        return matches
+
+    def attr_matches(self, text):
+        """
+        Public method to compute matches when text contains a dot.
+
+        Assuming the text is of the form NAME.NAME....[NAME], and is
+        evaluatable in self.namespace, it will be evaluated and its attributes
+        (as revealed by dir()) are used as possible completions.  (For class
+        instances, class members are also considered.)
+
+        <b>WARNING</b>: this can still invoke arbitrary C code, if an object
+        with a __getattr__ hook is evaluated.
+        
+        @param text The text to be completed. (string)
+        @return A list of all matches.
+        """
+        import re
+
+        # Testing. This is the original code:
+        # m = re.match(r"(\w+(\.\w+)*)\.(\w*)", text)
+
+        # Modified to catch [] in expressions:
+        # m = re.match(r"([\w\[\]]+(\.[\w\[\]]+)*)\.(\w*)", text)
+
+        # Another option, seems to work great. Catches things like ''.<tab>
+        m = re.match(r"(\S+(\.\w+)*)\.(\w*)", text)
+
+        if not m:
+            return []
+        expr, attr = m.group(1, 3)
+        try:
+            thisobject = eval(expr, self.namespace)
+        except Exception:
+            return []
+
+        # get the content of the object, except __builtins__
+        words = dir(thisobject)
+        if "__builtins__" in words:
+            words.remove("__builtins__")
+
+        if hasattr(thisobject, '__class__'):
+            words.append('__class__')
+            words = words + get_class_members(thisobject.__class__)
+        matches = []
+        n = len(attr)
+        for word in words:
+            try:
+                if word[:n] == attr and hasattr(thisobject, word):
+                    val = getattr(thisobject, word)
+                    word = self._callable_postfix(
+                        val, "%s.%s" % (expr, word))
+                    matches.append(word)
+            except Exception:
+                # some badly behaved objects pollute dir() with non-strings,
+                # which cause the completion to fail.  This way we skip the
+                # bad entries and can still continue processing the others.
+                pass
+        return matches
+
+
+def get_class_members(klass):
+    """
+    Module function to retrieve the class members.
+    
+    @param klass The class object to be analysed.
+    @return A list of all names defined in the class.
+    """
+    # PyQwt's hack for PyQt&sip-3.6 and earlier
+    if hasattr(klass, 'getLazyNames'):
+        return klass.getLazyNames()
+    # vanilla Python stuff
+    ret = dir(klass)
+    if hasattr(klass, '__bases__'):
+        for base in klass.__bases__:
+            ret = ret + get_class_members(base)
+    return ret
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702, M111
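The "selectable namespace" idea described in the FlexCompleter docstring eventually landed in the standard library: Python's `rlcompleter.Completer` now takes the same optional `namespace` dictionary. A short Python 3 sketch of the completion loop the debug client drives (the `spam_*` names are made up for the example):

```python
import rlcompleter

# A 'sandboxed' namespace, completely independent of __main__.__dict__.
ns = {"spam_value": 42, "spam_func": lambda: None}
completer = rlcompleter.Completer(namespace=ns)

# complete() is called with state == 0, 1, 2, ... until it returns None,
# exactly as described in Completer.complete() above.
matches = []
state = 0
while True:
    match = completer.complete("spam", state)
    if match is None:
        break
    matches.append(match)
    state += 1
```

Callable matches get an opening parenthesis (or call parentheses, depending on the Python version) appended, the same behaviour `_callable_postfix` implements in this file.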
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/PyProfile.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,176 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+
+"""
+Module defining additions to the standard Python profile.py.
+"""
+
+import os
+import marshal
+import profile
+import atexit
+import pickle
+
+
+class PyProfile(profile.Profile):
+    """
+    Class extending the standard Python profiler with additional methods.
+    
+    This class extends the standard Python profiler by the functionality to
+    save the collected timing data in a timing cache, to restore these data
+    on subsequent calls, to store a profile dump to a standard filename and
+    to erase these caches.
+    """
+    def __init__(self, basename, timer=None, bias=None):
+        """
+        Constructor
+        
+        @param basename name of the script to be profiled (string)
+        @param timer function defining the timing calculation
+        @param bias calibration value (float)
+        """
+        try:
+            profile.Profile.__init__(self, timer, bias)
+        except TypeError:
+            profile.Profile.__init__(self, timer)
+        
+        self.dispatch = self.__class__.dispatch
+        
+        basename = os.path.splitext(basename)[0]
+        self.profileCache = "%s.profile" % basename
+        self.timingCache = "%s.timings" % basename
+        
+        self.__restore()
+        atexit.register(self.save)
+        
+    def __restore(self):
+        """
+        Private method to restore the timing data from the timing cache.
+        """
+        if not os.path.exists(self.timingCache):
+            return
+            
+        try:
+            cache = open(self.timingCache, 'rb')
+            timings = marshal.load(cache)
+            cache.close()
+            if isinstance(timings, dict):
+                self.timings = timings
+        except Exception:
+            pass
+        
+    def save(self):
+        """
+        Public method to store the collected profile data.
+        """
+        # dump the raw timing data
+        cache = open(self.timingCache, 'wb')
+        marshal.dump(self.timings, cache)
+        cache.close()
+        
+        # dump the profile data
+        self.dump_stats(self.profileCache)
+        
+    def dump_stats(self, file):
+        """
+        Public method to dump the statistics data.
+        
+        @param file name of the file to write to (string)
+        """
+        f = None
+        try:
+            f = open(file, 'wb')
+            self.create_stats()
+            pickle.dump(self.stats, f, 2)
+        except (EnvironmentError, pickle.PickleError):
+            pass
+        finally:
+            if f is not None:
+                f.close()
+
+    def erase(self):
+        """
+        Public method to erase the collected timing data.
+        """
+        self.timings = {}
+        if os.path.exists(self.timingCache):
+            os.remove(self.timingCache)
+
+    def fix_frame_filename(self, frame):
+        """
+        Public method used to fixup the filename for a given frame.
+        
+        The logic employed here is that, if a module was loaded from a
+        .pyc file, the correct .py to operate on should be in the same
+        path as the .pyc. This logic is needed because, when a .pyc file
+        is generated, the filename embedded in it (and thus readable from
+        the frame's code object) is the fully qualified file path at the
+        time the .pyc was generated. If files are moved from machine to
+        machine, this can break debugging, as the .pyc will refer to the
+        .py on the original machine. Another case is sharing code over a
+        network; this logic deals with that as well.
+        
+        @param frame the frame object
+        @return fixed up file name (string)
+        """
+        # get module name from __file__
+        if not isinstance(frame, profile.Profile.fake_frame) and \
+                '__file__' in frame.f_globals:
+            root, ext = os.path.splitext(frame.f_globals['__file__'])
+            if ext in ['.pyc', '.py', '.py2', '.pyo']:
+                fixedName = root + '.py'
+                if os.path.exists(fixedName):
+                    return fixedName
+                
+                fixedName = root + '.py2'
+                if os.path.exists(fixedName):
+                    return fixedName
+
+        return frame.f_code.co_filename
+
+    def trace_dispatch_call(self, frame, t):
+        """
+        Public method used to trace functions calls.
+        
+        This is a variant of the one found in the standard Python
+        profile.py calling fix_frame_filename above.
+        
+        @param frame reference to the call frame
+        @param t arguments of the call
+        @return flag indicating a handled call
+        """
+        if self.cur and frame.f_back is not self.cur[-2]:
+            rpt, rit, ret, rfn, rframe, rcur = self.cur
+            if not isinstance(rframe, profile.Profile.fake_frame):
+                assert rframe.f_back is frame.f_back, ("Bad call", rfn,
+                                                       rframe, rframe.f_back,
+                                                       frame, frame.f_back)
+                self.trace_dispatch_return(rframe, 0)
+                assert (self.cur is None or
+                        frame.f_back is self.cur[-2]), ("Bad call",
+                                                        self.cur[-3])
+        fcode = frame.f_code
+        fn = (self.fix_frame_filename(frame),
+              fcode.co_firstlineno, fcode.co_name)
+        self.cur = (t, 0, 0, fn, frame, self.cur)
+        timings = self.timings
+        if fn in timings:
+            cc, ns, tt, ct, callers = timings[fn]
+            timings[fn] = cc, ns + 1, tt, ct, callers
+        else:
+            timings[fn] = 0, 0, 0, 0, {}
+        return 1
+    
+    dispatch = {
+        "call": trace_dispatch_call,
+        "exception": profile.Profile.trace_dispatch_exception,
+        "return": profile.Profile.trace_dispatch_return,
+        "c_call": profile.Profile.trace_dispatch_c_call,
+        "c_exception": profile.Profile.trace_dispatch_return,
+        # the C function returned
+        "c_return": profile.Profile.trace_dispatch_return,
+    }
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
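The timing cache that `save()` writes and `__restore()` reads back is nothing more than a `marshal` dump of the profiler's `timings` dictionary. The round trip can be sketched in isolation (the file name and the single timings entry are made up for the example):

```python
import marshal
import os
import tempfile

# A tiny stand-in for Profile.timings: the key is the (filename,
# first line, function name) tuple PyProfile builds, the value is the
# (cc, ns, tt, ct, callers) tuple the profile module maintains.
timings = {("script.py", 1, "main"): (1, 1, 0.0, 0.0, {})}

path = os.path.join(tempfile.mkdtemp(), "script.timings")

# What PyProfile.save() does for the raw timing data.
with open(path, 'wb') as cache:
    marshal.dump(timings, cache)

# What PyProfile.__restore() does on the next run.
with open(path, 'rb') as cache:
    restored = marshal.load(cache)
```

Restoring the dictionary before profiling starts is what lets timing data accumulate across separate runs of the same script.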
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/__init__.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,13 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2005 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Package implementing the Python 2 debugger.
+
+It consists of different kinds of debug clients.
+"""
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/__init__.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,38 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Code coverage measurement for Python.
+
+Ned Batchelder
+http://nedbatchelder.com/code/coverage
+
+"""
+
+from coverage.version import __version__, __url__, version_info
+
+from coverage.control import Coverage, process_startup
+from coverage.data import CoverageData
+from coverage.misc import CoverageException
+from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
+from coverage.pytracer import PyTracer
+
+# Backward compatibility.
+coverage = Coverage
+
+# On Windows, we encode and decode deep enough that something goes wrong and
+# the encodings.utf_8 module is loaded and then unloaded, I don't know why.
+# Adding a reference here prevents it from being unloaded.  Yuk.
+import encodings.utf_8
+
+# Because of the "from coverage.control import fooey" lines at the top of the
+# file, there's an entry for coverage.coverage in sys.modules, mapped to None.
+# This makes some inspection tools (like pydoc) unable to find the class
+# coverage.coverage.  So remove that entry.
+import sys
+try:
+    del sys.modules['coverage.coverage']
+except KeyError:
+    pass
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/__main__.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,11 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Coverage.py's main entry point."""
+
+import sys
+from coverage.cmdline import main
+sys.exit(main())
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/annotate.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,106 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Source file annotation for coverage.py."""
+
+import io
+import os
+import re
+
+from coverage.files import flat_rootname
+from coverage.misc import isolate_module
+from coverage.report import Reporter
+
+os = isolate_module(os)
+
+
+class AnnotateReporter(Reporter):
+    """Generate annotated source files showing line coverage.
+
+    This reporter creates annotated copies of the measured source files. Each
+    .py file is copied as a .py,cover file, with a left-hand margin annotating
+    each line::
+
+        > def h(x):
+        -     if 0:   #pragma: no cover
+        -         pass
+        >     if x == 1:
+        !         a = 1
+        >     else:
+        >         a = 2
+
+        > h(2)
+
+    Executed lines use '>', lines not executed use '!', lines excluded from
+    consideration use '-'.
+
+    """
+
+    def __init__(self, coverage, config):
+        super(AnnotateReporter, self).__init__(coverage, config)
+        self.directory = None
+
+    blank_re = re.compile(r"\s*(#|$)")
+    else_re = re.compile(r"\s*else\s*:\s*(#|$)")
+
+    def report(self, morfs, directory=None):
+        """Run the report.
+
+        See `coverage.report()` for arguments.
+
+        """
+        self.report_files(self.annotate_file, morfs, directory)
+
+    def annotate_file(self, fr, analysis):
+        """Annotate a single file.
+
+        `fr` is the FileReporter for the file to annotate.
+
+        """
+        statements = sorted(analysis.statements)
+        missing = sorted(analysis.missing)
+        excluded = sorted(analysis.excluded)
+
+        if self.directory:
+            dest_file = os.path.join(self.directory, flat_rootname(fr.relative_filename()))
+            if dest_file.endswith("_py"):
+                dest_file = dest_file[:-3] + ".py"
+            dest_file += ",cover"
+        else:
+            dest_file = fr.filename + ",cover"
+
+        with io.open(dest_file, 'w', encoding='utf8') as dest:
+            i = 0
+            j = 0
+            covered = True
+            source = fr.source()
+            for lineno, line in enumerate(source.splitlines(True), start=1):
+                while i < len(statements) and statements[i] < lineno:
+                    i += 1
+                while j < len(missing) and missing[j] < lineno:
+                    j += 1
+                if i < len(statements) and statements[i] == lineno:
+                    covered = j >= len(missing) or missing[j] > lineno
+                if self.blank_re.match(line):
+                    dest.write(u'  ')
+                elif self.else_re.match(line):
+                    # Special logic for lines containing only 'else:'.
+                    if i >= len(statements) and j >= len(missing):
+                        dest.write(u'! ')
+                    elif i >= len(statements) or j >= len(missing):
+                        dest.write(u'> ')
+                    elif statements[i] == missing[j]:
+                        dest.write(u'! ')
+                    else:
+                        dest.write(u'> ')
+                elif lineno in excluded:
+                    dest.write(u'- ')
+                elif covered:
+                    dest.write(u'> ')
+                else:
+                    dest.write(u'! ')
+
+                dest.write(line)
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/backunittest.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,45 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Implementations of unittest features from the future."""
+
+# Use unittest2 if it's available, otherwise unittest.  This gives us
+# back-ported features for 2.6.
+try:
+    import unittest2 as unittest
+except ImportError:
+    import unittest
+
+
+def unittest_has(method):
+    """Does `unittest.TestCase` have `method` defined?"""
+    return hasattr(unittest.TestCase, method)
+
+
+class TestCase(unittest.TestCase):
+    """Just like unittest.TestCase, but with assert methods added.
+
+    Designed to be compatible with 3.1 unittest.  Methods are only defined if
+    `unittest` doesn't have them.
+
+    """
+    # pylint: disable=missing-docstring
+
+    # Many Pythons have this method defined.  But PyPy3 has a bug with it
+    # somehow (https://bitbucket.org/pypy/pypy/issues/2092), so always use our
+    # own implementation that works everywhere, at least for the ways we're
+    # calling it.
+    def assertCountEqual(self, s1, s2):
+        """Assert these have the same elements, regardless of order."""
+        self.assertEqual(sorted(s1), sorted(s2))
+
+    if not unittest_has('assertRaisesRegex'):
+        def assertRaisesRegex(self, *args, **kwargs):
+            return self.assertRaisesRegexp(*args, **kwargs)
+
+    if not unittest_has('assertRegex'):
+        def assertRegex(self, *args, **kwargs):
+            return self.assertRegexpMatches(*args, **kwargs)
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/backward.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,175 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Add things to old Pythons so I can pretend they are newer."""
+
+# This file does lots of tricky stuff, so disable a bunch of pylint warnings.
+# pylint: disable=redefined-builtin
+# pylint: disable=unused-import
+# pylint: disable=no-name-in-module
+
+import sys
+
+from coverage import env
+
+
+# Pythons 2 and 3 differ on where to get StringIO.
+try:
+    from cStringIO import StringIO
+except ImportError:
+    from io import StringIO
+
+# In py3, ConfigParser was renamed to the more-standard configparser
+try:
+    import configparser
+except ImportError:
+    import ConfigParser as configparser
+
+# What's a string called?
+try:
+    string_class = basestring
+except NameError:
+    string_class = str
+
+# What's a Unicode string called?
+try:
+    unicode_class = unicode
+except NameError:
+    unicode_class = str
+
+# Where do pickles come from?
+try:
+    import cPickle as pickle
+except ImportError:
+    import pickle
+
+# range or xrange?
+try:
+    range = xrange
+except NameError:
+    range = range
+
+# shlex.quote is new, but there's an undocumented implementation in "pipes",
+# who knew!?
+try:
+    from shlex import quote as shlex_quote
+except ImportError:
+    # Useful function, available under a different (undocumented) name
+    # in Python versions earlier than 3.3.
+    from pipes import quote as shlex_quote
+
+# A function to iterate listlessly over a dict's items.
+try:
+    {}.iteritems
+except AttributeError:
+    def iitems(d):
+        """Produce the items from dict `d`."""
+        return d.items()
+else:
+    def iitems(d):
+        """Produce the items from dict `d`."""
+        return d.iteritems()
+
+# Getting the `next` function from an iterator is different in 2 and 3.
+try:
+    iter([]).next
+except AttributeError:
+    def iternext(seq):
+        """Get the `next` function for iterating over `seq`."""
+        return iter(seq).__next__
+else:
+    def iternext(seq):
+        """Get the `next` function for iterating over `seq`."""
+        return iter(seq).next
+
+# Python 3.x is picky about bytes and strings, so provide methods to
+# get them right, and make them no-ops in 2.x
+if env.PY3:
+    def to_bytes(s):
+        """Convert string `s` to bytes."""
+        return s.encode('utf8')
+
+    def binary_bytes(byte_values):
+        """Produce a byte string with the ints from `byte_values`."""
+        return bytes(byte_values)
+
+    def bytes_to_ints(bytes_value):
+        """Turn a bytes object into a sequence of ints."""
+        # In Python 3, iterating bytes gives ints.
+        return bytes_value
+
+else:
+    def to_bytes(s):
+        """Convert string `s` to bytes (no-op in 2.x)."""
+        return s
+
+    def binary_bytes(byte_values):
+        """Produce a byte string with the ints from `byte_values`."""
+        return "".join(chr(b) for b in byte_values)
+
+    def bytes_to_ints(bytes_value):
+        """Turn a bytes object into a sequence of ints."""
+        for byte in bytes_value:
+            yield ord(byte)
+
+
+try:
+    # In Python 2.x, the builtins were in __builtin__
+    BUILTINS = sys.modules['__builtin__']
+except KeyError:
+    # In Python 3.x, they're in builtins
+    BUILTINS = sys.modules['builtins']
+
+
+# imp was deprecated in Python 3.3
+try:
+    import importlib
+    import importlib.util
+    imp = None
+except ImportError:
+    importlib = None
+
+# We only want to use importlib if it has everything we need.
+try:
+    importlib_util_find_spec = importlib.util.find_spec
+except Exception:
+    import imp
+    importlib_util_find_spec = None
+
+# What is the .pyc magic number for this version of Python?
+try:
+    PYC_MAGIC_NUMBER = importlib.util.MAGIC_NUMBER
+except AttributeError:
+    PYC_MAGIC_NUMBER = imp.get_magic()
+
+
+def import_local_file(modname, modfile=None):
+    """Import a local file as a module.
+
+    Opens a file in the current directory named `modname`.py, imports it
+    as `modname`, and returns the module object.  `modfile` is the file to
+    import if it isn't in the current directory.
+
+    """
+    try:
+        from importlib.machinery import SourceFileLoader
+    except ImportError:
+        SourceFileLoader = None
+
+    if modfile is None:
+        modfile = modname + '.py'
+    if SourceFileLoader:
+        mod = SourceFileLoader(modname, modfile).load_module()
+    else:
+        for suff in imp.get_suffixes():                 # pragma: part covered
+            if suff[0] == '.py':
+                break
+
+        with open(modfile, 'r') as f:
+            # pylint: disable=undefined-loop-variable
+            mod = imp.load_module(modname, f, modfile, suff)
+
+    return mod
+
+#
+# eflag: FileType = Python2
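On Python 3 most of the shims in `backward.py` collapse to no-ops or thin wrappers; a sketch of the behaviour the rest of coverage.py relies on, runnable on Python 3:

```python
# Python 3 side of the 2/3 shims defined in backward.py.

def iitems(d):
    """Produce the items from dict `d` (d.iteritems() on Python 2)."""
    return d.items()


def to_bytes(s):
    """Convert string `s` to bytes (a no-op on Python 2)."""
    return s.encode('utf8')


def binary_bytes(byte_values):
    """Produce a byte string with the ints from `byte_values`."""
    return bytes(byte_values)


pairs = dict(iitems({"a": 1}))
raw = binary_bytes([104, 105])
```

Callers can then use `iitems`, `to_bytes`, and friends uniformly without sprinkling version checks through the code, which is the whole point of the module.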
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/bytecode.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,25 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Bytecode manipulation for coverage.py"""
+
+import types
+
+
+class CodeObjects(object):
+    """Iterate over all the code objects in `code`."""
+    def __init__(self, code):
+        self.stack = [code]
+
+    def __iter__(self):
+        while self.stack:
+            # We're going to return the code object on the stack, but first
+            # push its children for later returning.
+            code = self.stack.pop()
+            for c in code.co_consts:
+                if isinstance(c, types.CodeType):
+                    self.stack.append(c)
+            yield code
+
+#
+# eflag: FileType = Python2
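The stack-based walk in `CodeObjects` reaches nested functions as well, because each code object's `co_consts` carries the code objects of the functions defined inside it. Compiling a small snippet shows it (Python 3 rewrite of the class above):

```python
import types


class CodeObjects:
    """Iterate over all the code objects in `code`."""
    def __init__(self, code):
        self.stack = [code]

    def __iter__(self):
        while self.stack:
            # Return the code object on the stack, but first push its
            # children for later returning.
            code = self.stack.pop()
            for const in code.co_consts:
                if isinstance(const, types.CodeType):
                    self.stack.append(const)
            yield code


source = "def outer():\n    def inner():\n        pass\n"
module_code = compile(source, "<example>", "exec")
names = sorted(c.co_name for c in CodeObjects(module_code))
```

The walk yields the module-level code object plus one code object per (possibly nested) function, which is exactly what a coverage tracer needs to enumerate.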
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/cmdline.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,766 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Command-line support for coverage.py."""
+
+import glob
+import optparse
+import os.path
+import sys
+import textwrap
+import traceback
+
+from coverage import env
+from coverage.collector import CTracer
+from coverage.execfile import run_python_file, run_python_module
+from coverage.misc import CoverageException, ExceptionDuringRun, NoSource
+from coverage.debug import info_formatter, info_header
+
+
+class Opts(object):
+    """A namespace class for individual options we'll build parsers from."""
+
+    append = optparse.make_option(
+        '-a', '--append', action='store_true',
+        help="Append coverage data to .coverage, otherwise it is started clean with each run.",
+    )
+    branch = optparse.make_option(
+        '', '--branch', action='store_true',
+        help="Measure branch coverage in addition to statement coverage.",
+    )
+    CONCURRENCY_CHOICES = [
+        "thread", "gevent", "greenlet", "eventlet", "multiprocessing",
+    ]
+    concurrency = optparse.make_option(
+        '', '--concurrency', action='store', metavar="LIB",
+        choices=CONCURRENCY_CHOICES,
+        help=(
+            "Properly measure code using a concurrency library. "
+            "Valid values are: %s."
+        ) % ", ".join(CONCURRENCY_CHOICES),
+    )
+    debug = optparse.make_option(
+        '', '--debug', action='store', metavar="OPTS",
+        help="Debug options, separated by commas",
+    )
+    directory = optparse.make_option(
+        '-d', '--directory', action='store', metavar="DIR",
+        help="Write the output files to DIR.",
+    )
+    fail_under = optparse.make_option(
+        '', '--fail-under', action='store', metavar="MIN", type="int",
+        help="Exit with a status of 2 if the total coverage is less than MIN.",
+    )
+    help = optparse.make_option(
+        '-h', '--help', action='store_true',
+        help="Get help on this command.",
+    )
+    ignore_errors = optparse.make_option(
+        '-i', '--ignore-errors', action='store_true',
+        help="Ignore errors while reading source files.",
+    )
+    include = optparse.make_option(
+        '', '--include', action='store',
+        metavar="PAT1,PAT2,...",
+        help=(
+            "Include only files whose paths match one of these patterns. "
+            "Accepts shell-style wildcards, which must be quoted."
+        ),
+    )
+    pylib = optparse.make_option(
+        '-L', '--pylib', action='store_true',
+        help=(
+            "Measure coverage even inside the Python installed library, "
+            "which isn't done by default."
+        ),
+    )
+    show_missing = optparse.make_option(
+        '-m', '--show-missing', action='store_true',
+        help="Show line numbers of statements in each module that weren't executed.",
+    )
+    skip_covered = optparse.make_option(
+        '--skip-covered', action='store_true',
+        help="Skip files with 100% coverage.",
+    )
+    omit = optparse.make_option(
+        '', '--omit', action='store',
+        metavar="PAT1,PAT2,...",
+        help=(
+            "Omit files whose paths match one of these patterns. "
+            "Accepts shell-style wildcards, which must be quoted."
+        ),
+    )
+    output_xml = optparse.make_option(
+        '-o', '', action='store', dest="outfile",
+        metavar="OUTFILE",
+        help="Write the XML report to this file. Defaults to 'coverage.xml'",
+    )
+    parallel_mode = optparse.make_option(
+        '-p', '--parallel-mode', action='store_true',
+        help=(
+            "Append the machine name, process id and random number to the "
+            ".coverage data file name to simplify collecting data from "
+            "many processes."
+        ),
+    )
+    module = optparse.make_option(
+        '-m', '--module', action='store_true',
+        help=(
+            "<pyfile> is an importable Python module, not a script path, "
+            "to be run as 'python -m' would run it."
+        ),
+    )
+    rcfile = optparse.make_option(
+        '', '--rcfile', action='store',
+        help="Specify configuration file.  Defaults to '.coveragerc'",
+    )
+    source = optparse.make_option(
+        '', '--source', action='store', metavar="SRC1,SRC2,...",
+        help="A list of packages or directories of code to be measured.",
+    )
+    timid = optparse.make_option(
+        '', '--timid', action='store_true',
+        help=(
+            "Use a simpler but slower trace method.  Try this if you get "
+            "seemingly impossible results!"
+        ),
+    )
+    title = optparse.make_option(
+        '', '--title', action='store', metavar="TITLE",
+        help="A text string to use as the title on the HTML report.",
+    )
+    version = optparse.make_option(
+        '', '--version', action='store_true',
+        help="Display version information and exit.",
+    )
+
+
+class CoverageOptionParser(optparse.OptionParser, object):
+    """Base OptionParser for coverage.py.
+
+    Problems don't exit the program.
+    Defaults are initialized for all options.
+
+    """
+
+    def __init__(self, *args, **kwargs):
+        super(CoverageOptionParser, self).__init__(
+            add_help_option=False, *args, **kwargs
+            )
+        self.set_defaults(
+            action=None,
+            append=None,
+            branch=None,
+            concurrency=None,
+            debug=None,
+            directory=None,
+            fail_under=None,
+            help=None,
+            ignore_errors=None,
+            include=None,
+            module=None,
+            omit=None,
+            parallel_mode=None,
+            pylib=None,
+            rcfile=True,
+            show_missing=None,
+            skip_covered=None,
+            source=None,
+            timid=None,
+            title=None,
+            version=None,
+            )
+
+        self.disable_interspersed_args()
+        self.help_fn = self.help_noop
+
+    def help_noop(self, error=None, topic=None, parser=None):
+        """No-op help function."""
+        pass
+
+    class OptionParserError(Exception):
+        """Used to stop the optparse error handler from ending the process."""
+        pass
+
+    def parse_args_ok(self, args=None, options=None):
+        """Call optparse.parse_args, but return a triple:
+
+        (ok, options, args)
+
+        """
+        try:
+            options, args = \
+                super(CoverageOptionParser, self).parse_args(args, options)
+        except self.OptionParserError:
+            return False, None, None
+        return True, options, args
+
+    def error(self, msg):
+        """Override optparse.error so sys.exit doesn't get called."""
+        self.help_fn(msg)
+        raise self.OptionParserError
+
+
+class GlobalOptionParser(CoverageOptionParser):
+    """Command-line parser for coverage.py global option arguments."""
+
+    def __init__(self):
+        super(GlobalOptionParser, self).__init__()
+
+        self.add_options([
+            Opts.help,
+            Opts.version,
+        ])
+
+
+class CmdOptionParser(CoverageOptionParser):
+    """Parse one of the new-style commands for coverage.py."""
+
+    def __init__(self, action, options=None, defaults=None, usage=None, description=None):
+        """Create an OptionParser for a coverage.py command.
+
+        `action` is the slug to put into `options.action`.
+        `options` is a list of Option objects for the command.
+        `defaults` is a dict of default values for options.
+        `usage` is the usage string to display in help.
+        `description` is the description of the command, for the help text.
+
+        """
+        if usage:
+            usage = "%prog " + usage
+        super(CmdOptionParser, self).__init__(
+            usage=usage,
+            description=description,
+        )
+        self.set_defaults(action=action, **(defaults or {}))
+        if options:
+            self.add_options(options)
+        self.cmd = action
+
+    def __eq__(self, other):
+        # A convenience equality, so that I can put strings in unit test
+        # results, and they will compare equal to objects.
+        return (other == "<CmdOptionParser:%s>" % self.cmd)
+
+    def get_prog_name(self):
+        """Override of an undocumented function in optparse.OptionParser."""
+        program_name = super(CmdOptionParser, self).get_prog_name()
+
+        # Include the sub-command for this parser as part of the command.
+        return "%(command)s %(subcommand)s" % {'command': program_name, 'subcommand': self.cmd}
+
+
+GLOBAL_ARGS = [
+    Opts.debug,
+    Opts.help,
+    Opts.rcfile,
+    ]
+
+CMDS = {
+    'annotate': CmdOptionParser(
+        "annotate",
+        [
+            Opts.directory,
+            Opts.ignore_errors,
+            Opts.include,
+            Opts.omit,
+            ] + GLOBAL_ARGS,
+        usage="[options] [modules]",
+        description=(
+            "Make annotated copies of the given files, marking statements that are executed "
+            "with > and statements that are missed with !."
+        ),
+    ),
+
+    'combine': CmdOptionParser(
+        "combine",
+        GLOBAL_ARGS,
+        usage="<path1> <path2> ... <pathN>",
+        description=(
+            "Combine data from multiple coverage files collected "
+            "with 'run -p'.  The combined results are written to a single "
+            "file representing the union of the data. The positional "
+            "arguments are data files or directories containing data files. "
+            "If no paths are provided, data files in the default data file's "
+            "directory are combined."
+        ),
+    ),
+
+    'debug': CmdOptionParser(
+        "debug", GLOBAL_ARGS,
+        usage="<topic>",
+        description=(
+            "Display information on the internals of coverage.py, "
+            "for diagnosing problems. "
+            "Topics are 'data' to show a summary of the collected data, "
+            "or 'sys' to show installation information."
+        ),
+    ),
+
+    'erase': CmdOptionParser(
+        "erase", GLOBAL_ARGS,
+        usage=" ",
+        description="Erase previously collected coverage data.",
+    ),
+
+    'help': CmdOptionParser(
+        "help", GLOBAL_ARGS,
+        usage="[command]",
+        description="Describe how to use coverage.py",
+    ),
+
+    'html': CmdOptionParser(
+        "html",
+        [
+            Opts.directory,
+            Opts.fail_under,
+            Opts.ignore_errors,
+            Opts.include,
+            Opts.omit,
+            Opts.title,
+            ] + GLOBAL_ARGS,
+        usage="[options] [modules]",
+        description=(
+            "Create an HTML report of the coverage of the files.  "
+            "Each file gets its own page, with the source decorated to show "
+            "executed, excluded, and missed lines."
+        ),
+    ),
+
+    'report': CmdOptionParser(
+        "report",
+        [
+            Opts.fail_under,
+            Opts.ignore_errors,
+            Opts.include,
+            Opts.omit,
+            Opts.show_missing,
+            Opts.skip_covered,
+            ] + GLOBAL_ARGS,
+        usage="[options] [modules]",
+        description="Report coverage statistics on modules."
+    ),
+
+    'run': CmdOptionParser(
+        "run",
+        [
+            Opts.append,
+            Opts.branch,
+            Opts.concurrency,
+            Opts.include,
+            Opts.module,
+            Opts.omit,
+            Opts.pylib,
+            Opts.parallel_mode,
+            Opts.source,
+            Opts.timid,
+            ] + GLOBAL_ARGS,
+        usage="[options] <pyfile> [program options]",
+        description="Run a Python program, measuring code execution."
+    ),
+
+    'xml': CmdOptionParser(
+        "xml",
+        [
+            Opts.fail_under,
+            Opts.ignore_errors,
+            Opts.include,
+            Opts.omit,
+            Opts.output_xml,
+            ] + GLOBAL_ARGS,
+        usage="[options] [modules]",
+        description="Generate an XML report of coverage results."
+    ),
+}
+
+
+OK, ERR, FAIL_UNDER = 0, 1, 2
+
+
+class CoverageScript(object):
+    """The command-line interface to coverage.py."""
+
+    def __init__(self, _covpkg=None, _run_python_file=None,
+                 _run_python_module=None, _help_fn=None, _path_exists=None):
+        # _covpkg is for dependency injection, so we can test this code.
+        if _covpkg:
+            self.covpkg = _covpkg
+        else:
+            import coverage
+            self.covpkg = coverage
+
+        # For dependency injection:
+        self.run_python_file = _run_python_file or run_python_file
+        self.run_python_module = _run_python_module or run_python_module
+        self.help_fn = _help_fn or self.help
+        self.path_exists = _path_exists or os.path.exists
+        self.global_option = False
+
+        self.coverage = None
+
+        self.program_name = os.path.basename(sys.argv[0])
+        if env.WINDOWS:
+            # entry_points={'console_scripts':...} on Windows makes files
+            # called coverage.exe, coverage3.exe, and coverage-3.5.exe. These
+            # invoke coverage-script.py, coverage3-script.py, and
+            # coverage-3.5-script.py.  argv[0] is the .py file, but we want to
+            # get back to the original form.
+            auto_suffix = "-script.py"
+            if self.program_name.endswith(auto_suffix):
+                self.program_name = self.program_name[:-len(auto_suffix)]
+
+    def command_line(self, argv):
+        """The bulk of the command line interface to coverage.py.
+
+        `argv` is the argument list to process.
+
+        Returns 0 if all is well, 1 if something went wrong, or 2 if the
+        total coverage is less than the fail-under threshold.
+
+        """
+        # Collect the command-line options.
+        if not argv:
+            self.help_fn(topic='minimum_help')
+            return OK
+
+        # The command syntax we parse depends on the first argument.  Global
+        # switch syntax always starts with an option.
+        self.global_option = argv[0].startswith('-')
+        if self.global_option:
+            parser = GlobalOptionParser()
+        else:
+            parser = CMDS.get(argv[0])
+            if not parser:
+                self.help_fn("Unknown command: '%s'" % argv[0])
+                return ERR
+            argv = argv[1:]
+
+        parser.help_fn = self.help_fn
+        ok, options, args = parser.parse_args_ok(argv)
+        if not ok:
+            return ERR
+
+        # Handle help and version.
+        if self.do_help(options, args, parser):
+            return OK
+
+        # Check for conflicts and problems in the options.
+        if not self.args_ok(options, args):
+            return ERR
+
+        # We need to be able to import from the current directory, because
+        # plugins may try, for example, to read Django settings.
+        sys.path[0] = ''
+
+        # Listify the list options.
+        source = unshell_list(options.source)
+        omit = unshell_list(options.omit)
+        include = unshell_list(options.include)
+        debug = unshell_list(options.debug)
+
+        # Do something.
+        self.coverage = self.covpkg.coverage(
+            data_suffix=options.parallel_mode,
+            cover_pylib=options.pylib,
+            timid=options.timid,
+            branch=options.branch,
+            config_file=options.rcfile,
+            source=source,
+            omit=omit,
+            include=include,
+            debug=debug,
+            concurrency=options.concurrency,
+            )
+
+        if options.action == "debug":
+            return self.do_debug(args)
+
+        elif options.action == "erase":
+            self.coverage.erase()
+            return OK
+
+        elif options.action == "run":
+            return self.do_run(options, args)
+
+        elif options.action == "combine":
+            self.coverage.load()
+            data_dirs = args or None
+            self.coverage.combine(data_dirs)
+            self.coverage.save()
+            return OK
+
+        # Remaining actions are reporting, with some common options.
+        report_args = dict(
+            morfs=unglob_args(args),
+            ignore_errors=options.ignore_errors,
+            omit=omit,
+            include=include,
+            )
+
+        self.coverage.load()
+
+        total = None
+        if options.action == "report":
+            total = self.coverage.report(
+                show_missing=options.show_missing,
+                skip_covered=options.skip_covered, **report_args)
+        elif options.action == "annotate":
+            self.coverage.annotate(
+                directory=options.directory, **report_args)
+        elif options.action == "html":
+            total = self.coverage.html_report(
+                directory=options.directory, title=options.title,
+                **report_args)
+        elif options.action == "xml":
+            outfile = options.outfile
+            total = self.coverage.xml_report(outfile=outfile, **report_args)
+
+        if total is not None:
+            # Apply the command line fail-under options, and then use the config
+            # value, so we can get fail_under from the config file.
+            if options.fail_under is not None:
+                self.coverage.set_option("report:fail_under", options.fail_under)
+
+            if self.coverage.get_option("report:fail_under"):
+
+                # Total needs to be rounded, but be careful of 0 and 100.
+                if 0 < total < 1:
+                    total = 1
+                elif 99 < total < 100:
+                    total = 99
+                else:
+                    total = round(total)
+
+                if total >= self.coverage.get_option("report:fail_under"):
+                    return OK
+                else:
+                    return FAIL_UNDER
+
+        return OK
+
+    def help(self, error=None, topic=None, parser=None):
+        """Display an error message, or the named topic."""
+        assert error or topic or parser
+        if error:
+            print(error)
+            print("Use '%s help' for help." % (self.program_name,))
+        elif parser:
+            print(parser.format_help().strip())
+        else:
+            help_params = dict(self.covpkg.__dict__)
+            help_params['program_name'] = self.program_name
+            if CTracer is not None:
+                help_params['extension_modifier'] = 'with C extension'
+            else:
+                help_params['extension_modifier'] = 'without C extension'
+            help_msg = textwrap.dedent(HELP_TOPICS.get(topic, '')).strip()
+            if help_msg:
+                print(help_msg.format(**help_params))
+            else:
+                print("Don't know topic %r" % topic)
+
+    def do_help(self, options, args, parser):
+        """Deal with help requests.
+
+        Return True if it handled the request, False if not.
+
+        """
+        # Handle help.
+        if options.help:
+            if self.global_option:
+                self.help_fn(topic='help')
+            else:
+                self.help_fn(parser=parser)
+            return True
+
+        if options.action == "help":
+            if args:
+                for a in args:
+                    parser = CMDS.get(a)
+                    if parser:
+                        self.help_fn(parser=parser)
+                    else:
+                        self.help_fn(topic=a)
+            else:
+                self.help_fn(topic='help')
+            return True
+
+        # Handle version.
+        if options.version:
+            self.help_fn(topic='version')
+            return True
+
+        return False
+
+    def args_ok(self, options, args):
+        """Check for conflicts and problems in the options.
+
+        Returns True if everything is OK, or False if not.
+
+        """
+        if options.action == "run" and not args:
+            self.help_fn("Nothing to do.")
+            return False
+
+        return True
+
+    def do_run(self, options, args):
+        """Implementation of 'coverage run'."""
+
+        if options.append and self.coverage.get_option("run:parallel"):
+            self.help_fn("Can't append to data files in parallel mode.")
+            return ERR
+
+        if not self.coverage.get_option("run:parallel"):
+            if not options.append:
+                self.coverage.erase()
+
+        # Run the script.
+        self.coverage.start()
+        code_ran = True
+        try:
+            if options.module:
+                self.run_python_module(args[0], args)
+            else:
+                filename = args[0]
+                self.run_python_file(filename, args)
+        except NoSource:
+            code_ran = False
+            raise
+        finally:
+            self.coverage.stop()
+            if code_ran:
+                if options.append:
+                    data_file = self.coverage.get_option("run:data_file")
+                    if self.path_exists(data_file):
+                        self.coverage.combine(data_paths=[data_file])
+                self.coverage.save()
+
+        return OK
+
+    def do_debug(self, args):
+        """Implementation of 'coverage debug'."""
+
+        if not args:
+            self.help_fn("What information would you like: data, sys?")
+            return ERR
+
+        for info in args:
+            if info == 'sys':
+                sys_info = self.coverage.sys_info()
+                print(info_header("sys"))
+                for line in info_formatter(sys_info):
+                    print(" %s" % line)
+            elif info == 'data':
+                self.coverage.load()
+                data = self.coverage.data
+                print(info_header("data"))
+                print("path: %s" % self.coverage.data_files.filename)
+                if data:
+                    print("has_arcs: %r" % data.has_arcs())
+                    summary = data.line_counts(fullpath=True)
+                    filenames = sorted(summary.keys())
+                    print("\n%d files:" % len(filenames))
+                    for f in filenames:
+                        line = "%s: %d lines" % (f, summary[f])
+                        plugin = data.file_tracer(f)
+                        if plugin:
+                            line += " [%s]" % plugin
+                        print(line)
+                else:
+                    print("No data collected")
+            else:
+                self.help_fn("Don't know what you mean by %r" % info)
+                return ERR
+
+        return OK
+
+
+def unshell_list(s):
+    """Turn a command-line argument into a list."""
+    if not s:
+        return None
+    if env.WINDOWS:
+        # When running coverage.py as coverage.exe, some of the behavior
+        # of the shell is emulated: wildcards are expanded into a list of
+        # file names.  So you have to single-quote patterns on the command
+        # line, but (not) helpfully, the single quotes are included in the
+        # argument, so we have to strip them off here.
+        s = s.strip("'")
+    return s.split(',')
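The Windows quote handling above can be illustrated with a standalone sketch (hypothetical name; the platform check is passed in as a parameter rather than read from `env.WINDOWS`):

```python
# Hypothetical standalone sketch of unshell_list's quote handling:
# on Windows, the single quotes around a pattern survive into the
# argument, so they are stripped before splitting the list on commas.
def unshell_list_sketch(s, windows=True):
    if not s:
        return None
    if windows:
        # Drop the single quotes the (emulated) shell left in place.
        s = s.strip("'")
    return s.split(',')

print(unshell_list_sketch("'a/*,b/*'"))  # ['a/*', 'b/*']
```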
+
+
+def unglob_args(args):
+    """Interpret shell wildcards for platforms that need it."""
+    if env.WINDOWS:
+        globbed = []
+        for arg in args:
+            if '?' in arg or '*' in arg:
+                globbed.extend(glob.glob(arg))
+            else:
+                globbed.append(arg)
+        args = globbed
+    return args
+
+
+HELP_TOPICS = {
+    'help': """\
+        Coverage.py, version {__version__} {extension_modifier}
+        Measure, collect, and report on code coverage in Python programs.
+
+        usage: {program_name} <command> [options] [args]
+
+        Commands:
+            annotate    Annotate source files with execution information.
+            combine     Combine a number of data files.
+            erase       Erase previously collected coverage data.
+            help        Get help on using coverage.py.
+            html        Create an HTML report.
+            report      Report coverage stats on modules.
+            run         Run a Python program and measure code execution.
+            xml         Create an XML report of coverage results.
+
+        Use "{program_name} help <command>" for detailed help on any command.
+        For full documentation, see {__url__}
+    """,
+
+    'minimum_help': """\
+        Code coverage for Python.  Use '{program_name} help' for help.
+    """,
+
+    'version': """\
+        Coverage.py, version {__version__} {extension_modifier}
+        Documentation at {__url__}
+    """,
+}
+
+
+def main(argv=None):
+    """The main entry point to coverage.py.
+
+    This is installed as the script entry point.
+
+    """
+    if argv is None:
+        argv = sys.argv[1:]
+    try:
+        status = CoverageScript().command_line(argv)
+    except ExceptionDuringRun as err:
+        # An exception was caught while running the product code.  The
+        # sys.exc_info() return tuple is packed into an ExceptionDuringRun
+        # exception.
+        traceback.print_exception(*err.args)
+        status = ERR
+    except CoverageException as err:
+        # A controlled error inside coverage.py: print the message to the user.
+        print(err)
+        status = ERR
+    except SystemExit as err:
+        # The user called `sys.exit()`.  Exit with their argument, if any.
+        if err.args:
+            status = err.args[0]
+        else:
+            status = None
+    return status
+
+#
+# eflag: FileType = Python2
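The per-command parser machinery in this file can be summarized with a minimal sketch (stand-in names, not the classes defined above): each sub-command gets its own optparse parser whose defaults carry the command slug, and the first command-line argument selects which parser runs.

```python
import optparse

def make_cmd_parser(action, options):
    # Each sub-command parser carries its slug in options.action,
    # mirroring the set_defaults(action=...) call in CmdOptionParser.
    parser = optparse.OptionParser(add_help_option=False)
    parser.set_defaults(action=action)
    parser.add_options(options)
    return parser

show_missing = optparse.make_option(
    '-m', '--show-missing', action='store_true',
    help="Show line numbers of statements that weren't executed.",
)

COMMANDS = {'report': make_cmd_parser("report", [show_missing])}

def dispatch(argv):
    # The first argument selects the parser; the rest are its options.
    parser = COMMANDS[argv[0]]
    options, args = parser.parse_args(argv[1:])
    return options.action, options, args

action, options, args = dispatch(["report", "-m", "mymod"])
print(action, options.show_missing, args)  # report True ['mymod']
```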
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/collector.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,364 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Raw data collector for coverage.py."""
+
+import os
+import sys
+
+from coverage import env
+from coverage.backward import iitems
+from coverage.files import abs_file
+from coverage.misc import CoverageException, isolate_module
+from coverage.pytracer import PyTracer
+
+os = isolate_module(os)
+
+
+try:
+    # Use the C extension code when we can, for speed.
+    from coverage.tracer import CTracer, CFileDisposition   # pylint: disable=no-name-in-module
+except ImportError:
+    # Couldn't import the C extension, maybe it isn't built.
+    if os.getenv('COVERAGE_TEST_TRACER') == 'c':
+        # During testing, we use the COVERAGE_TEST_TRACER environment variable
+        # to indicate that we've fiddled with the environment to test this
+        # fallback code.  If we thought we had a C tracer, but couldn't import
+        # it, then exit quickly and clearly instead of dribbling confusing
+        # errors. I'm using sys.exit here instead of an exception because an
+        # exception here causes all sorts of other noise in unittest.
+        sys.stderr.write("*** COVERAGE_TEST_TRACER is 'c' but can't import CTracer!\n")
+        sys.exit(1)
+    CTracer = None
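The import guard above feeds the later `CTracer or PyTracer` selection in `Collector.__init__`; a self-contained sketch of the combined pattern (stand-in names, with the ImportError forced for illustration):

```python
# Stand-in sketch: try a fast C extension, record None on failure,
# and let `or` select the pure-Python fallback later.
try:
    from _no_such_c_tracer import CTracer  # hypothetical module name
except ImportError:
    CTracer = None  # extension not built; fall back below

class PyTracer(object):
    """Pure-Python tracer stand-in."""

trace_class = CTracer or PyTracer  # the C class wins when importable
print(trace_class.__name__)  # PyTracer
```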
+
+
+class FileDisposition(object):
+    """A simple value type for recording what to do with a file."""
+    pass
+
+
+def should_start_context(frame):
+    """Who-Tests-What hack: Determine whether this frame begins a new who-context."""
+    fn_name = frame.f_code.co_name
+    if fn_name.startswith("test"):
+        return fn_name
+
+
+class Collector(object):
+    """Collects trace data.
+
+    Creates a Tracer object for each thread, since they track stack
+    information.  Each Tracer points to the same shared data, contributing
+    traced data points.
+
+    When the Collector is started, it creates a Tracer for the current thread,
+    and installs a function to create Tracers for each new thread started.
+    When the Collector is stopped, all active Tracers are stopped.
+
+    Threads started while the Collector is stopped will never have Tracers
+    associated with them.
+
+    """
+
+    # The stack of active Collectors.  Collectors are added here when started,
+    # and popped when stopped.  Collectors on the stack are paused when not
+    # the top, and resumed when they become the top again.
+    _collectors = []
+
+    def __init__(self, should_trace, check_include, timid, branch, warn, concurrency):
+        """Create a collector.
+
+        `should_trace` is a function, taking a file name, and returning a
+        `coverage.FileDisposition` object.
+
+        `check_include` is a function taking a file name and a frame. It returns
+        a boolean: True if the file should be traced, False if not.
+
+        If `timid` is true, then a slower but simpler trace function will be
+        used.  This is important for some environments where manipulation of
+        tracing functions makes the faster, more sophisticated trace function
+        not operate properly.
+
+        If `branch` is true, then branches will be measured.  This involves
+        collecting data on which statements followed each other (arcs).  Use
+        `get_arc_data` to get the arc data.
+
+        `warn` is a warning function, taking a single string message argument,
+        to be used if a warning needs to be issued.
+
+        `concurrency` is a string indicating the concurrency library in use.
+        Valid values are "greenlet", "eventlet", "gevent", or "thread" (the
+        default).
+
+        """
+        self.should_trace = should_trace
+        self.check_include = check_include
+        self.warn = warn
+        self.branch = branch
+        self.threading = None
+        self.concurrency = concurrency
+
+        self.concur_id_func = None
+
+        try:
+            if concurrency == "greenlet":
+                import greenlet
+                self.concur_id_func = greenlet.getcurrent
+            elif concurrency == "eventlet":
+                import eventlet.greenthread     # pylint: disable=import-error,useless-suppression
+                self.concur_id_func = eventlet.greenthread.getcurrent
+            elif concurrency == "gevent":
+                import gevent                   # pylint: disable=import-error,useless-suppression
+                self.concur_id_func = gevent.getcurrent
+            elif concurrency == "thread" or not concurrency:
+                # It's important to import threading only if we need it.  If
+                # it's imported early, and the program being measured uses
+                # gevent, then gevent's monkey-patching won't work properly.
+                import threading
+                self.threading = threading
+            else:
+                raise CoverageException("Don't understand concurrency=%s" % concurrency)
+        except ImportError:
+            raise CoverageException(
+                "Couldn't trace with concurrency=%s, the module isn't installed." % concurrency
+            )
+
+        # Who-Tests-What is just a hack at the moment, so turn it on with an
+        # environment variable.
+        self.wtw = int(os.getenv('COVERAGE_WTW', 0))
+
+        self.reset()
+
+        if timid:
+            # Being timid: use the simple Python trace function.
+            self._trace_class = PyTracer
+        else:
+            # Being fast: use the C Tracer if it is available, else the Python
+            # trace function.
+            self._trace_class = CTracer or PyTracer
+
+        if self._trace_class is CTracer:
+            self.file_disposition_class = CFileDisposition
+            self.supports_plugins = True
+        else:
+            self.file_disposition_class = FileDisposition
+            self.supports_plugins = False
+
+    def __repr__(self):
+        return "<Collector at 0x%x: %s>" % (id(self), self.tracer_name())
+
+    def tracer_name(self):
+        """Return the class name of the tracer we're using."""
+        return self._trace_class.__name__
+
+    def reset(self):
+        """Clear collected data, and prepare to collect more."""
+        # A dictionary mapping file names to dicts with line number keys (if not
+        # branch coverage), or mapping file names to dicts with line number
+        # pairs as keys (if branch coverage).
+        self.data = {}
+
+        # A dict mapping contexts to data dictionaries.
+        self.contexts = {}
+        self.contexts[None] = self.data
+
+        # A dictionary mapping file names to file tracer plugin names that will
+        # handle them.
+        self.file_tracers = {}
+
+        # The .should_trace_cache attribute is a cache from file names to
+        # coverage.FileDisposition objects, or None.  When a file is first
+        # considered for tracing, a FileDisposition is obtained from
+        # Coverage.should_trace.  Its .trace attribute indicates whether the
+        # file should be traced or not.  If it should be, a plugin with dynamic
+        # file names can decide not to trace it based on the dynamic file name
+        # being excluded by the inclusion rules, in which case the
+        # FileDisposition will be replaced by None in the cache.
+        if env.PYPY:
+            import __pypy__                     # pylint: disable=import-error
+            # Alex Gaynor said:
+            # should_trace_cache is a strictly growing key: once a key is in
+            # it, it never changes.  Further, the keys used to access it are
+            # generally constant, given sufficient context. That is to say, at
+            # any given point _trace() is called, pypy is able to know the key.
+            # This is because the key is determined by the physical source code
+            # line, and that's invariant with the call site.
+            #
+            # This property of a dict with immutable keys, combined with
+            # call-site-constant keys is a match for PyPy's module dict,
+            # which is optimized for such workloads.
+            #
+            # This gives a 20% benefit on the workload described at
+            # https://bitbucket.org/pypy/pypy/issue/1871/10x-slower-than-cpython-under-coverage
+            self.should_trace_cache = __pypy__.newdict("module")
+        else:
+            self.should_trace_cache = {}
+
+        # Our active Tracers.
+        self.tracers = []
+
+    def _start_tracer(self):
+        """Start a new Tracer object, and store it in self.tracers."""
+        tracer = self._trace_class()
+        tracer.data = self.data
+        tracer.trace_arcs = self.branch
+        tracer.should_trace = self.should_trace
+        tracer.should_trace_cache = self.should_trace_cache
+        tracer.warn = self.warn
+
+        if hasattr(tracer, 'concur_id_func'):
+            tracer.concur_id_func = self.concur_id_func
+        elif self.concur_id_func:
+            raise CoverageException(
+                "Can't support concurrency=%s with %s, only threads are supported" % (
+                    self.concurrency, self.tracer_name(),
+                )
+            )
+
+        if hasattr(tracer, 'file_tracers'):
+            tracer.file_tracers = self.file_tracers
+        if hasattr(tracer, 'threading'):
+            tracer.threading = self.threading
+        if hasattr(tracer, 'check_include'):
+            tracer.check_include = self.check_include
+        if self.wtw:
+            if hasattr(tracer, 'should_start_context'):
+                tracer.should_start_context = should_start_context
+            if hasattr(tracer, 'switch_context'):
+                tracer.switch_context = self.switch_context
+
+        fn = tracer.start()
+        self.tracers.append(tracer)
+
+        return fn
+
+    # The trace function has to be set individually on each thread before
+    # execution begins.  Ironically, the only support the threading module has
+    # for running code before the thread main is the tracing function.  So we
+    # install this as a trace function, and the first time it's called, it does
+    # the real trace installation.
+
+    def _installation_trace(self, frame, event, arg):
+        """Called on new threads, installs the real tracer."""
+        # Remove ourselves as the trace function.
+        sys.settrace(None)
+        # Install the real tracer.
+        fn = self._start_tracer()
+        # Invoke the real trace function with the current event, to be sure
+        # not to lose an event.
+        if fn:
+            fn = fn(frame, event, arg)
+        # Return the new trace function to continue tracing in this scope.
+        return fn
+
+    def start(self):
+        """Start collecting trace information."""
+        if self._collectors:
+            self._collectors[-1].pause()
+
+        # Check to see whether we had a fullcoverage tracer installed. If so,
+        # get the stack frames it stashed away for us.
+        traces0 = []
+        fn0 = sys.gettrace()
+        if fn0:
+            tracer0 = getattr(fn0, '__self__', None)
+            if tracer0:
+                traces0 = getattr(tracer0, 'traces', [])
+
+        try:
+            # Install the tracer on this thread.
+            fn = self._start_tracer()
+        except:
+            if self._collectors:
+                self._collectors[-1].resume()
+            raise
+
+        # If _start_tracer succeeded, then we add ourselves to the global
+        # stack of collectors.
+        self._collectors.append(self)
+
+        # Replay all the events from fullcoverage into the new trace function.
+        for args in traces0:
+            (frame, event, arg), lineno = args
+            try:
+                fn(frame, event, arg, lineno=lineno)
+            except TypeError:
+                raise Exception("fullcoverage must be run with the C trace function.")
+
+        # Install our installation tracer in threading, to jump start other
+        # threads.
+        if self.threading:
+            self.threading.settrace(self._installation_trace)
+
+    def stop(self):
+        """Stop collecting trace information."""
+        assert self._collectors
+        assert self._collectors[-1] is self, (
+            "Expected current collector to be %r, but it's %r" % (self, self._collectors[-1])
+        )
+
+        self.pause()
+        self.tracers = []
+
+        # Remove this Collector from the stack, and resume the one underneath
+        # (if any).
+        self._collectors.pop()
+        if self._collectors:
+            self._collectors[-1].resume()
+
+    def pause(self):
+        """Pause tracing, but be prepared to `resume`."""
+        for tracer in self.tracers:
+            tracer.stop()
+            stats = tracer.get_stats()
+            if stats:
+                print("\nCoverage.py tracer stats:")
+                for k in sorted(stats.keys()):
+                    print("%20s: %s" % (k, stats[k]))
+        if self.threading:
+            self.threading.settrace(None)
+
+    def resume(self):
+        """Resume tracing after a `pause`."""
+        for tracer in self.tracers:
+            tracer.start()
+        if self.threading:
+            self.threading.settrace(self._installation_trace)
+        else:
+            self._start_tracer()
+
+    def switch_context(self, new_context):
+        """Who-Tests-What hack: switch to a new who-context."""
+        # Make a new data dict, or find the existing one, and switch all the
+        # tracers to use it.
+        data = self.contexts.setdefault(new_context, {})
+        for tracer in self.tracers:
+            tracer.data = data
+
+    def save_data(self, covdata):
+        """Save the collected data to a `CoverageData`.
+
+        Also resets the collector.
+
+        """
+        def abs_file_dict(d):
+            """Return a dict like d, but with keys modified by `abs_file`."""
+            return dict((abs_file(k), v) for k, v in iitems(d))
+
+        if self.branch:
+            covdata.add_arcs(abs_file_dict(self.data))
+        else:
+            covdata.add_lines(abs_file_dict(self.data))
+        covdata.add_file_tracers(abs_file_dict(self.file_tracers))
+
+        if self.wtw:
+            # Just a hack, so just hack it.
+            import pprint
+            out_file = "coverage_wtw_{:06}.py".format(os.getpid())
+            with open(out_file, "w") as wtw_out:
+                pprint.pprint(self.contexts, wtw_out)
+
+        self.reset()
+
+#
+# eflag: FileType = Python2
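The `_installation_trace` trampoline in collector.py above relies on a neat trick: the only hook the threading module offers for running code at thread start is the trace function itself, so a one-shot tracer is installed that removes itself, installs the real tracer, and replays the pending event so nothing is lost. The pattern can be sketched standalone; `installation_trace`, `real_trace`, and `EVENTS` are illustrative names for this sketch, not part of coverage.py:

```python
import sys

EVENTS = []

def real_trace(frame, event, arg):
    # The actual tracer: record every event it sees.  Returning itself
    # keeps it installed as the local trace function for this frame.
    EVENTS.append(event)
    return real_trace

def installation_trace(frame, event, arg):
    # One-shot trampoline: uninstall ourselves, install the real tracer,
    # then replay the current event so the first 'call' is not lost.
    sys.settrace(None)
    sys.settrace(real_trace)
    return real_trace(frame, event, arg)

def traced():
    x = 1 + 1
    return x

prev = sys.gettrace()          # preserve any tracer already installed
sys.settrace(installation_trace)
traced()
sys.settrace(prev)             # restore the previous tracer
```

After the run, `EVENTS` starts with the replayed `'call'`, contains the `'line'` events for the function body, and ends with `'return'`, showing that the trampoline handed off without dropping the initial event.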
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/config.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,368 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Config file for coverage.py"""
+
+import collections
+import os
+import re
+import sys
+
+from coverage.backward import configparser, iitems, string_class
+from coverage.misc import CoverageException, isolate_module
+
+os = isolate_module(os)
+
+
+class HandyConfigParser(configparser.RawConfigParser):
+    """Our specialization of ConfigParser."""
+
+    def __init__(self, section_prefix):
+        configparser.RawConfigParser.__init__(self)
+        self.section_prefix = section_prefix
+
+    def read(self, filename):
+        """Read a file name as UTF-8 configuration data."""
+        kwargs = {}
+        if sys.version_info >= (3, 2):
+            kwargs['encoding'] = "utf-8"
+        return configparser.RawConfigParser.read(self, filename, **kwargs)
+
+    def has_option(self, section, option):
+        section = self.section_prefix + section
+        return configparser.RawConfigParser.has_option(self, section, option)
+
+    def has_section(self, section):
+        section = self.section_prefix + section
+        return configparser.RawConfigParser.has_section(self, section)
+
+    def options(self, section):
+        section = self.section_prefix + section
+        return configparser.RawConfigParser.options(self, section)
+
+    def get_section(self, section):
+        """Get the contents of a section, as a dictionary."""
+        d = {}
+        for opt in self.options(section):
+            d[opt] = self.get(section, opt)
+        return d
+
+    def get(self, section, *args, **kwargs):
+        """Get a value, replacing environment variables also.
+
+        The arguments are the same as `RawConfigParser.get`, but in the found
+        value, ``$WORD`` or ``${WORD}`` are replaced by the value of the
+        environment variable ``WORD``.
+
+        Returns the finished value.
+
+        """
+        section = self.section_prefix + section
+        v = configparser.RawConfigParser.get(self, section, *args, **kwargs)
+        def dollar_replace(m):
+            """Called for each $replacement."""
+            # Only one of the groups will have matched, just get its text.
+            word = next(w for w in m.groups() if w is not None)     # pragma: part covered
+            if word == "$":
+                return "$"
+            else:
+                return os.environ.get(word, '')
+
+        dollar_pattern = r"""(?x)   # Use extended regex syntax
+            \$(?:                   # A dollar sign, then
+            (?P<v1>\w+) |           #   a plain word,
+            {(?P<v2>\w+)} |         #   or a {-wrapped word,
+            (?P<char>[$])           #   or a dollar sign.
+            )
+            """
+        v = re.sub(dollar_pattern, dollar_replace, v)
+        return v
+
+    def getlist(self, section, option):
+        """Read a list of strings.
+
+        The value of `section` and `option` is treated as a comma- and newline-
+        separated list of strings.  Each value is stripped of whitespace.
+
+        Returns the list of strings.
+
+        """
+        value_list = self.get(section, option)
+        values = []
+        for value_line in value_list.split('\n'):
+            for value in value_line.split(','):
+                value = value.strip()
+                if value:
+                    values.append(value)
+        return values
+
+    def getregexlist(self, section, option):
+        """Read a list of full-line regexes.
+
+        The value of `section` and `option` is treated as a newline-separated
+        list of regexes.  Each value is stripped of whitespace.
+
+        Returns the list of strings.
+
+        """
+        line_list = self.get(section, option)
+        value_list = []
+        for value in line_list.splitlines():
+            value = value.strip()
+            try:
+                re.compile(value)
+            except re.error as e:
+                raise CoverageException(
+                    "Invalid [%s].%s value %r: %s" % (section, option, value, e)
+                )
+            if value:
+                value_list.append(value)
+        return value_list
+
+
+# The default line exclusion regexes.
+DEFAULT_EXCLUDE = [
+    r'(?i)#\s*pragma[:\s]?\s*no\s*cover',
+]
+
+# The default partial branch regexes, to be modified by the user.
+DEFAULT_PARTIAL = [
+    r'(?i)#\s*pragma[:\s]?\s*no\s*branch',
+]
+
+# The default partial branch regexes, based on Python semantics.
+# These are any Python branching constructs that can't actually execute all
+# their branches.
+DEFAULT_PARTIAL_ALWAYS = [
+    'while (True|1|False|0):',
+    'if (True|1|False|0):',
+]
+
+
+class CoverageConfig(object):
+    """Coverage.py configuration.
+
+    The attributes of this class are the various settings that control the
+    operation of coverage.py.
+
+    """
+    def __init__(self):
+        """Initialize the configuration attributes to their defaults."""
+        # Metadata about the config.
+        self.attempted_config_files = []
+        self.config_files = []
+
+        # Defaults for [run]
+        self.branch = False
+        self.concurrency = None
+        self.cover_pylib = False
+        self.data_file = ".coverage"
+        self.debug = []
+        self.note = None
+        self.parallel = False
+        self.plugins = []
+        self.source = None
+        self.timid = False
+
+        # Defaults for [report]
+        self.exclude_list = DEFAULT_EXCLUDE[:]
+        self.fail_under = 0
+        self.ignore_errors = False
+        self.include = None
+        self.omit = None
+        self.partial_always_list = DEFAULT_PARTIAL_ALWAYS[:]
+        self.partial_list = DEFAULT_PARTIAL[:]
+        self.precision = 0
+        self.show_missing = False
+        self.skip_covered = False
+
+        # Defaults for [html]
+        self.extra_css = None
+        self.html_dir = "htmlcov"
+        self.html_title = "Coverage report"
+
+        # Defaults for [xml]
+        self.xml_output = "coverage.xml"
+        self.xml_package_depth = 99
+
+        # Defaults for [paths]
+        self.paths = {}
+
+        # Options for plugins
+        self.plugin_options = {}
+
+    MUST_BE_LIST = ["omit", "include", "debug", "plugins"]
+
+    def from_args(self, **kwargs):
+        """Read config values from `kwargs`."""
+        for k, v in iitems(kwargs):
+            if v is not None:
+                if k in self.MUST_BE_LIST and isinstance(v, string_class):
+                    v = [v]
+                setattr(self, k, v)
+
+    def from_file(self, filename, section_prefix=""):
+        """Read configuration from a .rc file.
+
+        `filename` is a file name to read.
+
+        Returns True or False, whether the file could be read.
+
+        """
+        self.attempted_config_files.append(filename)
+
+        cp = HandyConfigParser(section_prefix)
+        try:
+            files_read = cp.read(filename)
+        except configparser.Error as err:
+            raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
+        if not files_read:
+            return False
+
+        self.config_files.extend(files_read)
+
+        try:
+            for option_spec in self.CONFIG_FILE_OPTIONS:
+                self._set_attr_from_config_option(cp, *option_spec)
+        except ValueError as err:
+            raise CoverageException("Couldn't read config file %s: %s" % (filename, err))
+
+        # Check that there are no unrecognized options.
+        all_options = collections.defaultdict(set)
+        for option_spec in self.CONFIG_FILE_OPTIONS:
+            section, option = option_spec[1].split(":")
+            all_options[section].add(option)
+
+        for section, options in iitems(all_options):
+            if cp.has_section(section):
+                for unknown in set(cp.options(section)) - options:
+                    if section_prefix:
+                        section = section_prefix + section
+                    raise CoverageException(
+                        "Unrecognized option '[%s] %s=' in config file %s" % (
+                            section, unknown, filename
+                        )
+                    )
+
+        # [paths] is special
+        if cp.has_section('paths'):
+            for option in cp.options('paths'):
+                self.paths[option] = cp.getlist('paths', option)
+
+        # plugins can have options
+        for plugin in self.plugins:
+            if cp.has_section(plugin):
+                self.plugin_options[plugin] = cp.get_section(plugin)
+
+        return True
+
+    CONFIG_FILE_OPTIONS = [
+        # These are *args for _set_attr_from_config_option:
+        #   (attr, where, type_="")
+        #
+        #   attr is the attribute to set on the CoverageConfig object.
+        #   where is the section:name to read from the configuration file.
+        #   type_ is the optional type to apply, by using .getTYPE to read the
+        #       configuration value from the file.
+
+        # [run]
+        ('branch', 'run:branch', 'boolean'),
+        ('concurrency', 'run:concurrency'),
+        ('cover_pylib', 'run:cover_pylib', 'boolean'),
+        ('data_file', 'run:data_file'),
+        ('debug', 'run:debug', 'list'),
+        ('include', 'run:include', 'list'),
+        ('note', 'run:note'),
+        ('omit', 'run:omit', 'list'),
+        ('parallel', 'run:parallel', 'boolean'),
+        ('plugins', 'run:plugins', 'list'),
+        ('source', 'run:source', 'list'),
+        ('timid', 'run:timid', 'boolean'),
+
+        # [report]
+        ('exclude_list', 'report:exclude_lines', 'regexlist'),
+        ('fail_under', 'report:fail_under', 'int'),
+        ('ignore_errors', 'report:ignore_errors', 'boolean'),
+        ('include', 'report:include', 'list'),
+        ('omit', 'report:omit', 'list'),
+        ('partial_always_list', 'report:partial_branches_always', 'regexlist'),
+        ('partial_list', 'report:partial_branches', 'regexlist'),
+        ('precision', 'report:precision', 'int'),
+        ('show_missing', 'report:show_missing', 'boolean'),
+        ('skip_covered', 'report:skip_covered', 'boolean'),
+
+        # [html]
+        ('extra_css', 'html:extra_css'),
+        ('html_dir', 'html:directory'),
+        ('html_title', 'html:title'),
+
+        # [xml]
+        ('xml_output', 'xml:output'),
+        ('xml_package_depth', 'xml:package_depth', 'int'),
+    ]
+
+    def _set_attr_from_config_option(self, cp, attr, where, type_=''):
+        """Set an attribute on self if it exists in the ConfigParser."""
+        section, option = where.split(":")
+        if cp.has_option(section, option):
+            method = getattr(cp, 'get' + type_)
+            setattr(self, attr, method(section, option))
+
+    def get_plugin_options(self, plugin):
+        """Get a dictionary of options for the plugin named `plugin`."""
+        return self.plugin_options.get(plugin, {})
+
+    def set_option(self, option_name, value):
+        """Set an option in the configuration.
+
+        `option_name` is a colon-separated string indicating the section and
+        option name.  For example, the ``branch`` option in the ``[run]``
+        section of the config file would be indicated with `"run:branch"`.
+
+        `value` is the new value for the option.
+
+        """
+
+        # Check all the hard-coded options.
+        for option_spec in self.CONFIG_FILE_OPTIONS:
+            attr, where = option_spec[:2]
+            if where == option_name:
+                setattr(self, attr, value)
+                return
+
+        # See if it's a plugin option.
+        plugin_name, _, key = option_name.partition(":")
+        if key and plugin_name in self.plugins:
+            self.plugin_options.setdefault(plugin_name, {})[key] = value
+            return
+
+        # If we get here, we didn't find the option.
+        raise CoverageException("No such option: %r" % option_name)
+
+    def get_option(self, option_name):
+        """Get an option from the configuration.
+
+        `option_name` is a colon-separated string indicating the section and
+        option name.  For example, the ``branch`` option in the ``[run]``
+        section of the config file would be indicated with `"run:branch"`.
+
+        Returns the value of the option.
+
+        """
+
+        # Check all the hard-coded options.
+        for option_spec in self.CONFIG_FILE_OPTIONS:
+            attr, where = option_spec[:2]
+            if where == option_name:
+                return getattr(self, attr)
+
+        # See if it's a plugin option.
+        plugin_name, _, key = option_name.partition(":")
+        if key and plugin_name in self.plugins:
+            return self.plugin_options.get(plugin_name, {}).get(key)
+
+        # If we get here, we didn't find the option.
+        raise CoverageException("No such option: %r" % option_name)
+
+#
+# eflag: FileType = Python2
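The environment-variable substitution in `HandyConfigParser.get` above (replacing `$WORD` and `${WORD}` with the variable's value, and `$$` with a literal dollar sign) can be exercised in isolation. The `expand` helper below is a made-up wrapper for this sketch; the regex and replacement callback mirror the ones in config.py:

```python
import os
import re

# Same verbose-mode pattern as HandyConfigParser.get: a dollar sign
# followed by a plain word, a {-wrapped word, or another dollar sign.
dollar_pattern = r"""(?x)
    \$(?:
    (?P<v1>\w+) |
    {(?P<v2>\w+)} |
    (?P<char>[$])
    )
    """

def expand(value):
    """Replace $WORD / ${WORD} with environment values, $$ with $."""
    def dollar_replace(m):
        # Only one group matches; unset variables expand to ''.
        word = next(w for w in m.groups() if w is not None)
        if word == "$":
            return "$"
        return os.environ.get(word, '')
    return re.sub(dollar_pattern, dollar_replace, value)

os.environ["COV_DIR"] = "/tmp/cov"
print(expand("${COV_DIR}/data"))          # -> /tmp/cov/data
print(expand("$COV_DIR-x"))               # -> /tmp/cov-x
print(expand("price: $$5"))               # -> price: $5
print(expand("$COVERAGE_UNSET_XYZ!"))     # unset variable -> "!"
```

Note that an unset variable silently becomes the empty string rather than raising, which matches the `os.environ.get(word, '')` fallback in the original.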
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/control.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,1202 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Core control stuff for coverage.py."""
+
+import atexit
+import inspect
+import os
+import platform
+import re
+import sys
+import traceback
+
+from coverage import env, files
+from coverage.annotate import AnnotateReporter
+from coverage.backward import string_class, iitems
+from coverage.collector import Collector
+from coverage.config import CoverageConfig
+from coverage.data import CoverageData, CoverageDataFiles
+from coverage.debug import DebugControl
+from coverage.files import TreeMatcher, FnmatchMatcher
+from coverage.files import PathAliases, find_python_files, prep_patterns
+from coverage.files import ModuleMatcher, abs_file
+from coverage.html import HtmlReporter
+from coverage.misc import CoverageException, bool_or_none, join_regex
+from coverage.misc import file_be_gone, isolate_module
+from coverage.monkey import patch_multiprocessing
+from coverage.plugin import FileReporter
+from coverage.plugin_support import Plugins
+from coverage.python import PythonFileReporter
+from coverage.results import Analysis, Numbers
+from coverage.summary import SummaryReporter
+from coverage.xmlreport import XmlReporter
+
+os = isolate_module(os)
+
+# PyPy has some unusual stuff in the "stdlib".  Consider those locations
+# when deciding where the stdlib is.
+try:
+    import _structseq
+except ImportError:
+    _structseq = None
+
+
+class Coverage(object):
+    """Programmatic access to coverage.py.
+
+    To use::
+
+        from coverage import Coverage
+
+        cov = Coverage()
+        cov.start()
+        #.. call your code ..
+        cov.stop()
+        cov.html_report(directory='covhtml')
+
+    """
+    def __init__(
+        self, data_file=None, data_suffix=None, cover_pylib=None,
+        auto_data=False, timid=None, branch=None, config_file=True,
+        source=None, omit=None, include=None, debug=None,
+        concurrency=None,
+    ):
+        """
+        `data_file` is the base name of the data file to use, defaulting to
+        ".coverage".  `data_suffix` is appended (with a dot) to `data_file` to
+        create the final file name.  If `data_suffix` is simply True, then a
+        suffix is created with the machine and process identity included.
+
+        `cover_pylib` is a boolean determining whether Python code installed
+        with the Python interpreter is measured.  This includes the Python
+        standard library and any packages installed with the interpreter.
+
+        If `auto_data` is true, then any existing data file will be read when
+        coverage measurement starts, and data will be saved automatically when
+        measurement stops.
+
+        If `timid` is true, then a slower and simpler trace function will be
+        used.  This is important for some environments where manipulation of
+        tracing functions breaks the faster trace function.
+
+        If `branch` is true, then branch coverage will be measured in addition
+        to the usual statement coverage.
+
+        `config_file` determines what configuration file to read:
+
+            * If it is ".coveragerc", it is interpreted as if it were True,
+              for backward compatibility.
+
+            * If it is a string, it is the name of the file to read.  If the
+              file can't be read, it is an error.
+
+            * If it is True, then a few standard file names are tried
+              (".coveragerc", "setup.cfg").  It is not an error for these files
+              to not be found.
+
+            * If it is False, then no configuration file is read.
+
+        `source` is a list of file paths or package names.  Only code located
+        in the trees indicated by the file paths or package names will be
+        measured.
+
+        `include` and `omit` are lists of file name patterns. Files that match
+        `include` will be measured, files that match `omit` will not.  Each
+        will also accept a single string argument.
+
+        `debug` is a list of strings indicating what debugging information is
+        desired.
+
+        `concurrency` is a string indicating the concurrency library being used
+        in the measured code.  Without this, coverage.py will get incorrect
+        results.  Valid strings are "greenlet", "eventlet", "gevent",
+        "multiprocessing", or "thread" (the default).
+
+        .. versionadded:: 4.0
+            The `concurrency` parameter.
+
+        """
+        # Build our configuration from a number of sources:
+        # 1: defaults:
+        self.config = CoverageConfig()
+
+        # 2: from the rcfile, .coveragerc or setup.cfg file:
+        if config_file:
+            did_read_rc = False
+            # Some API users were specifying ".coveragerc" to mean the same as
+            # True, so make it so.
+            if config_file == ".coveragerc":
+                config_file = True
+            specified_file = (config_file is not True)
+            if not specified_file:
+                config_file = ".coveragerc"
+
+            did_read_rc = self.config.from_file(config_file)
+
+            if not did_read_rc:
+                if specified_file:
+                    raise CoverageException(
+                        "Couldn't read '%s' as a config file" % config_file
+                        )
+                self.config.from_file("setup.cfg", section_prefix="coverage:")
+
+        # 3: from environment variables:
+        env_data_file = os.environ.get('COVERAGE_FILE')
+        if env_data_file:
+            self.config.data_file = env_data_file
+        debugs = os.environ.get('COVERAGE_DEBUG')
+        if debugs:
+            self.config.debug.extend(debugs.split(","))
+
+        # 4: from constructor arguments:
+        self.config.from_args(
+            data_file=data_file, cover_pylib=cover_pylib, timid=timid,
+            branch=branch, parallel=bool_or_none(data_suffix),
+            source=source, omit=omit, include=include, debug=debug,
+            concurrency=concurrency,
+            )
+
+        self._debug_file = None
+        self._auto_data = auto_data
+        self._data_suffix = data_suffix
+
+        # The matchers for _should_trace.
+        self.source_match = None
+        self.source_pkgs_match = None
+        self.pylib_match = self.cover_match = None
+        self.include_match = self.omit_match = None
+
+        # Is it ok for no data to be collected?
+        self._warn_no_data = True
+        self._warn_unimported_source = True
+
+        # A record of all the warnings that have been issued.
+        self._warnings = []
+
+        # Other instance attributes, set later.
+        self.omit = self.include = self.source = None
+        self.source_pkgs = None
+        self.data = self.data_files = self.collector = None
+        self.plugins = None
+        self.pylib_dirs = self.cover_dirs = None
+        self.data_suffix = self.run_suffix = None
+        self._exclude_re = None
+        self.debug = None
+
+        # State machine variables:
+        # Have we initialized everything?
+        self._inited = False
+        # Have we started collecting and not stopped it?
+        self._started = False
+        # Have we measured some data and not harvested it?
+        self._measured = False
+
+    def _init(self):
+        """Set all the initial state.
+
+        This is called by the public methods to initialize state. This lets us
+        construct a :class:`Coverage` object, then tweak its state before this
+        function is called.
+
+        """
+        if self._inited:
+            return
+
+        # Create and configure the debugging controller. COVERAGE_DEBUG_FILE
+        # is an environment variable, the name of a file to append debug logs
+        # to.
+        if self._debug_file is None:
+            debug_file_name = os.environ.get("COVERAGE_DEBUG_FILE")
+            if debug_file_name:
+                self._debug_file = open(debug_file_name, "a")
+            else:
+                self._debug_file = sys.stderr
+        self.debug = DebugControl(self.config.debug, self._debug_file)
+
+        # Load plugins
+        self.plugins = Plugins.load_plugins(self.config.plugins, self.config, self.debug)
+
+        # _exclude_re is a dict that maps exclusion list names to compiled
+        # regexes.
+        self._exclude_re = {}
+        self._exclude_regex_stale()
+
+        files.set_relative_directory()
+
+        # The source argument can be directories or package names.
+        self.source = []
+        self.source_pkgs = []
+        for src in self.config.source or []:
+            if os.path.exists(src):
+                self.source.append(files.canonical_filename(src))
+            else:
+                self.source_pkgs.append(src)
+
+        self.omit = prep_patterns(self.config.omit)
+        self.include = prep_patterns(self.config.include)
+
+        concurrency = self.config.concurrency
+        if concurrency == "multiprocessing":
+            patch_multiprocessing()
+            concurrency = None
+
+        self.collector = Collector(
+            should_trace=self._should_trace,
+            check_include=self._check_include_omit_etc,
+            timid=self.config.timid,
+            branch=self.config.branch,
+            warn=self._warn,
+            concurrency=concurrency,
+            )
+
+        # Early warning if we aren't going to be able to support plugins.
+        if self.plugins.file_tracers and not self.collector.supports_plugins:
+            self._warn(
+                "Plugin file tracers (%s) aren't supported with %s" % (
+                    ", ".join(
+                        plugin._coverage_plugin_name
+                            for plugin in self.plugins.file_tracers
+                        ),
+                    self.collector.tracer_name(),
+                    )
+                )
+            for plugin in self.plugins.file_tracers:
+                plugin._coverage_enabled = False
+
+        # Suffixes are a bit tricky.  We want to use the data suffix only when
+        # collecting data, not when combining data.  So we save it as
+        # `self.run_suffix` now, and promote it to `self.data_suffix` if we
+        # find that we are collecting data later.
+        if self._data_suffix or self.config.parallel:
+            if not isinstance(self._data_suffix, string_class):
+                # if data_suffix=True, use .machinename.pid.random
+                self._data_suffix = True
+        else:
+            self._data_suffix = None
+        self.data_suffix = None
+        self.run_suffix = self._data_suffix
+
+        # Create the data file.  We do this at construction time so that the
+        # data file will be written into the directory where the process
+        # started rather than wherever the process eventually chdir'd to.
+        self.data = CoverageData(debug=self.debug)
+        self.data_files = CoverageDataFiles(basename=self.config.data_file, warn=self._warn)
+
+        # The directories for files considered "installed with the interpreter".
+        self.pylib_dirs = set()
+        if not self.config.cover_pylib:
+            # Look at where some standard modules are located. That's the
+            # indication for "installed with the interpreter". In some
+            # environments (virtualenv, for example), these modules may be
+            # spread across a few locations. Look at all the candidate modules
+            # we've imported, and take all the different ones.
+            for m in (atexit, inspect, os, platform, re, _structseq, traceback):
+                if m is not None and hasattr(m, "__file__"):
+                    self.pylib_dirs.add(self._canonical_dir(m))
+            if _structseq and not hasattr(_structseq, '__file__'):
+                # PyPy 2.4 has no __file__ in the builtin modules, but the code
+                # objects still have the file names.  So dig into one to find
+                # the path to exclude.
+                structseq_new = _structseq.structseq_new
+                try:
+                    structseq_file = structseq_new.func_code.co_filename
+                except AttributeError:
+                    structseq_file = structseq_new.__code__.co_filename
+                self.pylib_dirs.add(self._canonical_dir(structseq_file))
+
+        # To avoid tracing the coverage.py code itself, we skip anything
+        # located where we are.
+        self.cover_dirs = [self._canonical_dir(__file__)]
+        if env.TESTING:
+            # When testing, we use PyContracts, which should be considered
+            # part of coverage.py, and it uses six. Exclude those directories
+            # just as we exclude ourselves.
+            import contracts, six
+            for mod in [contracts, six]:
+                self.cover_dirs.append(self._canonical_dir(mod))
+
+        # Set the reporting precision.
+        Numbers.set_precision(self.config.precision)
+
+        atexit.register(self._atexit)
+
+        self._inited = True
+
+        # Create the matchers we need for _should_trace
+        if self.source or self.source_pkgs:
+            self.source_match = TreeMatcher(self.source)
+            self.source_pkgs_match = ModuleMatcher(self.source_pkgs)
+        else:
+            if self.cover_dirs:
+                self.cover_match = TreeMatcher(self.cover_dirs)
+            if self.pylib_dirs:
+                self.pylib_match = TreeMatcher(self.pylib_dirs)
+        if self.include:
+            self.include_match = FnmatchMatcher(self.include)
+        if self.omit:
+            self.omit_match = FnmatchMatcher(self.omit)
+
+        # The user may want to debug things; show info if desired.
+        wrote_any = False
+        if self.debug.should('config'):
+            config_info = sorted(self.config.__dict__.items())
+            self.debug.write_formatted_info("config", config_info)
+            wrote_any = True
+
+        if self.debug.should('sys'):
+            self.debug.write_formatted_info("sys", self.sys_info())
+            for plugin in self.plugins:
+                header = "sys: " + plugin._coverage_plugin_name
+                info = plugin.sys_info()
+                self.debug.write_formatted_info(header, info)
+            wrote_any = True
+
+        if wrote_any:
+            self.debug.write_formatted_info("end", ())
+
+    def _canonical_dir(self, morf):
+        """Return the canonical directory of the module or file `morf`."""
+        morf_filename = PythonFileReporter(morf, self).filename
+        return os.path.split(morf_filename)[0]
+
+    def _source_for_file(self, filename):
+        """Return the source file for `filename`.
+
+        Given a file name being traced, return the best guess as to the source
+        file to attribute it to.
+
+        """
+        if filename.endswith(".py"):
+            # .py files are themselves source files.
+            return filename
+
+        elif filename.endswith((".pyc", ".pyo")):
+            # Bytecode files probably have source files near them.
+            py_filename = filename[:-1]
+            if os.path.exists(py_filename):
+                # Found a .py file, use that.
+                return py_filename
+            if env.WINDOWS:
+                # On Windows, it could be a .pyw file.
+                pyw_filename = py_filename + "w"
+                if os.path.exists(pyw_filename):
+                    return pyw_filename
+            # Didn't find source, but it's probably the .py file we want.
+            return py_filename
+
+        elif filename.endswith("$py.class"):
+            # Jython is easy to guess.
+            return filename[:-9] + ".py"
+
+        # No idea, just use the file name as-is.
+        return filename
+
+    def _name_for_module(self, module_globals, filename):
+        """Get the name of the module for a set of globals and file name.
+
+        For configurability's sake, we allow __main__ modules to be matched by
+        their importable name.
+
+        If loaded via runpy (aka -m), we can usually recover the "original"
+        full dotted module name, otherwise, we resort to interpreting the
+        file name to get the module's name.  In the case that the module name
+        can't be determined, None is returned.
+
+        """
+        dunder_name = module_globals.get('__name__', None)
+
+        if isinstance(dunder_name, str) and dunder_name != '__main__':
+            # This is the usual case: an imported module.
+            return dunder_name
+
+        loader = module_globals.get('__loader__', None)
+        for attrname in ('fullname', 'name'):   # attribute renamed in py3.2
+            if hasattr(loader, attrname):
+                fullname = getattr(loader, attrname)
+            else:
+                continue
+
+            if isinstance(fullname, str) and fullname != '__main__':
+                # Module loaded via: runpy -m
+                return fullname
+
+        # Script as first argument to Python command line.
+        inspectedname = inspect.getmodulename(filename)
+        if inspectedname is not None:
+            return inspectedname
+        else:
+            return dunder_name
+
+    def _should_trace_internal(self, filename, frame):
+        """Decide whether to trace execution in `filename`, with a reason.
+
+        This function is called from the trace function.  As each new file name
+        is encountered, this function determines whether it is traced or not.
+
+        Returns a FileDisposition object.
+
+        """
+        original_filename = filename
+        disp = _disposition_init(self.collector.file_disposition_class, filename)
+
+        def nope(disp, reason):
+            """Simple helper to make it easy to return NO."""
+            disp.trace = False
+            disp.reason = reason
+            return disp
+
+        # Compiled Python files have two file names: frame.f_code.co_filename is
+        # the file name at the time the .pyc was compiled.  The second name is
+        # __file__, which is where the .pyc was actually loaded from.  Since
+        # .pyc files can be moved after compilation (for example, by being
+        # installed), we look for __file__ in the frame and prefer it to the
+        # co_filename value.
+        dunder_file = frame.f_globals.get('__file__')
+        if dunder_file:
+            filename = self._source_for_file(dunder_file)
+            if original_filename and not original_filename.startswith('<'):
+                orig = os.path.basename(original_filename)
+                if orig != os.path.basename(filename):
+                    # Files shouldn't be renamed when moved. This happens when
+                    # exec'ing code.  If it seems like something is wrong with
+                    # the frame's file name, then just use the original.
+                    filename = original_filename
+
+        if not filename:
+            # Empty string is pretty useless.
+            return nope(disp, "empty string isn't a file name")
+
+        if filename.startswith('memory:'):
+            return nope(disp, "memory isn't traceable")
+
+        if filename.startswith('<'):
+            # Lots of non-file execution is represented with artificial
+            # file names like "<string>", "<doctest readme.txt[0]>", or
+            # "<exec_function>".  Don't ever trace these executions, since we
+            # can't do anything with the data later anyway.
+            return nope(disp, "not a real file name")
+
+        # pyexpat does a dumb thing, calling the trace function explicitly from
+        # C code with a C file name.
+        if re.search(r"[/\\]Modules[/\\]pyexpat.c", filename):
+            return nope(disp, "pyexpat lies about itself")
+
+        # Jython reports the .class file to the tracer, use the source file.
+        if filename.endswith("$py.class"):
+            filename = filename[:-9] + ".py"
+
+        canonical = files.canonical_filename(filename)
+        disp.canonical_filename = canonical
+
+        # Try the plugins, see if they have an opinion about the file.
+        plugin = None
+        for plugin in self.plugins.file_tracers:
+            if not plugin._coverage_enabled:
+                continue
+
+            try:
+                file_tracer = plugin.file_tracer(canonical)
+                if file_tracer is not None:
+                    file_tracer._coverage_plugin = plugin
+                    disp.trace = True
+                    disp.file_tracer = file_tracer
+                    if file_tracer.has_dynamic_source_filename():
+                        disp.has_dynamic_filename = True
+                    else:
+                        disp.source_filename = files.canonical_filename(
+                            file_tracer.source_filename()
+                        )
+                    break
+            except Exception:
+                self._warn(
+                    "Disabling plugin %r due to an exception:" % (
+                        plugin._coverage_plugin_name
+                    )
+                )
+                traceback.print_exc()
+                plugin._coverage_enabled = False
+                continue
+        else:
+            # No plugin wanted it: it's Python.
+            disp.trace = True
+            disp.source_filename = canonical
+
+        if not disp.has_dynamic_filename:
+            if not disp.source_filename:
+                raise CoverageException(
+                    "Plugin %r didn't set source_filename for %r" %
+                    (plugin, disp.original_filename)
+                )
+            reason = self._check_include_omit_etc_internal(
+                disp.source_filename, frame,
+            )
+            if reason:
+                nope(disp, reason)
+
+        return disp
+
+    def _check_include_omit_etc_internal(self, filename, frame):
+        """Check a file name against the include, omit, etc. rules.
+
+        Returns a string or None.  String means, don't trace, and is the reason
+        why.  None means no reason found to not trace.
+
+        """
+        modulename = self._name_for_module(frame.f_globals, filename)
+
+        # If the user specified source or include, then that's authoritative
+        # about the outer bound of what to measure and we don't have to apply
+        # any canned exclusions. If they didn't, then we have to exclude the
+        # stdlib and coverage.py directories.
+        if self.source_match:
+            if self.source_pkgs_match.match(modulename):
+                if modulename in self.source_pkgs:
+                    self.source_pkgs.remove(modulename)
+                return None  # There's no reason to skip this file.
+
+            if not self.source_match.match(filename):
+                return "falls outside the --source trees"
+        elif self.include_match:
+            if not self.include_match.match(filename):
+                return "falls outside the --include trees"
+        else:
+            # If we aren't supposed to trace installed code, then check if this
+            # is near the Python standard library and skip it if so.
+            if self.pylib_match and self.pylib_match.match(filename):
+                return "is in the stdlib"
+
+            # We exclude the coverage.py code itself, since a little of it
+            # will be measured otherwise.
+            if self.cover_match and self.cover_match.match(filename):
+                return "is part of coverage.py"
+
+        # Check the file against the omit pattern.
+        if self.omit_match and self.omit_match.match(filename):
+            return "is inside an --omit pattern"
+
+        # No reason found to skip this file.
+        return None
+
+    def _should_trace(self, filename, frame):
+        """Decide whether to trace execution in `filename`.
+
+        Calls `_should_trace_internal`, and returns the FileDisposition.
+
+        """
+        disp = self._should_trace_internal(filename, frame)
+        if self.debug.should('trace'):
+            self.debug.write(_disposition_debug_msg(disp))
+        return disp
+
+    def _check_include_omit_etc(self, filename, frame):
+        """Check a file name against the include/omit/etc. rules, verbosely.
+
+        Returns a boolean: True if the file should be traced, False if not.
+
+        """
+        reason = self._check_include_omit_etc_internal(filename, frame)
+        if self.debug.should('trace'):
+            if not reason:
+                msg = "Including %r" % (filename,)
+            else:
+                msg = "Not including %r: %s" % (filename, reason)
+            self.debug.write(msg)
+
+        return not reason
+
+    def _warn(self, msg):
+        """Use `msg` as a warning."""
+        self._warnings.append(msg)
+        if self.debug.should('pid'):
+            msg = "[%d] %s" % (os.getpid(), msg)
+        sys.stderr.write("Coverage.py warning: %s\n" % msg)
+
+    def get_option(self, option_name):
+        """Get an option from the configuration.
+
+        `option_name` is a colon-separated string indicating the section and
+        option name.  For example, the ``branch`` option in the ``[run]``
+        section of the config file would be indicated with `"run:branch"`.
+
+        Returns the value of the option.
+
+        .. versionadded:: 4.0
+
+        """
+        return self.config.get_option(option_name)
+
+    def set_option(self, option_name, value):
+        """Set an option in the configuration.
+
+        `option_name` is a colon-separated string indicating the section and
+        option name.  For example, the ``branch`` option in the ``[run]``
+        section of the config file would be indicated with ``"run:branch"``.
+
+        `value` is the new value for the option.  This should be a Python
+        value where appropriate.  For example, use True for booleans, not the
+        string ``"True"``.
+
+        As an example, calling::
+
+            cov.set_option("run:branch", True)
+
+        has the same effect as this configuration file::
+
+            [run]
+            branch = True
+
+        .. versionadded:: 4.0
+
+        """
+        self.config.set_option(option_name, value)
+
+    def use_cache(self, usecache):
+        """Obsolete method."""
+        self._init()
+        if not usecache:
+            self._warn("use_cache(False) is no longer supported.")
+
+    def load(self):
+        """Load previously-collected coverage data from the data file."""
+        self._init()
+        self.collector.reset()
+        self.data_files.read(self.data)
+
+    def start(self):
+        """Start measuring code coverage.
+
+        Coverage measurement actually occurs in functions called after
+        :meth:`start` is invoked.  Statements in the same scope as
+        :meth:`start` won't be measured.
+
+        Once you invoke :meth:`start`, you must also call :meth:`stop`
+        eventually, or your process might not shut down cleanly.
+
+        """
+        self._init()
+        if self.run_suffix:
+            # Calling start() means we're running code, so use the run_suffix
+            # as the data_suffix when we eventually save the data.
+            self.data_suffix = self.run_suffix
+        if self._auto_data:
+            self.load()
+
+        self.collector.start()
+        self._started = True
+        self._measured = True
+
+    def stop(self):
+        """Stop measuring code coverage."""
+        if self._started:
+            self.collector.stop()
+        self._started = False
+
+    def _atexit(self):
+        """Clean up on process shutdown."""
+        if self._started:
+            self.stop()
+        if self._auto_data:
+            self.save()
+
+    def erase(self):
+        """Erase previously-collected coverage data.
+
+        This removes the in-memory data collected in this session as well as
+        discarding the data file.
+
+        """
+        self._init()
+        self.collector.reset()
+        self.data.erase()
+        self.data_files.erase(parallel=self.config.parallel)
+
+    def clear_exclude(self, which='exclude'):
+        """Clear the exclude list."""
+        self._init()
+        setattr(self.config, which + "_list", [])
+        self._exclude_regex_stale()
+
+    def exclude(self, regex, which='exclude'):
+        """Exclude source lines from execution consideration.
+
+        A number of lists of regular expressions are maintained.  Each list
+        selects lines that are treated differently during reporting.
+
+        `which` determines which list is modified.  The "exclude" list selects
+        lines that are not considered executable at all.  The "partial" list
+        indicates lines with branches that are not taken.
+
+        `regex` is a regular expression.  The regex is added to the specified
+        list.  If any of the regexes in the list is found in a line, the line
+        is marked for special treatment during reporting.
+
+        """
+        self._init()
+        excl_list = getattr(self.config, which + "_list")
+        excl_list.append(regex)
+        self._exclude_regex_stale()
+
+    def _exclude_regex_stale(self):
+        """Drop all the compiled exclusion regexes; a list was modified."""
+        self._exclude_re.clear()
+
+    def _exclude_regex(self, which):
+        """Return a compiled regex for the given exclusion list."""
+        if which not in self._exclude_re:
+            excl_list = getattr(self.config, which + "_list")
+            self._exclude_re[which] = join_regex(excl_list)
+        return self._exclude_re[which]
+
+    def get_exclude_list(self, which='exclude'):
+        """Return a list of excluded regex patterns.
+
+        `which` indicates which list is desired.  See :meth:`exclude` for the
+        lists that are available, and their meaning.
+
+        """
+        self._init()
+        return getattr(self.config, which + "_list")
+
+    def save(self):
+        """Save the collected coverage data to the data file."""
+        self._init()
+        self.get_data()
+        self.data_files.write(self.data, suffix=self.data_suffix)
+
+    def combine(self, data_paths=None):
+        """Combine together a number of similarly-named coverage data files.
+
+        All coverage data files whose name starts with `data_file` (from the
+        coverage() constructor) will be read, and combined together into the
+        current measurements.
+
+        `data_paths` is a list of files or directories from which data should
+        be combined. If no list is passed, then the data files from the
+        directory indicated by the current data file (probably the current
+        directory) will be combined.
+
+        .. versionadded:: 4.0
+            The `data_paths` parameter.
+
+        """
+        self._init()
+        self.get_data()
+
+        aliases = None
+        if self.config.paths:
+            aliases = PathAliases()
+            for paths in self.config.paths.values():
+                result = paths[0]
+                for pattern in paths[1:]:
+                    aliases.add(pattern, result)
+
+        self.data_files.combine_parallel_data(self.data, aliases=aliases, data_paths=data_paths)
+
+    def get_data(self):
+        """Get the collected data and reset the collector.
+
+        Also warn about various problems collecting data.
+
+        Returns a :class:`coverage.CoverageData`, the collected coverage data.
+
+        .. versionadded:: 4.0
+
+        """
+        self._init()
+        if not self._measured:
+            return self.data
+
+        self.collector.save_data(self.data)
+
+        # If there are still entries in the source_pkgs list, then we never
+        # encountered those packages.
+        if self._warn_unimported_source:
+            for pkg in self.source_pkgs:
+                if pkg not in sys.modules:
+                    self._warn("Module %s was never imported." % pkg)
+                elif not (
+                    hasattr(sys.modules[pkg], '__file__') and
+                    os.path.exists(sys.modules[pkg].__file__)
+                ):
+                    self._warn("Module %s has no Python source." % pkg)
+                else:
+                    self._warn("Module %s was previously imported, but not measured." % pkg)
+
+        # Find out if we got any data.
+        if not self.data and self._warn_no_data:
+            self._warn("No data was collected.")
+
+        # Find files that were never executed at all.
+        for src in self.source:
+            for py_file in find_python_files(src):
+                py_file = files.canonical_filename(py_file)
+
+                if self.omit_match and self.omit_match.match(py_file):
+                    # Turns out this file was omitted, so don't pull it back
+                    # in as unexecuted.
+                    continue
+
+                self.data.touch_file(py_file)
+
+        if self.config.note:
+            self.data.add_run_info(note=self.config.note)
+
+        self._measured = False
+        return self.data
+
+    # Backward compatibility with version 1.
+    def analysis(self, morf):
+        """Like `analysis2` but doesn't return excluded line numbers."""
+        f, s, _, m, mf = self.analysis2(morf)
+        return f, s, m, mf
+
+    def analysis2(self, morf):
+        """Analyze a module.
+
+        `morf` is a module or a file name.  It will be analyzed to determine
+        its coverage statistics.  The return value is a 5-tuple:
+
+        * The file name for the module.
+        * A list of line numbers of executable statements.
+        * A list of line numbers of excluded statements.
+        * A list of line numbers of statements not run (missing from
+          execution).
+        * A readable formatted string of the missing line numbers.
+
+        The analysis uses the source file itself and the current measured
+        coverage data.
+
+        """
+        self._init()
+        analysis = self._analyze(morf)
+        return (
+            analysis.filename,
+            sorted(analysis.statements),
+            sorted(analysis.excluded),
+            sorted(analysis.missing),
+            analysis.missing_formatted(),
+            )
+
+    def _analyze(self, it):
+        """Analyze a single morf or code unit.
+
+        Returns an `Analysis` object.
+
+        """
+        self.get_data()
+        if not isinstance(it, FileReporter):
+            it = self._get_file_reporter(it)
+
+        return Analysis(self.data, it)
+
+    def _get_file_reporter(self, morf):
+        """Get a FileReporter for a module or file name."""
+        plugin = None
+        file_reporter = "python"
+
+        if isinstance(morf, string_class):
+            abs_morf = abs_file(morf)
+            plugin_name = self.data.file_tracer(abs_morf)
+            if plugin_name:
+                plugin = self.plugins.get(plugin_name)
+
+        if plugin:
+            file_reporter = plugin.file_reporter(abs_morf)
+            if file_reporter is None:
+                raise CoverageException(
+                    "Plugin %r did not provide a file reporter for %r." % (
+                        plugin._coverage_plugin_name, morf
+                    )
+                )
+
+        if file_reporter == "python":
+            file_reporter = PythonFileReporter(morf, self)
+
+        return file_reporter
+
+    def _get_file_reporters(self, morfs=None):
+        """Get a list of FileReporters for a list of modules or file names.
+
+        For each module or file name in `morfs`, find a FileReporter.  Return
+        the list of FileReporters.
+
+        If `morfs` is a single module or file name, this returns a list of one
+        FileReporter.  If `morfs` is empty or None, then the list of all files
+        measured is used to find the FileReporters.
+
+        """
+        if not morfs:
+            morfs = self.data.measured_files()
+
+        # Be sure we have a list.
+        if not isinstance(morfs, (list, tuple)):
+            morfs = [morfs]
+
+        file_reporters = []
+        for morf in morfs:
+            file_reporter = self._get_file_reporter(morf)
+            file_reporters.append(file_reporter)
+
+        return file_reporters
+
+    def report(
+        self, morfs=None, show_missing=None, ignore_errors=None,
+        file=None,                  # pylint: disable=redefined-builtin
+        omit=None, include=None, skip_covered=None,
+    ):
+        """Write a summary report to `file`.
+
+        Each module in `morfs` is listed, with counts of statements, executed
+        statements, missing statements, and a list of lines missed.
+
+        `include` is a list of file name patterns.  Files that match will be
+        included in the report. Files matching `omit` will not be included in
+        the report.
+
+        Returns a float, the total percentage covered.
+
+        """
+        self.get_data()
+        self.config.from_args(
+            ignore_errors=ignore_errors, omit=omit, include=include,
+            show_missing=show_missing, skip_covered=skip_covered,
+            )
+        reporter = SummaryReporter(self, self.config)
+        return reporter.report(morfs, outfile=file)
+
+    def annotate(
+        self, morfs=None, directory=None, ignore_errors=None,
+        omit=None, include=None,
+    ):
+        """Annotate a list of modules.
+
+        Each module in `morfs` is annotated.  The source is written to a new
+        file, named with a ",cover" suffix, with each line prefixed with a
+        marker to indicate the coverage of the line.  Covered lines have ">",
+        excluded lines have "-", and missing lines have "!".
+
+        See :meth:`report` for other arguments.
+
+        """
+        self.get_data()
+        self.config.from_args(
+            ignore_errors=ignore_errors, omit=omit, include=include
+            )
+        reporter = AnnotateReporter(self, self.config)
+        reporter.report(morfs, directory=directory)
+
+    def html_report(self, morfs=None, directory=None, ignore_errors=None,
+                    omit=None, include=None, extra_css=None, title=None):
+        """Generate an HTML report.
+
+        The HTML is written to `directory`.  The file "index.html" is the
+        overview starting point, with links to more detailed pages for
+        individual modules.
+
+        `extra_css` is a path to a file of other CSS to apply on the page.
+        It will be copied into the HTML directory.
+
+        `title` is a text string (not HTML) to use as the title of the HTML
+        report.
+
+        See :meth:`report` for other arguments.
+
+        Returns a float, the total percentage covered.
+
+        """
+        self.get_data()
+        self.config.from_args(
+            ignore_errors=ignore_errors, omit=omit, include=include,
+            html_dir=directory, extra_css=extra_css, html_title=title,
+            )
+        reporter = HtmlReporter(self, self.config)
+        return reporter.report(morfs)
+
+    def xml_report(
+        self, morfs=None, outfile=None, ignore_errors=None,
+        omit=None, include=None,
+    ):
+        """Generate an XML report of coverage results.
+
+        The report is compatible with Cobertura reports.
+
+        Each module in `morfs` is included in the report.  `outfile` is the
+        path to write the file to, "-" will write to stdout.
+
+        See :meth:`report` for other arguments.
+
+        Returns a float, the total percentage covered.
+
+        """
+        self.get_data()
+        self.config.from_args(
+            ignore_errors=ignore_errors, omit=omit, include=include,
+            xml_output=outfile,
+            )
+        file_to_close = None
+        delete_file = False
+        if self.config.xml_output:
+            if self.config.xml_output == '-':
+                outfile = sys.stdout
+            else:
+                # Ensure that the output directory is created; done here
+                # because this report pre-opens the output file.
+                # HTMLReport does this using the Report plumbing because
+                # its task is more complex, being multiple files.
+                output_dir = os.path.dirname(self.config.xml_output)
+                if output_dir and not os.path.isdir(output_dir):
+                    os.makedirs(output_dir)
+                open_kwargs = {}
+                if env.PY3:
+                    open_kwargs['encoding'] = 'utf8'
+                outfile = open(self.config.xml_output, "w", **open_kwargs)
+                file_to_close = outfile
+        try:
+            reporter = XmlReporter(self, self.config)
+            return reporter.report(morfs, outfile=outfile)
+        except CoverageException:
+            delete_file = True
+            raise
+        finally:
+            if file_to_close:
+                file_to_close.close()
+                if delete_file:
+                    file_be_gone(self.config.xml_output)
+
+    def sys_info(self):
+        """Return a list of (key, value) pairs showing internal information."""
+
+        import coverage as covmod
+
+        self._init()
+
+        ft_plugins = []
+        for ft in self.plugins.file_tracers:
+            ft_name = ft._coverage_plugin_name
+            if not ft._coverage_enabled:
+                ft_name += " (disabled)"
+            ft_plugins.append(ft_name)
+
+        info = [
+            ('version', covmod.__version__),
+            ('coverage', covmod.__file__),
+            ('cover_dirs', self.cover_dirs),
+            ('pylib_dirs', self.pylib_dirs),
+            ('tracer', self.collector.tracer_name()),
+            ('plugins.file_tracers', ft_plugins),
+            ('config_files', self.config.attempted_config_files),
+            ('configs_read', self.config.config_files),
+            ('data_path', self.data_files.filename),
+            ('python', sys.version.replace('\n', '')),
+            ('platform', platform.platform()),
+            ('implementation', platform.python_implementation()),
+            ('executable', sys.executable),
+            ('cwd', os.getcwd()),
+            ('path', sys.path),
+            ('environment', sorted(
+                ("%s = %s" % (k, v))
+                for k, v in iitems(os.environ)
+                if k.startswith(("COV", "PY"))
+            )),
+            ('command_line', " ".join(getattr(sys, 'argv', ['???']))),
+            ]
+
+        matcher_names = [
+            'source_match', 'source_pkgs_match',
+            'include_match', 'omit_match',
+            'cover_match', 'pylib_match',
+            ]
+
+        for matcher_name in matcher_names:
+            matcher = getattr(self, matcher_name)
+            if matcher:
+                matcher_info = matcher.info()
+            else:
+                matcher_info = '-none-'
+            info.append((matcher_name, matcher_info))
+
+        return info
+
+
+# FileDisposition "methods": FileDisposition is a pure value object, so it can
+# be implemented in either C or Python.  Acting on them is done with these
+# functions.
+
+def _disposition_init(cls, original_filename):
+    """Construct and initialize a new FileDisposition object."""
+    disp = cls()
+    disp.original_filename = original_filename
+    disp.canonical_filename = original_filename
+    disp.source_filename = None
+    disp.trace = False
+    disp.reason = ""
+    disp.file_tracer = None
+    disp.has_dynamic_filename = False
+    return disp
+
+
+def _disposition_debug_msg(disp):
+    """Make a nice debug message of what the FileDisposition is doing."""
+    if disp.trace:
+        msg = "Tracing %r" % (disp.original_filename,)
+        if disp.file_tracer:
+            msg += ": will be traced by %r" % disp.file_tracer
+    else:
+        msg = "Not tracing %r: %s" % (disp.original_filename, disp.reason)
+    return msg
+
+
+def process_startup():
+    """Call this at Python start-up to perhaps measure coverage.
+
+    If the environment variable COVERAGE_PROCESS_START is defined, coverage
+    measurement is started.  The value of the variable is the config file
+    to use.
+
+    There are two ways to configure your Python installation to invoke this
+    function when Python starts:
+
+    #. Create or append to sitecustomize.py to add these lines::
+
+        import coverage
+        coverage.process_startup()
+
+    #. Create a .pth file in your Python installation containing::
+
+        import coverage; coverage.process_startup()
+
+    Returns the :class:`Coverage` instance that was started, or None if it was
+    not started by this call.
+
+    """
+    cps = os.environ.get("COVERAGE_PROCESS_START")
+    if not cps:
+        # No request for coverage, nothing to do.
+        return None
+
+    # This function can be called more than once in a process. This happens
+    # because some virtualenv configurations make the same directory visible
+    # twice in sys.path.  This means that the .pth file will be found and
+    # executed twice, calling this function twice.  We set a global
+    # flag (an attribute on this function) to indicate that coverage.py has
+    # already been started, so we can avoid doing it twice.
+    #
+    # https://bitbucket.org/ned/coveragepy/issue/340/keyerror-subpy has more
+    # details.
+
+    if hasattr(process_startup, "done"):
+        # We've annotated this function before, so we must have already
+        # started coverage.py in this process.  Nothing to do.
+        return None
+
+    process_startup.done = True
+    cov = Coverage(config_file=cps, auto_data=True)
+    cov.start()
+    cov._warn_no_data = False
+    cov._warn_unimported_source = False
+
+    return cov
+
+#
+# eflag: FileType = Python2
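The start-up hook above guards against double execution with a flag stored as an attribute on the function itself. The pattern is easy to exercise outside the diff; the names below are illustrative, not part of coverage.py:

```python
def start_measurement():
    """Start something at most once per process (sketch of the
    function-attribute guard used by process_startup above)."""
    if hasattr(start_measurement, "done"):
        # Already started in this process; a second .pth hit is a no-op.
        return None
    start_measurement.done = True
    return "started"

first = start_measurement()   # performs the start
second = start_measurement()  # guarded: does nothing
```

Because the flag lives on the function object rather than in a module global, re-executing the same `.pth` line in one process cannot restart measurement.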
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/data.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,771 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Coverage data for coverage.py."""
+
+import glob
+import itertools
+import json
+import optparse
+import os
+import os.path
+import random
+import re
+import socket
+
+from coverage import env
+from coverage.backward import iitems, string_class
+from coverage.debug import _TEST_NAME_FILE
+from coverage.files import PathAliases
+from coverage.misc import CoverageException, file_be_gone, isolate_module
+
+os = isolate_module(os)
+
+
+class CoverageData(object):
+    """Manages collected coverage data, including file storage.
+
+    This class is the public supported API to the data coverage.py collects
+    during program execution.  It includes information about what code was
+    executed. It does not include information from the analysis phase, to
+    determine what lines could have been executed, or what lines were not
+    executed.
+
+    .. note::
+
+        The file format is not documented or guaranteed.  It will change in
+        the future, in possibly complicated ways.  Do not read coverage.py
+        data files directly.  Use this API to avoid disruption.
+
+    There are a number of kinds of data that can be collected:
+
+    * **lines**: the line numbers of source lines that were executed.
+      These are always available.
+
+    * **arcs**: pairs of source and destination line numbers for transitions
+      between source lines.  These are only available if branch coverage was
+      used.
+
+    * **file tracer names**: the module names of the file tracer plugins that
+      handled each file in the data.
+
+    * **run information**: information about the program execution.  This is
+      written during "coverage run", and then accumulated during "coverage
+      combine".
+
+    Lines, arcs, and file tracer names are stored for each source file. File
+    names in this API are case-sensitive, even on platforms with
+    case-insensitive file systems.
+
+    To read a coverage.py data file, use :meth:`read_file`, or
+    :meth:`read_fileobj` if you have an already-opened file.  You can then
+    access the line, arc, or file tracer data with :meth:`lines`, :meth:`arcs`,
+    or :meth:`file_tracer`.  Run information is available with
+    :meth:`run_infos`.
+
+    The :meth:`has_arcs` method indicates whether arc data is available.  You
+    can get a list of the files in the data with :meth:`measured_files`.
+    A summary of the line data is available from :meth:`line_counts`.  As with
+    most Python containers, you can determine if there is any data at all by
+    using this object as a boolean value.
+
+
+    Most data files will be created by coverage.py itself, but you can use
+    methods here to create data files if you like.  The :meth:`add_lines`,
+    :meth:`add_arcs`, and :meth:`add_file_tracers` methods add data, in ways
+    that are convenient for coverage.py.  The :meth:`add_run_info` method adds
+    key-value pairs to the run information.
+
+    To add a file without any measured data, use :meth:`touch_file`.
+
+    You write to a named file with :meth:`write_file`, or to an already opened
+    file with :meth:`write_fileobj`.
+
+    You can clear the data in memory with :meth:`erase`.  Two data collections
+    can be combined by using :meth:`update` on one :class:`CoverageData`,
+    passing it the other.
+
+    """
+
+    # The data file format is JSON, with these keys:
+    #
+    #     * lines: a dict mapping file names to lists of line numbers
+    #       executed::
+    #
+    #         { "file1": [17,23,45], "file2": [1,2,3], ... }
+    #
+    #     * arcs: a dict mapping file names to lists of line number pairs::
+    #
+    #         { "file1": [[17,23], [17,25], [25,26]], ... }
+    #
+    #     * file_tracers: a dict mapping file names to plugin names::
+    #
+    #         { "file1": "django.coverage", ... }
+    #
+    #     * runs: a list of dicts of information about the coverage.py runs
+    #       contributing to the data::
+    #
+    #         [ { "brief_sys": "CPython 2.7.10 Darwin" }, ... ]
+    #
+    # Only one of `lines` or `arcs` will be present: with branch coverage, data
+    # is stored as arcs. Without branch coverage, it is stored as lines.  The
+    # line data is easily recovered from the arcs: it is all the elements of
+    # the pairs that are greater than zero.
+
+    def __init__(self, debug=None):
+        """Create a CoverageData.
+
+        `debug` is a `DebugControl` object for writing debug messages.
+
+        """
+        self._debug = debug
+
+        # A map from canonical Python source file name to a dictionary in
+        # which there's an entry for each line number that has been
+        # executed:
+        #
+        #   { 'filename1.py': [12, 47, 1001], ... }
+        #
+        self._lines = None
+
+        # A map from canonical Python source file name to a dictionary with an
+        # entry for each pair of line numbers forming an arc:
+        #
+        #   { 'filename1.py': [(12,14), (47,48), ... ], ... }
+        #
+        self._arcs = None
+
+        # A map from canonical source file name to a plugin module name:
+        #
+        #   { 'filename1.py': 'django.coverage', ... }
+        #
+        self._file_tracers = {}
+
+        # A list of dicts of information about the coverage.py runs.
+        self._runs = []
+
+    def __repr__(self):
+        return "<{klass} lines={lines} arcs={arcs} tracers={tracers} runs={runs}>".format(
+            klass=self.__class__.__name__,
+            lines="None" if self._lines is None else "{{{0}}}".format(len(self._lines)),
+            arcs="None" if self._arcs is None else "{{{0}}}".format(len(self._arcs)),
+            tracers="{{{0}}}".format(len(self._file_tracers)),
+            runs="[{0}]".format(len(self._runs)),
+        )
+
+    ##
+    ## Reading data
+    ##
+
+    def has_arcs(self):
+        """Does this data have arcs?
+
+        Arc data is only available if branch coverage was used during
+        collection.
+
+        Returns a boolean.
+
+        """
+        return self._has_arcs()
+
+    def lines(self, filename):
+        """Get the list of lines executed for a file.
+
+        If the file was not measured, returns None.  A file might be measured,
+        and have no lines executed, in which case an empty list is returned.
+
+        If the file was executed, returns a list of integers, the line numbers
+        executed in the file. The list is in no particular order.
+
+        """
+        if self._arcs is not None:
+            arcs = self._arcs.get(filename)
+            if arcs is not None:
+                all_lines = itertools.chain.from_iterable(arcs)
+                return list(set(l for l in all_lines if l > 0))
+        elif self._lines is not None:
+            return self._lines.get(filename)
+        return None
+
+    def arcs(self, filename):
+        """Get the list of arcs executed for a file.
+
+        If the file was not measured, returns None.  A file might be measured,
+        and have no arcs executed, in which case an empty list is returned.
+
+        If the file was executed, returns a list of 2-tuples of integers. Each
+        pair is a starting line number and an ending line number for a
+        transition from one line to another. The list is in no particular
+        order.
+
+        Negative numbers have special meaning.  If the starting line number is
+        -N, it represents an entry to the code object that starts at line N.
+        If the ending line number is -N, it's an exit from the code object that
+        starts at line N.
+
+        """
+        if self._arcs is not None:
+            if filename in self._arcs:
+                return self._arcs[filename]
+        return None
+
+    def file_tracer(self, filename):
+        """Get the plugin name of the file tracer for a file.
+
+        Returns the name of the plugin that handles this file.  If the file was
+        measured, but didn't use a plugin, then "" is returned.  If the file
+        was not measured, then None is returned.
+
+        """
+        # Because the vast majority of files involve no plugin, we don't store
+        # them explicitly in self._file_tracers.  Check the measured data
+        # instead to see if it was a known file with no plugin.
+        if filename in (self._arcs or self._lines or {}):
+            return self._file_tracers.get(filename, "")
+        return None
+
+    def run_infos(self):
+        """Return the list of dicts of run information.
+
+        For data collected during a single run, this will be a one-element
+        list.  If data has been combined, there will be one element for each
+        original data file.
+
+        """
+        return self._runs
+
+    def measured_files(self):
+        """A list of all files that had been measured."""
+        return list(self._arcs or self._lines or {})
+
+    def line_counts(self, fullpath=False):
+        """Return a dict summarizing the line coverage data.
+
+        Keys are based on the file names, and values are the number of executed
+        lines.  If `fullpath` is true, then the keys are the full pathnames of
+        the files, otherwise they are the basenames of the files.
+
+        Returns a dict mapping file names to counts of lines.
+
+        """
+        summ = {}
+        if fullpath:
+            filename_fn = lambda f: f
+        else:
+            filename_fn = os.path.basename
+        for filename in self.measured_files():
+            summ[filename_fn(filename)] = len(self.lines(filename))
+        return summ
+
+    def __nonzero__(self):
+        return bool(self._lines or self._arcs)
+
+    __bool__ = __nonzero__
+
+    def read_fileobj(self, file_obj):
+        """Read the coverage data from the given file object.
+
+        Should only be used on an empty CoverageData object.
+
+        """
+        data = self._read_raw_data(file_obj)
+
+        self._lines = self._arcs = None
+
+        if 'lines' in data:
+            self._lines = data['lines']
+        if 'arcs' in data:
+            self._arcs = dict(
+                (fname, [tuple(pair) for pair in arcs])
+                for fname, arcs in iitems(data['arcs'])
+            )
+        self._file_tracers = data.get('file_tracers', {})
+        self._runs = data.get('runs', [])
+
+        self._validate()
+
+    def read_file(self, filename):
+        """Read the coverage data from `filename` into this object."""
+        if self._debug and self._debug.should('dataio'):
+            self._debug.write("Reading data from %r" % (filename,))
+        try:
+            with self._open_for_reading(filename) as f:
+                self.read_fileobj(f)
+        except Exception as exc:
+            raise CoverageException(
+                "Couldn't read data from '%s': %s: %s" % (
+                    filename, exc.__class__.__name__, exc,
+                )
+            )
+
+    _GO_AWAY = "!coverage.py: This is a private format, don't read it directly!"
+
+    @classmethod
+    def _open_for_reading(cls, filename):
+        """Open a file appropriately for reading data."""
+        return open(filename, "r")
+
+    @classmethod
+    def _read_raw_data(cls, file_obj):
+        """Read the raw data from a file object."""
+        go_away = file_obj.read(len(cls._GO_AWAY))
+        if go_away != cls._GO_AWAY:
+            raise CoverageException("Doesn't seem to be a coverage.py data file")
+        return json.load(file_obj)
+
+    @classmethod
+    def _read_raw_data_file(cls, filename):
+        """Read the raw data from a file, for debugging."""
+        with cls._open_for_reading(filename) as f:
+            return cls._read_raw_data(f)
+
+    ##
+    ## Writing data
+    ##
+
+    def add_lines(self, line_data):
+        """Add measured line data.
+
+        `line_data` is a dictionary mapping file names to dictionaries::
+
+            { filename: { lineno: None, ... }, ...}
+
+        """
+        if self._debug and self._debug.should('dataop'):
+            self._debug.write("Adding lines: %d files, %d lines total" % (
+                len(line_data), sum(len(lines) for lines in line_data.values())
+            ))
+        if self._has_arcs():
+            raise CoverageException("Can't add lines to existing arc data")
+
+        if self._lines is None:
+            self._lines = {}
+        for filename, linenos in iitems(line_data):
+            if filename in self._lines:
+                new_linenos = set(self._lines[filename])
+                new_linenos.update(linenos)
+                linenos = new_linenos
+            self._lines[filename] = list(linenos)
+
+        self._validate()
+
+    def add_arcs(self, arc_data):
+        """Add measured arc data.
+
+        `arc_data` is a dictionary mapping file names to dictionaries::
+
+            { filename: { (l1,l2): None, ... }, ...}
+
+        """
+        if self._debug and self._debug.should('dataop'):
+            self._debug.write("Adding arcs: %d files, %d arcs total" % (
+                len(arc_data), sum(len(arcs) for arcs in arc_data.values())
+            ))
+        if self._has_lines():
+            raise CoverageException("Can't add arcs to existing line data")
+
+        if self._arcs is None:
+            self._arcs = {}
+        for filename, arcs in iitems(arc_data):
+            if filename in self._arcs:
+                new_arcs = set(self._arcs[filename])
+                new_arcs.update(arcs)
+                arcs = new_arcs
+            self._arcs[filename] = list(arcs)
+
+        self._validate()
+
+    def add_file_tracers(self, file_tracers):
+        """Add per-file plugin information.
+
+        `file_tracers` is { filename: plugin_name, ... }
+
+        """
+        if self._debug and self._debug.should('dataop'):
+            self._debug.write("Adding file tracers: %d files" % (len(file_tracers),))
+
+        existing_files = self._arcs or self._lines or {}
+        for filename, plugin_name in iitems(file_tracers):
+            if filename not in existing_files:
+                raise CoverageException(
+                    "Can't add file tracer data for unmeasured file '%s'" % (filename,)
+                )
+            existing_plugin = self._file_tracers.get(filename)
+            if existing_plugin is not None and plugin_name != existing_plugin:
+                raise CoverageException(
+                    "Conflicting file tracer name for '%s': %r vs %r" % (
+                        filename, existing_plugin, plugin_name,
+                    )
+                )
+            self._file_tracers[filename] = plugin_name
+
+        self._validate()
+
+    def add_run_info(self, **kwargs):
+        """Add information about the run.
+
+        Keywords are arbitrary, and are stored in the run dictionary. Values
+        must be JSON serializable.  You may use this function more than once,
+        but repeated keywords overwrite each other.
+
+        """
+        if self._debug and self._debug.should('dataop'):
+            self._debug.write("Adding run info: %r" % (kwargs,))
+        if not self._runs:
+            self._runs = [{}]
+        self._runs[0].update(kwargs)
+        self._validate()
+
+    def touch_file(self, filename):
+        """Ensure that `filename` appears in the data, empty if needed."""
+        if self._debug and self._debug.should('dataop'):
+            self._debug.write("Touching %r" % (filename,))
+        if not self._has_arcs() and not self._has_lines():
+            raise CoverageException("Can't touch files in an empty CoverageData")
+
+        if self._has_arcs():
+            where = self._arcs
+        else:
+            where = self._lines
+        where.setdefault(filename, [])
+
+        self._validate()
+
+    def write_fileobj(self, file_obj):
+        """Write the coverage data to `file_obj`."""
+
+        # Create the file data.
+        file_data = {}
+
+        if self._has_arcs():
+            file_data['arcs'] = self._arcs
+
+        if self._has_lines():
+            file_data['lines'] = self._lines
+
+        if self._file_tracers:
+            file_data['file_tracers'] = self._file_tracers
+
+        if self._runs:
+            file_data['runs'] = self._runs
+
+        # Write the data to the file.
+        file_obj.write(self._GO_AWAY)
+        json.dump(file_data, file_obj)
+
+    def write_file(self, filename):
+        """Write the coverage data to `filename`."""
+        if self._debug and self._debug.should('dataio'):
+            self._debug.write("Writing data to %r" % (filename,))
+        with open(filename, 'w') as fdata:
+            self.write_fileobj(fdata)
+
+    def erase(self):
+        """Erase the data in this object."""
+        self._lines = None
+        self._arcs = None
+        self._file_tracers = {}
+        self._runs = []
+        self._validate()
+
+    def update(self, other_data, aliases=None):
+        """Update this data with data from another `CoverageData`.
+
+        If `aliases` is provided, it's a `PathAliases` object that is used to
+        re-map paths to match the local machine's.
+
+        """
+        if self._has_lines() and other_data._has_arcs():
+            raise CoverageException("Can't combine arc data with line data")
+        if self._has_arcs() and other_data._has_lines():
+            raise CoverageException("Can't combine line data with arc data")
+
+        aliases = aliases or PathAliases()
+
+        # _file_tracers: only have a string, so they have to agree.
+        # Have to do these first, so that our examination of self._arcs and
+        # self._lines won't be confused by data updated from other_data.
+        for filename in other_data.measured_files():
+            other_plugin = other_data.file_tracer(filename)
+            filename = aliases.map(filename)
+            this_plugin = self.file_tracer(filename)
+            if this_plugin is None:
+                if other_plugin:
+                    self._file_tracers[filename] = other_plugin
+            elif this_plugin != other_plugin:
+                raise CoverageException(
+                    "Conflicting file tracer name for '%s': %r vs %r" % (
+                        filename, this_plugin, other_plugin,
+                    )
+                )
+
+        # _runs: add the new runs to these runs.
+        self._runs.extend(other_data._runs)
+
+        # _lines: merge dicts.
+        if other_data._has_lines():
+            if self._lines is None:
+                self._lines = {}
+            for filename, file_lines in iitems(other_data._lines):
+                filename = aliases.map(filename)
+                if filename in self._lines:
+                    lines = set(self._lines[filename])
+                    lines.update(file_lines)
+                    file_lines = list(lines)
+                self._lines[filename] = file_lines
+
+        # _arcs: merge dicts.
+        if other_data._has_arcs():
+            if self._arcs is None:
+                self._arcs = {}
+            for filename, file_arcs in iitems(other_data._arcs):
+                filename = aliases.map(filename)
+                if filename in self._arcs:
+                    arcs = set(self._arcs[filename])
+                    arcs.update(file_arcs)
+                    file_arcs = list(arcs)
+                self._arcs[filename] = file_arcs
+
+        self._validate()
+
+    ##
+    ## Miscellaneous
+    ##
+
+    def _validate(self):
+        """If we are in paranoid mode, validate that everything is right."""
+        if env.TESTING:
+            self._validate_invariants()
+
+    def _validate_invariants(self):
+        """Validate internal invariants."""
+        # Only one of _lines or _arcs should exist.
+        assert not (self._has_lines() and self._has_arcs()), (
+            "Shouldn't have both _lines and _arcs"
+        )
+
+        # _lines should be a dict of lists of ints.
+        if self._has_lines():
+            for fname, lines in iitems(self._lines):
+                assert isinstance(fname, string_class), "Key in _lines shouldn't be %r" % (fname,)
+                assert all(isinstance(x, int) for x in lines), (
+                    "_lines[%r] shouldn't be %r" % (fname, lines)
+                )
+
+        # _arcs should be a dict of lists of pairs of ints.
+        if self._has_arcs():
+            for fname, arcs in iitems(self._arcs):
+                assert isinstance(fname, string_class), "Key in _arcs shouldn't be %r" % (fname,)
+                assert all(isinstance(x, int) and isinstance(y, int) for x, y in arcs), (
+                    "_arcs[%r] shouldn't be %r" % (fname, arcs)
+                )
+
+        # _file_tracers should have only non-empty strings as values.
+        for fname, plugin in iitems(self._file_tracers):
+            assert isinstance(fname, string_class), (
+                "Key in _file_tracers shouldn't be %r" % (fname,)
+            )
+            assert plugin and isinstance(plugin, string_class), (
+                "_file_tracers[%r] shoudn't be %r" % (fname, plugin)
+            )
+
+        # _runs should be a list of dicts.
+        for val in self._runs:
+            assert isinstance(val, dict)
+            for key in val:
+                assert isinstance(key, string_class), "Key in _runs shouldn't be %r" % (key,)
+
+    def add_to_hash(self, filename, hasher):
+        """Contribute `filename`'s data to the `hasher`.
+
+        `hasher` is a `coverage.misc.Hasher` instance to be updated with
+        the file's data.  It should only get the results data, not the run
+        data.
+
+        """
+        if self._has_arcs():
+            hasher.update(sorted(self.arcs(filename) or []))
+        else:
+            hasher.update(sorted(self.lines(filename) or []))
+        hasher.update(self.file_tracer(filename))
+
+    ##
+    ## Internal
+    ##
+
+    def _has_lines(self):
+        """Do we have data in self._lines?"""
+        return self._lines is not None
+
+    def _has_arcs(self):
+        """Do we have data in self._arcs?"""
+        return self._arcs is not None
+
+
+class CoverageDataFiles(object):
+    """Manage the use of coverage data files."""
+
+    def __init__(self, basename=None, warn=None):
+        """Create a CoverageDataFiles to manage data files.
+
+        `warn` is the warning function to use.
+
+        `basename` is the name of the file to use for storing data.
+
+        """
+        self.warn = warn
+        # Construct the file name that will be used for data storage.
+        self.filename = os.path.abspath(basename or ".coverage")
+
+    def erase(self, parallel=False):
+        """Erase the data from the file storage.
+
+        If `parallel` is true, then also deletes data files created from the
+        basename by parallel-mode.
+
+        """
+        file_be_gone(self.filename)
+        if parallel:
+            data_dir, local = os.path.split(self.filename)
+            localdot = local + '.*'
+            pattern = os.path.join(os.path.abspath(data_dir), localdot)
+            for filename in glob.glob(pattern):
+                file_be_gone(filename)
+
+    def read(self, data):
+        """Read the coverage data."""
+        if os.path.exists(self.filename):
+            data.read_file(self.filename)
+
+    def write(self, data, suffix=None):
+        """Write the collected coverage data to a file.
+
+        `suffix` is a suffix to append to the base file name. This can be used
+        for multiple or parallel execution, so that many coverage data files
+        can exist simultaneously.  A dot will be used to join the base name and
+        the suffix.
+
+        """
+        filename = self.filename
+        if suffix is True:
+            # If data_suffix was a simple true value, then make a suffix with
+            # plenty of distinguishing information.  We do this here in
+            # `save()` at the last minute so that the pid will be correct even
+            # if the process forks.
+            extra = ""
+            if _TEST_NAME_FILE:                             # pragma: debugging
+                with open(_TEST_NAME_FILE) as f:
+                    test_name = f.read()
+                extra = "." + test_name
+            suffix = "%s%s.%s.%06d" % (
+                socket.gethostname(), extra, os.getpid(),
+                random.randint(0, 999999)
+            )
+
+        if suffix:
+            filename += "." + suffix
+        data.write_file(filename)
+
+    def combine_parallel_data(self, data, aliases=None, data_paths=None):
+        """Combine a number of data files together.
+
+        Treat `self.filename` as a file prefix, and combine the data from all
+        of the data files starting with that prefix plus a dot.
+
+        If `aliases` is provided, it's a `PathAliases` object that is used to
+        re-map paths to match the local machine's.
+
+        If `data_paths` is provided, it is a list of directories or files to
+        combine.  Directories are searched for files that start with
+        `self.filename` plus dot as a prefix, and those files are combined.
+
+        If `data_paths` is not provided, then the directory portion of
+        `self.filename` is used as the directory to search for data files.
+
+        Every data file found and combined is then deleted from disk. If a file
+        cannot be read, a warning will be issued, and the file will not be
+        deleted.
+
+        """
+        # Because of the os.path.abspath in the constructor, data_dir will
+        # never be an empty string.
+        data_dir, local = os.path.split(self.filename)
+        localdot = local + '.*'
+
+        data_paths = data_paths or [data_dir]
+        files_to_combine = []
+        for p in data_paths:
+            if os.path.isfile(p):
+                files_to_combine.append(os.path.abspath(p))
+            elif os.path.isdir(p):
+                pattern = os.path.join(os.path.abspath(p), localdot)
+                files_to_combine.extend(glob.glob(pattern))
+            else:
+                raise CoverageException("Couldn't combine from non-existent path '%s'" % (p,))
+
+        for f in files_to_combine:
+            new_data = CoverageData()
+            try:
+                new_data.read_file(f)
+            except CoverageException as exc:
+                if self.warn:
+                    # The CoverageException has the file name in it, so just
+                    # use the message as the warning.
+                    self.warn(str(exc))
+            else:
+                data.update(new_data, aliases=aliases)
+                file_be_gone(f)
+
+
+def canonicalize_json_data(data):
+    """Canonicalize our JSON data so it can be compared."""
+    for fname, lines in iitems(data.get('lines', {})):
+        data['lines'][fname] = sorted(lines)
+    for fname, arcs in iitems(data.get('arcs', {})):
+        data['arcs'][fname] = sorted(arcs)
+
+
+def pretty_data(data):
+    """Format data as JSON, but as nicely as possible.
+
+    Returns a string.
+
+    """
+    # Start with a basic JSON dump.
+    out = json.dumps(data, indent=4, sort_keys=True)
+    # But pairs of numbers shouldn't be split across lines...
+    out = re.sub(r"\[\s+(-?\d+),\s+(-?\d+)\s+]", r"[\1, \2]", out)
+    # Trailing spaces mess with tests, get rid of them.
+    out = re.sub(r"(?m)\s+$", "", out)
+    return out
+
+
+def debug_main(args):
+    """Dump the raw data from data files.
+
+    Run this as::
+
+        $ python -m coverage.data [FILE]
+
+    """
+    parser = optparse.OptionParser()
+    parser.add_option(
+        "-c", "--canonical", action="store_true",
+        help="Sort data into a canonical order",
+    )
+    options, args = parser.parse_args(args)
+
+    for filename in (args or [".coverage"]):
+        print("--- {0} ------------------------------".format(filename))
+        data = CoverageData._read_raw_data_file(filename)
+        if options.canonical:
+            canonicalize_json_data(data)
+        print(pretty_data(data))
+
+
+if __name__ == '__main__':
+    import sys
+    debug_main(sys.argv[1:])
+
+#
+# eflag: FileType = Python2
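As the format comment in `data.py` notes, only one of `lines` or `arcs` is stored, and line data is recovered from arcs by keeping the positive elements of the pairs — which is what `CoverageData.lines()` does. A minimal standalone sketch (the sample arcs are invented):

```python
import itertools

# Invented sample: entry arc (-1 -> 17), a transition, and an exit (23 -> -1).
# Negative numbers mark entries to and exits from a code object.
arcs = [(-1, 17), (17, 23), (23, -1)]

# Keep every positive element of the pairs, deduplicated, as in
# CoverageData.lines().
executed = sorted(set(l for l in itertools.chain.from_iterable(arcs) if l > 0))
# executed is now [17, 23]
```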
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/debug.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,109 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Control of and utilities for debugging."""
+
+import inspect
+import os
+import sys
+
+from coverage.misc import isolate_module
+
+os = isolate_module(os)
+
+
+# When debugging, it can be helpful to force some options, especially when
+# debugging the configuration mechanisms you usually use to control debugging!
+# This is a list of forced debugging options.
+FORCED_DEBUG = []
+
+# A hack for debugging testing in sub-processes.
+_TEST_NAME_FILE = ""    # "/tmp/covtest.txt"
+
+
+class DebugControl(object):
+    """Control and output for debugging."""
+
+    def __init__(self, options, output):
+        """Configure the options and output file for debugging."""
+        self.options = options
+        self.output = output
+
+    def __repr__(self):
+        return "<DebugControl options=%r output=%r>" % (self.options, self.output)
+
+    def should(self, option):
+        """Decide whether to output debug information in category `option`."""
+        return (option in self.options or option in FORCED_DEBUG)
+
+    def write(self, msg):
+        """Write a line of debug output."""
+        if self.should('pid'):
+            msg = "pid %5d: %s" % (os.getpid(), msg)
+        self.output.write(msg+"\n")
+        if self.should('callers'):
+            dump_stack_frames(out=self.output)
+        self.output.flush()
+
+    def write_formatted_info(self, header, info):
+        """Write a sequence of (label,data) pairs nicely."""
+        self.write(info_header(header))
+        for line in info_formatter(info):
+            self.write(" %s" % line)
+
+
+def info_header(label):
+    """Make a nice header string."""
+    return "--{0:-<60s}".format(" "+label+" ")
+
+
+def info_formatter(info):
+    """Produce a sequence of formatted lines from info.
+
+    `info` is a sequence of pairs (label, data).  The produced lines are
+    nicely formatted, ready to print.
+
+    """
+    info = list(info)
+    if not info:
+        return
+    label_len = max(len(l) for l, _d in info)
+    for label, data in info:
+        if data == []:
+            data = "-none-"
+        if isinstance(data, (list, set, tuple)):
+            prefix = "%*s:" % (label_len, label)
+            for e in data:
+                yield "%*s %s" % (label_len+1, prefix, e)
+                prefix = ""
+        else:
+            yield "%*s: %s" % (label_len, label, data)
+
+
+def short_stack(limit=None):                                # pragma: debugging
+    """Return a string summarizing the call stack.
+
+    The string is multi-line, with one line per stack frame. Each line shows
+    the function name, the file name, and the line number:
+
+        ...
+        start_import_stop : /Users/ned/coverage/trunk/tests/coveragetest.py @95
+        import_local_file : /Users/ned/coverage/trunk/tests/coveragetest.py @81
+        import_local_file : /Users/ned/coverage/trunk/coverage/backward.py @159
+        ...
+
+    `limit` is the number of frames to include, defaulting to all of them.
+
+    """
+    stack = inspect.stack()[limit:0:-1]
+    return "\n".join("%30s : %s @%d" % (t[3], t[1], t[2]) for t in stack)
+
+
+def dump_stack_frames(limit=None, out=None):                # pragma: debugging
+    """Print a summary of the stack to stdout, or some place else."""
+    out = out or sys.stdout
+    out.write(short_stack(limit=limit))
+    out.write("\n")
+
+#
+# eflag: FileType = Python2
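For readers skimming the diff, the two formatting helpers in `coverage/debug.py` above (`info_header` and `info_formatter`) can be exercised on their own. The following is a standalone re-creation for illustration only, not part of the changeset, showing the output shape the helpers produce:

```python
def info_header(label):
    """Make a header string padded with dashes to a fixed width."""
    return "--{0:-<60s}".format(" " + label + " ")


def info_formatter(info):
    """Yield aligned 'label: data' lines; sequences get one line per item."""
    info = list(info)
    if not info:
        return
    label_len = max(len(l) for l, _d in info)
    for label, data in info:
        if data == []:
            data = "-none-"
        if isinstance(data, (list, set, tuple)):
            # First item carries the label; later items hang-indent under it.
            prefix = "%*s:" % (label_len, label)
            for e in data:
                yield "%*s %s" % (label_len + 1, prefix, e)
                prefix = ""
        else:
            yield "%*s: %s" % (label_len, label, data)


if __name__ == "__main__":
    print(info_header("versions"))
    for line in info_formatter([("coverage", "4.1"), ("dirs", ["/a", "/b"])]):
        print(" %s" % line)
```

Note how sequence values reuse the label only on their first line, so multi-item data reads as one hanging-indented block under its label, matching what `DebugControl.write_formatted_info` emits.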
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/doc/AUTHORS.txt	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,73 @@
+Coverage.py was originally written by Gareth Rees, and since 2004 has been
+extended and maintained by Ned Batchelder.
+
+Other contributions have been made by:
+
+Adi Roiban
+Alex Gaynor
+Alexander Todorov
+Anthony Sottile
+Arcadiy Ivanov
+Ben Finney
+Bill Hart
+Brandon Rhodes
+Brett Cannon
+Buck Evan
+Carl Gieringer
+Catherine Proulx
+Chris Adams
+Chris Rose
+Christian Heimes
+Christine Lytwynec
+Christoph Zwerschke
+Conrad Ho
+Danek Duvall
+Danny Allen
+David Christian
+David Stanek
+Detlev Offenbach
+Devin Jeanpierre
+Dmitry Shishov
+Dmitry Trofimov
+Eduardo Schettino
+Edward Loper
+Geoff Bache
+George Paci
+George Song
+Greg Rogers
+Guillaume Chazarain
+Ilia Meerovich
+Imri Goldberg
+Ionel Cristian Mărieș
+JT Olds
+Jessamyn Smith
+Jon Chappell
+Joseph Tate
+Julian Berman
+Krystian Kichewko
+Leonardo Pistone
+Lex Berezhny
+Marc Abramowitz
+Marcus Cobden
+Mark van der Wal
+Martin Fuzzey
+Matthew Desmarais
+Max Linke
+Mickie Betz
+Noel O'Boyle
+Pablo Carballo
+Patrick Mezard
+Peter Portante
+Rodrigue Cloutier
+Roger Hu
+Ross Lawley
+Sandra Martocchia
+Sigve Tjora
+Stan Hu
+Stefan Behnel
+Steve Leonard
+Steve Peak
+Ted Wexler
+Titus Brown
+Yury Selivanov
+Zooko Wilcox-O'Hearn
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/doc/CHANGES.rst	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,1654 @@
+.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+.. For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+==============================
+Change history for Coverage.py
+==============================
+
+
+Version 4.1 --- 2016-05-21
+--------------------------
+
+- The internal attribute `Reporter.file_reporters` was removed in 4.1b3.  It
+  should have come as no surprise that there were third-party tools out there
+  using that attribute.  It has been restored, but with a deprecation warning.
+
+
+Version 4.1b3 --- 2016-05-10
+----------------------------
+
+- When running your program, execution can jump from an ``except X:`` line to
+  some other line when an exception other than ``X`` happens.  This jump is no
+  longer considered a branch when measuring branch coverage.
+
+- When measuring branch coverage, ``yield`` statements that were never resumed
+  were incorrectly marked as missing, as reported in `issue 440`_.  This is now
+  fixed.
+
+- During branch coverage of single-line callables like lambdas and generator
+  expressions, coverage.py can now distinguish between them never being called,
+  or being called but not completed.  Fixes `issue 90`_, `issue 460`_ and
+  `issue 475`_.
+
+- The HTML report now has a map of the file along the rightmost edge of the
+  page, giving an overview of where the missed lines are.  Thanks, Dmitry
+  Shishov.
+
+- The HTML report now uses different monospaced fonts, favoring Consolas over
+  Courier.  Along the way, `issue 472`_ about not properly handling one-space
+  indents was fixed.  The index page also has slightly different styling, to
+  try to make the clickable detail pages more apparent.
+
+- Missing branches reported with ``coverage report -m`` will now say ``->exit``
+  for missed branches to the exit of a function, rather than a negative number.
+  Fixes `issue 469`_.
+
+- ``coverage --help`` and ``coverage --version`` now mention which tracer is
+  installed, to help diagnose problems. The docs mention which features need
+  the C extension. (`issue 479`_)
+
+- Officially support PyPy 5.1, which required no changes, just updates to the
+  docs.
+
+- The `Coverage.report` function had two parameters with non-None defaults,
+  which have been changed.  `show_missing` used to default to True, but now
+  defaults to None.  If you had been calling `Coverage.report` without
+  specifying `show_missing`, you'll need to explicitly set it to True to keep
+  the same behavior.  `skip_covered` used to default to False. It is now None,
+  which doesn't change the behavior.  This fixes `issue 485`_.
+
+- It's never been possible to pass a namespace module to one of the analysis
+  functions, but now at least we raise a more specific error message, rather
+  than getting confused. (`issue 456`_)
+
+- The `coverage.process_startup` function now returns the `Coverage` instance
+  it creates, as suggested in `issue 481`_.
+
+- Make a small tweak to how we compare threads, to avoid buggy custom
+  comparison code in thread classes. (`issue 245`_)
+
+.. _issue 90: https://bitbucket.org/ned/coveragepy/issues/90/lambda-expression-confuses-branch
+.. _issue 245: https://bitbucket.org/ned/coveragepy/issues/245/change-solution-for-issue-164
+.. _issue 440: https://bitbucket.org/ned/coveragepy/issues/440/yielded-twisted-failure-marked-as-missed
+.. _issue 456: https://bitbucket.org/ned/coveragepy/issues/456/coverage-breaks-with-implicit-namespaces
+.. _issue 460: https://bitbucket.org/ned/coveragepy/issues/460/confusing-html-report-for-certain-partial
+.. _issue 469: https://bitbucket.org/ned/coveragepy/issues/469/strange-1-line-number-in-branch-coverage
+.. _issue 472: https://bitbucket.org/ned/coveragepy/issues/472/html-report-indents-incorrectly-for-one
+.. _issue 475: https://bitbucket.org/ned/coveragepy/issues/475/generator-expression-is-marked-as-not
+.. _issue 479: https://bitbucket.org/ned/coveragepy/issues/479/clarify-the-need-for-the-c-extension
+.. _issue 481: https://bitbucket.org/ned/coveragepy/issues/481/asyncioprocesspoolexecutor-tracing-not
+.. _issue 485: https://bitbucket.org/ned/coveragepy/issues/485/coveragereport-ignores-show_missing-and
+
+
+Version 4.1b2 --- 2016-01-23
+----------------------------
+
+- Problems with the new branch measurement in 4.1 beta 1 were fixed:
+
+  - Class docstrings were considered executable.  Now they no longer are.
+
+  - ``yield from`` and ``await`` were considered returns from functions, since
+    they could transfer control to the caller.  This produced unhelpful "missing
+    branch" reports in a number of circumstances.  Now they no longer are
+    considered returns.
+
+  - In unusual situations, a missing branch to a negative number was reported.
+    This has been fixed, closing `issue 466`_.
+
+- The XML report now produces correct package names for modules found in
+  directories specified with ``source=``.  Fixes `issue 465`_.
+
+- ``coverage report`` won't produce trailing whitespace.
+
+.. _issue 465: https://bitbucket.org/ned/coveragepy/issues/465/coveragexml-produces-package-names-with-an
+.. _issue 466: https://bitbucket.org/ned/coveragepy/issues/466/impossible-missed-branch-to-a-negative
+
+
+Version 4.1b1 --- 2016-01-10
+----------------------------
+
+- Branch analysis has been rewritten: it used to be based on bytecode, but now
+  uses AST analysis.  This has changed a number of things:
+
+  - More code paths are now considered runnable, especially in
+    ``try``/``except`` structures.  This may mean that coverage.py will
+    identify more code paths as uncovered.  This could either raise or lower
+    your overall coverage number.
+
+  - Python 3.5's ``async`` and ``await`` keywords are properly supported,
+    fixing `issue 434`_.
+
+  - Some long-standing branch coverage bugs were fixed:
+
+    - `issue 129`_: functions with only a docstring for a body would
+      incorrectly report a missing branch on the ``def`` line.
+
+    - `issue 212`_: code in an ``except`` block could be incorrectly marked as
+      a missing branch.
+
+    - `issue 146`_: context managers (``with`` statements) in a loop or ``try``
+      block could confuse the branch measurement, reporting incorrect partial
+      branches.
+
+    - `issue 422`_: in Python 3.5, an actual partial branch could be marked as
+      complete.
+
+- Pragmas to disable coverage measurement can now be used on decorator lines,
+  and they will apply to the entire function or class being decorated.  This
+  implements the feature requested in `issue 131`_.
+
+- Multiprocessing support is now available on Windows.  Thanks, Rodrigue
+  Cloutier.
+
+- Files with two encoding declarations are properly supported, fixing
+  `issue 453`_. Thanks, Max Linke.
+
+- Non-ascii characters in regexes in the configuration file worked in 3.7, but
+  stopped working in 4.0.  Now they work again, closing `issue 455`_.
+
+- Form-feed characters would prevent accurate determination of the beginning of
+  statements in the rest of the file.  This is now fixed, closing `issue 461`_.
+
+.. _issue 129: https://bitbucket.org/ned/coveragepy/issues/129/misleading-branch-coverage-of-empty
+.. _issue 131: https://bitbucket.org/ned/coveragepy/issues/131/pragma-on-a-decorator-line-should-affect
+.. _issue 146: https://bitbucket.org/ned/coveragepy/issues/146/context-managers-confuse-branch-coverage
+.. _issue 212: https://bitbucket.org/ned/coveragepy/issues/212/coverage-erroneously-reports-partial
+.. _issue 422: https://bitbucket.org/ned/coveragepy/issues/422/python35-partial-branch-marked-as-fully
+.. _issue 434: https://bitbucket.org/ned/coveragepy/issues/434/indexerror-in-python-35
+.. _issue 453: https://bitbucket.org/ned/coveragepy/issues/453/source-code-encoding-can-only-be-specified
+.. _issue 455: https://bitbucket.org/ned/coveragepy/issues/455/unusual-exclusions-stopped-working-in
+.. _issue 461: https://bitbucket.org/ned/coveragepy/issues/461/multiline-asserts-need-too-many-pragma
+
+
+Version 4.0.3 --- 2015-11-24
+----------------------------
+
+- Fixed a mysterious problem that manifested in different ways: sometimes
+  hanging the process (`issue 420`_), sometimes making database connections
+  fail (`issue 445`_).
+
+- The XML report now has correct ``<source>`` elements when using a
+  ``--source=`` option somewhere besides the current directory.  This fixes
+  `issue 439`_. Thanks, Arcadiy Ivanov.
+
+- Fixed an unusual edge case of detecting source encodings, described in
+  `issue 443`_.
+
+- Help messages that mention the command to use now properly use the actual
+  command name, which might be different than "coverage".  Thanks to Ben
+  Finney, this closes `issue 438`_.
+
+.. _issue 420: https://bitbucket.org/ned/coveragepy/issues/420/coverage-40-hangs-indefinitely-on-python27
+.. _issue 438: https://bitbucket.org/ned/coveragepy/issues/438/parameterise-coverage-command-name
+.. _issue 439: https://bitbucket.org/ned/coveragepy/issues/439/incorrect-cobertura-file-sources-generated
+.. _issue 443: https://bitbucket.org/ned/coveragepy/issues/443/coverage-gets-confused-when-encoding
+.. _issue 445: https://bitbucket.org/ned/coveragepy/issues/445/django-app-cannot-connect-to-cassandra
+
+
+Version 4.0.2 --- 2015-11-04
+----------------------------
+
+- More work on supporting unusually encoded source. Fixed `issue 431`_.
+
+- Files or directories with non-ASCII characters are now handled properly,
+  fixing `issue 432`_.
+
+- Setting a trace function with sys.settrace was broken by a change in 4.0.1,
+  as reported in `issue 436`_.  This is now fixed.
+
+- Officially support PyPy 4.0, which required no changes, just updates to the
+  docs.
+
+.. _issue 431: https://bitbucket.org/ned/coveragepy/issues/431/couldnt-parse-python-file-with-cp1252
+.. _issue 432: https://bitbucket.org/ned/coveragepy/issues/432/path-with-unicode-characters-various
+.. _issue 436: https://bitbucket.org/ned/coveragepy/issues/436/disabled-coverage-ctracer-may-rise-from
+
+
+Version 4.0.1 --- 2015-10-13
+----------------------------
+
+- When combining data files, unreadable files will now generate a warning
+  instead of failing the command.  This is more in line with the older
+  coverage.py v3.7.1 behavior, which silently ignored unreadable files.
+  Prompted by `issue 418`_.
+
+- The --skip-covered option would skip reporting on 100% covered files, but
+  also skipped them when calculating total coverage.  This was wrong: it should
+  only remove lines from the report, not change the final answer.  This is now
+  fixed, closing `issue 423`_.
+
+- In 4.0, the data file recorded a summary of the system on which it was run.
+  Combined data files would keep all of those summaries.  This could lead to
+  enormous data files consisting of mostly repetitive useless information. That
+  summary is now gone, fixing `issue 415`_.  If you want summary information,
+  get in touch, and we'll figure out a better way to do it.
+
+- Test suites that mocked os.path.exists would experience strange failures, due
+  to coverage.py using their mock inadvertently.  This is now fixed, closing
+  `issue 416`_.
+
+- Importing a ``__init__`` module explicitly would lead to an error:
+  ``AttributeError: 'module' object has no attribute '__path__'``, as reported
+  in `issue 410`_.  This is now fixed.
+
+- Code that uses ``sys.settrace(sys.gettrace())`` used to incur a more than 2x
+  speed penalty.  Now there's no penalty at all. Fixes `issue 397`_.
+
+- Pyexpat C code will no longer be recorded as a source file, fixing
+  `issue 419`_.
+
+- The source kit now contains all of the files needed to have a complete source
+  tree, re-fixing `issue 137`_ and closing `issue 281`_.
+
+.. _issue 281: https://bitbucket.org/ned/coveragepy/issues/281/supply-scripts-for-testing-in-the
+.. _issue 397: https://bitbucket.org/ned/coveragepy/issues/397/stopping-and-resuming-coverage-with
+.. _issue 410: https://bitbucket.org/ned/coveragepy/issues/410/attributeerror-module-object-has-no
+.. _issue 415: https://bitbucket.org/ned/coveragepy/issues/415/repeated-coveragedataupdates-cause
+.. _issue 416: https://bitbucket.org/ned/coveragepy/issues/416/mocking-ospathexists-causes-failures
+.. _issue 418: https://bitbucket.org/ned/coveragepy/issues/418/json-parse-error
+.. _issue 419: https://bitbucket.org/ned/coveragepy/issues/419/nosource-no-source-for-code-path-to-c
+.. _issue 423: https://bitbucket.org/ned/coveragepy/issues/423/skip_covered-changes-reported-total
+
+
+Version 4.0 --- 2015-09-20
+--------------------------
+
+No changes from 4.0b3
+
+
+Version 4.0b3 --- 2015-09-07
+----------------------------
+
+- Reporting on an unmeasured file would fail with a traceback.  This is now
+  fixed, closing `issue 403`_.
+
+- The Jenkins ShiningPanda plugin looks for an obsolete file name to find the
+  HTML reports to publish, so it was failing under coverage.py 4.0.  Now we
+  create that file if we are running under Jenkins, to keep things working
+  smoothly. `issue 404`_.
+
+- Kits used to include tests and docs, but didn't install them anywhere, or
+  provide all of the supporting tools to make them useful.  Kits no longer
+  include tests and docs.  If you were using them from the older packages, get
+  in touch and help me understand how.
+
+.. _issue 403: https://bitbucket.org/ned/coveragepy/issues/403/hasherupdate-fails-with-typeerror-nonetype
+.. _issue 404: https://bitbucket.org/ned/coveragepy/issues/404/shiningpanda-jenkins-plugin-cant-find-html
+
+
+
+Version 4.0b2 --- 2015-08-22
+----------------------------
+
+- 4.0b1 broke ``--append`` creating new data files.  This is now fixed, closing
+  `issue 392`_.
+
+- ``py.test --cov`` can write empty data, then touch files due to ``--source``,
+  which made coverage.py mistakenly force the data file to record lines instead
+  of arcs.  This would lead to a "Can't combine line data with arc data" error
+  message.  This is now fixed, and changed some method names in the
+  CoverageData interface.  Fixes `issue 399`_.
+
+- `CoverageData.read_fileobj` and `CoverageData.write_fileobj` replace the
+  `.read` and `.write` methods, and are now properly inverses of each other.
+
+- When using ``report --skip-covered``, a message will now be included in the
+  report output indicating how many files were skipped, and if all files are
+  skipped, coverage.py won't accidentally scold you for having no data to
+  report.  Thanks, Krystian Kichewko.
+
+- A new conversion utility has been added:  ``python -m coverage.pickle2json``
+  will convert v3.x pickle data files to v4.x JSON data files.  Thanks,
+  Alexander Todorov.  Closes `issue 395`_.
+
+- A new version identifier is available, `coverage.version_info`, a plain tuple
+  of values similar to `sys.version_info`_.
+
+.. _issue 392: https://bitbucket.org/ned/coveragepy/issues/392/run-append-doesnt-create-coverage-file
+.. _issue 395: https://bitbucket.org/ned/coveragepy/issues/395/rfe-read-pickled-files-as-well-for
+.. _issue 399: https://bitbucket.org/ned/coveragepy/issues/399/coverageexception-cant-combine-line-data
+.. _sys.version_info: https://docs.python.org/3/library/sys.html#sys.version_info
+
+
+Version 4.0b1 --- 2015-08-02
+----------------------------
+
+- Coverage.py is now licensed under the Apache 2.0 license.  See NOTICE.txt for
+  details.  Closes `issue 313`_.
+
+- The data storage has been completely revamped.  The data file is now
+  JSON-based instead of a pickle, closing `issue 236`_.  The `CoverageData`
+  class is now a public supported documented API to the data file.
+
+- A new configuration option, ``[run] note``, lets you set a note that will be
+  stored in the `runs` section of the data file.  You can use this to annotate
+  the data file with any information you like.
+
+- Unrecognized configuration options will now print an error message and stop
+  coverage.py.  This should help prevent configuration mistakes from passing
+  silently.  Finishes `issue 386`_.
+
+- In parallel mode, ``coverage erase`` will now delete all of the data files,
+  fixing `issue 262`_.
+
+- Coverage.py now accepts a directory name for ``coverage run`` and will run a
+  ``__main__.py`` found there, just like Python will.  Fixes `issue 252`_.
+  Thanks, Dmitry Trofimov.
+
+- The XML report now includes a ``missing-branches`` attribute.  Thanks, Steve
+  Peak.  This is not a part of the Cobertura DTD, so the XML report no longer
+  references the DTD.
+
+- Missing branches in the HTML report now have a bit more information in the
+  right-hand annotations.  Hopefully this will make their meaning clearer.
+
+- All the reporting functions now behave the same if no data had been
+  collected, exiting with a status code of 1.  Fixed ``fail_under`` to be
+  applied even when the report is empty.  Thanks, Ionel Cristian Mărieș.
+
+- Plugins are now initialized differently.  Instead of looking for a class
+  called ``Plugin``, coverage.py looks for a function called ``coverage_init``.
+
+- A file-tracing plugin can now ask to have built-in Python reporting by
+  returning `"python"` from its `file_reporter()` method.
+
+- Code that was executed with `exec` would be mis-attributed to the file that
+  called it.  This is now fixed, closing `issue 380`_.
+
+- The ability to use item access on `Coverage.config` (introduced in 4.0a2) has
+  been changed to a more explicit `Coverage.get_option` and
+  `Coverage.set_option` API.
+
+- The ``Coverage.use_cache`` method is no longer supported.
+
+- The private method ``Coverage._harvest_data`` is now called
+  ``Coverage.get_data``, and returns the ``CoverageData`` containing the
+  collected data.
+
+- The project is consistently referred to as "coverage.py" throughout the code
+  and the documentation, closing `issue 275`_.
+
+- Combining data files with an explicit configuration file was broken in 4.0a6,
+  but now works again, closing `issue 385`_.
+
+- ``coverage combine`` now accepts files as well as directories.
+
+- The speed is back to 3.7.1 levels, after having slowed down due to plugin
+  support, finishing up `issue 387`_.
+
+.. _issue 236: https://bitbucket.org/ned/coveragepy/issues/236/pickles-are-bad-and-you-should-feel-bad
+.. _issue 252: https://bitbucket.org/ned/coveragepy/issues/252/coverage-wont-run-a-program-with
+.. _issue 262: https://bitbucket.org/ned/coveragepy/issues/262/when-parallel-true-erase-should-erase-all
+.. _issue 275: https://bitbucket.org/ned/coveragepy/issues/275/refer-consistently-to-project-as-coverage
+.. _issue 313: https://bitbucket.org/ned/coveragepy/issues/313/add-license-file-containing-2-3-or-4
+.. _issue 380: https://bitbucket.org/ned/coveragepy/issues/380/code-executed-by-exec-excluded-from
+.. _issue 385: https://bitbucket.org/ned/coveragepy/issues/385/coverage-combine-doesnt-work-with-rcfile
+.. _issue 386: https://bitbucket.org/ned/coveragepy/issues/386/error-on-unrecognised-configuration
+.. _issue 387: https://bitbucket.org/ned/coveragepy/issues/387/performance-degradation-from-371-to-40
+
+.. 40 issues closed in 4.0 below here
+
+
+Version 4.0a6 --- 2015-06-21
+----------------------------
+
+- Python 3.5b2 and PyPy 2.6.0 are supported.
+
+- The original module-level function interface to coverage.py is no longer
+  supported.  You must now create a ``coverage.Coverage`` object, and use
+  methods on it.
+
+- The ``coverage combine`` command now accepts any number of directories as
+  arguments, and will combine all the data files from those directories.  This
+  means you don't have to copy the files to one directory before combining.
+  Thanks, Christine Lytwynec.  Finishes `issue 354`_.
+
+- Branch coverage couldn't properly handle certain extremely long files. This
+  is now fixed (`issue 359`_).
+
+- Branch coverage didn't understand yield statements properly.  Mickie Betz
+  persisted in pursuing this despite Ned's pessimism.  Fixes `issue 308`_ and
+  `issue 324`_.
+
+- The COVERAGE_DEBUG environment variable can be used to set the ``[run] debug``
+  configuration option to control what internal operations are logged.
+
+- HTML reports were truncated at formfeed characters.  This is now fixed
+  (`issue 360`_).  It's always fun when the problem is due to a `bug in the
+  Python standard library <http://bugs.python.org/issue19035>`_.
+
+- Files with incorrect encoding declaration comments are no longer ignored by
+  the reporting commands, fixing `issue 351`_.
+
+- HTML reports now include a timestamp in the footer, closing `issue 299`_.
+  Thanks, Conrad Ho.
+
+- HTML reports now begrudgingly use double-quotes rather than single quotes,
+  because there are "software engineers" out there writing tools that read HTML
+  and somehow have no idea that single quotes exist.  Capitulates to the absurd
+  `issue 361`_.  Thanks, Jon Chappell.
+
+- The ``coverage annotate`` command now handles non-ASCII characters properly,
+  closing `issue 363`_.  Thanks, Leonardo Pistone.
+
+- Drive letters on Windows were not normalized correctly, now they are. Thanks,
+  Ionel Cristian Mărieș.
+
+- Plugin support had some bugs fixed, closing `issue 374`_ and `issue 375`_.
+  Thanks, Stefan Behnel.
+
+.. _issue 299: https://bitbucket.org/ned/coveragepy/issue/299/inserted-created-on-yyyy-mm-dd-hh-mm-in
+.. _issue 308: https://bitbucket.org/ned/coveragepy/issue/308/yield-lambda-branch-coverage
+.. _issue 324: https://bitbucket.org/ned/coveragepy/issue/324/yield-in-loop-confuses-branch-coverage
+.. _issue 351: https://bitbucket.org/ned/coveragepy/issue/351/files-with-incorrect-encoding-are-ignored
+.. _issue 354: https://bitbucket.org/ned/coveragepy/issue/354/coverage-combine-should-take-a-list-of
+.. _issue 359: https://bitbucket.org/ned/coveragepy/issue/359/xml-report-chunk-error
+.. _issue 360: https://bitbucket.org/ned/coveragepy/issue/360/html-reports-get-confused-by-l-in-the-code
+.. _issue 361: https://bitbucket.org/ned/coveragepy/issue/361/use-double-quotes-in-html-output-to
+.. _issue 363: https://bitbucket.org/ned/coveragepy/issue/363/annotate-command-hits-unicode-happy-fun
+.. _issue 374: https://bitbucket.org/ned/coveragepy/issue/374/c-tracer-lookups-fail-in
+.. _issue 375: https://bitbucket.org/ned/coveragepy/issue/375/ctracer_handle_return-reads-byte-code
+
+
+Version 4.0a5 --- 2015-02-16
+----------------------------
+
+- Plugin support is now implemented in the C tracer instead of the Python
+  tracer. This greatly improves the speed of tracing projects using plugins.
+
+- Coverage.py now always adds the current directory to sys.path, so that
+  plugins can import files in the current directory (`issue 358`_).
+
+- If the `config_file` argument to the Coverage constructor is specified as
+  ".coveragerc", it is treated as if it were True.  This means setup.cfg is
+  also examined, and a missing file is not considered an error (`issue 357`_).
+
+- Wildly experimental: support for measuring processes started by the
+  multiprocessing module.  To use, set ``--concurrency=multiprocessing``,
+  either on the command line or in the .coveragerc file (`issue 117`_). Thanks,
+  Eduardo Schettino.  Currently, this does not work on Windows.
+
+- A new warning is possible, if a desired file isn't measured because it was
+  imported before coverage.py was started (`issue 353`_).
+
+- The `coverage.process_startup` function now will start coverage measurement
+  only once, no matter how many times it is called.  This fixes problems due
+  to unusual virtualenv configurations (`issue 340`_).
+
+- Added 3.5.0a1 to the list of supported CPython versions.
+
+.. _issue 117: https://bitbucket.org/ned/coveragepy/issue/117/enable-coverage-measurement-of-code-run-by
+.. _issue 340: https://bitbucket.org/ned/coveragepy/issue/340/keyerror-subpy
+.. _issue 353: https://bitbucket.org/ned/coveragepy/issue/353/40a3-introduces-an-unexpected-third-case
+.. _issue 357: https://bitbucket.org/ned/coveragepy/issue/357/behavior-changed-when-coveragerc-is
+.. _issue 358: https://bitbucket.org/ned/coveragepy/issue/358/all-coverage-commands-should-adjust
+
+
+Version 4.0a4 --- 2015-01-25
+----------------------------
+
+- Plugins can now provide sys_info for debugging output.
+
+- Started plugins documentation.
+
+- Prepared to move the docs to readthedocs.org.
+
+
+Version 4.0a3 --- 2015-01-20
+----------------------------
+
+- Reports now use file names with extensions.  Previously, a report would
+  describe a/b/c.py as "a/b/c".  Now it is shown as "a/b/c.py".  This allows
+  for better support of non-Python files, and also fixed `issue 69`_.
+
+- The XML report now reports each directory as a package again.  This was a bad
+  regression, I apologize.  This was reported in `issue 235`_, which is now
+  fixed.
+
+- A new configuration option for the XML report: ``[xml] package_depth``
+  controls which directories are identified as packages in the report.
+  Directories deeper than this depth are not reported as packages.
+  The default is that all directories are reported as packages.
+  Thanks, Lex Berezhny.
+
+- When looking for the source for a frame, check if the file exists. On
+  Windows, .pyw files are no longer recorded as .py files. Along the way, this
+  fixed `issue 290`_.
+
+- Empty files are now reported as 100% covered in the XML report, not 0%
+  covered (`issue 345`_).
+
+- Regexes in the configuration file are now compiled as soon as they are read,
+  to provide error messages earlier (`issue 349`_).
+
+.. _issue 69: https://bitbucket.org/ned/coveragepy/issue/69/coverage-html-overwrite-files-that-doesnt
+.. _issue 235: https://bitbucket.org/ned/coveragepy/issue/235/package-name-is-missing-in-xml-report
+.. _issue 290: https://bitbucket.org/ned/coveragepy/issue/290/running-programmatically-with-pyw-files
+.. _issue 345: https://bitbucket.org/ned/coveragepy/issue/345/xml-reports-line-rate-0-for-empty-files
+.. _issue 349: https://bitbucket.org/ned/coveragepy/issue/349/bad-regex-in-config-should-get-an-earlier
+
+
+Version 4.0a2 --- 2015-01-14
+----------------------------
+
+- Officially support PyPy 2.4, and PyPy3 2.4.  Drop support for
+  CPython 3.2 and older versions of PyPy.  The code won't work on CPython 3.2.
+  It will probably still work on older versions of PyPy, but I'm not testing
+  against them.
+
+- Plugins!
+
+- The original command line switches (`-x` to run a program, etc) are no
+  longer supported.
+
+- A new option: `coverage report --skip-covered` will reduce the number of
+  files reported by skipping files with 100% coverage.  Thanks, Krystian
+  Kichewko.  This means that empty `__init__.py` files will be skipped, since
+  they are 100% covered, closing `issue 315`_.
+
+- You can now specify the ``--fail-under`` option in the ``.coveragerc`` file
+  as the ``[report] fail_under`` option.  This closes `issue 314`_.
+
+- The ``COVERAGE_OPTIONS`` environment variable is no longer supported.  It was
+  a hack for ``--timid`` before configuration files were available.
+
+- The HTML report now has filtering.  Type text into the Filter box on the
+  index page, and only modules with that text in the name will be shown.
+  Thanks, Danny Allen.
+
+- The textual report and the HTML report used to report partial branches
+  differently for no good reason.  Now the text report's "missing branches"
+  column is a "partial branches" column so that both reports show the same
+  numbers.  This closes `issue 342`_.
+
+- If you specify a ``--rcfile`` that cannot be read, you will get an error
+  message.  Fixes `issue 343`_.
+
+- The ``--debug`` switch can now be used on any command.
+
+- You can now programmatically adjust the configuration of coverage.py by
+  setting items on `Coverage.config` after construction.
+
+- A module run with ``-m`` can be used as the argument to ``--source``, fixing
+  `issue 328`_.  Thanks, Buck Evan.
+
+- The regex for matching exclusion pragmas has been fixed to allow more kinds
+  of whitespace, fixing `issue 334`_.
+
+- Made some PyPy-specific tweaks to improve speed under PyPy.  Thanks, Alex
+  Gaynor.
+
+- In some cases, with a source file missing a final newline, coverage.py would
+  count statements incorrectly.  This is now fixed, closing `issue 293`_.
+
+- The status.dat file that HTML reports use to avoid re-creating files that
+  haven't changed is now a JSON file instead of a pickle file.  This obviates
+  `issue 287`_ and `issue 237`_.
+
+.. _issue 237: https://bitbucket.org/ned/coveragepy/issue/237/htmlcov-with-corrupt-statusdat
+.. _issue 287: https://bitbucket.org/ned/coveragepy/issue/287/htmlpy-doesnt-specify-pickle-protocol
+.. _issue 293: https://bitbucket.org/ned/coveragepy/issue/293/number-of-statement-detection-wrong-if-no
+.. _issue 314: https://bitbucket.org/ned/coveragepy/issue/314/fail_under-param-not-working-in-coveragerc
+.. _issue 315: https://bitbucket.org/ned/coveragepy/issue/315/option-to-omit-empty-files-eg-__init__py
+.. _issue 328: https://bitbucket.org/ned/coveragepy/issue/328/misbehavior-in-run-source
+.. _issue 334: https://bitbucket.org/ned/coveragepy/issue/334/pragma-not-recognized-if-tab-character
+.. _issue 342: https://bitbucket.org/ned/coveragepy/issue/342/console-and-html-coverage-reports-differ
+.. _issue 343: https://bitbucket.org/ned/coveragepy/issue/343/an-explicitly-named-non-existent-config
+
+
+Version 4.0a1 --- 2014-09-27
+----------------------------
+
+- Python versions supported are now CPython 2.6, 2.7, 3.2, 3.3, and 3.4, and
+  PyPy 2.2.
+
+- Gevent, eventlet, and greenlet are now supported, closing `issue 149`_.
+  The ``concurrency`` setting specifies the concurrency library in use.  Huge
+  thanks to Peter Portante for initial implementation, and to Joe Jevnik for
+  the final insight that completed the work.
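+  For example, the setting might be given in the configuration file like this
+  (the value shown is illustrative; any supported library name can be used)::
+
+      [run]
+      concurrency = gevent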
+
+- Options are now also read from a setup.cfg file, if any.  Sections are
+  prefixed with "coverage:", so the ``[run]`` options will be read from the
+  ``[coverage:run]`` section of setup.cfg.  Finishes `issue 304`_.
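+  For example, a setup.cfg carrying coverage.py options might look like this
+  (the specific option values are illustrative)::
+
+      [coverage:run]
+      branch = True
+      source = mypkg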
+
+- The ``report -m`` command can now show missing branches when reporting on
+  branch coverage.  Thanks, Steve Leonard. Closes `issue 230`_.
+
+- The XML report now contains a <source> element, fixing `issue 94`_.  Thanks,
+  Stan Hu.
+
+- The class defined in the coverage module is now called ``Coverage`` instead
+  of ``coverage``, though the old name still works, for backward compatibility.
+
+- The ``fail-under`` value is now rounded the same as reported results,
+  preventing paradoxical results, fixing `issue 284`_.
+
+- The XML report will now create the output directory if need be, fixing
+  `issue 285`_.  Thanks, Chris Rose.
+
+- HTML reports no longer raise UnicodeDecodeError if a Python file has
+  undecodable characters, fixing `issue 303`_ and `issue 331`_.
+
+- The annotate command will now annotate all files, not just ones relative to
+  the current directory, fixing `issue 57`_.
+
+- The coverage module no longer causes deprecation warnings on Python 3.4 by
+  importing the imp module, fixing `issue 305`_.
+
+- Encoding declarations in source files are only considered if they are truly
+  comments.  Thanks, Anthony Sottile.
+
+.. _issue 57: https://bitbucket.org/ned/coveragepy/issue/57/annotate-command-fails-to-annotate-many
+.. _issue 94: https://bitbucket.org/ned/coveragepy/issue/94/coverage-xml-doesnt-produce-sources
+.. _issue 149: https://bitbucket.org/ned/coveragepy/issue/149/coverage-gevent-looks-broken
+.. _issue 230: https://bitbucket.org/ned/coveragepy/issue/230/show-line-no-for-missing-branches-in
+.. _issue 284: https://bitbucket.org/ned/coveragepy/issue/284/fail-under-should-show-more-precision
+.. _issue 285: https://bitbucket.org/ned/coveragepy/issue/285/xml-report-fails-if-output-file-directory
+.. _issue 303: https://bitbucket.org/ned/coveragepy/issue/303/unicodedecodeerror
+.. _issue 304: https://bitbucket.org/ned/coveragepy/issue/304/attempt-to-get-configuration-from-setupcfg
+.. _issue 305: https://bitbucket.org/ned/coveragepy/issue/305/pendingdeprecationwarning-the-imp-module
+.. _issue 331: https://bitbucket.org/ned/coveragepy/issue/331/failure-of-encoding-detection-on-python2
+
+
+Version 3.7.1 --- 2013-12-13
+----------------------------
+
+- Improved the speed of HTML report generation by about 20%.
+
+- Fixed the mechanism for finding OS-installed static files for the HTML report
+  so that it will actually find OS-installed static files.
+
+
+Version 3.7 --- 2013-10-06
+--------------------------
+
+- Added the ``--debug`` switch to ``coverage run``.  It accepts a list of
+  options indicating the type of internal activity to log to stderr.
+
+- Improved the branch coverage facility, fixing `issue 92`_ and `issue 175`_.
+
+- Running code with ``coverage run -m`` now behaves more like Python does,
+  setting sys.path properly, which fixes `issue 207`_ and `issue 242`_.
+
+- Coverage.py can now run .pyc files directly, closing `issue 264`_.
+
+- Coverage.py properly supports .pyw files, fixing `issue 261`_.
+
+- Omitting files within a tree specified with the ``source`` option would
+  cause them to be incorrectly marked as unexecuted, as described in
+  `issue 218`_.  This is now fixed.
+
+- When specifying paths to alias together during data combining, you can now
+  specify relative paths, fixing `issue 267`_.
+
+- Most file paths can now be specified with username expansion (``~/src``, or
+  ``~build/src``, for example), and with environment variable expansion
+  (``build/$BUILDNUM/src``).
+
+- Trying to create an XML report with no files to report on would cause a
+  ZeroDivisionError, but no longer does, fixing `issue 250`_.
+
+- When running a threaded program under the Python tracer, coverage.py no
+  longer issues a spurious warning about the trace function changing: "Trace
+  function changed, measurement is likely wrong: None."  This fixes `issue
+  164`_.
+
+- Static files necessary for HTML reports are found in system-installed places,
+  to ease OS-level packaging of coverage.py.  Closes `issue 259`_.
+
+- Source files with encoding declarations, but a blank first line, were not
+  decoded properly.  Now they are.  Thanks, Roger Hu.
+
+- The source kit now includes the ``__main__.py`` file in the root coverage
+  directory, fixing `issue 255`_.
+
+.. _issue 92: https://bitbucket.org/ned/coveragepy/issue/92/finally-clauses-arent-treated-properly-in
+.. _issue 164: https://bitbucket.org/ned/coveragepy/issue/164/trace-function-changed-warning-when-using
+.. _issue 175: https://bitbucket.org/ned/coveragepy/issue/175/branch-coverage-gets-confused-in-certain
+.. _issue 207: https://bitbucket.org/ned/coveragepy/issue/207/run-m-cannot-find-module-or-package-in
+.. _issue 242: https://bitbucket.org/ned/coveragepy/issue/242/running-a-two-level-package-doesnt-work
+.. _issue 218: https://bitbucket.org/ned/coveragepy/issue/218/run-command-does-not-respect-the-omit-flag
+.. _issue 250: https://bitbucket.org/ned/coveragepy/issue/250/uncaught-zerodivisionerror-when-generating
+.. _issue 255: https://bitbucket.org/ned/coveragepy/issue/255/directory-level-__main__py-not-included-in
+.. _issue 259: https://bitbucket.org/ned/coveragepy/issue/259/allow-use-of-system-installed-third-party
+.. _issue 261: https://bitbucket.org/ned/coveragepy/issue/261/pyw-files-arent-reported-properly
+.. _issue 264: https://bitbucket.org/ned/coveragepy/issue/264/coverage-wont-run-pyc-files
+.. _issue 267: https://bitbucket.org/ned/coveragepy/issue/267/relative-path-aliases-dont-work
+
+
+Version 3.6 --- 2013-01-05
+--------------------------
+
+- Added a page to the docs about troublesome situations, closing `issue 226`_,
+  and added some info to the TODO file, closing `issue 227`_.
+
+.. _issue 226: https://bitbucket.org/ned/coveragepy/issue/226/make-readme-section-to-describe-when
+.. _issue 227: https://bitbucket.org/ned/coveragepy/issue/227/update-todo
+
+
+Version 3.6b3 --- 2012-12-29
+----------------------------
+
+- Beta 2 broke the nose plugin. It's fixed again, closing `issue 224`_.
+
+.. _issue 224: https://bitbucket.org/ned/coveragepy/issue/224/36b2-breaks-nosexcover
+
+
+Version 3.6b2 --- 2012-12-23
+----------------------------
+
+- Coverage.py runs on Python 2.3 and 2.4 again. It was broken in 3.6b1.
+
+- The C extension is optionally compiled using a different, more widely used
+  technique, taking another stab at fixing `issue 80`_ once and for all.
+
+- Combining data files would create entries for phantom files if used with
+  ``source`` and path aliases.  It no longer does.
+
+- ``debug sys`` now shows the configuration file path that was read.
+
+- If an oddly-behaved package claims that code came from an empty-string
+  file name, coverage.py no longer associates it with the directory name,
+  fixing `issue 221`_.
+
+.. _issue 221: https://bitbucket.org/ned/coveragepy/issue/221/coveragepy-incompatible-with-pyratemp
+
+
+Version 3.6b1 --- 2012-11-28
+----------------------------
+
+- Wildcards in ``include=`` and ``omit=`` arguments were not handled properly
+  in reporting functions, though they were when running.  Now they are handled
+  uniformly, closing `issue 143`_ and `issue 163`_.  **NOTE**: it is possible
+  that your configurations may now be incorrect.  If you use ``include`` or
+  ``omit`` during reporting, whether on the command line, through the API, or
+  in a configuration file, please check carefully that you were not relying on
+  the old broken behavior.
+
+- The **report**, **html**, and **xml** commands now accept a ``--fail-under``
+  switch that indicates in the exit status whether the coverage percentage was
+  less than a particular value.  Closes `issue 139`_.
+
+- The reporting functions coverage.report(), coverage.html_report(), and
+  coverage.xml_report() now all return a float, the total percentage covered
+  measurement.
+
+- The HTML report's title can now be set in the configuration file, with the
+  ``--title`` switch on the command line, or via the API.
+
+- Configuration files now support substitution of environment variables, using
+  syntax like ``${WORD}``.  Closes `issue 97`_.
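+  For example, an illustrative setting using a hypothetical ``BUILD_DIR``
+  environment variable::
+
+      [html]
+      directory = ${BUILD_DIR}/htmlcov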
+
+- Embarrassingly, the ``[xml] output=`` setting in the .coveragerc file simply
+  didn't work.  Now it does.
+
+- The XML report now consistently uses file names for the file name attribute,
+  rather than sometimes using module names.  Fixes `issue 67`_.
+  Thanks, Marcus Cobden.
+
+- Coverage percentage metrics are now computed slightly differently under
+  branch coverage.  This means that completely unexecuted files will now
+  correctly have 0% coverage, fixing `issue 156`_.  This also means that your
+  total coverage numbers will generally now be lower if you are measuring
+  branch coverage.
+
+- When installing, now in addition to creating a "coverage" command, two new
+  aliases are also installed.  A "coverage2" or "coverage3" command will be
+  created, depending on whether you are installing in Python 2.x or 3.x.
+  A "coverage-X.Y" command will also be created corresponding to your specific
+  version of Python.  Closes `issue 111`_.
+
+- The coverage.py installer no longer tries to bootstrap setuptools or
+  Distribute.  You must have one of them installed first, as `issue 202`_
+  recommended.
+
+- The coverage.py kit now includes docs (closing `issue 137`_) and tests.
+
+- On Windows, files are now reported in their correct case, fixing `issue 89`_
+  and `issue 203`_.
+
+- If a file is missing during reporting, the path shown in the error message
+  is now correct, rather than an incorrect path in the current directory.
+  Fixes `issue 60`_.
+
+- Running an HTML report in Python 3 in the same directory as an old Python 2
+  HTML report would fail with a UnicodeDecodeError. This issue (`issue 193`_)
+  is now fixed.
+
+- Fixed yet another error trying to parse non-Python files as Python, this
+  time an IndentationError, closing `issue 82`_ for the fourth time...
+
+- If `coverage xml` fails because there is no data to report, it used to
+  create a zero-length XML file.  Now it doesn't, fixing `issue 210`_.
+
+- Jython files now work with the ``--source`` option, fixing `issue 100`_.
+
+- Running coverage.py under a debugger is unlikely to work, but it shouldn't
+  fail with "TypeError: 'NoneType' object is not iterable".  Fixes `issue
+  201`_.
+
+- On some Linux distributions, when installed with the OS package manager,
+  coverage.py would report its own code as part of the results.  Now it won't,
+  fixing `issue 214`_, though this will take some time to be repackaged by the
+  operating systems.
+
+- Docstrings for the legacy singleton methods are more helpful.  Thanks Marius
+  Gedminas.  Closes `issue 205`_.
+
+- The pydoc tool can now show documentation for the class `coverage.coverage`.
+  Closes `issue 206`_.
+
+- Added a page to the docs about contributing to coverage.py, closing
+  `issue 171`_.
+
+- When coverage.py ended unsuccessfully, it may have reported odd errors like
+  ``'NoneType' object has no attribute 'isabs'``.  It no longer does,
+  so kiss `issue 153`_ goodbye.
+
+.. _issue 60: https://bitbucket.org/ned/coveragepy/issue/60/incorrect-path-to-orphaned-pyc-files
+.. _issue 67: https://bitbucket.org/ned/coveragepy/issue/67/xml-report-filenames-may-be-generated
+.. _issue 89: https://bitbucket.org/ned/coveragepy/issue/89/on-windows-all-packages-are-reported-in
+.. _issue 97: https://bitbucket.org/ned/coveragepy/issue/97/allow-environment-variables-to-be
+.. _issue 100: https://bitbucket.org/ned/coveragepy/issue/100/source-directive-doesnt-work-for-packages
+.. _issue 111: https://bitbucket.org/ned/coveragepy/issue/111/when-installing-coverage-with-pip-not
+.. _issue 137: https://bitbucket.org/ned/coveragepy/issue/137/provide-docs-with-source-distribution
+.. _issue 139: https://bitbucket.org/ned/coveragepy/issue/139/easy-check-for-a-certain-coverage-in-tests
+.. _issue 143: https://bitbucket.org/ned/coveragepy/issue/143/omit-doesnt-seem-to-work-in-coverage
+.. _issue 153: https://bitbucket.org/ned/coveragepy/issue/153/non-existent-filename-triggers
+.. _issue 156: https://bitbucket.org/ned/coveragepy/issue/156/a-completely-unexecuted-file-shows-14
+.. _issue 163: https://bitbucket.org/ned/coveragepy/issue/163/problem-with-include-and-omit-filename
+.. _issue 171: https://bitbucket.org/ned/coveragepy/issue/171/how-to-contribute-and-run-tests
+.. _issue 193: https://bitbucket.org/ned/coveragepy/issue/193/unicodedecodeerror-on-htmlpy
+.. _issue 201: https://bitbucket.org/ned/coveragepy/issue/201/coverage-using-django-14-with-pydb-on
+.. _issue 202: https://bitbucket.org/ned/coveragepy/issue/202/get-rid-of-ez_setuppy-and
+.. _issue 203: https://bitbucket.org/ned/coveragepy/issue/203/duplicate-filenames-reported-when-filename
+.. _issue 205: https://bitbucket.org/ned/coveragepy/issue/205/make-pydoc-coverage-more-friendly
+.. _issue 206: https://bitbucket.org/ned/coveragepy/issue/206/pydoc-coveragecoverage-fails-with-an-error
+.. _issue 210: https://bitbucket.org/ned/coveragepy/issue/210/if-theres-no-coverage-data-coverage-xml
+.. _issue 214: https://bitbucket.org/ned/coveragepy/issue/214/coveragepy-measures-itself-on-precise
+
+
+Version 3.5.3 --- 2012-09-29
+----------------------------
+
+- Line numbers in the HTML report line up better with the source lines, fixing
+  `issue 197`_, thanks Marius Gedminas.
+
+- When specifying a directory as the ``source=`` option, the directory itself
+  no longer needs an ``__init__.py`` file to be considered as source, though
+  its sub-directories still do.
+
+- Files encoded as UTF-8 with a BOM are now properly handled, fixing
+  `issue 179`_.  Thanks, Pablo Carballo.
+
+- Fixed more cases of non-Python files being reported as Python source, and
+  then not being able to parse them as Python.  Closes `issue 82`_ (again).
+  Thanks, Julian Berman.
+
+- Fixed memory leaks under Python 3, thanks, Brett Cannon. Closes `issue 147`_.
+
+- Optimized .pyo files may not have been handled correctly, `issue 195`_.
+  Thanks, Marius Gedminas.
+
+- Certain unusually named file paths could have been mangled during reporting,
+  `issue 194`_.  Thanks, Marius Gedminas.
+
+- Try to do a better job of the impossible task of detecting when we can't
+  build the C extension, fixing `issue 183`_.
+
+- Testing is now done with `tox`_, thanks, Marc Abramowitz.
+
+.. _issue 147: https://bitbucket.org/ned/coveragepy/issue/147/massive-memory-usage-by-ctracer
+.. _issue 179: https://bitbucket.org/ned/coveragepy/issue/179/htmlreporter-fails-when-source-file-is
+.. _issue 183: https://bitbucket.org/ned/coveragepy/issue/183/install-fails-for-python-23
+.. _issue 194: https://bitbucket.org/ned/coveragepy/issue/194/filelocatorrelative_filename-could-mangle
+.. _issue 195: https://bitbucket.org/ned/coveragepy/issue/195/pyo-file-handling-in-codeunit
+.. _issue 197: https://bitbucket.org/ned/coveragepy/issue/197/line-numbers-in-html-report-do-not-align
+.. _tox: http://tox.readthedocs.org/
+
+
+Version 3.5.2 --- 2012-05-04
+----------------------------
+
+No changes since 3.5.2b1
+
+
+Version 3.5.2b1 --- 2012-04-29
+------------------------------
+
+- The HTML report has slightly tweaked controls: the buttons at the top of
+  the page are color-coded to the source lines they affect.
+
+- Custom CSS can be applied to the HTML report by specifying a CSS file as
+  the ``extra_css`` configuration value in the ``[html]`` section.
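+  For example (the file name is illustrative)::
+
+      [html]
+      extra_css = my_styles.css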
+
+- Source files with custom encodings declared in a comment at the top are now
+  properly handled during reporting on Python 2.  Python 3 always handled them
+  properly.  This fixes `issue 157`_.
+
+- Backup files left behind by editors are no longer collected by the source=
+  option, fixing `issue 168`_.
+
+- If a file doesn't parse properly as Python, we don't report it as an error
+  if the file name seems like maybe it wasn't meant to be Python.  This is a
+  pragmatic fix for `issue 82`_.
+
+- The ``-m`` switch on ``coverage report``, which includes missing line numbers
+  in the summary report, can now be specified as ``show_missing`` in the
+  config file.  Closes `issue 173`_.
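+  For example::
+
+      [report]
+      show_missing = True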
+
+- When running a module with ``coverage run -m <modulename>``, certain details
+  of the execution environment weren't the same as for
+  ``python -m <modulename>``.  This had the unfortunate side-effect of making
+  ``coverage run -m unittest discover`` not work if you had tests in a
+  directory named "test".  This fixes `issue 155`_ and `issue 142`_.
+
+- Now the exit status of your product code is properly used as the process
+  status when running ``python -m coverage run ...``.  Thanks, JT Olds.
+
+- When installing into PyPy, we no longer attempt (and fail) to compile
+  the C tracer function, closing `issue 166`_.
+
+.. _issue 142: https://bitbucket.org/ned/coveragepy/issue/142/executing-python-file-syspath-is-replaced
+.. _issue 155: https://bitbucket.org/ned/coveragepy/issue/155/cant-use-coverage-run-m-unittest-discover
+.. _issue 157: https://bitbucket.org/ned/coveragepy/issue/157/chokes-on-source-files-with-non-utf-8
+.. _issue 166: https://bitbucket.org/ned/coveragepy/issue/166/dont-try-to-compile-c-extension-on-pypy
+.. _issue 168: https://bitbucket.org/ned/coveragepy/issue/168/dont-be-alarmed-by-emacs-droppings
+.. _issue 173: https://bitbucket.org/ned/coveragepy/issue/173/theres-no-way-to-specify-show-missing-in
+
+
+Version 3.5.1 --- 2011-09-23
+----------------------------
+
+- The ``[paths]`` feature unfortunately didn't work in real world situations
+  where you wanted to, you know, report on the combined data.  Now all paths
+  stored in the combined file are canonicalized properly.
+
+
+Version 3.5.1b1 --- 2011-08-28
+------------------------------
+
+- When combining data files from parallel runs, you can now instruct
+  coverage.py about which directories are equivalent on different machines.  A
+  ``[paths]`` section in the configuration file lists paths that are to be
+  considered equivalent.  Finishes `issue 17`_.
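+  A hypothetical ``[paths]`` section, where the first path is the canonical
+  location and the later ones are treated as equivalent to it when data files
+  are combined (the paths shown are illustrative)::
+
+      [paths]
+      source =
+          src/
+          /build/*/src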
+
+- for-else constructs are understood better, and don't cause erroneous partial
+  branch warnings.  Fixes `issue 122`_.
+
+- Branch coverage for ``with`` statements is improved, fixing `issue 128`_.
+
+- The number of partial branches reported on the HTML summary page was
+  different than the number reported on the individual file pages.  This is
+  now fixed.
+
+- An explicit include directive to measure files in the Python installation
+  wouldn't work because of the standard library exclusion.  Now the include
+  directive takes precedence, and the files will be measured.  Fixes
+  `issue 138`_.
+
+- The HTML report now handles Unicode characters in Python source files
+  properly.  This fixes `issue 124`_ and `issue 144`_. Thanks, Devin
+  Jeanpierre.
+
+- In order to help the core developers measure the test coverage of the
+  standard library, Brandon Rhodes devised an aggressive hack to trick Python
+  into running some coverage.py code before anything else in the process.
+  See the coverage/fullcoverage directory if you are interested.
+
+.. _issue 17: http://bitbucket.org/ned/coveragepy/issue/17/support-combining-coverage-data-from
+.. _issue 122: http://bitbucket.org/ned/coveragepy/issue/122/for-else-always-reports-missing-branch
+.. _issue 124: http://bitbucket.org/ned/coveragepy/issue/124/no-arbitrary-unicode-in-html-reports-in
+.. _issue 128: http://bitbucket.org/ned/coveragepy/issue/128/branch-coverage-of-with-statement-in-27
+.. _issue 138: http://bitbucket.org/ned/coveragepy/issue/138/include-should-take-precedence-over-is
+.. _issue 144: http://bitbucket.org/ned/coveragepy/issue/144/failure-generating-html-output-for
+
+
+Version 3.5 --- 2011-06-29
+--------------------------
+
+- The HTML report hotkeys now behave slightly differently when the current
+  chunk isn't visible at all:  a chunk on the screen will be selected,
+  instead of the old behavior of jumping to the literal next chunk.
+  The hotkeys now work in Google Chrome.  Thanks, Guido van Rossum.
+
+
+Version 3.5b1 --- 2011-06-05
+----------------------------
+
+- The HTML report now has hotkeys.  Try ``n``, ``s``, ``m``, ``x``, ``b``,
+  ``p``, and ``c`` on the overview page to change the column sorting.
+  On a file page, ``r``, ``m``, ``x``, and ``p`` toggle the run, missing,
+  excluded, and partial line markings.  You can navigate the highlighted
+  sections of code by using the ``j`` and ``k`` keys for next and previous.
+  The ``1`` (one) key jumps to the first highlighted section in the file,
+  and ``0`` (zero) scrolls to the top of the file.
+
+- The ``--omit`` and ``--include`` switches now interpret their values more
+  usefully.  If the value starts with a wildcard character, it is used as-is.
+  If it does not, it is interpreted relative to the current directory.
+  Closes `issue 121`_.
+
+- Partial branch warnings can now be pragma'd away.  The configuration option
+  ``partial_branches`` is a list of regular expressions.  Lines matching any of
+  those expressions will never be marked as a partial branch.  In addition,
+  there's a built-in list of regular expressions marking statements which should
+  never be marked as partial.  This list includes ``while True:``, ``while 1:``,
+  ``if 1:``, and ``if 0:``.
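+  For example, a configuration entry matching a pragma comment (the pattern
+  shown is illustrative)::
+
+      [report]
+      partial_branches =
+          pragma: no branch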
+
+- The ``coverage()`` constructor accepts single strings for the ``omit=`` and
+  ``include=`` arguments, adapting to a common error in programmatic use.
+
+- Modules can now be run directly using ``coverage run -m modulename``, to
+  mirror Python's ``-m`` flag.  Closes `issue 95`_, thanks, Brandon Rhodes.
+
+- ``coverage run`` didn't emulate Python accurately in one small detail: the
+  current directory inserted into ``sys.path`` was relative rather than
+  absolute. This is now fixed.
+
+- HTML reporting is now incremental: a record is kept of the data that
+  produced the HTML reports, and only files whose data has changed will
+  be generated.  This should make most HTML reporting faster.
+
+- Pathological code execution could disable the trace function behind our
+  backs, leading to incorrect code measurement.  Now if this happens,
+  coverage.py will issue a warning, at least alerting you to the problem.
+  Closes `issue 93`_.  Thanks to Marius Gedminas for the idea.
+
+- The C-based trace function now behaves properly when saved and restored
+  with ``sys.gettrace()`` and ``sys.settrace()``.  This fixes `issue 125`_
+  and `issue 123`_.  Thanks, Devin Jeanpierre.
+
+- Source files are now opened with Python 3.2's ``tokenize.open()`` where
+  possible, to get the best handling of Python source files with encodings.
+  Closes `issue 107`_, thanks, Brett Cannon.
+
+- Syntax errors in supposed Python files can now be ignored during reporting
+  with the ``-i`` switch just like other source errors.  Closes `issue 115`_.
+
+- Installation from source now succeeds on machines without a C compiler,
+  closing `issue 80`_.
+
+- Coverage.py can now be run directly from a working tree by specifying
+  the directory name to python:  ``python coverage_py_working_dir run ...``.
+  Thanks, Brett Cannon.
+
+- A little bit of Jython support: `coverage run` can now measure Jython
+  execution by adapting when $py.class files are traced. Thanks, Adi Roiban.
+  Jython still doesn't provide the Python libraries needed to make
+  coverage reporting work, unfortunately.
+
+- Internally, files are now closed explicitly, fixing `issue 104`_.  Thanks,
+  Brett Cannon.
+
+.. _issue 80: https://bitbucket.org/ned/coveragepy/issue/80/is-there-a-duck-typing-way-to-know-we-cant
+.. _issue 93: http://bitbucket.org/ned/coveragepy/issue/93/copying-a-mock-object-breaks-coverage
+.. _issue 95: https://bitbucket.org/ned/coveragepy/issue/95/run-subcommand-should-take-a-module-name
+.. _issue 104: https://bitbucket.org/ned/coveragepy/issue/104/explicitly-close-files
+.. _issue 107: https://bitbucket.org/ned/coveragepy/issue/107/codeparser-not-opening-source-files-with
+.. _issue 115: https://bitbucket.org/ned/coveragepy/issue/115/fail-gracefully-when-reporting-on-file
+.. _issue 121: https://bitbucket.org/ned/coveragepy/issue/121/filename-patterns-are-applied-stupidly
+.. _issue 123: https://bitbucket.org/ned/coveragepy/issue/123/pyeval_settrace-used-in-way-that-breaks
+.. _issue 125: https://bitbucket.org/ned/coveragepy/issue/125/coverage-removes-decoratortoolss-tracing
+
+
+Version 3.4 --- 2010-09-19
+--------------------------
+
+- The XML report is now sorted by package name, fixing `issue 88`_.
+
+- Programs that exited with ``sys.exit()`` with no argument weren't handled
+  properly, producing a coverage.py stack trace.  That is now fixed.
+
+.. _issue 88: http://bitbucket.org/ned/coveragepy/issue/88/xml-report-lists-packages-in-random-order
+
+
+Version 3.4b2 --- 2010-09-06
+----------------------------
+
+- Completely unexecuted files can now be included in coverage results, reported
+  as 0% covered.  This only happens if the --source option is specified, since
+  coverage.py needs guidance about where to look for source files.
+
+- The XML report output now properly includes a percentage for branch coverage,
+  fixing `issue 65`_ and `issue 81`_.
+
+- Coverage percentages are now displayed uniformly across reporting methods.
+  Previously, different reports could round percentages differently.  Also,
+  percentages are only reported as 0% or 100% if they are truly 0 or 100, and
+  are rounded otherwise.  Fixes `issue 41`_ and `issue 70`_.
+
+- The precision of reported coverage percentages can be set with the
+  ``[report] precision`` config file setting.  Completes `issue 16`_.
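+  For example, to report percentages with two decimal places::
+
+      [report]
+      precision = 2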
+
+- Threads derived from ``threading.Thread`` with an overridden `run` method
+  would report no coverage for the `run` method.  This is now fixed, closing
+  `issue 85`_.
+
+.. _issue 16: http://bitbucket.org/ned/coveragepy/issue/16/allow-configuration-of-accuracy-of-percentage-totals
+.. _issue 41: http://bitbucket.org/ned/coveragepy/issue/41/report-says-100-when-it-isnt-quite-there
+.. _issue 65: http://bitbucket.org/ned/coveragepy/issue/65/branch-option-not-reported-in-cobertura
+.. _issue 70: http://bitbucket.org/ned/coveragepy/issue/70/text-report-and-html-report-disagree-on-coverage
+.. _issue 81: http://bitbucket.org/ned/coveragepy/issue/81/xml-report-does-not-have-condition-coverage-attribute-for-lines-with-a
+.. _issue 85: http://bitbucket.org/ned/coveragepy/issue/85/threadrun-isnt-measured
+
+
+Version 3.4b1 --- 2010-08-21
+----------------------------
+
+- BACKWARD INCOMPATIBILITY: the ``--omit`` and ``--include`` switches now take
+  file patterns rather than file prefixes, closing `issue 34`_ and `issue 36`_.
+
+- BACKWARD INCOMPATIBILITY: the `omit_prefixes` argument is gone throughout
+  coverage.py, replaced with `omit`, a list of file name patterns suitable for
+  `fnmatch`.  A parallel argument `include` controls what files are included.
+
+- The run command now has a ``--source`` switch, a list of directories or
+  module names.  If provided, coverage.py will only measure execution in those
+  source files.
+
+- Various warnings are printed to stderr for problems encountered during data
+  measurement: if a ``--source`` module has no Python source to measure, or is
+  never encountered at all, or if no data is collected.
+
+- The reporting commands (report, annotate, html, and xml) now have an
+  ``--include`` switch to restrict reporting to modules matching those file
+  patterns, similar to the existing ``--omit`` switch. Thanks, Zooko.
+
+- The run command now supports ``--include`` and ``--omit`` to control what
+  modules it measures. This can speed execution and reduce the amount of data
+  during reporting. Thanks, Zooko.
+
+- Since coverage.py 3.1, using the Python trace function has been slower than
+  it needs to be.  A cache of tracing decisions was broken, but has now been
+  fixed.
+
+- Python 2.7 and 3.2 have introduced new opcodes that are now supported.
+
+- Python files with no statements, for example, empty ``__init__.py`` files,
+  are now reported as having zero statements instead of one.  Fixes `issue 1`_.
+
+- Reports now have a column of missed line counts rather than executed line
+  counts, since developers should focus on reducing the missed lines to zero,
+  rather than increasing the executed lines to varying targets.  Once
+  suggested, this seemed blindingly obvious.
+
+- Line numbers in HTML source pages are clickable, linking directly to that
+  line, which is highlighted on arrival.  Added a link back to the index page
+  at the bottom of each HTML page.
+
+- Programs that call ``os.fork`` will properly collect data from both the child
+  and parent processes.  Use ``coverage run -p`` to get two data files that can
+  be combined with ``coverage combine``.  Fixes `issue 56`_.
+
+- Coverage.py is now runnable as a module: ``python -m coverage``.  Thanks,
+  Brett Cannon.
+
+- When measuring code running in a virtualenv, most of the system library was
+  being measured when it shouldn't have been.  This is now fixed.
+
+- Doctest text files are no longer recorded in the coverage data, since they
+  can't be reported anyway.  Fixes `issue 52`_ and `issue 61`_.
+
+- Jinja HTML templates compile into Python code using the HTML file name,
+  which confused coverage.py.  Now these files are no longer traced, fixing
+  `issue 82`_.
+
+- Source files can have more than one dot in them (foo.test.py), and will be
+  treated properly while reporting.  Fixes `issue 46`_.
+
+- Source files with DOS line endings are now properly tokenized for syntax
+  coloring on non-DOS machines.  Fixes `issue 53`_.
+
+- Unusual code structure that confused exits from methods with exits from
+  classes is now properly analyzed.  See `issue 62`_.
+
+- Asking for an HTML report with no files now shows a nice error message rather
+  than a cryptic failure ('int' object is unsubscriptable). Fixes `issue 59`_.
+
+.. _issue 1:  http://bitbucket.org/ned/coveragepy/issue/1/empty-__init__py-files-are-reported-as-1-executable
+.. _issue 34: http://bitbucket.org/ned/coveragepy/issue/34/enhanced-omit-globbing-handling
+.. _issue 36: http://bitbucket.org/ned/coveragepy/issue/36/provide-regex-style-omit
+.. _issue 46: http://bitbucket.org/ned/coveragepy/issue/46
+.. _issue 53: http://bitbucket.org/ned/coveragepy/issue/53
+.. _issue 52: http://bitbucket.org/ned/coveragepy/issue/52/doctesttestfile-confuses-source-detection
+.. _issue 56: http://bitbucket.org/ned/coveragepy/issue/56
+.. _issue 61: http://bitbucket.org/ned/coveragepy/issue/61/annotate-i-doesnt-work
+.. _issue 62: http://bitbucket.org/ned/coveragepy/issue/62
+.. _issue 59: http://bitbucket.org/ned/coveragepy/issue/59/html-report-fails-with-int-object-is
+.. _issue 82: http://bitbucket.org/ned/coveragepy/issue/82/tokenerror-when-generating-html-report
+
+
+Version 3.3.1 --- 2010-03-06
+----------------------------
+
+- Using `parallel=True` in the .coveragerc file prevented reporting, but now
+  does not, fixing `issue 49`_.
+
+- When running your code with "coverage run", if you call `sys.exit()`,
+  coverage.py will exit with that status code, fixing `issue 50`_.
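The exit-status propagation described in this entry can be checked with a plain-Python equivalent (a hypothetical snippet, not coverage.py's own code): a child process calls `sys.exit(3)` and the parent observes the same status code, which is what "coverage run" now forwards from your program.

```python
import subprocess
import sys

# Run a child process that exits with status 3 and capture its return code,
# the same code "coverage run" now propagates from the measured program.
code = subprocess.call([sys.executable, "-c", "import sys; sys.exit(3)"])
print(code)
```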
+
+.. _issue 49: http://bitbucket.org/ned/coveragepy/issue/49
+.. _issue 50: http://bitbucket.org/ned/coveragepy/issue/50
+
+
+Version 3.3 --- 2010-02-24
+--------------------------
+
+- Settings are now read from a .coveragerc file.  A specific file can be
+  specified on the command line with --rcfile=FILE.  The name of the file can
+  be programmatically set with the `config_file` argument to the coverage()
+  constructor, or reading a config file can be disabled with
+  `config_file=False`.
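A minimal .coveragerc of the kind introduced here might look like the following (a sketch; the option names shown are the common `[run]` and `[report]` settings from the coverage.py documentation of this era):

```ini
[run]
branch = True
parallel = True

[report]
omit = */tests/*
```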
+
+- Fixed a problem with nested loops having their branch possibilities
+  mischaracterized: `issue 39`_.
+
+- Added coverage.process_start to enable coverage measurement when Python
+  starts.
+
+- Parallel data file names now have a random number appended to them in
+  addition to the machine name and process id.
+
+- Parallel data files combined with "coverage combine" are deleted after
+  they're combined, to clean up unneeded files.  Fixes `issue 40`_.
+
+- Exceptions thrown from product code run with "coverage run" are now displayed
+  without internal coverage.py frames, so the output is the same as when the
+  code is run without coverage.py.
+
+- The `data_suffix` argument to the coverage constructor is now appended with
+  an added dot rather than simply appended, so that .coveragerc files will not
+  be confused for data files.
+
+- Python source files that don't end with a newline can now be executed, fixing
+  `issue 47`_.
+
+- Added an AUTHORS.txt file.
+
+.. _issue 39: http://bitbucket.org/ned/coveragepy/issue/39
+.. _issue 40: http://bitbucket.org/ned/coveragepy/issue/40
+.. _issue 47: http://bitbucket.org/ned/coveragepy/issue/47
+
+
+Version 3.2 --- 2009-12-05
+--------------------------
+
+- Added a ``--version`` option on the command line.
+
+
+Version 3.2b4 --- 2009-12-01
+----------------------------
+
+- Branch coverage improvements:
+
+  - The XML report now includes branch information.
+
+- Click-to-sort HTML report columns are now persisted in a cookie.  Viewing
+  a report will initially sort it the way you last sorted a coverage report.
+  Thanks, `Chris Adams`_.
+
+- On Python 3.x, setuptools has been replaced by `Distribute`_.
+
+.. _Distribute: http://packages.python.org/distribute/
+
+
+Version 3.2b3 --- 2009-11-23
+----------------------------
+
+- Fixed a memory leak in the C tracer that was introduced in 3.2b1.
+
+- Branch coverage improvements:
+
+  - Branches to excluded code are ignored.
+
+- The table of contents in the HTML report is now sortable: click the headers
+  on any column.  Thanks, `Chris Adams`_.
+
+.. _Chris Adams: http://improbable.org/chris/
+
+
+Version 3.2b2 --- 2009-11-19
+----------------------------
+
+- Branch coverage improvements:
+
+  - Classes are no longer incorrectly marked as branches: `issue 32`_.
+
+  - "except" clauses with types are no longer incorrectly marked as branches:
+    `issue 35`_.
+
+- Fixed some problems with syntax coloring of sources with line continuations
+  and sources with tabs: `issue 30`_ and `issue 31`_.
+
+- The --omit option now works much better than before, fixing `issue 14`_ and
+  `issue 33`_.  Thanks, Danek Duvall.
+
+.. _issue 14: http://bitbucket.org/ned/coveragepy/issue/14
+.. _issue 30: http://bitbucket.org/ned/coveragepy/issue/30
+.. _issue 31: http://bitbucket.org/ned/coveragepy/issue/31
+.. _issue 32: http://bitbucket.org/ned/coveragepy/issue/32
+.. _issue 33: http://bitbucket.org/ned/coveragepy/issue/33
+.. _issue 35: http://bitbucket.org/ned/coveragepy/issue/35
+
+
+Version 3.2b1 --- 2009-11-10
+----------------------------
+
+- Branch coverage!
+
+- XML reporting has file paths that let Cobertura find the source code.
+
+- The tracer code has changed; it's a few percent faster.
+
+- Some exceptions reported by the command line interface have been cleaned up
+  so that tracebacks inside coverage.py aren't shown.  Fixes `issue 23`_.
+
+.. _issue 23: http://bitbucket.org/ned/coveragepy/issue/23
+
+
+Version 3.1 --- 2009-10-04
+--------------------------
+
+- Source code can now be read from eggs.  Thanks, Ross Lawley.  Fixes
+  `issue 25`_.
+
+.. _issue 25: http://bitbucket.org/ned/coveragepy/issue/25
+
+
+Version 3.1b1 --- 2009-09-27
+----------------------------
+
+- Python 3.1 is now supported.
+
+- Coverage.py has a new command line syntax with sub-commands.  This expands
+  the possibilities for adding features and options in the future.  The old
+  syntax is still supported.  Try "coverage help" to see the new commands.
+  Thanks to Ben Finney for early help.
+
+- Added an experimental "coverage xml" command for producing coverage reports
+  in a Cobertura-compatible XML format.  Thanks, Bill Hart.
+
+- Added the --timid option to enable a simpler, slower trace function that
+  works for DecoratorTools projects, including TurboGears.  Fixed `issue 12`_
+  and `issue 13`_.
+
+- HTML reports show modules from other directories.  Fixed `issue 11`_.
+
+- HTML reports now display syntax-colored Python source.
+
+- Programs that change directory will still write .coverage files in the
+  directory where execution started.  Fixed `issue 24`_.
+
+- Added a "coverage debug" command for getting diagnostic information about the
+  coverage.py installation.
+
+.. _issue 11: http://bitbucket.org/ned/coveragepy/issue/11
+.. _issue 12: http://bitbucket.org/ned/coveragepy/issue/12
+.. _issue 13: http://bitbucket.org/ned/coveragepy/issue/13
+.. _issue 24: http://bitbucket.org/ned/coveragepy/issue/24
+
+
+Version 3.0.1 --- 2009-07-07
+----------------------------
+
+- Removed the recursion limit in the tracer function.  Previously, code that
+  ran more than 500 frames deep would crash. Fixed `issue 9`_.
+
+- Fixed a bizarre problem involving pyexpat, whereby lines following XML parser
+  invocations could be overlooked.  Fixed `issue 10`_.
+
+- On Python 2.3, coverage.py could mis-measure code with exceptions being
+  raised.  This is now fixed.
+
+- The coverage.py code itself will now not be measured by coverage.py, and no
+  coverage.py modules will be mentioned in the nose --with-cover plug-in.
+  Fixed `issue 8`_.
+
+- When running source files, coverage.py now opens them in universal newline
+  mode just like Python does.  This lets it run Windows files on Mac, for
+  example.
+
+.. _issue 9: http://bitbucket.org/ned/coveragepy/issue/9
+.. _issue 10: http://bitbucket.org/ned/coveragepy/issue/10
+.. _issue 8: http://bitbucket.org/ned/coveragepy/issue/8
+
+
+Version 3.0 --- 2009-06-13
+--------------------------
+
+- Fixed the way the Python library was ignored.  Too much code was being
+  excluded the old way.
+
+- Tabs are now properly converted in HTML reports.  Previously indentation was
+  lost.  Fixed `issue 6`_.
+
+- Nested modules now get a proper flat_rootname.  Thanks, Christian Heimes.
+
+.. _issue 6: http://bitbucket.org/ned/coveragepy/issue/6
+
+
+Version 3.0b3 --- 2009-05-16
+----------------------------
+
+- Added parameters to coverage.__init__ for options that had been set on the
+  coverage object itself.
+
+- Added clear_exclude() and get_exclude_list() methods for programmatic
+  manipulation of the exclude regexes.
+
+- Added coverage.load() to read previously-saved data from the data file.
+
+- Improved the finding of code files.  For example, .pyc files that have been
+  installed after compiling are now located correctly.  Thanks, Detlev
+  Offenbach.
+
+- When using the object API (that is, constructing a coverage() object), data
+  is no longer saved automatically on process exit.  You can re-enable it with
+  the auto_data=True parameter on the coverage() constructor. The module-level
+  interface still uses automatic saving.
+
+
+Version 3.0b --- 2009-04-30
+---------------------------
+
+HTML reporting, and continued refactoring.
+
+- HTML reports and annotation of source files: use the new -b (browser) switch.
+  Thanks to George Song for code, inspiration and guidance.
+
+- Code in the Python standard library is not measured by default.  If you need
+  to measure standard library code, use the -L command-line switch during
+  execution, or the cover_pylib=True argument to the coverage() constructor.
+
+- Source annotation into a directory (-a -d) behaves differently.  The
+  annotated files are named with their hierarchy flattened so that same-named
+  files from different directories no longer collide.  Also, only files in the
+  current tree are included.
+
+- coverage.annotate_file is no longer available.
+
+- Programs executed with -x now behave more as they should, for example,
+  __file__ has the correct value.
+
+- .coverage data files have a new pickle-based format designed for better
+  extensibility.
+
+- Removed the undocumented cache_file argument to coverage.usecache().
+
+
+Version 3.0b1 --- 2009-03-07
+----------------------------
+
+Major overhaul.
+
+- Coverage.py is now a package rather than a module.  Functionality has been
+  split into classes.
+
+- The trace function is implemented in C for speed.  Coverage.py runs are now
+  much faster.  Thanks to David Christian for productive micro-sprints and
+  other encouragement.
+
+- Executable lines are identified by reading the line number tables in the
+  compiled code, removing a great deal of complicated analysis code.
+
+- Precisely which lines are considered executable has changed in some cases.
+  Therefore, your coverage stats may also change slightly.
+
+- The singleton coverage object is only created if the module-level functions
+  are used.  This maintains the old interface while allowing better
+  programmatic use of Coverage.py.
+
+- The minimum supported Python version is 2.3.
+
+
+Version 2.85 --- 2008-09-14
+---------------------------
+
+- Add support for finding source files in eggs. Don't check for morfs being
+  instances of ModuleType; instead use duck typing so that pseudo-modules can
+  participate. Thanks, Imri Goldberg.
+
+- Use os.realpath as part of the fixing of file names so that symlinks won't
+  confuse things. Thanks, Patrick Mezard.
+
+
+Version 2.80 --- 2008-05-25
+---------------------------
+
+- Open files in rU mode to avoid line ending craziness. Thanks, Edward Loper.
+
+
+Version 2.78 --- 2007-09-30
+---------------------------
+
+- Don't try to predict whether a file is Python source based on the extension.
+  Extension-less files are often Python scripts. Instead, simply parse the
+  file and catch the syntax errors. Hat tip to Ben Finney.
+
+
+Version 2.77 --- 2007-07-29
+---------------------------
+
+- Better packaging.
+
+
+Version 2.76 --- 2007-07-23
+---------------------------
+
+- Now Python 2.5 is *really* fully supported: the body of the new with
+  statement is counted as executable.
+
+
+Version 2.75 --- 2007-07-22
+---------------------------
+
+- Python 2.5 now fully supported. The method of dealing with multi-line
+  statements is now less sensitive to the exact line that Python reports during
+  execution. Pass statements are handled specially so that their disappearance
+  during execution won't throw off the measurement.
+
+
+Version 2.7 --- 2007-07-21
+--------------------------
+
+- "#pragma: nocover" is excluded by default.
+
+- Properly ignore docstrings and other constant expressions that appear in the
+  middle of a function, a problem reported by Tim Leslie.
+
+- coverage.erase() shouldn't clobber the exclude regex. Change how parallel
+  mode is invoked, and fix erase() so that it erases the cache when called
+  programmatically.
+
+- In reports, ignore code executed from strings, since we can't do anything
+  useful with it anyway.
+
+- Better file handling on Linux, thanks Guillaume Chazarain.
+
+- Better shell support on Windows, thanks Noel O'Boyle.
+
+- Python 2.2 support maintained, thanks Catherine Proulx.
+
+- Minor changes to avoid lint warnings.
+
+
+Version 2.6 --- 2006-08-23
+--------------------------
+
+- Applied Joseph Tate's patch for function decorators.
+
+- Applied Sigve Tjora and Mark van der Wal's fixes for argument handling.
+
+- Applied Geoff Bache's parallel mode patch.
+
+- Refactorings to improve testability. Fixes to command-line logic for parallel
+  mode and collect.
+
+
+Version 2.5 --- 2005-12-04
+--------------------------
+
+- Call threading.settrace so that all threads are measured. Thanks Martin
+  Fuzzey.
+
+- Add a file argument to report so that reports can be captured to a different
+  destination.
+
+- Coverage.py can now measure itself.
+
+- Adapted Greg Rogers' patch for using relative file names, and sorting and
+  omitting files to report on.
+
+
+Version 2.2 --- 2004-12-31
+--------------------------
+
+- Allow for keyword arguments in the module global functions. Thanks, Allen.
+
+
+Version 2.1 --- 2004-12-14
+--------------------------
+
+- Return 'analysis' to its original behavior and add 'analysis2'. Add a global
+  for 'annotate', and factor it, adding 'annotate_file'.
+
+
+Version 2.0 --- 2004-12-12
+--------------------------
+
+Significant code changes.
+
+- Finding executable statements has been rewritten so that docstrings and
+  other quirks of Python execution aren't mistakenly identified as missing
+  lines.
+
+- Lines can be excluded from consideration, even entire suites of lines.
+
+- The file system cache of covered lines can be disabled programmatically.
+
+- Modernized the code.
+
+
+Earlier History
+---------------
+
+2001-12-04 GDR Created.
+
+2001-12-06 GDR Added command-line interface and source code annotation.
+
+2001-12-09 GDR Moved design and interface to separate documents.
+
+2001-12-10 GDR Open cache file as binary on Windows. Allow simultaneous -e and
+-x, or -a and -r.
+
+2001-12-12 GDR Added command-line help. Cache analysis so that it only needs to
+be done once when you specify -a and -r.
+
+2001-12-13 GDR Improved speed while recording. Portable between Python 1.5.2
+and 2.1.1.
+
+2002-01-03 GDR Module-level functions work correctly.
+
+2002-01-07 GDR Update sys.path when running a file with the -x option, so that
+it matches the value the program would get if it were run on its own.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/doc/LICENSE.txt	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,177 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/doc/README.rst	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,77 @@
+.. Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+.. For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+===========
+Coverage.py
+===========
+
+Code coverage testing for Python.
+
+|  |license| |versions| |status| |docs|
+|  |ci-status| |win-ci-status| |codecov|
+|  |kit| |format| |downloads|
+
+Coverage.py measures code coverage, typically during test execution. It uses
+the code analysis tools and tracing hooks provided in the Python standard
+library to determine which lines are executable, and which have been executed.
+
+Coverage.py runs on CPython 2.6, 2.7, and 3.3 through 3.6; PyPy 4.0 and 5.1;
+and PyPy3 2.4.
+
+Documentation is on `Read the Docs <http://coverage.readthedocs.io>`_.
+Code repository and issue tracker are on `Bitbucket <http://bitbucket.org/ned/coveragepy>`_,
+with a mirrored repository on `GitHub <https://github.com/nedbat/coveragepy>`_.
+
+**New in 4.1:** much-improved branch coverage.
+
+New in 4.0: ``--concurrency``, plugins for non-Python files, setup.cfg
+support, --skip-covered, HTML filtering, and more than 50 issues closed.
+
+
+Getting Started
+---------------
+
+See the `quick start <http://coverage.readthedocs.io/#quick-start>`_
+section of the docs.
+
+
+License
+-------
+
+Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0.
+For details, see https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt.
+
+
+.. |ci-status| image:: https://travis-ci.org/nedbat/coveragepy.svg?branch=master
+    :target: https://travis-ci.org/nedbat/coveragepy
+    :alt: Build status
+.. |win-ci-status| image:: https://ci.appveyor.com/api/projects/status/bitbucket/ned/coveragepy?svg=true
+    :target: https://ci.appveyor.com/project/nedbat/coveragepy
+    :alt: Windows build status
+.. |docs| image:: https://readthedocs.org/projects/coverage/badge/?version=latest&style=flat
+    :target: http://coverage.readthedocs.io
+    :alt: Documentation
+.. |reqs| image:: https://requires.io/github/nedbat/coveragepy/requirements.svg?branch=master
+    :target: https://requires.io/github/nedbat/coveragepy/requirements/?branch=master
+    :alt: Requirements status
+.. |kit| image:: https://badge.fury.io/py/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: PyPI status
+.. |format| image:: https://img.shields.io/pypi/format/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: Kit format
+.. |downloads| image:: https://img.shields.io/pypi/dw/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: Weekly PyPI downloads
+.. |versions| image:: https://img.shields.io/pypi/pyversions/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: Python versions supported
+.. |status| image:: https://img.shields.io/pypi/status/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: Package stability
+.. |license| image:: https://img.shields.io/pypi/l/coverage.svg
+    :target: https://pypi.python.org/pypi/coverage
+    :alt: License
+.. |codecov| image:: http://codecov.io/github/nedbat/coveragepy/coverage.svg?branch=master
+    :target: http://codecov.io/github/nedbat/coveragepy?branch=master
+    :alt: Coverage!
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/env.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,35 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Determine facts about the environment."""
+
+import os
+import sys
+
+# Operating systems.
+WINDOWS = sys.platform == "win32"
+LINUX = sys.platform == "linux2"
+
+# Python implementations.
+PYPY = '__pypy__' in sys.builtin_module_names
+
+# Python versions.
+PYVERSION = sys.version_info
+PY2 = PYVERSION < (3, 0)
+PY3 = PYVERSION >= (3, 0)
+
+# Coverage.py specifics.
+
+# Are we using the C-implemented trace function?
+C_TRACER = os.getenv('COVERAGE_TEST_TRACER', 'c') == 'c'
+
+# Are we coverage-measuring ourselves?
+METACOV = os.getenv('COVERAGE_COVERAGE', '') != ''
+
+# Are we running our test suite?
+# Even when running tests, you can use COVERAGE_TESTING=0 to disable the
+# test-specific behavior like contracts.
+TESTING = os.getenv('COVERAGE_TESTING', '') == 'True'
+
+#
+# eflag: FileType = Python2
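The flags defined in env.py are plain module-level booleans computed once at import time. The same pattern can be exercised stand-alone with just the stdlib (the names below mirror the ones in the file above):

```python
import os
import sys

# Operating system and Python-version facts, computed once at import time.
WINDOWS = sys.platform == "win32"
PY2 = sys.version_info < (3, 0)
PY3 = sys.version_info >= (3, 0)

# Feature switches driven by environment variables, with defaults.
C_TRACER = os.getenv("COVERAGE_TEST_TRACER", "c") == "c"
METACOV = os.getenv("COVERAGE_COVERAGE", "") != ""

print(PY3, C_TRACER)
```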
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/execfile.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,242 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Execute files of Python code."""
+
+import marshal
+import os
+import sys
+import types
+
+from coverage.backward import BUILTINS
+from coverage.backward import PYC_MAGIC_NUMBER, imp, importlib_util_find_spec
+from coverage.misc import ExceptionDuringRun, NoCode, NoSource, isolate_module
+from coverage.phystokens import compile_unicode
+from coverage.python import get_python_source
+
+os = isolate_module(os)
+
+
+class DummyLoader(object):
+    """A shim for the pep302 __loader__, emulating pkgutil.ImpLoader.
+
+    Currently only implements the .fullname attribute
+    """
+    def __init__(self, fullname, *_args):
+        self.fullname = fullname
+
+
+if importlib_util_find_spec:
+    def find_module(modulename):
+        """Find the module named `modulename`.
+
+        Returns the file path of the module, and the name of the enclosing
+        package.
+        """
+        try:
+            spec = importlib_util_find_spec(modulename)
+        except ImportError as err:
+            raise NoSource(str(err))
+        if not spec:
+            raise NoSource("No module named %r" % (modulename,))
+        pathname = spec.origin
+        packagename = spec.name
+        if pathname.endswith("__init__.py") and not modulename.endswith("__init__"):
+            mod_main = modulename + ".__main__"
+            spec = importlib_util_find_spec(mod_main)
+            if not spec:
+                raise NoSource(
+                    "No module named %s; "
+                    "%r is a package and cannot be directly executed"
+                    % (mod_main, modulename)
+                )
+            pathname = spec.origin
+            packagename = spec.name
+        packagename = packagename.rpartition(".")[0]
+        return pathname, packagename
+else:
+    def find_module(modulename):
+        """Find the module named `modulename`.
+
+        Returns the file path of the module, and the name of the enclosing
+        package.
+        """
+        openfile = None
+        glo, loc = globals(), locals()
+        try:
+            # Search for the module - inside its parent package, if any - using
+            # standard import mechanics.
+            if '.' in modulename:
+                packagename, name = modulename.rsplit('.', 1)
+                package = __import__(packagename, glo, loc, ['__path__'])
+                searchpath = package.__path__
+            else:
+                packagename, name = None, modulename
+                searchpath = None  # "top-level search" in imp.find_module()
+            openfile, pathname, _ = imp.find_module(name, searchpath)
+
+            # Complain if this is a magic non-file module.
+            if openfile is None and pathname is None:
+                raise NoSource(
+                    "module does not live in a file: %r" % modulename
+                    )
+
+            # If `modulename` is actually a package, not a mere module, then we
+            # pretend to be Python 2.7 and try running its __main__.py script.
+            if openfile is None:
+                packagename = modulename
+                name = '__main__'
+                package = __import__(packagename, glo, loc, ['__path__'])
+                searchpath = package.__path__
+                openfile, pathname, _ = imp.find_module(name, searchpath)
+        except ImportError as err:
+            raise NoSource(str(err))
+        finally:
+            if openfile:
+                openfile.close()
+
+        return pathname, packagename
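On Python 3 the importlib branch above can be exercised on its own. A minimal sketch (the `locate_module` helper name is invented here, not part of coverage.py), using the stdlib `json` package as the lookup target:

```python
import importlib.util

def locate_module(modulename):
    """Return (file path, enclosing package) for `modulename`."""
    spec = importlib.util.find_spec(modulename)
    if spec is None:
        raise ImportError("No module named %r" % (modulename,))
    # rpartition() yields the parent package, or "" for top-level modules.
    return spec.origin, spec.name.rpartition(".")[0]

path, pkg = locate_module("json.tool")
print(path, pkg)
```

As in the function above, a bare package name resolves to its `__init__.py`, which is why the real code then retries with `.__main__` appended before giving up.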
+
+
+def run_python_module(modulename, args):
+    """Run a Python module, as though with ``python -m name args...``.
+
+    `modulename` is the name of the module, possibly a dot-separated name.
+    `args` is the argument array to present as sys.argv, including the first
+    element naming the module being executed.
+
+    """
+    pathname, packagename = find_module(modulename)
+
+    pathname = os.path.abspath(pathname)
+    args[0] = pathname
+    run_python_file(pathname, args, package=packagename, modulename=modulename, path0="")
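The stdlib `runpy` module implements the same `python -m` lookup semantics that `run_python_module` reimplements here (presumably reimplemented so the execution stays under coverage.py's control). A quick comparison, run against the stdlib `platform` module:

```python
import runpy

# Locate and execute the stdlib "platform" module the way "python -m"
# would, but under its own name so its __main__ guard stays quiet.
namespace = runpy.run_module("platform")
print(sorted(namespace)[:3])
```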
+
+
+def run_python_file(filename, args, package=None, modulename=None, path0=None):
+    """Run a Python file as if it were the main program on the command line.
+
+    `filename` is the path to the file to execute, it need not be a .py file.
+    `args` is the argument array to present as sys.argv, including the first
+    element naming the file being executed.  `package` is the name of the
+    enclosing package, if any.
+
+    `modulename` is the name of the module the file was run as.
+
+    `path0` is the value to put into sys.path[0].  If it's None, then this
+    function will decide on a value.
+
+    """
+    if modulename is None and sys.version_info >= (3, 3):
+        modulename = '__main__'
+
+    # Create a module to serve as __main__
+    old_main_mod = sys.modules['__main__']
+    main_mod = types.ModuleType('__main__')
+    sys.modules['__main__'] = main_mod
+    main_mod.__file__ = filename
+    if package:
+        main_mod.__package__ = package
+    if modulename:
+        main_mod.__loader__ = DummyLoader(modulename)
+
+    main_mod.__builtins__ = BUILTINS
+
+    # Set sys.argv properly.
+    old_argv = sys.argv
+    sys.argv = args
+
+    if os.path.isdir(filename):
+        # Running a directory means running the __main__.py file in that
+        # directory.
+        my_path0 = filename
+
+        for ext in [".py", ".pyc", ".pyo"]:
+            try_filename = os.path.join(filename, "__main__" + ext)
+            if os.path.exists(try_filename):
+                filename = try_filename
+                break
+        else:
+            raise NoSource("Can't find '__main__' module in '%s'" % filename)
+    else:
+        my_path0 = os.path.abspath(os.path.dirname(filename))
+
+    # Set sys.path correctly.
+    old_path0 = sys.path[0]
+    sys.path[0] = path0 if path0 is not None else my_path0
+
+    try:
+        # Make a code object somehow.
+        if filename.endswith((".pyc", ".pyo")):
+            code = make_code_from_pyc(filename)
+        else:
+            code = make_code_from_py(filename)
+
+        # Execute the code object.
+        try:
+            exec(code, main_mod.__dict__)
+        except SystemExit:
+            # The user called sys.exit().  Just pass it along to the upper
+            # layers, where it will be handled.
+            raise
+        except:
+            # Something went wrong while executing the user code.
+            # Get the exc_info, and pack them into an exception that we can
+            # throw up to the outer loop.  We peel one layer off the traceback
+            # so that the coverage.py code doesn't appear in the final printed
+            # traceback.
+            typ, err, tb = sys.exc_info()
+
+            # PyPy3 weirdness.  If I don't access __context__, then somehow it
+            # is non-None when the exception is reported at the upper layer,
+            # and a nested exception is shown to the user.  This getattr fixes
+            # it somehow? https://bitbucket.org/pypy/pypy/issue/1903
+            getattr(err, '__context__', None)
+
+            raise ExceptionDuringRun(typ, err, tb.tb_next)
+    finally:
+        # Restore the old __main__, argv, and path.
+        sys.modules['__main__'] = old_main_mod
+        sys.argv = old_argv
+        sys.path[0] = old_path0
+
+
+def make_code_from_py(filename):
+    """Get source from `filename` and make a code object of it."""
+    # Open the source file.
+    try:
+        source = get_python_source(filename)
+    except (IOError, NoSource):
+        raise NoSource("No file to run: '%s'" % filename)
+
+    code = compile_unicode(source, filename, "exec")
+    return code
+
+
+def make_code_from_pyc(filename):
+    """Get a code object from a .pyc file."""
+    try:
+        fpyc = open(filename, "rb")
+    except IOError:
+        raise NoCode("No file to run: '%s'" % filename)
+
+    with fpyc:
+        # First four bytes are a version-specific magic number.  It has to
+        # match or we won't run the file.
+        magic = fpyc.read(4)
+        if magic != PYC_MAGIC_NUMBER:
+            raise NoCode("Bad magic number in .pyc file")
+
+        # Skip the junk in the header that we don't need.
+        fpyc.read(4)            # Skip the moddate.
+        if sys.version_info >= (3, 3):
+            # 3.3 added another long to the header (size), skip it.
+            fpyc.read(4)
+
+        # The rest of the file is the code object we want.
+        code = marshal.load(fpyc)
+
+    return code
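The header layout that `make_code_from_pyc` skips can be checked directly. A sketch assuming CPython 3.3+ (note that 3.7 added a flags word to the header, so 12 bytes follow the magic number there instead of 8):

```python
import importlib.util
import marshal
import os
import py_compile
import sys
import tempfile
import types

# Compile a trivial source file, then parse the .pyc header by hand.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "demo.py")
with open(src, "w") as f:
    f.write("answer = 42\n")
pyc = py_compile.compile(src)

with open(pyc, "rb") as fpyc:
    magic = fpyc.read(4)                      # version-specific magic number
    assert magic == importlib.util.MAGIC_NUMBER
    # Skip the rest of the header: flags (3.7+), moddate, and source size.
    fpyc.read(12 if sys.version_info >= (3, 7) else 8)
    code = marshal.load(fpyc)                 # the code object itself

namespace = {}
exec(code, namespace)
```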
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/files.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,381 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""File wrangling."""
+
+import fnmatch
+import ntpath
+import os
+import os.path
+import posixpath
+import re
+import sys
+
+from coverage import env
+from coverage.backward import unicode_class
+from coverage.misc import contract, CoverageException, join_regex, isolate_module
+
+
+os = isolate_module(os)
+
+
+def set_relative_directory():
+    """Set the directory that `relative_filename` will be relative to."""
+    global RELATIVE_DIR, CANONICAL_FILENAME_CACHE
+
+    # The absolute path to our current directory.
+    RELATIVE_DIR = os.path.normcase(abs_file(os.curdir) + os.sep)
+
+    # Cache of results of calling the canonical_filename() method, to
+    # avoid duplicating work.
+    CANONICAL_FILENAME_CACHE = {}
+
+
+def relative_directory():
+    """Return the directory that `relative_filename` is relative to."""
+    return RELATIVE_DIR
+
+
+@contract(returns='unicode')
+def relative_filename(filename):
+    """Return the relative form of `filename`.
+
+    The file name will be relative to the directory that was current when
+    `set_relative_directory` was called.
+
+    """
+    fnorm = os.path.normcase(filename)
+    if fnorm.startswith(RELATIVE_DIR):
+        filename = filename[len(RELATIVE_DIR):]
+    return unicode_filename(filename)
+
+
+@contract(returns='unicode')
+def canonical_filename(filename):
+    """Return a canonical file name for `filename`.
+
+    An absolute path with no redundant components and normalized case.
+
+    """
+    if filename not in CANONICAL_FILENAME_CACHE:
+        if not os.path.isabs(filename):
+            for path in [os.curdir] + sys.path:
+                if path is None:
+                    continue
+                f = os.path.join(path, filename)
+                if os.path.exists(f):
+                    filename = f
+                    break
+        cf = abs_file(filename)
+        CANONICAL_FILENAME_CACHE[filename] = cf
+    return CANONICAL_FILENAME_CACHE[filename]
+
+
+def flat_rootname(filename):
+    """A base for a flat file name to correspond to this file.
+
+    Useful for writing files about the code where you want all the files in
+    the same directory, but need to differentiate same-named files from
+    different directories.
+
+    For example, the file a/b/c.py maps to 'a_b_c_py'.
+
+    """
+    name = ntpath.splitdrive(filename)[1]
+    return re.sub(r"[\\/.:]", "_", name)
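The transformation is simple enough to verify in isolation; the same two lines reproduce it:

```python
import ntpath
import re

def flat_rootname(filename):
    # Strip any drive letter, then replace separators, dots and colons.
    name = ntpath.splitdrive(filename)[1]
    return re.sub(r"[\\/.:]", "_", name)

print(flat_rootname("a/b/c.py"))           # a_b_c_py
print(flat_rootname("C:\\proj\\mod.py"))   # _proj_mod_py
```

Using `ntpath` instead of `os.path` means the drive-letter handling works even when the report is generated on a non-Windows machine.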
+
+
+if env.WINDOWS:
+
+    _ACTUAL_PATH_CACHE = {}
+    _ACTUAL_PATH_LIST_CACHE = {}
+
+    def actual_path(path):
+        """Get the actual path of `path`, including the correct case."""
+        if env.PY2 and isinstance(path, unicode_class):
+            path = path.encode(sys.getfilesystemencoding())
+        if path in _ACTUAL_PATH_CACHE:
+            return _ACTUAL_PATH_CACHE[path]
+
+        head, tail = os.path.split(path)
+        if not tail:
+            # This means head is the drive spec: normalize it.
+            actpath = head.upper()
+        elif not head:
+            actpath = tail
+        else:
+            head = actual_path(head)
+            if head in _ACTUAL_PATH_LIST_CACHE:
+                files = _ACTUAL_PATH_LIST_CACHE[head]
+            else:
+                try:
+                    files = os.listdir(head)
+                except OSError:
+                    files = []
+                _ACTUAL_PATH_LIST_CACHE[head] = files
+            normtail = os.path.normcase(tail)
+            for f in files:
+                if os.path.normcase(f) == normtail:
+                    tail = f
+                    break
+            actpath = os.path.join(head, tail)
+        _ACTUAL_PATH_CACHE[path] = actpath
+        return actpath
+
+else:
+    def actual_path(filename):
+        """The actual path for non-Windows platforms."""
+        return filename
+
+
+if env.PY2:
+    @contract(returns='unicode')
+    def unicode_filename(filename):
+        """Return a Unicode version of `filename`."""
+        if isinstance(filename, str):
+            encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
+            filename = filename.decode(encoding, "replace")
+        return filename
+else:
+    @contract(filename='unicode', returns='unicode')
+    def unicode_filename(filename):
+        """Return a Unicode version of `filename`."""
+        return filename
+
+
+@contract(returns='unicode')
+def abs_file(filename):
+    """Return the absolute normalized form of `filename`."""
+    path = os.path.expandvars(os.path.expanduser(filename))
+    path = os.path.abspath(os.path.realpath(path))
+    path = actual_path(path)
+    path = unicode_filename(path)
+    return path
+
+
+RELATIVE_DIR = None
+CANONICAL_FILENAME_CACHE = None
+set_relative_directory()
+
+
+def isabs_anywhere(filename):
+    """Is `filename` an absolute path on any OS?"""
+    return ntpath.isabs(filename) or posixpath.isabs(filename)
+
+
+def prep_patterns(patterns):
+    """Prepare the file patterns for use in a `FnmatchMatcher`.
+
+    If a pattern starts with a wildcard, it is used as a pattern
+    as-is.  If it does not start with a wildcard, then it is made
+    absolute with the current directory.
+
+    If `patterns` is None, an empty list is returned.
+
+    """
+    prepped = []
+    for p in patterns or []:
+        if p.startswith(("*", "?")):
+            prepped.append(p)
+        else:
+            prepped.append(abs_file(p))
+    return prepped
+
+
+class TreeMatcher(object):
+    """A matcher for files in a tree."""
+    def __init__(self, directories):
+        self.dirs = list(directories)
+
+    def __repr__(self):
+        return "<TreeMatcher %r>" % self.dirs
+
+    def info(self):
+        """A list of strings for displaying when dumping state."""
+        return self.dirs
+
+    def match(self, fpath):
+        """Does `fpath` indicate a file in one of our trees?"""
+        for d in self.dirs:
+            if fpath.startswith(d):
+                if fpath == d:
+                    # This is the same file!
+                    return True
+                if fpath[len(d)] == os.sep:
+                    # This is a file in the directory
+                    return True
+        return False
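The `fpath[len(d)] == os.sep` test is what keeps a tree from matching a sibling directory whose name merely starts the same way. A standalone sketch (a hypothetical cut-down class, hard-coding '/' where `TreeMatcher` uses os.sep):

```python
class PrefixTreeMatcher:
    """Minimal TreeMatcher-style matcher with POSIX separators."""
    def __init__(self, directories):
        self.dirs = list(directories)

    def match(self, fpath):
        for d in self.dirs:
            # A bare startswith() would also accept "/src/pkg2/..." for the
            # tree "/src/pkg"; requiring a separator at the boundary fixes it.
            if fpath.startswith(d) and (fpath == d or fpath[len(d)] == "/"):
                return True
        return False

m = PrefixTreeMatcher(["/src/pkg"])
```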
+
+
+class ModuleMatcher(object):
+    """A matcher for modules in a tree."""
+    def __init__(self, module_names):
+        self.modules = list(module_names)
+
+    def __repr__(self):
+        return "<ModuleMatcher %r>" % (self.modules)
+
+    def info(self):
+        """A list of strings for displaying when dumping state."""
+        return self.modules
+
+    def match(self, module_name):
+        """Does `module_name` indicate a module in one of our packages?"""
+        if not module_name:
+            return False
+
+        for m in self.modules:
+            if module_name.startswith(m):
+                if module_name == m:
+                    return True
+                if module_name[len(m)] == '.':
+                    # This is a module in the package
+                    return True
+
+        return False
+
+
+class FnmatchMatcher(object):
+    """A matcher for files by file name pattern."""
+    def __init__(self, pats):
+        self.pats = pats[:]
+        # fnmatch is platform-specific. On Windows, it does the Windows thing
+        # of treating / and \ as equivalent. But on other platforms, we need to
+        # take care of that ourselves.
+        fnpats = (fnmatch.translate(p) for p in pats)
+        fnpats = (p.replace(r"\/", r"[\\/]") for p in fnpats)
+        if env.WINDOWS:
+            # Windows is also case-insensitive.  BTW: the regex docs say that
+            # flags like (?i) have to be at the beginning, but fnmatch puts
+            # them at the end, and having two there seems to work fine.
+            fnpats = (p + "(?i)" for p in fnpats)
+        self.re = re.compile(join_regex(fnpats))
+
+    def __repr__(self):
+        return "<FnmatchMatcher %r>" % self.pats
+
+    def info(self):
+        """A list of strings for displaying when dumping state."""
+        return self.pats
+
+    def match(self, fpath):
+        """Does `fpath` match one of our file name patterns?"""
+        return self.re.match(fpath) is not None
+
+
+def sep(s):
+    """Find the path separator used in this string, or os.sep if none."""
+    sep_match = re.search(r"[\\/]", s)
+    if sep_match:
+        the_sep = sep_match.group(0)
+    else:
+        the_sep = os.sep
+    return the_sep
+
+
+class PathAliases(object):
+    """A collection of aliases for paths.
+
+    When combining data files from remote machines, often the paths to source
+    code are different, for example, due to OS differences, or because of
+    serialized checkouts on continuous integration machines.
+
+    A `PathAliases` object tracks a list of pattern/result pairs, and can
+    map a path through those aliases to produce a unified path.
+
+    """
+    def __init__(self):
+        self.aliases = []
+
+    def add(self, pattern, result):
+        """Add the `pattern`/`result` pair to the list of aliases.
+
+        `pattern` is an `fnmatch`-style pattern.  `result` is a simple
+        string.  When mapping paths, if a path starts with a match against
+        `pattern`, then that match is replaced with `result`.  This models
+        isomorphic source trees being rooted at different places on two
+        different machines.
+
+        `pattern` can't end with a wildcard component, since that would
+        match an entire tree, and not just its root.
+
+        """
+        # The pattern can't end with a wildcard component.
+        pattern = pattern.rstrip(r"\/")
+        if pattern.endswith("*"):
+            raise CoverageException("Pattern must not end with wildcards.")
+        pattern_sep = sep(pattern)
+
+        # The pattern is meant to match a filepath.  Let's make it absolute
+        # unless it already is, or is meant to match any prefix.
+        if not pattern.startswith('*') and not isabs_anywhere(pattern):
+            pattern = abs_file(pattern)
+        pattern += pattern_sep
+
+        # Make a regex from the pattern.  fnmatch always adds a \Z to
+        # match the whole string, which we don't want.
+        regex_pat = fnmatch.translate(pattern).replace(r'\Z(', '(')
+
+        # We want */a/b.py to match on Windows too, so change slash to match
+        # either separator.
+        regex_pat = regex_pat.replace(r"\/", r"[\\/]")
+        # We want case-insensitive matching, so add that flag.
+        regex = re.compile(r"(?i)" + regex_pat)
+
+        # Normalize the result: it must end with a path separator.
+        result_sep = sep(result)
+        result = result.rstrip(r"\/") + result_sep
+        self.aliases.append((regex, result, pattern_sep, result_sep))
+
+    def map(self, path):
+        """Map `path` through the aliases.
+
+        `path` is checked against all of the patterns.  The first pattern to
+        match is used to replace the root of the path with the result root.
+        Only one pattern is ever used.  If no patterns match, `path` is
+        returned unchanged.
+
+        The separator style in the result is made to match that of the result
+        in the alias.
+
+        Returns the mapped path.  If a mapping has happened, this is a
+        canonical path.  If no mapping has happened, it is the original value
+        of `path` unchanged.
+
+        """
+        for regex, result, pattern_sep, result_sep in self.aliases:
+            m = regex.match(path)
+            if m:
+                new = path.replace(m.group(0), result)
+                if pattern_sep != result_sep:
+                    new = new.replace(pattern_sep, result_sep)
+                new = canonical_filename(new)
+                return new
+        return path
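The intent of `map` is easier to see with a concrete pattern/result pair. A reduced sketch with a hand-written regex (the real class derives it from an fnmatch pattern, as `add` shows above); the Jenkins-style path is invented for illustration:

```python
import re

# One alias: any "/jenkins/build<N>/src/" root maps onto "./src/".
aliases = [(re.compile(r"(?i)/jenkins/build\d+/src/"), "./src/")]

def map_path(path):
    for regex, result in aliases:
        m = regex.match(path)
        if m:
            # Replace the matched root, keep the rest of the path.
            return path.replace(m.group(0), result)
    return path  # no alias matched: unchanged

print(map_path("/jenkins/build42/src/pkg/mod.py"))  # ./src/pkg/mod.py
```

This is the mechanism that lets coverage data collected on a CI checkout be reported against a local source tree.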
+
+
+def find_python_files(dirname):
+    """Yield all of the importable Python files in `dirname`, recursively.
+
+    To be importable, the files have to be in a directory with a __init__.py,
+    except for `dirname` itself, which isn't required to have one.  The
+    assumption is that `dirname` was specified directly, so the user knows
+    best, but sub-directories are checked for a __init__.py to be sure we only
+    find the importable files.
+
+    """
+    for i, (dirpath, dirnames, filenames) in enumerate(os.walk(dirname)):
+        if i > 0 and '__init__.py' not in filenames:
+            # If a directory doesn't have __init__.py, then it isn't
+            # importable and neither are its files
+            del dirnames[:]
+            continue
+        for filename in filenames:
+            # We're only interested in files that look like reasonable Python
+            # files: Must end with .py or .pyw, and must not have certain funny
+            # characters that probably mean they are editor junk.
+            if re.match(r"^[^.#~!$@%^&*()+=,]+\.pyw?$", filename):
+                yield os.path.join(dirpath, filename)
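The file-name filter at the end can be probed directly; editor backup, lock, and buffer files all fail the leading character class:

```python
import re

PYFILE_RE = re.compile(r"^[^.#~!$@%^&*()+=,]+\.pyw?$")

candidates = ["mod.py", "gui.pyw", ".#mod.py", "mod.py~", "#buf#"]
accepted = [f for f in candidates if PYFILE_RE.match(f)]
print(accepted)  # ['mod.py', 'gui.pyw']
```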
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/html.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,438 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""HTML reporting for coverage.py."""
+
+import datetime
+import json
+import os
+import shutil
+
+import coverage
+from coverage import env
+from coverage.backward import iitems
+from coverage.files import flat_rootname
+from coverage.misc import CoverageException, Hasher, isolate_module
+from coverage.report import Reporter
+from coverage.results import Numbers
+from coverage.templite import Templite
+
+os = isolate_module(os)
+
+
+# Static files are looked for in a list of places.
+STATIC_PATH = [
+    # The place Debian puts system Javascript libraries.
+    "/usr/share/javascript",
+
+    # Our htmlfiles directory.
+    os.path.join(os.path.dirname(__file__), "htmlfiles"),
+]
+
+
+def data_filename(fname, pkgdir=""):
+    """Return the path to a data file of ours.
+
+    The file is searched for on `STATIC_PATH`, and the first place it's
+    found is returned.
+
+    Each directory in `STATIC_PATH` is searched as-is, and also, if `pkgdir`
+    is provided, at that sub-directory.
+
+    """
+    tried = []
+    for static_dir in STATIC_PATH:
+        static_filename = os.path.join(static_dir, fname)
+        if os.path.exists(static_filename):
+            return static_filename
+        else:
+            tried.append(static_filename)
+        if pkgdir:
+            static_filename = os.path.join(static_dir, pkgdir, fname)
+            if os.path.exists(static_filename):
+                return static_filename
+            else:
+                tried.append(static_filename)
+    raise CoverageException(
+        "Couldn't find static file %r from %r, tried: %r" % (fname, os.getcwd(), tried)
+    )
+
+
+def read_data(fname):
+    """Return the contents of a data file of ours."""
+    with open(data_filename(fname)) as data_file:
+        return data_file.read()
+
+
+def write_html(fname, html):
+    """Write `html` to `fname`, properly encoded."""
+    with open(fname, "wb") as fout:
+        fout.write(html.encode('ascii', 'xmlcharrefreplace'))
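The 'ascii'/'xmlcharrefreplace' combination keeps the written files pure ASCII while preserving any non-ASCII source text as numeric character references (markup escaping itself is handled earlier, by the `escape` helper the templates use):

```python
# Non-ASCII text survives ASCII encoding as HTML character references.
snippet = "caf\u00e9"
encoded = snippet.encode("ascii", "xmlcharrefreplace")
print(encoded)  # b'caf&#233;'
```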
+
+
+class HtmlReporter(Reporter):
+    """HTML reporting."""
+
+    # These files will be copied from the htmlfiles directory to the output
+    # directory.
+    STATIC_FILES = [
+        ("style.css", ""),
+        ("jquery.min.js", "jquery"),
+        ("jquery.debounce.min.js", "jquery-debounce"),
+        ("jquery.hotkeys.js", "jquery-hotkeys"),
+        ("jquery.isonscreen.js", "jquery-isonscreen"),
+        ("jquery.tablesorter.min.js", "jquery-tablesorter"),
+        ("coverage_html.js", ""),
+        ("keybd_closed.png", ""),
+        ("keybd_open.png", ""),
+    ]
+
+    def __init__(self, cov, config):
+        super(HtmlReporter, self).__init__(cov, config)
+        self.directory = None
+        title = self.config.html_title
+        if env.PY2:
+            title = title.decode("utf8")
+        self.template_globals = {
+            'escape': escape,
+            'pair': pair,
+            'title': title,
+            '__url__': coverage.__url__,
+            '__version__': coverage.__version__,
+        }
+        self.source_tmpl = Templite(read_data("pyfile.html"), self.template_globals)
+
+        self.coverage = cov
+
+        self.files = []
+        self.has_arcs = self.coverage.data.has_arcs()
+        self.status = HtmlStatus()
+        self.extra_css = None
+        self.totals = Numbers()
+        self.time_stamp = datetime.datetime.now().strftime('%Y-%m-%d %H:%M')
+
+    def report(self, morfs):
+        """Generate an HTML report for `morfs`.
+
+        `morfs` is a list of modules or file names.
+
+        """
+        assert self.config.html_dir, "must give a directory for html reporting"
+
+        # Read the status data.
+        self.status.read(self.config.html_dir)
+
+        # Check that this run used the same settings as the last run.
+        m = Hasher()
+        m.update(self.config)
+        these_settings = m.hexdigest()
+        if self.status.settings_hash() != these_settings:
+            self.status.reset()
+            self.status.set_settings_hash(these_settings)
+
+        # The user may have extra CSS they want copied.
+        if self.config.extra_css:
+            self.extra_css = os.path.basename(self.config.extra_css)
+
+        # Process all the files.
+        self.report_files(self.html_file, morfs, self.config.html_dir)
+
+        if not self.files:
+            raise CoverageException("No data to report.")
+
+        # Write the index file.
+        self.index_file()
+
+        self.make_local_static_report_files()
+        return self.totals.n_statements and self.totals.pc_covered
+
+    def make_local_static_report_files(self):
+        """Make local instances of static files for HTML report."""
+        # The files we provide must always be copied.
+        for static, pkgdir in self.STATIC_FILES:
+            shutil.copyfile(
+                data_filename(static, pkgdir),
+                os.path.join(self.directory, static)
+            )
+
+        # The user may have extra CSS they want copied.
+        if self.extra_css:
+            shutil.copyfile(
+                self.config.extra_css,
+                os.path.join(self.directory, self.extra_css)
+            )
+
+    def file_hash(self, source, fr):
+        """Compute a hash that changes if the file needs to be re-reported."""
+        m = Hasher()
+        m.update(source)
+        self.coverage.data.add_to_hash(fr.filename, m)
+        return m.hexdigest()
+
+    def html_file(self, fr, analysis):
+        """Generate an HTML file for one source file."""
+        source = fr.source()
+
+        # Find out if the file on disk is already correct.
+        rootname = flat_rootname(fr.relative_filename())
+        this_hash = self.file_hash(source.encode('utf-8'), fr)
+        that_hash = self.status.file_hash(rootname)
+        if this_hash == that_hash:
+            # Nothing has changed to require the file to be reported again.
+            self.files.append(self.status.index_info(rootname))
+            return
+
+        self.status.set_file_hash(rootname, this_hash)
+
+        # Get the numbers for this file.
+        nums = analysis.numbers
+
+        if self.has_arcs:
+            missing_branch_arcs = analysis.missing_branch_arcs()
+            arcs_executed = analysis.arcs_executed()
+
+        # These classes determine which lines are highlighted by default.
+        c_run = "run hide_run"
+        c_exc = "exc"
+        c_mis = "mis"
+        c_par = "par " + c_run
+
+        lines = []
+
+        for lineno, line in enumerate(fr.source_token_lines(), start=1):
+            # Figure out how to mark this line.
+            line_class = []
+            annotate_html = ""
+            annotate_long = ""
+            if lineno in analysis.statements:
+                line_class.append("stm")
+            if lineno in analysis.excluded:
+                line_class.append(c_exc)
+            elif lineno in analysis.missing:
+                line_class.append(c_mis)
+            elif self.has_arcs and lineno in missing_branch_arcs:
+                line_class.append(c_par)
+                shorts = []
+                longs = []
+                for b in missing_branch_arcs[lineno]:
+                    if b < 0:
+                        shorts.append("exit")
+                    else:
+                        shorts.append(b)
+                    longs.append(fr.missing_arc_description(lineno, b, arcs_executed))
+                # 202F is NARROW NO-BREAK SPACE.
+                # 219B is RIGHTWARDS ARROW WITH STROKE.
+                short_fmt = "%s&#x202F;&#x219B;&#x202F;%s"
+                annotate_html = ",&nbsp;&nbsp; ".join(short_fmt % (lineno, d) for d in shorts)
+
+                if len(longs) == 1:
+                    annotate_long = longs[0]
+                else:
+                    annotate_long = "%d missed branches: %s" % (
+                        len(longs),
+                        ", ".join("%d) %s" % (num, ann_long)
+                            for num, ann_long in enumerate(longs, start=1)),
+                    )
+            elif lineno in analysis.statements:
+                line_class.append(c_run)
+
+            # Build the HTML for the line.
+            html = []
+            for tok_type, tok_text in line:
+                if tok_type == "ws":
+                    html.append(escape(tok_text))
+                else:
+                    tok_html = escape(tok_text) or '&nbsp;'
+                    html.append(
+                        '<span class="%s">%s</span>' % (tok_type, tok_html)
+                    )
+
+            lines.append({
+                'html': ''.join(html),
+                'number': lineno,
+                'class': ' '.join(line_class) or "pln",
+                'annotate': annotate_html,
+                'annotate_long': annotate_long,
+            })
+
+        # Write the HTML page for this file.
+        html = self.source_tmpl.render({
+            'c_exc': c_exc,
+            'c_mis': c_mis,
+            'c_par': c_par,
+            'c_run': c_run,
+            'has_arcs': self.has_arcs,
+            'extra_css': self.extra_css,
+            'fr': fr,
+            'nums': nums,
+            'lines': lines,
+            'time_stamp': self.time_stamp,
+        })
+
+        html_filename = rootname + ".html"
+        html_path = os.path.join(self.directory, html_filename)
+        write_html(html_path, html)
+
+        # Save this file's information for the index file.
+        index_info = {
+            'nums': nums,
+            'html_filename': html_filename,
+            'relative_filename': fr.relative_filename(),
+        }
+        self.files.append(index_info)
+        self.status.set_index_info(rootname, index_info)
+
+    def index_file(self):
+        """Write the index.html file for this report."""
+        index_tmpl = Templite(read_data("index.html"), self.template_globals)
+
+        self.totals = sum(f['nums'] for f in self.files)
+
+        html = index_tmpl.render({
+            'has_arcs': self.has_arcs,
+            'extra_css': self.extra_css,
+            'files': self.files,
+            'totals': self.totals,
+            'time_stamp': self.time_stamp,
+        })
+
+        write_html(os.path.join(self.directory, "index.html"), html)
+
+        # Write the latest hashes for next time.
+        self.status.write(self.directory)
+
+
+class HtmlStatus(object):
+    """The status information we keep to support incremental reporting."""
+
+    STATUS_FILE = "status.json"
+    STATUS_FORMAT = 1
+
+    #           pylint: disable=wrong-spelling-in-comment,useless-suppression
+    #  The data looks like:
+    #
+    #  {
+    #      'format': 1,
+    #      'settings': '540ee119c15d52a68a53fe6f0897346d',
+    #      'version': '4.0a1',
+    #      'files': {
+    #          'cogapp___init__': {
+    #              'hash': 'e45581a5b48f879f301c0f30bf77a50c',
+    #              'index': {
+    #                  'html_filename': 'cogapp___init__.html',
+    #                  'name': 'cogapp/__init__',
+    #                  'nums': <coverage.results.Numbers object at 0x10ab7ed0>,
+    #              }
+    #          },
+    #          ...
+    #          'cogapp_whiteutils': {
+    #              'hash': '8504bb427fc488c4176809ded0277d51',
+    #              'index': {
+    #                  'html_filename': 'cogapp_whiteutils.html',
+    #                  'name': 'cogapp/whiteutils',
+    #                  'nums': <coverage.results.Numbers object at 0x10ab7d90>,
+    #              }
+    #          },
+    #      },
+    #  }
+
+    def __init__(self):
+        self.reset()
+
+    def reset(self):
+        """Initialize to empty."""
+        self.settings = ''
+        self.files = {}
+
+    def read(self, directory):
+        """Read the last status in `directory`."""
+        usable = False
+        try:
+            status_file = os.path.join(directory, self.STATUS_FILE)
+            with open(status_file, "r") as fstatus:
+                status = json.load(fstatus)
+        except (IOError, ValueError):
+            usable = False
+        else:
+            usable = True
+            if status['format'] != self.STATUS_FORMAT:
+                usable = False
+            elif status['version'] != coverage.__version__:
+                usable = False
+
+        if usable:
+            self.files = {}
+            for filename, fileinfo in iitems(status['files']):
+                fileinfo['index']['nums'] = Numbers(*fileinfo['index']['nums'])
+                self.files[filename] = fileinfo
+            self.settings = status['settings']
+        else:
+            self.reset()
+
+    def write(self, directory):
+        """Write the current status to `directory`."""
+        status_file = os.path.join(directory, self.STATUS_FILE)
+        files = {}
+        for filename, fileinfo in iitems(self.files):
+            fileinfo['index']['nums'] = fileinfo['index']['nums'].init_args()
+            files[filename] = fileinfo
+
+        status = {
+            'format': self.STATUS_FORMAT,
+            'version': coverage.__version__,
+            'settings': self.settings,
+            'files': files,
+        }
+        with open(status_file, "w") as fout:
+            json.dump(status, fout)
+
+        # Older versions of ShiningPanda look for the old name, status.dat.
+        # Accommodate them if we are running under Jenkins.
+        # https://issues.jenkins-ci.org/browse/JENKINS-28428
+        if "JENKINS_URL" in os.environ:
+            with open(os.path.join(directory, "status.dat"), "w") as dat:
+                dat.write("https://issues.jenkins-ci.org/browse/JENKINS-28428\n")
+
+    def settings_hash(self):
+        """Get the hash of the coverage.py settings."""
+        return self.settings
+
+    def set_settings_hash(self, settings):
+        """Set the hash of the coverage.py settings."""
+        self.settings = settings
+
+    def file_hash(self, fname):
+        """Get the hash of `fname`'s contents."""
+        return self.files.get(fname, {}).get('hash', '')
+
+    def set_file_hash(self, fname, val):
+        """Set the hash of `fname`'s contents."""
+        self.files.setdefault(fname, {})['hash'] = val
+
+    def index_info(self, fname):
+        """Get the information for index.html for `fname`."""
+        return self.files.get(fname, {}).get('index', {})
+
+    def set_index_info(self, fname, info):
+        """Set the information for index.html for `fname`."""
+        self.files.setdefault(fname, {})['index'] = info
+
+
+# Helpers for templates and generating HTML
+
+def escape(t):
+    """HTML-escape the text in `t`.
+
+    This is only suitable for HTML text, not attributes.
+
+    """
+    # Convert HTML special chars into HTML entities.
+    return t.replace("&", "&amp;").replace("<", "&lt;")
+
+
+def pair(ratio):
+    """Format a pair of numbers so JavaScript can read them in an attribute."""
+    return "%s %s" % ratio
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/misc.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,259 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Miscellaneous stuff for coverage.py."""
+
+import errno
+import hashlib
+import inspect
+import locale
+import os
+import sys
+import types
+
+from coverage import env
+from coverage.backward import string_class, to_bytes, unicode_class
+
+ISOLATED_MODULES = {}
+
+
+def isolate_module(mod):
+    """Copy a module so that we are isolated from aggressive mocking.
+
+    If a test suite mocks os.path.exists (for example), and then we need to use
+    it during the test, everything will get tangled up if we use their mock.
+    Making a copy of the module when we import it will isolate coverage.py from
+    those complications.
+    """
+    if mod not in ISOLATED_MODULES:
+        new_mod = types.ModuleType(mod.__name__)
+        ISOLATED_MODULES[mod] = new_mod
+        for name in dir(mod):
+            value = getattr(mod, name)
+            if isinstance(value, types.ModuleType):
+                value = isolate_module(value)
+            setattr(new_mod, name, value)
+    return ISOLATED_MODULES[mod]
+
+os = isolate_module(os)
+
+
+# Use PyContracts for assertion testing on parameters and returns, but only if
+# we are running our own test suite.
+if env.TESTING:
+    from contracts import contract              # pylint: disable=unused-import
+    from contracts import new_contract as raw_new_contract
+
+    def new_contract(*args, **kwargs):
+        """A proxy for contracts.new_contract that doesn't mind happening twice."""
+        try:
+            return raw_new_contract(*args, **kwargs)
+        except ValueError:
+            # During meta-coverage, this module is imported twice, and
+            # PyContracts doesn't like redefining contracts. It's OK.
+            pass
+
+    # Define contract words that PyContracts doesn't have.
+    new_contract('bytes', lambda v: isinstance(v, bytes))
+    if env.PY3:
+        new_contract('unicode', lambda v: isinstance(v, unicode_class))
+else:                                           # pragma: not covered
+    # We aren't using real PyContracts, so just define a no-op decorator as a
+    # stunt double.
+    def contract(**unused):
+        """Dummy no-op implementation of `contract`."""
+        return lambda func: func
+
+    def new_contract(*args_unused, **kwargs_unused):
+        """Dummy no-op implementation of `new_contract`."""
+        pass
+
+
+def nice_pair(pair):
+    """Make a nice string representation of a pair of numbers.
+
+    If the numbers are equal, just return the number, otherwise return the pair
+    with a dash between them, indicating the range.
+
+    """
+    start, end = pair
+    if start == end:
+        return "%d" % start
+    else:
+        return "%d-%d" % (start, end)
+
+
+def format_lines(statements, lines):
+    """Nicely format a list of line numbers.
+
+    Format a list of line numbers for printing by coalescing groups of lines as
+    long as the lines represent consecutive statements.  This will coalesce
+    even if there are gaps between statements.
+
+    For example, if `statements` is [1,2,3,4,5,10,11,12,13,14] and
+    `lines` is [1,2,5,10,11,13,14] then the result will be "1-2, 5-11, 13-14".
+
+    """
+    pairs = []
+    i = 0
+    j = 0
+    start = None
+    statements = sorted(statements)
+    lines = sorted(lines)
+    while i < len(statements) and j < len(lines):
+        if statements[i] == lines[j]:
+            if start is None:
+                start = lines[j]
+            end = lines[j]
+            j += 1
+        elif start:
+            pairs.append((start, end))
+            start = None
+        i += 1
+    if start:
+        pairs.append((start, end))
+    ret = ', '.join(map(nice_pair, pairs))
+    return ret
+
+
+def expensive(fn):
+    """A decorator to indicate that a method shouldn't be called more than once.
+
+    Normally, this does nothing.  During testing, this raises an exception if
+    called more than once.
+
+    """
+    if env.TESTING:
+        attr = "_once_" + fn.__name__
+
+        def _wrapped(self):
+            """Inner function that checks the cache."""
+            if hasattr(self, attr):
+                raise Exception("Shouldn't have called %s more than once" % fn.__name__)
+            setattr(self, attr, True)
+            return fn(self)
+        return _wrapped
+    else:
+        return fn
+
+
+def bool_or_none(b):
+    """Return bool(b), but preserve None."""
+    if b is None:
+        return None
+    else:
+        return bool(b)
+
+
+def join_regex(regexes):
+    """Combine a list of regexes into one that matches any of them."""
+    return "|".join("(?:%s)" % r for r in regexes)
+
+
+def file_be_gone(path):
+    """Remove a file, and don't get annoyed if it doesn't exist."""
+    try:
+        os.remove(path)
+    except OSError as e:
+        if e.errno != errno.ENOENT:
+            raise
+
+
+def output_encoding(outfile=None):
+    """Determine the encoding to use for output written to `outfile` or stdout."""
+    if outfile is None:
+        outfile = sys.stdout
+    encoding = (
+        getattr(outfile, "encoding", None) or
+        getattr(sys.__stdout__, "encoding", None) or
+        locale.getpreferredencoding()
+    )
+    return encoding
+
+
+class Hasher(object):
+    """Hashes Python data into md5."""
+    def __init__(self):
+        self.md5 = hashlib.md5()
+
+    def update(self, v):
+        """Add `v` to the hash, recursively if needed."""
+        self.md5.update(to_bytes(str(type(v))))
+        if isinstance(v, string_class):
+            self.md5.update(to_bytes(v))
+        elif isinstance(v, bytes):
+            self.md5.update(v)
+        elif v is None:
+            pass
+        elif isinstance(v, (int, float)):
+            self.md5.update(to_bytes(str(v)))
+        elif isinstance(v, (tuple, list)):
+            for e in v:
+                self.update(e)
+        elif isinstance(v, dict):
+            keys = v.keys()
+            for k in sorted(keys):
+                self.update(k)
+                self.update(v[k])
+        else:
+            for k in dir(v):
+                if k.startswith('__'):
+                    continue
+                a = getattr(v, k)
+                if inspect.isroutine(a):
+                    continue
+                self.update(k)
+                self.update(a)
+
+    def hexdigest(self):
+        """Retrieve the hex digest of the hash."""
+        return self.md5.hexdigest()
+
+
+def _needs_to_implement(that, func_name):
+    """Helper to raise NotImplementedError in interface stubs."""
+    if hasattr(that, "_coverage_plugin_name"):
+        thing = "Plugin"
+        name = that._coverage_plugin_name
+    else:
+        thing = "Class"
+        klass = that.__class__
+        name = "{klass.__module__}.{klass.__name__}".format(klass=klass)
+
+    raise NotImplementedError(
+        "{thing} {name!r} needs to implement {func_name}()".format(
+            thing=thing, name=name, func_name=func_name
+            )
+        )
+
+
+class CoverageException(Exception):
+    """An exception specific to coverage.py."""
+    pass
+
+
+class NoSource(CoverageException):
+    """We couldn't find the source for a module."""
+    pass
+
+
+class NoCode(NoSource):
+    """We couldn't find any code at all."""
+    pass
+
+
+class NotPython(CoverageException):
+    """A source file turned out not to be parsable Python."""
+    pass
+
+
+class ExceptionDuringRun(CoverageException):
+    """An exception happened while running customer code.
+
+    Construct it with three arguments, the values from `sys.exc_info`.
+
+    """
+    pass
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/monkey.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,83 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Monkey-patching to make coverage.py work right in some cases."""
+
+import multiprocessing
+import multiprocessing.process
+import sys
+
+# An attribute that will be set on modules to indicate that they have been
+# monkey-patched.
+PATCHED_MARKER = "_coverage$patched"
+
+if sys.version_info >= (3, 4):
+    klass = multiprocessing.process.BaseProcess
+else:
+    klass = multiprocessing.Process
+
+original_bootstrap = klass._bootstrap
+
+
+class ProcessWithCoverage(klass):
+    """A replacement for multiprocessing.Process that starts coverage."""
+    def _bootstrap(self):
+        """Wrapper around _bootstrap to start coverage."""
+        from coverage import Coverage
+        cov = Coverage(data_suffix=True)
+        cov.start()
+        try:
+            return original_bootstrap(self)
+        finally:
+            cov.stop()
+            cov.save()
+
+
+class Stowaway(object):
+    """An object to pickle, so when it is unpickled, it can apply the monkey-patch."""
+    def __getstate__(self):
+        return {}
+
+    def __setstate__(self, state_unused):
+        patch_multiprocessing()
+
+
+def patch_multiprocessing():
+    """Monkey-patch the multiprocessing module.
+
+    This enables coverage measurement of processes started by multiprocessing.
+    This is wildly experimental!
+
+    """
+    if hasattr(multiprocessing, PATCHED_MARKER):
+        return
+
+    if sys.version_info >= (3, 4):
+        klass._bootstrap = ProcessWithCoverage._bootstrap
+    else:
+        multiprocessing.Process = ProcessWithCoverage
+
+    # When spawning processes rather than forking them, we have no state in the
+    # new process.  We sneak in there with a Stowaway: we stuff one of our own
+    # objects into the data that gets pickled and sent to the sub-process. When
+    # the Stowaway is unpickled, its __setstate__ method is called, which
+    # re-applies the monkey-patch.
+    # Windows only spawns, so this is needed to keep Windows working.
+    try:
+        from multiprocessing import spawn           # pylint: disable=no-name-in-module
+        original_get_preparation_data = spawn.get_preparation_data
+    except (ImportError, AttributeError):
+        pass
+    else:
+        def get_preparation_data_with_stowaway(name):
+            """Get the original preparation data, and also insert our stowaway."""
+            d = original_get_preparation_data(name)
+            d['stowaway'] = Stowaway()
+            return d
+
+        spawn.get_preparation_data = get_preparation_data_with_stowaway
+
+    setattr(multiprocessing, PATCHED_MARKER, True)
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/parser.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,1034 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Code parsing for coverage.py."""
+
+import ast
+import collections
+import os
+import re
+import token
+import tokenize
+
+from coverage import env
+from coverage.backward import range    # pylint: disable=redefined-builtin
+from coverage.backward import bytes_to_ints, string_class
+from coverage.bytecode import CodeObjects
+from coverage.debug import short_stack
+from coverage.misc import contract, new_contract, nice_pair, join_regex
+from coverage.misc import CoverageException, NoSource, NotPython
+from coverage.phystokens import compile_unicode, generate_tokens, neuter_encoding_declaration
+
+
+class PythonParser(object):
+    """Parse code to find executable lines, excluded lines, etc.
+
+    This information is all based on static analysis: no code execution is
+    involved.
+
+    """
+    @contract(text='unicode|None')
+    def __init__(self, text=None, filename=None, exclude=None):
+        """
+        Source can be provided as `text`, the text itself, or `filename`, from
+        which the text will be read.  Excluded lines are those that match
+        `exclude`, a regex.
+
+        """
+        assert text or filename, "PythonParser needs either text or filename"
+        self.filename = filename or "<code>"
+        self.text = text
+        if not self.text:
+            from coverage.python import get_python_source
+            try:
+                self.text = get_python_source(self.filename)
+            except IOError as err:
+                raise NoSource(
+                    "No source for code: '%s': %s" % (self.filename, err)
+                )
+
+        self.exclude = exclude
+
+        # The text lines of the parsed code.
+        self.lines = self.text.split('\n')
+
+        # The normalized line numbers of the statements in the code. Exclusions
+        # are taken into account, and statements are adjusted to their first
+        # lines.
+        self.statements = set()
+
+        # The normalized line numbers of the excluded lines in the code,
+        # adjusted to their first lines.
+        self.excluded = set()
+
+        # The raw_* attributes are only used in this class, and in
+        # lab/parser.py to show how this class is working.
+
+        # The line numbers that start statements, as reported by the line
+        # number table in the bytecode.
+        self.raw_statements = set()
+
+        # The raw line numbers of excluded lines of code, as marked by pragmas.
+        self.raw_excluded = set()
+
+        # The line numbers of class and function definitions.
+        self.raw_classdefs = set()
+
+        # The line numbers of docstring lines.
+        self.raw_docstrings = set()
+
+        # Internal detail, used by lab/parser.py.
+        self.show_tokens = False
+
+        # A dict mapping line numbers to lexical statement starts for
+        # multi-line statements.
+        self._multiline = {}
+
+        # Lazily-created ByteParser, arc data, and missing arc descriptions.
+        self._byte_parser = None
+        self._all_arcs = None
+        self._missing_arc_fragments = None
+
+    @property
+    def byte_parser(self):
+        """Create a ByteParser on demand."""
+        if not self._byte_parser:
+            self._byte_parser = ByteParser(self.text, filename=self.filename)
+        return self._byte_parser
+
+    def lines_matching(self, *regexes):
+        """Find the lines matching one of a list of regexes.
+
+        Returns a set of line numbers, the lines that contain a match for one
+        of the regexes in `regexes`.  The entire line needn't match, just a
+        part of it.
+
+        """
+        combined = join_regex(regexes)
+        if env.PY2:
+            combined = combined.decode("utf8")
+        regex_c = re.compile(combined)
+        matches = set()
+        for i, ltext in enumerate(self.lines, start=1):
+            if regex_c.search(ltext):
+                matches.add(i)
+        return matches
+
+    def _raw_parse(self):
+        """Parse the source to find the interesting facts about its lines.
+
+        A handful of attributes are updated.
+
+        """
+        # Find lines which match an exclusion pattern.
+        if self.exclude:
+            self.raw_excluded = self.lines_matching(self.exclude)
+
+        # Tokenize, to find excluded suites, to find docstrings, and to find
+        # multi-line statements.
+        indent = 0
+        exclude_indent = 0
+        excluding = False
+        excluding_decorators = False
+        prev_toktype = token.INDENT
+        first_line = None
+        empty = True
+        first_on_line = True
+
+        tokgen = generate_tokens(self.text)
+        for toktype, ttext, (slineno, _), (elineno, _), ltext in tokgen:
+            if self.show_tokens:                # pragma: not covered
+                print("%10s %5s %-20r %r" % (
+                    tokenize.tok_name.get(toktype, toktype),
+                    nice_pair((slineno, elineno)), ttext, ltext
+                ))
+            if toktype == token.INDENT:
+                indent += 1
+            elif toktype == token.DEDENT:
+                indent -= 1
+            elif toktype == token.NAME:
+                if ttext == 'class':
+                    # Class definitions look like branches in the bytecode, so
+                    # we need to exclude them.  The simplest way is to note the
+                    # lines with the 'class' keyword.
+                    self.raw_classdefs.add(slineno)
+            elif toktype == token.OP:
+                if ttext == ':':
+                    should_exclude = (elineno in self.raw_excluded) or excluding_decorators
+                    if not excluding and should_exclude:
+                        # Start excluding a suite.  We trigger off of the colon
+                        # token so that the #pragma comment will be recognized on
+                        # the same line as the colon.
+                        self.raw_excluded.add(elineno)
+                        exclude_indent = indent
+                        excluding = True
+                        excluding_decorators = False
+                elif ttext == '@' and first_on_line:
+                    # A decorator.
+                    if elineno in self.raw_excluded:
+                        excluding_decorators = True
+                    if excluding_decorators:
+                        self.raw_excluded.add(elineno)
+            elif toktype == token.STRING and prev_toktype == token.INDENT:
+                # Strings that are first on an indented line are docstrings.
+                # (a trick from trace.py in the stdlib.) This works for
+                # 99.9999% of cases.  For the rest (!) see:
+                # http://stackoverflow.com/questions/1769332/x/1769794#1769794
+                self.raw_docstrings.update(range(slineno, elineno+1))
+            elif toktype == token.NEWLINE:
+                if first_line is not None and elineno != first_line:
+                    # We're at the end of a line, and we've ended on a
+                    # different line than the first line of the statement,
+                    # so record a multi-line range.
+                    for l in range(first_line, elineno+1):
+                        self._multiline[l] = first_line
+                first_line = None
+                first_on_line = True
+
+            if ttext.strip() and toktype != tokenize.COMMENT:
+                # A non-whitespace token.
+                empty = False
+                if first_line is None:
+                    # The token is not whitespace, and is the first in a
+                    # statement.
+                    first_line = slineno
+                    # Check whether to end an excluded suite.
+                    if excluding and indent <= exclude_indent:
+                        excluding = False
+                    if excluding:
+                        self.raw_excluded.add(elineno)
+                    first_on_line = False
+
+            prev_toktype = toktype
+
+        # Find the starts of the executable statements.
+        if not empty:
+            self.raw_statements.update(self.byte_parser._find_statements())
+
+    def first_line(self, line):
+        """Return the first line number of the statement including `line`."""
+        return self._multiline.get(line, line)
+
+    def first_lines(self, lines):
+        """Map the line numbers in `lines` to the correct first line of the
+        statement.
+
+        Returns a set of the first lines.
+
+        """
+        return set(self.first_line(l) for l in lines)
+
+    def translate_lines(self, lines):
+        """Implement `FileReporter.translate_lines`."""
+        return self.first_lines(lines)
+
+    def translate_arcs(self, arcs):
+        """Implement `FileReporter.translate_arcs`."""
+        return [(self.first_line(a), self.first_line(b)) for (a, b) in arcs]
+
+    def parse_source(self):
+        """Parse source text to find executable lines, excluded lines, etc.
+
+        Sets the .excluded and .statements attributes, normalized to the first
+        line of multi-line statements.
+
+        """
+        try:
+            self._raw_parse()
+        except (tokenize.TokenError, IndentationError) as err:
+            if hasattr(err, "lineno"):
+                lineno = err.lineno         # IndentationError
+            else:
+                lineno = err.args[1][0]     # TokenError
+            raise NotPython(
+                u"Couldn't parse '%s' as Python source: '%s' at line %d" % (
+                    self.filename, err.args[0], lineno
+                )
+            )
+
+        self.excluded = self.first_lines(self.raw_excluded)
+
+        ignore = self.excluded | self.raw_docstrings
+        starts = self.raw_statements - ignore
+        self.statements = self.first_lines(starts) - ignore
+
+    def arcs(self):
+        """Get information about the arcs available in the code.
+
+        Returns a set of line number pairs.  Line numbers have been normalized
+        to the first line of multi-line statements.
+
+        """
+        if self._all_arcs is None:
+            self._analyze_ast()
+        return self._all_arcs
+
+    def _analyze_ast(self):
+        """Run the AstArcAnalyzer and save its results.
+
+        `_all_arcs` is the set of arcs in the code.
+
+        """
+        aaa = AstArcAnalyzer(self.text, self.raw_statements, self._multiline)
+        aaa.analyze()
+
+        self._all_arcs = set()
+        for l1, l2 in aaa.arcs:
+            fl1 = self.first_line(l1)
+            fl2 = self.first_line(l2)
+            if fl1 != fl2:
+                self._all_arcs.add((fl1, fl2))
+
+        self._missing_arc_fragments = aaa.missing_arc_fragments
+
+    def exit_counts(self):
+        """Get a count of exits from each line.
+
+        Excluded lines are excluded.
+
+        """
+        exit_counts = collections.defaultdict(int)
+        for l1, l2 in self.arcs():
+            if l1 < 0:
+                # Don't ever report -1 as a line number
+                continue
+            if l1 in self.excluded:
+                # Don't report excluded lines as line numbers.
+                continue
+            if l2 in self.excluded:
+                # Arcs to excluded lines shouldn't count.
+                continue
+            exit_counts[l1] += 1
+
+        # Class definitions have one extra exit, so remove one for each:
+        for l in self.raw_classdefs:
+            # Ensure key is there: class definitions can include excluded lines.
+            if l in exit_counts:
+                exit_counts[l] -= 1
+
+        return exit_counts
+
+    def missing_arc_description(self, start, end, executed_arcs=None):
+        """Provide an English sentence describing a missing arc."""
+        if self._missing_arc_fragments is None:
+            self._analyze_ast()
+
+        actual_start = start
+
+        if (
+            executed_arcs and
+            end < 0 and end == -start and
+            (end, start) not in executed_arcs and
+            (end, start) in self._missing_arc_fragments
+        ):
+            # It's a one-line callable, and we never even started it,
+            # and we have a message about not starting it.
+            start, end = end, start
+
+        fragment_pairs = self._missing_arc_fragments.get((start, end), [(None, None)])
+
+        msgs = []
+        for fragment_pair in fragment_pairs:
+            smsg, emsg = fragment_pair
+
+            if emsg is None:
+                if end < 0:
+                    # Hmm, maybe we have a one-line callable, let's check.
+                    if (-end, end) in self._missing_arc_fragments:
+                        return self.missing_arc_description(-end, end)
+                    emsg = "didn't jump to the function exit"
+                else:
+                    emsg = "didn't jump to line {lineno}"
+            emsg = emsg.format(lineno=end)
+
+            msg = "line {start} {emsg}".format(start=actual_start, emsg=emsg)
+            if smsg is not None:
+                msg += ", because {smsg}".format(smsg=smsg.format(lineno=actual_start))
+
+            msgs.append(msg)
+
+        return " or ".join(msgs)
+
+
+class ByteParser(object):
+    """Parse bytecode to understand the structure of code."""
+
+    @contract(text='unicode')
+    def __init__(self, text, code=None, filename=None):
+        self.text = text
+        if code:
+            self.code = code
+        else:
+            try:
+                self.code = compile_unicode(text, filename, "exec")
+            except SyntaxError as synerr:
+                raise NotPython(
+                    u"Couldn't parse '%s' as Python source: '%s' at line %d" % (
+                        filename, synerr.msg, synerr.lineno
+                    )
+                )
+
+        # Alternative Python implementations don't always provide all the
+        # attributes on code objects that we need to do the analysis.
+        for attr in ['co_lnotab', 'co_firstlineno', 'co_consts']:
+            if not hasattr(self.code, attr):
+                raise CoverageException(
+                    "This implementation of Python doesn't support code analysis.\n"
+                    "Run coverage.py under CPython for this command."
+                )
+
+    def child_parsers(self):
+        """Iterate over all the code objects nested within this one.
+
+        The iteration includes `self` as its first value.
+
+        """
+        children = CodeObjects(self.code)
+        return (ByteParser(self.text, code=c) for c in children)
+
+    def _bytes_lines(self):
+        """Map byte offsets to line numbers in `code`.
+
+        Uses co_lnotab described in Python/compile.c to map byte offsets to
+        line numbers.  Produces a sequence: (b0, l0), (b1, l1), ...
+
+        Only byte offsets that correspond to line numbers are included in the
+        results.
+
+        """
+        # Adapted from dis.py in the standard library.
+        byte_increments = bytes_to_ints(self.code.co_lnotab[0::2])
+        line_increments = bytes_to_ints(self.code.co_lnotab[1::2])
+
+        last_line_num = None
+        line_num = self.code.co_firstlineno
+        byte_num = 0
+        for byte_incr, line_incr in zip(byte_increments, line_increments):
+            if byte_incr:
+                if line_num != last_line_num:
+                    yield (byte_num, line_num)
+                    last_line_num = line_num
+                byte_num += byte_incr
+            line_num += line_incr
+        if line_num != last_line_num:
+            yield (byte_num, line_num)
+
+    def _find_statements(self):
+        """Find the statements in `self.code`.
+
+        Produce a sequence of line numbers that start statements.  Recurses
+        into all code objects reachable from `self.code`.
+
+        """
+        for bp in self.child_parsers():
+            # Get all of the lineno information from this code.
+            for _, l in bp._bytes_lines():
+                yield l
+
+
+#
+# AST analysis
+#
+
+class LoopBlock(object):
+    """A block on the block stack representing a `for` or `while` loop."""
+    def __init__(self, start):
+        self.start = start
+        self.break_exits = set()
+
+
+class FunctionBlock(object):
+    """A block on the block stack representing a function definition."""
+    def __init__(self, start, name):
+        self.start = start
+        self.name = name
+
+
+class TryBlock(object):
+    """A block on the block stack representing a `try` block."""
+    def __init__(self, handler_start=None, final_start=None):
+        self.handler_start = handler_start
+        self.final_start = final_start
+        self.break_from = set()
+        self.continue_from = set()
+        self.return_from = set()
+        self.raise_from = set()
+
+
+class ArcStart(collections.namedtuple("Arc", "lineno, cause")):
+    """The information needed to start an arc.
+
+    `lineno` is the line number the arc starts from.  `cause` is a fragment
+    used as the startmsg for AstArcAnalyzer.missing_arc_fragments.
+
+    """
+    def __new__(cls, lineno, cause=None):
+        return super(ArcStart, cls).__new__(cls, lineno, cause)
+
+
+# Define contract words that PyContracts doesn't have.
+# ArcStarts is for a list or set of ArcStart objects.
+new_contract('ArcStarts', lambda seq: all(isinstance(x, ArcStart) for x in seq))
+
+
+class AstArcAnalyzer(object):
+    """Analyze source text with an AST to find executable code paths."""
+
+    @contract(text='unicode', statements=set)
+    def __init__(self, text, statements, multiline):
+        self.root_node = ast.parse(neuter_encoding_declaration(text))
+        # TODO: I think this is happening in too many places.
+        self.statements = set(multiline.get(l, l) for l in statements)
+        self.multiline = multiline
+
+        if int(os.environ.get("COVERAGE_ASTDUMP", 0)):      # pragma: debugging
+            # Dump the AST so that failing tests have helpful output.
+            print("Statements: {}".format(self.statements))
+            print("Multiline map: {}".format(self.multiline))
+            ast_dump(self.root_node)
+
+        self.arcs = set()
+
+        # A map from arc pairs to a pair of sentence fragments: (startmsg, endmsg).
+        # For an arc from line 17, they should be usable like:
+        #    "Line 17 {endmsg}, because {startmsg}"
+        self.missing_arc_fragments = collections.defaultdict(list)
+        self.block_stack = []
+
+        self.debug = bool(int(os.environ.get("COVERAGE_TRACK_ARCS", 0)))
+
+    def analyze(self):
+        """Examine the AST tree from `root_node` to determine possible arcs.
+
+        This sets the `arcs` attribute to be a set of (from, to) line number
+        pairs.
+
+        """
+        for node in ast.walk(self.root_node):
+            node_name = node.__class__.__name__
+            code_object_handler = getattr(self, "_code_object__" + node_name, None)
+            if code_object_handler is not None:
+                code_object_handler(node)
+
+    def add_arc(self, start, end, smsg=None, emsg=None):
+        """Add an arc, including message fragments to use if it is missing."""
+        if self.debug:
+            print("\nAdding arc: ({}, {}): {!r}, {!r}".format(start, end, smsg, emsg))
+            print(short_stack(limit=6))
+        self.arcs.add((start, end))
+
+        if smsg is not None or emsg is not None:
+            self.missing_arc_fragments[(start, end)].append((smsg, emsg))
+
+    def nearest_blocks(self):
+        """Yield the blocks in nearest-to-farthest order."""
+        return reversed(self.block_stack)
+
+    @contract(returns=int)
+    def line_for_node(self, node):
+        """What is the right line number to use for this node?
+
+        This dispatches to _line__Node functions where needed.
+
+        """
+        node_name = node.__class__.__name__
+        handler = getattr(self, "_line__" + node_name, None)
+        if handler is not None:
+            return handler(node)
+        else:
+            return node.lineno
+
+    def _line__Assign(self, node):
+        return self.line_for_node(node.value)
+
+    def _line__Dict(self, node):
+        # Python 3.5 changed how dict literals are made.
+        if env.PYVERSION >= (3, 5) and node.keys:
+            if node.keys[0] is not None:
+                return node.keys[0].lineno
+            else:
+                # Unpacked dict literals `{**{'a':1}}` have None as the key,
+                # use the value in that case.
+                return node.values[0].lineno
+        else:
+            return node.lineno
+
+    def _line__List(self, node):
+        if node.elts:
+            return self.line_for_node(node.elts[0])
+        else:
+            return node.lineno
+
+    def _line__Module(self, node):
+        if node.body:
+            return self.line_for_node(node.body[0])
+        else:
+            # Modules have no line number, they always start at 1.
+            return 1
+
+    OK_TO_DEFAULT = set([
+        "Assign", "Assert", "AugAssign", "Delete", "Exec", "Expr", "Global",
+        "Import", "ImportFrom", "Nonlocal", "Pass", "Print",
+    ])
+
+    @contract(returns='ArcStarts')
+    def add_arcs(self, node):
+        """Add the arcs for `node`.
+
+        Return a set of ArcStarts, exits from this node to the next.
+
+        """
+        node_name = node.__class__.__name__
+        handler = getattr(self, "_handle__" + node_name, None)
+        if handler is not None:
+            return handler(node)
+
+        if 0:
+            node_name = node.__class__.__name__
+            if node_name not in self.OK_TO_DEFAULT:
+                print("*** Unhandled: {0}".format(node))
+        return set([ArcStart(self.line_for_node(node), cause=None)])
+
+    @contract(returns='ArcStarts')
+    def add_body_arcs(self, body, from_start=None, prev_starts=None):
+        """Add arcs for the body of a compound statement.
+
+        `body` is the body node.  `from_start` is a single `ArcStart` that can
+        be the previous line in flow before this body.  `prev_starts` is a set
+        of ArcStarts that can be the previous line.  Only one of them should be
+        given.
+
+        Returns a set of ArcStarts, the exits from this body.
+
+        """
+        if prev_starts is None:
+            prev_starts = set([from_start])
+        for body_node in body:
+            lineno = self.line_for_node(body_node)
+            first_line = self.multiline.get(lineno, lineno)
+            if first_line not in self.statements:
+                continue
+            for prev_start in prev_starts:
+                self.add_arc(prev_start.lineno, lineno, prev_start.cause)
+            prev_starts = self.add_arcs(body_node)
+        return prev_starts
+
+    def is_constant_expr(self, node):
+        """Is this a compile-time constant?"""
+        node_name = node.__class__.__name__
+        if node_name in ["NameConstant", "Num"]:
+            return True
+        elif node_name == "Name":
+            if env.PY3 and node.id in ["True", "False", "None"]:
+                return True
+        return False
+
+    # tests to write:
+    # TODO: while EXPR:
+    # TODO: while False:
+    # TODO: listcomps hidden deep in other expressions
+    # TODO: listcomps hidden in lists: x = [[i for i in range(10)]]
+    # TODO: nested function definitions
+
+    @contract(exits='ArcStarts')
+    def process_break_exits(self, exits):
+        """Add arcs due to jumps from `exits` being breaks."""
+        for block in self.nearest_blocks():
+            if isinstance(block, LoopBlock):
+                block.break_exits.update(exits)
+                break
+            elif isinstance(block, TryBlock) and block.final_start is not None:
+                block.break_from.update(exits)
+                break
+
+    @contract(exits='ArcStarts')
+    def process_continue_exits(self, exits):
+        """Add arcs due to jumps from `exits` being continues."""
+        for block in self.nearest_blocks():
+            if isinstance(block, LoopBlock):
+                for xit in exits:
+                    self.add_arc(xit.lineno, block.start, xit.cause)
+                break
+            elif isinstance(block, TryBlock) and block.final_start is not None:
+                block.continue_from.update(exits)
+                break
+
+    @contract(exits='ArcStarts')
+    def process_raise_exits(self, exits):
+        """Add arcs due to jumps from `exits` being raises."""
+        for block in self.nearest_blocks():
+            if isinstance(block, TryBlock):
+                if block.handler_start is not None:
+                    for xit in exits:
+                        self.add_arc(xit.lineno, block.handler_start, xit.cause)
+                    break
+                elif block.final_start is not None:
+                    block.raise_from.update(exits)
+                    break
+            elif isinstance(block, FunctionBlock):
+                for xit in exits:
+                    self.add_arc(
+                        xit.lineno, -block.start, xit.cause,
+                        "didn't except from function '{0}'".format(block.name),
+                    )
+                break
+
+    @contract(exits='ArcStarts')
+    def process_return_exits(self, exits):
+        """Add arcs due to jumps from `exits` being returns."""
+        for block in self.nearest_blocks():
+            if isinstance(block, TryBlock) and block.final_start is not None:
+                block.return_from.update(exits)
+                break
+            elif isinstance(block, FunctionBlock):
+                for xit in exits:
+                    self.add_arc(
+                        xit.lineno, -block.start, xit.cause,
+                        "didn't return from function '{0}'".format(block.name),
+                    )
+                break
+
+    ## Handlers
+
+    @contract(returns='ArcStarts')
+    def _handle__Break(self, node):
+        here = self.line_for_node(node)
+        break_start = ArcStart(here, cause="the break on line {lineno} wasn't executed")
+        self.process_break_exits([break_start])
+        return set()
+
+    @contract(returns='ArcStarts')
+    def _handle_decorated(self, node):
+        """Add arcs for things that can be decorated (classes and functions)."""
+        last = self.line_for_node(node)
+        if node.decorator_list:
+            for dec_node in node.decorator_list:
+                dec_start = self.line_for_node(dec_node)
+                if dec_start != last:
+                    self.add_arc(last, dec_start)
+                    last = dec_start
+            # The definition line may have been missed, but we should have it
+            # in `self.statements`.  For some constructs, `line_for_node` is
+            # not what we'd think of as the first line in the statement, so map
+            # it to the first one.
+            body_start = self.line_for_node(node.body[0])
+            body_start = self.multiline.get(body_start, body_start)
+            for lineno in range(last+1, body_start):
+                if lineno in self.statements:
+                    self.add_arc(last, lineno)
+                    last = lineno
+        # The body is handled in collect_arcs.
+        return set([ArcStart(last, cause=None)])
+
+    _handle__ClassDef = _handle_decorated
+
+    @contract(returns='ArcStarts')
+    def _handle__Continue(self, node):
+        here = self.line_for_node(node)
+        continue_start = ArcStart(here, cause="the continue on line {lineno} wasn't executed")
+        self.process_continue_exits([continue_start])
+        return set()
+
+    @contract(returns='ArcStarts')
+    def _handle__For(self, node):
+        start = self.line_for_node(node.iter)
+        self.block_stack.append(LoopBlock(start=start))
+        from_start = ArcStart(start, cause="the loop on line {lineno} never started")
+        exits = self.add_body_arcs(node.body, from_start=from_start)
+        # Any exit from the body will go back to the top of the loop.
+        for xit in exits:
+            self.add_arc(xit.lineno, start, xit.cause)
+        my_block = self.block_stack.pop()
+        exits = my_block.break_exits
+        from_start = ArcStart(start, cause="the loop on line {lineno} didn't complete")
+        if node.orelse:
+            else_exits = self.add_body_arcs(node.orelse, from_start=from_start)
+            exits |= else_exits
+        else:
+            # no else clause: exit from the for line.
+            exits.add(from_start)
+        return exits
+
+    _handle__AsyncFor = _handle__For
+
+    _handle__FunctionDef = _handle_decorated
+    _handle__AsyncFunctionDef = _handle_decorated
+
+    @contract(returns='ArcStarts')
+    def _handle__If(self, node):
+        start = self.line_for_node(node.test)
+        from_start = ArcStart(start, cause="the condition on line {lineno} was never true")
+        exits = self.add_body_arcs(node.body, from_start=from_start)
+        from_start = ArcStart(start, cause="the condition on line {lineno} was never false")
+        exits |= self.add_body_arcs(node.orelse, from_start=from_start)
+        return exits
+
+    @contract(returns='ArcStarts')
+    def _handle__Raise(self, node):
+        here = self.line_for_node(node)
+        raise_start = ArcStart(here, cause="the raise on line {lineno} wasn't executed")
+        self.process_raise_exits([raise_start])
+        # `raise` statement jumps away, no exits from here.
+        return set()
+
+    @contract(returns='ArcStarts')
+    def _handle__Return(self, node):
+        here = self.line_for_node(node)
+        return_start = ArcStart(here, cause="the return on line {lineno} wasn't executed")
+        self.process_return_exits([return_start])
+        # `return` statement jumps away, no exits from here.
+        return set()
+
+    @contract(returns='ArcStarts')
+    def _handle__Try(self, node):
+        if node.handlers:
+            handler_start = self.line_for_node(node.handlers[0])
+        else:
+            handler_start = None
+
+        if node.finalbody:
+            final_start = self.line_for_node(node.finalbody[0])
+        else:
+            final_start = None
+
+        try_block = TryBlock(handler_start=handler_start, final_start=final_start)
+        self.block_stack.append(try_block)
+
+        start = self.line_for_node(node)
+        exits = self.add_body_arcs(node.body, from_start=ArcStart(start, cause=None))
+
+        # We're done with the `try` body, so this block no longer handles
+        # exceptions. We keep the block so the `finally` clause can pick up
+        # flows from the handlers and `else` clause.
+        if node.finalbody:
+            try_block.handler_start = None
+            if node.handlers:
+                # If there are `except` clauses, then raises in the try body
+                # will already jump to them.  Start this set over for raises in
+                # `except` and `else`.
+                try_block.raise_from = set([])
+        else:
+            self.block_stack.pop()
+
+        handler_exits = set()
+
+        if node.handlers:
+            last_handler_start = None
+            for handler_node in node.handlers:
+                handler_start = self.line_for_node(handler_node)
+                if last_handler_start is not None:
+                    self.add_arc(last_handler_start, handler_start)
+                last_handler_start = handler_start
+                from_cause = "the exception caught by line {lineno} didn't happen"
+                from_start = ArcStart(handler_start, cause=from_cause)
+                handler_exits |= self.add_body_arcs(handler_node.body, from_start=from_start)
+
+        if node.orelse:
+            exits = self.add_body_arcs(node.orelse, prev_starts=exits)
+
+        exits |= handler_exits
+
+        if node.finalbody:
+            self.block_stack.pop()
+            final_from = (                  # You can get to the `finally` clause from:
+                exits |                         # the exits of the body or `else` clause,
+                try_block.break_from |          # or a `break`,
+                try_block.continue_from |       # or a `continue`,
+                try_block.raise_from |          # or a `raise`,
+                try_block.return_from           # or a `return`.
+            )
+
+            exits = self.add_body_arcs(node.finalbody, prev_starts=final_from)
+            if try_block.break_from:
+                break_exits = self._combine_finally_starts(try_block.break_from, exits)
+                self.process_break_exits(break_exits)
+            if try_block.continue_from:
+                continue_exits = self._combine_finally_starts(try_block.continue_from, exits)
+                self.process_continue_exits(continue_exits)
+            if try_block.raise_from:
+                raise_exits = self._combine_finally_starts(try_block.raise_from, exits)
+                self.process_raise_exits(raise_exits)
+            if try_block.return_from:
+                return_exits = self._combine_finally_starts(try_block.return_from, exits)
+                self.process_return_exits(return_exits)
+
+        return exits
+
+    def _combine_finally_starts(self, starts, exits):
+        """Helper for building the cause of `finally` branches."""
+        causes = []
+        for lineno, cause in sorted(starts):
+            if cause is not None:
+                causes.append(cause.format(lineno=lineno))
+        cause = " or ".join(causes)
+        exits = set(ArcStart(ex.lineno, cause) for ex in exits)
+        return exits
+
+    @contract(returns='ArcStarts')
+    def _handle__TryExcept(self, node):
+        # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get
+        # TryExcept, it means there was no finally, so fake it, and treat as
+        # a general Try node.
+        node.finalbody = []
+        return self._handle__Try(node)
+
+    @contract(returns='ArcStarts')
+    def _handle__TryFinally(self, node):
+        # Python 2.7 uses separate TryExcept and TryFinally nodes. If we get
+        # TryFinally, see if there's a TryExcept nested inside. If so, merge
+        # them. Otherwise, fake fields to complete a Try node.
+        node.handlers = []
+        node.orelse = []
+
+        first = node.body[0]
+        if first.__class__.__name__ == "TryExcept" and node.lineno == first.lineno:
+            assert len(node.body) == 1
+            node.body = first.body
+            node.handlers = first.handlers
+            node.orelse = first.orelse
+
+        return self._handle__Try(node)
+
+    @contract(returns='ArcStarts')
+    def _handle__While(self, node):
+        constant_test = self.is_constant_expr(node.test)
+        start = to_top = self.line_for_node(node.test)
+        if constant_test:
+            to_top = self.line_for_node(node.body[0])
+        self.block_stack.append(LoopBlock(start=start))
+        from_start = ArcStart(start, cause="the condition on line {lineno} was never true")
+        exits = self.add_body_arcs(node.body, from_start=from_start)
+        for xit in exits:
+            self.add_arc(xit.lineno, to_top, xit.cause)
+        exits = set()
+        my_block = self.block_stack.pop()
+        exits.update(my_block.break_exits)
+        from_start = ArcStart(start, cause="the condition on line {lineno} was never false")
+        if node.orelse:
+            else_exits = self.add_body_arcs(node.orelse, from_start=from_start)
+            exits |= else_exits
+        else:
+            # No `else` clause: you can exit from the start.
+            if not constant_test:
+                exits.add(from_start)
+        return exits
+
+    @contract(returns='ArcStarts')
+    def _handle__With(self, node):
+        start = self.line_for_node(node)
+        exits = self.add_body_arcs(node.body, from_start=ArcStart(start))
+        return exits
+
+    _handle__AsyncWith = _handle__With
+
+    def _code_object__Module(self, node):
+        start = self.line_for_node(node)
+        if node.body:
+            exits = self.add_body_arcs(node.body, from_start=ArcStart(-start))
+            for xit in exits:
+                self.add_arc(xit.lineno, -start, xit.cause, "didn't exit the module")
+        else:
+            # Empty module.
+            self.add_arc(-start, start)
+            self.add_arc(start, -start)
+
+    def _code_object__FunctionDef(self, node):
+        start = self.line_for_node(node)
+        self.block_stack.append(FunctionBlock(start=start, name=node.name))
+        exits = self.add_body_arcs(node.body, from_start=ArcStart(-start))
+        self.process_return_exits(exits)
+        self.block_stack.pop()
+
+    _code_object__AsyncFunctionDef = _code_object__FunctionDef
+
+    def _code_object__ClassDef(self, node):
+        start = self.line_for_node(node)
+        self.add_arc(-start, start)
+        exits = self.add_body_arcs(node.body, from_start=ArcStart(start))
+        for xit in exits:
+            self.add_arc(
+                xit.lineno, -start, xit.cause,
+                "didn't exit the body of class '{0}'".format(node.name),
+            )
+
+    def _make_oneline_code_method(noun):     # pylint: disable=no-self-argument
+        """A function to make methods for online callable _code_object__ methods."""
+        def _code_object__oneline_callable(self, node):
+            start = self.line_for_node(node)
+            self.add_arc(-start, start, None, "didn't run the {0} on line {1}".format(noun, start))
+            self.add_arc(
+                start, -start, None,
+                "didn't finish the {0} on line {1}".format(noun, start),
+            )
+        return _code_object__oneline_callable
+
+    _code_object__Lambda = _make_oneline_code_method("lambda")
+    _code_object__GeneratorExp = _make_oneline_code_method("generator expression")
+    _code_object__DictComp = _make_oneline_code_method("dictionary comprehension")
+    _code_object__SetComp = _make_oneline_code_method("set comprehension")
+    if env.PY3:
+        _code_object__ListComp = _make_oneline_code_method("list comprehension")
+
+
+SKIP_DUMP_FIELDS = ["ctx"]
+
+def _is_simple_value(value):
+    """Is `value` simple enough to be displayed on a single line?"""
+    return (
+        value in [None, [], (), {}, set()] or
+        isinstance(value, (string_class, int, float))
+    )
+
+# TODO: a test of ast_dump?
+def ast_dump(node, depth=0):
+    """Dump the AST for `node`.
+
+    This recursively walks the AST, printing a readable version.
+
+    """
+    indent = " " * depth
+    if not isinstance(node, ast.AST):
+        print("{0}<{1} {2!r}>".format(indent, node.__class__.__name__, node))
+        return
+
+    lineno = getattr(node, "lineno", None)
+    if lineno is not None:
+        linemark = " @ {0}".format(node.lineno)
+    else:
+        linemark = ""
+    head = "{0}<{1}{2}".format(indent, node.__class__.__name__, linemark)
+
+    named_fields = [
+        (name, value)
+        for name, value in ast.iter_fields(node)
+        if name not in SKIP_DUMP_FIELDS
+    ]
+    if not named_fields:
+        print("{0}>".format(head))
+    elif len(named_fields) == 1 and _is_simple_value(named_fields[0][1]):
+        field_name, value = named_fields[0]
+        print("{0} {1}: {2!r}>".format(head, field_name, value))
+    else:
+        print(head)
+        if 0:
+            print("{0}# mro: {1}".format(
+                indent, ", ".join(c.__name__ for c in node.__class__.__mro__[1:]),
+            ))
+        next_indent = indent + "    "
+        for field_name, value in named_fields:
+            prefix = "{0}{1}:".format(next_indent, field_name)
+            if _is_simple_value(value):
+                print("{0} {1!r}".format(prefix, value))
+            elif isinstance(value, list):
+                print("{0} [".format(prefix))
+                for n in value:
+                    ast_dump(n, depth + 8)
+                print("{0}]".format(next_indent))
+            else:
+                print(prefix)
+                ast_dump(value, depth + 8)
+
+        print("{0}>".format(indent))
+
+#
+# eflag: FileType = Python2
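The `getattr`-based dispatch that `analyze`, `line_for_node`, and `add_arcs` all rely on (look up `_handle__<NodeName>` / `_code_object__<NodeName>` by the AST node's class name) can be illustrated with a minimal standalone sketch. The `MiniVisitor` class and its `_visit__*` handlers below are invented for illustration and are not part of coverage.py:

```python
import ast

class MiniVisitor(object):
    """Dispatch to _visit__<NodeName> methods, mimicking the getattr pattern."""
    def __init__(self, text):
        self.root = ast.parse(text)
        self.lines = set()

    def walk(self):
        # Same shape as AstArcAnalyzer.analyze: walk every node, and only
        # act on those for which a specifically-named handler exists.
        for node in ast.walk(self.root):
            handler = getattr(self, "_visit__" + node.__class__.__name__, None)
            if handler is not None:
                handler(node)

    def _visit__If(self, node):
        # Record the line of the condition, like line_for_node(node.test).
        self.lines.add(node.test.lineno)

    def _visit__Return(self, node):
        self.lines.add(node.lineno)

v = MiniVisitor("def f(x):\n    if x:\n        return 1\n    return 2\n")
v.walk()
```

The pay-off of this pattern, here as in `AstArcAnalyzer`, is that supporting a new node type means adding one suitably-named method, with no central dispatch table to keep in sync.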
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/phystokens.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,297 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Better tokenizing for coverage.py."""
+
+import codecs
+import keyword
+import re
+import sys
+import token
+import tokenize
+
+from coverage import env
+from coverage.backward import iternext
+from coverage.misc import contract
+
+
+def phys_tokens(toks):
+    """Return all physical tokens, even line continuations.
+
+    tokenize.generate_tokens() doesn't return a token for the backslash that
+    continues lines.  This wrapper provides those tokens so that we can
+    re-create a faithful representation of the original source.
+
+    Returns the same values as generate_tokens().
+
+    """
+    last_line = None
+    last_lineno = -1
+    last_ttype = None
+    for ttype, ttext, (slineno, scol), (elineno, ecol), ltext in toks:
+        if last_lineno != elineno:
+            if last_line and last_line.endswith("\\\n"):
+                # We are at the beginning of a new line, and the last line
+                # ended with a backslash.  We probably have to inject a
+                # backslash token into the stream. Unfortunately, there's more
+                # to figure out.  This code::
+                #
+                #   usage = """\
+                #   HEY THERE
+                #   """
+                #
+                # triggers this condition, but the token text is::
+                #
+                #   '"""\\\nHEY THERE\n"""'
+                #
+                # so we need to figure out if the backslash is already in the
+                # string token or not.
+                inject_backslash = True
+                if last_ttype == tokenize.COMMENT:
+                    # Comments like this \
+                    # should never result in a new token.
+                    inject_backslash = False
+                elif ttype == token.STRING:
+                    if "\n" in ttext and ttext.split('\n', 1)[0][-1] == '\\':
+                        # It's a multi-line string and the first line ends with
+                        # a backslash, so we don't need to inject another.
+                        inject_backslash = False
+                if inject_backslash:
+                    # Figure out what column the backslash is in.
+                    ccol = len(last_line.split("\n")[-2]) - 1
+                    # Yield the token, with a fake token type.
+                    yield (
+                        99999, "\\\n",
+                        (slineno, ccol), (slineno, ccol+2),
+                        last_line
+                        )
+            last_line = ltext
+            last_ttype = ttype
+        yield ttype, ttext, (slineno, scol), (elineno, ecol), ltext
+        last_lineno = elineno
+
+
+@contract(source='unicode')
+def source_token_lines(source):
+    """Generate a series of lines, one for each line in `source`.
+
+    Each line is a list of pairs, each pair is a token::
+
+        [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
+
+    Each pair has a token class, and the token text.
+
+    If you concatenate all the token texts, and then join them with newlines,
+    you should have your original `source` back, with two differences:
+    trailing whitespace is not preserved, and a final line with no newline
+    is indistinguishable from a final line with a newline.
+
+    """
+
+    ws_tokens = set([token.INDENT, token.DEDENT, token.NEWLINE, tokenize.NL])
+    line = []
+    col = 0
+
+    source = source.expandtabs(8).replace('\r\n', '\n')
+    tokgen = generate_tokens(source)
+
+    for ttype, ttext, (_, scol), (_, ecol), _ in phys_tokens(tokgen):
+        mark_start = True
+        for part in re.split('(\n)', ttext):
+            if part == '\n':
+                yield line
+                line = []
+                col = 0
+                mark_end = False
+            elif part == '':
+                mark_end = False
+            elif ttype in ws_tokens:
+                mark_end = False
+            else:
+                if mark_start and scol > col:
+                    line.append(("ws", u" " * (scol - col)))
+                    mark_start = False
+                tok_class = tokenize.tok_name.get(ttype, 'xx').lower()[:3]
+                if ttype == token.NAME and keyword.iskeyword(ttext):
+                    tok_class = "key"
+                line.append((tok_class, part))
+                mark_end = True
+            scol = 0
+        if mark_end:
+            col = ecol
+
+    if line:
+        yield line
+
+
+class CachedTokenizer(object):
+    """A one-element cache around tokenize.generate_tokens.
+
+    When reporting, coverage.py tokenizes files twice, once to find the
+    structure of the file, and once to syntax-color it.  Tokenizing is
+    expensive, and easily cached.
+
+    This is a one-element cache so that our twice-in-a-row tokenizing doesn't
+    actually tokenize twice.
+
+    """
+    def __init__(self):
+        self.last_text = None
+        self.last_tokens = None
+
+    @contract(text='unicode')
+    def generate_tokens(self, text):
+        """A stand-in for `tokenize.generate_tokens`."""
+        if text != self.last_text:
+            self.last_text = text
+            readline = iternext(text.splitlines(True))
+            self.last_tokens = list(tokenize.generate_tokens(readline))
+        return self.last_tokens
+
+# Create our generate_tokens cache as a callable replacement function.
+generate_tokens = CachedTokenizer().generate_tokens
+
+
+COOKIE_RE = re.compile(r"^[ \t]*#.*coding[:=][ \t]*([-\w.]+)", flags=re.MULTILINE)
+
+@contract(source='bytes')
+def _source_encoding_py2(source):
+    """Determine the encoding for `source`, according to PEP 263.
+
+    `source` is a byte string, the text of the program.
+
+    Returns a string, the name of the encoding.
+
+    """
+    assert isinstance(source, bytes)
+
+    # Do this so the detect_encoding code we copied will work.
+    readline = iternext(source.splitlines(True))
+
+    # This is mostly code adapted from Py3.2's tokenize module.
+
+    def _get_normal_name(orig_enc):
+        """Imitates get_normal_name in tokenizer.c."""
+        # Only care about the first 12 characters.
+        enc = orig_enc[:12].lower().replace("_", "-")
+        if re.match(r"^utf-8($|-)", enc):
+            return "utf-8"
+        if re.match(r"^(latin-1|iso-8859-1|iso-latin-1)($|-)", enc):
+            return "iso-8859-1"
+        return orig_enc
+
+    # From detect_encode():
+    # It detects the encoding from the presence of a UTF-8 BOM or an encoding
+    # cookie as specified in PEP-0263.  If both a BOM and a cookie are present,
+    # but disagree, a SyntaxError will be raised.  If the encoding cookie is an
+    # invalid charset, raise a SyntaxError.  Note that if a UTF-8 BOM is found,
+    # 'utf-8-sig' is returned.
+
+    # If no encoding is specified, then the default will be returned.
+    default = 'ascii'
+
+    bom_found = False
+    encoding = None
+
+    def read_or_stop():
+        """Get the next source line, or ''."""
+        try:
+            return readline()
+        except StopIteration:
+            return ''
+
+    def find_cookie(line):
+        """Find an encoding cookie in `line`."""
+        try:
+            line_string = line.decode('ascii')
+        except UnicodeDecodeError:
+            return None
+
+        matches = COOKIE_RE.findall(line_string)
+        if not matches:
+            return None
+        encoding = _get_normal_name(matches[0])
+        try:
+            codec = codecs.lookup(encoding)
+        except LookupError:
+            # This behavior mimics the Python interpreter
+            raise SyntaxError("unknown encoding: " + encoding)
+
+        if bom_found:
+            # codecs in 2.3 were raw tuples of functions, assume the best.
+            codec_name = getattr(codec, 'name', encoding)
+            if codec_name != 'utf-8':
+                # This behavior mimics the Python interpreter
+                raise SyntaxError('encoding problem: utf-8')
+            encoding += '-sig'
+        return encoding
+
+    first = read_or_stop()
+    if first.startswith(codecs.BOM_UTF8):
+        bom_found = True
+        first = first[3:]
+        default = 'utf-8-sig'
+    if not first:
+        return default
+
+    encoding = find_cookie(first)
+    if encoding:
+        return encoding
+
+    second = read_or_stop()
+    if not second:
+        return default
+
+    encoding = find_cookie(second)
+    if encoding:
+        return encoding
+
+    return default
+
+
+@contract(source='bytes')
+def _source_encoding_py3(source):
+    """Determine the encoding for `source`, according to PEP 263.
+
+    `source` is a byte string: the text of the program.
+
+    Returns a string, the name of the encoding.
+
+    """
+    readline = iternext(source.splitlines(True))
+    return tokenize.detect_encoding(readline)[0]
+
+
+if env.PY3:
+    source_encoding = _source_encoding_py3
+else:
+    source_encoding = _source_encoding_py2
+
+
+@contract(source='unicode')
+def compile_unicode(source, filename, mode):
+    """Just like the `compile` builtin, but works on any Unicode string.
+
+    Python 2's compile() builtin has a stupid restriction: if the source string
+    is Unicode, then it may not have an encoding declaration in it.  Why not?
+    Who knows!  It also decodes to utf8, and then tries to interpret those utf8
+    bytes according to the encoding declaration.  Why? Who knows!
+
+    This function neuters the coding declaration, and compiles it.
+
+    """
+    source = neuter_encoding_declaration(source)
+    if env.PY2 and isinstance(filename, unicode):
+        filename = filename.encode(sys.getfilesystemencoding(), "replace")
+    code = compile(source, filename, mode)
+    return code
+
+
+@contract(source='unicode', returns='unicode')
+def neuter_encoding_declaration(source):
+    """Return `source`, with any encoding declaration neutered."""
+    source = COOKIE_RE.sub("# (deleted declaration)", source, count=2)
+    return source
+
+#
+# eflag: FileType = Python2
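The BOM/cookie dance above mirrors what Python 3's `tokenize.detect_encoding` does internally (the `_source_encoding_py3` branch simply delegates to it). A minimal standalone sketch, assuming Python 3 and only stdlib:

```python
import io
import tokenize

def detect_source_encoding(source_bytes):
    """Detect the PEP 263 encoding of a byte string of Python source."""
    # detect_encoding wants a readline callable over the raw bytes.
    readline = io.BytesIO(source_bytes).readline
    return tokenize.detect_encoding(readline)[0]

# No cookie, no BOM: the Python 3 default applies.
assert detect_source_encoding(b"x = 1\n") == "utf-8"
# A coding cookie on the first line is honored (and normalized).
assert detect_source_encoding(b"# -*- coding: latin-1 -*-\nx = 1\n") == "iso-8859-1"
```

Note that, as in the hand-rolled Python 2 version above, a UTF-8 BOM yields `utf-8-sig` rather than plain `utf-8`.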
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/pickle2json.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,50 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Convert pickle to JSON for coverage.py."""
+
+from coverage.backward import pickle
+from coverage.data import CoverageData
+
+
+def pickle_read_raw_data(cls_unused, file_obj):
+    """Replacement for CoverageData._read_raw_data."""
+    return pickle.load(file_obj)
+
+
+def pickle2json(infile, outfile):
+    """Convert a coverage.py 3.x pickle data file to a 4.x JSON data file."""
+    try:
+        old_read_raw_data = CoverageData._read_raw_data
+        CoverageData._read_raw_data = pickle_read_raw_data
+
+        covdata = CoverageData()
+
+        with open(infile, 'rb') as inf:
+            covdata.read_fileobj(inf)
+
+        covdata.write_file(outfile)
+    finally:
+        CoverageData._read_raw_data = old_read_raw_data
+
+
+if __name__ == "__main__":
+    from optparse import OptionParser
+
+    parser = OptionParser(usage="usage: %s [options]" % __file__)
+    parser.description = "Convert .coverage files from pickle to JSON format"
+    parser.add_option(
+        "-i", "--input-file", action="store", default=".coverage",
+        help="Name of input file. Default .coverage",
+    )
+    parser.add_option(
+        "-o", "--output-file", action="store", default=".coverage",
+        help="Name of output file. Default .coverage",
+    )
+
+    (options, args) = parser.parse_args()
+
+    pickle2json(options.input_file, options.output_file)
+
+#
+# eflag: FileType = Python2
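The conversion above relies on a try/finally monkey-patch: `CoverageData._read_raw_data` is swapped out so the old pickle format can be read, then restored no matter what. The pattern can be shown generically with hypothetical names:

```python
class Reader(object):
    """Stand-in for a class whose reader we need to swap temporarily."""
    @classmethod
    def _read_raw(cls, data):
        return data.upper()

def patched_read(cls, data):
    """Replacement reader used only for the duration of convert()."""
    return data.lower()

def convert(data):
    # Save the original, install the replacement, and restore it in
    # a finally block so an exception can't leave the class patched.
    old = Reader._read_raw
    Reader._read_raw = classmethod(patched_read)
    try:
        return Reader._read_raw(data)
    finally:
        Reader._read_raw = old
```

After `convert()` returns (or raises), `Reader._read_raw` behaves exactly as before the call.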
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/plugin.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,399 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Plugin interfaces for coverage.py"""
+
+from coverage import files
+from coverage.misc import contract, _needs_to_implement
+
+
+class CoveragePlugin(object):
+    """Base class for coverage.py plugins.
+
+    To write a coverage.py plugin, create a module with a subclass of
+    :class:`CoveragePlugin`.  You will override methods in your class to
+    participate in various aspects of coverage.py's processing.
+
+    Currently the only plugin type is a file tracer, for implementing
+    measurement support for non-Python files.  File tracer plugins implement
+    the :meth:`file_tracer` method to claim files and the :meth:`file_reporter`
+    method to report on those files.
+
+    Any plugin can optionally implement :meth:`sys_info` to provide debugging
+    information about their operation.
+
+    Coverage.py will store its own information on your plugin object, using
+    attributes whose names start with ``_coverage_``.  Don't be startled.
+
+    To register your plugin, define a function called `coverage_init` in your
+    module::
+
+        def coverage_init(reg, options):
+            reg.add_file_tracer(MyPlugin())
+
+    You use the `reg` parameter passed to your `coverage_init` function to
+    register your plugin object.  It has one method, `add_file_tracer`, which
+    takes a newly created instance of your plugin.
+
+    If your plugin takes options, the `options` parameter is a dictionary of
+    your plugin's options from the coverage.py configuration file.  Use them
+    however you want to configure your object before registering it.
+
+    """
+
+    def file_tracer(self, filename):        # pylint: disable=unused-argument
+        """Get a :class:`FileTracer` object for a file.
+
+        Every Python source file is offered to the plugin to give it a chance
+        to take responsibility for tracing the file.  If your plugin can handle
+        the file, then return a :class:`FileTracer` object.  Otherwise return
+        None.
+
+        There is no way to register your plugin for particular files.  Instead,
+        this method is invoked for all files, and the plugin decides whether it
+        can trace the file or not.  Be prepared for `filename` to refer to all
+        kinds of files that have nothing to do with your plugin.
+
+        The file name will be that of a Python file being executed.  There
+        are two broad categories of behavior for a plugin, depending on the
+        kind of files your plugin supports:
+
+        * Static file names: each of your original source files has been
+          converted into a distinct Python file.  Your plugin is invoked with
+          the Python file name, and it maps it back to its original source
+          file.
+
+        * Dynamic file names: all of your source files are executed by the same
+          Python file.  In this case, your plugin implements
+          :meth:`FileTracer.dynamic_source_filename` to provide the actual
+          source file for each execution frame.
+
+        `filename` is a string, the path to the file being considered.  This is
+        the absolute real path to the file.  If you are comparing to other
+        paths, be sure to take this into account.
+
+        Returns a :class:`FileTracer` object to use to trace `filename`, or
+        None if this plugin cannot trace this file.
+
+        """
+        return None
+
+    def file_reporter(self, filename):      # pylint: disable=unused-argument
+        """Get the :class:`FileReporter` class to use for a file.
+
+        This will only be invoked if :meth:`file_tracer` returned non-None
+        for `filename`.  It's an error to return None from this method.
+
+        Returns a :class:`FileReporter` object to use to report on `filename`.
+
+        """
+        _needs_to_implement(self, "file_reporter")
+
+    def sys_info(self):
+        """Get a list of information useful for debugging.
+
+        This method will be invoked for ``--debug=sys``.  Your
+        plugin can return any information it wants to be displayed.
+
+        Returns a list of pairs: `[(name, value), ...]`.
+
+        """
+        return []
+
+
+class FileTracer(object):
+    """Support needed for files during the execution phase.
+
+    You may construct this object from :meth:`CoveragePlugin.file_tracer` any
+    way you like.  A natural choice would be to pass the file name given to
+    `file_tracer`.
+
+    `FileTracer` objects should only be created in the
+    :meth:`CoveragePlugin.file_tracer` method.
+
+    See :ref:`howitworks` for details of the different coverage.py phases.
+
+    """
+
+    def source_filename(self):
+        """The source file name for this file.
+
+        This may be any file name you like.  A key responsibility of a plugin
+        is to own the mapping from Python execution back to whatever source
+        file name was originally the source of the code.
+
+        See :meth:`CoveragePlugin.file_tracer` for details about static and
+        dynamic file names.
+
+        Returns the file name to credit with this execution.
+
+        """
+        _needs_to_implement(self, "source_filename")
+
+    def has_dynamic_source_filename(self):
+        """Does this FileTracer have dynamic source file names?
+
+        FileTracers can provide dynamically determined file names by
+        implementing :meth:`dynamic_source_filename`.  Invoking that method
+        is expensive, so coverage.py uses the result of this method to decide
+        whether :meth:`dynamic_source_filename` needs to be called at all.
+
+        See :meth:`CoveragePlugin.file_tracer` for details about static and
+        dynamic file names.
+
+        Returns True if :meth:`dynamic_source_filename` should be called to get
+        dynamic source file names.
+
+        """
+        return False
+
+    def dynamic_source_filename(self, filename, frame):     # pylint: disable=unused-argument
+        """Get a dynamically computed source file name.
+
+        Some plugins need to compute the source file name dynamically for each
+        frame.
+
+        This function will not be invoked if
+        :meth:`has_dynamic_source_filename` returns False.
+
+        Returns the source file name for this frame, or None if this frame
+        shouldn't be measured.
+
+        """
+        return None
+
+    def line_number_range(self, frame):
+        """Get the range of source line numbers for a given call frame.
+
+        The call frame is examined, and the source line number in the original
+        file is returned.  The return value is a pair of numbers, the starting
+        line number and the ending line number, both inclusive.  For example,
+        returning (5, 7) means that lines 5, 6, and 7 should be considered
+        executed.
+
+        This function might decide that the frame doesn't indicate any lines
+        from the source file were executed.  Return (-1, -1) in this case to
+        tell coverage.py that no lines should be recorded for this frame.
+
+        """
+        lineno = frame.f_lineno
+        return lineno, lineno
+
+
+class FileReporter(object):
+    """Support needed for files during the analysis and reporting phases.
+
+    See :ref:`howitworks` for details of the different coverage.py phases.
+
+    `FileReporter` objects should only be created in the
+    :meth:`CoveragePlugin.file_reporter` method.
+
+    There are many methods here, but only :meth:`lines` is required, to provide
+    the set of executable lines in the file.
+
+    """
+
+    def __init__(self, filename):
+        """Simple initialization of a `FileReporter`.
+
+        The `filename` argument is the path to the file being reported.  This
+        will be available as the `.filename` attribute on the object.  Other
+        method implementations on this base class rely on this attribute.
+
+        """
+        self.filename = filename
+
+    def __repr__(self):
+        return "<{0.__class__.__name__} filename={0.filename!r}>".format(self)
+
+    def relative_filename(self):
+        """Get the relative file name for this file.
+
+        This file path will be displayed in reports.  The default
+        implementation will supply the actual project-relative file path.  You
+        only need to supply this method if you have an unusual syntax for file
+        paths.
+
+        """
+        return files.relative_filename(self.filename)
+
+    @contract(returns='unicode')
+    def source(self):
+        """Get the source for the file.
+
+        Returns a Unicode string.
+
+        The base implementation simply reads the `self.filename` file and
+        decodes it as UTF8.  Override this method if your file isn't readable
+        as a text file, or if you need other encoding support.
+
+        """
+        with open(self.filename, "rb") as f:
+            return f.read().decode("utf8")
+
+    def lines(self):
+        """Get the executable lines in this file.
+
+        Your plugin must determine which lines in the file were possibly
+        executable.  This method returns a set of those line numbers.
+
+        Returns a set of line numbers.
+
+        """
+        _needs_to_implement(self, "lines")
+
+    def excluded_lines(self):
+        """Get the excluded executable lines in this file.
+
+        Your plugin can use any method it likes to allow the user to exclude
+        executable lines from consideration.
+
+        Returns a set of line numbers.
+
+        The base implementation returns the empty set.
+
+        """
+        return set()
+
+    def translate_lines(self, lines):
+        """Translate recorded lines into reported lines.
+
+        Some file formats will want to report lines slightly differently than
+        they are recorded.  For example, Python records the last line of a
+        multi-line statement, but reports are nicer if they mention the first
+        line.
+
+        Your plugin can optionally define this method to perform these kinds of
+        adjustment.
+
+        `lines` is a sequence of integers, the recorded line numbers.
+
+        Returns a set of integers, the adjusted line numbers.
+
+        The base implementation returns the numbers unchanged.
+
+        """
+        return set(lines)
+
+    def arcs(self):
+        """Get the executable arcs in this file.
+
+        To support branch coverage, your plugin needs to be able to indicate
+        possible execution paths, as a set of line number pairs.  Each pair is
+        a `(prev, next)` pair indicating that execution can transition from the
+        `prev` line number to the `next` line number.
+
+        Returns a set of pairs of line numbers.  The default implementation
+        returns an empty set.
+
+        """
+        return set()
+
+    def no_branch_lines(self):
+        """Get the lines excused from branch coverage in this file.
+
+        Your plugin can use any method it likes to allow the user to exclude
+        lines from consideration of branch coverage.
+
+        Returns a set of line numbers.
+
+        The base implementation returns the empty set.
+
+        """
+        return set()
+
+    def translate_arcs(self, arcs):
+        """Translate recorded arcs into reported arcs.
+
+        Similar to :meth:`translate_lines`, but for arcs.  `arcs` is a set of
+        line number pairs.
+
+        Returns a set of line number pairs.
+
+        The default implementation returns `arcs` unchanged.
+
+        """
+        return arcs
+
+    def exit_counts(self):
+        """Get a count of exits from each line.
+
+        To determine which lines are branches, coverage.py looks for lines that
+        have more than one exit.  This function creates a dict mapping each
+        executable line number to a count of how many exits it has.
+
+        To be honest, this feels wrong, and should be refactored.  Let me know
+        if you attempt to implement this method in your plugin...
+
+        """
+        return {}
+
+    def missing_arc_description(self, start, end, executed_arcs=None):     # pylint: disable=unused-argument
+        """Provide an English sentence describing a missing arc.
+
+        The `start` and `end` arguments are the line numbers of the missing
+        arc. Negative numbers indicate entering or exiting code objects.
+
+        The `executed_arcs` argument is a set of line number pairs, the arcs
+        that were executed in this file.
+
+        By default, this simply returns the string "Line {start} didn't jump
+        to line {end}".
+
+        """
+        return "Line {start} didn't jump to line {end}".format(start=start, end=end)
+
+    def source_token_lines(self):
+        """Generate a series of tokenized lines, one for each line in `source`.
+
+        These tokens are used for syntax-colored reports.
+
+        Each line is a list of pairs, each pair is a token::
+
+            [('key', 'def'), ('ws', ' '), ('nam', 'hello'), ('op', '('), ... ]
+
+        Each pair has a token class, and the token text.  The token classes
+        are:
+
+        * ``'com'``: a comment
+        * ``'key'``: a keyword
+        * ``'nam'``: a name, or identifier
+        * ``'num'``: a number
+        * ``'op'``: an operator
+        * ``'str'``: a string literal
+        * ``'txt'``: some other kind of text
+
+        If you concatenate all the token texts, and then join them with
+        newlines, you should have your original source back.
+
+        The default implementation simply returns each line tagged as
+        ``'txt'``.
+
+        """
+        for line in self.source().splitlines():
+            yield [('txt', line)]
+
+    # Annoying comparison operators. Py3k wants __lt__ etc, and Py2k needs all
+    # of them defined.
+
+    def __eq__(self, other):
+        return isinstance(other, FileReporter) and self.filename == other.filename
+
+    def __ne__(self, other):
+        return not (self == other)
+
+    def __lt__(self, other):
+        return self.filename < other.filename
+
+    def __le__(self, other):
+        return self.filename <= other.filename
+
+    def __gt__(self, other):
+        return self.filename > other.filename
+
+    def __ge__(self, other):
+        return self.filename >= other.filename
+
+#
+# eflag: FileType = Python2
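A concrete (hypothetical) plugin module built on the interfaces above might look like the following sketch. The real base classes live in `coverage.plugin`; simplified stand-ins are defined here so the example is self-contained:

```python
class CoveragePlugin(object):
    """Simplified stand-in for coverage.plugin.CoveragePlugin."""
    def file_tracer(self, filename):
        return None

class FileTracer(object):
    """Simplified stand-in for coverage.plugin.FileTracer."""
    pass

class TemplateFileTracer(FileTracer):
    """Claims template files; a static-file-name tracer."""
    def __init__(self, filename):
        self._filename = filename

    def source_filename(self):
        # Credit executions to the template file itself.
        return self._filename

class TemplatePlugin(CoveragePlugin):
    def file_tracer(self, filename):
        # Claim only .tmpl files; decline everything else by returning None.
        if filename.endswith(".tmpl"):
            return TemplateFileTracer(filename)
        return None

def coverage_init(reg, options):
    # `reg` is the registration object coverage.py passes in.
    reg.add_file_tracer(TemplatePlugin())
```

The names `TemplatePlugin` and `.tmpl` are illustrative only; the shape of `coverage_init` and the None-to-decline protocol are what the docstrings above specify.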
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/plugin_support.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,250 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Support for plugins."""
+
+import os
+import os.path
+import sys
+
+from coverage.misc import CoverageException, isolate_module
+from coverage.plugin import CoveragePlugin, FileTracer, FileReporter
+
+os = isolate_module(os)
+
+
+class Plugins(object):
+    """The currently loaded collection of coverage.py plugins."""
+
+    def __init__(self):
+        self.order = []
+        self.names = {}
+        self.file_tracers = []
+
+        self.current_module = None
+        self.debug = None
+
+    @classmethod
+    def load_plugins(cls, modules, config, debug=None):
+        """Load plugins from `modules`.
+
+        Returns a list of loaded and configured plugins.
+
+        """
+        plugins = cls()
+        plugins.debug = debug
+
+        for module in modules:
+            plugins.current_module = module
+            __import__(module)
+            mod = sys.modules[module]
+
+            coverage_init = getattr(mod, "coverage_init", None)
+            if not coverage_init:
+                raise CoverageException(
+                    "Plugin module %r didn't define a coverage_init function" % module
+                )
+
+            options = config.get_plugin_options(module)
+            coverage_init(plugins, options)
+
+        plugins.current_module = None
+        return plugins
+
+    def add_file_tracer(self, plugin):
+        """Add a file tracer plugin.
+
+        `plugin` is an instance of a third-party plugin class.  It must
+        implement the :meth:`CoveragePlugin.file_tracer` method.
+
+        """
+        self._add_plugin(plugin, self.file_tracers)
+
+    def add_noop(self, plugin):
+        """Add a plugin that does nothing.
+
+        This is only useful for testing the plugin support.
+
+        """
+        self._add_plugin(plugin, None)
+
+    def _add_plugin(self, plugin, specialized):
+        """Add a plugin object.
+
+        `plugin` is a :class:`CoveragePlugin` instance to add.  `specialized`
+        is a list to append the plugin to.
+
+        """
+        plugin_name = "%s.%s" % (self.current_module, plugin.__class__.__name__)
+        if self.debug and self.debug.should('plugin'):
+            self.debug.write("Loaded plugin %r: %r" % (self.current_module, plugin))
+            labelled = LabelledDebug("plugin %r" % (self.current_module,), self.debug)
+            plugin = DebugPluginWrapper(plugin, labelled)
+
+        # pylint: disable=attribute-defined-outside-init
+        plugin._coverage_plugin_name = plugin_name
+        plugin._coverage_enabled = True
+        self.order.append(plugin)
+        self.names[plugin_name] = plugin
+        if specialized is not None:
+            specialized.append(plugin)
+
+    def __nonzero__(self):
+        return bool(self.order)
+
+    __bool__ = __nonzero__
+
+    def __iter__(self):
+        return iter(self.order)
+
+    def get(self, plugin_name):
+        """Return a plugin by name."""
+        return self.names[plugin_name]
+
+
+class LabelledDebug(object):
+    """A Debug writer, but with labels for prepending to the messages."""
+
+    def __init__(self, label, debug, prev_labels=()):
+        self.labels = list(prev_labels) + [label]
+        self.debug = debug
+
+    def add_label(self, label):
+        """Add a label to the writer, and return a new `LabelledDebug`."""
+        return LabelledDebug(label, self.debug, self.labels)
+
+    def message_prefix(self):
+        """The prefix to use on messages, combining the labels."""
+        prefixes = self.labels + ['']
+        return ":\n".join("  "*i+label for i, label in enumerate(prefixes))
+
+    def write(self, message):
+        """Write `message`, but with the labels prepended."""
+        self.debug.write("%s%s" % (self.message_prefix(), message))
+
+
+class DebugPluginWrapper(CoveragePlugin):
+    """Wrap a plugin, and use debug to report on what it's doing."""
+
+    def __init__(self, plugin, debug):
+        super(DebugPluginWrapper, self).__init__()
+        self.plugin = plugin
+        self.debug = debug
+
+    def file_tracer(self, filename):
+        tracer = self.plugin.file_tracer(filename)
+        self.debug.write("file_tracer(%r) --> %r" % (filename, tracer))
+        if tracer:
+            debug = self.debug.add_label("file %r" % (filename,))
+            tracer = DebugFileTracerWrapper(tracer, debug)
+        return tracer
+
+    def file_reporter(self, filename):
+        reporter = self.plugin.file_reporter(filename)
+        self.debug.write("file_reporter(%r) --> %r" % (filename, reporter))
+        if reporter:
+            debug = self.debug.add_label("file %r" % (filename,))
+            reporter = DebugFileReporterWrapper(filename, reporter, debug)
+        return reporter
+
+    def sys_info(self):
+        return self.plugin.sys_info()
+
+
+class DebugFileTracerWrapper(FileTracer):
+    """A debugging `FileTracer`."""
+
+    def __init__(self, tracer, debug):
+        self.tracer = tracer
+        self.debug = debug
+
+    def _show_frame(self, frame):
+        """A short string identifying a frame, for debug messages."""
+        return "%s@%d" % (
+            os.path.basename(frame.f_code.co_filename),
+            frame.f_lineno,
+        )
+
+    def source_filename(self):
+        sfilename = self.tracer.source_filename()
+        self.debug.write("source_filename() --> %r" % (sfilename,))
+        return sfilename
+
+    def has_dynamic_source_filename(self):
+        has = self.tracer.has_dynamic_source_filename()
+        self.debug.write("has_dynamic_source_filename() --> %r" % (has,))
+        return has
+
+    def dynamic_source_filename(self, filename, frame):
+        dyn = self.tracer.dynamic_source_filename(filename, frame)
+        self.debug.write("dynamic_source_filename(%r, %s) --> %r" % (
+            filename, self._show_frame(frame), dyn,
+        ))
+        return dyn
+
+    def line_number_range(self, frame):
+        pair = self.tracer.line_number_range(frame)
+        self.debug.write("line_number_range(%s) --> %r" % (self._show_frame(frame), pair))
+        return pair
+
+
+class DebugFileReporterWrapper(FileReporter):
+    """A debugging `FileReporter`."""
+
+    def __init__(self, filename, reporter, debug):
+        super(DebugFileReporterWrapper, self).__init__(filename)
+        self.reporter = reporter
+        self.debug = debug
+
+    def relative_filename(self):
+        ret = self.reporter.relative_filename()
+        self.debug.write("relative_filename() --> %r" % (ret,))
+        return ret
+
+    def lines(self):
+        ret = self.reporter.lines()
+        self.debug.write("lines() --> %r" % (ret,))
+        return ret
+
+    def excluded_lines(self):
+        ret = self.reporter.excluded_lines()
+        self.debug.write("excluded_lines() --> %r" % (ret,))
+        return ret
+
+    def translate_lines(self, lines):
+        ret = self.reporter.translate_lines(lines)
+        self.debug.write("translate_lines(%r) --> %r" % (lines, ret))
+        return ret
+
+    def translate_arcs(self, arcs):
+        ret = self.reporter.translate_arcs(arcs)
+        self.debug.write("translate_arcs(%r) --> %r" % (arcs, ret))
+        return ret
+
+    def no_branch_lines(self):
+        ret = self.reporter.no_branch_lines()
+        self.debug.write("no_branch_lines() --> %r" % (ret,))
+        return ret
+
+    def exit_counts(self):
+        ret = self.reporter.exit_counts()
+        self.debug.write("exit_counts() --> %r" % (ret,))
+        return ret
+
+    def arcs(self):
+        ret = self.reporter.arcs()
+        self.debug.write("arcs() --> %r" % (ret,))
+        return ret
+
+    def source(self):
+        ret = self.reporter.source()
+        self.debug.write("source() --> %d chars" % (len(ret),))
+        return ret
+
+    def source_token_lines(self):
+        ret = list(self.reporter.source_token_lines())
+        self.debug.write("source_token_lines() --> %d tokens" % (len(ret),))
+        return ret
+
+#
+# eflag: FileType = Python2
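The nested-label indentation produced by `LabelledDebug.message_prefix` above is easiest to see in isolation. A standalone sketch of the same expression:

```python
def message_prefix(labels):
    """Build the debug-message prefix: each label one level deeper."""
    # Append an empty label so the message itself lands on a new,
    # maximally indented line after the final ":\n".
    prefixes = list(labels) + ['']
    return ":\n".join("  " * i + label for i, label in enumerate(prefixes))
```

So with labels `["plugin 'p'", "file 'f'"]` the message is written under two indented heading lines, one per label.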
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/python.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,208 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Python source expertise for coverage.py"""
+
+import os.path
+import types
+import zipimport
+
+from coverage import env, files
+from coverage.misc import (
+    contract, CoverageException, expensive, NoSource, join_regex, isolate_module,
+)
+from coverage.parser import PythonParser
+from coverage.phystokens import source_token_lines, source_encoding
+from coverage.plugin import FileReporter
+
+os = isolate_module(os)
+
+
+@contract(returns='bytes')
+def read_python_source(filename):
+    """Read the Python source text from `filename`.
+
+    Returns bytes.
+
+    """
+    with open(filename, "rb") as f:
+        return f.read().replace(b"\r\n", b"\n").replace(b"\r", b"\n")
+
+
+@contract(returns='unicode')
+def get_python_source(filename):
+    """Return the source code, as unicode."""
+    base, ext = os.path.splitext(filename)
+    if ext == ".py" and env.WINDOWS:
+        exts = [".py", ".pyw"]
+    else:
+        exts = [ext]
+
+    for ext in exts:
+        try_filename = base + ext
+        if os.path.exists(try_filename):
+            # A regular text file: open it.
+            source = read_python_source(try_filename)
+            break
+
+        # Maybe it's in a zip file?
+        source = get_zip_bytes(try_filename)
+        if source is not None:
+            break
+    else:
+        # Couldn't find source.
+        raise NoSource("No source for code: '%s'." % filename)
+
+    # Replace \f because of http://bugs.python.org/issue19035
+    source = source.replace(b'\f', b' ')
+    source = source.decode(source_encoding(source), "replace")
+
+    # Python code should always end with a line with a newline.
+    if source and source[-1] != '\n':
+        source += '\n'
+
+    return source
+
+
+@contract(returns='bytes|None')
+def get_zip_bytes(filename):
+    """Get data from `filename` if it is a zip file path.
+
+    Returns the bytestring data read from the zip file, or None if no zip file
+    could be found or `filename` isn't in it.  The data returned will be
+    an empty string if the file is empty.
+
+    """
+    markers = ['.zip'+os.sep, '.egg'+os.sep]
+    for marker in markers:
+        if marker in filename:
+            parts = filename.split(marker)
+            try:
+                zi = zipimport.zipimporter(parts[0]+marker[:-1])
+            except zipimport.ZipImportError:
+                continue
+            try:
+                data = zi.get_data(parts[1])
+            except IOError:
+                continue
+            return data
+    return None
+
+
+class PythonFileReporter(FileReporter):
+    """Report support for a Python file."""
+
+    def __init__(self, morf, coverage=None):
+        self.coverage = coverage
+
+        if hasattr(morf, '__file__'):
+            filename = morf.__file__
+        elif isinstance(morf, types.ModuleType):
+            # A module should have had .__file__, otherwise we can't use it.
+            # This could be a PEP-420 namespace package.
+            raise CoverageException("Module {0} has no file".format(morf))
+        else:
+            filename = morf
+
+        filename = files.unicode_filename(filename)
+
+        # .pyc files should always refer to a .py instead.
+        if filename.endswith(('.pyc', '.pyo')):
+            filename = filename[:-1]
+        elif filename.endswith('$py.class'):   # Jython
+            filename = filename[:-9] + ".py"
+
+        super(PythonFileReporter, self).__init__(files.canonical_filename(filename))
+
+        if hasattr(morf, '__name__'):
+            name = morf.__name__
+            name = name.replace(".", os.sep) + ".py"
+            name = files.unicode_filename(name)
+        else:
+            name = files.relative_filename(filename)
+        self.relname = name
+
+        self._source = None
+        self._parser = None
+        self._statements = None
+        self._excluded = None
+
+    @contract(returns='unicode')
+    def relative_filename(self):
+        return self.relname
+
+    @property
+    def parser(self):
+        """Lazily create a :class:`PythonParser`."""
+        if self._parser is None:
+            self._parser = PythonParser(
+                filename=self.filename,
+                exclude=self.coverage._exclude_regex('exclude'),
+            )
+            self._parser.parse_source()
+        return self._parser
+
+    def lines(self):
+        """Return the line numbers of statements in the file."""
+        return self.parser.statements
+
+    def excluded_lines(self):
+        """Return the line numbers of statements in the file."""
+        return self.parser.excluded
+
+    def translate_lines(self, lines):
+        return self.parser.translate_lines(lines)
+
+    def translate_arcs(self, arcs):
+        return self.parser.translate_arcs(arcs)
+
+    @expensive
+    def no_branch_lines(self):
+        no_branch = self.parser.lines_matching(
+            join_regex(self.coverage.config.partial_list),
+            join_regex(self.coverage.config.partial_always_list)
+            )
+        return no_branch
+
+    @expensive
+    def arcs(self):
+        return self.parser.arcs()
+
+    @expensive
+    def exit_counts(self):
+        return self.parser.exit_counts()
+
+    def missing_arc_description(self, start, end, executed_arcs=None):
+        return self.parser.missing_arc_description(start, end, executed_arcs)
+
+    @contract(returns='unicode')
+    def source(self):
+        if self._source is None:
+            self._source = get_python_source(self.filename)
+        return self._source
+
+    def should_be_python(self):
+        """Does it seem like this file should contain Python?
+
+        This is used to decide if a file reported as part of the execution of
+        a program was really likely to have contained Python in the first
+        place.
+
+        """
+        # Get the file extension.
+        _, ext = os.path.splitext(self.filename)
+
+        # Anything named *.py* should be Python.
+        if ext.startswith('.py'):
+            return True
+        # A file with no extension should be Python.
+        if not ext:
+            return True
+        # Everything else is probably not Python.
+        return False
+
+    def source_token_lines(self):
+        return source_token_lines(self.source())
+
+#
+# eflag: FileType = Python2
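The extension heuristic in `should_be_python` above reduces to a small pure function, sketched here standalone:

```python
import os.path

def should_be_python(filename):
    """Guess whether a measured file was likely Python source."""
    _, ext = os.path.splitext(filename)
    # Anything named *.py* (.py, .pyw, ...) should be Python.
    if ext.startswith('.py'):
        return True
    # Extensionless files (scripts, entry points) are assumed Python.
    if not ext:
        return True
    # Everything else is probably not Python.
    return False
```

This is deliberately permissive: it errs toward treating a file as Python, since the caller only uses it to decide whether a traced file plausibly contained Python at all.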
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/pytracer.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,158 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Raw data collector for coverage.py."""
+
+import dis
+import sys
+
+from coverage import env
+
+# We need the YIELD_VALUE opcode below, in a comparison-friendly form.
+YIELD_VALUE = dis.opmap['YIELD_VALUE']
+if env.PY2:
+    YIELD_VALUE = chr(YIELD_VALUE)
+
+
+class PyTracer(object):
+    """Python implementation of the raw data tracer."""
+
+    # Because of poor implementations of trace-function-manipulating tools,
+    # the Python trace function must be kept very simple.  In particular, there
+    # must be only one function ever set as the trace function, both through
+    # sys.settrace, and as the return value from the trace function.  Put
+    # another way, the trace function must always return itself.  It cannot
+    # swap in other functions, or return None to avoid tracing a particular
+    # frame.
+    #
+    # The trace manipulator that introduced this restriction is DecoratorTools,
+    # which sets a trace function, and then later restores the pre-existing one
+    # by calling sys.settrace with a function it found in the current frame.
+    #
+    # Systems that use DecoratorTools (or similar trace manipulations) must use
+    # PyTracer to get accurate results.  The command-line --timid argument is
+    # used to force the use of this tracer.
+
+    def __init__(self):
+        # Attributes set from the collector:
+        self.data = None
+        self.trace_arcs = False
+        self.should_trace = None
+        self.should_trace_cache = None
+        self.warn = None
+        # The threading module to use, if any.
+        self.threading = None
+
+        self.cur_file_dict = []
+        self.last_line = [0]
+
+        self.data_stack = []
+        self.last_exc_back = None
+        self.last_exc_firstlineno = 0
+        self.thread = None
+        self.stopped = False
+
+    def __repr__(self):
+        return "<PyTracer at 0x{0:0x}: {1} lines in {2} files>".format(
+            id(self),
+            sum(len(v) for v in self.data.values()),
+            len(self.data),
+        )
+
+    def _trace(self, frame, event, arg_unused):
+        """The trace function passed to sys.settrace."""
+
+        if self.stopped:
+            return
+
+        if self.last_exc_back:
+            if frame == self.last_exc_back:
+                # Someone forgot a return event.
+                if self.trace_arcs and self.cur_file_dict:
+                    pair = (self.last_line, -self.last_exc_firstlineno)
+                    self.cur_file_dict[pair] = None
+                self.cur_file_dict, self.last_line = self.data_stack.pop()
+            self.last_exc_back = None
+
+        if event == 'call':
+            # Entering a new function context.  Decide if we should trace
+            # in this file.
+            self.data_stack.append((self.cur_file_dict, self.last_line))
+            filename = frame.f_code.co_filename
+            disp = self.should_trace_cache.get(filename)
+            if disp is None:
+                disp = self.should_trace(filename, frame)
+                self.should_trace_cache[filename] = disp
+
+            self.cur_file_dict = None
+            if disp.trace:
+                tracename = disp.source_filename
+                if tracename not in self.data:
+                    self.data[tracename] = {}
+                self.cur_file_dict = self.data[tracename]
+            # The call event is really a "start frame" event, and happens for
+            # function calls and re-entering generators.  The f_lasti field is
+            # -1 for calls, and a real offset for generators.  Use <0 as the
+            # line number for calls, and the real line number for generators.
+            if frame.f_lasti < 0:
+                self.last_line = -frame.f_code.co_firstlineno
+            else:
+                self.last_line = frame.f_lineno
+        elif event == 'line':
+            # Record an executed line.
+            if self.cur_file_dict is not None:
+                lineno = frame.f_lineno
+                if self.trace_arcs:
+                    self.cur_file_dict[(self.last_line, lineno)] = None
+                else:
+                    self.cur_file_dict[lineno] = None
+                self.last_line = lineno
+        elif event == 'return':
+            if self.trace_arcs and self.cur_file_dict:
+                # Record an arc leaving the function, but beware that a
+                # "return" event might just mean yielding from a generator.
+                bytecode = frame.f_code.co_code[frame.f_lasti]
+                if bytecode != YIELD_VALUE:
+                    first = frame.f_code.co_firstlineno
+                    self.cur_file_dict[(self.last_line, -first)] = None
+            # Leaving this function, pop the filename stack.
+            self.cur_file_dict, self.last_line = self.data_stack.pop()
+        elif event == 'exception':
+            self.last_exc_back = frame.f_back
+            self.last_exc_firstlineno = frame.f_code.co_firstlineno
+        return self._trace
+
+    def start(self):
+        """Start this Tracer.
+
+        Return a Python function suitable for use with sys.settrace().
+
+        """
+        if self.threading:
+            self.thread = self.threading.currentThread()
+        sys.settrace(self._trace)
+        self.stopped = False
+        return self._trace
+
+    def stop(self):
+        """Stop this Tracer."""
+        self.stopped = True
+        if self.threading and self.thread.ident != self.threading.currentThread().ident:
+            # Called on a different thread than started us: we can't unhook
+            # ourselves, but we've set the flag that we should stop, so we
+            # won't do any more tracing.
+            return
+
+        if self.warn:
+            if sys.gettrace() != self._trace:
+                msg = "Trace function changed, measurement is likely wrong: %r"
+                self.warn(msg % (sys.gettrace(),))
+
+        sys.settrace(None)
+
+    def get_stats(self):
+        """Return a dictionary of statistics, or None."""
+        return None
+
+#
+# eflag: FileType = Python2
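The comment block at the top of `PyTracer` documents an invariant: the trace function must always return itself so it survives tools that swap trace functions. A minimal sketch of that `sys.settrace` contract (hypothetical names, not part of coverage.py):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record executed lines; always return this same function so the
    # interpreter keeps it as the local trace for nested frames.
    if event == 'line':
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def demo():
    a = 1
    b = 2
    return a + b

sys.settrace(tracer)   # 'call' events now reach tracer()
result = demo()
sys.settrace(None)     # stop tracing before doing anything else
```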
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/report.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,104 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Reporter foundation for coverage.py."""
+
+import os
+import warnings
+
+from coverage.files import prep_patterns, FnmatchMatcher
+from coverage.misc import CoverageException, NoSource, NotPython, isolate_module
+
+os = isolate_module(os)
+
+
+class Reporter(object):
+    """A base class for all reporters."""
+
+    def __init__(self, coverage, config):
+        """Create a reporter.
+
+        `coverage` is the coverage instance. `config` is an instance of
+        CoverageConfig, for controlling all sorts of behavior.
+
+        """
+        self.coverage = coverage
+        self.config = config
+
+        # The directory into which to place the report, used by some derived
+        # classes.
+        self.directory = None
+
+        # Our method find_file_reporters used to set an attribute that other
+        # code could read.  That's been refactored away, but some third parties
+        # were using that attribute.  We'll continue to support it in a noisy
+        # way for now.
+        self._file_reporters = []
+
+    @property
+    def file_reporters(self):
+        """Keep .file_reporters working for private-grabbing tools."""
+        warnings.warn(
+            "Report.file_reporters will no longer be available in Coverage.py 4.2",
+            DeprecationWarning,
+        )
+        return self._file_reporters
+
+    def find_file_reporters(self, morfs):
+        """Find the FileReporters we'll report on.
+
+        `morfs` is a list of modules or file names.
+
+        Returns a list of FileReporters.
+
+        """
+        reporters = self.coverage._get_file_reporters(morfs)
+
+        if self.config.include:
+            matcher = FnmatchMatcher(prep_patterns(self.config.include))
+            reporters = [fr for fr in reporters if matcher.match(fr.filename)]
+
+        if self.config.omit:
+            matcher = FnmatchMatcher(prep_patterns(self.config.omit))
+            reporters = [fr for fr in reporters if not matcher.match(fr.filename)]
+
+        self._file_reporters = sorted(reporters)
+        return self._file_reporters
+
+    def report_files(self, report_fn, morfs, directory=None):
+        """Run a reporting function on a number of morfs.
+
+        `report_fn` is called for each relative morf in `morfs`.  It is called
+        as::
+
+            report_fn(file_reporter, analysis)
+
+        where `file_reporter` is the `FileReporter` for the morf, and
+        `analysis` is the `Analysis` for the morf.
+
+        """
+        file_reporters = self.find_file_reporters(morfs)
+
+        if not file_reporters:
+            raise CoverageException("No data to report.")
+
+        self.directory = directory
+        if self.directory and not os.path.exists(self.directory):
+            os.makedirs(self.directory)
+
+        for fr in file_reporters:
+            try:
+                report_fn(fr, self.coverage._analyze(fr))
+            except NoSource:
+                if not self.config.ignore_errors:
+                    raise
+            except NotPython:
+                # Only report errors for .py files, and only if we didn't
+                # explicitly suppress those errors.
+                # NotPython is only raised by PythonFileReporter, which has a
+                # should_be_python() method.
+                if fr.should_be_python() and not self.config.ignore_errors:
+                    raise
+
+#
+# eflag: FileType = Python2
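`find_file_reporters()` above narrows the reporter list with include/omit patterns. A rough equivalent using only stdlib `fnmatch` (coverage's `prep_patterns`/`FnmatchMatcher` do more path normalization than this sketch; `filter_filenames` is a hypothetical helper):

```python
import fnmatch

def filter_filenames(filenames, include=None, omit=None):
    # include: keep only names matching at least one pattern.
    if include:
        filenames = [f for f in filenames
                     if any(fnmatch.fnmatch(f, p) for p in include)]
    # omit: drop names matching any pattern.
    if omit:
        filenames = [f for f in filenames
                     if not any(fnmatch.fnmatch(f, p) for p in omit)]
    return sorted(filenames)
```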
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/results.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,274 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Results of coverage measurement."""
+
+import collections
+
+from coverage.backward import iitems
+from coverage.misc import format_lines
+
+
+class Analysis(object):
+    """The results of analyzing a FileReporter."""
+
+    def __init__(self, data, file_reporter):
+        self.data = data
+        self.file_reporter = file_reporter
+        self.filename = self.file_reporter.filename
+        self.statements = self.file_reporter.lines()
+        self.excluded = self.file_reporter.excluded_lines()
+
+        # Identify missing statements.
+        executed = self.data.lines(self.filename) or []
+        executed = self.file_reporter.translate_lines(executed)
+        self.missing = self.statements - executed
+
+        if self.data.has_arcs():
+            self._arc_possibilities = sorted(self.file_reporter.arcs())
+            self.exit_counts = self.file_reporter.exit_counts()
+            self.no_branch = self.file_reporter.no_branch_lines()
+            n_branches = self.total_branches()
+            mba = self.missing_branch_arcs()
+            n_partial_branches = sum(len(v) for k, v in iitems(mba) if k not in self.missing)
+            n_missing_branches = sum(len(v) for k, v in iitems(mba))
+        else:
+            self._arc_possibilities = []
+            self.exit_counts = {}
+            self.no_branch = set()
+            n_branches = n_partial_branches = n_missing_branches = 0
+
+        self.numbers = Numbers(
+            n_files=1,
+            n_statements=len(self.statements),
+            n_excluded=len(self.excluded),
+            n_missing=len(self.missing),
+            n_branches=n_branches,
+            n_partial_branches=n_partial_branches,
+            n_missing_branches=n_missing_branches,
+        )
+
+    def missing_formatted(self):
+        """The missing line numbers, formatted nicely.
+
+        Returns a string like "1-2, 5-11, 13-14".
+
+        """
+        return format_lines(self.statements, self.missing)
+
+    def has_arcs(self):
+        """Were arcs measured in this result?"""
+        return self.data.has_arcs()
+
+    def arc_possibilities(self):
+        """Returns a sorted list of the arcs in the code."""
+        return self._arc_possibilities
+
+    def arcs_executed(self):
+        """Returns a sorted list of the arcs actually executed in the code."""
+        executed = self.data.arcs(self.filename) or []
+        executed = self.file_reporter.translate_arcs(executed)
+        return sorted(executed)
+
+    def arcs_missing(self):
+        """Returns a sorted list of the arcs in the code not executed."""
+        possible = self.arc_possibilities()
+        executed = self.arcs_executed()
+        missing = (
+            p for p in possible
+                if p not in executed
+                    and p[0] not in self.no_branch
+        )
+        return sorted(missing)
+
+    def arcs_missing_formatted(self):
+        """The missing branch arcs, formatted nicely.
+
+        Returns a string like "1->2, 1->3, 16->20". Omits any mention of
+        branches from missing lines, so if line 17 is missing, then 17->18
+        won't be included.
+
+        """
+        arcs = self.missing_branch_arcs()
+        missing = self.missing
+        line_exits = sorted(iitems(arcs))
+        pairs = []
+        for line, exits in line_exits:
+            for ex in sorted(exits):
+                if line not in missing:
+                    pairs.append("%d->%s" % (line, (ex if ex > 0 else "exit")))
+        return ', '.join(pairs)
+
+    def arcs_unpredicted(self):
+        """Returns a sorted list of the executed arcs missing from the code."""
+        possible = self.arc_possibilities()
+        executed = self.arcs_executed()
+        # Exclude arcs here which connect a line to itself.  They can occur
+        # in executed data in some cases.  This is where they can cause
+        # trouble, and here is where it's the least burden to remove them.
+        # Also, generators can somehow cause arcs from "enter" to "exit", so
+        # make sure we have at least one positive value.
+        unpredicted = (
+            e for e in executed
+                if e not in possible
+                    and e[0] != e[1]
+                    and (e[0] > 0 or e[1] > 0)
+        )
+        return sorted(unpredicted)
+
+    def branch_lines(self):
+        """Returns a list of line numbers that have more than one exit."""
+        return [l1 for l1,count in iitems(self.exit_counts) if count > 1]
+
+    def total_branches(self):
+        """How many total branches are there?"""
+        return sum(count for count in self.exit_counts.values() if count > 1)
+
+    def missing_branch_arcs(self):
+        """Return arcs that weren't executed from branch lines.
+
+        Returns {l1:[l2a,l2b,...], ...}
+
+        """
+        missing = self.arcs_missing()
+        branch_lines = set(self.branch_lines())
+        mba = collections.defaultdict(list)
+        for l1, l2 in missing:
+            if l1 in branch_lines:
+                mba[l1].append(l2)
+        return mba
+
+    def branch_stats(self):
+        """Get stats about branches.
+
+        Returns a dict mapping line numbers to a tuple:
+        (total_exits, taken_exits).
+        """
+
+        missing_arcs = self.missing_branch_arcs()
+        stats = {}
+        for lnum in self.branch_lines():
+            exits = self.exit_counts[lnum]
+            try:
+                missing = len(missing_arcs[lnum])
+            except KeyError:
+                missing = 0
+            stats[lnum] = (exits, exits - missing)
+        return stats
+
+
+class Numbers(object):
+    """The numerical results of measuring coverage.
+
+    This holds the basic statistics from `Analysis`, and is used to roll
+    up statistics across files.
+
+    """
+    # A global to determine the precision on coverage percentages, the number
+    # of decimal places.
+    _precision = 0
+    _near0 = 1.0              # These will change when _precision is changed.
+    _near100 = 99.0
+
+    def __init__(self, n_files=0, n_statements=0, n_excluded=0, n_missing=0,
+                    n_branches=0, n_partial_branches=0, n_missing_branches=0
+                    ):
+        self.n_files = n_files
+        self.n_statements = n_statements
+        self.n_excluded = n_excluded
+        self.n_missing = n_missing
+        self.n_branches = n_branches
+        self.n_partial_branches = n_partial_branches
+        self.n_missing_branches = n_missing_branches
+
+    def init_args(self):
+        """Return a list for __init__(*args) to recreate this object."""
+        return [
+            self.n_files, self.n_statements, self.n_excluded, self.n_missing,
+            self.n_branches, self.n_partial_branches, self.n_missing_branches,
+        ]
+
+    @classmethod
+    def set_precision(cls, precision):
+        """Set the number of decimal places used to report percentages."""
+        assert 0 <= precision < 10
+        cls._precision = precision
+        cls._near0 = 1.0 / 10**precision
+        cls._near100 = 100.0 - cls._near0
+
+    @property
+    def n_executed(self):
+        """Returns the number of executed statements."""
+        return self.n_statements - self.n_missing
+
+    @property
+    def n_executed_branches(self):
+        """Returns the number of executed branches."""
+        return self.n_branches - self.n_missing_branches
+
+    @property
+    def pc_covered(self):
+        """Returns a single percentage value for coverage."""
+        if self.n_statements > 0:
+            numerator, denominator = self.ratio_covered
+            pc_cov = (100.0 * numerator) / denominator
+        else:
+            pc_cov = 100.0
+        return pc_cov
+
+    @property
+    def pc_covered_str(self):
+        """Returns the percent covered, as a string, without a percent sign.
+
+        Note that "0" is only returned when the value is truly zero, and "100"
+        is only returned when the value is truly 100.  Rounding can never
+        result in either "0" or "100".
+
+        """
+        pc = self.pc_covered
+        if 0 < pc < self._near0:
+            pc = self._near0
+        elif self._near100 < pc < 100:
+            pc = self._near100
+        else:
+            pc = round(pc, self._precision)
+        return "%.*f" % (self._precision, pc)
+
+    @classmethod
+    def pc_str_width(cls):
+        """How many characters wide can pc_covered_str be?"""
+        width = 3   # "100"
+        if cls._precision > 0:
+            width += 1 + cls._precision
+        return width
+
+    @property
+    def ratio_covered(self):
+        """Return a numerator and denominator for the coverage ratio."""
+        numerator = self.n_executed + self.n_executed_branches
+        denominator = self.n_statements + self.n_branches
+        return numerator, denominator
+
+    def __add__(self, other):
+        nums = Numbers()
+        nums.n_files = self.n_files + other.n_files
+        nums.n_statements = self.n_statements + other.n_statements
+        nums.n_excluded = self.n_excluded + other.n_excluded
+        nums.n_missing = self.n_missing + other.n_missing
+        nums.n_branches = self.n_branches + other.n_branches
+        nums.n_partial_branches = (
+            self.n_partial_branches + other.n_partial_branches
+            )
+        nums.n_missing_branches = (
+            self.n_missing_branches + other.n_missing_branches
+            )
+        return nums
+
+    def __radd__(self, other):
+        # Implementing 0+Numbers allows us to sum() a list of Numbers.
+        if other == 0:
+            return self
+        return NotImplemented
+
+#
+# eflag: FileType = Python2
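The clamping in `pc_covered_str` guarantees that "0" and "100" only ever appear for exactly zero or exactly complete coverage. The same logic, extracted into a standalone function for illustration (hypothetical `pc_str`):

```python
def pc_str(pc, precision=0):
    # Near-zero values are pinned just above 0 and near-100 values just
    # below 100, so rounding can never misreport complete or zero coverage.
    near0 = 1.0 / 10 ** precision
    near100 = 100.0 - near0
    if 0 < pc < near0:
        pc = near0
    elif near100 < pc < 100:
        pc = near100
    else:
        pc = round(pc, precision)
    return "%.*f" % (precision, pc)
```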
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/summary.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,124 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Summary reporting"""
+
+import sys
+
+from coverage import env
+from coverage.report import Reporter
+from coverage.results import Numbers
+from coverage.misc import NotPython, CoverageException, output_encoding
+
+
+class SummaryReporter(Reporter):
+    """A reporter for writing the summary report."""
+
+    def __init__(self, coverage, config):
+        super(SummaryReporter, self).__init__(coverage, config)
+        self.branches = coverage.data.has_arcs()
+
+    def report(self, morfs, outfile=None):
+        """Writes a report summarizing coverage statistics per module.
+
+        `outfile` is a file object to write the summary to. It must be opened
+        for native strings (bytes on Python 2, Unicode on Python 3).
+
+        """
+        file_reporters = self.find_file_reporters(morfs)
+
+        # Prepare the formatting strings
+        max_name = max([len(fr.relative_filename()) for fr in file_reporters] + [5])
+        fmt_name = u"%%- %ds  " % max_name
+        fmt_err = u"%s   %s: %s"
+        fmt_skip_covered = u"\n%s file%s skipped due to complete coverage."
+
+        header = (fmt_name % "Name") + u" Stmts   Miss"
+        fmt_coverage = fmt_name + u"%6d %6d"
+        if self.branches:
+            header += u" Branch BrPart"
+            fmt_coverage += u" %6d %6d"
+        width100 = Numbers.pc_str_width()
+        header += u"%*s" % (width100+4, "Cover")
+        fmt_coverage += u"%%%ds%%%%" % (width100+3,)
+        if self.config.show_missing:
+            header += u"   Missing"
+            fmt_coverage += u"   %s"
+        rule = u"-" * len(header)
+
+        if outfile is None:
+            outfile = sys.stdout
+
+        def writeout(line):
+            """Write a line to the output, adding a newline."""
+            if env.PY2:
+                line = line.encode(output_encoding())
+            outfile.write(line.rstrip())
+            outfile.write("\n")
+
+        # Write the header
+        writeout(header)
+        writeout(rule)
+
+        total = Numbers()
+        skipped_count = 0
+
+        for fr in file_reporters:
+            try:
+                analysis = self.coverage._analyze(fr)
+                nums = analysis.numbers
+                total += nums
+
+                if self.config.skip_covered:
+                    # Don't report on 100% files.
+                    no_missing_lines = (nums.n_missing == 0)
+                    no_missing_branches = (nums.n_partial_branches == 0)
+                    if no_missing_lines and no_missing_branches:
+                        skipped_count += 1
+                        continue
+
+                args = (fr.relative_filename(), nums.n_statements, nums.n_missing)
+                if self.branches:
+                    args += (nums.n_branches, nums.n_partial_branches)
+                args += (nums.pc_covered_str,)
+                if self.config.show_missing:
+                    missing_fmtd = analysis.missing_formatted()
+                    if self.branches:
+                        branches_fmtd = analysis.arcs_missing_formatted()
+                        if branches_fmtd:
+                            if missing_fmtd:
+                                missing_fmtd += ", "
+                            missing_fmtd += branches_fmtd
+                    args += (missing_fmtd,)
+                writeout(fmt_coverage % args)
+            except Exception:
+                report_it = not self.config.ignore_errors
+                if report_it:
+                    typ, msg = sys.exc_info()[:2]
+                    # NotPython is only raised by PythonFileReporter, which has a
+                    # should_be_python() method.
+                    if typ is NotPython and not fr.should_be_python():
+                        report_it = False
+                if report_it:
+                    writeout(fmt_err % (fr.relative_filename(), typ.__name__, msg))
+
+        if total.n_files > 1:
+            writeout(rule)
+            args = ("TOTAL", total.n_statements, total.n_missing)
+            if self.branches:
+                args += (total.n_branches, total.n_partial_branches)
+            args += (total.pc_covered_str,)
+            if self.config.show_missing:
+                args += ("",)
+            writeout(fmt_coverage % args)
+
+        if not total.n_files and not skipped_count:
+            raise CoverageException("No data to report.")
+
+        if self.config.skip_covered and skipped_count:
+            writeout(fmt_skip_covered % (skipped_count, 's' if skipped_count > 1 else ''))
+
+        return total.n_statements and total.pc_covered
+
+#
+# eflag: FileType = Python2
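`report()` assembles its column layout from %-style width formats built at runtime. A condensed sketch of the header assembly (hypothetical `make_header`, mirroring the logic above with the branch columns optional):

```python
def make_header(max_name, branches=False, width100=3):
    # Name column is left-justified to the longest file name, followed
    # by fixed-width count columns; branch columns only when measured.
    fmt_name = "%%- %ds  " % max_name   # e.g. "%- 20s  "
    header = (fmt_name % "Name") + " Stmts   Miss"
    if branches:
        header += " Branch BrPart"
    header += "%*s" % (width100 + 4, "Cover")
    return header
```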
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/templite.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,293 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""A simple Python template renderer, for a nano-subset of Django syntax.
+
+For a detailed discussion of this code, see this chapter from 500 Lines:
+http://aosabook.org/en/500L/a-template-engine.html
+
+"""
+
+# Coincidentally named the same as http://code.activestate.com/recipes/496702/
+
+import re
+
+from coverage import env
+
+
+class TempliteSyntaxError(ValueError):
+    """Raised when a template has a syntax error."""
+    pass
+
+
+class TempliteValueError(ValueError):
+    """Raised when an expression won't evaluate in a template."""
+    pass
+
+
+class CodeBuilder(object):
+    """Build source code conveniently."""
+
+    def __init__(self, indent=0):
+        self.code = []
+        self.indent_level = indent
+
+    def __str__(self):
+        return "".join(str(c) for c in self.code)
+
+    def add_line(self, line):
+        """Add a line of source to the code.
+
+        Indentation and newline will be added for you, don't provide them.
+
+        """
+        self.code.extend([" " * self.indent_level, line, "\n"])
+
+    def add_section(self):
+        """Add a section, a sub-CodeBuilder."""
+        section = CodeBuilder(self.indent_level)
+        self.code.append(section)
+        return section
+
+    INDENT_STEP = 4      # PEP8 says so!
+
+    def indent(self):
+        """Increase the current indent for following lines."""
+        self.indent_level += self.INDENT_STEP
+
+    def dedent(self):
+        """Decrease the current indent for following lines."""
+        self.indent_level -= self.INDENT_STEP
+
+    def get_globals(self):
+        """Execute the code, and return a dict of globals it defines."""
+        # A check that the caller really finished all the blocks they started.
+        assert self.indent_level == 0
+        # Get the Python source as a single string.
+        python_source = str(self)
+        # Execute the source, defining globals, and return them.
+        global_namespace = {}
+        exec(python_source, global_namespace)
+        return global_namespace
+
+
+class Templite(object):
+    """A simple template renderer, for a nano-subset of Django syntax.
+
+    Supported constructs are extended variable access::
+
+        {{var.modifier.modifier|filter|filter}}
+
+    loops::
+
+        {% for var in list %}...{% endfor %}
+
+    and ifs::
+
+        {% if var %}...{% endif %}
+
+    Comments are within curly-hash markers::
+
+        {# This will be ignored #}
+
+    Any of these constructs can have a hyphen at the end (`-}}`, `-%}`, `-#}`),
+    which will collapse the whitespace following the tag.
+
+    Construct a Templite with the template text, then use `render` against a
+    dictionary context to create a finished string::
+
+        templite = Templite('''
+            <h1>Hello {{name|upper}}!</h1>
+            {% for topic in topics %}
+                <p>You are interested in {{topic}}.</p>
+            {% endfor %}
+            ''',
+            {'upper': str.upper},
+        )
+        text = templite.render({
+            'name': "Ned",
+            'topics': ['Python', 'Geometry', 'Juggling'],
+        })
+
+    """
+    def __init__(self, text, *contexts):
+        """Construct a Templite with the given `text`.
+
+        `contexts` are dictionaries of values to use for future renderings.
+        These are good for filters and global values.
+
+        """
+        self.context = {}
+        for context in contexts:
+            self.context.update(context)
+
+        self.all_vars = set()
+        self.loop_vars = set()
+
+        # We construct a function in source form, then compile it and hold onto
+        # it, and execute it to render the template.
+        code = CodeBuilder()
+
+        code.add_line("def render_function(context, do_dots):")
+        code.indent()
+        vars_code = code.add_section()
+        code.add_line("result = []")
+        code.add_line("append_result = result.append")
+        code.add_line("extend_result = result.extend")
+        if env.PY2:
+            code.add_line("to_str = unicode")
+        else:
+            code.add_line("to_str = str")
+
+        buffered = []
+
+        def flush_output():
+            """Force `buffered` to the code builder."""
+            if len(buffered) == 1:
+                code.add_line("append_result(%s)" % buffered[0])
+            elif len(buffered) > 1:
+                code.add_line("extend_result([%s])" % ", ".join(buffered))
+            del buffered[:]
+
+        ops_stack = []
+
+        # Split the text to form a list of tokens.
+        tokens = re.split(r"(?s)({{.*?}}|{%.*?%}|{#.*?#})", text)
+
+        squash = False
+
+        for token in tokens:
+            if token.startswith('{'):
+                start, end = 2, -2
+                squash = (token[-3] == '-')
+                if squash:
+                    end = -3
+
+                if token.startswith('{#'):
+                    # Comment: ignore it and move on.
+                    continue
+                elif token.startswith('{{'):
+                    # An expression to evaluate.
+                    expr = self._expr_code(token[start:end].strip())
+                    buffered.append("to_str(%s)" % expr)
+                elif token.startswith('{%'):
+                    # Action tag: split into words and parse further.
+                    flush_output()
+
+                    words = token[start:end].strip().split()
+                    if words[0] == 'if':
+                        # An if statement: evaluate the expression to decide
+                        # whether to include the block.
+                        if len(words) != 2:
+                            self._syntax_error("Don't understand if", token)
+                        ops_stack.append('if')
+                        code.add_line("if %s:" % self._expr_code(words[1]))
+                        code.indent()
+                    elif words[0] == 'for':
+                        # A loop: iterate over expression result.
+                        if len(words) != 4 or words[2] != 'in':
+                            self._syntax_error("Don't understand for", token)
+                        ops_stack.append('for')
+                        self._variable(words[1], self.loop_vars)
+                        code.add_line(
+                            "for c_%s in %s:" % (
+                                words[1],
+                                self._expr_code(words[3])
+                            )
+                        )
+                        code.indent()
+                    elif words[0].startswith('end'):
+                        # Endsomething.  Pop the ops stack.
+                        if len(words) != 1:
+                            self._syntax_error("Don't understand end", token)
+                        end_what = words[0][3:]
+                        if not ops_stack:
+                            self._syntax_error("Too many ends", token)
+                        start_what = ops_stack.pop()
+                        if start_what != end_what:
+                            self._syntax_error("Mismatched end tag", end_what)
+                        code.dedent()
+                    else:
+                        self._syntax_error("Don't understand tag", words[0])
+            else:
+                # Literal content.  If it isn't empty, output it.
+                if squash:
+                    token = token.lstrip()
+                if token:
+                    buffered.append(repr(token))
+
+        if ops_stack:
+            self._syntax_error("Unmatched action tag", ops_stack[-1])
+
+        flush_output()
+
+        for var_name in self.all_vars - self.loop_vars:
+            vars_code.add_line("c_%s = context[%r]" % (var_name, var_name))
+
+        code.add_line('return "".join(result)')
+        code.dedent()
+        self._render_function = code.get_globals()['render_function']
+
+    def _expr_code(self, expr):
+        """Generate a Python expression for `expr`."""
+        if "|" in expr:
+            pipes = expr.split("|")
+            code = self._expr_code(pipes[0])
+            for func in pipes[1:]:
+                self._variable(func, self.all_vars)
+                code = "c_%s(%s)" % (func, code)
+        elif "." in expr:
+            dots = expr.split(".")
+            code = self._expr_code(dots[0])
+            args = ", ".join(repr(d) for d in dots[1:])
+            code = "do_dots(%s, %s)" % (code, args)
+        else:
+            self._variable(expr, self.all_vars)
+            code = "c_%s" % expr
+        return code
+
+    def _syntax_error(self, msg, thing):
+        """Raise a syntax error using `msg`, and showing `thing`."""
+        raise TempliteSyntaxError("%s: %r" % (msg, thing))
+
+    def _variable(self, name, vars_set):
+        """Track that `name` is used as a variable.
+
+        Adds the name to `vars_set`, a set of variable names.
+
+        Raises a syntax error if `name` is not a valid name.
+
+        """
+        if not re.match(r"[_a-zA-Z][_a-zA-Z0-9]*$", name):
+            self._syntax_error("Not a valid name", name)
+        vars_set.add(name)
+
+    def render(self, context=None):
+        """Render this template by applying it to `context`.
+
+        `context` is a dictionary of values to use in this rendering.
+
+        """
+        # Make the complete context we'll use.
+        render_context = dict(self.context)
+        if context:
+            render_context.update(context)
+        return self._render_function(render_context, self._do_dots)
+
+    def _do_dots(self, value, *dots):
+        """Evaluate dotted expressions at run-time."""
+        for dot in dots:
+            try:
+                value = getattr(value, dot)
+            except AttributeError:
+                try:
+                    value = value[dot]
+                except (TypeError, KeyError):
+                    raise TempliteValueError(
+                        "Couldn't evaluate %r.%s" % (value, dot)
+                    )
+            if callable(value):
+                value = value()
+        return value
+
+#
+# eflag: FileType = Python2
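The `_do_dots` helper above resolves dotted template expressions at render time: attribute access first, item lookup as the fallback, and any callable found along the way is called. A minimal standalone sketch of that lookup order (the names and sample data here are illustrative, not part of the module):

```python
def do_dots(value, *dots):
    """Resolve a dotted template expression like `user.name.upper` step by step."""
    for dot in dots:
        try:
            value = getattr(value, dot)   # attribute access first
        except AttributeError:
            value = value[dot]            # fall back to item lookup
        if callable(value):
            value = value()               # call zero-argument callables
    return value

user = {"name": "ada"}
print(do_dots(user, "name", "upper"))  # dict lookup, then str method, called
```

This is why a template can write `{{user.name.upper}}` without caring whether `user` is an object or a dictionary.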
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/test_helpers.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,393 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""Mixin classes to help make good tests."""
+
+import atexit
+import collections
+import contextlib
+import os
+import random
+import shutil
+import sys
+import tempfile
+import textwrap
+
+from coverage.backunittest import TestCase
+from coverage.backward import StringIO, to_bytes
+
+
+class Tee(object):
+    """A file-like that writes to all the file-likes it has."""
+
+    def __init__(self, *files):
+        """Make a Tee that writes to all the files in `files.`"""
+        self._files = files
+        if hasattr(files[0], "encoding"):
+            self.encoding = files[0].encoding
+
+    def write(self, data):
+        """Write `data` to all the files."""
+        for f in self._files:
+            f.write(data)
+
+    def flush(self):
+        """Flush the data on all the files."""
+        for f in self._files:
+            f.flush()
+
+    if 0:
+        # Use this if you need to use a debugger, though it makes some tests
+        # fail; I'm not sure why...
+        def __getattr__(self, name):
+            return getattr(self._files[0], name)
+
+
+@contextlib.contextmanager
+def change_dir(new_dir):
+    """Change directory, and then change back.
+
+    Use as a context manager, it will give you the new directory, and later
+    restore the old one.
+
+    """
+    old_dir = os.getcwd()
+    os.chdir(new_dir)
+    try:
+        yield os.getcwd()
+    finally:
+        os.chdir(old_dir)
+
+
+@contextlib.contextmanager
+def saved_sys_path():
+    """Save sys.path, and restore it later."""
+    old_syspath = sys.path[:]
+    try:
+        yield
+    finally:
+        sys.path = old_syspath
+
+
+def setup_with_context_manager(testcase, cm):
+    """Use a contextmanager to setUp a test case.
+
+    If you have a context manager you like::
+
+        with ctxmgr(a, b, c) as v:
+            # do something with v
+
+    and you want to have that effect for a test case, call this function from
+    your setUp, and it will start the context manager for your test, and end it
+    when the test is done::
+
+        def setUp(self):
+            self.v = setup_with_context_manager(self, ctxmgr(a, b, c))
+
+        def test_foo(self):
+            # do something with self.v
+
+    """
+    val = cm.__enter__()
+    testcase.addCleanup(cm.__exit__, None, None, None)
+    return val
+
+
+class ModuleAwareMixin(TestCase):
+    """A test case mixin that isolates changes to sys.modules."""
+
+    def setUp(self):
+        super(ModuleAwareMixin, self).setUp()
+
+        # Record sys.modules here so we can restore it in cleanup_modules.
+        self.old_modules = list(sys.modules)
+        self.addCleanup(self.cleanup_modules)
+
+    def cleanup_modules(self):
+        """Remove any new modules imported during the test run.
+
+        This lets us import the same source files for more than one test.
+
+        """
+        for m in [m for m in sys.modules if m not in self.old_modules]:
+            del sys.modules[m]
+
+
+class SysPathAwareMixin(TestCase):
+    """A test case mixin that isolates changes to sys.path."""
+
+    def setUp(self):
+        super(SysPathAwareMixin, self).setUp()
+        setup_with_context_manager(self, saved_sys_path())
+
+
+class EnvironmentAwareMixin(TestCase):
+    """A test case mixin that isolates changes to the environment."""
+
+    def setUp(self):
+        super(EnvironmentAwareMixin, self).setUp()
+
+        # Record environment variables that we changed with set_environ.
+        self.environ_undos = {}
+
+        self.addCleanup(self.cleanup_environ)
+
+    def set_environ(self, name, value):
+        """Set an environment variable `name` to be `value`.
+
+        The environment variable is set, and a record is kept that it was set,
+        so that `cleanup_environ` can restore its original value.
+
+        """
+        if name not in self.environ_undos:
+            self.environ_undos[name] = os.environ.get(name)
+        os.environ[name] = value
+
+    def cleanup_environ(self):
+        """Undo all the changes made by `set_environ`."""
+        for name, value in self.environ_undos.items():
+            if value is None:
+                del os.environ[name]
+            else:
+                os.environ[name] = value
+
+
+class StdStreamCapturingMixin(TestCase):
+    """A test case mixin that captures stdout and stderr."""
+
+    def setUp(self):
+        super(StdStreamCapturingMixin, self).setUp()
+
+        # Capture stdout and stderr so we can examine them in tests.
+        # nose keeps stdout from littering the screen, so we can safely Tee it,
+        # but it doesn't capture stderr, so we don't want to Tee stderr to the
+        # real stderr, since it will interfere with our nice field of dots.
+        old_stdout = sys.stdout
+        self.captured_stdout = StringIO()
+        sys.stdout = Tee(sys.stdout, self.captured_stdout)
+
+        old_stderr = sys.stderr
+        self.captured_stderr = StringIO()
+        sys.stderr = self.captured_stderr
+
+        self.addCleanup(self.cleanup_std_streams, old_stdout, old_stderr)
+
+    def cleanup_std_streams(self, old_stdout, old_stderr):
+        """Restore stdout and stderr."""
+        sys.stdout = old_stdout
+        sys.stderr = old_stderr
+
+    def stdout(self):
+        """Return the data written to stdout during the test."""
+        return self.captured_stdout.getvalue()
+
+    def stderr(self):
+        """Return the data written to stderr during the test."""
+        return self.captured_stderr.getvalue()
+
+
+class DelayedAssertionMixin(TestCase):
+    """A test case mixin that provides a `delayed_assertions` context manager.
+
+    Use it like this::
+
+        with self.delayed_assertions():
+            self.assertEqual(x, y)
+            self.assertEqual(z, w)
+
+    All of the assertions will run.  The failures will be displayed at the end
+    of the with-statement.
+
+    NOTE: this only works with some assertions.  These are known to work:
+
+        - `assertEqual(str, str)`
+
+        - `assertMultilineEqual(str, str)`
+
+    """
+    def __init__(self, *args, **kwargs):
+        super(DelayedAssertionMixin, self).__init__(*args, **kwargs)
+        # This mixin only works with assert methods that call `self.fail`.  In
+        # Python 2.7, `assertEqual` didn't, but we can do what Python 3 does,
+        # and use `assertMultiLineEqual` for comparing strings.
+        self.addTypeEqualityFunc(str, 'assertMultiLineEqual')
+        self._delayed_assertions = None
+
+    @contextlib.contextmanager
+    def delayed_assertions(self):
+        """The context manager: assert that we didn't collect any assertions."""
+        self._delayed_assertions = []
+        old_fail = self.fail
+        self.fail = self._delayed_fail
+        try:
+            yield
+        finally:
+            self.fail = old_fail
+        if self._delayed_assertions:
+            if len(self._delayed_assertions) == 1:
+                self.fail(self._delayed_assertions[0])
+            else:
+                self.fail(
+                    "{0} failed assertions:\n{1}".format(
+                        len(self._delayed_assertions),
+                        "\n".join(self._delayed_assertions),
+                    )
+                )
+
+    def _delayed_fail(self, msg=None):
+        """The stand-in for TestCase.fail during delayed_assertions."""
+        self._delayed_assertions.append(msg)
+
+
+class TempDirMixin(SysPathAwareMixin, ModuleAwareMixin, TestCase):
+    """A test case mixin that creates a temp directory and files in it.
+
+    Includes SysPathAwareMixin and ModuleAwareMixin, because making and using
+    temp directories like this will also need that kind of isolation.
+
+    """
+
+    # Our own setting: most of these tests run in their own temp directory.
+    # Set this to False in your subclass if you don't want a temp directory
+    # created.
+    run_in_temp_dir = True
+
+    # Set this if you aren't creating any files with make_file, but still want
+    # the temp directory.  This will stop the test behavior checker from
+    # complaining.
+    no_files_in_temp_dir = False
+
+    def setUp(self):
+        super(TempDirMixin, self).setUp()
+
+        if self.run_in_temp_dir:
+            # Create a temporary directory.
+            self.temp_dir = self.make_temp_dir("test_cover")
+            self.chdir(self.temp_dir)
+
+            # Modules should be importable from this temp directory.  We don't
+            # use '' because we make lots of different temp directories and
+            # nose's caching importer can get confused.  The full path prevents
+            # problems.
+            sys.path.insert(0, os.getcwd())
+
+        class_behavior = self.class_behavior()
+        class_behavior.tests += 1
+        class_behavior.temp_dir = self.run_in_temp_dir
+        class_behavior.no_files_ok = self.no_files_in_temp_dir
+
+        self.addCleanup(self.check_behavior)
+
+    def make_temp_dir(self, slug="test_cover"):
+        """Make a temp directory that is cleaned up when the test is done."""
+        name = "%s_%08d" % (slug, random.randint(0, 99999999))
+        temp_dir = os.path.join(tempfile.gettempdir(), name)
+        os.makedirs(temp_dir)
+        self.addCleanup(shutil.rmtree, temp_dir)
+        return temp_dir
+
+    def chdir(self, new_dir):
+        """Change directory, and change back when the test is done."""
+        old_dir = os.getcwd()
+        os.chdir(new_dir)
+        self.addCleanup(os.chdir, old_dir)
+
+    def check_behavior(self):
+        """Check that we did the right things."""
+
+        class_behavior = self.class_behavior()
+        if class_behavior.test_method_made_any_files:
+            class_behavior.tests_making_files += 1
+
+    def make_file(self, filename, text="", newline=None):
+        """Create a file for testing.
+
+        `filename` is the relative path to the file, including directories if
+        desired, which will be created if need be.
+
+        `text` is the content to create in the file, a native string (bytes in
+        Python 2, unicode in Python 3).
+
+        If `newline` is provided, it is a string that will be used as the line
+        endings in the created file, otherwise the line endings are as provided
+        in `text`.
+
+        Returns `filename`.
+
+        """
+        # Tests that call `make_file` should be run in a temp environment.
+        assert self.run_in_temp_dir
+        self.class_behavior().test_method_made_any_files = True
+
+        text = textwrap.dedent(text)
+        if newline:
+            text = text.replace("\n", newline)
+
+        # Make sure the directories are available.
+        dirs, _ = os.path.split(filename)
+        if dirs and not os.path.exists(dirs):
+            os.makedirs(dirs)
+
+        # Create the file.
+        with open(filename, 'wb') as f:
+            f.write(to_bytes(text))
+
+        return filename
+
+    # We run some tests in temporary directories, because they may need to make
+    # files for the tests. But this is expensive, so we can change per-class
+    # whether a temp directory is used or not.  It's easy to forget to set that
+    # option properly, so we track information about what the tests did, and
+    # then report at the end of the process on test classes that were set
+    # wrong.
+
+    class ClassBehavior(object):
+        """A value object to store per-class."""
+        def __init__(self):
+            self.tests = 0
+            self.skipped = 0
+            self.temp_dir = True
+            self.no_files_ok = False
+            self.tests_making_files = 0
+            self.test_method_made_any_files = False
+
+    # Map from class to info about how it ran.
+    class_behaviors = collections.defaultdict(ClassBehavior)
+
+    @classmethod
+    def report_on_class_behavior(cls):
+        """Called at process exit to report on class behavior."""
+        for test_class, behavior in cls.class_behaviors.items():
+            bad = ""
+            if behavior.tests <= behavior.skipped:
+                bad = ""
+            elif behavior.temp_dir and behavior.tests_making_files == 0:
+                if not behavior.no_files_ok:
+                    bad = "Inefficient"
+            elif not behavior.temp_dir and behavior.tests_making_files > 0:
+                bad = "Unsafe"
+
+            if bad:
+                if behavior.temp_dir:
+                    where = "in a temp directory"
+                else:
+                    where = "without a temp directory"
+                print(
+                    "%s: %s ran %d tests, %d made files %s" % (
+                        bad,
+                        test_class.__name__,
+                        behavior.tests,
+                        behavior.tests_making_files,
+                        where,
+                    )
+                )
+
+    def class_behavior(self):
+        """Get the ClassBehavior instance for this test."""
+        return self.class_behaviors[self.__class__]
+
+# When the process ends, find out about bad classes.
+atexit.register(TempDirMixin.report_on_class_behavior)
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/version.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,36 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""The version and URL for coverage.py"""
+# This file is exec'ed in setup.py, don't import anything!
+
+# Same semantics as sys.version_info.
+version_info = (4, 1, 0, 'final', 0)
+
+
+def _make_version(major, minor, micro, releaselevel, serial):
+    """Create a readable version string from version_info tuple components."""
+    assert releaselevel in ['alpha', 'beta', 'candidate', 'final']
+    version = "%d.%d" % (major, minor)
+    if micro:
+        version += ".%d" % (micro,)
+    if releaselevel != 'final':
+        short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
+        version += "%s%d" % (short, serial)
+    return version
+
+
+def _make_url(major, minor, micro, releaselevel, serial):
+    """Make the URL people should start at for this version of coverage.py."""
+    url = "https://coverage.readthedocs.io"
+    if releaselevel != 'final':
+        # For pre-releases, use a version-specific URL.
+        url += "/en/coverage-" + _make_version(major, minor, micro, releaselevel, serial)
+    return url
+
+
+__version__ = _make_version(*version_info)
+__url__ = _make_url(*version_info)
+
+#
+# eflag: FileType = Python2
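`_make_version` above turns a `sys.version_info`-style tuple into a short release string, dropping a zero micro part and appending a suffix for pre-releases. A standalone sketch of the same mapping:

```python
def make_version(major, minor, micro, releaselevel, serial):
    """Build a readable version string from version_info components."""
    version = "%d.%d" % (major, minor)
    if micro:
        version += ".%d" % (micro,)
    if releaselevel != 'final':
        # Pre-releases get a short suffix: 4.2.1b3, 5.0rc1, ...
        short = {'alpha': 'a', 'beta': 'b', 'candidate': 'rc'}[releaselevel]
        version += "%s%d" % (short, serial)
    return version

print(make_version(4, 1, 0, 'final', 0))   # 4.1
print(make_version(4, 2, 1, 'beta', 3))    # 4.2.1b3
```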
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/coverage/xmlreport.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,221 @@
+# Licensed under the Apache License: http://www.apache.org/licenses/LICENSE-2.0
+# For details: https://bitbucket.org/ned/coveragepy/src/default/NOTICE.txt
+
+"""XML reporting for coverage.py"""
+
+import os
+import os.path
+import sys
+import time
+import xml.dom.minidom
+
+from coverage import env
+from coverage import __url__, __version__, files
+from coverage.backward import iitems
+from coverage.misc import isolate_module
+from coverage.report import Reporter
+
+os = isolate_module(os)
+
+
+DTD_URL = (
+    'https://raw.githubusercontent.com/cobertura/web/'
+    'f0366e5e2cf18f111cbd61fc34ef720a6584ba02'
+    '/htdocs/xml/coverage-03.dtd'
+)
+
+
+def rate(hit, num):
+    """Return the fraction of `hit`/`num`, as a string."""
+    if num == 0:
+        return "1"
+    else:
+        return "%.4g" % (float(hit) / num)
+
+
+class XmlReporter(Reporter):
+    """A reporter for writing Cobertura-style XML coverage results."""
+
+    def __init__(self, coverage, config):
+        super(XmlReporter, self).__init__(coverage, config)
+
+        self.source_paths = set()
+        if config.source:
+            for src in config.source:
+                if os.path.exists(src):
+                    self.source_paths.add(files.canonical_filename(src))
+        self.packages = {}
+        self.xml_out = None
+        self.has_arcs = coverage.data.has_arcs()
+
+    def report(self, morfs, outfile=None):
+        """Generate a Cobertura-compatible XML report for `morfs`.
+
+        `morfs` is a list of modules or file names.
+
+        `outfile` is a file object to write the XML to.
+
+        """
+        # Initial setup.
+        outfile = outfile or sys.stdout
+
+        # Create the DOM that will store the data.
+        impl = xml.dom.minidom.getDOMImplementation()
+        self.xml_out = impl.createDocument(None, "coverage", None)
+
+        # Write header stuff.
+        xcoverage = self.xml_out.documentElement
+        xcoverage.setAttribute("version", __version__)
+        xcoverage.setAttribute("timestamp", str(int(time.time()*1000)))
+        xcoverage.appendChild(self.xml_out.createComment(
+            " Generated by coverage.py: %s " % __url__
+            ))
+        xcoverage.appendChild(self.xml_out.createComment(" Based on %s " % DTD_URL))
+
+        # Call xml_file for each file in the data.
+        self.report_files(self.xml_file, morfs)
+
+        xsources = self.xml_out.createElement("sources")
+        xcoverage.appendChild(xsources)
+
+        # Populate the XML DOM with the source info.
+        for path in sorted(self.source_paths):
+            xsource = self.xml_out.createElement("source")
+            xsources.appendChild(xsource)
+            txt = self.xml_out.createTextNode(path)
+            xsource.appendChild(txt)
+
+        lnum_tot, lhits_tot = 0, 0
+        bnum_tot, bhits_tot = 0, 0
+
+        xpackages = self.xml_out.createElement("packages")
+        xcoverage.appendChild(xpackages)
+
+        # Populate the XML DOM with the package info.
+        for pkg_name, pkg_data in sorted(iitems(self.packages)):
+            class_elts, lhits, lnum, bhits, bnum = pkg_data
+            xpackage = self.xml_out.createElement("package")
+            xpackages.appendChild(xpackage)
+            xclasses = self.xml_out.createElement("classes")
+            xpackage.appendChild(xclasses)
+            for _, class_elt in sorted(iitems(class_elts)):
+                xclasses.appendChild(class_elt)
+            xpackage.setAttribute("name", pkg_name.replace(os.sep, '.'))
+            xpackage.setAttribute("line-rate", rate(lhits, lnum))
+            if self.has_arcs:
+                branch_rate = rate(bhits, bnum)
+            else:
+                branch_rate = "0"
+            xpackage.setAttribute("branch-rate", branch_rate)
+            xpackage.setAttribute("complexity", "0")
+
+            lnum_tot += lnum
+            lhits_tot += lhits
+            bnum_tot += bnum
+            bhits_tot += bhits
+
+        xcoverage.setAttribute("line-rate", rate(lhits_tot, lnum_tot))
+        if self.has_arcs:
+            branch_rate = rate(bhits_tot, bnum_tot)
+        else:
+            branch_rate = "0"
+        xcoverage.setAttribute("branch-rate", branch_rate)
+
+        # Use the DOM to write the output file.
+        out = self.xml_out.toprettyxml()
+        if env.PY2:
+            out = out.encode("utf8")
+        outfile.write(out)
+
+        # Return the total percentage.
+        denom = lnum_tot + bnum_tot
+        if denom == 0:
+            pct = 0.0
+        else:
+            pct = 100.0 * (lhits_tot + bhits_tot) / denom
+        return pct
+
+    def xml_file(self, fr, analysis):
+        """Add to the XML report for a single file."""
+
+        # Create the 'lines' and 'package' XML elements, which
+        # are populated later.  Note that a package == a directory.
+        filename = fr.filename.replace("\\", "/")
+        for source_path in self.source_paths:
+            if filename.startswith(source_path.replace("\\", "/") + "/"):
+                rel_name = filename[len(source_path)+1:]
+                break
+        else:
+            rel_name = fr.relative_filename()
+
+        dirname = os.path.dirname(rel_name) or "."
+        dirname = "/".join(dirname.split("/")[:self.config.xml_package_depth])
+        package_name = dirname.replace("/", ".")
+
+        if rel_name != fr.filename:
+            self.source_paths.add(fr.filename[:-len(rel_name)].rstrip(r"\/"))
+        package = self.packages.setdefault(package_name, [{}, 0, 0, 0, 0])
+
+        xclass = self.xml_out.createElement("class")
+
+        xclass.appendChild(self.xml_out.createElement("methods"))
+
+        xlines = self.xml_out.createElement("lines")
+        xclass.appendChild(xlines)
+
+        xclass.setAttribute("name", os.path.relpath(rel_name, dirname))
+        xclass.setAttribute("filename", fr.relative_filename().replace("\\", "/"))
+        xclass.setAttribute("complexity", "0")
+
+        branch_stats = analysis.branch_stats()
+        missing_branch_arcs = analysis.missing_branch_arcs()
+
+        # For each statement, create an XML 'line' element.
+        for line in sorted(analysis.statements):
+            xline = self.xml_out.createElement("line")
+            xline.setAttribute("number", str(line))
+
+            # Q: can we get info about the number of times a statement is
+            # executed?  If so, that should be recorded here.
+            xline.setAttribute("hits", str(int(line not in analysis.missing)))
+
+            if self.has_arcs:
+                if line in branch_stats:
+                    total, taken = branch_stats[line]
+                    xline.setAttribute("branch", "true")
+                    xline.setAttribute(
+                        "condition-coverage",
+                        "%d%% (%d/%d)" % (100*taken/total, taken, total)
+                        )
+                if line in missing_branch_arcs:
+                    annlines = ["exit" if b < 0 else str(b) for b in missing_branch_arcs[line]]
+                    xline.setAttribute("missing-branches", ",".join(annlines))
+            xlines.appendChild(xline)
+
+        class_lines = len(analysis.statements)
+        class_hits = class_lines - len(analysis.missing)
+
+        if self.has_arcs:
+            class_branches = sum(t for t, k in branch_stats.values())
+            missing_branches = sum(t - k for t, k in branch_stats.values())
+            class_br_hits = class_branches - missing_branches
+        else:
+            class_branches = 0.0
+            class_br_hits = 0.0
+
+        # Finalize the statistics that are collected in the XML DOM.
+        xclass.setAttribute("line-rate", rate(class_hits, class_lines))
+        if self.has_arcs:
+            branch_rate = rate(class_br_hits, class_branches)
+        else:
+            branch_rate = "0"
+        xclass.setAttribute("branch-rate", branch_rate)
+
+        package[0][rel_name] = xclass
+        package[1] += class_hits
+        package[2] += class_lines
+        package[3] += class_br_hits
+        package[4] += class_branches
+
+#
+# eflag: FileType = Python2
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/eric6dbgstub.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,95 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2002 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing a debugger stub for remote debugging.
+"""
+
+import os
+import sys
+import distutils.sysconfig
+
+from eric6config import getConfig
+
+debugger = None
+__scriptname = None
+
+modDir = distutils.sysconfig.get_python_lib(True)
+ericpath = os.getenv('ERICDIR', getConfig('ericDir'))
+
+if ericpath not in sys.path:
+    sys.path.insert(-1, ericpath)
+    
+
+def initDebugger(kind="standard"):
+    """
+    Module function to initialize a debugger for remote debugging.
+    
+    @param kind type of debugger ("standard" or "threads")
+    @return flag indicating success (boolean)
+    @exception ValueError raised to indicate an invalid debugger kind
+        was requested
+    """
+    global debugger
+    res = 1
+    try:
+        if kind == "standard":
+            import DebugClient
+            debugger = DebugClient.DebugClient()
+        elif kind == "threads":
+            import DebugClientThreads
+            debugger = DebugClientThreads.DebugClientThreads()
+        else:
+            raise ValueError
+    except ImportError:
+        debugger = None
+        res = 0
+        
+    return res
+
+
+def runcall(func, *args):
+    """
+    Module function mimicing the Pdb interface.
+    
+    @param func function to be called (function object)
+    @param *args arguments being passed to func
+    @return the function result
+    """
+    global debugger, __scriptname
+    return debugger.run_call(__scriptname, func, *args)
+    
+
+def setScriptname(name):
+    """
+    Module function to set the scriptname to be reported back to the IDE.
+    
+    @param name absolute pathname of the script (string)
+    """
+    global __scriptname
+    __scriptname = name
+
+
+def startDebugger(enableTrace=True, exceptions=True,
+                  tracePython=False, redirect=True):
+    """
+    Module function used to start the remote debugger.
+    
+    @keyparam enableTrace flag to enable the tracing function (boolean)
+    @keyparam exceptions flag to enable exception reporting of the IDE
+        (boolean)
+    @keyparam tracePython flag to enable tracing into the Python library
+        (boolean)
+    @keyparam redirect flag indicating redirection of stdin, stdout and
+        stderr (boolean)
+    """
+    global debugger
+    if debugger:
+        debugger.startDebugger(enableTrace=enableTrace, exceptions=exceptions,
+                               tracePython=tracePython, redirect=redirect)
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
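The stub above keeps a module-level `debugger` and delegates `runcall` to it with the registered script name. A schematic of that delegation, using a fake client as an illustrative stand-in for `DebugClient.DebugClient` (which is only importable inside an eric installation):

```python
class FakeDebugClient:
    """Illustrative stand-in for DebugClient.DebugClient."""
    def run_call(self, scriptname, func, *args):
        # The real client would install its trace function before invoking func.
        print("debugging %s" % scriptname)
        return func(*args)

debugger = None
_scriptname = None

def initDebugger():
    """Create the module-level debugger; return 1 on success, 0 on failure."""
    global debugger
    try:
        debugger = FakeDebugClient()
        return 1
    except Exception:
        debugger = None
        return 0

def setScriptname(name):
    global _scriptname
    _scriptname = name

def runcall(func, *args):
    return debugger.run_call(_scriptname, func, *args)

initDebugger()
setScriptname("/tmp/example.py")
print(runcall(lambda a, b: a + b, 2, 3))
```

The real stub chooses between `DebugClient` and `DebugClientThreads` based on the `kind` argument; the delegation shape is the same.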
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/DebugClients/Python2/getpass.py	Sat Sep 03 18:12:12 2016 +0200
@@ -0,0 +1,57 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2004 - 2016 Detlev Offenbach <detlev@die-offenbachs.de>
+#
+
+"""
+Module implementing utilities to get a password and/or the current user name.
+
+getpass(prompt) - prompt for a password, with echo turned off
+getuser() - get the user name from the environment or password database
+
+This module is a replacement for the one found in the Python distribution. It
+provides a debugger-compatible variant of the functions named above.
+"""
+
+__all__ = ["getpass", "getuser"]
+
+
+def getuser():
+    """
+    Function to get the username from the environment or password database.
+
+    First try various environment variables, then the password
+    database.  This works on Windows as long as USERNAME is set.
+    
+    @return username (string)
+    """
+    # this is copied from the original getpass.py
+    
+    import os
+
+    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
+        user = os.environ.get(name)
+        if user:
+            return user
+
+    # If this fails, the exception will "explain" why
+    import pwd
+    return pwd.getpwuid(os.getuid())[0]
+
+
+def getpass(prompt='Password: '):
+    """
+    Function to prompt for a password, with echo turned off.
+    
+    @param prompt Prompt to be shown to the user (string)
+    @return Password entered by the user (string)
+    """
+    return raw_input(prompt, 0)
+    
+unix_getpass = getpass
+win_getpass = getpass
+default_getpass = getpass
+
+#
+# eflag: FileType = Python2
+# eflag: noqa = M601, M702
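`getuser` walks a fixed list of environment variables before falling back to the password database. A sketch of just the lookup order, taking the environment as a parameter so it runs anywhere (`lookup_user` is an illustrative name):

```python
def lookup_user(environ):
    """Return the first of LOGNAME/USER/LNAME/USERNAME that is set, else None."""
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = environ.get(name)
        if user:
            return user
    return None  # the real function consults the pwd database here

print(lookup_user({'USERNAME': 'detlev'}))          # works on Windows
print(lookup_user({'USER': 'a', 'USERNAME': 'b'}))  # earlier names win
```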
--- a/Debugger/DebuggerInterfacePython2.py	Sat Sep 03 18:02:37 2016 +0200
+++ b/Debugger/DebuggerInterfacePython2.py	Sat Sep 03 18:12:12 2016 +0200
@@ -149,17 +149,17 @@
         debugClientType = Preferences.getDebugger("DebugClientType")
         if debugClientType == "standard":
             debugClient = os.path.join(getConfig('ericDir'),
-                                       "DebugClients", "Python",
+                                       "DebugClients", "Python2",
                                        "DebugClient.py")
         elif debugClientType == "threaded":
             debugClient = os.path.join(getConfig('ericDir'),
-                                       "DebugClients", "Python",
+                                       "DebugClients", "Python2",
                                        "DebugClientThreads.py")
         else:
             debugClient = Preferences.getDebugger("DebugClient")
             if debugClient == "":
                 debugClient = os.path.join(sys.path[0],
-                                           "DebugClients", "Python",
+                                           "DebugClients", "Python2",
                                            "DebugClient.py")
         
         redirect = str(Preferences.getDebugger("PythonRedirect"))
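
The hunk above only swaps the package directory name, so the resolved client path changes from `DebugClients/Python/...` to `DebugClients/Python2/...`. A small sketch of the path construction (the install prefix is hypothetical; the real value comes from `getConfig('ericDir')`):

```python
import os

# Hypothetical eric installation directory standing in for getConfig('ericDir').
ericDir = os.path.join('usr', 'share', 'eric')

# Path of the standard Python 2 debug client after the rename.
debugClient = os.path.join(ericDir, 'DebugClients', 'Python2', 'DebugClient.py')

print(debugClient)
```

The same substitution applies to the threaded client (`DebugClientThreads.py`) and to the fallback path built from `sys.path[0]`.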
--- a/Documentation/Source/eric6.DebugClients.Python.AsyncFile.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,383 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.AsyncFile</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.AsyncFile</h1>
-<p>
-Module implementing an asynchronous file like socket interface for the
-debugger.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#AsyncFile">AsyncFile</a></td>
-<td>Class wrapping a socket object with a file interface.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#AsyncPendingWrite">AsyncPendingWrite</a></td>
-<td>Module function to check for data to be written.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="AsyncFile" ID="AsyncFile"></a>
-<h2>AsyncFile</h2>
-<p>
-    Class wrapping a socket object with a file interface.
-</p>
-<h3>Derived from</h3>
-object
-<h3>Class Attributes</h3>
-<table>
-<tr><td>maxbuffersize</td></tr><tr><td>maxtries</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#AsyncFile.__init__">AsyncFile</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#AsyncFile.__checkMode">__checkMode</a></td>
-<td>Private method to check the mode.</td>
-</tr><tr>
-<td><a href="#AsyncFile.__nWrite">__nWrite</a></td>
-<td>Private method to write a specific number of pending bytes.</td>
-</tr><tr>
-<td><a href="#AsyncFile.close">close</a></td>
-<td>Public method to close the file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.fileno">fileno</a></td>
-<td>Public method returning the file number.</td>
-</tr><tr>
-<td><a href="#AsyncFile.flush">flush</a></td>
-<td>Public method to write all pending bytes.</td>
-</tr><tr>
-<td><a href="#AsyncFile.isatty">isatty</a></td>
-<td>Public method to indicate whether a tty interface is supported.</td>
-</tr><tr>
-<td><a href="#AsyncFile.pendingWrite">pendingWrite</a></td>
-<td>Public method that returns the number of bytes waiting to be written.</td>
-</tr><tr>
-<td><a href="#AsyncFile.read">read</a></td>
-<td>Public method to read bytes from this file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.read_p">read_p</a></td>
-<td>Public method to read bytes from this file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.readline">readline</a></td>
-<td>Public method to read one line from this file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.readline_p">readline_p</a></td>
-<td>Public method to read a line from this file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.readlines">readlines</a></td>
-<td>Public method to read all lines from this file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.seek">seek</a></td>
-<td>Public method to move the filepointer.</td>
-</tr><tr>
-<td><a href="#AsyncFile.tell">tell</a></td>
-<td>Public method to get the filepointer position.</td>
-</tr><tr>
-<td><a href="#AsyncFile.truncate">truncate</a></td>
-<td>Public method to truncate the file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.write">write</a></td>
-<td>Public method to write a string to the file.</td>
-</tr><tr>
-<td><a href="#AsyncFile.writelines">writelines</a></td>
-<td>Public method to write a list of strings to the file.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="AsyncFile.__init__" ID="AsyncFile.__init__"></a>
-<h4>AsyncFile (Constructor)</h4>
-<b>AsyncFile</b>(<i>sock, mode, name</i>)
-<p>
-        Constructor
-</p><dl>
-<dt><i>sock</i></dt>
-<dd>
-the socket object being wrapped
-</dd><dt><i>mode</i></dt>
-<dd>
-mode of this file (string)
-</dd><dt><i>name</i></dt>
-<dd>
-name of this file (string)
-</dd>
-</dl><a NAME="AsyncFile.__checkMode" ID="AsyncFile.__checkMode"></a>
-<h4>AsyncFile.__checkMode</h4>
-<b>__checkMode</b>(<i>mode</i>)
-<p>
-        Private method to check the mode.
-</p><p>
-        This method checks, if an operation is permitted according to
-        the mode of the file. If it is not, an IOError is raised.
-</p><dl>
-<dt><i>mode</i></dt>
-<dd>
-the mode to be checked (string)
-</dd>
-</dl><dl>
-<dt>Raises <b>IOError</b>:</dt>
-<dd>
-raised to indicate a bad file descriptor
-</dd>
-</dl><a NAME="AsyncFile.__nWrite" ID="AsyncFile.__nWrite"></a>
-<h4>AsyncFile.__nWrite</h4>
-<b>__nWrite</b>(<i>n</i>)
-<p>
-        Private method to write a specific number of pending bytes.
-</p><dl>
-<dt><i>n</i></dt>
-<dd>
-the number of bytes to be written (int)
-</dd>
-</dl><a NAME="AsyncFile.close" ID="AsyncFile.close"></a>
-<h4>AsyncFile.close</h4>
-<b>close</b>(<i>closeit=0</i>)
-<p>
-        Public method to close the file.
-</p><dl>
-<dt><i>closeit</i></dt>
-<dd>
-flag to indicate a close ordered by the debugger code
-            (boolean)
-</dd>
-</dl><a NAME="AsyncFile.fileno" ID="AsyncFile.fileno"></a>
-<h4>AsyncFile.fileno</h4>
-<b>fileno</b>(<i></i>)
-<p>
-        Public method returning the file number.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-file number (int)
-</dd>
-</dl><a NAME="AsyncFile.flush" ID="AsyncFile.flush"></a>
-<h4>AsyncFile.flush</h4>
-<b>flush</b>(<i></i>)
-<p>
-        Public method to write all pending bytes.
-</p><a NAME="AsyncFile.isatty" ID="AsyncFile.isatty"></a>
-<h4>AsyncFile.isatty</h4>
-<b>isatty</b>(<i></i>)
-<p>
-        Public method to indicate whether a tty interface is supported.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-always false
-</dd>
-</dl><a NAME="AsyncFile.pendingWrite" ID="AsyncFile.pendingWrite"></a>
-<h4>AsyncFile.pendingWrite</h4>
-<b>pendingWrite</b>(<i></i>)
-<p>
-        Public method that returns the number of bytes waiting to be written.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-the number of bytes to be written (int)
-</dd>
-</dl><a NAME="AsyncFile.read" ID="AsyncFile.read"></a>
-<h4>AsyncFile.read</h4>
-<b>read</b>(<i>size=-1</i>)
-<p>
-        Public method to read bytes from this file.
-</p><dl>
-<dt><i>size</i></dt>
-<dd>
-maximum number of bytes to be read (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the bytes read (any)
-</dd>
-</dl><a NAME="AsyncFile.read_p" ID="AsyncFile.read_p"></a>
-<h4>AsyncFile.read_p</h4>
-<b>read_p</b>(<i>size=-1</i>)
-<p>
-        Public method to read bytes from this file.
-</p><dl>
-<dt><i>size</i></dt>
-<dd>
-maximum number of bytes to be read (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the bytes read (any)
-</dd>
-</dl><a NAME="AsyncFile.readline" ID="AsyncFile.readline"></a>
-<h4>AsyncFile.readline</h4>
-<b>readline</b>(<i>sizehint=-1</i>)
-<p>
-        Public method to read one line from this file.
-</p><dl>
-<dt><i>sizehint</i></dt>
-<dd>
-hint of the numbers of bytes to be read (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-one line read (string)
-</dd>
-</dl><a NAME="AsyncFile.readline_p" ID="AsyncFile.readline_p"></a>
-<h4>AsyncFile.readline_p</h4>
-<b>readline_p</b>(<i>size=-1</i>)
-<p>
-        Public method to read a line from this file.
-</p><p>
-        <b>Note</b>: This method will not block and may return
-        only a part of a line if that is all that is available.
-</p><dl>
-<dt><i>size</i></dt>
-<dd>
-maximum number of bytes to be read (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-one line of text up to size bytes (string)
-</dd>
-</dl><a NAME="AsyncFile.readlines" ID="AsyncFile.readlines"></a>
-<h4>AsyncFile.readlines</h4>
-<b>readlines</b>(<i>sizehint=-1</i>)
-<p>
-        Public method to read all lines from this file.
-</p><dl>
-<dt><i>sizehint</i></dt>
-<dd>
-hint of the numbers of bytes to be read (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-list of lines read (list of strings)
-</dd>
-</dl><a NAME="AsyncFile.seek" ID="AsyncFile.seek"></a>
-<h4>AsyncFile.seek</h4>
-<b>seek</b>(<i>offset, whence=0</i>)
-<p>
-        Public method to move the filepointer.
-</p><dl>
-<dt><i>offset</i></dt>
-<dd>
-offset to seek for
-</dd><dt><i>whence</i></dt>
-<dd>
-where to seek from
-</dd>
-</dl><dl>
-<dt>Raises <b>IOError</b>:</dt>
-<dd>
-This method is not supported and always raises an
-        IOError.
-</dd>
-</dl><a NAME="AsyncFile.tell" ID="AsyncFile.tell"></a>
-<h4>AsyncFile.tell</h4>
-<b>tell</b>(<i></i>)
-<p>
-        Public method to get the filepointer position.
-</p><dl>
-<dt>Raises <b>IOError</b>:</dt>
-<dd>
-This method is not supported and always raises an
-        IOError.
-</dd>
-</dl><a NAME="AsyncFile.truncate" ID="AsyncFile.truncate"></a>
-<h4>AsyncFile.truncate</h4>
-<b>truncate</b>(<i>size=-1</i>)
-<p>
-        Public method to truncate the file.
-</p><dl>
-<dt><i>size</i></dt>
-<dd>
-size to truncate to (integer)
-</dd>
-</dl><dl>
-<dt>Raises <b>IOError</b>:</dt>
-<dd>
-This method is not supported and always raises an
-        IOError.
-</dd>
-</dl><a NAME="AsyncFile.write" ID="AsyncFile.write"></a>
-<h4>AsyncFile.write</h4>
-<b>write</b>(<i>s</i>)
-<p>
-        Public method to write a string to the file.
-</p><dl>
-<dt><i>s</i></dt>
-<dd>
-bytes to be written (string)
-</dd>
-</dl><dl>
-<dt>Raises <b>socket.error</b>:</dt>
-<dd>
-raised to indicate too many send attempts
-</dd>
-</dl><a NAME="AsyncFile.writelines" ID="AsyncFile.writelines"></a>
-<h4>AsyncFile.writelines</h4>
-<b>writelines</b>(<i>list</i>)
-<p>
-        Public method to write a list of strings to the file.
-</p><dl>
-<dt><i>list</i></dt>
-<dd>
-the list to be written (list of string)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="AsyncPendingWrite" ID="AsyncPendingWrite"></a>
-<h2>AsyncPendingWrite</h2>
-<b>AsyncPendingWrite</b>(<i>file</i>)
-<p>
-    Module function to check for data to be written.
-</p><dl>
-<dt><i>file</i></dt>
-<dd>
-The file object to be checked (file)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-Flag indicating if there is data wating (int)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DCTestResult.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,186 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DCTestResult</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DCTestResult</h1>
-<p>
-Module implementing a TestResult derivative for the eric6 debugger.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DCTestResult">DCTestResult</a></td>
-<td>A TestResult derivative to work with eric6's debug client.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr /><hr />
-<a NAME="DCTestResult" ID="DCTestResult"></a>
-<h2>DCTestResult</h2>
-<p>
-    A TestResult derivative to work with eric6's debug client.
-</p><p>
-    For more details see unittest.py of the standard python distribution.
-</p>
-<h3>Derived from</h3>
-TestResult
-<h3>Class Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DCTestResult.__init__">DCTestResult</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#DCTestResult.addError">addError</a></td>
-<td>Public method called if a test errored.</td>
-</tr><tr>
-<td><a href="#DCTestResult.addExpectedFailure">addExpectedFailure</a></td>
-<td>Public method called if a test failed expected.</td>
-</tr><tr>
-<td><a href="#DCTestResult.addFailure">addFailure</a></td>
-<td>Public method called if a test failed.</td>
-</tr><tr>
-<td><a href="#DCTestResult.addSkip">addSkip</a></td>
-<td>Public method called if a test was skipped.</td>
-</tr><tr>
-<td><a href="#DCTestResult.addUnexpectedSuccess">addUnexpectedSuccess</a></td>
-<td>Public method called if a test succeeded expectedly.</td>
-</tr><tr>
-<td><a href="#DCTestResult.startTest">startTest</a></td>
-<td>Public method called at the start of a test.</td>
-</tr><tr>
-<td><a href="#DCTestResult.stopTest">stopTest</a></td>
-<td>Public method called at the end of a test.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DCTestResult.__init__" ID="DCTestResult.__init__"></a>
-<h4>DCTestResult (Constructor)</h4>
-<b>DCTestResult</b>(<i>parent</i>)
-<p>
-        Constructor
-</p><dl>
-<dt><i>parent</i></dt>
-<dd>
-The parent widget.
-</dd>
-</dl><a NAME="DCTestResult.addError" ID="DCTestResult.addError"></a>
-<h4>DCTestResult.addError</h4>
-<b>addError</b>(<i>test, err</i>)
-<p>
-        Public method called if a test errored.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-Reference to the test object
-</dd><dt><i>err</i></dt>
-<dd>
-The error traceback
-</dd>
-</dl><a NAME="DCTestResult.addExpectedFailure" ID="DCTestResult.addExpectedFailure"></a>
-<h4>DCTestResult.addExpectedFailure</h4>
-<b>addExpectedFailure</b>(<i>test, err</i>)
-<p>
-        Public method called if a test failed expected.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-reference to the test object
-</dd><dt><i>err</i></dt>
-<dd>
-error traceback
-</dd>
-</dl><a NAME="DCTestResult.addFailure" ID="DCTestResult.addFailure"></a>
-<h4>DCTestResult.addFailure</h4>
-<b>addFailure</b>(<i>test, err</i>)
-<p>
-        Public method called if a test failed.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-Reference to the test object
-</dd><dt><i>err</i></dt>
-<dd>
-The error traceback
-</dd>
-</dl><a NAME="DCTestResult.addSkip" ID="DCTestResult.addSkip"></a>
-<h4>DCTestResult.addSkip</h4>
-<b>addSkip</b>(<i>test, reason</i>)
-<p>
-        Public method called if a test was skipped.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-reference to the test object
-</dd><dt><i>reason</i></dt>
-<dd>
-reason for skipping the test (string)
-</dd>
-</dl><a NAME="DCTestResult.addUnexpectedSuccess" ID="DCTestResult.addUnexpectedSuccess"></a>
-<h4>DCTestResult.addUnexpectedSuccess</h4>
-<b>addUnexpectedSuccess</b>(<i>test</i>)
-<p>
-        Public method called if a test succeeded expectedly.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-reference to the test object
-</dd>
-</dl><a NAME="DCTestResult.startTest" ID="DCTestResult.startTest"></a>
-<h4>DCTestResult.startTest</h4>
-<b>startTest</b>(<i>test</i>)
-<p>
-        Public method called at the start of a test.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-Reference to the test object
-</dd>
-</dl><a NAME="DCTestResult.stopTest" ID="DCTestResult.stopTest"></a>
-<h4>DCTestResult.stopTest</h4>
-<b>stopTest</b>(<i>test</i>)
-<p>
-        Public method called at the end of a test.
-</p><dl>
-<dt><i>test</i></dt>
-<dd>
-Reference to the test object
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugBase.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,749 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugBase</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugBase</h1>
-<p>
-Module implementing the debug base class.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>gRecursionLimit</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DebugBase">DebugBase</a></td>
-<td>Class implementing base class of the debugger.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#printerr">printerr</a></td>
-<td>Module function used for debugging the debug client.</td>
-</tr><tr>
-<td><a href="#setRecursionLimit">setRecursionLimit</a></td>
-<td>Module function to set the recursion limit.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="DebugBase" ID="DebugBase"></a>
-<h2>DebugBase</h2>
-<p>
-    Class implementing base class of the debugger.
-</p><p>
-    Provides simple wrapper methods around bdb for the 'owning' client to
-    call to step etc.
-</p>
-<h3>Derived from</h3>
-bdb.Bdb
-<h3>Class Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DebugBase.__init__">DebugBase</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#DebugBase.__do_clear">__do_clear</a></td>
-<td>Private method called to clear a temporary breakpoint.</td>
-</tr><tr>
-<td><a href="#DebugBase.__do_clearWatch">__do_clearWatch</a></td>
-<td>Private method called to clear a temporary watch expression.</td>
-</tr><tr>
-<td><a href="#DebugBase.__effective">__effective</a></td>
-<td>Private method to determine, if a watch expression is effective.</td>
-</tr><tr>
-<td><a href="#DebugBase.__extract_stack">__extract_stack</a></td>
-<td>Private member to return a list of stack frames.</td>
-</tr><tr>
-<td><a href="#DebugBase.__sendCallTrace">__sendCallTrace</a></td>
-<td>Private method to send a call/return trace.</td>
-</tr><tr>
-<td><a href="#DebugBase.__skip_it">__skip_it</a></td>
-<td>Private method to filter out debugger files.</td>
-</tr><tr>
-<td><a href="#DebugBase.break_anywhere">break_anywhere</a></td>
-<td>Public method reimplemented from bdb.py to do some special things.</td>
-</tr><tr>
-<td><a href="#DebugBase.break_here">break_here</a></td>
-<td>Public method reimplemented from bdb.py to fix the filename from the frame.</td>
-</tr><tr>
-<td><a href="#DebugBase.clear_watch">clear_watch</a></td>
-<td>Public method to clear a watch expression.</td>
-</tr><tr>
-<td><a href="#DebugBase.dispatch_exception">dispatch_exception</a></td>
-<td>Public method reimplemented from bdb.py to always call user_exception.</td>
-</tr><tr>
-<td><a href="#DebugBase.dispatch_line">dispatch_line</a></td>
-<td>Public method reimplemented from bdb.py to do some special things.</td>
-</tr><tr>
-<td><a href="#DebugBase.dispatch_return">dispatch_return</a></td>
-<td>Public method reimplemented from bdb.py to handle passive mode cleanly.</td>
-</tr><tr>
-<td><a href="#DebugBase.fix_frame_filename">fix_frame_filename</a></td>
-<td>Public method used to fixup the filename for a given frame.</td>
-</tr><tr>
-<td><a href="#DebugBase.getCurrentFrame">getCurrentFrame</a></td>
-<td>Public method to return the current frame.</td>
-</tr><tr>
-<td><a href="#DebugBase.getEvent">getEvent</a></td>
-<td>Protected method to return the last debugger event.</td>
-</tr><tr>
-<td><a href="#DebugBase.getFrameLocals">getFrameLocals</a></td>
-<td>Public method to return the locals dictionary of the current frame or a frame below.</td>
-</tr><tr>
-<td><a href="#DebugBase.getStack">getStack</a></td>
-<td>Public method to get the stack.</td>
-</tr><tr>
-<td><a href="#DebugBase.get_break">get_break</a></td>
-<td>Public method reimplemented from bdb.py to get the first breakpoint of a particular line.</td>
-</tr><tr>
-<td><a href="#DebugBase.get_watch">get_watch</a></td>
-<td>Public method to get a watch expression.</td>
-</tr><tr>
-<td><a href="#DebugBase.go">go</a></td>
-<td>Public method to resume the thread.</td>
-</tr><tr>
-<td><a href="#DebugBase.isBroken">isBroken</a></td>
-<td>Public method to return the broken state of the debugger.</td>
-</tr><tr>
-<td><a href="#DebugBase.profile">profile</a></td>
-<td>Public method used to trace some stuff independent of the debugger trace function.</td>
-</tr><tr>
-<td><a href="#DebugBase.setRecursionDepth">setRecursionDepth</a></td>
-<td>Public method to determine the current recursion depth.</td>
-</tr><tr>
-<td><a href="#DebugBase.set_continue">set_continue</a></td>
-<td>Public method reimplemented from bdb.py to always get informed of exceptions.</td>
-</tr><tr>
-<td><a href="#DebugBase.set_quit">set_quit</a></td>
-<td>Public method to quit.</td>
-</tr><tr>
-<td><a href="#DebugBase.set_trace">set_trace</a></td>
-<td>Public method reimplemented from bdb.py to do some special setup.</td>
-</tr><tr>
-<td><a href="#DebugBase.set_watch">set_watch</a></td>
-<td>Public method to set a watch expression.</td>
-</tr><tr>
-<td><a href="#DebugBase.step">step</a></td>
-<td>Public method to perform a step operation in this thread.</td>
-</tr><tr>
-<td><a href="#DebugBase.stepOut">stepOut</a></td>
-<td>Public method to perform a step out of the current call.</td>
-</tr><tr>
-<td><a href="#DebugBase.stop_here">stop_here</a></td>
-<td>Public method reimplemented to filter out debugger files.</td>
-</tr><tr>
-<td><a href="#DebugBase.storeFrameLocals">storeFrameLocals</a></td>
-<td>Public method to store the locals into the frame, so an access to frame.f_locals returns the last data.</td>
-</tr><tr>
-<td><a href="#DebugBase.trace_dispatch">trace_dispatch</a></td>
-<td>Public method reimplemented from bdb.py to do some special things.</td>
-</tr><tr>
-<td><a href="#DebugBase.user_exception">user_exception</a></td>
-<td>Public method reimplemented to report an exception to the debug server.</td>
-</tr><tr>
-<td><a href="#DebugBase.user_line">user_line</a></td>
-<td>Public method reimplemented to handle the program about to execute a particular line.</td>
-</tr><tr>
-<td><a href="#DebugBase.user_return">user_return</a></td>
-<td>Public method reimplemented to report program termination to the debug server.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DebugBase.__init__" ID="DebugBase.__init__"></a>
-<h4>DebugBase (Constructor)</h4>
-<b>DebugBase</b>(<i>dbgClient</i>)
-<p>
-        Constructor
-</p><dl>
-<dt><i>dbgClient</i></dt>
-<dd>
-the owning client
-</dd>
-</dl><a NAME="DebugBase.__do_clear" ID="DebugBase.__do_clear"></a>
-<h4>DebugBase.__do_clear</h4>
-<b>__do_clear</b>(<i>filename, lineno</i>)
-<p>
-        Private method called to clear a temporary breakpoint.
-</p><dl>
-<dt><i>filename</i></dt>
-<dd>
-name of the file the bp belongs to
-</dd><dt><i>lineno</i></dt>
-<dd>
-linenumber of the bp
-</dd>
-</dl><a NAME="DebugBase.__do_clearWatch" ID="DebugBase.__do_clearWatch"></a>
-<h4>DebugBase.__do_clearWatch</h4>
-<b>__do_clearWatch</b>(<i>cond</i>)
-<p>
-        Private method called to clear a temporary watch expression.
-</p><dl>
-<dt><i>cond</i></dt>
-<dd>
-expression of the watch expression to be cleared (string)
-</dd>
-</dl><a NAME="DebugBase.__effective" ID="DebugBase.__effective"></a>
-<h4>DebugBase.__effective</h4>
-<b>__effective</b>(<i>frame</i>)
-<p>
-        Private method to determine, if a watch expression is effective.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the current execution frame
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-tuple of watch expression and a flag to indicate, that a
-            temporary watch expression may be deleted (bdb.Breakpoint, boolean)
-</dd>
-</dl><a NAME="DebugBase.__extract_stack" ID="DebugBase.__extract_stack"></a>
-<h4>DebugBase.__extract_stack</h4>
-<b>__extract_stack</b>(<i>exctb</i>)
-<p>
-        Private member to return a list of stack frames.
-</p><dl>
-<dt><i>exctb</i></dt>
-<dd>
-exception traceback
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-list of stack frames
-</dd>
-</dl><a NAME="DebugBase.__sendCallTrace" ID="DebugBase.__sendCallTrace"></a>
-<h4>DebugBase.__sendCallTrace</h4>
-<b>__sendCallTrace</b>(<i>event, fromFrame, toFrame</i>)
-<p>
-        Private method to send a call/return trace.
-</p><dl>
-<dt><i>event</i></dt>
-<dd>
-trace event (string)
-</dd><dt><i>fromFrame</i></dt>
-<dd>
-originating frame (frame)
-</dd><dt><i>toFrame</i></dt>
-<dd>
-destination frame (frame)
-</dd>
-</dl><a NAME="DebugBase.__skip_it" ID="DebugBase.__skip_it"></a>
-<h4>DebugBase.__skip_it</h4>
-<b>__skip_it</b>(<i>frame</i>)
-<p>
-        Private method to filter out debugger files.
-</p><p>
-        Tracing is turned off for files that are part of the
-        debugger that are called from the application being debugged.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating whether the debugger should skip this frame
-</dd>
-</dl><a NAME="DebugBase.break_anywhere" ID="DebugBase.break_anywhere"></a>
-<h4>DebugBase.break_anywhere</h4>
-<b>break_anywhere</b>(<i>frame</i>)
-<p>
-        Public method reimplemented from bdb.py to do some special things.
-</p><p>
-        These speciality is to fix the filename from the frame
-        (see fix_frame_filename for more info).
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating the break status (boolean)
-</dd>
-</dl><a NAME="DebugBase.break_here" ID="DebugBase.break_here"></a>
-<h4>DebugBase.break_here</h4>
-<b>break_here</b>(<i>frame</i>)
-<p>
-        Public method reimplemented from bdb.py to fix the filename from the
-        frame.
-</p><p>
-        See fix_frame_filename for more info.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating the break status (boolean)
-</dd>
-</dl><a NAME="DebugBase.clear_watch" ID="DebugBase.clear_watch"></a>
-<h4>DebugBase.clear_watch</h4>
-<b>clear_watch</b>(<i>cond</i>)
-<p>
-        Public method to clear a watch expression.
-</p><dl>
-<dt><i>cond</i></dt>
-<dd>
-expression of the watch expression to be cleared (string)
-</dd>
-</dl><a NAME="DebugBase.dispatch_exception" ID="DebugBase.dispatch_exception"></a>
-<h4>DebugBase.dispatch_exception</h4>
-<b>dispatch_exception</b>(<i>frame, arg</i>)
-<p>
-        Public method reimplemented from bdb.py to always call user_exception.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd><dt><i>arg</i></dt>
-<dd>
-The arguments
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-local trace function
-</dd>
-</dl><dl>
-<dt>Raises <b>bdb.BdbQuit</b>:</dt>
-<dd>
-raised to indicate the end of the debug session
-</dd>
-</dl><a NAME="DebugBase.dispatch_line" ID="DebugBase.dispatch_line"></a>
-<h4>DebugBase.dispatch_line</h4>
-<b>dispatch_line</b>(<i>frame</i>)
-<p>
-        Public method reimplemented from bdb.py to do some special things.
-</p><p>
-        This speciality is to check the connection to the debug server
-        for new events (i.e. new breakpoints) while we are going through
-        the code.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-local trace function
-</dd>
-</dl><dl>
-<dt>Raises <b>bdb.BdbQuit</b>:</dt>
-<dd>
-raised to indicate the end of the debug session
-</dd>
-</dl><a NAME="DebugBase.dispatch_return" ID="DebugBase.dispatch_return"></a>
-<h4>DebugBase.dispatch_return</h4>
-<b>dispatch_return</b>(<i>frame, arg</i>)
-<p>
-        Public method reimplemented from bdb.py to handle passive mode cleanly.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd><dt><i>arg</i></dt>
-<dd>
-The arguments
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-local trace function
-</dd>
-</dl><dl>
-<dt>Raises <b>bdb.BdbQuit</b>:</dt>
-<dd>
-raised to indicate the end of the debug session
-</dd>
-</dl><a NAME="DebugBase.fix_frame_filename" ID="DebugBase.fix_frame_filename"></a>
-<h4>DebugBase.fix_frame_filename</h4>
-<b>fix_frame_filename</b>(<i>frame</i>)
-<p>
-        Public method used to fixup the filename for a given frame.
-</p><p>
-        The logic employed here is that if a module was loaded
-        from a .pyc file, then the correct .py to operate with
-        should be in the same path as the .pyc. The reason this
-        logic is needed is that when a .pyc file is generated, the
-        filename embedded and thus what is readable in the code object
-        of the frame object is the fully qualified filepath when the
-        pyc is generated. If files are moved from machine to machine
-        this can break debugging as the .pyc will refer to the .py
-        on the original machine. Another case might be sharing
-        code over a network... This logic deals with that.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-fixed up file name (string)
-</dd>
-</dl><a NAME="DebugBase.getCurrentFrame" ID="DebugBase.getCurrentFrame"></a>
-<h4>DebugBase.getCurrentFrame</h4>
-<b>getCurrentFrame</b>(<i></i>)
-<p>
-        Public method to return the current frame.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-the current frame
-</dd>
-</dl><a NAME="DebugBase.getEvent" ID="DebugBase.getEvent"></a>
-<h4>DebugBase.getEvent</h4>
-<b>getEvent</b>(<i></i>)
-<p>
-        Protected method to return the last debugger event.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-last debugger event (string)
-</dd>
-</dl><a NAME="DebugBase.getFrameLocals" ID="DebugBase.getFrameLocals"></a>
-<h4>DebugBase.getFrameLocals</h4>
-<b>getFrameLocals</b>(<i>frmnr=0</i>)
-<p>
-        Public method to return the locals dictionary of the current frame
-        or a frame below.
-</p><dl>
-<dt><i>frmnr=</i></dt>
-<dd>
-distance of frame to get locals dictionary of. 0 is
-            the current frame (int)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-locals dictionary of the frame
-</dd>
-</dl><a NAME="DebugBase.getStack" ID="DebugBase.getStack"></a>
-<h4>DebugBase.getStack</h4>
-<b>getStack</b>(<i></i>)
-<p>
-        Public method to get the stack.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-list of lists with file name (string), line number (integer)
-            and function name (string)
-</dd>
-</dl><a NAME="DebugBase.get_break" ID="DebugBase.get_break"></a>
-<h4>DebugBase.get_break</h4>
-<b>get_break</b>(<i>filename, lineno</i>)
-<p>
-        Public method reimplemented from bdb.py to get the first breakpoint of
-        a particular line.
-</p><p>
-        Because eric6 supports only one breakpoint per line, this overwritten
-        method will return this one and only breakpoint.
-</p><dl>
-<dt><i>filename</i></dt>
-<dd>
-filename of the bp to retrieve (string)
-</dd><dt><i>lineno</i></dt>
-<dd>
-linenumber of the bp to retrieve (integer)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-breakpoint or None, if there is no bp
-</dd>
-</dl><a NAME="DebugBase.get_watch" ID="DebugBase.get_watch"></a>
-<h4>DebugBase.get_watch</h4>
-<b>get_watch</b>(<i>cond</i>)
-<p>
-        Public method to get a watch expression.
-</p><dl>
-<dt><i>cond</i></dt>
-<dd>
-expression of the watch expression to be cleared (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-reference to the watch point
-</dd>
-</dl><a NAME="DebugBase.go" ID="DebugBase.go"></a>
-<h4>DebugBase.go</h4>
-<b>go</b>(<i>special</i>)
-<p>
-        Public method to resume the thread.
-</p><p>
-        It resumes the thread stopping only at breakpoints or exceptions.
-</p><dl>
-<dt><i>special</i></dt>
-<dd>
-flag indicating a special continue operation
-</dd>
-</dl><a NAME="DebugBase.isBroken" ID="DebugBase.isBroken"></a>
-<h4>DebugBase.isBroken</h4>
-<b>isBroken</b>(<i></i>)
-<p>
-        Public method to return the broken state of the debugger.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating the broken state (boolean)
-</dd>
-</dl><a NAME="DebugBase.profile" ID="DebugBase.profile"></a>
-<h4>DebugBase.profile</h4>
-<b>profile</b>(<i>frame, event, arg</i>)
-<p>
-        Public method used to trace events independently of the debugger
-        trace function.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-current stack frame.
-</dd><dt><i>event</i></dt>
-<dd>
-trace event (string)
-</dd><dt><i>arg</i></dt>
-<dd>
-arguments
-</dd>
-</dl><dl>
-<dt>Raises <b>RuntimeError</b>:</dt>
-<dd>
-raised to indicate too many recursions
-</dd>
-</dl><a NAME="DebugBase.setRecursionDepth" ID="DebugBase.setRecursionDepth"></a>
-<h4>DebugBase.setRecursionDepth</h4>
-<b>setRecursionDepth</b>(<i>frame</i>)
-<p>
-        Public method to determine the current recursion depth.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd>
-</dl><a NAME="DebugBase.set_continue" ID="DebugBase.set_continue"></a>
-<h4>DebugBase.set_continue</h4>
-<b>set_continue</b>(<i>special</i>)
-<p>
-        Public method reimplemented from bdb.py to always get informed of
-        exceptions.
-</p><dl>
-<dt><i>special</i></dt>
-<dd>
-flag indicating a special continue operation
-</dd>
-</dl><a NAME="DebugBase.set_quit" ID="DebugBase.set_quit"></a>
-<h4>DebugBase.set_quit</h4>
-<b>set_quit</b>(<i></i>)
-<p>
-        Public method to quit.
-</p><p>
-        It wraps call to bdb to clear the current frame properly.
-</p><a NAME="DebugBase.set_trace" ID="DebugBase.set_trace"></a>
-<h4>DebugBase.set_trace</h4>
-<b>set_trace</b>(<i>frame=None</i>)
-<p>
-        Public method reimplemented from bdb.py to do some special setup.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-frame to start debugging from
-</dd>
-</dl><a NAME="DebugBase.set_watch" ID="DebugBase.set_watch"></a>
-<h4>DebugBase.set_watch</h4>
-<b>set_watch</b>(<i>cond, temporary=0</i>)
-<p>
-        Public method to set a watch expression.
-</p><dl>
-<dt><i>cond</i></dt>
-<dd>
-expression of the watch expression (string)
-</dd><dt><i>temporary</i></dt>
-<dd>
-flag indicating a temporary watch expression (boolean)
-</dd>
-</dl><a NAME="DebugBase.step" ID="DebugBase.step"></a>
-<h4>DebugBase.step</h4>
-<b>step</b>(<i>traceMode</i>)
-<p>
-        Public method to perform a step operation in this thread.
-</p><dl>
-<dt><i>traceMode</i></dt>
-<dd>
-If it is non-zero, then the step is a step into,
-              otherwise it is a step over.
-</dd>
-</dl><a NAME="DebugBase.stepOut" ID="DebugBase.stepOut"></a>
-<h4>DebugBase.stepOut</h4>
-<b>stepOut</b>(<i></i>)
-<p>
-        Public method to perform a step out of the current call.
-</p><a NAME="DebugBase.stop_here" ID="DebugBase.stop_here"></a>
-<h4>DebugBase.stop_here</h4>
-<b>stop_here</b>(<i>frame</i>)
-<p>
-        Public method reimplemented to filter out debugger files.
-</p><p>
-        Tracing is turned off for files that are part of the
-        debugger that are called from the application being debugged.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating whether the debugger should stop here
-</dd>
-</dl><a NAME="DebugBase.storeFrameLocals" ID="DebugBase.storeFrameLocals"></a>
-<h4>DebugBase.storeFrameLocals</h4>
-<b>storeFrameLocals</b>(<i>frmnr=0</i>)
-<p>
-        Public method to store the locals into the frame, so an access to
-        frame.f_locals returns the last data.
-</p><dl>
-<dt><i>frmnr</i></dt>
-<dd>
-distance of frame to store locals dictionary to. 0 is
-            the current frame (int)
-</dd>
-</dl><a NAME="DebugBase.trace_dispatch" ID="DebugBase.trace_dispatch"></a>
-<h4>DebugBase.trace_dispatch</h4>
-<b>trace_dispatch</b>(<i>frame, event, arg</i>)
-<p>
-        Public method reimplemented from bdb.py to do some special things.
-</p><p>
-        This specialty is to check the connection to the debug server
-        for new events (i.e. new breakpoints) while we are going through
-        the code.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd><dt><i>event</i></dt>
-<dd>
-The trace event (string)
-</dd><dt><i>arg</i></dt>
-<dd>
-The arguments
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-local trace function
-</dd>
-</dl><a NAME="DebugBase.user_exception" ID="DebugBase.user_exception"></a>
-<h4>DebugBase.user_exception</h4>
-<b>user_exception</b>(<i>frame, (exctype, excval, exctb), unhandled=0</i>)
-<p>
-        Public method reimplemented to report an exception to the debug server.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd><dt><i>exctype</i></dt>
-<dd>
-the type of the exception
-</dd><dt><i>excval</i></dt>
-<dd>
-data about the exception
-</dd><dt><i>exctb</i></dt>
-<dd>
-traceback for the exception
-</dd><dt><i>unhandled</i></dt>
-<dd>
-flag indicating an uncaught exception
-</dd>
-</dl><a NAME="DebugBase.user_line" ID="DebugBase.user_line"></a>
-<h4>DebugBase.user_line</h4>
-<b>user_line</b>(<i>frame</i>)
-<p>
-        Public method reimplemented to handle the program about to execute a
-        particular line.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><a NAME="DebugBase.user_return" ID="DebugBase.user_return"></a>
-<h4>DebugBase.user_return</h4>
-<b>user_return</b>(<i>frame, retval</i>)
-<p>
-        Public method reimplemented to report program termination to the
-        debug server.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd><dt><i>retval</i></dt>
-<dd>
-the return value of the program
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="printerr" ID="printerr"></a>
-<h2>printerr</h2>
-<b>printerr</b>(<i>s</i>)
-<p>
-    Module function used for debugging the debug client.
-</p><dl>
-<dt><i>s</i></dt>
-<dd>
-data to be printed
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="setRecursionLimit" ID="setRecursionLimit"></a>
-<h2>setRecursionLimit</h2>
-<b>setRecursionLimit</b>(<i>limit</i>)
-<p>
-    Module function to set the recursion limit.
-</p><dl>
-<dt><i>limit</i></dt>
-<dd>
-recursion limit (integer)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugClient.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,79 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugClient</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugClient</h1>
-<p>
-Module implementing a Qt-free version of the debug client.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DebugClient">DebugClient</a></td>
-<td>Class implementing the client side of the debugger.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr /><hr />
-<a NAME="DebugClient" ID="DebugClient"></a>
-<h2>DebugClient</h2>
-<p>
-    Class implementing the client side of the debugger.
-</p><p>
-    This variant of the debugger implements the standard debugger client
-    by subclassing all relevant base classes.
-</p>
-<h3>Derived from</h3>
-DebugClientBase.DebugClientBase, AsyncIO, DebugBase
-<h3>Class Attributes</h3>
-<table>
-<tr><td>debugClient</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DebugClient.__init__">DebugClient</a></td>
-<td>Constructor</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DebugClient.__init__" ID="DebugClient.__init__"></a>
-<h4>DebugClient (Constructor)</h4>
-<b>DebugClient</b>(<i></i>)
-<p>
-        Constructor
-</p>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugClientBase.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,867 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugClientBase</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugClientBase</h1>
-<p>
-Module implementing a debug client base class.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>DebugClientInstance</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DebugClientBase">DebugClientBase</a></td>
-<td>Class implementing the client side of the debugger.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#DebugClientClose">DebugClientClose</a></td>
-<td>Replacement for the standard os.close(fd).</td>
-</tr><tr>
-<td><a href="#DebugClientFork">DebugClientFork</a></td>
-<td>Replacement for the standard os.fork().</td>
-</tr><tr>
-<td><a href="#DebugClientInput">DebugClientInput</a></td>
-<td>Replacement for the standard input builtin.</td>
-</tr><tr>
-<td><a href="#DebugClientRawInput">DebugClientRawInput</a></td>
-<td>Replacement for the standard raw_input builtin.</td>
-</tr><tr>
-<td><a href="#DebugClientSetRecursionLimit">DebugClientSetRecursionLimit</a></td>
-<td>Replacement for the standard sys.setrecursionlimit(limit).</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="DebugClientBase" ID="DebugClientBase"></a>
-<h2>DebugClientBase</h2>
-<p>
-    Class implementing the client side of the debugger.
-</p><p>
-    It provides access to the Python interpreter from a debugger running in
-    another process whether or not the Qt event loop is running.
-</p><p>
-    The protocol between the debugger and the client assumes that there will be
-    a single source of debugger commands and a single source of Python
-    statements.  Commands and statements are always exactly one line and may be
-    interspersed.
-</p><p>
-    The protocol is as follows.  First the client opens a connection to the
-    debugger and then sends a series of one line commands.  A command is either
-    &gt;Load&lt;, &gt;Step&lt;, &gt;StepInto&lt;, ... or a Python statement.
-    See DebugProtocol.py for a listing of valid protocol tokens.
-</p><p>
-    A Python statement consists of the statement to execute, followed (in a
-    separate line) by &gt;OK?&lt;. If the statement was incomplete then the
-    response is &gt;Continue&lt;. If there was an exception then the response
-    is &gt;Exception&lt;. Otherwise the response is &gt;OK&lt;. The reason
-    for the &gt;OK?&lt; part is to provide a sentinel (i.e. the responding
-    &gt;OK&lt;) after any possible output as a result of executing the command.
-</p><p>
-    The client may send any other lines at any other time which should be
-    interpreted as program output.
-</p><p>
-    If the debugger closes the session there is no response from the client.
-    The client may close the session at any time as a result of the script
-    being debugged closing or crashing.
-</p><p>
-    <b>Note</b>: This class is meant to be subclassed by individual
-    DebugClient classes. Do not instantiate it directly.
-</p>
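The statement/sentinel exchange described above can be sketched as a small dispatch routine. This is an illustrative model only: the function name and the use of `codeop` are assumptions, and the actual logic lives in DebugClientBase.handleLine.

```python
import codeop

def respond(source):
    """Illustrative sketch of the >OK?< sentinel protocol: try to run a
    received statement and answer >OK<, >Continue< or >Exception<."""
    try:
        # compile_command returns None while the statement is incomplete
        code = codeop.compile_command(source, "<debugger>", "single")
    except (SyntaxError, ValueError, OverflowError):
        return ">Exception<"
    if code is None:
        # incomplete statement: ask the debugger to send more lines
        return ">Continue<"
    try:
        exec(code, {})
    except Exception:
        return ">Exception<"
    return ">OK<"
```

A debugger sending `x = 1` followed by `>OK?<` would thus receive `>OK<`, while the opening line of an `if` block yields `>Continue<` until the statement is complete.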
-<h3>Derived from</h3>
-object
-<h3>Class Attributes</h3>
-<table>
-<tr><td>clientCapabilities</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DebugClientBase.__init__">DebugClientBase</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__clientCapabilities">__clientCapabilities</a></td>
-<td>Private method to determine the client's capabilities.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__completionList">__completionList</a></td>
-<td>Private slot to handle the request for a commandline completion list.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__dumpThreadList">__dumpThreadList</a></td>
-<td>Private method to send the list of threads.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__dumpVariable">__dumpVariable</a></td>
-<td>Private method to return the variables of a frame to the debug server.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__dumpVariables">__dumpVariables</a></td>
-<td>Private method to return the variables of a frame to the debug server.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__exceptionRaised">__exceptionRaised</a></td>
-<td>Private method called in the case of an exception.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__formatQtVariable">__formatQtVariable</a></td>
-<td>Private method to produce a formatted output of a simple Qt4/Qt5 type.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__formatVariablesList">__formatVariablesList</a></td>
-<td>Private method to produce a formatted variables list.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__generateFilterObjects">__generateFilterObjects</a></td>
-<td>Private slot to convert a filter string to a list of filter objects.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__getCompletionList">__getCompletionList</a></td>
-<td>Private method to create a completions list.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__getSysPath">__getSysPath</a></td>
-<td>Private slot to calculate a path list including the PYTHONPATH environment variable.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__interact">__interact</a></td>
-<td>Private method to interact with the debugger.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__interceptSignals">__interceptSignals</a></td>
-<td>Private method to intercept common signals.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__resolveHost">__resolveHost</a></td>
-<td>Private method to resolve a hostname to an IP address.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__setCoding">__setCoding</a></td>
-<td>Private method to set the coding used by a python file.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__signalHandler">__signalHandler</a></td>
-<td>Private method to handle signals.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.__unhandled_exception">__unhandled_exception</a></td>
-<td>Private method called to report an uncaught exception.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.absPath">absPath</a></td>
-<td>Public method to convert a filename to an absolute name.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.attachThread">attachThread</a></td>
-<td>Public method to set up a thread for DebugClient to debug.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.close">close</a></td>
-<td>Public method implementing a close method as a replacement for os.close().</td>
-</tr><tr>
-<td><a href="#DebugClientBase.connectDebugger">connectDebugger</a></td>
-<td>Public method to establish a session with the debugger.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.eventLoop">eventLoop</a></td>
-<td>Public method implementing our event loop.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.eventPoll">eventPoll</a></td>
-<td>Public method to poll for events like 'set break point'.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.fork">fork</a></td>
-<td>Public method implementing a fork routine deciding which branch to follow.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.getCoding">getCoding</a></td>
-<td>Public method to return the current coding.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.getRunning">getRunning</a></td>
-<td>Public method to return the main script we are currently running.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.handleLine">handleLine</a></td>
-<td>Public method to handle the receipt of a complete line.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.input">input</a></td>
-<td>Public method to implement input() using the event loop.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.main">main</a></td>
-<td>Public method implementing the main method.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.progTerminated">progTerminated</a></td>
-<td>Public method to tell the debugger that the program has terminated.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.raw_input">raw_input</a></td>
-<td>Public method to implement raw_input() using the event loop.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.run_call">run_call</a></td>
-<td>Public method used to start the remote debugger and call a function.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.sessionClose">sessionClose</a></td>
-<td>Public method to close the session with the debugger and optionally terminate.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.shouldSkip">shouldSkip</a></td>
-<td>Public method to check if a file should be skipped.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.startDebugger">startDebugger</a></td>
-<td>Public method used to start the remote debugger.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.startProgInDebugger">startProgInDebugger</a></td>
-<td>Public method used to start the remote debugger.</td>
-</tr><tr>
-<td><a href="#DebugClientBase.write">write</a></td>
-<td>Public method to write data to the output stream.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DebugClientBase.__init__" ID="DebugClientBase.__init__"></a>
-<h4>DebugClientBase (Constructor)</h4>
-<b>DebugClientBase</b>(<i></i>)
-<p>
-        Constructor
-</p><a NAME="DebugClientBase.__clientCapabilities" ID="DebugClientBase.__clientCapabilities"></a>
-<h4>DebugClientBase.__clientCapabilities</h4>
-<b>__clientCapabilities</b>(<i></i>)
-<p>
-        Private method to determine the client's capabilities.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-client capabilities (integer)
-</dd>
-</dl><a NAME="DebugClientBase.__completionList" ID="DebugClientBase.__completionList"></a>
-<h4>DebugClientBase.__completionList</h4>
-<b>__completionList</b>(<i>text</i>)
-<p>
-        Private slot to handle the request for a commandline completion list.
-</p><dl>
-<dt><i>text</i></dt>
-<dd>
-the text to be completed (string)
-</dd>
-</dl><a NAME="DebugClientBase.__dumpThreadList" ID="DebugClientBase.__dumpThreadList"></a>
-<h4>DebugClientBase.__dumpThreadList</h4>
-<b>__dumpThreadList</b>(<i></i>)
-<p>
-        Private method to send the list of threads.
-</p><a NAME="DebugClientBase.__dumpVariable" ID="DebugClientBase.__dumpVariable"></a>
-<h4>DebugClientBase.__dumpVariable</h4>
-<b>__dumpVariable</b>(<i>var, frmnr, scope, filter</i>)
-<p>
-        Private method to return the variables of a frame to the debug server.
-</p><dl>
-<dt><i>var</i></dt>
-<dd>
-list encoded name of the requested variable
-            (list of strings)
-</dd><dt><i>frmnr</i></dt>
-<dd>
-distance of frame reported on. 0 is the current frame
-            (int)
-</dd><dt><i>scope</i></dt>
-<dd>
-1 to report global variables, 0 for local variables (int)
-</dd><dt><i>filter</i></dt>
-<dd>
-the indices of variable types to be filtered
-            (list of int)
-</dd>
-</dl><a NAME="DebugClientBase.__dumpVariables" ID="DebugClientBase.__dumpVariables"></a>
-<h4>DebugClientBase.__dumpVariables</h4>
-<b>__dumpVariables</b>(<i>frmnr, scope, filter</i>)
-<p>
-        Private method to return the variables of a frame to the debug server.
-</p><dl>
-<dt><i>frmnr</i></dt>
-<dd>
-distance of frame reported on. 0 is the current frame
-            (int)
-</dd><dt><i>scope</i></dt>
-<dd>
-1 to report global variables, 0 for local variables (int)
-</dd><dt><i>filter</i></dt>
-<dd>
-the indices of variable types to be filtered (list of
-            int)
-</dd>
-</dl><a NAME="DebugClientBase.__exceptionRaised" ID="DebugClientBase.__exceptionRaised"></a>
-<h4>DebugClientBase.__exceptionRaised</h4>
-<b>__exceptionRaised</b>(<i></i>)
-<p>
-        Private method called in the case of an exception.
-</p><p>
-        It ensures that the debug server is informed of the raised exception.
-</p><a NAME="DebugClientBase.__formatQtVariable" ID="DebugClientBase.__formatQtVariable"></a>
-<h4>DebugClientBase.__formatQtVariable</h4>
-<b>__formatQtVariable</b>(<i>value, vtype</i>)
-<p>
-        Private method to produce a formatted output of a simple Qt4/Qt5 type.
-</p><dl>
-<dt><i>value</i></dt>
-<dd>
-variable to be formatted
-</dd><dt><i>vtype</i></dt>
-<dd>
-type of the variable to be formatted (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-A tuple consisting of a list of formatted variables. Each
-            variable entry is a tuple of three elements, the variable name,
-            its type and value.
-</dd>
-</dl><a NAME="DebugClientBase.__formatVariablesList" ID="DebugClientBase.__formatVariablesList"></a>
-<h4>DebugClientBase.__formatVariablesList</h4>
-<b>__formatVariablesList</b>(<i>keylist, dict, scope, filter=[], formatSequences=0</i>)
-<p>
-        Private method to produce a formatted variables list.
-</p><p>
-        The dictionary passed in to it is scanned. Variables are
-        only added to the list, if their type is not contained
-        in the filter list and their name doesn't match any of
-        the filter expressions. The formatted variables list (a list of tuples
-        of 3 values) is returned.
-</p><dl>
-<dt><i>keylist</i></dt>
-<dd>
-keys of the dictionary
-</dd><dt><i>dict</i></dt>
-<dd>
-the dictionary to be scanned
-</dd><dt><i>scope</i></dt>
-<dd>
-1 to filter using the globals filter, 0 using the locals
-            filter (int).
-            Variables are only added to the list, if their names do not match
-            any of the filter expressions.
-</dd><dt><i>filter</i></dt>
-<dd>
-the indices of variable types to be filtered. Variables
-            are only added to the list, if their type is not contained in the
-            filter list.
-</dd><dt><i>formatSequences</i></dt>
-<dd>
-flag indicating, that sequence or dictionary
-            variables should be formatted. If it is 0 (or false), just the
-            number of items contained in these variables is returned. (boolean)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-A tuple consisting of a list of formatted variables. Each
-            variable entry is a tuple of three elements, the variable name,
-            its type and value.
-</dd>
-</dl><a NAME="DebugClientBase.__generateFilterObjects" ID="DebugClientBase.__generateFilterObjects"></a>
-<h4>DebugClientBase.__generateFilterObjects</h4>
-<b>__generateFilterObjects</b>(<i>scope, filterString</i>)
-<p>
-        Private slot to convert a filter string to a list of filter objects.
-</p><dl>
-<dt><i>scope</i></dt>
-<dd>
-1 to generate filter for global variables, 0 for local
-            variables (int)
-</dd><dt><i>filterString</i></dt>
-<dd>
-string of filter patterns separated by ';'
-</dd>
-</dl><a NAME="DebugClientBase.__getCompletionList" ID="DebugClientBase.__getCompletionList"></a>
-<h4>DebugClientBase.__getCompletionList</h4>
-<b>__getCompletionList</b>(<i>text, completer, completions</i>)
-<p>
-        Private method to create a completions list.
-</p><dl>
-<dt><i>text</i></dt>
-<dd>
-text to complete (string)
-</dd><dt><i>completer</i></dt>
-<dd>
-completer method
-</dd><dt><i>completions</i></dt>
-<dd>
-set where to add new completions strings (set)
-</dd>
-</dl><a NAME="DebugClientBase.__getSysPath" ID="DebugClientBase.__getSysPath"></a>
-<h4>DebugClientBase.__getSysPath</h4>
-<b>__getSysPath</b>(<i>firstEntry</i>)
-<p>
-        Private slot to calculate a path list including the PYTHONPATH
-        environment variable.
-</p><dl>
-<dt><i>firstEntry</i></dt>
-<dd>
-entry to be put first in sys.path (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-path list for use as sys.path (list of strings)
-</dd>
-</dl><a NAME="DebugClientBase.__interact" ID="DebugClientBase.__interact"></a>
-<h4>DebugClientBase.__interact</h4>
-<b>__interact</b>(<i></i>)
-<p>
-        Private method to interact with the debugger.
-</p><a NAME="DebugClientBase.__interceptSignals" ID="DebugClientBase.__interceptSignals"></a>
-<h4>DebugClientBase.__interceptSignals</h4>
-<b>__interceptSignals</b>(<i></i>)
-<p>
-        Private method to intercept common signals.
-</p><a NAME="DebugClientBase.__resolveHost" ID="DebugClientBase.__resolveHost"></a>
-<h4>DebugClientBase.__resolveHost</h4>
-<b>__resolveHost</b>(<i>host</i>)
-<p>
-        Private method to resolve a hostname to an IP address.
-</p><dl>
-<dt><i>host</i></dt>
-<dd>
-hostname of the debug server (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-IP address (string)
-</dd>
-</dl><a NAME="DebugClientBase.__setCoding" ID="DebugClientBase.__setCoding"></a>
-<h4>DebugClientBase.__setCoding</h4>
-<b>__setCoding</b>(<i>filename</i>)
-<p>
-        Private method to set the coding used by a python file.
-</p><dl>
-<dt><i>filename</i></dt>
-<dd>
-name of the file to inspect (string)
-</dd>
-</dl><a NAME="DebugClientBase.__signalHandler" ID="DebugClientBase.__signalHandler"></a>
-<h4>DebugClientBase.__signalHandler</h4>
-<b>__signalHandler</b>(<i>signalNumber, stackFrame</i>)
-<p>
-        Private method to handle signals.
-</p><dl>
-<dt><i>signalNumber</i> (int)</dt>
-<dd>
-number of the signal to be handled
-</dd><dt><i>stackFrame</i> (frame object)</dt>
-<dd>
-current stack frame
-</dd>
-</dl><a NAME="DebugClientBase.__unhandled_exception" ID="DebugClientBase.__unhandled_exception"></a>
-<h4>DebugClientBase.__unhandled_exception</h4>
-<b>__unhandled_exception</b>(<i>exctype, excval, exctb</i>)
-<p>
-        Private method called to report an uncaught exception.
-</p><dl>
-<dt><i>exctype</i></dt>
-<dd>
-the type of the exception
-</dd><dt><i>excval</i></dt>
-<dd>
-data about the exception
-</dd><dt><i>exctb</i></dt>
-<dd>
-traceback for the exception
-</dd>
-</dl><a NAME="DebugClientBase.absPath" ID="DebugClientBase.absPath"></a>
-<h4>DebugClientBase.absPath</h4>
-<b>absPath</b>(<i>fn</i>)
-<p>
-        Public method to convert a filename to an absolute name.
-</p><p>
-        sys.path is used as a set of possible prefixes. The name stays
-        relative if a file could not be found.
-</p><dl>
-<dt><i>fn</i></dt>
-<dd>
-filename (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the converted filename (string)
-</dd>
-</dl><a NAME="DebugClientBase.attachThread" ID="DebugClientBase.attachThread"></a>
-<h4>DebugClientBase.attachThread</h4>
-<b>attachThread</b>(<i>target=None, args=None, kwargs=None, mainThread=0</i>)
-<p>
-        Public method to set up a thread for DebugClient to debug.
-</p><p>
-        If mainThread is non-zero, then we are attaching to the already
-        started mainthread of the app and the rest of the args are ignored.
-</p><p>
-        This is just an empty function and is overridden in the threaded
-        debugger.
-</p><dl>
-<dt><i>target</i></dt>
-<dd>
-the start function of the target thread (i.e. the user
-            code)
-</dd><dt><i>args</i></dt>
-<dd>
-arguments to pass to target
-</dd><dt><i>kwargs</i></dt>
-<dd>
-keyword arguments to pass to target
-</dd><dt><i>mainThread</i></dt>
-<dd>
-non-zero, if we are attaching to the already
-              started mainthread of the app
-</dd>
-</dl><a NAME="DebugClientBase.close" ID="DebugClientBase.close"></a>
-<h4>DebugClientBase.close</h4>
-<b>close</b>(<i>fd</i>)
-<p>
-        Public method implementing a close method as a replacement for
-        os.close().
-</p><p>
-        It prevents the debugger connections from being closed.
-</p><dl>
-<dt><i>fd</i></dt>
-<dd>
-file descriptor to be closed (integer)
-</dd>
-</dl><a NAME="DebugClientBase.connectDebugger" ID="DebugClientBase.connectDebugger"></a>
-<h4>DebugClientBase.connectDebugger</h4>
-<b>connectDebugger</b>(<i>port, remoteAddress=None, redirect=1</i>)
-<p>
-        Public method to establish a session with the debugger.
-</p><p>
-        It opens a network connection to the debugger, connects it to stdin,
-        stdout and stderr and saves these file objects in case the application
-        being debugged redirects them itself.
-</p><dl>
-<dt><i>port</i></dt>
-<dd>
-the port number to connect to (int)
-</dd><dt><i>remoteAddress</i></dt>
-<dd>
-the network address of the debug server host
-            (string)
-</dd><dt><i>redirect</i></dt>
-<dd>
-flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.eventLoop" ID="DebugClientBase.eventLoop"></a>
-<h4>DebugClientBase.eventLoop</h4>
-<b>eventLoop</b>(<i>disablePolling=False</i>)
-<p>
-        Public method implementing our event loop.
-</p><dl>
-<dt><i>disablePolling</i></dt>
-<dd>
-flag indicating to enter an event loop with
-            polling disabled (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.eventPoll" ID="DebugClientBase.eventPoll"></a>
-<h4>DebugClientBase.eventPoll</h4>
-<b>eventPoll</b>(<i></i>)
-<p>
-        Public method to poll for events like 'set break point'.
-</p><a NAME="DebugClientBase.fork" ID="DebugClientBase.fork"></a>
-<h4>DebugClientBase.fork</h4>
-<b>fork</b>(<i></i>)
-<p>
-        Public method implementing a fork routine deciding which branch to
-        follow.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-process ID (integer)
-</dd>
-</dl><a NAME="DebugClientBase.getCoding" ID="DebugClientBase.getCoding"></a>
-<h4>DebugClientBase.getCoding</h4>
-<b>getCoding</b>(<i></i>)
-<p>
-        Public method to return the current coding.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-codec name (string)
-</dd>
-</dl><a NAME="DebugClientBase.getRunning" ID="DebugClientBase.getRunning"></a>
-<h4>DebugClientBase.getRunning</h4>
-<b>getRunning</b>(<i></i>)
-<p>
-        Public method to return the main script we are currently running.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating a running debug session (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.handleLine" ID="DebugClientBase.handleLine"></a>
-<h4>DebugClientBase.handleLine</h4>
-<b>handleLine</b>(<i>line</i>)
-<p>
-        Public method to handle the receipt of a complete line.
-</p><p>
-        It first looks for a valid protocol token at the start of the line.
-        Thereafter it tries to execute the lines accumulated so far.
-</p><dl>
-<dt><i>line</i></dt>
-<dd>
-the received line
-</dd>
-</dl><a NAME="DebugClientBase.input" ID="DebugClientBase.input"></a>
-<h4>DebugClientBase.input</h4>
-<b>input</b>(<i>prompt</i>)
-<p>
-        Public method to implement input() using the event loop.
-</p><dl>
-<dt><i>prompt</i></dt>
-<dd>
-the prompt to be shown (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the entered string evaluated as a Python expression
-</dd>
-</dl><a NAME="DebugClientBase.main" ID="DebugClientBase.main"></a>
-<h4>DebugClientBase.main</h4>
-<b>main</b>(<i></i>)
-<p>
-        Public method implementing the main method.
-</p><a NAME="DebugClientBase.progTerminated" ID="DebugClientBase.progTerminated"></a>
-<h4>DebugClientBase.progTerminated</h4>
-<b>progTerminated</b>(<i>status</i>)
-<p>
-        Public method to tell the debugger that the program has terminated.
-</p><dl>
-<dt><i>status</i> (int)</dt>
-<dd>
-return status
-</dd>
-</dl><a NAME="DebugClientBase.raw_input" ID="DebugClientBase.raw_input"></a>
-<h4>DebugClientBase.raw_input</h4>
-<b>raw_input</b>(<i>prompt, echo</i>)
-<p>
-        Public method to implement raw_input() using the event loop.
-</p><dl>
-<dt><i>prompt</i></dt>
-<dd>
-the prompt to be shown (string)
-</dd><dt><i>echo</i></dt>
-<dd>
-Flag indicating echoing of the input (boolean)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the entered string
-</dd>
-</dl><a NAME="DebugClientBase.run_call" ID="DebugClientBase.run_call"></a>
-<h4>DebugClientBase.run_call</h4>
-<b>run_call</b>(<i>scriptname, func, *args</i>)
-<p>
-        Public method used to start the remote debugger and call a function.
-</p><dl>
-<dt><i>scriptname</i></dt>
-<dd>
-name of the script to be debugged (string)
-</dd><dt><i>func</i></dt>
-<dd>
-function to be called
-</dd><dt><i>*args</i></dt>
-<dd>
-arguments being passed to func
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-result of the function call
-</dd>
-</dl><a NAME="DebugClientBase.sessionClose" ID="DebugClientBase.sessionClose"></a>
-<h4>DebugClientBase.sessionClose</h4>
-<b>sessionClose</b>(<i>exit=1</i>)
-<p>
-        Public method to close the session with the debugger and optionally
-        terminate.
-</p><dl>
-<dt><i>exit</i></dt>
-<dd>
-flag indicating to terminate (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.shouldSkip" ID="DebugClientBase.shouldSkip"></a>
-<h4>DebugClientBase.shouldSkip</h4>
-<b>shouldSkip</b>(<i>fn</i>)
-<p>
-        Public method to check if a file should be skipped.
-</p><dl>
-<dt><i>fn</i></dt>
-<dd>
-filename to be checked
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-non-zero if fn represents a file we are 'skipping',
-            zero otherwise.
-</dd>
-</dl><a NAME="DebugClientBase.startDebugger" ID="DebugClientBase.startDebugger"></a>
-<h4>DebugClientBase.startDebugger</h4>
-<b>startDebugger</b>(<i>filename=None, host=None, port=None, enableTrace=1, exceptions=1, tracePython=0, redirect=1</i>)
-<p>
-        Public method used to start the remote debugger.
-</p><dl>
-<dt><i>filename</i></dt>
-<dd>
-the program to be debugged (string)
-</dd><dt><i>host</i></dt>
-<dd>
-hostname of the debug server (string)
-</dd><dt><i>port</i></dt>
-<dd>
-port number of the debug server (int)
-</dd><dt><i>enableTrace</i></dt>
-<dd>
-flag to enable the tracing function (boolean)
-</dd><dt><i>exceptions</i></dt>
-<dd>
-flag to enable exception reporting of the IDE
-            (boolean)
-</dd><dt><i>tracePython</i></dt>
-<dd>
-flag to enable tracing into the Python library
-            (boolean)
-</dd><dt><i>redirect</i></dt>
-<dd>
-flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.startProgInDebugger" ID="DebugClientBase.startProgInDebugger"></a>
-<h4>DebugClientBase.startProgInDebugger</h4>
-<b>startProgInDebugger</b>(<i>progargs, wd='', host=None, port=None, exceptions=1, tracePython=0, redirect=1</i>)
-<p>
-        Public method used to start the remote debugger.
-</p><dl>
-<dt><i>progargs</i></dt>
-<dd>
-command line for the program to be debugged
-            (list of strings)
-</dd><dt><i>wd</i></dt>
-<dd>
-working directory for the program execution (string)
-</dd><dt><i>host</i></dt>
-<dd>
-hostname of the debug server (string)
-</dd><dt><i>port</i></dt>
-<dd>
-port number of the debug server (int)
-</dd><dt><i>exceptions</i></dt>
-<dd>
-flag to enable exception reporting of the IDE
-            (boolean)
-</dd><dt><i>tracePython</i></dt>
-<dd>
-flag to enable tracing into the Python library
-            (boolean)
-</dd><dt><i>redirect</i></dt>
-<dd>
-flag indicating redirection of stdin, stdout and
-            stderr (boolean)
-</dd>
-</dl><a NAME="DebugClientBase.write" ID="DebugClientBase.write"></a>
-<h4>DebugClientBase.write</h4>
-<b>write</b>(<i>s</i>)
-<p>
-        Public method to write data to the output stream.
-</p><dl>
-<dt><i>s</i></dt>
-<dd>
-data to be written (string)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="DebugClientClose" ID="DebugClientClose"></a>
-<h2>DebugClientClose</h2>
-<b>DebugClientClose</b>(<i>fd</i>)
-<p>
-    Replacement for the standard os.close(fd).
-</p><dl>
-<dt><i>fd</i></dt>
-<dd>
-open file descriptor to be closed (integer)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="DebugClientFork" ID="DebugClientFork"></a>
-<h2>DebugClientFork</h2>
-<b>DebugClientFork</b>(<i></i>)
-<p>
-    Replacement for the standard os.fork().
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-result of the fork() call
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="DebugClientInput" ID="DebugClientInput"></a>
-<h2>DebugClientInput</h2>
-<b>DebugClientInput</b>(<i>prompt=""</i>)
-<p>
-    Replacement for the standard input builtin.
-</p><p>
-    This function works with the split debugger.
-</p><dl>
-<dt><i>prompt</i></dt>
-<dd>
-prompt to be shown (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-result of the input() call
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="DebugClientRawInput" ID="DebugClientRawInput"></a>
-<h2>DebugClientRawInput</h2>
-<b>DebugClientRawInput</b>(<i>prompt="", echo=1</i>)
-<p>
-    Replacement for the standard raw_input builtin.
-</p><p>
-    This function works with the split debugger.
-</p><dl>
-<dt><i>prompt</i></dt>
-<dd>
-prompt to be shown. (string)
-</dd><dt><i>echo</i></dt>
-<dd>
-flag indicating echoing of the input (boolean)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-result of the raw_input() call
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="DebugClientSetRecursionLimit" ID="DebugClientSetRecursionLimit"></a>
-<h2>DebugClientSetRecursionLimit</h2>
-<b>DebugClientSetRecursionLimit</b>(<i>limit</i>)
-<p>
-    Replacement for the standard sys.setrecursionlimit(limit).
-</p><dl>
-<dt><i>limit</i></dt>
-<dd>
-recursion limit (integer)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugClientCapabilities.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,39 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugClientCapabilities</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugClientCapabilities</h1>
-<p>
-Module defining the debug clients capabilities.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>HasAll</td></tr><tr><td>HasCompleter</td></tr><tr><td>HasCoverage</td></tr><tr><td>HasDebugger</td></tr><tr><td>HasInterpreter</td></tr><tr><td>HasProfiler</td></tr><tr><td>HasShell</td></tr><tr><td>HasUnittest</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugClientThreads.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,222 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugClientThreads</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugClientThreads</h1>
-<p>
-Module implementing the multithreaded version of the debug client.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>_original_start_thread</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DebugClientThreads">DebugClientThreads</a></td>
-<td>Class implementing the client side of the debugger.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#_debugclient_start_new_thread">_debugclient_start_new_thread</a></td>
-<td>Module function used to allow for debugging of multiple threads.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="DebugClientThreads" ID="DebugClientThreads"></a>
-<h2>DebugClientThreads</h2>
-<p>
-    Class implementing the client side of the debugger.
-</p><p>
-    This variant of the debugger implements a threaded debugger client
-    by subclassing all relevant base classes.
-</p>
-<h3>Derived from</h3>
-DebugClientBase.DebugClientBase, AsyncIO
-<h3>Class Attributes</h3>
-<table>
-<tr><td>debugClient</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DebugClientThreads.__init__">DebugClientThreads</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.attachThread">attachThread</a></td>
-<td>Public method to set up a thread for DebugClient to debug.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.eventLoop">eventLoop</a></td>
-<td>Public method implementing our event loop.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.lockClient">lockClient</a></td>
-<td>Public method to acquire the lock for this client.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.setCurrentThread">setCurrentThread</a></td>
-<td>Public method to set the current thread.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.set_quit">set_quit</a></td>
-<td>Public method to do a 'set quit' on all threads.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.threadTerminated">threadTerminated</a></td>
-<td>Public method called when a DebugThread has exited.</td>
-</tr><tr>
-<td><a href="#DebugClientThreads.unlockClient">unlockClient</a></td>
-<td>Public method to release the lock for this client.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DebugClientThreads.__init__" ID="DebugClientThreads.__init__"></a>
-<h4>DebugClientThreads (Constructor)</h4>
-<b>DebugClientThreads</b>(<i></i>)
-<p>
-        Constructor
-</p><a NAME="DebugClientThreads.attachThread" ID="DebugClientThreads.attachThread"></a>
-<h4>DebugClientThreads.attachThread</h4>
-<b>attachThread</b>(<i>target=None, args=None, kwargs=None, mainThread=0</i>)
-<p>
-        Public method to set up a thread for DebugClient to debug.
-</p><p>
-        If mainThread is non-zero, then we are attaching to the already
-        started main thread of the app and the rest of the args are ignored.
-</p><dl>
-<dt><i>target</i></dt>
-<dd>
-the start function of the target thread (i.e. the
-            user code)
-</dd><dt><i>args</i></dt>
-<dd>
-arguments to pass to target
-</dd><dt><i>kwargs</i></dt>
-<dd>
-keyword arguments to pass to target
-</dd><dt><i>mainThread</i></dt>
-<dd>
-non-zero, if we are attaching to the already
-              started main thread of the app
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-The identifier of the created thread
-</dd>
-</dl><a NAME="DebugClientThreads.eventLoop" ID="DebugClientThreads.eventLoop"></a>
-<h4>DebugClientThreads.eventLoop</h4>
-<b>eventLoop</b>(<i>disablePolling=False</i>)
-<p>
-        Public method implementing our event loop.
-</p><dl>
-<dt><i>disablePolling</i></dt>
-<dd>
-flag indicating to enter an event loop with
-            polling disabled (boolean)
-</dd>
-</dl><a NAME="DebugClientThreads.lockClient" ID="DebugClientThreads.lockClient"></a>
-<h4>DebugClientThreads.lockClient</h4>
-<b>lockClient</b>(<i>blocking=1</i>)
-<p>
-        Public method to acquire the lock for this client.
-</p><dl>
-<dt><i>blocking</i></dt>
-<dd>
-flag indicating a blocking lock
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating successful locking
-</dd>
-</dl><a NAME="DebugClientThreads.setCurrentThread" ID="DebugClientThreads.setCurrentThread"></a>
-<h4>DebugClientThreads.setCurrentThread</h4>
-<b>setCurrentThread</b>(<i>id</i>)
-<p>
-        Public method to set the current thread.
-</p><dl>
-<dt><i>id</i></dt>
-<dd>
-the id the current thread should be set to.
-</dd>
-</dl><a NAME="DebugClientThreads.set_quit" ID="DebugClientThreads.set_quit"></a>
-<h4>DebugClientThreads.set_quit</h4>
-<b>set_quit</b>(<i></i>)
-<p>
-        Public method to do a 'set quit' on all threads.
-</p><a NAME="DebugClientThreads.threadTerminated" ID="DebugClientThreads.threadTerminated"></a>
-<h4>DebugClientThreads.threadTerminated</h4>
-<b>threadTerminated</b>(<i>dbgThread</i>)
-<p>
-        Public method called when a DebugThread has exited.
-</p><dl>
-<dt><i>dbgThread</i></dt>
-<dd>
-the DebugThread that has exited
-</dd>
-</dl><a NAME="DebugClientThreads.unlockClient" ID="DebugClientThreads.unlockClient"></a>
-<h4>DebugClientThreads.unlockClient</h4>
-<b>unlockClient</b>(<i></i>)
-<p>
-        Public method to release the lock for this client.
-</p>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="_debugclient_start_new_thread" ID="_debugclient_start_new_thread"></a>
-<h2>_debugclient_start_new_thread</h2>
-<b>_debugclient_start_new_thread</b>(<i>target, args, kwargs={}</i>)
-<p>
-    Module function used to allow for debugging of multiple threads.
-</p><p>
-    The way it works is that below, we reset thread._start_new_thread to
-    this function object. Thus, providing a hook for us to see when
-    threads are started. From here we forward the request onto the
-    DebugClient which will create a DebugThread object to allow tracing
-    of the thread then start up the thread. These actions are always
-    performed in order to allow dropping into debug mode.
-</p><p>
-    See DebugClientThreads.attachThread and DebugThread.DebugThread in
-    DebugThread.py
-</p><dl>
-<dt><i>target</i></dt>
-<dd>
-the start function of the target thread (i.e. the user code)
-</dd><dt><i>args</i></dt>
-<dd>
-arguments to pass to target
-</dd><dt><i>kwargs</i></dt>
-<dd>
-keyword arguments to pass to target
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-The identifier of the created thread
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugConfig.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,39 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugConfig</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugConfig</h1>
-<p>
-Module defining type strings for the different Python types.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>ConfigVarTypeStrings</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.DebugThread.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,184 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.DebugThread</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.DebugThread</h1>
-<p>
-Module implementing the debug thread.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#DebugThread">DebugThread</a></td>
-<td>Class implementing a debug thread.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr /><hr />
-<a NAME="DebugThread" ID="DebugThread"></a>
-<h2>DebugThread</h2>
-<p>
-    Class implementing a debug thread.
-</p><p>
-    It represents a thread in the python interpreter that we are tracing.
-</p><p>
-    Provides simple wrapper methods around bdb for the 'owning' client to
-    call to step etc.
-</p>
-<h3>Derived from</h3>
-DebugBase
-<h3>Class Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#DebugThread.__init__">DebugThread</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#DebugThread.bootstrap">bootstrap</a></td>
-<td>Public method to bootstrap the thread.</td>
-</tr><tr>
-<td><a href="#DebugThread.get_ident">get_ident</a></td>
-<td>Public method to return the id of this thread.</td>
-</tr><tr>
-<td><a href="#DebugThread.get_name">get_name</a></td>
-<td>Public method to return the name of this thread.</td>
-</tr><tr>
-<td><a href="#DebugThread.set_ident">set_ident</a></td>
-<td>Public method to set the id for this thread.</td>
-</tr><tr>
-<td><a href="#DebugThread.traceThread">traceThread</a></td>
-<td>Public method to set up tracing for this thread.</td>
-</tr><tr>
-<td><a href="#DebugThread.trace_dispatch">trace_dispatch</a></td>
-<td>Public method wrapping the trace_dispatch of bdb.py.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="DebugThread.__init__" ID="DebugThread.__init__"></a>
-<h4>DebugThread (Constructor)</h4>
-<b>DebugThread</b>(<i>dbgClient, targ=None, args=None, kwargs=None, mainThread=0</i>)
-<p>
-        Constructor
-</p><dl>
-<dt><i>dbgClient</i></dt>
-<dd>
-the owning client
-</dd><dt><i>targ</i></dt>
-<dd>
-the target method in the run thread
-</dd><dt><i>args</i></dt>
-<dd>
-arguments to be passed to the thread
-</dd><dt><i>kwargs</i></dt>
-<dd>
-arguments to be passed to the thread
-</dd><dt><i>mainThread</i></dt>
-<dd>
-0 if this thread is not the main script's thread
-</dd>
-</dl><a NAME="DebugThread.bootstrap" ID="DebugThread.bootstrap"></a>
-<h4>DebugThread.bootstrap</h4>
-<b>bootstrap</b>(<i></i>)
-<p>
-        Public method to bootstrap the thread.
-</p><p>
-        It wraps the call to the user function to enable tracing
-        beforehand.
-</p><a NAME="DebugThread.get_ident" ID="DebugThread.get_ident"></a>
-<h4>DebugThread.get_ident</h4>
-<b>get_ident</b>(<i></i>)
-<p>
-        Public method to return the id of this thread.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-the id of this thread (int)
-</dd>
-</dl><a NAME="DebugThread.get_name" ID="DebugThread.get_name"></a>
-<h4>DebugThread.get_name</h4>
-<b>get_name</b>(<i></i>)
-<p>
-        Public method to return the name of this thread.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-name of this thread (string)
-</dd>
-</dl><a NAME="DebugThread.set_ident" ID="DebugThread.set_ident"></a>
-<h4>DebugThread.set_ident</h4>
-<b>set_ident</b>(<i>id</i>)
-<p>
-        Public method to set the id for this thread.
-</p><dl>
-<dt><i>id</i></dt>
-<dd>
-id for this thread (int)
-</dd>
-</dl><a NAME="DebugThread.traceThread" ID="DebugThread.traceThread"></a>
-<h4>DebugThread.traceThread</h4>
-<b>traceThread</b>(<i></i>)
-<p>
-        Public method to set up tracing for this thread.
-</p><a NAME="DebugThread.trace_dispatch" ID="DebugThread.trace_dispatch"></a>
-<h4>DebugThread.trace_dispatch</h4>
-<b>trace_dispatch</b>(<i>frame, event, arg</i>)
-<p>
-        Public method wrapping the trace_dispatch of bdb.py.
-</p><p>
-        It wraps the call to dispatch tracing into
-        bdb to make sure we have locked the client to prevent multiple
-        threads from entering the client event loop.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-The current stack frame.
-</dd><dt><i>event</i></dt>
-<dd>
-The trace event (string)
-</dd><dt><i>arg</i></dt>
-<dd>
-The arguments
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-local trace function
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.FlexCompleter.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,288 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.FlexCompleter</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.FlexCompleter</h1>
-<p>
-Word completion for the eric6 shell.
-</p><p>
-<h4>NOTE for eric6 variant</h4>
-</p><p>
-    This version is a re-implementation of FlexCompleter
-    as found in the PyQwt package. It is modified to work with the eric6 debug
-    clients.
-</p><p>
-
-</p><p>
-<h4>NOTE for the PyQwt variant</h4>
-</p><p>
-    This version is a re-implementation of FlexCompleter
-    with readline support for PyQt&sip-3.6 and earlier.
-</p><p>
-    Full readline support is present in PyQt&sip-snapshot-20030531 and later.
-</p><p>
-
-</p><p>
-<h4>NOTE for FlexCompleter</h4>
-</p><p>
-    This version is a re-implementation of rlcompleter with
-    selectable namespace.
-</p><p>
-    The problem with rlcompleter is that it's hardwired to work with
-    __main__.__dict__, and in some cases one may have 'sandboxed' namespaces.
-    So this class is a ripoff of rlcompleter, with the namespace to work in as
-    an optional parameter.
-</p><p>
-    This class can be used just like rlcompleter, but the Completer class now
-    has a constructor with the optional 'namespace' parameter.
-</p><p>
-    A patch has been submitted to Python@sourceforge for these changes to go in
-    the standard Python distribution.
-</p><p>
-
-</p><p>
-<h4>Original rlcompleter documentation</h4>
-</p><p>
-    This requires the latest extension to the readline module. The completer
-    completes keywords, built-ins and globals in __main__; when completing
-    NAME.NAME..., it evaluates (!) the expression up to the last dot and
-    completes its attributes.
-</p><p>
-    It's very cool to do "import string", type "string.", hit the
-    completion key (twice), and see the list of names defined by the
-    string module!
-</p><p>
-    Tip: to use the tab key as the completion key, call
-</p><p>
-    'readline.parse_and_bind("tab: complete")'
-</p><p>
-    <b>Notes</b>:
-    <ul>
-    <li>
-    Exceptions raised by the completer function are *ignored* (and
-    generally cause the completion to fail).  This is a feature -- since
-    readline sets the tty device in raw (or cbreak) mode, printing a
-    traceback wouldn't work well without some complicated hoopla to save,
-    reset and restore the tty state.
-    </li>
-    <li>
-    The evaluation of the NAME.NAME... form may cause arbitrary
-    application defined code to be executed if an object with a
-    __getattr__ hook is found.  Since it is the responsibility of the
-    application (or the user) to enable this feature, I consider this an
-    acceptable risk.  More complicated expressions (e.g. function calls or
-    indexing operations) are *not* evaluated.
-    </li>
-    <li>
-    GNU readline is also used by the built-in functions input() and
-    raw_input(), and thus these also benefit/suffer from the completer
-    features.  Clearly an interactive application can benefit by
-    specifying its own completer function and using raw_input() for all
-    its input.
-    </li>
-    <li>
-    When the original stdin is not a tty device, GNU readline is never
-    used, and this module (and the readline module) are silently inactive.
-    </li>
-    </ul>
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>__all__</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#Completer">Completer</a></td>
-<td>Class implementing the command line completer object.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#get_class_members">get_class_members</a></td>
-<td>Module function to retrieve the class members.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="Completer" ID="Completer"></a>
-<h2>Completer</h2>
-<p>
-    Class implementing the command line completer object.
-</p>
-<h3>Derived from</h3>
-object
-<h3>Class Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#Completer.__init__">Completer</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#Completer._callable_postfix">_callable_postfix</a></td>
-<td>Protected method to check for a callable.</td>
-</tr><tr>
-<td><a href="#Completer.attr_matches">attr_matches</a></td>
-<td>Public method to compute matches when text contains a dot.</td>
-</tr><tr>
-<td><a href="#Completer.complete">complete</a></td>
-<td>Public method to return the next possible completion for 'text'.</td>
-</tr><tr>
-<td><a href="#Completer.global_matches">global_matches</a></td>
-<td>Public method to compute matches when text is a simple name.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="Completer.__init__" ID="Completer.__init__"></a>
-<h4>Completer (Constructor)</h4>
-<b>Completer</b>(<i>namespace=None</i>)
-<p>
-        Constructor
-</p><p>
-        Completer([namespace]) -> completer instance.
-</p><p>
-        If unspecified, the default namespace where completions are performed
-        is __main__ (technically, __main__.__dict__). Namespaces should be
-        given as dictionaries.
-</p><p>
-        Completer instances should be used as the completion mechanism of
-        readline via the set_completer() call:
-</p><p>
-        readline.set_completer(Completer(my_namespace).complete)
-</p><dl>
-<dt><i>namespace</i></dt>
-<dd>
-namespace for the completer
-</dd>
-</dl><dl>
-<dt>Raises <b>TypeError</b>:</dt>
-<dd>
-raised to indicate a wrong namespace structure
-</dd>
-</dl><a NAME="Completer._callable_postfix" ID="Completer._callable_postfix"></a>
-<h4>Completer._callable_postfix</h4>
-<b>_callable_postfix</b>(<i>val, word</i>)
-<p>
-        Protected method to check for a callable.
-</p><dl>
-<dt><i>val</i></dt>
-<dd>
-value to check (object)
-</dd><dt><i>word</i></dt>
-<dd>
-word to amend (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-amended word (string)
-</dd>
-</dl><a NAME="Completer.attr_matches" ID="Completer.attr_matches"></a>
-<h4>Completer.attr_matches</h4>
-<b>attr_matches</b>(<i>text</i>)
-<p>
-        Public method to compute matches when text contains a dot.
-</p><p>
-        Assuming the text is of the form NAME.NAME....[NAME], and is
-        evaluatable in self.namespace, it will be evaluated and its attributes
-        (as revealed by dir()) are used as possible completions.  (For class
-        instances, class members are also considered.)
-</p><p>
-        <b>WARNING</b>: this can still invoke arbitrary C code, if an object
-        with a __getattr__ hook is evaluated.
-</p><dl>
-<dt><i>text</i></dt>
-<dd>
-The text to be completed. (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-A list of all matches.
-</dd>
-</dl><a NAME="Completer.complete" ID="Completer.complete"></a>
-<h4>Completer.complete</h4>
-<b>complete</b>(<i>text, state</i>)
-<p>
-        Public method to return the next possible completion for 'text'.
-</p><p>
-        This is called successively with state == 0, 1, 2, ... until it
-        returns None.  The completion should begin with 'text'.
-</p><dl>
-<dt><i>text</i></dt>
-<dd>
-The text to be completed. (string)
-</dd><dt><i>state</i></dt>
-<dd>
-The state of the completion. (integer)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-The possible completions as a list of strings.
-</dd>
-</dl><a NAME="Completer.global_matches" ID="Completer.global_matches"></a>
-<h4>Completer.global_matches</h4>
-<b>global_matches</b>(<i>text</i>)
-<p>
-        Public method to compute matches when text is a simple name.
-</p><dl>
-<dt><i>text</i></dt>
-<dd>
-The text to be completed. (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-A list of all keywords, built-in functions and names currently
-        defined in self.namespace that match.
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="get_class_members" ID="get_class_members"></a>
-<h2>get_class_members</h2>
-<b>get_class_members</b>(<i>klass</i>)
-<p>
-    Module function to retrieve the class members.
-</p><dl>
-<dt><i>klass</i></dt>
-<dd>
-The class object to be analysed.
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-A list of all names defined in the class.
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.PyProfile.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,182 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.PyProfile</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.PyProfile</h1>
-<p>
-Module defining additions to the standard Python profile.py.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr>
-<td><a href="#PyProfile">PyProfile</a></td>
-<td>Class extending the standard Python profiler with additional methods.</td>
-</tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<hr /><hr />
-<a NAME="PyProfile" ID="PyProfile"></a>
-<h2>PyProfile</h2>
-<p>
-    Class extending the standard Python profiler with additional methods.
-</p><p>
-    This class extends the standard Python profiler by the functionality to
-    save the collected timing data in a timing cache, to restore these data
-    on subsequent calls, to store a profile dump to a standard filename and
-    to erase these caches.
-</p>
-<h3>Derived from</h3>
-profile.Profile
-<h3>Class Attributes</h3>
-<table>
-<tr><td>dispatch</td></tr>
-</table>
-<h3>Class Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Methods</h3>
-<table>
-<tr>
-<td><a href="#PyProfile.__init__">PyProfile</a></td>
-<td>Constructor</td>
-</tr><tr>
-<td><a href="#PyProfile.__restore">__restore</a></td>
-<td>Private method to restore the timing data from the timing cache.</td>
-</tr><tr>
-<td><a href="#PyProfile.dump_stats">dump_stats</a></td>
-<td>Public method to dump the statistics data.</td>
-</tr><tr>
-<td><a href="#PyProfile.erase">erase</a></td>
-<td>Public method to erase the collected timing data.</td>
-</tr><tr>
-<td><a href="#PyProfile.fix_frame_filename">fix_frame_filename</a></td>
-<td>Public method used to fix up the filename for a given frame.</td>
-</tr><tr>
-<td><a href="#PyProfile.save">save</a></td>
-<td>Public method to store the collected profile data.</td>
-</tr><tr>
-<td><a href="#PyProfile.trace_dispatch_call">trace_dispatch_call</a></td>
-<td>Public method used to trace function calls.</td>
-</tr>
-</table>
-<h3>Static Methods</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<a NAME="PyProfile.__init__" ID="PyProfile.__init__"></a>
-<h4>PyProfile (Constructor)</h4>
-<b>PyProfile</b>(<i>basename, timer=None, bias=None</i>)
-<p>
-        Constructor
-</p><dl>
-<dt><i>basename</i></dt>
-<dd>
-name of the script to be profiled (string)
-</dd><dt><i>timer</i></dt>
-<dd>
-function defining the timing calculation
-</dd><dt><i>bias</i></dt>
-<dd>
-calibration value (float)
-</dd>
-</dl><a NAME="PyProfile.__restore" ID="PyProfile.__restore"></a>
-<h4>PyProfile.__restore</h4>
-<b>__restore</b>(<i></i>)
-<p>
-        Private method to restore the timing data from the timing cache.
-</p><a NAME="PyProfile.dump_stats" ID="PyProfile.dump_stats"></a>
-<h4>PyProfile.dump_stats</h4>
-<b>dump_stats</b>(<i>file</i>)
-<p>
-        Public method to dump the statistics data.
-</p><dl>
-<dt><i>file</i></dt>
-<dd>
-name of the file to write to (string)
-</dd>
-</dl><a NAME="PyProfile.erase" ID="PyProfile.erase"></a>
-<h4>PyProfile.erase</h4>
-<b>erase</b>(<i></i>)
-<p>
-        Public method to erase the collected timing data.
-</p><a NAME="PyProfile.fix_frame_filename" ID="PyProfile.fix_frame_filename"></a>
-<h4>PyProfile.fix_frame_filename</h4>
-<b>fix_frame_filename</b>(<i>frame</i>)
-<p>
-        Public method used to fix up the filename for a given frame.
-</p><p>
-        The logic employed here is that if a module was loaded
-        from a .pyc file, the correct .py to operate on should be
-        in the same directory as the .pyc. This is needed because
-        the filename embedded in a .pyc (and thus readable from the
-        frame's code object) is the fully qualified path at the time
-        the .pyc was generated. If files are moved to another machine,
-        or code is shared over a network, the .pyc would still refer
-        to the .py on the original machine, breaking debugging. This
-        logic deals with that.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-the frame object
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-fixed up file name (string)
-</dd>
-</dl><a NAME="PyProfile.save" ID="PyProfile.save"></a>
-<h4>PyProfile.save</h4>
-<b>save</b>(<i></i>)
-<p>
-        Public method to store the collected profile data.
-</p><a NAME="PyProfile.trace_dispatch_call" ID="PyProfile.trace_dispatch_call"></a>
-<h4>PyProfile.trace_dispatch_call</h4>
-<b>trace_dispatch_call</b>(<i>frame, t</i>)
-<p>
-        Public method used to trace function calls.
-</p><p>
-        This is a variant of the one found in the standard Python
-        profile.py calling fix_frame_filename above.
-</p><dl>
-<dt><i>frame</i></dt>
-<dd>
-reference to the call frame
-</dd><dt><i>t</i></dt>
-<dd>
-arguments of the call
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating a handled call
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.eric6dbgstub.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,134 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.eric6dbgstub</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.eric6dbgstub</h1>
-<p>
-Module implementing a debugger stub for remote debugging.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>__scriptname</td></tr><tr><td>debugger</td></tr><tr><td>ericpath</td></tr><tr><td>modDir</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#initDebugger">initDebugger</a></td>
-<td>Module function to initialize a debugger for remote debugging.</td>
-</tr><tr>
-<td><a href="#runcall">runcall</a></td>
-<td>Module function mimicking the Pdb interface.</td>
-</tr><tr>
-<td><a href="#setScriptname">setScriptname</a></td>
-<td>Module function to set the scriptname to be reported back to the IDE.</td>
-</tr><tr>
-<td><a href="#startDebugger">startDebugger</a></td>
-<td>Module function used to start the remote debugger.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="initDebugger" ID="initDebugger"></a>
-<h2>initDebugger</h2>
-<b>initDebugger</b>(<i>kind="standard"</i>)
-<p>
-    Module function to initialize a debugger for remote debugging.
-</p><dl>
-<dt><i>kind</i></dt>
-<dd>
-type of debugger ("standard" or "threads")
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-flag indicating success (boolean)
-</dd>
-</dl><dl>
-<dt>Raises <b>ValueError</b>:</dt>
-<dd>
-raised to indicate an invalid debugger kind
-        was requested
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="runcall" ID="runcall"></a>
-<h2>runcall</h2>
-<b>runcall</b>(<i>func, *args</i>)
-<p>
-    Module function mimicking the Pdb interface.
-</p><dl>
-<dt><i>func</i></dt>
-<dd>
-function to be called (function object)
-</dd><dt><i>*args</i></dt>
-<dd>
-arguments being passed to func
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-the function result
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="setScriptname" ID="setScriptname"></a>
-<h2>setScriptname</h2>
-<b>setScriptname</b>(<i>name</i>)
-<p>
-    Module function to set the scriptname to be reported back to the IDE.
-</p><dl>
-<dt><i>name</i></dt>
-<dd>
-absolute pathname of the script (string)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="startDebugger" ID="startDebugger"></a>
-<h2>startDebugger</h2>
-<b>startDebugger</b>(<i>enableTrace=True, exceptions=True, tracePython=False, redirect=True</i>)
-<p>
-    Module function used to start the remote debugger.
-</p><dl>
-<dt><i>enableTrace=</i></dt>
-<dd>
-flag to enable the tracing function (boolean)
-</dd><dt><i>exceptions=</i></dt>
-<dd>
-flag to enable exception reporting of the IDE
-        (boolean)
-</dd><dt><i>tracePython=</i></dt>
-<dd>
-flag to enable tracing into the Python library
-        (boolean)
-</dd><dt><i>redirect=</i></dt>
-<dd>
-flag indicating redirection of stdin, stdout and
-        stderr (boolean)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/eric6.DebugClients.Python.getpass.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,85 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python.getpass</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body><a NAME="top" ID="top"></a>
-<h1>eric6.DebugClients.Python.getpass</h1>
-<p>
-Module implementing utilities to get a password and/or the current user name.
-</p><p>
-getpass(prompt) - prompt for a password, with echo turned off
-getuser() - get the user name from the environment or password database
-</p><p>
-This module is a replacement for the one found in the Python distribution. It
-provides a debugger-compatible variant of the above-mentioned functions.
-</p>
-<h3>Global Attributes</h3>
-<table>
-<tr><td>__all__</td></tr><tr><td>default_getpass</td></tr><tr><td>unix_getpass</td></tr><tr><td>win_getpass</td></tr>
-</table>
-<h3>Classes</h3>
-<table>
-<tr><td>None</td></tr>
-</table>
-<h3>Functions</h3>
-<table>
-<tr>
-<td><a href="#getpass">getpass</a></td>
-<td>Function to prompt for a password, with echo turned off.</td>
-</tr><tr>
-<td><a href="#getuser">getuser</a></td>
-<td>Function to get the username from the environment or password database.</td>
-</tr>
-</table>
-<hr /><hr />
-<a NAME="getpass" ID="getpass"></a>
-<h2>getpass</h2>
-<b>getpass</b>(<i>prompt='Password: '</i>)
-<p>
-    Function to prompt for a password, with echo turned off.
-</p><dl>
-<dt><i>prompt</i></dt>
-<dd>
-Prompt to be shown to the user (string)
-</dd>
-</dl><dl>
-<dt>Returns:</dt>
-<dd>
-Password entered by the user (string)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr /><hr />
-<a NAME="getuser" ID="getuser"></a>
-<h2>getuser</h2>
-<b>getuser</b>(<i></i>)
-<p>
-    Function to get the username from the environment or password database.
-</p><p>
-    First try various environment variables, then the password
-    database.  This works on Windows as long as USERNAME is set.
-</p><dl>
-<dt>Returns:</dt>
-<dd>
-username (string)
-</dd>
-</dl>
-<div align="right"><a href="#top">Up</a></div>
-<hr />
-</body></html>
\ No newline at end of file
--- a/Documentation/Source/index-eric6.DebugClients.Python.html	Sat Sep 03 18:02:37 2016 +0200
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,79 +0,0 @@
-<!DOCTYPE html>
-<html><head>
-<title>eric6.DebugClients.Python</title>
-<meta charset="UTF-8">
-<style>
-body {
-    background: #EDECE6;
-    margin: 0em 1em 10em 1em;
-    color: black;
-}
-
-h1 { color: white; background: #85774A; }
-h2 { color: white; background: #85774A; }
-h3 { color: white; background: #9D936E; }
-h4 { color: white; background: #9D936E; }
-    
-a { color: #BA6D36; }
-
-</style>
-</head>
-<body>
-<h1>eric6.DebugClients.Python</h1>
-<p>
-Package implementing the Python debugger.
-</p><p>
-It consists of different kinds of debug clients.
-</p>
-
-
-<h3>Modules</h3>
-<table>
-<tr>
-<td><a href="eric6.DebugClients.Python.AsyncFile.html">AsyncFile</a></td>
-<td>Module implementing an asynchronous file like socket interface for the debugger.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.AsyncIO.html">AsyncIO</a></td>
-<td>Module implementing a base class of an asynchronous interface for the debugger.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DCTestResult.html">DCTestResult</a></td>
-<td>Module implementing a TestResult derivative for the eric6 debugger.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugBase.html">DebugBase</a></td>
-<td>Module implementing the debug base class.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugClient.html">DebugClient</a></td>
-<td>Module implementing a Qt free version of the debug client.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugClientBase.html">DebugClientBase</a></td>
-<td>Module implementing a debug client base class.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugClientCapabilities.html">DebugClientCapabilities</a></td>
-<td>Module defining the debug clients capabilities.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugClientThreads.html">DebugClientThreads</a></td>
-<td>Module implementing the multithreaded version of the debug client.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugConfig.html">DebugConfig</a></td>
-<td>Module defining type strings for the different Python types.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugProtocol.html">DebugProtocol</a></td>
-<td>Module defining the debug protocol tokens.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.DebugThread.html">DebugThread</a></td>
-<td>Module implementing the debug thread.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.FlexCompleter.html">FlexCompleter</a></td>
-<td>Word completion for the eric6 shell.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.PyProfile.html">PyProfile</a></td>
-<td>Module defining additions to the standard Python profile.py.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.eric6dbgstub.html">eric6dbgstub</a></td>
-<td>Module implementing a debugger stub for remote debugging.</td>
-</tr><tr>
-<td><a href="eric6.DebugClients.Python.getpass.html">getpass</a></td>
-<td>Module implementing utilities to get a password and/or the current user name.</td>
-</tr>
-</table>
-</body></html>
\ No newline at end of file
--- a/eric6.e4p	Sat Sep 03 18:02:37 2016 +0200
+++ b/eric6.e4p	Sat Sep 03 18:12:12 2016 +0200
@@ -26,55 +26,55 @@
     <Source>DataViews/PyCoverageDialog.py</Source>
     <Source>DataViews/PyProfileDialog.py</Source>
     <Source>DataViews/__init__.py</Source>
-    <Source>DebugClients/Python/AsyncFile.py</Source>
-    <Source>DebugClients/Python/AsyncIO.py</Source>
-    <Source>DebugClients/Python/DCTestResult.py</Source>
-    <Source>DebugClients/Python/DebugBase.py</Source>
-    <Source>DebugClients/Python/DebugClient.py</Source>
-    <Source>DebugClients/Python/DebugClientBase.py</Source>
-    <Source>DebugClients/Python/DebugClientCapabilities.py</Source>
-    <Source>DebugClients/Python/DebugClientThreads.py</Source>
-    <Source>DebugClients/Python/DebugConfig.py</Source>
-    <Source>DebugClients/Python/DebugProtocol.py</Source>
-    <Source>DebugClients/Python/DebugThread.py</Source>
-    <Source>DebugClients/Python/DebugUtilities.py</Source>
-    <Source>DebugClients/Python/FlexCompleter.py</Source>
-    <Source>DebugClients/Python/PyProfile.py</Source>
-    <Source>DebugClients/Python/__init__.py</Source>
-    <Source>DebugClients/Python/coverage/__init__.py</Source>
-    <Source>DebugClients/Python/coverage/__main__.py</Source>
-    <Source>DebugClients/Python/coverage/annotate.py</Source>
-    <Source>DebugClients/Python/coverage/backunittest.py</Source>
-    <Source>DebugClients/Python/coverage/backward.py</Source>
-    <Source>DebugClients/Python/coverage/bytecode.py</Source>
-    <Source>DebugClients/Python/coverage/cmdline.py</Source>
-    <Source>DebugClients/Python/coverage/collector.py</Source>
-    <Source>DebugClients/Python/coverage/config.py</Source>
-    <Source>DebugClients/Python/coverage/control.py</Source>
-    <Source>DebugClients/Python/coverage/data.py</Source>
-    <Source>DebugClients/Python/coverage/debug.py</Source>
-    <Source>DebugClients/Python/coverage/env.py</Source>
-    <Source>DebugClients/Python/coverage/execfile.py</Source>
-    <Source>DebugClients/Python/coverage/files.py</Source>
-    <Source>DebugClients/Python/coverage/html.py</Source>
-    <Source>DebugClients/Python/coverage/misc.py</Source>
-    <Source>DebugClients/Python/coverage/monkey.py</Source>
-    <Source>DebugClients/Python/coverage/parser.py</Source>
-    <Source>DebugClients/Python/coverage/phystokens.py</Source>
-    <Source>DebugClients/Python/coverage/pickle2json.py</Source>
-    <Source>DebugClients/Python/coverage/plugin.py</Source>
-    <Source>DebugClients/Python/coverage/plugin_support.py</Source>
-    <Source>DebugClients/Python/coverage/python.py</Source>
-    <Source>DebugClients/Python/coverage/pytracer.py</Source>
-    <Source>DebugClients/Python/coverage/report.py</Source>
-    <Source>DebugClients/Python/coverage/results.py</Source>
-    <Source>DebugClients/Python/coverage/summary.py</Source>
-    <Source>DebugClients/Python/coverage/templite.py</Source>
-    <Source>DebugClients/Python/coverage/test_helpers.py</Source>
-    <Source>DebugClients/Python/coverage/version.py</Source>
-    <Source>DebugClients/Python/coverage/xmlreport.py</Source>
-    <Source>DebugClients/Python/eric6dbgstub.py</Source>
-    <Source>DebugClients/Python/getpass.py</Source>
+    <Source>DebugClients/Python2/AsyncFile.py</Source>
+    <Source>DebugClients/Python2/AsyncIO.py</Source>
+    <Source>DebugClients/Python2/DCTestResult.py</Source>
+    <Source>DebugClients/Python2/DebugBase.py</Source>
+    <Source>DebugClients/Python2/DebugClient.py</Source>
+    <Source>DebugClients/Python2/DebugClientBase.py</Source>
+    <Source>DebugClients/Python2/DebugClientCapabilities.py</Source>
+    <Source>DebugClients/Python2/DebugClientThreads.py</Source>
+    <Source>DebugClients/Python2/DebugConfig.py</Source>
+    <Source>DebugClients/Python2/DebugProtocol.py</Source>
+    <Source>DebugClients/Python2/DebugThread.py</Source>
+    <Source>DebugClients/Python2/DebugUtilities.py</Source>
+    <Source>DebugClients/Python2/FlexCompleter.py</Source>
+    <Source>DebugClients/Python2/PyProfile.py</Source>
+    <Source>DebugClients/Python2/__init__.py</Source>
+    <Source>DebugClients/Python2/coverage/__init__.py</Source>
+    <Source>DebugClients/Python2/coverage/__main__.py</Source>
+    <Source>DebugClients/Python2/coverage/annotate.py</Source>
+    <Source>DebugClients/Python2/coverage/backunittest.py</Source>
+    <Source>DebugClients/Python2/coverage/backward.py</Source>
+    <Source>DebugClients/Python2/coverage/bytecode.py</Source>
+    <Source>DebugClients/Python2/coverage/cmdline.py</Source>
+    <Source>DebugClients/Python2/coverage/collector.py</Source>
+    <Source>DebugClients/Python2/coverage/config.py</Source>
+    <Source>DebugClients/Python2/coverage/control.py</Source>
+    <Source>DebugClients/Python2/coverage/data.py</Source>
+    <Source>DebugClients/Python2/coverage/debug.py</Source>
+    <Source>DebugClients/Python2/coverage/env.py</Source>
+    <Source>DebugClients/Python2/coverage/execfile.py</Source>
+    <Source>DebugClients/Python2/coverage/files.py</Source>
+    <Source>DebugClients/Python2/coverage/html.py</Source>
+    <Source>DebugClients/Python2/coverage/misc.py</Source>
+    <Source>DebugClients/Python2/coverage/monkey.py</Source>
+    <Source>DebugClients/Python2/coverage/parser.py</Source>
+    <Source>DebugClients/Python2/coverage/phystokens.py</Source>
+    <Source>DebugClients/Python2/coverage/pickle2json.py</Source>
+    <Source>DebugClients/Python2/coverage/plugin.py</Source>
+    <Source>DebugClients/Python2/coverage/plugin_support.py</Source>
+    <Source>DebugClients/Python2/coverage/python.py</Source>
+    <Source>DebugClients/Python2/coverage/pytracer.py</Source>
+    <Source>DebugClients/Python2/coverage/report.py</Source>
+    <Source>DebugClients/Python2/coverage/results.py</Source>
+    <Source>DebugClients/Python2/coverage/summary.py</Source>
+    <Source>DebugClients/Python2/coverage/templite.py</Source>
+    <Source>DebugClients/Python2/coverage/test_helpers.py</Source>
+    <Source>DebugClients/Python2/coverage/version.py</Source>
+    <Source>DebugClients/Python2/coverage/xmlreport.py</Source>
+    <Source>DebugClients/Python2/eric6dbgstub.py</Source>
+    <Source>DebugClients/Python2/getpass.py</Source>
     <Source>DebugClients/Python3/AsyncFile.py</Source>
     <Source>DebugClients/Python3/DCTestResult.py</Source>
     <Source>DebugClients/Python3/DebugBase.py</Source>
@@ -1984,12 +1984,13 @@
     <Other>CSSs</Other>
     <Other>CodeTemplates</Other>
     <Other>DTDs</Other>
-    <Other>DebugClients/Python/coverage/doc</Other>
+    <Other>DebugClients/Python2/coverage/doc</Other>
     <Other>DebugClients/Python3/coverage/doc</Other>
     <Other>DesignerTemplates</Other>
     <Other>Dictionaries</Other>
     <Other>Documentation/Help</Other>
     <Other>Documentation/Source</Other>
+    <Other>Documentation/Source/eric6.Debugger.DebuggerInterfacePython2.html</Other>
     <Other>Documentation/eric6-plugin.odt</Other>
     <Other>Documentation/eric6-plugin.pdf</Other>
     <Other>Documentation/mod_python.odt</Other>
