<!--
  Source: OllamaInterface/Documentation/Source/Plugin_AI_Ollama.OllamaInterface.OllamaClient.html
  Author: Detlev Offenbach <detlev@die-offenbachs.de>
  Date: Mon, 07 Apr 2025 18:22:30 +0200
  Changeset: 69:eb9340034f26 (parent: 18:0a5b9c233a6e)
  Permissions: -rw-r--r--
  Created global tag <release-10.1.8>.
-->

<!DOCTYPE html>
<html><head>
<title>Plugin_AI_Ollama.OllamaInterface.OllamaClient</title>
<meta charset="UTF-8">
<link rel="stylesheet" href="styles.css">
</head>
<body>
<a NAME="top" ID="top"></a>
<h1>Plugin_AI_Ollama.OllamaInterface.OllamaClient</h1>
<p>
Module implementing the 'ollama' client.
</p>

<h3>Global Attributes</h3>
<table>
<tr><td>None</td></tr>
</table>

<h3>Classes</h3>
<table>
<tr>
<td><a href="#OllamaClient">OllamaClient</a></td>
<td>Class implementing the 'ollama' client.</td>
</tr>
<tr>
<td><a href="#OllamaClientState">OllamaClientState</a></td>
<td>Class defining the various client states.</td>
</tr>
</table>

<h3>Functions</h3>
<table>
<tr><td>None</td></tr>
</table>

<hr />
<hr />
<a NAME="OllamaClient" ID="OllamaClient"></a>
<h2>OllamaClient</h2>
<p>
    Class implementing the 'ollama' client.
</p>

<h3>Signals</h3>
<dl>

<dt>errorOccurred(error:str)</dt>
<dd>
emitted to indicate that a network error occurred
        while processing the request
</dd>
<dt>finished()</dt>
<dd>
emitted to indicate the completion of a request
</dd>
<dt>modelsList(modelNames:list[str])</dt>
<dd>
emitted after the list of model
        names was obtained from the 'ollama' server
</dd>
<dt>pullError(msg:str)</dt>
<dd>
emitted to indicate an error during a pull operation
</dd>
<dt>pullStatus(msg:str, id:str, total:int, completed:int)</dt>
<dd>
emitted to indicate
        the status of a pull request as reported by the 'ollama' server
</dd>
<dt>replyReceived(content:str, role:str, done:bool)</dt>
<dd>
emitted after a response
        from the 'ollama' server was received
</dd>
<dt>serverStateChanged(ok:bool, msg:str)</dt>
<dd>
emitted to indicate a change of the
        server responsiveness
</dd>
<dt>serverVersion(version:str)</dt>
<dd>
emitted after the server version was obtained
        from the 'ollama' server
</dd>
</dl>
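<p>
The <tt>replyReceived</tt> signal is emitted once per chunk while a streaming
response is in flight, with <tt>done</tt> set on the final emission. The sketch
below shows the accumulation pattern a receiver of this signal would follow; it
is a plain-Python stand-in (the class name <tt>ReplyCollector</tt> is
hypothetical), not part of the plugin, and in the real plugin the handler would
be a Qt slot connected to the signal.
</p>

```python
class ReplyCollector:
    """Collect streamed response chunks until the server reports 'done'.

    Hypothetical receiver for the replyReceived(content, role, done)
    signal; each emission carries one chunk of the assistant message.
    """

    def __init__(self):
        self.parts = []
        self.complete = False

    def onReplyReceived(self, content, role, done):
        # Append the chunk (if any) and remember when the stream ended.
        if content:
            self.parts.append(content)
        if done:
            self.complete = True

    def text(self):
        """Return the full message assembled so far."""
        return "".join(self.parts)
```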
<h3>Derived from</h3>
QObject
<h3>Class Attributes</h3>
<table>
<tr><td>None</td></tr>
</table>

<h3>Class Methods</h3>
<table>
<tr><td>None</td></tr>
</table>

<h3>Methods</h3>
<table>
<tr>
<td><a href="#OllamaClient.__init__">OllamaClient</a></td>
<td>Constructor</td>
</tr>
<tr>
<td><a href="#OllamaClient.__errorOccurred">__errorOccurred</a></td>
<td>Private method to handle a network error of the given reply.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__getHeartbeatUrl">__getHeartbeatUrl</a></td>
<td>Private method to get the current heartbeat URL.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__getServerReply">__getServerReply</a></td>
<td>Private method to send a request to the 'ollama' server and return a reply object.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__periodicHeartbeat">__periodicHeartbeat</a></td>
<td>Private slot to do a periodic check of the 'ollama' server responsiveness.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processChatResponse">__processChatResponse</a></td>
<td>Private method to process the chat response of the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processData">__processData</a></td>
<td>Private method to receive data from the 'ollama' server and process it with a given processing function or method.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processGenerateResponse">__processGenerateResponse</a></td>
<td>Private method to process the generate response of the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processModelsList">__processModelsList</a></td>
<td>Private method to process the tags response of the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processPullResponse">__processPullResponse</a></td>
<td>Private method to process a pull response of the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__processVersion">__processVersion</a></td>
<td>Private method to process the version response of the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__replyFinished">__replyFinished</a></td>
<td>Private method to handle the finished signal of the reply.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__sendRequest">__sendRequest</a></td>
<td>Private method to send a request to the 'ollama' server and handle its responses.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__sendSyncRequest">__sendSyncRequest</a></td>
<td>Private method to send a synchronous request to the 'ollama' server and return its response.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__serverNotRespondingMessage">__serverNotRespondingMessage</a></td>
<td>Private method to assemble and return a message for a non-responsive server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.__setHeartbeatTimer">__setHeartbeatTimer</a></td>
<td>Private slot to configure the heartbeat timer.</td>
</tr>
<tr>
<td><a href="#OllamaClient.abortPull">abortPull</a></td>
<td>Public method to abort an ongoing pull operation.</td>
</tr>
<tr>
<td><a href="#OllamaClient.chat">chat</a></td>
<td>Public method to request a chat completion from the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.generate">generate</a></td>
<td>Public method to request to generate a completion from the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.heartbeat">heartbeat</a></td>
<td>Public method to check whether the 'ollama' server has started and is responsive.</td>
</tr>
<tr>
<td><a href="#OllamaClient.list">list</a></td>
<td>Public method to request a list of models available locally from the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.listDetails">listDetails</a></td>
<td>Public method to request a list of models available locally from the 'ollama' server with some model details.</td>
</tr>
<tr>
<td><a href="#OllamaClient.listRunning">listRunning</a></td>
<td>Public method to request a list of running models from the 'ollama' server.</td>
</tr>
<tr>
<td><a href="#OllamaClient.pull">pull</a></td>
<td>Public method to ask the 'ollama' server to pull the given model.</td>
</tr>
<tr>
<td><a href="#OllamaClient.remove">remove</a></td>
<td>Public method to ask the 'ollama' server to delete the given model.</td>
</tr>
<tr>
<td><a href="#OllamaClient.setMode">setMode</a></td>
<td>Public method to set the client mode to local or remote operation.</td>
</tr>
<tr>
<td><a href="#OllamaClient.state">state</a></td>
<td>Public method to get the current client state.</td>
</tr>
<tr>
<td><a href="#OllamaClient.version">version</a></td>
<td>Public method to request the version from the 'ollama' server.</td>
</tr>
</table>

<h3>Static Methods</h3>
<table>
<tr><td>None</td></tr>
</table>


<a NAME="OllamaClient.__init__" ID="OllamaClient.__init__"></a>
<h4>OllamaClient (Constructor)</h4>
<b>OllamaClient</b>(<i>plugin, parent=None</i>)
<p>
        Constructor
</p>

<dl>

<dt><i>plugin</i> (PluginOllamaInterface)</dt>
<dd>
reference to the plugin object
</dd>
<dt><i>parent</i> (QObject (optional))</dt>
<dd>
reference to the parent object (defaults to None)
</dd>
</dl>
<a NAME="OllamaClient.__errorOccurred" ID="OllamaClient.__errorOccurred"></a>
<h4>OllamaClient.__errorOccurred</h4>
<b>__errorOccurred</b>(<i>errorCode, reply</i>)
<p>
        Private method to handle a network error of the given reply.
</p>

<dl>

<dt><i>errorCode</i> (QNetworkReply.NetworkError)</dt>
<dd>
error code reported by the reply
</dd>
<dt><i>reply</i> (QNetworkReply)</dt>
<dd>
reference to the network reply object
</dd>
</dl>
<a NAME="OllamaClient.__getHeartbeatUrl" ID="OllamaClient.__getHeartbeatUrl"></a>
<h4>OllamaClient.__getHeartbeatUrl</h4>
<b>__getHeartbeatUrl</b>(<i></i>)
<p>
        Private method to get the current heartbeat URL.
</p>

<dl>
<dt>Return:</dt>
<dd>
URL to be contacted by the heartbeat check
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
str
</dd>
</dl>
<a NAME="OllamaClient.__getServerReply" ID="OllamaClient.__getServerReply"></a>
<h4>OllamaClient.__getServerReply</h4>
<b>__getServerReply</b>(<i>endpoint, data=None, delete=False</i>)
<p>
        Private method to send a request to the 'ollama' server and return a reply
        object.
</p>

<dl>

<dt><i>endpoint</i> (str)</dt>
<dd>
'ollama' API endpoint to be contacted
</dd>
<dt><i>data</i> (dict (optional))</dt>
<dd>
dictionary containing the data to send to the server
            (defaults to None)
</dd>
<dt><i>delete</i> (bool (optional))</dt>
<dd>
flag indicating to send a delete request (defaults to False)
</dd>
</dl>
<dl>
<dt>Return:</dt>
<dd>
'ollama' server reply
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
QNetworkReply
</dd>
</dl>
<a NAME="OllamaClient.__periodicHeartbeat" ID="OllamaClient.__periodicHeartbeat"></a>
<h4>OllamaClient.__periodicHeartbeat</h4>
<b>__periodicHeartbeat</b>(<i></i>)
<p>
        Private slot to do a periodic check of the 'ollama' server responsiveness.
</p>

<a NAME="OllamaClient.__processChatResponse" ID="OllamaClient.__processChatResponse"></a>
<h4>OllamaClient.__processChatResponse</h4>
<b>__processChatResponse</b>(<i>response</i>)
<p>
        Private method to process the chat response of the 'ollama' server.
</p>

<dl>

<dt><i>response</i> (dict)</dt>
<dd>
dictionary containing the chat response
</dd>
</dl>
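<p>
The exact layout of the response dictionary is not reproduced here. Assuming
the shape used by the public ollama <tt>/api/chat</tt> endpoint (an assumption,
not taken from this module), extracting the fields carried by the
<tt>replyReceived</tt> signal could look like this; the function name
<tt>extractChatChunk</tt> is hypothetical.
</p>

```python
def extractChatChunk(response):
    """Pull (content, role, done) out of one chat response dictionary.

    Assumes the layout of the public ollama /api/chat endpoint:
    {"message": {"role": "assistant", "content": "..."}, "done": false}
    """
    message = response.get("message", {})
    return (
        message.get("content", ""),
        message.get("role", ""),
        response.get("done", False),
    )
```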
<a NAME="OllamaClient.__processData" ID="OllamaClient.__processData"></a>
<h4>OllamaClient.__processData</h4>
<b>__processData</b>(<i>reply, processResponse</i>)
<p>
        Private method to receive data from the 'ollama' server and process it with a
        given processing function or method.
</p>

<dl>

<dt><i>reply</i> (QNetworkReply)</dt>
<dd>
reference to the network reply object
</dd>
<dt><i>processResponse</i> (function)</dt>
<dd>
processing function
</dd>
</dl>
<a NAME="OllamaClient.__processGenerateResponse" ID="OllamaClient.__processGenerateResponse"></a>
<h4>OllamaClient.__processGenerateResponse</h4>
<b>__processGenerateResponse</b>(<i>response</i>)
<p>
        Private method to process the generate response of the 'ollama' server.
</p>

<dl>

<dt><i>response</i> (dict)</dt>
<dd>
dictionary containing the generate response
</dd>
</dl>
<a NAME="OllamaClient.__processModelsList" ID="OllamaClient.__processModelsList"></a>
<h4>OllamaClient.__processModelsList</h4>
<b>__processModelsList</b>(<i>response</i>)
<p>
        Private method to process the tags response of the 'ollama' server.
</p>

<dl>

<dt><i>response</i> (dict)</dt>
<dd>
dictionary containing the tags response
</dd>
</dl>
<a NAME="OllamaClient.__processPullResponse" ID="OllamaClient.__processPullResponse"></a>
<h4>OllamaClient.__processPullResponse</h4>
<b>__processPullResponse</b>(<i>response</i>)
<p>
        Private method to process a pull response of the 'ollama' server.
</p>

<dl>

<dt><i>response</i> (dict)</dt>
<dd>
dictionary containing the pull response
</dd>
</dl>
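<p>
The <tt>pullStatus</tt> signal reports <tt>total</tt> and <tt>completed</tt>
values, from which a progress figure can be derived. The helper below is a
hedged sketch (the name <tt>pullProgress</tt> is hypothetical) assuming the
field names of the public ollama <tt>/api/pull</tt> endpoint, where each
download step reports <tt>status</tt>, <tt>total</tt>, and <tt>completed</tt>
byte counts.
</p>

```python
def pullProgress(response):
    """Return (status, percent) for one pull status dictionary.

    Assumed field names follow the public ollama /api/pull endpoint;
    'total' may be absent or zero for non-download steps.
    """
    total = response.get("total", 0)
    completed = response.get("completed", 0)
    percent = round(completed * 100 / total, 1) if total else 0.0
    return response.get("status", ""), percent
```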
<a NAME="OllamaClient.__processVersion" ID="OllamaClient.__processVersion"></a>
<h4>OllamaClient.__processVersion</h4>
<b>__processVersion</b>(<i>response</i>)
<p>
        Private method to process the version response of the 'ollama' server.
</p>

<dl>

<dt><i>response</i> (dict)</dt>
<dd>
dictionary containing the version response
</dd>
</dl>
<a NAME="OllamaClient.__replyFinished" ID="OllamaClient.__replyFinished"></a>
<h4>OllamaClient.__replyFinished</h4>
<b>__replyFinished</b>(<i>reply</i>)
<p>
        Private method to handle the finished signal of the reply.
</p>

<dl>

<dt><i>reply</i> (QNetworkReply)</dt>
<dd>
reference to the finished network reply object
</dd>
</dl>
<a NAME="OllamaClient.__sendRequest" ID="OllamaClient.__sendRequest"></a>
<h4>OllamaClient.__sendRequest</h4>
<b>__sendRequest</b>(<i>endpoint, data=None, processResponse=None</i>)
<p>
        Private method to send a request to the 'ollama' server and handle its
        responses.
</p>

<dl>

<dt><i>endpoint</i> (str)</dt>
<dd>
'ollama' API endpoint to be contacted
</dd>
<dt><i>data</i> (dict (optional))</dt>
<dd>
dictionary containing the data to send to the server
            (defaults to None)
</dd>
<dt><i>processResponse</i> (function (optional))</dt>
<dd>
function handling the received data (defaults to None)
</dd>
</dl>
<a NAME="OllamaClient.__sendSyncRequest" ID="OllamaClient.__sendSyncRequest"></a>
<h4>OllamaClient.__sendSyncRequest</h4>
<b>__sendSyncRequest</b>(<i>endpoint, data=None, delete=False</i>)
<p>
        Private method to send a synchronous request to the 'ollama' server and
        return its response.
</p>

<dl>

<dt><i>endpoint</i> (str)</dt>
<dd>
'ollama' API endpoint to be contacted
</dd>
<dt><i>data</i> (dict (optional))</dt>
<dd>
dictionary containing the data to send to the server
            (defaults to None)
</dd>
<dt><i>delete</i> (bool (optional))</dt>
<dd>
flag indicating to send a delete request (defaults to False)
</dd>
</dl>
<dl>
<dt>Return:</dt>
<dd>
tuple containing the data sent by the 'ollama' server and the HTTP
            status code
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
tuple of (Any, int)
</dd>
</dl>
<a NAME="OllamaClient.__serverNotRespondingMessage" ID="OllamaClient.__serverNotRespondingMessage"></a>
<h4>OllamaClient.__serverNotRespondingMessage</h4>
<b>__serverNotRespondingMessage</b>(<i></i>)
<p>
        Private method to assemble and return a message for a non-responsive server.
</p>

<dl>
<dt>Return:</dt>
<dd>
error message
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
str
</dd>
</dl>
<a NAME="OllamaClient.__setHeartbeatTimer" ID="OllamaClient.__setHeartbeatTimer"></a>
<h4>OllamaClient.__setHeartbeatTimer</h4>
<b>__setHeartbeatTimer</b>(<i></i>)
<p>
        Private slot to configure the heartbeat timer.
</p>

<a NAME="OllamaClient.abortPull" ID="OllamaClient.abortPull"></a>
<h4>OllamaClient.abortPull</h4>
<b>abortPull</b>(<i></i>)
<p>
        Public method to abort an ongoing pull operation.
</p>

<a NAME="OllamaClient.chat" ID="OllamaClient.chat"></a>
<h4>OllamaClient.chat</h4>
<b>chat</b>(<i>model, messages, streaming=True</i>)
<p>
        Public method to request a chat completion from the 'ollama' server.
</p>

<dl>

<dt><i>model</i> (str)</dt>
<dd>
name of the model to be used
</dd>
<dt><i>messages</i> (list of dict)</dt>
<dd>
list of message objects
</dd>
<dt><i>streaming</i> (bool (optional))</dt>
<dd>
flag indicating to receive a streaming response (defaults to True)
</dd>
</dl>
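<p>
The <tt>messages</tt> parameter is a list of message dictionaries. The
role/content layout shown below follows the public ollama chat API and is an
assumption here; the client variable and model name in the commented call are
placeholders.
</p>

```python
# Each message is a dictionary with a "role" ("system", "user" or
# "assistant") and the message "content" (assumed layout, matching the
# public ollama chat API).
messages = [
    {"role": "system", "content": "You are a terse coding assistant."},
    {"role": "user", "content": "Explain Python list slicing in one line."},
]

# Hypothetical call; 'client' is an OllamaClient instance and the model
# name is a placeholder:
# client.chat(model="llama3.2", messages=messages, streaming=True)
```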
<a NAME="OllamaClient.generate" ID="OllamaClient.generate"></a>
<h4>OllamaClient.generate</h4>
<b>generate</b>(<i>model, prompt, suffix=None</i>)
<p>
        Public method to request to generate a completion from the 'ollama' server.
</p>

<dl>

<dt><i>model</i> (str)</dt>
<dd>
name of the model to be used
</dd>
<dt><i>prompt</i> (str)</dt>
<dd>
prompt to generate a response for
</dd>
<dt><i>suffix</i> (str (optional))</dt>
<dd>
text after the model response (defaults to None)
</dd>
</dl>
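<p>
The parameters above map naturally onto a request dictionary. The sketch below
mirrors the documented parameters; the key names follow the public ollama
<tt>/api/generate</tt> endpoint and the function name
<tt>buildGeneratePayload</tt> is hypothetical, not part of this class.
</p>

```python
def buildGeneratePayload(model, prompt, suffix=None):
    """Assemble a request dictionary for a generate call.

    Key names are an assumption based on the public ollama
    /api/generate endpoint; 'suffix' is only included when given.
    """
    payload = {"model": model, "prompt": prompt}
    if suffix is not None:
        payload["suffix"] = suffix
    return payload
```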
<a NAME="OllamaClient.heartbeat" ID="OllamaClient.heartbeat"></a>
<h4>OllamaClient.heartbeat</h4>
<b>heartbeat</b>(<i></i>)
<p>
        Public method to check whether the 'ollama' server has started and is responsive.
</p>

<dl>
<dt>Return:</dt>
<dd>
flag indicating a responsive 'ollama' server
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
bool
</dd>
</dl>
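<p>
The responsiveness check reduces to "did a request to the heartbeat URL
succeed". The sketch below isolates that decision with an injected fetch
callable so it can be shown without a running server; the name
<tt>isResponsive</tt> and the OSError convention are assumptions, not this
class's implementation.
</p>

```python
def isResponsive(fetch):
    """Report server responsiveness, mimicking a heartbeat check.

    'fetch' is a stand-in for the network call: it returns a truthy
    value (e.g. the version string) on success and raises OSError when
    the server is unreachable (an assumed convention).
    """
    try:
        return bool(fetch())
    except OSError:
        return False
```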
<a NAME="OllamaClient.list" ID="OllamaClient.list"></a>
<h4>OllamaClient.list</h4>
<b>list</b>(<i></i>)
<p>
        Public method to request a list of models available locally from the 'ollama'
        server.
</p>

<a NAME="OllamaClient.listDetails" ID="OllamaClient.listDetails"></a>
<h4>OllamaClient.listDetails</h4>
<b>listDetails</b>(<i></i>)
<p>
        Public method to request a list of models available locally from the 'ollama'
        server with some model details.
</p>

<dl>
<dt>Return:</dt>
<dd>
list of dictionaries containing the available models and related data
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
list[dict[str, Any]]
</dd>
</dl>
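<p>
A caller typically reduces the returned model dictionaries to the fields it
displays. The helper below is a hedged sketch (the name
<tt>summarizeModels</tt> is hypothetical) assuming the layout of the public
ollama <tt>/api/tags</tt> endpoint, where each entry carries at least a
<tt>name</tt> and a <tt>size</tt> in bytes.
</p>

```python
def summarizeModels(tagsResponse):
    """Reduce a tags response to (name, size-in-MiB) pairs.

    Assumed layout, following the public ollama /api/tags endpoint:
    {"models": [{"name": ..., "size": <bytes>, ...}, ...]}
    """
    return [
        (m.get("name", ""), round(m.get("size", 0) / (1024 * 1024), 1))
        for m in tagsResponse.get("models", [])
    ]
```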
<a NAME="OllamaClient.listRunning" ID="OllamaClient.listRunning"></a>
<h4>OllamaClient.listRunning</h4>
<b>listRunning</b>(<i></i>)
<p>
        Public method to request a list of running models from the 'ollama' server.
</p>

<dl>
<dt>Return:</dt>
<dd>
list of dictionaries containing the running models and related data
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
list[dict[str, Any]]
</dd>
</dl>
<a NAME="OllamaClient.pull" ID="OllamaClient.pull"></a>
<h4>OllamaClient.pull</h4>
<b>pull</b>(<i>model</i>)
<p>
        Public method to ask the 'ollama' server to pull the given model.
</p>

<dl>

<dt><i>model</i> (str)</dt>
<dd>
name of the model
</dd>
</dl>
<a NAME="OllamaClient.remove" ID="OllamaClient.remove"></a>
<h4>OllamaClient.remove</h4>
<b>remove</b>(<i>model</i>)
<p>
        Public method to ask the 'ollama' server to delete the given model.
</p>

<dl>

<dt><i>model</i> (str)</dt>
<dd>
name of the model
</dd>
</dl>
<dl>
<dt>Return:</dt>
<dd>
flag indicating success
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
bool
</dd>
</dl>
<a NAME="OllamaClient.setMode" ID="OllamaClient.setMode"></a>
<h4>OllamaClient.setMode</h4>
<b>setMode</b>(<i>local</i>)
<p>
        Public method to set the client mode to local or remote operation.
</p>

<dl>

<dt><i>local</i> (bool)</dt>
<dd>
flag indicating to connect to a locally started ollama server
</dd>
</dl>
<a NAME="OllamaClient.state" ID="OllamaClient.state"></a>
<h4>OllamaClient.state</h4>
<b>state</b>(<i></i>)
<p>
        Public method to get the current client state.
</p>

<dl>
<dt>Return:</dt>
<dd>
current client state
</dd>
</dl>
<dl>
<dt>Return Type:</dt>
<dd>
OllamaClientState
</dd>
</dl>
<a NAME="OllamaClient.version" ID="OllamaClient.version"></a>
<h4>OllamaClient.version</h4>
<b>version</b>(<i></i>)
<p>
        Public method to request the version from the 'ollama' server.
</p>

<div align="right"><a href="#top">Up</a></div>
<hr />
<hr />
<a NAME="OllamaClientState" ID="OllamaClientState"></a>
<h2>OllamaClientState</h2>
<p>
    Class defining the various client states.
</p>

<h3>Derived from</h3>
enum.Enum
<h3>Class Attributes</h3>
<table>
<tr><td>Finished</td></tr>
<tr><td>Receiving</td></tr>
<tr><td>Requesting</td></tr>
<tr><td>Waiting</td></tr>
</table>

<h3>Class Methods</h3>
<table>
<tr><td>None</td></tr>
</table>

<h3>Methods</h3>
<table>
<tr><td>None</td></tr>
</table>

<h3>Static Methods</h3>
<table>
<tr><td>None</td></tr>
</table>


<div align="right"><a href="#top">Up</a></div>
<hr />
</body></html>
