author     Heiner Lohaus <hlohaus@users.noreply.github.com>  2024-02-21 17:02:54 +0100
committer  Heiner Lohaus <hlohaus@users.noreply.github.com>  2024-02-21 17:02:54 +0100
commit     0a0698c7f3fa117e95eaf9b017e4122d15ef4566 (patch)
tree       8d2997750aef7bf9ae0f9ec63410279119fffb69 /docs/interference.md
parent     Merge pull request #1603 from xtekky/index (diff)
Diffstat (limited to '')
-rw-r--r--  docs/interference.md  69
1 files changed, 69 insertions, 0 deletions
diff --git a/docs/interference.md b/docs/interference.md
new file mode 100644
index 00000000..b140f66a
--- /dev/null
+++ b/docs/interference.md
@@ -0,0 +1,69 @@
+### Interference OpenAI-proxy API
+
+#### Run interference API from the PyPI package
+
+```python
+from g4f.api import run_api
+
+run_api()
+```
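+
+`run_api()` starts the server and blocks until it is stopped. If the same script should also act as a client, the server can be run in a separate process; the sketch below is only an illustration (the process handling is not part of g4f) and assumes the default address `http://localhost:1337` used by the examples further down:
+
+```python
+from multiprocessing import Process
+
+from g4f.api import run_api
+
+if __name__ == "__main__":
+    # Run the blocking server in a child process so this script stays free
+    # to send requests to it (e.g. with the client examples below).
+    server = Process(target=run_api)
+    server.start()
+    # ... send requests to http://localhost:1337/v1 here ...
+    server.terminate()  # stop the API when done
+    server.join()
+```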
+
+#### Run interference API from the repository
+
+Run the server:
+
+```sh
+g4f api
+```
+
+or
+
+```sh
+python -m g4f.api.run
+```
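+
+Both commands start the same OpenAI-compatible server, which the examples below assume is listening on `http://localhost:1337`. When the startup is scripted, a small readiness check avoids sending requests before the port is open; `wait_for_api` below is a hypothetical helper that uses only the standard library:
+
+```python
+import socket
+import time
+
+def wait_for_api(host="localhost", port=1337, timeout=30):
+    """Poll the TCP port until the interference API accepts connections."""
+    deadline = time.time() + timeout
+    while time.time() < deadline:
+        try:
+            with socket.create_connection((host, port), timeout=1):
+                return True
+        except OSError:
+            time.sleep(0.5)
+    return False
+
+if __name__ == "__main__":
+    print("API reachable:", wait_for_api())
+```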
+
+Then point the official `openai` Python client at the local server:
+
+```python
+from openai import OpenAI
+
+client = OpenAI(
+    # A real key is not needed for the local interference API
+    api_key="",
+    # Change the API base URL to the local interference API
+    base_url="http://localhost:1337/v1"
+)
+
+stream = True
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "write a poem about a tree"}],
+    stream=stream,
+)
+
+if not stream:
+    # Not streaming: a single ChatCompletion object is returned
+    print(response.choices[0].message.content)
+else:
+    # Streaming: iterate over the chunks and print the incremental content
+    for token in response:
+        content = token.choices[0].delta.content
+        if content is not None:
+            print(content, end="", flush=True)
+```
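+
+If the complete reply is needed as a single string, the streamed chunks can simply be collected; a small sketch against the same local endpoint:
+
+```python
+from openai import OpenAI
+
+# Same local endpoint as above; the key is only a placeholder.
+client = OpenAI(api_key="", base_url="http://localhost:1337/v1")
+
+chunks = []
+stream = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "write a poem about a tree"}],
+    stream=True,
+)
+for chunk in stream:
+    delta = chunk.choices[0].delta.content
+    if delta:
+        chunks.append(delta)
+
+full_reply = "".join(chunks)
+print(full_reply)
+```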
+
+#### API usage (POST)
+Send a POST request to `/v1/chat/completions` with a body containing the `model` field. This example uses Python with the `requests` library:
+```python
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo-16k",
+    "stream": False,
+    "messages": [
+        {"role": "user", "content": "What can you do?"}
+    ]
+}
+
+choices = requests.post(url, json=body).json().get('choices', [])
+
+for choice in choices:
+    print(choice.get('message', {}).get('content', ''))
+```
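+
+Streaming also works over plain HTTP, assuming the interference API emits OpenAI-style server-sent events (`data: ...` lines ending with `data: [DONE]`); a sketch with `requests`:
+
+```python
+import json
+
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo-16k",
+    "stream": True,
+    "messages": [
+        {"role": "user", "content": "What can you do?"}
+    ]
+}
+
+with requests.post(url, json=body, stream=True) as response:
+    for line in response.iter_lines():
+        if not line:
+            continue
+        # Each event is expected to look like: data: {...json chunk...}
+        data = line.decode("utf-8")
+        if data.startswith("data: "):
+            data = data[len("data: "):]
+        if data == "[DONE]":
+            break
+        chunk = json.loads(data)
+        content = chunk["choices"][0]["delta"].get("content")
+        if content:
+            print(content, end="", flush=True)
+```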
+
+[Return to Home](/)
\ No newline at end of file