author     Tekky <98614666+xtekky@users.noreply.github.com>  2024-10-22 23:32:27 +0200
committer  GitHub <noreply@github.com>  2024-10-22 23:32:27 +0200
commit     a63c18de796bd4f3e818ff170b6ff595304f95e0 (patch)
tree       844dbb9a8d3526a8b60564b78f7a19a4e0f605d9 /docs/interference-api.md
parent     Merge pull request #2282 from Karasiq/patch-1 (diff)
parent     Updated docs/providers-and-models.md g4f/models.py g4f/Provider/Upstage.py (diff)
Diffstat (limited to 'docs/interference-api.md')
-rw-r--r--  docs/interference-api.md | 110
1 file changed, 110 insertions, 0 deletions
diff --git a/docs/interference-api.md b/docs/interference-api.md
new file mode 100644
index 00000000..4050f84f
--- /dev/null
+++ b/docs/interference-api.md
@@ -0,0 +1,110 @@
+
+# G4F - Interference API Usage Guide
+
+
+## Table of Contents
+ - [Introduction](#introduction)
+ - [Running the Interference API](#running-the-interference-api)
+ - [From PyPI Package](#from-pypi-package)
+ - [From Repository](#from-repository)
+ - [Usage with OpenAI Library](#usage-with-openai-library)
+ - [Usage with Requests Library](#usage-with-requests-library)
+ - [Key Points](#key-points)
+
+## Introduction
+The Interference API lets you use G4F as a drop-in backend for existing OpenAI integrations. It acts as a proxy, translating OpenAI-compatible API requests into requests to the G4F providers.
+
+## Running the Interference API
+
+### From PyPI Package
+**You can run the Interference API directly from the G4F PyPI package:**
+```python
+from g4f.api import run_api
+
+# Start the local Interference API server
+# (the examples below assume it is reachable at http://localhost:1337)
+run_api()
+```
+
+
+
+### From Repository
+Alternatively, you can run the Interference API from the cloned repository.
+
+**Run the server with:**
+```bash
+g4f api
+```
+or
+```bash
+python -m g4f.api.run
+```
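+
+Once the server is running, you can quickly check that it is reachable. This is a minimal sketch, assuming the default `http://localhost:1337` address used throughout this guide and that the API mirrors the OpenAI-style `/v1/models` endpoint for listing the available models:
+```python
+import requests
+
+# List the models exposed by the local Interference API
+# (assumes the server is running on the default port 1337)
+response = requests.get("http://localhost:1337/v1/models")
+response.raise_for_status()
+
+for model in response.json().get("data", []):
+    print(model.get("id"))
+```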
+
+
+
+## Usage with OpenAI Library
+
+
+
+```python
+from openai import OpenAI, Stream
+
+client = OpenAI(
+    # No real OpenAI key is needed for the local Interference API
+    api_key="",
+    # Change the API base URL to the local interference API
+    base_url="http://localhost:1337/v1"
+)
+
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "write a poem about a tree"}],
+    stream=True,
+)
+
+if isinstance(response, Stream):
+    # Streaming: print tokens as they arrive
+    for chunk in response:
+        content = chunk.choices[0].delta.content
+        if content is not None:
+            print(content, end="", flush=True)
+else:
+    # Non-streaming (stream=False): the full message is returned at once
+    print(response.choices[0].message.content)
+```
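+
+For comparison, a non-streaming request returns the whole completion at once. The snippet below is a small variation on the example above: it reuses the same `client` and simply leaves out `stream=True`:
+```python
+completion = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "write a poem about a tree"}],
+)
+
+# The full message is available directly on the response object
+print(completion.choices[0].message.content)
+```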
+
+
+
+## Usage with Requests Library
+You can also send requests directly to the Interference API using the requests library.
+
+**Send a POST request to `/v1/chat/completions` with the request body containing the model and other parameters:**
+```python
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo",
+    "stream": False,
+    "messages": [
+        {"role": "user", "content": "What can you do?"}
+    ]
+}
+
+response = requests.post(url, json=body)
+response.raise_for_status()
+
+for choice in response.json().get("choices", []):
+    print(choice.get("message", {}).get("content", ""))
+```
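+
+Streaming also works over plain HTTP. The sketch below assumes the Interference API emits OpenAI-style server-sent events (`data: {...}` lines terminated by `data: [DONE]`) when `"stream": True` is set; adjust the parsing if your version of the API formats chunks differently:
+```python
+import json
+
+import requests
+
+url = "http://localhost:1337/v1/chat/completions"
+body = {
+    "model": "gpt-3.5-turbo",
+    "stream": True,
+    "messages": [
+        {"role": "user", "content": "Count to five."}
+    ]
+}
+
+with requests.post(url, json=body, stream=True) as response:
+    response.raise_for_status()
+    for line in response.iter_lines():
+        if not line:
+            continue
+        decoded = line.decode("utf-8")
+        # Each event line looks like: data: {"choices": [{"delta": {"content": "..."}}]}
+        if not decoded.startswith("data: "):
+            continue
+        payload = decoded[len("data: "):]
+        if payload.strip() == "[DONE]":
+            break
+        chunk = json.loads(payload)
+        content = chunk["choices"][0]["delta"].get("content")
+        if content:
+            print(content, end="", flush=True)
+```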
+
+
+
+## Key Points
+- The Interference API translates OpenAI API requests into G4F provider requests
+- You can run it from the PyPI package or the cloned repository
+- It supports usage with the OpenAI Python library by changing the `base_url`
+- Direct requests can be sent to the API endpoints using libraries like `requests`
+
+
+**_The Interference API allows easy integration of G4F with existing OpenAI-based applications and tools._**
+
+---
+
+[Return to Home](/)