From e79c8b01f58d21502c962f38c804bf81196f89fb Mon Sep 17 00:00:00 2001
From: kqlio67
Date: Tue, 29 Oct 2024 22:03:05 +0200
Subject: Update (docs/async_client.md docs/client.md docs/interference-api.md
 g4f/client/client.py)

---
 docs/async_client.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

(limited to 'docs/async_client.md')

diff --git a/docs/async_client.md b/docs/async_client.md
index 357b0d86..0719a463 100644
--- a/docs/async_client.md
+++ b/docs/async_client.md
@@ -3,7 +3,7 @@ The G4F async client API is a powerful asynchronous interface for interacting wi
 
 ## Compatibility Note
 
-The G4F async client API is designed to be compatible with the OpenAI and Anthropic API, making it easy for developers familiar with OpenAI's or Anthropic's interface to transition to G4F.
+The G4F async client API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F.
 
 ## Table of Contents
 - [Introduction](#introduction)
@@ -57,7 +57,6 @@ client = Client(
 **Here’s an improved example of creating chat completions:**
 ```python
 response = await async_client.chat.completions.create(
-    system="You are a helpful assistant.",
     model="gpt-3.5-turbo",
     messages=[
         {
@@ -70,7 +69,6 @@ response = await async_client.chat.completions.create(
 ```
 
 **This example:**
-   - Sets a system message to define the assistant's role
    - Asks a specific question `Say this is a test`
    - Configures various parameters like temperature and max_tokens for more control over the output
    - Disables streaming for a complete response
--
cgit v1.2.3
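
Below the patch, for reference, is a minimal self-contained sketch of the OpenAI-compatible call that the updated `docs/async_client.md` example describes. The client construction (`AsyncClient` from `g4f.client`), the illustrative `temperature`/`max_tokens` values, and the system-role message are assumptions not shown in this hunk.

```python
import asyncio

from g4f.client import AsyncClient  # assumed import path for the async client


async def main() -> None:
    client = AsyncClient()

    # The standalone `system=` keyword was removed by this patch; an
    # OpenAI-compatible way to supply a system prompt is an extra entry
    # in `messages` (shown here as an assumption, not part of the patch).
    response = await client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say this is a test"},
        ],
        temperature=0.7,  # illustrative value
        max_tokens=100,   # illustrative value
        stream=False,     # streaming disabled for a complete response
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```

The `response.choices[0].message.content` accessor follows the OpenAI response shape that the G4F client mirrors.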