From b0f15e25d580683600f81d93754be09c2ff849da Mon Sep 17 00:00:00 2001 From: MiriamScharnke Date: Mon, 29 Jul 2024 16:12:32 +0100 Subject: [PATCH 1/4] Update doc/functions/openAIChat.md Co-authored-by: Christopher Creutzig <89011131+ccreutzi@users.noreply.github.com> --- doc/functions/openAIChat.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index aa368b5..7694654 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -34,7 +34,7 @@ To connect to the OpenAI API, you need a valid API key. For information on how t `chat = openAIChat(___,APIKey=key)` uses the specified API key. -`chat = openAIChat(___,``Name=Value``)` specifies additional options using one or more name\-value arguments. +`chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments. ## Input Arguments ### `systemPrompt` \- System prompt From ada0ff810338786e6e035e39aeb8ffb51d74f5dd Mon Sep 17 00:00:00 2001 From: Miriam Scharnke Date: Tue, 30 Jul 2024 14:19:09 +0100 Subject: [PATCH 2/4] Update openAIChat documentation. --- doc/functions/openAIChat.md | 36 +++++++++++++++++++++--------------- 1 file changed, 21 insertions(+), 15 deletions(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index 7694654..f63bef7 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -19,7 +19,7 @@ Connect to OpenAI™ Chat Completion API ## Description -Connect to the OpenAI™ Chat Completion API to generate text using large language models developed by OpenAI. +Connect to the OpenAI Chat Completion API to generate text using large language models developed by OpenAI. To connect to the OpenAI API, you need a valid API key. For information on how to obtain an API key, see [https://platform.openai.com/docs/quickstart](https://platform.openai.com/docs/quickstart). 
@@ -56,7 +56,7 @@ character vector | string scalar OpenAI API key to access OpenAI APIs such as ChatGPT. -Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. For more information, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md). +Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. For more information, see [OpenAI API](../OpenAI.md). ### `ModelName` \- Model name @@ -66,7 +66,7 @@ Instead of using the `APIKey` name\-value argument, you can also set the environ Name of the OpenAI model to use for text or image generation. -For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md). +For a list of currently supported models, see [OpenAI API](../OpenAI.md). ### `Temperature` \- Temperature @@ -152,7 +152,7 @@ If you set the response format to `"json"`, then the generated output is a JSON - `ModelName="gpt-4"` - `ModelName="gpt-4-0613"` -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). 
# Properties ### `SystemPrompt` \- System prompt @@ -176,7 +176,7 @@ The system prompt is a natural\-language description that provides the framework Name of the OpenAI model to use for text or image generation. -For a list of currently supported models, see [https://github.com/matlab\-deep\-learning/llms\-with\-matlab/blob/main/doc/OpenAI.md](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/doc/OpenAI.md). +For a list of currently supported models, see [OpenAI API](../OpenAI.md). ### `Temperature` \- Temperature @@ -190,7 +190,7 @@ Temperature value for controlling the randomness of the output. Higher temperatu `1` (default) | numeric scalar between `0` and `1` -Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity. +Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity. ### `StopSequences` \- Stop sequences @@ -246,13 +246,20 @@ Format of generated output. If the response format is `"text"`, then the generated output is a string. -If the response format is `"json"`, then the generated output is a JSON (\*.json) file. This option is not supported for these models: +If the response format is `"json"`, then the generated output is a string containing JSON encoded data. + + +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`. + + +For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). 
+ + +The JSON response format is not supported for these models: - `ModelName="gpt-4"` - `ModelName="gpt-4-0613"` -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). - ### `FunctionNames` \- Names of OpenAI functions to use during output generation string array @@ -280,10 +287,9 @@ chat = openAIChat(StreamFun=sf); generate(chat,"Why is a raven like a writing desk?") ``` # See Also -- [Create Simple Chat Bot](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/CreateSimpleChatBot.md) -- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md) -- [Analyze Scientific Papers Using Function Calls](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeScientificPapersUsingFunctionCalls.md) -- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](https://github.com/matlab-deep-learning/llms-with-matlab/blob/main/examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) - -Copyright 2024 The MathWorks, Inc. +- [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md) +- [Process Generated Text in Real Time Using ChatGPT in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md) +- [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md) +- [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) +Copyright 2024 The MathWorks, Inc. 
\ No newline at end of file From 3eb22c50157923e408381da864ffc46a0087b7f0 Mon Sep 17 00:00:00 2001 From: Miriam Scharnke Date: Fri, 9 Aug 2024 13:26:26 +0100 Subject: [PATCH 3/4] Add newly created openAIChat.md to documentation. --- doc/functions/openAIChat.md | 209 ++++++++++++++---------------------- 1 file changed, 82 insertions(+), 127 deletions(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index f63bef7..39154a8 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -1,7 +1,7 @@ -# openAIChat +# openAIChat— -Connect to OpenAI™ Chat Completion API +Connect to OpenAI® Chat Completion API from MATLAB® # Creation ## Syntax @@ -36,19 +36,19 @@ To connect to the OpenAI API, you need a valid API key. For information on how t `chat = openAIChat(___,Name=Value)` specifies additional options using one or more name\-value arguments. -## Input Arguments -### `systemPrompt` \- System prompt +# Input Arguments +### `systemPrompt` — System prompt character vector | string scalar -The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc. +Specify the system prompt and set the `SystemPrompt` property. The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc. **Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter." ## Name\-Value Arguments -### `APIKey` \- OpenAI API key +### `APIKey` — OpenAI API key character vector | string scalar @@ -56,9 +56,23 @@ character vector | string scalar OpenAI API key to access OpenAI APIs such as ChatGPT. -Instead of using the `APIKey` name\-value argument, you can also set the environment variable OPEN\_API\_KEY. 
For more information, see [OpenAI API](../OpenAI.md).
+
+### `Tools` — OpenAI functions to use during output generation
+
+`openAIFunction` object | array of `openAIFunction` objects
+
+
+Custom functions used by the model to collect or generate additional data.
+
+
+For an example, see [Analyze Scientific Papers Using ChatGPT Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md).
+
+# Properties Settable at Construction
+
+Optionally specify these properties at construction using name\-value arguments. Specify `PropertyName1=PropertyValue1,...,PropertyNameN=PropertyValueN`, where `PropertyName` is the property name and `PropertyValue` is the corresponding value.
+
+### `ModelName` — Model name

`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ...

@@ -68,30 +82,23 @@ Name of the OpenAI model to use for text or image generation.

For a list of currently supported models, see [OpenAI API](../OpenAI.md).

-### `Temperature` \- Temperature
+### `Temperature` — Temperature

`1` (default) | numeric scalar between `0` and `2`


Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in fully deterministic output.

-### `TopP` \- Top probability mass
+### `TopP` — Top probability mass

`1` (default) | numeric scalar between `0` and `1`


Top probability mass for controlling the diversity of the generated output. Higher top probability mass corresponds to higher diversity.

-### `Tools` \- OpenAI functions to use during output generation
-
-`openAIFunction` object | array of `openAIFunction` objects
-
-
-Custom functions used by the model to process its input and output.
+### `StopSequences` — Stop sequences -### `StopSequences` \- Stop sequences - -`""` (default) | string array with between `1` and `4` elements +`[]` (default) | string array with between `0` and `4` elements Sequences that stop generation of tokens. @@ -99,7 +106,7 @@ Sequences that stop generation of tokens. **Example:** `["The end.","And that is all she wrote."]` -### `PresencePenalty` \- Presence penalty +### `PresencePenalty` — Presence penalty `0` (default) | numeric scalar between `-2` and `2` @@ -109,7 +116,7 @@ Penalty value for using a token that has already been used at least once in the The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument. -### `FrequencyPenalty` \- Frequency penalty +### `FrequencyPenalty` — Frequency penalty `0` (default) | numeric scalar between `-2` and `2` @@ -117,16 +124,19 @@ The presence penalty is independent of the number of incidents of a token, so lo Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens. -The frequence penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument. +The frequency penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument. -### `TimeOut` \- Connection timeout in seconds +### `TimeOut` — Connection timeout in seconds `10` (default) | nonnegative numeric scalar +After construction, this property is read\-only. 
+ + If the OpenAI server does not respond within the timeout, then the function throws an error. -### `StreamFun` \- Custom streaming function +### `StreamFun` — Custom streaming function function handle @@ -134,133 +144,53 @@ function handle Specify a custom streaming function to process the generated output token by token as it is being generated, rather than having to wait for the end of the generation. For example, you can use this function to print the output as it is generated. -**Example:** `@(token) fprint("%s",token)` - -### `ResponseFormat` \- Response format - -`"text"` (default) | `"json"` - - -Format of generated output. - - -If you set the response format to `"text"`, then the generated output is a string. - - -If you set the response format to `"json"`, then the generated output is a JSON (\*.json) file. This option is not supported for these models: - -- `ModelName="gpt-4"` -- `ModelName="gpt-4-0613"` - -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). - -# Properties -### `SystemPrompt` \- System prompt - -character vector | string scalar - - -This property is read\-only. - - -The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc. - - -**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter." - -### `ModelName` \- Model name - -`"gpt-4o-mini"` (default) | `"gpt-4"` | `"gpt-3.5-turbo"` | `"dall-e-2"` | ... - - -Name of the OpenAI model to use for text or image generation. - - -For a list of currently supported models, see [OpenAI API](../OpenAI.md). 
- -### `Temperature` \- Temperature - -`1` (default) | numeric scalar between `0` and `2` - - -Temperature value for controlling the randomness of the output. Higher temperature increases the randomness of the output. Setting the temperature to `0` results in no randomness. - -### `TopP` \- Top probability mass - -`1` (default) | numeric scalar between `0` and `1` - - -Top probability mass for controlling the diversity of the generated output using top-p sampling. Higher top probability mass corresponds to higher diversity. - -### `StopSequences` \- Stop sequences - -`""` (default) | string array with between `1` and `4` elements - - -Sequences that stop generation of tokens. - - -**Example:** `["The end.","And that is all she wrote."]` +For an example, see [Process Generated Text in Real Time by Using ChatGPT™ in Streaming Mode](../../examples/ProcessGeneratedTextinRealTimebyUsingChatGPTinStreamingMode.md). -### `PresencePenalty` \- Presence penalty -`0` (default) | numeric scalar between `-2` and `2` +**Example:** `@(token) fprint("%s",token)` +### `ResponseFormat` — Response format -Penalty value for using a token that has already been used at least once in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens. +`"text"` (default) | `"json"` -The presence penalty is independent of the number of incidents of a token, so long as it has been used at least once. To increase the penalty for every additional time a token is generated, use the `FrequencyPenalty` name\-value argument. +After construction, this property is read\-only. -### `FrequencyPenalty` \- Frequency penalty -`0` (default) | numeric scalar between `-2` and `2` +Format of generated output. -Penalty value for repeatedly using the same token in the generated output. Higher values reduce the repetition of tokens. Negative values increase the repetition of tokens. 
+If you set the response format to `"text"`, then the generated output is a string. -The frequence penalty increases with every instance of a token in the generated output. To use a constant penalty for a repeated token, independent of the number of instances that token is generated, use the `PresencePenalty` name\-value argument. +If you set the response format to `"json"`, then the generated output is a string containing JSON encoded data. -### `TimeOut` \- Connection timeout in seconds -`10` (default) | nonnegative numeric scalar +To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`. -This property is read\-only. +For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). -If the OpenAI server does not respond within the timeout, then the function throws an error. +The JSON response format is not supported for these models: -### `ResponseFormat` \- Response format +- `ModelName="gpt-4"` +- `ModelName="gpt-4-0613"` +# Other Properties +### `SystemPrompt` — System prompt -`"text"` (default) | `"json"` +character vector | string scalar This property is read\-only. -Format of generated output. - - -If the response format is `"text"`, then the generated output is a string. - - -If the response format is `"json"`, then the generated output is a string containing JSON encoded data. - - -To configure the format of the generated JSON file, describe the format using natural language and provide it to the model either in the system prompt or as a user message. The prompt or message describing the format must contain the word `"json"` or `"JSON"`. - - -For an example, see [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md). 
- +The system prompt is a natural\-language description that provides the framework in which a large language model generates its responses. The system prompt can include instructions about tone, communications style, language, etc. -The JSON response format is not supported for these models: -- `ModelName="gpt-4"` -- `ModelName="gpt-4-0613"` +**Example**: "You are a helpful assistant who provides answers to user queries in iambic pentameter." -### `FunctionNames` \- Names of OpenAI functions to use during output generation +### `FunctionNames` — Names of OpenAI functions to use during output generation string array @@ -272,19 +202,43 @@ Names of the custom functions specified in the `Tools` name\-value argument. # Object Functions -`generate` \- Generate text +[`generate`](generate.md) — Generate output from large language models # Examples ## Create OpenAI Chat ```matlab -modelName = "gpt-3.5-turbo"; +loadenv(".env") +modelName = "gpt-4o-mini"; chat = openAIChat("You are a helpful assistant awaiting further instructions.",ModelName=modelName) ``` + +```matlabTextOutput +chat = + openAIChat with properties: + + ModelName: "gpt-4o-mini" + Temperature: 1 + TopP: 1 + StopSequences: [0x0 string] + TimeOut: 10 + SystemPrompt: {[1x1 struct]} + ResponseFormat: "text" + PresencePenalty: 0 + FrequencyPenalty: 0 + FunctionNames: [] + +``` ## Generate and Stream Text ```matlab +loadenv(".env") sf = @(x) fprintf("%s",x); chat = openAIChat(StreamFun=sf); -generate(chat,"Why is a raven like a writing desk?") +generate(chat,"Why is a raven like a writing desk?",MaxNumTokens=50) +``` + +```matlabTextOutput +The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book +ans = "The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll's "Alice's Adventures in Wonderland." 
Initially posed by the Mad Hatter during the tea party scene, the question is often interpreted as nonsense, in line with the book" ``` # See Also - [Create Simple Chat Bot](../../examples/CreateSimpleChatBot.md) @@ -292,4 +246,5 @@ generate(chat,"Why is a raven like a writing desk?") - [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md) - [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) -Copyright 2024 The MathWorks, Inc. \ No newline at end of file +*Copyright 2024 The MathWorks, Inc.* + From 51eecb6e7b8f8000af7ed8e4d842059ec33bc4e8 Mon Sep 17 00:00:00 2001 From: Miriam Scharnke Date: Fri, 9 Aug 2024 13:32:01 +0100 Subject: [PATCH 4/4] Remove spurious emdash. --- doc/functions/openAIChat.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/doc/functions/openAIChat.md b/doc/functions/openAIChat.md index 39154a8..6ab95b0 100644 --- a/doc/functions/openAIChat.md +++ b/doc/functions/openAIChat.md @@ -1,5 +1,5 @@ -# openAIChat— +# openAIChat Connect to OpenAI® Chat Completion API from MATLAB® @@ -246,5 +246,4 @@ ans = "The phrase "Why is a raven like a writing desk?" comes from Lewis Carroll - [Analyze Scientific Papers Using Function Calls](../../examples/AnalyzeScientificPapersUsingFunctionCalls.md) - [Analyze Sentiment in Text Using ChatGPT in JSON Mode](../../examples/AnalyzeSentimentinTextUsingChatGPTinJSONMode.md) -*Copyright 2024 The MathWorks, Inc.* - +*Copyright 2024 The MathWorks, Inc.* \ No newline at end of file