Azure OpenAI: add capabilities up through 2023-07-01-preview (#37539)
* squashed commit for omnibus changes

* restore stringent sig= sanitizers for image urls

* Update to consume content_filter_results in Completions; test support

* snap to official .tsp and add image generation model factory support

* swap chat tests to gpt-4 for contemporary Azure Chat Functions support

* codegen update: properly address optionality of filter categories for function_call responses via tsp

* add missing test recording

* Incorporate PR feedback using a new PR typespec commit hash. Thanks, Jose!

* merge #37536 changes

* PR feedback (+ analyzer accepts for var/not-var)

* Missing Role on modified test

* Add one more snippet collection, this time for Chat Functions

* proactive removal of GetStream helper; changelog; cleanup of unused code

* snap to latest merged .tsp (matching previous PR commit, should be no change whatsoever)

* PR feedback. Thank you yet again, Jose!
trrwilson authored Jul 19, 2023
1 parent 0782ff3 commit 2d27d7e
Showing 139 changed files with 35,688 additions and 55,713 deletions.
11 changes: 10 additions & 1 deletion sdk/openai/Azure.AI.OpenAI/CHANGELOG.md
@@ -4,10 +4,19 @@

### Features Added

- Introduced model factory `Azure.AI.OpenAI.AzureOpenAIModelFactory` for mocking.
- DALL-E image generation is now supported. See [the Azure OpenAI quickstart](https://learn.microsoft.com/azure/cognitive-services/openai/dall-e-quickstart) for conceptual background and detailed setup instructions.
- `OpenAIClient` gains a new `GetImageGenerations` method that accepts an `ImageGenerationOptions` and returns an `ImageGenerations` response object, which encapsulates the temporary storage locations of the generated images for later retrieval.
- In contrast to other capabilities, DALL-E image generation does not require explicit creation or specification of a deployment or model, so its surface does not include that concept.
- Functions for chat completions are now supported: see [OpenAI's blog post on the topic](https://openai.com/blog/function-calling-and-other-api-updates) for much more detail.
- A list of `FunctionDefinition` objects may be populated on `ChatCompletionsOptions` via its `Functions` property. Each definition includes a name and description together with a serialized JSON Schema representation of its parameters; these parameters can be generated easily via `BinaryData.FromObjectAsJson` with dynamic objects -- see the README for example usage.
- **NOTE**: Chat Functions requires a minimum of the `-0613` model versions for `gpt-4` and `gpt-3.5-turbo`/`gpt-35-turbo`. Please ensure you're using these later model versions, as Functions are not supported with older model revisions. For Azure OpenAI, you can update a deployment's model version or create a new model deployment with an updated version via the Azure AI Studio interface, also accessible through the Azure Portal.
- (Azure OpenAI specific) Completions and Chat Completions responses now include embedded content filter annotations for prompts and responses.

### Breaking Changes

- `ChatMessage`'s one-parameter constructor has been replaced with a no-parameter constructor. Please replace any hybrid construction with one of the two supported styles: either rely entirely on property setters with the no-parameter constructor, or rely entirely on constructor parameters (for example, the `(ChatRole, string)` constructor shown in the README), as sketched below.
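
A minimal sketch of the two supported styles (the settable `Role` and `Content` properties are implied by the note above):

```C#
// Option 1: no-parameter constructor, relying entirely on property setters
var propertyStyleMessage = new ChatMessage()
{
    Role = ChatRole.User,
    Content = "What is the weather like in Boston?",
};

// Option 2: relying entirely on constructor parameters, as used in the README snippets
var constructorStyleMessage = new ChatMessage(ChatRole.User, "What is the weather like in Boston?");
```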

### Bugs Fixed

### Other Changes
123 changes: 123 additions & 0 deletions sdk/openai/Azure.AI.OpenAI/README.md
@@ -221,6 +221,129 @@ await foreach (StreamingChatChoice choice in streamingChatCompletions.GetChoices
}
```

### Use Chat Functions

Chat Functions allow a caller of Chat Completions to define capabilities that the model can use to extend its
functionality into external tools and data sources.

You can read more about Chat Functions on OpenAI's blog: https://openai.com/blog/function-calling-and-other-api-updates

**NOTE**: Chat Functions require the `-0613` and later model versions of `gpt-4` and `gpt-3.5-turbo`. They are not
available with older versions of the models.

To use Chat Functions, you first define the function you'd like the model to be able to call when appropriate. Using
the example from the blog post linked above:

```C# Snippet:ChatFunctions:DefineFunction
var getWeatherFunctionDefinition = new FunctionDefinition()
{
    Name = "get_current_weather",
    Description = "Get the current weather in a given location",
    Parameters = BinaryData.FromObjectAsJson(
        new
        {
            Type = "object",
            Properties = new
            {
                Location = new
                {
                    Type = "string",
                    Description = "The city and state, e.g. San Francisco, CA",
                },
                Unit = new
                {
                    Type = "string",
                    Enum = new[] { "celsius", "fahrenheit" },
                },
            },
            Required = new[] { "location" },
        },
        new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }),
};
```

With the function defined, it can then be used in a Chat Completions request via its options. Because each request is
stateless, function data is built up across multiple calls, so we maintain a list of chat messages as a form of
conversation history.

```C# Snippet:ChatFunctions:RequestWithFunctions
var conversationMessages = new List<ChatMessage>()
{
    new(ChatRole.User, "What is the weather like in Boston?"),
};

var chatCompletionsOptions = new ChatCompletionsOptions();
foreach (ChatMessage chatMessage in conversationMessages)
{
    chatCompletionsOptions.Messages.Add(chatMessage);
}
chatCompletionsOptions.Functions.Add(getWeatherFunctionDefinition);

Response<ChatCompletions> response = await client.GetChatCompletionsAsync(
    "gpt-35-turbo-0613",
    chatCompletionsOptions);
```

If the model determines that it should call a Chat Function, a finish reason of 'FunctionCall' will be populated on
the choice and details will be present in the response message's `FunctionCall` property. Usually, the name of the
function call will be one that was provided and the arguments will be a populated JSON document matching the schema
included in the `FunctionDefinition` used; it is **not guaranteed** that this data is valid or even properly formatted,
however, so validation and error checking should always accompany function call processing.
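
Such a check might look like the following minimal sketch (an illustration assuming `System.Text.Json` and the
`location`/`unit` argument names from the schema above; the helper name is hypothetical):

```C#
using System.Text.Json;

static bool TryParseWeatherArguments(string unvalidatedArguments, out string location, out string unit)
{
    location = null;
    unit = "celsius"; // 'unit' is not in the schema's required list, so default it
    try
    {
        using JsonDocument argumentsDocument = JsonDocument.Parse(unvalidatedArguments);
        JsonElement root = argumentsDocument.RootElement;
        // 'location' is required by the schema; treat its absence as a failed call
        if (!root.TryGetProperty("location", out JsonElement locationElement)
            || locationElement.ValueKind != JsonValueKind.String)
        {
            return false;
        }
        location = locationElement.GetString();
        if (root.TryGetProperty("unit", out JsonElement unitElement)
            && unitElement.ValueKind == JsonValueKind.String)
        {
            unit = unitElement.GetString();
        }
        return true;
    }
    catch (JsonException)
    {
        // The model emitted arguments that are not valid JSON
        return false;
    }
}
```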

To resolve the function call and continue the user-facing interaction, process the argument payload as needed and then
serialize appropriate response data into a new message with `ChatRole.Function`. Then make a new request with all of
the messages so far -- the initial `User` message, the first response's `FunctionCall` message, and the resolving
`Function` message generated in reply to the function call -- so the model can use the data to better formulate a chat
completions response.

Note that the function call response you provide does not need to follow any schema provided in the initial call. The
model will infer how to use the response data from the context provided by its names and fields.

```C# Snippet:ChatFunctions:HandleFunctionCall
ChatChoice responseChoice = response.Value.Choices[0];
if (responseChoice.FinishReason == CompletionsFinishReason.FunctionCall)
{
    // Include the FunctionCall message in the conversation history
    conversationMessages.Add(responseChoice.Message);

    if (responseChoice.Message.FunctionCall.Name == "get_current_weather")
    {
        // Validate and process the JSON arguments for the function call
        string unvalidatedArguments = responseChoice.Message.FunctionCall.Arguments;
        var functionResultData = (object)null; // = GetYourFunctionResultData(unvalidatedArguments);
        // Here, we substitute example data as if it had been returned from GetYourFunctionResultData
        functionResultData = new
        {
            Temperature = 31,
            Unit = "celsius",
        };
        // Serialize the result data from the function into a new chat message with the 'Function' role,
        // then add it to the messages after the first User message and initial response FunctionCall
        var functionResponseMessage = new ChatMessage(
            ChatRole.Function,
            JsonSerializer.Serialize(
                functionResultData,
                new JsonSerializerOptions() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }));
        conversationMessages.Add(functionResponseMessage);
        // Now make a new request using all three messages in conversationMessages
    }
}
```
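
With the resolving `Function` message added, a follow-up request supplies the full history so the model can produce a
final, user-facing answer. A sketch reusing the client, options, and messages from the snippets above:

```C#
// Build a new request from the accumulated history: the original User message,
// the assistant's FunctionCall message, and the resolving Function message
var followUpOptions = new ChatCompletionsOptions();
foreach (ChatMessage chatMessage in conversationMessages)
{
    followUpOptions.Messages.Add(chatMessage);
}
followUpOptions.Functions.Add(getWeatherFunctionDefinition);

Response<ChatCompletions> followUpResponse = await client.GetChatCompletionsAsync(
    "gpt-35-turbo-0613",
    followUpOptions);
Console.WriteLine(followUpResponse.Value.Choices[0].Message.Content);
```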

### Generate images with DALL-E image generation models

```C# Snippet:GenerateImages
Response<ImageGenerations> imageGenerations = await client.GetImageGenerationsAsync(
    new ImageGenerationOptions()
    {
        Prompt = "a happy monkey eating a banana, in watercolor",
        Size = ImageSize.Size256x256,
    });

// Image Generations responses provide URLs you can use to retrieve requested images
Uri imageUri = imageGenerations.Value.Data[0].Url;
```
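
Since these URLs reference temporary storage, you may want to persist the images promptly. A sketch using ordinary
BCL types (`HttpClient` and `File` here are standard .NET facilities, not part of the SDK):

```C#
using System.Net.Http;

// Download the generated image bytes from the temporary URL and save them locally
using var httpClient = new HttpClient();
byte[] imageBytes = await httpClient.GetByteArrayAsync(imageUri);
await File.WriteAllBytesAsync("generated-image.png", imageBytes);
```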

## Troubleshooting

When you interact with Azure OpenAI using the .NET SDK, errors returned by the service correspond to the same HTTP status codes returned for [REST API][openai_rest] requests.