feat: use the token from the environment variable if it is set
xyxc0673 committed Apr 1, 2023
1 parent 58c866e commit c2acfcb
Showing 3 changed files with 71 additions and 41 deletions.
39 changes: 26 additions & 13 deletions README.md
@@ -1,4 +1,5 @@
# Azure OpenAI Proxy

[![Go Report Card](https://goreportcard.com/badge/github.com/diemus/azure-openai-proxy)](https://goreportcard.com/report/github.com/diemus/azure-openai-proxy)
[![License](https://badgen.net/badge/license/MIT/cyan)](https://github.com/diemus/azure-openai-proxy/blob/main/LICENSE)
[![Release](https://badgen.net/github/release/diemus/azure-openai-proxy/latest)](https://github.com/diemus/azure-openai-proxy)
@@ -14,24 +15,29 @@
Azure OpenAI Proxy is a proxy for the Azure OpenAI API that converts an OpenAI request into an Azure OpenAI request. It is designed to be used as a backend for various open-source ChatGPT web projects. It can also be used as a simple OpenAI API proxy to work around regional restrictions on the OpenAI API.

Highlights:

- 🌐 Supports proxying all Azure OpenAI APIs
- 🧠 Supports proxying all Azure OpenAI models and custom fine-tuned models
- 🗺️ Supports custom mapping between Azure deployment names and OpenAI models
- 🔄 Supports both reverse proxy and forward proxy usage

## Usage

### 1. Used as a reverse proxy (i.e. an OpenAI API gateway)

Environment Variables

| Parameters | Description | Default Value |
| :------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------- |
| AZURE_OPENAI_PROXY_ADDRESS | Service listening address | 0.0.0.0:8080 |
| AZURE_OPENAI_PROXY_MODE | Proxy mode, can be either "azure" or "openai". | azure |
| AZURE_OPENAI_ENDPOINT | Azure OpenAI Endpoint, usually looks like https://{custom}.openai.azure.com. Required. | |
| AZURE_OPENAI_APIVERSION | Azure OpenAI API version. Default is 2023-03-15-preview. | 2023-03-15-preview |
| AZURE_OPENAI_MODEL_MAPPER  | A comma-separated list of model=deployment pairs that maps model names to deployment names, e.g. `gpt-3.5-turbo=gpt-35-turbo`, `gpt-3.5-turbo-0301=gpt-35-turbo-0301`. If there is no match, the proxy passes the model name through as the deployment name directly (in fact, most Azure model names are the same as OpenAI's). | `gpt-3.5-turbo=gpt-35-turbo`<br/>`gpt-3.5-turbo-0301=gpt-35-turbo-0301` |
| AZURE_OPENAI_TOKEN | Azure OpenAI API Token. If this environment variable is set, the token in the request header will be ignored. | "" |
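For example, a minimal environment for running the proxy in azure mode might look like the following sketch (the endpoint placeholder must be replaced with your own Azure resource name; the values shown are only illustrative):

```shell
# Illustrative configuration for running the proxy in azure mode.
export AZURE_OPENAI_PROXY_ADDRESS="0.0.0.0:8080"
export AZURE_OPENAI_PROXY_MODE="azure"
export AZURE_OPENAI_ENDPOINT="https://{custom}.openai.azure.com"
export AZURE_OPENAI_APIVERSION="2023-03-15-preview"
export AZURE_OPENAI_MODEL_MAPPER="gpt-3.5-turbo=gpt-35-turbo,gpt-3.5-turbo-0301=gpt-35-turbo-0301"
```

If `AZURE_OPENAI_TOKEN` is also exported, clients no longer need to send their own key, since the proxy overrides the request header.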

Use in command line

```shell
curl https://{your-custom-domain}/v1/chat/completions \
-H "Content-Type: application/json" \
@@ -42,11 +48,12 @@ curl https://{your-custom-domain}/v1/chat/completions \
}'
```


### 2. Used as forward proxy (i.e. an HTTP proxy)

When accessing the Azure OpenAI API over HTTP, the tool can be used directly as a proxy, but it has no built-in HTTPS support, so you need an HTTPS proxy such as Nginx in front of it to access the HTTPS version of the OpenAI API.

Assuming the proxy domain you configured is `https://{your-domain}.com`, you can run the following commands in the terminal to use the HTTPS proxy:

```shell
export https_proxy=https://{your-domain}.com

@@ -60,9 +67,11 @@ curl https://api.openai.com/v1/chat/completions \
```

Or configure it as an HTTP proxy in other open source Web ChatGPT projects:

```shell
export HTTPS_PROXY=https://{your-domain}.com
```

## Deploy

Deploying through Docker
@@ -74,6 +83,7 @@ docker run -d -p 8080:8080 --name=azure-openai-proxy \
--env AZURE_OPENAI_MODEL_MAPPER={your custom model mapper, like: gpt-3.5-turbo=gpt-35-turbo,gpt-3.5-turbo-0301=gpt-35-turbo-0301} \
ishadows/azure-openai-proxy:latest
```

Calling

```shell
@@ -87,19 +97,22 @@ curl http://localhost:8080/v1/chat/completions \
```

## Model Mapping Mechanism

There are a series of rules for model mapping pre-defined in `AZURE_OPENAI_MODEL_MAPPER`, and the default configuration covers the mapping of all current Azure models. The rules include:

- `gpt-3.5-turbo` -> `gpt-35-turbo`
- `gpt-3.5-turbo-0301` -> `gpt-35-turbo-0301`
- A fallback mechanism that passes the model name through directly as the deployment name.
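The lookup described above can be sketched in Go roughly as follows (a simplified illustration, not the proxy's actual code; `getDeploymentByModel` and `modelMapper` are hypothetical names for this sketch):

```go
package main

import "fmt"

// Default model-to-deployment mapping, mirroring the proxy's defaults.
var modelMapper = map[string]string{
	"gpt-3.5-turbo":      "gpt-35-turbo",
	"gpt-3.5-turbo-0301": "gpt-35-turbo-0301",
}

// getDeploymentByModel returns the mapped deployment name, or falls
// back to the model name itself when no mapping is defined.
func getDeploymentByModel(model string) string {
	if deployment, ok := modelMapper[model]; ok {
		return deployment
	}
	return model
}

func main() {
	fmt.Println(getDeploymentByModel("gpt-3.5-turbo"))       // gpt-35-turbo
	fmt.Println(getDeploymentByModel("my-fine-tuned-model")) // my-fine-tuned-model
}
```

The fallback is what lets custom fine-tuned models work without any configuration, as long as the deployment name matches the model name.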

For custom fine-tuned models, the model name can be passed through directly. For deployments whose names differ from the model name, a custom mapping can be defined, such as:

| Model Name | Deployment Name |
| :----------------- | :--------------------------- |
| gpt-3.5-turbo | gpt-35-turbo-upgrade |
| gpt-3.5-turbo-0301 | gpt-35-turbo-0301-fine-tuned |
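The custom mapping in the table above would be expressed through the environment variable roughly as follows (deployment names are illustrative):

```shell
# Map both models to custom deployment names; unlisted models still
# fall through to the pass-through mechanism.
export AZURE_OPENAI_MODEL_MAPPER="gpt-3.5-turbo=gpt-35-turbo-upgrade,gpt-3.5-turbo-0301=gpt-35-turbo-0301-fine-tuned"
```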

## License

MIT

## Star History
56 changes: 30 additions & 26 deletions README.zh-cn.md
@@ -1,4 +1,5 @@
# Azure OpenAI Proxy

[![Go Report Card](https://goreportcard.com/badge/github.com/diemus/azure-openai-proxy)](https://goreportcard.com/report/github.com/diemus/azure-openai-proxy)
[![License](https://badgen.net/badge/license/MIT/cyan)](https://github.com/diemus/azure-openai-proxy/blob/main/LICENSE)
[![Release](https://badgen.net/github/release/diemus/azure-openai-proxy/latest)](https://github.com/diemus/azure-openai-proxy)
@@ -11,29 +12,30 @@
<a href="./README.md">English</a> |
<a href="./README.zh-cn.md">中文</a>

A proxy tool for the Azure OpenAI API that converts an OpenAI request into an Azure OpenAI request, making it easy to use as a backend for various open-source ChatGPT projects. It can also be used as a plain OpenAI API proxy to work around regional restrictions on the OpenAI API.

Highlights:

- 🌐 Supports proxying all Azure OpenAI APIs
- 🧠 Supports proxying all Azure OpenAI models and custom fine-tuned models
- 🗺️ Supports custom mapping between Azure deployment names and OpenAI models
- 🔄 Supports both reverse proxy and forward proxy usage

## Usage

### 1. Used as a reverse proxy (i.e. an OpenAI API gateway)

Environment Variables

| Parameters                 | Description                                                                                                                                                                                                                                                                                                      | Default Value                                                            |
| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------- |
| AZURE_OPENAI_PROXY_ADDRESS | Service listening address                                                                                                                                                                                                                                                                                        | 0.0.0.0:8080                                                             |
| AZURE_OPENAI_PROXY_MODE    | Proxy mode, either "azure" or "openai".                                                                                                                                                                                                                                                                          | azure                                                                    |
| AZURE_OPENAI_ENDPOINT      | Azure OpenAI Endpoint, usually of the form https://{custom}.openai.azure.com. Required.                                                                                                                                                                                                                          |                                                                          |
| AZURE_OPENAI_APIVERSION    | Azure OpenAI API version. Default is 2023-03-15-preview.                                                                                                                                                                                                                                                         | 2023-03-15-preview                                                       |
| AZURE_OPENAI_MODEL_MAPPER  | A comma-separated list of model=deployment pairs that maps model names to deployment names, e.g. `gpt-3.5-turbo=gpt-35-turbo`, `gpt-3.5-turbo-0301=gpt-35-turbo-0301`. If there is no match, the proxy passes the model name through as the deployment name directly (in fact, most Azure model names are the same as OpenAI's). | `gpt-3.5-turbo=gpt-35-turbo`<br/>`gpt-3.5-turbo-0301=gpt-35-turbo-0301` |
| AZURE_OPENAI_TOKEN         | Azure OpenAI API token. If this environment variable is set, the token in the request header is ignored.                                                                                                                                                                                                         | ""                                                                       |

Use in command line

@@ -48,14 +50,14 @@ curl https://{your-custom-domain}/v1/chat/completions \

```

### 2. Used as a forward proxy (i.e. an HTTP proxy)

When accessing the Azure OpenAI API over HTTP, the tool can be used directly as a proxy, but it has no built-in HTTPS support, so you need to put an HTTPS proxy such as Nginx in front of it to access the HTTPS version of the OpenAI API.

Assuming the proxy domain you configured is `https://{your-domain}.com`, you can run the following commands in the terminal to configure the HTTP proxy:

```shell
export https_proxy=https://{your-domain}.com

curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
@@ -67,15 +69,15 @@ curl https://api.openai.com/v1/chat/completions \

```

Or configure it as an HTTP proxy in other open-source web ChatGPT projects:

```shell
export HTTPS_PROXY=https://{your-domain}.com
```

## Deploy

Deploying through Docker

```shell
docker pull ishadows/azure-openai-proxy:latest
@@ -84,7 +86,9 @@ docker run -d -p 8080:8080 --name=azure-openai-proxy \
--env AZURE_OPENAI_MODEL_MAPPER={your custom model mapper, like: gpt-3.5-turbo=gpt-35-turbo,gpt-3.5-turbo-0301=gpt-35-turbo-0301} \
ishadows/azure-openai-proxy:latest
```

Calling

```shell
curl http://localhost:8080/v1/chat/completions \
-H "Content-Type: application/json" \
@@ -97,16 +101,16 @@ curl https://localhost:8080/v1/chat/completions \

## Model Mapping Mechanism

A series of model-mapping rules are pre-defined in `AZURE_OPENAI_MODEL_MAPPER`, and the default configuration covers the mapping of all current Azure models. The rules include:

- `gpt-3.5-turbo` -> `gpt-35-turbo`
- `gpt-3.5-turbo-0301` -> `gpt-35-turbo-0301`
- A fallback mechanism that passes the model name through directly as the deployment name.

For custom fine-tuned models, the model name can be passed through directly. For deployments whose names differ from the model name, a custom mapping can be defined, such as:

| 模型名称 | 部署名称 |
| :----------------- | :--------------------------- |
| gpt-3.5-turbo | gpt-35-turbo-upgrade |
| gpt-3.5-turbo-0301 | gpt-35-turbo-0301-fine-tuned |

17 changes: 15 additions & 2 deletions pkg/azure/proxy.go
@@ -3,7 +3,6 @@ package azure
import (
"bytes"
"fmt"
"github.com/tidwall/gjson"
"io/ioutil"
"log"
"net/http"
@@ -13,9 +12,12 @@ import (
"path"
"regexp"
"strings"

"github.com/tidwall/gjson"
)

var (
AzureOpenAIToken = ""
AzureOpenAIAPIVersion = "2023-03-15-preview"
AzureOpenAIEndpoint = ""
AzureOpenAIModelMapper = map[string]string{
@@ -42,6 +44,9 @@ func init() {
AzureOpenAIModelMapper[info[0]] = info[1]
}
}
if v := os.Getenv("AZURE_OPENAI_TOKEN"); v != "" {
AzureOpenAIToken = v
}

log.Printf("loading azure api endpoint: %s", AzureOpenAIEndpoint)
log.Printf("loading azure api version: %s", AzureOpenAIAPIVersion)
@@ -68,7 +73,15 @@ func NewOpenAIReverseProxy() *httputil.ReverseProxy {
deployment := GetDeploymentByModel(model)

// Replace the Bearer field in the Authorization header with api-key
token := ""

// use the token from the environment variable if it is set
if AzureOpenAIToken != "" {
token = AzureOpenAIToken
} else {
token = strings.ReplaceAll(req.Header.Get("Authorization"), "Bearer ", "")
}

req.Header.Set("api-key", token)
req.Header.Del("Authorization")
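The precedence introduced by this commit can be illustrated with a small standalone sketch (simplified; `pickToken` is a hypothetical name, not part of the package, and it uses `strings.TrimPrefix` where the proxy uses `strings.ReplaceAll`):

```go
package main

import (
	"fmt"
	"strings"
)

// pickToken mirrors the commit's logic: a token from the environment
// (AZURE_OPENAI_TOKEN) wins; otherwise the Bearer token from the
// request's Authorization header is used.
func pickToken(envToken, authHeader string) string {
	if envToken != "" {
		return envToken
	}
	return strings.TrimPrefix(authHeader, "Bearer ")
}

func main() {
	fmt.Println(pickToken("env-secret", "Bearer client-secret")) // env-secret
	fmt.Println(pickToken("", "Bearer client-secret"))           // client-secret
}
```

This is what allows the proxy to hold a single server-side Azure key while accepting requests from clients that send any (or no meaningful) OpenAI token.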

