feat: use chatgpt helper + add options #3

Merged · 3 commits · Oct 23, 2024

Changes from all commits
40 changes: 4 additions & 36 deletions README.md
@@ -18,20 +18,13 @@ First, navigate to your Hexabot project directory and make sure the dependencies

```sh
cd ~/projects/Hexabot
npm install
```

To install the ChatGPT Plugin, run the following command:

```sh
npx hexabot install plugin Hexastack/hexabot-plugin-chatgpt
npm install hexabot-plugin-chatgpt --prefix ./api
```
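
Note: this version of the plugin delegates API calls to the ChatGPT helper (`hexabot-helper-chatgpt`, now listed as a dependency in `package.json`), so the helper presumably needs to be installed and configured in your Hexabot instance as well.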

## Configuration

The ChatGPT Plugin provides several customizable settings that can be configured through the Hexabot admin interface:

- **Token**: Your OpenAI API token. This is required for authentication.
- **Model**: The model to be used for generating responses (e.g., `gpt-4o-mini`).
- **Response Size**: The maximum number of tokens in the AI response.
- **Messages to Retrieve**: The number of recent messages to include as context when making requests to OpenAI.
@@ -40,34 +33,9 @@ The ChatGPT Plugin provides several customizable settings that can be configured

## How to Use

1. Access the Hexabot Visual Editor.
2. Drag the ChatGPT RAG block from "Custom Blocks" onto the canvas.
3. Double-click the block to edit and configure the plugin’s settings, including the API token, model, and context.

## Example

Here’s an example prompt generated by the plugin when sending a request to OpenAI:

```

CONTEXT: You are an AI Chatbot that works for Hexastack, an organization that provides AI-powered solutions. Use the following information to assist users:

- Description: Hexastack offers multi-channel and multilingual chatbots designed for ease of use and management.

DOCUMENTS:
DOCUMENT 0
Title: Example Title
Data: Example Data...

INSTRUCTIONS:
Based on the provided context and documents, answer the user's question clearly and concisely.

QUESTION: What services does Hexastack offer?


```

The plugin will then use this prompt to generate a response via OpenAI’s API.
1. Access the settings and configure the API Token.
2. Access the Hexabot Visual Editor and drag the ChatGPT Plugin block from "Custom Blocks" onto the canvas.
3. Double-click the block to edit and configure the plugin’s settings, including the model, context, and other options. A hypothetical configuration is sketched below.
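
For illustration, a block configuration might look like the following. The values are hypothetical; where defaults are noted, they are taken from the plugin's help strings in `i18n/en/help.json`:

```ts
// Hypothetical example of values you might set on the ChatGPT Plugin block.
// Setting names mirror the keys in i18n/en/label.json; noted defaults follow i18n/en/help.json.
const exampleBlockSettings = {
  model: 'gpt-4o-mini', // model used to generate responses
  context: 'You are an AI Chatbot that works for Hexastack.',
  instructions:
    "Based on the provided context and documents, answer the user's question clearly and concisely.",
  max_messages_ctx: 5, // recent messages included as conversation context
  temperature: 0.8, // documented default (range: 0 to 2.0)
  max_completion_tokens: 1000, // documented default
  top_p: 0.9, // nucleus sampling, documented default
};
```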

## Contributing

106 changes: 44 additions & 62 deletions chatgpt.plugin.ts
@@ -7,7 +7,6 @@
*/

import { Injectable } from '@nestjs/common';
import OpenAI from 'openai';

import { Block } from '@/chat/schemas/block.schema';
import { Context } from '@/chat/schemas/types/context';
@@ -21,100 +20,83 @@ import { LoggerService } from '@/logger/logger.service';
import { BaseBlockPlugin } from '@/plugins/base-block-plugin';
import { PluginService } from '@/plugins/plugins.service';

import { CHATGPT_PLUGIN_SETTINGS } from './settings';
import ChatGptLlmHelper from '@/extensions/helpers/hexabot-helper-chatgpt/index.helper';
import { HelperService } from '@/helper/helper.service';
import { HelperType } from '@/helper/types';
import { PluginBlockTemplate } from '@/plugins/types';
import CHATGPT_PLUGIN_SETTINGS from './settings';

@Injectable()
export class ChatgptPlugin extends BaseBlockPlugin<
typeof CHATGPT_PLUGIN_SETTINGS
> {
private openai: OpenAI;
template: PluginBlockTemplate = { name: 'ChatGPT RAG Plugin' };

constructor(
pluginService: PluginService,
private helperService: HelperService,
private logger: LoggerService,
private contentService: ContentService,
private readonly messageService: MessageService,
) {
super('chatgpt', CHATGPT_PLUGIN_SETTINGS, pluginService);

this.template = { name: 'ChatGPT RAG Block' };

this.effects = {
onStoreContextData: () => {},
};
super('chatgpt-plugin', pluginService);
}

private async getMessagesContext(context: Context, maxMessagesCtx = 5) {
const recentMessages = await this.messageService.findLastMessages(
context.user,
maxMessagesCtx,
);

const messagesContext: { role: 'user' | 'assistant'; content: string }[] =
recentMessages.map((m) => {
const text =
'text' in m.message && m.message.text
? m.message.text
: JSON.stringify(m.message);
return {
role: 'sender' in m && m.sender ? 'user' : 'assistant',
content: text,
};
});

return messagesContext;
getPath(): string {
return __dirname;
}

async process(block: Block, context: Context, _convId: string) {
const RAG = await this.contentService.textSearch(context.text);
const args = this.getArguments(block);
const client = this.getInstance(args.token);
const historicalMessages = await this.getMessagesContext(
context,
const chatGptHelper = this.helperService.use(
HelperType.LLM,
ChatGptLlmHelper,
);

const history = await this.messageService.findLastMessages(
context.user,
args.max_messages_ctx,
);
const completion = await client.chat.completions.create({
model: args.model,
messages: [
{
role: 'system',
content: `CONTEXT: ${args.context}

const options = this.settings
.filter(
(setting) =>
'subgroup' in setting &&
setting.subgroup === 'options' &&
setting.value !== null,
)
.reduce((acc, { label }) => {
acc[label] = args[label];
return acc;
}, {});

const systemPrompt = `CONTEXT: ${args.context}
DOCUMENTS: \n${RAG.reduce(
(prev, curr, index) =>
`${prev}\n\tDOCUMENT ${index} \n\t\tTitle:${curr.title}\n\t\tData:${curr.rag}`,
'',
)}\nINSTRUCTIONS:
${args.instructions}
`,
},
...historicalMessages,
{ role: 'user', content: context.text },
],
temperature: 0.8,
max_tokens: args.num_ctx || 256,
});
`;

const text = await chatGptHelper.generateChatCompletion(
context.text,
args.model,
systemPrompt,
history,
{
...options,
user: context.user.id,
},
);

const envelope: StdOutgoingTextEnvelope = {
format: OutgoingMessageFormat.text,
message: {
text: completion.choices[0].message.content,
text,
},
};
return envelope;
}

private getInstance(token: string) {
if (this.openai) {
return this.openai;
}

try {
this.openai = new OpenAI({
apiKey: token,
});
return this.openai;
} catch (err) {
this.logger.warn('RAG: Unable to instantiate OpenAI', err);
}
}
}
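
The core of this change is that the plugin no longer instantiates an OpenAI client itself; it asks Hexabot's helper registry for whichever LLM helper is installed and delegates completion to it. Below is a minimal sketch of that pattern with simplified stand-in types; the real `HelperService`, `HelperType`, and `ChatGptLlmHelper` live in Hexabot core and the `hexabot-helper-chatgpt` package, and their actual signatures may differ:

```ts
// Simplified stand-ins for Hexabot's helper registry pattern, illustrative only.
type ChatMessage = { role: 'user' | 'assistant'; content: string };

interface LlmHelper {
  generateChatCompletion(
    prompt: string,
    model: string,
    systemPrompt: string,
    history: ChatMessage[],
    options?: Record<string, unknown>,
  ): Promise<string>;
}

class HelperRegistry {
  private helpers = new Map<string, LlmHelper>();

  register(name: string, helper: LlmHelper): void {
    this.helpers.set(name, helper);
  }

  // The plugin looks a helper up by type instead of constructing an OpenAI client itself.
  use(name: string): LlmHelper {
    const helper = this.helpers.get(name);
    if (!helper) {
      throw new Error(`No helper registered under "${name}"`);
    }
    return helper;
  }
}

// A toy helper standing in for the real ChatGptLlmHelper.
const registry = new HelperRegistry();
registry.register('llm', {
  async generateChatCompletion(prompt, model, _systemPrompt, history) {
    return `[${model}] reply to "${prompt}" (${history.length} history messages)`;
  },
});

// Roughly what the plugin's process() now boils down to:
async function processSketch(): Promise<string> {
  const llm = registry.use('llm');
  return llm.generateChatCompletion(
    'What services does Hexastack offer?',
    'gpt-4o-mini',
    'CONTEXT: ...\nDOCUMENTS: ...\nINSTRUCTIONS: ...',
    [],
    { temperature: 0.8 },
  );
}
```

Because the block now goes through the LLM helper abstraction, the API token is no longer one of the block's arguments; it is configured once on the helper, which matches step 1 of the updated "How to Use" section.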
20 changes: 19 additions & 1 deletion i18n/en/help.json
@@ -4,5 +4,23 @@
"context": "Provide initial context or information that the model should consider when generating responses.",
"instructions": "Specify any special instructions or guidelines that the model should follow in its responses.",
"num_ctx": "Set the maximum number of tokens (words and punctuation) that the model can consider from the provided context.",
"max_messages_ctx": "Define the maximum number of previous interaction messages that the model should remember and consider when responding."
"max_messages_ctx": "Define the maximum number of previous interaction messages that the model should remember and consider when responding.",
"frequency_penalty": "Penalizes new tokens based on their frequency in the generated text. A positive value reduces the likelihood of repeating the same token. (Default: 0, range: -2.0 to 2.0)",
"function_call": "Determines if the model should call a function or generate a message. 'none' disables function calls, 'auto' allows the model to choose, and specifying a function forces the model to call it. (Default: 'none')",
"logit_bias": "Modifies the likelihood of specified tokens appearing. Accepts a JSON object mapping tokens to a bias value (-100 to 100). A high positive bias makes a token more likely, while a negative value reduces the likelihood.",
"logprobs": "If true, returns the log probabilities of each output token, useful for understanding token selection. (Default: false)",
"max_completion_tokens": "Limits the maximum number of tokens in the completion, helping to control costs and avoid long outputs. (Default: 1000)",
"n": "Controls how many completions to generate for each prompt. (Default: 1)",
"parallel_tool_calls": "Enables or disables parallel function calls when using tools. (Default: false)",
"presence_penalty": "Penalizes new tokens based on their appearance in the conversation, encouraging the model to talk about new topics. (Default: 0, range: -2.0 to 2.0)",
"response_format": "Specifies the format of the model's output. Options include 'text' for regular output and 'json' for structured responses. (Default: 'text')",
"seed": "Specifies a random seed for deterministic responses. Using the same seed with the same parameters will return the same result. (Default: null)",
"stop": "Specifies stop sequences that prevent further token generation when encountered. You can specify one or more stop sequences. (Default: null)",
"store": "Indicates whether to store the output of the request for future use in model distillation or evaluation. (Default: false)",
"stream": "If enabled, returns partial message deltas as the completion is generated, similar to ChatGPT streaming. (Default: false)",
"temperature": "Controls the randomness of the output. Higher values make the output more creative, while lower values make it more focused and deterministic. (Default: 0.8, range: 0 to 2.0)",
"tool_choice": "Controls which tool (if any) the model should call. 'none' disables tool use, 'auto' lets the model choose, and 'required' forces tool usage. (Default: 'auto')",
"top_logprobs": "Specifies the number of most likely tokens to return with their associated log probabilities. Requires 'logprobs' to be true. (Default: null)",
"top_p": "Alternative to temperature, using nucleus sampling to consider tokens with a combined probability mass. A lower value (e.g., 0.5) generates more focused text, while a higher value (e.g., 0.95) generates more diverse text. (Default: 0.9)",
"user": "A unique identifier representing your end-user, which can be used for monitoring and detecting abuse. (Default: empty string)"
}
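
Each of these new keys maps to a block setting that the plugin forwards to the helper unchanged: as the `chatgpt.plugin.ts` diff above shows, every setting in the `options` subgroup with a non-null value is reduced into a single options object and passed to `generateChatCompletion`. A small self-contained sketch of that step, with the setting shape simplified for illustration:

```ts
// Simplified model of how the plugin gathers the 'options' subgroup, illustrative only.
type Setting = { label: string; subgroup?: string; value: unknown };

function collectOptions(
  settings: Setting[],
  args: Record<string, unknown>,
): Record<string, unknown> {
  return settings
    .filter((s) => s.subgroup === 'options' && s.value !== null)
    .reduce<Record<string, unknown>>((acc, { label }) => {
      acc[label] = args[label]; // forward the block's configured value for each option
      return acc;
    }, {});
}

// Example: only settings in the 'options' subgroup end up in the payload.
const options = collectOptions(
  [
    { label: 'temperature', subgroup: 'options', value: 0.8 },
    { label: 'top_p', subgroup: 'options', value: 0.9 },
    { label: 'context', subgroup: 'default', value: '...' }, // not an option, skipped
  ],
  { temperature: 0.7, top_p: 0.9, context: 'ignored here' },
);
// options => { temperature: 0.7, top_p: 0.9 }
```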
20 changes: 19 additions & 1 deletion i18n/en/label.json
@@ -4,5 +4,23 @@
"context": "Context",
"instructions": "Instructions",
"num_ctx": "Maximum number of tokens",
"max_messages_ctx": "Number of Messages"
"max_messages_ctx": "Number of Messages",
"frequency_penalty": "Frequency Penalty",
"function_call": "Function Call",
"logit_bias": "Logit Bias",
"logprobs": "Log Probabilities",
"max_completion_tokens": "Max Completion Tokens",
"n": "Number of Choices",
"parallel_tool_calls": "Parallel Tool Calls",
"presence_penalty": "Presence Penalty",
"response_format": "Response Format",
"seed": "Seed",
"stop": "Stop",
"store": "Store Output",
"stream": "Stream Output",
"temperature": "Temperature",
"tool_choice": "Tool Choice",
"top_logprobs": "Top Log Probabilities",
"top_p": "Top P",
"user": "End User ID"
}
2 changes: 1 addition & 1 deletion i18n/en/title.json
@@ -1,3 +1,3 @@
{
"chatgpt": "ChatGPT"
"chatgpt_plugin": "ChatGPT Plugin"
}
20 changes: 19 additions & 1 deletion i18n/fr/help.json
@@ -4,5 +4,23 @@
"context": "Fournissez un contexte initial ou des informations que le modèle devrait prendre en compte lors de la génération des réponses.",
"instructions": "Spécifiez des instructions spéciales ou des directives que le modèle devrait suivre dans ses réponses.",
"num_ctx": "Définissez le nombre maximum de jetons (mots et ponctuations) que le modèle peut considérer à partir du contexte fourni.",
"max_messages_ctx": "Déterminez le nombre maximum de messages d'interactions précédentes que le modèle doit se souvenir et considérer lorsqu'il répond."
"max_messages_ctx": "Déterminez le nombre maximum de messages d'interactions précédentes que le modèle doit se souvenir et considérer lorsqu'il répond.",
"frequency_penalty": "Pénalise les nouveaux tokens en fonction de leur fréquence dans le texte généré. Une valeur positive réduit la probabilité de répéter le même token. (Par défaut : 0, plage : -2.0 à 2.0)",
"function_call": "Détermine si le modèle doit appeler une fonction ou générer un message. 'none' désactive les appels de fonction, 'auto' permet au modèle de choisir, et spécifier une fonction force le modèle à l'appeler. (Par défaut : 'none')",
"logit_bias": "Modifie la probabilité d'apparition des tokens spécifiés. Accepte un objet JSON mappant des tokens à une valeur de biais (-100 à 100). Un biais fortement positif rend un token plus probable, tandis qu'une valeur négative réduit la probabilité.",
"logprobs": "Si activé, renvoie les probabilités logarithmiques de chaque token de sortie, utile pour comprendre la sélection des tokens. (Par défaut : false)",
"max_completion_tokens": "Limite le nombre maximal de tokens dans la complétion, aidant à contrôler les coûts et à éviter les sorties trop longues. (Par défaut : 1000)",
"n": "Contrôle combien de complétions générer pour chaque invite. (Par défaut : 1)",
"parallel_tool_calls": "Active ou désactive les appels de fonction parallèles lors de l'utilisation d'outils. (Par défaut : false)",
"presence_penalty": "Pénalise les nouveaux tokens en fonction de leur apparition dans la conversation, encourageant le modèle à aborder de nouveaux sujets. (Par défaut : 0, plage : -2.0 à 2.0)",
"response_format": "Spécifie le format de la réponse du modèle. Les options incluent 'text' pour une sortie classique et 'json' pour des réponses structurées. (Par défaut : 'text')",
"seed": "Spécifie une graine aléatoire pour des réponses déterministes. Utiliser la même graine avec les mêmes paramètres renverra le même résultat. (Par défaut : null)",
"stop": "Spécifie des séquences d'arrêt qui empêchent la génération de tokens supplémentaires lorsqu'elles sont rencontrées. Vous pouvez spécifier une ou plusieurs séquences d'arrêt. (Par défaut : null)",
"store": "Indique si la sortie de la requête doit être stockée pour une utilisation future dans la distillation ou l'évaluation du modèle. (Par défaut : false)",
"stream": "Si activé, renvoie des deltas partiels de messages au fur et à mesure que la complétion est générée, similaire au streaming de ChatGPT. (Par défaut : false)",
"temperature": "Contrôle l'imprévisibilité de la sortie. Des valeurs plus élevées rendent la sortie plus créative, tandis que des valeurs plus faibles la rendent plus focalisée et déterministe. (Par défaut : 0.8, plage : 0 à 2.0)",
"tool_choice": "Contrôle quel outil (le cas échéant) le modèle doit utiliser. 'none' désactive l'utilisation des outils, 'auto' laisse le modèle choisir, et 'required' force l'utilisation de l'outil. (Par défaut : 'auto')",
"top_logprobs": "Spécifie le nombre de tokens les plus probables à renvoyer avec leurs probabilités logarithmiques associées. Nécessite que 'logprobs' soit activé. (Par défaut : null)",
"top_p": "Alternative à la température, utilisant l'échantillonnage nucleus pour considérer les tokens avec une masse de probabilité combinée. Une valeur plus faible (ex : 0.5) génère un texte plus focalisé, tandis qu'une valeur plus élevée (ex : 0.95) génère un texte plus diversifié. (Par défaut : 0.9)",
"user": "Un identifiant unique représentant votre utilisateur final, pouvant être utilisé pour la surveillance et la détection des abus. (Par défaut : chaîne vide)"
}
20 changes: 19 additions & 1 deletion i18n/fr/label.json
@@ -4,5 +4,23 @@
"context": "Contexte",
"instructions": "Instructions",
"num_ctx": "Nombre maximale de jetons",
"max_messages_ctx": "Nombre de messages"
"max_messages_ctx": "Nombre de messages",
"frequency_penalty": "Pénalité de Fréquence",
"function_call": "Appel de Fonction",
"logit_bias": "Biais de Logit",
"logprobs": "Probabilités Logarithmiques",
"max_completion_tokens": "Nombre Maximum de Jetons de Complétion",
"n": "Nombre de Choix",
"parallel_tool_calls": "Appels d'Outils en Parallèle",
"presence_penalty": "Pénalité de Présence",
"response_format": "Format de Réponse",
"seed": "Graine",
"stop": "Arrêter",
"store": "Stocker la Sortie",
"stream": "Diffusion de la Sortie",
"temperature": "Température",
"tool_choice": "Choix de l'Outil",
"top_logprobs": "Probabilités Logarithmiques Maximales",
"top_p": "Top P",
"user": "ID de l'Utilisateur Final"
}
2 changes: 1 addition & 1 deletion i18n/fr/title.json
@@ -1,3 +1,3 @@
{
"chatgpt": "ChatGPT"
"chatgpt_plugin": "ChatGPT Plugin"
}
4 changes: 2 additions & 2 deletions package.json
@@ -1,9 +1,9 @@
{
"name": "hexabot-plugin-chatgpt",
"version": "2.0.0",
"version": "2.0.1",
"description": "The OpenAI ChatGPT Plugin for Hexabot Chatbot / Agent Builder to enable the LLM RAG Capability",
"dependencies": {
"openai": "^4.54.0"
"hexabot-helper-chatgpt": "^2.0.0"
},
"author": "Hexastack",
"license": "AGPL-3.0-only"