
Building blocks: LLM utils #557

Open
sachinr opened this issue Oct 27, 2023 · 1 comment

@sachinr
Member

sachinr commented Oct 27, 2023

SR: I need someone with more experience to flesh this out

@miguel-nascimento miguel-nascimento self-assigned this Nov 1, 2023
@sachinr sachinr mentioned this issue Nov 13, 2023
@miguel-nascimento
Contributor

miguel-nascimento commented Dec 13, 2023

llm-utils applet

Question: Do you like the approach I took? Are you happy with the usage?
Is there any missing feature/util/config?

I saw some comments in the applet and want to raise them here:

1. What if we export a default AI object using the ChatOpenAI LLM? Then we could support something like:

   ```js
   import { chat } from 'ai/chat'
   import { ai } from 'ai/general'
   ```

2. Unify `invoke` with `run` and allow undefined args.
   Currently, `chat` differs from `ai` (the latter is a more direct Q&A): the Chat API uses `.invoke` to send the request to the LLM service, while the AI API uses `.run` to do the same.

What do you guys think about unifying both? Is it needed?
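To make the unification concrete, here is a minimal sketch of what a shared interface could look like. All names here (`LLMClient`, `ChatClient`, `AIClient`, `ask`) are hypothetical and the service calls are stubbed; this is just one way to put both APIs behind a single `run` method with optional args, not the applet's actual implementation:

```typescript
// Hypothetical unified interface: both clients expose a single `run`
// method, and `args` is optional so callers can omit configuration.
interface LLMClient {
  run(prompt: string, args?: Record<string, unknown>): Promise<string>;
}

// Stand-in for the Chat API, which currently exposes `.invoke`.
class ChatClient implements LLMClient {
  async run(prompt: string, args?: Record<string, unknown>): Promise<string> {
    // In the real applet this would send the request to the LLM service;
    // stubbed here so the sketch is self-contained.
    return `chat:${prompt}${args ? `:${JSON.stringify(args)}` : ""}`;
  }
}

// Stand-in for the more direct Q&A API, which currently exposes `.run`.
class AIClient implements LLMClient {
  async run(prompt: string, args?: Record<string, unknown>): Promise<string> {
    return `ai:${prompt}`;
  }
}

// With one interface, callers no longer care which client they hold:
async function ask(client: LLMClient, prompt: string): Promise<string> {
  return client.run(prompt);
}
```

The main design question this sketch surfaces is whether the two APIs really take the same inputs; if `chat` needs message history while `ai` takes a bare string, the unified signature would have to widen accordingly.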
