Merge pull request #877 from YoungPhlo/docs/community-streams
docs: Add AI Agent Dev School Parts 2 and 3 summaries and timestamps
# AI Agent Dev School Part 2

**Building Complex AI Agents with Actions, Providers, & Evaluators**

Date: 2024-12-03
YouTube Link: https://www.youtube.com/watch?v=XenGeAcPAQo

## Timestamps

**00:03:33** - Shift in focus from characters (Dev School Part 1) to agent capabilities
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=213

**00:07:09** - Deep dive into providers, actions, and evaluators, the core building blocks of Eliza
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=429

**00:07:28** - Discussion about actions vs. tools, favoring decoupled intent and action execution
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=448

**00:18:02** - Explanation of providers and their function as information sources for agents
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=1082

**00:20:15** - Introduction to evaluators and their role in agent reflection and state analysis
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=1215

**00:29:22** - Brief overview of clients as connectors to external platforms
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=1762

**00:31:02** - Description of adapters and their function in database interactions
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=1862

**00:34:02** - Discussion about plugins as bundles of core components, examples, and recommendations
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=2042

**00:40:31** - Live Coding Demo begins: Creating a new plugin from scratch (DevSchoolExamplePlugin)
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=2431

**00:47:54** - Implementing the simple HelloWorldAction
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=2791

**01:00:26** - Implementing the CurrentNewsAction (fetching and formatting news data)
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=3626

**01:22:09** - Demonstrating the Eliza Client for interacting with agents locally
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=4929

**01:23:54** - Q&A: Plugin usage in character files, installation, Eliza vs. Eliza Starter
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=5034

**01:36:17** - Saving agent responses as memories in the database
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=5777

**01:43:06** - Using prompts for data extraction within actions
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=6186

**01:51:54** - Importance of deleting the database during development to avoid context issues
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=6714

**01:57:04** - Viewing agent context via console logs to understand model inputs
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=7024

**02:07:07** - Explanation of memory management with knowledge, facts, and lore
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=7627

**02:16:53** - Q&A: Prompt engineering opportunities, knowledge chunking and retrieval
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=8213

**02:22:57** - Call for contributions: Encouraging viewers to create their own actions and plugins
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=8577

**02:26:31** - Closing remarks and future DevSchool session announcements
- Link: https://www.youtube.com/watch?v=XenGeAcPAQo&t=8791

## Summary

AI Agent Dev School Part 2, Electric Boogaloo

The session focuses on building complex AI agents, with Shaw diving into core abstractions: plugins, providers, actions, and evaluators.

Actions are defined as capabilities that agents can execute, ranging from simple tasks to complex workflows. Providers serve as information sources for agents, similar to context providers in React. Evaluators run after actions, enabling agents to reflect on their state and decisions.

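As a reference point, here is a minimal sketch of a provider and an evaluator. It assumes the interfaces exported by the Eliza core package at the time (`@ai16z/eliza`); the names (`timeProvider`, `reflectionEvaluator`) and the returned strings are illustrative, not code from the stream.

```typescript
import type { Evaluator, IAgentRuntime, Memory, Provider, State } from "@ai16z/eliza";

// Provider: injects extra context into the agent's prompt before the model is called.
export const timeProvider: Provider = {
    get: async (_runtime: IAgentRuntime, _message: Memory, _state?: State) => {
        return `The current UTC time is ${new Date().toISOString()}.`;
    },
};

// Evaluator: runs after the agent responds, giving it a chance to reflect on the exchange.
export const reflectionEvaluator: Evaluator = {
    name: "LOG_EXCHANGE",
    similes: [],
    description: "Logs each exchange so the agent can reflect on it later.",
    validate: async (_runtime: IAgentRuntime, _message: Memory) => true,
    handler: async (_runtime: IAgentRuntime, message: Memory) => {
        console.log("Reflecting on:", message.content.text);
    },
    examples: [],
};
```

Actions follow the same registration pattern; one is sketched after the next paragraph.
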
The live coding portion demonstrates creating a "DevSchool" plugin from scratch, starting with a simple "Hello World" action and progressing to a more complex "Current News" action that fetches and formats news articles. Shaw shows how to extract data from conversations using prompts, making actions dynamic.

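The sketch below reconstructs the general shape of that "Current News" action and its plugin wrapper. It is a hedged approximation rather than the exact stream code: it assumes `@ai16z/eliza` exports `Action`, `Plugin`, `composeContext`, `generateText`, and `ModelClass` as they did at the time, and the prompt template, NewsAPI endpoint, and `NEWS_API_KEY` variable are illustrative.

```typescript
import {
    composeContext,
    generateText,
    ModelClass,
    type Action,
    type Plugin,
} from "@ai16z/eliza";

// Hypothetical prompt template: pull the news topic out of the recent conversation.
const topicTemplate = `Extract the news topic the user is asking about from the conversation below.
Respond with only the topic and nothing else.

{{recentMessages}}`;

export const currentNewsAction: Action = {
    name: "CURRENT_NEWS",
    similes: ["GET_NEWS", "NEWS_UPDATE"],
    description: "Fetches current news headlines for a topic the user mentions.",
    validate: async () => true,
    handler: async (runtime, message, state, _options, callback) => {
        // Use a prompt to extract structured data (the topic) from the conversation.
        const currentState = state ?? (await runtime.composeState(message));
        const context = composeContext({ state: currentState, template: topicTemplate });
        const topic = await generateText({ runtime, context, modelClass: ModelClass.SMALL });

        // Placeholder fetch: the stream used a news API; endpoint and key handling are illustrative.
        const response = await fetch(
            `https://newsapi.org/v2/everything?q=${encodeURIComponent(topic)}&apiKey=${process.env.NEWS_API_KEY}`
        );
        const data = await response.json();
        const headlines = (data.articles ?? [])
            .slice(0, 5)
            .map((article: { title: string; url: string }) => `${article.title} (${article.url})`)
            .join("\n");

        callback?.({ text: `Here is the latest news about ${topic}:\n${headlines}` });
        return true;
    },
    examples: [],
};

// Bundling the action into a plugin that a character file can then reference.
export const devSchoolExamplePlugin: Plugin = {
    name: "devschool-example",
    description: "Example plugin sketched after Dev School Part 2.",
    actions: [currentNewsAction],
};
```
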
The session covers memory management, explaining how agents store and recall information through different types of memory:
- Knowledge: Information retrievable through search
- Lore: Random facts that add variety to responses
- Conversation history: Recent interactions and context

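In a character definition, the first two map onto fields of the character object, while conversation history is managed by the runtime and its database adapter. A hedged fragment follows, assuming the `Character` type from `@ai16z/eliza` exposes `knowledge` and `lore` string arrays; the strings themselves are placeholders.

```typescript
import type { Character } from "@ai16z/eliza";

// Illustrative fragment of a character definition; only memory-related fields are shown.
export const devSchoolCharacter: Partial<Character> = {
    name: "Eliza",
    // Knowledge: statements the agent can retrieve through search when they are relevant.
    knowledge: [
        "AI Agent Dev School sessions are streamed live and then summarized in the docs.",
    ],
    // Lore: random facts mixed into prompts to add variety to responses.
    lore: [
        "Once live-coded a plugin in front of hundreds of viewers without a rehearsal.",
    ],
};
```
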
Shaw emphasizes the importance of prompt engineering, demonstrating how the structure and order of information significantly impact agent responses. He shows how to view agent context through console logs to understand model inputs and improve agent behavior.

The session concludes with discussions about knowledge management, retrieval-augmented generation (RAG), and future developments in AI agent capabilities, including the possibility of dynamically generating character files.

## Hot Takes

1. **OpenAI models are "dumb" due to RLHF and "wokeness" (02:03:00-02:04:07)**
> "But basically, I've also made them sort of useless by RLHFing. Like, very basic capability, like a haystack test out of them. ... I'm against killing the capability and making models way worse than they are for someone's political agenda. I just don't think that's the world we want to live in."

Shaw here expresses frustration with OpenAI's approach to alignment, arguing that RLHF has diminished the capabilities of their models and that this is due to a "woke" agenda. This take is controversial because it attributes technical limitations to political motivations and ignores the complexities of aligning powerful AI systems.

2. **OpenAI models shouldn't be "telling" developers what they can and can't do (02:03:29-02:03:50)**
> "OpenAI, if you're listening, please fucking stop telling me how to run models. You don't know as well as I do. I do this every day. You're a fucking engineer who has to go train, like, an LLM. I actually have to use the LLM."

This rant criticizes OpenAI's models for "telling" developers what they can and can't do, arguing that the models are not as knowledgeable as the developers who are actually using them. This take could be seen as dismissive of the role of AI systems in providing helpful feedback and limitations.

3. **Prompt engineering is the "most easy improvement" for AI agents (02:06:09-02:06:27)**
> "Huge amount of research would go into that... That's where we'll see like the most easy improvement in our agents."

Shaw argues that prompt engineering holds the key to significant improvements in AI agents, stating that it's the "most easy improvement." This take is controversial because it downplays the importance of other areas like model architecture, training data, and algorithm development.

4. **Character files could be generated at runtime, making existing character files obsolete (02:22:05-02:22:53)**
> "The entire character file could be generated at runtime... The agent's like, I have no idea who I am. And you're like, oh, your name is Eliza, and you like berries. OK, cool. I guess I like berries."

This take suggests that character files could be generated at runtime, rendering current character files obsolete. The idea is controversial because it could lead to more dynamic and unpredictable agent behavior, which raises concerns about control and reliability.

5. **A "badge" system will reward developers who create custom actions, evaluators, and providers (02:24:45-02:25:49)**
> "If you want that badge, what I'd like you to do is come to the AI Agent Dev School, make an action, have your agent do something. Those are the kinds of people that I really think we'll want to, you know, keep in our ecosystem and keep busy."

This take suggests a "badge" system to recognize developers who go beyond the basics and create custom components for AI agents. This could be seen as elitist or exclusionary, potentially creating a hierarchy within the AI agent development community.

# AI Agent Dev School Part 3

**Form-Filling Frenzy & Eliza's Wild Ride**

Date: 2024-12-05
YouTube Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU

## Timestamps

**00:00:00** - Intro & Housekeeping:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=0
- Recap of previous sessions (TypeScript, plugins, actions)
- Importance of staying on the latest Eliza branch
- How to pull the latest changes and stash local modifications

**00:08:05** - Building a Form-Filling Agent:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=485
- Introduction to Providers & Evaluators
- Practical use case: Extracting user data (name, location, job)
- Steps for a provider-evaluator loop to gather info and trigger actions

**00:16:15** - Deep Dive into Evaluators:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=975
- Understanding what "evaluator" means in Eliza's context
- When evaluators run and their role in the agent's self-reflection

**00:27:45** - Code walkthrough of the "Fact Evaluator":
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=1675

**00:36:07** - Building a User Data Evaluator:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=2167
- Starting from scratch, creating a basic evaluator
- Registering the evaluator directly in the agent (no plugin)
- Logging evaluator activity and inspecting context

**00:51:50** - Exploring Eliza's Cache Manager:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=3110
- Shaw uses Code2Prompt to analyze cache manager code
- Applying cache manager principles to user data storage

**01:06:01** - Using Claude AI for Code Generation:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=3961
- Pasting code into Claude and giving instructions
- Iterative process: Refining code and providing feedback to Claude

**01:21:18** - Testing the User Data Flow:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=4878
- Running the agent and interacting with it
- Observing evaluator logs and context injections
- Troubleshooting and iterating on code based on agent behavior

**01:30:27** - Adding a Dynamic Provider Based on Completion:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=5427
- Creating a new provider that only triggers after user data is collected
- Example: Providing a secret code or access link as a reward

**01:37:16** - Q&A with the Audience:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=5836
- Python vs. TypeScript agents
- Pre-evaluation vs. post-evaluation hooks
- Agent overwhelm with many plugins/evaluators
- Agentic app use cases beyond chat
- Running stateless agents
- Building AIXBT agents

**01:47:31** - Outro and Next Steps:
- Link: https://www.youtube.com/watch?v=Y1DiqSVy4aU&t=6451
- Recap of key learnings and the potential of provider-evaluator loops
- Call to action: Share project ideas and feedback for future sessions

## Summary

This is the third part of the live stream series "AI Agent Dev School" hosted by Shaw from ai16z, focusing on building AI agents using the Eliza framework.

**Key takeaways:**

* **Updating Eliza:** Shaw emphasizes staying up to date with the rapidly evolving Eliza project due to frequent bug fixes and new features. He provides instructions on pulling the latest changes from the main branch on GitHub.
* **Focus on Providers and Evaluators:** The stream focuses on building a practical provider-evaluator loop to demonstrate a popular use case for AI agents: filling out a form by extracting user information.
* **Form Builder Example:** Shaw walks the audience through building a "form provider" that gathers a user's name, location, and job. This provider utilizes a cache to store already-extracted information and instructs the agent to prompt the user for any missing details.
* **Evaluator Role:** The evaluator continually checks the cache for the completeness of user data. Once all information is extracted, the evaluator triggers an action to send the collected data to an external API (simulated in the example). A compressed code sketch of this loop follows this list.
* **Live Coding and AI Assistance:** Shaw live-codes the example, using tools like Code2Prompt and Claude AI to help generate and refine the code. He advocates for writing code in a human-readable manner, utilizing comments to provide context and guidance for both developers and AI assistants.
* **Agentic Applications:** Shaw highlights the potential of agentic applications to replicate existing website functionality through conversational interfaces, bringing services directly to users within their preferred social media platforms.
* **Community Engagement:** Shaw encourages active participation from the community, suggesting contributions to the project through pull requests and feedback on desired features and patterns for future Dev School sessions.

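The sketch below compresses that provider-evaluator loop into code. It is a hedged reconstruction, not the stream's exact implementation: it assumes the `@ai16z/eliza` exports (`Provider`, `Evaluator`, `composeContext`, `generateText`, `ModelClass`) and a `runtime.cacheManager` exposing `get`/`set`, while the cache key, field names, prompt template, and reward message are illustrative. A production version would also guard the `JSON.parse` against malformed model output.

```typescript
import {
    composeContext,
    generateText,
    ModelClass,
    type Evaluator,
    type IAgentRuntime,
    type Memory,
    type Provider,
} from "@ai16z/eliza";

interface UserData {
    name?: string;
    location?: string;
    job?: string;
}

// Hypothetical cache key: one user-data record per user and character.
const userDataKey = (runtime: IAgentRuntime, message: Memory) =>
    `${runtime.character.name}/user-data/${message.userId}`;

// Provider: tells the agent which fields are still missing so it keeps asking for them.
export const userDataProvider: Provider = {
    get: async (runtime, message) => {
        const data = (await runtime.cacheManager.get<UserData>(userDataKey(runtime, message))) ?? {};
        const missing = (["name", "location", "job"] as const).filter((field) => !data[field]);
        if (missing.length === 0) {
            return "All user details are collected. Thank the user and share the secret reward link.";
        }
        return `You still need the user's ${missing.join(", ")}. Ask for the missing details conversationally.`;
    },
};

// Hypothetical extraction prompt; in the stream this kind of template was drafted with AI assistance.
const extractionTemplate = `Based on the conversation below, extract the user's name, location, and job.
Respond with a JSON object of the form {"name": string | null, "location": string | null, "job": string | null}.

{{recentMessages}}`;

// Evaluator: runs after each exchange and merges newly mentioned details into the cache.
export const userDataEvaluator: Evaluator = {
    name: "EXTRACT_USER_DATA",
    similes: [],
    description: "Extracts the user's name, location, and job from the conversation.",
    validate: async (runtime, message) => {
        const data = (await runtime.cacheManager.get<UserData>(userDataKey(runtime, message))) ?? {};
        return !(data.name && data.location && data.job); // stop running once the form is complete
    },
    handler: async (runtime, message) => {
        const state = await runtime.composeState(message);
        const context = composeContext({ state, template: extractionTemplate });
        const extracted = JSON.parse(await generateText({ runtime, context, modelClass: ModelClass.SMALL }));

        const key = userDataKey(runtime, message);
        const existing = (await runtime.cacheManager.get<UserData>(key)) ?? {};
        await runtime.cacheManager.set(key, {
            name: existing.name ?? extracted.name ?? undefined,
            location: existing.location ?? extracted.location ?? undefined,
            job: existing.job ?? extracted.job ?? undefined,
        });
    },
    examples: [],
};
```
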
**Overall, this live stream provided a practical tutorial on building a common AI agent use case (form filling) while emphasizing the potential of the Eliza framework for developing a wide range of agentic applications.**

## Hot Takes

1. **"I'm just going to struggle bus some code today." (00:09:31)** - Shaw embraces a "struggle bus" approach, showcasing live coding with errors and debugging, reflecting the reality of AI agent development. This contrasts with polished tutorials, highlighting the iterative and messy nature of this new technology.

2. **"I'm actually not gonna put this in a plugin. I'm gonna put this in the agent... just so you can see what happens if you were to, like, make your own agent without using a plugin at all." (00:37:24)** - Shaw goes against the Eliza framework's plugin structure, showing viewers how to bypass it entirely. This bold move emphasizes flexibility, but could spark debate on best practices and potential drawbacks.

3. **"I really don't remember conversations from people very well, like verbatim, but I definitely remember like the gist, the context, the really needy ideas." (00:24:48)** - Shaw draws a controversial parallel between human memory and the Eliza agent's fact extraction. Reducing human interaction to "needy ideas" is provocative, questioning the depth of social understanding AI agents currently possess.

4. **"It's just an LLM. It's just making those numbers up. It could be off. I don't really buy the confidence here." (01:13:56)** - Shaw dismisses the confidence scores generated by the large language model (LLM), revealing a distrust of these black-box outputs. This skepticism is crucial in a field where relying solely on AI's self-assessment can be misleading.

5. **"Dude, that's a $250 million market cap token. Let's get that shit in Bubba Cat." (01:45:34)** - Shaw throws out a blunt, market-driven statement regarding the AIXBT token. Bringing finance directly into the technical discussion highlights the intertwined nature of AI development and potential financial incentives, a topic often tiptoed around.