[Docs] Combine home and quick start pages #1574

Merged · 10 commits · Apr 8, 2025
152 changes: 117 additions & 35 deletions README.md
@@ -124,12 +124,22 @@ We have several agent concepts in AG2 to help you build your AI agents. We intro

### Conversable agent

The [ConversableAgent](https://docs.ag2.ai/latest/docs/api-reference/autogen/ConversableAgent) is the fundamental building block of AG2, designed to enable seamless communication between AI entities. This core agent type handles message exchange and response generation, serving as the base class for all agents in the framework.

In the example below, we'll create a simple information validation workflow with two specialized agents that communicate with each other:

Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.
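Setting the variable is a one-liner in most shells; the value below is a placeholder, not a real key:

```shell
# Replace the placeholder with your actual OpenAI API key
export OPENAI_API_KEY="your-api-key-here"
```

On Windows, `setx OPENAI_API_KEY "your-api-key-here"` achieves the same persistently.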

```python
# 1. Import ConversableAgent class
from autogen import ConversableAgent, LLMConfig

# 2. Define our LLM configuration for OpenAI's GPT-4o mini
# uses the OPENAI_API_KEY environment variable
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")


# 3. Create our LLM agent
with llm_config:
# Create an AI agent
assistant = ConversableAgent(
@@ -143,7 +153,7 @@ with llm_config:
system_message="You are a fact-checking assistant.",
)

# 4. Start the conversation
assistant.initiate_chat(
recipient=fact_checker,
message="What is AG2?",
@@ -153,33 +163,42 @@ assistant.initiate_chat(

### Human in the loop

Human oversight is crucial for many AI workflows, especially when dealing with critical decisions, creative tasks, or situations requiring expert judgment. AG2 makes integrating human feedback seamless through its human-in-the-loop functionality.
You can configure how and when human input is solicited using the `human_input_mode` parameter:

- `ALWAYS`: Requires human input for every response
- `NEVER`: Operates autonomously without human involvement
- `TERMINATE`: Requests human input only when a termination condition is met
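The three modes differ only in when the framework pauses for a human. The gating logic can be sketched roughly as follows (a simplified illustration, not AG2's actual implementation):

```python
# Simplified sketch of when each human_input_mode pauses for input
def needs_human_input(mode: str, is_termination_msg: bool) -> bool:
    if mode == "ALWAYS":
        return True
    if mode == "TERMINATE":
        # Only pause when a termination condition has been met
        return is_termination_msg
    return False  # "NEVER": fully autonomous

print(needs_human_input("ALWAYS", False))    # True
print(needs_human_input("TERMINATE", True))  # True
print(needs_human_input("NEVER", True))      # False
```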

For convenience, AG2 provides the specialized `UserProxyAgent` class that automatically sets `human_input_mode` to `ALWAYS` and supports code execution:

Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

```python
# 1. Import ConversableAgent and UserProxyAgent classes
from autogen import ConversableAgent, UserProxyAgent, LLMConfig

# 2. Define our LLM configuration for OpenAI's GPT-4o mini
# uses the OPENAI_API_KEY environment variable
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

# 3. Create our LLM agent
with llm_config:
assistant = ConversableAgent(
name="assistant",
system_message="You are a helpful assistant.",
)

# 4. Create a human agent with manual input mode
human = ConversableAgent(
name="human",
human_input_mode="ALWAYS"
)
# or
human = UserProxyAgent(name="human", code_execution_config={"work_dir": "coding", "use_docker": False})

# 5. Start the chat
human.initiate_chat(
recipient=assistant,
message="Hello! What's 2 + 2?"
@@ -189,45 +208,106 @@ human.initiate_chat(

### Orchestrating multiple agents

AG2 enables sophisticated multi-agent collaboration through flexible orchestration patterns, allowing you to create dynamic systems where specialized agents work together to solve complex problems.

The framework offers both custom orchestration and several built-in collaboration patterns including `GroupChat` and `Swarm`.

Here's how to implement a collaborative team for curriculum development using GroupChat:

Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

```python
from autogen import ConversableAgent, GroupChat, GroupChatManager, LLMConfig

# Put your key in the OPENAI_API_KEY environment variable
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

planner_message = """You are a classroom lesson agent.
Given a topic, write a lesson plan for a fourth grade class.
Use the following format:
<title>Lesson plan title</title>
<learning_objectives>Key learning objectives</learning_objectives>
<script>How to introduce the topic to the kids</script>
"""

reviewer_message = """You are a classroom lesson reviewer.
You compare the lesson plan to the fourth grade curriculum and provide a maximum of 3 recommended changes.
Provide only one round of reviews to a lesson plan.
"""

# 1. Add a separate 'description' for our planner and reviewer agents
planner_description = "Creates or revises lesson plans."

reviewer_description = """Provides one round of reviews to a lesson plan
for the lesson_planner to revise."""

with llm_config:
lesson_planner = ConversableAgent(
name="planner_agent",
system_message=planner_message,
description=planner_description,
)

lesson_reviewer = ConversableAgent(
name="reviewer_agent",
system_message=reviewer_message,
description=reviewer_description,
)

# 2. The teacher's system message can also be used as a description, so we don't define it
teacher_message = """You are a classroom teacher.
You decide topics for lessons and work with a lesson planner
and reviewer to create and finalise lesson plans.
When you are happy with a lesson plan, output "DONE!".
"""

with llm_config:
teacher = ConversableAgent(
name="teacher_agent",
system_message=teacher_message,
# 3. Our teacher can end the conversation by saying DONE!
is_termination_msg=lambda x: "DONE!" in (x.get("content", "") or "").upper(),
)

# 4. Create the GroupChat with agents and selection method
groupchat = GroupChat(
agents=[teacher, lesson_planner, lesson_reviewer],
speaker_selection_method="auto",
messages=[],
)

# 5. Our GroupChatManager manages the conversation and uses an LLM to select the next agent
manager = GroupChatManager(
name="group_manager",
groupchat=groupchat,
llm_config=llm_config,
)

# 6. Initiate the chat with the GroupChatManager as the recipient
teacher.initiate_chat(
recipient=manager,
message="Today, let's introduce our kids to the solar system."
)
```

When executed, this code creates a collaborative system where the teacher initiates the conversation, and the lesson planner and reviewer agents work together to create and refine a lesson plan. The GroupChatManager orchestrates the conversation, selecting the next agent to respond based on the context of the discussion.
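The `is_termination_msg` check above is an ordinary predicate over the last message dict, so its edge cases can be exercised in isolation (a standalone sketch, independent of AG2):

```python
# Standalone version of the termination predicate used by the teacher agent
def is_done(message: dict) -> bool:
    # Treat missing or None content as non-terminating
    return "DONE!" in (message.get("content", "") or "").upper()

print(is_done({"content": "Looks great. done!"}))  # True
print(is_done({"content": None}))                  # False
print(is_done({}))                                 # False
```

The `or ""` guard matters: agents can emit messages whose `content` is `None`, and calling `.upper()` on it would raise a `TypeError`.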

For workflows requiring more structured processes, explore the Swarm pattern in the detailed [documentation](https://docs.ag2.ai/latest/docs/user-guide/advanced-concepts/conversation-patterns-deep-dive).

### Tools

Agents gain significant utility through tools as they provide access to external data, APIs, and functionality.

Note: Before running this code, make sure to set your `OPENAI_API_KEY` as an environment variable. This example uses `gpt-4o-mini`, but you can replace it with any other [model](https://docs.ag2.ai/latest/docs/user-guide/models/amazon-bedrock) supported by AG2.

```python
from datetime import datetime
from typing import Annotated

from autogen import ConversableAgent, register_function, LLMConfig

# Put your key in the OPENAI_API_KEY environment variable
llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

# 1. Our tool, returns the day of the week for a given date
def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:
@@ -236,10 +316,10 @@ def get_weekday(date_string: Annotated[str, "Format: YYYY-MM-DD"]) -> str:

# 2. Agent for determining whether to run the tool
with llm_config:
date_agent = ConversableAgent(
name="date_agent",
system_message="You get the day of the week for a given date.",
)

# 3. And an agent for executing the tool
executor_agent = ConversableAgent(
@@ -259,8 +339,10 @@ register_function(
chat_result = executor_agent.initiate_chat(
recipient=date_agent,
message="I was born on the 25th of March 1995, what day was it?",
max_turns=2,
)

print(chat_result.chat_history[-1]["content"])
```
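Since the registered tool is plain Python, its logic can be verified without any agents or API keys:

```python
from datetime import datetime

# Same logic as the get_weekday tool above, runnable standalone
def get_weekday(date_string: str) -> str:
    date = datetime.strptime(date_string, "%Y-%m-%d")
    return date.strftime("%A")

print(get_weekday("1995-03-25"))  # Saturday
```

Testing tools in isolation like this is a quick way to rule out tool bugs before debugging the agent conversation around them.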

### Advanced agentic design patterns
10 changes: 9 additions & 1 deletion autogen/_website/generate_mkdocs.py
@@ -370,6 +370,9 @@ def process_and_copy_files(input_dir: Path, output_dir: Path, files: list[Path])
if file.suffix == ".mdx":
dest = output_dir / file.relative_to(input_dir).with_suffix(".md")

if file.name == "home.mdx":
dest = output_dir / "home.md"

if f"{sep}user-stories{sep}" in str(dest):
dest = rename_user_story(dest)

@@ -437,7 +440,12 @@ def format_navigation(nav: list[NavigationGroup], depth: int = 0, keywords: Opti
result.append(format_page_entry(page, indent, keywords))

ret_val = "\n".join(result)

ret_val = ret_val.replace(
"- Home\n - [Home](docs/home/home.md)\n",
"- [Home](docs/home.md)\n",
)
ret_val = ret_val.replace("- FAQs\n - [Faq](docs/faq/FAQ.md)\n", "- [FAQs](docs/faq/FAQ.md)\n")
return ret_val


1 change: 1 addition & 0 deletions pyproject.toml
@@ -284,6 +284,7 @@ docs = [
"mkdocs-minify-plugin==0.8.0",
"mkdocs-macros-plugin==1.3.7", # includes with variables
"mkdocs-glightbox==0.4.0", # img zoom
"mkdocs-redirects==1.2.2", # required for handling redirects natively
"pillow", # required for mkdocs-glightbox
"cairosvg", # required for mkdocs-glightbox
"pdoc3==0.11.6",
6 changes: 2 additions & 4 deletions test/website/test_generate_mkdocs.py
@@ -1048,7 +1048,7 @@ def test_invalid_syntax_admonition(self) -> None:
@pytest.fixture
def navigation() -> list[NavigationGroup]:
return [
{"group": "Home", "pages": ["docs/home/home"]},
{
"group": "User Guide",
"pages": [
@@ -1088,9 +1088,7 @@ def navigation() -> list[NavigationGroup]:

@pytest.fixture
def expected_nav() -> str:
return """- [Home](docs/home.md)
- User Guide
- Basic Concepts
- [Installing AG2](docs/user-guide/basic-concepts/installing-ag2.md)