Merge pull request #1 from harishmohanraj/mintlify-migration
Mintlify migration
harishmohanraj authored Dec 13, 2024
2 parents 4fe28fc + bf20b62 commit e7f3aee
Showing 65 changed files with 1,433 additions and 303 deletions.
39 changes: 6 additions & 33 deletions website/README.md
@@ -1,40 +1,13 @@
 # Website
+## Development

-This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
+Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command

-## Prerequisites
-
-To build and test documentation locally, begin by downloading and installing [Node.js](https://nodejs.org/en/download/), and then installing [Yarn](https://classic.yarnpkg.com/en/).
-On Windows, you can install via the npm package manager (npm) which comes bundled with Node.js:
-
-```console
-npm install --global yarn
-```
-
-## Installation
-
 ```console
-pip install pydoc-markdown pyyaml colored
-cd website
-yarn install
+npm install
 ```

-### Install Quarto
-
-`quarto` is used to render notebooks.
-
-Install it [here](https://github.com/quarto-dev/quarto-cli/releases).
+Run the following command at the root of your documentation (where mint.json is)

-> Note: Ensure that your `quarto` version is `1.5.23` or higher.
-## Local Development
-
-Navigate to the `website` folder and run:
-
 ```console
-pydoc-markdown
-python ./process_notebooks.py render
-yarn start
-```
-
-This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
+npm run mintlify:dev
 ```
@@ -64,7 +64,7 @@ class CompletionResponseStreamChoice(BaseModel):
```


-## Interact with model using `oai.Completion` (requires openai<1)
+## Interact with model using `oai.Completion` (requires openai{'<'}1)

Now the models can be directly accessed through openai-python library as well as `autogen.oai.Completion` and `autogen.oai.ChatCompletion`.

4 changes: 1 addition & 3 deletions website/blog/2024-02-02-AutoAnny/index.mdx
@@ -5,10 +5,8 @@ authors:
tags: [AutoGen]
---

-import AutoAnnyLogo from './img/AutoAnnyLogo.jpg';
-
 <div style={{ display: "flex", justifyContent: "center" }}>
-<img src={AutoAnnyLogo} alt="AutoAnny Logo" style={{ width: "250px" }} />
+<img src='./img/AutoAnnyLogo.jpg' alt="AutoAnny Logo" style={{ width: "250px" }} />
</div>
<p align="center"><em>Anny is a Discord bot powered by AutoGen to help AutoGen's Discord server.</em></p>

4 changes: 2 additions & 2 deletions website/blog/2024-03-03-AutoGen-Update/index.mdx
@@ -38,10 +38,10 @@ Many users have deep understanding of the value in different dimensions, such as

> The same reason autogen is significant is the same reason OOP is a good idea. Autogen packages up all that complexity into an agent I can create in one line, or modify with another.
-<!--
+{/*
 I had lots of ideas I wanted to implement, but it needed a framework like this
 and I am just not the guy to make such a robust and intelligent framework.
--->
+*/}

Over time, more and more users share their experiences in using or contributing to autogen.

Expand Down
4 changes: 2 additions & 2 deletions website/docs/Examples.md → website/docs/Examples.mdx
@@ -42,7 +42,7 @@ Links to notebook examples:
### Applications

- Automated Continual Learning from New Data - [View Notebook](/docs/notebooks/agentchat_stream)
-<!-- - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization -->
+{/* - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization */}
- [AutoAnny](https://github.com/ag2ai/build-with-ag2/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen

### RAG
@@ -98,7 +98,7 @@ Links to notebook examples:

### Long Context Handling

-<!-- - Conversations with Chat History Compression Enabled - [View Notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_compression.ipynb) -->
+{/* - Conversations with Chat History Compression Enabled - [View Notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_compression.ipynb) */}
- Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages)

### Evaluation and Assessment
27 changes: 13 additions & 14 deletions website/docs/FAQ.mdx
@@ -1,8 +1,7 @@
-import TOCInline from "@theme/TOCInline";
-
-# Frequently Asked Questions
-
-<TOCInline toc={toc} />
+---
+title: Frequently Asked Questions
+sidebarTitle: FAQ
+---

## Install the correct package - `autogen`

@@ -34,8 +33,8 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for

Yes. You currently have two options:

-- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://ag2ai.github.io/ag2/blog/2023/07/14/Local-LLMs for an example.
-- You can supply your own custom model implementation and use it with Autogen. Please check https://ag2ai.github.io/ag2/blog/2024/01/26/Custom-Models for more information.
+- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check [here](/blog/2023-07-14-Local-LLMs) for an example.
+- You can supply your own custom model implementation and use it with Autogen. Please check [here](/blog/2024-01-26-Custom-Models) for more information.
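As a sketch of the first option, a `config_list` entry pointing autogen at a local OpenAI-compatible server might look like the following. The model name, URL, and API key here are placeholders for illustration, not real services:

```python
# Hypothetical example: routing autogen at a local OpenAI-compatible
# endpoint (e.g. one served by FastChat or LM Studio). The model name,
# base_url, and api_key below are placeholders.
config_list = [
    {
        "model": "local-llama",                   # whatever name the local server exposes
        "base_url": "http://localhost:8000/v1",   # OpenAI-compatible REST endpoint
        "api_key": "NULL",                        # many local servers accept any non-empty key
    }
]

print(config_list[0]["base_url"])
```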

## Handle Rate Limit Error and Timeout Error

@@ -52,9 +51,9 @@ When you call `initiate_chat` the conversation restarts by default. You can use

## `max_consecutive_auto_reply` vs `max_turn` vs `max_round`

-- [`max_consecutive_auto_reply`](https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) the maximum number of consecutive auto replie (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
-- [`max_turns` in `ConversableAgent.initiate_chat`](https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human)
-- [`max_round` in GroupChat](https://ag2ai.github.io/ag2/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
+- [`max_consecutive_auto_reply`](/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) is the maximum number of consecutive auto replies (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
+- [`max_turns` in `ConversableAgent.initiate_chat`](/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human).
+- [`max_round` in GroupChat](/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
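The key difference between the first two limits can be sketched in plain Python (this is illustrative pseudologic, not autogen's implementation): `max_turns` caps all turns, while `max_consecutive_auto_reply` only counts uninterrupted auto replies and resets whenever human input arrives.

```python
# Toy simulation of max_consecutive_auto_reply vs max_turns.
# Illustrative only -- not autogen's actual control flow.

def run_chat(replies, max_consecutive_auto_reply=3, max_turns=10):
    """replies: sequence of 'auto' or 'human'. Returns number of turns taken."""
    consecutive_auto = 0
    turns = 0
    for source in replies:
        if turns >= max_turns:                     # max_turns caps every turn
            break
        if source == "auto":
            consecutive_auto += 1
            if consecutive_auto > max_consecutive_auto_reply:
                break                              # too many consecutive auto replies
        else:
            consecutive_auto = 0                   # human input resets the counter
        turns += 1
    return turns

# Four or more autos in a row trip the auto-reply limit of 3...
print(run_chat(["auto"] * 6, max_consecutive_auto_reply=3))   # 3
# ...but a human reply in between resets the counter.
print(run_chat(["auto", "auto", "human", "auto", "auto"], max_consecutive_auto_reply=3))  # 5
```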

## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?

@@ -159,7 +158,7 @@ Explanation: Per [this gist](https://gist.github.com/defulmere/8b9695e415a442710

(from [issue #478](https://github.com/microsoft/autogen/issues/478))

-See here https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent/#register_reply
+See [`register_reply`](/docs/reference/agentchat/conversable_agent/#register_reply).

For example, you can register a reply function that gets called when `generate_reply` is called for an agent.

@@ -188,11 +187,11 @@ In the above, we register a `print_messages` function that is called each time t

## How to get last message?

-Refer to https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent/#last_message
+Refer to [`last_message`](/docs/reference/agentchat/conversable_agent/#last_message).

## How to get each agent message?

-Please refer to https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#chat_messages
+Please refer to [`chat_messages`](/docs/reference/agentchat/conversable_agent#chat_messages).

## When using autogen docker, is it always necessary to reinstall modules?

@@ -285,4 +284,4 @@ RUN apt-get clean && \
apt-get install sudo git npm # and whatever packages need to be installed in this specific version of the devcontainer
```

This is a combination of StackOverflow suggestions [here](https://stackoverflow.com/a/48777773/2114580) and [here](https://stackoverflow.com/a/76092743/2114580).
4 changes: 2 additions & 2 deletions website/docs/Gallery.mdx
@@ -2,8 +2,8 @@
hide_table_of_contents: true
---

-import GalleryPage from '../src/components/GalleryPage';
-import galleryData from "../src/data/gallery.json";
+import GalleryPage from "/snippets/components/GalleryPage.js";
+import galleryData from "/snippets/data/gallery.json";

# Gallery

51 changes: 29 additions & 22 deletions website/docs/Getting-Started.mdx
@@ -1,7 +1,6 @@
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-# Getting Started
+---
+title: "Getting Started"
+---

AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating
cooperation among multiple agents to solve tasks. AG2 aims to provide an easy-to-use
@@ -10,7 +9,7 @@ like PyTorch for Deep Learning. It offers features such as agents that can conve
with other agents, LLM and tool use support, autonomous and human-in-the-loop workflows,
and multi-agent conversation patterns.

-![AG2 Overview](/img/autogen_agentchat.png)
+![AG2 Overview](/static/img/autogen_agentchat.png)

### Main Features

@@ -37,12 +36,15 @@ Microsoft, Penn State University, and University of Washington.
```sh
pip install autogen
```
-:::tip
-You can also install with different [optional dependencies](/docs/installation/Optional-Dependencies).
-:::
+<div class="tip">
+<Tip>
+You can also install with different [optional dependencies](/website/docs/installation/Optional-Dependencies).
+</Tip>
+</div>

<Tabs>
-<TabItem value="nocode" label="No code execution" default>
+<Tab title="No code execution">

```python
import os
@@ -59,12 +61,14 @@ user_proxy.initiate_chat(
)
```

-</TabItem>
-<TabItem value="local" label="Local execution" default>
+</Tab>
+<Tab title="Local execution">

-:::warning
-When asked, be sure to check the generated code before continuing to ensure it is safe to run.
-:::
+<div class="warning">
+<Warning>
+When asked, be sure to check the generated code before continuing to ensure it is safe to run.
+</Warning>
+</div>

```python
import os
@@ -85,8 +89,8 @@ user_proxy.initiate_chat(
)
```

-</TabItem>
-<TabItem value="docker" label="Docker execution" default>
+</Tab>
+<Tab title="Docker execution">

```python
import os
@@ -110,13 +114,16 @@

Open `coding/plot.png` to see the generated plot.

-</TabItem>
+</Tab>

</Tabs>

-:::tip
-Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
-:::
+<div class="tip">
+<Tip>
+Learn more about configuring LLMs for agents [here](/website/docs/topics/llm_configuration).
+</Tip>
+</div>

#### Multi-Agent Conversation Framework

@@ -125,7 +132,7 @@ By automating chat among multiple capable agents, one can easily make them colle

The figure below shows an example conversation flow with AG2.

-![Agent Chat Example](/img/chat_example.png)
+![Agent Chat Example](/static/img/chat_example.png)

### Where to Go Next?

@@ -141,7 +148,7 @@ The figure below shows an example conversation flow with AG2.
If you like our project, please give it a [star](https://github.com/ag2ai/ag2) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).

<iframe
-src="https://ghbtns.com/github-btn.html?user=ag2ai&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large"
+src="https://ghbtns.com/github-btn.html?user=ag2ai&amp;repo=ag2&amp;type=star&amp;count=true&amp;size=large"
frameborder="0"
scrolling="0"
width="170"
@@ -1,4 +1,6 @@
-# Migration Guide
+---
+title: Migration Guide
+---

## Migrating to 0.2

@@ -26,7 +28,7 @@ autogen.runtime_logging.start()
# Stop logging
autogen.runtime_logging.stop()
```
-Checkout [Logging documentation](https://ag2ai.github.io/ag2/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_logging.ipynb) to learn more.
+Check out the [Logging documentation](/docs/Use-Cases/enhanced_inference#logging) and [Logging example notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_logging.ipynb) to learn more.

Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
- `seed` in autogen is renamed into `cache_seed` to accommodate the newly added `seed` param in openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()` for being automatically decided by `cache_seed`: int | None. The difference between autogen's `cache_seed` and openai's `seed` is that:
4 changes: 3 additions & 1 deletion website/docs/Research.md → website/docs/Research.mdx
@@ -1,4 +1,6 @@
-# Research
+---
+title: Research
+---

For technical details, please check our technical report and research publications.

@@ -1,4 +1,6 @@
-# Multi-agent Conversation Framework
+---
+title: "Multi-agent Conversation Framework"
+---

AutoGen offers a unified multi-agent conversation framework as a high-level abstraction of using foundation models. It features capable, customizable and conversable agents which integrate LLMs, tools, and humans via automated agent chat.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.
@@ -1,10 +1,12 @@
-# Enhanced Inference
+---
+title: "Enhanced Inference"
+---

`autogen.OpenAIWrapper` provides enhanced LLM inference for `openai>=1`.
`autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` for enhanced LLM inference using `openai<1`.
There are a number of benefits of using `autogen` to perform inference: performance tuning, API unification, caching, error handling, multi-config inference, result filtering, templating and so on.

-## Tune Inference Parameters (for openai<1)
+## Tune Inference Parameters (for openai{'<'}1)

Find a list of examples in this page: [Tune Inference Parameters Examples](../Examples.md#inference-hyperparameters-tuning)

@@ -68,7 +70,7 @@ Users can specify the (optional) search range for each hyperparameter.
1. model. Either a constant str, or multiple choices specified by `flaml.tune.choice`.
1. prompt/messages. Prompt is either a str or a list of strs, of the prompt templates. messages is a list of dicts or a list of lists, of the message templates.
Each prompt/message template will be formatted with each data instance. For example, the prompt template can be:
-"{problem} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed{{}}."
+"\{problem\} Solve the problem carefully. Simplify your answer as much as possible. Put the final answer in \\boxed\{\{}}."
And `{problem}` will be replaced by the "problem" field of each data instance.
1. max_tokens, n, best_of. They can be constants, or specified by `flaml.tune.randint`, `flaml.tune.qrandint`, `flaml.tune.lograndint` or `flaml.qlograndint`. By default, max_tokens is searched in [50, 1000); n is searched in [1, 100); and best_of is fixed to 1.
1. stop. It can be a str or a list of strs, or a list of lists of strs or None. Default is None.
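The brace escaping in the prompt template above matters: Python's `str.format` treats doubled braces `{{}}` as a literal `{}`, so the LaTeX `\boxed{}` survives formatting while `{problem}` is substituted per data instance. A quick plain-Python illustration:

```python
# How a prompt template like the one above is filled in with str.format.
# Doubled braces {{}} come out as literal braces, so \boxed{} stays intact
# while {problem} is replaced by each data instance's "problem" field.
template = (
    "{problem} Solve the problem carefully. Simplify your answer "
    "as much as possible. Put the final answer in \\boxed{{}}."
)

instance = {"problem": "What is 2 + 2?"}
prompt = template.format(**instance)
print(prompt)
```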
4 changes: 2 additions & 2 deletions website/docs/Use-Cases/images/agent_example.png
(binary image file; diff not shown)
4 changes: 2 additions & 2 deletions website/docs/Use-Cases/images/app.png
(binary image file; diff not shown)
4 changes: 2 additions & 2 deletions website/docs/Use-Cases/images/autogen_agents.png
(binary image file; diff not shown)
@@ -1,4 +1,7 @@
-# AutoGen Studio FAQs
+---
+title: FAQs
+sidebarTitle: FAQs
+---

## Q: How do I specify the directory where files (e.g. database) are stored?
