diff --git a/README.md b/README.md
index 663e0b451..f0b44cefc 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,6 @@
-
-
![Logo](https://github.com/assafelovic/gpt-researcher/assets/13554167/20af8286-b386-44a5-9a83-3be1365139c3)
+
![Logo](https://github.com/assafelovic/gpt-researcher/assets/13554167/20af8286-b386-44a5-9a83-3be1365139c3)
####
@@ -18,47 +14,43 @@
[![Docker Image Version](https://img.shields.io/docker/v/elestio/gpt-researcher/latest?arch=amd64&style=flat&logo=docker&logoColor=white&color=1D63ED)](https://hub.docker.com/r/gptresearcher/gpt-researcher)
[![Twitter Follow](https://img.shields.io/twitter/follow/assaf_elovic?style=social)](https://twitter.com/assaf_elovic)
-[English](README.md) |
-[中文](README-zh_CN.md) |
-[日本語](README-ja_JP.md) |
-[한국어](README-ko_KR.md)
+[English](README.md) | [中文](README-zh_CN.md) | [日本語](README-ja_JP.md) | [한국어](README-ko_KR.md)
+
# 🔎 GPT Researcher
**GPT Researcher is an autonomous agent designed for comprehensive web and local research on any given task.**
-The agent can produce detailed, factual and unbiased research reports, with customization options for focusing on relevant resources and outlines. Inspired by the recent [Plan-and-Solve](https://arxiv.org/abs/2305.04091) and [RAG](https://arxiv.org/abs/2005.11401) papers, GPT Researcher addresses issues of misinformation, speed, determinism and reliability, offering a more stable performance and increased speed through parallelized agent work, as opposed to synchronous operations.
+The agent produces detailed, factual, and unbiased research reports with customization options for focusing on relevant resources and outlines. Inspired by the recent [Plan-and-Solve](https://arxiv.org/abs/2305.04091) and [RAG](https://arxiv.org/abs/2005.11401) papers, GPT Researcher addresses misinformation, speed, determinism, and reliability by offering stable performance and increased speed through parallelized agent work.
-**Our mission is to empower individuals and organizations with accurate, unbiased, and factual information by leveraging the power of AI.**
+**Our mission is to empower individuals and organizations with accurate, unbiased, and factual information through AI.**
## Why GPT Researcher?
-- Forming objective conclusions for manual research tasks can take time, sometimes weeks, to find the right resources and information
-- Current LLMs are trained on past and outdated information, with heavy risks of hallucinations, making them almost irrelevant for research tasks.
-- Current LLMs are limited to short token outputs, which are insufficient for long, detailed research reports (over 2,000 words).
-- Services that enable web searches, such as ChatGPT or Perplexity, only consider limited sources and content, which in some cases results in misinformation and shallow results.
-- Using only a selection of web sources can create bias in determining the right conclusions for research tasks.
+- Objective conclusions for manual research can take weeks, requiring vast resources and time.
+- LLMs trained on outdated information can hallucinate, becoming irrelevant for current research tasks.
+- Current LLMs have token limitations, insufficient for generating long research reports.
+- Limited web sources in existing services lead to misinformation and shallow results.
+- Selective web sources can introduce bias into research tasks.
## Demo
https://github.com/user-attachments/assets/092e9e71-7e27-475d-8c4f-9dddd28934a3
## Architecture
-The main idea is to run 'planner' and 'execution' agents, where the planner generates questions for research, and the execution agents seek the most relevant information based on each generated research question. Finally, the planner filters and aggregates all related information and creates a research report.
-The agents leverage both `gpt-4o-mini` and `gpt-4o` (128K context) to complete a research task. We optimize for costs using each only when necessary. **The average research task takes about 2 minutes to complete and costs approximately $0.005.**
+
+The core idea is to run 'planner' and 'execution' agents. The planner generates research questions, while the execution agents gather relevant information in parallel. The planner then aggregates all findings into a comprehensive report.
-
-
-More specifically:
-* Create a domain specific agent based on research query or task.
-* Generate a set of research questions that together form an objective opinion on any given task.
-* For each research question, trigger a crawler agent that scrapes online resources for information relevant to the given task.
-* For each scraped resources, summarize based on relevant information and keep track of its sources.
-* Finally, filter and aggregate all summarized sources and generate a final research report.
+Steps:
+* Create a task-specific agent based on a research query.
+* Generate questions that collectively form an objective opinion on the task.
+* Trigger a crawler agent to gather information for each question.
+* Summarize each resource and track its sources.
+* Filter and aggregate summaries into a final research report.
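+The planner/executor flow above can be sketched in a few lines. This is an illustrative outline only, not GPT Researcher's actual API: the function names and the stubbed LLM/crawler calls are placeholders for the real planner and execution agents.
+
+```python
+# Minimal sketch of the planner/executor pattern described above.
+# plan_research and execute_question stand in for real LLM/crawler calls.
+import asyncio
+
+async def plan_research(query: str) -> list[str]:
+    # A planner LLM call would generate research sub-questions here.
+    return [f"{query}: background", f"{query}: recent developments"]
+
+async def execute_question(question: str) -> str:
+    # An execution agent would scrape and summarize sources here.
+    return f"summary for {question!r}"
+
+async def research(query: str) -> str:
+    questions = await plan_research(query)
+    # Execution agents run in parallel, one per research question.
+    summaries = await asyncio.gather(*(execute_question(q) for q in questions))
+    # The planner aggregates all summaries into a final report.
+    return "\n".join(summaries)
+
+report = asyncio.run(research("quantum computing"))
+print(report)
+```
+
+Running the execution agents concurrently with `asyncio.gather` is what gives the parallel speedup over synchronous, one-question-at-a-time research.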
## Tutorials
- [How it Works](https://docs.gptr.dev/blog/building-gpt-researcher)
@@ -66,74 +58,60 @@ More specifically:
- [Live Demo](https://www.loom.com/share/6a3385db4e8747a1913dd85a7834846f?sid=a740fd5b-2aa3-457e-8fb7-86976f59f9b8)
## Features
-- 📝 Generate research, outlines, resources and lessons reports with local documents and web sources
-- 🖼️ Supports smart article image scraping and filtering
-- 📜 Can generate long and detailed research reports (over 2K words)
-- 🌐 Aggregates over 20 web sources per research to form objective and factual conclusions
-- 🖥️ Includes both lightweight (HTML/CSS/JS) and production ready (NextJS + Tailwind) UX/UI
-- 🔍 Scrapes web sources with javascript support
-- 📂 Keeps track and context and memory throughout the research process
-- 📄 Export research reports to PDF, Word and more...
-## 📖 Documentation
+- 📝 Generate detailed research reports using web and local documents.
+- 🖼️ Smart image scraping and filtering for reports.
+- 📜 Generate detailed reports exceeding 2,000 words.
+- 🌐 Aggregate over 20 sources for objective conclusions.
+- 🖥️ Frontend available in lightweight (HTML/CSS/JS) and production-ready (NextJS + Tailwind) versions.
+- 🔍 JavaScript-enabled web scraping.
+- 📂 Maintains memory and context throughout research.
+- 📄 Export reports to PDF, Word, and other formats.
-Please see [here](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started) for full documentation on:
+## 📖 Documentation
-- Getting started (installation, setting up the environment, simple examples)
-- Customization and configuration
-- How-To examples (demos, integrations, docker support)
-- Reference (full API docs)
+See the [Documentation](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started) for:
+- Installation and setup guides
+- Configuration and customization options
+- How-To examples
+- Full API references
## ⚙️ Getting Started
-### Installation
-> **Step 0** - Install Python 3.11 or later. [See here](https://www.tutorialsteacher.com/python/install-python) for a step-by-step guide.
-
-> **Step 1** - Download the project and navigate to its directory
-
-```bash
-git clone https://github.com/assafelovic/gpt-researcher.git
-cd gpt-researcher
-```
-> **Step 3** - Set up API keys using two methods: exporting them directly or storing them in a `.env` file.
+### Installation
-For Linux/Windows temporary setup, use the export method:
+1. Install Python 3.11 or later. [Guide](https://www.tutorialsteacher.com/python/install-python).
+2. Clone the project and navigate to the directory:
-```bash
-export OPENAI_API_KEY={Your OpenAI API Key here}
-export TAVILY_API_KEY={Your Tavily API Key here}
-```
+ ```bash
+ git clone https://github.com/assafelovic/gpt-researcher.git
+ cd gpt-researcher
+ ```
-For a more permanent setup, create a `.env` file in the current `gpt-researcher` directory and input the env vars (without `export`).
+3. Set up API keys, either by exporting them (temporary) or by storing them in a `.env` file in the `gpt-researcher` directory (same variables, without `export`):
-- The default LLM is [GPT](https://platform.openai.com/docs/guides/gpt), but you can use other LLMs such as `claude`, `ollama3`, `gemini`, `mistral` and more. To learn how to change the LLM provider, see the [LLMs documentation](https://docs.gptr.dev/docs/gpt-researcher/llms/llms) page. Please note: this project is optimized for OpenAI GPT models.
-- The default retriever is [Tavily](https://app.tavily.com), but you can refer to other retrievers such as `duckduckgo`, `google`, `bing`, `searchapi`, `serper`, `searx`, `arxiv`, `exa` and more. To learn how to change the search provider, see the [retrievers documentation](https://docs.gptr.dev/docs/gpt-researcher/search-engines/retrievers) page.
+ ```bash
+ export OPENAI_API_KEY={Your OpenAI API Key here}
+ export TAVILY_API_KEY={Your Tavily API Key here}
+ ```
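+For the `.env` method, put the same variables in a file named `.env` in the `gpt-researcher` directory, without the `export` keyword (the values below are placeholders):
+
+```shell
+# .env — loaded automatically from the project directory
+OPENAI_API_KEY=your-openai-api-key
+TAVILY_API_KEY=your-tavily-api-key
+```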
-### Quickstart
+4. Install dependencies and start the server:
-> **Step 1** - Install dependencies
+ ```bash
+ pip install -r requirements.txt
+ python -m uvicorn main:app --reload
+ ```
-```bash
-pip install -r requirements.txt
-```
+Visit [http://localhost:8000](http://localhost:8000) in your browser to start researching.
-> **Step 2** - Run the agent with FastAPI
+For other setups (e.g., Poetry or virtual environments), check the [Getting Started page](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started).
-```bash
-python -m uvicorn main:app --reload
-```
-
-> **Step 3** - Go to http://localhost:8000 on any browser and enjoy researching!
-
-
-
-**To learn how to get started with [Poetry](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#poetry) or a [virtual environment](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started#virtual-environment) check out the [documentation](https://docs.gptr.dev/docs/gpt-researcher/getting-started/getting-started) page.**
-
-### Run as PIP package
+## Run as PIP package
```bash
pip install gpt-researcher
-```
+```
+### Example Usage
```python
...
from gpt_researcher import GPTResearcher