Index is a state-of-the-art open-source browser agent that autonomously executes complex tasks on the web.
- It is powered by Claude 3.7 Sonnet with extended thinking. More models will be supported in the future.
- Index is also available as a hosted API.
- You can also try out Index via the hosted UI or fully self-host the UI.
- Supports advanced browser agent observability powered by Laminar.
Demo (local_agent_spreadsheet_demo.mp4) — prompt: "Go to ycombinator.com. Summarize the first 3 companies in the W25 batch and make a new spreadsheet in Google Sheets."
The Index API is available as a hosted service on the Laminar platform. It manages remote browser sessions and agent infrastructure, and is the best way to run AI browser automation in production. To get started, sign up and create a project API key.
pip install lmnr
import asyncio
from lmnr import Laminar, AsyncLaminarClient

# You can also set the LMNR_PROJECT_API_KEY environment variable
# Initialize tracing
Laminar.initialize(project_api_key="your_api_key")

# Initialize the client
client = AsyncLaminarClient(api_key="your_api_key")

async def main():
    # Run a task
    response = await client.agent.run(
        prompt="Navigate to news.ycombinator.com, find a post about AI, and summarize it"
    )

    # Print the result
    print(response.result)

if __name__ == "__main__":
    asyncio.run(main())
When you call Index via the API, you automatically get full browser agent observability on the Laminar platform. Learn more about Index browser observability.
pip install lmnr-index
# Install the Playwright Chromium browser
playwright install chromium
import asyncio
from index import Agent, AnthropicProvider

async def main():
    # Initialize the LLM provider
    llm = AnthropicProvider(
        model="claude-3-7-sonnet-20250219",
        enable_thinking=True,
        thinking_token_budget=2048)

    # Create an agent with the LLM
    agent = Agent(llm=llm)

    # Run the agent with a task
    output = await agent.run(
        prompt="Navigate to news.ycombinator.com, find a post about AI, and summarize it"
    )

    # Print the result
    print(output.result)

if __name__ == "__main__":
    asyncio.run(main())
import asyncio
from index import Agent, AnthropicProvider

agent = Agent(llm=AnthropicProvider(model="claude-3-7-sonnet-20250219"))

async def main():
    # Stream the agent's output as it executes the task
    async for chunk in agent.run_stream(
        prompt="Navigate to news.ycombinator.com, find a post about AI, and summarize it"
    ):
        print(chunk)

asyncio.run(main())
To trace the Index agent's actions and record the browser session, simply initialize Laminar tracing before running the agent.
from lmnr import Laminar
Laminar.initialize(project_api_key="your_api_key")
You will then get full observability into the agent's actions, synced with the browser session, on the Laminar platform.
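For example, here is a minimal sketch that combines the snippets above — tracing is initialized first, then the agent runs as usual (the API key and model name are placeholders):

import asyncio
from lmnr import Laminar
from index import Agent, AnthropicProvider

# Initialize tracing before creating or running the agent
Laminar.initialize(project_api_key="your_api_key")

async def main():
    agent = Agent(llm=AnthropicProvider(model="claude-3-7-sonnet-20250219"))

    # This run is traced and the browser session is recorded on the Laminar platform
    output = await agent.run(
        prompt="Navigate to news.ycombinator.com, find a post about AI, and summarize it"
    )
    print(output.result)

if __name__ == "__main__":
    asyncio.run(main())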

import asyncio
from index import Agent, AnthropicProvider, BrowserConfig

async def main():
    # Configure the browser to connect to an existing Chrome DevTools Protocol endpoint
    browser_config = BrowserConfig(
        cdp_url="<cdp_url>"
    )

    # Initialize the LLM provider
    llm = AnthropicProvider(model="claude-3-7-sonnet-20250219", enable_thinking=True, thinking_token_budget=2048)

    # Create an agent with the LLM and browser
    agent = Agent(llm=llm, browser_config=browser_config)

    # Run the agent with a task
    output = await agent.run(
        prompt="Navigate to news.ycombinator.com and find the top story"
    )

    # Print the result
    print(output.result)

if __name__ == "__main__":
    asyncio.run(main())
import asyncio
from index import Agent, AnthropicProvider, BrowserConfig

async def main():
    # Configure the browser with a custom viewport size
    browser_config = BrowserConfig(
        viewport_size={"width": 1200, "height": 900}
    )

    # Initialize the LLM provider
    llm = AnthropicProvider(model="claude-3-7-sonnet-20250219")

    # Create an agent with the LLM and browser
    agent = Agent(llm=llm, browser_config=browser_config)

    # Run the agent with a task
    output = await agent.run(
        "Navigate to a responsive website and capture how it looks in full HD resolution"
    )

    # Print the result
    print(output.result)

if __name__ == "__main__":
    asyncio.run(main())
Made with ❤️ by the Laminar team