Control GenAI interactions with power, precision, and consistency using Conversation Modeling paradigms


Parlant Banner

Parlant: The Conversation Modeling Engine πŸ’¬ βœ…

Website • Introduction • Tutorial • About

PyPI - Version PyPI - Python Version Apache 2 License GitHub commit activity PyPI - Downloads Discord

IMPORTANT NOTE: We're looking for more contributors to help get user-facing agents under control! To be a part of this effort, join our Discord server and tell us about your relevant skills and how you wish to help.

Parlant Introduction

What is Conversation Modeling?

You've built an AI agent. That's great! However, when you actually test it, you see it's not handling many customer interactions properly, and your business experts are displeased with it. What do you do?

Enter Conversation Modeling (CM): a powerful and reliable new approach to controlling how your agents interact with your users.

A conversation model is a structured, domain-specific set of principles, actions, objectives, and terms that an agent applies to a given conversation.
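
To make this concrete, here is a rough, illustrative sketch of the kinds of entities a conversation model holds. The class and field names below are invented for explanation only; they are not Parlant's actual data model or API.

# Illustrative only: invented names, not Parlant's data model.
from dataclasses import dataclass, field

@dataclass
class Guideline:
    condition: str  # when this applies, e.g. "the customer wants to return an item"
    action: str     # what the agent should do when the condition holds

@dataclass
class ConversationModel:
    guidelines: list[Guideline] = field(default_factory=list)  # principles and actions
    glossary: dict[str, str] = field(default_factory=dict)     # domain terms and their strict meanings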

Why Conversation Modeling?

The problem of getting your AI agent to say what you want it to say is a hard one, experienced by virtually anyone building customer-facing agents. Here's how Conversation Modeling compares to other approaches to solving this problem.

  • Flow engines (such as Rasa, Botpress or LangFlow) force the user to interact according to predefined flows. In contrast, a CM engine dynamically adapts to a user's natural interaction patterns while conforming to your rules.

  • Free-form prompt engineering (such as with LangGraph or LlamaIndex) leads to inconsistency, frequently failing to uphold requirements. Conversely, a CM engine leverages structure to enforce conformance to a Conversation Model.

Who uses Parlant?

Parlant is used to deliver complex conversational agents that reliably follow your business protocols in use cases such as:

  • 🏦 Regulated financial services
  • πŸ₯ Healthcare communications
  • πŸ“œ Legal assistance
  • πŸ›‘οΈ Compliance-focused use cases
  • 🎯 Brand-sensitive customer service
  • 🀝 Personal advocacy and representation

How is Parlant used?

Developers and data scientists are using Parlant to (see the code sketch after the list):

  • πŸ€– Create custom-tailored conversational agents quickly and easily
  • πŸ‘£ Define behavioral guidelines for agents to follow (Parlant ensures they are followed reliably)
  • πŸ› οΈ Attach tools with specific guidance on how to properly use them in different contexts
  • πŸ“– Manage their agents’ glossary to ensure strict interpretation of terms in a conversational context
  • πŸ‘€ Add customer-specific information to deliver personalized interactions

How does Parlant work?

graph TD
    API(Parlant REST API) -->|React to Session Trigger| Engine[AI Response Engine]
    Engine -->|Load Domain Terminology| GlossaryStore
    Engine -->|Match Guidelines| GuidelineMatcher
    Engine -->|Infer & Call Tools| ToolCaller
    Engine -->|Tailor Guided Message| MessageComposer

When an agent needs to respond to a customer, Parlant's engine evaluates the situation, checks relevant guidelines, gathers necessary information through your tools, and continuously re-evaluates its approach based on your guidelines as new information emerges. When it's time to generate a message, Parlant implements self-critique mechanisms to ensure that the agent's responses precisely align with your intended behavior as given by the contextually-matched guidelines.
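
From the outside, this cycle starts with a session trigger, typically a new customer message posted through the REST API. The sketch below assumes the parlant-client Python package with a sessions API along the lines of the official client docs; method and parameter names may differ, so treat it as indicative rather than authoritative.

from parlant.client import ParlantClient

# Assumed client surface; consult the client docs for the exact names.
client = ParlantClient(base_url="http://localhost:8800")

# Open a session with an existing agent ("AGENT_ID" is a placeholder).
session = client.sessions.create(agent_id="AGENT_ID")

# Posting a customer message triggers the response engine: it matches guidelines,
# calls tools as needed, and composes the agent's guided reply.
client.sessions.create_event(
    session_id=session.id,
    kind="message",
    source="customer",
    message="Hi, I'd like to return an item",
)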

πŸ“š More technical docs on the architecture and API are available under docs/.

πŸ“¦ Quickstart

Parlant comes pre-built with responsive session (conversation) management, detection of incoherence and contradictions in guidelines, content filtering, jailbreak protection, an integrated sandbox UI for behavioral testing, native API clients in Python and TypeScript, and other goodies.

$ pip install parlant
$ parlant-server run
$ # Open the sandbox UI at http://localhost:8800 and play

πŸ™‹β€β™‚οΈπŸ™‹β€β™€οΈ Who Is Parlant For?

Parlant is the right tool for the job if you're building an LLM-based chat agent, and:

  1. 🎯 Your use case places high importance on behavioral precision and consistency, particularly in customer-facing scenarios
  2. πŸ”„ Your agent is expected to undergo continuous behavioral refinements and changes, and you need a way to implement those changes efficiently and confidently
  3. πŸ“ˆ You're expected to maintain a growing set of behavioral guidelines, and you need to maintain them coherently and with version-tracking
  4. πŸ’¬ Conversational UX and user-engagmeent is an important concern for your use case, and you want to easily control the flow and tone of conversations

⭐ Star Us: Your Support Goes a Long Way!

Star History Chart

πŸ€” What Makes Parlant Different?

In a word: Guidance. 🧭🚦🀝

Parlant's engine revolves around solving one key problem: How can we reliably guide customer-facing agents to behave in alignment with our needs and intentions?

Hence Parlant's fundamentally different approach to agent building, managed guidelines:

$ parlant guideline create \
    --agent-id MY_AGENT_ID \
    --condition "the customer wants to return an item" \
    --action "get the order number and item name and then help them return it"

By giving structure to behavioral guidelines and making each guideline a first-class entity in the engine, Parlant is able to offer a high degree of control, quality, and efficiency in building LLM-based agents:

  1. πŸ›‘οΈ Reliability: Running focused self-critique in real-time, per guideline, to ensure it is actually followed
  2. πŸ’‘ Explainability: Providing feedback around its interpretation of guidelines in each real-life context, which helps in troubleshooting and improvement
  3. πŸ”§ Maintainability: Helping you maintain a coherent set of guidelines by detecting and alerting you to possible contradictions (gross or subtle) in your instructions

πŸ€– Works with all major LLM providers

πŸ“š Learning Parlant

To start learning and building with Parlant, visit our documentation portal.

Need help? Ask us anything on Discord. We're happy to answer questions and help you get up and running!

πŸ’» Usage Example

Adding a guideline for an agent, for example, to ask a counter-question to get more info when a customer asks a question:

parlant guideline create \
    --condition "a free-tier customer is asking how to use our product" \
    --action "first seek to understand what they're trying to achieve"

πŸ‘‹ Contributing

We use the Linux-standard Developer Certificate of Origin (DCO.md): by contributing, you confirm that you have the right to submit your contribution under the Apache 2.0 license (i.e., that the code you're contributing is truly yours to share with the project).

Please consult CONTRIBUTING.md for more details.

Can't wait to get involved? Join us on Discord and let's discuss how you can help shape Parlant. We're excited to work with contributors directly while we set up our formal processes!

Otherwise, feel free to start a discussion or open an issue here on GitHub, freestyle 😎.