diff --git a/docs/docs/integrations/chat/cohere.ipynb b/docs/docs/integrations/chat/cohere.ipynb index 5787d4f37cea3..6abebf7735439 100644 --- a/docs/docs/integrations/chat/cohere.ipynb +++ b/docs/docs/integrations/chat/cohere.ipynb @@ -15,39 +15,107 @@ "id": "bf733a38-db84-4363-89e2-de6735c37230", "metadata": {}, "source": [ - "# ChatCohere\n", + "# Cohere\n", "\n", - "This notebook covers how to get started with Cohere chat models." + "This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n", + "\n", + "Head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods." + ] + }, + { + "cell_type": "markdown", + "id": "3607d67e-e56c-4102-bbba-df2edc0e109e", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "The integration lives in the `langchain-community` package. We also need to install the `cohere` package itself. 
We can install these with:\n", + "\n", + "```bash\n", + "pip install -U langchain-community cohere\n", + "```\n", + "\n", + "We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:" ] }, { "cell_type": "code", - "execution_count": 54, + "execution_count": 6, + "id": "2108b517-1e8d-473d-92fa-4f930e8072a7", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "········\n" + ] + } + ], + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"COHERE_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "cf690fbb", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "7f11de02", + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "4c26754b-b3c9-4d93-8f36-43049bd943bf", + "metadata": {}, + "source": [ + "## Usage\n", + "\n", + "ChatCohere supports all [ChatModel](/docs/modules/model_io/chat/) functionality:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, "id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9", "metadata": { "tags": [] }, "outputs": [], "source": [ - "from langchain.schema import HumanMessage\n", - "from langchain_community.chat_models import ChatCohere" + "from langchain_community.chat_models import ChatCohere\n", + "from langchain_core.messages import HumanMessage" ] }, { "cell_type": "code", - "execution_count": 55, + "execution_count": 2, "id": "70cf04e8-423a-4ff6-8b09-f11fb711c817", "metadata": { "tags": [] }, "outputs": [], "source": [ - "chat = ChatCohere()" + "chat = ChatCohere(model=\"command\", max_tokens=256, temperature=0.75)" ] }, { "cell_type": "code", - 
"execution_count": 56, + "execution_count": 3, "id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c", "metadata": { "tags": [] @@ -59,41 +127,43 @@ "AIMessage(content=\"Who's there?\")" ] }, - "execution_count": 56, + "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "messages = [HumanMessage(content=\"knock knock\")]\n", - "chat(messages)" - ] - }, - { - "cell_type": "markdown", - "id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c", - "metadata": {}, - "source": [ - "## `ChatCohere` also supports async and streaming functionality:" + "chat.invoke(messages)" ] }, { "cell_type": "code", - "execution_count": 57, - "id": "93a21c5c-6ef9-4688-be60-b2e1f94842fb", + "execution_count": 4, + "id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b", "metadata": { "tags": [] }, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content=\"Who's there?\")" + ] + }, + "execution_count": 4, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "from langchain.callbacks.manager import CallbackManager\n", - "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler" + "await chat.ainvoke(messages)" ] }, { "cell_type": "code", - "execution_count": 64, - "id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b", + "execution_count": 5, + "id": "025be980-e50d-4a68-93dc-c9c7b500ce34", "metadata": { "tags": [] }, @@ -104,55 +174,76 @@ "text": [ "Who's there?" 
] - }, + } + ], + "source": [ + "for chunk in chat.stream(messages):\n", + " print(chunk.content, end=\"\", flush=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "064288e4-f184-4496-9427-bcf148fa055e", + "metadata": {}, + "outputs": [ { "data": { "text/plain": [ - "LLMResult(generations=[[ChatGenerationChunk(text=\"Who's there?\", message=AIMessageChunk(content=\"Who's there?\"))]], llm_output={}, run=[RunInfo(run_id=UUID('1e9eaefc-9c99-4fa9-8297-ef9975d4751e'))])" + "[AIMessage(content=\"Who's there?\")]" ] }, - "execution_count": 64, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "await chat.agenerate([messages])" + "chat.batch([messages])" + ] + }, + { + "cell_type": "markdown", + "id": "f1c56460", + "metadata": {}, + "source": [ + "## Chaining\n", + "\n", + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" ] }, { "cell_type": "code", - "execution_count": 63, - "id": "025be980-e50d-4a68-93dc-c9c7b500ce34", - "metadata": { - "tags": [] - }, + "execution_count": 7, + "id": "0851b103", + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_core.prompts import ChatPromptTemplate\n", + "\n", + "prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n", + "chain = prompt | chat" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "ae950c0f-1691-47f1-b609-273033cae707", + "metadata": {}, "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Who's there?" - ] - }, { "data": { "text/plain": [ - "AIMessageChunk(content=\"Who's there?\")" + "AIMessage(content=\"Why did the bear go to the chiropractor?\\n\\nBecause she was feeling a bit grizzly!\\n\\nHope you found that joke about bears to be a little bit amusing! If you'd like to hear another one, just let me know. 
In the meantime, if you have any other questions or need assistance with a different topic, feel free to let me know. \\n\\nJust remember, even if you have a sore back like the bear, it's always best to consult a licensed professional for injuries or pain you may be experiencing. \\n\\nWould you like me to tell you another joke?\")" ] }, - "execution_count": 63, + "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "chat = ChatCohere(\n", - " streaming=True,\n", - " verbose=True,\n", - " callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n", - ")\n", - "chat(messages)" + "chain.invoke({\"topic\": \"bears\"})" ] } ], @@ -172,7 +263,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.5" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/integrations/llms/cohere.ipynb b/docs/docs/integrations/llms/cohere.ipynb index a4f93a1fd5945..f39717e0cd5a2 100644 --- a/docs/docs/integrations/llms/cohere.ipynb +++ b/docs/docs/integrations/llms/cohere.ipynb @@ -9,20 +9,25 @@ "\n", ">[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n", "\n", - "This example goes over how to use LangChain to interact with `Cohere` [models](https://docs.cohere.ai/docs/generation-card)." + "Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods." ] }, { - "cell_type": "code", - "execution_count": null, - "id": "91ea14ce-831d-409a-a88f-30353acdabd1", + "cell_type": "markdown", + "id": "873eb81e-6049-4a68-b219-baa421d7cba8", "metadata": { "tags": [] }, - "outputs": [], "source": [ - "# Install the package\n", - "!pip install cohere" + "## Setup\n", + "\n", + "The integration lives in the `langchain-community` package. 
We also need to install the `cohere` package itself. We can install these with:\n", + "\n", + "```bash\n", + "pip install -U langchain-community cohere\n", + "```\n", + "\n", + "We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:" ] }, { @@ -42,88 +47,187 @@ } ], "source": [ - "# get a new token: https://dashboard.cohere.ai/\n", + "import getpass\n", + "import os\n", "\n", - "from getpass import getpass\n", + "os.environ[\"COHERE_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "c07a576d-e39d-4ca2-8f16-41df284d136c", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5af022d3-d24a-49fa-b660-ec76f1bce9a9", + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "0b4e02bf-5beb-48af-a2a2-52cbcd8ebed6", + "metadata": {}, + "source": [ + "## Usage\n", "\n", - "COHERE_API_KEY = getpass()" + "Cohere supports all [LLM](/docs/modules/model_io/llms/) functionality:" ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 1, "id": "6fb585dd", "metadata": { "tags": [] }, "outputs": [], "source": [ - "from langchain.chains import LLMChain\n", - "from langchain.prompts import PromptTemplate\n", - "from langchain_community.llms import Cohere" + "from langchain_community.llms import Cohere\n", + "from langchain_core.messages import HumanMessage" ] }, { "cell_type": "code", - "execution_count": 4, - "id": "035dea0f", - "metadata": { - "tags": [] - }, + "execution_count": 2, + "id": "be042d9f-c625-4316-b5e5-272b5ce8904f", + "metadata": {}, "outputs": [], "source": [ - "template = \"\"\"Question: {question}\n", - "\n", - "Answer: Let's think step 
by step.\"\"\"\n", - "\n", - "prompt = PromptTemplate(template=template, input_variables=[\"question\"])" + "model = Cohere(model=\"command\", max_tokens=256, temperature=0.75)" ] }, { "cell_type": "code", "execution_count": 6, - "id": "3f3458d9", - "metadata": { - "tags": [] - }, - "outputs": [], + "id": "8cbfc906-4278-4bc9-8756-1681bb647752", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\" Who's there?\"" + ] + }, + "execution_count": 6, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "llm = Cohere(cohere_api_key=COHERE_API_KEY)" + "message = \"Knock knock\"\n", + "model.invoke(message)" ] }, { "cell_type": "code", - "execution_count": 7, - "id": "a641dbd9", - "metadata": { - "tags": [] - }, - "outputs": [], + "execution_count": 8, + "id": "a9a9ffcf-5a74-4875-ad3e-d66d3b871f66", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\" Who's there?\"" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "llm_chain = LLMChain(prompt=prompt, llm=llm)" + "await model.ainvoke(message)" ] }, { "cell_type": "code", - "execution_count": 5, - "id": "9f844993", + "execution_count": 9, + "id": "ab3550b5-4271-4333-a75c-e4bce58c0452", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + " Who's there?" + ] + } + ], + "source": [ + "for chunk in model.stream(message):\n", + " print(chunk, end=\"\", flush=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "587c850d-76bd-4f74-bcf7-50cdacec538e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "\" Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\\n\\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\\n\\nNow, let's do it backwards. 
According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\\n\\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\\n\\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\\n\\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer\"" + "[\" Who's there?\"]" ] }, - "execution_count": 5, + "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n", + "model.batch([message])" + ] + }, + { + "cell_type": "markdown", + "id": "39198f7d-6fc8-4662-954a-37ad38c4bec4", + "metadata": {}, + "source": [ + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "7cbe3136-eff2-4e6a-807c-81cbf2a488a6", + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_core.prompts import PromptTemplate\n", "\n", - "llm_chain.run(question)" + "prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n", + "chain = prompt | model" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "d08eb676-dc24-41ae-ba32-19a95e22d3bb", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "' Why did the teddy bear cross the road?\\nBecause he had bear crossings.\\n\\nWould you like to hear another joke? 
'" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "chain.invoke({\"topic\": \"bears\"})" ] }, { @@ -151,7 +255,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.6" + "version": "3.11.1" } }, "nbformat": 4, diff --git a/docs/docs/integrations/memory/sql_chat_message_history.ipynb b/docs/docs/integrations/memory/sql_chat_message_history.ipynb index de15a53f6d140..00965e7a16d4b 100644 --- a/docs/docs/integrations/memory/sql_chat_message_history.ipynb +++ b/docs/docs/integrations/memory/sql_chat_message_history.ipynb @@ -18,26 +18,48 @@ }, { "cell_type": "markdown", - "id": "f8f2830ee9ca1e01", + "id": "da400c79-a360-43e2-be60-401fd02b2819", "metadata": {}, "source": [ - "## Basic Usage\n", + "## Setup\n", "\n", - "To use the storage you need to provide only 2 things:\n", + "The integration lives in the `langchain-community` package, so we need to install that. We also need to install the `SQLAlchemy` package.\n", "\n", - "1. Session Id - a unique identifier of the session, like user name, email, chat id etc.\n", - "2. Connection string - a string that specifies the database connection. It will be passed to SQLAlchemy create_engine function.\n", - "3. Install `SQLAlchemy` python package." 
+ "```bash\n", + "pip install -U langchain-community SQLAlchemy\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "70ed8354-ce96-46b5-a580-4cb7aac5c3d0", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" ] }, { "cell_type": "code", "execution_count": null, - "id": "ab016290-3823-4e1b-9610-ae9a1b71cb07", + "id": "0976204d-c681-4288-bfe5-a550e0340f35", "metadata": {}, "outputs": [], "source": [ - "!pip install SQLAlchemy" + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "f8f2830ee9ca1e01", + "metadata": {}, + "source": [ + "## Usage\n", + "\n", + "To use the storage you need to provide only 2 things:\n", + "\n", + "1. Session Id - a unique identifier of the session, like user name, email, chat id etc.\n", + "2. Connection string - a string that specifies the database connection. It will be passed to SQLAlchemy create_engine function." ] }, { @@ -84,8 +106,7 @@ { "data": { "text/plain": [ - "[HumanMessage(content='Hello', additional_kwargs={}, example=False),\n", - " AIMessage(content='Hi', additional_kwargs={}, example=False)]" + "[HumanMessage(content='Hello'), AIMessage(content='Hi')]" ] }, "execution_count": 2, @@ -102,134 +123,113 @@ "id": "2e5337719d5614fd", "metadata": {}, "source": [ - "## Custom Storage Format\n", - "\n", - "By default, only the session id and message dictionary are stored in the table.\n", + "## Chaining\n", "\n", - "However, sometimes you might want to store some additional information, like message date, author, language etc.\n", + "We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n", "\n", - "To do that, you can create a custom message converter, by implementing **BaseMessageConverter** interface." 
+ "To do this we will want to use OpenAI, so we need to install that\n" ] }, { "cell_type": "code", "execution_count": 3, - "id": "fdfde84c07d071bb", - "metadata": { - "ExecuteTime": { - "end_time": "2023-08-28T10:04:41.510498Z", - "start_time": "2023-08-28T10:04:41.494912Z" - }, - "collapsed": false, - "jupyter": { - "outputs_hidden": false - } - }, + "id": "6558418b-0ece-4d01-9661-56d562d78f7a", + "metadata": {}, "outputs": [], "source": [ - "from datetime import datetime\n", - "from typing import Any\n", - "\n", - "from langchain.schema import AIMessage, BaseMessage, HumanMessage, SystemMessage\n", - "from langchain_community.chat_message_histories.sql import BaseMessageConverter\n", - "from sqlalchemy import Column, DateTime, Integer, Text\n", - "from sqlalchemy.orm import declarative_base\n", - "\n", - "Base = declarative_base()\n", - "\n", - "\n", - "class CustomMessage(Base):\n", - " __tablename__ = \"custom_message_store\"\n", + "from typing import Optional\n", "\n", - " id = Column(Integer, primary_key=True)\n", - " session_id = Column(Text)\n", - " type = Column(Text)\n", - " content = Column(Text)\n", - " created_at = Column(DateTime)\n", - " author_email = Column(Text)\n", - "\n", - "\n", - "class CustomMessageConverter(BaseMessageConverter):\n", - " def __init__(self, author_email: str):\n", - " self.author_email = author_email\n", - "\n", - " def from_sql_model(self, sql_message: Any) -> BaseMessage:\n", - " if sql_message.type == \"human\":\n", - " return HumanMessage(\n", - " content=sql_message.content,\n", - " )\n", - " elif sql_message.type == \"ai\":\n", - " return AIMessage(\n", - " content=sql_message.content,\n", - " )\n", - " elif sql_message.type == \"system\":\n", - " return SystemMessage(\n", - " content=sql_message.content,\n", - " )\n", - " else:\n", - " raise ValueError(f\"Unknown message type: {sql_message.type}\")\n", - "\n", - " def to_sql_model(self, message: BaseMessage, session_id: str) -> Any:\n", - " now = datetime.now()\n", - 
" return CustomMessage(\n", - " session_id=session_id,\n", - " type=message.type,\n", - " content=message.content,\n", - " created_at=now,\n", - " author_email=self.author_email,\n", - " )\n", - "\n", - " def get_sql_model_class(self) -> Any:\n", - " return CustomMessage\n", - "\n", - "\n", - "chat_message_history = SQLChatMessageHistory(\n", - " session_id=\"test_session\",\n", - " connection_string=\"sqlite:///sqlite.db\",\n", - " custom_message_converter=CustomMessageConverter(author_email=\"test@example.com\"),\n", + "from langchain_community.chat_models import ChatOpenAI\n", + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", + "from langchain_core.runnables.history import RunnableWithMessageHistory" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "82149122-61d3-490d-9bdb-bb98606e8ba1", + "metadata": {}, + "outputs": [], + "source": [ + "prompt = ChatPromptTemplate.from_messages(\n", + " [\n", + " (\"system\", \"You are a helpful assistant.\"),\n", + " MessagesPlaceholder(variable_name=\"history\"),\n", + " (\"human\", \"{question}\"),\n", + " ]\n", ")\n", "\n", - "chat_message_history.add_user_message(\"Hello\")\n", - "chat_message_history.add_ai_message(\"Hi\")" + "chain = prompt | ChatOpenAI()" ] }, { "cell_type": "code", - "execution_count": 4, - "id": "4a6a54d8a9e2856f", - "metadata": { - "ExecuteTime": { - "end_time": "2023-08-28T10:04:43.497990Z", - "start_time": "2023-08-28T10:04:43.492517Z" - }, - "collapsed": false, - "jupyter": { - "outputs_hidden": false - } - }, + "execution_count": 5, + "id": "2df90853-b67c-490f-b7f8-b69d69270b9c", + "metadata": {}, + "outputs": [], + "source": [ + "chain_with_history = RunnableWithMessageHistory(\n", + " chain,\n", + " lambda session_id: SQLChatMessageHistory(\n", + " session_id=session_id, connection_string=\"sqlite:///sqlite.db\"\n", + " ),\n", + " input_messages_key=\"question\",\n", + " history_messages_key=\"history\",\n", + ")" + ] + }, + { + "cell_type": 
"code", + "execution_count": 8, + "id": "0ce596b8-3b78-48fd-9f92-46dccbbfd58b", + "metadata": {}, + "outputs": [], + "source": [ + "# This is where we configure the session id\n", + "config = {\"configurable\": {\"session_id\": \"\"}}" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "38e1423b-ba86-4496-9151-25932fab1a8b", + "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "[HumanMessage(content='Hello', additional_kwargs={}, example=False),\n", - " AIMessage(content='Hi', additional_kwargs={}, example=False)]" + "AIMessage(content='Hello Bob! How can I assist you today?')" ] }, - "execution_count": 4, + "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "chat_message_history.messages" + "chain_with_history.invoke({\"question\": \"Hi! I'm bob\"}, config=config)" ] }, { - "cell_type": "markdown", - "id": "622aded629a1adeb", + "cell_type": "code", + "execution_count": 10, + "id": "2ee4ee62-a216-4fb1-bf33-57476a84cf16", "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage(content='Your name is Bob! Is there anything specific you would like assistance with, Bob?')" + ] + }, + "execution_count": 10, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "You also might want to change the name of session_id column. In this case you'll need to specify `session_id_field_name` parameter." 
+ "chain_with_history.invoke({\"question\": \"Whats my name\"}, config=config)" ] } ], @@ -249,7 +249,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.12" + "version": "3.9.1" } }, "nbformat": 4, diff --git a/docs/docs/integrations/retrievers/tavily.ipynb b/docs/docs/integrations/retrievers/tavily.ipynb index 34b1d1396387a..99d4699d285e0 100644 --- a/docs/docs/integrations/retrievers/tavily.ipynb +++ b/docs/docs/integrations/retrievers/tavily.ipynb @@ -9,9 +9,17 @@ "\n", "[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n", "\n", - "## Usage\n", + "We can use this as a [retriever](/docs/modules/data_connection/retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain.\n", "\n", - "For a full list of allowed arguments, see [the official documentation](https://app.tavily.com/documentation/python). You can also pass any param to the SDK via a `kwargs` dictionary." + "## Setup\n", + "\n", + "The integration lives in the `langchain-community` package. We also need to install the `tavily-python` package itself.\n", + "\n", + "```bash\n", + "pip install -U langchain-community tavily-python\n", + "```\n", + "\n", + "We also need to set our Tavily API key." 
] }, { @@ -20,44 +28,128 @@ "metadata": {}, "outputs": [], "source": [ - "# %pip install tavily-python" + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"TAVILY_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" ] }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Usage" + ] + }, + { + "cell_type": "code", + "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "[Document(page_content='Nintendo Designer (s) Hidemaro Fujibayashi (director) Eiji Aonuma (producer/group manager) Release date (s) United States of America: • March 3, 2017 Japan: • March 3, 2017 Australia / New Zealand: • March 2, 2017 Belgium: • March 3, 2017 Hong Kong: • Feburary 1, 2018 South Korea: • February 1, 2018 The UK / Ireland: • March 3, 2017 Content ratings', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Zelda Wiki', 'source': 'https://zelda.fandom.com/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.96994, 'images': None}),\n", - " Document(page_content='02/01/23 Nintendo Switch Online member exclusive: Save on two digital games Read more 09/13/22 Out of the Shadows … the Legend of Zelda: Tears of the Kingdom Launches for Nintendo Switch on May...', metadata={'title': 'The Legend of Zelda™: Breath of the Wild - Nintendo', 'source': 'https://www.nintendo.com/store/products/the-legend-of-zelda-breath-of-the-wild-switch/', 'score': 0.94346, 'images': None}),\n", - " Document(page_content='Now we finally have a concrete release date of May 12, 2023. 
The date was announced alongside this brief (and mysterious) new trailer that also confirmed its title: The Legend of Zelda: Tears...', metadata={'title': 'The Legend of Zelda: Tears of the Kingdom: Release Date, Gameplay ... - IGN', 'source': 'https://www.ign.com/articles/the-legend-of-zelda-breath-of-the-wild-2-release-date-gameplay-news-rumors', 'score': 0.94145, 'images': None}),\n", - " Document(page_content='It was eventually released on March 3, 2017, as a launch game for the Switch and the final Nintendo game for the Wii U. It received widespread acclaim and won numerous Game of the Year accolades. Critics praised its open-ended gameplay, open-world design, and attention to detail, though some criticized its technical performance.', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Wikipedia', 'source': 'https://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.92102, 'images': None})]" + "[Document(page_content='Trending topics\\nTrending topics\\nThe Legend of Zelda™: Breath of the Wild\\nSelect a product\\nThe Legend of Zelda™: Breath of the Wild\\nThe Legend of Zelda™: Breath of the Wild\\nThe Legend of Zelda™: Breath of the Wild and The Legend of Zelda™: Breath of the Wild Expansion Pass Bundle\\nThis item will be sent to your system automatically after purchase or Nintendo Switch Game Voucher redemption. The Legend of Zelda: Breath of the Wild Expansion Pass\\nMore like this\\nSuper Mario Odyssey™\\nThe Legend of Zelda™: Tears of the Kingdom\\nMario + Rabbids® Kingdom Battle\\nThe Legend of Zelda™: Link’s Awakening\\nHollow Knight\\nThe Legend of Zelda™: Skyward Sword HD\\nStarlink: Battle for Atlas™ Digital Edition\\nDRAGON QUEST BUILDERS™ 2\\nDragon Quest Builders™\\nWARNING: If you have epilepsy or have had seizures or other unusual reactions to flashing lights or patterns, consult a doctor before playing video games. 
Saddle up with a herd of horse-filled games!\\nESRB rating\\nSupported play modes\\nTV\\nTabletop\\nHandheld\\nProduct information\\nRelease date\\nNo. of players\\nGenre\\nPublisher\\nESRB rating\\nSupported play modes\\nGame file size\\nSupported languages\\nPlay online, access classic NES™ and Super NES™ games, and more with a Nintendo Switch Online membership.\\n Two Game Boy games are now available for Nintendo Switch Online members\\n02/01/23\\nNintendo Switch Online member exclusive: Save on two digital games\\n09/13/22\\nOut of the Shadows … the Legend of Zelda: About Nintendo\\nShop\\nMy Nintendo Store orders\\nSupport\\nParents\\nCommunity\\nPrivacy\\n© Nintendo.', metadata={'title': 'The Legend of Zelda™: Breath of the Wild - Nintendo', 'source': 'https://www.nintendo.com/us/store/products/the-legend-of-zelda-breath-of-the-wild-switch/', 'score': 0.97451, 'images': None}),\n", + " Document(page_content='The Legend of Zelda: Breath of the Wild is a masterpiece of open-world design and exploration, released on March 3, 2017 for Nintendo Switch. 
Find out the latest news, reviews, guides, videos, and more for this award-winning game on IGN.', metadata={'title': 'The Legend of Zelda: Breath of the Wild - IGN', 'source': 'https://www.ign.com/games/the-legend-of-zelda-breath-of-the-wild', 'score': 0.94496, 'images': None}),\n", + " Document(page_content='Reviewers also commented on the unexpected permutations of interactions between Link, villagers, pets, and enemies,[129][130][131] many of which were shared widely on social media.[132] A tribute to former Nintendo president Satoru Iwata, who died during development, also attracted praise.[129][134]\\nJim Sterling was more critical than most, giving Breath of the Wild a 7/10 score, criticizing the difficulty, weapon durability, and level design, but praising the open world and variety of content.[135] Other criticism focused on the unstable frame rate and the low resolution of 900p;[136] updates addressed some of these problems.[137][138]\\nSales\\nBreath of the Wild broke sales records for a Nintendo launch game in multiple regions.[139][140] In Japan, the Switch and Wii U versions sold a combined 230,000 copies in the first week of release, with the Switch version becoming the top-selling game released that week.[141] Nintendo reported that Breath of the Wild sold more than one million copies in the US that month—925,000 of which were for Switch, outselling the Switch itself.[145][146][147][148] Nintendo president Tatsumi Kimishima said that the attach rate on the Switch was \"unprecedented\".[149] Breath of the Wild had sold 31.15 million copies on the Switch by September 2023 and 1.70 million copies on the Wii U by December 2020.[150][151]\\nAwards\\nFollowing its demonstration at E3 2016, Breath of the Wild received several accolades from the Game Critics Awards[152] and from publications such as IGN and Destructoid.[153][154] It was listed among the best games at E3 by Eurogamer,[81] The game, he continued, would challenge the series\\' conventions, such 
as the requirement that players complete dungeons in a set order.[2][73] The next year, Nintendo introduced the game\\'s high-definition, cel-shaded visual style with in-game footage at its E3 press event.[74][75] Once planned for release in 2015, the game was delayed early in the year and did not show at that year\\'s E3.[76][77] Zelda series creator Shigeru Miyamoto reaffirmed that the game would still release for the Wii U despite the development of Nintendo\\'s next console, the Nintendo Switch.[78] The Switch version also has higher-quality environmental sounds.[53][54] Certain ideas that were planned for the game, like flying and underground dungeons were not implemented due to the Wii U’s limitations; they would eventually resurface in the game\\'s sequel.[55] Aonuma stated that the art design was inspired by gouache and en plein air art to help identify the vast world.[56] Takizawa has also cited the Jōmon period as an inspiration for the ancient Sheikah technology and architecture that is found in the game, due to the mystery surrounding the period.[57] Journalists commented on unexpected interactions between game elements,[129][130][131] with serendipitous moments proving popular on social media.[132] Chris Plante of The Verge predicted that whereas prior open-world games tended to feature prescribed challenges, Zelda would influence a new generation of games with open-ended problem-solving.[132] Digital Trends wrote that the game\\'s level of experimentation allowed players to interact with and exploit the environment in creative ways, resulting in various \"tricks\" still discovered years after release.[127]\\nReviewers lauded the sense of detail and immersion.[133][129] Kotaku recommended turning off UI elements in praise of the indirect cues that contextually indicate the same information, such as Link shivering in the cold or waypoints appearing when using the scope.[133]', metadata={'title': 'The Legend of Zelda: Breath of the Wild - Wikipedia', 
'source': 'https://en.wikipedia.org/wiki/The_Legend_of_Zelda:_Breath_of_the_Wild', 'score': 0.93348, 'images': None})]"
      ]
     },
-     "execution_count": 1,
+     "execution_count": 8,
      "metadata": {},
      "output_type": "execute_result"
     }
    ],
    "source": [
-    "import os\n",
-    "\n",
     "from langchain.retrievers.tavily_search_api import TavilySearchAPIRetriever\n",
     "\n",
-    "os.environ[\"TAVILY_API_KEY\"] = \"YOUR_API_KEY\"\n",
-    "\n",
-    "retriever = TavilySearchAPIRetriever(k=4)\n",
+    "retriever = TavilySearchAPIRetriever(k=3)\n",
     "\n",
     "retriever.invoke(\"what year was breath of the wild released?\")"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Chaining\n",
+    "\n",
+    "We can easily combine this retriever into a chain."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from langchain_community.chat_models import ChatOpenAI\n",
+    "from langchain_core.output_parsers import StrOutputParser\n",
+    "from langchain_core.prompts import ChatPromptTemplate\n",
+    "from langchain_core.runnables import RunnablePassthrough\n",
+    "\n",
+    "prompt = ChatPromptTemplate.from_template(\n",
+    "    \"\"\"Answer the question based only on the context provided.\n",
+    "\n",
+    "Context: {context}\n",
+    "\n",
+    "Question: {question}\"\"\"\n",
+    ")\n",
+    "chain = (\n",
+    "    RunnablePassthrough.assign(context=(lambda x: x[\"question\"]) | retriever)\n",
+    "    | prompt\n",
+    "    | ChatOpenAI(model=\"gpt-4-1106-preview\")\n",
+    "    | StrOutputParser()\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'As of the end of 2020, \"The Legend of Zelda: Breath of the Wild\" sold over 21.45 million copies worldwide.'"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "chain.invoke({\"question\": \"how many units did breath of the wild sell in 2020\"})"
+   ]
+  },
+  {
+   "cell_type": "code",
+   
"execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { "kernelspec": { - "display_name": ".venv", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -71,10 +163,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.5" - }, - "orig_nbformat": 4 + "version": "3.11.1" + } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/docs/integrations/toolkits/gmail.ipynb b/docs/docs/integrations/toolkits/gmail.ipynb index c161122e2ecb6..3880711031c91 100644 --- a/docs/docs/integrations/toolkits/gmail.ipynb +++ b/docs/docs/integrations/toolkits/gmail.ipynb @@ -23,6 +23,34 @@ "!pip install beautifulsoup4 > /dev/null # This is optional but is useful for parsing HTML messages" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You also need to install the `langchain-community` package where the integration lives:\n", + "\n", + "```bash\n", + "pip install -U langchain-community\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -110,12 +138,30 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Use within an Agent" + "## Usage\n", + "\n", + "We show here how to use it as part of an [agent](/docs/modules/agents). We use the OpenAI Functions Agent, so we will need to setup and install the required dependencies for that. 
We will also use [LangSmith Hub](https://smith.langchain.com/hub) to pull the prompt from, so we will need to install that.\n", + "\n", + "```bash\n", + "pip install -U openai langchainhub\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 1, "metadata": { "tags": [] }, @@ -128,11 +174,11 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 2, "metadata": {}, "outputs": [], "source": [ - "instructions = \"\"\"You my assistant.\"\"\"\n", + "instructions = \"\"\"You are an assistant.\"\"\"\n", "base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n", "prompt = base_prompt.partial(instructions=instructions)" ] @@ -248,7 +294,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.1" + "version": "3.11.1" } }, "nbformat": 4, diff --git a/docs/docs/integrations/tools/tavily_search.ipynb b/docs/docs/integrations/tools/tavily_search.ipynb index 978b91c837817..79b611a62e8d0 100644 --- a/docs/docs/integrations/tools/tavily_search.ipynb +++ b/docs/docs/integrations/tools/tavily_search.ipynb @@ -13,29 +13,168 @@ "id": "5e24a889", "metadata": {}, "source": [ - "Tavily Search is a robust search API tailored specifically for LLM Agents. It seamlessly integrates with diverse data sources to ensure a superior, relevant search experience.\n", + "[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n", "\n", - "Set up API key [here](https://app.tavily.com/)." + "## Setup\n", + "\n", + "The integration lives in the `langchain-community` package. 
We also need to install the `tavily-python` package itself.\n", + "\n", + "```bash\n", + "pip install -U langchain-community tavily-python\n", + "```\n", + "\n", + "We also need to set our Tavily API key." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9", + "metadata": {}, + "outputs": [], + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"TAVILY_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "bc5ab717-fd27-4c59-b912-bdd099541478", + "metadata": {}, + "source": [ + "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a6c2f136-6367-4f1f-825d-ae741e1bf281", + "metadata": {}, + "outputs": [], + "source": [ + "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", + "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "markdown", + "id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f", + "metadata": {}, + "source": [ + "## Usage\n", + "\n", + "Here we show how to use the tool individually." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "8b3ddfe9-ca79-494c-a7ab-1f56d9407a64", + "metadata": {}, + "outputs": [], + "source": [ + "from langchain_community.tools.tavily_search import TavilySearchResults\n", + "\n", + "tool = TavilySearchResults()" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "[{'url': 'https://apnews.com/article/burning-man-flooding-nevada-stranded-3971a523f4b993f8f35e158fd1a17a1e',\n", + " 'content': 'festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 4, 2023. festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 
4, 2023. at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms swept at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms sweptRENO, Nev. (AP) — The traffic jam leaving the Burning Man festival eased up considerably Tuesday as the exodus from the mud-caked Nevada desert entered another day following massive rain that left tens of thousands of partygoers stranded for days.'},\n", + " {'url': 'https://www.theguardian.com/culture/2023/sep/03/burning-man-nevada-festival-floods',\n", + " 'content': 'Officials investigate death at Burning Man as thousands stranded by floods Burning Man festival-goers trapped in desert as rain turns site to mud the burning of a giant sculpture to cap off the event, if weather permits. The festival said the roads remain too wet Burning Man festivalgoers surrounded by mud in Nevada desert – videoMichael Sainato @msainat1 Sun 3 Sep 2023 14.31 EDT Over 70,000 attendees of the annual Burning Man festival in the Black Rock desert of Nevada are stranded as the festival comes to a close on...'},\n", + " {'url': 'https://abcnews.go.com/US/burning-man-flooding-happened-stranded-festivalgoers/story?id=102908331',\n", + " 'content': 'ABC News Video Live Shows Election 2024 538 Stream on Burning Man flooding: What happened to stranded festivalgoers? Tens of thousands of Burning Man attendees are now able to leave the festival after a downpour and massive flooding Burning Man has been hosted for over 30 years, according to a statement from the organizers. people last year, and just as many were expected this year. Burning Man began on Aug. 28 and was scheduled to runJulie Jammot/AFP via Getty Images Tens of thousands of Burning Man attendees are now able to leave the festival after a downpour and massive flooding left them stranded over the weekend. 
The festival, held in the Black Rock Desert in northwestern Nevada, was attended by more than 70,000 people last year, and just as many were expected this year.'},\n", + " {'url': 'https://www.vox.com/culture/2023/9/6/23861675/burning-man-2023-mud-stranded-climate-change-playa-foot',\n", + " 'content': 'This year’s rains opened the floodgates for Burning Man criticism Pray for him people #burningman #burningman2023 #titanicsound #mud #festival who went to Burning Man that large wooden Man won’t be the only one burning.Celebrity Culture The Burning Man flameout, explained Climate change — and schadenfreude — finally caught up to the survivalist cosplayers. By Aja Romano @ajaromano Sep 6, 2023, 3:00pm EDT Share'},\n", + " {'url': 'https://www.cnn.com/2023/09/03/us/burning-man-storms-shelter-sunday/index.html',\n", + " 'content': 'Editor’s Note: Find the latest Burning Man festival coverage here. CNN values your feedback More than 70,000 Burning Man festival attendees remain stuck in Nevada desert after rain Burning Man organizers said Sunday night. Thousands of people remain trapped at the Burning Man festival in the Nevada desert after heavy rains inundated the\"A mucky, muddy, environment.\" This is what Burning Man looks like See More Videos Editor\\'s Note: Find the latest Burning Man festival coverage here. CNN —'}]" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "tool.invoke({\"query\": \"What happened in the latest burning man floods\"})" ] }, { "cell_type": "markdown", - "id": "b50d6c92", + "id": "21c5b56f-0da0-485f-b6f5-38950bae4fd0", "metadata": {}, "source": [ - "## Try it out!" + "## Chaining\n", + "We show here how to use it as part of an [agent](/docs/modules/agents). We use the OpenAI Functions Agent, so we will need to setup and install the required dependencies for that. 
We will also use [LangSmith Hub](https://smith.langchain.com/hub) to pull the prompt from, so we will need to install that.\n", + "\n", + "```bash\n", + "pip install -U openai langchainhub\n", + "```" ] }, { "cell_type": "code", - "execution_count": 2, - "id": "8cc8ded6", + "execution_count": null, + "id": "a1c8ea19-7100-407d-8e8c-f037f9317255", + "metadata": {}, + "outputs": [], + "source": [ + "import getpass\n", + "import os\n", + "\n", + "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "520767b8-9e61-4485-840a-d16f1da5eb3a", "metadata": { "ExecuteTime": { "end_time": "2023-10-21T13:15:37.974229Z", "start_time": "2023-10-21T13:15:10.007898Z" - }, - "scrolled": false + } + }, + "outputs": [], + "source": [ + "from langchain import hub\n", + "from langchain.agents import AgentExecutor, create_openai_functions_agent\n", + "from langchain_community.chat_models import ChatOpenAI\n", + "\n", + "instructions = \"\"\"You are an assistant.\"\"\"\n", + "base_prompt = hub.pull(\"langchain-ai/openai-functions-template\")\n", + "prompt = base_prompt.partial(instructions=instructions)\n", + "llm = ChatOpenAI(temperature=0)\n", + "tavily_tool = TavilySearchResults()\n", + "tools = [tavily_tool]\n", + "agent = create_openai_functions_agent(llm, tools, prompt)\n", + "agent_executor = AgentExecutor(\n", + " agent=agent,\n", + " tools=tools,\n", + " verbose=True,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "e9303451-3853-47ce-93c9-1898436a6472", + "metadata": { + "ExecuteTime": { + "end_time": "2023-10-21T13:15:37.974229Z", + "start_time": "2023-10-21T13:15:10.007898Z" + } }, "outputs": [ { @@ -45,23 +184,11 @@ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", - "\u001b[32;1m\u001b[1;3mThought: I'm not aware of the current situation regarding the Burning Man event. 
I'll need to search for recent news about any flooding that might have affected it.\n", - "Action:\n", - "```\n", - "{\n", - " \"action\": \"tavily_search_results_json\",\n", - " \"action_input\": {\"query\": \"Burning Man floods latest news\"}\n", - "}\n", - "```\u001b[0m\n", - "Observation: \u001b[36;1m\u001b[1;3m[{'url': 'https://www.theguardian.com/culture/2023/sep/03/burning-man-nevada-festival-floods', 'content': 'More on this story\\nMore on this story\\nBurning Man revelers begin exodus from festival after road reopens\\nBurning Man festival-goers trapped in desert as rain turns site to mud\\n\\nOfficials investigate death at Burning Man as thousands stranded by floods\\n\\nBurning Man festivalgoers surrounded by mud in Nevada desert – video\\nBurning Man attendees roadblocked by climate activists: ‘They have a privileged mindset’\\n\\nin our favor. We will let you know. It could be sooner, and it could be later,” said an update on the Burning Man website on Saturday evening.'}, {'url': 'https://www.npr.org/2023/09/03/1197497458/the-latest-on-the-burning-man-flooding', 'content': \"National\\nThe latest on the Burning Man flooding\\nClaudia Peschiutta\\n\\nClaudia Peschiutta\\nAuthorities are investigating a death at the Burning Man festival in the Nevada desert after tens of thousands of people are stuck in camps because of rain.\\nSCOTT DETROW, HOST:\\n\\nDETROW: Well, that's NPR's Claudia Peschiutta covered and caked in a lot of mud at Burning Man. 
Thanks for talking to us.\\nPESCHIUTTA: Confirmed.\\nDETROW: Stay dry as much as you can.\\n\\nwith NPR's Claudia Peschiutta, who's at her first burn, and she told me it's muddy where she is, but that she and her camp family have been making the best of things.\"}, {'url': 'https://www.npr.org/2023/09/03/1197497458/the-latest-on-the-burning-man-flooding', 'content': \"National\\nThe latest on the Burning Man flooding\\nClaudia Peschiutta\\n\\nClaudia Peschiutta\\nAuthorities are investigating a death at the Burning Man festival in the Nevada desert after tens of thousands of people are stuck in camps because of rain.\\nSCOTT DETROW, HOST:\\n\\nDETROW: Well, that's NPR's Claudia Peschiutta covered and caked in a lot of mud at Burning Man. Thanks for talking to us.\\nPESCHIUTTA: Confirmed.\\nDETROW: Stay dry as much as you can.\\n\\nwith NPR's Claudia Peschiutta, who's at her first burn, and she told me it's muddy where she is, but that she and her camp family have been making the best of things.\"}, {'url': 'https://abcnews.go.com/US/burning-man-flooding-happened-stranded-festivalgoers/story?id=102908331', 'content': 'Tens of thousands of Burning Man attendees are now able to leave the festival after a downpour and massive flooding left them stranded over the weekend.\\n\\nIn 2013, according to a blog post in the \"Burning Man Journal,\" a rainstorm similarly rolled in, unexpectedly \"trapping 160 people on the playa overnight.\"\\n\\nABC News\\nVideo\\nLive\\nShows\\nElection 2024\\n538\\nStream on\\nBurning Man flooding: What happened to stranded festivalgoers?\\nSome 64,000 people were still on site Monday as the exodus began.\\n\\nBurning Man has been hosted for over 30 years, according to a statement from the organizers.'}, {'url': 'https://www.today.com/news/what-is-burning-man-flood-death-rcna103231', 'content': 'Tens of thousands of Burning Man festivalgoers are slowly making their way home from the Nevada desert after muddy conditions from heavy rains made 
it nearly impossible to leave over the weekend.\\n\\naccording to burningman.org.\\n\\nPresident Biden was notified of the situation and, according to a spokesperson, administration officials monitored and received updates on the latest details.\\nWhy are people stranded at Burning Man?\\n\\n\"Thank goodness this community knows how to take care of each other,\" the Instagram page for Burning Man Information Radio wrote on a post predicting more rain.'}]\u001b[0m\n", - "Thought:\u001b[32;1m\u001b[1;3mThe latest Burning Man event was severely affected by heavy rainfall that led to flooding. This resulted in tens of thousands of festival attendees getting stuck in their camps due to the muddy conditions. As a result, the exodus from the festival was delayed. An unfortunate incident also occurred, with a death being investigated at the festival. The situation was severe enough that President Biden was informed about it and administration officials were monitoring it. However, it seems that the festival goers were able to handle the situation well, as the Burning Man community is known for looking out for each other. This is not the first time a rainstorm has disrupted the Burning Man event; a similar incident occurred in 2013 where a sudden storm trapped people overnight. \n", - "Action:\n", - "```\n", - "{\n", - " \"action\": \"Final Answer\",\n", - " \"action_input\": \"The latest Burning Man event was severely affected by heavy rainfall that led to flooding. This resulted in tens of thousands of festival attendees getting stuck in their camps due to the muddy conditions, delaying their exit from the festival. An unfortunate incident also occurred, with a death being investigated at the festival. The situation was severe enough that President Biden was informed about it and administration officials were monitoring it. However, the festival goers were able to handle the situation well, as the Burning Man community is known for looking out for each other. 
This is not the first time a rainstorm has disrupted the Burning Man event; a similar incident occurred in 2013 when a sudden storm trapped people overnight.\"\n", - "}\n", - "```\u001b[0m\n", + "\u001b[32;1m\u001b[1;3m\n", + "Invoking: `tavily_search_results_json` with `{'query': 'latest burning man floods'}`\n", + "\n", + "\n", + "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.politifact.com/factchecks/2023/sep/06/instagram-posts/there-were-floods-there-was-mud-but-burning-man-sp/', 'content': 'The Associated Press, Burning Man flooding triggers false claims of Ebola outbreak, ‘national emergency,’ Sept. 5, 2023 BBC, Thousands queue for hours to leave Burning Man festival, Sept. 5, 2023 Newsweek, Is FEMA at Burning Man? Virus Outbreak Conspiracy Theory Spreads Online, Sept. 4, 2023 CBS News, Burning Man \"exodus operations\" begin as driving ban is lifted, organizers say, Sept. 4, 2023(AP) By Madison Czopek September 6, 2023 There were floods, there was mud, but Burning Man sparked no emergency declaration If Your Time is short Heavy rain began falling Sept. 1 in Nevada\\'s...'}, {'url': 'https://www.nbcnews.com/news/us-news/live-blog/live-updates-burning-man-flooding-keeps-thousands-stranded-nevada-site-rcna103193', 'content': 'As heavy rain turns Burning Man 2023 into a muddy mess, a deluge of unsympathetic jokes has swamped the internet Burning Man flooding keeps thousands stranded at Nevada site as authorities investigate 1 death Burning Man revelers unfazed by deluge and deep mud Reuters The flash flood watch is in effect until tomorrow morning. 
Burning Man is ‘absolutely soaked,’ festivalgoer saysThousands of Burning Man attendees partied hard on Sunday despite downpours that turned the Nevada desert where the annual arts and music festival takes place into a sea of sticky mud and led...'}, {'url': 'https://apnews.com/article/burning-man-flooding-nevada-stranded-3971a523f4b993f8f35e158fd1a17a1e', 'content': 'festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 4, 2023. festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 4, 2023. at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms swept at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms sweptRENO, Nev. (AP) — The traffic jam leaving the Burning Man festival eased up considerably Tuesday as the exodus from the mud-caked Nevada desert entered another day following massive rain that left tens of thousands of partygoers stranded for days.'}, {'url': 'https://apnews.com/article/burning-man-flooding-nevada-stranded-0726190c9f8378935e2a3cce7f154785', 'content': 'festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 4, 2023. festival goers are helped off a truck from the Burning Man festival site in Black Rock, Nev., on Monday, Sept. 4, 2023. at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms swept Wait times to exit Burning Man drop after flooding left tens of thousands stranded in Nevada desertFILE - In this satellite photo provided by Maxar Technologies, an overview of Burning Man festival in Black Rock, Nev on Monday, Aug. 28, 2023. 
Authorities in Nevada were investigating a death at the site of the Burning Man festival where thousands of attendees remained stranded as flooding from storms swept through the Nevada desert.'}, {'url': 'https://www.theguardian.com/culture/2023/sep/03/burning-man-nevada-festival-floods', 'content': 'Officials investigate death at Burning Man as thousands stranded by floods Burning Man festival-goers trapped in desert as rain turns site to mud the burning of a giant sculpture to cap off the event, if weather permits. The festival said the roads remain too wet Burning Man festivalgoers surrounded by mud in Nevada desert – videoMichael Sainato @msainat1 Sun 3 Sep 2023 14.31 EDT Over 70,000 attendees of the annual Burning Man festival in the Black Rock desert of Nevada are stranded as the festival comes to a close on...'}]\u001b[0m\u001b[32;1m\u001b[1;3mThe latest Burning Man festival experienced heavy rain, resulting in floods and muddy conditions. Thousands of attendees were stranded at the festival site in Nevada. There were false claims of an Ebola outbreak and a national emergency, but no emergency declaration was made. One death was reported at the festival, which is currently under investigation. Despite the challenging conditions, many festivalgoers remained unfazed and continued to enjoy the event. The exodus from the festival site began as the mud-caked desert started to dry up. Authorities issued a flash flood watch, and investigations are ongoing regarding the death at the festival.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] @@ -69,43 +196,17 @@ { "data": { "text/plain": [ - "'The latest Burning Man event was severely affected by heavy rainfall that led to flooding. This resulted in tens of thousands of festival attendees getting stuck in their camps due to the muddy conditions, delaying their exit from the festival. An unfortunate incident also occurred, with a death being investigated at the festival. 
The situation was severe enough that President Biden was informed about it and administration officials were monitoring it. However, the festival goers were able to handle the situation well, as the Burning Man community is known for looking out for each other. This is not the first time a rainstorm has disrupted the Burning Man event; a similar incident occurred in 2013 when a sudden storm trapped people overnight.'" + "{'input': 'What happened in the latest burning man floods?',\n", + " 'output': 'The latest Burning Man festival experienced heavy rain, resulting in floods and muddy conditions. Thousands of attendees were stranded at the festival site in Nevada. There were false claims of an Ebola outbreak and a national emergency, but no emergency declaration was made. One death was reported at the festival, which is currently under investigation. Despite the challenging conditions, many festivalgoers remained unfazed and continued to enjoy the event. The exodus from the festival site began as the mud-caked desert started to dry up. 
Authorities issued a flash flood watch, and investigations are ongoing regarding the death at the festival.'}" ] }, - "execution_count": 2, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "# libraries\n", - "import os\n", - "\n", - "from langchain.agents import AgentType, initialize_agent\n", - "from langchain_community.chat_models import ChatOpenAI\n", - "from langchain_community.tools.tavily_search import TavilySearchResults\n", - "from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper\n", - "\n", - "# set up API key\n", - "os.environ[\"TAVILY_API_KEY\"] = \"...\"\n", - "\n", - "# set up the agent\n", - "llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0.7)\n", - "search = TavilySearchAPIWrapper()\n", - "tavily_tool = TavilySearchResults(api_wrapper=search)\n", - "\n", - "# initialize the agent\n", - "agent_chain = initialize_agent(\n", - " [tavily_tool],\n", - " llm,\n", - " agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,\n", - " verbose=True,\n", - ")\n", - "\n", - "# run the agent\n", - "agent_chain.run(\n", - " \"What happened in the latest burning man floods?\",\n", - ")" + "agent_executor.invoke({\"input\": \"What happened in the latest burning man floods?\"})" ] }, { @@ -133,7 +234,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.6" + "version": "3.11.1" } }, "nbformat": 4, diff --git a/docs/docs/integrations/vectorstores/faiss.ipynb b/docs/docs/integrations/vectorstores/faiss.ipynb index c1d8b2e2f43b5..e277be120f520 100644 --- a/docs/docs/integrations/vectorstores/faiss.ipynb +++ b/docs/docs/integrations/vectorstores/faiss.ipynb @@ -11,45 +11,80 @@ "\n", "[Faiss documentation](https://faiss.ai/).\n", "\n", - "This notebook shows how to use functionality related to the `FAISS` vector database." + "This notebook shows how to use functionality related to the `FAISS` vector database. 
It will show functionality specific to this integration. After going through this notebook, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain."
   ]
  },
  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "497fcd89-e832-46a7-a74a-c71199666206",
+   "cell_type": "markdown",
+   "id": "601ac1d5-48a2-4e41-bf51-f1d5fdd5639d",
    "metadata": {
     "tags": []
    },
+   "source": [
+    "## Setup\n",
+    "\n",
+    "The integration lives in the `langchain-community` package. We also need to install the `faiss` package itself. We will also be using OpenAI for embeddings, so we need to install those requirements. We can install these with:\n",
+    "\n",
+    "```bash\n",
+    "pip install -U langchain-community faiss-cpu openai tiktoken\n",
+    "```\n",
+    "\n",
+    "Note that you can also install `faiss-gpu` if you want to use the GPU-enabled version.\n",
+    "\n",
+    "Since we are using OpenAI, you will need an OpenAI API Key."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "23984e60-c29a-461a-be2b-219108ac37ee",
+   "metadata": {},
    "outputs": [],
    "source": [
-    "!pip install faiss-gpu # For CUDA 7.5+ Supported GPU's.\n",
-    "# OR\n",
-    "!pip install faiss-cpu # For CPU Installation"
+    "import getpass\n",
+    "import os\n",
+    "\n",
+    "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()"
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "38237514-b3fa-44a4-9cff-30cd6bf50073",
+   "id": "408be78f-7b0e-44d4-8d48-56a6cb9b3fb9",
    "metadata": {},
    "source": [
-    "We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. 
"
+    "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "id": "951c82cb-40bf-46ac-9f3f-d2fca7d204b8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+    "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "78dde98a-584f-4f2a-98d5-e776fd9558fa",
+   "metadata": {},
+   "source": [
+    "## Ingestion\n",
+    "\n",
+    "Here, we ingest documents into the vectorstore"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
    "id": "dc37144c-208d-4ab3-9f3a-0407a69fe052",
    "metadata": {
     "tags": []
    },
    "outputs": [],
    "source": [
-    "import getpass\n",
-    "import os\n",
-    "\n",
-    "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
-    "\n",
     "# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization\n",
     "# os.environ['FAISS_NO_AVX2'] = '1'\n",
     "\n",
@@ -58,32 +93,40 @@
     "from langchain_community.embeddings.openai import OpenAIEmbeddings\n",
     "from langchain_community.vectorstores import FAISS\n",
     "\n",
-    "loader = TextLoader(\"../../../extras/modules/state_of_the_union.txt\")\n",
+    "loader = TextLoader(\"../../modules/state_of_the_union.txt\")\n",
     "documents = loader.load()\n",
     "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
     "docs = text_splitter.split_documents(documents)\n",
+    "embeddings = OpenAIEmbeddings()\n",
+    "db = FAISS.from_documents(docs, embeddings)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ecdd7a65-f310-4b36-bc1e-2a39dfd58d5f",
+   "metadata": {},
+   "source": [
+    "## Querying\n",
     "\n",
-    "embeddings = OpenAIEmbeddings()"
+    "Now, we can query the vectorstore. There are a few methods to do this. The most standard is to use `similarity_search`."
] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 4, "id": "5eabdb75", "metadata": { "tags": [] }, "outputs": [], "source": [ - "db = FAISS.from_documents(docs, embeddings)\n", - "\n", "query = \"What did the president say about Ketanji Brown Jackson\"\n", "docs = db.similarity_search(query)" ] }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 5, "id": "4b172de8", "metadata": { "tags": [] @@ -107,6 +150,60 @@ "print(docs[0].page_content)" ] }, + { + "cell_type": "markdown", + "id": "6d9286c2-0802-4f02-8f9a-9f7fae7c79b0", + "metadata": {}, + "source": [ + "## As a Retriever\n", + "\n", + "We can also convert the vectorstore into a [Retriever](/docs/modules/data_connection/retrievers) class. This allows us to easily use it in other LangChain methods, which largely work with retrievers" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "6e91b475-3878-44e0-8720-98d903754b46", + "metadata": {}, + "outputs": [], + "source": [ + "retriever = db.as_retriever()" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "a869c874-84b5-4d2c-9993-2513f10aee83", + "metadata": {}, + "outputs": [], + "source": [ + "docs = retriever.invoke(query)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "046739d2-91fe-4101-8b72-c0bcdd9e02b9", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", + "\n", + "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. 
\n", + "\n", + "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", + "\n", + "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n" + ] + } + ], + "source": [ + "print(docs[0].page_content)" + ] + }, { "cell_type": "markdown", "id": "f13473b5", @@ -522,7 +619,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.12" + "version": "3.11.1" } }, "nbformat": 4, diff --git a/libs/community/langchain_community/chat_models/cohere.py b/libs/community/langchain_community/chat_models/cohere.py index f74abcee3caae..bfdab6f3b1dae 100644 --- a/libs/community/langchain_community/chat_models/cohere.py +++ b/libs/community/langchain_community/chat_models/cohere.py @@ -105,9 +105,10 @@ class ChatCohere(BaseChatModel, BaseCohere): from langchain_community.chat_models import ChatCohere from langchain_core.messages import HumanMessage - chat = ChatCohere(model="foo") - result = chat([HumanMessage(content="Hello")]) - print(result.content) + chat = ChatCohere(model="command", max_tokens=256, temperature=0.75) + + messages = [HumanMessage(content="knock knock")] + chat.invoke(messages) """ class Config:
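
The notebooks in this diff all compose steps with the LCEL `|` operator (for example `RunnablePassthrough.assign(context=...) | prompt | llm | parser`) and run chains with `.invoke(...)`. A stdlib-only sketch of the idea behind that composition follows; the `Runnable` class and the retriever/prompt/parser stand-ins here are hypothetical illustrations, not the `langchain_core` implementation.

```python
# Conceptual sketch of LCEL-style piping: each step exposes `invoke`,
# and `a | b` builds a new step that feeds a's output into b.


class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose left-to-right, mirroring LCEL's `|` operator.
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Hypothetical stand-ins for the retriever, prompt, and output-parser steps:
retrieve = Runnable(lambda q: {"context": "BOTW released in 2017", "question": q})
prompt = Runnable(lambda d: f"Context: {d['context']}\nQuestion: {d['question']}")
parse = Runnable(lambda s: s.strip())

chain = retrieve | prompt | parse
print(chain.invoke("what year was breath of the wild released?"))
```

The same shape underlies the Tavily retriever chain above: `RunnablePassthrough.assign` adds a `context` key computed from the question, the prompt template formats the dict into text, and the parser post-processes the model output.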