Releases: griptape-ai/griptape
v0.22.0
Added
- `PromptImageGenerationEngine` for generating images from text prompts (see the sketch after this list).
- `VariationImageGenerationEngine` for generating variations of an input image according to a text prompt.
- `InpaintingImageGenerationEngine` for modifying an input image according to a text prompt within the bounds of a mask defined by a mask image.
- `OutpaintingImageGenerationEngine` for modifying an input image according to a text prompt outside the bounds of a mask defined by a mask image.
- `PromptImageGenerationClient` for enabling an LLM to use the `PromptImageGenerationEngine`.
- `VariationImageGenerationClient` for enabling an LLM to use the `VariationImageGenerationEngine`.
- `InpaintingImageGenerationClient` for enabling an LLM to use the `InpaintingImageGenerationEngine`.
- `OutpaintingImageGenerationClient` for enabling an LLM to use the `OutpaintingImageGenerationEngine`.
- `OpenAiImageGenerationDriver` for use with OpenAI's image generation models.
- `LeonardoImageGenerationDriver` for use with Leonardo AI's image generation models.
- `AmazonBedrockImageGenerationDriver` for use with Amazon Bedrock's image generation models; requires an Image Generation Model Driver.
- `BedrockTitanImageGenerationModelDriver` for use with Amazon Bedrock's Titan image generation.
- `ImageArtifact` for storing image data; used heavily by the image Engines, Tasks, and Drivers.
- `ImageLoader` for loading image files into `ImageArtifact`s.
- Support for all Tokenizers in `OpenAiChatPromptDriver`, enabling OpenAI drop-in clients such as Together AI.
- `AmazonSageMakerEmbeddingDriver` for using Amazon SageMaker to generate embeddings. Thanks @KaushikIyer16!
- Claude 2.1 support in `AnthropicPromptDriver` and `AmazonBedrockPromptDriver` via `BedrockClaudePromptModelDriver`.
- `CodeExecutionTask` for executing code as a Task without the need for an LLM.
- `BedrockLlamaPromptModelDriver` for using Llama models on Amazon Bedrock.
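The image generation pieces compose in the usual Driver/Engine/Client pattern: a Driver talks to the model provider, an Engine wraps the Driver, and a Client exposes the Engine to an LLM as a Tool. Below is a minimal sketch of wiring them together; the constructor parameter names shown (`model`, `image_generation_driver`, `engine`) are assumptions for illustration, not a verified 0.22.0 API reference.

```python
# Illustrative sketch only; parameter names are assumptions based on the
# release notes, not a verified API reference.
from griptape.drivers import OpenAiImageGenerationDriver
from griptape.engines import PromptImageGenerationEngine
from griptape.structures import Agent
from griptape.tools import PromptImageGenerationClient

# Driver for OpenAI's image generation models (added in this release).
driver = OpenAiImageGenerationDriver(model="dall-e-2")

# The Engine wraps the Driver; the Client lets an LLM call the Engine.
engine = PromptImageGenerationEngine(image_generation_driver=driver)
agent = Agent(tools=[PromptImageGenerationClient(engine=engine)])

agent.run("Generate an image of a lighthouse at dusk.")
```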
Fixed
- `MongoDbAtlasVectorStore` namespace not being used properly when querying.
- Miscellaneous type errors throughout the codebase.
- Remove unused section from `ToolTask` system prompt template.
- Structure execution args being cleared after run, preventing inspection of the Structure's `input_task`'s `input`.
- Unhandled `SqlClient` exception. Thanks @michal-repo!
Changed
- BREAKING: Rename `input_template` field to `input` in Tasks that take a text input.
- BREAKING: Rename `BedrockTitanEmbeddingDriver` to `AmazonBedrockTitanEmbeddingDriver`.
- BREAKING: Rename `AmazonBedrockStableDiffusionImageGenerationModelDriver` to `BedrockStableDiffusionImageGenerationModelDriver`.
- BREAKING: Rename `AmazonBedrockTitanImageGenerationModelDriver` to `BedrockTitanImageGenerationModelDriver`.
- BREAKING: Rename `ImageGenerationTask` to `PromptImageGenerationTask`.
- BREAKING: Rename `ImageGenerationEngine` to `PromptImageGenerationEngine`.
- BREAKING: Rename `ImageGenerationTool` to `PromptImageGenerationClient`.
- Improve system prompt generation with Claude 2.0.
- Improve integration test coverage.
- Changed `BaseTextInputTask` to accept a `str`, `TextArtifact`, or callable returning a `TextArtifact` (see the sketch after this list).
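A short sketch of the renamed `input` field and the more flexible input types accepted by `BaseTextInputTask` subclasses; the `PromptTask` usage shown here is an assumption for illustration.

```python
# Sketch only; assumes PromptTask (a BaseTextInputTask subclass) accepts
# these forms for its renamed `input` field.
from griptape.artifacts import TextArtifact
from griptape.structures import Pipeline
from griptape.tasks import PromptTask

pipeline = Pipeline()
pipeline.add_tasks(
    PromptTask(input="Summarize the weekly report."),            # plain str
    PromptTask(input=TextArtifact("Summarize this artifact.")),  # TextArtifact
    # Callable returning a TextArtifact, resolved when the task runs:
    PromptTask(input=lambda task: TextArtifact("Summarize the parent output.")),
)
```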
v0.21.2
v0.21.1
🔧 Improvements
- Fixed README link.
- Fixed missing module file in `griptape.schemas.utils`.
v0.21.0
🚨 Breaking Changes
- Renamed `hugging_face_hub_prompt_driver.py` to `huggingface_hub_prompt_driver.py`.
- Renamed `hugging_face_pipeline_prompt_driver.py` to `huggingface_pipeline_prompt_driver.py`.
- Renamed `hugging_face_tokenizer.py` to `huggingface_tokenizer.py`.
- Moved `ProxycurlClient` Tool to a dedicated repository.
🆕 New Features
- Added `HuggingFaceEmbeddingDriver`.
- Added `SimpleTokenizer` for use with LLM providers that don't provide tokenization APIs.
- Added streaming support to the `Chat` util. Thanks @mattma1970!
- Added Image Generation Drivers (`AzureOpenAiDalleImageGenerationDriver`, `LeonardoImageGenerationDriver`, and `AmazonBedrockImageGenerationDriver`).
- Added streaming support to `HuggingFaceHubPromptDriver`.
- Added `prompt_stack` and `prompt` fields to `StartPromptEvent` to allow for inspection of the prompt before it is sent to the LLM (see the sketch after this list).
- Created Tool Template repository.
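The new fields make it possible to inspect the exact prompt from an event listener before the request goes out. A rough sketch follows; the `EventListener` handler signature and `event_types` filter shown are assumptions.

```python
# Sketch only; the EventListener handler signature and event_types filter
# are assumptions, not a verified 0.21.0 API reference.
from griptape.events import EventListener, StartPromptEvent
from griptape.structures import Agent

def on_start_prompt(event: StartPromptEvent) -> None:
    # Fields added in this release:
    print("prompt stack:", event.prompt_stack)
    print("final prompt:", event.prompt)

agent = Agent()
agent.add_event_listener(EventListener(on_start_prompt, event_types=[StartPromptEvent]))
agent.run("Write a haiku about tokenizers.")
```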
🔧 Improvements
- Added Python 3.12 support.
- Updated `HuggingFaceHubPromptDriver` to use the new `InferenceClient`.
- Fixed `OpenSearchVectorStoreDriver.query` argument order. Thanks @igor-2lemetry!
- Fixed `FileManager` working directory not updating when using `os.cwd()`.
- Fixed `WebLoader` failing when extracting empty pages.
- Fixed `SummaryConversationMemory` failing to deserialize.
📖 Docs
- Updated contribution guidelines for new Tools.
- Added docs for `HuggingFaceEmbeddingDriver`.
- Updated docs to reflect the `hugging_face` -> `huggingface` rename.
- Added dedicated page for Tokenizers.
- Added note to Prompt Drivers page advising users that if they choose to override the Prompt Driver, they should also consider overriding the Embedding Driver.
- Fixed broken import on Task Memory page.
v0.20.0
🚨 Breaking Changes
- Removed `BaseTask.add_child` and `BaseTask.add_parent`. Use `Workflow.add_task(s)` and `Workflow.insert_task(s)` instead.
- Changed Workflows to always have a single input Task and a single output Task, enabling Workflow Conversation Memory.
- Split dependencies into extras to reduce base installation size. Read more about this change here.
- Renamed `ToolOutputProcessor` to `TaskMemoryClient`.
- Removed Memory Actions in favor of `TaskMemoryClient`. If your Tools have data that needs to go to the LLM, you must either set `off_prompt=False` on the Tool or add a `TaskMemoryClient` with `off_prompt=False`.
- Changed all `Structure.run` methods to return a `Structure`. This resolves inconsistencies in the `run` return values across Agents, Pipelines, and Workflows. You can access the output Task value via `Structure.output_task.output.value` (see the sketch after this list).
- Renamed `AzureOpenAiChatPromptDriver`'s `deployment_id` and `api_base` to `azure_deployment` and `azure_endpoint`.
- Renamed `Structure.tool_memory` to `Structure.task_memory`.
- Renamed `ToolMemory` to `TaskMemory`.
- Renamed `Structure.memory` to `Structure.conversation_memory`.
- Removed `TextQueryTask.load` method.
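Because `Structure.run` now returns the Structure itself, output is read from `output_task`, as in this short sketch:

```python
from griptape.structures import Agent

agent = Agent()

# run() now returns the Structure rather than the output directly,
# so the result is read via output_task.output.value.
result = agent.run("Name three uses for a paperclip.")
print(result.output_task.output.value)
```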
🆕 New Features
- Added `seed` parameter to `OpenAiChatPromptDriver`. Read more about this parameter here.
- Added `response_format` parameter to `OpenAiChatPromptDriver` and `AzureOpenAiChatPromptDriver`. Read more about this parameter here.
- Added support for `gpt-4-1106-preview` (`gpt-4-turbo`) in `OpenAiChatPromptDriver`.
- Added new Workflow method `insert_tasks`. See example usage here.
- Added Conversation Memory to Workflows.
- Added `off_prompt` parameter to all Tools. With the exception of `TaskMemoryClient` (no default provided), `off_prompt` defaults to `True`, meaning that Tool results will go into Task Memory. If you'd like the results to go directly back to the LLM, set `off_prompt` to `False` (see the sketch after this list).
- Added support for Rulesets and Rules to all Tasks.
- Added `namespace` parameter to `TextQueryTask`.
- Added `encoding` parameter to `TextLoader`.
- Added `download_objects` parameter to `AwsS3Client`.
- Added `encoding` parameter to `TextLoader` and `TextArtifact`.
- Added `to_bytes` method to `TextArtifact`.
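A minimal sketch of the `off_prompt` behavior described above; `WebScraper` is used purely as an example Tool, and its wiring here is an assumption.

```python
# Sketch of off_prompt routing; WebScraper is only an illustrative Tool choice.
from griptape.structures import Agent
from griptape.tools import TaskMemoryClient, WebScraper

agent = Agent(
    tools=[
        # off_prompt=True (the default) keeps scraped pages in Task Memory
        # instead of returning them to the LLM.
        WebScraper(off_prompt=True),
        # TaskMemoryClient has no default, so off_prompt must be set;
        # False lets queried results flow back to the LLM.
        TaskMemoryClient(off_prompt=False),
    ]
)

agent.run("Summarize https://www.griptape.ai")
```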
🔧 Improvements
- Updated `openai`'s SDK to `^1.1.0`.
- Renamed Tool Memory to Task Memory. Soon, all Tasks will be able to output Artifacts into Task Memory, giving the ability to share Artifacts across Tasks.
- Added `MetaMemory` to store `ActionSubtask` results to improve the LLM's reasoning when using `ToolkitTask`. Other applications of `MetaMemory` will be added in future releases.
- Flattened Memory Actions and Tool Actions into Actions to improve the LLM's reasoning when using `ToolkitTask`.
- Improved system prompt for `ToolTask` and `ToolkitTask` to reduce action hallucinations.
📖 Docs
- Updated examples to reflect Tool Memory to Task Memory rename.
- Updated examples to reflect flattening of Memory Actions and Tool Actions into Actions.
- Updated Overview and Prompt Driver pages to reflect pip extras changes.
- Updated Workflow example to use the new `Workflow.insert_tasks` method.
- Added note to OpenAI Prompt Driver examples regarding `seed` and `response_format` parameters.
- Updated Tool examples to use `TaskMemoryClient` and `off_prompt` parameters.
v0.19.4
v0.19.3
🚨 Breaking Changes
- Updated `Structure.add_event_listener` to take an `EventListener` directly.
🔧 Improvements
- Fixed freeze when using the `Stream` utility multiple times.
- Fixed boto3 session not being passed from `AmazonBedrockEmbeddingDriver` to `BedrockTitanTokenizer`.
- Added `Structure.remove_event_listener`.
v0.19.2
v0.19.1
🔧 Improvements
- Moved `requests` from Tool `requirements.txt` to a hard dependency in `pyproject.toml`.
v0.19.0
🚨 Breaking Changes
- Removed `BufferConversationMemory`.
- Simplified Event Listener format.
- Restricted Python support to `>=3.9,<3.12`. `3.12` support is coming, but is currently blocked by several downstream libraries.
- Removed Tokenizer's `encode` and `decode` methods.
- Removed `TextToolMemory` and `BlobToolMemory`.
🆕 New Features
- Combined `TextToolMemory` and `BlobToolMemory` under `ToolMemory`.
- Added Conversation Memory buffering via the `max_runs` parameter (see the sketch after this list).
- Added Conversation Memory pruning to prevent long-running conversations from exceeding the prompt token limit.
- Added prompt streaming to Prompt Drivers that can support it.
- Added `EmailLoader`.
- Added support for more file extensions in the `FileManager` Tool.
- Added `share_file` activity to the `GoogleDriveClient` Tool.
- Added serialization and deserialization for all Events.
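A short sketch of the buffering behavior; the `ConversationMemory` class, its import path, and the exact placement of `max_runs` are assumptions inferred from the note above.

```python
# Sketch only; assumes ConversationMemory accepts max_runs directly.
from griptape.memory.structure import ConversationMemory
from griptape.structures import Agent

# Keep only the two most recent runs, replacing the removed
# BufferConversationMemory class.
agent = Agent(memory=ConversationMemory(max_runs=2))

agent.run("First question")
agent.run("Second question")
agent.run("Third question")  # the first run is no longer retained
```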
🔧 Improvements
- Added `BaseVectorStoreDriver.QueryResult.id`.
- Fixed `SummaryConversationMemory` bugs.
- Refactored `EmailClient` to use `EmailLoader`.
- Refactored Embedding Drivers to use Chunkers for improvements when embedding large text.
- Added parsing of OpenAi rate limit headers for future use.
- Added `__len__` magic methods for Artifacts (thanks @Bubble-Interface!).
- Fixed boto3 session not being passed from `BedrockTitanEmbeddingDriver` into `BedrockTitanTokenizer`.
📖 Docs
- Updated docs for new Event Listener format.
- Added page for Extraction Engines.
- Added sections for additional Task types.
- Added page for `OpenWeatherClient` Tool.
- Added example for `PgVectorStoreDriver`.
- Updated page for Conversation Memory buffering changes.