Add ChatGPT wrapper pipeline #185
Conversation
Walkthrough

This pull request introduces ChatGPT wrapper pipeline functionality across multiple files in the application. The changes include a new function that converts chat history to a string, a ChatGPTWrapperPipeline class, execution and status update DTOs, a pipeline-specific callback, and new router endpoints.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Router
    participant ChatGPTWrapperPipeline
    participant RequestHandler
    participant Callback
    User->>Router: Initiate Chat
    Router->>ChatGPTWrapperPipeline: Create Pipeline
    ChatGPTWrapperPipeline->>ChatGPTWrapperPipeline: Convert Chat History
    ChatGPTWrapperPipeline->>RequestHandler: Send Chat Prompts
    RequestHandler-->>ChatGPTWrapperPipeline: Generate Response
    ChatGPTWrapperPipeline->>Callback: Invoke Done Method
    Callback-->>User: Return Chat Response
```
Actionable comments posted: 0
🧹 Nitpick comments (4)
app/pipeline/chat_gpt_wrapper_pipeline.py (3)

22-32: Offer flexibility for GPT version and context length.

The GPT version `4.5` and `context_length=16385` are hardcoded in the `RequirementList`. Consider exposing these values as optional constructor parameters or constants, allowing other parts of the system or configuration files to override them if needed.
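A minimal sketch of that suggestion (the class and field names mirror the snippets quoted later in this thread; the override wiring itself is a hypothetical illustration, not the project's exact API):

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the project's RequirementList; only the two fields
# discussed in the review are modeled here.
@dataclass
class RequirementList:
    gpt_version_equivalent: float
    context_length: int

# The values previously hardcoded inside __init__ become module-level
# constants that a config layer could override.
DEFAULT_GPT_VERSION = 4.5
DEFAULT_CONTEXT_LENGTH = 16385

class ChatGPTWrapperPipeline:
    def __init__(
        self,
        callback: Optional[object] = None,
        gpt_version_equivalent: float = DEFAULT_GPT_VERSION,
        context_length: int = DEFAULT_CONTEXT_LENGTH,
    ):
        self.callback = callback
        # Callers (or a configuration file) can now override the requirements
        self.requirements = RequirementList(
            gpt_version_equivalent=gpt_version_equivalent,
            context_length=context_length,
        )

# Default construction keeps the old behavior; overrides are opt-in.
pipeline = ChatGPTWrapperPipeline(context_length=32768)
```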
33-33: Unused `self.tokens` attribute.

`self.tokens` is declared but never used. This may lead to confusion or code clutter. If it is not needed, consider removing it.
48-50: Consider making the temperature parameter configurable.

The call to `CompletionArguments(temperature=0.4)` sets a fixed temperature. Exposing the temperature as an argument could provide more flexibility.

app/domain/chat_gpt_wrapper_pipeline_execution_dto.py (1)
8-10: Optional validation on `conversation`.

You might consider a Pydantic validator to ensure the conversation is not empty, matching the requirement that at least one message must exist. This could catch errors earlier rather than raising a `ValueError` downstream.
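A minimal sketch of such a validator, assuming Pydantic v2 and a simplified stand-in message model (the real DTO uses the project's own message type):

```python
from typing import List

from pydantic import BaseModel, field_validator

# Hypothetical stand-in for the project's message type; only the
# non-empty-conversation validator is the point of this sketch.
class Message(BaseModel):
    sender: str
    text: str

class ChatGPTWrapperPipelineExecutionDTO(BaseModel):
    conversation: List[Message]

    @field_validator("conversation")
    @classmethod
    def conversation_not_empty(cls, v: List[Message]) -> List[Message]:
        # Fail fast at DTO construction time instead of deep in the pipeline
        if not v:
            raise ValueError("conversation must contain at least one message")
        return v
```

With this in place, an empty conversation is rejected at parse time as a `ValidationError` rather than surfacing later in the pipeline.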
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)

- app/domain/chat_gpt_wrapper_pipeline_execution_dto.py (1 hunks)
- app/domain/status/chat_gpt_wrapper_status_update_dto.py (1 hunks)
- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
- app/web/routers/pipelines.py (4 hunks)
- app/web/status/status_update.py (3 hunks)
🔇 Additional comments (5)
app/domain/status/chat_gpt_wrapper_status_update_dto.py (1)
4-10: DTO structure looks good.

Using a dedicated field `result: str` for the pipeline operation outcome is consistent with the existing status update design.

app/web/status/status_update.py (1)

305-328: Class constructor logic is consistent with other callbacks.

The `ChatGPTWrapperCallback` class properly extends `StatusCallback` and aligns with the existing pattern in the codebase for pipeline-specific callbacks.

app/web/routers/pipelines.py (3)

237-259: Worker logic mirrors existing patterns correctly.

The `run_chatgpt_wrapper_pipeline_worker` function follows a familiar try-except structure, setting up the callback and pipeline, then handling runtime exceptions. This consistency aids maintainability.

260-271: Route definition is consistent with other pipeline routes.

Using the `@router.post` decorator with threaded execution aligns with the approach used for other pipeline endpoints.

335-342: New pipeline variant recognized.

Adding `"CHAT_GPT_WRAPPER"` retrieves the default variant as expected. This ensures front-end or user code can discover and use the new ChatGPT wrapper pipeline variant.
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/pipeline/chat/course_chat_pipeline.py (2)
103-104: Consider unifying repeated logic for model initialization.

Both `llm_big` and `llm_small` share the same `completion_args`, which could be refactored into a single point of definition for maintainability. For instance, define `completion_args` outside of each constructor call so that future changes can be made in one place.

111-112: Reuse shared `completion_args` for clarity.

Here, the `completion_args` are duplicated in the same way as in `llm_big`. If the logic for handling model parameters is consistent across multiple models, consider encapsulating this setup in a helper function or shared configuration to reduce repetition.
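As a rough sketch of that refactor (the wrapper and argument types below are hypothetical stand-ins; only the shared-`completion_args` idea comes from the review), the arguments can be defined once and passed to both model constructors:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the project's model wrapper and arguments.
@dataclass
class CompletionArguments:
    temperature: float
    max_tokens: int

@dataclass
class ModelWrapper:
    name: str
    completion_args: CompletionArguments

# Define the shared arguments once, outside the constructor calls,
# so a future change (e.g. a new temperature) happens in one place.
completion_args = CompletionArguments(temperature=0.5, max_tokens=2000)

llm_big = ModelWrapper(name="big-model", completion_args=completion_args)
llm_small = ModelWrapper(name="small-model", completion_args=completion_args)
```

Note that both wrappers then share one instance; if the two models ever need divergent settings, `dataclasses.replace(completion_args, temperature=...)` gives an independent copy.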
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)

- app/domain/chat_gpt_wrapper_pipeline_execution_dto.py (1 hunks)
- app/pipeline/chat/course_chat_pipeline.py (1 hunks)
- app/pipeline/chat/exercise_chat_agent_pipeline.py (1 hunks)
- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
- app/pipeline/shared/citation_pipeline.py (1 hunks)
- app/web/routers/pipelines.py (4 hunks)
- app/web/status/status_update.py (3 hunks)
✅ Files skipped from review due to trivial changes (2)
- app/pipeline/shared/citation_pipeline.py
- app/pipeline/chat/exercise_chat_agent_pipeline.py
🚧 Files skipped from review as they are similar to previous changes (4)
- app/domain/chat_gpt_wrapper_pipeline_execution_dto.py
- app/web/status/status_update.py
- app/pipeline/chat_gpt_wrapper_pipeline.py
- app/web/routers/pipelines.py
Actionable comments posted: 1
🧹 Nitpick comments (2)
app/pipeline/chat_gpt_wrapper_pipeline.py (2)
25-35: Remove unused tokens list and consider configuration improvements.

A few suggestions to improve the constructor:

- The `tokens` list is initialized but never used in the code
- Consider moving the GPT configuration (version and context length) to a configuration file for better maintainability
- Add a docstring explaining the constructor parameters

```diff
 def __init__(self, callback: Optional[ChatGPTWrapperCallback] = None):
+    """Initialize the ChatGPT wrapper pipeline.
+
+    Args:
+        callback: Optional callback for handling pipeline execution updates
+    """
     super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
     self.callback = callback
     self.request_handler = CapabilityRequestHandler(
         requirements=RequirementList(
             gpt_version_equivalent=4.5,
             context_length=16385,
         )
     )
-    self.tokens = []
```
21-61: Document experimental status and lifecycle.

Since this is mentioned as a temporary, experimental implementation in the PR objectives, consider:

- Adding a class-level docstring indicating its experimental status
- Including a deprecation timeline if known
- Documenting any known limitations or experimental features

```diff
 class ChatGPTWrapperPipeline(Pipeline):
+    """Experimental ChatGPT wrapper pipeline implementation.
+
+    Note: This is a temporary implementation for experimental purposes.
+    It may be subject to significant changes or removal in future versions.
+
+    Known limitations:
+    - Fixed GPT version and context length
+    - Basic error handling
+    - No retry mechanism
+    """
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
- app/pipeline/prompts/chat_gpt_wrapper_prompts.py (1 hunks)

✅ Files skipped from review due to trivial changes (1)
✅ Files skipped from review due to trivial changes (1)
- app/pipeline/prompts/chat_gpt_wrapper_prompts.py
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: Build and Push Docker Image
- GitHub Check: Mend Security Check
🔇 Additional comments (2)
app/pipeline/chat_gpt_wrapper_pipeline.py (2)
1-19: LGTM! Well-organized imports and proper logging setup.

The imports are logically grouped and all dependencies are properly referenced.

21-24: LGTM! Clean class definition with proper type hints.

The class structure is well-defined and follows good object-oriented principles.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
README.MD (1)
Line range hint 1-350: Document the ChatGPT wrapper pipeline implementation

The README lacks documentation for the newly added ChatGPT wrapper pipeline. Based on the AI summary, please add documentation covering:

- Overview of the ChatGPT wrapper pipeline
- Components (DTOs, pipeline class, callback mechanism)
- API endpoints and usage examples
- Configuration requirements
- Integration with existing pipelines

Would you like me to help generate a template for this documentation?
🧹 Nitpick comments (1)
README.MD (1)
187-187: Clarify model requirements for the ChatGPT wrapper pipeline

While the warning about `gpt_version_equivalent` is helpful, it should specifically mention whether the new ChatGPT wrapper pipeline has different model or capability requirements.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)

- application.example.yml is excluded by `!**/*.yml`
- example_application.yml is excluded by `!**/*.yml`
📒 Files selected for processing (2)

- README.MD (6 hunks)
- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- app/pipeline/chat_gpt_wrapper_pipeline.py
🧰 Additional context used
🪛 LanguageTool
README.MD
[uncategorized] ~323-~323: Loose punctuation mark.
Context: "...l` file. - PYRIS_LLM_CONFIG_YML_FILE: Path to your `llm_config.yml` file. -..."
(UNLIKELY_OPENING_PUNCTUATION)
🔇 Additional comments (1)
README.MD (1)
98-98: LGTM! Consistent naming convention applied

The configuration file naming has been standardized from hyphen-based (`llm-config.yml`) to underscore-based (`llm_config.yml`) throughout the document.

Also applies to: 101-101, 195-195, 323-323, 332-332
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/main.py (2)
23-40: Add type hints and improve documentation.

The function implementation is correct but could benefit from better typing and documentation. Apply these improvements:

```diff
-def custom_openapi():
+def custom_openapi() -> dict:
+    """Customize OpenAPI schema to include bearer token authentication.
+
+    Returns:
+        dict: Modified OpenAPI schema with global bearer token security.
+    """
```

🧰 Tools
🪛 GitHub Actions: Run linters
[warning] File needs to be reformatted using Black code formatter
35-37: Consider making security requirements configurable.

Applying bearer token security globally to all endpoints might be too restrictive. Some endpoints (like health checks) typically remain public. Consider:

- Adding configuration to specify which paths/operations should be secured
- Using FastAPI's security dependencies instead for more granular control
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- app/main.py (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Run linters
app/main.py
[warning] File needs to be reformatted using Black code formatter
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (3)
app/main.py (3)
41-41: LGTM!

The OpenAPI schema override is correctly implemented.

23-41: Verify authentication requirements for ChatGPT wrapper.

Since this PR adds a ChatGPT wrapper pipeline, please verify:

- If bearer token authentication is required for ChatGPT API access
- How the token will be propagated to the ChatGPT wrapper

23-41: Format code using Black.

The pipeline indicates formatting issues. Please run the Black formatter on this file. Run this script to check the formatting differences:
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/main.py (2)
24-36: Add type hints to improve code clarity and IDE support.

The function should include a return type annotation and consider adding intermediate type hints.

```diff
-def custom_openapi():
+def custom_openapi() -> dict:
```
28-35: Consider more granular security controls.

The current implementation applies bearer token authentication globally to all endpoints. Consider allowing more granular control by:

- Adding a description to the security scheme
- Potentially allowing certain endpoints (like health checks) to be public

```diff
 openapi_schema["components"]["securitySchemes"] = {
-    "bearerAuth": {"type": "apiKey", "in": "header", "name": "Authorization"}
+    "bearerAuth": {
+        "type": "apiKey",
+        "in": "header",
+        "name": "Authorization",
+        "description": "Bearer token authentication for ChatGPT wrapper pipeline"
+    }
 }
```
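For context, the kind of schema mutation under discussion can be sketched without FastAPI as a plain dict transformation (the `bearerAuth` entry follows the snippet in this comment; the function name and sample schema are assumptions for illustration):

```python
from typing import Any, Dict

def add_bearer_security(openapi_schema: Dict[str, Any]) -> Dict[str, Any]:
    # Register an API-key-style scheme that reads the Authorization header,
    # mirroring the securitySchemes entry discussed in this review.
    openapi_schema.setdefault("components", {})["securitySchemes"] = {
        "bearerAuth": {"type": "apiKey", "in": "header", "name": "Authorization"}
    }
    # Apply the scheme globally; per-route exemptions (e.g. health checks)
    # would instead require editing individual operations under "paths".
    openapi_schema["security"] = [{"bearerAuth": []}]
    return openapi_schema

schema = add_bearer_security({"openapi": "3.1.0", "paths": {}})
```

The top-level `security` key is what makes the requirement global; that is the lever a more granular design would move down to individual operations.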
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)

- app/main.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (2)
app/main.py (2)
39-39: LGTM! OpenAPI schema override is correctly implemented.

The placement and implementation of the OpenAPI schema override is clean and follows best practices.

24-41: Verify security implementation in routers.

Let's ensure the security scheme is properly handled across the codebase.

✅ Verification successful

Security implementation is properly enforced. The OpenAPI security scheme is correctly implemented and enforced:

- All router endpoints are protected using `TokenValidator`
- The Authorization header is properly validated
- Appropriate error handling is in place (401/403)

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for proper security handling in routers and dependencies

# Check for security dependencies and auth handling
echo "Checking for security-related imports and dependencies..."
rg -l "(?i)(security|auth|bearer|jwt|token)" app/

# Check for FastAPI security decorators
echo "Checking for FastAPI security decorators..."
rg "(?i)(depends.*security|security.*depends|oauth2|bearer)" app/web/routers/

# Check for potential security bypass
echo "Checking for potential security bypass..."
rg -l "(?i)(skip.*auth|bypass.*security|no.*auth)" app/
```

Length of output: 1878

```shell
#!/bin/bash
# Check the security implementation details

# Examine dependencies.py for security implementation
echo "=== Security implementation in dependencies.py ==="
rg -A 5 -B 5 "(?i)(security|auth|bearer)" app/dependencies.py

# Check auth-related custom exceptions
echo -e "\n=== Auth-related custom exceptions ==="
rg -A 2 -B 2 "(?i)(auth|security)" app/common/custom_exceptions.py

# Check how routers use dependencies
echo -e "\n=== Router dependency usage ==="
rg "(?i)(dependencies|Depends)" app/web/routers/
```

Length of output: 2951
Following test payload:

Actual example response:
Actionable comments posted: 3
🧹 Nitpick comments (1)
app/pipeline/chat_gpt_wrapper_pipeline.py (1)
4-12: Improve import organization.

Consider consolidating related imports and adding newlines between import groups:

```diff
 import logging
 from typing import List, Optional

-from langchain_core.prompts import (
-    ChatPromptTemplate,
-)
-from app.common.pyris_message import IrisMessageRole, PyrisMessage
-from app.domain.chat.exercise_chat.exercise_chat_pipeline_execution_dto import ExerciseChatPipelineExecutionDTO
-from app.domain.data.text_message_content_dto import TextMessageContentDTO
-from app.pipeline.prompts.chat_gpt_wrapper_prompts import chat_gpt_initial_system_prompt
-from langchain_core.messages import SystemMessage, HumanMessage
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_core.messages import SystemMessage, HumanMessage
+
+from app.common.pyris_message import IrisMessageRole, PyrisMessage
+from app.domain.chat.exercise_chat.exercise_chat_pipeline_execution_dto import ExerciseChatPipelineExecutionDTO
+from app.domain.data.text_message_content_dto import TextMessageContentDTO
+from app.pipeline.prompts.chat_gpt_wrapper_prompts import chat_gpt_initial_system_prompt
```

🧰 Tools

🪛 Ruff (0.8.2)

9-9: `app.domain.data.text_message_content_dto.TextMessageContentDTO` imported but unused. Remove unused import. (F401)

🪛 GitHub Actions: Run linters

[warning] File needs to be reformatted using Black code formatter
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
- app/web/routers/pipelines.py (4 hunks)
- app/web/status/status_update.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- app/web/status/status_update.py
- app/web/routers/pipelines.py
🧰 Additional context used
🪛 Ruff (0.8.2)
app/pipeline/chat_gpt_wrapper_pipeline.py
9-9: `app.domain.data.text_message_content_dto.TextMessageContentDTO` imported but unused. Remove unused import. (F401)
🪛 GitHub Actions: Run linters
app/pipeline/chat_gpt_wrapper_pipeline.py
[warning] File needs to be reformatted using Black code formatter
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (1)
app/pipeline/chat_gpt_wrapper_pipeline.py (1)
59-89: Add error handling and improve configuration management.

Several critical issues need to be addressed, as mentioned in the previous review:

- Add error handling for the chat request
- Move the temperature value to configuration
- Add a return type hint for the method

Additional improvements needed:

- Add documentation for the chat history slicing logic
- Validate the callback before use

```diff
 def __call__(
     self,
     dto: ExerciseChatPipelineExecutionDTO,
     prompt: Optional[ChatPromptTemplate] = None,
     **kwargs,
-):
+) -> None:
     """
     Run the ChatGPT wrapper pipeline.
     This consists of a single response generation step.
+
+    Args:
+        dto: The pipeline execution DTO containing conversation
+        prompt: Optional chat prompt template
+        **kwargs: Additional keyword arguments
+
+    Raises:
+        ValueError: If callback is not set or chat history is invalid
+        RuntimeError: If chat request fails
     """
+    if not self.callback:
+        raise ValueError("Callback must be set before running the pipeline")
+
     query = dto.chat_history[-1] if dto.chat_history else None
     if query and query.sender != IrisMessageRole.USER:
         query = None

+    # Keep last 5 messages if no query, or last 6 excluding the last one if query exists
     chat_history = (
         dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
     )

     chat_history_messages = convert_chat_history_to_str(chat_history)

     prompts = ChatPromptTemplate.from_messages(
         [
             SystemMessage(chat_gpt_initial_system_prompt),
             HumanMessage(chat_history_messages),
         ]
     )

-    response = self.request_handler.chat(
-        prompts, CompletionArguments(temperature=0.4)
-    )
-    self.callback.done(final_result=response)
+    try:
+        response = self.request_handler.chat(
+            prompts, CompletionArguments(temperature=self.config.get("temperature", 0.4))
+        )
+        self.callback.done(final_result=response)
+    except Exception as e:
+        logger.error(f"Chat request failed: {str(e)}")
+        raise RuntimeError(f"Failed to process chat request: {str(e)}")
```
```python
class ChatGPTWrapperPipeline(Pipeline):
    callback: ExerciseChatStatusCallback
    request_handler: CapabilityRequestHandler

    def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
        super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
        self.callback = callback
        self.request_handler = CapabilityRequestHandler(
            requirements=RequirementList(
                gpt_version_equivalent=4.5,
                context_length=16385,
            )
        )
        self.tokens = []
```
🛠️ Refactor suggestion

Add type annotations and remove unused variable.

The class definition needs improvements:

- Add proper type annotations for class variables
- Consider moving the implementation_id to configuration
- Remove the unused tokens list

```diff
 class ChatGPTWrapperPipeline(Pipeline):
-    callback: ExerciseChatStatusCallback
-    request_handler: CapabilityRequestHandler
+    callback: Optional[ExerciseChatStatusCallback]
+    request_handler: CapabilityRequestHandler

     def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
         super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
         self.callback = callback
         self.request_handler = CapabilityRequestHandler(
             requirements=RequirementList(
                 gpt_version_equivalent=4.5,
                 context_length=16385,
             )
         )
-        self.tokens = []
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
class ChatGPTWrapperPipeline(Pipeline):
    callback: Optional[ExerciseChatStatusCallback]
    request_handler: CapabilityRequestHandler

    def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
        super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
        self.callback = callback
        self.request_handler = CapabilityRequestHandler(
            requirements=RequirementList(
                gpt_version_equivalent=4.5,
                context_length=16385,
            )
        )
```
```python
import logging
from typing import List, Optional

from langchain_core.prompts import (
    ChatPromptTemplate,
)
from app.common.pyris_message import IrisMessageRole, PyrisMessage
from app.domain.chat.exercise_chat.exercise_chat_pipeline_execution_dto import ExerciseChatPipelineExecutionDTO
from app.domain.data.text_message_content_dto import TextMessageContentDTO
from app.pipeline.prompts.chat_gpt_wrapper_prompts import chat_gpt_initial_system_prompt
from langchain_core.messages import SystemMessage, HumanMessage

from app.llm import CapabilityRequestHandler, RequirementList, CompletionArguments
from app.pipeline import Pipeline
from app.web.status.status_update import ExerciseChatStatusCallback

logger = logging.getLogger(__name__)


def convert_chat_history_to_str(chat_history: List[PyrisMessage]) -> str:
    """
    Converts the chat history to a string
    :param chat_history: The chat history
    :return: The chat history as a string
    """

    def map_message_role(role: IrisMessageRole) -> str:
        if role == IrisMessageRole.SYSTEM:
            return "System"
        elif role == IrisMessageRole.ASSISTANT:
            return "AI Tutor"
        elif role == IrisMessageRole.USER:
            return "Student"
        else:
            return "Unknown"

    return "\n\n".join(
        [
            f"{map_message_role(message.sender)} {"" if not message.sent_at else f"at {message.sent_at.strftime(
                "%Y-%m-%d %H:%M:%S")}"}: {message.contents[0].text_content}"
            for message in chat_history
        ]
    )


class ChatGPTWrapperPipeline(Pipeline):
    callback: ExerciseChatStatusCallback
    request_handler: CapabilityRequestHandler

    def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
        super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
        self.callback = callback
        self.request_handler = CapabilityRequestHandler(
            requirements=RequirementList(
                gpt_version_equivalent=4.5,
                context_length=16385,
            )
        )
        self.tokens = []

    def __call__(
        self,
        dto: ExerciseChatPipelineExecutionDTO,
        prompt: Optional[ChatPromptTemplate] = None,
        **kwargs,
    ):
        """
        Run the ChatGPT wrapper pipeline.
        This consists of a single response generation step.
        """
        query = dto.chat_history[-1] if dto.chat_history else None
        if query and query.sender != IrisMessageRole.USER:
            query = None

        chat_history = (
            dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
        )

        chat_history_messages = convert_chat_history_to_str(chat_history)

        prompts = ChatPromptTemplate.from_messages(
            [
                SystemMessage(chat_gpt_initial_system_prompt),
                HumanMessage(chat_history_messages),
            ]
        )

        response = self.request_handler.chat(
            prompts, CompletionArguments(temperature=0.4)
        )
        self.callback.done(final_result=response)
```
💡 Codebase verification

Black formatting is needed to fix line length and quote consistency issues.

The code has formatting issues that Black would fix:

- Lines exceeding 88 characters (max found: 112)
- Complex string formatting with inconsistent quote usage (lines 38-39)
- Long string literals that need wrapping (line 49)

🔗 Analysis chain

Format code using Black.

The pipeline indicates that this file needs to be reformatted using the Black code formatter. Run the following command to format the code:

🏁 Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Format the file using Black
black app/pipeline/chat_gpt_wrapper_pipeline.py
```

Length of output: 145

```shell
#!/bin/bash
# Check max line length
cat app/pipeline/chat_gpt_wrapper_pipeline.py | wc -L

# Check for mixed quotes and potential formatting issues
rg -n "'" app/pipeline/chat_gpt_wrapper_pipeline.py
rg -n '"' app/pipeline/chat_gpt_wrapper_pipeline.py
```

Length of output: 659
Actionable comments posted: 1
🧹 Nitpick comments (3)
app/pipeline/chat_gpt_wrapper_pipeline.py (2)
23-27: Improve docstring with proper type hints and error cases.

The docstring should follow modern Python conventions and document possible exceptions.

```diff
-    """
-    Converts the chat history to a string
-    :param chat_history: The chat history
-    :return: The chat history as a string
-    """
+    """Convert a list of PyrisMessage objects to a formatted string.
+
+    Args:
+        chat_history: List of messages containing sender and content information.
+
+    Returns:
+        str: A formatted string representation of the chat history.
+
+    Raises:
+        IndexError: If a message has no contents.
+        AttributeError: If message content is missing required attributes.
+    """
```

49-51: Add proper type hints for class attributes.

The class attributes should have explicit type annotations.

```diff
-    callback: ExerciseChatStatusCallback
-    request_handler: CapabilityRequestHandler
+    callback: Optional[ExerciseChatStatusCallback]
+    request_handler: CapabilityRequestHandler
```

app/web/routers/pipelines.py (1)
- callback: ExerciseChatStatusCallback - request_handler: CapabilityRequestHandler + callback: Optional[ExerciseChatStatusCallback] + request_handler: CapabilityRequestHandlerapp/web/routers/pipelines.py (1)
78-83: Consider using match statement for better readability.

The if-else branching could be replaced with a match statement for consistency with other pipeline variants in the file.

```diff
-    if variant == "default":
-        thread = Thread(
-            target=run_exercise_chat_pipeline_worker, args=(dto, variant, event)
-        )
-    else:
-        thread = Thread(target=run_chatgpt_wrapper_pipeline_worker, args=(dto, variant))
+    match variant:
+        case "default":
+            thread = Thread(
+                target=run_exercise_chat_pipeline_worker, args=(dto, variant, event)
+            )
+        case _:
+            thread = Thread(target=run_chatgpt_wrapper_pipeline_worker, args=(dto, variant))
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)

- app/pipeline/chat_gpt_wrapper_pipeline.py (1 hunks)
- app/web/routers/pipelines.py (4 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
app/pipeline/chat_gpt_wrapper_pipeline.py
11-11: `app.domain.data.text_message_content_dto.TextMessageContentDTO` imported but unused. Remove unused import. (F401)
🪛 GitHub Actions: Run linters
app/pipeline/chat_gpt_wrapper_pipeline.py
[error] 11-11: Unused import: 'app.domain.data.text_message_content_dto.TextMessageContentDTO' is imported but never used
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (5)
app/pipeline/chat_gpt_wrapper_pipeline.py (3)
61-61: 🛠️ Refactor suggestion — Remove unused tokens list.

The `tokens` list is initialized but never used.

```diff
-        self.tokens = []
```

Likely invalid or redundant comment.

41-43: ⚠️ Potential issue — Add null checks and simplify string formatting.

The code assumes message contents exist and uses complex string formatting.

```diff
-            f"{map_message_role(message.sender)} {"" if not message.sent_at else f"at {message.sent_at.strftime(
-                "%Y-%m-%d %H:%M:%S")}"}: {message.contents[0].text_content}"
+            f"{map_message_role(message.sender)}"
+            f"{' at ' + message.sent_at.strftime('%Y-%m-%d %H:%M:%S') if message.sent_at else ''}: "
+            f"{message.contents[0].text_content if message.contents else 'No content'}"
```

Likely invalid or redundant comment.

63-93: 🛠️ Refactor suggestion — Add error handling and improve type safety.

The `__call__` method needs several improvements:

- Add a return type hint
- Add error handling for the chat request
- Consider moving the temperature value to configuration

```diff
 def __call__(
     self,
     dto: ExerciseChatPipelineExecutionDTO,
     prompt: Optional[ChatPromptTemplate] = None,
     **kwargs,
-):
+) -> None:
     """Run the ChatGPT wrapper pipeline.

     Args:
         dto: The pipeline execution DTO containing conversation.
         prompt: Optional chat prompt template.
         **kwargs: Additional keyword arguments.

     Raises:
         ValueError: If conversation is empty or callback is not set.
         RuntimeError: If chat request fails.
     """
+    if not self.callback:
+        raise ValueError("Callback must be set before running the pipeline")
+
+    if not dto.chat_history:
+        raise ValueError("Chat history cannot be empty")

     query = dto.chat_history[-1] if dto.chat_history else None
     if query and query.sender != IrisMessageRole.USER:
         query = None

     chat_history = (
         dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
     )

     chat_history_messages = convert_chat_history_to_str(chat_history)

     prompts = ChatPromptTemplate.from_messages(
         [
             SystemMessage(chat_gpt_initial_system_prompt),
             HumanMessage(chat_history_messages),
         ]
     )

-    response = self.request_handler.chat(
-        prompts, CompletionArguments(temperature=0.4)
-    )
-    self.callback.done(final_result=response)
+    try:
+        response = self.request_handler.chat(
+            prompts, CompletionArguments(temperature=self.config.temperature)
+        )
+        self.callback.done(final_result=response)
+    except Exception as e:
+        logger.error(f"Chat request failed: {str(e)}")
+        raise RuntimeError(f"Failed to process chat request: {str(e)}")
```

Likely invalid or redundant comment.
app/web/routers/pipelines.py (2)
`239-261`: LGTM!

The implementation follows the established error handling patterns and is consistent with other pipeline workers.

`325-332`: LGTM!

The implementation follows the established pattern for feature variants.
```python
from app.domain.chat.exercise_chat.exercise_chat_pipeline_execution_dto import (
    ExerciseChatPipelineExecutionDTO,
)
from app.domain.data.text_message_content_dto import TextMessageContentDTO
```
🛠️ Refactor suggestion
Remove unused import.
The `TextMessageContentDTO` import is not used in the code.

```diff
-from app.domain.data.text_message_content_dto import TextMessageContentDTO
```
🧰 Tools
🪛 Ruff (0.8.2)
11-11: app.domain.data.text_message_content_dto.TextMessageContentDTO
imported but unused
Remove unused import: app.domain.data.text_message_content_dto.TextMessageContentDTO
(F401)
🪛 GitHub Actions: Run linters
[error] 11-11: Unused import: 'app.domain.data.text_message_content_dto.TextMessageContentDTO' is imported but never used
Actionable comments posted: 1
🧹 Nitpick comments (2)
app/pipeline/chat_gpt_wrapper_pipeline.py (2)
`1-22`: Organize imports following the PEP 8 style guide.

Reorganize imports into the following groups, separated by a single blank line:

- Standard library imports
- Third-party imports
- Local application imports

```diff
 import logging
 from typing import List, Optional

-from langchain_core.prompts import (
-    ChatPromptTemplate,
-)
+from langchain_core.messages import SystemMessage, HumanMessage
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_core.runnables import Runnable
+
 from app.common.pyris_message import IrisMessageRole, PyrisMessage
+from app.common.message_converters import convert_iris_message_to_langchain_human_message
 from app.domain.chat.exercise_chat.exercise_chat_pipeline_execution_dto import (
     ExerciseChatPipelineExecutionDTO,
 )
-from langchain_core.output_parsers import StrOutputParser
 from app.llm.langchain.iris_langchain_chat_model import IrisLangchainChatModel
-from app.pipeline.prompts.chat_gpt_wrapper_prompts import chat_gpt_initial_system_prompt
-from langchain_core.messages import SystemMessage, HumanMessage
-from langchain_core.runnables import Runnable
-
 from app.llm import CapabilityRequestHandler, RequirementList, CompletionArguments
 from app.pipeline import Pipeline
+from app.pipeline.prompts.chat_gpt_wrapper_prompts import chat_gpt_initial_system_prompt
 from app.web.status.status_update import ExerciseChatStatusCallback
-from ..common.message_converters import convert_iris_message_to_langchain_human_message
```
`42-48`: Improve string formatting readability.

The string formatting in the join operation is complex and hard to read. Consider breaking it down into smaller parts or using a more readable format.

```diff
-    return "\n\n".join(
-        [
-            f"{map_message_role(message.sender)} {"" if not message.sent_at else f"at {message.sent_at.strftime(
-                "%Y-%m-%d %H:%M:%S")}"}: {message.contents[0].text_content}"
-            for message in chat_history
-        ]
-    )
+    def format_timestamp(message: PyrisMessage) -> str:
+        return "" if not message.sent_at else f"at {message.sent_at.strftime('%Y-%m-%d %H:%M:%S')}"
+
+    formatted_messages = []
+    for message in chat_history:
+        role = map_message_role(message.sender)
+        timestamp = format_timestamp(message)
+        content = message.contents[0].text_content
+        formatted_message = f"{role} {timestamp}: {content}"
+        formatted_messages.append(formatted_message)
+
+    return "\n\n".join(formatted_messages)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `app/pipeline/chat_gpt_wrapper_pipeline.py` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (3)
app/pipeline/chat_gpt_wrapper_pipeline.py (3)
`70-70`: 🛠️ Refactor suggestion

Remove unused tokens list.

The `tokens` list is initialized but never used.

```diff
-        self.tokens = []
```

Likely invalid or redundant comment.
`82-84`: ⚠️ Potential issue

Add validation for empty chat history.

The code assumes chat history exists but doesn't handle the empty case properly.

```diff
+        if not dto.chat_history:
+            raise ValueError("Chat history cannot be empty")
+
         query = dto.chat_history[-1] if dto.chat_history else None
         if query and query.sender != IrisMessageRole.USER:
             query = None
```

Likely invalid or redundant comment.
`86-88`: Review chat history slicing logic.

The chat history slicing logic is complex and might lead to unexpected behavior:

- When query is None: takes the last 5 messages
- When query exists: takes the 6th-to-last through 2nd-to-last messages, potentially excluding relevant context

Consider simplifying the logic and verifying the impact:

```diff
-        chat_history = (
-            dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
-        )
+        # Always include last 5 messages for context
+        chat_history = dto.chat_history[-5:]
```
```python
        response = (prompt | self.pipeline).invoke({})
        self.callback.done(final_result=response)
        self.callback.skip("Skipping suggestion generation as no output was generated.")
```
Fix incorrect skip message logic.
The skip message is sent after every successful completion, which is incorrect. It should only be sent when no output is generated.
```diff
         response = (prompt | self.pipeline).invoke({})
         self.callback.done(final_result=response)
-        self.callback.skip("Skipping suggestion generation as no output was generated.")
+        if not response:
+            self.callback.skip("Skipping suggestion generation as no output was generated.")
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
        response = (prompt | self.pipeline).invoke({})
        self.callback.done(final_result=response)
        if not response:
            self.callback.skip("Skipping suggestion generation as no output was generated.")
```
…sometimes otherwise
Actionable comments posted: 2
🧹 Nitpick comments (2)
app/web/routers/pipelines.py (2)
`239-261`: Add docstring and improve error handling consistency.

The new worker function needs improvements:

- Add docstring with type hints
- Make error messages consistent with other workers
- Consider extracting common error handling logic

Apply this diff to improve the implementation:

```diff
 def run_chatgpt_wrapper_pipeline_worker(
     dto: ExerciseChatPipelineExecutionDTO, _variant: str
 ):
+    """
+    Worker function to run the ChatGPT wrapper pipeline.
+
+    Args:
+        dto: The pipeline execution DTO
+        _variant: The pipeline variant (unused)
+
+    Note:
+        This function runs in a separate thread and handles its own exceptions.
+    """
     try:
         callback = ExerciseChatStatusCallback(
             run_id=dto.settings.authentication_token,
             base_url=dto.settings.artemis_base_url,
             initial_stages=dto.initial_stages,
         )
         pipeline = ChatGPTWrapperPipeline(callback=callback)
     except Exception as e:
-        logger.error(f"Error preparing ChatGPT wrapper pipeline: {e}")
+        logger.error(f"Error preparing exercise chat pipeline: {e}")
         logger.error(traceback.format_exc())
         capture_exception(e)
         return
     try:
         pipeline(dto=dto)
     except Exception as e:
-        logger.error(f"Error running ChatGPT wrapper pipeline: {e}")
+        logger.error(f"Error running exercise chat pipeline: {e}")
         logger.error(traceback.format_exc())
         callback.error("Fatal error.", exception=e)
```
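The worker's two-phase error-handling shape (preparation failures are logged and swallowed; execution failures can reach a callback) can be sketched with stand-in functions — `make_pipeline` and the logger setup below are illustrative, not the project's API:

```python
import logging
import traceback

logger = logging.getLogger(__name__)

def run_worker(make_pipeline, dto):
    # Phase 1: preparation. No callback exists yet, so failures are only logged.
    try:
        pipeline = make_pipeline()
    except Exception as e:
        logger.error("Error preparing pipeline: %s\n%s", e, traceback.format_exc())
        return None
    # Phase 2: execution. Here the real code also reports via callback.error(...).
    try:
        return pipeline(dto)
    except Exception as e:
        logger.error("Error running pipeline: %s", e)
        return None

print(run_worker(lambda: (lambda d: d * 2), 21))  # 42
print(run_worker(lambda: 1 / 0, 21))              # None (preparation failed)
```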
`78-83`: Consider extracting thread creation logic.

The thread creation logic is duplicated across multiple pipeline variants. Consider extracting it into a helper function.

Apply this diff to improve the implementation:

```diff
+def create_pipeline_thread(worker_fn, *args) -> Thread:
+    """Create and return a daemon thread for running a pipeline worker."""
+    return Thread(target=worker_fn, args=args, daemon=True)
+
 if variant == "chat-gpt-wrapper":
-    thread = Thread(target=run_chatgpt_wrapper_pipeline_worker, args=(dto, variant))
+    thread = create_pipeline_thread(run_chatgpt_wrapper_pipeline_worker, dto, variant)
 else:
-    thread = Thread(
-        target=run_exercise_chat_pipeline_worker, args=(dto, variant, event)
-    )
+    thread = create_pipeline_thread(run_exercise_chat_pipeline_worker, dto, variant, event)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `app/pipeline/chat_gpt_wrapper_pipeline.py` (1 hunks)
- `app/web/routers/pipelines.py` (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build and Push Docker Image
🔇 Additional comments (2)
app/pipeline/chat_gpt_wrapper_pipeline.py (1)
`25-48`: 🛠️ Refactor suggestion

Add error handling and improve the docstring.

The function needs several improvements:

- Add validation for empty chat history
- Add error handling for empty message contents
- Improve docstring with type hints and error cases
- Simplify string formatting

Apply this diff to improve the implementation:

```diff
 def convert_chat_history_to_str(chat_history: List[PyrisMessage]) -> str:
     """
-    Converts the chat history to a string
-    :param chat_history: The chat history
-    :return: The chat history as a string
+    Converts a list of PyrisMessage objects into a formatted string representation.
+
+    Args:
+        chat_history: List of messages containing sender and content
+
+    Returns:
+        A formatted string with each message on a new line
+
+    Raises:
+        ValueError: If chat history is empty
+        IndexError: If a message has no contents
     """
+    if not chat_history:
+        raise ValueError("Chat history cannot be empty")

     def map_message_role(role: IrisMessageRole) -> str:
         if role == IrisMessageRole.SYSTEM:
             return "System"
         elif role == IrisMessageRole.ASSISTANT:
             return "AI Tutor"
         elif role == IrisMessageRole.USER:
             return "Student"
         else:
             return "Unknown"

+    def format_message(message: PyrisMessage) -> str:
+        if not message.contents:
+            raise IndexError(f"Message from {message.sender} has no contents")
+
+        timestamp = f" at {message.sent_at.strftime('%Y-%m-%d %H:%M:%S')}" if message.sent_at else ""
+        return f"{map_message_role(message.sender)}{timestamp}: {message.contents[0].text_content}"

-    return "\n\n".join(
-        [
-            f"{map_message_role(message.sender)} {"" if not message.sent_at else f"at {message.sent_at.strftime(
-                "%Y-%m-%d %H:%M:%S")}"}: {message.contents[0].text_content}"
-            for message in chat_history
-        ]
-    )
+    return "\n\n".join(format_message(message) for message in chat_history)
```

Likely invalid or redundant comment.
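The role-mapping branch in that helper can also be flattened to a dictionary lookup; `Role` below is a hypothetical stand-in for `IrisMessageRole`:

```python
from enum import Enum

class Role(Enum):  # hypothetical stand-in for IrisMessageRole
    SYSTEM = "system"
    ASSISTANT = "assistant"
    USER = "user"

# dict.get makes the "Unknown" fallback explicit and drops the if/elif chain.
ROLE_LABELS = {Role.SYSTEM: "System", Role.ASSISTANT: "AI Tutor", Role.USER: "Student"}

def map_message_role(role) -> str:
    return ROLE_LABELS.get(role, "Unknown")

print(map_message_role(Role.USER))       # Student
print(map_message_role(Role.ASSISTANT))  # AI Tutor
print(map_message_role("not-a-role"))    # Unknown
```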
app/web/routers/pipelines.py (1)
`325-332`: LGTM!

The feature variant for the ChatGPT wrapper is correctly defined and consistent with other variants.
```python
class ChatGPTWrapperPipeline(Pipeline):
    callback: ExerciseChatStatusCallback
    llm: IrisLangchainChatModel
    pipeline: Runnable

    def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
        super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
        self.callback = callback
        request_handler = CapabilityRequestHandler(
            requirements=RequirementList(
                gpt_version_equivalent=4.5,
                context_length=16385,
            )
        )
        completion_args = CompletionArguments(temperature=0.5, max_tokens=2000)
        self.llm = IrisLangchainChatModel(
            request_handler=request_handler, completion_args=completion_args
        )
        self.pipeline = self.llm | StrOutputParser()
        self.tokens = []
```
🛠️ Refactor suggestion
Remove unused attribute and add type hints.
The class initialization has the following issues:

- The `tokens` list is unused and should be removed
- Class attributes need type hints
- Magic numbers in `CompletionArguments` should be moved to constants
Apply this diff to improve the implementation:
```diff
+from typing import Optional, List
+
+# Constants for completion arguments
+DEFAULT_TEMPERATURE = 0.5
+DEFAULT_MAX_TOKENS = 2000

 class ChatGPTWrapperPipeline(Pipeline):
-    callback: ExerciseChatStatusCallback
-    llm: IrisLangchainChatModel
-    pipeline: Runnable
+    callback: Optional[ExerciseChatStatusCallback]
+    llm: IrisLangchainChatModel
+    pipeline: Runnable

     def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
         super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
         self.callback = callback
         request_handler = CapabilityRequestHandler(
             requirements=RequirementList(
                 gpt_version_equivalent=4.5,
                 context_length=16385,
             )
         )
-        completion_args = CompletionArguments(temperature=0.5, max_tokens=2000)
+        completion_args = CompletionArguments(
+            temperature=DEFAULT_TEMPERATURE,
+            max_tokens=DEFAULT_MAX_TOKENS
+        )
         self.llm = IrisLangchainChatModel(
             request_handler=request_handler, completion_args=completion_args
         )
         self.pipeline = self.llm | StrOutputParser()
-        self.tokens = []
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
from typing import Optional, List

# Constants for completion arguments
DEFAULT_TEMPERATURE = 0.5
DEFAULT_MAX_TOKENS = 2000

class ChatGPTWrapperPipeline(Pipeline):
    callback: Optional[ExerciseChatStatusCallback]
    llm: IrisLangchainChatModel
    pipeline: Runnable

    def __init__(self, callback: Optional[ExerciseChatStatusCallback] = None):
        super().__init__(implementation_id="chat_gpt_wrapper_pipeline_reference_impl")
        self.callback = callback
        request_handler = CapabilityRequestHandler(
            requirements=RequirementList(
                gpt_version_equivalent=4.5,
                context_length=16385,
            )
        )
        completion_args = CompletionArguments(
            temperature=DEFAULT_TEMPERATURE,
            max_tokens=DEFAULT_MAX_TOKENS
        )
        self.llm = IrisLangchainChatModel(
            request_handler=request_handler, completion_args=completion_args
        )
        self.pipeline = self.llm | StrOutputParser()
```
```python
    def __call__(
        self,
        dto: ExerciseChatPipelineExecutionDTO,
        prompt: Optional[ChatPromptTemplate] = None,
        **kwargs,
    ):
        """
        Run the ChatGPT wrapper pipeline.
        This consists of a single response generation step.
        """
        query = dto.chat_history[-1] if dto.chat_history else None
        if query and query.sender != IrisMessageRole.USER:
            query = None

        chat_history = (
            dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
        )

        chat_history_messages = convert_chat_history_to_str(chat_history)

        prompt = ChatPromptTemplate.from_messages(
            [
                SystemMessage(chat_gpt_initial_system_prompt),
                HumanMessage(chat_history_messages),
                convert_iris_message_to_langchain_human_message(query),
            ]
        )

        response = (prompt | self.pipeline).invoke({})
        self.callback.done()
        self.callback.done(final_result=response)
```
🛠️ Refactor suggestion
Fix callback calls and add error handling.
The `__call__` method has several issues:

- Missing return type hint
- Duplicate `done` callback calls
- No error handling for empty chat history
- Magic numbers in chat history slicing

Apply this diff to improve the implementation:
```diff
     def __call__(
         self,
         dto: ExerciseChatPipelineExecutionDTO,
         prompt: Optional[ChatPromptTemplate] = None,
         **kwargs,
-    ):
+    ) -> None:
         """
         Run the ChatGPT wrapper pipeline.
         This consists of a single response generation step.
+
+        Args:
+            dto: The pipeline execution DTO
+            prompt: Optional chat prompt template
+            **kwargs: Additional keyword arguments
+
+        Raises:
+            ValueError: If chat history is empty or callback is not set
         """
+        if not self.callback:
+            raise ValueError("Callback must be set before running the pipeline")
+
+        if not dto.chat_history:
+            raise ValueError("Chat history cannot be empty")
+
         query = dto.chat_history[-1] if dto.chat_history else None
         if query and query.sender != IrisMessageRole.USER:
             query = None

+        # Constants for chat history
+        MAX_HISTORY_LENGTH = 5
         chat_history = (
-            dto.chat_history[-5:] if query is None else dto.chat_history[-6:-1]
+            dto.chat_history[-MAX_HISTORY_LENGTH:] if query is None
+            else dto.chat_history[-(MAX_HISTORY_LENGTH + 1):-1]
         )

         chat_history_messages = convert_chat_history_to_str(chat_history)

         prompt = ChatPromptTemplate.from_messages(
             [
                 SystemMessage(chat_gpt_initial_system_prompt),
                 HumanMessage(chat_history_messages),
                 convert_iris_message_to_langchain_human_message(query),
             ]
         )

         response = (prompt | self.pipeline).invoke({})
-        self.callback.done()
         self.callback.done(final_result=response)
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
    def __call__(
        self,
        dto: ExerciseChatPipelineExecutionDTO,
        prompt: Optional[ChatPromptTemplate] = None,
        **kwargs,
    ) -> None:
        """
        Run the ChatGPT wrapper pipeline.
        This consists of a single response generation step.

        Args:
            dto: The pipeline execution DTO
            prompt: Optional chat prompt template
            **kwargs: Additional keyword arguments

        Raises:
            ValueError: If chat history is empty or callback is not set
        """
        if not self.callback:
            raise ValueError("Callback must be set before running the pipeline")

        if not dto.chat_history:
            raise ValueError("Chat history cannot be empty")

        query = dto.chat_history[-1] if dto.chat_history else None
        if query and query.sender != IrisMessageRole.USER:
            query = None

        # Constants for chat history
        MAX_HISTORY_LENGTH = 5
        chat_history = (
            dto.chat_history[-MAX_HISTORY_LENGTH:] if query is None
            else dto.chat_history[-(MAX_HISTORY_LENGTH + 1):-1]
        )
        chat_history_messages = convert_chat_history_to_str(chat_history)
        prompt = ChatPromptTemplate.from_messages(
            [
                SystemMessage(chat_gpt_initial_system_prompt),
                HumanMessage(chat_history_messages),
                convert_iris_message_to_langchain_human_message(query),
            ]
        )
        response = (prompt | self.pipeline).invoke({})
        self.callback.done(final_result=response)
```
Only to be added temporarily as an experiment.
Summary by CodeRabbit
Release Notes
New Features
Improvements
Bug Fixes
Chores