fix: refactor handle_response_model #1032
Conversation
👍 Looks good to me! Reviewed everything up to 67e0cdf in 13 seconds
More details
- Looked at 817 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 drafted comments based on config settings
1. instructor/process_response.py:497
- Draft comment: The function `is_typed_dict` is defined twice in the file. Consider removing the duplicate definition to avoid redundancy.
- Reason this comment was not posted: Confidence changes required: 80%. The function `is_typed_dict` is defined twice in the file, which is redundant and can lead to confusion. It should be defined only once.
2. instructor/process_response.py:548
- Draft comment: The lambda functions for JSON modes in `mode_handlers` are unnecessary and add complexity. Consider directly referencing the `handle_json_modes` function with the mode as an argument instead of using lambdas (a sketch of this alternative follows this list).
- Reason this comment was not posted: Confidence changes required: 50%. The refactoring of `handle_response_model` has introduced a potential issue with the `mode_handlers` dictionary. The lambda functions used for JSON modes are not necessary and add complexity.
3. instructor/process_response.py:528
- Draft comment: Ensure that tests and documentation are updated to reflect changes in `handle_response_model`. This is crucial for maintaining library integrity.
- Reason this comment was not posted: Comment did not seem useful.
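For illustration, here is a minimal, self-contained sketch of the alternative that draft comment 2 describes: binding the mode up front (for example with `functools.partial`) instead of wrapping `handle_json_modes` in a lambda. The `Mode` enum and handler body below are stand-ins, not instructor's real code.

```python
from enum import Enum, auto
from functools import partial


class Mode(Enum):
    # Stand-in for instructor's Mode enum; only JSON-style members matter here.
    JSON = auto()
    MD_JSON = auto()


def handle_json_modes(response_model, new_kwargs, mode):
    # Placeholder for the real JSON-mode handling.
    new_kwargs["json_mode"] = mode.name
    return response_model, new_kwargs


# Lambda version (the extra indirection the draft comment flags):
handlers_with_lambdas = {m: (lambda rm, kw, m=m: handle_json_modes(rm, kw, m)) for m in Mode}

# Suggested alternative: reference handle_json_modes directly with the mode pre-bound.
handlers_with_partials = {m: partial(handle_json_modes, mode=m) for m in Mode}

print(handlers_with_partials[Mode.JSON]("MyModel", {}))  # -> ('MyModel', {'json_mode': 'JSON'})
```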
Workflow ID: wflow_YaQ3OblEfHhIXOYL
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
Deploying instructor with Cloudflare Pages

| Latest commit: | 7315225 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://fab4174a.instructor.pages.dev |
| Branch Preview URL: | https://refactor-process-response.instructor.pages.dev |
👍 Looks good to me! Incremental review on 34a09d4 in 10 seconds
More details
- Looked at 27 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. instructor/process_response.py:531
- Draft comment: The docstring should mention the new functions added for handling different modes, as it currently does not reflect the refactoring changes.
- Reason this comment was not posted: Confidence changes required: 50%. The function `handle_response_model` has a docstring that is not updated to reflect the refactoring changes. It should mention the new functions added for handling different modes.
2. instructor/process_response.py:528
- Draft comment: The function names in the `mode_handlers` dictionary should follow a consistent naming pattern. For example, `handle_json_o1` and `handle_json_modes` could be renamed to `handle_json_o1_mode` and `handle_json_mode` respectively to maintain consistency with other handler names like `handle_parallel_tools`. This issue is also present in other handler functions like `handle_functions`, `handle_tools_strict`, etc.
- Reason this comment was not posted: Confidence changes required: 80%. The function `handle_response_model` has been refactored, but the docstring is overly verbose and could be more concise. Additionally, the function names in the `mode_handlers` dictionary should follow a consistent naming pattern, which they currently do not.
Workflow ID: wflow_qm7zhjqo0ekpBBgt
👍 Looks good to me! Incremental review on 50a19ac in 14 seconds
More details
- Looked at 98 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 drafted comments based on config settings
1. instructor/process_response.py:537
- Draft comment: The check for `response_model is None` is redundant here as it is already handled in `handle_response_model`. Consider removing it (a sketch of this redundancy follows this list).
- Reason this comment was not posted: Confidence changes required: 50%. The function `prepare_response_model` has a redundant check for `response_model` being `None` at the beginning, which is already handled in the calling function `handle_response_model`.
2. instructor/process_response.py:40
- Draft comment: The check for `response_model is None` in `prepare_response_model` is redundant as it is already handled in `handle_response_model`. Consider removing it.
- Reason this comment was not posted: Confidence changes required: 50%. The function `prepare_response_model` has a redundant check for `response_model` being `None` at the beginning, which is already handled in the calling function `handle_response_model`.
3. instructor/process_response.py:40
- Draft comment: When library code changes, ensure that both documentation and tests are updated accordingly. This refactoring of `handle_response_model` should be reflected in the documentation and tests.
- Reason this comment was not posted: Comment did not seem useful.
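The redundancy flagged in the first two draft comments above can be pictured with this toy sketch. The names follow the PR, but the bodies are placeholders, not the real `process_response.py`: the caller returns early on `None`, so the callee's own `None` guard never fires through this path.

```python
def prepare_response_model(response_model):
    if response_model is None:  # flagged as redundant when reached only via the caller below
        return None
    return response_model  # the real function adapts simple types, TypedDicts, etc.


def handle_response_model(response_model=None, **kwargs):
    if response_model is None:  # None is already handled here...
        return None, kwargs
    # ...so prepare_response_model never sees None on this path.
    return prepare_response_model(response_model), kwargs
```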
Workflow ID: wflow_ZOiOulxi6mRjtJyI
👍 Looks good to me! Incremental review on 2b858fa in 9 seconds
More details
- Looked at 13 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. instructor/retry.py:157
- Draft comment: The change in the type hint for `response_model` from `type[T_Model]` to `type[T_Model] | None` is appropriate and aligns with the refactoring changes, allowing `response_model` to be `None`.
- Reason this comment was not posted: Confidence changes required: 0%. The change in the type hint for `response_model` from `type[T_Model]` to `type[T_Model] | None` is appropriate given the context of the refactoring. It allows for the possibility of `response_model` being `None`, which is a valid scenario in the refactored code.
2. instructor/retry.py:155
- Draft comment: Function names should follow a consistent pattern. Consider renaming `retry_sync` to match the pattern used in the refactored functions, such as `handle_retry_sync`. This applies to `retry_async` as well.
- Reason this comment was not posted: Confidence changes required: 80%. The function names in the refactored code should follow a consistent pattern. The functions `retry_sync` and `retry_async` should have consistent naming with the new functions added in the refactor.
Workflow ID: wflow_9eHY4Xs5DqWTwhws
👍 Looks good to me! Incremental review on e56a84a in 22 seconds
More details
- Looked at 37 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. instructor/process_response.py:505
- Draft comment: `new_kwargs["tools"]` is not set in `handle_cohere_tools`. Ensure `tools` is set if expected.
- Reason this comment was not posted: Comment did not seem useful.
2. instructor/process_response.py:497
- Draft comment: Consider using a more descriptive variable name instead of `new_kwargs` to improve readability and maintainability. This applies to other functions as well.
- Reason this comment was not posted: Confidence changes required: 70%. The variable `new_kwargs` is used in multiple functions but is not descriptive enough. A more descriptive name would improve readability and maintainability.
Workflow ID: wflow_yg3G3SY76r0DG3O3
👍 Looks good to me! Incremental review on 7c78eca in 19 seconds
More details
- Looked at 73 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 4 drafted comments based on config settings
1. tests/llm/test_openai/test_parallel.py:41
- Draft comment: Ensure that the model name `gpt-4o-mini` is used consistently across all test cases to maintain uniformity in testing conditions.
- Reason this comment was not posted: Confidence changes required: 30%. The test cases in `test_parallel.py` have been updated to use a different model name, `gpt-4o-mini`, instead of `gpt-4-turbo-preview`. This change should be consistent across all test cases to ensure they are testing the same conditions.
2. tests/llm/test_openai/test_parallel.py:90
- Draft comment: Consider moving the import statement for `AsyncOpenAI` to the top of the file with other imports for better readability and efficiency.
- Reason this comment was not posted: Confidence changes required: 30%. The import statement for `AsyncOpenAI` in `test_async_parallel_tools_one` is placed inside the function. It would be more efficient to move it to the top of the file with other imports to avoid repeated imports and improve readability.
3. instructor/process_response.py:594
- Draft comment: Assertions should have a formatted error message. Please update the assertion in `handle_parallel_tools` (a sketch of such a message follows this list). This also applies to other assertions in the newly added functions.
- Reason this comment was not posted: Confidence changes required: 80%. The function `handle_parallel_tools` is added in this PR and its assertion lacks a formatted error message.
4. tests/llm/test_openai/test_parallel.py:38
- Draft comment: If library code changes, ensure that documentation is updated. Please check if the documentation reflects the changes made in this test file.
- Reason this comment was not posted: Confidence changes required: 70%. The test file `test_parallel.py` has been modified, but there is no indication that the documentation has been updated accordingly.
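As a small illustration of the "formatted error message" style draft comment 3 asks for, applied to the stream assertion that appears later in this conversation (illustrative only, not the exact code in `handle_parallel_tools`):

```python
new_kwargs = {"stream": False}  # stand-in for the kwargs handled in handle_parallel_tools
assert new_kwargs.get("stream", False) is False, (
    f"stream=True is not supported when using PARALLEL_TOOLS mode; "
    f"got stream={new_kwargs.get('stream')!r}"
)
```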
Workflow ID: wflow_FW27WwvzXUFdDrs2
Gemini Tests are passing
Vertex AI Tests are also passing
Looks good to me
- async def test_async_parallel_tools_one(aclient):
-     client = instructor.patch(aclient, mode=instructor.Mode.PARALLEL_TOOLS)
+ async def test_async_parallel_tools_one():
+     from openai import AsyncOpenAI
@jxnl, added this import here because we were getting an event loop error when using the aclient.
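A minimal sketch of the resulting pattern, assuming pytest-asyncio drives the test: the client is imported and constructed inside the async test so it binds to that test's event loop rather than to the loop a module-scoped `aclient` fixture was created on. The dummy API key and test name are illustrative, not the exact test from this PR.

```python
import instructor
import pytest


@pytest.mark.asyncio
async def test_client_constructed_inside_test():
    # Importing and constructing AsyncOpenAI here means its async resources are
    # created on the event loop pytest-asyncio provides for this test, which
    # avoids the cross-loop error mentioned above.
    from openai import AsyncOpenAI

    aclient = AsyncOpenAI(api_key="sk-dummy")
    client = instructor.patch(aclient, mode=instructor.Mode.PARALLEL_TOOLS)
    assert client is not None  # real tests would await client.chat.completions.create(...)
```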
@@ -154,7 +154,7 @@ def reask_messages(response: ChatCompletion, mode: Mode, exception: Exception):

 def retry_sync(
     func: Callable[T_ParamSpec, T_Retval],
-    response_model: type[T_Model],
+    response_model: type[T_Model] | None,
This makes it consistent with `retry_async` below, @jxnl:
async def retry_async(
    func: Callable[T_ParamSpec, T_Retval],
    response_model: type[T] | None,
    context: dict[str, Any] | None,
    args: Any,
    kwargs: Any,
    max_retries: int | AsyncRetrying = 1,
    strict: bool | None = None,
    mode: Mode = Mode.TOOLS,
) -> T:
return handle_cohere_modes(new_kwargs)
return None, new_kwargs

if mode in {Mode.PARALLEL_TOOLS}:
In the original mode code, we handled parallel tools differently too, so I added it as a special case:
if is_simple_type(response_model):
    response_model = ModelAdapter[response_model]
if is_typed_dict(response_model):
    response_model: BaseModel = create_model(
        response_model.__name__,
        **{k: (v, ...) for k, v in response_model.__annotations__.items()},
    )
# This is a special case for parallel tools
if mode == Mode.PARALLEL_TOOLS:
    assert (
        new_kwargs.get("stream", False) is False
    ), "stream=True is not supported when using PARALLEL_TOOLS mode"
    new_kwargs["tools"] = handle_parallel_model(response_model)
    new_kwargs["tool_choice"] = "auto"
    # This is a special case for parallel models
    response_model = ParallelModel(typehint=response_model)
return response_model, new_kwargs
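For context, this special case is what backs the parallel-tools usage pattern exercised in the tests. Below is a rough sketch of that caller-side pattern; the model classes and prompt are illustrative, and `gpt-4o-mini` is the model name mentioned in the review above, so treat the details as assumptions rather than the exact test code.

```python
from typing import Iterable, Literal, Union

import instructor
from openai import OpenAI
from pydantic import BaseModel


class Weather(BaseModel):
    location: str
    units: Literal["imperial", "metric"]


class GoogleSearch(BaseModel):
    query: str


client = instructor.patch(OpenAI(), mode=instructor.Mode.PARALLEL_TOOLS)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "What is the weather in Toronto, and who won the 2022 world cup?",
        }
    ],
    response_model=Iterable[Union[Weather, GoogleSearch]],
    stream=False,  # the snippet above asserts stream is False for PARALLEL_TOOLS
)
for tool_call in resp:
    print(type(tool_call).__name__, tool_call)
```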
"allowed_function_names": [response_model.__name__], | ||
}, | ||
} | ||
new_kwargs["chat_history"] = [ |
Reverted back to using normal JSON mode - wasn't able to get tool calling working with Cohere, will fix in a separate PR down the line
Co-authored-by: Sarah Chieng <[email protected]>
👍 Looks good to me! Incremental review on fd150be in 53 seconds
More details
- Looked at 902 lines of code in 24 files
- Skipped 1 file when reviewing
- Skipped posting 4 drafted comments based on config settings
1. pyproject.toml:79
- Draft comment: The `pytest-examples` dependency version has been updated. Ensure that any related documentation and tests are also updated to reflect this change.
- Reason this comment was not posted: Confidence changes required: 50%. The PR includes changes to the `pyproject.toml` file, specifically updating the `pytest-examples` dependency version. This is a library code change, so I need to check if the documentation and tests are updated accordingly.
2. docs/examples/batch_job_oai.md:63
- Draft comment: The example output has been updated. Ensure that any related documentation and tests are also updated to reflect this change.
- Reason this comment was not posted: Confidence changes required: 50%. The PR includes changes to the `docs/examples/batch_job_oai.md` file, specifically updating the example output. This is a library code change, so I need to check if the documentation and tests are updated accordingly.
3. docs/examples/multi_modal_gemini.md:178
- Draft comment: The example output has been updated. Ensure that any related documentation and tests are also updated to reflect this change.
- Reason this comment was not posted: Confidence changes required: 50%. The PR includes changes to the `docs/examples/multi_modal_gemini.md` file, specifically updating the example output. This is a library code change, so I need to check if the documentation and tests are updated accordingly.
4. docs/examples/multi_modal_gemini.md:179
- Draft comment: The example output has been updated. Ensure that any related documentation and tests are also updated to reflect this change.
- Reason this comment was not posted: Confidence changes required: 50%. The PR includes changes to the `docs/examples/multi_modal_gemini.md` file, specifically updating the example output. This is a library code change, so I need to check if the documentation and tests are updated accordingly.
Workflow ID: wflow_FhyHRvCFxpQAMdsG
Important

Refactor `handle_response_model` in `process_response.py` into multiple mode-specific functions and update `retry.py` for optional `response_model`.

- Split `handle_response_model` into multiple mode-specific functions in `process_response.py`.
- Add `prepare_response_model` for common response model preparations.
- Add mode-specific handlers: `handle_parallel_tools`, `handle_functions`, `handle_tools_strict`, `handle_tools`, `handle_mistral_tools`, `handle_json_o1`, `handle_json_modes`, `handle_anthropic_tools`, `handle_anthropic_json`, `handle_cohere_modes`, `handle_gemini_json`, `handle_gemini_tools`, `handle_vertexai_tools`, `handle_vertexai_json`, `handle_cohere_json_schema`, `handle_cohere_tools`.
- Update `retry_sync` and `retry_async` in `retry.py` to handle optional `response_model`.
- Update tests in `parallel.py`.

This description was created by Ellipsis for fd150be. It will automatically update as commits are pushed.
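To make the shape of the refactor concrete, here is a simplified, self-contained sketch of the dispatch described above. The handler names mirror the PR description, but the enum, signatures, placeholder bodies, and dict-based dispatch are assumptions pieced together from the review comments about `mode_handlers`; this is not the actual `instructor/process_response.py` code.

```python
from enum import Enum, auto
from typing import Any, Callable


class Mode(Enum):  # stand-in for instructor's Mode enum
    TOOLS = auto()
    FUNCTIONS = auto()
    JSON = auto()


def prepare_response_model(response_model: Any) -> Any:
    # Shared preparation (simple types, TypedDicts, parallel models, ...) lives here.
    return response_model


def handle_tools(rm: Any, kw: dict[str, Any]) -> tuple[Any, dict[str, Any]]:
    kw["tools"] = [f"<schema for {rm.__name__}>"]  # placeholder for the real schema
    return rm, kw


def handle_functions(rm: Any, kw: dict[str, Any]) -> tuple[Any, dict[str, Any]]:
    kw["functions"] = [f"<schema for {rm.__name__}>"]
    return rm, kw


def handle_json_modes(rm: Any, kw: dict[str, Any], mode: Mode) -> tuple[Any, dict[str, Any]]:
    kw["response_format"] = {"type": "json_object"}
    return rm, kw


def handle_response_model(
    response_model: Any, mode: Mode = Mode.TOOLS, **kwargs: Any
) -> tuple[Any, dict[str, Any]]:
    new_kwargs = dict(kwargs)
    if response_model is None:  # optional response_model, as in the retry.py change
        return None, new_kwargs
    response_model = prepare_response_model(response_model)

    mode_handlers: dict[Mode, Callable[..., tuple[Any, dict[str, Any]]]] = {
        Mode.TOOLS: handle_tools,
        Mode.FUNCTIONS: handle_functions,
        # The review suggested functools.partial instead of this lambda.
        Mode.JSON: lambda rm, kw: handle_json_modes(rm, kw, Mode.JSON),
    }
    return mode_handlers[mode](response_model, new_kwargs)


if __name__ == "__main__":
    class Dummy:  # trivial stand-in for a pydantic model
        pass

    print(handle_response_model(Dummy, Mode.TOOLS, model="gpt-4o-mini"))
```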