Refactoring of ImageProcessorFast #35069
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@ArthurZucker This PR is still in a rough state but pinging you for visibility as there are lots of refactoring choices that are up for discussion!
I made an initial quick pass. My main concern here is how common these methods are. I mean, for `preprocess`, does it scale to most models, or will we need to override it somewhere?
I think (but might be wrong) that the better design choice would be to follow standards in image processing, where we define each transformation as a separate class and then can pipe them. This approach is followed by torchvision, albumentations, kornia, and imgaug. So I suppose this is a pretty robust approach that should work well in our case too.
We can reuse the original torch transforms if they are compatible with torch compile; otherwise, we can rewrite them ourselves. The most common transforms will live in the base file, while custom ones will live in the model's `image_processing_*` file and can be moved to common for reuse.
Let me know what you think regarding this. Do you see any difficulties using this approach?
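For reference, the "piped class transforms" style being proposed looks roughly like this with torchvision's v2 transforms (a generic sketch, not code from this PR; the resize size, mean, and std below are placeholders, not values used by any particular model):

```python
import torch
from torchvision.transforms import v2

# Generic sketch of piping class-based transforms (torchvision v2 style).
pipeline = v2.Compose(
    [
        v2.Resize((224, 224)),                  # resize to the model's input resolution
        v2.ToDtype(torch.float32, scale=True),  # rescale uint8 [0, 255] -> float [0, 1]
        v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ]
)

image = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)
pixel_values = pipeline(image)  # tensor of shape (3, 224, 224)
```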
@qubvel The problem I see with this is that some image processors need additional logic in between transforms that depends on the result of the previous transforms (like the padding operation in DETR-like image processors), so unless I'm mistaken, I don't think piping class transforms would be possible here. That said, I might be misunderstanding or missing something if this is a standard approach in other image preprocessing libraries, so happy to discuss this further.
Very cool PR, happy to see fast image processors getting propagated!
I left a few questions here and there, mostly nits. Overall I like the "less code for users" idea and moving redundant parts to the base class. But it seems like we are hiding the core processing logic in the base class, so I agree with @qubvel that having a pipe of transforms looks nicer and easier to consume.
And for weird models, we probably don't need a separate patching mixin, as it doesn't scale well: not many of those patch methods will stick for a long time, and new methods will continue to be invented. I am also wondering whether all this repeated code can be squeezed out when we add modular to those models. For example, llava-next-style patching is used in three models, so we could define it once and then let modular copy the main code body.
Hello @qubvel @zucchini-nlp! Thanks again for your reviews. I added examples of how I would write processors using class-based transforms, and a section in the PR with the advantages and drawbacks I see for functional vs class transforms. If you have some time I would be glad to have your opinions on this :)
Great work! Thanks for providing both options for review 👍 I agree, the functional approach looks more flexible right now.
I would differentiate image processors for the following cases:
- Basic image transforms: operate only on raw images; usually, in the original code, we might see a `torchvision`-defined preprocessing pipeline.
- Patching transforms: operate on patches but still work with images only. The preprocessing usually looks as follows: split into patches -> apply basic image transforms to patches (normalize, rescale, etc.).
- Annotation-aware image transforms: operate on images and annotations (object detection, image segmentation). These are the most complicated, I suppose, because for each transform we have to handle annotation transformation like bounding boxes/masks/keypoints.
It might be oversimplified, am I missing something?
I think we can have different styles of image processors for these cases. And the thing we have to take into consideration is that it's common to have transforms written in torchvision, so it would be nice to have a clear way to port the original transforms to be `transformers`-like.
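As a rough illustration of what such a port could mean, an original torchvision pipeline (Resize -> rescale -> Normalize) could be rewritten as a single functional helper along these lines (the name `_preprocess` and the signature are hypothetical, not this PR's actual API):

```python
import torch
from torchvision.transforms import functional as F

# Hypothetical functional port of a torchvision preprocessing pipeline.
def _preprocess(images: torch.Tensor, size, image_mean, image_std) -> torch.Tensor:
    images = F.resize(images, size)            # torchvision functional resize, size = [height, width]
    images = images.to(torch.float32) / 255.0  # rescale uint8 [0, 255] -> float [0, 1]
    images = F.normalize(images, mean=image_mean, std=image_std)
    return images
```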
I believe so too for the functional approach. Using class transforms might look nicer in some cases, but they are limiting and add a layer of complexity for contributors that I don't think is necessary.
Agreed on the three different categories, but I'm not sure how to handle this differentiation in the code. What I was thinking was to add some guidance in the fast image processor file generated by the transformers-cli, explaining what existing processor to use as a baseline depending on what sort of image processing the model needs.
@ArthurZucker, @qubvel, @zucchini-nlp Here's a new iteration where I implemented @ArthurZucker's suggestions. The "pre"-preprocessing functions can now handle arbitrary kwargs, as long as they are specified in the `valid_extra_kwargs`.
Much better, thanks!
Would be nice to use `Unpack[DefaultFastImageProcessorKwargs]` rather than having to move around the 20+ arguments for both init and forward.
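For context, the typed-kwargs pattern being referred to looks roughly like this (a sketch only: the fields and the `__init__` body are illustrative assumptions, not copied from the PR):

```python
from typing import Optional, Union

from typing_extensions import TypedDict, Unpack


# Illustrative TypedDict; the real set of fields is an assumption here.
class DefaultFastImageProcessorKwargs(TypedDict, total=False):
    do_resize: Optional[bool]
    size: Optional[dict]
    do_rescale: Optional[bool]
    rescale_factor: Optional[float]
    do_normalize: Optional[bool]
    image_mean: Optional[Union[float, list]]
    image_std: Optional[Union[float, list]]


class BaseImageProcessorFast:
    def __init__(self, **kwargs: Unpack[DefaultFastImageProcessorKwargs]) -> None:
        # Store whatever defaults were passed instead of spelling out 20+ named arguments.
        for key, value in kwargs.items():
            setattr(self, key, value)
```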
Agreed, but if I do that, the CI won't let me use start docstrings describing all the parameters, and the type hints when instantiating an image processor will be much less useful. What do you think we should prioritize @ArthurZucker? Or do you see a way to use both?
Super cool! Last nits IMO and we should be alright
```python
        }
        data_format = kwargs.pop("data_format", None)
        data_format = data_format if data_format is not None else ChannelDimension.FIRST

        images = self._prepare_input_images(
```
Here you will have an issue: you are popping way too many times from the kwargs, so you pop and then you use self, and the values will be different.
This should be simplified. Just update self with kwargs, then do the rest always with self, something like that.
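A rough sketch of that suggestion (not the PR's actual code; `_prepare_input_images`, `_preprocess`, and `ChannelDimension` come from the quoted snippet, everything else is illustrative):

```python
from transformers.image_utils import ChannelDimension


def preprocess(self, images, **kwargs):
    # Merge runtime kwargs over the defaults stored on `self` once, then read
    # everything from that single dict instead of mixing pops and attribute lookups.
    settings = {**self.__dict__, **{k: v for k, v in kwargs.items() if v is not None}}
    settings["data_format"] = settings.get("data_format") or ChannelDimension.FIRST
    images = self._prepare_input_images(images, data_format=settings["data_format"])
    return self._preprocess(images, **settings)
```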
```python
            crop_pct=crop_pct,
            **kwargs,
        )

    def __init__(self, **kwargs: Unpack[valid_init_kwargs]):
```
```diff
-    def __init__(self, **kwargs: Unpack[valid_init_kwargs]):
+    def __init__(self, **kwargs: Unpack[ConvNextFastImageProcessorInitKwargs]):
```
this is better and explicit
```diff
@@ -114,8 +95,8 @@ def __init__(
         overridden by `crop_pct` in the `preprocess` method.
         """,
     )
-    def preprocess(self, *args, **kwargs) -> BatchFeature:
-        return super().preprocess(*args, **kwargs)
+    def preprocess(self, images: ImageInput, **kwargs: Unpack[valid_preprocess_kwargs]) -> BatchFeature:
```
Same comment, let's be explicit if possible.
```python
    valid_init_kwargs = DeformableDetrFastImageProcessorInitKwargs
    valid_preprocess_kwargs = DeformableDetrFastImageProcessorPreprocessKwargs

    def __init__(self, **kwargs: Unpack[valid_init_kwargs]) -> None:
```
same comment
```python
    valid_init_kwargs = LlavaNextFastImageProcessorInitKwargs
    valid_preprocess_kwargs = LlavaNextFastImageProcessorPreprocessKwargs
```
this is probably a ditto
```python
    valid_init_kwargs = DefaultFastImageProcessorInitKwargs
    valid_preprocess_kwargs = DefaultFastImageProcessorPreprocessKwargs
```
should be removed!
* add init and base image processing functions
* add add_fast_image_processor to transformers-cli
* add working fast image processor clip
* add fast image processor to doc, working tests
* remove "to be implemented" SigLip
* fix unprotected import
* fix unprotected vision import
* update ViTImageProcessorFast
* increase threshold slow fast equivalence
* add fast img blip
* add fast class in tests with cli
* improve cli
* add fast image processor convnext
* add LlavaPatchingMixin and fast image processor for llava_next and llava_onevision
* add device kwarg to ImagesKwargs for fast processing on cuda
* cleanup
* fix unprotected import
* group images by sizes and add batch processing
* Add batch equivalence tests, skip when center_crop is used
* cleanup
* update init and cli
* fix-copies
* refactor convnext, cleanup base
* fix
* remove patching mixins, add piped torchvision transforms for ViT
* fix unbatched processing
* fix f strings
* protect imports
* change llava onevision to class transforms (test)
* fix convnext
* improve formatting (following Pavel review)
* fix handling device arg
* improve cli
* fix
* fix inits
* Add distinction between preprocess and _preprocess, and support for arbitrary kwargs through valid_extra_kwargs
* uniformize qwen2_vl fast
* fix docstrings
* add add fast image processor llava
* remove min_pixels max_pixels from accepted size
* nit
* nit
* refactor fast image processors docstrings
* cleanup and remove fast class transforms
* update add fast image processor transformers cli
* cleanup docstring
* uniformize pixtral fast and make _process_image explicit
* fix prepare image structure llava next/onevision
* Use typed kwargs instead of explicit args
* nit fix import Unpack
* clearly separate pops and gets in base preprocess. Use explicit typed kwargs
* make qwen2_vl preprocess arguments hashable
What does this PR do?
This PR introduces a significant refactoring of how fast image processors can be implemented in Transformers.
Motivation
The primary goal of this refactoring is to simplify the process of creating fast image processors when a slow image processor already exists.
Unlike the current `BaseImageProcessor`, which provides only a minimal skeleton for image processor classes, the newly introduced `BaseImageProcessorFast` includes all the basic functionalities needed by a typical image processor. This design allows contributors to focus primarily on modifying default parameters, such as the image mean, std, or default resizing dimensions, rather than rewriting foundational code.

Key Advantages:
- `BaseImageProcessorFast` provides a natural starting point with predefined functionalities.
- Improvements and fixes to `BaseImageProcessorFast` are automatically propagated to all derived fast image processors.

Implementation
Functional or class transforms
Following the `torchvision` approach to defining image transforms, there are two main ways to write the processing logic for image processors: using functional transforms or class-based transforms.

This PR showcases both approaches:
- `BaseImageProcessorFast`: implemented using functional transforms.
- `ViTImageProcessorFast`: almost equivalent, but uses piped class-based transforms.
- `LlavaNextImageProcessorFast`: implemented using functionals.
- `LlavaOnevisionImageProcessorFast`: implemented using a mix of class-based transforms and functionals, equivalent to `LlavaNextImageProcessorFast`.
The choice is entirely open for debate at this point. I see advantages to both approaches, but I’m sure I haven’t considered everything, so please share your thoughts if you have a preference one way or the other.
To me, the advantages/drawbacks of functionals are the following:

For class transforms: currently, `LlavaOnevisionImageProcessorFast` fails to compile, as it seemingly gets stuck in an infinite compilation loop, while its functional equivalent, `LlavaNextImageProcessorFast`, compiles without problems. However, this needs more thorough investigation.

New `add-fast-image-processor` Command in `transformers-cli`
This PR introduces a new CLI command that automates the creation of fast image processors. Given a model folder name, it generates all necessary imports, documentation, dummy objects, and a fast image processor file where default parameters are parsed from the slow image processor file.
The new fast image processor class is also added to the image processor test file by the CLI. However, some tests may still need manual adjustments to properly include and validate the new fast image processor class.
Example Usage:
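For illustration, an invocation could look like the following (the `--model-name` option name is an assumption; refer to the CLI help for the exact syntax):

```
transformers-cli add-fast-image-processor --model-name blip
```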
Example Output:
In `src/transformers/models/blip/image_processing_blip_fast.py`:
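Roughly, the generated class just overrides the defaults parsed from the slow processor; a sketch of what it looks like (the attribute values shown here are the slow Blip processor's defaults to the best of my knowledge, and the import paths are assumptions):

```python
from transformers.image_processing_utils_fast import BaseImageProcessorFast
from transformers.image_utils import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD, PILImageResampling


class BlipImageProcessorFast(BaseImageProcessorFast):
    # Defaults parsed from the slow BlipImageProcessor (values shown for illustration).
    resample = PILImageResampling.BICUBIC
    image_mean = OPENAI_CLIP_MEAN
    image_std = OPENAI_CLIP_STD
    size = {"height": 384, "width": 384}
    do_resize = True
    do_rescale = True
    do_normalize = True
    do_convert_rgb = True
```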
In this case, this is enough to get a fully working fast Blip image processor!
New Mixins for Common Logic
To handle shared preprocessing and post-processing logic, this PR introduces reusable mixins (only `LlavaPatchingMixin` is present as an example in this PR). Additional mixins are planned for other common patterns.

Edit: Removing the Mixins for patching in favor of `# Copied from` or modular in the future, as such preprocessing techniques don't usually stick for a long time, and adding Mixins every time a technique is used twice or more wouldn't scale well.

Summary: Three Types of Fast Image Processors
1. Default-Based Processors: these only need to override default attribute values. Examples in this PR: `blip`, `clip`, `deit`, `siglip`, and `vit`.
2. Mixin-Based Processors: these rely heavily on predefined mixins to implement shared logic. Examples in this PR: `llava_next` and `llava_onevision`. Edit: Removing Mixins in favor of `# Copied From` or modular for now, see earlier comment.
3. Custom Processors: these override `__init__` and `preprocess` while reusing the syntax and structure of `BaseImageProcessorFast`. Example in this PR: `convnext`.
Miscellaneous Issues and Questions
- Adding every shared helper or mixin to `image_processing_utils_fast` might make the file large and difficult to read. This also raises the question of when a mixin should be created for repeated patterns across image processors. Should it be created when a pattern is shared by two or more models? Or when it presents an idea likely to be reused in the future?
- Padding is not handled in `BaseImageProcessorFast`, because the image processors that use padding implement it in slightly different ways. This makes it challenging to standardize the padding logic. A more consistent approach should maybe be added in the future.
- The `center_crop` functions in Transformers' `image_transforms` and in `torchvision` have different implementations: the cropping boundaries in Transformers are defined as `top = (orig_height - crop_height) // 2`, `left = (orig_width - crop_width) // 2`, whereas torchvision uses `top = int(round((orig_height - crop_height) / 2.0))`, `left = int(round((orig_width - crop_width) / 2.0))`, which can result in shifted cropping when the size before cropping is odd (e.g. with `orig_height = 11` and `crop_height = 4`, Transformers gives `top = 3` while torchvision gives `top = 4`). I don't think this should be much of a problem for users, but when comparing outputs of slow and fast image processors in tests, this results in huge differences.

Who can review?
@qubvel @molbap @zucchini-nlp