OV API2.0 #1098

Merged · 9 commits · Jun 2, 2023

Conversation

@paularamo commented on May 19, 2023

Description

  • Refactors OpenVINO API 1.0 usage to API 2.0. Also adds model caching and a config argument for changing the model precision in the OpenVINO inferencer (see the sketch after this list).

  • Fixes # (issue)
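
For reference, here is a minimal sketch (not the PR's actual code) of the OpenVINO API 2.0 flow with model caching and a precision hint; the model path, cache directory, device name, and property values are placeholder assumptions:

from openvino.runtime import Core
import numpy as np

core = Core()
# Enable model caching so compiled blobs are reused across runs (directory is an assumption).
core.set_property({"CACHE_DIR": "cache"})

# Read and compile the model, passing a config dict to hint the execution precision.
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, device_name="CPU", config={"INFERENCE_PRECISION_HINT": "f32"})

input_blob = compiled_model.input(0)
output_blob = compiled_model.output(0)

# Inference: pass a batch shaped like the model input and index the result by the output node.
dummy = np.zeros(list(input_blob.shape), dtype=np.float32)
predictions = compiled_model([dummy])[output_blob]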

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • Refactor (non-breaking change which refactors the code base)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes
  • I have added a summary of my changes to the CHANGELOG (not for minor changes, docs and tests).

@paularamo (Author) commented:

@samet-akcay Running the NNCF notebook, I see this error:

C:\Users\paularam\Intel\tests\anomalib_env\lib\site-packages\torchmetrics\utilities\prints.py:36: UserWarning: Metric PrecisionRecallCurve will save all targets and predictions in buffer. For large datasets this may lead to large memory footprint.
warnings.warn(*args, **kwargs)
FeatureExtractor is deprecated. Use TimmFeatureExtractor instead. Both FeatureExtractor and TimmFeatureExtractor will be removed in a future release.
C:\Users\paularam\Intel\tests\anomalib_env\lib\site-packages\openvino\offline_transformations\__init__.py:10: FutureWarning: The module is private and following namespace offline_transformations will be removed in the future, use openvino.runtime.passes instead!
warnings.warn(
INFO:nncf:NNCF initialized successfully. Supported frameworks detected: torch, onnx, openvino
WARNING:nncf:NNCF provides best results with torch==1.13.1, while current torch version is 2.0.1+cpu. If you encounter issues, consider switching to torch==1.13.1
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used..
Trainer(limit_val_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_test_batches=1.0) was configured so 100% of the batches will be used..
Trainer(limit_predict_batches=1.0) was configured so 100% of the batches will be used..
Trainer(val_check_interval=1.0) was configured so validation will run at the end of the training epoch..

TypeError Traceback (most recent call last)
Cell In[10], line 7
5 # start training
6 trainer = Trainer(**config.trainer, callbacks=callbacks)
----> 7 trainer.fit(model=model, datamodule=datamodule)
8 int8_results = trainer.test(model=model, datamodule=datamodule)

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:608, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
606 model = self._maybe_unwrap_optimized(model)
607 self.strategy._lightning_module = model
--> 608 call._call_and_handle_interrupt(
609 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
610 )

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\call.py:38, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
36 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
37 else:
---> 38 return trainer_fn(*args, **kwargs)
40 except _TunerExitException:
41 trainer._call_teardown_hook()

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:650, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
643 ckpt_path = ckpt_path or self.resume_from_checkpoint
644 self._ckpt_path = self._checkpoint_connector._set_ckpt_path(
645 self.state.fn,
646 ckpt_path, # type: ignore[arg-type]
647 model_provided=True,
648 model_connected=self.lightning_module is not None,
649 )
--> 650 self._run(model, ckpt_path=self.ckpt_path)
652 assert self.state.stopped
653 self.training = False

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1051, in Trainer._run(self, model, ckpt_path)
1048 self.strategy.setup_environment()
1049 self.__setup_profiler()
-> 1051 self._call_setup_hook() # allow user to setup lightning_module in accelerator environment
1053 # check if we should delay restoring checkpoint till later
1054 if not self.strategy.restore_checkpoint_after_setup:

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1299, in Trainer._call_setup_hook(self)
1297 if self.datamodule is not None:
1298 self._call_lightning_datamodule_hook("setup", stage=fn)
-> 1299 self._call_callback_hooks("setup", stage=fn)
1300 self._call_lightning_module_hook("setup", stage=fn)
1302 self.strategy.barrier("post_setup")

File ~\Intel\tests\anomalib_env\lib\site-packages\pytorch_lightning\trainer\trainer.py:1394, in Trainer._call_callback_hooks(self, hook_name, *args, **kwargs)
1392 if callable(fn):
1393 with self.profiler.profile(f"[Callback]{callback.state_key}.{hook_name}"):
-> 1394 fn(self, self.lightning_module, *args, **kwargs)
1396 if pl_module:
1397 # restore current_fx when nested context
1398 pl_module._current_fx_name = prev_fx_name

File ~\Intel\tests\anomalib_env\lib\site-packages\anomalib\utils\callbacks\nncf\callback.py:54, in NNCFCallback.setup(failed resolving arguments)
51 init_loader = InitLoader(trainer.datamodule.val_dataloader()) # type: ignore
52 config = register_default_init_args(self.config, init_loader)
---> 54 self.nncf_ctrl, pl_module.model = wrap_nncf_model(
55 model=pl_module.model, config=config, dataloader=trainer.datamodule.train_dataloader() # type: ignore
56 )

TypeError: wrap_nncf_model() missing 1 required positional argument: 'init_state_dict'
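
The traceback shows that wrap_nncf_model now takes a required init_state_dict argument. A hedged sketch of how the call in NNCFCallback.setup might be updated (passing the model's current state_dict here is an assumption, not necessarily the fix that was merged):

# Hypothetical adjustment inside NNCFCallback.setup; init_state_dict is assumed
# to be the model's current weights and may differ from the actual fix.
init_state_dict = pl_module.model.state_dict()
self.nncf_ctrl, pl_module.model = wrap_nncf_model(
    model=pl_module.model,
    config=config,
    dataloader=trainer.datamodule.train_dataloader(),  # type: ignore
    init_state_dict=init_state_dict,
)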

@paularamo (Author) commented:

@samet-akcay @dmatveev @ashwinvaidya17 Could you please take a look at this? We need to test it out in a Hugging Face Space, and I cannot test it until this issue is solved. Thanks for your help.


self.config = config
self.input_blob, self.output_blob, self.model = self.load_model(path)
self.metadata = super()._load_metadata(metadata_path)

Collaborator review comment on the lines above:

metadata_path is not defined. Did you mean to use metadata?
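
A minimal sketch of the change the review suggests; only the last line differs from the snippet above, and the surrounding constructor is assumed:

self.config = config
self.input_blob, self.output_blob, self.model = self.load_model(path)
# Use the metadata argument that is actually in scope instead of the undefined metadata_path.
self.metadata = super()._load_metadata(metadata)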

@samet-akcay merged commit 4e757d9 into openvinotoolkit:main on Jun 2, 2023
@samet-akcay mentioned this pull request on Jun 15, 2023