
Commit

chore: small linting
Signed-off-by: Naren Dasan <[email protected]>
narendasan committed Apr 17, 2024
1 parent c272b78 commit be0e0e3
Showing 4 changed files with 5 additions and 11 deletions.
6 changes: 1 addition & 5 deletions docsrc/py_api/torch_tensorrt.rst
@@ -37,10 +37,6 @@ Classes
    :members:
    :special-members: __init__
 
-.. autoclass:: TRTModuleNext
-   :members:
-   :special-members: __init__
-
 Enums
 -------
 
@@ -50,7 +46,7 @@ Enums
 
 .. autoclass:: EngineCapability
 
-.. autoclass:: TensorFormat
+.. autoclass:: memory_format
 
 Submodules
 ----------
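For context, a minimal, hypothetical sketch of the enum this page now documents instead of the removed TensorFormat entry; the member name memory_format.linear and the Input(format=...) keyword are assumptions about the existing API, not something this commit changes, and the sketch requires a working torch_tensorrt installation:

import torch
import torch_tensorrt

# The docs now point at the lowercase memory_format enum; it can be passed
# wherever a tensor layout is expected, e.g. when describing an input
# (assumed usage, not part of this commit).
fmt = torch_tensorrt.memory_format.linear
inp = torch_tensorrt.Input(shape=(1, 3, 224, 224), dtype=torch.float32, format=fmt)
print(inp)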
3 changes: 1 addition & 2 deletions py/torch_tensorrt/_enums.py
@@ -299,7 +299,6 @@ def try_to(
         use_default: bool,
     ) -> Optional[Union[torch.dtype, trt.DataType, np.dtype, dtype]]:
         try:
-            print(self)
             casted_format = self.to(t, use_default)
             return casted_format
         except (ValueError, TypeError) as e:
@@ -689,7 +688,7 @@ def to(
             else:
                 raise ValueError("Provided an unsupported engine capability")
 
-        elif t == DeviceType:
+        elif t == EngineCapability:
             return self
 
         elif ENABLED_FEATURES.torchscript_frontend:
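A short, hypothetical sketch of the two code paths touched above, assuming a working torch_tensorrt/TensorRT installation and the enum members dtype.float32 and EngineCapability.STANDARD (the former appears in the old default removed in _compiler.py below); it is not part of the commit:

import torch
from torch_tensorrt._enums import dtype, EngineCapability

# to() converts a torch_tensorrt enum value into a framework-native type;
# with the stray print(self) removed, the conversion is now side-effect free.
assert dtype.float32.to(torch.dtype) == torch.float32

# try_to() wraps to() and catches ValueError/TypeError instead of raising,
# returning the converted value on success (per the Optional return type).
assert dtype.float32.try_to(torch.dtype, use_default=False) == torch.float32

# The corrected branch makes converting an EngineCapability to its own class
# an identity operation (previously it compared against DeviceType).
cap = EngineCapability.STANDARD
assert cap.to(EngineCapability) is cap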
6 changes: 3 additions & 3 deletions py/torch_tensorrt/dynamo/_compiler.py
@@ -419,9 +419,9 @@ def convert_module_to_trt_engine(
     module: torch.fx.GraphModule,
     method_name: str = "forward",
     inputs: Optional[Sequence[Input | torch.Tensor]] = None,
-    enabled_precisions: Set[torch.dtype | dtype] | Tuple[torch.dtype | dtype] = (
-        dtype.float32,
-    ),
+    enabled_precisions: (
+        Set[torch.dtype | dtype] | Tuple[torch.dtype | dtype]
+    ) = _defaults.ENABLED_PRECISIONS,
     debug: bool = _defaults.DEBUG,
     workspace_size: int = _defaults.WORKSPACE_SIZE,
     min_block_size: int = _defaults.MIN_BLOCK_SIZE,
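A rough, hypothetical usage sketch for the changed default; the model, input shape, and the use of torch.export to obtain a GraphModule are illustrative assumptions, and running it requires a CUDA GPU with TensorRT installed:

import torch
from torch_tensorrt.dynamo._compiler import convert_module_to_trt_engine

class TinyModel(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x) + 1.0

inputs = [torch.randn(1, 3, 8, 8, device="cuda")]
gm = torch.export.export(TinyModel().eval().cuda(), tuple(inputs)).module()

# Omitting enabled_precisions now picks up _defaults.ENABLED_PRECISIONS
# instead of the previously hard-coded (dtype.float32,).
engine = convert_module_to_trt_engine(gm, inputs=inputs)

# Passing it explicitly is unchanged; both a set and a tuple satisfy the
# Set[...] | Tuple[...] annotation.
engine = convert_module_to_trt_engine(
    gm, inputs=inputs, enabled_precisions={torch.float32, torch.float16}
)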
1 change: 0 additions & 1 deletion py/torch_tensorrt/ts/_Device.py
@@ -51,7 +51,6 @@ def __init__(self, *args: Any, **kwargs: Any):
             - Device(gpu_id=1)
         """
         super().__init__(*args, **kwargs)
-        print(self)
 
     def _to_internal(self) -> _C.Device:
         internal_dev = _C.Device()
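Finally, a tiny sketch of the constructor whose debug print was dropped; the gpu_id form comes from the docstring shown above, the import path mirrors the file touched by the diff, and a TorchScript-enabled torch_tensorrt build is assumed:

from torch_tensorrt.ts._Device import Device

# Construction is now silent; printing the device is left to the caller.
dev = Device(gpu_id=1)
print(dev)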
