Releases: HolyWu/vs-dpir
v4.3.0
- Rename the `num_batches` parameter to `batch_size` for more idiomatic naming.
- Redirect stdout to stderr to avoid a corrupted pipe caused by warning messages from TensorRT.
- Add an `auto_download` parameter to download only the specified model on first run rather than forcibly downloading all models at once.
- Bump PyTorch and Torch-TensorRT to 2.6.0.
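The stdout-to-stderr redirection can be pictured with a minimal, generic Python sketch. This is an illustration of the technique only, not the plugin's actual code; `noisy_library_call` and `run_quietly` are hypothetical names.

```python
import contextlib
import sys

def noisy_library_call() -> int:
    # Stand-in for a library call (e.g. TensorRT) that emits warnings on stdout.
    print("Warning: something benign happened")
    return 42

def run_quietly() -> int:
    # While the redirect is active, anything written to stdout goes to stderr
    # instead, so a consumer reading our stdout (e.g. a piped frame stream)
    # never sees the warning text.
    with contextlib.redirect_stdout(sys.stderr):
        return noisy_library_call()

result = run_quietly()  # the warning lands on stderr; stdout stays clean
```

Keeping stdout reserved for payload data is what prevents downstream tools reading the pipe from choking on stray warning lines.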
v4.2.0
- Add `num_batches` and `trt_static_shape` parameters.
- Remove `trt_int8`, `trt_int8_sample_step` and `trt_int8_batch_size` parameters.
- Improve performance by using separate streams for transferring tensors between CPU and GPU.
v4.1.0
- Lower the default `trt_min_shape`.
- Mildly decrease TRT engine building time.
- Bump PyTorch to 2.4.0.dev.
- Remove vstools dependency.
v4.0.0
- Add support for TensorRT dynamic shapes.
- Add support for TensorRT INT8 mode using Post Training Quantization (PTQ), giving 2x performance increase over FP16 mode.
- Bump PyTorch to 2.3.
- Bump VapourSynth to R66.
- Bump TensorRT to 10.0.1.
v3.1.1
- Remove `nvfuser` and `cuda_graphs` parameters.
- Bump PyTorch to 2.0.1.
- Bump TensorRT to 8.6.1.
- Bump VapourSynth to R60.
v3.0.1
- Allow the `strength` clip to be any GRAY format.
- Don't globally set default floating point tensor type when the input is of RGBH format.
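Accepting integer GRAY formats for the strength clip implies scaling samples into a float range before use. The sketch below shows one plausible normalization under the assumption (for illustration only) that integer samples are divided by the format's peak value; `normalize_strength` is a hypothetical helper, not the plugin's API.

```python
def normalize_strength(value: int, bit_depth: int = 8) -> float:
    # Map an integer GRAY sample (e.g. GRAY8 in [0, 255]) to a float in
    # [0.0, 1.0] by dividing by the format's peak value.
    # Assumption for illustration: this is how integer strength planes
    # would be brought into a float range before inference.
    peak = (1 << bit_depth) - 1
    return value / peak
```

A GRAYS (32-bit float) clip would need no such scaling, which is why float input is the simplest case to support.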
v3.0.0
- Switch to PyTorch again for inference.
- Change function name to lowercase.
v2.3.0
- Fix the `strength` clip not being properly normalized.
- Allow GRAY8 format for the `strength` clip.
- Add a `dual` parameter to perform inference in two threads for better performance. Mostly useful for TensorRT, less so for CUDA, and not supported with DirectML.
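The idea behind a two-thread inference mode can be sketched generically: two workers let one frame's inference overlap with the next frame's submission. This is a simplified illustration, not the plugin's implementation; `infer` and `dual_stream_inference` are hypothetical names.

```python
from concurrent.futures import ThreadPoolExecutor

def infer(frame: int) -> int:
    # Stand-in for a GPU inference call; here just a trivial transform.
    return frame * 2

def dual_stream_inference(frames: list[int]) -> list[int]:
    # With two workers, one frame can be in flight while another is being
    # submitted, which is the essence of a dual/two-thread mode. Collecting
    # results in submission order preserves the output frame order.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(infer, f) for f in frames]
        return [fut.result() for fut in futures]
```

This kind of overlap pays off mainly when each `infer` call releases the GIL (as GPU inference does), which matches the note that it helps TensorRT more than CUDA.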
v2.2.0
- Add a `trt_max_workspace_size` parameter.
- Allow specifying a GRAYS clip for the `strength` parameter.
v2.1.0
- Add AMD MIGraphX provider.