This changelog follows the specifications detailed in: Keep a Changelog.
This project adheres to Semantic Versioning, although we have not yet reached a 1.0.0 release.

- `_dump_monitor_tensorboard` now additionally writes a bash script to quickly let the user re-visualize results in the case of mpl backend failure
- `load_partial_state` now has an algorithm to better match model keys when the only difference is in key prefixes. This adds the keyword argument `association`, which defaults to `prefix-hack`; the old default was `module-hack`, and `embedding` is more theoretically correct but too slow. (See the sketch after this list.)
- `Optimizer.coerce` now works correctly with any `torch.optim` or `torch_optimizer` optimizer
- Added `BatchContainer.pack` for easier use of non-container-aware models
- Added a `colored` option to `FitHarnPreferences`, which can be set to False to disable ANSI coloring
- `harn.deploy_fpath` is now populated when the model is deployed
- Improved docs in `netharn/data/toydata.py`
- Changed the name of the `torch_snapshots` directory to `checkpoints`

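The key-matching behavior of `load_partial_state` can be illustrated with a short sketch. The import path and call signature here are assumptions based on this entry, not verified API:

```python
import torch
import netharn as nh

# Hypothetical sketch: match keys that differ only by a prefix, such as
# the 'module.' prefix that DataParallel adds to every parameter name.
model = torch.nn.Sequential(torch.nn.Linear(3, 3))
wrapped_state = {'module.' + k: v for k, v in model.state_dict().items()}
info = nh.initializers.functional.load_partial_state(
    model, wrapped_state, association='prefix-hack')
```
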
- Ported experimental `ChannelSpec` and `DataContainer` from bioharn to `netharn.data`
- Added a basic classification example that works on generic coco datasets
- Added threshold curves to `ConfusionVector` metrics
- Initial weights are now saved in an `initial_state` directory
- New `plots` submodule
- Fixed a bug in XPU auto mode which caused it to always choose GPU 0
- Fixed a bug in hyperparams where the dict-based loader spec was not working
- Display intervals were not working correctly with ProgIter; hacked in a temporary fix
- Enhanced VOC ensure-data
- Fixed version issues from the last release

- Added a timeout to `FitHarn.preferences`
- Added basic gradient logging
- Several new functions are now registered with `OutputShapeFor` to support efficientnet (`F.pad`, `F.conv2d`, `torch.sigmoid`)
- Added balanced batch samplers
- The hyperparams "name" can now be specified instead of "nice". We will transition from "nice" to "name"; for now both are supported, but "nice" will eventually be deprecated. (See the sketch after this list.)
- `FitHarn.preferences` now uses scriptconfig, which means the "help" sections are coupled with the object
- Removed explicit support for Python 2.7
- Reverted the default of `keyboard_debug` to True
- Moved `analytic_for`, `output_shape_for`, and `receptive_field_for` to the `netharn.analytic` subpackage. The original names are still available but deprecated, and will be removed in a future version
- Moved helpers from `netharn.hyperparams` to `netharn.util`
- Made pytorch-optimizer optional: https://github.com/jettify/pytorch-optimizer
- netharn will now time out within an epoch
- Fixed a bug when a value in `harn.intervals` was zero

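The intended "nice" to "name" transition can be sketched with a small helper; `resolve_run_name` is hypothetical, written only to illustrate the described behavior:

```python
import warnings

def resolve_run_name(config):
    """Hypothetical helper: prefer 'name', fall back to deprecated 'nice'."""
    if 'name' in config:
        return config['name']
    if 'nice' in config:
        warnings.warn("'nice' is deprecated, use 'name'", DeprecationWarning)
        return config['nice']
    return 'untitled'

print(resolve_run_name({'nice': 'my-experiment'}))  # works, but warns
```
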
- Added an EfficientNet backbone and Swish activation
- Handle the "No running processes found" case in `XPU.coerce('auto')`
- Resize now works with newer `imgaug` versions
- Fixed incorrect use of the word "logit"; what I was calling logits are actually log probabilities

- Using a new mode in `gpu_info`; this is more stable
- Examples are now in the `netharn.examples` directory, which means you can run them without having to git clone netharn
- Moved data grabbers into `netharn.data`
- Moved unfinished examples to dev
- Added `tensorboard_groups` to the config
- Added `min_lr` to Monitor
- Added `harn.iter_index`, a new property which tracks the number of iterations

- Reworked `remove_comments_and_docstrings` so it always produces valid code
- `nh.XPU` classmethods now work correctly for inheriting classes
- Iteration indexes are now correct in tensorboard
- `nh.XPU.cast` will now throw deprecation warnings; use `nh.XPU.coerce` instead
- `harn.config` is deprecated; use `harn.preferences` instead
- The progress display now counts epochs starting from 1, so the final epoch will read `({harn.epoch + 1})/({harn.monitor.max_epoch})`. The internal `harn.epoch` is still 0-based
- Made the `psutil` dependency optional

- Rectify nonlinearity now supports more torch activations
- Smoothing is no longer applied to the lr (learning rate) and momentum monitor plots
- pandas and scipy are now optional (in this package)
- Removed several old dependencies

- Fixed small issues in the CIFAR example
- Fixed a small `imgaug` issue in `examples/sseg_camvid.py` and `examples/segmentation.py`
- FitHarn no longer fails when loaders are missing batch sizes
- Fixed a windows issue in `util_zip.zopen`
- Fixed a runtime dependency on `strip_ansi` from xdoctest

- Second public release
- Added support for `main_device` in the device `ModuleMixin`
- Added `coerce` to `DeployedModel`
- Grad clipping dynamics now default to the L2 norm. The p-norm can be changed using `dynamic['grad_norm_type']`. (See the sketch after this list.)
- Added `AnalyticModule`
- Added support for interpolate in output-shape-for
- Added PSPNet and DeepLab
- Support for 'AdaptivePooling' in output-shape-for
- Added a CamVid-capable torch dataset
- Added `nh.util.freeze_layers`
- Added `super_setup.py` to handle external utility dependencies
- `FitHarn` now attempts to checkpoint the model if it encounters an error
- `DeployedModel.ensure_mounted_model` makes writing prediction scripts easier
- Added the property `FitHarn.batch_index`, which points to the current batch index
- Added border mode and imgaug stochastic params to `Resize`
- Added an `InputNorm` layer, which couples input centering with a torch model
- Added a general segmentation example
- Added a general object detection example
- Added `ForwardFor`
- Added `getitem`, `view`, and `shape` for `ForwardFor` and `OutputShapeFor`
- Added `add`, `sub`, `mul`, and `div` for `ForwardFor`, `OutputShapeFor`, and `ReceptiveFieldFor`
- `XPU.coerce('argv')` will now look for the `--xpu` CLI arg in addition to `cpu` and `gpu`

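The new clipping default corresponds to the standard torch primitive shown below; exactly how the harness's `dynamics` dict forwards to it is an assumption:

```python
import torch

# Sketch: clip gradients by their p-norm. norm_type plays the role of
# grad_norm_type; 2 (the L2 norm) is the new default.
model = torch.nn.Linear(10, 2)
loss = model(torch.randn(4, 10)).sum()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0, norm_type=2)
```
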
- `Hyperparams` now allows the user to specify pre-constructed instances of relevant classes (experimental)
- `Hyperparams` now tries to coerce `initkw` to json-compatible values
- `train_hyper_id_brief` is no longer generated with `short=True`
- `nh.Initializer.coerce` can now accept its argument as a string
- `find_unused_gpu` now prioritizes the GPU with the fewest number of compute processes
- `nh.Pretrained` can now accept fuzzy paths, as long as they resolve to a single unique file
- Netharn now creates a symlink to a static "deploy.zip" version of the deployed models with robust name tags
- The tensorboard mixins now dump losses in both linear and symlog space by default
- Increased the speed of dumping matplotlib outputs
- Broke the requirements down into runtime, tests, optional, etc.
- The defaults for `num_keep` and `keep_freq` have changed to 2 and 20 to reduce disk consumption
- Reorganized the focal loss code
- The `step` scheduler can now specify all step points, e.g. `step-90-140-250` (see the sketch after this list)
- The `stepXXX` scheduler code must now be given all step points, e.g. `step-90-140-250`
- `run_tests.py` now returns the proper exit code

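A minimal sketch of how such a spec string decomposes into step points; the parser is hypothetical, written only to illustrate the format:

```python
def parse_step_spec(spec):
    """Hypothetical parser for specs like 'step-90-140-250'."""
    name, *points = spec.split('-')
    assert name == 'step', 'expected a step-style scheduler spec'
    return [int(p) for p in points]

print(parse_step_spec('step-90-140-250'))  # [90, 140, 250]
```
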
- Fixed an issue extracting generic torch weights from a zipfile with the Pretrained initializer
- Fixed an issue in closer, where attributes referenced in calls were not considered
- Bug fixes in focal loss
- Fixed issues with the new torchvision version
- Fixed an issue with large numbers in `RunningStats`
- Fixed issues with torch `1.1.0`
- Fixed an export failure when `Hyperparams` is given a `MountedModel` instance

- Deprecated `SlidingSlices` and `SlidingSlicesDataset`
- `util_fname` is now deprecated
- Removed the old `_run_batch` internal function
- Removed the `initializer` argument of `nh.Pretrained` in favor of `leftover` (BREAKING)

- Added support for classical classification (SVM / RF) on top of deep features
- Added a `FitHarn.prepare_epoch` callback
- Refactored `netharn.utils` to depend on `kwarray`, `kwimage`, and `kwplot`; this removes a lot of the extra cruft added in `0.1.8`
- The package zip-file name can now be specified when deploying
- Added the option `FitHarn.config['use_tensorboard'] = True`
- `load_partial_state` now returns a dict containing info on which keys were unused
- `nh.initializers.Pretrained` now returns the info dict from `load_partial_state`
- `nll_focal_loss` is now as fast as `nll_loss` when `focus=0` (see the sketch after this list)

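The speedup follows from the definition of focal loss: the modulating factor `(1 - p_t) ** focus` is identically 1 when `focus` is 0, so the loss reduces to plain NLL and the extra work can be skipped. A minimal sketch of that identity (not netharn's actual implementation):

```python
import torch
import torch.nn.functional as F

def nll_focal_loss_sketch(log_probs, targets, focus=2.0):
    # With focus == 0 the modulating factor is 1, so this is exactly
    # F.nll_loss and the exponentiation can be skipped entirely.
    if focus == 0:
        return F.nll_loss(log_probs, targets)
    nll = F.nll_loss(log_probs, targets, reduction='none')
    p_t = torch.exp(-nll)  # probability assigned to the true class
    return ((1 - p_t) ** focus * nll).mean()

log_probs = F.log_softmax(torch.randn(4, 3), dim=1)
targets = torch.tensor([0, 2, 1, 0])
print(nll_focal_loss_sketch(log_probs, targets, focus=0))
```
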
Note: many of the changes in this version were immediately factored out into external modules
- Backported the `ndsampler` Coco-API
- Added `arglexmax`, `argmaxima`, `argminima`
- Added `util_distributions`
- Moved `Boxes` and `DataFrameLight` from `netharn.util` to `netharn.util.structs`
- Enhanced `Boxes` and `DataFrameLight` functionality / docs
- Added `netharn.util.structs.Detections`

- Loss components are now automatically logged when the loss is returned as a dict (see the sketch after this list)
- Added a small interactive debug interface on `KeyboardInterrupt`
- Fixed `XPU.coerce` / `XPU.cast` when the input is multi-gpu

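A toy illustration of the dict-of-losses contract, under the assumption that the harness logs each component under its key and sums the components for the backward pass:

```python
import torch

model = torch.nn.Linear(4, 2)
inputs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))

outputs = model(inputs)
loss_parts = {
    'cls': torch.nn.functional.cross_entropy(outputs, labels),
    'reg': 1e-4 * sum(p.pow(2).sum() for p in model.parameters()),
}
total = sum(loss_parts.values())  # what the harness would backprop
total.backward()
print({key: float(val) for key, val in loss_parts.items()})
```
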
- Added `draw_clf_on_image`
- Added `valign` to `draw_text_on_image`
- Added `border` to `draw_text_on_image`
- A handful of PF GGR-related commits stashed on my home machine meant for 0.1.7

- Added `nh.data.batch_samplers.MatchingSamplerPK`
- Added `shift_sat` and `shift_val` to the HSV augmenter
- Refactored and cleaned `api.py`
- Refactored and cleaned `netharn.initializers`
- Refactored `draw_boxes` and `draw_segments` into `mpl_draw`
- Fixed issues with the YOLO example
- Added `torch_ravel_multi_index` to `nh.util`
- Added `plot_surface3d`
- Added `models.DescriptorNetwork`
- `MLP` can now accept `dim=0`

- Modified batch outputs to all use the `:g` format
- Use `progiter` by default instead of `tqdm`
- `nh.XPU.move` is now applied recursively to containers (e.g. dict, list)
- All `MovingAve` objects can now track variance (see the sketch after this list):
  - `CumMovingAve` can now track variance
  - `ExpMovingAve` can now track variance
  - `WindowedMovingAve` can now track variance
- `imread` now attempts to return RGB or gray-scale by default
- `lr_range_test` now shows std-dev error bars
- Improved API coerce methods (PF / IF)
- `nh.XPU.variable` is deprecated and removed

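For the exponential case, variance tracking presumably follows the standard exponentially weighted recurrence; the class below is an illustrative sketch, not netharn's `ExpMovingAve`:

```python
class ExpMovingStatsSketch:
    """Track an exponential moving mean and a (biased) EW variance."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.mean = None
        self.var = 0.0

    def update(self, x):
        if self.mean is None:
            self.mean = x
            return
        delta = x - self.mean
        self.mean += self.alpha * delta
        # standard exponentially weighted variance recurrence
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta ** 2)

stats = ExpMovingStatsSketch(alpha=0.3)
for x in [1.0, 2.0, 0.5, 1.5]:
    stats.update(x)
print(stats.mean, stats.var)
```
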
- Fixed Python 2 compatibility issues
- Fixed a bug in `IgnoreLayerContext` preventing it from being used with `DataParallel`

- Added `api.py`, containing code to help reduce netharn boilerplate by parsing a config dictionary
- Added a new `ListedScheduler`, which is able to modify multiple optimizer attributes, including learning rate and momentum
- Added a variant of Leslie Smith's learning rate test
- `nh.util.ExpMovingAve` now has a bias-correction option

- Removed the deprecated `_to_var`

- FitHarn now logs momentum by default, in addition to learning rate
- Switched to `skbuild`
- Bug fixes
- Ported `multi_plot` from `KWIL`
- Added `devices` to `nh.layers.Module`
- `FitHarn.config` can now specify `export_modules`, which will be the modules to expand when running the pytorch exporter

- Scheduler states are now saved by default
- Netharn now dumps tensorboard plots every epoch by default
- The default `prepare_batch` now returns a dictionary with keys `input` and `label` (see the sketch after this list)
- Ported modifications from KWIL to `imwrite`, `stack_images`, etc.
- Improved the CIFAR example
- Improved the MNIST example
- Renamed internal variables of `nh.Monitor`
- Improved doc-strings for `nh.Monitor`
- Moved folder functionality into `hyperparams`

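A sketch of the new default contract; device movement is shown with a plain `.to` call, whereas the real harness moves data with its XPU:

```python
import torch

def prepare_batch_sketch(raw_batch, device='cpu'):
    """Normalize an (inputs, labels) tuple into the new dict contract."""
    inputs, labels = raw_batch
    return {
        'input': inputs.to(device),
        'label': labels.to(device),
    }

batch = prepare_batch_sketch((torch.randn(2, 3), torch.tensor([0, 1])))
print(sorted(batch.keys()))  # ['input', 'label']
```
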
- Fixed an issue with relative imports in the netharn exporter
- Refactored the exporter's closure-extractor into its own file
- Deprecated and removed `HiddenShapesFor`
- Moved `HiddenShapesFor` functionality to `OutputShapeFor`
- Added (hacked-in) better `imgaug` hyperparameter logging
- Added a verbose kwarg to `Pretrained.forward`
- Added `IgnoreLayerContext`
- Added `nh.ReceptiveFieldFor`
- Renamed `nh.util.DisableBatchNorm` to `nh.util.BatchNormContext`
- `train_info.json` now gets backed up if it would be overwritten

- Fixed Python 2.7 bugs
- `nh.CocoAPI.show_image` now correctly clears the axis before drawing
- Fixed a bug in `FitHarn._check_divergence`
- Added a `_demo_epoch` function to `FitHarn`, which runs a single epoch for testing purposes
- Added new layers: `GaussianBlurNd`, `L2Norm`, `Permute`, `Conv1d_pad`, `Conv2d_pad`
- `nh.XPU` now supports `__eq__`
- `one_hot_embedding` now supports the `dim` keyword argument (see the sketch after this list)
- Added `nh.XPU.raw` to access the raw underlying model
- Added `util_filesys`, which has the function `get_file_info`
- Added a dependency on `astunparse` to fix a bug where the exporter could not handle complex assignments

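A sketch of what the `dim` keyword enables, built on torch primitives rather than netharn's actual implementation:

```python
import torch

def one_hot_sketch(labels, num_classes, dim=1):
    """One-hot encode integer labels, placing the class axis at ``dim``."""
    onehot = torch.nn.functional.one_hot(labels, num_classes).float()
    return onehot.movedim(-1, dim)  # move trailing class axis into place

labels = torch.tensor([[0, 2], [1, 1]])        # shape (2, 2)
print(one_hot_sketch(labels, 3, dim=1).shape)  # (2, 3, 2)
```
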
- Focal loss no longer produces warnings with newer versions of torch
- The `nh.util.group_items` utility will now default to the `ubelt` implementation for object and string arrays
- Improved the efficiency of `DataFrameArray.groupby`
- `nh.Pretrained` can now discover weights inside deployment files
- The `nh.Pretrained` initializer now only requires the path to the deploy zip-file; it can figure out which files in the deployment are the weights
- `nh.CocoAPI` can now look up images by filename
- `nh.CocoAPI` can now delete categories by category name

- Deprecated and removed irrelevant parts of `CocoAPI`
- Added the ability to remove categories to `CocoAPI`
- Added an experimental `_build_hashid` to `CocoAPI`
- Added compress to `ObjectList1D` in `CocoAPI`
- Added `hidden_shape_for`
- Added a `__json__` method to `nh.XPU`
- Removing annotations and categories now dynamically updates `CocoAPI` indexes
- `harn._export` is now its own function

- Fixed take in `ObjectList1D` in `CocoAPI`
- Fixed a bug where `OutputShapeFor(_MaxPoolNd)` did not respect `ceil_mode` (see the sketch after this list)
- Fixed a bug where the CPU implementation of non-max-suppression was different
- Fixed a bug where snapshots were corrupted with an `EOFError`
- Fixed a bug where temporary directories were not cleaned up
- Integrated the publicly released Pytorch exporter and deployer
- Fixed a bug where train info was not written if you specified a custom train dpath
- Added `DataFrameLight` to `nh.util`, which provides a subset of `pandas.DataFrame` functionality, but much much faster

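The fix amounts to honoring torch's documented rounding rule when propagating shapes; a sketch of that rule along one dimension:

```python
import math

def maxpool_out_size(size, kernel, stride, padding=0, dilation=1,
                     ceil_mode=False):
    """Output-size rule for torch max pooling along one dimension."""
    numer = size + 2 * padding - dilation * (kernel - 1) - 1
    rounder = math.ceil if ceil_mode else math.floor
    out = rounder(numer / stride) + 1
    # torch never lets the last window start inside the right padding
    if ceil_mode and (out - 1) * stride >= size + padding:
        out -= 1
    return out

print(maxpool_out_size(7, kernel=2, stride=2))                  # 3
print(maxpool_out_size(7, kernel=2, stride=2, ceil_mode=True))  # 4
```
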
- Tentative Python 2.7 support
- Fixed an issue with per-instance FitHarn class loggers
- Fixed tests and raised better errors if `tensorflow` does not exist

- Fixed a bug where `seed_global` did not call `torch.cuda.manual_seed_all`
- Better support for torch.device with `nh.XPU`
- Minor reorganization of FitHarn; added more callbacks
- Fixed an issue with unseeded random states; now defaults to the global `np.random` state
- Fixed a bug in `load_arr`
- FitHarn now uses `StreamLogger` instead of print

- Fixed torch 0.4.1 deprecation warnings in focal loss
- Added a `before_epochs` callback

- Fixed tests
- Added `nh.util.global_seed`
- Fixed the MNIST example
- Various minor bug fixes
- Small improvements to outputs
- Better test images
- Better YOLO example
- Other stuff I forgot to log; I'm doing this mostly in my spare time!

- Added `SlidingWindow` as a simplified alternative to `SlidingSlices` (see the sketch after this list)
- Added zip-awareness to the pre-trained loader
- Expanded COCO-API functionality
- Better detection metrics with alternative implementations
- Fixed the YOLO scheduler
- Fixed an issue with `autompl`; it now correctly detects if a display is available

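The idea behind `SlidingWindow` can be sketched as a simple slice generator; this is an illustration, not the actual class:

```python
import itertools as it

def sliding_windows_sketch(shape, window, stride):
    """Yield tuples of slices covering an array of ``shape``."""
    axes = [range(0, dim - win + 1, st)
            for dim, win, st in zip(shape, window, stride)]
    for starts in it.product(*axes):
        yield tuple(slice(s, s + w) for s, w in zip(starts, window))

for sl in sliding_windows_sketch((4, 4), window=(2, 2), stride=(2, 2)):
    print(sl)
```
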
- Early and undocumented commits