
Feature: adaptive histogram equalization #673

Conversation

brianpopow
Collaborator

@brianpopow commented Aug 7, 2018

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my code matches the existing coding patterns and practices demonstrated in the repository. These follow strict Stylecop rules 👮.
  • I have provided test coverage for my change (where applicable)

Description

This PR adds a sliding window implementation of adaptive histogram equalization (AHE). It also adds histogram clipping, which is meant to reduce the over-amplification of noise in relatively homogeneous regions (a known artifact of plain AHE).
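The clipping idea can be sketched as follows (an illustrative Python snippet, not the PR's C# code; the function name and the single redistribution pass are simplifications): each histogram bin is capped at a clip limit and the clipped excess is spread evenly over all bins, which limits how much contrast can be amplified in flat regions.

```python
def clip_histogram(hist, clip_limit):
    """Cap each bin at clip_limit, then spread the clipped excess evenly.

    A single pass; the total pixel count is preserved up to the
    integer-division remainder of the redistribution.
    """
    excess = 0
    clipped = []
    for count in hist:
        if count > clip_limit:
            excess += count - clip_limit  # remember how much was cut off
            clipped.append(clip_limit)
        else:
            clipped.append(count)
    bonus = excess // len(clipped)  # uniform redistribution over all bins
    return [count + bonus for count in clipped]
```

Note that redistribution can push some bins slightly above the clip limit again; repeating the pass would fix that, but the effect is usually negligible.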

Here is an example input:

ahe_example_input

and the output:

ahe_example_output_jpg

(click on the image; at least for me the preview was not displaying it correctly)

This algorithm is computationally very demanding, especially with 65536 grey levels.
To be honest, I'm a bit disappointed by the performance of this implementation on 16-bit greyscale images. It can take up to 30 seconds for a 1000x1000 pixel image on my machine.
For 256 grey levels it takes ~500 ms.

I don't know if this is good enough for ImageSharp, so I was unsure whether I should open the PR in the first place. AHE can produce some pretty nice results, so I thought I'd give it a shot anyway and see what you think of it.

@codecov

codecov bot commented Aug 8, 2018

Codecov Report

Merging #673 into master will increase coverage by 0.11%.
The diff coverage is 98.81%.


@@            Coverage Diff             @@
##           master     #673      +/-   ##
==========================================
+ Coverage   89.07%   89.19%   +0.11%     
==========================================
  Files        1024     1028       +4     
  Lines       44877    45417     +540     
  Branches     3211     3250      +39     
==========================================
+ Hits        39974    40509     +535     
- Misses       4203     4205       +2     
- Partials      700      703       +3
Impacted Files Coverage Δ
src/ImageSharp/Common/Helpers/ImageMaths.cs 87.01% <ø> (ø) ⬆️
tests/ImageSharp.Tests/TestImages.cs 100% <ø> (ø) ⬆️
...Normalization/AdaptiveHistEqualizationProcessor.cs 100% <100%> (ø)
...essing/Normalization/HistogramEqualizationTests.cs 100% <100%> (ø) ⬆️
...rmalization/AdaptiveHistEqualizationSWProcessor.cs 100% <100%> (ø)
...Sharp/Processing/HistogramEqualizationExtension.cs 72.72% <72.72%> (+22.72%) ⬆️
...sors/Normalization/HistogramEqualizationOptions.cs 83.33% <83.33%> (ø)
...malization/GlobalHistogramEqualizationProcessor.cs 96.29% <96.29%> (ø)
...rs/Normalization/HistogramEqualizationProcessor.cs 97.29% <96.66%> (-2.71%) ⬇️
... and 4 more

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2fcba54...f194f4f. Read the comment docs.

@CLAassistant

CLAassistant commented Aug 12, 2018

CLA assistant check
All committers have signed the CLA.

@JimBobSquarePants
Member

It can take up to 30 seconds with an image of 1000x1000 pixels on my machine.
For 256 greylevels it would take ~500 ms.

I'd like to see if we can do anything to improve this as the output is incredible!

@brianpopow
Collaborator Author

brianpopow commented Aug 12, 2018

I'd like to see if we can do anything to improve this as the output is incredible!

@JimBobSquarePants: I am glad you like it. I will try to improve it, but I think the main reason this approach is slow is that the distribution function needs to be calculated for each pixel, and with 65k grey levels that's a lot to chew on.

There is a different approach to this: the image is split into n tiles, depending on how big the grid size is chosen, and the CDF is pre-calculated for each tile. The final grey level is then calculated by interpolating between the 4 adjacent tiles. In theory this approach should be much faster.
I will give this a try over the next week and see how it goes.
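The interpolation step of that approach could look roughly like this (an illustrative Python sketch, not ImageSharp code; `blend_tiles` and the LUT layout are assumptions): each tile has a pre-calculated CDF lookup table, and a pixel's output value is the bilinear blend of what the four surrounding tiles would map its grey value to.

```python
def blend_tiles(luts, tx, ty, fx, fy, grey):
    """Bilinearly blend the equalized grey value from 4 neighbouring tile LUTs.

    luts[row][col] is the pre-calculated CDF lookup table of one tile;
    (fx, fy) in [0, 1] is the pixel's fractional position between tile centres.
    """
    top = (1 - fx) * luts[ty][tx][grey] + fx * luts[ty][tx + 1][grey]
    bottom = (1 - fx) * luts[ty + 1][tx][grey] + fx * luts[ty + 1][tx + 1][grey]
    return (1 - fy) * top + fy * bottom
```

This is much cheaper than the sliding window because the expensive CDF work happens once per tile instead of once per pixel.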

@JimBobSquarePants
Member

Great! Looking forward to seeing what you come up with 👍

…computed for each tile. Grey value will be determined by interpolating between 4 tiles
@brianpopow
Collaborator Author

I have implemented the tile interpolation approach and it is indeed much faster. Even with 65536 grey levels it now takes around 500 ms to compute.

Here is an example output:

example_ahe_interpolation

It's still pretty much a work in progress, but I think it's going in the right direction. I still have some issues to fix: I need to figure out how to deal with the borders, and I think I'm still doing something wrong with the interpolation. If a small grid size like 32 is chosen, the tile edges seem too bright and the result looks kind of blocky. Here is an example:

example_ahe_bright_tile_edges

@JimBobSquarePants
Member

That sounds much more promising!

It looks to me like the sum of your samples is off. When interpolating, the sum of your sample weights should equal 1; this maintains the correct brightness. I'd also choose clamping to deal with edge pixels.
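The normalization condition can be checked with a tiny sketch (illustrative Python, names are mine): bilinear weights of the form below always sum to 1 for any fractional position, so the blended value can never drift brighter or darker than the values being blended.

```python
def bilinear_weights(fx, fy):
    """Weights applied to the four surrounding tiles for a pixel at
    fractional position (fx, fy); they sum to 1 for any input."""
    return [(1 - fx) * (1 - fy),  # top-left
            fx * (1 - fy),        # top-right
            (1 - fx) * fy,        # bottom-left
            fx * fy]              # bottom-right
```

If the weights used in the interpolation do not satisfy this, tile edges come out too bright or too dark, which matches the blocky artifact shown above.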

@brianpopow
Collaborator Author

@JimBobSquarePants just a quick update: I didn't have much time last week to work on this. I hope I'll find some time this weekend to continue with it.

@JimBobSquarePants
Member

@brianpopow no worries, looking forward to seeing this complete.

@JimBobSquarePants
Member

JimBobSquarePants commented Feb 14, 2019

@brianpopow Finally getting round to having another look at this to get it finished.

Two questions:

  1. Is it a requirement of the algorithm to mirror edges? We can optimize things quite a lot if we can remove the reflection and simply reuse the edge pixels (we do that when resampling in the resizer).
  2. Would it be possible to gather the histogram values for each tile by row instead of per column? We could simply slice then copy.

@brianpopow
Collaborator Author

@JimBobSquarePants:

  1. Mirroring the edges is the best way I know to deal with the borders of the image, where there is no data. I don't understand exactly what you mean by reusing the edge pixels. Can you point me to an example source line where this is done in the resizer?

  2. We now move the window from left to right. This means that when we move one pixel, we need to remove one column from the left and add another on the right. When I was moving the window from top to bottom I could read one row (the width of the window) when moving one pixel down, but you said this was not a good idea.
    Maybe I don't understand exactly what you mean, or you see something I don't, but when moving the window from left to right we need to read a column for each step.

I'm sorry that I could not be more helpful here.

@JimBobSquarePants
Member

Hi @brianpopow

1. Mirroring the edges is the best way I know to deal with the borders of the image, where there is no data. I don't understand exactly what you mean by reusing the edge pixels. Can you point me to an example source line where this is done in the resizer?

It's actually pretty simple: instead of mirroring, you reuse the pixels at 0 and source.Width - 1 when sampling. So in your current code you have:

if (x < 0)
{
    x = Math.Abs(x);
}
else if (x >= source.Width)
{
    int diff = x - source.Width;
    x = source.Width - diff - 1;
}

In the simplest terms you would have:

if (x < 0)
{
    x = 0;
}
else if (x >= source.Width)
{
    x = source.Width - 1;
}

This yields a different result at the image edges, but the algorithm still works. I'm fairly certain we could optimize that example further, though, if we went per-row instead of per-column within the tile.
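The two edge strategies can be contrasted in a small sketch (illustrative Python; `mirror_index` follows the reflection arithmetic of the existing code, and `clamp_index` is what plain edge-pixel reuse would look like; both names are mine):

```python
def mirror_index(x, width):
    """Reflect an out-of-range coordinate back into [0, width)."""
    if x < 0:
        return -x                  # mirror past the left edge
    if x >= width:
        return 2 * width - x - 1   # mirror past the right edge
    return x

def clamp_index(x, width):
    """Clamp an out-of-range coordinate to the nearest edge pixel."""
    return max(0, min(x, width - 1))
```

Clamping is cheaper, but it samples the same edge pixel repeatedly, which is what the discussion below turns on.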

2. We now move the window from left to right. This means that when we move one pixel, we need to remove one column from the left and add another on the right. When I was moving the window from top to bottom I could read one row (the width of the window) when moving one pixel down, but you said this was not a good idea.
Maybe I don't understand exactly what you mean, or you see something I don't, but when moving the window from left to right we need to read a column for each step.

There must be something I am not understanding about the algorithm.

When I first suggested going top-down it was because you were using a parallel for over 0 => source.Width, which meant you couldn't slice a row to operate on it. You're now going 0 => source.Height on the outermost loop, which is good, but you're operating per column within the tile and I don't understand why.

8f19e5e#diff-6ca0f7e3ceae4a5676d95d4de12f7b8bR62

I'm sorry that I could not be more helpful here.

Never say that chum, your work is amazing.

@brianpopow
Collaborator Author

brianpopow commented Feb 18, 2019

@JimBobSquarePants:

  1. OK, I understand now what you mean, but we can't do that: it would over-amplify the edge pixels at the border. The edge pixels would be added to the histogram multiple times; the worst case is the corners of the image, where the same value could be added as many times as the window is wide.

One other option is to just ignore the pixels outside the image and not add them at all. I'm not sure that's a good idea either; I'll have to try it and see how it looks.

Another option I have found is here: https://digitalcommons.unf.edu/cgi/viewcontent.cgi?referer=https://www.google.de/&httpsredir=1&article=1264&context=etd
See the section "Image Border", page 26.
They suggest keeping the window in place in the corners (what I call the window, they call the contextual region).
I think that sounds like a good alternative, but I'm not sure at the moment whether it will perform better than the mirroring approach.

  2. Moving the sliding window over the image is comparable to other filtering methods where you apply a filter mask to the region around a pixel, say a Sobel filter, but here the mask / window / contextual region is a bit bigger (typically something like 64 by 64 pixels). This window needs to be moved over each pixel. It does not matter whether it moves from left to right or from top to bottom, but it needs to visit all pixels.
    All pixels covered by the window need to be added to the histogram. To avoid adding every pixel under the window again after each move, you only add one column on the right and remove one column on the left (if you move from left to right).

Here is an example image, maybe that makes it clearer:

slidingwindow-2-figure1-1

If we move the window from top to bottom, which was the case in the commit you pointed out, we add one row at the bottom and remove one at the top.
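The incremental update for the top-to-bottom case can be sketched like this (illustrative Python; grey values index directly into the histogram, and the function name is mine):

```python
def slide_down(hist, image, x0, window_width, old_top_row, new_bottom_row):
    """Move the window one pixel down without rebuilding the histogram:
    subtract the row that leaves at the top, add the row that enters
    at the bottom."""
    for x in range(x0, x0 + window_width):
        hist[image[old_top_row][x]] -= 1      # row leaving the window
        hist[image[new_bottom_row][x]] += 1   # row entering the window
```

So each step costs O(window_width) instead of O(window_width * window_height), which is the whole point of the sliding window.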

I hope this makes it a bit clearer; let me know if you still have any questions.

Unfortunately I don't think I will find time to continue with this during the week. Maybe I can find some time next week.

@JimBobSquarePants
Member

OK, I understand now what you mean, but we can't do that: it would over-amplify the edge pixels at the border. The edge pixels would be added to the histogram multiple times; the worst case is the corners of the image, where the same value could be added as many times as the window is wide.

Reading the linked article, I now understand that sampling is more accurate away from the border, so mirroring seems to be the valid approach.

If we move the window from top to bottom, which was the case in the commit you pointed out, we add one row at the bottom and remove one at the top.

Definitely preferred over columns, as we can do span.Slice(x, range).Copy() for anything other than edge pixels.

Gonna have a good read through the article. It explains the process very clearly. 👍

@brianpopow
Collaborator Author

@JimBobSquarePants: OK, so we should keep mirroring and switch back to moving the window from top to bottom? I will do that if you agree.

@JimBobSquarePants
Member

Sorry... on holiday.

Yeah, grab per row but do a fast and slow path since we only need mirroring on edges.
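The fast/slow split might look like this (illustrative Python, not the actual GetPixelRow implementation; the function name and slicing details are assumptions):

```python
def get_window_row(row, x0, length):
    """Grab one window row. Fast path: the span lies fully inside the
    image, so a plain slice works. Slow path: mirror coordinates that
    fall outside, which is only needed near the image borders."""
    width = len(row)
    if x0 >= 0 and x0 + length <= width:
        return row[x0:x0 + length]   # fast path, no per-pixel checks
    out = []
    for x in range(x0, x0 + length):
        if x < 0:
            x = -x                   # mirror past the left edge
        elif x >= width:
            x = 2 * width - x - 1    # mirror past the right edge
        out.append(row[x])
    return out
```

Since the interior of the image vastly outnumbers the border pixels, almost every call takes the cheap slice path.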

@brianpopow
Collaborator Author

@JimBobSquarePants I'm sorry there has been no update on this one for so long. I really want to finish it, but I haven't been able to find time recently because of too much work at my job. I will try to get back to it next weekend.

@JimBobSquarePants
Member

@brianpopow No worries, thanks again for your help!

brianpopow and others added 5 commits April 23, 2019 19:16
@brianpopow
Collaborator Author

@JimBobSquarePants: I finally found some time to continue with this. As discussed, the sliding window is moved from top to bottom again, so we can add a row to the histogram when it moves down one step.

I have changed the mirroring, so it will only be done on the edges now.

@JimBobSquarePants
Member

@brianpopow Excellent!

Looks like we need to update the reference image again as the output is now different. I'll review this asap. 👍

@brianpopow
Collaborator Author

@JimBobSquarePants: yes, but let's update the reference image after the review is done.

Member

@JimBobSquarePants left a comment


Congrats @brianpopow we've finally made it!

I ran a pass over the code and did the final performance tweaks myself, removing some bounds checks, so that you weren't left doing more work.

I'm very happy with this now; thanks so much for your patience and persistence. It's an amazing bit of functionality!

@JimBobSquarePants merged commit 4197e69 into SixLabors:master Apr 27, 2019
@brianpopow
Collaborator Author

@JimBobSquarePants: Great, thank you! This took quite some time, but I still think it was worth it.

JimBobSquarePants pushed a commit that referenced this pull request Aug 25, 2019
* Nits - Benchmarks (#884)

* Update metadata names

* Use WithIterationCount

* Format Benchmark documents

* Update copyright assignment to Six Labors & Contributors

* Update deps

* React to Benchmark library update

* ResizeWindowOld

* ResizeWindow refactor 1

* ResizeWindow refactor 2

* ResizeWindow refactor 3

* ResizeWindow refactor 4

* basic sliding window implementation (semi-stable state)

* fix ResizeWithCropHeightMode

* reference output for Resize_BasicSmall

* minor optimization

* Handle incorrect colorspace metadata. Fix #882 (#885)

* refactor
- ResizeWindow -> ResizeWorker
- Most logic moved to ResizeWorker

* refactor stuff + implement CalculateResizeWorkerWindowCount()

* utilize CalculateResizeWorkerHeightInWindowBands()
which has been renamed from CalculateResizeWorkerWindowCount()

* improve benchmark: ArrayCopy -> CopyBuffers

* moar RowInterval stuff

* buffer.CopyColumns(...)

* simplify ResizeWorker logic

* WorkingBufferSizeHintInBytes_IsAppliedCorrectly

* more robust tests

* ResizeTests.LargeImage

* optimized sliding works!

* reapply unsafe optimizations

* moar unsafe optimization

* benchmark WorkingBufferSizeHint effects

* memory profiling with Sandbox46

* add ResizeWorker.pptx

* refine ResizeWorker.pptx

* xmldoc for ResizeWorker

* update ResizeWorker.pptx [skip CI]

* fix tests

* fix license text in CopyBuffers benchmark

* update travis.yml

* I'm a terrible copy-paster

* extend the CopyBuffers benchmark

* use HashCode.Combine()

* Pass correct output size in ResizeMode.Min #892 (#893)

* Cleanup General Convolution (#887)

* Remove multiple premultiplication.

* Use in DenseMatrix everywhere.

* Make private

* Dont convert vector row on first pass

* Remove incorrectly assigned alpha.

* Remove boxing.

* Use correct min row.

* Reorder parameters

* Correctly handle alpha component.

* Update tests

* Use dedicated methods over branching.

* Faster Jpeg Huffman Decoding. (#894)

* Read from underlying stream less often

* Update benchmark dependencies

* Experimental mango port

Currently broken

* Populate table, 64byte buffer

Still broken.

* Baseline, non RST works

* 15/19 baseline tests pass now.

* Optimize position change.

* 18/19 pass

* 19/19 baseline decoded

* Can now decode all images.

* Now faster and much cleaner.

* Cleanup

* Fix reader, update benchmarks

* Update dependencies

* Remove unused method

* No need to clean initial buffer

* Remove bounds check on ReadByte()

* Refactor from feedback

* Feature: adaptive histogram equalization (#673)

* first version of sliding window adaptive histogram equalization

* going now from top to bottom of the image, added more comments

* using memory allocator to create the histogram and the cdf

* mirroring rows which exceeds the borders

* mirroring also left and right borders

* gridsize and cliplimit are now parameters of the constructor

* using Parallel.For

* only applying clipping once; the effect of applying it multiple times is negligible

* added abstract base class for histogram equalization, added option to enable / disable clipping

* small improvements

* clipLimit now in percent of the total number of pixels in the grid

* optimization: only calculating the cdf until the maximum histogram index

* fix: using configuration from the parameter instead of the default

* removed unnecessary loops in CalculateCdf, fixed typo in method name AddPixelsToHistogram

* added different approach for ahe: image is split up in tiles, cdf is computed for each tile. Grey value will be determined by interpolating between 4 tiles

* simplified interpolation between the tiles

* number of tiles is now fixed and dependent on the width and height of the image

* moved calculating LUT's into separate method

* number of tiles is now part of the options and will be used with the sliding window approach also, so both methods are comparable

* removed no longer valid xml comment

* attempt fixing the borders

* refactoring to improve readability

* linear interpolation in the border tiles

* refactored processing the borders into separate methods

* fixing corner tiles

* fixed build errors

* fixing mistake during merge from upstream: setting test images to "update Resize reference output because of improved ResizeKernelMap calculations"

* using Parallel.ForEach for all inner tile calculations

* using Parallel.ForEach to calculate the lookup tables

* re-using pre allocated pixel row in GetPixelRow

* fixed issue with the border tiles, when tile width != tile height

* changed default value for ClipHistogram to false again

* alpha channel from the original image is now preserved

* added unit tests for adaptive histogram equalization

* Update External

* 2x faster adaptive tiled processor

* Remove double indexing and bounds checks

* Begin optimizing the global histogram

* Parallelize GlobalHistogramEqualizationProcessor

* Moving sliding window from left to right instead of from top to bottom

* The tile width and height is again dependent on the image width: image.Width / Tiles

* Removed keeping track of the maximum histogram position

* Updated reference image for sliding window AHE for moving the sliding window from left to right

* Removed unnecessary call to Span.Clear(), all values are overwritten anyway

* Revert "Moving sliding window from left to right instead of from top to bottom"

This reverts commit 8f19e5e.

# Conflicts:
#	src/ImageSharp/Processing/Processors/Normalization/AdaptiveHistEqualizationSWProcessor.cs

* Split GetPixelRow into two versions: one which mirrors the edges (only needed at the borders of the image) and one which does not

* Refactoring and cleanup sliding window processor

* Added an upper limit of 100 tiles

* Performance tweaks

* Update External

* ImageBrush shouldn't Dispose of the image it is using. (#883)

Fixes #881

* Now throws a better exception when DrawImage source does not overlap target (#877)

* No longer throws when DrawImage source does not overlap target

Previously, when DrawImage was used to overlay an image, in cases where the source image did not overlap the target image, a very confusing error was reported: "System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.
Parameter name: MaxDegreeOfParallelism"

Now, when this case happens, the DrawImage method will simply not affect the target image, which is the same way FillRegionProcessor handles such cases.

ParallelHelper.IterRows also now does more validation of the input rectangle so that any further cases of this kind of problem throw a more relevant exception. Note I switched from DebugGuard to Guard because IterRows is a public API and thus should always validate its inputs.

Fixes #875

* Refines DrawImage non-overlap error logic

Addresses PR feedback in #877.

Changes DrawImage shortcut to be less than or equal to. Also changes maxX calculation to use `Right` over `Width`. This is a semantic change that reflects intention better. No actual change because Top,Left for that rectangle should always be 0,0.

And adds more informative argument names to ParallelHelper.IterRows error message.

* Non-overlapping DrawImage now throws

Addressing PR feedback from #877

DrawImage now throws when the source image does not overlap, with a useful error message.

Also improved the error messages for IterRows (and added validation to the other IterRows method)

* DrawImage overlap test changed to support RELEASE

The tests on the CI server are run in RELEASE which wrap the expected exception with an ImageProcessingException.

* Address feedback for DrawImage exception

DrawImage throws an ImageProcessingException, which makes it easier to catch.

And reverted IterRows to use Guard helpers

* Bitmap encoder writes V3 header as default (#889)

* Bitmap encoder will now write a V3 header as default.

Introduces a new encoder option `SupportTransparency`: With this option a V4 header will be written with BITFIELDS compression.

* Add 4 and 16 bit images to the Identify test

* Add some useful links for bitmap documentation

* Add 32 bpp images to the Identify test

* Added further information on what will change for the encoder, if SupportTransparency is used.

* Add support for encoding 16 bit per pixel bitmaps (#899)

* Implemented encoding of 16 bits per pixel bitmaps

* Add unit tests for 16 bit encoding and Bgra5551 conversion

* Add additional Bgra5551 pixel conversion tests

* Add Bgra5551 tests for Short2/4 and HalfVector2/4

* Use scaled vector conversion

* define IImageVisitor

* pixel agnostic Mutate/Clone defined

* pixel-agnostic ResizeProcessor

* pixel-agnostic decoder API

* refactor FilterProcessor stuff

* basic fixes after rebase + temporarily comment out target frameworks

* refactor the rest of the FilterProcessor code

* reached a fully compiling state

* fix processor invocation tests

* re-add and fix filter invocation tests

* fix warnings and improve xmldocs

* *ProcessorImplementation<T> ===> *Processor<T>,

add suppression of SA1413 to AssemblyInfo.cs

* validating tests for Convolution processors

* validating tests for Convolution processors

* mark FileTestBase obsolete

* improve DitherTests CQ

* extended diffusion + dither tests

* Add additional pixel conversion tests

* validating tests for Effects

* validating tests for AutoOrient

* validating tests for Skew and Entropy crop

* validating tests for Overlay processors

* Add more pixel conversion tests

* skip DitherTests on old 32 bit runtimes

* refactor BoxBlur and GaussianBlur

* making filters public

* Further refactor on Gaussian stuff

* drop IEdgeDetectorProcessor

* Refactor edge detection

* refactor Effects processors

* Finished refactoring transforms

* sealed everything

* refactor HistogramEqualization

* cache Image dimensions into a field + re-enable all target frameworks

* fix Image.FromStream() + add tests

* full coverage for Image.Load (I hope)

* fix changes applied by mistake

* Fix docs for processing extensions

* publish non-generic QuantizeProcessor

* temporarily disable multitargeting

* add skeleton for Color type

* add more Rgba64 constructor overloads

* basic Color methods

* Pixel-agnostic Binarization processors

* Implement WernerPalette and WebSafePalette for Color

* refactor dithering and error diffusion to use Color

* Add support for encoding 8-bit bitmaps

* Changed WuQuantizer to OctreeQuantizer

* Update Readme (#905)

A few minor grammatical corrections.

* Trying MagickReferenceDecoder for the 8bit greyscale image encoder tests

* Setting dither to false in the OctreeQuantizer

* verbose naming for Histogram Equalization stuff + make it public

* formatting

* refactor of Overlays

* made AutoOrientExtensions non-generic

* cleanup and document Color

* cleanup

* Switched back to the default reference decoder

* Made PaletteQuantizer non-generic all the way

* QuantizedFrame<T> using ReadOnlyMemory<T>

* Correct readonly-semantics for QuantizedFrame

* introduce IQuantizedFrame<T>

* move all extension methods under a subfolder

(non-namespace providing)

* re-enable all target frameworks

* Using tolerant comparer for the Bit8Gs image

* More docs for Color, correct naming in ColorBuilder<TPixel>

* temporarily disable target frameworks

* Enabled dither again

* validating tests for: DrawPath, FillComplexPolygon

* validation in DrawImageTest

* RecolorImageTests

* FillPolygonTests

* DrawPolygonTests, DrawLinesTests

* DrawBeziersTests, DrawComplexPolygonTests

* move implicit Color conversion to pixel types,

add micro-optimizations

* fix merge issues

* DrawImageTests: add tolerance to make all test configurations happy

* Review changes

* started the refactor

* Pen, Brush & Processors refactored

* ImageSharp.Drawing compiles

* everything builds

* tests are passing

* fix new 8bit bmp code

* move drawing extensions to a (non-namespace-provider) subfolder

* rename files

* ImageBrush can apply a source image of a different pixel type than the target

* non-generic DrawImageProcessor

* DrawImageOfDifferentPixelType test cases

* clean-up drawing processors

* fix remaining stylecop issues

* Rgba32.Definitions: use Color instead of NamedColors<T>

* Remove NamedColors<T> usages

* Not using quantization for Grey8 images when encoding 8Bit Bitmaps

* Refactor Write8Bit into two methods: one for gray and one for color

* drop unnecessary generic IImageProcessorContext<TPixel> usages

* drop almost all usages of FileTestBase

* fix tests

* re-enable target frameworks

* Add tests for quantization of 32 bit bitmap to 8 bit color image

* Renamed DrawImageTest to DrawImageTests, fixed namespace

* Using tolerant comparer for the Encode_8BitColor tests

* Removed the Encode_8BitColor, because of the quantization differences with net472 and netcore

* Re-enabled Encode_8BitColor tests, if its 64 bit process

* Using MemoryMarshal.AsBytes in Write8BitGray to get the bytes of a row

* Fix merge mistake: Using DrawImageTests from current master branch

* API cleanup (related to #907) (#911)

* temporarily disable target frameworks

* drop DelegateProcessor

* drop IImageProcessingContext<TPixel>

* drop NamedColors<T>

* drop ColorBuilder<T>

* drop the *Base postfix for clean class hierarchies

* re-enable target frameworks

* use MathF in gradient brushes

* Move PngFilterMethod to the correct namespace.

* Fix for Issue #871 and #914 (#915)

* Fix #466 (#916)

* Added funding file with a link to open collective.

* Removes usage of linq in several critical paths (#918)

* Remove linq usage from jpeg + formatting

* png

* ICC + formattiing

* Resize

* Fix base class comparison.

* Updating the repo to use Directory.Build.props/targets files (#920)

* Updating the repo to use Directory.Build.props/targets files

* Adding an InternalsVisibleTo for DynamicProxyGenAssembly2, PublicKeyToken=null

* Removing duplicate includes from the ImageSharp.csproj

* Updating the .gitattributes file to explicitly list the line endings

* Removing the ImageSharp.ruleset file, as the one from standards should be used instead

* Updating the package version management to use `PackageReference Update`

* Fix build/test (#923)

* Prevent zigzag overflow. Fix #922

* Add test image and use constants

* Improve robustness of huffman decoder.

* Fix missing "using PixelFormats" line in Readme example (#921)

* Fix missing PixelFormats line in first API example

"<Rgba32>" does not appear to be defined without the "using SixLabors.ImageSharp.PixelFormats;" line and causes a "The type or namespace name 'Rgba32' could not be found (are you missing a using directive or an assembly reference?) (CS0246)" error to occur.

* Remove stray newline

* Feature: Bitmap RLE undefined pixel handling (#927)

* Add bitmap decoder option, how to treat skipped pixels for RLE

* Refactored bitmap tests into smaller tests, instead of just one test which goes through all bitmap files

* Add another adobe v3 header bitmap testcase

* Using the constant from BmpConstants to Identify bitmaps

* Bitmap decoder now can handle oversized palettes

* Add test for invalid palette size

* Renamed RleUndefinedPixelHandling to RleSkippedPixelHandling

* Explicitly using SystemDrawingReferenceDecoder in some BitmapDecoder tests

* Add test cases for unsupported bitmaps

* Comparing RLE test images to reference decoder only on windows

* Add test case for decoding winv4 fast path

* Add another 8 Bit RLE test with magick reference decoder

* Optimize RLE skipped pixel handling

* Refactor RLE decoding to eliminate code duplication

* Using MagickReferenceDecoder for the 8-Bit RLE test

* Fix 925 (#929)

* Prevent overflow

* Cleanup huffman table

* Search for RST markers.

* Fix Benchmarks project

* Bitmap decoder now can decode bitmap arrays (#930)

* Bitmap Decoder can now decode BitmapArray

* Add tests for bitmap metadata decoding. Fix an issue that a bitmap with a v5 header would be set in the metadata as a v4 header.

* Fixed issue with decoding bitmap arrays: color map size was not determined correctly. Added more test images.

* Refactor colormap size duplicate declaration.

* Fixed an issue where, when an unsupported bitmap is loaded, the type marker was not correctly shown in the error message

* Throw UnknownFormatException on Image.Load (#932)

* Throw ImageFormatException on load

* Unseal class and make constructor internal
- This is so that no one can new it up / inherit it outside of the assembly

* Add new exception to distinguish between different exceptions
- This will be used on image.load operations with invalid image streams

* ImageFormatException -> UnknownImageFormatException

* Add Image.Load throws exception tests

* Fix #937 (#938)

* Add support for decoding RLE24 Bitmaps (#939)

* Add support for decoding RLE24

* Simplified determining colorMapSize, OS/2 always has 3 bytes for each palette entry

* Enum value for RLE24 is remapped to a different value, to be clearly separate from valid windows values.

* Introduce non-generic ImageFrameCollection (#941)

* temporarily disable target frameworks

* drop DelegateProcessor

* drop IImageProcessingContext<TPixel>

* drop NamedColors<T>

* drop ColorBuilder<T>

* drop the *Base postfix for clean class hierarchies

* adding basic skeletons

* non-generic ImageFrameCollection API definition

* non-generic ImageFrameCollection tests

* cleanup + docs + more tests

* implement ImageFrameCollection methods

* tests for generic PixelOperations.To<TDest>()

* experimental implementation

* fix .ttinclude

* generate generic From<TSourcePixel>(...)

* fix RgbaVector <--> BT709 Gray pixel conversion

* Gray8 and Gray16 using ConvertFromRgbaScaledVector4() by default

* fixed all conversion tests

* ConstructGif_FromDifferentPixelTypes

* fix xmldoc and other StyleCop findings

* re-enable all target frameworks

* fix NonGenericAddFrame() and NonGenericInsertFrame()

* fix remaining bugs

* #946: AoT compiler fixes and hid Seed methods

* #946: fixed StyleCop errors

* Master cleanup (#952)

* Fix gitignore and line endings

* Update README.md

* Bokeh blur implementation (#842)

* Added base BokehBlurProcessor class, and kernel parameters

* Added method to calculate the kernel parameters

* Switched to float, added method to create the 1D kernels

* Added complex kernels normalization

* Added BokehBlurExtensions class

* Added the Complex64 struct type

* Switched to Complex64 in the BokehBlurProcessor

* Added caching system for the bokeh processor parameters

* Added WeightedSum method to the Complex64 type

* Added IEquatable<T> interface to the Complex64 type

* New complex types added

* Added method to reshape a DenseMatrix<T> with no copies

* Added bokeh convolution first pass (WIP)

* Added second bokeh convolution pass (WIP)

* Added image sum pass to the bokeh processor (WIP)

* Minor bug fixes (WIP)

* Switched to Vector4 processing in the bokeh computation

* Minor tweaks

* Added Unit test for the bokeh kernel components

* Minor performance improvements

* Minor code refactoring, added gamma parameter (WIP)

* Removed unused temp buffers in the bokeh processing

* Gamma highlight processing implemented

* Speed optimizations, fixed partials computations in target rectangle

* Increased epsilon value in the unit tests

* Fixed for alpha transparency blur

* Fixed a bug when only blurring a target rectangle

* Added bokeh blur image tests (WIP)

* Added IXunitSerializable interface to the test info class

* culture independent parsing in BokehBlurTest.cs

* Performance optimizations in the bokeh processor

* Reduced number of memory allocations, fixed bug with multiple components

* Initialization and other speed improvements

* More initialization speed improvements

* Replaced LINQ with manual loop

* Added BokehBlur overload to just specify the target bounds

* Speed optimizations to the bokeh 1D convolutions

* More speed optimizations to the bokeh processor

* Fixed code style and Complex64.ToString method

* Fixed processing buffer initialization

* Minor performance improvements

* Fixed issue when applying bokeh blur to specific bounds

* Minor speed optimizations

* Minor code refactoring

* Fixed convolution upper bound in second 1D pass

* improve BokehBlurTest coverage

* use Gray8 instead of Alpha8

* Adjusted guard position in bokeh processor constructor

* Added BokehBlurParameters struct

* Added BokehBlurKernelData struct

* Minor code refactoring

* Fixed API change build errors

* Bug fixes with the pixel premultiplication steps

* Removed unwanted unpremultiplication pass

* Removed unused using directives

* Fixed missing using directives in conditional branches

* Update from latest upstream master

* Update Block8x8F.Generated.cs

* Update GenericBlock8x8.Generated.cs

* Manual checking for files with LF (see gitter)

* Removed unused using directive

* Added IEquatable<ComplexVector4> interface

* Added IEquatable<BokehBlurParameters> interface

* Moved bokeh blur parameters types

* Added reference to original source code

* Complex convolution methods moved to another class

* Switched to MathF in the bokeh blur processor

* Switched to Vector4.Clamp

* Added Buffer2D<T>.Slice API

* Added BokehBlurExecutionMode enum

* Added new bokeh blur processor constructors

* Added new bokeh blur extension overloads with execution mode

* Code refactoring in preparation for the execution mode switch

* Implemented execution mode switch in the bokeh processor

* Moved BokehBlurExecutionMode struct

* Removed unused using directives

* Minor code refactoring

* More minor code refactoring

* Update External

* Fix undisposed buffers

* Bokeh blur processor cache switched to concurrent dictionary

* Minor code refactoring

* remove duplicate props from csproj

* Add support for read and write tEXt, iTXt and zTXt chunks (#951)

* Add support for writing tEXt chunks

* Add support for reading zTXt chunks

* Add check, if keyword is valid

* Add support for reading iTXt chunks

* Add support for writing iTXt chunks

* Remove Test Decode_TextEncodingSetToUnicode_TextIsReadWithCorrectEncoding: Assertion is wrong, the correct keyword name is "Software"

* Add support for writing zTXt chunk

* Add an encoder option to enable compression when the string is larger than a given threshold

* Moved text decompression into a separate method

* Remove textEncoding option from png decoder options: the encoding is determined by the specification: https://www.w3.org/TR/PNG/#11zTXt

* Removed invalid compressed zTXt chunk from test image

* Revert accidentally committed changes to Sandbox Program.cs

* Review adjustments

* Using 1024 bytes as a limit when to compress text as recommended by the spec

* Fix inconsistent line endings

* Trim leading and trailing whitespace on png keywords

* Move some metadata related tests into GifMetaDataTests.cs

* Add test case for gif with large text

* Gif text metadata is now a list of strings

* Encoder writes each comment as a separate block

* Adjustment of the Tests to the recent changes

* Move comments to GifMetadata

* Move Png TextData to format PngMetaData

* #244 Add support for interlaced PNG encoding (#955)

* #244 Implement interlaced PNG encoding

* #244 Update documentations

* #244 Remove comment

* Cleanup

* Update PngEncoderCore.cs

* fix some spelling (#957)

* fix some spelling

* more typos

* more typos

* more typos

* more typos

* more typos

* linearSegment

* fix "as" usage (#959)

* redundant usings (#960)

* use params where possible (#961)

* use params where possible

* use params

* fix some spelling (#962)

*  Add test illustrating issue

* Add test from issue #928

* Add possible fix

* remove unused variables and methods (#963)

* remove unused variables and methods

* remove some redundant variables

* remove some redundant variables

* redundant variables

* Update DrawTextOnImageTests.cs

* Minor optimizations

* cleanup

* Cleanup (#965)

* redundant ()

* redundant string interpolation

* use method groups

* redundant unsafe

* redundant qualifiers

* redundant ()

* redundant init

* redundant init

* redundant casts

* redundant casts

* Throw ObjectDisposedException when trying to operate on a disposed image (#968)

* disable multitargeting + TreatWarningsAsErrors to for fast development

* Check if image is disposed

in significant Image and Image<T> methods

* Mutate / Clone: ensure image is not disposed

* Revert "disable multitargeting + TreatWarningsAsErrors to for fast development"

This reverts commit 9ad74f7.

* remove some redundant variables and type params (#971)

* remove redundant variable init

* redundant variables

* remove redundant tileY variable

* remove redundant sum variable

* redundant mcu variable

* redundant type params

* Revert "remove redundant sum variable"

This reverts commit 21de86c.

* redundant comment

* remove redundant ParallelOptions

* avoid multiple array lookups

*  use var where apparent  (#972)

* use var where apparent

* use var where apparent

* should use Rgb24

* cache max in ConvertToRgba

* cache max value

* remove some redundant usings (#976)

* remove SteppedRange (#980)

* Implement decoder tests for debugging tests and add deflate bug output file

* Add TiffConfigurationModule to core Configuration. Implement simple unit-tests for Tiff decoder.

* Add benchmarks for big and medium tiff files

* Move Tiff classes to Formats.Tiff namespace

* Mark Tiff format tests with "Tiff" category, cleanup test classes

* Improve performance of Tiff decoders

* remove some redundant constructor overloads from exceptions (#979)

* remove some redundant constructor overloads from exceptions

* re add ImageProcessingException ctor

only used in release

* Fix build error, cleanup.
Correct updated metadata name (for Resolution properties); temporarily disable tiff native metadata properties.

* Implement temporary tiff native metadata structures

* Mark missed tiff tests, exclude DecodeManual test

* Update test images submodule

* Explanations
antonfirsov pushed a commit to antonfirsov/ImageSharp that referenced this pull request Nov 11, 2019
* first version of sliding window adaptive histogram equalization
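The key idea behind the sliding-window variant is that the window histogram can be maintained incrementally instead of being rebuilt per pixel. A minimal sketch, written in Python for brevity (the PR itself is C#) and using border clamping rather than the PR's mirroring:

```python
def sliding_window_ahe(image, window=9):
    """Sketch of sliding-window adaptive histogram equalization for an
    8-bit grayscale image given as a list of rows (lists of ints 0-255).

    As the window slides down a column, the row leaving the window is
    subtracted from the histogram and the entering row is added, so the
    histogram is never rebuilt from scratch. Borders are clamped here
    for simplicity; the PR mirrors out-of-range pixels instead.
    """
    h, w = len(image), len(image[0])
    half = window // 2
    out = [[0] * w for _ in range(h)]
    for x in range(w):
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        hist = [0] * 256
        for y in range(min(h, half + 1)):  # initial window centred on row 0
            for v in image[y][x0:x1]:
                hist[v] += 1
        for y in range(h):
            if y > 0:
                leaving, entering = y - half - 1, y + half
                if leaving >= 0:           # subtract the row that left
                    for v in image[leaving][x0:x1]:
                        hist[v] -= 1
                if entering < h:           # add the row that entered
                    for v in image[entering][x0:x1]:
                        hist[v] += 1
            # Equalize the centre pixel using the window's CDF.
            cdf = sum(hist[: image[y][x] + 1])
            out[y][x] = (cdf * 255) // sum(hist)
    return out
```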

* going now from top to bottom of the image, added more comments

* using memory allocator to create the histogram and the cdf

* mirroring rows which exceed the borders

* mirroring also left and right borders
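Mirroring reduces to mapping an out-of-range coordinate back into the image. A small sketch of the reflection (illustrative only, not the PR's exact helper; valid as long as the overshoot is smaller than the image dimension):

```python
def mirror_index(i, size):
    """Reflect an out-of-range coordinate back into [0, size).

    Pixels past the border are mirrored without repeating the edge
    pixel: for size 5, index -1 maps to 1, index 5 maps to 3 and
    index 6 maps to 2. Assumes abs(overshoot) < size.
    """
    if i < 0:
        return -i
    if i >= size:
        return 2 * size - 2 - i
    return i
```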

* gridsize and cliplimit are now parameters of the constructor

* using Parallel.For

* only applying clipping once; the effect of applying it multiple times is negligible

* added abstract base class for histogram equalization, added option to enable / disable clipping

* small improvements

* clipLimit now in percent of the total number of pixels in the grid
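Expressing the clip limit as a fraction of the grid's pixel count makes the limit independent of tile size. A simplified Python sketch of the clipping step (the redistribution here is a single pass; real implementations typically loop until the excess is fully spread):

```python
def clip_histogram(hist, clip_percent, num_pixels):
    """Sketch of contrast-limited histogram clipping.

    The clip limit is a fraction of the pixels in the tile. Counts
    above the limit are cut off and the clipped excess is spread
    evenly over all bins, which bounds the slope of the CDF and thus
    the amount of contrast (and noise) amplification.
    """
    limit = int(clip_percent * num_pixels)
    excess = 0
    for i, count in enumerate(hist):
        if count > limit:
            excess += count - limit
            hist[i] = limit
    per_bin = excess // len(hist)  # remainder dropped in this sketch
    for i in range(len(hist)):
        hist[i] += per_bin
    return hist
```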

* optimization: only calculating the cdf until the maximum histogram index
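For reference, the CDF-to-LUT step that this optimization targets looks roughly like the following sketch (summing the full range for simplicity, where the PR stops at the highest occupied index; `equalize_via_cdf` is an illustrative name, not the PR's method):

```python
def equalize_via_cdf(hist, max_level=255):
    """Sketch of building an equalization lookup table from a histogram.

    The running sum of the histogram is the CDF; subtracting the CDF of
    the first occupied bin rescales the output so it starts at level 0.
    """
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    total = cdf[-1]
    cdf_min = next(c for c in cdf if c > 0)
    denom = total - cdf_min
    lut = [0] * len(hist)
    for level in range(len(hist)):
        if denom > 0:
            lut[level] = max(0, round((cdf[level] - cdf_min) / denom * max_level))
        # if denom == 0 every pixel has the same value; the LUT stays 0
    return lut
```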

* fix: using configuration from the parameter instead of the default

* removed unnecessary loops in CalculateCdf, fixed typo in method name AddPixelsToHistogram

* added a different approach for AHE: the image is split into tiles and a CDF is computed for each tile. The grey value is determined by interpolating between the 4 nearest tiles
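The per-pixel lookup in the tiled approach is a bilinear blend of the four surrounding tiles' lookup tables. A sketch of that blend (argument layout is hypothetical, chosen for illustration rather than matching the PR's signature):

```python
def blend_tiles(luts, ty, tx, fy, fx, value):
    """Sketch of the tiled-AHE lookup: the output grey level is the
    bilinear blend of four neighbouring tiles' equalization LUTs.

    luts[ty][tx] is the LUT of tile (ty, tx); (fy, fx) in [0, 1] is the
    pixel's fractional position between the four tile centres.
    """
    top = (1 - fx) * luts[ty][tx][value] + fx * luts[ty][tx + 1][value]
    bottom = (1 - fx) * luts[ty + 1][tx][value] + fx * luts[ty + 1][tx + 1][value]
    return round((1 - fy) * top + fy * bottom)
```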

* simplified interpolation between the tiles

* number of tiles is now fixed and dependent on the width and height of the image

* moved LUT calculation into a separate method

* number of tiles is now part of the options and will be used with the sliding window approach also, so both methods are comparable

* removed no longer valid xml comment

* attempt fixing the borders

* refactoring to improve readability

* linear interpolation in the border tiles

* refactored processing the borders into separate methods

* fixing corner tiles

* fixed build errors

* fixing mistake during merge from upstream: setting test images to "update Resize reference output because of improved ResizeKernelMap calculations"

* using Parallel.ForEach for all inner tile calculations

* using Parallel.ForEach to calculate the lookup tables

* re-using a pre-allocated pixel row in GetPixelRow

* fixed issue with the border tiles, when tile width != tile height

* changed default value for ClipHistogram to false again

* alpha channel from the original image is now preserved

* added unit tests for adaptive histogram equalization

* Update External

* 2x faster adaptive tiled processor

* Remove double indexing and bounds checks

* Begin optimizing the global histogram

* Parallelize GlobalHistogramEqualizationProcessor

* Moving sliding window from left to right instead of from top to bottom

* The tile width and height are again dependent on the image width: image.Width / Tiles

* Removed keeping track of the maximum histogram position

* Updated reference image for sliding window AHE for moving the sliding window from left to right

* Removed unnecessary call to Span.Clear(), all values are overwritten anyway

* Revert "Moving sliding window from left to right instead of from top to bottom"

This reverts commit 8f19e5e.

# Conflicts:
#	src/ImageSharp/Processing/Processors/Normalization/AdaptiveHistEqualizationSWProcessor.cs

* Split GetPixelRow into two versions: one which mirrors the edges (only needed at the borders of the image) and one which does not

* Refactoring and cleanup sliding window processor

* Added an upper limit of 100 tiles

* Performance tweaks

* Update External