CSS4 toGamut fixes #352
Conversation
@svgeesus Do you have ideas on why I was seeing this not resolve to an in-gamut color with the check that (JND - DeltaEOK) < epsilon?
I haven't looked too closely at the implementation yet, but I would note one thing about using Oklab:

```js
> new Color('oklab', [1, 0, 0]).to('srgb')
{
"spaceId": "srgb",
"coords": [
0.9999999694343976,
1.000000008665371,
1.000000114856359
],
"alpha": 1
}
```

If using an Oklab value is desired, I would recommend using what color.js gives you when converting white to Oklab:

```js
> new Color('srgb', [1, 1, 1]).to('oklab')
{
"spaceId": "oklab",
"coords": [
0.9999999934735462,
8.095285553011422e-11,
3.727390762708893e-8
],
"alpha": 1
}
```

A 64-bit matrix can be calculated for Oklab which gives a really good value for white. This is what I do in my own library. It gives the same results as CSS up to 32-bit precision, but also has a much tighter transform for achromatic values in Oklab.

```py
>>> Color('oklab', [1, 0, 0]).convert('srgb')[:]
[1.0000000000000004, 0.9999999999999997, 0.9999999999999997, 1.0]
```

I'm not necessarily saying CSS needs to do this, but for those that are curious, this is how I went about it: https://github.com/facelessuser/coloraide/blob/main/tools/calc_oklab_matrices.py
What was desired was to specify a white without adding a dependency on a color space not already imported. Nothing specific to oklab. Any ideas?
You don't have to use sRGB, just the precalculated oklab value. Alternatively, you could use white as defined by XYZ D65. That is what I do, as it is the one color space that is assumed to always be imported: all conversions that require chromatic adaptation pass through it.
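As a concrete illustration of that XYZ-D65 route, here is a small sketch using the public color.js API. The four-figure D65 chromaticity values and the space ids follow what is used elsewhere in this thread; treat the exact calls as illustrative rather than as the merged implementation.

```js
import Color from "colorjs.io";

// Derive the D65 white point (Y = 1) from the CSS Color 4 four-figure chromaticity
const [x, y] = [0.3127, 0.3290];
const whiteXYZ = [x / y, 1, (1 - x - y) / y];   // ≈ [0.9504559, 1, 1.0890578]

const white = new Color("xyz-d65", whiteXYZ);

console.log(white.to("oklab").coords);     // ≈ [1, 0, 0], up to conversion error
console.log(white.to("prophoto").coords);  // exercises the D65 -> D50 chromatic adaptation
```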
Since the intention is "white/black in the destination color space", would it make sense to add static white and black coordinates to each color space, and then refer directly to those? This would work for …
This would be one way to approach it. You could definitely control and optimize the "best" white/black for each color space. Personally, I prefer a more generic approach; at least, that is what I did in my implementation. Using XYZ D65 and just applying chromatic adaptation is more than enough and gets you pretty darn close. Color.js unfortunately seems to be using very poor chromatic adaptation matrices and is introducing unnecessary errors. I plan to issue a pull to fix it. You can see here how a bit of error is introduced by the chromatic adaptation from D65 to D50:

```js
> new Color('white').to('prophoto')
Color {
space: <ref *1> RGBColorSpace {
id: 'prophoto',
name: 'ProPhoto',
base: RGBColorSpace {
id: 'prophoto-linear',
name: 'Linear ProPhoto',
base: [ColorSpace],
aliases: undefined,
fromBase: [Function (anonymous)],
toBase: [Function (anonymous)],
coords: [Object],
white: [Array],
formats: {},
referred: 'display',
path: [Array]
},
aliases: undefined,
fromBase: [Function: fromBase],
toBase: [Function: toBase],
coords: { r: [Object], g: [Object], b: [Object] },
white: [ 0.9642956764295677, 1, 0.8251046025104602 ],
formats: { color: [Object] },
referred: 'display',
path: [ [ColorSpace], [ColorSpace], [RGBColorSpace], [Circular *1] ]
},
coords: [ 0.9999999886663737, 1.0000000327777285, 0.9999999636791804 ],
alpha: 1
}
```

This is because they don't calculate the inverse of the CAT matrix. The stored values are doubles, but the inverse is only accurate to well below double precision:

```js
defineCAT({
id: "Bradford",
// Convert an array of XYZ values in the range 0.0 - 1.0
// to cone fundamentals
toCone_M: [
[ 0.8951000, 0.2664000, -0.1614000 ],
[ -0.7502000, 1.7135000, 0.0367000 ],
[ 0.0389000, -0.0685000, 1.0296000 ]
],
// and back
fromCone_M: [
[ 0.9869929, -0.1470543, 0.1599627 ],
[ 0.4323053, 0.5183603, 0.0492912 ],
[ -0.0085287, 0.0400428, 0.9684867 ]
]
});
```

They should be using:

```js
defineCAT({
id: "Bradford",
// Convert an array of XYZ values in the range 0.0 - 1.0
// to cone fundamentals
toCone_M: [
[ 0.8951000, 0.2664000, -0.1614000 ],
[ -0.7502000, 1.7135000, 0.0367000 ],
[ 0.0389000, -0.0685000, 1.0296000 ]
],
// and back
fromCone_M: [
[0.9869929054667123, -0.14705425642099013, 0.15996265166373122],
[0.43230526972339445, 0.5183602715367776, 0.049291228212855594],
[-0.008528664575177328, 0.04004282165408486, 0.96848669578755]
]
});
```

Further, they precalculated the D65 <--> D50 CAT here for speed, but it was precalculated with the less accurate matrices: https://github.com/LeaVerou/color.js/blob/main/src/adapt.js#L39

They should use:

```js
if (!env.M) {
if (env.W1 === WHITES.D65 && env.W2 === WHITES.D50) {
env.M = [
[ 1.0479297925449969, 0.022946870601609708, -0.05019226628920527 ],
[ 0.02962780877005588, 0.99043442675388, -0.017073799063418806 ],
[ -0.009243040646204504, 0.015055191490298138, 0.7518742814281371 ]
];
}
else if (env.W1 === WHITES.D50 && env.W2 === WHITES.D65) {
env.M = [
[ 0.947386632323667, 0.28196156725620036, -0.1708280666484637 ],
[ -0.7357288996314816, 1.6804471734451398, 0.035992069603406264 ],
[ 0.029218329379919382, -0.05145129980782719, 0.7733468362356041 ]
];
}
}
```

All of this improves the chromatic adaptation:

```js
> new Color('white').to('prophoto')
Color {
space: <ref *1> RGBColorSpace {
id: 'prophoto',
name: 'ProPhoto',
base: RGBColorSpace {
id: 'prophoto-linear',
name: 'Linear ProPhoto',
base: [ColorSpace],
aliases: undefined,
fromBase: [Function (anonymous)],
toBase: [Function (anonymous)],
coords: [Object],
white: [Array],
formats: {},
referred: 'display',
path: [Array]
},
aliases: undefined,
fromBase: [Function: fromBase],
toBase: [Function: toBase],
coords: { r: [Object], g: [Object], b: [Object] },
white: [ 0.9642956764295677, 1, 0.8251046025104602 ],
formats: { color: [Object] },
referred: 'display',
path: [ [ColorSpace], [ColorSpace], [RGBColorSpace], [Circular *1] ]
},
coords: [ 1.0000000000000002, 0.9999999999999999, 1.0000000000000002 ],
alpha: 1
}
```
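For those curious, the corrected inverse and the precomputed D65 <-> D50 matrices can be derived rather than hand-copied. The sketch below uses mathjs for the matrix math purely for illustration (color.js ships its own multiply helpers), so treat it as an outline rather than a patch:

```js
import { inv, multiply, diag, dotDivide } from "mathjs";

const toCone_M = [
  [ 0.8951,  0.2664, -0.1614 ],
  [ -0.7502, 1.7135,  0.0367 ],
  [ 0.0389, -0.0685,  1.0296 ]
];

// True double-precision inverse instead of rounded, hand-copied values
const fromCone_M = inv(toCone_M);

// White points as XYZ (Y = 1), from the four-figure CSS chromaticities
const D65 = [0.3127 / 0.3290, 1, (1 - 0.3127 - 0.3290) / 0.3290];
const D50 = [0.3457 / 0.3585, 1, (1 - 0.3457 - 0.3585) / 0.3585];

// Bradford adaptation matrix W1 -> W2: M⁻¹ · diag(cone(W2) / cone(W1)) · M
function catMatrix (W1, W2) {
  const scale = dotDivide(multiply(toCone_M, W2), multiply(toCone_M, W1));
  return multiply(fromCone_M, diag(scale), toCone_M);
}

const D65_to_D50 = catMatrix(D65, D50);   // should reproduce the D65 -> D50 matrix quoted above
const D50_to_D65 = catMatrix(D50, D65);
```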
The original gamut map code does not return, but falls through to the clip part. IIRC we did it like that to catch any final slightly-oog values.
That would be great. From memory, the forward and inverse matrices were just copied directly from Lindbloom. Another issue is that Lindbloom uses the ASTM E308-01 values for white points; color.js and CSS Color 4 did so at first too, but then settled on consistently using the published four-figure values for D50 and D65 (again to hunt down some small errors from using different values in different places).
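To make the two conventions concrete, here is a small sketch of the D50 definitions being contrasted; the figures are my own restatement of the published values, not taken from the color.js source:

```js
// ASTM E308-01 tabulation (what Lindbloom's matrices assume)
const D50_astm = [0.96422, 1.0, 0.82521];

// White derived from the four-figure chromaticity that CSS Color 4 settled on
const [x, y] = [0.3457, 0.3585];
const D50_css = [x / y, 1, (1 - x - y) / y];   // ≈ [0.9642957, 1, 0.8251046]

// The values differ around the fourth decimal place, which is enough to show up
// in round-tripped whites if the two conventions are mixed within one pipeline.
```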
Yep, I'll update all the CAT matrices with accurate inverse matrices over the weekend.
We are continuing to explore the out-of-gamut issue, and this comment documents our exploration so far. Essentially, what we're seeing is that when the delta is always less than the JND, but the difference between the JND and the delta stays greater than epsilon, chroma continues to project outwards until it gets within an epsilon of the original value. This is a rough graph of chroma progressing through the algorithm, with the red line as an approximate gamut boundary. What I would expect to see is the chroma going above the boundary and then bouncing back and forth as it gets closer. What we observed instead is that the delta can be very small and consistently less than the JND. One theory is that the magnitude of DeltaEOK is not as expected; multiplying it by 100 (or reducing the JND to 0.0002) did produce better results. We compared our algorithm to the spec, to the WIP exploration codepen, and to Coloraide, and didn't find any significant differences that would explain this behavior.
Keep in mind that the goal is not to exactly hit the cusp, but to find the closest acceptable color without reducing chroma too much. A small amount of alteration to lightness and hue is tolerable, which is why clipping is performed. We aren't trying to hit the 0.07 chroma cusp exactly; we only want to reduce the chroma until the color is close enough to clip. Over-reduction can cause other side effects: w3c/csswg-drafts#7135.
@jamesnw I think I found the difference? This is now returning the same value for `new Color("p3", [1, 0.77535, 0.85752]).toGamut({ space: 'srgb' })` as ColorAide and the codepen.

Aside from my comments about the inaccuracy of the current white being used, without testing the actual results, I think the logic looks good to me.
It looks like @jgerigmeyer's change fixes almost, but not all, cases: I found a very small number of cases where the algorithm does not return an in-gamut color.

**Notes on the failures**

I wrote a script to create colors in the `display-p3` space, with each channel from 0 to 1 in increments of 0.01, and then converted each to the sRGB gamut. Out of those 101^3 checks, [58 colors](https://codepen.io/jamessw/pen/QWYQrWB?editors=1000) still ended up out of gamut. Given a quick look, the failures seem to be fairly representative of the areas that are in gamut for p3 and out of gamut for sRGB.

I reran the test with smaller increments, with 201^3 checks. This had 483 ending up out of gamut, or 0.006% of cases, proportional to the 101^3 checks. It also appears that the algorithm is failing for small ranges, given …

Given this, it seems to be some mathematical/precision/conversion issue, and not something inherent about a color. I spot-checked the failures against Coloraide, and it appears we would get the same outputs if we applied a final clip.

**Comparison**

For instance, given `color(display-p3 0.99 0.79 0.07)`:

Coloraide: …

Color.js: …

Applying a clip to that would result in … However, I'm not certain that a clip of … Instead of clipping … I pushed that change in 1375f3e. With that change, no colors in the 101^3 checks are returning out of gamut.

@svgeesus Should this check be added to the spec, in addition to the check @jgerigmeyer added before the …? I think the spec change on this would be replacing step 15 with …

Alternatively, we could say that 50 out of a million colors being naively clipped is a tradeoff for an extra inGamut check on all colors, in which case I think the spec change would be changing line 15 to …
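For reference, a sweep of the kind described above can be reproduced with a few nested loops. The structure below is my own reconstruction (the original script isn't included in the thread), using the public color.js API as shown elsewhere in this PR:

```js
import Color from "colorjs.io";

const failures = [];

// display-p3 channels from 0 to 1 in increments of 0.01 (101^3 combinations)
for (let r = 0; r <= 100; r++) {
  for (let g = 0; g <= 100; g++) {
    for (let b = 0; b <= 100; b++) {
      const color = new Color("p3", [r / 100, g / 100, b / 100]);
      const mapped = color.toGamut({ space: "srgb" });

      if (!mapped.inGamut("srgb")) {
        failures.push(color.toString());
      }
    }
  }
}

console.log(`${failures.length} of ${101 ** 3} colors ended up out of gamut`);
```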
@jamesnw The clipped version is what you want to return. You are reducing chroma of the current color until it is just under the JND distance to the clipped version of the color. When this is satisfied, you can use the clipped version as the color difference won't be noticeable.

I would actually propose simplifying the code and just always return clipped. No extra gamut check is needed at the end. If the color is already in gamut, the clipped would equal the current color. There is no need to set current to clipped, just return clipped.
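To make the shape of that proposal concrete, here is a rough sketch of the chroma-reduction loop with an unconditional `return clipped` at the end. The helper names (`to`, `clone`, `clip`, `inGamut`, `deltaEOK`) mirror the excerpt from `src/toGamut.js` quoted later in this thread, but the exact signatures and the white/black handling are assumptions, not the merged implementation.

```js
const JND = 0.02;   // just-noticeable difference threshold for deltaEOK
const ε = 0.0001;   // binary-search resolution on chroma

// Sketch only: `to`, `clone`, `clip`, `inGamut` and `deltaEOK` stand in for
// color.js internals and are not exact signatures.
function toGamutCSS (origin, space) {
  const origin_OKLCH = to(origin, "oklch");

  // (Handling of Lightness >= 1 / <= 0, i.e. white and black in the
  // destination space, elided; see the white-point discussion above.)

  if (inGamut(origin_OKLCH, space)) {
    return to(origin_OKLCH, space);
  }

  let min = 0;
  let max = origin_OKLCH.coords[1];   // start from the original chroma
  let min_inGamut = true;
  const current = clone(origin_OKLCH);
  let clipped = clip(to(current, space));

  // Early return: if clipping the original color is already imperceptible,
  // there is nothing to search for.
  if (deltaEOK(clipped, current) < JND) {
    return clipped;
  }

  while (max - min > ε) {
    const chroma = (min + max) / 2;
    current.coords[1] = chroma;

    if (min_inGamut && inGamut(current, space)) {
      min = chroma;                 // candidate fits; try keeping more chroma
      continue;
    }

    clipped = clip(to(current, space));
    const E = deltaEOK(clipped, current);

    if (E < JND) {
      if (JND - E < ε) {
        return clipped;             // close enough to the JND boundary
      }
      min_inGamut = false;          // min no longer tracks an in-gamut chroma
      min = chroma;
    }
    else {
      max = chroma;                 // clipping would be noticeable; reduce chroma
    }
  }

  // Always return the clipped candidate rather than `current`, so the result
  // is guaranteed to land inside the destination gamut.
  return clipped;
}
```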
@svgeesus I'm a bit confused by the change in w3c/csswg-drafts@0674822. I think you may have added the early return in the wrong place, and then removed some needed steps that had become unreachable. The change we were looking to make in # 2 above was to insert after line 10: …

This is identical to the logic within the …
I updated the gamut tests in the new suite (I appreciate the update, btw!). I added some tests of the new algorithm, whose results I verified against Coloraide. You can see them here. I also reorganized it, as the new suite doesn't appear to support tests that are three layers deep. There were also some failing tests, which I fixed by updating the Expected to match the Actual; I think that was appropriate in this case. This also allowed me to increase the epsilon magnitude for a tighter check.
Would love to hear more about your experience with the testsuite since I'm piloting it on color.js but plan to release it more widely soon.
FYI, I opened a CSSWG Issue with some suggestions arising from questions in this PR.
This all looks good now!
It looks as if a rebase will be needed before this can be merged.

Done! Thanks!
src/toGamut.js (Outdated)

```js
while ((max - min) > ε) {
  const chroma = (min + max) / 2;
  current = clone(origin_OKLCH);
  current.coords[1] = chroma;
  if (min_inGamut && inGamut(current, space)) {
```
One thing that I missed: this should really be using an epsilon of zero. The earlier shortcut out should be based on an epsilon of zero as well (line 160).
I updated this in 907c8f3
The Gamut tests are passing on this branch; are there additional cases I should add for checking what you were looking at in #378?

Is this ready to merge?
Yes, I don’t think there are any outstanding questions.

Merged! Thanks everyone!
* main:
  - Add CAM16 (JMh) (color-js#379)
  - [spaces/prophoto] Use 64 bit-accurate conversion matrices
  - formatting
  - [spaces/hsl] Better handling of negative saturation on very oog colors
  - Point to the actual, working tests not the old ones
  - CSS4 toGamut fixes (color-js#352)
Once within the JND, the clipped color is returned before adjusting further: `clipped` instead of `current`.