
alphaTest with fixed mipmap level sample #20522

Open
Fyrestar opened this issue Oct 17, 2020 · 18 comments

@Fyrestar

Fyrestar commented Oct 17, 2020

When using alphaTest, the alpha value comes from whatever mipmap level of map the GPU has selected, but this causes objects to "dissolve" quickly: the leaves of trees, for example, disappear with distance until only blank trunks remain.

By sampling the map for the alphaTest with a fixed level, this problem disappears, or at least is reduced enough to prevent it from happening too early. Here are two tests, the first using the sample from map and the second using a fixed mipmap level.

Using diffuseColor.a from map:
[screenshot: alphaTest2]

Sampling again with a fixed mipmap level:
[screenshot: alphaTest1]

You can see that in the first there is barely more than trunks left, while the trees remain filled in the second; it generally preserves finer details that tend to disappear quickly. It could also be implemented as an option, since it adds another texture fetch. If MSAA is available the issue is reduced a lot as well, but otherwise there is dissolving vegetation, hair and the like, which can get really ugly.
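The effect is easy to reproduce outside the shader. A minimal sketch (plain Python, not three.js code) of how standard 2x2 box-filtered mipmap generation erodes alpha-test coverage for a sparse mask:

```python
# Simulates how box-filtered mipmap generation erodes alpha-test coverage:
# a sparse binary alpha mask (thin leaves) averaged down falls below the
# test threshold and gets discarded entirely at coarser levels.

def downsample(mask):
    # 2x2 box filter, as used for default mipmap generation
    half = len(mask) // 2
    return [[(mask[2*y][2*x] + mask[2*y][2*x+1] +
              mask[2*y+1][2*x] + mask[2*y+1][2*x+1]) / 4.0
             for x in range(half)] for y in range(half)]

def coverage(mask, threshold=0.5):
    # fraction of texels surviving `if ( alpha < threshold ) discard;`
    texels = [a for row in mask for a in row]
    return sum(1 for a in texels if a >= threshold) / len(texels)

size = 8
# one opaque texel per 2x2 block, like thin foliage: 25% coverage at mip 0
base = [[1.0 if x % 2 == 0 and y % 2 == 0 else 0.0
         for x in range(size)] for y in range(size)]

print(coverage(base))             # 0.25
print(coverage(downsample(base))) # 0.0 -- every texel averages to 0.25
```

Every opaque texel averages with three transparent neighbors, so one mip level down nothing passes the 0.5 test anymore: the "blank trunks" effect.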

I've been using textureLod here, but bias with texture2D seems to look the same. I've temporarily implemented it like this:

#ifdef ALPHATEST

	#if defined( USE_MAP ) && __VERSION__ == 300
		// re-sample the map at the top mip level for the alpha test only
		float alphaTest = textureLod( map, vUv, 0.0 ).a;
		if ( alphaTest < ALPHATEST ) discard;
	#else
		if ( diffuseColor.a < ALPHATEST ) discard;
	#endif

#endif

Edit: I changed it to texture with the bias as a parameter. It works very well with less detailed textures, but can turn harshly pixelated close up with very finely detailed textures like hair. This is something that needs to be fine-tuned depending on the texture detail / mesh size, but for most cases a general value works well.

@gkjohnson
Collaborator

This blog post provides a pretty good survey of the problem in the second half and provides a couple of options for addressing it. It seems that Unity takes this into account when generating mipmaps, so the apparent volume of the alpha-tested texture remains consistent as the mipmap level changes.

https://medium.com/@bgolus/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f
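The offline fix described there (used by The Witness and by Unity's mipmap generation) can be sketched like this: after building each mip, rescale its alpha so the fraction of texels passing the alpha test matches the base level's coverage. A simplified 1-D Python sketch; the binary search and the sample values are illustrative, not the engines' actual code:

```python
def coverage(alphas, threshold):
    # fraction of samples that would pass the alpha test
    return sum(1 for a in alphas if a >= threshold) / len(alphas)

def rescale_alpha(alphas, threshold, target_coverage, steps=32):
    # binary-search the smallest alpha scale whose coverage reaches the
    # base level's coverage; `hi` always satisfies the target
    lo, hi = 0.0, 4.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        scaled = [min(1.0, a * mid) for a in alphas]
        if coverage(scaled, threshold) < target_coverage:
            lo = mid
        else:
            hi = mid
    return [min(1.0, a * hi) for a in alphas]

base = [1.0, 0.0, 0.0, 0.0] * 4                 # mip 0: 25% coverage
mip  = [0.4, 0.3, 0.2, 0.1] * 2                 # filtered: nothing passes 0.5
fixed = rescale_alpha(mip, 0.5, coverage(base, 0.5))
print(coverage(mip, 0.5), coverage(fixed, 0.5)) # 0.0 0.25
```

Because the scale is baked into the mip chain offline, the shader's plain `if ( alpha < threshold ) discard;` needs no changes at all.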

@Fyrestar
Author

If MSAA is available this is also much less of a problem, though if you can't use it, as with a multi-render-target setup, it's really bad. If it's available, then with gl.enable( gl.SAMPLE_ALPHA_TO_COVERAGE ) we also get nicely antialiased results, but there are many scenarios where you can't use it or it might not be available.

Thanks, I'll take a look 👍

@mrdoob
Owner

mrdoob commented Oct 19, 2020

I have not read the article but...

Wouldn't splitting the color/alpha into map and alphaMap work?

diffuseColor.a *= texture2D( alphaMap, vUv ).g;

if ( diffuseColor.a < ALPHATEST ) discard;

(And disabling mipmapping in alphaMap)
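In three.js terms that suggestion would look roughly like this (a sketch only; the texture file names are hypothetical, the property names are the standard three.js texture/material API):

```javascript
// Color in `map` (mipmapped as usual), coverage in `alphaMap` with
// mipmapping disabled so the alpha test always sees the full-res mask.
const alphaMap = new THREE.TextureLoader().load( 'leaves_alpha.png' );
alphaMap.generateMipmaps = false;
alphaMap.minFilter = THREE.LinearFilter; // required when mipmaps are off

const material = new THREE.MeshStandardMaterial( {
	map: new THREE.TextureLoader().load( 'leaves_color.png' ),
	alphaMap: alphaMap,
	alphaTest: 0.5
} );
```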

@Fyrestar
Author

Fyrestar commented Oct 20, 2020

The issue with using no mipmaps is that you get heavily aliased results with finely detailed textures such as hair, like I mentioned in the edit. What seems to work best is a mipmap offset via the bias parameter, so mipmapping or some kind of SDF volume is still required to let overly fine details dissolve in a controlled manner; it really depends on the asset/texture.

In the case of hair, with a rather small bias offset it still looks much better than just the current mipmap level, but also not over-detailed like a lot of thin triangles causing heavy aliasing.

Using an alphaMap could work similarly to mipmaps if it were blurred; instead of a bias offset it could use derivatives to determine the threshold value for the alpha test. Not sure how well it would work or look, but it would be more complicated than using RGBA textures.

Thinking about that now, it could even look better with some kind of SDF alphaMap, as instead of dissolving/shrinking it could expand the volume, e.g. for hair or vegetation. Finer details would persist while only transparent areas would become opaque.

However, this also means the diffuse texture needs to account for this, which is normally the case; but if you created a texture of a branch with leaves, using alpha for empty areas, it would expand into the unpainted background color.

@mrdoob
Owner

mrdoob commented Oct 20, 2020

Hmm... What are the next steps here then? 🤔

@Fyrestar
Author

Fyrestar commented Oct 21, 2020

My suggestion would be a mipmap level offset, as it gives individual control when needed and would be simple to add and use. Something like an alphaBias besides alphaTest:

#ifdef ALPHATEST

	#if defined( USE_MAP ) && defined( ALPHABIAS )
		// negative bias samples a finer mip level for the alpha test
		if ( texture2D( map, vUv, - float( ALPHABIAS ) ).a < ALPHATEST ) discard;
	#else
		if ( diffuseColor.a < ALPHATEST ) discard;
	#endif

#endif

I like the idea of just using an alphaMap with a blurred mask that adapts the tolerance and can expand, as thin bush textures will dissolve at some relatively early point even with their highest LOD; expanding their threshold so they form a thicker combined volume will surely fix this. I'll test later how well it works with different textures. However, it would be more complicated to use/maintain, as the proper mask has to be generated (not on-the-fly); an RGBA texture is just more intuitive to create and maintain, and supported by all formats. With alphaBias, existing projects could easily make use of it.

In case the alphaMap idea works really well, it could be just another option using the same alphaBias: basically an alternative that performs the threshold test when ALPHABIAS is defined as well as USE_ALPHAMAP.

@gkjohnson
Collaborator

@Fyrestar is setting the mipmap bias different from just creating a texture that is two sizes smaller (rather than setting the bias to 2) and setting it as the alphaMap?

However it would be more complicated to use/maintain, as the proper mask has to be generated (not on-the-fly); an RGBA texture is just more intuitive to create and maintain, and supported by all formats. With alphaBias, existing projects could easily make use of it.

It's true that it is more complicated. I could see a utility in the examples or something that could be used to generate the custom mipmaps for this on the fly. I took a look at the article again and it looks like one of the links has been taken down; here is a Wayback Machine link to the article on The Witness, which is referenced in the original post but has since been taken down, and which has some code references too, in case anyone is interested.

One of the other solutions mentioned is to scale the alpha test threshold in the shader based on the mip level being sampled. I'm not sure if this can be done with a built-in shader function, but it seems the mip level can be computed. This might make it look better automatically?
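The mip level can indeed be estimated from the screen-space derivatives of the texel-space UV; the linked article then scales the alpha by that level so coverage holds up. A Python transcription of that idea (the 0.25 scale and the structure follow the article's HLSL sketch; in GLSL the derivatives would come from dFdx/dFdy):

```python
import math

def calc_mip_level(dx, dy):
    # dx, dy: screen-space derivatives of uv * textureSize (texel units),
    # i.e. what dFdx/dFdy of the scaled UV would return in a shader
    delta_max_sqr = max(dx[0]**2 + dx[1]**2, dy[0]**2 + dy[1]**2)
    return max(0.0, 0.5 * math.log2(delta_max_sqr))

def boosted_alpha(alpha, mip_level, mip_scale=0.25):
    # raise alpha as the mip level rises, counteracting coverage loss
    return min(1.0, alpha * (1.0 + mip_level * mip_scale))

# one screen pixel spans 2 texels horizontally -> mip level 1
print(calc_mip_level((2.0, 0.0), (0.0, 1.0)))  # 1.0
print(boosted_alpha(0.4, 4.0))                 # 0.8 -- passes a 0.5 test now
```

The appeal is that it adapts per fragment with no extra texture fetch, only a couple of ALU operations.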

@Fyrestar
Author

Fyrestar commented Oct 22, 2020

is setting the mipmap bias different from just creating a texture that is two sizes smaller (rather than setting the bias to 2) and setting it as the alphaMap?

The bias is relative to the mipmap level that would be picked without providing a bias; to pick a higher resolution level it needs to be negative. One map alone unfortunately won't do it, as the alpha mask is basically like geometry with alpha test, where we should avoid having too many narrow details that go subpixel and leave many pixel artifacts behind.
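The sign convention can be sketched in a few lines (plain Python mirroring the GLSL bias semantics; the clamp to `max_lod` is an assumption about sampler state):

```python
def select_lod(implicit_lod, bias, max_lod):
    # texture2D( map, uv, bias ) samples at the implicitly computed LOD
    # plus the bias; a negative bias therefore selects a finer
    # (higher resolution) mip level, clamped to the available range
    return min(max(implicit_lod + bias, 0.0), float(max_lod))

print(select_lod(3.0, -2.0, 8))  # 1.0 -- two levels finer than implicit
print(select_lod(3.0,  2.0, 8))  # 5.0 -- two levels coarser
```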

One of the other solutions mentioned is to scale the alpha test threshold in the shader based on the mip level being sampled. I'm not sure if this can be done with a built-in shader function, but it seems the mip level can be computed. This might make it look better automatically?

I unfortunately couldn't test it yesterday, I will try it today. It should be a more optimal solution for bushes, hair and such, but it might not be desired in some cases where "sticking out" details are supposed to disappear, as this solution is mainly meant to shape a merged volume and avoid fragments becoming disconnected. With the suggestion above, both could be used without interfering with other features.

@gkjohnson
Collaborator

The bias is relative to the mipmap level that would be picked without providing a bias; to pick a higher resolution level it needs to be negative

Right, for some reason I thought you'd be providing a larger bias to select a coarser mipmap, in which case just making a smaller map should be the same. A negative bias makes more sense after reading through the initial post again, though. My mistake!

@mrdoob mrdoob added this to the rXXX milestone Oct 23, 2020
@Mugen87
Collaborator

Mugen87 commented Jan 17, 2021

There is actually an existing issue focused on the same topic: #14091

I've read the linked paper, and the presented approach (Alpha Distribution) can be implemented similarly to PMREMGenerator, meaning a component that pre-processes a texture before it is actually used. The advantage of this approach is that no modifications to the GLSL code are necessary, so it avoids any sort of additional shader overhead but still provides an improved alpha testing result. Definitely better than using a fixed mipmap level sample (the original proposal of this issue), which is highlighted in the paper.
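Without the paper at hand, the general idea of redistributing alpha offline can be illustrated with a classic error-diffusion pass: quantize each texel's alpha to 0/1 and push the error onto its unvisited neighbors, so the average alpha, and with it the alpha-test coverage, is preserved. A rough stdlib-Python illustration of that family of techniques, not the paper's actual implementation:

```python
def distribute_alpha(alpha, threshold=0.5):
    # Floyd-Steinberg error diffusion of an alpha channel to 0/1 values:
    # quantization error spreads to neighboring texels, so the average
    # alpha (and thus alpha-test coverage) is preserved overall.
    h, w = len(alpha), len(alpha[0])
    a = [row[:] for row in alpha]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 1.0 if a[y][x] >= threshold else 0.0
            err = a[y][x] - new
            out[y][x] = new
            if x + 1 < w:
                a[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    a[y + 1][x - 1] += err * 3 / 16
                a[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    a[y + 1][x + 1] += err * 1 / 16
    return out

src = [[0.25] * 16 for _ in range(16)]  # uniform 25% alpha
dst = distribute_alpha(src)
avg = sum(map(sum, dst)) / 256
# every texel is now 0 or 1, yet avg stays close to 0.25
```

This kind of pass produces the dithering pattern discussed below; the point of sketching it is only to show why a pre-process can keep coverage stable without touching the shader.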

@Mugen87
Collaborator

Mugen87 commented Jan 17, 2021

@mrdoob I suggest marking this issue as a duplicate and closing it in favor of #14091.

@Fyrestar
Author

Doesn't this result in heavy dithering? I would still suggest considering adding both options, as the one I described above works really well without strong dithering. Other than that, this pre-processing step could be done only in the development stage, so it doesn't work out of the box.

@Mugen87
Collaborator

Mugen87 commented Jan 18, 2021

Doesn't this result in heavy dithering?

The paper presents two approaches for Alpha Distribution. The second one (Alpha Pyramid) avoids the dithering pattern produced by the first approach.

this pre-processing step could be done only in the development stage, so it doesn't work out of the box.

Not sure what you mean here. With Alpha Distribution, you normally load the texture and just pre-process it before uploading it to the GPU. This approach is not restricted to the development stage.

Besides, your presented approach does not seem to handle semi-transparent regions, which Alpha Distribution does.

@Fyrestar
Author

Not sure what you mean here. With Alpha Distribution, you normally load the texture and just pre-process it before uploading it to the GPU. This approach is not restricted to the development stage.

Yes, I mean textures optimally shouldn't need any further pre-processing step, as this gets costly with many textures and interferes with compression and other formats; it is already a performance hit when textures get resized because they aren't provided in power-of-two dimensions. One would also need to explicitly declare which textures even use alpha, to avoid processing opaque textures.

Besides, your presented approach does not seem to handle semi-transparent regions, which Alpha Distribution does.

Yes, this is only for alpha testing, avoiding the lower mipmaps' alpha masks, but it only requires adding 2-3 short lines with a compiler condition. Like I said, I would suggest offering both as optional features for different use cases/requirements, if possible.

@Mugen87
Collaborator

Mugen87 commented Jan 18, 2021

but it only requires adding 2-3 short lines with a compiler condition.

I just want to avoid more complexity in the shaders. An Alpha Distribution implementation similar to PMREMGenerator could be used by WebGPURenderer out of the box.

Besides, I clearly favor a single overhead at the beginning of the rendering process over a permanent overhead in the fragment shader.

@Fyrestar
Author

than having a permanent overhead in the fragment shader

Yes, of course. Like the alpha expansion I mentioned above, preparing the alpha mask is also the only way to prevent dissolving when even the original mask density alone isn't enough to cover pixels on screen properly, like hair or a very sparsely detailed bush texture. However, it would be great if this processing step were optional, for files being served with the process already applied.

I clearly favor a single overhead at the beginning of the rendering process

The issue I see here is that many apps and games load resources on the fly, not everything initially.

I still need to take a look at the paper, it won't load from here. It would be interesting to see and compare the results; like I mentioned before, there are cases where a controlled, relative dissolve of the mask looks better, so that too-fine details in the distance don't cause noise. For example, the needle tips of a pine branch texture can dissolve reasonably while the denser parts don't; this sort of LOD prevents minimal details popping up here and there that cause a pixel soup without MSAA or SSAA.

I'm really curious how this works without causing pixelated noise with no MSAA, especially in semi-transparent areas. In the screenshots and video it looks quite pixelated, and I'm not sure if MSAA is used. It should also look acceptable without MSAA and with post-processing.

@mrdoob
Owner

mrdoob commented Feb 6, 2021

@Fyrestar How do you think the API should accommodate this?

@Fyrestar
Author

Fyrestar commented Feb 12, 2021

An easy-to-implement option would be the alpha bias; for most relatively dense alpha masks this works very well out of the box. It requires a separate fetch on the diffuse map, which isn't costly (and is cheaper than an extra alpha mask texture + fetch), but could be avoided with a modified alpha mask. As for the alpha distribution, I think a lot of users only use the build of THREE without working with Node.js, and an on-the-fly generator isn't very ideal either: it makes some formats and/or compression useless (like Basis textures), causes longer loading times and increases memory use from the locally processed textures. But using it externally to prepare the assets, for example alongside converting to Basis, would be the most ideal way. So if this method looks visually better even with semi-transparent areas, then this, for more advanced usage with external processing, would surely be most ideal.

For that reason I would consider adding both what Michael suggested and possibly the alpha bias. The other thing I mentioned before, growing the alpha mask with a blur, works great too for fixing less dense masks, but it also requires expanding the color information at the edges a lot (which the alpha-distribution generator would have to take care of too), and most programs don't write color where alpha is zero; the expansion has to extend the edge colors, similar to what Solidify for Photoshop does. This is by default a process done for assets to avoid mipmaps bleeding into the default white or black background color.

For the alpha bias I would do this with a constant, like the code example above, that can be different for some assets. Like I said, for some assets it helps to gradually dissolve thin details that would otherwise only cause too much pixel-soup flickering.
