Texture encoding example code, and thoughts on a correct implementation #6593
Yep! I was going to suggest the same thing 😊
Should this approach have a fall-back for devices that do not support reading half float textures or rendering to half float targets? According to http://webglstats.com/, reading half float textures (…)
Yeah, the fallback should be rendering to a float texture, but again that requires an extension. A further fallback is doing what I currently do in Clara.io: live decoding. But I am sure that is slow (because of a switch statement for each texture access, though I guess one could use a define instead of a uniform), and it has all the problems I outlined above (lack of linear filtering, etc.)
I submitted a pull request last year for supporting HDR packing formats #5687. I ended up doing the live decoding of the RGBE (or RGBM, RGBD, LogLuv) textures in the shader, with custom mipmap generation for my RGBE maps. This allowed mipmapping but not linear filtering between mips. It also only reduced the texture size by half compared to 16-bit FP targets, so it probably wasn't worth the hassle. Ultimately, I agree that unpacking into a 16-bit FP texture at load time, rather than live every frame, sounds like the way to go.

I was initially concerned about not being able to use DXT compression for these potentially very large textures but, from my experience last year, the lossy compression of DXT screws up RGBE (along the seams between different exponents). RGBM and RGBD work much better with DXT compression and with linear filtering (though not technically correct), but they don't look nearly as nice as RGBE when used for HDR textures that make use of a large range of intensities. They could be an option for a fallback on systems that don't support FP targets though, i.e. on those platforms, during the unpack step, unpack from RGBE into an 8-bit-per-channel RGBD texture to be live decoded in the shader.
This is pretty minor, but the first two constants refer to colour space while the second two are packing formats. I know it works since HDR textures are always linear (as far as I know), but it doesn't quite feel right to me. What if there were a packing format that was sRGB when unpacked? Hmm, well maybe that's not an issue since the unpacking could also convert to linear... I guess I don't have a really good reason. Being able to explicitly specify whether to do the unpack to a FP texture would be nice as well, since some packing formats might work nicely with DXT (RGBM, RGBD, various normal map packing formats, etc.) and I'd rather keep these small in memory if I can.
Nice comment. What is RGBD? BTW I view sRGB as a packing format. :)
RGBD is similar to RGBM except that the alpha channel stores the inverse of the scale, so that you don't have to define a max value. http://vemberaudio.se/graphics/RGBdiv8.pdf Oh, and to clarify the "normal map packing formats" for anyone who may be wondering... It was common in DX9-era games (last-gen consoles) to pack a normal map into a DXT5 texture as (r=?, g=y, b=z, a=x) because the alpha channel of DXT5 has better precision than the other channels. Another format that can be used is two 16-bit channels for x and y (deriving z in the shader). It would be great if we had the flexibility of defining these other formats easily and have some of them unpack to FP textures and some unpack in the shader, depending on user preference, on the format, or on device capabilities.
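For concreteness, here is a rough GLSL sketch of the decodes mentioned above (conventions assumed; this is not code from the thread, and the helper names are hypothetical):

```glsl
// RGBM: alpha is a multiplier against a fixed maximum range chosen at encode time.
vec3 RGBMToLinear( vec4 value, float maxRange ) {
	return value.rgb * value.a * maxRange;
}

// RGBD: alpha stores the reciprocal of the scale, so the decode divides by it
// instead of multiplying by a pre-agreed maximum.
vec3 RGBDToLinear( vec4 value ) {
	return value.rgb / value.a;
}

// A common DXT5-style normal map swizzle: x in alpha, y in green,
// z reconstructed in the shader.
vec3 unpackDXT5Normal( vec4 value ) {
	vec2 xy = value.ag * 2.0 - 1.0;
	return vec3( xy, sqrt( max( 0.0, 1.0 - dot( xy, xy ) ) ) );
}
```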
Nice, I like RGBD. Thanks for sharing that. :) Compressed textures are a pain across platforms as none have good support. DXT5 would be great for RGBE/RGBD/RGBM but alas it is PC only.
Yeah, we (verold.com) generate multiple versions of a texture when one is uploaded and then select the appropriate one based on the user's device and the preferences set. So, on desktops, you'd get DXT and, on mobile, in theory, you'd get PVRTC, ATC, etc. We haven't implemented those though. I'm waiting for ASTC support but that's unlikely before WebGL 2.0 since it's part of ES 3.0. But that would be something with hardware support across a wide array of devices.
Very cool. ASTC looks amazing. It looks like ASTC isn't part of OpenGL ES 3.0 though; it is technically an extension. Thus I understand it is likely a bit further away than just WebGL 2.0. See discussion here: https://www.khronos.org/webgl/public-mailing-list/archives/1401/msg00000.html Probably an early extension though.... @MiiBond What utilities do you use for texture conversion? I haven't found any command line tools that run on Linux that support DXT5.
@bhouston We compiled Nvidia texture tools for Linux and that's what we currently run on our processing servers. However, it's pretty slow unless you run with a GPU and Cuda acceleration enabled. If you use the -fast flag, it gets just as fast as running with Cuda but the resulting quality is pretty bad. We want to start looking at some of the tools mentioned on this site but haven't had time yet: http://gamma.cs.unc.edu/FasTC/ It looks like there are better options out there. If you try out one of those tools, please let me know how you find it.
Here is a summary of the current r.72dev implementation. If […]

No other colors or textures are linearized. If […] If […] We need more flexibility for the future, and I believe the following changes are required: […]
I know there are implementation issues. We need to decide on something... Comments welcome.
Hi all, I've done a PR that implements what is outlined here: #8117
@WestLangley asked for a code example of the decodeTexel pattern that I described here.
The constants I added:
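As a rough sketch only (the names and numeric values below are assumptions modeled on the encoding constants three.js later shipped, not the exact code from the PR):

```js
// Hypothetical sketch of the added encoding constants.
THREE.LinearEncoding  = 3000; // no decode needed
THREE.sRGBEncoding    = 3001; // sRGB transfer function
THREE.RGBEEncoding    = 3002; // shared-exponent HDR (Radiance .hdr style)
THREE.LogLuvEncoding  = 3003; // log-luminance packing
THREE.RGBM7Encoding   = 3004; // RGB times shared multiplier, max range 7
THREE.RGBM16Encoding  = 3005; // RGB times shared multiplier, max range 16
THREE.RGBDEncoding    = 3006; // RGB with divisor stored in alpha
THREE.GammaEncoding   = 3007; // simple gamma curve

// A texture would then declare how it is encoded, e.g.:
// texture.encoding = THREE.RGBEEncoding;
```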
The GLSL implementation:
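A minimal sketch of that pattern, assuming a compile-time define selects the encoding (the define names and helpers here are hypothetical, not the exact shader chunk from the PR):

```glsl
// Decode a raw texel into linear-light values based on a compile-time define.
// Using a #define instead of a uniform lets the compiler strip unused branches.

vec4 sRGBToLinear( in vec4 value ) {
	return vec4( pow( value.rgb, vec3( 2.2 ) ), value.a ); // cheap gamma approximation
}

vec4 RGBEToLinear( in vec4 value ) {
	// Shared exponent stored in alpha.
	return vec4( value.rgb * exp2( value.a * 255.0 - 128.0 ), 1.0 );
}

vec4 texelDecode( in vec4 value ) {
#if defined( TEXTURE_ENCODING_SRGB )
	return sRGBToLinear( value );
#elif defined( TEXTURE_ENCODING_RGBE )
	return RGBEToLinear( value );
#elif defined( TEXTURE_ENCODING_RGBM16 )
	return vec4( value.rgb * value.a * 16.0, 1.0 );
#else
	return value; // linear: no decode needed
#endif
}

// Every sample of an encoded map then goes through the helper:
// vec4 texel = texelDecode( texture2D( map, vUv ) );
```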
Big Caveat
Now, even though this is currently what I am using in https://Clara.io, I have found it to be very problematic. Live texture decoding of values that are not monotonic in their encoded state (such as RGBE) means that built-in texture filtering and interpolation cannot be used, which leads to low quality.
The solution to this is something similar to what @mrdoob suggested a while ago: decode the textures in a separate step first. mrdoob suggested that we do it in JavaScript, but there are issues with that, because a direct 32-bit RGBA to 32-bit RGBA sRGB decode loses precision (about a 20% precision loss if you do the math), and it would be hugely wasteful for floating point results because we need a float16 result but JavaScript doesn't have support for that type (and neither does the CPU).
I would instead suggest that we do the decode in WebGL as a texture-load setup step, usually from an input 8-bit-per-channel RGBA texture into a float16 RGBA texture. Then one can use filtering and linear interpolation on HDR textures properly, and it will simplify the run-time because there will be no costly texelDecodes on each texel access.
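A minimal sketch of that load-time decode, assuming three.js of that era (the variable names, the RGBE input, and the surrounding scene/camera setup are illustrative assumptions, not an actual implementation):

```js
// Render the encoded 8-bit texture through a decode shader into a half-float
// render target once, so run-time shaders sample a plain linear HDR texture.
// Requires the OES_texture_half_float extension (and the _linear variant for filtering).

var decodeMaterial = new THREE.ShaderMaterial( {
	uniforms: { tEncoded: { type: 't', value: encodedTexture } }, // encodedTexture: loaded RGBE map
	vertexShader: [
		'varying vec2 vUv;',
		'void main() {',
		'	vUv = uv;',
		'	gl_Position = vec4( position.xy, 0.0, 1.0 );', // full-screen quad in NDC
		'}'
	].join( '\n' ),
	fragmentShader: [
		'uniform sampler2D tEncoded;',
		'varying vec2 vUv;',
		'void main() {',
		'	vec4 rgbe = texture2D( tEncoded, vUv );',
		'	gl_FragColor = vec4( rgbe.rgb * exp2( rgbe.a * 255.0 - 128.0 ), 1.0 );', // RGBE decode
		'}'
	].join( '\n' )
} );

var decodedTarget = new THREE.WebGLRenderTarget( width, height, {
	type: THREE.HalfFloatType,
	minFilter: THREE.LinearFilter,
	magFilter: THREE.LinearFilter
} );

// quadScene holds a full-screen quad using decodeMaterial; camera is an orthographic camera.
renderer.render( quadScene, camera, decodedTarget );

// The render target (or its .texture in newer versions) can now be used
// anywhere a linear HDR texture is expected.
```

With that in place, the regular material shaders never need a texelDecode branch; they just sample the half-float texture with ordinary linear filtering and mipmapping.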