Sprite file supports storing 16/32-bit sprites as indexed images with palette #1461
Conversation
There's a separate case when we read the data from the input stream only to write it into another output stream. For that case we want to parse/convert the data as little as possible and deal with pure byte arrays. It's not convenient to have the same Write* function handle both writing a byte array and writing pixel data, which may be of variable bitness.
This lets us pass any arbitrary pixel buffer there, not necessarily from a Bitmap object.
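A rough sketch of what that split might look like (function names and signatures here are illustrative assumptions, not the actual AGS API):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative pass-through path: bytes read from one spritefile are
// written into another verbatim, with no parsing or pixel conversion.
void WriteRawSpriteData(FILE *out, const std::vector<uint8_t> &bytes)
{
    fwrite(bytes.data(), 1, bytes.size(), out);
}

// Illustrative pixel-aware path: takes any raw pixel buffer (not
// necessarily from a Bitmap), and may convert depending on bitness.
void WriteSpritePixels(FILE *out, const uint8_t *pixels,
                       size_t pixel_count, int bytes_per_pixel)
{
    // conversions (e.g. palettizing) would happen here before writing
    fwrite(pixels, bytes_per_pixel, pixel_count, out);
}
```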
force-pushed from 6d0646e to 0244deb
i personally have never seen rgb sprites that used alpha values other than 0xff or 0x00 (i.e. either transparent or not transparent). did you encounter cases that use varying alpha levels? if not, it might be preferable to just allocate the first palette slot as a "transparent color" entry and use index 0 for all transparent pixels.
Yes, they are common in games that have antialiased sprites with an alpha channel: the character/object edges will consist of pixels with gradually changing alpha values. My "cameras tech demo" may be seen as an example: it is a 1280x720 game with a number of hi-res antialiased sprites, and this second palette method works where the first fails. But, to be fair, this is a borderline case, so I'm still in doubt whether to add this right away. It may be more desirable to have a proper compression method, like LZW or deflate.
force-pushed from 0244deb to 879a167
Alright, I decided to leave this alternate method out of this PR, as it's not clear how often it will be useful. Additionally, I changed the meaning of the "storage flags" byte to also contain the compression type, in case we add a separate compression option per sprite. The low 4 bits are meant for a compression type id (which allows values 0-15) and the high 4 bits are for flags. The "indexed bitmap" flag is now 0x80.
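For illustration, packing and reading that byte might look like this (the helper names are assumptions; only the 0x80 flag value and the 4/4 bit split come from the comment above):

```cpp
#include <cstdint>

// Hypothetical helpers for the described byte layout:
// low 4 bits = compression type id (0-15), high 4 bits = flags.
const uint8_t kSprStoreFlag_Indexed = 0x80; // "indexed bitmap" flag

inline uint8_t PackStorageByte(uint8_t compression, uint8_t flags)
{
    return (flags & 0xF0) | (compression & 0x0F);
}

inline uint8_t GetCompressionType(uint8_t storage_byte)
{
    return storage_byte & 0x0F;
}

inline bool IsIndexedBitmap(uint8_t storage_byte)
{
    return (storage_byte & kSprStoreFlag_Indexed) != 0;
}
```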
should this mean "0x01-0x0f - used compression; this will be replaced with compression type of same value later;"? we might use this opportunity to add support for 24 bpp images (i.e. all 32-bit images could automatically be converted to it if they use an identical alpha channel value for all pixels) and 15 bpp + transparency bit, i.e. RGB555. if so, it would probably be preferable if the "bytes per pixel" field were changed to mean "bits per pixel" and stored as a short, and then 2 bytes used for the flags field too // edit: or alternatively using a 1-byte "format-identifier enum".
As I understand this, we need the following data:
- the original image format (so the sprite can be restored on load);
- the storage format (how it's actually stored in the file);
- the compression type.
If the first two may be merged - that is, if it will always be possible to deduce how to convert the image back to the original - then they may be one byte enum. If not, or if this is not certain, then they should be stored in two separate bytes, the second being an enum. edit: Also to clarify, I may add prerequisites for supporting other storage formats, but I won't add any more of these within this PR.
the enum might also look like this:
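(an illustrative sketch: the names and values below are assumptions, not the original snippet)

```cpp
// Illustrative only: one enum covering all three properties at once.
enum SpriteFormat
{
    kSprFmt_Raw8            = 1, // 8-bit, global palette
    kSprFmt_Raw565          = 2, // 16-bit RGB565, as-is
    kSprFmt_Raw888          = 3, // 24-bit RGB, as-is
    kSprFmt_Raw8888         = 4, // 32-bit ARGB, as-is
    kSprFmt_Indexed565      = 5, // 8-bit indices + RGB565 palette
    kSprFmt_Indexed8888     = 6, // 8-bit indices + ARGB palette
    kSprFmt_Indexed565_RLE  = 7, // same, RLE-compressed
    kSprFmt_Indexed8888_RLE = 8, // same, RLE-compressed
};
```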
i.e. combining all 3 different properties you listed into a single enum. i'm just pointing out a possibility, not suggesting that this is better.
yeah, i understand. i'm just raising this so we make the new format future-proof, and don't have to introduce yet another format in a couple of months when e.g. 24-bit support is eventually added.
force-pushed from 879a167 to 1f351b9
So, here's an alternate variant:
The "storage format" can currently be: I went with separate "original color depth / format" and "storage format", for two reasons:
Both of these parameters may actually be made the same enum. In that case values 1-4 may be kept for compatibility (as 8-bit using the global palette, rgb565, rgb 24-bit and argb 32-bit respectively). This is also why I made the "indexed" storage format value = 32 (to keep some gap between values). As this is an enum, the actual number should not matter. The compression type is now stored as a separate byte; its value is currently only 0 or 1 (meaning RLE). Compression is separate because, if there are more compression types, the number of combinations will multiply and become quite inconvenient to keep in one enum.
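A minimal sketch of this variant, with assumed identifier names (the values 0, 32 and the RLE = 1 compression id come from the comment above):

```cpp
// Separate "storage format" enum, with a gap left before 32
// so values 1-4 can stay reserved for compatibility.
enum SpriteStorage
{
    kSprStorage_Undefined = 0,  // keep the original format
    kSprStorage_Indexed   = 32, // 8-bit indices + palette
};

// Compression kept as its own byte, so new compression types
// don't multiply the number of storage enum values.
enum SpriteCompression
{
    kSprCompress_None = 0,
    kSprCompress_RLE  = 1,
};
```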
i don't quite get this. let's say we have an 8bpp image with palette. but then how would you store the palette with just 8 bits? usually one has a palette in 24-bit full color. // edit
this one doesn't seem to make much sense either. if the palette has more than 256 entries, you need 2 bytes to index into it, in which case the image could just as well be stored as rgb565 with almost no visible difference in quality. // edit2: another flag that may be useful for all alpha-less image types is "first color is transparent".
I was not thinking about 8-bit when writing this. I made this rule to avoid repacking the colors for 16-bit images; that is, to have them written as-is into the palette instead of converting to 24-bit rgb and then back to 16-bit on load.
I can make it 1 byte, but then I will have to either treat value 0 as 256 (the count), or store it as (count - 1), to allow a maximum of 256 entries.
I am not sure where to put this "flag" now that there's no place for flags in this format; besides, if there's a transparent color in the image, can it not simply be placed in the palette without additional instructions?
agsprite does it for simplicity, though as you mentioned, in the ags case it might indeed be helpful/simpler to e.g. store a 16-bit palette. so it might be helpful to have a "palette type" entry, or maybe it could be combined with the image type/bpp enum, e.g. 8-bit with rgb565 palette, 8-bit with 32-bit palette, or just "rgb565palette", because it is assumed a palettized format always uses 8 bits for palette indexing.
i seem to recall that some formats do indeed use count - 1 for this purpose.
i'm not certain how ags handles transparency currently (apart from types with an alpha channel), but there's historical practice of either detecting a special "pink" color as transparent, or having the first color (index 0) be the transparent color. but there might be use cases where you don't want transparency at all, so having such a flag might be useful. if ags defaults to color 0 as transparent, then one could also simply not use color 0 in the image, at the cost of one palette slot.
I don't see much point in discussing how to store an 8-bit image with palette, because AGS does not have a use for these kinds of sprites. It uses a global palette everywhere, which is a combination of the game palette and the room palette (which may be customized per room afaik). There are script functions that allow editing this global palette at runtime, performing palette-based animations. I can make separate enum values for a 16-bit palette and a 32-bit palette of course, even though this will likely be redundant in the end.
AGS uses "magic pink" for transparency in 16-bit and 32-bit images. For 8-bit games, I suspect index 0 is the transparent color. I'd rather not invent new things for a hypothetical purpose which no one is likely to use in the current engine.
i assumed the entire point of this PR was to transform memory-hungry 32-bit/16-bit images into palettized 8-bit (that is, using an 8-bit index between 0 and 0xff into a 32-bit/16-bit palette) for efficient storage, and unpack them into "native" 16-bit or 32-bit when loaded.
Yes, that is the point of this PR. I was referring to the previous questions:
and to the suggestion of having a flag telling that the transparency is in the first palette index (the purpose of which I still don't understand, tbh). I thought you were speaking of storing regular 8-bit images with their own palettes.
oh, i see now. ags never supported 8-bit input files with a palette other than the global palette, so your intention was to leave it that way and only add this new palette feature as an optimization for 16 and 32-bit images.
well, i was thinking of the case where an input file in 32-bit would only make use of alpha values 0 and 0xff and fewer than 256 colors total, in which case the sprite packing could make use of color 0 for all transparent pixels, so it would need to store that flag. but i guess this can be solved then by using magic pink instead.
Hm, "input" is a good name for this, i kept calling them "original sprites". But in practice this will only make sense if we modify the engine, teaching it how to handle these, because right now their palette would be ignored.
Do you maybe mean that any combination of RGB with alpha 0 would be replaced with the same "transparent color" index? On the other hand, in a "correct" case where you have just pixels 0x00000000 and 0xffrrggbb, these should nicely fit in a palette so long as there are fewer than 257 combinations of RGB there, so there's no need for a special transparent color at index 0 (the fully transparent color will appear in one of the indexes anyway).
yes, that was my thinking, because theoretically an RGBA image could use hundreds of different colors but have them set fully transparent, so even if the visible number of colors is less than 256, it wouldn't get repacked with a palette.
right. the engine could theoretically just replace all fully transparent pixels with 0x00000000 to address this when creating the sprite pack.
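A minimal sketch of that replacement, assuming 32-bit ARGB pixels with alpha in the top byte:

```cpp
#include <cstddef>
#include <cstdint>

// Rewrite every fully transparent pixel to one canonical value, so
// hidden RGB variation under alpha 0 can't inflate the color count.
void NormalizeTransparentPixels(uint32_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i)
    {
        if ((pixels[i] & 0xFF000000u) == 0) // alpha == 0
            pixels[i] = 0x00000000u;
    }
}
```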
force-pushed from 1f351b9 to 7931388
So, I made separate enums for storage formats with palette: palette888 (32), palette8888 (33) and palette565 (34).
The palette length is now 1 byte, saved as (count - 1), so to get the proper count you need to add 1 to the read value. I will update the first post with the new format information.
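In other words, a trivial sketch (helper names are assumed):

```cpp
#include <cstdint>

// Palette length is 1 byte storing (count - 1), allowing 1..256 entries.
inline uint8_t  EncodePaletteCount(uint32_t count) { return (uint8_t)(count - 1); }
inline uint32_t DecodePaletteCount(uint8_t stored) { return (uint32_t)stored + 1; }
```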
I'd leave this for the time being because, as I said, I'm not fully certain it won't break something; but in theory such an approach may also be added later on top of the existing conversion. EDIT: also, I'm beginning to have a bad feeling about this, and wonder if it would be better to just use some simple PNG library and store sprites as PNGs instead of inventing all this custom format, as people seem to mostly work with PNGs today.
i just checked the updated table in the first post; it seems you still use the reserved byte, but then add the palette length as an optional separate byte - which means that when a palette is used the offsets become unaligned. i'd think it would be highly preferable to replace the "reserved" entry with a non-optional palette size field within the first 4 bytes. edit:
you've already done 95% of the work and you presented some pretty convincing savings. even if one would use png, the format doesn't magically create palettized images, and multimedia data usually doesn't compress that well.
force-pushed from 7931388 to a818d9b
Made it so the palette count is always saved right after the "storage format" byte, and before the "compression" byte.
force-pushed from a818d9b to b3f7456
This option tells the spritefile writer to optimize sprite storage when possible.
force-pushed from b3f7456 to ddec7ef
Resolves #983.
General information
This permits the sprite file to store 16 and 32-bit sprites as indexed (8-bit) images with a palette when possible. This is a form of lossless compression, and does not change anything in how the game looks.
The idea is that the sprite writer calculates the number of distinct colors within the sprite, and if that number is lower than 257, it creates a palette of these colors and converts the sprite to an 8-bit image using that palette. The result of this conversion is then saved into the spritefile.
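A minimal sketch of that check and conversion for 32-bit pixels (the container choices and names are assumptions, not the actual writer code):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Try to convert a 32-bit image into 8-bit indices plus a palette.
// Returns false if the image has more than 256 distinct colors.
bool TryPalettize(const std::vector<uint32_t> &pixels,
                  std::vector<uint32_t> &palette,
                  std::vector<uint8_t> &indices)
{
    std::unordered_map<uint32_t, uint8_t> color_to_index;
    palette.clear();
    indices.clear();
    indices.reserve(pixels.size());
    for (uint32_t px : pixels)
    {
        auto it = color_to_index.find(px);
        if (it == color_to_index.end())
        {
            if (palette.size() == 256)
                return false; // too many colors, keep the original format
            it = color_to_index.emplace(px, (uint8_t)palette.size()).first;
            palette.push_back(px);
        }
        indices.push_back(it->second);
    }
    return true;
}
```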
IMPORTANT: this does not affect how the loaded sprites are stored in the program memory (whether in the Engine or the Editor), only how they are saved in the compiled spritefile.
On success this may reduce eligible 32-bit sprites roughly 4 times (not precisely, as the palette also takes space), and 16-bit sprites roughly 2 times.
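For example (my arithmetic, not from the PR text): a 64x64 32-bit sprite takes 64x64x4 = 16384 bytes raw; stored as 8-bit indices plus a worst-case 256-entry 4-byte palette it takes 64x64x1 + 256x4 = 5120 bytes, a ~3.2x reduction, approaching 4x for larger sprites or smaller palettes.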
The effect of this method will vary from game to game. It's ideal for 32-bit games with simpler graphics and fewer colors used per sprite, while games with highly detailed graphics will likely see little to no improvement.
NOTE: this method is separate from the existing compression, and may actually be used in combination with it, decreasing sprite sizes even further.
Editor
In the Editor's General Settings this adds an "Enable sprite storage optimization" option. It is ENABLED by default.
(I decided to give this option a more generic name to make things easier, and just in case we add something else in the future.)
Format changes
Sprite File's header
Sprite file's version is increased to 12.
In the file's header, 4 more bytes are appended to the end of the meta data. The first byte contains "sprite storage flags", a collection of flags describing which storage methods were allowed when writing this file. This is purely informational, for the Editor or other tools that may want to know it. The other 3 bytes are reserved.
Sprite format
Current format looks like this:

2 bytes - color depth (bytes per pixel)
2 bytes - width
2 bytes - height
4 bytes - size of pixel data (only in compressed sprite files)
.. pixel data

The new format will look like:

1 byte - color depth (bytes per pixel)
1 byte - storage format
1 byte - palette count (stored as count - 1)
1 byte - compression type
2 bytes - width
2 bytes - height
.. palette data (only for palette-based storage formats)
4 bytes - size of pixel data
.. pixel data
First of all, the 2 bytes meant for color depth in the beginning were split into 2 values of 1 byte each.
Color depth remains, and defines the "input format" (what the sprite is meant to be). It's in bytes per pixel (1, 2 or 4), but may in theory be turned into an enum, while keeping the same values for compatibility.
"Storage format" is an enum, describes how this sprite is stored. Currently supported values are only ones related to converting bitmap into the indexed bitmap with palette:
0 - undefined (keep same);
32 = palette888: may be used for normal 8-bit sprites accompanied by palette, or maybe to pack 24-bit images;
33 = palette8888: used when packing 32-bit ARGB sprites;
34 = palette565: used when packing 16-bit RGB565 sprites;
The palette is only saved if the storage format is appropriate. Palette entries are supposed to have the same size and format as the original image's pixels (for a 32-bit sprite a palette entry is 4-byte ARGB, for a 16-bit sprite it's 2-byte RGB565).
The size of the pixel data is now written at all times (previously it was only written in compressed sprite files).
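For illustration, reading this per-sprite header could look like the sketch below (plain stdio instead of the engine's stream API; the field order and sizes follow the description above, and all names are assumptions):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical reader for the per-sprite header described above.
struct SpriteHeader
{
    uint8_t  color_depth;    // bytes per pixel of the original image
    uint8_t  storage_format; // 0, or one of the palette* values
    uint16_t palette_count;  // stored on disk as (count - 1)
    uint8_t  compression;    // 0 = none, 1 = RLE
    uint16_t width, height;
    uint32_t data_size;      // size of the (possibly compressed) pixel data
};

bool ReadSpriteHeader(FILE *f, SpriteHeader &hdr)
{
    uint8_t pal_count_m1;
    if (fread(&hdr.color_depth, 1, 1, f) != 1 ||
        fread(&hdr.storage_format, 1, 1, f) != 1 ||
        fread(&pal_count_m1, 1, 1, f) != 1 ||
        fread(&hdr.compression, 1, 1, f) != 1)
        return false;
    hdr.palette_count = (uint16_t)pal_count_m1 + 1;
    // assumes a little-endian host for the multi-byte fields
    if (fread(&hdr.width, 2, 1, f) != 1 ||
        fread(&hdr.height, 2, 1, f) != 1)
        return false;
    // palette data (palette_count entries sized per color_depth)
    // would be read here when storage_format is palette-based
    return fread(&hdr.data_size, 4, 1, f) == 1;
}
```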