Sprite file supports storing 16/32-bit sprites as indexed images with palette #1461

Merged

Conversation

@ivan-mogilko commented Dec 11, 2021

Resolves #983.

General information

This permits the sprite file to store 16 and 32-bit sprites as indexed (8-bit) images with a palette when possible. This is a form of lossless compression and does not change how the game looks.

The idea is that the sprite writer calculates the number of distinct colors within the sprite, and if that number is 256 or fewer, it creates a palette of those colors and converts the sprite to an 8-bit indexed image. The result of this conversion is then saved into the sprite file.

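As a rough illustration of that step, below is a minimal sketch of the color-counting pass, assuming a plain 32-bit ARGB pixel buffer; the function and variable names are illustrative and not the actual spritefile writer code from this PR.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Returns true and fills 'palette'/'indexes' if the sprite uses at most 256
// distinct ARGB values; otherwise returns false and the sprite should be
// stored in its original format.
bool TryMakeIndexed(const uint32_t *pixels, size_t count,
                    std::vector<uint32_t> &palette,
                    std::vector<uint8_t> &indexes)
{
    std::unordered_map<uint32_t, uint8_t> lookup; // color -> palette index
    indexes.resize(count);
    for (size_t i = 0; i < count; ++i)
    {
        auto it = lookup.find(pixels[i]);
        if (it == lookup.end())
        {
            if (lookup.size() == 256)
                return false; // too many distinct colors, keep original format
            const uint8_t idx = static_cast<uint8_t>(palette.size());
            palette.push_back(pixels[i]);
            it = lookup.emplace(pixels[i], idx).first;
        }
        indexes[i] = it->second;
    }
    return true;
}
```
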
IMPORTANT: this does not affect how loaded sprites are stored in program memory (whether in the Engine or the Editor), only how they are saved in the compiled sprite file.

On success this may reduce eligible 32-bit sprites roughly 4 times (not precisely, as the palette also takes space), and 16-bit sprites roughly 2 times.

The effect of this method will vary from game to game. It is ideal for 32-bit games with simpler graphics and fewer colors per sprite, while games with highly detailed graphics will likely see little to no improvement.

NOTE: this method is separate from the existing compression, and may be used in combination with it, decreasing sprite sizes even further.

Editor

In the Editor's General Settings this adds an "Enable sprite storage optimization" option. It is ENABLED by default.
(I decided to give this option a more generic name to keep it simple, and in case we add something else in the future.)


Format changes

Sprite File's header

The sprite file's version is increased to 12.
In the file's header, 4 more bytes are appended to the end of the metadata. The first byte contains "sprite storage flags", a collection of flags describing which storage methods were allowed when writing this file. This is purely informational, for the Editor or other tools that may want to know it. The other 3 bytes are reserved.

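For illustration, a minimal sketch of reading these extra bytes, assuming a simple byte stream; the struct and function names here are hypothetical, not the engine's actual identifiers.

```cpp
#include <cstdint>
#include <istream>

// Hypothetical layout of the 4 bytes appended to the header metadata
// in sprite file version 12 (names are illustrative).
struct SpriteFileExtras
{
    uint8_t storage_flags; // which storage methods were allowed at write time
    uint8_t reserved[3];   // unused, kept for future extensions
};

SpriteFileExtras ReadSpriteFileExtras(std::istream &in)
{
    SpriteFileExtras extras{};
    in.read(reinterpret_cast<char*>(&extras.storage_flags), 1);
    in.read(reinterpret_cast<char*>(extras.reserved), 3);
    return extras;
}
```
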
Sprite format

Current format looks like this:

| bytes | meaning |
| --- | --- |
| 2 | color depth (bytes per pixel) |
| 2 | width |
| 2 | height |
| 4 | optional size of pixel data, in bytes, only written when the sprite is compressed |
| N | sprite's pixel data, optionally compressed |

The new format will look like:

| bytes | meaning |
| --- | --- |
| 1 | color depth (bytes per pixel), may be turned into an input format enum |
| 1 | storage format (enum) |
| 1 | palette length, if applicable, stored as (entry count - 1) (so that all 256 entries fit in a byte), or 0 |
| 1 | compression type (used as a bool, but may be turned into an enum) |
| 2 | width |
| 2 | height |
| N1 | (optional) palette; entry size is defined by the storage format (usually = sprite BPP), total size = palette length * entry size |
| 4 | mandatory size of pixel data, in bytes |
| N2 | sprite's pixel data, optionally compressed and/or converted |

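To make the layout concrete, here is a sketch of how a reader might walk one sprite entry, assuming little-endian byte order and that a palette entry's size equals the sprite's input BPP; the enum values follow the table above, but the names and helpers are placeholders rather than the engine's actual code.

```cpp
#include <cstdint>
#include <istream>
#include <vector>

// Values follow the table above; names are placeholders.
enum StorageFormat : uint8_t
{
    kStore_Undefined   = 0,  // stored in its original (input) format
    kStore_Palette888  = 32, // 8-bit indexes + RGB888 palette
    kStore_Palette8888 = 33, // 8-bit indexes + ARGB8888 palette
    kStore_Palette565  = 34, // 8-bit indexes + RGB565 palette
};

struct SpriteHeader
{
    uint8_t  bpp;         // input format: bytes per pixel (1, 2, 4)
    uint8_t  storage;     // StorageFormat value
    uint16_t pal_count;   // number of palette entries, 0 if no palette
    uint8_t  compression; // 0 = none, 1 = RLE
    uint16_t width, height;
    uint32_t data_size;   // size of the (possibly compressed) pixel data
};

static uint8_t ReadU8(std::istream &in)
{
    char c = 0;
    in.read(&c, 1);
    return static_cast<uint8_t>(c);
}

static uint16_t ReadLE16(std::istream &in)
{
    const uint16_t lo = ReadU8(in), hi = ReadU8(in);
    return static_cast<uint16_t>(lo | (hi << 8));
}

static uint32_t ReadLE32(std::istream &in)
{
    const uint32_t b0 = ReadU8(in), b1 = ReadU8(in), b2 = ReadU8(in), b3 = ReadU8(in);
    return b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
}

// Reads one sprite's header and palette (if any), leaving the stream
// positioned at the start of the pixel data.
SpriteHeader ReadSpriteHeader(std::istream &in, std::vector<uint8_t> &palette)
{
    SpriteHeader h{};
    h.bpp             = ReadU8(in);
    h.storage         = ReadU8(in);
    const uint8_t pal = ReadU8(in); // stored as (entry count - 1) when a palette is present
    h.compression     = ReadU8(in);
    h.width           = ReadLE16(in);
    h.height          = ReadLE16(in);
    h.pal_count       = (h.storage != kStore_Undefined) ? (pal + 1) : 0;
    if (h.pal_count > 0)
    {
        // palette entry size is assumed equal to the sprite's input bpp here
        palette.resize(static_cast<size_t>(h.pal_count) * h.bpp);
        in.read(reinterpret_cast<char*>(palette.data()),
                static_cast<std::streamsize>(palette.size()));
    }
    h.data_size = ReadLE32(in);
    return h;
}
```
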
First of all, the 2 bytes previously used for color depth were split into two 1-byte values.
Color depth remains and defines the "input format" (what the sprite is meant to be). It is in bytes per pixel (1, 2, 4), but may in theory be turned into an enum while keeping the same values for compatibility.
"Storage format" is an enum describing how the sprite is stored. Currently the only supported values are the ones related to converting a bitmap into an indexed bitmap with a palette:
0 = undefined (keep the same);
32 = palette888: may be used for normal 8-bit sprites accompanied by a palette, or perhaps to pack 24-bit images;
33 = palette8888: used when packing 32-bit ARGB sprites;
34 = palette565: used when packing 16-bit RGB565 sprites.

The palette is only saved if the storage format calls for it. Palette entries are supposed to have the same size and format as the original image (for a 32-bit sprite a palette entry is 4-byte ARGB, for a 16-bit sprite it is 2-byte RGB565).

The size of pixel data is now written at all times (previously it was only written for compressed sprite files).

There is a separate case when we read data from an input stream only to write it into another output stream. For that case we want to parse/convert the data as little as possible and deal with pure byte arrays. It is not convenient to have the same Write* function handle both writing a byte array and writing pixel data of variable bitness, so these are handled separately.
This lets us pass any arbitrary pixel buffer there, not necessarily one coming from a Bitmap object.
@rofl0r commented Dec 11, 2021

> In addition to the solution proposed in #983, this PR contains an extra method, where it would also try to save 32-bit sprites as 16-bit pixel data with an alpha channel: the first 8 bits are a palette index and the second 8 bits are an alpha value.

i personally have never seen rgb sprites that used alpha values other than 0xff or 0x00 (i.e. either transparent, or not transparent). did you encounter cases that use varying alpha levels? if not, it might be preferable to just allocate the first palette slot as a "transparent color" entry and use index 0 for all transparent pixels.

@ivan-mogilko commented Dec 11, 2021

> i personally have never seen rgb sprites that used alpha values other than 0xff or 0x00 (i.e. either transparent, or not transparent). did you encounter cases that use varying alpha levels?

Yes, they are common in games that have antialiased sprites with an alpha channel: the character/object edges consist of pixels with gradually changing alpha values. My "cameras tech demo" may serve as an example: it is a 1280x720 game with a number of hi-res antialiased sprites, and this second palette method works where the first fails.

But, to be fair, this is a borderline case, so I'm still in doubt whether to add this right away. It may be more desirable to have a proper compression method instead, like LZW or deflate.

@ivan-mogilko commented:

Alright, I decided to leave this alternate method out of this PR, as it's not clear how often it will be useful.

Additionally, I changed the meaning of the "storage flags" byte to also contain the compression type, in case we later have a separate compression option per sprite. The low 4 bits are meant for a compression type id (which allows values 0-15) and the high 4 bits are for flags. The "indexed bitmap" flag is now 0x80.

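As a sketch of that split, assuming illustrative constant names rather than the actual ones from the PR:

```cpp
#include <cstdint>

// Low 4 bits: compression type id (0-15); high 4 bits: flags.
const uint8_t kSprStore_CompressionMask = 0x0F;
const uint8_t kSprStore_FlagsMask       = 0xF0;
const uint8_t kSprStore_IndexedBitmap   = 0x80; // "indexed bitmap" storage was allowed

uint8_t MakeStorageFlags(uint8_t compression_type, bool indexed_allowed)
{
    uint8_t flags = compression_type & kSprStore_CompressionMask;
    if (indexed_allowed)
        flags |= kSprStore_IndexedBitmap;
    return flags;
}

uint8_t GetCompressionType(uint8_t storage_flags)
{
    return storage_flags & kSprStore_CompressionMask;
}
```
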
@rofl0r commented Dec 12, 2021

> 0x01 - used compression; this will be replaced with compression type of same value later;

should this mean "0x01-0x0f - used compression; this will be replaced with compression type of same value later"?

we might use this opportunity to add support for 24 bpp images (i.e. all 32-bit images could automatically be converted to it if they use an identical alpha channel value for all pixels) and 15 bpp + transparency bit, i.e. RGB555. if so, it would probably be preferable for the "bytes per pixel" field to be changed to mean "bits per pixel" and stored as a short, and then use 2 bytes for the flags field too

// edit: or alternatively, using a 1-byte "format-identifier" enum.

@ivan-mogilko commented Dec 13, 2021

> if so, it would probably be preferable for the "bytes per pixel" field to be changed to mean "bits per pixel" and stored as a short, and then use 2 bytes for the flags field too
> // edit: or alternatively, using a 1-byte "format-identifier" enum.

The way I understand this, we need the following data:

  • what the image is supposed to be (8, 16, or 32-bit)
  • how it is stored (indexed bitmap, 24 bits with the same alpha value, etc.)
  • how it is compressed.

If the first two may be merged, that is, if it will always be possible to deduce how to convert the image back to the original, then they may be one single-byte enum. If not, or if this is not certain, then they should be stored in two separate bytes, the second being an enum.

edit: Also to clarify, I may add prerequisites for supporting other storage formats, but I won't add any more of them within this PR.

@rofl0r commented Dec 14, 2021

the enum might also look like

```c
enum spriteformat {
    s8bit_globalpalette = 0,
    s8bit_palette = 1,
    s8bit_palette_rle = 2,
    s16bit_565 = 3,
    s16bit_555a = 4,
    s24bit = 5,
    ...
};
```

i.e. combining all 3 different properties you listed into a single enum. i'm just pointing out a possibility, not suggesting that this is better.

> Also to clarify, I may add prerequisites for supporting other storage formats, but I won't add any more of them within this PR.

yeah, i understand. i'm just raising this so we make the new format future-proof, and don't have to introduce yet another format in a couple of months when e.g. 24-bit support is eventually added.

@ivan-mogilko commented Dec 16, 2021

So, here's an alternate variant:

| bytes | meaning |
| --- | --- |
| 1 | color depth (bytes per pixel), may be turned into a format enum |
| 1 | storage format (enum) |
| 1 | compression type (bool, but may be turned into an enum) |
| 1 | reserved |
| 2 | width |
| 2 | height |
| 2 | (optional) length of palette (num of entries) |
| N1 | (optional) palette, where each entry is of BPP size |
| 4 | mandatory size of pixel data, in bytes |
| N2 | sprite's pixel data, optionally compressed and/or converted |

The "storage format" can currently be:
0 - undefined (keep the same);
32 - indexed format (colormap with palette, where palette entries have the same format and size as the original bitmap); I explain below why its value is 32.

I went with separate "original color depth / format" and "storage format" values, for two reasons:

  1. I am still not certain about possible use cases, as this was not thought through beforehand; having these values separate may make it easier to describe conversions between formats.
  2. Hypothetically a user may want to use a sprite as a source of data rather than just a drawn picture, in which case it's best to have the original format explicit, e.g. to read a mask or a value map from a bitmap. (Currently this is a bit complicated because AGS usually converts sprites to the global game color depth, but in theory an option to keep them unconverted may be added for such a purpose.)

Both of these parameters could actually be made the same enum. In that case values 1-4 may be kept for compatibility (as 8-bit using the global palette, rgb565, rgb 24-bit and argb 32-bit respectively). This is also why I made the "indexed" storage format value 32 (to keep some gap between the values). As this is an enum, the actual number should not really matter.

The compression type is now stored as a separate byte; its value is currently only 0 or 1 (meaning RLE). Compression is kept separate because, if there are more compression types, the number of combinations would multiply and become quite inconvenient to keep in one enum.

@rofl0r commented Dec 16, 2021

> N1 | (optional) palette, where each entry is of BPP size

i don't quite get this. let's say we have an 8bpp image with palette. but then how would you store the palette with just 8 bits? usually one has a palette in 24-bit full color.

// edit

> 2 | (optional) length of palette (num of entries)

this one doesn't seem to make much sense either. if the palette has more than 256 entries, you need 2 bytes to index into it, in which case the image could just as well be stored as rgb565 with almost no visible difference in quality.
therefore we could make this field 1 byte and use it instead of "reserved".

// edit2:

another flag that may be useful for all alpha-less image types is "first color is transparent".

@ivan-mogilko commented Dec 16, 2021

> > N1 | (optional) palette, where each entry is of BPP size
>
> i don't quite get this. let's say we have an 8bpp image with palette. but then how would you store the palette with just 8 bits? usually one has a palette in 24-bit full color.

I was not thinking about 8-bit when writing this. I made this rule to avoid repacking the colors for 16-bit images, so they can be written as-is into the palette instead of being converted to 24-bit rgb and then back to 16-bit on load.
I can make it 32 bits per entry all the time, similar to your agsprite example.

// edit

> > 2 | (optional) length of palette (num of entries)
>
> this one doesn't seem to make much sense either. if the palette has more than 256 entries, you need 2 bytes to index into it, in which case the image could just as well be stored as rgb565 with almost no visible difference in quality. therefore we could make this field 1 byte and use it instead of "reserved".

I can make it 1 byte, but then I will have to either treat value 0 as 256 (count), or treat it as (count - 1) to allow a max of 256 entries.

// edit2:

> another flag that may be useful for all alpha-less image types is "first color is transparent".

I am not sure where to put this "flag" now that there's no place for flags in this format; besides, if there's a transparent color in the image, may it not simply be placed in the palette without additional instructions?

@rofl0r commented Dec 16, 2021

> I can make it 32 bits per entry all the time, similar to your agsprite example.

agsprite does it for simplicity, though as you mentioned, in the ags case it might indeed be helpful/simpler to e.g. store a 16-bit palette. so it might be helpful to have a "palette type" entry, or maybe it could be combined with the image type/bpp enum, e.g. 8-bit with rgb565 palette, 8-bit with 32-bit palette, or just "rgb565palette", because it is assumed a palettized format always uses 8 bits for palette indexing.

> I can make it 1 byte, but then I will have to either treat value 0 as 256 (count), or treat it as (count - 1) to allow a max of 256 entries.

i seem to recall that some formats do indeed use count - 1 for this purpose.

> I am not sure where to put this "flag" now that there's no place for flags in this format; besides, if there's a transparent color in the image, may it not simply be placed in the palette without additional instructions?

i'm not certain how ags handles transparency currently (apart from types with an alpha channel), but there's the historical practice of either detecting a special "pink" color as transparent, or having the first color (index 0) be the transparent color. but there might be use cases where you don't want transparency at all, so having such a flag might be useful. if ags defaults to color 0 as transparent, then one could also simply not use color 0 in the image, at the cost of one palette slot.

@ivan-mogilko commented Dec 16, 2021

I don't see much point in discussing how to store an 8-bit image with a palette, because AGS does not have a use for these kinds of sprites. It uses a global palette everywhere, which is a combination of the game palette and the room palette (which may be customized per room, afaik). There are script functions that allow editing this global palette at runtime, performing palette-based animations.
In the last decade I've seen only a couple of people making 8-bit games, and we never had a request for sprites with individual palettes. I'm not even sure how these could be used within this environment. Are they supposed to be converted to the global index by matching the closest colors or something?

I can of course make separate enum values for a 16-bit palette and a 32-bit palette, even though this will likely turn out redundant in the end.

> i'm not certain how ags handles transparency currently (apart from types with an alpha channel), but there's the historical practice of either detecting a special "pink" color as transparent, or having the first color (index 0) be the transparent color. but there might be use cases where you don't want transparency at all, so having such a flag might be useful. if ags defaults to color 0 as transparent, then one could also simply not use color 0 in the image, at the cost of one palette slot.

AGS uses "magic pink" for transparency in 16-bit and 32-bit images. For 8-bit games, I suspect index 0 is the transparent color.
But again, since AGS uses the global palette everywhere, I don't quite see the point of telling whether a sprite's palette contains a "transparent color" in slot 0 or not, as it's again not clear how this would work with the global engine rules.

I'd rather not invent new things for a hypothetical purpose which no one is likely to use in the current engine.

@rofl0r commented Dec 16, 2021

> I don't see much point in discussing how to store an 8-bit image with a palette, because AGS does not have a use for these kinds of sprites.

i assumed the entire point of this PR was to transform memory-hungry 32-bit/16-bit images into palettized 8-bit (that means, using an 8-bit index between 0 and 0xff into a 32-bit/16-bit palette) for efficient storage, and unpack them into "native" 16-bit or 32-bit when loaded.

@ivan-mogilko commented Dec 16, 2021

> > I don't see much point in discussing how to store an 8-bit image with a palette, because AGS does not have a use for these kinds of sprites.
>
> i assumed the entire point of this PR was to transform memory-hungry 32-bit/16-bit images into palettized 8-bit.

Yes, it is the point of this PR.

I was referring to the previous questions:

> i don't quite get this. let's say we have an 8bpp image with palette. but then how would you store the palette with just 8 bits?

and to the suggestion of having a flag telling that the transparency is in the first palette index (whose purpose I still don't understand, tbh).

I thought you were speaking of storing regular 8-bit images with their own palettes.

@rofl0r commented Dec 16, 2021

> I was referring to the previous questions:
>
> > i don't quite get this. let's say we have an 8bpp image with palette. but then how would you store the palette with just 8 bits?

oh, i see now. ags never supported 8-bit input files with a palette other than the global palette, so your intention was to leave it that way and only add this new palette feature for optimization of 16 and 32-bit images.

> and to the suggestion of having a flag telling that the transparency is in the first palette index (whose purpose I still don't understand, tbh).

well, i was thinking of the case where an input file in 32-bit would only make use of alpha values 0 and 0xff and fewer than 256 colors total, in which case the sprite packing could use color 0 for all transparent pixels, so it would need to store that flag. but i guess this can be solved by using magic pink instead.

@ivan-mogilko commented Dec 16, 2021

> oh, i see now. ags never supported 8-bit input files with a palette other than the global palette, so your intention was to leave it that way and only add this new palette feature for optimization of 16 and 32-bit images.

Hm, "input" is a good name for this; I kept calling them "original sprites".
Well, at first I did not even think about 8-bit input files in this context, for the reasons above.
In theory, they may still use this new format feature, as you may have a combination of input format and storage format set to explain that it's just an 8-bit image stored with a palette. In that case the palette itself should be 24-bit (rgb888).

But in practice this will only make sense if we modify the engine and teach it how to handle these, because right now their palette would be ignored.

> well, i was thinking of the case where an input file in 32-bit would only make use of alpha values 0 and 0xff and fewer than 256 colors total, in which case the sprite packing could use color 0 for all transparent pixels, so it would need to store that flag. but i guess this can be solved by using magic pink instead.

Do you maybe mean that any combination of RGB with alpha 0 would be replaced with the same "transparent color" index?
I'm not completely sure right now, but I have a suspicion that the "Use Alpha Channel" flag that AGS uses with sprites may preclude such a strategy, because when it's set the engine treats a 32-bit sprite as basically a 24-bit one (fully opaque), therefore all RGB combinations must be kept distinct and not lost in conversions. Although I realize this is a weird case.

On the other hand, in a "correct" case where you only have pixels 0x00000000 and 0xffrrggbb, these should fit nicely into a palette as long as there are fewer than 257 RGB combinations, so there's no need for a special transparent color at index 0 (the fully transparent color will appear at one of the indexes anyway).

@rofl0r commented Dec 16, 2021

> Do you maybe mean that any combination of RGB with alpha 0 would be replaced with the same "transparent color" index?

yes, that was my thinking, because theoretically an RGBA image could use hundreds of different colors but have them set fully transparent, so even if the visible number of colors is less than 256, it wouldn't get repacked with a palette.

> On the other hand, in a "correct" case where you only have pixels 0x00000000 and 0xffrrggbb, these should fit nicely into a palette as long as there are fewer than 257 RGB combinations

right. the engine could theoretically just replace all fully transparent pixels with 0x00000000 to address this when creating the sprite pack.

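A minimal sketch of that idea follows: collapse every fully transparent ARGB pixel to a single value before counting colors, so that "invisible" color variations don't prevent palettization. This is purely illustrative and not part of the merged PR.

```cpp
#include <cstddef>
#include <cstdint>

// Collapses every fully transparent ARGB pixel to a single value so that
// differing RGB values hidden behind alpha = 0 do not inflate the color count.
void NormalizeTransparentPixels(uint32_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i)
    {
        if ((pixels[i] & 0xFF000000u) == 0) // alpha byte == 0
            pixels[i] = 0x00000000u;
    }
}
```
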
@ivan-mogilko commented Dec 17, 2021

So, I made separate enums for the storage format with palette:

  • 32 = rgb888 (may be used for normal 8-bit sprites, or perhaps to pack 24-bit images in the same way);
  • 33 = argb8888 (used when packing 32-bit sprites);
  • 34 = rgb565 (used when packing 16-bit sprites).

The palette length is now 1 byte and is saved as (count - 1), so to get the proper count you add 1 to the read value.

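In code terms, a trivial sketch of that encoding (hypothetical helper names, not the actual implementation):

```cpp
#include <cstdint>

// Stored byte = count - 1: 0 encodes 1 entry, 255 encodes 256 entries
// (only meaningful when a palette is actually present).
uint8_t EncodePalCount(int count) { return static_cast<uint8_t>(count - 1); }
int     DecodePalCount(uint8_t b) { return b + 1; }
```
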
Will update the first post with the new format information.

> > Do you maybe mean that any combination of RGB with alpha 0 would be replaced with the same "transparent color" index?
>
> yes, that was my thinking, because theoretically an RGBA image could use hundreds of different colors but have them set fully transparent, so even if the visible number of colors is less than 256, it wouldn't get repacked with a palette.

I'd leave this out for the time being, because like I said, I'm not fully certain it won't break something; but in theory such an approach may also be added later on top of the existing conversion.

EDIT: also, I'm beginning to have a bad feeling about this, and wonder if it would be better to just use some simple PNG library and store sprites as PNGs instead of inventing all this custom format, as people seem to mostly work with PNGs these days.

@rofl0r commented Dec 17, 2021

i just checked the updated table in the first post; it seems you still use the reserved byte, but then add the palette length as an optional separate byte - which means that when a palette is used the offsets become unaligned. i'd think it would be highly preferable to replace the "reserved" entry with a non-optional palette size field within the first 4 bytes.

edit:

> EDIT: also, I'm beginning to have a bad feeling about this, and wonder if it would be better to just use some simple PNG library and store sprites as PNGs instead of inventing all this custom format, as people seem to mostly work with PNGs these days.

you've already done 95% of the work and you presented some pretty convincing savings. even if one were to use png, the format doesn't magically create palettized images, and multimedia data usually doesn't compress that well.

@ivan-mogilko commented:

Made it save the palette count always, right after "storage format" and before the "compression" byte, so:

  • input fmt (bpp for now)
  • storage fmt
  • palette count
  • compression

@ivan-mogilko ivan-mogilko merged commit 8729913 into adventuregamestudio:master Dec 19, 2021
@ivan-mogilko ivan-mogilko deleted the 360--spriteindexedbmp branch December 19, 2021 20:38