
Resulting in larger files #30

Closed
GoogleCodeExporter opened this issue Mar 9, 2015 · 3 comments

Comments

@GoogleCodeExporter

I am making use of a project called AdvanceCOMP; its compression options allow 
zopfli (-4, or --shrink-insane). While going through the PNGs for Torchlight, 
there are dozens of PNGs that end up larger. I've attached the gloves so you 
can review and test it yourself as appropriate.

advpng.exe -z -4 */*.png

43653       43653 100% wardrobe/dragon_gloves.png (Bigger 45713)
89125       89125 100% wardrobe/heavyleather_boots.png (Bigger 96720)
11412       10817  94% wardrobe/heavyleather_boots_alt01.png
90564       90564 100% wardrobe/heavyleather_chest.png (Bigger 98420)
89328       89328 100% wardrobe/heavyleather_gloves.png (Bigger 97619)

At worst, the compression should cap at the same size. This means the root of 
the problem is that some sections of the data compress better while others end 
up being left uncompressed. I noted this in some of my own compression 
experiments years ago.


Viable solution:
 For sections that are left uncompressed (causing the expansion), the encoder should instead find a match of identical length; such a run is most likely part of another match. If it falls at the beginning or end of another compressed section, that section should be truncated to allow the uncompressed section to compress at a 1:1 rate (so long as the other match remains long enough to retain compression).

I am not aware of the full details of zlib compression, so an additional rule 
is needed.

If a match is found in the middle of another match, the encoder should split 
the outer match into two matches that deliberately avoid the middle, giving 
the inner section its own match. This should only happen when the three 
resulting matches take less space than one match plus one non-match.
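The trade-off above can be sketched with a toy cost model. The bit costs below are invented for illustration (zlib's real costs come from Huffman coding and vary per symbol); the point is only the comparison between three matches and one match plus a literal run.

```python
# Hypothetical cost model, not zlib's actual entropy-coded sizes:
# assume a back-reference costs ~20 bits (length + distance codes)
# and a literal byte costs ~9 bits on average.
MATCH_BITS = 20
LITERAL_BITS = 9

def cost(matches, literals):
    """Approximate encoded size in bits for a mix of matches and literal bytes."""
    return matches * MATCH_BITS + literals * LITERAL_BITS

def should_split(literal_run):
    """Is splitting the outer match around the run (three matches total)
    cheaper than one match plus emitting the run as literals?"""
    return cost(3, 0) < cost(1, literal_run)
```

With these assumed constants, a literal run longer than about 5 bytes is worth covering with an extra pair of matches; with real Huffman costs the break-even point would differ per file.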

How much more complexity this will give I'm not sure, nor how much extra time 
it will take.

Original issue reported on code.google.com by rtcv...@yahoo.com on 2 Nov 2013 at 9:00

Attachments:

@GoogleCodeExporter
Author

Hi,

The latest zopfli on git includes a tool called zopflipng. You could get it 
and use it to compress your PNGs instead of AdvanceCOMP. There is an issue 
with it that can cause it to crash, so you will also need the patch from here: 
https://code.google.com/p/zopfli/issues/detail?id=28#c1

I ran zopflipng on dragon_gloves.png with several different levels of 
optimization and saw reductions in file size for each.

zopflipng: 94.763% of the original size (40K)
zopflipng -m: 94.289% of the original size (40K)
zopflipng --iterations=500 --splitting=3 --filters=01234mepb --lossy_8bit 
--lossy_transparent: 56.232% of the original size (23K)

The last method took a *very* long time.  Attached is the file.

Original comment by robw...@gmail.com on 12 Nov 2013 at 2:34

Attachments:

@GoogleCodeExporter
Author

You should be using advdef, not advpng.

The reason why advpng gives worse results for you is because of the way it uses 
filters. Filters are part of the PNG spec and describe various methods for 
predicting the color of the next pixel based on the known colors of previous 
pixels (the pixel immediately to the left, the pixel immediately above, and/or 
the pixel to the upper left). These filters can be defined once for the whole 
image (filters 0-4) or individually for every scanline (called filter 5, but 
really a combination of filters 0-4).
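For reference, the five filter types are defined in the PNG specification; filter 4 (Paeth) predicts each byte from its left, above, and upper-left neighbours, picking whichever is closest to a simple linear estimate. A minimal sketch of the predictor:

```python
def paeth(a, b, c):
    """PNG Paeth predictor (filter type 4): a = left, b = above, c = upper-left."""
    p = a + b - c                      # initial linear estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:          # ties break toward left, then above
        return a
    if pb <= pc:
        return b
    return c
```

The encoder stores the difference between each byte and this prediction; the decoder runs the same predictor to reverse it losslessly.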

Because of advpng's intended use, the compression of screenshots from 
MAME-emulated games, it forces every line of the image to use filter 0 
(basically, no prediction). This is the best choice for MAME screenshots, 
because they typically will have 256 colors or fewer and will have large areas 
of flat color. The prediction filters usually only produce better results than 
no prediction in the cases of full-spectrum color and gradients -- photographs 
and modern 2D and 3D art. Your image uses almost 7000 colors, as well as 
gradients, so it does worse when forced to use no prediction.

In contrast to advpng's behavior, advdef will simply recompress any DEFLATE 
stream, without trying to modify the filters being used. Using your original 
image, I got the following results from advdef:

advdef -z4 dragon_gloves.png
       43653       41406  94% dragon_gloves.png

With greatly increased iterations (this takes a long time), even better results 
can be had:

advdef -z4 -i1024 dragon_gloves.png
       41406       41117  99% dragon_gloves.png

Now, if we want to go a bit crazy, we can use another tool called pngwolf to 
choose the scanline filters more intelligently. As there is a choice of five 
different filters for each scan line, the total number of combinations, even 
for a small image like this, is too large to test exhaustively. So any PNG 
compressor running in Adaptive (filter 5) mode uses a heuristic to choose a set 
of filters. pngwolf uses a better set of heuristics, as well as a genetic 
algorithm to find a filter set that produces better overall compression.
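The usual heuristic, described in the PNG specification and used by libpng, scores each candidate filter by the sum of absolute values of the filtered bytes (interpreted as signed) and picks the smallest per scanline. A simplified sketch, scoring only filters None (0) and Sub (1); a real encoder evaluates all five:

```python
def filter_sub(line, bpp):
    """PNG Sub filter: each byte minus the byte bpp positions to its left."""
    return bytes((line[i] - (line[i - bpp] if i >= bpp else 0)) & 0xFF
                 for i in range(len(line)))

def score(filtered):
    """Sum of absolute values, treating each filtered byte as signed."""
    return sum(b if b < 128 else 256 - b for b in filtered)

def choose_filter(line, bpp):
    """Pick the filter type whose output has the smallest signed-magnitude sum."""
    candidates = {0: bytes(line), 1: filter_sub(line, bpp)}
    return min(candidates, key=lambda f: score(candidates[f]))
```

A smooth gradient favours Sub, while flat runs of identical bytes favour None. pngwolf starts from heuristic choices like this and then lets its genetic algorithm perturb the per-line filter set, scoring candidates by actual compressed size.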

Your original image uses the following filter set:
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000000000000000000000000000000034000
    320200200044111111111114434333343344444444444444444444444444444444444444
    4444444444444444444444444422244444444441

The best result I found with pngwolf was:
    000000000000000000000000000000000000000000000000000000000000000000000000
    000000000000000000000000000000000000000200000000000000000000000000000000
    000000000044111111111111411133323334144444444414444444414444114441444444
    4444444444444444444444444422244444414411

When this output was then compressed with advdef -z4 -i1024, I got a final size 
of 41061 bytes.

Now, beware of the previous poster's results. He/she used a technique called 
"dirty transparency" that tries to adjust the color contents of any pixels in 
the image that are set to be fully transparent by the alpha channel in order to 
improve the image compression. This assumes that any program that will use the 
image will be ignoring any fully-transparent pixels anyway. In your image, 
though, this ends up wiping out the whole lower right portion of the image, as 
the transparency info for that portion of the gloves is missing. If you were 
editing the image with the intent of adding that transparency info later, dirty 
transparency is not a good technique to use.
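The technique itself is simple. Sketching it on a list of RGBA tuples: any pixel whose alpha is 0 has its color channels zeroed, which creates long uniform runs that compress well but discards whatever color data was hidden under the transparency:

```python
def dirty_transparent(pixels):
    """Zero the RGB channels of fully transparent pixels.
    Lossy: color data hidden behind alpha == 0 is destroyed."""
    return [(0, 0, 0, 0) if a == 0 else (r, g, b, a)
            for (r, g, b, a) in pixels]
```

This is why it wipes out the lower right of the gloves: those pixels are fully transparent, so their underlying colors are treated as disposable.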

However, if you don't need that portion of the image, using dirty transparency 
obviously reduces the file size a lot in this particular image. Using a 
different tool (truepng) to do the dirty transparency and following it up by 
pngwolf and advdef -z4 -i1024, I got the file size down to 24465 bytes.

I've attached copies of my optimized files. One is completely lossless, while 
the other uses dirty transparency. I hope this works for you.

Original comment by Adre...@gmail.com on 1 Feb 2014 at 11:34

Attachments:

@GoogleCodeExporter
Author

I tried the current version of zopflipng on the input image (which is 43653 
bytes).

With default settings, it makes the image 6% smaller: 41344 bytes
With --lossy_transparent, it makes it 44% smaller: 24692 bytes
With --lossy_transparent --iterations=1000, it gives 24551 bytes.

Note that --lossy_transparent has the same meaning as "dirty transparency" 
mentioned above.
The result is slightly worse, probably due to having less ideal PNG filter 
values than pngwolf gave above.

But zopfli and zopflipng are making the image smaller and all techniques 
implemented in it are working as intended here, so I think this bug can be 
closed. If AdvanceCOMP still makes it larger, please report it there. Thanks!

Original comment by l...@google.com on 2 Jul 2014 at 2:27

  • Changed state: WontFix

kornelski pushed a commit to ImageOptim/zopfli that referenced this issue Dec 3, 2018