This adds the code paths for row-by-row filter selection but does not provide an
actual selection algorithm; the selection function simply chooses the lowest set
filter bit.
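A minimal sketch of what "lowest set filter bit" means here, using the public
PNG_FILTER_ masks and PNG_FILTER_VALUE_ constants from png.h; the helper name is
hypothetical and is not part of the libpng API:

    #include <png.h>

    /* Placeholder behaviour: given a mask of allowed filters
     * (PNG_FILTER_NONE is the lowest bit, PNG_FILTER_PAETH the
     * highest), return the filter value for the lowest set bit.
     */
    static int
    lowest_allowed_filter(unsigned int filter_mask)
    {
       if (filter_mask & PNG_FILTER_NONE) return PNG_FILTER_VALUE_NONE;
       if (filter_mask & PNG_FILTER_SUB)  return PNG_FILTER_VALUE_SUB;
       if (filter_mask & PNG_FILTER_UP)   return PNG_FILTER_VALUE_UP;
       if (filter_mask & PNG_FILTER_AVG)  return PNG_FILTER_VALUE_AVG;
       return PNG_FILTER_VALUE_PAETH;
    }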
Signed-off-by: John Bowler <jbowler@acm.org>
A debug() assert fired if windowBits was set to 8 in the Huffman-only and
no-compression cases. This commit changes it to do some extra checking instead.
It also removes unreachable code in pz_default_settings and eliminates a
spurious warning in pngcp for small files.
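For reference, settings of roughly this shape exercised the path in question (a
sketch using the standard libpng write setters and zlib constants, not the
actual test code; the wrapper is hypothetical):

    #include <png.h>
    #include <zlib.h>

    static void
    set_minimal_deflate(png_structp png_ptr)
    {
       /* Either Huffman-only or no-compression, combined with the
        * minimum window size of 8, used to trip the debug assert.
        */
       png_set_compression_strategy(png_ptr, Z_HUFFMAN_ONLY);
       /* or: png_set_compression_level(png_ptr, Z_NO_COMPRESSION); */
       png_set_compression_window_bits(png_ptr, 8);
    }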
Signed-off-by: John Bowler <jbowler@acm.org>
Implemented better defaulting of zlib settings based on image properties.
Implemented pass-through of png_write_rows when the rows can be used directly (a
common case), optimizing the handling of previous-row buffering.
Removed the METHODICAL filter selection method and disabled the HEURISTIC one;
the first was ridiculously slow (though useful for experiments) and the second
does not work. Filter selection is temporarily disabled (it defaults to the
lowest-numbered filter in the list, typically 'none').
New handling of compression settings (incomplete) and a new PNG compression
level (not yet visible in an API).
Back-ported 'PNG_FAST_FILTERS' from 1.6 (in png.h); see the usage sketch after
this note.
There are minimal API changes beyond removal of the selection options. Work
remains to investigate a filter selection mechanism that is at least as good as
the previous one.
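As noted above, a usage sketch for the back-ported PNG_FAST_FILTERS mask; the
wrapper function is hypothetical, but png_set_filter and the mask itself are the
existing public API:

    #include <png.h>

    /* Restrict filtering to the cheap filters (none, sub, up).  With
     * filter selection currently disabled this effectively selects
     * 'none', the lowest-numbered filter in the mask.
     */
    static void
    use_fast_filters(png_structp png_ptr)
    {
       png_set_filter(png_ptr, PNG_FILTER_TYPE_BASE, PNG_FAST_FILTERS);
    }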
Signed-off-by: John Bowler <jbowler@acm.org>
More sophisticated defaulting, which helps significantly for some files, along
with code to make it easier to control the compression defaults and to make the
settings honor the API calls the application makes (previously, low windowBits
settings would get reset to higher values).
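For example (a sketch using the existing public setter; the wrapper is
hypothetical), an explicit request like this is now honored rather than being
silently raised:

    #include <png.h>

    /* An application that knows its IDAT data is small can ask for a
     * small deflate window; previously the defaulting code could reset
     * this to a larger value, now the explicit request wins.
     */
    static void
    request_small_window(png_structp png_ptr)
    {
       png_set_compression_window_bits(png_ptr, 9); /* 512-byte window */
    }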
Signed-off-by: John Bowler <jbowler@acm.org>
Output results during the search; not a perfect implementation but sufficient
for basic tests on zlib parameters.
Signed-off-by: John Bowler <jbowler@acm.org>
Some refinements for the search option and better (more consistent) reporting of
the results, plus changes so that, when compiled against libpng 1.6, the program
correctly copies text chunks; previously, when a search option caused multiple
copies of the same file, the copies after the first would not have the text
chunks.
Signed-off-by: John Bowler <jbowler@acm.org>
Batched fixes from libpng16: the PNG_IMAGE_PNG_SIZE_MAX macro fix, the change so
that contrib/libtests only uses exit(77) for configure builds, and changes to
pngstest to (by default) generate random backgrounds on a per-file, not
per-session, basis.
Signed-off-by: John Bowler <jbowler@acm.org>
This is an API change for 1.7, albeit a quiet one; it may produce compiler
warnings but should not result in errors, unless warnings are treated as errors.
On 64-bit systems it widens the results of the various PNG_IMAGE_ macros that
return size values (component counts, byte sizes) to 64 bits. It also changes
the row_stride parameter, which is the pointer difference between adjacent rows
of the image buffer, to ptrdiff_t, the ANSI-C90 defined type of the difference
of two pointers. The existing (1.6.22) checks for overflow are preserved but now
accommodate images that require more than 32 bits of address space when
size_t/ptrdiff_t are 64-bit types.
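A sketch of how this looks from the simplified API, assuming the widened 1.7
types described above (the macros and functions are the existing simplified read
API; only the result and parameter widths change; the helper is hypothetical):

    #include <stdlib.h>
    #include <string.h>
    #include <png.h>

    /* Read a PNG into a caller-owned RGBA buffer.  With this change
     * PNG_IMAGE_SIZE and PNG_IMAGE_ROW_STRIDE evaluate to 64-bit values
     * on 64-bit systems and the row_stride argument is a ptrdiff_t.
     */
    static void *
    read_rgba(const char *filename, png_image *image)
    {
       memset(image, 0, sizeof *image);
       image->version = PNG_IMAGE_VERSION;

       if (png_image_begin_read_from_file(image, filename))
       {
          void *buffer;

          image->format = PNG_FORMAT_RGBA;
          buffer = malloc(PNG_IMAGE_SIZE(*image));

          if (buffer != NULL && png_image_finish_read(image,
              NULL/*background*/, buffer, PNG_IMAGE_ROW_STRIDE(*image),
              NULL/*colormap*/))
             return buffer;

          free(buffer);
       }

       return NULL;
    }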
Signed-off-by: John Bowler <jbowler@acm.org>
This implements an API and provides a number of assist macros to allow an
application which uses the simplified write API to bypass stdio and write
directly to memory.
It also includes some warnings (png.h) and some check code to detect *possible*
overflow in the ROW_STRIDE and simplified image SIZE macros. This disallows
image width/height/format combinations that *might* overflow. It is a quiet API
change that limits in-memory image size (uncompressed) to less than 4GByte and
image row size (stride) to less than 2GByte.
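The shape of that interface, assuming it mirrors the png_image_write_to_memory
function and PNG_IMAGE_PNG_SIZE_MAX bound already present in libpng 1.6 (a
sketch, not necessarily the exact 1.7 signatures; the helper is hypothetical):

    #include <stdlib.h>
    #include <png.h>

    /* Encode a simplified-API image into a malloc'ed buffer.  'image'
     * must already describe the pixel data (version, width, height,
     * format).  PNG_IMAGE_PNG_SIZE_MAX is a generous upper bound on the
     * encoded size; on success *memory_bytes holds the bytes written.
     */
    static void *
    write_to_memory(png_imagep image, const void *pixels,
       png_alloc_size_t *memory_bytes)
    {
       void *memory = malloc(PNG_IMAGE_PNG_SIZE_MAX(*image));

       *memory_bytes = PNG_IMAGE_PNG_SIZE_MAX(*image);

       if (memory != NULL && png_image_write_to_memory(image, memory,
           memory_bytes, 0/*convert_to_8_bit*/, pixels, 0/*row_stride*/,
           NULL/*colormap*/))
          return memory;

       free(memory);
       return NULL;
    }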
Signed-off-by: John Bowler <jbowler@acm.org>
The internal read code change to stop sharing the palette was incompletely
implemented. The result is that, unless palette index checking is turned off and
there are no read transformations, the png_info palette gets deleted when the
png_struct is deleted. This is normally harmless (png_info gets deleted first)
but in the case of pngcp it results in use-after-free of the palette and,
therefore, palette corruption and, on some operating systems, possibly an access
violation.
This also updates pngcp 'search' mode to check a restricted range of memLevels;
there is an unrelated bug which means that lower zlib memLevels result in memory
corruption under some circumstances, probably less often than 1:1000.
Signed-off-by: John Bowler <jbowler@acm.org>
Also change the order of the 'level' and 'windowBits' searches to search
windowBits first; this favours windowBits optimizations over compression level
ones on the basis that the latter should only affect the write code.
Signed-off-by: John Bowler <jbowler@acm.org>
The sequential read code failed to read to the end of the IDAT stream in about
1/820 cases, resulting in a spurious warning. The
png_set_compression_buffer_size API also would not work (or would do bad things)
if the size of a zlib uInt was less than 32 bits.
This includes a quiet API change to alter png_set_compression_buffer_size to use
a png_alloc_size_t, not a png_size_t, and to implement the correct checks.
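A usage sketch of the setter in question with the new argument type (the call
itself is the long-standing public API; the wrapper is hypothetical):

    #include <png.h>

    /* Request a larger compression buffer; with this change the size
     * parameter is a png_alloc_size_t rather than a png_size_t and is
     * range-checked correctly even when zlib's uInt is narrower than
     * 32 bits.
     */
    static void
    set_big_zbuffer(png_structp png_ptr)
    {
       png_set_compression_buffer_size(png_ptr, (png_alloc_size_t)65536);
    }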
Signed-off-by: John Bowler <jbowler@acm.org>
png_set_compression_buffer_size would result in a spurious debug assert if the
compression buffer size was set to something other than a multiple of
PNG_ROW_BUFFER_SIZE; the debug test failed to add the buffer 'start'.
Signed-off-by: John Bowler <jbowler@acm.org>
This is still a work in progress but it seems fairly stable (if not exactly 100%
optimal). pngcp now allows 'all' for some options, which iterates through all
possible settings (this reliably produces the smallest IDAT that libpng can
produce with those settings). It also contains a --search command line option
which attempts to optimize this by skipping pointless tests; it is close, most
of the time, but not perfect.
Signed-off-by: John Bowler <jbowler@acm.org>
There are two separate problems. The first is that the CMINFO optimization code
gets run twice on any PNG IDAT stream longer than 2048 bytes, and the second
time can overwrite bytes 2048 and 2049, destroying the output.
The second is that one of the (debug) checks was slightly wrong (< where <=
should have been used), and this causes the write to abort maybe 1/2048 of the
time.
Signed-off-by: John Bowler <jbowler@acm.org>