Age | Commit message | Author |
|
There was an unintended recoverable error in a test file. It wasn't
hurting anything, but it was obscuring the actual intent of the test.
|
|
This is in preparation for implementing page groups.
|
|
Add a newline unconditionally before endstream even if a newline was
already written as part of the stream data.
|
|
Bad /W in an xref stream could cause a division by zero error. Now
this is handled as a special case.
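The /W guard above can be sketched as follows. This is a hypothetical illustration, not qpdf's actual code; the names `entrySizeFromW` and `entryCount` are invented for the sketch:

```cpp
#include <numeric>
#include <stdexcept>
#include <vector>

// An xref stream's /W array gives the byte width of each field in an
// entry. A damaged file may supply widths that sum to zero; dividing the
// stream length by that sum to count entries would then divide by zero.
// Guard before dividing.
int entrySizeFromW(std::vector<int> const& W)
{
    int size = std::accumulate(W.begin(), W.end(), 0);
    if (size <= 0) {
        throw std::runtime_error("xref stream: invalid /W array");
    }
    return size;
}

int entryCount(std::vector<int> const& W, int stream_length)
{
    return stream_length / entrySizeFromW(W);  // safe: size > 0
}
```

With the guard, a zero-width /W array becomes a reportable error instead of a crash.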
|
|
Remove problematic test files
|
|
Also accept more errors than before.
|
|
Eliminate PCRE and find endobj not preceded by endstream. Be more lax
about placement of endstream and endobj.
|
|
Sometimes we want to ignore bad tokens rather than having them throw
an exception. A coverage case is commented out here and added in a
later commit.
|
|
Also fix a bug resulting from incorrect use of PointerHolder because
of this unused parameter.
|
|
Passed arguments to the constructor in the wrong order.
|
|
main() had gotten absurdly long. Split it into reasonable chunks. This
refactoring is in preparation for handling splitting output into
single pages.
|
|
When parsing content streams, allow content to be split arbitrarily
across stream boundaries.
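The idea behind this change can be sketched with a hypothetical helper (`joinContents` is invented here; qpdf's implementation works at the tokenizer level rather than by building one big string):

```cpp
#include <string>
#include <vector>

// A page's /Contents may be an array of streams, and a single token
// (an operator or a number) may be split across the boundary between two
// streams. Parsing each stream separately would break such tokens;
// concatenating the raw bytes first lets the tokenizer see them as one
// continuous content stream.
std::string joinContents(std::vector<std::string> const& streams)
{
    std::string joined;
    for (auto const& s : streams) {
        joined += s;  // no separator: the split may fall mid-token
    }
    return joined;
}
```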
|
|
Very badly corrupted files may not have a retrievable root dictionary.
Handle that as a special case so that a more helpful error message can
be provided.
|
|
When requested, QPDFWriter will do more aggressive prechecking of streams
to make sure it can actually succeed in decoding them before
attempting to do so. This will allow preservation of raw data even
when the raw data is corrupted relative to the specified filters.
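The precheck idea can be sketched as follows. This is a minimal illustration, assuming a decode step that can be run against a discarding sink; `canDecode` is a hypothetical name, not QPDFWriter's API:

```cpp
#include <functional>
#include <stdexcept>

// Before promising to write a stream with its filters applied, try
// decoding the whole stream into a discarding sink. If decoding throws,
// keep the raw (possibly corrupted) bytes instead of failing mid-write.
bool canDecode(std::function<void()> decode_to_discard)
{
    try {
        decode_to_discard();
        return true;   // safe to write filtered data
    } catch (...) {
        return false;  // preserve raw data instead
    }
}
```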
|
|
QPDFObjectHandle::parseInternal now issues warnings instead of
throwing exceptions for all error conditions that it finds (except
internal logic errors) and has stronger recovery for things like
invalid tokens and malformed dictionaries. This should improve qpdf's
ability to recover from a wide range of broken files that currently
cause it to fail.
|
|
fixes #117
fixes #118
fixes #119
fixes #120
Several other infinite loop bugs were fixed by previous changes.
Include their test files in the test suite.
|
|
During parsing of an object, sometimes parts of the object have to be
resolved. An example is stream lengths. If such an object directly or
indirectly points to the object being parsed, it can cause an infinite
loop. Guard against all cases of re-entrant resolution of objects.
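The guard can be sketched like this. It is a simplified illustration of the re-entrancy check, with invented names (`Resolver`, `resolve`), not qpdf's actual resolution code:

```cpp
#include <functional>
#include <set>
#include <stdexcept>

// If resolving object N (e.g. to get a stream length) ends up asking for
// object N again, a naive resolver recurses forever. Tracking the set of
// objects currently being resolved turns the loop into an error.
class Resolver
{
  public:
    // parse stands in for the work of parsing the object; it may itself
    // call resolve() for other objects the parsed object depends on.
    void resolve(int objid, std::function<void(Resolver&)> parse)
    {
        if (resolving_.count(objid)) {
            throw std::logic_error("loop detected while resolving object");
        }
        resolving_.insert(objid);
        parse(*this);
        resolving_.erase(objid);
    }

  private:
    std::set<int> resolving_;
};
```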
|
|
This is CVE-2017-9208.
The QPDF library uses object ID 0 internally as a sentinel to
represent a direct object, but prior to this fix, it was not blocking
handling of 0 0 obj or 0 0 R as a special case. Creating an object in
the file with 0 0 obj could cause various infinite loops. The PDF spec
doesn't allow object 0. Having qpdf handle object 0 might be a
better fix, but changing all the places in the code that assume objid
== 0 means a direct object would be risky.
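The special-case check can be sketched as follows; `checkObjectId` is a hypothetical name for illustration, not qpdf's actual function:

```cpp
#include <stdexcept>

// Object number 0 is reserved: qpdf uses it internally as a sentinel for
// direct objects, and the PDF spec reserves entry 0 of the xref table for
// the head of the free list. So "0 0 obj" or "0 0 R" in a file must be
// rejected rather than treated as a real indirect object.
void checkObjectId(int objid, int generation)
{
    if (objid == 0) {
        throw std::runtime_error("object with ID 0 is not allowed");
    }
    (void)generation;  // generation handling omitted in this sketch
}
```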
|
|
This is CVE-2017-9209.
|
|
This is CVE-2017-9210.
The description string for an error message included unparsing an
object, which is too complex an operation to attempt while throwing an
exception. There was only one example of this in the entire codebase,
so it is not a pervasive problem. Fixing this eliminated one class of
infinite loop errors.
|
|
Working with absolute paths makes debugging easier, but some called
scripts require / as the directory separator and won't work otherwise.
|
|
/dev/null is not portable, so use File::Spec instead, which provides
portable "paths", notably "nul" on Windows. I changed all places
with hard-coded /dev/null to be safe, though I think it is only a
problem in direct system calls, because the other executed commands go
through sh.exe from MSYS, which should itself map /dev/null to NUL. The
tests still pass, so this shouldn't have done any harm.
|
|
expr needs ARG + ARG.
Quote paths to support spaces.
|
|
Shebang doesn't work well on Windows.
|