A vector of length 0 can have a null data pointer, which causes UB when
passed to memcpy, so only copy when we actually have data to copy. This
caused crashes in certain cases when compiling Dolphin with Clang and
LTO enabled.
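The fix boils down to guarding the call; a minimal sketch of the pattern (the function and names are hypothetical, not Dolphin's actual call site):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// std::vector::data() may legally return nullptr when size() == 0, and
// passing a null pointer to memcpy is undefined behavior even when the
// count is 0, so skip the call entirely for empty vectors.
void CopyInto(const std::vector<std::uint8_t>& src, std::uint8_t* dst)
{
  if (!src.empty())
    std::memcpy(dst, src.data(), src.size());
}
```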
A deep-copy method CopyReader has been added to BlobReader (virtual) and all of its subclasses (override). It creates a second BlobReader that opens the same data but has an independent read pointer, so reads on the copy don't interfere with reads done on the original reader.
As part of this, IOFile gained code to create a deep-copied IOFile pointing at the same file, using platform-specific code to find the file's identity from the file pointer and open a new handle to it. A small piece was also added to FileInfo to enable a deep copy, but its only subclass at this time already had a copy constructor, so this was relatively minor.
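A rough sketch of the shape this takes, assuming the names above; the platform-specific part is shown for Linux only (macOS would instead use fcntl's F_GETPATH, and Windows something like GetFinalPathNameByHandle):

```cpp
#include <climits>
#include <cstdio>
#include <memory>
#include <unistd.h>  // readlink (POSIX)

class BlobReader
{
public:
  virtual ~BlobReader() = default;
  // Deep copy: a second reader over the same data with its own,
  // independent read pointer.
  virtual std::unique_ptr<BlobReader> CopyReader() const = 0;
};

// Linux-only sketch: recover the path of an open FILE* through
// /proc/self/fd, then open a fresh handle with its own file position.
std::FILE* DuplicateFile(std::FILE* file)
{
  char proc_path[64];
  std::snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d",
                fileno(file));

  char file_path[PATH_MAX + 1] = {};
  if (readlink(proc_path, file_path, PATH_MAX) < 0)
    return nullptr;

  return std::fopen(file_path, "rb");
}
```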
Previously, we had WBFS and CISO which both returned an upper bound
of the size, and other formats which returned an accurate size. But
now we also have NFS, which returns a lower bound of the size. To
allow VolumeVerifier to make better informed decisions for NFS, let's
use an enum instead of a bool for the type of data size a blob has.
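Sketched out, with assumed names (the actual enum in Dolphin may differ):

```cpp
// A bool "accurate or not" can't distinguish WBFS/CISO (upper bound)
// from NFS (lower bound), so record which kind of bound the reported
// size is and let callers such as VolumeVerifier decide what to do.
enum class DataSizeType
{
  Accurate,    // most formats: exactly the size of the data
  UpperBound,  // WBFS and CISO: the data is at most this large
  LowerBound,  // NFS: the data is at least this large
};
```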
Needed for the next commit. NFS disc images are hashed but not encrypted.
While we're at it, also get rid of SupportsIntegrityCheck.
It does the same thing as the old IsEncryptedAndHashed and the new HasWiiHashes.
New dolphin-tool command: "header"
-b / --block_size
-c / --compression
-l / --compression_level
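An invocation would presumably look something like this (the -i/--input flag is an assumption based on dolphin-tool's other commands; only the three flags above are from this change):

```
dolphin-tool header -i game.rvz -b -c -l
```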
The informative RVZ/WIA header2 value "compression_level" is now an s32 instead of a u32, because negative compression levels are a thing.
Speaking of which, it is now possible to use negative compression levels in dolphin-tool's convert command (not in the GUI, though).
SPDX standardizes how source code conveys its copyright and licensing
information. See https://spdx.github.io/spdx-spec/1-rationale/ . SPDX
tags are adopted in many large projects, including things like the Linux
kernel.
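In practice this replaces the prose license boilerplate at the top of each source file with a short machine-readable tag; for Dolphin (GPLv2+) that looks roughly like:

```cpp
// Copyright 2021 Dolphin Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
```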
The loop in WIARVZFileReader::Chunk::Read could terminate
prematurely if the size argument was smaller than the size
of an exception list which had only been partially loaded.
We want to use positional arguments in translatable strings
that have more than one argument so that translators can change
the order of them, but the question is: Should we also use
positional arguments in translatable strings with only one
argument? I think it makes most sense that way, partially
so that translators don't even have to be aware of the
non-positional syntax and partially because "translatable
strings use positional arguments" is an easier rule for us
to remember than "translatable strings which have more than
one argument use positional arguments". But let me know if
you have a different opinion.
One nice benefit of fmt is that we can use positional arguments
in localizable strings. This is a feature which has been
requested for the Korean translation of strings like
"Errors were found in %zu blocks in the %s partition."
and which will no doubt be useful for other languages too.
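A minimal sketch with fmt showing why this matters: with positional arguments, a translation can reorder the fields without any change at the call site.

```cpp
#include <fmt/format.h>
#include <cstddef>
#include <string>

int main()
{
  const std::size_t blocks = 42;
  const std::string partition = "DATA";

  // English source string, using positional arguments:
  const std::string english = fmt::format(
      "Errors were found in {0} blocks in the {1} partition.",
      blocks, partition);

  // A translation is free to mention the partition first:
  const std::string reordered = fmt::format(
      "In the {1} partition, errors were found in {0} blocks.",
      blocks, partition);
}
```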
By calling ZSTD_CCtx_setPledgedSrcSize, we can let zstd know
how large a chunk is going to be before we start compressing
it, which lets zstd avoid allocating more memory than needed
for various internal buffers. This greatly reduces the RAM usage
when using a high compression level with a small chunk size,
and doesn't have much of an effect in other circumstances.
A side effect of calling ZSTD_CCtx_setPledgedSrcSize is that
zstd by default will write the uncompressed size into the
compressed data stream as metadata. In order to save space,
and since the decompressed size can be figured out through
the structure of the RVZ format anyway, we disable writing
the uncompressed size by setting ZSTD_c_contentSizeFlag to 0.
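A minimal sketch of the relevant zstd calls, assuming a per-chunk compression context (the names and surrounding structure are illustrative, not Dolphin's actual code):

```cpp
#include <zstd.h>

ZSTD_CCtx* CreateChunkContext(size_t chunk_size, int compression_level)
{
  ZSTD_CCtx* cctx = ZSTD_createCCtx();
  if (!cctx)
    return nullptr;

  ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, compression_level);

  // Pledging the source size lets zstd size its internal buffers for
  // the chunk instead of for an unknown-length stream.
  ZSTD_CCtx_setPledgedSrcSize(cctx, chunk_size);

  // Pledging the size would normally also store it in the frame header;
  // RVZ can recover the decompressed size structurally, so drop it.
  ZSTD_CCtx_setParameter(cctx, ZSTD_c_contentSizeFlag, 0);

  return cctx;
}
```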
It's possible (but rare) for a WIA or RVZ file to support
this for some partitions but not all, and for the game and
the blob code to disagree on how large a partition is.
The heuristic was not allocating enough space for Metroid: Other M,
at least when using the default settings. (This didn't break the
file, it just caused some headers to be placed at the end of the
file instead of at the start and wasted a few hundred kilobytes.)
PURGE isn't especially useful, and it requires some annoying
special handling in the file format. If you want no compression,
use NONE. If you want fast compression, use Zstandard.
Gets rid of the need to seek to the end of the file
when opening a file.
The downside of this is that we waste a little space,
since we can't know in advance exactly how much
space the compressed parts of the headers will need.