This improves the speed of verifying Wii WIA/RVZ files.
For me, the verification speed for LZMA2-compressed files
has gone from 11-12 MiB/s to 13-14 MiB/s.
One thing VolumeVerifier does to achieve parallelism is to
compute hashes for one chunk of data while reading the next
chunk of data. In master, when reading data from a Wii
partition, each such chunk is 32 KiB. This is normally fine,
but with WIA and RVZ it leads to rather lopsided read times
(without the compute times being lopsided): The first 32 KiB
of each 2 MiB takes a long time to read, and the remaining
part of the 2 MiB can be read nearly instantly. (The WIA/RVZ
code has to read the entire 2 MiB in order to compute hashes
which appear at the beginning of the 2 MiB, and then caches
the result afterwards.) This leads to us at times not doing
much reading and at other times not doing much computation.
To improve this, this change makes us use 2 MiB chunks
instead of 32 KiB chunks when reading from Wii partitions.
(block = 32 KiB, group = 2 MiB)
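A minimal sketch of the read/compute pipeline described above, with
illustrative names (the real VolumeVerifier code is structured differently):
```cpp
#include <cstdint>
#include <future>
#include <utility>
#include <vector>

using u8 = std::uint8_t;  // Dolphin-style integer typedefs
using u64 = std::uint64_t;

constexpr u64 CHUNK_SIZE = 2 * 1024 * 1024;  // was 32 KiB for Wii partitions

void ReadChunk(u64 offset, std::vector<u8>* out);  // stand-in for the blob read
void HashChunk(const std::vector<u8>& data);       // stand-in for the hash update

void Verify(u64 total_size)
{
  std::vector<u8> current, next;
  ReadChunk(0, &current);
  for (u64 offset = CHUNK_SIZE; offset < total_size; offset += CHUNK_SIZE)
  {
    // Kick off the next read asynchronously, then hash the chunk we already have.
    auto read = std::async(std::launch::async, [&] { ReadChunk(offset, &next); });
    HashChunk(current);
    read.wait();
    std::swap(current, next);
  }
  HashChunk(current);  // the final chunk
}
```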
This can't actually happen in practice due to how WAD files work,
but the last commit makes supporting it very easy, so we might
as well do so.
The performance gains of doing this aren't too important since you
normally wouldn't run into any disc image that has overlapping blocks
(which by extension means overlapping partitions), but this change also
lets us get rid of things like VolumeVerifier's mutex that used to
exist just for the sake of handling overlapping blocks.
Panic alerts in DiscIO can potentially be very annoying, since
large numbers of them can pop up when loading the game list
if it contains some particularly weird files.
This was a much bigger problem back in 5.0 with its
"Tried to decrypt data from a non-Wii volume" panic alert, but
I figured I would take it all the way and remove the remaining
panic alerts that can show up when loading the game list.
I have exempted uses of ASSERT/ASSERT_MSG since they indicate
a bug in Dolphin rather than a malformed file.
The loop in WIARVZFileReader::Chunk::Read could terminate
prematurely if the size argument was smaller than the size
of an exception list which had only been partially loaded.
Some of the device names can be ambiguous and require fully or partly
qualifying the name (e.g. IOS::HLE::FS::) in a somewhat verbose way.
Additionally, insufficiently qualified names are prone to breaking.
Consider the example of IOS::HLE::FS:: (namespace) and
IOS::HLE::Device::FS (class). If we use FS::Foo in a file that doesn't
know about the class, everything will work fine. However, as soon as
Device::FS is declared via a header include or even just forward
declared, that code will cease to compile because FS:: now resolves
to Device::FS if FS::Foo was used in the Device namespace.
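A contrived illustration of that failure mode (simplified; not the actual headers):
```cpp
namespace IOS::HLE
{
namespace FS
{
struct FileSystem;
}

namespace Device
{
class FS;  // merely forward declaring this class changes name lookup below

class ES
{
  // Unqualified lookup inside IOS::HLE::Device now finds the class Device::FS
  // instead of the namespace IOS::HLE::FS, so this line fails to compile:
  //   FS::FileSystem* m_fs;
  // The verbose, fully qualified spelling is required instead:
  IOS::HLE::FS::FileSystem* m_fs = nullptr;
};
}  // namespace Device
}  // namespace IOS::HLE
```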
It also leads to having to write IOS::ES:: to access ES types and
utilities even for code that is already under the IOS namespace.
The fix for this is simple: rename the device classes and give them
a "device" suffix in their names if the existing ones may be ambiguous.
This makes it clear whether we're referring to the device class or to
something else.
This does not make names any longer to type, considering it lets us get rid of the
Device namespace, which is now wholly unnecessary.
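Illustrated with the FS example (names simplified):
```cpp
namespace IOS::HLE
{
namespace FS
{
struct FileSystem;  // the namespace keeps its short name
}

class FSDevice;  // the device class gets a "Device" suffix; no Device namespace

// "FS::" is now unambiguous wherever it appears.
}  // namespace IOS::HLE
```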
There are no functional changes in this commit.
A future commit will fix unnecessarily qualified names.
We want to use positional arguments in translatable strings
that have more than one argument so that translators can change
the order of them, but the question is: Should we also use
positional arguments in translatable strings with only one
argument? I think it makes most sense that way, partially
so that translators don't even have to be aware of the
non-positional syntax and partially because "translatable
strings use positional arguments" is an easier rule for us
to remember than "translatable strings which have more than
one argument use positional arguments". But let me know if
you have a different opinion.
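Concretely, the two alternatives being weighed, in fmt syntax (the string is
illustrative):
```cpp
// Even a single-argument translatable string would use the positional form:
fmt::format("Errors were found in {0} blocks.", num_blocks);
// rather than the non-positional form:
fmt::format("Errors were found in {} blocks.", num_blocks);
```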
Fixes the following warning:
../../../../../../Core\DiscIO/DirectoryBlob.h:156:3: warning: explicitly defaulted move constructor is implicitly deleted [-Wdefaulted-function-deleted]
DirectoryBlobReader(DirectoryBlobReader&&) = default;
^
../../../../../../Core\DiscIO/DirectoryBlob.h:205:22: note: move constructor of 'DirectoryBlobReader' is implicitly deleted because field 'm_encryption_cache' has a deleted move constructor
WiiEncryptionCache m_encryption_cache;
^
One nice benefit of fmt is that we can use positional arguments
in localizable strings. This is a feature which has been
requested for the Korean translation of strings like
"Errors were found in %zu blocks in the %s partition."
and which will no doubt be useful for other languages too.
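With fmt's positional syntax, the placeholders can be reordered by the
translation alone, without any code changes:
```cpp
fmt::format("Errors were found in {0} blocks in the {1} partition.",
            num_blocks, partition_name);
// A translation can reference {1} before {0} and still receive the right values.
```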
Noticed missing include as a build failure on gcc-11:
```
[ 26%] Building CXX object Source/Core/DiscIO/CMakeFiles/discio.dir/WIACompression.cpp.o
../../../../Source/Core/DiscIO/WIACompression.cpp: In lambda function:
../../../../Source/Core/DiscIO/WIACompression.cpp:170:31: error: 'numeric_limits' is not a member of 'std'
170 | std::min<size_t>(std::numeric_limits<unsigned int>().max(), x));
| ^~~~~~~~~~~~~~
../../../../Source/Core/DiscIO/WIACompression.cpp:170:46: error: expected primary-expression before 'unsigned'
170 | std::min<size_t>(std::numeric_limits<unsigned int>().max(), x));
| ^~~~~~~~
```
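The fix is a one-line include, since std::numeric_limits is defined in <limits>:
```
#include <limits>
```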
Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
By calling ZSTD_CCtx_setPledgedSrcSize, we can let zstd know
how large a chunk is going to be before we start compressing
it, which lets zstd avoid allocating more memory than needed
for various internal buffers. This greatly reduces the RAM usage
when using a high compression level with a small chunk size,
and doesn't have much of an effect in other circumstances.
A side effect of calling ZSTD_CCtx_setPledgedSrcSize is that
zstd by default will write the uncompressed size into the
compressed data stream as metadata. In order to save space,
and since the decompressed size can be figured out through
the structure of the RVZ format anyway, we disable writing
the uncompressed size by setting ZSTD_c_contentSizeFlag to 0.
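A sketch of the two calls described above (error handling omitted; chunk_size
stands for the uncompressed size of the chunk about to be compressed):
```cpp
#include <zstd.h>

ZSTD_CCtx* const cctx = ZSTD_createCCtx();
// Tell zstd the exact input size up front so internal buffers are sized to fit.
ZSTD_CCtx_setPledgedSrcSize(cctx, chunk_size);
// Don't write the uncompressed size into the frame; RVZ can derive it anyway.
ZSTD_CCtx_setParameter(cctx, ZSTD_c_contentSizeFlag, 0);
```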
It's possible (but rare) for a WIA or RVZ file to support
this for some partitions but not all, and for the game and
the blob code to disagree on how large a partition is.
The heuristic was not allocating enough space for Metroid: Other M,
at least when using the default settings. (This didn't break the
file, it just caused some headers to be placed at the end of the
file instead of at the start and wasted a few hundred kilobytes.)
The new hash check catches essentially all desync problems
that VolumeVerifier can catch, so from the user's perspective,
such problems will result in Dolphin refusing to start the
game on netplay rather than actually getting a desync.
The "FindLibLZMA.cmake" module in CMake versions prior to 3.14 do not
set an alias like how Externals/liblzma/CMakeLists.txt does, so builds
performed using one of those older CMake versions will fail if the
system LZMA library is detected. To fix this, we need to link against
"lzma" instead of "LibLZMA::LibLZMA".
Fixes: b59ef81a7e ("WIA: Implement bzip2, LZMA, and LZMA2 decompression")
When I first made VolumeVerifier, I figured that the distinction
between an unsigned ticket and an unsigned TMD was a technical
detail that users would have no reason to care about. However,
while this might be true for discs, it isn't equally true for
WADs, due to the widespread practice of fakesigning tickets to
set the console ID to 0. This practice does not require
fakesigning the TMD (though apparently people do it anyway,
at least sometimes...), and the presence of a correctly signed
TMD is a useful indicator that the contents have not been
tampered with, even if the ticket isn't correctly signed.
Instead of comparing the game ID, revision, disc number and name,
we can compare a hash of important parts of the disc including
all the aforementioned data but also additional data such as the
FST. The primary reason why I'm making this change is to let us
catch more desyncs before they happen, but this should also fix
https://bugs.dolphin-emu.org/issues/12115. As a bonus, the UI can
now distinguish the case where a client doesn't have the game at
all from the case where a client has the wrong version of the game.
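Roughly, the idea looks like this (u8, Volume, HashContext and the getters are
illustrative stand-ins; the real code hashes more data than shown):
```cpp
std::array<u8, 20> ComputeSyncHash(const Volume& volume)
{
  // Feed identifying and structural data into one digest and compare that
  // single value across all netplay clients.
  HashContext ctx;
  ctx.Update(volume.GetGameID());
  ctx.Update(volume.GetRevision());
  ctx.Update(volume.GetDiscNumber());
  ctx.Update(volume.GetInternalName());
  ctx.Update(volume.ReadFST());  // also catches mismatched file layouts
  return ctx.Finalize();
}
```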
It is my opinion that nobody should use NKit disc images without
being aware of the drawbacks of them. Since it seems like almost
nobody who is using NKit disc images knows what NKit is (hmm, now
how could that have happened...?), I am adding a warning to Dolphin
so that you can't run NKit disc images without finding out about the
drawbacks. In case someone really does want to use NKit disc images,
the warning has a "Don't show this again" option. Unfortunately, I
can't retroactively add the warning where it's most needed:
in Dolphin 5.0, which does not support Wii NKit disc images.
This could cause read errors if chunks were laid out a certain
way in the file and the whole chunk wasn't being read at once.
Should fix https://bugs.dolphin-emu.org/issues/12184.
PURGE isn't especially useful, while requiring some annoying
special handling in the file format. If you want no compression,
use NONE. If you want fast compression, use Zstandard.
Gets rid of the need to seek to the end of the file
when opening a file.
The downside of this is that we waste a little space,
since we can't know in advance exactly how much
space the compressed parts of the headers will need.
This is useful for the way Dolphin scrubs Wii discs.
The encrypted data is what gets zeroed out, but this
zeroed out data then gets decrypted before being stored,
and the resulting data does not compress well.
However, each block of decrypted scrubbed data is
identical given the same encryption key, and there's
nothing stopping us from making multiple group entries
point to the same offset in the file, so we only have
to store one copy of this data per partition.
For reference, wit zeroes out the decrypted data,
but Dolphin's WIA writer can't do this because it currently
doesn't know which parts of the disc are scrubbed.
This is also useful for things such as storing Datel discs
full of 0x55 blocks (representing unreadable blocks)
without compression enabled.
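One way to implement the reuse when writing, keyed on a hash of the group data
(Digest, ComputeDigest and AppendToFile are illustrative stand-ins):
```cpp
// Remember the file offset of the first copy of each distinct group; later
// identical groups (such as decrypted scrubbed data) reuse that offset.
std::map<Digest, u64> stored_groups;

u64 StoreGroup(const std::vector<u8>& group_data)
{
  const Digest digest = ComputeDigest(group_data);
  if (const auto it = stored_groups.find(digest); it != stored_groups.end())
    return it->second;  // point this group entry at the existing data
  const u64 offset = AppendToFile(group_data);
  stored_groups.emplace(digest, offset);
  return offset;
}
```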
Fixes https://bugs.dolphin-emu.org/issues/10654.
To quote the documentation file included with the program tgctogcm:
"TGC's are miniaturized .gcm images with a 32kB header.
The embedded gcm contains some bogus data, namely:
-FST Location (0x424 in gcm)
-DOL Location (0x420 in gcm)
-FST File offsets (all files are offset/spoofed by a certain amount)"
Dolphin has been handling the values at 0x420 and 0x424 by simply
overwriting them with a working value (just like tgctogcm does),
but it has used a different approach for the file offsets in the FST.
Instead of changing the offsets that are stored in the FST, Dolphin
changed where the files actually are placed on the virtual disc.
My hope was that this would make the loading times more accurate to
how they are when running a TGC file as part of a larger disc.
However, there are TGC files where we would need to move files
backwards on the disc in order to do this (this is what issue
10654 is about), so the approach we have been using is flawed.
This change makes Dolphin overwrite offsets in the FST instead, like
tgctogcm does. Other than making Dolphin handle the affected TGC files
correctly, this change also makes it so that unnecessary padding data
isn't written if you use Dolphin to convert a TGC file to an ISO file.
This feature is not actually implemented in Dolphin as of now, but I'm
planning to add it in the near future as part of a larger feature.
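A sketch of the new FST handling, using the standard GameCube FST entry layout
(the shift value and its sign are simplified here; the real values come from
the TGC header):
```cpp
struct FSTEntry  // 12 bytes per entry, big-endian on disc
{
  u32 name_offset_and_flags;
  u32 file_offset;  // for file entries: where the file's data lives on disc
  u32 file_size;
};

void UnspoofFileOffsets(std::vector<FSTEntry>* fst, u32 offset_shift)
{
  for (FSTEntry& entry : *fst)
  {
    if (IsFileEntry(entry))               // directory entries store other data here
      entry.file_offset -= offset_shift;  // rewrite the offset; don't move the file
  }
}
```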
This is intended to catch WIA files which have been created using
wit's default parameters (40 MiB block size), once the WIA PR is
merged. The check does however also work for GCZ files – not that
I think anyone has a GCZ file with a block size that large.
The code was actually already rather well adapted for this.
We more or less just have to skip ParseDisc and run
ParsePartitionData directly. This required the PartitionHeader
struct to be removed (which wasn't that useful anyway).
If we start 31 KiB into a 32 KiB block and want to mark 2 KiB
of data as used, we need to mark 2 blocks as used, not just 1.
This problem is avoided when calling MarkAsUsed from
MarkAsUsedE, since MarkAsUsedE aligns to 32 KiB on its own.
Most calls to MarkAsUsed are from MarkAsUsedE, which is why
this hasn't been a noticeable problem in the past.
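In other words, the block range has to be computed from both the start and the
end of the marked range (illustrative sketch; MarkBlockAsUsed stands in for the
actual bookkeeping):
```cpp
constexpr u64 BLOCK_SIZE = 0x8000;  // 32 KiB

void MarkAsUsed(u64 offset, u64 size)
{
  const u64 first_block = offset / BLOCK_SIZE;
  const u64 last_block = (offset + size - 1) / BLOCK_SIZE;
  // Starting 31 KiB into block 0 with size 2 KiB gives first_block = 0 and
  // last_block = 1, so both blocks get marked rather than just the first.
  for (u64 block = first_block; block <= last_block; ++block)
    MarkBlockAsUsed(block);
}
```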
The constant DESIRED_BUFFER_SIZE was determined by multiplying the
old hardcoded value 32 by the default GCZ block size of 16 KiB (512 KiB).
Not sure if it actually is the best value, but it seems fine.
Add a function that safely returns whether a character is printable,
i.e. whether 0x20 <= c <= 0x7e holds.
This is done in several places in our codebase and it's easy to run
into undefined behaviour if the C version defined in <cctype>
is used instead of this one, since its behaviour is undefined
if the character is not representable as an unsigned char.
This fixes MemoryViewWidget.
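A minimal version of such a helper (the actual function in Dolphin may differ
in name and details):
```cpp
// Plain comparisons are safe for any char value, unlike std::isprint, whose
// behaviour is undefined for values not representable as unsigned char.
constexpr bool IsPrintableCharacter(char c)
{
  return c >= 0x20 && c <= 0x7e;
}
```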
A small, nonexhaustive set of warning fixes. The DiscIO Volume change
is a workaround for a GCC bug [1] that causes returning an unengaged
std::optional to emit annoying -Wmaybe-uninitialized warnings.
This last change alone fixes pages upon pages of warnings since
Volume.h is included from several files.
-Wstringop-truncation is another irrelevant warning for us, but
unfortunately there seems to be no way to disable it without
adding ugly pragmas wherever the warning appears.
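For reference, the kind of pragma that would be needed at each warning site
(the strncpy call is an illustrative trigger):
```cpp
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstringop-truncation"
  strncpy(dest, src, sizeof(dest));  // the call that triggers the warning
#pragma GCC diagnostic pop
```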
string_view is a non-owning view over character data, so constant strings
can be referenced without the allocations and copies std::string would incur.
The unordered_set<> also adds extra runtime overhead. For small arrays,
a simple linear search works. For larger arrays, std::binary_search()
works better than linear but without the unordered_set<> overhead.
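The two lookup patterns described above, sketched (the IDs are placeholders;
the larger array must stay sorted for std::binary_search to be valid):
```cpp
#include <algorithm>
#include <array>
#include <string_view>

// Small array: a plain linear search is fine.
constexpr std::array<std::string_view, 2> small_ids = {"AAAA01", "BBBB01"};
bool InSmall(std::string_view id)
{
  return std::find(small_ids.begin(), small_ids.end(), id) != small_ids.end();
}

// Larger array, kept sorted: binary search, without unordered_set<> overhead.
constexpr std::array<std::string_view, 4> large_ids = {"AAAA01", "BBBB01",
                                                       "CCCC01", "DDDD01"};
bool InLarge(std::string_view id)
{
  return std::binary_search(large_ids.begin(), large_ids.end(), id);
}
```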
ShouldBeDualLayer(): Removed a duplicate "SK8X52" entry.
Fixes using DirectoryBlob on extracted games that were unencrypted
prior to being extracted.
(One day I'll make DirectoryBlob actually support raw reads and then
the order of these two won't matter...)
This happens if someone manually downloads a regular datfile from
redump.org and puts it where Dolphin stores datfiles. Dolphin needs
"special" datfiles that contain fields for serials and versions.
Before this change, all discs (except Datel discs) would show up as
"Unknown disc" when using a regular datfile.
This was making it impossible to use the Redump.org integration
without first manually creating a Redump folder in the Cache folder.
https://bugs.dolphin-emu.org/issues/11885
PARTITION_NONE technically has a runtime static constructor otherwise.
This allows compile-time instances of Partition to be created without
the use of a static constructor.
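A sketch of what makes that possible (simplified from the real struct):
```cpp
#include <cstdint>

struct Partition
{
  constexpr Partition() = default;
  constexpr explicit Partition(std::uint64_t offset_) : offset(offset_) {}
  std::uint64_t offset = 0;
};

// Evaluated at compile time; no runtime static constructor is emitted.
constexpr Partition PARTITION_NONE(UINT64_MAX);
```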
The old implementation of this was not able to distinguish between
a title that had the common key index set to 1 because it actually
was Korean and a title that had the common key index set to 1 due to
fakesigning. This new implementation solves the problem by
decrypting a content with each possible common key and checking
which result matches the provided SHA-1 hash.
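In pseudocode-like C++ (hypothetical helper names; the real code uses Dolphin's
crypto utilities):
```cpp
std::optional<size_t> DetectCommonKeyIndex(const std::vector<u8>& encrypted,
                                           const Sha1Digest& expected_hash,
                                           const std::vector<AesKey>& common_keys)
{
  for (size_t i = 0; i < common_keys.size(); ++i)
  {
    // The key that reproduces the TMD's hash of the decrypted content wins.
    if (Sha1(AesCbcDecrypt(common_keys[i], encrypted)) == expected_hash)
      return i;
  }
  return std::nullopt;  // no key matches; the content is probably corrupt
}
```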
The problem that the old implementation causes has only been reported
to affect a certain pirated WAD of Chronos Twins DX (WC6EUP), but it's
possible that the problem would start affecting more WADs if we add
support for the vWii common key (which uses index 2). Adding support
for the vWii common key would also prevent us from using the simpler
solution of always forcing the index to 0 if the title is not Korean.
...in addition to the existing function CreateVolume
(renamed from CreateVolumeFromFilename).
Lets code easily add constraints such as not letting the user
select a WAD file when using the disc changing functionality.
Since C++17, the standard library provides a non-member std::size()
that also operates on regular C arrays. Given that, we can replace
usages of ArraySize with it where applicable.
In many cases, we can just change the actual C array ArraySize() was
called on into a std::array and just use its .size() member function
instead.
In some other cases, we can collapse the loops they were used in into
ranged-for loops, eliminating the need for an explicit bounds query,
as sketched below.
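The three replacement patterns, sketched:
```cpp
#include <array>
#include <cstddef>
#include <iterator>

static constexpr int values[] = {1, 2, 3};

// ArraySize(values) becomes:
constexpr std::size_t count = std::size(values);

// Or the C array itself becomes a std::array:
static constexpr std::array<int, 3> values2{1, 2, 3};
constexpr std::size_t count2 = values2.size();

// Or the bounds query disappears entirely into a ranged-for loop:
int Sum()
{
  int sum = 0;
  for (const int v : values)
    sum += v;
  return sum;
}
```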
Removes redundant initializers from the constructor and provides
initializers for all members that don't already have one for consistency
(and deterministic initial state).
Given the volume verifier has quite a few non-trivial objects within it,
it's best to default the destructor within the cpp file to prevent
inlining complex destruction logic elsewhere, while also making it nicer
if a forward-declared type is ever used in a member variable.
The difference between Dolphin's game IDs and GameTDB's game IDs
is that GameTDB uses four characters for non-disc titles, whereas
Dolphin uses six characters for all titles.
This fixes:
- TitleDatabase considering Datel discs to be NHL Hitz 2002
- Gecko code downloading not working for discs with IDs starting with P
- Cover downloading mixing up discs with channels (e.g. Mario Kart Wii
and Mario Kart Channel) and making extra HTTP requests. (Android was
actually doing a better job at this than DolphinQt!)
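A sketch of the ID mapping implied above (hypothetical helper; Dolphin's game
IDs are always six characters):
```cpp
// GameTDB drops the last two characters for non-disc titles such as channels,
// while disc titles keep the full six-character ID.
std::string GetGameTDBID(const std::string& game_id, bool is_disc)
{
  return is_disc ? game_id : game_id.substr(0, 4);
}
```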