This could cause read errors if chunks were laid out a certain
way in the file and the whole chunk wasn't being read at once.
Should fix https://bugs.dolphin-emu.org/issues/12184.
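For illustration, the general shape of such a fix is to loop until the
whole chunk has been read instead of assuming a single read call
returns everything. This is just a sketch with a made-up helper, not
the actual code:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: keep reading until the whole chunk is in the
    // buffer, instead of assuming one fread() call returns all of it.
    bool ReadWholeChunk(std::FILE* file, uint8_t* out, size_t chunk_size)
    {
      size_t bytes_read = 0;
      while (bytes_read < chunk_size)
      {
        const size_t result =
            std::fread(out + bytes_read, 1, chunk_size - bytes_read, file);
        if (result == 0)
          return false;  // EOF or read error before the chunk was complete
        bytes_read += result;
      }
      return true;
    }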
PURGE isn't especially useful, and it requires some annoying
special handling in the file format. If you want no compression,
use NONE. If you want fast compression, use Zstandard.
Gets rid of the need to seek to the end of the file
when opening it.
The downside of this is that we waste a little space,
since we can't know in advance exactly how much
space the compressed parts of the headers will need.
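As a sketch of the trade-off (the names and the fallback behaviour are
made up, not the actual implementation):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // The compressed size of the headers isn't known when the file is
    // created, so reserve a pessimistic amount of space at the start
    // and zero-pad whatever the headers don't end up using.
    std::vector<uint8_t> PadHeaderArea(const std::vector<uint8_t>& compressed_headers,
                                       size_t reserved_size)
    {
      if (compressed_headers.size() > reserved_size)
        return {};  // reservation too small; a real writer would handle this

      std::vector<uint8_t> area(reserved_size, 0);
      std::copy(compressed_headers.begin(), compressed_headers.end(), area.begin());
      return area;
    }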
This is useful for the way Dolphin scrubs Wii discs.
The encrypted data is what gets zeroed out, but this
zeroed-out data then gets decrypted before being stored,
and the resulting data does not compress well.
However, each block of decrypted scrubbed data is
identical given the same encryption key, and there's
nothing stopping us from making multiple group entries
point to the same offset in the file, so we only have
to store one copy of this data per partition.
For reference, wit zeroes out the decrypted data,
but Dolphin's WIA writer can't do this because it currently
doesn't know which parts of the disc are scrubbed.
This is also useful for things such as storing Datel discs
full of 0x55 blocks (representing unreadable blocks)
without compression enabled.
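The deduplication idea, roughly sketched (names are made up, and a
real writer would likely key on a hash of the group data rather than
the data itself):

    #include <cstdint>
    #include <map>
    #include <vector>

    // Before appending a group of data to the output file, check whether
    // identical data was already written; if so, point the new group
    // entry at the existing offset instead of storing a second copy.
    uint64_t StoreGroup(const std::vector<uint8_t>& group_data, uint64_t& file_end,
                        std::map<std::vector<uint8_t>, uint64_t>& seen_groups)
    {
      const auto it = seen_groups.find(group_data);
      if (it != seen_groups.end())
        return it->second;  // reuse the offset of the identical group

      const uint64_t offset = file_end;
      // ... write group_data to the file at 'offset' (omitted) ...
      file_end += group_data.size();
      seen_groups.emplace(group_data, offset);
      return offset;
    }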
Fixes https://bugs.dolphin-emu.org/issues/10654.
To quote the documentation file included with the program tgctogcm:
"TGC's are miniaturized .gcm images with a 32kB header.
The embedded gcm contains some bogus data, namely:
-FST Location (0x424 in gcm)
-DOL Location (0x420 in gcm)
-FST File offsets (all files are offset/spoofed by a certain amount)"
Dolphin has been handling the values at 0x420 and 0x424 by simply
overwriting them with a working value (just like tgctogcm does),
but it has used a different approach for the file offsets in the FST.
Instead of changing the offsets that are stored in the FST, Dolphin
has changed where the files are actually placed on the virtual disc.
My hope was that this would make the loading times more accurate to
how they are when running a TGC file as part of a larger disc.
However, there are TGC files where we would need to move files
backwards on the disc in order to do this (this is what issue
10654 is about), so the approach we have been using is flawed.
This change makes Dolphin overwrite offsets in the FST instead, like
tgctogcm does. Besides making Dolphin handle the affected TGC files
correctly, this change also means that unnecessary padding data
isn't written if you use Dolphin to convert a TGC file to an ISO file.
This conversion feature isn't actually implemented in Dolphin yet, but
I'm planning to add it in the near future as part of a larger feature.
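For illustration, overwriting the FST offsets amounts to something
like the following sketch. It assumes the standard 12-byte FST entry
layout (one flag byte, three name-offset bytes, then a big-endian file
offset and size); the function name and shift parameter are made up:

    #include <cstdint>
    #include <vector>

    // Walk the FST and add a fixed shift to each file's offset field.
    // Directory entries store other data in that field, so skip them.
    void ShiftFSTFileOffsets(std::vector<uint8_t>& fst, size_t entry_count, int64_t shift)
    {
      for (size_t i = 0; i < entry_count; ++i)
      {
        uint8_t* entry = fst.data() + i * 12;
        if (entry[0] != 0)  // nonzero flag byte means directory
          continue;

        // Read the big-endian 32-bit file offset at bytes 4..7.
        uint32_t offset = (uint32_t(entry[4]) << 24) | (uint32_t(entry[5]) << 16) |
                          (uint32_t(entry[6]) << 8) | uint32_t(entry[7]);
        offset = uint32_t(int64_t(offset) + shift);
        entry[4] = uint8_t(offset >> 24);
        entry[5] = uint8_t(offset >> 16);
        entry[6] = uint8_t(offset >> 8);
        entry[7] = uint8_t(offset);
      }
    }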
This is intended to catch WIA files which have been created using
wit's default parameters (40 MiB block size), once the WIA PR is
merged. The check does, however, also work for GCZ files (not that
I think anyone has a GCZ file with a block size that large).
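Something like this sketch, where the threshold and names are guesses
rather than the actual values:

    #include <cstdint>

    // Anything above 1 MiB is treated as suspiciously large here, which
    // comfortably catches wit's 40 MiB default.
    constexpr uint32_t SUSPICIOUS_BLOCK_SIZE = 1024 * 1024;

    bool HasSuspiciouslyLargeBlockSize(uint32_t block_size)
    {
      return block_size > SUSPICIOUS_BLOCK_SIZE;
    }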
The code was actually already rather well adapted for this.
We more or less just have to skip ParseDisc and run
ParsePartitionData directly. This required the PartitionHeader
struct to be removed (which wasn't that useful anyway).
If we start 31 KiB into a 32 KiB block and want to mark 2 KiB
of data as used, we need to mark 2 blocks as used, not just 1.
This problem is avoided when calling MarkAsUsed from
MarkAsUsedE, since MarkAsUsedE aligns to 32 KiB on its own.
Most calls to MarkAsUsed are from MarkAsUsedE, which is why
this hasn't been a noticeable problem in the past.
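The corrected arithmetic, as a sketch with made-up names and 32 KiB
blocks:

    #include <cstdint>
    #include <set>

    constexpr uint64_t BLOCK_SIZE = 0x8000;  // 32 KiB

    // The end of the range must be rounded up to a block boundary:
    // a 2 KiB range starting 31 KiB into a block spans two blocks.
    void MarkAsUsed(std::set<uint64_t>& used_blocks, uint64_t offset, uint64_t size)
    {
      const uint64_t first_block = offset / BLOCK_SIZE;
      const uint64_t end_block = (offset + size + BLOCK_SIZE - 1) / BLOCK_SIZE;
      for (uint64_t block = first_block; block < end_block; ++block)
        used_blocks.insert(block);
    }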
The constant DESIRED_BUFFER_SIZE was determined by multiplying the
old hardcoded value 32 by the default GCZ block size of 16 KiB.
Not sure if it actually is the best value, but it seems fine.
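Spelled out, that's simply:

    #include <cstddef>

    // 32 (old hardcoded multiplier) * 16 KiB (default GCZ block size) = 512 KiB
    constexpr size_t DESIRED_BUFFER_SIZE = 32 * 16 * 1024;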
Add a function that safely returns whether a character is printable,
i.e. whether 0x20 <= c <= 0x7e is true.
This is done in several places in our codebase and it's easy to run
into undefined behaviour if the C version defined in <cctype>
is used instead of this one, since its behaviour is undefined
if the character is not representable as an unsigned char.
This fixes MemoryViewWidget.
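A minimal sketch of such a helper (the exact name may differ); a plain
range check sidesteps <cctype> entirely:

    // Check the ASCII printable range directly instead of calling
    // std::isprint, whose behaviour is undefined for values that aren't
    // representable as unsigned char.
    constexpr bool IsPrintableCharacter(char c)
    {
      return static_cast<unsigned char>(c) >= 0x20 &&
             static_cast<unsigned char>(c) <= 0x7e;
    }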
A small, non-exhaustive set of warning fixes. The DiscIO Volume change
is a workaround for a GCC bug [1] that causes annoying
-Wmaybe-uninitialized warnings when returning an unengaged std::optional.
This last change alone fixes pages upon pages of warnings since
Volume.h is included from several files.
-Wstringop-truncation is another irrelevant warning for us, but
unfortunately there seems to be no way to disable it without
adding ugly pragmas wherever the warning appears.
string_view is a thin, non-owning wrapper around a pointer and a length,
so it's more efficient for constant strings than C++ strings.
The unordered_set<> also adds extra runtime overhead (hashing and
allocation). For small arrays, a simple linear search is enough. For
larger arrays, std::binary_search() beats a linear search while still
avoiding the unordered_set<> overhead.
ShouldBeDualLayer(): Removed a duplicate "SK8X52" entry.
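A sketch of the lookup strategy, with made-up game IDs (the real
tables and their sizes differ):

    #include <algorithm>
    #include <array>
    #include <string_view>

    // Small table: a plain linear search is cheap and needs no setup.
    constexpr std::array<std::string_view, 3> SMALL_IDS{"GABC01", "GDEF02", "GGHI03"};

    bool IsInSmallList(std::string_view id)
    {
      return std::find(SMALL_IDS.begin(), SMALL_IDS.end(), id) != SMALL_IDS.end();
    }

    // Larger table: std::binary_search requires sorted data but avoids
    // the hashing and allocation overhead of an unordered_set.
    constexpr std::array<std::string_view, 4> SORTED_IDS{"GAAA01", "GBBB02",
                                                         "GCCC03", "GDDD04"};

    bool IsInLargeList(std::string_view id)
    {
      return std::binary_search(SORTED_IDS.begin(), SORTED_IDS.end(), id);
    }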
Fixes using DirectoryBlob on extracted games that were unencrypted
prior to being extracted.
(One day I'll make DirectoryBlob actually support raw reads and then
the order of these two won't matter...)
This happens if someone manually downloads a regular datfile from
redump.org and puts it where Dolphin stores datfiles. Dolphin needs
"special" datfiles that contain fields for serials and versions.
Before this change, all discs (except Datel discs) would show up as
"Unknown disc" when using a regular datfile.
This was making it impossible to use the Redump.org integration
without first manually creating a Redump folder in the Cache folder.
https://bugs.dolphin-emu.org/issues/11885