Save one robocopy by copying LICENSE to the build folder instead of copying it twice.
Azure Pipelines:
Switch artifact preparation to PowerShell to be consistent with GitHub Actions.
Remove CPU info step.
Azure Pipelines:
Remove unneeded robocopy for SDL2.dll, use direct path for artifact instead.
GitHub Actions:
Update ISSUE_TEMPLATE exclusion.
Name stuff.
Switch back to default shell/robocopy for artifact preparation.
Separate xenia-vfs-dump artifact.
GitHub Actions/AppVeyor:
Add -mx1 to 7z (supposedly faster).
Exclude subdirectories of docs and .github; this fixes changes to GitHub Actions workflows triggering Azure.
Remove unneeded cd commands.
GitHub Actions:
Use shorter version numbers.
Now that we're using GitHub Actions to automate copying the builds to releases, we can pretty easily query the releases API to get the last build date and a link to the download.
Fixes Xenia not being able to locate unpacked XEXs.
Ideally this should live inside xenia_main.cc though; I need to figure out some way to get it working there.
Writing the FATX header likely means the game is trying to format the partition, so we'll help it along by deleting its contents too.
HostPathDevice actually loads all of its file entries when it's first initialized though, so we also have to re-initialize the HostPathDevice for the cache partition. :/
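Roughly the shape of it, as a sketch - the "XTAF" magic check is my assumption about how the format write gets detected, and the HostPathDevice re-registration is only described in a comment since the real VFS plumbing differs:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <filesystem>

// A write of the FATX magic to offset 0 of the cache partition is treated as
// the game formatting it. (Assumed detection; magic/offset per FATX layout.)
bool LooksLikeFatxFormat(uint64_t offset, const uint8_t* data, size_t size) {
  return offset == 0 && size >= 4 && std::memcmp(data, "XTAF", 4) == 0;
}

// Empty the host folder backing the cache partition so the "formatted"
// partition really is empty. Afterwards the HostPathDevice for the cache
// partition has to be re-created/re-initialized, since it only scans file
// entries once.
void WipeCacheHostFolder(const std::filesystem::path& cache_host_path) {
  for (const auto& entry :
       std::filesystem::directory_iterator(cache_host_path)) {
    std::filesystem::remove_all(entry.path());
  }
}
```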
If this flag is set there shouldn't be any need to read from the hash tables; we can just read each subsequent block number.
We'll only trust this flag if the package is read-only (LIVE/PIRS), though; in CON packages we'll always read from the hash tables.
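A sketch of that decision (the parameters here stand in for however the STFS reader actually tracks this state):

```cpp
#include <cstdint>
#include <functional>

// Pick the block that follows current_block in a file's chain.
// lookup_from_hash_table is a stand-in for resolving the next block number
// through the package's hash-table entries.
uint32_t NextBlock(
    uint32_t current_block, bool contiguous_flag_set,
    bool package_is_read_only,  // LIVE/PIRS
    const std::function<uint32_t(uint32_t)>& lookup_from_hash_table) {
  if (contiguous_flag_set && package_is_read_only) {
    // Read-only packages never rewrite blocks, so the flag can be trusted and
    // the hash tables skipped entirely.
    return current_block + 1;
  }
  // CON packages are writable (and often fragmented), so always go through
  // the hash tables for those.
  return lookup_from_hash_table(current_block);
}
```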
This'll try salvaging any corrupt packages that get loaded in: normally we find the block numbers belonging to a file by reading them from the hash table.
It seems there are some packages out there (e.g. the Mass Effect 2 demo) that have corrupt hash tables though, so using the block numbers from there just results in a crash.
By verifying the hash of each hash table we can detect when this is the case, and if so we can try just using current_block_number + 1 instead of using an invalid block number.
(we also let the user know about the corrupt table in the log file)
In LIVE/PIRS packages this should hopefully let us get the correct data, since files are usually stored inside consecutive blocks in those package types.
It's doubtful that it'd help much with CON ones though, since those are pretty much a living filesystem.
The older & more used a CON package is, the more likely blocks will be fragmented throughout the file...
Reading from the hash table is the only way to properly read data from these; using current_block + 1 likely won't help much (we'd be going wxPirs-mode, in a way :P)
CON packages do have something that might help with this though: redundant hash blocks, where each hash table is actually made up of two blocks.
Maybe in future we can find a way to automatically use the secondary block whenever the primary one is invalid.
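Putting the salvage logic above into a rough sketch (the structures and helpers here are illustrative, not xenia's actual STFS types; the corrupt-table warning would also be written to the log at that point):

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Illustrative shapes only.
struct HashTableBlock {
  std::vector<uint8_t> raw_bytes;         // the hash-table block itself
  std::array<uint8_t, 20> expected_sha1;  // hash recorded one level up
  std::array<uint32_t, 170> next_block;   // next-block entries in this table
};

// Resolve the block following current_block, falling back to
// current_block + 1 when the covering hash table fails verification.
uint32_t ResolveNextBlock(
    uint32_t current_block, const HashTableBlock& table,
    std::array<uint8_t, 20> (*sha1)(const std::vector<uint8_t>&),
    uint32_t (*entry_index_for)(uint32_t block)) {
  if (sha1(table.raw_bytes) != table.expected_sha1) {
    // Corrupt table (e.g. the Mass Effect 2 demo): warn in the log, then
    // assume the file continues in the next consecutive block, which usually
    // holds for LIVE/PIRS packages.
    return current_block + 1;
  }
  return table.next_block[entry_index_for(current_block)];
}
```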
Should let Halo 3 savegames work now (Resume Solo Game...)
This changes the XamShowDeviceSelectorUI code to pretty much the same code as xenia-master, with some param checks added from XAM.
Looks like it was the 0xB notification that made Halo 3 abort (that one seems to be for when storage devices are added/removed).
I guess Halo doesn't like seeing that notification just after it was told which device it could make use of?
I reverted the function to master since I don't really think the threading stuff is needed any more.
AFAIK the threads just turned out to be a band-aid fix for the issue in https://github.com/xenia-project/xenia/pull/1417, where games could only ever see a single notification for a given ID.
(Since we'd send the first notification in one thread, then wait a little while before sending in another thread, that gave the game enough time to see both notifications - but now with the fix from that PR this band-aid isn't needed.)
If there are actually any regressions from removing the threading code we can easily put it back in (I'd be really interested in any games that might require this kind of thing too).
The way these functions are handled (and really everything that uses XOVERLAPPED) isn't really correct though, since they block the caller thread while the work is done, which the actual XAM implementations don't do AFAIK.
There should probably be a separate thread that handles all that, completing the overlapped etc., but I really don't think the way I did it with this band-aid fix was the best way to go about it...
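The kind of worker I mean would look roughly like this - just a sketch of the idea, with the task/completion plumbing made up rather than taken from how xenia currently handles XOVERLAPPED:

```cpp
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>

// One queued piece of overlapped work: the slow part to run, plus a callback
// that completes the guest XOVERLAPPED with the result afterwards.
struct OverlappedTask {
  std::function<uint32_t()> work;                 // returns an X_ERROR_* code
  std::function<void(uint32_t result)> complete;  // writes/signals the XOVERLAPPED
};

// The guest thread would just enqueue a task and immediately return a pending
// status; the slow work and the overlapped completion happen here instead of
// blocking the caller.
void OverlappedWorkerLoop(std::queue<OverlappedTask>& tasks, std::mutex& mutex,
                          std::condition_variable& cv) {
  for (;;) {
    OverlappedTask task;
    {
      std::unique_lock<std::mutex> lock(mutex);
      cv.wait(lock, [&] { return !tasks.empty(); });
      task = std::move(tasks.front());
      tasks.pop();
    }
    uint32_t result = task.work();
    task.complete(result);
  }
}
```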
This fixes Halo CEA putting itself into a packaged/demo mode, where it doesn't perform some DVD checks & skips cache mounting, and probably other things too.
Now CEA should actually mount & write some stuff into the cache dir while the intro cinematic plays
(first it writes a pak_stream.s3dpak file, then a few seconds into the intro video it writes fileList.ps, which seems to have some strange values inside...)
I also added an exception to the code for arcade titles, since they'll probably need to use XamContentGetLicenseMask even when run outside of a package.
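Roughly the shape of the check now, as a sketch (the error-code value and the way the packaged/arcade state gets passed in are assumptions, not xenia's actual signature):

```cpp
#include <cstdint>

constexpr uint32_t kErrorSuccess = 0;
constexpr uint32_t kErrorFunctionFailed = 0x65B;  // assumed X_ERROR_FUNCTION_FAILED value

uint32_t GetLicenseMaskSketch(bool running_from_package, bool is_arcade_title,
                              uint32_t* mask_out) {
  if (!running_from_package && !is_arcade_title) {
    // Full titles launched outside a LIVE/PIRS/CON package get an error here,
    // which keeps games like Halo: CEA from switching into packaged/demo mode.
    return kErrorFunctionFailed;
  }
  // Arcade titles (and anything actually running from a package) still get a
  // license mask so their license checks pass.
  *mask_out = 0xFFFFFFFF;  // "all licenses" placeholder value
  return kErrorSuccess;
}
```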
These get stored in different header IDs: privileges 0-31 are inside 0x30000, 32-63 are in 0x30100, 64+ are probably in 0x30200, and so on.
Not sure if any games actually try checking these extended privileges, but hopefully this code should help if any do.
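The mapping is simple enough to sketch (constant and function names here are just for illustration):

```cpp
#include <cstdint>

// Map a privilege number to the optional header ID and bit that hold it,
// following the 0x30000 / 0x30100 / 0x30200 layout described above.
constexpr uint32_t kPrivilegeHeaderBase = 0x30000;

uint32_t HeaderIdForPrivilege(uint32_t privilege) {
  // Each header holds 32 privilege bits: 0-31 -> 0x30000, 32-63 -> 0x30100, ...
  return kPrivilegeHeaderBase + (privilege / 32) * 0x100;
}

uint32_t BitForPrivilege(uint32_t privilege) {
  return privilege % 32;  // bit position within that header's 32-bit mask
}
```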
Sega Rally Revo has a weird check for the 0xB privilege: if it's set the game bails out for some reason, a bunch of code doesn't get executed, and the game hangs.
Fortunately we can kludge a way around this: the cache code we're trying to influence with the 0xB privilege always seems to check for the 0x17 privilege just beforehand.
The bailout code doesn't; instead it checks for 0xB first and then 0x17 - so we can just add a check to only return true for 0xB once we've already seen 0x17.
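As a sketch (the real check sits wherever xenia resolves executable privileges, and the "seen 0x17" state would be tracked per-title rather than in a global like this):

```cpp
#include <cstdint>
#include <functional>

static bool seen_privilege_0x17 = false;

bool CheckPrivilegeSketch(uint32_t privilege,
                          const std::function<bool(uint32_t)>& normal_lookup) {
  if (privilege == 0x17) {
    seen_privilege_0x17 = true;
  }
  if (privilege == 0xB) {
    // The cache code queries 0x17 right before 0xB, so it sees "true" here;
    // Sega Rally Revo's bailout queries 0xB first and so sees "false".
    return seen_privilege_0x17;
  }
  return normal_lookup(privilege);  // everything else: normal privilege lookup
}
```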
This fixes some games (e.g. Halo: CEA) not mounting cache.
It seems some games use different SectorsPerCluster values, which changes the value they expect back from NtQueryVolumeInformationFile.
Since we'd always respond with a static value this would make the game think that mounting failed, and it'd bail out of the cache-mounting code.
Now we capture the SectorsPerCluster value when the game writes it to the cache-partition header, and update the NullDevice's sectors_per_allocation_unit with it.
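Roughly the capture step, as a sketch - the 0x8 offset and big-endian layout are my assumptions about the cache partition's FATX header, and the returned value would then be pushed into the NullDevice's sectors_per_allocation_unit:

```cpp
#include <cstddef>
#include <cstdint>

// Pull SectorsPerCluster out of a freshly written FATX header (assumed layout:
// "XTAF" magic at 0x0, volume id at 0x4, sectors-per-cluster at 0x8).
uint32_t ReadSectorsPerClusterFromHeader(const uint8_t* header, size_t size) {
  if (size < 0xC) {
    return 0;  // not enough data for the field
  }
  // Assemble big-endian (assumed FATX byte order on the 360).
  return (uint32_t(header[0x8]) << 24) | (uint32_t(header[0x9]) << 16) |
         (uint32_t(header[0xA]) << 8) | uint32_t(header[0xB]);
}
```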
This will make logs more readable and a little lighter.
Todo: Find out what that register means. It doesn't even have a description.
My best guess is that it is used for sync?