bsnes/nall/stream/zip.hpp

#pragma once
#include <nall/decode/zip.hpp>

namespace nall {

struct zipstream : memorystream {
  using stream::read;
  using stream::write;

  //decompress the first archived file whose name matches filter,
  //and expose its contents through the inherited memorystream interface
  zipstream(const stream& stream, const string& filter = "*") {
    //copy the source stream into a temporary buffer for Decode::ZIP
    uint size = stream.size();
    auto data = new uint8[size];
    stream.read(data, size);
    Decode::ZIP archive;
    if(archive.open(data, size) == false) {
      delete[] data;  //release the buffer on failure rather than leaking it
      return;
    }

    for(auto& file : archive.file) {
      if(file.name.match(filter)) {
        auto buffer = archive.extract(file);
        psize = buffer.size();
        pdata = buffer.release();
        break;
      }
    }

    //archive.file entries reference the input buffer, so it must outlive extract()
    delete[] data;
  }

  //free the extracted buffer owned by this stream
  ~zipstream() {
    if(pdata) delete[] pdata;
  }
};

}
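
//Usage sketch, assuming nall's filestream (nall/stream/file.hpp) and a
//hypothetical archive "program.zip" containing a *.rom member:
//
//  filestream fs("program.zip");
//  zipstream zs(fs, "*.rom");                //decompress first member matching *.rom
//  if(zs.size()) {
//    vector<uint8_t> buffer;
//    buffer.resize(zs.size());
//    zs.read(buffer.data(), buffer.size());  //bulk read via the inherited stream API
//  }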