//bsnes/nall/decode/png.hpp

#pragma once
#include <nall/string.hpp>
#include <nall/decode/inflate.hpp>
namespace nall { namespace Decode {
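
//PNG image decoder. Accepts grayscale (L), RGB, indexed-color (P), LA and RGBA
//images at power-of-two bit depths up to 16, either non-interlaced or Adam7
//interlaced; chunk CRCs and ancillary chunks are ignored.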
struct PNG {
  inline PNG();
  inline ~PNG();

  inline auto load(const string& filename) -> bool;
  inline auto load(const uint8_t* sourceData, uint sourceSize) -> bool;
  inline auto readbits(const uint8_t*& data) -> uint;

  struct Info {
    uint width;
    uint height;
    uint bitDepth;
    //colorType:
    //0 = L (luma)
    //2 = R,G,B
    //3 = P (palette)
    //4 = L,A
    //6 = R,G,B,A
    uint colorType;
    uint compressionMethod;
    uint filterType;
    uint interlaceMethod;
    uint bytesPerPixel;
    uint pitch;
    uint8_t palette[256][3];
  } info;

  uint8_t* data = nullptr;  //decoded, de-filtered image data (owned; freed by the destructor)
  uint size = 0;            //size of data in bytes: width * height * bytesPerPixel
  uint bitpos = 0;          //bit offset within the current byte, used by readbits()

protected:
  enum class FourCC : uint {
    IHDR = 0x49484452,
    PLTE = 0x504c5445,
    IDAT = 0x49444154,
    IEND = 0x49454e44,
  };

  inline auto interlace(uint pass, uint index) -> uint;
  inline auto inflateSize() -> uint;
  inline auto deinterlace(const uint8_t*& inputData, uint pass) -> bool;
  inline auto filter(uint8_t* outputData, const uint8_t* inputData, uint width, uint height) -> bool;
  inline auto read(const uint8_t* data, uint length) -> uint;
};

PNG::PNG() {
}

PNG::~PNG() {
  if(data) delete[] data;
}

auto PNG::load(const string& filename) -> bool {
  if(auto memory = file::read(filename)) {
    return load(memory.data(), memory.size());
  }
  return false;
}

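//decodes a PNG byte stream already in memory: verifies the eight-byte signature,
//walks the chunk list (only IHDR, PLTE, IDAT and IEND are recognized), inflates
//the concatenated IDAT payload, then reverses filtering (and Adam7 interlacing,
//if present) into the `data` buffer.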
auto PNG::load(const uint8_t* sourceData, uint sourceSize) -> bool {
  if(sourceSize < 8) return false;
  if(read(sourceData + 0, 4) != 0x89504e47) return false;  //PNG signature
  if(read(sourceData + 4, 4) != 0x0d0a1a0a) return false;

  uint8_t* compressedData = nullptr;
  uint compressedSize = 0;

  //walk the chunk stream: each chunk is length (4), FourCC (4), payload, CRC (4)
  uint offset = 8;
  while(offset < sourceSize) {
    uint length = read(sourceData + offset + 0, 4);
    uint fourCC = read(sourceData + offset + 4, 4);
    uint checksum = read(sourceData + offset + 8 + length, 4);  //CRC is read but not verified

    if(fourCC == (uint)FourCC::IHDR) {
      info.width = read(sourceData + offset + 8, 4);
      info.height = read(sourceData + offset + 12, 4);
      info.bitDepth = read(sourceData + offset + 16, 1);
      info.colorType = read(sourceData + offset + 17, 1);
      info.compressionMethod = read(sourceData + offset + 18, 1);
      info.filterType = read(sourceData + offset + 19, 1);
      info.interlaceMethod = read(sourceData + offset + 20, 1);

      if(info.bitDepth == 0 || info.bitDepth > 16) return false;
      if(info.bitDepth & (info.bitDepth - 1)) return false;  //not a power of two
      if(info.compressionMethod != 0) return false;
      if(info.filterType != 0) return false;
      if(info.interlaceMethod != 0 && info.interlaceMethod != 1) return false;

      switch(info.colorType) {
      case 0: info.bytesPerPixel = info.bitDepth * 1; break;  //L
      case 2: info.bytesPerPixel = info.bitDepth * 3; break;  //R,G,B
      case 3: info.bytesPerPixel = info.bitDepth * 1; break;  //P
      case 4: info.bytesPerPixel = info.bitDepth * 2; break;  //L,A
      case 6: info.bytesPerPixel = info.bitDepth * 4; break;  //R,G,B,A
      default: return false;
      }

      if(info.colorType == 2 || info.colorType == 4 || info.colorType == 6) {
        if(info.bitDepth != 8 && info.bitDepth != 16) return false;
      }
      if(info.colorType == 3 && info.bitDepth == 16) return false;

      info.bytesPerPixel = (info.bytesPerPixel + 7) / 8;  //round bits up to whole bytes
      info.pitch = (int)info.width * info.bytesPerPixel;
    }

    if(fourCC == (uint)FourCC::PLTE) {
      if(length % 3) return false;
      for(uint n = 0, p = offset + 8; n < length / 3; n++) {
        info.palette[n][0] = sourceData[p++];
        info.palette[n][1] = sourceData[p++];
        info.palette[n][2] = sourceData[p++];
      }
    }

    if(fourCC == (uint)FourCC::IDAT) {
      //concatenate all IDAT payloads into one contiguous zlib stream
      compressedData = (uint8_t*)realloc(compressedData, compressedSize + length);
      memcpy(compressedData + compressedSize, sourceData + offset + 8, length);
      compressedSize += length;
    }

    if(fourCC == (uint)FourCC::IEND) {
      break;
    }

    offset += 4 + 4 + length + 4;
  }

  uint interlacedSize = inflateSize();
  auto interlacedData = new uint8_t[interlacedSize];

  //skip the two-byte zlib header and drop the trailing four-byte Adler-32 checksum
  bool result = inflate(interlacedData, interlacedSize, compressedData + 2, compressedSize - 6);
  free(compressedData);

  if(result == false) {
    delete[] interlacedData;
    return false;
  }

  size = info.width * info.height * info.bytesPerPixel;
  data = new uint8_t[size];

  if(info.interlaceMethod == 0) {
    if(filter(data, interlacedData, info.width, info.height) == false) {
      delete[] interlacedData;
      delete[] data;
      data = nullptr;
      return false;
    }
  } else {
    const uint8_t* passData = interlacedData;
    for(uint pass = 0; pass < 7; pass++) {
      if(deinterlace(passData, pass) == false) {
        delete[] interlacedData;
        delete[] data;
        data = nullptr;
        return false;
      }
    }
  }

  delete[] interlacedData;
  return true;
}

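//Adam7 pass geometry: for the given pass, index 0..3 selects the x-spacing,
//y-spacing, x-origin and y-origin of the pixels belonging to that pass.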
auto PNG::interlace(uint pass, uint index) -> uint {
  static const uint data[7][4] = {
    //x-distance, y-distance, x-origin, y-origin
    {8, 8, 0, 0},
    {8, 8, 4, 0},
    {4, 8, 0, 4},
    {4, 4, 2, 0},
    {2, 4, 0, 2},
    {2, 2, 1, 0},
    {1, 2, 0, 1},
  };
  return data[pass][index];
}

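//size of the inflated image data: every scanline (of the whole image, or of each
//interlace pass) is preceded by one filter-type byte, hence the extra `height`
//bytes added to each term.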
auto PNG::inflateSize() -> uint {
  if(info.interlaceMethod == 0) {
    return info.width * info.height * info.bytesPerPixel + info.height;
  }

  uint size = 0;
  for(uint pass = 0; pass < 7; pass++) {
    uint xd = interlace(pass, 0), yd = interlace(pass, 1);
    uint xo = interlace(pass, 2), yo = interlace(pass, 3);
    uint width = (info.width + (xd - xo - 1)) / xd;
    uint height = (info.height + (yd - yo - 1)) / yd;
    if(width == 0 || height == 0) continue;
    size += width * height * info.bytesPerPixel + height;
  }
  return size;
}

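//reconstructs one Adam7 pass: de-filters the pass's scanlines into a temporary
//buffer, then scatters its pixels into the full-size image at the grid positions
//defined by the pass origin and spacing; inputData is advanced past the pass so
//successive calls consume the inflated stream in order.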
auto PNG::deinterlace(const uint8_t*& inputData, uint pass) -> bool {
  uint xd = interlace(pass, 0), yd = interlace(pass, 1);
  uint xo = interlace(pass, 2), yo = interlace(pass, 3);
  uint width = (info.width + (xd - xo - 1)) / xd;
  uint height = (info.height + (yd - yo - 1)) / yd;
  if(width == 0 || height == 0) return true;

  uint outputSize = width * height * info.bytesPerPixel;
  auto outputData = new uint8_t[outputSize];
  bool result = filter(outputData, inputData, width, height);

  const uint8_t* rd = outputData;
  for(uint y = yo; y < info.height; y += yd) {
    uint8_t* wr = data + y * info.pitch;
    for(uint x = xo; x < info.width; x += xd) {
      for(uint b = 0; b < info.bytesPerPixel; b++) {
        wr[x * info.bytesPerPixel + b] = *rd++;
      }
    }
  }

  inputData += outputSize + height;
  delete[] outputData;
  return result;
}

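//reverses per-scanline filtering: the leading filter byte (0-4) selects None,
//Subtract (left), Above (up), Average or Paeth prediction, and each output byte
//is the filtered byte plus its predictor, computed from already-reconstructed
//neighbors (missing neighbors are treated as zero).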
auto PNG::filter(uint8_t* outputData, const uint8_t* inputData, uint width, uint height) -> bool {
  uint8_t* wr = outputData;
  const uint8_t* rd = inputData;
  int bpp = info.bytesPerPixel, pitch = width * bpp;

  for(int y = 0; y < height; y++) {
    uint8_t filter = *rd++;

    switch(filter) {
    case 0x00:  //None
      for(int x = 0; x < pitch; x++) {
        wr[x] = rd[x];
      }
      break;

    case 0x01:  //Subtract
      for(int x = 0; x < pitch; x++) {
        wr[x] = rd[x] + (x - bpp < 0 ? 0 : wr[x - bpp]);
      }
      break;

    case 0x02:  //Above
      for(int x = 0; x < pitch; x++) {
        wr[x] = rd[x] + (y - 1 < 0 ? 0 : wr[x - pitch]);
      }
      break;

    case 0x03:  //Average
      for(int x = 0; x < pitch; x++) {
        short a = x - bpp < 0 ? 0 : wr[x - bpp];
        short b = y - 1 < 0 ? 0 : wr[x - pitch];
        wr[x] = rd[x] + (uint8_t)((a + b) / 2);
      }
      break;

    case 0x04:  //Paeth
      for(int x = 0; x < pitch; x++) {
        short a = x - bpp < 0 ? 0 : wr[x - bpp];
        short b = y - 1 < 0 ? 0 : wr[x - pitch];
        short c = x - bpp < 0 || y - 1 < 0 ? 0 : wr[x - pitch - bpp];

        short p = a + b - c;
        short pa = p > a ? p - a : a - p;
        short pb = p > b ? p - b : b - p;
        short pc = p > c ? p - c : c - p;

        auto paeth = (uint8_t)((pa <= pb && pa <= pc) ? a : (pb <= pc) ? b : c);
        wr[x] = rd[x] + paeth;
      }
      break;

    default:  //Invalid
      return false;
    }

    rd += pitch;
    wr += pitch;
  }

  return true;
}

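//reads a big-endian (network byte order) unsigned integer of `length` bytes,
//as used by PNG for chunk lengths, FourCC codes and IHDR fields.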
auto PNG::read(const uint8_t* data, uint length) -> uint {
  uint result = 0;
  while(length--) result = (result << 8) | (*data++);
  return result;
}

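//returns the next sample of info.bitDepth bits from the buffer pointed to by
//`data`; for depths below 8, the shared `bitpos` member tracks the position
//inside the current byte (bits are consumed from the least-significant end,
//as written).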
auto PNG::readbits(const uint8_t*& data) -> uint {
  uint result = 0;
  switch(info.bitDepth) {
  case 1:
    result = (*data >> bitpos) & 1;
    bitpos++;
    if(bitpos == 8) { data++; bitpos = 0; }
    break;
  case 2:
    result = (*data >> bitpos) & 3;
    bitpos += 2;
    if(bitpos == 8) { data++; bitpos = 0; }
    break;
  case 4:
    result = (*data >> bitpos) & 15;
    bitpos += 4;
    if(bitpos == 8) { data++; bitpos = 0; }
    break;
  case 8:
    result = *data++;
    break;
  case 16:
    result = (data[0] << 8) | (data[1] << 0);
    data += 2;
    break;
  }
  return result;
}

}}
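
//Minimal usage sketch (illustrative only; the filename is an assumption, not part
//of this header). After a successful load(), `data` holds the de-filtered,
//de-interlaced pixel data and `info` describes its dimensions and layout:
//
//  nall::Decode::PNG png;
//  if(png.load("icon.png")) {
//    //walk png.data scanline by scanline using png.info.pitch,
//    //png.info.bytesPerPixel and png.info.colorType (via png.info.palette
//    //for indexed images); png.readbits() can pull successive samples at
//    //info.bitDepth granularity.
//  }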