DX9: Write a detailed description of the vertex position offset magic in drawShadedTexQuad. I hope this makes at least a bit of sense to anyone but me; still, it's better than no documentation at all.

DX9: Revert to the old EFB coordinate scaling. Glitches caused by higher EFB scales probably can't even be fixed properly in DX9, so let's not even mess with it...

git-svn-id: https://dolphin-emu.googlecode.com/svn/trunk@6573 8ced0084-cf51-0410-be5f-012b33b47a6e
NeoBrainX 2010-12-13 17:49:21 +00:00
parent 5fd9951649
commit 7cf3ef6ddc
2 changed files with 26 additions and 4 deletions


@@ -90,12 +90,12 @@ public:
 	virtual TargetRectangle ConvertEFBRectangle(const EFBRectangle& rc) = 0;
 	// Use this to upscale native EFB coordinates to IDEAL internal resolution
-	static int EFBToScaledX(int x) { return x * (GetTargetWidth()-1) / (EFB_WIDTH-1); }
-	static int EFBToScaledY(int y) { return y * (GetTargetHeight()-1) / (EFB_HEIGHT-1); }
+	static int EFBToScaledX(int x) { return x * GetTargetWidth() / EFB_WIDTH; }
+	static int EFBToScaledY(int y) { return y * GetTargetHeight() / EFB_HEIGHT; }
 	// Floating point versions of the above - only use them if really necessary
-	static float EFBToScaledXf(float x) { return x * (float)(GetTargetWidth()-1) / (float)(EFB_WIDTH-1); }
-	static float EFBToScaledYf(float y) { return y * (float)(GetTargetHeight()-1) / (float)(EFB_HEIGHT-1); }
+	static float EFBToScaledXf(float x) { return x * (float)GetTargetWidth() / (float)EFB_WIDTH; }
+	static float EFBToScaledYf(float y) { return y * (float)GetTargetHeight() / (float)EFB_HEIGHT; }
 	// Returns the offset at which the EFB will be drawn onto the backbuffer
 	// NOTE: Never calculate this manually (e.g. to "increase accuracy"), since you might end up getting off-by-one errors.


@@ -355,6 +355,27 @@ void quad2d(float x1, float y1, float x2, float y2, u32 color, float u1, float v
 	RestoreShaders();
 }
+/* Explanation of texture copying via drawShadedTexQuad and drawShadedTexSubQuad:
+   From MSDN: "When rendering 2D output using pre-transformed vertices,
+   care must be taken to ensure that each texel area correctly corresponds
+   to a single pixel area, otherwise texture distortion can occur."
+   => We need to subtract 0.5 from the vertex positions to properly map texels to pixels.
+   HOWEVER, the MSDN article talks about D3DFVF_XYZRHW vertices, which bypass the programmable pipeline.
+   Since we're using D3DFVF_XYZW and the programmable pipeline, the vertex positions
+   are normalized to [-1;+1], i.e. the -0.5 offset needs to be divided by half the texture dimensions.
+   For example, consider a texture with a width of 640 pixels:
+   "Expected" coordinate range when using D3DFVF_XYZRHW: [0;640]
+   Normalizing this coordinate range for D3DFVF_XYZW: [0;640] -> [-320;320] -> [-1;1],
+   i.e. we subtract width/2 and divide by width/2.
+   BUT: The actual range when using D3DFVF_XYZRHW needs to be [-0.5;639.5] because of the need for exact texel->pixel mapping.
+   We can still apply the same normalization procedure though:
+   [-0.5;639.5] -> [-320-0.5;320-0.5] -> [-1-0.5/320;1-0.5/320]
+   So generally speaking, the correct coordinate range is [-1-0.5/(w/2);1-0.5/(w/2)],
+   which can be simplified to [-1-1/w;1-1/w].
+   For a detailed explanation of this, read the MSDN article "Directly Mapping Texels to Pixels (Direct3D 9)".
+*/
void drawShadedTexQuad(IDirect3DTexture9 *texture,
const RECT *rSource,
int SourceWidth,
@@ -373,6 +394,7 @@ void drawShadedTexQuad(IDirect3DTexture9 *texture,
 	float v1=((float)rSource->top) * sh;
 	float v2=((float)rSource->bottom) * sh;
+	// TODO: Why do we ADD dh here?
struct Q2DVertex { float x,y,z,rhw,u,v,w,h,L,T,R,B; } coords[4] = {
{-1.0f - dw,-1.0f + dh, 0.0f,1.0f, u1, v2, sw, sh,u1,v1,u2,v2},
{-1.0f - dw, 1.0f + dh, 0.0f,1.0f, u1, v1, sw, sh,u1,v1,u2,v2},