Screen coordinates in fragment shader

Roberto · Apr 5, 2014 · Viewed 11.6k times

In a fragment shader like the one below:

Shader "ColorReplacement" {

    Properties {
        _MainTex ("Greyscale (R) Alpha (A)", 2D) = "white" {}
    }
    SubShader {
        ZTest LEqual
        ZWrite On
        Pass {
            Name "ColorReplacement"
            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma fragmentoption ARB_precision_hint_fastest
                #pragma target 3.0
                #include "UnityCG.cginc"


                struct v2f
                {
                    float4  pos : SV_POSITION;
                    float2  uv : TEXCOORD0;
                }; 

                v2f vert (appdata_tan v)
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy;
                    return o;
                }

                sampler2D _MainTex;

                float4 frag(v2f i) : COLOR
                {
                    // ... (color-replacement logic omitted in the question)
                }
            ENDCG
        }
    }
    Fallback off
 }

Is there a way to know the coordinates of i.uv on the screen?

I'm totally new to shaders. The shader is applied to an object drawn somewhere on the screen, so the first pixel passed to frag does not necessarily correspond to the first pixel of the screen (the viewport). Is there a way to know the position of this pixel in screen coordinates?


EDIT: Yes, I want to obtain the fragment's location on the screen. Unity accepts vertex and fragment programs written in both Cg and HLSL, but I don't know how to convert this shader to HLSL.

The equivalent of gl_FragCoord in Cg is WPOS. I can run the following shader:

Shader "Custom/WindowCoordinates/Base" {
SubShader {
    Pass {
        CGPROGRAM

        #pragma vertex vert
        #pragma fragment frag
        #pragma target 3.0

        #include "UnityCG.cginc"

        float4 vert(appdata_base v) : POSITION {
            return mul (UNITY_MATRIX_MVP, v.vertex);
        }

        fixed4 frag(float4 sp:WPOS) : COLOR {
            return fixed4(sp.xy/_ScreenParams.xy,0.0,1.0);
        }
        ENDCG
    }
}
}

That uses the screen position the way I want, but I'm such a noob that I can't even mix the two shaders to get what I want: in my shader I'm trying to access v2f.pos, which is calculated the same way as sp in the shader above, but I get the error:

Program 'frag', variable/member "pos" has semantic "POSITION" which is not visible in this profile

If I change pos to WPOS instead of SV_POSITION, I get a similar error:

Program 'vert', variable/member "pos" has semantic "WPOS" which is not visible in this profile at line 35

Which is strange, since I'm using the same target 3.0 as the shader above.
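
One way around those errors, sketched below as an assumption rather than a verified fix, is to keep WPOS off the vertex output entirely: output the clip-space position as a separate SV_POSITION parameter and declare WPOS only on the fragment function's input, as in the working shader above. The parameter name screenPos and the fragment body are only illustrative.

Shader "ColorReplacement" {
    Properties {
        _MainTex ("Greyscale (R) Alpha (A)", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                // no position in this struct: the fragment never reads it directly
                struct v2f
                {
                    float2 uv : TEXCOORD0;
                };

                // the clip-space position goes out through a separate parameter
                v2f vert (appdata_tan v, out float4 outpos : SV_POSITION)
                {
                    v2f o;
                    outpos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy;
                    return o;
                }

                // WPOS arrives as an extra fragment input: the pixel position
                // within the viewport, like gl_FragCoord
                float4 frag (v2f i, float4 screenPos : WPOS) : COLOR
                {
                    float2 screenUV = screenPos.xy / _ScreenParams.xy; // 0..1
                    // placeholder visualization instead of the real logic
                    return float4(screenUV, 0.0, 1.0) * tex2D(_MainTex, i.uv);
                }
            ENDCG
        }
    }
}

This mirrors the WPOS shader above, where the vertex shader returns POSITION directly and only the fragment function declares WPOS.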

Answer

datenwolf · Apr 5, 2014

In the GLSL fragment stage there's a built-in variable gl_FragCoord which carries the fragment's pixel position within the viewport. If the viewport covers the whole screen, this is all you need. If the viewport covers only a subwindow of the screen, you'll have to pass the xy offset of the viewport and add it to gl_FragCoord.xy to get the screen position. Now, your shader code is not written in GLSL, but apparently Cg (with Unity extensions, as it seems), which should have something corresponding to this available.
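
In Unity's Cg that correspondence is WPOS (or VPOS), as in the shader above. An alternative sketch that avoids fragment position semantics altogether is to compute the screen position in the vertex shader with the ComputeScreenPos helper from UnityCG.cginc and pass it down through a TEXCOORD interpolator; the member name screenPos and the fragment body are only illustrative.

Shader "ColorReplacement" {
    Properties {
        _MainTex ("Greyscale (R) Alpha (A)", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 3.0
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                struct v2f
                {
                    float4 pos       : SV_POSITION;
                    float2 uv        : TEXCOORD0;
                    float4 screenPos : TEXCOORD1; // homogeneous screen position
                };

                v2f vert (appdata_tan v)
                {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.uv = v.texcoord.xy;
                    // ComputeScreenPos turns the clip-space position into a
                    // homogeneous screen position; divide by w per fragment
                    o.screenPos = ComputeScreenPos(o.pos);
                    return o;
                }

                float4 frag (v2f i) : COLOR
                {
                    // 0..1 across the viewport; multiply by _ScreenParams.xy
                    // to get pixel coordinates comparable to gl_FragCoord
                    float2 screenUV = i.screenPos.xy / i.screenPos.w;
                    // placeholder visualization instead of the real logic
                    return float4(screenUV, 0.0, 1.0) * tex2D(_MainTex, i.uv);
                }
            ENDCG
        }
    }
}

The division by w has to happen per fragment because the interpolated value is homogeneous; unlike the WPOS route, this does not require reading any position semantic in the fragment shader.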