Unity Shader: Depth and Normal Textures

This article is also published on my personal blog: https://dragon_boy.gitee.io

Acquiring Depth and Normal Textures

Principle

A depth texture is really just a render texture that stores depth values. These values lie in the range [0,1] and are usually distributed non-linearly.

During the vertex transformation, coordinates are transformed into clip space and then, after the perspective divide, into NDC, whose components lie in [-1,1]; the z component therefore has to be remapped to [0,1] before being stored in the depth texture.

In Unity, a shader-replacement pass selects objects whose RenderType is Opaque and checks whether their render queue is less than or equal to 2500; objects that satisfy this condition are rendered into the depth and normal textures.

In Unity we can ask a camera to generate either a depth texture or a depth+normal texture. In the first case, Unity either reads the depth buffer directly or uses the shader-replacement technique described above to select the required opaque objects, rendering them with the Pass they use for shadow casting to produce the depth texture. In the second case, Unity creates a texture at screen resolution with 32 bits of precision, storing the view-space normal in the RG channels and the depth in the BA channels. Under deferred rendering the normal information is readily available, and Unity simply merges the depth and normal buffers. Under forward rendering no normal buffer is created by default, so Unity renders the entire scene again in a separate Pass to produce it.

How to Obtain Them

Obtaining the depth texture is simple: set the camera's depth texture mode in a script, then access the texture in a shader through _CameraDepthTexture:

camera.depthTextureMode = DepthTextureMode.Depth;

The depth+normal texture works the same way:

camera.depthTextureMode = DepthTextureMode.DepthNormals;

In a shader it is accessed through _CameraDepthNormalsTexture.

Unity provides the macro SAMPLE_DEPTH_TEXTURE for sampling the depth texture; its main purpose is to hide platform differences.
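For example, a minimal sampling sketch inside a fragment shader (assuming i.uv holds the screen-space UV):

float d = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);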

The depth values obtained by sampling the texture are usually non-linear; this non-linearity comes from the clip matrix used for perspective projection. In practice we often need linear depth values, so we have to transform the sampled depth back into a linear space such as view space. The derivation is as follows:

After transforming a view-space vertex by the perspective projection's clip matrix, the z and w components of the clip-space vertex are:

z_{clip} = -z_{view}\frac{Far+Near}{Far-Near} - \frac{2\cdot Near \cdot Far}{Far- Near}

w_{clip} = -z_{view}

The perspective divide then yields the z component in NDC:

z_{ndc} = \frac{z_{clip}}{w_{clip}} = \frac{Far + Near}{Far - Near} + \frac{2\cdot Near \cdot Far}{(Far- Near)\cdot z_{view}}

The depth value stored in the depth texture is computed from this NDC component:

d = 0.5 \cdot z_{ndc} + 0.5

Solving the equations above for z_{view} in terms of d gives:

z_{view} = \frac{1}{\frac{Far - Near}{Near\cdot Far}d - \frac{1}{Near}}

Since in Unity's view space the camera looks down the negative z axis, z_{view} is negative; negating the expression gives a positive depth:

z'_{view} = \frac{1}{\frac{Near - Far}{Near\cdot Far}d + \frac{1}{Near}}

This expression ranges over [Near, Far]. To obtain a depth value in [0,1], divide the result by Far:

z_{01} = \frac{1}{\frac{Near - Far}{Near}d + \frac{Far}{Near}}
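As a quick sanity check: d = 1 gives z_{01} = 1 (the far plane), and d = 0 gives z_{01} = Near/Far, which corresponds to z'_{view} = Near (the near plane).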

Unity provides two functions that implement this derivation. LinearEyeDepth converts a depth-texture sample into a view-space depth value, i.e. z'_{view}. Linear01Depth returns a linear depth value in [0,1], i.e. z_{01}. Both functions use the built-in _ZBufferParams variable to obtain the near and far clip plane distances.
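Continuing the sampling sketch above, the conversion is a single call:

float eyeDepth = LinearEyeDepth(d);   // view-space depth, i.e. z'_view
float depth01 = Linear01Depth(d);     // linear depth in [0,1], i.e. z_01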

The depth+normal texture can be sampled directly with tex2D, and the result decoded with Unity's DecodeDepthNormal function.
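For example (a minimal sketch; DecodeDepthNormal is declared in UnityCG.cginc and outputs a linear [0,1] depth together with a view-space normal):

float4 enc = tex2D(_CameraDepthNormalsTexture, i.uv);
float depth;
float3 normal;
DecodeDepthNormal(enc, depth, normal);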

Motion Blur

Here we simulate motion blur with a velocity map. Using the depth texture, the fragment shader computes each pixel's world-space position by transforming its NDC coordinates with the inverse of the current view-projection matrix. We then transform that world position with the previous frame's view-projection matrix to obtain the pixel's NDC position in the previous frame. The difference between the previous and current positions gives the pixel's velocity.

Here is the script:

using UnityEngine;
using System.Collections;

public class MotionBlurWithDepthTexture : PostEffectsBase
{

    public Shader motionBlurShader;
    private Material motionBlurMaterial = null;

    public Material material
    {
        get
        {
            motionBlurMaterial = CheckShaderAndCreateMaterial(motionBlurShader, motionBlurMaterial);
            return motionBlurMaterial;
        }
    }

    private Camera myCamera;
    public Camera camera
    {
        get
        {
            if (myCamera == null)
            {
                myCamera = GetComponent<Camera>();
            }
            return myCamera;
        }
    }

    [Range(0.0f, 1.0f)]
    public float blurSize = 0.5f;

    private Matrix4x4 previousViewProjectionMatrix;

    void OnEnable()
    {
        // This effect needs the camera's depth texture.
        camera.depthTextureMode |= DepthTextureMode.Depth;

        // Seed the "previous frame" view-projection matrix.
        previousViewProjectionMatrix = camera.projectionMatrix * camera.worldToCameraMatrix;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (material != null)
        {
            material.SetFloat("_BlurSize", blurSize);

            material.SetMatrix("_PreviousViewProjectionMatrix", previousViewProjectionMatrix);
            Matrix4x4 currentViewProjectionMatrix = camera.projectionMatrix * camera.worldToCameraMatrix;
            Matrix4x4 currentViewProjectionInverseMatrix = currentViewProjectionMatrix.inverse;
            material.SetMatrix("_CurrentViewProjectionInverseMatrix", currentViewProjectionInverseMatrix);
            previousViewProjectionMatrix = currentViewProjectionMatrix;

            Graphics.Blit(src, dest, material);
        }
        else
        {
            Graphics.Blit(src, dest);
        }
    }
}

The shader code is as follows:

Shader "Unlit/MotionBlurWIthDepthTexture"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _BlurSize ("Blur Size", Float) = 1.0
    }
        SubShader{
            CGINCLUDE

            #include "UnityCG.cginc"

            sampler2D _MainTex;
            half4 _MainTex_TexelSize;
            sampler2D _CameraDepthTexture;
            float4x4 _CurrentViewProjectionInverseMatrix;
            float4x4 _PreviousViewProjectionMatrix;
            half _BlurSize;

            struct v2f {
                float4 pos : SV_POSITION;
                half2 uv : TEXCOORD0;
                half2 uv_depth : TEXCOORD1;
            };

            v2f vert(appdata_img v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);

                o.uv = v.texcoord;
                o.uv_depth = v.texcoord;

                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    o.uv_depth.y = 1 - o.uv_depth.y;
                #endif

                return o;
            }

            fixed4 frag(v2f i) : SV_Target {
                // Get the depth buffer value at this pixel.
                float d = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth);
                // H is the viewport position at this pixel in the range -1 to 1.
                float4 H = float4(i.uv.x * 2 - 1, i.uv.y * 2 - 1, d * 2 - 1, 1);
                // Transform by the view-projection inverse.
                float4 D = mul(_CurrentViewProjectionInverseMatrix, H);
                // Divide by w to get the world position. 
                float4 worldPos = D / D.w;

                // Current viewport position 
                float4 currentPos = H;
                // Use the world position, and transform by the previous view-projection matrix.  
                float4 previousPos = mul(_PreviousViewProjectionMatrix, worldPos);
                // Convert to nonhomogeneous points [-1,1] by dividing by w.
                previousPos /= previousPos.w;

                // Use this frame's position and last frame's to compute the pixel velocity.
                float2 velocity = (currentPos.xy - previousPos.xy) / 2.0f;

                float2 uv = i.uv;
                float4 c = tex2D(_MainTex, uv);
                uv += velocity * _BlurSize;
                for (int it = 1; it < 3; it++, uv += velocity * _BlurSize) {
                    float4 currentColor = tex2D(_MainTex, uv);
                    c += currentColor;
                }
                c /= 3;

                return fixed4(c.rgb, 1.0);
            }

            ENDCG

        Pass {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM

            #pragma vertex vert  
            #pragma fragment frag  

            ENDCG
        }
    }
}

當(dāng)?shù)玫较袼厮俣群?,我們就根?jù)這個(gè)速度來(lái)對(duì)它的鄰域像素進(jìn)行采樣,接著平均。

Global Fog

This section introduces a fast way to reconstruct world-space positions from the depth texture. The method interpolates, in image space, the rays of the view frustum (rays starting at the camera and pointing toward points on the image); each interpolated ray carries the world-space direction from the camera to its pixel. Multiplying the ray by the linearized view-space depth and adding the camera's world position yields the pixel's position in world space.

Reconstructing World Coordinates

The world position is reconstructed with the following code:

float3 worldPos = _WorldSpaceCameraPos + linearDepth * interpolatedRay.xyz;

Here _WorldSpaceCameraPos is the camera's world-space position, available directly through Unity's built-in variable of the same name, while linearDepth * interpolatedRay gives the pixel's offset from the camera: linearDepth is the linear depth value obtained from the depth texture, and interpolatedRay is the ray output by the vertex shader and interpolated across the screen, carrying both the direction from the camera to the pixel and distance information.

interpolatedRay comes from interpolating four particular vectors at the corners of the near clip plane; these vectors encode both the direction and the distance from each corner to the camera. The derivation follows:

First compute two vectors, toTop and toRight, which start at the center of the near clip plane and point straight up and straight right from the camera, where FOV is the camera's vertical field of view:

halfHeight = Near \cdot \tan(\frac{FOV}{2})

toTop = camera.up \cdot halfHeight

toRight = camera.right \cdot halfHeight \cdot aspect

With these two vectors we can compute the directions from the camera to the four corners of the near clip plane. Taking the top-left corner TL as an example:

TL = camera.forward\cdot Near + toTop - toRight

Similarly for the other three corners:

TR = camera.forward\cdot Near + toTop + toRight

BL = camera.forward\cdot Near - toTop - toRight

BR = camera.forward\cdot Near - toTop + toRight

The four vectors above contain more than direction: their magnitudes are the distances from the four corner points to the camera. However, the linear depth value we obtain is not the Euclidean distance from the pixel to the camera but the distance along the z direction, so we cannot simply multiply the depth by the corners' unit directions to get the offsets from the camera. We therefore convert the linear depth into a Euclidean distance as follows.

For a pixel on the ray through TL, the ratio of its depth value to its actual distance from the camera equals the ratio of the near plane distance to the magnitude of TL:

\frac{depth}{dist} = \frac{Near}{|TL|}

So the Euclidean distance dist from the point to the camera is:

dist = \frac{|TL|}{Near}\times depth

Since the other three vectors have the same magnitude as TL, we can factor out a common scale factor:

scale = \frac{|TL|}{Near}

Multiplying this scale factor by each corner's unit direction gives the corresponding ray; multiplying that ray by the pixel's linear depth then yields exactly the camera-to-pixel offset. For example:

Ray_{TL} = \frac{TL}{|TL|}\cdot scale
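As a concrete check with assumed values FOV = 60°, Near = 1 and aspect = 1: halfHeight = tan(30°) ≈ 0.577, so |TL| = |(-0.577, 0.577, 1)| ≈ 1.291 and scale ≈ 1.291; a pixel on the TL ray with linear depth 10 is then about 12.91 units from the camera.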

Screen post-processing works by rendering a quad that exactly fills the screen with a particular material. The quad's four vertices correspond to the four corners of the near clip plane. We pass the vectors computed above to the vertex shader; for each vertex, the shader selects the corresponding vector based on the vertex's position and outputs it, and after interpolation the fragment shader receives interpolatedRay.

Computing the Fog

In a simple fog implementation we compute a fog coefficient f and use it to blend the original color with the fog color:

float3 afterFog = f * fogColor + (1 - f) * origColor;

There are many ways to compute f. Unity's built-in fog supports three: linear, exponential, and exponential squared. Given a distance z, f is computed as follows:

  • Linear:
    f = \frac{d_{max} - |z|}{d_{max} - d_{min}}, where d_{min} and d_{max} are the minimum and maximum distances affected by the fog.
  • Exponential:
    f = e^{-d\cdot |z|}, where d is a parameter controlling the fog density.
  • Exponential Squared:
    f = e^{-(d\cdot |z|)^2}, where d is a parameter controlling the fog density.

Here we use a formula similar to linear fog to compute height-based fog. Given a point's world-space height y, f is computed as
f = \frac{H_{end} - y}{H_{end} - H_{start}}, where H_{start} and H_{end} are the start and end heights of the fog's influence.
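As a minimal Cg sketch of these formulas (hypothetical uniform names, not Unity's built-in fog implementation; results are clamped to [0,1] before blending):

// Assumed uniforms (hypothetical names):
float density;                // fog density parameter d
float distStart, distEnd;     // d_min and d_max for linear fog
float heightStart, heightEnd; // H_start and H_end for height fog

float LinearFog(float z)  { return saturate((distEnd - abs(z)) / (distEnd - distStart)); }
float ExpFog(float z)     { return exp(-density * abs(z)); }
float Exp2Fog(float z)    { float e = density * abs(z); return exp(-e * e); }
float HeightFog(float y)  { return saturate((heightEnd - y) / (heightEnd - heightStart)); }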

First, the script:

using UnityEngine;
using System.Collections;

public class FogWithDepthTexture : PostEffectsBase
{

    public Shader fogShader;
    private Material fogMaterial = null;

    public Material material
    {
        get
        {
            fogMaterial = CheckShaderAndCreateMaterial(fogShader, fogMaterial);
            return fogMaterial;
        }
    }

    private Camera myCamera;
    public Camera camera
    {
        get
        {
            if (myCamera == null)
            {
                myCamera = GetComponent<Camera>();
            }
            return myCamera;
        }
    }

    private Transform myCameraTransform;
    public Transform cameraTransform
    {
        get
        {
            if (myCameraTransform == null)
            {
                myCameraTransform = camera.transform;
            }

            return myCameraTransform;
        }
    }

    [Range(0.0f, 3.0f)]
    public float fogDensity = 1.0f;

    public Color fogColor = Color.white;

    public float fogStart = 0.0f;
    public float fogEnd = 2.0f;

    void OnEnable()
    {
        camera.depthTextureMode |= DepthTextureMode.Depth;
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (material != null)
        {
            Matrix4x4 frustumCorners = Matrix4x4.identity;

            float fov = camera.fieldOfView;
            float near = camera.nearClipPlane;
            float aspect = camera.aspect;

            float halfHeight = near * Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
            Vector3 toRight = cameraTransform.right * halfHeight * aspect;
            Vector3 toTop = cameraTransform.up * halfHeight;

            Vector3 topLeft = cameraTransform.forward * near + toTop - toRight;
            float scale = topLeft.magnitude / near;

            topLeft.Normalize();
            topLeft *= scale;

            Vector3 topRight = cameraTransform.forward * near + toRight + toTop;
            topRight.Normalize();
            topRight *= scale;

            Vector3 bottomLeft = cameraTransform.forward * near - toTop - toRight;
            bottomLeft.Normalize();
            bottomLeft *= scale;

            Vector3 bottomRight = cameraTransform.forward * near + toRight - toTop;
            bottomRight.Normalize();
            bottomRight *= scale;

            // Row order: bottom-left, bottom-right, top-right, top-left (counterclockwise),
            // matching the index computed in the vertex shader.
            frustumCorners.SetRow(0, bottomLeft);
            frustumCorners.SetRow(1, bottomRight);
            frustumCorners.SetRow(2, topRight);
            frustumCorners.SetRow(3, topLeft);

            material.SetMatrix("_FrustumCornersRay", frustumCorners);

            material.SetFloat("_FogDensity", fogDensity);
            material.SetColor("_FogColor", fogColor);
            material.SetFloat("_FogStart", fogStart);
            material.SetFloat("_FogEnd", fogEnd);

            Graphics.Blit(src, dest, material);
        }
        else
        {
            Graphics.Blit(src, dest);
        }
    }
}

Following the derivation above, we compute the four ray vectors and fill the matrix rows counterclockwise, starting from the bottom-left corner. This ordering is important because it determines which row the vertex shader selects as the vector to interpolate for each vertex.

The shader code is as follows:

Shader "Unlit/Fog"
{
    Properties{
         _MainTex("Base (RGB)", 2D) = "white" {}
         _FogDensity("Fog Density", Float) = 1.0
         _FogColor("Fog Color", Color) = (1, 1, 1, 1)
         _FogStart("Fog Start", Float) = 0.0
         _FogEnd("Fog End", Float) = 1.0
    }
        SubShader{
            CGINCLUDE

            #include "UnityCG.cginc"

            float4x4 _FrustumCornersRay;

            sampler2D _MainTex;
            half4 _MainTex_TexelSize;
            sampler2D _CameraDepthTexture;
            half _FogDensity;
            fixed4 _FogColor;
            float _FogStart;
            float _FogEnd;

            struct v2f {
                float4 pos : SV_POSITION;
                half2 uv : TEXCOORD0;
                half2 uv_depth : TEXCOORD1;
                float4 interpolatedRay : TEXCOORD2;
            };

            v2f vert(appdata_img v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);

                o.uv = v.texcoord;
                o.uv_depth = v.texcoord;

                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    o.uv_depth.y = 1 - o.uv_depth.y;
                #endif

                int index = 0;
                if (v.texcoord.x < 0.5 && v.texcoord.y < 0.5) {
                    index = 0;
                }
                else if (v.texcoord.x > 0.5 && v.texcoord.y < 0.5) {
                    index = 1;
                }
                else if (v.texcoord.x > 0.5 && v.texcoord.y > 0.5) {
                    index = 2;
                }
                else {
                    index = 3;
                }

                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    index = 3 - index;
                #endif

                o.interpolatedRay = _FrustumCornersRay[index];

                return o;
            }

            fixed4 frag(v2f i) : SV_Target {
                float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth));
                float3 worldPos = _WorldSpaceCameraPos + linearDepth * i.interpolatedRay.xyz;

                float fogDensity = (_FogEnd - worldPos.y) / (_FogEnd - _FogStart);
                fogDensity = saturate(fogDensity * _FogDensity);

                fixed4 finalColor = tex2D(_MainTex, i.uv);
                finalColor.rgb = lerp(finalColor.rgb, _FogColor.rgb, fogDensity);

                return finalColor;
            }

            ENDCG

        Pass {
            ZTest Always Cull Off ZWrite Off

            CGPROGRAM

            #pragma vertex vert  
            #pragma fragment frag  

            ENDCG
        }
    }
}

Edge Detection

Here we use the depth and normal texture for edge detection.

The script is implemented as follows:

using UnityEngine;
using System.Collections;

public class EdgeDetectNormalsAndDepth : PostEffectsBase
{

    public Shader edgeDetectShader;
    private Material edgeDetectMaterial = null;
    public Material material
    {
        get
        {
            edgeDetectMaterial = CheckShaderAndCreateMaterial(edgeDetectShader, edgeDetectMaterial);
            return edgeDetectMaterial;
        }
    }

    [Range(0.0f, 1.0f)]
    public float edgesOnly = 0.0f;

    public Color edgeColor = Color.black;

    public Color backgroundColor = Color.white;

    public float sampleDistance = 1.0f;

    public float sensitivityDepth = 1.0f;

    public float sensitivityNormals = 1.0f;

    void OnEnable()
    {
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.DepthNormals;
    }

    [ImageEffectOpaque]
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (material != null)
        {
            material.SetFloat("_EdgeOnly", edgesOnly);
            material.SetColor("_EdgeColor", edgeColor);
            material.SetColor("_BackgroundColor", backgroundColor);
            material.SetFloat("_SampleDistance", sampleDistance);
            material.SetVector("_Sensitivity", new Vector4(sensitivityNormals, sensitivityDepth, 0.0f, 0.0f));

            Graphics.Blit(src, dest, material);
        }
        else
        {
            Graphics.Blit(src, dest);
        }
    }
}

注意我們?yōu)?code>OnRenderImage函數(shù)添加了[ImageEffectOpaque]屬性,不對(duì)透明物體產(chǎn)生影響。

Here we use the Roberts operator for edge detection. The shader code is as follows:

Shader "Unlit/EdgeDetect"
{
    Properties{
        _MainTex("Base (RGB)", 2D) = "white" {}
        _EdgeOnly("Edge Only", Float) = 1.0
        _EdgeColor("Edge Color", Color) = (0, 0, 0, 1)
        _BackgroundColor("Background Color", Color) = (1, 1, 1, 1)
        _SampleDistance("Sample Distance", Float) = 1.0
        _Sensitivity("Sensitivity", Vector) = (1, 1, 1, 1)
    }
        SubShader{
            CGINCLUDE

            #include "UnityCG.cginc"

            sampler2D _MainTex;
            half4 _MainTex_TexelSize;
            fixed _EdgeOnly;
            fixed4 _EdgeColor;
            fixed4 _BackgroundColor;
            float _SampleDistance;
            half4 _Sensitivity;

            sampler2D _CameraDepthNormalsTexture;

            struct v2f {
                float4 pos : SV_POSITION;
                half2 uv[5]: TEXCOORD0;
            };

            v2f vert(appdata_img v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);

                half2 uv = v.texcoord;
                o.uv[0] = uv;

                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                    uv.y = 1 - uv.y;
                #endif

                o.uv[1] = uv + _MainTex_TexelSize.xy * half2(1,1) * _SampleDistance;
                o.uv[2] = uv + _MainTex_TexelSize.xy * half2(-1,-1) * _SampleDistance;
                o.uv[3] = uv + _MainTex_TexelSize.xy * half2(-1,1) * _SampleDistance;
                o.uv[4] = uv + _MainTex_TexelSize.xy * half2(1,-1) * _SampleDistance;

                return o;
            }

            half CheckSame(half4 center, half4 sample) {
                half2 centerNormal = center.xy;
                float centerDepth = DecodeFloatRG(center.zw);
                half2 sampleNormal = sample.xy;
                float sampleDepth = DecodeFloatRG(sample.zw);

                // difference in normals
                // do not bother decoding normals - there's no need here
                half2 diffNormal = abs(centerNormal - sampleNormal) * _Sensitivity.x;
                int isSameNormal = (diffNormal.x + diffNormal.y) < 0.1;
                // difference in depth
                float diffDepth = abs(centerDepth - sampleDepth) * _Sensitivity.y;
                // scale the required threshold by the distance
                int isSameDepth = diffDepth < 0.1 * centerDepth;

                // return:
                // 1 - if normals and depth are similar enough
                // 0 - otherwise
                return isSameNormal * isSameDepth ? 1.0 : 0.0;
            }

            fixed4 fragRobertsCrossDepthAndNormal(v2f i) : SV_Target {
                half4 sample1 = tex2D(_CameraDepthNormalsTexture, i.uv[1]);
                half4 sample2 = tex2D(_CameraDepthNormalsTexture, i.uv[2]);
                half4 sample3 = tex2D(_CameraDepthNormalsTexture, i.uv[3]);
                half4 sample4 = tex2D(_CameraDepthNormalsTexture, i.uv[4]);

                half edge = 1.0;

                edge *= CheckSame(sample1, sample2);
                edge *= CheckSame(sample3, sample4);

                fixed4 withEdgeColor = lerp(_EdgeColor, tex2D(_MainTex, i.uv[0]), edge);
                fixed4 onlyEdgeColor = lerp(_EdgeColor, _BackgroundColor, edge);

                return lerp(withEdgeColor, onlyEdgeColor, _EdgeOnly);
            }

            ENDCG

            Pass {
                ZTest Always Cull Off ZWrite Off

                CGPROGRAM

                #pragma vertex vert  
                #pragma fragment fragRobertsCrossDepthAndNormal

                ENDCG
            }
        }
}

We call CheckSame to compute the difference between the two texture samples on each diagonal of the operator; a return value of 0 indicates an edge.

Inside CheckSame we first extract the normals and depth values of the two samples, compute the differences between them, and multiply each difference by its corresponding sensitivity coefficient. We then sum the components of each difference and compare against a threshold: a result below the threshold means no edge, otherwise an edge exists. Finally, the normal and depth results are multiplied together and returned as the combined value.
