Triplanar mapping is a great solution for texturing complex geometry that is difficult or impossible to cover with traditional UVs without obvious stretching and/or texture seams. It’s also a technique plagued by half-hearted and straight-up wrong implementations of normal mapping.

*edit: Note that the shader code in this article is written for use in Unity. All of the techniques discussed can work with other engines, but modifications will likely be necessary to make them work properly.*

*Example Unity shaders available here:*

*https://github.com/bgolus/Normal-Mapping-for-a-Triplanar-Shader*

## Table of Contents

- **Triplanar Mapping**
  - “Try Play” What Now?
- **The Problem**
  - Not So Normal Mapping
  - The Naive Method (aka, just use the mesh tangents)
- **Tangent Space Normal Maps**
  - A Side Tangent
- **Approaching a Solution**
  - The Basic Swizzle
  - Blending in Detail
- **Triplanar Normal Mapping**
  - UDN Blend
  - Whiteout Blend
  - Reoriented Normal Mapping
- **“Ground Truth”**
  - Chasing the truth
- **Other Techniques for Triplanar Normal Mapping**
  - Screen Space Partial Derivatives
  - Cross Product Tangent Reconstruction
- **Triplanar Blending**
  - Blending out the Details
- **Additional Thoughts**
  - Mirrored UVs
  - Unity Surface Shaders
  - Triplanar Normals for Other Renderers (Not Unity)

# Triplanar Mapping

## “Try Play” What Now?

A quick recap of what is meant when someone mentions triplanar mapping: World Space Triplanar Projection Mapping is a technique that applies textures to an object from three directions using the world space position. Three planes. It’s also called “UV free texturing”, “world space projected UVs”, and a few other names. There are also object space variations on this technique. Imagine looking at an object from directly above, from the side, and from the front, and projecting a texture onto the object from each of those directions.
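The projection itself is simple enough to sketch outside a shader. Here is a minimal CPU-side Python sketch of the idea (the function names are mine, not from any engine API):

```python
def triplanar_uvs(world_pos):
    """Project a world position onto the three axis-aligned planes."""
    x, y, z = world_pos
    uv_x = (z, y)  # x facing plane
    uv_y = (x, z)  # y facing plane
    uv_z = (x, y)  # z facing plane
    return uv_x, uv_y, uv_z

def triplanar_blend(world_normal):
    """Blend weights from the surface normal, normalized to sum to 1."""
    b = [abs(c) for c in world_normal]
    s = b[0] + b[1] + b[2]
    return [c / s for c in b]
```

A surface facing straight up gets all of its weight from the top projection; a corner splits the weight three ways.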

Terrain is a popular use case because it often has difficult geometry to texture. Basic terrain usually has a single UV projection from above. We could call this “monoplanar” projected UVs. For rolling hills this looks fine, but on steep cliffs the single projection direction can look quite bad.

The texture stretching on the near vertical cliffs from the single UV projection is quite obvious on the above terrain. Let’s see what this looks like with triplanar mapping.

The texture stretching has completely vanished!

There are plenty of excellent articles that go into the basics of the implementation of triplanar mapping. There are also pros and cons to the technique, and some limitations.

I’m going to skip most of that and go straight to the meat of this article.

# The Problem

## Not So Normal Mapping

The common naive “solution” to triplanar normal mapping is to just use the mesh tangents. Often it can seem “close enough”. You’re then relying on your textures being ambiguous enough that the wrong normals won’t be obvious. Don’t do this. There are *cheaper* methods that are significantly better looking.

Another technique I see is attempting to construct tangent to world space matrices in the vertex or fragment shader using a handful of cross products and the world normals. This is a perfectly fine technique if done properly. But too often the results are only marginally better than the naive method because people don’t really understand what they’re doing. This also isn’t one of the cheaper methods I was referring to, so it’s likely not what you want to do either.

One of the cheap solutions I *am* referring to dates back to 2003. It comes from Nvidia’s GPU Gems 3.

**GPU Gems 3: Generating Complex Procedural Terrains Using the GPU**
https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch01.html

This is a good and *cheap* approximation. The shader code looks a little odd though, as in it looks like it *shouldn’t work* odd. I rarely if ever see it actually used, and I suspect it’s a case of bad implementations making people think it doesn’t work. Indeed, if you implement this technique verbatim in Unity it can even look worse than the naive method.

So before we delve into what that GPU Gems 3 article is doing, let’s look at the naive method and what problems it has.

## The Naive Method (aka, just use the mesh tangents)

This is the first thing most people try, and at first glance it “just works”. So let’s try it using a basic rock texture and normal map.

This is using Unity’s default sphere, with a directional light coming from directly above, and there’s nothing obviously wrong. There are bumps and ridges where you expect them to be, so what’s the problem? Well, this is the best case scenario: the way the UVs are wrapped on the default sphere, the light direction, and the vague orientation of the normal map all work together to make everything look like it’s working properly. Or at least working well enough for most people not to notice or care. But let’s change the normal map to something a bit more obvious and play with the lighting direction.

So now the lighting is coming from the left, but are those bumps going in, or out? The left side looks like they’re going out, the right side looks like they’re going in, and on the top the lighting looks like it’s coming from completely different directions depending on what part you’re looking at. This is why the naive method is bad: sometimes you’ll get lucky and the normals will look right, and sometimes they’ll be totally wrong.

For comparison, this is what it *should* look like.

The bumps are clearly popping out, not in, and they all line up with the light direction. The soft blend between the sides is a little odd looking for this normal map, but ignoring that, the above could be considered “ground truth”.

Here they are side by side.

# Tangent Space Normal Maps

## A Side Tangent

So let’s talk about what tangent space normal mapping is and why the naive method doesn’t really work.

Tangent space normal mapping, and normal maps in general, tend to cause confusion for people new to shaders, and they just get thought of as “magic”. It doesn’t help that some articles that talk about them immediately go into the math or shader code and don’t explain it in basic terms first. So here goes my attempt.

Normal maps are directions relative to the orientation of the texture. There are a ton of implementation details and minutia beyond that, but they don’t change that basic premise. For Unity the red is how “right” it is (x axis), green is how “up” it is (y axis), and blue is how perpendicular to the surface it is (z axis). All these are relative to the texture UVs and vertex normal. This is sometimes called a +Y normal map.¹
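To make that concrete, here is the usual texel decode sketched in Python (`unpack_normal` is my own name; this is the generic `color * 2 - 1` mapping, not Unity’s `UnpackNormal` itself):

```python
def unpack_normal(rgb):
    """Map an 8-bit (r, g, b) texel from 0..255 back to a -1..1 direction."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

# The uniform "flat" color of a +Y normal map, (128, 128, 255), decodes
# to (roughly) the unperturbed surface normal (0, 0, 1)
flat = unpack_normal((128, 128, 255))
```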

Don’t worry if you’re having trouble just looking at a normal map and making sense of it. Personally I like to look at them one channel at a time to see each of the directions more clearly.

For realtime graphics the mesh’s vertices store a tangent, or “left to right” orientation of the UVs. This gets passed along with the vertex normal and bitangent (sometimes called *binormal*²) to the fragment shader as a tangent to world space transform matrix. This is so the direction stored in the normal map can be rotated from tangent space to world space. The crux of this is that the tangents only match the orientation of the UVs stored in the mesh. If your triplanar projected UVs don’t match those (which they’re guaranteed not to for most of the mesh!) then you get normals that look like they’re inverted, or sideways, or facing any other way but the way you want.
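As a concrete illustration of that transform, here is a minimal CPU-side Python sketch, with the tangent frame supplied directly rather than pulled from mesh data:

```python
def tangent_to_world(tangent, bitangent, normal, tnormal):
    """Rotate a tangent space normal into world space: the tangent,
    bitangent, and vertex normal are the basis vectors that the
    normal map's x, y, and z are measured along."""
    tx, ty, tz = tnormal
    return tuple(t * tx + b * ty + n * tz
                 for t, b, n in zip(tangent, bitangent, normal))
```

A flat tangent space normal of (0, 0, 1) always comes out as the vertex normal, whatever the frame is; a wrong frame rotates every other direction somewhere you didn’t intend.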

Here we have a texture with the tangent (x) and bitangent (y) alignment drawn on it. Since the mesh’s tangents are based on the UVs, this serves as a decent approximation of those mesh tangents. Below is the sphere mesh using this texture. It is alternating between using the UVs stored in the mesh and the generated triplanar UVs, both scaled so the texture is tiled multiple times. You can see on the left side of the sphere the orientation roughly lines up, but on the top and right side they’re significantly different. If you look again at the naive method you can see where this discrepancy leads to the normal maps appearing to be flipped and rotated.

*edit: An additional note here about mesh tangents and triplanar mapping. Generally speaking if you’re doing triplanar mapping the mesh being used doesn’t need and shouldn’t even have UVs stored in the vertices. By extension the mesh shouldn’t have tangents either. For meshes that are imported into or ship with Unity, the default behavior is for Unity to import or generate tangents for the mesh. This is the only reason why the naive method is even possible. Meshes that are created in code by default do not have tangents, or even UVs or normals, and the naive method would be even more broken.*

# Approaching a Solution

## The Basic Swizzle

What you really want for triplanar mapping is to calculate the unique tangents and bitangents for each “face” of the triplanar texture UVs. But, if you remember, near the start I said don’t do this. Why? Because we can make surprisingly decent approximations with very little data. We can use the world axes as the tangents. Even more simply and cheaply, we can just swap some of the normal map components around (aka swizzle) and get the same result!

```
// Basic Swizzle

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Get the sign (-1 or 1) of the surface normal
half3 axisSign = sign(i.worldNormal);

// Flip tangent normal z to account for surface normal facing
tnormalX.z *= axisSign.x;
tnormalY.z *= axisSign.y;
tnormalZ.z *= axisSign.z;

// Swizzle tangent normals to match world orientation and triblend
half3 worldNormal = normalize(
    tnormalX.zyx * blend.x +
    tnormalY.xzy * blend.y +
    tnormalZ.xyz * blend.z
    );
```

I’m leaving out most of the shader here; we’ll talk about that later. The main thing to look at is the first and last 3 lines. Notice the UVs for each plane and the swizzled components of the normal maps are in the same order. As we discussed above, a tangent space normal map should be aligned to the orientation of its UVs, and since we’re using the world position for the UVs, that’s the alignment!

You may have noticed I suddenly switched to using a cube. This is because the basic swizzle method works well for boxy, voxel style terrain. It is actually ground truth if used on flat, axis aligned walls. Unfortunately it isn’t as effective on rounded surfaces.

So can we somehow add some of that roundness into the swizzled normal maps?

## Blending in Detail

Okay, let’s stop here for a moment. We’re going to talk about tangent space normal map blending. Specifically for something like how detail normals get used, not related to triplanar mapping at all. Why, you ask? Just stick with me.

There are several techniques for this. The first idea people think of is “add them together”, but that’s not really a solution; it just flattens both normals out. Some more clever people might think “what about an overlay blend?” It works okay, but its popularity stems purely from being simple to do in Photoshop without totally breaking everything, not from being remotely correct or even especially cheap. It’s another case of a technique that’s more expensive than doing it the *right* way. There are basically two competing approximations that get used most often, the so-called “Whiteout” and “UDN” normal blending. They’re very similar, both in implementation and results: UDN is slightly cheaper but can flatten out the edges a little, and Whiteout looks slightly better but is slightly more expensive. For modern desktop and console GPUs there’s little reason not to use the Whiteout method over the UDN method. But the UDN blend still has its use for mobile.

You can read more about different normal map blending techniques on Stephen Hill’s blog:

**Blending in Detail**
http://blog.selfshadow.com/publications/blending-in-detail/
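For reference, the two blends reduce to a couple of lines each. Here is a CPU-side Python sketch of the math (helper names are mine; `n1` is the base normal, `n2` the detail normal, both already unpacked to the -1 to 1 range):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def udn_blend(n1, n2):
    # UDN: add the detail x and y onto the base, keep the base z
    return normalize((n1[0] + n2[0], n1[1] + n2[1], n1[2]))

def whiteout_blend(n1, n2):
    # Whiteout: same x and y, but multiply the z components together
    return normalize((n1[0] + n2[0], n1[1] + n2[1], n1[2] * n2[2]))
```

Both have the property that a flat detail normal leaves the base normal unchanged; the difference shows up as the detail tilts further, where UDN’s untouched z flattens the result slightly.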

# Triplanar Normal Mapping

So how does tangent space normal map blending apply to triplanar mapping? We can treat each plane as an individual normal map blend where one of the “normal maps” we’re blending with is the vertex normal! The UDN blend from that article ends up being especially cheap, since it simply adds the normal map’s x and y values to the vertex normal. Let’s look at what that looks like.

## UDN Blend

```
// UDN blend

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Swizzle world normals into tangent space and apply UDN blend.
// These should get normalized, but it's a very minor visual
// difference to skip it until after the blend.
tnormalX = half3(tnormalX.xy + i.worldNormal.zy, i.worldNormal.x);
tnormalY = half3(tnormalY.xy + i.worldNormal.xz, i.worldNormal.y);
tnormalZ = half3(tnormalZ.xy + i.worldNormal.xy, i.worldNormal.z);

// Swizzle tangent normals to match world orientation and triblend
half3 worldNormal = normalize(
    tnormalX.zyx * blend.x +
    tnormalY.xzy * blend.y +
    tnormalZ.xyz * blend.z
    );
```

That looks pretty good, doesn’t it? The UDN blend is quite popular because it’s so cheap and effective. But the blend has one drawback. Because of the way the math works out, the normal map gets slightly flattened out at angles greater than 45 degrees, which leads to the slight loss of detail in the blended normal.

## Whiteout Blend

The Whiteout blend from that article doesn’t have the issue the UDN blend suffers from. Going by that article it’s only slightly more expensive too, so let’s try that.

```
// Whiteout blend

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Swizzle world normals into tangent space and apply Whiteout blend
tnormalX = half3(
    tnormalX.xy + i.worldNormal.zy,
    abs(tnormalX.z) * i.worldNormal.x
    );
tnormalY = half3(
    tnormalY.xy + i.worldNormal.xz,
    abs(tnormalY.z) * i.worldNormal.y
    );
tnormalZ = half3(
    tnormalZ.xy + i.worldNormal.xy,
    abs(tnormalZ.z) * i.worldNormal.z
    );

// Swizzle tangent normals to match world orientation and triblend
half3 worldNormal = normalize(
    tnormalX.zyx * blend.x +
    tnormalY.xzy * blend.y +
    tnormalZ.xyz * blend.z
    );
```

They’re pretty similar without comparing them directly.

Here you can see how the normals for the UDN blend don’t look wrong, but do look slightly flattened out compared to Whiteout.

And there we have two completely plausible approximations of triplanar normal mapping. Neither has obvious visual issues with lighting. Either is good enough for the majority of use cases. And both are *faster* than the naive option, or even the straight normal map swizzle. Whiteout is also ground truth for axis aligned walls, like the straight swizzle technique! I suspect no one would even know Whiteout wasn’t perfect unless they were shown the results side by side, and even then it’s pretty darn hard to pick out problems.

What about the technique in GPU Gems 3? It’s actually based on the same idea as the above two shaders. It does the same normal map component swizzling, but it drops the “z” component of the normal map and uses zero instead. Why?

If you look closely at the GPU Gems 3 code, it actually works out to be the same as the UDN blend! *Neither* actually uses the z component of the normal maps. Their implementation ends up being a few instructions faster than the UDN blend shader I wrote, but produces identical results!

```
// GPU Gems 3 blend

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Swizzle tangent normals into world space and zero out "z"
half3 normalX = half3(0.0, tnormalX.yx);
half3 normalY = half3(tnormalY.x, 0.0, tnormalY.y);
half3 normalZ = half3(tnormalZ.xy, 0.0);

// Triblend normals and add to world normal
half3 worldNormal = normalize(
    normalX.xyz * blend.x +
    normalY.xyz * blend.y +
    normalZ.xyz * blend.z +
    i.worldNormal
    );
```

The GPU Gems 3 blend is the fastest option of these three shaders.³ But it’s unlikely the additional cost of using the Whiteout blend will be appreciable, even for mobile VR. And the quality difference may be an issue for some content and discerning artists.
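The claimed equivalence with the UDN blend is easy to verify off the GPU. Here is a Python sketch (helper names mine) comparing the two blends on arbitrary inputs, assuming, as the shaders above do, blend weights that sum to 1:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def triblend(nx, ny, nz, blend):
    # Weighted sum of the three per-plane normals
    return tuple(x * blend[0] + y * blend[1] + z * blend[2]
                 for x, y, z in zip(nx, ny, nz))

def udn_triplanar(tnx, tny, tnz, wn, blend):
    # Per-plane UDN blend with the vertex normal, then the world swizzle
    nx = (wn[0], tnx[1] + wn[1], tnx[0] + wn[2])  # zyx of (xy + wn.zy, wn.x)
    ny = (tny[0] + wn[0], wn[1], tny[1] + wn[2])  # xzy of (xy + wn.xz, wn.y)
    nz = (tnz[0] + wn[0], tnz[1] + wn[1], wn[2])  # xyz of (xy + wn.xy, wn.z)
    return normalize(triblend(nx, ny, nz, blend))

def gems_triplanar(tnx, tny, tnz, wn, blend):
    # GPU Gems 3: zero the tangent z, triblend, then add the vertex normal once
    nx = (0.0, tnx[1], tnx[0])
    ny = (tny[0], 0.0, tny[1])
    nz = (tnz[0], tnz[1], 0.0)
    s = triblend(nx, ny, nz, blend)
    return normalize(tuple(a + b for a, b in zip(s, wn)))
```

Because the blend weights sum to 1, the per-plane copies of the vertex normal in the UDN version collapse into the single vertex normal the GPU Gems version adds at the end.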

## Reoriented Normal Mapping

So what about the “Ground Truth” image I keep showing? The Blending in Detail article I linked to above describes several other methods beyond the UDN and Whiteout blends, including one they call Reoriented Normal Mapping. This method is fantastic; it ends up being a little more expensive than the GPU Gems 3 or Whiteout methods, but is quite close to “ground truth”! In fact, all the example images of “ground truth” in this article use this technique! It is also likely *still* faster than the naive method overall, even though it produces a more complex shader.

In Blending in Detail, the RNM function shown makes some assumptions about the normal maps being passed to it that aren’t true for Unity, so a minor modification to the original code needs to be made. In the comments of the article Stephen Hill provides an example for using RNM with Unity. With that function, the triplanar blend shader looks like this.

```
// Reoriented Normal Mapping blend

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Get absolute value of normal to ensure positive tangent "z" for blend
half3 absVertNormal = abs(i.worldNormal);

// Swizzle world normals to match tangent space and apply RNM blend
tnormalX = rnmBlendUnpacked(half3(i.worldNormal.zy, absVertNormal.x), tnormalX);
tnormalY = rnmBlendUnpacked(half3(i.worldNormal.xz, absVertNormal.y), tnormalY);
tnormalZ = rnmBlendUnpacked(half3(i.worldNormal.xy, absVertNormal.z), tnormalZ);

// Get the sign (-1 or 1) of the surface normal
half3 axisSign = sign(i.worldNormal);

// Reapply sign to Z
tnormalX.z *= axisSign.x;
tnormalY.z *= axisSign.y;
tnormalZ.z *= axisSign.z;

// Swizzle tangent normals to match world orientation and triblend
half3 worldNormal = normalize(
    tnormalX.zyx * blend.x +
    tnormalY.xzy * blend.y +
    tnormalZ.xyz * blend.z
    );
```

And here’s the function I linked to above.

```
// Reoriented Normal Mapping for Unity3d
// http://discourse.selfshadow.com/t/blending-in-detail/21/18
float3 rnmBlendUnpacked(float3 n1, float3 n2)
{
    n1 += float3( 0,  0, 1);
    n2 *= float3(-1, -1, 1);
    return n1 * dot(n1, n2) / n1.z - n2;
}
```

I say “for Unity3d” in there, but this should work for any tangent space normal maps that have been unpacked into a normalized -1 to 1 range.
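A direct Python port of `rnmBlendUnpacked` makes it easy to sanity check. One property worth confirming: blending in a flat detail normal of (0, 0, 1) should return the base normal untouched.

```python
def rnm_blend_unpacked(n1, n2):
    """Reoriented Normal Mapping blend for normals already unpacked
    to the -1..1 range. n1 is the base normal, n2 the detail."""
    n1 = (n1[0], n1[1], n1[2] + 1.0)
    n2 = (-n2[0], -n2[1], n2[2])
    d = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2]
    return tuple(a * d / n1[2] - b for a, b in zip(n1, n2))
```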

# “Ground Truth”

## Chasing the truth

Except Reoriented Normal Mapping still isn’t ground truth.

All of the above methods are still approximations. For the pedantic readers, I have been using scare quotes around “ground truth” intentionally, since it really isn’t true. The problem is the true definition of ground truth for triplanar normal mapping is difficult to nail down. One might assume any one projection plane by itself should look the same as it would on a mesh with tangents baked in for the same UV projection, something like what any basic terrain shader would do. But that method isn’t correct either; this style of projection is an *improper use of tangent space normal mapping*. The only time triplanar mapping with normal maps can ever be considered truly ground truth is when it is applied to an axis aligned box!

I also lied a little bit above when describing how tangent space normal mapping works. Specifically about the bitangent, which I mostly glossed over. I described it as the “green arrow” on the tangent texture example. This isn’t entirely true. For tangent space normal mapping, the bitangent is defined as the cross product of the normal and the tangent. The tangent and bitangent should both be perpendicular to the normal, and to each other. If you look at the triplanar tangent texture example you can see the tangent and bitangent arrows start to skew and eventually become a triangle in the corners. The result is the tangent and bitangent are perpendicular to the normal, but *not* to each other.

To be correct and match the way tangent space should be calculated would remove this skewing of the bitangent and the resulting tangent space matrix. The result is that the normal direction exhibits what appears to be a slight rotation in orientation when using the *correct* tangent space, and a different rotation when using the *skewed* tangent space.

The problem is that any kind of projected normal map like this is technically wrong. The normal maps being used weren’t generated with the same tangent space we’re displaying them with. They were likely generated for a perfectly flat plane. Normal maps should only ever be used with the geometry and tangent space they were originally calculated for. That means using a repeating texture with normal maps on terrain, or as a detail normal, or for triplanar mapping, is an improper use. So there *is no true ground truth* for these cases. Of course we use normal maps like this all of the time, and no one really notices anything wrong.

So it comes down to deciding what the ground truth you’re trying to reach should be. At the most basic level you need to decide what the normal map is supposed to represent. Is it the resulting normal from a surface displacement aligned to the texture’s projection, or along the surface normal? If it’s a displacement along the projection, the Whiteout blend is closer to ground truth! If it’s a displacement along the surface normal, then RNM is the closer method. To my eyes RNM retains more of the normal maps’ detail and looks better overall. Ultimately it comes down to personal preference and performance.

Except the naive method. That’s always wrong.

# Other Techniques For Triplanar Normal Mapping

The main benefit of the methods described above is they don’t need to do the matrix rotations to transform between tangent and world space, or vice versa. However you may still want tangent space vectors, like for parallax mapping techniques. So how do you get those?

## Screen Space Partial Derivatives

It’s possible to reconstruct a tangent space rotation matrix using screen space partial derivatives. This idea comes by way of Derivative Mapping, which was popularized by Morten Mikkelsen. That technique is sometimes incorrectly referred to as “normal mapping without tangents”, but it is different from tangent space normal mapping. It’s a great option for triplanar mapping and has a lot of benefits. For the purposes of this article I’m not going to go into derivative mapping. If you want to jump down that rabbit hole, try the two links above along with this one (which has links to the other two as well):

However, you can use screen space partial derivatives to help with tangent space normal maps. You can use them to reconstruct the tangent to world rotation matrix in the fragment shader for arbitrary projections, which can then be applied to the normal map for each side. Christian Schüler refers to this matrix as the cotangent frame in his article, as it isn’t exactly the same as the matrix from the usual tangent space.

```
// Cotangent frame from Screen Space Partial Derivatives

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Normalize surface normal
half3 vertexNormal = normalize(i.worldNormal);

// Calculate the cotangent frame for each plane
half3x3 tbnX = cotangent_frame(vertexNormal, i.worldPos, uvX);
half3x3 tbnY = cotangent_frame(vertexNormal, i.worldPos, uvY);
half3x3 tbnZ = cotangent_frame(vertexNormal, i.worldPos, uvZ);

// Apply cotangent frame and triblend normals
half3 worldNormal = normalize(
    mul(tnormalX, tbnX) * blend.x +
    mul(tnormalY, tbnY) * blend.y +
    mul(tnormalZ, tbnZ) * blend.z
    );
```

And here is the `cotangent_frame()` function modified for use with Unity.

```
// Unity version of http://www.thetenthplanet.de/archives/1180
float3x3 cotangent_frame(float3 normal, float3 position, float2 uv)
{
    // get edge vectors of the pixel triangle
    float3 dp1 = ddx( position );
    float3 dp2 = ddy( position ) * _ProjectionParams.x;
    float2 duv1 = ddx( uv );
    float2 duv2 = ddy( uv ) * _ProjectionParams.x;

    // solve the linear system
    float3 dp2perp = cross( dp2, normal );
    float3 dp1perp = cross( normal, dp1 );
    float3 T = dp2perp * duv1.x + dp1perp * duv2.x;
    float3 B = dp2perp * duv1.y + dp1perp * duv2.y;

    // construct a scale-invariant frame
    float invmax = rsqrt( max( dot(T,T), dot(B,B) ) );

    // matrix is transposed, use mul(VECTOR, MATRIX)
    return float3x3( T * invmax, B * invmax, normal );
}
```
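Away from the GPU you can check the same linear system by substituting explicit triangle edges for the `ddx`/`ddy` derivatives. Here is a Python sketch of that (my own test harness, not shader code):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cotangent_frame(normal, dp1, dp2, duv1, duv2):
    """Solve for the tangent and bitangent given two position deltas
    dp1/dp2 and their matching uv deltas duv1/duv2."""
    dp2perp = cross(dp2, normal)
    dp1perp = cross(normal, dp1)
    T = tuple(dp2perp[i] * duv1[0] + dp1perp[i] * duv2[0] for i in range(3))
    B = tuple(dp2perp[i] * duv1[1] + dp1perp[i] * duv2[1] for i in range(3))
    # scale-invariant frame, as in the shader version
    invmax = 1.0 / math.sqrt(max(sum(c * c for c in T), sum(c * c for c in B)))
    return (tuple(c * invmax for c in T), tuple(c * invmax for c in B), normal)
```

For a flat +z facing surface with standard UVs the frame comes out axis aligned, which is a useful sanity check on the math.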

I wouldn’t necessarily suggest using this option for production triplanar mapping as it’s more expensive than the above options, though it could be considered very close to a version of “ground truth”. It also has some problems where the polygonal shape of the geometry can become noticeable as hard edges in the normals. This is because the tangents are calculated from the actual geometry surface instead of a smoothly interpolated value. Whether or not this is an issue depends on the albedo, normal map, and geometry you’re using.

On a side note you can use this to your advantage to get low poly style faceted surface normals.

`half3 normal = normalize(cross(dp1, dp2));`

There are a number of points Christian Schüler makes in that article about normal mapping in general that are interesting food for thought. He goes into much more detail than I do on how we’re all going about tangent space normal maps wrong.

## Cross Product Tangent Reconstruction

What about replicating how tangents are calculated on a mesh? Even though I mentioned that this isn’t correct, it can be done. Unity’s terrain shaders use a very similar technique to calculate the tangents in the vertex shader. Here it is being done in the fragment shader instead.

```
// Tangent Reconstruction

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Get the sign (-1 or 1) of the surface normal
half3 axisSign = sign(i.worldNormal);

// Construct tangent to world matrices for each axis
half3 tangentX = normalize(cross(i.worldNormal, half3(0, axisSign.x, 0)));
half3 bitangentX = normalize(cross(tangentX, i.worldNormal)) * axisSign.x;
half3x3 tbnX = half3x3(tangentX, bitangentX, i.worldNormal);

half3 tangentY = normalize(cross(i.worldNormal, half3(0, 0, axisSign.y)));
half3 bitangentY = normalize(cross(tangentY, i.worldNormal)) * axisSign.y;
half3x3 tbnY = half3x3(tangentY, bitangentY, i.worldNormal);

half3 tangentZ = normalize(cross(i.worldNormal, half3(0, -axisSign.z, 0)));
half3 bitangentZ = normalize(-cross(tangentZ, i.worldNormal)) * axisSign.z;
half3x3 tbnZ = half3x3(tangentZ, bitangentZ, i.worldNormal);

// Apply tangent to world matrix and triblend
// Using clamp() because the cross products may be NANs
half3 worldNormal = normalize(
    clamp(mul(tnormalX, tbnX), -1, 1) * blend.x +
    clamp(mul(tnormalY, tbnY), -1, 1) * blend.y +
    clamp(mul(tnormalZ, tbnZ), -1, 1) * blend.z
    );
```

This isn’t exactly the same as calculating the tangents in the vertex shader and using the interpolated values. But again, as discussed, that’s wrong too, so the difference isn’t really a problem.

# Triplanar Blending

## Blending *out* the Details

If you’ve ever worked on triplanar mapping you’re probably familiar with a bit of code that looks something like this:

```
float3 blend = abs(normal.xyz);
blend /= blend.x + blend.y + blend.z;
```

The reason for that divide is to normalize the sum. The normalize() function in shaders returns a vector with a normalized *magnitude*, but we want the vector’s components to have a *sum* of 1.0. For a magnitude-normalized float3, the component sum will essentially always be more than 1.0, somewhere between 1.0 and ~1.7321, if we don’t do anything to it. So if you use the absolute value of the normals to blend the albedo textures, the corners will be overly bright. Dividing by the sum ensures the new sum is always 1.0.
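A quick Python check of the two normalizations on the corner direction makes the difference concrete:

```python
import math

# A unit-length normal pointing at a corner of the blend
corner = (1 / math.sqrt(3),) * 3

# The components of a magnitude-normalized vector can sum to well over 1
mag_sum = sum(abs(c) for c in corner)

# Dividing by the component sum instead gives weights that sum to exactly 1
b = [abs(c) for c in corner]
blend = [c / (b[0] + b[1] + b[2]) for c in b]
```

At the corner the magnitude-normalized components sum to about 1.732, which is exactly the over-bright corner artifact the divide fixes.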

Now that basic blend is quite soft, so the next step is usually to figure out some way to sharpen the blend. Here’s what I see being used frequently.

```
float3 blend = abs(normal.xyz);
blend = (blend - 0.2) * 7.0;
blend = max(blend, 0);
blend /= blend.x + blend.y + blend.z;
```

This helps sharpen a little bit, but it is a curious bit of code because that `* 7.0` is useless! Changing the `7.0` to any other positive number, or removing it completely, has no effect, yet this bit of code seems to show up in half of the triplanar implementations I see. A little thought will explain why it’s unnecessary: dividing a number by itself is always 1, and multiplying it by some non-zero value first doesn’t change that.
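This is easy to demonstrate numerically. A small Python comparison (the `sharpen` helper is my stand-in for the shader snippet above) shows the scale factor cancels in the divide:

```python
def sharpen(normal, scale):
    # abs, offset, scale by a positive constant, clamp, then normalize the sum
    b = [max((abs(c) - 0.2) * scale, 0.0) for c in normal]
    s = b[0] + b[1] + b[2]
    return [c / s for c in b]
```

Whatever positive `scale` is used, the ratios between the clamped components are unchanged, so the divide produces the same weights.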

As best I can tell this bit of code appears to originate from that same GPU Gems 3 article I referenced earlier! It’s unfortunate that the part that article got wrong seems to be the part that has seen the most reuse.

However, the `- 0.2` is fine; just leave out the multiply after it. There’s also an additional optimization that can be made by using a dot product to sum the components before the divide.

```
float3 blend = abs(normal.xyz);
blend = max(blend - 0.2, 0);
blend /= dot(blend, float3(1,1,1));
```

It’s a minor optimization; the original uses 6 instructions and the optimized version uses 4. But there’s no reason not to do it, as the results are identical and it removes a useless magic number.

That’s fairly minor, though good enough if your geometry is mainly flat walls. But what if we want an even sharper transition? You can increase the subtracted value, but the corners get noticeably sharper before the rest, and if you go too high it starts to go black. Subtracting about 0.55 is the highest you can go before getting obvious issues. This is because at the corners a normalized vector is (0.577, 0.577, 0.577), so subtracting more than that will cause the max() function to turn the corners into (0.0, 0.0, 0.0), and then you’re not only losing the values, you’re dividing by zero!

However there’s another technique that I prefer; use an exponent. If you’re okay with a fixed blend sharpness, using a hard coded power of 4 is just as fast as the above optimized option, and in my opinion looks better.

```
float3 blend = pow(normal.xyz, 4);
blend /= dot(blend, float3(1,1,1));
```

It’s a subtle difference, but it’s the same cost of 4 instructions. If you want more control over the blend sharpness using a material property, it’s a bit more expensive: Unity will show it as 5 instructions, but in reality it’s more like 11. Other hard coded powers cost less (1 more instruction for every power of 2 increase, so pow(normal, 8) is 5 instructions, 16 is 6 instructions, etc.).

The lower powers might still not look great with the brick, but I’m intentionally using a “worst case” texture to make the blend obvious. With a different texture and lighting a slightly soft blend can be beneficial.
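For a feel of how the exponent behaves, here’s a quick Python emulation of the power blend (again scratch math, not shader code; `pow_blend` is a made up helper name):

```python
def pow_blend(normal, sharpness=4):
    # Emulates: blend = pow(abs(n), sharpness); blend /= sum
    b = [abs(n) ** sharpness for n in normal]
    s = sum(b)
    return [x / s for x in b]

# A normal leaning mostly toward +Y; higher powers push more of the
# weight onto the dominant axis, with no cutoff or clamping needed.
n = [0.2, 0.9, 0.1]
for k in (1, 4, 8):
    print(k, pow_blend(n, k))
```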

There are also more advanced asymmetric blend shapes that might work better for some textures or use cases.

```hlsl
// Asymmetric Triplanar Blend
float3 blend = 0;

// Blend for sides only
float2 xzBlend = abs(normalize(normal.xz));
blend.xz = max(0, xzBlend - 0.67);
blend.xz /= max(0.00001, dot(blend.xz, float2(1,1)));

// Blend for top
blend.y = saturate((abs(normal.y) - 0.675) * 80.0);
blend.xz *= (1 - blend.y);
```

There’s also height blending, which is even nicer if you have height data for the texture. Some people cheat and just use the texture luminance, which can serve as a decent approximation of height for some textures.

```hlsl
// Height Map Triplanar Blend
float3 blend = abs(normal.xyz);
blend /= dot(blend, float3(1,1,1));

// Height value from each plane's texture. This is usually packed
// into another texture or (less optimally) stored as a separate texture.
float3 heights = float3(heightX, heightY, heightZ) + (blend * 3.0);

// _HeightmapBlending is a value between 0.01 and 1.0
float height_start = max(max(heights.x, heights.y), heights.z) - _HeightmapBlending;
float3 h = max(heights - height_start.xxx, float3(0,0,0));
blend = h / dot(h, float3(1,1,1));
```

The magic `blend * 3.0` in there is to reduce the blend region, producing a GPU Gems 3 like blend shape. The height map blend tends to work best with a wide starting blend region but clamped edges; the power method alone is too smooth, allowing the blend to show bad texture stretching.

The above height blending code is roughly based on the method in this article.

*Height-blending shader* by Octopoid

http://untitledgam.es/2017/01/height-blending-shader/
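If you want to trace the height blend numerically, here’s the same math as a Python scratch pad (made up helper name, not shader code):

```python
def height_blend(normal, heights, heightmap_blending):
    # Geometric blend weights, same as the basic triplanar blend
    b = [abs(n) for n in normal]
    s = sum(b)
    b = [x / s for x in b]
    # Bias each plane's height by its geometric weight (the "blend * 3.0")
    h = [heights[i] + b[i] * 3.0 for i in range(3)]
    # Keep only planes within the blend window below the tallest one
    start = max(h) - heightmap_blending
    h = [max(x - start, 0.0) for x in h]
    s = sum(h)
    return [x / s for x in h]

# Y has the larger geometric weight here, but the X plane's texture
# is taller at this point, so the height data lets X win the blend.
print(height_blend([0.6, 0.7, 0.2], [0.9, 0.2, 0.1], 0.6))
```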

# Additional Thoughts

## Mirrored UVs

Using the basic triplanar UV calculation results in the textures on some sides being mirrored. Most of the time this isn’t really an issue on its own. However it can lead to situations where the corners of two planes are showing the same texture at roughly the same UV, making this mirroring much more obvious.

It’s not difficult to solve though. The easiest solution is to slightly offset all 3 planes to reduce the chance of a perfect overlap. Add `0.33` to the Y plane’s UVs and `0.67` to the Z plane’s UVs and you’re done. However this may lead to unwanted alignment issues if you do want the textures to line up.

Alternatively you can also flip the UVs. This can be additionally useful if you’re using textures with an obvious orientation, like text. Multiply the x component of the UVs by the surface normal’s axis sign, then do the same to the tangent space normal maps to undo the flip.

```hlsl
// Projected UV Flip

// Triplanar uvs
float2 uvX = i.worldPos.zy; // x facing plane
float2 uvY = i.worldPos.xz; // y facing plane
float2 uvZ = i.worldPos.xy; // z facing plane

// Get the sign (-1 or 1) of the surface normal
half3 axisSign = sign(i.worldNormal);

// Flip UVs to correct for mirroring
uvX.x *= axisSign.x;
uvY.x *= axisSign.y;
uvZ.x *= -axisSign.z;

// Tangent space normal maps
half3 tnormalX = UnpackNormal(tex2D(_BumpMap, uvX));
half3 tnormalY = UnpackNormal(tex2D(_BumpMap, uvY));
half3 tnormalZ = UnpackNormal(tex2D(_BumpMap, uvZ));

// Flip normals to correct for the flipped UVs
tnormalX.x *= axisSign.x;
tnormalY.x *= axisSign.y;
tnormalZ.x *= -axisSign.z;
```

You can of course use both the flip and offset together if you wish.

## Unity Surface Shaders

Unity’s Surface shaders cause much pain for cases like this. There is currently no way to tell a surface shader to *not* apply the mesh’s tangent to world transform to normals output by the `surf()` function. The result is you must either manually transform the world space normals *into tangent space*, or hand modify the generated shader code. Transforming the normals into tangent space is possible, but needlessly expensive. Hand modifying the generated code is a pain. My personal choice has been to hand write vertex fragment shaders that follow the same general structure as surface shaders, but with slightly saner formatting. It would be nice if Unity offered the option to skip tangent space in surface shaders altogether.

There is a functional Surface Shader example transforming the world normals into tangent space within the surf function here:

https://github.com/bgolus/Normal-Mapping-for-a-Triplanar-Shader/blob/master/TriplanarSurfaceShader.shader

## Triplanar Normals for Other Renderers (Not Unity)

The techniques discussed above can be used in any renderer, real time or otherwise. The trick will be understanding that renderer’s world coordinate system and normal map orientation. Unity uses a left handed, Y up world space coordinate system and +Y normal maps. The fallout of that is that many of the lines for the Z plane have added negative signs the other lines do not. If Unity were right handed and Y up, it would not need those. Renderers that use -Y normal maps will likely need a *lot* more lines with negative signs added, as the world space orientation and the normal maps’ Y axis will be inverted for many (if not all) planes! This is one of the big complications with triplanar normal mapping that will bite most people. It can be figured out just by knowing the coordinate system and the normal map orientation, but it’s easy to get wrong even then. Try looking at one face at a time and flipping the sign around until it looks correct, then check the reverse side too! It’ll be a lot of trial and error at the start. Just think of it as an application of the scientific method!

¹ Unreal Engine and some other tools use what are called -Y normal maps. This means the green channel is inverted from what Unity and other +Y tools would use. Green in that case is how “down” it is. The difference originally comes from OpenGL and DirectX conventions for UV handling. The UV position “0.0, 0.0” in OpenGL is in the bottom left, and in DirectX it’s the top left. This means that by default UVs flow “up and right” for OpenGL and “down and right” for DirectX. Ultimately it’s trivial to use whichever convention you’re most used to and flip the normal maps in the shader. Since Unity has to support both OpenGL and DirectX, it actually flips the textures upside down for DirectX platforms!
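As a concrete illustration of that convention difference, converting between +Y and -Y is just an inversion of the stored green channel around its midpoint. A minimal Python sketch (made up helper name, not any engine’s API):

```python
def flip_green(stored_g):
    # Stored green is 0..255 encoding -1..1; inverting around the
    # midpoint flips a normal map between +Y and -Y conventions.
    return 255 - stored_g

g = 200               # "tilted up" in a +Y map
print(flip_green(g))  # 55: the same tilt expressed in a -Y map
```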

² Bitangent vs binormal. The vast majority of literature and shaders out there will refer to the three vectors for tangent space as normal, tangent, and binormal. A “bitangent vector” is arguably a more accurately descriptive name for the third vector, in that it is a second surface tangent. A tangent is a line that touches but does not intersect a curve or surface. Those who argue against the term bitangent point out that it already has a specific, unrelated meaning in the world of mathematics. A bitangent is traditionally a line that is the tangent for two points on a curve. So, clearly that’s not remotely similar. However technically binormal also had a previous meaning, though it is closer to how it’s used for our purposes. A binormal is the cross product of the normal vector and the tangent vector for a curve. However by curve they mean explicitly an infinitely thin line, not a surface. If you think of a normal as a direction pointing directly away from a surface, a normal for a line is any direction perpendicular to the tangent. So binormal is a good name there, as it is another normal of the line. For a surface the term binormal makes less sense, as it is no longer perpendicular to the surface and thus no longer a normal. This is why bitangent becomes attractive: it mimics the use of the bi- prefix in binormal, but more accurately describes the vector as the additional tangent that it is.

So, basically, both are kind of wrong. If you want to get down to it, even the term tangent is *slightly* wrong here, as it’s being used to describe a unit vector that is specifically both a surface tangent and the texture coordinate’s x derivative, which is in itself a tangent. Thus it is a single line that is a tangent for two curves, so maybe the *tangent* should be called *bitangent*? And the binormal / bitangent should be a *bibitangent*? Or cobitangent? Crossbitangent? Fred? …

You can see this argument can keep going for a while. Naming things is hard. I personally subscribe to bitangent over binormal. Either way, it’s good to understand that in the context of tangent space normal mapping both are being used to refer to the same thing.

³ On desktop the GPU Gems 3 & UDN blends work out to be quite a bit cheaper than the Whiteout blend, even though the code for Whiteout does not appear to be doing all that much more. This is because the Gems 3 blend doesn’t need to use the z component of the normal map. In most modern game engines normal maps only store the x and y components and reconstruct the z component. Part of what Unity’s `UnpackNormal()` function does is that z component reconstruction. Since Gems 3 does not use the z, the shader can skip reconstructing it. This results in the GPU Gems 3 blend triplanar shader saving roughly 14 instructions over Whiteout. This would make it seem like the Gems 3 blend shader would be the clear choice for mobile, but it’s not. On mobile Unity stores all 3 components of the normal map instead of reconstructing the z. This is specifically so it does not have to pay the cost of the reconstruction, and to save some texture memory at the cost of a minor visual quality reduction. The result is the Gems 3 and Whiteout shaders are almost identical in performance on mobile. In some cases the Whiteout blend may even be slightly faster depending on how the shader compiler chooses to optimize.
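For reference, the z reconstruction that footnote talks about is just the unit length constraint, along the lines of this Python sketch (made up helper name, mirroring what `UnpackNormal()` style functions do for two channel normal maps):

```python
import math

def reconstruct_z(x, y):
    # A unit normal satisfies x^2 + y^2 + z^2 = 1, so z can be rebuilt
    # from the two stored channels; the clamp guards against values
    # nudged slightly out of range by texture compression.
    z_squared = 1.0 - x * x - y * y
    return math.sqrt(max(z_squared, 0.0))

print(reconstruct_z(0.6, 0.0))  # ≈ 0.8, since 0.6² + 0.8² = 1
```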