I was wondering: what if I actually want to keep that alpha blendish look, merging it with depth sorting?
This is called Order Independent Transparency, or OIT. The reason you’re having trouble finding a Unity shader that does all the things you want is that it doesn’t exist. Realistically you have to choose something to give up; you can’t have smooth alpha blending, correct depth sorting, shadows, and performance all at the same time. Unity also makes this much harder, as it still does not support shadow receiving on transparent materials, at least not when using the built-in forward rendering path.
I’ve seen some techniques use dithering to achieve this, but it doesn’t really look the way I intended, especially with long or thin strands of fur; the strands don’t stack with each other.
Dithered approaches are common because they play nicely with deferred rendering. You can leverage A2C and dithering together to get better results than using the straight alpha value alone to approximate alpha blending, but the stacking issue you’re describing, whose cause I went over in the article, remains. The best dithered approaches use a different dither pattern depending on the depth or primitive ID, so it’s not purely screen space. Chris Wyman and Morgan McGuire’s Hashed Alpha Testing paper goes into a technique for doing this that I’ve used for VR.
You can additionally dither the coverage mask itself so that the mask sample indices are randomized, which gets around that, but it has other side effects and doesn’t guarantee it won’t still have the same stacking issues. The same Hashed Alpha Testing article mentions this but doesn’t provide any shader code; there are some good examples in that same twitter thread.
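The core of hashed alpha testing is a sketch like the one below — the hash functions follow the paper, but the `100.0` quantization scale is an assumption, and the paper’s extra step that stabilizes the hash against motion and mip level is omitted for brevity:

```hlsl
// Hashed alpha testing sketch (after Wyman & McGuire).
// Instead of a fixed cutoff, the alpha threshold comes from a hash of a
// value that is stable on the surface (object-space position here), so
// stacked layers of fur don't all discard the same screen pixels.

float Hash2D(float2 v)
{
    // Cheap, stable 2D hash from the paper; any good hash works.
    return frac(1.0e4 * sin(17.0 * v.x + 0.1 * v.y) *
                (0.1 + abs(sin(13.0 * v.y + v.x))));
}

float Hash3D(float3 v)
{
    return Hash2D(float2(Hash2D(v.xy), v.z));
}

// Called from the fragment shader; objectPos is the interpolated
// object-space position.
void HashedAlphaTest(float alpha, float3 objectPos)
{
    // 100.0 is an assumed quantization scale; the full technique derives
    // this from screen-space derivatives so the noise is stable under
    // motion and across mip levels.
    float threshold = Hash3D(floor(objectPos * 100.0));
    clip(alpha - max(threshold, 1e-6));
}
```

The `max(threshold, 1e-6)` keeps fully opaque texels (alpha of 1) from ever being clipped when the hash lands on exactly 0.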
My naive-yet-functional approach was this: I put the same mesh (fur only) on the model twice, with an alpha blended shader on one and a cutout shader on the other. I know. I’m a barbarian :P
Yet I’m trying to use or stay close to the standard shaders as much as I can, because they’re easier to maintain (Unity) and 100% tested.
Your approach of rendering twice, once using cutout and once using transparency, is a good option; I even reference this technique in the article. You can do it with a two pass shader to simplify things a little, but as far as the GPU is concerned the result is roughly the same, since you’re still rendering the geometry twice regardless. You also get everything you want in terms of alpha blending, sorting, fog, self shadowing*, and VR friendliness (ish), though not for all pixels. When it comes to transparency sorting on hair you still have to pre-sort the geometry yourself to get the best results. I also note in the article that The Witness does two passes with the first using A2C rather than straight alpha testing for its clouds, but I’ve never gotten this to work as well as they have for anything. They even say it only worked well for their clouds and nothing else.
*Self shadowing will only work on the alpha tested parts since, as noted above, transparent objects don’t receive shadows in Unity.
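A bare-bones version of that two pass setup looks like the ShaderLab skeleton below. This is an unlit stand-in for the idea, not the Surface Shader from the forum post; the shader name, 0.5 cutoff default, and queue choice are all assumptions:

```hlsl
// Two pass "cutout then blend" sketch. Pass 1 alpha tests and writes
// depth so sorting mostly works; pass 2 alpha blends over it, without
// writing depth, to soften the hard cutout edges.
Shader "Sketch/TwoPassHair"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Cutoff ("Alpha Cutoff", Range(0,1)) = 0.5
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="TransparentCutout" }

        CGINCLUDE
        #include "UnityCG.cginc"
        sampler2D _MainTex;
        float _Cutoff;

        struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

        v2f vert (appdata_img v)
        {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.uv = v.texcoord;
            return o;
        }
        ENDCG

        // Pass 1: alpha tested, writes depth.
        Pass
        {
            ZWrite On
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                clip(col.a - _Cutoff);
                return col;
            }
            ENDCG
        }

        // Pass 2: alpha blended, depth tested against pass 1's depth
        // but not writing it.
        Pass
        {
            ZWrite Off
            ZTest LEqual
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            fixed4 frag (v2f i) : SV_Target
            {
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}
```

In a real version you’d replace the unlit fragment shaders with proper lighting (or a Surface Shader), but the pass states — ZWrite On for the cutout pass, ZWrite Off plus SrcAlpha OneMinusSrcAlpha blending for the second — are the part that makes the trick work.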
The easiest way to get a shader that does most of what the Standard shader does, but with additional features, is to use a Surface Shader. I did post an example two pass Surface Shader for hair on the Unity forum, though there are some issues with it, especially with shadow receiving. The thread itself goes into why, and possible workarounds.
I also recently posted a lot on OIT here: