Object-object occlusion

I’ve been playing with an invisiprim. In case you don’t know what those are: if you’re wearing heels, you very likely have two on you right now. They are prims textured in a specific, tricky way that makes the avatar mesh invisible when seen through them, along with any prim whose texture contains an alpha channel. In footwear, they are usually used to hide the feet where they would otherwise stick through the sole of the shoe. Normally, this is thought to be the limit of invisiprims, but I have found that it is not so. Later, I discovered that the effect had already been observed and described, but, as usual, it was lost in the sea of information and so mostly forgotten.

When the Second Life viewer draws what you see, it has in its possession every object within the sphere delimited by your draw distance slider, because it requested their positions and parameters from the server. But sending the entire mess to the graphics card would choke even the best one, so it tries to save a little. Upon converting the objects into triangles, it only sends those it expects you to be able to see, which it determines by calculating which objects cover which others. Objects that are considered covered are not rendered at all.
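The decision above can be sketched in miniature. This is a toy model in Python, not the viewer's actual code: the object names, the flat screen-space rectangles, and the single-occluder test are all simplifying assumptions of mine, but the conservative shape of the logic is the same, since only objects provably covered are dropped.

```python
# Toy sketch of conservative occlusion culling (hypothetical data model,
# not the real Second Life viewer code): an object is skipped only when a
# single closer, opaque object's screen rectangle fully covers its own.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    depth: float       # distance from the camera
    rect: tuple        # screen-space bounds (x0, y0, x1, y1)
    opaque: bool = True  # an invisiprim still counts as opaque here

def covers(a, b):
    """True if rectangle a fully contains rectangle b."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]

def visible_objects(scene):
    """Conservatively cull: drop an object only when a closer opaque
    object's rectangle completely covers it, erring on the side of
    drawing too much rather than too little."""
    result = []
    for obj in scene:
        occluded = any(
            other.opaque and other.depth < obj.depth
            and covers(other.rect, obj.rect)
            for other in scene if other is not obj
        )
        if not occluded:
            result.append(obj)
    return result

wall = SceneObject("wall", depth=1.0, rect=(0, 0, 100, 100))
glasses = SceneObject("glasses", depth=5.0, rect=(40, 40, 60, 60))
tree = SceneObject("tree", depth=5.0, rect=(90, 10, 140, 60))  # pokes past the wall

drawn = [o.name for o in visible_objects([wall, glasses, tree])]
# The glasses sit fully behind the wall and are culled; the tree is only
# partially covered, so it still gets sent to the card in full.
```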

This is a mathematically complex problem, so the viewer prefers to err on the side of caution. A big sheet of invisiprim is still considered non-transparent by this process, so you can see exactly what it takes to make it consider an object ‘covered’ (that is, occluded) and observe the practical savings yourself.

And it is very obvious that occlusion is not determined per individual prim, but per linked object. If you can see even part of my 252-prim glasses, your card still gets the full data on every single triangle that composes them. But the moment I step far enough behind a wall, this no longer happens, and they cannot possibly affect your rendering. If I am behind your back, you don’t see them either: only the avatars at least partially within your camera view affect your FPS.

Now, what happens if you’re inside a building that is linked into a single object? It will occlude the avatars behind its walls, but every prim of it will get sent to your card. As a result, the card will receive the walls behind your back, and even though it will not need to draw them, it will still have to process them. This means that, depending on the surroundings, a complex build made of smaller linked chunks may actually render faster than a simpler build linked together as one unit.
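The cost of that all-or-nothing granularity is easy to put in numbers. Below is a hypothetical illustration, with made-up prim names and triangle counts, of the same four walls sent to the card either as one linkset or as four: if any prim of a linkset is potentially visible, the whole linkset ships.

```python
# Hypothetical illustration of per-linkset culling cost (invented numbers):
# a linkset's triangles are all sent to the GPU if any prim in it may be seen.

def triangles_sent(linksets, is_visible):
    """Sum triangle counts over linksets with at least one possibly-visible prim."""
    total = 0
    for prims in linksets:
        if any(is_visible(p) for p in prims):
            total += sum(tri for _, tri in prims)
    return total

# Each prim: (name, triangle_count); the visibility set stands in for the
# occlusion test - you are facing the front wall and standing on the floor.
visible_names = {"front_wall", "floor"}
see = lambda prim: prim[0] in visible_names

one_linkset = [[("front_wall", 500), ("back_wall", 500), ("roof", 800), ("floor", 400)]]
chunked = [[("front_wall", 500)], [("back_wall", 500)], [("roof", 800)], [("floor", 400)]]

print(triangles_sent(one_linkset, see))  # 2200 - everything ships together
print(triangles_sent(chunked, see))      # 900  - only the chunks you can see
```

The numbers are invented, but the asymmetry is the point: chunking the build lets the occlusion test discard the parts behind your back instead of processing them along with the rest.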

Whether any FPS drop can actually be avoided this way remains to be proven, naturally, but it is something to consider. It is clear, though, that the practical rendering lag corresponding to high ARC (what little of it actually exists) very much depends on where you are and how complex your environment is, but mostly on how many avatars you can see at once. Much of the rendering load at the Hair Fair could have been avoided entirely by making the spaces less open and the walls higher.

