I am by no means an expert in the workings of 3D engines — meaning, I kind of stopped tracking the topic seriously back when raycasting engines were the pinnacle of technology and there was no such thing as a GPU. But I understand a few things about them, and the few things I understand about how Second Life uses your GPU bug me no end.
For various reasons originating in pure mathematics, your graphics card is optimised to draw three-dimensional triangles — the biggest of these reasons being that three points in 3D, as long as they don't all sit on one line, define a flat surface: wherever those three points are, a single flat plane can be drawn through all of them. All 3D objects on your screen are actually a mess… er, a mesh of those triangles. GPU performance is measured in triangles per second, among other things.
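To make that "three points define a plane" claim concrete, here's a small sketch (plain Python, nothing Second Life specific): the cross product of two edge vectors gives the plane's normal, and a zero normal tells you the points were collinear and don't define a plane at all.

```python
# Sketch: any three non-collinear 3D points define a plane.
# The cross product of two edge vectors gives the plane's normal n,
# and the plane is then the set of points p with n . p = d.

def plane_from_points(a, b, c):
    # Edge vectors from point a
    u = tuple(b[i] - a[i] for i in range(3))
    v = tuple(c[i] - a[i] for i in range(3))
    # Normal = u x v; a zero vector means the points are collinear
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    d = sum(n[i] * a[i] for i in range(3))
    return n, d

n, d = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))
# These three points lie in the XY plane: normal (0, 0, 1), d = 0
```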
Now, imagine a cube. How many triangles do you need to draw it? A cube has 6 faces, each of which is a square, and every square can be drawn using two triangles. So the answer would be 12, right? In a normal 3D engine, it would. A large wall covering half your screen is still only 2 triangles — in a 3D game. While the GPU still has the same number of pixels to paint, it only has two triangles to paint, and only needs to know the coordinates of four individual points — vertices, to be precise — that tell it where to start and end.
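For concreteness, here's what a 12-triangle cube looks like as an indexed mesh — a generic illustration, not Second Life's actual data layout (real engines often duplicate vertices to get per-face normals and texture coordinates, but the triangle count stays at 12):

```python
# A minimal indexed cube mesh: 8 shared vertices, 12 triangles.
# Vertex index = 4x + 2y + z for corner coordinates (x, y, z) in {0, 1}.

VERTICES = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Two triangles per face, each a triple of indices into VERTICES.
# (Winding order is not fussed over here; this is just a sketch.)
TRIANGLES = [
    (0, 1, 3), (0, 3, 2),  # x = 0 face
    (4, 6, 7), (4, 7, 5),  # x = 1 face
    (0, 4, 5), (0, 5, 1),  # y = 0 face
    (2, 3, 7), (2, 7, 6),  # y = 1 face
    (0, 2, 6), (0, 6, 4),  # z = 0 face
    (1, 5, 7), (1, 7, 3),  # z = 1 face
]

assert len(VERTICES) == 8 and len(TRIANGLES) == 12
```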
Now, rez a cube. Turn on wireframe rendering, which will let you actually see where all triangles are. And stare.
Because every bloody cube on the grid has each face split into nine squares — that's 18 triangles per face, or 108 triangles for the GPU to think about! That's almost as many as are needed to draw a ball!
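The arithmetic behind that 108, spelled out (the 3×3-per-face subdivision is what wireframe rendering shows; the exact grid density is what I observe, so treat it as my measurement rather than gospel):

```python
# Triangle count for a Second Life box vs. a plain 12-triangle box.
faces = 6
quads_per_face = 3 * 3   # each face subdivided into a 3x3 grid of squares
tris_per_quad = 2

sl_cube = faces * quads_per_face * tris_per_quad   # the grid's cube
simple_cube = faces * 1 * tris_per_quad            # a sane engine's cube

print(sl_cube, simple_cube, sl_cube // simple_cube)  # 108 12 9
```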
I can only imagine one good reason to do it this way, and that would be that each cube at any time might taper, twist, hollow, and otherwise transform, so the viewer has to be ready for it… at the cost of nine times the triangles, and all the vertex data behind them, sent to the GPU to draw it.
Now tell me, how often do you twist, taper or otherwise mutilate a wall? I bet it's not that often, and walls and boxes have to be the most numerous and commonplace stationary prim type — most commonplace until you see someone's hair, at least.
If each of these normal, plain, standard boxes were rendered with 12 triangles instead of 108, I'd expect the rendering load to drop noticeably — by something like 30%, at a guess.
There are two ways to go about it that I can name.
The most obvious one would be a clientside optimization: coding a special case into LLVolume to ensure that, as long as we are rendering a standard, untwisted box, the resulting mesh contains no extraneous triangles. To say whether that is possible, I'd need to look into llvolume.cpp, which I'm not all that anxious to do — and there might be reasons that make it unfeasible.
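The shape of that special case, as I imagine it — purely hypothetical Python, since the real code lives in llvolume.cpp and is C++; every name and parameter below is my assumption, not the actual viewer API:

```python
# Hypothetical sketch: take the fast path whenever none of the
# deforming prim parameters are in use. Names and defaults are
# illustrative guesses, not Second Life's real internals.

DEFAULTS = dict(twist=0.0, taper=0.0, hollow=0.0,
                path_cut=(0.0, 1.0), shear=0.0)

def is_plain_box(params):
    """True when the box uses none of the deforming parameters."""
    return all(params.get(k, v) == v for k, v in DEFAULTS.items())

def make_unit_cube_mesh():
    # Stub: would really return 8 vertices + 12 triangle indices.
    return {"triangles": 12}

def make_subdivided_box_mesh(params):
    # Stub: the existing 3x3-per-face grid -> 6 * 9 * 2 = 108 triangles,
    # ready to be deformed by twist/taper/hollow/etc.
    return {"triangles": 108}

def build_box_mesh(params):
    if is_plain_box(params):
        # Fast path: no subdivision grid needed.
        return make_unit_cube_mesh()
    # Slow path: keep the deformable mesh for tortured prims.
    return make_subdivided_box_mesh(params)
```

The point is that the check is cheap and purely local to the viewer: nothing about the prim's stored parameters has to change, only which mesh gets built from them.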
Another would be to introduce a new prim type — or several — which cannot be twisted, path cut, or otherwise tortured, only scaled and rotated, and let the content creators decide when they need a flexible prim for something that will change shape and when a simpler one will do better and save everyone the trouble.
Now tell me. Why wasn't something like this done years ago?…