64kb ought to be enough for everyone

That’s what they must have thought when they introduced the Mono virtual machine to SL.

Well, I must say that LL’s LSL-to-Mono compiler sucks immensely. I’d noticed some odd things about it recently, but today I have working and largely irrefutable proof in my hands of just how much it sucks, and I’m telling you it ain’t pretty.

As I have mentioned before all over, I have been working on a complex, highly automated and, hopefully, blonde-proof device to help with setting up treasure hunts of the more interesting and fun kinds. It’s almost ready for release and is currently in field testing, growing in features so much that it frustrates me to no end. This is a rather big project for LSL, and monstrously big for my LSL coding experience, totaling around 3000 lines of code in about 30 individual scripts scattered over 6 prims. Some of the scripts are particularly memory-intensive, because they manipulate long lists of keys, which is what makes the whole thing so versatile and, well, cool, if I say so myself. The core script is almost constantly operating at the limit of free memory, right next to a stack-heap collision; it’s largely impossible to move many more components out of it into other scripts without slowing the whole thing into treacle, and I’ve been looking for ways to optimize it the entire time I’ve worked on it.

The first thing to run out of memory, though, was another script: the one used to display the dialog menu interface. That brought me to discover that simply declaring an empty state in Mono results in 3kb of memory gone immediately. States weren’t critical for it, so I recoded it to work without them and went on my merry way. I was too busy to consider the possible implications.
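For context, reproducing that is as simple as adding one unused state to a script; a sketch of the sort of test I mean, comparing llGetFreeMemory with and without the extra state:

```lsl
default
{
    state_entry()
    {
        // Under Mono, compare this figure with and without the
        // extra state below; the difference I saw was about 3kb.
        llOwnerSay("Free memory: " + (string)llGetFreeMemory());
    }
}

// An effectively empty state, never entered from anywhere.
state unused
{
    state_entry() {}
}
```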

And along that way I wanted to save memory by replacing calls of the form llListReplaceList(list, [data], ptr, ptr) with a wrapper function like l2r(list, data, ptr), hoping it would save me some bytecode.
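The wrapper was nothing more than a one-liner around the library call; a reconstructed sketch, with the element type assumed to be string for illustration:

```lsl
// Shorthand for replacing a single element at index ptr.
list l2r(list l, string data, integer ptr)
{
    return llListReplaceList(l, [data], ptr, ptr);
}
```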

Well, it doesn’t.

In fact, one function declaration plus 100 (!) calls to that short function invariably produces more bytecode than 100 direct calls to llListReplaceList. I jotted down that result and went on my merry way again.

Today, I looked at the script again and suddenly had a eureka moment. I inlined all the long functions I called only once or twice, duplicating their code directly in the states. Then I inlined all the shorthand functions I called often, such as signal(string), which just wrapped llWhisper(COMM_CHANNEL, string). There were a lot of calls to those.
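Concretely, the shorthand case looked something like this; a sketch, with the channel value made up:

```lsl
integer COMM_CHANNEL = -54321;  // assumed value, for illustration

// Before: a shorthand wrapper, called from many event handlers.
signal(string msg)
{
    llWhisper(COMM_CHANNEL, msg);
}

default
{
    touch_start(integer n)
    {
        // Before: signal("treasure placed");
        // After deleting the wrapper, each call site becomes:
        llWhisper(COMM_CHANNEL, "treasure placed");
    }
}
```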

And the result was 5kb (!) more free memory and a significant gain in the amount of treasure items a single server box can handle.

My only theory to explain this is that declaring a function makes the compiler generate the code required to create a function context, and defining a script state probably requires even more. For a general-purpose virtual machine that may well be less efficient than for the original LSL one. But instead of storing that code globally somewhere and calling it when required, like good compilers do, LL’s compiler just dumps a fixed-size code block in, one that is at least 700 bytes long, and pretends that is nice and proper.

Well, it bloody hell isn’t nice, isn’t proper, and I bloody have no words.

What the fuck do they think they’re doing, pardon the term, doing something like that and then blaming people for ‘overusing openspace’?!

P.S., long after: an idea: an offworld preprocessor to unroll shorthand functions, and maybe allow list[start:end] instead of llList2List and friends. I can’t fix this braindead language, but hopefully I can make it more tolerable and prevent further blisters on my fingers.
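The unrolling half of that idea is just textual macro expansion, and could be done outside the grid entirely. A minimal sketch in Python; the #inline directive syntax and all names here are invented for illustration, not any real tool:

```python
import re

# Matches hypothetical definitions like:
#   #inline signal(msg) = llWhisper(COMM_CHANNEL, msg)
INLINE_RE = re.compile(r'#inline\s+(\w+)\(([^)]*)\)\s*=\s*(.+)')

def preprocess(src: str) -> str:
    """Expand #inline macro definitions into their bodies,
    so the LSL compiler never sees a function declaration."""
    macros = {}
    out_lines = []
    for line in src.splitlines():
        m = INLINE_RE.match(line.strip())
        if m:
            name, params, body = m.groups()
            macros[name] = ([p.strip() for p in params.split(',') if p.strip()],
                            body)
            continue  # definition lines are dropped from the output
        out_lines.append(line)
    text = '\n'.join(out_lines)
    for name, (params, body) in macros.items():
        call_re = re.compile(re.escape(name) + r'\(([^)]*)\)')
        def expand(m, params=params, body=body):
            args = [a.strip() for a in m.group(1).split(',')]
            result = body
            for p, a in zip(params, args):
                # Substitute each parameter name with the call-site argument.
                result = re.sub(r'\b' + re.escape(p) + r'\b', a, result)
            return result
        text = call_re.sub(expand, text)
    return text

src = '#inline signal(msg) = llWhisper(COMM_CHANNEL, msg)\n' \
      'default { state_entry() { signal("hello"); } }'
print(preprocess(src))
# default { state_entry() { llWhisper(COMM_CHANNEL, "hello"); } }
```

This naive version would choke on nested parentheses or commas inside string arguments, so a real tool would want an actual tokenizer, but it shows the shape of the thing.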

