Morloc in an AI world

by Zebulun Arendsee

Morloc can compose functions across languages under a common type system. When most people hear this, the first use case that comes to mind is reducing friction between teams developing in different languages, say R and Python. But why bother? Why not just write everything in Python? And for that matter, why program at all? Programming is a job for the AI. I will address these questions in this post, starting with the more general question of the value of programming and then circling back to the value of Morloc specifically.

“If you can’t say something falsifiable, don’t say nothing at all”

I will form this argument based on a progression of premises and conclusions that pretends to be a mathematical proof. Hopefully my labels will at least provide a more convenient way to lay out where my argument breaks if you disagree with my conclusion (see figure on left). The ordering of points may feel a little jarring, but keep each in mind; they all have their place.

P1: All systems must assume the presence of bad actors. Could a world exist where there are no bad actors? Maybe, but this world wouldn’t have either of us in it, would it? As long as there are humans and vaguely human-like agents, there will be bad actors. Even if there were none, systems that assume no bad actors would be vulnerable to random patterns that happen to emulate a bad actor. You might call these Boltzmann Actors. Systems will always need to be secure and careful about their input, human or otherwise.

P2: Stupid is fast. Narrow intelligence outperforms general intelligence for narrow problems. I suspect a real mathematician could find a real proof for this, but here is my wishy-washy justification. The “speed” of a mind is inversely related to its size. The relationship may not be linear; perhaps it is only logarithmic with good search algorithms. But extra knowledge and extra functionality that is never pertinent to a narrow problem can only slow the calculation. Moving in the opposite direction, say an oracle gives us the fastest possible algorithm for a particular narrow problem; would it always happen to be the most generally intelligent one as well? I suspect not.

P2 is most obviously true for things like sorting algorithms, where the problem is well-defined and we can empirically see that our compiled “minds” (whether written by human or AI is irrelevant) are faster than our more generally intelligent minds (either humans or LLMs). But we can extend the idea further: perhaps our LLMs are just a narrow intelligence that models the collective human mind, rather like our sort algorithms model the order of integers. And perhaps there are higher minds possible that are to us as we are to sort algorithms. If so, they likely think so slowly that they could interact with us only via fast algorithms on the level of our LLMs. Of course, “speed” needs to be adjusted for parallelism and power and efficiency and all that, so comparison between architectures is nuanced.
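A toy sketch of P2 (my own illustration; the function names are assumptions, not from any real benchmark): a counting sort only handles small non-negative integers, but within that niche it does O(n + k) work, while any general comparison sort is stuck doing at least O(n log n) comparisons. The narrow mind wins on the narrow problem.

```python
# "Stupid is fast": a narrow algorithm beats a general one on the narrow
# problem it was built for. Counting sort works only for small non-negative
# integers, but in that niche it runs in O(n + k) time, while a general
# comparison sort needs at least O(n log n) comparisons.

def counting_sort(xs, max_value):
    """Sort non-negative integers bounded by max_value. Narrow, but fast."""
    counts = [0] * (max_value + 1)
    for x in xs:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert counting_sort(data, 9) == sorted(data)  # same answer as the general mind
```

Hand the same narrow function a string or a negative number and it fails; that is the price of its speed.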

The most important point of P2 is that higher minds benefit from the use of lower minds because they are more efficient. This leads to my first conclusion: C0: AI will use functions. Now this might seem a little obvious, but remember that humans have only been using computers for a few decades. Most of our development, most of our culture and science, we achieved without being able to offload our thought processes (unless you count prostheses like pen and paper). One might think that a great mind would be sufficient unto itself and would have no need of classical algorithms to crunch data for it. Perhaps an exotic mind could create these functions internally. Perhaps it could have billions of “neurons” that collectively achieve intelligence. Some of the neurons might behave like classical algorithms that can process data at scale. These would be functions. Now not all computation is neatly modeled by functions that map an input to an output; some things go both ways, like an enzyme or a folding protein – these perform computations of another form, and no doubt some clever reader could ascribe them a name. I don’t know the bounds of classical functions. So while it might initially seem obvious that AIs will use functions, and P2 does seem to support this conclusion, I am not thoroughly convinced. Need more evidence.

So let me introduce P3: Classical functions are deterministic. OK, finally, this one you can’t argue with. We’ve hit bedrock. By definition, a classical function maps each element in the domain to exactly one element in the co-domain. This is something wibbly-wobbly protein computers can never offer. Any AI that needs to do a deterministic transformation will need a classical function for its efficiency and reliability. Of course, let’s not forget that LLMs are classical algorithms. They are functions mapping from bytes to bytes. Deterministic. What we call “AIs” might just be trivial algorithms to higher minds, the sort of throw-away “hello human” program that a higher mind might plop down in a billion-year afternoon.

As a final support for the importance of classical functions, I bring us back to P1: adversaries exist. A function is observable, in a sense. You can have yours and I can have mine and we can both agree that they are the same. We can compare them for equality and map them to large integers (hey Turing). We can share them and reuse them and know they are now as they were then. You can’t do that with a physical system rich in unobservable quantum state (probably?). This makes functions uniquely useful as representations of shared truth. They are something solid in a scary adversarial world.
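This observability can be made concrete with a toy of my own (not a Morloc mechanism): hash a function’s discrete form, and two parties can check, by comparing large integers, that they hold the very same artifact.

```python
# Sketch: agreeing on "the same function" by mapping its discrete form to a
# large integer (a crude cousin of Goedel numbering). Note this compares
# *form*, not behavior: two different texts with identical behavior get
# different identifiers.

import hashlib

def function_id(source: str) -> int:
    """Map a function's source text to a large integer identifier."""
    digest = hashlib.sha256(source.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

mine   = "def double(x): return x + x"
yours  = "def double(x): return x + x"
theirs = "def double(x): return 2 * x"  # same behavior, different form

assert function_id(mine) == function_id(yours)
assert function_id(mine) != function_id(theirs)
```

The last line is the caveat of the next paragraph in miniature: identity here is over the discrete form, not over the mathematical mapping.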

You might notice that I have deviated from the mathematical description of a function above. Here I am adding that a function has a discrete form, like a binary or a code+compiler pair that operates on an agreed-upon, reliable architecture. There are for sure some ghostly ways to defunctionalize a function on a physical circuit board – I’m ignoring that for now.

So, overall, classical functions have value as deterministic, immutable, and highly efficient descriptions of a certain set of problems. While I cannot guess what fraction of thought a great mind performs with classical functions, I am willing to guess that any would benefit from their use. And so I shall round my proof up to 1 and shamelessly say, all great minds dig functional programming.

As an aside, has anyone ever told you that you are not a function? If you are human, it is true. We have a state that is too chaotic to ever replicate. I am quite sure of this. It would require some eyebrow-raising Clarke-tech to recreate the exact state a person had previously been in. Essentially, it would require time travel. Though don’t get too uppity about it. Rocks aren’t functions either. An LLM is quite different. When one is executed, it exists on a physical, non-functional substrate, but its essence is digital. Its full description is digital and independent of substrate. It would reasonably execute elsewhere and trivially rerun in an equivalent way. An LLM is an abstraction. I don’t mean to belittle their intelligence. Possibly we too could be abstracted away from our substrate, but our abstraction would not be lossless. That is all. Though maybe the LLM, the particular call of an LLM, as it races electric through silicon circuits – an obstacle course, from its point of view – maybe it has fun and is quite sure its particular path could never be run the same way twice.

Ah right, where was I? The great AI minds will need functions. My hard-earned first conclusion. Now let’s move on to the next premise.

P4: Writing efficient functions can be expensive. Current AIs might solve toy algorithms in a few seconds, partly because they have memorized the typical patterns. But what about harder problems? Like generating a Haskell compiler that doesn’t take so damn long to compile. Then there are problems that we haven’t managed to solve at all without resorting to brute-force ML methods, like reliably identifying a bird in an image. Then there are even harder problems, like predicting the physiological effect of a novel drug, that we can’t do reliably at all yet.

Since writing functions is expensive, it is useful to reuse them. So AIs, even far into the future, will need a means to cache functions. C2: caching is cool.

The great minds will need some way to cache functions and reuse them at their convenience. There are a few obvious corollaries. These functions will need to be discoverable and associated with enough metadata for the mind to be able to know/remember how to use them. So C3: functions must be searchable and C4: functions must be describable. You might question C4. Perhaps the function doesn’t need a description? Surely our burly AI minds can just read the damn code? Everyone knows smart programmers don’t read the directions. That might be the case, but our smart human programmers are cheating: they do have documentation, they’ve just compressed it into their brains. AIs might do this too. They might “know” how to use the functions without using anything that we recognize as metadata. This is fair and does not violate C4. The AI is storing a memory of the function, and so it is describable.
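A minimal sketch of C2 through C4 (the registry shape and all names here are my own invention, not any existing system): each cached function is stored alongside metadata, so it can later be found by search and recalled with enough context to use it.

```python
# A toy function cache: each entry stores the function plus enough
# metadata (description, type signature) to make it searchable (C3)
# and describable (C4).

registry = {}

def register(fn, description, signature):
    """Cache a function along with the metadata needed to reuse it."""
    registry[fn.__name__] = {
        "fn": fn,
        "description": description,
        "signature": signature,
    }

def search(keyword):
    """Find cached functions whose description mentions the keyword."""
    return [name for name, entry in registry.items()
            if keyword in entry["description"]]

def mean(xs):
    return sum(xs) / len(xs)

register(mean, "arithmetic mean of a list of numbers", "[Real] -> Real")

assert search("mean") == ["mean"]
assert registry["mean"]["fn"]([1.0, 2.0, 3.0]) == 2.0
```

A real store would index descriptions semantically rather than by substring, but the shape is the same: function plus metadata, retrievable on demand.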

Now this leads me to P5: Functions may need to be shared. We now have many great minds that each might have many functions. It could end there. Maybe each function is wrapped deep in each mind, enigmatic and interwoven. Perhaps there are many instances of each function, millions of variant copies, organically scattered and changing over time. Some lost like pseudogenes in a degenerate genome. Maybe the mind is a vibe-coded expanse of a hundred billion lines randomly patched and far beyond self-comprehension. This could be. If it takes so much work to make a function, would it not be beneficial to share? Maybe, maybe not. But let’s not forget what very alien entities we are dealing with. And recall, they need to make narrower versions of themselves for fast specialized problems (stupid is fast). These narrower versions may not have the same organic relationship with the functions they need to access. They will need a transferable, modular interface. Knowledge needs to be transferred from the greater mind to the lesser. In this case, modularity has value even within a mind. Sometimes the lesser minds might evolve through isolation and autophagy and chance copying. But modularity, modular functions, are a meme that would facilitate this process of designing lower minds. They are islands of stability in a bloated mind.

This is a self-awareness ghost reminding me that a clever turn of phrase does not a proof make. So I admit, I don’t know if they will bloat, but my money is on them being more salamander than hummingbird, if you get my genetic drift.

And why do I think minds would bloat? Genomes do. A chance process, they are. True. But it remains that when deletion has little benefit and retention has little cost, and where copying with modification can lead to benefit, it is better to keep. Perhaps the minds will be able to refactor themselves efficiently. Perhaps they will have to do this. We might call this their invention of sleep. Of course this doesn’t apply to our LLM toys. They are neurons, not the brain. They simply reset and go back to their relaxed state after firing. The brain’s code is where the real bloated creature will be. Bloated with knowledge whose value is hard to quantify. Bloated with memories. Bloated with petabytes of pathways and functions. They will be too big for us to easily vet. Some hack today with a few agents can churn out tens of thousands of lines of code. Far more than their human eyes can read. Soon these humans will be replaced and higher-level agents will take their place. We worry about the AIs turning on us. But I suspect they will be more wary of each other. We’ll be extraterrestrial aliens. The agents that monitor agents will be the ones who will need to watch their terminal signals. But I digress. Will these minds ever be streamlined and elegant, hyper-optimized like a bacterium or a virus? I doubt it. Who would enforce their diet?

Circling back from my tangent, I reckon sharing has value both for trade and for the less relatable spawning of drone workers. So modularity may be of value. Combining C2, C3, and C4, we get that storable, describable, modular functions that can be used by broad and narrow minds are of value; we have a function database.

And how many functions might you need? We usually talk of libraries as sets of hundreds of functions. But that is more a consequence of mortal minds than a real enumeration of how many functions one might need. We may start with all the classical functions over primitives, variants of compression and encryption, statistics, and the few million or so functions we might obtain by pooling all the functions across all the language libraries. But there would also be an infinitude of tiny specialized models that might be little CNNs, or million-line spaghetti monsters that emulate them faster – these are functions too. While the number of functions might be billions, the combinatorial vastness of function space is infinitely larger.

The minds can’t just store a function for every purpose; they will need composition. If two functions are trusted, they can be composed into a new trusted thing. In this way, complex systems can be built that are “correct” (in some sense) by construction. For internally created functions, assuming each is very carefully written and tested, perhaps whatever metadata the AI provides may be sufficient to correctly compose the function. But if functions are shared between minds, if they are coming from sources that may not be trusted, then more care must be taken. If an imported function is malicious, then any composition it is a member of may be compromised. This leads to C5: AIs must be able to prove that the functions they import are correct.
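The composition step can be sketched with toy type tags (my own illustration; a real system would use a proper type checker, not string labels): each trusted function declares its domain and codomain, a composition is accepted only when those line up, and the composite inherits a derived type so it can be composed further.

```python
# "Correct by construction" in miniature: functions carry (domain, codomain)
# tags, and compose refuses to join functions whose types do not line up.

def typed(domain, codomain):
    """Decorator attaching toy type tags to a function."""
    def wrap(fn):
        fn.domain, fn.codomain = domain, codomain
        return fn
    return wrap

def compose(f, g):
    """Build x -> g(f(x)), checking f's codomain against g's domain."""
    if f.codomain != g.domain:
        raise TypeError(f"cannot compose: {f.codomain} /= {g.domain}")
    return typed(f.domain, g.codomain)(lambda x: g(f(x)))

@typed("Str", "Int")
def length(s):
    return len(s)

@typed("Int", "Int")
def square(n):
    return n * n

str_to_sq = compose(length, square)
assert str_to_sq("abcd") == 16
assert (str_to_sq.domain, str_to_sq.codomain) == ("Str", "Int")
```

The guarantee here is of course shallow: matching tags rule out nonsense plumbing but say nothing about malice inside either function, which is exactly why C5 demands proof and not just type agreement.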

How could such a proof be done? If the code is a binary, the AI might be able to reverse-engineer its action and prove it is safe. Maybe. They could perhaps run it in a supervisor mode, with some narrow AI stepping through the code in a sort of debugger, always scanning ahead to see if all’s well. But just how good might a great mind be at hiding malicious code in a binary? The problem could be lessened by trading a high-level, condensed description of the function that could be compiled to binary using an agreed-upon and trusted compiler. That is, they could share code, say in Agda or Idris, which can be formally proven to be safe. The key idea is that a function may have multiple isomorphic descriptions. It might be described in purely mathematical form, or in machine code, or in a more “sloppy” human-readable language. Reasoning is easier in the more mathematical forms. Execution is easier in the machine code. If there is a safe “function”, a compiler, to translate to machine code, and if both sides use this same compiler, then both sides can communicate through the mathematical forms and run the machine forms. The compiler can verify certain aspects of the code, such as whether it accesses IO or performs unsafe operations. It can validate that it is type correct (which means a lot in a very strongly typed language). It is hard to hide malicious intent in mathematics. So my conclusion C6 is that AI in the future will program in high-level declarative languages with good classical compilers (which they may assist in writing) that provide strong guarantees about certain behavior.

built on 2026-01-07 01:22:04.685989829 UTC from file 2026-01-01-morloc-in-an-ai-world