Discussion about this post

Luke Burton

I’m someone in the basement of the AI … crèche? … in Silicon Valley. I’ve found that those of us thinking about the impact of AI have a tendency to extrapolate straight from today, all the way to self-aware / autonomous / super-intelligent AGIs. Where we are today is that our AIs are mounds of lifeless floating point numbers. We pass electricity through them, they twitch like Galvani’s frogs, then lie still. How we get from this moment to AIs that respond to competitive evolutionary pressures is really quite unclear. One thing for certain is that it won’t fall out of the scaling laws. Any more than a hyper-accurate weather forecasting system would eventually evolve into a rain cloud.

The velocity of improvements we’re able to wring out of that floating point corpse gives the impression we’re on a straight-line trajectory. But pretty much every human innovation tapers into an S-curve. Shouldn’t we be on Mars right now? The optimistic position would be that this is an interregnum, that the “mech suit” for thinking and doing afforded by today’s AIs is an intelligence ratchet which breaks the S-curve. I think this is very plausible, actually. The pessimistic position is that the standard S-curve results in a new local maximum and we’re stuck with very lifelike but ultimately lifeless simulacra of intelligence.

It feels as if the local maximum outcome is so horrifying to think about that we’d rather spend time worrying about alignment strategies to prevent the earth being turned into a paperclip factory. But the former has a much higher probability than the latter, and so should worry us a great deal more. The abrupt nullification of biological life might be preferable to the consequences of a botched rollout of stillborn AI affecting untold generations of humans.

In the stillborn AI scenario, wages for the majority of human labor crash through the floor. Scaling laws mean powerful AIs are concentrated in the hands of the very few national or private entities with the capital and infrastructure required to generate terawatts of power. Operators of these AIs are able to abuse the intelligence ratchet to outthink and outsmart adversaries. There is no incentive to reach AGI: why would you create your ultimate competitor? There will be no storming of the Bastille because the planet is an info-Bastille; the data harvested in the digital and physical worlds would make Graham’s number blush.

Here’s one datapoint to make this feel less hyperbolic. It is already the case that I, a lifelong programmer and engineer in my 40s, am 100x more productive using AI agents (what the kids call “vibe coding”). It truly feels like a superpower. There is a slot-machine quality to the experience, and I’ve never written so much code in my life. It’s also true that junior engineers are more productive, but by a smaller factor multiplied by less experience. I’ve seen a staggering number of “old” programmers pick up their long-dormant tools again and perform miracles. This effect is somewhat dampened by a significant number of senior engineers feeling threatened by and skeptical of AI while the younger ones take it for granted. But it’s happening. And the key point is the existence of the gap.

So imagine that effect multiplied out across industries, where the AI advantage stacks up for the incumbents, and it remains the case that the combination of an incumbent with significant experience plus an AI assistant always has an advantage – even a slight advantage – over inexperienced or less capable competitors. It is like compound interest for intelligence and execution; you only need to be left behind a little bit in the beginning to be an order of magnitude behind towards the end.
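To put rough numbers on that compound-interest analogy, here’s a toy Python sketch. The growth rates are made up purely for illustration, not drawn from any data; the only point is the shape of the effect.

```python
import math

# Toy illustration of the compound-interest point above.
# Both growth rates are made-up assumptions, chosen only to show the shape of the effect.
incumbent_growth = 1.10   # incumbent + AI: 10% more output per cycle
laggard_growth = 1.05     # competitor without the edge: 5% per cycle

# Relative edge gained each cycle.
edge_per_cycle = incumbent_growth / laggard_growth

# Number of cycles until the cumulative gap reaches an order of magnitude (10x).
cycles_to_10x = math.log(10) / math.log(edge_per_cycle)

print(f"edge per cycle: {edge_per_cycle:.3f}x")
print(f"cycles until the gap is 10x: {cycles_to_10x:.0f}")
```

The specific numbers don’t matter; what matters is that a modest per-cycle edge compounds into a gap that looks insurmountable from the other side.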

Anyway, we’ll be dealing with some runaway evolutionary processes, whether they’re human+AI or AI alone, and human+AI scares me more than AI itself. In the long term, Mother Gaia remains indifferent and will happily turn our remains into soil for the next cycle.

Satisfying Click

As usual, his writing starts with a pen, which becomes a scalpel, which ends up as a hammer.
