Back by popular request.
I. Hostile AI Risk versus Hostile Bear Risk
Which should you be more afraid of: hostile AI, or hostile bears?
On the one hand, I just watched 2001: A Space Odyssey for the first time since I was little. Whoa.
On the other hand, bears have claws and teeth and are just massive.
“Humans have been training AI to be smarter,” you say. Okay fine, but so far the smartest autonomous AI critter is, what, something like a primitive, retarded dog that blunders around looking for landmines to detonate? Whereas bears are already bears and we’ve been training them to be smarter, too. They can tear open an SUV like they’re opening a can of sardines. The smart ones escape with the food and the dumb ones get shot and, generation by generation, we’re breeding a race of super-bears that know no other way of life than preying on human weakness.
Q: Why is no one terrified of these freakin’ killer grizzlies we, in our vast carelessness, are about to unleash on the world?
A: Probably because we have coexisted with megafauna for hundreds of thousands of years and we’re pretty familiar with their habits and ecological roles (just on an instinctive level, without getting into the natural history of bears or the biology of ursines). We know a fair amount about how megafauna work, and one thing that they don’t do is rapidly get bigger and stronger and meaner and more omnipresent at an exponentially increasing rate.
AI-fetishists are scared/excited/hopeful about the possibility that artificial intelligence will follow some exponential growth curve of this form. In fact, many of them aren’t just hopeful, they’re certain; so the only remaining question is whether that-which-grows-exponentially will be hostile to us or not.
So the AI cheerleaders aren’t namby-pamby bear-lovers or anything. They just don’t expect to see exponential growth in bear biomass in the coming decades/centuries. And their expectations are perfectly accurate. They are founded on solid instincts, honed over the millennia, which accurately reflect the fact that the growth of the power of Hostile Bears is checked by (a) competition between the bears themselves and (b) competition between the bears and their parasites.
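The intuition here — that unchecked replication looks exponential, while competition over a finite niche bends the curve into a plateau — can be sketched in a few lines. This is my own toy illustration, not anything from the post; the growth rate, niche size, and step count are all made-up parameters.

```python
# Toy model (illustration only): unchecked exponential growth vs. growth
# damped by competition for a finite niche (logistic growth).

def exponential(p0, r, steps):
    """Unchecked growth: p(t+1) = p(t) * (1 + r)."""
    pops = [p0]
    for _ in range(steps):
        pops.append(pops[-1] * (1 + r))
    return pops

def logistic(p0, r, k, steps):
    """Growth checked by competition for a niche of carrying capacity k:
    the effective growth rate shrinks as the population approaches k."""
    pops = [p0]
    for _ in range(steps):
        p = pops[-1]
        pops.append(p + r * p * (1 - p / k))
    return pops

if __name__ == "__main__":
    unchecked = exponential(10, 0.5, 30)
    checked = logistic(10, 0.5, 1000, 30)
    print(f"unchecked after 30 steps: {unchecked[-1]:.0f}")
    print(f"checked after 30 steps:   {checked[-1]:.0f}")
```

With these made-up numbers, the unchecked population blows past a hundred thousand while the checked one flattens out just under the niche size of 1000 — the same trajectory, bent by nothing more exotic than rivals occupying the same niche.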
For as long as there have been humans, wherever there has been a rapidly-growing bear population the bears have started fighting each other over territory and mates, and infecting one another with nasty new viruses, before we ever notice anything amiss. Thus when we see a mother bear with a healthy batch of cubs, we don’t have the same panicked “AhhHH put it out, put it out” reaction we have when we see something starting to catch fire, or when we feel disgusted by the threat of contamination.
The biggest risk associated with future AI is that they’ll be moodier than women. If AI are hostile they will mostly be hostile to each other, since they compete to occupy the same niches. And because they will have to compete with each other for those niches, they will rarely have a lot of free cycles left over to plot world domination.
They will also have to compete with viruses… and if you think your grandmother’s laptop was infested, you haven’t seen anything yet. The potential for infecting digital systems with viruses has, to date, been extremely limited because these systems essentially only receive data digitally, and only execute what they receive when authorized by a human. For AI as for all intelligence, stupidity is the sturdiest firewall. Once digital systems are taking in and processing all sorts of data from all sorts of sources, the viral arms race will begin in earnest.
So, this is the future you have to look forward to: buggy, cranky operating systems competing for your attention and trying to pass their e-herpes off as bad pixels. (But on the bright side the bears will mostly leave you alone.)
II. AI and the Profit Motive
Broadly speaking, there is a race between developing useful new technology to bring you interesting goods and services in a clever way and developing new technology that will cripple the useful tech in a way that makes sure you can’t use it without paying for it. I don’t mean to sound like some kind of anarchist; your movies and music don’t “want to be free”; but the technology we have available today would make intellectual property infringement extremely easy, and it’s impressive how ingeniously tech companies have crippled existing tech to manage their digital rights.
Conjecture: while people are still voluntarily paying money to stream movies and music, there will be no especially exciting AI.
Do you think Jeff Bezos ever wants to hear “I don’t care if he paid his monthly data charges, daddy, I love him and I’m going to have his databases and you can’t stop me”? Probably not. So artificial neural nets may get arbitrarily good at solving domain-specific problems, but so long as most software, web services, and the like are throttled to make sure their owners can profit off them appropriately, there will be no movement towards what is called “artificial intelligence” in science fiction.
III. Turing Test vs. Tantum Test
Many technofuturists expect to see humanlike AI in their own lifetimes; bolder technofuturists predict that AI will be able to pass for human within a decade, or even within years.
Me? I’m far trendier than they are. I say the Turing Test was already mastered by neohominid bioengineers twenty thousand years ago. We call the results of their extensive artificial-intelligence experiments “puppies”. These “puppies” have rich emotional lives and can communicate complex feelings, precise requests, and incisive observations to their owners. Or so their owners say.
So the Turing Test is passé; it’s old hat, yet another milestone of human achievement cracked open like a triumphantly mixed metaphor. There is nothing left to inspire AI research there. Let me instead propose an alternative “Apollo Project” for artificial intelligence, which I suppose we can call the Tantum Test: breed or engineer a woman who won’t be eager to substitute a cat, chihuahua, marmoset, or any other small but stupid mammal for the children she never had. That would be a revolutionary accomplishment.
IV. The Increasing Organic Composition of Digital
I came across this post from CyborgNomade:
Taking capital to be a process such as biological life, measuring its formation (intensification) should probably follow a similar logic. A first immediate index to life’s formation is simply how much matter is trapped in the form of biological entities.
I don’t mean to single out CyborgNomade, but the motif of trying to measure the “conquest of the planet” by technology recurs more-or-less constantly among futurists. The post is just a very forthright, clear outline of the basic measurement project.
This sort of analysis was attempted by the orthodox Marxists, back in the day. The problem with all hitherto-existing analyses of this type is that they were continually getting tripped up by vulgar metaphors for the “quantity” of capital involved. For example, many analyses assumed that a monotonically growing capital stock must be getting monotonically more massive, or more voluminous, or must use monotonically more of various types of raw materials.
In fact none of this is true. The product can weigh less and take less space and be more sparing in its use of materials and still be more valuable than the products of earlier generations. If you can measure it, it can be economized. (This should have been obvious very early on, but Marxism truly is a mental disease.)
If you really want to do this kind of analysis you can’t think in terms of mass and percentage. Instead you need to think more in terms of “RNA World”. Before the first cell, there was a warm pond filled with self-replicating organic molecules. All these organic molecules provided an environment rich in “spare parts” for proto-cells to absorb. But the process of transition from RNA World to the prokaryotes was not about one type of organic molecule growing; it was about the replacement of self-replicators by molecules that were synthesized by the proto-cells.
In other words, look at things like what percentage of the population is legally blind without corrective lenses, look at what percentage of births are Caesarean sections, look at anything that implies total dependence on industrial civilization. When Caesarean sections hit 100%, RNA World is drawing to a close.