Why the AI Experts Are So Scared
The news lately is filled with tech billionaires telling us that AI will become so powerful, so fast, that we had better pass laws to keep the machines from enslaving us. Spoiler alert: the machines have already enabled big business to enslave us, but only if you’re poor and poorly educated. We don’t need AI for that.
Yet to the public, it sounds like just the latest over-hyped promise of flying cars. Fool us once, won’t get fooled again… right? (Credit the great painter known as Dubya.)
Here’s why they’re scared: they know that machines don’t get tired, and they fear (actually, they’re certain) that once someone finally figures out how to get machines to genuinely learn (that’s the key), the machines will get super-smart, super-fast… unimaginably smart, unimaginably fast. Star Trek smart and fast. Finally. Flying cars have to obey the laws of physics; AI seems bound by no such limits, only by the limits of our skill in understanding, decoding, and recoding what it means to learn… and maybe what it means to be self-aware, but I digress.
Are they right? They might be.
I personally haven’t yet seen a demonstration of AI that makes me think we’re really close to machine autodidacticism (self-learning), but I can’t make an argument to the contrary. As lofty as the ‘code’ for learning seems, I don’t think it’s a secret that God protects with ten layers of unknowable puzzles… but that’s only because I don’t believe in God. The robots in the lobby of the Las Vegas Sphere are pretty compelling conversationalists, but so are some of the lower life forms in Congress, so that’s not a high bar to clear.
But we’ll probably get there, and soon. Then the question won’t be “Can machines be intelligent?” but “Can we?”