A Brief History of Automated Reasoning
The computer science community has been talking about machines capable of high-level reasoning since before the first general-purpose computers were even built. Alan Turing famously developed a chess-playing algorithm that never ran on a real machine, but was instead something he computed by hand!
At the end of the day, we haven't moved far beyond this. Computers really do four things:
- Read from memory
- Write to memory
- Add
- Multiply
Anything else is just for convenience. Whether it is a human with pen and paper, or a GPU array crunching hundreds of billions of computations per second, computers are doing the same thing today that they were when the pioneers of the field first envisioned them.
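One way to make this reduction concrete is a toy interpreter whose only primitives are those four operations. This is just an illustrative sketch; the instruction names and memory layout here are invented for the example, not any real machine's instruction set. Everything else, like evaluating a polynomial, is built from the four primitives:

```python
# A toy machine with exactly four primitives: read a memory cell,
# write a constant to a cell, add two cells, multiply two cells.

def run(program, memory):
    """Execute a list of instructions against a flat memory array."""
    for op, *args in program:
        if op == "read":        # copy memory[src] into memory[dst]
            dst, src = args
            memory[dst] = memory[src]
        elif op == "write":     # store a constant into memory[dst]
            dst, value = args
            memory[dst] = value
        elif op == "add":       # memory[dst] = memory[a] + memory[b]
            dst, a, b = args
            memory[dst] = memory[a] + memory[b]
        elif op == "mul":       # memory[dst] = memory[a] * memory[b]
            dst, a, b = args
            memory[dst] = memory[a] * memory[b]
    return memory

# Evaluate 3*x^2 + 2*x + 1 at x = 5 using only the four primitives.
program = [
    ("write", 0, 5),     # mem[0] = x = 5
    ("mul",   1, 0, 0),  # mem[1] = x * x
    ("write", 2, 3),     # mem[2] = 3
    ("mul",   1, 1, 2),  # mem[1] = 3 * x^2
    ("write", 2, 2),     # mem[2] = 2
    ("mul",   2, 2, 0),  # mem[2] = 2 * x
    ("add",   1, 1, 2),  # mem[1] = 3*x^2 + 2*x
    ("write", 2, 1),     # mem[2] = 1
    ("add",   1, 1, 2),  # mem[1] = 3*x^2 + 2*x + 1
]
result = run(program, [0] * 3)[1]
print(result)  # 86
```

Subtraction, comparison, branching, and everything a "reasoning" system does can be layered on top of primitives like these; the layers add convenience, not new computational power.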
The biggest difference between Turing's chess algorithm and the multi-billion parameter models being trained today is not a fundamental shift in our understanding of computation, but rather something closer to physics. Access to low-power, highly-replicated computing machines has allowed us to compute things that we have known were possible for more than 50 years.
Why Now?
So if nothing has fundamentally changed, why is this happening now? The first part of the answer is that we now have access to the hardware needed to pull off some fairly impressive feats. The second is that we have a lot of starry-eyed dreamers (with perhaps a little too much money and power) who think that something has fundamentally changed. There's a lot of money and influence to be made in selling the idea that computers are going to surpass human capability sometime in the near future.
The truth is that the terms "artificial intelligence" and "machine learning" aren't much more than marketing gimmicks. Algorithms are certainly no smarter than the people who programmed them and, in practice, almost always far less smart. The reason for this is simple: The human brain is the most efficient computer in the known universe. For roughly 20 watts of power, we get access to at least a 100-trillion-parameter model. Not only that, but this model is being trained 24 hours a day on real human experience, not some limited data set curated by what a few people believe is representative of human experience.
So machines aren't really going to beat humans in general intelligence, possibly ever. It may simply be a fact of our universe that the human brain is the most efficient way to organize molecules into a computer. That wouldn't be too surprising: computation (thinking, reasoning, planning) is how our ancestors survived against beasts with strength and claws. It is our superpower, and nature has spent several hundred million years refining it. It may well be that our compute-to-energy ratio is as optimized as it can get.
So sorry machines, humans win every time.
Call it AA, not AI
The words "intelligence" and "learning" come with a lot of baggage. They evoke an emotional response that brings up feelings of creativity and soulfulness. However, what we are thinking of when these feelings come up is not the product of a simple mechanical machine trained on limited data, but a complex human machine trained on specific and interdependent data. These generative systems are not intelligent. They are adaptive, and we should call them that. So let's deprecate the phrase "artificial intelligence" and replace it with "adaptive algorithms".
The Computer Science Community Needs to be Honest About AI
As computer scientists, it is imperative that we be explicit and honest about how these systems work. Referring to them as intelligent, or as learners, is emotionally deceitful, and continuing to do so diminishes our credibility as scientists. Someday very soon, someone with a little too much power is going to assume a little too much about the capabilities of adaptive algorithms, and a lot of people are going to suffer for it.
And who's going to take the fall? Yes, some billionaire or high-ranking party official is probably going to look pretty embarrassed. But people are also going to look to the computer science experts and say, "Why did you tell us this was safe? Why didn't you tell us the full truth?" And we're not going to have anything to say except, "Well, umm, funding, and all that..."
Being honest about adaptive algorithms is not going to make your sponsors happy. It is not going to win you any "best paper" awards. Rather, it is a moral imperative and a responsibility to the public. We are the experts. We are the privileged few who steward a segment of humanity's knowledge and heritage. Continuing to hype up lies about artificial "intelligence" is irresponsible at best and exploitative at worst.