Note from the Publisher

What is AI?

Eliminating the termites. (Image generated with DALL-E by Gina Lerman)

Wikipedia just released its annual list of most-visited pages. Number 1 was “ChatGPT.” The term AI is being thrown around with the reckless abandon a two-year-old deploys on their favorite new toy or stuffed bunny. As a freelance writer, I was recently given the assignment to add “AI” to a prospective start-up’s business plan. Nothing about the plan or technology was really changing; they simply felt that “Shark Tank”-style investors would be more likely to read it if that buzzword were prominent.

Language is a constantly evolving creature, and meaning will shift to match usage. Right now, it seems you can call almost anything involving a computer or automated process “AI” and your friends will nod solemnly, then share a story about the latest spam call they received.

So, what AI really is, and what that term will mean over the coming years, are probably not the same thing. For now, let’s focus on the actual meaning of “Artificial Intelligence.”

True AI, to computer programmers, is a set of code that teaches itself, based on experiences or input, in part by rewriting its own code. This leaves Dr. Frankenprogrammer unsure, at times, what their Creature is going to do, or why. It’s different from the many, many current applications that learn by gathering data or through machine learning. Those applications build ever-growing databases and run unchanging algorithms over ever-increasing amounts of data to decide, for example, what time is ideal to call you and offer you a bogus interest-free loan. Their mathematical conclusions might be unexpected (8am? Really?), but they’re not going to jump to a conclusion like “I should shut down because no one wants to talk to me.”
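For the programmers in the audience, that second kind of system – the one that just keeps crunching data with a fixed algorithm – looks something like the sketch below. It’s deliberately simple and entirely invented, not anyone’s actual robocall software. The algorithm never changes and never rewrites a line of itself; only the tallies grow as more data arrives.

    # Illustrative only: a fixed algorithm fed an ever-growing pile of data.
    # The logic below never changes; only the counts do.
    from collections import defaultdict

    answered = defaultdict(int)   # calls answered, keyed by hour of day
    attempted = defaultdict(int)  # calls attempted, keyed by hour of day

    def record_call(hour, was_answered):
        """Add one observed call to the growing dataset."""
        attempted[hour] += 1
        if was_answered:
            answered[hour] += 1

    def best_hour_to_call():
        """Pick the hour with the highest answer rate seen so far."""
        rates = {h: answered[h] / attempted[h] for h in attempted}
        return max(rates, key=rates.get) if rates else None

    # Feed it some made-up call logs.
    for hour, picked_up in [(8, True), (8, True), (13, False), (19, True), (19, False)]:
        record_call(hour, picked_up)

    print(best_hour_to_call())  # 8 (8am? Really?) -- just arithmetic, not a mind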

Another way of breaking down current AI is distinguishing between “Narrow AI” and “General AI.” Narrow AI uses machine learning and data correlations to perform specific tasks – like Alexa recognizing your speech pattern and interpreting what you’re saying. General AI is the kind that’s on a path to making overall human-level decisions and writing itself new code; the kind Stephen Hawking warned humanity about (“The development of full artificial intelligence could spell the end of the human race…”). If Skynet evolves, it’s likely to be from this category. Or from time travel, depending on which Terminator movies you believe.

It’s a very human tendency to anthropomorphize. I have an elevator in my apartment building, and pressed the call button one morning. When the doors opened, I was suddenly worried I’d forgotten my phone, and started grabbing all my pockets to find it before getting on. The doors gave up on me and closed. I found my phone, and pushed the button, then managed to drop my wallet, which had been dislodged by my pocket inventory scramble. I let the doors close again while I collected the scattered wallet ingredients. My act somewhat together, I pressed the button a third time and thought, “Wow, this elevator must think I’m a real idiot.” I was even tempted to apologize on the ride down. But of course, that elevator didn’t think less of me for calling it three times. It wasn’t wondering what the heck was wrong with this guy. It didn’t think anything at all.

In parts of the country right now, there are robots building houses. They are operating from templates, and do exactly as they’re programmed. If the programmer puts a toilet outlet in a spot that is too crowded, the robot will not pick up on this – it will put the toilet exactly where it was told to put the toilet, crappy decision or not. That’s not AI, that’s programming. Right now, though, most people would probably call it AI, and complain about our AI overlords making their bathroom visits awkward.
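In code, that robot is nothing more exotic than the following – a made-up, drastically simplified template, but the obedience is the real point:

    # Illustrative only: "that's not AI, that's programming."
    # A made-up placement routine that does exactly what the template says,
    # with no judgment about whether the result makes sense.

    house_template = {
        "toilet": (2.0, 0.3),  # hypothetical coordinates, meters from the corner
        "sink":   (2.2, 0.5),  # crowded right up against it -- the robot won't notice
    }

    def place_fixture(name, position):
        """Put the fixture exactly where the template says, no questions asked."""
        x, y = position
        print(f"Placing {name} at ({x} m, {y} m), exactly as told.")

    for fixture, spot in house_template.items():
        place_fixture(fixture, spot)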

Most of what we’re calling AI right now isn’t. A lot of it is machine learning. Some of it is just computers doing exactly what they were told to do. Some of it is us projecting.

Even the recently ascendant tools like ChatGPT (for text) and Midjourney (for images) are somewhat constrained – they’ve been fed massive amounts of data, from which they make connections and combine elements to produce something new out of a vast storehouse of existing parts. They spit out the results, but they don’t have a higher awareness of what any of it means. They can’t recognize hate speech, or notice when a graphic of a person has two left arms.

“I have a real problem with the term AI,” says Kade Crockford, a specialist in facial recognition issues at the Massachusetts ACLU. “Most of the people doing this sort of programming view that as a PR and marketing phrase that doesn’t represent any of the science or engineering. Facial recognition, for example, is based on machine learning. It’s looking to match characteristics – a faceprint – to an ever-increasing database of possible matches. It’s not AI.”
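Strip away the branding and that description is close to a lookup. Here is a bare-bones sketch – the “faceprints” are invented numbers, where real systems derive those vectors from trained models – of matching a probe image against a database by distance alone:

    # Illustrative only: "facial recognition" as a nearest-match lookup.
    # Real faceprints come out of trained models; these numbers are invented.
    import math

    database = {
        "person_a": [0.12, 0.88, 0.33],
        "person_b": [0.95, 0.10, 0.47],
        "person_c": [0.40, 0.52, 0.61],
    }

    def distance(a, b):
        """Euclidean distance between two faceprint vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def closest_match(probe, threshold=0.25):
        """Return the nearest stored identity, or None if nothing is close enough."""
        name, faceprint = min(database.items(), key=lambda item: distance(probe, item[1]))
        return name if distance(probe, faceprint) <= threshold else None

    print(closest_match([0.41, 0.50, 0.60]))  # "person_c" -- a match, not a thought

The system isn’t recognizing anyone in any human sense; it’s reporting which stored vector happens to be nearest.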

In both narrow and general cases, the real side effect that humans will need to get used to is that no one knows exactly what, in the code, is responsible for some of the end results. All of these new tools, whether narrow or broad, will have unpredictable results. They will make mistakes, and should make fewer of them as they grow more robust.

WarGames (spoilers) is an early example. A computer needs to play itself to a tie in tic-tac-toe a few million times before concluding that in some games (like, say, nuclear war), “the only way to win is not to play.” Nobody coded that; it had to learn it for itself, like a stubborn six-year-old who doesn’t believe a stove is hot. There isn’t really a current AI that would make a leap like that, but for the true AI research happening out of the public view, it’s a goal – a system that would apply lessons learned in one context to other scenarios. And that’s also what’s potentially alarming – a system might take the lesson that eliminating termites is good for a house, and then apply that logic to what is good for the Earth. We probably don’t want that.
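You don’t actually need millions of games to reach the movie’s conclusion. The short sketch below isn’t how the film’s computer did it, and it isn’t learning in any deep sense – it simply checks every possible tic-tac-toe game – but the answer it prints is one nobody hand-coded:

    # Illustrative only: exhaustively checking every tic-tac-toe game to
    # "discover" that perfect play by both sides always ends in a draw.
    # Nobody writes that answer in; it falls out of the search.

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        for a, b, c in LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def value(board, to_move):
        """Outcome under perfect play: +1 if X wins, -1 if O wins, 0 for a draw."""
        w = winner(board)
        if w == "X":
            return 1
        if w == "O":
            return -1
        if " " not in board:
            return 0
        nxt = "O" if to_move == "X" else "X"
        results = [value(board[:i] + to_move + board[i + 1:], nxt)
                   for i, cell in enumerate(board) if cell == " "]
        return max(results) if to_move == "X" else min(results)

    print(value(" " * 9, "X"))  # prints 0: the only way to win is not to play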

So are most of the not-really-AI topics covered in this issue wrong or bogus? Not really – the term is evolving, just like the tech.

Mike Ryan is a former web developer, programmer, and SQL DBA who currently publishes Motif.