Specific vs. General Artificial Intelligence

The most recent episode of the Ezra Klein podcast includes an interview with Demis Hassabis, the head of Google DeepMind, whose AlphaFold project used artificial intelligence to predict the shapes of proteins, work essential to understanding genetic diseases, developing drugs, and creating vaccines.

Before AlphaFold, human scientists had, after decades of work, solved around 150,000 protein structures. Once AlphaFold got rolling, it solved about 200 million, nearly every protein known to science, in roughly a year.

I enjoyed the interview because it focused on using artificial intelligence to solve specific problems (like protein folding) rather than on one all-knowing AI that can do anything. A more general AI may be useful at some point in the future, but for now, these smaller, specific AI projects seem the best path: they can help us solve complex problems while remaining constrained to just those problems, giving us humans time to figure out the big-picture implications of artificial intelligence.

Is ChatGPT Really Artificial Intelligence?

Lately, I’ve been experimenting with some of these Large Language Model (LLM) artificial intelligence services, particularly ChatGPT. Several readers have taken issue with my categorization of ChatGPT as “artificial intelligence”. The reason, they argue, is that ChatGPT is not really an artificial intelligence system. It is a linguistic model that looks at a massive amount of data and smashes words together without any understanding of what they actually mean. Technologically, it has more in common with the grammar checker in Microsoft Word than with HAL from 2001: A Space Odyssey.

You can ask ChatGPT for the difference between apples and bananas, and it will give you a credible response, but under the covers, it has no idea what an apple or a banana actually is.
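The “smashing words together” point can be made concrete with a toy sketch of my own (far simpler than any real LLM, and not how ChatGPT works internally): a bigram model that strings words together based only on which word followed which in its training text. It can produce fluent-looking sentences about apples and bananas while knowing nothing about either.

```python
import random
from collections import defaultdict

# Tiny "training" text; the model only ever sees word sequences.
corpus = (
    "apples are red and bananas are yellow . "
    "apples grow on trees and bananas grow on plants ."
).split()

# Count which words follow which word.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length=8, seed=0):
    """Chain words together purely from observed word pairs."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        word = random.choice(next_words[word])
        out.append(word)
    return " ".join(out)

print(generate("apples"))
```

The output reads like English because the statistics of English are baked into the word-pair counts, but there is no representation anywhere of what an apple is. Scale that idea up enormously and you get something much closer to an LLM than to HAL.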

One reader wrote in to explain that her mother’s medical professional actually had the nerve to ask ChatGPT about medical dosages. ChatGPT’s understanding of what medicine does is about the same as its understanding of what a banana is: zilch.

While some may argue that ChatGPT is a form of artificial intelligence, I have to agree that there is a more compelling argument that it is not. Moreover, calling it artificial intelligence gives us barely evolved monkeys the impression that it actually is some sort of artificial intelligence that understands and can recommend medical dosages. That is bad.

So going forward, I will be referring to things like ChatGPT as LLMs, not artificial intelligence. I would urge you to do the same.

(I want to give particular thanks to reader Lisa, who first made the case to me on this point.)