AI Is Just Fancy Google

Have you ever searched for something on Google and, as you read through the search results, thought, "Wow, these words look like they were written by a human!" Of course not, because you knew that the results Google was giving you -- webpages, videos, news articles -- were created by humans. Google merely indexed those pages and gave you a preview.
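
To make the contrast concrete: a search engine's core job is retrieval, not authorship. Here's a toy sketch of an inverted index, the kind of data structure behind keyword search. The pages here are made up, and real search engines are enormously more sophisticated; this only illustrates the principle of pointing you at human-written pages:

```python
# A toy inverted index: the core idea behind keyword search.
# The "pages" are made up; real engines are vastly more complex.
pages = {
    "page1": "polonium halos are discolorations found in rocks",
    "page2": "granite rocks sometimes contain radioactive halos",
}

# Map each word to the set of pages containing it.
index = {}
for page_id, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page_id)

def search(query):
    """Return pages that contain every word in the query."""
    hits = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*hits) if hits else set()

print(search("radioactive halos"))  # {'page2'}
```

Note that every result the index can ever return is a page a human wrote; the code only matches, it never generates.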

But when someone uses AI to produce content -- write an essay, create a crazy picture, generate an outlandish video, or produce a silly song -- many people assume the content was produced by a computer, not a human. That's not true. In fact, all "AI-generated" content is actually just human-generated content, rearranged and reassembled in a different way.

Let's take an analogy. When you listen to a song on your phone, you hear instruments and the artist's voice, right? But that song is really just data stored on your phone -- essentially bits, 1's and 0's. At some point, the vocals and instruments that make up that song were recorded and stored as data. That recording then went through a production process to edit it, mix it, and ultimately make it sound a certain way before being released as a song.
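
To see just how literally a song is "just data," here's a small Python sketch that writes one second of a pure 440 Hz tone to disk. The resulting WAV file is nothing but a short header followed by a long list of integers (the sample rate and bit depth are ordinary CD-style choices, nothing exotic):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)
FREQ = 440.0         # pitch of the tone, in Hz

# Each sample is just a signed 16-bit integer: the 1's and 0's of audio.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)            # mono
    f.setsampwidth(2)            # 16 bits per sample
    f.setframerate(SAMPLE_RATE)
    f.writeframes(struct.pack(f"<{len(samples)}h", *samples))

print(samples[:5])  # the opening instant of the "song," as plain integers
```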

It's safe to say that the finished song sounds noticeably different from the original recordings. But you still would not say the song is AI-generated just because humans used computers to produce it. The core of the song was created by humans, and without that core, the computers that aided in editing and production would have nothing to work with. There would be no song.

You've probably noticed that AI-generated content always seems familiar. When AI writes text, it tends to read the same way every time.

Here's the output from four different AI models in response to the prompt, "Write me a one sentence summary of polonium halos" (a sketch of how you might reproduce this comparison yourself appears after the list). Notice how similar the responses are in structure and wording:

  • Claude 3 Haiku: Polonium halos are microscopic spherical discolorations found in certain types of rocks, believed to be caused by the radioactive decay of polonium isotopes.
  • Llama 3.3: Polonium halos are ring-shaped discolorations found in certain rocks, particularly granites, that are believed to be the result of radioactive decay of polonium isotopes, which has been cited as evidence for a young Earth by some creationists.
  • GPT-4o mini: Polonium halos are microscopic spherical discolorations found in certain types of rocks, believed to be formed by the radioactive decay of polonium isotopes, which provide evidence for the presence of radioactive materials in the Earth's crust and have implications for geological and nuclear processes.
  • Mistral Small 3: Polonium halos are microscopic, circular discolorations in rocks, formed by the decay of polonium isotopes, which provide evidence for the rapid formation of certain geological features.

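If you'd like to reproduce a comparison like this yourself, one approach is to send the same prompt to several models through an OpenAI-compatible gateway. The sketch below uses the openai Python client pointed at OpenRouter; treat the model identifiers as illustrative guesses, since the exact names a provider exposes vary and change over time:

```python
from openai import OpenAI

# Query several models with one prompt via an OpenAI-compatible gateway.
# The model IDs below are assumptions; check your provider's catalog.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_API_KEY",  # placeholder
)

PROMPT = "Write me a one sentence summary of polonium halos."
MODELS = [
    "anthropic/claude-3-haiku",
    "meta-llama/llama-3.3-70b-instruct",
    "openai/gpt-4o-mini",
    "mistralai/mistral-small-3",
]

for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"{model}: {response.choices[0].message.content}\n")
```
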
Not only is the wording similar, but the order of the words is formulaic: "Polonium halos are ... discolorations ... formed by ... decay of polonium isotopes ... evidence for ..." It's obvious that these AI models did not have an original thought here, nor did they arrive at these answers independently. In fact, if I hadn't told you these answers were AI-generated, you might have thought they were all written by the same person, just at different times and perhaps in different contexts.

The fascination with AI "creation" is frankly hype. The impressive output of models like ChatGPT or Midjourney isn’t a spontaneous emergence of intelligence. It's a rearrangement of existing human-generated data. The idea that a computer can suddenly originate an entirely new concept litters the plots of sci-fi stories, but AI doesn't and can't do it. All AI does is take existing data and reconstruct it based on a given prompt.

Think about the sheer scale of the information these models operate on. They've been trained on everything -- countless books, articles, websites, code repositories, images, even social media posts. This isn't a blank slate. It's a massive, human-built library that's constantly expanding. The AI isn't creating. It's analyzing patterns within existing data, identifying statistically probable sequences of words or pixels, and then producing something that fits those patterns.
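
"Identifying statistically probable sequences" sounds abstract, but it is countable in the most literal sense. Here's a toy bigram model -- a drastic simplification standing in for the statistical principle only, since real LLMs use neural networks with billions of parameters, not lookup tables:

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in human-written text, then generate
# by sampling from those counts. Every word it can ever emit came from
# the human-written corpus; the model only reshuffles the patterns.
corpus = (
    "polonium halos are discolorations formed by decay of polonium "
    "isotopes and halos are evidence for radioactive decay in rocks"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Emit words by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("polonium"))  # e.g. "polonium halos are evidence for ..."
```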

Consider the implications for creative endeavors. When a model generates a poem, or a painting, or even a plausible historical account, it's doing so by calculating the most likely outcome based on the vast dataset it's been fed. It's effectively saying, "Given this prompt, and the enormous amount of information I've seen, the statistically most probable response is… this." There's no genuine insight, no emotional connection, no personal experience informing the generation -- just complex calculations.
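
That "statistically most probable response" can be reduced to one line of arithmetic. Given a made-up distribution over next words after "Polonium halos are," greedy decoding simply takes the argmax:

```python
# Made-up (and truncated) next-word probabilities after "Polonium halos are".
# Picking the continuation is an argmax over learned statistics -- nothing more.
next_word_probs = {
    "microscopic": 0.41,
    "ring-shaped": 0.22,
    "circular": 0.19,
    "mysterious": 0.03,
}

most_probable = max(next_word_probs, key=next_word_probs.get)
print(most_probable)  # "microscopic" -- the word three of the four models chose
```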

Let's revisit the earlier example -- the responses to the prompt, "Write me a one sentence summary of polonium halos." Notice how strikingly similar the outputs were. Each model, trained on overlapping datasets, inevitably arrived at a remarkably similar response, using a predictable structure and vocabulary: "Polonium halos are… [descriptor]… formed by… [process]… evidence for… [outcome]." This isn't serendipity; it's the unavoidable consequence of operating within a constrained, statistically driven framework. The models aren't independently synthesizing knowledge. They're echoing and recombining the data they've been trained on.

This isn't to diminish the capabilities of AI models. They're powerful tools, capable of impressive feats of data analysis and synthesis. But originality means something fundamentally different for these systems. Rather than expecting truly novel ideas from them, we should focus on how AI can augment and assist human creativity, not replace it. The real innovation lies not within the algorithms themselves, but in the human prompts, choices, and interpretations that guide their output. The challenge is to harness this power wisely, understand its inherent limitations, and avoid the temptation to mistake statistical mimicry for genuine intelligence.