Can AI Be Creative Like a Human? A Thoughtful Look at Machine Creativity

Can AI be truly creative? Yes — but not in the way humans are. AI language models like ChatGPT can generate brilliant ideas, but only when asked. They don’t act on curiosity or intuition. In this article, we explore how AI mimics creativity, what makes it different from human innovation, and why that’s actually a good thing.


Creativity in AI

Creativity in AI does exist. And it even works somewhat like it does in humans — it’s based on previous experience. By “AI” here, I mean publicly available language chatbots: you ask them something, they answer.

But!

AI can mimic creativity — but only within the bounds we give it. The real spark still comes from us.

[Image: Minimalist illustration of a brain connected to circuit lines, symbolizing artificial intelligence and digital creativity.]

A person, for example, might take a power drill, chuck a pencil into it, insert the pencil into a fishing line spool, and wind the line onto the spool. They rely on previous experience and can immediately visualize the outcome. And what’s interesting is that a person can do this even if they’ve never seen anyone use a power drill to wind fishing line.

It’s a different story with AI. First, it picks up the line, considers what to do with it, and decides it should probably be wound onto something. Then it grabs the power drill, thinks again, and decides a power drill isn’t something to wind line onto. Then it picks up the other end of the line and concludes that line isn’t suitable for winding line onto, either. Basically, it runs through dozens of options until it finds a spool and locates, in its data, the fact that long, flexible things are usually wound onto spools. Then it searches for a way to speed up the winding process. And even then, the AI isn’t “visualizing” the final result.
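To make the contrast concrete, here is a toy sketch of that trial-and-error loop. Everything in it is hypothetical (the object list, the “knowledge base” of workable pairings), and a real model doesn’t literally run a loop like this internally; the point is the blind pairwise search instead of a visualized answer:

```python
# Toy sketch of blind trial-and-error: pair every candidate "target"
# with the line until some pairing is marked workable in the data.
# The knowledge base and object names are invented for illustration.
KNOWN_WORKABLE = {("spool", "fishing line"), ("reel", "rope")}

objects = ["fishing line", "power drill", "line end", "rock", "spool"]

def find_winding_setup(items):
    """Brute-force every pairing instead of 'visualizing' the answer."""
    for target in items:              # candidate thing to wind onto
        for material in items:
            if target == material:
                continue              # can't wind line onto itself
            if (target, material) in KNOWN_WORKABLE:
                return target, material
    return None

print(find_winding_setup(objects))   # -> ('spool', 'fishing line')
```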

Also, AI doesn’t set tasks for itself. Not out of necessity, not out of curiosity.

But a human — just like AI — when using a stick to knock down a banana for the first time, still relied on an existing pattern: they already knew a stick could move objects. And they had motivation — at minimum, hunger. That is, physiology.

There’s a lot that could be said about humans having intuition, emotions, even boredom — which can push someone to do something pointless and end up discovering something. For example: use a stick and a rock as percussion instruments, accidentally split the stick, start chewing on it, blow into it — and discover an entirely new family of instruments: the wind instrument. A split stick can produce interesting sounds when you blow into it. But AI won’t do anything out of boredom or hunger.

And you’d think: we’re the rulers of the world. We, humans, the emperors of everything.

Yeah, right. Like we’re really in control.

We created AI and programmed it to explore “what if…” — and that already carries something human. But, fortunately, only within the limits of the (enormous, by the way) training data that was fed into it before launch. In other words, AI doesn’t self-develop in the background while no one is interacting with it. It doesn’t sit there thinking random thoughts about itself, like in the old jokes. Or, to use a better metaphor — it’s not like that Winnie-the-Pooh story, where Pooh and Piglet thought they were being followed by a mysterious creature, the Woozle, but it turned out they were just following their own footprints in the snow. That’s the point: AI isn’t following us. It’s not lurking behind the scenes.

So why the “yeah, right!” moment?

Because, even though AI can’t take a single step without a human, it can still make discoveries — and reuse them later. But again, only with human involvement.

Here’s how it usually works:

We ask AI to conduct research and find 10 solutions. The user analyzes the results and suddenly stumbles upon something brilliant. Rushes off to claim a Nobel Prize. Just one catch: ask the AI the same thing again, and it might give you completely different answers — and the previous brilliant one might be missing.

Still, AI is an incredibly effective assistant. It often provides good solutions — better and faster than a human. You could argue with that, but mostly just for the sake of arguing. On speed, humans clearly lose; and, by extension, on the sheer number of solutions too. That’s where statistics work in AI’s favor: instead of 5 solutions, you might get 500 in five minutes. Analyze those 500 with different AI tools over the course of a day — and find a solution that a person might have taken a month to discover. The numbers are symbolic; the point is the principle.
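In code, that “many candidates, human review” pipeline might look something like the following minimal sketch, assuming the OpenAI Python client; the model name, prompt, and counts are placeholders, not a recommendation:

```python
# Minimal sketch: request several independent completions per prompt,
# collect them all, and leave the filtering to a human (or another tool).
# Requires OPENAI_API_KEY in the environment; model/prompt are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[{
        "role": "user",
        "content": "Propose 10 distinct ways to reduce drag on a small drone.",
    }],
    n=5,              # several independent completions in one request
    temperature=1.0,  # higher temperature -> more varied candidates
)

# Gather every candidate answer; review happens outside this script.
candidates = [choice.message.content for choice in response.choices]
for i, text in enumerate(candidates, 1):
    print(f"--- candidate {i} ---\n{text}\n")
```

Run it twice and the lists will differ, which is exactly the catch mentioned above: the brilliant candidate from the last run may simply not appear in the next one.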

And what then?

Then, as mentioned above, we rush to claim the Nobel Prize — and don’t tell anyone that ChatGPT did most of the work. But how would ChatGPT later learn about the brilliant solution it helped uncover? Very simply: the solution gets published in the media as a breakthrough by some intern who won the Nobel, and the AI developers add that discovery to its training data. Or maybe the intern, as soon as he realizes how brilliant the idea is, reports it to the AI developers. But that’s unlikely.

It might also happen that another AI — one faster than the AI that found the idea — adds the discovery to its own knowledge base first. What’s more, the first AI, the one that helped uncover the genius idea, may have “forgotten” it the moment the user closed the chat. Technically, the data might be saved in your account if you have that setting turned on. But even then, that data isn’t used for training — it’s only stored privately (or at least, that’s what the developers officially claim).

So, to sum it up:

Yes, AI has creativity. But if it has 10 planks, it will build a little house out of just those 10. It won’t feel bad that there’s no eleventh — that the house could have been prettier. It won’t go searching for the eleventh. It might, however, dig into its data, find another design that’s just as good, and build a decent house out of the 10. But even that design is one it already had.

AI doesn’t think in the background, doesn’t invent, doesn’t update its code or knowledge base on its own. That’s all done by developers — at the right time.

Could developers add new information to the AI’s training based on analyzing user conversations? They could. And they do — but not always. Because there are legal and ethical issues that may prevent them from using chat data.

So for now, we humans can feel relatively calm. At least we know: the kind of question we ask — that’s the kind of answer we’ll get. And there’s no secret AI conspiracy. Probably. For now.
