Senjuti Dutta

Why AI Still Struggles With What Isn't There

All opinions are my own.

On March 26, 2025, a technical milestone quietly passed: ChatGPT successfully generated an image of a wine glass filled to the brim. Just a few months, or even a few weeks, earlier, it couldn't. The model repeatedly returned partial pours, no matter how the prompt was phrased. The change marked a clear improvement in visual fidelity and prompt adherence.

But this blog isn’t about that.

Because while AI has grown better at adding details — pouring more wine, rendering clearer skies, combining horses with swimming pools — it still fails in a far more basic way:

It struggles deeply with negation — with imagining what’s missing.

The Failure to Subtract

Try these prompts today:

  • “A plane without wings”
  • “A dog without whiskers”

You’ll almost always get images that ignore the “without” entirely. The plane has wings. The dog has whiskers. Sometimes the model even renders those features more prominently, almost as if the negation had the opposite effect.

Example prompt failure: “a plane without wings” still shows wings.

This isn’t a rendering issue. It’s conceptual. The model doesn’t understand how to violate its own learned patterns. And that tells us something about what AI is — and what it isn’t.
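
If you want to reproduce the failure with the prompts above, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint name is only an example, not a claim about which model the original tests used; any recent open text-to-image checkpoint can be swapped in.

```python
# Minimal sketch for trying the negation prompts above.
# Assumptions: the Hugging Face diffusers library is installed and a GPU is available.
# The checkpoint name below is an example, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # example checkpoint; swap in any text-to-image model
    torch_dtype=torch.float16,
).to("cuda")

for prompt in ["a plane without wings", "a dog without whiskers"]:
    image = pipe(prompt).images[0]  # generate one image per prompt
    image.save(prompt.replace(" ", "_") + ".png")
    # Inspect the saved files: in my experience of similar tests, the "without" is usually ignored.
```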

When Negation Works — And Why

Some negation prompts do succeed:

  • “A man's face without a beard”
  • “A bird without feathers”

Example prompt success: “a bird without feathers” shows a bird with no feathers.

Why do these work while others fail?

Because the model has seen both versions. It has been trained on images of clean-shaven men and even plucked birds. These are optional features, statistically variable in the dataset.

But wings? Whiskers? These are rarely — if ever — missing. So when you ask the model to remove them, it can’t. It’s not designed to question what it has learned. It’s designed to repeat it.
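
One way to see this from the prompt side is to probe the kind of text encoder that sits in front of many image generators. The sketch below is an illustration under stated assumptions, not a description of how ChatGPT works internally: it uses the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint, and simply measures how far adding “without” moves a prompt embedding.

```python
# Minimal sketch, assuming the Hugging Face transformers library and the public
# "openai/clip-vit-base-patch32" checkpoint, to check how much the word "without"
# actually changes a CLIP text embedding.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a plane with wings",
    "a plane without wings",
    "a dog with whiskers",
    "a dog without whiskers",
]

inputs = processor(text=prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalise for cosine similarity

# Cosine similarity between each "with"/"without" pair.
print("plane:", round((emb[0] @ emb[1]).item(), 3))
print("dog:  ", round((emb[2] @ emb[3]).item(), 3))
# Similarities close to 1.0 mean the encoder treats "without wings"
# as nearly the same request as "with wings".
```

If the two prompts land almost on top of each other, the generator downstream has little to work with: the absence never made it into the representation in the first place.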

Negation Is Not Just Visual — It’s Philosophical

What’s happening here isn’t just a limitation in training data. It’s a limitation in how the model represents knowledge.

Negation isn’t just flipping a switch. It requires understanding what something is, and what happens when a part is missing. That means reasoning about essence.

If a plane has no wings, is it still a plane? If a dog has no whiskers, is it still a dog? A human would ponder that. AI does not. It doesn’t think — it associates.

Enter David Hume

David Hume, the 18th-century philosopher, argued that all ideas come from impressions — from things we’ve actually seen or experienced.

In that sense, modern AI is thoroughly Humean. It doesn’t reason. It doesn’t invent. It recombines impressions. Ask it to draw a unicorn? Easy — it’s seen horses and horns. But ask it to imagine the absence of a wing? Or to question the structure of a concept?

It fails — because it hasn’t had the right impressions. And it cannot go beyond them.

The Real Limit of AI Isn't Art — It's Absence

AI can now fill a glass of wine. It can mix animals and architecture. It can render things that look original.

But it cannot imagine what it hasn’t seen, and more importantly, it cannot subtract meaningfully from what it has learned to be “true.”

Until it can reason about what isn’t there — about what’s missing, what’s optional, what’s essential — it won’t be able to think. Not in the way we do.

AI sees patterns. But it doesn’t yet see the space between them.