AI Keeps Making Beautiful Images. That’s Exactly the Problem.

We’re outsourcing our taste and calling it art.

Everyone says AI art is easy. Lazy, even. Just type a prompt, click a button, and voilà, instant masterpiece. But if you actually care about what the image feels like? If you want it to carry tone, texture, and story? That’s where everything breaks down.

Perfection Is Easy. Feeling Is Hard.

I generated 393 AI images for a single chapter of a graphic novel. Know how many made it into the final version? Roughly 20. Not because the rest were ugly. They weren’t. They were beautiful. Smooth. Cinematic. But they didn’t tell the story. They didn’t carry the burden of the moment. They looked like art. But they didn’t feel like mine.

Beauty isn’t authorship.

AI Is Getting Better. That’s Exactly the Problem.

Every time the model “gets it right,” something strange happens. You start trusting it more. You stop questioning what feels off. You begin to settle.

The danger isn’t that AI replaces artists. It’s that it replaces the part of the artist that fights back.

When that happens, you’re not creating anymore. You’re curating what a machine thinks you want. And that’s where taste goes to die.

Most People Stop at Pretty

I was naive.

I thought AI could save me time. Instead, it made me ruthless. I rewrote prompts, tuned lighting, tested color palettes, deleted folders full of “perfect” images, all because they felt hollow.

You can spot these images everywhere: slick renders, cinematic haze, perfect anatomy, and zero soul. They don’t contradict themselves. They don’t hesitate. They don’t carry tension. Or silence. Or memory.

True storytelling isn’t generation. It’s calibration.

The Sora Prompt

Eventually, I had to invent a new way to talk to the model. Not what I wanted, but how it should feel. I called it the Sora Prompt.

Not “dreamy forest.” Instead: “humid dusk-to-night with cool-blue moon haze.”

I specified:
  • Era: Early 1990s
  • Locale: Small South Indian village
  • Palette: #0C0950, #161179, #4D4DFF, #500073
  • Character posture. Fabric weight. Environmental stillness.

That helped. A little. But even with all of that, most of the outputs still leaned toward what the AI thought was beautiful. And that’s the trap.

The model is trying to please you. But sometimes, what you need is resistance.

Here's an example snippet from a Sora prompt I actually used:

The presets:
presets:
  environment:
    era: "Early-1990s"
    locale: "Small South-Indian village"
    lighting: "humid dusk-to-night with cool-blue moon haze"

  style:
    medium: oil_paint
    render_mode: wash
    color_palette: ["#0C0950", "#161179", "#4D4DFF", "#500073"]

  characters:
    satya:
      description: "Age ~3; petite toddler build; dense dark curls. Wears short-sleeved floral shirt, shorts; curious posture."

    grandfather:
      description: "Lean elderly man; short pepper-grey beard; white tank-top and veshti; typically rim-lit silhouette."

The page prompt:
layout: grid
rows: 2
columns: 2
gutter_px: 20
gutter_style: faded

panels:
  - cell: {row: 1, col: 1}
    type: close_up
    focus: satya_eyes_widen
    content: |
      Satya’s silhouette, eyes widening, highlighted by cool-blue rim light.
    text: |
      “But it’s not?”

  - cell: {row: 1, col: 2}
    type: detail
    focus: grandfather_smile
    content: |
      Grandfather’s gentle silhouette smile, softly lit.
    text: |
      “No, Aadi kanna.”

  - cell: {row: 2, col: 1}
    type: splash
    focus: stars_scale
    content: |
      Expansive starfield, countless lights stretching infinitely.
    text: |
      “The stars are not small.”

  - cell: {row: 2, col: 2}
    type: splash
    focus: starfield_immensity
    content: |
      Celestial panorama with shining stars and nebulae.
    text: |
      “They are huge, bigger than anything you can imagine.”
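
If you’re wondering how the two blocks fit together: the presets are a shared vocabulary, and each page prompt borrows from them panel by panel. Here’s a rough sketch of that merge step in Python, assuming PyYAML is available. The file names (presets.yaml, page.yaml) are placeholders and the output wording is invented, not anything Sora requires; read it as a way of seeing the structure, not a pipeline.

# Sketch: merge the shared presets with one page's panel spec into
# per-panel prompt text. "presets.yaml" and "page.yaml" are placeholder
# file names; the output format is illustrative only.
import yaml  # PyYAML

with open("presets.yaml") as f:
    presets = yaml.safe_load(f)["presets"]
with open("page.yaml") as f:
    page = yaml.safe_load(f)

env = presets["environment"]
style = presets["style"]

# One shared preamble every panel inherits, so era, lighting, and
# palette can't drift from panel to panel.
base = (
    f"{env['era']}, {env['locale']}, {env['lighting']}. "
    f"Rendered as {style['medium']} ({style['render_mode']}), "
    f"palette {', '.join(style['color_palette'])}."
)

for panel in page["panels"]:
    cell = panel["cell"]
    dialogue = panel.get("text", "").strip()
    prompt = (
        f"Panel r{cell['row']}c{cell['col']} "
        f"({panel['type']}, focus: {panel['focus']}): "
        f"{panel['content'].strip()} {base}"
        + (f' Caption: "{dialogue}"' if dialogue else "")
    )
    print(prompt, end="\n\n")

The useful part isn’t the script. It’s that the mood lives in exactly one place, so every panel argues with the same constraints.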

What Actually Worked

Here’s what made a difference:
  • I deleted everything that didn’t feel like the story.
  • I stopped chasing what looked “good.”
  • I defined a visual grammar: palette, posture, rendering style, narrative weight.
  • I treated AI like a lens, not a brush. I had to frame, direct, and sometimes reject it entirely.

Most importantly:

If an image made me feel nothing, I deleted it. Even if it looked great. Especially if it looked great.
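
The visual-grammar bullet is the most concrete of the four, and the only part that could even be partly mechanized. As an illustration only (it assumes Pillow is installed, and the distance threshold is arbitrary), here’s what a crude palette check could look like: does an image’s average color land anywhere near the declared hexes?

# Sketch: check whether an image's average color lands near a declared
# palette. Assumes Pillow (PIL); the threshold below is arbitrary.
from PIL import Image

PALETTE = ["#0C0950", "#161179", "#4D4DFF", "#500073"]

def hex_to_rgb(h: str) -> tuple[int, int, int]:
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def average_color(path: str) -> tuple[float, float, float]:
    # Downscale first so the average isn't dominated by noise.
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = list(img.getdata())
    n = len(pixels)
    return tuple(sum(px[i] for px in pixels) / n for i in range(3))

def near_palette(path: str, max_dist: float = 120.0) -> bool:
    avg = average_color(path)
    dists = [
        sum((a - b) ** 2 for a, b in zip(avg, hex_to_rgb(p))) ** 0.5
        for p in PALETTE
    ]
    return min(dists) <= max_dist

if __name__ == "__main__":
    # "panel_01.png" is a placeholder path.
    print(near_palette("panel_01.png"))

A panel can pass this and still feel like nothing; failing it just means the mood has drifted. The hard filter stays human.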

What I Was Really Trying to Paint

I wasn’t chasing visual perfection. I was trying to hold a single, quiet moment:
A child asking his grandfather,
“Are the stars attached to the ceiling of the sky?”
That line unraveled into a meditation on existence, silence, memory, and perception. AI generated the images, but I had to guide it there, and recognize when it was faking the feeling and just glazing over the moment.

Readers have responded warmly to Chapters 1 and 2, and that’s what matters:

“A thought-provoking exploration of consciousness, existence, and the nature of reality… seamlessly moves between cosmic, abstract entities and deeply human character moments.”

“I felt like I was going through a journey towards more of an awakening to the greater idea of existence.”

“Existence is such a profound concept... true nothingness not being an end state but a space between existence itself.”

“I could feel the weight of everything, the fear of death, the silence between Shakti and SATYA, and that haunting tension underneath.”

You can read Chapter 2 now and see the images I spent so long making, with AI as just one part of the process:

Here's Chapter 2 of Meditation at the End of the Universe.

If you’re using AI to create art:

Don’t chase what looks good.
Chase what feels like your story.

Ignore the noise.
Make something that holds.

Subscribe to the newsletter