
From Painters to Sculptors: How AI Is Changing the Way We Write Code

March 18, 2026

A marble block being chiseled to reveal clean code architecture — the shift from additive creation to subtractive refinement

For most of my career, writing software felt like painting.

You start with a blank canvas. Every line of code is deliberate. Additive. Yours.

You build systems stroke by stroke.

But that mental model is breaking.


The Shift: From Additive to Subtractive

Think about what a sculptor does differently from a painter.

A painter starts with nothing and builds outward. Every mark is a decision, a construction. The blank canvas is a problem to solve.

A sculptor starts with more than enough—a block of marble—and removes what doesn't belong. The form was always there. The job is to reveal it.

That's the shift. With AI generating entire functions, modules, even architectural scaffolding, we're no longer building stroke by stroke. We're constraining, shaping, and refining something that already exists.

The role has shifted from creator to editor of possibility.


Speed Is Up. Confidence Is Down.

There's no question: we can build faster than ever. But that speed comes with a cost that took me a while to name.

After more than a decade of writing code, I'm no longer close to every line. That used to be a source of pride—not ego, but confidence. When something broke, I had a mental map. I could reason about what was happening because I had written the thing. I had opinions about the structure because I'd thought through each decision.

That's changed. There are parts of systems I've shipped recently where I understand the intent completely, but the implementation is... mostly AI's. I reviewed it. I shaped it. I constrained it. But I didn't write it from scratch.

And that creates a new kind of discomfort. Not "is this code wrong?" but "would I even know if it was?"

The instinct is to slow down. To go back to writing every line. But that's nostalgia, not a strategy. The question becomes: if you didn't write every line… how do you trust the system?


The New Discipline: Observability Over Intuition

The answer isn't to slow down—it's to evolve how we validate.

I've found myself leaning more heavily on blue-green deployments, higher code coverage, strict linting, and end-to-end testing. But more importantly, we're starting to bring AI into the validation loop itself.

AI isn't just writing code—it's reviewing it. We now run AI-powered code reviews in CI to catch issues before production. It's not perfect, but it's a meaningful shift.
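As a sketch of what that CI step can look like: the script below extracts the added lines from a unified diff and fails the build when the review surfaces findings. The `ai_review` function here is a placeholder with two trivial rule-based checks; in a real pipeline it would call out to a model. Every name in this sketch is illustrative, not a real API.

```python
def added_lines(diff: str) -> list[str]:
    """Extract lines added by a unified diff (lines starting with '+',
    excluding the '+++' file header)."""
    return [
        line[1:]
        for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]

def ai_review(lines: list[str]) -> list[str]:
    """Stand-in for a model-backed review. Here: two rule-based checks
    that flag common leftovers in added code."""
    findings = []
    for line in lines:
        if "TODO" in line:
            findings.append(f"Unresolved TODO in added code: {line.strip()}")
        if "print(" in line:
            findings.append(f"Debug print left in added code: {line.strip()}")
    return findings

def ci_gate(diff: str) -> int:
    """Return a nonzero exit code when the review has findings,
    so CI blocks the merge."""
    findings = ai_review(added_lines(diff))
    for finding in findings:
        print(f"REVIEW: {finding}")
    return 1 if findings else 0
```

The point isn't the checks themselves; it's that the gate runs on every diff, before a human ever has to trust the code on intuition alone.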

Validation Layer    Old Approach                  New Approach
Code review         Peer review                   Peer review + AI-assisted CI checks
Deployments         Push and monitor              Blue-green with automatic rollback
Test coverage       Happy paths                   End-to-end + coverage gates
Confidence          Came from writing the code    Comes from watching the system behave
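The blue-green row hides a simple control loop: deploy to the idle color, verify it, cut traffic over, and revert automatically if verification fails at any point. A minimal sketch of that loop, where all four callables are hypothetical hooks into your platform rather than a real deployment API:

```python
def blue_green_cutover(deploy, health_check, switch_traffic, rollback) -> str:
    """Deploy to the idle environment, verify, then cut traffic over.
    A failed check at any step triggers an automatic rollback."""
    deploy("green")                      # stage the new version alongside the old
    if not health_check("green"):        # verify before any real traffic arrives
        rollback("green")
        return "rolled back"
    switch_traffic(to="green")           # cut over only after checks pass
    if not health_check("green"):        # re-verify under live traffic
        switch_traffic(to="blue")        # blue is still warm: instant revert
        rollback("green")
        return "rolled back"
    return "promoted"
```

What makes this valuable for AI-shaped code is that trust lives in the loop, not in the author: the system proves itself healthy twice before anyone has to vouch for the implementation.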

The Sculptor Mindset

If this is the new paradigm, then the next generation of engineers needs a different mindset.

Two principles stand out:

1. Dream Bigger Than Feels Reasonable

The constraint has changed. It's no longer "can I build this?"—it's "can I articulate this clearly enough for AI to help me build it?"

That's a different skill. It rewards engineers who can think architecturally before touching a keyboard—who can write a detailed spec, break a system into components, and describe the interfaces before any code exists. AI is a multiplier, and what it multiplies is your clarity of thinking.

Concretely: take the thing you've been putting off because it felt too big. The internal tool no one has time to build. The automation that would save hours a week. The prototype that could validate a direction before a full sprint. These are tractable now. The engineers who are thriving are the ones who are attempting them—not waiting for permission or capacity that never comes.

2. Play With the Problem Space

The best way to learn this new model is through exploration. And one direction I keep returning to: what if you didn't just build for users—but simulated them?

Instead of testing happy paths and edge cases in isolation, what if you modeled the texture of your actual user base? This is what I've started calling negative space engineering—and I want to be clear that this is more prediction than practice right now.

The idea: build the inverse of your system. Create synthetic user agents that behave like your real users—with realistic patterns, frustrations, and failure modes. Use them to pressure-test your software before a single real person touches it. Not just "does this button work," but "does this workflow hold up under the weight of how people actually use it?"
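Since this is prediction more than practice, here is only a toy sketch of the shape it might take: a synthetic user modeling one real behavioral pattern (the impatient double-submitter) hammering a signup workflow. Everything here is invented for illustration; the interesting part is generating the population from observed behavior, which this stub only gestures at.

```python
import random

def signup(db: set, email: str) -> str:
    """The hypothetical workflow under test: reject duplicate registrations."""
    if email in db:
        return "error: already registered"
    db.add(email)
    return "ok"

class SyntheticUser:
    """Models one behavioral pattern drawn from real usage: the impatient
    user who re-submits when the first response feels slow."""
    def __init__(self, email: str, retries: int):
        self.email = email
        self.retries = retries  # extra submissions beyond the first

    def run(self, db: set) -> list[str]:
        return [signup(db, self.email) for _ in range(1 + self.retries)]

def pressure_test(seed: int = 0) -> dict:
    """Run a synthetic population against the workflow and tally outcomes."""
    random.seed(seed)
    db: set = set()
    results = {"ok": 0, "error": 0}
    for i in range(100):
        user = SyntheticUser(f"user{i}@example.com", retries=random.randint(0, 2))
        for outcome in user.run(db):
            results["ok" if outcome == "ok" else "error"] += 1
    return results
```

Even this toy version answers a question unit tests don't ask: not "does signup work?" but "what does signup do under the duplicate-submission texture of real traffic?"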

The tooling is still emerging. But I believe this is where observability and AI-assisted development are heading—toward systems that can model their own failure modes before they fail.


Trust Comes From Experience, Not Code

You can't read your way to trust anymore. Trust comes from seeing systems behave correctly.

That means:

  • Watching recordings of end-to-end tests
  • Interacting with your application directly
  • Having engineers QA their own work

As engineers move up the stack, we own more surface area—not less. And the one thing AI doesn't replace is accountability.



Final Thought

We're not painters anymore. We're sculptors. And the job is no longer to create from nothing—it's to shape what's possible into something trustworthy.

The engineers who thrive in this new era won't be the ones who resist the shift. They'll be the ones who embrace it—and learn to trust the chisel.