
When AI Writes the Code, What Should Interviews Really Test?

6 min read · December 23, 2025 · 905 views

In an era where AI tools are mandated at work, why do interviews still test skills we no longer use? When AI can generate code, what should we actually evaluate? Reasoning, understanding, and critical thinking, not just code generation.


I was conducting an interview recently, and something the candidate said stuck with me long after the call ended.

Midway through a coding discussion, he paused and said, "Coding a question isn't really that important these days. We have Copilot and other AI tools now. Even companies are pushing us to use them."

He wasn't wrong.

In fact, in my own company, we have a mandate encouraging developers to use AI tools on a daily basis. They're positioned as productivity boosters, accelerators, even quality improvers. We celebrate faster delivery, fewer bugs, and better developer experience, all enabled, at least in part, by AI.

The candidate continued, "I may not be able to write the complete code, but I can guide you on how we should solve the problem at a high level."

To his credit, he did explain the approach reasonably well. The architecture made sense. The flow was logical. The intent was clear.

But when I asked follow-up questions about edge cases, trade-offs, failure scenarios, and implementation details, he struggled. The confidence faded. The answers became vague.

And that's when a bigger question surfaced in my mind:

Shouldn't the interview process itself change now?

🤖 When AI Writes Code, What Should Interviews Test?

This question isn't just theoretical. It's urgent. As AI tools become standard in development workflows, the gap between how we work and how we interview grows wider every day. Companies are mandating AI usage, yet interview rooms still operate as if these tools don't exist. This disconnect isn't just unfair to candidates; it also fails to identify the skills that actually matter in modern software development.

To answer this properly, we need to examine both sides: the reality of AI-assisted development today, and what interviews were really trying to measure all along. The answer isn't about banning AI from interviews, nor is it about accepting shallow understanding. It's about testing the right things.

🌍 The Reality We're Already Living In

Let's be honest about where we are today.

AI-assisted coding is no longer experimental.

It's not a "nice to have."

It's becoming expected.

Developers use tools like Copilot, ChatGPT, and internal AI assistants to:

  • Generate boilerplate code
  • Write tests
  • Refactor legacy logic
  • Explore unfamiliar APIs
  • Debug faster

And companies are actively encouraging this.

So the candidate wasn't wrong to question the relevance of writing code from scratch in an interview setting. If the job allows, or even mandates, AI assistance, why are interviews still pretending it doesn't exist?

But that's only half the story.

🧠 Coding Was Never Just About Typing Code

Here's the uncomfortable truth:

Coding interviews were never meant to test typing speed or syntax memory.

They were proxies.

Proxies for:

  • Problem decomposition
  • Logical reasoning
  • Data structure awareness
  • Trade-off analysis
  • Attention to edge cases
  • Debugging mindset

When a candidate says, "I can't write the code, but I can explain the solution," what they're really saying is:

"I understand the destination, but I'm not confident about the road."

And in real-world engineering, that road matters.

AI can help you generate code.

It cannot reliably tell you:

  • Why this approach is better than another
  • When an abstraction will break at scale
  • What corner case will crash production at 2 a.m.
  • Which trade-off your system is silently making

Those insights still come from understanding, not tooling.
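To make that concrete, here is a small, hypothetical Python sketch (the function names and the bug are invented for illustration) of the kind of code an assistant will happily produce, and the kind of question only understanding raises:

    # A plausible AI-generated helper: correct on the happy path.
    def average_latency(samples: list[float]) -> float:
        return sum(samples) / len(samples)  # ZeroDivisionError when samples is empty

    # The 2 a.m. corner case is a decision, not a completion:
    # what should "no data in this window" mean to the caller?
    def average_latency_safe(samples: list[float]) -> float | None:
        if not samples:
            return None  # explicit trade-off: callers must handle the missing-data case
        return sum(samples) / len(samples)

A tool will generate either version on request. Deciding which behavior the system actually needs is the part that still belongs to the engineer.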

⚠️ Where the Candidate's Argument Breaks Down

The candidate's position sounded modern, even progressive.

But the cracks showed during follow-up questions.

When I asked:

  • What happens if this assumption fails?
  • How would you optimize this for large inputs?
  • What would you log or monitor here?
  • How would you debug this in production?

The answers didn't come.

And that's the key distinction.

High-level explanations are easy.

Depth is hard.

AI can help you write code you don't understand.

Interviews exist to ensure you do understand it.

🕰️ But the Interview Process Is Outdated

At the same time, I can't ignore the other side.

Many interviews still:

  • Penalize minor syntax mistakes
  • Expect perfect recall of rarely used APIs
  • Treat AI usage as cheating
  • Optimize for stress, not signal

This creates a mismatch:

On the job, "Use every tool available."

In interviews, "Pretend those tools don't exist."

That hypocrisy doesn't help candidates or companies.

So yes, something needs to change.

🎯 What Should Interviews Test in the AI Era?

Instead of asking, "Can you write this perfectly from scratch?", we should be asking:

🧩 1. Can You Reason About a Problem?

  • Can you break it down?
  • Can you identify constraints?
  • Can you ask the right clarifying questions?

🔍 2. Do You Understand the Code You're Using?

Whether written by you or AI:

  • Can you explain each part?
  • Can you modify it safely?
  • Can you spot bugs or inefficiencies?
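For instance, a candidate should be able to spot the inefficiency in a snippet like this (a hypothetical example; the names are invented for illustration):

    # Looks fine, passes tests, but hides an O(n * m) membership check.
    def find_inactive_users(all_users: list[str], active_users: list[str]) -> list[str]:
        return [u for u in all_users if u not in active_users]  # list lookup is linear

    # A safe modification once you understand why the first version is slow:
    def find_inactive_users_fast(all_users: list[str], active_users: list[str]) -> list[str]:
        active = set(active_users)  # constant-time membership checks
        return [u for u in all_users if u not in active]

Explaining why the first version degrades as the lists grow is worth more than having typed either one.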

⚖️ 3. Can You Handle Trade-Offs?

  • Performance vs readability
  • Simplicity vs extensibility
  • Speed vs correctness

AI doesn't make these decisions for you.

You do.
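As a small, hypothetical illustration (both functions are invented for this example), here are two correct ways to pick the top N scores. Which one belongs in your codebase depends on input size and on who has to read it next:

    import heapq

    # Simple and obvious: sort everything, O(n log n).
    def top_n_simple(scores: dict[str, int], n: int) -> list[str]:
        return sorted(scores, key=scores.get, reverse=True)[:n]

    # Cheaper for large inputs when n is small, but slightly less familiar.
    def top_n_heap(scores: dict[str, int], n: int) -> list[str]:
        return heapq.nlargest(n, scores, key=scores.get)

Both answers are "correct." The interview signal is whether the candidate can say when each one is the wrong choice.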

🛠️ 4. Can You Debug and Own the Outcome?

When things go wrong:

  • Do you know where to look?
  • Do you know what questions to ask?
  • Can you reason under uncertainty?

That's engineering.

Not code generation.
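Here's a hypothetical sketch of what "owning the outcome" can look like in code (the payment names and the gateway stub are invented for illustration): make the assumptions and failure modes observable instead of trusting the happy path.

    import logging

    logger = logging.getLogger("orders")

    def gateway_charge(order_id: str, amount_cents: int) -> None:
        """Stand-in for an external payment call (hypothetical)."""

    def charge(order_id: str, amount_cents: int) -> bool:
        if amount_cents <= 0:
            # Assumption made explicit and observable, not silently "handled".
            logger.error("invalid amount: order=%s amount=%s", order_id, amount_cents)
            return False
        try:
            gateway_charge(order_id, amount_cents)
            return True
        except TimeoutError:
            # The question an engineer must answer: is this safe to retry,
            # or might the customer already have been charged?
            logger.warning("gateway timeout: order=%s", order_id)
            raise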

🧪 A Better Interview Model

Imagine an interview where:

  • The candidate is allowed to use AI
  • The interviewer observes how they use it
  • The focus is on reasoning, not rote memory
  • Follow-up questions go deeper, not wider

For example:

  • "Ask AI to generate a solution, now explain it."
  • "Which part would you change and why?"
  • "What could break in production?"
  • "How would you test this?"

This mirrors real work far better than whiteboard gymnastics.
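As a hypothetical example of such an exercise (the rate limiter below is the kind of thing an assistant might generate; it's invented for illustration, not a recommended design), the conversation that follows matters more than the code itself:

    import time

    class RateLimiter:
        """Allow at most max_calls per window_seconds (fixed window)."""

        def __init__(self, max_calls: int, window_seconds: float) -> None:
            self.max_calls = max_calls
            self.window_seconds = window_seconds
            self.window_start = time.monotonic()
            self.count = 0

        def allow(self) -> bool:
            now = time.monotonic()
            if now - self.window_start >= self.window_seconds:
                self.window_start = now
                self.count = 0
            if self.count < self.max_calls:
                self.count += 1
                return True
            return False

    # Follow-up probes aimed at the candidate, not the tool:
    # - Which part would you change? (a fixed window allows bursts at the boundary)
    # - What could break in production? (not thread-safe, no per-user keying)
    # - How would you test it? (inject or mock the clock, hit the window edges)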

📉 The Real Skill Gap Isn't Coding, It's Thinking

The interview reminded me of something important:

Tools evolve faster than understanding.

AI raises the baseline.

It does not replace fundamentals.

A developer who:

  • Understands systems
  • Thinks critically
  • Asks good questions
  • Knows their limits

will outperform someone who only knows how to prompt.

And interviews should reflect that reality.

🧠 Final Thought

That candidate wasn't entirely wrong.

But he wasn't entirely right either.

Yes, coding has changed.

Yes, AI is here to stay.

Yes, interviews must evolve.

But outsourcing understanding to tools is not the same as being an engineer.

If interviews change, they must change in the right direction: away from memorization and toward deeper thinking.

Because in the end, when AI gives you ten answers, the real question is:

Do you know which one to trust, and why?


Enjoyed this post?

Follow me on LinkedIn for more insights on technology, career growth, and software development.
