Something strange is happening in engineering hiring. Companies are using AI to build their products, encouraging engineers to use Copilot and Claude in their daily work, and celebrating the productivity gains — then asking those same engineers to solve LeetCode puzzles on a whiteboard during interviews, without any tools, as if it were 2015.
It doesn't add up. And at Ruzora, we've spent the last year fundamentally rethinking how we vet engineers because of it.
The catalyst was a talk by Justin Lee — a software engineer at Notion who has been designing technical interviews since 2017, including during his time at Arc.dev — at LeadDev Berlin 2025. His core argument was simple and, once you hear it, impossible to ignore: most technical interviews test knowledge recall and pattern recognition. Those are precisely the things AI already does well. We're screening engineers on skills that are rapidly becoming commoditized, while ignoring the skills that actually determine whether a hire succeeds.
That insight changed how we evaluate every engineer who comes through our pipeline. Here's what we learned, what we changed, and why it matters for every company hiring engineers in 2026.
The Problem: Interviews Test the Wrong Column
Justin presents a framework that clarifies the issue immediately. Imagine a two-column matrix:
Left column — what AI excels at:
- "What's a mutex?"
- "Implement an LRU cache"
- "Design Google Docs"
- Structured knowledge recall
- Pattern matching on known problems
Right column — what humans excel at:
- "How did you handle concurrency issues in your infrastructure?"
- "What features did you cut to make a deadline, and how did you decide?"
- Judgment in ambiguous situations
- Trade-off reasoning with real constraints
- Learning from lived experience
Most technical interviews — the ones with algorithmic coding challenges, trivia questions about data structures, and textbook system design prompts — live entirely in the left column. They test whether a candidate can do things that ChatGPT can do in seconds.
"Why shape interviews around what AI can easily handle?" — Justin Lee, Notion
The right column is where the signal lives. And it's precisely what we now focus on at Ruzora.
Why "Just Ban AI" Isn't the Answer
The most common response to AI in hiring is to ban it. "Please turn off your AI tools during this interview." It's a valid boundary, but it creates a fundamental contradiction.
As Microsoft CEO Satya Nadella noted in 2025, 20 to 30 percent of the code in Microsoft's repositories is now AI-written. If your engineers use AI tools every day on the job — and they do — then testing them without those tools tells you how they perform in an artificial environment, not how they'll perform in your actual codebase.
You have two design choices:
1. Evaluate how well someone uses AI as part of their engineering process. This tests tool-augmented productivity, prompt engineering ability, and critical evaluation of AI-generated output.
2. Focus on the human attributes AI still struggles with. Judgment, context, reasoning from experience, communication, and the ability to navigate ambiguity.
At Ruzora, we've chosen option two — and it's transformed the quality of our placements.
How We Redesigned Our Coding Assessments
Justin made an observation that resonated deeply with our experience: "In trying to make programming interviews realistic, we sometimes make them more complicated than real work."
He described an interviewer who spent 10 minutes reading a script to walk candidates through a 20-file sandbox with fake APIs and mock data. It felt nothing like real engineering work. The people who excelled had simply practiced that specific format.
What We Stopped Doing
- Algorithmic puzzle challenges (reverse a linked list, find the shortest path)
- Complex sandbox environments that take longer to explain than to solve
- Time-pressured coding exercises that reward memorization
- Testing language-specific syntax knowledge
What We Do Now
Our coding assessments use simple, realistic tasks that mirror daily engineering work:
- Join two JSON objects with different schemas
- Read and parse a log file to extract specific patterns
- Extend a SQL query to handle a new edge case
- Write a simple caching layer with an expiration policy
- Build a basic queue consumer that handles retries
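To give a sense of the scope we're aiming for, here is a minimal sketch of what the caching-layer task might look like. This is an illustration of the difficulty level, not a prescribed solution — the `TTLCache` name, the lazy-eviction choice, and the injectable clock are all our own assumptions:

```python
import time

class TTLCache:
    """Minimal key-value cache whose entries expire after a fixed TTL."""

    def __init__(self, ttl_seconds=60, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock      # injectable clock makes expiry testable
        self._store = {}        # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value
```

A strong candidate reveals themselves not in the twenty lines of code but around them: do they ask whether eviction should be lazy or run in the background, and do they notice that injecting the clock lets them test expiry without sleeping?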
These are deliberately simple. The task itself isn't the point — it's a vehicle. What we observe is how the candidate actually works:
- Do they run their code incrementally, or write everything and hope it works?
- Do they write tests or log output to verify behavior?
- When they get stuck, do they debug systematically or guess randomly?
- Do they ask clarifying questions about requirements?
- Do they talk through their reasoning?
We tell every candidate upfront: "We don't care about code performance or whether you finish. We care about how you think through the problem." Justin calls this "setting context," and it's the single most impactful change we've made.
"I don't care about the performance of the code, or whether you finish. It enables them to do their best work." — Justin Lee
When you remove the pressure to perform under artificial constraints, candidates relax. They think out loud. They show you their actual engineering judgment — the thing that determines whether they'll succeed on your team.
How We Redesigned Our System Design Interviews
The traditional system design interview goes something like this: "Design Twitter" or "Design a URL shortener." The candidate draws boxes on a whiteboard, recites memorized architectures, and discusses theoretical trade-offs they've read about but never actually faced.
This format has the same problem as algorithmic interviews — it tests knowledge recall, not experience. A candidate who spent a weekend studying "System Design Interview" by Alex Xu can ace these questions without ever having built a production system.
Experience-Based Over Knowledge-Based
We replaced knowledge-based questions with experience-anchored ones. The difference is subtle but powerful:
Knowledge-based (what we stopped asking):
- "What are the trade-offs of serverless?"
- "Compare Cassandra to PostgreSQL"
- "How would you design a real-time notification system?"
Experience-based (what we ask now):
- "You mentioned you deployed Lambda functions at your last company. What scalability challenges did you hit?"
- "Did you introduce Cassandra as a new data store? How did your team support the operational complexity of a new system?"
- "Walk me through a time you had to make a technology trade-off under time pressure. What did you choose, what did you give up, and how did it play out?"
The key difference: experience-based questions are extremely hard to fake. A candidate who has genuinely dealt with Lambda cold-start issues in production will describe specific behaviors, workarounds, and lessons learned that no amount of blog-reading can replicate. A candidate who hasn't will give textbook-level answers — what Justin calls "trivia mode."
Handling Trivia Mode
When candidates fall into reciting definitions — "Cassandra is a wide-column NoSQL database that uses consistent hashing..." — our interviewers redirect with a simple prompt: "That's a good overview. How have you used this in your own work?"
This single question separates engineers with genuine production experience from those who've only read about it. And it does so without trick questions, gotchas, or adversarial dynamics.
The Interviewer's Role: Guide, Not Proctor
This is perhaps the most underappreciated aspect of effective technical interviews. The interviewer's skill matters as much as the question design.
At Ruzora, we train our interviewers on specific techniques drawn from Justin's framework:
- Set context early. Tell candidates this isn't a puzzle test. When you do, they stop apologizing for code not being perfect and start showing you how they actually think.
- Handle depth calibration. When a candidate goes too deep on a tangent, redirect gently: "This is great context, but let's move on to stay on time." When they stay too shallow, push: "Can you give me a specific example from a system you've built?"
- Tell candidates shortcuts are okay. If they want to skip boilerplate or use pseudocode for a section, that's fine — as long as they call it out. This keeps them talking and shows reasoning.
- Listen more than talk. The interviewer's job is to create space for the candidate to demonstrate their abilities, not to demonstrate their own knowledge.
"You're not a test proctor. Your role is to create an environment where candidates can do their best work." — Justin Lee
The result? Candidates consistently tell us they enjoyed the process. That's not just a feel-good metric — it directly impacts our ability to attract top talent. Word gets around. Engineers talk to each other about interview experiences. A respectful, thoughtful process becomes a competitive advantage in hiring.
Why This Matters for Our Clients
At Ruzora, our vetting process is the foundation of everything we deliver. When a client receives a shortlist from us, they're trusting that we've already done the hard work of separating genuinely excellent engineers from those who merely interview well.
Our old process — while thorough — was partially susceptible to the same left-column bias that plagues the industry. We were catching it in our later stages, but the redesign has moved the signal detection earlier and made it more reliable.
The impact since we adopted this approach:
- Candidate pass-through rate dropped from 4.2% to 2.8%. We're more selective because we're testing for things that actually predict success.
- Client 6-month satisfaction scores increased from 88% to 93%. Better vetting means better matches.
- Interview-to-offer ratio improved by 35%. Clients make offers more confidently because the candidates we present have been tested on real-world judgment, not puzzle-solving ability.
What This Means If You're Hiring Engineers
Whether you work with Ruzora or not, here are the principles we'd encourage every engineering leader to adopt:
- Audit your interview questions. For each question, ask: "Could an AI answer this in 30 seconds?" If yes, it's testing the wrong thing.
- Shift from knowledge-based to experience-based. Instead of "what are the trade-offs of X?", ask "tell me about a time you had to choose between X and Y in production."
- Simplify your coding exercises. Complex sandboxes test interview preparation, not engineering ability. Use simple, realistic tasks and observe the process, not just the output.
- Train your interviewers. An untrained interviewer with a great question will produce worse signal than a trained interviewer with an average question. Invest in interview skills.
- Set context with candidates. Tell them what you care about (judgment, communication, experience) and what you don't (finishing, performance optimization, syntax perfection). Watch how their behavior changes.
- Treat interviews as a two-way evaluation. The candidate is evaluating you too. A respectful, well-designed process attracts better candidates and improves your close rate.
The Bigger Picture
AI isn't replacing engineers — but it is making certain engineering skills less scarce. The ability to write syntactically correct code, recall data structure trade-offs, or implement standard algorithms is being democratized by AI tools that every engineer now has access to.
What remains scarce — and what will remain scarce for a long time — is the judgment that comes from years of building, breaking, and debugging production systems. The ability to look at a problem and know, from experience, which approach will work and which will create headaches in six months. The communication skills to explain a complex decision to a product manager, disagree constructively in a code review, or mentor a junior engineer through a difficult debugging session.
Those are the skills we test for. Those are the skills that predict whether a hire will still be on your team — and thriving — twelve months from now.
And in a world where AI handles the routine, the engineers who can think clearly, communicate precisely, and draw on deep experience are more valuable than they've ever been. Our job is to find them. And our interviews are finally designed to do exactly that.
