
The Shift #4: The Death of the Knowledge-Based Interview

Traditional coding interviews are dying as AI masters LeetCode and has infinite access to knowledge. The new hiring process should value Grit, Adaptability, and Judgement over syntax memorisation.

8 February 2026 · 7 min read

<Haringey, Feb 2026. Another rainy day ...>

Important

As of 2025-2026, we all know the current interview process is broken. While pioneers like Meta are now trialling AI-assisted interviews, the wider industry has yet to adapt.

This post is my proposal for how we can fix this, what values we should focus on, and how students and freshers can prove their worth in this new reality.

Please note: these are my personal observations and predictions for the future of hiring, and may not yet reflect the standard across every company.

Note

Update 2026-02-19:

Updated the "Precision" section to specifically address the role of 'AI Engineers' building Agentic Systems. The distinction between Scepticism (not trusting the model) and Precision (force-guiding the model) is critical when moving from chat-based coding to autonomous agents.

Note

Update 2026-03-29:

For candidates preparing for an interview at Cinnamon AI:

Please note that this article does not reflect how we currently conduct interviews at Cinnamon (at least not yet). However, these are exactly the traits I usually look for in a successful "Cinner".

The traditional interview handbook is rapidly becoming obsolete. As AI replaces manual coding tasks, the metrics we have used for decades to assess engineering talent are breaking down.

If knowledge is just an API call away, what are we actually testing for?

Here is my perspective on how the value of different interview categories is shifting, and - more importantly - what students and freshers can do to prove their worth in this new reality.

The Shift in Interview Value

Job Interview vs Reality, source: https://programmerhumor.io/programming-memes/jobs-requirements/

We need to be honest about what AI has solved. The "hard skills" that used to define a senior engineer are now the baseline capabilities of a good model. Conversely, the human elements - empathy, ethics, and judgement - are becoming the only true differentiators.

Here is how the value of traditional interview categories is shifting in the age of AI:

| Focus Area | Old Value | The AI Reality | New Value |
| --- | --- | --- | --- |
| Knowledge (Core concepts, Syntax) | Critical | Knowledge is now an API call. Questions can be pasted into Perplexity or Gemini for a PhD-level answer in seconds. | Near Zero |
| Intelligence (LeetCode, Logic) | Medium/High | Reasoning engines are superior. AI solves logic better than humans (e.g., AI reaching gold-medal level at the Maths Olympiad). | Low |
| Experience (Track record) | High | Adaptability > History. "I've done this for 5 years" means less when tools change daily. A junior with good AI skills can outperform a rigid senior. | Medium |
| Behavioural (Empathy, Ethics) | Low (often seen as the "HR round" or a formality) | Human-in-the-loop. Empathy, ethics, and team dynamics cannot be prompted or automated. | High |
| Situational (Ambiguity, Judgement) | Near Zero (very few interviews test how candidates handle uncertainty) | The Orchestrator. It is no longer about generating the code, but judging whether the AI's output is right for the context. | Critical |

What Values Survive? (And How to Prove Them)

If the standard tests - syntax, memorisation, and speed - are irrelevant, what am I actually looking for?

It comes down to a fundamental shift in the engineer's role: We are moving from Craftsman to Guardian.

In 2026, the AI provides the raw intelligence and the infinite labour. The human must provide the Context, the Constraints, and the Liability.

Here are the pillars that survive "The Shift", and how you can prove them in your portfolio.

1. Grit & Resilience (The Debugger's Mindset)

The wrong Grit in debugging AI Code, source: https://www.reddit.com/r/ProgrammerHumor/comments/1jilyj1/vibecoding/

The AI Reality:

AI code is probabilistic. It often looks perfect but fails subtly: it hallucinates a library method or misinterprets a business rule. Grit is no longer about grinding through boilerplate; it is the patience to debug code you didn't write. It is the tenacity to iterate on a prompt ten times until the logic aligns with reality, rather than settling for the AI's first "convincing" answer.
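
That iterate-until-it-aligns loop can be made explicit in code. This is a minimal sketch, not a real integration: `call_model()` is a hypothetical stand-in for an LLM call (here it only "succeeds" after a few refinements, to make the loop observable), and `validate()` stands in for your ground truth, i.e. tests, schema checks, or business rules.

```python
def call_model(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a real LLM call. It simulates a model
    # that only produces acceptable output after a few refinements.
    return "valid" if attempt >= 3 else "hallucinated"

def validate(output: str) -> bool:
    # Your ground truth: unit tests, schema checks, business rules.
    return output == "valid"

def generate_with_retries(prompt: str, max_attempts: int = 10) -> str:
    for attempt in range(1, max_attempts + 1):
        output = call_model(prompt, attempt)
        if validate(output):
            return output
        # Feed the failure back into the prompt before retrying,
        # instead of accepting the first "convincing" answer.
        prompt += f"\n# Attempt {attempt} failed validation; fix it."
    raise RuntimeError("Model never produced output that passed validation")

result = generate_with_retries("Implement the billing rule")
```

The point is not the loop itself, but that acceptance is decided by your validator, never by how confident the model sounds.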

How to show it: Your Git History

When screening candidates' GitHub profiles, I don't just look at the final code; I look at the journey. A history of continuous improvement, fought-through bug fixes, and visible struggles where you improved a solution over time speaks volumes. It shows you don't just "generate and commit"; you wrestle with the output until it works.

2. Adaptability & Lifelong Un-learning (Fluidity)

The AI Reality:

The half-life of a tech stack is now measured in months. "I am a LangChain Developer" is a dangerous identity when the industry might shift to DSPy, LlamaIndex, or raw API calls next week.

The core skill isn't just learning; it is unlearning: the low ego required to abandon your "expert" knowledge of a legacy tool because a better paradigm arrived this morning. It is Paradigm Agnosticism: using the right tool for the job, because the AI handles the syntax barrier for you.

How to show it: Diverse Projects.

Don't just show me 10 projects in the same stack. Show me that you used Python for training, but perhaps wrapped the inference in Rust or C++ for performance. This proves you aren't married to a syntax; you are married to solving the problem.

3. Scepticism & Precision (The Editor)

AI-"assisted" code, source: https://www.instagram.com/p/DTXKbDWClB4/

The AI Reality:

AI is a confident liar. It will generate a security vulnerability with the same enthusiastic tone as a "Hello World" script.

Scepticism is now a technical skill. The modern engineer must be a ruthless editor, capable of validating output and spotting silent errors. It is the ability to look at "working code" and ask: "Yes, but is it safe? Is it performant? Is it actually what I asked for?"

Precision is the antidote to "Vibe Coding". AI models default to the average: the most common solution found on the internet. Precision is your ability to force the model off the path of "generic average" and towards "specific excellence". It is the difference between hoping the agent follows instructions and architecting a state machine that forces it to. These are the standards that turn a probabilistic toy into a deterministic product.
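
"Force-guiding" an agent with a state machine can be sketched concretely. This is a toy illustration, not a framework: the state names, the `ALLOWED` transition table, and the hard-coded "agent plan" are all hypothetical. The idea is that the agent only proposes transitions; the machine decides deterministically whether to accept them.

```python
# Which transitions the workflow permits, regardless of what the agent "wants".
ALLOWED = {
    "draft":    {"review"},             # a draft must go to review
    "review":   {"draft", "approved"},  # reviewer can bounce or approve
    "approved": {"shipped"},            # only approved work ships
    "shipped":  set(),                  # terminal state
}

def step(state: str, proposed: str) -> str:
    # Accept the agent's proposal only if the machine allows it.
    # Anything else is rejected deterministically, not probabilistically.
    if proposed not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {proposed}")
    return proposed

state = "draft"
for proposed in ["review", "approved", "shipped"]:  # the agent's plan
    state = step(state, proposed)
```

An agent that tries to jump straight from "draft" to "shipped" simply cannot: the guard raises instead of hoping the prompt was persuasive enough.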

How to show it: Unit Tests & Edge Cases

Show me a project where the test suite is larger than the codebase. As AI generates code faster than you can review, you cannot manually audit every line. Your value lies in stress-testing the system - finding the blind spots and logic errors that an optimistic AI will miss. A repository with robust error handling proves you don't trust the "Happy Path".
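
The kind of edge-case probing described above can be tiny. In this sketch, `apply_discount()` is a hypothetical function an AI might generate; the checks below are the boundary and invalid-input cases an optimistic "happy path" review would skip.

```python
def apply_discount(price: float, percent: float) -> float:
    # A hypothetical AI-generated function, with the input validation
    # a sceptical reviewer would insist on.
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Happy path: what the AI was "asked for".
assert apply_discount(100.0, 25.0) == 75.0

# Edge cases: the boundaries.
assert apply_discount(100.0, 0.0) == 100.0    # no discount
assert apply_discount(100.0, 100.0) == 0.0    # full discount

# Invalid input must be rejected loudly, not silently "handled".
for bad in [(-1.0, 10.0), (100.0, -5.0), (100.0, 150.0)]:
    try:
        apply_discount(*bad)
        raise AssertionError("invalid input was silently accepted")
    except ValueError:
        pass
```

A suite like this, scaled up, is exactly the "test suite larger than the codebase" signal: it documents what you refuse to trust.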

4. Teamwork & Empathy (Alignment > Output)

The AI Reality:

AI lets us work faster, and increasingly in isolation. A single engineer can now generate the output of three. The danger is that we build faster in the wrong direction.

Teamwork in 2026 is about Alignment and Communication. Can you explain the context to the team (and the AI) clearly? Can you write the documentation that saves your teammate 4 hours of debugging? If you can't articulate your intent clearly, the AI will build the wrong thing, and your team will be left cleaning up the mess.

How to show it: Documentation & Readability

You prove this by how you treat the stranger reading your code. I look at your README.md, school project reports, and public blogs. Do they explain why this exists? Do they explain how to run it simply? Helping others build upon your work is the ultimate sign of a team player in an async, remote world.

Note

Teamwork is Not Just "Good Communication"

I often see these two terms conflated in job descriptions, but they are not the same. Most interviews are heavily biased towards verbal fluency: how well you can explain a concept in a high-pressure room. This overlooks a huge portion of engineering reality. (IMO, the best engineers are often introverts who spend most of their time listening and thinking.)

An introvert may not dominate the whiteboard discussion, but they might write the critical document that clarifies the architecture for everyone. They might say less in meetings, but their code reviews are precise, constructive, and catch issues before production.

True teamwork extends far beyond chatting:

  • Empathy in Code: Delivering work that is easy for others to maintain, not just to show off.
  • Reliability: Being the person who delivers what they promised, so others aren't blocked.
  • Documentation: Respecting your team's time by writing clear READMEs and async updates, reducing the need for lengthy meetings.
  • Supportiveness: Proactively spotting when a teammate is stuck and offering a hand without being asked.

Don't mistake a quiet candidate for a poor team player. In a remote-first, async world, the ability to type a clear, critical thought is often more valuable than the ability to fill the silence.

5. Ownership & Liability (The Approver)

The AI Reality:

The AI is not an employee. It has no liability. If you copy-paste code that leaks customer data, you leaked the data.

"The AI wrote it" is the new "It works on my machine" - it is not a valid excuse. Ownership means signing your name to the machine's work and accepting the consequences.

How to show it: Deployed, Maintained Projects

Most portfolios are graveyards of half-finished tutorials. Ownership means you shipped it. I look for a project that is actually deployed, has a version number higher than 0.1, and has commits spanning months.

6. Engineering Judgement (The Filter)

The AI Reality:

AI can offer you ten different ways to build a backend in seconds. A junior picks the fanciest one; a senior picks the boring one that solves the problem.

Judgement is the Opportunity Cost calculator. It is the ability to say "No" to complexity, even when the AI makes complexity easy to generate.

How to show it: "Alternatives Considered" (ADRs)

In your portfolio, write about your decisions. Don't just show me the code; tell me why you chose that specific library. Did you stick to a monolith because microservices were overkill? Showing that you can evaluate trade-offs is the hallmark of the "Orchestrator" role.

Tip

Including a "Known Limitations" or "Future Improvements" section shows maturity. It proves you understand where your solution breaks and that you have the judgement to ship it anyway because it meets the current requirements.

Advice for Beginners: Output > Input

To the students and freshers reading this: you do not need to impress interviewers with a complex, "production-level" architecture in which you can justify only 20% of the trade-offs.

Those systems are not battle-tested, and you aren't using them every day. Following a complex tutorial often gives you the wrong impression of what a "production" solution is. In the real world, we avoid complexity at all costs. We only add it if the business requirements absolutely dictate it.

Tip

Suggested reading: Clean Architecture by Uncle Bob

My advice:

  • DON'T: Build a complex, fully-fledged microservice for "something AI" that no one uses (those are likely clones from 1,000+ tutorials).
  • DO: Build a simple, boring tool that you/your friends/your family actually use and continuously improve.

For example: realising that a hard-scheduled 07:00 Mon-Fri alarm is not flexible enough, you build a simple clock that syncs with your University timetable or Google Calendar and wakes you up only on the days you actually have classes. Be original. Don't copy-paste ideas.

Show a GitHub repo with a thoughtful README.md and a full commit history - not just an "Initial commit" that was vibe-coded in an hour. If you use it every day, you'll find bugs, you'll fix them, and you'll improve it.

This shows commitment to the output (solving an actual problem), not just the input (technical jargon).

Always remember: quality > quantity.


Credit: My thoughts & drafts + Gemini 3 for polishing + Nano Banana for cover image