Hiring in the Age of AI


It’s not a secret that Software Engineering as we know it is changing. The reason is simple: code has become a commodity. With the rise of AI, generating code is cheap, just a prompt away. This means that the value of a software engineer lies less in their ability to write code quickly and more in their ability to distinguish good code from bad, design systems, and solve complex problems.

You might not think so, and you might not like it, but it’s the reality we are living in. It’s a reality that requires us to change: not only the way we work, but also the way we hire. And that requires careful thinking and a lot of experimentation.

At Lendable we are doing this exercise, and it’s not an easy one. The reason is that we are in uncharted territory: AI has forced us to rethink our hiring process because of the new challenges it presents. But before I dive into my thoughts on what hiring in the age of AI should look like, I want to spend some time on two important things: first, describing the goal, the north star, of a hiring process; and second, the problems AI has brought into today’s world that affect the path to that north star.

I think once these two things are clear, we can start thinking about how to design a hiring process that can help us achieve our goal in the best possible way, taking into account the new challenges that AI has brought into the spotlight.

The Goal of Hiring

The goal of hiring is quite simple for me: to get the best talent as fast as possible. This is by no means a complete definition of hiring. There are many other important things at play (like cultural fit, diversity, etc.) that I won’t discuss here (otherwise this article would be way too long), but I think that getting the best talent quickly is the core of it.

I don’t think I need to make a case for why we need the best talent. It stands to reason that, as a hiring manager, or even as a software engineer involved in the hiring process, you want to work with the best people. What’s less clear is who the best people are. That is, of course, quite subjective and varies from company to company. For simplicity, suffice it to say that we all want to work with smart, talented, and motivated people: people who can help us achieve our goals and make our business successful. But the specifics of what that entails must be defined by each company.

The other important factor is speed. Hiring is a market, and like any market, it is governed by the rules of supply and demand. The best talent is in high demand, and if you take too long to hire them, they might get hired by someone else. So the speed with which you execute your hiring process directly affects the quality you can attain. This is especially true in the current market, where recent layoffs have made available a bigger pool of talent than ever, and where AI-boosted productivity has enabled companies to start more initiatives in parallel, consequently requiring more talent.

The Problem

At Lendable, we have always used the classic three-step process for hiring software engineers: a screening call, a technical interview based on a take-home test, and a cultural interview. This process served us well for a long time, but we have come to realize that it is no longer fit for the current market.

The issues with this process are simple. First, it is too slow. The take-home test takes a lot of time to complete, and coordinating three rounds of interviews can take weeks. We also review each take-home submission carefully, which consumes precious engineering time, and we have to do it for a large number of candidates. With Lendable becoming an increasingly attractive company, we carry quite a sizeable backlog of candidates, which in turn means we lose many good candidates to companies that move faster.

Second, the take-home test has ceased to be a good predictor of the quality of the candidate. With the rise of AI, candidates can now use tools like Claude or Copilot to help them, which means the test is no longer a good indicator of their skills. I’ve done this exercise myself, and I can tell you that with the right prompting, you can get a very good solution to our take-home test in a matter of minutes. So we are not really assessing the candidate’s skills, but rather their ability to use AI tools, which is not what we originally set out to measure when we designed the test.

Third, as I mentioned before, coding has become a commodity. The value of a software engineer lies less in their ability to write good code fast and more in their ability to instruct a machine to write good code. Writing the code is cheap, but those who have been doing this for years know that coding was always just a means to an end: the real value is in the design of the system, in problem solving, in the ability to break a problem into smaller pieces, and in the ability to communicate effectively with the rest of the team. With coding out of the equation, we need to focus on these other skills, which are harder to assess with a take-home test.

A New Approach

Given all this, the question becomes: how do we design a hiring process that gets us the best talent as fast as possible, taking into account the new challenges that AI has brought into the spotlight? Let me start by saying that I don’t have a definitive answer. This is something we are still experimenting with at Lendable, and something the whole industry is still trying to figure out. But I can share some personal ideas, the fruit of the trials we are running and the conversations we are having.

First, I think there is still room for a coding exercise, but it should be done differently. Before you roll your eyes, let me explain. A coding exercise can still be a good way to assess a candidate’s skills. After all, designing systems, delivering value to customers, and debugging and fixing issues are still about writing and reading code, even if the way we write and read that code has changed.

The key is to design a coding exercise focused on design and problem solving rather than on implementing a particular algorithm or optimizing something. Our current take-home test does this pretty well, because we ask candidates to focus on the domain and the design of the system rather than on implementation details. However, because candidates can now use AI tools, they can in theory accomplish much more with the same test than they could before. That means we could increase the scope of the test to cover more of the things we want to assess, introducing more complex design problems.

Or we could go the other way around. There is now enough time to make the take-home test part of a live coding interview, letting the candidate use a coding agent so they can produce working code quickly and with little effort. We could evaluate how they reason through the instructions they give the agent, and how they judge whether the agent has done something wrong, or something that could be improved. This would more closely simulate the way we actually work, where we use AI tools day to day and the value lies not in the code itself, but in the design and problem-solving skills of the engineer.

Another thought: if candidates are using (and are allowed to use) AI tools for the coding exercise, why can’t we use those very same tools to review it? We can implement a fully automated review process that gives us a good indication of the candidate’s skills without spending hours on each submission. With a prompt serving as the blueprint, we ensure our evaluation criteria are well defined and applied consistently across candidates. We can also use it to give feedback to candidates, something we have struggled with in the past because of the time it takes to review each submission.
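To make this concrete, here is a minimal sketch of what such a prompt-as-blueprint review could look like. Everything here is illustrative: the rubric criteria, the JSON response shape, and the `call_llm` parameter (a stand-in for whatever LLM client you use) are assumptions, not our actual setup.

```python
import json

# Illustrative rubric; real criteria would be defined per role.
RUBRIC = {
    "domain_modelling": "Does the solution model the problem domain clearly?",
    "design": "Are responsibilities well separated and easy to extend?",
    "tests": "Do the tests document behaviour rather than implementation?",
}

def build_review_prompt(rubric: dict, submission: str) -> str:
    """Turn the rubric into a single evaluation prompt.

    The prompt doubles as the blueprint: every candidate is judged
    against the same criteria, and the output can be sent back to
    them as feedback.
    """
    criteria = "\n".join(f"- {name}: {question}" for name, question in rubric.items())
    return (
        "You are reviewing a take-home test submission.\n"
        "Score each criterion from 1 to 5 and justify briefly:\n"
        f"{criteria}\n\n"
        'Respond with JSON only: {"scores": {"<criterion>": <int>}, "feedback": "<string>"}\n\n'
        f"Submission:\n{submission}"
    )

def review(submission: str, call_llm) -> dict:
    """Run one automated review; `call_llm` is a placeholder for any LLM client."""
    raw = call_llm(build_review_prompt(RUBRIC, submission))
    result = json.loads(raw)
    # Guard against the model skipping or inventing criteria.
    assert set(result["scores"]) == set(RUBRIC), "model must score every criterion"
    return result

# Stubbed model response, so the sketch runs without an API key.
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "scores": {"domain_modelling": 4, "design": 3, "tests": 5},
        "feedback": "Clear domain model; the service layer could be thinner.",
    })

verdict = review("...candidate code...", fake_llm)
print(verdict["feedback"])
```

The interesting design choice is that the rubric, the scores, and the feedback all flow from one artifact, the prompt, which is what makes the evaluation both consistent across candidates and cheap to share back with them.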

I know it sounds a bit dystopian: machines writing code that other machines will review. But the machine does not write code out of nothing; it writes code based on the instructions and the design provided by the candidate, which maps remarkably well to the abilities we are trying to assess. The LLM and the agent harness are just tools; the real value is in the candidate’s ability to recognize what matters in a good solution and to encode it in a prompt that helps the machine write good code. This is still a very valuable skill, one that is not easy to acquire and well worth assessing.

Second, for the technical interview, I think we should focus more on design and problem solving than on a coding exercise. Again, with AI we can craft different scenarios for system design interviews, at different levels (IC2, IC3, IC4, etc.) and with different focuses (scalability, reliability, consistency, etc.). We can prepare rich interviewer guides that help interviewers evaluate candidates consistently, and we can also use AI to help with the evaluation itself, producing structured feedback based on the interviewer’s notes and the interview transcript.

Basically, my thoughts revolve around using AI to effectively ensure the quality of the candidates we hire, and doing it in a way that is fast and efficient, without spending hours reviewing each submission or conducting multiple rounds of interviews.

This keeps the quality bar high while letting us move fast and avoid losing good candidates to faster companies. It also helps us focus on the skills that really matter in the age of AI, like design and problem solving, rather than on the ability to write code fast, which, again, is becoming less and less relevant.

Final Thoughts

Bear in mind that my thoughts are very particular to our context at Lendable; what works for us might not work for other companies. For us, code quality is a very important factor, and we want to make sure that the candidates we hire understand how to design maintainable, scalable, and reliable systems, and can write the code that supports those systems when needed.

This is something we are still experimenting with, and I am sure we will make mistakes along the way. But it’s important to start thinking about these things and experimenting with new approaches, because the old ways of hiring are no longer working, and we need to adapt to the new reality if we want to keep attracting and retaining the best talent in the market.