PM Hiring Practices

Recent changes to my hiring process

I've written in the past about hiring the team as a product leader. In this short article, I want to share some practices I've recently adopted and liked in my hiring processes.

Collaboratively developing the job description

It's been my belief for a long time that a good hiring process starts with a really detailed job description that makes it exceptionally clear what kind of profile you are looking for. This is true both externally (ie. for candidates to decide whether or not this is the right opportunity for them) and internally (ie. for the interviewers to evaluate whether a candidate has shown evidence that they can meet the requirements for the job).

One practice that I've adopted more in the last couple of years (I got the idea from some podcast and was a bit shocked that I hadn't thought of this earlier) is developing the job description collaboratively with the team that the product manager will work with. Of course, the final call should always be the hiring manager's – after all, there are considerations beyond the needs of the team the PM will initially work with, such as the composition of the overall product team or a potential future team evolution. However, in terms of the strengths and the profile that would most help the team deliver on their priorities, it is absolutely worth getting the input of the team.

Adding a job-specific screener question

After a quiet period in which I wasn't hiring anyone, I've recently opened up a couple of roles, and my experience has been very different from past hiring processes. These days, I get way more applications than I can realistically handle and process in depth. For my most recent PM job posting, I received 400 applications over the span of a week. There are probably several factors at play here: firstly, the job market in tech is much worse than a couple of years ago, with layoffs still happening across the industry. Secondly, RevenueCat is probably a bit higher profile than it was the last time I was hiring. The last factor is generative AI: ChatGPT and Co. have made it much easier for candidates to apply to many positions by just having ChatGPT answer application questions on their behalf. (One of the RevenueCat application questions is the YC question "What non-computer system have you hacked to your advantage?", and I can't count how many ChatGPT-generated answers about hacking the commute, color-coding office supplies, and inventory management I've read.)

One decent way I've found to address this is to add a more specific screener question. As an example, for my most recent PM position, which was for the team focusing on our web billing product, I added the question "What is your experience in subscription-based monetization and building and optimizing checkout experiences in the mobile app or web billing space?". The answers to this question were much more specific and obviously more related to the candidates' actual experience.

Taking summary notes on candidates from the first look

Also related to the number of candidates I've had to handle in the most recent hiring processes: I've now taken to writing down a very brief note (2-3 lines) about each candidate who passes the resumé screen, right away, that will jog my memory about what I found interesting about them. This saves me from opening up their resumé every time I have another interaction with the candidate – usually, those couple of lines are enough to bring the rest of the profile back to mind.

Sending a pre-brief video

This was the biggest recent level-up. Our VP of Marketing shared the idea in our Slack after coming across it in a LinkedIn post.

I've adopted this, and can only recommend it. Before the first phone screen with a candidate, I send them a 10-15 minute video in which I give an overview of the company, the product and strategy, the team, and the role. Every candidate I've spoken with has liked this a lot because it preempts a lot of questions (and lets them ask better follow-up questions than they could if they were hearing this information for the first time in the first interview). For me, it's been great because, firstly, I don't have to repeat the same spiel in every single interview, and secondly, I get 10-15 additional minutes to collect relevant signal about whether or not the candidate is a good fit for the position.

Asking a hypothetical in the first phone screen

This practice is directly connected to the time saved by sharing the pre-brief video. Previously, I had mostly used the first phone screen to ask candidates about specific points of their experience, how they worked, and who they worked with, and to have them walk me through some example projects. This is a very good way to understand how people actually work, but it can be a bit too easy to polish a few example stories from one's past and pass this stage without giving good evidence of actually being a good fit for the role.

Therefore, I've now experimented with adding one hypothetical question to the first phone screen, in which I give the candidate an actual problem that they would work on and ask them how they would start to think through it, what input they would collect, etc. With this question, I try to assess both their methods (do they know how to think through a product problem and collect the right inputs to make a decision?) and their product sense (do they have a good gut feeling for the direction the decision might take?).

More streamlined ways of assessing interview performance

Right now, I talk to candidates in three stages: the first phone screen, a formal interview (still generally following the product sense interview I've described in the past), and the take-home assignment review. Since my recent job openings attracted so many applications, I've had a lot of interviews in each of these stages (even with the bar set very high to pass each one). Therefore, I needed to streamline my assessment and decision-making process.

For the first phone screen, this essentially takes the form of a hypothesis test. Based on the candidate's resumé and application, I form a hypothesis about why the candidate could be a good fit, as well as areas in which I have doubts about whether they could be. Then, in the interview, I try to validate the fit hypothesis and invalidate the doubt hypothesis. Afterwards, I jot down quick pros and cons of the candidate's profile based on how those hypotheses held up, and pass them only if the pros massively outweigh the cons.

For the product sense interview, for the most part, I know what each of my questions tests, and I judge each answer by how good it is. I then make a list with three sections that I call The Good, The Meh, and The Bad, and each answer gets added under one of these sections. Occasionally, a meta-point is added as well, eg. if the candidate is very rambly and doesn't stay on track (that goes in The Bad), or if the candidate proactively continues the conversation, eg. by bringing up how they would validate their ideas (that goes in The Good). To pass this interview, The Good needs to massively outweigh The Bad.

For the take-home assignment, the key is that before I do the very first assignment review, I write down a list of evaluation questions. Some of them are generic (eg. "Is the candidate's written narrative clear, concise, and precise?"), while others are more specific to the assignment (eg. "Did the candidate show sufficient technical sophistication in their answer, eg. an understanding of HTML/CSS and of how the SDK works?"). As I review more and more assignments, I might add questions to the list, but starting with a rather comprehensive set is what helps avoid biasing the review of the first few responses.


About Jens-Fabian Goetzmann

I am currently Head of Product at RevenueCat. Previously, I worked at 8fit, Microsoft, and BCG, and co-founded two now-defunct startups. More information on my social media channels.
