I’m preparing for product management interviews. I’ll publish some of my case practice here on the blog.
The following is based on a practice question given by Lewis Lin in his book Decode and Conquer.
For this exercise, let’s assume that:
— I’m applying for a senior-level (equivalent to Google’s L6) product management role at a growth-stage startup like Snowflake.
— This is a first-round interview taking place over the phone without video.
— The interviewer is a UX designer. Random gender generator says it’s a “she”.
Interviewer: Now I’d like to give you a case and hear how you work through it.
Me: Sounds good — let’s do it.
What I’m thinking: I have two goals at the outset of any case.
First, I want to understand what type of question I’m getting. Most questions that show up in PM interviews can be classified as one of a small number of types. As soon as I know which type I’m dealing with, I’ll know a lot about how to approach it.
Second, I want to understand the context in which the question is arising. Who am I? Who are you? What’s happened to give rise to this question? If the context is undefined, the problem will be hard to grapple with. If the scenario is clear, I’ll be able to use my actual experience to ground my decisions.
As I listen to the question, this is what I’ll be thinking about.
Interviewer: This is a design critique question. It’s meant to give you a chance to show how you think about design and giving feedback. The question is: what do you think of LinkedIn’s endorsement feature?
What I’m thinking: What’s the question type? She said “design critique”, which is a term of art for designers — it’s a conversation in which a designer presents work in order to receive feedback for the purpose of improving the work. It’s also a common type of question for product managers. I’m pretty sure that’s what this is, but I’ll want to double-check.
What’s the scenario? She didn’t give much context, but knowing that it’s a design critique gives me enough to assume and confirm: rather than asking open-ended questions to get information, I’m going to invent a scenario and ask her to confirm that’s what’s going on. This tactic has two benefits: first, it’s fast and efficient. Second, it decreases the chance we’ll wind up in an area that’s unfamiliar to me.
Me: Great. Let me see if I understand the question. You said “design critique”, so I’m assuming that this is a feature someone on the team is working on. I’m imagining that we both work at LinkedIn — let’s say you’re a designer and I’m a PM. And you’re asking me for informal feedback on a feature that you’re working on. Is that a fair assumption?
Interviewer: Sure, let’s go with that.
What I’m thinking: Now that I know what kind of question I’m dealing with, I want to make sure I understand the feature we’re talking about. Even though I’m pretty sure I know what she means by “LinkedIn’s endorsement feature”, I’ll double check. This might turn up some useful information, and if I’ve made the wrong assumption, it could save me from a major confusion.
Me: Great. And let me check whether I understand the feature that we’re talking about. Is this the feature that lives on someone’s profile page and says things like “SEO — 18 people have endorsed Matt for this skill”?
Interviewer: Exactly — that’s the one.
What I’m thinking: Now I have all of the context I need to start answering the question. If I’m not immediately sure where to go next, this would be a good time to ask for a moment to think.
Me: Got it. And the question is: let’s do a design critique on that feature. Do you mind if I take a minute or so to gather my thoughts?
Interviewer: Absolutely — go ahead.
What I’m thinking: So how am I going to approach this? Since we’re dealing with a design problem, I’ll want to make sense of who it’s for and what their needs are. For that, I’ll use the SSUN framework. And since I’m giving feedback on an existing solution, I’ll use the Design Scorecard method to structure my critique. That’s going to be my approach: SSUN and Design Scorecard.
Me: Okay. That’s a huge feature — very central to the product. I’d like to do two things. First, since it’s such a central feature, I’d like to walk through an exercise to get a clear user and use case in mind. Then, I’d propose we make a scorecard with two or three design goals and see how it does against those goals. How does that sound?
Interviewer: Sounds good.
What I’m thinking: I’ll work through the SSUN framework starting with Stakeholders. For each section I’ll first brainstorm a number of options, and then I’ll select one. To keep things moving and to stay attuned to the interviewer, I’ll use the ‘assume and confirm’ tactic at each step.
Me: Okay. To make sense of users and needs, I like to use a framework called SSUN — it stands for Stakeholders, Segments, Use cases, and Needs.
Starting with Stakeholders, let’s brainstorm a few. We’ve got:
- Users — the people with LinkedIn profiles
- LinkedIn people — employees on various teams, executives, board, etc.
- Other parties on the platform, like advertisers
We could brainstorm more, but those seem like the big ones.
As far as which one to focus on here, we’re probably most interested in the users, so I’m going to set aside the others for now, and just focus in on the users. Does that sound good?
Interviewer: Yeah, that sounds good.
Me: Okay. Then on to Segments. Within the user stakeholder group, we can sub-divide into a few segments.
LinkedIn is a career marketplace, so the main user segments are going to be:
- People who are trying to show off their skills, and
- People who are trying to find people that have certain skills.
Let’s for now call them “job-seekers” and “employers”.
We could brainstorm more segments, but I think these are the main ones.
Between these, my first thought is that we should focus on the employer side. The reason is that if employers trust and use endorsements, then job-seekers have a strong reason to get and to give endorsements. But if employers are ignoring the feature, then job-seekers are probably going to ignore it too. So in that sense, employers are the linchpin.
Does that sound okay?
Interviewer: Yep, that sounds good.
Me: Great. So next is use cases.
Let’s brainstorm a couple. As an employer, I’ve personally used LinkedIn in two ways:
- One is to search for people.
- The other is to evaluate a candidate who’s applied.
Let’s call those “outbound” and “inbound”.
There are definitely more use cases that we could brainstorm, but those are the big ones. Let’s go with those two for now.
Of those, the one that seems most important here is the one where the employer is trying to evaluate an inbound candidate.
Shall we focus on that one?
Interviewer: Why does that one seem most important?
Me: Well, I think that one gives us the cleanest view of the need we talked about a minute ago. We identified this relationship where if the employer trusts the feature and uses it, then job-seekers will too. The inbound use case will put that trust front and center. The outbound case would touch on that, but it would also bring in additional things related to the mechanics of search.
Interviewer: Makes sense. Let’s go with that.
Me: Great. Then, the last thing is needs. What goals does the user have in this situation? Let’s brainstorm.
- The first thing that comes to mind is something like accuracy of the signal, or trust. Basically, if I’m the employer, I want to know if this candidate is going to be successful in this role. I’m looking for information I can rely on.
- Another thing is speed. I’m looking at lots of candidates, so the faster I can get a signal, the better.
Again we could brainstorm more needs but that feels good for now.
I think the one we want to focus on here is the first one: credibility of the signal. As we said earlier, that one feels like the linchpin.
Does that sound good?
Interviewer: Yeah, that makes sense.
What I’m thinking: Now I want to package all that work up with a neat user story. That’ll help us to remember what we’re working with in the next stage.
Me: Great. So if we put all of that into a user story, we have something like, “As an employer evaluating a candidate for a role, I want credible signals about this person’s skills.”
We could obviously go down different branches of that tree to make stories for the other segments et cetera. But for now let’s just focus on that one.
What I’m thinking: Now I’ve completed the SSUN framework, so I have a clear idea of who we’re designing for and what their problem is. My next goal is to set up the context for a good, disambiguated conversation about design — one that might result in useful feedback, and that will give us lots of footholds for a well-structured discussion.
Me: With that user story in mind, let’s talk about what’s working and not working for this feature.
I’d propose we start by making a scorecard. It’s hard to talk about whether something is successful if you haven’t said what the goals are.
Interviewer: Sure, sounds good.
Me: So to make a scorecard, let’s agree on 2-3 design criteria, and then for each criterion we’ll give three responses:
- A 1-5 rating (basically a Likert-scale rating; this gives us an apples-to-apples comparison.)
- One or two things that are working well.
- And one or two things that aren’t working well.
We can pick any design goals we want, but I’ve found it useful to say that good design is useful, easy, and honest. How do those three sound?
Interviewer: Sure, sounds good.
Me: Great. If this were real life, I’d suggest that we both make a scorecard, fill it out, and then discuss. But since I’m in the hot seat here, I’ll just do one and talk aloud as I go.
Interviewer: Sounds good.
Me: First, is it useful? I’d give it a 2 out of 5 on this. What’s working well is that it’s super easy to use. What’s not working well is that I don’t trust the information — I think it’s too easy to game.
Next, is it easy? On this one I’d give it a 5 out of 5. What’s working well is that it’s structured and consistent — it’s really easy to pick up at a glance. If I had to stretch and name something that’s not working well, maybe I’d point out that it still requires me to make some kind of inference about how expert it’s saying this person is. What does it mean that 12 people endorsed him for a skill?
Last, is it honest? On this one I’d give it a 2 out of 5. This goes back to what we said before. What’s working well is that it leaves the endorsement up to real people — so it’s as honest as those people are. What’s not working as well is that it projects a level of confidence about these endorsements that I’m not sure is warranted. There’s too much incentive, for too little cost, to game the system.
So it looks like that’s a 9 out of 15. Overall I think this feature has a lot of potential — but the trust issue is the main holdup for me right now.
Interviewer: Awesome! Thanks. Now let’s move on to…