Do you like LinkedIn’s endorsements feature?

I’m preparing for product management interviews. I’ll publish some of my case practice here on the blog.

The following is based on a practice question given by Lewis Lin in his book Decode and Conquer.

For this exercise, let’s assume that:
— I’m applying for a senior-level (equivalent to Google’s L6) product management role at a growth-stage startup like Snowflake.
— This is a first-round interview taking place over the phone without video.
— The interviewer is a UX designer. Random gender generator says it’s a “she”.

Interviewer: Now I’d like to give you a case and hear how you work through it.

Me: Sounds good — let’s do it.

What I’m thinking: I have two goals at the outset of any case.

First, I want to understand what type of question I’m getting. Most questions that show up in PM interviews can be classified as one of a small number of types. As soon as I know which type I’m dealing with, I’ll know a lot about how to approach it.

Second, I want to understand the context in which the question is arising. Who am I? Who are you? What’s happened to give rise to this question? If the context is undefined, the problem will be hard to grapple with. If the scenario is clear, I’ll be able to use my actual experience to ground my decisions.

As I listen to the question, this is what I’ll be thinking about.

Interviewer: This is a design critique question. It’s meant to give you a chance to show how you think about design and giving feedback. The question is: what do you think of LinkedIn’s endorsement feature?

What I’m thinking: What’s the question type? She said “design critique”, which is a term of art for designers — it’s a conversation in which a designer presents work in order to receive feedback for the purpose of improving the work. It’s also a common type of question for product managers. I’m pretty sure that’s what this is, but I’ll want to double-check.

What’s the scenario? She didn’t give much context, but knowing that it’s a design critique gives me enough to assume and confirm: rather than asking open-ended questions to get information, I’m going to invent a scenario and ask her if we can assume that that’s what’s going on. This tactic has two benefits: first, it’s fast and efficient. Second, it decreases the chance we’ll wind up in an area that’s unfamiliar to me.

Me: Great. Let me see if I understand the question. You said “design critique”, so I’m assuming that this is a feature someone on the team is working on. I’m imagining that we both work at LinkedIn — let’s say you’re a designer and I’m a PM. And you’re asking me for informal feedback on a feature that you’re working on. Is that a fair assumption?

Interviewer: Sure, let’s go with that.

What I’m thinking: Now that I know what kind of question I’m dealing with, I want to make sure I understand the feature we’re talking about. Even though I’m pretty sure I know what she means by “LinkedIn’s endorsement feature”, I’ll double check. This might turn up some useful information, and if I’ve made the wrong assumption, it could save me from a major confusion.

Me: Great. And let me check whether I understand the feature that we’re talking about. Is this the feature that lives on someone’s profile page and says things like “SEO — 18 people have endorsed Matt for this skill”?

Interviewer: Exactly — that’s the one.

What I’m thinking: Now I have all of the context I need to start answering the question. If I’m not immediately sure where to go next, this would be a good time to ask for a moment to think.

Me: Got it. And the question is: let’s do a design critique on that feature. Do you mind if I take a minute or so to gather my thoughts?

Interviewer: Absolutely — go ahead.

What I’m thinking: So how am I going to approach this? Since we’re dealing with a design problem, I’ll want to make sense of who it’s for and what their needs are. For that, I’ll use the SSUN framework. And since I’m giving feedback on an existing solution, I’ll use the Design Scorecard method to structure my critique. That’s going to be my approach: SSUN and Design Scorecard.

Me: Okay. That’s a huge feature — very central to the product. I’d like to do two things. First, since it’s such a central feature, I’d like to walk through an exercise to get a clear user and use case in mind. Then, I’d propose we make a scorecard with two or three design goals and see how it does against those goals. How does that sound?

Interviewer: Sounds good.

What I’m thinking: I’ll work through the SSUN framework starting with Stakeholders. For each section I’ll first brainstorm a number of options, and then I’ll select one. To keep things moving and to stay attuned to the interviewer, I’ll use the ‘assume and confirm’ tactic at each step.

Me: Okay. To make sense of users and needs, I like to use a framework called SSUN — it stands for Stakeholders, Segments, Use cases, and Needs.

Starting with Stakeholders, let’s brainstorm a few. We’ve got:

  • Users
  • LinkedIn people — employees on various teams, executives, board, etc
  • Other parties on the platform like advertisers

We could brainstorm more, but those seem like the big ones.

As far as which one to focus on here, we’re probably most interested in the users, so I’m going to set aside the others for now, and just focus in on the users. Does that sound good?

Interviewer: Yeah, that sounds good.

Me: Okay. Then on to Segments. Within the user stakeholder group, we can sub-divide into a few segments.

LinkedIn is a career marketplace, so the main user segments are going to be:

  • People who are trying to show off their skills, and
  • People who are trying to find people that have certain skills.

Let’s for now call them “job-seekers” and “employers”.

We could brainstorm more segments, but I think these are the main ones.

Between these, my first thought is that we should focus on the employer side. The reason is that if employers trust and use endorsements, then job-seekers have a strong reason to get and to give endorsements. But if employers are ignoring the feature, then job-seekers are probably going to ignore it too. So in that sense, employers are the linchpin.

Does that sound okay?

Interviewer: Yep, that sounds good.

Me: Great. So next is use cases.

Let’s brainstorm a couple. As an employer, I’ve personally used LinkedIn in two ways:

  • One is to search for people.
  • The other is to evaluate a candidate who’s applied.

Let’s call those “outbound” and “inbound”.

There are definitely more use cases that we could brainstorm, but those are big ones. Let’s go with those two for now.

Of those, the one that seems most important here is the one where the employer is trying to evaluate an inbound candidate.

Shall we focus on that one?

Interviewer: Why does that one seem most important?

Me: Well, I think that one gives us the cleanest view of the need we talked about a minute ago. We identified this relationship where if the employer trusts the feature and uses it, then job-seekers will too. The inbound use case will put that trust front and center. The outbound case would touch on that, but it would also bring in additional things related to the mechanics of search.

Interviewer: Makes sense. Let’s go with that.

Me: Great. Then, the last thing is needs. What goals does the user have in this situation? Let’s brainstorm.

  • The first thing that comes to mind is something like accuracy of the signal, or trust. Basically, if I’m the employer, I want to know if this candidate is going to be successful in this role. I’m looking for information I can rely on.
  • Another thing is speed. I’m looking at lots of candidates, so the faster I can get a signal, the better.

Again we could brainstorm more needs but that feels good for now.

I think the one we want to focus on here is the first one: credibility of the signal. As we said earlier, that one feels like the linchpin.

Does that sound good?

Interviewer: Yeah, that makes sense.

What I’m thinking: Now I want to package all that work up with a neat user story. That’ll help us to remember what we’re working with in the next stage.

Me: Great. So if we put all of that into a user story, we have something like, “As an employer evaluating a candidate for a role, I want credible signals about this person’s skills.”

We could obviously go down different branches of that tree to make stories for the other segments et cetera. But for now let’s just focus on that one.

What I’m thinking: Now I’ve completed the SSUN framework, so I have a clear idea of who we’re designing for and what their problem is. My next goal is to set up the context for a good, disambiguated conversation about design — one that might result in useful feedback, and that will give us lots of footholds for a well-structured discussion.

Me: With that user story in mind, let’s talk about what’s working and not working for this feature.

I’d propose we start by making a scorecard. It’s hard to talk about whether something is successful if you haven’t said what the goals are.

Sound good?

Interviewer: Sure, sounds good.

Me: So to make a scorecard, let’s agree on 2-3 design criteria, and then for each criterion we’ll give three responses:

  • A 1-5 rating (basically a Likert-scale rating, which gives us an apples-to-apples comparison).
  • One or two things that are working well.
  • And one or two things that aren’t working well.

We can pick any design goals we want, but I’ve found it useful to say that good design is useful, easy, and honest. How do those three sound?

Interviewer: Sure, sounds good.

Me: Great. If this were real life I’d suggest that we both make a scorecard and fill it out, and then discuss. But since I’m in the hot seat here I’ll just do one and talk aloud as I go.

Interviewer: Sounds good.

Me: Okay.

First, is it useful? I’d give it a 2 out of 5 on this. What’s working well is that it gives me an at-a-glance picture of this person’s skills. What’s not working well is that I don’t trust the information — I think it’s too easy to game.

Next, is it easy? On this one I’d give it a 5 out of 5. What’s working well is that it’s structured and consistent — it’s really easy to pick up at a glance. If I had to stretch and name something that’s not working well, maybe I’d point out that it still requires me to make some kind of inference to figure out how much of an expert it’s saying this person is. What does it mean that 12 people endorsed him for that skill?

Last, is it honest? On this one I’d give it a 2 out of 5. This goes back to what we said before. What’s working well is that it leaves the endorsement up to real people — so it’s as honest as those people are. What’s not working as well is that it projects a level of confidence about these endorsements that I’m not sure is warranted. There’s too much incentive, for too little cost, to game the system.

So it looks like that’s a 9 out of 15. Overall I think this feature has a lot of potential — but the trust issue is the main holdup for me right now.

Interviewer: Awesome! Thanks. Now let’s move on to…

Design Scorecard: a framework for giving feedback

This is a framework to use when you’re looking at a proposed solution to a well-defined design problem, and your goal is to provide feedback so that the solution can be improved. A typical scenario would be a design critique meeting in which the designer in charge of a problem is showing recent work and asking for feedback.

Best practices

Giving effective design feedback is an art. This method will help you with three best practices:

  • Identify design goals. If you don’t have clear goals, it’s very hard to evaluate whether a design is successful. Successful at what?
  • Get clear on the goals before you begin to evaluate the solution. This will make the process feel more objective and the ensuing conversation more productive.
  • Name what is working in addition to what isn’t. On a practical level, this will help to ensure that good stuff doesn’t get forgotten in the next iteration. On an emotional level, this positive reinforcement is like wind in the sail for the people doing the hard work of improving the feature.

First, select your design criteria.

Two or three is typically a good number. Fewer than two, and you’ll tend to lump everything into one bucket. More than three, and the process will tend to become unwieldy.

Industrial designer Dieter Rams has a famous list of ten design principles that you might choose from. According to Rams, good design is:

  1. Innovative
  2. Useful
  3. Aesthetic
  4. Makes a product understandable
  5. Unobtrusive
  6. Honest
  7. Long-lasting
  8. Thorough down to the last detail
  9. Environmentally friendly
  10. As little design as possible.

Second, evaluate the design.

For each of your agreed-upon criteria, give three responses:

  • A numerical rating on a 1-5 Likert Scale. This forces you to quantify your feedback and gives designers an apples-to-apples comparison.
  • One or two things that are working. This gives the designer some positive reinforcement and ideas for what to build on.
  • One or two things that could be improved. Since the goal is to improve, this is the meat of the critique.

Visualized, your scorecard will look like this:

Criterion | Likert rating | What’s working | What could be improved
Useful | 2 | Gives me an at-a-glance picture of this person’s skills | I’m not sure that I can trust the information
Easy | 5 | Fits really easily with pre-existing mental models: one person endorsing another… and many people endorsing one person. | I can see how many people endorsed, but I still have to make an inference about what that means. Can we give a takeaway metric?
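
If it helps to make the structure concrete, here’s a minimal sketch of a scorecard as a data structure. This is purely illustrative Python (the class, field names, and example entries are mine, pulled from the case above), not part of any real tooling.

```python
from dataclasses import dataclass

@dataclass
class CriterionScore:
    """One row of the Design Scorecard: a criterion plus the three responses."""
    criterion: str      # e.g. "Useful", "Easy", "Honest"
    rating: int         # 1-5 Likert rating, for an apples-to-apples comparison
    working: list[str]  # one or two things that are working
    improve: list[str]  # one or two things that could be improved

def total(rows: list[CriterionScore]) -> tuple[int, int]:
    """Return (score, maximum possible), e.g. (9, 15) for three criteria."""
    return sum(r.rating for r in rows), 5 * len(rows)

# The endorsements critique from the case above, as data.
scorecard = [
    CriterionScore("Useful", 2,
                   ["At-a-glance picture of the person's skills"],
                   ["Hard to trust the information; too easy to game"]),
    CriterionScore("Easy", 5,
                   ["Structured and consistent; easy to read at a glance"],
                   ["Still requires an inference about what N endorsements mean"]),
    CriterionScore("Honest", 2,
                   ["Endorsements come from real people"],
                   ["Projects more confidence than the signal warrants"]),
]

print(total(scorecard))  # (9, 15)
```

In a real critique you’d have each reviewer fill one of these out and then compare rows; the point is just that every criterion carries a rating plus both kinds of feedback.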

How to frame a product design problem: the SSUN framework

I’m preparing for product management interviews. Like all PMs I love frameworks, and a fun thing about interview prep is that I get to quickly try out lots of frameworks and even invent new ones where what’s out there isn’t working for me.

A popular framework for product design cases is Lewis Lin’s CIRCLES Method. In my experience it’s been helpful as a starting point, but for a few reasons I’ve found it hard to use.

So I’ve been tinkering. I’m attracted to the idea of more modular frameworks that can be recombined as needed for a variety of cases. SSUN has a narrower scope than CIRCLES, and I’ve been finding it quite useful for its intended scope.

Here’s the idea.

When you’re facing a product design problem (either in an interview or on the job) you need to separate the problem from solutions. First get clear on exactly what the problem is — who you’re solving for, what their needs are, who else is affected and what their needs are, and who you’re ignoring (for now). Get that clear in mind before attempting to create or evaluate solutions.

SSUN is designed to systematically make sense of the problem space. It stands for Stakeholders, Segments, Use cases, Needs.

The way to approach it is from left to right.

If you do the whole thing you’ll form a tree that looks something like this: each stakeholder branches into segments, each segment into use cases, and each use case into needs.

But in an interview, you won’t have time to flesh out the whole tree. You’ll need to focus. The way to do that is to make each of the four steps a two-stage process: first brainstorm a list (diverge), then prioritize and select one (converge).

You’ll wind up with a tree that looks more like this: a single option selected at each level, with the unselected branches set aside, ending in a single need (call it “Need 1”).

“Need 1” is both well-defined enough and narrow enough to tackle in the space of a single interview question.

As an example, let’s say you’re faced with a product design prompt like this:

How would you improve LinkedIn’s endorsements feature?

After you’ve confirmed with the interviewer that you know what the endorsements feature is, you can apply the SSUN framework to get clear on who you’re designing for and what their goal is. That might look like this:
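
As a rough sketch (purely illustrative Python, using the options and selections from the case above; the structure is the point, not the code):

```python
# Each SSUN step: brainstorm a few options (diverge), then pick one (converge).
ssun = {
    "Stakeholders": {
        "options": ["Users", "LinkedIn employees / execs / board", "Advertisers"],
        "selected": "Users",
    },
    "Segments": {
        "options": ["Job-seekers", "Employers"],
        "selected": "Employers",
    },
    "Use cases": {
        "options": ["Outbound (searching for people)", "Inbound (evaluating an applicant)"],
        "selected": "Inbound (evaluating an applicant)",
    },
    "Needs": {
        "options": ["Credible signal about skills", "Speed of evaluation"],
        "selected": "Credible signal about skills",
    },
}

# Reading off the selected path gives you the user story from the case:
# "As an employer evaluating a candidate, I want credible signals about this person's skills."
path = [step["selected"] for step in ssun.values()]
print(" -> ".join(path))
```

Each level keeps its brainstormed options around (so you can revisit them later), but only one selection carries forward into the next level.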

Now you’re clear on the problem, and you can move into solutions.

What’s a linchpin?

I always assumed that a linchpin was something like the brooch that holds a cloak together at the chest or neck. Which kinda makes sense given the colloquial usage in which a linchpin is the thing that holds it all together — the sine qua non.

[Image: a brooch holding a cloak closed]
I thought this was a linchpin. TIL it’s not.

But it turns out that a linchpin is something different.

A linchpin is the pin or metal rod that goes through the end of an axle to prevent the wheel from sliding off. Like this:

[Image: a linchpin securing a wheel to the end of an axle]

Jocko Willink: Going through hard things together is what brings people together

I like to find silver linings. This one might be useful next time I’m on a team going through hard stuff.

Going through hard things together is what brings people together. The harder things you go through the tighter the bonds are gonna be.

So if you take the military for example.

The first thing you do is you put them through boot camp. Well that’s hard. You go and you form bonds with other people that went through boot camp. We all have that common bond.

Then you go to airborne school, where you’re gonna jump out of airplanes. And that’s gonna be a little bit of a death-defying thing. And airborne crews are gonna be a little bit tighter.

You go to special operations training and all of a sudden you’ve done something that’s harder than that. And now the bonds are a little bit tighter.

Take that unit and put them in a combat zone and their bonds are gonna be even tighter.

Now you take that combat zone and you make it super intense, and the bonds are gonna be even tighter.

If your team gets through the hard stuff without fracturing, you’re gonna be closer on the other side.

From The Portal s01e06

The ambiguity problem

The ambiguity problem is an obstacle — one of many — to the effective pursuit of truth. 

Humans often disagree. Sometimes the disagreers are people who want to agree, and try to agree, but they fail. This happens a lot. John Nerst joked that 82% of the internet is people arguing, and the rest is cats and porn.

Sometimes when people disagree they’re not really trying to agree. Sometimes they’re trying to win, or to be right. They might actually be averse (consciously or not) to changing their mind.

But sometimes people — or at least parts of them — are genuinely trying to resolve a disagreement. And they just simply fail to do it. 

And sometimes these people are skilled truth-pursuers. Intellectuals. Scientists. Philosophers. Lovers of truth. Sometimes they’re friends and comrades who genuinely respect one another. And they simply fail to agree.

What’s going on in those cases? 

I want to suggest that a large percentage of the time, the problem is semantic ambiguity. Natural language has a nice feature in that it lets us be vague and gestural. It lets us speak in metaphors and allusions. It lets us describe part of a thing while leaving other parts ambiguous. But sometimes we need more precision than that. Sometimes natural language is too vague — sometimes it’s critically vague. This can get us into unproductive disagreements.

Sometimes — perhaps very often — two people who disagree think that they are understanding one another when they are not. They think that one believes p and the other believes not-p, when really, one person is saying that they believe p1, and the other is saying that they believe not-p2, where p1 and p2 are orthogonal. But because they hear “p” and “not-p”, they think they’re understanding one another and having a debate, when really they’re just talking past one another.

I want to give a provisional name to this phenomenon. Let’s say that the ambiguity problem is the propensity for natural language to lead to unproductive disagreement rooted in misunderstanding about the meanings of terms.

David Chalmers has called disagreements of this sort “merely verbal” disputes. In the linked paper he offers a technique for resolving them: the conversants should temporarily taboo the ambiguous term and re-phrase it in less ambiguous terms. 

I have a hunch that a more powerful solution is possible. For people who are genuinely interested in pursuing truth and who are hindered by this ambiguity problem, my provocation is that we can create a tool that:

  1. lets people create bespoke semantic objects disambiguated to the appropriate degree for their desired use case, and
  2. lets them reference these objects unambiguously.

My hunch is that such a tool has the potential to take motivated people a long way towards dissolving the ambiguity problem.  
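
I don’t have a design for such a tool, but as a toy sketch of what those two requirements might look like in code, here’s one shape it could take. Everything below is hypothetical (the names, the fields, the registry), purely to make the provocation concrete.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SemanticObject:
    """A bespoke definition, disambiguated to whatever degree its authors need."""
    term: str        # the surface word, e.g. "productivity"
    definition: str  # the precise sense the conversants agree to use
    id: str = field(default_factory=lambda: str(uuid.uuid4()))  # stable handle

registry: dict[str, SemanticObject] = {}

def define(term: str, definition: str) -> str:
    """Requirement 1: create a bespoke semantic object. Returns its ID."""
    obj = SemanticObject(term, definition)
    registry[obj.id] = obj
    return obj.id

def refer(obj_id: str) -> SemanticObject:
    """Requirement 2: reference the object unambiguously, by ID rather than by the bare word."""
    return registry[obj_id]

# Two conversants pin down which sense of the same word they each mean.
p1 = define("productivity", "output per hour of focused work")
p2 = define("productivity", "progress toward goals one endorses on reflection")
print(refer(p1).definition == refer(p2).definition)  # False: same word, different claims
```

The interesting part isn’t the code; it’s the idea that a conversation could lean on shared, pinned-down definitions instead of bare words.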

Critical thinking and critical feeling

From Eric Weinstein on his podcast episode with Jocko Willink:

Just as it’s important to think critically (to evaluate ideas before accepting them as true), it’s also important to feel critically: to evaluate feelings before accepting the color they put on the world.

Also. There are limits to the value that can be extracted from thinking and feeling. We sometimes say, “you’re overthinking this.” We can also say, “you’re overfeeling this.”