Three types of wanting

If you’re building a product, it better be something that people want. That much is obvious. But it turns out that this concept of “wanting” is enormously complex — and there are lots of ways to misunderstand it. Here are three types of “want”.

Type 1 wanting: the passions

The first kind of desire is the kind that’s more of the body than the mind. It’s more System 1 than System 2. It’s a type of desire that’s not based in instrumental reasoning — “I want X because it’ll help me get Y.” This type of wanting is direct and visceral. You feel it when you’re hungry and you see and smell delicious food. You feel it when you’re engrossed in a novel and you can’t stop reading. You feel it when you’re madly in love in a new relationship.

For those who make products, this is a type of wanting to be cultivated. People aren’t born with passion for your product.

Type 2 wanting: the rational mind

The second type of desire is that of the rational mind. It’s instrumental: Bob wants to get a summer job so that he can buy a car. Bob might not have much passion about the idea of getting a job, but nonetheless he wants to do it for instrumental reasons.

When there’s dissonance between type 1 and type 2 wanting, we have a very interesting human phenomenon that philosophers call akrasia, or ‘weakness of will’. People don’t always do what they think is good for them. And sometimes they do what they think is bad for them. For example: George thinks he should stop watching TV, but it feels so good that he keeps doing it. Or: Jerry thinks that he should eat a salad, but instead he eats Kenny Rogers chicken because it feels so desirable.

Akrasia is especially relevant to product makers in health, wellness, and education. You’re building something that’s supposed to provide your user with some benefit — just the way exercise is supposed to provide benefit. But just like we often don’t exercise when we know we should, we won’t necessarily use a product just because it’s good for us. We need Type 1 wanting as well.

Type 3 wanting: the one who knows better

“You don’t want to do that.”

“I don’t think you want that.”

What do we mean when we say things like this? We’re not talking about type 1 or type 2 wanting. You know better than I do what you want in those senses.

We’re talking about a third sense of wanting: what you should want. It’s what you would want if you knew more. You currently want to open door number 3, but if you knew what was behind the doors, you wouldn’t want it.

Product makers sometimes mess this up when they have a theory about the way the world should be, and they expect that all their users will want the world to be that way too. But if the user doesn’t share the theory, then they probably won’t share the picture of a better world.

Humans are best off when there’s consonance between the three types of wanting. E.g., when it comes to eating sand, I:

  • Type 1 don’t want it
  • Type 2 don’t want it
  • Type 3 (I’m pretty sure) don’t want it

And we run into trouble when there’s dissonance between the three.

The best products are those for which there’s consonance (in the positive direction) between the three types of wanting. When I bought my first pair of AirPods, here’s how it looked:

  • Type 1: Those things are sexy; I want them
  • Type 2: I think it’ll be worth it to go wireless even though they’re expensive
  • Type 3: Two years later I think that was one of the best purchases I ever made.

Four criteria for a good vision

A vision can ignite and align an organization. Here are four criteria by which to evaluate a vision statement:

  1. Does it inspire? A good vision activates people. It’s something that, on Day 1 and Day 1000, can be called into mind to generate motivation to continue.
  2. Does it guide? A good vision has a perspective. On key decisions, people turn to the vision and ask, “which option better satisfies the vision?” As a corollary, a good vision aligns: if everyone in an org is turning to the same vision to guide their decisions, they’ll be making decisions that are aligned with one another.
  3. Is it used? A good vision doesn’t go in one ear and out the other. Employees don’t need to look up the vision statement to remember what it is. A good vision is remembered and used.
  4. Is it accurate? A good vision, when accomplished, leaves you happy. A bad vision, when accomplished, makes you wish you’d aimed somewhere else.

Do you like LinkedIn’s endorsements feature?

I’m preparing for product management interviews. I’ll publish some of my case practice here on the blog.

The following is based on a practice question given by Lewis Lin in his book Decode and Conquer.

For this exercise, let’s assume that:
— I’m applying for a senior-level (equivalent to Google’s L6) product management role at a growth-stage startup like Snowflake.
— This is a first-round interview taking place over the phone without video.
— The interviewer is a UX designer. Random gender generator says it’s a “she”.

Interviewer: Now I’d like to give you a case and hear how you work through it.

Me: Sounds good — let’s do it.

What I’m thinking: I have two goals at the outset of any case.

First, I want to understand what type of question I’m getting. Most questions that show up in PM interviews can be classified as one of a small number of types. As soon as I know which type I’m dealing with, I’ll know a lot about how to approach it.

Second, I want to understand the context in which the question is arising. Who am I? Who are you? What’s happened to give rise to this question? If the context is undefined, the problem will be hard to grapple with. If the scenario is clear, I’ll be able to use my actual experience to ground my decisions.

As I listen to the question, this is what I’ll be thinking about.

Interviewer: This is a design critique question. It’s meant to give you a chance to show how you think about design and giving feedback. The question is: what do you think of LinkedIn’s endorsement feature?

What I’m thinking: What’s the question type? She said “design critique”, which is a term of art for designers — it’s a conversation in which a designer presents work in order to receive feedback for the purpose of improving the work. It’s also a common type of question for product managers. I’m pretty sure that’s what this is, but I’ll want to double-check.

What’s the scenario? She didn’t give much context, but knowing that it’s a design critique gives me enough to assume and confirm: rather than asking open-ended questions to get information, I’m going to invent a scenario and ask her if we can assume that that’s what’s going on. This tactic has two benefits: first, it’s fast and efficient. Second, it decreases the chance we’ll wind up in an area that’s unfamiliar to me.

Me: Great. Let me see if I understand the question. You said “design critique”, so I’m assuming that this is a feature someone on the team is working on. I’m imagining that we both work at LinkedIn — let’s say you’re a designer and I’m a PM. And you’re asking me for informal feedback on a feature that you’re working on. Is that a fair assumption?

Interviewer: Sure, let’s go with that.

What I’m thinking: Now that I know what kind of question I’m dealing with, I want to make sure I understand the feature we’re talking about. Even though I’m pretty sure I know what she means by “LinkedIn’s endorsement feature”, I’ll double check. This might turn up some useful information, and if I’ve made the wrong assumption, it could save me from a major confusion.

Me: Great. And let me check whether I understand the feature that we’re talking about. Is this the feature that lives on someone’s profile page and says things like “SEO — 18 people have endorsed Matt for this skill”?

Interviewer: Exactly — that’s the one.

What I’m thinking: Now I have all of the context I need to start answering the question. If I’m not immediately sure where to go next, this would be a good time to ask for a moment to think.

Me: Got it. And the question is: let’s do a design critique on that feature. Do you mind if I take a minute or so to gather my thoughts?

Interviewer: Absolutely — go ahead.

What I’m thinking: So how am I going to approach this? Since we’re dealing with a design problem, I’ll want to make sense of who it’s for and what their needs are. For that, I’ll use the SSUN framework. And since I’m giving feedback on an existing solution, I’ll use the Design Scorecard method to structure my critique. That’s going to be my approach: SSUN and Design Scorecard.

Me: Okay. That’s a huge feature — very central to the product. I’d like to do two things. First, since it’s such a central feature, I’d like to walk through an exercise to get a clear user and use case in mind. Then, I’d propose we make a scorecard with two or three design goals and see how it does against those goals. How does that sound?

Interviewer: Sounds good.

What I’m thinking: I’ll work through the SSUN framework starting with Stakeholders. For each section I’ll first brainstorm a number of options, and then I’ll select one. To keep things moving and to stay attuned to the interviewer, I’ll use the ‘assume and confirm’ tactic at each step.

Me: Okay. To make sense of users and needs, I like to use a framework called SSUN — it stands for Stakeholders, Segments, Use cases, and Needs.

Starting with Stakeholders, let’s brainstorm a few. We’ve got:

  • Users
  • LinkedIn people — employees on various teams, executives, board, etc
  • Other parties on the platform like advertisers

We could brainstorm more, but those seem like the big ones.

As far as which one to focus on here, we’re probably most interested in the users, so I’m going to set aside the others for now, and just focus in on the users. Does that sound good?

Interviewer: Yeah, that sounds good.

Me: Okay. Then on to Segments. Within the user stakeholder group, we can sub-divide into a few segments.

LinkedIn is a career marketplace, so the main user segments are going to be:

  • People who are trying to show off their skills, and
  • People who are trying to find people that have certain skills.

Let’s for now call them “job-seekers” and “employers”.

We could brainstorm more segments, but I think these are the main ones.

Between these, my first thought is that we should focus on the employer side. The reason is that if employers trust and use endorsements, then job-seekers have a strong reason to get and to give endorsements. But if employers are ignoring the feature, then job-seekers are probably going to ignore it too. So in that sense, employers are the linchpin.

Does that sound okay?

Interviewer: Yep, that sounds good.

Me: Great. So next is use cases.

Let’s brainstorm a couple. As an employer, I’ve personally used LinkedIn in two ways:

  • One is to search for people.
  • The other is to evaluate a candidate who’s applied.

Let’s call those “outbound” and “inbound”.

There are definitely more use cases that we could brainstorm, but those are big ones. Let’s go with those two for now.

Of those, the one that seems most important here is the one where the employer is trying to evaluate an inbound candidate.

Shall we focus on that one?

Interviewer: Why does that one seem most important?

Me: Well, I think that one gives us the cleanest view of the need we talked about a minute ago. We identified this relationship where if the employer trusts the feature and uses it, then job-seekers will too. The inbound use case will put that trust front and center. The outbound case would touch on that, but it would also bring in additional things related to the mechanics of search.

Interviewer: Makes sense. Let’s go with that.

Me: Great. Then, the last thing is needs. What goals does the user have in this situation? Let’s brainstorm.

  • The first thing that comes to mind is something like accuracy of the signal, or trust. Basically, if I’m the employer, I want to know if this candidate is going to be successful in this role. I’m looking for information I can rely on.
  • Another thing is speed. I’m looking at lots of candidates, so the faster I can get a signal, the better.

Again we could brainstorm more needs but that feels good for now.

I think the one we want to focus on here is the first one: credibility of the signal. As we said earlier, that one feels like the linchpin.

Does that sound good?

Interviewer: Yeah, that makes sense.

What I’m thinking: Now I want to package all that work up with a neat user story. That’ll help us to remember what we’re working with in the next stage.

Me: Great. So if we put all of that into a user story, we have something like, “As an employer evaluating a candidate for a role, I want credible signals about this person’s skills.”

We could obviously go down different branches of that tree to make stories for the other segments et cetera. But for now let’s just focus on that one.

What I’m thinking: Now I’ve completed the SSUN framework, so I have a clear idea of who we’re designing for and what their problem is. My next goal is to set up the context for a good, disambiguated conversation about design — one that might result in useful feedback, and that will give us lots of footholds for a well-structured discussion.

Me: With that user story in mind, let’s talk about what’s working and not working for this feature.

I’d propose we start by making a scorecard. It’s hard to talk about whether something is successful if you haven’t said what the goals are.

Sound good?

Interviewer: Sure, sounds good.

Me: So to make a scorecard, let’s agree on 2-3 design criteria, and then for each criterion we’ll give three responses:

  • A 1-5 rating (basically a Likert scale rating; this gives us an apples-to-apples comparison).
  • One or two things that are working well.
  • And one or two things that aren’t working well.

We can pick any design goals we want, but I’ve found it useful to say that good design is useful, easy, and honest. How do those three sound?

Interviewer: Sure, sounds good.

Me: Great. If this was real life I’d suggest that we both make a scorecard and fill it out, and then discuss. But since I’m in the hot seat here I’ll just do one and talk aloud as I go.

Interviewer: Sounds good.

Me: Okay.

Is it useful? I’d give it a 2 out of 5 on this. What’s working well is that it’s super easy to use. What’s not working well is that I don’t trust the information — I think it’s too easy to game.

Next, is it easy? On this one I’d give it a 5 out of 5. What’s working well is that it’s structured and consistent — it’s really easy to pick up at a glance. If I had to stretch and name something that’s not working well, maybe I’d point out that it still requires me to make some kind of inference to figure out how much of an expert it’s saying this person is. What does it mean that 12 people endorsed him for that skill?

Last, is it honest? On this one I’d give it a 2 out of 5. This goes back to what we said before. What’s working well is that it leaves the endorsement up to real people — so it’s as honest as those people are. What’s not working as well is that it projects a level of confidence about these endorsements that I’m not sure is warranted. There’s too much incentive, for too little cost, to game the system.

So it looks like that’s a 9 out of 15. Overall I think this feature has a lot of potential — but the trust issue is the main holdup for me right now.

Interviewer: Awesome! Thanks. Now let’s move on to…

Design Scorecard: a framework for giving feedback

This is a framework to use when you’re looking at a proposed solution to a well-defined design problem, and your goal is to provide feedback so that the solution can be improved. A typical scenario would be a design critique meeting in which the designer in charge of a problem is showing recent work and asking for feedback.

Best practices

Giving effective design feedback is an art. This method will help you with three best practices:

  • Identify design goals. If you don’t have clear goals, it’s very hard to evaluate whether a design is successful. Successful at what?
  • Get clear on the goals before you begin to evaluate the solution. This will make the process feel more objective and the ensuing conversation more productive.
  • Name what is working in addition to what isn’t. On a practical level, this will help to ensure that good stuff doesn’t get forgotten in the next iteration. On an emotional level, this positive reinforcement is like wind in the sail for the people doing the hard work of improving the feature.

First, select your design criteria.

Two or three is typically a good number. Fewer than two, and you’ll tend to lump everything into one bucket. More than three, and the process will tend to become unwieldy.

Industrial designer Dieter Rams has a famous list of ten design principles that you might choose from. According to Rams, good design is:

  1. Innovative
  2. Useful
  3. Aesthetic
  4. Makes a product understandable
  5. Unobtrusive
  6. Honest
  7. Long-lasting
  8. Thorough down to the last detail
  9. Environmentally friendly
  10. As little design as possible.

Second, evaluate the design.

For each of your agreed-upon criteria, give three responses:

  • A numerical rating on a 1-5 Likert Scale. This forces you to quantify your feedback and gives designers an apples-to-apples comparison.
  • One or two things that are working. This gives the designer some positive reinforcement and ideas for what to build on.
  • One or two things that could be improved. Since the goal is to improve, this is the meat of the critique.

Visualized, your score card will look like this:

Criterion | Likert rating | What’s working | What could be improved
Useful | 2 | Gives me an at-a-glance picture of this person’s skills | I’m not sure that I can trust the information
Easy | 5 | Fits really easily with pre-existing mental models: one person endorsing another… and many people endorsing one person. | I can see how many people endorsed, but I still have to make an inference about what that means. Can we give a takeaway metric?
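If you want to keep scorecards consistent from one critique to the next, the structure is simple enough to capture in a few lines of code. Here’s a minimal Python sketch (the class and field names are my own invention, and the example rows reuse the LinkedIn critique from earlier):

```python
from dataclasses import dataclass

@dataclass
class CriterionScore:
    criterion: str    # e.g. "Useful", "Easy", "Honest"
    rating: int       # 1-5 Likert rating
    working: list     # one or two things that are working
    improvable: list  # one or two things that could be improved

    def __post_init__(self):
        if not 1 <= self.rating <= 5:
            raise ValueError("Likert rating must be between 1 and 5")

def total(scores):
    """Sum the ratings, e.g. 9 out of a possible 15 for three criteria."""
    return sum(s.rating for s in scores), 5 * len(scores)

scorecard = [
    CriterionScore("Useful", 2,
                   ["At-a-glance picture of a person's skills"],
                   ["Hard to trust the information"]),
    CriterionScore("Easy", 5,
                   ["Fits pre-existing mental models"],
                   ["Still requires an inference; no takeaway metric"]),
    CriterionScore("Honest", 2,
                   ["Endorsements come from real people"],
                   ["Cheap to game, so confidence is overstated"]),
]

print(total(scorecard))  # → (9, 15)
```

The validation in `__post_init__` just keeps everyone on the same 1-5 scale, which is what makes the totals comparable across critiques.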

How to frame a product design problem: the SSUN framework

I’m preparing for product management interviews. Like all PMs I love frameworks, and a fun thing about interview prep is that I get to quickly try out lots of frameworks and even invent new ones where what’s out there isn’t working for me.

A popular framework for product design cases is Lewis Lin’s CIRCLES Method. In my experience it’s been helpful as a starting point, but for a few reasons I’ve found it hard to use.

So I’ve been tinkering. I’m attracted to the idea of more modular frameworks that can be recombined as needed for a variety of cases. SSUN has a narrower scope than CIRCLES, and I’ve been finding it quite useful for its intended scope.

Here’s the idea.

When you’re facing a product design problem (either in an interview or on the job) you need to separate the problem from solutions. First get clear on exactly what the problem is — who you’re solving for, what their needs are, who else is affected and what their needs are, and who you’re ignoring (for now). Get that clear in mind before attempting to create or evaluate solutions.

SSUN is designed to systematically make sense of the problem space. It stands for Stakeholders, Segments, Use cases, Needs.

The way to approach it is from left to right.

If you do the whole thing you’ll form a tree that looks something like this:

But in an interview, you won’t have time to flesh out the whole tree. You’ll need to focus. The way to do that is to make each of the four steps a two-stage process: first brainstorm a list (diverge), then prioritize and select one (converge).

You’ll wind up with a tree that looks more like this:

“Need 1” is both well-defined enough and narrow enough to tackle in the space of a single interview question.

As an example, let’s say you’re faced with a product design prompt like this:

How would you improve LinkedIn’s endorsements feature?

After you’ve confirmed with the interviewer that you know what the endorsements feature is, you can apply the SSUN framework to get clear on who you’re designing for and what their goal is. That might look like this:

Now you’re clear on the problem, and you can move into solutions.
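The brainstorm-then-select walk through the tree can also be sketched as data. Here’s a minimal Python illustration (the structure and labels are my own, reusing the LinkedIn example; each level records the diverged options and the converged pick):

```python
# Each SSUN level holds the brainstormed options (diverge) and the pick (converge).
ssun = {
    "Stakeholders": {"options": ["users", "LinkedIn employees", "advertisers"],
                     "selected": "users"},
    "Segments":     {"options": ["job-seekers", "employers"],
                     "selected": "employers"},
    "Use cases":    {"options": ["outbound search", "inbound evaluation"],
                     "selected": "inbound evaluation"},
    "Needs":        {"options": ["credible signal", "speed"],
                     "selected": "credible signal"},
}

def selected_path(tree):
    """The converged path from left to right: one pick per level."""
    return [level["selected"] for level in tree.values()]

print(" -> ".join(selected_path(ssun)))
# → users -> employers -> inbound evaluation -> credible signal
```

The printed path is exactly the narrowed branch of the tree — the one user story you carry into the solution stage.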

The ambiguity problem

The ambiguity problem is an obstacle — one of many — to the effective pursuit of truth. 

Humans often disagree. Sometimes the disagreers are people who want to agree, and try to agree, but they fail. This happens a lot. John Nerst joked that 82% of the internet is people arguing, and the rest is cats and porn.

Sometimes when people disagree they’re not really trying to agree. Sometimes they’re trying to win, or to be right. They might actually be averse (consciously or not) to changing their mind.

But sometimes people — or at least parts of them — are genuinely trying to resolve a disagreement. And they just simply fail to do it. 

And sometimes these people are skilled truth-pursuers. Intellectuals. Scientists. Philosophers. Lovers of truth. Sometimes they’re friends and comrades who genuinely respect one another. And they simply fail to agree.

What’s going on in those cases? 

I want to suggest that a large percentage of the time, the problem is semantic ambiguity. Natural language has a nice feature in that it lets us be vague and gestural. It lets us speak in metaphors and allusions. It lets us describe part of a thing while leaving other parts ambiguous. But sometimes we need more precision than that. Sometimes natural language is too vague — sometimes it’s critically vague. This can get us into unproductive disagreements.

Sometimes — perhaps very often — two people who disagree think that they are understanding one another when they are not. They think that one believes p and the other believes not-p, when really, one person is saying that they believe p1, and the other is saying that they believe not-p2, where p1 and p2 are orthogonal. But because they hear “p” and “not-p”, they think they’re understanding one another and having a debate, when really they’re just talking past one another.

I want to give a provisional name to this phenomenon. Let’s say that the ambiguity problem is the propensity for natural language to lead to unproductive disagreement rooted in misunderstanding about the meanings of terms.

David Chalmers has called disagreements of this sort “merely verbal” disputes. In the linked paper he offers a technique for resolving them: the conversants should temporarily taboo the ambiguous term and re-phrase it in less ambiguous terms. 

I have a hunch that a more powerful solution is possible. For people who are genuinely interested in pursuing truth and who are hindered by this ambiguity problem, my provocation is that we can create a tool that:

  1. lets people create bespoke semantic objects disambiguated to the appropriate degree for their desired use case, and
  2. reference these objects unambiguously.

My hunch is that such a tool has the potential to take motivated people a long way towards dissolving the ambiguity problem.  

Disambiguating expected likelihood of discrete outcomes

I’ve been writing about disambiguation techniques. Here’s another one. This one applies when the question cannot be disambiguated into a continuous variable.

Suppose Adam and Bob and a bunch of friends live in a house together. They all enjoy living together and hope to continue for a long time to come. But a complication has emerged. Adam is allergic to dogs, and Bob’s girlfriend has a dog. Bob would like to have his girlfriend come over more — and maybe even move in some day — but it’s hard for her to do that because she has to leave her dog at home.

This poses a risk to Adam and Bob’s goals of living in the house together. If Bob can’t have his girlfriend over, he might have to move out. Or Bob might not move out, but might someday find that his relationship with his girlfriend has been strangled by the fact that she doesn’t spend much time at his house. Or if Bob brings his girlfriend’s dog over anyway, it may cause Adam to have constant allergies, and Adam might have to move out.

There’s also a possibility that some happy compromise can be found — perhaps if the dog is bathed regularly it won’t produce an allergic reaction. Or maybe the dog can be limited to certain areas of the house that Adam doesn’t care to spend time in.

But Adam and Bob find themselves having a hard time talking about the issue. If they were to disambiguate using this technique, they might find out why.

The first step is to list the possible outcomes. They agree that there are 4 possible outcomes worth discussing. Next, they each record their likelihood estimate for each outcome.

Here’s what they come up with.

Outcome | Adam’s expected likelihood of this outcome | Bob’s expected likelihood of this outcome
Bob moves out | 25% | 10%
Bob stays but his relationship with his girlfriend is strangled | 50% | 35%
Adam moves out | 20% | 5%
A happy solution is found whereby all of the above are false | 5% | 50%

Finally, they compare results. Looking at the last row, they see a huge delta on their expected likelihood of a happy solution. Now it makes sense to both of them that Bob has been excited to talk about solutions, while to Adam this has felt wrong.

From here, with a greater understanding of one another’s point of view, they can further disambiguate to find out why they have such different likelihood estimates for this outcome. They’re doing productive disagreement.
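The comparison step lends itself to a quick computation: line up the two sets of estimates and sort by the size of the disagreement. Here’s a minimal Python sketch using the numbers from the table above (the variable names are my own):

```python
# Each entry: (outcome, Adam's estimate, Bob's estimate), as percentages.
estimates = [
    ("Bob moves out",                     25, 10),
    ("Bob stays, relationship strangled", 50, 35),
    ("Adam moves out",                    20,  5),
    ("A happy solution is found",          5, 50),
]

# Sanity check: each person's estimates should cover all the outcomes.
assert sum(a for _, a, _ in estimates) == 100
assert sum(b for _, _, b in estimates) == 100

# Sort by the size of the disagreement, biggest delta first.
by_delta = sorted(estimates, key=lambda row: abs(row[1] - row[2]), reverse=True)
outcome, adam, bob = by_delta[0]
print(f"Biggest disagreement: {outcome} (Adam {adam}%, Bob {bob}%)")
# → Biggest disagreement: A happy solution is found (Adam 5%, Bob 50%)
```

The 45-point delta on the last row jumps out immediately, which is exactly the conversation Adam and Bob most need to have.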

Comparing disambiguated views

I’ve been writing about disambiguation and the high-dimensionality of superficially low-dimensional phenomena like abortion. 

The continuum from pro-life to pro-choice can be visualized as a single dimension. 

But it’s probably more helpful to think of one’s position on abortion as high-dimensional. It’s composed of your views on questions like “when does life begin?” and “how much should we value the preferences of the would-be mother?” We can say that your positions on those dimensions project to a position on the single pro-life/pro-choice dimension. 
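One toy way to model the projection is as a weighted combination of the underlying dimensions. In this Python sketch (the weights and scores are invented purely for illustration), two people with very different underlying views project to the same point on the single axis:

```python
weights = [0.5, 0.5]  # hypothetical: how much each dimension contributes

def project(scores, weights):
    """Project a high-dimensional position onto the single summary axis."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Each dimension scored from 0.0 to 1.0, e.g. "when does life begin?"
# and "how much should we value the mother's preferences?"
alice = [0.75, 0.25]
bob   = [0.25, 0.75]  # the reverse profile

# Different high-dimensional views, identical low-dimensional projection.
print(project(alice, weights), project(bob, weights))  # → 0.5 0.5
```

The projection throws away exactly the information that distinguishes the two views — which is why comparing people only on the single axis so often misleads.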

We can disambiguate one person’s views this way, and we can also disambiguate a second person’s views this way. And we can map both of their views onto a single graph. 

For many issues, this would go a long way towards getting two people to understand one another and to disagree productively rather than unproductively. 

We could also compare one person’s actual views with what the other person thinks that person’s views must be. This type of misunderstanding/ambiguity is responsible for a lot of unproductive disagreement. 

Disambiguation is disentangling dimensions

A few days ago I wrote about Eric Weinstein’s discussion of the “middle” position on political issues. He used abortion as an example. We have the terms “pro-life” and “pro-choice”, and most people’s position is probably somewhere in the middle.

Weinstein said something interesting about this so-called “middle”. 

I don’t think it’s at the middle. …. I think that there’s this very flat, low dimensional plane where these positions [pro-life and pro-choice] live. And what we’re calling the middle is not the thing between these. It’s in a higher dimensional space that combines these crappy low resolution, moronic positions, and it projects to the middle, but it isn’t the middle.

I’m interested in visualizing this. Here’s a sketch.