CHUS method > CIRCLES method

Lewis Lin’s CIRCLES method is a popular framework for tackling the type of product design questions that commonly show up in product management interviews. I found the framework helpful as a starting point, but cumbersome in practice. As a result, I created a simplified reformulation that I find superior. 

It’s called CHUS — an acronym for Context, Humans, Uses, Solutions. 

Let’s first review what a product design question is, then look at the CIRCLES method and its shortcomings. Finally, I’ll show you the CHUS method. 

Product design questions

Product design questions are meant to produce a signal on your ability to take a big ambiguous problem and turn it into a great product or feature. 

Facebook calls these “product sense” questions. They’re looking for structured thinking that moves from macro (can you identify and justify user segments, key pain points, and product goals?) to micro (what solutions would you build to address those pain points?).

Typical questions will look like this: 

  • Improve product X.
    Eg, Dropbox recently asked, “How would you improve Slack?”
  • Design a new product that does X.
    Eg, Facebook recently asked, “Design a product to help consumers find a doctor.” 
  • Design a product for group X.
    Eg, Google recently asked, “Design a refrigerator for the blind.” 

I actually enjoy practicing these kinds of questions because this really is the kind of thinking that you have to do a lot of as a PM — taking some broad, poorly constrained provocation, making sense of it and figuring out what to do about it.

Now let’s look at Lin’s framework for tackling these kinds of questions. 

CIRCLES Method

Lin’s CIRCLES method has seven steps — one for each letter of the word CIRCLES. They are: 

  1. Comprehend the situation
  2. Identify the customer
  3. Report customer needs
  4. Cut, through prioritization
  5. List solutions
  6. Evaluate tradeoffs
  7. Summarize recommendation

What I like about the framework is that these are indeed seven good things to think about, in that order. It works well as a starting point. 

There are a few things that I don’t like about it.

The first is that it has seven steps, which is quite a lot to hold in mind at once.  It’s hard to quickly recall them all and outline a response in my head.

The second problem is that while the word “CIRCLES” is easy to remember, the letters of the acronym don’t point to words that are useful to remember. “I” stands for “Identify” — identify what? The customer? Pain points? Solutions? Risks?  The issue is that Lin formed the acronym around the verbs, but the verbs are practically interchangeable, like the action words on a resume (“delivered”, “achieved”). I suspect that interchangeability is actually why he did it: he chose to prioritize a nice-sounding acronym over useful pointers.

The third problem is that the framework is ontologically inelegant. What’s going on as you work through a question is that again and again you’re going wide to explore a space of possibilities and then narrowing down to choose a specific possibility to explore. First you do this around customer segments, then around needs, then around solutions. In CIRCLES, sometimes those diverge/converge steps are called out explicitly, and sometimes not. The customer gets one step, while needs and solutions are each given two. This inelegance makes the framework harder to adapt on the fly, kind of like the way a piece of procedurally written software is harder to adapt and extend than a piece of object-oriented software. 

I find it cumbersome to work with CIRCLES so I changed it up and simplified it. Now let’s look at the CHUS framework.

CHUS Method

Here we have four steps: 

  1. Context 
  2. Humans
  3. Uses
  4. Solutions

At each step, we’ll do some version of the classic design diamond: diverge by creating choices, then converge by making choices. 

As you work through the case, it’s critical that you manage complexity. You’ll only have time to explore a tiny piece of the problem. Repeating the diamond will allow you to show that you know how to structure a complete exploration, while maintaining tractable complexity.

Here’s what it looks like.

At the context step, diverge by asking questions. Where’s this coming from in the org? Who am I in the example? Who’s it for? Why is it needed? Then converge by synthesizing: “Okay, we’re a national garage door lift manufacturer who sees an opportunity to gain market share by getting into the smart home space.” 

At the humans step, diverge by listing all the stakeholders you can think of. Customers, manufacturers, insurers, company employees and directors, law enforcement, etc. Within the customers group, list segments. Converge by prioritizing: in almost every case the primary stakeholder that you’ll focus on is the customer. Select a segment and offer rationale. 

At the uses step, diverge by listing needs, wants, and pain points for the user. Explore the normal solutions and use cases. Converge by selecting a specific use case. As a ___, I want ___ so that ___. 

At the solutions step, diverge by listing possible solutions that address the user’s goals and pain points. Converge by prioritizing solutions and identifying the best one. 
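
If it helps to see that structure all at once, here’s a minimal Python sketch (the sample options and picks are invented, purely illustrative): the whole method is one diamond function applied four times.

    # A structural sketch of CHUS: four steps, each one pass of the same
    # diverge/converge diamond. All options and picks below are made up.
    def diamond(diverge, converge):
        options = diverge()       # go wide: enumerate possibilities
        return converge(options)  # narrow: commit to one, with rationale

    steps = {
        "Context":   (lambda: ["internal tool", "consumer app", "smart home play"],
                      lambda opts: opts[2]),
        "Humans":    (lambda: ["customers", "insurers", "regulators"],
                      lambda opts: opts[0]),
        "Uses":      (lambda: ["save time", "reduce errors", "feel safe"],
                      lambda opts: opts[2]),
        "Solutions": (lambda: ["mobile alerts", "auto-close", "camera feed"],
                      lambda opts: opts[0]),
    }

    for name, (diverge, converge) in steps.items():
        print(f"{name}: chose {diamond(diverge, converge)!r}")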

The benefits of CHUS over CIRCLES are simplicity and elegance. Yes, you have to learn two frameworks: the CHUS sequence, and the diverge-converge method. But separating these objects out lets you see each one more clearly, and lets you modulate its use as appropriate to the question. 

Additional benefit — the CHUS acronym, if pronounced as “choose”, points to something that you have to do a lot of in these kinds of questions: choose where you’re going to focus. You can’t cover the whole problem space in 20 minutes. You have to again and again choose which branch of the tree you’re going to explore. 

The optimal amount of planning

On the wall in an Uber office, painted in bold, confident letters is a General George Patton quote:

“A good plan violently executed now is better than a perfect plan executed next week.”

In war and startups, I bet Patton is right much of the time. Chaos in the field is high — predictions lose value the farther they attempt to reach into the future. And the action moves fast — wait a week and the enemy will be elsewhere. But there are also cases in which it’s better to perfect the plan for another week. Launching the Apollo 11 rocket with humans aboard? I’ll take the perfect plan next week. Performing an organ transplant? Let’s get it right.

If we mess with the parameters in Patton’s thought experiment, things start to get fuzzy. What if it’s a choice between a good plan today and a perfect plan in 5 days? In 1 day? In 1 hour? What if it’s a choice between a bad plan today and a perfect plan next week? How bad does today’s plan have to be before it’s worth it to wait on next week’s perfect plan?
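
One toy way to make those parameters concrete (every number, and the decay model itself, is my own assumption, just to show how plan quality, delay, and field chaos trade off):

    # A toy expected-value model of the good-plan-now vs perfect-plan-later
    # tradeoff. All numbers are made-up assumptions.
    def expected_value(plan_quality, delay_days, payoff=100.0, daily_decay=0.10):
        # Each day of delay erodes the payoff (the enemy moves, the market
        # shifts): value decays geometrically with time.
        return plan_quality * payoff * (1 - daily_decay) ** delay_days

    print(expected_value(plan_quality=0.7, delay_days=0))  # good plan now: 70.0
    print(expected_value(plan_quality=1.0, delay_days=7))  # perfect plan next week: ~47.8

    # In a calmer field (1% daily decay), waiting wins: ~93.2
    print(expected_value(plan_quality=1.0, delay_days=7, daily_decay=0.01))

In a chaotic field the good plan now dominates; dial the decay down and the perfect plan catches up.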

Product managers have to make this choice regularly. Invest more in planning, or go ahead with what you’ve got? The wisdom in the zeitgeist pushes more strongly towards action than towards planning: we encourage one another to “fail fast”, to “break things”, to “ship it”. And that wisdom is responding to something real: it feels easier, safer, to sit in the armchair a bit longer, to polish the stone a bit more, before getting out in the world and putting things to the test, so we benefit from the extra nudge to go.

But in some cases we shouldn’t move fast. In some cases we should make the plan better. We should improve our models more. We should think, analyze, write, discuss, whiteboard, challenge, criticize, and strategize. Some cases call for more planning.

How do we know which is which? This is the million-dollar question. I don’t have an algorithmic solution, but here are some heuristics.

First, simply recognizing that there’s an optimal amount of planning (which means that in a given case there can be too much or too little) will get us asking the right questions. The “fuck it ship it” mantra will not get us all we need.

Second, consider whether the decision or action is reversible. The easier it is to change your mind later, the more you should lean towards acting rather than planning. The more irreversible, the more you’ll want to perfect your plan. The Apollo 11 crew, once launched, cannot be un-launched.

Third, consider the stakes. What are the implications if you act based on your current plan, and your current plan turns out to be bad? If it won’t matter too much, then go ahead and act. If it’s make or break, lean more towards planning. A bad moon landing plan probably means the crew dies.

Fourth, consider the opportunity cost. If you were to spend the next unit of time acting rather than planning, what value could you capture? If not much, then go ahead and plan. If a lot, then act.

Fifth, consider the chaos. If the field you’re working within is highly unpredictable, then detailed long-term plans will likely go out the window pretty quickly. Get out in the field and learn as you go.

Finally, consider the fact that planning, in an important sense, makes you smarter. When you’re planning, you’re ramifying your mental models of the world. The more sophisticated and well-articulated your models, the more capable you’ll be of collecting the right data and making the right interpretations as you go along. This will have long-term, difficult-to-account-for benefits. As another legendary WWII general, Dwight Eisenhower, said, “Plans are worthless, but planning is everything.”

Work as if you live in the early days of a better world

I just finished reading Semiosis, a novel by Sue Burke, which follows the story of a group of humans colonizing a new planet over the first 100 years. It reads a lot like The Martian (science and survival), but replace The Martian’s solitude and desolation with community and competition.

It’s a deeply hopeful book about the prospects for the flourishing of sentient and intelligent life. The quote in the title of this post is not from Sue Burke, but it resonates for me as an attitude that, if held, is likely to make our present and future moments better.

Concepts related to the 80/20 principle

The 80/20 rule or “Pareto Principle” says that for many phenomena, a majority (eg 80%) of the effects come from a minority (eg 20%) of the causes. For example, if you’re a lawyer, the top 20% of your clients probably generate around 80% of your revenue. Also, the worst 20% of your clients probably generate around 80% of your headaches.

What’s powerful about the concept is that if we can successfully distinguish the super-potent causes from the less-potent causes, we can prioritize our efforts and get a lot more bang for our buck.
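
If you want to see that arithmetic emerge, here’s a minimal simulation (assuming, as a modeling choice of mine, that client revenues follow a classical Pareto distribution):

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # numpy's pareto() draws from the Lomax distribution; adding 1 gives the
    # classical Pareto. A shape of ~1.16 is the value at which the top 20%
    # of draws hold roughly 80% of the total.
    revenues = 1 + rng.pareto(1.16, size=100_000)
    revenues.sort()

    top_fifth = revenues[int(0.8 * len(revenues)):]
    print(f"Top 20% of clients: {top_fifth.sum() / revenues.sum():.0%} of revenue")

The exact share bounces around from run to run (heavy tails are noisy), but it lands near 80%.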

There are a number of related concepts that I find super useful. Here are some.

  • Compounding returns — interest on interest for exponential growth (see the sketch just after this list)
  • Matthew effect or law of accumulated advantage — the rich get richer
  • Leverage — any influence which is compounded or used to gain an advantage
  • Force multiplication — a factor that increases the effect size of a cause
  • The 1 Percent Rule — “over time the majority of the rewards in a given field will accumulate to the people, teams, and organizations that maintain a 1 percent advantage over the alternatives.”
  • Power law or “heavy tailed” distributions — “a relationship between two things in which a change in one thing can lead to a large change in the other, regardless of the initial quantities”
  • Return on investment — the ratio of benefit out to cost in
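
To make the first item on that list concrete, here’s compounding returns in a few lines (the 1% rate and the horizons are arbitrary):

    # Compounding returns: a small rate, reinvested, grows exponentially,
    # while the same rate without reinvestment grows only linearly.
    principal, rate = 1.0, 0.01
    for periods in (1, 10, 100, 500):
        compounded = principal * (1 + rate) ** periods
        simple = principal * (1 + rate * periods)
        print(f"{periods:>3} periods: compound {compounded:8.2f} vs simple {simple:5.2f}")

After 500 periods the compounded balance is roughly 145x the principal, against 6x for simple interest: interest on interest for exponential growth.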

Virtue signaling and moral grandstanding

When we accuse someone of “virtue signaling”, we mean that they’re trying to make a showy display of how good they are (how pure their intentions, how spotless their record, how woke their views) without necessarily doing the work to be good. We don’t like it because we want people to represent themselves authentically, so that we can judge them for what they really are, not what they’ve misrepresented themselves to be.

But as Sam Bowman of the Adam Smith Institute points out, the term “virtue signaling” is problematic. To “signal” is to provide credible information. A bank might situate its offices in a big grand building to signal that it has lots of money. It wouldn’t be able to afford the fancy building if it didn’t. When we separate the signal from the noise, we’re locating the valuable information in the sea of static. But when we accuse someone of “virtue signaling”, we want to say that the information they’re providing is not credible.

Justin Tosi, a philosopher at Michigan, offers a better term: “moral grandstanding”. To “grandstand” is to seek favorable attention, and the term doesn’t carry the implication that one’s displays are credible.

Fun fact: James Bartholomew claims to have coined the term “virtue signaling” in 2015. Google Trends backs up the timeline.

Yoshua Bengio on Integrated Information Theory

At NeurIPS this year Yoshua Bengio gave a great talk on research directions towards general intelligence.

While the deep learning paradigm has made major progress this century beyond classical symbolic AI, it has not accomplished its original goal of high-level semantic representations grounded in lower-level representations, which would enable higher-level cognitive tasks like systematic generalization of concepts and properties, working with causality, and factorizing knowledge into small exchangeable pieces.

Bengio thinks that there are pathways from current deep learning to high-level semantics that do not require a return to, or an interleaving with, classical symbolic approaches. One element of the picture he paints is that if we want to get to high-level representations we should drop the “disentangled factors” goal (assumption that each variable should be independent) and instead think of thoughts as best represented by a sparse factor graph.

This leads a questioner to ask about the difference between this model and IIT’s model of integrated information as the measure of consciousness.

Questioner:

The other major theory of consciousness is of course IIT, which measures consciousness by this phi quantity, which is essentially a measure of the mutual information of the parts of a system. And the higher the mutual information, the more consciousness you have. Which seems like the polar opposite of your sparse factor graph hypothesis. How do you reconcile the two?

Yoshua Bengio:

I don’t. I think the IIT theory is more on the mystical side of things and attributes consciousness to any atom in the universe. I’m more interested in the kind of consciousness that we can actually see in brains. […] There is a quantity that is being measured [in IIT] but I don’t think that it is related to the kind of computational abilities that I’ve been talking about.
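
The questioner’s gloss of phi as “mutual information of the parts of a system” is loose (phi proper is a more involved construction), but if the underlying measure is unfamiliar, here’s a minimal sketch of plain mutual information between two variables:

    import numpy as np

    # Mutual information I(X;Y) from a joint distribution p(x, y).
    # (Phi in IIT is a more elaborate quantity; this is only the basic
    # information-theoretic measure the questioner is gesturing at.)
    def mutual_information(joint):
        joint = joint / joint.sum()
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0  # skip zero cells to avoid log(0)
        return (joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum()

    # Two perfectly correlated bits share 1 bit of information...
    print(mutual_information(np.array([[0.5, 0.0], [0.0, 0.5]])))      # 1.0
    # ...and two independent bits share none.
    print(mutual_information(np.array([[0.25, 0.25], [0.25, 0.25]])))  # 0.0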

Ray Dalio: good synthesis requires successful navigation of levels

For Ray Dalio, making good decisions requires maintaining a true and rich picture of the realities that will affect your decision. To do that, you have to be able to synthesize an enormous amount of information. And to do that, you have to be able to successfully navigate what Dalio calls “levels”. 

Reality exists at different levels and each of them gives you different but valuable perspectives. It’s important to keep all of them in mind as you synthesize and make decisions, and to know how to navigate between them. 

Let’s say you’re looking at your hometown on Google Maps. Zoom in close enough to see the buildings and you won’t be able to see the region surrounding your town, which can tell you important things. Maybe your town sits next to a body of water. Zoom in too close and you won’t be able to tell if the shoreline is along a river, a lake, or an ocean. You need to know which level is appropriate to your decision.

To synthesize and communicate well, we learn to keep track of the high-level narrative. Dalio diagrams this in the book.

Sometimes we need to go into the lower-level details, but only when necessary, and we return to the high-level thread when we’ve accomplished what we need to at lower levels.

But sometimes things go awry. For example, we might get lost in the weeds.

Or, we might lose the thread entirely.

To avoid these pitfalls, Dalio recommends these four steps:

1. Remember that multiple levels exist for all subjects.

2. Be aware on what level you’re examining a given subject.

3. Consciously navigate levels rather than see subjects as undifferentiated piles of facts that can be browsed randomly.

4. Diagram the flow of your thought processes using the outline template shown on the previous page.

From his book Principles: Life and Work (p. 250).

Can you explain cognitive dissonance on a neurological level?

Unfortunately the answer to all questions of the form “Can you explain [some phenomenon that we observe on a psychological/experiential level] on a neurological level?” is “no”. We’re pre-Copernican in our understanding of the mind/brain, and all we can do today is say things like “well, activity in x region is associated with y phenomenon”. Which is like asking what “war” is and getting an answer that there’s statistically more physical heat in regions where war is occurring.

But we do have lots of perspectives that you might find helpful or interesting. One that is applicable here is multi-agent models of mind. For example, there might be at the same time a part of me that wants a cheeseburger, and a part that wants to eat healthily. In that kind of picture, cognitive dissonance would be characterized in the same way you’d characterize dissonance between two agents in any system. More about multi-agent models of mind here.
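
To make that characterization concrete, here’s a toy rendering of the multi-agent picture (the agents, options, and scores are all invented):

    # Two internal agents score the same options; "dissonance" is just
    # disagreement between them. Everything here is illustrative.
    preferences = {
        "craving":      {"cheeseburger": 0.9, "salad": 0.2},
        "health_goals": {"cheeseburger": 0.1, "salad": 0.8},
    }

    for option in ("cheeseburger", "salad"):
        scores = [agent[option] for agent in preferences.values()]
        # One crude measure: the spread between the agents' scores.
        print(option, "dissonance:", round(max(scores) - min(scores), 2))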

Love and conditions

Conditional love: I love you because of your merits. 

Unconditional love: I love you without respect to conditions. 

It’s interesting how unconditional love is not directly caused by conditions, but it’s not completely independent from conditions either. Eg I think you can intentionally cultivate unconditional love for someone, and you might choose to do that because of how good your life is with that person, which is conditional on their merits. And I think one way to cultivate unconditional love is to habitually feel appreciation for them, which will often be prompted by some little thing they said or did. 

If “bitterness” is anger that forgot its cause, maybe unconditional love is appreciation that forgot its cause.