A rose by any other name would release the same chemical odorants, but if a rose were called “pukeweed” you probably wouldn’t buy one for your sweetheart.
Names are important. Product people often underestimate how important they are. And since naming a product or brand can be really hard, it’s easy to say, “We’re spending too much time on this. It’s not that important. Let’s just pick one and move on.”
But names matter. A lot. If you’re trying to name your product and you haven’t found a name that feels right, you probably shouldn’t put the search to rest until you do.
Here’s why names matter.
Nearly every time a person encounters your product they encounter your name. That name creates a snap emotional judgment. This judgment occurs before declarative thoughts and conscious intentions. It’s a feeling that precedes conscious awareness (in the Blink sense), and it creates the context in which the brain goes about forming declarative opinions about the object. After that first instant, the thing has already established its emotional valence. All that’s left now is for the brain to find reasons to explain why the thing is so good or so bad.
That initial movement will start a feedback loop. You hear the name. You feel good. You look for reasons the thing is good. You find them. You declare them. Now, noticing those, and getting your ego wrapped up in it, you feel even better about the thing. And so on. Once that process starts, the ball will tend to keep rolling down the same side of the hill.
If the initial feelings are negative, the relationship between the person and product is likely to end right there. If the initial feelings are positive, the product has a foot in the door. The person is interested. Their brain is already at work generating reasons why this thing is cool or valuable. There can be further interaction. A relationship might blossom.
By this process the seemingly tiny factor of the first emotional, aesthetic response to the product’s name has an enormous effect on the final outcome of the relationship between the person and the product. I’m reminded of the cheesily awesome explanation of chaos theory delivered by Ian Malcolm to Ellie Sattler in the Jeep in Jurassic Park.
Consider virality. If I have a negative feeling about a product, it’s unlikely that I’m going to tell my friends about it. If when I say the name I feel a negative shadow, a twinge of embarrassment, an urge to defend or convince you that no, this thing is not what you think, it’s actually cool, hear me out — then I’m probably not going to talk about it. And the product won’t spread. But if I feel good about it, and am excited about it, I might be happy to talk about it.
That’s virality at the micro level. Now consider the macro level. For your product to be successful, it’ll need millions of people to hear about it, check it out, and develop a relationship with it.
How to test if a name is good
For me, evaluating a name takes two steps. When you have a candidate for a name, here’s how to put it to the test.
First: Say it to someone. Describe your product, and use the name as if it’s already been decided upon. Choose a person whose judgment you trust, and who isn’t predisposed to like everything you say. Not your mother. Coworkers and critical friends are good. And when you say the name, pay very close attention to how you feel. Did it feel good? Did you enjoy saying the name? Did you feel embarrassed? Was it a bit of a strain? Your answer there is all you need to know, because as Don Draper says, “You are the product. You, feeling something. That’s what sells.”
Second: If your name passed step one, great. But now it has to pass another step: the sanity check. Someone recently pointed out to me that “chlamydia” is actually a pretty word, if you separate it from its meaning. But yeah. That meaning. I won’t be naming my daughter Chlamydia. If the name is trademarked by your competitor, or if it refers to a venereal disease, keep looking.
Julie Zhou has a great post about why she writes, how she got herself to do it, and how you can do it too.
In particular, this:
In all the times before that I have failed to get something on paper, it was because I had thoughts like the following: geez, what if I hit publish and nobody reads this? That’d be embarrassing and pointless. Or I only want to publish something if it’s really good and makes me seem smart, witty, and knowledgeable. Or What if I say this and somebody disagrees and tells me I’m wrong? Or Hmm, I should only write when inspiration hits me, and right now I don’t feel inspired.
In every creative endeavor — not just writing — this train of thought paralyzes. I have experienced it enough times to know that holding yourself to some lofty standard when you are just starting out is like blowing a deathkiss to your chances of success.
That resonates with me. Those thoughts occur, and suddenly I’ve found some other activity that’s more urgent and important than continuing to try to write.
And then her advice:
Instead, if you’d like to write, I offer the following tips:
1. Set a writing goal that is purely about the mechanical act of doing. Maybe, like me, it’s Hit the publish button every third Tuesday. Maybe it’s Write 3 journal entries a week. Or maybe it’s Write 500 words a day. (In case you wonder how all your favorite authors complete their novels, I have it on good information that pretty much all of them do it via daily word-count/time-spent-writing goals.)
2. Tell yourself that nothing else matters besides #1. The thing you publish every third Tuesday does not have to fit any particular theme (in my case, not having any better ideas at the time, I’ve published poetry, listicles, and essays about my dog.) Your journal entries can be one sentence long. Your 500 daily words can be crap words. Don’t obsess over your audience. Don’t try to write what you think other people will want to read. Write about what you are excited about, because the best writing tends to reveal a piece of yourself anyway. The point is to bust down any possible barrier that might get in the way of you being able to achieve #1.
3. Commit to doing #1 for long enough that you will have built a habit out of it. A week or a month isn’t sufficient. Try 6 months or a year. By then, the act of writing will have molded to your life like a favorite sweatshirt, and you will begin to feel its effects on the way you think, reflect, and process the world.
But the first thing she did was to write an anonymous blog. Because her fear was keeping her from writing.
Yes, I was afraid.
To write publicly is to put yourself out there. To take a stance on something, propose an idea, have a point of view. It is to give someone else — someone you may not know and may never even meet — a piece of evidence with which to form an opinion of you. I cared deeply what others thought of me. (When I was little, I refused to ask grocery store clerks simple questions like Where are the oreos? for fear of seeming incompetent. As you can guess, this sacrifice cost me dearly in terms of snack-time utility.) I worried about what it would mean to admit weaknesses publicly, to write about touchy topics like gender and bad behavior and all the things that I’m learning. I worried what friends and coworkers would think.
And apparently, using the cover of anonymity, she was able to lower her fear enough to start writing.
In 2012, I sat down in January and scrawled a New Year’s Resolution on a sticky note: Write a blog. I did it the only way I knew how at the time: facelessly and anonymously. And that helped to get the words flowing. I wrote and published twice a week. The anonymity helped me share stories like what I learned from negotiating my first salary, tactics for interruptions, and what it felt like to harbor a Jekyll-Hyde impostor syndrome most days of the week.
My bet is that that step was crucial. Writing is hard because it’s so many things at once. Anonymity allowed her to condition her fear down while building the mechanical habits of writing. Only after she did that for a while — it sounds like she started in Jan 2012, and petered out in April 2012 — did she go nonymous.
This is a transcript of an interesting segment from episode 5 of the Talking Machines podcast, which contains part 1 of a conversation between three leaders in machine learning: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.
It seems like thinking hard about distributed representations is some of the most exciting stuff that’s come out of this resurgence. It’s a very different way of — I would say it kind of challenges a long history of knowledge representation. It feels very biological, right? Geoff, can you talk a little bit more about distributed representations and maybe explain that to our audience?
Ok the idea is that you have a large number of neurons and they’re conspiring together to represent something and they each represent some tiny aspect of it, and between them they represent the whole thing and all its wonderful properties.
And it’s very different from a symbol, where a symbol is just something that is either identical or not identical to another symbol. Whereas these big patterns, these distributed representations, have all sorts of intrinsic properties that make them relate in particular ways to other distributed representations. And so you don’t need explicit rules, you just need a whole bunch of connection strengths, and one distributed representation will cause another one in just the right way.
For example you could read an English sentence and get a distributed representation of what it means, and that could cause a distributed representation that creates a French sentence that means the same thing. And all of that can be done with no symbols.
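Hinton’s contrast between symbols and distributed patterns is easy to see in toy numbers. In the sketch below, the feature values are invented for illustration; a real system would learn them from data:

```python
import numpy as np

# A symbol is all-or-nothing: one-hot codes are either identical or not.
symbol_rose  = np.array([1.0, 0.0, 0.0, 0.0])
symbol_tulip = np.array([0.0, 1.0, 0.0, 0.0])

# A distributed representation spreads a concept over many units, each
# capturing a tiny aspect of it (values below are made up).
dist_rose  = np.array([0.9, 0.8, 0.1, 0.7])
dist_tulip = np.array([0.8, 0.7, 0.2, 0.6])
dist_truck = np.array([0.1, 0.0, 0.9, 0.1])

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(symbol_rose, symbol_tulip))                            # 0.0
print(cos(dist_rose, dist_tulip) > cos(dist_rose, dist_truck))   # True
```

With one-hot symbols, two different concepts share nothing; with distributed codes, the similarity between related concepts falls out of the geometry for free — which is the point Hinton is making about not needing explicit rules.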
So the power of that concept can be seen in the fact that all of us, in all of our labs, are essentially working on embedding the world — you can think of it this way. So how do we find vector representations for words, for text in various languages, for images, for video, for everything in the world. For people, actually, so you can match people’s interests with content, for example, which is something that Facebook is very interested in.
So finding embeddings is a very interesting thing. And there are a lot of methods for doing this. For text there’s the very famous method called word2vec, invented by Tomas Mikolov.
And following the neural language model that Yoshua had worked on before that, Geoff and I also had worked separately on different methods to do high-level embeddings rather than low-level embeddings. So things that could be applied to images for example. So I guess this could be called metric learning. So this is situations where you have a collection of objects and you know that two different objects are actually the same object with different views or the same category. So two images of the same person, or two views of the same object, or two different instances of the same category.
And so you have two copies of the same network, you show those two images and you tell the two networks ‘produce the same output. I don’t care what output you produce, but your output should be nearby.’ And then you show two objects that are known to be different, and then you can push the output of the two networks away from each other.
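The pull-together/push-apart training LeCun describes is commonly implemented as a siamese network with a contrastive loss. Here is a minimal numpy sketch; the “network” is just a single shared linear map, an illustrative stand-in for a real model:

```python
import numpy as np

def embed(x, W):
    """A stand-in 'network': one shared linear map producing an embedding.
    Sharing W is what 'two copies of the same network' means."""
    return W @ x

def contrastive_loss(za, zb, same, margin=1.0):
    """Pull matching pairs together; push non-matching pairs apart
    until they are at least `margin` apart."""
    d = np.linalg.norm(za - zb)
    if same:
        return d ** 2                        # same object: shrink the distance
    return max(0.0, margin - d) ** 2         # different: penalize only if too close

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))              # one set of weights, used for both inputs
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)

loss_same = contrastive_loss(embed(x1, W), embed(x2, W), same=True)
loss_diff = contrastive_loss(embed(x1, W), embed(x1, W), same=False)
print(loss_same > 0.0)   # True: two distinct views labeled 'same' still incur loss
print(loss_diff)         # 1.0: identical embeddings labeled 'different' pay the full margin
```

Gradient descent on this loss is what molds the embedding space so that two views of the same object land nearby and different objects land apart.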
Geoff had a technique called NCA, neighborhood components analysis, to do this. […] And then Jason Weston and Samy Bengio came up with a technique called WSABIE, which they used to do image search at Google. Google used that as a method to build vector representations for images and text so you could match them in search. At Facebook we’re using techniques like this for face recognition. So we find embedding spaces for faces, which allows us to search very quickly through hundreds of millions of faces to find you in pictures, essentially.
So those are very powerful methods that I think we’re gonna use increasingly over the next few years.
Is there a point where you need to have discrete grammars on top, or can it be distributed the whole way down?
My belief — if you’d asked me a few years ago, I’d have said well maybe in the end we need something like a discrete grammar on top. Right now I don’t think we do. My belief is we can get a recurrent neural network — that is something with an internal state that has connections to itself so it sort of keeps going over time. We can get that kind of network to translate from one language to another — this has been done at Google, and it’s been done in Yoshua Bengio’s group — we can do that with nothing that looks like symbols with symbolic rules operating on them. It’s just vectors inside.
It works very well. It’s at about the state of the art now, both at Google and at Yoshua’s lab. And it’s developing very fast.
And I think the writing’s on the wall for people who think the way you get implications from one sentence to the next is by turning the sentence into some kind of mentalese that looks a bit like logic and then applying rules of inference. It seems that you can do a better job by using these big distributed vectors, and that’s much more likely to be what people are up to.
There’s a very interesting white paper, or position paper, by Léon Bottou titled “From Machine Learning to Machine Reasoning”, which basically advocates the idea that we can use those vector representations as the basic components of an algebra for reasoning, if you want. Some of those ideas have been tried out, but not to the extent that we can exploit the full power of it.
And you start seeing work now, so for example my colleague [?] Fergus […] and someone from Google, worked on a system that uses distributed representations to identify mathematical identities. And it’s one of those problems that is very very sort of classical AI — like solving integrals and stuff like that — that involves reasoning and search and stuff like that. And we can do that with recurrent nets now to some extent.
Then there are people working on how you augment recurrent networks with sort of a memory structure. So there have been ideas going back to the early 2000s or late ’90s, like the LSTM, which is pretty widely used at Google and other places. So it’s a recurrent net that has a sort of a separate structure for memory. You can think of it as sort of a processor part and a memory part, where the processor can write to and read from the memory.
So the Neural Turing Machine is one example; there’s another example: Jason Weston [and others] have proposed something called a Memory Network, which is kind of a similar idea. It’s somewhat simpler than the Neural Turing Machine in many ways. […]
And there’s a sense that we can use those types of methods for things like producing long chains of reasoning, maintaining a kind of state of the world if you want. So there’s a cute example in the memory network where you can tell a story to the network, like say Lord of the Rings, so “Bilbo takes the ring and goes to Mt Doom, and then drops the ring, and blah blah blah.” You tell all the events in the story, and at the end you can ask a question to the system, so, “Where’s the ring?” and it tells you, “Well, it’s in Mt Doom.” Because it maintains sort of an idea of the state of the world, and it can respond to questions about it.
So that’s pretty cool because that starts to get into the stuff that a lot of symbolic AI people said neural networks will never be able to do.
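The “state of the world” behavior LeCun describes can be caricatured in a few lines. A real Memory Network learns this from embedded sentences; the hand-written rules and names below are purely illustrative:

```python
# Toy caricature of a Memory Network's question answering: track where
# objects are as story events arrive, then answer a location question.
# Here the "memory" is just a dict updated by hand-written rules.
holder_or_place = {}

def tell(sentence):
    """Ingest one event of the form 'X takes Y' / 'X goes P' / 'X drops Y'."""
    subject, verb, rest = sentence.split()
    if verb == "takes":
        holder_or_place[rest] = subject       # the object travels with its holder
    elif verb == "goes":
        holder_or_place[subject] = rest
    elif verb == "drops":
        # the object stays wherever its holder is right now
        holder_or_place[rest] = holder_or_place.get(subject, "unknown")

def where(thing):
    """Answer 'Where is X?' by chasing the holder chain down to a place."""
    while thing in holder_or_place:
        thing = holder_or_place[thing]
    return thing

tell("Bilbo takes ring")
tell("Bilbo goes MountDoom")
tell("Bilbo drops ring")
print(where("ring"))   # MountDoom
```

The interesting part is that a memory network arrives at the same behavior without any of these rules being written down — it learns to store and retrieve the relevant facts from the story itself.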
I’d like to add something about the question you asked regarding distributed representations and why they are so powerful and behind a lot of what we do.
So one way to think about these vectors of numbers is that what they really are is attributes that are learned by the machine, or by a brain, if we think that’s how brains work. So a word or an image or any concept is going to be associated with these attributes that are learned.
Now associating attributes to concepts is not a new idea. Linguists will define things like the gender or plural or this is an animal or this is alive or not. And people trying to build semantic descriptions of the world do that all the time. But here the difference is that these attributes are learned. And the learning system discovers all of the attributes that it needs to do a good job of predicting the kind of data that we observe.
The important notion here is the notion of composition, something which is very central in computer science and also in many of the older ideas of AI. Cognitive scientists thought that neural nets cannot do composition.
Actually composition is at the heart of why deep learning works. In the case of the attributes and distributed representation I was talking about it’s because there are so many configurations of these attributes that can be composed in exponentially many ways that these representations are so powerful.
And when you consider multiple levels of representations, which is what deep learning is about, then you get an extra level of composition that comes in, and that allows you to represent even more abstract things.
A nice example of distributed representations, where you can see them at work in people, is this: if you just have symbols, you might have a symbol for a dog and a symbol for a cat, and a symbol for a man and a symbol for a woman. But that wouldn’t explain why you can ask anybody the following question, and young kids can do this. If you say “You’ve gotta choose: either dogs are male and cats are female, or dogs are female and cats are male.” People have no doubt whatsoever. It’s clear that dogs are male and cats are female.
And that doesn’t make any sense at all. And the reason it’s clear is because the vector for dogs is more like the vector for man, and the vector for cats is more like the vector for woman. And that’s just obvious to everybody. And if you believe in symbols and rules, it doesn’t make any sense.
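Hinton’s dog/cat intuition is easy to reproduce with toy vectors. The coordinates below are invented attributes, not learned ones; real word vectors (word2vec and friends) learn hundreds of such dimensions from raw text:

```python
import numpy as np

# Hand-made toy embeddings; each coordinate is an invented "attribute".
vecs = {
    #                 animate  feline  canine  masc-coded (a made-up stereotype axis)
    "man":   np.array([1.0,    0.0,    0.0,    0.9]),
    "woman": np.array([1.0,    0.0,    0.0,   -0.9]),
    "dog":   np.array([1.0,    0.0,    1.0,    0.6]),
    "cat":   np.array([1.0,    1.0,    0.0,   -0.6]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# No rule anywhere says "dogs are male"; it falls out of the geometry.
print(cosine(vecs["dog"], vecs["man"]) > cosine(vecs["dog"], vecs["woman"]))   # True
print(cosine(vecs["cat"], vecs["woman"]) > cosine(vecs["cat"], vecs["man"]))   # True
```

That is the whole argument in miniature: the judgment people make instantly is a similarity comparison between vectors, not the application of a symbolic rule.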
Author’s note, November 2019: I originally wrote this article in January 2015. Two months later, I joined Lumosity’s growth team as a product manager and worked there for four years. I no longer work there and if I were to analyze Lumosity’s onboarding flows now, I’d write a different article than what follows. Nonetheless, I think there’s still valuable stuff in here so I leave it up for posterity.
At first glance, Lumosity’s onboarding flow seems pretty typical. There’s a landing page, a personalization wizard, an assessment, a summary, and then a payment screen. But on closer inspection, you’ll find that the whole thing is executed uncommonly well, and that key parts are startlingly original. In this post I’ll break down the entire flow and analyze it step-by-step.
A note: I don’t work at Lumosity, and all of this analysis is my own opinion drawn from publicly available information.
All of Lumosity’s acquisition activities are designed to funnel users into two places: the website home page, and the app store download pages. I’ll pick up the story at the website home page.
Today’s Lumosity home page looks like this.
If you’ve been watching over the last few months, you’ll know that this is a relatively recent update. In Q4 2014 they were A/B testing a major redesign of the home page. What you see to the left was the experimental variation, and it was the winner.
There are two key things to notice about this page.
The science. Of the five content areas on this page, four are focused on science. Lumosity is a company that tests principles rather than tweaks, and in this variation they tested the principle that foregrounding the science leads to subscriptions. Apparently it does. Compare this to the old version, which did emphasize science (this isn’t a new idea for Lumosity), but also focused on social proof (see the extraordinarily cool Lumosity members at the bottom — Sandy is literally shooting a gun while riding a horse).
Aggressive funneling. You’ll need to visit the page to see this, but nearly every single element on the page links to one place: the start of their onboarding flow. You want to learn more about Mike Scanlon, co-founder? Onboarding. Curious about that “Prestigious research network” or the “40+ scientific games”? Onboarding. Want to “Get started”? Oh good: onboarding. Lumosity is singular in its intention: they want to move users into the onboarding flow.
There must be something good going on in there. Let’s check it out.
The first phase of the onboarding flow is what I’ll call the ‘personalization wizard’. Here Lumosity collects data for the purpose of creating a customized training program.
Upon clicking “Get started” (or just about anything else on the home page), the user is taken to this screen:
Let’s take a look at what’s going on here.
At the top, a headline: “Welcome! Let’s build your Personalized Training Program”. Sounds sensible — if I were starting with a personal trainer at a gym, he’d need to know what I want to accomplish.
The page then instructs: “Select all aspects of your memory that you want to challenge”. Let’s look at the options. I can apparently get better at:
Remembering patterns and locations
Associating names with faces
Keeping track of multiple pieces of information in my head
Recalling sequences of objects and movements
Hmm. Those all sound good. Can I select them all? Yep. Okay, I’d better prioritize. Which ones do I want most? How about remembering names — that’s something I often struggle with. And keeping track of multiple pieces of information in my head? Like upgrading my RAM? Sure, that sounds awesome.
On the second screen, I’m asked to select all aspects of my attention that I want to challenge. Again, all of these look pretty good. Who doesn’t want to be better at ignoring distractions?
As I continue through the next three screens, I’ll be asked about my desire to improve my cognitive speed, flexibility, and problem solving.
By the time I make it to the end of all five screens, I’ve now considered what my life would be like with each of 20 mental power-ups. All the while, the implication framing the experience was that these abilities would be attainable if I were to use Lumosity. Consider the effect that this experience has on my state of mind: I’ve been educated, and I’ve been motivated.
Let’s unpack it.
The purpose of an onboarding flow is to get the user to onboard — that is, to subscribe and pay. I haven’t subscribed. I haven’t even been given the option to subscribe. Why would Lumosity put extra work between me and the thing they really want? Do they really need to collect the data to personalize the training program I haven’t bought yet right now? Why not wait until after I’ve subscribed, or at least given my email?
Lumosity must figure that they’re going to get more of what they want (subscription and retention) if they guide me through this experience right now.
In other words, this experience is here because it improves subscription and/or retention. And if that’s the case, it means that the primary purpose of the personalization wizard is to modify the state of the user’s mind, rather than the state of a database on Lumosity’s servers. Basically, it has the same purpose as a standard product sales page.
And there’s the brilliance. The user experiences this as a survey, rather than a sales page. What does the user do when she’s taking a survey? She carefully reads the questions and considers the answers. In the experience Lumosity has created, the user applies her full, unguarded attention to imagining what her life would be like if she possessed each of 20 improved cognitive abilities, all while associating those visions with the instrumental path to realizing them — using Lumosity.
Let me just say it again.
While collecting data that will be useful for personalizing the program later on, Lumosity has made massive gains on two primary onboarding goals: education and motivation.
Education. For many people, the concept of brain training is unfamiliar. One of the boxes Lumosity needs to check off is that the user understands the benefits she can expect to gain by using Lumosity. How to do that? As soon as you get didactic about your product, you’re going to lose people in droves. This personalization wizard lets Lumosity educate users without boring them.
Motivation. As the user envisions a better version of her life (one in which she remembers faces, solves problems, and ignores distractions with ease), her motivation to take steps toward that better life increases. At the upcoming conversion phase where she’ll be asked to pay for the product, her motivation level is one of the key factors determining whether she will convert.
This dynamic is well illustrated by BJ Fogg’s behavior change model. According to that model, whether or not a person will take a given action depends on the person’s motivation level and her ability (the perceived difficulty of the task) at the moment of a trigger.
In Lumosity’s case, the behavior in question is the user subscribing to the product. The trigger will come when Lumosity presents the payment options at the end of the flow. Whether or not the user subscribes will depend on her motivation level and the perceived difficulty of the task (which comprises factors such as the cost and her expectation of whether she’ll be able to stick with the training program) at the moment of the ask.
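Fogg’s model can be sketched in a couple of lines. The multiplicative form and the numbers below are my own illustrative assumptions, not Fogg’s calibration:

```python
def action_occurs(motivation, ability, threshold=0.25):
    """Fogg's model, caricatured: at the moment of a trigger, the action
    happens when motivation x ability clears an 'action line'. The
    multiplicative form and threshold are illustrative, not Fogg's own."""
    return motivation * ability > threshold

# The payment ask is a hard action (low ability score), so conversion
# hinges on how much motivation the flow has built up by trigger time.
print(action_occurs(motivation=0.9, ability=0.5))  # True: motivated user converts
print(action_occurs(motivation=0.2, ability=0.5))  # False: same ask, unmotivated user
```

The design implication is the one Lumosity acts on: if you can’t make the ask easier, make the user more motivated before you trigger it.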
The personalization wizard experience increases the user’s motivation by getting her to visualize the reward she stands to earn. As Lumosity product design director Sushmita Subramanian explains,
we know from our customer research and also from a body of neuroscience research that letting people reflect and introspect and then share and disclose information about themselves actually activates parts of the brain associated with reward — similar to those that you find from food, sex, and money. So we thought that having some parts of those in the product would help us out as well.
And sure enough, in an A/B test, the team found that directing users through the survey rather than sending them straight to signup increased subscription rate by almost 10%. This was despite the fact that the survey variation performed worse on conversion to the signup page. In other words, without the survey, more users made it to the signup page, but fewer users actually signed up. (There’s a lesson in this: make sure you’re optimizing for the right metric. Josh Elman calls it the only metric that matters. Avinash Kaushik’s Digital Marketing and Measurement Model is a great tool for getting your thoughts clear.)
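Hypothetical numbers make the trade-off concrete (these are invented figures, not Lumosity’s): a variation can lose visitors at an intermediate step and still win on the metric that matters.

```python
# Invented figures: the survey variation loses more visitors before the
# signup page, but converts the survivors better, winning end to end.
visitors = 10_000

# Control: straight to the signup page
control_subs = visitors * 0.60 * 0.10   # 60% reach signup, 10% of those subscribe

# Survey variation: extra steps cost reach, but warm up the rest
survey_subs = visitors * 0.50 * 0.132   # 50% reach signup, 13.2% subscribe

lift = survey_subs / control_subs - 1
print(round(control_subs), round(survey_subs), f"{lift:.0%}")
```

Judged on conversion-to-signup-page, the survey variation looks like a loser; judged on subscriptions per visitor, it wins — which is why the choice of metric decides the experiment.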
Smart. Now that the user is all jazzed up on visions of her own imminent superintelligence, it’s a great time for…
Lumosity knows that many of its users aren’t going to pull the trigger and buy on the first visit. They’ll need to be nurtured. Everyone cites different stats, but this guide to lead nurturing by Marketo claims that
up to 95 percent of qualified prospects on your Web site are there to research and are not yet ready to talk with a sales rep, but as many as 70 percent of them will eventually buy a product from you — or your competitors.
If the user leaves without providing any contact info, Lumosity will have to rely on ad retargeting and luck to get her attention again. But if the user shares her name and email, Lumosity can send her targeted, personalized, optimized messages at any time. Think of these messages in Fogg terms: each message is a chance to increase the user’s motivation and sense of ability, and each message is a new trigger that might land at a fortuitous moment.
So it’s important that Lumosity capture the user’s contact information. After she completes the last step of the personalization wizard, here’s where she’ll be directed:
A form like this — asking a stranger on the internet to hand over her name and contact information — is one of the leakiest joints of any funnel. But the Lumosity user has just spent five minutes envisioning herself with mental superpowers. And she sees Lumosity as something scientific — i.e., trustworthy. I’ll bet the conversion rate on this form would make any growth hacker envious.
As we continue walking through the flow, keep in mind that if the user drops out at any point after completing this form, she’s in Lumosity’s database. She’ll be getting regular emails until she unsubscribes or pays.
Now let’s fill in this form and go on to the next step in the flow, where Lumosity asks for…
Think back to Robert Cialdini’s famous principles of influence. Number 2 on his list is “commitment and consistency”. Cialdini says that humans are moved by a deep desire to be consistent. When we’ve committed to something, that drive towards consistency will make us more inclined to follow through with it.
At this point in the onboarding flow, the user has just shared her name, email address, and birthdate. We could say that she’s made a commitment to seeing what this is all about. How likely is she to bail out at the next step? Not very.
That makes this a good opportunity to ask for some more information.
The text at the top tells the user that the reason Lumosity needs this information is to provide an additional level of personalization. I could write an entire post analyzing Lumosity’s use of Cialdini principles in this flow. To quote Cialdini: “A well-known principle of human behavior says that when we ask someone to do us a favor we will be more successful if we provide a reason. People simply like to have reasons for what they do.” Look back and you’ll see that Lumosity always provides a reason when asking for information.
Whether or not Lumosity needs this data in order to personalize the product, it’s valuable information for the marketing and product teams. Since the team found that adding additional complexity at this stage doesn’t hurt the onboarding metrics, how about one more page of survey questions before moving on:
After these survey questions, it’s on to the next step:
Reflecting user choices
Remember how we said that the personalization wizard isn’t primarily about personalization? So what is Lumosity going to do with the information it collected in that stage of the flow?
Here’s one thing. It will reflect that information back to the user, reinforcing the idea that this is personalized. Here’s what the user sees next:
First she gets a nice animation of a pie chart reflecting the categories she said she wants to prioritize.
This animation goes on to tell her that her priorities will be factored into her Personalized Training Program.
And finally, she’s prompted to get started on the next major phase of the onboarding flow, the Fit Test.
The purpose of the reflection phase is to satisfy the principle that personalization — or at least the expectation of personalization — will improve the user’s expectation of value, and thus improve conversion and retention. In addition, it’s simply a matter of respect: the user has just answered a series of personal questions, and reflecting them back shows her that she was heard.
Next the user clicks “Start Your Fit Test”, and it’s on to…
Now we’re getting to something close to the actual product. Here Lumosity is going to have the user play three games “to calibrate [her] starting point”. Again, it sounds reasonable. If I were beginning with a personal trainer at the gym, he’d need to know my starting fitness level, right?
Each game takes a few minutes to complete. They’re fairly challenging, but are also adaptive — if the user screws up, it gets easier. Aside from being good game design, this ensures that everyone, including the sharpest test-taker, gets to see that there’s room for improvement.
Here’s why that matters. Lumosity just had the user imagine all the cognitive power-ups she stands to gain. With her aspirations set high, she’s now confronted with objective evidence of her current shortcomings. This assessment will make the difference between where she is and where she wants to be painfully obvious.
Assessment 1: Speed. As quickly as possible, decide if the card that’s flashed is the same as the last one shown.
Following the first assessment, the user gets a little bit of encouragement and education.
Assessment 2: Attention. Control railroad switches to direct train cars to their like-colored homes.
Following the attention test, some more reinforcement of the science messaging.
Assessment 3: Memory. Glimpse a pattern of colored tiles, and then recreate the pattern.
Following the three assessments, the user gets another overlay telling her that the system is setting up her personalized training program.
At the second screen in the overlay, the ‘Next’ button will take the user into the penultimate section of the onboarding flow, which we can call the ‘walled garden’.
Before going on, let’s pause for a moment and take stock. The user has been in this flow for 10 or 20 minutes by this point, and she hasn’t once been asked to pay. Come to think of it, she hasn’t even seen a price tag.
So what’s been accomplished?
The user is educated about the product and the concept of brain training.
The user is motivated to achieve improved cognitive abilities.
The user is aware of the objectively-measured distance between where she is and where she wants to be.
The user has committed 10–20 minutes of her time and attention, which creates momentum for her to continue in the flow in order to be consistent.
Lumosity has collected the user’s name and email address, which means that even if the user doesn’t buy today, Lumosity can communicate directly with her in the future.
Not bad at all. Closing time!
Now it’s time to close the deal. The user is deposited into a small walled garden of content — 6 or so pages that all flow downhill into the payment screen. In the name of saving space, I’ll show only the first page. If you want to see the others, here’s an Imgur gallery.
At the top, the user now sees her scores on the three assessments. Key point: these are not presented as contextless raw numbers. They're presented as percentiles: how do I compare with others? This leverages the user's natural competitiveness and curiosity about social status.
Below that, the user is again reminded of the science.
Here, the “daily workouts” line reinforces the gym-membership-like positioning and sets long-term training expectations that will serve retention.
And at the bottom, one last time, the page leverages our universal need to know where we stand compared to others.
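Percentile presentation like this is cheap to implement once you have a pool of past assessment scores. Here's a minimal sketch; the population sample is made up for illustration, and this is not Lumosity's actual code:

```python
# Minimal percentile-rank computation (illustrative; not Lumosity's code).
def percentile_rank(score, population):
    """Percent of the population scoring strictly below `score`."""
    below = sum(1 for s in population if s < score)
    return round(100 * below / len(population))

# Hypothetical pool of past assessment scores.
population = [310, 450, 520, 540, 610, 700, 720, 810, 880, 950]
print(percentile_rank(700, population))  # prints 50
```

The point of the transformation is purely psychological: a raw score of 700 means nothing to a new user, but "better than 50% of people" taps directly into the status comparison the page is built around.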
Here’s how this walled garden works. Once the user has made it this far, any time she comes back to Lumosity on the same account, she’s limited to accessing these pages and a daily training session, which consists of three short games. The walls stay up until she either A) logs out, in which case she’s back to the beginning, or B) subscribes.
Lumosity would love for the user to proceed to the payment page and subscribe right now, but if she doesn’t, hopefully a seed will germinate in her mind. The emails she’ll be receiving almost daily should help to nurture that seed.
Finally, let’s go to the last step in the flow: payment.
When the user clicks Unlock, here’s where she’s taken.
There’s a lot of smart going on here. I want to focus on three key things.
A special offer is pre-loaded. First, notice that the page comes pre-loaded with a special offer to save 20% “today only”. (It’s there every day for new users.) This accomplishes two things. The first is that it adds a sense of urgency: I feel like I have to buy now to get that savings. The second is that it allows Lumosity to pre-fill the Promotion Code field below. That’s a fantastic idea, because empty promotion code fields are a notorious conversion killer: users see the field and go off Googling for coupons rather than completing the purchase.
Multiple subscription length options. Offering four options for subscription length (monthly, yearly, two-year, and lifetime) accomplishes a couple of things.
First, it takes advantage of the contrast effect, whereby the subjective value of one object can be increased by positioning it alongside more expensive options. The classic example comes from a 1992 marketing research paper (sorry, no free version). Williams-Sonoma was selling a breadmaker for $275 in their print catalog. Sales of the machine were weak. Later the company introduced another breadmaker and began selling it for $429 on the same page. Sales of the original machine nearly doubled.
By offering a product for $239.96, Lumosity makes $11.95 seem cheap. In Fogg terms, this decreases the user’s perceived difficulty (of purchasing) without actually decreasing the amount of money Lumosity receives.
Second, offering multiple subscription length options allows Lumosity to capture the full amount of money a customer is willing to spend right now, rather than leaving some of it on the table for later. You can be sure that the lifetime subscription price, $239.96, is greater than or equal to the time-discounted expected lifetime value for a monthly subscriber. If a customer is motivated to spend that much right now, Lumosity will put that money in the bank.
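The claim about the lifetime price can be made concrete with a back-of-envelope discounted-cashflow sketch. The churn and discount figures below are made-up assumptions, not Lumosity's numbers; the point is only the shape of the calculation: the present value of a stream of $11.95 monthly payments, weighted by the chance the subscriber is still around each month, plausibly comes in under the $239.96 lifetime price.

```python
# Back-of-envelope discounted lifetime value for a monthly subscriber.
# The churn and discount rates are illustrative assumptions only.
def discounted_ltv(monthly_price, monthly_churn, annual_discount_rate,
                   horizon_months=600):
    monthly_discount = (1 + annual_discount_rate) ** (1 / 12) - 1
    ltv = 0.0
    survival = 1.0  # probability the subscriber is still paying this month
    for month in range(horizon_months):
        ltv += monthly_price * survival / (1 + monthly_discount) ** month
        survival *= (1 - monthly_churn)
    return ltv

# With, say, 8% monthly churn and a 10% annual discount rate, the expected
# value of a monthly subscriber lands well under the $239.96 lifetime price.
ltv = discounted_ltv(monthly_price=11.95, monthly_churn=0.08,
                     annual_discount_rate=0.10)
```

Under assumptions like these, selling a lifetime subscription at $239.96 captures more than the expected value of the same customer on a monthly plan, which is exactly the logic described above.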
Prices are displayed as monthly and total. Lumosity wants to make sure the user knows that she’ll save a lot by buying a long-term subscription. But Lumosity doesn’t want the user to balk because she’s confused about the total cost. So they display both, foregrounding the number that will contribute to a decreased sense of purchase difficulty.
When the user selects a plan, she’ll be taken to the checkout page.
This payment form is beautifully simple. It asks for name, credit card number, and expiration date; nothing more. At this point in the funnel, the rule is simple: minimize friction, maximize conversion.
Conclusions and takeaways
So that’s it — an onboarding flow that’s uncommonly well-executed and has at least one moment of brilliance.
What can we take away?
Simpler ≠ better. Sometimes adding more complexity to the flow results in a net win — particularly if you need to establish unfamiliar background. Just make sure you understand your user’s psychological state in that moment, and minimize the burden relative to her level of commitment.
Optimize on principle. Think about the deeper psychological and motivational forces at work during the user’s journey through your flow. Hypothesize principle-driven improvements to that flow, and test whether your principle was right. This results in powerful learning that will carry over into other parts of your product.
Don’t just climb hills. Or risk getting stranded at a local maximum. Include in your testing program an appetite for radical, principle-driven experiments.
That’s it. Any questions or comments, get me on Twitter @mgmobrien or email me at firstname.lastname@example.org. Happy onboarding.
In this post, I want to look at the notion of rationality and gain an intuitive understanding of what we might mean when we say that something is more or less rational.
Broadly, there are two common senses in which the term is used.
One is in the sense of what’s called epistemic or theoretical rationality. Intuitively, this is the kind of rationality that aims for the truth. The practice of epistemic rationality involves making inferences from the things one believes to the things that are logically entailed by those beliefs.
The second sense of rationality, to be distinguished from epistemic rationality, is practical rationality, which is about deciding upon actions that one could take which cohere with one’s goals. From the perspective of practical rationality, believing something is just another action, and whether or not it’s rational to do so depends on the degree to which one can expect that action to take one closer to one’s goals.
For an example, let’s take Pascal’s Wager. The idea is that we can’t be certain whether God does or does not exist. But we can cultivate a belief one way or the other by selectively exposing ourselves to the right experiences. The epistemic rationalist would say that the thing to do is to collect evidence and reason carefully about it, and believe whatever is most strongly supported by the evidence. Pascal, in a famous example of practical rationality, looks at the cost/benefit analysis. Suppose God does exist. In that case, if I believe, then God will reward me with an eternity of bliss. But if I don’t believe, he will punish me with an eternity of hell. On the other hand, if God really doesn’t exist, then it only matters a little bit whether I believe or disbelieve, because there’s no eternity in heaven or hell to worry about. Given this situation, Pascal thinks, it’s in my best interest to believe in God, regardless of what the evidence says.
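Pascal's reasoning can be laid out as a small expected-utility table. The payoff numbers below are arbitrary placeholders (a large finite number standing in for eternity); only their ordering matters:

```python
# A toy expected-utility rendering of Pascal's Wager.
# Payoff values are arbitrary placeholders; only their ordering matters.
payoffs = {
    ("believe", "god_exists"): 10**9,      # eternal bliss (stand-in for infinity)
    ("believe", "no_god"): -1,             # small cost of cultivated belief
    ("disbelieve", "god_exists"): -10**9,  # eternal punishment
    ("disbelieve", "no_god"): 1,           # small benefit of disbelief
}

def expected_utility(action, p_god):
    """Probability-weighted payoff of an action over the two possible worlds."""
    return (p_god * payoffs[(action, "god_exists")]
            + (1 - p_god) * payoffs[(action, "no_god")])

# However small the probability assigned to God's existence, belief dominates.
for p in (0.5, 0.01, 0.0001):
    assert expected_utility("believe", p) > expected_utility("disbelieve", p)
```

This is practical rationality in miniature: the decision is driven entirely by the payoff structure, and the evidence bears on it only through the probability term.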
These are intuitive notions. To use them in an unambiguous way, we’ll have to analyze them further and try to define, as precisely as we can, the criteria of evaluation that will be used to measure some act for its practical or epistemic rationality.
It’s difficult to get a man to understand something when his salary depends upon his not understanding it. — Upton Sinclair
If we want to get a man to understand something, but he gets paid to not understand it, then what should we do? I think that the first step is to understand that this man is not trying to believe the truth. Only then can we start to develop decent strategies.
Smart people have observed that it’s harmful to a society to have members who hold false beliefs and employ enabling epistemologies. This is because all of our success, as individuals and as groups, depends on the quality of our reasoning. False beliefs and faulty epistemology lead to poor reasoning and bad decisions.
Religious institutions can be seen as propagators of false beliefs and faulty epistemology. The strategy that the so-called ‘new atheists’ typically employ is to use careful reasoning to show that the claims that these religions make about the world are false.
However, this strategy assumes that religious people are fundamentally aiming to believe what’s true, and are simply lacking information. I claim that in actuality, people are, and should be, aiming to believe what’s good for them. If that’s right, the ‘new atheist’ strategy is missing the point and is unlikely to work.
Religious belief can provide significant benefits — things like existential comfort, community, and a psychological toolkit effective at dealing with life’s difficulties. It’s difficult to get a man to understand something when his social and existential security depends upon his not understanding it.
If my answer to the Peter Thiel question is right, then the way to reduce the prevalence and limit the propagation of faulty epistemology is to reduce the costs of switching away from the institutions that propagate it, and to provide greater benefits for doing so.
This is a followup to a post in which I give an answer to Peter Thiel’s favorite interview question: “Tell me something that’s true, that almost nobody agrees with you on.” My answer is that if we want to be rational, we shouldn’t aim to believe what’s true; rather, we should aim to believe what’s good for us to believe.
An objector might say: “You can’t control your beliefs. Changing a belief is not like changing a shirt. Humans are forced into their beliefs by the evidence and experiences that they’re confronted with.”
In this post I’ll respond to that objection.
We can voluntarily influence our own beliefs
Let’s start by making a distinction. Let’s say that humans can have two kinds of voluntary control: we can have direct voluntary control, or we can have indirect voluntary control.
If we have direct voluntary control, we can simply choose to perform an act, and the act will occur. For example, I can simply choose to lift my hand, and my hand will rise.
Indirect voluntary control refers to situations in which we can influence or control an outcome, but only by indirect means. A good example is falling asleep. I cannot simply choose to fall asleep and immediately do so in the way that I can choose to lift my hand. But I can indirectly cause myself to fall asleep by lying in bed while I’m tired and filling my mind with relaxing thoughts.
The original objection holds that it doesn’t make sense to talk about where to aim our belief because we don’t have a choice in the matter: we lack voluntary control.
When it comes to direct voluntary control, I grant the point. I can’t choose to believe that polar bears are green.
But we do have indirect voluntary control over our beliefs. Every experience we have influences our beliefs. And we can choose, to some extent, what experiences we’ll have. We can choose whether to research a question or not; we can choose whether to reflect on an issue or not; we can choose whether to engage in a conversation or not.
Further, we often know which way our beliefs are likely to change if we take a certain action. Recall the example of the omnicidal aliens who will destroy Earth unless you swallow a pill that will cause you to believe that polar bears are green.
In practice, the process of influencing our beliefs often happens with less conscious awareness. A person might feel that a certain thing is bad or dangerous, and so avoid it. Or he might feel that something else is good or pleasurable, and so seek it. Consider the culture within many religions that censors and stigmatizes ideas which might rot the faith of the believers.
In any case, it’s clear that we do have indirect voluntary control over our beliefs. Since the path aimed at true belief sometimes diverges from the path aimed at beneficial belief, we can’t escape the obligation to choose an aim. In the next post, I’ll discuss why it matters that we get it right.
Peter Thiel says that there is one question he likes to ask of interviewees:
“Tell me something that’s true, that almost nobody agrees with you on.”
Here’s my answer:
If we want to be rational, we shouldn’t aim to believe what’s true. We should aim to believe what’s good for us to believe.
Most people either think that this is false or moot. Those who think it’s false say that it’s always rational to aim for the truth, even if the truth hurts. Those who think it’s moot say that the two aims amount to the same thing because true beliefs and good beliefs are equivalent.
But both views are incorrect. In this post I’ll address these traditional views, and show that they can’t be right.
Traditional view 1: always aim for the truth, even if it hurts
Let’s start with the commonly-held idea that we should always aim for the truth, even if it hurts.
This can be analyzed into two separate claims: we should aim to believe all truths (pursue truths), and we should aim to believe only truths (avoid falsehoods). (The distinction follows Paul Horwich, “The Value of Truth”, 2006.) Neither is acceptable as a universal maxim.
Regarding the ‘pursue truth’ aim: very many truths are too trivial or too costly to be worth pursuing. Take, for example, the billionth digit of pi. Most people haven’t pursued this nugget of knowledge, even though it’s easy to do (it’s 9). You’ll be forgiven for not pursuing a true belief on this matter (and many others like it).
Regarding the ‘avoid falsehoods’ aim: it’s not hard to construct scenarios in which any rational human would choose to aim for a false belief over a true one. For example, imagine you’re abducted by aliens who convincingly tell you that they will destroy Earth unless you come to believe that polar bears are green. They then offer you a pill and credibly inform you that taking the pill will cause you to believe that polar bears are green. Would you not be rational to aim for a false belief by taking the pill?
So the maxim that we’d be rational to pursue all and only truths fails. What we actually believe is that we’re rational to pursue some truths, and accept some falsehoods.
Traditional view 2: the true and the good are the same
The second commonly-held view is that aiming for the truth and aiming for what’s good amount to the same thing, because the truth is always good. The alien abduction scenario from the last section gives a counterexample. We might give a more earthly example to bolster the point. Imagine you’re being interrogated in a windowless room by a government agent. On the table between you is a folder. The agent credibly informs you that the folder contains information which would correct a false belief that you currently hold, if you were to read it. You are free to read it, he tells you, but if you do, he’ll kill you. What should you do? If you’re aiming for the truth, you’ll read the folder. If you’re aiming for what’s good for you, you’ll walk away. I think that this is sufficient to show that the truth is not always the good and the good is not always the true.
I believe these examples are sufficient to discredit the idea that we should always aim to believe the truth. In the next post I’ll address the objection that claims we don’t have a choice in the matter of what we believe.
All of our success depends on making the right decisions, yet our public decision-making tools are primitive. Can software revolutionize the pursuit of truth?
There’s something special about humans. In the vastness of space there’s something special about any kind of life, sure. But even here on lush and bustling Earth, we humans are different. We are the ones sending rockets into space and visiting the other animals at the zoo. We’re the ones that have mastered agriculture and built societies in which millions of our kind live their entire lives without ever knowing the fear of starvation — a luxury that the universe doesn’t grant to living creatures by default.
What makes us special is our extraordinary talent for reasoning. To an extent far surpassing any other creature, we successfully reason about what’s good for us and how to get it. That talent lets us act in such a way that our fields produce feasts of vegetables in autumn, rather than dirt and weeds. It is what lets us plan our retirement, and avoid turning cold wars into hot ones. It’s not an overstatement to say that all of our success depends upon our ability to reason.
And yet, as important as it is that we reason well, the way that we go about it as a public is a disaster. Look, for example, at the way America makes decisions about matters like gun control and health care for its citizens. It’s a political clusterfuck. Science and the academy are wonderful developments — but when it comes to influencing a democratic population, research papers are but more noise in the chamber. The debate around climate change is a stark illustration: the consensus among the scientific community is at 98%. Yet among the public, it’s closer to 48%. Something is broken.
The costs of this problem are too great to ignore. When we reason poorly, we erode our ability to make smart plans and achieve smart goals. We fumble, we fight, and we miss opportunities to make the world a better place. Worse, our animalistic infighting may carry us, blundering and bickering, into catastrophe. Ours would not be the first species to meet a brutal reckoning in this amoral universe.
But there’s reason for hope. The emergence of software and the wiring-up of the world began just a moment ago, on the grand view. The problem described above — call it the ‘aletheia problem’ after the Greek word for truth — is at its heart a matter of organizing information.
There are those who believe that software will revolutionize the pursuit of truth. Imagine a world in which reasoning — that sacred process of pondering, planning, conversing, and debating, all for the goal of getting things right — is scientific. Objective. Transparent. Where the ‘truth’ of a statement, understood as the opinion that you yourself would come to hold if you were to investigate the matter, is an objective value that you can just look up, instantly, the way you would look up a stock quote. And the structure of reasoning that lies behind that belief is mapped, the lattice of connections well-traversed by others, so that it’s transparently and asynchronously available, the way the pages of Wikipedia, though constantly in flux, are always there when you need them.
This would be a world in which the deciding voice is not the one that shouts the loudest or has the most money. It’s a world in which truth is not, in the end, a matter of opinion.
There is right now a confluence of forces that makes it plausible that such a solution to the aletheia problem is on the horizon.
Software is eating the world, and like history’s most significant revolutions, this is a process that will go through many iterations of creative destruction before it finds a lasting stasis. The aletheia problem will not be easy to solve. But we have ample reason to expect that the digital revolution, that software-powered assault on inefficiency in all its forms, will transform the way we pursue truth. Fifteen years ago there were still those who called the nascent internet a “fad”. Now, in 2014, we’ve seen enough to know that the reality is far more exciting than we imagined. The early internet wasn’t a fad. It was the first flicker of life.
It’s easy to be pessimistic about our collective future, but this attitude misses one important trend: software is eating the world.
The form of government which communicates ease, comfort, security, or, in one word, happiness, to the greatest number of persons, and in the greatest degree, is the best. — John Adams
On a spring afternoon in 2011, a Manhattan public school teacher named Alberto Willmore was arrested in front of his home in the Bronx for possession of marijuana. Willmore, an art teacher, was working on a piece of chalk art when he flicked a cigarette butt in the direction of the nearby sewer grate. A passing NYPD officer saw this and quickly pulled around. The officer restrained Willmore and retrieved a cigarette butt from the area. Willmore was arrested, and lab tests later showed that the cigarette butt retrieved by the officer contained 0.2 grams of marijuana.
This marked the start of an ordeal that would shatter Willmore’s career.
The day after his arrest, he received a letter in his school mailbox informing him that he would be suspended from teaching pending resolution of his case in the courts. At the time, he couldn’t know that his course through the legal system would be an absurdity of delays and extensions lasting nearly two years. It wasn’t until January of 2013, seven court dates and 579 days after the arrest, that Willmore’s case was resolved — with a dismissal.
Finally ready to resume teaching, Willmore was shocked to discover that the city’s Department of Education remained unsatisfied. The Department decided that the nature of the charges against him warranted further suspension pending a hearing. And in March of 2013, the Department made its final ruling. Willmore’s job was gone for good.
By all accounts, Alberto Willmore was an excellent teacher — one that a society bent on maximizing the happiness of its citizens would want to cherish. A student said of Willmore, “I honestly think he would just jump in front of a bullet for us. Like, he loved us, for real.” But instead of cherishing him, Willmore’s government struck him down and dragged him through the gutter.
And Willmore’s case is far from unique. In 2011 there were 759,000 marijuana arrests in the United States. That’s about one arrest every 40 seconds. Each arrest carries with it all of the expense, embarrassment, loss of liberty, and collateral damage to one’s personal and professional circumstances that Willmore experienced.
And here’s the rub. All of this is due to a misguided policy decision. Marijuana prohibition is a bad idea and an empirical failure. Marijuana is now recreationally legal in two states. The sitting president has said that in his opinion marijuana is no more dangerous than alcohol. After an extensive review of the medical research and a series of visits to people affected by marijuana prohibition around the country, CNN’s Sanjay Gupta recently changed his mind on marijuana, saying, “We have been terribly and systematically misled for nearly 70 years in the United States, and I apologize for my own role in that.”
But we live in a democracy, and prohibition has been our will. Only recently has the country begun to realize that we got this one wrong.
This error is exactly the kind of thing political theorists expect from democratic societies. Policy decisions are hard. The voting citizens in a democracy have jobs and lives to take care of; we don’t have the time, let alone the resources and motivation, to review every policy issue and come to an informed position. So we take shortcuts; we absorb our opinions from the zeitgeist. Or worse, we don’t even bother. We can be lazy and apathetic, and we are often misinformed and manipulated.
And as a result, we get our preferences wrong. Consider the relationship between a parent and child. A mother doesn’t take it as her job to satisfy all of her child’s preferences — the child often desires the wrong thing. We adults are in the same position. A moment’s historical retrospection is enough to shake free the illusion that we live at an exceptional point in history in which we finally have it all figured out. Smart people thought slavery was right before they realized it was wrong; they opposed universal suffrage before they were for it; the majority of Americans thought invading Iraq was a necessity before we realized we shouldn’t have done it; well-meaning legislators thought alcohol prohibition was good policy before they realized it was not. Larry Page, CEO and co-founder of Google, offers a reflection. “Consider our own history,” he said. “When we started Google, it wasn’t really obvious that what we were doing wouldn’t get regulated away. Remember, at the time, people were arguing that making a copy of a file in a computer’s memory was a violation of copyright. We put the whole web on our servers, so if that were true, bye-bye search engines. The Internet’s been pretty great for society, and I think that 10 or 20 years from now, we’ll look back and say we were a millimeter away from regulating it out of existence.”
For as long as the idea of democracy has been around, theorists have seen the ignorance of the electorate as its inextricable flaw. “The best argument against democracy,” Churchill is rumored to have said, “is a five-minute conversation with the average voter.”
But this problem may not be inextricable after all. There’s a revolution going on that Churchill could not have foreseen. It’s been called the digital revolution, the software revolution, and the information revolution. Whatever you want to call it, its impact is unmistakable, and it’s only just begun. The most illuminating summarization of what’s happening may be Marc Andreessen’s famous phrase: software is eating the world. The idea is simple. For any problem within the sphere of human interest that involves the handling of information, from selling a dresser, to finding a spouse, to delivering advertisements to consumers, software presents the opportunity to do things far more efficiently than traditional methods. And whatever software can do better, software inevitably will do better.
The ignorance of the electorate is fundamentally an information problem. As such, it’s a problem that’s ripe for consumption by software. Framed in terms of a solution, it is the problem of discovering the informed preferences of the individual voters. Here’s what I mean by that. As our evolving attitudes towards marijuana prohibition illustrate, our current preferences aren’t the final answer — we may be misinformed, misled, or ignorant. And in that case, our votes will favor policies that differ from those that we’d support if we knew what was good for us. But our informed preferences — those policies that we would favor if we were to take the time to investigate the question thoroughly — are hard to get to. We discover them only by a process of labor-intensive inquiry, which we often don’t have the ability or inclination to undertake. But the days aren’t getting any longer, and the issues aren’t getting any simpler. This is why theorists have long seen democracy’s ignorance problem as inescapable. The challenge for software, then, is to find new ways to streamline the path to our informed preferences. If software can do that, then the electorate’s ignorance will be radically undermined.
This is no easy problem to solve. However, if we hold this problem in mind while taking a careful look at the recent trajectory of progress in information technology and brain sciences, I think we will see that only a very brash person would predict that the path to our informed preferences will remain as inefficient as it has been thus far.