Peter Thiel's contrarian thought exercise

Mr. Thiel shows, again and again, how he likes to “flip around” issues to see if conventional wisdom is wrong, a technique he calls Pyrrhonian skepticism.

“Maybe I do always have this background program running where I’m trying to think of, ‘O.K., what’s the opposite of what you’re saying?’ and then I’ll try that,” he says. “It works surprisingly often.” He has even wondered if his most famous investment, Facebook, contributes to herd mentality.

When I remark that President Obama had eight years without any ethical shadiness, Mr. Thiel flips it, noting: “But there’s a point where no corruption can be a bad thing. It can mean that things are too boring.”

When I ask if he is concerned about conflicts of interest, either for himself or the Trump children, who sat in on the tech meeting, he flips that one, too: “I don’t want to dismiss ethical concerns here, but I worry that ‘conflict of interest’ gets overly weaponized in our politics. I think in many cases, when there’s a conflict of interest, it’s an indication that someone understands something way better than if there’s no conflict of interest. If there’s no conflict of interest, it’s often because you’re just not interested.”

When I ask if Mr. Trump is “casting” cabinet members based on looks, Mr. Thiel challenges me: “You’re assuming that Trump thinks they matter too much. And maybe everyone else thinks they matter too little. Do you want America’s leading diplomat to look like a diplomat? Do you want the secretary of defense to look like a tough general, so maybe we don’t have to go on offense and we can stay on defense? I don’t know.”

Maureen Dowd, NYT interview with Peter Thiel

Peter Thiel: we should be more worried about the lack of automation than excess automation

From an interview with Eric Weinstein on The Portal podcast, episode 1:

If we have runaway automation, and if we’re building robots that are smarter than humans and can do everything humans can do, then we probably have to have a conversation about a universal basic income or something like that. And you’re going to end up with a very weird society. I don’t see the automation happening at all.

When one actually concretizes it, it’s not quite clear how disruptive the automation that’s happening really is. It’s a version of the tech stagnation thing. It’s always: for the last 40 or 50 years things have been slow, and we’re always told it’s about to accelerate like crazy. That may be true. In some ways I hope that’s true. But if one were simply extrapolating from the last 40 to 50 years, perhaps the default is that we should be more worried about the lack of automation than excess automation.

Morality for sociopaths: why to just be a good person

In one of my favorite Paul Graham essays, “The Ronco Principle,” we learn something basic about Graham’s perspective on life.

In almost every domain there are advantages to seeming good. It makes people trust you. But actually being good is an expensive way to seem good. To an amoral person it might seem to be overkill.

The question he’s asking is: is it worth it to go beyond seeming like a good person, and to actually be one? 

He gives the example of Ron Conway, the legendary investor. 

No one, VC or angel, has invested in more of the top startups than Ron Conway.

And yet he’s a super nice guy. In fact, nice is not the word. Ronco is good. I know of zero instances in which he has behaved badly. It’s hard even to imagine.

Is it merely a coincidence that the most successful investor happens to be such a nice guy? Taking an outside view, Graham notes that Conway isn’t an outlier. 

Though plenty of investors are jerks, there is a clear trend among them: the most successful investors are also the most upstanding. 

So what might be going on? Taking an inside view, it’s easy to draw a connection between Conway’s goodness and the good things that have come his way: 

All the deals he gets to invest in come to him through referrals. Google did. Facebook did. Twitter was a referral from Evan Williams himself. And the reason so many people refer deals to him is that he’s proven himself to be a good guy.

But couldn’t Conway and others have gotten the same benefits by being good strategically? Surely there are some occasions on which it’s beneficial to capitalize on an advantage, even if the other person wouldn’t call it “fair”. Maybe they won’t know. Maybe they can’t do anything about it.  

Graham offers two factors that weigh in favor of being good as a rule rather than opportunistically. 

The first is transparency. The more your actions and motivations will be on display for all to see, the more it matters that you be good. Put differently, the greater the transparency, the fewer opportunities for you to screw people over and not get caught. Graham notes that transparency seems to be increasing in the world as a general trend. 

The second factor is chaos. The more unpredictability there is in the world, the less capable you are of deciding accurately when it’s okay to be an asshole. The person who has no power to help or harm you today might be in a very different position next year, and the farther into the future you look, the less you can predict how things will be.

Both of these factors weigh against being a moral opportunist and in favor of just being a good person. 

We might put it another way: there are long-term compounding benefits to simply being a good person. Humans reciprocate. If you are good to people as a matter of routine, then over time the number of people in the world that you’ve been good to will increase. You’ll be more and more likely to encounter people that you’ve been good to. They will want to be good to you. Furthermore, you’ll gain a reputation for being good. People that you’ve never interacted with will assume, based on your reputation, that you’re going to be good to them, so they’ll be predisposed to be good to you. 

A world where the people you encounter have a heightened probability of being friendly to you is a good world to operate in. So even for the sociopath motivated only by self-interest, it’s probably worth it to just adopt the rule of being a good person. Invest your scheming energy elsewhere. 
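To make the compounding concrete, here is a toy simulation (my sketch, not from Graham's essay; the transparency level, the cost of being good, and the payoff from exploiting are all invented numbers). A consistently good agent slowly builds a reputation that pays off on every future encounter, while an opportunist who exploits the seemingly powerless gets caught often enough that the reputation income dries up.

```python
import random

def simulate(strategy, rounds=10_000, transparency=0.7, seed=0):
    """Toy model: one focal agent repeatedly meets strangers.

    strategy(partner_seems_powerless) returns True to act well, False to
    exploit. Acting well always costs a little; exploiting pays off now,
    but if the defection is observed (probability = transparency) the
    agent's reputation drops, and reputation is what makes future
    partners treat the agent well.
    """
    rng = random.Random(seed)
    reputation = 0.5  # chance that a stranger treats the agent well
    payoff = 0.0
    for _ in range(rounds):
        partner_seems_powerless = rng.random() < 0.5
        if rng.random() < reputation:  # partners reciprocate reputation
            payoff += 1.0
        if strategy(partner_seems_powerless):
            payoff -= 0.1  # being good has a small cost
            reputation = min(1.0, reputation + 0.001)
        else:
            payoff += 0.5  # exploiting pays immediately...
            if rng.random() < transparency:  # ...unless it's observed
                reputation = max(0.0, reputation - 0.05)
    return payoff

always_good = lambda powerless: True
opportunist = lambda powerless: not powerless  # exploit the "powerless"

print("always good:", simulate(always_good))
print("opportunist:", simulate(opportunist))
```

With these made-up parameters the consistently good agent ends up far ahead over a long horizon, and raising the transparency parameter widens the gap, which is Graham's first factor in action.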

Towards a science of ethics

Ethics is the study of goodness and badness.

If we could go back in a time machine to the year 1700 and offer 68-year-old John Locke a trip to our present day, I bet he’d be excited to see the development of the empirical sciences and the technological wonders that have sprung out of them. Thermodynamics, optics, chemistry, electricity — our mathematical mastery over phenomena that were in his day the subject of speculative philosophy would probably bring him to tears. But when he asked about ethics, I think he’d be saddened by our response. We’ve made no scientific progress in ethics. It’s still squarely in the realm of philosophy, and our attempts to turn it into an empirical science have failed.

This much we know: what’s good and bad must depend upon facts about consciousness. If there were no systems in the universe that had consciousness — the fact of there being something that it’s like to be that system — then there could be no good or bad, and there would be no ethics. There would just be things (or, somewhat more precisely, there would be everything, and nobody to take a perspective that would divide everything up into discrete things).

What we don’t know is basically the first thing about how consciousness works. We certainly don’t understand how affective valence — the subjective experience of goodness and badness — works. Without this understanding, there simply is no hope of developing a science of ethics. That’s why we need a science of consciousness.

Survival is not enough

Imagine that the year is 12019 — ten thousand years in the future. Humans long ago populated the galaxy — and almost as long ago, were effectively enslaved, Matrix-style, by a superior intelligence. They keep us alive in order to harvest a resource from us. They are really good at keeping us alive — so good that we’re effectively immortal. This is the case for billions upon billions of humans — alive, immortal, and enslaved.

And suppose that, to our terrible misfortune, the resource that this alien intelligence wishes to harvest from us can be extracted from each human in proportion to the magnitude of suffering experienced by that person. As a result, the aliens are maximizing not just our survival, but also our suffering. This is just about as bad as it gets.

Which would you prefer: that world, or the world in which it’s 12019 and all humans are extinct? Do you want existence with maximal suffering, or non-existence?

The point is that survival is not enough. There’s something else that has to accompany survival in order for the world to be good. If we want to evaluate the goodness of a possible world, our evaluation must take into account not just the bare existence of humans, but also the degree of psychological suffering or happiness that they experience.
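One way to make that evaluation concrete (my sketch, with invented numbers, not a formulation from any source): score a candidate world by summed wellbeing rather than by headcount. A survival-only score prefers the enslaved world; a wellbeing-weighted score prefers extinction to it.

```python
# Sketch: scoring candidate worlds. All numbers are invented; "wellbeing"
# stands in for the affective valence discussed above, on an arbitrary
# scale where 0 is neutral and negative values are suffering.

def survival_score(population, avg_wellbeing):
    return population  # counts bare existence only

def wellbeing_score(population, avg_wellbeing):
    return population * avg_wellbeing  # existence weighted by valence

enslaved_world = (10**12, -100.0)  # billions alive, all suffering terribly
extinct_world = (0, 0.0)           # nobody exists

# Survival-only scoring prefers the enslaved world to extinction...
assert survival_score(*enslaved_world) > survival_score(*extinct_world)
# ...while wellbeing-weighted scoring reverses the ranking.
assert wellbeing_score(*enslaved_world) < wellbeing_score(*extinct_world)
```

The point isn't this particular formula (simply summing valence has well-known problems of its own); it's that any credible score needs the second argument at all.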

I think that Sam Harris uses this thought experiment or something like it in his argument for a science of ethics. 

Why can’t we agree?

There’s something really interesting to me about the Dan Dennett / Sam Harris free will debate. It’s not so much the content of the debate, and it’s not so much that they disagree. It’s that they seem so close in their views yet have been unable to agree. They are both philosophers who pride themselves on clarity of thought, amenability to reason, and willingness to change their minds when shown they’re wrong.

And yet, in this debate, not only have they been unable to come to an agreement, they actually damaged their friendship for two years over it.

In the podcast episode where they hash it out, the disagreement begins to look like a verbal dispute: a trivial one that would be resolved by unraveling an ambiguity in terminology. But if that’s the case, why weren’t these two experts able to identify and dissolve it?

Maybe it’s that our tools are bad. We use loose, ambiguous, unstructured natural language, spoken or written, and in either case the actual parsing for semantics happens inside each participant’s head, out of view of the other.

What if we could get more of that parsing happening out in the open?

One of the world’s great unsolved problems

Elon Musk does everything he does because he wants to ensure the survival of the species. He thinks this is the most important problem to be solved. To that end, he runs Tesla to solve energy, and he founded SpaceX to get human civilization onto other planets. I say there’s another problem that’s just as big, but far fewer people are talking about it.

It’s the problem of how to get people to do what’s good for them.

So often there’s a course of action available to a person that would be best for them, and they don’t take it. Maybe because they don’t know it’s the right thing to do. Or maybe they do, but don’t know that they can. Or maybe they know they should and know they can, but don’t have the willpower.

When you look at humans at the level of groups rather than individuals, the problem becomes scarier still.

The problem can be broken down into a number of sub-problems:

  1. How to get people to have true beliefs about what’s really good for them
  2. How to get people to do what they believe is good for them