Yuval Noah Harari is an unusual historian whose focus is on the future. He filled UCSB Campbell Hall for an event sponsored by UCSB Arts and Lectures and the SAGE Center for the Study of the Mind. My own sponsor at UCSB, Jonathan Schooler, introduced Harari.
Schooler noted that historians usually focus on tiny periods of the past, while Harari asks the big questions about big spans of time.
Harari has asked how our initially insignificant species came to dominate the globe. In one phrase: "Flexible cooperation." Bees cooperate in large numbers, but they are not flexible.
How do humans do it? The answer is timely in our current era of fake news and alternative facts. Part of what makes us special is our ability to believe in shared fictions: religion, corporations, nations, money. These are powerful fictions.
Harari started his talk with a bang, saying that we are among the last generations of Homo sapiens. A century or two at most remains for our species. Something new is coming, and it will be more different from us than we are from Neanderthals or chimpanzees.
Authority is shifting away from humans to algorithms.
Humanism represented a major advance over traditional philosophies. Humanism has served us for the past 200-300 years. Humanism leaves no room for a god or for Big Brother. Meaning and authority come from feelings and free choices of humans. You should follow your heart.
Humanist politics? The voter knows best. There is no higher authority. Not the Pope. Not a council of Nobel laureates.
Humanist economics? The customer is always right. Voting is done with credit cards and cash. He asked us to imagine that Toyota or Ford hired the smartest people in the world to design the best car. Nobel laureates. Artists. Oscar winners. They would produce the Toyota Perfect or the Ford Perfect. They would turn out millions of them and ship them to dealer lots around the country. But if customers don't buy it, it is not the best car. The customer is always right.
The Soviet politburo did try to decide which was the best car and order that one car to be made. That system failed.
Humanist art? Every first-year art student is familiar with Marcel Duchamp's 1917 "Fountain," which is just a manufactured urinal mounted for display.
In that first-year art class the students will be asked, "Is it art?" Heated debate always results. Humanism says that beauty is only in the eye of the beholder.
Humanist ethics? He shows an image of two men kissing. "If it feels good, do it!" In the past, ethics came from the Pope or the Bible. Humanism says only human feelings matter. What if a man cheats on his spouse? Whose feelings matter? Good question. But it still comes down to human feelings.
Even the fundamentalists are getting on board. There is one rare day of harmony each year in Jerusalem: the Gay Pride Parade. It unifies Muslims, Christians, and Jews, because they are all against it! They all say, "It hurts our feelings." They appeal to their feelings, not to any commandments.
Humanist education? Think for yourself. In the past, the Bible was offered as the one source for all knowledge. Now our own thoughts and feelings are most important when it comes to what we should learn.
But all good things must come to an end. The very science that Humanism supported has come to show that the basic principles of Humanism are based on false concepts. In particular, free will. Feelings are biochemical processes. Given enough data, an external entity can understand my feelings and make better decisions for me than I can!
The universe only allows for two possibilities: Determinism and randomness. There is no room for "freedom". Humans invented the idea of freedom just as they invented the idea of heaven and hell. Sensations and emotions are just biochemical calculations.
We calculate probabilities to make our decisions. He gives the example of a baboon that sees a banana tree with bananas on it. But beyond that tree is a lion. The baboon will starve if it does not get the bananas, but it will die if the lion gets to it first. It needs data: the distances to the bananas and to the lion, the ripeness of the bananas, its own speed, the lion's speed. Is the lion awake? Is it hungry?
The baboon's entire body and brain are the calculator, but it does not consciously do a calculation. The answer appears as a feeling: courage if the answer is to go for the bananas, fear if the risk is too high.
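The baboon's implicit computation can be caricatured as an expected-value calculation. Here is a minimal sketch; every variable name, number, and weighting below is invented for illustration, not anything Harari specified:

```python
# Illustrative only: a toy expected-value model of the baboon's dilemma.
# All inputs (distances, speeds, probabilities, payoffs) are made-up numbers.

def go_for_bananas(dist_bananas, dist_lion, my_speed, lion_speed,
                   lion_awake_prob, lion_hungry_prob):
    """Return True ("courage") if the expected payoff of going is positive."""
    time_to_bananas = dist_bananas / my_speed
    time_for_lion = dist_lion / lion_speed
    # Chance the lion both notices and bothers to chase.
    p_chase = lion_awake_prob * lion_hungry_prob
    # Chance of being caught, given a chase: a crude head-start heuristic.
    p_caught = p_chase if time_for_lion < time_to_bananas else p_chase * 0.2
    reward_eat = 1.0      # payoff for eating the bananas
    cost_caught = -10.0   # payoff for being eaten
    expected = (1 - p_caught) * reward_eat + p_caught * cost_caught
    return expected > 0   # True -> "courage", False -> "fear"

# A sleepy, distant lion: the feeling that surfaces is courage.
print(go_for_bananas(dist_bananas=20, dist_lion=80, my_speed=5,
                     lion_speed=15, lion_awake_prob=0.2, lion_hungry_prob=0.5))
```

The point of the analogy is that no explicit arithmetic happens in the baboon; the body runs something like this computation and hands consciousness only the final sign of the result, as an emotion.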
Up until now no one else could know me better than I do. Humanism made sense. The KGB tried to know you by watching your every move, but it had neither the biological understanding nor the computing power.
Listening to your own feelings works better than listening to the Bible. Humanism made sense.
We have ever better algorithms and ever more computing power. We have a merging of two tidal waves now: Biotechnology and information technology. Facebook or Google will understand you better than you understand yourself! They will know how you feel and why. Authority will shift from feelings to algorithms. Health decisions will come from algorithms.
Angelina Jolie tested positive for a mutation in the BRCA1 gene. She was told this meant an 87% chance of future breast cancer. She felt healthy, but Big Data told a different story. Jolie opted for a double mastectomy, went public with the decision, and encouraged other women to get tested.
Theism was the first age. Authority came from the Bible and a god up in the clouds.
Humanism was the second age. Authority came from our own feelings.
Dataism is our new age. Listen to Google. It knows how you feel and why. Authority is back in the clouds!
What to learn? What to eat? Whom to vote for? Where to live? Where to go? What book to read? When to sleep and wake? Whom to marry?
As for books, the answer was once easy: The Bible.
Humanism said there are better books. Go to a bookstore. Flip through the pages and get a feeling whether this is a good book to read.
Now you can go to Amazon's virtual bookstore. They have statistics on you and your book choices. They will recommend three books. But Amazon needs more data on you. As you read a book on Kindle, Kindle is reading you! Which pages you read fast and slow. Where you stop. Whether you don't come back. But even this is still primitive. It is crucial to get biometric data.
Face recognition can tell if you are laughing, crying or angry. But it is still primitive. It doesn't get inside the body. We will have biometric sensors on or in the body. For blood pressure and brain activity. When you read War and Peace you have forgotten most of it by the end! But Amazon remembers.
Perhaps in the future in North Korea, people will be required to wear a biometric bracelet. If they see an image of the leader and feel anger it will be the end for them.
A couple may have reached a point in their relationship where they either need to get married or separate. This is a choice and a dilemma. In the Middle Ages they might have gone to the priest. In the Humanist Age they would have been told to follow their hearts.
But in the 21st century maybe they should listen to Amazon or Google! They have followed you. Your emails. Your phone calls. Your searches. Your responses on dates. Data on your partner. Google might give you an 87% recommendation to get married! Google might also know that you don't like Google's advice! That you think you can do better.
Google knows that you weigh looks too much. This goes back to your evolution on the African savanna. You give looks 30% weighting. You should give looks 11%. Google doesn't have to be perfect to be useful. It just has to be better than humans on the most important decisions.
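The re-weighting idea can be pictured as a simple linear score over partner attributes. A toy sketch follows; the attributes, scores, and non-looks weights are my own invented placeholders, with only the 30% and 11% figures taken from the talk:

```python
# Illustrative only: a linear compatibility score with re-weighted attributes.
def compatibility(attrs, weights):
    """Weighted sum of attribute scores in [0, 1]; weights sum to 1."""
    return sum(attrs[k] * w for k, w in weights.items())

partner = {"looks": 0.9, "kindness": 0.5, "shared_values": 0.4}

# Your gut weighting, inherited from the savanna: looks count for 30%.
gut = {"looks": 0.30, "kindness": 0.35, "shared_values": 0.35}
# The algorithm's correction: looks should count for only 11%.
corrected = {"looks": 0.11, "kindness": 0.445, "shared_values": 0.445}

print(compatibility(partner, gut))        # score under your gut weighting
print(compatibility(partner, corrected))  # score under the corrected weighting
```

For a good-looking but otherwise mediocre partner, the corrected weighting pulls the score down: the same data, scored with better weights, yields a different recommendation.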
We trust them ever more as they make good choices. We do that with Waze or with Google maps. Our instinct is to turn right, but Google says turn left. It is longer, but there is less traffic. The result is that people lose their sense of direction!
The brain is not the mind. We are far from understanding the brain. We are even further from understanding the mind. He shows an image of a Buddhist monk meditating.
The mind may not be algorithms. Technology is not deterministic. We can use the same technology to produce very different societies. A night satellite image of Asia shows a dark area for North Korea.
They have the same technology for lighting and electricity as adjacent South Korea and China. But they did different things with that technology.
If you don't like some future possibility, do something about it!
Don't algorithms already rule the world? In many cases, yes. Loan applications are processed by algorithms with no human intervention. This is good and bad. This gets rid of certain kinds of prejudice and discrimination. But it makes discrimination very personal.
What will be the effect on children? We have no idea of the future world. In the Middle Ages a parent might teach a child to make shoes knowing that will always be a valuable skill. But we have no idea what skills will be valuable when our children get older.
What if consciousness is our true master? The soul or spirit is an old idea; the life sciences broke it down.
How will a self-driving car know whether to hit a full school bus or veer off a bridge? Philosophers have asked such ethical questions for thousands of years. But in an instant of driving crisis, a choice must be made. You use your gut feeling rather than an application of Kant! Tesla could leave it to the market. They could make two versions: the Tesla Altruist and the Tesla Egotist!
In the book he talks of how Google can discover epidemics in progress based on searches. What will this do to human relationships? What if an algorithm is not able to solve a problem? We are in new territory with no historical precedent. This is the first time we are trusting non-human agents with our decisions.
Humans are losing their social abilities and even their body awareness. In the past we went to the forest to pick mushrooms or apples. We had keen tastes for whether something was safe to eat. Now we eat while watching TV and hardly taste our food. Manufacturers make ever stronger tastes to overcome these distractions.
We use technology to compensate for our losses. Software is already better than we are at reading facial expression and tone of voice. Biometric sensors will go further. You will be able to ask Google how you are enjoying your date!
What about Cambridge Analytica engineering the Brexit vote and the Trump victory? Who owns the data is a big question. We used to care a lot about who owned the land. We cared about who owned the means of production. Data ownership is huge now. People are giving up their most important asset for little in exchange. Maybe some funny cat videos. It is like when Manhattan was bought for a few beads. We are selling Manhattan for a few funny cat videos!
Harari asked what is your ethical metric? For him, suffering is the metric. This may not be for the master algorithm. Suffering is not the metric of the market. We already know that we can't trust the market with the global climate.
[I would argue that the market should be forced to include externalities such as the impact on third parties and on the environment!]
Someone asked about the image of the Buddhist monk in one slide. Is meditation our salvation? He would like to say yes. Harari practices Vipassana (insight) meditation for two hours each day. He has done so for 17 years. Each year he goes on a 30-60 day retreat. This past year it was not until the end of December that he found out about the Trump election.
He knows how difficult meditation is. It is not "scalable" in his view. Funny cat videos are more likely to grab our time and attention. Focusing on your breath gives insights into the nature of consciousness and the nature of reality. It is more rewarding. But it is very hard. After 5-10 seconds your mind will wander. If it is so difficult to keep your focus for just a few seconds on something so simple as your breath, how can we expect people to stay focused on more complex problems?
He affirmed the point in the introduction: Fiction is the most powerful force in the world.
I should note that I am a Humanist, yet for years I have recognized the limitations of Humanism. Democracy and free choice have not given us the best leaders. Some statistics suggest that arranged marriages are more likely to last than marriages made by following your heart.
Public policy driven by democracy rewards short-term gratification over long-term investment. People choose leaders who give them cheap gasoline rather than taxing negative impacts and investing in sustainable transportation. The same is true of financial markets, which reward short-term gains and respond positively to tax cuts, when in fact corporations would benefit enormously from public investment in infrastructure, universal healthcare, science, and innovation, all of which require tax money.
While I may call myself a Humanist most of the time, my true philosophy in recent years has shifted toward what is often called Transhumanism: the idea that humans may not know, or be able to decide, what is best for humans or for the sustainability of the planet's other inhabitants. I am open to algorithms, data, and computation that can do better for us.
One final note: I asked Harari afterwards about what is called the Hard Problem of Consciousness, the fact that we have phenomenal consciousness at all. He is acutely aware of the problem. He said we are very far from solving it. We have much more to learn!