ANALYSIS: AI AND HAPPINESS, GENERAL DATA PROTECTION REGULATION (GDPR), PERSONAL DATA, PRIVACY COMPUTING USING FEDERATED LEARNING AND TRUSTED EXECUTION ENVIRONMENT (TEE)

In the earlier chapters, we covered short-term AI deep learning applications, such as optimizing financial metrics, classroom grades, health diagnostics, and the like. "Isle of Happiness" tackles a bigger question---and challenge: Can AI optimize our happiness? This is an incredibly complex and tough problem. The ambiguous outcome in this story suggests that AI's efforts to improve our happiness will still be a work in progress by 2041, with progress and early prototypes, but it makes no prediction about when, how, or even whether the problem will be solved.

Why is this problem so tough? I can think of four reasons.

First is the problem of definition. What is happiness exactly? There are countless theories of happiness, from Abraham Maslow's hierarchy of needs to Martin Seligman's positive psychology. Defining happiness will be even more complex by 2041, when society will have progressed, through AI technology, to a point where living standards for most if not all people are comfortable. Once people's basic needs are satisfied, what constitutes happiness? That definition may still be evolving around 2041.

The second challenge is the problem of measurement. Happiness is abstract, subjective, and individualistic. How can we quantify our happiness and continuously measure it? And if we could measure it, how would AI guide our lives to be happy?

The third problem is data. To build powerful happiness-enabling AI, we will need extensive data, including the most personal forms of data. But where will this data be stored? The General Data Protection Regulation (GDPR) is a new standard gaining acceptance, and its goal is to have us each take our own data back under our control. Will GDPR accelerate or impede this grand quest for improving our happiness? What other approaches might be possible?
Finally, there is the question of safe storage. How can we find a trusted entity to store that data? History tells us that trust is possible only if that entity's interests are fully aligned with the users'. How would such an interest-aligned entity be found or created?

Now you can see why happiness-inducing AI is extremely hard! Let's dig into the four problems and possible solutions.

WHAT IS HAPPINESS IN THE ERA OF AI?

Setting aside AI for the moment, let's ask the most basic question: What does happiness mean anyway? In 1943, Abraham Maslow published his seminal paper "A Theory of Human Motivation," which described what is now known as "Maslow's hierarchy of needs." This theory is usually illustrated as a pyramid describing human needs from the most basic to the most advanced level; each lower-level need must be fulfilled in order to move toward a higher-level need. The levels are "physiological," "safety," "belonging and love," "esteem," and "self-actualization."

\[**Maslow's hierarchy of needs**---our happiness grows from the bottom up, as more basic needs are satisfied.\]

Today, many people feel material wealth is the most significant component of happiness. Material wealth relates mostly to the bottom two layers of the pyramid, where sustenance and financial security are ensured by material wealth. Some people even associate material wealth with higher-level needs like power, esteem, and a sense of accomplishment. But interestingly, research suggests that chasing material wealth cannot produce sustained feelings of happiness. Psychologist Michael Eysenck introduced the term "**hedonic treadmill**" to describe our tendency to always readjust to a fixed level of happiness, despite monetary and possession gains (or losses).
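The hedonic treadmill lends itself to a toy model: treat happiness as a personal set point plus an excess that adaptation steadily erodes. The sketch below is purely illustrative; the baseline, boost, and decay rate are invented numbers, not measurements from any study:

```python
# Toy model of the hedonic treadmill: after a windfall, reported
# happiness decays exponentially back toward a personal set point.
# All parameter values here are illustrative, not empirical.
def simulate_hedonic_treadmill(baseline, boost, decay_rate, months):
    """Return the happiness level for each month after a one-time windfall."""
    levels = []
    excess = boost  # how far above baseline the windfall lifted us
    for _ in range(months):
        levels.append(baseline + excess)
        excess *= (1 - decay_rate)  # adaptation erodes the gain each month
    return levels

levels = simulate_hedonic_treadmill(baseline=5.0, boost=3.0,
                                    decay_rate=0.4, months=12)
# The initial lift fades month by month; within a year the simulated
# person is back near the baseline they started from.
```

The point of the model is the shape of the curve, not the numbers: no matter how large the boost, the trajectory returns to the set point, which is exactly the pattern the lottery studies describe.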
Studies have shown that people who come into sizable wealth (such as winning a lottery) are happy for a few months, but after that, their happiness usually drops back to the baseline level they had before coming into wealth. This is what doomed Crown Prince Mahdi's quixotic attempt to build his AI-enabled paradise in "Isle of Happiness"---his AI aimed to improve the guests' "hedonic happiness." When they first arrived on the island, the guests indulged in various pleasurable activities that produced short-term bursts of happy feelings, but over time, they were back on the hedonic treadmill, always treading but never achieving lasting happiness.

In contrast with **hedonic happiness** (material wealth, pleasure, enjoyment, comfort), people who advance above the bottom two levels of the Maslow hierarchy pursue **eudaimonic happiness** (growth, meaning, authenticity, excellence). Maslow's hierarchy states that only after the levels associated with hedonic happiness are satisfied can people move up to eudaimonic happiness. In other words, once our material needs are satisfied, we will seek to belong, to love and be loved, to be respected, and to be self-actualized. This is why Princess Akilah wanted to replace Mahdi's hedonic AI with her eudaimonic AI---to help deliver to each person individualized happiness that is more experiential and purposeful, with love and authenticity. It was in this context that an eclectic group of high-achieving visitors was invited to take part in the experiment by becoming inhabitants of the island.

Take Viktor. Before coming to the island, the successful entrepreneur was stuck on his hedonic treadmill. While he had achieved material wealth, success, and esteem, something was missing from his life. He was far from **self-actualized** and had sought refuge in mind-altering substances and other hedonistic escapes.
These circumstances made Viktor an ideal candidate for Akilah to recruit to the island, where she would try to elevate him to eudaimonic happiness. As the AI got to know Viktor during his stay on the island, he was given opportunities to build a relationship with Akilah. He was also put in situations that could satisfy his innate desire for adventure and offer boosts of self-esteem. He was offered the chance to seek self-actualization by using his game-designing skills to improve the Isle of Happiness. Viktor's goals were uniquely his, and the AI tailored opportunities for him by understanding him and those goals. Whereas Viktor sought adventure, another person might have preferred just the opposite---serenity, for example---and for that person the AI would propose completely different experiences.

By the end of the story, we knew Viktor would be happy, not because he possessed more material wealth, but because he was leading the life he wanted, growing his relationships with others and getting a chance to do important work that might help people. For him, happiness was not a binary state, but an ongoing pursuit.

Like the other stories in the book, "Isle of Happiness" is set in 2041. By then, societies will be richer, thanks to technological advancements, with AI taking over routine tasks, and robotics and 3D printing producing goods cheaply (this concept is known as **plenitude**). If society is governed by good leaders, government will take care of all the people, assuring them material sufficiency. By 2041, in wealthier societies, people will find that their definition of happiness is evolving, as they advance from hedonic to eudaimonic happiness.

HOW CAN AI MEASURE AND IMPROVE OUR HAPPINESS?

In order to build an AI to maximize happiness, we first have to learn how to measure it. I can envision three ways to do this, using technologies that are within reach today. The first one is simple---we ask people.
In the story, as the new inhabitants arrived on the island, they were required to answer a series of questions. Taking stock of people's happiness by asking questions is possibly the most reliable measure, but it cannot be done continuously, so there must be other measures as well.

The second way to measure happiness uses the ever-advancing technologies of IoT devices (cameras, microphones, motion detection, temperature/humidity sensors, and so on) to capture user behavior, facial expressions, and voice, and then applies "affective computing" techniques to recognize each user's emotions from the IoT data. Observing people's faces, affective computing algorithms can detect both macro-expressions (usually lasting 0.5--4 seconds) and micro-expressions (0.03--0.1 seconds). These expressions reveal emotions. Micro-expressions often occur when people try to conceal their emotions, and because they are extremely short-lived, humans usually miss them, while affective computing algorithms can recognize them accurately. Other useful physical features for estimating an inhabitant's emotions include the hue of different parts of the face, which is caused by localized blood flow, and the pitch, loudness, tempo, emphasis, and stability of the voice. In addition, the trembling of the hands, dilation of the pupils, welling up of tears, patterns of blinking, humidity of the skin (pre-sweating), and changes in body temperature are all useful features by which to estimate someone's state of mind.

With so many features, AI will be able to detect human emotions (happy, sad, disgusted, surprised, angered, or fearful) much more accurately than people can. This recognition can be further enhanced by watching multiple people over time. In the story, for example, the AI observed that both Viktor and Princess Akilah were developing feelings for each other.
This could lead the AI to score them both higher for "belongingness and love needs" on the Maslow hierarchy. AI's ability to recognize human emotions already exceeds the average human's, and this gap will grow much wider by 2041.

The third way to measure happiness is to continuously check levels of hormones that correlate with particular sensations and feelings. In the story, each inhabitant wears a transdermal biosensor membrane with a matrix of under-the-skin microneedles and an electrochemical sensor that continuously measures hormone levels as partial measures of happiness. For example, serotonin is correlated with well-being and confidence, dopamine with pleasure and motivation, oxytocin with love and trust, endorphins with bliss and relaxation, and adrenaline with energy. Monitoring these features, the island's AI was able to note the activities, measurements, and environments present when an inhabitant was happy, and use these happy moments to train itself to recognize happiness. Then, the AI assistant Qareen could make recommendations or suggestions for activities or choices that would lead to more happiness (achievement, growth, or connection) or less unhappiness (sadness, frustration, or anger).

Toward the end of the story, when Viktor was instructed to leave the island and go home, it was not because the experiment had ended, but because the AI knew that by ending the game in this particular way, Viktor would opt to escape, because he loved adventure, and that experience would eventually bring him back to the island and make him happier.

In order to build a scientifically rigorous and robust happiness-optimizing engine, researchers will need to solve daunting challenges. First, what kind of happiness metrics can we use? We have some approximations above, but we know that our state of mind depends on unknown combinations of electrical (brain waves), architectural (brain structures), and chemical (hormone) components working in concert.
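To make the measurement idea concrete, here is a minimal sketch of how such signals might be fused into a single happiness estimate. Everything in it (the hormone "normal ranges," the feature names, the blend weights) is a hypothetical stand-in; a real affective-computing system would learn these relationships from data rather than hard-code them:

```python
# Hypothetical fusion of biosensor and affective-computing signals into
# one happiness estimate. All ranges and weights are invented for
# illustration and carry no clinical meaning.
NORMAL_RANGES = {
    "serotonin": (100, 200),  # correlated with well-being and confidence
    "dopamine": (10, 30),     # correlated with pleasure and motivation
    "oxytocin": (1, 5),       # correlated with love and trust
}

def normalize(value, low, high):
    """Map a raw reading onto [0, 1], clamped to its assumed normal range."""
    return min(max((value - low) / (high - low), 0.0), 1.0)

def happiness_estimate(hormones, smile_score, voice_valence):
    """Blend hormone readings with facial/vocal affect scores.

    hormones: dict of raw readings keyed like NORMAL_RANGES.
    smile_score, voice_valence: affect scores already scaled to [0, 1].
    """
    hormone_part = sum(
        normalize(hormones[k], lo, hi)
        for k, (lo, hi) in NORMAL_RANGES.items()
    ) / len(NORMAL_RANGES)
    # A guessed weighting, not a validated model of happiness.
    return 0.5 * hormone_part + 0.3 * smile_score + 0.2 * voice_valence

score = happiness_estimate(
    {"serotonin": 150, "dopamine": 20, "oxytocin": 3},
    smile_score=0.8, voice_valence=0.6,
)
```

In the story, estimates like this would be logged alongside the inhabitant's activities, giving the island's AI the labeled "happy moments" it needs as training data.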
The approximations above capture only some hormone levels, which are useful but surely incomplete; they do not tap into the electrical or architectural measures. Over time, we will need to read all three components and understand their interactions and their causal role in happiness, in order to improve the quality of the training data from which AI learns.

Second, achieving higher levels of the Maslow hierarchy doesn't involve moments of instant gratification, but rather the long-term pursuit of meaning and purpose. AI learning across a long time span is challenging, because when a person's happiness goes up, the AI does not know whether it was a result of today's activities, or last week's, or last year's, or some combination thereof. This problem is akin to a challenge facing social media algorithms: How can Facebook train its newsfeed to help a user grow over the longer term, rather than simply entice immediate advertising clicks? When the person shows growth, how does the Facebook AI know which day's content or algorithms caused that growth? We will need to invent new AI algorithms to learn long-term stimulus-response amid a lot of noise.

By 2041, we will not yet have a full understanding of what determines our state of mind, nor will we know how long-term eudaimonic happiness works. But by that time, AI's ability to read human emotions should be quite advanced, well beyond human capabilities, and there should be prototypes that try to improve humans' higher-level happiness.

DATA FOR AI: DECENTRALIZED VS. CENTRALIZED

Aggregation of data is a necessary step in building powerful AI. This is already happening today at giant Internet companies. Google knows everything you've ever searched, every place you've been (through Android analytics and Google Maps, unless you turned off location history), every video you've watched, every email you've sent, everyone you've called on Google Voice, and every meeting you've scheduled in a Google calendar.
Trained on this data, Google can deliver tailored services that are incredibly convenient for you. Google and Facebook have access to so much data that they can infer your home address, ethnicity, sexual orientation, and even what makes you angry. They can guess your innermost secrets: whether you cheated on your taxes, are an alcoholic, or had an extramarital affair. These inferences will contain a fair amount of error, but even the notion that these companies have the tools and your data to attempt such guesses likely makes one uneasy.

These privacy concerns have led to discussions about government action. Countries ranging from the United States to China are looking at whether the power of data has strengthened Internet companies into monopolies, and if so, how to use antitrust laws to curb their power. Europe took action much earlier---the EU decided to put a stake in the ground on personal data by introducing the GDPR (General Data Protection Regulation), which the EU calls "the toughest privacy and security law in the world." Other countries are evaluating building their data laws with GDPR as a foundation.

GDPR is a big deal, and it got off to a good start. GDPR has the vision of ultimately giving data back to the individual, so as to help people control who gets to see and use their data, and even derive value from licensing it. In the first few years of GDPR implementation, the law has achieved some successes. It has succeeded in educating the masses on the significant risks around personal data. And GDPR has required websites and apps the world over to rethink and refactor their applications to minimize malicious, erroneous, or neglectful abuses of user data. There are large fines for companies that violate GDPR.

But some details of GDPR are not practical, and in general GDPR is an impediment to AI. In its current form, GDPR stipulates that companies must be transparent with people about how their data will be used.
Users' explicit consent for a specific purpose is needed in order for a company to start collecting that user's data (for example, giving your address to Facebook only for the purpose of facilitating e-commerce order delivery). Data must be protected from unauthorized use, leaks, or theft. Automated decisions should be explainable, and escalation to human intervention should be available upon users' request.

I believe that GDPR's goals (transparency, accountability, and confidentiality) are well-intentioned and even noble. However, the current implementation described above is unlikely to achieve these goals and may even be counterproductive in many ways. For example, it is difficult to limit the purpose of each piece of data collected, because AI is a sprawling exercise, and it is unfeasible to enumerate all purposes for each piece of data when collection begins. When Gmail saved all your emails, it was to help you search for and find any email; but later, when Gmail developed its new auto-completion feature, it needed to train on the old data. It is also impractical to expect that users will grasp each company's data-usage explanations every time they are asked whether to consent to data collection. (How many times have you encountered a complex pop-up on a site and just clicked "OK" without understanding or even reading the text?)

GDPR gives users the right to escalate to a human if they are concerned about AI decision-making. But human escalation may cause havoc, as humans are not as good as AI in decision-making. Finally, GDPR's data-minimization and data-retention requirements will seriously handicap AI systems.

When considered independently, most people would want to take back ownership of their personal data using GDPR and other regulations.
But this must be looked at in light of the fact that if all the data were ripped out, most software and apps would become "stupid," if not entirely dysfunctional.

In the story "Isle of Happiness," we suggest that rather than throwing out the baby (AI services) with the bathwater (data privacy concerns), another option, once the technologies mature, would be a "trusted AI" to which we would give all our data to safeguard, hide, or give out. If that "trusted AI" knew everything that Google, Facebook, and Amazon know about us, and much more, it would deliver capabilities well beyond today's Internet services. The many data swamps that hold our data would be unified into a powerful data ocean. And when this "trusted AI" (let's call it the Isle) knows everything about us, we can have it respond to all data requests for us. So when Spotify wants to know our location, or when Facebook wants our address, the Isle will decide on our behalf whether the benefits of the service are worth the risks of providing the data, based on what it knows about our values and preferences, and on the trustworthiness of the company making the request. This will get rid of all the consent-seeking pop-up windows that confuse and annoy us.

The Isle would become not only a powerful AI assistant, but also the protector of our data and our interface to all apps. One could think of this arrangement as essentially a new social contract for data.
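The Isle's gatekeeping role can be sketched as a simple policy check: weigh the estimated benefit of releasing a piece of data against its sensitivity, scaled by how much the requester is trusted. All names and numbers below (the trust registry, the sensitivity scores, the decision rule) are hypothetical illustrations of the idea, not a real protocol:

```python
# Hypothetical sketch of the "Isle" deciding a third-party data request
# on the user's behalf. Trust scores and preferences are invented values
# standing in for what a real trusted AI would learn about its user.
TRUST_SCORES = {"spotify": 0.7, "unknown-adtech": 0.1}

USER_PREFS = {
    # How sensitive the user considers each kind of data, in [0, 1].
    "location": 0.6,
    "home_address": 0.9,
}

def decide_request(requester, data_kind, benefit):
    """Return True to release the data, False to refuse for the user.

    benefit: the Isle's estimate in [0, 1] of how much the user gains
    from the service if the data is shared.
    """
    trust = TRUST_SCORES.get(requester, 0.0)        # unknown parties get no trust
    sensitivity = USER_PREFS.get(data_kind, 1.0)    # unknown data is most sensitive
    # Release only when the trust-weighted benefit exceeds the
    # privacy cost of handing data to a less-than-fully-trusted party.
    return trust * benefit > sensitivity * (1 - trust)

share_with_spotify = decide_request("spotify", "location", benefit=0.8)
share_with_adtech = decide_request("unknown-adtech", "home_address", benefit=0.9)
```

The design point is that the decision logic, however it is ultimately implemented, runs on the user's side of the fence, which is what would let the Isle replace today's consent pop-ups with judgments made in the user's interest.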