sama yt interviews - pt1.docx
Full Transcript
toc --- note that there will be transcription errors; also it doesn't have speaker names, so it's all mixed, and some dates may be wrong if the YouTube was uploaded long after the interview [[lester holt interviews sama and brian chesky - june 2024]](#lester-holt-interviews-sama-and-brian-chesky---june-2024) stanford ecorner may 2024 ------------------------- 0:01 [Music] 0:13 welcome to the entrepreneurial thought leader seminar at Stanford 0:21 University this is the Stanford seminar for aspiring entrepreneurs ETL is brought to you by STVP the Stanford 0:27 engineering entrepreneurship center and BASES the Business Association of Stanford Entrepreneurial Students I'm 0:33 Ravi Belani a lecturer in the management science and engineering department and the director of Alchemist an accelerator for enterprise startups and 0:40 today I have the pleasure of welcoming Sam Altman to ETL 0:50 um Sam is the co-founder and CEO of OpenAI open is not a word I would use to describe the seats in this class and so 0:57 I think by virtue of that that everybody probably already knows OpenAI but for those who don't OpenAI is the research and 1:02 deployment company behind ChatGPT DALL·E and Sora um Sam's life is a pattern of 1:08 breaking boundaries and transcending what's possible both for himself and for the world he grew up in the midwest in 1:15 St Louis came to Stanford took ETL as an undergrad um and we held on 1:22 to Sam for two years he studied computer science and then after his sophomore year he joined the 1:27 inaugural class of Y Combinator with a social mobile app company called Loopt um that then went on to raise money 1:33 from Sequoia and others he then dropped out of Stanford spent seven years on Loopt which got acquired and then he 1:40 rejoined Y Combinator in an operational role he became the president of Y Combinator from 2014 to 2019 and then in 1:48 2015 he co-founded OpenAI as a nonprofit research lab with the mission to build
general purpose artificial 1:54 intelligence that benefits all humanity OpenAI has set the record for the fastest growing app in history with the 2:01 launch of ChatGPT which grew to 100 million active users just two months after launch Sam was named one of 2:08 Time's 100 most influential people in the world he was also named Time's CEO of the Year in 2023 and he was also most 2:15 recently added to Forbes' list of the world's billionaires um Sam lives with his husband in San Francisco and splits 2:20 his time between San Francisco and Napa and he's also a vegetarian and so with that please join me in welcoming Sam 2:27 Altman to the stage 2:35 and in full disclosure that was a longer introduction than Sam probably would have liked um brevity is the soul of wit 2:40 um and so we'll try to make the questions more concise but this is also Sam's birth week it 2:47 was his birthday on Monday and I mention that just because I think this is an auspicious moment both in terms of time you're 39 now and also place you're 2:55 at Stanford in ETL that I would be remiss if this wasn't sort of a moment of just some reflection and I'm curious 3:01 if you reflect back on when you were half a lifetime younger when you were 19 in ETL um if there were three words to 3:08 describe what your felt sense was like as a Stanford undergrad what would those three words be it's always hard 3:13 questions um uh you want three words 3:20 only okay uh you can you can go more Sam you're the king of brevity uh 3:25 excited optimistic and curious okay and what would be your three words 3:30 now I guess the same which is terrific so there's been a constant thread even though the world has changed and you 3:37 know a lot has changed in the last 19 years but that's going to pale in comparison to what's going to happen in the next 19 yeah and so I need to ask you 3:44 for your advice if you were a Stanford undergrad today so if you had a Freaky
Friday moment tomorrow you wake up and 3:49 suddenly you're 19 inside of Stanford as an undergrad knowing everything you know what would you do would you drop out I'd be very 3:55 happy um I would feel like I was coming of age at the luckiest time um like in several centuries probably I 4:03 think the degree to which the world is going to change and the opportunity to impact that um starting a 4:10 company doing AI research any number of things is like quite remarkable I 4:15 think this is probably the best time to start yeah I think I would say 4:22 this is probably the best time to start a company since uh the internet at least and maybe kind of like in the history of technology I think what 4:29 you can do with AI is like going to just get more remarkable every year and the 4:35 greatest companies get created at times like this the most impactful new products get built at times like this so 4:43 um I would feel incredibly lucky uh and I would be determined to make the most of it and I would go figure out like 4:50 where I wanted to contribute and do it and do you have a bias on where you would contribute would you want to stay as a student um and if so would you 4:56 major in a certain major given the pace of change probably I would not stay as a student but only cuz like I didn't 5:04 and I think it's like reasonable to assume people kind of are going to make the same decisions they would make again 5:09 um I think staying as a student is a perfectly good thing to do I just it would probably not be what I would have 5:15 picked no this is you this is you so you have the Freaky Friday moment you're reborn as a 19-year-old and 5:20 would you yeah I think I would again like I 5:25 think this is not a surprise cuz people kind of are going to do what they're going to do I think I would go work on 5:31 research and where might you do that Sam I think I mean obviously I have 5:36 a bias towards open
AI but I think anywhere I could like do meaningful AI research I would be like very thrilled about but you'd be agnostic if that's 5:42 academia or private industry um I say this with sadness I think I 5:48 would pick industry realistically um I 5:53 think you kind of need to be at the place with so much compute mhm okay and 5:59 um if you did join um on the research side would you join so we had kazer here last week who was a big advocate of not 6:06 being a founder but actually joining an existing company to sort of learn the chops for the students that 6:11 are wrestling with should I start a company now at 19 or 20 or should I go join another entrepreneurial either 6:17 research lab or venture what advice would you give them well since he gave the case to join a company I'll give the 6:24 other one um which is I think you learn a lot just starting a company and if that's something you want to do at some 6:30 point there's this thing Paul Graham says but I think it's like very deeply true there's no pre-startup like there 6:36 is pre-med you kind of just learn how to run a startup by running a startup and if that's what you're pretty sure you 6:42 want to do you may as well jump in and do it and so let's say somebody wants to start a company they want to be in AI um what do you think are the 6:48 biggest near-term challenges that you're seeing in AI that are the ripest for a 6:54 startup and just to scope that what I mean by that are what are the holes that you think are the top priority needs for 7:00 OpenAI that OpenAI will not solve in the next three years um yeah 7:08 so I think this is like a very reasonable question to ask in some sense 7:13 but I think I'm not going to answer it because I think you should 7:19 never take this kind of advice about what startup to start ever from anyone um I think by the time there's something 7:26 that is like the kind of thing that's obvious enough that me or
somebody else will sit up here and say it it's 7:33 probably like not that great of a startup idea and I totally understand the impulse and I remember when I was 7:38 just like asking people what startup should I start um but I think like one of the most 7:46 important things I believe about having an impactful career is you have to chart your own course if the thing that 7:53 you're thinking about is something that someone else is going to do anyway or more likely something that a lot of 7:58 people are going to do anyway um you should be like somewhat skeptical of that and I think a really good muscle 8:04 to build is coming up with the ideas that are not the obvious ones to say so 8:09 I don't know what the really important idea is that I'm not thinking of right now but I'm very sure someone in this 8:15 room knows what that answer is um and I think learning to trust 8:21 yourself and come up with your own ideas and do the very like non-consensus things like when we started OpenAI that 8:27 was an extremely non-consensus thing to do and now it's like the very obvious thing to do um now I only have the 8:34 obvious ideas cuz I'm just like stuck in this one frame but I'm sure you all have the other ones so can I ask it 8:41 another way and I don't know if this is fair or not but what questions then are you wrestling with that no one else 8:47 is talking about how to build really big computers I mean I think other people are talking 8:52 about that but we're probably like looking at it through a lens that no one else is quite imagining yet um 9:02 I mean we're definitely wrestling with how when we make not just like 9:09 grade school or middle schooler level intelligence but like PhD level intelligence and beyond the best way to 9:14 put that into a product the best way to have a positive impact with that on society and people's lives we don't know 9:20 the answer to that yet so I think that's like a
pretty important thing to figure out okay and can we continue on that thread then of how to build really big 9:27 computers if that's really what's on your mind can you share I know there's been a lot of speculation and probably a 9:33 lot of hearsay too about um the semiconductor foundry endeavor that you are reportedly embarking on um can you 9:41 share the vision what would make this different it's not just foundries although 9:47 that's part of it it's like if you believe which we increasingly do at this point that AI infrastructure is 9:55 going to be one of the most important inputs to the future this commodity that everybody's going to want and that is 10:01 energy data centers chips chip design new kinds of networks it's how we look at that entire ecosystem um and how 10:09 we make a lot more of that and I don't think it'll work to just look at one piece or another but we got to do the 10:15 whole thing okay so there's multiple big problems yeah um I think like just this 10:21 is the arc of human technological history as we build bigger and more complex systems and does it grow so you 10:29 know in terms of just like the compute cost uh correct me if I'm wrong but GPT-3 was I've heard it was $100 million 10:36 to do the model um and it was 175 billion parameters GPT-4 cost $400 10:44 million with 10x the parameters it was almost 4x the cost but 10x the parameters correct me adjust me you know 10:52 it I do know it but I won't oh you can you're invited to this is Stanford Sam okay um uh but even if you don't 11:00 want to correct the actual numbers if that's directionally correct um does the cost do you think keep growing with each 11:07 subsequent yes and does it keep growing multiplicatively uh probably I mean and 11:15 so the question then becomes how do you capitalize that well look I kind of think 11:26 that giving people really capable tools and
letting them figure out how they're 11:32 going to use this to build the future is a super good thing to do and is super valuable and I am super willing to bet 11:39 on the ingenuity of you all and everybody else in the world to figure out what to do about this so there is 11:46 probably some more business-minded person than me at OpenAI somewhere that is worried about how much we're spending 11:52 um but I kind of don't okay so that doesn't concern you so you know OpenAI is phenomenal ChatGPT is 11:59 phenomenal um everything else all the other models are phenomenal you burned $520 12:05 million of cash last year that doesn't concern you in terms of thinking about the economic model of how do you 12:11 actually where's going to be the monetization source well first of all that's nice of you to say but ChatGPT 12:16 is not phenomenal like ChatGPT is like mildly embarrassing at best um GPT-4 is 12:24 the dumbest model any of you will ever have to use again by a lot um but 12:29 you know it's like important to ship early and often and we believe in iterative deployment like if we go build 12:35 AGI in a basement and then you know the world is like kind 12:40 of blissfully walking blindfolded along um I don't think that's like I don't 12:46 think that makes us like very good neighbors um so I think it's important given what we believe is going to happen 12:51 to express our view about what we believe is going to happen um but more than that the way to do it is to put the 12:56 product in people's hands um and let society co-evolve with the 13:03 technology let society tell us what it collectively and people individually want from the technology how to 13:09 productize this in a way that's going to be useful um where the model works really well where it doesn't work really 13:14 well um give our leaders and institutions time to react um give 13:20 people time to figure out how to integrate this into their lives to learn how to use
the tool um sure some of you all like cheat on your homework with it 13:27 but some of you all probably do like very amazing wonderful things with it too um and as each generation 13:32 goes on uh I think that will expand 13:38 and that means that we ship imperfect products um but we have a 13:43 very tight feedback loop and we learn and we get better um and it does kind of 13:49 suck to ship a product that you're embarrassed about but it's much better than the alternative um and in this case 13:54 in particular where I think we really owe it to society to deploy iteratively 14:00 um one thing we've learned is that AI and surprise don't go well together people don't want to be surprised people want a gradual rollout and the ability 14:07 to influence these systems um that's how we're going to do it and there may 14:13 be there could totally be things in the future that would change where we'd think iterative deployment isn't such a good 14:19 strategy um but it does feel like the 14:24 current best approach that we have and I think we've gained a lot um from doing this and you know hopefully the 14:31 larger world has gained something too whether we burn 500 million a year or 5 14:38 billion or 50 billion a year I don't care I genuinely don't as long as we can I think stay on a trajectory where 14:45 eventually we create way more value for society than that and as long as we can figure out a way to pay the bills like 14:51 we're making AGI it's going to be expensive it's totally worth it and so do you have a vision I hear you do you 14:56 have a vision in 2030 of what if I say you crushed it Sam it's 2030 you crushed it what does the world look like to 15:03 you um you know maybe in some very important ways not that different uh 15:12 like we will be back here there will be like a new set of students we'll be 15:17 talking about how startups are really important and technology is really cool we'll have this new great tool in the
15:23 world it'll feel amazing if we got to teleport forward six years today and 15:30 have this thing that was like smarter than humans in many subjects and could do these complicated 15:36 tasks for us and um you know like we could have these like complicated 15:41 programs written or this research done or this business started uh and yet like the sun keeps 15:48 rising the like people keep having their human dramas life goes on so sort of like super different in some sense that 15:55 we now have like abundant intelligence at our fingertips and then in some other sense like not 16:01 different at all okay and you mentioned AGI artificial general intelligence and in 16:07 a previous interview you defined that as software that could mimic the competence 16:12 of a median human for tasks yeah um can you give me is there a time if you had to 16:18 do a best guess of when you think or a range you feel like that's going to happen I think we need a more precise 16:23 definition of AGI for the timing question um because at this point 16:29 even with like the definition you just gave which is a reasonable one I'm parroting back what 16:34 you um said in an interview well that's good cuz I'm going to criticize myself okay um it's too loose of 16:41 a definition there's too much room for misinterpretation in there um to I think be really useful or get at what people 16:47 really want like I kind of think what people want to know when they say like what's the timeline to AGI is like when 16:55 is the world going to be super different when is the rate of change going to get super high when is the way the economy 17:00 works going to be really different like when does my life change 17:05 and that for a bunch of reasons may be very different than we think like I can totally imagine a world where we build 17:13 PhD level intelligence in any area
and you know we can make researchers way 17:18 more productive maybe we can even do some autonomous research and in some sense 17:24 like that sounds like it should change the world a lot and I can imagine that we do that and then we can detect no 17:32 change in global GDP growth for like years afterwards something like that um which is very strange to think about and 17:38 it was not my original intuition of how this was all going to go so I don't know how to give a precise timeline of when 17:45 we get to the milestone people care about but when we get to systems that are way more capable than we have right 17:52 now one year and every year after and that I think is the important point so I've given up on trying to give the AGI 17:59 timeline but I think every year for the next many we have dramatically more 18:05 capable systems every year um I want to ask about the dangers of AGI um and gang I know there's tons of questions 18:11 for Sam in a few moments I'll be opening it up so start thinking about your questions um a big focus at Stanford 18:17 right now is ethics and um can we talk about you know how you perceive the dangers of AGI and specifically do you 18:24 think the biggest danger from AGI is going to come from a cataclysmic event which you know makes all the papers or 18:29 is it going to be more subtle and pernicious sort of like you know like how everybody has ADD right now from you 18:35 know using TikTok um are you more concerned about the subtle dangers or the cataclysmic dangers um or neither 18:42 I'm more concerned about the subtle dangers because I think we're more likely to overlook those the cataclysmic 18:47 dangers uh a lot of people talk about and a lot of people think about and I don't want to minimize those I think 18:53 they're really serious and a real thing um but I think we at least know to look 19:01 out for that and spend a lot of effort um the example you gave of everybody getting ADD from TikTok or
whatever I 19:07 don't think we knew to look out for and that's really hard the 19:13 unknown unknowns are really hard and so I'd worry more about those although I worry about both and are there any unknown unknowns that you can name 19:19 that you're particularly worried about well then they'd be unknown unknowns um 19:27 I am worried just about so even though I think in the short term things 19:32 change less than we think as with other major technologies in the long term I think they change more than we think and 19:40 I am worried about what rate society can adapt to something so new and how long 19:47 it'll take us to figure out the new social contract versus how long we get to do it um I'm worried about that okay 19:54 um I'm going to open up so I want to ask you a question about one of the key things that we're now trying to build 19:59 into the curriculum as things change so rapidly is resilience that's really good 20:04 and you know the cornerstone of resilience uh is self-awareness and so I'm 20:11 wondering um if you feel that you're pretty self-aware of your driving 20:16 motivations as you are embarking on this journey so first of all I think um I 20:23 believe resilience can be taught uh I believe it has long been one of the most important life skills um and in the 20:29 future I think over the next couple of decades I think resilience and adaptability will be more important 20:36 than they've been in a very long time so uh I think that's really great um on the 20:42 self-awareness question I think I'm self-aware but I 20:48 think like everybody thinks they're self-aware and whether I am or not is sort of like hard to say from the inside 20:54 and can I ask you sort of the questions that we ask in our intro classes on self-awareness sure it's like the Peter Drucker 20:59 framework so what do you think your greatest strengths are Sam 21:07 uh I think I'm not
great at many things but I'm good at a lot of things and I 21:12 think breadth has become an underrated thing in the world everyone gets like hyper-specialized so if you're good at 21:19 a lot of things you can seek connections across them um I think you can then kind 21:25 of come up with the ideas that are different than everybody else has or that sort of experts in one area have and what are your most dangerous 21:32 weaknesses um most dangerous that's an interesting 21:39 framework for it uh I think I have like a general bias to 21:45 be too pro-technology just cuz I'm curious and I want to see where it goes and I believe that technology is on the 21:50 whole a net good thing but I think that is a worldview that has overall served 21:56 me and others well and thus got like a lot of positive reinforcement and is not always true and 22:03 when it's not been true has been like pretty bad for a lot of people and then Harvard psychologist David McClelland has 22:09 this framework that all leaders are driven by one of three primal needs a need for affiliation which is a need to 22:15 be liked a need for achievement and a need for power if you had to rank-list those what would be 22:22 yours I think at various times in my career all of those I think there are these like levels that people go through 22:29 um at this point I feel driven by like wanting to do something useful and interesting okay and I definitely had 22:37 like the money and the power and the status phases okay and then when did you last feel most like 22:45 yourself I always and then one last question 22:50 what are you most excited about with GPT-5 that's coming out that uh people 22:56 don't know what are you most excited about with the next version of ChatGPT that we're all going to see 23:01 uh I don't know yet um I mean this sounds like a cop out answer but I 23:07 think the most important thing about GPT-5 or whatever we call that is just that it's going to be
smarter and this sounds 23:13 like a dodge but I think that's like among the most remarkable facts in human 23:19 history that we can just do something and we can say right now with a high degree of scientific certainty GPT-5 is 23:25 going to be a lot smarter than GPT-4 GPT-6 is going to be a lot smarter than GPT-5 and we are not near 23:30 the top of this curve and we kind of know what to do and it's not like it's going to get better in one area it's not like we're going to you 23:37 know it's not that it's always going to get better at this eval or this subject or this modality it's just going to be 23:43 smarter in the general sense and I think the gravity of that statement is still like underrated okay 23:50 that's great Sam guys Sam is really here for you he wants to answer your questions so we're going to open it up hello um 23:57 thank you so much for joining us uh I'm a junior here at Stanford I sort of wanted to talk to you about responsible deployment of AGI so as 24:05 you guys continually inch closer to that how do you plan to deploy that responsibly uh at OpenAI uh you know 24:13 to prevent uh you know stifling human innovation and continue to spur that so 24:19 I'm actually not worried at all about stifling of human innovation I really deeply believe that people will just 24:24 surprise us on the upside with better tools I think all of history suggests that if you give people more leverage 24:30 they do more amazing things and that's kind of like we all get to benefit from that that's just kind of great I am 24:37 though increasingly worried about how we're going to do this all responsibly I think as the models get more capable we 24:42 have a higher and higher bar we do a lot of things like uh red teaming and external audits and I think those are 24:48 all really good but I think as the models get more capable we'll have to 24:53 deploy even more iteratively have an even tighter feedback
loop on looking at how they're used and where they work and 24:59 where they don't work and this world that we used to be in where we can release a major model update every 25:04 couple of years we probably have to find ways to like increase the granularity on that and deploy more iteratively than we 25:11 have in the past and it's not super obvious to us yet how to do that but I think that'll be key to responsible 25:17 deployment and also the way we kind of have all of the stakeholders negotiate 25:24 what the rules of AI need to be uh that's going to get more complex over time too thank you next question over 25:32 here you mentioned before that there's a growing need for larger and larger computers and faster computers however 25:38 many parts of the world don't have the infrastructure to build those data centers or those large computers how do 25:44 you see um global innovation being impacted by that so two parts to that 25:49 one um no matter where the computers are built I think global and equitable 25:56 access to use the computers for training as well as inference is super important um 26:01 one of the things that's like very core to our mission is that we make ChatGPT available for free to as many people as 26:07 want to use it with the exception of certain countries where we either can't or don't for a good reason want to operate um how we think about making 26:14 training compute more available to the world is uh going to become increasingly important I do think we 26:21 get to a world where we sort of think about it as a human right to get access to a certain amount of compute and we 26:26 got to figure out how to like distribute that to people all around the world um there's a second thing though which is I 26:32 think countries are going to increasingly realize the importance of having their own AI infrastructure and 26:38 we want to figure out a way and we're now spending a lot of time traveling around the world to build them in uh
the 26:44 many countries that'll want to build these and I hope we can play some small role there in helping that happen terrific 26:50 thank you um my question was what role do you envision for AI in the future of like 26:57 space exploration or like colonization um I think space is like 27:02 not that hospitable for biological life obviously and so if we can send the robots that seems 27:16 easier hey Sam so my question is for a lot of the founders in the room and I'm 27:21 going to give you the question and then I'm going to explain why I think it's complicated um so my question is about 27:28 how you know an idea is non-consensus and the reason I think it's complicated is cuz it's easy to 27:34 overthink um I think today even you yourself say AI is the place to start a company 27:40 I think that's pretty consensus maybe rightfully so it's an inflection point I think it's hard to 27:47 know if an idea is non-consensus depending on the group that you're talking about 27:52 the general public has a different view of tech from the tech community and even tech elites have a different point of 27:58 view from the tech community so I was wondering how you verify that your idea 28:03 is non-consensus enough to pursue um I mean first of all what you 28:11 really want is to be right being contrarian and wrong still is wrong and if you predicted like 17 out of the last 28:17 two recessions you probably were contrarian for the two you got right probably not even necessarily um but you 28:24 were wrong 15 other times and so I think it's easy to get too 28:30 excited about being contrarian and again like the most important thing is to be right and the group is usually right 28:39 but where the most value is um is when you are contrarian and 28:45 right and that doesn't always happen in 28:50 like sort of a zero one kind of way like everybody in the room can agree that AI 28:57 is the right place to start the company and if one person in the room figures out the
right company to start and then 29:02 successfully executes on that and everybody else thinks ah that wasn't the best thing you could do that's what matters so it's okay to kind of like go 29:11 with conventional wisdom when it's right and then find the area where you have some unique insight in terms of how to 29:17 do that um I do think surrounding yourself with the right peer group is 29:23 really important and finding original thinkers uh is important but there is 29:28 part of this where you kind of have to do it solo or at least part of it solo 29:33 or with a few other people who are like you know going to be your co-founders or whatever 29:38 um and I think by the time you're too far into the like how can I find the right peer group you're somehow in the wrong 29:45 framework already um so like learning to trust yourself and your own intuition 29:51 and your own thought process which gets much easier over time no one no matter what they say I think is like 29:57 truly great at this when they're just starting out because like you kind of just haven't built the muscle 30:03 and like all of your social pressure and all of like the evolutionary pressure 30:09 that produced you was against that so it's something that like you get better at over time and don't 30:15 hold yourself to too high of a standard too early on hi Sam um I'm curious to know what 30:22 your predictions are for how energy demand will change in the coming decades and how we achieve a future where 30:28 renewable energy sources are one cent per kilowatt hour um I mean it will go up for sure well 30:36 not for sure you can come up with all these weird ways in which like a depressing future is one where 30:42 it doesn't go up I would like it to go up a lot I hope that we hold ourselves to a high enough standard where it does 30:47 go up I forget exactly what the world's electrical generating 30:53 capacity is right now but let's say
it's like 3,000 or 4,000 gigawatts something like that even if we add another 100 gigawatts 31:00 for AI it doesn't materially change it that much but it changes it some and if 31:06 we get to a thousand gigawatts for AI someday it does that's a material change but there are a lot of other things that we want 31:11 to do and energy does seem to correlate quite a lot with quality of life we can deliver for people 31:18 um my guess is that fusion eventually dominates electrical generation on Earth 31:24 um I think it should be the cheapest most abundant most reliable densest source I could be wrong about that and it 31:30 could be solar plus storage um and you know my guess most likely is it's going to be 80/20 one way or the other and 31:37 there'll be some cases where one of those is better than the other but uh those kind of seem like the two bets 31:43 for like really global scale one cent per kilowatt hour 31:51 energy Hi Sam I have a question it's about OpenAI what happened last year so what's the lesson you learned cuz 31:59 you talk about resilience so what's the lesson you learned from leaving that company 32:04 and now coming back and what made you come back because Microsoft also gave you an offer like can you share more 32:11 um I mean the best lesson I learned was that uh we had an incredible team that 32:17 totally could have run the company without me and did for a couple of days 32:22 um and also that the team was super resilient like we knew that 32:29 some crazy things and probably more crazy things will happen to us between here and AGI um as different parts of 32:37 the world have stronger and stronger emotional reactions and the stakes keep ratcheting up and you know I thought 32:45 that the team would do well under a lot of pressure but you never really know until you get to run the experiment and 32:50 we got to run the experiment and I learned that the team was super resilient and like ready to kind of run 32:56 the
company um in terms of why I came back you know originally so 33:02 it was like the next morning the board called me and was like what do you think about coming back and I was like no um I'm mad um 33:11 and then I thought about it and I realized just like how much I loved OpenAI um how much I loved the people 33:17 the culture we had built uh the mission and I kind of like wanted to finish it all 33:23 together emotionally I just want to this is obviously a really sensitive one 33:29 oh it's not but was I imagine that was okay well then can we talk about the structure of it because this Russian doll 33:35 structure of OpenAI where you have the nonprofit owning the for-profit um 33:40 you know when we're trying to teach principle-driven entrepreneurship here we got to the structure gradually 33:46 um it's not what I would go back and pick if we could do it all over again but we didn't think we were going to have a product when we started we were 33:52 just going to be like an AI research lab it wasn't even clear we had no idea about a language model or an API or ChatGPT so 33:59 if you're going to start a company you got to have like some theory that you're going to sell a product someday and we didn't think we were going to we 34:06 didn't realize we were going to need so much money for compute we didn't realize we were going to like have this nice business um so what was your 34:11 intention when you started it we just wanted to like push AI research forward we thought that and I know this gets 34:17 back to motivations but that's the pure motivation there's no motivation around making money or power I cannot 34:24 overstate how foreign of a concept like I mean for you personally not for OpenAI 34:30 but you you weren't starting well I had already made a lot of money so it was not like a big I mean I like I 34:36 don't want to like claim some like moral purity here it was
just like that was not the driver of my life a driver okay 34:44 because there's this so the reason why I'm asking is just you know when we're teaching about principle-driven entrepreneurship here you can 34:49 understand principles inferred from organizational structures when the United States was set up the architecture of governance is the 34:55 Constitution it's got three branches of government all these checks and balances and you can infer certain principles 35:02 that you know there's a skepticism on centralizing power that you know things will move slowly it's hard to get things 35:08 to change but it'll be very very stable if you you know not to parrot Billie Eilish but if you look at the OpenAI 35:14 structure and you think what was that made for um you have a like your near hundred billion dollar valuation 35:20 and you've got a very very limited board that's a nonprofit board which is supposed to look after its 35:26 fiduciary duties to the again it's not what we would have done if we knew then what we know now but you don't get to 35:31 like play life in reverse and you have to just like adapt there's a mission we really cared about we thought 35:38 AI was going to be really important we thought we had an algorithm that learned we knew it got better with scale we 35:43 didn't know how predictably it got better with scale and we wanted to push on this we thought this was like going to be a very important thing in human 35:50 history and we didn't get everything right but we were right on the big stuff and our mission hasn't changed and we've 35:56 adapted the structure as we go and will adapt it more in the future um but you know like 36:04 life is not a problem set um you don't get to like solve everything really nicely all at once it doesn't 36:11 work quite like it works in the classroom as you're doing it and my advice is just like trust yourself to 36:16 adapt as you go it'll be
a little bit messy but you can do it and I just ask this because of the significance of OpenAI 36:21 um you have a board which is all supposed to be independent financially so that they're making these decisions as a nonprofit thinking about 36:29 the stakeholder that they are a fiduciary of which isn't the shareholders it's humanity um 36:34 everybody's independent there's no financial incentive that anybody has that's on the board including yourself 36:40 with OpenAI um well Greg was okay first of all I think making money is a good thing I think capitalism is a good 36:46 thing um my co-founders on the board have had uh financial interest and I've never once seen them not take the 36:52 gravity of the mission seriously um but you know we've put a structure in place 36:58 that we think is a way to get um incentives aligned and I do believe 37:03 incentives are superpowers but I'm sure we'll evolve it more over time and I think that's good not bad and with 37:09 OpenAI the new fund you don't get any carry in that and you're not following on investments into those okay 37:15 okay thank you we can keep talking about this I know you want to go back to students I do too so we'll keep going to the 37:20 students how do you expect that AGI will change geopolitics and the balance of power in the world um like maybe more 37:29 than any other technology um I think 37:34 about that so much and I have such a hard time saying what it's actually going to do um or maybe more 37:42 accurately I have such a hard time saying what it won't do and we were talking earlier about how maybe it won't change 37:48 day-to-day life that much but the balance of power in the world it feels 37:53 like it does change a lot but I don't have a deep answer of exactly how thanks so much um 38:02 I was wondering in the deployment of like general
intelligence and also responsible AI how much do you think is 38:08 it necessary that AI systems are somehow capable of recognizing their own 38:14 insecurities or like uncertainties and actually communicating them to the outside world I always get nervous 38:21 anthropomorphizing AI too much because I think it like can lead to a bunch of weird oversights but if we say like how 38:28 much can AI recognize its own flaws uh I think that's very important 38:34 to build right now and the ability to like recognize an error in reasoning 38:41 um and have some sort of like introspection ability like that seems to me really important 38:47 to pursue hey Sam thank you for giving us 38:54 some of your time today and coming to speak from the outside looking in we all hear about the culture and togetherness of OpenAI in addition to 39:00 the intensity and speed at which you guys work clearly seen from ChatGPT and all your breakthroughs and also when 39:07 you were temporarily removed from the company by the board and how all of your employees tweeted OpenAI is nothing without its people what would 39:13 you say is the reason behind this is it the binding mission to achieve AGI or something even deeper what is pushing the culture every 39:19 day I think it is the shared mission um I mean I think people like each other and we feel like we've you know 39:25 we're in the trenches together doing this really hard thing um 39:30 but I think it really is like a deep sense of purpose and loyalty to the mission 39:36 and when you can create that I think it is like the strongest force for success 39:42 at any startup at least that I've seen among startups um and you know we try to 39:47 like select for that in people we hire but even people who come in not really believing that AGI is going to be such a 39:54 big deal and that getting it right is so important tend to believe it after the first three months or whatever and so that's
like that's a very powerful 40:00 cultural force that we have thanks um currently there are a lot of 40:06 concerns about the misuse of AI in the immediate term with issues like global conflicts and the election coming up 40:12 what do you think can be done by the industry governments and honestly people like us in the immediate term especially 40:18 with very strong open-source models one thing that I think is 40:25 important is not to pretend like this technology or any other technology is all good um I believe that AI will be 40:32 very net good tremendously net good um but I think like with any other tool 40:40 um it'll be misused like you can do great things with a hammer and you can 40:45 like kill people with a hammer um I don't think that absolves us or you all 40:50 or society from um trying to mitigate the bad as much as we can and maximize 40:56 the good but I do think it's important to realize 41:02 that with any sufficiently powerful tool uh you do put power in the hands of tool 41:09 users or you make some decisions that constrain what people in society can do 41:15 I think we have a voice in that I think you all have a voice on that I think the governments and our elected representatives in democratic 41:21 processes have the loudest voice in that but we're not going to get this perfectly right like we society are not 41:28 going to get this perfectly right and a tight feedback loop I think is the 41:34 best way to get it closest to right um and the way that that balance gets negotiated of safety versus freedom and 41:42 autonomy um I think it's like worth studying that with previous technologies and we'll do the best we can here we 41:49 society will do the best we can here um gang actually I've got to cut it 41:54 sorry I know um I want to be very sensitive to time I know the interest far exceeds the time and the 42:00 love for Sam um Sam I know it is your birthday I don't know if you can indulge us because I know there's
a lot of love 42:05 for you so I wonder if we can all just sing happy birthday no no no please no we want to make you very uncomfortable 42:11 one more question I'd much rather do one more question this is less interesting to you 42:17 thank you we can you can do one more question quickly happy birthday dear 42:23 Sam happy birthday to you 20 seconds of awkwardness is there a 42:29 burner question somebody who's got a real burner and we only have 30 seconds so make it short um hi I wanted to ask if the 42:38 prospect of making something smarter than any human could possibly be scares 42:44 you it of course does and I think it would be like really weird and uh a bad 42:50 sign if it didn't scare me um humans have gotten dramatically smarter and 42:56 more capable over time you are dramatically more capable than your 43:02 great great grandparents and there's almost no biological drift over that period like sure you eat a little bit 43:08 better and you got better healthcare um maybe you eat worse I don't know um but 43:14 that's not the main reason you're more capable um you are more capable because 43:20 the infrastructure of society is way smarter and way more 43:25 capable than any human and through that it made you society people that 43:30 came before you um made you uh the internet the iPhone a huge amount of 43:37 knowledge available at your fingertips and you can do things that your predecessors would find absolutely 43:44 breathtaking um society is far smarter than you now 43:50 um society is an AGI as far as you can tell and the 43:57 way that that happened was not any individual's brain but the space between all of us that scaffolding that we build 44:03 up um and contribute to brick by brick step by step uh and then we use to go to 44:11 far greater heights for the people that come after us um things that are smarter than us will contribute to that same 44:18 scaffolding um your children will have tools available that you didn't um and
that 44:25 scaffolding will have gotten built up to greater heights 44:32 and that's always a little bit scary um but I think it's like way more good 44:38 than bad and people will do better things and solve more problems and the people of the future will be able to use 44:45 these new tools and the new scaffolding that these new tools contribute to um if you think about a world that has um AI 44:54 making a bunch of scientific discoveries what happens to that scientific progress is it just gets added to the scaffolding 45:00 and then your kids can do new things with it or you in 10 years can do new things with it um but the way it's going 45:07 to feel to people uh I think is not that there is this like much smarter entity 45:14 uh because we're much smarter in some sense than the great great great grandparents or more capable at least 45:21 um but that any individual person can just do more on that we're going to end it so 45:27 let's give Sam a round of applause 45:35 [Music] [[The Possibilities of AI \[Entire Talk\] - Sam Altman (OpenAI)]](https://www.youtube.com/watch?v=GLKoDkbS1Cg) lester holt interviews sama and brian chesky - june 2024 ======================================================== 0:00 [Music] [Applause] [Music] [Applause] 0:07 well you guys get all the applause I've invented nothing zippo great to see you guys welcome thanks everybody for being 0:14 here very excited about this conversation uh we'll set this up by letting folks know you guys are friends 0:20 you have kind of you know worked together on some important projects and some important 0:27 things so we're going to get into some of that as well but you're wondering why the two of them are here that's why thank you so much for your time um 0:33 let's start off with kind of a perspective Sam what percentage of this audience do you think has in some way 0:39 interacted with AI today I would bet most 0:45 uh I'm not going to hold you to
it by the way in the 90s and most of us don't know 0:52 where it's affecting our lives yeah you know there are people who use ChatGPT and you kind of know when you're 0:58 using that or not but the number of people integrating AI into all of their other services and taking our GPT-4 1:04 and other models that we have and you know it's sort of like lifting a lot of services up has AI 1:11 crossed a critical threshold in the past year I think 1:18 that yes but I think there will be many thresholds that AI uh crosses you know 1:24 Brian actually gave me great advice about this we used to talk about we're going to get to this like moment 1:29 of AGI and you know it was this very ill-defined term and I think it never made sense to think about it that way in the 1:35 first place but we used to and now we think about it as it'll just be this series of thresholds uh where the 1:41 systems will get better and better capabilities so you know you can use ChatGPT today for some things 1:48 and you'll be able to use it for much more helpful tasks in the future um you know maybe today there are things like 1:55 okay uh like for example one of our partners Color Health is now 2:00 using uh GPT-4 for cancer screening and treatment plans and that's great and 2:05 then maybe a future version will help uh discover cures for cancer so I think of 2:11 it as a succession of thresholds but definitely the fact that we can talk to computers in natural language and have 2:17 them understand us and help us that's certainly been a threshold I want to talk about some things that we've seen 2:22 in the news lately and get your reaction to it um at times you have both made friends and enemies fairly quickly you 2:28 struck a big deal with Apple recently um Elon Musk was upset and said he wouldn't 2:34 allow Apple products at his companies did you see that reaction coming uh well 2:40 I saw it happen but no I didn't predict it I
sort of doubt it will actually 2:46 happen um but I didn\'t predict that are are you does it represent something 2:52 that\'s happening on the outskirts of open AI in terms of reaction from other tech 2:57 companies uh no I think that\'s just like an Elon 3:04 reaction and Brian let me turn to you Airbnb recently picked up an AI startup 3:10 are we at a point now that every tech company is going to have to have a piece of this action a partnership or uh its 3:16 own development plans yeah I mean I think that just like now every company almost in the world is on the internet 3:23 AI is just going to be completely embedded in everything that we do and I think that one of the things that\'s 3:29 incredible Sam is like Sam used to say you have to be if you want to be a great entrepreneur you have to be right about 3:35 one big thing in your career and I think that Sam was right about one of the biggest things in the history of tech 3:42 because this is going to be something that\'s going to affect people\'s lives more than any technology that we\'ve ever 3:47 seen in the past but I think a lot of the conversation you know we\'re talking about AI as this like existential 3:52 enigmatic thing and I think one of the things we\'re missing is just talking about the practical ways that people can 3:58 benefit their lives I can give you an example Airbnb but Sam has a lot of examples so today Airbnb is a way you 4:04 like type in a city and you find a home and you book a home and that\'s Airbnb and it\'s pretty much the way that the 4:09 internet\'s worked for the last 20 years but imagine in the future um systems that understand you better that\'s the 4:16 real promise a computer that can understand you and can ask you like well who are you Lester like what are your 4:22 hopes what are your dreams like where do you want to travel what do you one day want to do with your life and then it could actually understand you and be 4:28 more of a Matchmaker really understand you and 
match you to people communities 4:34 services experiences anything you want to be able to travel and live anywhere in the world and that's kind of how I 4:39 think Airbnb can use it but I think almost every industry can get remade with AI 4:45 and I think they can participate but the stakes are higher here than I mean what you talk about is largely aspirational but with AI you're looking 4:52 at some real fears that I think we all here understand so what does that mean 4:57 in terms of the people who are running this most of us are just passengers on this bus we're watching you guys you 5:03 know do these incredible things you know talk about it being compared to the Manhattan Project and wondering where is 5:08 this going and wondering who are the people behind it can we trust these people so talk if you can about the 5:15 moral responsibility um and for all of us to know these people know people like 5:20 you who are making these changes yeah I mean I can share um 5:27 I met Sam in 2008 and when I came to Silicon Valley the word technology 5:32 might as well have been like a dictionary definition for the word good I mean Facebook is a way to share photos of your friends YouTube was like cat 5:38 videos Twitter was like talking about like what you're doing today and I think there was this general innocence 5:44 and I think over time what we realize is when you take a tool and I think technology is a tool you know Steve Jobs 5:51 one of the things he said is he put a handle on the back of every computer cuz he said never trust a computer you can't 5:56 throw out the window he said these are tools and we're meant to dominate them they don't dominate us and I think one 6:02 of the things that happened though is when you put a tool in the hands of hundreds of millions of people you know they're 6:08 going to use it for ways you didn't intend and I think we are much more sober and realistic in this new 6:15 generation because I think
we learned a lot of the lessons of the last generation we learned about how technology can be used mostly for good 6:21 but there's always unintended consequences and so I think this time one of the things I've seen Sam do is 6:26 he's been very cautious not Pollyannaish at all about where this technology is going and really telling governments there 6:33 actually is a need for regulation Sam I want to get your take and give you a chance to talk about your firing you 6:41 were fired from your own company why let me first touch on something that 6:48 Brian said with your earlier question and then I will be very happy to talk about that um 6:55 this is going to be a huge change in society uh I think unlike other 7:02 technological trends um we're aware even if today we're like okay ChatGPT is this like very helpful 7:08 tool and it's you know once I use it I'm not scared of it um there is a sense 7:15 of super understandable anxiety about where this is going to go what does it 7:21 mean if these tools keep getting more capable at the rate they've been getting capable at and there's tons of wonderful 7:27 things and we could talk about those all day but there is this what is the future going to look like even if we solve 7:34 every safety problem even if we solve every um you know misuse problem even if 7:40 we figure out the perfect regulatory regime like what are our lives going to be like when it's not just like 7:46 the computer understands us and gets to know us and helps us and does these things but we can say like hey computer like 7:53 discover all of physics and it can go off and do that um what does it mean when we can say like hey start and run a 7:59 great company and it can go off and do that so that's a big change uh that's a lot 8:06 of trust that we have to earn to be some of the stewards of this technology there will be many other people working on this 8:12 and we're proud of our track record uh
I think if you look at 8:17 the systems that we've put out and the time and care we've taken we've been able to get them to a level of generally 8:23 accepted robustness and safety that is well beyond what people thought we were going to be able to do when we got 8:28 to these initial systems a few years ago like when you looked at GPT-2 or GPT-3 and said are we going to be able to make 8:34 this safe enough to use a lot of people thought no but there's this thing in 8:40 there's this the future is looming large and we've got to continue to earn the trust with what we do the systems we 8:47 put in the world um and how we have legitimate decision-making over 8:52 these systems how we broadly empower people with them how we continue to promote stability in the world in the face of all this change um and it makes 9:00 people very anxious uh and the whole like the whole board firing me and coming back thing I mean Brian was an 9:06 enormous help during that uh it was obviously a super painful experience but 9:12 I do understand why anxiety levels are so high uh and I think the 9:20 previous board members like they're nervous about the continued development 9:26 of AI uh had whatever feelings they had about me and how we were doing things and 9:32 although I super strongly disagree with the things they've said 9:37 since and how they acted uh I think they are like fundamentally good people who are nervous about the future and 9:44 trying to figure out how we get to a good outcome um I'm super excited with 9:50 the new board they're extremely uh constructive and helpful and experienced 9:55 and strong and it's been a very productive thing since then but that was a horrible experience to go through not 10:00 not during the moment where it was just like this is a crazy thing let's figure out how to undo it and Brian was like 10:07 unbelievably helpful but then the period after that uh where I just had to like
kind of pick up the pieces in this like 10:13 state of emotional shock that was really bad you were trying to pick up the pieces you were picking up the phone Brian yes explain that well I 10:21 remember um so maybe just to go back in time um 10:27 when ChatGPT launched in late November 10:32 2022 it was a phenomenon unlike anything we'd seen probably since the launch of the iPhone I have no recollection of 10:38 anything like it and we knew overnight everything was going to change and I remember meeting with Sam and I said you 10:44 know I've been through a little bit of this rocket ship before and I'm not going to advise you on the core research 10:49 of AI but when it comes to like marketing and like stakeholder management and PR and like design and 10:55 product and everything that's not that you're going to go on a rocket ship and I'm only where I am today because people 11:01 believed in me and people helped me and one of the great things about Silicon Valley is it's a high-trust place where 11:06 people will help so I just wanted to be helpful to him so this goes on for about a year it's one year later and I get a 11:13 text message and it's actually from somebody else saying Sam was fired from 11:19 OpenAI I was like fired and I immediately texted him and I think his 11:25 text back to me like was 5 minutes later he had just found out he was fired and he said so brutal and I go what 11:33 happened so we get on the phone and he doesn't know what happened it wasn't fully explained to him and by the way 11:38 his co-founder who was also on the board was removed from the board and that seemed to me very suspicious so I got on 11:45 the phone with him and Greg and I felt really uncomfortable with the circumstances that this was not a fair 11:51 process and I think this should always be a fair process but especially if they're founders because they're very very difficult to replace and what I 11:58 noticed in those first 24 hours
was not a lot of people sticking up for Sam and in my darkest times in my crises I have 12:06 had people stick up for me and that's what I wanted to do for Sam and basically we talked through things and I 12:13 said I think the most important thing for you to do is just be completely transparent internally and externally 12:19 with what you know and what's happening but the most remarkable thing and the thing that made me really want to defend 12:25 him was you know you learn a lot about people in a crisis if you really 12:31 want to know what someone's like see them in a crisis and at no point in the 5 12:37 days this went down did Sam ever even for a second focus on self-preservation 12:44 he was completely I was like why aren't you sticking up for yourself here like why don't you care more about yourself that's 12:49 what I was saying to him like somebody's got to stick up for you you're not even sticking up for yourself and he just was so focused on the team and what was best 12:56 for the team and I think that's what really made me you know so fiercely like focused on 13:02 helping I want to turn Sam if I can to some of the bad publicity you've received lately including the dust-up 13:07 over the voice of Sky one thing that could help clear up the concern over the similarity of Sky's voice to Scarlett 13:14 Johansson would be to hear from the actor who you say was hired to be the voice of Sky is that something that you 13:21 will do certainly if she wants to I mean I know she's made statements through her 13:27 agent uh but I don't know where I mean anything she wants to do would certainly be fine with us the 13:33 whole thing opens up certainly a larger question of what do we own in an AI 13:38 world uh do we have control over our likenesses we're seeing uh you know deepfake porn right now people's you know 13:45 heads being swapped um these are harmful on an individual level and I know 13:51 it's not unique to OpenAI
but how is the industry going to respond to this I mean we think the industry needs 13:58 to take a super strong stance on that we obviously do uh and there are 14:04 other issues related to how this technology is being used uh to harm people that we think the industry needs 14:10 to take a very strong stance on um we try to be not only very loud in our calls for regulation to prevent some of 14:17 these misuses which I think is happening but also to set a really good example in the products and 14:23 services we offer and hold ourselves to a very high standard were these things inevitable I mean you clearly saw the risk 14:30 coming as this uh technology was maturing like deepfakes and stuff deepfakes yeah head face swapping yeah um it 14:39 was inevitable that the technology was going to be capable of that and so you know of course there are going to be 14:45 systems out there that allow that uh but that's where I think we society and 14:51 governments have a role to say you know we will allow 14:56 some use cases of technology we're not comfortable with but in some places we are going to draw a line and face 15:03 swapping deepfake revenge porn is a great place to draw a line we're nearing a presidential election as you know 15:10 we're seeing some of the deepfakes already happening there's been talk about this for years that this would be 15:15 a very difficult election what are your thoughts as you begin to see this stuff kind of emerge and in terms of your 15:21 responsibility your industry's responsibility to make sure that we're not being overwhelmed by disinformation 15:28 yes so you know this will be I think the first election where there's not just the US many other elections this year 15:34 where AI is like a major technological element provenance is really important 15:40 accurate voting information and avoiding some of the issues we've seen with uh previous technological platforms 15:46 and other election cycles um and
you know preventing things like deepfakes 15:52 I think those are three top-of-mind issues for us uh in this election cycle I'll also add that there may be other 16:00 ways people try to misuse this that we're not aware of yet um so 16:05 we have like a whole monitoring effort set up and uh I think we'll need a very tight feedback loop as we get 16:11 closer to the election uh to see if there's additional areas where people are trying to abuse the technology while 16:17 we're on the topic of the election Brian I'll let you start what do you think will be the impact on your individual businesses in terms of the 16:23 outcome of this election hard to say I mean Airbnb is 16:30 kind of more of a city-by-city state-by-state thing so the changes in um federal administrations have 16:36 not historically um had a huge impact on us and we're of course in 220 16:42 countries so we're a pretty resilient business I mean one of the things we saw during the pandemic is when one part of our business changes it adapts to some 16:49 other part so I don't anticipate a really big change based on who's elected how about you 16:57 Sam I do expect some big impact based off who's elected but I 17:04 don't know what it'll be it does seem to me like AI is going to be an increasingly important geopolitical 17:11 priority in the world um but you know it's hard for me to say exactly how 17:16 it's going to go one of the things that I've really valued about Brian so Brian kind of undersold 17:23 what he mentioned earlier in that first year kind of like what he's done to help but when ChatGPT started taking off and 17:28 everything just went crazy for me a lot of people reached out and said oh I'd love to help you I can do this I can do that and you know I think they 17:35 mean it when they say it but everybody's just busy um Brian was like the person who would just sit down
with me for like 17:41 3 hours every other week and like give me a list and say here's the five things you got to do now here's where you're behind here's what you're screwing up 17:47 here's what you got to proactively do here's what you got to think about um and it's basically like almost always 17:53 right and uh I learned to just like always shut up and follow the advice um 17:59 one of the things that Brian started saying uh more recently uh is that you're 18:07 probably not thinking enough about politics and policy and what that's going to mean for how the world thinks 18:14 about AI and here's the people you need to hire here's what it means to like you know map this out and think 18:21 about a strategy here's what you should do and definitely not do and uh 18:27 that's been like super helpful and I do think for our business it's going to be really important and I think one of the things Lester is that you know I 18:33 remember coming to Silicon Valley we didn't think these platforms would have the impact on society that we now 18:39 know they have and so I think the mindset that Sam has and even the questions you're asking him probably 18:44 weren't asked of tech leaders 15 years ago I think the whole industry has changed the whole conversation 18:50 like Sam has built out much more of a team much earlier than the big tech companies would have around policy and 18:56 stakeholder management I want to ask about one of the things we've learned in your research and developing ChatGPT 19:02 and others is the requirement of data to train up these models it's an insatiable appetite as it appears 19:10 has it changed how you view what is fair use and whose copyrighted material 19:15 you can use first of all I don't think we know yet what the future of how these models 19:22 get smart is going to look like you know is it that we just need more and more data forever doesn't feel to me likely 19:29
to be right you know if you think about what a human can learn from reading one textbook it's very different than what 19:35 it takes these AI models for now so I expect and also there comes a point 19:41 where to like invent new science you need to just sit there and think and run some experiments but it's not in any 19:46 textbook because it's new so I expect that the future of how we think about 19:53 training data um and what it takes to make these models really capable is it 19:58 going to be a roadblock though in the development of these products that's what I was trying to say you know this is like science we don't know for 20:04 sure I think it won't be um now that said uh the issue of like fair use and 20:11 how to think about how people who create data create knowledge create you know 20:17 wonderful books I think although like from a legal perspective we're confident in our fair 20:23 use position now that we see where this may evolve um we need to figure 20:28 out new economic models where the whole world gets to participate and I think this goes beyond just people who have 20:35 data that we train on but also uh and we've you know found many different ways to license it and do different things 20:41 but also the people that provide the feedback to the models the people who like go off and create great real-time 20:46 news that maybe the model doesn't train on but you want to display it um at the time and that there's a lot of work that 20:52 goes into that uh and you know I think 20:57 maybe AI is going to not super significantly but somewhat significantly 21:03 change the way people use the internet and if so you can see some of the economic models of the past needing to 21:08 evolve uh and I think that's a broader conversation than just training data but it's sort of like content in general 21:15 surfaced via AI I want to ask you about artificial general intelligence that's taking up the game 21:21
considerably if I understand it correctly that's when you get to the point that the computers can do whatever we can do is that a fair summation you 21:28 know I think I was wrong to initially think about it as this one 21:34 moment as we talked about but uh it does seem to me and now I think people use 21:40 AGI to mean all sorts of things it does seem to me that trying to sort 21:47 of road map out for the world where we think the significant increases in capability will be um can do what you 21:55 know people can do can create new science can do what whole companies can do 22:00 uh that feels like it'd be very useful for the industry to sort of agree on so that we could have these conversations 22:06 in a little bit more of a disciplined way and that's one of the things we talked about is like just operating transparently letting people know that 22:13 it's probably not this one Promethean moment where it goes from AI to AGI that there's many many steps just like the 22:19 story of technology and that it's really important that we bring society along and that we're not operating in this 22:24 black box and people think there's only a few people controlling the future that we're transparent with other developers 22:31 and computer scientists and researchers and policymakers about these are the steps we're taking this is what we're seeing 22:37 and this is what we think the next four steps look like but isn't this a race on a different level the stakes are so high I mean do you 22:44 consider yourself in a race and do you think it's one you'll win to get to the point of artificial general intelligence 22:50 I don't think of it as a race I understand why that's like a very compelling dramatic way to talk about 22:57 it I think that there may be a race between nation 23:02 states um at some point but the companies that are developing this now I think everyone feels the stakes the need 23:09 to get this right I also
think to Brian's point that there's not 23:15 this milestone we're all racing towards it is this continual evolution of technology where we melted 23:24 sand and figured out how to like turn it into transistors and then figured out how to like build an operating system 23:30 and do a certain kind of programming and we made it bigger and bigger and then we figured out how to like train these 23:35 systems that are sort of smart in some ways but they're not off like running as these autonomous things they're tools 23:41 that we're using to do more than we could before in the way that we used computers to do more than we could 23:47 before without AI and in the way we used machines in the industrial revolution to do more than we could before and the way 23:54 we used agriculture to be able to have time and space to do more things than we could before 23:59 and I don't think it's this race to a milestone it's this ongoing the next 24:04 step and the next one and the next one and the tools are going to get better and better but 24:10 for sure technology is not neutral and tools are not inherently neutral things but the impact we can have by 24:18 building the tools is important we want to get that right people are going to go use these tools to invent the future 24:24 that we all collectively live in and what one person can do now versus before ChatGPT existed is an impressive 24:30 leap and by the time we get to GPT-6 or 7 what one person can do will be incredibly 24:38 increased and I'm very excited for that like I think that is the story of the world getting better we make technology um people use 24:45 it to build new things express their creative ideas and society improves yeah 24:50 when you talk about these programs though um and when you give them the ability to do what we do we 24:57 also have a set of values different sets of values we view common decency in a 25:03 not
so common way sometimes how do you teach that to a computer in a way that won't be harmful how do you teach values 25:10 that are positive one of the things that has surprised me and I don't want to say 25:15 this solves the whole AI alignment problem um but at our current levels of systems uh our ability 25:23 to teach a model a certain set of values and to behave in a certain way um is way 25:29 better than I thought it was going to be at this point now there's a harder question which is who gets to decide what those values are um who gets to 25:36 decide what the defaults are how much an individual user can uh sort of customize 25:41 them within those broad bounds and as an early step there we put out this thing maybe a month or two ago called the Spec 25:47 where we tried to say um here is our desired model behavior here are the values we want our model to follow and 25:54 that way people can at least tell if it's a bug or intentional when it does something that they don't like and over 25:59 time society can debate what those values are and we can adapt to it um so 26:05 I'm very heartened by our technical progress on this topic but man writing that set of values or getting society to 26:11 debate and agree on what those values should be that's a much harder challenge and Brian as you've talked 26:17 about you've given Sam um advice from time to time I read somewhere I 26:22 don't have the exact quote in front of me at least I can't find it right now but to the notion of go for it and figure it out later 26:29 I don't know what is the quote it's the idea that you believe that you need to go for it when it comes to this 26:35 kind of research are there brakes that should be put on well yeah I mean I think if you 26:41 imagine you're in a car the faster the car goes the more you need to look ahead in front of you and you need to 26:47 anticipate the corners and I think that we acknowledge that this
technology is so so powerful that I think this is why 26:55 we're like being so thoughtful I mean people really are agonizing over how to treat these systems and I do not 27:00 remember us doing this in 2007 2008 so I do think it's a very very different time 27:05 I mean one of the things that like Sam and I talked about was bring other stakeholders in early and one of the things we did last year was he went on a 27:12 tour around the world meeting with people it was mostly I think a way to get feedback from people educate people 27:18 and really get feedback so I think the key point Lester is we never 27:23 go so fast that we leave society behind that we only go as fast as to bring everyone along and I think that if 27:30 everyone here could feel like they could participate and they could have their input into it then I don't really think 27:35 there's a huge thing to fear I think the thing to fear is something we don't understand that we're left out of and 27:40 something that runs away from us that we can't control and so that's the future we don't want to live in also it's quite 27:46 interesting if you say the word AI it can be scary you say ChatGPT it doesn't sound as scary because it's a 27:52 very tangible tool so I think we need to also just focus on that which is in 27:57 front of us and how can we help people there's a lot of problems right now and OpenAI I mean it can lead to a lot of 28:03 scientific research and discovery um ChatGPT can be an incredible tool for artists um you know Airbnb we 28:10 think can really bring people together we're living in this huge epidemic of loneliness we can use this to help bring people together at the end of the 28:17 day it's not the technology it's the people with the technology it always comes down to the people their values 28:22 and are they good people the way I sort of think about this is um we need to 28:28 learn how to make safe technology we need to figure out
how to build safe products and that includes like 28:35 an ongoing dialogue with society about hey this has this impact I didn't expect or don't want and also you're not 28:41 letting me do this thing that's really important for this reason you didn't understand so the way that we talk to the broader world and the people that 28:47 use and are impacted by our products and let them reflect what they want and then 28:53 also like a safe operating plan which is we get better and better at predicting 28:58 capabilities um research is of course an open question you don't always know where it's going to go but before we start training a new model we'd like to 29:05 be able to say here are the dangerous capabilities that we think could happen we have a preparedness 29:10 framework to test them sometimes this takes a very long time uh with GPT-4 for example we had about eight months 29:17 between when we finished training and when we released it including lots of like external consultation and red teaming um 29:23 future models may take even longer but it is very important to get the feedback from society one thing that I don't 29:29 think is good is to let a huge capability overhang build up uh and then we haven't had that 29:36 feedback loop with society so we do need to figure out how to balance that but yeah you know taking the time 29:43 to get it right is very important are you ever inclined or do you think you'd ever be inclined to back up to see the future and find it is maybe as 29:50 scary as some people have suggested are you prepared to hit that moment where you have to take a step back even 29:56 as your competitors may want to move forward for sure um there are things that we have built and chosen not 30:03 to release or held back for long periods of time um there are plenty of other companies that would release things that 30:09 we won't um we're not going to get every decision
right of course and uh we also 30:15 may at some point deploy something and need to take it back but there'll also be things that we just don't deploy 30:21 we talk about these scary images did it help when you compared where you are 30:26 with AI with the Manhattan Project the race to build an atomic weapon was 30:32 that helpful for you as you try to make your case I mean we try to give a number 30:40 of historical analogies because we think it is important we may be wrong we may 30:45 be right but it's important for us to tell society what we believe the level of importance of this technology is 30:51 there's no perfect historical analog for any new technology so we can say there were some things about the Manhattan 30:57 Project that are like what we're doing now there's some things about the Apollo program there are some things about the iPhone there are some things about that 31:03 iMac with the handle which I also really loved um there's some things about the internet there are some things about the 31:09 Industrial Revolution but what I think is important is to 31:15 say here are the parts where we can look to a historical analogy and here are the parts where we can't and the shape of 31:22 this technology and kind of the decisions and the impact is fundamentally like a little bit different than anything I think 31:28 it's different than the Manhattan Project it's not a race it's not going to be done in secret and I think nations 31:34 can collaborate together and there could be a transnational kind of group or body that could really kind of align 31:40 to make sure we're all on the same page which would be best for society and frankly probably best for entrepreneurs 31:45 so they don't have to comply with like 200 different laws we think that's super important uh to get some sort of global framework and 31:52 cooperation uh I think we're really going to need that I think one of you
mentioned the nation states 31:57 is there a risk of nation states taking this technology and using it in a dangerous way 32:05 or absolutely and um you know I think you always have to be really really careful about 32:13 who you're putting this technology in the hands of and I think it goes back to like some of the things Sam's thinking about like one of the things 32:19 I know they developed early on that they chose not to release is like voice cloning right there's technology already 32:25 where you can basically like capture someone's voice but obviously that would be very very dangerous because obviously 32:31 you can imagine how it could compromise elections and major security risk so I think one of the things is just thinking 32:37 about like who could these tools end up in the hands of and therefore if you let the genie out of the bottle could it get 32:43 like too dangerous and so be very thoughtful about it yeah and Sam according to one report you speculated 32:48 artificial general intelligence could accrue as much wealth as $100 32:53 trillion that's wealth that you said you would then redistribute was that an accurate quote and do you 32:59 want to expand on it I think the sort of point I was trying to make was that I thought it could like double the world's 33:05 GDP um which feels like reasonable to me and certainly would be in line with other technological 33:11 revolutions um yeah we do think this is just going to be a massive driver of 33:17 productivity and already at this early stage seeing what people are doing with 33:23 it to sort of vastly improve products and services do you understand how that would sound to a lot of people 33:29 though um for sure of course uh but I think this is 33:37 where historical analogies are helpful and this is where it is helpful to look at the chart of world GDP over time and 33:46 you know if
if the world GDP can grow at you know 7% a year which 33:52 sounds hugely fast but maybe with a technological shift like this um is not 33:58 that far away I'm always bad at doing this in my head but I think that's like only 10 years to double 34:04 so I think it is worth taking 34:11 the potential of this technology to do enormous good very seriously and I think we can now see more of what that looks 34:17 like as people are adopting the tools preview if you will for us ChatGPT-5 34:22 um what will the leaps in technology be and does it put you on a straighter path to where you want to 34:28 be does it put us on a what path I'm sorry a what path does it put you on a straighter path in terms of your goals 34:34 um so we don't know yet uh you know we're optimistic but we still have a lot of work to do on it uh but I expect it 34:43 to be a significant leap forward um a lot of the things that GPT-4 gets 34:48 wrong you know can't do much in the way of reasoning sometimes just sort of totally goes off the rails and like 34:54 makes a dumb mistake that even a six-year-old would never make um I expect it to be much much better in those ways 35:02 and to be able to be used for a much wider variety of more helpful tasks 35:07 and it does go off the rails sometimes is that a result going back to what we were speaking about of the lack of data or 35:13 the shortage of data I think it's many things together we're still just like so 35:20 early in developing such a complex system um there's data issues there's 35:26 algorithmic issues uh the models are still quite small relative to what they will be someday and we know they get 35:32 predictably better uh so I think it's more like there's many things that we 35:37 need to go improve all of and we're still just so early in the technology you know the first iPhone was 35:43 still pretty buggy but it was like good enough to be useful for
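As an editorial aside, the back-of-the-envelope doubling figure quoted above checks out; here is a minimal illustrative sketch (the function name is ours, nothing in it comes from the interview itself):

```python
import math

def doubling_time(annual_growth: float) -> float:
    """Years for a quantity growing at a fixed annual rate to double."""
    return math.log(2) / math.log(1 + annual_growth)

# At 7% a year, world GDP would double in roughly ten years,
# matching the rough "only 10 years to double" figure above.
print(round(doubling_time(0.07), 1))  # ≈ 10.2
```

The same number falls out of the informal "rule of 70" (70 / 7 = 10 years), which is presumably the mental shortcut being reached for.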
people yeah I think that like I don't think things 35:49 are going to change as much in the world in the next couple years as people think it's not linear things are going to change slowly and then probably all of a 35:55 sudden and I think everyone's still trying to figure out how to use this technology if you take your phone and 36:01 you look at your home screen and ask yourself a year and a half after ChatGPT launched how many apps are fundamentally different because of AI 36:07 and very few of them are fundamentally different so I think we're still in this world where we're developing a lot of 36:12 the you know computation with Nvidia Sam and team are developing the models and a lot of the change to society is going to 36:18 happen when people build on top of those models the applications and there's so many uses for it I mean you know one of 36:23 the big use cases that we're talking about is scientific discovery you know what this can do for drug research 36:30 uh for like you know some of the biggest kinds of ills in society 36:35 there's a lot this can do with education we think this can essentially give access to tutors to everyone around the 36:41 world um creative people I know there's a lot of fear that artists can be replaced but you know I think if artists 36:47 participate I went to design school I think this is a technology that they can use so I think we can go down the list 36:52 um and I think there are going to be a lot of really exciting opportunities in the next three to five years where do you want to be in 5 years 36:59 Sam further along the same path you know one of the most fun parts of the job 37:06 is getting like tons of email every day from people who are using these tools in amazing ways you know I was able to 37:14 diagnose this health problem that I'd had for years and I couldn't figure it out and it was making my life miserable and I just typed my symptoms into ChatGPT 37:20 and I got this
idea and went to see a doctor and I'm totally cured or I've been trying my whole life to learn these things and couldn't do it and I got 37:27 ChatGPT to be like a tutor for me or you know I'm like three times as productive as a developer and I'm doing 37:32 these amazing things I'm the scientist using it and I love getting those things I love how much people love ChatGPT I 37:39 really do and 5 years from now I just hope it's a lot more of that I hope we have put this tool into the world that 37:45 continues to delight people and let them do more and like be their best at whatever they're doing well will we be 37:51 having more conversations like this down the road certainly as you go down your path but I want to thank both of you Sam 37:56 Altman and Brian Chesky for taking time and being with us here it's been a great [Music] 38:03 conversation nice job thanks for watching stay updated about breaking news and top stories on the NBC News app 38:11 or follow us on social media [[Lester Holt interviews Open AI's Sam Altman and Airbnb's Brian Chesky]](https://www.youtube.com/watch?v=8e8RpbO2lNU) sama on joe rogan - june 2024 ----------------------------- 0:01 Joe Rogan podcast check it out The Joe Rogan Experience Train by day Joe Rogan 0:07 podcast by night all day hello Sam what's happening not much 0:15 thanks for coming in here appreciate it thanks for having me so what have you done like ever no I mean what have you 0:22 done with AI I mean it's um one of the things um about this is I mean I think 0:30 everyone is fascinated by it I mean everyone is uh absolutely blown away at 0:35 the current capability and wondering what the potential for the future is and whether or not that's a good 0:44 thing I think it's going to be a great thing but I think it's not going to be all a great thing and that is 0:51 where I think that's where all of the complexity comes in for people it's not 0:56 this like clean story of we're going to do this and it's all going
to be great we're going to do this and it's going to be net great but it's going to be like a 1:04 technological revolution it's going to be a societal revolution and those always come with change and even if it's like 1:11 net wonderful you know there's things we're going to lose along the way some kinds of jobs some parts of our way 1:17 of life some parts of the way we live are going to change or go away and no matter how tremendous the upside 1:24 is and I believe it will be tremendously good you know there's a lot of stuff we got to navigate through to make sure um 1:32 that's a complicated thing for anyone to wrap their heads around and there's you know deep and super 1:37 understandable emotions around that that's a very honest answer that it's not all going to be good but it seems 1:46 inevitable at this point yeah I mean it's definitely inevitable my view of the world you know when you're 1:53 like a kid in school you learn about this technological revolution and then that one and then that one and my view 1:58 of the world now sort of looking backward and forward is that this is like one long technological 2:04 revolution and sure like first we had to figure out agriculture so that we had the resources and time to figure out 2:11 how to build machines then we got this Industrial Revolution and that made us learn about a lot of stuff a lot of other scientific discovery too that let us do 2:18 the computer revolution and that's now letting us as we scale up to these massive systems do the AI revolution but 2:24 it really is just one long story of humans discovering science and technology and co-evolving with it and I 2:32 think it's the most exciting story of all time I think it's how we get to this world of abundance and 2:38 although you know we do have these things to navigate and there will be these downsides if you think 2:43 about what it means for the world and for people's quality of
lives if we can get to a world uh where the cost of 2:51 intelligence and the abundance that comes with that uh the cost dramatically falls and the 2:58 abundance goes way way up I think we'll do the same thing with energy and I think those are the two sort of key inputs to everything else we 3:04 want so if we can have abundant and cheap energy and intelligence that will transform people's lives largely for the better 3:11 and I think it's going to in the same way that if we could go back now 500 years and look at someone's life we'd 3:17 say well there's some great things but they didn't have this they didn't have that can you believe they didn't have modern medicine that's what people 3:23 are going to look back at us like in 50 years but when you think about the people that currently really rely on jobs that 3:31 AI will replace when you think about whether it's truck drivers or automation 3:37 workers people that work in factory assembly lines what if anything what strategies 3:44 can be put in place to mitigate the negative downsides of those jobs being eliminated 3:50 by AI so I'll talk about some general thoughts 3:57 but I find making very specific predictions difficult because the way 4:02 the technology goes has been so different than even my own intuitions can we maybe 4:09 we should stop there and back up a little what were your initial 4:14 thoughts if you had asked me 10 years ago I would have said first AI is going to come for blue-collar labor basically 4:22 it's going to drive trucks and do factory work and you know it'll handle heavy machinery then maybe after that 4:29 it will do like some kinds of cognitive labor uh but 4:35 it won't be off doing what I think of personally as the really hard stuff it won't be off proving new mathematical 4:41 theorems won't be off you know discovering new science um won't be off writing code and then eventually maybe
4:49 but maybe last of all maybe never because human creativity is this magic special thing last of all it'll 4:55 come for the creative jobs that's what I would have said now a it looks to me 5:02 like and for a while AI is much better at doing tasks than doing jobs it can do these little pieces super well but 5:09 sometimes it goes off the rails uh it can't keep like very long coherence so people are instead just able to do their 5:16 existing jobs way more productively um but you really still need the human there today and then b it's going 5:22 exactly the other direction it could do the creative work first stuff like coding second it can do other 5:28 kinds of cognitive labor third and we're the furthest away from like humanoid 5:35 robots so back to the initial question if we do have something that 5:42 completely eliminates factory workers completely eliminates truck drivers delivery 5:49 drivers things along those lines that creates this massive vacuum in our 5:55 society so I think there's things that we're going to do that are good to do 6:02 but not sufficient so I think at some point we will do something like a UBI or some other kind of like very long-term 6:09 unemployment insurance but we'll have some way of redistributing money in society as 6:17 a cushion for people as people figure out the new jobs but and maybe I should touch on that I'm not a 6:24 believer at all that there won't be lots of new jobs I think human creativity desire for status wanting 6:31 different ways to compete invent new things feel part of a community feel valued uh that's not going to go 6:38 anywhere people have worried about that forever what happens is we get better tools and we just invent new things and 6:45 more amazing things to do and there's a big universe out there and I think I mean that like literally uh in that 6:52 space is really big but also there's just so much stuff we can
all do if we do get to this world of 6:58 abundant intelligence where you can sort of just think of a new idea and it gets created 7:06 but again to the point we started with that doesn't 7:11 provide like great solace to people who are losing their jobs today so saying there's going to be this great 7:17 indefinite stuff in the future people are like what are we doing today so you know I think we will as a society 7:24 do things like UBI and other ways of redistribution but I don't think that gets at the core of what people 7:30 want I think what people want is like agency self-determination the ability to 7:36 play a role in architecting the future along with the rest of society the ability to express themselves and create 7:44 something meaningful to them and also I think a lot of people work 7:50 jobs they hate and I think we as a society are always a little bit confused about whether we want to work 7:56 more or work less but somehow 8:02 that we all get to do something meaningful and we all get to play our role in driving the future forward 8:09 that's really important and what I hope is as those long-haul truck driving jobs go away which you 8:16 know people have been wrong about predicting how fast that's going to happen but it's going to happen um we 8:21 figure out not just a way to solve the economic problem by like 8:28 giving people the equivalent of money every month but that there's a way that and we have a 8:34 lot of ideas about this there's a way that we like share ownership and decision-making over the future um a 8:42 thing I say a lot about AGI is that everyone realizes we're going to have to share the benefits of that 8:48 but we also have to share like the decision making over it and access to the system itself like I'd be more 8:54 excited about a world where we say rather than give everybody on earth like one eight-billionth of the AGI money 9:01 which
we should do that too we say you get like a one 9:06 eight-billionth slice of the system you can sell it to somebody else you can sell it 9:12 to a company you can pool it with other people you can use it for whatever creative pursuit you want you can use it to figure out how to start some new 9:17 business um an