3 Possible Futures for AI — Which Will We Choose? | Alvin W. Graylin | TED


0:03Manoush Zomorodi: Humans, still amazing.
0:06Alvin W. Graylin: I don’t think we’re replaceable yet.
0:08MZ: No, not yet.
0:09But, Alvin, we're going to get some straight talk.
0:12To me, for normal people, this was the year of AI.
0:16And I don't think anyone really knows what to think right now.
0:19So, Alvin, you have been in this field --
0:22AI, cybersecurity, VR, semiconductors --
0:2635 years you've been doing this.
0:28But what makes you very different
0:30is that it's been both in the United States,
0:32as a US citizen,
0:33and in China a lot of the time.
0:35I think a lot of people feel ambivalent about AI.
0:38They feel like, what is actually really happening,
0:41what is hype
0:42and what is transforming our existence?
0:46Where are we right now according to you?
0:49AWG: I mean, this is one of the biggest questions
0:51that we have as a society today.
0:53And unfortunately, there's just a lot of misinformation.
0:56And my answer to you is probably going to be a little different
1:00than the Silicon Valley consensus,
1:02even though I work at Stanford,
1:04and it's going to be probably a little scary to a lot of you.
1:07But hopefully, by the end of this, it will convince you to take action.
1:11Just like the little note I saw at TED,
1:14which says, what action are you going to take after this event?
1:18We are really at this inflection point
1:21and not the traditional inflection point
1:23that just keeps going up.
1:24We are essentially at a fork in a road between three possible futures right now.
1:28One where the big labs essentially take control
1:35of the government by growing their power
1:38and their resources as much as possible,
1:40then creating essentially a class of trillionaires and everybody else.
1:44This is kind of the Elysium future that's ahead of us.
1:47The second option is that actually we are heading towards a Mad Max future,
1:51where we intensify the conflict between countries,
1:55and going from AI race to AI war
1:58to kinetic war and potentially to nuclear war.
2:01And I've talked to people in DC who actually see that as "inevitable,"
2:06which is a little scary.
2:08And the third option that we have right now
2:11is potentially the Star Trek option,
2:14the option where technology is being used and shared,
2:17and something brings us -- you know,
2:20in the Star Trek stories, essentially,
2:22the Vulcans bring us advanced technology.
2:24Peaceful, rational species brings us technology and saves us from ourselves
2:29and brings on this century of discovery
2:32or millennia of discovery.
2:34We have the potential to get there.
2:36Unfortunately, today, we are heading towards the first two.
2:40And given the forces driving it,
2:44it's actually going to take a lot of work for us
2:46to move away from the first two
2:50towards that last one.
2:51MZ: Can we get into that a little bit more?
2:53Because I think the narrative we've all been told,
2:56at least certainly by Sam Altman and maybe some other AI executives,
2:59is that we've got to lock this technology down,
3:01we've got to grow it, we've got to grow it fast,
3:04because if we don't, China will.
3:07Would you agree with that?
3:09AWG: That's actually one of the biggest myths out there,
3:11and actually one of the most scary things out there.
3:14In fact, two days ago, I just came back from China.
3:16I've worked there half my career,
3:18and I think essentially, the AI industry today
3:21is using the same tools
3:23that the military industrial complex has used over the last century
3:28in terms of you have to create an enemy,
3:30once you do that, then you get funding,
3:32you get support, you get deregulation,
3:34you get to move faster, and then you get to make money.
3:36And what the AI labs are actually trying to do
3:40is not to save the world.
3:41It is actually to create billions, actually trillions of dollars.
3:45In fact, they specifically said AI is worth trillions of dollars.
3:48And they want to be the first one to create AGI,
3:51artificial general intelligence.
3:53And it's defined, actually by Sam,
3:55as a technology that can replace the average worker.
3:59And what that means is
4:01he wants to create a technology that can take everybody's jobs here.
4:05Now on the surface, that actually may be scary,
4:08but I think if it's coming from the right place,
4:10it actually could be an amazing thing
4:12because that means we get liberated
4:14so that we can spend time doing art and music
4:16and watching, coming to TED.
4:18But, unfortunately,
4:20I think right now there isn't the other side of the story being put in,
4:24which is how do we protect the people who are going to be displaced by it?
4:28MZ: OK, so I mean, despite what we've just talked about so far,
4:32Alvin is actually an optimist.
4:34(Laughter)
4:35He is, I promise.
4:37Explain the vision that you have come up with
4:39about how we take the right track,
4:42that we take this moment of inflection and we actually pivot in a good way.
4:48AWG: Yeah, so I actually just turned in the paper to Stanford,
4:51which is an AI policy paper about what we need to do going forward
4:56and how we move from today's trajectory into something better.
5:00And it's a three-piece, a three-part story,
5:02which sounds simple,
5:03but it's actually very hard to execute.
5:05One is we actually have to decide
5:08that instead of competing over resources
5:10and creating hundreds of labs around the world,
5:12trying to create, duplicate, actually, the same work
5:15and having an undersupply of chips
5:18and memory and talent,
5:20rather than doing that,
5:21we need to come together and create what some people call the CERN of AI.
5:24Essentially a single lab that aggregates all of the talent around the world.
5:29MZ: Like the space station.
5:30AWG: Like the space station, like CERN, like the ITER labs that we do,
5:34we've done for other types of technologies.
5:36It is very doable.
5:37And then whatever comes out of it,
5:39rather than one company or one country
5:42hoarding it, we share it with the world,
5:44which is the whole idea of open science.
5:47This is what's made progress in this world happen.
5:50MZ: Woo for open science,
5:51yeah, TED crowd, alright, nerds, I love it.
5:55AWG: Yes.
5:57And then two, is that we need to put together everybody's data
6:03from around the world so that we're not creating --
6:05in fact, the thing that a lot of people want to do today is create "sovereign AI"
6:09which means an AI that works for your country, culture
6:12and represents you.
6:13And it essentially is a subset of data feeding into it.
6:17And it sounds like, OK, that's good,
6:19because I have something on my side.
6:20But what the data,
6:22what the research is showing, is that the less data you give them,
6:24the more biased these AIs become.
6:27And what we really need to do
6:28is to make sure that the entire world's data,
6:30all of our history, all of our languages
6:32are represented, all of the culture,
6:34because then the AI can come in and create an optimal outcome for everyone,
6:39and find a way to balance everybody's needs
6:42without taking other people down.
6:44MZ: So how are we going to convince people to do this,
6:47technologists, governments, to go along with this?
6:50AWG: That's the hard part, I think.
6:52The thing is, we need to understand
6:54or we need them to understand, that the world is not zero sum,
6:57and that actually by working together,
6:59it's not weakness.
7:00Working together is enlightened self-interest.
7:03Because when you work together,
7:05you actually raise everybody up.
7:07And when you raise everybody up,
7:09there's a lot less reason to have conflict,
7:11a lot less reason to have my children fly 10,000 miles around the world
7:14to kill your children.
7:15Why would I need that when I have everything around me
7:19from what this technology is going to give us?
7:21Because this is amazing technology.
7:23It's going to solve cancer,
7:24it is going to bring us better energy sources,
7:26it's going to solve hunger, all these things.
7:28But we have to choose to share with the world,
7:31and we have to choose to use it for humanity's good,
7:33not for one country's good.
7:35But there was a third part to the plan.
7:37MZ: Sorry.
7:38AWG: The third part of the plan is
7:40something called the GI Bill for the AI age.
7:44So why do I say that?
7:45Because in 1944-45,
7:48there was about 15 million American service people
7:51coming back from World War II,
7:52and they were going to create a giant employment shock to the world
7:56because they're going to come in, they're going to be unemployed.
7:59What did America decide to do?
8:00The government says, we're going to give you free education,
8:03we're going to give you zero-interest loans,
8:05free medical,
8:06and then we're going to help you, essentially, buy homes,
8:09because that's what's needed for people to have secure lives.
8:13And it created the American middle class.
8:15It created a boom in our economy and turned us into what we are today,
8:18which is the most successful and most powerful nation in the world.
8:22We can do that again, but not for 15 million people,
8:25maybe for 150 million people, maybe for 1.5 billion people.
8:28Because America has 170 million workers,
8:31and the displacement that we are seeing
8:34is going to be of the proportions that people are predicting.
8:36It could get to 100-plus million people affected just in this country.
8:40And globally, it will be billions of people.
8:43And we have to take care of them.
8:44Because if we don't,
8:46this world is not going to be a very good place for us to hang out in.
8:49MZ: OK, that's a lot to take in.
8:52(Laughter)
8:53I do want to give us something actionable, right.
8:56Because it can feel like,
8:57oh, this AI thing is happening to us and that it's inevitable,
9:01but what can we do?
9:03Like when we walk out of here.
9:04AWG: I think what you need to do
9:06is actually to start to change your mindset,
9:08to start to understand that the world is not zero-sum.
9:11And you actually have a responsibility as business owners,
9:14most of you guys own businesses
9:16or work in very senior positions in businesses.
9:18You need to see about how does your company integrate AI,
9:21not in a way to replace people,
9:23but in a way to make things more efficient.
9:25And rather than saying,
9:26I'm going to lay off 30 percent of my staff,
9:28which some companies are doing
9:29(I've talked to 50 companies in the last two months
9:32about how they were implementing AI,
9:34and a lot of them are saying, I'm going to just replace my people),
9:37we can give people four-day workweeks, or reskill them for other roles.
9:40We need to reduce the shock of what this technology is going to do
9:43to our society.
9:44The prior industrial revolutions took 80, 60 and 40 years to play out.
9:47This one is going to happen in the next five to 10 years,
9:51maybe shorter,
9:52and our society is not equipped to move at that speed.
9:55MZ: So play with the models, see what they're like,
9:58know what these companies are talking about.
10:00Do you recommend that?
10:01AWG: Oh, you have to do it.
10:03You have to actually use these models, because you’ll hear people say,
10:06oh, this thing is not that scary,
10:08this thing will never replace humans.
10:11The reality is, the more you use them,
10:12the more you understand how powerful they are
10:14and how quickly they're changing every day,
10:16and if you don't use it, you won't understand it.
10:19MZ: Alvin Graylin, thanks for giving us a glimpse into our future.
10:22AWG: Thank you, Manoush.
10:23(Applause)