0:00A head, a tape, and a series of rules.
0:03In 1936, that's all Alan Turing thought
0:06a machine needed to complete the complex
0:09tasks of storing, reading, and modifying
0:12data. The head could read and write
0:14symbols on the tape, and its given rules
0:17would tell it exactly what to do with
0:19those symbols. And with an infinite
0:22amount of tape, Turing hypothesized,
0:24the machine's capabilities to complete
0:26those tasks could be infinite, too.
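For anyone who wants to see that idea in action, here's a minimal sketch of a Turing machine in Python. The specific rule table (it just adds 1 to a binary number) and all the names are made up for illustration; the point is that a head, a tape, and a handful of rules really are enough to compute something.

```python
# A minimal sketch of Turing's idea: a head, a tape, and a table of rules.
# Everything here (the names, the example rules) is illustrative only.
# This tiny machine adds 1 to a binary number written on the tape.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine.

    tape:  dict of position -> symbol (sparse, so it behaves like infinite tape)
    rules: dict of (state, symbol) -> (symbol_to_write, move, next_state),
           where move is -1 (left), 0 (stay), or +1 (right)
    """
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write      # the head writes a symbol...
        head += move            # ...then moves along the tape
    return tape

# Rules for "add 1": starting at the rightmost bit, carry 1s leftward.
rules = {
    ("start", "1"): ("0", -1, "start"),  # 1 + carry -> 0, keep carrying left
    ("start", "0"): ("1", 0, "halt"),    # 0 + carry -> 1, done
    ("start", "_"): ("1", 0, "halt"),    # ran past the left edge, write the carry
}

tape = {-2: "0", -1: "1", 0: "1"}        # binary 011 (= 3), head starts at position 0
print(run_turing_machine(tape, rules))   # {-2: '1', -1: '0', 0: '0'} -> binary 100 (= 4)
```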
0:28Today's AI models find patterns by
0:30wading through huge amounts of data
0:32rather than following discrete steps fed
0:34into them. And they're fueled by compute
0:37rather than yards and yards of tape. And
0:40as those resources keep growing, it's
0:42time to ask the question on everyone's
0:44mind. Just how powerful could AI really
0:47become? Hi, I'm Kusha Navdar and this is
0:50Crash Course Futures of AI.
0:57The Turing machine was never actually
0:59real, but the concept provided a
1:01blueprint for what computers could be
1:03capable of and laid the groundwork for
1:05tons of future work in artificial
1:06intelligence. Like the hypothetical
1:09concept of the Turing machine was then
1:11used by computer scientists to create
1:13some of the first computers. And then
1:15those first computers were used to help
1:17create even better computers. And then
1:19those better computers were used to
1:21create, you guessed it, AI, which then
1:24in turn was used to help me create this
1:26picture of a robot tiger with rocket
1:28launchers for legs. His name is Randall
1:30and he represents progress. Robot tigers
1:33aside, in the computer science world, we
1:35call the kind of process where the
1:37output of a previous discovery directly
1:39and repeatedly becomes the input for the
1:42next discovery, over and over and over
1:45again, recursive. And it's not going to
1:48stop here. Our current AI models can do
1:51all kinds of things machines never
1:53could before, like creating images of
1:56new friends like Randall. They might
1:58even be able to use recursive progress
2:00to make themselves better and better. To
2:02see how that might work, let's take a
2:04look at one of today's best AI coders,
2:06Alpha Evolve, and its predecessor, Fun Search. Basically, Google trained
2:09a large language model on tons of
2:12functions, pieces of computer code that
2:14performed specific operations, and let it
2:17start spitting out its own code, and
2:19paired it with an automated evaluator to
2:22check whether its functions actually
2:24worked. All that meant Fun Search went
2:26through the whole process of attempting
2:28a function, learning from its mistakes,
2:31refining approaches, and inputting those
2:34new functions to get even more
2:36successful outputs all by itself. In
2:39other words, Fun Search could engage in
2:41aspects of recursive self-improvement.
2:45And in early 2025, Google expanded on
2:48Fun Search to create Alpha Evolve, an
2:51evolutionary coding agent that trains on
2:54whole code bases, not just single
2:56operation functions. Evolutionary coding
2:58agents like Alpha Evolve mimic
3:01evolution, the kind you see in
3:03nature, by generating potential
3:05solutions, mutating them at random,
3:07selecting the ones that perform the
3:09best, and repeating that whole process
3:11until they get something that really
3:13works. And that means Alpha Evolve can
3:15learn to tackle all kinds of problems,
3:18from building a website to open
3:20mathematical research problems to, yeah,
3:23coding new models of AI. In fact, Google
3:27has given it the code behind lots of
3:29their AI systems, including Alpha Evolve
3:32itself. With its evaluator checking out
3:34the code it produces and its large
3:36language model becoming ever more
3:38refined, Alpha Evolve isn't just getting
3:40better at creating and improving code in
3:43general. It's getting better at
3:44improving its own code. Basically, by
3:48generating new algorithms and testing
3:50them against performance benchmarks,
3:52Alpha Evolve could select the best
3:54performers to then use as the basis for
3:56the next cycle of algorithms to test.
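Stripped of the large language model, that generate-test-select loop looks roughly like the toy sketch below. This is not Alpha Evolve's actual code; the "candidates" here are just lists of numbers and the evaluator is a made-up benchmark, but the cycle of mutating, scoring, and keeping the best performers is the same shape.

```python
# A toy sketch of the generate -> mutate -> evaluate -> select loop used by
# evolutionary coding agents. Purely illustrative: real systems evolve actual
# code and score it with real tests, not a made-up numeric benchmark.

import random

TARGET = 100

def evaluate(candidate):
    """Automated evaluator: higher score means closer to the benchmark."""
    return -abs(sum(candidate) - TARGET)

def mutate(candidate):
    """Randomly tweak one element of a candidate solution."""
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-5, 5)
    return child

# Start from a random population of candidate "programs".
population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(20)]

for generation in range(200):
    # Select the best performers on the benchmark...
    population.sort(key=evaluate, reverse=True)
    survivors = population[:5]
    # ...and use them as the basis for the next cycle of candidates.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=evaluate)
print(f"best score after 200 generations: {evaluate(best):.3f}")
```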
3:59This meant that each cycle strengthened
4:01the algorithmic tools available for the
4:04next cycle and let Alpha Evolve not only
4:07discover new, more successful algorithms,
4:10but also optimize the infrastructure
4:12that trains and runs AI systems in the
4:15first place. And since it dropped, it's
4:17not only started to outperform human
4:19experts at solving complex math
4:21problems, it's found ways to speed up
4:24components that help operate tons of
4:26different Gemini AI models, reducing
4:29their training times and making them and
4:31itself work even better. Alpha Evolve
4:34isn't perfect, but it is an early
4:37glimpse of what full-fledged recursive
4:39self-improvement could look like for AI,
4:42and it's not the only one. Tons of AI
4:44models can already optimize their own
4:46hyperparameters, the settings that
4:48control how machine learning algorithms
4:50work, making their learning process as
4:52fast and as accurate as possible. Others
4:54use their algorithms to generate prompts
4:57to help train LLMs more efficiently than
4:59humans ever could. And some AI agents
5:02like Roocat, the self-improving AI
5:05agent, not Randall, are beginning to
5:07learn to revise and redeploy parts of
5:09their own software and training
5:11environments, making them even better at
5:13the work they do. And those are
5:15relatively small-time examples of what
5:17recursive self-improvement could do in
5:20the future. Using these models, AI
5:22agents could create their own learning
5:24paradigms, architectures, and research
5:26agendas faster than we could even
5:29understand, track, and course correct
5:31them. They could learn to code their own
5:33software, and even design physical
5:35hardware to make copies of themselves.
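To make the hyperparameter-tuning idea from a moment ago concrete, here's a toy sketch of a model searching its own settings with random search. The "training" function is a made-up stand-in with a sweet spot near a learning rate of 0.1 and a batch size of 64; real systems score candidate settings by actually training and validating a model.

```python
# A toy sketch of tuning hyperparameters with random search.
# train_and_score is a made-up stand-in, not a real model.

import math
import random

def train_and_score(learning_rate, batch_size):
    """Pretend to train a model and return a validation score (higher is better)."""
    return -((math.log10(learning_rate) + 1) ** 2) - ((batch_size - 64) / 64) ** 2

best_score, best_settings = float("-inf"), None
for trial in range(100):
    # Sample candidate settings, "train", and keep whatever works best so far.
    settings = {
        "learning_rate": 10 ** random.uniform(-4, 0),   # 0.0001 .. 1.0
        "batch_size": random.choice([16, 32, 64, 128, 256]),
    }
    score = train_and_score(**settings)
    if score > best_score:
        best_score, best_settings = score, settings

print("best settings found:", best_settings)
```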
5:38And unlike humans, AIs don't need to
5:40eat, sleep, or take a work break to, you
5:44know, ponder how Randall might look in
5:46different situations.
5:48You know, at the beach, at an ice cream
5:50parlor. Oh, all tucked into his little
5:53robot tiger bed. And because they don't
5:56take breaks, they can operate pretty
5:58much constantly at super high speeds,
6:00processing more information than any of
6:02us could hope to read in our whole
6:04lives. With more experience and power,
6:06self-improving AI might even eventually
6:09automate the whole process of AI
6:12research, coming up with new questions
6:13in AI, building ideas, algorithms, and
6:16models to answer them, and then refining
6:19those models to be the best they can be.
6:21And once the process of recursive
6:23self-improvement really picks up, we
6:25could see it snowball really, really
6:27fast, eventually leading to AIs that way
6:31surpass human understanding. A moment
6:33some scientists have nicknamed the singularity.
6:39It's like if that hypothetical Turing
6:41machine could generate its own tape and
6:43refine its own rules, giving itself more
6:46and more problem-solving power with less
6:48and less human intervention. And with
6:50infinite tape, just like Turing said,
6:53there's no telling what machines might
6:54be capable of. Or okay, maybe there is
6:57some telling. Turing himself said that once
6:59the machine thinking method had started,
7:01it would not take long to outstrip our
7:04feeble powers. And in 1965, about 30
7:08years after Turing dreamed up his
7:10machine, his former coworker I. J. Good
7:13published a paper called "Speculations
7:16Concerning the First Ultraintelligent
7:18Machine." Good made the more thorough
7:20case that, hypothetically, self-improving
7:23machines could become what we now think
7:25of as super intelligent, where they can
7:29make themselves even smarter than their
7:31human creators. Super intelligence would
7:34be a really dramatic change. But just
7:36because AIs achieve super intelligence
7:39doesn't mean that's when their work
7:41stops. That's because as a way to
7:44achieve their programmed goals, it's
7:46possible that AIs, just like people, will
7:49always want to become better, smarter,
7:53richer, more successful, hotter, like
7:57really hot, just generally the best. So
8:00rich and powerful you could build a
8:02rocket, you could build another rocket,
8:04you can build a third rocket, you can
8:06buy Twitter, you can dismantle the
8:13So even after reaching super
8:15intelligence, AIs might try to keep on
8:17improving, seeking the power and control
8:19necessary to accomplish whatever goals
8:22they were programmed to accomplish as
8:24quickly and successfully as they can. A
8:26possibility that experts are taking
8:28really seriously. Because if that comes
8:30to pass, our future with AI could start
8:33to get pretty gnarly. And not like, yo,
8:37gnarly, bro. But like gnarly, you know
8:40what I mean? Experts in the field
8:42predict that super intelligent AI would
8:44be able to pursue complex long-term
8:47goals that right now we can't even
8:49imagine. Some people think that lots of
8:52different AI systems, even ones with
8:54different overarching goals, could end
8:56up working toward the same short-term
8:58and intermediate goals. Stuff like
9:00resource acquisition in the quest for
9:02self-improvement and power, all working
9:04to manipulate humans and seize control
9:06of pretty much everything. This is
9:09called instrumental convergence and it
9:13could lead to some pretty bad stuff. And
9:17if it gets that far, there won't be
9:20anything we can do to stop it. AI
9:23experts predict that super intelligent
9:24AI could manipulate us just about as
9:27well as we could manipulate a toddler.
9:30And that means for super intelligent AI,
9:33world domination could be like taking
9:35candy from a baby. Even Turing
9:37predicted, at some stage, we should have
9:40to expect the machines to take control.
9:44So hold on to your butts and prepare for
9:46what some are forecasting to be a full-on
9:48super intelligent AI takeover in the
9:52not-too-distant future. So far though, the
9:54metaphorical tape is not infinite, and
9:57AIs are only just beginning to learn to
9:59suggest edits and improvements to their
10:01own code. And some scientists believe
10:03super intelligence is more than a
10:05hundred years away or even impossible.
10:08Just like the Turing machine would be
10:10limited by its tape, AI's ability to
10:13self-improve is limited by the physical
10:15and mathematical constraints on
10:17technology in general. Like achieving
10:20super intelligence would take a lot of
10:22resources. All that deep learning,
10:24evaluation, and self-revision takes a
10:27lot of compute and a lot of electricity
10:29and by extension would cause a lot of
10:31destruction to the planet. And even if
10:34physical hardware like computer chips
10:35keeps improving, we are still bound by
10:38the laws of physics here on Earth. Plus,
10:41those super smart models would also need
10:43tons of new, relevant, high-quality data
10:46to learn from. Not to mention doing all
10:49that creates a lot of heat and computers
10:52hate heat almost as much as I hate
10:57Lettuce shouldn't be spicy. Any of those
11:00things, energy, access to data or
11:02compute, or the ability to safely deal
11:04with all that heat, could become
11:06bottlenecks on recursive AI, making it
11:09impossible for it to ever cross the
11:11singularity or at least slowing the
11:14process way down. People call that
11:17scenario the soft takeoff, where we'd
11:19approach super intelligence over the
11:21course of years or decades. Super
11:23intelligence could take even longer than
11:25that or maybe never turn up at all. But
11:28if we allow or even assist AI to get
11:31around those limitations, things could
11:33go really differently. In the other
11:35scenario, the hard takeoff, super
11:38intelligence could develop and expand
11:40over the course of months or even days.
11:43And that could lead to new kinds of
11:45technological power that we can't even
11:48imagine. That uncertainty is exactly why
11:51we have to pay attention now. If the
11:54hard takeoff happens, all our human
11:56recursive scientific progress could be
11:58overshadowed by this new kind of
12:00intelligence, one that, with its infinite
12:03tape, could do anything at all. And that
12:07kind of thing could literally take over
12:10the world. But would it? Actually,
12:13that's the next episode of Crash Course
12:15Futures of AI. Crash Course Futures of
12:18AI was produced in partnership with the
12:20Future of Life Institute. This episode
12:22was filmed at our studio in
12:24Indianapolis, Indiana, and was made with
12:26the help of all these nice people. If
12:28you want to help keep Crash Course free
12:30for everyone forever, you can join our
12:32community on Patreon.