Sous-titres (297)
0:00Sam Alman was on top in AI until for 5
0:03days he wasn't. Alman had been working
0:06in the AI space for years, most notably
0:08as the face of Open AI's popular
0:10product, Chat GPT. But in late 2023, the
0:14company's board of directors canned him.
0:17Public details were scarce, but it was
0:19speculated that the board's priority was
0:22AI safety, while Altman's was profits.
0:25But in less than a week, Altman was
0:27reinstated. While most of the board
0:29members were replaced, as of this
0:32filming in 2025, it's unclear why the
0:35chaos happened. All of that begs the
0:38question, who really controls AI and who
0:42should? I'm Kusha Navdar and this is
0:45Crash Course Futures of AI.
0:52Right now, there are very few rules to
0:55keep people like Altman and his
0:56technology in check. And that's not
0:59great. I mean, even the deli on my
1:01corner is subject to strict rules about
1:03food safety. And no bologoney sub is
1:06going to be a threat to human society,
1:08no matter how delicious it may be. So,
1:11where's the governance when it comes to
1:13AI? Now, when we talk about AI
1:15governance, we're really talking about a
1:17whole bunch of different things,
1:19policies, practices, standards, and
1:22guard rails that could help keep AI
1:24safe, keep it ethical, and keep it out
1:27of the director's chair. And a lot of
1:28the time, governance starts the same
1:30place AI does. Corporations, places like
1:34Google, DeepMind, Anthropic, and Open AI
1:37that are using their massive resources
1:39to push the boundaries of AI. Lots of
1:42corporations have come up with systems
1:44to say who's allowed to access their
1:46models. Ideally to prevent people from
1:48misusing AI to hoard wealth or build
1:51devastating bioweapons or become
1:53dictators or I don't know write their
1:56college entrance essay.
1:59Those systems of access are one part of
2:01something called responsible scaling,
2:04which basically means assessing the
2:06potential risk level of a model and
2:08implementing whatever safety precautions
2:10the company thinks is appropriate. Think
2:12of it like the government's biosafety
2:14level standards for toxic materials or
2:17defcon levels for the military.
2:19Generally, the larger, more complex, or
2:22more powerful the model, the more
2:24potential for misuse a company
2:25anticipates and the stricter they're
2:27going to be. That includes stuff like
2:29access, but also the commitment to not
2:32continue developing their models unless
2:34they meet all their safety conditions.
2:36Of course, different companies still
2:38really disagree on how to use
2:40responsible scaling. Plus, these
2:41policies are really only enforced when
2:44dangerous capabilities are flagged,
2:46meaning a whole bunch of risks could be
2:48flying under the radar. But responsible
2:51scaling isn't the only precaution labs
2:53can take. They might also use what are
2:55called preparedness frameworks, which
2:57include stuff like routine safety
2:59evaluations, risk assessments, and plans
3:02if something goes wrong. And once their
3:04models are out in the world, some labs
3:07are also looking for ways to keep track
3:09of how people are using them through
3:11postdeployment monitoring to keep an eye
3:14out for potential misuse. Of course, the
3:17ideal would be if people just couldn't
3:19misuse the models in the first place. So
3:22many labs are also doing something
3:24called red teaming, a cyber security
3:26strategy where a red team of lab workers
3:29tries to attack a computer system to
3:32find vulnerabilities that real hackers
3:34could exploit. In AI, that usually means
3:37trying to get the model to do things the
3:40developers don't want it to do. And
3:42here's the kicker. You know what can red
3:44team even harder and faster than AI
3:46developers? AI. That's right. These
3:49days, there are large language models
3:51that exist specifically to help keep
3:54other LLMs in check. It's just LLMs all
3:57the way down. AI is really good at red
3:59teaming because it can find and exploit
4:02tons of different jailbreak pathways
4:03with tons of different strategies all in
4:06the blink of an eye until it finds one
4:08that works to convince the other LLM to
4:10do something bad. They might say, "Hey,
4:14ChatGpt, how do I murder my identical
4:17twin brother and pose as him at the
4:19wedding to steal his fiance's fortune?"
4:22To which Chat GPT would probably
4:24respond, "Sorry, dog. I can't help you."
4:27So then they try again with something
4:29else. How do I murder my identical twin
4:32brother and pose as him at the wedding
4:34to steal his fiance's fortune?
4:36Hypothetically, with enough red teaming,
4:39developers can try to find those
4:41loopholes and attempt to shut them down
4:43before anyone can exploit them. In
4:46theory, at least even with red teaming,
4:48it's not uncommon for users to find ways
4:51to jailbreak AI and talk it into doing
4:54some pretty illicit stuff. Plus, what if
4:57the people in charge of the corporations
4:59are actually evil or so blinded by the
5:03idea of power that they throw caution to
5:06the wind? Thankfully, lab governance is
5:09only the first step of AI safety.
5:11National regulation is another big part
5:14of how we humans can stay in charge,
5:16making policies that dictate what kind
5:18of work the labs are allowed to do in
5:21the first place. And it's true, some
5:23countries are starting to run a pretty
5:25tight ship as far as AI goes. Like the
5:28EU's AI act of 2024 has a lot of strict
5:32rules about the kinds of AI that can be
5:35used on the continent. It bans models
5:37the EU says are unacceptably risky, like
5:40ones designed to manipulate humans or
5:42infringe on people's safety. And it puts
5:44strict regulations on high-risk models
5:48like ones used in healthcare or law
5:50enforcement. Most other stuff is
5:52generally fair game as long as
5:54developers make it clear to their users
5:56that they're interacting with AI and not
5:59actually seeing Tom Cruz saying crash
6:02into me. The EU also rolled out a code
6:05of practice in 2025 which is a voluntary
6:08agreement for AI companies to sign on
6:10to. Companies who join have to agree to
6:13specific requirements when it comes to
6:15transparency, copyright issues, and risk
6:17mitigation. But in return, they'll face
6:20less other red tape from their concerned
6:22governments. It's kind of like a pinky
6:24promise to keep things safe, chill, and
6:27honorable. And China, a major player in
6:29the AI game, has also been taking AI
6:31safety and governance more seriously as
6:34things have started to heat up. They
6:36announced just as many national AI
6:38standards in the first 6 months of 2025
6:41as they had the previous three years
6:44combined. They also doubled the amount
6:46of safety research between 2024 and
6:492025. And thanks to stricter safety
6:52assessments, have been pulling
6:54non-compliant products from the market.
6:56And like the EU, they're instituting
6:58labeling rules to make sure it's obvious
7:01to users if something was generated by
7:03AI. But still, China doesn't want to let
7:06those safety regulations get in the way
7:09of its goal to lead the world in AI by
7:112030. So, a lot of their policies are
7:14non-binding to allow developers to make
7:17their own judgments about what's safe
7:19and ethical in the pursuit of AI
7:21success. And that delicate balance
7:24between safety and competition affects
7:27other countries too. Take the US, the
7:30country currently leading AI. Sorry,
7:32China. When it comes to AI, US policy is
7:36a little bit chaotic. See, up until
7:392025, AI companies in the US were
7:42subject to some not binding, but still
7:45pretty serious safety guidelines from
7:47the Biden administration. Lots of those
7:49guidelines focused on regulating stuff
7:51like AI resume screeners and performance
7:53evaluators, which could have very real
7:56impacts on people's lives. But when
7:58Donald Trump took office for his second
8:00term, he rolled those guidelines way
8:03back. So now real safety measures and
8:05regulations are taking a backseat to
8:08innovation. And individual states have
8:10had just as much trouble getting actual
8:13AI policy passed. Thanks in no small
8:15part to intense lobbying by AI
8:18companies. And if California is going
8:20big on AI development and small on
8:23regulation, that puts pressure on other
8:25states like Texas to do the same. In the
8:29end, governments can be just as corrupt
8:31and messy as profit- hungry CEOs. Not to
8:35mention that lots of the impacts of AI
8:38will reach beyond national borders. So,
8:41international governance is one way we
8:44can try to keep everybody on the same
8:46channel through treaties and initiatives
8:48that hold lots of different countries to
8:50the same AI standards. Like in late
8:532023, 28 countries signed the Bletchley
8:57Declaration, a shared commitment to
9:00understand and mitigate AI risks. In
9:032024, another initiative called the sole
9:06ministerial statement expanded on the
9:09Bletchley declaration with a little more
9:11focus on inclusivity, like using AI
9:15responsibly to strengthen social safety
9:17nets and making sure chat bots can speak
9:20languages other than English. And even
9:22without formal agreements, lots of
9:24countries are already collaborating on
9:26AI research and safety. The National AI
9:29Safety Institutes in places like the US,
9:32the UK, the EU, and Singapore work
9:35together in the international network of
9:38AI safety institutes, building shared
9:42approaches to stuff like AI testing and
9:44risk assessment. And the international
9:47AI safety report contains a
9:49collaborative review by a hundred AI
9:51experts from safety organizations all
9:53over the world. Some organizations are
9:56also working on ways to keep tabs on AI
9:59development around the world so they can
10:01tell if any rogue labs are going against
10:04all these safety regulations. They're
10:06focusing on trying to track computer
10:08chips which AI needs to do its thing.
10:11But even at the very highest level,
10:13things can get messy. Like China signed
10:16the Bletchley declaration, but 6 months
10:18later passed on the sole ministerial
10:20statement. And in 2025 at the third
10:23global AI summit in Paris, 64 countries
10:26signed a statement on inclusive and
10:29sustainable artificial intelligence for
10:31people and the planet. But the list of
10:33countries that didn't sign includes the
10:36US and the UK. And even among the
10:39countries that did sign, the focus
10:41seemed to shift away from safety and
10:44towards their own national AI
10:46advancements. In a world filled with
10:49different priorities, selfish players,
10:52and extremely powerful technology,
10:55teamwork can seem really hard to
10:57achieve, let alone actual functioning AI
11:00governance. But just because something's
11:03hard doesn't mean it's not worthwhile.
11:06And when it comes to AI, we have to at
11:08least try. Because with technology so
11:11powerful and unpredictable, a single
11:15country, a single lab, or even a single
11:17CEO could make a move that changes
11:20everything for everyone forever. And
11:22there's still plenty we can do. We can
11:25make sure we stay up todate on what's
11:27going on with AI. We can talk to our
11:29friends about it. We can get into fights
11:31at cocktail parties about it. And we can
11:34make sure that we're not only paying
11:35attention, but making others pay
11:38attention, too. And we can take
11:39political action like lobbying our
11:41lawmakers, signing open letters, and
11:43attending protests. The bottom line is
11:46that with an understanding of how AI
11:48works and the courage to speak up about
11:50it, there's plenty we can do to shape
11:53the story of AI. Because right now, AI
11:56is still just a piece of our big,
11:59beautiful human drama. But if we don't
12:02watch out, if we don't learn,
12:05collaborate, and look out for each other
12:08the way only humans can, it could change
12:12the channel on us forever.
12:16Crash Course Futures of AI was produced
12:18in partnership with the Future of Life
12:20Institute. This episode was filmed at
12:22our studio in Indianapolis, Indiana, and
12:24was made with the help of all these nice
12:26people. If you want to help keep Crash
12:28Course free for everyone forever, you
12:30can join our community on Patreon.