Are AI companies being reckless, ignoring safety concerns in the race to develop superintelligence? On GZERO World, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI whistleblower and executive director of the AI Futures Project, to discuss new developments in artificial intelligence and his concern that big tech companies like OpenAI and DeepMind are so focused on beating each other to powerful new AI systems that they are neglecting safety guardrails, oversight, and existential risk. Kokotajlo left OpenAI last year over deep concerns about the direction of its AI development, and he argues that tech companies are dangerously unprepared for the arrival of superintelligent AI. If he’s right, humanity is barreling toward an era of unprecedented power without a safety net, one where the future of AI is decided not by careful planning but by who gets there first.
“OpenAI and other companies are just not giving these issues the investment they need,” Kokotajlo warns. “We need to make sure that the control over the army of superintelligences is not something one man or one tiny group of people gets to have.”
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
Are AI companies recklessly racing toward artificial superintelligence, or can we avoid a worst-case scenario? On GZERO World, Ian Bremmer sits down with Daniel Kokotajlo, co-author of AI 2027, a new report that forecasts how artificial intelligence might progress over the next few years. As AI approaches human-level intelligence, AI 2027 predicts its impact will “exceed that of the Industrial Revolution,” but it warns of a future where tech firms race to develop superintelligence, safety rails are ignored, and AI systems go rogue, wreaking havoc on the global order. Kokotajlo, a former OpenAI researcher, left the company last year, warning that it was ignoring safety concerns and avoiding oversight in its race to develop ever more powerful AI. He joins Bremmer to talk about the race to superhuman AI, the existential risks, and what policymakers and tech firms should be doing right now to prepare for an AI future experts warn is only a few short years away.
“One of the unfortunate situations that we're in as a species right now is that humanity in general mostly fixes problems after they happen,” Kokotajlo says. “Unfortunately, the problem of losing control of your army of superintelligences is a problem that we can't afford to wait and see how it goes and then fix it afterwards.”
Listen: How much could our relationship with technology change by 2027? In the last few years, new artificial intelligence tools like ChatGPT and DeepSeek have transformed how we think about work, creativity, even intelligence itself. But tech experts are ringing alarm bells: powerful new AI systems that rival human intelligence are being developed faster than regulation, or even our understanding, can keep up. Should we be worried? On the GZERO World Podcast, Ian Bremmer is joined by Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, to discuss AI 2027, a new report that forecasts AI’s progression, in which tech companies race to beat each other to superintelligent AI systems, with existential risks ahead if safety rails are ignored. AI 2027 reads like science fiction, but Kokotajlo’s team has direct knowledge of current research pipelines, which is exactly why it’s so concerning. How will artificial intelligence transform our world, and how do we avoid the most dystopian outcomes? What happens when the line between man and machine disappears altogether?
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Artificial General Intelligence (AGI) is the holy grail of AI research and development. What exactly does AGI mean, and how will we know when we’ve achieved it? On Ian Explains, Ian Bremmer breaks down one of the most exciting (and terrifying) discussions happening in artificial intelligence right now: the race to build AGI, machines that don’t just mimic human thinking but match and then far surpass it. AGI is still hard to define: some say it’s when a computer can accomplish any cognitive task a human can; others say it’s about transfer learning. Researchers have been predicting AGI’s arrival for decades, but lately, as new AI tools like ChatGPT and DeepSeek become more and more powerful, a consensus has formed that achieving true general intelligence in computers isn’t a matter of if, but when. And when it does arrive, experts say it will transform almost everything about the way humans live their lives. But is society ready for the huge changes they warn are only a few years away? What happens when the line between man and machine disappears altogether?
Data center servers and components containing the newest artificial intelligence chips from Nvidia are seen on display at the company's GTC software developer conference in San Jose, California, USA, on March 19, 2025.
8: Where do US advanced microchips go? US lawmakers want to know. A bipartisan group of eight congresspeople has introduced a bill requiring tracking technology on any export-bound artificial intelligence chips. The proposal, similar to a Senate bill introduced last week, is meant to stop cutting-edge American AI tech from going to China.
100: Tesla CEO Elon Musk’s political action committee is being sued for failing to pay the $100 that it – controversially – promised to give swing-state voters who signed a pro-Constitution petition during last year’s presidential election.
21: The central Canadian province of Manitoba is struggling to control 21 active wildfires. The fast-moving blazes killed two people earlier this week and have forced the evacuation of more than 1,000 Manitobans. So far, this season’s 80 fires are nearly double the 20-year average.
11 billion: Honda is moving production of some of its vehicles from Ontario to the US, and postponing a plan to invest $11 billion in the production of EVs and batteries in Canada. The move is a direct response to Donald Trump’s 25% tariff on Canadian autos and parts.
3,000: Honda may be leaving, but the rubber ducks are coming! The owners of the Rubber Duck Museum in Point Roberts, Washington, a US town accessible only via Canadian territory, are decamping for Canada — along with their famous retail shop of 3,000 novelty ducks. The reason? Trump’s threats and tariffs on Canada have caused such a severe drop in cross-border visitors that the business can no longer stay afloat in the US.
President Joe Biden signs an executive order about artificial intelligence as Vice President Kamala Harris looks on at the White House on Oct. 30, 2023.
US President Joe Biden on Monday signed an expansive executive order on artificial intelligence, directing a bevy of government agencies to set new rules and standards for developers on safety, privacy, and fraud. Under the Defense Production Act, the administration will require AI developers to share safety and testing data for the models they’re training, in the name of protecting national and economic security. The government will also develop guidelines for watermarking AI-generated content and fresh standards to protect against “chemical, biological, radiological, nuclear, and cybersecurity risks.”
The US order comes the same day that G7 countries agreed to a “code of conduct” for AI companies, an 11-point plan developed under the “Hiroshima AI Process.” It also comes mere days before government officials and tech-industry leaders meet in the UK at a forum hosted by British Prime Minister Rishi Sunak. The event will run tomorrow and Thursday, Nov. 1-2, at Bletchley Park. While several world leaders have passed on attending Sunak’s summit, including Biden and Emmanuel Macron, US Vice President Kamala Harris and European Commission President Ursula von der Leyen plan to participate.
When it comes to AI regulation, the UK is trying to differentiate itself from other global powers. Just last week, Sunak said that “the UK’s answer is not to rush to regulate” artificial intelligence while also announcing the formation of a UK AI Safety Institute to study “all the risks, from social harms like bias and misinformation through to the most extreme risks of all.”
The two-day summit will focus on the risks of AI, particularly those posed by large language models trained on huge amounts of text and data.
Unlike von der Leyen’s EU, with its strict AI regulation, the UK seems more interested in attracting AI firms than immediately reining them in. In March, Sunak’s government unveiled its plan for a “pro-innovation” approach to AI regulation. In announcing the summit, the government’s Department for Science, Innovation, and Technology touted the country’s “strong credentials” in AI: a sector that employs 50,000 people, brings £3.7 billion to the domestic economy, and houses key firms like DeepMind (now owned by Google). The government is also investing £100 million in AI safety research.
Despite the UK’s light-touch approach so far, the Council on Foreign Relations described the summit as an opportunity for the US and UK, in particular, to align on policy priorities and “move beyond the techno-libertarianism that characterized the early days of AI policymaking in both countries.”
Artificial intelligence is often seen as a futuristic tool—but for some global health challenges, it’s already the only solution. Dr. Juan Lavista Ferres, Microsoft's Chief Data Scientist, Corporate Vice President, and Lab Director for the AI for Good Lab, points to a powerful example: diagnosing a leading cause of childhood blindness in newborns.
In this Global Stage conversation from the 2025 STI Forum at the United Nations, Ferres explains how AI is being used to detect retinopathy of prematurity, a condition affecting premature babies that now ranks as the world’s top cause of childhood blindness. The problem? There aren’t nearly enough pediatric ophthalmologists to meet global demand—and without early diagnosis, the condition often leads to permanent vision loss.
“We have AI models today that can diagnose this from your smartphone,” says Ferres. “This is just one example where AI is not just the solution—it’s the only solution we have.”
He argues that technology like this can empower doctors, not replace them, and help close critical gaps in healthcare access. With billions of people still lacking adequate care, Ferres believes AI can be a transformative force for scaling health services—if deployed thoughtfully and equitably.
This conversation is presented by GZERO in partnership with Microsoft, from the 2025 STI Forum at the United Nations in New York. The Global Stage series convenes global leaders for critical conversations on the geopolitical and technological trends shaping our world.