
How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models, a kind of AI such as Google’s Bard or OpenAI’s ChatGPT, in particular run the risk of providing potentially dangerous information.

Should someone, say, ask for instructions to build a bomb, or for advice on harming themselves, it would be better for AI not to answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling, or explain why the AI can’t answer.


And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call

More from Global Stage

Can we use AI to secure the world's digital future?

How do we ensure AI is safe, available to everyone, and enhancing productivity? It’s a big topic at this year’s UN General Assembly. That’s why GZERO’s Global Stage livestream brought together leading experts at the heart of the action for “Live from the United Nations: Securing our Digital Future,” an event produced in partnership between the Complex Risk Analytics Fund (CRAF’d) and GZERO Media’s Global Stage series, sponsored by Microsoft.

Is the Europe-US rift leaving us all vulnerable?

As the tense and politically charged 2025 Munich Security Conference draws to a close, GZERO’s Global Stage series presents a conversation about strained relationships between the US and Europe, Ukraine's path ahead, and rising threats in cyberspace.

Using AI to diagnose patients with a smartphone but no healthcare access

Artificial intelligence is often seen as a futuristic tool—but for some global health challenges, it’s already the only solution. Dr. Juan Lavista Ferres, Microsoft's Chief Data Scientist, Corporate Vice President, and Lab Director for the AI for Good Lab, points to a powerful example: diagnosing a leading cause of childhood blindness in newborns.

AI adoption starts in the C-suite

Successful adoption of AI in business requires more than just access to tools, says Eurasia Group's Caitlin Dean in a Global Stage discussion at the 2025 UN STI Forum.

Winning the AI race isn't about who invented it first

Author Jeffrey Ding says that scaling AI, not just inventing it, drives national power. He shared insights on AI diffusion and inclusion in a Global Stage livestream at the 2025 UN STI Forum.

Customizing AI strategies for every region, culture, and language is critical

As artificial intelligence races ahead, there’s growing concern that it could deepen the digital divide—unless global inclusion becomes a priority. Lucia Velasco, AI Policy Lead at the United Nations Office for Digital and Emerging Technologies, warns that without infrastructure, local context, and inclusive design, AI risks benefiting only the most connected parts of the world.