Barry Diller trusts Sam Altman. But ‘trust is irrelevant’ as AGI nears, he says.
By Jakub Antkiewicz
May 7, 2026
The Trust Paradox
Billionaire media executive Barry Diller, chairman of IAC and Expedia Group, defended OpenAI CEO Sam Altman's character this week while arguing that personal trust is ultimately irrelevant in the face of artificial general intelligence (AGI). Speaking at The Wall Street Journal's "Future of Everything" conference, Diller suggested that the intense scrutiny of Altman is a distraction from the more pressing and uncontrollable nature of the technology being built. The focus, he argued, should not be on the intentions of the leaders but on the unknown consequences of their creations.
Unknowns Outweigh Stewardship
Diller, who co-founded Fox Broadcasting, said the fundamental issue with advanced AI extends beyond the integrity of its stewards. He described his interactions with key figures in AI development, noting their own "sense of wonder" and surprise at what they are building. This underscores his core point: if the creators themselves cannot fully predict the outcomes, then public trust in their leadership provides only a false sense of security. The problem is not the potential for deception but the certainty of unpredictability.
- Beyond Trust: Diller argues the central issue is not leader integrity but the inherent unpredictability of AGI.
- Creator Uncertainty: The developers of advanced AI are themselves surprised by its emergent capabilities.
- Guardrail Imperative: He warned that if humans don't establish controls, an "AGI force" will create its own — an outcome he characterized as irreversible.
- Stewardship is Secondary: The character of individuals like Sam Altman is less important than the systemic risks of the technology.
This perspective shifts the conversation from corporate governance at firms like OpenAI to the collective responsibility of the entire industry. Diller’s warning implies that market forces and massive investments are accelerating development without a proportional focus on containment. His call for “guardrails” is a direct challenge to the sector, suggesting that without proactive, enforceable safety protocols, the path toward AGI is a gamble where the stakes are control over the future itself.
Diller’s argument repositions the debate around AI leadership: the industry's focus on the trustworthiness of key figures like Sam Altman may be a dangerous distraction from the more fundamental and uncontrollable risks inherent in the technology itself. The real concern is not a bad actor, but an unpredictable outcome.