OpenAI’s new Sora app showcases AI-generated video clips, but within days it was flooded with “deepfake” videos of CEO Sam Altman.
While praised for its technological advancement, researchers expressed concern that OpenAI’s move may conflict with its stated nonprofit mission of developing safe AI that benefits humanity, according to a report by TechCrunch reviewed by Al Arabiya Business.
Researcher John Holman wrote on X that he had been worried when he first heard of the launch plans for Sora 2, but said the team had worked to make the experience a positive one.
Boaz Barak, an OpenAI researcher and Harvard professor, said the app was technically impressive but warned of risks similar to those that have plagued other social media platforms.
For his part, Sam Altman explained that the heavy investment in Sora is meant to secure the capital and computing power needed to build advanced AI in the service of scientific research.
He added: “It’s also nice to bring smiles to people’s faces with new products, while generating some profit to cover our needs.”
The launch of Sora highlights the tension between OpenAI’s identity as a nonprofit lab and its increasing expansion into consumer products.
This tension has prompted regulators, including the California Attorney General, to scrutinize the company’s structure and its future IPO plans.
OpenAI says it will try to avoid the mistakes of TikTok and Instagram, promising alerts for excessive use and algorithms that are not designed to maximize time spent scrolling.
However, from day one, users noticed engagement-boosting features such as dynamic emojis, raising concerns that the app could follow the same path as other platforms.
While the company insists that ChatGPT provides real benefits, it admits that Sora exists primarily for entertainment.