Leading AI models rely on unwieldy code and vast computing power, but the new Phi-4 model is small enough that companies could run it on their own systems.
World models, like all AI models, also hallucinate — and internalize biases in their training data. A world model trained largely on videos of sunny weather in European cities might struggle to comprehend or depict Korean cities in snowy conditions, for example, or simply do so incorrectly.
Madonna posts 'unethical' AI-generated photos with pope
A first-of-its-kind study highlights the stark gender disparity in AI-generated nonconsensual intimate images — and puts into focus the evolving risks for women in politics and public life.
As technology continues to expand its integration with healthcare services, these experts caution that the spike in innovation is outpacing the knowledge
The AI news was seemingly nonstop this week, from OpenAI's 12 Days of "Shipmas" to Google Gemini updates and new Apple Intelligence features.
Character.AI users can create original chatbots or interact with existing bots. Two lawsuits allege those bots harmed teen users.
OpenAI co-founder Ilya Sutskever spoke on a range of topics at NeurIPS, the annual AI conference, including his predictions for AI "superintelligence."
At Peter's Chapel in Lucerne, Switzerland, a virtual Jesus dispensed words of faith as part of “Deus in Machina.” Not surprisingly, controversy ensued.
A new report graded companies including Meta, Anthropic, and OpenAI on their AI safety measures. Many were found lacking.
Systems that operate on behalf of people or corporations are the latest product from the AI boom, but these “agents” may present new and unpredictable risks.
OpenAI’s cofounder and former chief scientist, Ilya Sutskever, made headlines earlier this year when he left to start his own AI lab, Safe Superintelligence Inc. He has avoided the limelight since his departure but made a rare public appearance in Vancouver on Friday at the Conference on Neural Information Processing Systems (NeurIPS).