
March 2025 – Gemini 2.5, ChatGPT Image Generation, Llama 4 controversy

March brought some exciting new releases: the now-viral ChatGPT 4o Image Generation, which broke the internet with AI-generated anime art, and Google’s Gemini 2.5 model, which has turned many heads across the industry. Our Snapshot covers more than that, though; read on to learn about these releases and other notable advancements and news in technology and business.

IN THIS BUSINESS & TECHNOLOGY NEWSLETTER:
  • Gemini 2.5
  • ChatGPT Image Generation
  • Llama 4 controversy

The World Economic Forum has launched a new digital safety framework to help organizations mitigate online risks like misinformation and exploitation. Meanwhile, Microsoft is addressing cybersecurity threats with AI-powered agents to combat cybercrime on a larger scale.

“The digital world is evolving rapidly, and so too must our approach to digital safety,” note two contributors to the report, Adam Hildreth, Founder of Crisp, and Julie Inman Grant, eSafety Commissioner, in a recent article for Forum Stories. “Ensuring platforms prioritize digital safety is a collective effort that requires proactive planning, continuous adaptation, and a commitment to collaboration across sectors.”

At the 2025 Mobile World Congress in Barcelona, industry leaders, policymakers, and technology innovators explored the latest advancements in 5G, artificial intelligence, and next-generation connectivity.

The four-day event, hosted by the GSMA at the Fira Gran Via exhibition center, was themed “Converge, Connect, Create,” emphasizing the fusion of mobile and AI-driven technologies.

OpenAI has integrated image generation directly into GPT-4o, letting users create useful images in conversation. The model excels at rendering text, following detailed prompts, and refining images through natural back-and-forth. While it offers impressive capabilities, OpenAI continues to improve safety features and model performance to ensure responsible use.

We trained our models on the joint distribution of online images and text, learning not just how images relate to language, but how they relate to each other. Combined with aggressive post-training, the resulting model has surprising visual fluency, capable of generating images that are useful, consistent, and context-aware.
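For readers who want to experiment with image generation programmatically, here is a minimal sketch using the OpenAI Python SDK’s Images endpoint. One assumption to flag: at launch, 4o image generation was rolled out inside ChatGPT rather than the API, so this sketch uses the existing, publicly documented dall-e-3 model, and the prompt and size are purely illustrative.

    from openai import OpenAI

    # The client reads OPENAI_API_KEY from the environment.
    client = OpenAI()

    # Illustrative request; model, prompt, and size are assumptions to
    # check against the current API documentation.
    result = client.images.generate(
        model="dall-e-3",
        prompt="A hand-drawn anime-style postcard of a rainy Tokyo street",
        size="1024x1024",
        n=1,
    )

    # The default response contains a temporary URL to the generated image.
    print(result.data[0].url)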

According to Google, Gemini 2.5 is its most intelligent and advanced AI model yet, featuring improved reasoning and advanced coding capabilities. The experimental version of Gemini 2.5 Pro achieves leading results on benchmark tests and is available in Google AI Studio and to Gemini Advanced users.

Gemini 2.5 Pro Experimental is our most advanced model for complex tasks. It tops the LMArena leaderboard — which measures human preferences — by a significant margin, indicating a highly capable model equipped with high-quality style. 2.5 Pro also shows strong reasoning and code capabilities, leading on common coding, math and science benchmarks.
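Because the experimental model is exposed through Google AI Studio, a quick way to test its reasoning and coding claims is the google-generativeai Python SDK. The sketch below is a minimal example; the model id “gemini-2.5-pro-exp-03-25” and the prompt are assumptions to verify against AI Studio’s current model list.

    import google.generativeai as genai

    # API key created in Google AI Studio.
    genai.configure(api_key="YOUR_API_KEY")

    # Assumed experimental model id at the time of writing; check AI Studio
    # for the current name before running.
    model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

    response = model.generate_content(
        "Write a Python function that checks whether a string is a palindrome, "
        "then explain your reasoning step by step."
    )
    print(response.text)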

  • Opinion: I switched back to Google… and I kinda hate that it’s good now

Google’s recent Gemini update has made enough of an impression that even some OpenAI fans feel slightly conflicted. One user on the r/OpenAI subreddit writes:

  • It’s everywhere – Unlike ChatGPT, Gemini is baked into Gmail, Search, and Calendar. It just works.
  • Less censorship – There’s a way to push Image Editor beyond the usual limits.
  • Gemini 2.5 Pro is FREE – Meanwhile, OpenAI is charging $20/month.
  • Actual research mode – It doesn’t hallucinate nearly as much anymore.
(…) I didn’t expect to say this, but Google might actually be back in the AI race. Are they about to dominate, or will they fumble again? 🤔

According to recent reports, Meta has been accused of manipulating AI benchmarks by using a specially optimized version of its Llama 4 Maverick model to get better results on the LMArena platform. The publicly available version of Maverick differs from the one that was tested, undermining the benchmark’s credibility as an indicator of the model’s true capabilities.

“Meta’s interpretation of our policy did not match what we expect from model providers,” LMArena posted on X two days after the model’s release. “Meta should have made it clearer that ‘Llama-4-Maverick-03-26-Experimental’ was a customized model to optimize for human preference. As a result of that, we are updating our leaderboard policies to reinforce our commitment to fair, reproducible evaluations so this confusion doesn’t occur in the future.”
