Let’s get cracking! ⬇️
Elon Musk's Twitter rebranded as X. Here's why
The company said it plans to become an online messaging and payments hub. Read the article on i.abcnewsfe.com
🐦 News that probably shocked everyone, including Twi… sorry – X’s – employees. The rebrand of one of the most prominent social media platforms was – by marketing standards – rather quick.
The plan for the future is an ambitious one – X is meant to become an “everything app”, much like WeChat is in China:
“What is new about X? For now, X is a rebranded Twitter — but the company on Sunday revealed plans to offer users a one-stop shop for many of their online needs. The aspiration was made public as long ago as last year. Days after acquiring Twitter in October, Musk tweeted: “Buying Twitter is an accelerant to creating X, the everything app.” Taking a step closer earlier this month, Musk launched an artificial intelligence company called xAI, vowing to develop a generative AI program that competes with established offerings like ChatGPT. Describing X’s goal as “unlimited interactivity,” Yaccarino said on Sunday that the company plans to become a hub of online messaging and commerce.” Max Zahn
ABC NEWS
Moving AI governance forward
OpenAI and other leading labs reinforce AI safety, security and trustworthiness. Read the article on openai.com
🧠 All the big players in AI – Anthropic, Google, Microsoft and OpenAI – are now collaborating on the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of AI models. What will this mean for the future of AI?
“Today, Anthropic, Google, Microsoft and OpenAI are announcing the formation of the Frontier Model Forum, a new industry body focused on ensuring safe and responsible development of frontier AI models. The Frontier Model Forum will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, such as through advancing technical evaluations and benchmarks, and developing a public library of solutions to support industry best practices and standards.
The core objectives for the Forum are:
1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
3. Collaborating with policymakers, academics, civil society and companies to share knowledge about trust and safety risks.
4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early…”
OPENAI
ChatGPT’s accuracy in solving basic math declined drastically, study finds
From 98% to 2% within just a few months. Read the article on techstartups.com
🧮 Researchers at Stanford University have conducted a study that is being interpreted as showing that GPT-4 has gotten worse since its release:
“According to the latest study conducted by researchers at Stanford University, ChatGPT’s performance has actually worsened on specific tasks between March and June. The finding raises concerns about the AI’s overall capabilities and questions about the factors contributing to its apparent decline in performance. As part of the study, Stanford researchers compared the performance of ChatGPT over several months across four diverse tasks: solving math problems, answering sensitive questions, generating software code, and visual reasoning.
The study identified significant fluctuations in ChatGPT’s capabilities, referred to as “drift,” while performing specific tasks. The researchers focused on two versions of the technology: GPT-3.5 and GPT-4. Notably, they observed remarkable variations in GPT-4’s ability to solve math problems.
In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of the cases. Surprisingly, just three months later, this accuracy plunged dramatically to a mere 2.4%. Conversely, the GPT-3.5 model showed contrasting results. The March version only managed to answer the same question correctly 7.4% of the time, while the June version exhibited a remarkable improvement, achieving an 86.8% accuracy rate.” Daniel Levi
TECH STARTUPS
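For reference, the ground truth behind that benchmark question is easy to verify: 17077 is in fact prime. Below is a minimal Python sketch that confirms this by trial division and shows the kind of yes/no question being scored; the exact prompt wording and scoring setup used by the Stanford researchers are assumptions here, not taken from the study.

```python
# Minimal sketch: verify the ground truth behind the "drift" example.
# The prompt string is an illustrative paraphrase, not the study's exact wording.
import math

def is_prime(n: int) -> bool:
    """Deterministic trial division; plenty fast for small numbers like 17077."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

question = "Is 17077 a prime number? Answer yes or no."  # hypothetical prompt
print(question)
print("Ground truth:", "yes" if is_prime(17077) else "no")  # prints "yes"
```

Whatever caused the drift, the correct answer never changed, which is what makes the reported drop from 97.6% to 2.4% accuracy so striking.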