
December 2023 – Google Gemini, AI in 2024

Hi! Welcome to the new edition of our newsletter – the first one in 2024! It’s time to hit the ground running with a summary of the biggest news from last month, including a brand-new episode of CEO Perspective, in which Krzysztof Goworek looks back at the advancements of 2023.

IN THIS BUSINESS & TECHNOLOGY NEWSLETTER:
  • Google Gemini
  • Apple's plan for Gen AI
  • AI in 2024: what's next?

CEO PERSPECTIVE – DECEMBER 2023

In the field of AI, Google’s launch of Gemini, a new-gen AI platform, made the biggest waves. What makes it different from the competition? What can it really do? What’s the difference between Bard and Gemini? What’s up with the “fake demo video”? All answers can be found in TechCrunch’s updated summary:

All Gemini models were trained to be “natively multimodal” — in other words, able to work with and use more than just text. They were pre-trained and fine-tuned on a variety of audio, images and videos, a large set of codebases, and text in different languages. That sets Gemini apart from models such as Google’s own large language model LaMDA, which was only trained on text data. LaMDA can’t understand or generate anything other than text (e.g. essays, email drafts and so on) — but that isn’t the case with Gemini models. Their ability to understand images, audio and other modalities is still limited, but it’s better than nothing. Kyle Wiggers
TechCrunch

The humanity of AI is showing itself… in an interesting way. Is ChatGPT fed up with us? Many users complain that it has become “lazy” and “rude” – to the point that OpenAI is seriously investigating the issue:

In recent days, more and more users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as people ask, or that it does not seem interested in answering their queries. If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest. Some complained that it did so in a particularly sassy way, telling people that they are perfectly able to do the work themselves, for instance. Andrew Griffin
Independent

According to Apple’s research paper on Gen AI, titled “LLM in a Flash”, the tech giant wants to run AI tasks fully on its own hardware rather than lean on cloud solutions:

Apple’s latest research about running large language models on smartphones offers the clearest signal yet that the iPhone maker plans to catch up with its Silicon Valley rivals in generative artificial intelligence. (…) Its approach “paves the way for effective inference of LLMs on devices with limited memory,” they said. Inference refers to how large language models, the large data repositories that power apps like ChatGPT, respond to users’ queries. Chatbots and LLMs normally run in vast data centers with much greater computing power than an iPhone. The paper was published on December 12 but caught wider attention after Hugging Face, a popular site for AI researchers to showcase their work, highlighted it late on Wednesday. It is the second Apple paper on generative AI this month and follows earlier moves to enable image-generating models such as Stable Diffusion to run on its custom chips. Tim Bradshaw
Ars Technica
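
If you are curious what on-device inference looks like in practice, below is a minimal sketch in Python that runs a small open model entirely on a local CPU using the Hugging Face transformers library. It is purely an illustration of the general idea – local inference with no cloud round trip – not the flash-memory technique from Apple’s paper, and the model choice is our own assumption, picked so the example fits in an ordinary laptop’s RAM.

# Minimal local-inference sketch (illustration only: generic CPU inference,
# not the flash-memory method from Apple's "LLM in a Flash" paper).
# Assumes the transformers and torch packages are installed; the model is
# an assumed choice of a small open model that fits in laptop memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # weights load into local memory

prompt = "Explain in one sentence why on-device inference matters."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens entirely on the local machine - no data center involved.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))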

MIT Technology Review writers took on a herculean task – trying to predict what will happen next in the erratic world of AI in 2024. Last year they hit the jackpot with some of their predictions, so it’s worth checking out what they are betting on this year.

We decided to ignore the obvious. We know that large language models will continue to dominate. Regulators will grow bolder. AI’s problems—from bias to copyright to doomerism—will shape the agenda for researchers, regulators, and the public, not just in 2024 but for years to come. Melissa Heikkilä & Will Douglas Heaven
MIT Technology Review
