Week 22

Hi Humans,

Welcome to the first AI.HUB newsletter!


The purpose of this weekly newsletter is to keep you updated on what is happening in the world of AI, share links to exciting articles, and hopefully take the edge off your AI urge.


Without further ado, let's get started.


AI news from the past week:

  • Google launched its newest AI search feature, and it didn't go well.

  • Employees are leaving OpenAI over safety concerns.

  • Elon Musk wants to build a 10,000-chip supercomputer to train xAI’s upcoming Grok model.

  • Apple signs a deal with OpenAI for iOS.


Why It Matters

This isn't the first time Google has struggled with an AI tool launch. When Gemini (formerly Bard) was introduced, it was riddled with errors. Later, controversy arose when Google's image generator produced racially diverse depictions of Nazis and the Founding Fathers.


However, there's a critical difference between misinformation on Gemini, which carries a disclaimer (“Gemini may display inaccurate info, including about people, so double-check its responses”), and misinformation on Google Search, a trusted service handling billions of search queries daily with no such disclaimer.


I expect Google to fix this issue, and I believe we will have to be patient before ‘AI Overview’ is released to the Danish market. In the meantime, we can always use Perplexity or wait patiently for the arrival of OpenAI's search feature.


‘OpenAI is shouldering an enormous responsibility on behalf of all humanity’


This statement comes from Jan Leike, who left his position as head of superalignment at OpenAI on Friday, May 17th.


Jan Leike announced his resignation on the social media platform X, where he explained that OpenAI is no longer prioritizing safety and security, focusing instead on shipping shiny products.

Why It Matters

This is a no-brainer. With more than 180.5 million people using and sharing data with ChatGPT, data security and safety should always be at the top of the agenda, keeping the risk of privacy violations and misuse as low as possible.


I think we, as users and humans, need to demand data transparency and easy-to-understand data policies (I know it is easier said than done) and start thinking critically about how we share our data. Otherwise, we will all end up in a 'Joan Is Awful' episode (a Black Mirror reference, and absolutely worth watching!).

Annemette Møhl

Annemette Møhl is a dynamic entrepreneur and AI expert, currently serving as founder of Borbaki and AI.HUB. With extensive experience in AI project management, Annemette has been instrumental in implementing and monitoring advanced AI tools. She is also an AI lecturer and speaker, contributing to the knowledge base at AI.HUB.

https://www.linkedin.com/in/annemette-moehl/