Tim Cook admits that Apple can't get its AI to stop lying

Earlier this week, Apple made one of the most anticipated announcements of its Worldwide Developers Conference: it's finally joining the AI race. During the event, the company unveiled a surprisingly familiar set of machine learning tools. But that wasn't the only thing that caught attention: so did the subsequent confessions of its CEO, Tim Cook, about the reliability of those tools.

In an interview with The Washington Post, Cook openly admitted that he is not entirely sure that the new “Apple Intelligence” won't make things up and confidently distort the truth, a problematic and probably intrinsic tendency that has affected virtually every AI chatbot released to date.

When asked about his “confidence that Apple Intelligence won't hallucinate,” the term that has quickly become shorthand for AI-generated falsehoods, Cook admitted that there are still many unknowns.

“It is not 100%,” he answered, adding that he is still “confident that it will be of very high quality. But I would say in all honesty that's less than 100%. Of course, I would never say it is 100%.”

It's an uncomfortable reality, especially considering how focused the tech industry and Wall Street have been on developing AI chatbots. Even as tens of billions of dollars are invested in the technology, AI tools are repeatedly caught spreading obvious falsehoods and, perhaps more worryingly, convincing lies.

Beyond mixing up facts to the point that they no longer hold together, some of these AI models are trained on dubious data that they then present as truth. For example, last month Google's AI-powered search feature confidently told a user to put glue on their pizza, a response that spread across social networks via a tweet.

That said, Cook is not the first technology executive to admit that these tools may simply keep lying. His comments come after Google CEO Sundar Pichai made similar statements in another interview last month.

“We have definitely made progress when we look at factuality metrics year over year,” Pichai explained. “We are all making it better, but it is not solved.”

It remains to be seen how Apple's own implementation, a sort of marriage between Siri and the upcoming ChatGPT integration, will fare when it comes to these hallucinations.

The outcome matters because the stakes are high: Apple Intelligence will handle the vast amount of sensitive consumer data, including photos, emails, and text messages, that Apple has collected from its customers. Nobody wants Siri to make up a calendar invite or claim a flight was canceled when it wasn't.