iPhone users should be prepared for Apple’s upcoming AI models to potentially spread misinformation and other harmful content. The manufacturer admitted this shortly after unveiling its AI package “Apple Intelligence”, which arrives this fall with iOS 18, iPadOS 18, and macOS 15 and brings a large number of new tools.
Apple CEO Tim Cook stressed in an interview that he is by no means 100 percent certain that the in-house generative AI will not hallucinate. The company has designed the technology to be as safe as possible and is confident that Apple’s AI models generate content of “very high quality”. “To be honest, I would say it’s not 100 percent,” Cook told the Washington Post.
Harmful answers from Apple Intelligence too
In a long treatise on Apple’s new foundation models, the company refers to test runs in which the AI model responded to adversarial prompts, in many cases with “harmful content, sensitive topics, and false information.” The percentage of such rule violations is relatively low in Apple Intelligence compared with other AI models: for its server-side AI model, Apple says that 6.6 percent of queries resulted in unwanted answers, while for other models such as GPT-4 Turbo this ratio is a good 20 percent. Apple says it continues to actively work on checking the safety of its models.
Hallucinations are a widespread problem in AI models, ranging from invented places to misinformation to obviously dubious answers.
Apple’s foundation models were trained not only on licensed content, but also on publicly accessible content from the open web – without explicitly obtaining special permission. To opt out, website operators must block the company’s web crawler “Applebot”.
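In practice, such a block is typically declared in a site’s robots.txt file. The following is a minimal sketch; the `Applebot` user-agent string is documented by Apple, while the exact paths shown here are only placeholders:

```text
# robots.txt – example opt-out for Apple's crawler
# Disallow Applebot from the entire site:
User-agent: Applebot
Disallow: /
```

Note that blocking Applebot entirely may also remove a site from Apple features such as Siri and Spotlight suggestions, since the same crawler feeds those services.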
Apple AI partly on device, partly in the cloud
Apple’s AI models do not run entirely locally, but partly in the cloud – on the company’s own servers with in-house chips. The company emphasizes that great efforts have been made to handle user data as securely as possible: nothing is stored, and Apple itself does not gain access. Apple has now published the first concrete details about the architecture of this “Private Cloud Compute”. External security researchers are expected to be able to verify parts of these promises soon. It is currently unclear to what extent users can prevent some of their data from ending up on Apple servers for AI functions – and how visible the data transfer really is in the operating system.
Along with “Apple Intelligence”, a first external AI model, ChatGPT, is also being integrated into the operating systems: before data is sent to OpenAI’s servers, users are asked for consent. Cook says the provider has taken data-protection precautions – IP addresses of queries are said not to be tracked. The Apple system allows ChatGPT features to be used even without account registration. Answers are marked with a warning that they may contain false information. Apple apparently plans to work with other AI companies in the future.
