Musk may ban Apple devices from his companies over ChatGPT integration
Musk's comments suggest he believes OpenAI is so deeply woven into Apple's operating system that it could capture any personal and private data.
Tesla, SpaceX, and xAI chief Elon Musk has threatened to ban iPhones from all of his companies in response to Apple's OpenAI integrations, announced at WWDC 2024 on Monday.
Musk said in posts on X that if Apple integrates OpenAI "at the OS level," Apple devices will be barred from his firms, and visitors will have to check their Apple devices at the entrance, where they will be "stored in a Faraday cage."
It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!

Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.

— Elon Musk (@elonmusk) June 10, 2024
Replying to Apple CEO Tim Cook, Musk warned that Apple devices would be barred from his firms if Cook didn't "stop this creepy spyware."
Both Apple and OpenAI have said that users are asked for permission before any questions, documents, or photographs are sent to ChatGPT. Musk's comments, however, suggest he believes OpenAI is deeply integrated into Apple's operating system and therefore able to capture personal and private data.
Apple announced that in iOS 18, users will be able to ask Siri a question, and if Siri determines ChatGPT can help, it will ask permission to share the query before delivering the answer. This lets users get an answer from ChatGPT without opening the ChatGPT iOS app, and the same applies to photos, PDFs, and other documents.
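The key point of dispute is that handoff to ChatGPT is gated by an explicit, per-request prompt. The following is a minimal sketch of that consent pattern in Swift; the types and functions here are purely hypothetical stand-ins, not Apple's actual APIs.

```swift
import Foundation

// Hypothetical illustration of the per-request consent flow described above.
// None of these types or functions are Apple's actual APIs.

enum Consent {
    case granted
    case denied
}

struct Query {
    let text: String
}

// Stand-in for on-device handling by the assistant.
func answerLocally(_ query: Query) -> String {
    return "On-device answer for: \(query.text)"
}

// Stand-in for a ChatGPT call; in the described integration this is where
// the query would leave the device, which is why consent is asked first.
func askChatGPT(_ query: Query) -> String {
    return "ChatGPT answer for: \(query.text)"
}

// The query is handed to ChatGPT only after an explicit, per-request "yes".
func handle(_ query: Query,
            chatGPTMightHelp: Bool,
            askUser: (Query) -> Consent) -> String {
    guard chatGPTMightHelp else {
        return answerLocally(query)
    }
    switch askUser(query) {
    case .granted:
        return askChatGPT(query)
    case .denied:
        return answerLocally(query)
    }
}

// Example: the user approves this particular request.
let reply = handle(Query(text: "Plan a picnic menu"),
                   chatGPTMightHelp: true,
                   askUser: { _ in .granted })
print(reply)
```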
Musk, on the other hand, would prefer that OpenAI's capabilities be confined to a separate app rather than integrated with Siri.
Sam Pullara, CTO of Sutter Hill Ventures, noted that the user approves each request individually and that OpenAI does not have access to the device. Musk replied, "Then leave it as an app. This is nonsense."
Pullara stated that the ChatGPT integration works much as the ChatGPT app does today, and that the AI models involved are Apple's own, running either on-device or in Apple's Private Cloud.
Apple also revealed another integration, which gives users system-wide access to ChatGPT through Writing Tools' "compose" capability. Apple suggested, for example, asking ChatGPT to write a bedtime story for your child directly in a document. You can also ask ChatGPT to create images in a variety of styles to accompany your writing. These capabilities effectively give users free access to ChatGPT without having to register an account.
According to TechCrunch, Musk's objections trade on the fact that Apple customers may not be familiar with the finer points of the privacy issues involved.
According to Apple, users' requests and information are not tracked; meanwhile, ChatGPT subscribers can connect their accounts and use their premium features directly within Apple's AI experiences.
Apple SVP of Software Engineering Craig Federighi offered reassurance that the user is "in control over when ChatGPT is used and will be asked before any of your information is shared."
OpenAI stated in a blog post that "requests are not stored by OpenAI, and users' IP addresses are obscured." Users can also choose to link their ChatGPT accounts, in which case their data preferences apply under ChatGPT's policies; linking is an opt-in step that connects the feature to a user's paid subscription.
ChatGPT under fire: Austria complains about 'uncorrectable errors'
A Vienna-based privacy advocacy group announced in April that it intends to file a complaint against ChatGPT in Austria, alleging that the AI tool, known for "hallucinating," generates incorrect answers that its creator, OpenAI, cannot rectify.
NOYB ("None of Your Business") stated that there is no assurance of the program's ability to provide accurate information, emphasizing that "ChatGPT keeps hallucinating -- and not even OpenAI can stop it."
The group criticized OpenAI for openly admitting that it cannot correct inaccuracies generated by its generative AI tool, and for failing to disclose where the data it uses comes from or what information ChatGPT stores about individuals.
According to NOYB, errors of this nature are deemed unacceptable in the context of personal information, as EU legislation mandates that personal data must be accurate.
"If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals," said Maartje de Graaf, data-protection lawyer at NOYB, as quoted by AFP.
"The technology has to follow the legal requirements, not the other way around," Graaf added.