AI training with bug reports: Apple apparently wants to use the bug reports submitted by developers to train Apple Intelligence — even though such reports and system logs can easily contain personal and sensitive data, as the manufacturer itself admits. This new use came to light through a dialog that appears before an error report is sent via Apple's Feedback Assistant, which a developer recently noticed. There appears to be no opt-out option when submitting a bug report.
Bug reports only with AI training consent?
The warning dialog appeared after he attached his system diagnostic data while trying to report a serious bug to Apple, explained developer @cocoafrog, who previously worked on developer tools at the company. Such hidden consent to AI training is "extremely shady behavior," he said — after all, developers voluntarily transmit sensitive data to help Apple fix bugs.
The pop-up that appears when sending a bug report to Apple.
(Image: @cocoafrog / Hachyderm.io)
Other iOS developers also reacted to the Mastodon post with anger and clear rejection. PCalc developer James Thomson, for example, wrote that he will probably never report another bug to Apple.
Apple's bug reporting system has a poor reputation
Apple's bug reporting process already has a poor reputation among developers. The system is widely considered a "black hole": even carefully documented reports often receive no response from the manufacturer. Sometimes developers get replies of little substance, or requests for diagnostic data that was transmitted long ago. Recently, individual developers have tried to draw attention to the problem with calls for a boycott, arguing that developers should spare themselves the (unpaid) work of reporting bugs to Apple. According to @cocoafrog, this is just one more reason for developers to stop submitting bug reports.
Apple is becoming increasingly aggressive in its efforts to improve Apple Intelligence: with the help of iPhone analytics data, the company now wants to derive trends from "real user data" to develop its AI models. Privacy-preserving techniques are supposed to prevent inferences about individual users' behavior and sensitive data.
(Lbe)