During Apple’s keynote for WWDC 2025 on Monday, many observers wondered whether the company would also say something about the postponement of the major new Siri features. The subject was indeed briefly addressed by software chief Craig Federighi, after he had spoken at length and in positive terms about Apple Intelligence, which was introduced last year. As he had stated before, Federighi said that “we continue our work to make Siri even more personal.” This work “needed more time to reach our high quality bar,” and Apple now looks forward to sharing “more about it in the coming year.” Earlier there had been speculation that the company might show an improved Siri as soon as this fall, following a reorganization of the responsible department.
Three key features are still missing
Federighi emphasized that Siri had already become “more natural and helpful” with Apple Intelligence. In fact, the improvements stay within narrow limits. Siri is supposed to be able to keep the thread of a conversation, that is, refer back to earlier statements; in practice, however, this only works to a limited extent. The three most important features Apple announced are still missing: personal context for the voice assistant, direct interaction with apps, and recognition of what is visible on screen.

Visual Intelligence for screenshots
Apple now plans to deliver the last of these at least partially with iOS 26, though differently than expected: as an improved Visual Intelligence feature. The feature has been around for a while and lets users analyze what the camera sees with the help of external services such as Google; for example, they can create calendar entries for concerts or look up information about landmarks. What is new is that Visual Intelligence can now also handle screenshots. To use it, you take a screenshot with the familiar button combination on the iPhone and can then choose to have it analyzed. In the keynote, this was demonstrated by, among other things, searching the internet for a jacket; it is also possible to mark regions of the image. Using screenshots is clever because the user retains full control over what is transmitted to the AI. Apple not only uses its own technology and Google’s here, but also opens up the AI processing: parts of the AI functions run directly on the device with Apple’s own models.
OpenAI’s image generator
Finally, Apple is expanding its image generator with iOS 26. In the future, the company will not only use its own model but will also supplement its Image Playground app with the option of having ChatGPT generate images. How far-reaching this will be remains unclear for now: initially, Apple only wants to enable “some styles,” a limitation already familiar from its own models. Image Playground will now also be available as an API for developers, which can save them costs; a sketch of what such an integration could look like follows below.

Apple’s second image generator, the Genmoji feature, will also be able to do more in the future (to a degree): two existing emoji can be combined, and there are additional facial expressions and hairstyles for Genmoji modeled on real people.

iOS 26 arrives in the fall; a beta phase for developers is already underway, and a public beta will follow in July. Incidentally, Apple gave no information about integrating Google Gemini into Apple Intelligence, although Alphabet CEO Sundar Pichai had already dropped initial hints.
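What the iOS 26 version of this developer interface will look like has not been detailed. For orientation, the ImagePlayground framework that shipped with iOS 18 already offers a SwiftUI entry point. The following is a minimal sketch built on that existing API, assuming the imagePlaygroundSheet modifier and the supportsImagePlayground environment value; the view name and prompt are invented for illustration, and the ChatGPT styles announced for iOS 26 are not reflected here.

```swift
import SwiftUI
import ImagePlayground  // system framework, available since iOS 18.1

struct JacketGeneratorView: View {
    // Environment value reporting whether Image Playground
    // is supported on the current device.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground

    @State private var showPlayground = false
    @State private var generatedImageURL: URL?

    var body: some View {
        VStack(spacing: 16) {
            // Show the finished image once Image Playground hands back a file URL.
            if let url = generatedImageURL {
                AsyncImage(url: url) { image in
                    image.resizable().scaledToFit()
                } placeholder: {
                    ProgressView()
                }
            }

            Button("Generate image") { showPlayground = true }
                .disabled(!supportsImagePlayground)
                // Presents the system Image Playground sheet, seeded with a
                // text concept; the user refines and confirms the image there.
                .imagePlaygroundSheet(
                    isPresented: $showPlayground,
                    concept: "a red jacket on a coat rack"  // hypothetical example prompt
                ) { url in
                    generatedImageURL = url
                }
        }
        .padding()
    }
}
```

The design keeps generation inside a system-provided sheet: the app only supplies a starting concept and receives the finished image as a file URL. Since the processing runs through Apple’s stack, developers would not need to pay for a separate image-generation backend, which is presumably what the cost savings mentioned above refer to.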
(BSC)