
Project Astra: Google’s fast AI assistant has eyes, ears and memory



In addition to the new AI model Gemini 2.0, Google has announced progress on "Project Astra": work on a universal, multimodal AI assistant for smartphones or smart glasses that can support the user both visually and verbally. The assistant can now also remember various things, so users can treat the system as a memory aid. For now, Project Astra is available only to a limited group of testers.



Google presented Project Astra at Google I/O in May 2024, alongside video AI, search AI, and a host of other AI announcements. Project Astra brings together the company's work on AI assistants, including agents that perform tasks on the user's behalf. To be useful, these agents must understand the world, remember things, and carry out actions. Google had said the Gemini app would get some of these features this year, and it has now delivered before the end of the year, at least for some testers.

Google describes Project Astra as a "research prototype to explore future capabilities of a universal AI assistant." Since Google I/O, the company has further improved and expanded the assistant with selected testers. The AI assistant can now understand multiple languages and respond in them, including different pronunciations and unusual words.

The assistant can also remember things for longer. According to Google, Project Astra now has an in-session memory of up to 10 minutes; at Google I/O it was only 45 seconds. In addition, the assistant can recall more conversations from the past, making use of the artificial intelligence more personalized. Low latency also helps: the assistant can understand spoken queries and instructions about as quickly as a human would.


Project Astra is built on Google's new AI model Gemini 2.0 and can draw on various Google services such as Search, Maps, and Google Lens. Users can point the smartphone's camera at an object and ask the AI assistant for background information about it. If, say, a user asks about bakeries on the way into the city center, the Astra assistant can name the different locations.

However, the AI assistant still has limitations. According to Axios, Project Astra cannot access the user's emails or the photos stored on the smartphone, and it has difficulty distinguishing different voices in noisy environments. Even simple tasks such as a wake-up alarm or setting a timer are not possible, although the previous Google Assistant has handled these for a long time.


Google says it is continuing to work on Project Astra and plans to integrate these functions into the Gemini app in the future and bring them to other form factors such as smart glasses. The group of testers has been expanded, and interested users can join a waiting list to try out the AI assistant. However, this is limited to the United States and Great Britain, as Project Astra is only available there so far. An Android phone is also required; the AI assistant is not currently compatible with iPhones.


(fds)

