
This feature allows you to ask ChatGPT questions about your environment

OpenAI rolled out the Advanced Voice Mode with Vision feature in ChatGPT on Thursday. The feature, which allows the artificial intelligence (AI) chatbot to access the smartphone’s camera and capture visual information about the user’s surroundings, will be available to all ChatGPT Plus, Team and Pro subscribers. It takes advantage of GPT-4o’s capabilities and can provide real-time voice responses to whatever is in the camera’s view. Vision in ChatGPT was first previewed in May during the company’s Spring Updates event.

ChatGPT gets vision capabilities

The new ChatGPT feature was rolled out on day six of OpenAI’s twelve-day feature release schedule. The AI company has so far released the full version of the o1 model, the Sora video generation model, and a new Canvas tool. With Advanced Voice Mode with Vision, users can now show the AI their surroundings and ask questions about them.

In a demonstration, OpenAI team members interacted with the chatbot through the camera and introduced different people to it. The AI could then answer a quiz about those people, even when they were no longer on screen. This suggests that the vision mode also comes with memory, although the company has not specified how long that memory lasts.

Users can show the AI their refrigerator and ask for recipe ideas, or show their wardrobe and ask for outfit recommendations. They can also point the camera at a landmark outside and ask questions about it. The feature is combined with the chatbot’s low-latency, emotive Advanced Voice Mode, making it easier for users to communicate in natural language.
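
For readers curious about the underlying model capability, a similar image-plus-question interaction is available to developers through OpenAI’s public API, which is separate from the in-app feature described in this article. The sketch below is a minimal illustration, assuming the official openai Python SDK, an OPENAI_API_KEY set in the environment, and a hypothetical fridge.jpg frame; it sends a single captured image to GPT-4o along with a text question.

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_about_frame(image_path: str, question: str) -> str:
    """Send one camera frame plus a text question to GPT-4o and return its answer."""
    with open(image_path, "rb") as f:
        frame_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        # The frame is passed inline as a base64 data URL.
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example mirroring the refrigerator use case above.
    print(ask_about_frame("fridge.jpg", "What could I cook with these ingredients?"))
```

A real-time voice experience like the one in the app would additionally stream audio and a continuous series of frames; the single-request sketch above only illustrates the basic vision capability.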

Once the feature rolls out to them, users can open ChatGPT’s mobile app and tap the Advanced Voice icon. The new interface includes a video option that gives the AI access to the user’s camera feed. A Screenshare feature is also available and can be accessed from the three-dot menu.

The Screenshare feature lets the AI see the user’s screen and any app they navigate to, so the chatbot can also help with smartphone-related problems and questions. Notably, OpenAI said that all Team subscribers will be able to access the feature in the latest version of the ChatGPT mobile app within the next week.

Most Plus and Pro users will also get the feature, but users in the European Union, Switzerland, Iceland, Norway and Liechtenstein will not receive it at this time. Enterprise and Edu users, meanwhile, will get access to ChatGPT’s Advanced Voice Mode with Vision in early 2025.
