OpenAI is releasing a much-anticipated new voice assistant to all paid users of its chatbot ChatGPT, four months after the artificial intelligence company first unveiled the feature at a product launch event.
The San Francisco-based startup said Tuesday it has started rolling out the option, known as advanced voice mode, to ChatGPT Plus subscribers and users of its ChatGPT Team service for businesses. The company said users on its Enterprise and Edu plans will begin getting access to the feature next week.
OpenAI first teased the voice product in May, showing how it could quickly respond to users' written and visual prompts with spoken replies. But the next month, OpenAI delayed launching the option to work through potential safety issues. In July, OpenAI rolled out the feature to a limited number of its ChatGPT Plus customers.
After the delay, OpenAI said the product would not be able to impersonate other people's voices. The company also said it had added new filters to ensure the software can spot and refuse some requests to generate music or other forms of copyrighted audio.
However, the new voice assistant lacks a number of the capabilities the company originally demonstrated. For example, the chatbot can’t currently access a computer-vision feature that would let it offer spoken feedback on a person’s dance moves simply by using their smartphone’s camera.
As part of the expanded rollout, OpenAI said it's adding five new voices to the feature, bringing the total number users can choose from to nine. New options include voices with enigmatic, arboreal names, such as Arbor, Spruce and Maple.