Meta CEO Mark Zuckerberg has demonstrated a working prototype of Builder Bot, an AI system that lets users create virtual worlds simply by describing them. If it matures, Builder Bot could eventually draw more people into Meta’s Horizon virtual reality experiences.
Builder Bot is built on the conversational artificial intelligence (AI) that Meta is developing under Project CAIRaoke. The goal is to let users create or import objects into a virtual world using voice instructions alone.
In a pre-recorded demo shown during the event, Zuckerberg walked through building a virtual environment with Builder Bot, starting with commands like “Let’s go to the beach.” The bot then generates a whimsical 3D landscape of sand and water around him. From there, the CEO asks for both broad additions, like an island, and very specific ones, like altocumulus clouds and a model hydrofoil. The demo also includes sound effects: when “tropical music” starts playing, Zuckerberg suggests it is coming from a boombox that Builder Bot created, though it may simply have been general background audio.
The video, however, doesn’t make clear whether Builder Bot draws on a limited library of human-created models or whether the AI itself generates the designs.
Several AI projects have already demonstrated generating images from text descriptions, including OpenAI’s DALL-E, Nvidia’s GauGAN2, and VQGAN+CLIP, as well as more accessible apps like Dream by Wombo. These well-known projects, however, produce only 2D images (often highly surreal ones) with no interactive elements, although some researchers are working on generating 3D objects from text as well.
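To give a rough sense of the core mechanism behind systems like VQGAN+CLIP, the sketch below uses OpenAI’s CLIP model to score how well an image matches a text prompt; generators of this kind repeatedly adjust an image to raise that score. This is a minimal, illustrative example, not code from any of the projects mentioned, and the image file and prompt are placeholders.

```python
# Minimal sketch: scoring image-text agreement with CLIP.
# VQGAN+CLIP-style generators use a score like this as the signal
# that steers an image toward a text description.
# Assumes: pip install torch, plus OpenAI's CLIP package.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs -- any image file and any prompt will do.
image = preprocess(Image.open("beach.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a sandy beach with palm trees"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity: higher means the image better matches the prompt.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).item()
print(f"CLIP similarity: {similarity:.3f}")
```

In a full VQGAN+CLIP pipeline, the image is produced by a generator’s latent codes, and those codes are optimized step by step to increase a similarity score like this one.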
Builder Bot, by contrast, appears to use voice commands to place 3D objects that users can walk around, and Meta is aiming for more complex interactions on top of this. A rough sense of how such a pipeline might fit together is sketched below.
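As a purely hypothetical illustration of how a spoken command could become a scene edit, the following sketch takes a transcribed command, matches it against a small library of pre-made assets, and records a placement. Every name in it (the asset library, the parse_command helper, the Scene class) is invented for this example; Meta has not described Builder Bot’s internals.

```python
# Hypothetical sketch of a voice-to-scene pipeline: transcribed text in,
# asset placement out. None of this reflects Builder Bot's actual design.
from dataclasses import dataclass, field

# Assumption: a small library of pre-made 3D models, keyed by name.
ASSET_LIBRARY = {"beach", "island", "altocumulus clouds", "hydrofoil", "boombox"}

@dataclass
class Placement:
    asset: str
    position: tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Scene:
    placements: list[Placement] = field(default_factory=list)

    def add(self, asset: str) -> None:
        self.placements.append(Placement(asset))

def parse_command(text: str) -> str | None:
    """Very naive intent parsing: find a known asset name in the utterance."""
    lowered = text.lower()
    for asset in ASSET_LIBRARY:
        if asset in lowered:
            return asset
    return None

scene = Scene()
for utterance in ["Let's go to the beach", "Add an island", "Add some altocumulus clouds"]:
    asset = parse_command(utterance)
    if asset:
        scene.add(asset)

print([p.asset for p in scene.placements])
# ['beach', 'island', 'altocumulus clouds']
```

A real system would sit between speech recognition on one end and a 3D engine on the other, and would need far richer language understanding than a keyword lookup; the sketch only shows where an asset library, as speculated about above, might fit in.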
Speaking at the event, Zuckerberg said: “You’ll be able to make complex worlds to explore and share your experiences with other people just by using your own voice.” Meta also used the event to announce plans for a universal language translator, a new version of its conversational AI system, and a project to develop translation models for languages with only small amounts of written data.
If successful, the technology could influence how other VR worlds and platforms approach content creation.