OpenAI has been reshaping the field of artificial intelligence (AI) with GPT-3, the language model behind ChatGPT that generates strikingly human-like text, and DALL-E, a system that generates images from text. Now the company has introduced Sora, a model that creates videos from text.
OpenAI says it is teaching AI to mimic motion in the physical world, with the goal of training models that help people solve problems requiring real-world interaction. From a text prompt, Sora can generate videos featuring complex scenes with multiple characters, specific types of motion, detailed background objects, and even different camera angles. The model understands not only what the user has asked for, but also how people and objects interact in the real world.
The model is still a work in progress. It may fail to grasp the consequences of certain actions: in one video, for example, a person takes a bite of a cookie, yet the cookie remains intact. Sora can also confuse left and right, or lose track of events in scenes that unfold over time.
For now, Sora is not generally available. OpenAI is offering it to a group of researchers who are assessing its potential risks, and has also given access to a group of designers and filmmakers whose feedback will help refine the model and make it more useful for creators. According to OpenAI, it is sharing Sora publicly at this early stage to gather feedback from outside the organization and to give the public a glimpse of where AI is headed.
The videos OpenAI has released are striking, including aerial footage of a beach house, an astronaut walking in space, an elephant walking through snow, and a woman walking down a rainy night street.