Achieve AGI By Just Scaling Compute?

February 20th, 2024

Video Generation Models as World Simulators:

We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high fidelity video. Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.
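OpenAI has not released Sora's code, but the abstract's core idea of a transformer operating on "spacetime patches" of latent codes can be sketched. The snippet below is a minimal, hypothetical illustration of slicing a video latent volume into flattened spacetime patches (tokens); the function name, tensor shapes, and patch sizes are my own assumptions, not Sora's actual configuration.

```python
import torch

def spacetime_patches(latents: torch.Tensor, pt: int = 2, ph: int = 4, pw: int = 4) -> torch.Tensor:
    """Split a video latent tensor into flattened spacetime patches (tokens).

    latents: (B, C, T, H, W) latent codes from a video encoder.
    Returns: (B, num_patches, C * pt * ph * pw) token sequence for a transformer.
    Patch sizes pt/ph/pw are illustrative, not Sora's actual values.
    """
    B, C, T, H, W = latents.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dims must divide patch sizes"
    # Group each (pt, ph, pw) block of the latent volume into one token.
    x = latents.reshape(B, C, T // pt, pt, H // ph, ph, W // pw, pw)
    x = x.permute(0, 2, 4, 6, 1, 3, 5, 7)      # (B, T/pt, H/ph, W/pw, C, pt, ph, pw)
    return x.reshape(B, -1, C * pt * ph * pw)  # (B, num_patches, patch_dim)

# Example: a 16-frame clip encoded to an 8-channel, 32x32 latent grid.
tokens = spacetime_patches(torch.randn(1, 8, 16, 32, 32))
print(tokens.shape)  # torch.Size([1, 512, 256])
```

Because the sequence length depends only on how many patches a clip yields, this kind of tokenization is what lets a single model train on videos and images of variable durations, resolutions, and aspect ratios, as the abstract describes.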

Via: The AIGRID:

OpenAI might have already achieved AGI internally.
