Snapchat uses AI to create a cinema filter

For better or worse, filters on Instagram, WhatsApp, and Snapchat have become a routine part of our lives. But judging by the results, Snapchat is looking to take them to another level. The company has presented a new artificial intelligence model capable of transforming the images from a smartphone camera into practically anything we want.

In the demo, Snapchat's new AI can transport its subjects to the world of a "1950s sci-fi movie" with a simple text prompt, even updating their wardrobe to match the scene.

In practice, the results look more like a stop-motion piece than fluid video, but fluidity is not the achievement here. The point is to demonstrate the ability to render video in real time directly on the phone, instead of on a remote cloud server.

Snapchat calls these real-time, on-device generative AI capabilities a "milestone" and says they were made possible by its team's advances in optimizing GenAI techniques to be "faster and more efficient."

What makes the demonstration interesting is that it shows highly advanced, energy-hungry AI models running on small, mainstream devices.

Snapchat has been testing artificial intelligence features for at least a year with its chatbot My AI, an option that was not very popular. It then launched the ability for paid users to send completely AI-generated snaps, as well as a feature for AI-generated selfies called Dreams.

Taking those capabilities and applying them to video was a logical progression, but doing it in real time is a bolder leap. For now, the results are less impressive than what is possible with still images, which is not surprising: generating coherent video is something AI models continue to struggle with, even without time constraints.

There is still much to explore, but in the meantime, regular users will be able to try these AI-powered AR features through Lenses in the coming months.