Yesterday NVIDIA posted a demo of their slow-motion research, which uses machine learning to interpolate new frames and dramatically slow down footage.
I have to say that I don’t really see any advantage over existing optical flow technology (i.e. Twixtor). I took the brave step of actually reading the YouTube comments, and a few people did mention Twixtor, with one person calling the NVIDIA version “vastly superior”. But I don’t see it: similar artefacts, lots of blurring, and no immediate advantage over existing technology. Of course, it’s difficult to judge without access to the source footage itself. Twixtor and comparable plugins can vary widely depending on the quality of the source footage, with things like noise/grain and compression artefacts sometimes being a make-or-break factor. So maybe this new technology is more tolerant of those?
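For anyone unfamiliar with what these tools are actually doing under the hood, here’s a minimal sketch of the idea behind flow-based frame interpolation (the family Twixtor belongs to), and why it beats naive frame blending. The toy 1-D “frames” and the hand-specified flow are illustrative assumptions on my part, not Twixtor’s or NVIDIA’s actual algorithm — real tools have to *estimate* the flow, which is exactly where the blurring and artefacts come from.

```python
import numpy as np

# Toy 1-D "frames": a bright blob that moves 4 pixels to the right
# between frame A and frame B, so the true optical flow is a
# uniform +4 px displacement.
size, shift = 32, 4
frame_a = np.zeros(size)
frame_a[10:14] = 1.0
frame_b = np.roll(frame_a, shift)

# Naive interpolation: blend the two frames. The blob shows up
# twice at half brightness -- the classic "ghosting" artefact.
blend = 0.5 * (frame_a + frame_b)

# Flow-based interpolation: warp frame A along half the flow, so the
# blob lands at its true midpoint position at full brightness.
flow = np.full(size, float(shift))  # hand-specified flow for this toy case
coords = (np.arange(size) - 0.5 * flow).astype(int) % size
mid = frame_a[coords]

print("blend peak:", blend.max())  # half-bright ghosts
print("warped peak:", mid.max())   # single full-bright blob
```

The key fragility is the `flow` array: here it is given, but in practice it is estimated from noisy, compressed footage, and every flow error becomes a smear or tear in the interpolated frame. That is why source-footage quality matters so much for these plugins.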
What do you think?
-Chris