Mailing List AE-List@media-motion.tv - Message #64125
From: J Bills <AE-List@media-motion.tv>
Subject: Re: [AE] nVidia's Slo-mo demo
Date: Wed, 20 Jun 2018 07:45:29 +0000
To: After Effects Mail List <AE-List@media-motion.tv>

Totally agree with you - it seems to be nothing special, on par with current solutions.  Sometimes it works, sometimes it has tons of artifacts.  I respect them for laying it bare and not just showing ideal footage.


Twixtor, Kronos, etc. are all pretty long in the tooth now, to be honest.  Many studios have their own in-house optical flow systems that produce much, much better results: denser vector data sets, better algorithms and edge handling, far less "fraying" for lack of a better word, and less ghosting.
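
For anyone who hasn't poked at this directly, the single-direction version of the idea is simple enough to sketch.  This is just a crude OpenCV/Farneback illustration of flow-based retiming - a toy, not what Twixtor/Kronos or any in-house system actually does - and it produces exactly the kind of fraying and ghosting I'm describing:

    # Toy sketch of flow-based retiming, assuming OpenCV and NumPy are available.
    # Not Twixtor's or Kronos' actual algorithm - just the basic single-direction idea.
    import cv2
    import numpy as np

    def interpolate_midframe(frame_a, frame_b, t=0.5):
        """Approximate the frame at fraction t between frame_a and frame_b."""
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        # Dense per-pixel motion vectors from A toward B
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray_a.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Backward-warp A part of the way along the flow (a crude approximation -
        # this is where the fraying/ghosting shows up on occlusion edges)
        map_x = (grid_x - flow[..., 0] * t).astype(np.float32)
        map_y = (grid_y - flow[..., 1] * t).astype(np.float32)
        return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

The proper tools warp from both neighbouring frames and weight by occlusion and confidence, which is exactly where the "denser vectors, better edge handling" difference shows up.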


This tech really needs a revisit and more "pro" offerings, since many of us have enough computer horsepower at our disposal to go for the better results.  And it's not just for retiming - there are lots of other tools and use cases that can put optical flow to work for tracking/paint/roto or to accelerate certain processes.  There is a lot of damage to be done here, and Foundry's Smart Vector tools over in Nuke are really just the tip of the iceberg.  They could be smarter.  har har.
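
As a concrete example of the roto/paint angle (again just a toy, not how the Smart Vector nodes are actually implemented): once you have a dense flow field for a shot, pushing a matte from one frame to the next is basically a warp.

    # Toy sketch: push a roto matte from frame N to frame N+1 along a dense flow
    # field computed from N to N+1.  Assumes OpenCV and NumPy; names are my own.
    import cv2
    import numpy as np

    def propagate_matte(matte_n, flow_n_to_n1):
        """Warp a single-channel matte along per-pixel motion vectors."""
        h, w = matte_n.shape[:2]
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Backward-warp: sample the old matte where each new pixel came from
        map_x = (grid_x - flow_n_to_n1[..., 0]).astype(np.float32)
        map_y = (grid_y - flow_n_to_n1[..., 1]).astype(np.float32)
        return cv2.remap(matte_n, map_x, map_y, cv2.INTER_LINEAR)

Chain that over a sequence and you've got paint/roto propagation - the quality of the vectors is everything, which is the whole point.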


GPU acceleration is great, and you definitely notice when you're on a machine whose video card isn't up to the GPU acceleration in, say, Kronos.  So it should definitely be on nVidia's radar.   But whoever goes for the gold and builds the eventual system that delivers the level of results I'm after will understand that this is something that has to be distributed to a *real* CPU render farm at 1 frame per node and cooked at length.  Long render times, comparing many, many more frames than the current systems use.  Want!
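
And by "1 frame per node" I mean something as dumb as the sketch below - every frame becomes its own farm job and cooks for as long as it needs.  ("farm_submit" and the per-frame script name are made up; substitute whatever your queue manager actually uses.)

    # Hypothetical sketch of splitting a retime into single-frame farm jobs.
    # "farm_submit" and "retime_one_frame.py" are placeholders, not a real CLI.
    import subprocess

    def submit_retime(shot, first_frame, last_frame, script="retime_one_frame.py"):
        for frame in range(first_frame, last_frame + 1):
            subprocess.run([
                "farm_submit",
                "--name", f"{shot}_retime_f{frame:04d}",
                "--cpus", "16",
                "--command", f"python {script} --shot {shot} --frame {frame}",
            ], check=True)

Nothing clever - the point is just that per-frame CPU jobs scale out, so the algorithm can afford to compare many more frames than an interactive GPU plugin can.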




From: After Effects Mail List <AE-List@media-motion.tv> on behalf of Chris Zwar <AE-List@media-motion.tv>
Sent: Tuesday, June 19, 2018 10:38:29 PM
To: After Effects Mail List
Subject: [AE] nVidia's Slo-mo demo
 
Yesterday nVidia posted a demo of their slo-mo research, which uses AI / machine learning to dramatically slow down footage and interpolate between frames.


I have to say that I don’t really see any advantage over existing optical flow technology (i.e. Twixtor).  I took the brave step of actually reading the YouTube comments, and a few people did mention Twixtor, with one person saying the nVidia version is “vastly superior”.  But I don’t see it.  Similar artefacts, lots of blurring - I just don’t see an immediate advantage over existing technology.  Of course it’s difficult to judge without access to the source footage itself.  Twixtor and comparable plugins can vary widely depending on the quality of the source footage, with things like noise/grain and compression artefacts sometimes being a make-or-break factor.  So maybe this new technology is more tolerant?

What do you think?

-Chris



 