GoPro cameras and Google Glass have made first-person POV videos quite popular, despite output that is frequently unpalatable at best and downright nauseating at worst.
Stabilisation filters exist in current video editors, but the end result is still fragmented and shaky enough that, combined with that fish-eye lens, it makes for an unpleasant viewing experience, to say the least.
That is why Microsoft researchers Johannes Kopf, Richard Szeliski, and Michael Cohen are working on a method to drastically reduce the shakiness of such footage. Their algorithm analyses the footage to compute a new, smoother camera path, and reconstructs the scenes in the input footage using a depth-map analysis. Put together, the algorithm generates output stitched from perspectives that are much closer to each other than those in the original.
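The core idea of replacing a jittery camera trajectory with a smoother one can be illustrated with a toy sketch. This is not the researchers' actual method (which optimises a full camera path against a reconstructed 3D scene); it's a minimal moving-average smoother over a hypothetical one-dimensional camera position, just to show what "a newer, smoother camera path" means:

```python
# Illustrative sketch only, NOT the Microsoft algorithm: smooth a shaky
# 1-D camera position track with a simple moving average. The real system
# solves for a full 6-DoF path constrained by scene reconstruction.

def smooth_path(path, window=5):
    """Return a moving-average-smoothed copy of a 1-D camera path."""
    half = window // 2
    smoothed = []
    for i in range(len(path)):
        # Average over a window clipped to the ends of the path.
        lo = max(0, i - half)
        hi = min(len(path), i + half + 1)
        smoothed.append(sum(path[lo:hi]) / (hi - lo))
    return smoothed

# A jittery horizontal camera position, frame by frame (made-up values):
shaky = [0.0, 1.2, 0.1, 1.5, 0.3, 1.4, 0.2, 1.6]
smooth = smooth_path(shaky)
```

The smoothed track keeps the same overall motion but with far smaller frame-to-frame jumps; in the actual pipeline, new frames are then rendered from viewpoints along that smoother path.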
Action-packed, less shaky video
As you can see here, the process does an incredible job of smoothing out the video, creating easy-to-stomach, streamlined transitions as the timelapse goes on. The new videos may have some strange artifacts and glitches, but the end result is still much smoother than the “stabilised” output, and it’s certainly leaps and bounds above the unedited input footage.
The project is very similar to Photosynth, which, it turns out, was spearheaded by the same team. Photosynth runs a similar process, analysing multiple frames to automatically create imagery most palatable for visual consumption, though it’s used mostly to make panoramas and synths.
The algorithm debuted at Siggraph 2014, running this week in Vancouver. The conference is the premier spot to find the latest video rendering and animation technology, and we’ll be keeping our eyes peeled for any more interesting tech developments that may be unveiled as the days go on.