
Optical flow using Hybrid -- Possible?
#12
With help from my friends, things are coming into focus. Since you "never really had time and motivation to really dive into the whole motion interpolation issue so much", I'm curious why you wrote this: 

https://github.com/Selur/VapoursynthScri...erframe.py?

I'm glad you did because it has given me vital clues, but nonetheless, I'm curious whether your interest is deep enough to give me a push now and then. You are generous with your time and I don't want to burden you in any way -- I'm a scientist/engineer and pretty competent -- but are you interested in helping? "No" is an acceptable answer and will be cheerfully accepted.
(28.09.2021, 05:48)Selur Wrote:
Quote:Apparently, no one uses the motion vectors stored in the source frames. Do you know why?
Oh, there was an old discussion about that over at doom9 years ago, as a side discussion about whether they could be reused for compression.
If I recall correctly, the main points were:
a. you need a specific source filter to extract them properly (a lot of work, but probably not too complicated)
b. motion vectors from the compressed formats are often not ideal, and reusing them would give you only roughly reliable compression. Some tests determined that recalculating the motion vectors produces far better results.
c. also, the motion vectors from the compressed formats might need different interpretation depending on the video format
Now that you mention it, I do recall such a discussion a long time ago. Point 'b' never made sense to me because it is those MVs that produce the pictures used in optical flow. I don't see how anything could be better. From my reading -- cover to cover -- of H.262, H.264, and H.265, motion vectors are just the beginning of compression. There are block-local corrections, too. Perhaps they were ignored during the tests you mention, eh? And I don't know what video format has to do with macroblocks -- I assume by "video format" you mean "transport stream" (whereas macroblocks are in the presentation stream, of course).
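To make the point about block-local corrections concrete, here is a toy sketch (not any codec's actual pipeline) of why motion vectors alone are not the whole picture: in H.262/H.264/H.265, a decoded block is the motion-compensated prediction *plus* a coded residual, so reusing only the MVs for interpolation discards that residual. All names and values below are illustrative assumptions.

```python
import numpy as np

# Toy 8x8 "reference frame" and a single motion vector.
ref = np.arange(64, dtype=np.int16).reshape(8, 8)
mv = (0, 2)  # shift content right by two pixels

# Motion-compensated prediction: the reference shifted by the MV.
predicted = np.roll(ref, shift=mv, axis=(0, 1))

# Block-local correction (residual) that the encoder also transmits.
residual = np.full((8, 8), 3, dtype=np.int16)

# What the decoder actually reconstructs: prediction + residual.
target = predicted + residual

# Using only the MV (no residual) leaves a systematic error behind:
mv_only_error = np.abs(target - predicted).mean()
print(mv_only_error)  # 3.0 -- the average correction lost per pixel
```

If a test of "reused MVs" skipped the residuals, the reconstruction error it measured would include exactly this kind of lost correction, which may explain the poor results reported.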

My goal is a one-click Windows cmd script to convert all video to 120fps[120pps] (i.e. progressive). I have it working for 24fps[24pps] (including soft telecine, of course) and have proven methods that are guaranteed to work for 30fps[24pps] (i.e. hard telecine), 30fps[60sps] (i.e. NTSC field scans), and 30fps[60sps+30pps] (i.e. mixed field scans + hard telecine, as in "Making of" documentaries), with ffprobe and MediaInfo supplying the info to control the execution. In order to implement all of it, I need three things: (1) an ffmpeg primitive that returns "combing" on a frame-by-frame basis, (2) an ffmpeg primitive that returns "scene_change" on a frame-by-frame basis, and (3) a lot more lines of script -- I'm already over 400 lines. :)
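On point (1): ffmpeg's `idet` filter reports per-frame interlace detection, and the `select` filter exposes a per-frame `scene` score for point (2), so both may already be partially covered. For intuition about what a per-frame combing detector measures, here is a minimal sketch in Python (my own toy metric, not `idet`'s actual algorithm), assuming frames are available as 2-D luma arrays: combed frames show large adjacent-line (opposite-field) differences relative to same-field (two-lines-apart) differences.

```python
import numpy as np

def comb_score(frame: np.ndarray) -> float:
    """Toy combing metric for a 2-D luma plane. Higher suggests combing."""
    f = frame.astype(np.float64)
    inter_field = np.abs(f[1:] - f[:-1]).mean()   # adjacent lines (opposite fields)
    intra_field = np.abs(f[2:] - f[:-2]).mean()   # lines two apart (same field)
    return inter_field / (intra_field + 1e-9)

# Progressive toy frame: a smooth vertical gradient.
progressive = np.tile(np.arange(16.0)[:, None], (1, 16))

# "Combed" toy frame: weave two fields from different moments
# by offsetting the odd lines, as motion between fields would.
combed = progressive.copy()
combed[1::2] += 8.0

print(comb_score(progressive) < comb_score(combed))  # True
```

A real implementation would threshold this per frame (as `idet` does internally) to emit a combed/progressive flag your cmd script could branch on.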


Messages In This Thread
RE: Optical flow using Hybrid -- Possible? - by markfilipak - 28.09.2021, 09:56
