
Optical flow using Hybrid -- Possible?
#11
Quote:Apparently, no one uses the motion vectors stored in the source frames. Do you know why?
Oh, there was an old discussion about that over at Doom9 years ago, as a side discussion on whether they could be reused for compression.
If I recall correctly, the main points were:
a. you need a specific source filter to extract them properly (a lot of work, but probably not too complicated)
b. motion vectors from the compressed formats are often not ideal, and reusing them would usually give you only roughly reliable compression. Some tests determined that recalculating motion vectors produces far better results.
c. also, the motion vectors from the compressed formats might need different interpretation depending on the video format

Quote:Here's an example of the sort of stuff I've posted to Doom9, Vapoursynth: ...
I never really had time and motivation to really dive into the whole motion interpolation issue so much, but I'll probably keep an eye on the thread.

Cu Selur
#12
With help from my friends, things are coming into focus. Since you "never really had time and motivation to really dive into the whole motion interpolation issue so much", I'm curious why you wrote this: 

https://github.com/Selur/VapoursynthScri...erframe.py?

I'm glad you did, because it has given me vital clues, but nonetheless, I'm curious whether your interest is deep enough to give me a push now and then. You are generous with your time and I don't want to burden you in any way -- I'm a scientist and engineer, and pretty competent -- but are you interested in helping? "No" is an acceptable answer and will be cheerfully accepted.
(28.09.2021, 05:48)Selur Wrote:
Quote:Apparently, no one uses the motion vectors stored in the source frames. Do you know why?
Oh, there was an old discussion about that over at Doom9 years ago, as a side discussion on whether they could be reused for compression.
If I recall correctly, the main points were:
a. you need a specific source filter to extract them properly (a lot of work, but probably not too complicated)
b. motion vectors from the compressed formats are often not ideal, and reusing them would usually give you only roughly reliable compression. Some tests determined that recalculating motion vectors produces far better results.
c. also, the motion vectors from the compressed formats might need different interpretation depending on the video format
Now that you mention it, I do recall such a discussion a long time ago. Point 'b' never made sense to me, because it is those MVs that produce the pictures used in optical flow; I don't see how anything could be better. From my reading -- cover to cover -- of H.262, H.264, and H.265, motion vectors are just the beginning of compression. There are block-local corrections, too. Perhaps those were ignored during the tests you mention, eh? And I don't know what video format has to do with macroblocks -- I assume by "video format" you mean "transport stream" (whereas macroblocks are in the presentation stream, of course).

My goal is a one-click Windows cmd script to convert all video to 120fps[120pps] (i.e. progressive). I have it working for 24fps[24pps] (including soft telecine, of course) and have proven methods that are guaranteed to work for 30fps[24pps] (i.e. hard telecine), 30fps[60sps] (i.e. NTSC field scans), and 30fps[60sps+30pps] (i.e. mixed field scans + hard telecine, as in "Making of" documentaries), with ffprobe and MediaInfo supplying the info to control the execution. In order to implement all of it, I need 3 things: (1) an ffmpeg primitive that returns "combing" on a frame-by-frame basis, (2) an ffmpeg primitive that returns "scene_change" on a frame-by-frame basis, and (3) a lot more lines of script -- I'm already over 400 lines. :-)
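If that per-frame "combing" primitive existed, the cadence logic that separates hard telecine from NTSC field scans could look something like this sketch. Everything here is hypothetical: `classify_cadence` is a made-up helper, and the per-frame combing flags are assumed as input, since ffmpeg does not currently report them.

```python
# Hypothetical sketch: given a per-frame "combed" flag, hard telecine
# (3:2 pulldown, 2 combed frames in every 5) can be told apart from
# true NTSC field scans (combing on essentially every frame).

def classify_cadence(combed, window=5):
    """Classify a run of per-frame combing booleans."""
    if len(combed) < window:
        return "unknown"
    ratio = sum(combed) / len(combed)
    if ratio == 0:
        return "progressive"
    if ratio >= 0.9:
        return "interlaced"  # NTSC field scans: everything combed
    # look for the repeating 2-combed-in-5 pulldown cadence
    counts = [sum(combed[i:i + window])
              for i in range(0, len(combed) - window + 1, window)]
    if all(c == 2 for c in counts):
        return "hard telecine"
    return "mixed"

# 3:2 pulldown: two of every five frames are built from mixed fields
pulldown = [False, False, False, True, True] * 4
print(classify_cadence(pulldown))     # hard telecine
print(classify_cadence([True] * 20))  # interlaced
```

A real script would of course feed this from actual per-frame analysis rather than precomputed booleans, and would need to tolerate cadence breaks at edit points.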
#13
Quote:With help from my friends, things are coming into focus. Since you "never really had time and motivation to really dive into the whole motion interpolation issue so much", I'm curious why you wrote this:
https://github.com/Selur/VapoursynthScri...erframe.py ?
Oh, that's simple: I know the basics of Python, and since someone asked whether it was possible to use the values from the official SVP software in Hybrid, I looked into the script and extended it a bit. No need to understand much about motion interpolation; it's just a matter of understanding what the original code did and extending it a bit.

Quote:assume by "video format" you mean "transport stream" (whereas macroblocks are in the presentation stream, of course).
No, I mean H.264, MPEG-2, AV1, ...

Quote:My goal is a one-click Windows cmd script to convert all video to 120fps[120pps] (i.e. progressive).
Then you probably should also look into Zopti, since I do not see how you would determine good values for all sources other than by properly analysing the source and the effect of the filters.

Quote:I'm already over 400 lines. ... a lot more lines of script
Probably add two zeros, at least. ;) (Hybrid was ~250k lines of non-generated code in March this year; after some refactoring it's ~240k. ;))

Quote: I need 3 things: (1) an ffmpeg primitive that returns "combing" on a frame-by-frame basis
That one would interest me if you find something.
Last I checked, there was no reliable way to detect combing (aside from using one's own eyes).

Quote:(2) an ffmpeg primitive that returns "scene_change" on a frame-by-frame basis
If a scene change is just "frame A differs by more than X percent from frame A-1", that should be simple, see:
https://stackoverflow.com/questions/3567...h-timecode
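ffmpeg's `select` filter does expose a per-frame scene score (e.g. `select='gt(scene,0.3)'`). As a toy illustration of the "differs by more than X percent" definition itself, here is a pure-Python sketch; frames are flat lists of 8-bit luma values, and `scene_changes` is a made-up helper, not an ffmpeg API.

```python
# Toy scene-change detector: flag frame i when its mean absolute
# difference from frame i-1 exceeds a threshold (as a fraction of
# the full 0..255 range).

def scene_changes(frames, threshold=0.3):
    """Return indices of frames that differ from the previous frame
    by more than `threshold` of full range, on average."""
    changes = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        mad = sum(abs(a - b) for a, b in zip(prev, cur)) / (len(cur) * 255)
        if mad > threshold:
            changes.append(i)
    return changes

black = [0] * 16
white = [255] * 16
print(scene_changes([black, black, white, white]))  # [2]
```

Real detectors (including ffmpeg's) are smarter than a raw frame difference, since fades and fast pans would otherwise trigger false positives.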

Cu Selur
#14
(28.09.2021, 19:59)Selur Wrote:
Quote: I need 3 things: (1) an ffmpeg primitive that returns "combing" on a frame-by-frame basis
That one would interest me if you find something.
Last I checked, there was no reliable way to detect combing (aside from using one's own eyes).

It must exist in the detelecine filter, for example, but I wasn't able to persuade the ffmpeg devs to expose it. Then again, I don't know what 'side data' is. Perhaps a 'combing' flag exists as side data, eh?
#15
A source clip can have flags indicating whether it is interlaced or not, but like the normal interlaced info, they are often not reliable.
If you call
ffprobe -i "<path to input>" -show_frames
you get data like:
[FRAME]
media_type=video
stream_index=0
key_frame=0
pts=149673
pts_time=1.663033
pkt_dts=151473
pkt_dts_time=1.683033
best_effort_timestamp=149673
best_effort_timestamp_time=1.663033
pkt_duration=1800
pkt_duration_time=0.020000
pkt_pos=532604
pkt_size=87443
width=1920
height=1080
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=P
coded_picture_number=5
display_picture_number=0
interlaced_frame=1
top_field_first=1
repeat_pict=0
color_range=tv
color_space=bt709
color_primaries=bt709
color_transfer=bt709
chroma_location=left
[/FRAME]
Some deinterlacers use these, ...
The problem is that just because a frame is flagged interlaced doesn't mean it shows combing. Sometimes progressive input is encoded with an interlaced flag, or telecined content is flagged as progressive, or vice versa.
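Since that ffprobe output is plain key=value text, pulling the per-frame flags out of it is straightforward. A minimal sketch (the `parse_show_frames` helper is illustrative, not part of ffprobe; real scripts might prefer `ffprobe -print_format json` and a JSON parser):

```python
# Parse the [FRAME]...[/FRAME] blocks printed by
# `ffprobe -show_frames` into one dict per frame, so flags like
# interlaced_frame and top_field_first can be inspected per frame.

def parse_show_frames(text):
    frames, cur = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[FRAME]":
            cur = {}                      # start a new frame record
        elif line == "[/FRAME]":
            if cur is not None:
                frames.append(cur)
            cur = None
        elif cur is not None and "=" in line:
            key, _, value = line.partition("=")
            cur[key] = value
    return frames

sample = """[FRAME]
media_type=video
interlaced_frame=1
top_field_first=1
[/FRAME]"""
frames = parse_show_frames(sample)
print(frames[0]["interlaced_frame"])  # 1
```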

Cu Selur
#16
(29.09.2021, 05:38)Selur Wrote: A source clip can have flags regarding whether they are interlaced or not, but like the normal interlaced info it often is not reliable.
Oh, it's reliable. But it's useless, because it doesn't differentiate NTSC (i.e. 30fps[60sps]) from hard-telecined cinema (i.e. 30fps[24pps]). That's why some sort of !combed flag is needed, triggered by the 3 frames out of 5 that differentiate hard telecine from NTSC. If such a flag were exposed to ffmpeg user processes, then, after converting to 120fps, mixed NTSC-converted-to-120fps could be made progressive by buffering the bottom field and bobbing the top field for 2 frames. It would be perfect.
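The field-splitting and bobbing steps mentioned above can be sketched in a few lines. This is a toy model only: frames are lists of scanlines, `split_fields` and `bob` are illustrative helpers, and real bobbers interpolate the missing lines instead of naively doubling them.

```python
# Illustrative sketch of "bobbing": each field of an interlaced frame
# becomes its own full-height progressive frame, turning 30fps[60sps]
# material into 60 progressive pictures per second.

def split_fields(frame):
    """Split a frame (list of scanlines) into top and bottom fields."""
    top = frame[0::2]      # even scanlines
    bottom = frame[1::2]   # odd scanlines
    return top, bottom

def bob(field):
    """Line-double a field back to full frame height."""
    out = []
    for line in field:
        out.append(line)
        out.append(list(line))  # naive doubling; real bobbers interpolate
    return out

frame = [[1, 1], [2, 2], [3, 3], [4, 4]]  # 4 scanlines, 2 fields
top, bottom = split_fields(frame)
print(bob(top))  # [[1, 1], [1, 1], [3, 3], [3, 3]]
```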

I'd like to present some timing diagrams to show you how it works, but this forum application doesn't display plain text properly, even if wrapped in code tags, and the attachments facility doesn't work at all.

PS: Well, it appears attachments do work; they just don't indicate that they are there until you save the reply. Having now saved the reply, I see that the 'anything-to-120fps .txt' attachment is there. When clicked, it doesn't display properly -- lines are wrapped in my browser -- so copying the lines to a plain-text editor is required, but at least you can see the methods. Kindly let me know what you think, eh?


Attached Files
.txt   anything-to-120fps .txt (Size: 13,93 KB / Downloads: 17)
#17
Seems like the board software doesn't use scrollbars for 'code' but limits it to a fixed max width.
Since I'm not going to start modifying the board software, that's how it is.
(will look at the text after work)

Cu Selur
#18
By the way, the methods I'm using -- criticized as strange and wrong -- have completely eliminated bad PTSs and DTSs as sources of transcoding failure. I still occasionally get DTS errors, but I think they're for audio, not video -- I haven't yet figured out how to avoid them the way I avoid video-stream errors. I really am doing some pretty cool things.
#19
btw. regarding the combing detection, you could try 'tdm.IsCombed'
see: https://github.com/HomeOfVapourSynthEvol...-TDeintMod
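tdm.IsCombed requires VapourSynth, but the general idea behind a combing metric can be illustrated in plain Python. This is a rough sketch of the principle only, not TDeintMod's actual algorithm: combing shows up as large differences between adjacent scanlines that mostly vanish when comparing scanlines two apart (i.e. within the same field).

```python
# Rough illustration of a combing metric (not TDeintMod's algorithm):
# interlaced combing produces large pixel differences between adjacent
# scanlines, but small differences between every-other scanline.

def comb_score(frame):
    """Mean adjacent-line difference minus mean same-field difference.
    High positive values suggest combing; frame is a list of scanlines."""
    def mean_diff(step):
        diffs = [abs(a - b)
                 for i in range(len(frame) - step)
                 for a, b in zip(frame[i], frame[i + step])]
        return sum(diffs) / len(diffs)
    return mean_diff(1) - mean_diff(2)

combed = [[0, 0], [200, 200]] * 4            # fields differ sharply
progressive = [[10 * i, 10 * i] for i in range(8)]  # smooth gradient
print(comb_score(combed) > comb_score(progressive))  # True
```

A usable detector would threshold such a score per block rather than per frame and add noise tolerance, which is roughly what IsCombed-style functions do internally.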

Cu Selur
#20
(02.10.2021, 23:57)Selur Wrote: btw. regarding the combing detection, you could try 'tdm.IsCombed'
see: https://github.com/HomeOfVapourSynthEvol...-TDeintMod

Cu Selur

I posted a new thread, "MV interpolation of mixed TV+cinema to 120fps progressive", to http://forum.doom9.org/showthread.php?t=183286

I didn't want to take up your time without exhausting all possibilities. The main folks at doom9 are getting to know me and I'm sure they can help me work out the issues, if that's possible.

-- Mark.