Selur's Little Message Board

vhs-bordercontrol, telecide() and (ml)degrain in vapoursynth
having some issues on scene-changes with mdegrain2 from old mvtools, in old avisynth, i was wondering about hybrid, and after some tinkering here are questions that presented themselves to me:


a) functionality of avs' bordercontrol filter in vapoursynth? is there any wrapper for vapour to load avs plugins? or a way to get the same or similar functionality? i'm using it to cover head-switching noise at the bottom of vhs caps.

b) didn't find a way to do a simple pal phase-shift fix in hybrid with vapour; dunno why tivtc would think that should be 20fps output, or better to ask: why can't one turn off decimation?

c) i like the results of didee's mldegrain, but it should be applied after telecide, instead of using qtgmc first....this content needs no constant deinterlacing
(i'm using the term "telecide" for anything doing decombing of film content)

here's the clip that destroys MDegrain2i2 (it just blurs the scene change across 4-5 frames):
https://www.mediafire.com/file/ut51kjhyj...n.avi/file


here's a clip that breaks old telecide in old avs (telecide(post=false)) when mantegna's hand moves in conjunction with just half of the subtitles remaining in that frame
https://www.mediafire.com/file/7k7hfnoiy...s.avi/file

but that in itself is not a big issue, one can just remove post=false for telecide to deinterlace that frame....in its old, crude ways of deinterlacing....
(yes, the subs are interlaced, i don't care; yes, the remainder of the first clip (i.e. the whole movie) has pal phase shifts happening elsewhere)

the original thread on doom9 that hosted mldegrain doesn't have it any more, could we attach the original version here?
thanks
Quote: a) functionality of avs' bordercontrol filter in vapoursynth? ... or a way to get the same or similar functionality?
I assume you are talking about http://avisynth.nl/index.php/BorderControl.
Hybrid comes with no plugin/filter/script for this, and I don't know of any.
I have never used Avisynth's BorderControl. EdgeFixer (which is in Hybrid) sounds similar to it.
Quote: is there any wrapper for vapour to load avs plugins?
You can load 64bit Avisynth filters in Vapoursynth.
see: http://www.vapoursynth.com/doc/functions...inavs.html
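Roughly, loading and calling an Avisynth plugin from a Vapoursynth script looks like this (untested sketch; the path and the filter name are just placeholders, and the DLL has to match Vapoursynth's bitness):
Code:
import vapoursynth as vs
core = vs.core

# Avisynth compatibility loader (see the link above); loaded filters end up in the 'avs' namespace
core.avs.LoadPlugin(path="C:/Avisynth/plugins64/SomeFilter.dll")  # placeholder path
clip = core.lsmas.LWLibavSource(source="input.avi")
clip = core.avs.SomeFilter(clip)  # placeholder filter name
clip.set_output()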

Quote: i'm using it to cover head-switching noise at the bottom of vhs caps.
EdgeFixer might help; otherwise, crop and letterbox.
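If it is only about masking the head-switching noise, a plain crop-and-letterbox sketch would be (untested; the 12 lines are just an example value):
Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="input.avi")
# crop away the noisy bottom lines and pad back to the original height
clip = core.std.Crop(clip, bottom=12)
# black border: use [16, 128, 128] for limited range YUV, [0, 128, 128] for full range
clip = core.std.AddBorders(clip, bottom=12, color=[16, 128, 128])
clip.set_output()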

Quote:didn't find a way to do a simple pal phase-shift fix in hybrid with vapour; dunno why tivtc would think that should be 20fps output, or better to ask: why can't one turn off decimation?
TIVTC with default settings is for inverse telecine.
If you just want field matching, use TFM or VFM on their own.
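Script-wise that boils down to calling the field matcher on its own, with no decimation step at all (untested sketch):
Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="input.avi")
clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2)  # top field first
# field matching only; without TDecimate/VDecimate the frame rate stays at 25fps
clip = core.tivtc.TFM(clip=clip)
# or with VIVTC: clip = core.vivtc.VFM(clip, order=1)
clip.set_output()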

Quote:c) i like the results of didee's mldegrain, but it should be applied after telecide, instead of using qtgmc first....this content needs no constant deinterlacing
(i'm using the term "telecide" for anything doing decombing of film content)
Not following your naming scheme. Usually, to clean up combing artifacts one would use, for example, vinverse, vinverse2, QTGMC (InputType > 0) or santiag.
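A minimal decombing sketch along those lines, assuming a Vinverse port (e.g. the one in havsfunc) is importable (untested):
Code:
import vapoursynth as vs
from vapoursynth import core
import havsfunc  # assumption: havsfunc with its Vinverse port is on the script path

clip = core.lsmas.LWLibavSource(source="input.avi")
clip = core.tivtc.TFM(clip=clip)  # field matching first
clip = havsfunc.Vinverse(clip)    # clean up residual combing on the matched frames
clip.set_output()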
MLDegrain is available in Hybrid.

Quote: here's the clip that destroys MDegrain2i2 (it just blurs the scene change across 4-5 frames):
MDegrain2i2 is not available in Hybrid. MDegrain2i2 should be used on interlaced content, so I don't really see the connection.

Quote: here's a clip that breaks old telecide in old avs (telecide(post=false)) when mantegna's hand moves in conjunction with just half of the subtitles remaining in that frame
Okay, to deal with this and similar stuff post-processing is meant to be used,...
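Meaning: field match first and only replace the frames the matcher still flags as combed, i.e. the usual VFM post-processing idiom (untested sketch; QTGMC is just one possible replacement source):
Code:
import vapoursynth as vs
from vapoursynth import core
import havsfunc  # assumption: havsfunc is on the script path for QTGMC

clip = core.lsmas.LWLibavSource(source="input.avi")
matched = core.vivtc.VFM(clip, order=1)                      # field matching, sets the _Combed prop
deinterlaced = havsfunc.QTGMC(clip, TFF=True, FPSDivisor=2)  # fallback for frames that stay combed

def postprocess(n, f):
    # use the deinterlaced frame only where field matching failed
    return deinterlaced if f.props["_Combed"] > 0 else matched

clip = core.std.FrameEval(matched, postprocess, prop_src=matched)
clip.set_output()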

Quote:but that in itself is not a big issue, one can just remove post=false for telecide to deinterlace that frame....in its old, crude ways of deinterlacing....
(yes, the subs are interlaced, i don't care; yes, the remainder of the first clip (i.e. the whole movie) has pal phase shifts happening elsewhere)

Quote: the original thread on doom9 that hosted mldegrain doesn't have it any more, could we attach the original version here?
like I wrote, Hybrid already has a MLDegrain (not MDegrain2i2) port, but at this point I'm not sure what you are actually looking for,..
You could even write a Vapoursynth port of MDegrain2i2,...
From what I see it can easily be ported to Vapoursynth.
Code:
function MDegrain2i2 (clip source, int "overlap", int "dct")
{

  overlap = default (overlap, 0) # overlap value (0 to 4 for blksize=8)
  dct = default (dct, 0) # use dct=1 for clip with light flicker
  fields = source.SeparateFields () # separate by fields
  super = fields.MSuper ()
  backward_vec2 = super.MAnalyse (isb = true, delta = 2, overlap=overlap, dct=dct)
  forward_vec2 = super.MAnalyse (isb = false, delta = 2, overlap=overlap, dct=dct)
  backward_vec4 = super.MAnalyse (isb = true, delta = 4, overlap=overlap, dct=dct)
  forward_vec4 = super.MAnalyse (isb = false, delta = 4, overlap=overlap, dct=dct)
  erg = fields.MDegrain2 (super, backward_vec2, forward_vec2, backward_vec4, forward_vec4, thSAD=400)
  erg = erg.Weave()
  return (erg)
}
(quick glance) Isn't that basically applying MLDegrain on the fields of the source? (which can be done in Hybrid for example by setting interlaced output and telling Hybrid to apply progressive filters on the separated fields,....)
untested:
Code:
import vapoursynth as vs
from vapoursynth import core

# overlap value (0 to 4 for blksize=8)
# use dct=1 for clip with light flicker
def MDegrain2i2(source: vs.VideoNode, overlap: int = 0, dct: int = 0, tff: bool = True) -> vs.VideoNode:
  fields = core.std.SeparateFields(source, tff=tff)  # separate by fields
  superF = core.mv.Super(fields)
  backward_vec2 = core.mv.Analyse(superF, isb=True, delta=2, overlap=overlap, dct=dct)
  forward_vec2 = core.mv.Analyse(superF, isb=False, delta=2, overlap=overlap, dct=dct)
  backward_vec4 = core.mv.Analyse(superF, isb=True, delta=4, overlap=overlap, dct=dct)
  forward_vec4 = core.mv.Analyse(superF, isb=False, delta=4, overlap=overlap, dct=dct)
  erg = core.mv.Degrain2(fields, superF, backward_vec2, forward_vec2, backward_vec4, forward_vec4, thsad=400)
  # re-interleave the fields (DoubleWeave + SelectEvery is the Weave() equivalent)
  erg = core.std.DoubleWeave(erg, tff=tff).std.SelectEvery(cycle=2, offsets=0)
  return erg
requires mvtools.
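Usage would then look something like this (untested; the source loading and the overlap value are just examples):
Code:
# assuming the function above is defined in the same script
clip = core.lsmas.LWLibavSource(source="interlaced_source.avi")
clip = core.std.SetFrameProp(clip, prop="_FieldBased", intval=2)  # flag as top field first
clip = MDegrain2i2(clip, overlap=4, dct=0, tff=True)              # denoise per field, then re-weave
clip.set_output()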



Cu Selur
(18.05.2024, 06:34)Selur Wrote: MDegrain2i2 is not available in Hybrid. MDegrain2i2 should be used on interlaced content, so I don't really see the connection.
like I wrote, Hybrid already has a MLDegrain (not MDegrain2i2) port, but at this point I'm not sure what you are actually looking for,..

no connection, just an example of what old mdegrain has issues with....

i was thinking about using avisynth (with mldegrain) to do an mpeg2 encode (hcenc just takes .avs), but after this denoising it kinda lacks the noise, so mpeg2 is kinda moot....mpeg2 is best to preserve noise....and interlacing....but this won't have either of those two....

last night i did this (attachment).
removed decimate, added crop/borders, tried another denoiser (temporaldegrain), edited out the YUV422P8 to YUV420P8 conversion because i thought the green border instead of black was caused by the color conversion ( Vapoursynth AddBorders with StaxRip - Doom9's Forum ).

that mostly solves it. i've now fed that script into hybrid's input, turned off all video processing, used the source clip for audio, and am getting just a blip at the beginning of the clip:
https://www.mediafire.com/file/jr7oxtj6n...4.mp4/file
(probably reducing the audio volume is a good thing when playing it)
tinkered with some audio options on the "misc" tab of "audio", to no avail.

now when i listen to the source clip, it's obvious the audio just has the beginning, just as mediainfo suggests (107ms). the rest of the audio is clipped.

also, where would one set the 4:3 AR flag for mp4 output? it's 480x576, but it's 4:3.

something else interesting: a mismatch in the "finished percentage" during the job (attachment).


semi-related, there's even a chromashift script for vhs 2nd-generation copies
VapoursynthScriptsInHybrid/chromashift.py at master · Selur/VapoursynthScriptsInHybrid · GitHub
which is nice (path: filtering->vapour->color->misc)



kuzon, good denoising, but the (vertical) resize destroyed the subtitles...what did you use?
here's mldegrain
https://www.mediafire.com/file/fhakzaolf...w.mp4/file
seems to obliterate 90% of the noise....the color is slightly shifted down....
audio works fine there; it's loading the .avi clip straight into hybrid, not loading the .py script as in the above clip with audio issues.
Quote:also, where would one set the 4:3 AR flag for mp4 output? it's 480x576, but it's 4:3.
You should adjust the input PAR if your source isn't properly flagged. (4:3 is usually a DAR)
My guess is that you should set a "16:9 MPEG-2 PAR" (= 64:45)
[Image: grafik.png]
see: [INFO] About pixel aspect ratios,..
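If you want to tag it on the script side instead of through the GUI, the pixel aspect ratio can also be set as frame props (sketch; 64:45 is just the value from my guess above, and whether it ends up flagged in the output container still depends on the encoder/muxer settings):
Code:
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="input.avi")
# _SARNum/_SARDen are Vapoursynth's reserved frame props for the pixel aspect ratio
clip = core.std.SetFrameProps(clip, _SARNum=64, _SARDen=45)
clip.set_output()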

Quote:something else interesting: a mismatch in the "finished percentage" during the job (attachment).
This happens if the expected frame count is less than the resulting frames.
Compare frames in Vapoursynth Preview and the indicated frames.
Would need more details (read sticky) to look into it.

Quote:(probably reducing the audio volume is a good thing when playing it)
[Image: grafik.png]
there is just some noise,... no clue what you are really doing, but it seems wrong.

Quote:...mpeg2 is best to preserve noise...
where does that come from?
I know of no technical aspect of MPEG-2 that would support that statement.

Quote:...and interlacing...
same for that


No clue what most of the other stuff is about,...

Cu Selur
Quote:This happens if the expected frame count is less than the resulting frames.
Compare frames in Vapoursynth Preview and the indicated frames.
might have found it; sent you a link to a dev version for testing.

Cu Selur
Quote:You should adjust the input PAR if your source isn't properly flagged. (4:3 is usually a DAR)


yes, i know (i was using that to make some upscaled sd straight from 480x576 mpeg2, to upload to yt), but mp4 probably has DAR flags the same as previous mpeg versions....i think one can set them in ffmpeg....

that would be simpler, i.e. i usually just encode 480x576 and set the 4:3 flag in mpeg2 so the player corrects it on playback....
(similar to many weird resolutions of dvb-s mpeg2, for example 544x576)

your 640x539 image can't be the proper aspect ratio; 640 / (4/3) is 480.

768x576 is correct (attached), in a square-pixel world, and preserves the vertical resolution.


that clip has no audio, and it should have it. like i said, i fed the modded .py script to hybrid (without audio; the script was made by mangling the temp script from the windows temp folder), and the audio is the source .avi file.....


inspect this clip and you'll know what i'm doing:
https://www.mediafire.com/file/bpnkx5ol8...4.wmv/file

the peculiar thing is that both the path to the .py and the path to the .avi file are listed in the audio job path, as visible in that .wmv.
maybe i should load the audio via the vapoursynth script too, with "best source"....

if you play that clip ("tivtc 50 and ml degrain tempPreviewVapoursynthFile05_03_13_984.mp4") in mpc-hc or pot player you'll just hear the first brief moment of the audio.
the source clip with ok audio is linked in the first post ( https://www.mediafire.com/file/7k7hfnoiy...s.avi/file )

somehow hybrid begins to convert the audio but just gives up after 107ms, when the video input is the .py script and the audio input is the mjpeg avi file.

about mpeg2: that comes from years of tinkering with codecs, all the way back to divx and xvid: motion estimation in these is too sensitive to noise and can produce false motion, i.e. motion of noise is mistaken for real motion and then you get weird distortions of still backgrounds, for example.
on top of that, h264 has an in-loop filter (just a throwback to realvideo and the vp codecs' blurriness, really) that just makes a mess of noise, with all those different block sizes (8x8 dct is just for i-frames, right?) etc. i usually keep that at -4:-4, but either way it's pretty bad.
while mpeg2 just grinds everything, noise or no noise.
there is a post by manono on doom9 somewhere along these lines, but of course i wasn't reading about that, i was testing. i just read it much later and it put a smile on my face. of course cce (if you remember the japanese cinema craft encoder?) is also 10x faster than any mpeg4 or h264 encoder, it seems they made everything important in assembler....heh....
(there's even a peculiarity that mencoder was making mpeg2 at usual mpeg4 (lower) bitrates that looked quite fine, there's even the peculiarity that one can put mpeg2 in .avi, but that too is just a peculiarity)

why all of this? because at some (high) levels of noise, you just can't denoise it properly. some analog satellite tv recordings have a lot of noise. my sources were usually analog, and noisy....i was never ripping dvds.
it's quite fun to preserve some noise in today's world of clean, overcompressed and blurry videos of tv and yt. 


as for interlacing, support for interlaced encoding since divx/xvid was hit and miss when it comes to hardware players, which don't mean much to me, but it's nice if it plays properly straight on a tv, read from a usb flash stick....as mpeg2 does.
i mean i guess it can be done with x264 (although there was some confusion about the interlaced modes of h264, i.e. their implementation in x264), but then again those compression woes remain, so.....

of course, mpeg2 needs a higher bitrate.
but hdd space is cheap these days.
here's a good illustration of the new issue, when i try to load the audio via the .vpy script:
 [attachment=2485]

vdub2 loads and plays both audio and video without issue (so i think the script is valid), while hybrid says it didn't find the length ( same as here https://forum.selur.net/thread-14.html ). it then disables the video processing track.

vpy:



Code:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import ctypes
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'C:/Program Files/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Support Files
Dllref = ctypes.windll.LoadLibrary("C:/Program Files/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
# Loading Plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/CTMF/CTMF.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/HQDN3D/libhqdn3d.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# Import scripts
import rescued
# source: 'C:\Video\netwr0k video cache\vhs_tdk_240_mamets theater_12sec killing telecide with laced subs.avi'
# current color space: YUV422P8, bit depth: 8, resolution: 480x576, fps: 25, color matrix: 470bg, yuv luminance scale: full, scanorder: top field first
# Loading C:\Video\netwr0k video cache\vhs_tdk_240_mamets theater_12sec killing telecide with laced subs.avi using LWLibavSource
clip = core.lsmas.LWLibavSource(source="C:/Video/netwr0k video cache/vhs_tdk_240_mamets theater_12sec killing telecide with laced subs.avi", format="YUV422P8", stream_index=0, cache=0, prefer_hw=0)

audio = core.bs.AudioSource(source=r"C:/Video/netwr0k video cache/vhs_tdk_240_mamets theater_12sec killing telecide with laced subs.avi")

# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg)
clip = core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (5)
clip = core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=2) # tff
# Deinterlacing using TIVTC
clip = core.tivtc.TFM(clip=clip)
# Making sure content is perceived as frame based
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# Making sure content is perceived as frame based
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# removing grain using TemporalDegrain
clip = rescued.TemporalDegrain(inpClip=clip)
# adjusting output color from: YUV422P8 to YUV420P8 for x264Model
clip=core.std.Crop(clip=clip, left=0, right=0,top=0, bottom=12)
clip = core.std.AddBorders(clip=clip, left=0,right=0,top=0,bottom=12, color = [0, 128, 128])
# Output
clip.set_output(index=0)
audio.set_output(index=1)


the interesting question is how it can figure out the length when the same script is used but without loading the audio in the script....i.e. a script with video only
encoded video and audio separately, then muxed as a 3rd step.

got a working file with audio....
Code:
clip.set_output(index=0)
audio.set_output(index=1)
is probably the problem.
Hybrid will call:
Code:
vspipe.exe --info <PATH TO SCRIPT> -
and expects something like:
Code:
Width: 640
Height: 480
Frames: 2
FPS: 24/1 (24.000 fps)
Format Name: RGB24
Color Family: RGB
Alpha: No
Sample Type: Integer
Bits: 8
SubSampling W: 0
SubSampling H: 0
output. Since you didn't post a debug output, I suspect that the output for your script looks different and thus Hybrid lacks the frame count and/or the fps, and thus does not know the length of the video; everything after that is bound to fail.
As soon as there is an error popup in Hybrid, going further is always wrong.

No, Hybrid was not planned with audio from .vpy.
I'm not looking into audio support for vpy unless there is a waveform filter for Vapoursynth.

Quote:but mp4 probably has DAR flags the same as previous mpeg versions....i think one can set them in ffmpeg....

that would be simpler, i.e. i usually just encode 480x576 and set the 4:3 flag in mpeg2 so the player corrects it on playback....
(similar to many weird resolutions of dvb-s mpeg2, for example 544x576)

your 640x539 image can't be the proper aspect ratio; 640 / (4/3) is 480.

768x576 is correct (attached), in a square-pixel world, and preserves the vertical resolution.
PAR != DAR. Whether you want to preserve the vertical or the horizontal resolution of the original can be configured in Hybrid,...
My screenshot was just meant to show that 64:45 seems to be the correct input PAR, not that it would create a 4:3 DAR.
Personally, I do not care about the DAR at all unless I have to, because I am converting to something that only supports a specific DAR (I don't know of any format supported in Hybrid that does).
If you set the correct PAR (correct = no distortion), you can crop however you want without having to worry about PAR&DAR.
If you require a specific DAR, set your width or height and adjust it so that both fit within your storage aspect ratio, and then letterbox so you get the correct DAR.
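The relation is simply DAR = PAR * width / height; a quick illustration with the numbers from this thread (the 8:5 line only shows what PAR a bare 480x576 frame would need to display as 4:3):
Code:
from fractions import Fraction

def dar(par: Fraction, width: int, height: int) -> Fraction:
    # display aspect ratio = pixel aspect ratio * storage aspect ratio
    return par * Fraction(width, height)

print(dar(Fraction(1, 1), 768, 576))  # 4/3 -> square-pixel 768x576 displays as 4:3
print(dar(Fraction(8, 5), 480, 576))  # 4/3 -> 480x576 would need an 8:5 PAR to display as 4:3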
This is how Hybrid works. If you don't like it, simply don't use Hybrid.

Cu Selur