[HELP] Fragments disappear in hand-drawn animation
#1
Question 
Hello, when I use the DeSpot and SpotLess filters (mainly) and LUTDeCrawl, whole pieces of the frame often disappear during fast movement, as if they dissolve into the background color. Fragments of black outlines that are not noise also vanish. When these filters are turned off, much more noise remains in the frame (real interference shows up here and there).
So the question is: how do I remove noise from hand-drawn animation without spoiling the frames?
Are there any recommendations for tuning these filters, or for using other ones? Thanks! My current script:
# Imports
import os
import sys
import ctypes
# Loading Support Files
Dllref = ctypes.windll.LoadLibrary("D:/Progs/Hybrid/64bit/vsfilters/Support/libfftw3-3.dll")
Dllref = ctypes.windll.LoadLibrary("D:/Progs/Hybrid/64bit/vsfilters/Support/libfftw3f-3.dll")
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Import scripts folder
scriptPath = 'D:/Progs/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/vsznedi3.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/libdescale.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/Bilateral.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/ResizeFilter/nnedi3/NNEDI3CL.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/GrainFilter/AddGrain/AddGrain.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/EEDI3m.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/temporalsoften.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/GrainFilter/RemoveGrain/RemoveGrainVS.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DebandFilter/Flash3kDeband/flash3kyuu_deband.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/SharpenFilter/AWarpSharp2/libawarpsharp2.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/TCanny.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/vcm.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/libmvtools_sf_em64t.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DenoiseFilter/TTempSmooth/TTempSmooth.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/DCTFilter.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DeblockFilter/Deblock/Deblock.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DenoiseFilter/KNLMeansCL/KNLMeansCL.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DenoiseFilter/NEO_FFT3DFilter/neo-fft3d.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DenoiseFilter/DFTTest/DFTTest.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DenoiseFilter/VagueDenoiser/VagueDenoiser.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/libtemporalmedian.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/libmvtools.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DeCrawlFilter/DeDot/libdedot.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/DeCrawlFilter/DotKill/DotKill.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/Support/scenechange.dll")
core.std.LoadPlugin(path="D:/Progs/Hybrid/64bit/vsfilters/SourceFilter/FFMS2/ffms2.dll")
# Import scripts
import edi_rpow2
import fvsfunc
import mvsfunc
import muvsfunc
import G41Fun
import SpotLess
import havsfunc
# source: 'F:\Downloads\Осторожно, обезьянки\01. Обезьянки. Гирлянда из малышей - 1983.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 720x544, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading source using FFMS2
clip = core.ffms2.Source(source="F:/Downloads/Осторожно, обезьянки/01. Обезьянки. Гирлянда из малышей - 1983.avi",format=vs.YUV420P8,alpha=False)
# making sure input color matrix is set as 470bg
clip = core.resize.Bicubic(clip, matrix_in_s="470bg",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# anti decrawling using LUTDeCrawl
clip = havsfunc.LUTDeCrawl(input=clip, maxdiff=60, scnchg=22)
clip = core.dotkill.DotKillS(clip=clip, usematch=True)
clip = core.dedot.Dedot(clip=clip)
clip = SpotLess.SpotLess(clip=clip, radT=1, ablksz=8, rec=True)
# denoising using VagueDenoiser
clip = core.vd.VagueDenoiser(clip=clip)
# denoising using DFTTest
clip = core.dfttest.DFTTest(clip=clip, ftype=1)
# denoising using Neo-FFT3D
clip = core.neo_fft3d.FFT3D(clip=clip, sigma=1.00, sigma2=2.00, sigma4=3.00, planes=[1,2])
# denoising using KNLMeansCL
# adjusting color space from YUV420P8 to RGB24 for vsKNLMeans
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
clip = core.knlm.KNLMeansCL(clip=clip, d=3, a=3, channels="RGB", wmode=1, device_type="gpu", device_id=0)
# adjusting color space from RGB24 to YUV444P8 for vsMCT
clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P8, matrix_s="470bg", range_s="limited")
# denoising using MCTemporalDenoise
clip = havsfunc.MCTemporalDenoise(i=clip, settings="medium", deblock=True, useQED=True, quant1=10, quant2=20, bt=1)
# denoising using mClean
clip = G41Fun.mClean(clip=clip, sharp=6, rn=15, deband=0)
# Denoising using QTGMC
clip = havsfunc.QTGMC(Input=clip, Preset="Fast", InputType=1, TR2=0, SourceMatch=0, Lossless=0, EZDenoise=0.10, NoisePreset="Fast", opencl=True, device=0)
# debanding using GradFun3
clip = fvsfunc.GradFun3(src=clip, dyn=True, staticnoise=True)
# adjusting output color from: YUV444P16 to YUV420P8 for VCEEncModel
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, range_s="limited")
# set output frame rate to 25.000fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
[Image: http://lostpix.com/img/2022-02/18/gefmps...t5thqv.png]

Are there any recommendations or presets for this type of video? I would like to streamline this somehow instead of picking settings from scratch every time; the interference is quite typical. Ideally I would test 3-5 presets and choose the most suitable one.
#2
I would advise you to:
  • not use that many filters (simply enabling a ton of denoisers doesn't make sense),
  • change the filter order (moving DeSpot and SpotLess behind the denoisers usually helps; see the sketch below the list),
  • adjust the filter parameters; default values are rarely optimal for cartoon/anime-like content,
  • depending on the sort of artifacts you see, try aWarpSharp2 as a sharpener,
  • depending on your hardware, some ML-based methods might also be useful.
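
As a rough sketch of what I mean (just a sketch, not a tested recipe: it assumes the plugins and scripts your posted script already loads, and the thSAD/radT/depth values are placeholders to tune per source):
import vapoursynth as vs
import G41Fun
import SpotLess
core = vs.core
# hypothetical sample path; matrix/range setup as in your posted script
clip = core.ffms2.Source(source="sample.avi")
clip = core.resize.Bicubic(clip, matrix_in_s="470bg", range_s="limited")
# one temporal denoiser instead of five stacked ones (tune thSAD per source)
clip = G41Fun.mClean(clip, thSAD=600, rn=0)
# spot removal *after* denoising; a small temporal radius spares the line art
clip = SpotLess.SpotLess(clip, radT=1)
# optional: mild warp sharpening to tighten the outlines again
clip = core.warp.AWarpSharp2(clip, blur=2, depth=8)
clip.set_output()
If that cleans too little, raising thSAD or adding a second, differently tuned pass is usually better than stacking five different denoisers.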

If you can share a short, not-too-large sample of your source, giving advice would be easier.
The image doesn't really explain anything: I have no clue whether it's from the source or from the processed frame, or what the actual problem is. The circles on the image, without an explanation, don't help.

Cu Selur
#3
Thanks for the recommendations, here are some examples.

The last two videos have the same interference as the others, only the frame size is 1080p; with my settings a lot of noise often remains.

https://drive.google.com/drive/folders/1...sp=sharing

My PC hardware:
Ryzen 5900x
Radeon RX 6600 XT

For re-encoding I use:
VCEEnc and OpenCL (where possible)

The circles mark the places where needed parts of the picture disappeared because of the filters
(mainly the DeSpot and SpotLess filters, and LUTDeCrawl).

If there were presets and recommendations for typical problems, it would be much more convenient. Without knowing which filters there are and how they work, you just have to try different things and look at the result.
Thanks!
#4
About the filters: search the net for the filter name; often these filters are based on Avisynth filters, and their documentation might help:
QTGMC -> http://avisynth.nl/index.php/QTGMC
VagueDenoiser -> http://avisynth.nl/index.php/VagueDenoiser
..

Will look at the samples.

Cu Selur
#5
Do you by any chance have an NVIDIA graphics card?
(DPIR deblocking works really well for the general cleaning of those samples, and applying DeSpot or SpotLess after it should help with the strong dot noise.)
[Image: input-1-1.png]
[Image: input-2.png]
The downsides are:
a. DPIR deblocking is rather slow, especially if you have to run it on the CPU only.
b. it requires Hybrid's torchpy-addon, which is a rather large download (12 GB).

Cu Selur
#6
Thanks! And in the case of sample 5, which approach is better? What settings should the filters have? And where can I find DPIR deblocking?
As I understand it, DPIR is only needed for a very noisy picture; what about the other cases? I also tried cutting the video into parts for sequential processing. This increases the processing time but doesn't overload the hardware.

No, video cards have become very expensive lately. I bought the option that was optimal in price and quality for me. The only drawback is that, yes, there is no CUDA here, and ray tracing is worse.
#7
Quote: And where can I find DPIR deblocking?
DPIR and some other filters require a special addon, but that is only supported if an NVIDIA card is present, so you can't use it.

Quote: in the case of sample 5, which is better
Can't handle sample 5 in Hybrid atm., since the Vapoursynth preview stumbles over some metadata.
-> working on that.

Quote:What settings should there be for filters?
I'm not going to go through all the filters and explain every option.
For example, for mClean I would use a SAD threshold of 900 and ReNoise 0. Something like
# applying deblocking using DeBlock QED
clip = havsfunc.Deblock_QED(clip, quant1=40)
# denoising using mClean
clip = G41Fun.mClean(clip=clip, thSAD=900, rn=0)
# sharpening using AWarpSharp2
clip = core.warp.AWarpSharp2(clip=clip, blur=2, depth=20)
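# cleaning up halos/ringing along edges with EdgeCleaner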
clip = havsfunc.EdgeCleaner(c=clip)
seems like a good starting point.

Will look at it some more in a few hours, since I'll be offline for a while.

Cu Selur
#8
Thanks, ok.
Here is another example of how the picture gets spoiled by the filters that otherwise clean the frame best; because of these problems they can't be used on such videos
(the DeSpot and SpotLess filters, and LUTDeCrawl).
https://ibb.co/Lr0PqdV
I uploaded one more sample containing this frame.
The black and white dots need to be removed, and there are also large scratches and damaged frames; I'd like to clean those up while preserving the rest of the picture from such artifacts.
Maybe it still makes sense to put together some recommendations or presets. They could be posted separately on the forum and discussed per content type (cartoons, movies, VHS videos, etc.).
#9
The problem with presets is:
a. you first need decent copyright-free sources and descriptions of example sources,
b. usually there is no 'one size fits all' filter, and you should adjust your filtering per source (or series),
c. whenever Hybrid gets new filters, those presets would need to be re-evaluated.
-> Sorry, but I at least don't have the time and motivation to do this. (Hybrid is something I develop in my free time.)

The problem is likely caused by DeSpot and SpotLess (probably mainly DeSpot); neither is very good at differentiating between the spots to remove and details that are surrounded by such spots. If you use Avisynth instead of Vapoursynth, DeSpot has some additional parameters that might help lessen the problem for some sources.

You can probably write scripts and use some masking to help (a rough sketch follows below), but there are no generic scripts I'm aware of that could do the job.
All filtering, especially denoise, degrain, dedot, descratch, ... filtering, will potentially remove wanted details (that's basically true for any lossy operation).
-> if you want a surefire preset/filter that can do no harm, you are out of luck.
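
As a rough illustration of the masking idea (a sketch only, assuming your script's imports and the TCanny plugin it loads; sigma and the mask growth would need tuning per source): build an edge mask from the line art and merge the unfiltered clip back in over the edges, so the spot removal can't eat the outlines.
# sketch: protect line art from the spot remover with an edge mask
cleaned = SpotLess.SpotLess(clip, radT=1)
# edge mask from the luma; grow it a bit so whole outlines are covered
mask = core.tcanny.TCanny(clip, sigma=1.0, mode=0, planes=[0])
mask = core.std.Maximum(mask, planes=[0])
mask = core.std.Inflate(mask, planes=[0])
# keep the unfiltered clip where the mask is bright (edges), the cleaned clip elsewhere
clip = core.std.MaskedMerge(cleaned, clip, mask, planes=[0])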

Sometimes it's necessary to apply a single filter multiple times (potentially with different settings) to properly clean up a source without causing too much damage to wanted details/data. Sometimes it's also necessary to apply different settings to different ranges of the source (see the sketch below), or to change the filter order. Hybrid does allow all of this, but it requires that the user has a good understanding of the filters, their order, settings and effects. Which is why these options are not shown by default (they can be enabled under Filtering->Vapoursynth->Misc->UI (or View; not sure about the name) by switching from FilterOrder to FilterQueue usage).
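
In script terms that looks roughly like this (sketch only; the frame split and the thSAD values are made up):
# sketch: different settings on different frame ranges, spliced back together
part1 = core.std.Trim(clip, first=0, last=4999)   # calmer scenes
part2 = core.std.Trim(clip, first=5000)           # noisier scenes
part1 = G41Fun.mClean(part1, thSAD=400, rn=0)
part2 = G41Fun.mClean(part2, thSAD=900, rn=0)
clip = part1 + part2                              # '+' splices clips in frame order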

So any preset that one comes up with might not even work for the whole clip.
Video restoration, enhancement, filtering, etc. usually isn't something that can be done easily or without knowing quite a bit about the topics involved.

Cu Selur

Ps.: I hope I didn't make too many typos (writing on a tablet atm.)
PPs.: when I'm back home I'll try to find some time to look at sample 5 and the new sample.
PPPs.: using some of the degrain filters (SMDegrain, MCTemporalDegrain) with high values might also work well.
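
For instance (sketch only, assuming havsfunc's SMDegrain with its usual tr/thSAD parameters; the values are just a starting point):
# sketch: stronger motion-compensated degraining with havsfunc's SMDegrain
clip = havsfunc.SMDegrain(clip, tr=3, thSAD=600)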
#10
Yes, I understand; the project itself is very interesting and extensive. I found your program after I tried to get Avisynth running and figure it out. Hybrid is much easier and more convenient, very impressive, thank you very much!
I'm just starting to study this topic and trying to figure out the filters and settings for my kinds of video, since there are no universal solutions and presets.
Anyway, I think this could be a good topic for discussion and an exchange of experience, also for newcomers to the subject.

Off-topic:
I think it would also be interesting to make the program multilingual; it's not that difficult.
Even with automatic machine translation, 85-90% would be correct, and there would be people willing to fix the translation or do it by hand. I'm ready to participate.

