
ProPainter Vapoursynth filter
#1
Accidentally deleted the original thread (I only wanted to delete a post). Sad
Here's what Dan64 wrote:
Quote:Hello Selur,

here is the first RC1.

To use the filter it is necessary to install the missing package "av":
.\python -m pip install av
Then unzip the attached filter into the Hybrid packages directory. From https://github.com/sczhou/ProPainter/rel...tag/v0.1.0
it is necessary to download into .\vspropainter\weights
the files: ProPainter.pth, raft-things.pth, recurrent_flow_completion.pth.
I also attached a sample (with script included) so that you can test the filter.
Currently it works only in CPU mode (device_index=-1) and is very, very, very slow.

I hope that you can find a solution for the GPU environment Angel

Thanks,
Dan

Cu Selur


Attached Files
.zip   sample.zip (Size: 892,79 KB / Downloads: 29)
.zip   vspropainter-1.0.0_RC1.zip (Size: 63,23 KB / Downloads: 32)
#2
I got the filter working with the CPU, but not with the GPU.
Also, I get a noticeable color shift, which does not come from a TV vs. PC luma scale issue or a small matrix mix-up.
[Image: grafik.png]
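For reference, a pure TV-vs-PC range mismatch only stretches or compresses all values uniformly; it cannot swap hues the way the screenshot shows. A minimal pure-Python sketch of the standard 8-bit limited-to-full luma mapping, to illustrate what a range issue would actually do:

```python
# Limited (TV, 16-235) to full (PC, 0-255) range expansion for 8-bit luma.
# A range mismatch shifts/stretches all channels the same way; it cannot
# produce a hue swap like the one in the screenshot.

def limited_to_full(y: int) -> int:
    """Map 8-bit limited-range luma (16-235) to full range (0-255)."""
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))  # clamp out-of-range ("illegal") values

print(limited_to_full(16))   # black stays black: 0
print(limited_to_full(235))  # white stays white: 255
print(limited_to_full(126))  # mid grey: 128
```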
Haven't found a way to get the GPU support working; I get the same problem:
NotImplementedError: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::deform_conv2d' is only available for these backends: [CPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
full error: https://pastebin.com/an4VNsy0

Cu Selur
#3
Got the GPU working in my current R68 test setup, using:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import site
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
os.environ["CUDA_MODULE_LOADING"] = "LAZY"
#os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
# loading plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import validate
clip = core.lsmas.LWLibavSource(source=r"running_car.mp4", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, repeat=True, prefer_hw=0)
frame = clip.get_frame(0)
# Setting detected color matrix (709).
clip = core.std.SetFrameProps(clip=clip, _Matrix=1)
# setting color transfer (709), if it is not set.
if validate.transferIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Transfer=1)
# setting color primaries info (to 709), if it is not set.
if validate.primariesIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Primaries=1)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=1)
# making sure frame rate is set to 25fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=0) # progressive
# changing range from limited to full range for propainter
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=0)
# adjusting color space from YUV420P8 to RGB24 for propainter
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_s="full")
# adding colors using propainter
from vspropainter import propainter
org = clip
clip = propainter(clip, length=25, mask_path="running_car_mask.png", device_index=0)
clip = core.std.StackHorizontal([org.text.Text("Original"), clip.text.Text("Filtered")])
# changing range from full back to limited range after propainter
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# adjusting output color from: RGB24 to YUV420P8
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="709", range_s="limited")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# output
clip.set_output()
the color shift is present there too:
[Image: grafik.png]
I'll package and upload my current R68 setup to my GoogleDrive. (This will take me ~1 hour: 5 min compression, 55 min upload.)

=> Package is uploaded
So once:
a. the color issue is fixed
b. a video/mask clip can be provided as input instead of a single image
the filter could be added to Hybrid in a future version. (The R68 torch add-on is not really ready for general usage.)
@Dan64: could it be an RGB vs. BGR issue?
Yes, adding:
# rgb to bgr
clip = vs.core.std.ShufflePlanes(clips=clip, planes=[2, 1, 0], colorfamily=vs.RGB)
below:
ppaint = ModelProPainter(device, weights_dir, mask_path, mask_dilation, neighbor_length,
                         ref_stride, raft_iter)
in the __init__.py, fixes the colors:
[Image: grafik.png]
Seems like a COLOR_BGR2RGB conversion is missing somewhere.
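For reference, the ShufflePlanes call above just reverses the plane order. The same idea on a single pixel, as a pure-Python sketch (names are illustrative, not from the filter):

```python
# Swapping R and B is an involution: applying it twice restores the
# original pixel. That's why a stray BGR<->RGB mix-up shows up as a clean
# hue swap (orange <-> blue) rather than a gradual drift.

def swap_rb(pixel):
    """Reverse channel order of an (R, G, B) tuple -> (B, G, R)."""
    return tuple(reversed(pixel))

orange = (255, 128, 0)
print(swap_rb(orange))           # (0, 128, 255) -- orange turns blue
print(swap_rb(swap_rb(orange)))  # (255, 128, 0) -- back to the original
```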

Cu Selur

PS: it works for poisondeathray over at Doom9's.
#4
After having installed the new version R68, vsViewer works only when it is launched by Hybrid.

When vsViewer is launched standalone I get the following error:

Quote:VapourSynth plugins manager: Failed to load vapoursynth library!
Please set up the library search paths in settings.

But I haven't changed the installation path.

I just renamed the old "Vapoursynth" folder to "Vapoursynth_R65" and created the new "Vapoursynth" using your archive.

How can I fix this issue?

Dan
#5
vsViewer should work fine without that path.
IIRC, starting with R66 there is no VapourSynth.dll in the main folder; there is only one in the Lib/site-packages folder, so you would have to adjust "Edit->Settings->Paths->Vapoursynth library search paths" to include your 'Hybrid\64bit\Vapoursynth\Lib\site-packages' folder.

Cu Selur
#6
Issue solved: it was enough to add the library path in Settings, as required by the warning message.
It is strange, because previously this setting was not necessary.

It is working on my side too. Smile

The version that I provided was just a first draft; I need to add some additional parameters and review the code (that will fix the color shift).

I removed from the code the possibility to add multiple file masks, because I watched the provided examples and in practice using per-frame masks is a nightmare.

In cinema they are used, but to create them the objects to be removed are colored green, and then it is easy to build a mask that selects these objects.

Building mask frames by manually highlighting the objects to remove, frame by frame, is simply a nightmare.

This filter is very useful to remove (TV) logos, but those are usually in a fixed position, so a single mask frame is enough.
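Building such a single static mask is straightforward. A pure-Python sketch (dimensions and box coordinates are made up for illustration) of a white-on-black rectangle mask covering a fixed logo area:

```python
# A single binary mask (255 = inpaint, 0 = keep) that can be reused for
# every frame, which is all a fixed-position TV logo needs.

def make_logo_mask(width, height, box):
    """Return a height x width 0/255 mask; box = (x0, y0, x1, y1), exclusive."""
    x0, y0, x1, y1 = box
    return [[255 if (x0 <= x < x1 and y0 <= y < y1) else 0
             for x in range(width)]
            for y in range(height)]

mask = make_logo_mask(64, 36, (48, 2, 62, 10))  # logo in the top-right corner
print(sum(v == 255 for row in mask for v in row))  # 14 x 8 = 112 masked pixels
```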

Dan
#7
I agree that manually creating frame-by-frame masks is a nightmare.
Automatic mask generation using DeScratch, MotionMasks and similar is possible and often used in restoration tasks. Tongue
(subtitle removing also usually works by masking and then inpainting, see: Avisynth InpaintDelogo)
Coloring the mask in a specific color isn't a problem either.
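As an illustration of the automatic approach, here is a minimal frame-difference motion mask (a pure-Python sketch; a real restoration chain would use proper motion estimation such as mvtools' motion masks, plus denoising and dilation):

```python
# Naive motion mask: mark pixels whose absolute luma difference between
# two consecutive frames exceeds a threshold. The principle behind
# automatic mask generation, stripped to its core.

def motion_mask(prev, curr, threshold=10):
    """Per-pixel 0/255 mask from two equally sized 2D luma frames."""
    return [[255 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

frame_a = [[16, 16, 16], [16, 200, 16]]
frame_b = [[16, 16, 16], [16, 16, 16]]   # the bright pixel disappeared
print(motion_mask(frame_a, frame_b))     # only the changed pixel is flagged
```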

Cu Selur
#8
On my RTX 3060 I was able to encode the test clip through VapourSynth at 0.88 fps, while using the Python script directly I reached 1.3 fps.

The bottleneck is the way VapourSynth requests forward frames.
This is a problem I already observed when I developed the temporal filters for ddeoldify.

Dan
#9
Using a RTX 4080 with:
clip = propainter(clip, length=250, mask_path="running_car_mask.png", device_index=0, enable_fp16=True)
and
"F:/Hybrid/64bit/x265.exe" --input - --fps 25 --y4m --crf 18.00 --output "G:\Output\test.265"
in vsViewer, x265 reported:
encoded 192 frames in 50.79s (3.78 fps), 1707.74 kb/s, Avg QP:21.51
Using larger length values speeds things up here.
When using length=25, I get:
encoded 192 frames in 81.51s (2.36 fps), 1703.32 kb/s, Avg QP:21.51
So maybe requesting more frames than needed directly and using a cache might help.
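The cache idea could look something like this (a pure-Python sketch, not the filter's actual code): process frames in chunks of `length` and serve per-frame requests from the cached chunk, so the expensive model runs once per chunk instead of once per frame.

```python
# Chunked frame cache: the expensive model call runs once per chunk of
# `length` frames; individual frame requests are served from the cache.

class ChunkedCache:
    def __init__(self, process_chunk, length):
        self.process_chunk = process_chunk  # expensive call: (start, n) -> frames
        self.length = length
        self.start = None    # first index of the cached chunk
        self.frames = []
        self.model_calls = 0  # counter, for demonstration only

    def get_frame(self, n):
        start = (n // self.length) * self.length
        if start != self.start:  # cache miss: run the model on a whole chunk
            self.frames = self.process_chunk(start, self.length)
            self.start = start
            self.model_calls += 1
        return self.frames[n - start]

# Fake "model" that just returns frame indices; stands in for inference.
cache = ChunkedCache(lambda s, n: list(range(s, s + n)), length=25)
results = [cache.get_frame(i) for i in range(100)]
print(cache.model_calls)  # 4 chunk calls instead of 100 single-frame calls
```

With sequential access this turns 100 per-frame model invocations into 4; the trade-off is memory for the cached chunk and latency on the first frame of each chunk.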
---

Maybe looking at how HolyWu & Co. do it gives some ideas on how to speed things up.

Cu Selur
#10
Using length=200 (the clip has 191 frames) and fp16, I was able to reach 1.60 fps, better than my tests using the Python script.

I already fixed the color shift (it was due to a conversion from BGR to RGB that was unnecessary, since the final frames are already in RGB).

I will introduce frame width/height parameters, because speed decreases with frame size, and I will review the mask(s) management.

Dan