Deoldify Vapoursynth filter
Updated the deoldify test version to the latest changes in Hybrid.
Hello Selur,

  I completed the ColorMNet implementation.
  Please find attached the latest version, RC18 (I hope).
  The main changes are:
  1. there are now only 2 encode modes for ColorMNet: (0) remote, (1) local
  2. removed the FutureWarnings
  3. improved the explanations of the ColorMNet parameters
  4. RefMerge and its related parameters (Weight, Threshold) are implemented only for Deep-Exemplar
  5. the Preset parameter (fast, medium, slow) now also applies to ColorMNet
  On my PC, with Preset "medium", ColorMNet[remote] (8.2 fps) is nearly as fast as Deep-Exemplar (8.6 fps).
  Unfortunately, while ColorMNet improves temporal consistency, its long-term memory introduces significant color artifacts on smooth scene changes.
  For this reason I kept encode mode (1) local, which can manage a small number of memory frames. But in that case I think Deep-Exemplar is better.
  In summary, both models are needed to get good coloring results: ColorMNet and Deep-Exemplar.

  Now I will start working on improving the scene change implementation, which can affect the output quality when HAVC is used with the Exemplar-based models.

Thanks,
Dan


Attached Files
.zip   vsdeoldify-4.5.0_RC18.zip (Size: 393,96 KB / Downloads: 18)
Quote:Now I will start working on improving the scene change implementation, which can affect the output quality when HAVC is used with the Exemplar-based models, but this change should not impact the Hybrid GUI.
Fingers crossed. :)

Quote:In summary, both models are needed to get good coloring results: ColorMNet and Deep-Exemplar.
You mean depending on the scene or somehow in combination?

Quote:now there are only 2 encode modes for ColorMNet: (0) remote, (1) local
I adjusted the combo box in Hybrid accordingly.

Quote:improved the explanation of ColorMNet parameters
I adjusted the tool-tips accordingly.

Quote:RefMerge and related parameters (Weight, Threshold) are implemented only for Deep-Exemplar
I adjusted the GUI to hide them when ColorMNet is selected.

=> updated the deoldify test download.

Cu Selur

Ps.: moved your post and mine to the Deoldify thread, since they are not dlib-related. :)
Calling:
clip = HAVC_main(clip=clip, EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=0, ScFrameDir=None, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)
I get:
2024-10-03 08:04:35.046
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\deepex\models\vgg19_gray.py:130: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model.load_state_dict(torch.load(vgg19_gray_path))

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\deepex\models\vgg19_gray.py:130: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model.load_state_dict(torch.load(vgg19_gray_path))

2024-10-03 08:04:48.902
Qt warning: QPixmap::scaled: Pixmap is a null pixmap
2024-10-03 08:04:53.714
Error on frame 0 request:

Traceback (most recent call last):
File "src\\cython\\vapoursynth.pyx", line 3216, in vapoursynth.publicFunction
File "src\\cython\\vapoursynth.pyx", line 3218, in vapoursynth.publicFunction
File "src\\cython\\vapoursynth.pyx", line 834, in vapoursynth.FuncData.__call__
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\__init__.py", line 234, in colormnet_client_color
img_color = colorizer.colorize_frame(ti=n, frame_i=img_orig)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_client.py", line 62, in colorize_frame
return byte_array_to_image(frame_bytes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_utils.py", line 38, in byte_array_to_image
img = Image.open(stream).convert('RGB')
^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\PIL\Image.py", line 3305, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file
I suspect something might be missing in my setup.
Not sure what.

DOH, I had installed an older version of DeOldify.
I discovered that, for some reason, warnings issued by other threads affect the ColorMNet thread.
For this reason I removed all the warnings raised by vsdeoldify.
I added the following function to vsdeoldify\__init__.py:

import logging
import warnings

import torch

def disable_warnings():
    logger_blocklist = [
        "matplotlib",
        "PIL",
        "torch",
        "numpy",
        "tensorrt",
        "torch_tensorrt",
        "kornia",
        "dinov2"  # dinov2 issues warnings that prevent ColorMNetServer from working properly
    ]

    # silence warning-level log records from the blocklisted modules
    for module in logger_blocklist:
        logging.getLogger(module).setLevel(logging.ERROR)

    warnings.simplefilter(action='ignore', category=FutureWarning)
    warnings.simplefilter(action='ignore', category=UserWarning)
    warnings.simplefilter(action='ignore', category=DeprecationWarning)
    # warnings.simplefilter(action="ignore", category=Warning)

    torch._logging.set_logs(all=logging.ERROR)
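These filters are process-global and can be checked outside VapourSynth. A minimal sketch in plain Python, mirroring the simplefilter/setLevel calls above:

```python
import logging
import warnings

# Mirror two of the filters set by disable_warnings().
warnings.simplefilter(action='ignore', category=FutureWarning)
logging.getLogger("PIL").setLevel(logging.ERROR)

# With the 'ignore' filter active, a FutureWarning is dropped silently.
with warnings.catch_warnings(record=True) as caught:
    # catch_warnings resets the filter state, so re-apply it here
    warnings.simplefilter('ignore', FutureWarning)
    warnings.warn("custom_fwd is deprecated", FutureWarning)
assert caught == []  # nothing was recorded

# The blocklisted logger no longer emits warning-level records.
assert not logging.getLogger("PIL").isEnabledFor(logging.WARNING)
```

Note that `warnings.simplefilter` affects the whole process, not just the calling thread, which matches the observation that warnings raised elsewhere disturbed the ColorMNet thread.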
 
On my side this function works properly and the warnings are no longer shown.

I attached my latest version (it includes my work in progress on scene detection), where I removed the port field. Now when I create the server I set port=0, so that the OS assigns the first available port (useful in case of parallel encoding).
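The port=0 behaviour is standard for sockets: binding to port 0 asks the OS for a free ephemeral port, which can then be read back and handed to the client. A minimal sketch with plain sockets (not the actual ColorMNet server code):

```python
import socket

# Bind to port 0: the OS assigns the first available ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# Read back the port that was actually assigned, so it can be
# handed to the client side of the connection.
port = server.getsockname()[1]
assert port > 0  # a concrete port was chosen

# A second server started the same way gets a different free port,
# which is what makes parallel encodes safe.
server2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server2.bind(("127.0.0.1", 0))
port2 = server2.getsockname()[1]
assert port2 != port

server.close()
server2.close()
```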

Please let me know whether the warnings are gone with this version.

Thanks,

I attached a sample for testing ColorMNet (remote).
On my PC it is very fast (16 fps).

Dan


Attached Files
.zip   vsdeoldify-4.5.0_RC19.zip (Size: 396,1 KB / Downloads: 11)
.zip   sample4.zip (Size: 4,45 MB / Downloads: 11)
RC19 doesn't do anything for me when calling:
clip = HAVC_main(clip=clip, EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=0, ScFrameDir=None, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)
I get an uncolored output.

Quote:On my PC is very fast (16 fps).
Using:
F:\Hybrid\64bit\Vapoursynth\VSPipe.exe "c:\Users\Selur\Desktop\sample4\Downfall_400p_method1_exmodel0.vpy" -c y4m NUL
I get:
Output 1250 frames in 5.17 seconds (241.90 fps)
I only adjusted the paths:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# loading plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import validate
# Source: 'D:\PProjects\colormnet\tests\clips\sample3\Downfall_400p.mp4'
# Current color space: YUV420P8, bit depth: 8, resolution: 720x406, frame rate: 25fps, scanorder: progressive, yuv luminance scale: limited, matrix: 709, transfer: bt.709, primaries: bt.709, format: HEVC
# Loading D:\PProjects\colormnet\tests\clips\sample3\Downfall_400p.mp4 using LWLibavSource
clip = core.lsmas.LWLibavSource(source="Downfall_400p.mp4", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
frame = clip.get_frame(0)
# setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT709)
# setting color transfer (vs.TRANSFER_BT709), if it is not set.
if validate.transferIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT709)
# setting color primaries info (to vs.PRIMARIES_BT470_BG), if it is not set.
if validate.primariesIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT470_BG)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 25fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# changing range from limited to full range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_FULL)
# adjusting color space from YUV420P8 to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_s="full")
# adding colors using DeOldify
from vsdeoldify import HAVC_main
clip = HAVC_main(clip=clip, ColorTune="medium", EnableDeepEx=True, DeepExMethod=3, ScFrameDir="C:/Users/Selur/Desktop/sample4/ref_jpg", ScThreshold=0.05, DeepExModel=0, DeepExEncMode=0)
# changing range from full to limited range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# Resizing using 10 - bicubic spline
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=720, h=408, interlaced=False, interlacedd=False) # resolution 720x408 before RGB24 after RGB48
# adjusting output color from: RGB48 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# output
clip.set_output()


Cu Selur
You get the black-and-white output because, in this version, when the server class is not initialized, I skip the call that colors the frame.
The only case I found where the server class is not initialized is when the warnings are shown.
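The skip described above amounts to a guard around the colorize call: if the server handle was never created, the input frame is returned unchanged, which is why the failure is silent. A minimal sketch of that pattern, with hypothetical names (not the actual vsdeoldify code):

```python
# Hypothetical sketch of the "skip coloring when the server is missing" guard.
class FrameColorizer:
    def __init__(self, server=None):
        # server stays None when initialization failed (e.g. warnings
        # broke the ColorMNet server startup).
        self.server = server

    def colorize_frame(self, frame):
        if self.server is None:
            return frame  # silent fallback: output stays black and white
        return self.server.colorize(frame)

class FakeServer:
    def colorize(self, frame):
        return "colored:" + frame

# Without a server the frame passes through unchanged; with one it is colored.
assert FrameColorizer().colorize_frame("frame0") == "frame0"
assert FrameColorizer(FakeServer()).colorize_frame("frame0") == "colored:frame0"
```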

With this version, do you get any warnings when you use the preview?

Dan
Got no warning.
It takes 'long' for the preview to start, but then everything is fast (as if the filter isn't applied).
(tried directly in vsViewer and through Hybrid)

Okay, sample4 is also not working.
adding:
import adjust
# adjusting color using Tweak
clip = adjust.Tweak(clip=clip, hue=0.00, sat=0.00, cont=1.00, coring=True)
before the "# changing range from limited to full range for vsDeOldify"
shows that no coloring is applied.
The script takes long to load before the preview is visible, but there is no coloring.
(I used the old R68 setup to make sure it's not due to some new cuda stuff.)

Is RC19 the correct version (LastEditTime: 2024-09-29)?

Cu Selur
aaah,.. my R68 setup is missing the spatial_correlation_sampler :D
Now I get colors :) and the speed for sample4 is down to 28.97 fps.
(In the R70 with the new CUDA & co. I still get an uncolored output without errors or warnings. Probably the same problem as with your dlib CUDA version, just without the errors.)

Cu Selur

