Deoldify Vapoursynth filter
If that happens, a torch-addon update would be required. Smile
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
Reply
Wanted to give this a go since I do a ton of color correction work.  When running the initial command to install fastai I run into an error.

C:\Program Files\Hybrid\64bit\Vapoursynth>python -m pip install fastai==1.0.6
WARNING: Skipping C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip-24.0.dist-info due to invalid metadata entry 'name'
WARNING: Skipping C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip-24.0.dist-info due to invalid metadata entry 'name'
Collecting fastai==1.0.6
  Using cached fastai-1.0.6-py3-none-any.whl.metadata (9.6 kB)
Requirement already satisfied: fastprogress>=0.1.10 in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (1.0.3)
Collecting ipython (from fastai==1.0.6)
  Using cached ipython-8.27.0-py3-none-any.whl.metadata (5.0 kB)
Collecting jupyter (from fastai==1.0.6)
  Using cached jupyter-1.1.1-py2.py3-none-any.whl.metadata (2.0 kB)
Requirement already satisfied: matplotlib in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (3.9.0)
Requirement already satisfied: numpy>=1.12 in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (1.26.4)
Requirement already satisfied: pandas in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (2.2.2)
Requirement already satisfied: Pillow in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (10.1.0)
Requirement already satisfied: requests in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (2.32.2)
Requirement already satisfied: scipy in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (1.13.1)
Requirement already satisfied: spacy>=2.0.16 in c:\program files\hybrid\64bit\vapoursynth\lib\site-packages (from fastai==1.0.6) (3.7.4)
INFO: pip is looking at multiple versions of fastai to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement torchvision-nightly (from fastai) (from versions: none)
ERROR: No matching distribution found for torchvision-nightly
WARNING: Skipping C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip-24.0.dist-info due to invalid metadata entry 'name'
WARNING: Skipping C:\Program Files\Hybrid\64bit\Vapoursynth\Lib\site-packages\pip-24.0.dist-info due to invalid metadata entry 'name'

I have the Torch addon extracted to the Hybrid folder where it replaced the previous one from the install.
Reply
I have no clue how you ended up calling 'python -m pip install fastai==1.0.6', or what exactly you wanted to try.
ColorMNet support isn't working at the moment
TypeError: vs_colormnet..colormnet_clip_color_merge() got an unexpected keyword argument 'propagate'
and there is no Hybrid version which supports it atm.
vsDeOldify before that works fine without having to install anything through the command line.

Cu Selur
Reply
I was just taking the steps you took in Post #4 of this thread.  I was not aware that the vs filter was at that URL.  I guess I need to find documentation for how to add new filters to Hybrid?  

After looking at the github link with the filter, I see it is using cuda.  Was this made with only an Nvidia card in mind?  Anything I can do to get it to run on my 7900 XTX?
Reply
Quote: I was just taking the steps you took in Post #4 of this thread. I was not aware that the vs filter was at that URL. I guess I need to find documentation for how to add new filters to Hybrid?
Deoldify is part of the torch-addon for Hybrid and yes, Dan64 documented how to install it on the github project page.

Quote: After looking at the github link with the filter, I see it is using cuda. Was this made with only an Nvidia card in mind? Anything I can do to get it to run on my 7900 XTX?
Sorry, but as far as I know, you can't use DeOldify (especially the version of Dan64, which leverages different tools) with non-NVIDIA cards.
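As far as I know, PyTorch's ROCm builds (which expose the regular torch.cuda API) are Linux-only, and the Windows wheels ship no AMD backend; since the filter's code calls .cuda() directly, an RX 7900 XTX won't work without source changes. Purely for illustration, a hypothetical device-fallback sketch (not part of vsdeoldify) would look roughly like this:

```python
# Hypothetical sketch, not vsdeoldify code: the filter hard-codes .cuda(),
# which requires an NVIDIA (or Linux ROCm) build of PyTorch. A portable
# variant would pick the device at runtime instead.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # sketch still runs where PyTorch isn't installed
    device = "cpu"

print(device)  # "cuda" on a working NVIDIA/ROCm setup, otherwise "cpu"
```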

Cu Selur
Reply
(16.09.2024, 19:32)Selur Wrote: ColorMNet support isn't working at the moment
TypeError: vs_colormnet..colormnet_clip_color_merge() got an unexpected keyword argument 'propagate'
and there is no Hybrid version which supports it atm.

The current version works only in a limited number of cases. I already fixed some bugs, but I'm working on a way to fully enable the long-term memory feature of ColorMNet, which represents the most important new feature. I hope to be able to release a new version in the next 2 days.

Dan
Reply
Nice Smile Probably won't do much testing before the weekend, and even if there is no release till then, no problem. Smile

Cu Selur
Reply
Hello Selur,

  I attached the new RC4.
  I tried to solve the problem related to the long-term memory model used by ColorMNet, which is XMem.
  I found a solution, but I had to almost totally bypass Vapoursynth: first I have to color all the frames in Python (not using ModifyFrame), and only after this step can I use ModifyFrame, which, being multi-threaded and asynchronous, would otherwise not allow use of the XMem memory-saving capabilities.
  So I introduced a new parameter, encode_mode:

Quote::param encode_mode:        Parameter used by ColorMNet to define the encode mode strategy.
                                Available values are:
                                    0: batch encoding. The frames in the same batch will be encoded synchronously.
                                    1: asynchronous encoding. This is the standard Vapoursynth mode.
                                    2: synchronous encoding. This option allows full use of the long-term frame memory
                                        and will speed up the encoding. As a side effect, the Preview will be disabled.
   
  Also, the meaning of the parameter max_memory_frames has changed as follows:

Quote::param max_memory_frames:  Parameter used by the ColorMNet model; specifies the max number of encoded frames to keep in memory.
                                Its value depends on the encode mode.
                                encode_mode=0: represents the batch size; suggested values are:
                                    6 or 8 : for an 8GB GPU
                                    12 or 14 : for a 12GB GPU
                                    24 or 28 : for a 24GB GPU
                                encode_mode=1: suggested values are:
                                    4 or 5 : for an 8GB GPU
                                    8 or 9 : for a 12GB GPU
                                    15 or 16 : for a 24GB GPU
                                encode_mode=2: there is no limit to this value; it can be set = 0 (all the frames in the clip)

If encode_mode=2 is used and the clip has more than 200 frames, the preview should be disabled, because it could take minutes (or even hours) to show.
I don't know if you want to support this encode mode in Hybrid.
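The suggested values above can be captured in a small lookup helper. This is a hypothetical sketch (the function and its name are not part of vsdeoldify), encoding only the upper suggestion for each GPU-memory tier:

```python
# Hypothetical helper, not part of vsdeoldify: returns a suggested
# max_memory_frames value from the table above (upper suggestion per tier).
SUGGESTED = {
    0: {8: 8, 12: 14, 24: 28},   # batch encoding: the value is the batch size
    1: {8: 5, 12: 9, 24: 16},    # asynchronous (standard Vapoursynth) mode
}

def suggest_max_memory_frames(encode_mode: int, gpu_gb: int) -> int:
    if encode_mode == 2:
        return 0  # synchronous mode: no limit, 0 = keep all frames of the clip
    try:
        return SUGGESTED[encode_mode][gpu_gb]
    except KeyError:
        raise ValueError(
            f"no suggestion for encode_mode={encode_mode}, {gpu_gb}GB GPU"
        ) from None

print(suggest_max_memory_frames(1, 12))  # → 9
```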

I also attached a small sample to test the 3 encode modes.

Dan


Attached Files
.zip   vsdeoldify-4.5.0_RC4.zip (Size: 401,08 KB / Downloads: 19)
.zip   sample.zip (Size: 474,5 KB / Downloads: 20)
Reply
Nice, will look at it tomorrow after work and report back
Reply
Wanted to try thing2_async.vpy, so I adjusted the paths to:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'f:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# loading plugins
core.std.LoadPlugin(path="f:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="f:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="f:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import validate
# Source: 'D:\PProjects\colormnet\tests\thing2.mp4'
# Current color space: YUV420P8, bit depth: 8, resolution: 384x280, frame rate: 25fps, scanorder: progressive, yuv luminance scale: limited, matrix: 709, format: AVC
# Loading D:\PProjects\colormnet\tests\thing2.mp4 using LWLibavSource
clip = core.lsmas.LWLibavSource(source="thing2.mp4", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
frame = clip.get_frame(0)
# setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT709)
# setting color transfer (vs.TRANSFER_BT601), if it is not set.
if validate.transferIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT601)
# setting color primaries info (to vs.PRIMARIES_BT470_BG), if it is not set.
if validate.primariesIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT470_BG)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 25fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# changing range from limited to full range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_FULL)
# adjusting color space from YUV420P8 to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_s="full")
# adding colors using DeOldify
from vsdeoldify import HAVC_deepex
clip = HAVC_deepex(clip=clip, method=4, ref_merge=0, dark=True, smooth=False, max_memory_frames=8,
                   sc_framedir="ref", ex_model=0, encode_mode=1)
# changing range from full to limited range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# no resizing since the target resolution is already achieved
# adjusting output color from: RGB24 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="limited")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# output
clip.set_output()
but starting the script preview (in vsViewer) I got:
2024-09-19 15:04:39.495
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\fastai\gen_doc\gen_notebooks.py:63: SyntaxWarning: invalid escape sequence '\s'
match = re.match(f"^({key})\s*=\s*.*", codestr)

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\fastai\gen_doc\gen_notebooks.py:216: SyntaxWarning: invalid escape sequence '\('
if re.search(f"update_nb_metadata\('{fn}'", c['source']): return c

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\deepex\models\vgg19_gray.py:130: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model.load_state_dict(torch.load(vgg19_gray_path))

2024-09-19 15:04:44.694
Failed to evaluate the script:
Python exception: 'NoneType' object has no attribute 'write'

Traceback (most recent call last):
File "src\\cython\\vapoursynth.pyx", line 3365, in vapoursynth._vpy_evaluate
File "src\\cython\\vapoursynth.pyx", line 3366, in vapoursynth._vpy_evaluate
File "C:\Users\Selur\Desktop\sample\thing2_async.vpy", line 43, in
clip = HAVC_deepex(clip=clip, method=4, ref_merge=0, dark=True, smooth=False, max_memory_frames=8,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\__init__.py", line 540, in HAVC_deepex
clip_colored = vs_colormnet(clip, clip_ref, image_size=-1, enable_resize=enable_resize,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\vsslib\vsmodels.py", line 44, in vs_colormnet
return vs_colormnet_async(clip, clip_ref, image_size, enable_resize, frame_propagate, max_memory_frames)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\__init__.py", line 203, in vs_colormnet_async
colorizer = colormnet_colorizer(image_size=image_size, vid_length=vid_length, enable_resize=enable_resize,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\__init__.py", line 45, in colormnet_colorizer
return ColorMNetRender(image_size=image_size, vid_length=vid_length, enable_resize=enable_resize,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_render.py", line 80, in __init__
self._colorize_init(image_size, vid_length)
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_render.py", line 134, in _colorize_init
self.network = ColorMNet(self.config, self.config['model']).cuda().eval()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\model\network.py", line 30, in __init__
self.key_encoder = KeyEncoder_DINOv2_v6()
^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\model\modules.py", line 161, in __init__
network = resnet.resnet50(pretrained=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\model\resnet.py", line 170, in resnet50
load_weights_add_extra_dim(model, model_zoo.load_url(model_urls['resnet50']), extra_dim)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\hub.py", line 855, in load_state_dict_from_url
sys.stderr.write(f'Downloading: "{url}" to {cached_file}\n')
^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'write'
Tongue (same with the other two scripts)
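For what it's worth, the traceback bottoms out in torch.hub.load_state_dict_from_url writing its "Downloading:" message to sys.stderr, which is None when Python runs embedded without a console (as in vsViewer). A possible workaround, an assumption rather than an official fix, is to install dummy streams at the top of the .vpy script, before importing vsdeoldify:

```python
import io
import sys

# Embedded interpreters (e.g. inside vsViewer) may start with sys.stdout /
# sys.stderr set to None; torch.hub then crashes on sys.stderr.write(...).
# Installing in-memory dummy streams keeps such writes harmless.
if sys.stderr is None:
    sys.stderr = io.StringIO()
if sys.stdout is None:
    sys.stdout = io.StringIO()
```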

Cu Selur