29.09.2024, 13:51
Hello Selur,
I completed the ColorMNet implementation.
Please find attached the last version RC18 (I hope).
The main changes are:
- now there are only 2 encode modes for ColorMNet: (0) remote, (1) local
- removed the Future Warnings
- improved the explanation of ColorMNet parameters
- RefMerge and related parameters (Weight, Threshold) are implemented only for Deep-Exemplar
- The parameter Preset (fast, medium, slow) will apply also to ColorMNet
Unfortunately, while ColorMNet improves temporal consistency, its long-term memory introduces significant color artifacts on smooth scene changes.
For this reason I kept the encode method (1) local, which works with a small set of memory frames. But in this case I think that Deep-Exemplar is better.
In summary, both models are necessary to get good coloring results: ColorMNet and Deep-Exemplar.
Now I will start working on improving the scene change implementation, which can affect the output quality when HAVC is used with the Exemplar-based models.
Thanks,
Dan
29.09.2024, 14:44
Quote: "Now I will start to work to improve the scene change implementation, which can affect the output quality when HAVC is used with the Exemplar-based models, but this change should not impact the Hybrid GUI."
Fingers crossed.
Quote: "In summary, both models are necessary to get good coloring results: ColorMNet and Deep-Exemplar."
You mean depending on the scene, or somehow in combination?
Quote: "now there are only 2 encode modes for ColorMNet: (0) remote, (1) local"
I adjusted the combo box in Hybrid accordingly.
Quote: "improved the explanation of ColorMNet parameters"
I adjusted the tool-tips accordingly.
Quote: "RefMerge and related parameters (Weight, Threshold) are implemented only for Deep-Exemplar"
I adjusted the GUI to hide them when ColorMNet is selected.
=> update the deoldify test download.
Cu Selur
PS: moved your post and mine to the DeOldify thread, since they are not dlib related.
03.10.2024, 08:06
Calling:
Code:
clip = HAVC_main(clip=clip, EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=0, ScFrameDir=None, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)
I get:
Code:
2024-10-03 08:04:35.046
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\deepex\models\vgg19_gray.py:130: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model.load_state_dict(torch.load(vgg19_gray_path))
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\deepex\models\vgg19_gray.py:130: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model.load_state_dict(torch.load(vgg19_gray_path))
2024-10-03 08:04:48.902
Qt warning: QPixmap::scaled: Pixmap is a null pixmap
2024-10-03 08:04:53.714
Error on frame 0 request:
Traceback (most recent call last):
File "src\\cython\\vapoursynth.pyx", line 3216, in vapoursynth.publicFunction
File "src\\cython\\vapoursynth.pyx", line 3218, in vapoursynth.publicFunction
File "src\\cython\\vapoursynth.pyx", line 834, in vapoursynth.FuncData.__call__
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\__init__.py", line 234, in colormnet_client_color
img_color = colorizer.colorize_frame(ti=n, frame_i=img_orig)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_client.py", line 62, in colorize_frame
return byte_array_to_image(frame_bytes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\colormnet_utils.py", line 38, in byte_array_to_image
img = Image.open(stream).convert('RGB')
^^^^^^^^^^^^^^^^^^
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\PIL\Image.py", line 3305, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file
I suspect something might be missing in my setup, but I'm not sure what.
DOH, installed an older version of DeOldify.
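The PIL.UnidentifiedImageError at the end of the traceback means the ColorMNet client received bytes that PIL cannot decode, for example an empty reply from a server that never initialized. A minimal sketch (the helper shape is assumed from the traceback, not the actual vsdeoldify source) reproduces the failure:

```python
import io
from PIL import Image, UnidentifiedImageError

def byte_array_to_image(frame_bytes: bytes):
    # Assumed shape of the helper seen in the traceback: wrap the raw
    # bytes in a stream and let PIL detect the image format.
    return Image.open(io.BytesIO(frame_bytes)).convert('RGB')

# An empty (or otherwise non-image) reply from the server reproduces the crash:
try:
    byte_array_to_image(b"")
except UnidentifiedImageError:
    print("cannot identify image file")
```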
03.10.2024, 18:29
I discovered that, for some reason, warnings issued by other threads affect the ColorMNet thread.
For this reason I removed all the warnings raised by vsdeoldify.
I added the following function in vsdeoldify\__init__.py:
Code:
import logging
import warnings
import torch

def disable_warnings():
    logger_blocklist = [
        "matplotlib",
        "PIL",
        "torch",
        "numpy",
        "tensorrt",
        "torch_tensorrt",  # comma added: without it this literal merged with "kornia"
        "kornia",
        "dinov2"  # dinov2 is issuing warnings not allowing ColorMNetServer to work properly
    ]
    for module in logger_blocklist:
        logging.getLogger(module).setLevel(logging.ERROR)
    warnings.simplefilter(action='ignore', category=FutureWarning)
    warnings.simplefilter(action='ignore', category=UserWarning)
    warnings.simplefilter(action='ignore', category=DeprecationWarning)
    # warnings.simplefilter(action="ignore", category=Warning)
    torch._logging.set_logs(all=logging.ERROR)
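A detail worth flagging in the blocklist above: Python concatenates adjacent string literals at compile time, so a missing comma between two list entries silently merges them instead of raising an error, and the second module would never be silenced. A minimal illustration:

```python
# A missing comma between adjacent string literals merges them silently:
broken = [
    "torch_tensorrt"  # <-- missing comma
    "kornia",
    "dinov2",
]
fixed = ["torch_tensorrt", "kornia", "dinov2"]

print(broken)       # ['torch_tensorrtkornia', 'dinov2']
print(len(broken))  # 2 -- "kornia" would never be silenced
print(len(fixed))   # 3
```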
On my side this function is working properly and the warnings are not shown any more.
I attached my latest version (it contains my work in progress on scene detection), where I removed the port field. Now when I create the server I set port=0 so that the OS will assign the first available port (useful in case of parallel encoding).
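The port=0 trick can be sketched with plain sockets (a generic illustration, not the actual ColorMNetServer code):

```python
import socket

# Binding to port 0 asks the OS for the first available ephemeral port,
# so several server instances can run in parallel without clashing.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# getsockname() reveals which port the OS actually assigned.
host, port = server.getsockname()
print(f"listening on {host}:{port}")  # port is OS-chosen, never 0
server.close()
```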
Please let me know if with this version the warnings are removed.
Thanks,
I attached a sample for testing ColorMNet (remote).
On my PC it is very fast (16 fps).
Dan
03.10.2024, 18:44
RC19 doesn't do anything for me when calling:
Code:
clip = HAVC_main(clip=clip, EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=0, ScFrameDir=None, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)
I get an uncolored output. I only adjusted the paths:
Quote: "On my PC it is very fast (16 fps)."
Using:
Code:
F:\Hybrid\64bit\Vapoursynth\VSPipe.exe "c:\Users\Selur\Desktop\sample4\Downfall_400p_method1_exmodel0.vpy" -c y4m NUL
Code:
Output 1250 frames in 5.17 seconds (241.90 fps)
Code:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# loading plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import validate
# Source: 'D:\PProjects\colormnet\tests\clips\sample3\Downfall_400p.mp4'
# Current color space: YUV420P8, bit depth: 8, resolution: 720x406, frame rate: 25fps, scanorder: progressive, yuv luminance scale: limited, matrix: 709, transfer: bt.709, primaries: bt.709, format: HEVC
# Loading D:\PProjects\colormnet\tests\clips\sample3\Downfall_400p.mp4 using LWLibavSource
clip = core.lsmas.LWLibavSource(source="Downfall_400p.mp4", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
frame = clip.get_frame(0)
# setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT709)
# setting color transfer (vs.TRANSFER_BT709), if it is not set.
if validate.transferIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT709)
# setting color primaries info (to vs.PRIMARIES_BT470_BG), if it is not set.
if validate.primariesIsInvalid(clip):
    clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT470_BG)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 25fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# changing range from limited to full range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="limited", range_s="full")
# setting color range to PC (full) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_FULL)
# adjusting color space from YUV420P8 to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_s="full")
# adding colors using DeOldify
from vsdeoldify import HAVC_main
clip = HAVC_main(clip=clip, ColorTune="medium", EnableDeepEx=True, DeepExMethod=3, ScFrameDir="C:/Users/Selur/Desktop/sample4/ref_jpg", ScThreshold=0.05, DeepExModel=0, DeepExEncMode=0)
# changing range from full to limited range for vsDeOldify
clip = core.resize.Bicubic(clip, range_in_s="full", range_s="limited")
# Resizing using 10 - bicubic spline
clip = core.fmtc.resample(clip=clip, kernel="spline16", w=720, h=408, interlaced=False, interlacedd=False) # resolution 720x408 before RGB24 after RGB48
# adjusting output color from: RGB48 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# output
clip.set_output()
Cu Selur
03.10.2024, 19:05
You get the black-and-white output because, in this version, when the server class is not initialized I skip the call that colors the frame.
The only case I found where the server class is not initialized is when the warnings are shown.
Using this version, do you get any warnings when you use the preview?
Dan
03.10.2024, 19:16
Got no warning.
It takes 'long' for the preview to start, but then everything is fast (as if the filter isn't applied).
(tried directly in vsViewer and through Hybrid)
Okay, sample4 is also not working.
Adding:
Code:
import adjust
# adjusting color using Tweak
clip = adjust.Tweak(clip=clip, hue=0.00, sat=0.00, cont=1.00, coring=True)
before the "# changing range from limited to full range for vsDeOldify" line shows that no coloring is applied.
The script takes long to load before the preview is visible, but no coloring.
(I used the old R68 setup to make sure it's not due to some new CUDA stuff.)
Is RC19 the correct version (LastEditTime: 2024-09-29)?
Cu Selur
03.10.2024, 19:38
Aaah... my R68 setup is missing the spatial_correlation_sampler.
03.10.2024, 19:40
Now I get colors, and the speed for sample4 is down to 28.97 fps.
(In the R70 with new cuda&co I still get an uncolored output without errors or warnings. Probably same problem as with your dlib cuda version, without the errors.)
Cu Selur