DeOldify Vapoursynth filter
#61
(03.03.2024, 09:16)Selur Wrote: @zspeciman: In Hybrid, DeOldify (like ddcolor) will be applied to the video; DeOldify's 'Video' model was meant for video. Neither of them is anywhere near perfect, so on some content it might work well, while on others not so much.
(see the link in #40 and the screenshots throughout this thread)

Just to be more precise, Hybrid can also be used to colorize single images; the procedure is as follows:

1)  In the main window, check the "image sequence" box.
2)  Click the upper-right arrow; a dialog box will be displayed. Select the image as the first frame, check the "single image" box, and then press "Accept".
3)  Apply the DeOldify filter, or better the "DDeOldify" one (new version), under: Filtering->Vapoursynth->Color->Basic page.
4)  Finally, press the preview icon (the one with the eye over the image).

Wait a few seconds (loading all the libraries takes quite a while) and the "VsViewer" window will appear; by right-clicking on the image you can save the colorized image to a folder.
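For reference, a rough sketch of the kind of script these steps correspond to is shown below; the image path is a placeholder and default ddeoldify parameters are assumed, so the script Hybrid actually generates will differ (it also sets frame properties):

# Rough sketch: single-image colorization with vsdeoldify (path is a placeholder)
import vapoursynth as vs
from vsdeoldify import ddeoldify
core = vs.core
# load the single image and loop it so the filter has a clip to work on
clip = core.imwri.Read(["C:/images/test_bw.jpg"])
clip = core.std.Loop(clip=clip, times=100)
# DeOldify/DDColor expect RGB24 input; convert only if needed
# (a YUV source would additionally need matrix_in_s)
if clip.format.id != vs.RGB24:
    clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, range_s="full")
# colorize with default parameters, then convert for preview/encoding
clip = ddeoldify(clip=clip)
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="full")
clip.set_output()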

Dan

@Selur: there is a bug in the dev version where the conversion to RGB24 format is not applied in case of "image sequence"
#62
Quote:@Selur: there is a bug in the dev version where the conversion to RGB24 format is not applied in case of "image sequence"
Have you tried this, or did you just take a quick look at the generated script?
Usually image sequences should be imported as RGB24, so there should be no need to convert to RGB24.

Side note:
else:
        clipa = clip.std.ModifyFrame(clip, ddeoldify_colorize)
        clipa = Tweak(clip=clipa.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited"), hue=hue[0], sat=sat[0], cont=1.00, coring=True)
        color_clip = clipa.resize.Bicubic(format=vs.RGB24, range_s="limited")
source: https://github.com/dan64/vs-deoldify/blo...C1-L136C78
should only be used when sat != 1 or hue != 0 (and analogously in the ddcolor section).
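A sketch of that suggestion, reusing the names from the linked snippet (clip, sat, hue, Tweak, ddeoldify_colorize); it only illustrates the guard and is not the actual committed code:

clipa = clip.std.ModifyFrame(clip, ddeoldify_colorize)
if sat[0] != 1 or hue[0] != 0:
    # hue/saturation adjustment requested: go through YUV444PS for Tweak, then back to RGB24
    clipa = Tweak(clip=clipa.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited"), hue=hue[0], sat=sat[0], cont=1.00, coring=True)
    color_clip = clipa.resize.Bicubic(format=vs.RGB24, range_s="limited")
else:
    # no adjustment requested: skip the extra colorspace round trip
    color_clip = clipa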

Cu Selur
#63
I released a new version: https://github.com/dan64/vs-deoldify/rel...tag/v1.1.2

The input parameters are the same, but I changed the default of dd_strenght to 3, because the auto selection would select a higher value, improving the quality but significantly slowing down the speed. I also performed a code clean-up (which fixes the issue with sat and hue as well).

In my version, the "image sequence" problem is still there.

I attached an archive with the test image and the generated script.

Thanks,
Dan


Attached Files
.zip   Test_Imagesequence.zip (Size: 222,92 KB / Downloads: 10)
#64
Ah, okay, your image gets imported as Gray8.
Quote:Image
Count : 167
Count of stream of this kind : 1
Kind of stream : Image
Stream identifier : 0
Format : JPEG
Commercial name : JPEG
Internet media type : image/jpeg
Width : 1 280 pixels
Height : 870 pixels
Color space : Y
Bit depth : 8 bits
Compression mode : Lossy
Stream size : 229 KiB (100%)
Proportion of this stream : 1.00000
colour_description_present : Yes
Color range : Full
Color primaries : BT.709
Transfer characteristics : sRGB/sYCC
Matrix coefficients : Identity
ColorSpace_ICC : RGB
-> Not sure atm whether I want to support such images or whether I'll throw an error message.
Quote:Color space : Y
Bit depth : 8
hints that the image is just Gray8, but
Quote:Color primaries : BT.709
Transfer characteristics : sRGB/sYCC
Matrix coefficients : Identity
ColorSpace_ICC : RGB
doesn't really make sense for grayscale,...

Cu Selur
#65
Sent you a link to a dev version which should support those Gray8 input images.

Cu Selur
#66
The new "Coloring" page is amazing!, maybe 3 digit for Weight are too much 2 should be enough (with step 0.05).

I found a problem when I select the DDColor ModelScope model; in this case the generated code is:
clip = ddeoldify(clip=clip, model=0, sat=[1.00,1.00], hue=[0.00,0.00], dd_method=0, dd_weight=0.250, dd_model=-1)

The possible values for this parameter are specified in __init__.vpy

:param dd_model:       ddcolor model (default = 0):
                              0 = ddcolor_modelscope,
                              1 = ddcolor_artistic
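For illustration, a call that explicitly selects the ModelScope model would look like this (the other values simply mirror the line quoted above and are not a recommendation):

clip = ddeoldify(clip=clip, model=0, sat=[1.00,1.00], hue=[0.00,0.00], dd_method=0, dd_weight=0.250, dd_model=0)  # 0 = ddcolor_modelscope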

For the moment I haven't found any other problems.

Thanks,
Dan
#67
"dd_model=-1"
Ah, I accidentally left in the offset I used for vsDeOldifyDDCCombine.
-> will fix and adjust the weight precision.

Cu Selur
#68
Sent you a new link, which should fix both problems.

Cu Selur
#69
With the new dev version, the problem regarding the ddcolor model has been solved.
The fix introduced for the conversion to RGB24 when the input is GRAY8 caused a lot of problems with other filters.
For example, selecting Tweak to de-saturate an image to B&W before ddeoldify no longer works.

Please revert the code. I will apply the transformation to RGB24 directly in the filter if necessary.
I'm doing so many colorspace conversions in the filter that one more will not create any problems.
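As a rough sketch of what such an in-filter normalization could look like (the helper name and the BT.709 assumption are illustrative, not the released code):

def _to_rgb24(clip: vs.VideoNode) -> vs.VideoNode:
    # normalize whatever Hybrid passes in to RGB24 before colorizing
    if clip.format.id == vs.RGB24:
        return clip
    if clip.format.color_family == vs.GRAY:
        # grayscale input (e.g. a GRAY8 image sequence): no matrix needed
        return clip.resize.Bicubic(format=vs.RGB24, range_s="full")
    # YUV (or other) input: convert assuming BT.709
    return clip.resize.Bicubic(format=vs.RGB24, matrix_in_s="709", range_s="full")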

Thanks,
Dan

P.S.
I will release a new version with the patch.
#70
Can't reproduce the problem here, the code Hybrid produces:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Loading Plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/libimwri.dll")
# source: 'C:/Users/Selur/Desktop/TestBW1.jpg'
# current color space: GRAY8, bit depth: 8, resolution: 1280x870, fps: 25, scanorder: progressive, yuv luminance scale: full, matrix: 709
# Loading C:\Users\Selur\Desktop\TestBW1.jpg using vsImageReader
clip = core.imwri.Read(["C:/Users/Selur/Desktop/TestBW1.jpg"])
clip = core.std.Loop(clip=clip, times=100)
frame = clip.get_frame(0)
# Setting detected color matrix (709).
clip = core.std.SetFrameProps(clip, _Matrix=1)
# Setting color transfer (709), if it is not set.
if '_Transfer' not in frame.props or not frame.props['_Transfer']:
  clip = core.std.SetFrameProps(clip, _Transfer=1)
# Setting color primaries info (to 709), if it is not set.
if '_Primaries' not in frame.props or not frame.props['_Primaries']:
  clip = core.std.SetFrameProps(clip, _Primaries=1)
# Setting color range to PC (full) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=0)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# adjusting color space from GRAY8 to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, range_s="full")
# adding colors using DeOldify
from vsdeoldify import ddeoldify
clip = ddeoldify(clip=clip, model=0, sat=[0.50,1.00], hue=[0.00,5.00], dd_method=0, dd_weight=0.50, dd_model=1)
# adjusting output color from: RGB24 to YUV420P10 for NVEncModel
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="full")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
seems to work fine,...


Quote:For example, selecting Tweak to de-saturate an image to B&W before ddeoldify no longer works.
Works fine here, tried:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# Import scripts
import adjust
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, scanorder: progressive, yuv luminance scale: limited, matrix: 470bg
# Loading G:\TestClips&Co\files\test.avi using LWLibavSource
clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/test.avi", format="YUV420P8", stream_index=0, cache=0, prefer_hw=0)
frame = clip.get_frame(0)
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer (170), if it is not set.
if '_Transfer' not in frame.props or not frame.props['_Transfer']:
  clip = core.std.SetFrameProps(clip, _Transfer=6)
# Setting color primaries info (to 470), if it is not set.
if '_Primaries' not in frame.props or not frame.props['_Primaries']:
  clip = core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# Color Adjustment
clip = adjust.Tweak(clip=clip, hue=0.00, sat=0.00, cont=1.00, coring=False)
# adjusting color space from YUV420P8 to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="470bg", range_s="limited")
# adding colors using DeOldify
from vsdeoldify import ddeoldify
clip = ddeoldify(clip=clip, model=0, sat=[0.50,1.00], hue=[0.00,5.00], dd_method=0, dd_weight=0.50, dd_model=1)
# adjusting output color from: RGB24 to YUV420P10 for NVEncModel
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
generated code and result look as expected,..
Using another image sequence:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'F:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Loading Plugins
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/Support/libimwri.dll")
# Import scripts
import adjust
# source: 'G:/clips/image sequence/ED-360-png/10940.png'
# current color space: RGB24, bit depth: 8, resolution: 640x360, fps: 25, scanorder: progressive, yuv luminance scale: full, matrix: 709
# Loading G:\clips\image sequence\ED-360-png\10940.png using vsImageReader
clip = core.imwri.Read(["G:/clips/image sequence/ED-360-png/10940.png"])
clip = core.std.Loop(clip=clip, times=100)
frame = clip.get_frame(0)
# Setting color transfer (170), if it is not set.
if '_Transfer' not in frame.props or not frame.props['_Transfer']:
  clip = core.std.SetFrameProps(clip, _Transfer=6)
# Setting color primaries info (to 470), if it is not set.
if '_Primaries' not in frame.props or not frame.props['_Primaries']:
  clip = core.std.SetFrameProps(clip, _Primaries=5)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# adjusting color space from RGB24 to YUV444PS for vsTweak
clip = core.resize.Bicubic(clip=clip, format=vs.YUV444PS, matrix_s="709", range_s="full")
# Color Adjustment
clip = adjust.Tweak(clip=clip, hue=0.00, sat=0.00, cont=1.00, coring=False)
# adjusting color space from YUV444PS to RGB24 for vsDeOldify
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_s="full", dither_type="error_diffusion")
# adding colors using DeOldify
from vsdeoldify import ddeoldify
clip = ddeoldify(clip=clip, model=0, sat=[0.50,1.00], hue=[0.00,5.00], dd_method=0, dd_weight=0.50, dd_model=1)
# adjusting output color from: RGB24 to YUV420P10 for NVEncModel
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_s="full")
# set output frame rate to 25fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
# Output
clip.set_output()
-> no clue where you see the problem.

Cu Selur