RE: Deoldify Vapoursynth filter - Dan64 - 10.03.2024
Hello Selur,
I have completed writing the new version. The changes are so numerous that I decided to bump the version to 2.0 (not yet on github).
I attached the source of the new version, so that you can start to look at parameters.
The full list of parameters is briefly described in the file __init__.py
def ddeoldify(
clip: vs.VideoNode, model: int = 0, render_factor: int = 24, sat: list = [1.0,1.0], hue: list = [0.0,0.0],
dd_model: int = 1, dd_render_factor: int = 24, dd_tweak_luma_bind: list = [False, 0.0, 0.0], dd_bright: float = 0, dd_cont: float = 1, dd_gamma: float = 1.0,
dd_method: int = 2, dd_method_params: list = [0.5, 1.2, 0.15, 0.1], device_index: int = 0, n_threads: int = 8, dd_num_streams: int = 1,
torch_hub_dir: str = model_dir
) -> vs.VideoNode:
"""A Deep Learning based project for colorizing and restoring old images and video
:param clip: clip to process, only RGB24 format is supported.
:param model: deoldify model to use (default = 0):
0 = ColorizeVideo_gen
1 = ColorizeStable_gen
2 = ColorizeArtistic_gen
:param render_factor: render factor for the model, range: 10-40 (default = 24).
:param sat: list with the saturation parameters to apply to color models (default = [1,1])
:param hue: list with the hue parameters to apply to color models (default = [0,0])
:param dd_model: ddcolor model (default = 1):
0 = ddcolor_modelscope,
1 = ddcolor_artistic
:param dd_render_factor: ddcolor input size equivalent to render_factor, if = 0 will be auto selected
(default = 24) [range: 0-64]
:param dd_tweak_luma_bind: parameters for luma constrained ddcolor preprocess
[0] : luma constrained ddcolor preprocess enabled (default = False)
[1] : luma min value for tweak activation (default = 0, non activation)
[2] : luma min value for gamma tweak activation (default = 0, non activation)
:param dd_bright: ddcolor tweak's brightness (default = 0)
:param dd_cont: ddcolor tweak's contrast (default = 1)
:param dd_gamma: ddcolor tweak's gamma (default = 1)
:param dd_method: method used to combine deoldify with ddcolor (default = 2):
0 : deoldify only (no merge)
1 : ddcolor only (no merge)
2 : Simple Merge
3 : Adaptive Luma Merge
4 : Constrained Chroma Merge
:param dd_method_params: list with the parameters to apply to selected dd_method:
[0] : clipb_weight, used by: SimpleMerge, AdaptiveLumaMerge, ConstrainedChromaMerge
[1] : scale_factor, used by: AdaptiveLumaMerge
[2] : min_weight, used by: AdaptiveLumaMerge
[3] : luma_threshold, used by: ConstrainedChromaMerge
:param device_index: device ordinal of the GPU, choices: GPU0...GPU7, CPU=99 (default = 0)
:param n_threads: number of threads used by numpy, range: 1-32 (default = 8)
:param dd_num_streams: number of CUDA streams to enqueue the kernels (default = 1)
:param torch_hub_dir: torch hub dir location, default is model directory,
if set to None will switch to torch cache dir.
"""
As decided, dd_method is now used to switch Deoldify or DDColor on/off.
I added 3 merging methods:
2) Simple Merge: already implemented in the previous versions.
3) Adaptive Luma Merge: I noted that DDColor is sensitive to luma; in dark scenes the quality of the colored images is poor. This method reduces the weight applied to DDColor when the luma is low, down to a minimum weight defined by the parameter min_weight.
4) Constrained Chroma Merge: this method tries to solve the problem by not allowing the DDColor estimated chroma values to differ too much from the chroma values estimated by Deoldify; the constraint is defined by the parameter luma_threshold (a rough sketch of both ideas is shown after this list).
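To make the two new merges more concrete, here is a minimal, hypothetical numpy sketch of both ideas; the function names and the exact formulas are only illustrative assumptions (luma and chroma assumed normalized to [0, 1]), not the filter's actual code.

import numpy as np

def adaptive_luma_weight(luma: np.ndarray, clipb_weight: float = 0.5,
                         min_weight: float = 0.15) -> np.ndarray:
    # Adaptive Luma Merge (illustrative): the weight given to the DDColor chroma
    # decreases with the luma, but never drops below min_weight.
    return np.maximum(clipb_weight * np.clip(luma, 0.0, 1.0), min_weight)

def constrained_chroma(u_deoldify: np.ndarray, u_ddcolor: np.ndarray,
                       max_diff: float = 0.1) -> np.ndarray:
    # Constrained Chroma Merge (illustrative): the DDColor chroma is not allowed to
    # deviate from the Deoldify chroma by more than max_diff.
    return np.clip(u_ddcolor, u_deoldify - max_diff, u_deoldify + max_diff)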
Finally, I added the possibility to change the brightness, the contrast and the gamma of the original B&W image before it is provided as input to DDColor. This kind of tweak has been developed to help the DDColor estimates. Since changing these parameters can usually damage the final quality significantly, after the estimation only the chroma values are propagated back, while the luma remains the original one of the B&W image. In this way the chroma is improved without destroying the luma.
I also added a constrained tweak, controlled by dd_tweak_luma_bind. The behavior is similar to the unconstrained tweak, but in this case the brightness will be increased only on frames whose average luma is below luma_min, and the gamma will be applied only when the average luma is below gamma_luma_min. A rough sketch of this idea follows.
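A rough Python sketch of the tweak idea, again with hypothetical helper names and luma assumed normalized to [0, 1] (not the actual implementation):

import numpy as np

def tweak_for_ddcolor(luma: np.ndarray, bright: float = 0.0, cont: float = 1.0,
                      gamma: float = 1.0) -> np.ndarray:
    # Brightness/contrast/gamma applied only to the copy of the frame fed to DDColor.
    out = np.clip(luma * cont + bright, 0.0, 1.0)
    return np.power(out, 1.0 / max(gamma, 0.1))

def luma_constrained_tweak(luma: np.ndarray, luma_min: float = 0.3,
                           gamma: float = 2.0, gamma_luma_min: float = 0.3) -> np.ndarray:
    # Constrained variant: lift the brightness only when the frame's average luma is
    # below luma_min, and apply the gamma only when it is below gamma_luma_min.
    avg = float(luma.mean())
    out = luma
    if avg < luma_min:
        out = np.clip(out + (luma_min - avg), 0.0, 1.0)
    if avg < gamma_luma_min:
        out = np.power(out, 1.0 / gamma)
    return out

# In both cases only the chroma estimated by DDColor on the tweaked frame is kept;
# the luma of the final output stays the original B&W one.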
The results are quite interesting.
For example, the B&W image of frame 899 of the clip provided in my previous post is the following:
The image colored with Deoldify (default settings) is the following:
The colored image is quite good, but the estimate provided by DDColor (default settings) is the following:
which is quite bad. But applying the tweak: bright=0.2, gamma=3, the image becomes:
which is quite good.
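For reference, a call with these settings should look roughly like this (assuming the package is imported as vsdeoldify and the clip is already RGB24):

from vsdeoldify import ddeoldify

clip = ddeoldify(clip, dd_method=1, dd_bright=0.2, dd_gamma=3)  # DDColor-only output with the pre-tweak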
The following picture shows some examples of results obtained by applying the other methods.
There is still a lot to experiment with, but with this version I think I have included all the most interesting tools.
The source code should help to better understand the meaning of some options, but please don't hesitate to contact me with any doubts.
Thanks,
Dan
Here is the source code
RE: Deoldify Vapoursynth filter - Selur - 10.03.2024
Will look at it tomorrow after work, but looks nice.
Cu Selur
RE: Deoldify Vapoursynth filter - Dan64 - 11.03.2024
Hello Selur,
I had time to review the code and add some description of the new functions.
No material changes, but I renamed "luma_threshold" to "chroma_threshold" and "scale_factor" to "luma_threshold".
I attached the latest version.
Dan
P.S.
I received the following comment from Jason Antic (author of Deoldify)
Quote:Wow, I went through that readme- very interesting results you got there. Great read, and I like the project! Thanks for doing this.
RE: Deoldify Vapoursynth filter - Selur - 11.03.2024
My thoughts,...
He, he, I would write the code differently (more function definitions).
The
os.environ["NUMEXPR_MAX_THREADS"] = "8"
at the beginning should not be needed, since you later use:
os.environ['NUMEXPR_MAX_THREADS'] = str(n_threads)
btw. have you tested whether it makes a difference whether ddcolor and deoldify are fed with b&w or normal colored content?
Won't get around to adjusting Hybrid to this version before the weekend, since it's a lot of changes to add and I'm not that well atm.
Would also be good to add the min/max values for all the list-parameters. Also, you should note that, for example for sat, the first param in the list is for deoldify and the second is for ddcolor.
Cu Selur
RE: Deoldify Vapoursynth filter - Dan64 - 11.03.2024
(11.03.2024, 18:56)Selur Wrote: My thoughts,...
He, he, I would write the code differently (more function definitions).
The
os.environ["NUMEXPR_MAX_THREADS"] = "8"
at the beginning should not be needed, since you later use:
os.environ['NUMEXPR_MAX_THREADS'] = str(n_threads)
The problem is that the error is triggered at the first "import from numpy..." that happens early (in "import from .deoldify"), so I have to assign a default value very early, before any other code, and then reassign it later with the second "parametric" assignment.
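In other words, something along these lines (a simplified sketch of the import-order issue, assuming numexpr gets pulled in through the package imports):

import os

# Early, hard-coded default: the variable must exist before numexpr is first imported,
# which happens indirectly through "from .deoldify import ..." pulling in numpy.
os.environ["NUMEXPR_MAX_THREADS"] = "8"

import numpy as np  # numexpr, if present, reads NUMEXPR_MAX_THREADS around this point

def set_numexpr_threads(n_threads: int = 8) -> None:
    # The "parametric" assignment can only happen later, when the filter is called.
    os.environ["NUMEXPR_MAX_THREADS"] = str(n_threads)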
(11.03.2024, 18:56)Selur Wrote: btw. have you tested whether it makes a difference whether ddcolor and deoldify are fed with b&w or normal colored content?
Both models ignore the chroma component; there is no difference in the results when feeding a colored or a b&w image.
(11.03.2024, 18:56)Selur Wrote: Would also be good to add the min/max values for all the list-parameters. Also you should note that for example for sat the first param in the list ist for deoldify and the second is for ddcolor.
Cu Selur
I will try to fill in the incomplete specifications.
I updated the description of ddeoldify()
def ddeoldify(
clip: vs.VideoNode, model: int = 0, render_factor: int = 24, sat: list = [1.0,1.0], hue: list = [0.0,0.0],
dd_model: int = 1, dd_render_factor: int = 24, dd_tweak_luma_bind: list = [False, 0.0, 0.0], dd_bright: float = 0, dd_cont: float = 1, dd_gamma: float = 1.0,
dd_method: int = 2, dd_method_params: list = [0.5, 0.6, 0.15, 0.2], device_index: int = 0, n_threads: int = 8, dd_num_streams: int = 1,
torch_hub_dir: str = model_dir
) -> vs.VideoNode:
"""A Deep Learning based project for colorizing and restoring old images and video
:param clip: clip to process, only RGB24 format is supported.
:param model: deoldify model to use (default = 0):
0 = ColorizeVideo_gen
1 = ColorizeStable_gen
2 = ColorizeArtistic_gen
:param render_factor: render factor for the model, range: 10-44 (default = 24).
:param sat: list with the saturation parameters to apply to color models (default = [1,1])
[0] : saturation for deoldify
[1] : saturation for ddcolor
:param hue: list with the hue parameters to apply to color models (default = [0,0])
[0] : hue for deoldify
[1] : hue for ddcolor
:param dd_model: ddcolor model (default = 1):
0 = ddcolor_modelscope,
1 = ddcolor_artistic
:param dd_render_factor: ddcolor input size equivalent to render_factor, if = 0 will be auto selected
(default = 24) [range: 0, 10-64]
:param dd_tweak_luma_bind: parameters for luma constrained ddcolor preprocess
[0] : luma_constrained_tweak -> luma constrained ddcolor preprocess enabled (default = False), range: [True, False]
[1] : luma_min -> luma (%) min value for tweak activation (default = 0, non activation), range [0-1]
[2] : gamma_luma_min -> luma (%) min value for gamma tweak activation (default = 0, non activation), range [0-1]
:param dd_bright: ddcolor tweak's brightness (default = 0)
:param dd_cont: ddcolor tweak's contrast (default = 1)
:param dd_gamma: ddcolor tweak's gamma (default = 1)
:param dd_method: method used to combine deoldify with ddcolor (default = 2):
0 : deoldify only (no merge)
1 : ddcolor only (no merge)
2 : Simple Merge
3 : Adaptive Luma Merge
4 : Constrained Chroma Merge
:param dd_method_params: list with the parameters to apply to selected dd_method:
[0] : clipb_weight (%), used by: SimpleMerge, AdaptiveLumaMerge, ConstrainedChromaMerge, range [0-1]
[1] : luma_threshold (%), used by: AdaptiveLumaMerge, range [0-1]
[2] : min_weight (%), used by: AdaptiveLumaMerge, range [0-1]
[3] : chroma_threshold (%), used by: ConstrainedChromaMerge [0-1]
:param device_index: device ordinal of the GPU, choices: GPU0...GPU7, CPU=99 (default = 0)
:param n_threads: number of threads used by numpy, range: 1-32 (default = 8)
:param dd_num_streams: number of CUDA streams to enqueue the kernels (default = 1)
:param torch_hub_dir: torch hub dir location, default is model directory,
if set to None will switch to torch cache dir.
"""
I hope that this fills the gap in the documentation.
Dan
RE: Deoldify Vapoursynth filter - Selur - 12.03.2024
Just to give you a heads-up, I'm down with some gastrointestinal infection/flu, so I probably won't get around to working on it before Friday.
RE: Deoldify Vapoursynth filter - Dan64 - 12.03.2024
I'm sorry for the flu, I hope you get better soon.
In the meanwhile I added another boolean parameter, called chroma_resize (default = True).
When this parameter is set to True, the encoding speed will increase by about 10% (see table below).
The speed increase does not reduce the final output quality, which will be the same as that obtained by setting chroma_resize = False.
So it is safe to enable this parameter by default.
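The idea, in a rough VapourSynth sketch (hypothetical helper, not the filter's actual code): colorize a clip already reduced to roughly the inference size, then bring only the resulting chroma back to the original resolution while keeping the full-resolution luma.

import vapoursynth as vs
core = vs.core

def chroma_resize_colorize(clip_orig: vs.VideoNode, colorize, render_size: int = 384) -> vs.VideoNode:
    # colorize() stands for any RGB24 -> RGB24 colorizer (deoldify/ddcolor + merge);
    # aspect-ratio handling is omitted for brevity.
    small = clip_orig.resize.Spline64(width=render_size, height=render_size)
    colored = colorize(small).resize.Spline64(width=clip_orig.width, height=clip_orig.height)
    # keep the original luma, take only the chroma from the upscaled colorized clip
    orig_yuv = clip_orig.resize.Spline64(format=vs.YUV444PS, matrix_s="709")
    col_yuv = colored.resize.Spline64(format=vs.YUV444PS, matrix_s="709")
    merged = core.std.ShufflePlanes([orig_yuv, col_yuv, col_yuv], planes=[0, 1, 2], colorfamily=vs.YUV)
    return merged.resize.Spline64(format=vs.RGB24, matrix_in_s="709")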
happy recovery!
Dan
I also added more explanations in ddeoldify(), now all the parameters are explained.
def ddeoldify(
clip: vs.VideoNode, model: int = 0, render_factor: int = 24, sat: list = [1.0,1.0], hue: list = [0.0,0.0],
dd_model: int = 1, dd_render_factor: int = 24, dd_tweak_luma_bind: list = [False, 0.0, 0.0], dd_bright: float = 0, dd_cont: float = 1, dd_gamma: float = 1.0,
dd_method: int = 2, dd_method_params: list = [0.5, 0.6, 0.15, 0.2], chroma_resize: bool = True, device_index: int = 0, n_threads: int = 8, dd_num_streams: int = 1,
torch_hub_dir: str = model_dir
) -> vs.VideoNode:
"""A Deep Learning based project for colorizing and restoring old images and video
:param clip: clip to process, only RGB24 format is supported.
:param model: deoldify model to use (default = 0):
0 = ColorizeVideo_gen
1 = ColorizeStable_gen
2 = ColorizeArtistic_gen
:param render_factor: render factor for the model, range: 10-44 (default = 24).
:param sat: list with the saturation parameters to apply to color models (default = [1,1])
[0] : saturation for deoldify
[1] : saturation for ddcolor
:param hue: list with the hue parameters to apply to color models (default = [0,0])
[0] : hue for deoldify
[1] : hue for ddcolor
:param dd_model: ddcolor model (default = 1):
0 = ddcolor_modelscope,
1 = ddcolor_artistic
:param dd_render_factor: ddcolor input size equivalent to render_factor, if = 0 will be auto selected
(default = 24) [range: 0, 10-64]
:param dd_tweak_luma_bind: parameters for luma constrained ddcolor preprocess
[0] : luma_constrained_tweak -> luma constrained ddcolor preprocess enabled (default = False), range: [True, False]
when enabled, the average luma of a video clip will be forced not to fall below the value
defined by the parameter "luma_min". The function also allows modifying the gamma
of the clip if the average luma is below the parameter "gamma_luma_min"
[1] : luma_min -> luma (%) min value for tweak activation (default = 0, non activation), range [0-1]
[2] : gamma_luma_min -> luma (%) min value for gamma tweak activation (default = 0, non activation), range [0-1]
:param dd_bright: ddcolor tweak's brightness (default = 0)
:param dd_cont: ddcolor tweak's contrast (default = 1)
:param dd_gamma: ddcolor tweak's gamma (default = 1)
:param dd_method: method used to combine deoldify with ddcolor (default = 2):
0 : deoldify only (no merge)
1 : ddcolor only (no merge)
2 : Simple Merge:
the images are combined using a weighted merge, where the parameter clipb_weight
represents the weight assigned to the colors provided by ddcolor()
3 : Adaptive Luma Merge:
given that the ddcolor() performance is quite bad on dark scenes, the images are
combined by decreasing the weight assigned to ddcolor() when the luma is
below the threshold given by: luma_threshold.
For example, with luma_threshold = 0.6 the weight assigned to ddcolor() will
start to decrease linearly when the luma < 60%, down to "min_weight"
4 : Constrained Chroma Merge:
given that the colors provided by deoldify() are more conservative and stable
than the colors obtained with ddcolor(), the images are combined by assigning
a limit to the amount of difference in chroma values between deoldify() and
ddcolor(); this limit is defined by the parameter chroma_threshold. The limit is applied
to the image converted to "YUV". For example, when chroma_threshold=0.1, the chroma
values "U","V" of the ddcolor() image will be constrained to have an absolute
percentage difference with respect to the "U","V" provided by deoldify() not higher than 10%
:param dd_method_params: list with the parameters to apply to selected dd_method:
[0] : clipb_weight (%), used by: SimpleMerge, AdaptiveLumaMerge, ConstrainedChromaMerge, range [0-1]
[1] : luma_threshold (%), used by: AdaptiveLumaMerge, range [0-1]
[2] : min_weight (%), used by: AdaptiveLumaMerge, range [0-1]
[3] : chroma_threshold (%), used by: ConstrainedChromaMerge [0-1]
:param chroma_resize: if True the chroma resizing is enabled: the colorization will be applied to a clip with the same
size used for the model inference, but the final resolution will be the one of the original clip.
:param device_index: device ordinal of the GPU, choices: GPU0...GPU7, CPU=99 (default = 0)
:param n_threads: number of threads used by numpy, range: 1-32 (default = 8)
:param dd_num_streams: number of CUDA streams to enqueue the kernels (default = 1)
:param torch_hub_dir: torch hub dir location, default is model directory,
if set to None will switch to torch cache dir.
"""
Dan
RE: Deoldify Vapoursynth filter - zspeciman - 13.03.2024
@Selur, wishing you a quick recovery. Stay away from dairy, include some rice in your meal, that always works for me.
@Dan64 that gamma tweak shows real potential on those dark scenes, nice discovery.
This project's growth has been wonderful.
As far as flickering goes, I've noticed the DDColor input at 512 has less of it than at 384. It is mostly in the red color. Is there a plugin to further reduce it?
Deoldify has minimal flickering, perhaps because it has less red output.
RE: Deoldify Vapoursynth filter - Selur - 13.03.2024
@Dan64: about the chroma resize, instead of
clip_colored = clip_colored.resize.Lanczos(width=clip_orig.width, height=clip_orig.height)
I would suggest using Spline64 instead of Lanczos, to avoid the introduction of ringing & halo artifacts. (see: https://forum.doom9.org/showthread.php?t=145210)
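i.e. something like:
clip_colored = clip_colored.resize.Spline64(width=clip_orig.width, height=clip_orig.height)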
Cu Selur
RE: Deoldify Vapoursynth filter - Selur - 13.03.2024
[1] : luma_threshold (%), used by: AdaptiveLumaMerge, range [0-1]
[2] : min_weight (%), used by: AdaptiveLumaMerge, range [0-1]
[3] : chroma_threshold (%), used by: ConstrainedChromaMerge [0-1]
[1] : luma_min -> luma (%) min value for tweak activation (default = 0, non activation), range [0-1]
[2] : gamma_luma_min -> luma (%) min value for gamma tweak activation (default = 0, non activation), range [0-1]
Range [0-1] in %, so max 1%... this seems misleading. You might want to rephrase that. (I assume 0.01 = 1%.)
:param dd_render_factor: ddcolor input size equivalent to render_factor, if = 0 will be auto selected
(default = 24) [range: 0, 10-64]
"0, 10-64" <- this is ugly
Cu Selur
Ps.: sent you a link to a dev version which is adjusted to vsdeoldify-2.0.0_2024-03-12.