11.03.2024, 19:22
(11.03.2024, 18:56)Selur Wrote: My thoughts,...The problem is that the error is triggered by the first "import from numpy..." statement, which happens early (inside "import from .deoldify"), so I have to assign a default value before any other code runs and then reassign it later with the second "parametric" assignment.
He, he, I would write the code differently (more function definitions).
The
os.environ["NUMEXPR_MAX_THREADS"] = "8"
at the beginning should not be needed, since you later use:
os.environ['NUMEXPR_MAX_THREADS'] = str(n_threads)
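The reason the early assignment is needed is that numexpr reads NUMEXPR_MAX_THREADS once, at import time, which happens during the first numpy/deoldify import. A minimal sketch of the pattern (the helper name `set_numexpr_threads` is illustrative, not part of the actual module):

```python
import os

# numexpr reads NUMEXPR_MAX_THREADS once, when it is first imported,
# so the variable must be set BEFORE the first numpy/deoldify import.
os.environ["NUMEXPR_MAX_THREADS"] = "8"  # early default

import numpy  # numexpr (if installed) now sees the cap

def set_numexpr_threads(n_threads: int) -> None:
    # Later "parametric" reassignment; this only affects numexpr
    # state created after this point, hence the early default above.
    os.environ["NUMEXPR_MAX_THREADS"] = str(n_threads)
```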
(11.03.2024, 18:56)Selur Wrote: btw. have you tested whether it makes a difference whether ddcolor and deoldify are fed with b&w or normal colored content?
Both models ignore the chroma component; feeding a colored or a b&w image produces identical results.
(11.03.2024, 18:56)Selur Wrote: Would also be good to add the min/max values for all the list-parameters. Also you should note that, for example for sat, the first param in the list is for deoldify and the second is for ddcolor.
Cu Selur
I will try to fill in the incomplete specifications.
I updated the description of ddeoldify()
def ddeoldify(
clip: vs.VideoNode, model: int = 0, render_factor: int = 24, sat: list = [1.0,1.0], hue: list = [0.0,0.0],
dd_model: int = 1, dd_render_factor: int = 24, dd_tweak_luma_bind: list = [False, 0.0, 0.0], dd_bright: float = 0, dd_cont: float = 1, dd_gamma: float = 1.0,
dd_method: int = 2, dd_method_params: list = [0.5, 0.6, 0.15, 0.2], device_index: int = 0, n_threads: int = 8, dd_num_streams: int = 1,
torch_hub_dir: str = model_dir
) -> vs.VideoNode:
"""A Deep Learning based project for colorizing and restoring old images and video
:param clip: clip to process, only RGB24 format is supported.
:param model: deoldify model to use (default = 0):
0 = ColorizeVideo_gen
1 = ColorizeStable_gen
2 = ColorizeArtistic_gen
:param render_factor: render factor for the model, range: 10-44 (default = 24).
:param sat: list with the saturation parameters to apply to color models (default = [1,1])
[0] : saturation for deoldify
[1] : saturation for ddcolor
:param hue: list with the hue parameters to apply to color models (default = [0,0])
[0] : hue for deoldify
[1] : hue for ddcolor
:param dd_model: ddcolor model (default = 1):
0 = ddcolor_modelscope,
1 = ddcolor_artistic
:param dd_render_factor: ddcolor input size equivalent to render_factor, if = 0 will be auto selected
(default = 24) [range: 0, 10-64]
:param dd_tweak_luma_bind: parameters for luma constrained ddcolor preprocess
[0] : luma_constrained_tweak -> luma constrained ddcolor preprocess enabled (default = False), range: [True, False]
[1] : luma_min -> luma (%) min value for tweak activation (default = 0, non activation), range [0-1]
[2] : gamma_luma_min -> luma (%) min value for gamma tweak activation (default = 0, non activation), range [0-1]
:param dd_bright: ddcolor tweak's brightness (default = 0)
:param dd_cont: ddcolor tweak's contrast (default = 1)
:param dd_gamma: ddcolor tweak's gamma (default = 1)
:param dd_method: method used to combine deoldify with ddcolor (default = 2):
0 : deoldify only (no merge)
1 : ddcolor only (no merge)
2 : Simple Merge
3 : Adaptive Luma Merge
4 : Constrained Chroma Merge
:param dd_method_params: list with the parameters to apply to selected dd_method:
[0] : clipb_weight (%), used by: SimpleMerge, AdaptiveLumaMerge, ConstrainedChromaMerge, range [0-1]
[1] : luma_threshold (%), used by: AdaptiveLumaMerge, range [0-1]
[2] : min_weight (%), used by: AdaptiveLumaMerge, range [0-1]
[3] : chroma_threshold (%), used by: ConstrainedChromaMerge, range [0-1]
:param device_index: device ordinal of the GPU, choices: GPU0...GPU7, CPU=99 (default = 0)
:param n_threads: number of threads used by numpy, range: 1-32 (default = 8)
:param dd_num_streams: number of CUDA streams to enqueue the kernels (default = 1)
:param torch_hub_dir: torch hub dir location, default is model directory,
if set to None will switch to torch cache dir.
"""
I hope that this fills the gaps in the documentation.
Dan