06.03.2024, 20:02
Hello Selur,
sorry to bother you again about this filter, but I just released a new version: https://github.com/dan64/vs-deoldify/rel...tag/v1.1.5
Following the Model Comparison analysis (concluded today and included in the README), which confirmed the positive effect of combining the models, I changed the default values of the parameters used by the function ddeoldify(). The parameter dd_weight now defaults to 0.5. Please also note that I changed the range of values of dd_strength (the DDColor side) so that it is equivalent to render_factor in Deoldify (both parameters now default to 24). You should update the GUI to use the new defaults; the parameter dd_strength should be named strength, or even better render_factor, and no longer input size as it is now. In the new version the relationship between render_factor and input size is:
input_size = render_factor * 16
In effect, both parameters control the resolution used to perform the inference: bigger values mean the inference works on a bigger matrix, which improves its quality but slows down the encoding. A good range for these parameters, which does not slow down the encoding too much (it also depends on the available GPU, of course), is 23-33.
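For reference, here is a minimal VapourSynth sketch of how the new defaults could be used. The module path vsdeoldify, the source filter and the input file are only assumptions for illustration; please check the README for the actual signature:

import vapoursynth as vs
from vsdeoldify import ddeoldify  # assumed import path

core = vs.core

# placeholder B&W source, assuming the L-SMASH source plugin is installed
clip = core.lsmas.LWLibavSource(source="bw_movie.mkv")

# the colorizers work on RGB frames
clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")

# new defaults: dd_weight=0.5 blends the two models equally,
# render_factor=24 -> inference input_size = 24 * 16 = 384
clip = ddeoldify(clip, render_factor=24, dd_weight=0.5, dd_strength=24)

clip.set_output()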
Thanks,
Dan