Deoldify Vapoursynth filter - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Deoldify Vapoursynth filter (/thread-3595.html)
RE: Deoldify Vapoursynth filter - Dan64 - 06.03.2024

Hello Selur,

sorry to bother you again about this filter, but I just released a new version: https://github.com/dan64/vs-deoldify/releases/tag/v1.1.5

Following the Model Comparison analysis (concluded today and included in the README), which confirmed the positive effect of combining the models, I changed the default values of the parameters used by the function ddeoldify(). Now dd_weight is set to 0.5 by default.

Please note that in this version of DDColor I changed the range of values of dd_strength to be equivalent to render_factor in Deoldify (both parameters now have a default value of 24). You should update the GUI to use the new defaults; the parameter dd_strength should be named "strength", or better yet "render_factor", and not "input size" as it is now. In the new version the relationship between render_factor and input size is:

input_size = render_factor * 16

In effect, both parameters are related to the resolution used to perform the inference: bigger values imply that the inference uses a bigger matrix, improving the quality of the inference but slowing down the encoding process. A good range for these parameters, which does not slow down the encoding too much (it also depends on the available GPU, of course), is between 23 and 33.

Thanks,
Dan

RE: Deoldify Vapoursynth filter - Selur - 06.03.2024

Yeah, got a notification, but I'm too groggy to look into it atm (long and stressful work day and a tiring way home). I'll look at it tomorrow after work.

Cu Selur

RE: Deoldify Vapoursynth filter - Dan64 - 06.03.2024

(06.03.2024, 17:12)zspeciman Wrote: @Dan64, that was a nice test you ran. I was colorizing some photos and videos as well to see the difference. DDColor photo images are stunning, but in the videos there are parts where it works very well (more robust color than standalone DeOldify) and other parts that look more like 60s psychedelic colors.
The merge concept is a brilliant idea to combine the stability of DeOldify with the color pop of DDColor.

I also observed this effect; I noted that it happens more frequently on dark scenes. I'm working on a solution to remove this effect, probably an adaptive merge could solve the problem.

Dan

(06.03.2024, 17:12)zspeciman Wrote: 1. In DDColor, what is Input size about? FP16? Artistic Model vs ModelScope?

Answers:
1. FP16 reduces the size of the frame, thus speeding up the inference; Artistic is better than ModelScope (see my model comparison)
2. yes, it is better to set streams=1
3. See my previous post: https://forum.selur.net/thread-3595-post-21545.html#pid21545

Dan

RE: Deoldify Vapoursynth filter - Selur - 07.03.2024

@Dan64: sent you a link to an adjusted Hybrid dev version. (not updating torch until the next public release)

Cu Selur

RE: Deoldify Vapoursynth filter - Dan64 - 07.03.2024

Thanks for the new release, I updated the Hybrid screenshot in README.md

Dan

RE: Deoldify Vapoursynth filter - Selur - 07.03.2024

You might note somewhere which version works with which Hybrid release, since the current public release does not work with v1.1.5 as expected. Also, the current torch addon does not come with v1.1.5.

Cu Selur

RE: Deoldify Vapoursynth filter - Dan64 - 07.03.2024

Hello Selur,

I found a way to stabilize DDColor, with a method that I called AdaptiveMerge; the problem is that it is too slow. Using this method the encoding speed is reduced by 50%, and the method itself is very simple; I was unable to find a way to speed up the computation. I don't know VapourSynth very well and the available documentation is "poor". Maybe there is a way to get a faster encoding...
The current code is the following:

def AdaptiveMerge3(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:

Since DDColor shows a psychedelic effect on dark scenes (try to use it with the attached video), the adaptive merge "weights" the weight_merge parameter by the brightness of the image. In order not to penalize DDColor too much, I multiply this value by 1.2. For example, if the brightness is 45% and the weight_merge is 50%, the effective weight used in the merge is: 50% * (1.2 * 45%) = 27%. This computation must be executed at frame level.

Do you have any idea how it is possible to speed up the function AdaptiveMerge3?

Thanks,
Dan

RE: Deoldify Vapoursynth filter - Selur - 08.03.2024

Got a few ideas how to speed that up, not sure whether they will work. Will look at it after work. What formats are the clipa and clipb you feed to AdaptiveMerge3? Some more context on how you use it would be helpful.

Cu Selur

RE: Deoldify Vapoursynth filter - Dan64 - 08.03.2024

I ended up using Pillow and OpenCV. This version is only 5% slower (the input format of the clips is RGB24):

def AdaptiveMerge4(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:

But I'm not happy with this solution, because it is only a patch to DDColor. I'm thinking of developing a Temporal Chroma Smoother for DDColor, but I still have to find a way to implement it.

Thanks,
Dan

RE: Deoldify Vapoursynth filter - Selur - 08.03.2024

Quote: This version is only 5% slower (the input format of the clips is RGB24)

Slower than compared to what?
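[Editorial note] The render_factor/input_size relationship Dan64 describes earlier in the thread (input_size = render_factor * 16, shared default of 24) can be made concrete with a small sketch; the helper name is mine and is not part of vs-deoldify:

```python
def render_factor_to_input_size(render_factor: int) -> int:
    """Map a Deoldify-style render_factor to the DDColor inference
    resolution, per the thread's rule: input_size = render_factor * 16."""
    return render_factor * 16

# The new shared default of 24 corresponds to a 384-pixel inference matrix:
print(render_factor_to_input_size(24))  # -> 384

# The suggested "fast enough" range of 23-33 maps to 368..528 pixels:
print([render_factor_to_input_size(rf) for rf in (23, 33)])  # -> [368, 528]
```

Larger values mean the model infers on a bigger matrix (better quality, slower encode), which is why the thread recommends staying in the 23-33 range unless the GPU has headroom.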
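[Editorial note] The brightness-adaptive weighting Dan64 describes for AdaptiveMerge3 (scale the DDColor merge weight by 1.2 times the frame brightness) can be sketched in plain Python. The function name and the min(..., 1.0) clamp are assumptions of mine; the thread only gives the 1.2 multiplier and the 45%/50% worked example:

```python
def adaptive_weight(clipb_weight: float, brightness: float) -> float:
    """Brightness-adaptive merge weight for the DDColor clip.

    clipb_weight: the nominal DDColor weight (e.g. dd_weight = 0.5)
    brightness:   average frame brightness, normalized to 0.0-1.0

    The thread's rule: effective = clipb_weight * (1.2 * brightness).
    The min(..., 1.0) clamp is an assumption so that very bright frames
    never push the effective weight above the nominal one.
    """
    return clipb_weight * min(1.2 * brightness, 1.0)

# Dan's example: brightness 45%, merge weight 50% -> effective weight 27%
print(round(adaptive_weight(0.5, 0.45), 4))  # -> 0.27
```

In a VapourSynth script this value would be computed per frame (e.g. from a plane-statistics average of the luma) and fed as the weight of a per-frame std.Merge of the Deoldify and DDColor clips, which is why the computation runs at frame level and costs encoding speed.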