Deoldify Vapoursynth filter
#91
Hello Selur,

  sorry to bother you again about this filter, but I just released a new version: https://github.com/dan64/vs-deoldify/rel...tag/v1.1.5

  Following the Model Comparison analysis (concluded today) included in the README, which confirmed the positive effect of model combination, I changed the default values of the parameters used by the function ddeoldify(). Now dd_weight is set to 0.5 by default. Please note that in this version I changed the range of values of the DDColor parameter dd_strength to be equivalent to the render_factor in DeOldify (both parameters now have a default value of 24). You should change the GUI to use the new defaults; the parameter dd_strength should be labeled strength, or even better render_factor, and not input size as it is now. In the new version the relationship between render_factor and input size is the following

input_size = render_factor * 16

In effect both parameters are related to the resolution used to perform the inference: bigger values imply that the inference uses a bigger matrix, improving its quality but slowing down the encoding process. A good range for these parameters, one that does not slow down the encoding too much (of course it also depends on the available GPU), is between 23 and 33.
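
For reference, a minimal sketch of how the new defaults could be used (the import path and exact call signature are assumptions; only the parameter names dd_weight and dd_strength and their defaults come from the notes above):

import vapoursynth as vs
from vsdeoldify import ddeoldify   # assumed import path

def colorize(clip: vs.VideoNode) -> vs.VideoNode:
    render_factor = 24                   # new default, shared by DeOldify and DDColor
    # inference resolution implied by the formula above: input_size = 24 * 16 = 384
    return ddeoldify(clip, dd_weight=0.5, dd_strength=render_factor)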

Thanks,
Dan
Reply
#92
Yeah, got a notification, too groggy to look into it atm. (long and stressful work day and tiring way home), but I'll look at it tomorrow after work.

Cu Selur
Reply
#93
(06.03.2024, 17:12)zspeciman Wrote: @Dan64, that was a nice test you ran.  I was colorizing some photos and videos as well to see the difference.  DDcolor photo images are stunning, but in the videos it works very well in some parts (more robust color than standalone DeOldify) and in other parts looks more like 60s psychedelic colors.  The merge concept is a brilliant idea to combine the stability of DeOldify with the color pop of DDcolor.

I also observed this effect; I noted that it happens more frequently on dark scenes. I'm working on a solution to remove it: probably an adaptive merge could solve the problem.

Dan

(06.03.2024, 17:12)zspeciman Wrote: 1. In DDcolor, what is Input size about?  FP16?  Artistic Model vs ModelScope?
2. I wasn't sure what the Streams setting was about either, but when I changed it from 1 to 4 the video had several corrupted images, so I stuck with 1
3. In DeOldify with Simple Merge enabled, in the DDcolor settings on the right, what is that input size about?

Answers:

1. FP16 reduces the size of the frame data, thus speeding up the inference; the Artistic model is better than ModelScope (see my model comparison)
2. Yes, it is better to set streams=1
3. See my previous post: https://forum.selur.net/thread-3595-post...l#pid21545


Dan
Reply
#94
@Dan64: sent you a link to an adjusted Hybrid dev version.
(not updating the torch addon until the next public release)

Cu Selur
Reply
#95
Thanks for the new release, I updated Hybrid's screenshot in the README.md

Dan
Reply
#96
You might note somewhere which filter version works with which Hybrid release, since the current public release does not work with v1.1.5 as expected. Also, the current torch addon does not come with v1.1.5.

Cu Selur
Reply
#97
Hello Selur,

  I found a way to stabilize DDColor with a method that I called AdaptiveMerge; the problem is that it is too slow.
  With this method the encoding speed is reduced by 50%, and even though the method is very simple, I was unable to find a way to speed up the computation.
  I don't know VapourSynth very well and the available documentation is "poor". Maybe there is a way to get a faster encoding...
  The current code is the following:

  
def AdaptiveMerge3(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:
    # VapourSynth version (requires: import vapoursynth as vs)
    def merge_frame(n, f):
        # build single-frame clips for frame n of both sources
        clip1 = clipa[n]
        clip2 = clipb[n]
        # measure the average luma of the DDColor frame
        clip2_yuv = clip2.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited")
        clip2_avg_y = vs.core.std.PlaneStats(clip2_yuv, plane=0)
        luma = clip2_avg_y.get_frame(0).props['PlaneStatsAverage']
        #vs.core.log_message(2, "Luma(" + str(n) + ") = " + str(luma))
        # scale the merge weight by the brightness, with a lower bound of 0.15
        brightness = min(1.5 * luma, 1)
        w = max(clipb_weight * brightness, 0.15)
        clip3 = vs.core.std.Merge(clip1, clip2, weight=w)
        # the blocking get_frame() calls inside the callback are what make this slow
        f_out = clip3.get_frame(0)
        return f_out
    clipm = clipa.std.ModifyFrame(clips=clipa, selector=merge_frame)
    return clipm

  Since DDColor shows a psychedelic effect on dark scenes (try to use it with the attached video), the adaptive merge weights the weight_merge parameter by the brightness of the image.
  In order not to penalize DDColor too much, I multiply this value by 1.2.
  For example, if the brightness is 45% and the weight_merge is 50%, the effective weight used in the merge is 50% * (1.2 * 45%) = 27%.
  This computation must be executed at frame level.
  Do you have any idea how it is possible to speed up the function AdaptiveMerge3?

Thanks,
Dan


Attached Files
.zip   VideoTest_small1.zip (Size: 2,85 MB / Downloads: 14)
Reply
#98
Got a few ideas on how to speed that up, not sure whether they will work.
Will look at it after work.
What formats are the clipa and clipb you feed to AdaptiveMerge3? Some more context on how you use it would be helpful.
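
As a sketch of one possible direction (an assumption, not necessarily the idea referred to above): the blocking get_frame(0) calls could be avoided by attaching PlaneStats props to clipb once and reading them per frame through std.FrameEval, e.g.:

def AdaptiveMergeProps(clipa: vs.VideoNode, clipb: vs.VideoNode, clipb_weight: float = 0.5) -> vs.VideoNode:
    # attach PlaneStatsAverage to every frame of clipb up front, instead of
    # measuring it with a blocking get_frame() call inside the per-frame callback
    stats = vs.core.std.PlaneStats(
        clipb.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited"), plane=0)

    def select(n, f):
        # f is the frame from prop_src; read the precomputed average luma
        luma = f.props['PlaneStatsAverage']
        w = max(clipb_weight * min(1.2 * luma, 1.0), 0.15)
        return vs.core.std.Merge(clipa, clipb, weight=w)

    return vs.core.std.FrameEval(clipa, eval=select, prop_src=stats)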

Cu Selur
Reply
#99
I ended up using Pillow and OpenCV.
This version is only 5% slower (input format of the clips is RGB24).

import numpy as np
import cv2
from PIL import Image

def AdaptiveMerge4(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:
    # Python version with the constants hard-coded
    # frame_to_image() / image_to_frame() convert between VapourSynth RGB24 frames
    # and PIL images (defined elsewhere)
    def merge_frame(n, f):
        img1 = frame_to_image(f[0])
        img2 = frame_to_image(f[1])
        # average brightness of the DDColor frame, in the range [0, 1]
        luma = get_pil_brightness(img2)
        #vs.core.log_message(2, "Luma(" + str(n) + ") = " + str(luma))
        brightness = min(1.2 * luma, 1)
        # adaptive weight: at most the hard-coded 0.5, never below 0.15
        w = max(0.5 * brightness, 0.15)
        img_m = Image.blend(img1, img2, w)
        return image_to_frame(img_m, f[0].copy())
    clipm = clipa.std.ModifyFrame(clips=[clipa, clipb], selector=merge_frame)
    return clipm


def get_pil_brightness(img: Image) -> float:
    # convert to HSV and return the mean of the V (value) channel, scaled to [0, 1]
    img_np = np.asarray(img)
    hsv = cv2.cvtColor(img_np, cv2.COLOR_RGB2HSV)
    brightness = np.mean(hsv[:, :, 2])
    return (brightness / 255)
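
For completeness, a rough sketch of what the frame_to_image() / image_to_frame() helpers used above might look like (assumed here from the usual VapourSynth API 4 plane-array pattern, not necessarily the filter's actual implementation):

def frame_to_image(frame: vs.VideoFrame) -> Image:
    # stack the R, G, B planes of an RGB24 frame into an HxWx3 numpy array
    arr = np.dstack([np.asarray(frame[p]) for p in range(frame.format.num_planes)])
    return Image.fromarray(arr, 'RGB')

def image_to_frame(img: Image, frame: vs.VideoFrame) -> vs.VideoFrame:
    # copy each channel of the PIL image back into the planes of a writable frame copy
    arr = np.array(img)
    for p in range(frame.format.num_planes):
        np.copyto(np.asarray(frame[p]), arr[:, :, p])
    return frame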

But I'm not happy with this solution, because it is only a patch to DDColor.
I'm thinking of developing a Temporal Chroma Smoother for DDColor, but I still have to find a way to implement it.
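
Just to illustrate the idea (an assumption, not a worked-out implementation): a very naive temporal chroma smoother could average only the chroma planes over a few neighbouring frames, leaving luma untouched:

def temporal_chroma_smooth(clip: vs.VideoNode, radius: int = 2) -> vs.VideoNode:
    # work in YUV so the chroma can be filtered separately from the luma
    yuv = clip.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited")
    # average the U and V planes over 2*radius+1 frames (plane 0 is copied unchanged)
    smoothed = vs.core.std.AverageFrames(yuv, weights=[1] * (2 * radius + 1), planes=[1, 2])
    return smoothed.resize.Bicubic(format=vs.RGB24, matrix_in_s="709", range_in_s="limited", range_s="full")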

Thanks,
Dan
Reply
Quote:This version is only 5% slower (input format of the clips is RGB24)
Slower compared to what?
Reply

