Selur's Little Message Board

Full Version: Deoldify Vapoursynth filter
Compared to SimpleMerge  Smile
Did a few tests, can't beat the cv2 version in speed.
AdaptiveMerge3 can be sped up a bit by using something like
Code:
def AdaptiveMerge3(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0, luma="limited") -> vs.VideoNode:

    yuv = clipb.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s=luma)
    yuv = core.std.PlaneStats(yuv, plane=0)
    clipb = core.std.CopyFrameProps(clipb, yuv)
    # VapourSynth version
    def merge_frame(n, f):
        clip1 = clipa[n]
        clip2 = clipb[n]
        # average luma of this frame, copied onto clipb by CopyFrameProps;
        # reading it from f[1] avoids a blocking get_frame() call per frame
        avg_luma = f[1].props['PlaneStatsAverage']
        #vs.core.log_message(2, "Luma(" + str(n) + ") = " + str(avg_luma))
        brightness = min(1.5 * avg_luma, 1)
        w = max(clipb_weight * brightness, 0.15)
        clip3 = vs.core.std.Merge(clip1, clip2, weight=w)
        return clip3.get_frame(0)
    clipm = clipa.std.ModifyFrame(clips=[clipa, clipb], selector=merge_frame)
    return clipm
        
clip = AdaptiveMerge3(clip, org, 0.5)
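For reference, the luma-to-weight mapping used above can be checked standalone (plain Python, no VapourSynth needed; constants taken from the code above):

```python
def adaptive_weight(luma, clipb_weight=0.5, scale_factor=1.5, min_weight=0.15):
    """Map a frame's average luma (0..1) to a merge weight, as in AdaptiveMerge3."""
    brightness = min(scale_factor * luma, 1.0)
    return max(clipb_weight * brightness, min_weight)

# Dark frames get clamped to the minimum weight, bright frames saturate:
print(adaptive_weight(0.05))  # 0.15
print(adaptive_weight(0.8))   # 0.5
```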

Side note: AdaptiveMerge4 uses different values.
It uses:
hsv[:, :, 2] = 'V' from HSV = max(R, G, B)
vs.
PlaneStats(yuv, plane=0) = luma Y from YUV = 0.299*R + 0.587*G + 0.114*B.
Both are described as brightness, but they represent different things. Smile
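For example (plain Python, using the BT.601 coefficients quoted above; note the PlaneStats snippet earlier converts with a BT.709 matrix, whose coefficients differ slightly):

```python
# The two "brightness" measures disagree for saturated colors, e.g. pure red:
r, g, b = 255, 0, 0

v = max(r, g, b)                           # V from HSV
y = 0.299 * r + 0.587 * g + 0.114 * b      # Y, BT.601 luma

print(v)  # 255
print(y)  # ~76: "bright" in HSV terms, dark in luma terms
```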


Cu Selur
btw. I missed that dd_model only supports using ddcolor and deoldify:
"deoldify: only dd_method=0 is supported"
Why remove the option to only use deoldify?
Sent you a link to a dev version which restricts dd_method to 0.

Cu Selur
I added dd_method to leave room for more merge algorithms; SimpleMerge was simple and was the first. In the next release I will add AdaptiveMerge (dd_method = 1), and I hope to be able to add a TemporalChromaMerge (dd_method = 2). It was not my intention to remove the DeOldify-only version, but given the good results obtained by merging the models, I proposed making this setup the default for DeOldify.

Thanks,
Dan
Quote:It was not my intention to remove the DeOldify only version.
then how about:
0: no merge
1: Simple Merge
2: ...
or
-1: no merge
0: Simple Merge
1: ...
and keeping 1 as default?

The problem is not that 0 is SimpleMerge, but that this is the only option. Smile

Cu Selur
(08.03.2024, 19:03)Selur Wrote:
Quote:It was not my intention to remove the DeOldify only version.
then how about:
0: no merge
1: Simple Merge
2: ...

I prefer this solution. But please wait, before implementing this list, until I have time to release a new version; consider that AdaptiveMerge will require two additional parameters: scale_factor (in the code it was 1.2) and min_weight (in the code it was 0.15).
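If it helps the discussion, the proposed numbering could be sketched like this (a hypothetical sketch only; the function name and the exact mapping are assumptions, not the actual API):

```python
# Hypothetical sketch of the dd_method numbering under discussion;
# names and the final list are not decided yet.
def merge_method_name(dd_method: int) -> str:
    methods = {
        0: "no merge (DeOldify only)",
        1: "SimpleMerge",
        2: "AdaptiveMerge",  # would also take scale_factor and min_weight
    }
    if dd_method not in methods:
        raise ValueError(f"unsupported dd_method={dd_method}")
    return methods[dd_method]

print(merge_method_name(0))  # no merge (DeOldify only)
```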
Quote:It was not my intention to remove the DeOldify only version.
+
Quote:I prefer this solution.
=> How to use only DeOldify? (The only thing I can think of would be to set dd_weight=0, but that seems rather ugly.)

Cu Selur
(08.03.2024, 20:21)Selur Wrote:
Quote:It was not my intention to remove the DeOldify only version.
+
Quote:I prefer this solution.
=> How to use only DeOldify? (The only thing I can think of would be to set dd_weight=0, but that seems rather ugly.)

I do agree with you, it is better to use dd_method.
Before releasing a new version on GitHub I will send you the draft version, so that we can finalize all the necessary parameters.

Thanks,
Dan
I developed another type of merge. The idea behind it is to use the stable values provided by DeOldify to force DDColor not to generate an image with colors too different from DeOldify's.
In the example below I set the threshold to 10%.

Code:
def AdaptiveMerge3(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:
    def merge_frame(n, f):   
        img1 = frame_to_image(f[0])
        img2 = frame_to_image(f[1])       
        img_m = chroma_smoother(img1, img2)
        return image_to_frame(img_m, f[0].copy())               
    clipm = clipa.std.ModifyFrame(clips=[clipa, clipb], selector=merge_frame)
    return clipm

def chroma_smoother(img_prv: Image, img: Image, strength: int = 0) -> Image:

    r2, g2, b2 = img.split()

    # build a +/-10% band around the (stable) DeOldify colors
    img1_up = Image.eval(img_prv, (lambda x: min(x * (1 + 0.10), 255)))
    img1_dn = Image.eval(img_prv, (lambda x: max(x * (1 - 0.10), 0)))

    r1_up, g1_up, b1_up = img1_up.split()
    r1_dn, g1_dn, b1_dn = img1_dn.split()

    # clamp each DDColor channel into its own band
    r_m = ImageMath.eval("convert(max(min(a, c), b), 'L')", a=r1_up, b=r1_dn, c=r2)
    g_m = ImageMath.eval("convert(max(min(a, c), b), 'L')", a=g1_up, b=g1_dn, c=g2)
    b_m = ImageMath.eval("convert(max(min(a, c), b), 'L')", a=b1_up, b=b1_dn, c=b2)

    img_m = Image.merge('RGB', (r_m, g_m, b_m))

    img_final = chroma_post_process(img_m, img)

    return img_final

def chroma_post_process(img_m: Image, orig: Image) -> Image:
    img_np = np.asarray(img_m)
    orig_np = np.asarray(orig)
    img_yuv = cv2.cvtColor(img_np, cv2.COLOR_RGB2YUV)
    # perform a B&W transform first to get better luminance values
    orig_yuv = cv2.cvtColor(orig_np, cv2.COLOR_RGB2YUV)
    hires = np.copy(orig_yuv)
    hires[:, :, 1:3] = img_yuv[:, :, 1:3]
    final = cv2.cvtColor(hires, cv2.COLOR_YUV2RGB)
    final = Image.fromarray(final)
    return final
 
Unfortunately Pillow is unable to work with YUV colors, so I must work with RGB colors and then transfer the changes to YUV.
Do you have any idea how to speed up the code? Also, the clamping to 10% is not giving the expected results, and I don't understand why.

Now I'm too tired to think of anything useful.
Any idea is welcome.

Thanks,
Dan
Hmm, about speeding up chroma_smoother:
From the looks of it, it might be faster to first convert to a numpy array, work on the array, and then convert back to an image.
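For what it's worth, a numpy sketch of the same clamp (assuming 8-bit RGB arrays; one np.clip replaces the three per-channel ImageMath.eval calls):

```python
import numpy as np

def chroma_smoother_np(img_prv: np.ndarray, img: np.ndarray) -> np.ndarray:
    """Clamp img's RGB values into a +/-10% band around img_prv, vectorized.

    Both inputs are HxWx3 uint8 arrays; returns uint8.
    """
    prv = img_prv.astype(np.float32)
    lo = np.clip(prv * 0.90, 0, 255)   # lower bound of the band
    hi = np.clip(prv * 1.10, 0, 255)   # upper bound of the band
    out = np.clip(img.astype(np.float32), lo, hi)
    return np.rint(out).astype(np.uint8)
```

Conversion to and from Pillow would then be `np.asarray(img)` on the way in and `Image.fromarray(result)` on the way out.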