08.03.2024, 08:07
I ended up using Pillow and OpenCV.
This version is only 5% slower (the input format of the clips is RGB24):
import vapoursynth as vs
import numpy as np
import cv2
from PIL import Image

# frame_to_image / image_to_frame are the usual VapourSynth <-> Pillow
# frame-conversion helpers, defined elsewhere in the script.
def AdaptiveMerge4(clipa: vs.VideoNode = None, clipb: vs.VideoNode = None, clipb_weight: float = 0.0) -> vs.VideoNode:
    # Python version with constants hard-coded
    def merge_frame(n, f):
        img1 = frame_to_image(f[0])
        img2 = frame_to_image(f[1])
        luma = get_pil_brightness(img2)
        # vs.core.log_message(2, "Luma(" + str(n) + ") = " + str(luma))
        brightness = min(1.2 * luma, 1)
        w = max(0.5 * brightness, 0.15)  # blend weight, clamped to [0.15, 0.5]
        img_m = Image.blend(img1, img2, w)
        return image_to_frame(img_m, f[0].copy())
    clipm = clipa.std.ModifyFrame(clips=[clipa, clipb], selector=merge_frame)
    return clipm
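The hard-coded constants define the adaptive weight curve: frame luma in [0, 1] is mapped to a blend weight clamped to [0.15, 0.5]. A minimal sketch of just that mapping (the function name is mine, for illustration):

```python
def adaptive_weight(luma: float) -> float:
    # Boost luma by 1.2 (capped at 1), then take half of it,
    # but never go below a 0.15 floor.
    brightness = min(1.2 * luma, 1.0)
    return max(0.5 * brightness, 0.15)

print(adaptive_weight(0.0))  # → 0.15 (dark frame: weight floor)
print(adaptive_weight(1.0))  # → 0.5  (bright frame: equal blend)
```

So dark frames stay close to clipa, while bright frames get at most an equal blend of the two clips.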
def get_pil_brightness(img: Image.Image) -> float:
    # Mean of the HSV V (value) channel, normalized to [0, 1]
    img_np = np.asarray(img)
    hsv = cv2.cvtColor(img_np, cv2.COLOR_RGB2HSV)
    brightness = np.mean(hsv[:, :, 2])
    return brightness / 255
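As a side note, OpenCV's RGB→HSV conversion defines the V channel as max(R, G, B), so the same brightness measure can be reproduced with NumPy alone (function name is mine, shown only as a sanity check):

```python
import numpy as np

def brightness_numpy(img_np: np.ndarray) -> float:
    # V = max(R, G, B) per pixel; mean over the frame, normalized to [0, 1]
    return float(img_np.max(axis=2).mean() / 255)

frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 0] = 255  # pure red: V = max(255, 0, 0) = 255
print(brightness_numpy(frame))  # → 1.0
```

This avoids one cv2 call per frame if only the V channel is needed.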
But I'm not happy with this solution, because it is only a patch to DDColor.
I'm thinking of developing a Temporal Chroma Smoother for DDColor, but I still have to figure out how to implement it.
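For what it's worth, one possible starting point, purely a sketch under my own assumptions and not anything DDColor provides: smooth the chroma planes with an exponential moving average across frames, so single-frame color jumps get damped while slow color changes pass through.

```python
import numpy as np

class TemporalChromaSmoother:
    """Illustrative sketch only: exponentially smooth the chroma (U/V)
    planes across frames. Class and method names are hypothetical."""

    def __init__(self, alpha: float = 0.5):
        self.alpha = alpha   # weight of the current frame's chroma
        self._state = None   # running average of the (U, V) planes

    def smooth(self, u: np.ndarray, v: np.ndarray):
        uv = np.stack([u, v]).astype(np.float32)
        if self._state is None:
            self._state = uv  # first frame: nothing to smooth against
        else:
            self._state = self.alpha * uv + (1.0 - self.alpha) * self._state
        return self._state[0], self._state[1]
```

A scene-change check would be needed in practice, otherwise the smoother would bleed colors across cuts.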
Thanks,
Dan