
Using Stable Diffusion models for Colorization
#32
Hi Dan,

I will try your optimization tips and report back. In ComfyUI I get about 5 seconds per frame, so it should be possible.

It would be very good if qwen_edit were integrated into Hybrid.

If it is not too impudent to ask: is this script correct for subsequent colorization in Hybrid with reference frames, and can it be improved in any way? I have already colorized a full movie with it, and it turned out pretty well:

import vapoursynth as vs
from vapoursynth import core
import sys
import os

# ------------------------------------------------------------
# PATH TO HYBRID VSSCRIPTS (IMPORTANT FIX)
# ------------------------------------------------------------

scriptPath = r"D:/Programs/Hybrid/64bit/vsscripts"
sys.path.insert(0, os.path.abspath(scriptPath))

# ------------------------------------------------------------
# IMPORT HAVC (actually vsdeoldify wrapper)
# ------------------------------------------------------------

import vsdeoldify as havc

# ------------------------------------------------------------
# PATHS
# ------------------------------------------------------------

VideoPath = r"E:\Hybrid\video.mkv"
RefDir    = r"E:\DiTServerRPC\output"

# ------------------------------------------------------------
# LOAD VIDEO
# ------------------------------------------------------------

clip = havc.HAVC_read_video(source=VideoPath)

# ------------------------------------------------------------
# COLOR PROPAGATION (HAVC)
# ------------------------------------------------------------

clip = havc.HAVC_cmnet2(
    clip,
    method=4,
    sc_framedir=RefDir,
    encode_mode=0,
    render_speed="auto",
    max_memory_frames=50,
    ref_mode=0,
    render_vivid=False
)

# ------------------------------------------------------------
# RGB -> YUV420P10 (for x265)
# ------------------------------------------------------------

clip = core.resize.Bicubic(
    clip,
    format=vs.YUV420P10,
    matrix_in_s="709",
    matrix_s="709",
    range_in_s="full",
    range_s="limited",
    dither_type="error_diffusion"
)

# ------------------------------------------------------------
# OUTPUT TO VAPOURSYNTH PIPE
# ------------------------------------------------------------

clip.set_output()
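
For reference, a script like the one above can be fed to x265 through vspipe. This is only a sketch, assuming the script is saved as `colorize.vpy` and that the default encoder settings below suit your material; Hybrid normally generates an equivalent call itself, so this is mainly useful when running the script outside Hybrid:

```shell
# Pipe the VapourSynth output (YUV420P10, limited range, BT.709) into x265.
# Script name, CRF, and preset are assumptions -- adjust to your setup.
vspipe -c y4m colorize.vpy - | x265 --y4m \
  --profile main10 --preset slow --crf 16 \
  --colormatrix bt709 --range limited \
  -o video_colorized.hevc
```

The `--colormatrix bt709 --range limited` flags just tag the stream with the same matrix and range the script converts to, so players interpret the colors correctly.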


Messages In This Thread
RE: Using Stable Diffusion models for Colorization - by didris - 11.05.2026, 22:35
