Selur's Little Message Board
Deoldify Vapoursynth filter - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Deoldify Vapoursynth filter (/thread-3595.html)



RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

But in R74, are the constants vs.RANGE_FULL (=0) and vs.RANGE_LIMITED (=1) still defined?


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

myrsloik mentioned that one should use the constants, ...
Looking at the code:
typedef enum VSRange {
    VSC_RANGE_FULL = 1,
    VSC_RANGE_LIMITED = 0
} VSRange;

#else

typedef enum VSColorRange {
    VSC_RANGE_FULL = 0,
    VSC_RANGE_LIMITED = 1
} VSColorRange;
source: https://github.com/vapoursynth/vapoursynth/blob/140ed20676a2863cd8542030e630b13454035233/include/VSConstants4.h#L26-L36
=> Using the constants is fine; VapourSynth switches the values depending on where the value is set. Wink
And yes, the constants are still there https://github.com/vapoursynth/vapoursynth/blob/140ed20676a2863cd8542030e630b13454035233/src/py/__init__.py#L14
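The pitfall behind the swap is easy to demonstrate with a stand-alone Python sketch. OldRange/NewRange below are hypothetical stand-ins for the two value assignments quoted from the header, not the real VapourSynth classes: code written against the *names* keeps working across a value swap, while hard-coded integers silently change meaning.

```python
from enum import IntEnum

# Hypothetical stand-ins for the two enum value assignments quoted above;
# NOT the actual VapourSynth enums.
class OldRange(IntEnum):   # one assignment: FULL = 0, LIMITED = 1
    FULL = 0
    LIMITED = 1

class NewRange(IntEnum):   # swapped assignment: FULL = 1, LIMITED = 0
    FULL = 1
    LIMITED = 0

def is_limited_by_name(rng):
    # Robust: compares against the named constant of the same enum type.
    return rng == type(rng).LIMITED

def is_limited_by_magic_number(rng):
    # Fragile: hard-codes the value 1 from one particular assignment.
    return int(rng) == 1

# The name-based check is correct under both assignments ...
assert is_limited_by_name(OldRange.LIMITED)
assert is_limited_by_name(NewRange.LIMITED)
# ... while the magic number flips meaning after the swap.
assert is_limited_by_magic_number(OldRange.LIMITED)
assert not is_limited_by_magic_number(NewRange.LIMITED)  # now wrong!
```

That is exactly why comparing against vs.RANGE_LIMITED instead of a literal 0/1 stays safe across releases.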

Cu Selur


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

But will this code work in R74?

    if vs.core.core_version.release_major < 74:
        # before R74: read the deprecated '_ColorRange' frame property
        clip_color_range = vs.ColorRange(props.get('_ColorRange', vs.RANGE_LIMITED.value))
    else:
        # R74+: read the new '_Range' frame property
        clip_color_range = vs.Range(props.get('_Range', vs.RANGE_LIMITED.value))

This code works in R72, but since I don't have R74 installed, I cannot test it there.
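An alternative that avoids the explicit version branch would be to probe both property names, the new one first. This is only a sketch with a plain dict standing in for the frame props; real code would pass frame.props and wrap the result in the appropriate vs enum, and if the two keys ever encode values differently, a translation step would be needed as well (see the enum discussion above).

```python
def get_color_range(props, default=1):
    """Return the clip's color-range value from frame props.

    Tries the new '_Range' key (R74+) first, then falls back to the
    deprecated '_ColorRange', then to a caller-supplied default
    ('default' here is just an illustrative integer, e.g. limited range).
    """
    for key in ('_Range', '_ColorRange'):
        if key in props:
            return props[key]
    return default

# Works regardless of which property the frame carries:
assert get_color_range({'_Range': 0}) == 0        # R74-style props
assert get_color_range({'_ColorRange': 1}) == 1   # pre-R74 props
assert get_color_range({}) == 1                   # neither set -> default
```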

Dan


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

I see no reason for it not to work. Smile
(haven't tested Big Grin )
I plan to set up a test R74 with Python 3.12 later today. Smile


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

The code works fine with R74 (tested).


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

Released new version: v5.6.7

Main changes:

    Improved connection error handling in ColorMNet
    Improved extraction of reference images
    Added new API HAVC_SceneDetectEdges() with improved scene detection algorithm
    Extended API HAVC_extract_reference_frames() with 3 new algorithms:
       1: Advanced detection on the edges (best for smooth transitions),
       2: Scene detection using SCXvid plugin,
       3: Scene detection using MVTools
    Added handling of the new "_Range" property (R74), since the "_ColorRange" property was deprecated.

Dan


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

are any adjustments to Hybrid needed?


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

No, all the changes relate to APIs that are not directly exposed in Hybrid.
I improved the scene detection algorithm, which was necessary to allow the use of DiT models as an additional coloring model.

The next big change will be direct support for DiT models in Hybrid, but for that step I need a DiT model with low hardware requirements to be released.
Unfortunately, most of the researchers working on Qwen left Alibaba, and I don't know whether they will start producing new lightweight models at other companies.

Fortunately, the high VRAM cost is giving many researchers an incentive to develop models with lower memory usage.
We'll see...

Dan


RE: Deoldify Vapoursynth filter - NASS - 10.04.2026

Hello Dan & Selur ,

I am working on a custom video colorization pipeline heavily inspired by ColorMNet, but I completely overhauled the core architecture to make it state-of-the-art:

1. Backbone Upgrade: Replaced DINOv2 with DINOv3 for denser and richer semantic feature extraction.
2. Memory Upgrade: Upgraded the tracking engine to the XMem++ architecture (incorporating Permanent Memory).
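For context, the core read operation of an XMem-style memory (as used in ColorMNet-like pipelines) is an attention lookup: the current frame's query features are matched against stored keys, and the match weights retrieve the stored color/value features. A minimal pure-Python sketch with toy dimensions and plain dot-product similarity; the real models use learned key/value projections, top-k matching, and a permanent-memory partition on top of this:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of similarity scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def memory_read(query, mem_keys, mem_values):
    """Attention-style memory readout.

    query      : feature vector for one query pixel/token
    mem_keys   : list of stored key vectors (one per memory entry)
    mem_values : list of stored value vectors (e.g. color features)
    Returns the similarity-weighted average of the stored values.
    """
    sims = [sum(q * k for q, k in zip(query, key)) for key in mem_keys]
    weights = softmax(sims)
    dim = len(mem_values[0])
    return [sum(w * v[d] for w, v in zip(weights, mem_values))
            for d in range(dim)]

# A query matching the first key retrieves (mostly) the first value:
keys = [[5.0, 0.0], [0.0, 5.0]]
values = [[1.0, 0.0], [0.0, 1.0]]  # e.g. "red" vs "blue" color features
out = memory_read([5.0, 0.0], keys, values)
assert out[0] > 0.99  # dominated by the first memory entry
```

This readout explains the strong temporal behaviour: as long as the backbone features match the stored keys, the same stored color comes back, occlusions or not.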

The Progress:
I successfully trained the model from scratch for 145,000 iterations (on DAVIS, REDS, and 16 mm film footage).
The temporal stability and object tracking are mind-blowing. If I provide a reference frame with a red car, the car stays perfectly red throughout the whole video, even through severe occlusions.

The Problem:
While the tracking is excellent, I am experiencing a spatial issue: color bleeding/spilling (specifically, color spilling over the ground/road and the sky).
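One classical mitigation for this kind of chroma spill (not something ColorMNet itself does, just a well-known post-processing idea) is edge-aware filtering of the predicted chroma, guided by the grayscale luma, so that color cannot average across strong luma edges such as a road/sky boundary. A 1-D pure-Python sketch of a joint (cross) bilateral filter:

```python
import math

def joint_bilateral_1d(luma, chroma, sigma_s=2.0, sigma_r=10.0):
    """Smooth 'chroma' with weights taken from spatial distance AND
    from the luma difference, so averaging stops at luma edges."""
    out = []
    n = len(luma)
    for i in range(n):
        wsum = vsum = 0.0
        for j in range(n):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((luma[i] - luma[j]) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * chroma[j]
        out.append(vsum / wsum)
    return out

# Toy example: a hard luma edge (road vs sky) with chroma that has
# bled across it as a soft ramp.
luma   = [0.0] * 8 + [100.0] * 8                              # sharp edge at i = 8
chroma = [10.0] * 5 + [20.0, 40.0, 60.0, 80.0] + [90.0] * 7  # bled ramp

out = joint_bilateral_1d(luma, chroma)
# The filter pulls bled values back toward their own side of the edge:
assert out[6] < chroma[6]   # left-of-edge pixel loses spilled color
assert out[8] > chroma[8]   # right-of-edge pixel recovers its color
```

In 2-D the same idea applies per chroma channel; guided filtering with the luma as guide is a faster variant of the same principle, and could also be folded into training as an edge-aware loss term.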



Call for Collaboration:
I am reaching out to see if we can team up to stabilize this model. Once we fix this spatial bleeding, I truly believe this will be the ultimate upgrade to ColorMNet.

To get things started, I have attached all the files to this post:

    The complete training and inference source code.

    The test scripts.

    The trained model weights (at 145k iterations).

    The visual results along with the reference images.

Let's build something great together. Any advice or pull requests are welcome!

Best

NASS

Script and model: https://drive.google.com/file/d/1JV7V2ppKlQSIIG-bVZ52jiJkF0MuFxRx/view?usp=sharing

Results: https://drive.google.com/file/d/1aKtCB5QC1MoSRqn97HogvcJV-rca08Ny/view?usp=sharing

To test: python nass.py --input 0000.mp4 --ref_path REF --model saves/color_v3_3090_145000.pth