Deoldify Vapoursynth filter
06.04.2026, 09:33
But in R74, are the constants vs.RANGE_FULL (=0) and vs.RANGE_LIMITED (=1) still defined?
myrsloik mentioned that one should use the constants, ...
Looking at the code (typedef enum VSRange {): using the constants is fine; VapourSynth switches the values depending on where the value is set.
And yes, the constants are still there: https://github.com/vapoursynth/vapoursyn...t__.py#L14
Cu Selur
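Since the constants resolve to the integer values quoted above, a script can use either spelling. A minimal sketch (not code from this thread; the ImportError fallback is my assumption for machines without VapourSynth installed):

```python
# Hedged sketch: resolve the colour-range constants, falling back to the
# raw integers quoted in this thread (0 = full, 1 = limited) when the
# vapoursynth module is not available.
try:
    import vapoursynth as vs
    RANGE_FULL = int(vs.RANGE_FULL)
    RANGE_LIMITED = int(vs.RANGE_LIMITED)
except ImportError:
    # Environment without VapourSynth installed.
    RANGE_FULL, RANGE_LIMITED = 0, 1

print(RANGE_FULL, RANGE_LIMITED)
```

Either way the same integers end up in the _ColorRange frame property, which is why both spellings keep working across releases.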
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
06.04.2026, 10:38
But will this code work in R74?

if vs.core.core_version.release_major < 74:

In R72 this code works, but since I don't have R74 installed, I cannot test it in R74.
Dan
I see no reason for it not to work. (haven't tested)
I plan to set up a test R74 with Python 3.12 later today.
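For reference, a guard of that shape can be written so it also runs on machines without VapourSynth. A minimal sketch (the ImportError fallback and its placeholder value are my assumptions, not code from the thread):

```python
# Hedged sketch of the release_major guard discussed above.
try:
    import vapoursynth as vs
    release_major = vs.core.core_version.release_major
except ImportError:
    release_major = 72  # assumed placeholder when VapourSynth is absent

if release_major < 74:
    # pre-R74 code path (whatever the script needs for older cores)
    pre_r74 = True
else:
    # R74+ code path
    pre_r74 = False

print(release_major, pre_r74)
```

The attribute access itself (`vs.core.core_version.release_major`) is unchanged between R72 and R74, which is why the guard works on both.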
06.04.2026, 11:43
The code works fine with R74. (tested)
06.04.2026, 16:11
Released new version: v5.6.7
Main changes: improved connection error handling in ColorMNet.
Dan
06.04.2026, 17:40
Are any adjustments to Hybrid needed?
06.04.2026, 18:49
No, all the changes are related to API not directly exposed in Hybrid.
I improved the scene detection algorithm, which was necessary to allow the use of DiT models as additional coloring models. The next big change will be direct support for DiT models in Hybrid, but for that step I need a DiT model with low hardware requirements to be released. Unfortunately, most of the researchers working on Qwen left Alibaba, and I don't know if they will start producing new lightweight models at other companies. Fortunately, the high VRAM cost is giving many researchers an incentive to develop models with lower RAM usage. We'll see.
Dan
8 hours ago
Hello Dan & Selur ,
I am working on a custom video colorization pipeline heavily inspired by ColorMNet, but I completely overhauled the core architecture to make it state-of-the-art:

1. Backbone upgrade: replaced DINOv2 with DINOv3 for denser and richer semantic feature extraction.
2. Memory upgrade: upgraded the tracking engine to the XMem++ architecture (incorporating permanent memory).

The progress: I successfully trained the model from scratch up to 145,000 iterations (on DAVIS, REDS, and 16mm film). The temporal stability and object tracking are mind-blowing. If I provide a reference frame with a red car, the car stays perfectly red throughout the whole video, even through severe occlusions.

The problem: while the tracking is perfect, I am experiencing a spatial issue: color bleeding / spilling (specifically over the ground/road and the sky).

Call for collaboration: I am reaching out to see if we can team up to stabilize this model. Once we fix this spatial bleeding, I truly believe this will be the ultimate upgrade to ColorMNet. To get things started, I have attached all the files to this post:
- The complete training and inference source code.
- The test scripts.
- The trained model weights (at 145k iterations).
- The visual results along with the reference images.

Let's build something great together. Any advice or pull requests are welcome!

Best
NASS

Script and model: https://drive.google.com/file/d/1JV7V2pp...sp=sharing
Results: https://drive.google.com/file/d/1aKtCB5Q...sp=sharing

For testing:
python nass.py --input 0000.mp4 --ref_path REF --model saves/color_v3_3090_145000.pth