Deoldify Vapoursynth filter - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Deoldify Vapoursynth filter (/thread-3595.html)
RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

But in R74, are the constants vs.RANGE_FULL (=0) and vs.RANGE_LIMITED (=1) still defined?


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

myrsloik mentioned that one should use the constants. Looking in the code (the typedef enum VSRange { ... } definition), using the constants is fine; VapourSynth switches the values depending on where the value is set.

And yes, the constants are still there: https://github.com/vapoursynth/vapoursynth/blob/140ed20676a2863cd8542030e630b13454035233/src/py/__init__.py#L14

Cu Selur


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

But will this code work in R74?

if vs.core.core_version.release_major < 74:

In R72 this code works, but since I don't have R74 installed, I cannot test it there.

Dan


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

I see no reason for it not to work. (haven't tested) I plan to set up a test R74 with Python 3.12 later today.
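[Editor's note: the advice above (prefer the symbolic constants over literal 0/1) can be sketched with a small plain-Python helper. The helper name `color_range_value` is hypothetical, and the fallback values 0/1 are the ones quoted in the thread for vs.RANGE_FULL / vs.RANGE_LIMITED:]

```python
def color_range_value(vs_module, full: bool) -> int:
    """Return a _ColorRange prop value, preferring the module's constants.

    Hypothetical sketch: uses vs.RANGE_FULL / vs.RANGE_LIMITED when the
    module defines them (they are still present in R74, per the thread),
    and falls back to the literal values quoted above (RANGE_FULL = 0,
    RANGE_LIMITED = 1) otherwise.
    """
    name = "RANGE_FULL" if full else "RANGE_LIMITED"
    return getattr(vs_module, name, 0 if full else 1)


# With a real VapourSynth module this could be used as, e.g.:
#   clip = core.std.SetFrameProps(clip, _ColorRange=color_range_value(vs, False))
```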
RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

The code works fine with R74. (tested)


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

Released new version: v5.6.7

Main changes:
- Improved connection error handling in ColorMNet

Dan


RE: Deoldify Vapoursynth filter - Selur - 06.04.2026

Are any adjustments to Hybrid needed?


RE: Deoldify Vapoursynth filter - Dan64 - 06.04.2026

No, all the changes are related to APIs not directly exposed in Hybrid. I improved the scene-detection algorithm, which was necessary to allow the use of DiT models as an additional coloring model.

The next big change will be direct support for DiT models in Hybrid, but for that step I need a DiT model with low hardware requirements to be released. Unfortunately, most of the researchers working on Qwen left Alibaba, and I don't know whether they will start producing new lightweight models at other companies. Fortunately, the high VRAM cost is giving many researchers an incentive to develop models with lower RAM usage. We will see...

Dan


RE: Deoldify Vapoursynth filter - NASS - 10.04.2026

Hello Dan & Selur,

I am working on a custom video colorization pipeline heavily inspired by ColorMNet, but I completely overhauled the core architecture to make it state-of-the-art:

1. Backbone upgrade: replaced DINOv2 with DINOv3 for denser and richer semantic feature extraction.
2. Memory upgrade: upgraded the tracking engine to the XMem++ architecture (incorporating permanent memory).

The progress: I successfully trained the model from scratch up to 145,000 iterations (on DAVIS, REDS, and 16mm film). The temporal stability and object tracking are mind-blowing: if I provide a reference frame with a red car, the car stays perfectly red throughout the whole video, even through severe occlusions.
The problem: while the tracking is perfect, I am experiencing a spatial issue: color bleeding/spilling, specifically over the ground/road and the sky.

Call for collaboration: I am reaching out to see if we can team up to stabilize this model. Once we fix this spatial bleeding, I truly believe this will be the ultimate upgrade to ColorMNet. To get things started, I have attached all the files to this post:
- The complete training and inference source code.
- The test scripts.
- The trained model weights (at 145k iterations).
- The visual results along with the reference images.

Let's build something great together. Any advice or pull requests are welcome!

Best,
NASS

Script and model: https://drive.google.com/file/d/1JV7V2ppKlQSIIG-bVZ52jiJkF0MuFxRx/view?usp=sharing
Results: https://drive.google.com/file/d/1aKtCB5QC1MoSRqn97HogvcJV-rca08Ny/view?usp=sharing

For testing:
python nass.py --input 0000.mp4 --ref_path REF --model saves/color_v3_3090_145000.pth
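[Editor's note: the "improved connection error handling in ColorMNet" mentioned in the v5.6.7 release notes earlier in the thread is not shown anywhere in the posts. As a generic, hypothetical illustration of that kind of change only — all names below are assumptions, not the actual ColorMNet code — a retry-with-backoff wrapper for connection errors might look like this:]

```python
import time


def with_retries(fn, attempts=3, base_delay=0.5, exceptions=(ConnectionError,)):
    """Call fn(), retrying on connection errors with exponential backoff.

    Hypothetical sketch: the real ColorMNet change is not shown in the
    thread, so this only illustrates the general pattern.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            # wait 0.5s, 1s, 2s, ... before the next attempt
            time.sleep(base_delay * (2 ** attempt))
```

A wrapper like this would sit around whatever network call establishes the remote-feature connection, so transient failures are retried instead of aborting the whole colorization run.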