<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
	<channel>
		<title><![CDATA[Selur's Little Message Board - Small Talk]]></title>
		<link>https://forum.selur.net/</link>
		<description><![CDATA[Selur's Little Message Board - https://forum.selur.net]]></description>
		<pubDate>Tue, 21 Apr 2026 19:03:37 +0000</pubDate>
		<generator>MyBB</generator>
		<item>
			<title><![CDATA[ColormnetV2 Project]]></title>
			<link>https://forum.selur.net/thread-4365.html</link>
			<pubDate>Fri, 10 Apr 2026 00:27:15 +0200</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4685">NASS</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4365.html</guid>
<description><![CDATA[Hello Dan &amp; Selur,<br />
<br />
I am working on a custom video colorization pipeline heavily inspired by ColorMNet, but I completely overhauled the core architecture to make it state-of-the-art:<br />
<br />
1. Backbone Upgrade: Replaced DINOv2 with DINOv3 for denser and richer semantic feature extraction.<br />
2. Memory Upgrade: Upgraded the tracking engine to the XMem++ architecture (incorporating Permanent Memory).<br />
<br />
The Progress:<br />
I successfully trained the model from scratch up to 145,000 iterations (on DAVIS, REDS, and 16mm film).<br />
The temporal stability and object tracking are mind-blowing. If I provide a reference frame with a red car, the car stays perfectly red throughout the whole video, even through severe occlusions.<br />
<br />
The Problem:<br />
While the tracking is perfect, I am experiencing a spatial issue: color bleeding/spilling (specifically over the ground/road and the sky).<br />
<br />
<br />
<br />
Call for Collaboration:<br />
I am reaching out to see if we can team up to stabilize this model. Once we fix this spatial bleeding, I truly believe this will be the ultimate upgrade to ColorMNet.<br />
<br />
To get things started, I have attached all the files to this post:<br />
<br />
    The complete training and inference source code.<br />
<br />
    The test scripts.<br />
<br />
    The trained model weights (at 145k iterations).<br />
<br />
    The visual results along with the reference images.<br />
<br />
Let's build something great together. Any advice or pull requests are welcome!<br />
<br />
Best<br />
<br />
NASS<br />
<br />
Script and model: <a href="https://drive.google.com/file/d/1JV7V2ppKlQSIIG-bVZ52jiJkF0MuFxRx/view?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://drive.google.com/file/d/1JV7V2pp...sp=sharing</a><br />
<br />
Results: <a href="https://drive.google.com/file/d/1aKtCB5QC1MoSRqn97HogvcJV-rca08Ny/view?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://drive.google.com/file/d/1aKtCB5Q...sp=sharing</a><br />
<br />
To test: python nass.py --input 0000.mp4 --ref_path REF --model saves/color_v3_3090_145000.pth]]></description>
<content:encoded><![CDATA[Hello Dan &amp; Selur,<br />
<br />
I am working on a custom video colorization pipeline heavily inspired by ColorMNet, but I completely overhauled the core architecture to make it state-of-the-art:<br />
<br />
1. Backbone Upgrade: Replaced DINOv2 with DINOv3 for denser and richer semantic feature extraction.<br />
2. Memory Upgrade: Upgraded the tracking engine to the XMem++ architecture (incorporating Permanent Memory).<br />
<br />
The Progress:<br />
I successfully trained the model from scratch up to 145,000 iterations (on DAVIS, REDS, and 16mm film).<br />
The temporal stability and object tracking are mind-blowing. If I provide a reference frame with a red car, the car stays perfectly red throughout the whole video, even through severe occlusions.<br />
<br />
The Problem:<br />
While the tracking is perfect, I am experiencing a spatial issue: color bleeding/spilling (specifically over the ground/road and the sky).<br />
<br />
<br />
<br />
Call for Collaboration:<br />
I am reaching out to see if we can team up to stabilize this model. Once we fix this spatial bleeding, I truly believe this will be the ultimate upgrade to ColorMNet.<br />
<br />
To get things started, I have attached all the files to this post:<br />
<br />
    The complete training and inference source code.<br />
<br />
    The test scripts.<br />
<br />
    The trained model weights (at 145k iterations).<br />
<br />
    The visual results along with the reference images.<br />
<br />
Let's build something great together. Any advice or pull requests are welcome!<br />
<br />
Best<br />
<br />
NASS<br />
<br />
Script and model: <a href="https://drive.google.com/file/d/1JV7V2ppKlQSIIG-bVZ52jiJkF0MuFxRx/view?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://drive.google.com/file/d/1JV7V2pp...sp=sharing</a><br />
<br />
Results: <a href="https://drive.google.com/file/d/1aKtCB5QC1MoSRqn97HogvcJV-rca08Ny/view?usp=sharing" target="_blank" rel="noopener" class="mycode_url">https://drive.google.com/file/d/1aKtCB5Q...sp=sharing</a><br />
<br />
To test: python nass.py --input 0000.mp4 --ref_path REF --model saves/color_v3_3090_145000.pth]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Shirt problem in this video]]></title>
			<link>https://forum.selur.net/thread-4363.html</link>
			<pubDate>Tue, 07 Apr 2026 19:42:01 +0200</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4668">georgepriftakis</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4363.html</guid>
<description><![CDATA[Hello, I have been trying for at least 3 days to find settings that would fix this, but I can't. Look at the shirt and the paper thing; I want to remove those lines.<br />
<br />
<br />
<a href="https://mega.nz/file/B6IHGQqB#z5u-Ftp1UsU2skOOHfRlrr34wxo6_AUNMx65aTXrJrA" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/B6IHGQqB#z5u-Ftp1Us...x65aTXrJrA</a>]]></description>
<content:encoded><![CDATA[Hello, I have been trying for at least 3 days to find settings that would fix this, but I can't. Look at the shirt and the paper thing; I want to remove those lines.<br />
<br />
<br />
<a href="https://mega.nz/file/B6IHGQqB#z5u-Ftp1UsU2skOOHfRlrr34wxo6_AUNMx65aTXrJrA" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/B6IHGQqB#z5u-Ftp1Us...x65aTXrJrA</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Ringing, Mosquito or Blocking]]></title>
			<link>https://forum.selur.net/thread-4361.html</link>
			<pubDate>Mon, 06 Apr 2026 14:40:53 +0200</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4482">lastchance22</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4361.html</guid>
			<description><![CDATA[<img src="http://titant.free.fr/tmp/vlcsnap.jpg" loading="lazy"  alt="[Image: vlcsnap.jpg]" class="mycode_img" />Hi everyone,<br />
I’m having trouble improving the quality of an anime video and I can’t seem to fully fix it.<br />
<span style="font-weight: bold;" class="mycode_b">Source info:</span><ul class="mycode_list"><li>FPS: 29.970<br />
</li>
<li>I already processed it with SRestore to get back to 23.976 fps<br />
</li>
<li>I also used QTGMC to fix interlacing<br />
</li>
</ul>
<span style="font-weight: bold;" class="mycode_b">Current issue:</span><br />
There are still a lot of <span style="font-weight: bold;" class="mycode_b">small bright and colored pixels around the lineart (edges)</span>. It looks like strong compression artifacts (ringing / mosquito noise), almost like an over-compressed JPEG.<br />
<span style="font-weight: bold;" class="mycode_b">What I already tried in Hybrid:</span><ul class="mycode_list"><li>DeHaloAlpha (Radius 2, Strength 1–2)<br />
</li>
<li>HQDeRing (Y + U + V checked)<br />
</li>
<li>Deband (f3kdb Neo, light settings)<br />
</li>
</ul>
Despite this, the pixels around the lines are still very visible.<br />
<span style="font-weight: bold;" class="mycode_b">What I’m looking for:</span><ul class="mycode_list"><li>Best way to reduce/remove these artifacts without destroying line detail<br />
</li>
<li>Recommended filters or settings in Hybrid / VapourSynth<br />
</li>
<li>Whether this can be properly fixed or if it’s just a limitation of the source<br />
</li>
</ul>
If needed, I can provide screenshots or a sample.<br />
Thanks a lot for your help! <!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3543" target="_blank" title="">vlcsnap.jpg</a> (Size: 303,25 KB / Downloads: 16)
<!-- end: postbit_attachments_attachment -->]]></description>
			<content:encoded><![CDATA[<img src="http://titant.free.fr/tmp/vlcsnap.jpg" loading="lazy"  alt="[Image: vlcsnap.jpg]" class="mycode_img" />Hi everyone,<br />
I’m having trouble improving the quality of an anime video and I can’t seem to fully fix it.<br />
<span style="font-weight: bold;" class="mycode_b">Source info:</span><ul class="mycode_list"><li>FPS: 29.970<br />
</li>
<li>I already processed it with SRestore to get back to 23.976 fps<br />
</li>
<li>I also used QTGMC to fix interlacing<br />
</li>
</ul>
<span style="font-weight: bold;" class="mycode_b">Current issue:</span><br />
There are still a lot of <span style="font-weight: bold;" class="mycode_b">small bright and colored pixels around the lineart (edges)</span>. It looks like strong compression artifacts (ringing / mosquito noise), almost like an over-compressed JPEG.<br />
<span style="font-weight: bold;" class="mycode_b">What I already tried in Hybrid:</span><ul class="mycode_list"><li>DeHaloAlpha (Radius 2, Strength 1–2)<br />
</li>
<li>HQDeRing (Y + U + V checked)<br />
</li>
<li>Deband (f3kdb Neo, light settings)<br />
</li>
</ul>
Despite this, the pixels around the lines are still very visible.<br />
<span style="font-weight: bold;" class="mycode_b">What I’m looking for:</span><ul class="mycode_list"><li>Best way to reduce/remove these artifacts without destroying line detail<br />
</li>
<li>Recommended filters or settings in Hybrid / VapourSynth<br />
</li>
<li>Whether this can be properly fixed or if it’s just a limitation of the source<br />
</li>
</ul>
If needed, I can provide screenshots or a sample.<br />
Thanks a lot for your help! <!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3543" target="_blank" title="">vlcsnap.jpg</a> (Size: 303,25 KB / Downloads: 16)
<!-- end: postbit_attachments_attachment -->]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[help please]]></title>
			<link>https://forum.selur.net/thread-4358.html</link>
			<pubDate>Wed, 01 Apr 2026 21:36:22 +0200</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4683">lsd4me2</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4358.html</guid>
<description><![CDATA[<img src="https://i.ibb.co/1fBwFqn9/03596.png" loading="lazy"  alt="[Image: 03596.png]" class="mycode_img" /> <img src="https://i.ibb.co/GSd3spt/03597.png" loading="lazy"  alt="[Image: 03597.png]" class="mycode_img" /> <img src="https://i.ibb.co/Cp6kYN5N/03598.png" loading="lazy"  alt="[Image: 03598.png]" class="mycode_img" /><br />
<br />
Backing up my Simpsons DVD collection. I did TIVTC with QTGMC fast, VInverse2, KNLMeansCL, YAHR, and Santiag, followed by an NNEDI3 resize to convert to square pixels at 640x480. The result is great except for about 100 frames around the 2:30 mark. It seems to be embedded in the source, as it occurs even when I play the untouched VOB ripped from the disc. Any ideas how to fix this? Can I just fix the 100 or so problem frames without messing with the rest of the episode?]]></description>
<content:encoded><![CDATA[<img src="https://i.ibb.co/1fBwFqn9/03596.png" loading="lazy"  alt="[Image: 03596.png]" class="mycode_img" /> <img src="https://i.ibb.co/GSd3spt/03597.png" loading="lazy"  alt="[Image: 03597.png]" class="mycode_img" /> <img src="https://i.ibb.co/Cp6kYN5N/03598.png" loading="lazy"  alt="[Image: 03598.png]" class="mycode_img" /><br />
<br />
Backing up my Simpsons DVD collection. I did TIVTC with QTGMC fast, VInverse2, KNLMeansCL, YAHR, and Santiag, followed by an NNEDI3 resize to convert to square pixels at 640x480. The result is great except for about 100 frames around the 2:30 mark. It seems to be embedded in the source, as it occurs even when I play the untouched VOB ripped from the disc. Any ideas how to fix this? Can I just fix the 100 or so problem frames without messing with the rest of the episode?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Dehalo help]]></title>
			<link>https://forum.selur.net/thread-4349.html</link>
			<pubDate>Mon, 16 Mar 2026 16:14:02 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4668">georgepriftakis</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4349.html</guid>
<description><![CDATA[Hello, there is this clip that has extreme halos, and I want to fix it, but dehalo doesn't completely do it justice. What would be the best options to fix it? The posters especially are the biggest problem.<br />
<br />
<br />
<br />
<a href="https://mega.nz/file/1iBjjJhK#Z3x8r-JJxmuAdkXkYJyEJRbbO3rdK3tBnsBLz2ldlOI" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/1iBjjJhK#Z3x8r-JJxm...sBLz2ldlOI</a>]]></description>
<content:encoded><![CDATA[Hello, there is this clip that has extreme halos, and I want to fix it, but dehalo doesn't completely do it justice. What would be the best options to fix it? The posters especially are the biggest problem.<br />
<br />
<br />
<br />
<a href="https://mega.nz/file/1iBjjJhK#Z3x8r-JJxmuAdkXkYJyEJRbbO3rdK3tBnsBLz2ldlOI" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/1iBjjJhK#Z3x8r-JJxm...sBLz2ldlOI</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[deblendS6,...]]></title>
			<link>https://forum.selur.net/thread-4343.html</link>
			<pubDate>Sat, 07 Mar 2026 09:56:26 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=1">Selur</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4343.html</guid>
			<description><![CDATA[Hi,<br />
I added deblendS6 which is an alternative to sRestore mode=6, where I tried to:<br />
a. keep the results (nearly) the same as sRestore mode=6<br />
b. speed the whole thing up<br />
c. add lookaheads and threading<br />
<br />
It would be nice if someone used it instead of sRestore mode=6 and gave some general feedback.<br />
<br />
Cu Selur]]></description>
			<content:encoded><![CDATA[Hi,<br />
I added deblendS6 which is an alternative to sRestore mode=6, where I tried to:<br />
a. keep the results (nearly) the same as sRestore mode=6<br />
b. speed the whole thing up<br />
c. add lookaheads and threading<br />
<br />
It would be nice if someone used it instead of sRestore mode=6 and gave some general feedback.<br />
<br />
Cu Selur]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Green outline and color problems]]></title>
			<link>https://forum.selur.net/thread-4340.html</link>
			<pubDate>Tue, 03 Mar 2026 17:22:34 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4668">georgepriftakis</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4340.html</guid>
<description><![CDATA[Hello, there is this green outline in this video, and I tried to use chromashift, but it creates problems for the whole video. Do you have any idea how to fix it?<br />
<br />
<br />
<br />
Link to video: <a href="https://mega.nz/file/U7RzSATL#AEOYV3szPsfbz5vIUKyzOoeqpliYhu4c-Z6o90oOLsk" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/U7RzSATL#AEOYV3szPs...Z6o90oOLsk</a>]]></description>
<content:encoded><![CDATA[Hello, there is this green outline in this video, and I tried to use chromashift, but it creates problems for the whole video. Do you have any idea how to fix it?<br />
<br />
<br />
<br />
Link to video: <a href="https://mega.nz/file/U7RzSATL#AEOYV3szPsfbz5vIUKyzOoeqpliYhu4c-Z6o90oOLsk" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/U7RzSATL#AEOYV3szPs...Z6o90oOLsk</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Weird outline in the clothes]]></title>
			<link>https://forum.selur.net/thread-4338.html</link>
			<pubDate>Mon, 02 Mar 2026 16:21:41 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4668">georgepriftakis</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4338.html</guid>
<description><![CDATA[Hello, there are these weird outlines on the clothes of the person in the video, and I can't find any way to fix them. Are there any suggestions on how to fix this?<br />
<br />
<br />
video: <a href="https://mega.nz/file/ojR3kSxT#CeiSOzAsGt3OQAXaHKctQfuD5aYVN3N0KGGkmbjPw-o" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/ojR3kSxT#CeiSOzAsGt...GGkmbjPw-o</a>]]></description>
<content:encoded><![CDATA[Hello, there are these weird outlines on the clothes of the person in the video, and I can't find any way to fix them. Are there any suggestions on how to fix this?<br />
<br />
<br />
video: <a href="https://mega.nz/file/ojR3kSxT#CeiSOzAsGt3OQAXaHKctQfuD5aYVN3N0KGGkmbjPw-o" target="_blank" rel="noopener" class="mycode_url">https://mega.nz/file/ojR3kSxT#CeiSOzAsGt...GGkmbjPw-o</a>]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[[HELP] Frame blending on old telecine, SRestore etc]]></title>
			<link>https://forum.selur.net/thread-4330.html</link>
			<pubDate>Thu, 19 Feb 2026 18:14:10 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4659">tygerbug</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4330.html</guid>
			<description><![CDATA[<a href="https://www.swisstransfer.com/d/b1dc712d-dc8f-4f14-bb39-36e5bb58431c" target="_blank" rel="noopener" class="mycode_url">https://www.swisstransfer.com/d/b1dc712d...e5bb58431c</a><br />
<br />
I am restoring some old commercials, which often have old, primitive telecines where there is lots of frame blending and no obvious pulldown.<br />
<br />
I have been using Hybrid to do QTGMC bobbing. I haven't learned the other features yet. I could use AviSynth in VirtualDub2 but Hybrid is more user friendly. My attempts to write AviSynth scripts usually fail and I wouldn't know how to fix them.<br />
<br />
I am attempting to use SRestore or similar to get this 59.94 video down to 23.976 fps accurately, with as little frame blending as possible. Most frames are blended but some more so than others. There is no obvious pattern.<br />
<br />
When I attempt to add SRestore the job crashes immediately, says there is no such filter, or slowly encodes to "500%" and then stalls.<br />
<br />
What are the exact steps I should follow?]]></description>
			<content:encoded><![CDATA[<a href="https://www.swisstransfer.com/d/b1dc712d-dc8f-4f14-bb39-36e5bb58431c" target="_blank" rel="noopener" class="mycode_url">https://www.swisstransfer.com/d/b1dc712d...e5bb58431c</a><br />
<br />
I am restoring some old commercials, which often have old, primitive telecines where there is lots of frame blending and no obvious pulldown.<br />
<br />
I have been using Hybrid to do QTGMC bobbing. I haven't learned the other features yet. I could use AviSynth in VirtualDub2 but Hybrid is more user friendly. My attempts to write AviSynth scripts usually fail and I wouldn't know how to fix them.<br />
<br />
I am attempting to use SRestore or similar to get this 59.94 video down to 23.976 fps accurately, with as little frame blending as possible. Most frames are blended but some more so than others. There is no obvious pattern.<br />
<br />
When I attempt to add SRestore the job crashes immediately, says there is no such filter, or slowly encodes to "500%" and then stalls.<br />
<br />
What are the exact steps I should follow?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[What is the correct order of steps]]></title>
			<link>https://forum.selur.net/thread-4316.html</link>
			<pubDate>Thu, 05 Feb 2026 23:24:55 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4651">Maron</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4316.html</guid>
			<description><![CDATA[Hi – please advise.<br />
(if I put the question in the wrong thread – I apologize)<br />
<br />
I have source films on Hi8 and I download them to my PC with the WinDV program.<br />
Parameters: Format AVI, 25FPS, Interlaced- Bottom Field First, 720x576, 4:3.<br />
I need to do editing (software: KdenLive) and also remove interlacing in Hybrid.<br />
The question is: what is the correct order of steps? 1. Deinterlace (in Hybrid), 2. Edit in KdenLive, 3. Export to MP4?<br />
Thank you for the advice<br />
Maron CZ]]></description>
			<content:encoded><![CDATA[Hi – please advise.<br />
(if I put the question in the wrong thread – I apologize)<br />
<br />
I have source films on Hi8 and I download them to my PC with the WinDV program.<br />
Parameters: Format AVI, 25FPS, Interlaced- Bottom Field First, 720x576, 4:3.<br />
I need to do editing (software: KdenLive) and also remove interlacing in Hybrid.<br />
The question is: what is the correct order of steps? 1. Deinterlace (in Hybrid), 2. Edit in KdenLive, 3. Export to MP4?<br />
Thank you for the advice<br />
Maron CZ]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Hallo]]></title>
			<link>https://forum.selur.net/thread-4314.html</link>
			<pubDate>Sun, 01 Feb 2026 21:51:10 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4647">RextheC</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4314.html</guid>
<description><![CDATA[Just wanted to quickly say H E L L O (-;<br />
<br />
By now I have been struggling with digitizing VHS for... years!<br />
And so I ended up here [:-)<br />
<br />
Regards, Tobias]]></description>
<content:encoded><![CDATA[Just wanted to quickly say H E L L O (-;<br />
<br />
By now I have been struggling with digitizing VHS for... years!<br />
And so I ended up here [:-)<br />
<br />
Regards, Tobias]]></content:encoded>
		</item>
		<item>
<title><![CDATA[SeedVR2 experiences?]]></title>
			<link>https://forum.selur.net/thread-4311.html</link>
			<pubDate>Fri, 30 Jan 2026 14:49:45 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=1">Selur</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4311.html</guid>
<description><![CDATA[Does anyone use SeedVR2?<br />
I know that it can be used through <a href="https://github.com/Comfy-Org/ComfyUI_frontend" target="_blank" rel="noopener" class="mycode_url">ComfyUI</a> (leveraging <a href="https://github.com/Comfy-Org/ComfyUI-Manager" target="_blank" rel="noopener" class="mycode_url">ComfyUI-Manager</a> and following <a href="https://www.youtube.com/watch?v=MBtWYXq_r60" target="_blank" rel="noopener" class="mycode_url">SeedVR2 v2.5 Video Upscaling: Official Guide from the ComfyUI Integration Team | AInVFX Nov 7</a>).<br />
<br />
I haven't really tested this a lot (I would probably need multiple fast GPUs with lots of VRAM to really have fun with it), but I'm wondering:<br />
a. are there significant differences between the different SeedVR2 models?<br />
b. what experiences do you have in regard to what prefiltering should be done before feeding a source to SeedVR2?<br />
c. is it just me or does the 3b model somehow create somewhat strange/creepy skin? <img src="https://forum.selur.net/images/smilies/smile.png" alt="Smile" title="Smile" class="smilie smilie_1" /><br />
d. is there <br />
<br />
Cu Selur]]></description>
<content:encoded><![CDATA[Does anyone use SeedVR2?<br />
I know that it can be used through <a href="https://github.com/Comfy-Org/ComfyUI_frontend" target="_blank" rel="noopener" class="mycode_url">ComfyUI</a> (leveraging <a href="https://github.com/Comfy-Org/ComfyUI-Manager" target="_blank" rel="noopener" class="mycode_url">ComfyUI-Manager</a> and following <a href="https://www.youtube.com/watch?v=MBtWYXq_r60" target="_blank" rel="noopener" class="mycode_url">SeedVR2 v2.5 Video Upscaling: Official Guide from the ComfyUI Integration Team | AInVFX Nov 7</a>).<br />
<br />
I haven't really tested this a lot (I would probably need multiple fast GPUs with lots of VRAM to really have fun with it), but I'm wondering:<br />
a. are there significant differences between the different SeedVR2 models?<br />
b. what experiences do you have in regard to what prefiltering should be done before feeding a source to SeedVR2?<br />
c. is it just me or does the 3b model somehow create somewhat strange/creepy skin? <img src="https://forum.selur.net/images/smilies/smile.png" alt="Smile" title="Smile" class="smilie smilie_1" /><br />
d. is there <br />
<br />
Cu Selur]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[About deinterlace, telecine and other things]]></title>
			<link>https://forum.selur.net/thread-4291.html</link>
			<pubDate>Fri, 19 Dec 2025 22:22:19 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4136">Doom83</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4291.html</guid>
<description><![CDATA[In my journey to convert some movies, I use some ffmpeg commands to analyze the video frames, and sometimes they give strange results.<br />
<br />
<br />
 <br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Duration: 00:27:25.41, start: 1696.428300, bitrate: 6349 kb/s<br />
  Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, fcc/bt470bg/bt470bg, bottom first), 720x480 [SAR 8:9 DAR 4:3], 29.97 fps, 29.97 tbr, 90k tbn, start 1696.661633<br />
    Side data:<br />
      cpb: bitrate max/min/avg: 8800000/0/0 buffer size: 1835008 vbv_delay: N/A<br />
  Stream #0:1[0x80]: Audio: ac3, 48000 Hz, mono, fltp, 192 kb/s, start 1696.428300<br />
[Parsed_idet_0 @ 000001de269c8cc0] Repeated Fields: Neither:    0 Top:    0 Bottom:    0<br />
[Parsed_idet_0 @ 000001de269c8cc0] Single frame detection: TFF:    0 BFF:    0 Progressive:    0 Undetermined:    0<br />
[Parsed_idet_0 @ 000001de269c8cc0] Multi frame detection: TFF:    0 BFF:    0 Progressive:    0 Undetermined:    0<br />
Stream mapping:<br />
  Stream #0:0 -&gt; #0:0 (mpeg2video (native) -&gt; wrapped_avframe (native))<br />
Press [q] to stop, [?] for help<br />
Output #0, null, to 'pipe:':<br />
  Metadata:<br />
    encoder        : Lavf62.3.100<br />
  Stream #0:0: Video: wrapped_avframe, yuv420p(tv, fcc/bt470bg/bt470bg, bottom coded first (swapped)), 720x480 [SAR 8:9 DAR 4:3], q=2-31, 200 kb/s, 29.97 fps, 29.97 tbn<br />
    Metadata:<br />
      encoder        : Lavc62.11.100 wrapped_avframe<br />
[Parsed_idet_0 @ 000001de26daf880] Repeated Fields: Neither:  501 Top:    0 Bottom:    0<br />
[Parsed_idet_0 @ 000001de26daf880] Single frame detection: TFF:    0 BFF:    0 Progressive:  448 Undetermined:    53<br />
[Parsed_idet_0 @ 000001de26daf880] Multi frame detection: TFF:    0 BFF:    0 Progressive:  495 Undetermined:    6<br />
[out#0/null @ 000001de269c7e00] video:203KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown<br />
frame=  500 fps=0.0 q=-0.0 Lsize=N/A time=00:00:16.68 bitrate=N/A speed= 294x elapsed=0:00:00.05</blockquote>
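For reference, idet statistics like the ones quoted above come from an invocation along these lines (the filename and the seek offset here are placeholders, not taken from the original post):

```shell
# Run ffmpeg's interlace-detection (idet) filter over ~500 frames, discarding
# the decoded output; the filter prints its counters when it finishes.
# "movie.vob" and the -ss offset are placeholders for the actual source.
ffmpeg -hide_banner -ss 1696 -i movie.vob -an -frames:v 500 -vf idet -f null -
```

In the log above, "Multi frame detection" reporting 495 of 500 frames as Progressive suggests the sampled section is largely progressive content, whatever the container's BFF field-order flag says.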
<br />
<br />
I told ChatGPT, and it says the video is interlaced BFF but with a lot of progressive and undetermined frames   <img src="https://forum.selur.net/images/smilies/huh.png" alt="Huh" title="Huh" class="smilie smilie_17" /><br />
I put it in Hybrid, and it says it is TFF.<br />
<br />
How should I handle this?]]></description>
<content:encoded><![CDATA[In my journey to convert some movies, I use some ffmpeg commands to analyze the video frames, and sometimes they give strange results.<br />
<br />
<br />
 <br />
<blockquote class="mycode_quote"><cite>Quote:</cite>Duration: 00:27:25.41, start: 1696.428300, bitrate: 6349 kb/s<br />
  Stream #0:0[0x1e0]: Video: mpeg2video (Main), yuv420p(tv, fcc/bt470bg/bt470bg, bottom first), 720x480 [SAR 8:9 DAR 4:3], 29.97 fps, 29.97 tbr, 90k tbn, start 1696.661633<br />
    Side data:<br />
      cpb: bitrate max/min/avg: 8800000/0/0 buffer size: 1835008 vbv_delay: N/A<br />
  Stream #0:1[0x80]: Audio: ac3, 48000 Hz, mono, fltp, 192 kb/s, start 1696.428300<br />
[Parsed_idet_0 @ 000001de269c8cc0] Repeated Fields: Neither:    0 Top:    0 Bottom:    0<br />
[Parsed_idet_0 @ 000001de269c8cc0] Single frame detection: TFF:    0 BFF:    0 Progressive:    0 Undetermined:    0<br />
[Parsed_idet_0 @ 000001de269c8cc0] Multi frame detection: TFF:    0 BFF:    0 Progressive:    0 Undetermined:    0<br />
Stream mapping:<br />
  Stream #0:0 -&gt; #0:0 (mpeg2video (native) -&gt; wrapped_avframe (native))<br />
Press [q] to stop, [?] for help<br />
Output #0, null, to 'pipe:':<br />
  Metadata:<br />
    encoder        : Lavf62.3.100<br />
  Stream #0:0: Video: wrapped_avframe, yuv420p(tv, fcc/bt470bg/bt470bg, bottom coded first (swapped)), 720x480 [SAR 8:9 DAR 4:3], q=2-31, 200 kb/s, 29.97 fps, 29.97 tbn<br />
    Metadata:<br />
      encoder        : Lavc62.11.100 wrapped_avframe<br />
[Parsed_idet_0 @ 000001de26daf880] Repeated Fields: Neither:  501 Top:    0 Bottom:    0<br />
[Parsed_idet_0 @ 000001de26daf880] Single frame detection: TFF:    0 BFF:    0 Progressive:  448 Undetermined:    53<br />
[Parsed_idet_0 @ 000001de26daf880] Multi frame detection: TFF:    0 BFF:    0 Progressive:  495 Undetermined:    6<br />
[out#0/null @ 000001de269c7e00] video:203KiB audio:0KiB subtitle:0KiB other streams:0KiB global headers:0KiB muxing overhead: unknown<br />
frame=  500 fps=0.0 q=-0.0 Lsize=N/A time=00:00:16.68 bitrate=N/A speed= 294x elapsed=0:00:00.05</blockquote>
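For reference, idet statistics like the ones quoted above come from an invocation along these lines (the filename and the seek offset here are placeholders, not taken from the original post):

```shell
# Run ffmpeg's interlace-detection (idet) filter over ~500 frames, discarding
# the decoded output; the filter prints its counters when it finishes.
# "movie.vob" and the -ss offset are placeholders for the actual source.
ffmpeg -hide_banner -ss 1696 -i movie.vob -an -frames:v 500 -vf idet -f null -
```

In the log above, "Multi frame detection" reporting 495 of 500 frames as Progressive suggests the sampled section is largely progressive content, whatever the container's BFF field-order flag says.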
<br />
<br />
I told ChatGPT, and it says the video is interlaced BFF but with a lot of progressive and undetermined frames   <img src="https://forum.selur.net/images/smilies/huh.png" alt="Huh" title="Huh" class="smilie smilie_17" /><br />
I put it in Hybrid, and it says it is TFF.<br />
<br />
How should I handle this?]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Using Stable Diffision models for Colorization]]></title>
			<link>https://forum.selur.net/thread-4287.html</link>
			<pubDate>Sun, 14 Dec 2025 19:08:57 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=881">Dan64</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4287.html</guid>
<description><![CDATA[Recently I received some requests to include Stable Diffusion models in the HAVC colorization process.<br />
<br />
So I decided to analyze the problem and to write this post to describe my findings.<br />
<br />
First of all, it is necessary to understand that Stable Diffusion models were developed for the text-to-image task: they build an image based on a textual description of it.<br />
This "specialization" represents the main problem, because I want to try to use them to color an image that is already available. <br />
<br />
For example, if I try to describe the following image to a stable diffusion model<br />
<br />
 <img src="https://forum.selur.net/attachment.php?aid=3419" loading="lazy"  alt="[Image: attachment.php?aid=3419]" class="mycode_img" /><br />
<br />
In the best case I can obtain something like this<br />
<br />
<img src="https://forum.selur.net/attachment.php?aid=3420" loading="lazy"  alt="[Image: attachment.php?aid=3420]" class="mycode_img" /><br />
<br />
So I had to develop a complex pipeline, and after many attempts I was able to obtain "decent" colored images using the following models in the colorization pipeline:<br />
<br />
1) Juggernaut-XL_v9_RunDiffusionPhoto_v2 (for the text-to-image colorization)<br />
2) control-LoRA-recolor-rank256 (a LoRA specialized to force the stable diffusion model to produce an image equal to the source in the gray-space)<br />
3) DDColor_Modelscope (to provide a colored image as reference)<br />
4) Qwen3-VL-2B (to describe the image produced by DDColor and provide the "text" to Juggernaut, which tries to "mimic" DDColor)<br />
<br />
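Conceptually, the four stages chain together like this (a runnable sketch with every model call stubbed out; in the real workflow each stub is a ComfyUI node loading the named checkpoint):<br />

```python
def ddcolor(gray):
    # Stub for DDColor_Modelscope: produces a colorized reference frame.
    return {"source": gray, "colorized": True}

def qwen_vl_describe(reference):
    # Stub for Qwen3-VL-2B: captions the reference so its colors become text.
    return "a photo with ..."  # the real model returns a rich description

def juggernaut_xl(prompt, control_image, lora):
    # Stub for Juggernaut-XL_v9 constrained by control-LoRA-recolor-rank256,
    # which locks the output to the grayscale source so only chroma changes.
    return {"prompt": prompt, "gray_locked_to": control_image, "lora": lora}

def colorize(gray_image):
    reference = ddcolor(gray_image)          # model 3: reference colors
    prompt = qwen_vl_describe(reference)     # model 4: colors -> text
    return juggernaut_xl(prompt,             # model 1: text -> image
                         control_image=gray_image,
                         lora="control-LoRA-recolor-rank256")  # model 2

result = colorize("frame_090_gray.png")
```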
Using this pipeline I obtained the following result (source image on the left generated with AI)<br />
<br />
<img src="https://forum.selur.net/attachment.php?aid=3421" loading="lazy"  alt="[Image: attachment.php?aid=3421]" class="mycode_img" /><br />
<br />
The description of the pipeline that I used is too complex to be included in this post, but I can say that to build the colorization pipeline I used <a href="https://www.comfy.org/" target="_blank" rel="noopener" class="mycode_url">ComfyUI</a>.<br />
<br />
For those familiar with it, I've attached an image (Recolor_Workflow.png) containing the workflow that I used. <br />
You need to drag and drop the image into ComfyUI to view the workflow (it is very big). <br />
Of course, it will be necessary to install all the missing nodes and models (available at <a href="https://huggingface.co/" target="_blank" rel="noopener" class="mycode_url">Hugging Face</a>).<br />
<br />
In summary, apart from the speed (stable diffusion models are about 50x slower), I don't see any significant improvement in using stable diffusion models for the colorization process.<br />
They could be used, with the Image-to-Image or Image-Edit process, to change the colors of an already colored image to be used as reference.<br />
But this process is totally manual and cannot be included in the automatic colorization pipeline used by HAVC.<br />
 <br />
<br />
Dan<br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3419" target="_blank" title="">Original_090_small.jpg</a> (Size: 94,7 KB / Downloads: 386)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3420" target="_blank" title="">Juggernaut_00090_color_small.jpg</a> (Size: 20,38 KB / Downloads: 375)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3421" target="_blank" title="">Compare3_0003.jpg</a> (Size: 69,61 KB / Downloads: 387)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="PNG Image" border="0" alt=".png" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3422" target="_blank" title="">Recolor_Workflow.png</a> (Size: 226,29 KB / Downloads: 113)
<!-- end: postbit_attachments_attachment -->]]></description>
			<content:encoded><![CDATA[Recently I received some requests to include stable diffusion models in the HAVC colorization process.<br />
<br />
So I decided to analyze the problem and to write this post to describe my findings.<br />
<br />
First of all, it is necessary to understand that stable diffusion models were developed for the text-to-image process: they build an image from a textual description of it.<br />
This "specialization" represents the main problem, because I want to use them to color an image that is already available. <br />
<br />
For example, if I try to describe the following image to a stable diffusion model<br />
<br />
 <img src="https://forum.selur.net/attachment.php?aid=3419" loading="lazy"  alt="[Image: attachment.php?aid=3419]" class="mycode_img" /><br />
<br />
In the best case I can obtain something like this<br />
<br />
<img src="https://forum.selur.net/attachment.php?aid=3420" loading="lazy"  alt="[Image: attachment.php?aid=3420]" class="mycode_img" /><br />
<br />
So I had to develop a complex pipeline, and after many attempts I was able to obtain "decent" colored images using the following models in the colorization pipeline:<br />
<br />
1) Juggernaut-XL_v9_RunDiffusionPhoto_v2 (for the text-to-image colorization)<br />
2) control-LoRA-recolor-rank256 (a LoRA specialized to force the stable diffusion model to produce an image equal to the source in the gray-space)<br />
3) DDColor_Modelscope (to provide a colored image as reference)<br />
4) Qwen3-VL-2B (to describe the image produced by DDColor and provide the "text" to Juggernaut, which tries to "mimic" DDColor)<br />
<br />
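Conceptually, the four stages chain together like this (a runnable sketch with every model call stubbed out; in the real workflow each stub is a ComfyUI node loading the named checkpoint):<br />

```python
def ddcolor(gray):
    # Stub for DDColor_Modelscope: produces a colorized reference frame.
    return {"source": gray, "colorized": True}

def qwen_vl_describe(reference):
    # Stub for Qwen3-VL-2B: captions the reference so its colors become text.
    return "a photo with ..."  # the real model returns a rich description

def juggernaut_xl(prompt, control_image, lora):
    # Stub for Juggernaut-XL_v9 constrained by control-LoRA-recolor-rank256,
    # which locks the output to the grayscale source so only chroma changes.
    return {"prompt": prompt, "gray_locked_to": control_image, "lora": lora}

def colorize(gray_image):
    reference = ddcolor(gray_image)          # model 3: reference colors
    prompt = qwen_vl_describe(reference)     # model 4: colors -> text
    return juggernaut_xl(prompt,             # model 1: text -> image
                         control_image=gray_image,
                         lora="control-LoRA-recolor-rank256")  # model 2

result = colorize("frame_090_gray.png")
```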
Using this pipeline I obtained the following result (source image on the left generated with AI)<br />
<br />
<img src="https://forum.selur.net/attachment.php?aid=3421" loading="lazy"  alt="[Image: attachment.php?aid=3421]" class="mycode_img" /><br />
<br />
The description of the pipeline that I used is too complex to be included in this post, but I can say that to build the colorization pipeline I used <a href="https://www.comfy.org/" target="_blank" rel="noopener" class="mycode_url">ComfyUI</a>.<br />
<br />
For those familiar with it, I've attached an image (Recolor_Workflow.png) containing the workflow that I used. <br />
You need to drag and drop the image into ComfyUI to view the workflow (it is very big). <br />
Of course, it will be necessary to install all the missing nodes and models (available at <a href="https://huggingface.co/" target="_blank" rel="noopener" class="mycode_url">Hugging Face</a>).<br />
<br />
In summary, apart from the speed (stable diffusion models are about 50x slower), I don't see any significant improvement in using stable diffusion models for the colorization process.<br />
They could be used, with the Image-to-Image or Image-Edit process, to change the colors of an already colored image to be used as reference.<br />
But this process is totally manual and cannot be included in the automatic colorization pipeline used by HAVC.<br />
 <br />
<br />
Dan<br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3419" target="_blank" title="">Original_090_small.jpg</a> (Size: 94,7 KB / Downloads: 386)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3420" target="_blank" title="">Juggernaut_00090_color_small.jpg</a> (Size: 20,38 KB / Downloads: 375)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="JPG Image" border="0" alt=".jpg" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3421" target="_blank" title="">Compare3_0003.jpg</a> (Size: 69,61 KB / Downloads: 387)
<!-- end: postbit_attachments_attachment --><br /><!-- start: postbit_attachments_attachment -->
<br /><!-- start: attachment_icon -->
<img src="https://forum.selur.net/images/attachtypes/image.png" title="PNG Image" border="0" alt=".png" />
<!-- end: attachment_icon -->&nbsp;&nbsp;<a href="attachment.php?aid=3422" target="_blank" title="">Recolor_Workflow.png</a> (Size: 226,29 KB / Downloads: 113)
<!-- end: postbit_attachments_attachment -->]]></content:encoded>
		</item>
		<item>
			<title><![CDATA[Removing reel-change markings in Hybrid]]></title>
			<link>https://forum.selur.net/thread-4286.html</link>
			<pubDate>Fri, 12 Dec 2025 19:00:28 +0100</pubDate>
			<dc:creator><![CDATA[<a href="https://forum.selur.net/member.php?action=profile&uid=4589">Oldermediaformats</a>]]></dc:creator>
			<guid isPermaLink="false">https://forum.selur.net/thread-4286.html</guid>
			<description><![CDATA[I have some LD recordings that have change-cue marks/flags in the upper-right corner of the image that I'd like to expunge. Are there any filters in Hybrid for scene-change detection?]]></description>
			<content:encoded><![CDATA[I have some LD recordings that have change-cue marks/flags in the upper-right corner of the image that I'd like to expunge. Are there any filters in Hybrid for scene-change detection?]]></content:encoded>
		</item>
	</channel>
</rss>