I just completed the development of a Vapoursynth filter for Deoldify.
As you probably already know, Deoldify is a Deep Learning based project for colorizing and restoring old images and video.
Currently it is possible to use Deoldify via Jupyter or Stable Diffusion.
Now with this implementation it will be possible to use it directly in Vapoursynth.
To use this filter it is necessary to install fastai version 1.0.60 with the command
python -m pip install fastai==1.0.60
Deoldify is delivered with its own version of fastai, so this installation is needed "only" to pull in the dependencies.
After the installation of fastai it is necessary to delete it from "Hybrid\64bit\Vapoursynth\Lib\site-packages" to avoid conflicts with the Deoldify version (it seems that this problem arises only when Deoldify is inside a package).
To install the filter it is necessary to unzip the attached file vsdeoldify-1.0.0.zip into "Hybrid\64bit\Vapoursynth\Lib\site-packages".
I have not included the models. They must be downloaded from: https://github.com/jantic/DeOldify
and installed in the folder: "Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\models"
I also attached the file Example.zip with a simple example.
The usage is the following:
from vsdeoldify import ddeoldify
clip=ddeoldify(clip, model, render_factor, device_index, post_process)
where
- clip: Clip to process. Only RGB24 format is supported.
- model: Model to use (default 0).
0 = ColorizeVideo_gen
1 = ColorizeStable_gen
2 = ColorizeArtistic_gen
- render_factor: render factor for the model.
- device_index: Device ordinal of the GPU, choices: GPU0...GPU7, CPU=99 (default 0)
- post_process: post_process takes advantage of the fact that human eyes are much less sensitive to imperfections in chrominance than in luminance, lowering the memory usage (default True)
This is a draft version. It would be nice if you could install it on your side to check whether it works on your installation.
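Putting the installation and usage notes together, a minimal VapourSynth script could look like the sketch below. Only the RGB24 requirement and the ddeoldify() call come from the description above; the source filter, file name and matrix are illustrative assumptions, and the import guard just lets the sketch degrade gracefully where VapourSynth is not installed.

```python
# Sketch of a VapourSynth script using vsdeoldify. "input.mkv", the
# LWLibavSource loader and the 709 matrix are assumptions for illustration;
# the filter itself only accepts RGB24, hence the conversion.
try:
    import vapoursynth as vs
    from vsdeoldify import ddeoldify

    core = vs.core
    clip = core.lsmas.LWLibavSource(source="input.mkv")
    # convert to RGB24 first, the only format the filter supports
    clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")
    clip = ddeoldify(clip, model=0, render_factor=21,
                     device_index=0, post_process=True)
    clip.set_output()
    HAVE_VS = True
except Exception:
    HAVE_VS = False  # VapourSynth/vsdeoldify not available in this environment
```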
Nice.
Sadly, I'm busy today and tomorrow, but I'll test it and report back on Thursday after work.
How does it compare to ddcolor?
iirc render_factor slowed down deoldify quite a bit, but helped with stabilizing the colors. (iirc. one needed to use 20+ to get usable results)
do you have any experience with this (it's quite a while since I last used deoldify)
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
(27.02.2024, 18:37)Selur Wrote: How does it compare to ddcolor?
It depends on the model used; the Video model in "deoldify" has been calibrated to be flicker-free on videos. Unfortunately in "ddcolor" all the models show some flickering.
The colors in "ddcolor" are more saturated, while "deoldify" is more conservative and the colors are desaturated in order to avoid the flickering.
The two models have different behaviors, and maybe the best result can be reached by using them in combination (see Merge()).
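As a toy illustration of that combination idea (not the actual filter code): blending the two outputs per pixel is what a plain std.Merge() of the two clips would do. The RGB values below are made up.

```python
# Blend two (R, G, B) pixels, mimicking what VapourSynth's std.Merge() does
# per sample: weight is the share taken from the second clip.
def merge_pixels(a, b, weight=0.5):
    """Blend two (R, G, B) tuples; weight is the contribution of `b`."""
    return tuple(round(x * (1 - weight) + y * weight) for x, y in zip(a, b))

desaturated = (120, 110, 100)   # deoldify-style, conservative colors
saturated = (160, 90, 60)       # ddcolor-style, stronger colors
print(merge_pixels(desaturated, saturated))  # (140, 100, 80)
```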
(27.02.2024, 18:37)Selur Wrote: iirc render_factor slowed down deoldify quite a bit, but helped with stabilizing the colors. (iirc. one needed to use 20+ to get usable results)
do you have any experience with this (it's quite a while since I last used deoldify)
Yes, the render_factor controls the number of iterations for the convergence of the colors to be assigned to the image.
Higher values decrease the speed.
I forgot to specify that a reasonable range of values for render_factor is between 10 and 44, with 21 being a good default.
In terms of speed, render_factor affects the speed like input_size does in "ddcolor".
With render_factor=30, "deoldify" has about the same speed as "ddcolor" with input_size=1024.
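For reference, upstream DeOldify colorizes on a downscaled square copy of the frame whose side is render_factor times a fixed render_base of 16, then transfers the colors back to the full-resolution frame; that is why render_factor scales the cost much like input_size does in ddcolor. A quick sanity check of the resulting internal resolutions:

```python
# render_base is 16 in the upstream DeOldify code; the model runs on a
# square of side render_factor * render_base, independent of the source size.
RENDER_BASE = 16

def render_size(render_factor: int) -> int:
    """Side length in pixels of DeOldify's internal render square."""
    return render_factor * RENDER_BASE

for rf in (10, 21, 30, 44):
    print(rf, "->", render_size(rf))  # 21 -> 336
```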
=> seems to work (~5.4GB VRAM used)
Increasing the render_factor to 40 roughly doubled the VRAM usage.
Using a 4k source didn't really increase the VRAM usage a lot (maybe 20-25%).
(yes, deoldify isn't good with nature. )
I then threw some 1080p content at it
(again, not suited for nature content)
on the last I also used ddColor for comparison:
So conclusion: It's slow, but working.
Also found a bug in my code regarding handling of custom sections.
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
The resnet network is necessary for torch and is automatically downloaded (only the first time).
It is the backbone of the generator; objects are detected more consistently and correctly with it.
I cannot do anything about that issue.
As you can note, the colors provided by ddcolor are more saturated than the colors applied by deoldify.
For amusement I compared the effect of render_factor and input_size on deoldify and ddcolor
28.02.2024, 05:51 (This post was last modified: 28.02.2024, 06:20 by Selur.)
Quote:The main differences are in the color of the hair and hands.
It's all the colors. See the color of the tree (on the right) and the floors.
Quote:Are you planning to add this filter in Hybrid ?
Not sure atm., will have to test whether:
a. one can download the "resnet101-63fe2227.pth" model beforehand and maybe place it in the models folder (<- that doesn't work).
I really don't like the idea of Hybrid triggering 'random' downloads.
b. I manage to put all the dependencies & co. together so that there is a single download I could add as a separate addon-package. (Alternatively, writing down the steps to install this manually would also be possible, but I prefer the package, since it better allows reproducing problems.)
Quote:As I wrote none of the filter is perfect, maybe it is possible to obtain a better result by combining them.
Not sure whether that will work, since iirc. both of the filters assume gray-colored content and don't try to just fix some colors.
I doubt that something like a simple merge or some masked merges will work.
Also the video model does seem to add some blue halos.
Cu Selur
Ps.: main issue for me are the additional downloads. Not sure whether changing 'TORCH_HOME' environment variable a. will work b. will not cause issues with other filters.
Creating 'f:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\models\resnet\hub\checkpoints' with the files in it and using:
import os
os.environ['TORCH_HOME'] = 'F:/Hybrid/64bit/Vapoursynth/Lib/site-packages/vsdeoldify/models/resnet'
does seem to work, but no clue whether it causes any issues with other filters (mlrt or torch filters).
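The reason this works: torch resolves its cache root from the TORCH_HOME environment variable (falling back to ~/.cache/torch) and stores downloaded weights under "<cache root>/hub/checkpoints". A simplified stand-in for that lookup, just to document the behavior the snippet above relies on:

```python
import os

# Simplified stand-in for torch.hub's directory resolution: $TORCH_HOME wins,
# otherwise $XDG_CACHE_HOME/torch, otherwise ~/.cache/torch. Downloaded
# checkpoints end up in "<result>/checkpoints".
def torch_hub_dir(env=None):
    env = os.environ if env is None else env
    home = env.get("TORCH_HOME")
    if not home:
        cache = env.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
        home = os.path.join(cache, "torch")
    return os.path.join(home, "hub")

print(torch_hub_dir(
    {"TORCH_HOME": "F:/Hybrid/64bit/Vapoursynth/Lib/site-packages/vsdeoldify/models/resnet"}))
```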
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
The download of resnet101 is triggered by "fastai vision" in the files presnet.py, xresnet.py and xresnet2.py.
The function used is torch.utils.model_zoo.load_url().
The logic is embedded in the fastai version bundled with Deoldify, more precisely in the function:
def xresnet101(pretrained=False, **kwargs):
    """Constructs a XResNet-101 model.

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = XResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained: model.load_state_dict(model_zoo.load_url(model_urls['xresnet101']))
    return model
The use of the ".cache" folder is embedded directly in torch; I don't think it is possible to change this logic.
In the meanwhile I discovered that it is not a problem to have "fastai==1.0.60" installed alongside vsdeoldify.
It was a problem for me during the development, but once all the "imports" are correctly assigned, the problem disappears.
I fixed some small issues in the package; I attached a new version.