First of all, thank you both for your feedback and for taking the time to look into this! I really appreciate the attention you’ve given to the test.
That being said, I have to disagree—I find DeepEnhancer to be significantly more effective than Spotless in my tests. The level of detail preservation and artifact reduction is much better.
Also, Dan64, I wanted to mention that DeepEnhancer can also be used for colorization! Unfortunately, I wasn’t able to get the colorization script working on my side, but I’m sure you’d have no trouble with it. 😆 It looks very promising, and I’d love to hear your thoughts if you manage to test it.
I believe this could be a great addition to Hybrid, bringing even more value to its filtering options. Let me know what you think!
Hi djilayeden, here is a short 6-second clip sample of damaged film footage that I could never clean up. Perhaps it's unrepairable. Please process it with DeepEnhancer and let's see how well it works.
Then I extracted pretrained_models.zip into the DeepEnhancer folder.
Since the site didn't mention any dependencies, I called:
SET CUDA_VISIBLE_DEVICES=0 python Lib\site-packages\DeepEnhancer\test_demo.py
which returned:
Traceback (most recent call last):
File "f:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
from basicsr.data.film_dataset import resize_240_short_side
ModuleNotFoundError: No module named 'basicsr'
So I installed it ('python -m pip install basicsr'), called test_demo.py again, and got
Traceback (most recent call last):
File "f:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
from basicsr.data.film_dataset import resize_240_short_side
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\__init__.py", line 3, in <module>
from .archs import *
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\__init__.py", line 16, in <module>
_arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 90, in import_module
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\basicvsrpp_arch.py", line 7, in <module>
from basicsr.archs.arch_util import flow_warp
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\arch_util.py", line 6, in <module>
from distutils.version import LooseVersion
ModuleNotFoundError: No module named 'distutils'
and changing "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\arch_util.py", line 6, from
from distutils.version import LooseVersion
seems to fix the above problem.
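One possible replacement for that line on Python 3.12+, where distutils is gone, is something along these lines (a sketch, assuming the 'packaging' module is available, which pip usually pulls in):

# basicsr/archs/arch_util.py, line 6 - possible substitute for the removed distutils import
try:
    from distutils.version import LooseVersion  # still present on Python < 3.12
except ModuleNotFoundError:
    # packaging's Version parses the torchvision version strings compared in this file
    from packaging.version import Version as LooseVersion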
But now calling:
SET CUDA_VISIBLE_DEVICES=0 python Lib\site-packages\DeepEnhancer\test_demo.py
doesn't output anything.
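A likely reason for the silence: in cmd.exe, 'SET CUDA_VISIBLE_DEVICES=0 python ...' on a single line assigns the whole rest of the line to the variable and never launches Python. As a sketch of an alternative (not necessarily what the DeepEnhancer scripts expect), the GPU can be picked from inside Python instead:

# possible alternative: select the GPU in the script itself, before CUDA is initialized
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before the first CUDA call

import torch  # imported after setting the variable, to be safe
print(torch.cuda.is_available(), torch.cuda.device_count())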
Removing 'SET CUDA_VISIBLE_DEVICES=0' does let the script start, but it stops after:
Traceback (most recent call last):
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
from basicsr.data.film_dataset import resize_240_short_side
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\__init__.py", line 4, in <module>
from .data import *
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\__init__.py", line 22, in <module>
_dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 90, in import_module
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\realesrgan_dataset.py", line 11, in <module>
from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\degradations.py", line 8, in <module>
from torchvision.transforms.functional_tensor import rgb_to_grayscale
ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor'
The torchvision.transforms.functional_tensor module was removed in 0.17.
Current dev torch-add-on uses:
torchvision 0.22.0.dev20250325+cu128
=> Not looking into this further.
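(For anyone who does want to chase it: the workaround usually reported for basicsr on newer torchvision is to redirect that single import, roughly as below; untested here.)

# basicsr/data/degradations.py, line 8 - reported workaround for torchvision >= 0.17
try:
    from torchvision.transforms.functional_tensor import rgb_to_grayscale
except ModuleNotFoundError:
    # functional_tensor was removed; the public functional module offers the same helper
    from torchvision.transforms.functional import rgb_to_grayscale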
Cu Selur
----
Dev versions are in the 'experimental' folder of my GoogleDrive, which is linked on the download page.
Dear Selur and Dan,
Thank you very much for your attention and for taking the time to look at the test and consider the suggestion.
Even if the project won't be supported further, I truly appreciate your time, your amazing work, and everything you've already done for the restoration community.
Best regards,
Thank you for working on this clip, always nice to see a comparison.
The results of DeepEnhancer and BasicVSR++ (Video Denoising) are nearly identical.
BasicVSR++ (NTIRE 2021 (3)) reminds me of Topaz software. Looks great on objects, very clean, but there is something unnatural about people's hair and faces.
I've noticed that DeepEnhancer performs better than BasicVSR_NTIRE when it comes to removing spots and blemishes—especially the larger ones often found in films from the 1920s, 30s, and 40s. It seems that BasicVSR_NTIRE may have been pre-trained on modern, cleaner footage, which makes it less effective on heavily damaged historical material.
In comparison, DeepEnhancer provides stronger results in terms of restoration. However, as our friend Selur pointed out, it can sometimes be a bit aggressive—so it's a matter of balancing use cases and fine-tuning.
Still, I'm really impressed with how well DeepEnhancer handles old footage!