
DeepEnhancer
#11
Dear Selur and Dan64,

First of all, thank you both for your feedback and for taking the time to look into this! I really appreciate the attention you’ve given to the test.

That being said, I have to disagree—I find DeepEnhancer to be significantly more effective than Spotless in my tests. The level of detail preservation and artifact reduction is much better.

If you’d like to try it yourself, here are the pre-trained models:
👉 https://drive.google.com/file/d/1ViZjRQ9...sp=sharing

Also, Dan64, I wanted to mention that DeepEnhancer can also be used for colorization! Unfortunately, I wasn’t able to get the colorization script working on my side, but I’m sure you’d have no trouble with it. 😆 It looks very promising, and I’d love to hear your thoughts if you manage to test it.

I believe this could be a great addition to Hybrid, bringing even more value to its filtering options. Let me know what you think!

Best
#12
Hi djilayeden, here is a short 6-second clip of damaged film footage that I could never clean up. Perhaps it's unrepairable. Please process it with DeepEnhancer and let's see how well it works.


Attached Files
.zip   Test_video.zip (Size: 3,68 MB / Downloads: 7)
#13
Problem: I can't get it working in Hybrid's portable Python 3.12 environment.
Inside the Vapoursynth folder, I called:
git clone https://github.com/jiangqin567/DeepEnhancer.git
then I extracted pretrained_models.zip into the DeepEnhancer folder.
Since the site didn't mention any dependencies, I called:
SET CUDA_VISIBLE_DEVICES=0 python Lib\site-packages\DeepEnhancer\test_demo.py
which returned:
Traceback (most recent call last):
  File "f:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
    from basicsr.data.film_dataset import resize_240_short_side
ModuleNotFoundError: No module named 'basicsr'
So I installed it ('python -m pip install basicsr'), called test_demo.py again, and got:
Traceback (most recent call last):
  File "f:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
    from basicsr.data.film_dataset import resize_240_short_side
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\__init__.py", line 3, in <module>
    from .archs import *
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\__init__.py", line 16, in <module>
    _arch_modules = [importlib.import_module(f'basicsr.archs.{file_name}') for file_name in arch_filenames]
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "importlib\__init__.py", line 90, in import_module
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\basicvsrpp_arch.py", line 7, in <module>
    from basicsr.archs.arch_util import flow_warp
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\arch_util.py", line 6, in <module>
    from distutils.version import LooseVersion
ModuleNotFoundError: No module named 'distutils'
which is a problem, since distutils was removed in Python 3.12, which is required for Vapoursynth R70 (the upcoming R71 will require Python 3.13).
(see: https://docs.python.org/3/library/distutils.html)

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#14
Installing looseversion:
python -m pip install looseversion
and changing line 6 of "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\archs\arch_util.py" from
from distutils.version import LooseVersion
to
from looseversion import LooseVersion
seems to fix the above problem.
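To keep basicsr itself untouched, the same fix can be expressed as an import fallback. A minimal sketch (the inline class is a simplified stand-in for illustration only; the real replacement is the `looseversion` package):

```python
try:
    from distutils.version import LooseVersion  # works on Python <= 3.11
except ModuleNotFoundError:
    # distutils was removed in Python 3.12. The proper fix is
    # "pip install looseversion" and "from looseversion import LooseVersion";
    # this tiny class only mimics the comparison behaviour for the demo below.
    class LooseVersion:
        def __init__(self, vstring):
            self.version = [int(p) if p.isdigit() else p
                            for p in vstring.split(".")]

        def __lt__(self, other):
            return self.version < other.version

        def __eq__(self, other):
            return self.version == other.version

# Version strings still compare numerically, not lexically:
print(LooseVersion("0.17.0") > LooseVersion("0.2.0"))  # True
```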
But now calling:
SET CUDA_VISIBLE_DEVICES=0 python Lib\site-packages\DeepEnhancer\test_demo.py
doesn't output anything. (In cmd.exe, SET takes everything after the '=' as the value, so this single line assigns '0 python ...' to CUDA_VISIBLE_DEVICES and never runs Python; the SET needs to be a separate command.)
Removing 'SET CUDA_VISIBLE_DEVICES=0' does seem to start, but stops after:
Traceback (most recent call last):
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\DeepEnhancer\test_demo.py", line 14, in <module>
    from basicsr.data.film_dataset import resize_240_short_side
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\__init__.py", line 4, in <module>
    from .data import *
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\__init__.py", line 22, in <module>
    _dataset_modules = [importlib.import_module(f'basicsr.data.{file_name}') for file_name in dataset_filenames]
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "importlib\__init__.py", line 90, in import_module
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\realesrgan_dataset.py", line 11, in <module>
    from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
  File "F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\basicsr\data\degradations.py", line 8, in <module>
    from torchvision.transforms.functional_tensor import rgb_to_grayscale
ModuleNotFoundError: No module named 'torchvision.transforms.functional_tensor'

The torchvision.transforms.functional_tensor module was removed in 0.17.
Current dev torch-add-on uses:
torchvision                 0.22.0.dev20250325+cu128
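For reference, the usual workaround for this class of error is to register the old module path in `sys.modules` before basicsr is imported, pointing it at the module that still provides the function. A synthetic sketch of the technique (the `pkg.oldmod`/`pkg.newmod` names are made up for the demo; in the real case one would alias `torchvision.transforms.functional_tensor` to `torchvision.transforms.functional`, which still exports `rgb_to_grayscale`):

```python
import sys
import types

# Build a fake package whose "newmod" replaced the removed "oldmod".
pkg = types.ModuleType("pkg")
newmod = types.ModuleType("pkg.newmod")
newmod.rgb_to_grayscale = lambda frame: frame  # stand-in for the real function
pkg.newmod = newmod
sys.modules["pkg"] = pkg
sys.modules["pkg.newmod"] = newmod

# The shim: legacy imports of the removed module resolve to the new one.
sys.modules["pkg.oldmod"] = newmod

from pkg.oldmod import rgb_to_grayscale  # succeeds via the alias
```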

=> Not looking into this further.

Cu Selur
#15
I was able to use this package.

I had to create an environment with the following packages:

# Name                    Version
albumentations            2.0.5
matplotlib                3.10.1
mmcv                      2.2.0
mmcv-full                 1.7.2
numpy                     2.1.2
nvidia-cuda-runtime-cu12  12.8.90
opencv-python             4.11.0.86
opencv-python-headless    4.11.0.86
pillow                    11.0.0
pip                       25.0
scikit-image              0.25.2
scipy                     1.15.2
setuptools                75.8.0
tensorrt                  10.9.0.34
tensorrt-cu12             10.9.0.34
tensorrt-cu12-bindings    10.9.0.34
tensorrt-cu12-libs        10.9.0.34
torch                     2.8.0.dev20250314+cu128
torchaudio                2.6.0.dev20250315+cu128
torchvision               0.22.0.dev20250315+cu128

I don't know what happens on Linux, but on Windows, running the demo gives an out-of-memory error, despite the fact that I have a 16GB GPU.

So I had to reduce the size of frames in input to get this package working in Windows.
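The frame-size workaround can be sketched like this (a pure-Python nearest-neighbour stand-in; in a real pipeline one would resize each frame before inference, e.g. with `cv2.resize`):

```python
def downscale(frame, factor=2):
    """Nearest-neighbour downscale of a frame stored as a list of rows:
    keep every `factor`-th row and every `factor`-th column."""
    return [row[::factor] for row in frame[::factor]]

# An 8x8 dummy "frame" of pixel values becomes 4x4 after downscaling,
# i.e. a quarter of the pixels, and correspondingly less GPU memory.
frame = [[y * 10 + x for x in range(8)] for y in range(8)]
small = downscale(frame, factor=2)
print(len(small), len(small[0]))  # 4 4
```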

The colorization capability is very poor (see picture below).

[Image: attachment.php?aid=3059]

Probably the model was trained to colorize well only the clip provided as a demo.

The restoration part is better, but in my opinion Hybrid has better tools.

I attached some clips restored using DeepEnhancer, BasicVSR++ (Video Denoising), and BasicVSR++ (NTIRE 2021 (3)).

In my opinion BasicVSR++ (NTIRE 2021 (3)) is better.

I have no intention of spending any more time on this project.

Dan


Attached Files Thumbnail(s)
   

.zip   Test_DeepEnhancer_restore.zip (Size: 2,2 MB / Downloads: 6)
#16
Yeah, I figured that using an older environment might work, but that isn't really an option for Hybrid.
#17
Dear Selur and Dan,
Thank you very much for your attention and for taking the time to look at the test and consider the suggestion.
Even if the project won't be supported further, I truly appreciate your time, your amazing work, and everything you've already done for the restoration community.
Best regards,
#18
Thank you for working on this clip; it's always nice to see a comparison.
The results of DeepEnhancer and BasicVSR++ (Video Denoising) are nearly identical.
BasicVSR++ (NTIRE 2021 (3)) reminds me of Topaz software: it looks great on objects, very clean, but there is something unnatural about people's hair and faces.
#19
BasicVSR++ is just so aggressive because Dan64 reduced the resolution so much (BasicVSR++ gets more aggressive at lower resolutions).
Not using machine learning stuff, I also had a quick go at that file.
output: https://www.mediafire.com/file/1oo9c0o24...t.mp4/file
script: https://pastebin.com/EhxAx0Gy

Cu Selur
#20
Hi,

I've noticed that DeepEnhancer performs better than BasicVSR_NTIRE when it comes to removing spots and blemishes—especially the larger ones often found in films from the 1920s, 30s, and 40s. It seems that BasicVSR_NTIRE may have been pre-trained on modern, cleaner footage, which makes it less effective on heavily damaged historical material.

In comparison, DeepEnhancer provides stronger results in terms of restoration. However, as our friend Selur pointed out, it can sometimes be a bit aggressive—so it's a matter of balancing use cases and fine-tuning.

Still, I'm really impressed with how well DeepEnhancer handles old footage!

Best regards,


Attached Files
.zip   Test Restoration.zip (Size: 4,46 MB / Downloads: 6)

