
Deoldify Vapoursynth filter
Dear Dan64 and Selur,

I would like to extend a tremendous tribute to both of you for the incredible work you've done with Hybrid HAVC, especially in the film colorization process. Your expertise and dedication in advancing this tool are truly inspiring. The level of quality and precision you’ve achieved is simply remarkable.

I sincerely hope that you continue to develop and enhance this wonderful tool. Your contribution to film colorization is invaluable, and I’m sure you will keep pushing the boundaries of what can be accomplished in this field.

Thank you for everything you’ve done and continue to do.

Best regards,
Hello Selur,

   I will try to explain why I decided to resurrect DeepRemaster.
  
The main reason is that, after learning more about exemplar-based models, I was able to integrate DeepRemaster properly into HAVC.
Compared to the previously implemented version, I added the following improvements:

1) the possibility to read the reference frames directly from a clip (the previous version could only read reference images from a folder)
2) a buffered read of the reference clip, so that the start of the filter is not slowed down too much; without this buffered read, starting the clip could take more than 5 minutes
3) a feed strategy so that DeepRemaster is provided with 50% past reference frames and 50% future reference frames
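The buffered read in point 2 can be sketched in plain Python. This is only an illustration of the idea (decode the reference clip in chunks instead of all at once), not the actual HAVC code; the function name and buffer size are hypothetical:

```python
from collections import deque

def buffered_frames(read_frame, num_frames, buffer_size=50):
    """Lazily yield reference frames, prefetching at most `buffer_size`
    at a time instead of decoding the whole reference clip up front.

    `read_frame` is any callable mapping a frame index to a decoded
    frame; names and the buffer size are illustrative only.
    """
    buf = deque()
    for idx in range(num_frames):
        if not buf:
            # refill: decode the next chunk of frames in one pass
            stop = min(idx + buffer_size, num_frames)
            buf.extend(read_frame(i) for i in range(idx, stop))
        yield buf.popleft()
```

With this scheme only the first `buffer_size` reference frames must be decoded before the filter can emit its first colored frame, which is what avoids the multi-minute startup delay.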

While ColorMNet works quite well, it has the problem that it cannot handle future frames properly. In effect, ColorMNet does not store the full reference frame image (as DeepRemaster does) but only key points (e.g., representative pixels in each frame). This implies that the colored frames can contain colors very different from the reference image. DeepRemaster does not have this problem, since it stores the full reference image. Unfortunately, the number of reference images that DeepRemaster can use depends on GPU memory and power, because the time required for inference increases with the number of reference images provided.

The problem of gray images in fast-moving scenes mainly affects the exemplar-based models, especially DeepRemaster, because when there are fast-moving objects in a clip it is necessary to provide more reference images (for example, 4-5 frames per second). ColorMNet has some interpolation capability, while DeepRemaster is very basic and cannot properly colorize a frame if a very similar reference image is missing.

Then there is the classical problem of gray noses or ears. This problem affects all the models, especially the exemplar-based ones. The reason is that, despite the improvement of the technology, the current feature-recognition models like ResNet50 are still trained on frames of size 224x224. At such a small resolution it is very difficult to recognize a nose or an ear.
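A bit of arithmetic shows why a 224x224 input loses small facial features. The numbers below are hypothetical, just to illustrate the scale of the problem:

```python
# Hypothetical numbers: a nose ~30 px wide in a 1440x1080 source frame.
# After the frame is resized to the 224x224 input of a ResNet50-style
# network, the feature shrinks to:
src_width = 1440   # assumed source frame width
feature_px = 30    # assumed width of a nose/ear in the source
scaled_px = feature_px * 224 / src_width
print(round(scaled_px, 1))  # about 4.7 pixels: too few to recognize reliably
```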

     You can find some info in this post: Questions regarding ColorMNet

I recently tried to colorize the movie Night of the Living Dead (1968). For this movie a small colored version was available: notld_201610

The problem was that the two movies were out of sync: a difference of about 400 frames was distributed randomly between them. Using ColorMNet (included in HAVC 4.6.8) and providing the colored version as reference was a failure. So I implemented the "remote all-ref" version, which in theory should be able to handle reference frames not in sync with the clip to colorize. But even this attempt was a failure. The colored movie was quite good, but the color accuracy was lost; for example, the pink/magenta car seen at the beginning of the film was painted gray by ColorMNet.

I was surprised to discover that DeepRemaster was able to handle the out-of-sync reference frames perfectly well and, at the same time, to preserve the colors of the reference images very faithfully.

The movie colored using DeepRemaster is available at the following link: Night of the Living Dead (colorized, 1968)
If you see some strange colors, it is not a failure of DeepRemaster: those colors were already present in the reference frames (ColorMNet does not have this problem, but its color accuracy is lost).

This is the main reason why I introduced method = 5.

Dan
Today I tested DeepRemaster with the movie Casablanca (1942).
For this source a low-resolution (and low-quality format) video is available: Casablanca InColor

I had already tried to colorize this movie using ColorMNet, but I was not satisfied with the result, because the two movies are not in sync: there is a difference of about 200 frames randomly distributed between them.

In the attached archive there are 2 small clips:

1) 1942_Casablanca_colormnet_blue-parrot.mp4  - movie colored using ColorMNet
2) 1942_Casablanca_deepremaster_blue-parrot.mp4 - movie colored using DeepRemaster  

In the first clip, it is possible to see that the blue parrot is not blue at the beginning and only becomes blue after a few frames, because the source used for reference was not in sync with the B&W movie to colorize.

In the second clip I used DeepRemaster to colorize the clip, and in this case the blue parrot is rendered perfectly: DeepRemaster was able to cope with the fact that the reference frames provided in input were not in sync with the movie.

This result was possible because I adopted the strategy of providing DeepRemaster with 50% past reference images and 50% future reference images, relative to the frame to be colored. In this way DeepRemaster can handle situations where the reference frames are either ahead of or behind the frame to colorize.
Differently from ColorMNet, DeepRemaster stores the full reference frames in a tensor array, and in this way it is able to apply the colors properly, without compromise.
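The 50/50 past/future selection can be sketched in plain Python. This is a simplified illustration of the strategy described above, not the actual HAVC code; the function name is hypothetical:

```python
def pick_references(ref_indices, frame_idx, num_refs):
    """Choose ~50% past and ~50% future reference frames around frame_idx.

    ref_indices: sorted frame indices for which a reference image exists.
    Returns the nearest references on each side of the current frame.
    Illustrative sketch only, not the actual HAVC/DeepRemaster code.
    """
    past = [i for i in ref_indices if i <= frame_idx]
    future = [i for i in ref_indices if i > frame_idx]
    half = num_refs // 2
    # nearest `half` past references plus the remaining quota of future ones
    chosen = past[-half:] + future[:num_refs - half]
    return sorted(chosen)
```

Because the chosen window always straddles the current frame, a reference frame that is a few hundred positions "early" or "late" relative to the B&W clip still falls inside the window, which is why the out-of-sync Casablanca reference could be used.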

For restoring colored videos, DeepRemaster is probably the best tool currently available in HAVC, and I'm happy to have added it to the HAVC filter.

Dan


Attached Files
.zip   Casablanca (1942) sample.zip (Size: 1,16 MB / Downloads: 9)
so the latest Hybrid_deoldify worked as expected?
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
Until now I performed the tests using VapourSynth scripts directly.

Your version was not working as expected, but before bothering you with the fixes I preferred to spend some time testing DeepRemaster more thoroughly, to verify whether it was really worth adding method=5.

My tests confirmed the good performance of DeepRemaster in restoring old colored videos, so I decided to rename method=5 to "HAVC video restore" and to write a specific function, HAVC_restore_video, to be called directly when method=5 is selected.

But I don't want to bore you with further enhancements, so before providing the new RC2 I'd like to understand whether you are interested in supporting this functionality in Hybrid.

Thanks,
Dan
I attached the new RC2.

I added the following specific function:

HAVC_restore_video(clip: vs.VideoNode = None, clip_ref: vs.VideoNode = None, render_speed: str = 'medium',
                       ex_model: int = 0, ref_thresh: float = None, ref_freq: int = None,
                       max_memory_frames: int = 0, render_vivid: bool = False, encode_mode: int = 2,
                       torch_dir: str = model_dir) -> vs.VideoNode

It is a simplified version of HAVC_deepex, and most of the parameters are the same.

In this way, when method = 5 is selected, it is no longer necessary to call HAVC_ddeoldify and HAVC_deepex.

It is just necessary to call HAVC_restore_video directly.

What is important is that clip_ref receives in input the RGB24 clip provided in the field sc_framedir.
It is not necessary for this clip to have the same number of frames, or the same width/height, as the clip to colorize.
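For reference, a minimal VapourSynth script using the new function might look like the sketch below. This is an assumption-laden illustration, not tested code: the source filter, file names, and resize parameters are placeholders, and it requires a working VapourSynth environment with vsdeoldify RC2 installed. Only clip, clip_ref, and render_speed from the signature above are used.

```python
# Sketch only: file names, source filter and matrix are placeholders.
import vapoursynth as vs
from vsdeoldify import HAVC_restore_video

core = vs.core

clip = core.lsmas.LWLibavSource("movie_bw.mkv")           # clip to restore
ref = core.lsmas.LWLibavSource("movie_color_lowres.mp4")  # colored reference

# both clips converted to RGB24 as required by HAVC
clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709")
ref = core.resize.Bicubic(ref, format=vs.RGB24, matrix_in_s="709")

# the reference clip may differ in length and resolution from `clip`
restored = HAVC_restore_video(clip=clip, clip_ref=ref, render_speed='medium')
restored.set_output()
```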

If you think that this implementation is not feasible, please just remove method = 5 from the list of available methods in the GUI.

Thanks,
Dan


Attached Files
.zip   vsdeoldify-5.0.0_RC2.zip (Size: 422,1 KB / Downloads: 3)
I will look at it tomorrow
Since I will not read your source code to figure this out, and it's not clear to me without spending tons of time trying to follow your source code:

Can you write down what should be done by Hybrid to use HAVC?
(Ideally as a graph where the edges are labeled with the requirements? Example Start)
  • 'HAVC_bw_tune(..action="ON")'
  • 'clip = HAVC_ddeoldify('
  • 'refClip = HAVC_ddeoldify('
  • loading of external clip as RGB24 into refClip
  • HAVC_deepex(..refClip='None'...)
  • HAVC_deepex(...refClip='clipRef')
  • 'HAVC_bw_tune(..action="OFF")'
  • 'HAVC_stabilizer'

Cu Selur

