Looks great, that's a clever idea to borrow the colors in such a way.
I tried to get DeepRemaster going with your instructions, just to try it out for fun. Not having much success there.
Selur, is there a chance to get a hybrid.exe version of it, as you've done for deepdeinterlace? :-)
Quote: Not having much success there.
Where is the problem?
Did you download the remasternet.pth.tar?
Cu Selur
Yes, I have placed everything in the right folders, including the remasternet.pth.
What is confusing to me is what to do next. I drop the sample video test_green_bw.mp4 onto the Hybrid Base tab, and what is the next step? Is it to go to vsViewer and open the vsremaster_test_green.vpy script? Or is it to drop the script directly onto the Hybrid Base tab? You've made everything so easy with a few clicks that we've been spoiled, lol. If the steps are too complicated to walk through, involving a greater understanding of commands, I'll understand if you skip it.
Using a custom section and adding:
Code:
# requires colorformat RGB24
# requires luma pc
from vsremaster import remaster_colorize
clip = remaster_colorize(clip=clip, length=2, render_vivid=False, ref_buffer_size=10, ref_dir=r"g:\Temp")
(the path for 'ref_dir' needs to be adjusted)
should work.
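To illustrate, here is a minimal standalone .vpy sketch of the same call. The lsmas loader, the RGB24/full-range conversion step, and the paths are assumptions for illustration, not Hybrid's generated script:
Code:
import vapoursynth as vs
from vsremaster import remaster_colorize
core = vs.core

# load the black&white sample (adjust the path)
clip = core.lsmas.LWLibavSource(source=r"g:\Temp\test_green_bw.mp4")

# remaster_colorize requires RGB24 with full ('pc') range luma
clip = core.resize.Bicubic(clip, format=vs.RGB24, matrix_in_s="709",
                           range_in_s="limited", range_s="full")

# colorize using the reference frames stored in ref_dir
clip = remaster_colorize(clip=clip, length=2, render_vivid=False,
                         ref_buffer_size=10, ref_dir=r"g:\Temp")
clip.set_output()
Opening such a script in vsViewer should then show the colorized preview.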
Cu Selur
Hello Selur,
I noted that in the function HAVC_deepex(), strings are passed to the boolean parameters "dark" and "render_vivid", as shown in the code below
Code:
clip = HAVC_deepex(clip=clip, clip_ref=clipRef, render_speed="slow", render_vivid="False", ref_merge=0, dark="True", smooth=True)
Unfortunately Python, which is not a strongly typed language, doesn't raise any warning: any non-empty string is truthy, so a value like "False" can silently behave as True...
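For reference, the corrected call passes real booleans instead of strings:
Code:
# render_vivid and dark are now proper bools, not the strings "False"/"True"
clip = HAVC_deepex(clip=clip, clip_ref=clipRef, render_speed="slow", render_vivid=False, ref_merge=0, dark=True, smooth=True)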
Dan
Uploaded a new deoldify test, which should fix the problem.
Cu Selur
Problem fixed.
Thanks,
Dan
Added a maintenance release:
https://github.com/dan64/vs-deoldify/rel...tag/v4.0.1
Apart from code clean-up and bug fixing, I added the utility function:
HAVC_extract_reference_frames
It is just a utility function that can be used by more expert users.
It is not strictly related to HAVC and does not need to be added to Hybrid.
Dan
Good news on the ColorMNet side.
The authors are working to improve their methodology.
I think that the most important improvement is the decision to use a large-pretrained visual model guided feature estimation (PVGFE) module.
Moreover, from a development point of view, they decided to move from TensorFlow to PyTorch; this switch should simplify porting it into Hybrid.
More info here:
https://github.com/yyang181/NTIRE23-VIDE.../issues/12
Dan