You are right, I added these parameters in the past but I never released a new version.
My intention is to completely rework the scene-change methodology.
I was thinking of using PySceneDetect, but I have had no time to test it until now.
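The underlying idea of a content-based scene detector can be sketched in plain Python: declare a cut when the mean absolute difference between consecutive frames exceeds a threshold. This is only an illustration of the principle, not PySceneDetect's actual API; the frame data and threshold value are made-up placeholders.

```python
def detect_scene_changes(frames, threshold=30.0):
    """Return the indices of frames that start a new scene.

    frames: list of equal-length sequences of pixel values (e.g. luma planes).
    threshold: mean absolute difference above which a cut is declared.
    """
    cuts = []
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        # Average per-pixel change between this frame and the previous one.
        diff = sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)
        if diff > threshold:
            cuts.append(i)
    return cuts

# Three flat "frames": the jump from all-5 to all-200 is a cut.
print(detect_scene_changes([[0] * 4, [5] * 4, [200] * 4]))  # [2]
```

PySceneDetect wraps the same idea (plus HSV weighting and minimum scene length) behind its ContentDetector.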
I released a new RC2 to fix some problems related to the max_memory_frames.
For unknown reasons the memory usage of ColorMNet in VapourSynth increases exponentially, while if the same code is called from a plain Python script (using frames saved on disk) the memory usage is very low, as described in the paper. But in VapourSynth it is a nightmare, because this filter takes advantage of keeping frames in memory, and I had to adapt the code to force the filter to use only a limited number of frames.
I still don't understand the reason. There is nothing on the internet about this problem, and it is not possible to fully debug VapourSynth scripts.
For the moment this is the solution I found.
Maybe I will be able to improve the memory management later, but for now this is the only solution I found (try setting max_memory_frames = 0 to see the problem).
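The capping idea can be illustrated with a fixed-size FIFO buffer: once max_memory_frames is reached, the oldest frame is evicted before a new one is stored. This is only a sketch of the mechanism, not the actual ColorMNet code; the class and method names are invented.

```python
from collections import OrderedDict

class FrameMemory:
    """FIFO cache that never holds more than max_memory_frames frames."""

    def __init__(self, max_memory_frames):
        # max_memory_frames <= 0 means "unbounded", which is the setting
        # that exposes the runaway memory growth described above.
        self.max_frames = max_memory_frames
        self.frames = OrderedDict()

    def store(self, index, frame):
        if self.max_frames > 0 and len(self.frames) >= self.max_frames:
            self.frames.popitem(last=False)  # evict the oldest frame
        self.frames[index] = frame

    def __len__(self):
        return len(self.frames)

mem = FrameMemory(max_memory_frames=2)
for n in range(5):
    mem.store(n, f"frame-{n}")
print(list(mem.frames))  # [3, 4]
```

With the cap in place, memory stays proportional to max_memory_frames instead of growing with the number of frames processed.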
:param ref_weight: if enable_refmerge = True, the weight used to merge the reference frames.
If it is not set, a value of 0.5 is assigned automatically.
This description does not match the code:
if enable_refmerge:
if ref_weight is None:
ref_weight = refmerge_weight[ref_merge]
if ref_thresh is None:
ref_thresh = 0.1
clip_sc = SceneDetect(clip, threshold=ref_thresh)
if method in (1, 2):
clip_sc = SceneDetectFromDir(clip_sc, sc_framedir=sc_framedir, merge_ref_frame=True,
ref_frame_ext=(method == 2))
else:
ref_weight = 1.0
clip_sc = None
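Extracted as a standalone function, the defaulting logic shows the mismatch: ref_weight falls back to a per-mode refmerge_weight table, not a fixed 0.5, so the docstring should be updated accordingly. The table values below are invented for illustration.

```python
def resolve_refmerge(enable_refmerge, ref_weight=None, ref_thresh=None,
                     ref_merge=1,
                     refmerge_weight=(0.0, 0.5, 0.7, 0.9)):
    """Mirror of the defaulting logic quoted above: when enable_refmerge is
    True, ref_weight falls back to refmerge_weight[ref_merge] (not always
    0.5) and ref_thresh falls back to 0.1; otherwise ref_weight is 1.0.
    """
    if enable_refmerge:
        if ref_weight is None:
            ref_weight = refmerge_weight[ref_merge]
        if ref_thresh is None:
            ref_thresh = 0.1
    else:
        ref_weight = 1.0
    return ref_weight, ref_thresh

print(resolve_refmerge(True, ref_merge=2))  # (0.7, 0.1)
print(resolve_refmerge(False))              # (1.0, None)
```

So "a value of 0.5 is assigned automatically" is only true when refmerge_weight[ref_merge] happens to be 0.5.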
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
Sent you a link to a not-properly-working dev version for testing; I'm totally unsure whether everything is en-/disabled as it should be, since some other stuff seems to have changed too.
My thoughts:
I like the new GUI.
I will test it to see if everything is OK.
Regarding the parameters ref_weight and ref_thresh: they were already available in release 4.0.0: __init__.py
But they were added to handle very specific situations, and it is right that they are not included in the GUI.
I changed HAVC_main() to also include the parameter DeepExModel (see attached RC3), so that it will be possible to select the exemplar-based model to use for color propagation.
(15.09.2024, 14:26)Selur Wrote: => does not seem like ColorMNet is used when calling:
from vsdeoldify import HAVC_main
clip = HAVC_main(clip=clip, EnableDeepEx=True, DeepExRefMerge=0, DeepExModel=0)
There was no download,... => What should be downloaded, and to where, when DeepExModel=0 <> ColorMNet is used?
In the torch cache dir the following are downloaded:
the folder repo: facebookresearch_dinov2_main (it seems strange, but torch is capable of doing it)
under the folder checkpoints the following networks are downloaded:
resnet18-5c106cde.pth
dinov2_vits14_pretrain.pth
resnet50-19c8e357.pth
Having decided to set the torch cache under the filter's model folder implies that Hybrid needs to be installed in a writeable directory.
Quote: Having decided to set the torch cache under the filter's model folder implies that Hybrid needs to be installed in a writeable directory.
ARGH,...
No! The torchAddon needs to include the files.
Hybrid needing to be installed into a writeable directory is a NO GO.
Anything that needs to be done during runtime should not end in a program folder.
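One way to keep downloaded weights out of the program folder is to redirect torch's cache to a per-user writeable directory before any model is loaded; torch.hub honours the TORCH_HOME environment variable for this. The target path below is just an example, not what Hybrid actually uses.

```python
import os

def set_torch_cache_dir(base=None):
    """Redirect torch's hub/checkpoint cache to a user-writeable directory.

    Must run before torch triggers any download: torch.hub reads TORCH_HOME
    when deciding where hub repos and checkpoints are stored.
    """
    if base is None:
        # Hypothetical per-user location; any writeable path works.
        base = os.path.join(os.path.expanduser("~"), ".cache", "hybrid-torch")
    os.makedirs(base, exist_ok=True)
    os.environ["TORCH_HOME"] = base
    return base

cache = set_torch_cache_dir()
print(os.environ["TORCH_HOME"] == cache)  # True
```

With this, facebookresearch_dinov2_main and the .pth checkpoints land under the user's cache instead of the install directory.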
Cu Selur
It was your requirement in this post: #7, and you confirmed the approach in this post: #12.
I can change this behaviour, just let me know your requirements.