Posts: 985 · Threads: 80 · Joined: Feb 2020
 
	
	
good!
Unfortunately it seems that there is something wrong with AverageFrames.

Here is an example using 7 frames, with average modes center, left, right:
https://imgsli.com/MjQ4ODEx
The situation gets worse as the number of frames is increased:
https://imgsli.com/MjQ4ODEy

I'm trying to write my own average filter, but it seems that it is not possible to access past frames using VapourSynth.

Here is my code:
def clip_color_stabilizer(clip: vs.VideoNode = None, nframes: int = 5, smooth_type: int = 0) -> vs.VideoNode:
    max_frames = max(1, min(nframes, 31))

    def smooth_frame(n, f):
        f_out = f.copy()
        if n < max_frames:
            return f_out
        img_f = list()
        img_f.append(frame_to_image(f))
        for i in range(1, max_frames):
            img_f.append(frame_to_image(clip.get_frame(n-i)))
        img_m = color_temporal_stabilizer(img_f, max_frames)
        return image_to_frame(img_m, f_out)

    clip = clip.std.ModifyFrame(clips=[clip], selector=smooth_frame)
    return clip
 
#-----------------------------------------------------------------------
def color_temporal_stabilizer(img_f: list, nframes: int = 5) -> Image:

    img_new = np.copy(np.asarray(img_f[0]))
    yuv_new = cv2.cvtColor(img_new, cv2.COLOR_RGB2YUV)

    weight: float = 1.0/nframes
    yuv_m = np.multiply(yuv_new, weight).clip(0, 255).astype(int)

    for i in range(1, nframes):
        yuv_i = cv2.cvtColor(np.asarray(img_f[i]), cv2.COLOR_RGB2YUV)
        yuv_m += np.multiply(yuv_i, weight).clip(0, 255).astype(int)

    yuv_new[:, :, 1] = yuv_m[:, :, 1]
    yuv_new[:, :, 2] = yuv_m[:, :, 2]

    return Image.fromarray(cv2.cvtColor(yuv_new, cv2.COLOR_YUV2RGB))
The problem is in the call frame_to_image(clip.get_frame(n-i)): when nframes > 1, VapourSynth freezes.
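For reference, a common pattern to avoid blocking get_frame() calls in VapourSynth is to hand ModifyFrame several time-shifted copies of the clip (in the VapourSynth Python API, clip[0] * i + clip prepends i copies of frame 0), so that frame n-i arrives in the selector as f[i]. The following is only a minimal sketch of the index arithmetic, with plain Python lists standing in for clips; the names are illustrative, not VapourSynth API:

```python
# Simulate VapourSynth clip slicing with plain lists:
# "clip[0] * i + clip" prepends i copies of frame 0, so the shifted
# clip at output position n holds original frame max(n - i, 0).
frames = ["f0", "f1", "f2", "f3", "f4"]  # stand-ins for video frames

max_frames = 3
# one shifted "clip" per temporal offset 0 .. max_frames-1
shifted = [[frames[0]] * i + frames for i in range(max_frames)]

# At output position n the selector would receive shifted[i][n],
# i.e. frame n-i, without any blocking get_frame() call.
n = 3
window = [shifted[i][n] for i in range(max_frames)]
print(window)  # ['f3', 'f2', 'f1']
```

At position 0 every shifted clip still yields frame 0, so no negative index can ever be requested.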
 
Do you have any idea how to access past frames from VapourSynth?
 
Thanks, 
Dan
	 
	
	
	
		
Posts: 12.046 · Threads: 65 · Joined: May 2017
 
	
		
		
20.03.2024, 15:38 (This post was last modified: 20.03.2024, 15:41 by Selur.)
		
	 
First thing I see: if n < max_frames, that should crash, since 'clip.get_frame(n-i)' would try to access non-existing frames.
for i in range(1, max_frames):
should be something like
for i in range(1, min(n-1, max_frames)):
Also, the output of AverageFrames does look correct to me; one of the frames you look at is an unprocessed frame.
 
Cu Selur
	
----Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
 
 
	
	
	
		
Posts: 985 · Threads: 80 · Joined: Feb 2020
 
	
	
(20.03.2024, 15:38) Selur Wrote: First thing I see: if n < max_frames, that should crash, since 'clip.get_frame(n-i)' would try to access non-existing frames. for i in range(1, max_frames): should be something like...

Before this call there is the check
if n < max_frames:
    return f_out
so it can never request frames that are not yet available (the range excludes the last frame). For nframes=1 it works.
 
  (20.03.2024, 15:38)Selur Wrote:  Also the output of AverageFrames does look correct to me, one of the frames you look at is an unprocessed frame.
 Cu Selur
 
In this case the clip I provided as input was already colored; there are no gray frames in the input.
	 
	
	
	
		
Posts: 12.046 · Threads: 65 · Joined: May 2017
 
	
	
		"n < max_frames" should be "n <= max_frames" then
	 
	
	
	
		
Posts: 985 · Threads: 80 · Joined: Feb 2020
 
	
	
Just to provide an example: nframes = 3, so max_frames = 3.

The check if (n < max_frames) returns early until n = 3, so the first processed frame is n = 3.

With for i in range(1, 3), only these frames are processed:

3 - 1 = 2
3 - 2 = 1

The current frame (n = 3) is placed at the beginning of the list, so the list contains the frames: 3, 2, 1.

So I don't see the risk of accessing non-available frames.

Dan
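The walkthrough above can also be checked mechanically; a small sketch in plain Python (the function name is made up for illustration, it just reproduces the loop's indexing):

```python
def accessed_frames(n: int, max_frames: int = 3) -> list:
    """Frame indices that smooth_frame would read for output frame n."""
    if n < max_frames:        # early return: frame is copied unchanged
        return []
    # current frame n first, then the max_frames-1 previous frames
    return [n] + [n - i for i in range(1, max_frames)]

print(accessed_frames(2))  # [] -> frame passed through untouched
print(accessed_frames(3))  # [3, 2, 1] -> matches the example above
```

For any n >= max_frames the smallest index requested is n - (max_frames - 1) >= 1, so no negative index is ever asked for, which supports Dan's argument.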
 
	
	
	
		
Posts: 12.046 · Threads: 65 · Joined: May 2017
 
	
		
		
20.03.2024, 16:11 (This post was last modified: 20.03.2024, 16:25 by Selur.)
		
	 
okay.
Are you sure that, with
img_f.append(frame_to_image(clip.get_frame(n-i)))
'clip.get_frame(n-i)' will access the already processed output frames, and not the unprocessed input frames?
I suspect that you are averaging the colored frames with the grayscale frames.
	
	
	
	
		
Posts: 985 · Threads: 80 · Joined: Feb 2020
 
	
	
The problem is that it is not working! The images I provided as examples were obtained using the VapourSynth AverageFrames.
	 
	
	
	
		
Posts: 12.046 · Threads: 65 · Joined: May 2017
 
	
		
		
20.03.2024, 17:08 (This post was last modified: 20.03.2024, 17:09 by Selur.)
		
	 
Quote: The problem is that it is not working!
What does that mean? Are you referring to AverageFrames or to your code?
I need some code I can actually run or open inside an editor to look at.
A code snippet without context isn't something one can debug.
 
btw. I just realized the UI elements in Hybrid are wrong.
colstab_merge_enabled, colstab_ddcolor_enabled and colstab_deoldify_enabled are separate options, unlike in the GUI, where disabling colstab_merge_enabled also disables the other two.
-> I'll change that in the GUI
 
Cu Selur
	 
	
	
	
		
Posts: 12.046 · Threads: 65 · Joined: May 2017
 
	
		
		
20.03.2024, 17:27 (This post was last modified: 20.03.2024, 17:41 by Selur.)
		
	 
using:
core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
import math

def vs_clip_color_stabilizer(clip: vs.VideoNode = None, nframes: int = 5, mode: str = 'center', scenechange: bool = True) -> vs.VideoNode:

    if nframes < 3 or nframes > 31:
        raise ValueError("deoldify: number of frames must be in range: 3-31")

    if mode not in ['left', 'center', 'right']:
        raise ValueError("deoldify: mode must be 'left', 'center', or 'right'.")

    # convert the clip format for AverageFrames to YUV
    clip_yuv = clip.resize.Bicubic(format=vs.YUV444PS, matrix_s="709", range_s="limited")

    if nframes % 2 == 0:
        nframes += 1

    N = max(3, min(nframes, 31))
    Nh = round((N-1)/2)
    Wi = math.trunc(100.0/N)

    if mode in ['left', 'right']:
        Wi = 2*Wi
        Wc = 100 - Nh*Wi
    else:
        Wc = 100 - (N-1)*Wi

    weight_list = list()
    for i in range(0, Nh):
        if mode in ['left', 'center']:
            weight_list.append(Wi)
        else:
            weight_list.append(0)
    weight_list.append(Wc)
    for i in range(0, Nh):
        if mode in ['right', 'center']:
            weight_list.append(Wi)
        else:
            weight_list.append(0)

    clip_yuv = vs.core.misc.AverageFrames(clip_yuv, weight_list, scenechange=True, planes=[1, 2])

    # convert the clip format for deoldify to RGB24
    clip_rgb = clip_yuv.resize.Bicubic(format=vs.RGB24, matrix_in_s="709", range_s="limited", dither_type="error_diffusion")

    return clip_rgb

clip = vs_clip_color_stabilizer(clip, nframes = 31)
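As a side check, the weight construction in the script above can be exercised without VapourSynth; this sketch just mirrors the weight_list logic (the helper name is illustrative) to confirm the weights always sum to 100:

```python
import math

def build_weights(nframes: int, mode: str = 'center') -> list:
    # mirrors the weight_list construction in vs_clip_color_stabilizer
    if nframes % 2 == 0:
        nframes += 1
    N = max(3, min(nframes, 31))
    Nh = round((N - 1) / 2)
    Wi = math.trunc(100.0 / N)
    if mode in ['left', 'right']:
        Wi = 2 * Wi
        Wc = 100 - Nh * Wi
    else:
        Wc = 100 - (N - 1) * Wi
    left = [Wi if mode in ['left', 'center'] else 0] * Nh
    right = [Wi if mode in ['right', 'center'] else 0] * Nh
    return left + [Wc] + right

print(build_weights(7, 'center'))  # [14, 14, 14, 16, 14, 14, 14]
print(build_weights(7, 'left'))    # [28, 28, 28, 16, 0, 0, 0]
```

The truncation remainder is always folded into the center weight Wc, so the list sums to exactly 100 for every mode and frame count.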
one can see that the output seems wrong (= not as hoped). (clip is a normal color clip)
The problem is that, as soon as there is motion, the averaging understandably causes problems.
I suspect that the idea of using a 'stupid' average is the main problem here.
=> the more motion the clip has, the smaller the number of frames taken into account for the averaging needs to be.
(maybe using a motion mask - created over the luma only - combined with AverageFrames - using the colored frames - could work, to only merge static chroma, ...)
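That motion-mask idea might look roughly like this in NumPy terms (a sketch under assumptions, not Hybrid or VapourSynth code; the array names and threshold are illustrative): build a static-area mask from the luma difference and take the averaged chroma only there.

```python
import numpy as np

def merge_static_chroma(cur_yuv, avg_yuv, prev_yuv, thresh=8):
    """Take averaged chroma only where luma barely changed (static areas).

    cur_yuv / avg_yuv / prev_yuv: (H, W, 3) uint8 YUV arrays (illustrative).
    """
    # motion mask from luma only: True where the pixel is considered static
    luma_diff = np.abs(cur_yuv[:, :, 0].astype(np.int16) -
                       prev_yuv[:, :, 0].astype(np.int16))
    static = luma_diff < thresh

    out = cur_yuv.copy()
    for c in (1, 2):  # chroma planes only, luma stays untouched
        out[:, :, c] = np.where(static, avg_yuv[:, :, c], cur_yuv[:, :, c])
    return out
```

In VapourSynth this would correspond to something like a luma-based motion mask driving std.MaskedMerge on the chroma planes of the AverageFrames output.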
 
Cu Selur
 
Ps.: I sent you a link to a dev version which should fix the colstab_merge_enabled, colstab_ddcolor_enabled, colstab_deoldify_enabled GUI problem.
	
	
	
	
		
Posts: 985 · Threads: 80 · Joined: Feb 2020
 
	
	
(20.03.2024, 14:14) Selur Wrote: VSPipe.exe --progress -c y4m c:\Users\Selur\Desktop\test.vpy NUL

normal (= just deoldify):
Script evaluation done in 11.18 seconds
Output 429 frames in 42.74 seconds (10.04 fps)

stacked (= split, stack, deoldify, split, interleave):
Script evaluation done in 8.62 seconds
Output 430 frames in 22.62 seconds (19.01 fps)

Cu Selur
 
It is only an apparent speed improvement.

Supposing that you are using render_factor=24: by stacking the frames you are halving the effective render_factor.
You can obtain exactly the same speed increase and quality (without stacking the frames) by setting render_factor=12 and dd_render_factor=12.
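To spell out the arithmetic of Dan's point (assuming, as commonly described for deoldify, that render_factor sets a working resolution of roughly render_factor * 16 pixels per side; the helper below is purely illustrative):

```python
RENDER_BASE = 16  # assumption: deoldify works at about render_factor * 16 px per side

def per_frame_resolution(render_factor: int, frames_stacked: int = 1) -> int:
    """Approximate per-frame working width when frames share one stacked image."""
    return (render_factor * RENDER_BASE) // frames_stacked

# stacking 2 frames at render_factor=24 gives each frame the same resolution
# budget as processing single frames at render_factor=12
print(per_frame_resolution(24, 2))  # 192
print(per_frame_resolution(12, 1))  # 192
```

So the stacked run is faster only because each frame is effectively colored at half the resolution, which matches Dan's render_factor=12 equivalence.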
 
Dan