
ESXi running macOS (for Selur)
No, that's Avisynth syntax.
And even then, I would not recommend writing it that way in Avisynth.
In Avisynth I would use something like:
clip = last
filteredClip = clip
filteredClip = filteredClip.Blur(0, 1.0)
filteredClip = filteredClip.Sharpen(0.7)
MergeChroma(clip, filteredClip)
instead.

Vapoursynth is different: it is based on Python, and MergeChroma isn't one of the functions that comes with Vapoursynth out of the box.

std.BoxBlur(clip clip[, int[] planes, int hradius = 1, int hpasses = 1, int vradius = 1, int vpasses = 1])

    Performs a box blur which is fast even for large radius values. Using multiple passes can be used to fairly cheaply approximate a gaussian blur. A radius of 0 means no processing is performed.
source: http://www.vapoursynth.com/doc/functions/boxblur.html
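The "multiple passes can fairly cheaply approximate a gaussian blur" remark can be illustrated with a toy 1-D box blur in plain Python (a hypothetical sketch for intuition only, not VapourSynth's implementation): each pass averages a sliding window, and repeated passes spread an impulse into an increasingly bell-shaped profile.

```python
def box_blur_1d(samples, radius=1, passes=1):
    """Repeatedly average each sample with its neighbors within `radius`.
    Edges are clamped (the border sample is repeated), similar in spirit
    to how image blurs handle frame borders."""
    out = list(samples)
    n = len(out)
    for _ in range(passes):
        prev = out[:]
        out = []
        for i in range(n):
            window = [prev[min(max(j, 0), n - 1)]
                      for j in range(i - radius, i + radius + 1)]
            out.append(sum(window) / len(window))
    return out

# A single impulse spreads into a wider, rounder profile with more passes:
impulse = [0.0] * 4 + [1.0] + [0.0] * 4
one_pass = box_blur_1d(impulse, radius=1, passes=1)
three_passes = box_blur_1d(impulse, radius=1, passes=3)
```

With `passes=3` the peak is lower and the tails are wider than with a single pass, which is the sense in which repeated box blurs approach a Gaussian.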

def MergeChroma(clip1: vs.VideoNode, clip2: vs.VideoNode, weight: float = 1.0) -> vs.VideoNode:
    """Merges the chroma from one videoclip into another. Port from Avisynth's equivalent.
    There is an optional weighting, so a percentage between the two clips can be specified.
    Args:
        clip1: The clip that has the chroma pixels merged into (the base clip).
        clip2: The clip from which the chroma pixel data is taken (the overlay clip).
        weight: (float) Defines how much influence the new clip should have. Range is 0.0–1.0.
    """
source: https://github.com/WolframRhodium/muvsfu...uvsfunc.py
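For intuition, the weight parameter is just a linear blend of the two clips' chroma samples. A toy per-sample sketch in plain Python (`merge_chroma_samples` is a hypothetical helper for illustration; the real muvsfunc function operates on whole VideoNode clips):

```python
def merge_chroma_samples(base, overlay, weight=1.0):
    """Blend two lists of chroma samples: weight=1.0 takes the overlay's
    chroma entirely (the Avisynth default), weight=0.0 keeps the base."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be in the range 0.0-1.0")
    return [(1.0 - weight) * b + weight * o for b, o in zip(base, overlay)]

# weight=1.0 reproduces the plain MergeChroma(clip, filteredClip) idiom:
# the luma of `clip` is kept, the chroma comes fully from `filteredClip`.
```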

-> Why would you want to use that anyway?
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
I was on a videohelp forum, explaining that I tried making a video progressive while maintaining the best quality:

https://archive.org/download/ReturnToTre...Island.mkv


The file has combing all over the place. It's 29.97 fps. It was filmed in Australia, so I was thinking 25 fps. I did a QTGMC bob + SRestore omode 6, like with my previous TV show, and it got rid of maybe 99% of the combing, but every few seconds or so there is a stutter. So then I tried detelecining from 29.97 to 23.976; I'm able to get rid of most of the combing that way as well, but a few areas, like Long John Silver's face during quick movements, remained.

I think I am missing something. The footage is pretty bad; I believe it comes from a poorly encoded DVD of an unremastered film. My plan was to get it into the best progressive form possible and then run it through Topaz Video Enhance AI to bring it up to 4K or 8K, maybe do further color correcting in Resolve, maybe add back in some film grain, and then downscale it via spline to 720p or 1080p.

So a person on videohelp looked at it and said:
That video is basically telecined film. But the interlaced frames were encoded progressive, so the chroma of the two fields was blended together. The basic fix is TFM().TDecimate() to get back to the original film frames at 23.976 fps.



After that you'll see the blended chroma as horizontal color stripes when colored objects are moving. There might be a way of using the chroma of the previous or next field/frame when the stripes are detected, but the easiest thing to do is blur away the chroma stripes: MergeChroma(last, last.Blur(0.0, 1.0).Sharpen(0.0, 0.7)). But some ghosting of the colors will remain.


Beyond that, there are occasional dropped and duplicate frames. There's not much you can do for that except manually insert or remove frames. Not worth the trouble in my opinion.
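The frame-rate arithmetic behind the TFM().TDecimate() advice above: 3:2 pulldown spreads every 4 film frames over 5 video frames, so dropping 1 frame per cycle of 5 restores the original film rate. A quick sanity check in plain Python:

```python
from fractions import Fraction

film = Fraction(24000, 1001)    # ~23.976 fps, film transferred to NTSC
video = Fraction(30000, 1001)   # ~29.97 fps, the telecined result

# 3:2 pulldown: 4 film frames become 5 video frames.
assert film * Fraction(5, 4) == video

# Decimation (e.g. TDecimate's default cycle of 5, keeping 4 of every
# 5 frames) undoes the pulldown exactly:
assert video * Fraction(4, 5) == film
```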

So that's why I asked. I don't need to do it the way he suggested; I'm open to better options.

On a side note, when I use VIVTC to detelecine, I want to catch the most combing, so despite Hybrid saying the sweet spot is 32/32 or 16/16, I set a smaller box to find more areas of small combing during movement.

There is a bug in the legacy 2018 Hybrid: when I drop block x/y below 16 on the x-axis (like 8, 4, 2), it crashes.


Image Sequence import now gives a different error, but it still doesn't work. XQuartz was already installed on my system before. This is the error I see when I launch the VS script preview now:
Failed to evaluate the script:
Python exception: Read: Failed to read image properties: vsViewer: NoDecodeDelegateForThisImageFormat `JPG' @ error/constitute.c/ReadImage/562

Traceback (most recent call last):
File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile00_40_41_124.vpy", line 10, in <module>
clip = core.imwri.Read("/Users/shph/Desktop/office building/%03d.jpg", firstnum=0)
File "src/cython/vapoursynth.pyx", line 2069, in vapoursynth.Function.__call__
vapoursynth.Error: Read: Failed to read image properties: vsViewer: NoDecodeDelegateForThisImageFormat `JPG' @ error/constitute.c/ReadImage/562

Here is the log file. (I just imported an Image Sequence and launched vsViewer.)

UPDATE: The problem is with JPEG only. I tested it with a PNG sequence and it works.
l33tmeatwad reports that "jpg decoding is not enabled by default in image magick" and he will fix it.
---

By the way, there is an "Image Sequence Base" preference in the Config tab. It seems the Image Sequence importer ignores it and always uses 1.000 as the default frame rate, so I need to change the frame rate manually on every sequence import.
Generate name for ProRes now force-generates an mkv.mov container name.
---

Seems you still use the old havsfunc.py script (edited for fft3dfilter only) inside the Hybrid build instead of the new universal (fft3dfilter/neo_fft3d) version you posted on GitHub.
---

The faac binary still seems broken, so I just replaced it with the working one from Hybrid 2018. Hope that's okay.
---

OpenCL Deinterlace WORKS!!!

But I need to manually set Device: 1. If I use Device: -1 (autodetection?) it doesn't work. (It may be because my GPU is installed in the bottom PCIe slot instead of the usual top slot.)
Source VOB file (29.970 fps, 52 s duration) rendered to ProRes with QTGMC Placebo with bob (59.94 fps):
OpenCL render finished after 00:00:54.713
CPU render finished after 00:01:52.309
---

Same as before: KNLMeansCL in the QTGMC Denoiser combined with the "Placebo" or "Very Slow" preset gives me a horizontal-lines artifact.
---

Filtering -> Denoise -> KNLMeansCL (Test 1) ERROR, KERNEL PANIC:
Device type: auto, Device: (disabled)
The VS preview window starts without error but shows me a bright yellow, oversaturated image artifact instead of noise reduction.
When I select Channels: RGB instead of "auto", the computer just turns off because of a kernel panic.

Filtering -> Denoise -> KNLMeansCL (Test 2) ERROR:
I reset the Denoise settings and now select Device type: gpu, Device: 1.
Failed to evaluate the script:
Python exception: knlm.KNLMeansCL: no compatible opencl platforms available!

Traceback (most recent call last):
File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile01_40_59_773.vpy", line 32, in <module>
clip = core.knlm.KNLMeansCL(clip=clip, device_type="gpu", device_id=1)
File "src/cython/vapoursynth.pyx", line 2069, in vapoursynth.Function.__call__
vapoursynth.Error: knlm.KNLMeansCL: no compatible opencl platforms available!

Filtering -> Denoise -> KNLMeansCL (Test 3) ERROR:
Device type: cpu, Device: (disabled)
Failed to evaluate the script:
Python exception: knlm.KNLMeansCL: the opencl device does not support this video format!

Traceback (most recent call last):
File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile01_48_55_174.vpy", line 32, in <module>
clip = core.knlm.KNLMeansCL(clip=clip, device_type="cpu")
File "src/cython/vapoursynth.pyx", line 2069, in vapoursynth.Function.__call__
vapoursynth.Error: knlm.KNLMeansCL: the opencl device does not support this video format!

De Grain -> MCDegrainSharp (OpenCL) WORKS

De Grain -> SMDegrain (OpenCL Device: 1) ERROR:
Failed to evaluate the script:
Python exception: SMDegrain() got an unexpected keyword argument 'opencl'

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
  File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
  File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile02_13_19_804.vpy", line 28, in <module>
    clip = havsfunc.SMDegrain(input=clip, interlaced=False, pel=4, opencl=True, device=1)
TypeError: SMDegrain() got an unexpected keyword argument 'opencl'

AntiAliasing -> DAA (OpenCL Device: 1) WORKS

AntiAliasing -> Santiag (OpenCL Device: 1) WORKS

AntiAliasing -> Nedi3AA (OpenCL Device: 1): no error, but I can't see any AA effect with it.

AntiAliasing -> MAA ERROR:
Failed to evaluate the script:
Python exception: There is no function named SangNom

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
  File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
  File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile02_43_38_298.vpy", line 35, in <module>
    clip = muvsfunc.maa(clip)
  File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/muvsfunc.py", line 991, in maa
    aa_clip = core.sangnom.SangNom(aa_clip).std.Transpose()
  File "src/cython/vapoursynth.pyx", line 1934, in vapoursynth.Plugin.__getattr__
AttributeError: There is no function named SangNom

AntiAliasing -> AAF ERROR:
Failed to evaluate the script:
Python exception: There is no function named SangNom

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
  File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
  File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile02_44_49_352.vpy", line 34, in <module>
    clip = havsfunc.aaf(inputClip=clip)
  File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/havsfunc.py", line 5333, in aaf
    aa = aa.sangnom.SangNom(aa=aay)
  File "src/cython/vapoursynth.pyx", line 1934, in vapoursynth.Plugin.__getattr__
AttributeError: There is no function named SangNom

Resizer -> NNEDI3 (GPU) ERROR:
Failed to evaluate the script:
Python exception: NNEDI3CL: pscrn must be 1 or 2

Traceback (most recent call last):
File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile02_38_27_362.vpy", line 33, in <module>
clip = edi_rpow2.nnedi3cl_rpow2(clip=clip, rfactor=4, nns=1, pscrn=4)
File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/edi_rpow2.py", line 23, in nnedi3cl_rpow2
return edi_rpow2(clip=clip,rfactor=rfactor,correct_shift=correct_shift,edi=edi)
File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/edi_rpow2.py", line 63, in edi_rpow2
clip=edi(clip,field=1,dh=1)
File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/edi_rpow2.py", line 21, in edi
return core.nnedi3cl.NNEDI3CL(clip=clip,field=field,dh=dh,nsize=nsize,nns=nns,qual=qual,etype=etype,pscrn=pscrn)
File "src/cython/vapoursynth.pyx", line 2069, in vapoursynth.Function.__call__
vapoursynth.Error: NNEDI3CL: pscrn must be 1 or 2

Other -> Frame Interpolation -> Interframe/SVP (GPU as well as unchecked GPU) ERROR:
Failed to evaluate the script:
Python exception: There is no attribute or namespace named svp1

Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
  File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
  File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile02_53_21_637.vpy", line 26, in <module>
    clip = havsfunc.InterFrame(clip, Tuning="smooth", NewNum=30, NewDen=1, GPU=True, OverrideAlgo=1) # new fps: 30
  File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/havsfunc.py", line 3988, in InterFrame
    return InterFrameProcess(Input)
  File "/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/havsfunc.py", line 3964, in InterFrameProcess
    Super = clip.svp1.Super(SuperString)
  File "src/cython/vapoursynth.pyx", line 1443, in vapoursynth.VideoNode.__getattr__
AttributeError: There is no attribute or namespace named svp1

Other -> Frame Interpolation -> MVToolsFPS WORKS

Other -> Add Logo ERROR:
Failed to evaluate the script:
Python exception: Overlay() got an unexpected keyword argument 'oberlay'

Traceback (most recent call last):
File "src/cython/vapoursynth.pyx", line 2244, in vapoursynth.vpy_evaluateScript
File "src/cython/vapoursynth.pyx", line 2245, in vapoursynth.vpy_evaluateScript
File "/Volumes/temp/Hybrid Temp/tempPreviewVapoursynthFile03_24_24_823.vpy", line 33, in <module>
clip = havsfunc.Overlay(base=clip, oberlay=logo, mask=alpha)
TypeError: Overlay() got an unexpected keyword argument 'oberlay'
Quote:Other -> Frame Interpolation -> Interframe/SVP (GPU as well as unchecked GPU) ERROR:
No surprise, since the svp filters are missing.
Quote:Python exception: Overlay() got an unexpected keyword argument 'oberlay'
fixed
Quote:Python exception: NNEDI3CL: pscrn must be 1 or 2
-> will look into it

Quote:Python exception: There is no function named SangNom
missing filters


Quote:Python exception: SMDegrain() got an unexpected keyword argument 'opencl'
-> will look into it, seems like I forgot to add that code back in when updating havsfunc.

Quote:vapoursynth.Error: knlm.KNLMeansCL: no compatible opencl platforms available!
..
When I select Channels: RGB instead of "auto", the computer just turned off because of a kernel panic
Okay, so KNLMeansCL's GPU support doesn't work on Mac atm.

Quote:Python exception: knlm.KNLMeansCL: the opencl device does not support this video format!
What did the script look like that you used?

=> I should probably remove KNLMeansCL from the mac release for the time being.

Quote:faac binary seems still broken, so i just replace it with working one from Hybrid 2018. Hope it is ok like this.
Will look into it

Quote:Seems you still use old havsfunc.py script (edited for fft3dfilter only) inside Hybrid build instead of new universal (fft3dfilter/neo_fft3d) version you posted on github
Will look into it

Quote:Generate name for PoRes now force generates mkv.mov container name
:/ will look into it, thought I fixed it.

Quote:By the way, There is "Image Sequence Base" preference in Config tab. Seems Image Sequence importer ignore it and always use 1.000 framerate as default, so i need to change framerate manually on every Sequence import import.
When opening Hybrid, changing the value to e.g. 50 works fine; changing it to 50 and back to 25 also works. Seems like the initialization has a bug.
-> will look into it

@Adamcarter:
So a person on videohelp looked at it and said:
then best ask that person if he can also recommend a way with Vapoursynth and not Avisynth.
Will try to look at the sample today or tomorrow.
Cu Selur
Hey selur,
He is Avisynth-only; I asked earlier. The video I shared with you, definitely no rush, you can look at it later. I know you said the Linux distro was not compiling and shijan hit you with a bunch of bugs.

The only bug I noticed, like I mentioned, is that when I run VIVTC and use a very small searching box (less than 16), it crashes.


The guy who recommended the Avisynth scripts also mentioned my video had some repeated frames. I know the DAIN application was able to spot duplicates and remove them; I wonder if there is something on the Hybrid side. When I run the image writer at 29.97, it would be nice to get rid of dupes. Although, thinking out loud, that would probably throw my audio out of sync; it probably wouldn't be by much, and I can plug it into Audacity to shrink it to fit.
Decimate filters during IVTC, and SRestore, are what's usually used to remove duplicates.
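The principle behind those decimate filters can be sketched in plain Python (a toy model for illustration, not VDecimate's or SRestore's actual algorithm): within each fixed cycle of frames, drop the one most similar to its predecessor, since that is usually the duplicate.

```python
def decimate(frame_diffs, cycle=5):
    """Given per-frame difference metrics (diff to the previous frame),
    return the indices of frames to KEEP: in each full cycle of `cycle`
    frames, the frame with the smallest diff (the likely duplicate) is
    dropped; a partial trailing cycle is kept whole."""
    keep = []
    for start in range(0, len(frame_diffs), cycle):
        chunk = list(range(start, min(start + cycle, len(frame_diffs))))
        if len(chunk) == cycle:
            dup = min(chunk, key=lambda i: frame_diffs[i])
            chunk.remove(dup)
        keep.extend(chunk)
    return keep

# Frame 2 barely differs from its predecessor, so it is the one dropped:
diffs = [0.9, 0.8, 0.01, 0.7, 0.85]
```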
Will look at the source later if I find the time.

Quote:The only bug i noticed like i mentioned is when i run vivtc and use a very small searching box less than 16, it crashed
I don't think this is a bug in Hybrid itself.
A small, not-too-critical bug found in the Filter Order list:
- drag and drop a filter with the mouse
- press Reset
- no changes; Reset doesn't work

- move a filter using the up/down buttons
- press Reset
- Reset works now

[Image: XbOJ2Ft.gif]
I never intended drag&drop to work there.
-> will be removed
Here is a copy of the script (from Show Vapoursynth Script) used to test Filtering -> Denoise -> KNLMeansCL (Test 3), Device type: cpu, Device: (disabled).
ERROR: Failed to evaluate the script:
Python exception: knlm.KNLMeansCL: the opencl device does not support this video format!

# Imports
import vapoursynth as vs
core = vs.get_core()
# loading source: /Users/shph/Desktop/44.1hz test source.mov
# color sampling YUV420P8@8, matrix:709, scantyp: progressive
# luminance scale TV
# resolution: 1280x720
# frame rate: 25 fps
# input color space: YUV420P8, bit depth: 8, resolution: 1280x720, fps: 25
# Loading /Users/shph/Desktop/44.1hz test source.mov using LibavSMASHSource
clip = core.lsmas.LibavSMASHSource(source="/Users/shph/Desktop/44.1hz test source.mov")
# making sure input color matrix is set as 709
clip = core.resize.Point(clip, matrix_in_s="709",range_s="limited")
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip, fpsnum=25, fpsden=1)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# denoising using KNLMeansCL
clip = core.knlm.KNLMeansCL(clip=clip, device_type="cpu")
# adjusting output color from: YUV420P8 to YUV422P10 for ProResModel (i422)
clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P10, range_s="limited")
# Output
clip.set_output()

Seems KNLMeansCL causes a lot of crazy problems, so it is logical to disable it. Let me know if you need more detailed logs or anything else. I can try to somehow record the kernel panic log with KNLMeansCL if it helps...