I am trying to "upscale" a video using the torch addon and the dev version 2023.11.05-101728. However, I keep running into a problem with the job crashing on the "Video" step. It shouldn't be a Vram issue, as I have a 3090 and the Vapoursynth preview works for me. I even tried tiling and it still crashed.
I am also having issues with the site not posting any attachments, so I will email the zipped file with the debug logs (it was split into two, I think due to a character or file size limit?) and a small sample video to try to replicate the issue if needed. Please feel free to ask any questions.
Attaching stuff works fine here, assuming it's zipped or has one of the supported extensions.
Quote: it was split into two, I think due to character or file size?
Debug output normally is only split if the output folder changes.
Read the sticky on how to create a debug output and what info is needed to reproduce a problem.
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
Okay. I reviewed the instructions last night, so I deleted the existing txt file, reopened Hybrid, changed the path, checked the debug file box, then loaded the source and tried exporting again, and it still split the txt files. I can try to combine them into one; it was late last night and I was just trying to compile what I needed. Something weird is that I was able to attach the 7z file at one point, but I deleted it because I forgot to include the video file, and I haven't had any luck getting it to attach again since.
I can't seem to compress the video file into the same 7z archive either. I don't know if it's the file size or not, since I don't see an upload bar or indicator; I only saw that the 7z file had been attached after hitting "Save as Draft". I don't know if anyone else experiences this, but I thought I'd mention it in case it's just me. I can still email you the video file if you want. Thanks again for your help!
Had a quick look.
x264 crashes due to some problem with the Vapoursynth script.
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import ctypes
import os
import site
core = vs.core
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/bin/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
# Loading Plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/AviSource/avisource.dll")
# source: 'E:\Film-Tape Captures\Tests\Hybrid Test video.avi'
# current color space: YUV422P8, bit depth: 8, resolution: 720x480, fps: 59.94, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading E:\Film-Tape Captures\Tests\Hybrid Test video.avi using VsAviSource
clip = core.avisource.AVISource(path="E:/Film-Tape Captures/Tests/Hybrid Test video.avi")
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 59.94
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# Weighted resize - START
clipSmoothed = clip
# Resizing using fmtconv - cubic
clipSmoothed = core.fmtc.resample(clip=clipSmoothed, kernel="linear", w=1440, h=960, interlaced=False, interlacedd=False) # resolution 1440x960 before YUV422P8 after YUV422P16
from vsrealesrgan import realesrgan as RealESRGAN
# adjusting color space from YUV422P8 to RGBH for vsRealESRGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# resizing using RealESRGAN
clip = RealESRGAN(clip=clip, model=5, device_index=0, trt=True, trt_cache_path=r"C:\Users\Dan\AppData\Local\Temp", num_streams=2) # 2880x1920
# resizing 2880x1920 to 1440x960
# adjusting resizing
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
clip = core.fmtc.resample(clip=clip, w=1440, h=960, kernel="lanczos", interlaced=False, interlacedd=False)
clipSmoothed = core.resize.Bicubic(clip=clipSmoothed, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
clip = core.std.Merge(clipa=clipSmoothed,clipb=clip,weight=0.5)
# Weighted resize - END
# adjusting output color from: RGBS to YUV422P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 59.94fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
# Output
clip.set_output()
The script seems fine to me, and since the Vapoursynth preview works, it should not be an issue with the script itself.
Try:
1. changing the default temp path (Config->Paths) before creating the job; maybe something is interfering with the piping between vspipe and x264 (see the sketch below for a way to test that pipe by hand).
2. setting an exemption in your virus scanner for the temp folder.
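To narrow it down, the vspipe -> x264 pipe can also be run by hand against the generated .vpy script. This is only a rough sketch: the vspipe/x264 locations, the script path and the output path are placeholders for your local install (adjust them), and older vspipe builds use --y4m instead of -c y4m.
# Minimal sketch: drive the same vspipe -> x264 pipe Hybrid uses, outside of Hybrid.
# All paths below are assumptions; point them at your actual install and temp folder.
import subprocess

VSPIPE = r"C:\Program Files\Hybrid\64bit\Vapoursynth\vspipe.exe"   # assumed location
X264   = r"C:\Program Files\Hybrid\64bit\x264.exe"                 # assumed location
SCRIPT = r"E:\Hybrid -debug logs\Temp\encodingScript.vpy"          # the .vpy Hybrid generated (assumed name)
OUTPUT = r"E:\Hybrid -debug logs\Temp\test.264"

# vspipe writes Y4M to stdout, x264 reads Y4M from stdin ('-')
vspipe = subprocess.Popen([VSPIPE, "-c", "y4m", SCRIPT, "-"], stdout=subprocess.PIPE)
x264 = subprocess.Popen([X264, "--demuxer", "y4m", "--crf", "18", "-o", OUTPUT, "-"],
                        stdin=vspipe.stdout)
vspipe.stdout.close()  # so vspipe notices when x264 exits
x264.wait()
vspipe.wait()
print("vspipe exit code:", vspipe.returncode)
print("x264 exit code:", x264.returncode)
If the pipe also dies when run like this, the two exit codes at least show which side gives up first.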
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import ctypes
import os
import site
core = vs.core
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/bin/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
# Loading Plugins
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/AviSource/avisource.dll")
# source: 'E:\Film-Tape Captures\Tests\Hybrid Test video.avi'
# current color space: YUV422P8, bit depth: 8, resolution: 720x480, fps: 59.94, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading E:\Film-Tape Captures\Tests\Hybrid Test video.avi using VsAviSource
clip = core.avisource.AVISource(path="E:/Film-Tape Captures/Tests/Hybrid Test video.avi")
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info (470bg), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info (), when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 59.94
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0) # progressive
# Weighted resize - START
clipSmoothed = clip
# Resizing using fmtconv - cubic
clipSmoothed = core.fmtc.resample(clip=clipSmoothed, kernel="linear", w=1440, h=960, interlaced=False, interlacedd=False) # resolution 1440x960 before YUV422P8 after YUV422P16
from vsrealesrgan import realesrgan as RealESRGAN
# adjusting color space from YUV422P8 to RGBH for vsRealESRGAN
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")
# resizing using RealESRGAN
clip = RealESRGAN(clip=clip, model=5, device_index=0, trt=True, trt_cache_path=r"E:\Hybrid -debug logs\Temp", num_streams=2) # 2880x1920
# resizing 2880x1920 to 1440x960
# adjusting resizing
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
clip = core.fmtc.resample(clip=clip, w=1440, h=960, kernel="lanczos", interlaced=False, interlacedd=False)
clipSmoothed = core.resize.Bicubic(clip=clipSmoothed, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
clip = core.std.Merge(clipa=clipSmoothed,clipb=clip,weight=0.5)
# Weighted resize - END
# adjusting output color from: RGBS to YUV422P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV422P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 59.94fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)
# Output
clip.set_output()
The Vapoursynth preview is closed when you start the job, right?
(Otherwise the preview would probably eat up all the GPU resources and the encoding would fail, since no more GPU resources would be available...)
Encoding through Vapoursynth does work if you don't use any of the stuff from the add-ons, right?
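To check that last point, a stripped-down script along these lines could be encoded as a test. Just a sketch: it reuses the avisource plugin and source file from the scripts above and drops the fmtconv and RealESRGAN steps entirely, so nothing from the torch add-on is touched.
# Minimal sketch: same source, no add-on filters, to see whether plain vspipe/x264 encoding works.
import vapoursynth as vs
core = vs.core
core.std.LoadPlugin(path="C:/Program Files/Hybrid/64bit/vsfilters/SourceFilter/AviSource/avisource.dll")
clip = core.avisource.AVISource(path="E:/Film-Tape Captures/Tests/Hybrid Test video.avi")
clip = core.std.SetFrameProps(clip, _Matrix=5)                          # 470bg, as detected above
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)   # limited range
clip = core.std.AssumeFPS(clip=clip, fpsnum=60000, fpsden=1001)         # 59.94 fps
clip.set_output()
If this encodes fine but the full script crashes, the problem is somewhere in the RealESRGAN/TensorRT part rather than in the source or the piping.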
Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.