New models?
#1
Not sure if you have added these models to the dev version. They look promising. If they aren't implemented yet, I would love to test them out with the Hybrid pipeline. :)


https://github.com/xpixelgroup/hat


Here is where I learned about it:

https://paperswithcode.com/task/image-su...test
----
Please, read the 'Infos needed to fix&reproduce bugs,..'-sticky before you post about a problem.
#2
this:

[Image: Comparison.png]
is stunning!
#3
I am not sure which models were used to obtain these results.
I tried to replicate the test using the resizers available in Hybrid: BasicVSR++, RealESRGAN (x4plus), and RealESRGAN (realesr-general).

[Image: Comparison2-Final.png]

Then I added an improved version using CASglis, FastLineDark, CAS (after resize) and a weighted resize with weight = 0.65 (see the sketch below).

I was surprised by the behavior of RealESRGAN (x4plus) on the dog image: the resized image was too smooth. This effect was mitigated by the weighted resize with weight = 0.65. In general, RealESRGAN with realesr-general seems to provide better results.
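
To be clear about what I mean by "weighted resize": the idea is to blend the NN upscale with a conventional resize. In VapourSynth terms it boils down to something like this (only a rough sketch of the idea, not Hybrid's actual code; the Spline36 fallback and the function name are my own choices):

Code:
# Rough sketch of a "weighted resize": mix the NN upscale with a plain
# Spline36 resize of the source. weight=0.65 means 65% NN upscale, 35% plain.
import vapoursynth as vs
core = vs.core

def weighted_resize(src: vs.VideoNode, upscaled: vs.VideoNode, weight: float = 0.65) -> vs.VideoNode:
    # Bring the plain resize to the same dimensions as the NN output;
    # both clips are assumed to share the same format.
    plain = core.resize.Spline36(src, upscaled.width, upscaled.height)
    return core.std.Merge(plain, upscaled, weight=weight)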
#4
Did you test these through VSGAN or vs-mlrt?
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#5
Did a quick test.
Those models require additional parameters, so your wishes need to go to https://github.com/rlaphoenix/VSGAN and/or https://github.com/AmusementClub/vs-mlrt/ to get support for these models.

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#6
(17.12.2023, 19:37)Selur Wrote: Did you test these through VSGAN or vs-mlrt?

Neither of them; I used "Frame->Resizer->RealESRGAN" and "Frame->Resizer->BasicVSR++".
My startup log is the following:

-> skipped 'Intel(R) UHD Graphics 630' since it's no NVIDIA card
Detected NVIDIA PureVideo compatible cards: NVIDIA GeForce RTX 3060
Detected vfwDecoders with 32bit:   VIDC.FFDS   VIDC.LAGS   VIDC.X264   VIDC.XVID   vidc.cvid   vidc.i420   vidc.iyuv   vidc.mrle   vidc.msvc   vidc.uyvy   vidc.yuy2   vidc.yvu9   vidc.yvyu
Detected vfw64BitDecoders:   VIDC.FFDS   VIDC.LAGS   VIDC.X264   VIDC.XVID   vidc.cvid   vidc.i420   vidc.iyuv   vidc.mrle   vidc.msvc   vidc.uyvy   vidc.yuy2   vidc.yvu9   vidc.yvyu
   Avisynth+ is available,..
    DGDecNV available,..
   Vapoursynth is available,..
    DGDecNV available,..
    VSGAN available,..
    vsDPIR available,..
    vsRIFE (torch) available,..
    vsGMFSS fortuna available,..
    vsBasicVSR++ available,..
    vsRealESRGAN available,..
    vsSwinIR available,..
    vsHINet available,..
    vsAnimeSR available,..
    vsFeMaSR available,..
    vsSCUNet available,..
    vsCodeFormer available,..
    vsGRLIR available,..
    VSMLRT available,..
    vsRIFE (mlrt) available,..
    vsDPIR (mlrt) available,..
    vsSAFA (mlrt) available,..


Dan

(17.12.2023, 20:28)Selur Wrote: Did a quick test.
Those models require additional parameters, so your wishes need to go to https://github.com/rlaphoenix/VSGAN and/or https://github.com/AmusementClub/vs-mlrt/ to get support for these models.

Cu Selur

I noticed that VSGAN added support for HAT, and I found the models on Google Drive. Which parameters have to be passed?
#7
If those models can be used through VSGAN or VSMLRT (both are listed as resizers; the first uses .pth models, the second .onnx models), adding them is easy: simply load the model, roughly as in the sketch below.
If those models cannot be used through these, they either need a whole wrapper filter (like BasicVSR++) or adjustments to VSGAN or VSMLRT.
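
As an illustration of what "simply loading the model" looks like in the generated script (sketch only; the exact class/function names depend on the installed VSGAN resp. vs-mlrt version, and HAT itself would additionally need a build that knows that architecture):

Code:
# Illustration only: loading a .pth model through VSGAN resp. an .onnx model
# through vs-mlrt. Exact names depend on the installed versions; HAT support
# needs a VSGAN/vs-mlrt build that knows the HAT architecture.
import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(r"input.mkv")
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")  # NN models usually expect RGBS

# VSGAN route (.pth models):
from vsgan import ESRGAN   # a HAT model would need its own network class in a newer VSGAN
clip = ESRGAN(clip, device="cuda").load(r"model.pth").apply().clip

# vs-mlrt route (.onnx models), generic inference call:
# from vsmlrt import inference, Backend
# clip = inference(clip, network_path=r"model.onnx", backend=Backend.ORT_CUDA())

clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()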

Cu Selur

Ps.: just checked, the latest VSGAN version available through pip is still 1.6.4, which is from ~2 years ago, so unless a new version is released, I doubt that those models will work.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#8
(18.12.2023, 19:52)Selur Wrote: Ps.: just checked, the latest VSGAN version available through pip is still 1.6.4, which is from ~2 years ago, so unless a new version is released, I doubt that those models will work.

Support for HAT was added 7 months ago. Is there any way to apply the latest GitHub version?
#9
I don't know for sure, I never tried it, but in theory updating the files in
F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan
and removing the existing __pycache__ folders should work; see the sketch below.
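
Untested sketch of what I mean (the site-packages path is the Hybrid default from above; the checkout path is just an example for a local clone of rlaphoenix/VSGAN):

Code:
# Untested sketch: copy the 'vsgan' folder of a current GitHub checkout over the
# bundled package and delete stale __pycache__ folders so the new code is used.
import shutil
from pathlib import Path

site_pkg = Path(r"F:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsgan")
checkout = Path(r"C:\temp\VSGAN\vsgan")  # example path to the package folder of a local clone

# Copy the newer sources over the bundled ones (Python 3.8+ for dirs_exist_ok).
shutil.copytree(checkout, site_pkg, dirs_exist_ok=True)

# Remove stale bytecode so Python does not keep loading the old compiled files.
for cache in site_pkg.rglob("__pycache__"):
    shutil.rmtree(cache)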

Cu Selur

Ps.: I created an issue entry https://github.com/rlaphoenix/VSGAN/issues/43
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#10
(18.12.2023, 20:05)Selur Wrote: Ps.: I created an issue entry https://github.com/rlaphoenix/VSGAN/issues/43

Good question

