Open Models support ...
#1
Hello,

I'm starting to explore the use of different models with VSGAN & VSMLRT,
and I have a few questions in mind:

a. Where do I put the downloaded PyTorch and/or ONNX files for Hybrid (it definitely depends on which filter I use)? If I had to guess, just add the torch files to the vsgan_models folder... but it never is that easy for me.
b. What parameters do I have to use?
c. When no parameters are set, do the default settings (when checked) actually work already optimized for VSGAN? Or is it better to use custom parameters for that filter?

I know torch files can be converted to ONNX, hence my question: are the models interchangeable between filters by doing that?

EDIT: Hmm... I kind of hoped to find a good model to use with other resizers, to speed things up while maintaining the same output quality as the ESRGAN resizer.
And I have found one that works with VSGAN, but the turtle-slow crawling speed is killing my Nvidia GPU!? Like 0.00x fps, ehh (°^°)

If anybody could provide me with a link to ESRGAN models for vs-mlrt, you're most welcome to do so... preferably 2x, not 4x. Actually 4x looks much worse than 2x, for some reason.
To be specific, I'm looking for ESRGAN_2xPlus; it looks great for the source I'm working with.


Cheerios
TD
#2
The model types VSGAN and vsmlrt support are listed on their GitHub pages.
Regarding the model parameters: no clue, it depends on the architecture. (Atm. Hybrid only supports ESRGAN models in VSGAN.)
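
For reference, loading an ESRGAN-family .pth in a VapourSynth script through VSGAN looks roughly like this. A minimal sketch based on vsgan's documented chained usage; the source filter, model path and device are placeholder assumptions:

Code:
import vapoursynth as vs
from vsgan import ESRGAN

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")  # hypothetical source clip
# VSGAN wants an RGB clip, so convert first.
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

vsgan = ESRGAN(clip, device="cuda")                # pick your device
vsgan.load(r"vsgan_models/RealESRGAN_x2Plus.pth")  # the PyTorch weights
vsgan.apply()
clip = vsgan.clip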

About interchangeability: that is the idea; ONNX and PTH should be different representations of the same data.
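
For the ONNX side, vs-mlrt ships a generic inference helper in vsmlrt.py; a sketch of running the converted model (the backend choice and file names are assumptions, check the vs-mlrt wiki for the exact options):

Code:
import vapoursynth as vs
from vsmlrt import inference, Backend

core = vs.core
clip = core.lsmas.LWLibavSource("input.mkv")  # hypothetical source clip
# vs-mlrt expects 32-bit float RGB input.
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s="709")

# Run the same network from its ONNX representation.
clip = inference(clip, "RealESRGAN_x2Plus.onnx", backend=Backend.TRT(fp16=True))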

Cu Selur
#3
(01.01.2024, 02:29)Selur Wrote: The model types VSGAN and vsmlrt support are listed on their GitHub pages.
Regarding the model parameters: no clue, it depends on the architecture. (Atm. Hybrid only supports ESRGAN models in VSGAN.)

About interchangeability: that is the idea; ONNX and PTH should be different representations of the same data.

Cu Selur

I have looked over there already, for a vs-mlrt (*.onnx) equivalent of VSGAN's RealESRGAN_x2Plus.
But I haven't found one yet...

I'm doing some reading about using the right params for the models... so the jury is still out.

(01.01.2024, 02:29)Selur Wrote: About interchangeability: that is the idea; ONNX and PTH should be different representations of the same data.
Cu Selur

Maybe so... but that's the problem! A PyTorch model can't be loaded in vsmlrt, just like an ONNX model can't be loaded in VSGAN.
These have to be converted/exported to the right format, do they not?

Is there an easy/quick way/tool (GUI) to convert a PyTorch model into an *.onnx, and vice versa? I'm not a coder... hence the question.


Thanks,
#4
One can use chaiNNer to convert model files.
[Image: grafik.png]

Cu Selur
#5
(01.01.2024, 15:54)Selur Wrote: One can use chaiNNer to convert model files.
[Image: grafik.png]

Cu Selur

I knew you would suggest that app... I had already tried that one before I asked here...

It doesn't seem to be as straightforward and easy as in the picture you posted, though.
I'm using the portable version...

cheers,
#6
Quote:It doesn't seem to be as straightforward and easy as in the picture you posted, though.
Works fine here.
What I do is:
  • Check under 'Manage Dependencies' (upper right corner) that everything you need is installed.
  • Add a 'PyTorch->Load Model' element and configure it to load your .pth file.
  • Add a 'PyTorch->Convert to ONNX' element.
  • Add an 'ONNX->Save Model' element.
Connect/configure the elements, then press the 'play' button at the top.
Seems to be rather straightforward.
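
If you would rather script the conversion than use the GUI, it boils down to a single torch.onnx.export call. A minimal sketch, assuming the .pth stores a complete nn.Module; a bare state_dict would first need the matching ESRGAN architecture instantiated and the weights loaded into it:

Code:
import torch

# Assumption: the .pth holds a whole serialized module, not just weights.
model = torch.load("RealESRGAN_x2Plus.pth", map_location="cpu")
model.eval()

# Dummy RGB tile used to trace the graph; H/W are marked dynamic below.
dummy = torch.randn(1, 3, 64, 64)

torch.onnx.export(
    model, dummy, "RealESRGAN_x2Plus.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {2: "height", 3: "width"},
                  "output": {2: "height", 3: "width"}},
    opset_version=17,
)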


Cu Selur
#7
(01.01.2024, 16:14)Selur Wrote:
Quote:It doesn't seem to be as straightforward and easy as in the picture you posted, though.
Works fine here.
What I do is:
  • Check under 'Manage Dependencies' (upper right corner) that everything you need is installed.
  • Add a 'PyTorch->Load Model' element and configure it to load your .pth file.
  • Add a 'PyTorch->Convert to ONNX' element.
  • Add an 'ONNX->Save Model' element.
Connect/configure the elements, then press the 'play' button at the top.
Seems to be rather straightforward.


Cu Selur

By that I mean: next to the chaiNNer app itself, I had to search for and download additional files just to see the PyTorch node in chaiNNer.
Everything was expanded, and although there was an ONNX node available among many others, only the one I needed was missing... of course...

So yeah, I needed to download the dependencies additionally. A whopping 2 GB+ >_> just to load A model!
I gather you presumably didn't have to do that? → portable vs. installer version difference, maybe...

cheers,
td
#8
¯\_(ツ)_/¯
#9
(01.01.2024, 16:23)Selur Wrote: ¯\_(ツ)_/¯

(°^°)

