Improving CodeFormer videos
10.02.2024, 00:32
The new dev-version is working.
Thanks, Dan
10.02.2024, 11:27
Hello Selur,
I noticed that there are situations where the "retinaface" detector introduces strong artifacts. You can find an example here: https://imgsli.com/MjM5MDQz

The only combinations with no artifacts (for the sample used) are:
CF Detector: dlib
CF Detector: retinaface with "only center"

The second combination has the disadvantage of not enhancing the faces in the background. You can find an example here: https://imgsli.com/MjM5MDQ2 In this case "retinaface (only center)" is worse than "dlib".

The big problem is that "dlib" is very, very slow (around 0.005 fps). Do you know if the version available in Hybrid is CUDA-enabled? I ask because I checked the flag dlib.DLIB_USE_CUDA and it is False.

Thanks,
Dan

P.S. I attached a sample that shows the artifacts when encoded with CF.
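For context, the detector settings compared above map to a VapourSynth script along these lines. This is only a hedged sketch: the module name vscodeformer, the function codeformer(), and the detector / only_center_face parameters are assumptions based on the Hybrid GUI labels, so check HolyWu's vs-codeformer repository for the exact API and required input format.

# Hedged sketch only: vscodeformer.codeformer and its detector/only_center_face
# parameters are assumed names; the required RGB format may also differ.
import vapoursynth as vs
from vscodeformer import codeformer  # assumed import path

core = vs.core
clip = core.lsmas.LWLibavSource(r"sample.mp4")                       # hypothetical input
clip = core.resize.Bicubic(clip, format=vs.RGBH, matrix_in_s="709")

# The detector choice corresponds to the "CF Detector" options compared above:
# clip = codeformer(clip, detector=0, only_center_face=False)  # retinaface, all faces (artifacts on this sample)
# clip = codeformer(clip, detector=0, only_center_face=True)   # retinaface, center face only (misses background faces)
clip = codeformer(clip, detector=1)                             # dlib (clean here, but ~0.005 fps on CPU)

clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709")
clip.set_output()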
HolyWu only links a dlib wheel file (see: https://github.com/HolyWu/vs-codeformer/releases), which is probably from https://github.com/z-mahmud22/Dlib_Windows_Python3.x or https://github.com/sachadee/Dlib. No clue whether those are compiled with CUDA support or whether vs-codeformer uses any functionality of the library that is sped up through CUDA. You would have to ask HolyWu.

Reading https://github.com/eddiehe99/dlib-whl, I suspect that CUDA is used, since the CUDA libraries should be in the DLL path.

Cu Selur
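Whether the CUDA runtime DLLs are actually present can be checked directly. A minimal sketch follows; the Hybrid install path is an assumption, adjust it to the local installation.

import glob
import os

# Assumed default Hybrid install path; adjust to the local installation.
bin_dir = r"C:\Program Files\Hybrid\64bit\Vapoursynth\torch_dependencies\bin"

# List the CUDA runtime / cuDNN / cuBLAS DLLs that a CUDA-enabled dlib would
# need to find at import time.
for pattern in ("cudart64*.dll", "cudnn64*.dll", "cublas64*.dll"):
    hits = glob.glob(os.path.join(bin_dir, pattern))
    print(pattern, "->", [os.path.basename(h) for h in hits] or "not found")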
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
10.02.2024, 19:21
In the script I added the following code:

import dlib

and I got DLIB_USE_CUDA: False, so I suspect that CUDA is not used. In the eddiehe99 dlib-whl version there is a section regarding the CUDA configuration ("With CUDA"), but in Hybrid both cudnn.lib and cudart.lib are missing. So I think that CUDA is not enabled.
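Spelled out, the check described above is just the following, runnable inside Hybrid's VapourSynth Python environment:

# Print whether the installed dlib build was compiled with CUDA support.
import dlib

print("dlib", dlib.__version__)
print("DLIB_USE_CUDA:", dlib.DLIB_USE_CUDA)   # False here -> CPU-only build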
10.02.2024, 19:37
The .lib files are only needed for compiling; 64bit\Vapoursynth\torch_dependencies\bin contains the DLLs that actually get loaded at runtime.
Like I wrote, you would have to ask HolyWu to be sure.

Cu Selur
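As a side note on how such a runtime DLL folder typically gets picked up: on Windows with Python 3.8+, extension modules no longer resolve their DLL dependencies through PATH, so the folder has to be registered explicitly before the import. A minimal sketch (the Hybrid path is an assumption):

import os

# Register Hybrid's runtime DLL folder (assumed default path) before importing
# dlib, so a CUDA-enabled build can resolve cudart/cudnn from there.
os.add_dll_directory(r"C:\Program Files\Hybrid\64bit\Vapoursynth\torch_dependencies\bin")

import dlib
print("DLIB_USE_CUDA:", dlib.DLIB_USE_CUDA)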
10.02.2024, 20:02
11.02.2024, 16:57
Fingers crossed that HolyWu can help.
Cu Selur
12.02.2024, 22:11
Hello Selur,
I solved the problem by building "dlib" with CUDA enabled. I attached the new dlib wheel, feel free to try it. With this version the encoding speed increased from 0.05 fps to 2.5 fps, i.e. 50x faster!

I built dlib with the CUDA compute capability set to 8.0 (good for RTX 30 and above). I suspect that the few dlib versions with CUDA support that are available were compiled with compute capability 5.0 (because this is the default in cmake). For compatibility reasons I compiled "dlib" against CUDA SDK v11.4 (also good for RTX 30 and above).

In case the pip installer refuses to install the wheel because it is not compatible, you have to rename the wheel extension from ".whl" to ".zip" so that you can edit the file __init__.py in the dlib folder. Change the following code: if 'ON' == 'ON':

Dan
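For orientation only, the guard referred to above typically has roughly the shape sketched below. This is not dlib's exact generated __init__.py, and both paths are assumptions: 'ON' is substituted from the CMake CUDA option at build time, and the CUDA bin path baked in on the build machine may not exist on the target machine, so it can be pointed at Hybrid's torch_dependencies\bin or made conditional.

# Illustrative sketch, not dlib's exact generated code: in a CUDA build the
# wheel's dlib/__init__.py ends up with a guard of roughly this shape, where
# 'ON' was substituted from the CMake CUDA option at build time.
import os

if 'ON' == 'ON':  # CUDA was enabled when the wheel was built
    # Path baked in from the build machine; it may not exist on the target.
    cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin"
    # Assumed fallback: Hybrid already ships the CUDA runtime DLLs here.
    fallback = r"C:\Program Files\Hybrid\64bit\Vapoursynth\torch_dependencies\bin"
    for candidate in (cuda_bin, fallback):
        if os.path.isdir(candidate):
            os.add_dll_directory(candidate)
            break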
13.02.2024, 21:00
Will try it tomorrow evening.
Does it work if the files from Hybrid\64bit\Vapoursynth\torch_dependencies\bin are used? (not planning to install the CUDA SDK)

Cu Selur
13.02.2024, 21:51
(13.02.2024, 21:00)Selur Wrote: Does it work if the files from Hybrid\64bit\Vapoursynth\torch_dependencies\bin are used?

I'm using the Hybrid environment, which should be based on CUDA 12.x, and the filter is using those libraries. CUDA is backward compatible with previous versions, so CUDA 11.4 is a good starting point. To be sure, I renamed the folder with my SDK installation (which was necessary to compile dlib) to 'C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v10.4' and the filter still worked perfectly, so my answer to your question is that the files from Hybrid\64bit\Vapoursynth\torch_dependencies\bin are enough. I'm not sure whether python checks for the existence of the libraries during the installation of the wheel, but given that python is very lazy with this kind of check, I expect that the installation should proceed smoothly.

Dan

Hello Selur,

fixing the CUDA problem raised another issue. There are situations (or clips) where VSPipe fails to fully encode the movie. The error is the following:

Error: fwrite() call failed when writing frame: xxx, plane: 0, errno: 32

I was unable to fix this issue. The problem started to happen when I enabled CUDA in dlib; I suspect that some library raises some kind of error. I tried to fix the problem by adding the following code to the script:

# Blind Face Restoration using CodeFormer

Maybe the error is raised before or after; I have not checked all the code.

What is really strange is that encoding the same video with "VsViewer" proceeds smoothly. I obtained the same result using the original CodeFormer script, inference_codeformer.py. So it seems that the problem is limited to VSPipe. But given that "VsViewer" also uses pipes to perform the encoding, I expected to see the same problem with "VsViewer". I know that you worked on "VsViewer"; maybe this is the reason why "VsViewer" works better than "VSPipe".

Do you have any idea how the problem can be fixed? Is it theoretically possible to extend "VsViewer" so that it can be launched from the command line like "VSPipe"? I guess that "VsViewer" is more robust because the pipe is directly controlled by the program, while with "VSPipe" it is the OS that controls the pipe.

I attached an archive containing: 1) the script used, 2) the clip used, 3) the log. I hope that this can help.

Thanks,
Dan
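For what it is worth, errno 32 is EPIPE ("broken pipe"): whatever VSPipe was writing to closed its end of the pipe before the last frame was delivered, usually because the downstream encoder exited first. The sketch below (paths, encoder settings and the VSPipe container flag are placeholders, not taken from the thread) shows a setup where one program owns both ends of the pipe and both tools' stderr, which is roughly why a frontend like VsViewer can surface the encoder's own error instead of a bare fwrite() failure.

# Hedged sketch with placeholder paths/options: run VSPipe and the encoder from
# one Python process so that both ends of the pipe and both stderr streams stay
# under the caller's control.
import subprocess

vspipe = subprocess.Popen(
    [r"C:\Program Files\Hybrid\64bit\Vapoursynth\VSPipe.exe",
     "-c", "y4m", r"script.vpy", "-"],   # recent VapourSynth; older builds use --y4m
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

encoder = subprocess.Popen(
    [r"C:\Program Files\Hybrid\64bit\x264.exe",
     "--demuxer", "y4m", "-o", r"out.264", "-"],
    stdin=vspipe.stdout, stderr=subprocess.PIPE)

vspipe.stdout.close()                  # let VSPipe get EPIPE if the encoder dies
enc_err = encoder.communicate()[1]     # wait for the encoder, collect its messages
vs_err = vspipe.stderr.read()          # VSPipe's own messages (progress, errors)
vspipe.wait()
print("VSPipe exit:", vspipe.returncode, "| encoder exit:", encoder.returncode)
# If the encoder exits early, its stderr usually explains why VSPipe then
# reports "fwrite() call failed ... errno: 32".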