Same test I ran in #9 is slower now.
Before I got:
encoded 192 frames in 50.79s (3.78 fps), 1707.74 kb/s, Avg QP:21.51
now I get:
encoded 192 frames in 87.40s (2.20 fps), 1707.83 kb/s, Avg QP:21.45
Okay, strangely, commenting out the '@torch.inference_mode()' lines brings the speed back up to 3.8 fps.
Side note: increasing ref_stride to 100 raises the speed to 3.99 fps, while increasing raft_iter to 30 slows the encoding down to 2.02 fps.
Keeping the '@torch.inference_mode()' lines and using ref_stride=100 increases the speed to 5.14 fps.
Cu Selur
Ps.: did a small test with a non-transparent logo (see attachment)
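For anyone who wants to reproduce the inference_mode comparison: a minimal sketch (assuming PyTorch is installed) of the two equivalent ways to apply torch.inference_mode — the decorator form that the '@torch.inference_mode()' lines use, and a context-manager form that is easier to toggle when benchmarking. The model and input here are placeholders, not the actual inpainting code:

```python
import torch

@torch.inference_mode()  # decorator form, as in the '@torch.inference_mode()' lines
def infer_decorated(model, x):
    return model(x)

def infer_context(model, x):
    # equivalent context-manager form; easy to comment in/out when timing
    with torch.inference_mode():
        return model(x)

# placeholder model and input, just to show the effect
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

y = infer_decorated(model, x)
print(y.requires_grad)    # inference-mode outputs carry no autograd state
```

inference_mode disables autograd tracking and tensor version counters, which is where the speed difference comes from; outputs are "inference tensors" and cannot be used later in autograd.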
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.