No, but cv2 should already be optimized, since it is written in C/C++.
In any case, in the coloring pipeline the bottlenecks are elsewhere.
For example, recently I had to perform some color adjustments on an already colored 1440x1080 clip, using this code:
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="strong", BlackWhiteMode=4, BlackWhiteBlend="True", ReColor=False, chroma_resize=True)
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="medium", BlackWhiteMode=3, BlackWhiteBlend="True", ReColor=False, chroma_resize=True)
The encoding speed was about 30 fps, and in this case both LUTs and CLAHE are used (see the timing sketch below).
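For context, the raw cv2 LUT and CLAHE calls are themselves quite cheap on a frame of this size. Below is a minimal timing sketch, not the HAVC code itself; the synthetic frame, the identity LUT, and the CLAHE parameters are all assumptions for illustration:

import time
import numpy as np
import cv2

# Synthetic 1440x1080 BGR frame; stands in for a real decoded frame.
frame = np.random.randint(0, 256, (1080, 1440, 3), dtype=np.uint8)
lut = np.arange(256, dtype=np.uint8)  # identity LUT as a placeholder
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed parameters

t0 = time.perf_counter()
for _ in range(100):
    out = cv2.LUT(frame, lut)                 # per-channel LUT lookup
    lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])  # CLAHE on the luma plane only
    out = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
t1 = time.perf_counter()
print(f"{100 / (t1 - t0):.1f} frames/s for LUT + CLAHE")

If this loop runs well above 30 fps on your machine, the cv2 calls are not the bottleneck.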
I admit that this is not very fast, but given that the CPU usage was 25% and the GPU usage was 9%, I think the best option to increase the speed is chunk encoding, as suggested in chapter 4.0.4 of the HAVC user guide.
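For reference, the general idea behind chunk encoding is to split the clip into frame ranges and encode each range as a separate parallel job. Here is a minimal VapourSynth sketch of the splitting step only; the source filter and file name are hypothetical, and the actual procedure is the one described in the HAVC user guide:

import vapoursynth as vs
core = vs.core

clip = core.lsmas.LWLibavSource(source="input_bw.mkv")  # hypothetical input

num_chunks = 4   # number of parallel jobs (assumed)
chunk_id = 0     # set to 0..num_chunks-1, one value per job
chunk_len = (clip.num_frames + num_chunks - 1) // num_chunks
first = chunk_id * chunk_len
last = min(first + chunk_len, clip.num_frames) - 1

# Each job colors and encodes only its own frame range;
# the encoded chunks are concatenated afterwards.
chunk = core.std.Trim(clip, first=first, last=last)
chunk.set_output()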
I tried to use Filtering->Filter Order/Queue->Use Filter Queue to build a script that calls HAVC_ColorAdjust twice, using these settings.
The expected generated code is:
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="strong", BlackWhiteMode=4, BlackWhiteBlend="True", ReColor=False)
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="medium", BlackWhiteMode=3, BlackWhiteBlend="True", ReColor=False)
But instead Hybrid generated the following code.
Could you please fix it?
Thanks,
Dan