Given that your source is already of good quality and is anime/cartoon content, you might want to try some of the models (compact models if possible, since they are faster) from https://openmodeldb.info/.
(using chaiNNer you can convert most of them to ONNX)
I often get better results using one of those than with NNEDI3 (+ maybe another model or filtering) + some line darkening and/or luma sharpening.
For noisy cartoon/anime content I would recommend first doing some cleanup with Avisynth, then (masked) SCUNet, and then the above.
(side note: I would go a bit different at it with Vapoursynth)
About 'natural' content: so far I'm no fan of machine learning upscaling there.
For general filtering, machine learning can help a lot, but I would always recommend keeping the possibility of 'masked filtering' in mind when using machine-learning-based filters.
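As a rough illustration, 'masked filtering' in Avisynth can look like the sketch below. This is a hypothetical example, not a fixed recipe: it assumes masktools2 and FFMS2 are installed, RemoveGrain(20) is only a stand-in for a heavy denoiser (a masked SCUNet call would slot into the same place), and the mt_edge thresholds are placeholder values you would tune per source.

# build an edge mask and only denoise the flat areas,
# keeping the original line art untouched
src  = FFVideoSource("input.mkv")
den  = src.RemoveGrain(20)  # stand-in for a strong (ML) denoiser
edge = src.ConvertToY().mt_edge(mode="sobel", thY1=8, thY2=8).mt_expand()
# where the mask is bright (edges), keep src; elsewhere use the denoised clip
mt_merge(den, src, edge, luma=true)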
---
Instead of 'ConvertBits(32)', 'ConvertBits(16)' should be fine when using fp16, but it shouldn't really make a speed difference.
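Applied to the RealESRGAN script in question, that change would look like this (a sketch; it assumes your avs-mlrt build accepts 16-bit input when the backend runs in fp16):

ConvertBits(16)  # 16 bit instead of 32-bit float; fine here since the backend uses fp16
ConvertToPlanarRGB()
mlrt_RealESRGAN(model=2, backend=["ncnn", "fp16=true"])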
Cu Selur
Ps.: Welcome to the forum.
---
# convert to what avs-mlrt expects: 32-bit, planar RGB
ConvertBits(32)
ConvertToPlanarRGB()
# Real-ESRGAN upscale via avs-mlrt, ncnn backend with half precision
mlrt_RealESRGAN(model=2, backend=["ncnn", "fp16=true"])

Cu Selur

---
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.