
SCUNet Strength
#1
Hello Selur and friends, I'm working on a video of a live music performance, and in some parts there's heavy noise. I've noticed that only SCUNet can handle it without losing too much detail in close-up shots. However, the issue is with subtle textures and finer details, like a crack in the wall or some surface relief: I think they are being identified as noise and end up getting compromised.
The goal is to preserve as much detail as possible, but it's been tough. My question is: is there any way to reduce the strength a bit so these details aren't affected as much? Or is there any other way to deal with such heavy noise without losing fine detail quality?
Thanks in advance!

[Image: 5A7j15U.png]
#2
SCUNet has no 'strength' parameter or similar.
You can:
  • try a different model
  • try to apply it masked. Hybrid supports some basic masks through the 'Masked' controls. Enable "Filtering->Vapoursynth->UI->Show 'Masked' Controls" to see them.
  • try to apply it only partially, based on the YUV components (enable "Filtering->Vapoursynth->UI->Show 'Merge' Controls" to configure this).
I would need a link to a short sample clip to suggest any conventional filtering. Based on the image, maybe some deblocking is all that is needed.
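For intuition, 'masked' application boils down to a per-pixel blend between the original and the filtered clip, weighted by a mask (in VapourSynth terms, this is what std.MaskedMerge does). A minimal pure-Python sketch of that blend, not Hybrid's actual code:

```python
# Per-pixel masked merge: where the mask is 255 the filtered pixel wins,
# where it is 0 the original is kept, with a linear blend in between.
# Flat lists of luma values stand in for full planes here.

def masked_merge(original, filtered, mask):
    """Blend two equally sized pixel lists according to a 0-255 mask."""
    return [
        round(o + (f - o) * m / 255)
        for o, f, m in zip(original, filtered, mask)
    ]

orig     = [100, 100, 100, 100]
denoised = [ 90,  90,  90,  90]
mask     = [  0, 128, 255,  64]  # 0 = keep original, 255 = fully filtered
print(masked_merge(orig, denoised, mask))  # → [100, 95, 90, 97]
```

So a mask that is black over fine detail and white elsewhere would let SCUNet denoise everything except those details.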

Cu Selur

PS: You will probably always lose some fine detail unless you find a way to mask just these fine details.
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#3
Thanks for the help, Selur. This is one of the noisiest parts of the video; it's attached. I followed the procedure you mentioned, but I couldn't find where to use the mask.


Attached Files
.rar   B1_t00-00.25.05.510-00.25.19.476.rar (Size: 11,5 MB / Downloads: 4)
#4
After overwriting the scan type to 'progressive', I see no combing => the clip does not seem to be interlaced (so the input scan type should be overwritten to 'progressive').
Whatever you used to cut the clip flagged it as having 85720 frames.

Do you plan to change the brightness of the clip?

Just to see potential artifacts, I used Retinex and moved it to the end of the filter queue.
This showed me something like:
[Image: grafik.png]
Since the brighter areas seemed fine, I used a LimitMask with limit 60 and 'invert' enabled, so that only pixels with a luma value of 60 or below are filtered.
This then showed:
[Image: grafik.png]
Disabling 'Retinex', I got:
[Image: grafik.png]
Sadly, the 60 limit basically disabled SCUNet for most of the second scene.
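In plain Python, the thresholding idea behind that LimitMask looks roughly like this (an illustration only, under the assumption that 'invert' selects pixels at or below the limit; not Hybrid's actual implementation):

```python
# Sketch of a luma LimitMask with limit 60: with 'invert' enabled, only
# pixels at or below the limit are marked for filtering (255), the rest
# are left untouched (0). Illustration only, not Hybrid's actual code.

def limit_mask(luma_plane, limit=60, invert=True):
    """Return a binary mask: 255 where the pixel should be filtered."""
    if invert:
        return [255 if y <= limit else 0 for y in luma_plane]
    return [255 if y > limit else 0 for y in luma_plane]

luma = [10, 59, 60, 61, 200]
print(limit_mask(luma))  # → [255, 255, 255, 0, 0]
```

Applied as a mask, the 255 areas would receive the denoised pixels, while anything brighter than the limit keeps the source, which is why a brighter second scene ends up barely filtered.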
So I got another idea: maybe don't limit SCUNet at all, but apply CAS before it.
script: https://pastebin.com/bC9LtQEr
clip: https://www.mediafire.com/file/1iorqajal...t.mp4/file
compare: https://imgsli.com/Mzg3NDky

Quote:I followed the procedure you mentioned, but I couldn't find where to use the mask
If you enable the 'Masked' controls ("Filtering->Vapoursynth->UI->Show 'Masked' Controls"), more options appear next to the filter.
Here's a screenshot of what you could see (if you scroll to the right or enlarge your Hybrid window).
[Image: grafik.png]
Since these are quite a few controls that most users do not use, they are hidden by default.

Cu Selur
#5
Quote:After overwriting the scan type to 'progressive', I see no combing => the clip does not seem to be interlaced (so the input scan type should be overwritten to 'progressive').
Whatever you used to cut the clip flagged it as having 85720 frames.
Okay, but then it wouldn't be possible to double the frames naturally using QTGMC, right? In that case, I’d have to use something like Flowframes instead? As for what I used to cut the video, it was LosslessCut. 

Quote:Do you plan to change the brightness of the clip?
Although it looks quite dark in some parts, I think I'll leave it like that.

Quote:If you enable the 'Masked' controls ("Filtering->Vapoursynth->UI->Show 'Masked' Controls"), more options appear next to the filter.
Here's a screenshot of what you could see (if you scroll to the right or enlarge your Hybrid window).
Alright, I thought I had done something wrong because nothing was changing, but I found it now, haha. Thanks!

Quote:So I got another idea: maybe don't limit SCUNet at all, but apply CAS before it.
Excellent! Is there a way to import this script, or should I just enable CAS at 0.700 and then SCUNet? One more question: what's the difference between the regular SCUNet and SCUNet MLRT?
I saw that DPIR has it too!
#6
Quote:Okay, but then it wouldn't be possible to double the frames naturally using QTGMC, right?
There's nothing natural about it. Deinterlacing progressive content will just produce duplicate frames. (The same can be done using 'Filtering->Speed Change->Scale output frame rate to'.) The problem with this method is that it might introduce additional artifacts due to unnecessary/harmful filtering steps.

Quote:As for what I used to cut the video, it was LosslessCut.
Note that unless you only cut on key frames, LosslessCut is in general not lossless.

Quote: I’d have to use something like Flowframes instead?
If you want something better than simple duplication of frames: yes, frame interpolation would be what one would choose.

Quote:Excellent, is there a way to import this script or just enable CAS at 0.700 and then ScuNet
You should adjust the filter order. The order in which you enable filters does not influence the filter order.

Quote:what's the difference between the regular ScuNet and ScuNet MLRT?
One uses https://github.com/HolyWu/vs-scunet, the other https://github.com/AmusementClub/vs-mlrt, to run the SCUNet models. Depending on your setup, one or the other is usually faster, but they both use the same models (in different formats) and should produce nearly the same output.
(The same applies to DPIR and some other filters.)

Cu Selur
#7
Quote:There's nothing natural about it. Deinterlacing progressive content will just produce duplicate frames. (This too can be done by using 'Filtering->Speed Change->Scale output frame rate to'.) Problem with this method is that it might introduce additional artifacts due to unnecessary/harmful filtering steps.
Sorry, I think I expressed myself poorly. I meant that when the video is actually interlaced, the best choice would be QTGMC with "Bob" enabled, right?
I’m asking because all this time I believed that deinterlacing the video and doubling the frames would be better than using something like Flowframes after converting it to progressive.
About the filter order, I figured it out. I noticed that unfortunately SCUNet tends to wash out the textures; everything becomes very flat, with no depth or detail.
Isn't there another filter that can handle the noise like SCUNet, but more subtly than MC Temporal Denoise and without losing details as shown in the video?
I'll include an example here: even with CAS applied before SCUNet, in some scenes it still loses a lot of background detail. Look at this image:

[Image: Wl5JqLw.png]

Anyway, everything else is fine; I'll continue testing. Thanks a lot for the lesson!
#8
The problem is figuring out what is fine detail and what is noise.
If one could filter out just the fine details, one could take the difference with the original source and thus create a mask that could be used with the denoised clip to re-add these details.
Hybrid does not have a mask for fine detail/grain, since I don't know of a way to generically identify it.
Alternatively, you can add new grain after the filtering.
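The diff-and-re-add idea can be sketched in plain Python (hypothetical helper names, flat pixel lists standing in for planes; a real VapourSynth script would use something like std.MakeDiff/std.MergeDiff plus a mask):

```python
# Sketch of re-adding fine detail: build a mask where source and
# denoised frames differ noticeably, then add that difference back only
# there. Illustration only; 'threshold' is a made-up tuning parameter.

def detail_mask(source, denoised, threshold=5):
    """255 where source and denoised differ by more than `threshold`."""
    return [255 if abs(s - d) > threshold else 0
            for s, d in zip(source, denoised)]

def readd_details(source, denoised, mask):
    """Re-add (source - denoised) where the 0-255 mask is set."""
    return [round(d + (s - d) * m / 255)
            for s, d, m in zip(source, denoised, mask)]

src      = [120, 80, 200]
denoised = [110, 80, 180]          # fine texture smoothed away
mask     = detail_mask(src, denoised)
print(mask)                        # → [255, 0, 255]
print(readd_details(src, denoised, mask))  # → [120, 80, 200]
```

The catch is exactly what is described above: a simple difference cannot tell removed detail from removed noise, so the mask would re-add both.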

Can you share a sample of the scene where you want to keep the details?

Cu Selur

PS: https://silentaperture.gitlab.io/mdbook-...sking.html might be interesting for understanding masking.
PPS: You might want to try whether an inverted 'EdgeMask (Prewitt)' shrunk by -1 or -2 works.
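A rough pure-Python sketch of what an inverted Prewitt edge mask does (illustration only, not Hybrid's actual EdgeMask; the -1/-2 shrink would additionally erode the white areas with a minimum filter):

```python
# Inverted Prewitt edge mask: edges end up black (protected from the
# denoiser) and flat areas white (denoised). A 2-D list of luma values
# stands in for a real plane.

PREWITT_X = [(-1, 0, 1), (-1, 0, 1), (-1, 0, 1)]
PREWITT_Y = [(-1, -1, -1), (0, 0, 0), (1, 1, 1)]

def prewitt_mask(img, threshold=128):
    """Binary edge mask (255 = edge) for a 2-D list of luma values."""
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(PREWITT_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(PREWITT_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mask[y][x] = 255 if abs(gx) + abs(gy) > threshold else 0
    return mask

def invert(mask):
    """Flip the mask so edges protect pixels instead of selecting them."""
    return [[255 - v for v in row] for row in mask]

# A tiny frame with a vertical edge between luma 0 and luma 200:
frame = [[0, 0, 200, 200] for _ in range(4)]
print(invert(prewitt_mask(frame))[1])  # → [255, 0, 0, 255]
```

Used as a filter mask, the black (edge) pixels would keep the source and only the white (flat) areas would be denoised, which is why shrinking it slightly can help protect texture right next to edges.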