Hi everyone,
@Kevin_san what do you do for oversharpened sources?
1. You cannot remove it completely, because the sharpening was nonlinear and the artifact has destroyed data. Most algorithms that invent the missing data produce awkward results.
2. The tricky part is always the convolution matrix. I developed mine over a long time.
There are several basic types: one for originals with halo, one for those without, and each of the two has a further variant for extra-blurred sources.
3. There is another type of halo that shows only horizontal dark lines above bright objects. It comes from wrongly sharpening interlaced video.
For this I use pre-blending, which in my case also has to be tuned to the variant of the interlacing system. It can be used on halo-free sources too, to reduce deinterlacing artifacts.
4. You need to know for sure when not to deinterlace, because the video you are working on uses progressive frames. 60 fps only makes sense for truly interlaced material.
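My pre-blending scripts themselves are not shown in this post, so as a rough illustration only: the core idea of blending each line with its neighbors to suppress interlace combing can be sketched in numpy like this. The weights here ([w, 1-2w, w]) are a hypothetical starting point, not my tuned values; the real thing has to be adapted to the interlacing variant.

```python
import numpy as np

def preblend_fields(frame: np.ndarray, w: float = 0.25) -> np.ndarray:
    """Blend each line with the lines above and below it to soften
    interlace combing before any sharpening.  The weight w is the
    tunable part; [w, 1-2w, w] is only an illustrative default."""
    padded = np.pad(frame.astype(np.float64), ((1, 1), (0, 0)), mode="edge")
    return w * padded[:-2] + (1 - 2 * w) * padded[1:-1] + w * padded[2:]

# A frame with hard combing: alternating bright and dark lines.
combed = np.zeros((8, 4))
combed[::2] = 255.0
blended = preblend_fields(combed)
```

On this worst-case combed input the interior lines all converge toward the mid-level, which is exactly why the weights must be tuned: too much blending destroys vertical detail along with the combing.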
I have posted the whole set of scripts somewhere here on the board; I can give you the latest version, ready for a hundred particular ISOs.
Many of the original ISOs are absolutely catastrophic, and a one-click re-render delivers practically nothing you would ever want to watch.
So the processing makes them at least a bit more palatable, but it takes very sensitive fine-tuning, including sub-pixel shifting of the chroma. Like "mastering".
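The sub-pixel chroma shift is done by my script, which is not reproduced here; purely as a sketch of the principle, a fractional horizontal shift of a chroma plane via linear interpolation looks like this in numpy (the function name and the wrap-around edge handling are my illustration, not the script's actual behavior).

```python
import numpy as np

def shift_subpixel_h(plane: np.ndarray, dx: float) -> np.ndarray:
    """Shift a (chroma) plane right by a fractional number of pixels
    using linear interpolation between the two nearest samples.
    Edges wrap around here (np.roll); a real script would handle
    borders more carefully."""
    n = int(np.floor(dx))
    frac = dx - n
    a = np.roll(plane, n, axis=1).astype(np.float64)
    b = np.roll(plane, n + 1, axis=1).astype(np.float64)
    return (1.0 - frac) * a + frac * b

# Shifting a horizontal ramp right by half a pixel lands each output
# sample exactly between two input samples.
ramp = np.tile(np.arange(8.0), (2, 1))
half = shift_subpixel_h(ramp, 0.5)
```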
Average size is a bit over 2GB plus audio. A few are 3GB. Processing takes 3-7 hours.
You have to follow a strategy. You need to restore decent skin color, you need to make eyes and teeth sparkle, and you need scenes where the face looks really natural and shows depth. You need scenes where the color is completely natural, not yellowish. Then the girl looks sexy, and viewers can enjoy it.
In other words, these are your human parameters for the fine-tuning.
I use 960x640 because it removes almost every trace of pixelation; the result looks like it was printed on good paper, or like a watercolor painting.
This is also crucial because of the chroma subsampling: the encoder can optimize the chroma plane a little this way.
Higher upsampling just uses too much disk space for almost no gain. Current AI cannot recreate a healthy look by reinventing lost data; there are too many disturbing "uncanny valley" artifacts.
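One plausible reading of why 960x640 suits the encoder (my interpretation, not stated in the post): with 4:2:0 subsampling the chroma planes are half the luma resolution in each direction, and both dimensions are divisible by 16, so no padded partial macroblocks are needed. The arithmetic:

```python
# 4:2:0 subsampling: chroma planes are half the luma resolution
# in each direction.  Many encoders work on 16x16 macroblocks, so
# frame dimensions divisible by 16 avoid padded partial blocks.
w, h = 960, 640                      # the target frame size from the post
chroma_w, chroma_h = w // 2, h // 2  # chroma plane dimensions
mod16_ok = (w % 16 == 0) and (h % 16 == 0)
```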
In your case, try C50g5b or C50g5h1.
(But you need the script for the shifting and pre-blending, and then the postprocessing with the oversampled re-sharpening at pixel level and below.)
The following series of "convolutional kernels" was made for videos with some, or even very strong, halo.
C50g5a ="3 -15 19 -1 121 0 0 2 -4 -12 0 0 -4 4 23 0 0 0 -1 -7 0 0 0 0 1" #C5.0g5a
C50g5b ="2 -15 23 -3 121 0 0 2 -5 -31 0 0 -4 4 37 0 0 0 -0 -3 0 0 0 0 6" #C5.0g5b AiShz5301 ßß
C50g5c ="1 -11 19 -9 113 0 0 2 -5 -21 0 0 -4 4 11 0 0 -0 -1 -7 0 0 0 0 3" #C5.0g5c HarukaM-DV13
C50g5d ="1 -11 19 -9 113 0 0 2 -5 -31 0 0 -4 4 23 0 0 -0 -1 -9 0 0 0 0 5" #C5.0g5d Nana7x
C50g5e ="1 -11 19 -9 113 0 0 2 -5 -17 0 0 -4 4 3 0 0 -0 -1 -3 0 0 0 0 1" #C5.0g5e FuminaYuna19
C50g5f ="2 -7 21 -17 113 0 0 2 -5 -13 0 0 -4 4 3 0 0 -0 -1 -3 0 0 0 0 1" #C5.0g5f
C50g5f1="3 -7 21 -17 113 0 0 2 -5 -11 0 0 -4 4 9 0 0 -0 -1 -9 0 0 0 0 5" #C5.0g5f1 KaedeKaga-ODYB
C50g5g ="1 -7 19 -17 113 0 0 2 -5 -13 0 0 -4 4 3 0 0 -0 -1 -3 0 0 0 0 1" #C5.0g5g #sharper
C50g5h ="1 -9 23 -5 121 0 0 2 -5 -31 0 0 -4 4 19 0 0 0 -0 -3 0 0 0 0 4" #C5.0g5h Saaya-5436,Saaya-11925
C50g5h1="3 -7 23 -5 121 0 0 2 -5 -17 0 0 -5 5 21 0 0 -0 -1 -5 0 0 0 0 3" #C5.0g5h1 ReimiTachibana_TSDV,
C50g5h2="3 -9 29 -7 121 0 0 2 -5 -23 0 0 -5 5 27 0 0 -0 -1 -9 0 0 0 0 5" #C5.0g5h2 AyameMisaki_LCDV-40314
C50g5h3="1 -9 23 -5 121 0 0 2 -5 -27 0 0 -4 4 17 0 0 0 -0 -9 0 0 0 0 4" #C5.0g5h3 Saaya-11927
C50g5h4="3 -7 29 -9 121 0 0 2 -5 -23 0 0 -5 5 27 0 0 -0 -1 -9 0 0 0 0 5" #C5.0g5h4 AyameMisaki_LCDV-40314
C50g5h5="3 -11 29 -9 121 0 0 2 -5 -23 0 0 -5 5 27 0 0 -0 -1 -9 0 0 0 0 5" #C5.0g5h5 Saaya-11164
C50g5i ="1 -11 19 -9 113 0 0 2 -5 -9 0 0 -4 4 7 0 0 -0 -1 -9 0 0 0 0 1" #C5.0g5i
C50g5j ="1 -11 19 -9 113 0 0 2 -5 -11 0 0 -4 4 5 0 0 -0 -1 -5 0 0 0 0 1" #C5.0g5j Kyouka; FuminaYuna19?
C50g5k ="1 -11 19 -9 113 0 0 2 -5 -7 0 0 -4 4 9 0 0 -0 -1 -7 0 0 0 0 3" #C5.0g5k
C50g5k1="2 -15 21 -7 113 0 0 2 -5 -13 0 0 -4 4 27 0 0 -0 -1 -11 0 0 0 0 4" #C5.0g5k1 RinaHirata
#C50g5k="1 -9 21 -7 121 0 0 2 -5 -23 0 0 -4 4 13 0 0 0 -0 -9 0 0 0 0 4" #C5.0g5k
C50g5l ="3 -7 21 -17 121 0 0 2 -5 -31 0 0 -4 4 19 0 0 0 -0 -3 0 0 0 0 4" #C5.0g5l Yukino-DDD068
C50g5l1="5 -13 21 -23 121 0 0 2 -5 -31 0 0 -4 4 21 0 0 0 -0 -11 0 0 0 0 5" #C5.0g5l1 *** SakuraOtawa_TSDS-42372
C50g5l2="5 -13 21 -23 121 0 0 2 -5 -27 0 0 -4 4 21 0 0 0 -0 -13 0 0 0 0 4" #C5.0g5l2 *** Nanoka093
C50g5l3="2 -7 21 -17 121 0 0 2 -5 -9 0 0 -4 4 7 0 0 0 -1 -9 0 0 0 0 4" #C5.0g5l3 ??
C50g5m ="1 -9 21 -7 121 0 0 2 -5 -11 0 0 -4 4 15 0 0 0 -0 -5 0 0 0 0 3" #C5.0g5m AsukaKishi_LCDV-40560 + C62a1
C50g5n ="1 -11 19 -9 113 0 0 2 -5 -17 0 0 -4 4 11 0 0 -0 -1 -5 0 0 0 0 3" #C5.0g5n ** ErikaFujm
C50g5o ="1 -11 19 -9 113 0 0 2 -4 -7 0 0 -4 4 11 0 0 -0 -1 -5 0 0 0 0 3" #C5.0g5o **
C50g5p ="1 -11 19 -9 113 0 0 2 -5 -17 0 0 -4 3 17 0 0 -0 -1 -5 0 0 0 0 5" #C5.0g5p
C50g5q ="3 -9 21 -9 113 0 0 2 -5 -13 0 0 -4 3 13 0 0 -0 -1 -11 0 0 0 0 5" #C5.0g5q NTak216
De-halo needs a low value for coefficient #4 and a high value for #3.
#1 compensates ringing/echo.
#2 compensates for wider blur.
#5 is the anchor = the original data.
#10 is the sharpener for the y direction; this continues with #15 (dehalo), #20 (wide deblur), and #25 (dering).
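The script that actually consumes these 25-coefficient strings is not included in this post, so the exact layout is defined there, not here. Purely for illustration, and under the assumption (mine, possibly wrong) that the 25 numbers form a 5x5 matrix in row-major order, here is how such a string could be parsed and applied as a normalized 5x5 filter in numpy:

```python
import numpy as np

# One of the kernel strings from the post (C50g5a).
C50g5a = "3 -15 19 -1 121 0 0 2 -4 -12 0 0 -4 4 23 0 0 0 -1 -7 0 0 0 0 1"

def parse_kernel(s: str) -> np.ndarray:
    """Parse the 25 space-separated coefficients.  Row-major 5x5 is
    an assumption made only so the numbers can be demonstrated; the
    real layout is whatever the author's script expects."""
    k = np.array([int(v) for v in s.split()], dtype=np.float64)
    assert k.size == 25
    return k.reshape(5, 5)

def apply5x5(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply a 5x5 filter with edge-replicated borders, normalized by
    the coefficient sum so a flat image keeps its brightness."""
    k = kernel / kernel.sum()
    padded = np.pad(img.astype(np.float64), 2, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(5):
        for dx in range(5):
            out += k[dy, dx] * padded[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

kernel = parse_kernel(C50g5a)
flat = np.full((12, 12), 100.0)
result = apply5x5(flat, kernel)
```

Note that the coefficients of C50g5a sum to 129, so without the normalization step a flat gray image would come out slightly brighter; the dominant value 121 (coefficient #5, the "anchor") carries most of the original data through.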
Below, I will post a bunch of further screenshot examples. Those that look blurred come from a very sick original that was already destroyed by too much MPEG-2 compression.
I use the x264 denoiser quite heavily. You might discover that some details come out better than in the original. There are a few originals among the pics; you can tell which ones, because they are 720x480, while my results have to be 960x640. (I did not find a single player on Windows that cannot play these.)