User:Reel.Deal/Sandbox
__NOTOC__
A page to keep notes (work-in-progress)

==Scripts/Plugins to add to the wiki==

===Denoising===

===Effects===
* [http://github.com/Lostech/AviSynth_Scripts DollyZoom] - an effect for applying a very simplified [http://en.wikipedia.org/wiki/Dolly_zoom Dolly Zoom] (also known as the "Vertigo" effect).
* [http://forum.doom9.org/showthread.php?t=163395 SoftWipe()] - soft-edged horizontal and vertical wipe transitions.
* [http://forum.doom9.org/showthread.php?t=163453 GradientWipe()] - luminance-map transitions.
* [https://web.archive.org/web/20160610143450/http://forum.gleitz.info/showthread.php?28932-Kameraschwenk-skripten/page2&p=277807 ZoomedTravel] - zoom-around-in-a-big-frame function.[http://forum.doom9.org/showthread.php?t=113654#post853459]

===Frame replacement===
* [http://forum.doom9.org/showthread.php?t=158677#post1656803 QQfix] - a script for fixing corrupted frames.

===Other===
*[http://forum.doom9.org/showthread.php?t=88727#post599828 ChangeColour] - lets you swap one colour for another; download [http://www.avisynth.info/?plugin=attach&pcmd=list&refer=%E3%82%A2%E3%83%BC%E3%82%AB%E3%82%A4%E3%83%96 here].
*[http://web.archive.org/web/20130212155503/http://doom10.org/index.php?topic=2181.0 Logo] - a simple script that helps you add logos to your video sources quickly and with the best possible quality. [http://www.mediafire.com/download/0xlaqhv5ag2k864/Logo10.1.zip Download]
*[http://forum.doom9.org/showthread.php?t=163636 Unipolator] - a universal frame interpolator script.
*[http://forum.doom9.org/showthread.php?t=164554&page=3#post1568520 TGMC_SVP_Test] - QTGMC() using SVP for motion analysis.
*[http://forum.doom9.org/showthread.php?t=153903#post1397650 Dehalo calculator] - halo calculator for MATLAB.
*[http://xenoveritas.github.io/AviSynth-Stuff/index.html Xenoveritas' AviSynth Stuff] - various scripts, mainly effects and utility functions.

==Missing Plugins==
*[http://forum.doom9.org/showthread.php?t=159274 ColorScreenMask] -- [http://web.archive.org/web/20130127181348/http://getoddnews.com/2011/01/20/colorscreenmask archived homepage]
*[http://forum.doom9.org/showthread.php?t=161899 Dubois] [http://forum.doom9.org/showthread.php?t=161914]
*[http://forum.doom9.org/showthread.php?t=88645&page=3#post1220183 GHRCompoundHBlur]

==AviSynth.info==
*[http://news.avisynth.info/ AviSynth news] -- [http://www.avisynth.info/?%E9%96%A2%E9%80%A3%E3%83%8B%E3%83%A5%E3%83%BC%E3%82%B9 Older news] -- [http://www.avisynth.info/?cmd=backup&action=diff&page=%E9%96%A2%E9%80%A3%E3%83%8B%E3%83%A5%E3%83%BC%E3%82%B9 Archive]

*[http://www.avisynth.info/?%E5%A4%96%E9%83%A8%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3 External filters list]
*[http://www.avisynth.info/?%E3%82%B7%E3%83%A3%E3%83%BC%E3%83%97%E3%83%BB%E3%81%BC%E3%81%8B%E3%81%97 Blurring/sharpening filters]
*[http://www.avisynth.info/?%E3%82%A4%E3%83%B3%E3%82%BF%E3%83%BC%E3%83%AC%E3%83%BC%E3%82%B9%E8%A7%A3%E9%99%A4 Deinterlacing filters]
*[http://www.avisynth.info/?%E3%83%9E%E3%82%B9%E3%82%AF Masking filters]
*[http://www.avisynth.info/?%E9%96%A2%E9%80%A3%E3%82%BD%E3%83%95%E3%83%88%E3%82%A6%E3%82%A7%E3%82%A2 Utilities]

==Development==
*[http://forum.doom9.org/showthread.php?t=163794 Filter/plugin life cycle]


==Informational Threads==
*[http://forum.doom9.org/showthread.php?t=172549 3D Game Anti-Aliasing]
*[http://forum.doom9.org/showthread.php?t=164677 Advice on resizing]
*[http://forum.doom9.org/showthread.php?t=170176 Any resizers with anti-ringing?]
*[http://forum.doom9.org/showthread.php?t=172094 Gamma aware resizing]
*[http://forum.doom9.org/showthread.php?t=145358 Non-ringing Lanczos scaling]
*[http://forum.doom9.org/showthread.php?t=145210 Spline Resize vs. Lanczos Resize]


==AviSynth Information==

===Plugins===
AviSynth uses the Windows functions [http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418%28v=vs.85%29.aspx FindFirstFile]/[http://msdn.microsoft.com/en-us/library/windows/desktop/aa364428%28v=vs.85%29.aspx FindNextFile] to search the plugins folder. For plugins it uses "*.dll" as the search string, and it appears that this also returns any file whose extension merely starts with "dll" (perhaps because files with long names or extensions also have a short name for DOS compatibility). So to stop a DLL from being loaded, change the extension to "_dll", for example, or add a further extension such as ".old". On the other hand, an .avsi can be renamed to .avsx to prevent loading, since searching for "*.avsi" only returns files with exactly that extension (because it is more than 3 characters). [http://forum.doom9.org/showthread.php?t=149193#post1320314]

====Plugin auto-loading limit?====
The current AviSynth limit is 50; that is, AviSynth can have a maximum of 50 filter DLLs/VDFs loaded at any one time (it doesn't matter how many functions each one exports). Prescanning works like this: AviSynth searches for .dll/.vdf files in the plugin directory, loads them, finds the functions each plugin provides and stores that information together with the plugin name. It does this for a maximum of 50 plugins, at which point it cannot load any more. Once all the information is stored, it unloads all the prescanned plugins. After that, AviSynth does <tt>[[AVSI|.avsi]]</tt> file loading.

An important point to remember is that any filters loaded with <tt>LoadPlugin()</tt> (even if that call is in an <tt>[[AVSI|avsi]]</tt> file) will not be unloaded even when they are not actually required; they stay loaded and take up one of the 50 available slots.

Now, when you open a script and AviSynth finds that it needs to invoke function x, it first searches for the required function in the currently loaded DLLs. If it doesn't find it there, it searches the prescanned DLLs (which have been unloaded). If it finds it, it attempts to load the needed DLL (which will fail if 50 plugins are already loaded). If it still doesn't find the function, it searches the internal function list. [http://forum.doom9.org/showthread.php?t=85912#post800876]
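
If you are near the limit, one way around it is to keep rarely used plugins outside the autoload folder and load only what a given script needs. A minimal sketch (the paths below are only examples):
 # explicit loading instead of relying on autoloading
 LoadPlugin("C:\AviSynth\extra-plugins\mvtools2.dll")
 LoadPlugin("C:\AviSynth\extra-plugins\masktools2.dll")
 Import("C:\AviSynth\extra-scripts\QTGMC.avsi")   # .avsi kept outside the plugin folder
Anything loaded this way still occupies one of the 50 slots, as noted above; the gain is only that unused plugins are never touched.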

===AssumeTFF/AssumeBFF===
It's not necessary when the field order of the input material is correctly flagged and reported by the source filter (e.g. MPEG-2 via DGDecode's mpeg2source).
It is necessary when the source material is not flagged, is falsely flagged, and/or the source filter doesn't report the field order (e.g. AviSource).[http://forum.doom9.org/showthread.php?t=155458#post1414627]

DirectShowSource does not hand the source's actual field order over to AviSynth. If AviSynth doesn't have explicit field order information, it defaults to BFF. Yadif picks up the field order provided by AviSynth. Since your source is TFF, that's where things go wrong.[http://forum.doom9.org/showthread.php?t=139102&page=3#post1159097]
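
A minimal sketch of the above (the file name and field order are assumptions for the example): declare the field order yourself whenever the source filter cannot report it, before any field-based filtering.
 AviSource("capture.avi")   # AviSource does not report the field order
 AssumeTFF()                # this particular capture happens to be top field first
 QTGMC(Preset="Medium")     # the deinterlacer now sees the correct field order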

===YV12 Cropping===
Progressive YV12 shouldn't be cropped or have borders added by odd numbers vertically.
For interlaced YV12, that "shouldn't" becomes MUST NOT. Never, never, never ever. If you do, you're changing the chroma phase, i.e. luma and chroma are no longer temporally aligned.[http://forum.doom9.org/showthread.php?t=132310&page=15#post1108087]

It's a basic technical requirement that the top border of interlaced YV12 may only be altered in mod-4 steps.[http://forum.doom9.org/showthread.php?t=132310&page=15#post1110128]
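
For example (a sketch, with values chosen only to illustrate the rule): keep the vertical crop amounts mod 4 on interlaced YV12, and at least even on progressive YV12.
 # interlaced YV12: 4 lines off the top (mod 4 as required), 8 off the bottom
 Crop(0, 4, 0, -8)
 # Crop(0, 2, 0, -6) on the same interlaced clip would shift the chroma phase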

===Resizers===
The cropping parameters of the resizer engine are floating point numbers. This allows the resizers to do subpixel shifting of the resultant image. As a consequence of this design the cropping is not a hard crop at the boundary but more an edge limit for the resampler centre point (there is separate hard limiting at the picture edge).[http://forum.doom9.org/showthread.php?t=91630#post627396]

*For cropping off hard artifacts like VHS head noise or letterbox borders, always use [[Crop]]. The crop command defines a hard edge of the image; former pixels beyond that edge no longer exist.

*For extracting a portion of an image while maintaining accurate edge resampling, use the resizers' cropping parameters. The resizer cropping defines the centre point of the edge. Pixels to the left and right of that centre point will be used to calculate the new output pixel. Only if the hard edge of the input image is encountered will sampling be constrained.

The optional 4 subpixel crop parameters of the resizers apply to the input image size.[http://forum.doom9.org/showthread.php?t=113654#post854824]
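
To illustrate the difference (a sketch with made-up numbers, assuming a 720x576 source): use Crop for junk that must not bleed into the resampling, and the resizer's own crop parameters when the cropped-off pixels are legitimate picture content.
 # hard crop: 8 lines of VHS head-switching noise must not contaminate the resize
 Crop(0, 0, 0, -8)
 Spline36Resize(640, 480)

 # windowed crop: extract a 704x564 window, but let pixels just outside it
 # still contribute to the edge resampling
 Spline36Resize(640, 480, 8.0, 4.0, 704.0, 564.0)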

Note!
<code>...resize(1440,1080,1,1,-1,-1)</code>
versus
<code>...resize(1442,1082).crop(1,1,-1,-1)</code>
are not exactly the same. The boundary conditions in the resizer are different: in the first case the edge row of pixels is not used in the output image, in the second it is. A very minor point.

Also, the cropping factors on the resizers are floating point numbers, so you can get subpixel adjustment if required.

And as your resize percentage change is very small, you might experiment with the taps=N option of Lanczos.[http://forum.doom9.org/showthread.php?t=114592#post862241]

A trick worth mentioning, in regard to the hard cropping done by Crop and the windowed cropping done by the resizers, is to use the resizer to do the resampling and cropping but with mod-N extra guard pixels that are subsequently hard-cropped off.[http://forum.doom9.org/showthread.php?t=166780#post1607735]
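
A sketch of that guard-pixel trick for a 1920x1080 to 1280x720 downscale (the numbers are only an example): the resize is done 8 output pixels larger on each side, sampling 12 extra source pixels per side (the 2/3 scale factor), and the guard band is then hard-cropped off.
 # 2/3 downscale with an 8-pixel guard band on every side
 Spline36Resize(1296, 736, -12.0, -12.0, 1944.0, 1104.0)
 Crop(8, 8, -8, -8)   # back to exactly 1280x720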

AviSynth resizers maintain the image centre point rather than maintaining the top-left position.[http://forum.doom9.org/showthread.php?t=162286&page=2#post1522689]

Given subrange_height == target_height and subrange_width == target_width, you don't want any actual resizing, just a pure 0.5-pixel shift. So try either BilinearResize() or BicubicResize(). These involve the minimum number of input pixels per output pixel. If you involve too many input pixels, as with Lanczos or Spline36, you may introduce ringing artefacts. With Bicubic you may want to try even softer b and c values than the default 1/3, 1/3.[http://forum.doom9.org/showthread.php?t=156671#post1433522]

Fewer processed pixels, more blurring. You should really try both a high-tap shift and a low-tap shift and see what does better on your source. If your input is already blurry, for whatever reason, you should probably use a higher-tap filter. However, if you have something really sharp like flash animation, IanB is likely correct.[http://forum.doom9.org/showthread.php?t=156671#post1433526]
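
A sketch of such a pure half-pixel shift (no size change; the 0.5/0.5 offsets and the b/c values are only an example, assuming a progressive clip):
 # shift the image half a pixel right and down without resizing;
 # b=0.6, c=0.2 is softer than the default 1/3, 1/3, per the note above
 BicubicResize(Width(), Height(), b=0.6, c=0.2, src_left=0.5, src_top=0.5, src_width=Width(), src_height=Height())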
<br>
<br>

==QTGMC Notes==
Notes from the QTGMC thread.

*[http://forum.doom9.org/showthread.php?t=156028&page=51#post1545044 Page 51]
''Does it take two fields into consideration or only one?''

At max settings, up to fourteen fields for each output frame.
Don't compare QTGMC (its workflow) with that of other deinterlacers. It works differently, it IS different. Usual deinterlacers go "to weave, or not to weave, that's the question". QTGMC basically is a motion-compensated temporal super-resolution filter.

*[http://forum.doom9.org/showthread.php?t=156028&page=52#post1545055 Page 52]
Yes, QTGMC does "take both fields into account" (and more) even if you keep single rate. Every output frame is constructed from a range of neighboring fields, as Didée has noted. By default the current field is interpolated into a full frame, then the two fields before and after (interpolated + motion compensated) are combined into the frame in such a way as to remove bob-shimmer. This temporal processing also enhances detail to some degree and has some noise-reducing effect.
So all source data will have been used in your output, even after a SelectEven(). The result will be primarily based on the even fields of course, but the neighboring ("thrown away") fields will have had an influence too.

*[http://forum.doom9.org/showthread.php?t=156028&page=52#post1547688 Page 52]
Temporal smoothing is used to remove bob-shimmer, but we don't want large areas of motion blur, so the Rep0/1/2 settings limit the amount of change that the temporal smoothing is allowed to make. Higher values for Rep0/1/2 allow larger areas of change from the smoothing, but (counter-intuitively) 0 switches off the limiting completely and so allows all changes through. You're right that there is a code path for Rep0/1/2 = 0 that is not used. However, that code path would only allow 2-pixel-high areas of change, and much bob-shimmer covers a larger area than that. It would be getting close to doing no temporal smoothing at all, similar to TR0/1/2=0, and would be especially bad on stationary detail. Having said that, it does seem a little odd not to allow that code path even if it is not the most useful.

Why is 2 pixels not enough? Consider a stationary single-pixel-high horizontal line, positioned such that it appears only in the even fields. The bob will expand that to a 3-pixel-high line, and clearly it will be a cause of major bob-shimmer, flickering on and off. When temporally smoothed, the now 3-pixel-high line is softened in the even frames and appears in the odd frames. Bob-shimmer removed - by a 3-pixel-high area of change, which would be removed if you followed the Rep0=0 code path... I could make that clearer with a diagram, but I hope you get the idea...

[I note that my comments on that function need an update for precision: the two vertical in/expands allow through areas of change up to 4 pixels high; the in/deflate and RemoveGrain are not limited vertically, so they also perform some measure of mask clean-up.]

As I'm sure you're aware, the epsilon is only there to trigger the GaussResize to do its blur even though we're not actually resizing. I'm sure it doesn't matter much where it is placed, given the tiny value and the fact it is being used on a huge blur anyway. I do wonder if there's a more efficient way to do a Gaussian nowadays, or something similar.

*[http://forum.doom9.org/showthread.php?t=156028&page=54#post1561135 Page 54]
''Does QTGMC just deinterlace with the presets or does it also do sharpening, denoising, noise stabilizing, etc.?''

You will get a little of all three if you provide only a preset and no other settings. QTGMC is not really trying to be a sharpener or denoiser; this happens mainly as a byproduct of the processing used to avoid shimmer. The light temporal smoothing / denoising improves compressibility, and the sharpening provides some detail enhancement. Despite the fact that this moves the result away from the source, most people like those effects.

The impact on noise is often very minor, and you may not care to do anything about it, especially since it will involve more processing. However, some people do take the sharpness down, and that's a free operation, e.g. Sharpness=0.4 or 0.7.

However, if you really want something that's *very* close to the source, then Boulder's suggestion above is the right way to go about it. Although his MatchPreset choice is very high and will slow it down - I would usually leave that out and set an explicit Preset for clarity:
 QTGMC(Preset="Slower", SourceMatch=2, Lossless=2, EZKeepGrain=0.5, Sharpness=0.1)

SourceMatch specifically tries to make the deinterlace as "lossless" as possible without introducing shimmer. EZKeepGrain helps preserve the noise from the original. These are not default settings because I think most people want the slight denoise/enhance. Also these settings are slower - you can speed up the Preset a little without any major loss. I would strongly suggest an MT setup (see the first post).
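
For reference, a minimal MT sketch (this assumes AviSynth+ with multithreading; older AviSynth 2.6 MT builds use SetMTMode() instead, and the thread count is only an example):
 SetFilterMTMode("QTGMC", MT_MULTI_INSTANCE)
 AviSource("input.avi")
 AssumeTFF()
 QTGMC(Preset="Slower", SourceMatch=2, Lossless=2, EZKeepGrain=0.5, Sharpness=0.1)
 Prefetch(4)   # number of worker threads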

*[http://forum.doom9.org/showthread.php?t=156028&page=61#post1573273 Page 61]
''Why do static titles flicker after QTGMC?''

Short answer:
Try adding Rep1=4, Rep2=0 to your settings. This might fix your problem - it might rarely add a tiny bit of motion blur (hard to notice).

Long answer:
The core operation of TGMC is to blend 50% of the current frame with 25% each of the previous & next frames (motion-compensated). That removes all bob-shimmer and helps define the missing field lines. However, it also introduces motion blur where the motion analysis is incorrect. So there is a repair step that only allows changes that affect thin horizontal areas - because bob-shimmer normally only affects thin horizontal areas. Occasionally there is shimmer that covers a wider area, especially on static detailed things such as text. That shimmer gets through because fixing shimmer in larger areas would potentially create motion blur elsewhere.

There are settings controlling the shimmer repair step: Rep0, Rep1 and Rep2. Rep0 improves the motion search clip only, so that isn't so relevant here. Rep1 and Rep2 are alternative ways to repair the output; you set them to a value from 1 to 5 to control the repair strength (it's a bit more complicated, but that's the basic idea). The higher you set the value, the more shimmer is removed, but with the possibility that some motion blur might creep through. Rep1 has a stronger effect than Rep2, but again might let more motion blur through. The defaults are Rep1=0, Rep2=4. I suggest you switch the 4 to the stronger Rep1 and see if that works.

You might wonder why TGMC doesn't just mask static areas and leave them untouched to avoid all this complexity. You can try it yourself:
 qtgmc = QTGMC()
 dw = DoubleWeave()
 mask = mt_lutxy( dw, dw.SelectEvery(1,1), "x y - abs", U=1,V=1 ).mt_expand(U=1,V=1).mt_binarize(0, U=1,V=1)
 mt_merge( dw, qtgmc, mask, luma=true )

That simple script leaves any pixel untouched if it and its 8 neighbors don't change over the nearby fields (it could be made more robust by including chroma or more complex masking). It might fix your problem. However, if there is any tiny change within your "static" text pixels - even a change of 1 in luma - then you'd need to add a threshold. You can change the 0 in the mt_binarize to 1 or 2 to allow slight dissimilarities. But that will start to cause problems in normal footage: occasional pixels will be identified as "static" and will be processed differently to their neighbors - artefacts would show up (in fact, in rare cases artefacts can show up even with the script as I've written it).

It's very easy to create discontinuities by naive masking during deinterlacing; different algorithms often don't match up perfectly. You see this in other deinterlacers: "this part is combed so do A, this part is not combed so do B". We see the discontinuity between A and B. Softened masks help but blur detail.

On a side note, the other problem with static detail in (Q)TGMC is that it loses too much vertical detail compared to other deinterlacers. Source match was specifically designed to greatly improve static detail. Sadly, though, it doesn't affect these minor shimmer issues.

*[http://forum.doom9.org/showthread.php?t=156028&page=62#post1577046 Page 62]
''What are the best settings for QTGMC?''

Everyone's opinion about "best" is different. Every source has different "best" settings. Don't ask for "best", find out what's best for yourself.

This is one place to start if you don't care about speed:
 QTGMC(Preset="Very Slow", SourceMatch=2, Lossless=2, EZKeepGrain=0.5, Sharpness=0.1, Sbb=0)

But that's for good-quality footage that has fine detail; noisy VHS tapes probably aren't worth that kind of processing - but you asked...
You will likely want to change that EZKeepGrain value (increase it to keep more noise, decrease it if you don't care about preserving noise; change it to EZDenoise and increase the value if you want QTGMC to denoise for you).


==Dancing Grain==
Some more thoughts on "dancing" grain.

This sort of "dancing" usually isn't a property of the original grain, at least for the most part. Grain in itself usually is a high-frequency distortion only. The "dancing" effect is introduced by lossy DCT-based compressors, where in the lossy compression process some error is introduced into the low-frequency parts, caused mainly by the high-frequency parts.[http://forum.doom9.org/showthread.php?t=132310&page=2#post1073349]

One step further, something in this direction often is very useful for subsequent motion-compensated denoising. If the flicker is left in, it will eventually also disturb the ME engine (making the vectors follow the flicker, causing spatial shifts where in fact there should be none), which will lower the benefit one can get from MC-NR. When the flicker is taken out before the motion search, the chances of getting cleaner vectors are better.[http://forum.doom9.org/showthread.php?t=132310&page=2#post1073668]

It's still the basic method of using a pre-filter before doing the motion search. This can be done in several different ways, and this was just one of them. E.g., if one is using a pure spatial prefilter, the effect is the opposite: it will take out the high frequencies but leave the flicker mostly intact, therefore still irritating the ME engine.
Fact is, with strong grain there is so much uncertainty at the pixel level that there is hardly any "this is the right way to do it". There are plenty of different possible points to break into the circle of catch-22 ... but there's no "correct" one.[http://forum.doom9.org/showthread.php?t=132310&page=3#post1073717]

Truth is, this kind of "flickering of low spatial frequencies" is one of the ultimate foes, because right here is where the nebula of uncertainty becomes thick:
:*a) without mocomp, you can't know if it's flicker or motion
:*b) with mocomp, you can't know if the mocomp has been misled by the flicker
:*c) with prefiltering before the motion search, you can't know if the prefilter has mangled moving areas (because of a)) and consequently has misled the motion search
Chicken-and-egg problem, without any definite solution.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1471841]


==MVTools2 Notes==

===Prefilter===
The prefilter should be able to cut down the grain almost completely. Some loss of detail is nothing to worry about at this stage. Then, <code>thSAD</code> should not be increased, but instead decreased from the default 400.
Using a prefilter together with such a high <code>thSAD</code> is guaranteed to introduce artifacts in areas where MC fails.[http://forum.doom9.org/showthread.php?t=132310&page=2#post1073104]
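
A sketch of the usual pattern (the filter choices and the thSAD value are only examples): build the motion-search clip from a prefiltered copy, but degrain the untouched source with a moderate thSAD.
 o   = last
 pre = o.RemoveGrain(11).RemoveGrain(11)        # crude prefilter, used for the search only
 sup_search = pre.MSuper(pel=2)
 sup_render = o.MSuper(pel=2, levels=1)         # compensation uses the original pixels
 bv = sup_search.MAnalyse(isb=true,  delta=1)
 fv = sup_search.MAnalyse(isb=false, delta=1)
 o.MDegrain1(sup_render, bv, fv, thSAD=200)     # decreased from the default 400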

Whether pre-denoising is needed at all depends on the strength of the grain. I see a slight danger that this sort of processing now gets thrown at all kinds of sources, even those that don't need such a processing method.
Prefiltering might work out for the most part, but there'll be cases where it bites you back. Dumb filters can destroy the motion of rather smooth regions (e.g. a close-up of a smooth face + head movement) strongly enough that MVTools won't recognize the motion anymore.
If the source has strong grain, the pre-filtering should be barely strong enough to make static areas calm. Which filter that is, or could be, depends on the source.[http://forum.doom9.org/showthread.php?t=132310&page=14#post1106283]

For '''low''' noise you need exactly '''no prefiltering at all'''. Use <tt>MDegrainX</tt> directly as stated in [[MVTools|MVTools']] documentation.[http://forum.doom9.org/showthread.php?t=132310&page=14#post1106417]

Seeing the source is LOTR, I'd say the prefiltering is MUCH too strong. LOTR is rather clean with only a little noise; no need to break a fly on a wheel. The current prefiltering will nuke out enough content to make the motion search worse than it could be. Sometimes less is simply more.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1474023]

*This should be a "simple" but effective searchclip pre-processing for such rather clean sources[http://forum.doom9.org/showthread.php?t=132310&page=17#post1474191] - more info here[http://forum.doom9.org/showthread.php?t=132310&page=18#post1474495]:
 MinBlur(1)
 FluxSmoothT().Merge(last,0.251)
 sbr()

If you use a nicely sharp & high-contrast clip as the searchclip, with "default operation as per documentation" you'll end up with pretty big SADs wherever there's an edge. That means little to nothing will happen on edges, which is quite counterproductive when the goal is to calm (the effect of) a sharpener.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1474449]

For "dancing grain" (aka low-frequency flicker) prefiltering, the following script will remove the low-frequency flicker, leaving the high frequencies intact[http://forum.doom9.org/showthread.php?t=132310&page=2#post1073349]:
 o = last
 f = o.MinBlur(1,2).MinBlur(2,2).RemoveGrain(11,-1)
 f.FluxSmoothT(7).mt_AddDiff(mt_MakeDiff(o,f,U=2,V=2),U=4,V=4)
 # eventually, limit the maximum pixel change to +/- 2 :
 # mt_LutXY(o,last,"x 2 + y < x 2 + x 2 - y > x 2 - y ? ?",U=2,V=2)
As a result there will be almost no smoothing, and the grain basically is fully preserved. It's just the flicker, or "dancing" effect, that will be removed.
As a side effect, there might occur some slight toning-down of shadings when there is motion. One can definitely see it in single-frame comparisons by flipping between original and processed.

One step further, something in this direction often is very useful for subsequent motion-compensated denoising. If the flicker is left in, it will eventually also disturb the ME engine (making the vectors follow the flicker, causing spatial shifts where in fact there should be none), which will lower the benefit one can get from MC-NR. When the flicker is taken out before the motion search, the chances of getting cleaner vectors are better.[http://forum.doom9.org/showthread.php?t=132310&page=2#post1073668]

*Somewhere earlier in this thread I had posted a [http://forum.doom9.org/showthread.php?t=132310&page=2#post1073349 pre-calm script (with MinBlur() and FluxSmooth)] - in essence the same as the following script. Depending on the source characteristics, using Flux5framesT instead of simple FluxSmooth within such a pre-calmer can make sense.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1472637] The basic idea is to combine a spatial and a spatio-temporal filter, so that the spatio-temporal filter does not do what the spatial filter would do, or something similar. Here's a mini-script in the spirit of the original idea (keep the temporal filter from acting on the bits that a spatial filter would act on):[http://forum.doom9.org/showthread.php?t=159109#post1471926]
 a = last
 b = a.RemoveGrain(11)
 f = b.FluxSmoothT().merge(b,0.49)
 a.mt_makediff(mt_makediff(b,f,U=3,V=3),U=3,V=3)


==MaskTools2 Notes==
1) In MaskTools2, does Round(1.5) equal 2 or 1?
:*It should follow the usual convention for rounding: .5 is rounded upwards.

2) Why, in mt_makediff with identical clips, is the difference 128 and not 0 as in normal maths? In general I don't understand the meaning of 128 well.
:*A pixel can't have a negative value, only 0-255. In order to handle "negative" differences, the range -127..0..127 is offset to 0..128..255.[http://forum.doom9.org/showthread.php?t=132310&page=16#post1128216]

*Some info here: [http://forum.doom9.org/showthread.php?t=104701&page=24#post1066089]
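
A tiny sketch of that 128 offset (the Blur() here is only an arbitrary second clip for the example): mt_makediff() stores "first minus second, plus 128", and mt_adddiff() undoes it.
 a = last
 b = a.Blur(1.0)
 d = mt_makediff(a, b, U=3, V=3)   # identical inputs would give a flat grey 128 clip
 mt_adddiff(b, d, U=3, V=3)        # b + (a - b) = (near enough) a again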


==Other Plugins/Scripts==
===Deblock_QED===
Prior to deblocking: NO resizing. NO noise filtering. Cropping only at macroblock boundaries.[http://forum.doom9.org/showthread.php?t=104701&page=24#post1061491]

<code>SeparateFields().DeBlock_QED().Weave()</code> - that's a '''BAD way''' of deblocking interlaced footage: 50% of all possible boundaries between vertically neighbouring blocks are NOT deblocked this way![http://forum.doom9.org/showthread.php?t=82264&page=45#post934083]

The only correct way for interlaced sources is (alas):
 SeparateFields().PointResize(width,height)
 Deblock_QED().AssumeFrameBased()
 SeparateFields().SelectEvery(4,0,3).Weave()

Originally mentioned [http://forum.doom9.org/showthread.php?t=131198#post1059181 here] and updated [http://forum.doom9.org/showthread.php?t=136601&page=2#post1185608 here]; another variation [http://forum.doom9.org/showthread.php?t=165297] - [http://forum.videohelp.com/threads/342699-Neatvideo-strange-behavior/page2]

You use a chain of three deblockers: DGDecode deblocking, then Deblock_QED(), then Deblock(). That's pretty much pointless. Deblocking filters need to "recognize" blocking. It is likely that one deblocker - although perhaps not acting efficiently enough to look pleasant - alters the content by so much that the following deblockers *cannot* recognize the blocks anymore, and therefore they become ineffective.

To make a reasonable start, you should find & use one deblocker that removes all (or at least a major part) of the blocking. After having found that, you can make your way to adding more stuff.[http://forum.doom9.org/showthread.php?t=132902#post1077930]

Using Deblock_QED on a blocky source can potentially improve motion estimation.[http://forum.doom9.org/showthread.php?t=128977#post1035263]

Deblock_QED works with a fixed 8x8 grid mask. But if you execute UVtoY(), then you get a half-size frame, where the smallest possible block size is 4x4. Thus Deblock_QED will potentially miss half of all blocking if you do it that way.[http://forum.doom9.org/showthread.php?t=121082#post942757]
<br>
<br>

===FluxSmooth===
FluxSmooth could be approximated with a combination of Clense and TemporalSoften, or it can also be built from a 3-fold mt_logic() combination.[http://forum.doom9.org/showthread.php?t=132310&page=16#post1412286]
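
A rough sketch of the Clense + TemporalSoften idea (an approximation of FluxSmoothT with a very large threshold, not an exact rebuild; Clense comes from RemoveGrain/RgTools):
 o   = last
 med = o.Clense()                          # temporal median: only changes pixels that over/undershoot both neighbours
 avg = o.TemporalSoften(1, 255, 255, 0, 2)
 # where the median would touch a pixel, take the softened value instead; otherwise keep the original
 mt_lutxyz(o, med, avg, "x y - abs 0 > z x ?", U=3, V=3)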

If FluxSmooth is set up more aggressively (i.e. a bigger threshold), then it'll do more good where Flux is doing right, and more harm where Flux is doing wrong. Remember FluxSmooth is a simple temporal smoother with a median-like decision on where to filter and where not.

Examples:

:*a) a pixel sequence: ... 80 81 85 79 80 ... FluxSmooth will filter the "85" and the "79", because these two pixels are overshooting both of their neighbors.
:*b) a pixel sequence: ... 80 81 85 85 81 80 ... FluxSmooth will filter *nothing*, because no pixel satisfies the "overshooting both neighbors" criterion.

For case b), this means:

:*IF those two "85" are due to motion, then FluxSmooth has done the correct thing.
:*But IF those two "85" in fact are related to "flicker" in a "flat" and/or "static" area, then FluxSmooth has not filtered something that you would like to have filtered.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1471841]

====FluxSmoothT====
*What is the max value for FluxSmoothT (temporal)?
:Maximum is 255. Perhaps more, but in 8-bit sources pixel differences cannot be larger than 255 anyway. Of course, with such a big threshold there will appear motion artifacts.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1474191]

====Flux5framesT====
Related - I've thought several times about whether and how the "FluxSmooth principle" could be extended from the current 3-frame temporal window to a 5-frame temporal window.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1471858]

A reasonable approach would be this:
*calculate a temporal median with radius=2
*calculate a temporal soften with radius=2
*for each pixel, use the result that caused the smaller difference

Of course, Flux5framesT is *not* safe in regard to weak shadings in moving areas. Vanilla FluxSmooth is not safe, and Flux5framesT is even less so. Well, you can't expect anything else from a simple, thresholded temporal smoother. It's a compromise the user needs to balance out.[http://forum.doom9.org/showthread.php?t=132310&page=17#post1472648]

Depending on the source characteristics, using Flux5framesT instead of simple FluxSmooth within such a pre-calmer can make sense.

===MedianBlur===
MedianBlur can be done via mt_luts(). MedianBlurT is not usable if radius>2.[http://forum.doom9.org/showthread.php?t=132310&page=16#post1412286]

===ML3Dex===
The exact operation of ML3Dex isn't fully clear to me (I've been a bit lazy when looking through that PDF) ... however, in practice it doesn't impress me too much. The temporal artefacts in motion areas (resp. areas with erroneous motion compensation) are pretty much the same as those of a plain temporal median, so there's no benefit in that respect. In areas without motion (resp. areas with correct motion compensation), it does remove a few more signal spikes, no matter whether they are noise or detail.[http://forum.doom9.org/showthread.php?t=132310&page=16#post1128216]


==Interlacing==
===HD 1080i to DVD (all interlaced) - how?===
'''NOT COMPLETE''' - need to add all useful information from this thread: http://forum.doom9.org/showthread.php?t=139102

Interlaced resizing is fast, but you pay a price for generating each new field based only on the original field. Any new pixel spatially between 2 original field lines will effectively be a weighted average of only the pixels above and below in that field, i.e. a blur. Effectively, all the pixels in the new fields are vertically blurred slightly.

Using the SmartBob/Resize/ReInterlace method, although slower, will give vastly superior results in static areas because each new field can be based on a full frame. In static areas there is no "spatially between 2 original field lines"; those new pixels are rendered from complete frame data, i.e. no blur in static areas.

Of course, in motion areas any difference can be attributed to how well the SmartBob interpolates the missing pixels. If using linear interpolators like those in KernelBob or DGBob, there will be no difference to interlaced resizing, i.e. a blur again. If using edge-directed and/or motion-compensated interpolators, then the results can be a significant step up from plain interlaced resizing.

And apart from everything else, the eye has trouble seeing blurring of things in high motion; it attributes motion to the blur, instead of blur to the blur. So it is a little unfair to look at individual fields on a PC screen; you really should evaluate the results on an interlaced display device at normal speed.[http://forum.doom9.org/showthread.php?t=139102&page=2#post1154808]
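
A sketch of that SmartBob/Resize/ReInterlace chain for 1080i to 480i (QTGMC standing in as the "smart bob"; the TFF assumption and the resizer choice are only examples):
 AssumeTFF()
 QTGMC(Preset="Fast", FPSDivisor=1)              # double-rate bob: every field becomes a full frame
 Spline36Resize(720, 480)
 AssumeTFF()
 SeparateFields().SelectEvery(4, 0, 3).Weave()   # re-interlace back to the original field rate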

Going 1080i -> 480i means you're going from 100% image area down to 16.6% image area; or, the other way round, you're losing 83.3% of the image area in the process. Still, you want to use golden nails to hammer some planks together? The vast majority of the intermediate improvement (at the 1080p stage) will be lost again by the time you've reached 480i. A plain and fast Bob() does pretty well for that task ...[http://forum.doom9.org/showthread.php?t=139102&page=3#post1158826]

Low-ringing (and low-detail) lowpass for 1080i->480i:[http://forum.doom9.org/showthread.php?t=139102&page=6#post1174068]
:<code>mt_convolution(horizontal=" 255 ", vertical=" -0.00457430142084469586164799888191 -0.91092031121042564306650907803944 -2.7215434011820571965496188952936 -4.2381040109875854130339774799147 -2.7739456768086984932442890697262 4.556137386140445570028490752454 18.505136047840382914953022942635 36.000435907859456703965425655238 50.797650942298968076309880259519 56.609999970907811068675436793984 50.797650942298968076309880259519 36.000435907859456703965425655238 18.505136047840382914953022942635 4.556137386140445570028490752454 -2.7739456768086984932442890697262 -4.2381040109875854130339774799147 -2.7215434011820571965496188952936 -0.91092031121042564306650907803944 -0.00457430142084469586164799888191 ", u=3, v=3)</code>
It does lowpass between 120 and 240 lines, but because it is low-ringing and has a low number of sample points (relatively speaking - you need more and more sample points the more frequencies you eliminate), it loses a lot of frequency amplitude all the way down to around 60 lines. This filter is a simple Chebyshev-windowed sinc FIR filter.[http://forum.doom9.org/showthread.php?t=139102&page=7#post1174727]

And one last point: if a ConvertToYV12() is required, where should it go?

I would recommend straight after the Bob and before the Resize. There is a whole raft of discussion about chroma positioning with interlaced 4:2:0 material. The summary is that the chroma is positioned the same for both interlaced and progressive, but with interlaced, alternate lines are temporally distinct. This means that for a static scene there is no difference between progressive and interlaced chroma. See these threads for the gory details: [http://forum.doom9.org/showthread.php?t=129182 AutoYUY2() updated] and [http://forum.doom9.org/showthread.php?t=97987 Adaptive chroma upsampling]. [http://forum.doom9.org/showthread.php?t=139102&page=2#post1155110]

==VirtualDub Plugins==
*[https://web.archive.org/web/20061023052458/http://dsp.ucsd.edu/~wgardner/VirtualDub.htm CACorrect] - this filter allows you to radially scale (i.e. zoom) the R, G, and B channels of a video stream with independent scale factors. This can be used to correct radial chromatic aberration. See [http://www.dvinfo.net/forum/general-hd-720-1080-acquisition/61274-free-chromatic-aberration-correction-software.html discussion] - download: <tt>[http://web.archive.org/web/20100921231258/http://www.wrgardner.com/CaCorrect.vdf CaCorrect.vdf]</tt>

*[https://web.archive.org/web/20200129202208/http://www.celestial-spells.com:80/en/logs/2012/04/kagayaki.php Kagayaki filter] - a twinkling soft-focus and cross filter for VirtualDub, meant to enhance the mood of starry and night footage. It works nicely for starry time-lapses, night city scenes and fireworks, and also suits still astrophotographs used in movies. It can work as a general-purpose soft-focus filter too, not limited to night footage.

*[http://members.chello.at/nagiller/vdub/downloads.html Gradation Curves Filter] - this filter can be used to edit gradation curves, similar to the curves function of painting programs. See the [http://members.chello.at/nagiller/vdub/tutorial/tutorial.html tutorial] and [http://members.chello.at/nagiller/vdub/readme.html readme].

*[https://web.archive.org/web/20160407174859/http://home.earthlink.net/~tacosalad/video/dotcrawl.html DotCrawl Comb Filter] - removes composite video artifacts: false colors and hanging dots. [https://web.archive.org/web/20110912224921/http://home.earthlink.net/~tacosalad/video/ Scott Elliott's (aka 'tacosalad') homepage].

====Plugins List====
*[http://acobw.narod.ru/ http://acobw.narod.ru/]
*[http://rationalqm.us/mine.html#virtualdub http://rationalqm.us/mine.html#virtualdub] - [http://web.archive.org/web/20111231073704/http://neuron2.net/hosted.html http://neuron2.net/hosted.html]
*[http://www.infognition.com/VirtualDubFilters/ http://www.infognition.com/VirtualDubFilters/]
*[http://www.hlinke.de/dokuwiki/doku.php?id=en:virtualdub_pluginlist http://www.hlinke.de/dokuwiki/doku.php?id=en:virtualdub_pluginlist]

==Defunct Forums==
*[http://web.archive.org/web/20070422170504/http://neuron2.net/board/ Neuron2's Video Processing Forum] - [http://web.archive.org/web/20071013175106/http://neuron2.net/board/index.php]
*[http://web.archive.org/web/20140208015017/http://doom10.org/ Doom10 Forum: Digital Video Discussion]
Latest revision as of 00:46, 22 May 2021
A page to keep notes (work-in-progress)
[edit] Scripts/Plugins to add to the wiki
[edit] Denoising
[edit] Effects
- DollyZoom - an effect for applying a very simplified Dolly Zoom (also known as "Vertigo" effect).
- SoftWipe() - Soft-edged horizontal and vertical wipe transitions
- GradientWipe() - Luminance Map Transitions
- ZoomedTravel - Zoom-around-in-a-big-frame function.[1]
[edit] Frame replacement
- QQfix - a script for fixing corrupted frames.
[edit] Other
- ChangeColour - lets you swap one colour for another; download here.
- Logo - a simple script that will help you add logos to your video sources in the easiest, fastest and best quality methods. Download
- Unipolator - an universal frame interpolator script
- TGMC_SVP_Test - QTGMC() using SVP for motion analysis.
- Dehalo calculator - halo calculator for MATLAB.
- Xenoveritas' AviSynth Stuff - various scripts, mainly effects and utility functions.
[edit] Missing Plugins
[edit] AviSynth.info
- AviSynth news -- Older news -- Archive
[edit] Development
[edit] Informational Threads
- 3D Game Anti-Aliasing
- Advice on resizing
- Any resizers with anti-ringing?
- Gamma aware resizing
- Non-ringing Lanczos scaling
- Spline Resize vs. Lanczos Resize
[edit] AviSynth Information
[edit] Plugins
AviSynth uses the Windows functions FindFirstFile/FindNextFile to search the plugins folder. For plugins, it uses "*.dll" as the search string and it appears that this also returns any file whose extension starts with "dll" (perhaps because files with long names or extensions also have a short name for DOS compatibility). So to stop a dll from being loaded, change the extension to "_dll", for example, or add a further extension like ".old". OTOH, .avsi can be renamed to .avsx to prevent loading, since searching for "*.avsi" only returns files with exactly that extension (because it's more than 3 characters). [3]
[edit] Plugin auto-loading limit?
The current AviSynth limit is 50, that is, AviSynth can have a maximum of 50 filter dll's/.vfds (it doesn't matter how many functions the dll has) loaded at any one time. How prescanning works is it searches for .dlls/.vdf's in the plugin directory, loads them, finds any functions in the plugin and stores that information+name. It will do this for a maximum of 50 at which point it can't load anymore. Once it has all the info stored it then unloads all the prescanned plugins. After that, AviSynth does .avsi file loading.
An important point to remember is that any filters loaded with LoadPlugin() (even if they are in an avsi file) will not be unloaded if they are not actually required, that means that they stay loaded taking up one of the 50 available slots.
Now, when you attempt to open a script and AviSynth finds that it needs to invoke function x it first searches for the required function in currently loaded dlls. If doesn't find it it searches for it in prescanned dlls (which have been unloaded). If it finds it it attempts to load the needed dll (which if there are already 50 plugins loaded will fail). If it still doesn't find the function it searches the internal function list. [4]
[edit] AssumeTFF/AssumeBFF
It's not necessary when the field order of the input material is correctly flagged, and reported by the source filter (e.g.: Mpeg-2 via DGDecode/mpeg2source). It is necessary when the source material is not flagged, falsely flagged, and/or the source filter doesn't report the field order. (e.g.: AviSource).[5]
DirectShowSource does not hand over the source's actual field order to Avisynth. If Avisynth doesn't have a discrete filed order info, it defaults to BFF. Yadif picks up the field order provided by Avisynth. Since your source is TFF, that's where things go wrong.[6]
[edit] YV12 Cropping
Progressive YV12 shouldn't be cropped or add-border'ed by odd numbers vertically . For interlaced YV12, that is MUST NOT. Never, never, never ever. If you do, you're changing chroma phase, i.e. luma and chroma are no more temporally aligned.[7]
It's a basic technical requirement that the top border of interlaced YV12i may only be altered in mod4 steps.[8]
[edit] Resizers
The cropping parameters of the resizer engine are floating point numbers. This allows the resizers to do subpixel shifting of the resultant image. As a consequence of this design the cropping is not a hard crop at the boundary but more an edge limit to the resampler centre point (there is separate hard limiting at the picture edge).[9]
- For cropping off hard artifacts like VHS head noise or letterbox borders always use Crop. The crop command defines a hard edge of the image. Former pixels beyond that edge no longer exist.
- For extracting a portion of an image and to maintain accurate edge resampling use the resize cropping parameters. The resizer cropping defines the centre point of the edge. Pixels to the left and right of that centre point will be used to calculate the new output pixel. Only if the hard edge of the input image is encountered will sampling be constrained.
The optional 4 subpixel crop parameters of the resizers apply to the input image size.[10]
Note!
...resize(1440,1080,1,1,-1,-1)
versus
...resize(1442,1082).crop(1,1,-1,-1)
Are not exactly the same. The boundary conditions in the resizer are different. In the first case the edge row of pixels are not used in the output image, in the second they are. A very minor point.
Also the cropping factors on the resizers are floating point numbers so you can get subpixel adjustment if required.
And as your resize percentage change is very small you might experiment with the taps=N option of lanczos.[11]
A trick worth mentioning in regards to the hard cropping done by crop and the windowed cropping done by the resizers is to use the resizer to do the resampling and cropping but with mod N extra guard pixels that are subsequently hard cropped off.[12]
AviSynth resizers maintain the image centre point rather than maintaining the top left position.[13]
Given subrange_height == target_height and subrange_width == target_width you don't want any actual resizing, just pure 0.5 pixel shifting. So try either BilinearResize() or BicubicResize(). These will involve the minimum number of input pixels per output pixel. If you involve to many input pixels like with Lanzos or Spline36 you may introduce ringing artefacts. With Bicubic you may want to try even softer B and C values than the default 1/3, 1/3.[14]
Less processed pixels, more blurring. You should really try both a high tap shift, and a low tap shift and see what does batter on you source. If your input is already blurry, for whatever reason, you should probably use a higher tap filter.However if you have something really sharp like flash animation, IanB is likely correct.[15]
[edit] QTGMC Notes
Notes from the QTGMC thread.
Does it take two fields in consideration or only one?
At max settings, up to fourteen fields for each output frame. Don't compare QTGMC (its workflow) with that of other deinterlacers. It works different, it IS different. Usual deinterlacers go "to weave, or not to weave, that's the question". QTGMC basically is a motioncompensated temporal superresolution filter.
Yes, QTGMC does "take both fields into account" (and more) even if you keep single rate. Every output frame is constructed from a range of neighbor fields as Didée has noted. By default the current field is interpolated into a full frame, then the two fields before and after (interpolated + motion compensated) are combined into the frame in such a way as to remove bob-shimmer. This temporal processing also enhances detail to some degree and has some noise reducing effect. So all source data will have been used in your output even after a SelectEven(). The result will be primarily based on the even fields of course, but the neighboring ("thrown away") fields will have had an influence too.
Temporal smoothing is used to remove bob-shimmer, but we don't want large areas of motion blur so the Rep0/1/2 settings limit the amount of change that the temporal smoothing is allowed to make. Higher values for Rep0/1/2 allow larger areas of change from the smoothing, but (counter intuitively) 0 switches of the limiting completely and so allows all changes through. You're right that there is a code path for Rep0/1/2 = 0 that is not used. However, that code path would only allow 2-pixel high areas of change, much bob-shimmer covers a larger area than that. It would be getting close to doing no temporal smoothing at all, similar to TR0/1/2=0 and would be especially bad on stationary detail. Having said that, it does seem to be a little odd not to allow that code path even if it is not the most useful.
Why is 2 pixels not enough? Consider a stationary single pixel high horizontal line, positioned such that it appears only in the even fields. The bob will expand that to a 3 pixel high line, and clearly it will be a cause of major bob-shimmer, flickering on and off. When temporally smoothed, the now 3 pixel high line is softened in the even frames and appears in the odd frames. Bob shimmer removed - by a 3-pixel high area of change, which would be removed if you followed the Rep0=0 code path... I could make that more clear with a diagram, but I hope you get the idea...
[I note that my comments on that function need an update for precision: the two vertical in/expands allow through areas of change up to 4 pixels high, the in/deflate and RemoveGrain are not limited vertically so they also perform some measure of mask clean up]
As I'm sure you're aware, the epsilon is only there to trigger the GaussResize to do its blur even though we're not actually resizing, I'm sure it doesn't matter much where it is placed given the tiny value and the fact it is being used on a huge blur anyway. I do wonder if there's a more efficient way to do a Gaussian nowadays, or something similar.
Does QTGMC just deinterlace with the presets or does it also do sharpening, denoising, noise stabilizing, etc.?
You will get a little of all three if you provide only a preset and no other settings. QTGMC is not really trying to be a sharpener or denoiser, this happens mainly a byproduct of the processing used to avoid shimmer. The light temporal smoothing / denoising improves compressibility, and the sharpening provides some detail enhancement. Despite the fact that this moves the result away from the source, most people like those effects.
The impact on noise is often very minor, and you may not care to do anything about it, especially since it will involve more processing. However, some people do take the sharpness down, and that's a free operation, e.g. Sharpness=0.4 or 0.7
However, if you really want something that's *very* close to the source, then Boulder's suggestion above is the right way to go about it. Although his MatchPreset choice is very high and will slow it down - I would usually leave that out and set an explicit Preset for clarity: Code:
QTGMC(Preset="Slower", SourceMatch=2, Lossless=2, EZKeepGrain=0.5, Sharpness=0.1)
SourceMatch specifically tries to make the deinterlace as "lossless" as possible without introducing shimmer. EZKeepGrain helps preserve the noise from the original. These are not default settings because I think most people want the slight denoise/enhance. Also these settings are slower - you can speed up the Preset a little without any major loss. I would strongly suggest an MT setup (see the first post).
Why static titles flicker after QTGMC?
Short answer: Try adding Rep1=4, Rep2=0 to your settings. This might fix your problem - it might rarely add a tiny bit of motion blur (hard to notice)
Long answer: The core operation of TGMC is to blend 50% of the current frame with 25% each of the previous & next frames (motion-compensated). That removes all bob-shimmer and helps define the missing field lines. However, it also introduces motion-blur where the motion analysis is incorrect. So there is a repair step that only allows changes that affect thin horizontal areas - because bob-shimmer normally only affects thin horizontal areas. Occasionally there is shimmer that covers a wider area, especially on static detailed things such as text. That shimmer gets through because fixing shimmer in larger areas would potentially create motion blur elsewhere.
There are settings controlling the shimmer repair step: Rep0, Rep1 and Rep2. Rep0 improves the motion search clip only so that isn't so relevant here. Rep1 and Rep2 are alternative ways to repair the output, you set them to a value from 1 to 5 to control the repair strength (it's a bit more complicated but that's the basic idea). The higher you set the value the more shimmer is removed but with the possibility that some motion blur might creep through. Rep1 has a stronger effect than Rep2, but again might let more motion blur through. The defaults are Rep1=0, Rep2=4. I suggest you switch the 4 to the stronger Rep1 and see if that works.
You might wonder why TGMC doesn't just mask static areas and leave them untouched to avoid all this complexity. You can try it yourself: Code:
qtgmc = QTGMC() dw = DoubleWeave() mask = mt_lutxy( dw, dw.SelectEvery(1,1), "x y - abs", U=1,V=1 ).mt_expand(U=1,V=1).mt_binarize(0, U=1,V=1) mt_merge( dw, qtgmc, mask, luma=true )
That simple script leaves any pixel untouched if it and its 8 neighbors don't change over the nearby fields (could be made more robust by including chroma or more complex masking). It might fix your problem. However, any tiny change within your "static" text pixels, even a change by 1 luma then you'd need to add a threshold. You can change the 0 in the mt_binarize to 1 or 2 to allow slight dissimilarities. But that will start to cause problems in normal footage: occasional pixels will be identified as "static" and will be processed differently to their neighbors - artefacts would show up (in fact rare cases artefacts can show up even with the script as I've written it).
It's very easy to create discontinuities through naive masking during deinterlacing; different algorithms often don't match up perfectly. You see this in other deinterlacers: "this part is combed so do A, this part is not combed so do B" - and you see the discontinuity between A and B. Softened masks help, but blur detail.
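For instance, a softened variant of the mask above might look like this (just a sketch; the amount of softening is arbitrary):
mask_soft = mask.mt_inflate().Blur(1.0)   # grow and soften the mask edges a little
mt_merge( dw, qtgmc, mask_soft, luma=true )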
On a side note, the other problem with static detail in (Q)TGMC is that it loses too much vertical detail compared to other deinterlacers. Source match was specifically designed to greatly improve static detail. Sadly, though, it doesn't affect these minor shimmer issues.
What are the best settings for QTGMC?
Everyone's opinion about "best" is different. Every source has different "best" settings. Don't ask for "best", find out what's best for yourself.
This is one place to start if you don't care about speed:
QTGMC(Preset="Very Slow", SourceMatch=2, Lossless=2, EZKeepGrain=0.5, Sharpness=0.1, Sbb=0)
But that's for good quality footage that has fine detail; noisy VHS tapes probably aren't worth that kind of processing - but you asked... You will likely want to change that EZKeepGrain value (increase it to keep more noise, decrease it if you don't care about preserving noise; change it to EZDenoise and increase the value if you want QTGMC to denoise for you).
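As a hedged sketch of that denoising variant (the EZDenoise value is only a placeholder to tune against your source):
QTGMC(Preset="Slower", EZDenoise=2, Sharpness=0.1)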
===Dancing Grain===
Some more thoughts on "dancing" grain.
This sort of "dancing" usually isn't a property of the original grain, at least for the most part. Grain in itself usually is a high-frequency distortion only. The "dancing" effect is introduced by lossy DCT-based compressors, where in the lossy compression process some error is introduced into the low-frequency parts, caused mainly by the hi-frequency parts.[16]
One step further, something in this direction often is very useful for subsequent motioncompensated denoising. If the flicker is left in, it eventually will also disturb the ME engine (making the vectors follow the flicker, causing spatial shifts where in fact there should be none), which will lower the benefit one can get from MC-NR. When the flicker is taken out before the motion search, chances are better to get more clean vectors.[17]
It's still the basic method of using a pre-filter before doing the motion search. This can be done in several different ways, and this here was just one of them that can be used. E.g., if one is using a pure spatial prefilter, the effect is the opposite: it will take out the hi-freq's, but leave the flicker mostly intact, therefore still irritating the ME engine. Fact is, with strong grain there is so much uncertainty at the pixel level that there is hardly any "this is the right way to do". There are plenty of different possible points to break into the circle of catch-22 ... but there's no "correct" one.[18]
Truth is, this kind of "flickering of low spatial frequencies" is one of the ultimate foes, because right here is where the nebula-of-uncertainty becomes thick:
- a) without mocomp, you can't know if it's flicker or motion
- b) with mocomp, you can't know if the mocomp has been misled by the flicker
- c) with prefiltering before the motion search, you can't know if the prefilter has mangled moving areas (because of a)) and consequently has misled the motion search
Chicken-and-egg problem, without any definite solution.[19]
==MVTools2 Notes==
===Prefilter===
The prefilter should be able to cut down the grain almost completely. Some loss of detail is nothing to worry about at this stage. Then, thSAD should not be increased, but instead decreased from the default 400. Using a prefilter together with such a high thSAD is guaranteed to introduce artifacts in areas where MC fails.[20]
Whether pre-denoising is needed at all depends on how strong the grain is. I see a slight danger that this sort of processing now gets thrown at all kinds of sources, even those that don't need such a processing method. Prefiltering might work out for the most part, but there'll be cases where it bites you back. Dumb filters can destroy the motion of rather smooth regions (e.g. a close-up of a smooth face plus head movement) strongly enough that MVTools won't recognize the motion anymore. If the source has strong grain, the pre-filtering should be barely strong enough to make static areas calm. Which filter that is, or could be, depends on the source.[21]
For low noise you need exactly no prefiltering at all. Use MDegrainX directly as stated in MVTools' documentation.[22]
Seeing the source is LOTR, I'd say the prefiltering is MUCH too strong. LOTR is rather clean with only a little noise, no need to break a fly on the wheel. The current prefiltering will nuke out enough content to make the motion search worse than it could be. Sometimes less is simply more.[23]
- This should be a "simple" but effective searchclip pre-processing for such rather clean sources.[24] - more info here[25]
MinBlur(1)
FluxSmoothT().Merge(last,0.251)   # roughly 3/4 temporally smoothed, 1/4 left as-is
sbr()
If you use a nicely sharp & high-contrast clip as the searchclip, with "default operation as per documentation" you'll end up with pretty big SADs wherever there's an edge. That means little to nothing will happen on edges, which is quite counterproductive when the goal is to calm (the effect of) a sharpener.[26]
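To show how such a pre-calmed clip is usually wired into MVTools2, here is a minimal sketch; PreCalm() is a hypothetical wrapper around the pre-processing above, and the lowered thSAD follows the advice at the top of this section (all values are illustrative):
o    = last
pre  = o.PreCalm()                    # hypothetical wrapper around the searchclip pre-processing above
supS = pre.MSuper(pel=2)              # super clip used only for the motion search
supR = o.MSuper(pel=2, levels=1)      # super clip used for the actual degraining
bv   = supS.MAnalyse(isb=true,  delta=1)
fv   = supS.MAnalyse(isb=false, delta=1)
o.MDegrain1(supR, bv, fv, thSAD=300)  # lowered from the default 400, as suggested above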
For "dancing grain" (aka low-frequency flicker) prefiltering, the following script will remove the low-frequency flicker, leaving the high-frequencies intact:[[27]
o = last
f = o.MinBlur(1,2).MinBlur(2,2).RemoveGrain(11,-1)              # a softened (low-frequency) version of the luma
f.FluxSmoothT(7).mt_AddDiff(mt_MakeDiff(o,f,U=2,V=2),U=4,V=4)   # temporally smooth the soft base, then add the high-frequency part (the grain) back
# optionally, limit the maximum pixel change to +/- 2:
# mt_LutXY(o,last,"x 2 + y < x 2 + x 2 - y > x 2 - y ? ?",U=2,V=2)
As a result, there will be almost no smoothing, and the grain basically is fully preserved. It's just the flicker, or "dancing" effect, that will be removed. As a side effect, there might be some slight toning-down of shadings where there is motion. One can definitely see it in single-frame comparisons by flipping between original and processed.
- Somewhere earlier in this thread I had posted a pre-calm script (with MinBlur() and FluxSmooth) - in essence the same as the following script. Depending on the source characteristics, using Flux5framesT instead of simple FluxSmooth within such a pre-calmer can make sense.[29]
The basic idea is to combine a spatial and a spatio-temporal filter, so that the spatio-temporal filter does not do what the spatial filter would do - or something similar. Here's a mini-script in the spirit of the original idea (keep the temporal filter from acting on those bits that a spatial filter would act on):[30]
a = last
b = a.RemoveGrain(11)                              # spatial blur
f = b.FluxSmoothT().merge(b,0.49)                  # temporal smoothing of the blurred clip, at roughly half strength
a.mt_makediff(mt_makediff(b,f,U=3,V=3),U=3,V=3)    # apply only that temporal change to the original
==MaskTools2 Notes==
1) In MaskTools2, does Round(1.5) equal 2 or 1?
- It should follow the usual convention for rounding: .5 is rounded upwards.
2) Why, in mt_makediff with identical clips, is the difference 128 and not 0 as in ordinary maths? In general, I don't really understand the meaning of 128.
- A pixel can't have a negative value, only 0-255. In order to handle "negative" differences, the range -127..0..127 is offset to 0..128..255 (see the sketch below).[31]
- Some info here: [32]
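As a rough illustration of that offset, mt_makediff on two clips behaves much like this hand-rolled mt_lutxy (the clip names are placeholders):
diff    = mt_lutxy(clip1, clip2, "x y - 128 +")   # difference stored offset by 128, clamped to 0..255
restore = mt_adddiff(clip2, diff)                 # adds (diff - 128) back, giving clip1 again (up to clamping)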
==Other Plugins/Scripts==
===Deblock_QED===
Prior to deblocking: NO resizing. NO noise filtering. Cropping only at macroblock boundaries.[33]
SeparateFields().DeBlock_QED().Weave()
- That's a BAD way of deblocking interlaced footage: 50% of all possible boundaries between vertically neighbouring blocks are NOT deblocked this way![34]
The only correct way for interlaced sources is (alas):
SeparateFields().PointResize(width,height)     # blow each field back up to full frame height
Deblock_QED().AssumeFrameBased()               # deblock on the enlarged fields
SeparateFields().SelectEvery(4,0,3).Weave()    # drop the duplicated lines and re-interlace
Originally mentioned here and updated here, another variation [35] - [36]
You use a chain of three deblockers: DGDecode deblocking, then Deblock_QED(), then Deblock(). That's pretty much pointless. Deblocking filters need to "recognize" blocking. It can easily happen that one deblocker - although perhaps not acting efficiently enough to look pleasant - alters the content by so much that the following deblockers *cannot* recognize the blocks anymore, and therefore they become inefficient.
To make a reasonable start, you should find & use one deblocker that removes all (or at least a major part) of the blocking. After having found that, you can make your way to adding more stuff.[37]
Using Deblock_QED on a blocky source can potentially improve motion estimation.[38]
Deblock_QED works with a fixed 8x8 grid mask. But if you execute UVtoY(), then you get a half-size frame, where the smallest possible block size is 4x4. Thus Deblock_QED will potentially miss half of all blocking if you do it that way.[39]
===FluxSmooth===
FluxSmooth could be approximated with a combination of Clense and TemporalSoften, or it can also be built by a 3-fold mt_logic() combination.[40]
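One possible reading of that remark, as a very rough sketch (it ignores FluxSmooth's thresholds, and assumes RemoveGrain/RgTools for Clense plus MaskTools2):
o    = last
med  = o.Clense()                          # temporal median of previous/current/next frame
avg  = o.TemporalSoften(1, 255, 255)       # plain 3-frame temporal average
mask = mt_lutxy(o, med, "x y == 0 255 ?")  # pixels the median would change, i.e. pixels over-/undershooting both neighbours
mt_merge(o, avg, mask, luma=true)          # average only those pixels, keep the rest untouched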
If FluxSmooth is set up more aggressively (i.e. a bigger threshold), then it'll do more good where Flux is doing right, and more harm where Flux is doing wrong. Remember FluxSmooth is a simple temporal smoother with a median-like decision about where to filter and where not.
Examples:
- a) a pixel sequence: ... 80 81 85 79 80 ... FluxSmooth will filter the "85" and the "79", because each of these two pixels over- or undershoots both of its neighbors.
- b) pixel sequence: ... 80 81 85 85 81 80 ... FluxSmooth will filter *nothing*, because no pixel satisfies the "overshooting both neighbors" criterion.
For case b), this means:
- - IF those two "85" are due to motion, then FluxSmooth has done the right thing.
- - But IF those two "85" in fact are related to "flicker" in a "flat" and/or "static" area, then FluxSmooth has not filtered something that you would like to have filtered.[42]
===FluxSmoothT===
- What is the max value for FluxSmoothT's temporal threshold?
- Maximum is 255. Perhaps more, but in 8-bit sources pixel differences cannot be larger than 255 anyway. Of course, with such a big threshold, motion artifacts will appear.[43]
===Flux5framesT===
Related - I've thought several times about whether and how the "FluxSmooth principle" could be extended from the current 3-frame temporal window to a 5-frame temporal window.[44]
A reasonable approach would be this (a rough sketch follows the list):
- - calculate temporal median with radius=2
- - calculate temporal soften with radius=2
- - for each pixel, use the result that caused the smaller difference
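A rough sketch of that idea (not Flux5framesT itself; TemporalMedian(radius=2) stands in for any 5-frame temporal median and is an assumption, MaskTools2 does the per-pixel choice):
o   = last
med = o.TemporalMedian(radius=2)     # hypothetical 5-frame temporal median
avg = o.TemporalSoften(2, 255, 255)  # plain 5-frame temporal average
# per pixel, keep whichever result stays closer to the original
mt_lutxyz(o, med, avg, "x y - abs x z - abs < y z ?", U=3, V=3)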
Of course, Flux5framesT is *not* safe with regard to weak shadings in moving areas. Vanilla FluxSmooth is not safe, and Flux5framesT even less so. Well, you can't expect anything else from a simple, thresholded temporal smoother. It's a compromise the user needs to balance out.[45]
Depending on the source characteristics, using Flux5framesT instead of simple FluxSmooth within such a pre-calmer can make sense.
===MedianBlur===
MedianBlur can be done via mt_luts(). MedianBlurT is not usable if radius>2.[46]
===ML3Dex===
The exact operation of ML3Dex isn't fully clear to me (I have been a bit lazy when looking through that pdf) ... however, in practice it doesn't impress me too much. The temporal artefacts in motion areas (resp. areas with erroneous motion compensation) are pretty much the same as those of a plain temporal median, so there's no benefit in that respect. In areas without motion (resp. areas with correct motion compensation), it does remove a few more signal spikes, no matter whether it's noise or detail.[47]
==Interlacing==
===HD 1080i to DVD (all interlaced) - how?===
NOT COMPLETE - need to add all useful information from this thread: http://forum.doom9.org/showthread.php?t=139102
Interlaced resizing is fast, but you pay a price for generating each new field based only on the original field. Any new pixel spatially between 2 original field lines will effectively be a weighted average of only the pixels above and below in that field, i.e. a blur. Effectively, all the pixels in the new fields are vertically blurred slightly.
Using the SmartBob/Resize/ReInterlace method, although slower, will give vastly superior results in static areas because each new field can be based on a full frame. In static areas there is no "spatially between 2 original field lines": those new pixels are rendered from complete frame data, i.e. no blur in static areas.
Of course, in motion areas any difference can be attributed to how well the SmartBob interpolates the missing pixels. If using linear interpolators like in KernelBob or DGBob, there will be no difference to interlaced resizing, i.e. a blur again. If using edge-directed and/or motion-compensated interpolators, then the results can be a significant step up from bog-standard interlaced resizing.
And apart from everything else, the eye has trouble seeing blurring of things in high motion; it attributes the blur to the motion rather than recognizing it as blur. So it is a little unfair to look at individual fields on a PC screen; you really should evaluate the results on an interlaced display device at normal speed.[48]
Going 1080i -> 480i means you're going from 100% image area down to 16.6% image area; or the other way round, you're losing 83.3% of the image area in the process. Still, you want to use golden nails to hammer some planks together? The vast majority of the intermediate improvement (at the 1080p stage) will be lost again when you've reached 480i. A plain and fast bob() does pretty well for that task ...[49]
Low ringing (and low detail) lowpass for 1080i->480i:[50]
mt_convolution(horizontal=" 255 ", vertical=" -0.00457430142084469586164799888191 -0.91092031121042564306650907803944 -2.7215434011820571965496188952936 -4.2381040109875854130339774799147 -2.7739456768086984932442890697262 4.556137386140445570028490752454 18.505136047840382914953022942635 36.000435907859456703965425655238 50.797650942298968076309880259519 56.609999970907811068675436793984 50.797650942298968076309880259519 36.000435907859456703965425655238 18.505136047840382914953022942635 4.556137386140445570028490752454 -2.7739456768086984932442890697262 -4.2381040109875854130339774799147 -2.7215434011820571965496188952936 -0.91092031121042564306650907803944 -0.00457430142084469586164799888191 ", u=3, v=3)
It does lowpass between 120 and 240 lines, but because it is low-ringing and has a low number of sample points (relatively speaking, you need more and more sample points the more frequencies you eliminate), it loses a lot of frequency amplitude all the way down to around 60 lines. This filter is a simple Chebyshev-windowed sinc FIR filter.[51]
And one last point: if a ConvertToYV12() is required, where should it go?
I would recommend straight after the bob and before the resize. There is a whole raft of discussion about chroma positioning with interlaced 4:2:0 material. The summary is that the chroma is positioned the same with both interlaced and progressive, but with interlaced, alternate lines are temporally distinct. This means that for a static scene there is no difference between progressive and interlaced chroma. See these threads for the gory details: AutoYUY2() updated and Adaptive chroma upsampling.[52]
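Putting the pieces together, a minimal sketch of the Bob/Resize/ReInterlace route (QTGMC merely stands in for "a good smart bob"; the resizer, preset and field order are only illustrative):
AssumeTFF()                                    # or AssumeBFF(), depending on the source
QTGMC(Preset="Fast")                           # bob 1080i to double-rate 1080p
ConvertToYV12()                                # if needed: straight after the bob, before the resize
Spline36Resize(720, 480)                       # resize the progressive frames
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()    # re-interlace to 480i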
==VirtualDub Plugins==
- CACorrect - This filter allows you to radially scale (i.e., zoom) the R, G, and B channels of a video stream with independent scale factors. This can be used to correct radial chromatic aberration. See discussion - download: CaCorrect.vdf
- Kagayaki filter is the Twinkling Soft Focus and Cross filter for VirtualDub, to enhance the emotion of your starry and night footage. It works nicely for starry timelapse, night city and fireworks footage. It is also well suited when you use still astrophotographs in your movies. And it can work as a general-purpose soft-focus filter, not limited to night footage.
- Gradation Curves Filter - This Filter can be used to edit the gradation curves similar to the curves function of painting programs. See tutorial and readme.
- DotCrawl Comb Filter - Removes composite video artifacts: false colors and hanging dots. Scott Elliott's (aka 'tacosalad') homepage.
===Plugins List===
- http://acobw.narod.ru/
- http://rationalqm.us/mine.html#virtualdub
- http://neuron2.net/hosted.html
- http://www.infognition.com/VirtualDubFilters/
- http://www.hlinke.de/dokuwiki/doku.php?id=en:virtualdub_pluginlist