see also MaskTools2
After processing, you may want to keep only part of the output. Say you have a clip named smooth that is the result of smoothing (Blur() for instance) a clip named source. Most of the noise from source has disappeared in smooth, but so have details. You may therefore want to keep only the filtered pixels and discard those where there are big differences in colour or brightness. That is what MSmooth by D. Graft does, for instance.
Now imagine an image where the pixels you want to keep from smooth are written as white pixels, and the remaining ones, taken from source, as black pixels. What you get is called a mask. MaskTools deals with the creation, enhancement and manipulation of such masks, for each component of the YV12 colorspace.
This AviSynth 2.5 YV12-only plugin offers several functions for manipulating clips as masks:
|Binarize||Binarizes the input picture depending on a threshold and a comparison mode.||YV12|
|CombMask||Outputs a mask showing the areas that present combing.||YV12|
|DEdgeMask / DEdgeMask2||Builds a mask of the edges of a clip, applying thresholds (proper values will enable or disable them).||YV12|
|EdgeMask||Builds a mask of the edges of a clip, applying thresholds (proper values will enable or disable them). Similar to DEdgeMask, but with predefined kernels.||YV12|
|FitY2UV / FitY2U / FitY2V / FitU2Y / FitV2Y||Resizes the Y plane and replaces the UV/U/V plane(s) with the result of the resize (you can specify your resizer filter, even one that isn't built into AviSynth); FitU2Y and FitV2Y do the opposite.||YV12|
|Expand||'Expands' the high values in a plane, by writing to the output the maximum value of the 3x3 neighbourhood around each input pixel.||YV12|
|Inpand||The opposite of Expand.||YV12|
|Inflate||'Inflates' the high values in a plane, by writing to the output the average of the 8 neighbours if it is higher than the original value, and the original value otherwise.||YV12|
|Deflate||The opposite of Inflate. Dedicated to Phil Katz.||YV12|
|HysteresyMask||Creates a mask from two masks. Theoretically, the first mask should be inside the second one, but it can work when that isn't true (though the results will be less interesting).||YV12|
|Invert||Inverts the pixels (i.e. out = 255 - in); this can also be used to apply a 'solarize' effect to the picture.||YV12|
|Logic||Performs the most typical logical operations (in fact, the ones provided by MMX mnemonics, though C functions are still available, mainly because of the picture dimension limits).||YV12|
|RGBLUT / YUY2LUT / YV12LUT / YV12LUTxy||Look-up tables, allowing a function to be applied quickly to every pixel of the picture.||YV12|
|MaskedMerge||Takes 3 clips and applies a weighted merge of the first and second clips, depending on the mask represented by the third clip.||YV12|
|MotionMask||Creates a mask of the motion in the picture.||YV12|
|OverlayMask||Compares 2 clips based on luminance and chrominance thresholds, and outputs whether pixels are close or not (close to what ColorKeyMask does).||YV12|
|YV12Convolution||Convolves the picture with the matrix of your choice.||YV12|
|YV12Layer||The equivalent of Overlay.||YV12|
|YV12Substract||The same as Subtract, which also works in YV12, but should be a bit faster (because it is MMX-optimised).||YV12|
All the above filters take 3 additional parameters: Y, U and V (except the FitPlane filters, where the name obviously tells what is processed). Depending on their value, a different operation is applied to each plane:
- value = 3 will apply the actual process of the filter,
- value = 2 will copy the corresponding plane of the 2nd video (if applicable) to the output plane,
- value = 1 will not process the plane (i.e., most often, leave it with the 1st clip's plane or garbage - check by yourself),
- value = [-255...0] will fill the output plane with -value (e.g. to get grey chroma, use U=-128, V=-128).
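For instance, to build a luma-only edge mask with flat grey chroma, following the conventions above (a sketch; the file name, plugin path and EdgeMask thresholds are arbitrary assumptions):

```avisynth
LoadPlugin("MaskTools.dll")
clip1 = AviSource("test.avi").ConvertToYV12()
# Y=3: build the edge mask on luma
# U=-128, V=-128: fill the chroma planes with 128 (neutral grey)
mask = clip1.EdgeMask(15, 60, "roberts", Y=3, U=-128, V=-128)
return mask
```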
A last point is the ability of some filters to process only a part of the frame:
- this behaviour is set by the parameters (offX, offY) (position of the starting point) and (w, h) (width and height of the processed area); filters may adjust those parameters so that the processed area lies inside both pictures,
- when a filter (except YV12Layer) uses 2 clips, the 2 clips must have the same dimensions,
- in all cases, the picture must be at least MOD8 (sometimes MOD16) to enable the filter to use MMX functions (i.e. to work at full speed).
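As a sketch (assuming Binarize supports the region parameters, as most filters here do), this binarizes only a 160x128 window starting at (16, 32) and leaves the rest of the frame untouched:

```avisynth
LoadPlugin("MaskTools.dll")
clip1 = AviSource("test.avi").ConvertToYV12()
# Offsets and dimensions are MOD8, so the MMX code path can be used
return clip1.Binarize(128, offX=16, offY=32, w=160, h=128, Y=3, U=1, V=1)
```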
This was intended for modularity and atomic operations (or at least to be as useful as possible), not really for speed. It became both bloated and slow. I let you decide whether this statement is totally true, or a bit less... The examples in the filter documentation will most probably run much faster with the original filters they mimic.
Below are some practical use examples. Be aware that they have not been tested extensively. They won't produce exactly the same results as the original filters they try to mimic, and they are far slower. Despite the numerous additional functions, they bring no new ideas.
- I'm too lazy to update the syntax, especially regarding how mode=2 works, and how EdgeMask was updated (it no longer needs a Binarize, for instance).
- Some filters I describe as 'to create' already exist (ImageReader, Levels for clamping, ...).
1) Edge sharpening

# Build an EdgeMask of clip1, Binarize it and store the result into clip3
# Apply any sharpening filter to clip1 and store it into clip2
...
return MaskedMerge(clip1, clip2, clip3)
The edges of clip2 whose mask value is higher than the threshold given to Binarize will replace their original counterparts in clip1. You could also write a filter with a particular look-up table (the best would look like a bell curve), replace Binarize with it, and get a weighted sharpening depending on the edge value: that's the HiQ part in SmartSmoothHiQ.
clip2 = clip1.<EdgeEnhancer>(<parameters>)
# U and V planes don't need filtering, Y does
# EdgeMask(<...>, "roberts", Y=3, U=-128, V=-128) for a greyscale map
clip3 = clip1.EdgeMask(15, 60, "roberts", Y=3, U=1, V=1)
return MaskedMerge(clip1, clip2, clip3)
2) Smoothing of flat areas

Replace <EdgeEnhancer> with a spatial softener (cascaded blurs? SpatialSoftenMMX?) and use upper=true to select near-flat pixels.
3) Rainbow reduction (as described in this thread)

Warning: this isn't a miracle solution either.
# Soften clip1 at maximum (using deen("m2d") or edeen for instance)
clip2 = clip1.<Softener>(<parameters>)
# Get the luma edge map and thicken the edges by inflating
# -> wider areas to be processed
clip3 = clip1.EdgeMask(6, "roberts", Y=3, U=1, V=1).Inflate(Y=3, U=1, V=1)
# Now, use the luma edge mask as a chroma mask
clip3 = YToUV(clip3, clip3).ReduceBy2().Binarize(15, upper=false, Y=1, U=3, V=3)
# We have to process the pixels' chroma near edges, but keep the Y plane intact
return MaskedMerge(clip1, clip2, clip3, Y=1, U=3, V=3)
4) Supersampled fxtoon
- Use Tweak to darken the picture, or make a plugin that scales down Y values -> clip2
- Build an edge mask, supersample it, Binarize it with a high threshold (clamping sounds better), Inflate it -> clip3
- Apply the darker pixels of clip2 depending on the values of clip3
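These steps could look like the following sketch (the file name, Tweak amount, EdgeMask thresholds and choice of resizers are all assumptions; Tweak and the resizers are standard AviSynth, the rest comes from the table above):

```avisynth
LoadPlugin("MaskTools.dll")
clip1 = AviSource("anime.avi").ConvertToYV12()
# Darkened version of the picture
clip2 = clip1.Tweak(bright=-25)
# Luma edge mask, supersampled, binarized with a high threshold, then inflated
clip3 = clip1.EdgeMask(40, 80, "roberts", Y=3, U=-128, V=-128)
clip3 = clip3.LanczosResize(clip1.width*2, clip1.height*2)
clip3 = clip3.Binarize(128, Y=3, U=1, V=1).Inflate(Y=3, U=1, V=1)
clip3 = clip3.BilinearResize(clip1.width, clip1.height)
# Darken only along the detected edges
return MaskedMerge(clip1, clip2, clip3, Y=3, U=1, V=1)
```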
5) Warpsharp for dark luma
- Apply WarpSharp -> clip2 (replacement pixels)
- Create a clamping filter or a low-luma bypass filter -> clip3 (mask)
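A possible sketch, using Binarize with upper=false as a crude low-luma selector (the threshold, the availability of a WarpSharp filter and its defaults are assumptions):

```avisynth
LoadPlugin("MaskTools.dll")
clip1 = AviSource("anime.avi").ConvertToYV12()
clip2 = clip1.WarpSharp()
# White where luma is below the threshold, grey chroma
clip3 = clip1.Binarize(80, upper=false, Y=3, U=-128, V=-128)
# Take the warpsharped pixels only in dark areas
return MaskedMerge(clip1, clip2, clip3, Y=3, U=1, V=1)
```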
6) pseudo-deinterlacer (chroma will still be problematic)
clip2 = clip1.SeparateFields().SelectEven().<Method>Resize(<parameters>)
clip3 = clip1.<CombingDetector>(<parameters>)
return MaskedMerge(clip1, clip2, clip3, Y=3, U=3, V=3)
(chroma even more problematic)
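CombMask from the table above can serve as the combing detector; a sketch (the file name, the choice of resizer and relying on CombMask's default parameters are assumptions):

```avisynth
LoadPlugin("MaskTools.dll")
clip1 = AviSource("interlaced.avi").ConvertToYV12()
# Bob-like replacement: keep the even fields and resize back to full height
clip2 = clip1.SeparateFields().SelectEven().BicubicResize(clip1.width, clip1.height)
# Mask of the combed areas
clip3 = clip1.CombMask()
return MaskedMerge(clip1, clip2, clip3, Y=3, U=3, V=3)
```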
7) Non-rectangular overlays
# Simple hack because ImageReader needs an integer fps...
# Most sources are natively in YUY2/YV12
clip = AviSource("test.avi").ConvertToYV12().AssumeFPS(fps)
# Load the picture to be overlaid
image = ImageReader("mask.bmp", 0, clip.framecount()-1, 24, use_DevIl=false).ConvertToYV12()
# Simple way: assume black is transparent
# Any other colour would be quite a bit more complicated*
masktemp = image.Binarize(17, upper=false, Y=3)
# Fit the luma mask onto the chroma planes
mask = masktemp.FitY2UV()
# Now that we have the mask that tells us what we want to keep...
# Replace with image the parts of clip covered by mask!
MaskedMerge(clip, image, mask, Y=3, U=3, V=3)
# *solution: mask = OverlayMask(image, image.BlankClip("$xxxxxx"), 1, 1)
8) Replace backgrounds
This example would clearly look better in RGB. To avoid the typical problems due to noise or compression, it is better to use blurred versions of the clip and the picture.
source = AviSource("overlay.avi").AssumeFPS(24)
# Blur the source
clip = source.Blur(1.58).Blur(1.58).Blur(1.58)
# Load the background to replace, captured from the blurred sequence
bgnd = ImageReader("bgnd.ebmp", 0, clip.framecount()-1, 24, use_DevIl=false)
# Load the new background
new = ImageReader("new.ebmp", 0, clip.framecount()-1, 24, use_DevIl=false)
# Integrated filter to output the mask (clip ~ bgnd ?)
mask = OverlayMask(clip, bgnd.ConvertToYV12(), 10, 10)
MaskedMerge(source, new.ConvertToYV12(), mask, Y=3, U=3, V=3)
I need to include more info (original urls/posts), but for now I hope mfToon's original author, mf (firstname.lastname@example.org), will not react too violently while this is still not addressed.
The output of the function inside K-mfToon.avs should be identical to the output of the original mftoon.avs (also included), at twice the speed.
This plugin is released under the GPL license. You must agree to the terms of 'Copying.txt' before using the plugin or its source code.
You are also advised to use it in a philanthropic state of mind, i.e. not "I'll keep this secret to myself".
Last but not least, only a small part of all possible uses of each filter has been tested (maybe 5% - still, a couple of hours were spent debugging ;-). Therefore, feedback is _very_ welcome (the opposite - a lack of feedback - is also true...).
See the latest version of the MaskTools changelog.
Note: MaskTools will not be updated further; it has been replaced by the newer MaskTools2 plugin. Script writers should prefer the newer version in new scripts. Both plugins can be used in parallel, however; this is useful since a few filters (e.g., YV12Subtract) were dropped in the newer version.
Download the latest stable version: Masktools 1.5.8.
Back to External filters