Normalize

From Avisynth wiki
(Difference between revisions)
Jump to: navigation, search
m (1 revision)
(add link to avs+ documentation)
 
(4 intermediate revisions by one user not shown)
Line 1: Line 1:
{{Template:FuncDef|Normalize(clip ''clip'' [, float ''volume''] [, bool ''show''])}}
+
<div style="max-width:62em" >
<div {{BlueBox2|40|0|3px solid purple}} >
{{AvsPlusFullname}}<br>
Up-to-date documentation: [https://avisynthplus.readthedocs.io/en/latest/avisynthdoc/corefilters/normalize.html https://avisynthplus.readthedocs.io]
</div>
  
Raises (or lowers) the loudest peak of the audio track to a given {{FuncArg|volume}}. This process is called [[Wikipedia:Audio_normalization|''audio normalization'']].<br>
Note that '''Normalize''' performs [[Wikipedia:Audio_normalization#Peak_normalization|''peak normalization'']] (used to prevent audio [[Wikipedia:Clipping_(audio)#Digital_clipping|clipping]]) and not [[Wikipedia:Audio_normalization#Loudness_normalization|loudness normalization]].
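In essence, peak normalization scans the whole track for its loudest sample and applies a single constant gain to every sample. A minimal Python sketch of the idea (illustrative only, not AviSynth's actual implementation):

```python
def peak_normalize(samples, volume=1.0):
    """Scale all samples so the loudest peak lands at `volume` (sketch only).

    `samples` is a list of floats in the nominal range -1.0..+1.0.
    One gain factor is applied to the whole track, so relative level
    differences between quiet and loud passages are preserved.
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]          # silence: nothing to amplify
    gain = volume / peak           # single constant amplification factor
    return [s * gain for s in samples]

# A track peaking at 0.5 gets a gain of 2.0:
print(peak_normalize([0.1, -0.5, 0.25]))  # [0.2, -1.0, 0.5]
```

Because one gain is derived from the single loudest peak, no sample can exceed the target level, which is what prevents clipping.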
==== Syntax and Parameters ====
{{FuncDef
|Normalize(clip ''clip'' [, float ''volume'' , bool ''show'' ] )
}}

:{{Par2|clip |clip|}}
::Source clip. Supported [[ConvertAudio|audio sample types]]: 16-bit integer and [[Float|32-bit floating-point]].
::Other sample types (8-, 24- and 32-bit integer) are automatically converted to floating-point.
  
:{{Par2|volume|float|1.0}}
::Set the amplitude of the loudest audio. Default = 1.0 for peaking at 0[[Wikipedia:DBFS|dB]]: for [[Float|floating-point]] samples, this corresponds to the range -1.0 to +1.0, and for 16-bit integer samples, this corresponds to the range -32768 to +32767 &ndash; the widest range possible without [[Wikipedia:Clipping_(audio)#Digital_clipping|clipping]].
::*For a particular peak [[Wikipedia:Decibel|decibel]] level, use the equation {{FuncArg|volume}} = {{Serif|10}}<sup> {{Serif|'''dB''' / 20}}</sup>
::*For example, set a -3dB peak with {{FuncArg|volume}} = 10<sup>-3/20</sup> or 0.7079.
::*Where multiple audio channels are present, all channel gains are set in proportion. For example, if the loudest peak on the loudest channel comes to -10dB, by default a gain of +10dB is applied to all channels.

:{{Par2|show|bool|false}}
::If ''true'', a text overlay (see image below) will show the calculated amplification factor and the frame number of the loudest peak.
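The dB-to-volume relation above is plain arithmetic; a quick Python check of the -3 dB figure quoted above (illustrative, not part of AviSynth):

```python
def db_to_volume(db):
    """Convert a peak level in dBFS to a linear `volume` factor: 10^(dB/20)."""
    return 10 ** (db / 20)

print(round(db_to_volume(-3), 4))   # 0.7079  (the -3 dB peak from the text)
print(db_to_volume(0))              # 1.0     (0 dBFS = full scale)
```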
==== ''Normalization and Floating-point Audio'' ====
The idea of digital ''clipping'' (when the signal is outside the range that can be stored accurately) really applies only to ''integer'' sample types; floating-point samples will never become clipped in practice, as [[Wikipedia:Single-precision_floating-point_format|the maximum value]] is around 3.4×10<sup>38</sup> &ndash; some 29 orders of magnitude (580 dB) larger than 16-bit samples can store.

'''Normalize''' is therefore not needed for floating-point audio, but using it is recommended before [[ConvertAudio|converting]] to an integer type, especially if any processing has been done &ndash; such as [[Amplify|amplification]], [[MixAudio|mixing]] or [[SuperEQ|equalization]] &ndash; which may expand the audio peaks beyond the integer clipping range.
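The practical consequence: float samples that overshoot ±1.0 after processing are still intact and can be pulled back before an integer conversion would clip them. A minimal Python illustration of why normalizing first matters (the naive 16-bit conversion below is only a sketch, not AviSynth's ConvertAudio):

```python
def to_int16(samples):
    """Naive float -> 16-bit conversion with hard clipping at the integer range."""
    return [max(-32768, min(32767, int(s * 32767))) for s in samples]

mixed = [0.9 * 2, -0.5 * 2]        # amplification pushed the float peak to 1.8

clipped = to_int16(mixed)          # converting now flattens the 1.8 peak
peak = max(abs(s) for s in mixed)
safe = to_int16([s / peak for s in mixed])   # normalize to 1.0 first, then convert

print(clipped)   # [32767, -32767] -> relative levels destroyed by clipping
print(safe)      # [32767, -18203] -> waveform shape preserved
```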
==== Examples ====
* Normalize signal to 98%
<div {{BoxWidthIndent|44|2}} >
 video = [[AviSource]]("video.avi")
 audio = [[WavSource]]("audio.wav").Normalize(0.98)
 return [[AudioDub]](video, audio)
</div>

* Normalize each channel separately (e.g. for separate language tracks)
<div {{BoxWidthIndent|44|2}} >
 video = [[AviSource]]("video.avi")
 audio = [[WavSource]]("audio2ch.wav")
 left_ch = [[GetChannel]](audio,1).Normalize
 right_ch = [[GetChannel]](audio,2).Normalize
 return [[AudioDub]](video, [[MergeChannels]](left_ch, right_ch))
</div>

* Effect of {{FuncArg|show}}=true with added [[Histogram]], [[Waveform]] and [[Runtime_environment#Special_runtime_variables_and_functions|current_frame]] overlays
<div {{BoxWidthIndent|44|2}} >
 LoadPlugin(p + "[[Waveform]]\waveform.dll")
 V=[[BlankClip]](pixel_type="YV12", width=480, height=360).[[Loop]]
 A=[[WavSource]]("music.wav")
 [[AudioDub]](V, A).[[AudioTrim]](0.0, A.[[Clip_properties|AudioDuration]])
 [[ScriptClip]](Last,
 \ """[[Subtitle]](Last, "frame "+[[Internal_functions#String|String]]([[Runtime_environment#Special_runtime_variables_and_functions|current_frame]]), align=5)""")
 Normalize({{FuncArg|volume}}=1.0, {{FuncArg|show}}=true)
 [[Histogram]](mode="audiolevels")
 [[Waveform]](window=3)
 return Last
</div>

:[[File:NormalizeEx2_v1,0.png]]
:(showing frame 2744 where the loudest peak was detected, but note that ''Amplify Factor'' is the same for all frames)
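The second example above splits the channels with GetChannel because '''Normalize''' derives one gain from the global peak. A small Python sketch of the numerical difference between joint and per-channel normalization (illustrative only):

```python
def peak_gain(channel, volume=1.0):
    """Gain a peak normalizer would apply to reach `volume` at the peak."""
    return volume / max(abs(s) for s in channel)

left  = [0.8, -0.4]    # loud channel, peak 0.8
right = [0.2, -0.1]    # quiet channel, peak 0.2

# Joint (Normalize's behaviour): one gain from the global peak of 0.8,
# so the quiet channel stays 4x quieter than the loud one.
print(peak_gain(left + right))   # 1.25

# Separate (the GetChannel approach): each channel gets its own gain,
# and the quiet channel is boosted much more.
print(peak_gain(left))           # 1.25
print(peak_gain(right))          # 5.0
```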
  
[[Category:Internal filters]]
 
[[Category:Audio_filters]]

Latest revision as of 05:33, 18 September 2022
