Avs-mlrt


Abstract
Author       Asd-g
Version      1.0.1
Download     [https://github.com/Asd-g/avs-mlrt/releases/ avs-mlrt.7z]
Category     Multipurpose
License      GPLv3


== Description ==

This project provides AviSynth+ ML filter runtimes for a variety of platforms.

This is a partial port of the VapourSynth plugin [https://github.com/AmusementClub/vs-mlrt vs-mlrt].


To simplify usage, a wrapper [https://github.com/Asd-g/avs-mlrt/blob/main/mlrt.avsi mlrt.avsi] is provided for all bundled models.

Custom models can be found in [https://forum.doom9.org/showthread.php?t=184768 this doom9 thread].

'''Filters'''

  • mlrt_ncnn - [https://github.com/Tencent/ncnn ncnn] is a popular AI inference runtime. mlrt_ncnn provides a Vulkan-based runtime for some AI filters, with support for on-the-fly conversion from ONNX to ncnn's native format, so that it offers the same interface as the other runtimes in this project.
  • mlrt_ov - [https://docs.openvino.ai/latest/index.html OpenVINO] is an AI inference runtime developed by Intel, mainly targeting x86 CPUs and Intel GPUs. The mlrt_ov plugin provides an optimized pure CPU and Intel GPU runtime for some popular AI filters; supported Intel GPUs are Gen 8+ (Broadwell and newer) and the Arc series. A minimal invocation of both runtimes is sketched below.
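
Both runtimes share the same core parameters, so switching between them is mostly a matter of changing the function name. A minimal sketch (the model filename is hypothetical, and BlankClip merely stands in for a real source):

 src = BlankClip(width=640, height=360, pixel_type="RGBPS")   # 32-bit planar RGB test clip
 # Vulkan runtime (ncnn)
 out_ncnn = mlrt_ncnn(src, network_path="waifu2x.onnx", builtin=true)
 # OpenVINO runtime on the CPU
 out_ov = mlrt_ov(src, network_path="waifu2x.onnx", builtin=true, device="CPU")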



== Requirements ==

  • [https://en.wikipedia.org/wiki/Vulkan Vulkan] compatible device (mlrt_ncnn only)
  • Intel GPU (mlrt_ov only, device="GPU" only)
  • [x64]: AviSynth+ r3928 or greater (AviSynth+ 3.7.3 test 6, r3935, can be [https://forum.doom9.org/showthread.php?p=1983250#post1983250 downloaded from here])
  • Microsoft Visual C++ Redistributable Package 2022 (can be downloaded from [https://github.com/abbodi1406/vcredist/releases here])


== Syntax and Parameters ==

=== mlrt_ncnn ===

mlrt_ncnn (clip[] input, string "network_path", int "overlap_w", int "overlap_h", int "tilesize_w", int "tilesize_h", int "device_id", int "num_streams", bool "builtin", string "builtindir", bool "fp16", bool "path_is_serialization", bool "list_gpu")


clip   =
Clips to process.
They must be in RGB/Gray 32-bit planar format and have the same dimensions and the same number of frames.
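
Typical sources are YUV, so a conversion step is usually needed first. A sketch using the core AviSynth+ conversion filters (source is whatever clip you start from):

 src = ConvertToPlanarRGB(source)   # to planar RGB
 src = ConvertBits(src, 32)         # to 32-bit float samples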


string  network_path =
Path to the model.


int  overlap_w = 0
int  overlap_h = 0
Overlap width and overlap height of the tiles, respectively.
Must be less than or equal to tilesize_w / 2 and tilesize_h / 2, respectively.
Default: 0.


int  tilesize_w = input_width
int  tilesize_h = input_height
Tile width and height, respectively.
Use smaller values to reduce GPU memory usage.
Must be specified when overlap_w / overlap_h > 0.
Default: input_width, input_height.
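
For instance, to process a 1920x1080 clip as four 960x540 tiles with a 16-pixel overlap (model filename hypothetical):

 out = mlrt_ncnn(src, network_path="model.onnx", tilesize_w=960, tilesize_h=540, overlap_w=16, overlap_h=16)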


int  device_id =
GPU device to use.
By default, the default device is selected.


int  num_streams = 1
Number of parallel GPU execution streams.
Default: 1.
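
For example, to run on the second GPU with two parallel streams (device numbering is assumed here to be 0-based):

 out = mlrt_ncnn(src, network_path="model.onnx", device_id=1, num_streams=2)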


bool  builtin = True
Whether the models are located in the same folder as the plugin.
Default: True.


string  builtindir = "models"
Root folder when builtin is used.
Default: "models".


bool  fp16 = False
Enable FP16 mode.
Default: False.


bool  path_is_serialization = False
Whether the model is serialized into one contiguous memory buffer.
Default: False.


bool  list_gpu = False
Simply print a list of available GPU devices on the frame and does nothing else.
Default: False.
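
A quick way to identify the available devices; it is assumed here that network_path can be omitted, since nothing is processed in this mode:

 devices = mlrt_ncnn(src, list_gpu=true)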


=== mlrt_ov ===


Download the required OpenVINO runtimes from [https://github.com/Asd-g/avs-mlrt/blob/main/2022.3.7z here]. After that, there are a few options:

  • Add the extracted files to PATH.
  • Place the extracted files in the same location as mlrt_ov.dll.
  • (Requires LoadDLL) Create AutoLoadDll.avsi with the following:
LoadDLL("path_to\tbb.dll")
LoadDLL("path_to\openvino.dll")
LoadPlugin("mlrt_ov.dll")
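
If AutoLoadDll.avsi is placed in the AviSynth+ autoload plugins folder it is loaded automatically; otherwise it can be pulled in explicitly with Import("path_to\AutoLoadDll.avsi").
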
mlrt_ov (clip[] input, string "network_path", int "overlap_w", int "overlap_h", int "tilesize_w", int "tilesize_h", string "device", bool "builtin", string "builtindir", bool "fp16", string "config", bool "path_is_serialization", bool "list_devices", string[] "fp16_blacklist_ops", string "dot_path")


clip   =
Clips to process.
They must be in RGB/Gray 32-bit planar format and have the same dimensions and the same number of frames.


string  network_path =
Path to the model.


int  overlap_w = 0
int  overlap_h = 0
Overlap width and overlap height of the tiles, respectively.
Must be less than or equal to tilesize_w / 2 and tilesize_h / 2, respectively.
Default: 0.


int  tilesize_w = input_width
int  tilesize_h = input_height
Tile width and height, respectively.
Use smaller values to reduce GPU memory usage.
Must be specified when overlap_w / overlap_h > 0.
Default: input_width, input_height.


string  device = "CPU"
Device to use - CPU or GPU.
For example, if there is more than one GPU device, use "GPU.0" for the first device and "GPU.1" for the second.
Default: "CPU".


bool  builtin = True
Whether the models are located in the same folder as the plugin.
Default: True.


string  builtindir = "models"
Root folder when builtin is used.
Default: "models".


bool  fp16 = False
Enable FP16 mode.
Default: False.


string  config =
Configuration parameters.
CPU configuration parameters can be found [https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_supported_plugins_CPU.html#supported-configuration-parameters here].
GPU configuration parameters can be found [https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_supported_plugins_GPU.html#supported-configuration-parameters here].
The KEY_ prefix must be omitted.
Format is: param=value.
If more than one parameter is specified, the parameters must be separated by spaces.
For example, to disable all internal CPU threading: config="CPU_THROUGHPUT_STREAMS=0 CPU_THREADS_NUM=1 CPU_BIND_THREAD=NO"
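
The same string dropped into a full call (model filename hypothetical):

 out = mlrt_ov(src, network_path="model.onnx", device="CPU", \
       config="CPU_THROUGHPUT_STREAMS=0 CPU_THREADS_NUM=1 CPU_BIND_THREAD=NO")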


bool  path_is_serialization = False
Whether the model is serialized into one contiguous memory buffer.
Default: False.


bool  list_devices = False
Simply print a list of available CPU/GPU devices on the frame and does nothing else.
Default: False.


string[]  fp16_blacklist_ops = ["ArrayFeatureExtractor", "Binarizer", "CastMap", "CategoryMapper", "DictVectorizer", "FeatureVectorizer", "Imputer", "LabelEncoder", "LinearClassifier", "LinearRegressor", "Normalizer", "OneHotEncoder", "SVMClassifier", "TreeEnsembleRegressor", "ZipMap", "NonMaxSuppression", "TopK", "RoiAlign", "Range", "CumSum", "Min", "Max"]
Configurable FP16 operations blacklist.
Default: ["ArrayFeatureExtractor", "Binarizer", "CastMap", "CategoryMapper", "DictVectorizer", "FeatureVectorizer", "Imputer", "LabelEncoder", "LinearClassifier", "LinearRegressor", "Normalizer", "OneHotEncoder", "SVMClassifier", "TreeEnsembleRegressor", "ZipMap", "NonMaxSuppression", "TopK", "RoiAlign", "Range", "CumSum", "Min", "Max"].
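
For example, to enable FP16 while keeping only Min and Max in FP32 (it is assumed here that the option replaces the default list rather than extending it):

 out = mlrt_ov(src, network_path="model.onnx", fp16=true, fp16_blacklist_ops=["Min", "Max"])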


string  dot_path =
Path for the .dot file.
Allows serializing the network to xDot format.


== Examples ==

TODO
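
A minimal end-to-end sketch, assuming a model bundled in the plugin's models folder (the model filename is hypothetical):

 LoadPlugin("path_to\mlrt_ncnn.dll")
 src = ColorBars(pixel_type="YV24")
 src = ConvertToPlanarRGB(src).ConvertBits(32)   # RGB/Gray 32-bit planar input required
 out = mlrt_ncnn(src, network_path="model.onnx", builtin=true)
 return out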

== mlrt_ncnn Changelog ==

Version      Date            Changes
v1.0.1       2023/03/20      - Changed AviSynth+ requirements.
v1.0.0       2023/01/27      - Initial release

== mlrt_ov Changelog ==

Version      Date            Changes
v1.0.0       2023/03/20      - Initial release


== External Links ==

  • [https://github.com/Asd-g/avs-mlrt GitHub] - Source code repository.
  • [https://forum.doom9.org/showthread.php?t=184768 Doom9] - ONNX models compatible with avs-mlrt.




Back to External Filters
