Avs-mlrt

Author: Asd-g
Version: 1.0.1
Download: avs-mlrt.7z
Category: Multipurpose
License: GPLv3

Description

This project provides AviSynth+ ML filter runtimes for a variety of platforms.

This is a partial port of the VapourSynth plugin vs-mlrt.


To simplify usage, a wrapper mlrt.avsi is provided for all bundled models.
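
As a rough illustration only (the wrapper's actual function names and parameters are defined in mlrt.avsi and should be checked there), using the wrapper could look like this:
LoadPlugin("mlrt_ncnn.dll")
Import("mlrt.avsi")
ColorBarsHD()
ConvertToPlanarRGB()
ConvertBits(32)
# assumed wrapper function for a waifu2x-style model; verify the name and arguments against mlrt.avsi
mlrt_W2x(noise=0, scale=2)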

Custom models can be found in this doom9 thread.

Filters

  • mlrt_ncnn - ncnn is a popular AI inference runtime. mlrt_ncnn provides a Vulkan-based runtime for some AI filters. It includes support for on-the-fly ONNX to ncnn native format conversion so as to provide a unified interface across all runtimes provided by this project.
  • mlrt_ov - OpenVINO is an AI inference runtime developed by Intel, mainly targeting x86 CPUs and Intel GPUs. The mlrt_ov plugin provides an optimized pure CPU and Intel GPU runtime for some popular AI filters. Supported Intel GPUs are Gen 8+ (Broadwell and newer) and the Arc series.



Requirements

  • Vulkan compatible device (mlrt_ncnn only)
  • Intel GPU (mlrt_ov only, device="GPU" only)
  • [x64]: AviSynth+ r3928 or greater (AviSynth+ 3.7.3, test 6, r3935, can be downloaded from here)
  • Microsoft Visual C++ Redistributable Package 2022 (can be downloaded from here)


Syntax and Parameters

mlrt_ncnn

mlrt_ncnn (clip[] input, string "network_path", int "overlap_w", int "overlap_h", int "tilesize_w", int "tilesize_h", int "device_id", int "num_streams", bool "builtin", string "builtindir", bool "fp16", bool "path_is_serialization", bool "list_gpu")


clip   =
Clips to process.
They must be in 32-bit planar RGB/Gray format and have the same dimensions and the same number of frames.


string  network_path =
Path to the model.


int  overlap_w = 0
int  overlap_h = 0
Overlap width and overlap height of the tiles, respectively.
Must be less than or equal to tilesize_w / 2 and tilesize_h / 2, respectively.
Default: 0.


int  tilesize_w = input_width
int  tilesize_h = input_height
Tile width and height, respectively.
Use a smaller value to reduce GPU memory usage.
Must be specified when overlap_w / overlap_h > 0.
Default: input_width, input_height.
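
For example (values chosen purely for illustration, and the model name is a placeholder), a 1920x1080 clip could be processed in four 960x540 tiles with a 16-pixel overlap, which satisfies the overlap <= tilesize / 2 rule:
# tile the frame into 960x540 blocks with a 16 px overlap (illustrative values)
mlrt_ncnn(network_path="placeholder.onnx", tilesize_w=960, tilesize_h=540, overlap_w=16, overlap_h=16)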


int  device_id =
GPU device to use.
If not specified, the default device is selected.


int  num_streams = 1
Number of parallel GPU streams for execution.
Default: 1.


bool  builtin = True
Whether the models are located in the same folder as the plugin.
Default: True.


string  builtindir = "models"
Root folder when builtin is used.
Default: "models".


bool  fp16 = False
Enable FP16 mode.
Default: False.


bool  path_is_serialization = False
Whether the model is serialized into one contiguous memory buffer.
Default: False.


bool  list_gpu = False
Simply prints a list of the available GPU devices on the frame and does nothing else.
Default: False.
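
A minimal end-to-end call might look as follows; the model file name is only a placeholder and the other parameter values are illustrative:
LoadPlugin("mlrt_ncnn.dll")
ColorBarsHD()
ConvertToPlanarRGB()
ConvertBits(32)
# print the available Vulkan devices instead of processing:
# mlrt_ncnn(list_gpu=true)
mlrt_ncnn(network_path="placeholder.onnx", fp16=true, device_id=0, num_streams=2)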


mlrt_ov


Download the required OpenVINO runtimes from here. After that, there are a few options:

  • Add the extracted files to PATH.
  • Place the extracted files in the same location as mlrt_ov.dll.
  • (Requires LoadDLL) Create AutoLoadDll.avsi with the following:
LoadDLL("path_to\tbb.dll")
LoadDLL("path_to\openvino.dll")
LoadPlugin("mlrt_ov.dll")

mlrt_ov (clip[] input, string "network_path", int "overlap_w", int "overlap_h", int "tilesize_w", int "tilesize_h", string "device", bool "builtin", string "builtindir", bool "fp16", string "config", bool "path_is_serialization", bool "list_devices", string[] "fp16_blacklist_ops", string "dot_path")


clip   =
Clips to process.
They must be in 32-bit planar RGB/Gray format and have the same dimensions and the same number of frames.


string  network_path =
Path to the model.


int  overlap_w = 0
int  overlap_h = 0
Overlap width and overlap height of the tiles, respectively.
Must be less than or equal to tilesize_w / 2 and tilesize_h / 2, respectively.
Default: 0.


int  tilesize_w = input_width
int  tilesize_h = input_height
Tile width and height, respectively.
Use a smaller value to reduce GPU memory usage.
Must be specified when overlap_w / overlap_h > 0.
Default: input_width, input_height.


string  device = "CPU"
Device to use: CPU or GPU.
If there is more than one GPU device, use "GPU.0" for the first device, "GPU.1" for the second, and so on.
Default: "CPU".


bool  builtin = True
Whether the models are located in the same folder as the plugin.
Default: True.


string  builtindir = "models"
Root folder when builtin is used.
Default: "models".


bool  fp16 = False
Enable FP16 mode.
Default: False.


string  config =
Configuration parameters.
CPU configuration parameters can be found here.
GPU configuration parameters can be found here.
The KEY_ prefix must be omitted.
Format is: param=value.
If more than one parameter is specified, the parameters must be separated by a space.
For example, to disable all internal CPU threading: config="CPU_THROUGHPUT_STREAMS=0 CPU_THREADS_NUM=1 CPU_BIND_THREAD=NO"
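
Passed in a script, that example would look like this (the model name is a placeholder):
# disable all internal CPU threading via the config string shown above
mlrt_ov(network_path="placeholder.onnx", device="CPU", config="CPU_THROUGHPUT_STREAMS=0 CPU_THREADS_NUM=1 CPU_BIND_THREAD=NO")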


bool  path_is_serialization = False
Whether the model is serialized into one contiguous memory buffer.
Default: False.


bool  list_devices = False
Simply prints a list of the available CPU/GPU devices on the frame and does nothing else.
Default: False.


string[]  fp16_blacklist_ops = ["ArrayFeatureExtractor", "Binarizer", "CastMap", "CategoryMapper", "DictVectorizer", "FeatureVectorizer", "Imputer", "LabelEncoder", "LinearClassifier", "LinearRegressor", "Normalizer", "OneHotEncoder", "SVMClassifier", "TreeEnsembleRegressor", "ZipMap", "NonMaxSuppression", "TopK", "RoiAlign", "Range", "CumSum", "Min", "Max"]
Configurable FP16 operations black list.
Default: ["ArrayFeatureExtractor", "Binarizer", "CastMap", "CategoryMapper", "DictVectorizer", "FeatureVectorizer", "Imputer", "LabelEncoder", "LinearClassifier", "LinearRegressor", "Normalizer", "OneHotEncoder", "SVMClassifier", "TreeEnsembleRegressor", "ZipMap", "NonMaxSuppression", "TopK", "RoiAlign", "Range", "CumSum", "Min", "Max"].


string  dot_path =
Path for the .dot file.
Allows serializing the network graph to xDot format.
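
A minimal end-to-end mlrt_ov call might look as follows; again, the model file name is only a placeholder:
LoadPlugin("mlrt_ov.dll")
ColorBarsHD()
ConvertToPlanarRGB()
ConvertBits(32)
# print the available OpenVINO devices instead of processing:
# mlrt_ov(list_devices=true)
mlrt_ov(network_path="placeholder.onnx", device="CPU")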


Examples

TODO

mlrt_ncnn Changelog

Version  Date        Changes
v1.0.1   2023/03/20  Changed AviSynth+ requirements.
v1.0.0   2023/01/27  Initial release.

mlrt_ov Changelog

Version  Date        Changes
v1.0.0   2023/03/20  Initial release.


External Links

  • GitHub - Source code repository.
  • Doom9 - ONNX models compatible with avs-mlrt.




Back to External Filters
