This guide walks the user through the development of new features in LPDNN’s SDK.

A detailed explanation of LPDNN is offered in the Definition section of the AI App user guide.

How to compile LPDNN’s SDK

This section describes how to compile the Low Power Deep Neural Network Software Development Kit (LPDNN SDK).

This project allows you to build and experiment with LPDNN and its libraries. It contains:

  • the LPDNN library (LPDNN in ./lpdnn)

  • unit tests (./test)

  • a set of challenges, models and sample datasets to generate AI Apps (CATALOG in ./catalog)

  • a set of platforms for which the SDK can be cross-built (PLATFORMS in ./platform/platforms)

  • sample applications to experiment with the models (./app)

  • a set of external packages needed to build the library and applications (./deps-base, ./ext)

  • some utility scripts (code formatting, build, git update)


This project requires a few local dependencies on your system. Please follow the installation procedure of:


For modularity, the SDK is not a single monolithic repository but is organized in submodules. To start working with the SDK:

  1. Make sure you’ve added your public ssh key to GitLab keys

  2. Make sure you log in to the docker registry:

    docker login
  3. Clone the project repository and submodules:

    git clone
    cd sdk
  4. Install python dependencies
    sudo pip3 install -r ai-app/bonseyes-cli/tool/requirements.txt
  5. Initialize the platforms for which you want to build. For example, to use the host PC with ubuntu18 and raspberry:
    cd platform
    git submodule update --init --recursive platforms/x86_64-ubuntu18
    git submodule update --init --recursive platforms/raspberry4b_64-ubuntu20
    cd ..
  6. Initialize the challenges and models you need. For example, to use lenet5:
    cd catalog
    git submodule init image-classification/mnist/challenge
    git submodule init image-classification/mnist/models/mnist-lenet5
    cd ..
  7. Get/update all required submodules (platforms and models):


Build for specific Target Platform

In order to build the model and AI App for a specific target platform, execute the build script and select one of the available target platforms when prompted.


The platform can be specified directly by using the --platform parameter, for example:

./  --platform raspberry3bp-raspian_stretch

The build tree is generated by default in the subdirectory ./build/{platformName}/, where {platformName} is the name of the selected platform. It is often convenient to specify a build directory outside the source tree and to use the same paths inside the docker container as on the host. This keeps the source tree clean and preserves the correct links to source file names in compiler error messages:

./ --output-dir ../sdk-build/ --platform x86_64-ubuntu18

Plugin compilation and cmake-options can be controlled via parameters (see ./ --help).

Run AiApp from command line

To test the lenet5 sample application, execute the following commands:

cd ./build/x86_64-ubuntu18/install/bin
./aiapp-cli --cfg ../share/ai-apps/mnist-lenet5-default/ai_app_config.json -f ../../../../catalog/image-classification/mnist/challenge/samples/image_100.png

How to add new pre- & post-processing?

As explained in LPDNN’s Definition, AI Apps are composed of three parts:

  • Pre-processing: Step to prepare, normalize or convert the input data into the required input that is expected by the DNN.

  • DNN inference: Forward-pass of the neural network. The execution is taken care of by an inference engine.

  • Post-processing: Conversion of the neural network’s output into structured and human-readable information.

In some cases, the pre- and post-processing might not be supported by LPDNN and need to be added. This section describes how to add such methods in LPDNN.

AI App Preprocessing

The pre-processing step is performed by a pre-processor, which takes the raw input and produces a data blob for the inference engine. LPDNN’s preprocessors are located in ai-app/core/components. LPDNN contains the following pre-processors:

  • Image preprocessor takes the input image and transforms it into the format and specifications that the neural network expects. The pre-processor can execute several functions such as cropping, normalization, filtering, domain transformations, etc.

  • Audio preprocessor takes a wav file and obtains the MFCC features by sliding a window over the length of the audio file.

  • Signal preprocessor takes an input JSON file and may transform it by applying different filters.

More information about the currently supported pre-processors is available in LPDNN Pre-processing.
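To make the window-sliding idea concrete, here is a minimal sketch of the arithmetic involved. This is illustrative only; the function name and the 25 ms/10 ms figures are assumptions, not LPDNN’s actual parameters.

```python
# Hypothetical sketch of the window-sliding arithmetic used when extracting
# per-window features (e.g. MFCC) from an audio signal; not LPDNN code.
def num_windows(num_samples: int, window_len: int, hop_len: int) -> int:
    """Number of full windows obtained by stepping a window by hop_len samples."""
    if num_samples < window_len:
        return 0
    return 1 + (num_samples - window_len) // hop_len

# e.g. 1 s of 16 kHz audio with 25 ms windows and a 10 ms hop:
frames = num_windows(16000, 400, 160)  # -> 98 windows
```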

AI App classes may share the same input data type and have the preprocessor in common. For example, image-based AI Apps, e.g., image classification, object detection, share a common image preprocessor. Similarly, signal-processing AI Apps use a common signal preprocessor.

Note: Currently, the data blob, i.e., input tile to the neural network, that results from the pre-processors is in the FP32 format. The input tiles assume the NCWH format (N for batch size, C for channel number, W for width, H for height). Also, one should note that the only currently-supported batch size is 1.
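For illustration (a sketch, not LPDNN code), the flat offset of an element in such a dense NCWH blob can be computed as:

```python
# Offset of element (n, c, w, h) in a dense NCWH tensor of shape (N, C, W, H).
# With the currently supported batch size of 1, n is always 0.
def ncwh_offset(n: int, c: int, w: int, h: int, C: int, W: int, H: int) -> int:
    return ((n * C + c) * W + w) * H + h

# Channel plane 2 of a 1x3x224x224 FP32 blob starts 2*224*224 elements in:
offset = ncwh_offset(0, 2, 0, 0, 3, 224, 224)  # -> 100352
```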

Should one need to support a different layout, data type, or set of methods, or a different order of these, an appropriate preprocessing configuration would need to be added and implemented.


Each preprocessor maintains a set of related preprocessing routines. The developer defines the AI App preprocessing by declaring the routine names and by providing the routine arguments within the AI App configuration files, as explained in LPDNN Pre-processing. The order of routine application may be predefined for a given input type (e.g., crop always precedes resize) and depends on a particular preprocessor implementation.
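As a purely illustrative sketch (every key and routine name below is an assumption; the real structure is dictated by each preprocessor’s schema and generator files), a preprocessing declaration inside ai_app_config.json could look roughly like:

```json
{
  "subcomponents": {
    "preprocessing": {
      "parameters": {
        "crop": { "enabled": true },
        "resize": { "width": 224, "height": 224 },
        "normalize": { "mean": [0.485, 0.456, 0.406], "std": [0.229, 0.224, 0.225] }
      }
    }
  }
}
```

Note that the preprocessor applies the declared routines in its own predefined order, regardless of the order in which they appear in the file.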

The preprocessor implementation is located in an appropriate ai-app/core/components subdirectory. For instance:

  • ai-app/core/components/image_preprocessor/

  • ai-app/core/components/signal_preprocessor/

  • ai-app/core/components/audio_preprocessor/

The preprocessor directory contains:

  • src/: C++ source files that contain the preprocessor’s implemented methods.

  • schemas/: schema definitions.

  • generator/: Python generator scripts that serve to create the AI App config JSON (ai_app_config.json), including the preprocessing steps inside.

  • component.yml: YAML file that defines the preprocessor.

The C++ source files specify the actual implementation of the preprocessing routines. Adding a new step in the preprocessing would require changing the C++ code and extending it with a new routine. For exposing the newly-added step to the AI App configuration (ai_app_config.json), one would need to extend the schema and the generator files accordingly. A proper extension of these files ensures syntax-checks and correctness of generated AI App configurations.

Driving example

Next, we will use the image preprocessor as a driving example to cover in more detail the files residing in the ai-app/core/components/image_preprocessor/src directory. The src/ directory contains the following files (C++ and corresponding header files):

image_preprocessor (cpp/hpp)

The image preprocessor class defined in image_preprocessor.cpp and image_preprocessor.hpp contains the methods and members related to the image preprocessing. It also contains the preprocessor-configuration related code:

  • The public methods provide means to initialise, configure, execute, and get output from the preprocessor. These methods are sufficiently stable and should not require immediate change or extension.

  • The private methods implement the image preprocessing (image_crop, image_align, image_normalize) or related transformations (std_image_to_blob, image_to_mat).

It also contains:

  • preprocessor-configuration details given in the ai_app_config.json (_cfg).

  • the output blob (_out), i.e., the input tile for the inference process.

The principal preprocessor method named process invokes the related private methods in a predefined order. Its signature is shown below. Configuring the preprocessor to act on a part of the input image is possible through the bounding_box method argument.

bool Image_preprocessor::process(const ai_app::Image& img, ai_app::Rect* bounding_box);

The process method takes an img as input, which can be of format:

  • ai_app::Image::Format::tile: ready-to-use input blob, no pre-processing is performed.

  • ai_app::Image::Format::encoded: jpg/png format.

  • ai_app::Image::Format::raw_rgb8/raw_rgba8: rgb8 or rgba8 format.

  • ai_app::Image::Format::raw_grayscale: greyscale format.

All formats except the tile format require the private std_image_to_blob method to convert the input image into an input blob for the neural network.

bool Image_preprocessor::std_image_to_blob(const ai_app::Image& img, ai_app::Rect* bounding_box)

The method std_image_to_blob defines the preprocessing order:

  1. image_crop

  2. image_align

  3. image_normalize

This method produces the preprocessor’s output blob (_out).

Should one need to support a tensor format different than NCWH or a different image plane order (BGR in place of RGB), this is the right place to write memory and blob rearrangement code.
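As an illustration of such rearrangement code (a numpy sketch, not the actual ncv-based implementation), converting a decoded HWC/RGB image into an NCWH blob with a BGR plane order could look like:

```python
import numpy as np

# Hypothetical example image in HWC order (as decoders typically produce),
# with RGB channel planes; all names here are illustrative, not LPDNN code.
img_hwc = np.zeros((224, 224, 3), dtype=np.float32)

bgr = img_hwc[:, :, ::-1]                  # swap plane order: RGB -> BGR
blob = bgr.transpose(2, 1, 0)[np.newaxis]  # HWC -> CWH, then prepend N=1
assert blob.shape == (1, 3, 224, 224)      # NCWH, batch size 1
```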

As mentioned before, the output blob’s data (_out) is expressed in FP32. To support different data types, overloading of the internal preprocessing methods within std_image_to_blob may be required. In addition, the pre-processor’s output blob (_out) structure, which is defined in aiapp.hpp, may need to be modified.


The file image_preprocessor_align.cpp contains the implementation of different landmark-based transformations (affine2, affine3, rigid5, umeyama, align_affine, none) that can be referenced from the config file. To add a new transformation, one has to extend the principal function image_align, whose signature is shown below. It operates on matrix data types from the ncv wrapper library.

bool Image_preprocessor::image_align(const ncv::Mat& src, ncv::Mat& dst,
                              const ai_app::Landmarks& image_landmarks,
                              const ai_app::Rect& bounding_box,
                              ai_app::Dim2d out_dim)


The file image_landmarks.cpp contains the implementation of various landmark conversion routines. Some of them find the eye centers out of different face landmark formats (face_49_left_eye_center, face_49_right_eye_center, face_68_left_eye_center, face_68_right_eye_center). Others convert from 49-point or 68-point face landmarks to a lower number of landmarks (convert_landmarks_to_eyes_center_2, convert_landmarks_to_eyes_nose_5, convert_landmarks_to_eyes_nose_mouth_5). Finally, there is also a general conversion routine (convert_landmarks).
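As a toy illustration of what such a conversion does (hypothetical code, not one of the actual routines above), an eye center can be reduced from a set of eye landmarks by averaging:

```python
# Hypothetical sketch: collapse the landmark points outlining one eye into a
# single (x, y) eye-center point by taking their mean. Not LPDNN code.
def eye_center(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(points), sum(ys) / len(points))

# e.g. three illustrative points from the left-eye contour of a face landmark set:
center = eye_center([(36.0, 40.0), (44.0, 38.0), (40.0, 42.0)])  # -> (40.0, 40.0)
```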


The ai_app::Image class is defined in image_based.hpp. Currently, it contains landmarks and region-of-interest (roi) structures, which are used during the pre-processing steps. Should one need to change the ai_app::Image class to incorporate more details about the image, they should be added in the above file and incorporated in the object_detection_item_to_image routine.

AI App Postprocessing

The post-processing step converts the neural network’s output into more structured and human-readable information. Each AI App class has its own set of postprocessing routines because they are often specific to a particular algorithm, architecture or implementation solving the corresponding AI challenge, so their reuse is limited.


The implementation of the postprocessing routines is within each AI App class’s _preprocess_infer directory, e.g., ai-app/core/components/object_detection_preprocess_infer/src.

Adding a new postprocessing routine assumes extending the source files (e.g., in the case of object detection, object_detection_preprocess_infer.cpp) to handle the new out_format defined in the subcomponents.inference.parameters.out_format of the AI App configuration (ai_app_config.json). The code below shows an example of adding new_routine for object detection postprocessing.

// Obtain the output blob
const ai_app::Blob output_blob = inference_output();

if (out_format.empty() || out_format == "ssd") {
  // further implementation of the default postprocessing
} else if (out_format == "new_routine") {
  // new_routine implementation
}

Complex postprocessing, like body pose openpifpaf, may go beyond changing the single file and can require including an entire implementation source tree, with its own cmake files and build configurations. Such an implementation should reside in a separate subdirectory of src/.

In the case of image segmentation, two additional structures are provided to help with the postprocessing (Pixel and Segmentation). The Item structure contains the class index, confidence and segmentation. The final result can hold multiple items. In the case of instance segmentation, separate instances of the same class can be stored as separate items with the same class index, while in the case of semantic segmentation, one item can be used to store all pixels of the same class.

The new out_format defined in the subcomponents.inference.parameters.out_format of the AI App configuration (ai_app_config.json) can take any name that is desired. The new name needs to be listed into the ai-app/bonseyes-cli/algorithms/inference_processor/parameters.yml YML file, under the out_format enum section.
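For illustration only (the exact layout is an assumption; use the existing parameters.yml as the authoritative reference), the enum entry might look like:

```yaml
# Hypothetical fragment of ai-app/bonseyes-cli/algorithms/inference_processor/parameters.yml
out_format:
  enum:
    - ssd
    - new_routine   # the newly added postprocessing format
```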

Output result

The postprocessing of a given AI App class returns a corresponding result type. The return types of AI App classes and their postprocessing routines are defined in appropriate header files, which reside in the ai-app/bonseyes-cli/inc directory. Currently, the following header files are available in LPDNN:

  • image_classification.hpp

  • object_detection.hpp

  • signal_classification.hpp

  • face_recognition.hpp

  • image_segmentation.hpp

Each of these files contains the definition of a struct Result for the respective class. Below is an example definition of the object detection result type:

struct Result {
  struct Item {
    float confidence;
    int class_index;
    Rect bounding_box;
    Landmarks landmarks;
    Landmarks3d landmarks3d;
    Orientation orientation{};
  };
  bool success{};
  std::vector<Item> items;
};

This structure may need to change if a new result field is required for the given class. In that case, the documentation for How to add a new AI App class? provides reference guidance; only the section Files that need to be changed needs to be followed, without creating new files.

How to add a new inference engine?

The files related to the LPDNN engines integration reside in the ai-app/engines directory. An engine implementation consists of files written in C++, python, YML, and cmake. The directory structure is as follows (ai-app/engines/lne/components/network_processor):

  • CMakeLists.txt

  • component.yml

  • generator/

  • schemas/

  • src/

How to add a new AI App class?

General workflow

  • Add your model (challenge and models) to the catalog.

  • Create your class and all helper types in lpdnn/ai-app/bonseyes-cli.

  • Create interface and algorithm for your class in lpdnn/ai-app/bonseyes-cli.

  • Create inference and postprocess for your class in lpdnn/ai-app.

  • Create json conversions for your types and structures.

Files that need to be changed

  • ai-app/bonseyes-cli/inc/

    • Create new class (ex. image_segmentation.hpp) and define the new Result structure.

    • In image_based.hpp create all the helper structures you will use (ex. Segmentation and Pixel).

    • In aiapp_cvt_json_str.hpp create all to- and from-json conversions for the helper structures you created.

  • ai-app/bonseyes-cli/algorithms/

    • Create new algorithm file for your app (ex. image_segmentation/algorithm.yml) and link the appropriate interface and components in that file.

  • ai-app/bonseyes-cli/interfaces/

    • Create 5 necessary files in a folder for your class (ex. image_segmentation/): ground_truth.yml, http_api.yml, interface.yml, parameters.yml, results.yml and change those files to suit your needs.

  • Example commit: ae9e7270

  • ai-app/core/base/

    • In the files aiapp_cvt_json.cpp, aiapp_cvt_json.hpp and aiapp_cvt_json_str.cpp add the necessary conversions for your data types.

  • ai-app/core/components/

    • Create separate folder for your class (ex. image_segmentation/) and create the structure needed (use ai-app/core/components/image_segmentation/ as an example).

  • ai-app/core/CMakeLists.txt

    • In CMakeLists.txt add add_subdirectory(components/<your_class_name>).

  • Example commit: 033c1dd9

Catalog changes

  • catalog/

    • Create your AI App class folder structure (use image_segmentation/defect-detection as an example).

    • challenge/ and every model in models/ should be separate git repositories and should be added via git submodule add.

  • Example commit: 9b8de426

  • app/aiapp_cli/

    • Add the necessary wrappers for your AI App class in aiapp_cli.cpp and aiapp_cli.hpp.

  • app/utils/

    • Add the necessary wrappers and json conversions for your AI App class in aiapp_json.cpp, aiapp_json.hpp, aiapp_wrapper.cpp and aiapp_wrapper.hpp.

  • Example commit: 01afe16f

How to create and upload a deployment package?


How to check if an ONNX model is supported

  • Convert trained model to ONNX

  • If possible, optimize the model using the onnx simplifier as this may reduce the amount of operators you need to support

There are two possible ways of checking if the ONNX model is supported:

  • Within LPDNN’s SDK

  • Using the Bonseyes-cli tool


To check if the model is supported, run the following commands from the root folder of LPDNN’s SDK (replace /path/to/your/model.onnx with your model):

python3 lpdnn/tools/lib/lpdnn_onnx/ -m /path/to/your/model.onnx

Bonseyes CLI

First, install the Bonseyes-cli tool as explained in the Install Bonseyes tool section.

Then, install LPDNN’s Python lib by executing the following commands:

pip3 install numpy onnx==1.7.0
pip3 install lpdnn-python-lib --extra-index-url

Finally, you can check if the ONNX model is supported by running the following commands (replace /path/to/your/model.onnx with your model):

bonseyes onnx check --model /path/to/your/model.onnx

If the output is "Process finish successfully - ONNX model can be converted", then all operators should be supported. Warnings should not be an issue, but they should be kept in mind in case your model does not work.

How to add a new operator

General workflow

  • Find which operator is missing using the How to check if an ONNX model is supported step

  • Implement the operator for the conversion and for inference with the operator_files_that_need_changes

  • Run the How to check if an ONNX model is supported step again to check if there are any errors

  • Write tests for inference: writing_running_inference_tests

  • Create an AI Model (Catalog) by adding a new entry in the catalog and use the script to compare outputs

  • Check that the inference tests pass

Files that need to be changed

This is a summary of what changes have to be made to what files. Check the commits linked above to get a better idea on what should be done.

1. Define operator and its params in python for the converter

  • tools/lib/lpdnn/

    • Add type in the LpdnnLayerType enum

    • If required, create additional enums for modes / types / etc.

  • tools/lib/lpdnn/

    • Define a class for your layer

      • Set its LpdnnLayerType in the super constructor

      • Add (default?) attributes in __init__ function

      • Implement _param_struct_name, _compute_output_shapes and potentially _compute_flops

    • Add a param name in the params array

2. Register your operator and how it should be created in the converter

  • tools/lib/lpdnn_onnx/op_definition/operators

    • Create a file or edit an existing one for your operator

    • See the “ONNX Converter Operator Definition” below for documentation on how to write it

  • plugin/cpu_vanilla/

    • Note: this example is with the cpu vanilla plugin, but is probably similar for the other plugins.

    • Use add_layer to add the type defined in the LpdnnLayerType enum

3. Define operator and its params in C++ for inference

  • core/inc/com/layer/LayerParam.hpp

    • Add type in the LayerType enum

    • Define how to convert from the LpdnnLayerType defined in tools/lib/lpdnn/ and the LayerType created in the previous step by adding an entry in the NLOHMANN_JSON_SERIALIZE_ENUM macro

  • core/inc/com/layer/<Name>.h

    • Create a struct defining the parameters of the operator

    • Define from_json to define conversion from parameters defined in python to parameters defined above

    • If required, define enums for modes / types / etc.

      • Use the NLOHMANN_JSON_SERIALIZE_ENUM macro to define how to convert from values defined in tools/lib/lpdnn/

  • core/inc/com/LayerParams.hpp

    • Add params defined in the previous step in LayerParams struct

    • Define how to convert from the value added in the params array in tools/lib/lpdnn/ to <name>Param class created in the previous step by adding an entry in the NLOHMANN_JSON_SERIALIZE_ENUM macro

  • plugin/cpu_vanilla/CpuVanillaPlugin.cpp

    • Register the layer.

  • plugin/cpu_vanilla/<Name>.cpp

    • Define how the layer works

    • Write Vanilla<Name>Layer to make your layer work

    • Setup getResizeLayerDesc to link the layer type to the function created above

  • lpdnn/core/src/Layer.cpp

    • Define an init function for your layer

    • Add your layer to the switch case which calls the init function

Comparing outputs

Once you have implemented your operator, you can check that it works correctly by comparing the output produced by LPDNN with the output of onnxruntime.

There are two ways of doing that:

  1. If you have an ONNX file and want to test the whole model.

  • Generate an AI app that uses your model by creating an entry in the catalog

  • The following scripts can be found in the tools folder

  • Run the AI app using the -l parameter to dump lpdnn nodes into json (in build/<platform>/install/bin)

    • ./aiapp-cli --cfg ../share/ai-apps/<model>-default/ai_app_config.json -f ~/dog.png -l lpdnn_dump.json

  • Dump ONNX layer using the same preprocessing step used by the AI app (in lpdnn/scripts/onnx)

    • python -m <model>.onnx -i lpdnn_dump.json -o onnx_dump.json

  • Compare the results (in build/<platform>/install/bin)

    • python ~/lpdnn_dump.json ../../scripts/onnx/onnx_dump.json

    • If there are no red entries, conversion is probably correct

  2. If you want to test a specific operator (or an onnx model available in Python)

  • Note: you can use this step even if you are not implementing a new operator, to check whether one is supported or not

  • Note: It’s good practice to write and push this test, to ensure that everything still works in the future

  • Make sure onnx, onnxruntime and matplotlib are installed with pip (if you get errors, try installing them)

  • Create a script for your operator in sdk/lpdnn/scripts/onnx/tests/models. It should contain the following structure (shown here with comments):

# NOTE: Almost all information for each operator is taken from the ONNX operator documentation
# That page also contains examples which can be used here (useful for some output_shape)
from onnx import helper, TensorProto

# The only restriction for this script is to contain a "get_model()" function
# which returns a dictionary with the structure shown below
def get_model():
    # The model name will be displayed in the result table and can be used to identify different test cases
    model_name = "Sqrt"

    # In almost all cases the input_shape does not need to be changed
    input_shape = [1, 3, 224, 224]
    # However, if the operator changes the size of the output (e.g. reshape), output_shape must be changed manually
    # according to the different parameters
    output_shape = [1, 3, 224, 224]

    # Define input and output according to the onnx specifications
    X = helper.make_tensor_value_info('X', TensorProto.FLOAT, input_shape)
    Y = helper.make_tensor_value_info('Y', TensorProto.FLOAT, output_shape)

    # Create the operator
    node = helper.make_node(
        'Sqrt', inputs=['X'], outputs=['Y'],
        # if the operator had attributes, they would be here (see the examples on the Operator page)
    )

    # Create the graph (add all nodes in "node" and set up inputs and outputs. No need to change the rest)
    graph = helper.make_graph([node], model_name, [X], [Y])

    # This does not need to be changed if you did not rename the variables
    return {
        "input_shape": input_shape,
        "output_shape": output_shape,
        "model_name": model_name,
        "graph": graph,
    }
  • Run the script. This will convert your model to onnx, create an entry in the catalog, build the ai-app, run the ai-app and dump the output, dump the output with onnxruntime, and compare the results.

  • Check python --help to see what is available.

  • Example usage:

    • python -t x86_64-ubuntu20 -m sqrt

  • Note that this can also be used to quickly test compatibility with any operator for any platform/plugin combination.
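The comparison the scripts perform boils down to an element-wise tolerance check. Here is a minimal sketch of that idea (the function and tolerances are illustrative, not the scripts’ actual logic):

```python
# Compare two flat lists of layer-output values (e.g. one dumped by LPDNN,
# one by onnxruntime) within absolute/relative tolerances. Illustrative only.
def outputs_match(lpdnn_vals, onnx_vals, rtol=1e-3, atol=1e-5):
    if len(lpdnn_vals) != len(onnx_vals):
        return False
    return all(abs(a - b) <= atol + rtol * abs(b)
               for a, b in zip(lpdnn_vals, onnx_vals))

ok = outputs_match([1.0, 2.0], [1.0, 2.0001])  # -> True
```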

Writing and running inference unit tests

  • Writing a test

    • Note that tests are not in the same submodule

    • Copy a file in test/test_layers/ and adapt it for your test

    • Register the test in test_main.cpp and layer_tests.hpp

  • Running tests

    • Build your AI app - ./ --platform <platform>

    • Run tests - ./build/<platform>/out/test/test_layers/test_layers

ONNX Converter Operator Definition

This part describes how to add an ONNX operator to the ONNX to LPDNN converter with proper version checks. In the best case, this should describe what is supported by which plugin.


  • Create a new file for your operator in lpdnn/tools/lib/lpdnn_onnx/op_definition/operators

  • Import this file in lpdnn/tools/lib/lpdnn_onnx/op_definition/

Writing the file

  • The goal is to convert an ONNX node to the class you defined for your layer. It only works with the attributes/nodes of the model. This is never executed at inference time, so you cannot deal with data here.

  • Decorators (@…) will be used to define metadata for your operator. This is useful as it allows:

    • Checking opset version easily

    • Define what is supported and not supported in your implementation

    • Generate documentation on what is supported

  • Steps:

    1. Register an operator definition by creating a function decorated with @Operator(name)

    2. Child functions defined in this function will be executed, during the conversion, in the order of their definition

    3. These functions carry decorators that define which attributes and inputs can be converted by the function and for which versions the function should be executed

    4. For each supported version, one of the functions should return the created class.

    5. A shared dictionary allows passing data between functions

    6. The attributes and inputs dictionaries automatically contain the attributes / inputs with their default values

  • Below is a more concrete, simplified example

# @Operator(...) registers a new Operator (e.g. "Conv"), this must match the name in the ONNX documentation
@Operator("Conv")
def _op_conv(op, shared, inputs, attributes):

    # The name of functions does not matter
    # Parameters passed to the function:
    #     op: instance of created Operator (contains decorators and a few useful attributes)
    #     shared: dictionary that can be used to share values between functions
    #     inputs, attributes: dictionaries of inputs and attributes that are defined only during conversion

    # Define the first step of the conversion process
    @op.versions(["9-10", "11-last"]) # Only execute if opset version is 9, 10, 11, ..., or last
    @op.attributes("group") # Set that we support converting the "group" attribute
    @op.incomplete_attributes({"auto_pad": "Only supports NOTSET"}) # Set that auto_pad is only partially supported and explain why
    def _(node: LpdnnOnnxNode, lpdnn_net_onnx):

        # Check that auto_pad is handled properly
        op.in_assert(attributes['auto_pad'].s == "NOTSET", "auto_pad only supports NOTSET")

        # Set shared values that can be used in other functions
        shared['group'] = attributes['group'].i
        shared['auto_pad'] = attributes['auto_pad'].s

    @op.versions(["9-10", "11"]) # Only execute if opset version is 9, 10 or 11
    @op.inputs([None, "w"]) # Set that we support converting the "w" input (at index 1); check "Available decorators" to understand why "None" is required
    @op.incomplete_inputs([("x", "Only default is supported"), None]) # Set that x (index 0) is only partially supported and give a reason
    def _(node: LpdnnOnnxNode, lpdnn_net_onnx):
        # Check that x is handled properly
        op.in_assert(inputs['x'] == "default", "x input must be 'default'")

        shared['w'] = inputs['w']

    @op.versions(["9-last"]) # Only execute if opset version is 9, 10, 11, ..., or last
    def _(node: LpdnnOnnxNode, lpdnn_net_onnx):
        return LpdnnLayerConvolution(op.node_name, group=shared['group'])


  • Do not use assert for errors caused by invalid input; use op.in_assert() instead. This avoids crashing the checker tool if an error occurs.

  • An optional attribute with a default value will always be set, so there is no point in using if <attr_name> in attributes for it; that check is only useful for attributes without a default value.

  • Attributes that don’t have a constant value (e.g. "default is 0 for all axes": it’s not constant because it depends on the number of axes) should not be set with @op.defaults.

Available decorators


  • @Operator (opName: str)

    • Defines a new ONNX operator named opName for conversion

    • Instantly executes the function it is applied to, passing the following parameters:

      • op: newly defined Operator

      • shared: dictionary that must be used to pass values between functions (note: shared contains inputs and attributes)

      • inputs: dictionary of inputs, only set during conversion

      • attributes: dictionary of attributes, only set during conversion

    • This decorator can be used multiple times on the same function to define operators with similar implementations (e.g. “Conv” and “ConvTranspose”)

    • Example

@Operator("Conv")
def _op_conv(op, shared, inputs, attributes):
    # convert steps...
  • @op.versions (versions: list of str or str)

    • Specifies for which opset version the function will be executed

    • See About ONNX Opset version for more info

    • If this decorator is not specified the code will not be executed

    • A version can either be a single version or a range separated by a dash (version or from-to)

    • last can be used instead of a version number and is automatically replaced by the latest opset version defined in

    • Example

@op.versions(["7-9", "11", "13-last"])
def _('...'):
    # Executed for version 7,8,9, 11, 13,..,last
  • @op.applies_to (opNames: list of (str or None) or str)

    • If multiple @Operator decorators were used, this can be used to execute a function only for the specified operators. Otherwise the function is executed for all of them.

@op.applies_to("ConvTranspose")
def _('...'):
    # Executed only if current operator is ConvTranspose


  • @op.attributes (attributes: list of str or str)

    • Defines which attributes are fully supported

    • If a specified attribute is missing in the model, a warning is raised

    • If the model contains an attribute that is not handled in any function, the conversion errors out

    • Example

@op.attributes(["group", "strides"])
def _('...'):
    # attributes['group'] and attributes['strides'] contains attribute value
  • @op.incomplete_attributes (attributes: dict of str)

    • attributes should contain the attribute name as the key, and a description of what is supported as the value

    • Defines which attributes are partially supported (e.g. only works with some values or only default)

    • If the model contains an incomplete attribute, a warning is raised

    • Use op.in_assert(test, message) to verify that the values for this attribute are supported

    • Example

@op.incomplete_attributes({"auto_pad": "Only supports NOTSET"})
def _('...'):
    # Make sure attribute is supported:
    op.in_assert(attributes['auto_pad'].s == b"NOTSET", "Only NOTSET is supported")
  • @op.unsupported_attributes (attributes: list of str or str)

    • Defines which attributes are known to not be supported

    • This is for documentation purposes only and has the same effect as not specifying the attribute

    • Optional attributes that are not supported should be put in incomplete_attributes instead (this is done because optional attributes may have a default value that is supported)

    • Example: @op.unsupported_attributes(["pads", "dilations"])

  • @op.defaults (defaultValues: dict of any)

    • Set defaults values for attributes

    • This value is encapsulated in an AttributeProto (just like attributes that have a value)

    • Example


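The example body is missing in the source; the following minimal sketch mirrors the style of the other examples (the attribute name "group" and its default value are illustrative assumptions, not taken from the source):

```
# Hypothetical example: if the model omits "group", a default of 1
# is injected (wrapped in an AttributeProto, like a model-provided value)
@op.defaults({"group": 1})
def _('...'):
    # attributes['group'] is always set here, either from the model
    # or from the default above
```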
  • @op.inputs (inputs: list of (str or None))

    • Defines which inputs are supported

    • Order of items in inputs should be the same as the order defined in the ONNX documentation

    • A value of None indicates that the input is incomplete or not supported

    • Since the position in the list must be respected, None must be used on indices that are not supported

    • Example

# Operator inputs are "x", "w" and "b", but only "b" is supported:
@op.inputs([None, None, "b"])
def _('...'):
    # inputs['b'] contains input value
  • @op.incomplete_inputs (inputs: list of (tuple of str or None))

    • Defines which inputs are only partially supported

    • The tuple contains the name of the input as first element, and a description on what is supported as the second

    • Use op.in_assert(test, message) to verify that the values for this input are supported

    • A value of None indicates that the input is not partially supported (i.e. it is either fully supported or not supported at all)

    • Example

# Operator inputs are "x", "w" and "b", but "x" is partially supported:
@op.incomplete_inputs([("x", "Only some values are supported"), None, None])
def _('...'):
    # Make sure input is supported:
    op.in_assert(inputs['x'] == "some values", "Only some values are supported")
  • @op.unsupported_inputs (inputs: list of (str or None) or str)

    • Specifies which inputs are not supported

    • This is for documentation purposes only and has the same effect as not specifying the input

    • Optional inputs that are not supported should be put in incomplete_inputs instead (this is done because optional inputs may have a default value that is supported)

    • Example:

      • Operator inputs are "x", "w" and "b", but "w" is not supported: @op.unsupported_inputs([None, "w", None])

About ONNX Opset version

TL;DR: New ONNX versions don’t update all operators. When a new version is released, last should be updated to the newest version. If the new version has breaking changes, all operators using last MUST be updated (or their last version must be replaced by the previous version).

  • When a new ONNX version releases, not all operators versions are incremented.

    • e.g: Add was only changed in opset versions 1, 6, 7, 13, 14

  • This means that an operator implemented in version 1 also works for versions 2, 3, 4, 5

  • This is why a range of versions can be specified with @op.versions(["1-5", "6"])

    • Even if all versions are supported, it is good practice for the documentation to specify them separately (e.g.: @op.versions(["1-5", "6", "7-12", "13", "14-last"]))

  • Since only a minority of operators are updated every version, last can be used to automatically specify the latest version supported

  • The last variable is defined in

    • If you want to increment the last supported opset version, you MUST replace last manually by the last supported version number for ALL operators that have breaking changes.

    • If and only if there are no backward-compatibility issues (e.g. only new attributes/inputs were added), you can keep last (missing attributes/inputs will raise an error)

    • Failing to do that may introduce hard to debug errors

    • If you update an operator for its latest released version, check if you can replace the latest version by last to make it easier to update opset number in the future
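As a rough illustration of how version specs like ["7-9", "11", "13-last"] can be matched against a model's opset version, here is a small sketch. This is not LPDNN's actual implementation; the function name matches_opset and the LAST placeholder value are invented for this example:

```python
# Assumed placeholder for the latest opset version defined by the SDK.
LAST = 18

def matches_opset(specs, version, last=LAST):
    """Return True if `version` falls inside any spec.

    Each spec is a single version ("11") or a dash-separated range
    ("7-9", "13-last"); the token "last" is replaced by `last`.
    """
    for spec in specs:
        lo, _, hi = spec.partition("-")
        hi = hi or lo  # a single version is a range of one
        lo = last if lo == "last" else int(lo)
        hi = last if hi == "last" else int(hi)
        if lo <= version <= hi:
            return True
    return False
```

For instance, with specs ["7-9", "11", "13-last"], versions 8, 11 and 18 match, while 10 and 12 do not.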