AN13854
NPU Migration Guide from i.MX 8M Plus to i.MX 93
Rev. 1 — 18 September 2023

Document information

Keywords: i.MX 93, i.MX 8M Plus, neural processing unit (NPU), TensorFlow Lite (TFLite), AN13854
Abstract: This application note describes how to migrate a machine learning application from i.MX 8M Plus to i.MX 93 with NPU acceleration.

1 Introduction

This application note describes how to migrate a machine learning application from i.MX 8M Plus to i.MX 93 with neural processing unit (NPU) acceleration. The NPUs of the i.MX 8M Plus and i.MX 93 devices are different IPs, so their features and usage methods also differ. This document introduces the differences between the i.MX 8M Plus NPU and the i.MX 93 NPU, and covers operation guidance and optimization suggestions. However, if CPU inference is used, the i.MX 8M Plus and i.MX 93 devices function in a similar manner.

2 NPU overview

The NPU provides hardware acceleration for AI/ML workloads and vision functions. The i.MX 8M Plus and i.MX 93 use NPUs based on different IP.
2.1 Block diagram
The following figure shows the i.MX 8M Plus NPU high-level block diagram.

[Figure: i.MX 8M Plus NPU high-level block diagram]

Table 1. i.MX 8M Plus NPU functional blocks

i.MX 8M Plus NPU block | Description
Host interface | Allows the NPU to communicate with external memory and the CPU through the AXI/AHB bus. In this block, data crosses clock domain boundaries
Memory controller | Internal memory management unit that controls the block-to-host memory request interface
Vision front end | Inserts high-level primitives and commands into the vision pipeline
Neural network core | Provides parallel convolution MACs for recognition functions using 8-bit or 16-bit integers
Tensor processing fabric | Provides data preprocessing and supports compression and pruning for multidimensional array processing for neural networks
Compute unit | Programmable SIMD execution unit that performs as a compute unit. The NPU block has one vector4 parallel processor unit, which also acts as four processing elements
Vision engine | Provides advanced image processing functions
Universal storage cache | Cache shared between the vision front end and the parallel processing unit

Note:
For the i.MX 8M Plus NPU supported operator list, refer to https://www.nxp.com.cn/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf, section "OVXLIB Operation Support with NPU".

[Figure: i.MX 93 NPU high-level block diagram]

Table 2. i.MX 93 NPU functional blocks

i.MX 93 NPU block | Description
Clock and power module (CPM) | Handles hard and soft resets; contains registers for the current security settings, the main clock gate, and the QLPI interface
Central control | Controls how the NPU processes neural networks, maintains synchronization, and handles data dependencies
DMA controller | Manages all transactions that use the Arm AMBA 5 AXI interfaces
Weight decoder | Reads the weight stream from the DMA controller. The decoder decompresses and stores this stream in a double-buffered register, ready for the MAC unit to consume it
MAC unit | Performs the multiply-accumulate operations required for convolution, depthwise convolution, and vector products, and the max operation required for max pooling
Output unit | Reads finished accumulators from the shared RAM and converts them into output activations. This process includes performing scaling for each OFM, adding the bias to values, and applying the activation function to each point
Shared memory | Memory shared between the DMA controller, the MAC unit, and the output unit

Note:
For the i.MX 93 NPU supported operator list, refer to https://www.nxp.com.cn/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf, section "Supported ML operators and constraints".
2.2 Differences in NPU key features
The following table describes the NPU features of i.MX 8M Plus and i.MX 93.
Table 3. NPU features of i.MX 8M Plus and i.MX 93

Feature | i.MX 8M Plus | i.MX 93
Host | Cortex-A53 | Cortex-M33
NPU IP | VIP8000Nano | Ethos-U65
Device node name | /dev/galcore | /dev/ethosu0
Primary APIs | OpenVX with NN extensions | Ethos-U operator
MACs per cycle | 1152 | 256
Clock | 1000 MHz | 1000 MHz

2.3 Ethos-U subsystem overview
The i.MX 8M Plus NPU is attached to the AXI bus and controlled by the Cortex-A core, whereas the Cortex-M core controls the i.MX 93 NPU (Ethos-U65). The i.MX 93 machine learning system involves several hardware components working collaboratively to accelerate the tensor computation of an ML model: Cortex-A, Cortex-M, messaging unit (MU), and Ethos-U NPU.

[Figure: i.MX 93 Ethos-U subsystem overview]

The Cortex-A55 is responsible for loading the ML model and for capturing and preprocessing the dynamic inputs with the Linux OS and its rich libraries. The Cortex-M is the controller of the attached Ethos-U NPU. It prepares the offloading descriptor for the NPU and triggers the NPU execution. It also executes the kernels that the NPU does not support. The MU is the messaging unit IP that facilitates communication between the Cortex-A and Cortex-M cores.

The i.MX 93 Ethos-U machine learning solution:

  • Supports TensorFlow Lite (TFLite) inference with fallback to Cortex-A
  • Supports TensorFlow Lite Micro (TFLite-Micro) inference with fallback to Cortex-M
  • Supports the inference API to offload the entire model to TFLite-Micro and the NPU on Cortex-M
  • Supports the TFLite API to offload the custom "ethos-u" operator to the NPU on Cortex-M
  • Provides the Vela model tool to optimize model performance and memory usage for the Ethos-U65 target

2.4 Ethos-U software architecture
Figure 4 shows the three main components of the software required for Ethos-U support.

[Figure 4: Ethos-U software architecture]

  • Vela model compiler: an offline tool that compiles the TFLite model graph for Ethos-U. The compiler replaces supported operators in the model with a custom "ethos-u" operator containing the command stream for the Ethos-U NPU. The output of the compiler is a modified TFLite model graph for the TFLite/TFLite-Micro inference engines.
  • Cortex-A software stack for Linux: contains the MPU inference engine (TensorFlow Lite), the driver library, and the kernel-side device driver for the Linux kernel
  • Cortex-M software stack: contains the MCU inference engine software (TFLite-Micro, CMSIS-NN) and the NPU driver

The typical inference workflow is as follows:

  1. The TFLite model is converted into a Vela model using the Vela model compiler, which generates the optimized version for the Ethos-U NPU.

  2. The optimized model is fed to either of the following:
    a. The TFLite inference engine, which recognizes the custom "ethos-u" operator, allocates the buffers for the input/output feature maps (IFM/OFM), and executes the operator via the Ethos-U Linux driver.
    b. The inference API, which allocates the buffers for the input/output feature maps and sends the entire model via the Ethos-U driver.

  3. The Ethos-U driver composes the inference task message and sends it over RPMsg to Cortex-M.

  4. The Ethos-U runner on Cortex-M dispatches the task to TFLite-Micro or to the Ethos-U driver directly, according to the task type.
    a. If the task type is accelerating the "ethos-u" operator (using TFLite), the runner calls the Ethos-U driver directly.
    b. If the task type is accelerating the entire model (using the inference API), the runner dispatches the model to TFLite-Micro, which further calls the Ethos-U driver for processing.

  5. After the Ethos-U driver completes the inference task, it writes the result into the output feature map buffer and sends the response back to Cortex-A via RPMsg.

Note:
The model is loaded from Cortex-A and shared with Cortex-M over RPMsg. The Cortex-M software is prebuilt with both the whole-model and the Ethos-U operator acceleration capabilities in a single-binary firmware. This firmware is integrated into the Yocto rootfs and is loaded automatically when the user starts an inference task using the TFLite or inference API by opening the Ethos-U device.

3 Migrating TFLite applications from i.MX 8M Plus to i.MX 93

This section describes the migration workflow for TFLite applications from i.MX 8M Plus to i.MX 93 using a few examples.
3.1 TensorFlow Lite software stack
Figure 5 shows the TensorFlow Lite software stack. TensorFlow Lite supports computation on the following hardware units:

  • CPU (Arm Cortex-A cores)
  • GPU/NPU hardware accelerator using the VX delegate
  • NPU hardware acceleration on the i.MX 93 NPU

[Figure 5: TensorFlow Lite software stack]

Note: The i.MX 8M Plus inference back end can be the CPU, GPU, or NPU. The i.MX 93 does not have a GPU, and if the CPU is used for inference, the application does not require any changes. Therefore, only the NPU acceleration usage is discussed in this document.
3.2 TensorFlow Lite workflow for i.MX 8M Plus / i.MX 93

[Figure: TensorFlow Lite workflow for i.MX 8M Plus and i.MX 93]

Both i.MX 8M Plus and i.MX 93 support TensorFlow Lite with NPU acceleration. The major differences are as follows:

  • The i.MX 93 NPU software stack depends on an offline tool to compile the TensorFlow Lite model into a command stream for Ethos-U NPU execution, while the i.MX 8M Plus uses online compilation to generate the NPU command stream for NPU execution. This means that i.MX 93 NPU users must first use the Vela tool to convert the TensorFlow Lite model to a Vela model. For details, see Section 4.

  • The i.MX 8M Plus uses the TensorFlow Lite external delegate mechanism (VX delegate) to support NPU acceleration, whereas the i.MX 93 uses the TensorFlow Lite custom operator mechanism to support NPU acceleration.

In addition, when the i.MX 8M Plus model is deployed on i.MX 93, it is recommended to use per-channel quantization (PCQ) in the model quantization stage to obtain better performance. However, the final model performance depends on the actual application.
3.3 Migration example
When TFLite offloads the "ethos-u" operator and falls back to Cortex-A (recommended), the required change is minimal. Compile the quantized TFLite model as described in Section 4 and remove (or comment out) the VX delegate option. Afterward, the i.MX 8M Plus ML application runs on i.MX 93 with NPU acceleration.
3.3.1 NPU acceleration on i.MX 8M Plus
Run an image classification example on i.MX 8M Plus with NPU acceleration, as in the sketch below.
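A minimal sketch of such a run, assuming the TFLite example binaries and model shipped in the NXP Yocto image and the VX delegate library at /usr/lib/libvx_delegate.so (verify both paths on your BSP release):

    # i.MX 8M Plus: offload inference to the NPU through the VX delegate
    cd /usr/bin/tensorflow-lite-2.9.1/examples
    ./label_image -m mobilenet_v1_1.0_224_quant.tflite \
        -i grace_hopper.bmp -l labels.txt \
        --external_delegate_path=/usr/lib/libvx_delegate.so

The console output includes the inference time and the top classification results.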

3.3.2 NPU acceleration on i.MX 93 with the TFLite inference engine
Compile the model for Ethos-U using the Vela tool, reusing the model mobilenet_v1_1.0_224_quant.tflite from /usr/bin/tensorflow-lite-2.9.1/examples/. If it runs successfully, an optimized Vela model mobilenet_v1_1.0_224_quant_vela.tflite is generated in the output folder. Then run the Vela model with the TFLite inference engine, which offloads the "ethos-u" operator to Cortex-M. The application prints the inference result if no error occurs. A sketch of both steps is shown below.
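A sketch of the two steps, assuming the Vela tool is installed (see Section 4.1) and the example files from the Yocto image listed above are used. On some BSP releases the Ethos-U support is exposed as an external delegate instead of a built-in custom operator, so check the i.MX Machine Learning User's Guide for your release:

    # Step 1: compile the TFLite model into a Vela model for Ethos-U
    cd /usr/bin/tensorflow-lite-2.9.1/examples
    vela mobilenet_v1_1.0_224_quant.tflite --output-dir ./output

    # Step 2: run the optimized model; the "ethos-u" custom operator is
    # dispatched to the NPU through the Ethos-U Linux driver
    ./label_image -m ./output/mobilenet_v1_1.0_224_quant_vela.tflite \
        -i grace_hopper.bmp -l labels.txt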

3.3.3 NPU acceleration on i.MX 93 with the inference API
Run the model with the inference API, which offloads the entire model to TFLite-Micro on Cortex-M. The application prints the inference result if no error occurs. See Section 5.2 for how to use the inference API from C++ or Python.

4 Vela tool

The Vela tool compiles a TensorFlow Lite for Microcontrollers neural network (NN) model into an optimized version that can run on an embedded system containing an Arm Ethos-U NPU. The optimized model contains TFLite custom operators for those parts of the model that can be accelerated by the Ethos-U NPU. Parts of the model that cannot be accelerated are left unchanged and run on a CPU (Cortex-A or Cortex-M) using an appropriate kernel (such as the Arm-optimized CMSIS-NN kernels). After compilation, the optimized model can only be run on an embedded system with an Ethos-U NPU. The tool also generates performance estimates for the compiled model.
To deploy an NN model on Ethos-U, the first step is to compile the prepared model with the Vela tool. To be accelerated by the Ethos-U NPU, the network operators must be quantized to either 8-bit (unsigned or signed) or 16-bit (signed).

4.1 Installing the Vela tool
You can run the Vela tool on the i.MX 93 board or on a Linux PC. It is already available in the NXP Yocto rootfs. This section describes how to install it on an x86 Linux PC. The steps are as follows; a sketch of the commands is shown after this list.

  1. Get the Vela source code.
  2. Install it with Python pip.
  3. After all the commands complete successfully, run vela --help to check whether the Vela tool is installed correctly.
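A sketch of these steps on an x86 Linux host. The repository URL below is an assumption (the NXP fork of the Vela tool); alternatively, the upstream release can be installed directly from PyPI with pip3 install ethos-u-vela. Match the Vela version to your BSP release:

    # 1. Get the Vela source code (assumed NXP fork; pick the branch matching your BSP)
    git clone https://github.com/nxp-imx/ethos-u-vela.git
    cd ethos-u-vela

    # 2. Install with Python pip
    pip3 install .

    # 3. Verify the installation
    vela --help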

4.2 Compiling the TFLite model
After the Vela tool is installed, the following command can be used to compile a TFLite model into the optimized version for the Ethos-U NPU. The optimized model is stored in the OUTPUT_DIR ("./output" by default). The output file has the suffix _vela.tflite and is itself a TFLite model. After the compilation, Vela prints a detailed log in the console.
Note: The Vela tool expects the TFLite model to be quantized already. Vela supports asymmetric quantization to 8 bit (signed and unsigned) and 16 bit (signed), as defined by TFLite. To accelerate model operators with the Ethos-U NPU, the input model to Vela has to be quantized. Non-quantized operators fall back to the CPU.
The following shows how to compile a model; Vela writes the optimized model to the output folder and prints a summary log. In the resulting computational graph for the model (mobilenet_v1_1.0_224_pb_int8.tflite), Vela encapsulates all supported operators into one Ethos-U operator.
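A sketch of the compilation command, using the model name mentioned above; additional memory and system configuration options are discussed in Section 4.3:

    # Compile a quantized TFLite model for Ethos-U; the optimized *_vela.tflite
    # model and a summary log are written to ./output by default
    vela mobilenet_v1_1.0_224_pb_int8.tflite --output-dir ./output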

4.3 Memory hierarchy for Cortex-M
For Cortex-M, several types of memory media with different capacities, speeds, and costs can be accessed by the CPU. Figure 8 shows the memory hierarchy on i.MX 93 in decreasing order of speed.

[Figure 8: memory hierarchy on i.MX 93]

The TCM size is 256 kB, and it is used for Cortex-M runtime data. By design, this memory space is not allocated for system purposes after booting; how to use it effectively is left to the user.
The OCRAM size is 640 kB. By design, the first 256 kB is allocated for the Arm Trusted Firmware (ATF), which is used to bootstrap the Cortex-A before the DRAM is available. The remaining 384 kB is reserved for NPU data: the weights/biases of an ML model.
The DRAM size is 2 GB on the i.MX 93 EVK board. However, only the shared DMA region between Cortex-A and Cortex-M can be used. The Ethos-U Linux driver requests DMA buffers for the tensor arena dynamically from the DMA pool and passes the buffer address to the Ethos-U firmware on Cortex-M. If not explicitly specified, a 16 MB DMA buffer is requested by default.
Ethos-U can only access the DRAM and OCRAM memory by design. Figure 9 shows the current memory mapping for the Ethos-U firmware.

[Figure 9: memory mapping for the Ethos-U firmware]

With this configuration, the model data and tensor arena are allocated in DRAM, and the OCRAM is used as an NPU cache. Use the "Dedicated_Sram" memory mode for model compilation with Vela (vela.ini can be found in ethos-u-vela/ethosu/config_files), as in the sketch below.
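A sketch of a Vela invocation for this memory mode. The accelerator configuration ethos-u65-256 matches the 256 MACs/cycle NPU listed in Table 3; the system-config name is an assumption, so take it from the shipped vela.ini:

    # Dedicated_Sram: tensor arena in DRAM, OCRAM used as an NPU cache
    vela mobilenet_v1_1.0_224_pb_int8.tflite \
        --accelerator-config ethos-u65-256 \
        --config <path-to>/vela.ini \
        --system-config <system-config-from-vela.ini> \
        --memory-mode Dedicated_Sram \
        --output-dir ./output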

For a standalone Cortex-M application, the memory mapping is as follows:

[Figure: memory mapping for a standalone Cortex-M application]

With this configuration, no DRAM is used. All the model data and tensor arena memory for the NPU is allocated in OCRAM. Use the "Sram_Only" memory mode for model compilation with Vela, as in the sketch below.
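The same invocation as above with the memory mode changed; again, the system-config name is an assumption to be taken from the shipped vela.ini:

    # Sram_Only: all model data and the tensor arena are placed in OCRAM
    vela <model>.tflite \
        --accelerator-config ethos-u65-256 \
        --config <path-to>/vela.ini \
        --system-config <system-config-from-vela.ini> \
        --memory-mode Sram_Only \
        --output-dir ./output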

5 Hardware acceleration with Ethos-U on i.MX 93 platform

The Ethos-U65 is the NPU on i.MX 93. It supports the following user-space inference APIs:

  • TFLite API to offload the "ethos-u" operator, with fallback to Cortex-A (nonintrusive)
  • Arm inference API to offload the Vela model, with fallback to Cortex-M

5.1 Inference with TFLite
The Ethos-U custom operator enables accelerating the inference on the Ethos-U accelerator. The operator uses the hardware accelerator driver directly to fully use the accelerator capabilities.
See Section 3.3.2 for an example.

5.2 Inference with the Ethos-U inference API
The Ethos-U inference API provides the methods to use the Ethos-U NPU on the Linux OS without the TensorFlow Lite inference engine. It takes the compiled model and IFM/OFM as inputs, composes an inference task, and dispatches the inference to the Cortex-M with Ethos-U.
The Ethos-U driver provides the C++ APIs for dispatching the inference to the Ethos-U kernel driver. The library and the corresponding header file are available in the Yocto rootfs and the SDK:
  • /usr/include/ethosu.hpp
  • /usr/lib/libethosu.so
5.2.1 How to use the inference API (C++)
The following steps describe how to run a Vela model from Cortex-A.

  1. Create the inference device.
  2. Load the model into a buffer from the Vela model file.
  3. Create the Network instance with the model buffer.
  4. Load the IFM from the input file (such as a picture for an image classification application) into a buffer. If there are multiple inputs, create the buffers one by one and push them back into a vector.
  5. Create the OFM buffers according to the output dimensions in the model. If there are multiple outputs, create the buffers one by one and push them back into a vector.
  6. Create an Inference instance with the Network, IFM, and OFM buffers.
  7. Call Inference->invoke() to trigger the inference and wait for its completion.
  8. Access the OFM buffers to get the inference result.

5.2.2 How to use the inference API (Python)
In addition to the C++ API, the Ethos-U driver also provides a Python API.
It is installed in the Yocto rootfs at /usr/lib/python3.10/site-packages/ethosu.

5.3 Building and deploying the Ethos-U firmware
This section describes how to build and deploy the Ethos-U firmware.
5.3.1 Getting the source
The ethos-u-core-software is part of the i.MX 93 Ethos-U NPU machine learning software package, which is an optional middleware component of the MCUXpresso SDK. The ethos-u-core-software is integrated into the MCUXpresso SDK Builder delivery system available on mcuxpresso.nxp.com. To include Ethos-U NPU machine learning in the MCUXpresso SDK package, select the ethos-u-core-software middleware component in the software component selector on the SDK Builder page when building a new package.
Figure 11 shows the SDK Builder page.

[Figure 11: SDK Builder page]

Once the MCUXpresso SDK package is downloaded, it can be extracted on a local machine or imported into the MCUXpresso IDE. For more information on the MCUXpresso SDK folder structure, refer to the Getting Started with MCUXpresso SDK User's Guide (document ID: MCUSDKGSUG).

5.3.2 Ethos-U example applications
This section describes the Ethos-U example applications and the supported toolchains.
5.3.2.1 Introduction
The two Ethos-U applications are as follows:
  • ethosu_apps_rpmsg: firmware for the Yocto Linux BSP
  • ethosu_apps: standalone example for Cortex-M
The ethosu_apps_rpmsg is the firmware of the Ethos-U subsystem for the Linux OS. It handles core messaging, processes inference requests from the Cortex-A core, configures the NPU registers, executes the inference, and provides the inference result to the Cortex-A core. The supported inference engine is TFLite or TFLite-Micro (if the inference API is used).
The ethosu_apps example is a standalone Cortex-M application that demonstrates inference execution entirely on the Cortex-M core; it can be used in low-power scenarios with the Cortex-A sleeping. The example uses a conv2d op model. There is no core message handling, and only TFLite-Micro is supported. The apps are available in the /boards//demo_apps/ethosu_apps* folders.
5.3.2.2 Toolchains supported
  • IAR Embedded Workbench for Arm: when the project is opened in IAR, press the "Make" button to build the project.
  • Arm GCC (GNU Tools Arm Embedded): run the build command to build the project, as in the sketch below.
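A sketch of an Arm GCC build, assuming the usual MCUXpresso SDK armgcc build scripts and that the GNU Arm Embedded toolchain path is exported in ARMGCC_DIR; the board folder name and script name may differ between SDK versions:

    # Assumed location inside the extracted MCUXpresso SDK package
    cd boards/<board>/demo_apps/ethosu_apps_rpmsg/armgcc
    export ARMGCC_DIR=/opt/gcc-arm-none-eabi    # path to your toolchain
    ./build_release.sh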

5.3.2.3 Deploy procedure

  1. Deploy the ethosu_apps_rpmsg firmware. The ethosu_apps_rpmsg example is built as .out or .elf and installed in the rootfs under the name "ethosu_firmware". The prebuilt binary is integrated in the rootfs and loaded by the Linux Ethos-U driver upon an inference request. To rebuild the firmware, copy the rebuilt ethosu_apps_rpmsg.out or ethosu_apps_rpmsg.elf to /lib/firmware/ in the rootfs and rename it to "ethosu_firmware", as in the sketch after this list.
  2. Deploy the ethosu_apps with U-Boot.
    The ethosu_apps is built as .bin. In the U-Boot terminal, load the binary into the Cortex-M memory and start the core to run the inference for the conv2d op model.
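A sketch of the firmware installation step on the target rootfs, based on the file names above:

    # Replace the prebuilt firmware with a rebuilt one; the Linux Ethos-U driver
    # loads /lib/firmware/ethosu_firmware on the next inference request
    cp ethosu_apps_rpmsg.out /lib/firmware/ethosu_firmware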

The default firmware ethosu_apps_rpmsg supports the following operators with TFLite-Micro on Cortex-M33: Ethos-U, TFLite_Detection_PostProcess, and Dequantize. If an operator is supposed to fall back to Cortex-M33 but is not included, rebuild the source code and deploy the firmware. The ethosu_apps is a standalone Cortex-M application that runs without Cortex-A interaction; therefore, it is deployed at the U-Boot stage.

5.3.3 Using the Ethos-U on Cortex-M
The Ethos-U NPU on i.MX 93 is accessible by the TFLite-Micro library. The TFLite-Micro interprets the optimized Vela model and delegates the kernels to different execution providers.
Currently, three types of execution providers are supported:

  • NN kernel: default kernel implementation provided by TFLite-Micro for Cortex-M CPU.
  • CMSIS-NN kernel: optimized kernel implementation by Arm using the CMSIS-NN library. The CMSIS-NN library executes the kernel on the Cortex-M CPU or Ethos-U.
  • Ethos-U kernel: kernel implementation for the custom Ethos-U operator. This operator is registered in the TFLite-Micro framework and executes the computation on Ethos-U using the NPU driver.

5.3.3.1 Running a Vela model with TFLite-Micro
The following steps run the Vela model on Cortex-M directly.

  1. Get the flatbuffer Vela model.
  2. Configure/allocate the input and output tensors statically.
  3. Build the TFLite-Micro interpreter for the inference.
  4. Set the input tensors.
  5. Run the inference and get the output.

TFLite-Micro does not depend on dynamic memory allocation; therefore, it requires users (application developers) to supply a memory arena when an interpreter is created. In practice, the user allocates this memory arena as a static buffer (for example, a statically declared global byte array) when the program starts.

The TFLite-Micro framework uses this memory arena to store input, output, and intermediate tensors. The memory size "TENSOR_ARENA_SIZE" must be adjusted according to the practical usage, considering the following points:

  • The model used for the application
  • The size of the input/output data
  • The memory needed for intermediate results
  • The mapping of the memory arena to SRAM or TCM, considering the effective usage of the memory hierarchy

6 Acronyms

Table 4 lists and defines the acronyms used in this document.
Table 4. Acronyms

Term | Definition
AHB | Advanced high-performance bus
API | Application programming interface
ATF | Arm Trusted Firmware
AXI | Advanced eXtensible Interface
BSP | Board support package
CPM | Clock and power module
DMA | Direct memory access
DRAM | Dynamic random-access memory
IFM | Input feature map
MAC | Multiply-accumulate
NPU | Neural processing unit
OFM | Output feature map
SDK | Software development kit
SIMD | Single instruction, multiple data
SRAM | Static random-access memory
TCM | Tightly coupled memory
TFLite | TensorFlow Lite

7 Note about the source code in the document

Example code shown in this document has the following copyright and BSD-3-Clause license:
Copyright 2023 NXP
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

8 Revision history

Table 5 summarizes revisions to this document.
Table 5. Revision history

Revision number | Release date | Description
1 | 18 September 2023 | Initial public release

9 Legal information

9.1 Definitions
Draft — A draft status on a document indicates that the content is still under internal review and subject to formal approval, which may result in modifications or additions. NXP Semiconductors does not give any representations or  warranties as to the accuracy or completeness of information included in a draft version of a document and shall have no liability for the consequences of use of such information.

9.2 Disclaimers
Limited warranty and liability — Information in this document is believed to be accurate and reliable. However, NXP Semiconductors does not give any representations or warranties, expressed or implied, as to the accuracy or completeness of such information and shall have no liability for the consequences of use of such information. NXP Semiconductors takes no responsibility for the content in this document if provided by an information source outside of  NXP Semiconductors. In no event shall NXP Semiconductors be liable for any indirect, incidental, punitive, special or consequential damages (including – without limitation lost profits, lost savings, business interruption, costs related to  the removal or replacement of any products or rework charges) whether or not such damages are based on tort (including negligence), warranty, breach of contract or any other legal theory.
Notwithstanding any damages that customer might incur for any reason whatsoever, NXP Semiconductors’ aggregate and cumulative liability towards customer for the products described herein shall be limited in accordance with the Terms and conditions of commercial sale of NXP Semiconductors.
Right to make changes — NXP Semiconductors reserves the right to make changes to information published in this document, including without limitation specifications and product descriptions, at any time and without notice. This  document supersedes and replaces all information supplied prior to the publication hereof.
Suitability for use — NXP Semiconductors products are not designed, authorized or warranted to be suitable for use in life support, life-critical or safety-critical systems or equipment, nor in applications where failure or malfunction of  an NXP Semiconductors product can reasonably be expected to result in personal injury, death or severe property or environmental damage. NXP Semiconductors and its suppliers accept no liability for inclusion and/or use of NXP  Semiconductors products in such equipment or applications and therefore such inclusion and/or use is at the customer’s own risk.
Applications — Applications that are described herein for any of these products are for illustrative purposes only. NXP Semiconductors makes no representation or warranty that such applications will be suitable for the specified use  without further testing or modification. Customers are responsible for the design and operation of their applications and products using NXP Semiconductors products, and NXP Semiconductors accepts no liability for any assistance with  applications or customer product design. It is customer’s sole responsibility to determine whether the NXP Semiconductors product is suitable and fit for the customer’s applications and products planned, as well as for the planned application and use of customer’s third party customer(s). Customers should provide appropriate design and operating safeguards to minimize the risks associated with their applications and products.
NXP Semiconductors does not accept any liability related to any default, damage, costs or problem which is based on any weakness or default in the customer’s applications or products, or the application or use by customer’s third party  customer(s). Customer is responsible for doing all necessary testing for the customer’s applications and products using NXP Semiconductors products in order to avoid a default of the applications and the products or of the application or  use by customer’s third party customer(s). NXP does not accept any liability in this respect.
Terms and conditions of commercial sale — NXP Semiconductors products are sold subject to the general terms and conditions of commercial sale, as published at http://www.nxp.com/profile/terms, unless otherwise agreed in a valid written individual agreement. In case an individual agreement is concluded only the terms and conditions of the respective agreement shall apply. NXP Semiconductors hereby expressly objects to applying the customer’s general  terms and conditions with regard to the purchase of NXP Semiconductors products by customer.
Export control — This document as well as the item(s) described herein may be subject to export control regulations. Export might require a prior authorization from competent authorities.
Suitability for use in non-automotive qualified products — Unless this document expressly states that this specific NXP Semiconductors product is automotive qualified, the product is not suitable for automotive use. It is neither  qualified nor tested in accordance with automotive testing or application requirements. NXP Semiconductors accepts no liability for inclusion and/or use of non-automotive qualified products in automotive equipment or applications.
In the event that customer uses the product for design-in and use in automotive applications to automotive specifications and standards, customer (a) shall use the product without NXP Semiconductors’ warranty of the product for such  automotive applications, use and specifications, and (b) whenever customer uses the product for automotive applications beyond NXP Semiconductors’ specifications such use shall be solely at customer’s own risk, and (c) customer fully  indemnifies NXP Semiconductors for any liability, damages or failed product claims resulting from customer design and use of the product for automotive applications beyond NXP Semiconductors’ standard warranty and NXP Semiconductors’ product specifications.
Translations — A non-English (translated) version of a document, including the legal information in that document, is for reference only. The English version shall prevail in case of any discrepancy between the translated and English versions.
Security — Customer understands that all NXP products may be subject to unidentified vulnerabilities or may support established security standards or specifications with known limitations. Customer is responsible for the design and  operation of its applications and products throughout their lifecycles to reduce the effect of these vulnerabilities on customer’s applications and products. Customer’s responsibility also extends to other open and/or proprietary  technologies supported by NXP products for use in customer’s applications. NXP accepts no liability for any vulnerability. Customer should regularly check security updates from NXP and follow up appropriately. Customer shall select  products with security features that best meet rules, regulations, and standards of the intended application and make the ultimate design decisions regarding its products and is solely responsible for compliance with all legal, regulatory,  and security related requirements concerning its products, regardless of any information or support that may be provided by NXP.
NXP has a Product Security Incident Response Team (PSIRT) (reachable at PSIRT@nxp.com) that manages the investigation, reporting, and solution release to security vulnerabilities of NXP products.
NXP B.V. — NXP B.V. is not an operating company and it does not distribute or sell products.
9.3 Trademarks
Notice: All referenced brands, product names, service names, and trademarks are the property of their respective owners.
NXP — wordmark and logo are trademarks of NXP B.V.
AMBA, Arm, Arm7, Arm7TDMI, Arm9, Arm11, Artisan, big.LITTLE, Cordio, CoreLink, CoreSight, Cortex, DesignStart, DynamIQ, Jazelle, Keil, Mali, Mbed, Mbed Enabled, NEON, POP, RealView, SecurCore, Socrates, Thumb, TrustZone, ULINK, ULINK2, ULINK-ME, ULINKPLUS, ULINKpro, μVision, Versatile — are trademarks and/or registered trademarks of Arm Limited (or its subsidiaries or affiliates) in the US and/or elsewhere. The related technology may be protected by any or all of patents, copyrights, designs and trade secrets. All rights reserved.
IAR — is a trademark of IAR Systems AB.
i.MX — is a trademark of NXP B.V.
Microsoft, Azure, and ThreadX — are trademarks of the Microsoft group of companies.
TensorFlow, the TensorFlow logo and any related marks — are trademarks of Google Inc.
Please be aware that important notices concerning this document and the product(s)
described herein, have been included in section ‘Legal information’.

© 2023 NXP B.V.
All rights reserved.
For more information, please
visit: http://www.nxp.com
Date of release: 18 September 2023
Document identifier: AN13854
