Running Neural Networks on Rockchip in 2025

The landscape of deploying neural networks on Rockchip platforms has evolved dramatically. If you’ve ever dabbled with running detection models or classification networks on Rockchip boards, you know that change is the only constant in the world of embedded AI. I’m excited to share an updated, in-depth guide covering the latest improvements, best practices, and resources to help you get started—or optimize your current workflow.

In this post, we’ll cover everything from installing the necessary toolkits to overcoming common challenges like PyTorch’s resource demands, and we’ll explore the most popular models and export techniques available today. Whether you’re a seasoned developer or just starting out with edge AI, read on to discover how to harness the power of Rockchip’s neural network capabilities in 2025.

1. Setting Up Your Rockchip Platform

Before diving into neural network deployment, it’s critical to ensure that your Rockchip board is properly set up. The first major step is installing the RKNN Toolkit—a comprehensive software suite designed to simplify the integration of neural networks on Rockchip devices.

Pre-Installed vs. Manual Installation

Many of the latest Rockchip boards, such as Radxa’s ROCK series, ship with the toolkit pre-installed. This significantly reduces setup time, letting you jump straight into development. If you’re working with an older board or one from a smaller vendor, however, you may need to install the toolkit manually. Detailed instructions for installing the necessary tools, including flashing utilities, can be found on the Radxa Wiki.
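For a manual setup, the host-side converter and the on-device runtime ship as separate packages. A minimal sketch, assuming the official pip wheels (package names and availability may vary by toolkit version and platform):

```shell
# On the host machine (x86_64 Linux): the model conversion toolkit
python3 -m pip install rknn-toolkit2

# On the Rockchip board itself: the lightweight inference runtime
python3 -m pip install rknn-toolkit-lite2
```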

Driver Installation Improvements

Driver installation has also seen improvements. Previously, you needed to manually copy the RKNPU runtime library into a system directory. Now, most installations are automated via pip wheels or direct GitHub pulls, so the driver and runtime install with minimal hassle.
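To confirm the NPU driver is actually present on a board, you can read the version file the kernel driver exposes. The path below is an assumption based on common RK35xx setups; it may require root or a mounted debugfs:

```python
from pathlib import Path

def rknpu_driver_version(path="/sys/kernel/debug/rknpu/version"):
    """Return the RKNPU driver version string, or None if unavailable."""
    try:
        # On Rockchip boards the kernel driver typically exposes its
        # version here (assumed path; it can differ between kernels).
        return Path(path).read_text().strip() or None
    except OSError:
        # Not a Rockchip board, debugfs not mounted, or no permission.
        return None
```

A `None` result simply means the driver could not be detected this way, not necessarily that the NPU is absent.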


2. Leveraging the RKNN Model Zoo

Once your Rockchip board is up and running with the RKNN Toolkit, the next step is to tap into the wealth of pre-trained models available in the RKNN Model Zoo. This repository is a comprehensive collection of neural networks—ranging from classification and detection to segmentation models—that are optimized for Rockchip NPUs.

The RKNN Model Zoo has been continuously updated, now boasting an even larger selection of models. Whether you’re looking for a quick-start solution or planning to develop a commercial application, you’ll find pre-trained ONNX models that work out of the box. For instance, models such as CLIP, YOLO World, and Whisper are included and ready to deploy. Keep in mind that while many models are fully functional immediately, others may require specific configurations (like selecting the right COCO classes for certain detection tasks).

For an up-to-date list of models and detailed conversion instructions, visit the RKNN Model Zoo.

One of the biggest improvements in the Model Zoo is the widespread availability of ONNX files. Most repositories now include pre-prepared ONNX models that allow you to bypass lengthy conversion processes. This is particularly useful for developers who need to deploy models quickly and efficiently.
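Before converting a downloaded ONNX file, it is worth confirming its declared input names and shapes. A minimal sketch using the `onnx` Python package (guarded, since it may not be installed):

```python
try:
    import onnx  # pip install onnx
except ImportError:
    onnx = None

def onnx_input_shapes(path):
    """Map each graph input name to its declared shape."""
    if onnx is None:
        raise RuntimeError("the onnx package is not installed")
    model = onnx.load(path)
    shapes = {}
    for inp in model.graph.input:
        # dim_value is a fixed size; dim_param names a symbolic dimension
        dims = [d.dim_value if d.dim_value else d.dim_param
                for d in inp.type.tensor_type.shape.dim]
        shapes[inp.name] = dims
    return shapes
```

Knowing the expected input shape up front saves a round of trial-and-error during conversion.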


3. Exporting and Running Your Neural Networks

Deploying a neural network on a Rockchip board involves several key steps: training your model, exporting it to a common interchange format like ONNX, converting it to Rockchip’s native RKNN format, and finally running inference on the board.

A Step-by-Step Export Process

Let’s break down the process using a popular model as an example—YOLOv8:

  1. Training the Model: Begin by training your model on your host machine using your preferred repository (e.g., the YOLOv8 repository).
  2. Exporting to ONNX: Once training is complete, export the model to ONNX format. Thanks to the RKNN Model Zoo’s guidelines, this process is now streamlined.
  3. Converting to RKNN Format: After exporting, convert your ONNX model to the RKNN format using a simple one-line command. This conversion is critical, as Rockchip’s NPU requires its own format for optimal performance.
  4. Deploying to the Board: Finally, transfer the RKNN model to your Rockchip board for inference. Note that while model training and export are typically performed on your host machine, the actual inference runs on the Rockchip device.
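The conversion step above can be sketched with rknn-toolkit2’s Python API. The calls follow the toolkit’s documented interface, but treat this as an illustration rather than a drop-in script—the right config (mean/std values, quantization dataset, target platform) depends on your model and board:

```python
try:
    from rknn.api import RKNN  # provided by the rknn-toolkit2 package
except ImportError:
    RKNN = None

def convert_onnx_to_rknn(onnx_path, rknn_path, platform="rk3588"):
    """Convert an ONNX model to RKNN format for the given target SoC."""
    if RKNN is None:
        raise RuntimeError("rknn-toolkit2 is not installed")
    rknn = RKNN()
    rknn.config(target_platform=platform)       # match your board's SoC
    if rknn.load_onnx(model=onnx_path) != 0:    # API returns 0 on success
        raise RuntimeError(f"failed to load {onnx_path}")
    if rknn.build(do_quantization=False) != 0:  # or pass a calibration dataset
        raise RuntimeError("build failed")
    if rknn.export_rknn(rknn_path) != 0:        # write the .rknn artifact
        raise RuntimeError("export failed")
    rknn.release()
```

Enabling quantization (with a representative calibration dataset) is usually worth it on the NPU, but start with an unquantized build to verify correctness first.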

For those interested in more technical details or needing ready-made scripts for exporting models, the rknn-toolkit2 provides updated scripts and tools designed to support complex models like Whisper and YOLO World.


4. Addressing Common Issues

One recurring challenge with deploying neural networks on Rockchip platforms has been the heavy reliance on PyTorch in many inference scripts. While PyTorch is a powerful framework, its resource demands can be problematic on edge devices:

  • Memory Overhead: PyTorch can consume up to 200 MB of memory during inference, which is significant for boards with limited resources.
  • Installation Complexity: The installation and configuration of PyTorch can be complex, often leading to compatibility issues or errors during runtime.
  • Performance Bottlenecks: The conversion of tensors between NumPy and PyTorch introduces additional processing overhead.

To address these issues, many developers are shifting towards using ONNX Runtime. This framework provides a lighter alternative to PyTorch and is designed to run ONNX models efficiently on various hardware platforms, including Rockchip NPUs. The latest developments even include support for Rockchip devices through the RKNPU Execution Provider. You can read more about this improved support on the ONNX Runtime documentation.
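As a sketch of the ONNX Runtime path—using the CPU provider here for portability; swap in the RKNPU execution provider where your onnxruntime build supports it:

```python
try:
    import onnxruntime as ort  # pip install onnxruntime
except ImportError:
    ort = None

def run_onnx(model_path, inputs, providers=("CPUExecutionProvider",)):
    """Run one inference pass; `inputs` maps input names to numpy arrays."""
    if ort is None:
        raise RuntimeError("onnxruntime is not installed")
    session = ort.InferenceSession(model_path, providers=list(providers))
    # None asks for all model outputs, returned as a list of arrays
    return session.run(None, inputs)
```

Because the session works directly on NumPy arrays, there is no tensor conversion layer—one of the overheads the PyTorch-based scripts suffer from.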

By using ONNX Runtime, you not only reduce the memory footprint but also streamline the inference process—making it a compelling option for developers looking to optimize performance on edge devices.


5. Classification, Detection, and Segmentation Models

Rockchip’s Model Zoo hosts a variety of models that can be grouped into three primary categories: classification, detection, and segmentation. Each category offers unique benefits depending on your application.

Classification Models

Classification models are often the easiest to deploy. Many popular models, such as MobileNet and CLIP, can be exported with a single command and work seamlessly on Rockchip boards. These models are well-suited for applications that require image recognition or object categorization without the complexity of spatial detection.
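Classification output handling is simple regardless of how the model ran: softmax over the raw logits, then take the argmax. A minimal, framework-free sketch:

```python
import math

def top1(logits, labels):
    """Return the most likely label and its softmax probability."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    best = max(range(len(logits)), key=lambda i: exps[i])
    return labels[best], exps[best] / sum(exps)
```

For example, `top1([0.1, 2.0, 0.5], ["cat", "dog", "bird"])` picks "dog" with its associated confidence.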

Detection Models

Detection models—particularly the various YOLO (You Only Look Once) variants—are widely used for tasks ranging from simple object detection to more complex applications like autonomous driving. If you’re developing a commercial product, YOLOX is a solid choice due to its balance between accuracy and speed. For hobby projects, YOLOv8 and YOLOv11 offer great alternatives, each with its own strengths and community support.
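One practical note: RKNN exports of YOLO-family models typically leave box decoding and non-maximum suppression to host-side code. The core of NMS is an intersection-over-union check, which looks like this for boxes given as (x1, y1, x2, y2):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

NMS then keeps the highest-scoring box and discards any overlapping box whose IoU with it exceeds a threshold (0.45 is a common default).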

Segmentation Models

Segmentation models are crucial when you need to identify and delineate objects within an image. These models are typically straightforward to export and have seen excellent performance on Rockchip platforms. Whether you’re working on scene segmentation or object masking, the available segmentation models can meet a wide range of requirements.

Each category of models is designed to work “out of the box” in most cases, but it’s essential to follow the specific export and conversion guidelines provided in the RKNN Model Zoo documentation for optimal performance.


6. Tackling Whisper and YOLO World Models

Not all models are created equal, and some of the more complex networks pose significant export challenges. Models like Whisper and YOLO World are known for their intricacy, and exporting them to a Rockchip-compatible format can be daunting. Thankfully, the RKNN Model Zoo has kept pace with these challenges, incorporating updated scripts and guidelines for exporting complex models, and rknn-toolkit2 now includes pre-configured conversion commands that handle the intricate details for you.

In addition, community forums and discussion boards often share tips and workarounds for successfully exporting these advanced models. Leveraging these community resources can save you significant time and frustration.


7. Future of AI Inference on Rockchip

The evolution of neural network deployment on Rockchip platforms is a testament to how rapidly edge AI is advancing. With the RKNN Toolkit often pre-installed on modern boards, a robust and ever-expanding RKNN Model Zoo, and improved support for model export and inference, deploying AI on Rockchip in 2025 is more accessible than ever.

Here are the key takeaways:

  • Simplified Setup: New Rockchip boards frequently come with pre-installed toolkits, reducing initial setup complexity. For older boards, guides like those on the Radxa Wiki are invaluable.
  • Expansive Model Resources: The RKNN Model Zoo now offers a diverse range of pre-trained models, including classification, detection, and segmentation networks.
  • Efficient Model Export: With tools like rknn-toolkit2, the conversion of models to a Rockchip-friendly format is more streamlined.
  • Overcoming PyTorch Limitations: Transitioning to lighter alternatives like ONNX Runtime can help mitigate memory and performance issues, especially with the new support available for Rockchip NPUs.
  • Community and Documentation: Finally, leveraging community resources and detailed documentation can significantly ease the process of deploying and optimizing neural networks on Rockchip devices.


About the author

Sophia Bennett is an art historian and freelance writer with a passion for exploring the intersections between nature, symbolism, and artistic expression. With a background in Renaissance and modern art, Sophia enjoys uncovering the hidden meanings behind iconic works and sharing her insights with art lovers of all levels. When she’s not visiting museums or researching the latest trends in contemporary art, you can find her hiking in the countryside, always chasing the next rainbow.