Welcome to ITRI AI Hub Tutorials
Updated: 2025/01 by ITRI (EOSL-R3)
ITRI AI Hub provides simple, fast, and commercialization-ready Edge AI solutions for enterprises and developers. Before selecting a system, we recommend evaluating which devices best suit your applications based on factors such as model type, computing power, memory, and energy efficiency. The diagram below summarizes the memory and computing-power distribution of the devices we have selected, helping you intuitively compare them against a variety of Model Zoo options, open-source community models, or other custom models.

Overview
To support various types of AI tasks, you can pre-select base models from GitHub projects or open-source frameworks, or train your own models using PyTorch or TensorFlow. During the design and development phases, these models rely on self-built workstations, servers, or cloud-hosted data centers. To lower that barrier, ITRI provides online resources such as Azure AI Foundry and the AMD Instinct Cluster, helping you adapt and develop your products without incurring significant infrastructure maintenance costs. You can also deploy innovative applications quickly by obtaining models through pre-built tools and frameworks from the open-source community (e.g., YOLO, LLaMA, Whisper).
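As a minimal sketch of this starting point, the snippet below pulls a pretrained model from the open-source community with PyTorch and exports it to ONNX for later deployment on an edge device. The model choice (torchvision's MobileNetV3-Small) and the file name are illustrative assumptions, not a required workflow.

```python
import torch
import torchvision.models as models

# Obtain a pretrained base model from the open-source community.
# MobileNetV3-Small is an illustrative choice for edge deployment.
model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.eval()

# Export to ONNX so the same model can later be compiled or executed
# by vendor toolchains on the target device.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v3_small.onnx",  # illustrative file name
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)
```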
AI on Chips: Enabling the Future of AI Everywhere
How to Get Started with Chiplets?
Development Flow
At AI Hub, we provide comprehensive guides for various types of Chiplets, including system configuration methods and model deployment tutorials.
- First, obtain an Evaluation Kit from an authorized retailer or distributor. Then, follow the official documentation to configure the installation environment and operating system according to your requirements.
- The Developer Zone provides concise notes and shared resources to assist with the setup process.
- Next, use the benchmark data available in the Model Zoo to evaluate each chip's performance and determine its suitability for your application.
- Finally, use the provided testing tools to assess your model's performance on the different processing units of each Chiplet, gaining detailed insights into AI acceleration techniques and optimization strategies (see the sketch after this list).
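As a hedged illustration of that last step, the sketch below times inference of an exported ONNX model with ONNX Runtime. The model file, input name, and input shape are assumptions carried over from the earlier export example; vendor-specific testing tools typically report richer per-operator profiles.

```python
import time

import numpy as np
import onnxruntime as ort

# Open the exported model on the default CPU provider; other providers
# are covered in the table below.
session = ort.InferenceSession(
    "mobilenet_v3_small.onnx", providers=["CPUExecutionProvider"]
)
x = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Warm up so one-time initialization cost is excluded from the timing.
for _ in range(10):
    session.run(None, {"input": x})

runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {"input": x})
elapsed = time.perf_counter() - start
print(f"Average latency over {runs} runs: {elapsed / runs * 1000:.2f} ms")
```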
[NOTE] Each chip vendor operates within its own distinct hardware and software ecosystem. Nevertheless, the following framework is widely adopted by AI developers and provides a systematic approach to efficiently evaluating and selecting the most suitable chip:
| Processing Unit | Memory Usage | Supported Computing Operators | Ideal Use Case | Notes |
| --- | --- | --- | --- | --- |
| CPU | Medium | General-purpose logic | Control flow and non-parallel ML tasks | Executes general-purpose code directly. ML model performance can be further enhanced with vendor-optimized libraries (e.g., OpenVINO, ZenDNN, KleidiAI). |
| GPU | High | Graphics rendering and parallel computing | Matrix multiplication and neural network inference | Requires the appropriate graphics drivers (e.g., CUDA for NVIDIA GPUs, ROCm for AMD GPUs) and a related execution provider (e.g., TensorRT) to utilize GPU computing resources. |
| NPU | Low | Specialized AI operators | Low-power, high-efficiency neural network inference | Requires drivers and execution providers. Vendor-provided quantization tools are often needed to compile models, as these tools map a subset of valid operators onto NPU computing resources for optimized performance. |
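To make the execution-provider notes in the table concrete, the following sketch asks ONNX Runtime which providers the installed build supports and opens a session on the most capable one available. The preference order, the placeholder model.onnx, and the assumption that the matching vendor packages (e.g., onnxruntime-gpu) are installed are ours; NPU providers are vendor-specific and omitted here.

```python
import onnxruntime as ort

# Ask the installed ONNX Runtime build which execution providers it
# supports, then open a session with the most capable one available.
available = ort.get_available_providers()
preferred = [
    "TensorrtExecutionProvider",  # NVIDIA TensorRT
    "CUDAExecutionProvider",      # NVIDIA CUDA
    "ROCMExecutionProvider",      # AMD ROCm
    "CPUExecutionProvider",       # always present as a fallback
]
providers = [p for p in preferred if p in available]

# "model.onnx" is a placeholder for your exported model file.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Session is running on:", session.get_providers()[0])
```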