SOURCE RESEARCH
Artificial Intelligence and Machine Learning: Tiny ML
Market Data | 4Q 2024 | MD-AIMLT-101
Growing at a CAGR of 31%, annual Tiny Machine Learning (TinyML) inference chipset shipments are forecast to reach 5.9 billion units by 2030. Audio and sound processing will be the most common application for TinyML, followed by sensor data analysis and machine vision.
Definitions
TinyML devices are ultra-low-power embedded systems or Internet of Things (IoT) devices that incorporate Machine Learning (ML) capabilities, typically through specialized hardware accelerators or optimized software frameworks. They are designed to operate within very tight power budgets, often running on coin cell batteries or on harvested energy. TinyML devices perform some or all Artificial Intelligence (AI)/ML processing locally, i.e., on the device itself, rather than offloading computation to other devices in the network or to the cloud. This enables low-latency inference, privacy preservation, and operation without network connectivity. ML models deployed on TinyML devices are optimized for size and efficiency, often through techniques such as quantization, pruning, and specialized model architectures.
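To illustrate the quantization technique mentioned above, the following is a minimal sketch of symmetric post-training int8 quantization, the kind of size/efficiency optimization commonly applied before deploying a model to a TinyML device. All function names and the sample weights are illustrative, not taken from any specific framework:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float32 weights to int8.

    Each weight is divided by a per-tensor scale so the largest magnitude
    maps to 127, shrinking storage 4x (float32 -> int8).
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Illustrative weight tensor (hypothetical values)
w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.max(np.abs(w - w_hat))  # worst-case error is at most scale / 2
```

The 4x reduction in weight storage (and the switch to integer arithmetic) is what makes such models fit the memory and power budgets described above; real frameworks add per-channel scales and quantization-aware training on top of this basic idea.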
AI inference is the process of applying a trained model's acquired capability to new data: predicting patterns or interpreting fresh inputs based on what was learned during training.
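A minimal sketch of inference as defined above: fixed parameters learned during training are applied to a new sample to produce a prediction. The parameter values and the two-feature binary classifier are hypothetical, chosen only to make the distinction between training (producing W and b) and inference (using them) concrete:

```python
import numpy as np

# Hypothetical parameters produced earlier by training (not learned here)
W = np.array([1.5, -2.0], dtype=np.float32)
b = np.float32(0.1)

def infer(x):
    """Inference step: apply the already-trained parameters to new data."""
    logit = float(np.dot(W, x) + b)
    prob = 1.0 / (1.0 + np.exp(-logit))  # sigmoid maps logit to [0, 1]
    return 1 if prob >= 0.5 else 0

# A fresh, previously unseen input
new_sample = np.array([0.8, 0.3], dtype=np.float32)
prediction = infer(new_sample)
```

On a TinyML device this step runs entirely on the microcontroller, which is what enables the low-latency, private, offline operation described in the definitions.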
Related Research
Market Data | 2Q 2024 | MD-AIMLOD-101
Market Data | 3Q 2024 | MD-AIML-111
Market Data | 2Q 2024 | MD-CBMM-23
Market Data | 2Q 2024 | MD-ARMD-101
Market Data | 4Q 2024 | MD-PCE-103