Alternative Intelligence Shows Great Promise for Robots |
NEWS |
Investors believe that neuromorphic computing—processing architectures that mimic the biology of brains—will eventually challenge NVIDIA’s chip supremacy by delivering energy and processing gains that far surpass the current commercial state of the art. But beyond hardware, biomimetic algorithms offer significant advantages for robotics. The theory is that biomimicry—which hinges on dynamic memory and processing—can produce machine behaviors, reasoning, and responsiveness akin to those of an animal or even a human being, potentially superseding the advantages promised by nascent efforts to adapt generative Artificial Intelligence (AI) for robotics.
Traditional Artificial Neural Networks (ANNs)—the backbone of most contemporary AI, from image processing to foundation models—rely heavily on pre-collected training data and can only be updated incrementally in real time, at a large computational cost. New neural algorithms, however, are demonstrating substantial real-time performance gains for robotics. Emerging algorithmic approaches that can be deployed on current hardware, often by emulating neuromorphic architectures, include Continuous-Time Neural Networks (CTNNs) and the closely related Liquid Neural Networks (LNNs). These algorithms excel at interpreting spatiotemporal data, improving responsiveness; are less dependent on training, building their internal models dynamically; can be run (and trained) on low-resource chipsets, such as Field Programmable Gate Arrays (FPGAs); and, like their hardware counterparts, deliver dramatic energy reductions, enabling extended robot uptime in the field.
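The continuous-time dynamics behind LNNs can be made concrete with a small sketch. The following minimal Python example (untrained, with random weights; all names are illustrative, not any vendor's implementation) takes one fused Euler step of a liquid time-constant formulation, dx/dt = -x/tau + f(x, I)(A - x), in which the input-dependent gate f effectively gives each neuron a time constant that varies with the incoming signal.

```python
import numpy as np

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.02):
    # Input-dependent gate f(x, I), clamped non-negative so it acts
    # like a conductance that speeds up or slows down the neuron.
    f = np.maximum(np.tanh(W_in @ I + W_rec @ x + b), 0.0)
    # Fused semi-implicit Euler step of dx/dt = -x/tau + f(x, I)*(A - x).
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

rng = np.random.default_rng(0)
x = np.zeros(4)                  # hidden state of 4 "liquid" neurons
W_in = rng.normal(size=(4, 2))   # random (untrained) input weights
W_rec = rng.normal(size=(4, 4))  # random (untrained) recurrent weights
b, tau, A = np.zeros(4), np.ones(4), np.ones(4)
sensor = np.array([1.0, -0.5])   # a fixed 2-D "sensor reading"
for _ in range(50):
    x = ltc_step(x, sensor, W_in, W_rec, b, tau, A)
```

Because the gate sits in the denominator of the update, the state stays bounded without any extra clipping, which is one reason such networks are attractive for small, always-running controllers.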
Biomimicry for Robotics Is Already Commercially Mature |
IMPACT |
Beyond the research ventures of Intel and IBM, university spinouts have begun commercializing these alternative algorithms for machine intelligence. Examples include the following:
Dedicated neuromorphic hardware has also been commercialized on a small scale. Several vendors have manufactured small neuromorphic chips with limited capabilities for lightweight sensor-support tasks. These products have demonstrated significant battery life extension for edge sensors. Companies active in this space include Aspinity, Innatera, POLYN, and BrainChip. The latter recently used neuromorphic hardware to showcase an edge Large Language Model (LLM) with impressive results; feasibly, such an innovation could be deployed to issue verbal commands to robots at minimal computational cost. Generally, these chips are designed to perform basic signal analysis and then wake and interface with a secondary device, such as a microcontroller, to perform higher-level processing if warranted. By augmenting existing systems—reducing their active time—stakeholders can realize significant energy savings and improved performance via intelligent data pre-processing.
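The wake-on-event pattern described above can be sketched in a few lines. In the hypothetical Python pipeline below (all names are illustrative), a cheap signal-energy check stands in for the always-on neuromorphic front end, and the expensive host-side stage is "woken" only for windows the detector flags.

```python
import numpy as np

def front_end_flags(window, threshold=0.5):
    # Stand-in for the always-on front end: a cheap short-term-energy
    # check over one window of sensor samples.
    return float(np.mean(window ** 2)) > threshold

def run_pipeline(stream, window_len=64):
    # Duty-cycled pipeline: the cheap detector sees every window; the
    # expensive host-side stage runs only on flagged windows.
    wakeups = 0
    for start in range(0, len(stream) - window_len + 1, window_len):
        if front_end_flags(stream[start:start + window_len]):
            wakeups += 1  # here the MCU/classifier would be powered up
    return wakeups

# Mostly-quiet stream with one loud burst.
stream = np.zeros(640)
stream[128:192] = 1.0
wakeups = run_pipeline(stream)
```

Of the ten 64-sample windows, only the one containing the burst triggers a wakeup, so the expensive stage runs once instead of ten times—the duty-cycle saving these chips are designed to exploit.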
Current Generative AI Is Fundamentally Incapable of Significantly Extending Robot Capabilities |
RECOMMENDATIONS |
The long history of robotics and its utility is defined by discrete actions within controlled environments. “Robotic tasks,” i.e., performing the same task repeatedly and achieving the same result with millimeter precision, are where robots’ value has historically resided. Extending robotics beyond the assembly line—even the modest distance to the warehouse floor—causes environmental variables to multiply exponentially, resulting in fundamentally unsolvable problems. A robot must be taught all contingencies in advance: every edge case, scenario, action, and behavior. This has long hindered autonomous vehicles; environments and people can create scenarios that have never been witnessed before, let alone approximated in the training data. The current objective of technology leaders is to create databases of robot behaviors that approximate every conceivable action and scenario. For “general-purpose robots,” or robots expected to interact with the uncontrolled real world, this is a computationally impossible task.
Although inference and learning for robots have progressed significantly in recent years, solutions remain limited, slow, and computationally expensive. Current foundation models for robotics, such as the Toyota Research Institute’s Large Behavior Model and NVIDIA’s Isaac suite, will further extend robot applications beyond the assembly line and enable marginally more complex use cases in controlled environments, such as material handling and picking from a crowded bin. However, efficiency gains are likely to be overshadowed by a lack of adaptability, preventing robot operation in unstructured or unpredictable environments, such as construction sites or around human beings. Determinism is another significant issue. Large foundation models are a black box—convoluted internal logic between nodes creates unpredictable and unrepeatable behaviors. This is a key issue fueling safety and repeatability concerns for the crossover of generative AI and robotics. Advocates claim that CTNNs, due in part to the smaller number of nodes in the network, can provide greater transparency and predictability.
Proponents believe that biomimetic algorithms—notably LNNs—can extend robot adoption into new, unstructured, and complex environments. Given the maturity and cost savings of alternative forms of machine intelligence, decision makers ought to think twice before spending heavily on batteries and Graphics Processing Units (GPUs).