Software, The Elephant in the Room for Edge-AI Hardware Acceleration
Many companies today are trying to deliver peak efficiency in machine learning (ML) inference by encouraging customers to move from less efficient traditional processors to purpose-built ML inference accelerators. While this is directionally correct, hardware-specific solutions often fail to meet customers' performance and efficiency goals. The issue is that 'peak efficiency' cannot be achieved simply by throwing a combination of silicon and power at the problem; this is especially true at the edge.