Edge-Native AI Inference Pipelines: Latency and Energy Trade-offs

Review
Received: Jul 18, 2023
Published: Sep 24, 2023
Authors: Rafa Dumont ✉, Taro Carstairs, Vera Xu

Abstract

We survey edge-native inference frameworks, comparing quantization, batching, and caching strategies for energy-efficient, real-time AI.

Cite this article

Dumont, R., Carstairs, T., & Xu, V. (2023). Edge-Native AI Inference Pipelines: Latency and Energy Trade-offs. Research Explorations in Global Knowledge & Technology (REGKT), 2(8). Retrieved from https://regkt.com/article.php?id=432&slug=review-edge-native-ai-inference-pipelines