スキルアップAI株式会社 (SkillUp AI Co., Ltd.) / Instructor / Data Scientist
Neural Architecture Search for Improving Latency-Accuracy Trade-off in Split Computing
Shoma Shimizu, Takayuki Nishio, Shota Saito, Yoichi Hirose, Chen Yen-Hsiu, Shinichi Shirakawa: "Neural Architecture Search for Improving Latency-Accuracy Trade-off in Split Computing", IEEE Globecom 2022 Workshops: Edge Learning over 5G Mobile Networks and Beyond (Accepted).

===================

This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine-learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is split and processed cooperatively by IoT devices and edge servers over a network. The architecture of the model therefore strongly affects the communication payload size, the model accuracy, and the computational load. In this paper, we address the challenge of optimizing neural network architectures for split computing. To this end, we propose NASC, which jointly searches for the optimal model architecture and split point to achieve higher accuracy while meeting a latency requirement (i.e., keeping the total latency of computation and communication below a given threshold). NASC employs one-shot NAS, which avoids repeated model training and keeps the architecture search computationally efficient. Our performance evaluation using benchmark data from the hardware-aware NAS benchmark (HW-NAS-Bench) demonstrates that the proposed NASC improves the "communication latency and model accuracy" trade-off, i.e., it reduces latency by approximately 40-60% from the baseline with only a slight degradation in accuracy.
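The communication payload in split computing is the intermediate tensor produced at the split point, which is why the architecture and the split point have to be chosen together. The sketch below is only an illustration of that idea, not the paper's NASC implementation: a toy PyTorch backbone is split at each candidate point, and a crude latency model with hypothetical parameters (`device_s_per_layer`, `server_s_per_layer`, `bandwidth_bps`) adds up device compute, uplink transfer, and edge-server compute.

```python
import torch
import torch.nn as nn

# Toy backbone; the layer list stands in for a searched architecture (hypothetical).
layers = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU()),
    nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)),
])

def payload_bytes(split_point, x):
    """Bytes of the intermediate tensor the IoT device would send to the edge server."""
    with torch.no_grad():
        for layer in layers[:split_point]:
            x = layer(x)
    return x.numel() * x.element_size()

def total_latency(split_point, x, device_s_per_layer=0.01,
                  server_s_per_layer=0.001, bandwidth_bps=1e6):
    """Crude latency model: on-device compute + uplink transfer + edge-server compute."""
    comm = 8 * payload_bytes(split_point, x) / bandwidth_bps
    device = device_s_per_layer * split_point
    server = server_s_per_layer * (len(layers) - split_point)
    return device + comm + server

x = torch.randn(1, 3, 224, 224)
for k in range(1, len(layers) + 1):
    print(f"split after layer {k}: payload={payload_bytes(k, x)} B, "
          f"latency~{total_latency(k, x):.3f} s")
```

Under such a model, a search procedure would keep only the (architecture, split point) pairs whose estimated total latency stays below the threshold and pick the most accurate among them; NASC performs this joint search with one-shot NAS rather than the exhaustive loop shown here.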