[1911.06091v1] EdgeNet: Balancing Accuracy and Performance for Edge-based Convolutional Neural Network Object Detectors
We have demonstrated that an intelligent data reduction mechanism can go a long way towards improving overall accuracy and focusing the computation on the important image regions.
Visual intelligence at the edge is becoming a growing necessity for low-latency applications and situations where real-time decisions are vital. Object detection, the first step in visual data analytics, has enjoyed significant improvements in state-of-the-art accuracy due to the emergence of Convolutional Neural Networks (CNNs) and Deep Learning. However, such complex paradigms introduce increasing computational demands and hence prevent their deployment on resource-constrained devices. In this work, we propose a hierarchical framework that enables detecting objects in high-resolution video frames and maintains the accuracy of state-of-the-art CNN-based object detectors, while outperforming existing works in terms of processing speed when targeting a low-power embedded processor, by using an intelligent data reduction mechanism. Moreover, a use-case for pedestrian detection from an Unmanned Aerial Vehicle (UAV) is presented, showing the impact that the proposed approach has on sensitivity, average processing time and power consumption when implemented on different platforms. Using the proposed selection process, our framework manages to reduce the processed data by $\sim 100\times$, leading to under $4W$ power consumption on different edge devices.
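The abstract does not spell out the selection mechanism, but Figure 3 shows square tile proposals of fixed sizes (128, 256, 352, 416 pixels) placed around an object. A minimal sketch of such a tile-selection step, assuming the goal is to cover a coarse region of interest with the smallest candidate tile (all names and the fallback behaviour here are hypothetical, not the paper's actual implementation):

```python
# Hedged sketch: pick the smallest square tile covering a region of
# interest so the detector processes a small crop instead of the full
# high-resolution frame. Tile sizes follow Figure 3 of the paper; the
# selection logic itself is an assumption for illustration.

TILE_SIZES = (128, 256, 352, 416)  # candidate tile side lengths (pixels)

def propose_tile(roi, frame_w, frame_h):
    """Return (x, y, size): the smallest tile covering roi = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = roi
    w, h = x2 - x1, y2 - y1
    for size in TILE_SIZES:
        if w <= size and h <= size:
            # centre the tile on the ROI, then clamp it inside the frame
            cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
            x = min(max(cx - size // 2, 0), max(frame_w - size, 0))
            y = min(max(cy - size // 2, 0), max(frame_h - size, 0))
            return (x, y, size)
    # ROI larger than every candidate tile: fall back to the full frame
    return (0, 0, max(frame_w, frame_h))
```

As a sanity check on the claimed data reduction: a 128 × 128 tile cropped from a 1920 × 1080 frame covers roughly 1/126 of the pixels, which is consistent with the $\sim 100\times$ reduction reported in the abstract when one small tile is processed per frame.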
Figure 1: Proposed tiles for processing based on the selection process. (Introduction)
Figure 2: Overview of EdgeNet. (Introduction)
Figure 3: Different tile proposals, with respect to the position and size of the tiles, for an object in the image: a) 128 × 128, b) 256 × 256, c) 352 × 352, d) 416 × 416. (Proposed Approach)
Figure 4: Comparison of average processing time (CPU) and sensitivity between different EdgeNet configurations for different time frames for each stage. (Evaluation of EdgeNet Framework)
Figure 5: Sensitivity of Tiny-YoloV3, DroNet_V3, DroNet_Tile and EdgeNet on different platforms. (Evaluation of EdgeNet Framework)
Figure 6: Average processing time of Tiny-YoloV3, DroNet_V3, DroNet_Tile and EdgeNet on different platforms. (Performance analysis on CPU, Odroid and Raspberry platforms)
Figure 7: Average power consumption of Tiny-YoloV3, DroNet_V3, DroNet_Tile and EdgeNet on different platforms. (Performance analysis on CPU, Odroid and Raspberry platforms)
Figure 8: EdgeNet selection process on different frames of the test set. White boxes indicate the proposed tiles for processing; orange boxes are the actual object detections. (Performance analysis on CPU, Odroid and Raspberry platforms)