
Domain-Specific On-Device Object Detection Method

Seongju Kang, Jaegi Hwang, Kwangsue Chung
Department of Electronics and Communications Engineering, Kwangwoon University, Seoul 01897, Korea
Entropy, 24(1), 77, 2022

@article{kang2022domain,
   title={Domain-Specific On-Device Object Detection Method},
   author={Kang, Seongju and Hwang, Jaegi and Chung, Kwangsue},
   journal={Entropy},
   volume={24},
   number={1},
   pages={77},
   year={2022},
   publisher={Multidisciplinary Digital Publishing Institute}
}

Object detection is a significant task in computer vision, and various approaches have been proposed to detect diverse objects using deep neural networks (DNNs). However, because DNNs are computation-intensive, it is difficult to apply them to resource-constrained devices. Here, we propose an on-device object detection method using domain-specific models. In the proposed method, we define object-of-interest (OOI) groups that contain objects with a high frequency of appearance in specific domains. Compared with the existing DNN model, the layers of the domain-specific models are shallower and narrower, reducing the number of trainable parameters and thus speeding up object detection. To ensure a lightweight network design, we combine various network structures to obtain the best-performing lightweight detection model. The experimental results reveal that the size of the proposed lightweight model is 21.7 MB, which is 91.35% and 36.98% smaller than those of YOLOv3-SPP and Tiny-YOLO, respectively. The f-measure achieved on the MS COCO 2017 dataset was 18.3%, 11.9%, and 20.3% higher than those of YOLOv3-SPP, Tiny-YOLO, and YOLO-Nano, respectively. These results demonstrate that the lightweight model achieves higher efficiency and better performance on non-GPU devices, such as mobile devices and embedded boards, than conventional models.
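The abstract does not give implementation details, but the core idea, a detector restricted to a domain-specific OOI group with a shallower and narrower backbone, can be illustrated with a minimal PyTorch sketch. This is not the authors' actual architecture: the class list, channel widths, layer count, and anchor count below are assumptions chosen only to show how reducing depth, width, and the number of target classes shrinks the trainable-parameter count.

import torch
import torch.nn as nn

# Hypothetical OOI group for a single domain; the paper defines OOI groups as
# objects that appear frequently in a specific domain. These names are illustrative.
DOMAIN_OOI = ["person", "cup", "bowl", "bottle", "fork", "knife", "spoon"]

def conv_block(in_ch, out_ch, stride=1):
    """3x3 convolution + batch norm + LeakyReLU, a common YOLO-style building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class NarrowDetector(nn.Module):
    """A shallower, narrower backbone than a full YOLOv3-SPP backbone.

    The depth and channel widths here are assumptions; the point is that fewer,
    thinner layers and a smaller class set mean far fewer trainable parameters,
    which is what makes a domain-specific model practical on non-GPU devices.
    """
    def __init__(self, num_classes, num_anchors=3):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 16, stride=2),
            conv_block(16, 32, stride=2),
            conv_block(32, 64, stride=2),
            conv_block(64, 128, stride=2),
            conv_block(128, 256, stride=2),
        )
        # Detection head predicting (x, y, w, h, objectness, class scores)
        # per anchor on the final feature map.
        self.head = nn.Conv2d(256, num_anchors * (5 + num_classes), kernel_size=1)

    def forward(self, x):
        return self.head(self.features(x))

model = NarrowDetector(num_classes=len(DOMAIN_OOI))
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params:,}")

Note that restricting detection to a small OOI group also shrinks the detection head, since its output channels scale with the number of classes; this is one way a domain-specific model can end up smaller than a general-purpose detector such as YOLOv3-SPP.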

