Weakly-supervised butterfly detection based on saliency map

Document Type

Journal Article

Publication Title

Pattern Recognition

Volume

138

Publisher

Elsevier

School

School of Engineering

RAS ID

55042

Funders

National Natural Science Foundation of China (61806013, 61876010, 61906005) / General Project of the Science and Technology Plan of Beijing Municipal Education Commission (KM202110005028) / Project of the Interdisciplinary Research Institute of Beijing University of Technology (2021020101) / International Research Cooperation Seed Fund of Beijing University of Technology (2021A01)

Comments

Zhang, T., Waqas, M., Fang, Y., Liu, Z., Halim, Z., Li, Y., & Chen, S. (2023). Weakly-supervised butterfly detection based on saliency map. Pattern Recognition, 138, Article 109313. https://doi.org/10.1016/j.patcog.2023.109313

Abstract

Given the practical need to detect multiple features of butterflies in natural ecosystems, this paper proposes a weakly-supervised butterfly detection model based on a saliency map (WBD-SM) to improve the accuracy of butterfly detection in the ecological environment and to overcome the difficulty of fine-grained annotation. The proposed model first extracts features at different scales using VGG16, without its fully connected layers, as the backbone network. Next, saliency maps of butterfly images are extracted using the deeply-supervised network with short connections (DSS) for butterfly target localization, and class activation maps are derived via the adversarial complementary learning (ACoL) network for butterfly target recognition. The saliency and class activation maps are then post-processed with conditional random fields, yielding refined saliency maps of the butterfly objects. Finally, the locations of the butterflies are obtained from these refined saliency maps. Experimental results on the 20-category butterfly dataset collected in this paper indicate that WBD-SM achieves higher recognition accuracy than VGG16 under different division ratios. Moreover, with an 8:2 split between training and test sets, WBD-SM attains 95.67% localization accuracy, which is 9.37% and 11.87% higher than the results of DSS and ACoL, respectively. Compared with three state-of-the-art fully-supervised object detection networks, RefineDet, YOLOv3, and single-shot detection (SSD), WBD-SM outperforms RefineDet and YOLOv3 and performs almost on par with SSD.
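The pipeline in the abstract (multi-scale VGG16 features, DSS-style saliency side outputs, ACoL-style class activation maps, and box extraction from the saliency map) can be summarized in code. Below is a minimal PyTorch sketch, not the authors' implementation: the module names (WBDSM, SaliencyHead logic, box_from_saliency), tap indices, and hyperparameters are illustrative assumptions, the ACoL erasing step is omitted for brevity, and the CRF refinement is indicated only as a placeholder comment.

```python
# Minimal sketch of the WBD-SM pipeline described in the abstract.
# All module names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class WBDSM(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        # Backbone: VGG16 convolutional layers only (no fully connected layers).
        self.backbone = vgg16(weights=None).features
        # Tap the output of each pooling stage for multi-scale features.
        self.taps = {4, 9, 16, 23, 30}
        # DSS-style side outputs: one 1-channel saliency prediction per scale.
        self.side = nn.ModuleList(
            nn.Conv2d(c, 1, kernel_size=1) for c in (64, 128, 256, 512, 512)
        )
        # ACoL-style twin classifiers on the deepest features; in ACoL the
        # second branch sees features with the first branch's most
        # discriminative regions erased (erasing omitted here for brevity).
        self.cls_a = nn.Conv2d(512, num_classes, kernel_size=1)
        self.cls_b = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for i, layer in enumerate(self.backbone):
            x = layer(x)
            if i in self.taps:
                feats.append(x)
        # Fuse upsampled side outputs into a single saliency map.
        sal = sum(
            F.interpolate(conv(f), size=(h, w), mode="bilinear",
                          align_corners=False)
            for conv, f in zip(self.side, feats)
        )
        # Class activation maps and image-level class scores.
        cam = self.cls_a(feats[-1]) + self.cls_b(feats[-1])
        logits = cam.mean(dim=(2, 3))
        # A dense-CRF refinement of `sal` and `cam` would be applied here.
        return torch.sigmoid(sal), logits


def box_from_saliency(sal, thresh=0.5):
    """Tightest box (x0, y0, x1, y1) around above-threshold saliency."""
    ys, xs = torch.nonzero(sal[0, 0] > thresh, as_tuple=True)
    if len(xs) == 0:
        return None
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()


model = WBDSM(num_classes=20)
saliency, logits = model(torch.randn(1, 3, 224, 224))
print(box_from_saliency(saliency), logits.argmax(dim=1))
```

The key design point this sketch illustrates is that no bounding-box supervision is needed: the box is read off the refined saliency map, while the ACoL branch supplies the category label from image-level labels alone.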

DOI

10.1016/j.patcog.2023.109313

Access Rights

Subscription content
