Deep convolutional cross-connected kernel mapping support vector machine based on SelectDropout
School of Engineering
National Natural Science Foundation of China (61806013, 61876010, 61906005, 62166002, 62176009) / General project of Science and Technology Plan of Beijing Municipal Education Commission (KM202110005028) / Project of Interdisciplinary Research Institute of Beijing University of Technology (2021020101) / International Research Cooperation Seed Fund of Beijing University of Technology (2021A01)
Deep neural mapping support vector machine (DNMSVM) has achieved good results on numerous tasks by mapping the input from a low-dimensional space to a high-dimensional space and then applying a support vector machine for classification. However, it does not consider the connections between the different spaces, and it increases the number of model parameters. To improve classification performance while reducing the number of parameters, we propose a deep Convolutional Cross-connected Kernel Mapping Support Vector Machine framework based on SelectDropout (CCKMSVM-SD). It consists of a feature extraction module and a classification module. The feature extraction module maps the data from the low-dimensional to the high-dimensional space, fusing the representations of the different spaces through convolutional layers with cross-connections. In some convolutional layers, we replace the standard convolution with depthwise separable convolution to reduce the number of parameters. In addition, we apply SelectDropout to improve the model's generalization capability. The classification module uses a soft-margin support vector machine for classification. Results on three tasks over ten different datasets show that CCKMSVM-SD achieves higher classification accuracy with fewer parameters than competing models, demonstrating its effectiveness.
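The building blocks named in the abstract can be illustrated with a minimal numpy sketch: a depthwise separable convolution (which replaces a standard convolution to cut parameters), a cross-connection that fuses a shallow and a deep representation by channel concatenation, and the soft-margin SVM hinge loss used by the classification module. This is an illustrative toy, not the authors' implementation; the exact SelectDropout mechanism and cross-connection topology are defined in the paper itself, so they are not reproduced here, and all array shapes below are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise conv (one k x k filter per channel, 'valid' padding)
    followed by a pointwise 1x1 conv that mixes channels.

    Parameter count: c*k*k + out_c*c, versus out_c*c*k*k for a
    standard convolution (e.g. 60 vs. 216 for the shapes below)."""
    c, h, w = x.shape
    kh, kw = dw_kernels.shape[1:]
    oh, ow = h - kh + 1, w - kw + 1
    dw_out = np.zeros((c, oh, ow))
    for ch in range(c):                      # depthwise: per-channel filtering
        for i in range(oh):
            for j in range(ow):
                dw_out[ch, i, j] = np.sum(x[ch, i:i+kh, j:j+kw] * dw_kernels[ch])
    # pointwise: (out_c, c) @ (c, oh*ow) mixes channel information
    out = pw_weights @ dw_out.reshape(c, -1)
    return out.reshape(pw_weights.shape[0], oh, ow)

def hinge_loss(scores, y):
    """Soft-margin SVM hinge loss; labels y are in {-1, +1}."""
    return np.mean(np.maximum(0.0, 1.0 - y * scores))

# Toy forward pass: 4-channel 8x8 input, 3x3 depthwise kernels,
# pointwise projection to 6 channels.
x = rng.standard_normal((4, 8, 8))
dw = rng.standard_normal((4, 3, 3))
pw = rng.standard_normal((6, 4))
deep = depthwise_separable_conv(x, dw, pw)        # shape (6, 6, 6)

# Cross-connection (illustrative): fuse the shallow representation with
# the deep one by channel concatenation, cropping to matching size.
shallow = x[:, 1:-1, 1:-1]                        # shape (4, 6, 6)
fused = np.concatenate([shallow, deep], axis=0)   # shape (10, 6, 6)

loss = hinge_loss(np.array([2.0, -0.5]), np.array([1.0, 1.0]))  # 0.75
```

The concatenation step is only a stand-in for the paper's cross-connections: the point is that representations from different-dimensional spaces are combined before classification, rather than using only the deepest layer as DNMSVM does.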
Wang, Q., Liu, Z., Zhang, T., Alasmary, H., Waqas, M., Halim, Z., & Li, Y. (2023). Deep convolutional cross-connected kernel mapping support vector machine based on SelectDropout. Information Sciences, 626, 694-709. https://doi.org/10.1016/j.ins.2023.01.004