The large intra-class difference and inter-class similarity of scene images bring great challenges to the research of remote-sensing scene image classification. In recent years, many remote-sensing scene classification methods based on convolutional neural networks have been proposed. To improve classification performance, many studies increase the width and depth of the convolutional neural network to extract richer features, which increases the complexity of the model and reduces its running speed. To solve this problem, a lightweight convolutional neural network based on hierarchical-wise convolution fusion (LCNN-HWCF) is proposed for remote-sensing scene image classification. Firstly, in the shallow layers of the network (groups 1–3), the proposed lightweight dimension-wise convolution (DWC) is used to extract the shallow features of remote-sensing images. Dimension-wise convolution is carried out along the three dimensions of width, depth and channel, and the convolved features of the three dimensions are then fused. Compared with traditional convolution, dimension-wise convolution requires fewer parameters and computations. In the deep layers of the network (groups 4–7), the running speed usually decreases because of the increased number of filters; therefore, the hierarchical-wise convolution fusion module is designed to extract the deep features of remote-sensing images. Finally, the global average pooling layer, the fully connected layer and the Softmax function are used for classification. Using global average pooling before the fully connected layer better preserves the spatial information of the features. The proposed method achieves good classification results on the UCM, RSSCN7, AID and NWPU datasets. Its classification accuracy on the AID dataset (training:test = 2:8) and the NWPU dataset (training:test = 1:9), both of which are difficult to classify, reaches 95.76% and 94.53%, respectively. A series of experimental results shows that, compared with some state-of-the-art classification methods, the proposed method not only greatly reduces the number of network parameters but also maintains classification accuracy, achieving a good trade-off between model accuracy and running speed.

As shown in Figure 1, compared with general images, remote-sensing scene images contain richer, more detailed and more complex ground objects. A remote-sensing image with a specific scene label usually contains multiple object labels. In Figure 1a, the scene label is ‘River’, and the object labels are ‘Forest’, ‘Residential’, etc. In Figure 1b, the scene label is ‘Overpass’, and the object labels are ‘Parking’, ‘Rivers’, ‘Buildings’, etc. Such object labels can cause confusion in the classification of scene labels, resulting in classification errors.
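The two ingredients named above, dimension-wise convolution with fusion and the global-average-pooling classification head, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the excerpt does not specify the DWC fusion rule or weight sharing, so this sketch assumes one shared 1-D kernel per dimension (interpreting the three dimensions as height, width and channel) and fusion by summation.

```python
import numpy as np

def conv1d_along_axis(x, kernel, axis):
    """1-D convolution with 'same' zero padding along one axis of an
    (H, W, C) feature map; the kernel is shared across the other axes."""
    k = len(kernel)
    pad = k // 2
    xm = np.moveaxis(x, axis, -1)
    padded = np.pad(xm, [(0, 0)] * (xm.ndim - 1) + [(pad, pad)])
    out = np.zeros_like(xm)
    n = xm.shape[-1]
    for i in range(k):
        out += kernel[i] * padded[..., i:i + n]
    return np.moveaxis(out, -1, axis)

def dimension_wise_conv(x, k_h, k_w, k_c):
    """Convolve along height, width and channel, then fuse by summation
    (the fusion rule is an assumption, not taken from the excerpt)."""
    return (conv1d_along_axis(x, k_h, 0)
            + conv1d_along_axis(x, k_w, 1)
            + conv1d_along_axis(x, k_c, 2))

def classify(feat, W, b):
    """Global average pooling over the spatial dims, then FC + Softmax."""
    pooled = feat.mean(axis=(0, 1))       # (C,) vector
    logits = pooled @ W + b               # (num_classes,)
    e = np.exp(logits - logits.max())     # numerically stable Softmax
    return e / e.sum()

# Parameter-count intuition for the "fewer parameters" claim: a standard
# 3x3 convolution mapping 64 channels to 64 channels needs
# 64 * 64 * 3 * 3 = 36,864 weights, while three shared 3-tap 1-D kernels
# need only 3 + 3 + 3 = 9 under the sharing assumption made here.
```

The sketch operates on a single (H, W, C) feature map; a real layer would learn the kernels and apply the block repeatedly within each group of the network.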