Dimensionality reduction and scale-space analysis of APEX hyperspectral imagery for tree species discrimination
Abstract
Hyperspectral imagery has high potential for tree species classification. However, high spectral dimensionality introduces computational challenges, such as the Hughes phenomenon (also known as the "curse of dimensionality"). The aim of this study was two-fold: a) to assess spectral dimensionality reduction for the classification of uniform tree stands when training samples are limited, and b) to assess the potential of difference-of-Gaussian (DoG) filtering for multi-scale representation of tree stands on the dimensionality-reduced hyperspectral imagery. A minimum noise fraction (MNF) transformation was applied to an Airborne Prism Experiment (APEX) image, and 10 MNF-derived components were selected. Multiresolution segmentation and the random forest algorithm were used for tree species classification. To assess multi-scale representation, the MNF-derived components were convolved with successive Gaussian filters, and a difference-of-Gaussian image was created from each consecutive pair. A contrast-split segmentation was used to assess potential tree stands at multiple scales. The classification using the MNF-derived components yielded an overall accuracy of 93% at the object level and a kappa coefficient of 0.910. Visual inspection showed better performance compared with the classification of the original hyperspectral imagery. Additionally, using DoGs for spatial complexity reduction not only produced a spatially simplified image but also preserved dominant features at different scales as blobs.
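To illustrate the scale-space step described above, the following is a minimal sketch (not the authors' implementation) of building a DoG stack from a single MNF component: the 2-D array `mnf_band`, the function name `dog_scale_space`, and the sigma values are assumptions chosen here for illustration only.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(band, sigmas=(1, 2, 4, 8, 16)):
    """Convolve `band` with successive Gaussian filters and return the
    differences of each consecutive pair of smoothed images (DoG layers)."""
    smoothed = [gaussian_filter(band.astype(np.float64), sigma=s) for s in sigmas]
    # Subtract each smoothed image from the next coarser one; blobs whose size
    # matches the sigma gap appear as local extrema in these DoG layers.
    return [coarse - fine for fine, coarse in zip(smoothed[:-1], smoothed[1:])]

# Example with a random placeholder array standing in for one MNF component:
mnf_band = np.random.rand(256, 256)
dog_layers = dog_scale_space(mnf_band)
print(len(dog_layers), dog_layers[0].shape)  # 4 layers, each 256 x 256

In this sketch each DoG layer acts as a band-pass filter, so dominant stand-level structures persist as blobs at the scale corresponding to the sigma pair, which is the spatial simplification the abstract refers to.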