Full-Waveform LiDAR Point Cloud Land Cover Classification with Volumetric Texture Measures

Full-Waveform (FW) Light Detection and Ranging (LiDAR) systems record the complete waveforms of backscattered laser signals, thus providing greater potential for extracting additional features and deriving physical properties from reflected laser signals. This study explores the feasibility of extracting volumetric texture features from airborne FW LiDAR point cloud data, along with echo-based LiDAR features, to improve land-cover classification. A second derivative algorithm is used to detect signal echoes and extract single- and multi-echo features from FW LiDAR data derived from a Gaussian fitting function. The dense point clouds are further regularized to construct a data cube for volumetric texture extraction using 3D-GLCM (Gray Level Co-occurrence Matrix) and Gray Level Co-occurrence Tensor Field (GLCTF) algorithms coupled with second and third order texture descriptors. Different feature combinations of traditional and echo-based LiDAR features and texture measures are collected for supervised land-cover classification using a Random Forests classifier. The experimental results indicate that the echo-based features may be useful for distinguishing general land-cover types with acceptable accuracy but may not be adequate for detailed classifications, such as discriminating different vegetation cover types. Incorporating volumetric texture features can improve the classification of relatively more detailed land-cover types, with approximately 10 and 14% increases in the overall accuracy and Kappa coefficient, respectively.


INTRODUCTION
Airborne laser scanning (ALS), also known as airborne Light Detection and Ranging (LiDAR), is an active remote sensing technique that emits pulses and receives their responses to measure attitude angles and the distances between the sensor and targets. Consequently, the target's coordinates can be computed using direct geo-referencing theory. The LiDAR outcome is referred to as a point cloud consisting of many discrete points. Each point also contains target intensity information. The pulse response over time is called a waveform. Most ALS systems record part of the waveform, and the number of records depends on the instrument, e.g., one echo derived from the first return, two echoes composed of the first and last returns, or six echoes determined by echo detection in a waveform. Airborne LiDAR data have been widely used for DSM (Digital Surface Model) and DEM (Digital Elevation Model) generation (e.g., Gamba and Houshmand 2000; Liu 2008), forest assessment (e.g., Zimble et al. 2003; Hyyppä et al. 2004), and urban reconstruction (e.g., Guo et al. 2011; Liu et al. 2013). These applications are based primarily on the geometry of point clouds and sometimes on the intensity information (Hug and Wehr 1997).
As related technologies advance and data storage capacity increases, Full-Waveform (FW) LiDAR systems have emerged since 2004. In addition to the three-dimensional (3D) coordinates and intensity of returned laser signals provided by conventional LiDAR systems, this new type of laser scanning sensor also records the complete waveforms of the backscattered signal echoes (Mallet and Bretar 2009). Thus, it provides more potential to extract additional parameters or features and derive physical properties from the recorded laser signals. FW LiDAR point clouds have much larger data volumes and also require more sophisticated processing and analysis algorithms to fully explore and take advantage of the additional information derived from the waveform features.

FW LIDAR PROCESSING AND ANALYSIS
There are two major developments in FW LiDAR (Bretar et al. 2008; Mallet and Bretar 2009). The first is transferring all waveform samples into the point cloud space to increase the number of 3D points. This results in denser point clouds, which should be helpful in extracting more information for segmentation and classification tasks in both forest and urban areas (e.g., Chauve et al. 2009; Lin et al. 2010; Qin et al. 2012). Echo detection to decompose and fit the waveform is the core of the other development, used for deriving further valuable parameters and features. For instance, Gaussian fitting or decomposition (Wagner et al. 2006) is a well-known solution that has been widely used in many applications (e.g., Mallet et al. 2011; Fieber et al. 2013; Tseng et al. 2015). Lin et al. (2010) and Lu and Tsai (2013) further proposed using a second derivative algorithm to determine the initial position and number of echoes for iterative Gaussian waveform fitting. Both qualitative and quantitative validations demonstrated that the second derivative-based echo detector outperformed two conventional methods, i.e., center of gravity and zero-crossing of the first derivative, in terms of both range resolution and accuracy (Lin et al. 2010).
It is a common practice to rasterize LiDAR data into regular grids (Dalponte et al. 2008; Palenichka et al. 2013), especially for deriving topographic features, such as slope gradients and aspects, from DEMs generated from LiDAR data. After rasterization, LiDAR data sets can be treated and analyzed as images and easily overlaid with other raster and vector data sets. For example, it would be difficult to perform texture analysis on the original discrete LiDAR point clouds, but common texture measures, such as the Gray Level Co-occurrence Matrix (GLCM), can be computed effectively from the rasterized grids. Although the FW LiDAR point cloud density is typically higher than that of conventional ALS, the general strategy for texture analysis of LiDAR data still involves slicing the point clouds into a few layers and converting them into two-dimensional images for texture computation (e.g., Anderson et al. 2008; Heinzel and Koch 2011). Therefore, only pixel-based second-order texture measures are used. A few studies have recently started to treat airborne and mobile FW LiDAR point clouds as volumetric datasets to perform voxel-based 3D analysis, such as tree detection and segmentation, tree species classification, and stem volume and DBH (Diameter at Breast Height) estimation (e.g., Reitberger et al. 2008, 2009; Yao et al. 2012; Wu et al. 2013).
If FW LiDAR data can be treated as volumetric data sets for voxel-based analysis, there is greater potential for extracting volumetric texture features from the dense point clouds using higher-order texture measures for more sophisticated classifications. This study therefore developed a systematic approach to extract high-order volumetric texture features based on 3D-GLCM (Tsai et al. 2007) and GLCTF (Gray Level Co-occurrence Tensor Field) (Tsai and Lai 2013) computations for FW LiDAR data and to integrate these spatial measures with waveform-based features for point cloud classification to improve land-cover identification. Several issues are addressed innovatively in this research, including (1) using a second derivative algorithm to detect echoes for extracting single- and multi-echo features derived from the Gaussian fitting function; (2) regularizing the dense point clouds into a data cube for volumetric texture feature extraction; and (3) comparing different waveform and texture feature combinations to evaluate the effectiveness of volumetric texture measures for FW LiDAR point cloud land-cover classification.

MATERIAL AND METHOD
The study site is located in Taoyuan, Taiwan, as displayed in Fig. 1 with an orthorectified aerial image. The primary data set used in this study is a FW LiDAR point cloud acquired in May 2012 using an Optech ALTM Pegasus airborne laser scanner. Table 1 lists a few important characteristics of the sensor. The flight height, point density, and footprint size at nadir were 2185 m, 0.54 points m⁻², and 0.43 m, respectively. Three data subsets were extracted from the original data set for analysis in three test cases; their ground coverages are also marked in Fig. 1. The three test cases are designed to evaluate the performance in general land-cover classification, sophisticated classification (with more categories), and distinguishability among different vegetation covers, respectively. More detailed descriptions and test case results are discussed in the next section.
The fundamental classification analysis principle in this research is to derive and collect useful features from the original point clouds to improve land-cover type classification. After pre-processing, echo-based point features and volumetric texture features are extracted and computed. A supervised classifier is then employed to classify land-cover types in the test cases. Finally, assessment and cost analysis are performed to evaluate the classification performance in the different test cases.

Pre-Processing
The main purpose of pre-processing is two-fold: noise elimination and radiometric correction. Noise elimination involves offsetting and smoothing waveforms using thresholds and mean filters; the former eliminates the path energy contributed by aerosols and the latter reduces the effect of noise. The purpose of radiometric calibration in this study is to decrease the radiometric variations of the same targets caused by the different directions and ranges at which the sensor acquired the energy. A model-driven approach (Höfle and Pfeifer 2007) derived from the radar equation was used to describe the loss of emitted pulse power, because this approach can be applied without constraints such as the flight height. The reflectivity, ρ, of an object can be represented in the proportional form of Eq. (1):

ρ ∝ P R² / (cos α · η_atm · η_sys)    (1)

where P is the received power; R is the recorded range; α indicates the angle of incidence between the surface normal and the incoming laser ray; and η_atm and η_sys are atmospheric and instrument factors, which can be ignored here because only one flight line was used in this study. It has been demonstrated that the outcome of Eq. (1) reduces radial variations in the same targets acquired from different directions and ranges (Höfle and Pfeifer 2007).
The theoretical backscatter cross-section (BC) of a nadir echo can be obtained from Eq. (2) as indicated in Wagner et al. (2006), but the reflectance of the target surface is unknown:

σ = π ρ R² β²    (2)

According to the radar equation, Eq. (2) can be replaced by Eq. (3) with a calibration constant, C_cal:

σ = C_cal R⁴ A_j W_j    (3)

Assuming a reflectance of 0.25 for asphalt (Alexander et al. 2010), the calibration constant can be calculated from Eq. (4):

C_cal = π ρ R² β² / (R⁴ A_j W_j)    (4)

where σ is the backscatter cross-section in square meters; ρ is the reflectance of the target surface as in Eq. (1); R is the recorded range in meters; β is the laser beam divergence angle in radians; C_cal is the calibration constant; A_j is the pulse amplitude of the j-th waveform; and W_j is the pulse width of the j-th waveform.
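These relations can be sketched in Python as below. The sketch assumes the Wagner et al. (2006)-style forms σ = π ρ R² β² and σ = C_cal R⁴ A_j W_j with the function names chosen for illustration; the exact constants used in the paper's calibration may differ.

```python
import math

def relative_reflectivity(power, rng, incidence_rad):
    """Range- and incidence-corrected reflectivity (proportional form),
    following the Hoefle & Pfeifer (2007) simplification with the
    atmospheric and instrument factors dropped (single flight line)."""
    return power * rng ** 2 / math.cos(incidence_rad)

def backscatter_cross_section(c_cal, rng, amplitude, width):
    """Backscatter cross-section, assumed form: sigma = C_cal * R^4 * A_j * W_j."""
    return c_cal * rng ** 4 * amplitude * width

def calibration_constant(rho_ref, rng, beta, amplitude, width):
    """Calibration constant from a reference target (e.g., asphalt, rho = 0.25):
    pi * rho * R^2 * beta^2 = C_cal * R^4 * A * W  =>  solve for C_cal."""
    sigma_ref = math.pi * rho_ref * rng ** 2 * beta ** 2
    return sigma_ref / (rng ** 4 * amplitude * width)
```

Given an asphalt return, `calibration_constant` fixes C_cal once, after which `backscatter_cross_section` converts any echo's amplitude and width into an absolute cross-section.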

Echo-Based Feature Extraction
Two major processes are performed in this step: echo detection and waveform filtering. When dealing with waveform data, Gaussian decomposition is one of the most commonly adopted methods to detect echo positions (Wagner et al. 2006). In this approach each echo is represented by a set of Gaussian parameters corresponding to the interaction between the emitted pulses and the surface of the Earth. The Gaussian decomposition equation is

S_G(x) = Σ_j a_j exp(−(x − μ_j)² / (2 σ_j²))    (5)

where S_G represents the received waveform signal; a_j is the amplitude of the j-th echo; x is the sample position in the waveform; μ_j indicates the distribution centre of the j-th echo; and σ_j is the pulse width. Using Eq. (5), echo positions can be approximated by iteration.
Before performing Gaussian decomposition, a second-derivative based echo detector is used to determine the initial positions and number of echoes for the iteration process. The second derivative of a waveform x is calculated as

x''(t) = [x(t + Δt) − 2 x(t) + x(t − Δt)] / Δt²    (6)

where t indicates an echo location in the waveform and Δt is the time interval. In the second-derivative algorithm, a local minimum of the second derivative is assumed to correspond to a peak of the waveform. Echo-based LiDAR features can be classified into two groups, i.e., single-echo and multi-echo features. Single-echo features include the amplitude (A), width (W), and backscatter cross-section (BC), which describe each echo independently using Gaussian decomposition (Wagner et al. 2006). Multi-echo features consider the relationships between the echoes in a waveform, including the number of returns (NR), amplitude mean (Ā), and the time interval from the first to the last echo (ΔT). Table 2 lists all of the LiDAR features used in this study, including traditional LiDAR features, single-echo features, and multi-echo features.
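As an illustration, the second-derivative echo detector can be sketched with numpy as follows. This is a simplified version (unit sample spacing, no noise threshold), not the authors' implementation; the detected positions would then serve as initial values for the iterative Gaussian fitting of Eq. (5), e.g., via non-linear least squares.

```python
import numpy as np

def second_derivative(w):
    """Discrete second derivative: x''(t) = x(t+dt) - 2x(t) + x(t-dt), dt = 1 sample."""
    return w[2:] - 2.0 * w[1:-1] + w[:-2]

def detect_echoes(w):
    """Initial echo positions = local minima of the second derivative
    (a waveform peak appears as a negative trough of x'')."""
    d2 = second_derivative(w)
    peaks = []
    for i in range(1, len(d2) - 1):
        if d2[i] < d2[i - 1] and d2[i] <= d2[i + 1] and d2[i] < 0:
            peaks.append(i + 1)  # shift back to waveform sample indexing
    return peaks

# Synthetic two-echo waveform built from the Eq. (5) Gaussian mixture form
t = np.arange(100, dtype=float)
wave = 50 * np.exp(-((t - 30) ** 2) / (2 * 3 ** 2)) \
     + 30 * np.exp(-((t - 60) ** 2) / (2 * 4 ** 2))
print(detect_echoes(wave))  # -> [30, 60]
```

On real waveforms a small negative threshold on d2 would be added to suppress noise-induced troughs before handing the positions to the fitting routine.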
It should be noted that amplitude (A) indicates the received energy of point clouds and is obtained after Gaussian decomposition (unit: DN), while BC describes the backscatter cross-section, which is the combination of the target reflectance and footprint area (unit: m²). They may seem correlated; however, from a classifier's point of view, these two features have different characteristics, units, and computational elements. Therefore, they both provide distinctive and useful information for distinguishing between different targets.

Volumetric Texture Feature Extraction
As mentioned previously, there is great potential to extract high order texture features from LiDAR, especially FW LiDAR, point clouds if they are treated as volumetric data sets. To do so, all waveform samples should first be transformed into a geographic object space using the echo coordinates, laser beam vectors, and the time difference between echoes and samples. After that, the dense point clouds are regularized to construct a volumetric data cube in which the vertical direction is based on the normalized height to minimize feature variation within the same class caused by the terrain effect. In this study, the transformed waveform data are regularized (resampled) into a data cube with a voxel size of 1 m in all dimensions for texture computation. The regularization process starts with identifying the extent of the volume cube based on the discrete FW LiDAR point cloud. The voxels are then constructed by subdividing the cube at the defined interval (1 m) in all dimensions from a user-defined reference corner, and the value of each voxel is assigned the maximum DN value of the samples within the voxel.
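A minimal sketch of this regularization step, assuming the points already carry normalized heights and each voxel keeps the maximum DN of its samples as described above; the function name `build_data_cube` is illustrative.

```python
import numpy as np

def build_data_cube(points, values, voxel=1.0):
    """Regularize samples (x, y, z_normalized) into a voxel cube.
    Each voxel is assigned the maximum DN of the samples falling inside it."""
    origin = points.min(axis=0)  # reference corner (user-definable)
    idx = np.floor((points - origin) / voxel).astype(int)
    cube = np.zeros(idx.max(axis=0) + 1)
    # np.maximum.at handles several samples mapping to the same voxel
    np.maximum.at(cube, (idx[:, 0], idx[:, 1], idx[:, 2]), values)
    return cube, origin

pts = np.array([[0.2, 0.3, 0.1], [0.7, 0.4, 0.6], [1.5, 0.2, 0.9]])
dn = np.array([80.0, 120.0, 60.0])
cube, origin = build_data_cube(pts, dn)
print(cube.shape)     # (2, 1, 1): two voxels along x
print(cube[0, 0, 0])  # 120.0 - the max DN of the two samples in the first voxel
```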
One of the most important factors in GLCM-based texture computation is the moving box (kernel) size, which might account for up to 90% of the variability in a classification task (Marceau et al. 1990). To address this issue, Tsai et al. (2007) proposed a 3D semi-variance analysis to determine appropriate kernel sizes for volumetric data sets. The semi-variance, γ(d), describes the spatial variance using unit pairs of pixels or voxels with a lag of d in 2D or 3D space, defined as

γ(d) = [1 / (2 N(d))] Σ_i [z(x_i) − z(x_i + d)]²    (7)

where d is the distance between a unit pair, z(x_i) is the value at location x_i, and N(d) is the number of unit pairs. Typically, γ(d) increases with d until it reaches its maximum (the sill); the lag at which the sill is reached (the range) indicates the best spatial scale at which to compute GLCM-based texture measures.
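The Eq. (7) computation on a data cube can be sketched as below; averaging γ(d) over the three axes is an assumption made for brevity (the paper's 3D analysis may treat directions separately).

```python
import numpy as np

def semivariance_3d(cube, lag):
    """gamma(d) = (1/2N(d)) * sum over voxel pairs separated by lag d of the
    squared value difference, here averaged over the three axis directions."""
    gammas = []
    for axis in range(3):
        a = np.moveaxis(cube, axis, 0)
        if lag >= a.shape[0]:
            continue
        diff = a[lag:] - a[:-lag]          # all pairs at this lag along the axis
        gammas.append(0.5 * np.mean(diff ** 2))
    return float(np.mean(gammas))

# The lag where gamma(d) levels off (the sill) suggests the kernel size.
rng = np.random.default_rng(0)
cube = rng.random((10, 10, 10))
curve = [semivariance_3d(cube, d) for d in range(1, 5)]
```

Plotting `curve` against the lag and reading off where it flattens mirrors the range-selection step used to pick the 7 × 7 × 7 kernel later in the paper.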
Second and third order texture measures are computed using voxel pairs and triplets with the 3D-GLCM and GLCTF algorithms. As demonstrated in Tsai et al. (2007), 3D-GLCM utilizes second-order GLCM statistics but the process is performed in a 3D data cube. GLCTF, however, extends the conventional GLCM to a third-order texture measure as a tensor field and requires voxel triplets for computation (Tsai and Lai 2013). For voxel triplets within a moving box, the GLCTF is calculated as

M(i, j, k) = Σ_{x=1}^{Wx} Σ_{y=1}^{Wy} Σ_{z=1}^{Wz} {1 if cond.; 0 otherwise}    (8)

with the test condition (cond.) defined as

W(x, y, z) = i  AND  W(x + dx₁, y + dy₁, z + dz₁) = j  AND  W(x + dx₂, y + dy₂, z + dz₂) = k    (9)

In Eqs. (8) and (9), W(x, y, z) is the value of the voxel at (x, y, z), and Wx, Wy, and Wz are the sizes (kernel) of the moving box, which are determined by the 3D semi-variance analysis mentioned above. Two distance vectors, (dx₁, dy₁, dz₁) and (dx₂, dy₂, dz₂), define the relationship between the voxel triplets, and their maximum values are dx, dy, and dz. Two types of voxel triplet connections are considered in this study: vertical and horizontal connection (labelled GLCTF_vh) and bi-diagonal (45°) connection (labelled GLCTF_45d).
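The triplet-counting logic of Eqs. (8) and (9) can be sketched as follows, for a window already quantized to a small number of gray levels. The specific offset vectors shown are illustrative and not necessarily the paper's exact GLCTF_vh definition.

```python
import numpy as np
from itertools import product

def glctf(window, d1, d2, levels):
    """Third-order co-occurrence tensor M(i, j, k): count voxel triplets where
    W(x,y,z) = i, W(x+dx1,y+dy1,z+dz1) = j, W(x+dx2,y+dy2,z+dz2) = k."""
    M = np.zeros((levels, levels, levels), dtype=np.int64)
    nx, ny, nz = window.shape
    for x, y, z in product(range(nx), range(ny), range(nz)):
        x1, y1, z1 = x + d1[0], y + d1[1], z + d1[2]
        x2, y2, z2 = x + d2[0], y + d2[1], z + d2[2]
        if 0 <= x1 < nx and 0 <= y1 < ny and 0 <= z1 < nz \
           and 0 <= x2 < nx and 0 <= y2 < ny and 0 <= z2 < nz:
            M[window[x, y, z], window[x1, y1, z1], window[x2, y2, z2]] += 1
    return M

# A horizontal plus vertical triplet connection (GLCTF_vh-style, assumed offsets)
w = np.array([[[0, 1], [1, 0]], [[1, 0], [0, 1]]])  # 2x2x2 window, 2 gray levels
M = glctf(w, d1=(1, 0, 0), d2=(0, 0, 1), levels=2)
print(M.sum())  # 2 valid triplets fit inside a 2x2x2 window
```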
After the gray level co-occurrence computation, the next step is to extract texture features from the GLCM or GLCTF with different statistical indexes. Previous studies indicated that four statistical texture measures, Angular Second Moment (ASM), Contrast (CON), Entropy (ENT), and Homogeneity (HOM), are the most appropriate for remote sensing classification applications (e.g., Marceau et al. 1990; Baraldi and Parmiggiani 1995; Clausi 2002). These measures were originally designed for the second-order GLCM and thus need to be extended in order to be applied to the third-order GLCTF. First, the calculated GLCTF, M(i, j, k), is converted into a probability form according to Eq. (10):

p(i, j, k) = M(i, j, k) / Σ_i Σ_j Σ_k M(i, j, k)    (10)

The four texture measures are then extended to the third order as described in Eqs. (11) to (14). More detailed descriptions and discussions of the GLCTF computation, semi-variance analysis, and texture measures can be found in various references (e.g., Clausi 2002; Tsai et al. 2007; Warner 2011; Tsai and Lai 2013).
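These measures might look like the sketch below. The Eq. (10) normalization is as stated; however, the dissimilarity term |i − j| + |j − k| used here for CON and HOM is an assumed third-order extension of the classic Haralick forms, not necessarily the exact Eqs. (11) to (14) of the paper.

```python
import numpy as np

def texture_measures(M):
    """ASM, CON, ENT, HOM from a co-occurrence tensor M(i, j, k).
    p = M / sum(M) is the Eq. (10) probability form; the |i-j| + |j-k|
    dissimilarity used for CON/HOM is an assumed third-order extension."""
    p = M / M.sum()
    i, j, k = np.indices(p.shape)
    d = np.abs(i - j) + np.abs(j - k)
    asm = np.sum(p ** 2)                 # energy / uniformity
    con = np.sum(d ** 2 * p)             # contrast
    nz = p[p > 0]
    ent = -np.sum(nz * np.log(nz))       # entropy
    hom = np.sum(p / (1.0 + d))          # homogeneity
    return asm, con, ent, hom

M = np.ones((4, 4, 4))                   # perfectly uniform co-occurrence
asm, con, ent, hom = texture_measures(M)
```

A concentrated tensor (few dominant triplets) yields high ASM and low ENT, while a uniform one does the opposite, which is the behaviour the classifier exploits.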

Classification, Assessment, and Cost Analysis
This research adopts the Random Forests (RF) machine learning algorithm to classify the collected LiDAR features into different land-cover categories. The RF algorithm is a non-parametric classifier that uses multiple decision trees, bootstrap aggregation (bagging), and internal cross-validation techniques (Breiman 2001). The RF principle is to build many decision tree models from randomized subsets of the original data and integrate all trees into a best model for the classification task. One advantage of the RF classifier is that it can avoid the over-fitting problem and thus improve classification accuracy (Ismail et al. 2010). It has been successfully applied to the mapping of invasive plant species (Lawrence et al. 2006), FW LiDAR point cloud classification (Guo et al. 2011), and other applications with plausible results. For comparison, a Naive Bayes (NB) classifier is also used in the point cloud classification. All RF and NB classifications are carried out using the WEKA software (Witten et al. 2011). Before the classification operation, all numeric features are discretized by searching for cut-points to transform them into binary data. A detailed description of the discretization can be found in Witten et al. (2011) and related references.
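The bagging principle behind RF can be illustrated with a toy numpy example that bootstraps the training data and majority-votes depth-1 trees (decision stumps). This is a didactic sketch of bootstrap aggregation only, not WEKA's Random Forests implementation, and the two-feature data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_stump(X, y):
    """Best single-feature threshold split (a depth-1 'tree')."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            acc = max(np.mean(pred == y), np.mean((1 - pred) == y))
            if best is None or acc > best[0]:
                flip = np.mean((1 - pred) == y) > np.mean(pred == y)
                best = (acc, f, t, flip)
    return best[1:]

def bagged_predict(stumps, X):
    """Majority vote over stumps trained on bootstrap samples (bagging)."""
    votes = []
    for f, t, flip in stumps:
        p = (X[:, f] > t).astype(int)
        votes.append(1 - p if flip else p)
    return (np.mean(votes, axis=0) > 0.5).astype(int)

# Toy two-class data standing in for two LiDAR features (e.g., A and NR)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
stumps = []
for _ in range(25):                        # 25 bootstrap rounds
    s = rng.integers(0, len(X), len(X))    # bootstrap sample with replacement
    stumps.append(fit_stump(X[s], y[s]))
acc = np.mean(bagged_predict(stumps, X) == y)
```

A real RF additionally randomizes the feature subset at each split and grows full trees, which is what gives it robustness against over-fitting.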
The classification results are evaluated using confusion (contingency) matrices constructed from 10-fold cross validation against independent check data (ground truth) identified from high resolution aerial photos and ground surveys. Both overall accuracy (OA) and the Kappa coefficient are used for preliminary evaluation. Producer's Accuracy (PA) and User's Accuracy (UA) are further utilized to evaluate omission (1 - PA) and commission (1 - UA) errors for each class in the advanced classification cases. When the omission or commission errors of certain classes are unacceptable, the classification model's decision boundary can be adjusted according to a cost matrix (Witten et al. 2011; Desai and Jadav 2012). The cost matrix has the same size as the confusion matrix; the diagonal elements represent the cost of correct classification, and the off-diagonal elements indicate the cost of misclassification between different classes. In general, the costs are set to 0 and 1 for the diagonal and other elements, respectively. Increasing the cost of a particular misclassification can enlarge the decision boundary to include more samples, improving the classification results of a certain class, although it might also affect the classification of other classes positively or negatively. Therefore, the cost should be modified with care, and different costs and their effects should be analyzed in order to achieve the best cost-benefit trade-off. Based on the cost-benefit analysis of poorly classified classes, the classification model's decision boundary is adjusted to decrease misclassifications.
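Cost-sensitive adjustment can be sketched as choosing the class with the minimum expected cost under the cost matrix; the class probabilities and indices below are illustrative only.

```python
import numpy as np

def cost_sensitive_decision(probs, cost):
    """Pick the class minimizing expected misclassification cost:
    argmin_c sum_k P(k) * cost[k, c]. Raising cost[k, c'] for a confusable
    pair enlarges the decision region of the penalized class."""
    expected = probs @ cost
    return int(np.argmin(expected))

# 3 classes; a default 0/1 cost matrix vs. a C5-style setting that penalizes
# predicting class 1 (say, Broadleaf) when the truth is class 0 (say, Bamboo).
p = np.array([0.45, 0.50, 0.05])   # classifier slightly favours class 1
c01 = 1 - np.eye(3)                # standard 0/1 cost matrix
c5 = c01.copy()
c5[0, 1] = 5.0                     # 5-fold cost for misclassifying 0 as 1
print(cost_sensitive_decision(p, c01))  # -> 1
print(cost_sensitive_decision(p, c5))   # -> 0
```

With equal costs the decision reduces to the maximum-probability class; the 5-fold penalty shifts the boundary so borderline points fall to the protected class, which is the mechanism exploited in the cost analysis below.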

RESULTS AND DISCUSSIONS
A three-phase scenario with three test cases was adopted to demonstrate the effectiveness of volumetric texture features for improving FW LiDAR point cloud land-cover classification. The ground coverage of the three test cases is indicated in Fig. 1. Table 3 shows the land-cover categories of the test cases and their LiDAR sample numbers. The spatial distribution of the ground truth samples used in each test case is displayed in Fig. 2. The first and second test cases are designed for general (broad) and detailed (with more ground cover types) land-cover classifications, respectively, while the third test case aims to distinguish between different vegetation types. Four different combinations of features are used in the test cases: traditional LiDAR features only (comb. 1), comb. 1 plus single-echo features (comb. 2), comb. 2 plus multi-echo features (comb. 3), and comb. 3 plus volumetric texture features (comb. 4), as listed in Table 4.
As described in section 3, the echo-based features are calculated using Gaussian decomposition coupled with a second-derivative algorithm, and the features are assigned to the corresponding LiDAR points (peaks). On the other hand, to derive volumetric texture features, the dense point cloud generated from transforming all waveform samples to the point cloud space has to be rasterized by regularization. After the volumetric texture measures are computed, however, they are associated back to the LiDAR points from the nearest voxels, so the volumetric texture features can be combined with the conventional (I, h) and echo-based features for classification.

Preliminary Classification Results
A series of tests was carried out in this study to better understand the effects of different LiDAR feature combinations on land-cover classification. First of all, to assess the effectiveness of the selected classifier (RF), it was compared with the NB algorithm using test case 1 as an example, with traditional LiDAR features and single- and multi-echo features derived from second-derivative echo detection and Gaussian fitting (but without volumetric texture measures). The preliminary classification result of this test is displayed in Fig. 3. The best NB classification result occurred with comb. 3, but the RF classifier outperformed NB in all three feature combinations, even with fewer features as in combs. 1 and 2. This clearly demonstrates that the selected RF classifier can provide better land-cover classification of LiDAR features than general statistical NB-based algorithms. Furthermore, it is also obvious from Fig. 3 that combining single- and multi-echo features produces better land-cover classification results than using only traditional LiDAR features. A classification map of the best result in test case 1 is displayed in Fig. 4. A visual comparison of the classification map with the high-resolution aerial photo shown in Fig. 2 also confirms that the classification result is reasonable.
Similar procedures were also applied to the land-cover identification in test case 2, which consists of more land-cover types and thus requires a more sophisticated discrimination function from the classifier. Figure 5 compares the NB and RF classification accuracies with different feature combinations. As shown in the figure, the RF classifier again outperformed NB in all feature combinations in test case 2, further proving the advantage of using RF in detailed land-cover classification of LiDAR features.

Classification of Different Vegetation Types
Examining Fig. 5 further, it appears that the difference between the RF classification results for feature combinations 2 and 3 is not significant. This suggests that single- and multi-echo LiDAR features may still have limitations in helping distinguish between similar land-cover classes. This becomes worse in the classification of different vegetation types because they are likely to have very similar LiDAR features. Additional unconventional features, such as volumetric texture measures, should therefore be included to provide better separability among different vegetation types.
Test case 3 of this study is designed specifically to assess the effectiveness of including volumetric texture measures of FW LiDAR in the feature data set for detailed vegetation classification. Figure 6 shows the OA and Kappa values computed from the RF classification results for test case 3 with different feature combinations. As illustrated in this figure, both the OA and Kappa values for feature combinations 2 and 3 are relatively low, indicating that the echo-based FW LiDAR features do not provide adequate separability for the classifier to produce satisfactory classification results for the different vegetation types in the study site. After including volumetric texture features (3D-GLCM, or GLCTF with 45° or vertical-horizontal connected voxel triplets) extracted from the FW LiDAR data cube, the accuracy of the RF classification results is significantly improved. An overall comparison of all test cases is listed in Table 5 to provide a comprehensive understanding of the analysis results discussed in 4.1 and 4.2. As mentioned previously, appropriate kernel sizes must be determined before computing 3D-GLCM and GLCTF to obtain the best texture statistics. According to the confusion matrix (as listed in Table 6a) generated from the classification results for test case 3 with feature combination 3 (conventional and echo-based LiDAR features), Bamboo, Broadleaf, and Coniferous have the most serious misclassification among the 6 different vegetation types. Therefore, two sets of VOIs (volumes of interest) were selected from the ground truth to perform the 3D semi-variance analysis described in 3.3 to determine the most appropriate kernel size for 3D-GLCM and GLCTF computation. The 3D semi-variance analysis indicated that the best separability occurred at a spatial range of 7 voxels. Accordingly, the kernel size was set to 7 × 7 × 7 when computing the 3D-GLCM and GLCTF texture measures for the echo-based LiDAR data cube.
After including volumetric texture measures computed from 3D-GLCM or GLCTF, the classification accuracy was significantly improved, as shown in Fig. 6. This suggests that volumetric texture measures can extract subtle features that are difficult to find in conventional or echo-based LiDAR features and are helpful in distinguishing between different vegetation types. Further examination of the classification of individual classes reveals that the improvement stems mainly from the decrease in omission and commission errors among the three classes (Bamboo, Broadleaf, and Coniferous) which previously could not be separated clearly by the RF classifier. For example, Table 6b lists the confusion matrix generated from the classification results for test case 3 with traditional and echo-based LiDAR features plus third-order texture measures computed from GLCTF with vertical-horizontal connected voxel triplets (GLCTF_vh). Comparing Table 6b against Table 6a, the number of Bamboo points misclassified as Broadleaf was reduced from 775 to 393, while the number of Broadleaf points misclassified as Bamboo was reduced from 328 to 238. A similar decrease in misclassified points can also be observed between Broadleaf and Coniferous (reduced from 258 to 109 and from 437 to 184, respectively). In addition, the misclassification among the other vegetation classes was also reduced, thus improving the PA and UA for all classes; the OA improved from 79.3 to 89.82%, while the Kappa value increased from 0.7336 to 0.8698. The effects of including ASM, CON, ENT, and HOM, individually and all together, as features for discriminating detailed vegetation types can be further examined in Fig. 7. As illustrated in Fig. 7, all four texture measures individually contributed to the classification improvement for all vegetation types. In particular, CON reduces the Bamboo and Coniferous omission errors significantly, while Orchard's PA is increased noticeably by ASM, ENT, and HOM. Similarly, commission errors for all class types are also evidently reduced by the volumetric texture features, with Bamboo having the most improvement in UA. Including all texture features results in a pronounced improvement in the OA and Kappa of the classification, as exhibited in Fig. 7a and discussed above.

Cost Analysis Case
To further improve the classification, cost analysis was performed to adjust the decision boundary during classification. Assume the classifier considers all costs to be equal (referred to as C1 hereinafter) in the original classification process. According to Table 6b, there are still misclassifications between Bamboo and Broadleaf, so the cost of misclassification between Bamboo and Broadleaf was set to 5-, 10-, and 20-fold (referred to as C5, C10, and C20) in the cost matrix, and the test case was reclassified three times using the three new cost settings. Figure 8 compares the classification results for different volumetric texture measures with different cost settings in terms of the number of misclassified points (omissions and commissions) between Bamboo and Broadleaf, in which the black vertical lines highlight the difference in omission and commission errors at different cost settings. From this figure, it can be observed that C5 is the best trade-off for obtaining minimum omission and commission errors for Bamboo and Broadleaf while maintaining high OA and Kappa values for the classification. It also appears that among the three sets of volumetric texture features, GLCTF_vh produced the fewest omission and commission errors between Bamboo and Broadleaf in the classification with the C5 cost setting, as illustrated in Fig. 9.
In addition to enabling the computation of high order texture measures, another advantage of treating LiDAR point clouds as volumetric data sets is the possibility of cross-section examination of the data. In particular, with the denser point clouds of FW LiDAR, different targets can be examined in more complete profiles. Figure 10 shows profiles of a few vegetated areas of the study site, including Coniferous, Broadleaf, Orchard, and Bamboo, based on the regularized data cube. Although only a single flight line was used in this study, the proposed approach can still be applied to analyze multiple overlapping flight lines or multi-temporal data sets. However, in the cases of multiple flight lines or multi-temporal analysis, the radiometric calibration in the pre-processing becomes more important, as it reduces the radiometric variations between different flight lines or times, thus producing more robust and unbiased features from the later feature extraction processes described in 3.2 and 3.3 and more reliable classification results.

CONCLUSIONS
This study extracted traditional and unconventional FW LiDAR features to improve land-cover classification of LiDAR point clouds. In addition to traditional LiDAR features such as intensity and normalized height, a second-derivative algorithm is used to detect echoes and extract echo-based features from Gaussian-fitted waveforms. The dense point clouds acquired with FW LiDAR sensors are also treated as volumetric data sets to allow the extraction of volumetric texture features from 3D-GLCM and GLCTF using second and third order texture measures, fully exploring the 3D gray level co-occurrence characteristics of the data sets. As the kernel size is an important factor in GLCM-based texture analysis, 3D semi-variance analysis is adopted to determine the most appropriate kernel size for 3D-GLCM and GLCTF computation. Comparisons between different combinations of traditional, single- and multi-echo based LiDAR features, and volumetric texture measures are accomplished using a RF classifier to evaluate the effectiveness of volumetric LiDAR texture features for improving land-cover identification.
The results presented in this paper demonstrate that echo-based LiDAR features may generate acceptable general land-cover classification results, but they may not be adequate for more detailed classifications such as distinguishing different vegetation types. After including volumetric texture features, the OA and Kappa coefficient of the classification increase by approximately 10 and 14%, respectively. The result can be further improved by a cost analysis that adjusts the decision boundary during classification.
The examples described above show that volumetric texture measures can extract distinct characteristics from regularized FW LiDAR data cubes for better land-cover classification. However, volumetric texture extraction requires intensive computation and is time-consuming, especially for the GLCTF algorithm and third-order texture descriptors. Developing a simplified method to reduce the computational requirements may be needed in the future. For instance, Akono et al. (2003) proposed a simple summation to simplify GLCM-based computation. Although it still has some limitations (Akono et al. 2006; Warner 2011), it has great potential to be incorporated into the texture computation algorithms discussed in this paper to increase the efficiency of volumetric texture feature extraction from FW LiDAR data.

Fig. 6. RF classification accuracies for test case 3 with traditional and echo-based LiDAR features (comb. 2 and 3) and with volumetric texture features.

Fig. 8. Omission and commission counts between Bamboo and Broadleaf with different cost settings. (Color online only)

Fig. 9. Omission and commission errors between Bamboo and Broadleaf in the classification with the C5 cost setting for the three volumetric texture feature sets.

Table 2. LiDAR features used in this study.

Table 4. Combinations of LiDAR features for land-cover classification.

Table 5. Classification evaluations for all three test cases.