Feature Selection

After training and testing databases are established, the images are usually preprocessed using various filtering, segmentation, and transformation techniques to define the regions of interest (ROIs) in the image. A computer program can then be used to compute or extract features from each ROI in the processed images. Many different image features (e.g., intensity-based, geometrical, morphological, fractal-dimension, and texture features) have been used in medical image processing. Feature extraction can be considered a form of data compression that removes irrelevant information from the raw data while preserving relevant information. However, defining effective features is a difficult task. Beyond the complex and noisy nature of medical images, another important reason is that many computer-extracted features are not visible to the human eye, or the meanings they represent are inaccessible to human understanding. It is almost impossible to directly interpret the effectiveness or redundancy of these subtle features. Thus, to identify as many effective features as possible from the ROIs in medical images, investigators usually extract a large number of initial features. For example, to classify masses versus normal breast tissue in mammograms, one study initially extracted 572 texture features and 15 morphological features from each ROI [23]. Since most of these initial features were redundant, selection methods were then applied to obtain a small set of features for the final classifier. The task of choosing an optimal set of features (or input nodes) for ANNs and BBNs resembles the classic signal-to-noise ratio problem: every feature extracted from a medical image contains both information (signal) and noise, and redundant features used as input nodes of an ANN or a BBN contribute very little information to the network while adding noise to it.
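To make the redundancy problem concrete, the sketch below shows one simple way a redundant initial feature set might be pruned: a correlation-based filter that keeps a feature only if it is not highly correlated with any feature already kept. This is a minimal illustration, not the method used in the cited study; the `0.95` threshold and the toy data are assumptions chosen for the example.

```python
import numpy as np

def drop_redundant_features(X, threshold=0.95):
    """Keep a feature column only if its absolute correlation with every
    already-kept column is below `threshold`.

    X: (n_samples, n_features) array of features extracted from ROIs.
    Returns the indices of the retained columns.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

# Toy data: column 1 is a near-duplicate of column 0; column 2 is independent.
rng = np.random.default_rng(0)
f0 = rng.normal(size=100)
X = np.column_stack([
    f0,
    2.0 * f0 + 0.01 * rng.normal(size=100),  # redundant copy of f0
    rng.normal(size=100),                    # independent feature
])
print(drop_redundant_features(X))  # the near-duplicate column is dropped
```

In practice a greedy filter like this is only a first pass; the genetic-algorithm and stepwise methods discussed later in the literature search the feature space more thoroughly.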
An important lesson about the generalization of a supervised machine-learning classifier can be learned from statistics: too many free parameters result in overfitting. A curve fitted with too many parameters follows every small detail or noise fluctuation but interpolates and extrapolates poorly [8]. The same is true for an ANN and a BBN. Recent theoretical results support the view that decreasing the number of free parameters in a network can improve its generalization; thus, one of the most critical problems in network design is choosing an appropriate network size for a given application [2].
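The curve-fitting analogy can be demonstrated in a few lines. In this hypothetical sketch, noisy samples of a smooth function are fitted with a low-degree and a very high-degree polynomial; the degrees, sample sizes, and noise level are illustrative assumptions, and the held-out error stands in for generalization performance.

```python
import numpy as np

# Noisy training samples of a smooth underlying function.
rng = np.random.default_rng(42)
x_train = np.linspace(0.0, 1.0, 15)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=x_train.size)

# Held-out points (interpolation) for measuring generalization.
x_test = np.linspace(0.02, 0.98, 50)
y_test = np.sin(2 * np.pi * x_test)

def held_out_mse(degree):
    """Fit a polynomial of the given degree and return its mean squared
    error on the held-out points."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# A modest fit generalizes; a 14th-degree fit (as many free parameters as
# training points) follows the noise and interpolates poorly.
print(held_out_mse(3), held_out_mse(14))
```

The same trade-off motivates limiting the number of input nodes and hidden weights in an ANN or a BBN: each extra free parameter gives the model another way to fit noise.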

Therefore, in developing a medical image processing algorithm or system involving an ANN or a BBN, optimizing feature selection is critical to achieving good training performance while preserving the generalization of the network in clinical testing. In this section we discuss and compare several feature selection methods previously reported in the field of medical image processing.
