SVM: Cross-Validation and Coarse-to-Fine Parameter Search + SVR

It's great that you have included the SVM, but to be genuinely useful for research it needs the following features for optimal model selection. At the moment the models are not optimized.

1. Cross-validation: a k-fold cross-validation step needs to be added. This should be relatively easy.
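
   Since XLSTAT's internals are not public, here is only a sketch of what the requested k-fold step looks like, using scikit-learn's SVM on toy data as a stand-in:

   ```python
   from sklearn.datasets import make_classification
   from sklearn.model_selection import cross_val_score
   from sklearn.svm import SVC

   # Toy data standing in for a user's sample.
   X, y = make_classification(n_samples=200, n_features=5, random_state=0)

   # k-fold cross-validation: the sample is split into k folds; each fold
   # serves once as the validation set while the rest train the model.
   scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5)
   print(scores.mean())  # mean accuracy across the 5 folds
   ```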

2. Parameter search: currently each kernel parameter takes only a single value, so you have to change the values manually to find the optimum model. There should be a feature where the user specifies the minimum and maximum values to try and the step size, i.e. (min, max, step).
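
   To illustrate the requested (min, max, step) interface, here is a hedged sketch with scikit-learn's GridSearchCV searching the C parameter; the parameter name and ranges are only examples:

   ```python
   import numpy as np
   from sklearn.datasets import make_classification
   from sklearn.model_selection import GridSearchCV
   from sklearn.svm import SVC

   X, y = make_classification(n_samples=200, n_features=5, random_state=0)

   # Build a (min, max, step) grid for C: 1, 2, ..., 10.
   c_min, c_max, c_step = 1.0, 10.0, 1.0
   grid = {"C": np.arange(c_min, c_max + c_step / 2, c_step)}

   # Every grid value is evaluated by cross-validation; the best is kept.
   search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
   search.fit(X, y)
   print(search.best_params_["C"])
   ```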

3. Coarse-to-fine parameter search: extending point 2, XLSTAT should perform a coarse-to-fine search over parameter values. The user specifies the search limits and the final accuracy, i.e. (min, max, step, accuracy). For (1, 10, 1, 0.01), the search runs from 1 to 10 in steps of 1; if 6 is the optimum, it then searches from 5.5 to 6.5 with step 0.1, and if 6.2 is best there, from 6.15 to 6.25 with step 0.01.
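
   The procedure described above can be sketched directly; this is a minimal one-dimensional implementation of the zoom-in rule, with a hypothetical objective peaked at 6.2 to mirror the worked example:

   ```python
   import numpy as np

   def coarse_to_fine(score, lo, hi, step, accuracy):
       # Scan [lo, hi] at the current step, pick the best value, then
       # zoom into a half-step window around it with a 10x finer step,
       # stopping once the step has reached the requested accuracy.
       while True:
           grid = np.arange(lo, hi + step / 2, step)
           best = float(max(grid, key=score))
           if step <= accuracy + 1e-12:  # tolerance for float division
               return round(best, 10)
           lo, hi = best - step / 2, best + step / 2
           step /= 10.0

   # Hypothetical objective with its maximum at 6.2.
   peak = lambda x: -(x - 6.2) ** 2
   print(coarse_to_fine(peak, 1.0, 10.0, 1.0, 0.01))  # 6.2
   ```

   In a real model-selection setting, `score` would be the cross-validated accuracy at a given parameter value rather than a closed-form function.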

4. Forecasting metrics for the predictions: currently XLSTAT provides only the results, not the accuracy metrics of the forecasts for the validation and prediction samples. This should be fairly easy to add. (I also do not understand the difference between the validation sample and the prediction sample, given that they produce different results.)
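
   As an illustration of the kind of metrics requested, here is a small sketch computing three standard forecast-accuracy measures; the exact set XLSTAT would report is of course up to its developers:

   ```python
   import numpy as np

   def forecast_metrics(actual, predicted):
       # MAE, RMSE and MAPE (in percent) for a validation or
       # prediction sample.
       actual = np.asarray(actual, dtype=float)
       predicted = np.asarray(predicted, dtype=float)
       err = actual - predicted
       return {
           "MAE": float(np.mean(np.abs(err))),
           "RMSE": float(np.sqrt(np.mean(err ** 2))),
           "MAPE": float(np.mean(np.abs(err / actual)) * 100),
       }

   print(forecast_metrics([100, 200, 300], [110, 190, 300]))
   ```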

5. Include Support Vector Regression (SVR) models: SVR is the natural and necessary extension of SVM classification, just as linear regression is of the binary logit model.
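
   For reference, epsilon-SVR reuses the SVM's kernel machinery but fits a continuous target; a minimal scikit-learn sketch on toy linear data (all names and values illustrative):

   ```python
   import numpy as np
   from sklearn.svm import SVR

   # Toy regression data: a noisy linear relationship y = 2x + 1.
   rng = np.random.default_rng(0)
   X = rng.uniform(0, 10, size=(100, 1))
   y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 0.1, size=100)

   # epsilon-SVR: errors inside the epsilon tube are ignored, so the
   # fit depends only on the support vectors, as in SVM classification.
   model = SVR(kernel="linear", C=10.0, epsilon=0.1).fit(X, y)
   print(model.predict([[5.0]]))  # close to 2*5 + 1 = 11
   ```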

With these features included, XLSTAT will provide a full machine-learning experience to my students, both undergraduate and graduate, and we will not need to buy Matlab and learn to program in it.

  • Guest
  • Dec 2 2018
  • Shipped