This is the second most important function in the package. Using the formulas selected by the `OmicSelector_OmicSelector` function, it tests the derived miRNA sets in a systematic manner using multiple model induction methods. This function allows you to benchmark miRNA sets in the context of their potential for diagnostic test development. A hidden feature of this package is the application of `mxnet`; note that `mxnet` has to be installed and configured separately.
Usage
OmicSelector_benchmark(
  wd = getwd(),
  search_iters = 2000,
  keras_epochs = 5000,
  keras_threads = floor(parallel::detectCores()/2),
  search_iters_mxnet = 5000,
  cores = parallel::detectCores() - 1,
  input_formulas = readRDS("featureselection_formulas_final.RDS"),
  output_file = "benchmark.csv",
  algorithms = c("mlp", "mlpML", "svmRadial", "svmLinear", "rf", "C5.0", "rpart",
    "rpart2", "ctree"),
  holdout = TRUE,
  stamp = as.character(as.numeric(Sys.time())),
  OmicSelector_docker = FALSE,
  gpu = FALSE
)
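For example, a minimal sketch of a typical call, assuming `OmicSelector_OmicSelector` has already produced `featureselection_formulas_final.RDS` in the working directory (the reduced `search_iters` value and the algorithm subset are illustrative assumptions, not recommended settings):

library(OmicSelector)

# Assumes the feature selection step has already been run in this directory.
OmicSelector_benchmark(
  wd = getwd(),
  search_iters = 200,                  # illustrative: fewer random hyperparameter sets for a quick run
  cores = parallel::detectCores() - 1,
  input_formulas = readRDS("featureselection_formulas_final.RDS"),
  algorithms = c("svmRadial", "rf"),   # subset of caret methods; logistic regression is always included
  holdout = TRUE,                      # tune hyperparameters on the hold-out test set
  output_file = "benchmark.csv"
)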
Arguments
- wd
Working directory where `OmicSelector_OmicSelector` was also run.
- search_iters
The number of random hyperparameter sets tested during model induction.
- keras_epochs
Number of epochs used in keras-based methods (e.g. "mlpKerasDropout", "mlpKerasDecay"), if such methods are used.
- cores
Number of cores used in parallel processing.
- output_file
Output CSV file for the benchmark results.
- algorithms
Caret methods that will be checked in the benchmark. Logistic regression is always included by default.
- holdout
The best set of hyperparameters can be selected using: (1) if TRUE, hold-out validation on the test set; (2) if FALSE, 10-fold cross-validation repeated 5 times.
- stamp
Character string or timestamp used to make the benchmark unique.
- OmicSelector_docker
Enables features used by the OmicSelector GUI. You should almost always leave it set to FALSE (the default).
- input_formulas
List of formulas as created by `OmicSelector_OmicSelector` or `OmicSelector_merge_formulas`. These formulas will be checked in the benchmark.
- gpu
Whether to use the GPU in mxnet and keras processing. Default: FALSE.
- keras_threads
This package supports training keras networks in parallel; this sets the number of threads used by keras-based methods (e.g. "mlpKerasDropout", "mlpKerasDecay").
- search_iters_mxnet
The number of random hyperparameter sets tested for mxnet-based methods.
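Once the run finishes, the results can be inspected directly from the output CSV. A minimal sketch (the exact columns depend on the algorithms benchmarked):

# Load the benchmark table written by OmicSelector_benchmark().
results <- read.csv("benchmark.csv")
head(results)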