Hi all,
I have a dataset of functional magnetic resonance imaging (fMRI) files on which I am trying to run multivariate classification, using MATLAB and LibSVM. In my experiment, participants attended to stimuli from two conditions, A and B. I would like to run a searchlight analysis: I select a small sphere of adjacent voxels, imaged under conditions A and B, and submit them to a classifier that tries to determine which samples came from condition A and which from condition B. I then record the classifier's average accuracy for that sphere. This means that for each of the ~70,000 spheres of voxels in the brain, I am training and testing a model, and I am doing this once per brain (9 brains in total).
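For concreteness, this is a minimal sketch of the per-sphere loop I have in mind, using LibSVM's MATLAB interface. Here getSphereVoxels is just a placeholder for however sphere membership is computed, and the data layout is an assumption:

    % data:   nSamples x nVoxels matrix of activation patterns
    % labels: nSamples x 1 double vector (+1 = condition A, -1 = condition B)
    nVoxels   = size(data, 2);
    sphereAcc = zeros(nVoxels, 1);
    for center = 1:nVoxels
        idx = getSphereVoxels(center, radius);  % placeholder helper
        X   = double(data(:, idx));
        % 5-fold cross-validation accuracy for this sphere
        % (-t 0 = linear kernel, -v 5 = 5-fold CV, -q = quiet)
        sphereAcc(center) = svmtrain(labels, X, '-t 0 -v 5 -q');
    end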
The problem arises when optimizing my hyperparameters. I would like to keep the hyperparameters consistent across all 9 brains, so that I do not end up with a different model for each brain or for each sphere of voxels. The assumption I am making is that spheres whose activity is consistently discriminable across brains are spheres that carry information.
My classifiers (Gaussian naive Bayes and linear support vector machine) have two hyperparameters. If I were to run a grid search over 10 values of each parameter, that would mean 100 parameter combinations, each evaluated on all 9 brains. Since one pass over a brain takes about 5 minutes, that works out to 100 x 9 x 5 minutes = 75 hours of computation per classifier.
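Schematically, the per-brain grid search would look like the following. I show LibSVM's cost and gamma as the two parameters purely as an illustration (with a strictly linear kernel only the cost applies, and for naive Bayes its own parameters would be substituted), and the candidate ranges are assumed:

    cVals = 2.^(-4:5);   % 10 candidate cost values (assumed range)
    gVals = 2.^(-7:2);   % 10 candidate gamma values (assumed range)
    cvAcc = zeros(numel(cVals), numel(gVals));
    for i = 1:numel(cVals)
        for j = 1:numel(gVals)
            opts = sprintf('-c %g -g %g -v 5 -q', cVals(i), gVals(j));
            % in the real analysis each cell would itself be averaged
            % over all ~70,000 spheres, which is what makes this expensive
            cvAcc(i, j) = svmtrain(labels, X, opts);
        end
    end
    [bestAcc, bestIdx] = max(cvAcc(:));

Those 100 cells, evaluated on 9 brains at roughly 5 minutes per pass, are where the 75-hour figure comes from.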
Has anyone done a similar computation, and if so, do you have any recommendations on how to speed it up?
Matthew