In this paper we perform an empirical evaluation of supervised learning methods on high-dimensional data. We evaluate learning performance on three metrics: accuracy, AUC, and squared loss. We also study how increasing dimensionality affects the relative performance of the learning algorithms. Our findings are consistent with previous studies for problems of relatively low dimensionality, but suggest that the relative performance of the various learning algorithms changes as dimensionality increases. To our surprise, the methods best able to learn from high-dimensional data are random forests and neural nets.
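The three metrics can be sketched as follows; this is a minimal illustration using scikit-learn, where the toy labels and predicted probabilities are hypothetical, not data from the paper.

```python
# Illustrative computation of the three evaluation metrics:
# accuracy, AUC, and squared loss. Labels/scores below are made up.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1])             # hypothetical binary labels
y_prob = np.array([0.2, 0.8, 0.6, 0.4, 0.9])   # hypothetical predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)           # threshold at 0.5 for accuracy

acc = accuracy_score(y_true, y_pred)           # fraction of correct predictions
auc = roc_auc_score(y_true, y_prob)            # area under the ROC curve
sql = np.mean((y_true - y_prob) ** 2)          # mean squared loss of the probabilities

print(acc, auc, sql)
```

Accuracy depends on a classification threshold, AUC is threshold-free and rank-based, and squared loss directly penalizes miscalibrated probability estimates, which is why evaluating on all three gives a more complete picture of learning performance.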