Apache Spark for Machine Learning on Large Data Sets

27 Oct 2017
14:30 - 15:10


Apache Spark is a general-purpose framework for distributed data processing. With MLlib, Spark’s machine learning library, fitting a model to a huge data set becomes straightforward. Spark’s general-purpose functionality also makes it possible to apply an already-trained model across a large collection of observations. We’ll walk through fitting a model to a big data set with MLlib and applying a trained scikit-learn model to a large data set.
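
A minimal sketch of the MLlib side, assuming a Parquet table with numeric feature columns (f1, f2, f3) and a binary label column; the path, app name, and column names are placeholders, not details from the talk:

    # Fit a logistic regression with Spark MLlib's DataFrame-based API.
    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("mllib-fit").getOrCreate()

    # Hypothetical feature table; substitute your own path and columns.
    df = spark.read.parquet("hdfs:///data/observations.parquet")

    # MLlib estimators expect the features packed into a single vector column.
    assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="label")

    model = lr.fit(assembler.transform(df))

And one common way to handle the scikit-learn side, under the same assumed schema: broadcast the trained model to the executors and score each partition locally, so the full data set never has to fit on the driver.

    # Apply a pre-trained scikit-learn model to a large Spark DataFrame.
    import joblib
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sklearn-score").getOrCreate()

    sk_model = joblib.load("model.pkl")                 # trained locally with scikit-learn
    bc_model = spark.sparkContext.broadcast(sk_model)   # shipped to each executor once

    df = spark.read.parquet("hdfs:///data/observations.parquet")

    def score_partition(rows):
        # Buffer one partition, build a feature matrix, and predict in bulk so
        # the scikit-learn call is vectorized rather than invoked per row.
        rows = list(rows)
        if not rows:
            return
        features = [[r["f1"], r["f2"], r["f3"]] for r in rows]
        for row, pred in zip(rows, bc_model.value.predict(features)):
            yield (row["f1"], row["f2"], row["f3"], float(pred))

    predictions = df.rdd.mapPartitions(score_partition).toDF(
        ["f1", "f2", "f3", "prediction"])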