A Python library for large-scale kernel methods, with optional (multi-)GPU acceleration.

The library currently includes two solvers: one for approximate kernel ridge regression [2], which is extremely fast, and one for kernel logistic regression [3], which is slower but can achieve better accuracy on binary classification problems.
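To make the first solver concrete: FALKON is built on the Nyström approximation to kernel ridge regression, which restricts the solution to a small set of m inducing points. The NumPy sketch below illustrates that approximation with a direct linear solve; it is only an illustration, not the library's implementation (Falkon solves the same system with a preconditioned iterative method, as described in [2]), and all names in it are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 1000, 50, 1e-3  # training points, Nystrom centres, ridge penalty

# Toy 1-D regression data: noisy sine.
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Pick m inducing points uniformly at random from the training set.
centres = X[rng.choice(n, size=m, replace=False)]

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of A and B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma**2))

Knm = gaussian_kernel(X, centres)        # (n, m) cross-kernel
Kmm = gaussian_kernel(centres, centres)  # (m, m) kernel between centres

# Nystrom KRR: solve the m x m system
#   (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
# instead of the full n x n kernel ridge regression system.
alpha = np.linalg.solve(Knm.T @ Knm + lam * n * Kmm, Knm.T @ y)

# Predictions only require kernels against the m centres.
train_pred = Knm @ alpha
```

The key point is that memory and solve cost depend on m rather than n, which is what makes scaling to very large datasets possible.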

The main features of Falkon are:

  • Full multi-GPU support - All compute-intensive parts of the algorithms are multi-GPU capable.

  • Extreme scalability - Unlike other kernel solvers, we keep memory usage in check. We have tested the library with datasets of billions of points.

  • Sparse data support

  • Scikit-learn integration - Our estimators follow the scikit-learn API.
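"Following the scikit-learn API" means each solver is an estimator that takes its hyperparameters in `__init__`, learns state in `fit`, and produces predictions with `predict`. The toy linear ridge estimator below is a hypothetical sketch of that convention only; the class and its parameters are ours, not Falkon's:

```python
import numpy as np

class ToyRidgeEstimator:
    """Minimal scikit-learn-style estimator: hyperparameters are stored in
    __init__, learned state (coef_) is set by fit(), and predict() uses it."""

    def __init__(self, penalty=1e-6):
        self.penalty = penalty

    def fit(self, X, y):
        n, d = X.shape
        # Ridge regression normal equations: (X^T X + penalty * I) w = X^T y.
        self.coef_ = np.linalg.solve(
            X.T @ X + self.penalty * np.eye(d), X.T @ y
        )
        return self  # returning self allows chained calls, per sklearn style

    def predict(self, X):
        return X @ self.coef_

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5])  # exact linear target, no noise

model = ToyRidgeEstimator(penalty=1e-8).fit(X, y)
preds = model.predict(X)
```

Because the estimators follow this protocol, they can be dropped into scikit-learn tooling such as pipelines and cross-validation utilities.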

For more details about the algorithms used, you can read our paper [1], browse the source code at github.com/FalkonML/falkon, or consult the documentation. Also, make sure to follow the example notebooks to discover all of Falkon's features.

Falkon is built on top of PyTorch, which supports both CPU and GPU tensor calculations, and KeOps, which provides fast kernel evaluations on the GPU.

If you find this library useful for your research, please cite our paper [1]!

References

[1] Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi, “Kernel methods through the roof: handling billions of points efficiently,” Advances in Neural Information Processing Systems, 2020.

[2] Alessandro Rudi, Luigi Carratino, Lorenzo Rosasco, “FALKON: An optimal large scale kernel method,” Advances in Neural Information Processing Systems, 2017.

[3] Ulysse Marteau-Ferey, Francis Bach, Alessandro Rudi, “Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses,” Advances in Neural Information Processing Systems, 2019.
