Gal Kaplun

Publications

Deconstructing Distributions: A Pointwise Framework of Learning

Gal Kaplun*, Nikhil Ghosh*, Saurabh Garg, Boaz Barak and Preetum Nakkiran.

Under Review.

Knowledge Distillation: Bad Models Can Be Good Role Models

Gal Kaplun, Eran Malach, Preetum Nakkiran and Shai Shalev-Shwartz. [alphabetical]

Under Review.

For self-supervised learning, Rationality implies generalization, provably

Yamini Bansal*, Gal Kaplun* and Boaz Barak.

Poster at ICLR 2021.

For manifold learning, deep neural networks can be locality sensitive hash functions

Nishanth Dikkala, Gal Kaplun and Rina Panigrahy. [alphabetical]

2022 ICML Workshop on Topology, Algebra, and Geometry in Machine Learning.

Deep Double Descent: Where bigger models and more data hurt

Preetum Nakkiran, Gal Kaplun*, Yamini Bansal*, Tristan Yang, Boaz Barak and Ilya Sutskever.

Poster at ICLR 2020.

SGD Learns Functions of Increasing Complexity

Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L. Edelman, Fred Zhang and Boaz Barak.

NeurIPS 2019 Spotlight talk.

Robust Influence Maximization for Hyperparametric Models

Dimitris Kalimeris, Gal Kaplun and Yaron Singer. [alphabetical]

ICML 2019 Presentation and Poster.

Robust Neural Networks are More Interpretable for Genomics

Peter K. Koo, Sharon Qian, Gal Kaplun, Verena Volf and Dimitris Kalimeris.

2019 ICML Workshop on Computational Biology.




