Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models

ETH Zürich, September 2025
NeurIPS 2025
Oral Presentation at NeurIPS 2025 CCFM Workshop

Recent empirical studies have explored continuing to train a model at test time for a given task, known as test-time training (TTT), and have found it to yield significant performance improvements. However, there is limited understanding of why and when TTT is effective. Earlier explanations mostly attribute its benefits to out-of-distribution adaptation or access to privileged data. However, the growing scale of foundation models, for which most test data is effectively in-distribution, calls these explanations into question. We instead posit that foundation models remain globally underparameterized, and that TTT provides a mechanism for specialization after generalization: focusing model capacity on the concepts relevant to the test task.
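For concreteness, here is a minimal sketch of the TTT loop discussed above: given a test input, the model is briefly fine-tuned on data related to that input before making its prediction. The retrieval function, optimizer, and hyperparameters below are illustrative placeholders, not the exact setup from the paper.

import copy
import torch
import torch.nn.functional as F

def test_time_train(model, retrieve_related, test_input, steps=20, lr=1e-4):
    """Specialize a pretrained model to a single test task.

    `retrieve_related` is a placeholder returning (inputs, targets)
    semantically related to `test_input`, e.g. nearest neighbors in
    embedding space; the paper's exact selection rule may differ.
    """
    specialized = copy.deepcopy(model)      # keep the global model intact
    specialized.train()
    optimizer = torch.optim.SGD(specialized.parameters(), lr=lr)

    inputs, targets = retrieve_related(test_input)
    for _ in range(steps):                  # a few gradient steps on related data
        optimizer.zero_grad()
        loss = F.cross_entropy(specialized(inputs), targets)
        loss.backward()
        optimizer.step()

    specialized.eval()
    with torch.no_grad():
        return specialized(test_input.unsqueeze(0))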

Figure: Setting. Concepts from a high-dimensional, sparse concept space are superimposed on a lower-dimensional feature space.
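A toy illustration of this setting (the dimensions and sparsity level below are arbitrary choices, not the paper's): each data point activates only a few of many concepts, and a fixed dictionary superimposes those concepts into a much lower-dimensional feature vector.

import numpy as np

rng = np.random.default_rng(0)
num_concepts, feature_dim, sparsity = 10_000, 512, 5   # illustrative sizes

# Dictionary of concept directions, superimposed in feature space.
W = rng.normal(size=(feature_dim, num_concepts)) / np.sqrt(feature_dim)

def sample_point():
    """Sparse concept activations -> low-dimensional feature vector."""
    z = np.zeros(num_concepts)
    active = rng.choice(num_concepts, size=sparsity, replace=False)
    z[active] = rng.random(sparsity)
    return W @ z, active

x, active_concepts = sample_point()
print(x.shape, sorted(active_concepts))   # (512,) features, 5 active concepts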

Specifically, under the linear representation hypothesis, we propose a model in which TTT achieves a substantially smaller in-distribution test error than global training. We empirically validate our model’s key assumptions by training a sparse autoencoder on ImageNet, showing that semantically related data points are explained by only a few shared concepts. Finally, we perform scaling studies across image and language tasks that confirm the practical implications of our model, identifying the regimes where specialization is most effective.
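As a rough sketch of this validation step, the snippet below shows a minimal sparse autoencoder of the kind commonly trained on frozen backbone features; the width, L1 penalty, and placeholder features are illustrative and need not match the paper's configuration.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 penalty on the code,
    encouraging each feature vector to be explained by few concepts."""

    def __init__(self, feature_dim=512, num_concepts=10_000):
        super().__init__()
        self.encoder = nn.Linear(feature_dim, num_concepts)
        self.decoder = nn.Linear(num_concepts, feature_dim, bias=False)

    def forward(self, x):
        code = torch.relu(self.encoder(x))   # sparse concept activations
        return self.decoder(code), code

def sae_loss(x, recon, code, l1_coeff=1e-3):
    return torch.mean((recon - x) ** 2) + l1_coeff * code.abs().mean()

# Usage on a batch of (frozen) backbone features:
sae = SparseAutoencoder()
features = torch.randn(64, 512)              # placeholder for ImageNet features
recon, code = sae(features)
loss = sae_loss(features, recon, code)
loss.backward()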

Citation

references.bib
@misc{hübotter2025specializationgeneralizationunderstandingtesttime,
  title         = {Specialization after Generalization: Towards Understanding Test-Time Training in Foundation Models},
  author        = {Jonas Hübotter and Patrik Wolf and Alexander Shevchenko and Dennis Jüni and Andreas Krause and Gil Kur},
  year          = {2025},
  url           = {https://arxiv.org/abs/2509.24510},
  eprint        = {2509.24510},
  archiveprefix = {arXiv},
  primaryclass  = {cs.LG}
}