Motivation
AutoML systems have traditionally focused on supervised learning with methods that train from scratch on a given task. Recently, structured foundation models such as TabPFN, TabDPT, and Chronos have emerged; pretrained on real or synthetic data, they achieve strong performance without fine-tuning through pure in-context learning. Models such as TabPFNv2 can even match or outperform state-of-the-art AutoML systems in the small-data regime, and recent works such as TabICL and TabFlex implement strategies to scale to medium and large datasets. Simultaneously, new non-FM deep learning models such as TabM, ModernNCA, and RealMLP have all shown performance rivaling tree-based methods on tabular data. In this tutorial, we will showcase the current state of the field with a comprehensive benchmark of tabular methods and discuss how to leverage these methods to create the next generation of AutoML systems.
Structured foundation models represent arguably the largest change in methodology in the tabular and time series domains in the past decade. For AutoML systems to remain relevant, it is critical that the AutoML community be aware of and act to incorporate recent advancements. At the same time, proper evaluation and benchmarking are key to maintaining practitioners' trust. Our tutorial highlights both aspects to ensure that we properly evaluate these methods while leveraging AutoML techniques such as portfolio building and ensembling to get the most out of these advancements.
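The ensembling technique mentioned above can be sketched concretely. Below is a minimal, illustrative implementation of greedy ensemble selection in the style of Caruana et al. (2004), a technique used by AutoML systems such as AutoGluon and auto-sklearn: models are repeatedly selected (with replacement) so that the averaged predicted probabilities minimize validation log loss. The model names and prediction values here are hypothetical placeholders, not results from any benchmark.

```python
import numpy as np

def greedy_ensemble_selection(val_preds, y_val, n_iters=10):
    """Greedily pick models (with replacement) whose averaged predicted
    probabilities minimize validation log loss; returns selection-frequency
    weights for each model."""
    eps = 1e-12

    def log_loss(probs):
        # Mean negative log probability assigned to the true class.
        true_class_probs = probs[np.arange(len(y_val)), y_val]
        return -np.mean(np.log(np.clip(true_class_probs, eps, None)))

    chosen = []
    ensemble_sum = np.zeros_like(next(iter(val_preds.values())))
    for _ in range(n_iters):
        best_name, best_loss = None, np.inf
        for name, probs in val_preds.items():
            candidate = (ensemble_sum + probs) / (len(chosen) + 1)
            loss = log_loss(candidate)
            if loss < best_loss:
                best_name, best_loss = name, loss
        chosen.append(best_name)
        ensemble_sum += val_preds[best_name]
    return {name: chosen.count(name) / len(chosen) for name in val_preds}

# Illustrative validation predictions (3 samples, 2 classes) from two
# hypothetical base models; any real system would use held-out predictions.
y_val = np.array([0, 1, 1])
val_preds = {
    "tabpfn": np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]),
    "gbm":    np.array([[0.6, 0.4], [0.7, 0.3], [0.3, 0.7]]),
}
weights = greedy_ensemble_selection(val_preds, y_val, n_iters=5)
```

Selecting with replacement is what makes this simple loop robust: strong models accumulate weight across iterations, while weak models that never reduce the ensemble's validation loss end up with weight zero.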
Programme Outline
- Introduction to Tabular Foundation Models
- Motivation for Foundation Models for Tabular Prediction
- In-Context Learning in TabPFN & TabPFN v2
- Extensions
- Benchmarking in the Age of Tabular Foundation Models
- Shortcomings of Prior Tabular Benchmarks
- TabArena: A Living Benchmark for Machine Learning on Tabular Data
- TabArena meets AutoML: Establishing a Flywheel for Tabular Modeling Innovation
- Mitra: Fine-tuning Tabular Foundation Models for Peak Performance
- Redefining the Pareto Frontier with Foundation Model Portfolios
- Demo
- Q&A
Speakers

Nick Erickson is a Senior Applied Scientist at Amazon Web Services and the creator and lead maintainer of AutoGluon, an open-source AutoML framework that ranked as the top-performing system in the 2025 AutoML Benchmark. He received his M.S. in Computer Science and Engineering from the University of Minnesota. Originally developed in 2018 as his personal competition toolkit, AutoGluon was open-sourced in 2019 after Nick joined Amazon AI, where he has since focused on advancing the state of the art in AutoML. Nick is the lead organizer of the 1st ICML Workshop on Foundation Models for Structured Data (2025) and was a co-organizer and finalist in Kaggle’s AutoML Grand Prix 2024, where his team placed 2nd overall.

Frank Hutter is co-founder and CEO of PriorLabs, the world’s first company focused on tabular foundation models. He is also a Hector-Endowed Fellow and PI at the ELLIS Institute Tübingen (part-time), as well as Full Professor of Machine Learning at the University of Freiburg (currently on leave). Frank holds a PhD from the University of British Columbia (UBC, 2009). He is a Fellow of EurAI and ELLIS, the director of the ELLIS unit Freiburg, and the recipient of 3 ERC grants. Frank is best known for his work on automated machine learning (AutoML), including works on neural architecture search, efficient hyperparameter optimization, and meta-learning. He co-authored the first book on AutoML, co-created the prominent AutoML tools Auto-WEKA, Auto-sklearn, and Auto-PyTorch, won the first two AutoML challenges with his team, co-designed the first MOOC on AutoML, co-organized 15 AutoML-related workshops at ICML, NeurIPS, and ICLR, and founded the AutoML conference as general chair in 2022 and 2023. In recent years, his focus has been on tabular foundation models like TabPFN.