Topics of Interest
We welcome submissions on any topic related to automating any aspect of machine learning, broadly interpreted. If there is any question of fit, please feel free to contact the program chairs at pc-chairs-2025@automl.cc.
Similar to previous editions, this year’s conference will have two parallel tracks: one on AutoML methods and one on applications, benchmarks, challenges, and datasets (ABCD) for AutoML. Papers accepted to either track will comprise the conference program on equal footing.
The paper submission deadline is midnight (23:59), Monday, March 31, 2025 (anywhere on earth). For other relevant dates, please refer to https://2025.automl.cc/dates/
The following non-exhaustive lists provide examples of work in scope for each of these tracks:
Methods Track
- model selection (e.g., neural architecture search, ensembling)
- configuration/tuning (e.g., via evolutionary algorithms, Bayesian optimization)
- AutoML methodologies (e.g., reinforcement learning, meta-learning, in-context learning, warmstarting, portfolios, multi-objective optimization, constrained optimization)
- pipeline automation (e.g., automated data wrangling, feature engineering, pipeline synthesis, and configuration)
- automated procedures for diverse data (e.g., tabular, relational, multimodal, etc.)
- ensuring quality of results in AutoML (e.g., fairness, interpretability, trustworthiness, sustainability, robustness, reproducibility)
- supporting analysis and insight from automated systems
- context/prompt optimization
- dataset distillation / data selection / foundation datasets
- AutoML for multi-objective optimization
- large language models
- etc.
ABCD Track
See https://2025.automl.cc/details-on-abcd-track for more details.
- Applications: open-source AutoML software and applications that help bridge the gap between theory and practice
- Benchmarks: submissions to further enhance the quality of benchmarking in AutoML
- Challenges: designs, visions, analyses, methods, and best practices for past and future challenges
- Datasets: new datasets, collections of datasets, or meta-datasets that open up new avenues of AutoML research
Submission Guidelines
By submitting to AutoML 2025, you confirm you are aware of and agree to adhere to our ethics and accessibility guidelines: https://2025.automl.cc/ethics-and-accessibility-guidelines/.
A submission violating any of these guidelines may be (desk) rejected.
Anonymity
Methods track: Double-blind reviewing
All submissions to the methods track will undergo double-blind review. That is, (i) the paper, code, and data submitted for review must be anonymized to make it impossible to deduce the authors, and (ii) the reviewers will also be anonymous to the authors.
ABCD track: Optional single-blind reviewing
Since authors and organizers of AutoML systems, benchmarks, challenges, and datasets are often easily identifiable (and it is often required to reveal these identities during the review process), submissions to this track will undergo single-blind review by default (that is, with the authors’ identities listed on the front page). If desired, authors may alternatively elect to submit double-blind.
Formatting Instructions
Submissions should be formatted according to the LaTeX template available at https://github.com/automl-conf/LatexTemplate. The page limit for the main paper is 9 pages, not including space used for references, the optional submission checklist, or optional supplemental material. Accepted papers will be allowed to add an additional page of content at camera-ready time to react to reviewer feedback.
Submission Platform
We will use OpenReview to manage submissions. Shortly after the acceptance/rejection notifications are sent out, the de-anonymized paper and anonymous reviews of all accepted papers will become public in OpenReview and open for non-anonymous public commenting. For two weeks following notification, we will also allow authors of rejected papers to opt-in for their de-anonymized papers (including anonymous reviews) to also be made public in OpenReview if they choose. Unless the authors of a rejected paper choose to opt-in, there will be no public record that the paper was submitted to the conference.
Ethics Policy
We expect authors, reviewers, and area chairs to follow our ethics guidelines.
We ask that authors reflect upon the potential broader impact and ethical considerations of their work and discuss these issues as appropriate. Reviewers will have the ability to flag papers with perceived ethical concerns for further review by the conference organizers. The PC chairs will decide which action(s) may need to be taken in such a case and may decide to reject papers if any serious ethical concerns cannot be adequately addressed.
Dual Submissions
The goal of AutoML 2025 is to publish exciting new work while avoiding duplicating the efforts of reviewers. Papers that are substantially similar to papers either previously published, already accepted for publication, or submitted in parallel for publication may not be submitted to AutoML 2025. Here, we define a “publication” to be a paper that (i) appeared or is appearing in an archival venue with proceedings (non-archival workshops are permitted) and (ii) is five or more pages in length, excluding references.
For example:
- Allowed to submit: A manuscript on arXiv; a paper that appeared in a NeurIPS/ICML/ICLR workshop; a short CVPR/ICCV/ECCV workshop paper (≤ 4 pages).
- Not allowed to submit: A conference paper published at NeurIPS/ICML/ICLR; a published journal paper (although a published journal paper may be submitted to a separate track with non-archival proceedings).
The dual submissions policy applies for the duration of the review process. We also discourage slicing contributions too thinly.
Posting non-anonymized submissions on arXiv, personal websites, or social media is allowed. However, if posting to arXiv prior to acceptance using the AutoML style, we ask that authors use the “preprint” rather than the “final” option when compiling their document.
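As a minimal sketch of what this looks like in practice: the class name `automl` and the exact option names below are assumptions based on common conference-template conventions; consult the README of the template repository linked above for the actual class and option names.

```latex
% Hypothetical sketch — the real class/option names are defined by the
% template at https://github.com/automl-conf/LatexTemplate.
\documentclass[preprint]{automl}  % "preprint" option for arXiv posting before acceptance
% \documentclass[final]{automl}   % "final" option only for the camera-ready version

\begin{document}
% paper content
\end{document}
```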
Reproducibility
We strongly value reproducibility as an integral part of scientific quality assurance. Therefore, we require that all submissions undergo a dedicated reproducibility review. To facilitate this process, we ask authors to share their code publicly as part of the submission. The code should be easily downloadable with a single click, e.g., via a provided link to a zip file. To abide by the double-blind reviewing policy for the methods track, we suggest using one of the following anonymous options for sharing code and data: Anonymous GitHub, Zenodo, Dataverse, figshare, OpenML.org, or via OpenReview as supplementary material.
Review Process
After submission of the full manuscript, submitted papers will be reviewed by at least three reviewers and one reproducibility reviewer. The reviews will be evaluated by an area chair on various criteria (e.g., whether the review is reasonable, whether feedback is constructive and sufficiently detailed, and whether the review evaluates the paper based on the standards established in the call for papers). In some cases, the reviewer will be asked to clarify their review. After that, the area chair makes a recommendation taking into consideration the reviews as well as their own assessment of the paper, justifying this in a (short) written motivation of their recommendation.
This year, we will not have a rebuttal phase. However, there will be the opportunity to appeal a rejection if the authors feel there are suitable grounds to do so. In such cases, we will obtain an independent second opinion (from a different area chair and/or PC chair). We specifically encourage authors to reach out regarding: (1) cases where a review does not adhere to the criteria set above, and this discrepancy was not considered in the area chair’s decision, and (2) cases where an area chair does not provide sufficient detail regarding how they came to their final recommendation.
Changing the Author List
New authors cannot be added after the submission deadline, although changing author ordering will be allowed prior to the camera-ready deadline.
Publication of Accepted Submissions
Accepted submissions will be published via OpenReview and the proceedings will also be compiled and appear as a volume of PMLR. Feedback by the reviewers can be incorporated into the final version of the paper, but other major changes are not allowed.
Attending the Conference
The conference is scheduled for September 8–11, 2025 in the Verizon Executive Education Center at Cornell Tech, Roosevelt Island, New York City, USA. We are planning for a primarily in-person conference, although plenary sessions will be livestreamed. We request that at least one author register for and attend the conference, if possible (see below). We will also request that authors of accepted papers prepare a short video about their work to be uploaded to our YouTube channel.
Some authors have expressed concerns regarding obtaining a visa to travel to the United States in a timely fashion. The organizing committee is committed to accommodating any authors affected by such issues and is investigating options for ensuring representation in the program when attendance is not possible due to travel restrictions. In particular, all accepted papers will appear in the proceedings.
When attending the conference, we want every attendee to feel safe and comfortable at the venue. Our code of conduct outlines the behavior we expect from attendees during the conference and who you can contact in case of any problems.