Topics of Interest
We welcome submissions on any topic touching upon automating any aspect of machine learning, broadly interpreted. If there is any question of fit, please feel free to contact the program chairs.
This year’s conference will have two parallel tracks: one on AutoML methods and one on applications, benchmarks, challenges, and datasets (ABCD) for AutoML. Papers accepted to either track will make up the conference program on equal footing.
For all submission deadlines, please refer to https://2025.automl.cc/dates/
The following non-exhaustive lists provide examples of work in scope for each of these tracks:
Methods Track
- model selection (e.g., neural architecture search, ensembling)
- configuration/tuning (e.g., via evolutionary algorithms, Bayesian optimization)
- AutoML methodologies (e.g., reinforcement learning, meta-learning, in-context learning, warmstarting, portfolios, multi-objective optimization, constrained optimization)
- pipeline automation (e.g., automated data wrangling, feature engineering, pipeline synthesis, and configuration)
- automated procedures for diverse data (e.g., tabular, relational, multimodal, etc.)
- ensuring quality of results in AutoML (e.g., fairness, interpretability, trustworthiness, sustainability, robustness, reproducibility)
- supporting analysis and insight from automated systems
- context/prompt optimization
- dataset distillation / data selection / foundation datasets
- AutoML for multi-objective optimization
- large language models
- etc.
ABCD Track
→ see also https://2024.automl.cc/?page_id=625 for more details
- Applications: open-source AutoML software and applications that help bridge the gap between theory and practice
- Benchmarks: submissions to further enhance the quality of benchmarking in AutoML
- Challenges: designs, visions, analyses, methods, and best practices for past and future challenges
- Datasets: new datasets, collections of datasets, or meta-datasets that open up new avenues of AutoML research
Submission Guidelines
A submission violating any of these guidelines may be (desk) rejected.
Anonymity
Methods track: Double-blind reviewing
All submissions to the methods track will undergo double-blind review. That is, (i) the paper, code, and data submitted for review must be anonymized so that the authors cannot be deduced, and (ii) the reviewers will also be anonymous to the authors.
ABCD track: Optional single-blind reviewing
Authors and organizers of AutoML systems, benchmarks, challenges, and datasets are often easily identifiable (and revealing these identities is often required during the review process). Submissions to this track will therefore undergo single-blind review by default, that is, with the authors’ identities listed on the front page. If there is good reason for a submission to be treated as double-blind, authors may instead elect to submit double-blind (as long as this does not hinder the review process).
Broader Impact Statement
Submitted papers must include a broader impact statement regarding the approach, datasets, and applications proposed or used in the paper. It should reflect on the environmental, ethical, and societal implications of the work and discuss any limitations of the approach. For example, authors may consider whether there is potential for the data or methods to create or exacerbate unfair bias. The statement should be at most one page and must be included in the main body of the paper (not an appendix) both at submission and camera-ready time. If authors have reflected on their work and determined that there are no likely negative broader impacts, they may use the following statement: “After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.” A section with this name is included at the end of the paper body in the provided template, but you may place this discussion anywhere in the paper as you see fit, e.g., in the introduction or future work.
The Centre for the Governance of AI has written an excellent guide for writing good broader impact statements (for the NeurIPS conference) that may be a useful resource for AutoML authors: https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832
Formatting Instructions
Papers must be formatted according to the LaTeX template available at https://github.com/automl-conf/LatexTemplate. The page limit for the main paper is 9 pages; this includes the broader impact statement but not the submission checklist, references, or appendix. The broader impact statement and submission checklist are mandatory (please see the LaTeX template for details) both at submission time and in the camera-ready version. References and supplemental materials are not limited in length. Accepted papers will be allowed an additional page of content in the main paper to address reviewer feedback.
Submission Platform
We will use OpenReview to manage submissions. Shortly after the acceptance/rejection notifications are sent out, the de-anonymized papers and anonymous reviews of all accepted papers will become public in OpenReview and open for non-anonymous public commenting. For two weeks following the notifications, authors of rejected papers may also opt in to have their de-anonymized papers (including the anonymous reviews) made public in OpenReview. Unless the authors of a rejected paper opt in, there will be no public record that the paper was submitted to the conference.
- Methods Track
- ABCD Track
Ethics Review
We ask that authors think about the broader impact and ethical considerations of their work and discuss these issues in their broader impact statement. Reviewers will not have the ability to directly reject papers based on ethical considerations, but they will be able to flag papers with perceived ethical concerns for further review by the conference organizers. The PC chairs will decide which action(s) to take in such cases and may reject papers if serious ethical concerns cannot be adequately addressed.
Dual Submissions
The goal of AutoML 2025 is to publish exciting new work while avoiding duplicating the efforts of reviewers. Papers that are substantially similar to papers either previously published, already accepted for publication, or submitted in parallel for publication may not be submitted to AutoML 2025. Here, we define a “publication” to be a paper that (i) appeared or is appearing in an archival venue with proceedings (non-archival workshops are permitted) and (ii) is five or more pages in length, excluding references.
For example:
- Allowed to submit: A manuscript on arXiv; a paper that appeared in a NeurIPS/ICML/ICLR workshop; a short CVPR/ICCV/ECCV workshop paper (≤ 4 pages).
- Not allowed to submit: A conference paper published at NeurIPS/ICML/ICLR; a published journal paper (although a published journal paper may be submitted to a separate track with non-archival proceedings).
The dual submissions policy applies for the duration of the review process. We also discourage slicing contributions too thinly.
Posting non-anonymized submissions on arXiv, personal websites, or social media is allowed. However, if authors post to arXiv prior to acceptance using the AutoML style, we ask that they use the “preprint” rather than the “final” option when compiling their document.
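For illustration, a minimal preamble along these lines might look as follows. This is only a sketch: the exact style-file name and option handling are defined by the official template at https://github.com/automl-conf/LatexTemplate, and the package name "automl" below is an assumption, not the confirmed name.

    % Sketch only: consult the official template for the actual style-file name and options.
    \documentclass{article}
    % "preprint" for arXiv/preprint versions prior to acceptance;
    % switch to "final" only for the camera-ready version of an accepted paper.
    \usepackage[preprint]{automl}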
Reproducibility
We strongly value reproducibility as an integral part of scientific quality assurance. Therefore, we require that all submissions be accompanied by a link to an open-source repository providing an implementation that reproduces the results (if empirical results are part of the paper). To abide by double-blind reviewing (methods track), we will host our own version of anonymous GitHub, supporting anonymization and full download of repositories: https://anon-github.automl.cc/. All submissions undergo a dedicated reproducibility review.
Review Process
Submitted papers will be reviewed by at least three reviewers. The reviews will be evaluated by an area chair against various criteria (e.g., whether the review is reasonable, whether the feedback is constructive and sufficiently detailed, and whether the review evaluates the paper based on the standards established in the call for papers). In some cases, reviewers will be asked to clarify their reviews. The area chair then makes a recommendation, taking into consideration the reviews as well as their own assessment of the paper, and justifies it in a short written motivation.
This year, we will not have a rebuttal phase. However, as we are a small community committed to a high-quality review process, authors will have the opportunity to appeal a rejection if they feel there are suitable grounds to do so. In such cases, we will obtain an independent second opinion (from a different area chair and/or PC chair).
We specifically encourage authors to reach out regarding: (1) cases where a review does not adhere to the criteria set out above and this discrepancy was not considered in the area chair’s decision, and (2) cases where an area chair does not provide sufficient detail regarding how they arrived at their final recommendation.
Commitment to Review
We ask that at least one author of each submission volunteer to review for AutoML 2025.
Changing the Author List
New authors cannot be added after the abstract deadline, although changing author ordering will be allowed at any time.
Publication of Accepted Submissions
Accepted submissions will be published via OpenReview, and the proceedings will also be compiled and appear as a volume of PMLR. Reviewer feedback can be incorporated into the final version of the paper; other major changes are not allowed. Furthermore, each paper must be accompanied by a link to an open-source implementation (if there are any empirical results in the paper) to ensure reproducibility. As mentioned above, accepted papers will be made available alongside their reviews and meta-reviews.
Attending the Conference
The conference is scheduled for September 8-11, 2025, in the Verizon Executive Education Center at Cornell Tech, Roosevelt Island, New York City, USA. We are planning for an in-person conference. Plenary presentations will be livestreamed, but we are not planning to provide accommodations for remote presentations. We therefore request that at least one author register for the conference and present the work onsite. We will also ask authors of accepted papers to prepare a short video about their work to be uploaded to our YouTube channel.