The call for papers is available here: https://2025.automl.cc/call-for-papers/
Timeline:
- Reviews due: April 30, 2025 (Anywhere on Earth)
→ We would appreciate your reviews at your earliest convenience; please avoid submitting them in the final hours before the deadline.
- AC-reviewer discussions: Please ensure that you are available in the period after the deadline, in particular the days until May 13.
→ We don’t have a rebuttal phase for authors this year. Instead, we expect you to be available to engage in discussions with the area chairs.
General note: Besides helping to select which submitted papers will be presented, reviews should help authors improve their papers, regardless of the outcome of the decision process. To make a review most effective, please address the following points while keeping a polite tone. Put yourself in the author's position and write the review the way you would like feedback on your own papers.
Equally important: ensure that your review is also aimed at the area chair, who ultimately needs to make a recommendation on whether to accept or reject the paper.
All reviewers and area chairs should be available in the period after the deadline, in particular the days until May 13, to communicate with one another. This year we have removed the rebuttal stage from the review phase so that we can focus our efforts on producing high-quality reviews. However, an area chair may still contact reviewers if they think a review should be clarified in certain ways or needs to be discussed in light of the other reviews. If a reviewer already knows that they will be unavailable during this period, they should signal this to the area chair at an early stage, so the area chair can plan accordingly. There may be a few iterations on a review until the area chair deems it clear, both for themselves and for the authors.
Specific Instructions for Scientific Reviewers
Note that the review form is less structured than in previous years (fewer required fields), but we still require reviews to be reasonably detailed. A review should elaborate on: (i) a summary of the contributions, (ii) the potential impact on the field of AutoML, (iii) technical quality and correctness, (iv) clarity of the contributions, (v) optionally, observations about reproducibility, and (vi) potential ethical concerns.
- Factual aspects:
- State/summarize the main contributions of the paper in a few sentences.
- Compare the paper with previous work. In particular, is there highly relevant previously published work that the authors do not seem to be aware of?
- Express your level of confidence in the correctness of the results, and point out any major errors you find.
- Final assessment:
- What are the strengths of the paper? (e.g., results, a new research direction, an application)
- What are the weaknesses of the paper?
- Express and explain your opinion regarding whether the contributions of the paper (assuming they are correct and original) are interesting/useful/relevant.
- Final Recommendation: Give a final recommendation for acceptance/rejection (or a more refined distinction, such as borderline).
- Additional feedback: Comment on the quality, clarity, and readability of the writing. Provide comments that may help the authors in producing a revised version of the paper.
Note also that the conference is strongly committed to reproducibility, and that dedicated reproducibility reviewers check each paper for replicability. Nonetheless, both reviewer types may comment on reproducibility.
Specific Instructions for Reproducibility Reviewers
The goal of a reproducibility review is to assess the likelihood that future researchers will be able to reach the same conclusions as those presented in the submission. For this conference, we focus on replicability, the simpler form of reproducibility: the area chair needs to know whether the work is replicable, that is, whether you can replicate some of the paper's results (e.g., on a minimal example). We therefore ask you to assess whether the provided code and dataset supplements are (likely) sufficient to replicate the submission's results. Furthermore, you may evaluate the usability of these materials, noting any issues with running the code, generating results, or missing documentation. If available, assess whether the reproducibility checklist is complete and coherent. Lastly, note that there is no rebuttal phase, so provide feedback, suggestions, and questions that help the authors improve reproducibility for future submissions or the camera-ready copy.