ARTIFACT EVALUATION
FCCM will again offer authors the opportunity to participate in an optional artifact evaluation process. Artifacts are digital objects created by the authors as part of the research or experiments described in the submitted work. Examples of artifacts include:
- Software: Source code, scripts, Makefiles, container images (e.g., Docker images or Dockerfiles), etc.
- Hardware: Verilog, VHDL, schematics, CAD tools, flows, etc.
- Data: Spreadsheets, databases, binary files, design sets, etc.
The goal of submitting artifacts is to promote the availability and reproducibility of the experimental results and data, so that other researchers can repeat experiments and replicate results with less effort.
SUBMISSION REQUIREMENTS
Authors who would like to participate in artifact evaluation and are willing to prepare and document artifacts must fill out and submit a copy of the Artifact Form alongside their paper when submissions are due.
What is the Artifact Form?
The Artifact Form collects the information necessary for artifact evaluation. It allows the authors to describe the presence or absence of artifacts that support the research presented in the paper, along with their type (software, hardware, or data).
Do I need to open-source my software in order to complete the Artifact Form?
No. You are not asked to make any changes to your computing environment or design process in order to complete the form. The form is meant to describe the computing environment in which you produced your results and any artifacts you wish to share. Author-created software does not need to be open source unless you wish to be eligible for an artifact review badge.
REVIEW PROCESS
Who will review my artifact form?
If your submission is accepted as a full paper for publication at FCCM 2025, the Artifact Form will be reviewed by the Artifact Evaluation Chairs. The Artifact Evaluation Committee (AEC) will review the information provided and verify that the artifacts are indeed available at the URLs provided. They will also help authors improve their forms, in a double-open arrangement. If authors select this option, their paper may be evaluated for artifact review badges.
How will the review of artifacts interact with the double-blind review process?
Artifact review will not take place until after decisions on papers have been made. Reviewers will not have access to the Artifact Form. Authors should not include links to their artifacts/repositories in their submitted paper. The paper review process is double-blind. The artifact review process is not.
IMPACT OF ARTIFACT EVALUATION
What’s the impact of an Artifact Form on scientific reproducibility?
An artifact-evaluation effort can increase the trustworthiness of computational results. It is particularly effective for results obtained on specialized computing platforms that are not available to other researchers. Leadership computing platforms, novel testbeds, and experimental computing environments are of keen interest to the FPGA community, but access to these systems is typically limited. Thus, most reviewers cannot independently check results, and the authors themselves may be unable to recompute their own results in the future, given irreversible changes in the environment (compilers, libraries, components, etc.). The various forms of Artifact Evaluation improve confidence that computational results from these special platforms are correct.
The paper text explains why I believe my answers are right and shows all my work. Why do I need an Artifact Evaluation?
There are many good reasons to formalize the artifact description and evaluation process. Standard practice varies across disciplines, and labeling the evaluation as such improves our ability to review the paper and increases reader confidence in the veracity of the results.
ARTIFACTS
What are “author-created” artifacts and why make the distinction?
Author-created artifacts are the hardware, software, or data created by the paper’s authors. Only these artifacts need to be made available to facilitate evaluation. Proprietary, closed-source artifacts (e.g., commercial software and CPUs) will necessarily be part of many research studies. These proprietary artifacts should be described to the best of the authors’ ability but do not need to be provided.
What about proprietary author-created artifacts?
The ideal case for reproducibility is to have all author-created artifacts publicly available with a stable identifier. Papers involving proprietary, closed-source author-created artifacts should indicate the availability of the artifacts and describe them in as much detail as possible. Note that results dependent on closed-source artifacts cannot be independently reproduced and are therefore ineligible for some artifact review badges.
Are the numbers used to draw our charts a data artifact?
Not necessarily. Data artifacts are the data (input or output) required to reproduce the results, not necessarily the results themselves. For example, if your paper presents a system that generates charts from datasets, then providing an input dataset would facilitate reproducibility. However, if the paper merely uses charts to illustrate results, the input data to whatever tool you used to draw those charts is not required to reproduce the paper’s results. The tool that drew the chart is not part of the study, so its input data is not a data artifact of this work.
Help! My data is HUGE! How do I make it publicly available with a stable identifier?
Use Zenodo (https://help.zenodo.org). Contact them for information on how to upload extremely large datasets. Datasets of 50 GB or less can be uploaded easily, you can have multiple datasets, and there is no size limit on communities.
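For typical dataset sizes, a minimal sketch of an upload using Zenodo's REST API is shown below. The access token, file name, and metadata are placeholders for illustration only, not FCCM requirements; see https://developers.zenodo.org for the authoritative documentation.

    import requests

    ZENODO_API = "https://zenodo.org/api/deposit/depositions"
    TOKEN = "YOUR_ZENODO_ACCESS_TOKEN"  # placeholder; generate one in your Zenodo account settings

    # Create an empty deposition (a draft record).
    r = requests.post(ZENODO_API, params={"access_token": TOKEN}, json={})
    r.raise_for_status()
    deposition = r.json()

    # Stream the dataset into the deposition's file bucket.
    bucket_url = deposition["links"]["bucket"]
    with open("dataset.tar.gz", "rb") as fp:
        requests.put(f"{bucket_url}/dataset.tar.gz",
                     data=fp, params={"access_token": TOKEN}).raise_for_status()

    # Attach minimal metadata describing the dataset.
    metadata = {"metadata": {"title": "Example FCCM artifact dataset",
                             "upload_type": "dataset",
                             "description": "Input data for the paper's experiments.",
                             "creators": [{"name": "Doe, Jane"}]}}
    requests.put(f"{ZENODO_API}/{deposition['id']}",
                 params={"access_token": TOKEN}, json=metadata).raise_for_status()

Publishing the deposition (from the Zenodo web interface or the API) mints the DOI, which you can then cite in your Artifact Form and camera-ready paper.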
ARTIFACT REVIEW BADGES
The artifact evaluation process for FCCM 2025 will consider awarding the following IEEE reproducibility badges in two different categories:
Each paper will be considered for only one of the badging categories (Code or Dataset) and can earn up to 3 badges (Available, Reviewed, and Reproducible).
Although IEEE recommends that code be submitted through IEEE’s Code Ocean, this is not mandatory to be considered for the Code Badges at FCCM. Since we routinely rely on FPGA platforms and proprietary tools that are not available in Code Ocean, we recommend using Zenodo to capture artifacts; evaluators will work with authors to reproduce results. Please see IEEE’s requirements for Code and Dataset publishing for more details.
The badging definitions are based on Reproducibility Badging at IEEE and correspond to the three badges offered last year:
Code/Dataset Available
This badge signals that author-created digital objects used in the research (including data and/or code) are permanently archived in a public repository that assigns a global identifier (DOI) and guarantees persistence, and are made available via standard open licenses that maximize artifact availability.
Notes:
1. This is akin to author-supplied supplemental materials, shared under a standard public license such as an Open Source Initiative (OSI)-approved license for software, or a Creative Commons license or public domain dedication for data and other materials.
2. This definition corresponds to the Association for Computing Machinery (ACM) “Artifacts Available” badge, and to the combined Center for Open Science (COS) “Open Data” and “Open Materials” (pertaining to digital objects) badges.
3. The determination of what objects are “relevant” to a research publication is in the hands of the editorial board or leadership members of the community, in addition to the authors themselves.
4. For physical objects relevant to the research, the metadata about the object should be made available.
Code/Dataset Reviewed
This badge signals that all relevant author-created digital objects used in the research (including data and code) were reviewed according to the criteria provided by the badge issuer.
Notes:
1. This badge corresponds to the ACM “Artifacts Evaluated” badge, while the Institute of Electrical and Electronics Engineers (IEEE) has used a “Code Reviewed” badge.
Code/Dataset Reproducible
This badge signals that an additional step was taken or facilitated by the badge issuer (e.g., publisher, trusted third-party certifier) to certify that an independent party has regenerated computational results using the author-created research objects, methods, code, and conditions of analysis.
The Reproducible badge assumes that the research objects were also reviewed.
This Recommended Practice has adopted the National Academies of Sciences, Engineering, and Medicine definition of reproducibility: “We define reproducibility to mean computational reproducibility—obtaining consistent results using the same input data, computational steps, methods, code, and conditions of analysis.”
FAQ
Q: I only want to make my artifacts public if the paper is accepted. Is that okay?
A: Yes. Artifacts will only be examined after papers are accepted. You can wait until you hear the decision on your paper before making your artifacts public, but you must complete the form by the paper submission deadline; the information in the form will not be used until after paper acceptance.
Q: Does the artifact submission link need to be anonymous?
A: No. The link should not be anonymous, and therefore it should not be included in the submitted paper; any information in the paper must be anonymized to preserve the double-blind review. After a paper has been accepted, links to the appropriate artifacts, code repositories, etc., can be added to the camera-ready paper. The submitted Artifact Form is not anonymous; it is reviewed separately and will not be examined until after paper decisions are made.
Q: What is the deadline for the submission of artifacts?
A: The Artifact Form is due alongside your paper at the paper submission deadline, but no one will look at the artifacts until your paper is accepted. We need the information to be available so that the evaluation process can start as soon as papers are accepted, hence the requirement to submit the form with the paper.
Q: Can I update the artifact evaluation form after paper acceptance?
A: You are free to update the details of the artifacts up to the start of the evaluation, in consultation with your evaluator, but the initial submission should include as much detail as possible so that the evaluation process can be organized.