MAY 18-21, 2026 AT THE HILTON SAN FRANCISCO UNION SQUARE, SAN FRANCISCO, CA

47th IEEE Symposium on
Security and Privacy

Artifact Evaluation Process

Authors are invited to submit their artifacts immediately after receiving the acceptance notification for their paper. At least one contact author must be reachable and respond to questions in a timely manner during the entire evaluation period, to allow round-trip communication between the AEC and the authors. Artifacts can be submitted only in the AE time frame associated with the paper’s submission round.

At submission time, authors choose which badges they want to be evaluated for. Members of the AEC will evaluate each artifact using the authors’ instructions in the submission as a guide, as detailed later on this page. Evaluators will communicate anonymously with authors through HotCRP to resolve minor issues and ask clarifying questions.

Evaluation starts with a kick-the-tires period, during which evaluators ensure they can access their assigned artifacts and perform basic operations such as building and running a minimal working example. Artifact evaluations include feedback about the artifact, and authors may use this feedback to address any significant issues that block the AE work. After the kick-the-tires stage ends, communication can address interpretation concerns about the produced results or minor syntactic issues in the submitted materials.

Artifact details and requirements

Artifacts can be, e.g., software, datasets, models, test suites, or mechanized proofs. Paper proofs are not accepted, as evaluators lack the time and often the expertise to carefully review them. Physical objects, such as specialized computer hardware, are also not accepted, due to the difficulty of making them available to evaluators.

To ensure that the evaluation is practical for the AEC, each code artifact must be packaged according to the instructions, and it must run on a public research infrastructure of the authors’ choice; examples include SPHERE, Chameleon, CloudLab, Google Colab, and FABRIC. We understand that this may not be possible in some cases (e.g., the artifact requires special hardware or a special geolocation). In these cases, authors should explain the constraint and provide the AEC with anonymous access to the special hardware (e.g., public-key-based SSH access).

Proposed experiments should take at most one day to run for the evaluation. When the paper’s research requires longer run times, the authors should design scaled-down experiments and properly justify how these can still meaningfully support the paper’s analyses. Hardware and software requirements must be stated when registering an artifact.

Artifact evaluation is single-blind. Each AEC member will independently test and review their assigned submissions. To maintain the anonymity of evaluators, artifact authors should not embed analytics or other tracking tools in any websites for their artifacts for the duration of the AE period. In cases where tracking is unavoidable, authors must notify the AE chair in advance so that AEC members can take adequate safeguards.

Submitting an artifact for evaluation does not give the AEC permission to make its contents public or to retain any part of it after evaluation. Thus, authors are free to include proprietary models, data files, or code in artifacts. However, we expect that meaningful parts of the artifact will be released publicly after the evaluation. If you foresee that some parts of your artifact will not eventually be publicly released, please note that in your submission. Otherwise, the expectation is that the entire artifact as evaluated will be publicly released in a permanent repository by the camera-ready deadline. If the publicly released artifact contains significantly less information than the submitted artifact, and the AEC concludes that the final artifact is no longer meaningful in isolation, the AEC reserves the right to not award evaluation badges.

Artifact Badges

Available

To earn this badge, the AEC must judge that the artifact associated with the paper has been made available for retrieval permanently and publicly. As an artifact undergoing AE often evolves as a consequence of AEC feedback, authors can use mutable storage for the initial submission, but must commit to uploading their materials to public services (e.g., Zenodo, FigShare, Dryad) for permanent storage backed by a Digital Object Identifier (DOI). Final permanent storage is a condition to receive this badge. Authors are welcome to report additional sources, like GitHub and GitLab, that may ease the dissemination of the artifact and possible future updates.

Functional

To earn this badge, the AEC must judge that the artifact conforms to the expectations set by the paper for functionality, usability, and relevance. An artifact must also be usable on machines other than the authors’, including when specialized hardware is required (for example, paths, addresses, and identifiers must not be hardcoded). The AEC will particularly consider three aspects: documentation (the artifact is documented well enough for evaluators to exercise it), completeness (it includes all key components described in the paper), and exercisability (it includes the scripts and data needed to run the experiments, and the software can be run successfully).

Reproduced

To earn this badge, the AEC must judge that they can use the submitted artifact to obtain the main results presented in the paper. In short, is it possible for the AEC to independently repeat the experiments and obtain results that support the main claims made by the paper? The goal of this effort is not to reproduce the results exactly, but instead to generate results independently, within an allowed tolerance, such that the main claims of the paper are validated. For lengthy experiments, scaled-down versions can be proposed, provided their significance is clearly and convincingly explained.

Artifact preparation and packaging

Artifacts should be packaged to ease evaluation and use, including instructions for the evaluators and an artifact appendix to complement the paper. Packaging is not only about evaluation, but also about future use of the artifact by other researchers who may want to build on top of it or use it as a baseline. All relevant information for evaluation should be contained in the packaging.

Packaging instructions below cover the most popular types of artifacts seen in past cybersecurity conferences. If your artifact does not fall into the categories below, please email the AEC chairs to discuss appropriate packaging. If your artifact falls into multiple categories (e.g., code and datasets), please package each according to the instructions, then decide which artifact is the primary one and put the secondary artifact (or artifacts) into a folder within the primary artifact. Ensure that your scripts still run correctly within this packaging.
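
As a minimal sketch of the resulting layout, here is a hypothetical shell script that assembles a combined submission with a code artifact as the primary and a data artifact as the secondary. The required names come from the packaging tables in the next sections; the “data” folder name is an assumption for illustration:

    #!/bin/sh
    # Hypothetical skeleton: a code artifact (primary) with a data artifact
    # (secondary) nested inside a "data" folder; names follow the tables below.
    mkdir -p artifact infrastructure claims/claim1 data/artifact
    touch install.sh license.txt license.url citation.txt use.txt README.txt
    touch infrastructure/url infrastructure/resources infrastructure/allocation
    touch claims/claim1/claim claims/claim1/run.sh claims/claim1/expected
    # The nested data artifact follows the data-artifact table below.
    touch data/provenance.txt data/license.txt data/license.url \
          data/citation.txt data/limitations.txt data/ethics.txt data/README.txt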

Code artifacts

Code artifacts should contain the following information:

Item: Research artifact
How packaged: Folder “artifact”
Purpose: Source code for the artifact, or a pointer (permalink) to where the artifact can be obtained. You can structure this folder’s contents in whatever way works best, and you can include additional items such as test datasets, narrative, etc.

Item: Research infrastructure - public
How packaged: Folder “infrastructure” containing “url”, “resources”, and “allocation” files
Purpose: “url” points to the research infrastructure to use in the evaluation, e.g., https://sphere-testbed.net. “resources” is the infrastructure-specific resource specification that can be used to allocate resources. “allocation” contains any special allocation instructions, if needed. These files are present only if the evaluation can be done on public infrastructure.

Item: Research infrastructure - private
How packaged: Folder “infrastructure” containing “constraints” and “access” files
Purpose: “constraints” describes why evaluation is not possible on public research infrastructure; please be detailed and specify which requirements cannot be met. “access” contains instructions for the AEC to access the private infrastructure. If public-key-based SSH access is provided, AEC members will supply their public keys via HotCRP during the evaluation process. These files are present only if the evaluation must be done on private infrastructure.

Item: Installation script
How packaged: “install.sh”
Purpose: A script that installs (and compiles/configures) the artifact on the chosen public or private infrastructure. If you are using multiple VMs or physical machines on the infrastructure, please provide multiple installation scripts, appending the name of the resource to each script.

Item: Research claims
How packaged: Folder “claims” containing subfolders “claim1”, “claim2”, etc.; each claim subfolder contains “claim”, “run.sh”, and “expected” files
Purpose: “claim” describes the research claim the authors want evaluated. “run.sh” is the script to run to produce the output that supports the claim; authors are free to structure this script in whatever way works best for their claim (for example, they can create a Jupyter notebook and replace “run.sh” with “run.ipynb”). If it is not possible to pull everything into one script, authors can put human-readable instructions into this file. “expected” explains how to compare the output of “run.sh” with the desired results. A sketch of these files appears after this table.

Item: License
How packaged: “license.txt” and “license.url”
Purpose: License name and a URL pointing to the license text.

Item: Citation
How packaged: “citation.txt”
Purpose: A full citation of your accepted paper in BibTeX format.

Item: Use and limitations
How packaged: “use.txt”
Purpose: The intended use of your artifact and any limitations, i.e., what uses it is NOT suitable for.

Item: Destructive notes
How packaged: “destructive.txt”
Purpose: If your artifact performs any dangerous or destructive actions, describe the necessary precautions here.

Item: Readme
How packaged: “README.txt”
Purpose: Any narrative about your artifact and any remaining instructions to the AEC, or instructions for future reuse.
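
For concreteness, here is a minimal, hypothetical sketch of “install.sh” and the “claims/claim1” files; the tool name, dependencies, flags, and numbers are invented for illustration and are not prescribed:

    # ===== install.sh (hypothetical sketch) =====
    set -e                                          # abort on the first error
    sudo apt-get update
    sudo apt-get install -y build-essential python3 # assumes an Ubuntu node
    make -C artifact                                # assumes the source ships a Makefile

    # ===== claims/claim1/claim (plain text) =====
    # "Our tool detects 95% of the injected faults (Table 2, row 1)."

    # ===== claims/claim1/run.sh (hypothetical sketch) =====
    set -e
    ./artifact/tool --input artifact/data/faults.csv --output results.csv
    python3 artifact/scripts/summarize.py results.csv   # prints the detection rate

    # ===== claims/claim1/expected (plain text) =====
    # "The printed detection rate should be within 2 percentage points of the
    #  95% reported in Table 2 of the paper."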

Examples of artifacts packaged according to these instructions can be found here and here.

Data artifacts

Data artifacts should contain the following information:

Item: Research artifact
How packaged: Folder “artifact”
Purpose: The dataset or datasets, and any data collection and data preparation materials that are being released.

Item: Provenance
How packaged: “provenance.txt”
Purpose: Describe how the data was collected, where, and when the collection started and ended. Be as detailed as possible. If you are only releasing data collection materials (e.g., survey instruments) and not the dataset, you can leave this file empty.

Item: License
How packaged: “license.txt” and “license.url”
Purpose: License name and a URL pointing to the license text.

Item: Citation
How packaged: “citation.txt”
Purpose: Please provide a full citation of your accepted paper in any style.

Item: Limitations
How packaged: “limitations.txt”
Purpose: Describe any limitations of your research artifact, i.e., what uses it is NOT suitable for, and any limitations of your data collection strategy.

Item: Ethics notes
How packaged: “ethics.txt”
Purpose: Discuss the ethics of your data collection process.

Item: Readme
How packaged: “README.txt”
Purpose: Any narrative about your artifact and any instructions to the AEC, or instructions for future reuse.
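
Before uploading, it can help to verify that every required file from the table above is present. A minimal sketch of such a check, with a hypothetical script name:

    #!/bin/sh
    # check_data_artifact.sh -- hypothetical helper: run from the top level of
    # the data artifact to confirm the files required by the table above exist.
    status=0
    for f in provenance.txt license.txt license.url citation.txt \
             limitations.txt ethics.txt README.txt; do
        [ -e "$f" ] || { echo "missing: $f"; status=1; }
    done
    [ -d artifact ] || { echo "missing: artifact/ folder"; status=1; }
    exit "$status"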

Examples of data artifacts packaged according to these instructions can be found here and here.

Research claims

Linking the paper’s claims to the artifact is a necessary step that allows artifact evaluators to reproduce results. Authors must state their paper’s key results and claims clearly. Claims should also be concrete, especially if they differ from the expectations set by the paper. The AEC will still evaluate artifacts relative to their paper, but an explanation can help set expectations up front, especially in cases that might otherwise frustrate the evaluators. For example, authors are encouraged to be transparent with the AEC about difficulties evaluators might encounter in using the artifact, or about its maturity relative to the paper’s content.

Note on code artifacts

If releasing a code artifact, authors should make every effort to package it as source code as described above. In a few exceptional cases, when this is not possible, we will accept other formats; authors should reach out to the AE chair when another format looks more reasonable in their judgment.

Resources

The following materials may be useful when preparing an artifact:

Acknowledgements

The AE process at IEEE S&P 2026 was inspired by similar endeavors in other systems and security conferences. This artifact packaging guide builds on materials from the AE process of NDSS’25 and USENIX Security’25.