B

benchmarks-pending

Repository to store benchmarks before validation, as soon as they are received by the SMT-LIB maintainers

This repository contains raw benchmarks which have been submitted for inclusion in the SMT-LIB benchmark library, but which have not yet been validated for compliance with the SMT-LIB standard. Anyone is welcome to download and use these benchmarks, but please note that there is no guarantee that they are compliant with the SMT-LIB standard. To download benchmarks, please select "HTTPS" in the drop-down menu above and then use "git clone" with the corresponding URL. Please do not use the buttons for downloading zip or tar files as these can be very large and constructing them can overwhelm the server.
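
For example, writing <repository-url> for the HTTPS clone URL shown in the drop-down menu, the command is:

git clone <repository-url>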

To submit new benchmarks, either open a pull request against this repository or simply send the new benchmarks to one of the SMT-LIB maintainers. We encourage submitters to distribute benchmarks under the Creative Commons Attribution 4.0 International License, but submitters can specify their own license in the benchmark itself using the (set-info :license "license string") command.

Each set of pending benchmarks should be stored here, one set per directory, below the directory specifying the logic of the benchmarks. The directory should be named <date>-<application>-<submitter>, <date>-<application>, or <date>-<submitter>. A deeper directory structure below this main directory is also allowed.
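
For example, assuming dates in directory names are written as YYYYMMDD (the logic, date, application, and submitter names below are purely illustrative), a set of QF_LIA benchmarks could be stored as:

QF_LIA/20170101-ProofChecker-Smith/benchmark-001.smt2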

Submitted benchmarks should begin with a header of the following form:

(set-info :smt-lib-version <version>)
(set-logic <logic>)
(set-info :source |<description>|)
(set-info :license "https://creativecommons.org/licenses/by/4.0/")
(set-info :category <category>)
(set-info :status <status>)

where:

  • <version> is the SMT-LIB version number, e.g. 2.6.
  • <logic> is one of the accepted SMT-LIB logics.
  • <description> is a textual description containing:
    • Generated by: the name(s) of those who generated the benchmark;
    • Generated on: generation date with format YYYY-MM-DD;
    • Generator: tool which generated the benchmark (if any);
    • Application: the general goal;
    • Target solver: the solvers that were initially used to check the benchmarks;
    • Publications: references to related publications.
    The fields above may be followed by any other useful information in free text.
  • <category> is either "crafted", indicating that the benchmark was written by hand, "random", indicating that it was generated randomly, or "industrial" (everything else). Note that the category should be in quotes.
  • <status> is either sat or unsat according to the status of the benchmark, or unknown if not known.

Below is an example:

(set-info :smt-lib-version 2.6)
(set-logic QF_UFLIA)
(set-info :source |
Generated by: Clark Barrett, Pascal Fontaine, Cesare Tinelli
Generated on: 2016-12-31
Generator: Sledgehammer
Application: Isabelle proof of Gödel theorem
Target solver: CVC4
|)
(set-info :license "https://creativecommons.org/licenses/by/4.0/")
(set-info :category "industrial")
(set-info :status unsat)
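
The header is followed by the benchmark itself: declarations, assertions, and a final (check-sat) command, typically ended by (exit). Purely as an illustration (this body is not part of the example above), a minimal QF_UFLIA continuation consistent with the unsat status could be:

(declare-fun f (Int) Int)
(declare-const x Int)
(assert (= (f x) (+ x 1)))
(assert (= (f x) x))
(check-sat)
(exit)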

The workflow is as follows. The benchmarks are:

  1. put in this repository as soon as they are received
  2. checked for conformance with the available tools
  3. sanitized if necessary
  4. checked for status if their status is unknown (a status is given only when at least two solvers solve them; see the illustration after this list)
  5. moved to the official repository, in the devel branch
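
The exact tools used by the maintainers in steps 2 and 4 are not specified here; as a hypothetical illustration of step 4, one could run two independent SMT solvers on a pending benchmark (z3 and cvc5, and the file name, are used below only as examples) and record a status when both report the same result:

z3 pending-benchmark.smt2
cvc5 pending-benchmark.smt2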