# Naysaying Ligero and Brakedown proofs

*This post arose from a small hackathon project by Marcin Górny, César Descalzo Blanco and the author.*

In their recent ePrint article [SGB], I. Seres, N. Glaeser and J. Bonneau introduce the notion of *naysayer proofs*. In this evolution of the concept of optimistic verification, a network to which proofs are posted is observed by a number of *watchers*. Upon the appearance of a new proof $\Pi$, watchers have a predetermined period of time to *naysay* it. Naysaying amounts to posting a new proof $\Pi_{\mathrm{nay}}$, known as a naysayer proof, which shows that $\Pi$ is incorrect. Any party, including the original target(s) of $\Pi$, can then verify $\Pi_{\mathrm{nay}}$ to learn whether it truly refutes $\Pi$ or not. If no valid naysayer proof is posted by the deadline, $\Pi$ is assumed to be correct - a safe assumption as long as there is at least one honest watcher (and the naysayer and naysayer-verification algorithms satisfy some natural security properties).

This approach has at least two potential advantages. Firstly, verifying $\Pi_{\mathrm{nay}}$ might be substantially cheaper than verifying the original $\Pi$. Secondly, on-chain verification of $\Pi_{\mathrm{nay}}$ is only triggered when a false proof $\Pi$ is posted (as long as there are disincentives to posting incorrect naysayer proofs $\Pi_{\mathrm{nay}}$), which is a rare occurrence in many settings. In addition, $\Pi_{\mathrm{nay}}$ may have other benefits over the dispute mechanisms of optimistic verification.

In this post, we illustrate this concept by proposing a simple naysayer scheme for Ligero and Brakedown. These two linear-code-based polynomial commitment schemes (PCSs) have recently garnered some attention and are simple to describe (without going into the specifics of their encodings). In section 1, we recall the definition of linear-code-based PCSs. Section 2 proposes a simple naysayer and naysayer verifier protocol. The focus of section 3 is the construction of various dishonest proofs $\Pi$ aimed at testing the proposed algorithms. Finally, section 4 highlights some aspects of the cost of naysayer verification as opposed to direct verification.

# 1: The polynomial commitment scheme

In order to understand Ligero and Brakedown naysayer proofs, it is crucial to first have a grasp of their commitment, opening and verification methods. In addition to the overview below, we refer the reader to the original articles [AHIV] and [GLSTW], as well as the arkworks implementations (Ligero and Brakedown). Another description of the two can be found in Justin Thaler’s *Proofs, Arguments, and Zero-Knowledge*, section 10.5.

The structure of these two polynomial commitment schemes is exactly the same, the only difference lying in the so-called *encoding function*. This is simply an $\mathbb{F}$-linear map $E: \mathbb{F}^m \to \mathbb{F}^e$ (where $\mathbb{F}$ is the finite field over which the entire scheme takes place) which encodes an $m$-element message $x$ into an $e$-element one $y = E(x)$. Ligero relies on a Reed-Solomon code (typically computed via a fast Fourier transform, or FFT), whereas Brakedown opts for a recursive encoding; the details of both are beyond the scope of this post. A noteworthy difference, however, is that their differing encoding functions lead to differing *code distances*, which in turn result in the need for many more *openings* in the case of the Brakedown PCS. “Openings” here refers to the random positions of an $e$-symbol message $y$ that a verifier needs to check in order to be convinced that $y$ is a codeword (i.e. $y = E(x)$ for some $x \in \mathbb{F}^m$). We denote by $t_E$ the number of openings required by the encoding function $E$ (omitting other dependencies, such as that on the security parameter and $\vert \mathbb{F} \vert$).
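
To make the encoding function concrete, here is a minimal Python sketch. It is entirely illustrative: the field size, dimensions and Vandermonde-style generator matrix are our own toy choices, not those of Ligero or Brakedown. It builds a small $\mathbb{F}$-linear map $E: \mathbb{F}^m \to \mathbb{F}^e$ and checks its linearity:

```python
# Toy stand-in for the encoding function E. Real schemes use Reed-Solomon
# (Ligero) or a recursive code (Brakedown); here E is just the F-linear map
# given by a fixed e-by-m Vandermonde matrix over F = GF(p).
p = 97          # toy field modulus (illustrative only)
m, e = 4, 8     # message length and codeword length

# Generator matrix G with rows [i^0, i^1, ..., i^(m-1)]: E(x) = G x mod p.
G = [[pow(i, k, p) for k in range(m)] for i in range(1, e + 1)]

def encode(x):
    """E: F^m -> F^e, encoding an m-element message into an e-element one."""
    return [sum(G[i][k] * x[k] for k in range(m)) % p for i in range(e)]

# Linearity: E(a*x + b*y) == a*E(x) + b*E(y) over F.
x, y, a, b = [1, 2, 3, 4], [5, 6, 7, 8], 10, 20
lhs = encode([(a * xi + b * yi) % p for xi, yi in zip(x, y)])
rhs = [(a * u + b * w) % p for u, w in zip(encode(x), encode(y))]
assert lhs == rhs
```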

The rest of this section is an informal presentation of (the common structure of) Ligero and Brakedown, where a few caveats must be taken into account:

- For simplicity, we present the schemes for univariate polynomials. Their multilinear counterparts only differ in how $a$ and $b$ are computed from the point $\alpha$ (see below).
- We describe the non-interactive version of the schemes obtained by applying the Fiat-Shamir transform to the original interactive version. In a nutshell, this means all parties rely on a *sponge* $S$ that can *absorb* data and *squeeze* random-looking data. The prover feeds to $S$ the messages it would send to the verifier and squeezes from $S$ the uniformly random challenges it would receive from the verifier. We skip the sponge initialisation steps, which are an interesting and delicate topic of their own.
- For readers who are already familiar with these PCSs: we assume we are in the case where no well-formedness check is needed on the commitment. A sufficient condition for this is that the queried point $\alpha \in \mathbb{F}$ is uniformly random. If this is not the case, a well-formedness check must be performed which is structurally analogous to the checks already present in the version below.

In the three algorithms we now present, the prover $P$ and the verifier $V$ know the same dimensions $m, n \in \mathbb{N} \setminus \{0\}$. The scheme can be applied to polynomials of degree less than $d = mn$. For simplicity, we assume $m$, $n$ and the encoding length $e$ are powers of 2. The parties use a cryptographically secure hash function $H$ (we use rather flexible notation for the inputs and outputs of $H$).

### Commit

The prover commits to a univariate polynomial $f \in \mathbb{F}^{< d}[x]$ of degree less than $d$.

$\mathsf{Commit}(f) \to C$:

- Form an $n$-by-$m$ matrix $M$ with entries in $\mathbb{F}$ by placing the coefficients of $f$ in row-major order, starting with the constant term and ending with the coefficient of $x^{d - 1}$.
- Encode each row of $M$ by applying the encoding function $E$, which results in an $n$-by-$e$ *extended matrix* $E(M)$.
- Hash each column $c_i$ of $E(M)$ into a leaf $l_i = H(c_i)$.
- Form a Merkle tree $T$ from the leaves $(l_i)_{i = 1}^e$.
- The commitment $C$ to $f$ is the root of $T$.
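
The $\mathsf{Commit}$ steps above can be sketched end to end in a few lines of Python. This is again a toy (our own illustration, not the arkworks implementation): SHA-256 stands in for $H$, a hypothetical Vandermonde map stands in for $E$, and the parameters are tiny:

```python
import hashlib

p, m, n, e = 97, 4, 2, 8    # toy parameters; e is the codeword length

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def encode(row):            # hypothetical RS-style encoding E: F^m -> F^e
    return [sum(pow(i, k, p) * row[k] for k in range(m)) % p
            for i in range(1, e + 1)]

def merkle_root(leaves):
    level = leaves
    while len(level) > 1:   # the number of leaves is a power of 2 by assumption
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(coeffs):
    # n-by-m matrix M of coefficients, constant term first, row-major order
    M = [coeffs[r * m:(r + 1) * m] for r in range(n)]
    EM = [encode(row) for row in M]                          # extended matrix E(M)
    cols = [[EM[r][i] for r in range(n)] for i in range(e)]  # columns c_i
    leaves = [H(bytes(c)) for c in cols]                     # leaves l_i = H(c_i)
    return merkle_root(leaves)                               # root = commitment C

C = commit(list(range(1, m * n + 1)))   # commit to a degree-7 polynomial
```

Note that the commitment is deterministic: committing to the same coefficients twice yields the same root $C$.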

### Open

The prover has $f$ as above, which they have committed to in the form of $C$. They receive a query point $\alpha \in \mathbb{F}$ and wish to open $C$ to prove the value of $f(\alpha)$.

$\mathsf{Open}(f,\alpha) \to \Pi$:

1. Compute $M$, $E(M)$ and $T$ (and $C$) as in $\mathsf{Commit}(f)$. In practice, these can be stored and passed to $\mathsf{Open}$ instead.

2. Compute $b = (1, \alpha^m, \alpha^{2m}, \ldots, \alpha^{(n - 1)m})$.

3. Compute $v = bM$.

4. Absorb $C$, $\alpha$ and $v$ into $S$.

5. Squeeze $t_E$ indices $i_1, \ldots, i_{t_E}$ from $S$, each between $0$ and $e - 1$.

6. For each squeezed index $i_j$ (with $1 \leq j \leq t_E$), produce a Merkle proof $\pi_j$ that $l_{i_j}$ (computed as $H(c_{i_j})$) is the $i_j$-th leaf of $T$ (with root $C$).

7. The proof $\Pi$ consists of $v$, the queried columns $(c_{i_j})_{j = 1}^{t_E}$ and the Merkle proofs $(\pi_j)_{j = 1}^{t_E}$.
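
The Fiat-Shamir index squeezing in steps 4-5 can be sketched with a toy hash-chain sponge. This is our own simplification (real implementations use a proper duplex sponge, e.g. over Poseidon); it illustrates why prover and verifier derive the same indices:

```python
import hashlib

e, t_E = 8, 3   # toy codeword length and opening count (illustrative)

class Sponge:
    """Toy sponge: absorbing hashes data into the state; squeezing rehashes
    the state and interprets it as an index. Not a real duplex construction."""
    def __init__(self):
        self.state = b""
    def absorb(self, data: bytes):
        self.state = hashlib.sha256(self.state + data).digest()
    def squeeze_index(self, bound: int) -> int:
        self.state = hashlib.sha256(self.state).digest()
        return int.from_bytes(self.state, "big") % bound  # modulo bias ignored

# Prover and verifier absorb the same transcript (C, alpha, v)...
prover, verifier = Sponge(), Sponge()
for s in (prover, verifier):
    for msg in (b"C-root", b"alpha", b"v"):
        s.absorb(msg)

# ...and therefore squeeze identical query indices i_1, ..., i_{t_E}.
prover_idx = [prover.squeeze_index(e) for _ in range(t_E)]
verifier_idx = [verifier.squeeze_index(e) for _ in range(t_E)]
assert prover_idx == verifier_idx
```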

**Remark**:

- If Ligero or Brakedown are used as a subprotocol in a larger protocol, the columns $c_{i_j}$ and the Merkle proofs $\pi_j$ should also be absorbed into $S$ after step 6 in order to make the Fiat-Shamir transform robust.
- In most PCS definitions, the opening method receives the commitment and an opening hint. In the case of Ligero and Brakedown, which have deterministic commitments, those elements are unnecessary. However, as pointed out in step 1, they *can* still be given to $\mathsf{Open}$ to avoid recomputation.

### Verify

The verifier has a commitment $C$, a point $\alpha$, an evaluation $\beta$ claimed to equal $f(\alpha)$ and an opening proof $\Pi = (v, (c_{i_j})_{j = 1}^{t_E}, (\pi_j)_{j = 1}^{t_E})$.

$\mathsf{Verify}(\Pi, C, \alpha, \beta) \to \{\mathsf{accept}, \mathsf{abort}\}$:

1. Absorb $C$, $\alpha$ and $v$ into $S$.

2. Squeeze $t_E$ indices $i_1, \ldots, i_{t_E}$ from $S$, each between $0$ and $e - 1$.

3. For each $j$ between $1$ and $t_E$ (inclusive):

   3.1. Verify that the $j$-th Merkle proof $\pi_j$ corresponds to the leaf index $i_j$, otherwise $\mathsf{abort}$.

   3.2. Hash $c_{i_j}$ into a leaf $l_{i_j} = H(c_{i_j})$.

   3.3. Verify the proof $\pi_j$ that $l_{i_j}$ belongs to a Merkle tree with root $C$. If verification fails: $\mathsf{abort}$.

4. Compute $a = (1, \alpha, \ldots, \alpha^{m - 1})$ and $b = (1, \alpha^m, \alpha^{2m}, \ldots, \alpha^{(n - 1)m})$. Note that, with $M$ as above, one has $f(\alpha) = b M a^T$.

Steps 5 and 6 form the probabilistic check that the vectors $b E(M)$ and $E(v)$ coincide (note that $E$ is linear and $v$ is supposed to equal $bM$), verified at the randomly chosen positions $i_1, \ldots, i_{t_E}$:

5. Compute the encoding $w = E(v)$.

6. For each $j$ between $1$ and $t_E$ (inclusive): if $b \cdot c_{i_j} \neq w[i_j]$, $\mathsf{abort}$.

7. If $v \cdot a \neq \beta$: $\mathsf{abort}$.

8. $\mathsf{accept}$.
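
The identity $f(\alpha) = b M a^T$ is what makes the scheme work: the row vector $b$ collapses $M$ into $v = bM$, and the column vector $a$ then evaluates $v$ at $\alpha$. A quick numeric sanity check, with toy parameters of our own choosing:

```python
# Numeric check of f(alpha) = b M a^T with illustrative toy values.
p, m, n = 97, 4, 2
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]          # f of degree < d = m*n = 8
alpha = 5

M = [coeffs[r * m:(r + 1) * m] for r in range(n)]    # n-by-m, row-major
a = [pow(alpha, k, p) for k in range(m)]             # (1, alpha, ..., alpha^{m-1})
b = [pow(alpha, r * m, p) for r in range(n)]         # (1, alpha^m, ..., alpha^{(n-1)m})

v = [sum(b[r] * M[r][k] for r in range(n)) % p for k in range(m)]  # v = bM
bMa = sum(v[k] * a[k] for k in range(m)) % p                       # v . a = b M a^T

# Direct evaluation of f at alpha over F_p:
f_alpha = sum(c * pow(alpha, k, p) for k, c in enumerate(coeffs)) % p
assert bMa == f_alpha
```

The identity holds because row $r$ of $M$ carries the coefficients of $x^{rm}, \ldots, x^{rm + m - 1}$, so $b[r] \cdot (M[r] \cdot a) = \alpha^{rm} \sum_k M[r][k]\,\alpha^k$.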

# 2: Naysayer proofs and their verification

We propose the following natural naysayer strategy for Ligero and Brakedown:

$\mathsf{Naysay}(\Pi, C, \alpha, \beta) \to \{\Pi_{\mathrm{nay}}, \bot\}$: Repeat the steps of $\mathsf{Verify}(\Pi, C, \alpha, \beta)$. Whenever an assertion would cause $\mathsf{Verify}$ to abort, output a naysayer proof $\Pi_{\mathrm{nay}}$ simply containing an identifier for that assertion. If $\mathsf{Verify}$ does not abort, output $\bot$ (read as “aye” and meaning $\Pi$ is a valid proof).

$\mathsf{NaysayVerify}(\Pi_{\mathrm{nay}}, \Pi, C, \alpha, \beta) \to \{\mathsf{true}, \mathsf{false}\}$: Check the single assertion identified in $\Pi_{\mathrm{nay}}$ in the same way $\mathsf{Verify}(\Pi, C, \alpha, \beta)$ would, **but** recomputing only the elements involved in that assertion (and which cannot be directly read from $\Pi$, $C$, $\alpha$ or $\beta$). If the assertion indeed fails, output $\mathsf{true}$ ($\Pi_{\mathrm{nay}}$ is accepted, i.e. $\Pi$ is rejected). Otherwise, output $\mathsf{false}$ ($\Pi_{\mathrm{nay}}$ is rejected).

Note that rejecting $\Pi_{\mathrm{nay}}$ does not mean $\Pi$ should be accepted - but, under the assumption that at least one network watcher is honest, $\Pi$ should be accepted if no accepting $\Pi_{\mathrm{nay}}$ is received within a given time frame.

**Remark**: Before delving into the specifics of our naysayer, we point out that the above interface of $\mathsf{Naysay}$ and $\mathsf{NaysayVerify}$ does not fit that in [SGB]. Indeed, the original naysayer framework relates to non-interactive proof schemes comprising the following methods:

- $\mathsf{Setup}(1^\lambda) \to \mathsf{crs}$.
- $\mathsf{Prove}(\mathsf{crs}, x, w) \to \pi$.
- $\mathsf{Verify}(\mathsf{crs}, x, \pi) \to \{0, 1\}$.

A polynomial commitment scheme can be regarded as the extension of a commitment scheme (similar to the above three methods) with another two: opening and (opening) verification. Our naysayer proofs relate to the latter two, which cannot be reconciled with the non-interactive proof scheme interface. It is expected that variants of the formal naysayer definitions will soon appear for protocols such as PCSs.

Inspection of the assertions in $\mathsf{Verify}$ leads to the following assertion identifiers:

- $\mathsf{PathIndexAssertion}(j)$: In step 3.1. Namely: The $j$-th Merkle proof $\pi_j$ refers to the leaf index $i_j$.
- $\mathsf{MerkleAssertion}(j)$: In step 3.3. Namely: The $j$-th Merkle proof $\pi_j$ verifies correctly.
- $\mathsf{EncodingAssertion}(j)$: In step 6. Namely, $b \cdot c_{i_j} = w[i_j]$.
- $\mathsf{EvaluationAssertion}$: In step 7. Namely, $v \cdot a = \beta$.

The method $\mathsf{Naysay}$ is trivial as described above: it verifies $\Pi$, stopping early if verification aborts at one of the assertions and returning $\Pi_{\mathrm{nay}}$ containing only the identifier of that assertion. If verification accepts, $\mathsf{Naysay}$ returns $\bot$.

We instead focus on $\mathsf{NaysayVerify}$, which is slightly more involved as it needs to handle four different types of $\Pi_\mathrm{nay}$. Aside from $\Pi_{\mathrm{nay}}$, the naysayer verifier has a commitment $C$, a point $\alpha$, an evaluation $\beta$ claimed to equal $f(\alpha)$ and the original opening proof $\Pi = (v, (c_{i_j})_{j = 1}^{t_E}, (\pi_j)_{j = 1}^{t_E})$.

$\mathsf{NaysayVerify}(\Pi_{\mathrm{nay}}, \Pi, C, \alpha, \beta) \to \{\mathsf{true}, \mathsf{false}\}$:

1. If $\Pi_{\mathrm{nay}}$ is one of $\mathsf{PathIndexAssertion}(j)$, $\mathsf{MerkleAssertion}(j)$ or $\mathsf{EncodingAssertion}(j)$ and $j$ is not between $1$ and $t_E$: return $\mathsf{false}$.

2. If $\Pi_{\mathrm{nay}} = \mathsf{PathIndexAssertion}(j)$:

   2.1. Absorb $C$, $\alpha$ and $v$ into $S$.

   2.2. Squeeze $j$ indices $i_1, \ldots, i_j$ from $S$, each between $0$ and $e - 1$.

   2.3. Return $\mathsf{true}$ if the proof $\pi_j$ does not refer to the leaf index $i_j$; otherwise return $\mathsf{false}$.

3. If $\Pi_{\mathrm{nay}} = \mathsf{MerkleAssertion}(j)$:

   3.1. Hash $c_{i_j}$ into a leaf $l_{i_j} = H(c_{i_j})$. Note that one does not need to squeeze $i_j$ from the sponge, since $c_{i_j}$ is simply the $j$-th element of $(c_{i_j})_{j = 1}^{t_E}$.

   3.2. Verify the proof $\pi_j$ that $l_{i_j}$ belongs to a Merkle tree with root $C$. If verification fails, return $\mathsf{true}$. Otherwise, return $\mathsf{false}$.

4. If $\Pi_{\mathrm{nay}} = \mathsf{EncodingAssertion}(j)$:

   4.1. Absorb $C$, $\alpha$ and $v$ into $S$.

   4.2. Squeeze $j$ indices $i_1, \ldots, i_j$ from $S$, each between $0$ and $e - 1$.

   4.3. Compute the encoding $w = E(v)$.

   4.4. Compute $b = (1, \alpha^m, \alpha^{2m}, \ldots, \alpha^{(n - 1)m})$.

   4.5. If $b \cdot c_{i_j} \neq w[i_j]$, return $\mathsf{true}$. Otherwise, return $\mathsf{false}$.

5. If $\Pi_{\mathrm{nay}} = \mathsf{EvaluationAssertion}$:

   5.1. Compute $a = (1, \alpha, \ldots, \alpha^{m - 1})$.

   5.2. If $v \cdot a \neq \beta$: return $\mathsf{true}$. Otherwise, return $\mathsf{false}$.

This covers all possible $\Pi_{\mathrm{nay}}$.
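
The control flow of this pair of algorithms can be captured in a small structural sketch (our own, not from [SGB]): we model $\mathsf{Verify}$'s assertions as labelled thunks so that $\mathsf{Naysay}$ replays them in order while $\mathsf{NaysayVerify}$ re-runs exactly one of them:

```python
# Structural sketch of the Naysay / NaysayVerify pair; all names are ours.
def make_checks(v_dot_a, beta, dot_products):
    """Build the labelled assertion list of a (simplified) Verify run."""
    checks = []
    for j, (lhs, rhs) in enumerate(dot_products, start=1):
        checks.append((f"EncodingAssertion({j})", lambda l=lhs, r=rhs: l == r))
    checks.append(("EvaluationAssertion", lambda: v_dot_a == beta))
    return checks

def naysay(checks):
    for name, holds in checks:          # replay Verify, stop at first failure
        if not holds():
            return name                 # Pi_nay = identifier of failed assertion
    return None                         # "aye": the proof verifies

def naysay_verify(pi_nay, checks):
    table = dict(checks)
    # Re-check ONLY the identified assertion; accept Pi_nay iff it fails.
    return pi_nay in table and not table[pi_nay]()

# A proof whose second dot-product (encoding) check is wrong:
checks = make_checks(v_dot_a=42, beta=42, dot_products=[(7, 7), (3, 9)])
pi_nay = naysay(checks)
assert pi_nay == "EncodingAssertion(2)"
assert naysay_verify(pi_nay, checks)             # valid naysayer proof
assert not naysay_verify("EvaluationAssertion", checks)  # wrong identifier
```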

# 3: Constructing dishonest proofs

A thorough testing of the naysayer and naysayer verifier would likely involve fuzzing techniques and is beyond the scope of this post. In the following lines, however, we do illustrate how to construct dishonest proofs which should trigger failures in each of the assertions identified above and therefore result in $\mathsf{Naysay}$ returning all possible types of $\Pi_{\mathrm{nay}}$. In our testing, these were produced using a $\mathsf{DishonestOpen}$ method which received the same arguments as $\mathsf{Open}$ plus an additional $\mathsf{dishonesty}$ parameter indicating how to tamper with the proving process:

$\mathsf{PathIndexAssertion}(j)$: any tampering which results in a desynchronisation between the prover and verifier sponges before the indices $i_j$ are squeezed will cause $\mathsf{PathIndexAssertion}(1)$ to fail. For instance:

After step 6 in $\mathsf{Open}$, modify any element of $v$.

More trivially, one can force failure at $\mathsf{PathIndexAssertion}(j)$ by manually modifying the leaf index contained in the $j$-th Merkle proof:

After step 6 in $\mathsf{Open}$, modify the leaf index in $\pi_j$.

It should be noted, both in this case and the next, that the proposed tampering only results in a failed assertion *with overwhelming probability*, due to the properties of cryptographic hash functions.

$\mathsf{MerkleAssertion}(j)$: in this case, it is enough to modify any of the nodes in the corresponding Merkle proof (which are the siblings of the nodes on the path connecting the proved leaf to the root $C$). For instance:

After step 6 in $\mathsf{Open}$, replace the first sibling in $\pi_j$ by any other value.

Another way to cause $\mathsf{MerkleAssertion}(j)$ to fail is to modify any of the values in the $j$-th queried column $c_{i_j}$. Since these are hashed into a leaf by the verifier, which then uses $\pi_j$ to check that the leaf belongs to the committed Merkle tree, the desired failure will be triggered:

After step 6 in $\mathsf{Open}$, modify any value in $c_{i_j}$.

$\mathsf{EncodingAssertion}(j)$: This case is slightly more interesting than the rest, in that we do not cause failure by producing an honest proof and modifying it a posteriori, as we have been doing so far. Consider the equality we would like to invalidate, namely the one in step 6 of $\mathsf{Verify}$. As a dishonest prover, we have no control over $b$, which is computed by the verifier using $\alpha$. We cannot tamper with the column $c_{i_j}$ either, as $\mathsf{Naysay}$ would then report failure of $\mathsf{MerkleAssertion}(j)$ before reaching the inner-product check (the column would no longer hash to the leaf proved by $\pi_j$). Therefore, we must fiddle with $w = E(v)$, which the verifier computes from the $v$ contained in $\Pi$ - in other words, we must insert into $\Pi$ a $v$ different from the honest $bM$. Furthermore, this must be done before $v$ is fed to the dishonest prover’s sponge (unlike in case 1 above) to ensure synchronisation with the verifier’s. In other words:

After step 3 in $\mathsf{Open}$, modify any element of $v$.

Note that this will trigger failure at $\mathsf{EncodingAssertion}(1)$ with overwhelming probability: it is difficult to craft a tampered $v$ for which the dot-product checks at the first few queried positions pass and only a later one fails.

$\mathsf{EvaluationAssertion}$: In this case it suffices to supply $\mathsf{Naysay}$ (and $\mathsf{NaysayVerify}$) with a false evaluation $\beta \neq f(\alpha)$. Note that $\beta$ is not part of the original proof $\Pi$, but rather an argument to both functions (cf. the remark in section 2).
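
The difference between the two tampering points - modifying $v$ before or after it enters the sponge - can be demonstrated with a toy hash-chain sponge (our own simplification, not the schemes' actual transcript format):

```python
import hashlib

e = 8   # toy codeword length

def squeeze_indices(absorbed_v: bytes, t: int):
    """Derive t query indices from a transcript containing v (toy sponge)."""
    idx, state = [], hashlib.sha256(b"C-root|alpha|" + absorbed_v).digest()
    for _ in range(t):
        state = hashlib.sha256(state).digest()
        idx.append(int.from_bytes(state, "big") % e)
    return idx

honest_v, tampered_v = bytes([1, 2, 3, 4]), bytes([9, 2, 3, 4])

# Tampering AFTER absorbing: the prover squeezed its indices using honest_v,
# but the verifier re-derives them from the tampered v found in the proof,
# so the leaf indices in the Merkle proofs no longer match the squeezed ones
# (PathIndexAssertion fails, with overwhelming probability).
assert squeeze_indices(honest_v, 6) != squeeze_indices(tampered_v, 6)

# Tampering BEFORE absorbing: both sides derive indices from the same
# (tampered) v, so the sponges stay synchronised and the inconsistency only
# surfaces later, at the b . c_{i_j} = E(v)[i_j] encoding check.
assert squeeze_indices(tampered_v, 6) == squeeze_indices(tampered_v, 6)
```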

# 4: Costs (and a modification of $\mathsf{Verify}$)

Comparing the definitions of $\mathsf{Verify}$ and $\mathsf{NaysayVerify}$, it is clear that the latter performs less work - this is indeed the advantage of receiving a pointer $\Pi_{\mathrm{nay}}$ to the false assertion in $\Pi$. Note that these savings are on top of the conceptual shift of verification only being triggered on-chain if a watcher naysays the original proof.

A slightly unusual aspect of analysing the costs of naysayer verification is that they are heavily dependent on which type of $\Pi_{\mathrm{nay}}$ is received. Below, we study these costs in terms of various types of core operations as well as all possible types of $\Pi_{\mathrm{nay}}$. However, before giving a detailed comparison of operation counts, it is only fair to tweak the original $\mathsf{Verify}$ algorithm (which was presented in a more standard form above) with some obvious optimisations that reduce computation costs when the received proof is incorrect. These modifications only amount to reordering the steps: the functionality is exactly the same. Note that which order is optimal depends on what type of incorrect proof is expected to be received more frequently - we simply make a reasonable choice:

$\mathsf{Verify}^\ast(\Pi, C, \alpha, \beta) \to \{\mathsf{accept}, \mathsf{abort}\}$:

1. Compute $a = (1, \alpha, \ldots, \alpha^{m - 1})$.

2. If $v \cdot a \neq \beta$: $\mathsf{abort}$.

3. Absorb $C$, $\alpha$ and $v$ into $S$.

4. For each $j$ between $1$ and $t_E$ (inclusive):

   4.1. Squeeze an index $i_j$ between $0$ and $e - 1$ from $S$.

   4.2. Verify that the $j$-th Merkle proof $\pi_j$ corresponds to the leaf index $i_j$, otherwise $\mathsf{abort}$.

   4.3. Hash $c_{i_j}$ into a leaf $l_{i_j} = H(c_{i_j})$.

   4.4. Verify the proof $\pi_j$ that $l_{i_j}$ belongs to a Merkle tree with root $C$. If verification fails: $\mathsf{abort}$.

5. Compute $b = (1, \alpha^m, \alpha^{2m}, \ldots, \alpha^{(n - 1)m})$.

Steps 6 and 7 form the probabilistic check that the vectors $b E(M)$ and $E(v)$ coincide (note that $E$ is linear and $v$ is supposed to equal $bM$), verified at the randomly chosen positions $i_1, \ldots, i_{t_E}$:

6. Compute the encoding $w = E(v)$.

7. For each $j$ between $1$ and $t_E$ (inclusive): if $b \cdot c_{i_j} \neq w[i_j]$, $\mathsf{abort}$.

8. $\mathsf{accept}$.

The core operations performed in Ligero/Brakedown verification (skipping small or constant costs such as step 3) are:

- Computing $a$ and $b$ (steps 1 and 5), requiring $m - 1$ and $n$ field operations, respectively (note that $\alpha^m$ can be obtained from the previously computed $\alpha^{m - 1}$ at a cost of one field operation).
- Squeezing the query indices $i_j$ from the sponge (step 4.1).
- Hashing the columns $c_{i_j}$ into leaves (step 4.3). Each column requires $n$ hashes.
- Verifying the Merkle paths $\pi_j$ (step 4.4). Each verification requires $\log(e)$ hashes.
- Computing the encoding $w = E(v)$ (step 6). We denote the cost by $\varepsilon$ and assume all basic operations it requires are field operations (this is the case for both Ligero and Brakedown). For instance, Ligero with FFT encoding has $\varepsilon = O(e \log(e))$.
- Computing the scalar products $v \cdot a$ and $b \cdot c_{i_j}$ (steps 2 and 7). Each such product requires about $2m$ and $2n$ field operations, respectively.
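
Plugging illustrative numbers into the hash counts above conveys the scale of the savings. The parameters below are hypothetical, chosen by us for illustration (not taken from any benchmark), and the functions implement the counts for a proof failing at $\mathsf{MerkleAssertion}(j)$:

```python
import math

n = 2**10     # number of matrix rows (hypothetical)
e = 2**13     # codeword length (hypothetical)
t_E = 500     # number of openings, Brakedown-style (hypothetical)

def verify_star_hashes(j):
    """Hashes Verify* performs before aborting at MerkleAssertion(j):
    j column hashes (n each) plus j Merkle path verifications (log e each)."""
    return j * (n + math.log2(e))

def naysay_verify_hashes():
    """Hashes NaysayVerify performs for MerkleAssertion(j): one column hash
    plus one Merkle path, independently of j."""
    return n + math.log2(e)

j = t_E // 2  # say the false Merkle proof sits halfway through the queries
print(f"Verify*:      {verify_star_hashes(j):,.0f} hashes")
print(f"NaysayVerify: {naysay_verify_hashes():,.0f} hashes")
```

For these numbers, naysayer verification performs roughly a thousand hashes where $\mathsf{Verify}^\ast$ performs several hundred thousand, and the gap widens as $j$ grows.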

The following table contains the number of core operations as above performed by $\mathsf{Verify}^\ast$ and $\mathsf{NaysayVerify}$ when $\Pi$ leads to each possible assertion failure in $\mathsf{Verify}^\ast$. In the case of $\mathsf{Verify}^\ast$, this simply refers to operations performed until that assertion is reached and the algorithm aborts; whereas for $\mathsf{NaysayVerify}$, this refers to operations performed when receiving a $\Pi_\mathrm{nay}$ containing that assertion identifier.

We note that the basic operation accounted for in each cell is either one sponge squeezing, one hash or one field operation in $\mathbb{F}$ (we don’t delve into the distinction between addition, multiplication and inversion). Operation counts are either approximate or asymptotic.

| Failing assertion | Algorithm | Squeeze | Hash | $\mathbb{F}$ |
|---|---|---|---|---|
| $\mathsf{EvaluationAssertion}$ | $\mathsf{Verify}^\ast$ | $0$ | $0$ | $3m$ |
| | $\mathsf{NaysayVerify}$ | $0$ | $0$ | $3m$ |
| $\mathsf{PathIndexAssertion}(j)$ | $\mathsf{Verify}^\ast$ | $j$ | $j(n + \log(e))$ | $3m$ |
| | $\mathsf{NaysayVerify}$ | $j$ | $0$ | $0$ |
| $\mathsf{MerkleAssertion}(j)$ | $\mathsf{Verify}^\ast$ | $j$ | $j(n + \log(e))$ | $3m$ |
| | $\mathsf{NaysayVerify}$ | $j$ | $n + \log(e)$ | $0$ |
| $\mathsf{EncodingAssertion}(j)$ | $\mathsf{Verify}^\ast$ | $t_E$ | $t_E(n + \log(e))$ | $3m + 2jn + n + \varepsilon$ |
| | $\mathsf{NaysayVerify}$ | $j$ | $0$ | $3n + \varepsilon$ |

As expected, the fact that $\mathsf{Verify}^\ast$ needs to check many true assertions before reaching the false one results in a lot of extra work in most cases. This is especially clear for $\mathsf{EncodingAssertion}$, but also holds for $\mathsf{PathIndexAssertion}(j)$ and $\mathsf{MerkleAssertion}(j)$ if $j$ is high.

Unnecessary hashing and sponge operations are particularly undesirable when they must be performed in-circuit, for example in (typical) proof composition. Even circuit-tailored hash functions such as Poseidon come at a significant constraint cost - let alone traditional bit-operation-based ones such as the SHA family.

# Final thoughts

Naysayer proofs offer a promising paradigm for efficiently realising proof systems in contexts such as blockchain. We have illustrated how a classical family of schemes, that of linear-code-based PCSs such as Ligero and Brakedown, can be easily carried over to the naysayer paradigm and how this results in a substantial improvement of verification costs.

Many natural questions arose during the writing of this post, out of which we highlight a few:

- Is there a better way to derive a naysayer and naysayer verifier from a given scheme than the naive approach above?
- How should one formally and practically go about naysayer-proof composition? An easy example would be that of sumcheck relying on Ligero as the PCS of choice.
- For which proof systems (for instance, PCSs) is naysayer verification particularly efficient compared to direct proof verification? In particular, do any traditionally suboptimal protocols become optimal when their verifiers (and that of their alternatives) are transformed into naysayers?
- In the context of real-world deployment: how should one set naysayer deadlines and disincentives to false naysaying?

Keep posted for more naysaying!

# References

[SGB] I. Seres, N. Glaeser and J. Bonneau: Naysayer proofs. https://eprint.iacr.org/2023/1472.pdf

[AHIV] S. Ames, C. Hazay, Y. Ishai, M. Venkitasubramaniam: Ligero: Lightweight Sublinear Arguments Without a Trusted Setup. https://eprint.iacr.org/2022/1608

[GLSTW] A. Golovnev, J. Lee, S. Setty, J. Thaler, R. S. Wahby: Brakedown: Linear-time and field-agnostic SNARKs for R1CS. https://eprint.iacr.org/2021/1043

Author: Antonio Mejías Gil