On the number of column openings in Ligero

20 minute read

The author is grateful to Hossein Moghaddas and César Descalzo for interesting discussions on the subject.


In this short post, we review the soundness analysis of the Ligero polynomial commitment scheme (PCS) and study the number of column openings required for a target security level. The discussion largely applies to any linear-code-based PCS with the same structure (e.g. Brakedown), although aspects such as the concrete code distance or theorem 4.6 below are Ligero-specific.

Some of the basic ideas we illustrate include the following:

  • The two phases of verification (well-formedness and evaluation) do not necessarily have to open the same number of columns, although that is essentially optimal under a mild assumption.

  • If we denote by $\varepsilon_\mathrm{wf}$ and $\varepsilon_{\mathrm{ev}}$ some soundness error bounds for those two phases, respectively, it suffices to bound $\max(\varepsilon_\mathrm{wf}, \varepsilon_{\mathrm{ev}}) \leq 2^{-\lambda}$ overall. That is, there is no need to resort to a looser union bound of the form $\varepsilon_\mathrm{wf} + \varepsilon_{\mathrm{ev}} \leq 2^{-\lambda}$.

In section 1, we review the structure of a family of linear-code-based PCSs which includes Ligero and Brakedown. The provided description is relatively implementation-oriented in that it includes the Fiat-Shamir transform and succinct Merkle-tree commitments - two techniques which are abstracted away during the soundness analysis. The second section delves into the security of the scheme and the question of column openings, deriving some minimal bounds as well as an optimal choice of the distance error parameter $e$. Readers who are only interested in the number of column openings can consult equations $(4)$ and $(5)$ below.

The only requirement to understand the post is some familiarity with polynomial commitment schemes and Reed-Solomon codes.

1: Linear-code-based polynomial commitment schemes

We briefly recall the commitment, opening and verification methods of Ligero and Brakedown, which are largely identical. In addition to the overview below, the reader is referred to the original Ligero [AHIV] and Brakedown [GLSTW] articles, as well as their Arkworks implementations (Ligero here and Brakedown here). Another description of the two can be found in Justin Thaler’s Proofs, Arguments, and Zero-Knowledge, section 10.5. The present section can be safely skipped by any readers familiar with the Ligero PCS, although some of the notation introduced here will be used in the remainder of the post.

The structure of these two polynomial commitment schemes is exactly the same, the only difference between them lying in the so-called encoding function. This is simply an $\mathbb{F}$-linear map $E: \mathbb{F}^k \to \mathbb{F}^n$ (where $\mathbb{F}$ is the finite field over which the entire scheme takes place) which encodes a $k$-element message $x$ into an $n$-element one $y = E(x)$. Ligero relies on a Reed-Solomon code (often implemented through a Fast Fourier transform, or FFT), whereas Brakedown opts for a recursive encoding. Their differing encoding functions lead to differing code distances and performance.
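To make the encoding step concrete, here is a minimal sketch of a Reed-Solomon encoder; the modulus, the dimensions and the evaluation points $0, \ldots, n - 1$ are toy choices for illustration only, not those of a real deployment (which would use FFT-friendly domains and a large field):

```python
# Toy Reed-Solomon encoding E: F^k -> F^n over a small prime field.
# All parameters (modulus p, dimensions k and n, evaluation points
# 0..n-1) are illustrative choices.
p = 97          # field modulus
k, n = 4, 8     # message and codeword lengths

def encode(msg):
    """Read msg as the coefficients of a polynomial of degree < k and
    evaluate it at the n points 0, 1, ..., n - 1."""
    assert len(msg) == k
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(n)]

# E is F-linear: E(a*x + b*y) = a*E(x) + b*E(y) (entry-wise, mod p).
x, y, a, b = [1, 2, 3, 4], [5, 6, 7, 8], 10, 20
lhs = encode([(a * xi + b * yi) % p for xi, yi in zip(x, y)])
rhs = [(a * u + b * v) % p for u, v in zip(encode(x), encode(y))]
assert lhs == rhs
```

The linearity checked in the last lines is exactly the property the scheme exploits when the verifier re-encodes linear combinations of rows.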

In the description below, a few caveats must be taken into account:

  • For simplicity, we present the schemes for univariate polynomials. Their multilinear counterparts only differ in how $a$ and $b$ are computed from the point $\alpha$ (see below) and admit the exact same security analysis.
  • We describe the non-interactive version of the schemes obtained by applying the Fiat-Shamir transform to the original interactive version. In a nutshell, this means all parties rely on a sponge $S$ that can absorb data and squeeze uniformly-random-looking data. The prover feeds to $S$ the messages it would send to the verifier and squeezes from $S$ the random challenges it would receive from the verifier. We skip the sponge initialisation steps, which are an interesting and delicate topic of their own.
  • The scheme described below is faithful to real-world implementations in that it uses a Merkle commitment to the encoded matrix $E(M)$ for succinctness. However, it is easier to conceptualise the PCS in its non-succinct version during the security analysis, where the prover sends the verifier the full $E(M)$ and the verifier can perform the linear-combination tests directly on the columns of $E(M)$ rather than on purported columns verified through a Merkle path.
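To make the absorb/squeeze pattern from the second caveat concrete, here is a toy sponge built from a hash chain; it only illustrates the interface and is not a secure sponge construction (a real implementation would use a proper permutation, e.g. SHA-3 or an arithmetic sponge):

```python
import hashlib

# Toy sponge: illustrates the absorb/squeeze interface only.
class Sponge:
    def __init__(self):
        self.state = b"\x00" * 32

    def absorb(self, data: bytes):
        # fold the new data into the running state
        self.state = hashlib.sha256(b"absorb" + self.state + data).digest()

    def squeeze(self) -> bytes:
        # derive output and advance the state so successive squeezes differ
        out = hashlib.sha256(b"squeeze" + self.state).digest()
        self.state = hashlib.sha256(b"advance" + self.state).digest()
        return out

s = Sponge()
s.absorb(b"commitment C")
c1, c2 = s.squeeze(), s.squeeze()
assert c1 != c2   # distinct challenges from successive squeezes
```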

In the three algorithms below, the prover $P$ and the verifier $V$ know the same dimensions $m, k \in \mathbb{N} \setminus \{0\}$. The PCS can be applied to polynomials of degree less than $mk$. For simplicity, we assume $k$ and $n$ are powers of 2. The parties use a cryptographically secure hash function $H$ (we use rather flexible notation for the inputs and outputs of $H$). Crucially, $P$ and $V$ also know a number of column openings $t_{\mathrm{wf}}$ for the well-formedness phase, and another one $t_{\mathrm{ev}}$ for the evaluation phase. These depend on the encoding $E$ and its parameters (which we omit in the notation) and they are the object of section 2.


$\mathsf{Commit}(f) \to C$: The prover commits to a univariate polynomial $f \in \mathbb{F}^{< mk}[x]$ of degree less than $mk$:

  1. Form an $m$-by-$k$ matrix $M$ with entries in $\mathbb{F}$ by collating the coefficients of $f$ in row-major order, starting with the constant term and ending with the coefficient of $x^{mk - 1}$.
  2. Encode each row of $M$ by applying the encoding function $E$, which results in an $m$-by-$n$ extended (or: encoded) matrix $E(M)$.
  3. Hash each column $c_i$ of $E(M)$ into a leaf $l_i = H(c_i)$.
  4. Form a Merkle tree $T$ from the leaves $(l_i)_{i = 1}^n$.
  5. The commitment $C$ to $f$ is the root of $T$.
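The five steps above can be sketched as follows; the parameters, the naive quadratic-time encoder and the use of SHA-256 as $H$ are all illustrative stand-ins, not choices made by any concrete implementation:

```python
import hashlib

# Sketch of Commit with toy parameters.
p, m, k, n = 97, 2, 4, 8

def encode(row):  # naive Reed-Solomon encoder standing in for E
    return [sum(c * pow(x, i, p) for i, c in enumerate(row)) % p
            for x in range(n)]

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = leaves
    while len(level) > 1:  # n is a power of 2, so levels halve cleanly
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(coeffs):
    assert len(coeffs) == m * k
    M = [coeffs[i * k:(i + 1) * k] for i in range(m)]   # step 1: row-major matrix
    EM = [encode(row) for row in M]                     # step 2: encode each row
    cols = [bytes(EM[i][j] for i in range(m)) for j in range(n)]
    leaves = [H(c) for c in cols]                       # step 3: hash columns
    return merkle_root(leaves)                          # steps 4-5: Merkle root

C = commit(list(range(1, m * k + 1)))   # commit to f(x) = 1 + 2x + ... + 8x^7
```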

$\mathsf{Open}(f, C, \alpha) \to \Pi$: The prover has $f$ as above and a commitment $C$ to it. They receive a query point $\alpha \in \mathbb{F}$ and wish to open $C$ to prove the value of $f(\alpha)$:

  1. Compute $M$, $E(M)$ and $T$ as in $\mathsf{Commit}(f)$. In practice, these can be stored and passed to $\mathsf{Open}$ instead (in the slot of the PCS definition typically reserved for the opening hint).

  2. Compute $b = (1, \alpha^k, \alpha^{2k}, \ldots, \alpha^{(m - 1)k})$.

  3. Compute $v_{\mathrm{ev}} = bM$.

  4. Absorb $C$, $\alpha$ and $v_{\mathrm{ev}}$ into $S$.

// Well-formedness phase:

  5. Squeeze $m$ field elements $r = (r_1, \ldots, r_m) \in \mathbb{F}^m$ from $S$.

  6. Compute $v_{\mathrm{wf}} = rM$.

  7. Absorb $v_{\mathrm{wf}}$ into $S$.

  8. Squeeze $t_{\mathrm{wf}}$ indices $i_1, \ldots, i_{t_{\mathrm{wf}}}$ from $S$, each between $0$ and $n - 1$.

  9. For each squeezed index $i_j$ (with $1 \leq j \leq t_{\mathrm{wf}}$), produce a Merkle proof $\pi_j$ that $l_{i_j}$ (computed as $H(c_{i_j})$) is the $i_j$-th leaf of $T$ (with root $C$).

  10. Absorb $(c_{i_j})_{j = 1}^{t_{\mathrm{wf}}}$ and $(\pi_j)_{j = 1}^{t_{\mathrm{wf}}}$ into $S$.

// Evaluation phase:

  11. Squeeze $t_{\mathrm{ev}}$ indices $i’_1, \ldots, i’_{t_{\mathrm{ev}}}$ from $S$, each between $0$ and $n - 1$.

  12. For each squeezed index $i’_j$ (with $1 \leq j \leq t_{\mathrm{ev}}$), produce a Merkle proof $\pi’_j$ that $l’_{i’_j}$ (computed as $H(c’_{i’_j})$) is the $i’_j$-th leaf of $T$ (with root $C$).

  13. The proof $\Pi$ consists of $v_{\mathrm{wf}}$, $v_{\mathrm{ev}}$, the queried columns $(c_{i_j})_{j = 1}^{t_{\mathrm{wf}}}$ and $(c’_{i’_j})_{j = 1}^{t_{\mathrm{ev}}}$, and the Merkle proofs $(\pi_j)_{j = 1}^{t_{\mathrm{wf}}}$ and $(\pi’_j)_{j = 1}^{t_{\mathrm{ev}}}$.

Remark: Step 10 has been included here for robustness of the Fiat-Shamir transform, since those columns and Merkle proofs would indeed be sent by the prover at that point in the interactive version of the scheme. However, it is likely they do not need to be absorbed in practice, since a dishonest prover would have no freedom to tamper with them after $C$ and the $i_j$ were fixed (the verifier would reject when checking the Merkle proofs by the cryptographic properties of $H$). Nonetheless, experience shows that any modification of the Fiat-Shamir transform must be accompanied by a conscientious security analysis.


$\mathsf{Verify}(\Pi, C, \alpha, \beta) \to \{\mathsf{accept}, \mathsf{abort}\}$: The verifier has a commitment $C$, a point $\alpha$, an evaluation $\beta$ claimed to equal $f(\alpha)$ and a polynomial opening proof $\Pi = \big(v_{\mathrm{wf}}, v_{\mathrm{ev}}, (c_{i_j})_{j = 1}^{t_{\mathrm{wf}}}, (c’_{i’_j})_{j = 1}^{t_{\mathrm{ev}}}, (\pi_j)_{j = 1}^{t_{\mathrm{wf}}}, (\pi’_j)_{j = 1}^{t_{\mathrm{ev}}}\big)$:

  1. Absorb $C$, $\alpha$ and $v_\mathrm{ev}$ into $S$.

  2. Squeeze $m$ field elements $r = (r_1, \ldots, r_m) \in \mathbb{F}^m$ from $S$.

  3. Compute $a = (1, \alpha, \ldots, \alpha^{k - 1})$ and $b = (1, \alpha^k, \alpha^{2k}, \ldots, \alpha^{(m - 1)k})$.

  4. Compute the encodings $w_{\mathrm{wf}} = E(v_{\mathrm{wf}})$ and $w_{\mathrm{ev}} = E(v_{\mathrm{ev}})$.

  5. Absorb $v_{\mathrm{wf}}$ into $S$.

// Well-formedness phase:

  6. Squeeze $t_{\mathrm{wf}}$ indices $i_1, \ldots, i_{t_{\mathrm{wf}}}$ from $S$, each between $0$ and $n - 1$.

  7. For each $j$ between $1$ and $t_{\mathrm{wf}}$ (inclusive):

    7.1. Hash $c_{i_j}$ into a leaf $l_{i_j} = H(c_{i_j})$.

    7.2. Verify the proof $\pi_j$ that $l_{i_j}$ is the $i_j$-th leaf of a Merkle tree with root equal to the commitment $C$. If verification fails: $\mathsf{abort}$.

    7.3. If $r \cdot c_{i_j} \neq w_{\mathrm{wf}}[i_j]$: $\mathsf{abort}$.

  8. Absorb $(c_{i_j})_{j = 1}^{t_{\mathrm{wf}}}$ and $(\pi_j)_{j = 1}^{t_{\mathrm{wf}}}$ into $S$.

// Evaluation phase:

  9. Squeeze $t_{\mathrm{ev}}$ indices $i’_1, \ldots, i’_{t_{\mathrm{ev}}}$ from $S$, each between $0$ and $n - 1$.

  10. For each $j$ between $1$ and $t_{\mathrm{ev}}$ (inclusive):

    10.1. Hash $c’_{i’_j}$ into a leaf $l’_{i’_j} = H(c’_{i’_j})$.

    10.2. Verify the proof $\pi’_j$ that $l’_{i’_j}$ is the $i’_j$-th leaf of a Merkle tree with root equal to the commitment $C$. If verification fails: $\mathsf{abort}$.

    10.3. If $b \cdot c’_{i’_j} \neq w_{\mathrm{ev}}[i’_j]$: $\mathsf{abort}$.

  11. If $\beta \neq v_{\mathrm{ev}} a^T$: $\mathsf{abort}$.

  12. $\mathsf{accept}$.
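Completeness of the column test in step 7.3 boils down to the linearity of $E$: combining the encoded rows of $E(M)$ with $r$ yields exactly the encoding of $rM$. A quick numerical check with toy parameters (the naive encoder below is an illustrative stand-in for $E$):

```python
# Toy check of the well-formedness column test for an honest prover.
p, m, k, n = 97, 2, 4, 8

def encode(row):  # naive Reed-Solomon encoder standing in for E
    return [sum(c * pow(x, i, p) for i, c in enumerate(row)) % p
            for x in range(n)]

M = [[1, 2, 3, 4], [5, 6, 7, 8]]
EM = [encode(row) for row in M]        # the honest extended matrix E(M)
r = [11, 22]                           # a "random" challenge vector

v_wf = [sum(r[i] * M[i][j] for i in range(m)) % p for j in range(k)]  # r*M
w_wf = encode(v_wf)                    # the verifier's encoding E(r*M)

# r dotted with ANY column of E(M) matches the corresponding entry of w_wf:
for j in range(n):
    col = [EM[i][j] for i in range(m)]
    assert sum(ri * ci for ri, ci in zip(r, col)) % p == w_wf[j]
```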


Remarks:

  • Steps 8 and 11 of $\mathsf{Open}$ and 6 and 9 of $\mathsf{Verify}$ require squeezing integral indices from the sponge $S$. In both cases, this can be achieved through repeated squeezings or by squeezing a single element and using the latter to seed a PRNG, which is then used to produce the desired indices. The author thanks Giacomo Fenzi for acquainting them with the latter technique. Which option is more beneficial depends on multiple factors (bit-based vs. arithmetic sponge, direct or recursive proofs, bias possibly introduced by the field size, etc.), but we defer that discussion to another time.
  • If Ligero or Brakedown are used as a subprotocol in a larger protocol, the columns $c’_{i’_j}$ and the Merkle proofs $\pi’_j$ from the evaluation phase should formally also be absorbed into $S$ at the end of $\mathsf{Open}$ and $\mathsf{Verify}$ in order to make the Fiat-Shamir transform robust. The same remark applies as before that this may not be necessary in practice.
  • The indices $i_j$ and $i’_j$ of the columns queried during the well-formedness and evaluation checks do not have to be distinct - this is not required by the security analysis below. In practice, duplicate columns can be sent and tested only once.
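The squeeze-then-seed approach from the first remark can be sketched as follows; the transcript string, SHA-256 and Python's `random` module are illustrative stand-ins for the sponge and the PRNG (and, as noted above, a real deployment must worry about bias and recursion-friendliness):

```python
import hashlib
import random

# Squeeze once, seed a PRNG, derive all indices from it.
def squeeze_indices(transcript: bytes, t: int, n: int):
    seed = hashlib.sha256(transcript).digest()  # one squeeze from the "sponge"
    rng = random.Random(seed)                   # PRNG seeded with the squeeze
    return [rng.randrange(n) for _ in range(t)] # t indices in [0, n)

idx = squeeze_indices(b"C || alpha || v_ev || v_wf", t=5, n=1024)
assert len(idx) == 5 and all(0 <= i < 1024 for i in idx)
```

Determinism of the PRNG given the seed is what lets the verifier re-derive the very same indices from its own copy of the transcript.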

2: The number of column openings $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$

This section delves into why (and in which sense) the above PCS is binding and sound, and how large $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$ must be chosen in order to control the soundness error on the claim $f(\alpha) = \beta$. Subsection 2.1 is slightly more theoretical and explains the error bounds $\varepsilon_{\mathrm{wf}}$ and $\varepsilon_{\mathrm{ev}}$ on the two phases of the protocol. The second subsection derives concrete bounds for $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$ as well as optimal values given a desired security level $\lambda$ under a mild assumption.

In the sequel, we proceed under the aforementioned simplifying assumptions:

  • No Fiat-Shamir transform, i.e. interactivity: The prover sends the verifier the elements of the proof instead of feeding them to $S$, and the verifier sends the prover the random coins instead of having the latter squeeze them from $S$.
  • No succinctness: The prover sends the verifier the extended $m$-by-$n$ matrix $M_E$ (claimed to equal $E(M)$) in full. In particular, the verifier has direct access to the columns $c_{i_j}$ and $c’_{i’_j}$ and therefore no need for column openings or Merkle proofs.

This simplified protocol (which is indeed the more natural one) is an acceptable model of the PCS described in section 1 provided $S$ and $H$ are cryptographically secure.

2.1: The well-formedness and evaluation checks

Let $d \in \mathbb{N} \setminus \{0\}$ denote the distance of the code $E$. In the case of Reed-Solomon codes (used in Ligero), one has $d = n - k + 1$ since a polynomial of degree $< k$ over a field is uniquely determined by its values at any $k$ points. In the linear-code-PCS security analysis, one fixes a chosen distance bound $e < d/2$. This ensures the algorithms take place in the so-called unique-decoding regime, where a ball of radius $\leq e$ centred at any $c \in \mathbb{F}^n$ contains at most one codeword.

Steps 6-7 of $\mathsf{Verify}$ (corresponding to steps 8-9 of $\mathsf{Open}$) are typically termed the well-formedness, consistency or commitment check. In them, $P$ proves to $V$ - up to a bounded soundness error - that the matrix $M_E$ does indeed arise as the row-wise encoding of some matrix $M^\ast$. Intuitively, soundness of the well-formedness check should rely on a result along the lines of: “If some row of $M_E$ is not close to a codeword, then $rM_E$ with $r \xleftarrow{R} \mathbb{F}^m$ is likely not close to a codeword”. It turns out, however, that a finer notion results in a tighter security analysis: one can control differences between rows of $M_E$ and codewords not individually, but simultaneously across all rows. More specifically, one considers a new code $E(\mathbb{F}^k)^m \subseteq (\mathbb{F}^n)^m \cong (\mathbb{F}^m)^n$ with codeword length $n$ over the alphabet of columns $\mathbb{F}^m$. Under this metric, a matrix $N \in M_{m \times n}(\mathbb{F})$ is within distance $\delta$ of the codeword $(w_1, \ldots, w_m)$ (where each $w_i$ is a row) if and only if there exists a set $I$ of at most $\delta$ columns such that each row $N_i$ of $N$ coincides with $w_i$ at all columns outside $I$. In other words, two matrices are considered to differ at any of $n$ possible columns if those columns differ in at least one entry. This notion, known as correlated agreement, has been instrumental in the study of commitment schemes such as Ligero.

The central result in the soundness analysis of the well-formedness check is theorem 4.6 of [AHIV], which in turn follows from [BCIKS] and is a refinement of previous results from the original Ligero paper:

Theorem: Let $E(\mathbb{F}^k)$ be an $[n, k, d]$-Reed-Solomon code, $e < d/2$ a distance error parameter in $\mathbb{N} \setminus \{0\}$ and $N \in M_{m \times n}(\mathbb{F})$ an arbitrary matrix. If $d(N, E(\mathbb{F}^k)^m) > e$, then $$\mathrm{Pr}\bigg(d(rN, E(\mathbb{F}^k)) \leq e \ | \ r \xleftarrow{R} \mathbb{F}^m \bigg) \leq \frac{n}{|\mathbb{F}|}.$$

Since the notation $L’$ in the actual formulation from [AHIV] has not been introduced here, we now clarify how the above theorem follows from the original source [BCIKS]. It is enough to apply theorem 1.6 in the latter with $u_0$ set to the zero vector and $(u_i)_{i = 1}^l$ set to the rows of $N$ (so the article’s $l$ is our $m$). With these choices, said theorem states that, if $\mathrm{Pr}\big(d(rN, E(\mathbb{F}^k)) \leq e \ | \ r \xleftarrow{R} \mathbb{F}^m \big) > \frac{n}{|\mathbb{F}|}$, then $d(N, E(\mathbb{F}^k)^m) \leq e$. Note that our $e$ corresponds to $\delta \cdot |\mathcal{D}|$ in the article’s notation, where $\mathcal{D}$ denotes the entire evaluation domain of the Reed-Solomon code and $\mathcal{D}’$ is the agreement set of columns between $N$ and a minimal-distance matrix in $E(\mathbb{F}^k)^m$. Taking now the contrapositive of that formulation yields the theorem as stated above.

Suppose now that the Ligero verifier interacts with a dishonest prover $P^\ast$ who sends an extended matrix $M_E$ such that $d(M_E, E(\mathbb{F}^k)^m) > e$. The verifier will always construct $w_{\mathrm{wf}}$ as the codeword $E(v_{\mathrm{wf}}) \in E(\mathbb{F}^k)$. The theorem tells us that $d(rM_E, w_{\mathrm{wf}}) \leq e$ is very unlikely for large $\mathbb{F}$, so we do not need a finer analysis of the soundness error in that case. Otherwise, one has $d(rM_E, w_{\mathrm{wf}}) > e$, in which case testing $t_{\mathrm{wf}}$ random entries of $rM_E$ against $w_{\mathrm{wf}}$ will discover a discrepancy except with probability at most $((n - e)/n)^{t_{\mathrm{wf}}} = (1 - e/n)^{t_{\mathrm{wf}}}$. Note that, as mentioned earlier, this does not rely on the tested entries being distinct. Applying now the law of total probability, one easily obtains the following:

Corollary: Let $E$ and $e < d/2$ be as in the theorem, and suppose the possibly dishonest prover $P^\ast$ sends a matrix $M_E \in M_{m \times n}(\mathbb{F})$ such that $d(M_E, E(\mathbb{F}^k)^m) > e$. Then one has $$\mathrm{Pr}(V \ \text{does not abort during the well-formedness check}) \leq \frac{n}{|\mathbb{F}|} + \left(1 - \frac{e}{n}\right)^{t_{\mathrm{wf}}}.$$

The second part of the analysis focuses on the complementary case $d(M_E, E(\mathbb{F}^k)^m) \leq e$. It follows (using the unique decoding regime $e < d/2$) that each row of $M_E$ is at distance $\leq e$ from exactly one codeword of $E(\mathbb{F}^k)$. We denote the matrix formed by those codewords by $M_E^\ast$ and the unique preimage of the latter under the encoding by $M^\ast$, i.e. $E(M^\ast) = M_E^\ast$.

Verifier steps 9-10 (corresponding to prover steps 11-12) are aimed at ensuring that $v_{\mathrm{ev}} = b M^\ast$ - or equivalently, by the injectivity of $E$, that $w_{\mathrm{ev}} = b M_E^\ast$. Two small comments are in order here. Firstly, this condition only makes sense if $M^\ast$ is (uniquely) defined, a sufficient condition for which is $d(M_E, E(\mathbb{F}^k)^m) \leq e$. Secondly, even though we use different notation for $M^\ast$ and the honest-prover matrix $M$, the two play the same role. More specifically, in the case $d(M_E, E(\mathbb{F}^k)^m) \leq e$, the matrix $M^\ast$ represents the polynomial $f^\ast$ that the possibly dishonest prover $P^\ast$ is effectively committing to by sending $M_E$ (as otherwise the verifier will likely reject already during the well-formedness check). One then has $f^\ast(\alpha) = (bM^\ast) a^T$.

The verifier tests agreement between $w_{\mathrm{ev}}$ and $b M_E$ at $t_{\mathrm{ev}}$ (not necessarily distinct) positions chosen uniformly at random. Let us study how many coincidences there can be whenever $w_{\mathrm{ev}} \neq b M_E^\ast$. Here the correlated agreement property finally comes in: $d(M_E, E(\mathbb{F}^k)^m) \leq e$ implies that $M_E$ differs from $M_E^\ast$ (its nearest matrix) in at most $e$ columns, which in turn yields $d(bM_E^\ast, bM_E) \leq e$ (note that we could not conclude this from row-wise information about discrepancies between $M_E$ and $M_E^\ast$ alone). Outside of these $\leq e$ entries, $b M_E$ agrees with $b M_E^\ast$, which is a codeword different from $w_{\mathrm{ev}}$ by assumption and therefore coincides with $w_{\mathrm{ev}}$ on at most $k - 1$ positions. It follows that $w_{\mathrm{ev}}$ and $b M_E$ coincide at $\leq e + k - 1$ entries, and hence:

Proposition: Let $E$ and $e < d/2$ be as in the theorem. For any possibly dishonest prover $P^\ast$, $$\mathrm{Pr}(V \ \text{does not abort during the evaluation check} \ \mid \ d(M_E, E(\mathbb{F}^k)^m) \leq e \ \text{and} \ v_{\mathrm{ev}} \neq bM^\ast) \leq \left(\frac{e + k - 1}{n}\right)^{t_{\mathrm{ev}}}.$$
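The key counting fact used in the argument above - two distinct codewords of a dimension-$k$ Reed-Solomon code agree on at most $k - 1$ positions - can be sanity-checked numerically with toy parameters:

```python
import itertools

# Toy check: distinct codewords of an [n, k] Reed-Solomon code agree on
# at most k - 1 positions (parameters illustrative).
p, k, n = 97, 4, 8

def encode(msg):
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(n)]

base = encode([1, 2, 3, 4])
for msg in itertools.product(range(3), repeat=k):  # a small sample of messages
    other = encode(list(msg))
    if other != base:
        assert sum(u == v for u, v in zip(base, other)) <= k - 1
```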

We would like to combine the preceding results into a soundness statement of the form: “If a possibly dishonest prover lies about $f(\alpha)$, the probability that the verifier does not reject is negligible”. The technical issue arises that $f$ is not defined a priori, since all the verifier sees is a commitment $C$ in the form of an arbitrary matrix $M_E \in M_{m \times n}(\mathbb{F})$, and the dishonest prover could be performing any algorithm (even one not involving any polynomials). We have already defined the natural candidate for $f$ in the case $d(M_E, E(\mathbb{F}^k)^m) \leq e$: it is the polynomial $f^\ast$ given by the unique matrix $M^\ast \in M_{m \times k}(\mathbb{F})$ such that $E(M^\ast)$ is within correlated distance $e$ of $M_E$. We extend this definition to all of $M_{m \times n}(\mathbb{F})$, at the cost of leaving the unique decoding regime, as follows:

Definition: For each $M_E \in M_{m \times n}(\mathbb{F})$, choose and fix an $M^\ast \in M_{m \times k}(\mathbb{F})$ such that $E(M^\ast)$ is within minimal correlated distance to $M_E$, that is: for all $\widetilde{M} \in M_{m \times k}(\mathbb{F})$, one has $d(M_E, E(\widetilde{M})) \geq d(M_E, E(M^\ast))$. Furthermore, define $f^\ast \in \mathbb{F}^{< mk}[x]$ to be the polynomial whose coefficients (in the monomial basis) are given by $M^\ast$ iterated in row-major order.

Note that we omit $M_E$ from the notation of $M^\ast$ and $f^\ast$: this will not cause any ambiguities.

The soundness result from the next subsection captures the idea that sending $M_E$ binds the (possibly dishonest) prover to opening $f^\ast(\alpha)$ as defined above. We stress that how $f^\ast$ is defined only truly matters in the $d(M_E, E(\mathbb{F}^k)^m) \leq e$ case, as otherwise the verifier will reject with high probability already during the well-formedness check. Recall that, in the $d(M_E, E(\mathbb{F}^k)^m) \leq e$ case, there is only one possible choice for $M^\ast$ and it coincides with our previous notation. For any readers still bothered by the lack of canonicity of the choice of $M^\ast$ in the above definition, we point out that it can be made canonical in many natural ways: for instance, by identifying $\mathbb{F}$ with $\{0, 1, \ldots, |\mathbb{F}| - 1\}$ and ordering $M_{m \times k}(\mathbb{F})$ lexicographically.

2.2: A combined soundness result and bounds on column openings

The crucial distinction between the corollary and the proposition from the previous subsection is the correlated-distance bound on $M_E$. Let us partition the $\mathbb{F}$-vector space of $m$-by-$n$ matrices $M_{m \times n}(\mathbb{F})$ into $$M_{m \times n}(\mathbb{F})^{> e} = \{M_E \in M_{m \times n}(\mathbb{F}) : d(M_E, E(\mathbb{F}^k)^m) > e\}$$ and its complement $M_{m \times n}(\mathbb{F})^{\leq e}$ defined analogously.

Given a possibly dishonest prover $P^\ast$ sending a matrix $M_E$, the events $M_E \in M_{m \times n}(\mathbb{F})^{> e}$ and $M_E \in M_{m \times n}(\mathbb{F})^{\leq e}$ partition the entire probability space of the prover-verifier interaction; it therefore suffices to bound the maximum of the individual soundness errors for the two events, rather than their sum. We denote these two bounds by $\varepsilon_\mathrm{wf} = n / |\mathbb{F}| + (1 - e/n)^{t_{\mathrm{wf}}}$ and $\varepsilon_{\mathrm{ev}} = ((e + k - 1)/n)^{t_{\mathrm{ev}}}$ in the notation of the previous subsection. While one could bound $$\sum_{M_E \in M_{m \times n}(\mathbb{F})} \mathrm{Pr}(V \ \text{does not abort when the matrix sent is} \ M_E \ \mid \ f^\ast(\alpha) \neq \beta) \cdot \mathrm{Pr}(P^\ast \ \text{sends} \ M_E),$$ the preceding corollary and proposition yield a stronger result in the form of a bound for each $M_E$. This switch to a universal quantifier on $M_E$ turns $f^\ast(\alpha) \neq \beta$ into a non-random event (as $\alpha$ and $\beta$ are not random variables: they are arbitrary inputs to $\mathsf{Verify}$), and we can therefore do away with the conditional probability, yielding the final result:

Corollary: Let $E$ and $e < d/2$ be as in the theorem and set $\varepsilon_\mathrm{wf}$ and $\varepsilon_\mathrm{ev}$ as above. Then, for any $M_E \in M_{m \times n}(\mathbb{F})$ sent by a possibly dishonest prover and $\alpha, \beta \in \mathbb{F}$ such that $f^\ast(\alpha) \neq \beta$ (with $f^\ast$ as in the definition in subsection 2.1), one has$$\mathrm{Pr}(V \ \text{does not abort during} \ \mathsf{Verify}) \leq \max(\varepsilon_\mathrm{wf}, \varepsilon_\mathrm{ev}).$$

Proof: If $M_E \in M_{m \times n}(\mathbb{F})^{> e}$, then $$\begin{align*} &\mathrm{Pr}(V \ \text{does not abort during} \ \mathsf{Verify}) \\ \leq \ & \mathrm{Pr}(V \ \text{does not abort during the well-formedness check}) \\ \leq \ & \varepsilon_\mathrm{wf} \end{align*}$$ by the previous corollary. Otherwise, $M_E \in M_{m \times n}(\mathbb{F})^{\leq e}$ and therefore $$\begin{align*} &\mathrm{Pr}(V \ \text{does not abort during} \ \mathsf{Verify}) \\ \leq \ & \mathrm{Pr}(V \ \text{does not abort during the evaluation check}) \\ \leq \ & \varepsilon_\mathrm{ev} \end{align*}$$ by the proposition. $\square$

Let $\lambda \in \mathbb{N} \setminus \{0\}$ be a security parameter. We denote the rate $k/n$ of the code by $\rho$ and the relative distance error $e/n$ by $\delta$. The corollary shows that we can bound the soundness error of Ligero by $2^{-\lambda}$ by choosing $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$ such that $\varepsilon_\mathrm{wf} \leq 2^{-\lambda}$ and $\varepsilon_{\mathrm{ev}} \leq 2^{-\lambda}$. This is equivalent to $$\begin{equation}\tag{1}t_{\mathrm{wf}} \geq \log_{(1 - \delta)}(2^{-\lambda} - n / |\mathbb{F}|)\end{equation}$$ and $$\begin{equation}\tag{2}t_{\mathrm{ev}} \geq \log_{(\delta + \rho - 1/n)}(2^{-\lambda}) = \frac{-\lambda}{\log(\delta + \rho - 1/n)}\end{equation}$$ (here and in the sequel, undecorated $\log$ denotes $\log_2$).
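For a feel of the magnitudes involved, here is a direct evaluation of bounds $(1)$ and $(2)$; all parameters below (security level, code length, rate, distance error, field size) are example values chosen for illustration, not recommendations:

```python
import math

# Minimal t_wf and t_ev satisfying bounds (1) and (2).
lam = 128        # target security level
n = 2 ** 12      # codeword length
rho = 1 / 4      # rate k/n
delta = 0.3      # relative distance error e/n, below (1 - rho + 1/n)/2
F = 2.0 ** 256   # field size |F|

# (1): t_wf >= log_{1 - delta}(2^-lam - n/|F|)
t_wf = math.ceil(math.log(2.0 ** -lam - n / F) / math.log(1 - delta))
# (2): t_ev >= -lam / log2(delta + rho - 1/n)
t_ev = math.ceil(-lam / math.log2(delta + rho - 1 / n))

# Both phase-wise soundness error bounds now sit below 2^-lam:
eps_wf = n / F + (1 - delta) ** t_wf
eps_ev = (delta + rho - 1 / n) ** t_ev
assert max(eps_wf, eps_ev) <= 2.0 ** -lam
```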

The above formulas provide a simple way to set the number of column openings for the two phases of the argument in the case where the distance error $e$ (or equivalently, $\delta$) is predetermined. However, $e$ is merely a parameter in the security analysis and can therefore often be chosen when implementing the scheme. The natural question thus arises of what value leads to the most efficient PCS, which we address now. We regard $\rho$ as fixed, since modifying it has a more substantial impact on proof costs and sizes and it is therefore typically set first (although a more detailed analysis could be done simultaneously on $\rho$ and $\delta$).

A simple initial realisation is that the time and space costs associated to $t_{\mathrm{wf}}$ are exactly the same as those associated to $t_{\mathrm{ev}}$: one column opening and associated Merkle proof (either producing it or verifying it) plus, in the case of the verifier, a scalar product check. Therefore, we aim to minimise $t_{\mathrm{wf}} + t_{\mathrm{ev}}$ and hence its lower bound $$\begin{equation}\tag{3}\log_{(1 - \delta)}(2^{-\lambda} - n / |\mathbb{F}|) + \frac{-\lambda}{\log(\delta + \rho - 1/n)}\end{equation}$$ implied by $(1)$ and $(2)$.

It is now that we make the simplifying assumption $(A)$ that $n / |\mathbb{F}| \ll 2^{-\lambda}$. If this is not the case, equation $(3)$ must be handled in full form. Proceeding under $(A)$, however, we can approximate the first summand of $(3)$ by $\log_{(1 - \delta)}(2^{-\lambda}) = -\lambda / \log(1 - \delta)$. Minimising $(3)$ then becomes equivalent to maximising (the negative function) $$\frac{1}{\log(1 - \delta)} + \frac{1}{\log(\delta + \rho - 1/n)}.$$

This function of $\delta$ is symmetric with respect to the vertical line $\delta = (1 - \rho + 1/n)/2$, on either side of which it is monotonic and tends to $-\infty$. It therefore attains its maximum over the domain of valid values of $\delta$ at $\delta_0 = (1 - \rho + 1/n)/2$. This yields an optimal distance error $e_0 = n \delta_0 = (n - k + 1)/2$. Note that this is precisely $d/2$, whereas the corollary requires $e < d/2$. But $n$ and $k$ are even by assumption, and we can therefore set $e = (n - k)/2$ and $\delta = (n - k)/(2n) = (1 -\rho)/2$.
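The symmetry and the location of the maximum can be checked numerically (again with illustrative $n$ and $\rho$):

```python
import math

# Numerical check of the symmetry axis and maximiser of
# 1/log2(1 - delta) + 1/log2(delta + rho - 1/n).
n, rho = 2 ** 12, 1 / 4
delta0 = (1 - rho + 1 / n) / 2       # claimed axis of symmetry / maximiser

def g(d):
    return 1 / math.log2(1 - d) + 1 / math.log2(d + rho - 1 / n)

step = 0.05
assert abs(g(delta0 - step) - g(delta0 + step)) < 1e-9   # symmetry
assert g(delta0) > g(delta0 - step)                       # maximal at delta0
assert g(delta0) > g(delta0 + step)
```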

Plugging this back into $(1)$ and $(2)$ and performing trivial manipulations results in the final bounds $$\begin{equation}\tag{4} t_{\mathrm{wf}} = \left \lceil \frac{\log(2^{-\lambda} - \frac{n}{|\mathbb{F}|})}{\log(1 + \rho) - 1} \right\rceil \end{equation}$$ and $$\begin{equation}\tag{5} t_{\mathrm{ev}} = \left \lceil \frac{-\lambda}{\log(\frac{1 + \rho}{2} - \frac{1}{n})}\right\rceil. \end{equation}$$
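Evaluating $(4)$ and $(5)$ for example parameters (again illustrative, not prescriptive):

```python
import math

# Bounds (4) and (5) for example parameters.
lam, rho, n = 128, 1 / 4, 2 ** 12
F = 2.0 ** 256   # field size |F|

t_wf = math.ceil(math.log2(2.0 ** -lam - n / F) / (math.log2(1 + rho) - 1))
t_ev = math.ceil(-lam / math.log2((1 + rho) / 2 - 1 / n))
```

For these particular values the two counts come out equal, consistent with the observation that $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$ are very close whenever both $n / |\mathbb{F}|$ and $1 / n$ are small.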

We stress that these values always result in a combined soundness error smaller than or equal to $2^{-\lambda}$ in the sense of the last corollary - what is contingent upon assumption $(A)$, i.e. $n / |\mathbb{F}| \ll 2^{-\lambda}$, is how optimal those values are. It is also worth noting that $t_{\mathrm{wf}}$ and $t_{\mathrm{ev}}$ are very close to one another whenever both $n / |\mathbb{F}|$ and $1 / n$ are small.

References

  • [AHIV] S. Ames, C. Hazay, Y. Ishai, M. Venkitasubramaniam. Ligero: Lightweight Sublinear Arguments Without a Trusted Setup. Preprint available at: https://eprint.iacr.org/2022/1608.

  • [BCIKS] E. Ben-Sasson, D. Carmon, Y. Ishai, S. Kopparty, and S. Saraf. Proximity Gaps for Reed-Solomon Codes. Preprint available at: https://eprint.iacr.org/2020/654.

  • [GLSTW] A. Golovnev, J. Lee, S. Setty, J. Thaler, R. S. Wahby. Brakedown: Linear-time and field-agnostic SNARKs for R1CS. Preprint available at: https://eprint.iacr.org/2021/1043.


Author: Antonio Mejías Gil