
{"id":154655,"date":"2026-04-23T05:35:57","date_gmt":"2026-04-23T05:35:57","guid":{"rendered":"https:\/\/mycryptomania.com\/?p=154655"},"modified":"2026-04-23T05:35:57","modified_gmt":"2026-04-23T05:35:57","slug":"how-zksnarks-actually-work-r1cs-qap-and-pairings-explained-with-examples","status":"publish","type":"post","link":"https:\/\/mycryptomania.com\/?p=154655","title":{"rendered":"How zkSNARKs Actually Work: R1CS, QAP, and Pairings Explained With Examples"},"content":{"rendered":"<h3>2. Zero Knowledge From Mental Model to Math: What Actually Happens Inside a\u00a0zkSNARK<\/h3>\n<p>Part 2 of a 4-part series. In <a href=\"https:\/\/medium.com\/coinmonks\/zero-knowledge-proof-build-mental-model-before-math-afac29df0461\">Part 1,<\/a> we built the mental model. In <a href=\"https:\/\/mahdidarabi.medium.com\/inside-groth16-how-blinding-simulators-and-four-secret-numbers-make-zksnarks-zero-knowledge-f438d35de06d\">Part 3<\/a>, we\u2019ll cover zero-knowledge and Groth16. Part 4 re-implements TornadoCash.<\/p>\n<p>I know this post is a little bit long, but trust me it\u2019s better to have all the math in one place rather than trying to figure out what is happening in several\u00a0posts.<\/p>\n<p>In the previous post we built the intuition. A prover convinces a verifier that they <strong>know<\/strong> something, without revealing what they\u00a0know.<\/p>\n<p>Now let\u2019s open the\u00a0hood.<\/p>\n<p>How does a piece of code, like x\u00b3 + x + 5 = 35, become a cryptographic proof that is only <strong>200 bytes<\/strong> and takes about <strong>1 millisecond<\/strong> to\u00a0verify?<\/p>\n<p>That\u2019s not metaphor. 
That\u2019s what <strong>zkSNARKs<\/strong> actually\u00a0deliver.<\/p>\n<p>To get there, we\u2019ll walk through four big\u00a0ideas:<\/p>\n<ol>\n<li>Turning computation into <strong>gates<\/strong><\/li>\n<li>Turning gates into <strong>constraints<\/strong> (R1CS)<\/li>\n<li>Turning constraints into a <strong>single polynomial equation<\/strong> (QAP)<\/li>\n<li>Using <strong>elliptic curve pairings<\/strong> to check that equation without revealing anything<\/li>\n<\/ol>\n<p>No magic. Just careful layering. And at every step, <strong>real numbers you can verify yourself (I strongly suggest doing the math yourself, so bring pen and\u00a0paper)<\/strong>.<\/p>\n<p>Let\u2019s start.<\/p>\n<h3>The Example We\u2019ll Carry Through the Whole\u00a0Post<\/h3>\n<p>We want to\u00a0prove:<\/p>\n<p>\u201cI know a value x such that x\u00b3 + x + 5 =\u00a035\u201d<\/p>\n<p>Without revealing x.<\/p>\n<p>Spoiler: x = 3. But the prover will never say that. The verifier will just get a 200-byte proof, and a \u201cyes, they know\u00a0it.\u201d<\/p>\n<p>This is a tiny toy. Real zkSNARKs prove things like \u201cI performed a valid Zcash transaction\u201d or \u201cthis ML model was evaluated correctly.\u201d But the machinery is identical. If you understand x\u00b3 + x + 5, you can understand Zcash.<\/p>\n<h3>Step 1: Turning Computation Into\u00a0Gates<\/h3>\n<p>A computer doesn\u2019t \u201cknow\u201d algebra. It knows how to add, and how to multiply. Everything else is built on\u00a0top.<\/p>\n<p>So the first step of any zkSNARK is to <strong>flatten<\/strong> the computation into a sequence of very simple statements\u200a\u2014\u200aeach one either an addition or a multiplication.<\/p>\n<h3>The Flattening Rule<\/h3>\n<p>Every statement must look\u00a0like:<\/p>\n<p>variable = variable (op) variable<\/p>\n<p>Where op is +, -, *, or \/. And each operation takes exactly <strong>two\u00a0inputs<\/strong>.<\/p>\n<p>No nested expressions. 
No \u201cdo three things at once.\u201d Just one operation per\u00a0line.<\/p>\n<h3>Why This Restriction?<\/h3>\n<p>Three reasons, each important:<\/p>\n<p><strong>1. It matches how real CPUs work.<\/strong> An ADD r1, r2, r3 instruction means &#8220;add r2 to r3, store in r1.&#8221; Flattening is essentially the assembly-language trace of your\u00a0program.<\/p>\n<p><strong>2. Every intermediate value gets named.<\/strong> If your computation has unnamed sub-expressions, you can\u2019t pin them down cryptographically. By naming every intermediate, we create variables the prover must commit\u00a0to.<\/p>\n<p><strong>3. It maps cleanly to the next step.<\/strong> Each multiplication gate will become exactly one row of R1CS. If we allowed 3-input gates, that clean 1-to-1 mapping would\u00a0break.<\/p>\n<h3>Flattening Our\u00a0Example<\/h3>\n<p>The expression x\u00b3 + x + 5 isn&#8217;t atomic. Let&#8217;s break it\u00a0down:<\/p>\n<p>sym_1 = x * x         \u2190 computes x\u00b2<br \/>sym_2 = sym_1 * x     \u2190 computes x\u00b3<br \/>out   = (sym_2 + x + 5) * 1 \u2190 the final sum<\/p>\n<p>Three lines. Each one a single\u00a0gate.<\/p>\n<p>Every intermediate value gets its own name (sym_1, sym_2). These names matter\u200a\u2014\u200athey become <strong>witness variables<\/strong> later.<\/p>\n<h3>The Circuit\u00a0View<\/h3>\n<p>Picture this as a circuit\u200a\u2014\u200aa network of gates wired together:<\/p>\n<p>x\u00b3 + x + 5 to gates of addition and multiplication<\/p>\n<p>Each gate has two inputs\u200a\u2014\u200a<strong>Left<\/strong> and <strong>Right<\/strong>\u200a\u2014\u200aand one <strong>output<\/strong>. The output of one gate can fan out to become the input of many later gates. This shape\u200a\u2014\u200agates wired by their inputs and outputs\u200a\u2014\u200ais the foundation of everything that\u00a0follows.<\/p>\n<h3>Free vs. 
Expensive Operations<\/h3>\n<p>Here\u2019s a practical fact that shapes how real ZK circuits are designed:<\/p>\n<table>\n<thead><tr><th>Operation<\/th><th>Cost<\/th><\/tr><\/thead>\n<tbody>\n<tr><td>a + b<\/td><td><strong>free<\/strong><\/td><\/tr>\n<tr><td>a \u2212 b<\/td><td><strong>free<\/strong><\/td><\/tr>\n<tr><td>5 \u00d7 a (times a constant)<\/td><td><strong>free<\/strong><\/td><\/tr>\n<tr><td>a + 7 (plus a constant)<\/td><td><strong>free<\/strong><\/td><\/tr>\n<tr><td><strong>a \u00d7 b<\/strong> (between two secrets)<\/td><td><strong>1 constraint<\/strong><\/td><\/tr>\n<tr><td><strong>a \u00d7 a<\/strong> (squaring)<\/td><td><strong>1 constraint<\/strong><\/td><\/tr>\n<\/tbody>\n<\/table>\n<p><strong>Only multiplications between two secret variables cost anything (one constraint each)<\/strong>.<\/p>\n<p>This is why <strong>ZK-friendly hash functions<\/strong> like Poseidon exist. A SHA-256 invocation takes roughly 27,000 multiplication constraints. Poseidon hashes the same amount of data in around 200 constraints\u200a\u2014\u200a<strong>over 100\u00d7 cheaper<\/strong>. When you hear \u201cthis circuit has 30,000 constraints,\u201d they always mean 30,000 multiplication gates.<\/p>\n<h3>Step 2: Constraints With Three Slots (L \u00b7 R =\u00a0O)<\/h3>\n<p>Now here\u2019s the cleanest idea in the whole\u00a0system.<\/p>\n<p>Every multiplication gate becomes one <strong>constraint<\/strong> with three\u00a0slots:<\/p>\n<p><strong><em>(Left input) \u00d7 (Right input) =\u00a0(Output)<\/em><\/strong><\/p>\n<p>We call this R1CS\u200a\u2014\u200a<strong>Rank-1 Constraint System<\/strong>. The \u201crank 1\u201d means each constraint is one linear thing times one linear thing equals one linear\u00a0thing.<\/p>\n<h3>Each Slot Is a Linear Combination<\/h3>\n<p>This is the twist that makes R1CS powerful. Each slot doesn\u2019t have to be just one variable. It can be a <strong>sum of variables<\/strong>, each multiplied by some coefficient.<\/p>\n<p>For example, a valid R1CS constraint could look\u00a0like:<\/p>\n<p>(2x + 3y + 5) \u00d7 (z) = (out)<\/p>\n<p>The left slot is a linear combo. The right slot is just z. The output is just out. 
Still one constraint.<\/p>\n<p><strong>Why allow this?<\/strong> Because additions are free. If your circuit needs to sum several numbers before multiplying, we pack those additions directly into the slot. No extra gates. No extra constraints.<\/p>\n<p>Think of each slot as a little \u201cpre-processor\u201d that can take as many witness values as it wants, weight them, sum them up\u200a\u2014\u200aand hand the result to the multiplication.<\/p>\n<h3>The Witness\u00a0Vector<\/h3>\n<p>To express these linear combinations, we put all the variables into one ordered list called the\u00a0<strong>witness<\/strong>.<\/p>\n<p>For our\u00a0example:<\/p>\n<p>w = (1, out, x, sym_1, sym_2)<\/p>\n<p>By convention:<\/p>\n<ul>\n<li>Position 0 is always 1 (the <strong>constant wire<\/strong>\u200a\u2014\u200athis lets us inject constants like 5 into linear combinations)<\/li>\n<li>Then the public variables (like out)<\/li>\n<li>Then the private variables and intermediates (x, sym_1, sym_2)<\/li>\n<\/ul>\n<p>With x =\u00a03:<\/p>\n<p>w = (1, 35, 3, 9, 27)<\/p>\n<p>This is the complete witness. <strong>The prover knows all 5 values<\/strong>. <strong>The verifier knows only the public part<\/strong>: w[0] = 1 and w[1] = out =\u00a035.<\/p>\n<h3>Anatomy of One Constraint<\/h3>\n<p>Here\u2019s the shape in full\u00a0detail:<\/p>\n<p>(a\u2080\u00b7w\u2080 + a\u2081\u00b7w\u2081 + a\u2082\u00b7w\u2082 + &#8230;) \u00d7<br \/>(b\u2080\u00b7w\u2080 + b\u2081\u00b7w\u2081 + b\u2082\u00b7w\u2082 + &#8230;) =<br \/>(c\u2080\u00b7w\u2080 + c\u2081\u00b7w\u2081 + c\u2082\u00b7w\u2082 + &#8230;)<\/p>\n<p>The coefficients (a\u2080, a\u2081,\u00a0&#8230;, b\u2080, b\u2081,\u00a0&#8230;, c\u2080, c\u2081,\u00a0&#8230;) are <strong>fixed at circuit design time<\/strong>. They&#8217;re part of the public description of the problem. 
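<\/p>\n<p>The stencil idea can be checked mechanically. Here is a tiny Python sketch (our own illustration, not from the original post) that evaluates gate 3 of the running example as three dot products against the witness and checks L \u00d7 R = O:<\/p>

```python
# Check one R1CS constraint: (a . w) * (b . w) == (c . w).
# Vectors are gate 3 of the running example: (sym_2 + x + 5) * 1 = out.

def dot(coeffs, witness):
    return sum(c * v for c, v in zip(coeffs, witness))

w = [1, 35, 3, 9, 27]   # witness: (1, out, x, sym_1, sym_2)
a = [5, 0, 1, 0, 1]     # Left slot:  5*1 + x + sym_2
b = [1, 0, 0, 0, 0]     # Right slot: the constant 1
c = [0, 1, 0, 0, 0]     # Output slot: out

left, right, output = dot(a, w), dot(b, w), dot(c, w)
print(left, right, output)      # 35 1 35
assert left * right == output   # 35 * 1 == 35: the constraint holds
```

<p>Change any witness value and the assertion fails: the stencil no longer extracts three numbers satisfying L \u00d7 R = O.<\/p>\n<p>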
The prover doesn&#8217;t choose\u00a0them.<\/p>\n<p>What the prover does choose is the witness\u00a0w.<\/p>\n<p>Think of each constraint as a <strong>stencil<\/strong>: three patterns that, when laid over the witness, extract three numbers. The constraint is \u201cthose three numbers must satisfy L \u00d7 R =\u00a0O.\u201d<\/p>\n<h3>Writing Each Gate as a Constraint<\/h3>\n<p>Let\u2019s do all three gates of our example. This is where you should slow down and trace with pen and paper\u200a\u2014\u200athe pattern is simple once you see\u00a0it.<\/p>\n<p>Costly operators are blue, and free operators are\u00a0green.<\/p>\n<p><strong>Gate 1:<\/strong> x \u00d7 x =\u00a0sym_1<\/p>\n<ul>\n<li><strong>Left slot picks x<\/strong>: a = (0, 0, 1, 0, 0)<\/li>\n<li><strong>Right slot picks x<\/strong>: b = (0, 0, 1, 0, 0)<\/li>\n<li><strong>Output slot picks sym_1<\/strong>: c = (0, 0, 0, 1, 0)<\/li>\n<\/ul>\n<p>Verify:<\/p>\n<ul>\n<li>a \u00b7 w = 0 + 0 + 1\u00b73 + 0 + 0 = <strong>3<\/strong><\/li>\n<li>b \u00b7 w = <strong>3<\/strong><\/li>\n<li>c \u00b7 w = 0 + 0 + 0 + 1\u00b79 + 0 = <strong>9<\/strong><\/li>\n<li>Check: 3 \u00d7 3 = 9\u00a0\u2713<\/li>\n<\/ul>\n<p>Notice a and b are identical here. That&#8217;s how we encode <strong>squaring<\/strong>\u200a\u2014\u200aplug the same variable into both\u00a0ports.<\/p>\n<p><strong>Gate 2:<\/strong> sym_1 \u00d7 x =\u00a0sym_2<\/p>\n<ul>\n<li>a = (0, 0, 0, 1, 0) \u2192 picks sym_1<\/li>\n<li>b = (0, 0, 1, 0, 0) \u2192 picks x<\/li>\n<li>c = (0, 0, 0, 0, 1) \u2192 picks sym_2<\/li>\n<\/ul>\n<p>Verify:<\/p>\n<ul>\n<li>a \u00b7 w = 9<\/li>\n<li>b \u00b7 w = 3<\/li>\n<li>c \u00b7 w = 27<\/li>\n<li>Check: 9 \u00d7 3 = 27\u00a0\u2713<\/li>\n<\/ul>\n<p>Here the output of gate 1 (sym_1 = w[3] = 9) becomes the left input of gate 2. 
In R1CS this is just &#8220;select w[3] in the Left slot&#8221;\u200a\u2014\u200asimple.<\/p>\n<p><strong>Gate 3:<\/strong> (sym_2 + x + 5) \u00d7 1 =\u00a0out<\/p>\n<p>How gate3 works in Left, Right, Output\u00a0system<\/p>\n<p>This gate shows off linear combinations. The left side is three things added together.<\/p>\n<ul>\n<li><strong>Left slot<\/strong>: 5\u00b7w[0] + 1\u00b7w[2] + 1\u00b7w[4], so <strong>a = (5, 0, 1, 0, 1)<\/strong><\/li>\n<li><strong>Right slot<\/strong>: just 1, so <strong>b = (1, 0, 0, 0, 0)<\/strong><\/li>\n<li><strong>Output slot<\/strong>: just out, so <strong>c = (0, 1, 0, 0, 0)<\/strong><\/li>\n<\/ul>\n<p>Verify:<\/p>\n<ul>\n<li>a \u00b7 w = 5\u00b71 + 1\u00b73 + 1\u00b727 = <strong>35<\/strong> \u2190 the sum (sym_2 + x + 5)<\/li>\n<li>b \u00b7 w = 1<\/li>\n<li>c \u00b7 w = 35<\/li>\n<li>Check: 35 \u00d7 1 = 35\u00a0\u2713<\/li>\n<\/ul>\n<p><strong>This gate uses three R1CS tricks at\u00a0once:<\/strong><\/p>\n<ul>\n<li><strong>Constant injection<\/strong> via w[0] = 1: the coefficient 5 on position 0 means &#8220;add the constant 5 to this slot&#8221;<\/li>\n<li><strong>Multi-wire linear combination<\/strong>: three variables (w[0], w[2], w[4]) contribute to one slot<\/li>\n<li><strong>Multiply by 1<\/strong>: since gate 3 is really just addition, we multiply by 1 to fit the L\u00b7R=O shape<\/li>\n<\/ul>\n<p>Here\u2019s what all three gates look like laid out together:<\/p>\n<h3>Stacking Into\u00a0Matrices<\/h3>\n<p>Now we collect all the a vectors into a matrix A, all the b vectors into B, all the c vectors into C. 
Each row is one\u00a0gate:<\/p>\n<p>w\u2080 w\u2081 w\u2082 w\u2083 w\u2084              w\u2080 w\u2081 w\u2082 w\u2083 w\u2084              w\u2080 w\u2081 w\u2082 w\u2083 w\u2084<br \/>    \u250c              \u2510             \u250c              \u2510             \u250c              \u2510<br \/>A = \u2502 0  0  1  0  0\u2502       B =   \u2502 0  0  1  0  0\u2502       C =   \u2502 0  0  0  1  0\u2502<br \/>    \u2502 0  0  0  1  0\u2502             \u2502 0  0  1  0  0\u2502             \u2502 0  0  0  0  1\u2502<br \/>    \u2502 5  0  1  0  1\u2502             \u2502 1  0  0  0  0\u2502             \u2502 0  1  0  0  0\u2502<br \/>    \u2514              \u2518             \u2514              \u2518             \u2514              \u2518<\/p>\n<p>The R1CS condition is:<\/p>\n<p><strong><em>(A\u00b7w) \u2299 (B\u00b7w) =\u00a0C\u00b7w<\/em><\/strong><\/p>\n<p>where \u2299 is componentwise multiplication. All three constraints collapsed into one matrix equation.<\/p>\n<p>Let me\u00a0verify:<\/p>\n<ul>\n<li>A\u00b7w = (3, 9, 35)<\/li>\n<li>B\u00b7w = (3, 3, 1)<\/li>\n<li>Componentwise product: (9, 27, 35)<\/li>\n<li>C\u00b7w = (9, 27, 35)\u00a0\u2713<\/li>\n<\/ul>\n<p>All three constraints hold. The witness is\u00a0valid.<\/p>\n<h3>Two Ways to Read These\u00a0Matrices<\/h3>\n<p><strong>Row-wise<\/strong>: Each row is one gate\u2019s stencil. This is how you <em>check<\/em> a\u00a0witness.<\/p>\n<p><strong>Column-wise<\/strong>: Each column is one variable\u2019s \u201cparticipation pattern across gates.\u201d The column with index 2 (the third column) of A says: \u201cvariable x shows up in the Left slot of gate 1 (coefficient 1), not in gate 2 (coefficient 0), and in gate 3 (coefficient 1).\u201d<\/p>\n<p>This column view is the key conceptual shift for the next step. 
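<\/p>\n<p>If you want to confirm the matrix equation without pen and paper, here is a short plain-Python sketch (ours, not from the original post) that checks (A\u00b7w) \u2299 (B\u00b7w) = C\u00b7w for the example circuit:<\/p>

```python
# Verify the R1CS condition: componentwise, (A.w) * (B.w) == C.w.
A = [[0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [5, 0, 1, 0, 1]]
B = [[0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [1, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 1, 0, 0, 0]]
w = [1, 35, 3, 9, 27]

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

Aw, Bw, Cw = mat_vec(A, w), mat_vec(B, w), mat_vec(C, w)
print(Aw, Bw, Cw)   # [3, 9, 35] [3, 3, 1] [9, 27, 35]
assert [x * y for x, y in zip(Aw, Bw)] == Cw   # all three gates hold at once
```

<p>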
We\u2019re about to turn <strong>each column<\/strong> into a <strong>polynomial<\/strong>.<\/p>\n<h3>Step 3: One Polynomial to Rule Them\u00a0All<\/h3>\n<p>We now have 3 constraints <strong>(one per row of the matrices A, B, and C)<\/strong>. Checking 3 constraints is no problem. But real ZK circuits have <strong>millions<\/strong> of constraints. Checking them one by one isn\u2019t succinct\u200a\u2014\u200ait\u2019s not even\u00a0close.<\/p>\n<p>We need to collapse all constraints into <strong>one equation<\/strong>.<\/p>\n<p>The tool: <strong>polynomial interpolation<\/strong>.<\/p>\n<h3>The Core\u00a0Idea<\/h3>\n<p>Pick 3 distinct points\u200a\u2014\u200asay x = 1, 2, 3. Think of each point as \u201cthe index of one gate (that is, one constraint).\u201d<\/p>\n<p><strong>(From here on we work column by column within each matrix, starting with A.)<\/strong><\/p>\n<p>For each witness variable j, we build a polynomial that encodes its participation in the L slot across all\u00a0gates:<\/p>\n<p><em>u_j(1) = coefficient of w_j in gate 1\u2019s L slot <br \/>u_j(2) = coefficient of w_j in gate 2\u2019s L slot <br \/>u_j(3) = coefficient of w_j in gate 3\u2019s L\u00a0slot<\/em><\/p>\n<p>Three points, one polynomial. 
Lagrange interpolation gives us a unique polynomial of degree \u2264 2 through any 3\u00a0points.<\/p>\n<p>For example, take matrix A and the column with index 2 (the third column):<\/p>\n<p><em>u_2(1) = 1<br \/>u_2(2) = 0<br \/>u_2(3) =\u00a01<\/em><\/p>\n<p>We do the same for the R slot (getting v_j polynomials) and the O slot <strong>(getting w_j polynomials\u200a\u2014\u200awarning, overloaded notation: <\/strong><strong>w_j the polynomial is different from <\/strong><strong>w[j] the witness\u00a0value)<\/strong>.<\/p>\n<h3>Building a Polynomial from Scratch\u200a\u2014\u200aWorked\u00a0Example<\/h3>\n<p>Let\u2019s build u_x(\u03bb) for our variable x (index 2 in the witness).<\/p>\n<p>Looking at column 2 of matrix A, x appears with coefficients:<\/p>\n<ul>\n<li>Gate 1 Left: 1<\/li>\n<li>Gate 2 Left: 0<\/li>\n<li>Gate 3 Left: 1<\/li>\n<\/ul>\n<p>So we want a polynomial u_x(\u03bb) (better written u_2(\u03bb), since \u201cx\u201d sits at index 2 of the witness array) with:<\/p>\n<p>u_2(1) = 1<br \/>u_2(2) = 0<br \/>u_2(3) = 1<\/p>\n<p><strong>Lagrange interpolation<\/strong> gives us a recipe. 
For 3 points (1, 2, 3), the basis polynomials are:<\/p>\n<p>\u2113\u2081(\u03bb) = (\u03bb \u2212 2)(\u03bb \u2212 3) \/ 2<br \/>\u2113\u2082(\u03bb) = \u2212(\u03bb \u2212 1)(\u03bb \u2212 3)<br \/>\u2113\u2083(\u03bb) = (\u03bb \u2212 1)(\u03bb \u2212 2) \/ 2<\/p>\n<p>Each \u2113\u1d62 equals 1 at its own point and 0 at the other two.\u00a0So:<\/p>\n<p>u_2(\u03bb) = 1\u00b7\u2113\u2081(\u03bb) + 0\u00b7\u2113\u2082(\u03bb) + 1\u00b7\u2113\u2083(\u03bb)<br \/>       = (\u03bb\u22122)(\u03bb\u22123)\/2 + (\u03bb\u22121)(\u03bb\u22122)\/2<br \/>       = (\u03bb\u22122)\u00b7[(\u03bb\u22123) + (\u03bb\u22121)] \/ 2<br \/>       = (\u03bb\u22122)(2\u03bb\u22124)\/2<br \/>       = (\u03bb\u22122)\u00b2<br \/>       = \u03bb\u00b2 \u2212 4\u03bb + 4<\/p>\n<p>Check:<\/p>\n<ul>\n<li>u_2(1) = 1 \u2212 4 + 4 = 1\u00a0\u2713<\/li>\n<li>u_2(2) = 4 \u2212 8 + 4 = 0\u00a0\u2713<\/li>\n<li>u_2(3) = 9 \u2212 12 + 4 = 1\u00a0\u2713<\/li>\n<\/ul>\n<p>This is the polynomial encoding \u201chow x (index 2 of witness) participates in the Left slot across our three\u00a0gates.&#8221;<\/p>\n<p>We do this for every witness variable, for all three slot types (L, R, O). That gives us a whole library of polynomials. 
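<\/p>\n<p>The interpolation recipe is easy to automate. A minimal Python sketch (ours, not from the original post; exact arithmetic via fractions) that recovers u_2(\u03bb) = \u03bb\u00b2 \u2212 4\u03bb + 4 from the three points:<\/p>

```python
# Lagrange interpolation through (1,1), (2,0), (3,1).
# Polynomials are coefficient lists, lowest degree first.
from fractions import Fraction

def mul_by_linear(p, r):
    # Multiply polynomial p by (x - r).
    out = [Fraction(0)] * (len(p) + 1)
    for k, coef in enumerate(p):
        out[k] -= r * coef
        out[k + 1] += coef
    return out

def lagrange(points):
    n = len(points)
    total = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [Fraction(1)], Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = mul_by_linear(basis, xj)   # basis gains a root at xj
                denom *= xi - xj                   # normalize so basis(xi) = 1
        for k, coef in enumerate(basis):
            total[k] += Fraction(yi) * coef / denom
    return total

u2 = lagrange([(1, 1), (2, 0), (3, 1)])
print([int(c) for c in u2])   # [4, -4, 1]  ->  4 - 4*lam + lam**2
```

<p>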
They\u2019re all computed from the <strong>circuit<\/strong>, once\u200a\u2014\u200anot per-witness.<\/p>\n<h3>The Combined Polynomials<\/h3>\n<p>Once we have all the per-variable polynomials, we build three big polynomials by summing them weighted by the\u00a0witness:<\/p>\n<p>U(\u03bb) = sum over all j of (w[j] \u00b7 u_j(\u03bb))  \u2190 the combined Left<br \/>V(\u03bb) = sum over all j of (w[j] \u00b7 v_j(\u03bb))  \u2190 the combined Right<br \/>W(\u03bb) = sum over all j of (w[j] \u00b7 w_j(\u03bb))  \u2190 the combined Output<\/p>\n<p>The magic property:<\/p>\n<p><strong><em>U(i) = the Left slot value of gate i<\/em><\/strong><em> <br \/><\/em><strong><em>V(i) = the Right slot value of gate i<\/em><\/strong><em> <br \/><\/em><strong><em>W(i) = the Output slot value of gate\u00a0i<\/em><\/strong><\/p>\n<p><strong>(Don\u2019t confuse this W with the witness vector w.)<\/strong><\/p>\n<p>(for i = 1, 2, 3\u200a\u2014\u200athe three gate\u00a0indices)<\/p>\n<p>For our witness w = (1, 35, 3, 9, 27), a bit of arithmetic gives:<\/p>\n<p>U(\u03bb) = 10\u03bb\u00b2 \u2212 24\u03bb + 17<br \/>V(\u03bb) = \u2212\u03bb\u00b2 + 3\u03bb + 1<br \/>W(\u03bb) = \u22125\u03bb\u00b2 + 33\u03bb \u2212 19<\/p>\n<h3>Verify U, V, W at Every Gate\u00a0Index<\/h3>\n<p>This is where you can see the construction really works. 
Plug in \u03bb = 1, 2,\u00a03:<\/p>\n<p><strong>At \u03bb = 1 (gate\u00a01):<\/strong><\/p>\n<ul>\n<li>U(1) = 10 \u2212 24 + 17 = <strong>3<\/strong> (Left of gate 1 = x = 3\u00a0\u2713)<\/li>\n<li>V(1) = \u22121 + 3 + 1 = <strong>3<\/strong> (Right of gate 1 = x = 3\u00a0\u2713)<\/li>\n<li>W(1) = \u22125 + 33 \u2212 19 = <strong>9<\/strong> (Output of gate 1 = sym_1 = 9\u00a0\u2713)<\/li>\n<\/ul>\n<p><strong>At \u03bb = 2 (gate\u00a02):<\/strong><\/p>\n<ul>\n<li>U(2) = 40 \u2212 48 + 17 = <strong>9<\/strong> (Left of gate 2 = sym_1 = 9\u00a0\u2713)<\/li>\n<li>V(2) = \u22124 + 6 + 1 = <strong>3<\/strong> (Right of gate 2 = x = 3\u00a0\u2713)<\/li>\n<li>W(2) = \u221220 + 66 \u2212 19 = <strong>27<\/strong> (Output of gate 2 = sym_2 = 27\u00a0\u2713)<\/li>\n<\/ul>\n<p><strong>At \u03bb = 3 (gate\u00a03):<\/strong><\/p>\n<ul>\n<li>U(3) = 90 \u2212 72 + 17 = <strong>35<\/strong> (Left of gate 3 = sym_2 + x + 5 = 35\u00a0\u2713)<\/li>\n<li>V(3) = \u22129 + 9 + 1 = <strong>1<\/strong> (Right of gate 3 = 1\u00a0\u2713)<\/li>\n<li>W(3) = \u221245 + 99 \u2212 19 = <strong>35<\/strong> (Output of gate 3 = out = 35\u00a0\u2713)<\/li>\n<\/ul>\n<p>The polynomials literally encode the gate computations. Every gate\u2019s constraint becomes:<\/p>\n<p><em>U(i) \u00d7 V(i) \u2212 W(i) =\u00a00<\/em><\/p>\n<h3>The Key\u00a0Identity<\/h3>\n<p>Define the polynomial:<\/p>\n<p>P(\u03bb) = U(\u03bb) \u00b7 V(\u03bb) \u2212 W(\u03bb)<\/p>\n<p>From the checks\u00a0above:<\/p>\n<ul>\n<li>P(1) = 3\u00b73 \u2212 9 = <strong>0<\/strong><\/li>\n<li>P(2) = 9\u00b73 \u2212 27 = <strong>0<\/strong><\/li>\n<li>P(3) = 35\u00b71 \u2212 35 = <strong>0<\/strong><\/li>\n<\/ul>\n<p>P(\u03bb) vanishes at \u03bb = 1, 2, 3. Which means P(\u03bb) is <strong>divisible<\/strong> by:<\/p>\n<p>t(\u03bb) = (\u03bb \u2212 1)(\u03bb \u2212 2)(\u03bb \u2212 3)<\/p>\n<p>t(\u03bb) is called the <strong>target polynomial<\/strong>. 
It&#8217;s a public property of the circuit\u200a\u2014\u200aonce the circuit is fixed, t(\u03bb) is fixed\u00a0too.<\/p>\n<p>And the quotient:<\/p>\n<p>h(\u03bb) = P(\u03bb) \/ t(\u03bb)<\/p>\n<p>is a clean polynomial with no remainder\u200a\u2014\u200a<strong>but only if all constraints are satisfied.<\/strong><\/p>\n<p>If even one constraint fails, P(\u03bb) won\u2019t vanish at the corresponding gate index, and t(\u03bb) won\u2019t divide P(\u03bb)\u00a0cleanly.<\/p>\n<h3>The QAP: One Equation for Everything<\/h3>\n<p>So the entire R1CS collapses into this one equation:<\/p>\n<p><strong><em>U(\u03bb) \u00b7 V(\u03bb) \u2212 W(\u03bb) = h(\u03bb) \u00b7 t(\u03bb)<\/em><\/strong><\/p>\n<p>This is a <strong>QAP\u200a\u2014\u200aQuadratic Arithmetic Program<\/strong>. One equation. Any number of\u00a0gates.<\/p>\n<h3>A Sanity Check at a Random\u00a0Point<\/h3>\n<p>Let me show you that this is a real polynomial identity, not a coincidence at the three gate\u00a0points.<\/p>\n<p>At \u03bb =\u00a05:<\/p>\n<ul>\n<li>U(5) = 250 \u2212 120 + 17 = 147<\/li>\n<li>V(5) = \u221225 + 15 + 1 = \u22129<\/li>\n<li>W(5) = \u2212125 + 165 \u2212 19 = 21<\/li>\n<li>P(5) = 147\u00b7(\u22129) \u2212 21 = \u22121323 \u2212 21 = <strong>\u22121344<\/strong><\/li>\n<\/ul>\n<p>Now t(5) = 4\u00b73\u00b72 = <strong>24<\/strong>. And you can compute h(\u03bb) = \u221210\u03bb \u2212 6 (it\u2019s linear because P has degree 4 and t has degree 3, so h has degree 4 \u2212 3 = 1), so h(5) = \u221256.<\/p>\n<p>h(5) \u00b7 t(5) = \u221256 \u00b7 24 = <strong>\u22121344<\/strong>\u00a0\u2713<\/p>\n<p>Match. 
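<\/p>\n<p>You can confirm the division really is clean with a short Python sketch (our own illustration; coefficient lists are lowest degree first, values from the running example):<\/p>

```python
# Divide P(lam) = U*V - W by t(lam) = (lam-1)(lam-2)(lam-3); expect no remainder.
from fractions import Fraction

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    return [Fraction(a) - Fraction(b) for a, b in zip(p, q)]

def poly_divmod(num, den):
    # Naive polynomial long division.
    num = [Fraction(c) for c in num]
    quot = [Fraction(0)] * (len(num) - len(den) + 1)
    while len(num) >= len(den) and any(num):
        shift = len(num) - len(den)
        factor = num[-1] / den[-1]
        quot[shift] = factor
        num = poly_sub(num, [0] * shift + [factor * d for d in den])[:-1]
    return quot, num

U = [17, -24, 10]     # 10*lam**2 - 24*lam + 17
V = [1, 3, -1]        # -lam**2 + 3*lam + 1
W = [-19, 33, -5]     # -5*lam**2 + 33*lam - 19
t = [-6, 11, -6, 1]   # lam**3 - 6*lam**2 + 11*lam - 6

P = poly_sub(poly_mul(U, V), W)
h, rem = poly_divmod(P, t)
print([int(c) for c in h])       # [-6, -10]  ->  h(lam) = -10*lam - 6
assert all(c == 0 for c in rem)  # clean division: all constraints satisfied
```

<p>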
The identity holds everywhere\u200a\u2014\u200anot just at the gate points\u200a\u2014\u200abecause it\u2019s a real polynomial equation.<\/p>\n<h3>Why This\u00a0Matters<\/h3>\n<p>We\u2019ve moved from <strong>\u201ccheck 3 constraints\u201d<\/strong> to <strong>\u201ccheck whether one polynomial divides another.\u201d<\/strong><\/p>\n<p>For a 30-million-gate circuit, this is a massive win: instead of 30 million separate checks, we have one polynomial identity that captures all of\u00a0them.<\/p>\n<p>But we still have one problem: how do we check this identity <strong>without revealing the\u00a0witness<\/strong>?<\/p>\n<p>That\u2019s the last piece\u200a\u2014\u200aelliptic curves and pairings.<\/p>\n<h3>Step 4: Elliptic Curve Pairings\u200a\u2014\u200aChecking Without\u00a0Seeing<\/h3>\n<p>We now need a way for the prover to commit to values like U(\u03c4), V(\u03c4), W(\u03c4), h(\u03c4), and let the verifier check the polynomial identity\u200a\u2014\u200awithout anyone ever learning what these values\u00a0are.<\/p>\n<p>This is where elliptic curves come\u00a0in.<\/p>\n<h3>The Basic Tool: Scalar Multiplication on a\u00a0Curve<\/h3>\n<p>You\u2019ll need to know a little about elliptic curves to follow what\u2019s happening.<a href=\"https:\/\/github.com\/bitcoinbook\/bitcoinbook\/blob\/develop\/ch04_keys.adoc#public-key-cryptography\"> Mastering Bitcoin<\/a> is a good resource for the underlying concepts.<\/p>\n<p>An elliptic curve gives us a special group. 
I won\u2019t go deep into the math, but the key facts\u00a0are:<\/p>\n<ul>\n<li>You have a generator point <strong>G<\/strong><\/li>\n<li>\u201cMultiplying\u201d G by a scalar a means adding G to itself a times: [a]G = G + G + &#8230; + G<\/li>\n<li>Computing [a]G from a is <strong>easy<\/strong><\/li>\n<li>Computing a from [a]G is <strong>cryptographically hard<\/strong>\u200a\u2014\u200athis is the <strong>discrete log problem<\/strong> (the hardness assumption behind public-key signatures)<\/li>\n<\/ul>\n<p>So [a]G is a <strong>one-way commitment<\/strong> to a. You can share it without revealing a.<\/p>\n<p>And importantly, these commitments <strong>preserve addition<\/strong>:<\/p>\n<p>[a]G + [b]G = [a + b]G<\/p>\n<p>So if I\u2019ve committed to a and b, I can get a commitment to a + b for\u00a0free.<\/p>\n<h3>The Problem: We Need Multiplication, Not Just\u00a0Addition<\/h3>\n<p>The QAP identity involves U(\u03c4) \u00d7 V(\u03c4). That\u2019s a <strong>multiplication<\/strong> in the exponent. And plain elliptic curves don\u2019t let us multiply commitments.<\/p>\n<p>If I give you [a]G and [b]G, you cannot compute [a\u00b7b]G efficiently. 
That would break discrete\u00a0log.<\/p>\n<p>So how do we check a multiplicative relationship?<\/p>\n<p><strong>u\u00b7v \u2212 w ?= h\u00b7t<\/strong><\/p>\n<h3>Enter Pairings<\/h3>\n<p>A <strong>pairing<\/strong> is a special function between three\u00a0groups:<\/p>\n<p><strong><em>e: G\u2081 \u00d7 G\u2082 \u2192\u00a0G_T<\/em><\/strong><\/p>\n<p>with one magical property\u200a\u2014\u200a<strong>bilinearity<\/strong>:<\/p>\n<p><strong><em>e([a]G\u2081, [b]G\u2082) = e(G\u2081, G\u2082)^(a\u00b7b)<\/em><\/strong><\/p>\n<p>In words: feed the pairing two committed values, and it gives you a value where the <strong>product<\/strong> a\u00b7b sits in the exponent (of a third\u00a0group).<\/p>\n<p>This is exactly what we\u00a0need.<\/p>\n<h3>Using Pairings to Check Multiplication<\/h3>\n<p>Suppose the prover gives the verifier [u]\u2081, a commitment to U(\u03c4) in group G\u2081. Concretely, [u]\u2081 means [U(\u03c4)]G\u2081: the scalar multiplication of the generator point of G\u2081 by U(\u03c4). The same notation applies to the commitments that follow, like [v]\u2082.<\/p>\n<p>The prover also gives [v]\u2082 (a commitment to V(\u03c4) in group\u00a0G\u2082).<\/p>\n<p>The verifier computes e([u]\u2081, [v]\u2082), which equals e(G\u2081, G\u2082)^(u\u00b7v). Call this value\u00a0LHS.<\/p>\n<p>Separately, suppose the prover gives [w]\u2081 (a commitment to W(\u03c4) in group G\u2081) and [ht]\u2081 (a commitment to h(\u03c4)\u00b7t(\u03c4) in group\u00a0G\u2081).<\/p>\n<p>The verifier computes e([w]\u2081 + [ht]\u2081, G\u2082) = e(G\u2081, G\u2082)^(w + ht). Call this\u00a0RHS.<\/p>\n<p>If LHS == RHS, then <strong>u\u00b7v = w + ht<\/strong> in the exponent. Which means the QAP identity holds at\u00a0\u03c4.<\/p>\n<p><strong>The verifier learned nothing beyond the fact that the equation balanced.<\/strong> No witness values, no polynomial coefficients. 
Just a yes\/no\u00a0answer.<\/p>\n<h3>The Trusted Setup: Where Does \u03c4 Come\u00a0From?<\/h3>\n<p>One problem: how does the prover compute commitments at \u03c4 when \u03c4 is\u00a0secret?<\/p>\n<p><strong>Setup ceremony<\/strong>: A trusted party (or a multi-party computation) samples \u03c4 randomly. Then they\u00a0publish:<\/p>\n<p>[1]\u2081, [\u03c4]\u2081, [\u03c4\u00b2]\u2081, [\u03c4\u00b3]\u2081, &#8230;  (up to the degree needed)<\/p>\n<p>And analogously in\u00a0G\u2082.<\/p>\n<p>This collection is called the <strong>Structured Reference String\u00a0(SRS)<\/strong>.<\/p>\n<p>Then <strong>\u03c4 is destroyed<\/strong>. Nobody should ever know\u00a0it.<\/p>\n<h3>Why Destroying \u03c4\u00a0Matters<\/h3>\n<p>If \u03c4 leaked, a cheater\u00a0could:<\/p>\n<ol>\n<li>Compute t(\u03c4) directly<\/li>\n<li>Find polynomials U\u2019, V\u2019, W\u2019, h\u2019 that satisfy U\u2019(\u03c4)\u00b7V\u2019(\u03c4) \u2212 W\u2019(\u03c4) = h\u2019(\u03c4)\u00b7t(\u03c4) at that specific \u03c4 but correspond to no real witness<\/li>\n<li>Forge a valid-looking proof<\/li>\n<\/ol>\n<p>This is called the <strong>toxic waste<\/strong> problem. Real ceremonies are elaborate. The Zcash Sapling ceremony used dozens of participants, each contributing randomness. Only <strong>one<\/strong> participant needs to be honest and destroy their share for the setup to be\u00a0safe.<\/p>\n<h3>Committing to a Polynomial Without Knowing\u00a0\u03c4<\/h3>\n<p>Here\u2019s the beautiful part: even though the prover doesn\u2019t know \u03c4, they can still commit to any polynomial f(X) at\u00a0\u03c4.<\/p>\n<p>If f(X) = c\u2080 + c\u2081\u00b7X + c\u2082\u00b7X\u00b2 + c\u2083\u00b7X\u00b3, then the prover computes:<\/p>\n<p>[f(\u03c4)]\u2081 = c\u2080\u00b7[1]\u2081 + c\u2081\u00b7[\u03c4]\u2081 + c\u2082\u00b7[\u03c4\u00b2]\u2081 + c\u2083\u00b7[\u03c4\u00b3]\u2081<\/p>\n<p>This is a linear combination of SRS elements\u200a\u2014\u200ano \u03c4 knowledge required. 
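<\/p>\n<p>Here is a toy Python sketch of this trick (our own illustration: g^x mod p stands in for [x]G, and the tiny parameters offer no real security). The prover commits to f(\u03c4) using only the published powers, never seeing \u03c4 itself:<\/p>

```python
# Toy SRS commitment: commit to f(tau) from published powers of tau alone.
# g**x mod p plays the role of [x]G; real systems use elliptic curve points.
p = 101    # toy prime modulus (real fields are ~2**254)
g = 2      # generator stand-in
tau = 19   # the ceremony's secret; only g**(tau**i) is ever published

# The published SRS: [1], [tau], [tau^2], [tau^3] "in the exponent".
srs = [pow(g, pow(tau, i, p - 1), p) for i in range(4)]

def commit(coeffs):
    # Linear combination of SRS elements: product of srs[i]**c_i mod p.
    acc = 1
    for c, s in zip(coeffs, srs):
        acc = acc * pow(s, c, p) % p
    return acc

f = [5, 0, 1, 1]        # f(X) = 5 + X^2 + X^3 (arbitrary example polynomial)
commitment = commit(f)  # computed WITHOUT knowing tau

# Sanity check using the secret: exponents reduce mod p - 1 (Fermat).
f_tau = (5 + tau**2 + tau**3) % (p - 1)
assert commitment == pow(g, f_tau, p)
print(commitment)
```

<p>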
The result is a single group element encoding f(\u03c4) in the exponent.<\/p>\n<p>The prover\u2019s coefficients c\u2080, c\u2081,\u00a0\u2026 come from the witness via the column polynomial construction from Step\u00a03.<\/p>\n<h3>Schwartz-Zippel: Why Checking at One Point Is\u00a0Enough<\/h3>\n<p>There\u2019s a subtle worry. The verifier only checks the QAP identity at <strong>one point<\/strong> (\u03c4). How do they know the identity truly holds as polynomials\u200a\u2014\u200anot just at that one lucky\u00a0point?<\/p>\n<p><strong>Schwartz-Zippel lemma<\/strong>: If two polynomials of degree \u2264 d are different, they disagree at <strong>almost every point<\/strong>. Specifically, they agree at most at d points out of the whole\u00a0field.<\/p>\n<p>So if a cheating prover tries to make the identity hold at \u03c4 without actually being a real polynomial identity, their success probability is at most d\/p, where p is the field\u00a0size.<\/p>\n<p>With <strong>p<\/strong> around 2\u00b2\u2075\u2074 and <strong>d<\/strong> around 10\u2076 for real circuits, this is about 2\u207b\u00b2\u00b3\u2074. Essentially zero.<\/p>\n<p><strong>One random check is enough.<\/strong> That\u2019s the miracle that makes zkSNARKs succinct.<\/p>\n<h3>Putting It All\u00a0Together<\/h3>\n<p>Here\u2019s the entire pipeline, end to\u00a0end:<\/p>\n<p>Every modern zkSNARK\u200a\u2014\u200aGroth16, PLONK, Marlin\u200a\u2014\u200afollows some version of this pipeline. 
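<\/p>\n<p>The heart of the verification fits in a few lines. A toy Python sketch (ours; a small prime stands in for the real ~2\u00b2\u2075\u2074 field) that checks the QAP identity of our example at one random field element, exactly in the spirit of Schwartz-Zippel:<\/p>

```python
# Check U*V - W == h*t at a single random point of a prime field.
# Coefficient lists are lowest degree first, from the running example.
import random

P_FIELD = 2**31 - 1   # toy prime (real SNARK fields are ~2**254)

def eval_poly(coeffs, x, p):
    acc = 0
    for c in reversed(coeffs):   # Horner's rule
        acc = (acc * x + c) % p
    return acc

U = [17, -24, 10]     # 10*lam**2 - 24*lam + 17
V = [1, 3, -1]
W = [-19, 33, -5]
h = [-6, -10]         # the quotient -10*lam - 6
t = [-6, 11, -6, 1]   # (lam-1)(lam-2)(lam-3)

r = random.randrange(P_FIELD)   # plays the role of the secret tau
lhs = (eval_poly(U, r, P_FIELD) * eval_poly(V, r, P_FIELD)
       - eval_poly(W, r, P_FIELD)) % P_FIELD
rhs = eval_poly(h, r, P_FIELD) * eval_poly(t, r, P_FIELD) % P_FIELD
assert lhs == rhs   # a fake h slips through with probability at most 4/P_FIELD
```

<p>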
The details differ (different arithmetizations, different commitment schemes, different setup requirements), but the skeleton is the\u00a0same.<\/p>\n<h3>What We\u2019ve\u00a0Covered<\/h3>\n<p>Let\u2019s take a breath and zoom\u00a0out.<\/p>\n<p>You\u2019ve now seen the entire machinery that makes zkSNARKs <strong>succinct<\/strong>\u200a\u2014\u200asmall proofs, fast verification, for arbitrarily large computations:<\/p>\n<ol>\n<li><strong>Code \u2192 Circuit<\/strong>: flatten computation into addition and multiplication gates<\/li>\n<li><strong>Circuit \u2192 R1CS<\/strong>: each multiplication gate becomes one constraint with L \u00b7 R = O shape<\/li>\n<li><strong>R1CS \u2192 QAP<\/strong>: interpolate all constraints into one polynomial identity<\/li>\n<li><strong>QAP \u2192 Pairings<\/strong>: check the identity at a secret point \u03c4 using elliptic curve pairings, without revealing anything<\/li>\n<\/ol>\n<p>Four translations. Each one compressing the computation a little further. Ending in a 200-byte proof that verifies in milliseconds.<\/p>\n<h3>But We\u2019re Not Done\u00a0Yet<\/h3>\n<p>Here\u2019s the thing. Everything we built so far makes the proof <strong>succinct<\/strong>. But there\u2019s one property we promised and haven\u2019t delivered: <strong>zero-knowledge<\/strong>.<\/p>\n<p>Think about it. If the prover commits to U(\u03c4), V(\u03c4), W(\u03c4), h(\u03c4) and the verifier checks the pairing equation, the proof is succinct\u200a\u2014\u200abut it\u2019s also <strong>deterministic<\/strong>. Run the prover twice on the same witness, get the same\u00a0proof.<\/p>\n<p>That\u2019s a problem. A verifier who <strong>guesses<\/strong> the witness could just run the prover themselves and compare. If the proofs match, their guess was right. 
Information about the witness leaks through the proof\u2019s fingerprint.<\/p>\n<p><strong>So we need one more layer.<\/strong> We need the proof to look uniformly random\u200a\u2014\u200ato truly reveal nothing beyond \u201cthe statement is\u00a0true.\u201d<\/p>\n<h3>What\u2019s Next<\/h3>\n<p>In the next post, we\u2019ll finish the job. We\u2019ll\u00a0see:<\/p>\n<ul>\n<li><strong>Blinding<\/strong>: how random noise gets mixed into the commitments to scramble every\u00a0proof<\/li>\n<li><strong>The simulator argument<\/strong>: how zero-knowledge is <em>formally proven<\/em>, not just\u00a0asserted<\/li>\n<li><strong>Groth16 specifics<\/strong>: the four secrets (\u03b1, \u03b2, \u03b3, \u03b4) and the distinct attack each one defends\u00a0against<\/li>\n<li><strong>The trusted setup ceremony<\/strong>: toxic waste, multi-party computation, and destroyed laptops<\/li>\n<li><strong>Real-world numbers<\/strong>: Zcash, TornadoCash, zkEVMs\u200a\u2014\u200ahow all this machinery performs in production<\/li>\n<\/ul>\n<p>If this post gave you the <strong>succinctness<\/strong> of zkSNARKs, the next one gives you the <strong>zero-knowledge<\/strong>\u200a\u2014\u200athe other half of the name, and arguably the more beautiful half.<\/p>\n<p>Then in Part 4, we\u2019ll take everything and <strong>re-implement TornadoCash<\/strong> from\u00a0scratch.<\/p>\n<p>Until then\u200a\u2014\u200ahomework suggestion: pick a different tiny circuit, like (x + 2) \u00d7 (y + 3) = 21, and do the entire flattening \u2192 R1CS \u2192 QAP pipeline by hand. Find the witness. Build U, V, W. Check that UV \u2212 W vanishes at the gate\u00a0points.<\/p>\n<p>It\u2019s the best way to lock in what you\u2019ve learned. Once you\u2019ve watched the polynomials cancel with your own pen, this stops being abstract and becomes something you\u00a0own.<\/p>\n<p><strong>If you enjoyed this, hit \ud83d\udc4f and follow. 
Part 3 on zero-knowledge is coming\u00a0soon.<\/strong><\/p>\n<p><a href=\"https:\/\/medium.com\/coinmonks\/how-zksnarks-actually-work-r1cs-qap-and-pairings-explained-with-examples-ce7b102741e4\">How zkSNARKs Actually Work: R1CS, QAP, and Pairings Explained With Examples<\/a> was originally published in <a href=\"https:\/\/medium.com\/coinmonks\">Coinmonks<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>","protected":false},"excerpt":{"rendered":"<p>2. Zero Knowledge From Mental Model to Math: What Actually Happens Inside a\u00a0zkSNARK Part 2 of a 4-part series. In Part 1, we built the mental model. In Part 3, we\u2019ll cover zero-knowledge and Groth16. Part 4 re-implements TornadoCash. I know this post is a little bit long, but trust me it\u2019s better to have [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":154656,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-154655","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-interesting"],"_links":{"self":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/154655"}],"collection":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=154655"}],"version-history":[{"count":0,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/posts\/154655\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=\/wp\/v2\/media\/154656"}],"wp:attachment":[{"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=154655"}],"wp:term":[{"taxonomy":"category
","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=154655"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mycryptomania.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=154655"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}