2. Surge — How will Ethereum scale?
While “Merge” focuses on optimizing the Proof-of-Stake mechanism, “Surge” focuses on scaling: speed, throughput, and transactions per second (tps), without compromising on security and decentralization.
Rollup-Centric Roadmap
Unlike other performance-oriented blockchains, where the base layer itself supports high speed and volumes, Ethereum has decided on a Rollup-Centric Roadmap. The base layer (L1) will provide security, and a type of Layer 2 (L2) called Rollup will handle most of the data and computation.
An upgrade called Dencun went live in March ’24 and implemented Proto-Danksharding, which enables L2s to post data to Ethereum more efficiently in the form of blobs. This has raised the Ethereum ecosystem’s capacity to roughly 1,000 tps. While an improvement, this is not nearly enough.
The major constraint limiting tps is that every additional transaction increases the amount of data that has to be stored on L1.
Reduce data per transaction
Given that each transaction currently uses an average of 180 bytes of space, even if the slot size were increased to 16MB, a maximum of about 7,407 tps (16,000,000 bytes ÷ 180 bytes ÷ 12-second slots) could be reached.
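The arithmetic above can be checked with a quick sketch (the 16MB slot size is hypothetical, and the 12-second slot time is taken as a given):

```python
# Back-of-envelope tps estimate (assumed values, per the text above)
SLOT_BYTES = 16_000_000   # hypothetical 16 MB slot
TX_BYTES = 180            # average bytes per transaction today
SLOT_SECONDS = 12         # Ethereum's slot time

txs_per_slot = SLOT_BYTES // TX_BYTES   # transactions that fit in one slot
tps = txs_per_slot // SLOT_SECONDS      # spread over the 12-second slot
print(txs_per_slot, tps)                # → 88888 7407
```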
The space used by each transaction must be reduced to increase capacity. Some examples of data compression techniques being considered are:
- Long sequences of zero bytes can be replaced with 2 bytes representing the length of the run of zeros.
- Multiple transactions signed by multiple users can be combined under a single aggregate signature.
- Addresses take up 20 bytes. If an address has been used before, it can be replaced with a 4-byte pointer to its earlier occurrence.
- Transaction and fee values can be represented more compactly. There can even be a dictionary of commonly used values.
Larger slots combined with data compression techniques should increase capacity to 58,000 tps.
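As an illustration of the first technique, here is a minimal sketch of zero-byte run-length encoding. The marker-plus-counter format is an assumption for illustration, not the encoding Ethereum will actually use:

```python
def compress_zeros(data: bytes) -> bytes:
    """Replace each run of zero bytes with 2 bytes: a 0x00 marker plus a run length (sketch only)."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0:
            run = 0
            while i < len(data) and data[i] == 0 and run < 255:
                run += 1
                i += 1
            out += bytes([0, run])   # 2 bytes stand in for up to 255 zeros
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def decompress_zeros(blob: bytes) -> bytes:
    """Inverse of compress_zeros: expand each (0x00, count) pair back into zeros."""
    out = bytearray()
    i = 0
    while i < len(blob):
        if blob[i] == 0:
            out += bytes(blob[i + 1])   # bytes(n) yields n zero bytes
            i += 2
        else:
            out.append(blob[i])
            i += 1
    return bytes(out)
```

A transaction containing a 40-byte run of zeros shrinks from 42 bytes to 4 under this scheme, and decompression recovers the original exactly.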
Divide data between nodes for storage
16MB is a lot of data. If nodes have to store such large amounts of data, then only large and resource-rich entities will be able to carry out validating duties, making validation centralized. The solution is for each node to store only a small part of the data while being assured that the rest is stored elsewhere and can be downloaded when required. This is possible with Data Availability Sampling (DAS).
Implementing the ethos of “Verify. Don’t trust,” a full node downloads all data and executes all transactions before accepting a block. An honest full node cannot, therefore, be fooled into accepting invalid transactions.
However, downloading every piece of data and executing every transaction is expensive. The solution is for every node to download only a small randomly selected part of the data. Anyone looking to reconstruct the full blocks can do so by querying other nodes for the data stored with them.
But if you’re only downloading a small part of the data, how can you be sure that all of it is available? Here, we’re helped by the technology called “Data Erasure Coding”. The original data is extended in such a way that if any of the original data is missing, at least 50% of the extended data will be missing. The number of samples each node downloads can then be set so that, with high probability, at least one sample will fail whenever data is being withheld.
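The probability argument can be made concrete. If withholding any data implies at least 50% of the extended data is missing, then each random sample succeeds with probability at most 1/2, and the chance that all k samples succeed anyway is at most (1/2)^k:

```python
# If any data is withheld, at least 50% of the erasure-coded data is missing,
# so each random sample is available with probability at most 1/2.
def prob_fooled(samples: int) -> float:
    """Upper bound on the chance all `samples` queries succeed despite withheld data."""
    return 0.5 ** samples

for k in (8, 16, 30):
    print(k, prob_fooled(k))   # 30 samples already drive the bound below one in a billion
```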
Another advantage of Data Erasure Coding is that you don’t need to download all the data. Once you have 50% of the data, you can reconstruct the full data.
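A toy illustration of the reconstruction property, using polynomial extension over the rationals (production systems use Reed–Solomon codes over finite fields, not this): n data values define a unique degree-(n−1) polynomial, so if we publish its evaluations at 2n points, any n of them suffice to recover everything.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique polynomial passing through `points` at x (exact rational arithmetic)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

data = [13, 7, 42]   # n = 3 original values, placed at x = 0, 1, 2
# Extend to 2n = 6 evaluation points of the same polynomial:
coded = [(x, lagrange_eval(list(enumerate(data)), x)) for x in range(6)]

# Any 3 of the 6 coded points recover the data — here using only points 3, 4, 5:
subset = coded[3:]
recovered = [lagrange_eval(subset, x) for x in range(3)]
assert recovered == data   # the original values are fully reconstructed
```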
Ethereum is currently evaluating between PeerDAS and SubnetDAS. In PeerDAS, each node listens to a small number of subnets, each broadcasting samples of one blob, and queries peers for data on other blobs. In SubnetDAS, nodes rely only on the subnet broadcasts, without querying peers. Currently, the plan is for consensus nodes to use SubnetDAS and for the rest to implement PeerDAS.
PeerDAS and SubnetDAS are implementations of 1D Sampling. Once PeerDAS is implemented and blob count is slowly increased, the next step is likely to be 2D Sampling, which does random sampling not just within blobs but also between blobs. This would further reduce the amount of data stored by each node.
L2s add speed. But they’re not trustless. What is the roadmap to make them trustless?
Most L2s today are not decentralized or trustless.
L2s can be in one of three stages of maturity:
Stage 0: The validation process may be fully centralized, but anyone should be able to run a node and sync the chain.
Stage 1: A trustless proof system must ensure that only valid transactions are accepted. The Security Council can override the proof system provided it has at least a 75% majority, and at least 26% of the Security Council (enough to block an override) must be outside the entity building the rollup.
Stage 2: A trustless proof system must ensure that only valid transactions are accepted. The Security Council can override the proof system, but only when there’s a bug in the code.
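A quick check of what those thresholds mean in practice, for a hypothetical 8-member Security Council (the council size is an assumption for illustration):

```python
import math

council_size = 8                                        # hypothetical council size
override_votes = math.ceil(0.75 * council_size)         # 75% needed to override: 6 of 8
blocking_minority = council_size - override_votes + 1   # votes that can block an override: 3
outside_members = math.ceil(0.26 * council_size)        # at least 26% outside the team: 3 of 8
# With 3 members outside the rollup builder, the builder alone can never
# assemble the 6 votes needed to override the proof system.
print(override_votes, blocking_minority, outside_members)
```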
In September 2024, Vitalik Buterin stated that he would acknowledge L2s only if they reach at least Stage 1. According to L2Beat, an analytics platform, Arbitrum, OP Mainnet, dYdX v3, and ZKSync Lite claim to have reached this milestone.
Using L2s currently doesn’t feel like using part of a unified Ethereum ecosystem
To move funds between L2s today, one must first withdraw funds to the L1, wait for the challenge period to pass (in the case of optimistic rollups), and then transfer funds to the other L2.
Operating in the Ethereum ecosystem, therefore, does not feel like one is operating in a unified space. Each rollup is its own silo.
Some upgrades being considered to make the Rollup-Centric Ecosystem more seamlessly interoperable are:
- Chain-Specific Addresses: Ethereum ecosystem addresses look the same regardless of which L2 they are on. If the address specified the name of the chain, wallets could do all the background work involved in moving funds from one chain to another. All the user would have to do is specify the address to which they want to send funds.
- Chain-Specific Payment Requests: In addition to making payments, it should be possible to request payment in specified tokens on specified chains. For example, a merchant should be able to request payment in USDC on Arbitrum.
- Light clients: One should not have to trust intermediaries to verify the chains one interacts with. Light clients should be able to do so themselves.
- Keystore wallets: Today, if you want to update the keys to your wallet, you must do so on every chain where the wallet exists. With a keystore wallet, the keys only need to exist in one place and, therefore, need to be updated only once.
- Shared Token Bridge: Instead of withdrawing from one L2 and depositing to another for every transfer, it would be more efficient to have a shared minimal rollup that keeps track of how many tokens of which type are owed by and to each L2. These balances can be updated by periodically transferring funds between L2s in bulk.
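A sketch of how a wallet might handle a chain-specific address. The `chain:address` format here is purely illustrative; the eventual standard (e.g. ERC-3770-style prefixes) may look different:

```python
def parse_chain_address(s: str) -> tuple[str, str]:
    """Split an illustrative 'chain:address' string into (chain, address)."""
    chain, sep, addr = s.partition(":")
    if not sep or not addr.startswith("0x"):
        raise ValueError(f"not a chain-specific address: {s!r}")
    return chain, addr

# A wallet could route the transfer based on the chain prefix,
# doing any cross-chain plumbing behind the scenes:
chain, addr = parse_chain_address("arbitrum:0xAbC0000000000000000000000000000000000001")
print(chain, addr)
```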
Could Plasma be a better option than Rollups?
While the progress made with Rollups is impressive, it still isn’t sufficient to support use cases like decentralized social media.
While Ethereum seems to have decided on a Rollup-Centric Roadmap, going the Plasma route is possible. Plasma is a scaling solution where an Operator publishes blocks off-chain, and only Merkle roots are published on L1. This is different from Rollups, which publish the full block on-chain.
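The key difference is how much data lands on L1: a Merkle root commits to an entire off-chain block in 32 bytes. A minimal sketch of computing one over a block's transactions (duplicating the last node on odd levels is one common convention; real implementations vary):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root; odd-sized levels duplicate the last node (one common convention)."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3"]   # stand-ins for full transactions
root = merkle_root(txs)          # only this 32-byte root would be posted on L1
```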
Progress in SNARK technology makes Plasma more viable than it was in the past.
Finally, should we “also” scale L1 itself and not just rely on L2s for scaling?
While Ethereum has decided on a path of scaling with L2s, there are risks associated with not scaling the base layer:
- Transactions on the base layer give ETH utility and raise its value. If the ETH value drops, the capital securing the base layer will drop, making both L1 and L2s less secure.
- If the base layer does not provide the needed security and the ecosystem weakens, L2s might prefer to go independent.
- There are likely to be situations when an L2 fails. Users will then need to go through the base layer to recover their funds. In that case, L1 should be able to support a high transaction load, at least temporarily.
The easiest way to scale L1 is to increase the gas limit. This risks centralization, but there’s more leeway today for increasing the gas limit, given that technologies like statelessness and history expiry make large blocks easier to verify. Further, there’s also scope for making some types of computation cheaper.
2. Surge — How will Ethereum scale? was originally published in Coinmonks on Medium.