Blobs: A Comprehensive Analysis of EIP-4844 and Its Impact on Ethereum’s Scalability and Rollup Economics

Research Report: EIP-4844 and the Dawn of Proto-Danksharding on Ethereum

Abstract

Ethereum’s journey towards achieving global-scale transaction throughput has been profoundly shaped by its inherent scalability challenges. The foundational design of its Layer 1 (L1) blockchain prioritises decentralisation and security, leading to limitations in processing capacity and higher transaction costs, particularly as network demand surges. These constraints have historically impeded Ethereum’s ability to onboard a truly expansive user base and accommodate burgeoning decentralised applications. In response, Layer 2 (L2) scaling solutions, most notably rollups, have emerged as vital components of Ethereum’s scaling roadmap, processing transactions off-chain and batching them for efficient settlement on the mainnet. However, even these L2s face significant operational costs, largely driven by the expense of posting transaction data back to Ethereum’s L1.

Ethereum Improvement Proposal 4844 (EIP-4844), formally known as Proto-Danksharding and implemented as part of the Dencun upgrade, represents a seminal advancement in addressing these L2 data availability and cost bottlenecks. This EIP introduces a novel data structure termed ‘blobs’ (Binary Large Objects), specifically engineered to provide a temporary, highly efficient, and cost-effective channel for data storage on the Ethereum network. This comprehensive report undertakes an in-depth exploration of EIP-4844, meticulously detailing the technical architecture and specifications of blobs, their role in facilitating transient and economical data availability for L2 rollups, and their substantial impact on the economic viability of L2 transactions. Furthermore, the paper elucidates the critical role of blobs as a foundational stepping stone towards the eventual implementation of full Danksharding, positioning EIP-4844 as an indispensable precursor in Ethereum’s broader strategic trajectory for data availability scaling.

1. Introduction: The Imperative for Ethereum’s Scalability

Ethereum, since its inception, has cemented its position as the preeminent smart contract platform, fostering an unprecedented ecosystem of decentralised applications (dApps), decentralised finance (DeFi) protocols, and non-fungible tokens (NFTs). However, its success has paradoxically highlighted its fundamental architectural limitations, particularly concerning scalability. The Ethereum blockchain, like many first-generation public blockchains, operates under the constraints of the ‘blockchain trilemma,’ a concept positing that a decentralised system can only simultaneously achieve two out of three desirable properties: decentralisation, security, and scalability. Ethereum has historically prioritised the former two, leading to a bottleneck in transaction throughput (transactions per second, TPS) and consequently, elevated transaction fees (gas costs) during periods of high network congestion.

Historically, the network’s capacity has been limited by the block gas limit, dictating the maximum amount of computational work a block can contain. When demand for block space outstrips supply, gas prices surge, making transactions prohibitively expensive for many users and use cases. This challenge is particularly acute for Layer 2 scaling solutions, which aim to alleviate L1 congestion by processing transactions off-chain. While L2s, especially rollups, significantly boost transaction capacity, they fundamentally rely on Ethereum’s L1 for security and data availability. Specifically, rollups must publish a compressed representation of their batched transactions, including state changes, to the Ethereum mainnet. This data, traditionally stored as calldata, is permanently recorded on the blockchain, incurring substantial gas costs that form a significant portion of an L2 rollup’s operational expenditure.

The persistent high cost of calldata for rollups has served as a primary barrier to their widespread adoption and the full realisation of their scaling potential. Recognising this critical bottleneck, the Ethereum core development community has systematically pursued a multi-pronged scaling roadmap. EIP-4844, also known as Proto-Danksharding, represents a pivotal milestone in this roadmap. It introduces a novel mechanism to significantly reduce the cost of data availability for rollups, thereby enhancing their economic efficiency and enabling a substantial increase in transaction throughput across the Ethereum ecosystem. This proposal is not merely an incremental upgrade; it is a strategic precursor, laying the cryptographic and architectural groundwork for the eventual implementation of full Danksharding, Ethereum’s ultimate vision for data availability scaling.

2. Technical Overview of EIP-4844: Introducing the Blob

EIP-4844 introduces a paradigm shift in how data, particularly that emanating from Layer 2 rollups, is handled and stored on the Ethereum network. At the heart of this proposal lies the concept of a ‘blob’, a specialised data structure designed for transient and cost-optimised data storage.

2.1. Introduction to Blobs: Structure and Lifespan

Blobs, an abbreviation for Binary Large Objects, are large, contiguous segments of data specifically designed to be attached to blocks on the Ethereum blockchain. Unlike traditional calldata, which forms an intrinsic part of the permanent chain history and is accessible by the Ethereum Virtual Machine (EVM), blobs are treated differently. Each blob is structured as a collection of 4,096 field elements, each 32 bytes in size, giving a total of 128 KiB (kibibytes, i.e. 131,072 bytes) per blob. This structure is carefully chosen to align with the mathematical requirements of the underlying cryptographic scheme, discussed in subsequent sections.

A defining characteristic of blobs is their ephemeral nature. Blobs are temporarily stored by Ethereum’s consensus layer nodes for a limited duration, specifically approximately 18 days (or around 4096 epochs, assuming 6.4 minutes per epoch). After this retention period, the blob data is pruned, meaning it is no longer readily available on the network for historical queries by ordinary nodes. This contrasts sharply with calldata, which is permanently stored on the Ethereum blockchain, contributing to the chain’s ever-growing historical state and increasing the storage burden on full nodes. The temporary storage model for blobs is a deliberate design choice aimed at significantly reducing the storage overhead for the main chain while still providing sufficient time for Layer 2 rollups to settle their transactions and for network participants to verify data availability and resolve any potential disputes (e.g., fraud proofs for optimistic rollups).
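
As a quick sanity check, the figures above follow directly from the protocol constants; the short sketch below recomputes the blob size and the retention window (4,096 epochs of 32 twelve-second slots).

```python
# Back-of-the-envelope check of the blob constants discussed above.
FIELD_ELEMENTS_PER_BLOB = 4096   # field elements per blob
BYTES_PER_FIELD_ELEMENT = 32     # bytes per field element
SLOTS_PER_EPOCH = 32             # beacon chain slots per epoch
SECONDS_PER_SLOT = 12            # seconds per slot
BLOB_RETENTION_EPOCHS = 4096     # minimum epochs consensus nodes keep blob sidecars

blob_size_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
retention_seconds = BLOB_RETENTION_EPOCHS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT

print(f"Blob size:        {blob_size_bytes} bytes = {blob_size_bytes // 1024} KiB")
print(f"Retention window: {retention_seconds / 86400:.1f} days "
      f"({BLOB_RETENTION_EPOCHS} epochs of {SLOTS_PER_EPOCH * SECONDS_PER_SLOT / 60:.1f} minutes each)")
# -> 131072 bytes = 128 KiB; roughly 18.2 days
```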

2.2. Blob Transactions: A New Transaction Type and Cryptographic Guarantees

To accommodate blobs, EIP-4844 introduces a new transaction type, formally referred to as a ‘Type-3 transaction’ or ‘blob transaction.’ This new transaction type is distinct from previous transaction formats (e.g., legacy, EIP-2930, EIP-1559) in that it includes a reference to one or more blobs. Crucially, while a blob transaction commits to the data contained within its associated blob, the Ethereum Virtual Machine (EVM) on the execution layer does not have direct access to the blob’s raw data. This architectural separation is fundamental to achieving the desired scalability and efficiency.
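
To make the shape of this new transaction type concrete, the sketch below models its payload as a simplified Python dataclass. It reflects the fields defined in EIP-4844 but omits the signature fields and the network wrapper that carries the blobs, commitments, and proofs themselves; it is an illustration, not a wire-format implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BlobTransaction:
    """Simplified view of an EIP-4844 (type 0x03) transaction payload."""
    chain_id: int
    nonce: int
    max_priority_fee_per_gas: int       # execution-gas tip (wei)
    max_fee_per_gas: int                # execution-gas fee cap (wei)
    gas_limit: int
    to: bytes                           # 20-byte address; blob txs cannot create contracts
    value: int
    data: bytes
    access_list: List[Tuple[bytes, List[bytes]]] = field(default_factory=list)
    max_fee_per_blob_gas: int = 0       # fee cap for the separate blob-gas market (wei)
    blob_versioned_hashes: List[bytes] = field(default_factory=list)  # one per attached blob

    def blob_gas_used(self, gas_per_blob: int = 2**17) -> int:
        # Each blob consumes a fixed 131,072 units of blob gas under EIP-4844.
        return gas_per_blob * len(self.blob_versioned_hashes)
```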

The integrity and authenticity of blob data are ensured through a KZG (Kate-Zaverucha-Goldberg) polynomial commitment scheme, the same cryptographic primitive that underpins many zero-knowledge proof systems. A KZG commitment is a concise cryptographic commitment to a polynomial. In the context of EIP-4844, the blob data is encoded as a polynomial, and a short KZG commitment to this polynomial accompanies the blob transaction. This commitment binds the transaction to its blob data and allows succinct proofs about that data to be verified, without the execution layer ever processing or storing the blob itself.

The KZG commitment scheme offers several critical properties:

  • Binding: It is computationally infeasible to find two different polynomials that commit to the same KZG commitment. This ensures the integrity of the blob data.
  • Hiding (optional): With an added blinding term, a KZG commitment reveals nothing about the committed polynomial. This property is not relied upon for blobs, whose data is public by design.
  • Efficiency: Crucially, KZG opening proofs are constant-size and efficient to verify regardless of the size of the underlying data. This is what will allow light clients and validators to perform ‘data availability sampling’ (DAS), a cornerstone of the sharding roadmap, without downloading entire blobs.

The implementation of KZG commitments requires a ‘trusted setup’ ceremony. This ceremony generates a set of public parameters (the ‘common reference string’ or CRS) that are essential for creating and verifying KZG proofs. The Ethereum community successfully conducted this multi-party computation (MPC) ceremony, known simply as the ‘KZG Ceremony’, involving thousands of participants globally to ensure the parameters were generated in a trust-minimised manner. The security of the KZG scheme relies on the assumption that at least one participant in this ceremony acted honestly and deleted their secret share.
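
Most Ethereum clients perform these operations via the c-kzg-4844 library. The sketch below illustrates the commit/prove/verify flow for a single blob, assuming Python bindings along the lines of the ckzg package; the function names, argument order, and trusted-setup file path are assumptions that may differ between versions, so treat it as illustrative rather than canonical.

```python
# Illustrative commit/prove/verify flow for one blob, assuming Python bindings
# of the c-kzg-4844 library roughly matching the `ckzg` package. Names,
# signatures, and the trusted-setup path are assumptions; check your version.
import ckzg  # assumed import of the c-kzg-4844 Python bindings

FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

# Parameters produced by the public KZG Ceremony (file path and arity are assumptions).
settings = ckzg.load_trusted_setup("trusted_setup.txt", 0)

# A valid blob is 4096 * 32 bytes; every 32-byte chunk must be a canonical
# field element, which all-zero bytes trivially satisfy in this toy example.
blob = bytes(FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT)

commitment = ckzg.blob_to_kzg_commitment(blob, settings)        # 48-byte G1 point
proof = ckzg.compute_blob_kzg_proof(blob, commitment, settings)

# Consensus nodes run an equivalent check for every blob attached to a block.
assert ckzg.verify_blob_kzg_proof(blob, commitment, proof, settings)
```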

2.3. Integration with Ethereum’s Layered Architecture

The effective management and processing of blobs are carefully integrated into Ethereum’s post-Merge architecture, which delineates responsibilities between the execution layer (EL) and the consensus layer (CL). This separation of concerns is paramount for maintaining network efficiency and reducing gas costs:

  • Consensus Layer Responsibility: Blobs are primarily managed by Ethereum’s consensus layer (beacon chain nodes). When a block is proposed, it can include references to blobs. The blobs themselves are gossiped across the consensus layer network as ‘sidecars’ alongside the block. Consensus layer clients are responsible for storing and serving blob data for the prescribed ~18-day period. In other words, while the blob versioned hashes are carried in the execution payload (inside the blob transaction), the actual blob data and its KZG commitments live with the consensus layer.
  • Execution Layer (EVM) Interaction: The EVM on the execution layer does not directly process or store the raw blob data. Instead, the EVM only sees a versioned_hash of each blob’s KZG commitment, exposed to contracts through the new BLOBHASH opcode, which cryptographically links the blob transaction to its associated blob. This design prevents the EVM from becoming bloated with large, temporary data, preserving its efficiency and limiting the computational overhead for smart contracts. Smart contracts can verify that a blob was committed to, but cannot read its content. This distinction is critical for gas efficiency, as permanent storage on the EVM is significantly more expensive than transient storage on the CL.
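
The versioned hash mentioned above is derived deterministically from the KZG commitment, as specified in EIP-4844: a one-byte version prefix (0x01) followed by the last 31 bytes of the commitment’s SHA-256 digest. A minimal sketch:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """Map a 48-byte KZG commitment to the 32-byte versioned hash seen by the EL."""
    assert len(commitment) == 48
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]
```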

This architectural separation allows for a significant enhancement in gas efficiency. By segregating the blob data from the main execution layer’s permanent storage, the high gas costs historically associated with permanent data storage via calldata are dramatically reduced for rollup data. Blobs introduce a dedicated, cheaper data availability channel, optimised for the specific needs of L2s which primarily require data to be available for a period, not permanently stored or processed by L1 smart contracts.

3. Blobs and the Enhancement of Data Availability

Data availability is a cornerstone of the security model for Layer 2 rollups. Without guaranteed data availability, it would be impossible for network participants, including fraud provers or verifiers, to reconstruct the rollup’s state, verify transactions, or challenge invalid state transitions. Blobs address this crucial requirement in a novel and efficient manner.

3.1. Enhancing Data Availability for Rollups: A Cost-Effective Solution

Rollups, both optimistic and zero-knowledge (ZK) variants, require that the data representing their off-chain computations and state transitions be published to the Ethereum mainnet. This data is essential for two primary reasons:

  1. State Reconstruction: Any participant (e.g., a rollup node, a block explorer, or even a user) must be able to reconstruct the rollup’s state from the published data on L1. This ensures transparency and verifiability.
  2. Dispute Resolution/Verification: For optimistic rollups, the data is needed for fraud proofs, allowing anyone to challenge an invalid state root by re-executing the batch of transactions. For ZK-rollups, the on-chain verifier checks a succinct validity proof and does not strictly require the raw transaction data; the data must nevertheless be published so that users and independent nodes can reconstruct the rollup state and verify their own balances and transactions.

Before EIP-4844, rollups primarily used calldata for this purpose. While calldata is inexpensive per byte compared to L1 storage (like SSTORE), its cumulative cost for large batches of transactions, especially during periods of high L1 congestion, was a significant operational expense for rollups. This cost directly impacted the fees users paid on L2s, undermining the very purpose of scaling solutions.

Blobs offer a substantially more cost-effective alternative. By providing a temporary and dedicated storage mechanism separate from the EVM’s permanent state, blobs drastically reduce the per-byte cost of publishing data. Rollups can now bundle large volumes of transaction data into blobs and attach them to L1 blocks, leveraging this cheaper channel for data availability. This economic advantage translates directly into lower transaction fees for users on Layer 2 networks, making L2s more accessible and economically viable for a broader range of applications and users.

The 18-day retention period for blobs is specifically tailored to the needs of rollups. For optimistic rollups, this duration provides ample time for the fraud proof window (typically 7 days) to elapse, ensuring that all necessary data for challenging an invalid state transition is available. For ZK-rollups, while the validity proof provides immediate finality, the blob data is still crucial for external parties to sync with the rollup’s state and understand the transactions that occurred.

3.2. Data Availability Sampling (DAS): The Promise of Scalable Verification

The KZG commitment scheme employed in blob transactions is not merely for data integrity; it is the cryptographic primitive that underpins Data Availability Sampling (DAS). DAS is a cornerstone technology for enabling truly massive data throughput on Ethereum in the future, particularly with full Danksharding.

In a traditional blockchain, a full node must download and verify all transaction data in every block to ensure data availability and validity. As throughput scales, this requirement becomes a significant bottleneck, demanding ever-increasing bandwidth and storage from full nodes, thereby centralising the network.

DAS, facilitated by KZG commitments, offers an elegant solution. Instead of downloading an entire blob (or later, an entire shard), network participants, including validators and light clients, can cryptographically verify that the data within a blob (or shard) is available by sampling only small, random portions of it. Here’s how it works:

  1. Polynomial Encoding: The blob’s field elements are interpreted as a polynomial; in EIP-4844 they are treated as the polynomial’s evaluations over a fixed domain, which determines the polynomial uniquely.
  2. KZG Commitment: A KZG commitment to this polynomial is generated and included in the block.
  3. Data Erasure Coding: The polynomial is often extended using Reed-Solomon erasure coding. This process adds redundancy to the data, meaning that even if a significant portion of the original data is lost or withheld, it can still be reconstructed from the remaining available samples. This is crucial for robustness against malicious data withholding.
  4. Random Sampling: Validators and light clients randomly select a few ‘points’ on the extended polynomial (i.e., small segments of the encoded blob data). They then request these specific samples from block producers or other nodes.
  5. KZG Proofs: For each requested sample, the block producer provides the sample along with a KZG proof that cryptographically verifies that this sample indeed lies on the polynomial committed to by the KZG commitment. This proof is compact and efficient to verify.
  6. Probabilistic Guarantee: If a sufficient number of randomly chosen samples, accompanied by valid KZG proofs, are received, a verifier gains a high probabilistic guarantee that the entire blob data is available on the network. If data were withheld, the probability of sampling only the available portions would rapidly diminish.
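
To make the probabilistic guarantee in step 6 concrete, the toy calculation below assumes the standard two-times Reed-Solomon extension, under which an adversary must withhold more than half of the extended data to make it unrecoverable; each independent random sample then has less than a 50% chance of landing on an available chunk, so the chance of withheld data passing every check falls off exponentially with the number of samples.

```python
# Toy estimate of the data-availability sampling guarantee, assuming a 2x
# Reed-Solomon extension: data is only unrecoverable if more than half of the
# extended samples are withheld, so each random query finds an available
# sample with probability below 0.5 in that case.

def max_escape_probability(num_samples: int) -> float:
    """Upper bound on the chance that unavailable data passes `num_samples` checks."""
    return 0.5 ** num_samples

for k in (10, 20, 30):
    print(f"{k} samples -> escape probability < {max_escape_probability(k):.2e}")
# 10 samples -> < 9.77e-04
# 20 samples -> < 9.54e-07
# 30 samples -> < 9.31e-10
```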

Once deployed, this mechanism will significantly reduce the bandwidth requirements for individual nodes: validators will no longer need to download the full content of every blob, and light clients will not need to download entire shards. This decentralises the verification process, allowing more participants with fewer resources to contribute to network security while maintaining high confidence in data availability. It is worth stressing that EIP-4844 itself does not activate DAS; consensus nodes currently download and store each blob in full. What it does is deploy the KZG commitments and blob infrastructure on which DAS will be built, providing invaluable real-world testing and experience ahead of full Danksharding.

4. Throughput Capacity and Transformative Transaction Economics

EIP-4844’s introduction of blobs has not only provided a new data channel but has also established a novel fee market, profoundly impacting Ethereum’s data throughput capabilities and the economic landscape for Layer 2 rollups.

4.1. Throughput Capacity: Expanding Ethereum’s Data Bandwidth

With EIP-4844, each Ethereum block is designed to accommodate a specific quantity of blob data. The initial target is an average of three blobs per block, with a maximum capacity of six blobs per block. Given that each blob is 128 KiB, this translates to:

  • Target throughput: 3 blobs/block * 128 KiB/blob = 384 KiB of data per block.
  • Maximum throughput: 6 blobs/block * 128 KiB/blob = 768 KiB of data per block.

Considering an average block time of 12 seconds on Ethereum, this means EIP-4844 introduces approximately:

  • Target data throughput: 384 KiB / 12 seconds ≈ 32 KiB/second.
  • Maximum data throughput: 768 KiB / 12 seconds ≈ 64 KiB/second.
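
Extending the same arithmetic over a full day of blocks (roughly 7,200 twelve-second slots) gives a sense of the total blob bandwidth the upgrade adds; the sketch below reproduces the per-second figures above and the implied daily volumes.

```python
BLOB_SIZE_KIB = 128
TARGET_BLOBS_PER_BLOCK = 3
MAX_BLOBS_PER_BLOCK = 6
SECONDS_PER_BLOCK = 12
BLOCKS_PER_DAY = 24 * 60 * 60 // SECONDS_PER_BLOCK   # 7200

for label, blobs in (("target", TARGET_BLOBS_PER_BLOCK), ("max", MAX_BLOBS_PER_BLOCK)):
    per_block_kib = blobs * BLOB_SIZE_KIB
    per_second_kib = per_block_kib / SECONDS_PER_BLOCK
    per_day_gib = per_block_kib * BLOCKS_PER_DAY / (1024 * 1024)
    print(f"{label}: {per_block_kib} KiB/block, {per_second_kib:.0f} KiB/s, ~{per_day_gib:.1f} GiB/day")
# target: 384 KiB/block, 32 KiB/s, ~2.6 GiB/day
# max:    768 KiB/block, 64 KiB/s, ~5.3 GiB/day
```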

This new, dedicated data space represents a substantial increase in Ethereum’s overall data bandwidth, specifically for rollup transaction data. Prior to EIP-4844, rollups contended with other L1 transactions for general calldata space within the existing block gas limit. While calldata could theoretically scale, its cost was directly linked to the general L1 gas market, which is prone to volatility and high prices. Blobs introduce a separate, purpose-built channel for rollup data, effectively lightening the overall data load on Ethereum’s main execution layer and making the scaling more efficient and predictable.

The capacity of three to six blobs per block is a deliberately conservative starting point. It allows the network to adapt gradually and lets developers monitor performance, security, and fee market dynamics. In the future, with full Danksharding, the number of blobs per block (the data ‘shards’) is expected to increase significantly, potentially to around 64 blobs of the same size, unlocking well over an order of magnitude more data throughput.

4.2. Impact on Transaction Economics: Dramatically Reduced Costs for Rollups

One of the most immediate and tangible benefits of EIP-4844 has been the profound impact on the transaction economics for Layer 2 rollups. The introduction of blobs and their dedicated fee market has led to a dramatic reduction in the cost for rollups to post data to Ethereum’s mainnet.

Prior to EIP-4844, the cost of posting rollup data was dictated by the L1 gas price, driven by general network demand. This meant that when L1 was congested, rollup operational costs soared, which was then passed on to end-users as higher L2 transaction fees. With the introduction of blobs, a separate, distinct fee market for blob space has been established, operating on principles similar to EIP-1559.

The Blob Fee Market Dynamics:

Similar to EIP-1559 for execution gas, the blob fee market incorporates a base fee mechanism. The blob_base_fee dynamically adjusts based on the demand for blob space in preceding blocks:

  • The protocol tracks ‘excess blob gas’: each block that uses more blob gas than the target (three blobs) adds to the excess, while under-target blocks reduce it (down to a floor of zero).
  • The blob_base_fee is an exponential function of this excess, rising by roughly 12.5% per consistently full block and falling at the same rate when blocks run below target, with a floor of one wei.
  • The entire blob_base_fee is burned, just as the EIP-1559 base_fee_per_gas is, removing it from circulation and adding deflationary pressure to the ETH supply.

This dynamic pricing mechanism ensures that blob fees adjust efficiently based on network demand for blob space, promoting optimal utilisation while preventing persistent congestion or excessively high costs. Rollups specify a max_fee_per_blob_gas to cap what they will pay for blob space; unlike execution gas, there is no separate priority fee for blob gas, so inclusion incentives come from the ordinary tip (max_priority_fee_per_gas) on the carrying transaction.
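
Concretely, EIP-4844 defines the blob base fee as an exponential function of the cumulative ‘excess blob gas’ the chain has accrued relative to the per-block target. The sketch below transcribes the spec’s integer approximation (fake_exponential) together with the published constants; it mirrors the pricing formula rather than any particular client implementation.

```python
# Blob base fee calculation as specified in EIP-4844 (integer-only arithmetic).

MIN_BLOB_BASE_FEE = 1                    # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # ~12.5% change per fully over/under-target block
GAS_PER_BLOB = 2**17                     # 131072 blob gas per blob
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB   # 393216 (3 blobs)
MAX_BLOB_GAS_PER_BLOCK = 6 * GAS_PER_BLOB      # 786432 (6 blobs)

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator), from the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def calc_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas carried into the next block: usage above target accumulates."""
    total = parent_excess + parent_blob_gas_used
    return max(total - TARGET_BLOB_GAS_PER_BLOCK, 0)

def get_blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)
```

Feeding a run of full six-blob blocks through calc_excess_blob_gas and get_blob_base_fee shows the fee climbing by roughly 12.5% per block; a run of empty blocks unwinds it at the same rate until it bottoms out at one wei.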

Quantifiable Cost Reductions:

The impact of this new fee market has been stark and immediate. Post-Dencun, the cost for rollups to publish data has plummeted. Early post-upgrade data shows the fees paid by rollups for data availability dropping by orders of magnitude. For instance, prominent rollups that had historically spent millions of dollars per month securing their data on L1 reported reductions of well over 90%, with average data costs in the early weeks sometimes falling to mere cents, or fractions of a cent, per batch. This is a transformative shift.

This dramatic reduction in data costs directly translates into substantially lower transaction fees for end-users on Layer 2 networks. For example, a typical swap or transfer on an optimistic rollup that might have cost several dollars during peak L1 congestion could now cost only a few cents. This enhanced economic viability makes Layer 2s far more attractive for everyday use, fostering wider adoption of dApps and services that were previously too expensive to operate on-chain.

The long-term implications are equally significant:

  • Increased Adoption of L2s: Lower fees encourage more users and applications to migrate to L2s, alleviating pressure on the L1.
  • New Use Cases: Economically viable microtransactions, gaming, and social applications become feasible on Ethereum’s L2 ecosystem.
  • Improved User Experience: More predictable and consistently low transaction fees provide a smoother and more reliable user experience.
  • Rollup Profitability: While user fees decrease, the operational efficiency gained from cheaper data publishing can improve the overall profitability and sustainability of rollup operators, encouraging further development and innovation within the L2 ecosystem.

5. Blobs as a Stepping Stone Towards Full Danksharding

EIP-4844 is not an endpoint but a pivotal milestone in Ethereum’s comprehensive scalability roadmap. It is intentionally designed as ‘Proto-Danksharding’ to introduce the core components and mechanisms necessary for the much grander vision of full Danksharding.

5.1. Transition to Full Danksharding: A Phased Approach

Full Danksharding represents Ethereum’s ultimate solution for data availability scaling. It aims to dramatically increase the network’s data throughput by dividing the blockchain’s data into numerous parallel streams, known as ‘shards.’ Instead of every node processing all data, nodes will only be responsible for a subset of shards, while still being able to verify the availability of data across the entire network via Data Availability Sampling.

EIP-4844 serves as the critical initial step, integrating several key components that pave the way for full Danksharding:

  1. Blob Transaction Type: The introduction of the Type-3 blob transaction format is foundational. This format can be seamlessly extended in full Danksharding to reference multiple shards or a larger number of blobs per block.
  2. KZG Commitments: The adoption and widespread deployment of KZG commitments for blobs is paramount. These commitments are the cryptographic backbone for DAS, which is essential for verifying data availability across many shards without requiring nodes to download all shard data. EIP-4844 provides real-world experience with KZG implementation and verification.
  3. Separate Blob Fee Market: Establishing a distinct fee market for blob space, separate from the execution gas market, is a crucial precursor. In full Danksharding, this will evolve into a unified, merged fee market for all data shards, where users bid for block space in a more integrated manner.
  4. Data Availability Sampling (DAS) Groundwork: Full DAS will only arrive with later upgrades; under EIP-4844 consensus nodes still download every blob in full. However, the KZG commitments, blob gossip, and client plumbing introduced now are precisely what sampling will build upon, allowing network participants to begin adapting to this new verification paradigm and providing valuable feedback for future improvements.

This phased approach allows the Ethereum ecosystem to incrementally build and test complex scaling technologies. By introducing blobs first, the network gains practical experience with sharded data structures, new cryptographic primitives (KZG), and a separate fee market without the full complexity of coordinating many parallel shards. This iterative development model significantly de-risks the ambitious undertaking of full Danksharding.

5.2. Data Availability Sampling in Full Danksharding: The Ultimate Goal

The experience gained from implementing blobs and the foundational Data Availability Sampling (DAS) mechanisms in EIP-4844 will be instrumental in the full implementation of Danksharding. In a fully sharded environment, the amount of data processed by the network will be vast, making it impossible for every full node to download and store all data from all 64 potential shards.

DAS, empowered by KZG commitments, provides the solution:

  • Scalable Verification: In full Danksharding, validators will still be responsible for verifying the overall validity of the chain. However, instead of downloading all shard data, they will perform DAS across all shards. By randomly sampling a small number of data points from each shard and verifying their KZG proofs, validators can achieve a high probabilistic guarantee that the data for all shards is indeed available. This vastly reduces the bandwidth and storage requirements for validators, enabling a greater number of nodes to participate and thus enhancing decentralisation.
  • Light Client Security: DAS also significantly enhances the security of light clients. Traditionally, light clients only download block headers and rely on full nodes for data. With DAS, light clients can independently sample data from shards, gaining stronger cryptographic assurances about data availability without relying solely on trusted third parties. This strengthens the overall security and censorship resistance of the network.
  • Proposer-Builder Separation (PBS): Full Danksharding is also intricately linked with Proposer-Builder Separation (PBS). PBS aims to decouple the role of proposing a block from building its content. This design helps mitigate centralisation risks associated with MEV (Maximal Extractable Value) and facilitates the efficient handling of vast amounts of sharded data. In the PBS model, ‘builders’ would be responsible for constructing blocks, including the large data blobs/shards, and then sending them to ‘proposers’ to be included in the chain. EIP-4844’s introduction of separate blob transactions and their fee market is a step towards this separation for data.

Essentially, EIP-4844 provides the fundamental building blocks and a robust testing ground for the ultimate data availability solution envisioned for Ethereum. It validates the cryptographic choices, refines the fee market mechanisms, and provides critical operational experience necessary for the eventual, seamless transition to a fully sharded Ethereum network capable of supporting on the order of hundreds of thousands of transactions per second across its rollup ecosystem.

6. Challenges and Considerations in the Blob Era

While EIP-4844 marks a significant leap forward for Ethereum’s scalability, its implementation introduces new dynamics and considerations that require careful management by network participants, particularly Layer 2 rollup operators.

6.1. Blob Fee Market Dynamics and Volatility

The introduction of a separate, EIP-1559-like fee market for blobs brings with it certain complexities. While designed to efficiently allocate blob space, the blob_base_fee can exhibit volatility, particularly in the initial phases post-Dencun, as network demand for blob space fluctuates. Factors influencing this volatility include:

  • Rollup Batching Strategies: The way rollups batch and submit transactions significantly impacts blob demand. If many rollups attempt to post large batches simultaneously, the blob_base_fee will rise rapidly. Conversely, if demand is low, fees will drop.
  • Emergent Use Cases: New applications or unexpected bursts of activity on L2s could lead to spikes in blob consumption.
  • Market Speculation: Although less direct than L1 gas, the blob fee market could potentially be influenced by speculative behaviour if blob space becomes a highly sought-after commodity.

Rollup operators must develop sophisticated strategies to navigate these dynamics. This might involve dynamic batch sizing, implementing internal queuing mechanisms to smooth out blob submissions, or even engaging in active bidding strategies in the blob fee market to ensure timely data publication while optimising costs. Monitoring the blob_base_fee and predicting its movements will become an important operational aspect for L2s.
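
As a concrete example of such monitoring, the hedged sketch below polls an execution client over JSON-RPC for the latest block header and reads its excessBlobGas field (added by EIP-4844), which can be fed into the get_blob_base_fee() helper from the earlier fee-market sketch to track the current blob base fee. The endpoint URL is a placeholder, and the field is only present on post-Dencun nodes.

```python
import requests

RPC_URL = "http://localhost:8545"  # placeholder; point at any post-Dencun execution client

def latest_excess_blob_gas() -> int:
    """Fetch the excessBlobGas header field of the latest block via eth_getBlockByNumber."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": ["latest", False],   # False: omit full transaction bodies
    }
    header = requests.post(RPC_URL, json=payload, timeout=10).json()["result"]
    return int(header["excessBlobGas"], 16)

# Combine with the earlier pricing sketch:
# blob_base_fee_wei = get_blob_base_fee(latest_excess_blob_gas())
```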

6.2. Rollup Strategies and Blob Utilization: Optimising for the New Landscape

Rollups must adapt their operational models to fully leverage the benefits of blobs. This involves several strategic considerations:

  • Batching Optimisation: Rollups need to re-evaluate their batching algorithms to fill blobs efficiently. Since blobs have a fixed size (128 KiB), a rollup should ideally size its batches to fill whole blobs, or combine several small batches into a single blob to maximise cost efficiency: posting a small batch that uses only a fraction of a blob’s capacity still incurs the cost of the entire blob (see the cost sketch after this list).
  • Transaction Finality vs. Cost: Rollups must balance the trade-off between transaction finality and cost. While immediate data publication to a blob offers quicker finality for the batch (as soon as the block is finalised), waiting for periods of lower blob_base_fee could yield significant cost savings. Rollups may implement different service levels, offering faster but potentially more expensive data inclusion for high-priority transactions, and slower, cheaper inclusion for others.
  • Data Compression: While blobs offer cheaper data, rollups will continue to benefit from advanced data compression techniques. The more transactions a rollup can pack into a single 128 KiB blob, the lower the per-transaction cost will be for their users.
  • Client Software Updates: Rollup clients, sequencers, and provers need to be updated to interact with the new Type-3 transactions and correctly handle KZG commitments. This includes changes to how data is retrieved, validated, and stored internally for their own operational needs.
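
As referenced in the batching point above, a simple model of how batch size, compression, and the prevailing blob base fee combine into a per-transaction data cost might look like the following; the transaction sizes and fee level in the example are hypothetical inputs, not measurements.

```python
import math

GAS_PER_BLOB = 2**17        # blob gas consumed per blob, fixed by EIP-4844
BLOB_SIZE_BYTES = 131072    # nominal blob payload, ignoring encoding overhead

def batch_data_cost(num_txs: int, avg_compressed_tx_bytes: int, blob_base_fee_wei: int):
    """Estimate blob-data cost for one batch; ignores the batch transaction's execution gas."""
    payload = num_txs * avg_compressed_tx_bytes
    blobs_needed = max(1, math.ceil(payload / BLOB_SIZE_BYTES))
    total_fee_wei = blobs_needed * GAS_PER_BLOB * blob_base_fee_wei
    return blobs_needed, total_fee_wei, total_fee_wei / num_txs

# Hypothetical example: 2,000 transactions compressing to ~60 bytes each,
# with the blob base fee at 1 gwei.
blobs, total_wei, per_tx_wei = batch_data_cost(2000, 60, 10**9)
print(blobs, total_wei / 1e18, per_tx_wei / 1e18)  # blobs used, total ETH, ETH per tx
```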

6.3. Long-term Data Retention and Archival

A critical consideration with the ephemeral nature of blobs (18-day retention) is the question of long-term data availability. While 18 days is sufficient for L2 security models (e.g., fraud proof windows), it is insufficient for historical data analysis, block explorers, or syncing full L2 nodes from scratch. The responsibility for long-term archival of blob data falls outside the core Ethereum protocol.

  • Rollup Operators: L2 rollup sequencers and operators are primarily responsible for indefinitely archiving their own transaction data, often storing it in decentralised storage solutions (e.g., Arweave, IPFS) or centralised databases. This ensures that their L2 chain can always be reconstructed and audited.
  • Data Providers/Block Explorers: Services like block explorers for L2s will need to specifically implement mechanisms to capture blob data within the 18-day window and archive it for permanent public access.
  • Decentralised Archival Networks: The rise of dedicated decentralised archival networks may play an increasingly important role in providing robust, censorship-resistant long-term storage for pruned blob data.

This decentralised approach to long-term storage aligns with Ethereum’s minimalist L1 philosophy but shifts the burden to L2s and third-party services. While it prevents L1 from ballooning in size, it necessitates the development of robust off-chain data archival solutions.

6.4. Security Considerations and Monitoring

While EIP-4844 enhances security by making data availability sampling possible, ongoing vigilance is required:

  • DoS Attacks on Blob Space: The blob_base_fee mechanism is designed to mitigate denial-of-service (DoS) attacks by making it prohibitively expensive to flood the network with useless blob data. However, continuous monitoring of blob space utilisation and fee dynamics is essential to ensure the mechanism is effective.
  • KZG Trusted Setup: The security of the KZG commitment scheme relies on the honesty of at least one participant in the multi-party trusted setup ceremony. While the ceremony was conducted with the highest levels of transparency and participation, it remains a critical assumption in the cryptographic security model.
  • Client Implementation Bugs: As with any significant protocol upgrade, the possibility of bugs in client implementations (both Ethereum core clients and L2 clients) that interact with blobs exists. Robust testing, audits, and continuous monitoring are paramount.

These considerations highlight that EIP-4844, while a massive step forward, also opens new avenues for research, development, and operational best practices within the Ethereum ecosystem. Its successful long-term integration relies on the continued collaboration and innovation of developers, researchers, and users.

7. Conclusion

EIP-4844, implemented as Proto-Danksharding, stands as a pivotal and transformative upgrade in Ethereum’s evolution towards a truly scalable and globally accessible blockchain platform. By introducing ‘blobs’ as a novel, temporary, and cost-efficient data structure, the proposal directly addresses the most significant economic bottleneck currently faced by Layer 2 rollups: the high cost of data availability on the Ethereum mainnet.

This sophisticated architectural enhancement, marked by the separation of blob data management to the consensus layer and its cryptographic integrity ensured by the robust KZG commitment scheme, has demonstrably led to a dramatic reduction in rollup operational costs. This, in turn, translates into substantially lower transaction fees for end-users on Layer 2 networks, unlocking new economic efficiencies and enabling a broader spectrum of decentralised applications and user interactions that were previously constrained by prohibitive costs.

Beyond its immediate economic impact, EIP-4844 serves as an indispensable and meticulously engineered precursor to the full implementation of Danksharding. It systematically introduces and rigorously tests core components essential for Ethereum’s ultimate data availability scaling vision, including the new transaction type, the foundational KZG-based Data Availability Sampling (DAS) primitive, and a dynamic fee market for data space. The experience and data gathered from Proto-Danksharding will be invaluable in refining the design and implementation of full Danksharding, which promises to expand Ethereum’s data throughput by orders of magnitude.

In essence, EIP-4844 is far more than a mere incremental upgrade; it is a strategic and foundational re-architecture of Ethereum’s data layer, laying the groundwork for a future where high transaction volumes are processed efficiently and economically, firmly cementing Ethereum’s position as the bedrock for decentralised innovation on a global scale. The successful deployment and ongoing optimisation of blobs represent a significant milestone in Ethereum’s unwavering commitment to scalability, decentralisation, and security.
