Simply Explained: Blockchain Scalability Solutions — Past, Present, and Future

And how Polkadot & Substrate fit into this ecosystem

Nicole Zhu
Jun 17, 2019

I recently gave a talk at CogX about how blockchains scale, and how Polkadot fits into a future ecosystem where many blockchains operate at scale.

In the same vein, this post covers:

  • A brief history of blockchain scaling solutions
  • Polkadot & Substrate’s role in this ecosystem
  • Polkadot & Substrate’s layer 1 and 2 scaling solutions

Note: Many better articles have previously addressed blockchain scaling issues. If this topic is foreign, I suggest reading the Crypto Canon’s scaling primers here.

Note: My views in this post do not represent those of my employer. Feedback and suggestions for improvement are welcome.

A brief history of scaling blockchains

When Bitcoin first launched in 2009, it became clear that, by design, it traded off transaction speed for decentralization and security.

Each block contained ~1 MB of transactions. It took about 10 minutes to produce each new block. And it took another 45–60 minutes before you could be sure your transaction had made it through.

The philosophy behind Bitcoin’s decentralized vision drives the tradeoffs that define proof-of-work. The 10-minute latency and small block size are enforced constraints that make it possible for a node running on a laptop in Kathmandu to have a chance of finding a block.

The goal was for anyone, fighting through network latency on old hardware, to be able to help maintain the integrity of the network.

The point was that we achieved decentralization & security by ensuring that anyone could process transactions.

This came at the cost of a hard limit on how many transactions we could ask the network to process. Bitcoin’s parameters cap it at a theoretical ~7 transactions per second (tps), and in reality it achieves about ~4 tps.
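To make that arithmetic concrete, here is a back-of-the-envelope throughput calculation. The ~250-byte average transaction size is my own illustrative assumption; real averages vary with transaction type:

```rust
/// Back-of-the-envelope throughput estimate: transactions per second
/// given a block size budget, an average transaction size, and a block
/// interval. All figures are illustrative, not protocol constants.
fn tps(block_bytes: u64, avg_tx_bytes: u64, block_interval_secs: u64) -> f64 {
    let txs_per_block = block_bytes / avg_tx_bytes;
    txs_per_block as f64 / block_interval_secs as f64
}

fn main() {
    // Bitcoin: ~1 MB blocks, ~250-byte average transaction, 600 s interval
    println!("Bitcoin ~ {:.1} tps", tps(1_000_000, 250, 600)); // ~6.7 tps
}
```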

Scaling through simple parameter tweaks

Naturally, the first family of scalability solutions revolved around the idea that we could just stuff each block with more transactions.

Bigger blocks! Smaller transaction sizes!

To do this, we could simply make blocks bigger. Or, we could make each transaction smaller.

Examples of such implementations were Litecoin, which launched from a fork of Bitcoin’s codebase with a 4x faster block time, and Bitcoin Cash, which hard-forked from Bitcoin with larger (8 MB) blocks. In practice, these efforts yielded 2–10x tps improvements.
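Plugging the commonly cited parameters into the `tps` helper from the earlier sketch shows why these tweaks top out at single-digit multiples (the numbers here are rough approximations, for illustration only):

```rust
// Continuing the main() from the previous sketch:
// Bitcoin Cash: ~8 MB blocks, same 10-minute interval -> roughly 8x
println!("Bitcoin Cash ~ {:.1} tps", tps(8_000_000, 250, 600)); // ~53 tps
// Litecoin: ~1 MB blocks, but a 2.5-minute interval -> roughly 4x
println!("Litecoin ~ {:.1} tps", tps(1_000_000, 250, 150));     // ~27 tps
```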

The fundamental issue with these scaling solutions was that each change required a hard fork, which forced the community to rally behind the new chain and ultimately led to limited adoption. There’s also a practical upper limit to how far we can tweak these dials while still ensuring decentralization.

Scaling through off-chain computation

The next round of solutions came from the realization that not all transactions are equally important.

For example, processing a land deed agreement is likely more consequential than paying a friend back for dinner.

Many transaction types, like micropayments, can be processed off-chain, with the main chain acting as a settlement layer. For example, you can process 20k transactions in a state channel or a side chain as quickly as you wish. Verifying them on-chain in bulk then takes just a single transaction [pictured below].

Transactions can be processed off-chain and then reconciled on-chain

Doing things off-chain reduces the computation and storage load on the main chain, while still giving you the benefits of on-chain reconciliation of transactions over time.
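To illustrate the netting idea behind a payment state channel, here is a toy sketch. The `Channel` struct and balances are hypothetical, and real channels like Lightning add signatures, revocation, and dispute windows on top:

```rust
/// A toy two-party payment channel: any number of transfers happen
/// off-chain, and only the final net state becomes an on-chain tx.
struct Channel {
    alice: u64, // Alice's balance inside the channel
    bob: u64,   // Bob's balance inside the channel
}

impl Channel {
    /// An off-chain transfer: instant and free, never touches the chain.
    fn pay_alice_to_bob(&mut self, amount: u64) {
        self.alice -= amount;
        self.bob += amount;
    }

    /// Settlement: the only step that becomes a single on-chain tx.
    fn settle(self) -> (u64, u64) {
        (self.alice, self.bob)
    }
}

fn main() {
    let mut channel = Channel { alice: 20_000, bob: 0 };
    for _ in 0..20_000 {
        channel.pay_alice_to_bob(1); // 20k off-chain micropayments...
    }
    let (alice, bob) = channel.settle(); // ...one on-chain settlement
    println!("settled on-chain: alice={alice}, bob={bob}");
}
```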

Examples of such implementations include the Lightning Network, Bitcoin’s off-chain transaction compute solution, which theoretically resolves more than 1M tps. There’s also Raiden, which provides payment state channels on top of Ethereum and theoretically processes more than 100M tps. And notably, there’s Plasma, which spawns child chains from the main blockchain, theoretically handling unbounded tps.

The issues with doing things off-chain were that i) you offloaded these computations to more centralized services, and ii) as a result, you lost the security guarantee of an entire ecosystem maintaining the network & adhering to an immutable security protocol.

Scaling through on-chain sharding

Additional solutions came from the insight that transactions cluster around different social communities.

For example, transactions occurring within a shipping network in Singapore, an eCommerce marketplace in Mexico, and a freelancer community in Berlin won’t often overlap.

It made sense to adopt sharding, a traditional database concept whereby, within one blockchain, different computing resources (nodes) handle different transactions in parallel.

All transactions are processed on-chain, but on different nodes

In theory, transactions are often contained in network clusters and are easy to parallelize.
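As a minimal sketch of the assignment step, suppose shard membership is decided by hashing the sender’s account. That rule is a simplification I’m assuming for illustration; production designs like Zilliqa’s are far more involved:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const NUM_SHARDS: u64 = 4;

/// Route a transaction to a shard by hashing its sender's account.
/// Each shard's transactions can then be processed in parallel,
/// so long as they don't depend on transactions in other shards.
fn shard_for(sender: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    sender.hash(&mut hasher);
    hasher.finish() % NUM_SHARDS
}

fn main() {
    for sender in ["alice", "bob", "carol"] {
        println!("{sender} -> shard {}", shard_for(sender));
    }
}
```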

Examples of such implementations include Zilliqa, which developed a complex sharding algorithm.

In practice, parallelization is very difficult. The challenges range from securely allocating transactions to nodes, to ensuring data availability among nodes, to resolving network asynchrony issues...

Consider a simple edge case: a transaction C in node C depends on a transaction A in node A and a transaction B in node B, each of which has further dependencies of its own. Compounded with the reality of network latencies, this quickly becomes difficult to resolve.

And many more solutions…

DAGs

The list of brilliant blockchain scalability solutions goes on. From blockchains that look less like chains and more like directed acyclic graphs [pictured above], to faster consensus algorithms like PoS and PoA, specifically variants of federated and delegated BFT that guarantee faster block production & finality, users now have a plethora of solutions to choose from.

The challenge going forward might be around adoption.

Aside from canonical chains like Bitcoin, Ethereum, and EOS, the majority of blockchains remain severely underutilized.

Going forward, each chain in the ecosystem will need to find product market fit, given the scaling tradeoffs they have made.

The important point here is that blockchains do scale.

And blockchain scalability is, arguably, no longer one of the biggest blockers for consumer adoption, contrary to what many pundits posit.

The future is a disparate ecosystem of blockchains, where adoption will be very tribal.

The Blockchain Scaling Trilemma diagram helps us picture what this future ecosystem might look like.

Blockchain Scaling Trilemma: image courtesy cointelegraph.com

The Trilemma posits that there is no silver-bullet solution for scaling blockchains. At best, a chain can satisfy two of the three criteria: scalability, security, and decentralization.

Each blockchain solution will always make some degree of tradeoff, informed by its intended use cases and audience.

In many ways, we’re already seeing these decisions become more formal and well understood by the community:

  • Bitcoin & Ethereum 1.0 optimize security & decentralization: as previously mentioned
  • Ethereum 2.0 optimizes scale and security: As Vitalik confirms in this talk, Eth 2.0 is about “targeting people who are building many, different small scale applications that can all talk to each other on top of Ethereum’s homogenous layer”. It is also about “ensuring lots of transactions”.
  • IPFS optimizes scale and decentralization: IPFS trades off security (or rather consistency) aspects, like transaction ordering, for faster data transmission and a fully decentralized network. This works for its particular use case of transferring files and videos.
  • EOS optimizes scale and security: Its delegated PoS means that token holders democratically elect a central team to represent their interests, which strays from pure decentralization.

In the future, each chain might continue to fall somewhere on the trilemma diagram. The best blockchains will be the ones that are highly customized for their use cases.

A myriad of blockchain scaling solutions. Image Courtesy: Masterthecrypto.com

Blockchain adoption is also fundamentally tribal.

We’re already seeing that Ethereum is doing better in the west, as it is democratic and censorship resistant, which is quite a western notion.

But if you look over in Asia, EOS is the protocol that’s getting significant traction. This is due to cultural reasons, local marketing advantages, and the basic human tendency to put our resources toward what we understand and what best reflects our cultural ideologies.

I posit that in a future where our daily needs are fulfilled by blockchain solutions, we’ll likely adopt many individual chains. These chains will also differ across major geographic regions.

Polkadot & Substrate’s role in this ecosystem

If you’ve made it this far, you might observe that scaling blockchains won’t be the main challenge in the future.

The protocols themselves have already done a good job of scaling, making trilemma tradeoffs that make sense for their particular use cases.

But at the end of the day, all of these chains will need to talk to each other.

Going forward, achieving blockchain interoperability may be the more pressing issue.

That’s where Polkadot and Substrate come in.

“Polkadot is a network that connects blockchains.”

While a myriad of solutions focus on scaling a single chain, Polkadot tries to scale an ecosystem of these single chains.

Polkadot as the connector & security guarantee between these heterogeneous blockchains

It’s a network interface that lets one chain talk to another, despite different consensus algorithms and token economies.

For example, if you own a Bitcoin UTXO, you’ll be able to spend it in the ZCash protocol. If you send a transaction in an Ethereum Dapp, say CryptoKitties, you’ll be able to trigger a chain of other actions in EOS. And if you run a private blockchain network, you’ll be able to reconcile your records with Ethereum.

Rather than replace these blockchains, Polkadot bridges these protocols to each other. This gives the end user access to an entire ecosystem of heterogeneous blockchains.

The analogy here is: if Ethereum and Bitcoin are the London and New York of this world, then Polkadot is an information highway connecting these metropolises. And Substrate is an open-source framework that lets you erect the future cities to come.

Polkadot & Substrate’s layer 1 and 2 scaling solutions

In this last part, I wanted to dive into the specifics of how Polkadot and Substrate can scale a single chain.

Interestingly, in Polkadot, scalability occurs as a side effect of figuring out interoperability.

Scalability as a side effect of interoperability

A single blockchain gains the ability to horizontally scale itself once it joins the Polkadot network.

Once a chain reaches peak capacity, it can create a new instance of itself and thus parallelize execution.

This new chain connects to the ecosystem by default, enjoying the benefits of shared security and interoperability of the network.

To the user, nothing appears to have changed. Behind the scenes, a single business is able to horizontally scale itself in minimal time, ad infinitum.

You can read more about how this works here.

Additional Scalability Optimizations

Substrate, the open source blockchain builder behind Polkadot, also provides some scaling optimizations out-of-the-box, at the individual chain level.

Consensus

Notably, Substrate supports PoS and provides a fast, hybrid consensus mechanism. GRANDPA is a near-instant block finality gadget, which just means it can quickly conclude that your transactions are final. Conversely, BABE is a fast block production engine, which just means you get new blocks very quickly. I believe BABE is currently tuned to produce a new block every 6 seconds.
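For reference, the block interval is just a constant in a Substrate runtime. The snippet below is abridged from the node template; exact names can vary between Substrate versions:

```rust
// runtime/src/lib.rs (abridged from the Substrate node template)
pub const MILLISECS_PER_BLOCK: u64 = 6_000; // target: one block every 6 s
pub const SLOT_DURATION: u64 = MILLISECS_PER_BLOCK; // one slot per block
```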

You can read more about how this works here.

Smart economics

Substrate provides default incentive structures to help you deter user behaviors that slow down the network.

The network uses transaction weights to calculate transaction fees: users pay in direct proportion to the compute resources and storage their transactions consume. This more granular fee structure discourages nonessential writes to on-chain storage and other bad habits that typically blight a blockchain.
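In FRAME, Substrate’s runtime framework, each dispatchable call declares a weight, and fees are derived from it. Here is a minimal sketch; the attribute syntax has changed across Substrate versions, and `ping`, `store_value`, and `SomeValue` are hypothetical names:

```rust
#[pallet::call]
impl<T: Config> Pallet<T> {
    /// A cheap call: a small constant weight, hence a small fee.
    #[pallet::weight(10_000)]
    pub fn ping(origin: OriginFor<T>) -> DispatchResult {
        ensure_signed(origin)?;
        Ok(())
    }

    /// A storage write: a heavier weight, hence a higher fee, which
    /// discourages nonessential writes to on-chain storage.
    #[pallet::weight(100_000)]
    pub fn store_value(origin: OriginFor<T>, value: u32) -> DispatchResult {
        ensure_signed(origin)?;
        SomeValue::<T>::put(value); // hypothetical storage item
        Ok(())
    }
}
```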

You can read more about how this works here.

Off-chain compute

Substrate provides a special off-chain workers service for heavy computations that take longer than a single block interval to run. This is useful for processing intensive computations, oracle queries, and encryption algorithms, and it generally lightens the load on-chain. This functionality is part of your blockchain’s default runtime.
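Off-chain workers are exposed as a pallet hook that runs after each block import, outside of consensus. A minimal sketch, assuming FRAME’s `Hooks` trait (the body is a placeholder):

```rust
#[pallet::hooks]
impl<T: Config> Hooks<BlockNumberFor<T>> for Pallet<T> {
    /// Runs off-chain after each imported block. It may take longer
    /// than a block interval and can make HTTP calls (e.g. to oracles);
    /// results are reported back on-chain via ordinary transactions.
    fn offchain_worker(block_number: BlockNumberFor<T>) {
        log::info!("doing heavy off-chain work for block {:?}", block_number);
    }
}
```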

You can read more about how this works here.

Upgradeable runtimes

Lastly, it’s important to note that Substrate builds metaprotocols. This means that anything launched at genesis doesn’t have to be final: you can subsequently introduce optimizations like state channel solutions, parameter tweaks, and even quantum consensus.
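Concretely, the runtime ships as a Wasm blob stored on-chain, and an upgrade is just a transaction that swaps that blob while bumping the runtime’s `spec_version`. The sketch below is abridged from the node template, with a hypothetical chain name; the exact field set varies by Substrate version:

```rust
// runtime/src/lib.rs (abridged)
pub const VERSION: RuntimeVersion = RuntimeVersion {
    spec_name: create_runtime_str!("my-chain"),
    impl_name: create_runtime_str!("my-chain"),
    authoring_version: 1,
    spec_version: 101, // bump on every forkless runtime upgrade
    impl_version: 1,
    apis: RUNTIME_API_VERSIONS,
    transaction_version: 1,
};
// The new Wasm blob is then deployed with the `system.setCode`
// extrinsic (typically gated by sudo or on-chain governance),
// upgrading the live chain without a hard fork.
```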

You can read more about how this works here (thanks to Wasm).

Liked what you read?

We’re hiring developers to build blockchains & share them with the world! Come join the developer freedom @ParityTech, and build open source for Web 3.0.

Want to do something else? Check out more roles here.

Try out Substrate, an open-source blockchain builder, here.

Have someone in mind? Share this post with your network!
