Written by: Li
Source: Project White
Original title: “Do You Really Know What Blockchain-Based ‘Cloud Computing’ Is?”
Cryptocurrency networks have grown rapidly in recent years, and scaling projects have been a distinctive feature of that growth, but few carry the weight of a “landmark solution”: Ethereum 2.0’s homogeneous shards, Polkadot’s heterogeneous shards, Plasma’s side chains, layer 2 solutions such as zkSync, Optimistic rollups, and StarkWare, and Cosmos’s cross-chain architecture (scaling via cross-chain).
These projects keep searching for the scaling method best suited to the blockchain structures of Ethereum and Bitcoin. Ethereum 2.0 has attracted the most attention: it abandons PoW for PoS, compresses transaction data (rollups), and shapes a sharding structure (sharding, not merely data sharding). This road is extremely long and represents the ultimate vision of the cryptocurrency network, but it is undeniably a necessary long-term path.
In essence, the advantages of the blockchain structure are obvious, and so is its ceiling. Without breaking through the ceiling imposed by the blockchain structure, it seems hard for the industry to take the next step. I believe the industry’s innovation needs to learn from many mature industries and mature technology systems.
Regarding breaking the performance ceiling, we can learn from the design of cloud computing platforms.
The blockchain’s bottleneck is all too obvious
The blockchain’s bottleneck comes from its greatest strength: consensus.
The consensus process is one in which multiple parties (node devices) compute the same data (the block). In Bitcoin, for example, one node packages a block and broadcasts it to all other nodes, each of which saves it in turn.
Even if Ethereum 2.0 switches from PoW to PoS, that only speeds up the consensus process: it shortens the time a single round of consensus takes and raises the number of transactions handled per unit of time. In the face of massive computing demand, PoS’s limits are still all too obvious.
Example diagram of a blockchain’s single-node limitation
In this way, every blockchain structure reproduces the model in the figure above: all computing tasks contend for the resources of a single computing node, many tasks squeezing through one narrow channel.
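The contention model above can be sketched in a few lines of Python (illustrative names only, not any project's code): every node repeats the same work on the same block, so adding nodes adds redundancy and security, not throughput.

```python
# Illustrative sketch (hypothetical names): in a blockchain, every node
# re-executes and stores the same block, so N nodes still deliver only
# one node's worth of throughput.
class Node:
    def __init__(self):
        self.ledger = []

    def apply_block(self, block):
        # identical work, repeated on every node
        self.ledger.append(block)

def broadcast(block, nodes):
    for node in nodes:
        node.apply_block(block)

nodes = [Node() for _ in range(5)]
broadcast({"txs": ["tx-a", "tx-b"]}, nodes)

# 5 nodes did 5 blocks' worth of work, yet the chain grew by one block.
blocks_processed_total = sum(len(n.ledger) for n in nodes)
blocks_added_to_chain = len(nodes[0].ledger)
assert blocks_processed_total == 5 and blocks_added_to_chain == 1
```

The replication is what makes the ledger trustworthy, but it is also exactly why throughput is capped at one node's capacity.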
In scenarios where concurrency is not that high, the whole confirmation process can be smoothed by raising the computing power of a single node, switching to a faster consensus algorithm, and scheduling “passage” times for the tasks competing for resources.
Unfortunately, in many high-concurrency scenarios (blockchain cannot stop at finance and other single-purpose uses), the network is bound to become congested and slow, or even unusable, and the congestion may in turn cause other problems, such as security risks.
Solving this requires enough parallelism in task processing to raise the ceiling on how many tasks the network can handle per unit of time.
If we borrow the scaling and parallelism thinking of cloud computing, how can a cryptocurrency network put it into practice?
The most basic idea cloud computing offers is pooling the resources of the devices that join the system. It is not the mere attachment of many computing devices: without pooling, the system’s output ceiling is still the ceiling of one machine, but once N computing devices are pooled, the network’s processing capacity grows N-fold.
This is exactly what cryptocurrency networks need. Every cryptocurrency network already has plenty of computing devices attached, yet its final performance is capped by the structure of the consensus layer.
Let’s look at it concretely. Traditional cloud computing platforms scale both horizontally and vertically. Horizontal scaling is parallelism: tasks are divided up and processed concurrently. Vertical scaling raises the processing capacity of a single device, which closely resembles one blockchain scaling approach: increasing the block size.
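The two scaling axes can be contrasted with a minimal sketch (the workload and worker count are illustrative): vertical scaling makes one worker faster, while horizontal scaling splits the task queue across workers and multiplies total capacity.

```python
# Minimal sketch of horizontal vs. vertical scaling.
from concurrent.futures import ThreadPoolExecutor

def handle(task):
    return task * 2  # stand-in for any per-task computation

tasks = list(range(8))

# "Vertical": a single (possibly faster) worker still runs serially.
serial_results = [handle(t) for t in tasks]

# "Horizontal": four workers process the divided queue in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(handle, tasks))

# Same answers either way; only the capacity ceiling differs.
assert serial_results == parallel_results
```

The point is that the horizontal path raises the ceiling by adding workers, which is the property the consensus bottleneck denies to a blockchain.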
An example of parallelism in cloud computing: data generation suits a parallel structure, and GPU performance is then used for fast processing
However, since the blockchain structure inside a cryptocurrency network cannot be changed, the idea of parallelization has evolved along two lines.
In this article, the Project White team takes six cryptocurrency projects (Oasis, Phala, PlatON, Dfinity, Filecoin, and IOTA) as examples to illustrate the two main parallel approaches.
(Note from Project White: the ordering reflects the split between parallelism that relies on secure hardware and parallelism that relies on improved algorithms.)
Once these cryptocurrency networks gain cloud-style parallelism, they will carry the hope of solving many of the Internet’s legacy problems.
Splitting the two mainstream parallel approaches
The projects above can be divided into two camps by how they solve the problem of scaling and parallelism.
One camp is represented by Oasis, Phala, and PlatON. They connect trusted computing hardware to the network as computing devices. This hardware offers strong computing power and security and can protect the computation and storage process, and each device (or cluster) can independently take on its own share of the processing. Parallel, secure computation thus happens outside the consensus layer, which can be summarized as independent trusted computation.
The second camp is represented by Dfinity, IOTA, and Filecoin. They develop new algorithms at the consensus layer, changing the block and transaction confirmation process so that verification runs in parallel and on-chain task throughput rises. On top of that, increasing the computing power and storage of a single computing device corresponds to cloud computing’s vertical scaling.
The specific breakdown is as follows:
Realizing a parallel network design with trusted hardware
1. First, build a high-quality consensus layer.
First of all, a cryptocurrency needs a general ledger, and the general ledger lives in the consensus layer. Oasis, Phala, and PlatON all separate the consensus layer from the computing layer: above the computing devices sits an independent consensus layer, that is, a blockchain network built from computing devices (or the cloud) that runs a high-speed consensus algorithm.
It is worth noting, though, that the layering is more explicit in Oasis and PlatON, while Phala’s layering is less obvious; its design instead spells out independent rules for off-chain computing devices.
To keep the consensus layer stable, Oasis builds this layer’s nodes through organizations and enterprises with high industry trust. The nodes communicate via the Tendermint algorithm to form the general ledger quickly.
PlatON’s nodes are likewise built by partners, and it uses CBFT, a BFT-family algorithm that improves on the efficiency of ordinary BFT.
Phala connects TEE-equipped computing nodes (called Gatekeepers) to the network; the Gatekeeper’s TEE enclave maintains the general ledger. Its consensus is the same NPoS used by Polkadot, which produces blocks quickly.
Phala’s Gatekeeper (middle part) maintains the general ledger
Outside the consensus layer, these projects move computation and storage off-chain or into layer 2, and that is where parallel computation is implemented.
2. Let the computing layer implement parallel computing.
Start with Oasis. Its computing layer is called ParaTime, and it can be seen either as an independent chain or as a cluster of runtimes. In the early days of the Oasis network, however, ParaTime was mostly deployed in the cloud, and TEE-equipped devices had not yet fully taken over as the network’s infrastructure. As the project progresses, all ParaTime nodes will gain TEE capabilities to guarantee security.
Oasis’ computing layer (right side)
Phala’s computation is done inside the TEEs of its access nodes. A pRuntime is deployed in each TEE, and each pRuntime communicates with the “consensus layer” independently (in theory), so transactions processed in different pRuntimes do not conflict with one another. This is achievable parallelism: each TEE’s pRuntime acts like a “shard”, and the more such access nodes there are, the stronger the network’s performance.
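The “each pRuntime is like a shard” idea can be sketched as follows (illustrative names only, not Phala’s actual API): workloads routed to different runtimes touch disjoint state, so they never contend and can run concurrently.

```python
# Illustrative sketch, not Phala's real pRuntime interface: each
# TEE-hosted runtime keeps its own private state, so transactions routed
# to different runtimes cannot conflict and can execute in parallel.
def run_in_tee(runtime_id, txs):
    state = {}                    # private to this runtime's enclave
    for key, value in txs:
        state[key] = value
    return runtime_id, state

# Disjoint workloads behave like shards: adding a TEE node adds capacity.
workloads = {
    "tee-1": [("alice", 10), ("bob", 20)],
    "tee-2": [("carol", 30)],
}
results = dict(run_in_tee(rid, txs) for rid, txs in workloads.items())
assert results["tee-1"] == {"alice": 10, "bob": 20}
assert results["tee-2"] == {"carol": 30}
```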
PlatON’s computation is done in a computing layer labeled layer 2. This layer holds a large number of computing devices, including customized trusted computing devices such as programmable circuits for multi-party computation. PlatON also implements privacy-preserving computation, relying on cryptographic techniques such as multi-party computation, zero-knowledge proofs, and homomorphic encryption.
Modules and layers of the PlatON network
Designing the computing layer as a network of trusted computing hardware uses a parallel computing layer to expand capacity and achieve scalability. One might think that merely migrating computation outside the consensus layer does not amount to real parallel computing.
However, the trusted computing hardware is tightly bound to the consensus layer precisely through its security guarantees. In theory, securing off-chain computation would require the general ledger, or some other off-chain mechanism, to enforce security. With trusted computing hardware, the general ledger no longer needs to provide that off-chain security protection.
For comparison, consider Ethereum 2.0: the beacon chain is the general ledger, and once shards are deployed, each shard handles tasks independently. In Oasis, Phala, and PlatON, trusted hardware takes the place of the shards’ computation.
Having examined parallelism in the computing layer, let’s turn to parallelism achieved through algorithms.
Design of parallel processing through algorithms
1. Research and develop new algorithms.
Dfinity, IOTA, and Filecoin represent this camp: they change the block confirmation process through new algorithms so that tasks can be processed in parallel and confirmation speeds up.
Note first that implementing parallelism at the algorithm level mainly means changing the rules by which the algorithm computes, which also changes its functional logic. Changing the PoW algorithm, for example, changes the logic of nonce computation, block packaging, and broadcasting.
What Dfinity changes is the consensus algorithm. Instead of every node joining consensus, a random number selects a subset of nodes to complete each round of consensus computation, which is one step toward faster verification. More central is that the selected consensus nodes confirm transactions through non-interactive BLS signatures (each node signs and returns its share independently rather than jointly), so the repeated rounds of node-to-node interaction found in BFT-style consensus are avoided, achieving a “parallel-like” acceleration.
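The non-interactive property can be illustrated with a toy sketch. This is conceptual only: real BLS threshold signatures use pairing-based cryptography, and the hashing below merely stands in for the structure, namely that each selected node signs independently and the shares combine in a single step with no back-and-forth rounds.

```python
# Conceptual stand-in for non-interactive signature aggregation.
# NOT real BLS cryptography; hashes model the shape of the protocol.
import hashlib

def sign_share(node_secret, message):
    # stand-in for a node's partial signature, produced independently
    return hashlib.sha256(f"{node_secret}:{message}".encode()).hexdigest()

def aggregate(shares):
    # stand-in for non-interactive aggregation: one combining step,
    # order-independent, no message rounds between the signers
    return hashlib.sha256("".join(sorted(shares)).encode()).hexdigest()

shares = [sign_share(s, "block-42") for s in ("node1", "node2", "node3")]
certificate = aggregate(shares)

# The aggregate does not depend on the order shares arrive in.
assert certificate == aggregate(shares[::-1])
```

Contrast this with BFT-style consensus, where nodes must exchange several rounds of votes with each other before a block is final.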
Dfinity’s consensus confirmation process; the parallel effect is the signature step on the left
IOTA changes the algorithm more thoroughly. In place of a blockchain, IOTA uses the Tangle data structure to form its general ledger. The Tangle’s defining feature is that every transaction attaches to two earlier transactions, which completely eliminates the original chain structure’s dependence on confirmation intervals. The result is an endlessly interlinked confirmation structure of transactions, which achieves the parallel effect.
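The attachment rule just described can be sketched in a few lines (the tip selection here is deliberately simplified and is not IOTA’s actual algorithm): each new transaction approves up to two unapproved earlier transactions, forming a DAG with no block interval to serialize confirmations.

```python
# Minimal Tangle sketch: a DAG where new transactions approve old ones.
def tips(dag):
    # a "tip" is a transaction nobody has approved yet
    approved = {p for meta in dag.values() for p in meta["parents"]}
    return [tx for tx in dag if tx not in approved]

def attach(dag, tx_id):
    parents = tips(dag)[:2]       # approve up to two current tips
    dag[tx_id] = {"parents": parents}

dag = {"genesis": {"parents": []}}
# tx1 and tx2 arrive concurrently; both saw genesis as the only tip.
dag["tx1"] = {"parents": ["genesis"]}
dag["tx2"] = {"parents": ["genesis"]}
attach(dag, "tx3")                # two tips now exist, so tx3 approves both
assert sorted(dag["tx3"]["parents"]) == ["tx1", "tx2"]
```

Because tx1 and tx2 attach without waiting for each other, confirmation work spreads across the DAG instead of queueing for the next block.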
Tangle algorithm’s transaction confirmation model
Filecoin’s parallel revision targets the parallel processing of storage tasks, because Filecoin’s storage stage runs heavy computation over all the stored data, an extremely long process by comparison. Parallel speed-up is therefore essential, and Filecoin currently uses the updated NSE algorithm.
Breaking down the NSE algorithm shows that data is processed window by window (a window can be understood as a unit of data) and layer by layer. Once processing completes, the data is stored and the subsequent PoSt proofs are packaged. Under NSE, the layers in the processing stage do not depend heavily on one another, so a parallel processing effect forms; this can be summarized as tuning for parallel speed.
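The window-and-layer structure can be sketched as follows. This is an illustration of the parallelism the text describes, not the real NSE construction: each window is encoded through several layers of work, and because the windows do not depend on one another, they can be sealed concurrently.

```python
# Illustrative window/layer pipeline; hashing stands in for the
# expensive per-layer encoding work of the real sealing process.
import hashlib
from concurrent.futures import ThreadPoolExecutor

LAYERS = 4  # illustrative layer count

def seal_window(window):
    encoded = window.encode()
    for layer in range(LAYERS):
        # stand-in for one layer of encoding within a single window
        encoded = hashlib.sha256(encoded + bytes([layer])).digest()
    return encoded.hex()

windows = ["window-0", "window-1", "window-2", "window-3"]
with ThreadPoolExecutor(max_workers=4) as pool:
    sealed = list(pool.map(seal_window, windows))

# Concurrent sealing yields the same result as sealing one by one.
assert sealed == [seal_window(w) for w in windows]
```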
A breakdown of Filecoin’s NSE algorithm; note the layer part on the left
2. Configure the other parts.
Once parallelism is solved algorithmically, some auxiliary functions are needed next.
IOTA’s Tangle lacks the timing rhythm of an ordinary block structure, so reaching consensus requires the help of transaction validators to confirm which transactions form the consensus.
Dfinity improves the algorithm and also pairs it with subnets, data centers, and containers (which Dfinity calls canisters). Subnets are similar to “shards”. Data centers are the underlying deployment of the Dfinity network; requiring data-center participation means the network’s baseline processing capacity is very strong. On a subnet, a container is an established independent unit of execution, analogous to a blockchain smart contract, and combining and interacting containers achieves complex functionality.
After Filecoin processes data in parallel with the NSE algorithm, it replicates the storage and packages the proofs-of-spacetime; these parts guarantee the consistency of Filecoin’s general ledger. The remaining development work relies on tools provided by the official team and the ecosystem.
What comes after cloud-style parallelism?
The six cryptocurrency projects above all, in theory, use parallelism to break through the blockchain’s performance limits. What is left for these projects to do?
The author believes it is making these capabilities available to developers through the development of network tools. What matters most for the use of cryptocurrency networks is building DApps and, more broadly, decentralized businesses.
However high the infrastructure’s performance, it is futile without developers building applications on it. Developers determine how many applications a chain generates, and the volume of applications determines the value the chain creates and contains.
Just as traditional Internet developers went from building everything from scratch to the era of cloud development, cloud computing platforms have given developers an extremely high-quality experience; later entrepreneurs, unlike their predecessors, no longer worry about capacity scaling.
Dare we ask whether today’s cryptocurrency networks can take the cloud platforms’ “service-oriented architecture” as an example and set off a wave of development? After cloud-style parallelization, cryptocurrency has only climbed out of the wellhead. Can it keep rising into the sky?