Introduction to ZK Co-processor: How to Enhance User Experience of Blockchain Products Through Data?


Brevis, one of the AVSs in the EigenLayer ecosystem, stands out by providing a decentralized ZK coprocessor. But what exactly is a ZK coprocessor? Why is it said to genuinely improve blockchain efficiency? Aren't Rollups enough? This article gives a brief introduction.

What problem does the ZK Coprocessor aim to solve?

Use Case: Product Data Collection and Computation

Compared with Web2 products, existing Web3 products often feel incomplete: automated recommendations, loyalty programs, targeted marketing, and similar features have been standard in traditional software for years. Why are they absent from Web3 products?

For a more specific example, when a user watches a series on Netflix, the platform collects all the user's viewing history and behavioral data. By analyzing this data through algorithms, it can identify content that the user may enjoy and personalize recommendations and advertisements accordingly.

However, in the Web3 ecosystem, if a user frequently trades stablecoins on Uniswap, converting USDT to USDC, why can't the Uniswap interface automatically display this commonly used trading pair for the user?

The root of the above issue lies in "data." While Web2 products can systematically collect and compute data, like tracking user viewing history, Web3 products struggle even with simple transaction records.

Some might ask, "Aren't transaction records already on the blockchain? Why can't they be read?" The truth is that it's not so straightforward; more precisely, reading them is either costly or risky.

Issue 1: Blockchain Virtual Machine Not Suitable for Intensive Computation

Achieving the use case above requires significant computational resources. Taking Uniswap as an example, "displaying a user's most-used trading pairs on the homepage for quicker access" would involve:

  • Collecting the user's transaction records
  • Comparing how frequently each trading pair appears and selecting the most-used ones
  • Executing the result on-chain

The final step of execution is generally well-handled by smart contracts, but the challenge lies in the first two steps.

Regarding the first step: although transaction records are indeed on the chain, the virtual machine and the blockchain's historical ledger are separate things. The EVM cannot simply browse past transactions; making that history available to a contract means reading it in or storing it in contract state, which requires substantial gas. Moreover, every call would repeat those reads or storage operations, and such resource-intensive work is a poor fit for a decentralized virtual machine.

The second step is data processing. Even though this particular computation is relatively simple, running it on the EVM still demands significant resources. If the task becomes more complex, say, analyzing a user's viewing history and watch times to decide what to recommend, the cost quickly becomes prohibitive.
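
To make the second step concrete, here is a minimal sketch of that computation in plain Python, using made-up swap records (the field names and values are hypothetical). Off-chain, this is a few lines of code; expressed as EVM storage reads and loops, every record touched would cost gas.

```python
from collections import Counter

# Hypothetical swap records for one user, e.g. exported by an indexer.
# Field names and values are illustrative only.
swaps = [
    {"pair": "USDT/USDC", "amount": 1_000},
    {"pair": "ETH/USDC",  "amount": 2},
    {"pair": "USDT/USDC", "amount": 500},
    {"pair": "USDT/USDC", "amount": 2_500},
    {"pair": "ETH/USDT",  "amount": 1},
]

# Step 2: count how often each trading pair is used and pick the top ones.
pair_counts = Counter(record["pair"] for record in swaps)
most_used = pair_counts.most_common(3)

print(most_used)  # [('USDT/USDC', 3), ('ETH/USDC', 1), ('ETH/USDT', 1)]
```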

Issue 2: Trust Assumption Problem in Applying External Data

"If on-chain computation is not feasible, then let's handle it off-chain!" This is a reasonable approach adopted by services like Etherscan, Dune, and DeBank, which can resolve many issues.

However, if the off-chain computation results are related to significant Total Value Locked (TVL) or user interests, the trust assumption risks associated with the outcomes increase.

Take airdrops, for example. Even if the eligibility criteria are public, the team still runs the selection process itself, and although the results are eventually published, a trust assumption remains: how do we know the team hasn't manipulated the selection? It's no surprise that airdrops frequently spark controversy and grievances in the community. And when the data is simply wrong, the consequences can be severe.

Hence, off-chain computation may not be viable in many application scenarios.

The Dilemma of Web3 Data

This creates a catch-22: fetching and computing over the data directly on-chain is prohibitively expensive, while relying on off-chain computation results introduces trust assumptions. It's no wonder that native services built on Web3 data are still scarce.

Almost all mainstream Web2 applications and products leverage data to enhance user experiences and maintain a competitive edge. If Web3 aims to create "killer applications," data, while not the sole factor, is undeniably crucial.

To solve this problem, the concept of the ZK coprocessor has emerged in the market: it enables off-chain computation without introducing additional trust assumptions, neatly resolving the dilemma.

Introduction to ZK Coprocessors

Separating Computation from EVM

The ZK coprocessor takes the work the EVM struggles with, moves it off-chain, and attaches zero-knowledge proofs to the results, so the blockchain gains efficiency without giving up trustlessness.

The name "coprocessor" is inspired by the relationship between the CPU and the GPU: the GPU handles tasks the CPU is poor at, such as intensive graphics processing, and this specialized division of labor improves the overall efficiency of the computer. A blockchain is a decentralized computer, and it can benefit from the same approach.

Addressing the Above Issues with ZK Coprocessors

From an abstract perspective, the coprocessor primarily focuses on two key aspects:

  • Data Retrieval: Fetching transaction records from the blockchain ledger and verifying data authenticity through ZK proofs.
  • Data Computation: Performing relevant computations based on the data and requirements, and verifying results' authenticity through ZK proofs.

Finally, the results are returned to the smart contract, which can cheaply verify the accompanying zero-knowledge proofs and then use the data directly, keeping on-chain execution efficient.
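
To picture the flow end to end, here is a conceptual sketch in Python. The function names, the `CoprocessorResult` structure, and the stubbed proof are hypothetical placeholders, not any project's actual API; in a real ZK coprocessor the stub would be a zero-knowledge proof covering both the data retrieval and the computation, and the contract-side check would be a cheap on-chain verification.

```python
from dataclasses import dataclass
from typing import Callable

# --- Off-chain coprocessor side (conceptual) --------------------------------

@dataclass
class CoprocessorResult:
    result: dict   # the computed answer, e.g. a user's top trading pair
    proof: bytes   # placeholder for a ZK proof over data access + computation

def run_query(fetch_history: Callable[[str], list],
              compute: Callable[[list], dict],
              user: str) -> CoprocessorResult:
    """Fetch historical data, run the computation, and attach a proof.

    In a real ZK coprocessor, `proof` would attest both that the data
    really came from the chain's ledger and that `compute` was executed
    correctly. Here it is just a stub.
    """
    history = fetch_history(user)   # data retrieval (proved against block headers)
    answer = compute(history)       # data computation (proved by a ZK circuit)
    return CoprocessorResult(result=answer, proof=b"zk-proof-placeholder")

# --- On-chain contract side (conceptual) -------------------------------------

def verify(proof: bytes) -> bool:
    """Stand-in for the cheap on-chain proof verification."""
    return proof == b"zk-proof-placeholder"

def contract_callback(res: CoprocessorResult) -> None:
    # The contract only verifies the proof; it never redoes the heavy work.
    assert verify(res.proof), "invalid proof"
    print("verified result:", res.result)

# Example usage with hypothetical data.
contract_callback(run_query(
    fetch_history=lambda user: [{"pair": "USDT/USDC"}] * 3 + [{"pair": "ETH/USDC"}],
    compute=lambda hist: {"top_pair": max(set(r["pair"] for r in hist),
                                          key=lambda p: sum(r["pair"] == p for r in hist))},
    user="0xabc...",
))
```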

ZK Coprocessor vs. Rollups: Differing Efficiency Improvements

The ZK coprocessor mainly improves the front end of the pipeline, i.e., how efficiently data is collected and computed over, while Rollups focus on the back end, i.e., how efficiently transactions are executed. They are not competitors but complementary solutions.

Introduction to ZK Coprocessor Implementation Projects

Major ZK Coprocessor projects include Brevis, Herodotus, and Axiom, which operate based on the aforementioned concepts but may differ slightly in implementation.

Brevis

Brevis, built by the team behind Celer Network, comprises three main components:

  • zkFabric: Responsible for collecting blockchain data and computing zero-knowledge proofs for block headers.
  • zkAggregatorRollup: Stores and transmits data to smart contracts on the blockchain, including data collected by zkFabric and zkQueryNet.
  • zkQueryNet: Handles data and computations required by Web3 smart contracts.
(Figure: Brevis architecture design)
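
Based on the component roles described above, a single query can be pictured as flowing through the three parts roughly as follows. This is only a conceptual sketch; the function names and data shapes are hypothetical and do not reflect Brevis's actual interfaces.

```python
# Conceptual pipeline based on the component roles described above.
# All names are hypothetical placeholders, not Brevis's actual interfaces.

def zk_fabric_collect(chain: str) -> dict:
    """zkFabric: collect blockchain data and prove block headers with ZK proofs."""
    return {"chain": chain, "block_headers": ["<header>"], "header_proof": b"<proof>"}

def zk_query_net_compute(headers: dict, query: str) -> dict:
    """zkQueryNet: run the data query / computation requested by a dApp contract."""
    return {"query": query, "answer": {"top_pair": "USDT/USDC"}, "query_proof": b"<proof>"}

def zk_aggregator_rollup_submit(result: dict) -> None:
    """zkAggregatorRollup: store the proved results and relay them to on-chain contracts."""
    print("submitting to the dApp's smart contract:", result["answer"])

# Example flow for a single (hypothetical) query.
headers = zk_fabric_collect("ethereum")
result = zk_query_net_compute(headers, "most-used trading pair of a given user")
zk_aggregator_rollup_submit(result)
```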

Axiom

Axiom provides a trustless blockchain query service, unlike the centralized off-chain query frameworks mentioned earlier. Axiom's data can be verified for accuracy through zero-knowledge proofs.

ZK Coprocessors: Not the Sole Solution

ZK Coprocessors address the challenges of Web3 data collection and computation, vital for enhancing user experience and precise marketing.

However, ZK coprocessors are not the only way to tackle computation over on-chain data. Smart Layer, for example, achieves similar outcomes by integrating external data, though its trust assumptions then depend on the security of the Smart Layer network itself.

Recommended Reading: What Is Smart Layer? How to Integrate Web3 with Real-World Scenarios?
Reason for Recommendation: That article explains Smart Layer's design architecture and operating principles in detail. Reading it alongside this one gives a fuller picture of the problems ZK coprocessors aim to solve.

If the data involved is relatively unimportant, off-chain computation is good enough; not everything needs full trustlessness. The key is to match the solution to the specific problem at hand.