New Paradigm in Ethereum L2 Scaling: Multi-proving and ZK-VMs

Brief intro on Ethereum scaling & zk-proofing

Ethereum is the largest and, in many respects, the most credible smart contract blockchain. It has the most decentralized consensus in the smart contract space, backed by “Ethereum values” such as credible neutrality: an open garden where anyone can build permissionlessly. It's not surprising this environment attracts the most talent and projects.

The demand for Ethereum has increased, making the network expensive to use. Ethereum L1 can only handle around 15 transactions per second. Thus, scaling Ethereum has become especially critical and has drawn a lot of talent and teams.

As we know, Ethereum follows a rollup-centric scaling roadmap. This involves L2 rollups that improve throughput and reduce the burden on Ethereum L1. There are two main types of rollups: optimistic rollups and zk-rollups.

Optimistic rollups rely on the assumption that most transactions are valid, enforced by fraud proofs (still a future implementation on some rollups). They allow transactions to be processed off-chain (off the mainnet) and then submitted to Ethereum L1 in a compressed format, reducing gas costs and increasing throughput.

In contrast, zk-rollups, short for Zero-Knowledge Rollups, provide a trustless and more secure way of scaling Ethereum. ZK-rollups use cryptography and rely on math to prove the validity of transactions: a zk-rollup batches transactions together, computes a ZK proof of the execution trace, and finally submits the proof to Ethereum L1 mainnet. The key point is that verifying this proof on L1 is significantly faster than re-executing the transactions, which is what provides the scaling benefit.
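As a toy illustration (mock proof, no real cryptography; all function names here are made up), the asymmetry looks like this: the prover does the heavy re-execution of the batch, while the verifier only performs a cheap check against the resulting state root:

```python
import hashlib

def apply_tx(state: dict, tx: tuple) -> None:
    # A "transaction" simply moves `amount` between two accounts.
    sender, receiver, amount = tx
    state[sender] = state.get(sender, 0) - amount
    state[receiver] = state.get(receiver, 0) + amount

def state_root(state: dict) -> str:
    # Stand-in for a Merkle root: a hash of the sorted state.
    return hashlib.sha256(str(sorted(state.items())).encode()).hexdigest()

def prove_batch(state: dict, txs: list) -> tuple:
    # Prover: re-executes the whole batch (the expensive part)...
    for tx in txs:
        apply_tx(state, tx)
    new_root = state_root(state)
    # ...and emits a short "proof" (mocked here as a hash binding the root).
    return new_root, hashlib.sha256(new_root.encode()).hexdigest()

def verify(new_root: str, proof: str) -> bool:
    # Verifier: a constant-time check, no re-execution of the transactions.
    return proof == hashlib.sha256(new_root.encode()).hexdigest()

state = {"alice": 100, "bob": 50}
root, proof = prove_batch(state, [("alice", "bob", 10), ("bob", "alice", 5)])
assert verify(root, proof)
```

A real zk-rollup replaces the mock hash with a succinct validity proof, whose verification is far cheaper than re-executing the batch.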

Thus, it’s no surprise that some optimistic rollups are taking steps to adopt zk-proofs. Besides this, certain entirely separate Layer-1 (L1) blockchains are also considering converting themselves from independent L1s into L2 rollups on Ethereum.

However, while the proving systems for zk-rollups have developed fast, there's also a shared understanding that some of them won't be bug-free for years. Thus, certain rollups are exploring multiple parallel proving systems as a more secure path forward.

One of the most interesting paradigm shifts has been ZK-VMs, which can essentially create ZK-proof for any computer program, even for the entire EVM (Ethereum Virtual Machine).

ZK-VMs have been around for a while. TinyRAM, in 2013, was the first research-driven project. Later, around 2021, Cairo (Starknet), Zinc (zkSync), and Miden (Polygon) were the next major projects in the blockchain space; 2021 was also the year of the first commercial projects.

Recent developments in 2023 have enabled the creation of ZK proofs for an entire microprocessor architecture (RISC-V), which, in turn, has made it possible to run the entire EVM inside it, generating proofs for everything running on the EVM. This is a major paradigm shift.

Overview of ZK-VMs. Source: “Analysis of zkVM Designs” (ZK10), 20th Sep 2023, by Wei Dai & Terry Chung.

Focus on ZK-rollups, multi-proving and ZK-VMs

In this article, we explore multi-proving in the context of Ethereum L2 rollups and ZK-VMs, and how the two relate to each other. ZK-VMs, and general-purpose verifiable computation more broadly, are the new paradigm we'll explore.

We'll also examine the most important projects and companies in the ZK-VM space: Risc Zero, Powdr, Nexus, and Lasso/Jolt.

Background info

This article is written for people curious about the current state of Ethereum L2 zk-rollup development and what lies ahead in terms of proving systems, with a focus on the ZK-VM approach.

I assume a basic understanding of Ethereum, L2, zk-Rollups, and zero-knowledge proofs.

If you are new to zero-knowledge proofs and zk-rollups, it’s better to first read my primer article on the topic here:

https://www.mikkoikola.com/blog/2023/1/9/exploring-ethereum-scalability-what-are-zero-knowledge-proofs-and-zkevm

With these opening words, let’s go!

Table of Contents

  1. Problem: 34,000+ lines of code in PSE circuits won’t be bug-free for years

    1. Rollups are on training wheels

  2. Solution: Vitalik’s Multi-Proving idea

    1. Option 1: High-threshold governance override

    2. Option 2: Multi-prover

    3. Option 3: Two-prover plus governance tie break (combination of 1 and 2)

  3. Recap of proving systems

    1. SGX

    2. Polygon zkEVM

    3. Kakarot

    4. ZK-VM

    5. Other

  4. Contestable Rollup?

  5. ZK-VM brings a new paradigm in ZK and scaling

    1. What is ZK-VM?

    2. What is RISC-V?

  6. How does ZK-VM actually work?

    1. Regular Virtual Machine

    2. Zero Knowledge Virtual Machine (ZK-VM)

  7. What are the most interesting companies in the field?

    1. Risc Zero

    2. Powdr

    3. Nexus

    4. Jolt and Lasso

    5. Others

  8. Performance comparison between ZK-VMs

  9. Summary and conclusion

  10. References

Problem: 34,000+ lines of code in PSE circuits won’t be bug-free for years

Screenshot of Vitalik’s presentation about Multi-Proving on 10th Oct 2022 in Rollup Day in Bogota.

The Ethereum Foundation’s PSE (Privacy & Scaling Explorations) team has been developing an open-source system based on manually written circuits using the Halo2 proving system. PSE circuits are used by e.g. Taiko and Scroll, which are both L2 zk-rollups, as well as by certain ZK co-processors. And naturally, the PSE team is exploring ZK tech to eventually scale Ethereum L1 as well, albeit on a conservative, long timeframe.

PSE circuits contain over 34,000 lines of code (at least that was the figure a year ago). These circuits are not going to be bug-free for years, as acknowledged by Vitalik and engineers at large.

Vitalik addressed this in his presentation “Multi-Provers for Rollup Security” on 10th Oct 2022 at Rollup Day 2022. See the video embedded below.

Manually writing circuits for something as complex as the EVM (which was never designed for ZK proving) is complicated and error-prone. The code base is hard to maintain, hard to modify, and not easy for humans to read.

Vitalik’s presentation about Multi-Proving in Rollup Day in Bogota, 10th Oct 2022

Rollups are on training wheels

This means that nearly all rollups are on "training wheels" before becoming fully decentralized: they are either still in development or overridable by governance.

L2BEAT does great work monitoring the development (Stage 0, 1, 2).

The PSE circuits are being developed further and will become more trustworthy over time. To mitigate the long timeframe this takes, new methods have been proposed to guarantee the security of zk-rollups. Let's deep-dive into multi-proving and the contestable rollup design.

Solution: Vitalik’s Multi-Proving idea

Besides PSE circuits, we can use parallel proving methods.

Vitalik mentioned in his speech:

1. High-threshold governance override

2. Multi-proving

3. Two-prover plus governance tie break (combination of 1 and 2)

Since Vitalik's speech a year ago, there have been interesting developments. I'll leave you to linger on a couple of acronyms: Contestable Rollup, SGX, and ZK-VMs.

Before we explore these, let's first explore Vitalik's 3 main ideas and then expand on the development since his presentation.

Option 1: High-threshold governance override

E.g., a 6-of-8 or 12-of-15 guardian multi-sig. If a bug is found, the guardians can override it (override the state root).

That said, governance always has a weakness: guardian selection, legal risks/responsibilities, etc.
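A minimal sketch of the threshold check (function and guardian names are hypothetical; a real deployment verifies actual signatures on-chain):

```python
def can_override(approvals: set, guardians: set, threshold: int) -> bool:
    # Only approvals from known guardians count toward the threshold.
    return len(approvals & guardians) >= threshold

# A 6-of-8 scheme, as in the example above.
guardians = {f"guardian_{i}" for i in range(8)}
assert not can_override({"guardian_0", "guardian_1"}, guardians, 6)
assert can_override({f"guardian_{i}" for i in range(6)}, guardians, 6)
```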

Option 2: Multi-prover

Like Ethereum, which has multiple client implementations for extra safety, zk-Rollups could have multiple proving systems.

If one implementation has a bug, another implementation may not have one, at least not in the same place. This is especially true if the implementations are created by different teams with different technical architectures.

This is a relevant analogy, because Ethereum’s multi-client approach has saved it from attack attempts many times. Bugs have been found in multiple clients (both Geth and Parity), even at the same time, but the bugs have been in different places.
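The same idea reduces to a few lines of code (a deliberately trivial mock; the two "provers" stand in for independently built proving systems): a result is accepted only when the independent implementations agree, so a bug in one surfaces as a disagreement rather than a finalized bad state:

```python
def prover_a(txs: list) -> int:
    # One implementation of the (toy) state transition rule.
    return sum(txs)

def prover_b(txs: list) -> int:
    # A second, independently written implementation of the same rule.
    total = 0
    for t in txs:
        total += t
    return total

def accept(txs: list) -> bool:
    # Finalize only when both implementations produce the same result.
    return prover_a(txs) == prover_b(txs)

assert accept([1, 2, 3])
```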

Vitalik suggested combining optimistic fraud proofs and zk-proofs, and there are many variations.

As one option, Vitalik envisioned something that didn't exist back then: Compiling the Geth source code and putting it through some minimal VM, such as MIPS.

Vitalik’s words have proven to possess strong predictive force in the industry. This one-liner turned out to be no exception, as we’ll see later in the blog post ✌️

Option 3: Two-prover plus governance tie break

One big question is the multi-prover aggregator's code.

You want to minimize the number of lines of code that definitely need to be bug-free. Instead of 30,000+ lines, it could be 100-200 lines of code.

Then, formally prove it. And coordinate between various projects using this same formally proved code.

One challenge is minimizing the aggregators themselves. Vitalik's idea was to use a plain old 2-of-3 Gnosis Safe wallet.

The 3 signers of the Safe would be:

  • 1st: a 4-of-7 guardian multi-sig (another Gnosis Safe)

  • 2nd: an account that confirms ZK proofs

  • 3rd: an account that confirms fraud proofs

So, you could even utilize existing code that does the aggregation of proofs and thus reduce the surface area of code you have to trust unconditionally.
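A sketch of the 2-of-3 tie break as plain logic (the wallet mechanics of a real Gnosis Safe are abstracted away; this only shows the decision rule):

```python
def finalize(guardians_ok: bool, zk_ok: bool, fraud_ok: bool) -> bool:
    # Plain 2-of-3: any two of {guardian multi-sig, ZK proof account,
    # fraud proof account} are enough to finalize a state root.
    return sum([guardians_ok, zk_ok, fraud_ok]) >= 2

assert finalize(False, True, True)       # the two proof systems agree
assert finalize(True, True, False)       # guardians break a tie
assert not finalize(True, False, False)  # guardians alone cannot finalize
```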

This marks the conclusion of Vitalik's presentation summary.

The rest of the article explores the developments of the last 12 months since Vitalik's speech.

Recap of proving systems

Besides PSE’s ZK proofs, optimistic fraud proofs, and governance mechanisms, there are also other proving systems that are utilized or explored by L2 teams.

SGX

One of them is SGX (Software Guard Extensions), a Trusted Execution Environment (TEE) developed by Intel. SGX can feel like a cheat code because it runs at nearly the same speed as the computation itself. However, it relies on trust in Intel and thus is not sufficient alone. SGX can work as part of a multi-proving system as an additional security mechanism.

Vitalik recently commented on SGX in a Twitter Spaces, saying essentially the same thing: SGX is good if it adds security, and bad if it adds a dependency for security. E.g., running a rollup that relies only on SGX would be a terrible idea.

If you’d like to learn more about SGX, you can check this comprehensive article.

Polygon ZK-EVM

Polygon zkEVM is an L2 rollup on Ethereum that uses its own proving system, based on a combination of SNARKs and STARKs. To be clear, Polygon doesn’t use PSE circuits.

Kakarot

Kakarot is a slightly different kind of project. It is a zkEVM implementation written in Cairo, designed to make the Solidity language available specifically on the Starknet L2 rollup, which traditionally supported only the Cairo language for smart contracts.

ZK-VM

Finally, we have the ZK-VM (Zero-Knowledge Virtual Machine) approach. In this case, we specifically refer to a ZK-VM that supports an entire instruction set architecture, like RISC-V. This is a truly new paradigm, the kind of thing Vitalik was envisioning.

ZK-VM enables building ZK apps without having to build custom program-specific circuits.

Bobbin Threadbare summarized the benefits of ZK-VMs in four parts:

  • ZK-VMs are easy to use (no need to learn cryptography and ZKP systems)

  • They are universal (a Turing-complete ZK-VM can prove arbitrary computations).

  • They are simple. A (relatively) simple set of constraints can describe the entire VM.

  • They can utilize recursion (proof verification is just another program executed on the VM).

ZK-VMs are a key focus area of this article. In the rest of the article, we’ll take a brief look at how a ZK-VM works and at the most interesting companies and projects in the space.

Contestable Rollup

Daniel Wang’s presentation about Taiko L2 protocol, multi-proving and contestable rollup approach.

Now that we understand the multi-proving paradigm and have glimpsed the different proving systems, let’s have a look at the “contestable rollup” design.

Multi-proving works best when combined with contestable rollup design.

So, what is a contestable rollup? Not every block needs a ZK proof. That might sound dramatic, but it's an extension of Vitalik Buterin’s original multi-prover vision we summarized above.

Anyone can attest to the validity of a transaction, and anyone can request a ZK and/or SGX proof.

This way, computational resources are not initially spent on tiny transactions, but proofs can always be requested when needed. As the name says, it's a contestable approach.
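A minimal sketch of the contest flow (class and status names are invented for illustration; real designs add bonds, deadlines, and actual proof verification):

```python
class ContestableRollup:
    def __init__(self):
        self.blocks = {}  # block_id -> status

    def propose(self, block_id: str) -> None:
        # Blocks are accepted optimistically, with no proof attached.
        self.blocks[block_id] = "optimistic"

    def contest(self, block_id: str) -> None:
        # Anyone can contest; a ZK (or SGX) proof is now required.
        self.blocks[block_id] = "proof_requested"

    def submit_proof(self, block_id: str, proof_valid: bool) -> None:
        self.blocks[block_id] = "final" if proof_valid else "reverted"

rollup = ContestableRollup()
rollup.propose("block_1")
rollup.propose("block_2")
rollup.contest("block_2")            # only this block pays for a proof
rollup.submit_proof("block_2", True)
assert rollup.blocks == {"block_1": "optimistic", "block_2": "final"}
```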

The term “contestable rollup” was coined by Daniel Wang (CEO & co-founder at Taiko).

If you’d like to understand the multi-prover and contestable rollup concepts in more detail, see Daniel’s video above on how it all works in the Taiko L2 zk-rollup. The multi-prover part starts at 13:54.

Dispute game factory

“Dispute Game Factory” is essentially Optimism's take on the same concept as the contestable rollup. Protolambda from Optimism discussed this approach in a recent Twitter Spaces.

It allows adding more “games” or “sub-games” to the system. “Games” essentially refer to various kinds of proofs, optimally both optimistic fraud proofs and zk-proofs, and various versions of these.

In the same Twitter Spaces, Vitalik agreed that the contestable rollup, or dispute game, approach is good. He stressed that the fallback mechanism of a rollup has to be robust by itself: so solid that people would feel comfortable even if it were the only mechanism.

This concludes the first part of the blog post.

The next half of the blog post goes deeper into ZK-VMs, especially the ones that utilize the RISC-V instruction set architecture. We explore why they are a new paradigm, how they work, and some of the most interesting companies and projects in the space.

ZK-VM brings a new paradigm in ZK and scaling

What is ZK-VM?

Generally speaking, ZK-VM is a virtual machine that can guarantee secure and verifiable computation utilizing zero-knowledge proofs.

Technically speaking, ZK-VM is a Virtual Machine (VM) implemented as an arithmetic circuit for the zero-knowledge proof system. Instead of proving the execution of a specific program (e.g., EVM, as PSE circuits aim to do), you prove the execution of an entire Virtual Machine.

As an example, the Taiko and Scroll L2 zk-rollups (together with the Ethereum Foundation’s PSE team) have developed circuits directly for the EVM, on top of which the smart contracts run.

With ZK-VM, instead of creating circuits specifically for the EVM program, you prove the execution of the entire microprocessor architecture, such as RISC-V.

The benefit here is that RISC-V is a widely used architecture with broad compiler support. You can compile programs from a number of common programming languages like Rust, Python, C++, and so on. In the ZK-VM context, these are sometimes called “frontend languages.”

So, what is RISC-V?

RISC-V is a royalty-free, open-source instruction set in the industry-standard RISC (Reduced Instruction Set Computer) tradition. The RISC-V project originated at UC Berkeley (started in 2010, with the base ISA frozen in 2014). As opposed to the hundreds of instructions in CISC (Complex Instruction Set Computer) architectures, the RISC-V base instruction set only has around 40 instructions. This makes it feasible to create ZK circuits and proofs for the entire architecture. The open-source nature of RISC-V also aligns well with the values of Ethereum and decentralized blockchain systems.

The beauty here is that after you’ve generated ZK circuits for RISC-V, you can generate ZK proofs for any computer program. Wow. You can literally put the entire EVM there as it is and get a fully ZK-proofed EVM. Double wow.

Is this real? Yes, it is. However, with a caveat.

Let’s explore the difference between manually writing circuits and the ZK-VM approach. The following slide gives a good overview of the current state:

Comparing the pros and cons of manually written program-specific circuits (“ZK Circuits”, on the left) and ZK-VMs (on the right). Source: Wei Dai and Terry Chung.

To summarize the slide above, ZK-VMs beat manual circuit writing in almost every aspect. With ZK-VMs, it's easier to modify and extend the proving system's code base, conduct audits, and utilize a wide range of available tooling.

However, the caveat is that ZK-VMs are currently 10-100x slower.

Thus, the big question is: Can this performance be improved? The consensus in the industry is — yes — we're just taking baby steps. The performance will improve in a reasonable time frame.

The key way to improve speed is parallelization: splitting ZK proof generation across hundreds or thousands of processors. Some companies call this splitting technology "continuations." We'll explore this later.

How does ZK-VM actually work?

Before explaining ZK-VM, let’s take a look at how a simple VM (Virtual Machine), or a state machine, works.

I’ll utilize three slides from Bobbin Threadbare’s presentations.

Regular Virtual Machine

Source: Miden VM architecture overview by Bobbin Threadbare at 2022 Science of Blockchain Conference - Applied ZK Day on September 14, 2022.

The components explained:

Inputs:

  • Initial State is the state of a program, e.g., the state of the Ethereum blockchain before the transactions are executed.

  • Program is a computer program.

Outputs:

  • Final State is the state after program execution, e.g., the state of the Ethereum blockchain after the transactions are executed.
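The slide condenses into a few lines of code: a toy VM (with made-up instruction names, nothing EVM-specific) that deterministically maps an initial state and a program to a final state:

```python
def run_vm(initial_state: dict, program: list) -> dict:
    # A toy register machine: the state is a dict of registers, and the
    # program is a list of (op, register, value) instructions.
    state = dict(initial_state)  # never mutate the caller's input state
    for op, reg, val in program:
        if op == "set":
            state[reg] = val
        elif op == "add":
            state[reg] = state.get(reg, 0) + val
        else:
            raise ValueError(f"unknown instruction: {op}")
    return state

final = run_vm({"x": 1}, [("add", "x", 4), ("set", "y", 7)])
assert final == {"x": 5, "y": 7}
```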

Now, let’s have a look at what a ZK-VM looks like:

Source: ZK7: Miden VM: a STARK-friendly VM for blockchains by Bobbin Threadbare at Zero Knowledge Summit (ZK7) on April 21, 2022.

The ZK-VM has two extra components: Proof and Witness.

Outputs:

  • Proof can be used by anyone to verify that the programs were executed correctly. It means we can get from the Initial State to the Final State without having to re-execute any of the programs. This saves a lot of computational resources.

Inputs:

  • Witness, aka “secret inputs.” In the context of a blockchain transaction, it could be the transaction signatures that go into the witness.

    • The verifier who wants to verify the correct execution of the program does not need to know this Witness.

    • Because the ZK-VM has access to the Witness, it is also possible to transform the other inputs (Initial State, Final State, and Programs): we can simply provide a commitment to each of them.

    • A commitment refers to a cryptographic technique of publicly committing to a specific value without revealing the value itself.

  • The actual values of all three go into the Witness, and the verifier can then verify the correct execution of the program.

    • The verifier needs to know the commitments to the Initial State, Final State, and Programs.

    • But importantly, no re-execution is needed, and the verifier has no access to what actually happened inside the execution, or to any other specific details.

  • In our example of scaling Ethereum, the “Program” input is the entire EVM.
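The commitment step can be sketched as follows (a bare hash commitment for illustration; real schemes also add hiding/blinding, and the "proof" here is a mock that merely binds the three commitments together):

```python
import hashlib

def commit(value) -> str:
    # A hash as a simple binding commitment: it pins down the value
    # without publishing the value itself.
    return hashlib.sha256(repr(value).encode()).hexdigest()

# Prover side: holds the actual states and program (part of the witness).
initial_state = {"balance": 100}
final_state = {"balance": 90}
program = ["transfer 10"]

c_init, c_prog, c_final = commit(initial_state), commit(program), commit(final_state)

# Mock "proof": in a real ZK-VM this would attest that executing the
# committed program takes the committed initial state to the committed
# final state, without revealing any of the committed values.
proof = commit((c_init, c_prog, c_final))

# Verifier side: sees only the commitments and the proof.
assert proof == commit((c_init, c_prog, c_final))
```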

Because we only use the commitments, not the actual values/data, the diagram now looks as follows:

Source: ZK7: Miden VM: a STARK-friendly VM for blockchains by Bobbin Threadbare at Zero Knowledge Summit (ZK7) on April 21, 2022.

If you have a bit of extra time, I recommend watching the first 4 minutes of Bobbin’s presentation, where he elegantly explains the three slides above. If you have more time, the entire 30-minute presentation is a great primer on ZK-VMs.

Obviously, this just scratched the surface of how the tech works. The detailed mechanics of ZK-VMs are out of scope for this blog post.

Next, let’s explore the companies and projects working on general-purpose ZK-VMs that are utilizing RISC-V.

What are the most interesting companies in the field?

We’ll cover the following companies and projects: Risc Zero, Nexus, Powdr, and Jolt/Lasso. I’ve embedded YouTube presentations from the founders.

Risc Zero

Jump to the 12:25 mark in the video for the EVM proving part.

Risc Zero is a team of over 50 people, originating from San Francisco. They’ve been around since 2021 and have raised 40 MUSD in Series A funding, in addition to 14 MUSD from previous rounds.

Risc Zero has developed a general purpose ZK-VM.

In August 2023, they gained plenty of publicity by proving an EVM block utilizing their RISC-V-based ZK-VM. This was conducted with their product named Zeth (see announcement). Zeth is an open-source ZK block prover for Ethereum built on Risc Zero’s ZK-VM. Zeth is based on Reth.

In a one-sentence summary, Risc Zero’s Zeth client can create ZK proofs for an EVM block. This was developed in just under 4 weeks by 2 engineers.

Of course, there was first a significant effort to create the proving system for the RISC-V architecture. Once that was done, it became fairly easy to compile any program for the architecture and get its execution ZK-proven. Nevertheless, it’s quite a contrast to PSE circuit development, which has taken years and has yet to be completed and audited.

The caveat is, as mentioned earlier, creating ZK proofs for EVM blocks utilizing Risc Zero is still slow.

Risc Zero’s offering consists of different products, services and concepts. Let’s have a look:

Zeth is an open-source ZK block prover for Ethereum built on Risc Zero’s ZK-VM.

Bonsai is proving as a service (essentially a SaaS offering): a ZK co-processor for Ethereum. Bonsai allows you to request verified proofs via an off-chain REST API or on-chain directly from smart contracts.

Continuations is a feature for computing ZK proofs in parallel across multiple machines/GPUs. This can significantly scale computation; however, it comes with an overhead cost as high as 50%. I’d imagine improving the parallelization algorithms will be a key area of development for Risc Zero.

Without continuations, proving big computations (such as EVM blocks) wouldn’t be feasible; continuations were key to the success of the Zeth project. For example, a zkEVM proof takes about 4 billion cycles. Continuations split it into 4,000 one-million-cycle segments, prove them in parallel, and finally combine them with recursive proofs.
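The segment-and-recombine idea can be sketched like this (the "proofs" are mock records of segment boundaries; in the real system each segment is proven on a separate machine and the segment proofs are folded into one recursive proof):

```python
def prove_segment(state_in: int, cycles: int) -> dict:
    # Mock segment proof: run `cycles` steps of a toy computation and
    # record the in/out states a recursive proof would bind together.
    state = state_in
    for _ in range(cycles):
        state = (state * 31 + 7) % 1_000_003  # stand-in for one VM cycle
    return {"in": state_in, "out": state}

def prove_with_continuations(state0: int, total_cycles: int, segment: int) -> list:
    # Split the big computation into fixed-size segments. In a real
    # system each call would run on a different machine/GPU, in parallel.
    proofs, state = [], state0
    for _ in range(total_cycles // segment):
        p = prove_segment(state, segment)
        proofs.append(p)
        state = p["out"]
    return proofs

def combine(proofs: list) -> bool:
    # Stand-in for recursive aggregation: check that the segments chain up.
    return all(a["out"] == b["in"] for a, b in zip(proofs, proofs[1:]))

proofs = prove_with_continuations(1, total_cycles=4000, segment=1000)
assert len(proofs) == 4 and combine(proofs)
```

Chaining four 1,000-cycle segments is equivalent to one 4,000-cycle run, which is exactly what the recursive combination step has to guarantee.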

As one benchmark in their launch blog post, proving an Ethereum block cost around 21 USD and took 50 minutes, computed on 64 proving nodes using continuations. These numbers are expected to decrease significantly over time.

Finally, it will be interesting to see in which direction exactly Risc Zero develops in the future. The team has built exciting technology, most (but not all) of it open source.

Besides Risc Zero, there are also other competent teams with a similar vision, which I’ll explore next.

Nexus

Nexus aims to create a general-purpose verifiable computing platform. Nexus also originates from the San Francisco area, specifically from Stanford University, and has raised funding from Dragonfly, Alliance, and SVangel.

Nexus has given one presentation, at BASS (Blockchain Application Stanford Summit) on October 1, 2023, and has a simple landing page. Other than that, Nexus seems to still be in stealth mode.

Nexus ZK-VM

Nexus claims that the Nexus ZK-VM can prove any programming language, any machine architecture, any arithmetization, and any size of computation (potentially infinite), and finally use recursive proofs to aggregate and parallelize. They call this the Nexus Network.

In the presentation, founder Daniel Marin underlines the importance of seeking to prove arbitrarily sized computations: extremely large computations that have never been proved at this scale before.

Nexus focuses on RISC-V architecture, similar to Risc Zero.

Nexus Network

The Nexus Network is a decentralized market for verifiable computation. It enables decentralized computation of zero-knowledge proofs to verify any computer program.

Technically, it is a system that creates multiple proofs (with multiple inputs) and aggregates them using recursion, so that a single small proof can be succinctly and efficiently verified, attesting to the correct execution of multiple functions on multiple different inputs.

Every stateful system (e.g., rollups, smart contracts) can outsource proof generation to the stateless Nexus network.

Early performance stats (demo):

In the video above, they provided a demo of calculating the first 1000 Fibonacci numbers and generating a proof using Nexus ZK-VM.

Computing Fib(1000) on the EVM costs around 2,000,000 gas, which is around 160 USD at recent ETH prices.

If you instead run Fib(1000) on Nexus and verify its execution on Ethereum, it costs around 20 USD. This is a fixed cost for any N in Fib(N). Obviously, the proof generation takes time and has an additional cost, but that happens outside of the EVM.
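The quoted on-chain figure is easy to sanity-check. Assuming a gas price of about 50 gwei and an ETH price of about 1,600 USD (my assumptions, not figures from the demo), 2,000,000 gas indeed comes to roughly 160 USD:

```python
def tx_cost_usd(gas: int, gas_price_gwei: float, eth_usd: float) -> float:
    # gas * price-per-gas, converted from gwei (1e-9 ETH) to USD.
    eth_spent = gas * gas_price_gwei * 1e-9
    return eth_spent * eth_usd

# ~50 gwei and ~1600 USD/ETH (assumed) reproduce the quoted ~160 USD:
assert round(tx_cost_usd(2_000_000, 50, 1600)) == 160
```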

Finally:

While most of the details of Nexus are yet to be published, we can see from the website that Nexus has attracted some serious talent. For example, their Chief Scientist is Jens Groth, known for the Groth16 proving system and as the inventor of pairing-based zkSNARKs. Considering this, I sense there might be some cryptography innovations cooking.

Nexus seems to have big ambitions and the talent to support them. Nexus is also aiming at the Incrementally Verifiable Computation (IVC) space at large, with use cases outside the blockchain space as well.

PS. Nexus's presentation listed key cryptography/proof-system research papers such as IVC, Nova, SuperNova, CCS, HyperNova, and Protostar. I compiled them into this PDF for my own convenience, with the first 7 pages containing the abstracts in chronological order. It's a handy way to study the timeline of these innovations and to notice how a small set of recurring authors is behind them.

Powdr

Powdr: a modular stack for zkVMs, by Christian Reitwiessner at Zero Knowledge Summit 10 on Sept 20th 2023 in London.

Powdr is a remote team of around 10 people, originating from Berlin, funded by grants from the Ethereum Foundation.

Let’s clarify what Powdr is not: Powdr is not a ZK-VM.

Instead, with Powdr, you can build ZK-VMs.

Powdr creates a toolkit and a modular stack to build ZK-VMs.

The ZK part in the ZK-VM is not the most important aspect for Powdr. Powdr is more focused on the verifiable computation aspect.

Powdr aims to develop ZK-VMs whose proofs of program execution are faster to verify than re-running the execution in the first place.

Recently, numerous ZK-VMs have been created (Taiko, Scroll, Polygon zkEVM, Risc Zero, LLVM, nil zkLLVM, Valida, fluentlabs, Polygon Miden & Zero, zkSync, Cairo). Usually, they are custom-made. See the slide below.

Source: Powdr: a modular stack for zkVMs, by Christian Reitwiessner at ZK10 on Sept 20th 2023 in London.

According to Powdr, custom-built ZK-VMs have one advantage, which is “probably” performance.

However, there are disadvantages in custom-built ZK-VMs:

  • Hard to audit

  • Hard to change

  • Requires lots of effort to build and maintain

  • Prover (and proof-system) specific

  • Difficult to reuse

In a way, Powdr aims to be the LLVM (Low-Level Virtual Machine) for ZK-VMs.

Powdr wants to develop a system that allows you to freely swap the frontend programming language and the backend proving system: a system that enables you to write ZK-VMs in high-level languages, like Rust.

See the diagram below.

On the left, you can see the front-end programming languages (JS, C++, Rust, Solidity); on the right, the backend proving systems (Halo2, eSTARK, SuperNova). Powdr is the toolkit in between and consists of the powdr-ASM and powdr-PIL languages.

Source: Powdr: a modular stack for zkVMs, by Christian Reitwiessner at ZK10 on Sept 20th 2023 in London.

Powdr is a compiler stack that enables you to define a ZK-VM. In the Powdr source code, written in the Powdr PIL language, you specify the architecture and instructions for your Virtual Machine.

It allows you to abstract away all the low-level constraints and prover complexity, allowing you to build your machines in a modular way.

Powdr incorporates various layers of abstraction that facilitate optimization and analysis. Automated analyses, such as checks for non-determinism, and formal verification to ensure implementation correctness can be performed on the Powdr source code.

Btw, PIL stands for Polynomial Identity Language. It originates from Polygon zkEVM, with Powdr’s own modifications and further development.

Powdr’s goal is to make it easy to build, test, and audit Zero Knowledge Virtual Machines.

It appears that Powdr also has heavyweight co-founders: Christian Reitwiessner, a co-inventor of the Solidity language, is among them. Powdr is entirely open-source and has no investors, unlike Risc Zero and Nexus.

JOLT and Lasso

Video above: 3 min primer in Lasso. Source: Lasso in a Nutshell, by Justin Thaler in a16z crypto event
Also see, 3 min primer of Jolt:
The Lookup Singularity with Justin Thaler.

Jolt and Lasso are a bit different from the previous products and companies.

Jolt and Lasso are more like research projects at the moment. These two products are created by the same team and closely relate to each other.

Lasso is a new, more performant “lookup argument”. Lasso is utilized by Jolt, which is a new approach to building ZK-VMs. Let’s examine more closely.

LASSO stands for Lookup Arguments via Sum-check and Sparse polynomial commitments, including for Oversized tables.

Hah! Cryptographers never disappoint me with their creative naming schemes.

In short, Lasso proposes a new method to speed up ZK systems: a new “lookup argument,” a fundamental building block of ZK-SNARKs. The authors estimate a roughly 10x speedup over the lookup argument in the popular Halo2 toolchain, and a 40x improvement once all optimizations are complete.

See Lasso’s research paper and Github.

JOLT stands for Just One Lookup Table.

Jolt is the team's second project. Jolt introduces a new approach to building ZK-VMs that uses Lasso. Currently it exists only as a research paper; an open-source implementation will be released later.

Jolt describes a new front-end technique that applies to a variety of instruction set architectures (ISAs). It realizes a vision, originally articulated by Barry Whitehat, called the “lookup singularity”: producing circuits that only perform lookups into predetermined, gigantic lookup tables (more than 2^128 entries). The validity of these lookups is proven using the new lookup argument, Lasso. Crucially, all of this avoids costs that grow linearly with the table size.
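A toy version of the lookup idea (my own illustration, not Jolt's actual construction): a 32-bit AND evaluated purely through lookups into a small precomputed 8-bit table. Jolt/Lasso's tables are structured so that the astronomically large full table is never materialized:

```python
# "Everything is a lookup": evaluate a 32-bit AND by looking up 8-bit
# chunks in a precomputed table. A real Jolt/Lasso table is structured
# so the full (2^128+ entry) table never needs to be built.
AND8 = {(a, b): a & b for a in range(256) for b in range(256)}

def and32_via_lookups(x: int, y: int) -> int:
    out = 0
    for i in range(4):  # one byte per lookup
        xa = (x >> (8 * i)) & 0xFF
        yb = (y >> (8 * i)) & 0xFF
        out |= AND8[(xa, yb)] << (8 * i)
    return out

assert and32_via_lookups(0xDEADBEEF, 0x0F0F0F0F) == (0xDEADBEEF & 0x0F0F0F0F)
```

The circuit then only has to prove that each lookup hit a valid table entry, which is exactly the job of a lookup argument like Lasso.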

See Jolt’s research paper "SNARKs for Virtual Machines via Lookups" and Github.

The interesting thing with Lasso and Jolt is that these open-source projects are co-written by venture capital company Andreessen Horowitz (A16Z), together with prominent researchers in the field. The motivation for A16Z to participate in research stems from the fact that they are heavily invested in blockchain and cryptography companies that utilize ZK. Supporting ZK research can speed up the performance of the ZK systems used by A16Z’s numerous portfolio companies.

Gigabrains behind Jolt and Lasso are well-known names in cryptography research: Arasu Arun, Srinath Setty, and Justin Thaler.

There is a set of 7 presentations in this YouTube playlist covering the technical details of Jolt and Lasso.

Others

Beyond RISC-V, there are also ZK-VM teams utilizing other instruction set architectures.

I decided to narrow this blog post down to the teams with a RISC-V approach; please let me know if I missed any. Some of the other teams are mentioned in the next chapter, where I summarize a presentation comparing the performance of ZK-VMs.

Performance comparison between ZK-VMs

Wei Dai and Terry Chung gave probably the most information-dense and well-prepared presentation ever on ZK-VM designs and their performance comparison, at the ZK10 conference.

The first part of the presentation is a great overview, which I recommend watching. For the second part, you need to be an experienced ZK engineer to understand everything. They summarize lessons learned from design differences and spark discussion about finding more efficient arithmetizations and ZK-specialized instruction set architectures.

Wei and Terry touched upon four categories of projects:

  • RISC-V related: Risc Zero, Powdr, Jolt

  • WASM related: zkWasm, Wasm0

  • EVM related: Scroll, Polygon and Zeth

  • ZK-specific instruction set related: Cairo, Triton, Miden and Valida

Here is my summary of their main conclusions in the presentation. And please don’t worry if you don’t understand everything. I warmly welcome you to the imposter club! :) These conclusions mostly make sense for ZK engineers who are deep in the nuts and bolts.

  • On the ISA front, there are efficiency gains to be made if we eliminate local data movement

  • Arithmetizations: It is clear that with more SNARK-friendly hash functions, FRI can gain a lot of efficiency in terms of recursion complexity.

  • Minimizing the number of cells committed per instruction should be a goal. Often, certain cells, such as registers, are carried over repeatedly across cycles.

  • There are techniques to improve lookup protocols that are applicable to all VMs. For example, a lot of Halo2 VMs still use Halo2 lookup argument, which can be replaced by more recent and faster lookup arguments.

  • Finally, for PCS (Polynomial Commitment Schemes), we should have better benchmark frameworks to do comparisons.

  • Libraries are needed that frontends can depend on. Currently, every single STARK or SNARK library has to code its own Polynomial Commitment Schemes.

The summary slides from the presentation: STARK-VM Spec Sheet and Halo2-based VMs:

STARK-VM Spec Sheet

Halo2-based VMs

If you're not a full-time ZK engineer, I warmly suggest you just skip the slides above. In case you'd like to understand more, the video presentation runs through the data of these slides one by one. For more ZK benchmarking, you can also have a look at the ZK Bench website.

Summary and conclusion:

In this article, I explored why a multi-prover approach is needed for ZK proof generation in certain Ethereum L2 rollups. In short, there is a need for an additional proving system besides PSE circuits that can run in parallel. This is necessary because it will take years before PSE circuits can be considered bug-free. We briefly touched upon different proving systems, such as PSE circuits and SGX, and delved especially deep into ZK-VMs.

ZK-VMs have existed since 2013, and a significant innovation occurred this year, in 2023: ZK proofs were successfully generated for an entire RISC-V microprocessor architecture. RISC-V is an instruction set architecture supported by a large number of compilers and frontend languages (e.g., Rust, C++, Python, etc.). It's possible to compile nearly any program to RISC-V and generate a ZK proof of the execution trace of the program. This marks a significant paradigm shift in the industry.

In the Ethereum space, it means we can run the entire EVM inside RISC-V and have proofs generated for it with little extra effort. The advantages include easier auditing and stronger assurance of system security compared to manually written, program-specific arithmetic circuits. The caveat is that, currently, ZK-VM systems are 10-100x slower than program-specific proving systems, but performance is expected to improve.

The truly fascinating future development of ZK-VMs might extend beyond the blockchain space. When ZK-VMs become more performant, there are opportunities in IVC: Incrementally Verifiable Computation, which so far has mostly been a theoretical area of cryptography research.

IVC means verifying the correct execution of computer programs at a large scale. For example, as more computation is outsourced to the cloud (which is increasingly happening), questions arise about trusting cloud computation services, especially in critical applications. IVC explores how we can generate proofs of correct execution on a massive scale for computer programs of any size. This still requires certain leaps in cryptography research and recursive proof performance, which are well on their way. The ZK-VMs of today are only a tiny glimpse of what is coming in just a few years.

What open questions remain?

In the article, we also introduced four companies/projects in the ZK-VM space that specifically utilized the RISC-V approach: Risc Zero, Nexus, Powdr, and Jolt/Lasso.

One of the open questions is, for example, about business models and open-source licensing and how these two fit together. For instance, Risc Zero has open-sourced most of its code base (but not all). They generate revenue by offering access to their SaaS proof generation cloud service, which also provides scaling through computation parallelization.

From the point of view of decentralized blockchains, such as Ethereum rollups, these services can function as one provider of the Multi-Prover approach and bring extra security to a rollup. This is great news.

However, relying on a centralized and partly closed-source proving system doesn’t entirely align with the values of Ethereum. While decentralization in proof systems is not nearly as important as decentralization of the blockchain’s consensus layer, it still matters. Thus, I believe there will be demand for sufficiently decentralized and fully open-source proving systems. That said, the dilemma with a fully open-source and decentralized approach might be finding enough capital and thus attracting talent.

Moreover, the need for computation power in the future will be so massive that an entirely new industry of "proof markets" can emerge. How to connect idle (yet powerful) hardware to those in need of ZK computation and willing to pay for it? Certain companies are already exploring this.

Multi-proving unites both optimistic and zk-rollups

The traditional divide between Optimistic and ZK Rollups also appears to be closing in the long term. The multi-prover approach seems to unite them both. ZK-Rollups are exploring optimistic approaches to save computation resources. Similarly, Optimism is collaborating with two external companies to bring both RISC-V and MIPS-based ZK-VM proving systems into play, generating additional proofs for their rollup.

How about L1, will the "Enshrined Ethereum" adopt a multi-proving approach?

"Enshrined Ethereum" refers to the long-term dream of bringing ZK proofing directly to the L1 mainnet. The ZK tech is still not nearly battle-tested enough for this to become a reality in the coming years. However, it's worth exploring how this 'endgame' would look.

Vitalik addressed this in his recent article titled "What might an 'Enshrined ZK-EVM' look like?". One clear tradeoff space is an 'open' versus 'closed' approach. Hardcoding proving system choices inside the protocol would be technically easier but would go against Ethereum's open nature, regressing Ethereum from an open multi-client system to a closed one. That being said, engineering an open system, where anyone can develop and use any prover implementation (much like anyone can develop and use any execution client today), poses an additional engineering challenge.

Finally, thank you for reading the article.

I hope you enjoyed reading it as much as I enjoyed researching the topic and writing about it. It’s such a privilege to be able to research this space. Years of academic ZK research are finally entering the real world with only a relatively small number of people truly understanding their potential.

I predict that years from now, school textbooks will list the top crypto innovations as follows: first there was Bitcoin, then Ethereum and smart contracts, followed by Zero-Knowledge Proofs and ZK-VMs.

I hope you learned something new. If you did, consider sending me a message with your favorite emoticon, it’ll make my day!

- Mikko

THANK YOU

I’ve done a lot of research and reading while writing this article.

Credits especially to Vitalik Buterin, Bobbin Threadbare, Wei Dai & Terry Chung, Daniel Wang, Lisa Akselrod, Justin Thaler, Daniel Marin, Christian Reitwiessner & Leonardo Alt, Brian Retford, Diederik Loerakker (Protolambda), Koh Wei Jie and Suning Yao. And special mention to Srinath Setty and Abhiram Kothapalli, who seem to be the scientists behind most of the research papers.

My article rests on the shoulders of your writings and presentations for the most part. I’ve provided the full reference list below.

References:

More ZK related learning resources:

YouTube videos used in the article:

Whitepapers and/or DOCS:

  • Risc Zero whitepaper, by Jeremy Bruestle, Paul Gafni & RISC Zero Team, Aug 11th 2023

  • Nexus — not published yet

  • Powdr docs

  • Jolt by Arasu Arun, Srinath Setty, Justin Thaler

  • Lasso by Srinath Setty, Justin Thaler, Riad Wahby

Incrementally Verifiable Computation (IVC)

  • I compiled all the whitepapers below in this handy PDF, as mentioned in the Nexus presentation.

  • [Val08] Incrementally Verifiable Computation or Proofs of Knowledge Imply Time/Space Efficiency, by Paul Valiant, March 2008

  • [KS22] Nova: Recursive Zero-Knowledge Arguments from Folding Schemes, by Abhiram Kothapalli, Srinath Setty, Ioanna Tzialla, 30th June 2022

  • [KST22] SuperNova: Proving universal machine executions without universal circuits, by Abhiram Kothapalli, Srinath Setty, Dec 22nd 2022

  • [KS23a] Customizable constraint systems for succinct arguments, by Srinath Setty, Justin Thaler, Riad Wahby, May 3rd 2023

  • [NBS23] Revisiting the Nova Proof System on a Cycle of Curves, by Wilson Nguyen, Dan Boneh, Srinath Setty, June 20th 2023

  • [KS23] HyperNova: Recursive arguments for customizable constraint systems, by Abhiram Kothapalli, Srinath Setty, Aug 4th 2023

  • [BC23] Protostar: Generic Efficient Accumulation/Folding for Special-sound Protocols, by Benedikt Bünz, Binyi Chen, Aug 20th 2023

Exploring Ethereum Scalability: What are Zero-Knowledge Proofs and zkEVMs?

My deep dive into Zero-Knowledge Proofs

I’ve been full-time in the blockchain space for 2-3 years now and have studied the most important protocols. I learned the basic principles of Zero-Knowledge Proofs (ZKP) a long time ago. However, just a couple of months ago, I had nearly zero knowledge of how things work under the hood and what the ecosystem looks like. Certain smart people in the space convinced me that ZKPs would be worth studying more. I sensed my inner mimetic desire growing and decided to dive deep and learn.

It’s been fascinating to learn some fundamental cryptography and prime number math after so many years. I decided to compile this blog post to consolidate my learnings.

If you are new to ZKPs but familiar with crypto basics, I hope this blog post serves as a good primer for you to familiarize yourself with the ZKP basics and the current state of ZK-Rollups to scale Ethereum.

I’ll stay on a conceptual level in this article and only touch the surface of the complicated ‘moon math’. Expect about a 20-minute read, though.

If you’d prefer a summary, here you go: [Tweet thread summary of the article]

Why are Zero-Knowledge Proofs important?

Ok, your time is limited. And this article is long.

Why should you care, and why are Zero-Knowledge Proofs important to understand?

Here’s how Srikar Varadaraj formulated it:

Also, over the years, Vitalik Buterin himself has written 14 long-form articles about Zero-Knowledge Proofs, probably more than any other topic.

Vitalik’s writings have been a clear predictive force of what will be relevant in the space. Just have a look at the original Ethereum whitepaper from 2014 and you’ll find that many of its ideas have been implemented years later. Another crystal-ball writing is on Market Makers from 2017, where Hayden Adams picked up the idea and developed Uniswap (today a >5 Bn market cap protocol).

Zero-Knowledge tech and zk-Rollups most likely won’t be an exception. If Vitalik raves about this, there must be something.

Convinced? Ok — let’s go.

Table of Contents

  1. Introduction

  2. Centralized fall down & Progress in protocols

  3. Scalability Nightmare has persisted

    1. Speed is a requirement for many apps

    2. Vitalik’s trilemma & L2 scaling

  4. So, what is Zero-Knowledge Proof exactly?

  5. Required properties for Zero-Knowledge Proofs

  6. Who invented Zero-Knowledge Proofs?

  7. Hypothetical examples of ZKP

    1. Mortgage Risk Assessment

    2. Proving you are over 18 years old

  8. Practical blockchain examples already in use

    1. Privacy coins

    2. Mixers

    3. Ethereum L2 Zk-Rollups

  9. ZK-Rollups for L2 Ethereum scaling solutions

  10. Different ZKP versions

    1. ZK-SNARK

    2. ZK-STARK

  11. ZK-SNARKS proof generation under the hood

    1. Computation

    2. Arithmetic circuit

    3. R1CS

    4. QAP

    5. zk-SNARK

    6. Finally, it is complex

  12. Ethereum scaling: Zero-knowledge rollups

  13. Optimistic rollup vs. zk-Rollups

  14. From A Simple ZK-Rollup to ZK-EVM

  15. Different zkEVMs -- Compatibility differences

  16. Vitalik's zkEVM Categorization

    1. Type 1 zkEVMs: Fully Ethereum-equivalent

    2. Type 2 zkEVMs: Fully EVM-equivalent (not Ethereum-equivalent)

    3. Type 3 zkEVMs: Almost EVM-equivalent

    4. Type 4 zkEVMs: High-level-language equivalent

  17. Comparing zkEVM projects

    1. zkSync

    2. Scroll

    3. Polygon zkEVM

    4. Starknet

    5. Privacy & Scaling Explorations team

  18. Latest news

  19. ZK-Rollup wars & State of L2 Scaling

  20. Final thoughts

  21. Thank you

  22. References

    1. References

    2. Vitalik's writings about ZKPs over the years

Centralized fall down & Progress in PROTOCOLS

First, let’s see where we are today with crypto markets and what is still one of the main pain points.

Last year 2022 was quite a ride in the blockchain space, to say the least. While several centrally-managed entities (FTX, Celsius, BlockFi, Voyager, Genesis, etc) have ended up in bankruptcy, the ‘true’ decentralized smart contract protocols (think of Aave, Maker, Uniswap etc) have crunched the transactions and liquidations without any hiccup.

Code and major decentralized protocols have performed much better than greedy and biased humans. Amidst the collapses of centralized entities, there’s been serious progress in many areas of the decentralized protocols. Also, the Ethereum Merge finally happened, and there are plenty of new L2 protocols, and new dapps e.g. in social space… a lot of progress and experiments.

In a world where entire countries or presidents are cancelled, there is increasing demand for uncensorable, programmable global consensus systems, value transfer and record-keeping.

However, scalability remains one of the fundamental challenges in the blockchain space to truly enable these applications. Several teams and projects are working to improve scalability. Let’s see where we are today.

Scalability nightmare has persisted

Ethereum’s average transaction fees. (Source)

Yeah, I remember paying over 50 USD for transactions on Ethereum L1 many times during 2021 and 2022. Prices were over 20 USD for most of the year. It was the normal market rate. Block demand was high.

Naturally, these rates are too high for any serious mass-market adoption or applications.

SPEED is a requirement for MANY Dapps

The speed and cost of using blockchains need to go down. Regular users expect cheap and instant transactions. It is not only ‘nice to have’ — it’s a necessity for certain types of apps (think of e.g. social media apps with each post/comment counting as a transaction). A quick and smooth user experience is a must-have for larger adoption.

Vitalik’s trilemma & L2 scaling

After many years, Vitalik’s trilemma of Scalability, Security and Decentralization remains an eternal area of improvement in blockchain development. The rule of thumb says that you can only have two out of the three characteristics in one public blockchain.

Scalability Trilemma, https://vitalik.ca/general/2021/04/07/sharding.html

For example, Ethereum has been considered Secure and Decentralized (thousands of nodes), but scalability has been a limiting factor (only around 15 transactions per second). When demand is high, the Ethereum mainnet can be extremely expensive to use.

There have been many new L1s launched during the last few years. They have a different balance of the aforementioned three qualities. E.g. Solana has always focused on Scalability by compromising on Decentralization.

Ethereum’s L2 scaling solutions (e.g. Optimism and Arbitrum) have helped Ethereum quite a bit. So did The Merge (change of consensus mechanism from Proof-of-Work to Proof-of-Stake). But transaction costs have still been above 1 dollar. Imagine paying 1 dollar just for commenting on your friend’s post on decentralized social media. No thanks.

That’s why scalability and low transaction costs are so important: they enable the true adoption of many new types of applications.

While we are living through a bit of a quiet period in blockchain demand right now, the demand will most certainly pick up in a year or two.

Fortunately, Zero-Knowledge Proofs will help us with the scalability problem, as we will learn in this blog post.

Let’s get down to the basics.

So, what is Zero-Knowledge Proof exactly?

Enough of pep-talk and background. Let’s face it!

Let’s start with the definition:

In cryptography, Zero-Knowledge Proof is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true while avoiding disclosure of additional information beyond the fact that the statement is true.

That’s a mouthful. The above is usually related to privacy features.

Imagine this simple example: you want to prove to your friend that you have access to his Twitter account, but you don’t want to reveal the password. How to prove it? You can post a tweet with your friend’s account. Maybe your friend is not convinced yet. To make your point, you post 20 more tweets with his account. Your friend will eventually become convinced that you indeed know the password, even though you haven’t disclosed it.

However, in Ethereum and Layer 2 context, Zero-Knowledge Proofs are currently used only for scaling, not really for privacy-related features.

In the Ethereum L2 context, the more apt definition is:

Zero-Knowledge Proofs enable proving honest computation without revealing the inputs

Yes, “Zero Knowledge” is a misnomer for Layer 2 roll-ups.

Here’s a tweet from Haichen Shen, co-founder of Scroll, one of the Ethereum L2 scaling solutions.

Haichen Shen pondered what could be a more accurate name for zk-rollup. (Source: tweet)

Required properties for Zero-Knowledge Proofs:

Zero-Knowledge Proof requires three properties: Completeness, Soundness and Zero-Knowledge.

Completeness

  • If the statement is true, the verifier will be convinced by the prover

Soundness

  • If the statement is false, a cheating prover cannot convince the verifier that it is true (except with some tiny probability)

Zero-Knowledge

  • The verifier learns nothing beyond the statement’s validity
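To ground these three properties, here is the classic Schnorr identification protocol in Python, a standard textbook construction (not from this article) over a deliberately tiny, insecure toy group:

```python
import random

# Classic Schnorr identification protocol over a tiny toy group:
# p = 23, subgroup of prime order q = 11, generator g = 4.
# Toy parameters for illustration only -- utterly insecure in practice.
p, q, g = 23, 11, 4
x = 7                      # prover's secret
y = pow(g, x, p)           # public key: y = g^x mod p

def prove(r, c):
    """Commitment t = g^r; on challenge c, respond with s = r + c*x mod q."""
    return pow(g, r, p), (r + c * x) % q

def verify(t, c, s):
    # Completeness: g^s = g^(r + c*x) = t * y^c (mod p) for an honest prover.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

r = random.randrange(q)    # prover's random nonce
c = random.randrange(q)    # verifier's random challenge
t, s = prove(r, c)
print(verify(t, c, s))     # → True

def simulate():
    # Zero-knowledge intuition: a valid-looking transcript can be produced
    # WITHOUT the secret x, so transcripts reveal nothing about it.
    c, s = random.randrange(q), random.randrange(q)
    t = (pow(g, s, p) * pow(y, q - c, p)) % p   # t = g^s * y^(-c)
    return t, c, s

print(verify(*simulate()))  # → True
```

Completeness is the `verify` check always passing for an honest prover; soundness holds because answering two different challenges for the same commitment would reveal `x`; and `simulate` hints at zero-knowledge, since transcripts can be forged without the secret.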

Btw, who invented these?

Before diving into the tech spec, here’s the human element to provide you a soft landing.

The notion of ‘zero knowledge’ was first proposed in 1985 by MIT researchers Shafi Goldwasser, Silvio Micali and Charles Rackoff.

Shafi Goldwasser, Silvio Micali and Charles Rackoff

As a peculiar origin story, the original inventors were entertaining an idea of “mental poker”: how to play poker over the phone, so that you could be certain that the other player (whom you can’t see) wasn’t cheating.

In their paper “The Knowledge Complexity of Interactive Proof Systems”, as the name says, they presented a solution which required repeated interactions between the prover and the verifier. These repeated interactions brought a lot of complexity and a large proof size, which made the solution more of a theoretical than practical success.

Think of the example I presented earlier: proving you know the password of your friend’s Twitter account by posting a tweet under your friend’s name. One tweet might not fully convince your friend, but several tweets most likely will. This is a simplification of what ‘Interactivity’ means.
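The repetition idea above can be sketched in a few lines. This is my own illustrative simulation of a generic protocol with a 1-bit challenge per round, not any specific proof system:

```python
import random

# A cheating prover who doesn't know the secret must commit to an answer
# before seeing a 1-bit challenge, so each round is passed with probability
# 1/2. Repeating n rounds drives the cheating success probability to 2^-n.
rng = random.Random(42)    # seeded for a reproducible demo

def cheater_passes_round():
    guess = rng.randrange(2)       # cheater commits to a guess
    challenge = rng.randrange(2)   # verifier's coin flip
    return guess == challenge

def cheater_survives(n_rounds):
    return all(cheater_passes_round() for _ in range(n_rounds))

print(0.5 ** 20)                   # analytic bound after 20 rounds: ≈ 9.5e-07
wins = sum(cheater_survives(20) for _ in range(10_000))
print(wins)                        # almost certainly 0
```

This is why the original interactive protocols needed many rounds to be convincing, and why their transcripts grew so large.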

Much later, in 2012, a practical non-interactive protocol (the zk-SNARK) was introduced, enabling smaller proof sizes and more practical utility.

Looking at the original inventors today, Silvio Micali is the founder of the Algorand L1 blockchain, and thus very much operationally involved in the blockchain space today. Shafi Goldwasser has been lately giving plenty of lectures on the historical origins of Zero-Knowledge. Charles Rackoff continues to be a Professor Emeritus in Computer Science at the University of Toronto.

here are some HYPOTHETICAL examples of ZKP

1. Mortgage Risk Assessment

Typically, your bank teller would look at all available information about you to assess your creditworthiness for a mortgage. With a Zero-Knowledge Proof, it is possible to prove that, e.g., your salary and certain other criteria are above the required thresholds, without actually revealing the numbers, and get the mortgage approved.

2. Proving you are over 18 years old

It feels funny when people keep asking for your ID and you’re already 36 years old (happened to me last week). Usually, they also see all kinds of other data (country, expiry date, etc.) from your ID or passport. It’s not really optimal that one needs to disclose all this irrelevant data in one go. With Zero-Knowledge Proofs, the verifier can rest assured you are over 18, but without knowing exactly how old you are, where you are from, or your other personal details.

As you can extrapolate from the example above, you can think of myriad different identity scenarios where ZKPs will be useful.

Zero-Knowledge tech can prove you are over 18 without the need to disclose the number. (Pic from MINA video)

practical BLOCKCHAIN examples already in use

1. PRIVACY COINS

Zcash is a Bitcoin fork that uses Zero-Knowledge tech (zk-SNARK to be specific) to prove that all requirements for a valid transaction are satisfied without revealing additional details, thus enabling private transactions.

2. Mixers

The infamous Tornado Cash protocol (controversially sanctioned by the US OFAC) also uses Zero-Knowledge tech to mix incoming coins from different users, thus achieving privacy. Tornado Cash is used by many protocol hackers, but at the same time by people with legitimate needs. (Btw, privacy is a normal thing.)

3. Ethereum L2 Zk-Rollups

The most common use case for Zero-Knowledge tech is actually not in privacy but in transaction scalability. The next chapter and major part of this article explore this further.

zk-Rollups for Ethereum L2 scaling solutions

We have finally arrived at the actual topic we want to deep-dive into today.

How do Zero-Knowledge Proofs support scaling in certain Ethereum L2 solutions? These L2 solutions are referred to as zk-Rollups, and the leading ones are developed by teams at Matter Labs (zkSync), Scroll, and Polygon.

Ironically, the majority of these zk-Rollup solutions do not have transaction-shielding privacy features.

We earlier mentioned three required properties for Zero-Knowledge Proofs: Completeness, Soundness and Zero-Knowledge.

The majority of zk-Rollups don’t have the 3rd property of Zero-Knowledge. Transactions on these L2 chains are not really private by nature. (An exception to the rule is Aztec Network, which has privacy-preserving features)

As we learned earlier, the term ‘Zero-Knowledge’ is a bit misleading in the context of zk-Rollups.

The Zero-Knowledge tech is used to reduce the size of the transaction CALLDATA that is published to the blockchain. On-chain data is a heavy burden on blockchains, and reducing CALLDATA size decreases transaction costs and increases speed. Transaction data can be compressed, and a proof generated, to perform a state change.

In other words, Zero-Knowledge tech is used to calculate validity proofs of transactions that are posted to L1, where they can be independently verified. Verifying a proof is faster than re-executing the computation, which brings the scalability benefit (e.g., compared to Optimistic Rollups).

We’ll get down to this in much more detail in a sec.

Let’s first explore two major implementations of Zero-Knowledge Proofs: ZK-SNARK and ZK-STARK.

Different ZKP versions: ZK-SNARK and ZK-STARK

There are two major implementations of Zero-Knowledge tech: ZK-SNARKs and ZK-STARKs.

ZK-SNARKS

As we learned earlier, Zero-Knowledge tech was more of an academic than a practical success for decades. A lot of research went into changing the ‘interactive’ feature to ‘non-interactive’. This major improvement was introduced under the name ZK-SNARK.

The term ZK-SNARK was first introduced in 2012 in a paper co-authored by Alessandro Chiesa, a professor at UC Berkeley.

Interestingly, Mr Chiesa is also a co-founder of Zcash and StarkWare, and the author of libsnark, a C++ library for ZK-SNARKs. An impressive bio indeed — he had cryptography pioneers surrounding him throughout his studies. His M.Sc. thesis was supervised by Ron Rivest (co-inventor of RSA) and his PhD by Silvio Micali (co-inventor of ZKP and co-founder of Algorand) at MIT.

Alessandro Chiesa, co-author of a paper in 2012 in which he coined the term zk-SNARK. See his website here.

So what do these 5 letter acronyms stand for?

  • zk-SNARK = Zero-Knowledge Succinct Non-interactive ARgument of Knowledge

  • zk-STARK = Zero-Knowledge Scalable Transparent ARgument of Knowledge

Fortunately, these are pretty descriptive. Let’s have a look at each word separately in SNARKs:

  • "Zero-knowledge": We already know this part. The proof itself reveals no information about the underlying information.

  • "Succinct": This means that the proof is short and can be verified quickly, making it more efficient than other types of proofs.

  • "Non-interactive": This refers to the fact that the proof does not require any interaction between the prover and verifier. In other words, the proof can be verified without any communication between the parties. This was the major technical breakthrough.

  • "Argument of Knowledge": This means that the proof provides a convincing argument for the statement/knowledge being made, but it doesn't necessarily provide any additional information.

All in all, a ZK-SNARK is a tool that allows one party to prove to another that they know a certain value, without revealing the value itself. The proof is quick to verify, requires no interaction, and proves only knowledge of the value, not the value itself.

ZK-SNARKs have been around for 10+ years and, compared to ZK-STARKs, have naturally built a larger community and richer developer tooling over time. Both have their distinctive features and suitable use cases, though. Let’s continue and learn about ZK-STARKs.

ZK-STARKs

ZK-STARK was introduced much later, in 2018, by Eli Ben-Sasson, Iddo Bentov, Yinon Horesh and Michael Riabzev.

Co-authors of ZK-STARK: Eli Ben-Sasson, Michael Riabzev and Iddo Bentov. Yinon Horesh is also a co-author, and apparently follows the zero-knowledge principle the most: I couldn’t find any photo of him online.

The first few pages of the original ZK-STARK research paper are a fascinating read. I’ll just copy-paste a few snippets directly, and you’ll get the hang of it:

“Zero-knowledge (ZK) proof systems are an ingenious cryptographic solution to this tension between the ideals of personal privacy and institutional integrity, enforcing the latter in a way that does not compromise the former. Public trust demands transparency from ZK systems, meaning they be set up with no reliance on any trusted party, and have no trapdoors that could be exploited by powerful parties to bear false witness.”

“For ZK systems to be used with Big Data, it is imperative that the public verification process scale sublinearly in data size. Transparent ZK proofs that can be verified exponentially faster than data size were first described in the 1990s but early constructions were impractical, and no ZK system realized thus far in code (including that used by crypto-currencies like Zcash™) has achieved both transparency and exponential verification speedup, simultaneously, for general computations”

“Here we report the first realization of a transparent ZK system (ZK-STARK) in which verification scales exponentially faster than database size, and moreover, this exponential speedup in verification is observed concretely for meaningful and sequential computations, described next”

Let’s revisit the acronyms:

  • zk-SNARK = Zero-Knowledge Succinct Non-interactive ARgument of Knowledge

  • zk-STARK = Zero-Knowledge Scalable Transparent ARgument of Knowledge

Next, let’s break down each word separately in zk-STARK:

  • "Zero-knowledge": (the same as in SNARKs.)

  • "Scalable": This means that the proof can be used to prove statements about a large amount of data, without needing to include all of the data in the proof. This allows zk-STARKs to be more efficient than other types of zero-knowledge proofs, particularly when the amount of data is large. zk-STARKs' proof sizes do not depend on the amount of data.

  • "Transparent": zk-SNARKs use non-public randomness to generate a key and rely on a trusted setup (which can be an attack surface). zk-STARKs do not require any such setup and can be computed without any trusted third party, hence they are transparent.

  • "Argument of Knowledge": (the same as in SNARKs.)

Okay, let’s do some comparisons between SNARK and STARK:

Image source: Elena Nadolinski's slides from Devcon4

As we can see, the proof size of SNARKs is over 100x smaller than STARKs. The proof is short and can be verified quickly.

However, if you are operating on a large amount of data, ZK-STARK might be a better choice. ZK-STARK proof sizes grow only polylogarithmically with the amount of data, so they can efficiently prove statements about much larger amounts of data than ZK-SNARKs.

Then there are some other (theoretical/long-term) considerations. One is post-quantum security. Zk-STARKs use hash functions that are thought to be resistant to quantum computer attacks. Zk-SNARKs are considered not to be quantum secure because they rely on the Elliptic Curve Discrete Logarithm Problem (ECDLP). A sufficiently powerful quantum computer would be able to crack the ECDLP in polynomial time. The line is getting a bit blurred lately, though, and there is ongoing research into making SNARKs quantum secure as well.

Next, let’s pick ZK-SNARK and explore it a bit deeper under the hood — how the tech actually works. If you’re reading about Zero Knowledge for the first time, you are probably already overwhelmed by the new terms and concepts. Congratulations on making it this far.

And apologies in advance — there will be much more new jargon in the next chapter. Feel free to skip the next chapter if you’d rather get a high-level understanding first and prefer to learn the tech part later.

ZK-SNARKS PROOF GENERATION UNDER THE HOOD

An article about Zero-Knowledge Proofs wouldn’t be complete without a small section on how things work under the hood.

I originally planned to write a comprehensive step-by-step explanation of how the moon math works. I quickly realized, though, that this would 10-20x the length of the article, and take ages to write — which would not make sense. There are incredibly many math/cryptography concepts involved. All in all, it is a nontrivial process. There are some extremely complex bits (polynomial commitments — phew). That said, most of the concepts are not that hard at all. They simply take a lot of space to explain.

So, in this blog post, I’ll give a rough explanation of each step and give pointers to learn more about math should you be interested.

ZK-SNARK proof generation step-by-step:

Image source: Imran Bashir, Mastering Blockchain (book)

Let’s explore each step separately.

Computation:

To be able to utilize zk-SNARKs, you first need to convert the problem into the right form. This form is called QAP (Quadratic Arithmetic Program).

QAP is an equation composed of a vector of values and three vectors of polynomials. Transforming the code of a function into one of these is complicated, and we need to take a series of steps to get there.

Image source: Zcash “What Are zk-SNARKs”

Arithmetic Circuit:

The first step is to convert the computation into an arithmetic circuit: a collection of arithmetic gates (addition, subtraction, multiplication, division) and wires that perform specific operations on input variables.

See an example on the left: (a+b) * b * c is converted into an arithmetic circuit.
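Evaluating such a circuit gate by gate can be sketched in a few lines of Python. This is a toy illustration of my own (not part of any proving system): each gate consumes input wires and produces an intermediate wire, and this flattened one-operation-per-line form is what the next step works on.

```python
# Toy sketch: evaluate (a + b) * b * c one gate at a time.
# Each gate produces an intermediate "wire" value; this flattened,
# one-operation-per-line form is what gets translated into R1CS next.

def evaluate_circuit(a, b, c):
    w1 = a + b    # gate 1: addition gate
    w2 = w1 * b   # gate 2: multiplication gate
    out = w2 * c  # gate 3: multiplication gate
    return out

print(evaluate_circuit(2, 3, 4))  # (2 + 3) * 3 * 4 = 60
```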

R1CS (Rank-1 Constraint System)

R1CS format is useful because it provides a way to translate complex arithmetic circuits (from the previous step) into a simple mathematical structure that can be analyzed and verified efficiently.

The important thing to understand is that an R1CS is not a program that produces a value from certain inputs. Instead, an R1CS is a verifier: it demonstrates that an already completed computation is correct.

R1CS involves three vectors a, b, and c. The prover also provides a vector s (the “witness”). Given these vectors, the following equation needs to be satisfied for each constraint:

(a . s) * (b . s) - (c . s) = 0

The dot (.) inside the equation refers to the dot product of vectors.

This transformation into R1CS is done for each of the gates from the previous step. For example, if we had 5 gates in the first step, at the end of the second step we would have a series of 5 vectors for each of a, b, and c.

To fully understand how R1CS works, we would need to go through vector and dot-product math. In case you're interested, this and this are two good primers. This part is not that difficult; it just takes a lot of space to explain fully.
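As a concrete illustration, here is the R1CS check in plain Python. The constraint vectors below are my own toy example, not derived from a real circuit: the witness layout is s = [1, x, y] and the single constraint encodes x * x = y.

```python
# Hypothetical R1CS check: for each constraint (a, b, c),
# the witness s must satisfy (a . s) * (b . s) - (c . s) = 0.

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def r1cs_satisfied(constraints, s):
    # constraints is a list of (a, b, c) vector triples, one per gate
    return all(dot(a, s) * dot(b, s) - dot(c, s) == 0 for a, b, c in constraints)

# Toy constraint encoding "x * x = y", with witness layout s = [1, x, y]:
constraint = ([0, 1, 0], [0, 1, 0], [0, 0, 1])

print(r1cs_satisfied([constraint], [1, 3, 9]))   # True:  3 * 3 == 9
print(r1cs_satisfied([constraint], [1, 3, 10]))  # False: 3 * 3 != 10
```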

Alright, so we have our R1CS ready now. This is only an intermediate representation of getting to QAP (to enable verification without revealing the secret variables).

You need to learn to love these types of illustrations if you’d like to learn R1CS and QAP. For now, forget trying to figure this graph out, it’s just for illustration… (Source / Author: Misha Volkhov)

QAP (Quadratic Arithmetic Program)

QAP is a method for representing the vectors (from R1CS) as a system of polynomials.

Why do we want to do this? It makes the prover's task simpler and more efficient. With QAP we can check all of the constraints at the same time on the polynomials (instead of checking the constraints individually like in the R1CS stage).

So, how can we create the QAP polynomials from the R1CS vectors?

This gets tricky to summarize briefly. Basically, we want the polynomials to implement the exact same logic (but in polynomial format instead of vectors). When we evaluate the polynomials at each coordinate x, the result should represent one of the constraints. E.g. x=1 would give us the first set of vectors (that we got in the R1CS stage), x=2 would give the second set, x=3 the third, and so on.

This transformation can be completed with Lagrange interpolation. If you have a set of (x, y) coordinates (points), with this method you can create a polynomial (graph) that passes through all of these (x,y) coordinates.
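Here is a minimal Lagrange interpolation sketch (my own, written over the rationals for readability; real QAPs work over a finite field). Given a few points, it builds the unique polynomial passing through all of them, which is exactly how the per-constraint values get packed into one polynomial.

```python
# Lagrange interpolation over the rationals (real QAPs use a finite field).
from fractions import Fraction

def lagrange_interpolate(points):
    """Return a function x -> P(x) for the unique polynomial through `points`."""
    def P(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    # Basis polynomial: 1 at x = xi, 0 at every other xj
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return P

# Pack three toy "constraint values" at x = 1, 2, 3 into one polynomial:
P = lagrange_interpolate([(1, 5), (2, 7), (3, 11)])
print(P(1), P(2), P(3))  # 5 7 11 -- each evaluation recovers one constraint
```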

Ok. So you have a bunch of polynomials now. What's the point? Now you can check all of the constraints with a single check on the polynomials (instead of checking them individually, like in the R1CS stage).

Certain math tricks make operating on polynomials really efficient.

Vitalik has written a good explainer article on QAP which I used for my summary above. Here’s another clear explainer.

ZK-SNARK:

Done. We did skip some steps and concepts, but now we can use the QAP in the ZK-SNARK protocol to prove the assertion between the prover and the verifier.

So, what parts did we actually omit describing ZK-SNARK generation?

Quite many parts, here’s a non-exhaustive list:

  • Polynomial commitments, a kind of polynomial "hash" that allows one to verify an equation between polynomials in a very short amount of time. The major schemes are called Bulletproofs, Kate (also known as KZG) and FRI. These are really complex concepts, and you need to understand everything else before it makes sense to try to understand these. I haven't personally tried to wrap my head around these yet.

  • Fast Fourier Transform (FFT). The FFT algorithm allows for the efficient computation of the coefficients of polynomials, essentially making ZK-SNARK generation and verification fast enough to be practical. Also, probably not worth figuring out unless you plan to become a ZK engineer.

  • PLONK. PLONK is the most modern zk-SNARK proof system. Previous zk-SNARK versions required a new trusted setup for any new circuits. PLONK has a universal trusted setup. It can be initiated once and used by all circuits. It’s also updatable (new randomness can be added). More PLONK reading resources here.

  • Homomorphic encryption and homomorphic hiding. This is actually quite an interesting and easy-to-understand concept. You can find an explanation in this article.

  • Elliptic Curve Pairing (and Elliptic Curve Cryptography).

  • All of the above also requires an understanding of basic cryptography such as public key encryption, digital signatures etc. This is a great book refreshing memory on these basic concepts. It’s beginner-friendly but goes deep enough. I really enjoyed reading it.
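The homomorphic-hiding bullet above is simple enough to show in code. The sketch below is my own toy example with deliberately small, insecure parameters: E(x) = g^x mod p hides x, yet multiplying two hidings yields the hiding of the sum.

```python
# Toy homomorphic hiding: E(x) = g^x mod p.
# E(x) hides x, but E(x) * E(y) mod p == E(x + y),
# so values can be added "under the hood" without revealing the inputs.

p, g = 1_000_003, 5  # tiny illustrative parameters, NOT secure values

def E(x):
    return pow(g, x, p)

x, y = 1234, 5678
assert E(x) * E(y) % p == E(x + y)  # the addition happens in the exponent
print("homomorphic addition holds")
```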

It is really hard and complex

Do I understand the inner mechanics of every step? Far from it. One basically needs years of math, computer science and cryptography experience to fully understand the stack. Perhaps you can even spot some inaccuracies in the explanation above. If so, please let me know.

That said, if you’re an engineer and would like to enter the rabbit hole, there’s plenty of material, courses and even boot camps, compared to just a few years ago. For example, 0xParc has a lot of resources, and they even host this cool ZK Spring Residency in Vietnam later this year.

“Ok tnx - I didn’t understand much from the previous chapter. What should I remember?”

In the context of Ethereum L2 scaling, ZK-SNARKs are all about verifying computation.

The main idea you need to remember is that ZKPs allow you to verify millions of steps of computation very quickly. There is no need to redo the computation (which would take a long time) to verify its honesty.
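ZK-SNARKs themselves are far too involved to demo here, but the underlying principle (verification being much cheaper than re-execution) predates them. Freivalds' algorithm, sketched below as my own illustrative example, probabilistically verifies a claimed matrix product in O(n^2) per round instead of redoing the O(n^3) multiplication.

```python
# Freivalds' algorithm: verify a claimed matrix product C = A @ B
# in O(n^2) per round instead of recomputing it in O(n^3).
import random

def freivalds_check(A, B, C, rounds=20):
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr; a wrong C fails with prob >= 1/2 per round
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad  = [[19, 22], [43, 51]]
print(freivalds_check(A, B, C_good))  # True
print(freivalds_check(A, B, C_bad))   # almost certainly False
```

A real SNARK verifier enjoys the same kind of asymmetric advantage, but deterministically, succinctly, and (optionally) in zero knowledge.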

Ok. Enough math and acronyms for this blog post.

Let’s proceed to a higher level and see how ZKPs are utilized in Ethereum scaling.

Ethereum scaling: zero-knowledge rollups

As we covered earlier, Zero-Knowledge tech in Ethereum is mostly used for scaling, in the form of L2 ZK-Rollups.

You’ve probably heard of Optimism and Arbitrum. They were the first Ethereum L2 scaling solutions using ‘Optimistic Rollup’ technology. Optimistic rollups are considered “optimistic” because they assume off-chain (L2) transactions are valid and don't publish validity proofs for the transaction batches posted on-chain on L1.

Optimistic rollups rely on a fraud-proving scheme and a “challenge period” during which anyone can challenge the results of a rollup transaction. This separates optimistic rollups from zero-knowledge rollups, which publish cryptographic proofs of validity for L2 off-chain transactions.

The mainnet of both Optimism and Arbitrum went live in 2021. They helped to scale Ethereum and decrease transaction costs. However, there is still a lot of room for improvement in scaling.

Optimistic Rollups have always been a bit of a temporary solution to scaling. Zk-Rollups are more complicated but will enable even more scaling when the tech improves. Let’s dig into the specific comparison.

Optimistic rollup vs. zk-Rollups

You could almost write a book about the differences between Optimistic Rollups and Zk-Rollups. We’ll keep it short in this blog post.

So, what are the advantages of Zk-Rollups? Scalability, fast withdrawal, and privacy features

The main advantage of Zk-Rollups is scalability — they can process more transactions, and scale even further in the future when the tech improves (over 2000 TPS compared to 500 TPS with Optimistic Rollups).

Zk-Rollups also have a fast withdrawal period between L1 <-> L2 (next block, compared to one week with Optimistic Rollups).

Most Zk-Rollups currently use Zero-Knowledge tech only for the calculation of validity proofs, i.e. for scaling throughput. In the future, privacy-related features are easier and more natural to implement. One more plus point for Zk-Rollups.

Disadvantages of zk-Rollups? Complexity, EVM (in)compatibility and computation costs

However, the scalability comes with a considerable ‘cost’ in other areas. Zk-Rollups are extremely complex. There are not many computer scientists on the planet who truly understand the entire stack. This also makes auditing the code more challenging. There's a lot of effort and resources being poured into the ZK space though, and many have been surprised by the speed of development during the last couple of years.

Implementing the EVM (Ethereum Virtual Machine) is also much harder for ZK-Rollups. Different ZK-Rollups have made certain compromises and only support a majority of the opcodes. Thus, Solidity code might require some slight changes to work on a ZK-Rollup. Vitalik also wrote an article about EVM compatibility and created different categories. We’ll explore EVM compatibility in more detail later in this article.

Finally, the off-chain computation costs can be high on a ZK-Rollup. Generating the proofs requires specialized hardware. Some ZK-rollup projects (e.g. Scroll) are even exploring utilizing ASICs to generate these proofs and thus create a new decentralized proof market (a bit like Bitcoin mining, but not quite the same).

To put all the above arguments in a table for easier comparison, we get this:

Source: TokenInsight

Now that you understand the main differences, Vitalik’s prediction in 2021 on Optimistic Rollups and ZK-Rollups is fairly logical:

“In general, my own view is that in the short term, optimistic rollups are likely to win out for general-purpose EVM computation and ZK rollups are likely to win out for simple payments, exchange and other application-specific use cases, but in the medium to long term ZK rollups will win out in all use cases as ZK-SNARK technology improves.”

The “medium to long term” time frame up there, however, has moved up significantly with the recent zk-EVM compatibility that was thought to be years ahead.

If you’d like to read a more detailed comparison between zk-Rollups and Optimistic Rollups, here’s a deep dive by Suning Yao. Vitalik has also written a good article on Rollups: An Incomplete Guide to Rollups, which I’ve used as the basis for my summary above.

And oh btw, Optimistic Rollup vs. ZK-Rollup is a heated debate. Optimistic Rollup people don’t seem to be happy about the hype around ZK-Rollups… grab your popcorn and enjoy this tweet thread, and the response thread.

From a simple ZK-Rollup to ZK-EVM

To be precise, ZK-Rollup and ZK-EVM are two different things. Or rather, ZK-EVM is the advanced version of ZK-Rollup, with EVM compatibility.

ZK-Rollups were developed already a couple of years ago as L2s for Ethereum (think of Loopring or the early versions of zkSync). They execute transactions faster than optimistic rollups, thanks to cryptographic proofs that verify their transactions in a batch. However, the earliest ZK-rollups could only perform simple transactions (think of transfers, atomic swaps), and not full-scale smart contracts. They were not EVM (Ethereum Virtual Machine) compatible.

ZK-EVMs are advanced Zk-Rollups and support the EVM. This enables (near) direct portability of Solidity code, allowing developers to build just like they would be building on the Ethereum L1 mainnet. ZK-EVMs’ speed of development has indeed been fast, surprising Vitalik himself as we saw above. Three ZK-EVM projects are heading towards mainnet launch this year in 2023 (zkSync, Scroll and Polygon zk-EVM).

Different zk-EVMs: compatibility differences

Have a quick look at the previous graph in this article — row 4. We saw that “EVM Compatibility” with zk-Rollups says “Harder compatibility - Solidity code must be adjusted”.

This particular aspect is what separates the different zk-EVM projects from each other.

Generally, different zk-EVMs make different compromises with performance and EVM compatibility. Or in other words, between speed and practicality.

Vitalik has written an entire blog post about the categorization. He defined four categories: Type 1, Type 2, Type 3 and Type 4. Type 1 is the most compatible, but has the slowest prover times. Type 4 is the least compatible, but has the quickest prover times.

In the current stage of development, achieving perfect compatibility has a diminishing return.

Thus, none of the four different types is necessarily better than the other. It’s a good thing that different projects explore different tradeoffs.

The differences get technical and nuanced, and I’ve summarized Vitalik’s categorization here for the big picture.

Vitalik’s zk-EVM categorization:

Type 1 zkEVMs: Fully Ethereum-equivalent

  • Advantage: perfect compatibility with Ethereum itself

  • Disadvantage: the slowest prover times

  • Who's building it? Taiko aims for Type 1, and Ethereum Foundation's PSE team is working towards it together with Scroll.

Type 2 zkEVMs: Fully EVM-equivalent (not Ethereum-equivalent)

  • Advantage: perfect equivalence at the VM level

  • Disadvantage: improved but still slow prover time

  • Who's building it? Scroll, Polygon zkEVM and Consensys zkEVM are aiming to be Type 2, though they are currently Type 3.

Type 3 zkEVMs: Almost EVM-equivalent

  • Advantage: easier to build, and faster prover times

  • Disadvantage: more incompatibility

  • Who's building it? Scroll, Polygon zkEVM and Consensys zkEVM are all currently Type 3 (while working towards Type 2).

Type 4 zkEVMs: High-level-language equivalent

  • Advantage: very fast prover times

  • Disadvantage: more incompatibility

  • Who's building it? ZKSync is a Type 4 system, though it may add compatibility for EVM bytecode over time.

Finally, it’s also good to note that zk-EVM projects can over time develop and shift to lower-numbered types or higher-numbered types.

Vitalik’s categorization of zkEVM differences: the tradeoff between Compatibility and Performance (source)

Comparing zk-EVM projects

zkSync:

  • Type-4 zkEVM compatibility (language-level EVM compatibility).

  • Utilizing SNARKs.

  • ZkSync supports Solidity (translated via the Yul language), Vyper and LLVM.

  • Currently in ‘baby mainnet’ (= limited access). The actual mainnet is to be launched in 2023. ZkSync has promised to go open source by their next release (Fair Onboarding Alpha onwards).

  • What can I do now? (Feb/2023) Use zkSync on the testnet! And if you’ve been invited, also on their ‘baby mainnet’. zkSync was first to market and already has a fairly wide variety of dapps.

Scroll:

  • Aims to be Type-2 zkEVM compatible (with Bytecode-level compatibility).

  • Utilizing SNARKs.

  • Aims to be ‘faithful’ to EVM compatibility — Solidity is directly compiled without any other language in between.

  • Scroll collaborates with Ethereum Foundation’s Privacy and Scaling Explorations team to eventually get as close to Type-1 compatibility as possible.

  • Currently in Private pre-alpha testnet, planning to launch mainnet in 2023. Open-source.

  • What can I do now? (Feb/2023) Register in the pre-alpha testnet!

Polygon zkEVM (ex Hermez):

  • Aims to be Type-2 zkEVM compatible (with Bytecode-level compatibility via interpreter).

  • Utilizing both SNARKs and STARKs.

  • Currently in public testnet, mainnet planned for early 2023. Open-source.

  • Polygon has five different teams building different Ethereum scaling solutions (Polygon zkEVM, Polygon Miden, Polygon Edge, Polygon Zero, Polygon PoS). Yeah, it’s confusing. I’ve understood that they have a lot of expertise under the same roof, which has enabled them to learn a lot from each other.

  • What can I do now? (Feb/2023) Explore their public testnet!

Taiko:

  • Aims to be Type-1 zkEVM compatible (fully Ethereum-equivalent)

  • Thus, Taiko plans to prioritize compatibility over proof generation cost.

  • Taiko plans to support the same hash functions, state trees, transaction trees, precompiled contracts, and other in-consensus logic, which makes Taiko’s developer experience as smooth as it can get (no cognitive load: everything works the same way as on Ethereum L1)

  • Taiko collaborates with Ethereum Foundation’s PSE team

  • The “Taiko A1” Alpha-1 testnet is phasing out on 15th Feb 2023, with a new Alpha-2 testnet launching around a month after

  • What can I do now? (Feb/2023) Try Taiko’s Alpha-2 testnet when it launches in March 2023

Starknet:

  • Type-4 zkEVM compatible (with language-level EVM compatibility).

  • Utilizes STARKs. These are arguably more secure than ZK-SNARKs (no trusted setup, and quantum-resistant hash functions), but the proofs take longer to verify and require more gas.

  • Supports Cairo language and Solidity (via transpiler).

  • Alpha mainnet launched in Q4 2021 but remains limited. Closed source.

  • What can I do now? (Feb/2023) Explore Starknet’s mainnet!

Privacy & Scaling Explorations team at Ethereum Foundation (ex. AppliedZKP):

  • The PSE team is not really an L2, but is worth mentioning here because of its close collaboration with the Scroll team to develop a Type-1 zkEVM

  • PSE explores new use cases for zero-knowledge proofs and other cryptographic primitives through research and proof-of-concepts.

Comparison of zk-Rollup projects, and their level of EVM compatibility. (Source)

OTHER

Aztec Network: Aztec Network is also an L2 on Ethereum, with the speciality of being entirely private (shielded transactions). But Aztec is really different from the other rollups. It’s not a traditional rollup with its own liquidity. Protocols don’t deploy their whole protocol on Aztec; instead, they add a set of smart contracts using Aztec Connect, which enables users to access the protocol through it. Major DeFi protocols have their integrations in place with Aztec. This method preserves liquidity and composability on L1. Aztec is not EVM compatible.

Latest news

While I was writing this article, there was interesting news coming up. Here’s just a few of the many.

Consensys zkEVM: On December 13th 2022, Consensys launched a private beta zkEVM testnet to scale Ethereum. Consensys’s zkEVM handles native EVM bytecode, thus enabling support for existing developer tools and infrastructure. In Vitalik’s categorization, Consensys zkEVM is considered to be Type 2, just like Scroll and Polygon zkEVM.

HyperOracle: HyperOracle is also a really early-stage team, having just raised 3 MUSD. HyperOracle is a Web3 zkMiddleware that aims to use the HyperOracle Node to capture and cache smart contract states and generate proofs of them, so that any data can be transferred across different blockchains, blocks, and points in time.

Sovereign: Sovereign went public on Jan 30th, 2023 with a 7.4 MUSD fundraise. They aim to be an open, interconnected rollup ecosystem. Their goal is to enable all developers to deploy seamlessly interoperable and scalable rollups that can run on any blockchain. The Sovereign SDK is the framework they are developing to create secure and interoperable sovereign zk-rollups.

Latest news: You can find the latest news on Zero Knowledge via Coindesk’s tag search here.

ZK-EVM RACE TO MAINNET

There’s been massive funding to different ZKP teams to build L2 scaling solutions on Ethereum. In the previous chapter, you learned about all the players in the field: zkSync, Scroll, Polygon zkEVM, StarkNet, Taiko, Consensys, etc.

There’s been a race to be the first one to release general-purpose zkEVM on mainnet, which would support smart contracts and porting dapps.

Three different teams have already announced they will bring their zk-EVM solutions to the mainnet stage during this year 2023.

As of writing this article in Feb 2023, that hasn’t happened yet. zkSync is closest to the goal, having launched ‘baby mainnet’, which means it is still in internal testing without public access.

If you are reading this article just a few months after being published, things most likely have already changed.

Final thoughts

Ethereum is the most popular smart contract blockchain and it has by far the largest amount of developers of any public blockchain. However, the Ethereum mainnet has become expensive and slow. Thus, there’s a need for scaling. Ethereum is following the ‘roll-up centric roadmap’.

Optimistic Rollups (Optimism and Arbitrum) command the largest Total Value Locked right now. Both have their mainnets live and the most important dapps ported. Optimism has also launched its OP token; Arbitrum is expected to launch its token soon.

The scalability which Optimistic Rollups provide won’t be enough for the future block space demand.

Thus, Zero-Knowledge rollups (ZK-Rollups) have been developed, and they promise much more scalability and privacy-related functionality. ZK-Rollups are technically much more complex to build, but a lot of money has been poured into several teams, even to the extent that the development has become a bit too fragmented.

The recent game changer with zk-Rollups has been EVM support (= zk-EVMs). If you are a Solidity developer, this is like grace from heaven: you can pretty much deploy your code on a zk-EVM just as you would on the Ethereum L1 mainnet, and you need to know absolutely nothing about Zero-Knowledge Proofs.

However, zk-EVMs are different under the hood, and they make a different set of compromises between Performance and Compatibility (From Type 1 to Type 4).

Vitalik personally hopes that all zk-EVM projects will eventually become “Type 1”. That is, fully Ethereum-equivalent, which would enable not only using the same Solidity code, but also all the same developer tools, opcodes, etc. Vitalik also hopes that Ethereum itself will improve to become more ZK-SNARK friendly.

However, it is still a faraway vision.

All zkEVMs support Solidity code. True Ethereum-equivalence is still a much larger challenge and has unsolved technical aspects. Most ZK-EVM projects are ‘selling’ a 3+ year future vision of their capabilities. Only time will tell whether these promises are delivered.

Nevertheless, there is a lot of innovation and engineering happening as we speak, both in ZK-EVMs and in Ethereum itself. It’s a good thing that different teams have different emphases on the Performance vs Compatibility spectrum.

Thank you

I’ve done a lot of research and reading while writing this article.

Credits to Vitalik Buterin, Massimo Bertaccini, Imran Bashir, Alex Gluchowski, Haichen Shen, David Schwartz, Eshita Nandini, Jerry Sun, Panther protocol team, Eli Ben-Sasson, Iddo Bentov, Yinon Horesh, Michael Riabzev, Shafi Goldwasser, Alessandro Chiesa, Madars Virza, Eran Tromer, Suning Yao, Maksym Petkus, Alex Connolly, Elena Nadolinski, Misha Volkhov, L2 Beat.

My article rests on the shoulders of your writings and research for the most part. I’ve provided the full reference list below.

Email newsletter

If you’d like to stay updated on my future crypto articles, you can subscribe to my newsletter here.

References

Vitalik Buterin’s writings about Zero-Knowledge over the years in chronological order:

Vitalik has written a lot of great articles on Zero-Knowledge Proofs. Most of them are not beginner-friendly and go deep into math. For a high-level understanding, I would recommend reading article number 9 and 14 below.

I would not try to understand ZK-stuff only through Vitalik’s articles (many have tried…). There’s better beginner-friendly material, e.g. I really enjoyed this book to refresh my mind on basic cryptography.

  1. Quadratic Arithmetic Programs: from Zero to Hero - 10th Dec, 2016 (this additional article and visualizations help you to understand the math in Vitalik’s article)

  2. Zk-SNARKs: Under the Hood - 1st Feb, 2017

  3. STARKs, Part I: Proofs with Polynomials - 9th Nov 2017

  4. STARKs, Part II: Thank Goodness It's FRI-day - 22nd Nov 2017

  5. STARKs, Part 3: Into the Weeds - 21st Jul, 2018

  6. Fast Fourier Transforms - 12th May, 2019

  7. The Dawn of Hybrid Layer 2 Protocols - 28th Aug, 2019

  8. Understanding PLONK - 22nd Sep, 2019

  9. An Incomplete Guide to Rollups - 5th Jan, 2021

  10. An approximate introduction to how zk-SNARKs are possible - 26th Jan, 2021

  11. The Limits to Blockchain Scalability - 23rd May, 2021

  12. How do trusted setups work? - 14th Mar, 2022

  13. Some ways to use ZK-SNARKs for privacy - 15th Jun, 2022

  14. The different types of ZK-EVMs - 4th Aug, 2022

Web 3 marketing: My top-8 principles for growth

Web 3 protocols are well-funded and grow fast — but their marketing budgets are negligible

The year 2021 was an active one for web 3 and crypto companies. The bull market generated a record-breaking amount of investment pouring into the space. Global VC investments totaled $10.5 billion in Q4 2021 alone, more than in the entire year 2020 combined.

However, marketing budgets of these well-capitalized web 3 protocols are usually negligible. Traditional web 2 startups are famous for using nearly 40% of VC funding on paid advertising, primarily on Facebook or Google.

Why is there such a discrepancy between web 2 and web 3 marketing?

First, let’s have a look at channel development over the last 20 years:

Credits to James Currier for the graph — https://www.nfx.com/post/viral-effects-vs-network-effects

The lifecycle of different user acquisition channels has shortened, and the platforms have become saturated with competition.

Strategies to compete for attention have changed dramatically with web 3 products

Marketing in web 3 requires a new set of methods and mental models. Successful marketing in web 3 creates an ‘organic pull’ of curiosity and utilizes word-of-mouth. Community building has much deeper dimensions than before. You might need an invite to be among the first users of a protocol. Joining as an early user might be financially beneficial, so invites are sought after. Protocols build on top of each other in a permissionless way, and also collaborate and spread the word about each other.

Web 3 resembles, in some ways, what the Internet was in the early 90s — a space for open innovation. In the 2000s, the growth of major web 2 platforms turned open innovation into separated walled gardens. Innovation happened only inside the walled gardens, and it was limited and later risky to innovate on top of them. Today we’re seeing strong signs that this cycle is starting to turn around.

My background

I’ve worked as CMO at an early-stage web 3 protocol, ramping up its marketing efforts. I also have online marketing experience from my previous e-commerce business. And I’m a regular user of many web 3 protocols.

I’ve dug up nearly all online resources about web 3 marketing, read books, and studied the methods of successful protocols to educate myself. Besides this, I’ve had numerous conversations with builders, investors and other marketers in the space.

After all this, I have learned a ton and felt compelled to write down and share my learnings.

In this lengthy blog post, I’ve compiled 8 key marketing principles to grow your protocol. For each principle, I provide an explanation, a case study and external resources you can tap into to learn more.

Are you a first-time web 3 protocol Founder/CMO?

You’ve come to the right place — this piece is written for you. I hope you find value.

If you have already worked in the space for a couple of years, you are most likely familiar with the majority of the principles. I hope there are some new insights you can learn from.

Starting a new protocol? Ask these questions:

Before we hop into the marketing principles, let’s touch briefly on the product-level decisions you have to make. As Matias Honorato described in his blog post, winners in the web 3 space will be the ones that can clearly define:

  • Where in the spectrum of centralization vs. decentralization will your product sit?

  • Is this a network-effect-driven business or not? (If yes, The Cold Start Problem is a must-read)

  • What does success look like? (User growth, TVL, developer activity, transaction volume, wallets connected, etc.)

Having a good understanding and answers to these questions helps you to create more effective marketing.

Exit-to-Community: New way to distribute value

Traditional companies have two options for liquidity events: corporate acquisition or IPO. It usually takes years or even over a decade to reach this point. When it happens, founders, investors, and early employees get financially rewarded. The actual users of the product usually hardly notice anything, let alone receive any financial benefit.

The term Exit-to-Community was coined by the Media Enterprise Design Lab and Zebras Unite; have a look at their handbook.

In web 3, things are fundamentally different. Blockchain enables a new paradigm in value distribution.

In web 3, if your protocol/product becomes successful, not only do the founders and investors get rewarded, but also the users, based on their activity and contribution. This is called a retroactive airdrop. A retroactive airdrop is relevant at a later stage, after you’ve built a successful protocol that provides value to a large number of users. I cover this later in Principle #6.

At first, it’s important to build a useful product and community around it. That leads us to the first Principle #1: Community is everything in Web 3.

Table of Content

Introduction

Are you a first-time web 3 protocol Founder or CMO?

Starting a new protocol? Ask these questions

Exit-to-Community: New way to distribute value

Principle #1: Community is everything in Web 3

Principle #2: Curate your first Atomic Network & Create artificial scarcity

Principle #3: Welcome experience & Invite-only strategy

Principle #4: Utilize Bounty platforms to reward users when they complete specific actions

Principle #5: Collaborate & Integrate with value-aligned protocols

Principle #6: Use retroactive airdrop to distribute value to your community

Principle #7: How to acquire users outside of crypto twitter and web 3 natives?

Principle #8: Certain web 2 marketing methods are still useful, too

References

Principle #1: Community is everything in Web 3

Traditional web 2 companies develop a product. The product is purchased by consumers. If the company becomes successful, investors in the company will benefit.

In web 3 these categories of company, user and investor collapse. They collapse into a single economically aligned entity called a community.

Before web 3, the “community” word got slapped on top of many businesses relatively lightly. You got some conversation going on, but different stakeholders didn’t have a similar alignment. Also, access to protocol/product data was not that transparent for a typical user.

In web 3, a successful community looks really different. You can have the most passionate, engaged people you’ve ever seen. In web 3, communities feel different — users can participate and contribute to the success of the network in a way that wasn’t possible before.

[Thanks Amanda Cassatt for the mental model of collapsing interests of stakeholders.]

What does this mean for community building?

It means a whole lot.

One needs to wrap one’s head around how to communicate differently, be transparent, and create an organic ‘pull’ effect for your community. Early users expect that their early contribution will be rewarded with a retroactive airdrop later. This means you can encourage your early users to contribute (and some of them are very much willing to do so).

Twitter is for discovery, Discord is the way to interact and go deep

For the top of the funnel and discovery, Twitter is the place to be. People learn about the latest protocols, news and developments there. For a protocol founder, Twitter is a place to tweet consistently, create tweet-thread explainers and interact with industry professionals.

As a protocol user, to go deep into a specific protocol and interact with community members, Discord is the place. As a protocol founder, that’s a good place to organize all detailed information, onboarding processes and announcements in different channels.

One unique interaction format in web 3 is the AMA (Ask-Me-Anything)

They can be less-official audio-only hangouts organized on Discord or Twitter spaces. Or more official video-AMAs on YouTube.

You can click here to find an example of this type of video AMA. I was ramping up t2.world’s marketing in the early phase and co-hosting t2.world’s first video AMA.

Tool recommendations to start building community:

  • Discord & Twitter metrics: Orbit.love (Tip: measure active users, not just total user count)

  • Private Discord channels & token-gating: Collab Land and Guild.xyz

  • Create Twitter threads: Typefully — Scheduled tweets: Buffer

  • Video livestream for AMAs: Streamyard (this is simply a superb tool)

  • Protocol documentation: Gitbook (most people are used to Gitbook format)

Principle #2: Curate your first Atomic Network & Create artificial scarcity

If you are creating a product that depends on network effects, this is an important principle.

Let’s first explore the Atomic Network idea:

Invite-only strategies and Allow Lists are often described as leveraging FOMO, the fear of missing out. While that is true, it is not the key driver. If your product depends on network effects, it’s not enough for the product to be great.

It’s equally important to curate the first batch of users — the ‘Atomic Network’, as coined by Andrew Chen: who’s on the network, why they are there, and how they interact with each other. This first set of users will set the tone, culture, magnetism and ultimate trajectory of the community.

Suppose the product contains two-sided markets (e.g. buyers/sellers or readers/authors). In that case, the Atomic Network should be curated to be in a healthy balance, with the emphasis on manually recruiting the ‘hard side’ of the network first.

Artificial scarcity is important, too:

There is a certain magic in exclusive experiences. People holding an Allow List spot for your protocol will engage more, post feedback and critique, and tell their friends about it. People without a spot will ask for one, prompting conversation, and sometimes controversy, driven by scarcity and exclusivity dynamics. This again spurs more engagement and attention. It just works.

We can learn these dynamics from web 2 success stories. Let’s take a trip down memory lane:

  • Gmail first launched as an invite-only product in 2004, offering a gigabyte of storage when others were offering megabytes. The original reason wasn’t marketing — their infrastructure of old Pentium IIIs simply couldn’t handle that many users. The invite-only mechanic later turned out to be a key driver behind Gmail’s growth.

  • Facebook initially required a harvard.edu email address to sign up (fostering Atomic Network where everyone trusted each other). Later they expanded to other college campuses, again requiring specific university.edu email addresses to be eligible.

  • Tinder launched at the University of Southern California campus. The founders helped a couple of hyper-connected students throw their birthday party. 500 students showed up, and a bouncer made sure everybody had Tinder installed before they could get in. If you didn’t get a chance to talk to someone during the party, you had a second chance the next day on Tinder. 95% of the people there that night became active users. Tinder replicated this model on other college campuses as its growth strategy, again fostering an Atomic Network where people would trust each other.

[At this point, I’d like to pass credits where they are due: Andrew Chen, who’s the author of The Cold Start Problem. Simply a fantastic book. Part of the paragraphs above are directly from the book.]

Next, let’s examine a more recent case study from web 3 space.

Lens Protocol - NFT Profile minting experience

Lens Protocol is a composable and decentralized social graph that makes building a social media platform easy.

I picked their launch process as a short case study because it was successful (over 70,000 Lens profiles minted) and a great example of utilizing artificial scarcity. On top of that, some other protocols have since utilized a similar playbook.

Chronological order of activities during the Lens launch in spring 2022:

1) Lens received lots of mentions in web 3 media and on Twitter: the AAVE team was building something new in the social space (a “Twitter killer”). AAVE CEO Stani even got banned on Twitter for his marketing stunts.

2) The Lens “Open Letter” was launched on 8th Feb 2022. Essentially, before any product existed, you could read about their mission and values, connect your blockchain wallet and Twitter account, “sign” the letter, and retweet your signature.

3) At launch on 18th May 2022, if you had signed the Open Letter and tweeted your signature before 5th of May, you were qualified to claim your Lens handle.

4) Besides that, builders were given priority access: “Lens community buildoooors will also be able to claim a handle! Addresses belonging to all project members that have built on Lens as part of the recent LFGrow, DAOHacks, and ETHAmsterdam hackathons will be eligible at launch, as well as Top 250 Gitcoin grantees from rounds 9 through 12. Thank you for building on Lens!”

5) Lens had over 30 projects live at launch, so there were plenty of things to go and play around with

My analysis of Lens launch:

  • First of all, many months before the launch, in all quietness, the Lens team had already attracted a significant number of developers through different hackathons to build the first set of applications on Lens. That is, they had done the serious heavy lifting of solving the ‘hard side’ of Atomic Network building, with over 30 published dapps users could interact with.

  • The Open Letter signing was designed for everyone else: normal, web 3 curious users. That was the easier side of the Atomic Network. A bit of artificial scarcity was applied: only the web 3 natives who follow the space closely were early enough to sign the Open Letter and become eligible to mint a Lens handle. Not letting everyone in created a mythical, exclusive feeling and sparked conversations. Heck, even I wrote a tweet thread about Lens. I guess I felt a tiny bit privileged, having been there early enough to sign the Open Letter and mint a Lens handle among the first ones.

  • The Open Letter was a smart way to get people to buy into the mission, warm them up for the next stage, and encourage them to spread the word to their friends on Twitter. I signed the Open Letter as well. The values resonated, and it was a no-brainer.

  • I’ve seen the Lens launch playbook being replicated, e.g. check out Tally Ho’s “Community Pledge” page

Allow List: Another method to curate Atomic Network

Another way of building the Atomic Network, and filtering for suitable users, is to ask people to apply to use your product. This can be a simple Google Form, or a Typeform, which is a bit more user-friendly. You can ask for their name/handle, email and Twitter, plus a few questions related to your product or the problem you are solving.

You can open applications months before the actual product launches. The benefit is that you can hand-pick the most suitable users for your first batch, the Atomic Network. You don’t need to treat people on a first-come-first-served basis (and it’s a good idea to be open about this). Over time, you can onboard new, larger batches of users at the pace you can handle while still giving a great user experience, and ultimately onboard everyone. Btw, when you are pre-product, increasing the number of sign-ups is a good metric to focus on, and also something you can communicate to your investors.

We used this method at t2.world — have a look at t2’s Allow List application here for inspiration:

We used Typeform to create t2.world’s Allow List page.

Principle #3: Welcome experience & Invite-only strategy

This stage comes when you have successfully built your first Atomic Network, the first batch of users. You have already found a way to provide value & users are excited and happy. Now it’s time to ask them to invite their friends.

The Welcome Experience is important:

You can think of this through an analogy. Imagine arriving at a large dinner party. A close friend welcomes you at the door, and while you step in and leave your jacket on the rack, you see familiar faces: close friends, acquaintances, and a number of new people who’ve been carefully curated. The dinner turns out to feel exciting and intimate. If this is an ideal experience for a dinner guest, you can think of a similar welcome experience for a user of a new product. Invite-only products can curate this because every new user is at least connected to one person, their inviter.

Utilizing highly connected people is useful: it’s a good idea to invite the most connected people early on because they tend to bring other highly connected people with them. The result is a dinner full of social butterflies, which greatly helps in launching a new product.

[Again, thanks Andrew Chen for these analogies]

The invite-only strategy also gives you time to improve the product

Just as Gmail in its early days couldn’t support more than a certain number of users, you might face a similar challenge. You’ve built a sizeable community, but you are unsure how they will like your product. It is a safe bet to first invite only a small batch of users and treat them like kings. You can learn from them, engage with them on a personal level, and use their feedback to improve the fundamental features of your product almost on the go. You don’t want to give a bad experience to a large number of users.

When you are ready to take on the next batch of users, you can incentivise the first batch to invite their friends. This can be capped, too, for artificial scarcity; for example, a maximum of 5 Activation Codes per user. Let’s explore how STEPN utilized exactly this:

Case example: STEPN (Move-to-Earn game)

  • STEPN is a Move-to-Earn running tracker/game. It’s perhaps one of the best examples of utilizing Activation Codes from the previous bull market.

  • You had to buy a pair of NFT Sneakers, and you could start earning money simply by walking outside (GPS measured your movement). At the market peak, you could earn $100-200 for a 15-minute walk (although you had to invest thousands of dollars to buy the NFT sneakers).

    • (The first 10,000 sneakers NFTs were distributed for free to the early community in Dec 2021, through a simple quiz question in their Discord)

  • STEPN was a massive short-time success story, picking up over 3 Bn USD market cap (and over 700,000 MAU) in just a few months and going down equally quickly. Its token economics leaned heavily on new users, and it was partly labelled as a ‘Ponzi’ project. Today in 10/2022 its market cap sits around 0.3 Bn USD with 70,000 MAU.

  • Despite the controversy, STEPN witnessed massive growth, and it’s worth studying how they achieved that.

  • One key aspect of their marketing was Activation Codes: you had to have one to be able to register. I remember a couple of weeks when STEPN truly grabbed the narrative; everybody in web 3 circles was talking about them and hunting for an Activation Code.

  • STEPN had designed several ways to acquire an Activation Code. Here’s the full list:

    • 1. Get an Activation Code From a Friend (max 5)

    • 2. Get an Activation Code From STEPN Discord (released every 1h)

      • Every 15 min, STEPN shares 10 activation codes in the #activation-code channel.

    • 3. Get an Activation Code From STEPN Telegram

      • 1000 codes were shared every day at 13:00

    • 4. Get a STEPN Activation Code From Social Media & Discussion Groups.

    • 5. Get up to 100 STEPN Activation Codes. (For Influencers)

    • 6. Complete The STEPN Quiz For an Activation Code

Let’s analyze the methods above:

  • By distributing the codes inside their Discord and Telegram, people had to join those channels. That was a clever little trick to grow their communities’ following.

  • The different methods to get the Activation Code provided a great talking point for crypto influencers and bloggers. If you search “how to get STEPN activation code” on Google, you can find countless guides. Many of them were written organically, some were paid by collaborators.

    • Influencers could apply for exclusive STEPN Activation Codes to give to their audience. That was sweet value for an influencer to provide to their followers.

  • That’s the praising part. Then the other side of the coin: STEPN’s Activation Code mechanism was pure FOMO building and artificial scarcity. You didn’t need to contribute any actual value to the STEPN community to get a code (besides your movement data). The mechanism still worked well for them, probably because STEPN was such a simple game/tracker that anyone could use; walking isn’t a stretch for most of us.

Principle #4: Utilize Bounty platforms to reward users when they complete specific actions

One of the most popular methods to scale user acquisition is to run an incentivized bounty campaign. These come in many shapes and forms. The most basic one is to ask users to perform specific actions in your protocol (e.g. “trade on our DEX!” or “stake our native token!”) plus retweet the bounty promotion tweet. These activities are done in exchange for some reward, for example a community NFT.

Early users are generally willing to do this, as they expect it can lead to more rewards later (beyond the NFT). When designing these campaigns, it’s good to be aware of these user expectations and later match the reward to the actual effort contributed by the early users. These rewards can be distributed at token launch or allocated from the protocol’s community treasury.

Some platforms also provide an option to use a credential/identity system (using government IDs), namely Galxe. Also, you have the option to require email confirmation or connecting a Twitter account. Utilizing one of these methods enables you to attract real humans and prevent bot farms from raiding your campaigns.

Case studies: Here are two links with plenty of examples of how campaigns can be structured: Campaign examples 1 and Campaign examples 2. These pages are a great way to source ideas. Generally speaking, you want to encourage users to do the main required action in your protocol/product, whatever that is: doing a trade, posting content, staking, etc - to activate a new user.

Here are some task description examples (copied from the links above):

  • Perpetual Finance: Trade at least 100 USDC on Perp v2 (7,501 NFTs minted)

  • Yearn Finance: Subscribe to the Yearn weekly newsletter (7,746 NFTs minted)

  • Serum: Buy at least $25 of SRM using USDC during the event (4,427 NFTs minted)

Tools:

There are many different platforms to create an incentivized campaign. Some of the most popular platforms currently are Galxe, Layer3xyz and Crew3xyz.

Word of caution: While an incentivized campaign executed the right way at the right time can bring significant growth, there is also a risk of bringing spammy vibes to your brand. Your first set of community members is better found through your personal and professional networks: people who know and trust you. Incentivized campaigns work better at a slightly later stage. Use them with caution.

Principle #5: Collaborate & Integrate with value-aligned protocols


On the tech level, one fundamental innovation in web 3 protocols is composability (utilizing existing innovations of other protocols as ‘API calls’ in your own protocol). Just like composability works at the tech level, a collaboration between protocols and DAOs works at the community and marketing level.

You can reach out to other protocols and DAOs who have similar values as you. Perhaps some of them can see you as a potential competitor, but for the most part, there is potential for collaboration and cross-promotion.

These collaboration reach outs can be framed e.g. in the following way: “Hey Community Manager X, your community Y is value-aligned with what we’re working on at Z. We’re about to launch our product soon, and would like to provide free value to your community members. We could allocate 1000 free Allow List spots (or NFT mints) on a first-served basis. Perhaps we could even ideate a campaign together! Or just keep it simple. Let me know your thoughts.”

Technically speaking, you could permissionlessly scrape blockchain addresses from any protocol, create an Allow List around them, and announce it in their Discord. That said, coordinating with the Community Manager or the Founder is, in most cases, still wise and polite. If they like it, you might get some extra marketing firepower.
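As a rough sketch of that scraping idea, assuming you have already exported the target protocol’s token-transfer events via a node or block explorer (the event format and function below are illustrative, not any real tool’s API), the Allow List construction could look like:

```python
# Hypothetical sketch: build an Allow List from exported transfer events.
# `events` is assumed to be a list of {"from": ..., "to": ...} dicts
# exported beforehand (e.g. from an RPC node or a block explorer CSV).

def build_allow_list(events, min_transfers=1, cap=1000):
    """Deduplicate senders/recipients, keep sufficiently active addresses."""
    counts = {}
    for ev in events:
        for addr in (ev["from"], ev["to"]):
            addr = addr.lower()                      # normalize casing
            counts[addr] = counts.get(addr, 0) + 1
    eligible = [a for a, n in counts.items() if n >= min_transfers]
    eligible.sort(key=lambda a: counts[a], reverse=True)  # most active first
    return eligible[:cap]                            # respect the list cap

events = [
    {"from": "0xAbC1", "to": "0xDef2"},
    {"from": "0xdef2", "to": "0x9993"},
]
print(build_allow_list(events, min_transfers=2))  # ['0xdef2']
```

Requiring more than one transfer, as shown, already filters out one-off addresses; the cap mirrors the “1000 free Allow List spots” framing above.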

Usually, token communities are delighted to receive free value if you’ve built something useful. It’s a win for their community, it’s a win for your project’s growth.

Integrations on a product level

In web 2, the bigger network wins. In web 3, people who build the biggest network together win.

Lego Building

It’s good to research if there are useful building blocks already available in the niche you’re building.

It doesn’t make sense to build everything from scratch. If you utilize existing web 3 protocols, you not only save developer hours but most likely will also get marketing support from them and can tap into their community.

E.g. Lens Protocol definitely wants to promote all apps utilizing their social graph and identity layer. The benefit for your protocol is not only the identity component but also the user base of Lens handle holders.

Another example: ENS domain names. An increasing amount of protocols integrate with ENS. Many people create their online identity around their ENS domain. Some protocols have been clever in using the ENS domain name directly as the username/display name in their protocol. You don’t need to ask anyone’s permission to do just that.

Principle #6: Use retroactive airdrop to distribute value to your community

Needless to say, getting here is not an easy feat. Retroactive airdrop usually only makes sense when you’ve already created a protocol that provides value to a large number of users. Otherwise, it can easily fall flat. Airdrop is usually conducted together with the token launch.

In web 3, you don’t need to sell the entire company/protocol to create a liquidity event. Your own token enables the distribution of value. Part of the tokens is allocated to founders, the team and early investors. A significant part is also allocated to the users of the protocol. This can be anywhere between 5% and 25% of the total supply, or even more.

Here are airdrop examples of the largest protocols:

  • Uniswap (UNI token) - Decentralized exchange

    • Eligibility criteria for the airdrop: interacted with Uniswap protocol

    • Airdrop: 15% of the total token supply to ~250,000 addresses

    • A single trade on Uniswap earned you around $3k worth of UNI right after the announcement

  • Ethereum Name Service (ENS token) - Domain names for blockchain addresses

    • Eligibility criteria for the airdrop: registered a domain, e.g. yourname.eth

    • Airdrop: 25% of the total token supply to 137,689 addresses

    • A single registered domain would land you $7-12k worth of tokens after the announcement

  • dYdX (DYDX token) - Margin trading protocol

    • Eligibility criteria for the airdrop: traded using dYdX - the more trades, the more tokens

    • Airdrop: 7.5% of the total token supply to ~65,000 addresses

    • Depending on the number of trades using dYdX, the airdrop amount totalled from a couple of thousand USD to 6-figures.

Yes, the rewards were significant considering the effort from the user. After these major airdrops, many protocols started to have stricter eligibility criteria, e.g. requiring several interactions with the protocol, not just one. This weeds out the airdrop farmers and focuses on the actual core users.
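As a toy illustration of such tiered eligibility criteria (the thresholds and token amounts below are made up for the sketch, not any protocol’s actual numbers):

```python
# Hypothetical tiered airdrop allocation: one-off interactions earn nothing,
# repeat users earn progressively more. All numbers are illustrative.

TIERS = [          # (minimum interactions, token allocation)
    (100, 5000),
    (25, 1500),
    (5, 500),
    (2, 100),      # a single interaction earns nothing: farmers filtered out
]

def allocation(interactions: int) -> int:
    """Return the token allocation for an address with this many interactions."""
    for min_tx, tokens in TIERS:
        if interactions >= min_tx:
            return tokens
    return 0

print(allocation(1))    # 0 (likely an airdrop farmer)
print(allocation(7))    # 500
print(allocation(250))  # 5000
```

The dYdX airdrop above followed this general shape: the more trades, the larger the allocation.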

The Defiant has written a comprehensive summary of airdrops conducted by major protocols. This article by Hashed also goes deep.

Airdrops are a great way to decentralize a protocol, and the distribution should emphasize core users (not just lurkers or airdrop farmers). Cynically speaking, airdrops can also be seen as an expensive customer acquisition tool. Nevertheless, it’s a good idea to look long-term and consider distributing airdrops in several events over a couple of years instead of all at once. Ethereum’s Layer 2 scaling solution Optimism has successfully implemented this strategy. The way airdrops are conducted seems to be in constant evolution, and best practices are still forming along the way.

Detailed guidance on how to design a retroactive airdrop is a deep topic of its own and outside of the scope of this blog post. If you’ve made it here, congrats. You most likely have experienced advisors to work out the details with you.

Principle #7: How to acquire users outside of crypto twitter and web 3 natives?

This principle is a joker card — it’s more of a product design idea.

The number of web 3 protocol users is still minuscule on a global scale: Metamask has only around 30 million monthly active users. We’re still really early. Many protocols compete for the same web 3 native users hanging out on crypto twitter.

However, if you want to break out to a broader general audience, how do you do that? That’s the billion-dollar question.

For an average internet user, interacting with web 3 protocols is still cumbersome. You need to download Metamask, store your private key safely, and open a crypto exchange account to get some tokens; you might lose your tokens through user error, and there is no customer support… It takes quite a bit of hobbyism to even be able to interact with a web 3 dapp.

This is a challenge widely discussed in the industry. Many teams are working on more user-friendly blockchain wallets and onboarding methods. That said, I don’t think there will be any killer app available anytime soon. It’s also an educational thing, it will just take a while for the majority to wrap their head around digital tokens.

So, one option might be just to abstract away the blockchain wallet experience.

Your product could utilize blockchain and tokens on the backend, but the digital assets would first be stored in a centralized custodial wallet. The product could be fully functional through a regular email signup, so for users, signing up would feel like any web 2 product. Later, users could be offered the option to connect a non-custodial blockchain wallet such as Metamask and move their digital assets there, but it wouldn’t be mandatory.
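A minimal sketch of this custodial pattern (the class and method names below are illustrative, not any real product’s API): assets live in the app’s own ledger keyed by email, and an on-chain withdrawal only becomes possible once the user links a wallet.

```python
# Illustrative custodial-ledger sketch: balances are held off-chain per email,
# and moving assets on-chain is optional, not required for using the product.

class CustodialLedger:
    def __init__(self):
        self.balances = {}      # email -> token balance held in custody
        self.wallets = {}       # email -> linked non-custodial address

    def credit(self, email, amount):
        """Credit in-app earnings; no wallet or signup friction needed."""
        self.balances[email] = self.balances.get(email, 0) + amount

    def link_wallet(self, email, address):
        """User opts in later by connecting e.g. a Metamask address."""
        self.wallets[email] = address

    def withdraw_all(self, email):
        """Return (address, amount) for an on-chain transfer, or None."""
        if email not in self.wallets:
            return None         # no wallet linked: assets stay custodial
        amount, self.balances[email] = self.balances.get(email, 0), 0
        return (self.wallets[email], amount)

ledger = CustodialLedger()
ledger.credit("alice@example.com", 40)           # earned in-game via email login
print(ledger.withdraw_all("alice@example.com"))  # None: no wallet linked yet
ledger.link_wallet("alice@example.com", "0xA11ce")
print(ledger.withdraw_all("alice@example.com"))  # ('0xA11ce', 40)
```

The design choice here is exactly the trade-off discussed below: the ledger is centralized and custodial, which sacrifices decentralization for a web 2-like signup.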

Naturally, this type of approach goes against decentralization/DAO values. For the most fundamental DeFi protocols, it would never work or make sense. But for less critical consumer-facing applications, such as web 3 games, it is worth considering. It can potentially unlock millions of normie users who would never have had the patience to set up a blockchain wallet.

As we discussed in the beginning, it’s important to assess where in the spectrum of centralization/decentralization your product sits.

Principle #8: Certain web 2 marketing methods are still useful, too

It’s good to note that certain ‘traditional’ online marketing methods are still useful.

Content and blog posts — Churning out content regularly is as essential as ever. In web 3, there is an expectation of a higher level of transparency, considering we’re building open protocols with public transactions. Many users also analyze your protocol from an investor’s perspective.

SEO — Search Engine Optimization. Organic traffic from Google is still significant. If you’re developing a consumer-facing content product, SEO is as important as it has always been. If you’re developing a niche DeFi protocol, you probably don’t need to worry about SEO that much.

PR — Fundraise news, product launches, and partnerships with well-known parties. It’s definitely worth having an experienced PR freelancer or agency pitching stories to web 3 and tech media. This not only gives you an immediate traffic spike and a credibility boost; backlinks from high-domain-authority websites also contribute to long-term sustainable organic traffic.

Web 3 growth tech is evolving rapidly

Web 3 Marketing/Growth Tech landscape in Q3/2022, compiled by Safary community.

The good news is that you don’t have to build all support tools from scratch. Safary community has put together a great list of resources above.

That said, we’re early. In 2011, with web 2, there were only 150 Marketing Tech companies. Today there are over 10,000. The web 3 Marketing Tech space has just gotten started, and the tooling will evolve fast.

Credits to the top thinkers in the space

This blog post was built on top of insights from many thinkers/builders/marketers in the space.

Thank you, Andrew Chen, for writing The Cold Start Problem. It’s a fantastic read, and I would recommend it to anyone who wants to understand what the early days of some of the largest web 2 products looked like. Thank you, James Currier, a long-time author of everything related to network effects. Thank you, Matias Honorato, marketing visionary in the web 3 space. Thank you, Blake Kim from Myosin, for insightful ideas on how to approach web 3 marketing. Thank you, Amanda Cassatt, for the most valuable YouTube video I found on web 3 marketing. Thank you to the entire team at Media Enterprise Design Lab for the Exit-to-Community handbook; it clearly articulates why value creation and distribution are going through a fundamental change. Thank you, Safary community, for bringing web 3 growth professionals & resources together. Thank you, Galxe, for the great tool, campaigns and documentation. Thank you, Defiant and Hashed, for deep-dive research on airdrops.

The full list of links can be found at the bottom.

Finally, thank you, reader

Thank you for reading. Hopefully, this sparked some ideas for your protocol marketing. If you found this valuable, you can consider joining my email list at the bottom of the page.

Also, you can consider giving a like to my tweet thread summary of this blog post.

Need support for your protocol marketing?

If you need help with your protocol marketing, feel free to reach out to me, and we can explore if I could be of help in supporting the growth of your protocol.

About the author

Mikko Ikola is passionate about web 3, blockchain and disruptive protocols in the decentralized world. You can follow @MikkoIkola on Twitter or contact Mikko via email at ikola [a] iki.fi.

Email newsletter

If you’d like to stay updated on my future crypto articles, you can subscribe to my newsletter here.

Becoming a crypto investor: My top 6 books to learn the fundamentals

Becoming a successful crypto investor is a long path that requires plenty of learning. You need to master areas such as investing psychology, the history of money, and the basics of blockchain technology & cryptography. Understanding the main events of the most important crypto projects over the last ten years will also be of much use.

One of the most challenging things for newbies entering the crypto space is finding the signal through the noise: that is, where to start learning and what the best materials are. The crypto space feels overwhelming and is difficult to keep up with, even for full-time professionals in the space.

There is absolutely a lot of garbage content (and outright scams) online. If you’re serious and committed, the top priority is to identify the most legitimate sources to learn the fundamentals. And stick to these and avoid the rest. You need to get back to the basics. You need to get back to history before you can understand the present and future.

When I started learning the space full-time, it literally took me 2-3 months to identify what sources I should consume. Now I would like to share the best books I have identified.

Based on my experience, books are absolutely the best way to build up your fundamental understanding in the shortest amount of time. There will be a lot of “mind warp” moments; that is, you need to internalize many new mental models and concepts. It requires brain bandwidth. That is to say, don’t try to cram through these books. Read slowly and try to understand everything. Make notes. Avoid the audio versions. Read when you are at full capacity (for me, that’s mornings or weekends).

Forget the crypto news, crypto twitter, and other day-to-day content. Get the fundamentals right. Read 1-2 of these books, and you are already 90% ahead of everybody else. Read all of them, and you’re 99% ahead.

Here are my Top-6 books to become a crypto investor:

  1. Psychology of Money

  2. Bitcoin Standard

  3. The Infinite Machine

  4. Layered Money

  5. The Price of Tomorrow

  6. Coingecko’s “How to DeFi”

Psychology of Money (Morgan Housel)

This is not a blockchain or crypto book, but it is prerequisite reading before it is safe to enter the crypto markets.

This book discusses how you should think about your relationship with money and life. Money can screw people over. Some people neglect thinking about money in its entirety. Some people get so attached to money that they do whatever it takes to get more while neglecting everything else. Some people are good at making money but bad at staying wealthy. It’s all about behaviour. And behaviour is hard to teach, even to really smart people. Also, financial behaviour is difficult to model through successful friends, as you really cannot “see” their full thinking process or values behind the decisions. Financial literacy is more a soft skill than a hard skill, requiring plenty of self-reflection.

This book is full of great universal truths about how to think about money. Investing is a lot about psychology. This book will help you to build a well-rounded foundation before you hit the markets. And you need that, as crypto markets can be 10x more volatile than traditional stock markets.

Read my full review and best quotes of Psychology of Money on Goodreads.


Bitcoin Standard (Saifedean Ammous)

In 2013, as a young computer science student, I studied the Bitcoin whitepaper with great fascination from a technical perspective. I thought Bitcoin was interesting, but I only understood a limited view of its potential.

Many years later, I read the Bitcoin Standard book. It truly blew my mind. I cannot recall any other book that so dramatically shifted how I understand how money, currency, and store of value assets work. It’s truly a fascinating topic, going way deeper than one might initially think.

Essentially, this book tells the history of money. To understand Bitcoin, the history of money is the best place to start learning. It is a bit counter-intuitive, but trust me. I regret that I didn’t read this book when it came out because I thought, “I already know what Bitcoin is”. Maybe I did, but I couldn’t put it into perspective.

I argue that when you combine the understanding of the history of money with blockchain technology, you simply cannot unsee the transformation we are going through.

See my full review of The Bitcoin Standard on Goodreads.

The Infinite Machine (Camila Russo)

After you understand Bitcoin and the history of money, the next logical step to understand is Ethereum.

Understanding Ethereum can be roughly split into two parts:

  1. How Ethereum works technically and

  2. Who created Ethereum, and what was the chronological founding story.

For the first question, you can find numerous technical videos on YouTube. Watch a few of them.

For the second question, The Infinite Machine is the best book. Camila Russo has interviewed all the relevant people involved in creating Ethereum and traces how it evolved over the years. It is a truly fascinating story, covering many people who were active in the early days. Many of them have important roles in the crypto space today. For example, you might have heard of Polkadot and Cardano. Both are top-10 crypto projects, and their founders were originally co-founders of Ethereum. There are many other key people besides Vitalik Buterin.

See my full review of The Infinite Machine on Goodreads.


Layered Money (Nik Bhatia)

Now that you understand Bitcoin and Ethereum, it's perhaps a good moment to step back and look again at the big picture. Layered Money tells the history of money and, in particular, describes how our current, roughly 100-year-old financial system operates and how it has evolved. It's not a pretty story.

The book contains fascinating visualizations of the different "layers" our financial system is built on. At the top of the pyramid sits the hardest asset. For a long time, that was physical gold. In 1971, the Nixon administration abolished the gold peg, and the highest layer changed: it wasn't gold anymore, but US Treasuries. This changed the fundamentals of the whole system. Essentially, money printing started on a large scale.

I have written a Twitter thread summary of the learnings from this book. (I got a nice confidence boost as the book's author, Nik Bhatia, retweeted it himself, yay.)

The Price of Tomorrow (Jeff Booth)

This book hammers home one point: deflation is the key to an abundant future.

If you've taken an Economics 101 class at university, you've learned that deflation is bad. You were most likely taught that 2% inflation is a good thing. I can understand if it is difficult to question this.

That said, I kindly suggest forgetting what you were taught in Econ class and picking up this book. Jeff Booth shows that we are indeed headed toward a deflationary future, and that it's a good thing (at least for those who comprehend it), thanks to the incredibly rapid advancement of technology over the last few decades.

The key thing to understand is that technology is intrinsically deflationary. Our financial system and its fundamentals were built in an era when labor and capital were almost directly linked, an era that counted on growth and inflation. That era is over, but we keep pretending that the old financial system still works.

Technological development has been extremely fast for the last 20-30 years. Most successful companies today are not successful because they have a huge headcount. Instead, they have software that scales worldwide even with a relatively small headcount.

The only thing driving growth in the world today is easy credit, which is being created at a pace that is hard to comprehend — and with it, debt that we will never be able to pay back.

See The Price of Tomorrow book on Goodreads.

How to DeFi (Coingecko)

At last, we get to some pragmatic stuff you can put to use immediately.

I’ve left this as the last book on the list to emphasize how important it is to learn the fundamentals first. That is, the previous books on this list.

However, if I were starting my learning path from zero, I would perhaps read the other books and this book simultaneously. That is, consume this book not only by reading but also by performing test transactions through most of the protocols mentioned in it.

Coingecko's "How to DeFi - Beginner" is a hands-on manual on using the essential DeFi protocols. The list includes, for example, Maker (minting the stablecoin DAI), Aave and Compound (borrowing and lending protocols), Synthetix (synthetic assets), and Nexus Mutual (decentralized insurance).

The screenshots in the book might seem fairly outdated. That is because they are, heh. Don't worry about it. The protocols are fundamentally still the same, and this discrepancy just signals how fast the space has evolved.

Coingecko has also published an advanced version of this book. It covers other fundamental topics, such as DEXs (decentralized exchanges), yield aggregators, oracles, multi-chain bridges, etc.

Both books are best consumed by actually using the protocols. So, be ready to invest a few hundred bucks in ether and gas fees to get the most out of them. Nothing replaces the understanding gained from actually using the protocols and exploring the first-source documentation they provide. Coingecko guides you to the most relevant protocols to start this exploration.
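As a rough guide to budgeting those gas fees: a transaction's fee in ETH is simply the gas it consumes times the gas price, which is quoted in gwei (billionths of an ETH). Here is a minimal sketch of that arithmetic; the gas price and the swap's gas usage below are illustrative assumptions, not live numbers.

```python
# Back-of-the-envelope gas-fee arithmetic for budgeting DeFi experiments.
# Fee in ETH = gas_used * gas_price, with gas price quoted in gwei
# (1 gwei = 1e-9 ETH). The prices below are illustrative, not live.

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Return the transaction fee in ETH."""
    return gas_used * gas_price_gwei * 1e-9

# A plain ETH transfer always uses 21,000 gas; contract interactions
# (e.g. a token swap) commonly use far more.
transfer_fee = tx_fee_eth(21_000, 50)    # simple transfer at an assumed 50 gwei
swap_fee = tx_fee_eth(150_000, 50)       # hypothetical swap at 50 gwei

print(f"transfer: {transfer_fee:.6f} ETH")   # 0.001050 ETH
print(f"swap:     {swap_fee:.6f} ETH")       # 0.007500 ETH
```

Multiply by the current ETH price to see the dollar cost; a few dozen test transactions at realistic prices is how you end up at the "few hundred bucks" budget.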

The learning path after reading these books

Learning crypto is like learning mathematics or a new language. It just doesn't make sense to skip basic arithmetic or basic grammar before moving on to the more advanced stuff.

That is to say, the best order to learn crypto is roughly this:

  1. History of money

  2. Blockchain technology

  3. Bitcoin

  4. Ethereum

  5. Basic DeFi protocols: Maker, Aave, Compound, etc.

  6. NFTs, DAOs, other L1s and L2s, all the new cool stuff

The books I recommended in this blog post cover points 1 to 5. Try to focus on understanding these well enough before you ape too far ahead.

After you have built these fundamentals, the next step is finding the best online sources to follow. If you like to stay at the cutting edge, there won’t be many books, as the space evolves so fast. Instead, you need to find the best YouTube channels, podcasters, newsletters, and Discords to stay up-to-speed.

It took me a long time to find these channels, and I will compile a list of my favorite sources in the next blog post.

Stay tuned.

If you like my writing, you can find more here: https://mikkoikola.com/

My Gitcoin KERNEL experience: 8 weeks deep dive in Web 3 with 400 fellows from over 40 countries

I just finished participating in the KERNEL web 3 educational program, together with over 400 fellows from over 40 countries. It was two months full of inspiring people, interactions, and learnings. This was "Kernel Block 5", the fifth batch in KERNEL's two-year history.

So, what is KERNEL exactly?

“A custom web3 educational community — We are building an open, peer-to-peer, lifelong network of awesome humans, one block at a time. Each block accommodates 250 individuals and runs for 8 weeks. It is a unique experience.”

(During Block 5, they decided to temporarily extend the size from 250 to 400 people.)

Screenshot from one of the first sessions with the Kernel Block 5 fellows

Definitions aside, KERNEL is a different kind of experience for everyone. What you get out of KERNEL really depends on what you want to make of it. At the minimum, you could use a few hours a week to read the learning Modules (publicly available) and participate in the fireside chat once a week. If you like to meet new people, you can join networking sessions at Gather Town after the fireside chats.

At the other extreme, you could experience KERNEL in a full-time manner by:

  • Participating in 1 or 2 learning tracks (DeFi, Token Communities, Dragons & DAOs, Gaming, DeSci, Regeneration, and Culture).

  • Joining (and creating) Junto conversations around different topics.

  • Reaching out to other KERNEL fellows to set up Zoom calls to get to know each other.

  • Or, if you have recently founded your own web 3 project and are knee-deep in development, using the time to find new team members to work on it.

  • Participating in "Wanderweek", which is essentially a two-week hackathon. You could also jump-start your new web3/NFT project during Wanderweek and get feedback from mentors and other fellows.

KERNEL is full of accomplished professionals, builders, and creators.

KERNEL is overwhelming, and you need to be intentional

During the 8 weeks, there are a LOT of things going on.

The number of Slack channels is staggering. You don't have time to read and follow everything. KERNEL will absolutely feel overwhelming to everyone in the beginning, and that's just a feature of KERNEL. Somebody put it this way: "Kernel is like a stream which is there."

The only way to adjust is to be intentional. This is also heavily emphasized by the organizers. The overwhelm can hit you if you come totally without a plan. However, you can formulate an intention, for example:

“I study all the modules but skip the Wanderweek, participate in only 1 learning track and set a goal to meet 1 new person over Zoom every week. I will organize a Junto conversation about a topic X I’m curious to learn more about”

This way, you will get much more out of KERNEL, because you can focus only on the information flow that is relevant to you. KERNEL actually assigns you a Guide with whom you can have a 1-on-1 conversation during the first two weeks.

The atmosphere during the sessions is professional yet relaxed. The KERNEL organizers Vivek Singh and Andy Tudhope are some of the most emotionally intelligent and warmest people I've ever met online. They are great people to look up to, playing and enjoying the Infinite Game. When you join a Zoom session, there's always relaxing music playing in the background while people are rolling in. Everything works punctually and professionally - people are present.

At the same time, there is no record-keeping, no grades, nor any type of controlling “schooling” attitude. You get to decide what is worthwhile to take part in and deep-dive into your interests.

KERNEL Syllabus - From Module 0 to Module 8

Here’s the entire syllabus for 8 weeks. It’s publicly available here.

KERNEL Rituals & Rhythms

Kernel lasts for 8 weeks, but what does a typical week look like? There is a certain structure:

  • Every Sunday, the KERNEL team sends an email regarding the upcoming week, usually giving an outline of the week and reminding us which Module we are about to start.

  • On Tuesdays, the week begins with various small group explorations called Guilds. There were 7 different Guilds in KB5: DeFi, Dragons and DAOs, Token communities, Gaming, DeSci, Regeneration, and Culture.

  • Every Thursday, KERNEL hosts a Web 3 luminary for a fireside chat and brings all the KB5 Fellows together. After the fireside chat, people usually gather in Gathertown.

  • Besides these regular activities, there are also Expo Week, Wander Week, and the Showcase Demo Day at the end, plus numerous Junto conversations throughout the program. All optional.

Gathertown networking:

Every Thursday, there is a Fireside chat about the topic of the week. Usually, there's a guest speaker. The first week's fireside chat speaker was Vitalik Buterin himself, who had just published his blog post about soulbound NFTs, which he talked about in the first half. The other half was reserved for questions from Kernel fellows.

After the Fireside chats, people head over to the Gathertown networking space. It was my first time using Gathertown, and I have to say I was impressed by the experience. Gathertown aims to recreate the experience of being in an actual conference hall. You have your character, and you move around a 2D map like in Habbo Hotel. If you move close to someone, a video and audio connection opens automatically, just as if you walked up to somebody at a real conference and opened a conversation. You can also sit at a table and see the video and audio of everyone around it. If you have a remote team, this tool is a must.

Gathertown networking session after Kernel Fireside chat

KERNEL emphasizes values, thinking & communication skills:

KERNEL sets in place certain values. The first Module 0 is all about this.

For example, KERNEL encourages us to think in Complementary Opposites (being able to entertain opposite views of a matter, and the spectrum in between). KERNEL fellows should have humility. And here's one of my favourites:

Those who have positively changed the world did so because they learnt how to negotiate complexity, rather than impose their own will on things. They answered their own questions as honestly and directly as they could.

KERNEL encourages us to Play with Pattern. The best 30 minutes of focused attention I spent was playing through "The Evolution of Trust", an interactive online story. It teaches everything you need to know about game theory in the context of coordinating with people.
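The core dynamic the game teaches (a "copycat", i.e. tit-for-tat, player refusing to stay exploited by a cheater, while two copycats cooperate indefinitely) can be sketched in a few lines. This is my own minimal sketch of an iterated prisoner's dilemma, not code from the game; the payoff numbers follow the game's framing but are illustrative.

```python
# Minimal iterated prisoner's dilemma, the game behind "The Evolution of Trust".
# Payoffs (illustrative): mutual cooperation pays both +2; cheating a
# cooperator pays the cheater +3 at the victim's expense (-1); mutual cheating pays 0.

def payoff(a: str, b: str):
    if a == "cooperate" and b == "cooperate":
        return (2, 2)
    if a == "cooperate" and b == "cheat":
        return (-1, 3)
    if a == "cheat" and b == "cooperate":
        return (3, -1)
    return (0, 0)

def copycat(opponent_history):
    # Tit-for-tat: cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "cooperate"

def cheater(opponent_history):
    # Always defect.
    return "cheat"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = payoff(move_a, move_b)
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Copycat only loses the very first round to a cheater, then stops being
# exploited; two copycats cooperate throughout.
print(play(copycat, cheater))    # (-1, 3)
print(play(copycat, copycat))    # (20, 20)
```

Over repeated rounds with enough cooperators around, the forgiving-but-firm strategy outscores unconditional cheating, which is the game's central lesson about trust.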

So, how is this all relevant to KERNEL?

KERNEL is all about gathering talent around the world for a journey of 8 weeks. Everything happens online. Online is as real as offline in KERNEL. It’s all new people. However, the people on the other side of the screens are competent professionals. Their time is valuable. We should make the time together matter. Take a professional yet loving attitude. That’s my interpretation of the signal Module 0 wants to send.

And these values show throughout the program. Fellows are willing to live up to them. When you interact with Kernel fellows, there is mutual respect, and everyone tries to help each other. These values create a fruitful foundation for getting to know new people and potentially working together on a project.

In the kickoff session, Vivek put it this way:

“Honesty, trust, clarity, excellence, heightened awareness is present in this environment”.

And it sums up the reality accurately. Some other expressions used to set the tone for the upcoming weeks were: mutual aid, lively relationships, infinite games, open fields, and remembrance. It couldn't get better than that, could it?

KERNEL gets you back to basics — how to have a proper conversation?

Guilds (or learning tracks):

KERNEL Guilds are essentially Learning Tracks for a more specific area of web 3. The learning tracks have changed a bit from Block to Block. In KERNEL Block 5, we had the following options:

  • DeFi

  • Token Communities

  • Dragons & DAOs

  • Gaming

  • DeSci

  • Regeneration

  • Culture

Usually, each Guild hosts around 3-5 extra sessions. Looking at all the exciting agendas, one easily feels like a kid in a candy store and wants to sign up for every track, or at least half of them. At least I did. However, it's suggested to take at most 2 learning tracks; otherwise, it simply gets too overwhelming. (Learned the hard way. Agreed.)

Some of the Guild sessions were also recorded to be watched later. However, I felt that the recordings were not even close to the experience of being present in the session in real time. Recordings obviously didn't include the Zoom chat log, which was always full of interesting notes and links. Also, some of the sessions had interactive elements (e.g. the DAO Guild actually setting up a DAO during the session), which obviously couldn't be experienced through a recording.

KERNEL Juntos - Small-group discussions any fellow can set up

In 1727, Benjamin Franklin formed the Junto, a weekly mutual-improvement club made up of individuals with an array of interests and skills. The goals were 1) to help us improve ourselves and 2) to help us improve our world.

KERNEL has built the Junto format on the legacy of Mr. Franklin. Essentially, it's a way to propose a topic for a small-group discussion. Usually, the host sets a short reading list of a few articles to get more out of the discussion.

There's a wide range of Juntos during KERNEL, with several each week. Even if you wanted to participate in all of them, you could only make it to perhaps 10% of them. So, it's good to reflect on your KERNEL intention and then decide on the ones that match. Or create your own Junto: setting one up is relatively low effort, and it's a great way to connect with people.

Personally, I would have really enjoyed participating in the Junto organized by Ali Rizvi, which brought fellows together to read and study Lisa Yi Tan's book "Economics and Math of Token Engineering". Just too many things to take part in!

See the full history of Juntos here: https://convo.kernel.community/archive

Some of my personal highlights & remarks during KERNEL:

Here are some personal highlights and remarks of my KERNEL adventure.

  • Daniel Robinson's (Head of Research at Paradigm) presentation about "Simple DeFi machines" was a great breakdown of the logic behind major DeFi protocols. It sparked a great discussion with the fellows, too. Most of the KERNEL content is recorded and made publicly available, but this particular conversation was intentionally not recorded, and I think that increased its quality: it was more candid and provided a safe space for some controversial thoughts as well. As much as I subscribe to the "build in public" philosophy, I think there would have been room for more non-recorded sessions with top industry experts like Dan.

  • It was great connecting with new people in general. I met one KERNEL fellow in person in Shanghai, where I live. I had another interesting conversation online with somebody whose real name, face, or location I didn't even know. His professional background was really impressive, and we had a great conversation. This was an example of the pseudonymous/anonymous culture of crypto. Anon culture is a peculiar part of the web 3 landscape; it has its obvious downsides but also upsides.

  • At the end of KERNEL, I was able to invite some web 3 investors to join the Showcase Demo Day and connect investors and teams. It always feels good to help people out with such a tiny effort.

Lifelong access to other KERNEL fellows is the most important takeaway

When I think about the concrete takeaways from the program, the number one thing is lifelong access to other KERNEL fellows. Not only your own batch: all the other batches are in the same Slack as well. Thanks to Covid, the world is flat, and regardless of your location, you can just reach out to anyone. Also, if I were to move to another country or city someday, I think KERNEL would be a great way to establish some real-world friendships as well.

KERNEL's Modules contributed a solid summary of the key ideas for understanding web 3 and underlined an ethical approach to building new projects. KERNEL's benefit is bringing people together and laying out good values with professional facilitation.

“The KERNEL syllabus (and KERNEL at large) is about transformation, not information.”

As per the quote, knowledge-wise, I only learned a limited amount. That’s understandable as it’s not KERNEL’s main point.

Pure knowledge-wise, I have learned the most by reading the most essential books, e.g. on the history of money, the history of Ethereum, and how the current financial system works. Books have hands-down given me the best ratio of invested time to gained understanding. The next most important source has been primary materials, such as the whitepapers of Uniswap, Ethereum, Compound, Aave, and other fundamental protocols. And, of course, going and using those protocols. Finally, certain YouTube channels and podcasts, such as Bankless and Coin Bureau, have been the most efficient way to stay up to speed after building the foundational understanding.

If you’re a beginner in the web 3 space and willing to learn more, I would definitely recommend a similar learning path to start with, and then later consider KERNEL.

Looking back, how I would go about KERNEL:

First of all, I would have reserved much more time for KERNEL. I initially planned to do precisely that, but then certain unexpected things happened, and I had to take care of some time-consuming personal errands just when KERNEL started.

Even though KERNEL's website says that one could do the program while working a full-time job, I would not necessarily recommend that, especially if you have a busy non-web-3 job.

If you are already working in the web 3 space, then it’s a bit different and KERNEL can complement your work and enable new opportunities.

At the end of the day, KERNEL is like a stream, and if you just set realistic expectations considering the time you can set aside (and not overwhelm yourself), you can accommodate the experience to different life circumstances.

If you’re totally new to web 3, I’d advise you first to read some books, whitepapers, open up a crypto wallet and play around (like I wrote in the previous chapter). If you’re still excited after all this, then I would say you’d be well equipped to get the most out of KERNEL.

Timezones and your physical location:

Credits to Samuel He for this accurate illustration :-)

If you live in the US or Europe, you will get the most out of KERNEL. Why? Because 70-80% of the participants are from North America and Europe, and time-zone-wise, the sessions are scheduled in comfortable slots.

If you live in a big city in the US or Europe (think NYC, San Francisco, Denver, Berlin, or London), you will get even more out of KERNEL. Why? Many other fellows also live in these cities, and you can get together for dinners. KERNEL even reimburses these dinners to encourage this (!). I also noticed there were a lot of participants from Singapore.

If you live in Asia, as I currently do, things are quite a bit different. Most of the sessions started at 11 pm or midnight (Shanghai/Singapore time zone). I could usually still focus on the session content, but when meeting people in Gathertown at midnight or 1 am, I was simply too sleepy to be presentable most of the time.

That said, I think the KERNEL team was as considerate as possible in planning the time zones. On the US West Coast, sessions started at 8 am. Considering where most of the participants live, that was the most suitable slot to pick.

Final presentations of KB5 Kernel block:

During KERNEL, you encounter many new ideas, and plenty of new teams are formed. You can also apply to the KERNEL program as a team. At the end of the 8 weeks, 40 teams were selected to present on the final demo day. Here's the recording of the first 20 teams. The second set of 20 teams can be found here.

Thank you KERNEL organizers!

Thank you, Vivek Singh, Andy Tudhope, Sachin Mittal, Aliya Donn, and Angela Gilhotra, for running KERNEL and organizing everything. There were also many Guild leaders who did a fantastic job; too many people to thank individually. I feel more empowered to continue my web 3 adventure :-)

The next KERNEL batch will start in autumn 2022. You can learn more about the application process on the KERNEL website here: https://www.kernel.community/en/blog/Editorial/summer-of-love

*********************************************************************************************

The rest of this blog post is my personal notes on KERNEL Modules from 0 to 8

*********************************************************************************************

My personal notes of KERNEL Modules 0 to 8

Before you read on, I’d like to make two important points:

1) The original material of KERNEL modules is created by Andy Tudhope, so all the credit to him for putting this together. I’ve only done copy+pasting here.

2) The following notes do not represent a holistic summary of the module content. I took the notes to foster my own learning, and I only selected insights that particularly resonated with me or new stuff that I wanted to learn and review later. A lot of essential content is left out from these snippets. The full module content is publicly available, so feel free to explore the entire syllabus here.

Module 0 - An Introduction to Kernel:

  • This module mostly sets the values and tone for the following weeks, such as:

    • “The quality of listening determines the quality of speaking."

    • "When people are listening, I'm compelled to speak more truth."

    • We should aspire to positive-sum, horizontal conversations (9:14 in the video).

    • "I have a lot of respect for people who think as they speak, and who pause to think and to attach the meaning that matters."

  • One of the best resources for me was "The Evolution of Trust", an interactive storyline game. KERNEL claims it will teach you everything you need to know about Game Theory. It was truly eye-opening, and I really recommend it to anyone interested in understanding what type of behaviour and communication leads to the best teamwork results. You can find it here; it's a good idea to set aside 30-60 minutes of focused attention: https://ncase.me/trust/

  • “We're in the midst of a move from closed organizations, to platforms accessible through APIs, to open protocols”

  • “Money is older than writing”

  • “Money is something that we don't talk about or teach, and perhaps this is a result of the architecture. Precious metals stored in a vault against which paper is issued mean that money in this expression is a form of debt, which curtails our discussions. How many of you have money in a bank? [everyone raises their hands] None of you have money in a bank! You have loaned your money to a bank: it is not the same thing."

Module 1 - Ethereum's History and State:

  • There is an implicit shift from trusting those who own the media by which we transfer value, to those with whom we are actually transacting.

  • The more succinctly we can express shared truths, the easier it becomes to verify (and therefore trust) the systems we use. This implies that: “Trust has something to do with truth"

  • “We are each others' environment”

  • “Our ability to create value has always been tied to the ways in which we tell stories about, and with, our shared records. However, prior to the feedback loop outlined in trust, the record was maintained by someone, which gives them enormous power and means everyone else is incentivized to try and manipulate them.”

  • “Because blockchains allow us to define succinctly our shared truths, and because the record itself is shared across all participants, there is a whole new "trust space" we can explore, searching for more valuable kinds of transactions impossible within merely legal fictions.”

  • Nick Szabo’s article “The Playdough Protocols” from 2002 gives a historical perspective: history of seals 5000 years ago in Iran, and how the digital equivalent of seals is the next step in the evolution of data integrity and unforgeable identities.

  • “The invention of the limited liability joint-stock corporation created wholly new systems of organizations. Blockchains and the possibility to create new types of crypto-economic coordination systems will lead to a marginal improvement in efficiency over the joint-stock corporation, but likely also allow the emergence of coordination systems we haven’t seen before.”

  • How to think of Ethereum in layman’s terms?

    • “Ethereum is the internet’s government, and smart contracts are its laws.”

    • “Ethereum is an unprecedented arena for playing cooperative games.”

  • Life is a work of art:

    • "Instead of thinking of life as a series of checks which I need to tick off - something which can be displayed on a graph that climbs ever up and to the right - I like to think of my life as a canvas which I can paint with whatever weird artwork I feel like [...] Here is my mandatory Venn diagram: the status quo needs to change, and life is short. When we put these two together, we can see that we need to subvert the status quo and have as much fun as possible along the way!"

    • “Play allows us to create and share ownership of spaces in ways which competition cannot. This is why we have unicorns and dancing developers and silly memes: it's not something incidental. It is a fundamental part of what borderless, global history-writing based on consensus is about. The revolution is not being televised because it's not about hate or anger or violence or anything else that grabs the headlines of a media operating with skewed incentives. It's heart to heart, here in the prison yards where we're using matching funds to build playgrounds where we can love again”


Module 2 - The global financial system:

  • Nick Szabo’s article, originally published in 2002, is a great historical backdrop (although quite a long one - needs a deep reading of 30-60 min) https://nakamotoinstitute.org/shelling-out/

  • "Indeed, collectibles provided a fundamental improvement to the workings of reciprocal altruism, allowing humans to cooperate in ways unavailable to other species. For them, reciprocal altruism is severely limited by unreliable memory."

  • Dawkins suggests "money is a formal token of delayed reciprocal altruism"

  • “We've established through Antonopoulos that money is a language for communicating value and that it was used long before writing was developed.”

  • "A novelty of the 20th century was the issue of fiat currencies by governments. While generally excellent as a media of exchange, fiat currencies have proven to be very poor stores of value [due mainly to inflation]."

  • (on Ethereum) “[…] this new order of communication, akin to the appearance of language itself, is best demonstrated by the simple fact that you need only memorize 12 magical words, incant them into an internet-connected machine and you gain immediate access to monetary value, anywhere in the world.”

  • [ Mikko’s note: If you want to deeply understand the history of money, you can find the highest value/time ratio by reading Bitcoin Standard and/or Layered Money books. Highly recommended ]
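The "12 magical words" in the quote above refer to mnemonic seed phrases, standardized as BIP-39 and used by most Ethereum wallets. As a quick sanity check on why a memorized phrase can guard real money: each word is drawn from a fixed 2048-word list, so twelve words encode 132 bits, of which 128 are raw entropy and the last 4 are a checksum. A back-of-the-envelope sketch:

```python
# Why 12 words are enough to guard real money: in the BIP-39 mnemonic
# scheme, each word comes from a fixed 2048-word list, so each word
# encodes log2(2048) = 11 bits. Twelve words encode 132 bits:
# 128 bits of entropy plus a 4-bit checksum.
import math

WORDLIST_SIZE = 2048
bits_per_word = math.log2(WORDLIST_SIZE)   # 11.0
total_bits = 12 * bits_per_word            # 132.0
entropy_bits = total_bits - 4              # 128 bits of raw entropy

print(bits_per_word, total_bits, entropy_bits)   # 11.0 132.0 128.0
print(f"{2**128:.3e} possible seeds")            # ~3.4e+38, infeasible to brute-force
```

That is the "new order of communication": a memorizable sentence is, mathematically, an unguessable key.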

Module 3 - Take back the web:

  • “[… ] we are conditioned to value action and results over the intention. We live in a global culture that emphasizes what we achieve rather than what we mean. […] Actions are, in a sense, how we attempt to make the external world match our internal world. They're important, and what you achieve matters, but it is not the primary matter.”

  • “What's unique about web3 - about economic code which creates new trust spaces; expands the possible definitions of value; merges money and speech; and creates new constraints in which to experiment with freedom - is that we can encode our intentions, literally.”

  • “Freedom is only the ability to be conscious of the constraints within which you live.”

  • “Freedom is the simple combination of awareness and acceptance. It is here and now, or not at all.”

  • “According to Illich, it is only the willing acceptance of limits — a sense of enoughness — that can stop monopolistic institutions from appropriating the totality of the Earth’s available resources, including our identities, in their constant quest for growth.” - Labors of Love

  • “How does this apply to trust and value? Well, value is generated from trust in clearly shared truths […]” Freedom is our conscious ability to decide which shared truths to trust based on how well defined and encoded the concept of "cheating" is that created those truths. Meaning, we have the freedom to define what boundaries we choose. It is not possible, though, to operate efficiently with no boundaries at all. Which is why the practice of freedom includes an acceptance that it is not possible to exist without limitation.”


Module 4 - internet age institutions:

  • On governance:

    • “Mutual aid is the basis for individual autonomy.”

    • “Better tools are those which help us help each other more effectively”

    • "Again, for emphasis, the aim is not to build better tools for governing; it is to build tools that help people help each other."

    • As soon as I set out to help someone, the direction of that action always implies a patronizing power dynamic: "I have, you lack." However, when we help each other - when we can admit honestly that we both need help, always - then the environment is shifted towards reciprocity.

    • “The complementary opposite of scarcity is not abundance, it is reciprocity.”

  • On consensus:

    • “Being able to program incentives and the flow of value through society means we don't need to hold static popularity contests every four years, premised on partisan debates: we can govern dynamically by constantly modeling, assessing and updating our understanding of legitimacy.”

    • “IETF governance revolves around the simple maxim that engineering is about trade-offs. As such, we need clear ways of thinking about how we make decisions. We ought to avoid "majority rule" and get to rough consensus decisions which promote the best technical outcomes.”

    • “Lack of disagreement is more important than agreement”

      • “1. Coming to consensus is different from having consensus. In coming to consensus, the key is to separate those choices that are simply unappealing from those that are truly problematic.”

      • “2. Closure is more likely to be achieved quickly by asking for objections rather than agreement.”

    • “Issues are addressed, not necessarily accommodated”.

      • "That's not my favourite solution, but I can live with it. We've made a reasonable choice" - this is consensus, not rough.

  • “Don't be a reformer. Build systems that help people govern themselves, and then - the most radical choice of all - let them actually do so.”

  • On identity / portable roles:

    • “I always thought that masks were for hiding, but I’ve learned that they often reveal as much as they obscure. They allow you to explore a new identity even as you retreat from an old one. Rather than an escape from self, alt identities teach you that your legal identity is also a kind of mask — an ever-evolving montage of loosely assembled parts.” - Aaron Lewis

    • “Furthermore, by using verifiable credential tools such as zero-knowledge proofs, we can verify personal data without having to publicly reveal it. This wide range of possibilities means web3 “wallets” are actually a very powerful identity regime, and we must think carefully about how we wield them, especially as the distinction between financial applications and social applications fades.”

  • The Garden of Forking Memes, an article by Aaron Lewis about subcultures, was just fascinating.

    • “The telegraph, time zones, radio, and television led to new patterns of mass connectivity and synchronization. Time was made visual and divided into smaller and smaller units that allowed us to achieve unprecedented levels of coordination [...] This trend has come to its logical conclusion because we all live inside a cage of time made up of 32 satellites orbiting Earth. Twentieth-century time was imposed on people from the top-down. Twenty-first century time is a bottom-up choose your own adventure story that allows people to make their own time machines and live anywhen.”

  • “One of my closest friends says his love language is deep attention. When I’m confused about a situation, he listens to what I have to say, directs me with careful questions, and then goes away for a few hours. Eventually, he comes back with a question or framing that slices through my fog. I treasure his speech deeply” — Attending to the other, Jasmine Wang


Module 5 - Tokens & Mechanism Design:

  • “A finite game is played for the purpose of winning, an infinite game for the purpose of continuing play.”

  • “Bitcoin and Ethereum have no strategy - they are means of ordering transactional facts in time such that no one can claim ownership of either order or fact. Remember: we're not fighting the system, just abandoning it.”

  • Regarding incentives and narratives:

    • Bankless podcast episode “Epsilon theory” - fantastic piece on how the stock/crypto market can be examined through narratives between people

    • “Dr Ben Hunt makes the claim that epsilon is the term which captures other people's behaviour. In order to demonstrate this, he cites a lesser-known game, beloved by John Maynard Keynes: the common knowledge game. Back in the day, newspapers held beauty contests where they would post 10 pictures of beautiful women and have the public write in and vote on who was the prettiest lady in the land (Level 1). However, you might be rewarded if you voted for the winner. Which means, you ought to vote not for who you actually feel is prettiest, but who you feel everyone else will vote for as prettiest (Level 2). Level 3 is everyone figuring out what everyone else thinks about who the prettiest girl is, much like the stock market today. It's a question of what the consensus about the consensus is. It's game theory for crowds!”

  • Prosocial value:

    • “What should young people do with their lives today? Many things, obviously. But the most daring thing is to create stable communities in which the terrible disease of loneliness can be cured.” Kurt Vonnegut, 1974

    • “What it would mean to design games that include goals like 1) Reducing loneliness 2) Decreasing Toxicity 3) Boosting a player's positive connections with others”

    • “The broad solution to both these problems is to design systems that build relationships between players.”

  • Three key prosocial measurements:

    • “Measuring the unmeasured: trust and positive-sum resources, knowledge in particular.”

    • “Facilitating connection: building for friendship formation, encouraging trade and fostering shared vulnerability.”

    • “Facilitating expression: through voting resources as economic tools and integrating social metrics with business success.”


Module 6 - Scaling Principled Games:

  • “When the incentives which define the structure of power in society can be programmed by anyone, anywhere; censorship resistance becomes an engineering problem, not an ideological one. This is clear if you read Vitalik's post - it's all about implementation details, not ideology.”

  • “Instead of using legal code to uphold the supposed good of free speech, we can use economic code to make censorship prohibitively expensive.”

  • “Bitcoin uses mathematics in the form of elliptic curve cryptography to route around the need for human regulation and thereby ensure some degree of censorship resistance. Ethereum does this too. However, Eth2 will use a different kind of mathematics - game theory - in addition to cryptography to ensure not just censorship resistance, but to prove objectively that censorship is asymmetrically expensive for those who would attempt it.”

  • “Unless you're burning to find the answer, and unless you're willing to give up everything in the pursuit of that answer, you will never truly learn it.”

  • “The Sacred Cow: Schooling” (pages 52-58): Interesting piece of text about schooling vs education in Puerto Rico. Quite frankly the exact opposite of Kernel.

  • Playing the crypto games is what Andy often refers to as the ‘Infinite Game’. It took me a while to understand the concept he was referring to: it’s the comparison between the Infinite Game and the Finite Game from James P. Carse’s book. If you are trained to win, you are playing a Finite Game. If you are learning how to continue the play, you are playing an Infinite Game. “To be prepared against surprise is to be trained. To be prepared for surprise is to be educated.”

  • Eth2 design principles: Simplicity, stability, sufficiency, defence, verifiability

    • “There are other features deliberately left to L2: (i) privacy, (ii) high-level programming languages, (iii) scalable state storage, and (iv) signature schemes because these features are all areas of rapid innovation, which tilts the trade-off towards ensuring we don't set some solution in the stone of our protocol spec for an area likely to develop extensively over the next 10 years.”

    • On slashing the validators: “In a system designed around penalties, you need to distinguish between various types of validator failure - most of which are benign (like simply being offline) - and only a few of which are genuinely malicious. Critically, it is the trade-off between different penalties which informs how we structure rewards.”

    • “What is Eth2? Well, we said it already: our generation's elder game of economic penalties. These penalties are the game mechanics we use to reveal a unique kind of truth: it is possible to build - and asymmetrically defend/maintain - an explicitly prosocial, global, and ownerless system that provably benefits all the people who choose to use it.”

    • “Module 6 - Serenity” provides a great overview of how the PoS system works in technical detail, with links to original docs

  • On Inventing (Bret Victor):

    • “Creators need an immediate connection to what they create.”

    • “So much of creation is discovery. And you can't discover anything if you can't see what you're doing [...] Having an immediate connection allows ideas to surface, and develop, in ways which were not before possible.”

    • The two videos on the “Module 6 - Principled” page showed how important tools are in fostering creativity, whether it’s coding or creating music. You need to have an immediate feedback loop.


Module 7 - Gift:

  • “So, let's look more carefully at those two loaded words: ideology and spirit. To do so, we'll consider a critical and often-overlooked part of hacker and cypherpunk culture: gift-giving.”

  • “Here's the secret to telling executable economic stories which can be used to program human incentives: don't guess what "the world" needs, ask what beautiful things you would do if there was money for it, and then write the code required to give that value.”

  • “In order to understand gift-giving properly, you need to hold in mind its complementary opposite: manipulation. When I give you a gift, you can either interpret it as a gift, pure and simple; or as me trying to hold one over on you, create a social debt, outdo you with my show of generosity, etc. This is to say that the act of giving does not create the gift: it is only when it is received in good faith that a gift truly exists.”

  • “The act of giving does not create gifts: it is only when one is received in good faith that a gift truly exists.”

  • “The psychology of giving reveals fascinating aspects of human consciousness. This is because gifts go against the scarcity we must navigate in order to survive and, in denying that scarcity, gift-giving is and always has been a profoundly meaningful act.”

  • “The sacred simply gives meaning to our lives; nothing more, nothing less. This is why the most potent gifts - sacrifices - are always at the heart of sacred ritual and initiatory rite.”

From Gold to Bitcoin and beyond: Is Ethereum taking over as a Store of Value?

Disclaimer: The information contained in or provided from or through this article and this entire website is not intended to be and does not constitute financial advice. You understand that you are using all information available on or through this website at your own risk.

Mental sanity needs a reliable Store of Value

In 1519, Hernán Cortés and his 600-man-strong Spanish force invaded Mexico, back then an isolated part of the world. The Aztecs, who lived in Mexico at the time, noticed how the foreigners showed an extraordinary interest in a particular yellow metal. It seemed like they could never stop talking about it. The Aztecs were not unfamiliar with gold. They used it to make jewelry and statues, and occasionally used gold dust as a medium of exchange. But when the Aztecs wanted to buy something, they generally paid in cocoa beans or bolts of cloth. The Spanish obsession with gold seemed inexplicable. What was so important about a metal that could not be eaten or drunk, and was too soft to use for tools or weapons? When the local people questioned Hernán Cortés, he answered:

“Because I and my companions suffer from a disease of the heart which can be cured only with gold”

Hernán Cortés having a chat with the Aztecs.


This story, above all, speaks to the psychological need for safety and reliability.

Whatever wealth, small or large, you have accumulated through hard work, you need a way to store it reliably. The value of your wealth needs to be retained through the following year, through the next 10 years, and all the way to your offspring.

Gold has served this need for thousands of years, and for good reason.

However, this is about to change for the first time in human history.

Physical gold is gradually being replaced by digital gold. That is, Bitcoin.

In the quest for the hardest Store of Value

I’m writing this article in May 2021. We’re about halfway through the largest crypto bull market in history.

You can hear investment tips from taxi drivers. Many of us have witnessed our friends going nuts over cryptos. The bull cycle has happened every four years, mimicking Bitcoin’s mining reward ‘halvening’. Perhaps you remember the ends of 2017 and 2013. Those were similarly crazy times.

With this blog post, I want to cut through all the buzz and noise in the markets and get back to fundamentals: What are the traits of the hardest Store of Value?

In the first part of this blog post, I explore the fundamentals of money and the Store of Value. I summarize the history from gold coins in the Roman empire, through the creation of fiat paper money and the decoupling of fiat money from physical gold in 1971, to how blockchain emerged as a superior technology for creating global consensus and a Store of Value. Bitcoin is widely regarded as ‘digital gold’, and I explain the fundamentals behind this.

In the second part of this blog post, I compare Bitcoin and Ethereum from the Store of Value perspective. While Bitcoin is considered ‘digital gold’, Ethereum has traditionally been categorized as ‘digital oil’. As we approach the Ethereum 2.0 upgrade, with its new deflationary economics, some people have started to call Ethereum ‘ultrasound money’, or ‘digital bond’. I will unpack the reality behind this meme.

This blog post is written so that even a beginner could follow it. It will get quite technical, though, so bear with me.

If you are already knee-deep in the crypto metaverse, and you already understand the ‘digital gold’ narrative, just skip directly to the second part of the blog post.

My crypto background

I first encountered Bitcoin in 2013 while studying Computer Science. I read the Bitcoin white paper and studied blockchain technology with great fascination, and understood part of its potential. I bought my first Bitcoin the same year for 36.38 EUR, only to find out a few weeks later that it had been stolen from the early web wallet I used. It was years later, though, when I studied the history of money, that my mind was truly blown.

Over the years, I’ve studied blockchain and cryptos for thousands of hours, through books, white papers, podcasts, and the top thinkers, VCs, and scientists in the field. I have met hundreds of people at crypto conferences across Europe and Asia.


Article part 1/2


From Gold to Bitcoin and Beyond

Crypto markets today in May 2021

Let’s have a look at the crypto market capitalization as of May 14th, 2021:

Block size visualizes the relative market cap - Bitcoin is the largest, Ethereum is the 2nd largest. Pic from Coin360


The total market cap of all cryptos combined is USD 2.25 trillion:

  • Out of that, the Bitcoin market cap is $0.922 trillion.

  • Ethereum (ETH) market cap is $0.441 trillion.

  • All other cryptos combined: $0.887 trillion.
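As a quick sanity check, the dominance shares implied by these figures can be computed directly (a minimal Python sketch using only the numbers quoted above):

```python
# Market-cap dominance from the May 2021 figures above
# (all numbers in USD trillions, as quoted in the text).
total = 2.25
caps = {"Bitcoin": 0.922, "Ethereum": 0.441, "All others": 0.887}

for name, cap in caps.items():
    share = cap / total * 100
    print(f"{name}: {share:.1f}% of the total crypto market cap")
```

This puts Bitcoin at roughly 41% dominance and Ethereum at roughly 20%, which matches the picture in the chart.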

So, you might ask, from a macro perspective, are these numbers significant or not? Let’s do some comparisons.

We can compare crypto market caps to nation-state GDPs. Bitcoin’s market cap is larger than the GDP of 90% of the 187 nation-states tracked by the World Bank. If Bitcoin were a nation-state, it would be the 15th largest, bigger than Saudi Arabia or Switzerland.

If all cryptos combined were one company, it would be the largest company in the world: larger than Apple, currently the biggest with a market capitalization of over USD 2 trillion.

As the last data point, the combined market capitalization of cryptos surpassed the value of all US dollars in circulation this year.

The market cap of physical gold is around 11 trillion. Bitcoin only needs a roughly 12x to reach that. Since its inception, Bitcoin has grown on average over 200% annually for 11 years. Another 12x won’t be a stretch. The influx of institutional money has merely started, with Tesla, Square, MicroStrategy and over 20 public companies holding Bitcoin on their balance sheets. Bitcoin ‘flippening’ the physical gold market cap is just a matter of a few years.
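A back-of-the-envelope sketch of that gold comparison, using the figures above (the 3x-per-year growth assumption is purely illustrative, not a forecast):

```python
import math

gold_cap = 11.0   # USD trillion, approximate market cap of physical gold
btc_cap = 0.922   # USD trillion, Bitcoin market cap in May 2021

# Growth multiple Bitcoin would need to match gold's market cap.
multiple = gold_cap / btc_cap
print(f"Bitcoin needs roughly a {multiple:.0f}x to flip gold")

# If ~200% annual growth (a 3x multiple per year) continued,
# the time to flip gold would be log-base-3 of the multiple.
years = math.log(multiple) / math.log(3)
print(f"at 3x per year, that takes about {years:.1f} years")
```

The point is not the precision of these numbers, only that at historical growth rates the gap closes in a handful of years.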

Finally, it’s good to understand that crypto markets are highly cyclical, yet in a surprisingly predictable manner. Historically, there has been a bull market every four years, with prices soaring sky-high, followed by a massive crash. This cycle correlates with the Bitcoin mining reward, which has been programmed to be cut in half every four years (the ‘halvening’). At the time of writing this blog post, in May 2021, we are in the middle of a bull market. Previous bull cycles happened in 2013 and 2017.

History of money — from gold coins to paper money

Traditionally, the hardest currency on the planet has been physical gold. It has been the best store of value for thousands of years. And not only a store of value: gold simultaneously served as local day-to-day money and as money for international trade.

Historically, national currencies didn’t play as large a role as they do today. Yes, old empires had their own gold and silver coins, but these were practically interchangeable with any other gold and silver coins based on their weight in grams.

Silver drachma of Marcus Aurelius, the legendary Roman Emperor whose wisdom has become trendy lately


Why has GOLD remained the hardest Store of Value throughout thousands of years?

Gold has remained a Store of Value mostly because its Stock-to-Flow ratio is high: there is a major difference between annual production and the total available supply of physical gold.

In numbers: in 2017, the total amount of physical gold ever mined was approximately 190,000 tons (this is the ‘stock’), while annual mine production was around 3,100 tons (this is the ‘flow’). If you do the simple math, it would take about 61 years to double the total stock of gold at the current production rate. In other words, the inflation rate of gold is around 1.6%, the inverse of its Stock-to-Flow ratio. If someone stores their wealth in gold, it is difficult for anyone to dilute its value by more than that 1.6% per year.
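The arithmetic can be written out as a small sketch, using the 2017 figures quoted above:

```python
# Stock-to-flow arithmetic for gold, 2017 figures.
stock = 190_000   # tons of gold ever mined (the 'stock')
flow = 3_100      # tons mined during 2017 (the 'flow')

s2f = stock / flow               # years needed to double the stock
inflation = flow / stock * 100   # annual supply inflation, in percent

print(f"Stock-to-flow ratio: {s2f:.0f}")             # 61
print(f"Annual supply inflation: {inflation:.2f}%")  # 1.63%
```

Note that the two numbers are reciprocals of each other: a high stock-to-flow ratio is the same statement as a low supply-inflation rate.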

Gold is valuable because its annual production relative to the existing stock is so small. If you examine the historical production rate of gold, it has remained on average at 2-3% of the gold stock per year over many decades.

The asset with the highest Stock-to-Flow ratio is a strong contender to win the Store of Value game. It’s not the only required trait; durability and divisibility, for example, are important too. People will naturally tend towards the best store of value, just as we witnessed with the Spanish invaders and the Aztecs. It is not convenient to hold multiple forms of Store of Value.

Gold is ‘hard money’ because of the laws of nature, not because of policies or laws set by people. As we will learn later in this blog post, currency systems whose supply can be tampered with by humans have consistently failed.

In our fiat economy, failure happens because inflating (printing) new money is the easiest way for governments to solve problems. That is because austerity (raising taxes) causes more pain than benefit, big (debt) restructurings wipe out too much wealth too fast, and transfers of wealth from haves to have nots don’t happen in sufficient size without revolution.

Stock-to-flow ratio of Bitcoin compared to physical gold. Bitcoin will surpass gold in just a few years.


Banks and paper money were invented because gold was heavy to carry and risky to store

Gold was a great Store of Value, but it wasn’t convenient for day-to-day transactions. It was heavy to carry around and not very divisible. It was also unsafe to keep all of your life savings at home in case someone broke in.

The banking system solved both of these problems. You could bring your gold to the local bank, store it safely, and get paper currency in exchange. You could trust this arrangement because you knew the paper currency was backed by gold, and at any given time you could walk into your bank and ask to exchange the paper currency back into gold.

This system worked somewhat well until the paper money wasn’t backed by gold anymore.

In the 1960s, US federal spending skyrocketed due to an expansion of entitlement programs. At the same time, the US was boosting its defense spending because of the rising costs of the Vietnam War and the Cold War with the Soviets. The increased debt eventually caused a depletion of America’s gold reserves, from over 20,000 metric tons in the late 1950s to under 10,000 metric tons by 1970.

Sensing the situation was no longer tenable, in August 1971 US President Richard Nixon announced the ‘temporary’ suspension of the dollar’s convertibility into gold. The so-called Bretton Woods system collapsed over the following two years, and the dollar has been a “fiat” currency ever since.

Suddenly, dollars were just collective hypnosis. No law of nature controlled their scarcity anymore. This set off a massive increase in fractional reserve banking. In layman’s terms: money printing. Fiat money has lost a tremendous amount of value against gold due to this extensive money printing, especially after 1971.
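The phrase ‘fractional reserve banking’ can be made concrete with the textbook money-multiplier model. This is a deliberate simplification with an assumed 10% reserve ratio; real modern banking is far messier, but the mechanism is the point:

```python
# Textbook fractional-reserve sketch: with a 10% reserve requirement,
# each dollar of reserves can support up to 1 / 0.10 = 10 dollars
# of deposits across the banking system.
reserve_ratio = 0.10     # assumed for illustration
initial_deposit = 1_000.0

multiplier = 1 / reserve_ratio
max_broad_money = initial_deposit * multiplier
print(max_broad_money)  # 10000.0
```

Lower the reserve ratio and the multiplier grows: the amount of money in circulation becomes a policy choice, not a physical constraint.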

History is full of currencies that have been debased

Debasing money is by no means a recent invention. Whether it’s the debasement of currency in the Roman empire (in 200-300 AD), glass beads in West Africa (late 18th century), or Rai stones on Yap Island (in the 19th century), history is full of similar stories. Currency is debased, the debasement gives a temporary fix, but it soon creates a monetary mess that cannot be solved. Most of the time, the empire collapses.

The debasement of currency in the Roman empire — The rapid decline in silver purity of the Antoninianus coin


In the 20th century, we can find similar examples of hyperinflation in countries such as Austria, Germany, Hungary, Zimbabwe, China, Yugoslavia, Greece, and Armenia, to mention just a few. Do you think something similar couldn’t happen in the US and Europe in the near future? Well, we’re on a highway at full speed towards this destination. Tighten your seatbelt. Politicians and central bankers of today are repeating history as we speak.

WTF happened in 1971

Now is an excellent time to take a break and browse https://wtfhappenedin1971.com/ for 2 minutes. It gives you great laughs and makes you terrified at the same time.

One graph from http://wtfhappenedin1971.com/

“No way… US dollar of today is the reserve currency of the world! It cannot be debased!”

Oh, really?

In the entire history of the United States, it is estimated that over 20% of all circulating US dollars were printed in 2020. From 1776 to 2020, over 20% of the money supply was created in a single year. Let that sink in.
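A rough way to sanity-check that claim is with money-supply figures. The round M2 numbers below are assumptions for illustration, not exact official data:

```python
# Illustrative check of the "~20% printed in 2020" claim, using
# approximate M2 money-supply figures in USD trillions
# (rough assumed values, not exact official statistics).
m2_early_2020 = 15.4
m2_end_2020 = 19.1

created = m2_end_2020 - m2_early_2020
share = created / m2_end_2020 * 100
print(f"~{share:.0f}% of end-2020 dollars were created during 2020")
```

With these assumed inputs the result lands just under 20%, in the same ballpark as the claim in the text.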

Yes, you might argue part of this printing was necessary to cover the expenses related to Covid, and you would be right. However, if printing is possible, it just always gets out of hand.

During the last few years, money printing has been inflating the prices of stocks, real estate, land, and all the other scarce assets like never before. These assets are largely owned by the middle class and especially by wealthy people. Money printing is effectively universal basic income for people who own appreciable assets, and significant dilution for the younger generation, who haven’t yet been able to accumulate many of these appreciable assets and savings.

Slowly but surely, inflation is also reaching commodities. Commodity prices in 2021 have increased compared to the previous year: lumber 265%, gasoline 182%, soybeans 72%, and sugar 59%, to mention just a few.

The main takeaway from the history of money is that humans have constantly screwed it up. Every. Single. Time. Debasing currency is easier than raising taxes. It was as true thousands of years ago as it is today. It always gets out of hand, one way or another.

Jerome Powell, Fed chair since 2018. Under his lead, over 20% of all US dollars were printed in 2020.

Let’s sum it all up: why is the economy overheated all the time?

The economy has been overheated and on the brink of collapse since the financial crisis of 2008-2009. You can find many explanations for this, but ultimately it comes down to the fact that our current monetary system is not backed by the laws of nature or the laws of physics.

Not too many decades ago, we had the laws of nature (gold) creating the trustworthiness of paper money. Things were much more stable and predictable back then. Today, there are no laws of nature or physics backing up the money we use.

Fiat currencies are backed by a bunch of regulatory agencies, laws, and the police, and ultimately by the army, especially in the case of the US. One can do a bit of research on countries that have tried to trade oil in currencies other than the USD. We’re constantly moving from one crisis to the next, and the crises expand into foreign policy dimensions.

I want my money to be backed by the laws of physics again!

The big question is, would there be a way to introduce the laws of physics back to our monetary system? Would there be a way to return to sanity? Is there hope?

The good news is that we indeed do have hope. The hope is already available for anyone who cares to 1) study the history of money, 2) study how blockchain technology works, and 3) invest in the blockchain space for the long term.

That’s great news for everyone, especially for the younger generation, who have been sidelined in the current economic structures.

So, how can we get the laws of physics back? What makes a good Store of Value?

We use money every day, yet few of us genuinely understand the properties that make a solid Store of Value or good day-to-day money.

Let’s have a look at the chart below:

20210602_Traits_of_Money.png

As we can see from the chart, unlike fiat money, Bitcoin is scarce: there will only ever be 21 million Bitcoins. Unlike fiat money, Bitcoin is also backed by the laws of physics. When something is scarce and backed by the laws of physics, it functions as a reliable store of value as long as there is increasing demand for the asset. Bitcoin indeed has that demand. If you compare Bitcoin to the Internet, we are living in the year 1997: Bitcoin has as many users as the Internet had then. The growth rate is just much higher.
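The 21 million cap is not an arbitrary constant; it falls out of the halving schedule. Here is a simplified sketch (real Bitcoin nodes do this in integer satoshis with rounding down, so the true cap is a hair under 21 million):

```python
# Sum the block subsidies: 50 BTC per block at launch,
# halving every 210,000 blocks (roughly every four years).
subsidy = 50.0
blocks_per_halving = 210_000
total_supply = 0.0

while subsidy >= 1e-8:   # stop once the subsidy drops below 1 satoshi
    total_supply += subsidy * blocks_per_halving
    subsidy /= 2

print(f"{total_supply:,.4f} BTC")  # just under 21,000,000
```

The halving schedule is also what drives Bitcoin’s stock-to-flow ratio ever higher: the flow is programmatically cut in half every four years while the stock keeps growing.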

Bitcoin is not only a good Store of Value, it’s actually a convenient way to make large transactions. For example, let’s say you need to transfer USD 10 million worth of value from Europe to the US.

If you start moving USD 10 million of physical gold, that is a costly operation. The logistics costs alone are incredibly high because gold is heavy. On top of that, you need special safety arrangements to protect the gold bars. This comes with a high cost as well. The transaction is extremely slow and will take weeks, including all the preparatory work. Finally, the customs house of the receiving country might delay the shipment even further and, at worst, tax or confiscate it.

Let’s examine a similar transaction using Bitcoin. The logistics cost is the transaction fee of the Bitcoin network, anywhere between 5 and 20 dollars depending on the current transaction volume of the network. The preparation time for the transaction is a few minutes, or a few hours at most if you are using extra-secure multi-signature wallets. The transaction arrives in a few minutes. It is also censorship-resistant: nobody can prevent you from sending or receiving Bitcoins.

As you can see, Bitcoin is an excellent way of transacting when it comes to large enough transactions. Small transactions (day-to-day money) and large transactions (Store of Value money) have different requirements, though. It doesn’t make much sense to buy a bag of groceries with Bitcoin if you need to pay 20 dollars in transaction fees. However, when you invest anything more than 1,000 dollars in Bitcoin, that transaction fee doesn’t really matter. The transaction fee is the price you pay for the security of the network.
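That trade-off is easy to see numerically. A flat fee behaves very differently at different transaction sizes (the 20-dollar fee is the upper end of the estimate above):

```python
# Fee as a fraction of the amount moved: why a flat network fee
# suits large transfers but not grocery runs.
fee_usd = 20.0  # assumed flat fee, upper end of the estimate above

for amount in (100, 1_000, 10_000_000):
    pct = fee_usd / amount * 100
    print(f"${amount:>10,}: fee is {pct:.4f}% of the amount")
```

A 20-dollar fee is 20% of a 100-dollar purchase but a rounding error on a 10-million-dollar transfer, which is exactly the day-to-day money versus Store of Value split described above.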

If Bitcoin is the hardest Store of Value, what does the future look like for gold and governments?

Central banks have traditionally backed their paper money with the hardest Store of Value asset; that’s why all central banks hold sizeable physical gold reserves. Governments need to keep storing the hardest Store of Value asset in their reserves, now and in the future, in order to credibly issue paper money that the general population will trust.

But hey, the gold peg was already removed in 1971, right? And didn’t we just argue that Bitcoin is a better Store of Value than physical gold? What value does gold have as a reserve asset in the future?

Well, that’s a great question. And I’m sure all central banks are pondering this exact question right now. For the first time in history, central banks have a competitor: decentralized blockchain technology.

I believe the last few decades of detaching our monetary system from the laws of physics have been a temporary period. Humanity is demanding sanity, and thus, the laws of physics.

We already have over 20 publicly listed companies holding Bitcoin on their balance sheets, such as Tesla, Square, and MicroStrategy. Many other corporations will learn and follow. Within a few years, Bitcoin will subsume and flip the physical gold market cap.

In the next bull cycle, around 2025, I predict some smart governments will start buying Bitcoin into their central bank reserves. Singapore is an exception, as they have been accumulating freshly mined Bitcoins for some time already. Damn, what smart folks. If Bitcoin holds the fundamentals of the best Store of Value asset in the world, it will gradually start subsuming the negative-yielding sovereign debt market, and institutional investors (investment funds, pension funds, etc.) will begin to allocate their wealth to Bitcoin.

Governments running their own currency need to back it up with the hardest Store of Value; I can see no way around it. Traditionally, that has been physical gold. In the future, besides physical gold, it might be inevitable for central banks to hold Bitcoin (and a basket of the largest crypto assets) in their reserves. After all, these crypto assets will have a higher Stock-to-Flow ratio than physical gold, and a supply cap that cannot easily be tampered with.

I don’t believe government currencies will disappear anytime soon. After all, governments have the power of law in their territory, backed by the police, a bunch of regulators, and ultimately the army. They can enforce their own currency for day-to-day transactions. However, if that currency is not credible, people have decentralized options available. When it comes to savings and investments, people can use whatever decentralized currency serves them best. Governments can at most hinder this a bit, but they cannot stop it.

In the long term, I see a future where it’s totally irrational for governments and traditional banks to resist the change. If you can’t beat them, join them. I believe traditional banks and central banks (with their CBDC money) will join the unstoppable shift from legacy finance technology to building on top of blockchain technology. Smart contracts and algorithms will play a much larger role than they do today.

Right now, Bitcoin is the thermodynamically hardest Store of Value

Bitcoin is currently the hardest Store of Value on the planet. There are no signs that any other asset could beat Bitcoin’s Store of Value fundamentals in the next few years. The most secure, decentralized, and time-tested consensus mechanism backs it. No one has been able to hack it during its 11 years of existence. Bitcoin’s supply is capped at 21 million, and it has the largest network effects (most users) of any cryptocurrency. Finally, it’s backed by the laws of physics and thermodynamics, which we explore in the second part of the blog post.

This is the end of the first part of this blog post.

If this was too much to stomach and you can’t quite wrap your head around everything, that’s normal. To understand the entire monetary transformation the world is currently going through, one needs to study 1) how blockchain technology works and 2) the history of money.

The best starting point is to read The Bitcoin Standard by Saifedean Ammous. The book tells the history of money.


Article part 2/2


Will Ethereum take over the status of Bitcoin as a Store of Value?

In the second half of the blog post, we compare the Store of Value traits between Bitcoin and Ethereum.

First, I’d like to emphasize that in this article I’m specifically talking about Ethereum’s version 2.0. Ethereum 2.0 has not launched at the time of writing this blog post in May 2021. The EIP-1559 tokenomics upgrade is scheduled for July 2021 and the Proof of Stake migration for early 2022.

Ethereum flippening?

Ethereum flippening means that Ethereum’s market cap would surpass that of Bitcoin. Every time we are in a bull cycle, this narrative comes up, as the Ethereum price (in BTC terms) has historically rallied hard then. In the last bull cycle in 2017, Ethereum’s market cap at its highest peak was already over 83% of Bitcoin’s market cap (see the graph below).

You can compare the ‘flippening’ metrics between Bitcoin and Ethereum on the Blockchain center website.

I will argue that Bitcoin will remain the best Store of Value for the next few years. I believe Ethereum won’t permanently flip Bitcoin in this bull cycle for numerous reasons I will explain below. There is a chance that the Ethereum market cap temporarily exceeds that of Bitcoin, but I don’t believe it would be permanent.

Comparing Bitcoin’s and Ethereum’s Store of Value fundamentals:

I put together a summary of the traits required of a Store of Value crypto asset:

[Image: Traits of a Store of Value]

Let’s go through every trait one by one.

Scarcity

This is the easy part to understand.

If there are only 21 million Bitcoins, and no stakeholder can issue more of them, the asset is scarce. The more demand there is, the more the price goes up. Folks more humorous than me call this NGU technology.

So far, Ethereum’s supply has not been capped. Ethereum was earlier regarded as ‘Digital oil’, and unlimited supply was considered acceptable by the Ethereum community. That sentiment has since done a full U-turn, though. If bitcoin is considered ‘sound money’, the Ethereum community has recently decided that Ethereum needs to become ‘ultrasound money’. That is, Ethereum will turn into a scarce, deflationary asset with the Ethereum 2.0 update.

Oh, pardon me. If you were left wondering, the “NGU technology” stands for Number Go Up technology.

Supply predictability

For supply predictability, we need to look at history and future plans.

Bitcoin’s supply is as predictable as it can be. The supply cap was set at 21 million at genesis. As of today, around 19 million bitcoins have been mined. All stakeholders in the Bitcoin community have financial incentives to keep the supply cap as it is. Things are fairly simple.
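The 21 million cap isn’t an arbitrary constant stored anywhere; it falls out of the halving schedule. A minimal sketch of that arithmetic, mirroring the protocol’s integer satoshi accounting:

```python
# Bitcoin's ~21M supply cap falls out of its halving schedule: the block
# subsidy starts at 50 BTC and halves every 210,000 blocks. Amounts are
# tracked in satoshis (1 BTC = 100,000,000 sat), as in the protocol.

def total_supply_btc() -> float:
    subsidy = 50 * 100_000_000  # initial block reward in satoshis
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy  # blocks per halving era
        subsidy //= 2               # integer halving, as Bitcoin does
    return total / 100_000_000

print(total_supply_btc())  # just under 21 million
```

Because the halving uses integer division, the sum converges slightly below 21,000,000; no change to the protocol is needed to stop issuance.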

[Chart: Bitcoin issuance over time (Coin Metrics)]

Ethereum’s supply is a complicated story. Initially, Ethereum’s supply was not capped. The amount of issuance has changed over the years quite considerably, based on updates to the protocol. You can have a look at the graph below.

[Chart: Ethereum issuance over time]

The most massive change is still ahead, though: EIP-1559. EIP stands for “Ethereum Improvement Proposal”. The EIP-1559 upgrade, part of the “London” hard fork, is scheduled for July 2021. This will indeed turn Ethereum into a deflationary asset, meaning that its supply will decrease over time, via a rather smart dynamic issuance/burning mechanism. In overly simplified terms: if the number of transactions is high, part of the transaction fees is burned, reducing the total supply. On the other hand, total supply will increase if the number of transactions starts to drop.
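The burn dynamic above can be sketched as a toy model. The reward and fee numbers below are made up for illustration; they are not actual protocol parameters:

```python
# Under EIP-1559, the base fee of every transaction is burned. Net
# issuance per block = new ETH issued - base fees destroyed; when
# activity is high enough, the net is negative and supply shrinks.

def net_issuance(block_reward_eth: float,
                 txs_in_block: int,
                 avg_base_fee_eth: float) -> float:
    burned = txs_in_block * avg_base_fee_eth
    return block_reward_eth - burned

# Busy block: more burned than issued -> deflationary
assert net_issuance(2.0, 200, 0.02) < 0

# Quiet block: less burned than issued -> inflationary
assert net_issuance(2.0, 30, 0.02) > 0
```

The point of the mechanism is exactly this feedback: supply contraction is strongest when demand for block space is highest.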

Another massive change is Ethereum’s shift from a Proof of Work to a Proof of Stake consensus algorithm. This is scheduled to happen after EIP-1559, at the end of 2021. It means that Ethereum miners (or, in the future, validators) won’t need to sell most of their newly earned Ether to fund electricity and equipment, which will decrease the sell pressure on Ether.

You can probably guess what these two massive changes will do to the price of Ether the asset. Justin Drake from the Ethereum Foundation presented interesting Excel models on the Bankless podcast of how this could play out. Yup: if the upgrades go through successfully, the price of Ether will most likely go up quite a bit.

All this said, the upgrade to Ethereum 2.0 is highly complex and something never done before. The upgrade has been in the works for years and delayed several times. Although the Ethereum community seems to take all the necessary precautions and the time it needs, such a massive upgrade carries safety risk, and that is not a good Store of Value trait.

So, how about the ‘ultrasound money’ meme? Yes, the supply economics indeed look really bullish for Ethereum 2.0. However, ‘ultrasound money’ requires many more properties besides issuance, such as safety and predictability. The safety of Proof of Stake is yet to be time-tested, and Ethereum’s development and supply issuance haven’t been very predictable either. Thus, in my book, this meme is a bit misleading and can be considered clever marketing aimed at new retail investors.

Immutability

Immutability requires a bit of explanation.

Bitcoin and Ethereum have three key stakeholders:

  • Users (investors)

  • Developers (software engineers)

  • Miners/validators (people who own the equipment to verify transactions in the network)

Out of all cryptocurrencies, Bitcoin is the only protocol with a track record of being truly immutable.

Immutability means that these three stakeholder groups are in a beautiful equilibrium: none of them alone can change the protocol, e.g. increase the supply of Bitcoin. Miners cannot do it. Users cannot do it. Even the core developers cannot do it. The cap will forever stay at 21 million.

If Bitcoin’s core developers want to change something, they need approval from over 50% of the miners who verify the transactions of the Bitcoin network. Beyond incremental upgrades and small fixes, the miners have no incentive to make large changes in any direction, because that would only risk the safety of the protocol and their Bitcoin wealth. The Bitcoin community is fairly stagnant when it comes to innovation, and Bitcoin updates fairly rarely compared to other crypto projects. However, this is exactly how the majority of Bitcoin stakeholders want it. They simply want Bitcoin to be the safest Store of Value: no gimmicks.

Bitcoin was founded by the pseudonymous Satoshi Nakamoto in 2009. Satoshi sent his last message in 2011, making it clear he had “moved on to other projects”. The first known Bitcoin developer after Satoshi was software engineer Hal Finney, who passed away in 2014. Neither of these original core people has been around for seven years. The founders no longer influence the protocol because they are no longer around.

To summarize, immutability means stakeholder equilibrium (users, miners, developers) and the absence of founders.

If you think about immutability on all other crypto projects, things look very different, including Ethereum.

Nearly all other crypto projects have been started by publicly known people. Usually, the team has significant expertise, so these founders are strongly promoted to create credibility. The founding team typically holds anywhere between 5% and 20% of the initial coin supply. Their expertise, and the fact that they hold significant voting power in the protocol, make it possible for them to steer the protocol in the direction they want. Usually they have good long-term intentions for the entire community, of course. In theory, however, they can make malicious decisions.

Regarding Ethereum, when it comes to immutability, we must talk about the infamous DAO hack — DAO being short for Decentralized Autonomous Organization. The DAO was created by a German start-up called Slock.it in June 2016, and within a month of launching, over 150 million dollars’ worth of ETH had been deposited into the DAO smart contract. On June 17th, 2016, an anonymous hacker stole 50 million dollars’ worth of ETH from the DAO smart contract, equal to about 5% of all the ETH in circulation at the time. The Ethereum camp at the time was divided into two groups. A small minority touted the ‘code is law’ principle, willing to stick to immutability. The other group thought that not reversing the stolen funds would be riskier for the Ethereum project as a whole. After disagreements, Ethereum eventually hard-forked on 20 July 2016, creating two separate blockchains: Ethereum (ETH) and Ethereum Classic (ETC). On the Ethereum chain, the DAO hack effectively never happened; on the Ethereum Classic chain, the ETH remains stolen, for better or worse.

To sum up the story: the hard fork was probably the right thing to do at the time, but it also demonstrated that the protocol was far from immutable.

Stability of codebase

Bitcoin’s codebase is much more stable than Ethereum’s. Bitcoin’s codebase sees roughly one-third the activity of Ethereum’s.

The cultures of these communities are really different. Bitcoin values stability, immutability, and only small, necessary upgrades. Bitcoin is more in a ‘maintenance’ phase, while Ethereum is under heavy development, going through massive upgrades. Some people refer to Ethereum as a plane that is being fixed and upgraded while in the air.

PS. Measuring codebase activity is actually not a straightforward task. GitHub commit counts, for example, give a misleading picture (e.g. Cryptomiso is misleading). One needs to look at this more holistically, as Santiment does. Valentin Mihov has explored this topic in his blog post.

Protocol simplicity

Bitcoin’s aim is simple: to create consensus on its distributed ledger. You can store and send value. That’s it.

Ethereum as a protocol is much more complicated, and its ambition is very different. Ethereum aims to be the world’s blockchain computer. Not only can you store value on Ethereum, you can run arbitrary programs there. Indeed, Ethereum is a Turing-complete computer. This makes Ethereum a protocol orders of magnitude more complicated than Bitcoin.

Not only is Ethereum’s codebase much larger than Bitcoin’s, the amount of data stored in Ethereum is also significantly larger. The full Ethereum blockchain is about 783 gigabytes as of today; the full Bitcoin blockchain is about 345 gigabytes. When comparing these numbers, keep in mind that the Bitcoin blockchain has also been active for six years longer than the Ethereum blockchain.

The larger the blockchain gets over time, the more difficult it becomes for an ordinary person to run a full node and mine the network. A normal computer and a typical hard drive won’t be enough, so participation will get increasingly centralized to large data centers and huge mining pools.

Speed of innovation

I’ve just covered Stability of Codebase and Protocol Simplicity. Bitcoin naturally wins this battle from the Store of Value point of view.

However, we need to look at the flip side of the coin: Speed of Innovation. Ethereum is just light years ahead of Bitcoin in innovation and new development.

I think Ethereum co-founder Vitalik Buterin summarized the difference between Bitcoin and Ethereum very well on the Bankless podcast:

“Legitimacy is always different in different communities. There definitely is a cultural difference between the Ethereum and Bitcoin communities. Ethereum people are more accustomed to valuing Ethereum based on its future, whereas Bitcoin people are much more accustomed to valuing Bitcoin based on its present. Bitcoin people think Bitcoin is 80% complete. Ethereum people think Ethereum is 40% complete. And both sides are comfortable with that, and think the other side is crazy for making the opposite choice.”

It’s easy to agree with this analysis. Bitcoin wants to be a simple protocol, and its community is comfortable with only small, incremental upgrades. Ethereum, by contrast, is a complex platform whose community wants to develop it much further, with many opportunities ahead.

The speed of innovation is definitely Ethereum’s strong point, as witnessed by its native support for smart contracts and NFTs. These two traits are explored in the following two sections.

Native support for smart contracts

Bitcoin doesn’t support smart contracts. While Bitcoin is programmable, you can develop only limited-functionality apps on it.

Ethereum is fully programmable and smart contracts are its main feature.

There are pros and cons to this.

Having no smart contracts keeps the protocol itself simple, which is good for safety and good for a Store of Value. Points for Bitcoin.

However, smart contracts enable an enormous number of digital apps developed on top of the blockchain. The entire financial sector is being replicated on the blockchain through numerous projects, a movement known as Decentralized Finance (or DeFi). Most of these applications are developed on top of the Ethereum blockchain, and they lock an enormous amount of value into it (79 billion USD as of today). That’s good for network effects and, ultimately, for Store of Value.

Explore the total value locked in Decentralized Finance projects on https://defipulse.com/

PS. On the Bitcoin blockchain, there are some layer 2 solutions that do enable smart contracts and NFTs. For example, with Stacks you can run smart contracts that are periodically settled on the native Bitcoin blockchain. Sovryn has built a decentralized exchange on a Bitcoin sidechain, with the aim of smart contract support in the future. Quite a stretch, indeed. Let’s see how these play out.

Native support for NFTs

The other massive change is NFTs, or non-fungible tokens, which enable ownership coordination using blockchain. For example, pieces of digital art can be easily purchased and sold online, and the ownership transfer can be verified using blockchain technology.

The NFT space is growing massively. More than USD 2 billion was spent on NFTs during the first quarter of 2021, an increase of about 2,100% from Q4 2020, according to a new report by NonFungible.com.

A few decades from now, NFTs could represent ownership of most asset classes, such as real estate and cars. Imagine a future where your Tesla’s ownership is recorded in an NFT. When you want to sell your Tesla, you just auction the digital NFT ownership token online, and the car self-drives to the new owner. Easy, huh? The potential of NFTs is massive. Hundreds of millions of dollars’ worth of real-world assets, such as real estate and bonds, are already being tokenized. You’ll find a good summary in this article.

Bitcoin blockchain doesn’t natively support NFTs. Keeping the protocol simple is a good trait for a Store of Value asset. That said, Bitcoin might be missing out on the enormous growth of NFTs, even though some layer 2 solutions like Stacks have been built.

Ethereum supports NFTs natively, and most NFTs are minted on the Ethereum blockchain. Ethereum has enormous potential to lock in most of the future NFT value.

Beeple’s NFT art work, Everydays: The First 5000 Days, sold at Christie’s for $69 million dollars, the sale positioning him as “among the top three most valuable living artists”

Native support for staking (yield)

In the opening of the blog post, I mentioned that Ethereum has traditionally been categorized as “Digital oil” but is now being transformed into a “Digital bond”. What does this comparison mean?

To understand this, we need to understand what ‘staking’ means. Traditional finance has no direct equivalent of staking: it is a feature of Proof of Stake blockchains, and the upcoming Ethereum 2.0 upgrade turns Ethereum into one.

(To clarify — At the time of writing this blog post in May 2021, the current version of Ethereum still uses Proof of Work consensus mechanism, and does not have staking functionality.)

To simplify, staking means locking your ETH for a certain period. This locked ETH is used to validate transactions and secure the network. As a reward for locking your ETH, the network rewards you with new coins.
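As a rough sketch, if rewards are re-staked, the locked balance compounds. The 5% annual yield below is an illustrative assumption, not a protocol figure (32 ETH is the planned validator minimum):

```python
# Hypothetical compounding of staking rewards on locked ETH.
# The APR is an assumed illustrative figure, not a protocol constant.

def staked_balance(initial_eth: float, apr: float, years: int) -> float:
    balance = initial_eth
    for _ in range(years):
        balance *= 1 + apr  # rewards assumed re-staked once per year
    return balance

print(round(staked_balance(32.0, 0.05, 5), 2))  # ~40.84 ETH after 5 years
```

In practice, the actual yield floats with the total amount of ETH staked network-wide, so any fixed APR is only a snapshot.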

ETH ‘staking’ is like a bond offering for a new type of digital nation. Traditionally, we think an economy is connected to a nation state. For example, we speak about “US economy” and “China economy”. Ethereum can be thought of as a global digital economy without physical borders.

The reserve asset of the traditional fiat economy is the US dollar. The reserve asset of the Ethereum economy is ETH, which is also used as the main collateral in its digital app economy.

Traditional economies have their government-issued bonds with yield. Ethereum economy has staking yield.

Unlike a traditional bond, staked ETH has no counter-party risk – there is only protocol risk. Staked ETH gives you yield at the protocol level and not via a counter-party. Staked ETH is therefore an intrinsic yield instrument.

You might think this is just semantics. One might also argue that you can earn yield with Bitcoin. Yes, you can, but there is always counter-party risk, and you need to deposit your Bitcoin into one of the Decentralized Finance protocols. Yield in Ethereum is intrinsic to the protocol itself.

Minimal attack surfaces

The Bitcoin blockchain is genuinely decentralized. It has over ten thousand full nodes in over 100 countries mining and securing the network. It is extremely difficult for anybody, even a powerful organization such as a government, to shut the network down. Even shutting down the entire Internet in one country, or one continent, would pose no fundamental threat to the Bitcoin network itself.

Ethereum, on the other hand, has many potential surfaces for external attack. While it is an oversimplification, one can argue that a government could significantly tamper with the Ethereum network with two phone calls: asking Amazon to shut down AWS hosting for Ethereum dapps, and asking the company Infura to stop its services.

While the Ethereum community intends to increase decentralization and safety, the reality of today is quite different.

You can explore this topic more in this CoinDesk article.

Geographical decentralization of mining pools

Mining pools are groups of cooperating miners who agree to share block rewards in proportion to their contributed mining hash power. While mining pools are desirable to the average miner, as they smooth out rewards and make them more predictable, they unfortunately concentrate power in the hands of the pool’s owner.
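The proportional payout scheme described above is straightforward arithmetic; a minimal sketch (the miner names and hash rates are invented for illustration):

```python
# Split a block reward among pool members in proportion to the hash
# power each contributed. All values are illustrative.

def split_reward(reward_btc: float, hashrates: dict) -> dict:
    total = sum(hashrates.values())
    return {miner: reward_btc * rate / total
            for miner, rate in hashrates.items()}

pool = {"alice": 60.0, "bob": 30.0, "carol": 10.0}  # TH/s contributed
print(split_reward(6.25, pool))  # alice gets 3.75, bob 1.875, carol 0.625
```

Real pools layer fees and variance-smoothing schemes (e.g. pay-per-share) on top of this, but the proportional split is the core idea.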

Mining centralization in China is one of Bitcoin’s largest issues at the moment.

There are about 20 major mining pools, and about 65% of the Bitcoin blockchain’s hash power is controlled by mining pools located in China. This has happened simply because the Chinese government subsidizes electricity production, making electricity really cheap compared to almost anywhere else on the planet.

Due to pressure from the Chinese government, this number is gradually decreasing, though. China-based mining pools are increasingly relocating abroad, which is good for the security of the Bitcoin blockchain.

Chart from https://www.buybitcoinworldwide.com/mining/china/

Mining pools heavily compete with each other, and they have full incentives to play by the rules. Moreover, miners can choose to redirect their hashing power to a different mining pool at any time.

However, what would happen in the unlikely event of a malicious attack on the mining pool companies, e.g. through government pressure?

If the hostile action were significant enough, it would first of all be noticed very quickly, due to the transparent nature of the blockchain ledger. Secondly, if the attack truly questioned the legitimacy of Bitcoin, the social layer (the people) would step in to have their say. (Vitalik Buterin has a great blog post about legitimacy as a scarce resource in the blockchain world.) In that case, Bitcoin would most likely be hard-forked to a new blockchain with the malicious transactions reversed, similar to what happened between Bitcoin and Bitcoin Cash. That said, a large successful attack against Bitcoin would nevertheless leave deep scars.

With Ethereum 2.0, there is no risk of geographical centralization of validators, because validating a Proof of Stake blockchain does not depend on cheap electricity. Validator nodes can be run anywhere on the planet with the same incentives.

Full replication of database

One fundamental security design of a Proof of Work blockchain is full replication of the database: the full ledger is copied to as many nodes as possible, globally.

In Ethereum 2.0, the database is fragmented with a technology called ‘sharding’. Sharding increases performance but reduces security and increases the number of attack surfaces.

Backed by laws of physics

This might be the most important feature of a Proof of Work-based consensus mechanism. Bear with me; we will dive into high-school physics.

The second law of thermodynamics states that the entropy (“randomness”) of a total system always tends to increase. One can create low entropy in a sub-system by dumping extra entropy elsewhere. With Bitcoin’s Proof of Work (PoW) consensus mechanism, the extra entropy being dumped is heat from the mining rigs. This creates reduced entropy locally: the ordered numbers in the ledger.

As Michael Saylor says, once you understand money is monetary energy and you understand Bitcoin is a monetary energy network, then you start to appreciate the fact that it either does or does not respect the laws of thermodynamics. If it doesn’t, it means it has a leak.

Considering the fundamentals of physics and thermodynamics, Proof of Work seems to be the ultimate consensus mechanism.

Now, although I’ve been researching this topic quite a bit, I do not have a Ph.D. in physics. That’s why, besides my own first-principles analysis, I need to trust some of the leading physicists in the field, such as Stanford professor Shoucheng Zhang. If you fire up Mr. Zhang’s Wikipedia page, you can see that he probably knows what he is talking about.

For those interested in a deep dive into the physics, I found his one-hour Talks at Google lecture fascinating. If you insist on fast-forwarding to the beef, start the video at timestamp 43:40, where he addresses the Proof of Work versus Proof of Stake question. He believes that, at the fundamental layer, Proof of Work is a more secure mechanism than Proof of Stake, while there is still room for Proof of Stake in many business applications. The ultimate settlement layer, however, would still need to happen on the most secure chain, which is the Proof of Work chain.

Ethereum currently uses Proof of Work but will shift to Proof of Stake with the Ethereum 2.0 upgrade. This will reduce the Ethereum blockchain’s energy consumption by 99%, but it potentially decreases the security of the network.

Ultimately, it is a good trait for a Store of Value protocol to depend on the laws of physics (Proof of Work). In Proof of Work, you have sunk computational and electricity costs securing the network (mining). That’s why Bitcoin is often referred to as the thermodynamically hardest asset on the planet.

Proof of Stake, on the other hand, is based on game theory. There are no sunk costs; validator nodes use about the same amount of electricity as a normal computer. The security of a Proof of Stake network is based on the rules and incentives between the different types of validator nodes.

That said, Ethereum’s main goal is to be a platform for business applications. The level of security Ethereum provides is enough for most of them. Scalability is understandably more important to Ethereum 2.0 than safety.

PS. One interesting new consensus mechanism currently in development is Proof of Space-Time (PoST), introduced by the Chia network. Instead of CPU/ASIC power, it uses hard drive storage space for mining, consuming significantly less energy than Bitcoin’s Proof of Work network. I believe this can be much more secure than Proof of Stake, and close to the security of Proof of Work. Chia’s founder Bram Cohen brings a lot of credibility to the protocol: he is the inventor of BitTorrent and one of the most experienced developers of decentralized protocols in the world. The project has only recently launched its mainnet, so let’s see how it plays out over the years.

Division of power

In Bitcoin and other Proof of Work systems, the full node operators who maintain the ledger can delink from block creators if the latter become corrupt or dysfunctional to the network.

In Ethereum 2.0 this is impossible, as full nodes are stuck with the validators who stake the required amount of ETH. This decreases censorship resistance.

My summary

First of all, I think no one can see further than five years ahead in crypto development with any meaningful level of confidence. Five years ago was 2016. Since then, we have experienced the ICO boom, DeFi summer, and the take-off of NFTs. Who could have predicted all this in 2016? And over the next five years, the pace of innovation will only increase.

Bitcoin will remain the best Store of Value asset for the next few years. It is a simple Store of Value protocol using a consensus mechanism that has endured the test of time for over 11 years. The laws of physics back it. It has predictable supply development and minimal attack surfaces. The protocol is already in ‘maintenance mode’ and has never been hacked. All these are good traits for a Store of Value asset. One theoretical threat is mining pool centralization in China, but this has gradually decreased over recent years.

However, Bitcoin is missing out on the enormous influx of capital invested in Decentralized Finance platforms and NFTs, because it doesn’t natively support smart contracts or NFTs. That said, Bitcoin currently does its job as a Store of Value more safely than anything else.

Ethereum’s role as a platform and the “world’s blockchain computer” will increase rapidly year over year. Decentralized Finance protocols and NFTs are predominantly being built on top of Ethereum, and an increasing amount of wealth is being locked into digital apps on its blockchain. Due to the massive increase in activity across decentralized apps, Ethereum has already exceeded Bitcoin in transaction volume.

Ethereum is about to implement the EIP-1559 transaction pricing mechanism in July 2021. This will significantly change the protocol’s token economics, and if everything goes as predicted, it makes Ethereum an increasingly scarce asset with a higher price. Besides this, Ethereum is transforming from a Proof of Work to a Proof of Stake consensus mechanism at the end of 2021. This enables intrinsic yield accumulation for ETH without counter-party risk, something that is not possible with Bitcoin.

Ethereum is great for many business applications, but as an ultimate Store of Value, Ethereum 2.0 and the Proof of Stake consensus mechanism are not time-tested. Instead of fundamental physics, Proof of Stake is based on game theory. Moreover, Ethereum has some centralized elements and surfaces for external attack.

Both the Proof of Stake and EIP-1559 upgrades are massive, and there is a lot of positive change and innovation ahead. That said, Ethereum is still a protocol at a relatively early stage of development. Vitalik Buterin himself has said that Ethereum people consider Ethereum about 40% complete. There are a lot of opportunities ahead, and an equal amount of risk. Risk is not a good trait for a Store of Value asset.

Only time will tell whether Ethereum can deliver on all its promises. Whether Proof of Stake turns out to work well. Whether EIP-1559 brings the highly anticipated token economic changes. Whether sharding is successfully implemented and transaction fees stay at reasonable levels. And whether all these features are delivered in time, before other smart contract blockchains ride ahead. If so, transaction volume on Ethereum will likely be multifold compared to Bitcoin in the next bull cycle, and there is a high likelihood of Ethereum ‘flippening’ Bitcoin’s market cap. There are just a lot of ifs.

I’m invested in both Bitcoin and Ethereum, with roughly equal allocations. Both have their place and purpose: Bitcoin as the safest Store of Value, and Ethereum as the new digital economy and the world’s blockchain computing platform. I believe both have a great future.

In this 2021 bull cycle, while Ethereum has a chance to temporarily reach a higher market cap than Bitcoin, I don’t believe Ethereum will permanently flip Bitcoin’s market cap. The security fundamentals are still on Bitcoin’s side. Ethereum needs to go through its upgrades successfully and mature into ‘maintenance mode’ over the following years to prove its trustworthiness as a Store of Value.

For retail investors, all this meticulous analysis might not be that important. However, large institutional investors care about the fundamentals, and I see their money flowing predominantly to Bitcoin in the next few years. Ethereum will be the next one on their radar.

In the big picture, decentralized consensus technologies are still under heavy development. Bitcoin (and Proof of Work) was the first to successfully solve consensus on a decentralized ledger. Bitcoin will hold the status of safest Store of Value for at least the next few years, thanks to its brand, largest user base, and time-tested consensus mechanism. That said, it is not set in stone that it will be the safest consensus method forever. New blockchain projects will likely try everything imaginable (e.g. PoW and PoST) to secure consensus. Over a long period, weighing numerous factors (e.g. decentralization, safety, scalability, and environmental sustainability), we will see which mechanisms are the most suitable to remain.

Above all technical discussion, there is one certainty.

Human psychology is wired the same way today as in the 16th century, when Hernán Cortés invaded the Aztec Empire. We all look for safety and reliability when storing value we have accumulated through hard work.

About the author

Mikko Ikola is passionate about blockchain and disruptive protocols in the decentralized world. You can follow @MikkoIkola on Twitter and contact Mikko via email at last-name@iki.fi.

Thanks

Thank you Mikko Alasaarela, Voitto Kangas, Teemu Laurikainen, Marcus Ohlsson and Juuso Takalainen (in alphabetical order) for your invaluable feedback when pre-reading this article. I would not have been able to make as comprehensive and balanced an analysis without you.

Email newsletter

If you’d like to stay updated on my future crypto articles, you can subscribe to my newsletter at the bottom of this page.

MAIN RESOURCES:

  1. The Bitcoin Standard (Saifedean Ammous, Apr 24, 2018) — Read my review!

  2. The Infinite Machine (Camila Russo, Jul 14, 2020) — Read my review!

  3. An Economic Analysis of Ethereum (Lyn Alden, Jan 25, 2021)

  4. Open Reply to Lyn Alden &amp; Ethereum Skeptics (David Hoffman and Lucas Campbell, Jan 22, 2021)

  5. Why Proof of Stake Is Less Secure Than Proof of Work (Donald McIntyre, Oct 7, 2019)

  6. EIP-1559 transaction pricing mechanism improvement proposal (GitHub)

Special remarks:

  1. Bankless podcast in general — thanks for amazing interviews

  2. Coin Bureau YouTube channel — unbeatable analysis of all major crypto protocols

  3. Coinmarketcap and Coingecko for day-to-day overview on markets

Collection of all links mentioned in this article:

  1. Coin360 visualization tool [Coin360.com]

  2. GDP by countries [Wikipedia.com]

  3. Bitcoin Treasuries [Bitcointreasuryreserve.com]

  4. ‘Ultrasound money’ meme graph [Twitter.com]

  5. Collapse of the Bretton Woods system [IMF.com]

  6. Debasement and the decline of Rome (Kevin Butcher, Nov 16, 2015) [Warwick.ac.uk]

  7. RAI stones in Yap island [Smithsonianmag.com]

  8. Examples of hyperinflation [Wikipedia.com]

  9. WTF happened in 1971 [Wtfhappenedin1971.com]

  10. List of public companies holding bitcoin [Coingecko.com]

  11. Governments are looking to buy Bitcoin, NYDIG CEO confirms [Cointelegraph.com]

  12. The Flippening (Ethereum vs. Bitcoin) [Blockchaincenter.com]

  13. Bankless podcast #44: Modeling Ultra Sound Money | Justin Drake (Apr 28, 2021)

  14. Santiment codebase analysis [Santiment.net]

  15. Tracking GitHub activity of crypto projects — introducing a better approach (Valentin Mihov. Apr 18, 2018) [Medium.com]

  16. Ethereum Chain Full Sync Data Size [Ycharts.com]

  17. Bitcoin Blockchain Size [Ycharts.com]

  18. Bankless podcast #64 — Vitalik Buterin (May 10, 2021) [Youtube.com]

  19. DeFi Pulse — Total Value Locked in Decentralized Finance [Defipulse.com]

  20. Stacks — Smart contracts and NFTs on Bitcoin blockchain [Stacks.co]

  21. NonFungible NFT market tracker [Nonfungible.com]

  22. The Race Is On to Replace Ethereum’s Most Centralized Layer [Coindesk, Dec 5, 2018]

  23. Bitcoin Mining in China (May 12, 2021) [Buybitcoinworldwide.com]

  24. Michael Saylor — Bitcoin: Creation Of Immortal Power [Youtube.com]

  25. Shoucheng Zhang [Wikipedia.com]

  26. Talks at Google: Quantum Computing, AI and Blockchain (Shoucheng Zhang. Jun 7, 2018) [Youtube.com]

  27. Proof of Space-time [Wikipedia.com]

  28. Chia network [Chia.net]

Disclaimer: No financial advice – The information in this blog post and on the entire website at mikkoikola.com is provided for educational, informational, and entertainment purposes only, without any express or implied warranty of any kind, including warranties of accuracy, completeness, or fitness for any particular purpose. The information contained in or provided from or through this website is not intended to be and does not constitute financial advice, investment advice, trading advice, or any other advice. You should not make any decision, financial, investment, trading or otherwise, based on any of the information presented on this website without undertaking independent due diligence and consultation with a professional broker or financial advisor. You understand that you are using any and all information available on or through this website at your own risk.

My favorite 7 business books for all entrepreneurs

Let’s face it, building, growing and running a business is hard. Regardless of your industry, you need to master several core areas, starting from your own psychology, recruiting, sales, fundraising, cultivating company culture, management best practices, and so on, to endure the game.

As an entrepreneur, you need to understand the fundamentals of all of these core areas. If you are a first-time founder, you are by definition not experienced in all of them, nor do you have enough knowledge of them. This means you need to learn on the go. And because you’re busy, you need to be careful with your time. Besides building the business, you should think about how to use the limited time left for learning most effectively: which sources are the most useful for building the best foundation across all of the core areas.

My experience is that the quickest tips and tricks can be learned one-on-one from successful entrepreneurs or investors you have a good relationship with. What you learn from them is customized to your situation at hand. This is the most effective way of learning, considering the time spent and how quickly you can put the new learnings into practice.

However, that is not enough, and you might not have these people around you. If you only learn “just-in-time”, instead of taking the time to build the foundation, you’re building an incomplete picture in your head.

Besides learning from other people, I’m constantly reading books. And apparently, it’s not just me: all leaders are readers. Many successful CEOs say they read about one book per week, which is about 50 books per year. The median American adult reads 4 books per year. That’s a drastic difference in knowledge and mental models compounded over the years.

My approach to effective reading

Firstly, when there is a new area I need to master, I research the 2-3 best books written in the field and immediately order them. By ‘best’ book, I don’t necessarily mean the latest. I try to stick with books that have endured the test of time and that explain the fundamentals really well.

Secondly, when I am, for example, recruiting, I prepare myself by skimming through the best book on hiring and its blueprints. By doing so, I immediately utilize the mental models in the book, put the learnings into practice, and save them in my long-term memory. I also subconsciously prepare myself for the task at hand. Reading a book is like an instant motivation injection: the better the book, the more ToDo points it generates for me.

Thirdly, I try to read every day, anything between 30 and 60 minutes. It’s not much, but it puts me above 95% of everyone else. Knowledge and mental models compound. There is really no hack to learn or read faster; it needs to be a habit that I enjoy. For me, mornings are the best time to read — anything I read in the morning sticks in my memory.

To summarize — Mikko’s formula for effective reading:

  1. You need to find the best books in the area of learning (and discard the rubbish)

  2. You need to (re)read these books when they are relevant to you

  3. You need to build a habit of reading every day

I’ve read plenty of non-fiction books. And browsed and explored a big stack more. I can tell you, there is a lot of rubbish out there. Avoid the rubbish. If the book is rubbish, just stop reading it and move to the next one.

In this blog post, I’ve identified the absolute best business books, one book per field. Below is a summary: the name of the book first, and the field in parentheses.

  1. The Almanack of Naval Ravikant (Principles about wealth and happiness)

  2. The Hard Things About Hard Things (Entrepreneurial psychology)

  3. Founder’s Dilemmas (Avoiding the common pitfalls)

  4. Delivering Happiness (Cultivating company culture)

  5. Venture Deals (Fundraising)

  6. Who - The A Method For Hiring (Recruiting)

  7. Scaling Up (Scaling up and managing the business)

I have read all these books, some of them several times, and vetted that they are simply great. I utilize the mental models from these books all the time. The books and their models are part of me; I notice myself quoting them to my friends. Besides me, plenty of other respected people have praised these books. I can guarantee these books are solid and not a waste of your precious time.

Btw, I initially wanted to include a book on sales in the list. However, I have not found a really good one that I’d recommend. It really depends on whether you’re selling a B2B product, a SaaS product, or something else. I’ll leave it as homework for you to find the best sales book in your field.

If you’re wondering which book to start with, as a general rule of thumb, the first three books (The Almanack, Hard Things, and Founder’s Dilemmas) are better suited before you have founded a business, or soon after founding it. The last four books are better suited after building the business for some time, when preparing for fundraising, recruiting, and scaling up. So, the books are in roughly chronological order in terms of the lifetime of building a company.

1. The Almanack of Naval Ravikant - A Guide to Wealth and Happiness

Authors: Naval Ravikant & Eric Jorgenson

Navalbook.jpg

This is the best book I’ve read in a long time.

It’s a compilation of Naval Ravikant’s (CEO at AngelList) life wisdom, in the areas of entrepreneurship, happiness, health, philosophy and so on.

The book is divided into two parts: the first part is about business fundamentals, and the second part is about happiness. Naval argues that building wealth and being happy are skills we can learn.

If you’ve been in business for a few years already, the first part might be pretty much common sense. It’s still a useful and quick read.

However, the second part is simply pure gold, pure wisdom. It’s so easy to get too immersed in building a business, and this book can keep you mindful of all the other important areas of your life.

Naval wordsmiths simple truths about life in a really clever way. The wisdom and mental models stick. I’ve already been able to put wisdom from this book into practice several times, and I notice myself quoting it to myself and to other people when they reach out to me for advice.

My favorite quote from the book: “What is happiness? Happiness is a state of being without desires — you are content with what you have. What is desire? Desire is a contract with yourself, to be unhappy, until your desire is fulfilled.” So, in order to be happy, it’s a good idea to be really aware of what you desire, and how many things you desire at once. Just reflecting on this simple wisdom has already improved my decision-making in practice a couple of times.

At the end of the book, there is a list of recommended books with short comments. It’s pure gold, and it leads you to explore people like Jiddu Krishnamurti, Charlie Munger, Dale Carnegie, Marcus Aurelius, Osho and many others.

2. The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers

Author: Ben Horowitz

Hard_things_about_hard_things.jpg

Just as Ben Horowitz writes in the synopsis: a lot of people talk about how great it is to start a business, but only Ben Horowitz is brutally honest about how hard it is to run one.

I’ve personally met many entrepreneurs who have raised a lot of funding, whose businesses are doing pretty well, and who have hired a bunch of people. Many of them are visibly quite stressed and not happy. And they’ve privately said to me: “Mikko, look. I didn’t sign up for all this when I started.”

The first one or two years of setting up a business are fun, but after getting real customers, investors, and a larger team, the game changes completely.

It’s difficult to understand the psychological burden all growth entrepreneurs carry in their heads. Some are able to mask it better than others, but all of them have it.

The Hard Thing About Hard Things is the best description of this mental state. This book is best read before you’ve decided to hop from a safe 9-to-5 job to become a growth entrepreneur (to assess whether it’s really something you want), or by people who’ve recently founded their business, to mentally prepare for what lies ahead should they set high ambitions.

Ben Horowitz tells his personal story of founding a company, and describes the psychology behind many difficult situations he faced. Like — really difficult situations you cannot even imagine. The book is a constant swing between success and horror. It’s a front-row seat to the mental rollercoaster of a growth-company CEO.

(On another note, if you’re interested in understanding the psychology of a growth entrepreneur as written by a real psychologist, the article by Arzhang Kamarei is the best I’ve found, and it’s pretty darn accurate. It also makes several references to The Hard Thing About Hard Things.)

3. The Founder's Dilemmas — Anticipating and Avoiding the Pitfalls That Can Sink a Startup

Author: Noam Wasserman

Founders_dilemmas.jpg

I was introduced to this book while taking the most useful course I’ve ever taken at university: Entrepreneurial Leadership by Aino Tenhiälä.

The course staff had gone the extra mile and invited 15 CEOs of growth companies to share the stories of their businesses. Ten-page confidential case stories were distributed to the students beforehand, with content you would never find in public sources. After we studied them carefully, these CEOs came in person for a two-hour session to answer any questions. And boy, there were a lot of questions and a lot of great insights. As one can imagine, pulling off this kind of course was a one-time effort; as far as I’m aware, it was organized at this level of quality and preparation only once. Big credits to Aino.

The book that was used as a framework for the course was Noam Wasserman’s The Founder’s Dilemmas.

The book explores the key dilemmas a founder needs to resolve throughout the lifespan of a company: pre-founding (whether to become a founder or not), the team (whether to partner with co-founders or not), equity splits and vesting, dividing roles, rewarding, funding options, hiring, salary negotiations, board management, all the way to exit.

What makes this book so great is that it’s based on solid research. Wasserman uses a dataset of 3,600 startups, nearly 10,000 founders, and 20,000 executives. The dataset comprises US survey results covering the decade 2000-2010, with most of the data coming from technology companies and the remainder from life sciences.

Wasserman uses both research-backed arguments and anecdotes from real companies. This format helps you form good mental models and understand who you are as an entrepreneur: what you really want, and what the hard statistics from the markets actually show. For example, do you want to be Rich or King: clinging more to control of the venture, or more to wealth creation? Or some balance between the two?

One should read this book before founding the company, to get the equity split, vesting, and all these basic things done the right way.

For those who want a peek inside the book, here is a good summary of The Founder’s Dilemmas in the form of a Harvard Business Review article by the author himself.

4. Delivering Happiness — A Path to Profits, Passion, and Purpose

Author: Tony Hsieh

Delivering_happiness.jpg

Delivering Happiness is a combination of a personal biography of Tony Hsieh and stories of the companies he built, especially Zappos, the huge e-commerce company selling shoes that was acquired by Amazon.

The biggest takeaways in the book are 1) “Tony’s Happiness framework”, 2) how to identify and foster the company culture, and 3) how to think about customer support.

Tony’s Happiness framework:

Tony’s Happiness framework has four pieces. 1. Perceived control: people need to be in control of their own fate. 2. Perceived progress: people don’t like to feel like they are not going anywhere. 3. Connectedness: numerous studies show that engaged team members are more productive, and that the number of good friends one has at the workplace correlates with how engaged the employee is. 4. Higher purpose: people need to believe in something bigger than themselves.

Company culture:

Identifying and fostering company culture is another great takeaway from the book. Tony writes:

“Even though our core values guide us in everything we do today, we didn't actually have any formal core values for the first six or seven years of the company's history, because it was something I'd always thought of as a very ‘corporate’ thing to do. I resisted doing it for as long as possible. I'm just glad that an employee finally convinced me that it was necessary to come up with core values--essentially, a formalized definition of our culture--in order for us to continue to scale and grow. I only wish we had done it sooner."

"Our core values should always be the framework from which we make all of our decisions...Make at least one improvement every week that makes Zappos better to reflect our core values. The improvements don't have to be dramatic — it can be as simple as adding in an extra sentence or two to a form to make it more fun, for example. But if every employee made just one small improvement every week to better reflect our core values, then by the end of this year we will have over 50,000 small changes that collectively will be a very dramatic improvement compared to where we are today.""

I have explored the formation of core values in another post on the Ambronite blog, and we used Tony’s blueprint to formulate the Ambronite Core Values mini booklet. I found that the process is even more important than the outcome. When you engage your team in identifying and agreeing on the values, everybody becomes more committed. It’s striking how positive an impact it had, and how quickly people started to argue for decisions based on the core values.

Philosophy on Customer Support:

If customer support is a critical function in your business, the mental models in this book are really useful. Tony argues that customers are made happy by developing relationships, creating personal connections, and delivering a “wow” effect every time the brand engages with the customer. The number one drivers of growth for Zappos are repeat customers and word of mouth. They like to invest in customer service instead of paid advertising, and let customers do the marketing via word of mouth. Contact info on the website is easy to find. Call-center interaction is used to increase word-of-mouth marketing and lifetime value by making a personal connection. Each call is an investment in building the wow brand, not an expense to be minimized. They build engagement and trust rather than buzz. Loyal repeat customers are treated with surprise upgrades. Customers are even sent to competitors if Zappos can’t help them directly.

That’s something. The practical takeaway we implemented at Ambronite was to foster relationships with our best customers. We took a list of our monthly subscribers ordering our meal shakes and sent them a surprise: an Ambronite shaker bottle with their first name laser-engraved on it. That was one example of how we created a “wow” effect for Ambronite’s loyal customers.

Finally, if you decide to read this book, I recommend speed-reading the first half. The useful stuff comes in the second half of the book.

5. Venture Deals — Be Smarter Than Your Lawyer and Venture Capitalist

Venture_deals.jpg

When I started preparing to raise funding at the end of 2014, there were somewhat limited sources of good advice on how to run the fundraising process. Now there’s a ton, in any medium imaginable, such as YouTube videos and blog posts.

That said, in my books, nothing beats a great book. If you ask any seasoned entrepreneur, Venture Deals will be their go-to recommendation for the best book on raising funding.

This book contains so much great knowledge and so many tips that it’s impossible to summarize. Just to pick a few learnings from the book:

You need to understand the players in the game (e.g. talk with a General Partner instead of associates — associates are just mining the market; GPs make the decisions).

The book also describes the process and materials you need. For example, VCs haven’t seen a business plan in more than 20 years; a pitch deck with 10 slides and supplemental material is what you need. Also, in the financial model, focus on getting the potential expenses right and forget about nailing the revenues. Ask the VC for references, and reach out to the CEOs of their portfolio companies.

Term sheet — The book explains the most important features of the term sheet, such as pre-money and post-money valuation. It’s good to understand that some VCs will try to stick the option pool in the pre-money valuation. You must have a BATNA to be able to negotiate good economic terms. The valuation will depend on many things: competition aside, the stage of the company, the numbers, the team’s experience, and so on. Typical vesting is four years with monthly installments and a 1-year cliff, created to incentivize co-founders to commit for the long term, so that their ownership gets gradually ‘vested’ month by month over a four-year period.
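To make the standard vesting terms concrete, here is a minimal sketch of how a 4-year monthly vest with a 1-year cliff accrues. The function name and structure are my own illustration, not from the book:

```python
def vested_fraction(months_elapsed: int, total_months: int = 48, cliff_months: int = 12) -> float:
    """Fraction of a founder's equity vested under a standard
    4-year monthly vest with a 1-year cliff."""
    if months_elapsed < cliff_months:
        return 0.0  # leaving before the cliff means nothing has vested
    # At and after the cliff, equity vests linearly, one month at a time
    return min(months_elapsed, total_months) / total_months

print(vested_fraction(11))  # 0.0   (still before the cliff)
print(vested_fraction(12))  # 0.25  (the whole first year vests at the cliff)
print(vested_fraction(48))  # 1.0   (fully vested after four years)
```

The cliff is the reason a co-founder who leaves in month 11 walks away with nothing, while one who leaves in month 13 keeps roughly a quarter of their stake.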

The book is written from the perspective of American legislation, namely the Delaware C-corp, so if you’re incorporated in Europe or another jurisdiction, you need a local lawyer to explain the key provisions. In my experience, there are major differences.

The negotiation tips are also a great part of the book. To get good results, understand that personal relationships are the most important thing. Protect them regardless of what happens to a deal (e.g. no means no). Talk to mutual contacts. Understand the deal structure and the closing process. Get a really experienced lawyer who has closed several similar deals — it is worth every penny. In general, make an effort to understand who you are dealing with. Always be transparent.

And finally, never make an offer first. When we raised our seed round, I never directly asked anybody to invest. I presented an opportunity, met with the investors who were interested, and after talking for an hour, the investors themselves made a proposal if they were willing to proceed further. If they were not, it was an enlightening conversation in any case.

Personally, I’m also grateful that the Venture Deals book was already out. We successfully raised our seed funding round from seven investors. For all the over $1M USD I’ve raised in aggregate, I’ll give a big chunk of the credit to this book for helping me learn the process.

For the first-time founder, raising funding can be an intimidating process.

What I personally learned is that while you need to master the process and the lingo (e.g. what pre-money, post-money, liquidation preference, drag-along, vesting, convertible note, valuation cap, and discount percent all mean), the best time to raise funding is when you feel you are ready. What I mean by this is that you’ve built your business to a certain stage (e.g. a prototype, first customers, or a completed crowdfunding campaign) where you feel confident there is a bright future, and you know what the next steps are. After running the business for two years, I finally reached this ‘inner confidence’, and after that, raising funding was a breeze. Not only did we successfully raise the funds, we were also in a position to pick the investors we wanted to accept into the round and politely pass on offers from the rest. So, self-awareness is important.

Finally, it’s good to add that raising funding does not mean succeeding. And not raising is not failing. So whether you raise or not, most energy should go to building a great product and selling it. External funding is just a tool.

PS. What does an investor meeting look like in real life? As investor meetings happen behind closed doors, there are few examples of how it really works. For first-time founders, this is the best video I’ve encountered for understanding how to run an investor meeting. Start watching at 20:30: first there is a sloppy example, and after that a successful example of a meeting.

6. Who — The A Method For Hiring

Who_book.jpg

Best book on Recruiting

There are loads of books on recruiting, and most of them are full of fluffy rubbish. Believe me: when I started to recruit, I browsed several books on recruiting, and most were either too general or had too narrow a perspective.

This is the only really good book I’ve found in the field. It’s research-based, drawing on over 1,300 hours of interviews with over 300 CEOs. When you’re about to start recruiting your first employees, you want to pick up this book.

The main thesis of this book is that many entrepreneurs use bad hiring habits (“voodoo hiring”), and that these need to be unlearned and replaced with actual, time-tested methods.

The first part is the job description. It needs to be based on the outcomes of the role, and it needs to be specific enough.

The second part is pre-screening. You need to pre-screen the heck out of your applicants to save valuable face-to-face interview time for those who are potential fits. The largest takeaway for me was the pre-screen phone script. It enabled me to identify potential candidates faster than any other method. It’s simply pure gold. If you are short on time, just implement this pre-screen phone method.

The third part is the actual interview. This is where I see most people doing the ‘voodoo hiring’. I’d estimate 90% of the people in charge of hiring ask the same stereotypical interview questions and simply wing the whole meeting. The candidates already know to expect the typical questions and have well-formulated, slick answers. In most cases, the interviewer really learns only some polished superficialities about the candidate.

The whole point of interviews is to get applicants off their scripts in order to identify both problem areas and whether the candidates fit the position. This book includes a process and a practical set of questions to make this happen. It also guides you to probe deeper when candidates give only superficial answers, with suggestions on how to do that. I’ve been able to extract many times more value from interviews since I started using the method.

Finally, an important remark for all tech entrepreneurs out there: the methods in this book are best suited for executive positions — operations, sales, and such. This is not the best book for highly technical hires, such as software developers. Also, the methods might be a bit overkill for entry-level positions.

After you’ve read the book, the authors provide a free toolkit on their website that you can use in your hiring process.

7. Scaling Up — How a Few Companies Make It...and Why the Rest Don't (Rockefeller Habits 2.0)

Author: Verne Harnish

Scaling_up.jpg

This book is different from all of the other books mentioned before.

This book is a handbook.

Reading it from start to finish in one session is not how it is supposed to be consumed. That’s what I did the first time, and I thought: meh, nice tips but nothing spectacular. How wrong I was.

Some years later, I picked just one chapter and read it with full focus. (Btw, that was the FACe chapter, the Function Accountability Chart.) And I filled in the worksheet. My mind was blown.

All chapters have a simple worksheet. If you actually use these worksheets, fill in the info, and take a few hours to talk them through with your co-founder, you will be amazed at how effectively they reveal bottlenecks in your business. Utilizing the tools simply takes time and commitment. To get the most out of this book, you need 10-20 times the time investment of any of the other books mentioned above.

Scaling Up is not only a book. It is a framework of best practices gathered from across the business literature (for example, for hiring they refer to the “Who” book mentioned above). They also provide certified Scaling Up coaches who can help you implement the practices in your business. On average, it takes about two years to drive in all the practices. That gives you some idea of how much content there is in this book to be digested. I’ve hired and worked with a Scaling Up coach, and it was a really useful experience.

If you are just about to found a business, or you are perhaps one or two years in, this is not the best book to read. Sure, you can browse it and pick some nuggets of knowledge. I would say this book becomes extremely useful once you have a team of over 10 people and/or revenues in excess of 1M.

Build a habit of reading, stay patient, and multiply your knowledge

As Farnam Street writes, if you can get 5% wiser and better every year, then you will be about twice as wise as you are now in less than 15 years. (Go ahead, grab your calculators.) In less than 30 years, this return will be 4x.
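The compounding arithmetic above is easy to verify. A quick sketch in Python (the 5% rate and the 2x/4x targets come from the claim above; the function name is my own):

```python
import math

def years_to_multiply(rate: float, target: float) -> float:
    """Years of compounding at `rate` per year needed to grow by `target`x."""
    return math.log(target) / math.log(1 + rate)

# Getting 5% wiser every year:
print(round(years_to_multiply(0.05, 2), 1))  # 14.2 -> twice as wise in under 15 years
print(round(years_to_multiply(0.05, 4), 1))  # 28.4 -> 4x in under 30 years
```

The same logic behind the rule of 72: small annual gains, sustained, dominate over decades.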

This is how the non-gifted among us can surpass otherwise more intelligent people.

When it comes to reading, I quantify these numbers in my Goodreads profile. I save all of my read and to-be-read books there, publicly available for all to explore. It’s a useful tool to keep track of both what I have read and what I am planning to read.

How to enjoy the moment - 101

Ask any philosopher and they will all say that happiness is about being present in the moment.

Motivational posters often say something along the lines of ”Enjoy the moment”, ”Be present”, ”Focus on now”.

Sounds nice, but how do you practice living in the moment in everyday life?

Some people might suggest starting a daily 60-minute meditation practice. That is, for sure, the most effective method. However, it already requires quite a bit of self-understanding and significant commitment.

I’ll present something simple anyone can do — starting today:

Here we go.

Imagine you have just stepped into your favorite cafe and ordered your favorite beverage. You find a cozy table in the corner of the cafe, next to the window. There is only you and your favorite beverage. You can see people passing by on the street. The cafe is about half-full, with people sipping their lattes.

You have just sat down, and you will…:

  • Put your phone in your pocket

  • Leave the laptop in your bag

  • Make sure you have only your favorite beverage on the table

Ok — the above is easy but necessary.

What you do next, is the critical part to be able to enjoy the moment.

You need to start paying attention to your mind. That is, what you think, or rather, what you should not think.

Instead of describing what to think, for me, it was more intuitive to understand what NOT to think. While you are sitting calmly, the beverage in front of you:

Do NOT think about the future:

  • Don’t think about the future in any form. Don’t do future planning. Don’t think about your to-do list. Don’t start visualizing your next report/email/meeting you need to prepare. Don’t start problem-solving. Don’t think about your desires or goals.

Do NOT think about the past:

  • Do not think about the past in any form. Don’t think about the mistakes you made. Don’t think about regrets. Don’t think about any worries. Or any past successes either, for that matter.

So, Mikko, you just told me what NOT to think. What on earth should we be thinking then?

That’s the beauty. Absolutely nothing.

Being present. Observing people passing by on the street. Taking a sip of your favorite beverage. Appreciating the smooth mouthfeel it brings. Enjoying the small details of the architecture of the house you can see from the window.

Sounds difficult? Well, for most of us, this is actually quite difficult and doesn’t come naturally.

Most of us have experienced this feeling naturally after completing a significant work milestone or, for students, a final exam. If you think back to those fleeting moments, they felt really nice, didn’t they?

The good news is that this feeling is attainable in your everyday life. It’s a skill and awareness you can develop.

Personally, I learned to reach this state by paying attention to what NOT to think. It is much easier to reach the moment by NOT letting your mind wander to the future or to the past. When you avoid the future and past, the only thing left is the present.

I’ve also noticed it’s much easier to reach this state in the morning hours. It doesn’t matter if it is at home, at a cafe, or some other peaceful place. Also, after doing sports (and being disconnected from the devices), this state is easier to reach.

When you have successfully let go of the past and future, you start to see the moment. The comfort of the chair, smell of the beverage, beautiful clouds in the sky. A happy couple passing by the window. Life starts to feel really light, and really great.

Next time you go to a cafe by yourself, you can try this method for the first 5 minutes.

PS. If you actually try this in a cafe, and you reach the ‘present state’, you’ll be quite amused when looking at everyone else in the cafe. You will see that ALL of the other people (you being the only exception) are all-in in the Matrix: numbing their minds with social media feeds, in serious future-planning mode with their laptops, or sharing either past memories or future plans with their friends. You are the only one who paused to enjoy the very moment.

Death & Healing: Old Wisdom for Modern Life

During the year 2020, three people close to me passed away. First, an old friend and colleague in his 30s. Next, a close relative in his 80s. And finally, a close friend of mine, in her late 30s, died instantly in an accident.

All of this happened in my home country, Finland, while I had recently moved to Shanghai. Due to Covid and travel restrictions, I wasn’t able to attend the funerals, nor meet face to face with other close friends or relatives. As all these events happened in a fairly short period of time, I began to study philosophy and perspectives on death.

Modern societies clean death out of sight

In modern western societies, there is a quest for immortality: longevity, transhumanism, health optimization. I’ve personally been fascinated by this trend, and in 2013 I even co-founded a biohacking society.

However, the flip side of these trends is an underlying fear of death. Death is almost a taboo. Nobody wants to die. Even people who would like to go to heaven don’t want to die to get there. The more secular societies have become, the fewer common beliefs, perspectives, or rituals there are to guide people through the death of a close one.

We all need to figure it out ourselves. There are many explanations of what life and death mean, and of whether there is life after death. You need to pick the explanation that serves you best: one that helps you live happily on this planet and helps you face the death of your close friends and relatives.

As much as the zeitgeist of modern society is to hide death and tout the supernatural abilities of humans, the fact is that death is a natural part of life. And that’s how it should be. While we should take good care of our health, we should not delude ourselves about the unavoidable. Instead, we should seek to accept death, find meaningful ways of healing with the community, and then move on with our lives. Personally, I’ve learned that whenever death comes to someone close to you, it is the right time, and you should find the perspective of being grateful for having had him or her in your life.

Healing with the community was more natural just a few decades ago

Today in Finland, when somebody dies, there is a quick funeral process: a short event with the closest relatives, usually very few people. Plenty of tears. A few beautiful speeches. Beyond greetings, most people say nothing. That’s it. Life goes on. Society is fast-paced. The rituals are quickly completed and people go their separate ways. It wasn’t like this just a few decades ago.

Eighty years ago, before the Second World War, funerals looked very different. In the book Kuoleman Salaisuus (in English: “The Secret of Death”), Kai Heikkilä and Pertti Jokivuori shed light on perceptions of death in Finnish agricultural society. Antti Hakkarainen summarized their findings in his thesis:

"In the old days, in an agricultural-dominated society, the Finnish culture of death was rich and multidimensional compared to today's modern society. The reason for this was e.g. ancient beliefs and mythologies, and on the other hand also the influences of Christianity. Death was considered part of the cycle of nature, but this did not mean that it was treated without a fear. According to them, until World War II, Finland continued the tradition that event of death was thought to belong to the entire village community. The mourning of the deceased was not limited to the immediate family members, but according to tradition, every villager had to visit to greet the deceased for the last time. This was made possible by placing the deceased in an open coffin, which was kept in the separate room of the house for a few days after his death. There were strict rules about what was allowed to be done in front of the body, and what was not. For example, sleeping and working were forbidden, but instead you were allowed to drink coffee or liquor, and sing hymns.” (Heikkilä & Jokivuori 1994, 149-151.) & Thesis by Antti Hakkarainen [1]

As we can see, the relatives of the deceased were not left to mourn alone but received the support of their entire community in the work of mourning. This certainly made it easier to cope with such a difficult matter.

When the ritual described above was completed, there was a second phase: the actual funeral in a church, and the burial of the body in the grave.

Today, we have only this part left. And not even that, as cremation is increasingly common compared to burial in a cemetery. All in all, the ritual has shrunk quite a bit.

Healing in the times of Covid-19 and travel restrictions

The emotional healing after the death of a close one is really important. In modern times, when people live abroad, and especially during Covid-19 with its travel bans, we need to find new ways of creating a space to share our memories and feelings with the other people who were close to the deceased. We don’t have the possibility to set up our own physical room for the body and visit it to sing songs and enjoy a cup of wine, like 80 years ago.

Instead, we can create a modern, virtual version of this room: a WhatsApp group chat. It might sound like far too lightweight a medium to handle something as important as death. Let me explain how I have witnessed it work beautifully, twice already:

When my close friend suddenly passed away in an accident, it was a shock to me and all of her friends. She was in her late 30s and had an active circle of friends, some of whom (me included) were living abroad or in different cities. We created a WhatsApp group of around 30 of her closest friends. We also invited her closest family members, her parents and siblings, to the group.

What happened next was really beautiful.

First, the “initiator” of the group shared some recent happy photos of the friend who had passed away. Everyone else in the group reciprocated almost instantly. People shared moments of joy with her from different travels, dinner parties, and gatherings. People shared photos that they wouldn’t normally post on social media.

Personally, through these photos, I felt like I was peeking into the soul of my friend. I thought I knew her well, but in the photos I saw people and events that we hadn’t talked about. I could see my friend having the best time of her life with her other friends, and the many beautiful experiences she had had. In short, the photos sent a clear underlying message to everyone: she had lived an amazing life with great people around her. It was great to witness, and it also made me feel so grateful to have had her as my friend for many years.

After the photos were shared, people started to write. Somebody in the group wrote a long memorial text about our mutual friend: about her good qualities, her personality, and great common memories. And people started to reciprocate again. Many pages of beautiful memorial speeches were produced. These memories again reinforced the feeling that she had had a good life, and people felt grateful for having shared life with her.

Having her close relatives in the group made it possible for them to see all the beautiful photos and memories as well. Later I learned that it was actually a positive surprise for them to witness how profoundly and positively their loved one had impacted other people’s lives. Her close relatives lived in another city and had not met these people before. Amidst the grief, they later said, the beautiful messages in the group were an enormous support in overcoming the loss.

I’ve seen this kind of memorial chat group on two occasions already, and it has worked beautifully. It certainly doesn’t replace real funerals, meeting people face-to-face, or calling people one-on-one. That said, in the global world we live in, it can be a great addition, especially for people who, for one reason or another, leave this planet unexpectedly while still young.

Wisdom from 80 years ago for the modern lifestyle

The history of dealing with death has gone through different stages. Before the Second World War, the entire village community took part in greeting the family facing the tragic loss. The family had a chance to personally receive condolences face-to-face from everyone.

Then urbanization started, and death turned into a private matter of the closest relatives. Death became an isolated event. As friends and colleagues were sometimes not invited to the funerals, there was a discontinuity in their relationship with the family who had faced the loss. Some people even started to avoid the family. They couldn’t find the words to say. There was no ritual.

Today, my generation has learned the old wisdom from history and updated it for a modern lifestyle. Besides the closest relatives, friends and colleagues feel the same need to share their own memories and to listen to those of others. The shared experience is important healing for all of us.

Some of us live in a distant city; some of us live abroad. A physical room in which to meet the family and drink a cup of wine next to the coffin might not be realistic. Instead, the room where we share memories, photos, and blessings might as well be a virtual one.

Thanks for reading.

Resources — Material that deepened my perspective on death:

Mandarin Chinese: My Top 10 Resources for Effective & Enjoyable Learning

Since I met my fiancée, who is a Singapore citizen and a fluent Mandarin Chinese speaker, I have felt the urge to learn the basics of the language myself. As I was living in Finland back then, there was no natural Chinese-speaking environment, so I had to create one artificially.

Here is the Chinese-language learning path I’d like to share with you. I’ve put together a simple and straightforward list of the resources I’ve used. It’s by no means a perfect path (there is no such thing), but I hope you will find inspiration in it if you’re about to start, or have recently started, to learn Mandarin Chinese.

In chronological order:

  1. Litao Chinese — YouTube video course for beginners

  2. Pimsleur audio course — Listening and speaking practice for beginners

  3. iTalki — Private tutoring online, over Skype/WeChat

  4. Short traditional course held offline — great for basic grammar and essential vocabulary

  5. TV drama in Chinese — Ode to Joy was my favorite show

  6. Chinese music — My favorite songs

  7. ChinesePod — Huge archive to learn from

  8. Skritter app — Hanzi drilling

  9. HSK tests — Goals to aim for

  10. Chinese Graded Readers book series — reading a real book in Chinese!

1. Litao Chinese was my first learning resource. It’s a simple YouTube course, with 20 episodes each 5-15min long — designed for complete beginners. I started every morning with a morning run and then reserved the first productive hour of the day to watch one episode with full focus. During the episode, I wrote down all the hanzis and pinyins in a traditional paper notebook. I paused the video once in a while to have enough time to do the writing. This was an easy way to get started, as you could learn from each episode independently.

2. Pimsleur Chinese audio course. To get started with spoken Chinese and simple conversations, I used the Pimsleur Chinese audio course. It consists of 30min-long audio recordings based on spaced repetition. You hear a sentence in English, have time to translate and pronounce it yourself in Chinese, and then you hear the correct Chinese translation. There are altogether 5 levels, each containing 30 episodes (that is, 150 episodes in total). I listened to one episode every morning while on a morning run. I could never have just sat still in a room and listened to them, though. Some people probably found it funny that I was running in a park and speaking to myself. I’ve also read online that some people listen to the Pimsleur episodes while driving a car. I think Pimsleur is best combined with some other activity you do on your own, or whatever works for you. (By the way, Pimsleur courses are quite expensive when purchased directly from the Pimsleur website; it’s better to have a look elsewhere first, e.g. on Amazon.)

3. Private tutoring online on iTalki. I started to take online classes once a week with native Chinese speakers. This is a real language hack and sped up my learning process, especially in pronunciation. Mandarin tutoring is priced anywhere between 5 and 25 USD per hour, depending on the experience and demand of the teacher. That’s very affordable compared to traditional face-to-face private tutoring. I’ve written an entire blog post about iTalki, which you can find here (Triple Your Language Learning and Speaking Skills with Private Online Tutors).

4. Short traditional classroom course — I also signed up for a traditional course held in person, which lasted 6 weeks, twice a week. This was useful as well, and it’s always nice to have other people around when learning a language. I found that the course was great for the basics of grammar and vocabulary. However, it didn’t help my pronunciation at all. I think many students actually learned to pronounce incorrectly because the teacher couldn’t help students individually. So, remember to book private tutor time to get the pronunciation right (see iTalki, the previous section).

5. Chinese TV drama. After a few months of somewhat monotonous learning, I felt the need to immerse myself in some real Chinese content. I found that Ode To Joy, a TV drama about five Shanghai ladies, was an entertaining way to relax for an hour before going to sleep. It also taught me about contemporary Chinese city culture, sometimes characterized by vast wealth gaps. You can find the series on Viki.com, and some episodes are also on YouTube.

6. Chinese music. As you must have noticed, I have a habit of going for a 30min morning run. After I listened to all 150 Pimsleur episodes during my runs, I compiled a list of upbeat Chinese music that I could enjoy listening to while running. I found this playlist on Spotify which I’ve listened to dozens of times. Two of my favorite tracks are from a Taiwanese band called 八三夭 (831): the first track being a duet with 鄭秀文 (Sammy Cheng), and the second one called 致青春 Young guns.

The most hilarious piece of music I’ve found online is Backstreet Boys’ legendary song ‘I want it that way’ sung in Chinese :-) See the video below.

7. ChinesePod is a huge archive of videos and podcasts for Chinese language learners at every level. There are free episodes, or a paid monthly subscription, to choose from. I like the vibe of ChinesePod, as its production quality is really professional. However, the content is not very uniform. Many times you need to follow the video instructions at the same time; listening to just the audio isn’t always enough. This made it difficult for me to incorporate ChinesePod episodes into my morning run habit. That said, I’m sure many Chinese learners will find ChinesePod resources immensely useful. Also, have a look at their YouTube channel or phone app for some free content.

Skritter app — the best app to drill those damn hanzis

8. Drilling hanzis — enter the Skritter app. The most daunting thing in Chinese language learning is memorizing the thousands of hanzi characters. To be able to complete the first HSK 1 test, you need to master 150 hanzis. For HSK 2 you need 300. However, to be able to read a newspaper you need to know around 3000 hanzis! Learning hanzis comes down to a lot of drilling: reading and recognizing the characters, in whatever formats and ways work for you. The best app, in my opinion, to incorporate into your daily bus/train commutes is Skritter, which is specifically designed for learning Chinese hanzis. It’s based on spaced repetition like Pimsleur, and most importantly, you can also practice writing the strokes of each hanzi, not just memorize them. You can see your daily/weekly/monthly progress in the app, which is surprisingly motivating.

A few more thoughts about learning hanzis: Besides writing on your phone screen with Skritter, it is good to also write using a traditional pen and paper. Nothing beats the muscle memory of writing down the strokes in an old-fashioned notebook. Also, it’s a good idea to start studying hanzis early on. I didn’t, and I had to catch up later, which was more of a struggle for me.

And finally, learn the ‘radicals’ and their meanings. A ‘radical’ is a graphical component of a hanzi, and it can be a semantic or phonetic indicator of what the hanzi means. This can immensely help with memorization. In the beginning, I was just trying to bang out hanzis without much supporting logic. When my iTalki teacher started to teach me radicals, I began to memorize hanzis much better.

9. Chinese Proficiency Test, or HSK tests (Hanyu Shuiping Kaoshi). Just as English has the TOEFL test, the standardized test for the Chinese language is the HSK test. It comes in 6 levels: HSK 1, HSK 2, HSK 3, HSK 4, HSK 5 and HSK 6. One needs to master 150, 300, 600, 1200, 2500 and 5000 words, respectively, at each level. Passing the tests won’t make you a fluent speaker, but they have served as motivational goals for me. That’s why I have used official HSK language books when learning Chinese.

You can find your nearest test center by simply googling “HSK test YourCountry”. There are plenty of practice tests online (e.g. here), and it’s good to complete some 3-4 practice tests on your own before taking the actual test.

In my experience, HSK 1 is ridiculously easy to complete after a few months of self-study. I actually never tested myself officially and went straight to the HSK 2 test. For HSK 2, you need a fair bit of practice to pass. In both the HSK 1 and HSK 2 tests, you will survive with just pinyin romanization. From HSK 3 upwards, the tests get challenging: everything is only in hanzis, and there is no pinyin to help you.

10. Chinese Graded Reader book series. Jared Turner and John Pasden have authored great fiction books for Chinese learners. They use, for example, original English stories by H.G. Wells that have been adapted to Chinese using a limited set of hanzis. The books come in ‘levels’ as well, with Level 1 books containing only 300 hanzis. If you’ve studied Chinese for about 1-2 years, you should be able to read the Level 1 books. There are also some footnote translations for difficult words, but the actual text is written entirely in hanzis.

I’ve personally read the book “The Country of the Blind” and it was great. The story was captivating, and it felt inspiring to actually read a real book in Chinese.

To be transparent though, I was slow in finishing the entire book (about 70 pages). It took me around 2 weeks in total, with around 2 hours of focused reading every day.

That said, it was very effective. I noticed that my hanzi learning and memorization speed 10x’d. By the end of the book, I was reading significantly faster than just 2 weeks before. It was much more effective than tackling HSK textbook chapters.

I’m overwhelmed by all the resources — where should I start?

To get started, I recommend booking a good teacher on iTalki at least twice a week, ordering physical copies of the HSK 1 and HSK 2 textbooks, and simply sticking to these in your beginner routine for a few months. I advise never dropping this routine (even if you get the idea of replacing it with another, supposedly ‘better’ one), but simply introducing more habits on top of the iTalki classes and HSK books.

The next few habits to add would be the Pimsleur podcasts, HSK tests, and Chinese Graded Readers books.

Especially at the beginning of your Chinese studies, apps, music, and TV dramas are just entertainment. They are usually just excuses to avoid real studying, which takes quite a bit of mental energy :-)

Looking back, how would I study differently? 

Everything comes back to habits. No technique, resource or language hack is useful if you don’t have a regular and steady studying rhythm. Personally, I regret having some months of pauses from iTalki classes when things got too busy and stressful in my entrepreneurial life.

Having a weekly/daily routine is everything. Early morning routines have been the best for me. Besides acquiring a new skill, I discovered language learning also reduced my overall stress levels (I later learned that this is supported by science as well).

Another thing I would do differently is the way I learned hanzis. For the first few months, I neglected hanzis almost completely. With pinyin romanization everywhere in the HSK 1 and HSK 2 textbooks, it was too easy to get by without them.

Not learning hanzis from the beginning was a mistake I had to pay for later, as all the chapters in the HSK 3 book are presented in hanzis, without pinyin next to the text. Completing a chapter at the HSK 3 level (exercises included) took me four times longer than at HSK 2.

Hopefully, you got inspired :-)

With this blog post, I wanted to show that you can create ‘virtual immersion’ in language learning surprisingly easily, with resources that are mostly available online. I used the methods mentioned above for the first 2-3 years of my part-time Chinese studies.

Happy learning!