Layer N: Hyper Performant and Hyper Composable Execution. An Interview with Co-Founder David Cao

Bitcoin Suisse AG
2 Apr 2024 · 52:17

Summary

TL;DR: In this engaging discussion, David Cao, co-founder of Layer N, shares insights on their cutting-edge Layer 2 blockchain solution designed to tackle scalability issues in the crypto industry. Cao discusses the evolution of Layer N, its unique features like the StateNet shared-state architecture and the Inter-VM Communication protocol, and the potential of the AI and crypto intersection. Exciting partnerships and the future of Layer N are also highlighted, emphasizing the project's commitment to enhancing on-chain capabilities and user experience.

Takeaways

  • 🚀 Layer N is a hyper performant, scalable Layer 2 blockchain solution designed to tackle scalability issues in the crypto industry.
  • 🌐 David Cao, co-founder of Layer N, shares his journey from bioinformatics research at Harvard to building high-frequency trading systems on blockchain.
  • 🛠️ Layer N's core innovation is a unique network of rollups with custom VMs that allows exponentially more compute for applications while retaining seamless composability.
  • 🔍 The platform aims to achieve feature and performance parity with centralized systems without sacrificing the core benefits of decentralization, such as permissionlessness and censorship resistance.
  • 🤝 Layer N has gained backing from renowned players in the crypto space, including Peter Thiel's Founders Fund, highlighting its potential impact on the industry.
  • 🔗 The StateNet concept within Layer N enables applications and rollups to share a standardized messaging pipeline protocol, facilitating seamless and instant communication between them.
  • 💡 Layer N's approach to scalability involves a modular stack, with Layer N focusing on the execution layer, Ethereum providing security, and a partnered data availability solution, EigenDA.
  • 🚧 Layer N's development is at an exciting stage, with major strategic partnerships and announcements expected in the near future, including a focus on integrating AI into on-chain applications.
  • 🔄 The platform's Inter-VM Communication (IVC) protocol allows assets to move between rollups and applications without withdrawal periods or third-party bridges.
  • 📈 Layer N's testnet has demonstrated over 100k TPS in a closed test environment, showcasing its potential for high-performance trading and order book applications.
  • 🌟 The future of Layer N is promising, with a focus on practical engineering solutions, future-proof modularity, and a commitment to trustless, decentralized systems.

Q & A

  • What is Layer N and what are its core objectives?

    -Layer N is a hyper performant, scalable Layer 2 blockchain designed to tackle scalability issues in the crypto industry. Its core objectives include increasing the surface area of what's possible to build on chain by 10x and enabling more compute for application developers, thereby allowing the creation of complex applications without worrying about computational constraints.

  • How does Layer N approach the issue of scalability differently from other blockchain projects?

    -Layer N approaches scalability by introducing a new model of execution layers and virtual machines (VMs). It focuses on providing unbounded compute surface area for application developers and allows for the creation of custom VMs optimized for specific use cases, leading to significant improvements in performance and efficiency.

  • What is the significance of the partnership with EigenDA for Layer N?

    -The partnership with EigenDA is significant for Layer N because it provides a solution for data availability, which is crucial for the functioning of Layer N's rollups. EigenDA offers high bandwidth and storage, which are essential for handling the large volume of transactions that Layer N aims to process.
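
    As a rough illustration of why DA bandwidth was the deciding factor (the ~8-10 MB/s and ~1.5 MB/s figures are quoted later in the interview; the bytes-per-transaction value below is an assumed placeholder, not a Layer N specification), here is a back-of-envelope sketch in Rust:

      // Back-of-envelope: DA bandwidth caps rollup throughput.
      fn max_tps(bandwidth_bytes_per_sec: f64, bytes_per_tx: f64) -> f64 {
          bandwidth_bytes_per_sec / bytes_per_tx
      }

      fn main() {
          let bytes_per_tx = 150.0;          // assumed average posted data per tx
          let eigenda = 10.0 * 1_000_000.0;  // ~10 MB/s (upper figure quoted)
          let celestia = 1.5 * 1_000_000.0;  // ~1.5 MB/s (figure quoted)
          println!("EigenDA-limited TPS  ~ {:.0}", max_tps(eigenda, bytes_per_tx));  // ~66667
          println!("Celestia-limited TPS ~ {:.0}", max_tps(celestia, bytes_per_tx)); // ~10000
      }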

  • Can you explain the concept of zero-knowledge fraud proofs as mentioned in the interview?

    -Zero-knowledge fraud proofs are the mechanism Layer N uses to ensure the security and integrity of transactions. Instead of running validity proofs on every single transaction, which is expensive and time-consuming, zero-knowledge fraud proofs only require a validity proof when a fault is detected. This reduces the cost and time required for validation while maintaining a high level of security.
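
    A minimal sketch of that lifecycle, with hypothetical types and placeholder function bodies standing in for Layer N's actual components (an illustration of the optimistic-execution-plus-ZK-fault-proof idea, not Layer N's implementation):

      // All types and function bodies below are placeholders.
      #[derive(Clone, Copy, PartialEq)]
      struct StateRoot([u8; 32]);
      struct Batch(Vec<Vec<u8>>);
      struct ZkProof(Vec<u8>);

      // 1. The sequencer executes the batch optimistically and posts a claimed root.
      fn claimed_root(_batch: &Batch) -> StateRoot { StateRoot([0; 32]) }

      // 2. Any watcher replays the same batch from the data-availability layer.
      fn re_execute(_prev: StateRoot, _batch: &Batch) -> StateRoot { StateRoot([0; 32]) }

      // 3. Only on divergence is a validity proof of the correct transition generated
      //    (e.g. inside a zkVM) and submitted to an on-chain verifier contract.
      fn prove_transition(_prev: StateRoot, _batch: &Batch) -> ZkProof { ZkProof(Vec::new()) }
      fn submit_fraud_proof(_proof: ZkProof) { /* a single L1 transaction */ }

      fn watch(prev: StateRoot, batch: &Batch) {
          if claimed_root(batch) != re_execute(prev, batch) {
              // The proving cost is paid only in the rare fault case.
              submit_fraud_proof(prove_transition(prev, batch));
          }
          // In the common honest case, no proof is ever generated.
      }

      fn main() {
          watch(StateRoot([0; 32]), &Batch(vec![b"tx".to_vec()]));
      }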

  • What are the benefits of using custom VMs in Layer N's architecture?

    -Custom VMs in Layer N's architecture allow for the creation of application-specific execution environments that are highly optimized for their intended use cases. This results in better performance, more efficient use of resources, and the ability to handle complex computations that general-purpose VMs may struggle with.
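
    One way to picture what an application-specific execution environment boils down to in code; the trait and the toy VM below are illustrative assumptions, not Layer N's SDK:

      // A rollup's execution environment reduced to its essence: a deterministic
      // state-transition function over application-defined types.
      trait ExecutionEnvironment {
          type Tx;
          type State;
          fn apply(&self, state: &mut Self::State, tx: &Self::Tx);
      }

      // Toy example: a VM whose entire job is incrementing a counter. A real
      // custom VM would instead pick the data structures and algorithms that
      // suit its one workload (order matching, AI inference, and so on).
      struct CounterVm;

      impl ExecutionEnvironment for CounterVm {
          type Tx = u64;    // "add this amount"
          type State = u64; // a single counter
          fn apply(&self, state: &mut u64, tx: &u64) { *state += *tx; }
      }

      fn main() {
          let vm = CounterVm;
          let mut state = 0u64;
          for tx in [1, 2, 3] {
              vm.apply(&mut state, &tx);
          }
          assert_eq!(state, 6);
      }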

  • How does Layer N plan to address the challenge of liquidity fragmentation across different rollups?

    -Layer N addresses liquidity fragmentation with a shared communication protocol known as the Inter-VM Communication (IVC) protocol. This protocol allows different rollups and applications within the Layer N ecosystem to communicate and share liquidity seamlessly, eliminating the need for third-party bridges and reducing complexity for users and developers.
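
    The interview describes the IVC as, at bottom, a standardized way to pipeline arbitrary bytes from one rollup to another under shared security. A minimal sketch of what such a message envelope could look like (the field names are assumptions, not Layer N's wire format):

      // Hypothetical envelope for a cross-rollup message under shared security.
      // Because every rollup settles to the same state machine, delivery needs
      // neither a third-party bridge nor an optimistic withdrawal delay.
      struct IvcMessage {
          source_rollup: u32, // which rollup sent it
          dest_rollup: u32,   // which rollup should execute it
          nonce: u64,         // ordering / replay protection per (source, dest) pair
          payload: Vec<u8>,   // arbitrary bytes, interpreted by the destination VM
      }

      fn main() {
          // e.g. an asset transfer encoded however the destination VM expects it
          let msg = IvcMessage {
              source_rollup: 1,
              dest_rollup: 7,
              nonce: 42,
              payload: b"transfer:1000:alice".to_vec(),
          };
          println!("routing {} bytes from rollup {} to rollup {}",
                   msg.payload.len(), msg.source_rollup, msg.dest_rollup);
      }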

  • What is the current development stage of Layer N and what can we expect in the near future?

    -Layer N is at an advanced stage of development, with a public testnet and a subsequent mainnet launch planned. In the near future, Layer N will be announcing partnerships with major liquidity providers and is working on integrating AI use cases into its platform, aiming to provide tangible applications beyond infrastructure.

  • What are the security considerations for Layer N's architecture?

    -Security is a key consideration in Layer N's architecture. Although an off-chain data availability solution inherits only a subset of Ethereum's security, Layer N maintains a strong security posture through its zero-knowledge fraud proofs and its partnership with EigenDA, whose security is restaked from Ethereum. This keeps the system decentralized and trustless while achieving high performance.

  • How does Layer N's approach compare with other Layer 2 solutions like Optimism, Arbitrum, and others?

    -While Layer N shares the vision of decentralized applications with other Layer 2 solutions, it differentiates itself through its focus on modularity, custom VMs, and a shared communication protocol. Layer N aims to provide a seamless and high-performance environment for developers and users, with a future-proof design that can adapt to new technologies and research findings.

  • What are the key features of the Nord VM developed by Layer N?

    -The Nord VM is a trading and order book-specific virtual machine developed by Layer N. It is designed to handle tens of thousands of trades per second, offering performance on par with centralized exchanges but with the added benefits of on-chain settlement and trustlessness.
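
    To make "order book-specific VM" concrete, here is a toy price-priority matching loop of the kind such a VM is free to optimize; it is illustrative only and unrelated to Nord's actual engine:

      use std::collections::BTreeMap;

      // Toy limit order book: price -> resting quantity (time priority omitted).
      #[derive(Default)]
      struct Book {
          bids: BTreeMap<u64, u64>,
          asks: BTreeMap<u64, u64>, // lowest ask matches first
      }

      impl Book {
          // Match an incoming market buy against resting asks, best price first.
          fn market_buy(&mut self, mut qty: u64) -> u64 {
              let mut filled = 0;
              let prices: Vec<u64> = self.asks.keys().copied().collect();
              for price in prices {
                  if qty == 0 { break; }
                  let avail = self.asks[&price];
                  let take = avail.min(qty);
                  qty -= take;
                  filled += take;
                  if take == avail {
                      self.asks.remove(&price);
                  } else {
                      *self.asks.get_mut(&price).unwrap() -= take;
                  }
              }
              filled
          }
      }

      fn main() {
          let mut book = Book::default();
          book.asks.insert(101, 5);
          book.asks.insert(100, 3);
          assert_eq!(book.market_buy(7), 7); // fills 3 @ 100 and 4 @ 101
      }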

  • What are some of the strategic partnerships Layer N has announced or is planning to announce?

    -Layer N has announced a partnership with SushiSwap, whose Susa exchange is being built on Layer N as a hyper-performant order book. Additionally, they are planning to announce partnerships with major liquidity providers, which will help bring more users and assets to the platform.

Outlines

00:00

🎶 Introduction to Layer N and Guest

The paragraph introduces the discussion with David Cao, co-founder of Layer N, a high-performance and scalable Layer 2 blockchain designed to address scalability issues in the crypto industry. The host expresses excitement for the conversation, highlighting Layer N's backing by prominent figures in the crypto space. David shares his background, from studying at Harvard to his involvement in bioinformatics research, and his eventual entry into the crypto world through building onchain order books. The discussion sets the stage for exploring the challenges and solutions in building high-performance blockchain applications.

05:00

🚀 Addressing Scalability and Liquidity in Blockchain

In this segment, the conversation delves into the core challenges of scalability and liquidity in blockchain technology. David explains how traditional blockchains limit complex application development due to computational constraints. He discusses Layer N's approach to removing these constraints by allowing developers to build their own execution environments, or virtual machines, thus enabling the creation of complex applications. The discussion also touches on the importance of a shared communication protocol for seamless interaction between different rollups and applications, emphasizing the benefits of Layer N's StateNet concept and Inter-VM Communication protocol.

10:00

🔒 Shared Security and the Zero-Knowledge Fraud Proof

This paragraph focuses on the concept of shared security and the zero-knowledge fraud proof mechanism introduced by Layer N. David elaborates on how shared security assumptions among rollups enable seamless asset movement and communication between applications. He presents the zero-knowledge fraud proof as a way to handle disputes efficiently without running validity proofs for every transaction. This mechanism allows for a single-block proof process, potentially reducing withdrawal periods and enhancing the overall security model of Layer N.
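
A rough way to see the withdrawal-period argument above (the per-block censorship probability and the independence assumption are illustrative, not figures from the interview):

    // If the fraud proof needs only one transaction included within a window of
    // n L1 blocks, and each block is censored independently with probability p,
    // the proof misses the whole window with probability p^n, which shrinks very
    // quickly as n grows -- so the challenge window can be far shorter than for a
    // multi-round interactive dispute that needs several sequential inclusions.
    fn miss_probability(p: f64, n: u32) -> f64 {
        p.powi(n as i32)
    }

    fn main() {
        let p = 0.1; // assumed per-block censorship probability (illustrative)
        for n in [1u32, 5, 10, 20] {
            println!("window of {:>2} blocks -> miss probability {:.1e}", n, miss_probability(p, n));
        }
    }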

15:03

🧩 Positioning Layer N in the Blockchain Ecosystem

The discussion now addresses how Layer N fits within the broader blockchain ecosystem. David explains that Layer N operates as an execution layer, focusing on optimizing performance and compute for application developers. He contrasts Layer N with other rollup types, highlighting its unique combination of optimistic execution and zero-knowledge fraud proofs. The conversation also touches on the partnership with EigenDA for data availability, emphasizing the importance of high bandwidth and security in the choice of solution.

20:05

🔑 Trade-offs and the Future of Layer N

David discusses the trade-offs involved in Layer N's approach, particularly the requirement to build within the constraints of the RISC Zero language set, which favors Rust and may limit the initial choice of programming languages. He also mentions plans to support a wider range of programming languages and emphasizes Layer N's commitment to assisting developers in this new paradigm. The conversation looks forward to the potential of Layer N to enable novel applications, especially at the intersection of AI and crypto, and hints at upcoming strategic partnerships and developments.

25:08

🌐 Exciting Developments and Partnerships for Layer N

The paragraph wraps up the discussion with insights into Layer N's current development stage and strategic partnerships. David shares details about the upcoming launch of Sushi's Susa on Layer N, which promises to deliver centralized exchange-level performance with full on-chain settlement. He also teases major announcements related to liquidity providers and expresses enthusiasm for the potential of AI applications within the Layer N ecosystem. The conversation concludes with David sharing his excitement for projects that focus on actual use cases within crypto and AI, emphasizing the importance of trustless systems and decentralized infrastructure.

Keywords

💡Layer N

Layer N is a hyper performant and scalable layer 2 blockchain solution designed to tackle scalability issues in the crypto industry. It aims to increase the surface area of what's possible to build on-chain by enabling more compute power for applications. In the video, Layer N is positioned as a significant leap forward in addressing the bottlenecks that hinder widespread adoption of decentralized finance (DeFi).

💡Scalability

Scalability refers to the ability of a system to handle increasing amounts of work, or its capacity to effectively grow and manage larger workloads without a corresponding degradation in performance. In the context of the video, scalability is a critical issue for blockchain networks, as it directly impacts their capacity to support a broad range of applications and users.

💡Layer 2

Layer 2 refers to a solution that operates on top of an existing blockchain (the 'Layer 1') to improve its performance and scalability. These solutions often involve creating a separate network or protocol that processes transactions off the main blockchain, thereby reducing congestion and increasing transaction throughput.
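
A schematic of the rollup pattern in code, following the arrangement described in the interview (execution off-chain, transaction data to a DA layer, a state root settled to Ethereum); every function here is a placeholder for a real component:

    // Placeholder pipeline for one rollup batch.
    struct StateRoot([u8; 32]);

    fn execute_off_chain(txs: &[Vec<u8>]) -> StateRoot {
        // run the batch through the rollup's VM at full speed, off the L1
        StateRoot([txs.len() as u8; 32])
    }

    fn post_to_da_layer(_txs: &[Vec<u8>]) {
        // publish the raw transaction data (EigenDA in Layer N's case) so that
        // anyone can re-execute the batch and challenge the result
    }

    fn settle_on_l1(_root: &StateRoot) {
        // post the claimed state root to an L1 contract, where it can be
        // verified or disputed during the challenge window
    }

    fn main() {
        let batch: Vec<Vec<u8>> = vec![b"tx1".to_vec(), b"tx2".to_vec()];
        let root = execute_off_chain(&batch);
        post_to_da_layer(&batch);
        settle_on_l1(&root);
    }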

💡DeFi (Decentralized Finance)

DeFi, or decentralized finance, is a financial system built on blockchain technology that aims to provide alternatives to traditional finance by removing intermediaries like banks and allowing for peer-to-peer transactions. DeFi applications include lending, borrowing, trading, and investing, all conducted in a decentralized manner.

💡Custom VMs (Virtual Machines)

Custom Virtual Machines (VMs) are specialized computing environments designed for specific tasks or applications. In the context of Layer N, these VMs are optimized to provide more computing power and better performance for blockchain applications, allowing for the execution of complex operations that would typically be too resource-intensive for the base blockchain layer.

💡Composability

Composability refers to the ability of different systems or components to be combined or integrated in a seamless and interoperable manner. In the blockchain space, composability is important for creating a flexible and adaptable ecosystem where various decentralized applications (dApps) and protocols can work together and enhance each other's functionality.

💡State Machine

A state machine is a computational model that describes the behavior of a system as a sequence of states and transitions between those states based on inputs. In the context of Layer N, the term 'single shared state machine' refers to a network-wide computational environment that powers the execution of multiple applications, each maintaining a consistent view of the system's state.
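
A minimal sketch of the "single shared state machine" idea: many rollups advancing one global state through a single pure transition function, so every honest participant who replays the same batches reaches the same state (types and names are illustrative):

    use std::collections::HashMap;

    // One global state, partitioned per rollup id; u64 stands in for whatever
    // state an application actually keeps.
    type GlobalState = HashMap<u32, u64>;

    struct Batch {
        rollup_id: u32,
        delta: u64,
    }

    // Pure transition function: same inputs, same resulting state, everywhere.
    fn transition(mut state: GlobalState, batch: &Batch) -> GlobalState {
        *state.entry(batch.rollup_id).or_insert(0) += batch.delta;
        state
    }

    fn main() {
        let batches = [
            Batch { rollup_id: 1, delta: 5 },
            Batch { rollup_id: 2, delta: 3 },
        ];
        let final_state = batches.iter().fold(GlobalState::new(), |s, b| transition(s, b));
        assert_eq!(final_state[&1], 5);
        assert_eq!(final_state[&2], 3);
    }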

💡Zero Knowledge Fraud Proofs

Zero Knowledge Fraud Proofs are a type of cryptographic proof that allows a prover to demonstrate that a statement is true without revealing any additional information about the statement. In the context of Layer N, these proofs are used to secure the system by allowing for the quick identification and correction of fraudulent activity without the need for extensive validation of every transaction.

💡Inter-VM Communication Protocol

The Inter-VM Communication Protocol is a standardized messaging system that enables different virtual machines or applications within a blockchain network to communicate with each other. This protocol is essential for creating a cohesive and interconnected blockchain ecosystem where assets and data can move freely between different layers and applications.

💡Shared Security

Shared security in the context of blockchain refers to a model where multiple rollups or blockchain networks rely on a common security mechanism or a shared set of validators to ensure the integrity and trustworthiness of the system. This approach can lead to more efficient security practices and reduce the duplication of efforts across different blockchain applications.

💡High Performance Computing (HPC)

High Performance Computing (HPC) refers to the use of supercomputers and computational techniques to solve complex problems that require significant amounts of processing power. In the context of Layer N, HPC is leveraged to enable the execution of complex applications on the blockchain, which would otherwise be impossible due to the computational constraints of traditional blockchain networks.

Highlights

David Cao, co-founder of Layer N, joins the conversation to discuss the project's innovative approach to tackling scalability issues in the blockchain industry.

Layer N is a hyper performant, scalable Layer 2 blockchain designed to address the scalability challenges the industry has faced since its inception.

The project has the backing of renowned and prominent players in the crypto space, including Peter Thiel's Founders Fund, which co-led the seed round.

David Cao shares his professional background, including his experience with bioinformatics research at Harvard and his initial foray into crypto through building onchain order books.

Layer N's creation was inspired by the realization of bottlenecks in building high-performance onchain systems, leading to the design of a new Layer 2 model.

The tech stack is introduced, highlighting the concept of a single shared state machine powered by a network of custom and optimized rollups.

Layer N enables unbounded compute surface area, allowing developers to build complex applications without worrying about computational constraints.

The StateNet concept is explained, which allows rollups and applications to share a standardized messaging pipeline protocol for seamless and instant communication.

Layer N aims to achieve feature and performance parity with centralized systems while retaining the core benefits of composability and permissionlessness.

The Inter-VM Communication (IVC) protocol is discussed, which enables seamless asset movement and communication between rollups without the need for withdrawal periods or third-party bridges.

Shared security is a key aspect of the communication protocol, ensuring that all rollups share the same security risks and allowing for the use of zero-knowledge fraud proofs.

Layer N sits at the execution component of the modular stack, focusing on being the best execution layer in crypto for application developers.

The project partners with EigenDA for data availability, leveraging its high-bandwidth storage to support the project's ambitious throughput goals.

Layer N's approach combines the best of both ZK rollups and optimistic rollups, creating a new category of rollups with zero-knowledge fraud proofs.

The project's use of Rust for building the XVM (eXtended Virtual Machine) is discussed, highlighting the language's benefits for security and performance.

Layer N's Nord VM is introduced as an example of an application-specific VM, designed to settle tens of thousands of trades per second for order book use cases.

David Cao shares insights on the project's development stage, mentioning upcoming announcements and strategic partnerships, including Sushi's Susa building on Layer N.

The conversation concludes with David Cao discussing other exciting projects he's following, including the intersection of AI and crypto and the potential for onchain use cases.

Transcripts

play00:01

[Music]

play00:11

hi this is uncut Gems by Bitcoin Suisse

play00:14

where we get into conversation with

play00:16

leading crypto Founders that will enable

play00:18

the next Leap Forward in our industry

play00:21

and ahead of the curve uncut gems

play00:23

provides you with insights on major

play00:25

emerging narratives and offers valuable

play00:28

one-of-a-kind discussions with

play00:30

pioneering minds and today I'm very

play00:33

excited to be joined by David Chow who

play00:37

is co-founder of layer n layer N is a

play00:40

hyper performant very scalable layer 2

play00:43

blockchain that is designed to tackle

play00:46

the scal scalability issues that our

play00:48

industry deals with since the very

play00:51

beginning and the scalability issues

play00:53

that also hinder the widespread defi

play00:55

adoption basically remarkably I want to

play00:58

to state that Layer N boasts the seed

play01:00

backing of very renowned and prominent

play01:03

players like Peter Thiel's Founders Fund who

play01:06

co-led the seed round David I'm very

play01:09

stoked to have you today and thanks for

play01:12

joining us thank you so much Dominic

play01:15

it's a pleasure and honored to to be on

play01:17

the show nice okay let's kick things off

play01:20

and start maybe with your professional

play01:23

background and maybe you could also

play01:25

share with us the story behind The

play01:28

Animated visuals on your website I think

play01:30

they are pretty dope I guess like quick

play01:32

personal background prior to coming into

play01:35

crypto used to study at Harvard and used

play01:37

to do bioinformatics research first got

play01:40

into crypto with my same co-founders at

play01:43

ler and actually but we first got into

play01:45

crypto building onchain order books on

play01:47

the Solana blockchain actually

play01:49

initially and that was our first

play01:51

introduction to the whole world of high

play01:53

frequency not only high frequency

play01:55

trading system but high frequency sort

play01:56

of systems in general right especially

play01:59

ones on chain and building that order

play02:02

book and having that opportunity to

play02:04

interact with Traders and market makers

play02:07

and other system thinkers and designers

play02:10

made us realize that even if we were

play02:11

building on the plat the fastest

play02:13

blockchain at the time and still today

play02:16

there were still immense bottlenecks at

play02:19

the infra layer that prevented us from

play02:22

building what building essentially

play02:26

something that could compete with modern

play02:28

Financial Networks like NASDAQ New York

play02:31

Stock Exchange Visa coinbase Etc so

play02:35

basically once we hit that sort of wall

play02:37

of okay if we want to build something

play02:39

that can actually compete we can't do it

play02:41

with the current stack we spent more and

play02:43

more times going lower and lower down

play02:44

the stack researching okay like how can

play02:46

we actually scale up this onchain order

play02:48

book and that eventually led us down the

play02:50

path of accidentally designing a new

play02:53

layer 2 model and so that was like the

play02:55

first story of the first Sparks of how

play02:57

Layer N started to become what it is

play03:00

today and then after multiple iterations

play03:02

and research Cycles it eventually became

play03:04

what Layer N is today which is this more this

play03:06

sort of unique network of rollups with

play03:09

custom VMS that allow exponentially more

play03:12

compute for applications all while

play03:14

retaining seamless composability

play03:16

so really it was this like firsthand

play03:20

experience of trying to build something

play03:23

hyper performant on chain failing and

play03:26

spending the time to realize how we

play03:28

failed in do that and fixing the problem

play03:31

that essentially blocked us from trying

play03:33

to build that first thing so it was a

play03:35

very natural sort of experience for us

play03:38

and I think that's also something that

play03:40

hopefully also makes us understand

play03:42

Builders a lot more given that we've

play03:44

also been through the whole process of

play03:47

trying to build a protocol on chain

play03:50

that's a nice background and so as I

play03:53

understand basically the boundaries you

play03:55

dealt with back then drove you towards

play03:58

layer n and now you're aiming to solve

play04:01

all of these the bottlenecks you

play04:03

described like basically aiming to to

play04:06

achieve a feature and performance parity

play04:09

to centralized system systems but as I

play04:13

also understood you still value like the

play04:17

core features of the centralized systems

play04:19

very high which is permissionless like

play04:21

censorship resistance so you're trying

play04:24

to combine the best of both words that's

play04:27

a very nice story let's start with very

play04:30

high level unpacking the tech stack bit

play04:32

by bit like what's new in layer n why

play04:35

should we be excited about it I know you

play04:39

coined the term StateNet a very powerful word

play04:42

which is this single shared State

play04:45

machine that is powered by that Network

play04:48

you already described of course custom

play04:50

and optimized rollups so before we dive

play04:54

deeper into the tech stack maybe give us

play04:57

some insights very high level on what

play05:00

are the perks of such a system and why

play05:02

should we be excited about it for sure

play05:06

absolutely so I think there's really two

play05:08

major unique unlocks right number one is

play05:12

basically removing the bounds removing

play05:15

the computational constraints for

play05:17

application developers so historically

play05:20

if you were to build any sort of

play05:24

complex application on chain it would

play05:27

be really hard so that's sort of part of

play05:29

the reason x*y=k became a thing right

play05:32

because we had to find a simple way of

play05:35

computing like a market making formula

play05:37

without blowing out the gas constraints

play05:39

on the EVM but basically it's a it tells

play05:41

the tale of how hey if you're trying to

play05:44

build anything more complex like you

play05:46

just can't do that on chain at the

play05:48

moment you can't do that on things like

play05:49

the evm even the svm like I remember

play05:52

back in the day when we were building on

play05:53

Solana we had to build a lookup table to

play05:56

find a square root of a number right and

play05:58

this is yeah like you should be able to

play06:00

just find a square root of a number this

play06:02

is basic math computations and so the

play06:05

first core unlock that we enable by

play06:08

allowing people to build their own

play06:10

basically execution environments or

play06:12

virtual machines as we like to call it

play06:14

is that you have this like basically

play06:17

unbounded compute surface area and you

play06:19

can build very complex applications

play06:21

without needing to worry about okayy I

play06:22

need to fit all these computational

play06:25

constraints in the that the EVM has

play06:28

right so that's number one you're now

play06:30

able to build very complex stuff we can

play06:32

even build all the way to things like

play06:34

incorporating like AI models too right

play06:36

which I'm sure what will'll talk about

play06:38

it some point on this show given how

play06:39

much AI is of Interest now that's the

play06:41

first unlock the second unlock is none

play06:44

of this would be useful if it can't

play06:46

share liquidity and communicate with

play06:48

everything else right so like we've seen

play06:50

like this Paradigm play out of everyone

play06:52

building their own individual rollups

play06:55

the whole problem with that is

play06:56

everything then gets fragmented right

play06:58

liquidity gets fragmented the user

play07:00

experience is fragmented you need to

play07:01

work with third party Bridges to move

play07:03

from one roll up to another it's just

play07:05

overall not a good experience right and

play07:08

so the second core unlock that we have

play07:10

is basically the statet concept which

play07:13

allows each of these rollups and

play07:17

applications to share a standardized

play07:21

messaging pipeline protocol that we call

play07:24

the inter VM communication protocol that

play07:26

basically allows applications and rollups

play07:29

to seamlessly and instantly communicate

play07:32

with each other right and so now you no

play07:35

longer need to worry about 7day withdraw

play07:37

periods times you don't need to worry

play07:39

about interoperability and bridging

play07:42

protocols and stuff like that all of

play07:43

that just like works right so like me as

play07:46

an application developer I either build

play07:48

my smart contracts on one of the evm

play07:50

rollups or I build like my I launch my

play07:52

own rollup with my own custom VM and

play07:55

everything just works right these just

play07:57

compose just as if I were building on a

play08:00

monolithic blockchain so really like the

play08:04

goal is hey like how can we like you

play08:05

said earlier how can we accomplish

play08:09

feature parity and performance parity

play08:10

with centralized systems all while

play08:12

retaining the benefits of composability

play08:14

that you're used to on the monolithic

play08:16

system so that's really yeah so

play08:19

hopefully that was clear yeah perfect

play08:22

answer and what you described earlier

play08:25

very much reflects what I observe or

play08:27

experience as a user myself right like

play08:30

it's it's painful if you're like stuck

play08:34

on different rollups and seriously

play08:37

sometimes I find myself questioning like

play08:39

how should I describe or explain that to

play08:43

something somebody that is new in the

play08:45

space while avoiding to get fished or

play08:48

something like that on the path towards

play08:51

briding from all these it's a thing and

play08:53

it's

play08:54

interoperability and all of this

play08:57

liquidity fragmentation and and so forth

play09:00

remains one of the core pain points

play09:02

within the industry in my opinion and

play09:04

it's it's awesome to see projects like

play09:06

layer and tackling these very challenges

play09:09

of our ecosystem so layer n kind of

play09:12

strives to enable this Melting Pot of

play09:15

VMS As I understood and rollups on the

play09:18

back of this like you said shared

play09:20

communication but also liquidity layer

play09:23

maybe we can dive into BM communication

play09:26

protocol which you call IVC and maybe

play09:29

could explain like how does it work and

play09:31

how does it compare with other Solutions

play09:34

are there for instance Concepts like

play09:36

shared security or something like that

play09:39

please guide us through the very concept

play09:41

of this liquidity and communication

play09:44

layer so at

play09:47

the base of the communication protocol

play09:52

is what you said the concept of shared

play09:55

security right so we wouldn't be able to

play09:58

accomplish

play10:00

a seamless the the features of the

play10:03

seamless communication protocol without

play10:04

some elements of shared security

play10:08

assumptions so basically what that means

play10:11

is all of these rollups or all of these

play10:13

rollups all settled to the same sort of

play10:16

big Global like State machine right so

play10:18

that's also where the word State comes

play10:20

from it's this idea thaty you have this

play10:22

single big state machine that's

play10:24

separated between this like networks of

play10:27

applications and that's a really

play10:29

important assumption right it's like you

play10:31

need shared security otherwise you have

play10:34

problems if someone were to run their

play10:35

own let's just say op stack roll up and

play10:37

someone want to run another optimistic

play10:40

or even ZK rollup because they both have

play10:42

different security assumptions you need

play10:44

that kind of 7-Day withdrawal period

play10:46

from the optimistic rollup or you need

play10:49

some kind of third party bridging

play10:50

provider to take on the risk and the

play10:52

liquidity providers take on the risk of

play10:54

moving assets between the rollups right

play10:57

and so all of that is solved once you

play10:59

have the shared security assumption

play11:00

right so all the rollups share the same

play11:02

security risks which means that okay

play11:04

like if one rollup has an issue you

play11:07

simply so we used this new thing called

play11:11

zero knowledge fraud proofs I think we

play11:12

were one of if not the first to to push

play11:15

this out but basically it's the idea of

play11:17

instead of doing validity proofs on

play11:19

every single transaction like current

play11:21

zero-knowledge rollups do and instead of

play11:23

doing like interactive fraud proofs like

play11:25

the current optimistic rollups tried to

play11:28

do and I say tried do because I don't

play11:30

think anyone has like public

play11:31

implementations of interactive fraud

play11:33

proofs yet but basically the ZK fraud

play11:35

proofs is very elegant way of basically

play11:38

only running the validity proof when

play11:40

there is fault so that allows us to do

play11:43

the proof in a single block and it's

play11:45

also a lot more straightforward than

play11:48

thinking around all the game theory

play11:50

economics of the interactive fra proof

play11:54

so anyhow so basically yeah like if one

play11:57

rollup has an issue any valid leader can

play11:59

replay the transaction history from that

play12:02

they get from the da layer identify on

play12:04

which R up that issue occurred and at

play12:07

which transaction then submit that ZK

play12:09

fraud proof basically to be like hey

play12:10

like there's an error here and if there

play12:12

is an error then we would go into sort

play12:14

of the state roll back process now to

play12:17

get back into the sort of the

play12:18

communication layer so now that we have

play12:20

this sort of shared security

play12:22

assumption we can now seamlessly move

play12:26

assets in between rollups and

play12:27

applications without needing to about

play12:29

those like withdrawal periods and those

play12:31

bridges right and the key towards doing

play12:33

that is to have some kind of

play12:35

standardized way of passing and

play12:38

pipelining

play12:39

messages from one application slash

play12:43

rollup to another and basically that's

play12:45

what we're building in house so think of

play12:49

IBC on Cosmos but then like just remove

play12:52

all of the consensus components right so

play12:56

basically have you have a very

play12:57

straightforward problem of okay like I

play13:00

have one roll up I have another roll up

play13:02

like how do I create a standardized

play13:03

format of sending one message that can

play13:07

be any sort of arbitrary bites to

play13:09

another rollup right so then that if you

play13:12

like distill that problem to to the just

play13:14

that it's like a very similar just like

play13:16

Web Two communication problem and at

play13:19

that point once you have a standardized

play13:20

way of doing it it's very seamless for

play13:22

everything in your network to

play13:23

communicate with within each other like

play13:26

on a very fundamental level it's I mean

play13:28

you say that it already it pretty much

play13:30

reminds me of the early concepts of

play13:32

Cosmos and Polkadot at least from the

play13:35

idea and the approach like pretty dope

play13:38

that you manag to to to abstract away

play13:41

the the different VMS and pro provider

play13:44

somewhat like shared security shared

play13:47

liquidity layer but also have

play13:49

composability like solve the comp

play13:52

composability Problem by you already

play13:55

touched base on the whole proving

play13:57

concept which was very new to me I know

play14:00

the the XVM We Touch based on the VMS

play14:03

and what layer n enables within that

play14:06

session as well but the XVM that you

play14:09

call XVM are built in Rust and are

play14:12

enabled as I stood As I understood by

play14:14

RISC Zero and if we look this modular

play14:18

stack periodic table with the different

play14:20

rollups if we think about layer n maybe

play14:23

help us a little bit if we look at this

play14:26

modular St table how would you classify

play14:29

layer and is it a settlement rollup is

play14:31

it a sovereign rollup and then the

play14:33

second aspect or component I'd like to

play14:35

cover here is the very fraud proof ZK

play14:39

fraud proof life cycle you said because

play14:42

until now I was used to either I have a

play14:46

validity proven system or rollup or a

play14:49

fraud or I use fraud proof proofs but

play14:52

now you combine these and create some

play14:55

kind of magic with huge benefits being

play14:58

that you don't have to to prove

play15:00

everything as you mentioned maybe you

play15:02

could yeah help us understand that

play15:04

concept a little bit better and where

play15:06

you would classify layer n like how can

play15:09

we think of layer n in that regard for

play15:12

sure so in terms of the modular stack

play15:15

right we sit at the execution component

play15:18

so I like I would say that we're like an

play15:20

execution layer right so we're purely

play15:22

focused on hey how can we make how can

play15:25

we be the best execution layer in crypto

play15:27

where application developers can come

play15:29

and just purely focus on building the

play15:31

best applications right so if you think

play15:34

about it in the modu periodic table like

play15:35

you said there's we have the execution

play15:37

component which is us the security

play15:39

component is on ethereum and then the da

play15:42

layer we're currently partnered with

play15:43

EigenDA to use in this sort of Validium

play15:46

model right so that's like the the

play15:50

general component with regards to the

play15:54

fraud proving question so that's like

play15:56

the really cool thing about what we're

play15:57

doing right we don't really fit in the

play16:00

camp of like fully ZK rollups we don't

play16:03

really fit in the camp of the typical

play16:05

optimistic rollup we combine The Best of

play16:07

Both Worlds to create this new category

play16:10

right and I wouldn't we're more akin to

play16:14

optimistic rups in the sense that we

play16:16

still run execution optimistically but

play16:19

when we talk about proving it's really

play16:21

this sort of new category that we called

play16:23

the zero knowledge fraud proofs right

play16:24

and then the basic idea is really hey

play16:29

doesn't make sense to run validity

play16:31

proofs on every single transaction if

play16:33

you're trying to optimize for

play16:34

performance because ZK proofs are

play16:38

expensive they're timec consuming to run

play16:41

and if you're trying to do anything like

play16:43

what we're doing which is like hundreds

play16:44

of thousands of transactions per second

play16:46

you're not going to be able to do that

play16:47

and if you were to do that you're

play16:49

probably paying millions of dollars to

play16:51

AWS to run your proving costs right or

play16:54

something like that on the other hand

play16:56

you have the interactive fraud proof

play16:58

which

play16:59

have historically been really hard to

play17:01

implement and then we've seen that in

play17:03

practice if you look at any of the

play17:04

current optimistic rollups I don't think

play17:08

as of the date of this interview that

play17:10

any of them have public implemented

play17:15

fraud proofs right so optimistic has no

play17:18

fraud proofs so you're basically running

play17:20

on the that you trust op Labs which is

play17:23

not a bad assumption but it's it that is

play17:26

the assumption arbitrum has whitelisted fraud

play17:28

proofs right so it's not public so it's

play17:31

only like a whitelisted set of validators

play17:32

that can run so that's if you think

play17:34

about it from a game the pers or I guess

play17:36

from a theoretical perspective it has

play17:38

the same properties as not having fraud

play17:40

proofs at all because basically like

play17:41

only insiders can run fraud proofs which

play17:43

is the same thing as if the Insiders can

play17:46

like did have fraud proofs but again

play17:48

it's like not a terrible assumption if

play17:50

you trust arbitro but what I'm trying to

play17:52

get at is interactive problems are

play17:54

really complicated right because they're

play17:55

multi processes it depends on this like

play17:59

interaction game basically between

play18:01

the prover and and the verifier instead of

play18:04

doing all of that complicated stuff

play18:06

right we say hey like validity proofs

play18:09

are great but validity proofs are only

play18:11

good if you can minimize the amount of

play18:13

time that you actually use it right so

play18:15

instead of doing it at every transaction

play18:17

we'll only do it when a fraud occurs

play18:20

right so when a fraud occurs a validator

play18:22

simply needs to rerun that state

play18:25

transition on like they can run it on

play18:27

their own zkVM on their own MacBook if

play18:30

they want right Y and then basically

play18:33

through the RISC Zero zkVM they get a hash

play18:36

that they can then use to submit on to

play18:38

an onchain smart contract that verifies

play18:40

the legitimate the legitimacy of that

play18:43

hash and if it is legitimate then hey

play18:45

you've got a valid proof right so it's

play18:47

that straightforward and the cool thing

play18:49

is hey like now we can do this within a

play18:51

single block as opposed to a multiblock

play18:53

process and did theoretical interactive

play18:55

fraud proof which also means that you

play18:57

can theoretically reduce your minimum

play19:00

withdrawal period by like a significant

play19:02

amount right so like the whole problem

play19:04

with the 7-Day withdrawal period 7day is

play19:06

arbitrary but it's also based on the

play19:08

idea that like hey if it takes like x

play19:11

amount of blocks to prove fraud and each

play19:14

block has some probability of being

play19:17

censored right what's the probability of

play19:20

every block being censored like multiple

play19:22

times and how much time do we need to

play19:24

ensure that all of these blocks go in

play19:26

right now if you only have a single

play19:28

block that you need to worry about the

play19:29

risk decreases significantly right and

play19:32

so that's why we say hey theoretically

play19:34

like you can decrease the withdrawal

play19:35

period Times by quite a significant

play19:37

amount while decreasing the actual risk

play19:40

of censorship itself so hopefully that

play19:43

gave you like a an overview of of where

play19:45

we land in terms of the security

play19:47

Spectrum

play19:48

there yeah yeah it definitely did but it

play19:51

also like to be frank it sounds like

play19:54

almost too good to be true that you get

play19:57

like this massive

play19:59

kind of bump in in withdrawal periods

play20:02

but also reduced risk at the same time

play20:05

and what you stated earlier I'm on the

play20:07

same page like it I I would be a little

play20:10

bit more critical and say it's even a

play20:13

problem that maybe optimism and other

play20:15

rollups

play20:17

still do not have these fraud proofs

play20:20

implemented yet I guess arbitrum is on a

play20:23

good path here and optimism all of the

play20:26

of these rollups still have training

play20:28

wheels on let's be honest and I hope it

play20:30

this will get better because we want to

play20:33

build like these sound decentralized

play20:35

permissionless immutable systems what

play20:39

now I would like to know you stated that

play20:41

you partnered with EigenDA for the data

play20:43

availability kind of aspect would you

play20:46

mind giving us some insights on why you

play20:48

chose EigenDA and not other solutions like

play20:52

Avail or Celestia and yeah you touched base

play20:56

on it but I would like to know what to

play20:59

understand it maybe a little bit better

play21:00

so Layer N is only the execution part

play21:03

EigenDA you use for data

play21:05

availability settlement is still

play21:07

happening on ethereum is that

play21:09

correct yes that's correct so basically

play21:13

Layer N is still an L2 right because

play21:15

we're at the layer two in terms of the

play21:18

hierarchy chart so like our core

play21:22

function is how do we make execution

play21:25

better right execution better in the

play21:28

sense of performance and compute as well

play21:30

as that's the Mac feature ex and then

play21:33

execution better also in the sense of

play21:36

composability right and then as any

play21:40

rollup does We Roll Up the state and

play21:42

then put it back to ethereum where it

play21:44

can be verified and challenged in case

play21:47

there are problems right and then the

play21:48

main difference is instead of Hosting

play21:51

data availability on ethereum which a

play21:53

lot of the current rollups do we

play21:56

actually decide to do it with an offchain DA

play21:58

solution like EigenDA right and the core

play22:01

unlock there is really like a math a

play22:03

math problem right it's like Eigen

play22:07

da will be able to enable up to 8 to 10

play22:14

like megabytes per second that's crazy

play22:17

storage bandwidth right yeah and the ma

play22:20

there is just hey yeah you look at

play22:22

something like Celestia it's currently

play22:23

doing 1.5 megabytes and it just if

play22:26

you're trying to reach the scill at

play22:28

which we're trying to do which is tens

play22:30

of hundreds of um tens thousands of

play22:33

transactions per second you need

play22:34

something that has high bandwidth right

play22:37

that's just a prerequisite and so it it

play22:40

really just boiled down to that I think

play22:42

the other really cool thing about EigenDA

play22:45

is the whole restaking component to it

play22:48

right one of the core things that people

play22:50

are worried about with these offchain da

play22:53

Solutions is like hey like now we're

play22:54

losing some security there right which

play22:57

is true if you're not storing your data

play22:59

on ethereum you're inherently going to

play23:03

inherit like a smaller a smaller subset

play23:07

of the security that you would assume on

play23:09

ether let's just assume ethereum is like

play23:11

the most secure decentralized system

play23:13

right like any other system is probably

play23:15

like a subset of that right and then

play23:17

even EigenDA which is like restakers right

play23:20

it's still going to be a subset right

play23:21

unless like the whole set of Ethereum

play23:24

validators are restaked into EigenDA right let's

play23:27

just assume it's like a sub said so yes

play23:29

that is like the tradeoff that you would

play23:31

be taking right but from our perspective

play23:34

it is a strong enough tradeoff and a

play23:36

beneficial enough tradeoff that it makes

play23:38

a lot of sense right if you're getting

play23:40

like a hundred times better performance

play23:44

but just a minuscule fraction

play23:46

amount less security and even then we

play23:49

can argue as in really less security and

play23:51

I would say not really I think the

play23:54

tradeoff is very valid and I think we

play23:56

can play this out and see what types of

play23:57

products people will use more often but

play23:59

I think until we're able to reach

play24:01

performance parity like we still have a

play24:04

long ways to go from a product market fit

play24:06

perspective if we're really if we don't

play24:08

reach that performance parity threshold

play24:10

quicker

play24:11

so that was I lost track of the original

play24:14

question oh I think you were asking

play24:15

about the EigenDA stuff yeah so basically

play24:19

basically it just came down to a math

play24:20

problem right okay like we just want

play24:22

wherever has the most amount of

play24:23

bandwidth yeah and then two where had

play24:26

where it has the where it's the most

play24:28

secure as well so the nice thing about EigenDA

play24:30

is you still have some some strong

play24:32

subset of each security as opposed to

play24:35

some of these other systems who need to

play24:37

bootstrap their own set of validator

play24:39

nodes which could be a bit less secure

play24:41

right yeah great take thank you because

play24:44

you touch based on tradeoffs what is

play24:47

your opinion on the your approach now is

play24:50

using these like or rather novel I'm not

play24:53

sure if they are battle tested yet novel

play24:56

ZK fraud proofs what is a potential

play24:58

trade of you using that kind of proving

play25:01

system in your opinion or aren't there

play25:05

any yeah for sure the main tradeoff

play25:08

would be we need to

play25:11

build the VMS and the rollup component

play25:17

such that it's it is it is basically it

play25:21

that what's the word I'm blanking on it

play25:24

basically works with since we're using

play25:27

RISC Zero right which requires the RISC-V

play25:29

language set basically we need to

play25:32

build within those constraints right so

play25:34

we wouldn't be able to out of the box

play25:36

build something in like JavaScript or

play25:38

something right so that's a major

play25:39

constraint because it means that hey

play25:42

like the initial set of rollups and VMS

play25:44

we're going to have to build a rust with

play25:46

that being said also huge rust Maxi and

play25:49

I think the world should move towards

play25:50

rust but that's a separate topic with

play25:52

that being said we are thinking of of

play25:55

implementing WASM as well at some point

play25:58

which will allow us to support a much

play26:02

wider breadth of program programming

play26:04

languages for people who would who might

play26:06

want to build applications in different

play26:08

languages and then the other note I'll

play26:11

make because this is a common point of

play26:13

confusion when people ask me hey like

play26:15

what programming languages is it is

play26:18

layer and in like we build everything in

play26:20

Rust right but that doesn't necessarily

play26:23

mean that the application developers

play26:25

need to build in Rust so depending on

play26:28

what type of application the application

play26:30

developers build wants to build it could

play26:32

be different right so we will have our

play26:35

own evm rollup as well that's going to

play26:38

be built on Reth which is the Rust

play26:40

implementation right but if you're

play26:42

deploying a smart contract on that evm

play26:45

rollup that's in solidity right so all

play26:47

of your pre-existing code all of your

play26:50

pre-existing sort of solidity and smart

play26:52

contract knowledge you can still use

play26:54

that right with that being said if

play26:56

you're trying to build a an XVM and your

play26:59

own your own rollup that's where you'll

play27:02

need Rust and at the current stage we're

play27:05

willing to partner really deeply with a

play27:07

lot of the people that we work with in

play27:09

that process to get them started just

play27:11

because we know that okayy this is a

play27:13

completely new way of Designing things

play27:15

right one that maybe most may not be

play27:17

familiar with and so we're happy to

play27:19

spend that time educating the initial

play27:21

core set of developers and helping

play27:23

them out a lot in that process nice so

play27:27

as I understand while you are

play27:28

constrained in the programming language

play27:30

for the XVM which is which kind of build

play27:34

that network of different virtual

play27:36

machines right you on the other hand of

play27:40

being constrained by the programming

play27:42

language you get a huge design space By

play27:45

by allowing crazy optimizations of these

play27:49

generalized but also application

play27:50

specific VMS and on the other hand you

play27:53

still provide As I understood with the

play27:55

NM still provide a VM environment for

play27:59

developers who want to build in solidity

play28:02

which is super nice because doing that

play28:04

you at one hand you wipe out the

play28:06

boundaries of operating within the evm

play28:09

but you also allow developers to build

play28:12

within solidity because you also have

play28:14

the NM so with these X VMS what design

play28:19

freedom and benefits do you achieve

play28:21

exactly maybe we you can give us some

play28:24

insights on that and what are the perks

play28:26

what are the perks between the

play28:28

because you also allow for generalized

play28:30

VMS but also application specific ones

play28:33

maybe you have some cool examples to

play28:35

share I think like the

play28:38

easiest

play28:40

mental model that I like to use to think

play28:43

of XVM and more generalizable smart

play28:48

contract VMS like the evm VM Etc is okay

play28:51

like imagine you're trying to build some

play28:54

kind of you're trying to build a car

play28:58

right the evm is basically a scenario

play29:02

where like someone comes and gives you

play29:04

all these generalizable Lego blocks

play29:06

right you can use these Lego blocks to

play29:08

build a house you can use these Lego

play29:09

blocks to build a table you can use

play29:12

these Lego blocks to build a car right

play29:14

because they're generalizable right

play29:16

these are Lego blocks you can use it to

play29:17

build anything right they might not be

play29:19

the best right if you're trying if

play29:21

you're trying to build a really solid

play29:23

house or a really fast car like it might

play29:25

not be the best right but it it does the

play29:27

job

play29:28

right and it's generalizable and that's

play29:30

why people like it and it's

play29:31

straightforward to use the XVM is

play29:34

basically like we don't have any Lego

play29:37

blocks for you but we have all of these

play29:39

primary resources right such as wood Etc

play29:42

you go build like whatever car you want

play29:44

from the ground up in the way that you

play29:46

want to right and that really is the

play29:48

core thing right like people typically

play29:50

think about scaling as this like as only

play29:53

two things right like throughput and

play29:55

latency right but people often forget

play29:57

the third thing which is compute right I

play29:59

mean comp is really important when

play30:01

you're trying to build anything more

play30:03

complex right and that's really

play30:04

constrained on what the sort of the

play30:07

virtual machine environment allows you

play30:08

to do right so when you have very preset

play30:11

ways of doing things on the EVN that

play30:13

makes it really hard to build things

play30:14

that are very computer intensive so

play30:17

imagine basically the XVM as this place

play30:21

right where you can pretty much just

play30:23

build an application like without

play30:26

thinking about these like blockchain

play30:28

constraint right like just build the

play30:29

application as you would build it if you

play30:31

were building it on web to right and one

play30:34

of the examples of this is actually the

play30:36

Nord VM that we built ourselves in house

play30:39

which is basically the first order book

play30:42

specific and exchange optimized virtual

play30:44

machine and really what it does is it

play30:47

only does one thing right and it does

play30:49

that one thing really well which is

play30:51

basically settle tens of thousands of

play30:53

Trades per second at something else I

play30:55

can lay and seas on this like massive

play30:57

like order book right and and basically

play31:00

the reason it's able to do that is

play31:02

because it was purposely built for that

play31:03

so it was actually modeled off of LMAX

play31:06

digital which is I think one of if not

play31:09

the fastest current like institutional

play31:11

centralized exchange and we're able to

play31:14

do that because we're not building

play31:16

within the constraints of the evm and

play31:18

you get to just build whatever you want

play31:20

and so that's like uh that's basically

play31:23

like how I like to think about the

play31:25

difference between xvn and and a more

play31:29

generalizable virtual machine and I'll

play31:31

also think and the other thing I'll not

play31:33

about the XVM is like the XDM is also

play31:36

not trying to be like a generalizable

play31:38

platform right so you don't need to

play31:39

think about a lot of the problems and

play31:42

inherent challenges that comes to

play31:44

building a generalizable platform so you

play31:46

can really just be super application

play31:48

specific focused and not worry about a

play31:51

lot of the other constraints that the

play31:52

EDM needs to worry about right like how

play31:55

to create basically generalized way of

play31:59

doing everything you don't need to worry

play32:00

about any of that basically yeah did

play32:02

that kind of answer the question yeah
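To make the "one thing done really well" idea concrete, here is a minimal, self-contained Rust sketch of the kind of price-time-priority matching loop an exchange-optimized VM can run natively, without a general-purpose interpreter, gas metering, or EVM storage layouts in the way. This is only an illustration of the technique David describes, not Layer N's actual Nord implementation; every type and function name is hypothetical.

```rust
use std::collections::{BTreeMap, VecDeque};

// Hypothetical types for illustration; not Layer N's real Nord data structures.
#[derive(Debug, Clone)]
struct Order {
    id: u64,
    qty: u64, // remaining quantity, in base-asset units
}

/// A minimal price-time-priority book for one side (resting asks).
/// Prices are integer ticks; each level is a FIFO queue of resting orders.
#[derive(Default)]
struct AskBook {
    levels: BTreeMap<u64, VecDeque<Order>>, // price tick -> resting orders
}

impl AskBook {
    fn insert(&mut self, price: u64, order: Order) {
        self.levels.entry(price).or_default().push_back(order);
    }

    /// Match an incoming buy against resting asks, best price first.
    /// Returns the fills as (resting order id, price, filled qty) plus any unfilled quantity.
    fn match_buy(&mut self, limit_price: u64, mut qty: u64) -> (Vec<(u64, u64, u64)>, u64) {
        let mut fills = Vec::new();
        // Collect the crossable price levels (lowest ask up to the buyer's limit).
        let crossable: Vec<u64> = self.levels.range(..=limit_price).map(|(p, _)| *p).collect();
        for price in crossable {
            if qty == 0 {
                break;
            }
            let queue = self.levels.get_mut(&price).expect("level exists");
            while qty > 0 {
                let Some(front) = queue.front_mut() else { break };
                let fill = qty.min(front.qty);
                front.qty -= fill;
                qty -= fill;
                fills.push((front.id, price, fill));
                if front.qty == 0 {
                    queue.pop_front();
                }
            }
            if queue.is_empty() {
                self.levels.remove(&price);
            }
        }
        (fills, qty)
    }
}

fn main() {
    let mut asks = AskBook::default();
    asks.insert(100, Order { id: 1, qty: 5 });
    asks.insert(101, Order { id: 2, qty: 10 });

    // Incoming buy: 12 units, willing to pay up to tick 101.
    let (fills, unfilled) = asks.match_buy(101, 12);
    println!("fills = {fills:?}, unfilled = {unfilled}");
    // fills = [(1, 100, 5), (2, 101, 7)], unfilled = 0
}
```

The point of the sketch is the shape of the work: tight loops over in-memory data structures, which is the kind of workload a purpose-built VM can push to tens of thousands of trades per second.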

play32:04

Yeah, fully, fully satisfied here, David. It's very cool — I imagine as a developer it's awesome to have a blank canvas and build whatever you want, because you can tailor your solutions towards your application. That's really nice. By coincidence, in the previous episode of Uncut Gems we had Keone from Monad, who shares a similar background stemming from Solana slash high-frequency trading, and as you probably know better than me, parallelization is a very powerful tool to scale integrated chains, and there are some more areas worth tweaking to boost performance, such as the database, consensus, or pipelining, like Monad is doing. To me it's very interesting to observe the developments in the industry from the sideline and to see that there are these teams choosing the modular direction, and then there are other teams fully committed to the integrated approach — and clearly Layer N went with the modular narrative. So would you mind providing some insights into your decision-making process and why you eventually went with modular instead of integrated? Please guide us through some pros and cons of these two very exciting developments.

play33:29

The best way to think about our decision-making as a company and as a team is that we have one mission in mind, and that is: you have the current surface area of what's possible to build on chain — how do we take that and increase it by 10x? How do we take the current surface area of what's possible to build on chain and vastly increase it? We don't just want to increase it by 1.2x, 1.3x, 2x; we want to increase it by 10x. And to us, the key towards unlocking that really just comes down to compute. If you enable more compute for people, that's going to allow people to do so much more, and to do that we need to think in a completely different paradigm and a completely different way about building applications on chain. The truth is, I think what a lot of the other teams are doing in terms of improving the EVM is super core and super important, because there's a whole bunch of people who are still building on the EVM. The problem is, that's not going to take us to a 10x in terms of compute. Really, what we're doing there is updating an existing model, but we're not thinking outside the box in terms of how we can create an entirely new model that's just so much better. It's the same thing as going from the horse to the race car. There's this quote that I think Ford said at some point in time — maybe it was someone else, but I think it was Ford — something along the lines of: ask people what they want today, and they'll tell you that they want better horses, or faster horses. But really what they want is this new thing called the car. That's essentially how we think about this dichotomy.

play35:24

Okay, nice. So your take is that the approach you are taking is the superior one, with fewer constraints — basically the next chapter, not only an iteration but the next chapter. That's cool. Let me try to consolidate a little bit of what we talked about until now. You combine EigenDA, which gives you an insane level of throughput, with the StateNet, which is a network of custom VMs, and that StateNet is powered by a shared communication and liquidity layer. Just stating that is already very exciting, because we obtain this super high-performance L2 with, as I understood, very low latencies. And I read that in January you announced Layer N's Nord engine, which is also a custom Rust VM, and as I understood you managed to achieve around 100k TPS on a closed testnet. I know it's a testnet, but I still think it's beyond remarkable to achieve 100k TPS, and now I wonder whether these TPS were full-fledged transactions, including smart contract logic, or vanilla value transfers. And then maybe you can provide some insights on this Nord engine that I just mentioned, because it's a new trading-optimized rollup engine and maybe one of the first examples of how highly optimizable rollups will be on Layer N.

play37:02

For sure. So the Nord engine, or the Nord VM as we like to call it now — we went through different iterations of the name, but they all mean the same thing. Basically it's this custom execution environment, custom VM, custom engine that just does one thing really well, which is the order book use case. And as you brought up, we ran this closed test with the Eigen team — we were actually all live on the call at that time — and as we were running it, everyone got super excited when the numbers came out. It wasn't just empty messages, it was trades — real trades on the order book — and I think we went up to 120-something thousand; I forget the exact number, but it was 120-and-something thousand. And it's a very meaningful number, because when we think about the history of scaling on Ethereum, we've always just dealt with, hey, how do we go from 10 to 100, or how do we go from 100 to 700? I think even with danksharding the math comes out to around 800 transactions per second across all rollups, or something like that; I don't remember the exact math, but some low number. With full danksharding that might be different, but with proto-danksharding I think it's still just 800 or so. Basically my point is, we're in that ballpark — we just haven't dreamt bigger. And so I think this is very meaningful, because even Solana does, I think, 6,000 TPS or something like that, and that 6,000 accounts for the vote transactions; without the vote transactions — the actual, real TPS — I would need to check what it is, but I think it was like 500 or a thousand or something like that.
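For context, one hedged back-of-envelope that roughly reproduces the ~800 TPS figure David cites: assume the proto-danksharding target of 3 blobs per block, 128 KiB per blob, a 12-second slot, and roughly 40 bytes of blob data per rollup transaction. All of these are assumptions made here for illustration, not numbers from the interview:

\[
\frac{3 \times 131{,}072\ \text{bytes}}{12\ \text{s}} \approx 32{,}768\ \text{bytes/s},
\qquad
\frac{32{,}768\ \text{bytes/s}}{\approx 40\ \text{bytes per rollup tx}} \approx 800\ \text{TPS across all rollups.}
\]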

play39:07

Yeah, it's been a while since I checked those stats, but it's definitely not 10,000, not even 100,000 — we are talking about an entirely different ballpark. And the elegant thing to me is that while you innovate so much on this VM layer, but also on the liquidity layer and the composability aspect, you still incorporate what is very important to me: a good amount of security and decentralization from Ethereum. That is pretty appealing, to me at least. As we inch closer to the end of this nice conversation, David, I would like to get your take on the following: as I see it, Layer N kind of builds what Arbitrum Orbit and Stylus set out to do, what Optimism sets out to do with the Superchain, and what zkSync does with Hyperchains, but with a few very interesting and very elegant implementation differences. So how do you see yourself compared to these — I will not say competitors, but other projects that aim for a similar endgame? And how do you compare when it comes to decentralization, when it comes to security, when it comes to training wheels, when it comes to the ossification aspect — ossification for rollups — because just now, with the Dencun upgrade, it became clear that if a rollup were already ossified and not upgradeable, it could not implement blobs. Sorry, maybe that's a very comprehensive question, but it would be nice to get your take on the competitive landscape.

play41:01

Yeah, for sure. I think everyone has this shared vision that eventually things are going to move onto their own rollups — it just makes a lot of sense; the congestion you face from a single shared model doesn't make a lot of sense. I think a lot of it ultimately comes down to implementation details, because a lot of these other projects started with a certain vision in mind, and it's a lot harder to easily and quickly progress towards this very scalable model that I've described under the StateNet. If you look at the Superchain stuff — super cool, it's great what they're doing — the problem is there's a lack of, I guess, implementation detail. Everything that we've talked about in terms of the communication protocol and the shared liquidity system — those things are lacking. And the reality is, it's just a really hard problem: when you have all of these independent rollups being deployed and there isn't a pre-existing network that brings them all together, you end up in a situation where you already have a bunch of rollups and you're trying to figure out afterwards what pipes you could put between them. That's really hard, because they all have their own security and they all want to do things their own way. Whereas I think where we come into play is really in making the implementation flawless, making it right from the start, and also making it modular enough that it's very future-proof. The concept of ossification that you brought up is really interesting, but you also don't want to do it in such a way that, hey, two years from now some new research unlocks some new technology and you aren't able to implement it. I think that's really important — that's part of the technology cycle. And so a lot of our philosophy is based just on that: what is the most practical engineering solution towards achieving trustlessness? That's really how we think about designing solutions, rather than taking a more philosophical approach of saying, okay, there's this one concept that I really like and I'm going to try to build everything around it. Instead, we like to think about what the problems are and how we build the best solutions to those problems. So hopefully that answered the question, but basically I think it really just boils down to implementation and how we think about piecing all these pieces together in a way that actually works really well.
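As a rough illustration of what "pipes that bring the rollups together from the start" could look like, here is a minimal Rust sketch of a standardized cross-rollup message pipeline: every VM sends messages in one shared format through one shared router, and each destination drains its inbox in order. This is only a toy model of the idea, with hypothetical names; it is not Layer N's actual IVC/StateNet implementation.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical, simplified message format shared by every rollup on the network.
#[derive(Debug, Clone)]
struct Message {
    from: u32,        // source rollup id
    to: u32,          // destination rollup id
    nonce: u64,       // per-sender sequence number, for ordering and replay protection
    payload: Vec<u8>, // application-defined body, e.g. an encoded asset credit
}

/// Shared router: one ordered inbox per destination rollup.
#[derive(Default)]
struct Router {
    inboxes: HashMap<u32, VecDeque<Message>>,
    next_nonce: HashMap<u32, u64>,
}

impl Router {
    /// Enqueue a message from one rollup to another, assigning the sender's next nonce.
    fn send(&mut self, from: u32, to: u32, payload: Vec<u8>) -> u64 {
        let counter = self.next_nonce.entry(from).or_insert(0);
        let nonce = *counter;
        *counter += 1;
        self.inboxes
            .entry(to)
            .or_default()
            .push_back(Message { from, to, nonce, payload });
        nonce
    }

    /// Pop the next message destined for `rollup`, if any, in arrival order.
    fn deliver(&mut self, rollup: u32) -> Option<Message> {
        self.inboxes.get_mut(&rollup)?.pop_front()
    }
}

fn main() {
    let mut router = Router::default();

    // Rollup 1 (say, an order-book VM) credits 250 units of some asset to rollup 2.
    router.send(1, 2, 250u64.to_le_bytes().to_vec());

    // Rollup 2 drains its inbox during its next block and applies the credit.
    while let Some(msg) = router.deliver(2) {
        let amount = u64::from_le_bytes(msg.payload[..8].try_into().unwrap());
        println!("rollup 2 credited {amount} units from rollup {} (nonce {})", msg.from, msg.nonce);
    }
}
```

The design point is that the shared format and shared ordering exist before any application is deployed, which is exactly what is hard to retrofit onto independently launched rollups afterwards.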

play43:43

Yeah, it certainly does — thanks for the explanation. And now that we are all hyped up, including the audience, maybe you can share some insights on the current development stage of Layer N, and can you already share some exciting strategic partnerships with us? I saw that Sushi's Susa is building on Layer N already, so that's pretty cool, but maybe there's more to share, I don't know.

play44:10

For sure. So, number one: this coming month is going to be packed with a lot of exciting stuff, so if you're not already following us on Twitter, or in our Discord and Telegram group chats, make sure to join, because we have a lot of exciting stuff that's going to be announced. Susa is the first major announcement, and we're super, super excited about it, because it's not just any other DEX — it's the first hyper-performant order book that matches centralized exchange performance, and not only that, it also fully settles every single trade on chain. If you think of any of the current off-chain order books — Aevo, Vertex, Hyperliquid, etc. — they're fast, but the problem is they only settle matched trades on chain. So of all the orders that you place — place, cancel — the vast majority aren't actually settled, which means it's not a fully on-chain and trustless system. Whereas Susa is not only going to be the fastest, it's also going to settle everything on chain, meaning you can actually fully verify everything, right down to every single place-order and cancel-order transaction that you make, which is going to be pretty game-changing from a security perspective but also from a feature perspective. There are a few things that will be major unlocks in the DEX space, uniquely enabled by their design and their use of this unrestricted compute surface that we're giving them by letting them be their own XVM. So we're super excited about that announcement, and there are a few other announcements with major liquidity providers coming up as well. When we talk to developers — and having been a developer before as well — one of, if not the most asked question is just, hey, where is liquidity going to come from, who could I work with to get liquidity, and where are the users going to come from? That's all going to be solved: there are major partnership announcements coming up ahead, and we have huge partners to help bring in liquidity on that front. So I don't want to tease too much, but basically, lots of exciting announcements are coming up in the next few weeks — make sure to be following along to stay up to date.

play46:31

yeah very cool very exciting and happy

play46:35

to hear that such a kind of pioneering

play46:38

project is close to launching some and

play46:40

announcing some big stuff maybe last

play46:43

question be before we end this cool

play46:46

conversation outside I hope I don't

play46:48

catch you off guard but outside of leer

play46:50

n what is the most exciting project you

play46:53

are following and why oh

play46:57

Oh, that's a good question you're asking me. Yeah, no, that's a really good question — there are a few really exciting things. I think Monad, for one, has a very different approach from how we're doing things, but the sheer focus on engineering and a lot of the improvements that they're bringing to the EVM are actually really exciting, so that's the part I'm really excited about with Monad. I think the next generation of DEXes is also really exciting, because up until now — and I know there are a lot of DEXes that constantly get launched — there really hasn't been a DEX that truly rivals a centralized exchange in terms of being able to provide the same level of user experience, the same set of assets, the same set of features, and then even more than the current set of features. There really isn't anyone that even holds a candle to Coinbase, Binance, etc., so I'm really excited for someone to do that — wink — on Layer N. And the other thing I'm really excited about is actually the intersection of AI and crypto. I think it's gotten a bad rep because, as anything grows enough hype, there's always going to be a strong amount of noise around it. But if you think about actual use cases and what actual on-chain decentralized inference can enable, it gets really exciting. So one thing that we're working on behind the scenes — or not one thing, a few things: there are a few AI partners that we're working with behind the scenes to enable some actual on-chain use cases, and I don't want to spoil the announcements, but when we think about AI and the intersection of trading and gaming, that's where I see a lot of the really interesting products and applications, and there are a few partners we're working with on making that happen. And that can uniquely happen thanks to the fact that we can enable a lot more compute, allowing you to call these AI functions — right into these AI models — and you can work with a zkML team to do all the inference proving, and so on and so forth, and now suddenly you have AI running on chain, fully trustless. So that's actually something we're super excited about as well.
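As a hedged sketch of the "verify, then act" pattern David describes for on-chain AI — the model runs off-chain, a zkML prover attests that the claimed output really came from the committed model on the given input, and the chain only consumes the output after checking that proof — here is a self-contained Rust illustration. Every name here is hypothetical and the verifier is a stub; this is not a real zkML library API or Layer N's implementation.

```rust
/// A claim that a committed model, run on `input`, produced `output` (proof generated off-chain).
struct InferenceClaim {
    model_commitment: [u8; 32], // hash of the model weights the application expects
    input: Vec<u8>,
    output: Vec<u8>,
    proof: Vec<u8>, // zkML proof bytes
}

/// Abstract proof check; a real deployment would wrap an actual zkML verifier here.
trait Verifier {
    fn verify(&self, claim: &InferenceClaim) -> bool;
}

/// Toy on-chain state: a trading parameter that a proven inference may adjust.
struct Strategy {
    expected_model: [u8; 32],
    risk_limit: u64,
}

impl Strategy {
    /// Apply a proven inference result; reject anything unverified or from the wrong model.
    fn apply<V: Verifier>(&mut self, verifier: &V, claim: &InferenceClaim) -> Result<(), &'static str> {
        if claim.model_commitment != self.expected_model {
            return Err("unexpected model commitment");
        }
        if !verifier.verify(claim) {
            return Err("invalid inference proof");
        }
        // Only now trust the output: interpret it as a new risk limit.
        let bytes: [u8; 8] = claim
            .output
            .get(..8)
            .ok_or("malformed output")?
            .try_into()
            .map_err(|_| "malformed output")?;
        self.risk_limit = u64::from_le_bytes(bytes);
        Ok(())
    }
}

/// Stand-in verifier that accepts everything, purely so the example runs.
struct AcceptAll;
impl Verifier for AcceptAll {
    fn verify(&self, _claim: &InferenceClaim) -> bool {
        true
    }
}

fn main() {
    let mut strategy = Strategy { expected_model: [0u8; 32], risk_limit: 100 };
    let claim = InferenceClaim {
        model_commitment: [0u8; 32],
        input: Vec::new(),
        output: 250u64.to_le_bytes().to_vec(),
        proof: Vec::new(),
    };
    strategy.apply(&AcceptAll, &claim).unwrap();
    println!("new risk limit: {}", strategy.risk_limit); // 250
}
```

The heavy lifting — the model and the proof generation — stays off-chain; what the extra compute surface buys is the room to run the verification and act on the result inside the application itself.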

play49:36

Nice, I really love your take, because I seriously believe that after all of these years that we've now been in crypto and digital assets, it's finally time for some actual use cases. Maybe at some point we move a little bit away from that major focus on infrastructure and instead deliver some very actual use cases, as you said, with gaming or AI. And with the current bonanza around AI, it's very exciting to hear that you have some partnerships going on. I know it's hyped up, and the reason, basically for me, is that two industries that are highly speculative but don't yet have too many tangible use cases are clashing together — AI and crypto. The nice thing about that is, again, that it shows us, as with many instances, say deepfakes or CBDCs, that blockchain solutions and decentralized systems are insanely important for future use cases such as AI — being a risk mitigator, being an enabler for democratizing AI, or things like that.

play50:56

And with that, David, thank you — I really enjoyed hosting you today. I guess our discussion ranged from very complex to very digestible topics, and it was super insightful. On behalf of Bitcoin Suisse, and I think also the entire audience, thanks for sharing your valuable time with us, and best of success, obviously, with the challenges ahead and a smooth public testnet and subsequent mainnet launch.

Thank you so much, Dominic, this was a lot of fun, and I hope I didn't bore your audience too much with my ramblings.

Not at all. And finally, to our viewers: if you're interested in learning more about Layer N, be sure to visit their social streams, follow them on Twitter, and read their blog posts so you don't miss the announcements. I will provide links in the description below, and until next time — thanks for watching.
