Layer N: Hyper Performant and Hyper Composable Execution. An Interview with Co-Founder David Cao
Summary
TLDR: In this engaging discussion, David Cao, co-founder of Layer N, shares insights on their cutting-edge Layer 2 blockchain solution designed to tackle scalability issues in the crypto industry. Cao discusses the evolution of Layer N, its unique features like the StateNet communication model, and the potential of the AI and crypto intersection. Exciting partnerships and the future of Layer N are also highlighted, emphasizing the project's commitment to enhancing on-chain capabilities and user experience.
Takeaways
- 🚀 Layer N is a hyper-performant, scalable Layer 2 blockchain solution designed to tackle scalability issues in the crypto industry.
- 🌐 David Cao, co-founder of Layer N, shares his journey from bioinformatics research at Harvard to building high-frequency trading systems on blockchain.
- 🛠️ Layer N's core innovation is a unique network of rollups with custom VMs that allows exponentially more compute for applications while retaining seamless composability.
- 🔍 The platform aims to achieve feature and performance parity with centralized systems without sacrificing the core benefits of decentralization, such as permissionlessness and censorship resistance.
- 🤝 Layer N has gained backing from renowned players in the crypto space, including Peter Thiel's Founders Fund, highlighting its potential impact on the industry.
- 🔗 The StateNet concept within Layer N enables applications and rollups to share a standardized messaging pipeline protocol, facilitating seamless and instant communication between them.
- 💡 Layer N's approach to scalability involves a modular stack, with Layer N focusing on the execution layer, Ethereum providing security, and a partnered data availability solution, EigenDA.
- 🚧 The development of Layer N is at an exciting stage, with major strategic partnerships and announcements expected in the near future, including a focus on integrating AI into on-chain applications.
- 🔄 The platform's Inter-VM Communication (IVC) protocol allows assets to move between rollups and applications without withdrawal periods or third-party bridges.
- 📈 Layer N's testnet has demonstrated over 100k TPS in a closed test environment, showcasing its potential for high-performance trading and order book applications.
- 🌟 The future of Layer N is promising, with a focus on practical engineering solutions, future-proof modularity, and a commitment to trustless, decentralized systems.
Q & A
What is Layer n and what are its core objectives?
-Layer N is a hyper-performant, scalable Layer 2 blockchain designed to tackle scalability issues in the crypto industry. Its core objectives include increasing the surface area of what is possible to build on-chain by 10x and enabling more compute for application developers, allowing the creation of complex applications without worrying about computational constraints.
How does Layer n approach the issue of scalability differently from other blockchain projects?
-Layer N approaches scalability by introducing a new model of execution layers and virtual machines (VMs). It focuses on providing unbounded compute surface area for application developers and allows the creation of custom VMs optimized for specific use cases, leading to significant improvements in performance and efficiency.
What is the significance of the partnership with EigenDA for Layer N?
-The partnership with EigenDA is significant for Layer N because it provides a solution for data availability, which is crucial for the functioning of Layer N's rollups. EigenDA enables the high bandwidth and storage that are essential for handling the large volume of transactions Layer N aims to process.
Can you explain the concept of zero-knowledge fraud proofs as mentioned in the script?
-Zero-knowledge fraud proofs are a mechanism used by Layer N to ensure the security and integrity of transactions. Instead of running validity proofs on every single transaction, which is expensive and time-consuming, zero-knowledge fraud proofs only require a validity check when a fault is detected. This reduces the cost and time required for validation while maintaining a high level of security.
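The fault-driven flow described above can be sketched in Rust (the language Layer N reportedly builds in). This is a toy model, not Layer N's code: the types, the stand-in hash, and the `check_batch` name are all invented for illustration, and the actual zkVM proving step is reduced to a comment.

```rust
// Toy model of a zero-knowledge fraud proof lifecycle. All names here
// (StateRoot, replay, Verdict, check_batch) are hypothetical.

type StateRoot = u64; // stand-in for a real 32-byte state commitment

// Deterministically fold a transaction batch into a state root (toy hash).
fn replay(prev: StateRoot, txs: &[u64]) -> StateRoot {
    txs.iter()
        .fold(prev, |root, tx| root.wrapping_mul(31).wrapping_add(*tx))
}

enum Verdict {
    Honest,
    // Only in this branch would a real system invoke the zkVM to
    // produce a succinct proof of the correct state transition.
    Fraud { expected: StateRoot, claimed: StateRoot },
}

// Optimistic check: re-execute from data pulled off the DA layer,
// compare against the root the sequencer posted, and escalate to
// proving only when the two disagree.
fn check_batch(prev: StateRoot, txs: &[u64], claimed: StateRoot) -> Verdict {
    let expected = replay(prev, txs);
    if expected == claimed {
        Verdict::Honest
    } else {
        Verdict::Fraud { expected, claimed }
    }
}
```

The key property the interview emphasizes is visible here: the expensive proof path is only ever reached inside the `Fraud` branch, so honest batches cost nothing beyond re-execution.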
What are the benefits of using custom VMs in Layer N's architecture?
-Custom VMs in Layer N's architecture allow the creation of application-specific execution environments that are highly optimized for their intended use cases. This results in better performance, more efficient use of resources, and the ability to handle complex computations that general-purpose VMs may struggle with.
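As a rough illustration of what an application-specific execution environment buys you, here is a hypothetical Rust sketch; the `Vm` trait and the order-book example are invented for this discussion and are not Layer N's actual interface. The point is that an app-specific VM can implement its hot path (here, sorted insertion into a book) as native code rather than metered, interpreted bytecode.

```rust
// Hypothetical minimal VM interface; not Layer N's actual API.
trait Vm {
    type Tx;
    fn execute(&mut self, tx: Self::Tx);
}

// An app-specific VM bakes its domain logic straight into native code:
// here, a price-time order book keeping bids sorted descending.
#[derive(Default)]
struct OrderBookVm {
    bids: Vec<u64>, // resting bid prices, highest first
}

enum OrderTx {
    PlaceBid(u64),
    CancelBest,
}

impl Vm for OrderBookVm {
    type Tx = OrderTx;
    fn execute(&mut self, tx: OrderTx) {
        match tx {
            OrderTx::PlaceBid(px) => {
                // binary search keeps inserts O(log n) rather than
                // re-sorting the whole book on every order
                let idx = self.bids.partition_point(|&b| b > px);
                self.bids.insert(idx, px);
            }
            OrderTx::CancelBest => {
                if !self.bids.is_empty() {
                    self.bids.remove(0);
                }
            }
        }
    }
}
```

A general-purpose VM would pay gas-metering and interpretation overhead on every comparison; an environment purpose-built for one application does not.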
How does Layer N plan to address the challenge of liquidity fragmentation across different rollups?
-Layer N addresses liquidity fragmentation through a shared communication protocol known as the Inter-VM Communication protocol. This protocol allows different rollups and applications within the Layer N ecosystem to communicate and share liquidity seamlessly, eliminating the need for third-party bridges and reducing complexity for users and developers.
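To make the idea of a "standardized messaging pipeline" concrete, here is a minimal Rust sketch of what such a message envelope could look like. The field names and the length-prefixed little-endian encoding are assumptions for illustration, not the actual IVC wire format.

```rust
// Hypothetical inter-rollup message envelope; the real IVC format
// is not described in detail in the interview.
use std::convert::TryInto;

#[derive(Debug, Clone, PartialEq)]
struct IvcMessage {
    src_vm: u32,      // rollup/VM the message originates from
    dst_vm: u32,      // rollup/VM it is addressed to
    nonce: u64,       // ordering within the shared state machine
    payload: Vec<u8>, // arbitrary bytes: a token transfer, a call, etc.
}

impl IvcMessage {
    // Length-prefixed little-endian encoding so any VM can parse it.
    fn encode(&self) -> Vec<u8> {
        let mut out = Vec::new();
        out.extend(self.src_vm.to_le_bytes());
        out.extend(self.dst_vm.to_le_bytes());
        out.extend(self.nonce.to_le_bytes());
        out.extend((self.payload.len() as u32).to_le_bytes());
        out.extend(&self.payload);
        out
    }

    fn decode(buf: &[u8]) -> Option<IvcMessage> {
        if buf.len() < 20 {
            return None;
        }
        let src_vm = u32::from_le_bytes(buf[0..4].try_into().ok()?);
        let dst_vm = u32::from_le_bytes(buf[4..8].try_into().ok()?);
        let nonce = u64::from_le_bytes(buf[8..16].try_into().ok()?);
        let len = u32::from_le_bytes(buf[16..20].try_into().ok()?) as usize;
        if buf.len() != 20 + len {
            return None;
        }
        Some(IvcMessage { src_vm, dst_vm, nonce, payload: buf[20..].to_vec() })
    }
}
```

Because every rollup settles to the same state machine, delivering such a message is an ordering problem rather than a trust problem, which is what removes the need for third-party bridges.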
What is the current development stage of Layer N and what can we expect in the near future?
-Layer N is at an advanced stage of development, with a public testnet and a subsequent mainnet launch planned. In the near future, Layer N will announce partnerships with major liquidity providers and is working on integrating AI use cases into its platform, aiming to provide tangible applications beyond infrastructure.
What are the security considerations for Layer N's architecture?
-Security is a key consideration in Layer N's architecture. Although an off-chain data availability solution means inheriting only a subset of Ethereum's security, Layer N maintains a strong security posture through its use of zero-knowledge fraud proofs and its partnership with EigenDA. This keeps the system decentralized and trustless while achieving high performance.
How does Layer N's approach compare with other Layer 2 solutions like Optimism and Arbitrum?
-While Layer N shares the vision of decentralized applications with other Layer 2 solutions, it differentiates itself through its focus on modularity, custom VMs, and a shared communication protocol. Layer N aims to provide a seamless, high-performance environment for developers and users, with a future-proof design that can adapt to new technologies and research findings.
What are the key features of the Nord VM developed by Layer N?
-The Nord VM is a trading- and order-book-specific virtual machine developed by Layer N. It is designed to handle tens of thousands of trades per second, offering performance on par with centralized exchanges but with the added benefits of on-chain settlement and trustlessness.
What are some of the strategic partnerships Layer N has announced or is planning to announce?
-Layer N has announced a partnership with SushiSwap for the development of a hyper-performant order book. Additionally, they are planning to announce partnerships with major liquidity providers, which will help bring more users and assets to the platform.
Outlines
🎶 Introduction to Layer N and Guest
The paragraph introduces the discussion with David Cao, co-founder of Layer N, a high-performance and scalable Layer 2 blockchain designed to address scalability issues in the crypto industry. The host expresses excitement for the conversation, highlighting the importance of Layer N's backing by prominent figures in the crypto space. David shares his background, from studying at Harvard to his involvement in bioinformatics research, and his eventual entry into the crypto world through building on-chain order books. The discussion sets the stage for exploring the challenges and solutions in building high-performance blockchain applications.
🚀 Addressing Scalability and Liquidity in Blockchain
In this segment, the conversation delves into the core challenges of scalability and liquidity in blockchain technology. David explains how traditional blockchains limit complex application development due to computational constraints. He discusses Layer N's approach to removing these constraints by allowing developers to build their own execution environments, or virtual machines, enabling the creation of complex applications. The discussion also touches on the importance of a shared communication protocol for seamless interaction between different rollups and applications, emphasizing the benefits of Layer N's StateNet concept and Inter-VM Communication protocol.
🔒 Shared Security and the Zero-Knowledge Fraud Proof
This paragraph focuses on the concept of shared security and the zero-knowledge fraud proof mechanism introduced by Layer N. David elaborates on how shared security assumptions among rollups enable seamless asset movement and communication between applications. He introduces the zero-knowledge fraud proof as a way to handle disputes efficiently without running validity proofs for every transaction. This mechanism allows for a single-block proof process, potentially reducing withdrawal periods and enhancing the overall security model of Layer N.
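The single-block claim rests on simple censorship arithmetic that David walks through later in the transcript: if each L1 block is censored independently with probability p, a challenge that fits in one block fails only when every block in an n-block window is censored, which happens with probability p^n. A minimal Rust sketch of that back-of-the-envelope model (the independence assumption and all parameters are illustrative, not Layer N's actual figures):

```rust
// Back-of-the-envelope censorship model: each block is censored
// independently with probability p, so a single-block fraud proof
// fails only if ALL n blocks in the challenge window are censored.

fn all_censored_prob(p: f64, window: u32) -> f64 {
    p.powi(window as i32)
}

// Smallest window keeping the failure probability below `target`:
// solve p^n <= target  =>  n >= ln(target) / ln(p).
fn min_window(p: f64, target: f64) -> u32 {
    (target.ln() / p.ln()).ceil() as u32
}
```

For example, with a (hypothetical) 5% per-block censorship probability, a window of only seven blocks already pushes the failure probability below one in a billion, which is the intuition behind shrinking the classic 7-day withdrawal period.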
🧩 Positioning Layer N in the Blockchain Ecosystem
The discussion then addresses how Layer N fits within the broader blockchain ecosystem. David explains that Layer N operates as an execution layer, focusing on optimizing performance and compute for application developers. He contrasts Layer N with other rollup types, highlighting its unique combination of optimistic execution and zero-knowledge fraud proofs for proving. The conversation also touches on the partnership with EigenDA for data availability, emphasizing the importance of high bandwidth and security in the choice of solutions.
🔑 Trade-offs and the Future of Layer N
David discusses the trade-offs involved in Layer N's approach, particularly the requirement to build within the constraints of the RISC Zero language set, which favors Rust but may limit initial programming language options. He also mentions plans to support a wider range of programming languages and emphasizes Layer N's commitment to assisting developers in this new paradigm. The conversation looks forward to the potential of Layer N to enable novel applications, especially at the intersection of AI and crypto, and hints at upcoming strategic partnerships and developments.
🌐 Exciting Developments and Partnerships for Layer N
The paragraph wraps up the discussion with insights into Layer N's current development stage and strategic partnerships. David shares news of the upcoming launch of Sushi's Susa on Layer N, which promises to deliver centralized-exchange-level performance with full on-chain settlement. He also teases major announcements related to liquidity providers and expresses enthusiasm for the potential of AI applications within the Layer N ecosystem. The conversation concludes with David sharing his excitement for projects that focus on actual use cases within crypto and AI, emphasizing the importance of trustless systems and decentralized infrastructure.
Keywords
💡Layer N
💡Scalability
💡Layer 2
💡DeFi (Decentralized Finance)
💡Custom VMs (Virtual Machines)
💡Composability
💡State Machine
💡Zero Knowledge Fraud Proofs
💡Inter-VM Communication Protocol
💡Shared Security
💡High Performance Computing (HPC)
Highlights
David Cao, co-founder of Layer N, joins the conversation to discuss the project's innovative approach to tackling scalability issues in the blockchain industry.
Layer N is a hyper-performant, scalable Layer 2 blockchain designed to address the scalability challenges the industry has faced since its inception.
The project has the backing of renowned and prominent players in the crypto space, including Peter Thiel's Founders Fund, which co-led the seed round.
David Cao shares his professional background, including his experience with bioinformatics research at Harvard and his initial foray into crypto through building on-chain order books.
Layer N's creation was inspired by the realization of bottlenecks in building high-performance on-chain systems, leading to the design of a new Layer 2 model.
The tech stack is introduced, highlighting the concept of a single shared state machine powered by a network of custom and optimized rollups.
Layer N enables unbounded compute surface area, allowing developers to build complex applications without worrying about computational constraints.
The StateNet concept is explained, which allows rollups and applications to share a standardized messaging pipeline protocol for seamless and instant communication.
Layer N aims to achieve feature and performance parity with centralized systems while retaining the core benefits of composability and permissionlessness.
The Inter-VM Communication (IVC) protocol is discussed, which enables seamless asset movement and communication between rollups without withdrawal periods or third-party bridges.
Shared security is a key aspect of the communication protocol, ensuring that all rollups share the same security risks and allowing the use of zero-knowledge fraud proofs.
Layer N sits at the execution component of the modular stack, focusing on being the best execution layer in crypto for application developers.
The project partners with EigenDA for data availability, leveraging its high-bandwidth storage to support the project's ambitious throughput goals.
Layer N's approach combines the best of both ZK rollups and optimistic rollups, creating a new category of rollups with zero-knowledge fraud proofs.
The project's use of Rust for building the XVM (eXtended Virtual Machine) is discussed, highlighting the language's benefits for security and performance.
Layer N's Nord VM is introduced as an example of an application-specific VM, designed to settle tens of thousands of trades per second for order book use cases.
David Cao shares insights on the project's development stage, mentioning upcoming announcements and strategic partnerships, including Sushi's Susa building on Layer N.
The conversation concludes with David Cao discussing other exciting projects he's following, including the intersection of AI and crypto and the potential for on-chain use cases.
Transcripts
[Music]
hi this is Uncut Gems by Bitcoin Suisse
where we get into conversation with
leading crypto Founders that will enable
the next Leap Forward in our industry
and ahead of the curve uncut gems
provides you with insights on major
emerging narratives and offers valuable
one-of-a-kind discussions with
pioneering minds and today I'm very
excited to be joined by David Cao who
is co-founder of Layer N Layer N is a
hyper performant very scalable layer 2
blockchain that is designed to tackle
the scal scalability issues that our
industry deals with since the very
beginning and the scalability issues
that also hinder the widespread defi
adoption basically remarkably I want to
to state that Layer N boasts the seed
backing of very renowned and prominent
players like Peter Thiel's Founders Fund who
co-led the seed round David I'm very
stoked to have you today and thanks for
joining us thank you so much Dominic
it's a pleasure and honored to to be on
the show nice okay let's kick things off
and start maybe with your professional
background and maybe you could also
share with us the story behind The
Animated visuals on your website I think
they are pretty dope I guess like quick
personal background prior to coming into
crypto used to study at Harvard and used
to do bioinformatics research first got
into crypto with my same co-founders at
Layer N actually but we first got into
crypto building onchain order books on
the Solana blockchain actually
initially and that was our first
introduction to the whole world of high
frequency not only high frequency
trading system but high frequency sort
of systems in general right especially
ones on chain and building that order
book and having that opportunity to
interact with Traders and market makers
and other system thinkers and designers
made us realize that even if we were
building on the plat the fastest
blockchain at the time and still today
there were still immense bottlenecks at
the infra layer that prevented us from
building what building essentially
something that could compete with modern
Financial Networks like NASDAQ New York
Stock Exchange Visa coinbase Etc so
basically once we hit that sort of wall
of okay if we want to build something
that can actually compete we can't do it
with the current stack we spent more and
more times going lower and lower down
the stack researching okay like how can
we actually scale up this onchain order
book and that eventually led us down the
path of accidentally designing a new
layer 2 model and so that was like the
first story of the first Sparks of how
Layer N started to become what it is
today and then after multiple iterations
and research Cycles it eventually became
what Layer N is today which is this more this
sort of unique network of rollups with
custom VMS that allow exponentially more
compute for applications all while
retaining seamless composability
so really it was this like firsthand
experience of trying to build something
hyper performant on chain failing and
spending the time to realize how we
failed in do that and fixing the problem
that essentially blocked us from trying
to build that first thing so it was a
very natural sort of experience for us
and I think that's also something that
hopefully also makes us understand
Builders a lot more given that we've
also been through the whole process of
trying to build a protocol on chain
that's a nice background and so as I
understand basically the boundaries you
dealt with back then drove you towards
layer n and now you're aiming to solve
all of these the bottlenecks you
described like basically aiming to to
achieve a feature and performance parity
to centralized system systems but as I
also understood you still value like the
core features of decentralized systems
very high which is permissionless like
censorship resistance so you're trying
to combine the best of both worlds that's
a very nice story let's start with very
high level unpacking the tech stack bit
by bit like what's new in layer n why
should we be excited about it I know you
coined the term StateNet a very powerful word
which is this single shared State
machine that is powered by that Network
you already described of course custom
and optimized Road UPS so before we dive
deeper into the Tex Tech maybe give us
some insights very high level on what
are the perks of such a system and why
should we be excited about it for sure
absolutely so I think there's really two
major unique unlocks right number one is
basically removing the bounds removing
the computational constraints for
application developers so historically
if you were to build any sort of
complex application arm chain it would
be really hard so that's sort of part of
the reason x*y=k became a thing right
because we had to find a simple way of
computing like a market making formula
without blowing out the gas constraints
on the EVM but basically it's a it tells
the tale of how hey if you're trying to
build anything more complex like you
just can't do that on chain at the
moment you can't do that on things like
the evm even the svm like I remember
back in the day when we were building on
Solana we had to build a lookup table to
find a square root of a number right and
this is yeah like you should be able to
just find a square root of a number this
is basic math computations and so the
first core unlock that we enable by
allowing people to build their own
basically execution environments or
virtual machines as we like to call it
is that you have this like basically
unbounded compute surface area and you
can build very complex applications
without needing to worry about okayy I
need to fit all these computational
constraints that the EVM has
right so that's number one you're now
able to build very complex stuff we can
even build all the way to things like
incorporating like AI models too right
which I'm sure we'll talk about
at some point on this show given how
much AI is of Interest now that's the
first unlock the second unlock is none
of this would be useful if it can't
share liquidity and communicate with
everything else right so like we've seen
like this Paradigm play out of everyone
building their own individual rollups
the whole problem with that is
everything then gets fragmented right
liquidity gets fragmented the user
experience is fragmented you need to
work with third party Bridges to move
from one roll up to another it's just
overall not a good experience right and
so the second core unlock that we have
is basically the StateNet concept which
allows each of these rollups and
applications to share a standardized
messaging pipeline protocol that we call
the inter VM communication protocol that
basically allows applications and rollups
to seamlessly and instantly communicate
with each other right and so now you no
longer need to worry about 7-day withdrawal
periods you don't need to worry
about interoperability and bridging
protocols and stuff like that all of
that just like works right so like me as
an application developer I either build
my smart contracts on one of the evm
rollups or I build like my I launch my
own rollup with my own custom VM and
everything just works right these just
compose just as if I were building on a
monolithic blockchain so really like the
goal is hey like how can we like you
said earlier how can we accomplish
feature parity and performance parity
with centralized systems all while
retaining the benefits of composability
that you're used to on the monolithic
system so that's really yeah so
hopefully that was clear yeah perfect
answer and what you described earlier
very much reflects what I observe or
experience as a user myself right like
it's it's painful if you're like stuck
on different rollups and seriously
sometimes I find myself questioning like
how should I describe or explain that to
something somebody that is new in the
space while avoiding getting phished or
something like that on the path towards
bridging from all these it's a thing and
it's
interoperability and all of this
liquidity fragmentation and and so forth
remains one of the core pain points
within the industry in my opinion and
it's it's awesome to see projects like
Layer N tackling these very challenges
of our ecosystem so Layer N kind of
strives to enable this Melting Pot of
VMS As I understood and rollups on the
back of this like you said shared
communication but also liquidity layer
maybe we can dive into the inter-VM communication
protocol which you call IVC and maybe
could explain like how does it work and
how does it compare with other Solutions
are there for instance Concepts like
shared security or something like that
please guide us through the very concept
of this liquidity and communication
layer so at
the base of the communication protocol
is what you said the concept of shared
security right so we wouldn't be able to
accomplish
a seamless the the features of the
seamless communication protocol without
some elements of shared security
assumptions so basically what that means
is all of these rollups or all of these
rollups all settled to the same sort of
big Global like State machine right so
that's also where the word StateNet comes
from it's this idea that you have this
single big state machine that's
separated between this like networks of
applications and that's a really
important assumption right it's like you
need shared security otherwise you have
problems if someone were to run their
own let's just say op stack roll up and
someone want to run another optimistic
or even ZK rollup because they both have
different security assumptions you need
that kind of 7-Day withdrawal period
from the optimistic rollup or you need
some kind of third party bridging
provider to take on the risk and the
liquidity providers take on the risk of
moving assets between the rollups right
and so all of that is solved once you
have the shared security assumption
right so all the rollups share the same
security risks which means that okay
like if one rollup has an issue you
simply so we used this new thing called
zero knowledge fraud proofs I think we
were one of if not the first to to push
this out but basically it's the idea of
instead of doing validity proofs on
every single transaction like current
zero-knowledge rollups do and instead of
doing like interactive fraud proofs like
the current optimistic rollups tried to
do and I say tried do because I don't
think anyone has like public
implementations of interactive fraud
proofs yet but basically the ZK fraud
proofs are a very elegant way of basically
only running the validity proof when
there is fault so that allows us to do
the proof in a single block and it's
also a lot more straightforward than
thinking around all the game theory
economics of the interactive fraud proof
so anyhow so basically yeah like if one
rollup has an issue any validator can
replay the transaction history from that
they get from the da layer identify on
which rollup that issue occurred and at
which transaction then submit that ZK
fraud proof basically to be like hey
like there's an error here and if there
is an error then we would go into sort
of the state roll back process now to
get back into the sort of the
communication layer so now that we have
this sort of shared security
assumption we can now seamlessly move
assets in between rollups and
applications without needing to worry about
those like withdrawal periods and those
bridges right and the key towards doing
that is to have some kind of
standardized way of passing and
pipelining
messages from one application slash
rollup to another and basically that's
what we're building in house so think of
IBC on Cosmos but then like just remove
all of the consensus components right so
basically have you have a very
straightforward problem of okay like I
have one roll up I have another roll up
like how do I create a standardized
format of sending one message that can
be any sort of arbitrary bytes to
another rollup right so then that if you
like distill that problem to to the just
that it's like a very similar just like
Web Two communication problem and at
that point once you have a standardized
way of doing it it's very seamless for
everything in your network to
communicate with within each other like
on a very fundamental level it's I mean
you say that it already it pretty much
reminds me of the early concepts of
Cosmos and Polkadot at least from the
idea and the approach like pretty dope
that you managed to abstract away
the different VMs and provide
somewhat like shared security shared
liquidity layer but also have
composability like solve the comp
composability Problem by you already
touched base on the whole proving
concept which was very new to me I know
the the XVM we touched base on the VMs
and what layer n enables within that
session as well but the XVM that you
call XVM are built in Rust and are
enabled as I stood As I understood by
risk zero and if we look this modular
stack periodic table with the different
rollups if we think about layer n maybe
help us a little bit if we look at this
modular stack table how would you classify
Layer N is it a settlement rollup is
it a sovereign rollup and then the
second aspect or component I'd like to
cover here is the ZK
fraud proof life cycle you said because
until now I was used to either I have a
validity proven system or rollup or
I use fraud proofs but
now you combine these and create some
kind of magic with huge benefits being
that you don't have to to prove
everything as you mentioned maybe you
could yeah help us understand that
concept a little bit better and where
you would classify layer n like how can
we think of layer n in that regard for
sure so in terms of the modular stack
right we sit at the execution component
so I like I would say that we're like an
execution layer right so we're purely
focused on hey how can we make how can
we be the best execution layer in crypto
where application developers can come
and just purely focus on building the
best applications right so if you think
about it in the modular periodic table like
you said there's we have the execution
component which is us the security
component is on ethereum and then the da
layer we're currently partnered with
EigenDA to use in this sort of Validium
model right so that's like the the
general component with regards to the
fraud proving question so that's like
the really cool thing about what we're
doing right we don't really fit in the
camp of like fully ZK rollups we don't
really fit in the camp of the typical
optimistic rollup we combine The Best of
Both Worlds to create this new category
right and I wouldn't we're more akin to
optimistic rollups in the sense that we
still run execution optimistically but
when we talk about proving it's really
this sort of new category that we called
the zero knowledge fraud proofs right
and then the basic idea is really hey
doesn't make sense to run validity
proofs on every single transaction if
you're trying to optimize for
performance because ZK proofs are
expensive they're timec consuming to run
and if you're trying to do anything like
what we're doing which is like hundreds
of thousands of transactions per second
you're not going to be able to do that
and if you were to do that you're
probably paying millions of dollars to
AWS to run your proving costs right or
something like that on the other hand
you have the interactive fraud proof
which
have historically been really hard to
implement and then we've seen that in
practice if you look at any of the
current optimistic rollups I don't think
as of the date of this interview that
any of them have publicly implemented
fraud proofs right so Optimism has no
fraud proofs so you're basically running
on the assumption that you trust OP Labs which is
not a bad assumption but it's it that is
the assumption Arbitrum has whitelisted fraud
proofs right so it's not public so it's
only like a whitelisted set of validators
that can run so that's if you think
about it from a game theory perspective or I guess
from a theoretical perspective it has
the same properties as not having fraud
proofs at all because basically like
only insiders can run fraud proofs which
is effectively the same as not having
fraud proofs unless you trust the insiders but again
it's like not a terrible assumption if
you trust Arbitrum but what I'm trying to
get at is interactive fraud proofs are
really complicated right because they're
multi-block processes it depends on this like
interaction game basically between
the prover and the verifier instead of
doing all of that complicated stuff
right we say hey like validity proofs
are great but validity proofs are only
good if you can minimize the amount of
time that you actually use it right so
instead of doing it at every transaction
we'll only do it when a fraud occurs
right so when a fraud occurs a validator
simply needs to rerun that state
transition on like they can run it on
their own zkVM on their own MacBook if
they want right Y and then basically
through the RISC Zero zkVM they get a hash
that they can then use to submit on to
an onchain smart contract that verifies
the legitimate the legitimacy of that
hash and if it is legitimate then hey
you've got a valid proof right so it's
that straightforward and the cool thing
is hey like now we can do this within a
single block as opposed to a multiblock
process as in the theoretical interactive
fraud proof which also means that you
can theoretically reduce your minimum
withdrawal period by like a significant
amount right so like the whole problem
with the 7-Day withdrawal period 7day is
arbitrary but it's also based on the
idea that like hey if it takes like x
amount of blocks to prove fraud and each
block has some probability of being
censored right what's the probability of
every block being censored like multiple
times and how much time do we need to
ensure that all of these blocks go in
right now if you only have a single
block that you need to worry about the
risk decreases significantly right and
so that's why we say hey theoretically
like you can decrease the withdrawal
period Times by quite a significant
amount while decreasing the actual risk
of censorship itself so hopefully that
gave you like a an overview of of where
we land in terms of the security
Spectrum
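That censorship argument can be made concrete with a little arithmetic. The sketch below is purely illustrative: the per-block censorship probability, the number of interactive rounds, and the failure target are all assumptions of mine, not Layer N parameters.

```python
# Toy model of the censorship argument: how long must the challenge window
# be so that a fraud proof almost surely lands? Every number below is an
# illustrative assumption, not a Layer N parameter.

def window_for_failure_target(p_censor: float, txs_needed: int, eps: float) -> int:
    """Smallest window W (in blocks) such that, splitting W evenly across
    `txs_needed` sequential on-chain transactions, the chance that any of
    them fails to land stays below `eps`. Censorship is modeled as
    independent per block with probability `p_censor` (a simplification)."""
    W = txs_needed
    while True:
        per_tx_blocks = W // txs_needed
        p_tx_fails = p_censor ** per_tx_blocks      # censored in every block
        p_any_fails = 1 - (1 - p_tx_fails) ** txs_needed
        if p_any_fails < eps:
            return W
        W += txs_needed

p, eps = 0.5, 1e-12  # assumed per-block censorship odds and failure target

# An interactive bisection game needs many sequential moves; the one-shot
# ZK fraud proof David describes needs a single transaction.
multi = window_for_failure_target(p, txs_needed=30, eps=eps)
single = window_for_failure_target(p, txs_needed=1, eps=eps)
print(multi, single)  # the one-shot window is over 30x smaller here
```

Under this toy model, the single-submission proof needs a far shorter safe window than the multi-round game, which is the intuition behind shrinking the withdrawal period without increasing censorship risk.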
Yeah, it definitely did. But, to be frank, it sounds almost too good to be true that you get this massive improvement in withdrawal periods but also reduced risk at the same time. And on what you stated earlier, I'm on the same page; I would even be a little more critical and say it's a problem that Optimism and other rollups still do not have these fraud proofs implemented yet. I guess Arbitrum is on a good path here, but Optimism and all of these rollups still have training wheels on, let's be honest, and I hope this will get better, because we want to build these sound, decentralized, permissionless, immutable systems.

Now I would like to know: you stated that you partnered with EigenDA for the data availability aspect. Would you mind giving us some insight on why you chose EigenDA and not other solutions like Avail or Celestia? You touched on it, but I would like to understand it maybe a little better: Layer N is only the execution part, EigenDA you use for data availability, and settlement is still happening on Ethereum, is that correct?

Yes, that's correct. So basically,
Layer N is still an L2, because we sit at the layer-two level of the hierarchy. Our core function is: how do we make execution better? Better in the sense of performance and compute, which is the main feature, and better also in the sense of composability. Then, as any rollup does, we roll up the state and post it back to Ethereum, where it can be verified and challenged in case there are problems. The main difference is that instead of hosting data availability on Ethereum, which a lot of the current rollups do, we decided to do it with an off-chain DA solution like EigenDA. The core unlock there is really a math problem: EigenDA will be able to enable up to 8 to 10 megabytes per second, which is crazy storage bandwidth. The math there is just: look at something like Celestia, it's currently doing 1.5 megabytes per second, and if you're trying to reach the scale we're trying to reach, which is tens of thousands of transactions per second, you need something with high bandwidth. That's just a prerequisite, so it really just boiled down to that.

I think the other really cool thing about EigenDA is the whole restaking component. One of the core things people worry about with these off-chain DA solutions is: now we're losing some security, right? Which is true: if you're not storing your data on Ethereum, you're inherently going to inherit a smaller subset of the security you would assume on Ethereum. Let's just assume Ethereum is the most secure decentralized system; any other system is probably a subset of that. Even EigenDA, which is restakers, is still going to be a subset, unless the whole set of Ethereum validators restakes into EigenLayer; so let's just assume it's a subset. So yes, that is the tradeoff you would be taking, but from our perspective it is a strong enough and beneficial enough tradeoff that it makes a lot of sense: you're getting a hundred times better performance for just a minuscule fraction less security. And even then we can argue whether it's really less security, and I would say not really. I think the tradeoff is very valid, and I think we can play this out and see what types of products people will use more often. But until we reach performance parity, we still have a long way to go from a product-market-fit perspective, if we don't reach that performance parity threshold quicker.
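The bandwidth comparison above boils down to simple arithmetic. In this hedged sketch, the throughput figures (roughly 8-10 MB/s for EigenDA, 1.5 MB/s for Celestia) are the speaker's claims from the conversation, and the per-transaction data size is an assumption of mine:

```python
# Rough ceiling on TPS implied by a DA layer's bandwidth. The bandwidth
# figures are the speaker's claims; the per-transaction size is an
# illustrative assumption, ignoring compression and batching overhead.

TX_BYTES = 100  # assumed average bytes of DA data posted per transaction

def max_tps(da_bandwidth_mb_per_s: float, tx_bytes: int = TX_BYTES) -> int:
    """Upper bound on transactions per second a DA layer can absorb."""
    return int(da_bandwidth_mb_per_s * 1_000_000 // tx_bytes)

celestia_tps = max_tps(1.5)   # ceiling at the claimed 1.5 MB/s
eigenda_tps = max_tps(10.0)   # ceiling at the claimed upper 10 MB/s
print(celestia_tps, eigenda_tps)
```

Real numbers depend heavily on compression and batching, but the relative ordering of these ceilings is what drives the choice described above.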
So, that was... I lost track of the original question. Oh, I think you were asking about the DA stuff. Yeah, basically it just came down to a math problem. One, we just want whatever has the greatest bandwidth, and two, whatever is the most secure as well. The nice thing about EigenDA is that you still have a strong subset of Ethereum's security, as opposed to some of these other systems that need to bootstrap their own set of validator nodes, which could be a bit less secure.

Yeah, great take, thank you. Because you touched on tradeoffs: your approach is using these rather novel (I'm not sure if they are battle-tested yet) ZK fraud proofs. What is a potential tradeoff of using that kind of proving system, in your opinion, or aren't there any?

Yeah, for sure. The main tradeoff
is that we need to build the VMs and the rollup components so that they work with our proving system. Since we're using RISC Zero, which requires the Rust language, we basically need to build within those constraints, so we wouldn't be able to build something out of the box in, say, JavaScript. That's a major constraint, because it means the initial set of rollups and VMs will have to be built in Rust. That being said, I'm also a huge Rust maxi and I think the world should move towards Rust, but that's a separate topic. We are also thinking of implementing WASM at some point, which will allow us to support a much wider breadth of programming languages for people who might want to build applications in different languages.

The other note I'll make, because this is a common point of confusion when people ask me what programming language Layer N is in: we build everything in Rust, but that doesn't necessarily mean that application developers need to build in Rust. Depending on what type of application a developer wants to build, it could be different. We will have our own EVM rollup as well, built on revm, which is the Rust implementation of the EVM. But if you're deploying a smart contract on that EVM rollup, that's in Solidity, so all of your pre-existing code, all of your pre-existing Solidity and smart contract knowledge, you can still use. That being said, if you're trying to build an XVM, your own rollup, that's where you'll need Rust. At the current stage we're willing to partner really deeply with a lot of the people we work with to get them started in that process, just because we know this is a completely new way of designing things, one that most people may not be familiar with, so we're happy to spend that time educating the initial core set of developers and helping them out a lot in that process.

Nice. So,
as I understand it: you are constrained in the programming language for the XVMs, which make up that network of different virtual machines, but on the other hand of being constrained by the language, you get a huge design space, by allowing crazy optimizations of these generalized but also application-specific VMs. And at the same time, as I understood it, with the EVM rollup you still provide a VM environment for developers who want to build in Solidity, which is super nice: you wipe out the boundaries of operating within the EVM, but you also allow developers to keep building in Solidity. So with these XVMs, what design freedom and benefits do you achieve exactly? Maybe you can give us some insights on that, and on the perks, since you allow for generalized VMs but also application-specific ones. Maybe you have some cool examples to share?

I think the
easiest
mental model I like to use to compare the XVM with more generalizable smart contract VMs, like the EVM, is this: imagine you're trying to build a car. The EVM is basically a scenario where someone comes and gives you all these generalizable Lego blocks. You can use these Lego blocks to build a house, to build a table, to build a car, because they're generalizable: they're Lego blocks, you can use them to build anything. They might not be the best if you're trying to build a really solid house or a really fast car, but they do the job, they're generalizable, and that's why people like them; they're straightforward to use. The XVM is basically: we don't have any Lego blocks for you, but we have all of these primary resources, such as wood, et cetera, and you go build whatever car you want, from the ground up, in the way that you want to.

And that really is the core thing. People typically think about scaling as only two things, throughput and latency, but people often forget the third thing, which is compute. Compute is really important when you're trying to build anything more complex, and it's really constrained by what the virtual machine environment allows you to do. When you have very preset ways of doing things on the EVM, it becomes really hard to build things that are very compute-intensive. So imagine the XVM as this place where you can pretty much just build an application without thinking about these blockchain constraints; just build the application as you would if you were building it on Web2.

One example of this is actually the Nord VM that we built ourselves in-house, which is basically the first order-book-specific, exchange-optimized virtual machine. What it does is only one thing, and it does that one thing really well, which is settle tens of thousands of trades per second, at really low latencies, on this massive order book. The reason it's able to do that is because it was purpose-built for that; it was actually modeled off of LMAX Digital, which I think is one of, if not the, fastest institutional centralized exchanges. We're able to do that because we're not building within the constraints of the EVM; you get to just build whatever you want. So that's basically how I like to think about the difference between an XVM and a more generalizable virtual machine.

The other thing I'll note about the XVM is that it's also not trying to be a generalizable platform, so you don't need to think about a lot of the problems and inherent challenges that come with building a generalizable platform. You can really just be super application-specific and not worry about a lot of the other constraints that the EVM needs to worry about, like how to create a generalized way of doing everything. You don't need to worry about any of that. Did that kind of answer the question?
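To make the order-book use case concrete, here is a toy price-time-priority matching engine. This is purely an illustration of the kind of logic such a VM runs; it is not Layer N's Nord engine, and all names here are hypothetical.

```python
# Toy price-time-priority order book: best price matches first, and at
# equal prices the earlier order matches first. Illustrative only.
import heapq
from typing import List, Tuple

class OrderBook:
    def __init__(self) -> None:
        # Max-heap of bids (negated price), min-heap of asks.
        self.bids: List[Tuple[float, int, int]] = []  # (-price, seq, qty)
        self.asks: List[Tuple[float, int, int]] = []  # (price, seq, qty)
        self.seq = 0  # arrival counter gives time priority at equal price
        self.trades: List[Tuple[float, int]] = []     # (price, qty) fills

    def submit(self, side: str, price: float, qty: int) -> None:
        self.seq += 1
        if side == "buy":
            # Cross against resting asks at or below our limit price.
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ask_price, s, ask_qty = heapq.heappop(self.asks)
                fill = min(qty, ask_qty)
                self.trades.append((ask_price, fill))
                qty -= fill
                if ask_qty > fill:
                    heapq.heappush(self.asks, (ask_price, s, ask_qty - fill))
            if qty > 0:  # rest the remainder on the book
                heapq.heappush(self.bids, (-price, self.seq, qty))
        else:
            # Cross against resting bids at or above our limit price.
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                neg_bid, s, bid_qty = heapq.heappop(self.bids)
                fill = min(qty, bid_qty)
                self.trades.append((-neg_bid, fill))
                qty -= fill
                if bid_qty > fill:
                    heapq.heappush(self.bids, (neg_bid, s, bid_qty - fill))
            if qty > 0:
                heapq.heappush(self.asks, (price, self.seq, qty))

book = OrderBook()
book.submit("sell", 101.0, 5)
book.submit("sell", 100.0, 5)
book.submit("buy", 100.5, 7)   # fills 5 @ 100.0, rests 2 @ 100.5
print(book.trades)
```

A purpose-built VM would implement this loop with far more aggressive optimizations (fixed-point prices, flat arrays, batched settlement), but the matching semantics are the part that has to settle on-chain.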
Fully satisfied here, David. It's very cool; I imagine as a developer it's awesome to have a blank canvas and build whatever you want, because you can tailor your solutions towards your application. That's really nice. By coincidence, in the previous episode of Uncut Gems we had Keone from Monad, who shares a similar background, stemming from Solana slash high-frequency trading, and, as you know better than me, parallelization is a very powerful tool to scale integrated chains, and there are more areas worth tweaking to boost performance, such as the database, consensus, or pipelining, like Monad is doing. To me it's very interesting to observe the developments in the industry from the sideline: there are these teams choosing the modular direction, and then there are other teams fully committed to the integrated approach, and clearly Layer N went with the modular narrative. So would you mind providing some insights on your decision-making process and why you eventually went with modular instead of integrated? Please guide us through some pros and cons of these two very exciting developments.
The best way to think about our decision-making as a company and as a team is that we have one mission in mind: take the current surface area of what's possible to build on-chain and increase it by 10x. We don't just want to increase it by 1.2x, 1.3x, 2x; we want to increase it by 10x. And to us, the key to unlocking that really just comes down to compute. If you enable more compute for people, that's going to allow them to do so much more, and to do that we need to think in a completely different paradigm, a completely different way of building applications on-chain. The truth is, I think what a lot of the other teams are doing in terms of improving the EVM is super core and super important, because there's a whole bunch of people still building on the EVM. The problem is, that's not going to take us to a 10x in terms of compute; really, what we're doing there is updating an existing model rather than thinking outside the box about how to create an entirely new model that's just so much better. It's the same as the horse versus the race car. There's a quote that I think Ford said at some point (maybe it was someone else, but I think it was Ford), something along the lines of: ask people what they want today and they'll tell you they want better horses, or faster horses, but really what they want is this new thing called the car. That's essentially how we think about this dichotomy.
Okay, nice. So your take is that the approach you are taking is the superior one, with fewer constraints; basically not only an iteration but the next chapter. That's cool. Let me try to consolidate a little of what we've talked about until now. You combine EigenDA, which gives you an insane level of throughput, with the StateNet, which is a network of custom VMs, and that very StateNet is powered by a shared communication and liquidity layer. Just stating that is already very exciting, because we obtain this super highly performant L2 with, as I understood, very low latencies. And I read that in January you announced Layer N's Nord engine, which is also a custom Rust VM, and as I understood it you managed to achieve around 100k TPS on a closed testnet. I know it's a testnet, but I still think it's beyond remarkable to achieve 100k TPS. Now I wonder: were these full-fledged transactions, including smart contract logic, or were they vanilla value transfers? And then maybe you can provide some insights on this Nord engine that I just mentioned, because it's a new trading-optimized rollup engine and maybe one of the first examples of how highly optimizable rollups will be on Layer N.

For sure. So,
the Nord engine, or the Nord VM as we like to call it now (we went through different iterations of the name, but they all mean the same thing), is this custom execution environment, custom VM, custom engine that just does one thing really well, which is the order book use case. As you brought up, we ran this closed testnet with the EigenDA team; we were all live on the call at the time, and when we were running it, everyone got super excited when the numbers came out. And yes, it wasn't just empty messages, it was trades, real trades on the order book, and I think we went up to 120-something thousand; I forget the exact figure, but it was 120 and something. It's a very meaningful number, because in the history of scaling on Ethereum we've always just dealt with: how do we go from 10 to 100, or from 100 to 700? I think even with danksharding, the math comes out to only around 800 transactions per second across all rollups, or something like that; I don't remember the exact math, but some low number. With full danksharding, not just proto-danksharding, that might be different, but with proto-danksharding I think it's still just 800 or so. My point is, we're stuck in that ballpark; we just haven't dreamt bigger. And so I think this is very meaningful, because even Solana does, I think, 6,000 TPS or something like that, and I believe that figure includes the vote transactions; without the vote transactions, the actual real TPS, I would need to check, but I think it was something like 500 or a thousand.
Yeah, it's been a while since I checked those stats, but it's definitely not 10,000, not even 100,000; we are talking about an entirely different ballpark. And the elegant thing to me is that while you innovate so much on this VM layer, but also on the liquidity layer and the composability aspect, you still incorporate what is very important to me: a good amount of security and decentralization from Ethereum. That is pretty appealing, to me at least. As we inch closer to the end of this nice conversation, David, I would like your take on the following: as I see it, Layer N builds what Arbitrum Orbit and Stylus set out to do, what Optimism sets out to do with the Superchain, what zkSync does with Hyperchains, but with a few very interesting and very elegant implementation differences. So how do you see yourself compared to these, I won't say competitors, but other projects that aim for a similar endgame? How do you compare when it comes to decentralization, security, training wheels, and the ossification aspect? Ossification matters for rollups, because just now, with the Dencun upgrade coming, it became clear that if a rollup were already ossified and not upgradeable, it could not implement blobs. Sorry, maybe that's a very comprehensive question, but it would be nice to get your take on the competitive landscape.

Yeah, for sure. I think everyone
has this shared vision that eventually things are going to move onto their own rollups. It just makes a lot of sense; the congestion you face from a single monolithic model doesn't. I think a lot of it ultimately comes down to implementation details, because a lot of these other projects started with a certain vision in mind, and it's a lot harder for them to quickly progress towards this very scalable model that I've described with the StateNet. If you look at the Superchain stuff: super cool, it's great what they're doing; the problem is there's a lack of implementation detail. Everything we've talked about in terms of the communication protocol and the shared liquidity system, those things are lacking. And the reality is, it's just a really hard problem: when you have all of these independent rollups being deployed and there isn't a pre-existing network that brings them all together, you end up in a situation where you already have a bunch of rollups and you're trying to figure out afterwards what pipes you could put between them. That's really hard, because they all have their own security and they all want to do things their own way.

Where we come into play is really in making the implementation flawless, making it right from the start, and also making it modular enough that it's very future-proof. The concept of ossification you brought up is really interesting, but you also don't want to do it in such a way that if, two years from now, some new research unlocks some new technology, you can't implement it. I think that's really important; that's part of the technology cycle. A lot of our philosophy is based on exactly that: what is the most practical engineering solution towards achieving trustlessness? That's how we think about designing solutions, rather than taking a more philosophical approach of saying, there's this one concept I really like and I'm just going to build things around it. Instead, we like to think about what the problems are and how we build the best solution to those problems. So hopefully that answered the question, but basically I think it really just boils down to implementation and how we piece all these pieces together in a way that actually works really well.

Yeah, it certainly does.
Thanks for the explanation. And now that we are all hyped up, including the audience, maybe you can share some insights on the current development stage of Layer N. Can you already share some exciting strategic partnerships with us? I saw that Sushi's Susa is building on Layer N already, so that's pretty cool, but maybe there's more to share, I don't know.

For sure. So, number one:
this coming month is going to be packed with a lot of exciting stuff, so if you're not following us already on Twitter, if you're not in our Discord and Telegram group chats already, make sure to join, because we have a lot of exciting stuff that's going to be announced. Susa is the first major announcement, and we're super excited about it, because it's not just any other DEX: it's the first hyper-performant order book that matches centralized exchange performance, but not only that, it also fully settles every single trade on-chain. If you think of any of the current off-chain order books, like Aevo, Vertex, Hyperliquid, et cetera, they're fast, but the problem is they only settle matched trades on-chain. Of all the orders you place (place order, cancel order), the vast majority aren't actually settled, so it's not a fully on-chain, trustless system. Susa is not only going to be the fastest, it's also going to settle everything on-chain, meaning you can actually fully verify everything, right down to every single place-order and cancel transaction that you make, which is going to be pretty game-changing from a security perspective.

From a feature perspective as well, there are a few things that will be major unlocks in the DEX space, which will be uniquely implemented through their design and their use of this unrestricted compute surface that we're enabling for them by being their own XVM. So we're super excited about that announcement, and there are a few other announcements with major liquidity providers coming up as well. When we talk to developers (and having been a developer before myself), one of, if not the, most asked questions is: where is liquidity going to come from, who could I work with to get liquidity, and where are the users going to come from? That's all going to be addressed; there are major partnership announcements coming up ahead, and we have huge partners to help bring in liquidity on that front. I don't want to tease too much, but basically, lots of exciting announcements coming up in the next few weeks; make sure to follow along to stay up to date.
Yeah, very cool, very exciting, and happy to hear that such a pioneering project is close to launching and announcing some big stuff. Maybe one last question before we end this cool conversation; I hope I don't catch you off guard: outside of Layer N, what is the most exciting project you are following, and why?

Oh,
that's a good question. Yeah, that's a really good question. There are a few really exciting things. I think Monad, for one, has a very different approach to how we're doing things, but the sheer focus on engineering and a lot of the improvements they're bringing to the EVM are actually really exciting; that's the part I'm really excited about with Monad. I think the next generation of DEXes is also really exciting, because up until now, I know there are a lot of DEXes that constantly get launched, but there really hasn't been a DEX that truly rivals a centralized exchange in terms of providing the same level of user experience, the same set of assets, the same set of features, or even more than the current set of features. There really isn't anyone that even holds a candle to Coinbase, Binance, et cetera, so I'm really excited about someone doing that, wink, on Layer N.

The other thing I'm really excited about is actually the intersection of AI and crypto. I think it's gotten a bad rep because, as anything grows enough hype, there's always going to be a strong amount of noise around it. But if you think about actual use cases and what actual on-chain decentralized inference can enable, it gets really exciting. One thing, or rather a few things, we're working on behind the scenes: there are a few AI partners we're working with to enable some actual on-chain use cases, and I don't want to spoil the announcements, but when we think about AI and the intersection of trading and gaming, that's where I see a lot of the really interesting products and applications. There are a few partners we're working with on making that happen, and it can uniquely happen thanks to the fact that we can enable a lot more compute, allowing you to call into these AI models, and you can work with a zkML team to do all the inference proving and so on. And now suddenly you have AI running on-chain, fully trustless. So that's actually something we're super excited about as well.

Nice, I really love your take,
because I seriously believe, after all these years we've been in crypto and digital assets, it's finally time for some actual use cases. Maybe at some point we move a little bit away from that major focus on infrastructure and instead provide some very tangible use cases, as you said, with gaming or AI. And with the current bonanza around AI, it's very exciting to hear that you have some partnerships going on. I know it's hyped up, and the reason, basically, for me, is that two industries that are highly speculative but don't have too many tangible use cases yet are clashing together, which is AI and crypto. The nice thing about that occurrence is, again, that it shows us, as with many instances, say deepfakes or CBDCs, that blockchain solutions and decentralized systems are insanely important for future use cases such as AI: being a risk mitigator, being an enabler for democratizing AI, things like that. And with that, David, thank you; I really enjoyed hosting you today. I guess our discussion ranged from very complex to digestible topics; it was super insightful. On behalf of Bitcoin Suisse, and I think the entire audience as well, thanks for sharing your valuable time with us, and best of success with the challenges ahead and a smooth public testnet and subsequent mainnet launch, David.
Thank you so much, Dominic, this was a lot of fun, and I hope I didn't bore your audience too much with my ramblings.

Not at all. And finally, to our viewers: if you're interested in learning more about Layer N, be sure to visit their social streams, follow them on Twitter, and read their blog posts, so that you don't miss the announcements. I will provide links in the description below. And yeah, until next time, thanks for watching.