
LayerZero Labs: Building the Future of Cross-chain, LayerZero Protocol, and Solving the Bridging Trilemma with Stargate

Nov 28, 2021 ·

By Avi Zurlo and Can Gurel

Delphi Digital's cross-chain experts Avi Zurlo and Can Gurel sit down with Bryan Pellegrino (CEO) and Ryan Zarick (CTO), co-founders of LayerZero Labs, a trustless omnichain interoperability protocol. They discuss building the future of cross-chain communication, solving the bridging trilemma, and Stargate, a cross-chain liquidity network.


Hey everyone, welcome back to the Delphi Podcast. I'm your co-host Avi, here with my other co-host from Delphi Digital, Can, our L1 expert. Today we have Bryan and Ryan from the LayerZero team, and we're going to be discussing the LayerZero cross-chain communication platform, as well as their new cross-chain liquidity application, Stargate. We'll get into the potential use cases for LayerZero, what it means for the ecosystem at large, and all kinds of other exciting stuff. So let's jump right into it. Bryan and Ryan, would you introduce yourselves for our listeners?

Yeah, I'm Bryan, CEO of LayerZero Labs. Here is Ryan, our CTO. Nice to meet everybody. Thanks for having us.

Hi, I'm Ryan Zarick, and I'm the CTO of LayerZero Labs.

Awesome. A good place to begin is where you guys see the future of blockchains. You're building a cross-chain communication platform, which for many years has been a design challenge for the blockchain space. We'd love to begin with your thesis on the future of blockchains.

Yeah, I think it's definitely advanced this year. A lot of people had this thesis, and it's not a unique one, that we'll live in a multi-chain world, but it was, let's say, less founded 12 months ago than it is now. Over the last 12 months we've seen the explosion, whether it be Solana or Avalanche or all of these other ecosystems, that has really cemented that this is a real thing that is likely to happen. We can see the extension; we can see that there are real, vibrant ecosystems being built here. There's a much stronger case for it now. We've always believed that, but it's much easier to believe at this point in time, and I think everyone can see that's a potential direction we may be moving in.

So yeah, we strongly believe it. I think about it the way I think about programming languages. Before you write some application, you don't say, "I'm definitely going to write this thing in JavaScript." You try to figure out what you need: maybe you write the front end in React, and then maybe you need speed, so you rewrite a piece in C, and then you need more speed or you need parallelization, so you go down to [inaudible]. You use the tools that are needed for the solution. I think a lot of the same trade-offs are going to be seen within chains. You have certain chains that are high security and low throughput, certain chains that are high TPS and high throughput, and all these different trade-offs in between, where right now everything is isolated, single-chain applications.

That's probably going to be the way for a while, where you'll get multiple applications implemented cross-chain. Maybe you have a DEX that can communicate across these chains, but eventually you might actually have pieces of an application being offloaded. Axie is a great example: they do the gameplay on one chain and then it can resolve out. There can be many things like that, where you have certain portions of computation, or of action within an application, that you might want to do elsewhere based on the various trade-offs. So we think it is certainly moving that way, and the space seems to be following that as well.

I think at Delphi Digital we fully agree. That's part of the reason why we're so excited for LayerZero and to be investors with you guys. Let's jump into how LayerZero fits into that stack you're explaining. What is it, at a high level?

Yeah. At a high level, it's an omnichain interoperability protocol. The high-level mission is to connect every contract on every chain to every contract on every other chain: pure interoperability. I think it's important to understand the differences between what already exists, to know what this really is. Prior to this, there were two main ways that people approached interoperability. The first way covers 95%-plus of applications. You have these two chains, they're atomic in state, they know their own thing but nothing outside, and you want them to communicate. The solution is: we're going to put our own chain in the middle, and that's going to deal with this communication. You write a transaction from the source chain, the middle chain forms some consensus as to the validity of that transaction,

and then it writes a transaction out. That's a really important caveat, because what it means is that the destination chains are implicitly trusting that middle chain, and the middle chain has complete signing authority over the destination chains. You saw this in the Poly Network hack, and you've seen it in a bunch of other places: if that middle chain is corrupted, even for a small number of blocks, it has the ability to tap all liquidity on all paired chains basically instantly, because they're implicitly trusting that whatever comes from it is valid. As this grows and becomes more connected, and it's still fairly new, these systems have, let's say, fairly weak security properties: 10 to 30 validating nodes, up to $300 million in bonded value, but they're meant to secure tens of billions of dollars of connected liquidity.

It's hard enough to secure a layer one. Look at the 60-to-150-block reorgs happening weekly on Polygon and some of these other chains. It's difficult; security is hard. So to do this on a chain that has completely different security incentives, where there's nothing being built on top of it, and to make sure it is never corrupted for any period of time, ever, is a difficult proposition, and not one that we were willing to build on top of. That was the impetus for doing this. The only other real solution is the IBC style, where you run a complete light node on chain: you take the entire block history from one chain, take its block headers, write them sequentially to another chain, and vice versa. Which is amazing; once you have that, you just submit the transaction, do the walk, and validate that the proof is valid.

The bad part is that writes on blockchains are incredibly expensive. Pairing this to Ethereum runs tens of millions of dollars per day, per pairwise chain. So that was the state of things; that's what we were presented with as we were trying to build other things. And we said: none of this works, none of this is something we would ever build on top of right now. So we invented something we call an ultra light node, which is effectively the process of taking one single block in isolation and streaming it on demand. In order to validate a block directly on chain, you need two pieces of information: you need the block header, which contains the receipts root, and you need the transaction proof. For any EVM, that's a Merkle Patricia proof. We split this up, where you have an oracle, in the traditional sense, forwarding the block header.

This is Chainlink, Band, Pyth, SupraOracles, et cetera. Then you have a relayer, which is an open, permissionless system, forwarding the transaction proof. We can dive into the technicals of this, but it basically reduces down to two interesting properties. The first is that the worst-case security of this configuration is equivalent to the best-case security of the chosen oracle. If you choose Chainlink as your oracle, the very worst case is that your oracle and your relayer are the exact same entity: they agree on everything, they're in complete collusion, whether malicious or not, they're effectively one thing. Even then, the base-case security is still that of the Chainlink DON. The other interesting property is that even in the case where your oracle becomes malicious, it has been corrupted, it's in collusion with relayer A, and they're performing an attack, only the user applications accepting messages from exactly that oracle and relayer A would be affected.

Anybody using relayer B through Z, anybody relaying their own transactions, anybody using any of the other oracles is completely unaffected. You've taken what was a giant pool of risk and you've effectively sharded or siloed it, and that's an extremely attractive property to have in this kind of system. Even when you do break the base-case security of the oracle, and you have a relayer that is maliciously colluding with it, the cost of that attack becomes exponentially higher, because rather than winning the entire pool that was sitting in the middle chain, you now only win one tiny sliver. Again, that's just a great property to have. In terms of security properties, that's the difference between us and most of the existing solutions. In terms of implementation, the goal has always been dead simple. We're developers-first: user applications implement two things, send and receive, and that's it. They're sending a generic bytes payload, and they're interpreting it when they receive it.

Anything that you can write in Solidity, you can write in Rust, et cetera. If you can build it on chain right now, you can do it with LayerZero across multiple chains. We've built it to be as modular as humanly possible. I can stop there; I think that's enough of a high-level overview.
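To make the ultra light node idea concrete: the oracle delivers a block header (which commits to a receipts root), the relayer independently delivers a proof that a given transaction sits under that root, and the destination chain walks the proof on chain. Below is a toy Python sketch using a simple binary Merkle tree with SHA-256; the real system uses Merkle Patricia proofs and on-chain contracts, so every name here is illustrative only.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash function for the toy Merkle tree."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a toy Merkle root from a list of leaf byte-strings."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes proving leaves[index] is in the tree."""
    proof, level, i = [], [h(leaf) for leaf in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sibling], i % 2 == 0))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    """The on-chain check: walk the proof from the leaf up to the root."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# The oracle forwards the header (here reduced to just its root);
# the relayer forwards the transaction proof, from an independent path.
txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
header_root = merkle_root(txs)      # delivered by the oracle
proof = merkle_proof(txs, 2)        # delivered by the relayer
assert verify(b"tx2", proof, header_root)
assert not verify(b"forged", proof, header_root)
```

Because the header and the proof arrive from independent parties, a forged transaction needs both the oracle to vouch for a bogus root and the relayer to supply a matching proof; either one alone fails the on-chain check.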

Yep, thanks for that. So you touched on great points. One of them is the siloed risk structure, and the other one that I picked up from your words is that anyone can run their own node. Are we imagining a world where user applications are actually running their own relayers most of the time? How many oracle-relayer combinations do you expect here? And another point I'd like you to expand on: what are the security implications for liquidity providers and users? I really like cross-chain designs where the users who actually do the swaps have different, very weak trust assumptions compared to liquidity providers. I'd like to have your view on this point as well.

Yep, absolutely. Some of that might be better to get into when we talk about Stargate, because Stargate houses a bunch of that. In terms of the relayer network, it is completely open: anybody can run a relayer. One thing that we care very deeply about is that user applications themselves have complete control over all of the levers in terms of security. What this means is that each user application, and there's a default if they don't want to deal with it, has the ability to specify exactly what oracle they want and exactly what relayer they want. They're also able to be responsible for the number of confirmations coming from the source chain, so if you want to wait out a rollup time, or whatever, whether it's short or long, all of those things are configurable. Ultimately, the user application is the one bearing the risk to their liquidity.

The user application should have control over that. We don't like controlled systems where they don't have any say and a bad configuration at the protocol level has a massive impact on these protocols. That's something we're very particular about; we wouldn't want to build on something like that. So we have partners for a lot of relayers from day one, and we're going to operate a relayer at LayerZero Labs. We certainly expect larger projects, the Aaves of the world, the Sushis of the world, whoever is responsible for $20 billion of liquidity, to run their own. It doesn't matter what communication protocol you're building across; there's some element of risk there, period. What this allows is that Aave can run their own relayer, and as long as they are not in malicious collusion with the oracle against themselves, they have 100% control over all of their transactions.

It doesn't matter if the oracle is completely malicious and forwarding bogus block headers; the transaction will simply fail to resolve on the destination chain, and there is zero risk to their liquidity. That's something that we cared a lot about. We see a world where, let's say, you have the five primary oracle providers that exist right now, and maybe 20-plus different relayers that get significant use. On top of that, we expect there to be aggregates. We've left the implementation modular enough that you can have a relayer that is an aggregate of a couple, querying for best price, or one that uses two-out-of-three consensus on oracles. There are all these different ways you can see these kinds of subsets evolving. That's how we're thinking about that landscape right now.
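The per-application control described above can be pictured as a small configuration record plus the collusion condition. This is a hypothetical Python model, not LayerZero's actual configuration API; the field names and party names are made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AppConfig:
    """Hypothetical per-application security settings (illustrative names)."""
    oracle: str          # who forwards block headers for this app
    relayer: str         # who forwards transaction proofs for this app
    confirmations: int   # source-chain confirmations required before delivery

def affected_apps(apps, corrupted_oracle, corrupted_relayer):
    """A forged message needs BOTH halves of the proof path, so only apps
    configured with exactly the colluding (oracle, relayer) pair are at risk."""
    return [a for a in apps
            if a.oracle == corrupted_oracle and a.relayer == corrupted_relayer]

apps = [
    AppConfig("chainlink", "relayerA", 15),
    AppConfig("chainlink", "relayerB", 15),
    AppConfig("band", "relayerA", 5),
    AppConfig("chainlink", "self-hosted", 20),  # an Aave-style self-run relayer
]
# Suppose "chainlink" and "relayerA" collude: the risk is sharded to
# the single app on that exact pair, not spread across everyone.
assert affected_apps(apps, "chainlink", "relayerA") == [apps[0]]
```

The self-hosted entry mirrors the Aave example in the conversation: an app relaying its own transactions can only be attacked if it colludes with the oracle against itself.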

I want to jump back to an earlier point that you touched on, which was the ultra light node and the architecture you guys have designed here. The way I think of a lot of these bridging solutions is that they're performing some service, there are many different participants executing some action, and when you have more participants executing a heavier transaction or action, it becomes more costly and more inefficient. What you guys have done here has kind of reinvented that architecture in a really light and efficient way. I'd like you to dive a bit deeper into that efficiency, because I think that's really core to what we're excited about with LayerZero.

Yeah. I think one of the big things is, again, when you have some chain that sits in the middle, it has its own throughput issues and its own security issues. With us, for anything that happens in the middle, whether it's the oracle passing a block header or the relayer forwarding the transaction proof, it's important to understand that there is no consensus being formed and no validation being done off chain. Validation happens directly on chain: that walk of the tree to the root, validating the transaction, is completely on chain. The bounds of what is able to be processed really just rely on the bounds of whatever the transacting chains are. If you're going between Solana and some other high-throughput chain, your throughput is just going to be astronomically high. There are really no constraints in the middle layer, because forwarding a block header or forwarding a transaction proof is such a light operation for the oracles

and the relayers that it can be done in tremendous volume. It's much less of a lift than in most of the other solutions. There are still restrictions in terms of throughput: you can only fit so many transactions into a block on Ethereum, and that's just the nature of the chain itself. But we're not adding any additional bottlenecks. We're trying to add as little additional complexity as humanly possible while still maintaining this trust-minimized communication, with all the properties you would want those systems to have.

Yep. And I think it's also important to make a distinction between asset transfers and just generic messaging, because while the imminent application of LayerZero is these cross-chain asset transfers, it's actually a generic data messaging protocol. Could you maybe talk about how you might see that open up a brand-new design space for developers within blockchains, to build applications that maybe don't make sense today but might in the future?

Yeah, a hundred percent. That's something, again, that everybody prior to this was focused on exclusively: asset transfer. Even at the communication level, they're just building some derivative of a DEX or a bridge, and that's the focus. I get it, because that is definitely the large bulk of what's happening; ultimately, that is just the way things interact right now. But there are a lot of cases where generic messaging may make sense. There are applications that may need to share state, whether it's yield aggregation or rebalancing across these chains, where you're trying to trigger certain events across chains, whether it be governance metrics and how emissions are set. Unified governance is another one, where you're casting votes from all of these chains to one protocol, rather than having it live on a single chain.

Lending and borrowing is one that I talk about a lot. The current process for lending and borrowing is: you'll collateralize ETH on Ethereum, you'll borrow, you'll bridge, which is a fee, you'll swap, which is a fee, you'll farm some opportunity, you'll swap back, you'll bridge back, you'll repay the loan, and you'll get back your collateral. With something like this, with good generic messaging, you can collateralize on chain A, send a message to chain B confirming the collateral, borrow directly in native assets, farm, repay, and a message comes back and unlocks the collateral. All of the bridging, all the swapping, all of those fees are abstracted away. There are tons of uses within gaming too, again, of state shared between applications. But I think there's going to be a lot that doesn't explicitly involve "hey, we need pools of liquidity and we're transferring assets across."
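The lending flow Bryan walks through can be sketched as two chains exchanging generic payloads: lock collateral on chain A, message chain B to release a native-asset loan, then repay and message back to unlock. Here is a toy Python model; all names, the message shapes, and the 80% loan-to-value ratio are purely illustrative, not any protocol's real design.

```python
class Chain:
    """Minimal stand-in for a chain's lending-related state."""
    def __init__(self, name):
        self.name = name
        self.locked = {}     # user -> collateral locked on this chain
        self.borrowed = {}   # user -> native assets lent out on this chain

def send_message(dst_handler, payload):
    """Stand-in for the messaging layer: deliver a generic payload."""
    dst_handler(payload)

chain_a, chain_b = Chain("A"), Chain("B")

def collateralize_and_borrow(user, amount):
    chain_a.locked[user] = amount                # lock collateral on chain A
    send_message(                                # message authorizes native borrow on B
        lambda p: chain_b.borrowed.update({p["user"]: p["amount"]}),
        {"user": user, "amount": amount * 0.8},  # illustrative 80% LTV
    )

def repay(user):
    chain_b.borrowed.pop(user)                   # repay the loan on chain B
    send_message(                                # message unlocks collateral on A
        lambda p: chain_a.locked.pop(p["user"]),
        {"user": user},
    )

collateralize_and_borrow("alice", 100)
assert chain_b.borrowed["alice"] == 80.0
repay("alice")
assert "alice" not in chain_a.locked
```

The point of the sketch is that no asset ever crosses chains: only small messages do, which is where the bridge fees and swap fees in the current flow disappear.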

Yeah, that's awesome. It's green pastures here. One application I'm imagining is abstracting away complexity through wallet extensions: being able to interact cross-chain from a single interface. I think that would be huge.

We're very bullish on future wallet integrations.

Little bit of alpha there. 

So we went over LayerZero, which is the bottom layer, which doesn't do consensus, which does validation on chain at the destination, and all sorts of cool applications can be built on top of it. And you guys are building the first application that's going to sit on top of LayerZero: that is Stargate. Without further ado, I think we can dive into what Stargate is and how it differs from the many applications out there that are all trying to solve a somewhat similar problem. With that, I leave the mic to you guys.

Sure. For Stargate specifically, originally the goal was: we're going to make LayerZero, it's going to be amazing, it's going to be the best-case communications protocol, and a bunch of people are going to come build on it. We realized over time that there was a really big hurdle early on in solving a couple of pieces of this. I'll dive into Ryan's bridging trilemma in a moment, but first I'll explain at a high level why this came about. If you're a DEX, if you're Uniswap or SushiSwap or whoever it may be, there are a couple of ways you might approach building a cross-chain DEX. The first, naive way would be: you're going to have a pool of ETH on Ethereum and a pool of SOL on Solana, those two pools become your ETH-SOL LP, and you're going to send a transaction, it's going to land, and you're going to execute x*y = k.

This becomes problematic because most of these protocols right now don't have single-sided liquidity pools, so they'll need to incentivize those pools. x*y = k needs to be processed sequentially on the entire pool, and only one of the chains gets one-way execution; the other one needs an ack. It becomes fairly messy, with a bunch of changes to the core protocol. And of course they'd need ETH-AVAX and ETH-MATIC and all of these other pairs, so a huge amount of LP needs to be injected into the system to make this work in a functional manner. Another way you might approach this is: okay, we'll keep our existing pools and pool some bridged asset in the middle. We'll say that's USDC for now. So now we're going to have pools of USDC.

They still need to implement single-sided liquidity pools, they still need to incentivize those pools, they still need to do it for every pairwise pathway across the chains, but now they could do something like: ETH to USDC, bridge the USDC, then USDC to SOL, because those are accessible on both chains. Now we get to keep all of the existing LP without reinventing everything or adding all this liquidity, but we still need these pairwise pathways of USDC across all of these chains, so it's still a pretty big ask. The other thing is, let's say Uniswap implements this. Well, then SushiSwap, QuickSwap, and Trader Joe all need to reimplement this exact same liquidity transfer layer, and on top of that yield aggregators and all of these other applications. It's something so common that we really wanted to abstract it away.

We consider Stargate a key composable DeFi Lego, something that can sit in the middle, and Stargate does exactly that: it's a bridge, it deals in asset transfer, and it's a hundred percent native assets. Let's say, again, USDC pools on both sides. What this allows is that your Sushis, your Unis, whoever is building this cross-chain DEX, can now execute this swap-bridge-swap over Stargate in one single transaction from the source chain. It takes zero changes to their existing protocol, and they have zero risk to their liquidity, which is extremely important early on. What this means is that Stargate houses 100% of that risk. So if we're on SushiSwap: Sushi on chain A takes in ETH and spits out USDC. That's a complete transaction in the Sushi protocol, exactly how it works now. Then Stargate takes the USDC and bridges it over; all of the messaging risk and all of the liquidity risk is housed there.

The DEX on the other side, Sushi again, let's say, just takes in real USDC and spits out whatever asset; there is no risk to it at all. When any big application implements a new messaging protocol, the risk is that there's some, again, Poly Network situation or whatever it may be, some problem within consensus, that tells your application "hey, all that liquidity belongs here" when it wasn't real on the other side. That's always the big risk. This allows that to be completely abstracted away from every single application doing this step, which gives them an almost riskless integration. It's a direct integration into the UI; it's basically just a pull request. That's a really great property, and alongside it, obviously, you get all of these other things.
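The swap-bridge-swap composition can be sketched as three steps behind one source-side call: a local swap into a native stable, a debit and credit against the bridge's native-asset pools, and a local swap on the destination. This Python sketch uses fee-less constant-product pools and made-up numbers; it illustrates the flow only, not Stargate's pricing or fee logic.

```python
def cp_swap(reserve_in, reserve_out, amount_in):
    """Fee-less x*y = k swap; returns amount_out and the updated reserves."""
    amount_out = reserve_out * amount_in / (reserve_in + amount_in)
    return amount_out, reserve_in + amount_in, reserve_out - amount_out

def swap_bridge_swap(amount_eth, eth_usdc_pool, bridge_pools, usdc_sol_pool):
    """One source-side call composing: local swap -> native bridge -> local swap."""
    # 1) Source-chain DEX swap: ETH -> native USDC (a complete local trade).
    usdc, *_ = cp_swap(eth_usdc_pool["eth"], eth_usdc_pool["usdc"], amount_eth)
    # 2) Bridge native USDC: debit the source-side pool, pay out of the
    #    destination-side pool. All bridge risk lives in this step alone.
    assert bridge_pools["dst"] >= usdc, "destination pool must cover transfer"
    bridge_pools["src"] += usdc
    bridge_pools["dst"] -= usdc
    # 3) Destination-chain DEX swap: native USDC -> SOL (another local trade).
    sol, *_ = cp_swap(usdc_sol_pool["usdc"], usdc_sol_pool["sol"], usdc)
    return sol

out = swap_bridge_swap(
    1.0,
    {"eth": 100.0, "usdc": 400_000.0},          # ETH at ~$4,000
    {"src": 500_000.0, "dst": 500_000.0},       # bridge USDC pools
    {"usdc": 1_000_000.0, "sol": 5_000.0},      # SOL at ~$200
)
assert 0 < out < 20  # roughly 1 ETH worth of SOL, minus price impact
```

Notice that steps 1 and 3 are ordinary single-chain trades the DEX already supports; only step 2 is new, which is why the integration can be "just a pull request" and the DEX's own liquidity never crosses a chain boundary.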

We can dive into the bridging trilemma now. This is something that stemmed from Ryan and me having a long argument. There are three main properties within this trilemma: instant guaranteed finality, 100% native assets, and unified liquidity. And every bridge today only has one or two out of the three.

Let's repeat them slowly, because not everyone can be up to pace with you.

So unified liquidity is basically having a single pool. Right now, almost everything is pairwise liquidity; if it's in native assets, it's pairwise liquidity. You might have a pool of, I'll just keep using USDC, on chain A, and a pool on chain B. If you want to also connect chain A to chain C, that's a different pool: you're going to make a pool for chain A and a pool for chain C, then chain A and chain D, and so on. That's pairwise liquidity across the chains, and if you only have a billion dollars of USDC, the more chains you have, the thinner it's going to get spread. Unified liquidity is the concept of one single pool of liquidity on chain A tied to all of these chains simultaneously. Now, one of the reasons people don't do this in native assets is that you run into this issue: I send a transaction from chain A to chain B saying I've added some money here, so I'm going to subtract some tokens over there.

But before you get there, these other chains are also sending requests, and they drain the pool, and now the pool doesn't have enough money to fulfill your request. What happens there? How do you build around that? Does the user need to leave this flow, go to that chain, and pay to revert the transaction? Does the user pay 2x gas fees ahead of time to cover the revert if it's needed, and get refunded on the destination chain? Does the protocol revert it? If so, that's a very easy attack: you drain the pool and just spam it with transactions that it has to revert. There are all these kinds of issues with that, and that's one of the big reasons you don't see anything right now that's unified liquidity in native assets. Instant guaranteed finality is the concept of knowing at the source chain, before the transaction resolves on the source, that it will resolve on the destination chain.
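One way to see the race condition just described, and a credit-based way around it, is a toy model where each source chain is allotted a slice of the destination's unified pool up front, so a transfer can be accepted or rejected locally, before any message is sent. This is only a sketch of the general idea under made-up names, not Stargate's actual algorithm.

```python
class UnifiedPool:
    """One pool on the destination chain, shared by all connected chains,
    with per-peer credit lines carved out of it."""
    def __init__(self, balance, peers):
        self.balance = balance
        self.credit = {p: balance / len(peers) for p in peers}

class SourceChain:
    """A source chain that has been told its credit line up front."""
    def __init__(self, dst_pool, peer_id):
        self.dst_pool = dst_pool
        self.peer_id = peer_id
        self.available = dst_pool.credit[peer_id]  # known before sending

    def transfer(self, amount):
        # Finality is decided HERE, on the source chain: if the amount
        # exceeds this chain's remaining credit, reject locally and
        # nothing ever needs to be reverted on the destination.
        if amount > self.available:
            return False
        self.available -= amount
        # Simulate the message landing and the destination paying out.
        self.dst_pool.credit[self.peer_id] -= amount
        self.dst_pool.balance -= amount
        return True

pool_b = UnifiedPool(balance=1000, peers=["chainA", "chainC"])
a = SourceChain(pool_b, "chainA")
assert a.transfer(400) is True    # within chain A's 500 credit line
assert a.transfer(200) is False   # exceeds the 100 remaining; rejected at source
```

Under this scheme, other chains draining the shared pool cannot strand an already-accepted transfer, because each chain only promises what its own credit slice covers; that is what makes "instant guaranteed finality" compatible with one unified pool of native assets.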

Most bridges right now are structured so that you lock an asset on the source chain and mint a synthetic asset on the destination chain; later you burn that and unlock the original. This has instant guaranteed finality, because they always have the ability to mint the asset on the other chain, but it doesn't have native assets. One of the big problems with this, especially now that a lot more applications are going natively multi-chain, your Aaves, your Curves, your MIMs, is that there's just not always much use for wrapped assets. If you have four different bridges, you're going to have four different versions of the asset. You might have MIM natively deploy $10 million on Avalanche, but have $30 million bridged in over Anyswap or something. The MIM that ends up on that chain over the bridge is a different MIM than the one MIM natively deployed.

So you have to have some swap function and some liquidity between them, which somebody needs to add. And what's getting integrated into lending protocols or DEXes and all of that? Where does the liquidity live? If you've ever tried to bridge USDC from Solana to Ethereum over Wormhole, you get a Wormhole USDC that has no liquidity, no pairs, no use; you can't do anything with it. So it's really restrictive: the wrapped asset the bridge is issuing needs to be natively supported on a bunch of chains, which means all the other ones become more or less useless. It also limits what you're getting. They own the mint supply, they control the mint, and you're getting just a wrapped vanilla ERC-20. You can't have any custom functionality, like something that's rebasing or any of these other features that are built in.

You cannot do that over the existing bridges that work this way, and as more and more projects deploy natively across multiple chains, this is going to become a bigger and bigger issue. So hopefully I've described the three pieces of that trilemma okay: native assets, instant guaranteed finality, and unified liquidity. Those are the three trade-offs, and you're giving up one or two of them in almost every instance.

It's important to note that nobody does instant guaranteed finality unless they're minting. To have instant guaranteed finality and end up on the destination chain with native assets is something that no one does.

On the user end of things, I think a lot of users, especially when bridging today, are focused on the cost of bridging an asset from one chain to another. How does this trilemma solution you guys have here with Stargate create a better user experience, maybe relative to the costs a user might incur with existing solutions?

I will say a couple of things. First of all, all existing bridges prior to today really focus on the user: an individual coming in, wanting to bridge an asset to another chain, and bridging that asset. And that's fine; we have that as a use case, and of course we've thought about it. But we focus far more on the applications themselves. It is our thesis that 95%-plus of bridging is going to be driven by the applications that are building cross-chain, rather than by individual users. That's your DEXes, your yield aggregators, all of these different applications; they're going to be the ones interacting with these bridges in the middle, and they're going to be the ones driving by far the most volume. But as an individual, again, when you're getting these wrapped assets on the destination chain, one of the big issues is that there's no use for them.

You get the wrapped asset, and then you take it and swap it to another asset, paying another fee on the other side. Of course, if you have a real asset on chain A, USDC or whatever it may be, you just want the actual thing on the other chain. You don't want four bridges each with their own derivative of USDC, some of them different synthetics, where at the end of the day there's nothing to do with them except swap to the thing you actually want. Why don't we skip that step? Why are we charging an extra 30 bips to the user? Why don't we just bridge exactly what they want? So yeah, every user gets to skip that additional swap on the other end. If they're not trying to switch assets, if they're just bridging, at the end of the day you're basically paying a transaction on the source chain and a transaction on the destination chain.

That's just something that must be done; that is how any cross-chain transfer works. You have to resolve a transaction on both chains, because you have to inform them both. You'll pay gas on the source, you'll pay gas on the destination, and that's it. So hopefully that answers it. Really, you're going to eliminate the additional step of receiving some unwanted asset that hopefully has liquidity but sometimes doesn't have much of it, and you're saving yourself slippage and a swap fee.

I agree. I think, long-term, bridges are likely to be used directly only by power users. At the consumer level, eventually they get abstracted away from the user experience, so we don't have to go through this process of sending a transaction and praying that the bridge actually works and our assets end up on the destination chain. I think focusing on that application-to-application layer is really quite interesting, and frankly is a larger market in the long term.

Yeah, we completely agree. Again, most of the stuff now is viable from a user perspective; maybe there are some things you don’t want in there, but it gets the job done for the most part. At an application level, though, it’s almost impossible. To integrate any of these existing bridges into even the simple DEX case of swap-bridge-swap, you need to build an entire custom flow. You have to have the user switch wallets, have the user get the native chain asset in that wallet to claim the transaction, and then continue processing on the other end. It’s at least 15 clicks for the user, multiple wallet changes, multiple different types of gas assets, and all of this needs to be built into the application flow.

If you’re using something like Wormhole to do this, the application needs to build in listening to Wormhole to know when the transfer is hitting the destination chain. It’s really difficult at the application layer. For us, it’s literally a wrapped contract. That’s all you do. You wrap your contracts, you interact with Stargate, and it’s one single click, paid only in gas from the source chain. The user clicks once, and the asset they want drops into their wallet on the destination chain. They don’t need destination gas, they don’t need any of these things. If you’re swapping assets, doing this swap-bridge-swap, you’re going to click once, say in the Uniswap UI or Sushiswap UI, you’re going to click one single time, and the asset you’re going toward on the destination chain is just going to drop into your wallet there.

And if that’s gas, again, you’re not getting a wrapped gas token that you then have to claim with native gas and swap or unwrap into what you want. You’re getting real native gas. I’ll be publishing a demo, most likely this week or next, that shows me going between four chains on testnets, the equivalent of going ETH to Matic to BNB to AVAX and back, starting with only ETH and holding no asset on any of the other chains, just one click per hop: okay, ETH, now we’re here; this one, now we’re here; now we’re here and back. We think that’s just the way all applications are going to demand to work from the user perspective. I think it’s one of the most attractive properties, and it actually creates a composable layer between these chains, because it’s really tough to claim composability when it requires, again, 15 steps and all of this custom integration.

Yeah, thanks for that. This sounds fantastic, actually, and it’s in large part why we got very excited about LayerZero. I want to circle back on the trilemma that you mentioned, and especially the instant guaranteed finality part, because this is something new. To the best of my knowledge it doesn’t exist in other solutions. Would you like to expand on that point? Maybe Ryan can take this if he wants.

Yeah. I guess what we mean by instant guaranteed finality is that whatever you’re transferring is guaranteed to show up on the destination chain if it successfully commits on the source. Once that’s done, it’s completed; you don’t have to go check your wallet and see if it’s going to show up, it’s final. With other applications out there, there’s a chance the transfer could revert, because they don’t know if the assets exist on the other side. They have to wait for the block confirmations, and by the time they get there, as Bryan explained, there’s a race condition: they could get there and have to revert. Building on top of a bridge that doesn’t have instant guaranteed finality means an app developer has to handle the case where the transfer reverts on the other side. Who pays for that? It’s an attack vector, too, because people can just spam an empty pool and somebody has to pay for all the reversions. Does the user pay for that? Do they have to build it into their whole flow? Instead, with us, they can just send the payload and send over the token or whatever asset they’re moving.

They know it’s going to get to the other side, and they don’t have to worry about any kind of reversion.

And more specifically than just “get there”: they know that the asset exists on the other side, that enough of the asset exists. Obviously there are pools there, but the guarantee, as the transaction resolves on the source, is that there will be enough of that asset on the destination chain. Again, we’ll publish the paper on this shortly; we’ve talked about it in a Medium post, but the way this functions is actually quite interesting. It would have been novel and semi-interesting between just two chains, doing this pairwise. Again, nobody else is doing that right now, and we thought that was interesting, but Ryan and I had this huge argument where I was yelling that we had to decide between unified liquidity and instant guaranteed finality.

Like we could only have one or the other, and it was probably three hours of us yelling at each other. He came back to me three days later with this really elegant solution to have both and solve the entire trilemma. The solution we’ll publish has exactly this: one single pool of unified liquidity in native assets, with instant guaranteed finality to all chains. You’re talking orders of magnitude greater capital efficiency versus a pairwise solution, plus the property of instant guaranteed finality, which makes it completely composable. Because again, if it resolves on the source, there is a 100% guarantee it goes through on the destination, so no application needs to build around these edge cases. Their users don’t even necessarily need to know it’s happening in the middle. So yeah, we’re very excited to publish the paper.

I’ll be happy to talk about it more, but it’s really quite unique.
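To make the property concrete, here is a toy model in Python. This is emphatically not the actual Stargate algorithm (the paper was unpublished at the time of recording); it just illustrates the shape of the guarantee being described: the source chain keeps a local allowance against each destination’s pool and commits a transfer only if that allowance covers it, so delivery can never revert remotely.

```python
# Toy model of instant guaranteed finality -- NOT the real Stargate design.
# Each source chain holds a pre-agreed "credit" against every destination's
# liquidity pool and only commits transfers it can fully cover.
class SourceChain:
    def __init__(self, credits: dict[str, float]):
        # credits[dst] = amount of the destination pool this chain may spend
        self.credits = dict(credits)

    def transfer(self, dst: str, amount: float) -> bool:
        """Commit on the source only if the destination is guaranteed to pay out."""
        if self.credits.get(dst, 0.0) < amount:
            return False             # rejected up front -- never a remote revert
        self.credits[dst] -= amount  # reserved before the message is even sent
        return True

src = SourceChain({"avalanche": 1_000.0})
assert src.transfer("avalanche", 400.0)      # commits: funds are reserved
assert not src.transfer("avalanche", 700.0)  # only 600 credit left: refused at source
```

The interesting (and hard) part the real paper has to solve, which this sketch ignores, is keeping those allowances synchronized across many chains sharing one unified pool.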

Yeah, that’s fantastic. I’m looking forward to it. I also wonder about the development overhead of building applications on LayerZero, and on that note, could you maybe talk about the chains that Stargate is going to support initially, and what other chains it can support in the future?

Yup, absolutely. So in terms of overhead, it’s as we described: a LayerZero endpoint lives on each chain, and an endpoint is basically a library of on-chain smart contracts that deal with validation and messaging. If you’re building an application, all you do is implement two functions, send and receive. You’re literally sending a generic bytes payload with a small header that includes the destination chain ID, the destination contract address, et cetera. Everything else in there is structured however you, the application, want. Then you implement receive, which is: hey, you have a bytes payload coming to you, how am I going to interpret it, what am I going to do when I get that message? That really is everything there is to do. There are configurable pieces if you want, in terms of choosing your oracle, choosing your relayer, choosing block confirmations for each pathway, all of those other levers, but in terms of implementation it is meant to be dead simple.
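As a rough illustration of that send/receive pattern, here is a sketch in Python rather than Solidity. All names and the header layout are hypothetical; the real interface is defined by LayerZero’s on-chain endpoint contracts, and this only mirrors the shape Bryan describes: a small header (destination chain ID and contract) in front of an opaque application-defined payload.

```python
import struct

# Hypothetical sketch of the send/receive pattern described above; the real
# endpoint is an on-chain Solidity contract and its fields will differ.
def encode_message(dst_chain_id: int, dst_address: bytes, body: bytes) -> bytes:
    """Small header (chain id + 32-byte-padded contract address), then the payload."""
    return struct.pack(">H32s", dst_chain_id, dst_address.rjust(32, b"\0")) + body

def decode_message(message: bytes) -> tuple[int, bytes, bytes]:
    dst_chain_id, dst_address = struct.unpack(">H32s", message[:34])
    return dst_chain_id, dst_address, message[34:]

class App:
    def send(self, endpoint: list, dst_chain_id: int, dst_address: bytes, body: bytes):
        """The app's only job on the source side: hand the endpoint a payload."""
        endpoint.append(encode_message(dst_chain_id, dst_address, body))

    def receive(self, message: bytes) -> bytes:
        """The app's only job on the destination side: interpret the payload."""
        _, _, body = decode_message(message)
        return body  # a real application would decode and act on this

endpoint: list = []  # stand-in for the on-chain endpoint + oracle/relayer transport
App().send(endpoint, 43114, b"\x01" * 20, b"swap 100 USDC")
assert App().receive(endpoint[0]) == b"swap 100 USDC"
```

Everything between send and receive, oracle block-header delivery, relayer proof submission, validation, is the endpoint’s job, which is exactly why the application surface stays this small.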

We’ll publish hopefully really beautiful documentation that walks through a lot of worked examples and makes this as easy as humanly possible for people. In terms of chains, we are launching EVM first. There’s a caveat in that we need the oracles to be able to support passing the block header along each of these pathways. We’ve been working quite closely with Chainlink and Band, and they say they should be able to support this, so we’re really optimistic we’ll have most of these from day one or very early on. From the EVM ecosystem we’re thinking Ethereum, Avalanche, Binance Smart Chain, Polygon, Fantom, Arbitrum, Optimism, the usual suspects, right? Our goal is to launch with the majority of those chains, assuming everything is ready to go both on our end and theirs. EVMs are quite easy once you’ve done the first, right? So after that, say over the next nine months, you’re talking most of the primary non-EVMs.

So Solana, Terra, Polkadot, Cosmos, Algorand, et cetera, and more of those are popping up every day. That’s what we’re looking at. When you’re doing non-EVMs, there’s more of a technical lift because you need to do proof translation: you need to be able to take a Merkle Patricia proof and validate it in Rust, and you need to be able to take something like Solana’s proof of history and validate it in Solidity on the EVM. It’s just more manual than going between EVM chains, where you can largely take the endpoint as-is and deal with small nuances.

So I’m going to ask you guys a bit of a degen question, because I think we’ve got a few degen listeners on our podcast: when token? And more specifically, what is the token structure here with regard to LayerZero? What is it used for, what can people do with it? And then, is there going to be a token specifically for Stargate, and how might that be used within the application?

Sure. I will say that all of this is a very hypothetical conversation; there is no guarantee of any token. Right now everything has been purely technology-based, we’re building a protocol. You can imagine, as with most things in the system, that maybe something like that would help align incentives across the system. There are a couple of levers when you’re building a system like LayerZero in terms of what that might entail. The oracles have their own security model, and they’re getting paid for doing their job; they each have their own structure. Then you have the relayers performing this job of taking a transaction proof out of a full node and submitting it to the destination chain. So among the primary parties, possibly there’s a very small fee around messaging, on a per-message basis, and if that were the case, probably the bulk of it would go to the relayer.

Maybe a very small percentage would go to the network. The relayer would need its own incentive structure: the relayer is getting paid to perform this action, and you would probably want some bonding system. Maybe there’s an insurance fund on each relayer, and of the fee coming in, some would go to the insurance fund and some to the relayer. An insurance fund with just the relayer staking into it is fine, that’s an okay bonded system, but it’s probably better if users are also able to bond into these things and participate in securing the network. Obviously they would be rewarded for that in some manner, probably through part of the relayer fee being allocated to the staking pool or the insurance fund living on the relayer. That gets you to, let’s say, the bulk of a solid incentive structure.

Obviously, there’s probably also some incentive for the contracts dealing across these chains to transact in a native token rather than in source gas, maybe a discount for holding it or using it as the payment mechanism. There are a bunch of different levers, but primarily you have the applications transacting across these chains, or whoever is implementing that, and they’re probably going to pay some fee for using the system. Of that fee, you want the bulk to go to securing the system. And how do you secure it? You have these relayers operating, and then potentially the broader user or token-holder base supporting that through bonding, with the relayers helping secure the network via these insurance funds.
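As a back-of-the-envelope sketch of the hypothetical fee flow Bryan describes, here is the split in Python. Every percentage is invented for illustration; no fee structure or token was confirmed in this conversation:

```python
# Purely illustrative split of a hypothetical per-message fee. None of these
# percentages come from LayerZero -- they only mirror the structure described:
# the bulk to the relayer, a sliver to the network, and part of the relayer's
# cut diverted to an insurance fund that stakers/bonders back.
def split_message_fee(fee: float, network_share: float = 0.05,
                      insurance_share: float = 0.25) -> dict[str, float]:
    network = fee * network_share
    relayer_gross = fee - network
    insurance = relayer_gross * insurance_share
    return {
        "network": network,
        "insurance_fund": insurance,
        "relayer": relayer_gross - insurance,
    }

parts = split_message_fee(1.00)
assert abs(sum(parts.values()) - 1.00) < 1e-9  # nothing lost in the split
```

The insurance-fund share is the piece that would flow back to users who bond in, which is how the broader token base would participate in securing the network under this hypothetical design.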

Stargate specifically is different. LayerZero doesn’t have or need liquidity; as a messaging protocol, it doesn’t need pools of liquidity across any channel. It’s very important to understand that it’s a messaging protocol. It’s not a bridge, it’s not a DEX, it’s not a lending protocol; those are things built on top of LayerZero, and there may be lots of bridges and lots of DEXs all implemented on top of it. Stargate itself is specifically meant to solve liquidity transfer for all of the applications that are integrating across LayerZero and want to leverage it rather than recreating it themselves. Obviously they can recreate it themselves, each one can house their own, but this is something easy and composable for people to use. For that, you can imagine a much more traditional structure, where you have some emissions for providing liquidity, and it’s basically this trade-off, with things moving these days, let’s say the DeFi 2.0 movement, where the protocol is providing emissions.

The people participating are providing liquidity, and users are swapping against that liquidity, which generates fees for the protocol. These days, things are moving much more toward protocol-owned liquidity, with the protocol owning maybe a piece of this liquidity, a piece of these fees, et cetera. There are quite a few interesting dynamics that can come into play there. If, hypothetically, these kinds of things were being worked on or discussed, I would say that might be the way you’d look at building these systems.

Hypothetically speaking, right. That’s great. One of the things we frequently ask on ventures, when we’re doing due diligence on plays, is what the biggest challenges are for teams. So I’m going to put you guys on the spot here and ask: what do you foresee as your biggest challenge over the coming months?

Yeah, that’s a great question. I think we’re really fortunate. He doesn’t hype himself up enough, but Ryan is just unbelievable. I tweeted the other day that when you’re talking about the top Solidity devs in the world, people don’t know it yet, but once they see a lot of this stuff, they’ll see he’s a true savant. He’s one of my closest friends; we’ve worked together for 15-plus years. And we’ve attracted some unbelievable talent to the team, incredible engineers across the board: people coming from the office of the CTO at Red Hat, people who were among the largest open-source contributors to a bunch of projects, one of the largest open-source stories in the world. These people are really deep into this stuff, but Ryan just stands apart.

So on the technical side, we’re actually probably more comfortable than most projects are at this stage; a lot of teams aren’t as technically equipped as we’ve been. That said, you never quite know how the announcement stage will go. We’re really excited about what we’re building, and we attracted an unbelievable group of partners and VCs. Obviously you guys were part of that, and we’re super happy to have you on board; you’ve been amazing. But we have one single tweet on Twitter, maybe two now that Stargate’s announced, and it’s 30,000-plus followers on Twitter, 10,000-plus people in the Discords and Telegrams. Things have just grown and escalated. Before this, I didn’t realize it at the time, but it was a real luxury to just build in stealth.

We built in stealth, completely heads down, for almost three quarters of a year. We didn’t announce anything to anybody until all of the code was completely written and the protocol was working: we’re testing it on testnet, it’s frozen, it’s under audit. LayerZero is just completing its third audit now; everything has gone great, so we’ll have three completed audits, and Stargate is in now for two audits. Basically we’re just waiting for those audits to wrap. There’s ongoing technical work, a huge amount of stuff we need to do in the future, but the core launch for EVM is done, and that’s great. What we need now is just scaling. We were six people when we announced; I think we’re up to eleven now. Between the community side and the sheer amount of inbound interest we’ve gotten from projects on the BD side, there are just so many things that need to get done.

Everybody right now is pulling 18-plus-hour days. Everybody is just in it, and it’s been amazing, and I love everybody for that. But that’s not going to be able to last forever, with everyone going seven days a week, 18 hours a day. So we need to fill out, we need to scale, we need to grow, and I think that right now is our biggest challenge: just getting ourselves up to a size that can handle where this has already gone, and it only seems to be growing more rapidly.

Yeah, you guys need to get some sleep. That was definitely one of the things we saw really early on with you guys: your grind, and also your competence within the technical fields, being able to explain, frankly, these really complex subjects to a lowly ape like myself. I think that’s a really strong skill to have, especially when you’re building something so technically robust and complex. On that note, where can people contribute, participate, and help you guys out in some of the areas where you may need help?

Yeah, come to the Discord, come to the Telegram. As mentioned, it’s a bit of a mess right now, it’s hectic in both, but I’m there every single day. You’ll find me at 4:00 AM having conversations with anybody who wants to talk about the technology, what we’re working on, and ways to get involved. And if you’re interested in coming to work with us, we’re happily accepting new applicants. We’re going to be releasing a huge amount of information over the next six weeks, rolling out more and more stuff, rolling out documentation. If you want to build, if you want to just come play with Stargate when it first releases, anything you want to do to get involved, there are a ton of ways. We’re bringing in people to make sure the community has more ways to get involved, because the community has been amazing so far.

We’ve seen thousand-person groups spin up in Mandarin, Turkish, Korean, Japanese; there have been a lot of blog posts in Japanese, with people really diving into the technology, which has been really amazing, because this is not something we’ve catalyzed at all. We have no presented structure for rewarding the community or anything like that; this is completely organic interest, and it’s been amazing to see. We want to find ways to grow together with the community and engage more, and we’re finding more and more ways to do that, bringing in people whose sole focus is exactly that. So yeah, if you want to get involved, just come hang out, ask questions, let your intellectual curiosity guide you, and hopefully we can build some cool stuff together.

Awesome, that was a great closing note, guys. We really appreciate you taking the time today to join, and I’m looking forward to the future of LayerZero and Stargate and the many possible applications that I think will be built on top of the infrastructure you have here. I appreciate your time today. Thanks, guys.

Show Notes: 

(00:00:00) – Introduction.

(00:00:47) – Guests’ take on the future of blockchains.  

(00:03:25) – What is LayerZero?

(00:09:00) – Expected oracle-relayer combinations.

(00:12:51) – The architecture and efficiency of the ultra light node. 

(00:15:06) – The opportunities for on-chain generic messaging.

(00:18:01) – Overview of Stargate. 

(00:23:00) – The bridging trilemma.

(00:27:46) – Stargate’s user experience. 

(00:33:57) – Instant guaranteed finality on Stargate. 

(00:37:22) – Chains that Stargate will support. 

(00:40:30) – LayerZero and Stargate tokens? 

(00:45:23) – LayerZero’s biggest challenge. 

(00:49:18) – How people can get involved with LayerZero.

(00:51:18) – Closing thoughts.