L2 Peace – Scaling Ethereum with Vitalik Buterin and L2 Leaders

Watch Vitalik Buterin and Layer 2 leaders discuss Ethereum scaling solutions in this insightful video. Explore the future of Ethereum with Layer 2 advancements.

Vitalik:

Okay, yeah. So hello, everyone, and hello to all of our wonderful L2 crew and anyone who is currently watching this. It’s good to see the greater Ethereum-verse come together again. We’ve done this before in person at Devconnect, and we’re doing this one online, so I’m looking forward to seeing these kinds of amazing cross-L2 chats become a regular thing. I think we’ll start off with a fairly simple question for any team that wants to answer it for themselves: from a technical perspective, whether it’s decentralization or functionality or whatever else, what milestones are you particularly excited about your project reaching in 2024? Anyone want to start?

Jordi:

I can start if you want.

Vitalik:

Perfect.

Jordi:

My main battle here for decentralization is forced transactions. I mean, it’s a little bit disgusting on my side that we have this designed, and it could have been enabled from the first day, and it’s still not enabled, mainly for security reasons. The problem with forced transactions is that a forced transaction can be anything, so this opens up the attack vector space very much. But the good news is that in the next version we are going to enable them, but we are going to limit forced transactions to just withdrawal transactions. This is not the perfect thing, but it reduces the vector space a lot, and that allows us to enable it.

So, forced transactions. And here maybe there’s a philosophical discussion: it’s not a perfect thing, because I think the goal of these decentralized systems is not only censorship resistance. I would say it’s more the concept of universal systems, systems that anybody can use. With forced transactions, nobody can steal your funds, you can always recover them, but effectively what the system can still do is kick somebody out of the network. And this is not what I like best, but I think it’s better than a fully centralized system. Yeah, this is the main step that we are fighting for, and let’s see if we can enable it as soon as possible. I want to promise nothing at this point, but it’s as soon as possible; it’s just a matter of getting confidence in it.

Vitalik:

Okay.

Alex:

I’m happy to go next.

Vitalik:

Sure.

Alex:

What I’m really excited about this year is that this is going to be the year when ZKsync decentralizes, and I believe decentralization of L2s is a really big topic. We still have a lot of centralized components: centralized sequencers, centralized provers; security councils are not in perfect shape and have credibility questions, all of those things. We really need to polish them to bring the promise of actually scaling Ethereum, with all of its values, into the L2 space. And what I’m really excited about in the ZKsync decentralization roadmap is that we’ll have many chains powered by ZKsync, we call them hyperchains, and they should be connected in a bridging network. I believe that is eventually something we will all converge on in the L2 space: the bridgeability between all L2s has to be completely seamless and trustless.

And we just want to pioneer that with ZK technology; I think ZK is really a crucial component there. And I’m really excited that we can accomplish that by decentralizing not at the level of a single sequencer, and not by doing something like shared sequencing for multiple chains, which would mean that all of them are using the same providers and pooling power together, but by truly decentralizing, so that many different teams can run many different versions of consensus, of different protocols to aggregate proofs, et cetera. And all of them still inherit the full security and trustless properties of Ethereum, and censorship resistance, down from layer one. So this is the design space that we’re super excited about.

Vitalik:

Got it. Anyone want to go next?

Toghrul:

I can go next.

Vitalik:

Sure.

Toghrul:

I think for us, and for me personally, what excites me the most is the steps that we’re taking towards removing the multisig, or just having delayed upgradeability without the protocol having any capability of instant upgradeability. And we just announced the first step towards it: an SGX-based multi-prover that is going to go live on mainnet soon. I’m not sure if we’re going to be able to actually remove the multisig this year, it’s still quite risky, but we’re taking steps towards it, and the multi-prover is a good first step. And a side effect of having a multi-prover is that we can decentralize more easily, and that’s also going to happen; we’re planning for this to happen later this year. So we’re working towards decentralization, but it’s going to be a side effect of the fact that we’re trying to improve security.

Vitalik:

Got it. Maybe two people who have not gone yet, but a question in a completely different direction. So I think we’re excited about layer twos because layer twos scale Ethereum, and scaling Ethereum reduces fees and enables other applications. So what are some particular applications that you guys are excited about becoming possible that haven’t been possible on Ethereum so far?

Harry:

Yeah, happy to maybe jump in here.

Vitalik:

Sure.

Harry:

Yeah, I think maybe I can even loop this back into the previous question, two in one. I think one of the things we’re seeing a bunch of is a lot of innovation in blockchain gaming, which has historically been an area that’s quite bottlenecked on capacity and on functionality, in terms of actually being able to have real experiences. And there’s a combination of a number of things that we’re working on and excited about in the next year. One of which is Arbitrum Orbit chains and the proliferation of essentially Ethereum dapp chains, and the concept of dedicated capacity.

Certainly 4844, and also other interesting exploration in alt-DA, in order to lower costs enough to actually have these things be feasible. And then the work we’re doing on Arbitrum Stylus to lower the cost of execution, get computation requirements down, and allow game developers to come in and use languages that they’re more used to, and depend on libraries that might not already exist on Ethereum because they’re not that useful for existing applications. All of those things coming together form a perfect mix of what’s needed to enable new functionality.

Vitalik:

Amazing. Yeah, go ahead.

Bobbin:

I think one of the things that we’re trying to enable is privacy, so that you can easily have privacy-preserving smart contracts and don’t have to expose your balances and all that. So that’s one of the things. And more broadly, beyond privacy, it’s off-chain transactions, meaning you can prove, using a ZKP, that a transaction executed correctly, and then just send the ZKP to the network so that the network can verify it. This is needed for privacy, because you need to prove local state transitions for privacy purposes, but it may also enable scalability, where you can execute some very complex code on the client side, prove it, and then send it to the network, which then just needs to verify a short proof that it worked correctly.

Vitalik:

Yeah, actually, Bobbin, since a lot of people listening to this probably know Polygon as basically a set of EVM clone chains, but you’re working on some pretty different stuff around privacy: maybe just introduce what it is you’re working on, what kinds of technology are being used, and how it fits into the Polygon universe in general?

Bobbin:

Yeah, so Polygon Miden is one of the components of Polygon overall. We announced this aggregation layer yesterday, and Miden will be one of the chains that plugs into it. The nice thing about the aggregation layer is that it doesn’t have to be all EVM chains; it can be different chains that use different execution environments and have different properties. So one of the things that we’re trying to solve with Polygon Miden specifically is that we’re using a different state model from Ethereum, to enable privacy as one of the things, but also to make parallel transaction execution work more naturally, to enable this off-chain transaction execution, as I mentioned, and a bunch of other things.

And the way we do it, as I mentioned, is by changing how the state works: we’re doing a hybrid between UTXO and account-based state models, so you kind of have the benefits of both accounts and UTXOs. To make a transfer, for example, to move assets between two accounts, you actually have two transactions, where one transaction creates a UTXO and another transaction consumes the UTXO. That allows you to decouple updates to one account from updates to another account, and that makes a state transition locally provable. So changing the state model is one of the bigger things, and that enables, as I mentioned, both privacy and other interesting things. And with this aggregation layer, we will be able to prove execution of Polygon Miden and submit it to the aggregation layer; this will get rolled up together with the EVM-based chains into a single ZK proof, and that proof will go to Ethereum. So it’s a pretty exciting concept of how this all can come together.
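
To make the two-transaction flow Bobbin describes concrete, here is a minimal sketch of the hybrid account/note model. The names and structures are hypothetical, not Miden’s actual implementation: one transaction debits the sender and emits a note (the UTXO-like object), and a second, independent transaction consumes the note, so each account’s state transition can be proven locally.

```python
# Illustrative sketch only -- not Miden's actual data structures.
from dataclasses import dataclass, field

@dataclass
class Note:                      # UTXO-like object carrying assets "in flight"
    asset: str
    amount: int
    recipient: str
    consumed: bool = False

@dataclass
class Account:
    owner: str
    balances: dict = field(default_factory=dict)

def tx_create_note(sender: Account, asset: str, amount: int, recipient: str) -> Note:
    """Transaction 1: debit the sender's account and emit a note.
    Only the sender's state changes, so this transition can be proven
    locally without touching the recipient's account."""
    assert sender.balances.get(asset, 0) >= amount, "insufficient balance"
    sender.balances[asset] -= amount
    return Note(asset, amount, recipient)

def tx_consume_note(receiver: Account, note: Note) -> None:
    """Transaction 2: the recipient consumes the note and credits their
    own account, again as an independent, locally provable step."""
    assert not note.consumed and note.recipient == receiver.owner
    note.consumed = True
    receiver.balances[note.asset] = receiver.balances.get(note.asset, 0) + note.amount

alice = Account("alice", {"ETH": 10})
bob = Account("bob")
note = tx_create_note(alice, "ETH", 3, "bob")   # Alice's local state transition
tx_consume_note(bob, note)                      # Bob's independent state transition
assert alice.balances["ETH"] == 7 and bob.balances["ETH"] == 3
```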

Vitalik:

Amazing. Thank you. Eli, I feel like we haven’t heard from you yet. What’s exciting in StarkWare land, and what applications do we want to see?

Eli:

Yeah, so I think the thing that I’m most eagerly awaiting is the good UX that will come from applications building on Starknet using the native account abstraction that is common to all Starknet addresses. Users will get a much better UX that will feel… well, it already feels very web2 and familiar; those who install wallets like Braavos or Argent already feel this. So the main thing, I think, is that the UX will look much nicer for users on Starknet, which will enable a lot more adoption by people who are not currently crypto-savvy. And this will be coupled with enhanced performance in terms of throughput and low costs.

Also, thanks a lot to 4844, which is coming soon. And congrats on yet another delivery that is going to impact us all in a positive way. So the main thing, I would say, is applications making cool new use of account abstraction and better UX, enabling totally new things. Now, what these things are going to be, I don’t know, but I’m eagerly awaiting Influence and KUBO and the other very cool applications at the vanguard of this movement.

Vitalik:

And Nicolas, what’s your take on both of these?

Nicolas:

Yeah, so far for ZK rollups it’s still compression, but we’re going to add to it. Basically it’s a fight for cost and transaction prices, and it’s not a fight that is over. Compression allows you to divide the price by five, usually, and that’s pretty good, just for calldata. 4844 is incredibly important, but I think it’s not going to be the end of the story: I think this data capacity is going to be used up very quickly, so we’ll need an extended number of blobs, or more solutions, something like in a year or two. So I think we should anticipate this.

And once we get there, which basically means very low transaction prices, a lot of use cases will open up. I totally agree on account abstraction and on UX; I think it’s absolutely key. And UX is actually linked to security as well: sometimes, in order to save on fees, you go too fast. I’ve seen some patterns like: you authorize the access, you do your transaction, and then you remove the access. A lot of things are linked to transaction prices, and they will be quite important in terms of UX and security, in my opinion.
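
As a rough illustration of the compression point Nicolas makes above, here is a small sketch with made-up batch data, using the pre-blob calldata pricing of 16 gas per nonzero byte and 4 gas per zero byte. Real rollup batches compress well because transactions share structure; the exact ratio varies.

```python
# Why compression cuts L1 data costs: rollups pay per posted calldata
# byte, so shrinking the bytes shrinks the fee roughly proportionally.
import zlib

def calldata_gas(data: bytes) -> int:
    # 4 gas per zero byte, 16 gas per nonzero byte (pre-4844 pricing).
    return sum(4 if b == 0 else 16 for b in data)

# Hypothetical, highly repetitive batch standing in for real tx data.
batch = (b"\x00" * 12 + b"transfer(alice,bob,100)") * 200
compressed = zlib.compress(batch, 9)

print(f"raw:        {len(batch)} bytes, {calldata_gas(batch)} gas")
print(f"compressed: {len(compressed)} bytes, {calldata_gas(compressed)} gas")
```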

Vitalik:

Got it. Thank you. Yeah, so let’s move on to another theme that I think is really important, which is customization versus standardization. The various layer twos here have a lot of things in common: in not all but most cases the EVM, many forms of cryptography, a connection to Ethereum, potentially other things. But at the same time, all of you are exploring different ways to extend the functionality that you’re providing to your users. So how do you see the balance between extension and standardization? What are some things that are particularly important to standardize, and what are some areas where you’re excited about your project uniquely innovating?

Nicolas:

I can start on this one. Okay. The way we see it, we would like to be as standardized as possible, as much as we can. So we would really like the EVM to continue to evolve, not to be stuck in a logic of, okay, actually everybody diverges in a different way. And if we don’t have to diverge, we’ll follow the protocol exactly as it is.

Alex:

I agree with this, but I also think that it’s our job to innovate and try things out that can later be implemented for Ethereum itself and for all layer twos. It’s too risky to do all of the experiments on layer one, because of the massive responsibility of the base chain. We, on the other hand, are much more free to try things out, and if they don’t work, then they don’t work. So one thing ZKsync has done is native account abstraction; I think we’re the only EVM L2 that has native account abstraction, and it really leads to much better UX, where you can pay fees in different tokens. In the end, we convert all of these fees automatically, programmatically, so that we can pay for the block space on Ethereum, but the end user should not be bothered with that.

And it also enables multiple other things, and this leads to much better security, because all of a sudden you can construct protocols that make it very visible, very clear to the users, what the impacts of their actions are going to be. When I’m signing a transaction, I should not have to worry that this transaction can steal all of the funds from my wallet. And the only way to do that is to move away from infinite approvals and things like this, and actually create strong invariants, which can be enforced by account abstraction, which has to be implemented natively. So that’s one example, and we’ll see more examples where I think we should innovate. And before we innovate, we should obviously coordinate, and we have really nice coordination channels: we have the rollup improvement proposal group, we have the chats. I think we should be using them really, really heavily, but still be courageous enough to innovate and try new things.
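
As a sketch of the kind of invariant Alex describes, here is hypothetical validation logic, not ZKsync’s actual API: an account that runs its own validation code can cap what a single signature is allowed to move, instead of relying on infinite approvals.

```python
# Hypothetical account-level validation rule enforced by native AA.
from dataclasses import dataclass

@dataclass
class SpendPolicy:
    max_per_tx: int          # hard cap a single transaction may move
    allowed_token: str       # the only token this policy covers

def validate_transaction(policy: SpendPolicy, token: str, amount: int) -> bool:
    """Runs inside the account's own validation step, so the wallet can
    show the user exactly what a signature can and cannot do."""
    if token != policy.allowed_token:
        return False
    return amount <= policy.max_per_tx

policy = SpendPolicy(max_per_tx=100, allowed_token="USDC")
assert validate_transaction(policy, "USDC", 50)        # within the cap
assert not validate_transaction(policy, "USDC", 1000)  # cannot drain the wallet
```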

Vitalik:

Got it.

Toghrul:

I think for us, our goal is first and foremost to maximize compatibility with Ethereum, so we want to be as compatible with the EVM as possible, and with the Ethereum code base in general. And then from there, there’s nothing stopping us from adding features on top of it that Ethereum or the EVM doesn’t have. So as long as we support the standard features that the EVM has, we can extend them and add features that the EVM doesn’t have. But for now we’re focusing on compatibility with the EVM and Ethereum, and once we achieve that, we’re going to move on to experimentation and adding things the EVM doesn’t have, or that Ethereum possibly isn’t considering adding for the foreseeable future.

Vitalik:

Yeah. One specific thing to zoom in on: the state trie hash function, right? So for the ZK rollups here, what are you guys using? Are you cloning the L1, or do you have some ZK-friendly thing at the moment?

Toghrul:

We use Poseidon.

Vitalik:

Poseidon? Okay. And Linea?

Nicolas:

MiMC.

Vitalik:

MiMC and-

Toghrul:

MiMC with complicated logic.

Vitalik:

Jordi.

Alex:

We are using-

Jordi:

We are using Poseidon too, with Goldilocks.

Vitalik:

Okay-

Bobbin:

We’re using Rescue, so yeah.

Vitalik:

Okay, cool. Yeah, so I think one interesting question here is that the Ethereum L1 is currently actively planning to move to Verkle trees, because we care about reducing witness sizes. And one question that I wonder if it’s worth thinking through is: do we think the goal is for both the L1 and every L2 to eventually converge on one particular SNARK-friendly state trie design, or might you see legitimate reasons for different systems to diverge on this in the long term?

Jordi:

I think it would be good to have the same tree across the board; this would simplify a lot of things. I’m not sure if this should be a Patricia trie, but I mean, Merkle trees are an option there. It’s important to mention that the proving systems are evolving a lot, and things that were not… I mean, doing something that was not Poseidon or some hash-friendly function was at some point impossible, or difficult to think about, or hard to do. But every time we see that more things are possible. So currently, for example, thinking of Verkle trees: that’s using quite a good prime field, even if it’s a big one, and it doesn’t look like a bad option. There are options, and I think the evolution of zero knowledge can change a little bit what the best thing is. At this point, I would say most of the projects chose something that was easy to build, something that works. But maybe now we can start thinking about being a little bit more flexible here and unifying the state trie.
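
For readers following the trie discussion, here is a minimal sketch of why the hash function is a pluggable design choice: a Merkle root computation parameterized over its pair-hash. SHA-256 stands in below because the algebraic, SNARK-friendly hashes the panel names (Poseidon, MiMC, Rescue) need a finite-field implementation; a rollup would swap one of those in.

```python
# A binary Merkle root parameterized over the hash used to combine nodes.
import hashlib
from typing import Callable, List

def sha256_pair(left: bytes, right: bytes) -> bytes:
    # Stand-in combiner; a ZK rollup would use Poseidon/MiMC/Rescue here.
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves: List[bytes], h: Callable[[bytes, bytes], bytes]) -> bytes:
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

accounts = [b"alice:7eth", b"bob:3eth", b"carol:0eth"]
root = merkle_root(accounts, sha256_pair)  # swap sha256_pair for a ZK-friendly hash
print(root.hex())
```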

Eli:

Yeah, I want to say on this that we will always be using the most efficient proving technology that there is, especially on the scaling side, and everything points towards very small fields. So to the extent that, as we do with the KZG blobs, we can use them, we adapt. But to the extent that standards could also support a number of fields, and especially small fields, I think that would be far better from the point of view of scaling, just because the big bottleneck is the prover, and the prover gets much more efficiency over small fields. So be it a Verkle tree or something else, going with a 381-bit field or even a 256-bit field will be much less efficient, mathematically and in engineering terms, than using a small field. Now, as we did with KZG, we’ll work with anything that is a standard, but if the standard can be open to things that also support small fields, we’ll just get much, much better efficiency, by two to three orders of magnitude.

Vitalik:

So with these ZK constructions in general: I remember back in the good old days, when I was visiting you up in Haifa and you were explaining a lot of this stuff to me for the first time, there were basically a couple of ZK protocols out there. But now we have all of these distinctions: is it KZG? Is it IPA-based? Is it a STARK? Are we using Halo? Are the STARKs over the Goldilocks field? Are they over a binary field? Are we doing some lookup singularity thing? So do you guys see the space stabilizing anytime soon? Do you think five years from now at least the sane engineers among us will be able to look each other in the eye and all agree that one thing is best? Or do you think we’ll continue having ten different dueling constructions for a long time?

Eli:

I think we’ll have a lot of dueling constructions, but I do think that along certain parameters, especially if you look at numeric things, some stuff will dominate. So it’s going to be very hard to get a single proof with a smaller byte size than a Groth16 proof; maybe it could go down by a factor of two or something, but it’s very close to optimal. I think that in terms of proving time, some construction that uses polynomials, either univariate or multivariate, over a very small field will probably have the fastest proving time. Let’s see, what would be a good analogy? If you think about sorting algorithms: there will always be more sorting algorithms, but there’s a small number of very popular ones, and none of them is the very best along every dimension. But there is a very small number of them, with well-known trade-offs. And so I think, I don’t know if in five years, maybe in twenty years, it will be something like that: there’s always ongoing research about all kinds of problems, but along several important parameters there’ll be a sort of consensus as to the tools at hand that work best.

Vitalik:

Okay. Any other-

Bobbin:

I think-

Vitalik:

Yeah, go ahead, Bobbin.

Bobbin:

Yeah, I think probably within five years is the right timeframe, not because people will stop innovating, but because I think making frequent changes or significant changes will matter less. I think we’re almost at the point where proving is fast enough or good enough for most purposes that it doesn’t make sense to change it every year or every two years. From a cost perspective, we’re basically there: at least in the blockchain context, proving costs are almost negligible compared to all other costs. From a latency standpoint, we still have some ways to go, but I think within five years we’ll get to the point where people will say, okay, this proving system is good enough, I don’t need to change it next year, because it doesn’t make a material difference to the overall system design. Obviously people will keep innovating, and maybe we’ll change things more slowly, but I do think we’re almost at the point where proving cost and speed don’t matter anymore.

Alex:

I agree with Bobbin. And I just want to add that I believe it’s really important to standardize on a Verkle tree design that is compatible with ZK efficiency, one that we can all embrace, to make it a lot easier to build type-1 zkEVMs. Then you can use all of the L2 projects basically as light clients for layer one. And yes, on the efficiency side: at ZKsync we’re using SHA-256; we’re not using Poseidon, we’re not using MiMC, we’re not using any algebraic hash functions. Even though SHA-256 is slightly more expensive to prove, we are currently the lowest-cost, the cheapest zkEVM on Ethereum. So it doesn’t really matter, because the proving is not the [inaudible 00:27:39].

Vitalik:

Okay. So just zooming out a bit from some of the math, for those who are a bit overwhelmed by the weeds here: it sounds like lots of amazing tech development is still rapidly ongoing, and we value standardization, but it’ll happen over time. As the years pass and we flip over more cards and discover what technologies are waiting, we’ll start to converge on being able to standardize more and more.

Eli:

I haven’t heard anything from Kelvin and I’d like to hear-

Vitalik:

Yeah.

Kelvin:

Yes, the Optimism ZK… no, I’m kidding. I don’t know, I’ve got to be fed a question to answer.

Vitalik:

A bit, actually. How about we go into probably one of your favorite topics, and also something that’s, I think, of interest to lots of others in the L2 ecosystem as well: how do you see L2s’ role in funding common infrastructure across the Ethereum ecosystem?

Kelvin:

Oh, boy. I would say, and this is a personal opinion, that I think this is one of the big things that’s missing from L1; it’s hard to implement on L1. Obviously it’s a very challenging problem, but that, in my opinion, doesn’t mean it’s not missing. And ultimately I think it is sort of the L2s’ responsibility to fund these things, because nobody else is, right? That’s sort of the fundamental problem.

Vitalik:

Anyone else wants to give their thoughts on that?

Nicolas:

I just agree with what Kelvin said.

Vitalik:

Okay, amazing.

Harry:

Definitely agree. Also, I think it’s interesting in that it’s a very hard problem, and trying to figure out experimentation and resistance to capture is really, really hard. It is very cool watching what the Optimism Collective has been doing there, just because I know there’s iteration; it’s not an easy one to crack, but it’s one that’s extremely high value. And in general, at least from my perspective, the thing that’s most valuable now is exploring a heterogeneous set of different possible mechanisms, just because of how hard it is to actually know how these things will play out and evolve in practice.

Kelvin:

Yeah, I think maybe even the mental model on my end is that it’s not just necessary, it’s in a lot of ways fundamental to what we’re trying to achieve. We talk a lot about the technology, but if we build this technology in a way where the only people that can fund it are people with an enormous amount of capital, then we’ve sort of just created the same system again. If we can’t fund our own infrastructure in a communal manner, then how can we really run communal infrastructure? So yeah, I see it as fundamental to the goal.

Vitalik:

Maybe a different kind of question, but still in a similar spirit. So this is the L2 peace panel, and ideally we want to go a step beyond the sort of cold peace of not having Twitter wars with each other, to also have a warm peace of lots of amazing collaborations. So, anyone interested: maybe just share a concrete story of a place where you or your team have collaborated with someone else on this call and gotten a really good outcome out of it?

Eli:

I am not going to share details yet, but we have a very, very cool collaboration with Polygon Zero. And I’ll say no more.

Vitalik:

Okay.

Jordi:

I can say some public things. For example, we’re collaborating with NEAR: we are building zkWASM together. They’re using the proving system that we have, and we are collaborating together, especially on the proving systems and the language and all the tools. I think this is a good example. I mean, when we’re building the zkEVM, it’s mainly building tooling, and the tooling for building the zkEVM can be used for many other projects and many other things. And what we are trying at Polygon is to open up this tooling so that the community can use it, improve it, make it better, [inaudible 00:33:03]. It’s in your own interest, but also in the interest of the community, to have this tooling as extended as possible. And I think this is the spirit of the collaboration: you are taking a lot from the community, and you are giving a lot, or everything, back to the community. And this is the way to really progress, really go fast, innovate, get results, and get adoption faster.

Harry:

There was a great effort a number of months ago, led by Yoav Weiss, that pulled in I think a good number of the L2 teams to enable this interface, eth_sendRawTransactionConditional, which is sort of a key enabler of a good 4337 experience on L2s. That was awesome, and definitely an area that benefited a lot from standardization, in that we want to have all the wallets work on all the chains. I think it was a really valuable movement.
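
For reference, a call to this RPC method looks roughly like the sketch below. The endpoint and values are placeholders, and the option fields (knownAccounts and the block-number bounds) follow the proposed spec as commonly described; support varies by chain, so check your L2’s documentation.

```python
# Hedged sketch of an eth_sendRawTransactionConditional JSON-RPC call.
import json, urllib.request

RPC_URL = "https://example-l2-rpc.invalid"    # placeholder endpoint
raw_tx = "0x02f8..."                          # placeholder signed transaction

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_sendRawTransactionConditional",
    "params": [
        raw_tx,
        {
            # Include the tx only if these preconditions still hold,
            # e.g. a 4337 bundler guarding against state changes that
            # would invalidate its bundle. Placeholder values below.
            "knownAccounts": {"0xEntryPointAddr": "0xExpectedStorageRoot"},
            "blockNumberMin": "0x100",
            "blockNumberMax": "0x200",
        },
    ],
}

req = urllib.request.Request(
    RPC_URL, data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())  # uncomment against a live endpoint
```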

Eli:

If I may, I wonder if we could invite Abdel on the show, because [inaudible 00:34:17] collaboration stories. Probably 99% of the collaboration stories of Starknet with other cool projects originated in his mind or in discussions he had. So I don’t know if he can come on and share a few cool collaboration stories. I know he’s lurking somewhere in one of the areas of this call. Abdel, can you hear us? Can you come online and share some stories? There we go. Okay.

Abdel:

I was not too far away, yes. Yeah, briefly. Hi, hello, everyone. First of all, can you hear me?

Nicolas:

Yes, we do.

Abdel:

Yeah, okay. First of all, I really like to collaborate with everyone, so I welcome everyone here to reach out to me to do some kind of collaboration, and I wish to do more collaboration with other teams. We do a lot of collaboration with other DA layers like Celestia. We also did some work with NEAR, but it’s not directly for the public Starknet; it’s more for app chains and layer threes, to be able to support multiple DA layers and so on. By the way, I want to shout out Optimism, because we use proxyd in production. proxyd is a tool from Optimism for doing load balancing on RPC requests; we use it in production and it helps us improve the liveness of the network.

So this is also a way of collaborating: using other people’s tooling, et cetera, and we want to contribute back to it. And eventually we can discuss making it a standalone product, and why not add compatibility with non-EVM tools like Starknet and so on? So yeah, generally speaking, I love to collaborate with everyone, and I wish we could do that more, because I agree it’s very important. As Nicolas said, standardization is also very important. And even if Starknet is not EVM-compatible, there are many aspects other than the execution engine that we can standardize and collaborate on together.

Vitalik:

Perfect. Yeah. One other specific standardization topic that’s worth digging into just a little bit: account abstraction. It’s becoming a bigger and bigger topic; we’re starting to see more and more of these AA wallets. And especially as we’re seeing different native AA implementations, we’re starting to see different VMs become available in some cases. How are you guys thinking through making wallets account-abstraction friendly while being able to deploy on all of the layer twos, including yours, simultaneously?

Jordi:

As a fully compatible zkEVM, we are all-in on 4337, so we’re growing together here.

Vitalik:

Okay, yeah.

Toghrul:

Same for us.

Vitalik:

I’d be interested to hear from someone who is not in the sort of uncompromisingly 100% EVM camp.

Alex:

For us, as I said, we support native account abstraction, but we follow the conventions of 4337. So it’s compatible, but it can just do more, because you can natively execute code. For example, MetaMask wallets can enjoy account abstraction, not only the wallets that are smart [inaudible 00:38:10].

Abdel:

I can give an example if you want-

Eli:

Yeah, yeah. I wanted to say I would answer, but Abdel is going to do such a better job.

Abdel:

A few examples. On the gaming vertical, which is very strong on Starknet, we have some builders who are using session keys, because of course, if you’re in a game, you don’t want to sign each individual transaction. So they implemented a session key mechanism where you can use a session key for the duration of the game, but of course it’s limited to a specific context: it can interact only with specific contracts, only with specific selectors, and so on. That’s one example. Another example is of course paymasters. And we’ll extend the native account abstraction to also have nonce abstraction, which will enable sending multiple transactions in parallel without having to execute them sequentially. Another aspect we’re starting to explore is social wallets and social recovery wallets, which can be very powerful with native account abstraction. And yeah, a very cool example is Cartridge: they implemented an account that you can control by unlocking Face ID on your iPhone, so it’s totally seedless. You don’t have any seed phrase; you just unlock with Face ID and you interact with your smart contract.
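
A minimal sketch of the session-key pattern Abdel describes, with hypothetical names rather than any specific Starknet wallet’s implementation: the account accepts a delegated key only within a time window and a whitelisted set of contract/selector pairs.

```python
# Toy session-key validation: scoped in time and in what it may call.
import time
from dataclasses import dataclass

@dataclass
class SessionKey:
    pubkey: str
    expires_at: float        # session limited to the game's duration
    allowed_calls: set       # (contract, selector) pairs it may invoke

def validate_session_call(key: SessionKey, signer: str,
                          contract: str, selector: str) -> bool:
    """Account validation logic: accept the session key's signature only
    inside its narrow scope, so a leaked key can't drain the account."""
    if signer != key.pubkey or time.time() > key.expires_at:
        return False
    return (contract, selector) in key.allowed_calls

key = SessionKey(
    pubkey="0xgameKey",
    expires_at=time.time() + 3600,   # one-hour session
    allowed_calls={("0xGameContract", "move"), ("0xGameContract", "attack")},
)
assert validate_session_call(key, "0xgameKey", "0xGameContract", "move")
assert not validate_session_call(key, "0xgameKey", "0xTokenContract", "transfer")
```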

Vitalik:

Oh, interesting. I mean, in that case, where are the underlying keys stored?

Abdel:

In the secure element of the phone, basically. Yeah, it’s using the curve on the iPhone and the [inaudible 00:39:40]-

Vitalik:

The R1?

Abdel:

Exactly, yeah.

Vitalik:

Okay, amazing.

Alex:

We have a similar example on ZKsync with Clave, where they use passkeys: you basically just send a link, which anyone can click on to onboard in one click, just using your face-

Abdel:

And you also have paymaster examples on ZKsync? Yeah.

Vitalik:

Yeah, this is lovely. I feel like we should be having some [inaudible 00:40:09] abstraction wallet peace calls too at some point soon. And I know Blocto did a good job of doing an event at EthCC, and I hope we’re going to start seeing more and more of those. One of the L2 Twitter-war topics that we’ve seen over the past couple of months is people arguing about what is and isn’t a real rollup, or what is and isn’t a layer two, particularly with regard to data availability strategies.

And I feel like I’ve tried to keep the peace on this, because my view is that different strategies just make sense for different applications, and $10 million of DeFi does not need the exact same quality of data availability as some relatively tiny game. But from each of your perspectives, and I know many people here are increasingly offering both a rollup and some kind of offering with an off-chain data availability strategy, how do you see the distinction between the different data availability approaches, what kinds of applications is each one appropriate for, and how do you expect that to evolve in your ecosystem in the future?

Nicolas:

So when it comes to data availability, I’m always more comfortable when it goes to Ethereum, because it makes the security analysis much simpler: there’s a single dependency, you don’t have multiple dependencies with links between the dependencies and so on. So I think it’s always simpler. I totally agree as well that there are some use cases where it’s absolutely acceptable to have lower security requirements. And I think there’s another category, which is data we can actually accept to lose. The famous example is everything related to social networks: you’re not going to put all the pictures of all the world on-chain, it’s just not possible.

But it’s kind of interesting, because here it’s data that you can accept to lose: if something happens, okay, it’s lost. So you get a separation where there’s a part on-chain that we are sure we’re not going to lose, and another part that we can accept to lose, and that’s fine. It’s an intermediate step between that and a validium, where, okay, very likely we’ll not lose the data, but you just have a more complicated security analysis to do. So there are a little bit of those three categories, in my opinion.

Bobbin:

I think people tend to think of this as two polar opposites, like you have on-chain and off-chain, but I think there are a lot of interesting models where things can be in between as well. You don’t have to put all the data on-chain; some things could go on one DA layer, like L1, and some things could just use another DA layer, for example. A good example of this is in Polygon Miden: we have this distinction between accounts and notes, and you could say accounts are on L1, but notes could go somewhere else. And what this means is that whatever’s in your account is secured at the same level as Ethereum, but whatever’s in flight has a slightly different security guarantee.

So an in-flight transaction may have a lower security guarantee, but you usually don’t transfer your entire account’s worth at the same time. So that’s just one interesting approach. I know there are other designs, like Volition, where users can sometimes move the same account from one data availability model to another. And there is, I think, Adamantium, which the StarkWare guys published at some point, where it could automatically switch from one model to another. So I think there’s actually a very interesting space in between those polar opposites, where you have some data sometimes in one place and sometimes in another, and the user decides where the data goes.

Alex:

I agree. We’ll probably all converge around these hybrid models, where every ZK chain will not be strictly a rollup or a validium (and I think everything is going to be ZK in the future), but will have Volition and even deeper things, where users host part of their data for certain accounts. But there are clear use cases just for rollups: obviously you want to host most of your net worth in the rollup, because you want to derive full security from Ethereum without compromises. Most of the value on layer one and on layer twos is held by whales, by power users, who are smaller in number but hold a lot of value there. Think of all the liquidity providers, arbitrageurs, market makers: they hold their value in something that they absolutely need to be able to rely on.

So security is part of UX, if you want, and they will certainly be on the rollup. And if you think about the enterprises and institutions and banks, most of them will clearly prefer a validium, where they fully control the data, and that’s just the reality. I don’t like it, and I hope we will eventually build much better tools purely on-chain that people own and fully control, but we’ll have a continuum of systems, and the banks and institutions will have their assets issued in private validiums, where the privacy is provided out of the box, but it’s still fully [inaudible 00:46:06] with the rest of the Ethereum ecosystem.

Toghrul:

From my perspective, I think that for general-purpose protocols it just makes sense to be a rollup, purely because if you have a lot of apps deploying on you, et cetera, minimizing third-party dependencies just makes sense. Whereas if you are an app-specific chain, then I guess it depends on the specific use case. As you said, if the chain is securing billions of dollars, then I would say it’s probably best if you continue functioning as a rollup. Whereas if it’s something like a gaming chain, or something that has less value in it and is more geared towards storing data or just making certain state transitions, it just makes more sense to use a validium, or an optimium in the case of an optimistic rollup. Because for the vast majority of that data you don’t need the full security of Ethereum; it’s perfectly fine to compromise a bit, and the trade-off is that you get significantly lower costs.

Harry:

Completely agreed there. And to me it’s a matter of, essentially: it’s not going to be practical for a while, at least, to have everything be a rollup, and fundamentally you need to compare it to what the next best option is. And I think there are a lot of DA approaches coming along that, certainly, like everyone has said, you would not want for the main, primary ecosystem rollup chains, but that fundamentally do enable use cases that don’t exist otherwise. And I think there’s another part of your question that we went less into. Just a minor disagreement: I think you said that over the last few months there’s been arguing over what is and isn’t an L2; I would say it’s been over the last five years. It’s the sort of definitional problem that’s plagued us for a long time.

And in my mind, on that front, the thing that I’d really love to continue to see more of is… it’s very appealing to have these terminology buckets: you’re an L2, not an L2, a rollup, not a rollup, what have you. But I think the really valuable thing is one level deeper: what are the assumptions being made when I use this system, and who am I trusting? For example, sites like L2BEAT that dig into the security frameworks are to me a great resource there. Although there’s this tension, which is how much we can expect end users and even application developers to actually fully understand those details, which has sort of always been a challenge.

Jordi:

I have the feeling that data availability as a consensus problem, meaning how much bandwidth a consensus can handle for data availability, is a relatively new problem, and we don’t really know what the limits are, or what the theoretical limits are. I know that in the foundations for sharding there are some theoretical results, but I mean, it’s a very young problem, and there are not many chains that have been designed just for data availability itself. But the feeling that I have, and this is just a personal opinion, is that in a few years data availability is going to be very much solved. I mean, it’s not going to be free, because consensus is not going to be free, but it’s going to be cheap enough, or very cheap, so that the problem we maybe have now around where we store data, I have the feeling that in four, five, six, ten years, it’s not going to be as huge a problem as it may be right now. And here I would like to hear opinions on that.

Bobbin:

I think one other thing that is interesting to me personally, from a data availability standpoint, is the ability to push a lot of the data completely off-chain, onto the user. So only the commitment to someone’s account state goes on-chain, and the actual contents of a user’s wallet and things like that are stored locally by the user. This enables a lot of scalability. Obviously it has its own pros and cons and trade-offs, but when we think about where the data goes, we always think of some other chain, and I actually think that another option is that the user just stores the data locally and is responsible for it, and then provides either a ZKP or data proofs in a stateless way to the network whenever they need to execute a transaction.
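
To illustrate the model Bobbin sketches, here is a toy version of “the chain keeps only a commitment, the user keeps the data.” A real system would verify a ZK proof rather than have the user reveal the state, but the stateless check has the same shape.

```python
# The network stores only a hash of the account state; the user presents
# the full state with each transaction and the network checks it matches.
import hashlib, json

def commit(state: dict) -> bytes:
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).digest()

onchain_commitment = commit({"owner": "alice", "balance": 7})  # all the chain stores

def apply_transaction(claimed_state: dict, new_state: dict) -> bytes:
    """Stateless verification: the user supplies the pre-state; the
    network checks it against the stored commitment, then stores the
    commitment to the post-state."""
    assert commit(claimed_state) == onchain_commitment, "stale or forged state"
    return commit(new_state)

onchain_commitment = apply_transaction(
    {"owner": "alice", "balance": 7},
    {"owner": "alice", "balance": 4},   # e.g. after spending 3
)
```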

Vitalik:

Maybe just to pull things in a bit of a different direction: what should the layer one do? There’s been, I think, a lot of ongoing discussion around layer one making changes: possibly changing some gas costs to be more layer-two friendly, possibly adding some pretty deep and complicated functionality (I know Justin Drake has been a fan of doing some of these things), possibly changing in response to, or following behind, some of the changes that are being innovated on layer twos, maybe other things. What are some ways that you guys think the L1 should continue evolving in order to let the L2 world continue to progress as smoothly as possible?

Nicolas:

I would put two things. One is fast finality, which is not exactly a new topic, but I think it’ll make the rollups much more secure, and it’ll be much better for everybody. And maybe the technology could be reused in the layer twos as well; all the progress on fast finality at layer one can be very interesting for the layer twos. And on a totally different level, I would put the EVM: I think it’s important, and actually linked to account abstraction, to add some cryptographic primitives. There are a lot of EIPs around that, and it would allow many more signature schemes, or much more complicated cryptography, at the account abstraction level.

Harry:

I think there are a number of areas that are awesome to see. I’m glad, and I appreciate, that finality was the first call-out; I think that’s a really exciting one. Probably the most obvious one to people, which is exciting, is expanded data capacity: obviously we have our dream final state of full danksharding, and a bunch of interesting work going on in PeerDAS, which is relatively new to the discussion as another intermediate state. And then the last area I would call out, one that’s probably more useful to optimistic rollups than to ZK rollups, although incredibly useful in general, is censorship resistance. Right now Ethereum has relatively weak censorship resistance guarantees, and there’s work on inclusion lists; personally, I’m pretty excited and interested in enshrined PBS and various things which would give Ethereum a much harder guarantee there.

Nicolas:

I would add ossification here. I think it’s important that, if we want to build L2s that are really decentralized, we need the L1 to be decentralized, and for that I think the L1 should be ossified. I know this is not something that happens from one day to the next; Ethereum is not finished yet, and there’s still a lot to do before ossification, but it’s something that really needs to happen at some point.

Eli:

If I could request one thing from the layer one: I think the biggest challenge of blockchains is around the area of, let’s call it, broadness. I prefer the term broadness to decentralization because I don’t like double negation: decentralization means not centralized, so centralized bad, decentralized good. So I prefer the term broadness. Blockchains are ultimately all about maximal broadness, so that there’s a very wide base. It’s related a little bit to what Jordi spoke about with censorship resistance.

And it’s a constant battle, because there are natural forces and incentives that tend towards centralization. So what we need from the layer one of Ethereum, I think, is things related to tokenomics and incentives that would just keep on broadening the base of operators and stakeholders all the time. Now, it’s a very hard challenge, because there will always be forces of centralization, but the thing that could best be done would be focusing on that. And then compatibility with… I mean, the EVM being the first Turing-complete machine is what allows this whole family of L2s; it’s all because of, Vitalik, your vision of a single Turing-complete machine.

So the one thing I would try as much as possible to push forward from the layer one is broadness, and it’s related to the incentives and tokenomics. I don’t know how to change it in a way that a larger part of humanity has a stake in it and it’s ever more broad. And at some point maybe we solve proof of humanity, so that we get closer to one individual, one vote, or have elements of that without relying on nation states. So that would be the area I’d like.

Vitalik:

Amazing. So I saw Brian from RISC Zero just joined. I feel like RISC Zero is another one of these newer members of the family that a lot of people have probably not yet had a chance to hear about. So talk about what you guys are up to, and what you’re looking forward to being up to in 2024.

Brian:

Yeah, I mean, I can also sort of answer that last question: I think something that would be really useful from our perspective is just making it easier to integrate zero-knowledge proofs onto the L1 in any manner. So if that looks like Groth16 precompiles, or even precompiles that support more STARK-like systems, that would be great. And then obviously danksharding is going to be huge, I think, for all of us. But yeah, what we’re focused on right now is really getting our verifier onto mainnet, so that people can use zero [inaudible 00:58:28] proofs produced by our system on Ethereum proper.

Vitalik:

Okay, amazing. Any final words from any of you guys?

Kelvin:

We need account abstraction please. Have you tried sending transactions recently? Oh, my God, it’s so bad. We need to fix it.

Vitalik:

Really? I thought sending transactions is amazing compared to four years ago. Remember back in the bad old days, when you had to send a transaction and EIP-1559 did not exist, and so you had to wait potentially five minutes or an hour for the thing to even get included?

Kelvin:

That’s true.

Vitalik:

But yes, we do need account abstraction.

Kelvin:

You have to know how to use a wallet. I think if you know how to use a wallet today, it’s great compared to four years ago. If you don’t know how to use a wallet, it’s a whole mess, there’s so much complexity.

Brian:

I definitely think of that as a major obstacle in terms of when my friends will start using crypto instead of making fun of me for doing it.

Jordi:

Next [inaudible 00:59:42], right now we’re just waiting for 4844, so I think that all of us will have a lot of work there.

Eli:

Yeah, 4844, let’s see it arriving. That’s going to be really great. And yeah.

Toghrul:

I would like a bit of an increase in the gas limit on the L1, because that will make our lives a bit easier in terms of decentralization. I have a feeling that for all of us, once we try to decentralize, the costs of publishing batches and finalizing batches on-chain are going to go up, so a gas limit increase will help us with that.

Harry:

I want to see more multichain L2-native wallet development in general, just sort of managing the complexity of multiple networks in a way that makes it less confusing for users.

Alex:

I’ll just say I want to see more of these gatherings, in this format and in in-person formats: coordinating, speaking together, and making peace among L2s. And on that note, I unfortunately need to depart now. It was a pleasure seeing all of you.

Vitalik:

Yep, okay. So we met on land, we met in the cloud, so maybe our next one’s going to be in a swimming pool.

Alex:

Bye all.

Vitalik:

Okay. Yeah, no, thank you so much. Yeah, anyone else have anything? Otherwise-

Harry:

Thank you for coming on and hosting.

Vitalik:

Yeah, no-

Eli:

Yeah, thanks.

Vitalik:

Yeah, yeah. Thank you guys for joining.

Eli:

Abdel, do you want to say anything, as the coordinator?

Abdel:

Thank you, Vitalik, for accepting this invitation at the very last minute. Vitalik actually accepted two hours ago; he did not know anything about the event, so thank you for that. And thank you very much to all the participants. I hope we can do another edition sometime later in the year, and also some physical editions too. Probably we will all be at EthCC or something like that, so maybe we can do something in person; that would be amazing.

Vitalik:

Amazing.

Abdel:

And yeah, let’s make peace together.

Vitalik:

Yep, let’s make peace, and peace out, and come back to keep the peace not too far in the future.

Abdel:

Bye-bye.

Brian:

Thanks, everyone.

Harry:

Bye.

Jordi:

Thanks.