Blobstream Starknet with Celestia

Join us for a discussion on the latest developments in Blobstream-Starknet 1.0. This version introduces features enabling data root verification from L1 using Herodotus storage proofs. We will also explore upcoming versions and potential architectures.

Transcript:

Ben Goebel:
All right, we are live now. We’re just going to make sure that all of our technical facilities are up and running. I’m going to do a quick check on Twitter to make sure you can hear me. Okay. Check. Check. Okay. And it looks like Twitter now does have video, which I didn’t realize. So you guys all get to see our lovely faces. So if you’ve joined us, thanks for joining us. Today we’re going to talk about Blobstream Starknet and the milestone that the dev community of Starknet, the open source ecosystem, along with some of the exploration team members have hit. It’s been an amazing project and amazing integration with both Herodotus and Celestia. It’s been an interesting architecture to come up with, and we can talk about some of the roadblocks and snags and how we’ve moved past them.
So Blobstream Starknet itself, it’s an implementation or a re-implementation of some of the Solidity code bases on Ethereum layer one. It’s a one-way messaging bridge from Celestia to Starknet. The original way that we had conceptualized coming about this, it was about the same time that Celestia had talked about migrating fully to Blobstream X, and I’ll let Diego talk a little bit about the difference between Blobstream X and Blobstream. And so we ended up going the route of using Herodotus. So to start today off, I’d like Diego to maybe give an overview of what a traditional Celestia Rollup looks like, what it needs to commit to, and then we can talk a bit about what’s been accomplished in Blobstream. So Diego?

Diego:
Yeah, absolutely. So obviously the traditional way, quote-unquote, you build a Celestia Rollup depends on, one, whether this rollup has a canonical bridge somewhere, aka it’s settling to some other underlying layer. That’s the case with what we’re trying to accomplish here with Starknet L3s, for example: they settle to Starknet. And then there’s [inaudible 00:02:37] rollups, which we’re not going to talk about today. And then the other difference is whether you are doing a ZK-Rollup or an optimistic rollup. In the case of Starknet, we all know it’s a ZK-Rollup or Validity Rollup. And in the traditional way, broadly speaking, without thinking about Starknet, but just thinking about ZK-Rollups for a second, the way you essentially build a Celestia ZK-Rollup or Validium/Celestium, whatever you want to call them, is quote-unquote straightforward. The idea is very simple. You just move from posting your state diffs, or broadly speaking your data or batch data for your rollup.
And instead of posting it to Ethereum or to the same network that you’re using for settlement, you would post it to Celestia. There’s obviously a minor issue here if you just do that, which is that posting data to Celestia makes it available, but there’s no way for the rollup system and thus for the underlying user to know whether this data was made available or not. They could run a Celestia light client and they could verify that, but there’s no way for the rollup system itself to basically know when it should proceed with a batch or not, et cetera. That’s where the use of Blobstream comes in. Blobstream being the data attestation bridge that you can use to essentially build these constructions, these Validiums with Celestia DA. And what Blobstream does as a data attestation bridge is let you bridge or transmit messages from Celestia to some other network, right?
The original implementation was written in Solidity, and it had some caveats, let’s just say. And eventually, instead of launching the first version that was built by a team member at Celestia Labs, we launched instead Blobstream X, which is the same thing as Blobstream in terms of functionality and what it does. But instead of submitting a long batch of Celestia headers and verifying a bunch of signatures, it uses ZK to verify all these Celestia signatures with one proof.
It’s more efficient, it’s more portable, let’s say. And this was built by the team at Succinct. But yeah, traditionally it’s very straightforward. You post your data to Celestia and then you take whatever your settlement contracts are and hook those up to Blobstream so that whenever your settlement contracts are verifying a batch and a validity proof for your rollup, you also verify one way or another whether the batch data for that specific proof was posted on Celestia or not. And yeah, the obvious reason why we’re here is to talk about how we migrated or built a version of this for Starknet, given that Blobstream was built in Solidity and thus we had no way of easily porting this to Starknet, but yeah.
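Putting Diego’s flow together, a settlement contract in this model gates each batch on two checks: the validity proof and the Blobstream attestation. Here is a minimal Python sketch of that logic, with sha256 standing in for the real commitment scheme; all names are illustrative, not any actual contract API:

```python
import hashlib
from dataclasses import dataclass

def commit(data: bytes) -> bytes:
    # Stand-in for Celestia's real blob/data-root commitment scheme.
    return hashlib.sha256(data).digest()

@dataclass
class Blobstream:
    """Toy data-attestation bridge: the set of data roots relayed from Celestia."""
    attested_roots: set

    def verify_attestation(self, data_root: bytes) -> bool:
        return data_root in self.attested_roots

def accept_batch(bridge: Blobstream, batch_data: bytes, validity_proof_ok: bool) -> bool:
    # A settlement contract accepts a batch only if (1) the validity proof
    # checks out and (2) Blobstream attests the batch data is on Celestia.
    return validity_proof_ok and bridge.verify_attestation(commit(batch_data))

bridge = Blobstream(attested_roots={commit(b"batch-1")})
print(accept_batch(bridge, b"batch-1", validity_proof_ok=True))   # True
print(accept_batch(bridge, b"batch-2", validity_proof_ok=True))   # False
```

The second call fails because the data root was never attested, which is exactly the "was this data made available?" question Blobstream answers.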

Ben Goebel:
Sweet. And then that transition to Blobstream X moves all of the signature verification off chain into that SNARK?

Diego:
Correct. If I recall correctly, it’s a Plonky2 circuit, and then the proof gets wrapped into Groth16.

Ben Goebel:
Awesome. Okay. So that kind of ties in nicely to the problem that we had with Blobstream Starknet, and something that we’ll talk about a little bit later today, which is that Groth16 is currently blocked on a syscall that basically needs to be implemented in our protocol stack, which Marcello has some cool [inaudible 00:07:08] later. So we couldn’t go the full direct integration of basically mimicking the Succinct platform for proving those Groth16 circuits, which would lead to a clean architecture of basically having exactly what Blobstream X does on L1 and porting that to L2. So since that infrastructure has already been stood up for relaying and proving those block header batches to L1, we essentially needed to appeal to the L1, where it’s already doing the work, and get the data commitments from L1 up to L2. It’s the same data commitments that are proved on L1 that need to be made available on L2.
So that’s where we needed Herodotus to come in. We basically use Herodotus storage proofs to prove what’s going on on the L1, specifically the storage slots that reflect the actual data commitments from the Celestia Blobstream X contract on L1, where it’s already doing the work, and relay those up to L2 with the storage proofs. And it’s a pretty cool architecture. There’s a little bit of latency built in just because we do have to route from L1 to L2. But overall, Herodotus storage proofs have kind of unblocked us, at least in the short term until that syscall comes about. So yeah, Marcello, if you could maybe talk about how unique architectures like this come about and how the storage proofs facilitated that, that’d be awesome.
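The slot-reading pattern Ben describes can be sketched like this. The storage layout and base slot here are purely illustrative and sha256 stands in for keccak256; the point is that L2 only trusts values that arrived with a verified storage proof:

```python
import hashlib

def mapping_slot(key: int, base_slot: int) -> bytes:
    # Solidity stores mapping[key] at keccak256(abi.encode(key, base_slot));
    # sha256 stands in for keccak, the slot-derivation idea is what matters.
    return hashlib.sha256(key.to_bytes(32, "big") + base_slot.to_bytes(32, "big")).digest()

# Values land here only after a Herodotus-style storage proof for the pair
# (l1_block, slot) has been verified on L2.
proven_slots = {}

def record_storage_proof(l1_block, slot, value):
    proven_slots[(l1_block, slot)] = value

def data_commitment(l1_block, nonce, base_slot=0):
    # Read the Blobstream X data-commitment mapping "through" proven L1 state.
    return proven_slots.get((l1_block, mapping_slot(nonce, base_slot)))

record_storage_proof(19_000_000, mapping_slot(7, 0), b"data-root-7")
print(data_commitment(19_000_000, 7))  # b'data-root-7'
```

A commitment that was never proven simply comes back empty, which is the latency Ben mentions: nothing is readable on L2 until the L1 proof round-trip completes.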

Marcello:
Sure. So thanks for giving the introduction. Maybe first of all, let’s talk about what even makes storage proofs possible. So first of all, let’s remind ourselves how blockchains are constructed. Whenever we verify the consensus, okay, we’re on that [inaudible 00:09:02], at the end of the day we’re just agreeing on what the latest state root is, which eventually gets committed to the block hash. In the EVM, we have opcodes that allow access to the block hash. And as long as you trust that block hash, it also means that you trust the whole data set of Ethereum. And the block hash, in a sense, commits to the full history of the chain, because there’s a full linkage. And also the block hash is a hash of the block header, and the block header contains the state root.
So now that said, it means that as long as you have enough compute to, well, run a bunch of hashes, you can verify the inclusion of any piece of data ever present on Ethereum. And by piece of data, I mean state, accounts, receipts, block headers, pretty much everything that was ever seen on the chain. And that’s the idea that makes storage proofs possible. But there is one caveat: it’s really expensive to do it, especially directly on-chain. So this is why we use ZK to basically do this heavy computation off-chain and then on-chain just verify the proofs of doing so.
And now, how is this applicable to layer twos? So like I said at the beginning, the full commitment to the history of the chain is this one block hash. So what if we can send, using the canonical bridge, this one block hash to some layer two? Then, given the computational capabilities of, let’s say, Starknet [inaudible 00:10:21], we can verify these proofs directly, and we effectively make layer twos not just some platform that derives its security from layer one, but kind of an extension of layer one, because it gets access to its state. So that’s what storage proofs do. And now, how is this applicable in this context? Whenever a Blobstream X proof gets verified on layer one, we don’t have to really verify it on layer two, because you can just use the fact that, hey, this proof has been verified on layer one, and just access the result of this verification on the layer two. That’s basically where we step in.
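The hash-chain argument Marcello makes, that trusting one root transitively commits you to every leaf below it, can be sketched with a toy Merkle inclusion check (sha256 as a stand-in hash; real Ethereum proofs walk a Merkle-Patricia trie, but the recompute-and-compare principle is the same):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def verify_inclusion(leaf: bytes, path, root: bytes) -> bool:
    """Recompute the hash chain from a leaf up to a trusted root.
    `path` is a list of (sibling_hash, node_is_left) pairs."""
    node = h(leaf)
    for sibling, node_is_left in path:
        node = h(node, sibling) if node_is_left else h(sibling, node)
    return node == root

# Trusting one root (ultimately, one block hash) transitively commits
# you to every leaf beneath it.
leaf = b"storage-slot-value"
sibling = h(b"some-other-branch")
root = h(h(leaf), sibling)
print(verify_inclusion(leaf, [(sibling, True)], root))         # True
print(verify_inclusion(b"tampered", [(sibling, True)], root))  # False
```

This is also why it is expensive on-chain: the verifier runs one hash per tree level, which is exactly the computation ZK moves off-chain.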

Ben Goebel:
Awesome. Awesome. Cool. Well, at the end of the day Blobstream Starknet is essentially a tool to enable L3s. So we do still need to have an integration, and I just talked to Elias at Kakarot, so we’ll be doing kind of the fine-tuning of the integration with those guys over there. But yeah, I want to talk a little bit more about the broad scale application of Blobstream Starknet and L3s in general.
One of the things that we obviously need is some of the things Diego mentioned, which is, okay, now we have a commitment to a state root that reflects a state diff, but how do we prove it? How do we prove that the Starknet OS actually ran the right computation? So some of the cool work that’s been done, on the Herodotus side especially, is the Cairo Verifier. So if you look, instead of from the perspective of the actual Blobstream Starknet contract, and move out to the actual appchain itself, the appchain will need to run the Starknet OS, re-execute all the transactions, and output two things: an execution trace to send to the STARK Prover and the state diff to basically post to Celestia, and we need to tie those things together. So Marcello, do you want to talk a bit about the Cairo Verifier and how it plays into all of this?
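The appchain pipeline described here, re-execute, hand the trace to the prover, hand the state diff to the DA layer, can be sketched as follows; every function body is a toy stand-in for the real component:

```python
import hashlib

def run_starknet_os(transactions):
    """Toy stand-in for re-execution: returns (execution_trace, state_diff)."""
    trace = [f"step:{tx}" for tx in transactions]
    diff = {f"key:{i}": tx for i, tx in enumerate(transactions)}
    return trace, diff

def prove(trace):
    # Stand-in for the STARK prover consuming the execution trace.
    return hashlib.sha256("|".join(trace).encode()).hexdigest()

def post_to_celestia(diff):
    # Stand-in for the blob commitment the DA layer returns.
    return hashlib.sha256(repr(sorted(diff.items())).encode()).hexdigest()

txs = ["transfer(a,b,10)", "mint(c,5)"]
trace, diff = run_starknet_os(txs)
proof = prove(trace)
blob_commitment = post_to_celestia(diff)
# The open problem discussed here is binding `proof` and `blob_commitment`
# together, so the settlement layer knows they describe the same state diff.
```

Nothing in this sketch ties the two outputs together, which is precisely the gap the rest of the conversation (equivalency services, Starknet OS modifications) is about.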

Marcello:
Sure. So yeah, I mentioned proofs already. And with ZK we can build cool stuff such as, for example, ZK-Rollups, because what a ZK-Rollup is depends on the model, but effectively we have some runtime that either relies or not on some ZK VM. We run some computation, and part of the computation is also writing and reading some state that we manage, and this is a ZK-Rollup. But the bottom line is that it’s eventually just some proven computation. Of course, when we have the trace, we put it into some prover, we get the proof, but this proof has to be verified.
And as of today, the existing on-chain verifier for STARK proofs generated by the Stone Prover lived on L1 and was implemented in Solidity. So well, how do you build layer three, right? Because you have to verify these proofs on Starknet in that case. So that’s effectively the work that we did. We got some inspiration from the implementation in Cairo 0, [inaudible 00:13:30], which is used internally by SHARP to do recursion. We re-implemented the same logic in Cairo 1, of course with a few caveats. We changed the architecture a little bit. But now effectively you can take any Cairo program, such as, for example, Starknet OS, pass it through Stone, prove it, take the proof, verify it on-chain, and you have the usual logic and abstraction behind the [inaudible 00:13:51] history and so on. So that’s what we recently built.

Ben Goebel:
Amazing. And yeah, that’ll obviously be one of the crucial pieces to actually implement this in an entire appchain stack. So it’s awesome that that’s available today. Some of the things that are not available today, but that are super interesting, are the equivalency services that Celestia has been working on. Diego, I wonder if you could talk a bit about maybe the future of Blobstream Starknet or the kinds of architectures this can potentially grow into.

Diego:
Yeah, so I mean now that Blobstream Starknet is a thing, the community has done a great job at bringing it essentially to fruition. And we have a way to relay or read these data commitments from Blobstream X on layer one on Starknet, right. And we already also have the integration with stacks like Madara that simply read and write data using a Celestia light node. The missing component here to make this essentially a full end-to-end implementation, like you mentioned, would be to essentially verify that the data posted to Celestia corresponds to the state diffs or well, the commitment to the state diffs that goes into Starknet OS. And then in that case, as far as I know, there’s two ways that you can go about it. One way is to take your rollup program, which in this case is Starknet OS, and you would add whatever necessary modification you need to add in order to essentially ingest the Celestia Blob commitment to your data, which is your state diffs.
And inside of Starknet OS verify that whatever commitment you have for your data, be it for Celestia, for anything else, matches the commitment of your data that is used internally in Starknet OS, that’s the Poseidon or Pedersen tree, if I recall correctly. That’s one way to go about it. The other way in which you could finish this integration without having to modify Starknet OS would be to essentially prove equivalency of the commitments that Starknet OS does over your data, but essentially at the settlement layer or at the settlement level per se. So the previous approach does everything essentially inside of the rollup, and you could think about it almost like aggregating proofs, if you will. It’s in a certain sense simpler to think about or reason about, because now you just think about it as like, “Okay, I modified Madara, I modified Starknet OS or added this verification, and I used Blobstream with my settlement contracts, which would be something like a Piltover, and now you’re good.”
With the other approach you would, for example, spin up your Madara chain to write batches of data to Celestia, read them. You would still use Piltover, you still use Blobstream to verify that the data roots to which you committed your data are available on Celestia. But then there’s the missing piece from step one that you could have done in Starknet OS, and the way you do this is by proving equivalency between the two commitments, the Poseidon/Pedersen tree for Starknet and the Celestia Blob commitment. And what you can do is prove this off-chain. So you would run this with whatever logic you want. Currently, the work that has been done by C Node at Celestia Labs has been in RISC Zero, but you could perfectly do this as well in Cairo, like Marcello mentioned.
You could run, for example, like a Stone Prover and you could write this equivalency service as a Cairo program, run it off-chain, verify it on-chain, and you would just hook that into the logic of your settlement contracts so that you know the sequencer/batch poster/whatever you want to call the node that posts data, has made the data available.
And that way you don’t essentially get your funds frozen. Those are the security assumptions of Validiums. So with this different approach, you would still, obviously in the same way that [inaudible 00:19:06] does, you would still run this computation off-chain, but it doesn’t require modifications to Starknet OS. It does obviously require having to, one, write this equivalency service in Cairo so that it can be verified with the Herodotus Integrity contracts or program circuits, if you will. Or, in a similar way to, I guess, Blobstream on Starknet V2, it would require the BN pairing syscall to be able to efficiently verify Groth16 proofs, because the current implementation of the equivalency service is built with [inaudible 00:19:57], and you would wrap those proofs into Groth16, as an example. And yeah, it’s hard to say which one is better. Ultimately, to make that decision, both of these have to be built, and then you have to do some benchmarking, taking into account infrastructure costs, gas costs, et cetera. But the options are plenty, basically.
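The equivalency-service idea Diego outlines reduces to proving one statement: the same state-diff bytes hash to both commitments. A toy Python sketch, with sha256 standing in for both the Poseidon/Pedersen tree and the Celestia share commitment (the real service proves this statement in a ZK VM rather than rerunning it on-chain):

```python
import hashlib

def starknet_commitment(data: bytes) -> bytes:
    # Stand-in for the Poseidon/Pedersen commitment Starknet OS computes.
    return hashlib.sha256(b"starknet|" + data).digest()

def celestia_commitment(data: bytes) -> bytes:
    # Stand-in for the Celestia blob (share) commitment.
    return hashlib.sha256(b"celestia|" + data).digest()

def equivalency_witness(data: bytes, claimed_snos: bytes, claimed_blob: bytes) -> bool:
    """The statement the off-chain service proves: a single preimage yields
    both commitments. On-chain you verify a proof of this program rather
    than re-executing it."""
    return (starknet_commitment(data) == claimed_snos
            and celestia_commitment(data) == claimed_blob)

data = b"state-diff-bytes"
print(equivalency_witness(data, starknet_commitment(data), celestia_commitment(data)))  # True
print(equivalency_witness(data, starknet_commitment(data), b"\x00" * 32))               # False
```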

Ben Goebel:
Well, that’s kind of interesting, and I’m glad you mentioned Piltover, because Piltover is a project that kind of spun out of Blobstream Starknet, just because it’s kind of an obvious part of the appchain stack. So Piltover is part of the Twitter thread announcement; I encourage you to go check it out. What it is: it’s the core contracts basically transposed to Cairo. But one of the things those will free us up to do is experimentation. I mean, those will essentially let us not be locked into “this is exactly how the update state function works for these appchains,” but rather “here are five different ways the update state function can work in the Cairo code for these appchains. Pick whatever’s right for your customizable appchain.”
So especially devs listening to this call, open source devs, we’d definitely love some contributions on Piltover. For all of those different varying architectures that Diego described, we can basically have a trait that has an overall description of what needs to be done in that update state and then implement it in a variety of ways. Cool. I mean, that’s kind of the last thing I had. Do you guys have any other topics or pieces of conversation or questions, or did we leave anything open-ended? What do you think, fellas?
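The trait idea Ben describes could look roughly like this, sketched as a Python interface with two interchangeable update-state strategies; the names are hypothetical, not Piltover’s actual API:

```python
from abc import ABC, abstractmethod

class UpdateState(ABC):
    """One trait-like interface, many DA/verification strategies, as a
    Piltover-style contract could expose."""
    @abstractmethod
    def update_state(self, state_root: bytes, proof: bytes, da_attestation: bytes) -> bytes:
        ...

class RollupUpdate(UpdateState):
    def update_state(self, state_root, proof, da_attestation):
        # Rollup mode: data availability travels with the batch itself.
        return state_root

class ValidiumUpdate(UpdateState):
    def __init__(self, attested_roots):
        self.attested_roots = attested_roots

    def update_state(self, state_root, proof, da_attestation):
        # Validium mode: additionally require a Blobstream-style attestation.
        if da_attestation not in self.attested_roots:
            raise ValueError("data root not attested on Celestia")
        return state_root

chain = ValidiumUpdate(attested_roots={b"root-1"})
print(chain.update_state(b"state-7", b"proof", b"root-1"))  # b'state-7'
```

Each appchain picks the implementation that matches its DA and verification choices, which is the customizability being described.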

Diego:
I mean, I guess it would be interesting to chat about, well, why would someone build a Starknet L3, and why a Validium, I guess? That’s something that maybe we might be missing from this overall conversation. What does this all accomplish really, right?

Ben Goebel:
Yeah. That’s kind of like first principles, right? What are we even doing here? Yeah, I’ve gone back and forth about that a bunch in my head. I mean, the stock answer is customizability, hyper throughput. You’re always going to get some pushback from cynics that, well, just use a database. You know what I mean? But I do believe, I mean, you are maybe lessening the trust assumptions as you walk up the stack, but you still gain security and trust from the stack beneath you in anything you’re doing in blockchain. So I see value in it there, with customization, the security gain, and yeah, hyper throughput or hyper scale. What do you guys think?

Diego:
I think, for one, you can essentially build the application that you would’ve otherwise built on Starknet, but as your own rollup or Validium, and you’re still part of the Starknet ecosystem. There’s a lot of things that you can experiment with. Hopefully once we have versions of this that can be run on test nets, there’s the low-hanging fruit, like having a Kakarot L3 or Kakarot Validium right on top of Starknet. I think when you combine the provable VM paradigm of Starknet with the verifiability and overall cost reduction that things like Celestia afford you, you have more room to experiment. I say this as somebody that, before working at Celestia Labs, worked as a smart contract developer on an application that was on the L1, and it’s always bothersome when you’re thinking about what you’re going to build and you’re like, “Wow, this is so cool. It’s going to be so dope.” And then you realize, “Oh, it’s going to cost way too much money,” not just to develop or deploy it, but more so, who’s going to use this if each interaction is $200? And that’s with low gas fees.
Obviously L2s solved this issue of overall gas fees in the Ethereum ecosystem. But when you have things like L3s or Validiums, where you’re no longer posting the data to Ethereum even after EIP-4844, which has done a good job at reducing fees, you’re now talking about essentially almost zero fees. And in my opinion, the good thing about this combination is that the L3s you’re building are verifiable. They actually have proofs. There are no dangerous games being played with trust assumptions. And yeah, I mean I’m overall excited to see not just things like Kakarot experiment with Validiums and Starknet L3s, or, well, Validiums that are L3s. I don’t know, the wording gets complicated. But things like [inaudible 00:25:59], things like Dojo, et cetera. Especially on-chain games, provable on-chain games, I really feel benefit from this overall scheme or architecture that’s being built with Madara and Validiums.

Ben Goebel:
For sure. There’s almost an “if you build it, they will come” thing, where we don’t even know the coolest things that will be built on an appchain yet. Once it’s here, Dojo and Cartridge have thrown out crazy ideas, like an entire autonomous world in an appchain at the click of a button, that kind of stuff. If you build it, they will come.

Marcello:
Yeah. Maybe one more reason to build a layer three, in my opinion, is that, like Diego observed, whenever you have an application deployed on a layer two, I think it’s kind of natural to transition to an L3 at some point, for many reasons. In my opinion, the most notable reason is this: suppose that you built something amazing, but someone else also built something amazing, which is getting a lot of traction, let’s say on layer one, driving up the gas fees, and eventually you’re also affected. And well, if both applications run on the same layer two, they also run on the same sequencer. You kind of run into the noisy neighbor problem, and with a layer three you work in complete isolation. You can settle whenever you want, when gas is cheap; you basically control your stack. And I think that’s super valuable, because you can abstract so many things away from your users just because of that. It’s probably worth it.

Diego:
No, absolutely. Absolutely. And I mean, there’s a bunch of other stuff that started coming into my head, like oracles, like Pragma, right? That are, if I recall correctly, building out the recent [inaudible 00:27:48]. And a lot of applications benefit from not just the provable VM or Cairo VM; they also benefit from cost reduction, but more importantly verifiability. I think that’s the reason why, without trying to be biased, you would choose something like Celestia DA, or you would choose an alt DA that’s not essentially an AWS S3 bucket that 10 people sign over. You want to have at least some form of security guarantees, if you will.

Ben Goebel:
Yeah. And as far as the use case of having some type of control over your rollup stack and not running into the noisy neighbor problem, it seems to me like that’s basically been proved out. You have things like dYdX and Paradex; these things have created massive amounts of volume, and they don’t have to worry about the noisy neighbor. The only thing this is doing is basically opening that stack to anybody who wants to deploy it. Which is pretty powerful. Cool. Cool. Anything else you guys have?

Marcello:
I think maybe it’s also worth talking about the verification of Groth16 proofs on Starknet. I think we didn’t cover that. But yeah, there is a project called Garaga, which we have the pleasure to be contributors to as well. And yeah, basically soon, I think around the summer, there is a chance that this will be available on Starknet. But also, maybe [inaudible 00:29:38] did say something pretty weird, but actually today it’s even possible to already have [inaudible 00:29:43], in my opinion, because you can prove the execution of Garaga, which requires some hints. You can just prove it with Stone and then verify Stone proofs on Starknet, which is also potentially an interesting architecture, and you can batch this together, which is even funny.

Ben Goebel:
Okay, let me ask you a question. So one of the things that Groth16 opens up is that cleaner architecture of basically verifying the Succinct X… Diego, if you don’t mind muting, or maybe I can do it. So basically verifying the Groth16 proof of the Succinct X platform, which is… What is it called? Grok? No, there’s some Solidity library that does it for L1. It basically auto-generates the function verifier.

Marcello:
[inaudible 00:30:47].

Ben Goebel:
Thank you. Thank you. So on L2, if we unblock this pairing, we can essentially do this function gateway, which is Succinct’s platform in Solidity where anyone can go auto-generate a function verifier for Groth16, and then they can basically prove whatever their circuit is proving. Blobstream X is actually one implementation of a function verifier in the Succinct gateway. So maybe Marcello, can you walk through this? Originally what we were thinking is that once Groth16 gets unblocked on Starknet, we would just implement it in Cairo. But if we did it this way, what would that architecture look like?

Marcello:
I mean, as of today, it’s changing a bit, because there is a new built-in being added to Cairo, which is very specifically modular arithmetic. And this is not yet supported by Stone, I believe, but that’s a good question for some StarkWare folks. But yeah, in a previous version it was possible, from what I recall, to just prove it with normal Cairo. Then you throw it into Stone and voilà, we have a STARK proof that proves the verification of a Groth16 proof. So that was possible, but I think right now, with this new syscall, you just do it directly; there is no need to wrap it. It’s maybe going to enable lower latency, but the question is also going to be around the cost, because maybe it will be the case that it’s still cheaper to verify a storage proof [inaudible 00:32:23].

Ben Goebel:
Interesting. Cool.

Marcello:
But you can also aggregate, right? You can generate a STARK proof that verifies the execution of many Groth16 proof verifications. Yeah, there are a lot of these things that can be used here.

Ben Goebel:
Yeah, go ahead.

Diego:
I just wanted to add one thing to think about: so there’s this V2 of Blobstream X where you would be verifying the already existing Blobstream X proof in Cairo through Groth16 and Garaga and whatnot. And obviously that has the benefit that it now shares the same proof as the Ethereum L1 version, et cetera. But there is also the possibility, I’m not saying it’s a simple task, but you could build Blobstream X, or the logic underneath Blobstream X, which is TendermintX, as a Cairo program. Because if you were to put Blobstream and Blobstream X side by side, the main difference is how you verify that a data commitment or a batch of headers is valid, right? In V1, you have to loop through signatures and verify all those. In Blobstream X, you verify this Groth16 proof of the Plonky3 proof, or sorry, Plonky2 proof, and that’s it.
So in maybe a V3 or V2.5, I don’t know, you could try to build TendermintX using Cairo, and that way you no longer need to wait for this pairing. You could verify it with Integrity. But another thing about TendermintX is that, yeah, it’s used for Blobstream and that would help a lot with Blobstream X, but because TendermintX is proving Tendermint consensus, you could, for example, use this for other applications that might want to talk with or interact with other Cosmos-based chains, or chains that use Tendermint under the hood, basically. I think that’s worth exploring in the future.
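The core of a Tendermint consensus check like the one TendermintX proves is a voting-power threshold over validator signatures. A toy sketch of that threshold logic (a real check would verify ed25519 signatures against the actual validator set; the string-based `verify` here is purely illustrative):

```python
def enough_voting_power(validators, signatures, verify_sig):
    """Tendermint-style light check: accept a header only when validators
    holding more than 2/3 of total power have validly signed it."""
    total = sum(power for _, power in validators)
    signed = sum(power for name, power in validators
                 if name in signatures and verify_sig(name, signatures[name]))
    return 3 * signed > 2 * total

# Toy validator set, voting powers, and "signatures".
validators = [("val-a", 40), ("val-b", 35), ("val-c", 25)]
sigs = {"val-a": "sig-val-a", "val-b": "sig-val-b"}
verify = lambda name, sig: sig == f"sig-{name}"

print(enough_voting_power(validators, sigs, verify))                    # True  (75/100 > 2/3)
print(enough_voting_power(validators, {"val-a": "sig-val-a"}, verify))  # False (40/100)
```

Whether this loop runs as a Cairo program or inside a Groth16-wrapped circuit is exactly the V2-versus-V3 tradeoff being discussed.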

Marcello:
Yeah, definitely. Cairo is pretty powerful. It’s a ZK VM. You can do quite a lot of stuff with it. So literally nothing prevents you from implementing a program that just verifies a bunch of signatures, and then call it consensus verification. I guess you can do it. The question is what’s going to be the benefit of that? Probably you avoid going through Groth16, so you don’t need to generate the Groth16 proof, which probably takes a while. You just go directly through a STARK. Yeah, there is a lot of possibilities. The question is-

Diego:
Worth benchmarking.

Marcello:
Yeah, is it actually worth it?

Diego:
Yeah.

Ben Goebel:
Let’s build it. Let’s see. Sweet. Sweet. This has been an awesome conversation, guys. Yeah, I want to thank Marcello for joining, I want to thank Diego for joining, and thank both you guys for helping with the integration to where we’re at. There’s obviously a lot of exciting work to be done. We’ve kind of just talked about a lot of avenues to take, a lot of code to write. So if you are listening to this and you’re a dev, we would love to have you. The repo is github.com/keep-starknet-strange/blobstream-starknet, and you can also find us on Twitter or on Telegram. So yeah, once again, thank you guys. I really appreciate it. This has been awesome, and have a great afternoon or evening.

Marcello:
Thanks.

Diego:
Have a good one.

Marcello:
Bye.