
Tarun Chitra: Drinking 3 Redbulls a Day, Gauntlet’s Financial Modelling Platform, and the Future of Governance

Dec 28, 2021 ·

By Tom Shaughnessy

The Delphi Podcast Host and GP of Delphi Ventures Tom Shaughnessy sits down with Tarun Chitra, Founder of Gauntlet, a financial modelling platform that uses battle-tested techniques from the algorithmic trading industry to inform on-chain protocol management. The two discuss the intricacies of Gauntlet’s risk models, the future of governance, the state of artificial intelligence, and much more!


00:00 • Tom
Hey everyone. Welcome back to the podcast. I’m your host, Tom Shaughnessy. Today, I’m thrilled to have Tarun on who is the founder of Gauntlet and GP of Robot Ventures. He often confuses me with his tweets that are way too smart for me to understand. Tarun, how’s it going? 


00:15 • Tarun
Good. I’m drinking a sugar-free Red Bull, since people on the internet didn’t believe I do. I actually drink three of these a day.


00:23 • Tom
I saw that. What’s the cadence? Is it just immediately wake up and chug one or shower first, then Red Bull – what is it?  


00:29 • Tarun
To be honest, if I had them at home I would immediately drink them. As a form of making sure I don’t drink too many, I make myself have to go out and get one. I know that sounds ridiculous, but I don’t keep them at home because I know they will disappear. 


00:44 • Tom
I thought you’d have a recurring Amazon purchase going every day.


00:48 • Tarun
That is dangerous. The number I consume would only go up.


00:54 • Tom
Jeez! We’ll have your intellect until you have a heart attack – not that I wish that on you.


01:01 • Tarun
There’s this famous mathematician, Paul Erdős, who used to say a mathematician is a device for turning coffee into theorems. In general, I think caffeine is a hell of a drug.


01:14 • Tom
I’m drinking concentrated caffeine that I buy and mix with water in case I don’t have time to run out, so I’m a fellow addict. Tarun, you have quite the intro story to crypto. I listened to your podcast – I think it was the ZK podcast – about your backstory. It’s pretty interesting, right? Can you give us a rundown of how you got into crypto, how you got started working at D. E. Shaw, and this whole conversion? It’s a pretty cool story.


01:40 • Tarun
For sure. In 2011, I was working at a place called D. E. Shaw Research. We were building hardware for doing drug discovery – ASICs for running these simulations. At the same time, the first Bitcoin ASICs were being created, and we put out a chip order for $25 million to one of our suppliers. We escrowed the money. Then the chip supplier just ghosted us for six months. When they came back they said, “Hey, we’ll give you a 10% discount.” Of course, anyone who’s built a large engineering project would reply, “F**k you! You set us back six months.” Six months is a long time in a development process. They kept being cagey, so eventually we said, “Look, we’re never going to use you again unless you tell us.”


02:44 • Tarun
So then they told us, “Oh, well, this Bitcoin mining thing…” That was when I thought, “Okay, wow! This thing is crazy. I’m going to go mine some myself, because it sounds insane if people are building hardware for it.” I had some GPUs, so I mined a bunch and sold it all in 2013 at the bottom – so don’t follow my mining advice, ever. I paid attention from then onwards. The interesting thing is that we built this custom hardware, but we also worked really closely on distributed systems. We built these machines that had hundreds to thousands of nodes inside of them, with a custom networking layer and all this stuff. A lot of the people I worked with were experts in distributed systems, but basically zero of them believed that crypto worked. I was in that camp, because a lot of the traditional, classical theory of distributed systems doesn’t really work when you try to use it to reason about cryptography and crypto systems.


03:50 • Tarun
Given your audience, I realized maybe I should explain that slightly more carefully. In databases, there’s this famous theorem called the CAP theorem, which says that you can only get two out of the three following properties: consistency, availability, and partition tolerance. Consistency is: if I submit a transaction, then everyone agrees that this transaction has made it through. Availability is: the network can always receive new transactions. Partition tolerance is: there is no way for an adversary to split the network such that you end up on two forks, effectively. The CAP theorem says you can only get two out of three of these. This whole blockchain trilemma meme comes from the fact that there are these kinds of distributed systems theorems that say, “Here are three properties, and you can only get two out of three of them.”


04:57 • Tarun
Now, the reason blockchains are weird is that they technically try to guarantee you all three, right? Naively, in distributed systems, you’d say that can’t happen – “You’re just bullshitting me. You’re some dumb undergrad” is usually how people would think about this. The thing you have to understand about crypto systems is that they actually guarantee you all three, only under certain circumstances. They say something like the following: “You get two out of three of the CAP theorem properties, and you get the third one with 99.9% probability.” Different blockchain systems will give you different guarantees on those. At the time, there was no theory for this notion that you could get all three, but only probabilistically. It took until about 2015 for that to actually be rectified. I think that was when I first started thinking, “Okay, there might be something real here.”
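A minimal Python sketch of that “probabilistic third property” idea, using the classic gambler’s-ruin result from the Bitcoin whitepaper; the 30% attacker share and confirmation counts here are just example numbers:

```python
# Toy illustration: a transaction buried z blocks deep is only probabilistically
# final. An attacker with fraction q of hash power (q < 0.5) eventually catches
# up from z blocks behind with probability (q/p)^z, where p = 1 - q.

def revert_probability(attacker_share: float, confirmations: int) -> float:
    """Probability an attacker ever overtakes an honest lead of `confirmations` blocks."""
    q = attacker_share
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker wins eventually
    return (q / p) ** confirmations

for z in (1, 3, 6, 12):
    print(f"{z:>2} confirmations, 30% attacker: "
          f"consistency holds with prob {1 - revert_probability(0.30, z):.4f}")
```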


05:58 • Tarun
After that, I went to work in high-frequency trading. One of the things you do in high-frequency trading is stress test your trading strategies by looking at how your strategy does against other types of strategies you know of in the market. One interesting thing was that, in proof-of-stake protocols, these CAP-theorem-style guarantees are not strictly dependent on cryptography. They’re also dependent on how easy it is for an adversary to accumulate a lot of stake. That’s unlike proof-of-work, where aggregating a lot of hash power is actually quite hard if: a. you have a large network like Bitcoin or ETH, and b. it’s very decentralized geographically, which means you have to control a bunch of different energy markets. In proof-of-stake, that’s not really true.


07:00 • Tarun
You can imagine making a lending product or a yield farm where everyone puts their stake into the yield farm. The yield farm aggregates 33% of stake, then rug pulls and uses that to attack the network. That’s something that can’t easily happen in proof-of-work. That led me down the rabbit hole of trying to simulate these types of attacks. That’s the long story short, and then I just never came back out of the rabbit hole.
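A rough sketch of that stake-aggregation scenario – a yield farm whose APY gap pulls in delegations until it crosses the 1/3 BFT safety threshold. All rates and migration dynamics here are hypothetical:

```python
import random

TOTAL_STAKE = 1_000_000
BFT_THRESHOLD = 1 / 3

def simulate(farm_apy=0.25, native_apy=0.08, migration_rate=0.02, epochs=200, seed=0):
    """Each epoch, a fraction of outside stake migrates toward the higher yield."""
    random.seed(seed)
    farm_stake = 10_000.0
    for epoch in range(epochs):
        outside = TOTAL_STAKE - farm_stake
        # stake chases yield: the larger the APY gap, the faster the migration
        pull = migration_rate * (farm_apy - native_apy) / native_apy
        farm_stake += outside * max(0.0, pull) * random.uniform(0.5, 1.5)
        if farm_stake / TOTAL_STAKE > BFT_THRESHOLD:
            return epoch  # earliest epoch at which a rug could threaten consensus
    return None

print("Epoch at which the farm crosses 1/3 of stake:", simulate())
```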


07:29 • Tom
That’s a hell of a journey. I would have guessed that you went straight from distributed systems to crypto. I didn’t know HFT was in the middle – there was a gap there.


07:40 • Tarun
Yeah. A lot of people from D. E. Shaw Research went to DeepMind, FAIR, academia, or Jump. Jump’s Head of Research used to work with me. There’s a reason that in HFT people build a lot of hardware and do a lot of distributed systems. A simple way to think about it is that there are approximately thirty exchanges in the U.S. You have to co-locate your server at a bunch of different data centers and then synchronize them, so it turns out to be a distributed systems problem.


08:28 • Tom
That’s pretty cool. Did anybody else from your earlier days end up in crypto? Were you able to convince any of them after the fact? 


08:35 • Tarun
I think people from HFT, for sure. Some are still in the denial phase. NFTs seem to have flipped them a little bit, but not in some meaningfully obvious sense. I don’t think they’re jumping to go join crypto, but they have less skepticism. For better or for worse – and someone actually had a tweet about this today – Web3 as a marketing campaign for crypto was probably one of the most successful things ever so far. It really did convert a lot of people I knew from thinking of crypto as drug dealing and money scams to thinking about it as the future of the internet. I don’t know exactly who did that, but it was such a powerful marketing tool. It convinced Facebook to change their name. Who invented that term? I remember hearing it in 2017. I don’t know who coined it, but it clearly worked, whoever pounded the pavement.


09:42 • Tom
I agree with you. It’s catchy, and it affects a lot of people. DeFi is hard to understand: economic attack vectors, staking, tokens, etc. NFTs are like, “Hey man, here’s a Pudgy Penguin. Go wild,” and they’re sold.


09:57 • Tarun
I don’t know if Pudgy Penguins is the example you want to use given their drama recently.


10:04 • Tom
That’s fair. Maybe we’ll go with something else later. We’ll edit that out and change it. It’s pretty cool to hear your arc and how you started simulating models, doing these attacks, etc. Was Gauntlet founded soon after your move to HFT?


10:20 • Tarun
I worked in HFT for two years. Basically, what happened was: in 2016 or January 2017 (whenever the Gnosis ICO happened), that was what made me go, “Oh, wow! People are putting this much money into this stuff again.” This was a little before the Tezos / Filecoin ICOs. That was when I wanted to see what the academic papers said. One thing I do vaguely recall is that an increase in academic paper quality is a leading indicator that there’s going to be another cycle in crypto. When the academic papers get crappy, that means people made too much money.


11:11 • Tom
That’s a cool indicator. Usually mine is Coinbase being a top app and my dumb friends being rich, but that’s a way better indicator. 


11:19 • Tarun
Well, this one is a very low-frequency indicator. It will tell you that there’s a cycle coming in one or two years. That was when I started reading all these new proof-of-stake papers. I remember NXT and Peercoin and all that stuff. Those things didn’t seem to make that much sense. The Algorand paper was what made me first think that people were actually using their brains for once while writing these papers. Thinking through the attack vectors… I started reading it and realized they were missing one type of thing that people in finance would try to exploit in these systems. Then I started trying to write simulations that looked like what we used in trading. At some point, I started going to a lot of meetups and talks. I’d talk to people building these things or raising money, and they’d say they hadn’t really thought that far ahead about it.


12:13 • Tarun
They’d ask, “Do you want a job?” I thought, “I’m not leaving trading for these sketchy founders,” but I was open to doing consulting. I was consulting through 2018. At some point I wondered why I was even working the trading job – there were so many people who wanted this stuff. Slowly but surely, what happened is Libra tried to acqui-hire my consulting company in June 2018. Then I thought, “Maybe there’s a way to make data science software for developers who don’t want to learn any finance.” I met my co-founder around that time through some mutual friends, raised money, and here we are.


13:06 • Tom
Did you have any idea of doing high-frequency trading for crypto? That would have been a natural transition for you.


13:13 • Tarun
While I like the challenge of HFT, I like the building and analyzing side a little more. The way I view it is that you can always go back to trading. Humans have to consume things, so there’s always some way you can go back to doing some type of market making or longer-term strategies. They’re never disappearing. They might change, or the technology that’s used might change, but trading will never disappear. Opportunities to build something in a new space don’t come around very often. One FOMO moment for me, when I was working at D. E. Shaw, was that some of my earliest colleagues went to join DeepMind or Google right at the beginning, or were among the founding people at Facebook AI Research.


14:18 • Tarun
At that time, I questioned who believes this AI s**t. You can’t prove any theorems about it. Does it actually work? Of course, it took 10 years before you got some real evidence, but sometimes you have to trust your instinct and realize, “This is actually a new phenomenon with a lot of unexplained stuff, and you can do some interesting work in it.” As I started going into crypto more, it felt more like that than just trading. Of course, you can do trading and be fine. But there aren’t many opportunities where there’s a totally new paradigm happening and you can make a much larger impact by building things.


15:08 • Tom
That makes sense. Honestly, if you’re going to go to AI eventually, I’d rather go crypto than AI first so that you have a lot of time for AI to actually develop. You can make some good money in crypto and change the world, and then move over, but I don’t really know the timeline.


15:22 • Tarun
I ethically don’t like the AI industry. In a lot of ways, it still ends up being the military-industrial complex if you squint enough. It’s like, “Ads or spying?” I have all these friends who are really smart and do all this stuff, but they never ask themselves who is using the algorithms they wrote. If you squint enough, you see it ends up being something like leaders in China using it to oppress people. I don’t really want to be a part of that.


15:53 • Tom
You’re on the opposite end of the spectrum. You’re on, “Crypto is supposed to be open for all.” 


15:58 • Tarun
For sure. When you ask, “Are a lot of your old colleagues in crypto?” – a lot of them went more to the AI side. They still view crypto as somewhat of a scam, but they don’t view their own industries as helping to oppress some people in poor countries. It’s a weird dichotomy to me that they don’t realize that.


16:29 • Tom
It’s weird. Honestly, I just think it’s lazy. They don’t want to spend the time to understand the other side. I have a couple of AI questions for you, but I’ll save them for the end so we can get into Gauntlet first. Give us an overview of Gauntlet. From the outside, I think people understand that the Gauntlet platform has a risk desk, you have continuous optimizations for projects, you talk with the community and have ongoing communications, and you execute changes to optimize these DeFi projects. That sounds great, but it’s a very broad, basic stroke. What exactly is the Gauntlet platform doing? If you want to throw in an example of a project that you’re working with, that could be helpful too.


17:16 • Tarun
The idea is that DeFi protocols are these autonomous creatures that users can interact with. There are many different types of users in them, perhaps more than in normal financial products. There is a lot of complexity hidden in them in that they have a lot more parameters. These parameters are things like fees, margin requirements, interest rates, expected payoffs, things like that. In general, a centralized exchange or a centralized entity has a risk desk or some type of optimization desk that says, “We’re going to change this margin requirement because suddenly this type of collateral is crashing, or there’s some type of bad behavior happening.” One day, I hope DeFi protocols can do such things autonomously, but remember – we’re running things on a TI-83, so there’s not much capacity. You can’t do as much as you’d like to do.


18:40 • Tarun
One thing that’s quite important is that you have a lot of off-chain analysis and intelligence driving the choices of those parameters. As crypto and DeFi grow and as there’s more compute capacity, more and more things will be done fully on-chain. Right now, we can do these basic, really awesome operations and hook them up together, but the monitoring and analytics for these things still have to be done off-chain. A lot of what we do is run a bunch of analytics across different blockchains. We take data from different exchanges, parse it, and then try to analyze it and say, “These are the different types of users that are currently in the system,” and categorize and tag them. Based on that, we make models of them. Imagine that’s like making an AI bot that’s supposed to replicate that user.


19:37 • Tarun
The cool thing about blockchains, unlike the rest of finance, is that a user’s interactions are all public (at least right now). You can try to make a little AI bot that replicates their historical behavior on-chain. You could think of it as: we make AI bots for all these different types of users, even some users that we haven’t seen, like the chaos monkey user – the one that’s trying to break everything. We then take all the users and imagine we have something like a BattleBots stadium, and they’re all playing against the blockchain, trying to optimize their profit. The profit function they have is something you fit based on what you’ve seen in their data. We run simulations of how they would compete against each other. From those, you can see things like, “If the fee is this, then this type of user takes most of the value in the system.”


20:42 • Tarun
“But if the fee is that, this other type of user takes most of the value in the system.” From there, you can ask, “How do we get to an equilibrium where it’s stable, using different KPIs?” Deciding on those is partially up to what the community wants. Communities will say, “We want to be a really risky lender and make as much revenue from the protocol as possible.” Or, “We want to be super safe because we want TradFi banks to put money in our protocol one day.” Based on that, you can optimize what happens. It’s a little bit complicated, but hopefully that gives a rough view of the whole loop.
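A stripped-down sketch of that loop – fitted agent types competing under a swept fee parameter, with the winner tracked per setting. The agent types, payoffs, and numbers are illustrative, not Gauntlet’s actual models:

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    kind: str              # e.g. "passive_lp", "arbitrageur", "chaos_monkey"
    aggressiveness: float  # fitted from historical on-chain behavior

    def act(self, fee: float) -> float:
        """Return this agent's per-round profit given the protocol fee (toy payoff)."""
        if self.kind == "arbitrageur":
            # arb profit shrinks quickly as fees rise
            return max(0.0, self.aggressiveness * (0.05 - fee)) * random.uniform(0.8, 1.2)
        if self.kind == "passive_lp":
            # LPs earn the fee, minus impermanent-loss-style noise
            return fee * random.uniform(0.5, 1.0) - 0.002
        # chaos monkey: random large positive/negative shocks
        return random.gauss(0.0, 0.02)

def sweep(fees, agents, rounds=1000):
    for fee in fees:
        pnl = {a.kind: 0.0 for a in agents}
        for _ in range(rounds):
            for a in agents:
                pnl[a.kind] += a.act(fee)
        winner = max(pnl, key=pnl.get)
        rounded = {k: round(v, 1) for k, v in pnl.items()}
        print(f"fee={fee:.3f}  most value captured by {winner}: {rounded}")

agents = [Agent("passive_lp", 1.0), Agent("arbitrageur", 1.5), Agent("chaos_monkey", 1.0)]
sweep([0.001, 0.01, 0.03, 0.05, 0.08], agents)
```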


21:24 • Tom
From the outside looking in, if we didn’t have Gauntlet, how would people do this? I feel like most of the time, projects and protocols launch and attract a lot of attention. Then there’s a hack or an issue. They solve it, hopefully save face, optimize their project, and move on. Your approach is the complete opposite of that, right? Is your approach to figure out what can go wrong before it goes wrong, or is it more about value flows to specific stakeholders, to optimize protocol revenue or whoever you want to incentivize?


21:58 • Tarun
It’s a combination of both. Risk is the part about keeping these things safe as a function of how market conditions change, which is different from smart contract security. Smart contract security asks, “Is there absolutely any way that this bad thing could happen?” However, in a lot of protocols, bad things should be allowed to happen, but should only happen in rare situations. If you don’t parameterize it well, you might make a situation that’s supposed to be rare suddenly more common, which will lead to some value loss for users. On the other hand, there’s always this trade-off between risk, i.e. mitigating those bad scenarios, versus protocol revenue and redistribution of assets within a protocol. That’s the capital efficiency side of things. What we do is – the community says what they want, right?


22:57 • Tarun
Imagine there’s a spectrum between those two. They say, “Hey, we want this point,” which is like, “We want to take no risk and we’re willing to trade revenue for it,” or, “We want a ton of revenue.” Then we optimize for that and help make models, visualizations, and dashboards that are hopefully interpretable without you having to really understand the little details. One thing we learned when we started the company: we initially thought we would make this platform for developers to write their own models of these users, using their own data. There have been people who’ve tried to make different types of projects for doing simulations. What I found is that, in general, most developers in crypto don’t want to do a lot of the statistics work, which is a different way of thinking about things. You need to generate hypothesis tests. You need to do lots of verification, like, “This type of property should be true in the system.”


24:15 • Tarun
“Can we go stress test that this is true?” You have to construct those properties. It’s a different mindset – the engineer mindset versus the quant mindset. One is very focused on looking at probability distributions, tails of things, error rates, things like that. The other is really focused on writing tests and making sure that the sequence of things goes as expected, versus, “Statistically, how do users use something?” A lot of what we do is closer to what quants do, or what people at big tech companies do for incentive management for Uber drivers or A/B testing of features – that type of stuff. It’s a hybrid of the two. One of the things I’ve observed is that, even when people have the tools to try to do this analysis themselves, they often either get intimidated, because it’s foreign to them relative to their other analysis…


25:27 • Tarun
…or they write a model that is too weak. That weak model basically says everything is safe, like, “This chaos monkey I wrote didn’t destroy the system, so it must be a hundred percent safe.” In a field that’s advancing as fast as DeFi, where the attacks are getting insanely more sophisticated, you really have to keep building new models of all the different types of attackers and users continuously. I think that’s one thing that’s quite important.
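A toy example of that “weak model” trap: a solvency property that passes under a naive random chaos monkey but fails once the adversary is allowed to act strategically. The collateral, debt, and price figures are hypothetical:

```python
import random

def protocol_is_solvent(collateral, debt, price):
    """Property we want to hold: collateral value covers debt at the current price."""
    return collateral * price >= debt

def random_chaos_monkey(trials=10_000):
    """Naive adversary: random small price shocks around the current price."""
    return all(protocol_is_solvent(100, 70, 1.0 + random.gauss(0, 0.05)) for _ in range(trials))

def strategic_adversary():
    """Strategic adversary: push the price to the worst point it can plausibly reach."""
    worst_price = 0.65   # hypothetical: what a thin order book could be pushed to
    return protocol_is_solvent(100, 70, worst_price)

print("random chaos monkey says safe:", random_chaos_monkey())   # very likely True
print("strategic adversary says safe:", strategic_adversary())   # False
```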


26:06 • Tom
To take it a step further, I went on your website and saw the Compound risk management dashboard. The byline is super helpful, right? You guys ran thousands of simulations across various volatilities and parameters. You figured out the risk parameters you wanted to use to balance and optimize capital efficiency and value at risk, right? You mentioned earlier that, since everything in crypto is public, you can basically use people’s past actions to see what they’re going to do, and do variations on that. How often are these risk models updating? If Compound has 10x the number of users making 10x different decisions, how do you guys include that growth / that difference / that change to continue this optimization model long-term?


26:58 • Tarun
One of the things that might be less understood about what we do as a platform is that we’re constantly retraining these models. Every hour we take the data from the blockchain and refit these models, and then daily we run these kinds of re-optimization routines. Over time, we’ve spent a lot of time building infrastructure and getting the backend running (not just the actual raw simulation) such that we monitor the blockchain, we see some new type of event, and when there’s a new event, it triggers a new optimization run, which will basically say, “Parse all the blockchain data. Try to find what caused this new change, and then rerun all these simulations.”


27:52 • Tarun
That will potentially generate new user types. It really is a continuous monitoring type of situation where we’re constantly retraining things and constantly refitting what the liquidity looks like on different exchanges. For instance, these liquidity attacks, like the recent Cream Finance attack (which didn’t affect Aave but almost did – we were in the war room on that, which was a little bit crazy) – that was very dependent on the expected amount of on-chain liquidity for certain LP shares. In theory, if everyone who had some of these assets on centralized exchanges withdrew them and put them in the LP share, then the attack would be profitable. Now your game is: how do you estimate which types of users would actually do that? And, is it actually possible? How much of the token supply is frozen, in the sense that it never touches on-chain stuff and just sits on centralized exchanges?


28:52 • Tarun
What supply, when it gets on-chain, actually makes it to being a liquidity provider? Stuff like that is what we’re constantly refitting, because it really represents user behavior, not necessarily the absolute worst case. Of course, the users could all collude and move all their assets at the same time. But historically, there’s a certain fraction who are centralized traders who never move their coins, etc. That type of stuff is what you have to keep modeling. When you see events on-chain that tell you something new is happening, that’s when we automatically retrain our models to try to find new agents. Most of the process for that is automated. The key thing is that, kind of like data science pipelines at big tech companies, you’re constantly adding new features when you notice that your model quality goes down.


29:51 • Tarun
You’re monitoring your model’s prediction quality relative to what’s happening in practice. Anytime you see a deviation, you manually analyze it and try to add a new feature type / a new user type that represents that deviation. You’re slowly building up this library of different types of users and features that you keep expanding over time, if that makes sense.
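An illustrative sketch of that monitoring loop – poll chain data, watch prediction error, and trigger a re-fit and re-optimization on drift or a new event. The functions below are stubs standing in for a real pipeline, not Gauntlet’s actual infrastructure:

```python
import random
import time

def fetch_chain_snapshot():
    """Stub for the hourly pull of on-chain state, exchange data, and liquidity."""
    return {"block": int(time.time()), "liquidity": random.random()}

def model_error(model, snapshot) -> float:
    """Stub: deviation between predicted user behavior and what actually happened."""
    return random.random() * 0.3

def refit_agents(snapshot):
    """Stub: re-fit agent models from fresh data; may surface brand-new user types."""
    return {"agents": ["lp", "arbitrageur", "liquidator"], "fit_at": snapshot["block"]}

def rerun_parameter_optimization(model):
    """Stub: re-run the simulations and re-optimize protocol parameters."""
    print("re-optimizing with model fit at block", model["fit_at"])

DRIFT_THRESHOLD = 0.15   # hypothetical tolerance on prediction error

def monitoring_loop(model, is_new_event, cycles=5):
    for _ in range(cycles):                      # in production this would run forever
        snapshot = fetch_chain_snapshot()
        if is_new_event(snapshot) or model_error(model, snapshot) > DRIFT_THRESHOLD:
            model = refit_agents(snapshot)       # hourly-style re-fit
            rerun_parameter_optimization(model)  # daily-style re-optimization
        # time.sleep(3600)  # roughly the hourly cadence described above
    return model

monitoring_loop({"fit_at": 0}, is_new_event=lambda s: s["liquidity"] > 0.95)
```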


30:16 • Tom
I’m trying to figure out the best way to think about this, not being on the inside. When you think about adding a model or a user or a simulation for a project, are these independent? Or are they all interacting with each other? I’m not sure the best way to phrase that, but… if you’re crazy user one and I’m crazy user two, is it just separate simulations of what we can each do? Or does it also include us interacting with each other? I’m botching this question, but I think you know where I’m going.


30:49 • Tarun
That’s a great framing. I have a model for one type of user and a model for another type of user – do I model the joint distribution over all their joint actions, or do I model the individual actions independently? The answer is somewhat nuanced. If I always modeled the joint distribution, well, that grows exponentially in the number of agents. It’s crypto, so the number of agents is much higher than in normal finance in some ways – exponentially higher, actually, and it could be worse in some cases. The key is that you try to statistically identify which combinations work together. Some of that is thinking about how a user who borrows from Compound might often take that money and put it into an LP share. This is a somewhat contrived example.
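A toy version of that statistical-identification step: estimate which action pairs co-occur on-chain, then bias the simulation’s joint scenarios toward them while keeping a small exploration budget for rare combinations. The co-occurrence counts here are made up:

```python
import random

# how often we've observed action A followed by action B for the same address
cooccurrence = {
    ("borrow_compound", "add_lp_share"): 120,
    ("borrow_maker_dai", "deposit_aave"): 95,
    ("borrow_compound", "deposit_aave"):   4,   # rarely seen together
    ("borrow_maker_dai", "add_lp_share"):  6,
}

def sample_joint_scenarios(n, explore=0.1):
    """Mostly sample frequent pairs; reserve a small 'chaos monkey' budget for rare ones."""
    pairs = list(cooccurrence)
    weights = [cooccurrence[p] for p in pairs]
    scenarios = []
    for _ in range(n):
        if random.random() < explore:                       # uniform exploration
            scenarios.append(random.choice(pairs))
        else:                                                # prior from observed data
            scenarios.append(random.choices(pairs, weights=weights, k=1)[0])
    return scenarios

picked = sample_joint_scenarios(1000)
for pair in cooccurrence:
    print(pair, picked.count(pair))
```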


31:49 • Tarun
Let’s say you statistically observe that users who take loans on Compound put them into an LP share, but users who take loans from Maker put them into Aave – like they borrow DAI to farm. Now you can say, “This type of action is correlated with this other type of action, so we will focus more on simulating the joint case where they happen together.” But, “We don’t actually see this other pair of actions correlated, so we’ll sample it very infrequently.” You try to form some prior knowledge / prior distribution based on what you see in practice. You also try to do some searching: “Let’s see if we can chaos monkey things, but don’t make that the majority of the search.” You see a similar type of problem in AI, when people make bots that win at poker or Diplomacy or things like that. What they do to try to cover the entire action space…


33:07 • Tarun
…is they run these simulations where they say, “Let’s pretend I’m playing poker. I model all the other people playing poker. I take my poker strategy and roll it out a certain number of steps into the future. I look at which branch did the best. I play the move of the branch that did the best.” Based on that, you do this thing called counterfactual regret. You say, “I did this move. What was my regret?” In the sense of, “If I redo the same simulation after someone else plays a move after me, do I get a worse or better outcome? Do I have a higher or lower profit?” That is one way of pruning these types of spaces. In crypto it’s a little bit different. You have to make sure you get all the virtual machine and low-level semantics correct, so it’s not as easy as poker in some ways.


34:00 • Tarun
At a high level, there’s all sorts of techniques for doing this. In AI, this is quite a big topic. The answer is it’s a hard question. 
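For the counterfactual-regret idea, here is a minimal regret-matching sketch – the core update behind CFR – shown on rock-paper-scissors against a biased opponent rather than a DeFi action space:

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if a beats b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    wins = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    return 1 if (a, b) in wins else -1

def strategy_from_regret(regret):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def train(rounds=50_000, seed=0):
    random.seed(seed)
    regret = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        strat = strategy_from_regret(regret)
        my = random.choices(range(3), weights=strat)[0]
        opp = random.choices(range(3), weights=[0.5, 0.25, 0.25])[0]  # plays rock half the time
        for alt in range(3):  # counterfactual: how would each alternative have done?
            regret[alt] += payoff(ACTIONS[alt], ACTIONS[opp]) - payoff(ACTIONS[my], ACTIONS[opp])
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

# average strategy converges toward paper, which exploits the rock-heavy opponent
print(dict(zip(ACTIONS, [round(p, 3) for p in train()])))
```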


34:20 • Tom
Your answer definitely helps. I realized how badly I worded it, but your explanation makes a lot of sense. Let’s say you’re on autopilot: you’re continuously updating these models, you’re pulling data. Do you think we’ll ever get to a point where a project like Compound or others automatically uses the outputs to optimize? Or is it something like, “Here’s a flag from the Gauntlet platform. You should change X to Y – please take it to a governance vote and act with a multisig or something.” Do you think we’ll ever get to a point where it’s fully automated?


34:57 • Tarun
One thing we’re starting to see, and I think this will be true overall, is that governance will do things like vote once every quarter or half year to delegate responsibility to certain entities, who can then be evaluated based on their performance on-chain. That will be one form of automation. The second form you’re asking about is whether these parameters can update themselves. Can there be some algorithm that’s constantly updating them? I think that will happen, but I do think we’re very far from it. Part of the reason is that we don’t quite know the objective function we’re choosing, like what it means for an interest rate to adjust itself correctly. Is the goal of the interest rate adjustment to increase borrow demand? Is it to maximize lender revenue?


36:10 • Tarun
Those two things can actually be different in some weird cases. Is the goal of the interest rate adjustment to ensure that it’s really safe, so that riskier borrowers are priced out? The choice of that measurement completely changes the choice of the mechanism you optimize these things with. I would say we spend a lot of time trying to figure out what those KPIs / those objectives are, and then, based on that, try to optimize. I do foresee a future where, if you’re given enough of a “these are the things you want optimized,” and it’s very clear what those functions are, and it’s easy to measure them on-chain, and, finally, it’s easy to make a mechanism that’s computationally cheap to implement and can optimize them, then you can do those things.
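A toy illustration of why the objective choice matters: the same interest-rate sweep yields different “optimal” rates depending on whether the target is borrow demand, lender revenue, or a utilization-based safety cap. The demand curve and numbers are hypothetical:

```python
def utilization(rate):
    """Hypothetical: borrow demand falls off as the rate rises."""
    return max(0.0, 1.0 - 8.0 * rate)      # 0% rate -> 100% utilization, 12.5% -> 0

def borrow_demand(rate):
    return utilization(rate)

def lender_revenue(rate):
    return rate * utilization(rate)        # rate earned on the capital actually borrowed

def safety_score(rate, cap=0.80):
    # penalize utilization above a cap, where liquidations get dangerous
    return -max(0.0, utilization(rate) - cap)

rates = [i / 1000 for i in range(0, 126)]
for name, objective in [("borrow demand", borrow_demand),
                        ("lender revenue", lender_revenue),
                        ("safety", safety_score)]:
    best = max(rates, key=objective)
    print(f"objective = {name:14s} -> optimal rate ≈ {best:.3f}")
```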


37:08 • Tarun
Now I know people are doing experiments and we see all sorts of things where people are trying to do that. It does feel like it’s a little cart before the horse because we don’t actually know the final thing that we’re trying to optimize towards, if that makes sense. We’re just choosing random ones.


37:25 • Tom
It sounds like we might need your friends on the AI side to do the subjective analysis eventually. 


37:30 • Tarun
The cool part about this stuff is that all the data is public, so you can go analyze it afterwards. That’s what we do right now. I foresee something there (never say never – maybe one day we actually do try to do a protocol for this type of thing). I think there will be; I just don’t think it’s going to happen in a year. In the same way that we needed the last bear market to get, e.g., Uniswap (a new mechanism that changed things), I think we need another bear market for people to actually work on these types of things. Right now, there’s too much incentive to just copy-pasta something that exists and Frankenstein something together, as opposed to rebuilding something from scratch.


38:25 • Tarun
I do feel like bear markets are good times because whoever’s still around is committed to building what they’re building. 


38:32 • Tom
I agree with you. You get a lot done during the bear market. This might be a naive question, but if we get to a place where this is automated and the parameters are updated in real time, per se, would the attack vector then be on the people introducing new models to the simulations? Would the inputs to the model then be the attack vector instead of messing with the end DeFi project? I’m just trying to figure out where the attack vector is there.


39:05 • Tarun
It would be more like that, for sure. Maybe in a 10-year timeframe I could say, “Look – there’s a decentralized version of Gauntlet that does the simulations on-chain,” and whatever. It’s very compute-constrained. There’s a reason that Google sometimes consumes more power than all Bitcoin mining combined to train a model, like to train BERT or to train AlphaGo, right? Which, by the way, to all the climate haters: “You don’t care about that, but you care about Bitcoin mining?” AI power usage is like five times Bitcoin mining.


39:51 • Tom
It doesn’t make any sense. Again, back to the biases. 


39:53 • Tarun
My point is, there’s a reason they’re using so much compute. Again, blockchains are TI-83s… maybe a little better – TI-89s now. That’s a dated reference that shows my age. They have the power of a calculator – let’s put it that way.


40:12 • Tom
I’m picturing you as Johnny Depp in Transcendence, just going into this decentralized version of Gauntlet in the future.


40:20 • Tarun
The compute capacity of these chains needs to increase, and we just haven’t gotten there. I am very chain-neutral, I would say. I do think that, in the design space, on the Ethereum side and on the side of a lot of other Layer 1s, people don’t really care that much about performance. Obviously, decentralization is important, but they definitely don’t care about end-to-end performance time. You’ve got to give Solana a lot of credit: in spite of a lot of the trade-offs they made, they really stuck to that horse through the bear market – that compute power is the only thing you want to optimize for.


41:15 • Tarun
I do think there is a world in which blockchains get closer to that, but we’re quite far from it right now. Getting there involves a lot of new cryptography, I think.


41:27 • Tom
That’s pretty cool. To circle back a bit – I have this conversation with my partner Jose a lot – on the future of governance, we think it’s kind of dumb that everyone has to vote on everything, right? There are some projects that have risk committees. I think you mentioned that protocol users could vote to use Gauntlet via a specialized committee or something. I’m trying to figure out: how do you view the future of governance, broadly? Do you think we’re going to be in a space with specialized subcommittees, or a place where everyone wants one entity, or quadratic voting? How do you see the best model playing out?


42:03 • Tarun
I don’t totally know. I do think there’s a reason representative democracies have generally been the ones that survived the longest so far. I have this rough feeling that governance will still have some form of delegation via representatives. At the same time, one of the interesting things about crypto is that, in the current political process (for the U.S., where I live – I don’t know other processes that well), there’s this idea that we rely on the benevolence of the voter to actually turn up and vote. In the judicial branch of government, we do this proof-of-stake-like thing where we randomly select people and put them on a jury, right? Why do we have this separation where, for the executive or legislative branch, we rely on benevolence for people to choose to actually show up, versus the judicial branch, where you’re forced to go to jury duty or you go to jail?


43:42 • Tarun
The interesting thing is that blockchains combine the powers of all three while keeping them separate. They have a portion of the executive branch, a portion of the legislative branch, and a portion of the judicial branch. The judicial branch is the smart contract, and the executive and legislative are the voters. You can now design systems where maybe you randomly select voters to participate, to make sure there is some level of active participation. You can also penalize more. You can play with these slashing mechanisms. I guess my point is that I somehow think participation and incentivization will be done extremely differently, because you can mix aspects of different political systems where some things are more punitive and some things are more benevolent. Mixing benevolence and punitive things in a new way is how I think governance will work.


44:47 • Tarun
That’s a very abstract philosophical description. Does that make sense? 


44:51 • Tom
It does. You guys are doing so much work to optimize protocols, and if people / projects don’t take your advice, they could be harmed. So why wouldn’t they take your advice? The only thing I can come back to is either that people don’t agree with it because they don’t understand it, or that people can’t really come together to decide to implement the changes you’re finding. I’m trying to figure out where the pain point is between Gauntlet figuring out, “Do XYZ,” and the project implementing it. I think that might come down to having everybody vote on everything, which doesn’t make too much sense.


45:25 • Tarun
Either you have everyone vote on everything, or you delegate power, or you have some sort of mechanism where governance chooses a KPI and a provider tries to perform relative to that KPI. Based on how they do, they get some payment or reward or whatever. I think that latter part will happen. That might be a form of this mixing of the benevolent and punitive nature of governments. If you have a KPI, then you can either reward or penalize based on how well it is matched. The hard part is that DAOs don’t know what their KPIs are, because we’re too early to figure those out. This is like asking Stripe in 2012 what their KPI is.


46:34 • Tarun
They probably couldn’t tell you. Or asking Facebook in 2006 – it would have been, “User number go up,” right? Nothing else. Because it’s early, you have to wait until we’ve cohered on some standards, and then you can take this third route. The third route will take more time, but I think it will be amazing.
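A small sketch of that third route – governance sets a measurable KPI, and the delegated provider is rewarded or slashed each epoch based on how the on-chain measurement comes out. The KPI and rates here are hypothetical:

```python
def settle_epoch(kpi_target, kpi_observed, stake, reward_rate=0.10, slash_rate=0.05):
    """Return the provider's payout (positive) or slash (negative) for one epoch."""
    shortfall = max(0.0, kpi_target - kpi_observed) / kpi_target
    if shortfall == 0.0:
        return stake * reward_rate            # met or beat the KPI: pay the full reward
    return -stake * slash_rate * shortfall    # missed it: slash proportionally

# e.g. KPI = "keep the safety score above target", expressed as a score out of 1.0
print(settle_epoch(kpi_target=0.95, kpi_observed=0.97, stake=100_000))  # reward
print(settle_epoch(kpi_target=0.95, kpi_observed=0.80, stake=100_000))  # partial slash
```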


46:59 • Tom
Is there any modeling that Gauntlet could do on the governance side? You mentioned perverse incentives before. If you run a model, you want to incentivize people to vote, but you don’t want to blow your entire token supply doing so. I’m sure there are other parameters you can play with, like slashing and reputation and stuff like that. Do you think one day we’ll get to some kind of automated governance modeling?


47:24 • Tarun
I mean, we’re already kind of doing that. Some of our non-Ethereum customers basically do have that. We’ve been starting to do that type of stuff. Again, I think it’s a little nascent right now. Ethereum’s a little harder because you’re constrained by compute capacity right now. With Layer 2 stuff, maybe that won’t be true, but that’s why you can’t do too many fancy things in governance. Outside of the Compound governance module and the Aave governance module (which have a lot of similarities), I haven’t seen a governance contract on any chain that’s super compelling. On Solana, Jet and Mango are pushing things forward a lot towards making such a thing, but we haven’t quite seen it in governance yet.


48:18 • Tarun
The dual question to what you’re asking about automated governance is: how do you make the governance interfaces / the function interfaces / the actual smart contract reflect and allow things like this? That’s where we don’t see much development. Again, I am once again asking for a bear market.


48:44 • Tom
Tarun, you’re the only one hoping for it, but I know that you mean it. But, you also want the resulting bull market, which is good. That’s what I like about it. 


48:54 • Tarun
I want all these optimistic 19-year-old kids who are doing 5 million OHM forks to work on this. They’re not going to while there’s OHM-fork money.


49:05 • Tom
That’s got to end. Hopefully it’s dying down, but we’ll see. You mentioned that the world is getting insanely complex, especially with how fast DeFi moves. Playing back to, I guess, your friends on the AI side – do you see an increase in sophistication, or are you scared of AI-based attack agents or normal bots gaming the system? They already kind of do with mints and stuff. How should we think about very sophisticated… basically what you do, but loading a bunch of models, training bots to straight-up attack DeFi projects and drain them – like your adversary. How do you combat that?


49:46 • Tarun
Life is always a cat-and-mouse game. Anything that’s competitive will always be this cat-and-mouse game between those who are choosing the rules of the game board and those who are playing the game on the game board, if we compare those two types of things. I effectively think that you can’t do that – you won’t combat them. But the other thing you have to remember is that this is not a problem unique to crypto. This is true for every Web2 company and for every financial institution. There are always tons of bots and people ready to spam your network. Cloudflare’s entire business is built on providing a service so that people can’t spam you, because that type of DoS attack vector became so prevalent – there are hundreds of them.


50:42 • Tarun
At the end of the day, it boils down to making sure there are enough people on both sides of this game: the people who are changing the game board and the people who are playing the game. If it’s competitive enough, it will reach some equilibrium where there’s some deadweight loss that goes to the bots, but users still get enough utility that people are happy.


51:06 • Tom
I’m scared to ask, and I’m pretty sure the ethics you showed in your AI answer will prevail here, but have you ever considered using your power for evil, like an evil Gauntlet?


51:20 • Tarun
People always ask us that, and I just wonder why. The problem is, anytime you would do something like that, you can only really do it once, or a small number of times. At the end of the day, the more often you do these things, the easier it is to catch you. Do you want to always be hiding? Do you care about your reputation? It depends on that aspect. I get that the Russian teenager might not care, but I don’t think there are many very successful repeat hackers. You really do leak information every time, especially in blockchains, where things are public.


52:08 • Tom
That’s fair. You don’t want to look over your shoulder your whole life, and it’s not a long-term strategy. One of the questions I had closing out is your take on the AI side. You’ve spent so much time on deep tech. You have so many friends in the space. I’m sure you’ve thought about it a lot. I don’t know exactly what question to ask you, to be honest… Do you think AI will be sentient soon? How far along are we? Are you scared of them? I’m not sure which way you want to take it, but I would love your take on where you think we are on the AI side. We’re starting to see it bleed into crypto. People are training NFTs with AI-type models, and you can own them as NFTs, stuff like that. It is starting to bleed in – nowhere near AlphaGo-style – but I was wondering about your take.


52:53 • Tarun
I don’t think it’s going to be sentient very soon. With a lot of innovations, there’s this S-curve of no growth, no growth, super-fast growth, and then it tapers. I feel like we’re at that tapering point with AI right now. It’s getting extremely hyper-competitive, but the incremental improvements seem to be quite minor nowadays. There was this time in 2012 where people had models for language, models for vision, and models for game-playing. All those models were different, very specialized, and custom. The biggest accomplishment of the last ten years is that we have unified all of those models: we are able to use basically the same types of architectures, combine them in different ways, and it works on all of these. That is a crazy thing, right?


54:00 • Tarun
We can have one framework for describing models of many different portions of cognition or consciousness. On the other hand, it does feel like we have not had much incremental improvement per unit of compute power. The compute power needed for each new model seems to be growing faster than the improvement in the output it delivers, which doesn’t seem particularly sustainable. If your self-driving car, to go from New York to DC, requires as much energy as all Bitcoin miners use in a year, that doesn’t seem like it’s going to work great. We’re at this weird point where the AI world is no longer just asking, “Hey, can we do something?” and is now forced to care about efficiency.


54:59 • Tarun
That’s a boring slog. Smart people don’t like doing that. I think the interesting trade-off is that a lot of the people who made their name in it are going to get bored of it.


55:11 • Tom
Just to clarify, is it that running an AI service like a self-driving car takes up too much memory or compute, or is it that the amount of compute needed to train a model is growing so fast that the hardware isn’t keeping up with the models? I’m just confused about which side of it it would be.


55:30 • Tarun
I think it’s partially on the training side, but I also think it’s partially that the training is much more personalized, down to a per-user level. Think about how much training went into AlphaStar for a single game. Now imagine having to do that for every car, constantly, because you still have to retrain. I don’t think this idea of permanently static models is good. That’s why self-driving cars were the hype cycle in 2015, and a lot of those companies have not done super well. That’s why you see a lot of acquisitions by big car companies – because it’s taking way longer. Part of that is a safety thing, and part of it is also a societal thing. If the probability of a human killing another human while driving is 10% and society says, “That’s bad, but it’s okay,”…


56:29 • Tarun
…then that implies the probability of a robot killing a human at 1% is still terrible, right? We have this weird societal double standard. I do think things like that change how you build the model. You go from a model that needs to be good on average to a model that needs to be good in the tails of the distribution. That’s very hard for these neural-net types of things. They’re much better at average-case and near-average-case performance. They’re not that great at real edge-case performance. I don’t think sentience is coming soon, but I do feel like we are getting to this point where we can’t just keep scaling these things with compute alone. There’s this whole world of edge computing: putting hardware at the edge – custom hardware for doing model execution and stuff like that.


57:25 • Tarun
That stuff is super cool. It feels like building a new cell phone network; it will take a long time. We don’t even have 5G. Instead, we have people making conspiracy theories that we’ve been injected with 5G. 


57:43 • Tom
Yeah. 5G is just marketing right now. I agree with you. I was always concerned about edge compute, because whenever I read about it, I wondered if people really thought the iPhone was going to compete with a cloud data center that’s optimized to the square centimeter. There’s no way, but we’ll see. I’m bullish on it happening eventually. Tarun, to close out, last question for you: what is the biggest goal you’re trying to solve with Gauntlet over the next year? The biggest thing you’re missing, a limitation you want to solve, an area you want to expand to, something you’re working on… What’s the biggest growth area for you?


58:24 • Tarun
I think the biggest growth area is going to be expanding to different types of protocols. We’ve spent a lot of time trying to merge our simulations for different protocols so that we can do the sampling you’re talking about, where you have two types of actions that tend to be done together, or two different types of users whose actions influence each other and create a feedback loop, and expanding that to other, more complicated assets, such as derivatives and perpetual protocols. Also, game design mechanics are starting to be more of a thing, because the blockchain game space overlaps quite a bit with DeFi, so we’re really focusing on that and making sure we can grow in complexity as DeFi’s complexity grows.


59:18 • Tom
I love it. Incredible conversation. You got me all amped up on AI and modeling now. Tarun, your intellect is insane. I’m glad we have you for good. I really appreciate you coming on the podcast. It’s a great conversation. I really learned a lot.

Show Notes: 

(00:00:00) – Introduction.

(00:01:23) – Tarun’s background / Moving into crypto.

(00:16:44) – Overview of Gauntlet.

(00:26:06) – Updating risk models.

(00:30:16) – Are risk models independent?

(00:34:27) – Getting to full automation.

(00:41:29) – The future of governance.

(00:45:58) – Automated governance modelling. 

(00:49:09) – Sophisticated AI attack vectors.

(00:52:12) – Thoughts on the state of AIs.

(00:58:05) – Biggest goal for Gauntlet in the next year.

(00:59:19) – Tarun’s favorite hair color.