Building Better Systems

#5: Talia Ringer – Proof Engineering for the People

Episode Summary

Talia Ringer, a Ph.D. candidate at the University of Washington, explains how they do deep, people-centric PL research. We discuss proof repair, UX for software correctness, and how to ask the users of your tools for feedback you can act on.

Episode Notes

Talia Ringer, a Ph.D. candidate at the University of Washington, explains how they do deep, people-centric PL research. We discuss proof repair, UX for software correctness, and how to ask the users of your tools for feedback you can act on.

You can watch this episode on our YouTube channel. 

Joey Dodds: https://galois.com/team/joey-dodds/ 

Talia Ringer: https://dependenttyp.es

Contact us: podcast@galois.com 

Galois, Inc.: https://galois.com/ 
 

Episode Transcription

Introduction (00:02):
Designing, manufacturing, installing, and maintaining the high-speed electronic computer, the largest and most complex computers that are built.

Joey (00:20):

Welcome to the Building Better Systems podcast, where we explore tools and approaches that make us more effective engineers and make our software safe and reliable. My name is Joey Dodds. I'm a researcher focusing on providing assurance for industrial systems, and today we're joined by Talia Ringer. I had the opportunity to work with Talia somewhat recently, as did Galois in general, and I thought it was really noteworthy that when I reached out to other people at the company and asked, would you like the opportunity to work with Talia, a number of people jumped up and said yes, that would be absolutely wonderful. Talia is an absolute pleasure to work with and has a really unique outlook and some really unique insights into the world of formal methods and ways that we can make formal methods research better and more accessible to a wider audience. So welcome, and thanks for joining me today. At a high level, Talia, what is your approach to building better systems?

Talia (01:20):

Yeah. So all of my work is on something called proof engineering. Proof engineering refers to building verified systems: systems that are proven correct. The special tools for this are called proof assistants. The way this works is, you'll write your program, and then you'll write some kind of specification of what it means for the program to be correct in your proof assistant. Then you'll write some kind of proof with the help of the proof assistant, the proof assistant will check this proof, and then you know that your program actually does what you said it does. It used to be really hard to write proofs in these proof assistants; people would only prove correct really simple toy programs.
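To make that workflow concrete, here is a minimal sketch in Lean syntax (the episode doesn't name a proof assistant, and the function and theorem names here are invented for illustration): a program, a specification of its behavior, and a machine-checked proof.

```lean
-- The program: append two lists.
def append' : List α → List α → List α
  | [],      ys => ys
  | x :: xs, ys => x :: append' xs ys

-- The specification: appending preserves the total number of elements.
-- The proof below is checked by the proof assistant, so if it is
-- accepted, the program really satisfies the specification.
theorem append'_length (xs ys : List α) :
    (append' xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp [append']
  | cons x xs ih =>
    simp only [append', List.length_cons, ih]
    omega
```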

Talia (02:16):

But things have come really far since then; there are decades and decades of work. One thing I noticed, looking at all of this work, is that it mostly focuses on how to prove your system correct to begin with. But one of the big problems that still exists is that once you've proven a program correct, you're going to have to change your program; programs change all the time. And then you're going to need to change all of your proofs about that program. This can be really hard. So all of my work is on how I can build tools that proof engineers can use to help them update their proofs almost automatically once they've made changes to their programs.
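A toy example of the problem, again sketched in Lean with invented names: the theorem is still true after the program changes, but the proof script was written against the old definition and breaks, and someone, or ideally a tool, has to find the new proof.

```lean
-- Version 1 recurses on its first argument; the proof follows suit.
def add1 : Nat → Nat → Nat
  | 0,     m => m
  | n + 1, m => add1 n m + 1

theorem add1_zero (n : Nat) : add1 n 0 = n := by
  induction n with
  | zero => rfl
  | succ n ih => simp [add1, ih]

-- Version 2 computes the same function but recurses on its second
-- argument. The corresponding statement is still true, yet the old
-- induction proof no longer applies as written; here the repaired
-- proof happens to be a one-liner, and proof repair tools aim to
-- find such fixes automatically.
def add2 : Nat → Nat → Nat
  | n, 0     => n
  | n, m + 1 => add2 n m + 1

theorem add2_zero (n : Nat) : add2 n 0 = n := rfl
```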

Joey (03:04):

Yeah, that seems incredibly valuable. When you mention proof assistants, you're talking about systems where users often go through and manually build a proof piece by piece, but even in the world where we're building proofs automatically, we see a lot of challenges in having the proofs keep up with the code. From what we've seen in industry, nobody wants a proof if it's going to slow down the pace at which they can update their code, at least by a significant amount. There's always some wiggle room, and everybody likes knowing that their code is correct, but keeping up is critical to allowing these proofs to be used, we'll call it in the real world, but definitely in systems that are actually in use today. And a really critical part, and a bit of what I was alluding to with your unique perspective, is that I feel like you've gone through a lot of effort to make sure that not only do your tools have deep technical foundations and work, but they actually work for the people that are using them, because it's not enough for them to just work. Can you tell me a bit about that?

Talia (04:15):

Yeah. I mean, ultimately my tools are supposed to serve proof engineers. So if I don't have input from proof engineers, then I don't know if I've built a tool that can actually be used for anything. It might be theoretically interesting, but in the end, what does it matter? I think I actually did this a little bit too late. Working with Val, someone at Galois, and having him really use my tool and give me input, and then making changes to adjust to that input, I think that's been one of the most interesting experiences for me, because the things that users want in your tool are not always things that you would think of. You're so used to using your tool, you know the inside of it, you might intuitively know how to avoid problems that a user would run into. So by partnering with someone, it's been really nice to see what those problems are, and to be able to address them and make the tool more usable. I wish I'd done that earlier, honestly.

Joey (05:38):

Yeah. I mean, on one hand it would have been great to do it earlier. On the other hand, you actually did it at all, right? And it's not easy as a researcher to just email people and say, hey, do you want to work with me? How have you found the response when you go to do that, in general?

Talia (05:59):

I think a lot of people are interested in helping. It helps a bit that my target audience is proof engineers, who are generally interested in contributing back to the research world in some way. So when I reach out with, you know, hey, I have this tool, it's kind of a prototype, I'd like it to give back to you in the future, but it might not give back to you immediately, I think people are really willing to get involved in that process, not just because they want my help, but because they want to help me as well. It was a little nerve-wracking doing that reaching out, though; I think this is the main reason I delayed it for so long. One of the professors at the University of Washington, Zach Tatlock, at one point told me that one of the big tests of success is having people actually use your tools.

Talia (06:58):

This has been echoed by my advisor, and a lot of people believe it. I think it's true, but there's a confidence aspect to that too. If you build a really good tool and you're too afraid to reach out and say, hey, try to use this tool, let me know how it's going, let me improve it, then you'll never know if it's a good tool. And it could be a really good tool. I think I was surprised that by the time I reached out, the tool was already working very well. I just lacked the confidence for months to reach out and ask someone.

Joey (07:40):

Yeah. And the other side of that, and this has happened to me, is sometimes your tool is bad, but at some point it's better to know. The doubt that your tool might not be what you think it is, or that it might not work the way you think it does, makes it really hard to reach out, no question. But having received the feedback that this tool isn't ready for us to use here, this tool isn't where it needs to be for us to use it, gives you something to go after. And at least you know: my error messages aren't good enough if this is too stressful to use.

Joey (08:19):

In some sense, having received that feedback, sometimes the doubt and uncertainty are worse than the certainty, and having a concrete thing to address can be really nice. So I think it's great that you took that step, because a lot of people, especially in grad school, but even researchers, are not taking that step very often. We write things in our papers like, people should use this in this way, or this will make software better in this way, and then there's this necessary step of actually doing something to get that into people's hands. So given how hard it was, is this just something you believed in? Is that why you went after it in this way?

Talia (09:09):

I think it was actually encouragement from my advisor, and from others at the University of Washington. I think this is a big difference in how the University of Washington approaches research compared to some of the other universities that I've seen. Especially our group, our programming languages group, seems very focused on building systems people use, not just prototypes that you build for a paper and then throw away. I think that's important because there are still a lot of areas in our field where we don't have good metrics for what it means to build a good system; even figuring out what those metrics should be is an open problem, to some degree. There's obviously a ton of research in software engineering on this, but I think some of these questions are still open, especially when you get to proof engineering. With that in mind, while we're working on finding those metrics, we also need ways of measuring success, and having people use your tools is one of those.

Joey (10:22):

Yeah, definitely. And having people use your tools... metrics are great, but they also pose risks, in that the things you choose to optimize can get heavily guided by your metrics. While user feedback is softer, it's also very useful in that you get to just see directly what people are thinking, and you don't have as many bad traps to fall into there. As a side note, every time I hear cool things about the University of Washington, I get a little jealous, because I actually applied to the University of Washington. It was basically my top choice, and it didn't work out for me.

Talia (11:01):
It's all random to some degree.

Joey (11:05):
It was, yeah. It is.

Talia (11:08):
But where did you go again?

Joey (11:14):

I ended up at Princeton. You know, it worked out fine. So that's really cool. And the other thing that I missed out on, frankly, as a grad student, that I think you're getting a lot of, is just talking to more people, which again is hard to do, but I assume has been pretty beneficial.

Talia (11:32):
Yeah. Sorry, what do you mean by talking to other people?

Joey (11:36):

So much of our world, and this is not uncommon in business or academia or wherever, is about making connections, but it's really hard to just talk to people. It's something I've gotten quite a bit better at since then. And part of it is the same sort of idea as with building tools: you worry they're going to say, no, I don't want to talk to you, you're not good. But most people are like, I'd love to talk to you, and it's great, and it works. Or they're like, I'd like to, but not right now, and that's not that bad. Maybe you get silence, but I've never reached out to someone and had them say, no, I'm not going to do that because you're not good. That's a response in your head; it's not a response people actually give.

Talia (12:27):

I remember the switch for me there, the thing that made me able to do that really easily. In my first year of grad school, I went to a conference, and I went to talk to someone about my work. I was so worried that he'd say, oh, that research project is garbage, or something like that. Instead, both he and everyone else I spoke to were like, oh wow, you're already doing a project. And I was like, okay, expectations are set pretty appropriately. So now I just reach out kind of unabashedly, but it's still really scary for me to claim that my tool is good.

Joey (13:10):

Well, I think successful researchers and successful developers, I'll go as far as saying successful people, always find it hard to say that they're good enough, because part of that is about always wanting to find the next thing and do better at it. It's not a bad thing, but you definitely have to control it, right? It can get out of hand; I'm sure a lot of us know how that can get out of hand. So let's go into a little more detail about your research. You mentioned updating proofs. Can we talk a little about the kinds of things that might need to be updated, and maybe what some of the challenges are?

Talia (13:59):

Let's see. I can give some big examples, but a lot of these are still too hard for significant automation; they're ongoing challenges, and they're sort of the drivers of the kind of work I do. One example might be: there's a verified operating system that's been proven secure, but it's proven secure on one hardware architecture. People would love to be able to take those proofs and adapt them to all these different hardware architectures, but this is really hard. The thing you're proving changes a little bit, and the proofs you wrote have to change to adapt to that. Every time you do this, you'd have to change all of these proofs. So that's one of the big driving examples: updating proofs about something like an operating system to adapt to new hardware. Smaller examples might just be something like, you have a verified system.

Talia (15:08):

It uses some library. The proof assistant language that we use has a standard library, and the standard library will have not just functions and such that you can call; it will also have proofs about those functions, and you can refer to those theorems inside of your own proof. But sometimes they'll change. When they're updating the language, they'll change the standard library, they'll change one of their specifications, and then this will break everyone's proof developments. There were a couple of examples already where we were able to look at just the changes that occurred inside of the library and use those to automatically fix proofs at the client level, which is pretty powerful.
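A small Lean sketch of that failure mode (the lemma names are invented): the client's own code never changes, but a restated library lemma breaks every place the client used it.

```lean
-- The "library", version 1: a lemma that clients depend on.
theorem lib_le (n : Nat) : n ≤ n + 1 := Nat.le_succ n

-- A client proof built on top of it.
example (n : Nat) : n ≤ (n + 1) + 1 :=
  Nat.le_trans (lib_le n) (lib_le (n + 1))

-- If version 2 of the library restates the lemma, say as
--   theorem lib_le (n : Nat) : n ≤ 1 + n
-- the client proof above stops type checking even though the client
-- changed nothing. The tools described here look at the library's
-- diff and try to patch the client's uses automatically.
```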

Joey (15:58):

Yeah. Honestly, this is a problem that every software engineer is familiar with, right? The idea that a standard library could deprecate your function, then eventually remove it, and then all of a sudden your code isn't going to work. In that setting, the idea of occasionally being able to go through and fix things automatically is really meaningful. So in the proof world, I think it's maybe even more impressive, but hard to say. I guess you have some stronger frameworks to work in, in the proof world. Sometimes it's more clear what you're trying to do.

Talia (16:31):

Yeah, that's the big thing. It's both harder and easier, I would say. It's harder because you have not just functions but also proofs that you need to fix. But then I think it's easier in that one of the biggest challenges in program repair, in automatically fixing your programs when something is broken, is that you don't know when you're correct. So you often need some additional information; test suites tend not to be enough. You might use examples, like having a user make a couple of fixes themselves, or you might write a specification. But in this world of proofs, we already have these specifications, so we're able to totally work around this problem. We know exactly when we're right, and that's really nice.
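To spell that out: in program repair, a candidate patch that passes the tests can still be wrong, but in proof repair the unchanged theorem statement itself is the acceptance criterion. A tiny Lean illustration:

```lean
-- The specification is the theorem statement, and it does not change.
-- A candidate repaired proof is correct exactly when it type checks
-- against that statement; no test suite is needed as an oracle.
theorem n_add_zero (n : Nat) : n + 0 = n := rfl   -- accepted

-- A wrong candidate is rejected by the checker itself:
--   theorem n_add_zero (n : Nat) : n + 0 = n := Nat.zero_add n
-- fails, because `Nat.zero_add n : 0 + n = n` has the wrong type.
```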

Joey (17:30):

Although you do sometimes end up in a situation, I guess, where you're not clear whether the specification needs to change or whether it can stay the same, right? If we're talking about proofs that match code, and the code changes, the specification sometimes just becomes wrong by nature of the code changing.

Talia (17:50):

That's true. I think there are a couple of ways around this. Either you can have some human in the loop, like, hey, is this the right way to change the specification, or you let them change the specification themselves. Or sometimes you have well-defined ways of changing your specification that come with some sort of guarantee that, yes, the specification has changed, but it's changed in a way that only reflects a change somewhere else, and modulo that change, it's the same, actually.
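One hedged Lean sketch of that last option (the definitions are invented for illustration): the specification changes form, but a proved equivalence guarantees that, modulo the change, it says the same thing, so proofs can be carried across it.

```lean
-- Old specification: a Prop-level statement.
def NonEmpty (xs : List α) : Prop := xs ≠ []

-- New specification: a Bool-level check, a different but
-- systematically related form.
def nonEmptyB : List α → Bool
  | []     => false
  | _ :: _ => true

-- The guarantee that the change preserves meaning: the two
-- specifications are equivalent, so facts proved against one can be
-- mechanically transported to the other.
theorem nonEmpty_iff (xs : List α) : NonEmpty xs ↔ nonEmptyB xs = true := by
  cases xs <;> simp [NonEmpty, nonEmptyB]
```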

Joey (18:28):

Yeah. I really like this idea of human in the loop with proof tools, because I think it's the answer. Making that experience pleasant is, I think, challenging, and you're doing kind of the light version of it just by having people use your tools. But UX people exist, and it would be lovely to see some closer interactions between proof engineering people and UX people, because I think a lot of really good work could come of that. It's not something that I've seen; it's something to aspire to, probably. And again, that UX story is key there, and figuring out what to ask is so hard, because it feels like if you can ask the right question, you could actually get somewhere.

Joey (19:21):

What I view as kind of the ideal end point for this is where a proof engineer can write a proof once, but then from there we can ask good enough questions about how to update it that a software engineer could maintain it. I think that would be an incredible place to end up, because one of the big questions when we write proofs is, how do we keep this proof going? How do we keep it alive? There are only so many people that could be, or want to be, proof engineers, today at least, and there are a lot of people that want to be software engineers. So it would be great to put that in their hands and let them use the tools, because I think software engineers would see a ton of benefit from this, right? Who wouldn't want to answer four questions and then hear, oh yeah, your code's correct, go home and sleep all night without worrying about whether it's going to crash your production server or something?

Talia (20:23):

Yeah. I would really love to see proof engineering just merge back with software engineering at some point. I want proofs at some point to be as accessible as writing tests. At most places, I think, when I was at Amazon, for sure, you were expected to write your own tests for your code. It's not like you write your code and then you just give it to someone and they do all the testing; we did have people doing extra testing, but still. So yeah, I would really love to see that. Not for all programs you could ever write, because it doesn't make sense to prove correct every single program you would ever write, but especially for more critical programs, it'd be nice for software engineers to just be able to write and maintain the proofs.

Joey (21:11):

And we've seen this kind of thing happen with other fields, right? Even when I was in school, systems as a field used to be a thing, and it still is a field, but now if you talk to people at Google or Microsoft, they're like, why is that a research field? That's what we do day to day; we figured out how to do protocols. There's still ground to be made, but also everybody's doing it: nobody's building a monolithic system anymore, everybody's building systems that talk to tons of other systems. So there are definitely inspiring stories of research fields spreading out so much that they're barely a research field anymore, because everybody's doing it. For formal methods to become that would be incredible. If the two of us can see that in our careers, I think that would be just a great thing. Recently you reached out and did a user study of users of these proof assistants. Can you tell me a bit about that?

Talia (22:14):

Yeah. So this was a user study of the particular proof assistant that my work is focused on. Essentially, what we did was work with a group of, I think there were eight people in the end, all intermediate through expert proof engineers. And we actually listened in on their entire development process, with their permission obviously, for a whole month. We had a server set up, and we built a tool that instrumented the proof assistant they were using. Every time they made any kind of change to a program or to a proof, it would send that over to the server and record that change as they went. So it's very, very granular; you can replay the entire process of development. Our goal was to figure out how proof engineers are changing their programs and their proofs. What kinds of changes are they making? And after they make those changes, how are they changing their proofs? This was published in January, I believe. It was a lot of fun.

Joey (23:43):
It sounds like you would get so much data that it would be incredibly overwhelming. How did you tackle that problem?

Talia (23:51):

That's definitely true. I think one thing that helped is that we spent a lot of time building scripts to process the data. For example, I wanted to be able to look at the way that people were changing data types. Looking at the raw data, this is basically impossible. So instead, what I did was set up a Git repository, which is actually public, and every time they made one of these changes, it would make a Git commit. Then you can go in and look at the diffs, and you can see what sort of things they've changed. Now, it was still a little bit hard. If you step down in your IDE and update some data type, then you step up and change something, and then you step back down, each of these is going to be considered a change.

Talia (24:58):

And you might have to look not at the difference between commit two and commit three; you might have to look at the difference between commit one and commit three. So this part still involved way more human effort than I would have liked. Automating it more would be fantastic. But after I did all of that manual part of the effort, I did save all the commits and linked to them in the repository, so the person who's using the data doesn't have to go and look through those.

Joey (25:34):

And was this linked into your research, basically? Was the objective to understand how people are using these tools so you can understand how to help them?

Talia (25:44):

Yeah, essentially. It was a partnership with Alex Sanchez-Stern at UCSD, who also builds proof engineering tools. His tools focus on something different from mine, but both of us were interested in really similar questions about what kinds of mistakes people are making, how they're fixing them, and how we can build proof engineering tools that help them do that.

Joey (26:16):

So another approach to proofs that update automatically with changing code, one that we use quite a bit, is SMT solvers. If you're not familiar with SMT solvers, I know you are, but listeners might not be, you can kind of think of them as little math engines: you give them a math problem and they tell you if they can solve it or not. You'll give one an equation with some number of variables that it needs to figure out, and it'll tell you whether it can figure out those variables or not. So is that a direction you've looked at for your research, or have you found that it doesn't necessarily provide a lot of additional value for the kind of work that you're doing?
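For a concrete taste of the kind of question an SMT solver answers (the numbers are made up for illustration): given the constraints

$$x + y = 10, \qquad x - y = 2, \qquad x, y \in \mathbb{Z},$$

the solver reports satisfiable and produces the model $x = 6$, $y = 4$. Asked instead about $x + y = 10$ together with $x - y = 3$ over the integers, it reports unsatisfiable, since those would force $2x = 13$.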

Talia (27:00):

It's an interesting question. So there's some work on using SMT solvers from within proof assistants. You run into some fundamental problems: SMT solvers use a language that's just fundamentally different from the one that we're working in, and you can't fully translate everything from our language into a world where you can use SMT. But sometimes you can. And for those problems, you often don't need that sort of heavy automation to change things for you, because you can just use the built-in strategies, the automation that already exists, to dispatch them for you. But that doesn't always work. Now, I think there's some really promising work right now, and I have a lot of ideas for this, on taking some of the things that SMT solvers do for those different languages and actually adapting them in ways that will work for this kind of language. There are some fundamental questions there, but I think this hooks in really nicely to what I'm doing.

Joey (28:31):

Right. So what you're saying is, SMT solvers are great, but we're not going to be able to just directly ask them the questions that we have. We're going to have to be a little clever and ask them questions that they know how to answer, because they are really good at answering the right kind of question. So is it figuring out how to take these complicated goals and turn them into something they can deal with? Or are you talking about writing something that's like an SMT solver, but completely different?

Talia (28:58):

Something like an SMT solver, actually, but a bit different. Using the core algorithms that make up an SMT solver, and the core data structures underneath them, I think you can take them and tweak them. Leonardo de Moura, who's at Microsoft Research, has done this for one particular language. It's a little different from the one that I'm working in, and this is one of the reasons why their proof assistant, Lean, has become pretty successful, I think. But I think you can do this more generally for the kinds of changes that I'm interested in, in the languages that I care about, and I think that will add a lot of power. So yeah, this is something I'm really excited about. I wanted to do it before I graduated, but it was just too much of a time constraint. So this is going to be something I really want to work on as junior faculty.

Joey (29:56):

Yeah, that sounds like an incredibly promising problem. This is kind of one of those things where it's obvious people want this, even in its most basic form, right? C, for some reason, has type annotations where you can actually say, I'm taking an array of this length. That's a thing you can say in C, and it's going to completely ignore it and treat this as an array of whatever size. Even worse, you can't figure out what the length actually is if I've given the array to you. But people write these annotations that say, I'm taking an array of length five, and that idea is not hard to understand. It's very hard to understand that the programming language doesn't care whether you wrote it or not, but that's a separate problem.
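A minimal C sketch of exactly this point (the function here is hypothetical, not from the episode): the declared length compiles, and is then ignored.

```c
#include <stdio.h>

/* The parameter says "array of 5 ints", but C adjusts it to "int *a":
   the length annotation is accepted and then completely ignored. */
void takes_five(int a[5]) {
    /* And the length can't be recovered here either: sizeof(a) is the
       size of a pointer, not of the caller's array. */
    printf("sizeof(a) = %zu\n", sizeof(a));
}

int main(void) {
    int three[3] = {1, 2, 3};
    takes_five(three); /* wrong length, yet it compiles and runs
                          (at most with a warning) */
    return 0;
}
```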

Joey (30:43):

So once this usability story is figured out, I don't think people are going to have trouble saying these things. Maybe it's harder when you make it a variable; maybe it's harder for people to think about a function that takes an array of length n and returns an array of length n plus one. But that doesn't seem that complicated in most cases, and again, the details of making sure that the computer understands it are the hard part. If there's a promising story for automation there, even in a lot of the cases, I think that could go a long way towards being useful not just in this proof assistant world. Being useful in C is maybe a dream, but you could talk about something like Haskell, which is a bit more mainstream, or even Java; I think the story is a little easier with Java.
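In a dependently typed language, that n-to-n-plus-one contract is exactly what the type checker enforces. A small Lean sketch (the type and function are defined here for illustration; they're not from the episode):

```lean
-- A length-indexed list: the length lives in the type itself.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- "Takes a vector of length n, returns one of length n + 1" is now a
-- contract the compiler checks, not an annotation it ignores.
def snoc : Vec α n → α → Vec α (n + 1)
  | .nil,       x => .cons x .nil
  | .cons y ys, x => .cons y (snoc ys x)
```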

Talia (31:34):

Yeah. I think it would be a lot of the same kind of flow we were talking about earlier, like asking the right questions. I think you'll be able to say, you know, array of length five, and it'll be like, I can almost prove this, but I need you to help me out here a little bit. Then you answer it, and then it will be able to check that for you.

Joey (31:57):

Yeah. And even the world of static analysis has shown us recently that you don't always have to say, yes, it's always five. If you can say, no, this is definitely not five, that's actually enough motivation for people to go ahead and use it. The static analysis world has really taken off on the premise that you don't have to guarantee me the absence of bugs, but if you can tell me when there's definitely a bug, and do it quickly, I'm going to be really happy with your tool. Anytime you can concretely say, here's a bug for sure, fix this, that saves developers so much time. So, that's all the time we have. Thanks a lot for joining me, Talia.

Talia (32:41):
Yeah. It was wonderful.