Building Better Systems

#10: Gregory Malecha – Formal Methods and Systems Programmers Working Together

Episode Summary

Gregory Malecha talks with Joey and Shpat about BedRock Systems, a startup bringing systems engineers together with formal methods engineers to build some of the most secure and correct systems in the world.

Episode Notes

Gregory Malecha talks with Joey and Shpat about BedRock Systems, a startup bringing systems engineers together with formal methods engineers to build some of the most secure and correct systems in the world.

Watch all our episodes on the Building Better Systems youtube channel.

Joey Dodds: https://galois.com/team/joey-dodds/ 

Shpat Morina: https://galois.com/team/shpat-morina/  

Gregory Malecha: https://www.linkedin.com/in/gregory-malecha-91a71469/

https://gmalecha.github.io/

Formal Methods for the Informal Engineer: https://fmie2021.github.io/agenda.html 

Galois, Inc.: https://galois.com/ 

Contact us: podcast@galois.com

Episode Transcription

Intro (00:02):

Designing, manufacturing, and maintaining the high-speed electronic computer, the largest and most complex computers ever built.

Shpat (00:21):

Hello, everybody, and welcome to another episode of Building Better Systems. My name is Shpat Morina.

Joey (00:27):

And I'm Joey Dodds.

Shpat (00:28):

Joey and I work at Galois, a computer science R&D lab focused on hard problems in computer science, software, hardware, and systems.

Joey (00:37):

So today we're going to be joined by Gregory Malecha. I've known Gregory for quite a while now. We were grad students at around the same time, and we worked in related fields doing Coq proofs and doing our best to make them go fast, which is quite a challenge. Gregory is now at a startup that uses formal methods, called BedRock Systems, where he's the head of the formal methods team. We're going to talk with Gregory today a bit about what BedRock Systems is doing. One of the things they're doing that's really cool is bringing systems engineers, people building low-level systems, together with a formal methods team, and the output is something that's going to be more trustworthy than anything else on the market. So we're going to go into detail with Gregory around that, and it should be a really fun conversation. Thanks for joining us, Gregory.

Gregory:

Thanks for having me.

Joey:

You've been working with BedRock Systems for a while now doing some pretty heavyweight formal methods. Can you tell us a little bit about that?

Gregory:

Sure.

Gregory (01:47):

BedRock Systems has been around for a bit over three years now; I joined at the beginning. Essentially what we're building is a secure compute stack that you can trust from the bottom up. This involves components like a microkernel, a hypervisor, and a userspace built on top of it. The idea is that these are critical components: we run everything on top of them, so bugs in these components are cross-cutting. They affect basically everyone who might be running your system. And they're obviously very complicated pieces of code; you have to deal with things like page tables. So there's a lot of value in getting them right.

Joey (02:41):

So what you're saying is it's not enough to take those layers that you mentioned in isolation. You have to reason about how they interact with each other to get meaningful guarantees and to be able to trust that low-level stack, basically.

Gregory (02:56):

Yeah. I mean, it does factor out as a stack. At each level we establish some property, but you need to make sure that the property is strong enough to establish the next property on top of it. A lot of times you might think of things like: at the bottom, I proved that there are no null pointer dereferences in my code. But for someone building on top of your API, just knowing there are no null pointer dereferences in your code doesn't really help them in proving, for example, that there are no null pointer dereferences in their own code, or that they're providing the service someone wants, like banking or something like that. I guess one way you could think about it is: if a customer comes to you, they don't say, "write a program that doesn't have any null pointer dereferences." They say, "write a program that learns how to identify cats in images," and null pointer dereferences aren't part of the spec. They're kind of an implicit thing. We want to establish a property where the customer or the client might immediately say, "that's exactly what I want," as opposed to saying, "oh yeah, I don't want those, but what are you actually giving me?"

Shpat (04:03):

And usually those are the benefits, right? It doesn't break unexpectedly, at least the system below your application. Well, actually, I don't know what the benefits of having no null pointer dereferences are. What are they?

Gregory (04:21):

What are the benefits? Well, it doesn't break unexpectedly, but there are many other reasons. There's a whole CVE database of reasons why your code might break unexpectedly. So if I go in and tell you, "you won't be affected by the first 50 of them," you might come back and say, "what about the next 50?" Probably not. So this is a way of saying: none of them, nothing, not even the thing that no one's thought of yet, will be a problem for you. And that's really important, because not everybody first shares their exploits and then exploits you.
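To make this class of bug concrete, here is a minimal C++ sketch (not BedRock code; the list type and both functions are invented for illustration). The unchecked lookup has exactly the implicit-spec problem Gregory describes: nothing in its signature says what happens when the key is absent, and it dereferences null. The checked variant makes absence part of the interface, which is the kind of property a specification forces you to state.

```cpp
#include <cstddef>

// Hypothetical singly linked list, purely for illustration.
struct Node {
    int key;
    int value;
    Node* next;
};

// The CVE-style bug: if `key` is absent, `n` eventually becomes
// nullptr and `n->key` is a null pointer dereference.
int lookup_unsafe(const Node* head, int key) {
    const Node* n = head;
    while (n->key != key) n = n->next;  // crashes when key is missing
    return n->value;
}

// The shape a spec pushes you toward: absence is part of the
// interface, so the caller can't forget to handle it.
bool lookup_checked(const Node* head, int key, int* out) {
    for (const Node* n = head; n != nullptr; n = n->next) {
        if (n->key == key) {
            *out = n->value;
            return true;
        }
    }
    return false;
}
```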

Joey (04:57):

And that's really cool, because the work you all are doing is fairly detailed. At the end of the day it involves, I think, some pretty advanced work. But it sounds like you get to make some pretty straightforward claims about what people using your software can expect out of that detailed work. Is that the case?

Gregory (05:20):

Yeah, that's the case. What we're working on establishing is something we're calling the bare metal property, and it's a property about our hypervisor stack. Essentially what it says is: take a stack, like Windows or whatnot, and run it on bare metal, a bare metal Arm CPU, for example. Then take that same piece of code, the same bytes, exactly the same bytes, and run it on our stack. What we're saying is that there's nothing that can happen on our stack that couldn't have also happened on that piece of hardware. It's kind of a transferability property that says we don't mess you up. If there's a bug in the system running on top of us, it's not our fault; it's your fault. We can try to help you fix your bugs, or provide introspection mechanisms to watch or monitor your code, but ultimately we didn't cause the problem.

Shpat (06:19):

"Not our fault, it's your fault." That's a great marketing slogan, in the nicest possible way. No, but I mean, if it is true that you get to the point where people can be absolutely sure that what they're building on top of is not going to surprise them, that's actually amazing. Maybe that's a little spin on that marketing slogan. So essentially you're working on everything from bare metal up to the operating system that people can build on. Is that right? Or is it lower than that, lower than the OS?

Gregory (06:58):

It basically starts at the microkernel and then goes up. We're not dealing with some of the complexities of compilers, but we do deal with the complexities of hardware devices: serial devices, DMA devices, page tables, things like that. Things you don't get to program yourself. Some hardware designer built them, but you have to program against them and deal with all of those problems.

Joey (07:25):

This sounds really useful, because one of the things I think we hear increasingly often is the concern where people write a program in their local environment, run it in a development environment, basically a place where it's easy to run day to day, and everything seems fine. Then they drop it into deployment and everything changes. It sounds like a bit of what you're promising is that that's not going to happen. If other programs are running alongside your program, for example, they're not going to mess with what your program is doing, and it should behave the same way it was behaving in the other environment. Is that a way to summarize this bare metal property that you mentioned?

Gregory (08:06):

Somewhat, to some extent, yes. I mean, there is a boundary to it, right? Suppose you run two guests on top of the BedRock stack. That's like having two physically different computers and running your guests on top of those. But if there's interaction between them, different scheduling decisions and things like that could occur; things that are not likely to occur in one situation but are more likely to occur in another. There's not much we can do about that. In some sense, testing, especially of concurrent code, gives you a relatively weak approximation of what your code can do. We may uncover bugs that we can point to and say, "it's actually a bug in your code," and you can come back and say, "well, that's okay, it never occurred on my platform." Well, bump to the next version of your AMD hardware or something like that, and then you'll see it there. It exposes some of the assumptions you're making that you aren't cognizant of. Does that make sense?

Shpat (09:11):

This is sort of off track a little bit, but I would imagine that what you're working on gets compared to things like seL4 often. Would you be able to explain the difference, or where it sits in relation to that, for the people who are aware of it?

Gregory (09:31):

Sure. So let's start a little bit earlier, I guess. seL4 is a project out of Data61 in Australia. They built a microkernel, seL4, and they verified it. It runs on a few different platforms, and it's a great piece of software. I'm not going to say anything bad about it; it really is a great piece of software. What we have at the core of our system is comparable in many respects to seL4: it's a microkernel built on capabilities, so the same sort of underlying microkernel families are in there, and the same sort of ideas are in most of those systems. But seL4 kind of stops the proof at the microkernel. seL4 doesn't establish a bare metal property; to establish a bare metal property, they would have to verify a VMM, virtual devices, and things like that, and that's not part of what they're doing. So we're starting with a slightly different microkernel, NOVA, which is open source as well, building a stack on top of that, verifying NOVA and the stack on top, and showing that we can actually compose these layers and establish that final bare metal property. That's a crucial difference between the two projects.

Shpat:

Super cool.

Joey (10:54):

So one of the natural questions, now that we've talked about seL4: that group was really seen as one of the original groups of proof engineers, people doing day-in, day-out, large-scale proof efforts. Do you view the team that you work with as roughly that?

Gregory (11:15):

Absolutely. We are currently hiring proof engineers.

Joey (11:18):

Awesome.

Joey (11:20):

Tell us a bit about that. You and I, of course, did proofs through grad school, and my experience with proofs when we moved to an industrial setting is that it starts feeling a little different. How big is your team, and how do they work together?

Gregory (11:38):

Right now my team is around ten. A lot of us are in Europe, in Germany, and on the east coast, but we also have people on the west coast. One thing that's different about the way BedRock Systems approaches formal methods, compared to other companies, is that we really think it's very important to get formal methods in very early. So it's not just my team: the goal is to get systems engineers, who came to BedRock Systems without any formal methods background, also applying these techniques when they're writing their code and when they're thinking about their code, their designs, and things like that. I guess one difference is that when you're doing your PhD, you're kind of insulated from everybody doing real work, writing real systems. You're just focused on all these people who have also committed their lives to, say, typing in Coq.

Gregory (12:36):

In this case, most of my days, and a lot of my team's weeks, are spent interfacing with real developers who know a lot about the underlying hardware and are thinking about "how do I make a VMM?", and not necessarily as much about whether this is the appropriate adjoint in this category to make the theorem work out. It's a more practical application, and we think that's actually really useful. One thing that's nice about it is that you can't sweep complicated problems under the rug. It really forces you to tackle the real problem, as opposed to what I did in grad school, which in many cases was to sweep some of the most complicated stuff under the rug because "the heart of the problem" was this smaller thing. Here we need to take a more holistic approach.

Shpat (13:33):

And maybe the complicated problem is not necessarily the hardest or most interesting mathematical thing. It's about deployability, or just interacting with the real world. Is that a fair characterization?

Gregory (13:49):

Absolutely. Yeah. Sometimes the hard problem is that you have to run on Arm hardware, and Arm has weird page-table semantics, for example. In a POPL paper or something like that, you'd say, "eh, that's not really that important, I'll just have this simple thing." But the real world is a little bit more complex, and we like that complexity sometimes, and we don't like it other times.

Joey (14:15):

But at the end of the day, when you're talking to an engineer that needs to make a system run, there's no escape, basically. Either the system doesn't work or it does, from that point of view, and so you have to deal with things. One of the things that I find when we get into those situations, though, is that maybe with my grad student hat on, I didn't give the type of problems that arise out of that kind of situation enough credit. They end up being pretty fun to work on, in maybe a slightly different kind of way. Pushing yourself outside of that comfort zone and dealing with some of the messier details is its own kind of fun, I'd say, when you're doing assurance work.

Gregory (15:02):

We find, and I did this in grad school, some clever encoding that makes our problem really easy. And then you take your problem one step farther, you add one more feature to your program, and you realize your entire encoding just falls apart. Sometimes this means that gorgeous paper that showed how everything falls out for free, where you're just like, "wow, this is amazing," then you apply it to a real system and it's like, "actually, there's this other instruction called syscall," and it just blows your mind. And then you realize you have to throw all of that away. You can take the ideas, but the code isn't as useful anymore, because that one little assumption made on page one basically said, "this is going to be an interesting read, but it's not going to help you directly." That's unfortunately more common than you would expect. But at the same time, I'm not trying to disparage academics. The great thing is that they get to tackle the very complex problems, so they actually push us forward in being able to do what we're able to do.

Shpat (16:10):

I was going to say, well, we always get to this point. It's hard, because those are not the incentives in grad school, and the incentives aren't necessarily bad. I mean, you're pushing the envelope in other ways. Not always; these are huge, broad, sweeping generalizations.

Gregory (16:28):

Absolutely. I mean, to be concrete: Joey worked on VST, and we build on a system called Iris. These are separation logic systems. Without these systems, without the ideas of VST and without the implementation and such in Iris, we would not be able to do a lot of what we do. We have extended some of these systems, but we wouldn't have the basis, and we wouldn't have someone to go to and ask these hard questions: "oh yeah, in this simpler setting, such and such works." You can use those to draw inspiration to solve the same or similar problems in more complicated settings. I think that's crucial.

Joey (17:03):

Yeah. And to take it further: we have our tooling, you have your tooling, and there are some experiments we'd like to do where, if someone asked, "maybe you should plug this into your tooling today," you'd say, "no, that's insane." You have to try it out on a small toy example first, because it's going to be so much legwork to get it up and running in the full-blown, usable version of the tool that if it doesn't work, it would have been a massive waste of time. So keep it simpler upfront, see if it works, and then we can start thinking about moving it into the stuff that's being applied.

Gregory (17:36):

Absolutely. One particular instance where this arises, and this is especially true in the kind of OS, low-level context, is interoperability between programming languages. When I was in grad school, we focused on one programming language, and the idea of someone writing in another programming language was kind of ridiculous. Why would you write in another programming language? But this is an inherent thing: NOVA is written in C++, and it doesn't know anything about the master controller or the userland that runs on top of it, which can be written in anything. So that requires you to go down to a little bit lower level and establish principles that allow you to compose systems at that lower level. That gives you flexibility that you may not prioritize in academic work, but that is actually really useful in practice.

Joey (18:25):

So you mentioned you're spending a lot of time, day to day, talking with engineers who are not necessarily formal methods experts. Are those conversations mostly information gathering for you, or are you giving things to them as well?

Gregory (18:42):

It's definitely a give and take. They tell us a lot about their problems and about the way they think about those problems. We try to take that and digest it, and then come back with abstractions that we find, or ways to explain their abstractions, that make them more amenable to verification especially. An example would be a locking discipline. We're building a VMM, and we have a locking discipline around the way that our CPUs can use memory. Our engineers already think about things like "we've got this locking system," and we showed that you can express that locking discipline in our separation logic system. The engineer finds insights from that, because rather than having to explain things to us for half an hour, he can say, "these five definitions characterize what's going on." They abstract away the boring parts, the part that says you have to call this function and then that function, and they let you focus on what's possible and what's not.

Gregory (19:57):

And then he says, "maybe I'll need to add this feature in the future; how would that impact things?" So it's really a summary of what you're doing, but at a deeper level than just seeing C++ method declarations or C function declarations. You get a little bit more about what "this function is thread-safe" actually means, as opposed to just "you can use it."
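As a rough illustration of what such a spec abstracts (a generic sketch, not BedRock's actual locking discipline or notation), consider a lock that "owns" the data it protects. A separation-logic spec makes the comment below precise: acquiring the mutex grants the permission to touch the counter, which is an exact reading of "this function is thread-safe."

```cpp
#include <mutex>

// Sketch only: the informal invariant (what a separation-logic spec
// would state formally) is that `m` owns `count`; code may touch
// `count` only while holding `m`.
class Counter {
    std::mutex m;
    int count = 0;

public:
    // "Thread-safe" here means: acquiring `m` transfers the
    // permission to read and write `count` to this thread.
    void increment() {
        std::lock_guard<std::mutex> g(m);
        ++count;
    }

    int get() {
        std::lock_guard<std::mutex> g(m);
        return count;
    }
};
```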

Shpat (20:26):

These are the engineers who are working on the software and not necessarily doing the verification. Is that right?

Gregory (20:34):

Engineers at BedRock, the non-formal-methods engineers, work predominantly on C++ code and on writing specifications. So they write specifications, and we are working on bringing people up to speed so they're able to actually write their own proofs. We expect that they will be able to write proofs about simple code initially, and then ultimately work up to more complicated code. There will inevitably be places that are active research areas, like weak memory and things like that, that are probably beyond what they'll be able to do even in the long term. But the idea of the formal methods team is really to provide tools to support the sort of work they're doing: patterns, so that if your thinking fits into this pattern, you apply this technique, and hopefully we can build automation that makes it easy to do that. And then obviously we also work on the proofs that are just more difficult, like the really core concurrency abstractions, which require a lot of deep knowledge about what's going on and that extra depth, I guess.

Shpat (21:43):

Hold on. So it sounds like you have engineers who have nothing to do with formal methods, and as they write the code, they're writing specifications that you can then leverage to verify it?

Gregory:

Yes, that's true.

Shpat:

That's, in my experience, relatively rare. I wonder if you have anything to share about that process.

Gregory (22:07):

It's not easy. There's definitely a learning curve. People go to grad school and study this stuff for a long time, and someone who's very used to writing low-level C code, to tickling devices in just the right way, has obviously developed a different skillset. But a lot of the code that we like in formal methods is well-abstracted code, and most software engineers also like well-abstracted, well-written code. If you give me some spaghetti code, I'm like, "how am I supposed to understand this?" Then you give me some nice class hierarchy or whatnot, something appropriate for your topic (don't just put objects on everything), and they like that too. One thing that we can do, and it's very attractive, is that when someone says "I've written this piece of code," we can come back with some changes to it.

Gregory (22:58):

It's often the case that they like the new code better, that they say, "I didn't see the abstraction that you saw." We can show them that the specification we would have had to write for their original code would have been twice as long and would have had odd caveats around the edges, which are often the places where bugs arise, and that in the new implementation, or even a slight reworking of the code, things become more obvious. Things can become a little more dynamic, there's less coupling. Generally, I think people like that code and find it easier and more natural to program against. So I think formal methods improves code. And I should say this upfront: even if you never get to a QED, even if you never verify your code, if you apply formal methods techniques, or you listen to a podcast like this, or you read some documentation and think about formal methods, then, from my experience, you will end up with better code, even if you never take the time to actually QED the proof at the end of the day.

Shpat (24:09):

Just thinking about it, in a way, affects what you do.

Joey (24:14):

I think when formal methods goes well, to an engineer it feels like writing down stuff that you already know. Engineers already spend their day trying to produce correct code. That's sort of undebatable; nobody's out there saying, "you know what, I'm going to mess this function up," or "this one I don't care about, let's just dereference everything and not worry about it." Everybody's trying to write correct code, and everybody's explaining, internally to themselves or hopefully to their teams through code reviews, why they have written correct code. So if we can get starry-eyed and gaze off into the future of formal methods, hopefully formal methods can get to the place where you're just writing down the stuff that you already know.

Joey (25:01):

And it feels so natural that, at the end of the day, the computer is doing code review with you, basically. It's saying, "hey, you said your code is right for this reason, but that doesn't quite match up; let's revise this." I think that is going to have to come from the tooling side. We can't just say, "hey engineers, what's your problem? Why aren't you writing specs?" We have to make the specs the kind of thing that engineers want to write, because I think the will is there on their side, if we can meet them. And it sounds like you're taking steps in this direction, certainly.

Gregory (25:39):

I think if you look at systems like Rust: the delta between Rust and C is that in C you kind of get to do whatever you want, and in Rust the type system will smack you on the head every time you do something wrong. It's kind of like opting into specifications, opting into a particular style. Here the type system is enforcing an invariant on your code, essentially. It doesn't, at the end of the day, tell you that you are a VMM and that you are establishing the bare metal property, but it does give you the absence of null pointer dereferences and things like that. So that's one way: I actually think of Rust as kind of formal methods lite. You can get a strong safety guarantee.

Gregory (26:24):

You also get this from other systems like OCaml and any type-safe language like that, but Rust showed that you could do it nicely in low-level code. And I think languages will take ideas from formal methods, formal methods will go to the extreme and let you do anything you want, and we'll come from both sides and meet in the middle. There will be a sweet spot. And there will always be places where you have to go farther, because, thankfully, not everyone is verifying page-table code, but someone has to do it, and that's probably not going to fit in the Rust type system in a nice way. Though maybe someone will reply to your podcast and say, "I did that in Rust."

Joey (27:06):

It sounds like you're really taking things step by step with the engineers you work with: starting off writing specs that feel comfortable, and maybe now you're starting to also see more interest in writing the proofs, now that the value has been seen from the specs. Is that kind of the case?

Gregory (27:25):

We actually went back and forth on this; it's an interesting thing. You can tell people to write specs, you can show them how to write specs, and you can explain to them very well what a spec means. But to some extent, until you can check your own spec, you'll be dissatisfied. I became a computer scientist in part because if I wanted to test whether I got the right answer, I just ran it. If it came back and said yes, then okay, I got the right answer; otherwise, oh no, I got the wrong answer, I'll try again. With engineers, we spent a lot of time teaching them: here's how you write a spec.

Gregory (28:03):

Here's what separation logic means, et cetera. And they kind of understand it at an academic level, maybe. But until they can go through and trace their code and say, "oh, this is not going to work because this part doesn't hold," it isn't real for them. That's when it becomes more real, and actually that's where you get the real value, because then you'll say, "this spec I wrote is beautiful, but this place where I tried to implement it is really awkward." That's where you'll often notice, "oh, my code is doing something that maybe it shouldn't be doing; maybe it's going to be hard to justify that it's correct, and maybe there's a better way." Sometimes there's not a better way, and sometimes there's a slower, easier way. But a lot of times there's an equally fast, sometimes faster way that is actually easier to verify than the one you wrote.

Joey (28:59):

I know this isn't what you all are doing, but I think it's a good question for formal methods. You identified something that I agree with, which is that writing specs alone isn't going to provide enough value for engineers to want to do it. You need to see that value expressed on the code; it needs to find mistakes that you wouldn't have otherwise caught, or help you organize things better. I'm curious, for that particular problem: I know you all are going for full formal verification, but for the field of formal methods, can we see benefit out of things like testing from specs? Might that get to the same place? When we talk about teams adopting formal methods (again, your goals are related to formal verification), if we focus on the separate goal of how we get people to adopt spec writing, I wonder if testing is enough.

Gregory (29:58):

That's an interesting question. I will say that some things are pretty hard to express with testing, especially concurrency. I go back and think about the sort of verification that we do: if I could remove concurrency from the equation, why wouldn't we just test everything? It's so simple, you just run a bunch of tests. I mean, there are a lot of inputs, but you can probably run them all. And there's a lot of work that does this, static analysis and things like that, which are ways to get formal methods somewhat cheaper, at least, and they're great tools. Concurrency is another story, and there are not as many tools that are equipped to handle the complexities of concurrent applications. I do think there's definitely a big cliff, or a big mountain, you have to cross.
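The point about concurrency being hard to test can be illustrated with a toy example (the function and counts here are invented for illustration). With a plain `int` counter, two threads doing `counter++` is a data race whose lost updates may never appear in any given test run, so tests can pass while the code is wrong; a proof has to cover every interleaving. With `std::atomic`, the result below is the same on every interleaving:

```cpp
#include <atomic>
#include <thread>

// Two threads each increment a shared counter `per_thread` times.
// Because the counter is std::atomic, no increments are lost, and the
// result is deterministic. Replace std::atomic<int> with a plain int
// and the code has a data race: many test runs would still "pass".
int parallel_count(int per_thread) {
    std::atomic<int> counter{0};
    auto work = [&counter, per_thread] {
        for (int i = 0; i < per_thread; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load();
}
```

The racy variant is the kind of bug that concurrent separation logic rules out by construction: you simply cannot prove a spec for it, while a test suite can easily miss it.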

Gregory (30:57):

But I actually think it's a teaching problem. Once you cross that mountain, you will find that you don't understand how you survived without it. Maybe this is just because I already drank the Kool-Aid, I already climbed up part of the mountain and I don't want to go back. But what I've seen is that people who really understand the theory behind formal methods, and increasingly the people we work with who don't have PhDs in computer science or formal methods, come to understand that there are real problems they can now talk about that were really hard to talk about previously. So maybe we should just teach more formal methods. I guess that would help me in my hiring, but I think it would also improve the state of software overall.

Shpat (31:55):

Just to take a moment, I think that's one of the better pleas I've heard for people to start learning about formal methods, because you don't know what you don't know, and you don't know how useful it might be. You might be trying to do something very hard that has a very simple solution you'd never find otherwise. It's also a matter of what almost sounds like vocabulary, a new language, or, let's say, music. There are amazing musicians who never need a day of music theory to be amazing, yet music theory can be very useful in communicating what you're trying to say, talking to other musicians, and maybe even bringing some of your ideas into existence. Maybe it's a contrived comparison, but it's interesting nevertheless.

Gregory (32:52):

On this topic, I was chatting a while ago with someone who was a C programmer and became a C++ programmer. He basically told me: I didn't understand how to program C++ until I went to Haskell. He said OCaml wasn't pure enough for him; he had to go all the way to Haskell to really understand what he was doing, or how to use C++ effectively. We do C++, and I wasn't a C++ programmer by day when I started at BedRock Systems, so I watch CppCon and things like that. The talks there, by many people, are taking ideas from functional programming and elsewhere, and a lot of the things they present as "the cool kids are doing this" are, yeah, more easily explained by formal methods than by what you were doing before. So maybe follow the cool kids, at least to some extent. Don't always follow the cool kids, but sometimes they're right.

Shpat (33:52):

Right. So, I think you answered this question partway, but most of the things you're doing at BedRock Systems sound like they require the most correctness, the top shelf of verification. I'm curious what your thoughts are for engineers who work on kind of regular systems and software, which perhaps doesn't require that absolute top level of correctness, and who are not in the world of formal methods. What are some good, incremental ways for them to start building safer, more reliable systems and software that don't require them to be formal methods experts? Part of that answer sounds like: look at some of these other languages that might inform how you do things. What else would you bring to that table?

Gregory (34:52):

Well, I think learning more about formal methods is generally useful, and you can see this in a variety of ways. If you are a C programmer, or something like that, pick up a language like OCaml or Haskell. Haskell might be too scary, but I think OCaml is relatively tractable, and there are good resources for it. We actually had this experience with our VMM engineer: he came to us knowing OCaml, and he picked up the ideas of formal methods relatively quickly. He had no formal methods training per se, but just being exposed to functional programming as an idea really helped in overcoming that gap. If you're more in the Rust camp, if you're doing Rust, then you are kind of doing separation logic to some extent. And people say, oh, I don't want to learn a proof assistant,

Gregory (35:56):

I don't want to do all these complicated things. But there are partial solutions to this. If you say, I'm willing to remove side effects and switch to a functional language, you might think about things in a different way, and that might inform the way you write code in JavaScript or C or something like that. Sometimes that makes for better code. At the very least it gives you different ideas. You might think, oh, there's another way to think about this abstraction; how might that influence my users, or things like that, in the future? I'm not sure if that's a total answer to your question.
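The "remove side effects" idea can be shown in a few lines, even staying inside C++. The functions and data here are invented for illustration: one version mutates its argument in place, so a caller must know the history of calls to predict the result, while the pure version depends only on its inputs, which makes it easier to test, reason about, and eventually specify.

```cpp
#include <vector>

// Side-effecting style: mutates the caller's data in place. Correctness
// depends on when and how often this has already been called.
void add_bonus_in_place(std::vector<int>& scores, int bonus) {
    for (int& s : scores) s += bonus;
}

// Functional style: no mutation. The result is a pure function of the
// inputs, so every call site can be understood (and tested) in isolation.
std::vector<int> with_bonus(const std::vector<int>& scores, int bonus) {
    std::vector<int> out;
    out.reserve(scores.size());
    for (int s : scores) out.push_back(s + bonus);
    return out;
}
```

Neither style is always right: in-place updates can matter for performance. But writing the pure version first, then optimizing, is a habit functional languages teach that transfers directly back to C, C++, or JavaScript.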

Shpat (36:36):

I think it is: explore these other areas and bring some of that back into your own work. And this resonates with me. I don't do a lot of programming, really not at all comparatively speaking, but just by osmosis, by being at Galois for a while, whenever I get to do something extremely utilitarian, trying to make something work, this idea of making the thing more functional and removing side effects as much as possible has made a huge difference on some little things. And it feels good. So it makes sense.

Joey (37:13):

Yeah. I think "really experienced programmers" and "good programmers" are almost synonymous in some sense; if you say someone's experienced, that's almost always a good sign for what they're going to do. And we see this in programming, right? Sometimes we start writing a program and it has a pattern that feels bad to us, and it's hard to explain why, but you get the sense that you've done this enough to know it's not going to go well. A lot of times that turns out to be right. What I'm hearing is that functional programming works the same way: if you spend a long time in functional programming, and all of a sudden you're writing a super side-effect-heavy C++ program where it's going to be really hard to track the effects through the system,

Joey (38:01):

you're going to start developing some of those same feelings and instincts. And it sounds like your impression is that the C++ community is broadly moving in this direction, which sounds like really good news. Some of these things that feel bad are being treated as best practices, and that's probably helping even people who aren't able to take the time, for example, to learn OCaml and gain these insights on their own, to build patterns that help them be better from day to day. So that sounds like a really encouraging message. Yeah, absolutely.

Shpat (38:39):

You mentioned that you're hiring. I wonder if you want to share a little bit about what you're looking for, just in case the perfect match is out there listening to this podcast right now.

Gregory (38:50):

Sure. Well, we are hiring in a lot of areas: on both my team, the formal methods group, as well as the more systems-y group, and obviously we're looking for very different backgrounds. But interdisciplinary knowledge here is super useful. If you are the sort of person who's a C hacker by day and an OCaml hacker by night, then you're an ideal candidate for us, because you probably have a great deal of knowledge about how to twiddle bits or how to set up complicated systems that work together, and you also understand the other side of things. The same can be said if you're a functional programmer by day and a C hacker by night; if you join my team, you won't have to learn about move semantics and things like that as much. But we're hiring in both areas, and in many areas. A lot of my team has PhDs, but it's not necessary. We're looking for people with knowledge about separation logic, especially concurrent separation logic, as well as proof automation, whether that's resolution theorem proving, SMT, or many other approaches. And on the other side, obviously we're building a systems stack, so people with knowledge about devices, hardware platforms, and things like that are all very important to the engineering efforts that we do.

Shpat (40:22):

You mentioned that, at least on your team, you're looking for proof engineers. What does that look like day to day? What would you highlight from working on your team that you'd like to share?

Gregory (40:35):

What does a proof engineer on my team do? You should be the interviewer! So, what do proof engineers on my team do? They do a lot of things. They talk with non-proof engineers to figure out what code is supposed to do. They might go to meetings, learn about what the next feature is, and interject their comments on how we should or should not do it. So we're involved in the design process, obviously; that's crucial for effectively building verified software. But we're also on the back end of the process: the code has been written, it's been specified, now let's actually prove it. And sometimes you get new insights from the proof, because you hand-waved away some detail and it turns out, oh, actually, that didn't work.

Gregory (41:28):

So there are components of that. There are also components of teaching and collaborating: learning what people think about when they're designing the system, and teaching them what you think about when you hear about it, or when you would design the system. The idea, ultimately, is to get better code across the board: more verifiable code, code that's easier to work with, more extensible code, et cetera. On that last bit, we've found that well-architected code may take a little longer to write up front, but you don't pay the cost of rewriting it when your feature set changes. You can build a hyper-customized library and say, ah, it's perfect, it's verifiable, but then at the end of the day the one feature it doesn't support is the one you needed, and now all of those carefully crafted assumptions fall by the wayside and you're back to square one. Understanding those trade-offs is really helpful in building software that lasts from release cycle to release cycle, as opposed to lasting only one or two release cycles.

Joey (42:42):

Sounds like fun. It doesn't sound like you'll have much trouble keeping people engaged, with lots of variety and excitement. So that sounds like a pretty good opportunity for somebody who fits.

Gregory (42:57):

I think it's a lot of fun. No one on my team has ever told me they've been bored. There's lots to do, and there are a lot of interesting problems. Some of it is typing, and that's inherent to everything, and some of it is really deep thinking. A lot of it, on the formal methods side of things, is a nice combination of the two. This was a great conversation. Thank you for joining us today, Gregory. Thanks for having me.

Shpat (43:20):

It's a pleasure. This was another episode of Building Better Systems. See you next time.