Building Better Systems

#16: Iain Whiteside – Autonomous Driving: Reasoning About the Rules of the Road (Part 1)

Episode Summary

In this two-part episode, we speak with Iain Whiteside about the challenges and some of the more novel solutions to make autonomous vehicles safer and easier to program. In part 1, we discuss how Iain and his team formalize and check the different actions and situations that a car finds itself in while on the road. In part 2, we discuss how you might validate the accuracy of neural networks that sense the world, and how to mitigate issues that might arise.

Episode Notes

In this two-part episode, we speak with Iain Whiteside about the challenges and some of the more novel solutions to make autonomous vehicles safer and easier to program. In part 1, we discuss how Iain and his team formalize and check the different actions and situations that a car finds itself in while on the road. In part 2, we discuss how you might validate the accuracy of neural networks that sense the world, and how to mitigate issues that might arise.

Watch all our episodes on the Building Better Systems YouTube channel.

Iain Whiteside: https://www.linkedin.com/in/iainjw

Joey Dodds: https://galois.com/team/joey-dodds/

Shpat Morina: https://galois.com/team/shpat-morina/ 

Galois, Inc.: https://galois.com/ 

Contact us: podcast@galois.com

Episode Transcription

Intro (00:02):

Designing, manufacturing, installing, and maintaining the high-speed electronic computer, the largest and most complex computers ever built.

Joey (00:20):

Hello everyone, welcome to the Building Better Systems podcast, where we chat with people in industry and academia who work on hard problems around building safer and more reliable software and hardware. My name is Joey Dodds.

Shpat (00:32):

And I'm Shpat Morina.

Joey (00:34):

Shpat and I work at Galois, a research and development lab that focuses on high-assurance systems development and hard problems in computer science.

Shpat (00:43):

Today, we're chatting with Iain Whiteside, who is a principal scientist and the director of assurance at Five AI, a company that builds software development platforms for self-driving systems, as well as software components that others can use in their autonomous vehicles. Iain's background is in formal verification and system safety, and he spends a lot of his time working on how to build safer autonomous vehicles. In this two-part episode, we talk about some of the challenges and some of the more novel solutions that make it a bit easier to make these systems safer. In part one, we talk about how Iain and his team formalize and check the different actions and situations that a car finds itself in while on the road. Iain discusses how they're leveraging signal temporal logic and domain-specific languages to reason about the rules of the road. These tools make it easier to develop building blocks that reason about the world abstractly from the point of view of an autonomous vehicle, so it's a very interesting conversation. This sort of planning is fully dependent on sensor data, of course, but processing sensor data can be notoriously unpredictable. So in part two, we get into what it takes to make sure that the neural networks used in building models of the world are as accurate as possible, and how you might mitigate issues when they aren't. So make sure to check out part two as well. Let's get into it.

Joey (02:06):

Thanks for joining us, Iain.

Iain Whiteside (02:08):

No problem. Glad to be here.

Joey (02:10):

So your day-to-day work is on autonomous vehicles and, in particular, making sure that they're safe. Can you describe at a high level what that work involves?

Iain Whiteside (02:23):

Yeah, sure, no problem. So I guess we all know that autonomous vehicles are super cool and are going to, you know, revolutionize the world. And we all know that people have been talking about them being just around the corner for quite a few years now. One of the main reasons for that is that building autonomous vehicles is a hard challenge, right? There's a lot of complexity in the world. But an arguably harder challenge is to actually demonstrate, and be confident yourself, that the system is going to be safe and safe to go on the roads. And generally what we mean by showing an autonomous vehicle is safe is actually trying to compare it to human levels of driving performance. And generally, because these are machines, there's a little bit of a desire for them to be better than humans.

Iain Whiteside (03:19):

So the challenge of the field that I work in is: how can you demonstrate that an autonomous vehicle is as safe as we know humans are? Because humans are actually pretty good at driving. My wife would say something different, but anyway. It's worth taking a step back to the early days of autonomous vehicle testing, when it seemed like the best thing to do was get these cars out on the road, with a human behind the steering wheel just in case something went wrong, and see where the problems lie. And that's super good for systems that are getting relatively mature and are capable of driving around the streets of Mountain View or other relatively placid cities. The challenge is that if you do some Statistics 101 on how many miles you would have to drive these vehicles, with a human behind the wheel or whatever, without seeing any dangerous incidents, to be confident that they're going to be safer than humans, you end up with mind-boggling numbers, like tens of billions of miles.

Iain Whiteside (04:31):

And the reason for that is because humans are actually really good. In the UK, I think it's roughly every 10 million miles that there's a crash, and it's less frequent than that for a fatality. I think it's a little bit less than that in the US. But that means that to get confidence of, you know, two nines or three nines, you have to multiply that number of miles. So basically we've got this problem that, in order to make autonomous vehicles economically viable, we have to find other ways of testing them beyond just driving them on the roads. And so enter simulation, which is the clear answer: build a massive data center, lots of cloud computing, and run your autonomous vehicle system in a virtual world, Grand Theft Auto style, hundreds and hundreds of miles a second or whatever speed you want to be able to do it. And then you can make a mile of autonomous driving an extremely risky mile, because you can set up all sorts of crazy decisions from humans in it. And this is a really powerful approach, actually, because it gives you control over testing your requirements, which, as a formal verification person, you'll know is a good thing, to have requirements.
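
As a rough back-of-the-envelope sketch of why those numbers get so big (using the standard statistical "rule of three"; the figures are illustrative, not Five AI's):

```latex
% Rule of three: zero observed crashes in n miles only bounds the true crash
% rate p below roughly 3/n at 95% confidence. With humans crashing about once
% per 10^7 miles (the UK figure quoted above), merely matching that rate needs
% on the order of 3 x 10^7 crash-free miles; demonstrating a rate several
% times better, against far rarer fatal crashes, pushes n into the billions.
\[
  p \;<\; \frac{3}{n} \quad (95\%\ \text{confidence, zero events})
  \qquad\Longrightarrow\qquad
  n \;\gtrsim\; \frac{3}{p_{\text{human}}} \;=\; 3 \times 10^{7}\ \text{miles}
\]
```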

Shpat (05:57):

So, you know, it sounds like part of your job is: how do we make these simulation environments in a way that they're as close to the real world as possible, so when we test things there, they translate into the real world? How do you do that in a way that actually is as close to a one-to-one comparison? Because it feels like you try things in simulation and then, you know, everybody has a plan until they get punched in the face, sort of thing, with reality.

Iain Whiteside (06:27):

Yeah, exactly. That's a real challenge; that's one of the biggest of them. So you can run a test in simulation and your vehicle doesn't crash, or your vehicle does crash. And you need to know if, in a similar situation in the real world, your vehicle would still not crash or still crash. And furthermore, if it crashes, you have to know if it was acceptable to crash, or if that situation was just so unlikely to happen in the real world. Because, I mean, this is one of the... I don't want to call it a dirty secret, but let me call it that for something pithy to say: in the safety industry, we accept risk. So you're saying that there is a chance that bad things can happen, but I think that risk is acceptably low. And in the real world, when you drive and your system fails, you know that failure is a relevant failure and you need to fix it, because it happened in the real world.

Iain Whiteside (07:34):

But if you generate it in simulation, you don't necessarily know that it's a relevant failure, because that particular combination of vehicles and sunlight or something like that was not likely to happen in the real world, or even, you know, there's no way that even a superhuman, like the best Formula One driver in the world, could have avoided that collision. And so you have to start building models of what an acceptable failure is. And you have to build models of what we call the saliency of a scenario, so how likely it is to happen. And then you have this whole other challenge of, well, in simulation you have to decide whether or not you want to generate photorealistic scenes, and whether the neural networks that are basically at the other end of all those cameras and lidars in an autonomous vehicle are going to react in the same way to your synthetically generated scene. And that's a hard problem in the deep learning community, called the domain adaptation problem, because, from personal experience, neural networks are quite fragile to their inputs. Yeah. Yes, I know.

Joey (08:54):

Taking this approach feels like a very healthy approach, not just for autonomous vehicles, but for software in general, right? The understanding that things are going to go wrong, basically, and then trying to work within that framework, is probably better than saying, well, we're just not going to make mistakes. Because if you go out the door and say, we're not going to make mistakes, like, this week our programmers aren't going to introduce any bugs, then you're not going to be ready for that inevitability. So I think, in a way, autonomous vehicles having to exist in the real world has forced that perspective on you. But I guess even for the sort of machine-only applications, this feels like a really valuable way to develop systems: with the understanding that things might fail, and thinking about how do we recover, how do we minimize harm in that case?

Iain Whiteside (09:50):

Yeah, yeah, exactly. And this type of safety approach has been used at NASA and other places for a long time. There are lots of interesting acronyms, like ALARP, which stands for as low as reasonably practicable, and MTLOS, which stands for minimum tolerable level of safety. The combination of both of those is the system safety approach that NASA takes, which says: okay, there's a minimal level of safety that I would like my system to have, it needs to achieve this particular level of residual risk, and I'm going to make every effort to make this system as safe as practicable, but there are other constraints in which I'm operating. So, for example, I cannot make a system reduce its level of availability for safety.

Iain Whiteside (10:42):

So, you know, the safest autonomous car will never leave a parking space. And then there's the point of view of cost and the sort of economic benefits. So there are, of course, challenging sacrifices that you have to make to come to a safety argument for whether or not the system is ready for deployment and whether or not the community can accept it. And autonomous vehicles, like you say, are basically the scariest use of incredibly complex cyber-physical systems. Actually, it's worth talking about the structure of an autonomous vehicle, how a lot of them work. They basically split into three components. One is called sensing, or perception, which is basically a bunch of cameras, a bunch of lidars, which is a very cool spinning radar-type thing, and traditional radars, and a few other sensors that take the world and generate an internal representation of that world.

Iain Whiteside (11:46):

And it's generally, you know, a list of 3D objects, plus a map: where things are, what speed they're going, what their heading is, and where they are with respect to the road. So that's the sensing component. And then there's the planning component, which, based on that representation of the world and where the car wants to go, comes up with a high-level trajectory or goal to get there. And then there's the act part, which is the lower-level actuation, which is much more classical, which takes that trajectory and turns it into something realizable, basically acceleration and steering wheel angle, to get you where you want to go. And obviously the challenges are in the first two, and it doesn't take you long working in the autonomous vehicle space to realize that the sensing component, the perception side, is always going to be inaccurate.

Iain Whiteside (12:47):

You're never going to get it perfect. And so the planning side has to find a way to plan a trajectory safely around these inaccuracies. And this is where you start to get a little bit of tension, and, you know, it's worth thinking about this in terms of a human driver, right? I go out driving all the time and I make mistakes about how far away a car is from me or what our relative speeds are. And implicitly, if I'm not sure when I've just seen a vehicle, I'll keep a little bit further away until I'm clear, and then I can approach it a bit more. And we're making all of these judgments of how inaccurate our perception is without even realizing it.

Iain Whiteside (13:36):

And once you've got neural networks involved, you need to have a better idea. And so you have this sort of compensation on the planning side for the weaknesses of the perception system, but at the same time, you can't compensate for everything. So you have to make the perception system as good as possible. And the traditional way of doing this, by the way, in systems where you have such an interface, is by having requirements. So it's a contract: system A provides a contract that system B takes. And this is actually a really neat approach for autonomous vehicles. So you can say, well, for vehicles that are less than 20 meters in front of me on the road, I may make an error of up to one meter on how far away that vehicle is. And once it starts raining, that error might get up to, you know, 10 meters. And so the planning side can decide to drive a little bit further away, and even drive much slower when it's raining, to reduce the risk of something bad happening due to this larger error. So...
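
As a rough sketch of what that kind of perception/planning contract might look like in code (hypothetical names and error bounds, not Five AI's actual interface):

```python
# Hypothetical sketch of a perception/planning contract: the planner inflates
# perceived distances by the error bound the perception system is contracted
# to meet under the current conditions. Numbers are illustrative only.

def contracted_range_error(distance_m: float, raining: bool) -> float:
    """Worst-case range error the perception contract allows (illustrative)."""
    if distance_m <= 20.0:
        return 10.0 if raining else 1.0   # e.g. 1 m error in the clear, 10 m in rain
    return 15.0 if raining else 3.0       # looser bound for far-away objects

def safe_following_gap(perceived_gap_m: float, raining: bool,
                       desired_gap_m: float = 30.0) -> bool:
    """Planner treats the gap pessimistically: perceived gap minus the
    worst-case error the contract permits."""
    worst_case_gap = perceived_gap_m - contracted_range_error(perceived_gap_m, raining)
    return worst_case_gap >= desired_gap_m

# A planner might slow down until safe_following_gap(...) becomes true,
# and use a larger desired_gap_m when it is raining.
```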

Joey (14:41):

So basically what you're saying here is that the systems have to be, to some extent, aware of their own chance of failure, and quantification of that seems critically important. But again, the specifications can't be "we'll just always see the right thing." They have to be "we might see the right thing, but within this tolerance of what reality is," basically. And that's the kind of thing that you're trying to hold these systems...

Iain Whiteside (15:08):

To. Yeah, yeah, exactly. And so, to me, you get two big challenges in autonomous vehicles. One is saying: is the behavior of the planning system going to be safe with respect to the sort of errors it operates under? So, you know, can I safely, for example, merge onto a highway in rain with that sort of perception error? And you have to actually be quite clear about what safe is. So, actually, the Highway Code in the UK is quite ambiguous in certain places, on purpose, I would argue, and in quite a few places contradictory to what traffic does in general. So, for example, the Highway Code says things like don't stop for any vehicles. And everybody, of course, stops for emergency vehicles all the time, and pulls to the side of the road and negotiates...

Shpat (16:12):

You speak for yourself quite well.

Iain Whiteside (16:15):

And then, of course, it says do not use your horn or flash your lights, or use them for communication. And everybody in the UK... you may not be familiar with this in the US, but in the UK roads are actually quite narrow and winding rather than having six lanes. So one of the situations you get into a lot is that there may be parked cars at the side of the road, which means you have to negotiate with an oncoming car in order to go into its lane to cross. And the way people do this in the UK is by flashing your lights to say, hey, come on, I'll wait on you. And of course that's technically not allowed in the Highway Code. So, anyway, you have to figure out: how can I actually give a non-contradictory definition of what safe behavior for my system is, under the assumption that I've got a good enough representation of the world? That's one of the big challenges. And then the other problem is demonstrating and being confident that your planning, or rather your perception system, sorry, can actually meet this contract that the planner is operating under. Like, are you always only going to have one-meter errors in the depth of a vehicle, that is, how far away it is, or is that just what you've seen so far in your data? And so that is a huge problem, basically.

Shpat (17:40):

Okay. So then, part of planning... let's assume that the sensing is right, and we can get into sensing later. Planning sounds like it has a bunch of challenges that are actually interesting for you. How do you make sure that the planning is actually done in a way that's reasonable and safe?

Iain Whiteside (18:02):

Yeah. So the approach that we've been working on is trying to formalize some of the properties of what a safe maneuver is, and how you can demonstrate what a system should do in practice. So let me give you an example of the type of things that an autonomous vehicle should do in certain situations, and then I can try and talk about how we actually start to express this formally and also test it. The background to this, of course, is that we are doing hundreds of thousands of simulations of different road traffic situations, all various types of them, and you need to have some sort of pass/fail criteria for each of them. And that's actually quite ambiguous, right?

Iain Whiteside (18:59):

Because, you know, in various situations there's more than one way in which you can safely negotiate a challenge, and there's a little bit of wiggle room. It's not like a traditional software unit test where it's very clear what the resulting answer should be. And so we started to build a framework that actually allows you to test whether or not you're achieving safe behaviors based on your own formal rules of the road. So one of the types of scenarios that you tend to get in highway situations is: you're driving along happily at, say, 70 miles an hour on the highway, and there's a car in front of you in one of the other lanes, and they cut into your lane because they're attempting to get to the exit or something like that, but they're driving slower than you.

Iain Whiteside (19:56):

And so when they cut into the lane in front of you, you have what's called a positive time to collision, which is: if both vehicles were to maintain their current speeds, there would be a collision in X seconds. And a bad time to collision for a human is something like two seconds, because it takes us quite a while to react, and at higher speeds these things become more dangerous. And this is something that you can't fully control; the autonomous vehicle can't go around driving assuming every vehicle is going to cut in in front of it, and so never drive faster than any other car. So the standard way in which vehicles are expected to react safely is that, once somebody starts that movement from their lane to your lane, and you've detected them driving slower than you, you've got some reaction time in which you need to start braking and start getting closer to their speed, and therefore have that time to collision start to disappear, until you've opened up a safe following distance to that vehicle.
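
For concreteness, in the constant-speed case described here, time to collision is just the current gap divided by the closing speed:

```latex
% TTC is defined when the ego vehicle is closing on the lead vehicle.
\[
  \mathrm{TTC} \;=\; \frac{d_{\text{gap}}}{v_{\text{ego}} - v_{\text{lead}}},
  \qquad v_{\text{ego}} > v_{\text{lead}}
\]
```

So a car that cuts in 20 m ahead while travelling 10 m/s slower than you gives TTC = 20 / 10 = 2 s, right at the two-second human reaction threshold mentioned above (the numbers are illustrative, not from the episode).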

Iain Whiteside (21:10):

And that's actually quite a complicated property to express, especially if you're just writing it in words in requirements and having software engineers attempt to implement it, and especially when the implementation might also be a sort of AI-based system or a really complex optimization-style planner. How do you do it? So what we do is, we've basically taken temporal logics that have been used for quite a long time to express properties of systems. In particular, we're using a logic called signal temporal logic, which is an extension of something like LTL that allows you to express properties over real-valued signals. The history is essentially from electronic systems, where you get signals. And it also allows you to have bounded operators that talk about things in terms of, well, seconds, right?

Iain Whiteside (22:16):

So you can say, you know, eventually, with a bound of up to three seconds, some sort of property holds. And that's how you get your sort of liveness properties. And so, for that particular property that I was talking about, you can use something like the "until" operator in temporal logic, which says: if a certain condition holds, that implies that you are braking until property B holds, which is that the situation is nice and safe. And you can even use these bounds to say that you've got 0.5 of a second to actually react and start braking, and, by being bounded, you have to have increased that gap so that it's safe within, you know, three seconds or something like that. And you can even enforce that that's a function of the speed.
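
As one illustrative rendering of that cut-in property (not Five AI's actual rule), in signal temporal logic it might look roughly like this, where G is "globally", F is "eventually within the bound", and U is "until":

```latex
% Whenever a cut-in occurs with a dangerously low time to collision, then
% within 0.5 s the ego vehicle must be braking, and it must keep braking
% until (within 3 s) a safe following gap has opened up. The threshold tau
% and the time bounds are assumed parameters for illustration.
\[
  \mathbf{G}\Big(
    \big(\mathit{cut\_in} \wedge \mathrm{TTC} < \tau\big)
    \;\rightarrow\;
    \mathbf{F}_{[0,\,0.5]}\big(\, \mathit{braking} \;\; \mathbf{U}_{[0,\,3]} \;\; \mathit{safe\_gap} \,\big)
  \Big)
\]
```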

Iain Whiteside (23:13):

And so what we've built at the company I work at is quite a sophisticated tool for evaluating these complex temporal logic constraints, and also a language for writing them in a little bit more human-friendly way. So you can take these low-level predicates and properties, for example: what is the velocity of the vehicle? What is the velocity of the other vehicle in the scene? And what is the distance to a lane marking? We provide these atomic predicates that can then be built into complex temporal logic constraints, and you can say something like: if a vehicle cuts into your lane, then you should react in this particular way. And we use this to express extremely complex properties for autonomous vehicles. And the benefit of using signal temporal logic is that it has this magic quantitative semantics: instead of just saying true or false, I've broken the rule or I've not broken the rule, it actually gives you a signal out, which says how safe I am or how close I am to breaking it. So it's like the truthiness or the falseness of that given property.
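
A minimal sketch of the quantitative semantics being described, assuming the standard min/max robustness interpretation of STL (a toy evaluator, not Five AI's tool):

```python
# Predicates return a signed margin instead of a boolean, conjunction is min,
# disjunction is max, and "always"/"eventually" fold min/max over time.
from typing import Callable, Sequence

Signal = Sequence[float]          # one value per time step
Robustness = Sequence[float]

def pred(margin: Callable[[int], float], horizon: int) -> Robustness:
    """e.g. margin(t) = distance_to_lead(t) - 2.0 (positive = satisfied)."""
    return [margin(t) for t in range(horizon)]

def always(r: Robustness) -> float:
    return min(r)                 # worst case over the trace

def eventually(r: Robustness) -> float:
    return max(r)                 # best case over the trace

def and_(a: Robustness, b: Robustness) -> Robustness:
    return [min(x, y) for x, y in zip(a, b)]

# Example: "always keep at least 2 m clearance" on a toy distance trace.
distance = [10.0, 6.0, 3.5, 2.5, 4.0]
clearance = pred(lambda t: distance[t] - 2.0, len(distance))
print(always(clearance))          # 0.5 -> satisfied, but only by half a metre
```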

Joey (24:33):

It sounds like, on one end, you want people that don't, or never will, precisely understand signal temporal logic to be able to write down some specifications, saying things like "when a car enters my lane," which obviously signal temporal logic by default does not provide, so you have to build something up there. You then want to interpret that in a very formal framework with written-down semantics, and then apply that, presumably, both to the simulations running and also when cars are on the road; you want to be able to apply it in both of those situations and check whether those things that people wrote at a high level are actually happening or not. Is that kind of the end-to-end view?

Iain Whiteside (25:18):

Yeah. Okay, so there are a few interesting things there. Let me make sure that I get back to talking about stuff in the real world. But, so, ever since my PhD I've been a little bit of a domain-specific language nerd; I've been super interested in writing DSLs and their semantics. And so this actually felt to me to be the perfect opportunity to build a neat little DSL, and an IDE on top of it. And so what we've done is a few things. In particular, we built a language that allows you to quite easily build named expressions and functions, and even lambdas (it's kind of higher-order, actually), that allow you to build up libraries of temporal logic properties.

Iain Whiteside (26:13):

And so, as a sort of commercially available tool, our libraries come with properties like "has there been a lane change," which are parameterized according to various definitions. So the lane change property, which is "has a vehicle changed lane, starting at this time step and finishing at that time step," takes parameters like the initial lateral velocity: at what sort of turning of the steering wheel, for want of a better word, do you want to actually consider the lane change to have started? Or do you only want to consider it starting once you touch the lane marking, or do you want some other definition? We offer a lot of different definitions for that, which people can compose together. But at some point you do still need to have a little bit of an understanding of the semantics of the underlying temporal logic. And that is actually quite challenging, because temporal logic, and especially ours, also supports past temporal operators like "since" and "historically," which can be more intuitive for writing things, but they do strain the brain when you're composing them with future temporal logic operators. So we've had to write a lot of documentation to help people understand how to build these things. And...
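
To give a flavour of the kind of parameterized, composable property library being described (purely hypothetical syntax, not Five AI's DSL):

```python
# Named, parameterised properties that users compose without writing raw
# temporal-logic operators. The "formula" strings are stand-ins for a real AST.
from dataclasses import dataclass

@dataclass
class Property:
    name: str
    formula: str

    def implies(self, other: "Property") -> "Property":
        return Property(f"{self.name}_implies_{other.name}",
                        f"({self.formula}) -> ({other.formula})")

def lane_change(start_lateral_velocity: float = 0.3,
                count_from_lane_marking: bool = False) -> Property:
    """'A vehicle changed lane', parameterised by when the change is deemed
    to start (a lateral-velocity threshold, or first touch of the marking)."""
    trigger = ("touches_lane_marking" if count_from_lane_marking
               else f"abs(lateral_velocity) > {start_lateral_velocity}")
    return Property("lane_change", f"({trigger}) U in_target_lane")

def reacts_safely(reaction_s: float = 0.5, recover_s: float = 3.0) -> Property:
    return Property("reacts_safely",
                    f"F[0,{reaction_s}] (braking U[0,{recover_s}] safe_gap)")

cut_in_rule = lane_change(count_from_lane_marking=True).implies(reacts_safely())
print(cut_in_rule.formula)
```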

Joey (27:40):

This is something that actually, when we talked to Rod Chapman, kind of clicked into place for me. I've been interested in DSLs for a long time, but more from the language side. In the past, when I heard DSL, it was like, oh great, I get to design a language with a bunch of features. But I think you just sort of explained this as well: it's the domain part that is super helpful in making these things usable. And the fact that you can pack information about what it means to be on the road into your language is what makes it really unique. And then the language stuff has to be done, but it sort of stays under the hood. If you've done your job right there as a language designer, nobody knows how hard the signal temporal logic semantics are, right? It works, and it does the thing that you said. Like, I said I want to know when a car changes lanes, or I said I want to know when my car is going to change lanes, and that happens faithfully. So that means you did your work right there, but hopefully nobody ever has to know how hard that composition is.

Iain Whiteside (28:45):

Yeah, exactly. And this reminds me of... of course, my PhD was in interactive theorem proving and formal methods, so I always think of analogies in that space. In interactive theorem proving, you obviously start with low-level axioms and you build layers of representation on top of them, where, if you've done that right, you don't actually have to refer to the lower-level axioms at all; most of the time you can talk in higher-level concepts. And so you end up proving theorems about, like, calculus or whatever, without having to refer to anything underneath, and that's a really good thing. And I don't think we're entirely there yet; we haven't completed the theory space, as it were, but it's a good ambition. And the other thing that's really cool is that this is an integrated development environment, so I get these little checks that say: oh, by the way, do you really mean to use this? Because it doesn't match from a typing perspective. And so I can do a little bit of type theory to help people. Yeah, so it's a cool approach.

Shpat (29:57):

So this is a product that Five AI aims to make available to people working on autonomous vehicles in general?

Iain Whiteside (30:07):

Yeah. So as a company, we're building a platform for simulation-based verification and validation. And one of the sort of key features that we have in it is this language and this language package, and actually an incredibly efficient framework for evaluating these temporal logic constraints. Maybe I shouldn't talk too much about nerdy caching or anything like that, but we have a really neat technique that means you can do super fast re-computation if you want to change your temporal logic constraint, because we always save... think about it as a massive abstract syntax tree. So anytime you get a pattern match on an expression in the syntax tree, we cache that and we don't recompute it. So that's kind of cool.
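
A toy sketch of the caching idea hinted at here: if evaluation results are memoized per sub-expression of the abstract syntax tree, editing one part of a rule only re-evaluates what actually changed (illustrative only, not the real implementation):

```python
# Expressions as hashable nested tuples, e.g. ("and", ("pred", "speed_ok"), ...).
# lru_cache keys on the sub-expression, so shared subtrees are computed once.
from functools import lru_cache

TRACE = {"speed_ok": (1.0, 0.5, -0.2), "gap_ok": (3.0, 2.0, 1.0)}

@lru_cache(maxsize=None)
def evaluate(expr) -> tuple:
    op = expr[0]
    if op == "pred":
        return TRACE[expr[1]]                       # leaf: look up the signal
    if op == "and":
        a, b = evaluate(expr[1]), evaluate(expr[2])
        return tuple(min(x, y) for x, y in zip(a, b))
    if op == "always":
        return (min(evaluate(expr[1])),)
    raise ValueError(op)

rule_v1 = ("always", ("and", ("pred", "speed_ok"), ("pred", "gap_ok")))
evaluate(rule_v1)
# Editing the rule re-uses the cached results for the unchanged sub-expressions:
rule_v2 = ("always", ("pred", "gap_ok"))
evaluate(rule_v2)
```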

Shpat (30:58):

Sounds like these sorts of tools also come with, you said, libraries. I'm assuming there are other scenarios and situations that you can then build on top of if you're using these tools. Is that right? Real quick... okay, so, yeah, exactly. So how do you make sure those are right? Like, you know, when I say "if this happens, then do a lane change," you've made sure that that's what that is.

Iain Whiteside (31:27):

Yeah, exactly. So this is the real-world validation side of things. In some very few cases, we are lucky, because other people have done some work for us. So in mostly Europe and a few other places in the world, there are some UN regulations that car manufacturers must obey for what a safe lane change is. And that talks about, basically, if you change lane, you must not force any other vehicle to decelerate at a given rate for a given amount of time, with a given reaction time, et cetera, et cetera. These have already been tested and determined to be safe according to that regulation, and if you follow the regulations, everything's great, and we can relatively easily encode that. But the much broader challenge is that you have to have a way to evaluate these same rules when you're driving in the real world, either as a human or evaluating other vehicles.

Iain Whiteside (32:39):

And one of the cool things that we do is, anytime our autonomous vehicle goes out and drives on the road, we collect a lot of data, and that includes seeing other vehicles changing lanes beside us or in front of us or something. And we can evaluate these very same rules on a different system, not our own, and say: are you following the rules? And you probably will not be surprised to learn that, quite a lot of the time, human drivers don't follow the same stringent safety requirements as an autonomous vehicle would follow. So what we're essentially trying to build is mechanisms that can actually evaluate whether or not a rule was triggered in a situation that a human believed it to be triggered in, or whether any of the preconditions of a given rule failed but the conclusion still occurred.

Iain Whiteside (33:36):

And so it's tricky, and we don't really have all the answers right now, because it's a big data challenge primarily. But we're very aware of this challenge, as are quite a lot of the regulatory bodies across the world, and governments trying to figure out exactly how to formalize their requirements for safe behavior. So I'm hoping that, through a lot of collaboration between all of the stakeholders, plus enough data to validate our rules, at some point we'll get to a point where it feels like you could write a digital highway code in signal temporal logic and be pretty happy with that as being representative of what should be safe behavior. But there are definitely a few potholes along the way.

Joey (34:31):

Potholes, right. So it's an interesting challenge. And in the long run, what you're hoping for is... well, people aren't going to follow the rules, right? I think we've pretty well established that at this point. We can talk about enforcement separately, and how those things get approached, but the reality is, I guess, autonomous vehicles are going to be on the road with people, and people, intentionally or unintentionally, are going to consistently break the traffic rules. And so I guess I'm curious, in some sense, why this matters, basically. We can have a traffic code, and it might influence reality, but at the end of the day an autonomous vehicle still has to do its best even when other people, and in particular non-autonomous vehicles (because maybe there's a world where all the autonomous vehicles eventually follow the rules), don't; it still needs to try to keep people safe. So is this a long-term vision, or is there short-term value in doing this as well?

Iain Whiteside (35:32):

Yeah, I mean, in a world where all vehicles are connected and autonomous, it's relatively straightforward to prove that there will never be a crash, assuming there are no humans involved, like pedestrians or anything like that. I've read at least one paper that gives a proof of that. But yeah, there are sort of two things to do with this. One of the reasons we're keen to see what other drivers do with respect to these formal rules is that it helps us actually validate some of the driver models we use in simulation. So in simulation, when you're testing these vehicles, you have a car doing a cut-in, and you simulate all sorts of different cut-ins. And if you have enough data on what types of cut-ins humans do, then you can weight your distribution.

Iain Whiteside (36:24):

So you can focus your simulation in the right areas and be confident that it looks the same. And the example I always use in this case is the sort of weird interaction between two different drivers who are trying to understand each other's intentions. So, if you're driving along the road and there's a car edging out of a junction, a minor junction in front of you, that car is trying to figure out, can I go? And the car approaching the junction is trying to figure out, is that car going to go? And so, as a result, that driver might slow down, which makes the decision of the other vehicle a little bit easier, and vice versa. These are quite nuanced sorts of back-and-forth communication that happen between human drivers.

Iain Whiteside (37:16):

And it's quite difficult to actually encode that and get that to come out in simulation. You might be simulating all sorts of variations of cars pulling out of a junction, but if your model doesn't allow for these types of hesitant situations, where a car is gradually pulling out and stopping, pulling out and stopping, you might just be missing part of the search space through lack of model fidelity. And so that's one of the reasons why we try to make use of not just failures of our autonomous vehicle in the real world, and that amount of data, but we really try to squeeze as much juice out of that real-world data as possible, by using it to validate the models of other vehicles that we use in simulation.

Joey (38:11):

And I guess, in the long run, the pessimistic side of me says all cars are never going to be in communication, or if they are, they're never going to be honest about it. So it sounds like having traffic rules, in the long run, might be the protocol by which vehicles are able to communicate: like, well, we can get you in trouble if you do this, so you can't do that. Just setting boundaries. You know, we experience it with humans; there's game theory happening out on the road every day, where some people are getting to work quite a bit faster because they're willing to take risks that others aren't. And so they'll swerve between lanes and take advantage of the more cautious drivers.

Joey (38:50):

And it seems plausible that that world could be where autonomous vehicles end up as well. Like, yeah, I like to get to work fast, so I'm going to buy the car that pushes things farther. That's a somewhat dystopian view of the future, but, you know, one that's maybe not that hard to imagine. So doing this code stuff early, getting the laws written in a way that we can know what cars are supposed to be doing, I guess from that point of view is immensely valuable, because then we can draw limits on how unsafe cars are going forward as well.

Iain Whiteside (39:27):

Yeah. And of course this will open up the can of worms about ethics in driving for autonomous vehicles, and, you know: is it ethical to be breaking the rules in this particular situation because all the other vehicles are? There are dragons in there, but I think it's going to be a while before we start to get into these types of situations. The thing you're going to find first of all, of course, is humans who are, for want of a better word, trolling autonomous vehicles. Right? So, in the streets of London, the first autonomous vehicle that drives very slowly through a crowded street is obviously going to get pedestrians walking out in front of it just to make it stop. And we're going to have to cope with that, and we're going to have to cope with cyclists taking advantage of them in various ways.

Shpat (40:30):

Are you programming in, like, a little arm that raises a middle finger out the window, or a specific number of fingers? Yeah, fair enough. A specific number of fingers is better than, uh, this.

Joey (40:46):

Well, yeah, you know, different countries have different numbers and configurations of fingers, I think, that are meaningful. So we don't want to... fair enough... we don't want to overemphasize our American perspective.

Shpat (40:56):

I think people in Britain are pretty familiar with the middle finger.

Iain Whiteside (41:02):

Yeah. We're sort of multilingual, as it were; we can communicate in very many different ways.

Joey (41:09):

I mean, I guess any number of upward-raised fingers can probably be read as an insult, right? Especially if it's, like, the back of your hand; it just reads as aggressive. Yeah.

Iain Whiteside (41:20):

I mean, all the big companies doing level four autonomous driving (level four being the sort of big daddy, with no humans behind the steering wheel), their main challenge is internationalization. So, being able to go to other cities and give obscene gestures to people...

Joey (41:44):

Yeah, right. You don't want to accidentally encourage people to cut you off by using the wrong hand gesture. You know, thumbs up maybe is good in some places...

Shpat (41:53):

And not so good in others. Hey, speaking of when you have to break the rules: signal temporal logic sounds like a great way to encode some of this stuff you should do, but, you know, you have a property that explains what it means to do a lane change, one that explains what it means to do a safe one, and one that says never cross lane boundaries. At the same time, when I'm on the highway and there's something going on where I can't avoid a crash, I swerve to the side of the road, where, you know, technically you aren't supposed to do that. Or, I haven't been in this situation, but I imagine that if I'm about to hit a very small car with my SUV and I can instead choose to hit a bigger truck, I'm going to hit the truck. All of these... especially the lane change, let's take that as a simpler one. Are you thinking about, or are we not there yet in terms of, how do you program those in?

Iain Whiteside (42:49):

And let me add another layer of complexity. Generally, you want to drive comfortably, and so most of the time you also have rules, written in signal temporal logic again, that say, you know, don't accelerate at greater than three meters per second squared, or decelerate at more than that, with different values laterally and longitudinally. And of course, if you have to do an emergency stop, you break that rule. And so this does get quite complicated, because, as you say, our rule says don't leave your lane, but actually, if you're doing a lane change, you need to leave your lane, and so you would technically break that rule. And so the way we approach this is a combination of building assumptions and conditions into certain rules. So, you know: don't leave your lane unless you are performing a lane change.

Iain Whiteside (43:46):

Or: don't leave your lane unless (the sort of way we express it is in terms of a really dangerous time to collision) there's not enough time to brake and you've got space to change lanes. So you end up building quite complicated properties. And I do actually have this concern that, when you end up with a world that is so very complex, you have properties with so many assumptions or conditions on them that you might end up getting some contradictions and missing particular spaces. To be honest, we haven't got to a world that's as complicated as that right now; we've spent a lot of time modeling safety on the highway, for driving highways and changing lanes and merging.

Iain Whiteside (44:47):

And you can quite easily, along the same lines, express what it means to be a reasonable lane change. So one of the properties that we can express is, well, don't change lane unnecessarily. And this is, I dunno, maybe it's like a liveness-style property that you've got; it's talking about making progress. So one of the other rules that we have is: make sure you're driving at a reasonable speed for that particular highway based on the conditions. And that's slightly different from the standard safety properties that you get. And so we've got a couple of mechanisms. One is these conditions, and the other is, in the DSL itself, we have categories of different rules. So we can express rules which are more about comfort, and rules which are more normative, to do with following the highway code, and some which are safety, and various other types of categories.
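
An illustrative encoding of the two mechanisms described above, conditions baked into rules and categories for ranking violations (hypothetical names, not Five AI's DSL):

```python
# Each rule returns a signed margin: positive means satisfied, negative means
# violated, and rules that don't apply in the current state return +infinity.
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Category(Enum):
    SAFETY = 1
    NORMATIVE = 2      # highway-code style rules
    COMFORT = 3

@dataclass
class Rule:
    name: str
    category: Category
    robustness: Callable[[dict], float]

def stay_in_lane(state: dict) -> float:
    if state["performing_lane_change"] or state["emergency_avoidance"]:
        return float("inf")                    # condition: rule does not apply
    return state["distance_to_lane_boundary"]  # margin to the boundary

rules = [
    Rule("no_collision", Category.SAFETY, lambda s: s["distance_to_nearest"] - 0.5),
    Rule("stay_in_lane", Category.NORMATIVE, stay_in_lane),
    Rule("gentle_braking", Category.COMFORT, lambda s: 3.0 - abs(s["accel"])),
]

state = {"performing_lane_change": False, "emergency_avoidance": False,
         "distance_to_lane_boundary": 0.4, "distance_to_nearest": 6.0, "accel": -3.5}
violations = [(r.name, r.category.name) for r in rules if r.robustness(state) < 0]
print(violations)   # [('gentle_braking', 'COMFORT')]: comfort broken, safety fine
```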

Shpat (45:46):

Oh, so categories. So basically the comfort thing, you downgrade it to, like, hey, this is a nice-to-have, but when push comes to shove, you can ignore it.

Iain Whiteside (45:57):

Yeah. So basically, by introducing these categories and hierarchies, you can provide enough data so that any downstream analysis can make the decisions that it wants to. And the standard model that we have in our workflow is: it's a bad thing if you break a comfort rule when you're nowhere near breaking a safety rule. And we can do this because of this neat property that I mentioned earlier and meant to come back to, about temporal logic, in that it talks about truthiness and falseness by having a signal about how close to a failure you are. Because instead of just black and white, for a rule that says don't collide with any other vehicles, the signal that comes out of that rule is basically how far away you are from any other vehicle, or from the closest vehicle to you. So it's not actually saying "I've collided"; it's saying "I am intersecting with that vehicle because I'm so close," and you can obviously put a little sort of halo around the vehicle to make sure that your collision rule catches you before you actually get too close.

Shpat (47:18):

And is it just how far away I am, or is it also how different our speeds are, if I'm going towards a vehicle or something like that? Or is it just distance?

Iain Whiteside (47:28):

This particular rule is using distance. The flexibility that we have in the platform is that we have other rules which say... let me say it correctly: don't be responsible for a very low time to collision with another vehicle. So if a car is coming up behind you with a very high speed differential, they're almost certainly going to crash into you, and, right, the distance rule will catch that, but there's also a way of looking at it in terms of time to collision. But one of the properties is, if we are coming up against another vehicle that we haven't seen, at a high time to collision, there's another rule that will pick that up. So we try to keep rules at the right level of abstraction.

Iain Whiteside (48:21):

We actually take advantage of this signal nature of the logic in a few different ways. In what we were just talking about, you can use it to say that if we're relatively close to another vehicle and the no-collision rule is close to failing, then maybe you want to stop caring about the comfort rule. That's sort of how we can use it. But the other way we use it is to allow you to direct the search through the space of your scenarios. Is there a cat or something? There we go; my dog is locked safely upstairs, otherwise he would be on my lap getting in the way. So, yeah, one of the things we use this signal nature for is to actually direct a search.

Iain Whiteside (49:19):

So, for example, in simulation you have what we call a scenario, which is basically an abstract definition of a traffic situation, and they have parameters. So you can describe, abstractly: a vehicle will start driving X meters in front of you, Y meters per second slower than you, and when it's Z meters in front of you, it will cut in at (I've run out of letters) A meters per second of lateral velocity. And so you've got basically four parameters which are controlling, to some extent, the seriousness of this cut-in behavior, right? A high velocity delta between you is going to mean that when it's cutting in, it's getting really close to you, and a high lateral velocity means it's going to happen quickly, so it's harder to spot. And scenarios get more complicated than that. But, for example, if we have rules evaluating how close you get to a collision in a scenario, we can actually do something like (the easiest way to think about it is gradient descent for neural network training) where, if you make small changes to the parameters, you can find which parameter change leads to the largest change in the safety rule, which gets you closest to failure.

Iain Whiteside (50:46):

And you can search in that direction to find failures. And this is super useful, actually, because one of the big problems is that the search space for scenarios in simulation is huge, right? You've got many different parameters that are impacting a traffic situation, and you have potential, and not just potential but actual, non-linearity in these systems, because the perception system might do something weird, or the planning system might. So you can't just say that because I was safe at five meters per second and at six, I'll be safe at 5.5, because something could go wrong there.
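
A toy version of that robustness-guided search: perturb the scenario parameters, keep the direction that most reduces the safety rule's robustness, and stop when a failing scenario is found. The "simulator" here is a stand-in linear surrogate, so this is only a sketch of the idea, not Five AI's platform:

```python
def simulate_and_score(params: dict) -> float:
    """Stub: run the scenario and return the worst-case safety robustness
    (e.g. minimum distance margin). Negative means a failure was found."""
    # Purely illustrative surrogate of a cut-in scenario:
    return (params["cut_in_gap_m"]
            - 0.8 * params["speed_delta_mps"]
            - 2.0 * params["lateral_velocity_mps"])

def falsify(params: dict, step: float = 0.5, iters: int = 100) -> dict:
    score = simulate_and_score(params)
    for _ in range(iters):
        if score < 0:
            return params                       # concrete failing scenario
        # Finite-difference estimate of which parameter change reduces
        # robustness the most, in the spirit of gradient descent.
        best = None
        for key in params:
            for delta in (+step, -step):
                candidate = dict(params, **{key: params[key] + delta})
                s = simulate_and_score(candidate)
                if best is None or s < best[0]:
                    best = (s, candidate)
        score, params = best
    return params

print(falsify({"cut_in_gap_m": 15.0, "speed_delta_mps": 5.0,
               "lateral_velocity_mps": 1.0}))
```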

Joey (51:25):

This conversation is really reminding me of the conversation we had with Dan. One of the big things that came up there (he was talking about what blockchain developers are doing, basically) was the idea that testability is absolutely critical. And what I'm hearing from you is sort of the same thing. You're able to develop these incredibly complicated logics and chains of rules and priorities in part because you can go to simulation and see if those things make sense, and you can do it over and over. And presumably you can get feedback like: well, we put this new rule in and it never triggered, or we put this rule in and it was resulting in crashes all the time. You can get that kind of feedback really quickly, and it's incredibly empowering. And it's probabilistic feedback, right? No one's saying yes, that's good, or that's bad, but you can get a sense of whether this is broadly working for your system. And that seems like part of what's enabled you to build such a complicated but useful system at the end of the day. Yeah.

Iain Whiteside (52:42):

Being able to efficiently and accurately explore, in simulation, the huge search space of driving in the real world is the key challenge. And basically everything we are doing, and everything I work on now in research, is to find ways to search that space more efficiently and get more information out of it. Yeah, for sure.

Shpat (53:13):

So it sounds like signal temporal logic in general is very useful for expressing these kinds of more complicated rules. I'm thinking about subject area experts, and I'm wondering how you're bringing in that input. So there's the codes, and then there's the rules, but I'm naively thinking there are people whose area is driving, or, you know, the road and the rules of the road. I don't know who they are, I don't know what that job is, but I'm curious how you think about bringing in that input, especially how you design those libraries that then get into the hands of people building autonomous cars.

Iain Whiteside (53:53):

Yeah. So, interestingly, at Five AI we work with...

Shpat (53:59):

All truck drivers.

Iain Whiteside (54:02):

I mean, interestingly, we work with so many people who have backgrounds in Formula One and all sorts of driving spaces, and a lot with advanced driving qualifications. Huh, I'm kind of embarrassed when I'm in conversations with these people, because they know so much more than me about this. But we also actually work with people whose job is (I'll try and get the title right) something like forensic crash investigation specialist. So these are people whose job is to analyze a crash and understand what the contributing factors were, and where blame might lie, and so on. And they're incredibly useful for giving you very clear and precise understandings of what right of way might be, or what safe behavior in a given situation could be. And the challenge, of course, is that I'm not necessarily going to subject these people to temporal logic; they made a different career choice.

Iain Whiteside (55:12):

So you have to find a way to help get these subject matter experts' views formalized. And one of the things you can do, the good thing about formalizing things, is that we then have a good understanding of the boundaries of these situations with respect to some of the parameters in the equations. So we can actually generate synthetic data based on the models, which can help validate that the decision boundaries are correct, if you see what I mean. It's like counterexample checking or something like that, the equivalent of it: you can give people confidence that the decision boundaries are right. And we find that, in fact, we tweak them sometimes as a result of getting that. Yeah, these people have a lot of knowledge.

Shpat (56:11):

Thanks for tuning in for the first conversation of this two-part episode. Please find a link to part two in the episode notes.