Building Better Systems

#18: Jordan Kyriakidis — Helping People Write More Useful Requirements

Episode Summary

In episode #18, we chat with Jordan Kyriakidis, co-founder and CEO of QRA Corp. QRA is developing QVscribe, a product that helps engineers write requirements and analyze those requirements to gauge whether they are framed well and capture the writer's intent. We discuss the importance of writing good early-stage design requirements, how they impact your system, how to write better requirements, and the state of natural language processing and machine learning for this use case. We also talk about applying those techniques in situations where you need explainability and where ambiguity is unacceptable.

Episode Notes

In episode #18, we chat with Jordan Kyriakidis, co-founder and CEO of QRA Corp. QRA is developing QVscribe, a product that helps engineers write requirements and analyze those requirements to gauge whether they are framed well and capture the writer's intent.

We discuss the importance of writing good early-stage design requirements, how they impact your system, how to write better requirements, and the state of natural language processing and machine learning for this use case. We also talk about applying those techniques in situations where you need explainability and where ambiguity is unacceptable.

Watch all our episodes on the Building Better Systems YouTube channel.

Jordan Kyriakidis: https://www.linkedin.com/in/jordankyriakidis/

Joey Dodds: https://galois.com/team/joey-dodds/

Shpat Morina: https://galois.com/team/shpat-morina/ 

Galois, Inc.: https://galois.com/ 

Contact us: podcast@galois.com

Episode Transcription

Intro (00:02):

Designing, manufacturing, installing, and maintaining the high-speed electronic computer: the largest and most complex computers ever built.

Shpat (00:20):

Hello everyone, and welcome to the Building Better Systems podcast, where we chat with people in industry and academia who work on hard problems around building safer, more reliable software and hardware. My name is Shpat Morina.

Joey (00:33):

And I'm Joey Dodds.

Shpat (00:35):

Joey and I work at Galois, a research and development lab that focuses on high-assurance systems development and, more broadly, on hard problems in computer science.

Joey (00:45):

Today we're chatting with Jordan Kyriakidis, the co-founder and CEO of QRA Corp, a software company that's helping engineers write clear requirements. Jordan and QRA Corp are developing a product called QVscribe, which helps you write requirements and then analyzes those requirements to help answer the question: do these requirements actually describe what we want to build, and are they useful as requirements down the road? Is anything missing? Today we chat with Jordan about the impact of writing good early-stage design requirements, how they affect your system, and how you can write better requirements. We'll also get a little bit into the state of natural language processing and machine learning, and how to apply those in situations where you need explainability and where ambiguity is unacceptable. Thanks for joining us, Jordan.

Jordan Kyriakidis (01:37):

Pleasure to be here, Joey and Shpat. Looking forward to our conversation.

Shpat (01:41):

We're excited to chat with you about QRA. I'll start with a question that might seem a little silly: early-stage design and requirements writing is somewhat critical. What makes it critical?

Jordan Kyriakidis (01:58):

It is critical. It may not be the most fun part of the engineering process, but it's critical because it's the first time you capture the intent: where you start describing what it is you want to do, and what behavior you want the system to have. Especially if the system or product you're building is very complicated and involved, spanning many years, many engineers, sometimes a whole supply chain of different companies, you need to make sure that everyone understands what it is you're building, that you're building the right thing, and that everyone is on the same page with a common understanding of what you want to do.

Shpat (02:40):

At QRA, you're essentially building a product that's supposed to help with that. Can you tell us a little bit about it?

Jordan Kyriakidis (02:47):

Sure. We started out solving this problem of, you can call it design verification, if you will. That is: how do you know that what you're building is the right thing to build, and that what you build is going to have the behaviors you want it to have? How do you figure that out? When we first started, we thought the problem was in the design, so we built something that verifies designs. These are usually simulation models for dynamical systems, these types of cyber-physical systems. When we put it into the market and got it into customers' hands, what we found was that the way they were using it was to compare their designs with their actual textual requirements. And what we found was that at least half the time, sometimes even more, the error actually originated in the requirements, not in the design.

Jordan Kyriakidis (03:38):

Now, my background and training is in theoretical physics, so I have this kind of irresistible urge to get down to fundamentals and first principles. We thought we wanted to solve things right at the beginning, in the design, but it turns out there's an even earlier stage: the requirements, when you're just trying to decide what it is you want to design. That's when we moved the company over to start looking at the requirements. It means a whole other set of technologies you have to adopt, more NLP, to really analyze the requirements and make sure the intent is very clear to all possible stakeholders.

Shpat (04:13):

Is that the main benefit? How would you characterize what people get from using the product you're building?

Jordan Kyriakidis (04:21):

I would say that at the highest level, the benefit is that they save time and money: they save the rework costs. That is the main thing. Oftentimes, and I'm sure at Galois you see this a lot as well, you insert these errors into the development process very early on, but you don't know that you're inserting them, and you don't have any mechanism to catch them until sometimes years later, when another person who had no involvement in the early-stage requirements or design uncovers the error, and you have to go back and rework what you've done before. Saving that time is a big benefit our customers have received.

Shpat (05:02):

At the highest level, in a way, that's what a lot of products are: their goal is to save time and money. If we go a little deeper, if you're somebody writing requirements, is it a matter of clarifying the intent? Is it a matter of checking for ambiguity? What are those benefits? I'm curious.

Jordan Kyriakidis (05:22):

The benefit is really consistency, clarity, and conciseness. We currently check 16 different problem types, as we call them. For each requirement you write, we'll go through, apply these metrics to analyze it against those 16 problem types, and produce a score for it. A typical thing we check for: are you saying something that is ambiguous, that can be interpreted more than one way? We'll dock you some points for that. Is your requirement non-atomic, meaning it should be broken up into multiple sub-requirements, each of which is atomic? Every requirement should require one, and only one, thing. Typically requirements get passed on to the test engineer, and the test engineer builds tests. So if you have a requirement that says: if this happens, do this, unless you have this feature, in which case do this, but only under these conditions, otherwise do that...

Jordan Kyriakidis (06:24):

We see requirements that are structured that way, and we say: okay, this is actually five requirements. So our product will go in and say: break this up. Other times the non-functional requirements aren't specified: they'll say, if X happens, Y should happen immediately. Now, what does "immediately" mean? Other times we can check for ambiguity; actually, what we check for is similarity, but oftentimes requirements that are very similar are also contradictory, so we can flag ones that are semantically very similar. It's a series of checks that you do very early on, and they act almost like guideposts for the author, so that they catch a mistake right there as they're typing it, as opposed to later on, when they're in a review meeting or deeper into the engineering process.
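To make the similarity check concrete, here is a minimal, hypothetical sketch of the general idea, not QVscribe's actual implementation. It uses TF-IDF cosine similarity from scikit-learn as a stand-in for whatever semantic model a real tool would use; the threshold and the sample requirements are made up.

```python
# Hypothetical sketch: flag pairs of requirements that are suspiciously
# similar, since near-duplicates are often contradictory. TF-IDF is a
# stand-in for a real semantic model; not QVscribe's actual algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "The pump shall shut down within 2 seconds of a low-pressure alarm.",
    "The pump shall shut down within 5 seconds of a low-pressure alarm.",
    "The display shall show the current tank level in liters.",
]

vectors = TfidfVectorizer().fit_transform(requirements)
scores = cosine_similarity(vectors)

THRESHOLD = 0.6  # made-up cutoff; a real tool would tune this
for i in range(len(requirements)):
    for j in range(i + 1, len(requirements)):
        if scores[i, j] > THRESHOLD:
            print(f"R{i + 1} and R{j + 1} look similar "
                  f"(score {scores[i, j]:.2f}) -- check for contradiction")
```

On the sample data, the two pump requirements would be flagged as near-duplicates that disagree on the deadline, which is exactly the similar-but-contradictory case described above.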

Joey (07:16):

And the feedback is that quick? So as someone's typing out requirements, they'll get feedback from your tool? Is that the turnaround you give?

Jordan Kyriakidis (07:25):

It is nearly that. There is an Analyze button that you have to press right now, but in future releases we're looking to get rid of the Analyze button and just keep doing it in the background. But it is right there: we produce a score from one to five on each and every one of your requirements. So as you're either reading the requirements or writing them, you get the score right there to assess how well your requirements are written.

Joey (07:52):

So I guess I'd say that, personally, I haven't put a lot of thought into how I write requirements. Some of the things you mentioned feel like they're obviously good things: the sort of repeated if-statement you mentioned, all in a single requirement, seems like it's suggesting a lot of the code, and maybe suggesting a not-very-good structure for the code that would be better left to the engineer to figure out in the long term. In that case it's obvious, but in other cases maybe it's not so obvious. When you come up with these lists of things that you want your software to help with, how do you evaluate how good a requirement is? It seems like you're aiming for certain things by applying your tool, but how do you know those are the right things?

Jordan Kyriakidis (08:35):

Well, I guess the ultimate arbiter is our customers: they tell us whether it works for them or not. But there are some rules of thumb that we can apply. For example, there are international standards. I don't mean official ISO standards, but INCOSE and these kinds of systems engineering societies publish guides saying: based on our experience, here are the characteristics a good requirement should have. We can take some of those and encode them; a lot of our checks are taken from that. A lot of them are things like: you should use active voice instead of passive voice. A lot of companies also have their own internal guides for how to write requirements, almost like style guides. What actually surprised us is how many industries have very similar guidelines and adopt similar practices in writing requirements, whether they're energy companies, medical device companies, or even automotive. They all have a similar flavor, which is good for us, because it means we can apply one set of processes that works for all of them. Looking back now, maybe it's not so surprising, because in the end it's all systems engineering, and there is an established practice for systems engineering.

Shpat (09:55):

I'm curious, actually. Maybe this is too much in the weeds, but it might be interesting for some of the listeners. It sounds like you've seen a lot of different sets of standards and guidelines. That's a broad question, but out of those, what stands out as your rule of thumb? Like: here are the top X things you've got to think about when you're thinking about how to write better requirements at the beginning. I wonder if you could talk a little bit about that.

Jordan Kyriakidis (10:24):

Yeah. I would say a lot of it is just the simple structure of the requirements. A lot of them are written almost like prose, and we'll help chop them up into individual pieces. Oftentimes we find that requirements are, I don't know, I want to say too specific. I don't really mean too specific, but they're more specific than the writer actually intends them to be. Especially in the very early stages, you don't actually know all the details of what you want to build, so you have uncertainty, not in the language you're writing, but in your own mind about what it is you want to do. The requirement should not be more specific than what you actually know. So having a vague requirement is not always a bad thing.

Jordan Kyriakidis (11:12):

It's bad only when it's more vague than what you actually intended it to be. That's one that I would say was a surprise to me, having been in this game for a while. But really, the biggest thing we see, I call it the requirement-ness of the requirements, is having imperatives. Often you read a requirement and it doesn't actually require you to do anything. It's just a statement of what should exist. It's not actually a requirement.

Shpat (11:50):

Yeah. So, number one on the list: have it be a requirement.

Jordan Kyriakidis (11:54):

It should be a requirement, and it should be written in an imperative style. Some companies even insist on using the word "must" or the word "shall"; some of them are very specific about which individual words to use. And if you can't write your requirement as what they call a "shall statement", it's probably not a requirement. I would say a close second is vagueness: using vague words. A big culprit there is nonspecific temporal phrases, things like "instantly", or "X should happen, then Y should happen", where they'll say something like "immediately" or "soon" or "soon enough". "Adequate" is another kind of word, and "efficient" is another. These types of vague words in requirements should be eliminated. A lot of it is that qualitative nature.
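Here is a minimal, hypothetical sketch of what rule-based checks like these could look like. The word lists, the check names, and the one-to-five scoring scheme are invented for illustration; they are not QVscribe's actual rules.

```python
# Hypothetical sketch of rule-based requirement checks: an imperative
# ("shall statement") check, a vague-word check, and a crude atomicity
# check, combined into a 1-5 score. Word lists and scoring are made up.
import re

VAGUE_WORDS = ("immediately", "soon", "adequate", "efficient", "instantly",
               "appropriate", "as needed")

def analyze(requirement: str) -> tuple[int, list[str]]:
    text = requirement.lower()
    problems = []

    # Imperative check: a requirement should actually require something.
    if not re.search(r"\b(shall|must)\b", text):
        problems.append("no imperative: not written as a 'shall statement'")

    # Vagueness check: flag nonspecific, qualitative words.
    for word in VAGUE_WORDS:
        if re.search(rf"\b{re.escape(word)}\b", text):
            problems.append(f"vague word: '{word}'")

    # Atomicity check (very crude): multiple conditions/conjunctions
    # suggest the requirement should be split into sub-requirements.
    if len(re.findall(r"\b(and|unless|otherwise|except)\b", text)) >= 2:
        problems.append("possibly non-atomic: consider splitting")

    score = max(1, 5 - len(problems))  # invented scoring scheme
    return score, problems

score, problems = analyze(
    "The valve should close immediately and alert the operator, "
    "unless maintenance mode is active.")
print(score, problems)
```

The sample requirement trips all three checks: no "shall", a vague temporal word, and a compound condition that probably hides several atomic requirements.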

Joey (12:46):

And what you're highlighting with these points, to some extent, is that the requirement writer's job is not an easy one, right? Because there's too specific, and there's not specific enough; getting this right is a sweet spot. You don't want to over-specify; you don't want to say "you should use this very specific thing" when that's not the requirement writer's call to make. But if you're too vague, with temporal things for example, then you can cause real issues for the system down the road. And as a newer requirement writer, or maybe even a more experienced one, it wouldn't be obvious where it's right to be abstract and where it's right to be specific. It sounds like that's a real challenge for requirement writers.

Jordan Kyriakidis (13:27):

It is a real challenge for requirement writers today. There are certain trends in the systems engineering community. One is that a lot of the older, experienced engineers are retiring, so new people are coming in. Also, a lot of companies have requirements authors whose native language isn't English, so they're new to writing in it. So there's a training component and a language barrier as well, and this actually helps them. But it's not a silver bullet. It's not like if you use QVscribe, then all your problems are solved. You still need the human in there; you still need judgment. For example, we can't tell you, not today at least, that your requirement is correct.

Jordan Kyriakidis (14:14):

Because a piece of software doesn't know what's in your brain; it doesn't know what you intend. But it can surface things and make you think about them. When you say "immediately", what do you actually mean? Is it a nanosecond, or is it a minute? I was talking to one guy yesterday, and he was saying that for them, "immediately" really means something like 10 minutes. Not a couple of hours, but it doesn't have to be a nanosecond either. It's just the response time that was sufficient for their system.

Shpat (14:49):

You touched on this thing where, I guess when you build tools like you've described, the tough part is the "too vague or not vague enough" question: there's no way for you to really know that, because it's all about intent. So I'm assuming it can't just be "you press a button and it's done"; it's sort of a loop. You've got to make this thing work really well with the requirements writer. I wonder what that's like, or what the challenges there are.

Jordan Kyriakidis (15:17):

There is a challenge there, and different tool makers offer different solutions. The solution we adopt is to make the tool configurable. You can control how deep an analysis you want to do, or how stringent an analysis you want; you basically have some level of configurability there. And the configurability happens in two stages. One is that different companies have different policies and different needs. If you're designing, I don't know, an emergency shutdown procedure for a nuclear power plant, you want to be very stringent. There's no room for misinterpretation; that's where you want to formally prove that this is actually going to work. But if you're doing something where, if it goes wrong, it's not that big a deal...

Jordan Kyriakidis (16:06):

You don't need a big 800-pound gorilla to formally verify everything; you can relax it a bit. So that's one dimension: the project, or the company, or the system you're building. Another is the stage of development of the product or system. Earlier on, you may want to be a bit more lax, because you're still in the very early stages, just starting out; you don't need to turn on all the checks. But as you're getting closer to baseline, and closer to actually building it or sending it out to your supply chain, you want to be a bit more stringent. I kind of liken it to compilers: you have a setting for how pedantic and how stringent you want the compiler checks to be. Something like that is the option we're going with. Other methods that are used just do it by machine learning, so the tool learns from the individual company or the individual office and gets a little better over time. Those are a lot more vague, a lot more difficult to quantify, but that's another option: over time, it conforms to the user's ability. The tricky part there is that you want it to conform to the good requirement writers' ability, not to the ones that aren't as good.
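A hypothetical sketch of the compiler-flag analogy: a stringency setting that enables or disables classes of checks by project criticality and development stage. The levels, check names, and mapping are invented for illustration.

```python
# Hypothetical sketch: stringency levels gating which requirement checks
# run, like -Wall / -Wpedantic in a compiler. Levels and check names are
# invented for illustration.
from enum import IntEnum

class Stringency(IntEnum):
    RELAXED = 1    # early drafts, low-criticality systems
    STANDARD = 2   # typical development
    PEDANTIC = 3   # safety-critical, near-baseline reviews

# Each check declares the minimum stringency at which it is enforced.
CHECKS = {
    "imperative_present": Stringency.RELAXED,
    "vague_words": Stringency.STANDARD,
    "non_atomic": Stringency.STANDARD,
    "passive_voice": Stringency.PEDANTIC,
    "nonspecific_temporal": Stringency.PEDANTIC,
}

def enabled_checks(level: Stringency) -> list[str]:
    return [name for name, minimum in CHECKS.items() if level >= minimum]

print(enabled_checks(Stringency.RELAXED))   # just the basics, early on
print(enabled_checks(Stringency.PEDANTIC))  # everything, e.g. for an
                                            # emergency shutdown procedure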

Shpat (17:18):

So that actually gets me to another thing I'm curious about. In QVscribe, you rely on the latest and greatest in natural language processing to do all this. In addition to that, I know you're thinking about how to use machine learning in general in situations where the results have to be, well, where ambiguity is unacceptable. I guess I don't have an actual question there, but I'd love to hear what the challenges are, and what it would take to apply some of these things well in this domain.

Jordan Kyriakidis (17:58):

So we do use a fair bit of NLP. We use language models and some machine learning, and we do a fair bit of transfer learning: we take language models that were developed out in the wild on natural prose, and we put some training on top to make them conform to engineering specifications and technical requirements. But that's not all; we also sometimes use a rules-based approach. Really, we want to solve the problem, so we'll use whatever technology is available to solve it; we take a very pragmatic approach. A big problem today with machine learning, and neural nets in particular, is this aspect of explainability. That is something that, in our game, you absolutely have to have. You can't just say that something is wrong; you also have to say why it's wrong, and how you came up with that decision. Neural nets right now, it's getting better, but they're not quite there yet.

Jordan Kyriakidis (19:02):

Not at the level where you can say: this requirement is wrong because of X, Y, and Z, and here's how you can write it better. Oftentimes neural nets will tell you that something is wrong, and they could be correct, they could be almost magically correct. But if I say "you have an issue in your house" and I'm not going to tell you why, you'd be pretty justified in saying: what the hell, man, who are you? I'm not going to listen to you; you can't even tell me why. That's an unacceptable response.
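One plausible shape of the transfer-learning step Jordan describes: start from a pretrained language model and fine-tune a classification head on labeled requirements. This is a generic sketch using the Hugging Face transformers API, not QRA's actual pipeline; the model choice, the labels, and the toy data are all assumptions.

```python
# Hypothetical sketch: fine-tune a pretrained language model to classify
# requirements (e.g. acceptable vs. problematic). Generic transfer
# learning, not QRA's actual pipeline; toy data for illustration.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0 = problematic, 1 = acceptable

texts = ["The system shall log every failed login attempt.",
         "The UI should feel responsive and generally be adequate."]
labels = [1, 0]

class RequirementDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="reqcheck", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=RequirementDataset(texts, labels),
)
trainer.train()  # adapts the general-purpose model to requirements text
```

Note that a classifier like this only says "problematic or not"; it does not, by itself, explain why, which is exactly the explainability gap discussed above.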

Shpat (19:33):

That's how I talk to my doctors.

Jordan Kyriakidis (19:35):

Yeah, we could do a whole other show on doctors; I've got a few things to say about that. Yeah, you don't want just correlation, you want causation, right?

Joey (19:45):

Well, there are probably some things you could discover where someone would look at them and wouldn't need them explained so much. If you say "this temporal property is nonspecific, make it more specific", the why there maybe isn't so important. But I assume there are other things where someone might get a recommendation and say: well, I don't necessarily agree with that, or I don't understand why that would be the right thing to do. And that's where you really run into these corners and need to explain what the tool is suggesting, right?

Jordan Kyriakidis (20:14):

That's right. So when we use machine learning and AI techniques that depend on neural nets, or on an underlying technology that is not explainable, we have to be very judicious about where we apply them. Typically the application should have a feel that's kind of like a recommendation engine: here are some selections; you don't have to pick any of them if you don't want to, but here are some things we came up with, and if you like one better, use it. Something like that you don't really need to explain. It's just a suggestion.

Joey (20:54):

I see. So recommending alternatives gets you out of having to explain the decision, basically, because it gives someone the option to say: well, yeah, that's better. They get to make the judgment call on their own, rather than doing something they don't understand. It lets the person take responsibility for making the decision, basically.

Jordan Kyriakidis (21:15):

That's right. The way we apply the technology, we always have in mind that we need the human in the loop, and we want the human in the loop. It's not something that happens on its own, automatically, behind the scenes, where you just get magical answers that are always correct. That's not how it works; the human needs to be in the loop. But what you want to do is eliminate the drudgery the poor engineer has to go through in order to do their work. You want to elevate them to make the important decisions, as opposed to the mundane ones. What we see around us a lot is high-value engineers doing very low-value work. We have review meetings, sometimes 20 or 30 engineers over multiple days, doing a big, giant requirements and design review, and they spend all the time talking about the syntax: what does this requirement actually mean, word by word? They end up wordsmithing the requirement. That's a huge waste of resources when they ought to be asking whether it's the right requirement, having a more strategic, higher-level conversation about what it is they're building, whether they're doing it the right way, and whether it's the right thing.

Joey (22:29):

So maybe this is a bit of a tangent, but we think about this in the world of code as well, where the value of having those meetings is that everybody gets on the same page. And it sounds like the argument you'd want to make is that using a tool like QRA's is going to help your company get on the same page across the board, and use the same kind of language across the board as a company, but without requiring so much time hashing over the same things over and over. Is that something you've basically been able to witness by applying your tool?

Jordan Kyriakidis (23:04):

Yeah, that's a huge use case for the tool, exactly that. Some of our top customers, the way they get the most benefit from QVscribe, is to embed it into the process and say: you can't bring these sets of requirements to a review meeting until you meet this minimum threshold score. We score each individual requirement, and then we score the container, the document or the module or what have you, and it has to receive a certain score before you can even bring it to review. Because if you can't get that score, it means your requirements are still not in a shape where we can all understand what it is we're trying to say, so that we can decide whether it's the right thing to do.
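A small, hypothetical sketch of that gating policy: aggregate per-requirement scores into a document score and refuse review below a threshold. The aggregation rule and the threshold are invented; they just illustrate embedding the score into the process.

```python
# Hypothetical sketch: gate a review meeting on a minimum document score.
# The aggregation (mean of per-requirement scores) and the threshold are
# invented for illustration.
def document_score(requirement_scores: list[int]) -> float:
    return sum(requirement_scores) / len(requirement_scores)

def ready_for_review(requirement_scores: list[int],
                     threshold: float = 4.0) -> bool:
    return document_score(requirement_scores) >= threshold

scores = [5, 4, 4, 2, 5]  # per-requirement scores on a 1-5 scale
if ready_for_review(scores):
    print("Document meets the threshold; schedule the review.")
else:
    print(f"Score {document_score(scores):.1f} is below threshold; "
          "rework the low-scoring requirements first.")
```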

Joey (23:47):

Yeah. And it seems kind of crazy to say it this way, but to some extent there is no absolute good or bad in requirements, and there are aspects of code that see this as well. We're communicating person to person, right? If I have what I view as the perfect communication, and I deliver it to you, and it still doesn't make sense to you, that hasn't been a successful communication. So there is no absolute truth, but bringing everybody onto the same page helps, because presumably, if we're speaking the same language, if you're expecting what I'm delivering, then things are going to match up better. And that's what good communication looks like. It requires everybody at the company. If we see companies struggling to write good requirements, the answer isn't to point at the people and say "do better requirements"; it's to work together to make sure that what's being delivered matches up with expectations, basically.

Jordan Kyriakidis (24:43):

Yeah, I think that's exactly right. It is really a communication tool, and that's especially true for requirements, because most of them are written in just text. And there's good and bad in that. The bad part is that text is inherently ambiguous; it's almost the complete opposite of a mathematical statement. Although I often try to tell people that just because it's math doesn't mean it's very precise; it can definitely be fuzzy as well. But code, for example: when you write a piece of code that's going to execute, the syntax is very formal, and it's very well defined what's going to happen, assuming the compiler works, blah, blah, blah. English isn't like that; natural language isn't like that.

Jordan Kyriakidis (25:32):

So that's the negative. Then you might say: okay, what we should do is not have requirements, but have code instead of requirements. Well, now you've really decreased the expressive capability. One of the advantages of writing requirements in natural language is that you have a level of expression where anyone can contribute; anyone can actually express themselves and more or less say what they actually mean. So it's kind of a double-edged sword: natural language has great benefits, and it has negative aspects as well, and it's the same property that's both positive and negative. One of the movements we see happening now that we think is a good thing is that people are trying to model the requirements, not at the level of building a functional implementation model, but modeling the behavior of the requirements.

Jordan Kyriakidis (26:34):

I think that's generally a good direction to go, but it's also difficult: you need to build a tool that people will actually use. One example I often use in our discussions is UML diagrams. I would say most people realize that if you build a UML diagram, you'll do a better job on the system in the end than if you don't. Everyone kind of realizes that, but hardly anybody does it. So why don't they do it? Well, it's a lot of work, you have to think about it in a different way, it's difficult, there are time pressures, there's this and there's that. So people say: we're just going to write code; I did something like this last year, so I know what I'm doing; I'll just go ahead and do it. So one thing we try very hard to do is not be preachy to our customers, because they have issues they need to deal with, and you have to give them something they can actually use. A complete solution that nobody uses is kind of worthless, and a solution that takes you only part of the way there, but that everyone uses, is actually a good thing. So you should do that.

Shpat (27:34):

Right. In fact, we often talk to people who are building tools for the developer, or in this case for the requirements writer; it's all in the vein of people who are building systems. Some of the conversations we have are about how you get embedded: how do you build something that gets embedded in that workflow, that actually gets used and isn't just a nuisance or in the way? I'm assuming you're also thinking about this, given that this product is going to be embedded in people's workflows like that. I wonder if you have any thoughts on that.

Jordan Kyriakidis (28:07):

We do. We have lots of thoughts on that; we talk about it a lot. In fact, it's really two problems you're solving when you provide a solution whose intent is for people to actually use it, as opposed to, say, writing a research paper or something like that. That's just a different objective; it's not that it's bad, it's just different. If you want to build a tool that people actually use, then one, you have to solve the problem. That one's pretty clear: they have a problem with poor requirements, and you want to improve them, so you have to give them a tool that actually does what you advertise. But you also have to solve for the path.

Jordan Kyriakidis (28:47):

And by that, I mean you have to solve it in a way that is consumable by them, that fits into their workflow. You don't need special training; you don't need a room full of PhDs to use it. Especially for a company like us: we're a product company. We know what we want to do; we're not a consulting company. We basically want to sell them a product, and then they go off and use the product, so they have to be able to do it on their own. Even if you solve the problem, if it puts up too many barriers, if it doesn't integrate with the tools they have now, so they have to go somewhere else to do what you want them to do before they come back...

Jordan Kyriakidis (29:24):

That's bad. Or if it requires too high a level of technical sophistication, or an unusual kind of technical sophistication that they don't have in house, then they just won't use it. Or if it's just a pain to use, if it's not very pleasant to use, then they just won't use it. You can't just tell people to eat their spinach and eat their vegetables; you have to give them a painkiller sometimes as well.

Shpat (29:52):

Well, I wonder what that looks like tangibly. When we were talking to MuseDev, a Galois spin-out that got acquired by Sonatype, they worked on this really cool analyzer that was embedded in developers' workflows to find very critical bugs that are hard for humans to find. For them, and I think a lot of analysis tools are doing this, it was: let's provide feedback on specific pull requests, versus "you turn this thing on and here's this whole world of things". They provided that too, if you really wanted to dig into it, but if you lead with that, people get overwhelmed; as you said, they have a job, they need to move forward. So that was one concrete thing. What are the concrete things, when it comes to requirements writing, for making it more embeddable?

Jordan Kyriakidis (30:43):

It's actually quite similar to that. Right now we try to embed ourselves into their workflow, into the tools they use. Most requirements are written in a small number of tools, maybe five to ten of them, not 2,000. By far the most common are Word and Excel; most requirements are written in Word and Excel. Others are these big requirements database systems: Siemens makes one, IBM makes a product called DOORS, and Jama Connect is another one that's used. These are essentially databases that control traceability of the requirements, control change, and basically hold all the requirements and help you manage them. They don't tell you whether a requirement is good or not, but they hold the whole database of requirements. So what we've decided to do as a company is embed right inside these tools. If you're writing the requirement in Word, it's a panel right beside your Word document.
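To illustrate the Word-and-Excel point, here is a hypothetical sketch that batch-analyzes a column of requirements from an Excel workbook using openpyxl and the toy analyze() checker sketched earlier. The file name, sheet layout, and columns are assumptions; a real integration would live inside the editor as a panel, as Jordan describes, not as an external script.

```python
# Hypothetical sketch: score a column of requirements in an Excel
# workbook. File name and layout are invented; reuses the toy analyze()
# checker sketched earlier in this transcript.
from openpyxl import load_workbook

workbook = load_workbook("requirements.xlsx")
sheet = workbook.active

# Assume requirements in column A; write scores and problems to B and C.
for row in range(2, sheet.max_row + 1):  # row 1 is a header
    text = sheet.cell(row=row, column=1).value
    if not text:
        continue
    score, problems = analyze(str(text))  # toy checker from earlier sketch
    sheet.cell(row=row, column=2).value = score
    sheet.cell(row=row, column=3).value = "; ".join(problems)

workbook.save("requirements_scored.xlsx")
```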

Shpat (31:49):

So you're building what might be the first developer plugin for Microsoft Word.

Jordan Kyriakidis (31:55):

Yeah, it's the developer add-in for Word and Excel. Excel already got Visual Basic, so poor Word is kind of left out. But yeah, you want to be right there.

Joey (32:07):

I have to ask the obvious question: are you going to hide an Easter egg where you can get Clippy to appear next to your suggestions?

Jordan Kyriakidis (32:14):

Clippy will not appear. I will not comment on Easter eggs.

Joey (32:22):

All right.

Jordan Kyriakidis (32:23):

Fair enough. But Clippy's gonna stay dead.

Joey (32:25):

All right. At least your company will not be the one responsible for reviving Clippy.

Jordan Kyriakidis (32:31):

Yeah, we will not be responsible for that at all.

Joey (32:36):

I'm sure one or two of your users just got their hopes up and had them dashed, but the rest are breathing a sigh of relief now.

Jordan Kyriakidis (32:42):

Well, if you put my email in the show notes, then maybe they can reach out, and we can have a sort of educational discussion afterwards.

Joey (32:52):

This is a pattern that comes up, I think, every time we've talked to someone who's building a product: they mention how much work they need to do on integration to make their product usable. It's a topic that's come up time and time again, and those integrations are different for everybody, of course, so it's a massive problem. Muse, whom Shpat mentioned earlier, had a lot of work to do integrating with the various CI services, the various build systems, the various containers, all sorts of work there. Akita, from our first episode, is dealing with a multitude of integrations in the microservices world to make what they're doing talk to everything. I don't know if anybody ever knows how big a problem that's going to be starting out, but that seems to be where everybody spends a lot of their work in the technology-building world.

Jordan Kyriakidis (33:45):

It is absolutely something that, when we first started out, I didn't even know was going to be a problem. But as I got into it (I call myself an accidental entrepreneur, because I kind of fell into it), I came to agree a hundred percent. In fact, I think solving for the path, as I call it, is as difficult as the original problem you're trying to solve. You really have to look at both as equal partners, and in our development cycles we're always weighing the two: which one are we going to focus on improving this release cycle? It's very, very important, because most of the companies we sell to are big engineering companies. They're very busy; they have work to do and jobs to do, and they want to get on with it, and we want to help them. So it's no good if you help them on one side but, oh, by the way, there are terrible side effects: you have to totally change your workflow. They just won't do it, whether it's the right thing to do or not; it just won't happen. So sometimes it doesn't matter whether it's the right thing. We build tools for people to use them, so they have to be usable.

Joey (34:57):

Well, it's the right thing if people use it, right? And that's what you have to realize, I think, to build a product: your technical dream of what is right might not always line up with what people want to use. That's something we see from the people who seem to be sticking with this kind of problem: they're willing to accept the challenge of helping people use the technology, rather than making the technology the perfect dream. It sounds like that's something you've embraced as well.

Jordan Kyriakidis (35:24):

Yeah, it is. The first product we built was a lot more technical to use, and we realized that if we wanted to stick with that particular, very technical product, we'd have to have a whole consulting wing, a whole services wing, to help people, hold their hands, and bring them along. And that's just not the company we wanted to build; that's not the solution we wanted to offer. We wanted to offer a product solution, so usability becomes a lot more important.

Shpat (35:52):

To change topics a little bit: earlier you were talking about how, if we solve these problems to the extent that we can, then the focus of the engineers, the systems engineers, the programmers, all the folks working on this stuff, can be more about "is this going the right way, and are we building the right thing?", versus being more in the weeds. You also said that with natural language, everyone can express themselves, especially if there are tools to put guardrails around it. Given that, what is your take on where software development is going, and maybe on what the role of programmers will be in the future?

Jordan Kyriakidis (36:39):

Yeah. I think there's a pretty clear trend emerging, for those who care to look, in the direction things are going. And let me extend that and say not just software development, but systems engineering in general. One is that a lot of coding and a lot of manufacturing is just going to be automated away and will become almost a commodity. You're already seeing lots of automation happening now, and that's going to continue; not all coding, but a lot of coding will be like that. You're seeing some movement there already: many modeling tools offer autocode, where you attach behavior to a block and the product just spits out the code, and generally it's correct.

Jordan Kyriakidis (37:24):

It's ugly to look at, but it's not meant to be looked at; you look at the model. So that's going to continue. I would say three big trends we see in that direction are: one, this whole idea of adaptable algorithms, which sits under the AI umbrella in general; then generative design; and additive manufacturing. These things are all in that vein, going more toward automation. That means the human input, and the value humans can contribute to development, moves much earlier on, and you're seeing that. It's a big trend, it's happening over time, and it's going to keep going this way: earlier in the process, into the design phase, into the requirements stage. And that is where we're going to see the human ingenuity and the human creativity: trying to understand what we should build...

Jordan Kyriakidis (38:14):

How can we solve the problem we're facing, and how do we know that what we're going to build is actually going to do what it's supposed to do, and not do what it's not supposed to do? That's where this idea of having the human in the loop comes in, and these tools are going to be almost like an aid in creation. Systems are becoming so complex that almost no one person can ingest them all at once. I can't remember who it was, someone many years ago, who talked about why software development is so hard, and the reason is that you have to think at so many different scales, right down from where the colon should be, all the way up to what the system is going to be.

Jordan Kyriakidis (38:59):

I can't remember the guy's name now; it escapes me. But that's true, and it's become even more so now. The complexity is going to be beyond any one human's ability to comprehend the whole thing. So I see product development and systems development as being almost an act of co-creation, between the human expressing their intent about what it is they want to build, and the systems, of which eventually QVscribe will be a part, that help them: "Do you mean this? If you do this, then here are the consequences." It's a back-and-forth where these properties emerge; the system emerges out of this act of co-creation.

Shpat (39:41):

I will say it's a fascinating vision, and I will believe it when I see it. Although, you know, we do work a lot on code generation, making sure that some of the complexity is hidden but the result is still correct. So it certainly makes a lot of sense.

Joey (39:56):

And yeah, it's worth saying: in this future, when we write requirements and specifications, we also rely on some degree of judgment being applied further down the process. So there's this really frightening time, I think, along the path to what you've described, where we're starting to see automation, and it gets things right sometimes, but it exercises almost no judgment. Things can go horribly off the rails if you don't really nail it at the requirements, at the very top level of the design. So there's going to be a point where, if that transition is going to happen, and it sounds pretty desirable, right, we're taking a lot of the more mechanical work off of people and having machines do it, which is usually something to strive for, because then there's more engaging work that people can do. But if we're going to see that happen, we're going to have to keep getting better and better on the requirements side, or the automation underneath is going to have nothing to do, or it's not going to be able to accomplish its task successfully.

Jordan Kyriakidis (40:59):

Yeah, I strongly agree with that. And it's not just requirements; it's all of early-stage development at the beginning, of which requirements may be one of the earliest stages. And I think the problems the world is facing that have a technical solution are becoming more and more complex. It's going to tax our ability to even build something that can actually solve the problems that exist in the world.

Shpat (41:29):

Hey, Jordan, you're a founding member of the Government of Canada's Advisory Council on AI. Is that right?

Jordan Kyriakidis (41:35):

Yeah, correct.

Shpat (41:37):

I was curious: what's Canada up to?

Jordan Kyriakidis (41:40):

So we do have a lot of the fathers of AI and neural nets in Canada, so we have a big history of AI, and now quantum computing too; there's a lot of work being done in quantum computing, which is very closely affiliated with AI as well. My background is in quantum theory, by the way. Really, what the Government of Canada realized is: we need input on the policies we want to have. One question is how you help encourage commercial activity; what sorts of things can we do to encourage that to happen? At the same time, there's a lot of fear in the public about AI: are the robot overlords going to take over and destroy us all?

Jordan Kyriakidis (42:32):

And there's also, I would say, a lot of ignorance, in the literal sense: people just don't know. So communication is a lot of it. And there is very little, or not enough, I should say, education at universities about AI as a practice and as a career. Right now there are so many jobs that require AI experience that there's not enough supply, and that's going to get worse. So there are all these facets: some are ethical, some are technical, some are commercial. It was actually the Minister of Industry who said: I want an advisory council that can advise me on all these different aspects, and take the big picture, so that we don't inadvertently set policies that we think are the right ones but that set the country back.

Jordan Kyriakidis (43:23):

We wouldn't know it until five or six years later: oh, we just screwed ourselves with these policies. I was very impressed with the council; if you go on the website, you'll see the other people on it. It's a very impressive list. And I have to say, I was also impressed that the Government of Canada actually took it seriously: they really do try hard to listen to people and make a difference, to actually inform policy. It was refreshing, let me say, and surprising in a good way. Incidentally, explainability of AI was one of the important topics. I mentioned explainability for requirements, which I think makes sense, and I'm sure in the kind of work you guys do, explainability, the reasoning behind a system's decisions, is also very important. This may be a bit of a tangent, but you can imagine that if you're applying AI to policing or sentencing recommendations, explainability becomes a lot more important.

Jordan Kyriakidis (44:27):

You can't have a recommendation engine for sentencing, for example.

Joey (44:33):

Yeah. And, more immediately, self-driving cars have a lot of need to explain their decisions, as people go back and audit the decisions they have made or not made. So there's a lot of immediate demand for that kind of thing, for sure.

Jordan Kyriakidis (44:47):

Yeah. And self-driving cars are another tricky one, because they obviously need a great deal of trust from the public. Whether the trust is well founded or misplaced, trust needs to happen; otherwise nothing's going to get done. And the tricky thing is if you don't have a population that understands the trade-offs in technology at some level. They don't need a mathematical understanding, but a self-driving car is going to make mistakes that a human would never make, and it's also not going to make mistakes that humans often do make. So it's going to be a trade-off. Nothing is ever just yes or no, just good or bad. It's not even levels of gray; it's just complicated.

Shpat (45:39):

Again, this is definitely a tangent, but what does it mean to make a mistake? You're trying to avoid hitting somebody, and you swerve onto the side of the road, cross the yellow line. Is that a mistake? How do you tell a self-driving car that that's not one? And how do you then do that whole thing for the hundreds and hundreds of different scenarios you find yourself in when you're on the road?

Jordan Kyriakidis (46:05):

Yeah. And...

Shpat (46:07):

Good luck with that.

Jordan Kyriakidis (46:07):

Yeah. And not all these solutions are technical in nature. Every once in a while you hear a report of a self-driving car making a mistake: it doesn't see a bicycle, because there's a truck behind it and the bicycle blends in, so it thinks it's part of the truck, and it just hits them. That's an example of a mistake a human would never make; it's very clear they would never make that mistake. But humans fall asleep at the wheel all the time, and a computer will never actually...

Shpat (46:41):

Make that mistake, or drive drunk, all that.

Jordan Kyriakidis (46:42):

That's right. A computer will never make that mistake. So yeah, those are very difficult decisions. I'm glad I don't have to make them.

Shpat (46:53):

Well, Jordan, it's been really good to talk to you. Thanks for joining us today.

Jordan Kyriakidis (46:57):

Yeah. It was my pleasure. A lot of fun.

Joey (46:59):

Great.

Shpat (47:01):

It indeed was a lot of fun. This was another episode of Building Better Systems. We'll see you next time.