In this episode, we're joined by Eric Daimler, CEO & co-founder of Conexus AI, Inc., an MIT spin-out. We discuss the Conexus software platform, which is built on top of breakthroughs in the mathematics of category theory, and how it guarantees the integrity of universal data models. Eric shares real-world examples of applying this approach to various complex industries, such as transportation and logistics, avionics, and energy.
Listen to this episode wherever you listen to podcasts.
Eric Daimler: https://www.linkedin.com/in/ericdaimler/
Joey Dodds: https://www.linkedin.com/in/joey-dodds-4b462a41/
Rob Dockins: https://galois.com/team/robert-dockins/
Galois, Inc.: https://galois.com/
Contact us: email@example.com
[Joey] Welcome to another episode of the Building Better Systems podcast, where we chat with people in industry and academia who work on hard problems around building safer and more reliable software and hardware.
My name is Joey Dodds.
[Rob] And I'm Rob Dockins.
[Joey] Rob and I work at Galois. Today,
we're talking with Eric Daimler,
CEO and co-founder of Conexus,
a company that helps people ensure the
correctness of their data integrations.
In this episode,
we talk about what Conexus is trying
to achieve and give some examples of
challenges that their
customers are facing.
We also discuss how they've taken concepts from math and used those to ensure that their clients are combining their data in ways that are logically consistent with respect to their business rules. Let's start the episode.
[Intro] Designing, manufacturing, installing, and maintaining the highest speed electronic computer, the largest and most complex computers ever built.
[Joey] Thanks for joining us, Eric.
[Eric] Good to be here.
[Joey] So I wanted to start by just asking you
to sort of give us a high level overview
of what you and your company are really
interested in doing and what you're
really trying to do.
[Eric] Sure, sure.
So Conexus is a software
as a service company,
developing a product based
on a discovery in math,
which is probably the first and only company you're gonna hear about this year that has done that. I mean, MIT says it's the first spin-out of MIT's math department in the history of the Institute. So Conexus is, in that way, literally unique in developing software based on math.
The domain of math is category theory, and we are applying it to databases. So,
everybody's kind of gotten the memo, we'd say, about data being the new oil and all that. What's less known is that the data sources are also increasing quadratically, or exponentially, and therefore the data relationships, which is where the tough part is, are expanding at an unfathomably large rate.
Bringing that knowledge together, which is represented in those data relationships, and capturing it throughout an enterprise in a way that guarantees the integrity of that meaning, is what Conexus does. So I can give you an example.
So we'll take a very sophisticated company: Uber, who had a greenfield. They had an open plan about how to develop their IT infrastructure, an effectively infinite balance sheet to fund it, and some very, very smart people. That's all true.
What we found is that, like every other company, they focused on the business, not on optimizing for an ideal IT infrastructure. So Uber then had this large, complex system around the world that was responsible to business owners by jurisdiction or by city.
So Uber has a problem of, uh, business intelligence questions: what is the driver supply given a championship, or what is the privacy lattice for Massachusetts versus France, for driver's licenses versus license plates? Those sort of everyday, day-to-day business questions were really difficult for them to execute on when they had 300,000 databases.
So this is a very rich company with very smart people with a greenfield opportunity, but still had a non-optimal IT infrastructure.
They then looked at how to solve this, at what sort of commercial solutions existed to try to integrate the data or bring together these data models. They had Stanford in their backyard, and explored the landscape of solutions to solve this problem.
They, like many of Conexus's customers, found the solution lies deeper than the computer science, which is what my co-founders' PhDs and mine are in; it lies in the math.
So they looked at this domain
of math called category theory,
which is a type of meta math, and that
meta math helps solve the problem.
So we happen to be 40
miles north of them. And
Conexus is the recognized leader in the
software expression of category theory.
So we worked with Uber over a number
of months to develop a solution
to bring together, in one, we'll say, universal data warehouse, a universal data model, those 300,000 databases, without moving the bits, right? We're not a data lake or any of those sorts of funny names. We worked together with Uber to bring all that together.
So Uber then could answer these
ordinary business questions.
To hear them tell it, they then save on the order of 10-plus million USD a year with the new alacrity with which they're able to answer these ordinary business questions and respect the privacy lattice. So that's what Conexus has done for our customers.
[Joey] So just to try to restate what you're saying: the problem you're really trying to solve is, all the data is already stored; you're not dealing with how they're getting the data or where the data is, but you're trying to make the connection between what people want to know from the data and the data itself. That's kind of where you all are living.
[Eric] We are guaranteeing the
meaning of the data is
preserved regardless of the
context under which it's queried.
So we guarantee the semantics
as the data is transformed.
[Rob] I know just enough category theory to be dangerous. So, for someone who's in my boots, what's the interesting connection here between this business intelligence problem that you're solving and the meta-math, you know, the ivory tower of mathematics?
[Eric] So category theory is not new. It's been around for a while. It was invented, or discovered, you know, given the nature of math, to solve a problem in math: guaranteeing that the translations of problems between domains, you know, between algebra and geometry, don't, say, have four become five. You have to have it be exact.
So we can describe a circle, for example, in Cartesian coordinates, X, Y coordinates: x squared plus y squared equals one perfectly describes a circle. You know, that is a representation in geometry and a representation in algebra that needs to be maintained regardless of whether you're gonna use an abstract math,
like type theory or set theory or graph theory, which is where my academic research was. The discovery was that categories, categorical algebra, category theory, could be applied to databases.
This concept could be applied to
databases so that the integrity
guaranteed by the logic
of math would translate
between different domains.
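As a toy illustration of that exactness (this is just Python for intuition, not anything from Conexus): points generated from the geometric description of the unit circle should all satisfy the algebraic description, and the check holds up to floating-point tolerance.

```python
import math

# Geometric representation: a point on the unit circle, parametrized by angle t.
def circle_point(t):
    return (math.cos(t), math.sin(t))

# Algebraic representation: the predicate x^2 + y^2 = 1.
def on_unit_circle(x, y, tol=1e-9):
    return abs(x * x + y * y - 1.0) < tol

# The translation between the two representations is exact:
# every geometrically generated point satisfies the algebraic equation.
assert all(on_unit_circle(*circle_point(k * 0.1)) for k in range(100))
```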
That's the epiphany. You know, the current situation that Conexus has run into with some clients is that you might use legacy software. So we work with a manufacturer of airplanes that had these old IBM AS/400s running COBOL, or COBOL running on AS/400s.
And for these old systems you'd have migration technologies, to be sure, but those would have to rely on testing. In many modern settings the testing is fine. You know, you can use a Monte Carlo simulation, and if you get a digital ad served that's not perfect, it's fine.
But if you're flying a 100-million-dollar jet plane, that is a high consequence environment with a pilot or two whose lives are at stake; you don't wanna have errors. You can't afford errors. That's where this math is brought to bear.
It proves the integration
of the underlying logic
so that you can depend
upon it with your life.
That's what category theory provides.
It brings together that logic in a provable way.
So in this particular case, this airplane manufacturer has formal methods deployed, the kind we've all learned in computer science school, that help prove a subsystem of an airplane. And we would learn, when we're doing code, that we need to prove an algorithm is complete, or can prove a set of algorithms are complete.
We can do that, and that's fine.
And so Boeing or Airbus or Dassault or Bombardier,
they can do that for
these individual systems,
but there is literally no
method by which any of those
manufacturers or any other company
can bring together those systems
in an equally robust way.
There's no provable system
for integrating those
multiple validated formal methods.
Those then get resolved to the
extent that they are resolved,
just through iterations of
simulations, iterations of testing,
and to be fair, some rather robust
physical testing. But that's,
that's what we see in the world. You know, I was just talking to a NASA engineer a couple weeks ago in Houston. And this guy was saying that, because they don't use category theory at the foundations of some of their rockets yet (but they're looking to), they don't know what's going on inside of their systems.
The consequences are both that
they are left with some level
of uncertainty that they don't
like for obvious reasons,
but it's also resulting
in an overengineering
of their systems. So they will put
extra thickness, extra connectors,
because they just don't know about their
system. That obviously adds weight.
It adds complexity, it
adds cost, it adds time,
because they just don't know about these systems, despite everything that we have learned about formal methods in subsystems.
So that's what Conexus provides. That's the sort of solution that Conexus brings to various industries, from transportation and logistics to avionics.
[Rob] So let me see if I can rephrase that a little bit from my biases. I come from a formal methods background, so, you know, that's the native language that I speak. It sounds to me like what you're saying is that you've developed some logic slash mathematical language into which you can bring these other artifacts that other people have generated; you can bring them together and have them sort of speak the same language, in some way. You can integrate these different formal and informal methods. Have I got the right idea?
[Eric] We are complementary to formal methods. I mean, I come from formal methods as well; that's what I learned in school. Now, this is complementary to formal methods. What this does in complementing formal methods is it provides the optimal path for these relations. Formal methods really doesn't have anything to say about what the ideal path is among trillions of possible relationships. There is a computationally infeasible problem that would have to be solved to try to do what is done here without this abstract math. You know, it's done in
quantum theory in quantum computers,
it's done in smart contracts on the
blockchain. You know, this is done today.
It's just done in different contexts.
Those contexts might be a little sexier.
So, you know, quantum computers: we would not understand the output of quantum computers if not for category theory being applied to quantum compilers. So that's category theory in action today.
And similarly in smart contracts: the sophistication of smart contracts wouldn't be enabled if not for category theory. And type theory is kind of related to this.
It's just that it's a lot more fun to talk about qubits and the physics of quantum computers.
fun to talk about smart contracts.
It's just that the underlying
language of category theory,
type theory gets a little
bit lost in those narratives,
but it's already applied to other domains.
[Joey] So I think I'm still struggling a little to understand, I guess, what exactly you're combining. Because I think I heard we're kind of combining ways of thinking about data, and maybe a semantics for data. I think I heard we're combining people's understanding of subsystems, maybe represented as data, I'm not sure. Can you try to clarify a little, I guess: what exactly are we composing and combining in the use that you all are applying?
[Eric] Whatever companies or organizations
or use cases demand of the
composition is what will be composed.
We have nothing to say on
what anyone wants to compose.
Where Conexus works,
where category theory is required,
is if you want to prove
the robustness of that
composition. So in Uber's particular case,
they could already in some loose
way, bring together their 300,000
databases. It's just that they can't
prove that the result is accurate.
And that's similar to NASA,
similar to the avionics company.
They can do a lot of different things,
but it has become a sort of data-model Tower of Babel. You know, the composition is in the rules, is the short answer to your question. But how those are composed is up to the SME; it's not up to Conexus. How the composition happens in a larger system is up to the demands of the system being constructed. It's not up to Conexus. What a Conexus instance proves is the guaranteed integrity of that composition of models, or composition of rules.
[Joey] So rather than focusing on what comes into this software that's performing the act of composition, you're focusing on that composition itself, and checking that the composition is happening correctly.
[Eric] We prove that the integrity
of the semantics is preserved.
[Joey] Is it possible to do that without
understanding what's coming in? Can
I know that I'm composing data
correctly without sort of intimately
understanding the data itself?
[Eric] So it's not about the data, it's about data rules, data models. There is a logical data model. What Conexus provides is a guaranteed integration of those models, a guaranteed integration of the rules, so that there exists then a universal warehouse, a universal data model, a universal knowledge graph.
[Joey] So you don't have to worry about the specifics of the data itself, but you do have a representation of, I guess, the shape of the data to some extent, or some expectations about the data. Are those coming from you? Or does a company like Uber, as you mentioned, have those descriptions available in general?
[Eric] Yeah, we wouldn't define
anything about the rules,
anything about the
characteristics of the rules.
It's definitely part of the process that you have to do some degree of entity resolution and disambiguation, but that's not any sort of secret sauce for Conexus. That's just part of the flow chart of work in any of these exercises. You know,
foundational for Conexus is using a chase engine to bring together all the possible relationships, in the exercise that one would go through doing formal methods, and look for the optimal path that then defines the totality of the universal warehouse.
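As a rough intuition for what a chase-style engine does (a minimal sketch, not Conexus's actual algorithm; the fact and rule encoding here is invented for illustration): forward-chain the rules over the known facts until nothing new can be derived, then report any attribute that the rules have forced to two different values.

```python
def chase(facts, rules):
    """Naive chase: `facts` is a set of (attribute, value) pairs; `rules` is
    a list of (premises, conclusion) pairs, where `premises` is a frozenset
    of facts that, if all present, force `conclusion` to hold as well."""
    facts = set(facts)
    changed = True
    while changed:  # repeat until fixpoint: no rule adds anything new
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    # A logical contradiction: the same attribute forced to two values.
    seen, conflicts = {}, set()
    for attr, value in facts:
        if attr in seen and seen[attr] != value:
            conflicts.add(attr)
        seen.setdefault(attr, value)
    return facts, conflicts

# Two rules agree on "b" but force different values for "c":
rules = [
    (frozenset({("a", 1)}), ("b", 2)),
    (frozenset({("b", 2)}), ("c", 3)),
    (frozenset({("a", 1)}), ("c", 4)),
]
derived, conflicts = chase({("a", 1)}, rules)
assert conflicts == {"c"}  # the contradiction is surfaced, not silently merged
```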
Another way of thinking about it is
this is a sort of deductive database,
and this deductive database then
allows for a database of viewpoints,
or a database of perspectives.
So instead of a database of just data in a table, it's how the data is being used for a particular purpose, or how these data models are then represented in one world view without requiring consensus, which is just a complete mind shift from how most people have to operate, in requiring consensus. You know,
blockchains work that way. They require consensus. I mean, many of these require the energy consumption of the entire country of Ecuador. It's just resource-inefficient because it requires consensus. This level of math, which Conexus is expressing for our roughly Fortune 500 clients, doesn't require consensus, but nonetheless gets to this universal data model that then can be queried any which way a user wants.
[Rob] So I wonder if I could dial in a little bit more into this. You've mentioned this notion of rules a couple of times, and I feel like I'm still struggling to have a good mental model of what you mean by that. To me, when I talk about rules and databases, I go back to my old database 201 course, or whatever, and it talks about data invariants that you have to satisfy. I get the feeling you're talking about something else, but I'm not quite sure what yet. Are those views, or are they invariants on the data, or are they ways that you combine data?
[Eric] A helpful way to think about that maybe is the logical data model in the database. So whatever your logical data model is, however it's represented, whether it's represented in Excel or some other way, it's that logical data model representation that then is put into the semantics of a Conexus instance, which is essentially like SQL. And it's just that representation within a Conexus instance's software that then is able to deduce what the optimal connection is between those models, or those relationships. Call them rules, or call 'em business requirements if that's better.
[Joey] And so you've called it, I think a couple of times you've said, a Conexus instance. So you're going in, you have a piece of software -- is this usually installed in your customer's cloud, or is this something that you all run locally and they access as a service?
[Eric] Yeah, we are a software as a service company in that this is just a license. So it's cloud native, but it can certainly also be run on prem. You know, with our financial services clients and with some of the clients in governments, they want it to be run on prem, which is also fine.
[Joey] And so, I guess,
as a user of one of these instances,
am I asking it questions directly or is
it kind of keeping an eye on things and
telling me whether things went well or
not? How do I interact with this thing?
[Eric] So you're querying this universal data warehouse, call it. And what it is gonna tell you in response is if there are any logical contradictions. So I can give you a use case where that might appear. You know, we work with this big engineering firm, and this engineering firm happens to do oil and gas exploration.
So their workflow goes like this: they have exploration, which then says, "Hey, can you do an explore? Can you do a well drill here? We think there are some resources to extract." So the well people, they look at the situation and find that the ground is a little softer than normal and have to modify part of their well. Then it goes down to approval, and then fabrication, and then distribution.
And then it gets down to the
people that actually dig the hole.
The people that dig the hole also had to modify their approach, because the ground was a little softer than they had expected. Well, that approach then broke the flange that was modified back in the first sequence.
And that's a problem because then
the flange falls in the hole.
They have to fill it up with
cement. They have to move their rig.
And apparently this thing costs a lot of money. The story we hear is $50 million, which is just a mind-boggling amount. And, you know, nobody wants that. We've heard amounts of about a hundred thousand dollars; we even heard of a half-a-billion-dollar error like this. No lives are lost, but a lot of money and time. You know, that's a bad day.
So these companies spend a lot of time trying to prevent these things by iterating through data models. Before actually fabricating the flange, before even getting approval for the flange, they'll send that model down the flow chart, and then it'll iterate back up and down the flow chart. And this happened in this particular instance as well.
But if you rely on just doing Monte Carlo simulations and test-and-fail or test-and-pass, you're gonna have these errors some number of times. One error we had heard of was a mistranslation of a footnote from Mandarin to Spanish. Actually, it was the leaving out of a footnote: the footnote just didn't get translated.
These are things to be avoided. And as for data relationships, that's how we started out: we're saying data's growing, data sources are growing quadratically, and data relationships are just unfathomably large. So as data relationships get to be so big, or, if you just think of your ordinary database, as your row count gets to be so large, you have to think in abstractions. Because you just can't be thinking of trillions of data points the same way you had before. And, you know, the column names kind of don't quite speak to all the ways in which your data can be represented. So this particular client of Conexus, they've come to the realization that the different approach represented by the math of category theory is the requirement that we, bottom-up, foundationally agree on the logical data model for each of the many different contexts. So one
engineer has a logical data model; another engineer has a logical data model. So there's the logical data model here for the person that designs the well, then another logical data model for the person that's gonna drill the hole. Those get combined into this universal data warehouse whenever any of us would then add or subtract from our own data model. What we then just find is that, as it gets integrated, logical contradictions get exposed. That's it, it's really as simple as that. It gets uploaded, you know, it's definitively proved, and then you get to see whether there's a contradiction or whether the integrity holds.
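The shape of that workflow can be sketched in a few lines (a toy model in Python; the attribute names and the two engineers' models are invented for illustration): each context contributes its own logical data model, and integrating them either succeeds or surfaces exactly where the contexts contradict each other.

```python
def integrate(*models):
    """Merge per-context logical data models, where each model maps an
    attribute to the single value its rules require. Returns the merged
    model plus the attributes on which the contexts logically contradict."""
    merged, contradictions = {}, {}
    for model in models:
        for attr, value in model.items():
            if attr in merged and merged[attr] != value:
                contradictions.setdefault(attr, set()).update({merged[attr], value})
            else:
                merged[attr] = value
    return merged, contradictions

# The well designer's model vs. the drilling team's model: the soft ground
# forced the drilling team to assume a different flange size.
well_design = {"flange_diameter_cm": 30, "casing": "steel"}
drill_plan = {"flange_diameter_cm": 32, "rig": "rotary"}

merged, contradictions = integrate(well_design, drill_plan)
assert contradictions == {"flange_diameter_cm": {30, 32}}  # exposed before fabrication
```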
[Rob] Yeah. So it sounds to me like,
if I've understood right,
the main goal here is to sort
of encode the important rules
that the business cares about in such a
way that you can monitor and make sure
that they're all in some
sense, consistent, you know,
like the guy drilling the hole,
isn't making a different assumption than the guy building the flange that's supposed to fit on it. And
by bringing those together,
you can expose when those things don't
match up so that they can be fixed before
you spend 50 million
building the wrong flange.
[Eric] You got it. Exactly. That's it. That's exactly it. And I'll even go further with how you perfectly represented it, which is: this isn't magic, and we don't read anybody's mind.
So the important part you said at the beginning is worth repeating, which is that you're encoding, in logic, the rules you care about. Well, you know, yes, it has to be really all the rules. Whatever portion you leave out is the portion that can't be considered for the proof.
[Joey] And who's doing this encoding? Because obviously the rules exist, you know, for each company, in a range of different forms. But you need to have some representation of these things that probably relates to the data models and also is something your tools can consume.
[Eric] Yeah. Now, this is just a brilliant sequence of questions here. You guys got it exactly, and you just defined our engineering process: what you have to do is define, by customer, or really, for these very large companies, even by division of the customer, how they input their own logical data model. So we can import anything; we haven't experienced yet any sort of technical risk in taking what we receive into a universal data model. But we have to allocate resources to do that, whether it's Excel, like I said, or whether it's Oracle or, you know, some other form, as long as it's not in PDFs, which would be the worst.
[Joey] Based more on kind of the formal methods experience, one of the things I would expect, and I'm curious if you've experienced this, is that even the act of taking this kind of, you know, maybe shaky, less consistent notion of our business logic, and encoding it into a really concrete, meaningful form, probably with a semantics, would surface problems. My expectation based on formal methods experience is that you'd actually find problems even in that step, before you kind of take the next step of actually, you know, thoroughly checking for logical inconsistencies.
[Eric] You know, I think that the logical data model, as it's represented by the business, is sort of manifestly their best case for how they run the business. We're not gonna second-guess that. You actually point out another really great point about implementing any part of this, which is that taking what they've already done and just proving that is so much more effective than giving them yet another new tool that will inevitably have some gaps and that'll take time to work out. So we wanna just make this as easy on our customers' workflow as possible. And that's what we're working to do.
[Joey] And then I guess my other question is, I'll imagine I'm a customer: what am I supposed to do when I query your instance and it says, well, there was an inconsistency? Obviously I don't act on that data in the way that I would have otherwise, but where do I go from there? Do I just keep querying and only act on the data where we don't see problems, or can I do better?
[Eric] I think the logical inconsistency is something you'd wanna address. That becomes immediately apparent. It's not like it's gonna be sprung upon you, saying suddenly there's a logical inconsistency appearing, you know, this Tuesday, I don't know when it happened or where it exists, but good luck. These sort of things get integrated very quickly, and you can see where the logical contradiction is taking place, so that you can bring the appropriate expertise to bear to resolve it.
[Joey] Gotcha. So I should be able to sort of see the inconsistency. Will the tool guide me into figuring out what went wrong and point me in the right direction at all?
[Eric] I'm gonna say that's really
dependent on the use case.
It really depends on whether we
are talking about our clients in
pharmaceutical research or our
clients in risk analysis and finance
about the degree to which they're gonna
feel that they need to work on the
model themselves or do something else
to resolve the logical contradiction.
[Joey] But the good news is, if I believe I've resolved the logical contradiction, I assume I should be able to check that by rerunning it, sort of right away.
[Eric] That is the promise of using math. You know, it's foundational. This is proven. Just like you don't need to every day run infinite queries on your calculator: you just kind of trust that it's proven that nine times nine is gonna be accurate, and then 12 times 12 is gonna be accurate. You know, this is a proven logical data model. What the challenge of this exercise becomes is that people, organizations, will be confronted with the degree to which they keep knowledge in their head. Implicit knowledge doesn't scale well. You know, it's a known failure point, and something that needs to eventually be made explicit. That's really the overhead.
And that's why the clients for Conexus are generally larger organizations operating in higher consequence environments. For smaller companies, the proxy for this, not exactly, but the proxy, is databases: under five databases generally suggests a simple enough infrastructure that the overhead of making the implicit explicit is maybe not yet worth it.
Another place where this is a little tougher is in areas where you're the king or the queen and you can dictate, you know, what your universal data model is. So I don't need the engineers to tell me, or not tell me, the consensus; I'll just tell 'em. And that's the solution you'll find in places like Amazon, where they don't have this complex engineering system in the same way as an avionics company or an energy company.
[Rob] Yeah, I think one topic that stands out to me when I look at your resume is that you have some public policy experience, as well as business experience. I wonder if any of the lessons that you've learned here in this business are things that impact your public policy opinions, or things that you think ought to be done or should be done. Is there any interplay between those things?
[Eric] Yes. It's interesting that you
bring that up in several ways.
I can first say that I'm really grateful
for the time that I had serving in
policy, serving in the US government.
I came into that job with many of
the biases others might have had
going to work for the US
government, but I immediately was
impressed with the drive and intelligence of the people with whom I worked, in the Obama White House in my case.
The public service I found to be really enlivening, in the difference one could make. The output, I guess, comes in at least two ways that I can think of, based on your question.
One is it had me see this,
Conexus's value as an opportunity.
Because I've spent my career in and
around various expressions of AI from an
academic researcher at various
schools, to a venture capitalist,
to an entrepreneur. And at that high level
of implementation, you know,
just at very, very large scale,
I quickly got to see that investors and organizations would be very disappointed with the returns they're gonna get on their AI investments, and I got to see where the blockage was. It wasn't in some of the ways we were looking at it, such as data cleaning, or bringing together all the books in the library, throwing 'em all in and saying, hey, it's integrated, or then sorting them by height and saying, great, now it's structured. The difficulty of fulfilling the promise of AI was elsewhere.
And that's how I got to
find this research that
then led to the expression
that is Conexus.
On the other side, I guess,
or another way in which my public
policy experience informs my
current view is in encouraging
people to be a part of the
conversation around AI or around
these digital technologies so
that they can be comfortable in
how these things get expressed.
I came to find that many
people would not really understand
the definition of what AI
is, let alone how they expected it to
be put to use in their organizations.
And this is from any level of employee,
from the most junior level employees,
right up to the boards of directors.
Often the terms would get mashed around, you know; often the understanding of what they intended to have happen from some of these experiments in AI, I'll call them, was misunderstood.
And I came to appreciate both
a way that I could represent AI
to members of Congress, which I feel
like I had to do multiple times a week,
and how I could work with non-technologists, citizens in any capacity, to feel comfortable engaging in the conversations, even if they were non-experts, you know, even if they were non-technologists.
An example of this is that
I think that all of us can
benefit by thinking through where
we want circuit breakers to occur.
So as technologists, you know, we think automation's good, automation continues, and that linking automated systems is also just nothing but good. But, you know, the
canonical example is an automated car. You know, as an automated car rolls down the road, it senses something that it thinks may be a crosswalk, and then it senses something, maybe beside the crosswalk. Well, at what level of confidence is it a person? Is it a tumbleweed? Is it a shadow? And then, does the car slow down? Does it stop? Does it keep going? Does it ask the driver?
Those are all a way of thinking
about circuit breakers. You know,
we have it right now where, you know,
my car might ask me to
jiggle the steering wheel.
We need to think through that, as technologists in addition to as citizens: where do we want these to take place, instead of just linking automation and then letting people revolt? That's the danger there.
And this is gonna be worth us all being conscious of, as technologists. We want people to be embracing what we develop, right? Our society, and I might even say our civilization, Western society, is going to benefit from us embracing this in many cases life-changing technology. And if we have people somehow resisting it, because we in this technology elite somehow developed technology that killed people, then it won't be trusted, it won't be adopted, and we will be missing out, perhaps relative to our enemies. We'll be missing out on the promise that technology could bring.
[Rob] Yeah, I think it's a really interesting
question, how you can get people
to develop trust in a technology that
they maybe don't fully understand.
You know, I think that's
one of the, you know,
the enduring difficulties
of this kind of work.
[Joey] Well, I think this is about
all the time we have. Eric,
it's been wonderful talking with you.
Thanks for sharing so much about your
company and the way you
all are approaching things.
[Eric] It's been a great conversation.
Thanks for having me.
[Joey] This has been another episode of Building Better Systems.