March 4, 2026

241 - Opportunities with AI (in 2026) with MZ Naser

Is it too late to start with AI in 2026? The technology has come so far already; does it still make sense to get into it now?

Absolutely. Today we sit down with MZ Naser of Clemson University to map a clear, useful path for engineers who want results without the hype. We start with the basics - clean data, the right algorithm, and a realistic mindset - and climb toward explainability, causality, and even philosophy to show where AI informs decisions and where it can quietly mislead.

We dig into the limits of our experiments: when tests are expensive, we control only a few variables and then celebrate when explainable AI “finds” the same drivers. That’s not discovery; that’s confirmation. MZ explains how broader sampling, anomaly detection, and careful clustering can reveal patterns we miss, while acknowledging that physics is fixed but our datasets are narrow. We also talk scale: a model that predicts whole-building fire behavior from scratch is a fantasy without impossible data. The practical play is combining reasoning, physics, and simulation to guide where AI adds value - sometimes leading to a simpler equation that replaces the model altogether.

Then we get tactical. What is agentic AI, and how can it save engineers real time? Think delegated workflows: data gathering, parametric setup, code lookups, Excel design sheets, quality checks, and concise summaries. Train agents with explicit steps and tight guardrails, keep them away from money and safety-critical controls, and make human review mandatory. We also confront traceability and model retirement - why freezing working versions, documenting assumptions, and cross-verifying with independent methods matter for audits years down the line.

Throughout, we balance open local models versus cloud LLMs, the trade-offs between control and convenience, and the hard truth that black boxes don’t absolve us of understanding. The big takeaway: AI is a lever, not a miracle. Use it to widen your view, automate routine work, and challenge your priors - while keeping physics, data quality, and professional judgment at the center.

If this conversation helps you think clearer about where AI fits in your workflow, follow the show, share it with a colleague, and leave a quick review so more engineers can find it.

----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.

00:00 - Setting The Stage: AI Jitters To Optimism

03:00 - Partner Shoutout And Show Milestones

04:16 - Is It Too Late To Start With AI

06:00 - Data Quality And Picking The Right Algorithm

08:20 - From Explainability To Causality To Philosophy

13:45 - Experimental Bias And Limits Of Legacy Data

19:00 - Visualizing Multivariable Behavior And Trends

23:30 - Scaling Problems: From Beams To Whole Buildings

27:30 - Circularity Of Experiments, Simulations, And Models

31:40 - A Mall Smoke Case: When AI Isn’t Needed

34:10 - Retiring Models And Reuse Beyond Papers

38:20 - Traceability, Frozen Models, And Accountability

43:00 - What Agentic AI Is And Why It Matters

47:10 - Training Agents, Reasoning, And Guardrails

51:20 - Practical Automations For Engineering Workflows

55:20 - Cost, Power, And Open vs Cloud Models

01:00:00 - Closing Thoughts And Optimism For AI In Fire Safety

WEBVTT

00:00:00.160 --> 00:00:02.479
Hello everybody, welcome to the Fire Science Show.

00:00:02.479 --> 00:00:08.080
Last week I've recorded a distressed message about my worries regarding AI.

00:00:08.080 --> 00:00:15.039
And today in a roller coaster, I am taking you to the world of opportunities and challenges with the AI as well.

00:00:15.039 --> 00:00:24.480
The world today is a very interesting place and it's perfectly fine to be at the same time stressed and excited about the rise of technology.

00:00:24.480 --> 00:00:35.920
And to discuss the rise of the technology, to discuss what is possible with the AI in 2026, I have once again invited MZ Naser from Clemson University.

00:00:35.920 --> 00:00:43.119
MZ is right now, I think, the leader of use of AI methods in civil engineering.

00:00:43.119 --> 00:00:50.960
I think he has more papers than anyone else in the world of academia on using AI in civil engineering.

00:00:50.960 --> 00:01:04.640
We first talked almost 220 episodes ago, and he's always been presenting this very practical way of implementing AI in the various tasks that we do.

00:01:04.640 --> 00:01:06.879
And it's no different than this podcast episode.

00:01:06.879 --> 00:01:20.480
In this podcast episode, we give a little bit of recap of what's been happening with the AI methods for the previous years in terms of how they could be applied in engineering and what benefits you could get out of them.

00:01:20.480 --> 00:01:34.719
We go through causality and explainability, we touch a little bit on philosophy-informed AI, and then later in the episode we're switching to the more practical application side.

00:01:34.719 --> 00:01:42.719
So among other things, in this episode we cover agentic AI, we cover the size and scope of models, reusability of models, etc.

00:01:42.719 --> 00:01:57.359
I think it's quite a practical episode, and at the same time it's quite an optimistic episode because it really gives a view on what could be done if we use this technology that we got correctly.

00:01:57.359 --> 00:02:00.959
And an interesting twist is that there's no magic in it.

00:02:00.959 --> 00:02:27.199
It's just a tool, and as a tool, it's burdened by the same limitations as our previous tools were, as CFD was, as zone models were, as empirical models were, and that limitation is access to high quality data and high quality experiments, and we also spend a lot of time discussing why those are critical for the rise of AI in the future.

00:02:27.199 --> 00:02:33.759
I think it's a good one, and I hope this time instead of stressing you, I bring in some optimism to the table.

00:02:33.759 --> 00:02:36.400
Let's spin the intro and jump into the episode.

00:02:36.400 --> 00:02:43.280
Welcome to the Fire Science Show.

00:02:43.280 --> 00:02:47.199
My name is Wojciech Wegrzynski, and I will be your host.

00:02:47.199 --> 00:03:05.120
The Fire Science Show podcast is brought to you in partnership with OFR Consultants.

00:03:05.120 --> 00:03:15.120
OFR is the UK's leading independent multi-award-winning fire engineering consultancy with a reputation for delivering innovative safety-driven solutions.

00:03:15.120 --> 00:03:24.240
We've been on this journey together for three years so far, and here begins the fourth year of collaboration between the Fire Science Show and OFR.

00:03:24.240 --> 00:03:41.599
So far, we've brought you more than 150 episodes, which translate into nearly 150 hours of educational content, available for free, accessible all over the planet without any paywalls, advertisements, or hidden agendas.

00:03:41.599 --> 00:03:48.479
This makes me very proud and I am super thankful to OFR for this long-lasting partnership.

00:03:48.479 --> 00:03:55.840
I'm extremely happy that we've just started the year 4, and I hope there will be many years after that to come.

00:03:55.840 --> 00:04:04.479
So big thanks OFR for your support to the Fire Science Show and the support to the fire safety community at large that we can deliver together.

00:04:04.479 --> 00:04:11.919
And for you, the listener, if you would like to learn more or perhaps even become a part of OFR, they always have opportunities awaiting.

00:04:11.919 --> 00:04:14.639
Check their website at OFRConsultants.com.

00:04:14.639 --> 00:04:16.560
And now let's head back to the episode.

00:04:16.560 --> 00:04:17.839
Hello everybody.

00:04:17.839 --> 00:04:22.319
I am joined today by MZ Naser from Clemson University.

00:04:22.319 --> 00:04:24.000
Good to have you back in the podcast.

00:04:24.399 --> 00:04:25.040
Thank you very much.

00:04:25.040 --> 00:04:25.839
Good to be here.

00:04:26.160 --> 00:04:39.839
I'm I'm so happy that you're here because I am going through a roller coaster of emotions with uh AI, and I need uh someone to uh bring me back to the optimist uh swing on this.

00:04:39.839 --> 00:04:49.680
And man, we've been talking about AI in in fire science like already five years ago when you first joined me in the in the fire science show almost five years ago.

00:04:49.680 --> 00:04:51.040
That's that's insane.

00:04:51.040 --> 00:05:07.120
Back then I remembered that you said something that that really uh stuck with me that you were just you know a few years ahead of uh of people, like you didn't have that much experience with AI back then, but still, you know, it was great to to see how how how much you grow.

00:05:07.120 --> 00:05:17.920
I wonder, do you think in uh 2026, if someone has never used any AI methods, machine learning whatsoever, is it too late for them to join the bandwagon?

00:05:18.240 --> 00:05:20.000
It's easier now to use AI.

00:05:20.000 --> 00:05:23.360
It's easier, it's much easier because you don't have to code anymore.

00:05:23.360 --> 00:05:27.920
Everybody's using a chatbot to code for them, or even those agents to code for them.

00:05:27.920 --> 00:05:36.000
So as long as you have a good thought or a good idea, you might want to try those, because you don't have to code it; it becomes a little bit easier and more accessible to use as well.

00:05:36.240 --> 00:05:42.000
Yeah, I remember back then it was about finding the the proper algorithm to your problem.

00:05:42.000 --> 00:05:46.160
And you said if you if we get that, you're almost uh sorted.

00:05:46.160 --> 00:05:47.680
Like that's the hardest part.

00:05:47.680 --> 00:05:50.319
Do you still believe that's the case in 2026?

00:05:50.639 --> 00:05:52.800
Yeah, I mean I'll just like add a small component.

00:05:52.800 --> 00:05:57.040
If you also have a great data set, that will even make your life much, much easier.

00:05:57.040 --> 00:06:04.079
It's all about like matching the right algorithm with the right data set, because uh at the end of the day, algorithms are just processes.

00:06:04.079 --> 00:06:05.360
They process data.

00:06:05.360 --> 00:06:14.800
If the data is nice and clean and the algorithm follows a more or less logical way to process that data, it becomes a straightforward path to using AI.

00:06:15.120 --> 00:06:19.360
I like that way of thinking because it kind of takes the magic away out of AI.

00:06:19.360 --> 00:06:24.399
So it's not like a magical processing tool that will do your job for you.

00:06:24.399 --> 00:06:28.560
It's basically a tool you can apply.

00:06:28.560 --> 00:06:38.079
Maybe you can expand on that, on that thought based on your like years of experience teaching this now to engineers worldwide, not just in fire, but broader civil engineering.

00:06:38.319 --> 00:06:46.879
Well, the the biggest hurdle with AI was if if you would like to use it, you have to know how to code and program, which means you have to learn Python or or R.

00:06:46.879 --> 00:06:52.879
In engineering, as we all know, we have limited time for curriculum, maybe three and a half years, four reasons you graduate.

00:06:52.879 --> 00:07:02.800
Uh, I know some schools they have coding courses, maybe like in year two or year three, but uh unfortunately you start in year two, but you never get to use it as much in year three and four.

00:07:02.800 --> 00:07:07.199
So it's like one isolated course where you learn some Python and you never use it.

00:07:07.199 --> 00:07:10.800
By the time you graduate, your firm uses AI, nobody knows how to use AI.

00:07:10.800 --> 00:07:13.519
They all know a little bit about coding, they've forgotten.

00:07:13.519 --> 00:07:20.560
So now with uh with the rise of coding free AI as well as uh agents, you don't really have to code anymore.

00:07:20.560 --> 00:07:31.439
You can use natural language, like the way that you chat with ChatGPT or with Claude, and the LLM or the chatbot or the agent can follow your steps and make an algorithm for you and develop those things.

00:07:31.439 --> 00:07:34.639
So, in a way, you could potentially be working on three different projects.

00:07:34.639 --> 00:07:37.920
One is pure coding, one is analysis, one is design.

00:07:37.920 --> 00:07:44.000
You could do one, and I can tell you the other two because the analysis can be automated, coding can be automated.

00:07:44.000 --> 00:07:52.240
It's gonna be up to you to think about how to approach the design and analysis, and you get much more improvement in productivity.

00:07:52.639 --> 00:07:58.240
What kind of core skills would be most useful for those coding-free abilities?

00:07:58.240 --> 00:08:01.680
I I, for example, I have some basic knowledge of Python.

00:08:01.680 --> 00:08:14.959
I find that in the end, I find it quite useful when I work with AI, but it perhaps it's it's more about understanding the logic of the codes and objects and et cetera, rather than you know, specific Python uh abilities.

00:08:14.959 --> 00:08:20.079
What do you think should be a part of this skill set when someone wants to start?

00:08:20.079 --> 00:08:21.680
What gives them a stronger start?

00:08:21.920 --> 00:08:23.680
I I think of this like the same way.

00:08:23.680 --> 00:08:26.720
Somebody would think of Excel like 20 years ago.

00:08:26.720 --> 00:08:33.440
When Excel started to show up and became more mainstream, it becomes like a kind of a skill that's expected from everybody.

00:08:33.440 --> 00:08:39.120
So at the time you had to take some courses and get a certificate out of it.

00:08:39.120 --> 00:08:42.639
Now it's all about like how to communicate with the with the chatbot.

00:08:42.639 --> 00:08:43.840
You have to learn how to write prompts.

00:08:43.840 --> 00:08:54.720
So prompt engineering becomes very valuable, very interesting, because now both of us could say the same thing to ChatGPT or to an LLM with different words and tones, and we can get different results.

00:08:54.720 --> 00:09:05.279
So the skill becomes how you interact with the AI in a natural language way, naturally with typing, and get the most that you can without having to code.

00:09:05.600 --> 00:09:16.399
Well, I'll refer people to the resources that you provide on your web page and all the great stuff that you're releasing to the public, because that's a goldmine on how to start.

00:09:16.399 --> 00:09:19.519
You've been uh teaching how to start with machine learning a long time ago.

00:09:19.519 --> 00:09:22.320
So so there there is a a lot to learn.

00:09:22.320 --> 00:09:32.080
Um in those years I've I've seen kind of like at least from your end, uh, evolution of how you are using those tools.

00:09:32.080 --> 00:09:40.000
Uh earlier, I've seen a lot of nice applications about you know just predicting physical phenomena like spalling.

00:09:40.000 --> 00:09:44.080
And I I still remember your uh web app for columns and spalling.

00:09:44.080 --> 00:09:46.240
I I played with that a lot.

00:09:46.240 --> 00:09:52.320
It was quite quite funny to like see uh how many variables go into that phenomenon.

00:09:52.320 --> 00:09:58.159
Then uh you brought up something called explainability and causality in AI.

00:09:58.159 --> 00:10:10.080
That was a great find because it's uh kind of returned the science to this for us at least uh in my institute, uh, because we saw that you can use AI to discover.

00:10:10.080 --> 00:10:16.559
Today I see you moving into more advanced concepts, philosophy-informed uh uh machine learning.

00:10:16.559 --> 00:10:20.799
I would love to hear the pathway from your end and how these things evolve.

00:10:20.799 --> 00:10:24.159
Perhaps what are the drivers uh for this evolution for you?

00:10:24.399 --> 00:10:25.759
Uh that's a very nice journey.

00:10:25.759 --> 00:10:30.080
So thank you for reminding me of all the of all the things uh in our five years.

00:10:31.360 --> 00:10:36.000
The earth is spinning much faster this this year than than it has been in the past, right?

00:10:36.399 --> 00:10:42.000
I'm just fortunate to have people that are supportive, and to be a little bit lucky to be ahead of the curve.

00:10:42.000 --> 00:10:44.879
Uh so that the thing really started with explainable AI.

00:10:44.879 --> 00:10:50.799
So, Wojciech, whenever somebody uses explainable AI, it basically tells us how the algorithm arrived at a particular prediction.

00:10:50.799 --> 00:11:00.240
So if the algorithm says I'm predicting this column to spall, when we use XAI, it tells us that the algorithm arrived at this prediction because it looked at these features.

00:11:00.240 --> 00:11:08.480
Or it did some combination of all the features and it arrived at a threshold that was passed, and hence we have uh you know this prediction.
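A rough stand-in for the explainable-AI step described here, as a sketch only: fit a classifier on a hypothetical spalling-style dataset (the column names and labels below are invented for illustration) and ask which features the model leaned on. Permutation importance is used as one simple attribution tool; SHAP or similar would play the same role.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical dataset: none of these columns or values come from a real experiment
rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "strength_MPa": rng.uniform(30, 120, n),
    "moisture_pct": rng.uniform(2, 6, n),
    "heating_rate": rng.uniform(5, 50, n),
})
# Synthetic label standing in for "spalled / did not spall"
y = ((X["strength_MPa"] > 80) & (X["moisture_pct"] > 4)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the model: the "explanation"
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Note that this attribution only says which columns the fitted model relied on for this particular dataset; it says nothing about physics, which is exactly the point made next.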

00:11:08.480 --> 00:11:13.039
But if you look at your explainability, you realize that everything is driven by the data.

00:11:13.039 --> 00:11:19.279
The algorithm really is only giving you results from your data, which means it doesn't really know anything about physics.

00:11:19.279 --> 00:11:23.120
It doesn't say the column has spalled because a physical phenomenon has happened.

00:11:23.120 --> 00:11:28.799
It says the column has spalled because some sort of associations or correlations have taken place.

00:11:28.799 --> 00:11:35.200
So when you realize this, you start to say, okay, well, I'm an engineer and I can't really rely on correlations for everything I do.

00:11:35.200 --> 00:11:37.200
I need like a higher level of understanding.

00:11:37.200 --> 00:11:40.559
So you do some research, you find that you find causal AI.

00:11:40.559 --> 00:12:01.840
Now, causal AI tries to go beyond explainable AI and tries to say, you know what, based on this very complex statistical method, we can almost be certain that the prediction is arrived at because of how the features link to each other, with a high sort of confidence that may imply some sort of physics in there.

00:12:01.840 --> 00:12:03.440
So it's not just pure correlation.

00:12:03.440 --> 00:12:11.039
Again, when you rely, when you think about causal AI for a little bit and you do some research, you realize, well, all of these things have to follow some certain assumptions.

00:12:11.039 --> 00:12:14.399
Because if you want to use a method, you have to follow its assumptions.

00:12:14.399 --> 00:12:18.720
And these assumptions without experiments can be very, very hard to justify.

00:12:18.720 --> 00:12:29.519
So you start to think about okay, well, now I have a bigger problem because now not only I can't use correlations or whatever we call causal AI, I also don't understand how everything relates to physics.

00:12:29.519 --> 00:12:37.840
And this brings you to philosophy, because philosophy is all about how things happen, and you have to think about it in a way that's not just numbers, but logic.

00:12:37.840 --> 00:12:43.919
And from there you start to think, okay, maybe I'm looking at this AI thing in uh in a very skewed way.

00:12:43.919 --> 00:12:49.600
I really have to think of what the algorithm was meant to do and what is actually meant to produce.

00:12:49.600 --> 00:12:54.480
The algorithm is producing numbers or predictions, but that doesn't have to align with what you have in mind.

00:12:54.480 --> 00:13:02.639
It doesn't say the algorithm predicts spalling because I know concrete, you know, goes through chemical reactions and then eventually degrades and spalls.

00:13:02.639 --> 00:13:07.519
It doesn't have to align with that, it aligns with the way that the algorithm was processed and developed.

00:13:07.519 --> 00:13:12.080
And then this becomes interesting, because for the same prediction, you could have different theories.

00:13:12.080 --> 00:13:20.879
One algorithm could predict something because of the way the features interact, and another one could give you the same predictions based on different features.

00:13:20.879 --> 00:13:23.679
So now you have like two windows to the same world.

00:13:23.679 --> 00:13:25.519
One looks at A, one looks at B.

00:13:25.519 --> 00:13:31.759
And this makes you think a lot, because in design we like certainty, but at the end of the day, with AI, we don't really have that much certainty anymore.

00:13:32.080 --> 00:13:33.919
So you basically start with an experiment.

00:13:33.919 --> 00:13:36.399
Your pathway was to start with an experiment.

00:13:36.399 --> 00:13:40.320
You had a lot of them, you start to see some correlations within them.

00:13:40.320 --> 00:13:58.639
Suddenly you start using AI to explore those relationships between different variables, get some higher-level correlations, you reach some level of explainability of what's happening that drives you to causality, that drives you to you know how higher level thinking about what is happening around.

00:13:58.639 --> 00:14:07.759
But you still had an experiment at the start of it, or you still had the data at the start of it, and the data was you know made kind of for some purpose.

00:14:07.759 --> 00:14:11.600
Like if I'm doing a fire experiment, I measure temperatures.

00:14:11.600 --> 00:14:17.440
I may not be measuring, you know, the moisture at depths uh of your concrete.

00:14:17.440 --> 00:14:27.600
I'm I'm focusing on some particular things which perhaps someone told me to measure 100 years ago when they designed a standard for fire resistance, you know.

00:14:27.600 --> 00:14:38.000
How much of the end product being the philosophy is narrowed down by the hundred-year-old assumptions in the experiment design?

00:14:38.000 --> 00:14:39.519
Significantly.

00:14:39.840 --> 00:14:41.440
Significant, yeah, I can imagine.

00:14:41.440 --> 00:14:44.559
I I I mean, but what what can we do about it?

00:14:44.799 --> 00:14:59.200
Well, I think when I was interested in philosophy, I found this very nice article on IEEE, I think by a professor actually very close to me, at the University of South Carolina, and she has outlined the history of engineering education from when it started up to now.

00:14:59.200 --> 00:15:11.440
And you can clearly see it, I think 60 or 70 years ago, there was a drastic shift, I think after World War II, where philosophy was taken away from engineering to focus on application, because they had to rebuild, and, you know, the industrial revolution.

00:15:11.440 --> 00:15:13.919
So there wasn't really a lot of time to study philosophy.

00:15:13.919 --> 00:15:19.120
And because of that, we start to shift more on math, physics, all this, which is great.

00:15:19.120 --> 00:15:23.519
But we lost touch with the context of our methods.

00:15:23.519 --> 00:15:31.279
Like if somebody would like to use regression, a very simple method, we have at least five hard assumptions that we have to follow.

00:15:31.279 --> 00:15:38.320
And if you look at our data, our methods, even the equations we have in building codes, most of these assumptions are not verified.

00:15:38.320 --> 00:15:40.000
We always have issues with them.

00:15:40.000 --> 00:15:45.840
So, how do you use a method when the assumptions behind this method are not met or not applicable?

00:15:45.840 --> 00:15:50.559
Which means you can use the method, but the outcome is always gonna have some issues.

00:15:50.559 --> 00:16:00.320
And this becomes okay, well now the problem is not really the method, the problem is how we understand the method, which means we have to go back and rethink about the way that we link philosophy and engineering.

00:16:00.320 --> 00:16:05.360
And we need this right now because AI does a lot of things for us that we don't really understand.

00:16:05.360 --> 00:16:08.399
And if you don't understand the method, you can't apply it properly.

00:16:08.399 --> 00:16:14.559
And if you can't apply it properly, those who do apply it will have issues because they don't understand the whole thing.

00:16:14.799 --> 00:16:25.120
Is it possible that you give me an example of that, like practical example of some kind of research task that you've done and went through that pathway and identified those limitations?

00:16:25.679 --> 00:16:25.919
Yeah.

00:16:25.919 --> 00:16:31.200
Uh the simplest example is with uh we were talking about like a little bit earlier with explainable AI.

00:16:31.200 --> 00:16:44.159
We have a data set, we have an algorithm, it becomes a model, we predict, and we use explainable AI to tell us the model arrived at this prediction because it saw those features.

00:16:44.159 --> 00:16:51.039
This tells me that if I change the data set with fake numbers, I can still have the same prediction.

00:16:51.039 --> 00:16:54.080
And this prediction will also be accurate because there's no physics.

00:16:54.320 --> 00:17:12.160
You mean you train the model on some data set and then you feed it a set of completely made up numbers, and because it has a pathway from numbers to outcome, it's still gonna predict something; the true definition of garbage in, garbage out.

00:17:12.480 --> 00:17:12.799
Exactly.

00:17:12.799 --> 00:17:15.680
And in philosophy, this is called predictive ignorance.

00:17:15.680 --> 00:17:21.519
You're predicting things very well, but you're ignorant about why and how, because the method is faulty.
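A minimal sketch of this failure mode, under stated assumptions: the dataset is synthetic, the column names are hypothetical, and nothing here is a real spalling model. A model fitted to narrow data will happily return a confident-looking number for physically impossible inputs, with nothing in the pipeline to object.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical narrow dataset standing in for a spalling-style experiment
rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "strength_MPa": rng.uniform(30, 120, n),   # concrete strength
    "moisture_pct": rng.uniform(2, 6, n),      # moisture content
    "heating_rate": rng.uniform(5, 50, n),     # deg C per minute
})
# Synthetic target loosely tied to the features ("spalling severity" stand-in)
y = 0.02 * X["strength_MPa"] + 0.5 * X["moisture_pct"] + 0.01 * X["heating_rate"]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# "Fake numbers": negative strength, impossible moisture. No physics check stops this;
# the learned pathway from numbers to outcome still produces a prediction.
nonsense = pd.DataFrame({"strength_MPa": [-500.0],
                         "moisture_pct": [80.0],
                         "heating_rate": [0.001]})
print(model.predict(nonsense))  # a confident-looking number comes out regardless
```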

00:17:21.519 --> 00:17:27.920
And to engineers, when somebody builds like collects a lot of data or massive data, they spend a lot of money doing experiments.

00:17:27.920 --> 00:17:33.839
So they build the experiment, they spend all this money, they build algorithms, now they have models, they use explainable AI.

00:17:33.839 --> 00:17:46.960
The problem becomes when the algorithm says this specimen has falled, because I know concrete strength is a big feature, or because I know heat is a big feature, and this matches what you already know from physics.

00:17:46.960 --> 00:17:55.039
We think the algorithm gave us something new, but this is called confirmation bias because the algorithm confirms what you're biased towards in the beginning.

00:17:55.039 --> 00:17:59.200
You already know concrete is gonna be a problem, heat is gonna be a problem.

00:17:59.200 --> 00:18:03.920
And now you see the algorithm confirms this notion, and you think, oh, the algorithm works well.

00:18:03.920 --> 00:18:05.839
In reality, it doesn't work well.

00:18:05.839 --> 00:18:08.160
The problem becomes with your data.

00:18:08.160 --> 00:18:12.240
When we did the experiments, we knew strength is going to be a problem, heat is going to be a problem.

00:18:12.240 --> 00:18:15.839
So we design the experiments in a way that controls these features.

00:18:15.839 --> 00:18:21.200
And because we control them, these are the only features that change in our data set.

00:18:21.200 --> 00:18:33.680
If you want to do a completely new experiment where you don't control features, then this becomes interesting to see because now you're not really imposing any bias on the experimental design, nor on the algorithmic design.

00:18:34.000 --> 00:18:34.960
What do you mean?

00:18:34.960 --> 00:18:36.960
How would you design such an experiment?

00:18:37.119 --> 00:18:51.279
Just random sampling of the input variables that go in there; you would basically randomize the strength of your concrete, the moisture content. But when we do experiments now, just say in fire especially, the problem becomes that it's very expensive.

00:18:51.279 --> 00:19:01.839
So maybe we have six columns, and in these six columns you're gonna have maybe, I don't know, three normal strength concrete, three UHPC, high strength or ultra-high performance concrete.

00:19:01.839 --> 00:19:05.039
So we're limited with the number of samples, we're limited with the type.

00:19:05.039 --> 00:19:06.720
We have one fire exposure.

00:19:06.720 --> 00:19:10.319
But what about all the other what about like changing moisture content?

00:19:10.319 --> 00:19:13.440
What about changing the loading for each kind of those variables?

00:19:13.440 --> 00:19:18.880
So our data sets would always have the big the main features limited, and we can't see the whole picture.

00:19:19.359 --> 00:19:20.480
I I see the same thing.

00:19:20.480 --> 00:19:24.319
I'm doing a lot of vehicle fires and car parks and stuff like that.

00:19:24.319 --> 00:19:30.480
And I see this kind of experimental bias when people are doing uh electric vehicles, for example.

00:19:30.480 --> 00:19:35.440
Because like you get an electric vehicle to burn, where do you start the fire?

00:19:35.440 --> 00:19:38.480
Come on, in the battery, like where else?

00:19:38.480 --> 00:19:43.759
Like what I'm gonna do with a completely burned vehicle and an untouched battery in the end.

00:19:43.759 --> 00:19:53.440
Like you start with the fire in battery, and then what you end up with is like a massive data set of all the ways, non-natural ways a fire could start in a battery.

00:19:53.440 --> 00:20:01.920
And almost zero data points of what happens with an electric vehicle when the fire starts in the wheel arch above your wheel.

00:20:01.920 --> 00:20:04.079
And we have no idea.

00:20:04.559 --> 00:20:09.519
No, I mean it's it's also part of the it's part of the limitation that we have to admit to.

00:20:09.519 --> 00:20:12.400
Like we can't really design experiments for every single variable.

00:20:12.400 --> 00:20:18.960
We have to use a little bit of knowledge to try to zoom in into some of the highly valued variables, and then we go from there.

00:20:18.960 --> 00:20:29.680
But I'm trying to say if somebody would like to really sit down and think about like the essence of the problem, we can't just be relying on what we think is right or what we think is most likely.

00:20:29.680 --> 00:20:32.000
We have to have a much larger picture.

00:20:32.000 --> 00:20:40.960
Think of it this way: when somebody's trying to create a drug in medicine, it's very, very hard to come up with an experiment; this is why we use randomized experiments.

00:20:40.960 --> 00:20:47.920
And you have thousands of people signing up for vaccines, drugs, because people are different than specimens, than beams and columns.

00:20:47.920 --> 00:20:57.599
And because you can't control people and like their genes and DNA, you end up with thousands and thousands of people on on those trials, medical trials.

00:20:57.599 --> 00:21:10.319
In engineering, we end up with a handful of specimens just because we think you know the mix is the mix is fine, the strength is fine, the moisture content is fine, and we don't really verify.

00:21:10.319 --> 00:21:13.359
And this makes our our window very, very small.

00:21:13.359 --> 00:21:15.920
So and all the models we have have problems.

00:21:15.920 --> 00:21:33.519
If you think about this, if you take the equation for predicting the fire resistance of columns in ACI, Eurocode 2, and the Australian code, and you apply these three equations, it's highly unlikely that they will all agree on the same number or on the same range for one column.

00:21:33.519 --> 00:21:40.640
So this tells us basically the problem is not that we don't know physics; physics is the same in the US, in Europe, and in Australia.

00:21:40.640 --> 00:21:46.799
The representation of physics, the equations are gonna be a problem, and that representation comes from our experiments.

00:21:47.119 --> 00:21:59.279
But yeah, I I find this kind of challenging because if I pursue your logic, that would mean that the moment I decide what I am looking in my experiment, I've kind of narrowed it down already.

00:21:59.279 --> 00:22:03.359
Like if I'm designing an experiment for a fire resistance.

00:22:03.359 --> 00:22:09.279
I've already narrowed it down like significantly, and perhaps I'm losing the discovery.

00:22:09.279 --> 00:22:14.079
Like which is like I love it because I'm doing a lot of exploratory experiments.

00:22:14.079 --> 00:22:24.720
People were questioning, you're doing it incorrectly, the standard says otherwise, you used the wrong heat flux or whatever.

00:22:24.720 --> 00:22:26.079
No, no, this is exploratory.

00:22:26.079 --> 00:22:28.480
I'm just I just wonder what's gonna happen.

00:22:28.480 --> 00:22:30.880
Like uh perhaps we need more of that in fire.

00:22:31.119 --> 00:22:34.480
Yeah, and and my my own my only concern is the following.

00:22:34.480 --> 00:22:41.200
I I'm always hesitant of saying that because I've done an experiment, that this is the cause.

00:22:41.200 --> 00:22:42.400
This is not the cause.

00:22:42.400 --> 00:22:46.480
This is the cause based on your own, you know, experimental setup and data.

00:22:46.480 --> 00:22:50.559
So generalizing this beyond that becomes a problem, at least to me.

00:22:50.559 --> 00:22:54.559
But you know, we all have limitations with experiments, and they tend to be very expensive.

00:22:54.559 --> 00:23:02.400
But like if you really would like to think about it from a different perspective, having more and looking at the problem from a fresh look is always going to be helpful.

00:23:02.400 --> 00:23:13.279
This is why now you hear a lot about those publications, in the news, somebody used AI to solve this mathematical conjecture that has been there for 200 years and nobody was able to solve.

00:23:13.279 --> 00:23:14.880
Well, how did AI solve it?

00:23:14.880 --> 00:23:18.559
It's because you're looking at the problem from a completely fresh perspective.

00:23:18.559 --> 00:23:24.720
And this might help you, for the most part, to find unique solutions and new ways to think about it.

00:23:24.720 --> 00:23:30.640
And this might be helpful also to us in engineering, because we have many problems that we don't really know how to solve.

00:23:30.640 --> 00:23:37.200
And honestly, even the building codes we have, a big component of them is based on tables of empirical equations.

00:23:37.200 --> 00:23:43.920
Empirical equations do not mean physics, they mean observation, which means domain knowledge and, you know, modeling.

00:23:44.160 --> 00:23:54.160
In that case, like if we agree that this is true, do you think there is still new knowledge to find within those classical sets of experiments or data sets?

00:23:54.160 --> 00:24:04.079
Like imagine you're a researcher and basically you have no ability to collect new data; you only have what's already in the literature.

00:24:04.079 --> 00:24:09.039
How much new knowledge is there realistically to find with AI?

00:24:09.039 --> 00:24:12.160
Because you know, I see a lot of papers.

00:24:12.160 --> 00:24:15.279
I'm I'm I'm avid uh reviewer of papers.

00:24:15.279 --> 00:24:21.759
I'm I'm a reviewer too, apologies uh to the listeners, like uh uh and then those who who faced me.

00:24:21.759 --> 00:24:24.799
But uh, you know, I I get a lot of those papers.

00:24:24.799 --> 00:24:30.400
Like we have applied this kind of algorithm to this data set, and we found something obvious.

00:24:30.400 --> 00:24:38.559
And I'm like, you know, well, just the fact that you've applied machine learning on something, this is not yet novelty, this is not new science.

00:24:38.559 --> 00:24:41.599
I wonder, I have a data set.

00:24:41.599 --> 00:24:46.079
What new things could AI realistically show me about that data set?

00:24:46.400 --> 00:24:50.400
When it comes to AI, this should always be used like in the very early stage.

00:24:50.400 --> 00:24:53.119
You don't just use it to write a paper.

00:24:53.119 --> 00:24:57.839
You use it earlier, and then if you find something, you try to understand why it happens.

00:24:57.839 --> 00:25:01.920
And of course, you have to verify this with multiple ways, multiple algorithms, multiple methods.

00:25:01.920 --> 00:25:11.200
But let's say we have this very old data set, something that we have for the last 50 years or 100 years, and we would like to see if AI can help us find something new.

00:25:11.200 --> 00:25:13.440
This is where we have to think about a few things.

00:25:13.440 --> 00:25:15.759
One area of AI is called data mining.

00:25:15.759 --> 00:25:23.599
So data mining is basically you have this data, you throw a bunch of algorithms at it, and all they try to do is find some patterns, some trends.

00:25:23.599 --> 00:25:24.640
That's the job.

00:25:24.640 --> 00:25:35.200
If they identify something to us, this could be helpful because maybe you could see some trend that you haven't seen before, because instead of having six columns, now you have six thousand.

00:25:35.200 --> 00:25:44.160
So maybe the trend that was already implicit in those six columns earlier, which we couldn't see because we didn't have many specimens, now you see repeated, because at the end of the day, physics doesn't change.

00:25:44.160 --> 00:25:45.359
Physics is physics.

00:25:45.359 --> 00:25:48.880
So the more specimens we see, the more likely we are to see trends.

00:25:48.880 --> 00:25:54.240
Maybe now we're uncovering some trends that we were not able to see before because we didn't have enough specimens.

00:25:54.240 --> 00:25:55.599
So this is one way.

00:25:55.599 --> 00:25:58.720
Another way also, you know, it could help you out to cluster things.

00:25:58.720 --> 00:26:10.240
So instead of finding trends, simple trends, it could say, well, okay, we happen to see that these columns that happen to have those seven features or ten features tend to behave similarly.

00:26:10.240 --> 00:26:18.720
So you're not only thinking about a trend within one parameter or two; you think about, okay, you have a group of columns that tend to share many features together.

00:26:18.720 --> 00:26:20.880
And now we call this clustering.

00:26:20.880 --> 00:26:22.960
So you could also cluster data, something new.

00:26:22.960 --> 00:26:24.799
You could also identify anomalies.

00:26:24.799 --> 00:26:36.319
You can say, okay, well, I'm sure with experimental work, many of us did experiments where you do 10 experiments and you find two that are very, very odd, and you never report them.

00:26:36.319 --> 00:26:37.680
Yeah in the paper.

00:26:37.680 --> 00:26:41.839
Well, maybe these are odd because of the trend that we haven't seen before.

00:26:41.839 --> 00:26:42.880
This is one.

00:26:42.880 --> 00:26:45.440
Maybe these are odd because they have anomalies.

00:26:45.440 --> 00:26:48.880
And anomalies tend we tend to think of them as a negative thing.

00:26:48.880 --> 00:26:51.519
Oh, this is like outlier, or this is anomalous.

00:26:51.519 --> 00:26:52.559
But think of it this way.

00:26:52.559 --> 00:26:59.440
Think of it, maybe the specimen or the combination of features, parameters, maybe give us something that we haven't seen before.

00:26:59.440 --> 00:27:01.200
Maybe this is something novel.

00:27:01.200 --> 00:27:02.480
You have to look into it.

00:27:02.480 --> 00:27:11.359
And this is how a lot of the research in materials science comes about, because all those unique materials, metamaterials, most of them came from anomalous data sets.

00:27:11.359 --> 00:27:14.160
You have anomalies in this data, nobody knows how to explain.

00:27:14.160 --> 00:27:17.440
The more you dig into it, now you see, okay, this is a completely new behavior.

00:27:17.440 --> 00:27:22.000
It's only anomalous because I've seen many cases that don't look like it.

00:27:22.000 --> 00:27:25.200
Once I look into it, into this data point itself.

00:27:25.200 --> 00:27:28.880
Maybe I can find something new now, new behavior that we haven't imagined before.

00:27:28.880 --> 00:27:30.799
AI could help us with those too.
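A minimal sketch of these two ideas, clustering and anomaly detection, on a tabular legacy dataset. The CSV path and column names below are hypothetical placeholders, and scikit-learn is just one possible toolbox for the kind of data mining described above.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

df = pd.read_csv("legacy_column_tests.csv")  # hypothetical legacy dataset
features = ["strength_MPa", "moisture_pct", "load_ratio", "cover_mm"]  # assumed columns
X = StandardScaler().fit_transform(df[features])

# Group specimens that share similar combinations of several features at once
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Flag the "odd" tests we might otherwise leave out of the paper (-1 = anomalous)
df["anomaly"] = IsolationForest(random_state=0).fit_predict(X)
print(df.loc[df["anomaly"] == -1, features])
```

The clusters and flagged outliers are only prompts for a closer look at the underlying tests, not findings in themselves.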

00:27:31.119 --> 00:27:35.279
Yeah, I can give you two experiences of my own.

00:27:35.279 --> 00:27:36.720
Maybe you can comment on them.

00:27:36.720 --> 00:27:43.119
One is uh it's quite old, it's it's uh way before uh any my my exposure into AI.

00:27:43.119 --> 00:27:50.240
I was simply working on a quite large data set which uh basically was a summary of academic uh achievements.

00:27:50.240 --> 00:27:57.359
It was a database of people and all the different things they had, you know, years since PhD, papers, etc.

00:27:57.359 --> 00:28:02.480
We were like kind of mapping the technical sciences with the professor at ITB.

00:28:02.480 --> 00:28:09.200
And what I've done then, I I had a massive, massive table for like many disciplines, many people, many parameters.

00:28:09.200 --> 00:28:13.599
I just like set the Python, just plot me everything against everything.

00:28:13.599 --> 00:28:20.640
Like just you know, take every combination of two variables you can find and just give me a damn plot out of that, like just the line plot, you know.

00:28:20.640 --> 00:28:26.319
And I just I was dropped with like a thousand plots, and I was going through them, and some of them, wow, this is a kind of interesting.

00:28:26.319 --> 00:28:30.319
Oh wow, this is like I wouldn't expect that, I would expect this line to be different.

00:28:30.319 --> 00:28:33.839
And oh, and this discipline is like totally different than the other discipline.

00:28:33.839 --> 00:28:38.240
And and it kind of like guided me where to look further.

00:28:38.240 --> 00:28:46.319
So this was my very primitive approach, because even though I plotted them automatically, I spent a lot of time looking at those plots.

00:28:46.319 --> 00:28:52.160
I guess this is something you could now drop on an agent or or on chatbot even to do for you, right?

00:28:52.400 --> 00:28:55.119
Very easily, but you didn't even have to think about it, yeah.
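A minimal sketch of that exhaustive pairwise-plotting exercise, assuming the table lives in a hypothetical CSV with whatever numeric columns it happens to have; today this is exactly the kind of boilerplate a chatbot or agent can write on request.

```python
import itertools
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("academic_metrics.csv")      # hypothetical dataset of people and metrics
numeric = df.select_dtypes("number").columns

# One scatter plot for every pair of numeric variables, saved to disk for browsing
for x, y in itertools.combinations(numeric, 2):
    ax = df.plot.scatter(x=x, y=y, s=10)
    ax.figure.savefig(f"{x}_vs_{y}.png")
    plt.close(ax.figure)  # free memory; there may be hundreds of figures
```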

00:28:55.279 --> 00:28:56.160
Yeah, I mean amazing.

00:28:56.160 --> 00:29:00.480
I mean, I it took like I literally learned how to code to do that.

00:29:00.480 --> 00:29:03.839
Like it took me like half a half a year to do that.

00:29:03.839 --> 00:29:04.799
So I'm really happy.

00:29:04.799 --> 00:29:08.160
Like it's like now just drop and like give me an outcome.

00:29:08.160 --> 00:29:09.359
Uh amazing.

00:29:09.359 --> 00:29:19.599
Uh, and this the second one is that while I believe it it's fairly easy when the relationships are between the one, maybe two variables, right?

00:29:19.599 --> 00:29:28.480
But for example, in structural, you would connect a lot of things to k-rho-c, the thermal inertia, which is three parameters, right?

00:29:28.480 --> 00:29:48.720
I wonder how many of those k-rho-c's exist within the fire domain, and it's very hard to find them on your own, because you're suddenly trying to twist your data set in kind of unpredictable ways, maybe with different exponential relationships, which is very difficult for a human, right?

00:29:49.039 --> 00:29:51.519
It's very hard for us to visualize those.

00:29:51.519 --> 00:29:55.200
And because it's very hard for us to visualize them, this is why we use software.

00:29:55.200 --> 00:30:04.720
This is why when we design buildings, we use SAP and ETABS and all those fancy software packages, and SAFIR too, because finite element helps us visualize what we can't visualize.

00:30:04.720 --> 00:30:08.640
But we can all do this exercise, even with the listeners.

00:30:08.640 --> 00:30:12.319
Uh, think of a beam, like uh this beam, this very large beam.

00:30:12.559 --> 00:30:14.880
He's showing me a beam like with his fingers.

00:30:14.880 --> 00:30:19.200
This this doesn't really work well with podcasts, but I'll I'll I'll be your narrator.

00:30:19.359 --> 00:30:22.160
So just imagine we are still looking at a magnificent beam.

00:30:22.160 --> 00:30:23.519
Let's think of this.

00:30:23.519 --> 00:30:25.920
Uh, we have uh a four-meter beam.

00:30:25.920 --> 00:30:29.759
Yes, and now we have another one, so we have two identical beams.

00:30:29.920 --> 00:30:30.240
Yeah.

00:30:30.400 --> 00:30:35.200
We apply a hundred kilonewtons on one, and then on the other one we apply only ten.

00:30:35.200 --> 00:30:36.880
We fire both.

00:30:36.880 --> 00:30:38.240
Which one collapses first?

00:30:38.240 --> 00:30:41.279
Well, it depends, but the hundred kilonewton one, of course.

00:30:41.279 --> 00:30:42.720
Very good, very good.

00:30:42.720 --> 00:30:50.480
Now, say the beam with the 100 is half the size it was before.

00:30:50.480 --> 00:30:54.000
In height, not width, not length; same length.

00:30:54.000 --> 00:30:55.680
So it's only shorter.

00:30:55.680 --> 00:30:57.519
Which beam collapses first?

00:30:57.519 --> 00:30:59.680
The hundred is now half, or the ten is half?

00:30:59.680 --> 00:31:01.200
No, the hundred is half.

00:31:01.200 --> 00:31:02.799
Well, again, the the hundred.

00:31:02.799 --> 00:31:04.000
Very good, very good.

00:31:04.000 --> 00:31:10.400
Now I'm gonna take the ten and I'm gonna make it a quarter and half shorter. Which collapses first?

00:31:10.640 --> 00:31:18.079
I am not able to answer, because Dan is gonna cut my sponsorship if I try to go that far into structural fire engineering.

00:31:18.640 --> 00:31:20.559
This is this is exactly the experiment.

00:31:20.559 --> 00:31:24.240
We can we can visualize one or two parameters, maybe three, very, very well.

00:31:24.240 --> 00:31:30.480
Once we start to add multiple parameters, we can't, and hence we have to use a method that can actually allow us to do so.

00:31:30.480 --> 00:31:32.960
Somebody could say, I could use finite element; you could.

00:31:32.960 --> 00:31:40.160
You'd have to build up X-to-the-N models, but you could also use, you know, AI to help you at least visualize a few trends.

00:31:40.160 --> 00:31:44.960
We're not saying that these trends are physics, we're not saying that these trends are the ground truth.

00:31:44.960 --> 00:31:52.720
We're saying that these trends might be worth looking into and help at least in a very early stage before you commit them to anything else.

00:31:52.960 --> 00:32:01.519
Okay, let's take this further, because a single beam is a super easy thing to model and predict.

00:32:01.519 --> 00:32:05.519
These are the famous uh last words of many researchers.

00:32:05.519 --> 00:32:21.839
Last time I heard that was from Asif Usmani, who said that he went into Cardington, he started with something simple like a beam, and then for five or six years they were investigating a single beam to do it really, really well, and it was a major breakthrough to do that.

00:32:21.839 --> 00:32:23.359
So yeah, a beam is not simple.

00:32:23.359 --> 00:32:27.279
But let's uh let's for this particular discussion assume the beam is simple.

00:32:27.279 --> 00:32:41.039
But you have you have a whole structure, you have a very large beam, you have a World Trade Center, you have a joint structure, trusses, beams, like extremely complicated internal core, uh membrane action of your floors, ridiculous.

00:32:41.039 --> 00:32:45.680
Like you could do finite element modeling today of that.

00:32:45.680 --> 00:32:54.799
How complicated or how massive would the machine learning model have to be to give you a particular prediction of that?

00:32:54.799 --> 00:33:01.440
Like, would you have to train it on a thousand World Trade Center collapses so that it provides you one?

00:33:01.440 --> 00:33:19.440
Because, you know, if I would like to have a general AI to predict a large-scale phenomenon related to fire in a complex building, the scale of what goes into such a model feels outlandish to me, like ridiculous.

00:33:19.440 --> 00:33:20.480
Correct.

00:33:21.279 --> 00:33:22.000
It's gonna be bad.

00:33:22.000 --> 00:33:29.039
I mean, ChatGPT can only respond back with text, and it took trillions of parameters.

00:33:29.039 --> 00:33:30.400
Not just data set.

00:33:30.400 --> 00:33:34.079
I think it's larger than a trillion parameters.

00:33:34.079 --> 00:33:37.200
So this is one model that has a trillion parameters.

00:33:37.200 --> 00:33:41.759
And all that it does is talk, and many times it hallucinates.

00:33:41.759 --> 00:33:47.039
So imagine you would like to figure out something that's gonna give you like a physical response.

00:33:47.039 --> 00:33:50.960
I don't think that the problem is the data, Wojciech.

00:33:50.960 --> 00:33:53.599
I think the problem is uh the methodology.

00:33:53.599 --> 00:34:00.640
If we're gonna have to train those systems on data, we're gonna have to rely on thousands and thousands of collapses.

00:34:00.640 --> 00:34:07.920
I think we have to integrate the way that they think and reason, just like how we do, to be able to predict things, at least economically.

00:34:07.920 --> 00:34:08.800
But I might be mistaken.

00:34:08.800 --> 00:34:09.119
I don't know.

00:34:09.119 --> 00:34:11.039
I'm not really that good with computer science.

00:34:11.280 --> 00:34:20.800
Yeah, I'm wondering, because, you know, I feel then, okay, if the experiments that we have are all we have, we're kind of doomed.

00:34:20.800 --> 00:34:27.360
Like you can still probably find new science, you can probably still make a lot of discoveries based on what's there.

00:34:27.360 --> 00:34:33.599
But unless you find a way to get more data out, it's limited.

00:34:33.599 --> 00:34:41.360
If you would base on new experiments, you're limited by time, by money, and your ability to design them.

00:34:41.360 --> 00:34:44.480
So so that that's a hell of a limitation, to be honest.

00:34:44.480 --> 00:34:49.599
And and you will only get financing if you have your goal in your mind.

00:34:49.599 --> 00:34:53.199
You're not gonna get a $10 million grant.

00:34:53.199 --> 00:34:57.119
I would like to burn a hundred various things and see what happens, right?

00:34:57.119 --> 00:35:15.760
Which means basically the the only way to significantly increase the amount of data you have to play with is through simulations, which is applying already known empirical models which are based on those experiments, and by that exponentially increasing you know the amount of data.

00:35:15.760 --> 00:35:18.719
But are you creating new knowledge in that process?

00:35:18.719 --> 00:35:35.599
That's like because you're already applying empirical model that's already you know burdened with you know the the whole burden that you described before coming from data, and I I I wonder this is a highly like philosophical question, if I may.

00:35:35.840 --> 00:35:37.519
This is why I went to philosophy.

00:35:37.519 --> 00:35:41.119
This is the exact reason because the circularity is is substantial.

00:35:41.119 --> 00:35:44.079
Like once you realize it, it's significant in engineering.

00:35:44.079 --> 00:35:45.199
I mean in many domains.

00:35:45.199 --> 00:35:57.440
But to be honest with you, despite this, and despite the limitation of experiments and the limitation with simulations, we still have great building codes and great responses and great equations and methods.

00:35:57.440 --> 00:35:59.679
It's not like we're at zero.

00:35:59.679 --> 00:36:02.239
We're we're going forward, it's just gonna take time.

00:36:02.239 --> 00:36:05.920
And this is something that we're not like the first people to find this now.

00:36:05.920 --> 00:36:07.440
It's been there for a long time.

00:36:07.440 --> 00:36:12.880
It's just that we, you know, brush over it, we don't think about it.

00:36:12.880 --> 00:36:21.760
But fundamentally speaking, uh this you see this in mechanical engineering, and I don't know, maybe like in poetry too, they have the same circularity issues, same concepts.

00:36:21.760 --> 00:36:24.000
Uh, this is why philosophy is philosophy.

00:36:24.000 --> 00:36:31.840
It's it's a problem that you try to solve, but you can't solve because of many, many other things, and it tends also to go into its own circular realm.

00:36:31.840 --> 00:36:37.599
But I think the more that we realize this, uh our expectations of AI become much more realistic.

00:36:37.599 --> 00:36:43.199
Because it's not like you know we're using AI to completely come up with something that we haven't seen before.

00:36:43.199 --> 00:36:48.800
Now we have to realize how we can properly use it and expect stuff from it also properly.

00:36:49.119 --> 00:36:53.280
I I can again try to give you an example from my own uh backyard.

00:36:53.280 --> 00:37:13.760
When I was doing my PhD, I I did a PhD on on smoke control in shopping malls, and uh one of the blank pages in that field of knowledge was what's happening in the very particular part of the shopping mall when the smoke just exits your mall unit on fire, then it travels underneath the balcony and flows up.

00:37:13.760 --> 00:37:21.119
Yeah, so so we had it solved on the compartment side, so we knew there's like a ton of models that tell you how much smoke exits the compartment.

00:37:21.119 --> 00:37:26.239
We had the beautiful work of Harrison and Spearpoint on axisymmetric spill plumes.

00:37:26.239 --> 00:37:28.960
So, what's happening from the edge of the balcony, we knew.

00:37:28.960 --> 00:37:35.039
But from the opening to the edge of the balcony, oh man, that's a wild, wild west.

00:37:35.039 --> 00:37:35.760
No one knew.

00:37:35.760 --> 00:37:38.559
And the guidance was like, okay, multiply it by two.

00:37:38.559 --> 00:37:50.480
And you know, a funny thing that it created was that if you wanted to make the smallest system you could in your building, you probably would like to have the smallest amount of smoke.

00:37:50.480 --> 00:38:00.880
And because those relationships correlated the amount of smoke to the width of your doors, basically you optimize for the smallest doors because that let the smallest amount of smoke out.

00:38:00.880 --> 00:38:13.119
And basically, if you made very, very wide doors, you would be infinitely penalized, because with every meter of width, the equation would give you more smoke and more smoke and more smoke, you know?

00:38:13.119 --> 00:38:16.559
Which is unphysical because the fire only produces this much smoke.

00:38:16.559 --> 00:38:19.679
And I've shown in my PhD that it eventually flattens out.

00:38:19.679 --> 00:38:30.320
There's no more heat to cause more smoke, no more entrainment, no more possibility for these to increase exponentially, and you actually can get to a safer design with the wider door.

00:38:30.320 --> 00:38:39.440
But why am I saying that? If today I implemented AI on that, I've done hundreds of simulations.

00:38:39.440 --> 00:38:44.880
I could provide them to an AI, and it would most likely give me a number based on the width of the doors.

00:38:44.880 --> 00:38:46.400
I tell it it's seven, it's this.

00:38:46.400 --> 00:38:48.320
If it's 15, it's this.

00:38:48.320 --> 00:38:49.360
It could find.

00:38:49.360 --> 00:38:56.000
And later on, with that, going to causality, I could find, okay, there's an inflection point in the predictions.

00:38:56.000 --> 00:39:01.119
You know, I could go all the way back, and and suddenly I don't need modeling, I don't need AI.

00:39:01.119 --> 00:39:07.119
I just need to understand the relationship between the energy, the width of the doors, and the size of my building.

00:39:07.119 --> 00:39:15.760
So I end up with a nice equation, and I don't need AI anymore, I don't need modeling anymore because I explain something in a simple way.
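
To make that loop concrete, here is a minimal Python sketch of the idea: fit a model to entirely made-up "simulation results", then scan its predictions along the door width to find where the smoke flow flattens out. Every number, variable name and threshold below is an illustrative assumption, not a result from the PhD work discussed here.

    # Illustrative sketch only: the "simulation data" and the plateau threshold are
    # invented to show the workflow, not real smoke-control results.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Pretend simulation results: door width [m], heat release rate [kW], and a smoke
    # mass flow that grows with width but eventually saturates (capped by the fire).
    width = rng.uniform(1.0, 15.0, 500)
    hrr = rng.uniform(1000.0, 5000.0, 500)
    smoke = np.minimum(0.3 * width, 3.0) * (hrr / 1000.0) ** (1 / 3) + rng.normal(0, 0.05, 500)

    model = GradientBoostingRegressor().fit(np.column_stack([width, hrr]), smoke)

    # Scan the prediction along door width at a fixed fire size and look for the plateau,
    # i.e. where the marginal gain per extra metre of door drops below a small threshold.
    w_grid = np.linspace(1.0, 15.0, 200)
    pred = model.predict(np.column_stack([w_grid, np.full_like(w_grid, 3000.0)]))
    gain_per_metre = np.gradient(pred, w_grid)
    plateau_start = w_grid[np.argmax(gain_per_metre < 0.05)]
    print(f"Predicted smoke flow stops growing near a door width of ~{plateau_start:.1f} m")

Once the plateau is located, the model itself can be retired in favour of a simple capped correlation, which is exactly the "end up with a nice equation" outcome described above.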

00:39:15.760 --> 00:39:17.840
I mean, that that's the perfect loop, probably.

00:39:17.840 --> 00:39:18.719
I haven't done that.

00:39:18.719 --> 00:39:20.320
I probably should go back to that.

00:39:20.320 --> 00:39:23.360
That's exactly how I think about it, 100%.

00:39:23.920 --> 00:39:25.760
We don't have to use AI for everything.

00:39:25.760 --> 00:39:27.440
You only use it when you need to.

00:39:27.440 --> 00:39:32.320
If the more classical methods or simpler methods do the job, we're good to go.

00:39:32.320 --> 00:39:34.159
We don't really have to bother with AI.

00:39:34.159 --> 00:39:36.960
There is no need to look into it.

00:39:36.960 --> 00:39:45.039
It just helps you, maybe gives you some insights into something that we haven't looked into or realized before.

00:39:45.440 --> 00:39:49.199
Okay, now let's move into practicalities.

00:39:49.199 --> 00:39:57.920
You also wrote a very interesting paper on retiring models and on what happens when they become obsolete, etc.

00:39:57.920 --> 00:40:04.880
So, given that perspective of yours, how does one navigate the environment?

00:40:04.880 --> 00:40:11.599
Because I think we agreed that picking the right model for the right question is already halfway there.

00:40:11.599 --> 00:40:21.039
So, in this rapidly changing environment, any guidance on how people should look for their models, where to start, where to finish?

00:40:21.760 --> 00:40:23.760
Yeah, no, this is a great question.

00:40:23.760 --> 00:40:28.400
And we really stumbled on this by luck, because I was organizing my folders.

00:40:28.400 --> 00:40:38.880
Okay, and putting things on hard drives - I still use hard drives - and I realized that for one paper I had 50-something versions of those scripts.

00:40:38.880 --> 00:40:43.360
So I'm thinking, you know what, these scripts nobody's gonna even look into or look at at all.

00:40:43.360 --> 00:40:49.119
They're gonna only look at the results that they produce, and most likely, myself personally, I will never touch them again.

00:40:49.119 --> 00:40:58.400
So if you think about the practicality, as you mentioned, all these companies and firms - God knows how many models are built and never touched again.

00:40:58.400 --> 00:40:59.760
So, what happened to these models?

00:40:59.760 --> 00:41:04.719
So I started to do some research and I found out that we really don't do anything with them.

00:41:04.719 --> 00:41:07.679
We just consume all this energy to build them, and that's it.

00:41:07.679 --> 00:41:12.880
We we produce a paper, we publish it, and most likely we're gonna never look into it again.

00:41:12.880 --> 00:41:19.039
And I started thinking, okay, well, what would happen if somehow ChatGPT retired?

00:41:19.039 --> 00:41:22.960
Yeah, what would happen to all those costs that were associated with training it?

00:41:22.960 --> 00:41:28.239
And it was just interesting, because this is an idea that came up and we had to think about it.

00:41:28.239 --> 00:41:30.559
But I I also found something very, very cool actually.

00:41:30.559 --> 00:41:31.679
You might like this.

00:41:31.679 --> 00:41:39.519
Do you remember, in the late 90s or early 2000s, when the Russian chess master was playing Deep Blue?

00:41:39.519 --> 00:41:39.840
Yes.

00:41:39.840 --> 00:41:42.639
So what happened to Deep Blue?

00:41:42.639 --> 00:41:44.079
Like, where did it go?

00:41:44.079 --> 00:41:49.840
Now I think Deep Blue is - if you go to an airport, you see all those monitors with the flights.

00:41:50.079 --> 00:41:50.400
Okay.

00:41:52.719 --> 00:41:54.000
I think that's what it does now.

00:41:54.000 --> 00:42:09.360
So it's similar logic to Deep Blue, and now it has been reformulated from chess into something practical, which tells us maybe we could reorganize these algorithms and use them for different purposes, versus just building them for a paper or a project and never using them again.

00:42:09.360 --> 00:42:10.639
So it's possible.

00:42:10.960 --> 00:42:20.960
You touched on something I didn't plan for this interview, but it brought me into an extremely interesting field, and I think you may actually be an awesome speaker to discuss this philosophically.

00:42:20.960 --> 00:42:22.880
So it's gonna be a longer one.

00:42:22.880 --> 00:42:23.440
Sorry.

00:42:23.440 --> 00:42:28.000
They were building in Switzerland this Gotthard Base Tunnel.

00:42:28.000 --> 00:42:34.000
It's a giant project underneath the Alps, like 30-something kilometers of railway tunnel.

00:42:34.000 --> 00:42:37.599
It took them like, I don't know, 40 years to build it.

00:42:37.599 --> 00:42:47.840
The kind of investment where the country has to go to a vote, and the citizens had to vote that yes, we're going to devote this amount of GDP to building this tunnel.

00:42:47.840 --> 00:42:50.880
Amazing project, amazing project, completed now.

00:42:50.880 --> 00:43:03.360
But you know, the project was so long that at some point in the late 90s and 2000s, a lot of the stuff had been decided back in the 70s, you know, by people in the 70s.

00:43:03.360 --> 00:43:12.639
And imagine you come to a tangible question in your construction, like: I have to change something - can I change it or not?

00:43:12.639 --> 00:43:21.760
And then the origin of that decision is like 30 years ago, and you have to trace back why a decision was made.

00:43:21.760 --> 00:43:31.920
And from what I've heard, they were able to trace back because they knew the methodology, they knew the assumptions, the codes of that time, there were written records of that, you know.

00:43:31.920 --> 00:43:41.679
It kind of allowed you to backtrace those decision-making processes of the 70s and 80s and understand what the grounds of that decision were.

00:43:41.679 --> 00:43:47.519
Imagine if I use an AI model today to make a decision.

00:43:47.519 --> 00:43:50.639
You're not gonna be able to backtrace that next Thursday.

00:43:50.639 --> 00:43:53.119
Not to speak about in 30 years.

00:43:53.119 --> 00:44:01.760
No, it's kind of like - it's not just a black box; from this perspective, it's a single-use black box.

00:44:01.760 --> 00:44:10.320
It's a black box that was created to take a decision and from the perspective of humanity disappeared shortly after.

00:44:10.320 --> 00:44:13.840
That's a hell of a dynamic; that's challenging.

00:44:14.079 --> 00:44:21.199
Yeah, I agree - I mean, this is actually still a big problem in computer science.

00:44:21.199 --> 00:44:25.199
And just like with other problems, they try to come up with solutions.

00:44:25.199 --> 00:44:28.000
Like, I think now there's something called a frozen model.

00:44:28.000 --> 00:44:32.079
So you take a model, and whenever it makes a decision, you snapshot it.

00:44:32.320 --> 00:44:51.519
So you can always rerun it with the same weights and everything, but it's very hard. It's kind of like saving your Python code with all the submodels on the day you were running it, and keeping that image, so if you want to run the code in five years you have the exact same Python version with everything.
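
A minimal sketch of what "freezing" a run could look like in practice, assuming a scikit-learn style model; the file names and manifest fields are illustrative conventions, not a standard from the paper.

    # Hedged sketch: store the fitted model together with the metadata needed to
    # question the run years later. Field names and paths are assumptions.
    import hashlib, json, platform, sys
    from datetime import datetime, timezone

    import joblib  # any serialisable model object works

    def freeze_run(model, training_csv_path: str, out_prefix: str = "frozen_run") -> None:
        """Save the exact model plus an audit manifest next to it."""
        with open(training_csv_path, "rb") as f:
            data_hash = hashlib.sha256(f.read()).hexdigest()

        manifest = {
            "frozen_at": datetime.now(timezone.utc).isoformat(),
            "python_version": sys.version,
            "platform": platform.platform(),
            "training_data_sha256": data_hash,
            "model_class": type(model).__name__,
            "model_params": getattr(model, "get_params", lambda: {})(),
        }
        joblib.dump(model, f"{out_prefix}_model.joblib")      # the exact weights
        with open(f"{out_prefix}_manifest.json", "w") as f:   # the audit trail
            json.dump(manifest, f, indent=2, default=str)

Pairing this with a locked list of package versions (for example the output of pip freeze) gets close to the "same Python version with everything" image described here, though, as noted next, it still does not guarantee traceability of every interaction inside a complex model.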

00:44:52.320 --> 00:45:01.119
But for very complex models, no matter how much you freeze, the interaction between the submodels also has its own issues.

00:45:01.280 --> 00:46:01.360
So although you might be able to freeze the whole thing, I don't know if it's really possible to trace every single item. Maybe if you run everything deterministically you would be able to, but if you keep some stochasticity in it, I don't know if it's possible to trace every single item. Because it makes a big difference between tracing that one particular decision versus every decision the algorithm was able to make. I think this is going to be one of the most problematic dynamics in this, because it also makes it very hard to third-party check the decision making; it makes it very hard to validate. Basically, what you would have to have is two people coming to the same result with pretty much different methods, as a cross-check - not validating one workflow, but obtaining the same result with multiple workflows, up to a point where it kind of converges.

00:46:01.760 --> 00:46:53.920
Correct, correct, yeah, it's complex. Even with finite elements - maybe finite element software has a much smaller issue, because it doesn't have to deal with a lot of algorithmic stuff - but still, a lot of the matrix multiplications and convergence criteria also rely on heuristics. For instance, an engineer might use, I don't know, 0.2 for a Newton-Raphson convergence criterion, and somebody else might use 0.1. Well, how do you justify those decisions? There's domain knowledge there - how do you verify domain knowledge? If there is a case where, God forbid, somebody has to go to court, how do you defend your decisions? So the good news is we do have precedents - we have finite elements to learn from - but on the other extreme, algorithmic architectures are very, very different from the things that we know, so you really have to think about it from a fresh perspective.
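
The tolerance point is easy to see in a toy example: the same Newton-Raphson solve stopped at a loose versus a tight convergence criterion returns slightly different answers, and the chosen tolerance is exactly the kind of judgment call that needs documenting. The function below is arbitrary, picked only for illustration.

    # Toy illustration of how the convergence criterion changes the reported answer.
    def newton_raphson(f, dfdx, x0: float, tol: float, max_iter: int = 50) -> float:
        x = x0
        for _ in range(max_iter):
            step = f(x) / dfdx(x)
            x -= step
            if abs(step) < tol:  # the convergence criterion under discussion
                break
        return x

    f = lambda x: x**3 - 2 * x - 5       # an arbitrary nonlinear "residual"
    dfdx = lambda x: 3 * x**2 - 2

    loose = newton_raphson(f, dfdx, x0=2.0, tol=0.2)
    tight = newton_raphson(f, dfdx, x0=2.0, tol=1e-10)
    print(f"loose tolerance: {loose:.6f}   tight tolerance: {tight:.6f}")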

00:46:54.239 --> 00:47:12.559
And the final part about practicality: I wanted to ask you about agentic AI, and maybe not everyone is up to speed with what's happening these days in the world of AI - in fact, you have to be well invested with time to follow what's happening in AI these days.

00:47:12.559 --> 00:47:25.199
So maybe a crash-course introduction to what agentic AI is, and then how this is an opportunity to change engineers' workflows, because I also think it's a great opportunity.

00:47:25.519 --> 00:48:17.840
I agree, I agree. So the simplest way to think of agents, at least to me, is the following: when we build a machine learning model, we do the coding, we get the data, we build it line by line, then we put the data there, we press run, the algorithm runs, and we see the prediction. We've done every step ourselves: collect the data, code everything you have to do, and at the end press the button and analyze. An agent does everything for us. So in the morning you would say, hey agent, can you build an algorithm and a dataset for spalling in concrete? You leave, you come back in 30 minutes, it's done. So instead of us collecting the data, coding, building the steps, the agent does everything that we would have to do. So these 30 minutes - or, realistically speaking, these X many hours that we would have spent on doing this - we can now spend on other things.

00:48:17.840 --> 00:48:33.199
So an agent is like a way for us to delegate tasks - to delegate steps and processes to AI so it can do things for us. And practically speaking, you can think of it this way: hey agent, can you analyze this design document, can you verify these design assumptions?

00:48:33.199 --> 00:48:44.800
Can you look at this finite element software and see if the inputs match your codes - I don't know, two or three guidelines or provisions - instead of us doing this manual labor?

00:48:45.199 --> 00:48:53.360
And how do you train the agent? How does the agent gain the competency to do those tasks in the correct way?

00:48:53.360 --> 00:49:16.400
Because if you tell it, okay, browse the news and give me a five-minute coffee read in the morning so I know what happened overnight, I find that simple. But if you tell an agent, how about you give me a model to predict spalling from, you know, five datasets - that's not a trivial task. How does it know where to go with that? - That's a very good question.

00:49:16.639 --> 00:49:21.440
So the task itself is much more complex but the process to train is the same.

00:49:21.440 --> 00:49:44.880
If you would like to train something, you expose it to the specific thing you want to train - the reasoning part. So the process of collecting and building is going to be the same no matter what the task is; what differs is the reasoning. Because if you're only going to collect a five-minute coffee read-up, the reasoning is going to be very simple: go to some TV channel, collect the five most interesting bullet points, summarize those.

00:49:44.880 --> 00:49:52.400
But for reasoning you actually have to use a higher algorithmic architecture that can look into a data point and break it down with reason.

00:49:52.400 --> 00:50:00.719
And this is where you see the difference between, for instance, ChatGPT o3 and 5.1 and something very simple like GPT-2.

00:50:00.719 --> 00:50:13.440
GPT-2 was always just about text; once we got to four and five, now it reasons - it breaks down the steps into a chain of thought. For instance, let's say that to be able to find this result I have to do multiple steps.

00:50:13.440 --> 00:50:18.800
Step one: look for data. Step two: go to the websites that host datasets.

00:50:18.800 --> 00:50:25.280
Step three: download the data. So you train the steps - the reasoning behind every step - and it will do it for you.
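
As a rough mental model of those explicit steps, here is a toy Python sketch: the "agent" is just an ordered list of small functions passing a shared state along, with a mandatory human review at the end. The step names, the example task and the review hook are invented for illustration; real agent frameworks wrap an LLM around this kind of loop.

    # Toy sketch of the "train the steps explicitly" idea - no LLM involved,
    # just the structure: explicit steps, shared state, human sign-off.
    from typing import Callable

    Step = Callable[[dict], dict]

    def find_datasets(state: dict) -> dict:
        state["sources"] = ["placeholder_dataset_1.csv", "placeholder_dataset_2.csv"]
        return state

    def download_data(state: dict) -> dict:
        state["rows"] = 1234  # pretend the sources were fetched and merged here
        return state

    def train_model(state: dict) -> dict:
        state["model"] = "spalling-regressor-v0"  # pretend training happened here
        return state

    def run_agent(steps: list[Step], require_human_review: bool = True) -> dict:
        state: dict = {"task": "predict spalling in concrete"}
        for step in steps:
            state = step(state)
            print(f"finished {step.__name__}: {state}")
        if require_human_review:
            state["approved"] = input("Approve the result? [y/N] ").strip().lower() == "y"
        return state

    run_agent([find_datasets, download_data, train_model])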

00:50:25.440 --> 00:50:29.920
Do you train it explicitly, or do you expect it to come up with that on its own?

00:50:30.239 --> 00:50:45.039
No, we train it explicitly. Okay, so you have to teach the agent what to do and correct it, just like how you teach a kid - for instance, when they're in grade one, how to read and write. You have to show them the letters, right? Then all of us learn by drawing.

00:50:45.039 --> 00:50:52.480
We draw the letters, then we learn how to write the letters, then how to put letters together to come up with words, and then sentences.

00:50:52.480 --> 00:50:56.159
It's the same process but from agent to agent the process can be different.

00:50:56.159 --> 00:51:03.039
Like, for instance, some of them you have to start from scratch; sometimes you have to show one example or two examples and they can pick it up from there.

00:51:03.039 --> 00:51:08.480
Sometimes you train them on certain aspects and they can transfer the same knowledge into different aspects.

00:51:08.480 --> 00:51:17.679
But the process, regardless of the type of algorithm or agent, more or less involves us putting in the effort to train agents.

00:51:18.239 --> 00:51:29.039
Is the logic of the agent executed locally on your computer, or is it still a cloud-based service that the agent just reaches out to and uses that resource?

00:51:29.519 --> 00:51:30.480
You could have it both ways.

00:51:30.480 --> 00:51:44.079
Just like how you can build a decision tree algorithm from scratch, you could build an agent from scratch; and just like how you can import an algorithm from a Python library, you could do the same thing for agents as well.

00:51:44.400 --> 00:52:04.000
Okay, and with agent implementation - it feels like a higher level of these LLMs - how would you scale this on the degree of complexity? Is it harder than programming, easier than programming? Is it between Excel and Python? I don't know - where would you put it on the ladder of toughness?

00:52:04.320 --> 00:52:07.039
It's it's definitely more than coding an algorithm.

00:52:07.039 --> 00:52:12.239
Because all an algorithm does is one thing: process data.

00:52:12.239 --> 00:52:22.400
Like, if you see this, do this; if you see that, do that. But an agent has to think, so it has to identify tasks, prioritize the tasks, solve the tasks, and then combine everything back together.

00:52:22.400 --> 00:52:36.159
So you can think of an agent as a multi-algorithm, where multiple processes have to work together in harmony - and most of the time in parallel, which is how you save time - to arrive at the end and give you the thing that you asked it to do.

00:52:36.400 --> 00:52:56.719
But in the end, if you succeed, you basically get a clone of yourself for this particular task, pretty much. And if anybody's listening now, they have probably heard of Claude Code - yeah, it's the same thing. Now you can make one, and you can make a lot of money out of it, because it can automate a lot of the things that people need to do on a daily basis.

00:52:56.880 --> 00:53:06.000
You don't have to go through your email 50 different times and delete all that spam or respond to those emails - now the agent can do it for you very well.

00:53:06.239 --> 00:54:00.239
Yeah, you can also lose a lot. There were some stories about people losing a lot of money, like one guy: I gave it access to my credit card, I told it to trade, it did this magnificent analysis - it lost all the money, but the analysis was beautiful. I'm like, oh yeah, maybe don't trust it with money yet. Yeah, I would not give them too much access to real money; I'm kind of old school in that. When I observed this, and I saw at first all those approaches of using machine learning on datasets, I felt that most people were just doing it because it's novel, and there was no greater goal in it. Explainability and causality - when you introduced me to that, I was like, wow, this is amazing. But it's hard, it's not easy; not that many people can do it.

00:54:00.239 --> 00:54:04.400
I I mean I see immense value in that but it's really really hard.

00:54:04.400 --> 00:54:08.320
But it's not something you would probably use in your everyday engineering.

00:54:08.320 --> 00:54:26.800
Like, I would struggle to find a use for it in engineering, because my buildings are so complex that the scale to train on is just not there. It's still easier for me to build a CFD model of my shopping mall rather than build a million shopping malls to train a model that can solve my shopping mall.

00:54:26.800 --> 00:54:34.800
With agents actually I think there are a lot of tasks that you could automate and be supported with as an engineer.

00:54:35.119 --> 00:54:41.440
Correct it's like you know a very simple thing as like really using Excel.

00:54:41.440 --> 00:54:56.800
Excel is all about cells with equations. You don't really have to spend 30 minutes coding an Excel sheet when you can tell it: can you code this Excel sheet to follow whatever - like the ACI equation set - to design concrete beams or concrete columns?

00:54:56.800 --> 00:55:03.039
You don't have to do it yourself; you can verify it at the end, but the agent can probably do the whole Excel sheet for you.
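
For flavour, a hedged sketch of the "agent writes the design spreadsheet" idea using the openpyxl library. The formula below is a deliberately generic placeholder, not an actual ACI design equation; a real sheet would encode the specific provisions being checked and would still need independent verification.

    # Sketch: generate a spreadsheet with input cells and a formula cell.
    # The "capacity" formula is a placeholder, NOT a design check.
    from openpyxl import Workbook

    wb = Workbook()
    ws = wb.active
    ws.title = "beam_check"

    ws["A1"], ws["B1"], ws["C1"] = "width_mm", "depth_mm", "capacity_placeholder"
    ws["A2"], ws["B2"] = 300, 550
    ws["C2"] = "=A2*B2*0.001"  # placeholder formula the agent would replace with real checks

    wb.save("beam_check.xlsx")
    print("wrote beam_check.xlsx - review every formula before using it for design")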

00:55:03.039 --> 00:55:27.119
So a lot of the things that happen to be highly routine, or require a lot of time and effort from us, can be automated. And you can think of things that are very low-maintenance, low-cost, low-risk - you don't have to make it go drive my car around and come back. Like an Excel sheet, like organizing my calendar, or organizing my replies to these emails, so you don't have to worry about them.

00:55:27.119 --> 00:55:41.840
When it comes to actual design, maybe you can say: can you organize the pages of the building codes, or the provision lines that you actually have to use all the time, so you don't have to open the code 50 different times a day to look for them?

00:55:41.920 --> 00:56:09.440
It can organize everything in one PDF, make your life easier and more productive, even for finite element building models. - I think it could also be very useful in the routines of preparing engineering analysis, you know, like preparing large batches of simulations. We are still... the paradigm is that for commercial work you would just pick one or two design fires, run your CFDs and be done with it.

00:56:09.440 --> 00:56:51.760
But I truly hope that this paradigm will shift into more multiparametric, stochastic analysis of multiple variables. And then you could probably tap into the agents to actually design those simulations for the parametric spaces you feel like exploring, perhaps manage the calculations on the cloud for you, prepare the data analysis, and provide you with an executive summary of what came out of the simulations; and you could ask them to take simulations 7, 5 and 15 for the final report and draft it. I think tasks like that could be very smartly automated with those tools.
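
The bookkeeping side of that workflow is very automatable even today. Below is a hedged sketch of generating a batch of parametric cases by templating an input file per combination of design fire and door width; the FDS-style template is abbreviated and schematic, not a complete or valid input deck, and the parameter values are arbitrary.

    # Sketch: one (abbreviated) input file per parametric combination.
    from itertools import product
    from pathlib import Path

    TEMPLATE = """&HEAD CHID='{chid}', TITLE='Mall smoke study, parametric case' /
    &TIME T_END=600. /
    &SURF ID='FIRE', HRRPUA={hrrpua:.1f} /
    door width for this case: {door_width:.1f} m (geometry lines omitted)
    &TAIL /
    """

    heat_release_rates = [1000.0, 2500.0, 5000.0]  # kW/m2, illustrative values only
    door_widths = [2.0, 5.0, 10.0]                 # m, illustrative values only

    out_dir = Path("parametric_cases")
    out_dir.mkdir(exist_ok=True)

    for i, (hrrpua, width) in enumerate(product(heat_release_rates, door_widths), start=1):
        chid = f"case_{i:02d}"
        (out_dir / f"{chid}.fds").write_text(
            TEMPLATE.format(chid=chid, hrrpua=hrrpua, door_width=width)
        )
        print(f"wrote {chid}.fds  (HRRPUA={hrrpua}, door width={width} m)")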

00:56:52.079 --> 00:57:03.920
Correct, yeah - like drafting, cover letters, you know, summaries - it's very low cost, plus you can verify it very easily. The more complex the task, the higher the tax to actually do it.

00:57:04.000 --> 00:58:00.079
So you have to be careful with those. And I'm confident that maybe next time we talk, we'll be talking about something much more fancy and advanced, and agents will be everywhere, because by then they're going to be old news - or the world ends and we're going to discuss farming, like what's the best soil to grow your food. - Let's not start - that's also a risk in that. I've heard the stories; the internet is pretty wild if you follow this space. There was some guy posting: we had an agent do executive financial reporting for the last half year, and it was making up all the numbers - all the decisions were made up, it was coming up with plausible numbers. Then someone asked me to cross-check one of them and I'm like, this is wrong. - I believe those. I mean, yeah, I know they're there, but it doesn't mean that they're ready.

00:58:00.079 --> 00:58:29.519
Yeah, but one thing that doesn't leave me - perhaps you can confirm this for me, or maybe tell me I'm wrong - is that I feel those models that are served to the public are kind of optimized around the question: how much advanced, energy-consuming logic do I have to use so that this user is happy?

00:58:29.519 --> 00:58:34.639
You know it's it they're not using full power for everything.

00:58:34.639 --> 00:58:37.760
And it's very hard to force them to use full power.

00:58:37.760 --> 00:58:49.760
Like, I had a very mundane task for a chatbot: I had an analogue clock - a scale - that was recorded, and I wanted to read out the data points.

00:58:49.760 --> 00:59:05.599
And I gave it the video and told it: read the data point every 10 seconds. And it did the first five, and then for the next hundred it just approximated the trend instead of reading. And I'm like, wait, wait, wait - I see what you're doing; you're not reading them - read them.

00:59:05.599 --> 00:59:19.039
And it took me a lot of time to force the damn chatbot to actually process that. And I knew it could, because it definitely read the first five, but at some point it switched into energy-saving mode and didn't want to do it.

00:59:19.280 --> 00:59:21.519
Yeah it's like a consumer product to be honest.

00:59:21.519 --> 00:59:33.760
It is. You build it so you can gain the maximum traction, gain the maximum profit, and then you can work on the coolest stuff in the back that you know you're going to be making much more money from in the next year or two.

00:59:33.760 --> 00:59:35.039
I 100% agree.

00:59:35.679 --> 00:59:39.199
Because it's quite energy consuming to produce the outputs.

00:59:39.199 --> 01:00:21.599
Like, when I was speaking a lot about machine learning in different places, the magic was that it takes a lot of energy and effort to train, but the response is instantaneous and cheap - which is not necessarily the case with complex prompts for very high-level LLMs, right? - No, I agree. This is why, when DeepSeek came about, it was transformative, because ChatGPT's outputs - almost everybody's LLM outputs - were very expensive, and DeepSeek's outputs were like 10% or 15% of the total cost. And people were like, okay, well, this is now going to change a lot of things, because now the context window can be very, very long and you can serve more for the same amount of money.

01:00:21.599 --> 01:00:57.280
Do you think, if you have access to a fairly strong workstation computer - I'm not saying you have Nvidia H100 supercomputers, but let's say you have a strong one that you would comfortably use to run FDS simulations for your small company - do you think it would be wiser to set up your own Ollama instance and use the full power of, say, a 70-billion-parameter model, or to rely on a trillion-parameter ChatGPT where you have no control over whether it's using its full power?

01:00:57.280 --> 01:00:58.880
That's a tough one.

01:00:59.440 --> 01:01:12.159
That's very hard. I'm all for open access, because I want people to gain knowledge, but unfortunately these open models are not as well maintained as the other ones; they don't get updated as much.

01:01:12.159 --> 01:01:24.480
And they're still powerful, but they would require you to maintain a lot of things, so the problem becomes that it's less convenient to use overall.

01:01:24.480 --> 01:01:29.119
It's about convenience because if you can maintain them then you should be fine.

01:01:29.119 --> 01:01:35.119
And maybe you can like tweak them and make them work to your own specific company or your own specific case which is very great.

01:01:35.119 --> 01:01:55.519
But overall, this is a decision that somebody has to think about, and think deeply, because it makes a big difference between paying a subscription every month and maybe hiring one person or two who can take care of this for you for the foreseeable future - and, you know, it works only for you.
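
For scale: the plumbing difference between the two options is small. A minimal sketch of querying a locally hosted model through Ollama's REST endpoint (http://localhost:11434/api/generate at the time of writing) is below; the model name and prompt are placeholders. Swapping the URL and payload for a cloud provider's API is roughly the same amount of code - the real decision is about who runs and maintains the weights.

    # Hedged sketch: ask a locally served model one question via Ollama's HTTP API.
    import json
    from urllib.request import Request, urlopen

    payload = {
        "model": "llama3",  # placeholder: any model pulled locally, subject to your hardware
        "prompt": "Summarise the assumptions behind a balcony spill plume calculation.",
        "stream": False,
    }
    req = Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.loads(resp.read())["response"])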

01:01:55.840 --> 01:02:18.000
Well, then there's a reason why it's a multi-billion-dollar industry right now to build up and manage those projects. Which is a shame, because they're retiring them every three months once they have a slightly higher competitive advantage - hence your philosophical pathway of when we should reuse them is highly appreciated.

01:02:18.000 --> 01:04:25.519
Man, an hour has passed. It's always such a pleasure to discuss the novelties of AI with you; it's such an interesting world. I'm absolutely sure we'll have to reconvene soon. I think we need to shorten the time between subsequent episodes, because there's more and more happening in this space.
Definitely, I'll be happy to come back too. You've always been very supportive, a great friend, and the only regret I have is that we haven't met in person yet, so hopefully we'll meet in person soon. - I'm still not certain if you're a real human or an algorithm; there's like a 17% chance you're an algorithm, a very, very sophisticated one. But I hope this will eventually happen, and I'll guide people to your resources; there is a lot that you've published, and I assume much more to come. Thanks, MZ, and let's catch up again sometime soon. - Thank you very much, have a good day.
And that's it, thank you for listening. Now I need to sit down and program my own agent; I guess I would need a second Wojciech, and that would be very appreciated. Overall, I think this episode brought optimism back to me, and MZ gave me a real reality check. We've been talking about this explainability and causality for a while, and I really, really love that concept. Maybe I'll tell you why I love this concept; I'll tell you how I feel about it. The thing with explainability and causality is that when you run a machine learning algorithm on a set of data, it basically takes the data, builds this massive neural network, and drops the data out. It builds the neural network by doing a ton of different mathematical operations, across different layers, on all the input variables, and it eventually reaches some output variable. And it does this so many times that eventually the inputs start matching your expected outputs.

01:04:25.519 --> 01:04:26.880
That's called training.

01:04:26.880 --> 01:04:37.119
You train the network, and it becomes better at solving the task by chance, by algorithms, by the sheer amount of repetitions you give it.

01:04:37.119 --> 01:04:50.159
No matter how, you basically build a network that relates an input to an output, and based on the testing set that you have kept aside, you know that these outputs match the inputs.

01:04:50.159 --> 01:04:54.000
That's the typical use of machine learning you would have.

01:04:54.000 --> 01:05:02.800
Now, how causality and explainability come into that is that you're not really that much interested in the output.

01:05:02.800 --> 01:05:12.800
I mean you are interested that the output is correct but what you try to figure out is what pathway is leading from those data to outputs.

01:05:12.800 --> 01:05:29.599
Basically, you allow the program to solve your dataset and give you predictions, but instead of focusing on the predictions, you're focusing on the reason why the program solved it like this - and there's new science in there.

01:05:29.599 --> 01:05:51.039
That's how you discover new interactions, new ways of thinking, new pathways that you perhaps have never thought about; that's where you discover those multivariable interactions, etc. That's why I think it's so powerful - because looking into the pathways is a completely new view on the same problem you may have been looking at for ages.
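
A minimal sketch of "looking at the pathway instead of the prediction": train a model on made-up data, then rank which inputs actually drive the output with permutation importance (tools like SHAP give richer, per-prediction explanations). Feature names and data are invented; the point is only the shape of the workflow.

    # Illustrative sketch: which inputs does the trained model actually rely on?
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(400, 3))  # pretend columns: hrr, door_width, ceiling_height
    y = 2.0 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 400)  # hidden interaction

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

    for name, score in zip(["hrr", "door_width", "ceiling_height"], result.importances_mean):
        print(f"{name:>15}: {score:.3f}")  # ceiling_height should come out near zero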

01:05:51.039 --> 01:05:53.679
And I really wish it was implemented more.

01:05:53.679 --> 01:06:04.239
We've submitted a massive grant proposal for using this in fire with Professor Bart Merci, and just inches were between us and getting that grant.

01:06:04.239 --> 01:06:11.440
Maybe uh sometime in the future I will be able to apply this uh this tool in a smarter way to some stuff.

01:06:11.440 --> 01:06:13.599
But anyway, I think it's it's very promising.

01:06:13.599 --> 01:06:21.840
And all the stuff that was brought up today - philosophically informed AI, the reusability of models - all of this is fantastic.

01:06:21.840 --> 01:06:26.719
And all of this is a research paper somewhere because MZ publishes like a monster.

01:06:26.719 --> 01:06:31.440
He's publishing and publishing so many papers, all of them you can find online.

01:06:31.440 --> 01:06:38.239
I will link to his website where you will find pathways to everything interesting he has produced.

01:06:38.239 --> 01:06:53.760
And yeah, summarizing the two episodes, uh, the last week's and this week, I really hope that AI turns out into a very optimistic, useful tool for fire safety engineers and creates a world of opportunity and abundance for us.

01:06:53.760 --> 01:06:59.920
Thanks for being here with me in the Fire Science Show, and I hope to see you here next week, same time, same place.

01:06:59.920 --> 01:07:01.360
Cheers, bye.