WEBVTT
00:00:00.160 --> 00:00:02.479
Hello everybody, welcome to the Fire Science Show.
00:00:02.479 --> 00:00:08.080
Last week I recorded a distressed message about my worries regarding AI.
00:00:08.080 --> 00:00:15.039
And today, in a roller coaster turn, I am taking you to the world of opportunities and challenges of AI as well.
00:00:15.039 --> 00:00:24.480
The world today is a very interesting place and it's perfectly fine to be at the same time stressed and excited about the rise of technology.
00:00:24.480 --> 00:00:35.920
And to discuss the rise of the technology, to discuss what is possible with the AI in 2026, I have once again invited MZ Naser from Clemson University.
00:00:35.920 --> 00:00:43.119
MZ is right now, I think, the leader in the use of AI methods in civil engineering.
00:00:43.119 --> 00:00:50.960
I think he has more papers than anyone else in the world of academia on using AI in civil engineering.
00:00:50.960 --> 00:01:04.640
We've known each other for many years now; we first talked almost 220 episodes ago, and he's always been presenting this very practical way of implementing AI in the various tasks that we do.
00:01:04.640 --> 00:01:06.879
And it's no different in this podcast episode.
00:01:06.879 --> 00:01:20.480
In this podcast episode, we give a little bit of a recap of what's been happening with AI methods over the previous years in terms of how they could be applied in engineering and what benefits you could get out of them.
00:01:20.480 --> 00:01:34.719
We go through causality and explainability, we touch a little bit on philosophy-informed AI, and then later in the episode we switch into the more practical application side.
00:01:34.719 --> 00:01:42.719
So among other things, in this episode we cover agent AI, we cover the size and scope of models, reusability of models, etc.
00:01:42.719 --> 00:01:57.359
I think it's quite a practical episode, and at the same time it's quite an optimistic episode because it really gives a view on what could be done if we use this technology that we got correctly.
00:01:57.359 --> 00:02:00.959
And an interesting twist is that there's no magic in it.
00:02:00.959 --> 00:02:27.199
It's just a tool, and as a tool, it's burdened by the same limitations as our previous tools were, as CFD was, as zone models were, as empirical models were. That limitation is access to high-quality data and high-quality experiments, and we also spend a lot of time discussing why those are critical for the rise of AI in the future.
00:02:27.199 --> 00:02:33.759
I think it's a good one, and I hope this time instead of stressing you, I bring in some optimism to the table.
00:02:33.759 --> 00:02:36.400
Let's spin the intro and jump into the episode.
00:02:36.400 --> 00:02:43.280
Welcome to the Fire Science Show.
00:02:43.280 --> 00:02:47.199
My name is Wojciech Wegrzynski, and I will be your host.
00:02:47.199 --> 00:03:05.120
The Fire Science Show podcast is brought to you in partnership with OFR Consultants.
00:03:05.120 --> 00:03:15.120
OFR is the UK's leading independent multi-award-winning fire engineering consultancy with a reputation for delivering innovative safety-driven solutions.
00:03:15.120 --> 00:03:24.240
We've been on this journey together for three years so far, and here begins the fourth year of collaboration between the Fire Science Show and OFR.
00:03:24.240 --> 00:03:41.599
So far, we've brought you more than 150 episodes, which translate into nearly 150 hours of educational content, free and accessible all over the planet without any paywalls, advertisements, or hidden agendas.
00:03:41.599 --> 00:03:48.479
This makes me very proud and I am super thankful to OFR for this long-lasting partnership.
00:03:48.479 --> 00:03:55.840
I'm extremely happy that we've just started year four, and I hope there will be many years after that to come.
00:03:55.840 --> 00:04:04.479
So big thanks OFR for your support to the Fire Science Show and the support to the fire safety community at large that we can deliver together.
00:04:04.479 --> 00:04:11.919
And for you, the listener, if you would like to learn more or perhaps even become a part of OFR, they always have opportunities awaiting.
00:04:11.919 --> 00:04:14.639
Check their website at OFRConsultants.com.
00:04:14.639 --> 00:04:16.560
And now let's head back to the episode.
00:04:16.560 --> 00:04:17.839
Hello everybody.
00:04:17.839 --> 00:04:22.319
I am joined today by MZ Naser from Clemson University.
00:04:22.319 --> 00:04:24.000
Good to have you back in the podcast.
00:04:24.399 --> 00:04:25.040
Thank you very much.
00:04:25.040 --> 00:04:25.839
Good to be here.
00:04:26.160 --> 00:04:39.839
I'm I'm so happy that you're here because I am going through a roller coaster of emotions with uh AI, and I need uh someone to uh bring me back to the optimist uh swing on this.
00:04:39.839 --> 00:04:49.680
And man, we've been talking about AI in fire science since you first joined me in the Fire Science Show almost five years ago.
00:04:49.680 --> 00:04:51.040
That's that's insane.
00:04:51.040 --> 00:05:07.120
Back then, I remember you said something that really stuck with me: that you were just, you know, a few years ahead of people. You didn't have that much experience with AI back then, but still, it was great to see how much you've grown.
00:05:07.120 --> 00:05:17.920
I wonder, do you think in uh 2026, if someone has never used any AI methods, machine learning whatsoever, is it too late for them to join the bandwagon?
00:05:18.240 --> 00:05:20.000
It's easier now to use AI.
00:05:20.000 --> 00:05:23.360
It's easier, it's much easier because you don't have to code anymore.
00:05:23.360 --> 00:05:27.920
Everybody's using a chatbot to code for them, or even those agents to code for them.
00:05:27.920 --> 00:05:36.000
So as long as you have a good thought or a good idea, you might want to try those, because you don't have to code it; it becomes a little bit easier and more accessible to use as well.
00:05:36.240 --> 00:05:42.000
Yeah, I remember back then it was about finding the proper algorithm for your problem.
00:05:42.000 --> 00:05:46.160
And you said if we get that, you're almost sorted.
00:05:46.160 --> 00:05:47.680
Like that's the hardest part.
00:05:47.680 --> 00:05:50.319
Do you still believe that's the case in 2026?
00:05:50.639 --> 00:05:52.800
Yeah, I mean I'll just like add a small component.
00:05:52.800 --> 00:05:57.040
If you also have a great data set, that will even make your life much, much easier.
00:05:57.040 --> 00:06:04.079
It's all about like matching the right algorithm with the right data set, because uh at the end of the day, algorithms are just processes.
00:06:04.079 --> 00:06:05.360
They process data.
00:06:05.360 --> 00:06:14.800
If the data is nice and clean, and the algorithm follows a more or less logical way to process that data, it becomes a straightforward path to using AI.
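To make that idea concrete, here is a minimal sketch, in Python with scikit-learn, of what "matching the right algorithm with the right data set" can look like in practice: trying a couple of candidate algorithms on the same clean data set and comparing them with cross-validation. The data set and model choices are illustrative assumptions, not something recommended in the conversation.

```python
# A minimal sketch of comparing candidate algorithms on one clean data set.
# Uses scikit-learn; the bundled breast-cancer data stands in for "a good data set".
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "random forest":       RandomForestClassifier(n_estimators=200, random_state=0),
}

# Cross-validation gives a rough sense of which algorithm suits this data.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```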
00:06:15.120 --> 00:06:19.360
I like that way of thinking because it kind of takes the magic out of AI.
00:06:19.360 --> 00:06:24.399
So it's not like a magical processing tool that will do your job for you.
00:06:24.399 --> 00:06:28.560
It's basically a tool you can apply.
00:06:28.560 --> 00:06:38.079
Maybe you can expand on that, on that thought based on your like years of experience teaching this now to engineers worldwide, not just in fire, but broader civil engineering.
00:06:38.319 --> 00:06:46.879
Well, the biggest hurdle with AI was that if you would like to use it, you have to know how to code and program, which means you have to learn Python or R.
00:06:46.879 --> 00:06:52.879
In engineering, as we all know, we have limited time in the curriculum, maybe three and a half years, four years, and you graduate.
00:06:52.879 --> 00:07:02.800
Uh, I know some schools they have coding courses, maybe like in year two or year three, but uh unfortunately you start in year two, but you never get to use it as much in year three and four.
00:07:02.800 --> 00:07:07.199
So it's like one isolated course where you learn some Python and you never use it.
00:07:07.199 --> 00:07:10.800
By the time you graduate, your firm uses AI, nobody knows how to use AI.
00:07:10.800 --> 00:07:13.519
They all learned a little bit about coding, but they've forgotten it.
00:07:13.519 --> 00:07:20.560
So now with uh with the rise of coding free AI as well as uh agents, you don't really have to code anymore.
00:07:20.560 --> 00:07:31.439
You can use natural language, like the way that you chat with ChatGPT or with Claude, and the LLM or the chatbot or the agent can follow your steps and make an algorithm for you and develop those things.
00:07:31.439 --> 00:07:34.639
So, in a way, you could potentially be working on three different projects.
00:07:34.639 --> 00:07:37.920
One is pure coding, one is analysis, one is design.
00:07:37.920 --> 00:07:44.000
You could do one, and AI can do the other two, because the analysis can be automated and the coding can be automated.
00:07:44.000 --> 00:07:52.240
It's gonna be up to you to think about, you know, how to approach the design and analysis, and you get much more improvement in productivity.
00:07:52.639 --> 00:07:58.240
What kind of core skills would be most useful for those coding-free abilities?
00:07:58.240 --> 00:08:01.680
I I, for example, I have some basic knowledge of Python.
00:08:01.680 --> 00:08:14.959
I find it quite useful when I work with AI, but perhaps it's more about understanding the logic of code and objects, et cetera, rather than, you know, specific Python abilities.
00:08:14.959 --> 00:08:20.079
What do you think should be a part of this skill set when someone wants to start?
00:08:20.079 --> 00:08:21.680
What gives them a stronger start?
00:08:21.920 --> 00:08:23.680
I think of this the same way that
00:08:23.680 --> 00:08:26.720
somebody would have thought of Excel 20 years ago.
00:08:26.720 --> 00:08:33.440
When Excel started to show up and became more mainstream, it became a kind of skill that was expected from everybody.
00:08:33.440 --> 00:08:39.120
So you had to take some courses at the time and get a certificate out of it.
00:08:39.120 --> 00:08:42.639
Now it's all about how to communicate with the chatbot.
00:08:42.639 --> 00:08:43.840
You have to learn how to write prompts.
00:08:43.840 --> 00:08:54.720
So prompt engineering becomes very valuable, very interesting, because now both of us could say the same thing to ChatGPT or to an LLM with different words and tones, and we can get different results.
00:08:54.720 --> 00:09:05.279
So the skill becomes how you interact with the AI in a natural language way, naturally with typing, and get the most that you can without having to code.
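As a concrete illustration of that point about wording and tone, here is a minimal sketch assuming the `openai` Python client (v1.x) and an API key in the environment; the model name and the two prompts are hypothetical placeholders, not anything mentioned in the episode.

```python
# A minimal sketch of how prompt wording can change results.
# Assumes the `openai` package (>=1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "List the main variables that influence concrete spalling in fire.",
    "You are a fire safety engineer. Explain, step by step, which physical "
    "parameters drive explosive spalling of concrete columns and why.",
]

for p in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for whatever you use
        messages=[{"role": "user", "content": p}],
    )
    print("PROMPT:", p)
    print("ANSWER:", response.choices[0].message.content, "\n")
```

The same question phrased two ways will usually produce answers of different depth and framing, which is the point being made above.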
00:09:05.600 --> 00:09:16.399
Well, I'll refer people to the resources that you provide on your web page and all the great stuff that you're releasing to the public, because that's a goldmine on how to start.
00:09:16.399 --> 00:09:19.519
You've been uh teaching how to start with machine learning a long time ago.
00:09:19.519 --> 00:09:22.320
So so there there is a a lot to learn.
00:09:22.320 --> 00:09:32.080
In those years, I've seen, at least from your end, an evolution of how you are using those tools.
00:09:32.080 --> 00:09:40.000
Uh earlier, I've seen a lot of nice applications about you know just predicting physical phenomena like spalling.
00:09:40.000 --> 00:09:44.080
And I I still remember your uh web app for columns and spalling.
00:09:44.080 --> 00:09:46.240
I I played with that a lot.
00:09:46.240 --> 00:09:52.320
It was quite quite funny to like see uh how many variables go into that phenomenon.
00:09:52.320 --> 00:09:58.159
Then uh you brought up something called explainability and causality in AI.
00:09:58.159 --> 00:10:10.080
That was a great find because it kind of returned the science to this, for us at least in my institute, because we saw that you can use AI to discover.
00:10:10.080 --> 00:10:16.559
Today I see you moving into more advanced concepts, philosophy-informed uh uh machine learning.
00:10:16.559 --> 00:10:20.799
I would love to hear the pathway from your end and how these things evolve.
00:10:20.799 --> 00:10:24.159
Perhaps what are the drivers uh for this evolution for you?
00:10:24.399 --> 00:10:25.759
Uh that's a very nice journey.
00:10:25.759 --> 00:10:30.080
So thank you for reminding me of all the of all the things uh in our five years.
00:10:31.360 --> 00:10:36.000
The earth is spinning much faster this this year than than it has been in the past, right?
00:10:36.399 --> 00:10:42.000
I'm just fortunate to have people that are supportive, and to be a little bit lucky to be ahead of the curve.
00:10:42.000 --> 00:10:44.879
So the thing really started with explainable AI.
00:10:44.879 --> 00:10:50.799
So, Wojciech, whenever somebody uses explainable AI, it basically tells us how the algorithm arrived at a particular prediction.
00:10:50.799 --> 00:11:00.240
So if the algorithm says, I'm predicting this column to spall, when we use XAI, it tells us that the algorithm arrived at this prediction because it looked at these features.
00:11:00.240 --> 00:11:08.480
Or it did some combination of all the features and it arrived at a threshold that was passed, and hence we have uh you know this prediction.
00:11:08.480 --> 00:11:13.039
But if you look at your explainability, you realize that everything is driven by the data.
00:11:13.039 --> 00:11:19.279
The algorithm really is only giving you results from your data, which means it doesn't really know anything about physics.
00:11:19.279 --> 00:11:23.120
It doesn't say the column has spalled because a physical phenomenon has happened.
00:11:23.120 --> 00:11:28.799
It says the column has spalled because some sort of associations, or correlations, have taken place.
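A minimal sketch of what this looks like in code, assuming scikit-learn and the `shap` package; the feature names, data, and label rule are fabricated for illustration and are not a real spalling data set. The point is that SHAP attributes a prediction to the input features, which is an account of the model, not of the physics.

```python
# A minimal sketch of explainable AI on tabular data: the attribution explains
# the model, not the physical mechanism of spalling.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "concrete_strength_MPa":  rng.uniform(30, 120, 500),
    "moisture_content_pct":   rng.uniform(1, 6, 500),
    "heating_rate_C_per_min": rng.uniform(5, 50, 500),
    "load_ratio":             rng.uniform(0.2, 0.8, 500),
})
# Fake label: "spalling" flagged from a simple made-up rule (illustrative only).
y = ((X["concrete_strength_MPa"] > 80) & (X["moisture_content_pct"] > 3)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# SHAP tells us which features drove each prediction - the "it looked at these
# features" story from the conversation, nothing more.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)
```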
00:11:28.799 --> 00:11:35.200
So when you realize this, you start to say, okay, well, I'm an engineer and I can't really rely on correlations for everything I do.
00:11:35.200 --> 00:11:37.200
I need like a higher level of understanding.
00:11:37.200 --> 00:11:40.559
So you do some research, and you find causal AI.
00:11:40.559 --> 00:12:01.840
Now, causal AI tries to go beyond explainable AI and tries to say: you know what, based on this very complex statistical method, we can almost be certain that the prediction is arrived at because of how the features link to each other, with a high sort of confidence that may imply some sort of physics in there.
00:12:01.840 --> 00:12:03.440
So it's not just pure correlation.
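A small illustration of why such causal claims hinge on assumptions: with synthetic data and a hidden confounder, the naive regression effect of X on Y is spurious, and the adjusted answer only comes out right if you assumed, correctly, that the confounder had to be included. Purely illustrative, using numpy and statsmodels; none of the variable names come from the episode.

```python
# A minimal sketch of a confounded relationship: the "effect" of x on y
# disappears once the confounder is adjusted for - but only if you knew to
# adjust for it, which is exactly the kind of assumption discussed above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
confounder = rng.normal(size=n)            # e.g. an unmeasured specimen property
x = 0.8 * confounder + rng.normal(size=n)  # "treatment", driven by the confounder
y = 1.2 * confounder + rng.normal(size=n)  # true effect of x on y is zero

naive = sm.OLS(y, sm.add_constant(x)).fit()
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, confounder]))).fit()

print("Naive effect of x on y:   ", round(naive.params[1], 3))     # spurious, nonzero
print("Adjusted effect of x on y:", round(adjusted.params[1], 3))  # near zero
```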
00:12:03.440 --> 00:12:11.039
Again, when you think about causal AI for a little bit and you do some research, you realize, well, all of these things have to follow certain assumptions.
00:12:11.039 --> 00:12:14.399
Because if you want to use a method, you have to follow its assumptions.
00:12:14.399 --> 00:12:18.720
And these assumptions without experiments can be very, very hard to justify.
00:12:18.720 --> 00:12:29.519
So you start to think, okay, well, now I have a bigger problem, because now not only can I not rely on correlations, or on whatever we call causal AI, I also don't understand how everything relates to physics.
00:12:29.519 --> 00:12:37.840
And this brings you to philosophy, because philosophy is all about how things happen, and you have to think about it in a way that is not just numbers, but logic.
00:12:37.840 --> 00:12:43.919
And from there you start to think, okay, maybe I'm looking at this AI thing in uh in a very skewed way.
00:12:43.919 --> 00:12:49.600
I really have to think of what the algorithm was meant to do and what is actually meant to produce.
00:12:49.600 --> 00:12:54.480
The algorithm is producing numbers or predictions, but that doesn't have to align with what you have in mind.
00:12:54.480 --> 00:13:02.639
It doesn't say the algorithm predicts spalling because I know concrete, you know, goes through chemical reactions and then eventually degrades and spalls.
00:13:02.639 --> 00:13:07.519
It doesn't have to align with that, it aligns with the way that the algorithm was processed and developed.
00:13:07.519 --> 00:13:12.080
And then this becomes interesting, because for the same prediction you could have different theories.
00:13:12.080 --> 00:13:20.879
One algorithm could predict something because of the way the features interact, and another one could give you the same predictions based on different features.
00:13:20.879 --> 00:13:23.679
So now you have like two windows to the same world.
00:13:23.679 --> 00:13:25.519
One looks at A, one looks at B.
00:13:25.519 --> 00:13:31.759
And this makes you think a lot, because in design we like certainty, but at the end of the day, with AI, we don't really have that much certainty anymore.
00:13:32.080 --> 00:13:33.919
So you basically start with an experiment.
00:13:33.919 --> 00:13:36.399
Your pathway was to start with an experiment.
00:13:36.399 --> 00:13:40.320
You had a lot of them, you start to see some correlations within them.
00:13:40.320 --> 00:13:58.639
Suddenly you start using AI to explore those relationships between different variables, get some higher-level correlations, and you reach some level of explainability of what's happening. That drives you to causality, which drives you to, you know, higher-level thinking about what is happening around you.
00:13:58.639 --> 00:14:07.759
But you still had an experiment at the start of it, or you still had the data at the start of it, and the data was you know made kind of for some purpose.
00:14:07.759 --> 00:14:11.600
Like if I'm doing a fire experiment, I measure temperatures.
00:14:11.600 --> 00:14:17.440
I may not be measuring, you know, the moisture at depths uh of your concrete.
00:14:17.440 --> 00:14:27.600
I'm I'm focusing on some particular things which perhaps someone told me to measure 100 years ago when they designed a standard for fire resistance, you know.
00:14:27.600 --> 00:14:38.000
How much of the end product being the philosophy is narrowed down by the hundred-year-old assumptions in the experiment design?
00:14:38.000 --> 00:14:39.519
Significantly so.
00:14:39.840 --> 00:14:41.440
Significant, yeah, I can imagine.
00:14:41.440 --> 00:14:44.559
I I I mean, but what what can we do about it?
00:14:44.799 --> 00:14:59.200
Well, I think when I was getting interested in philosophy, I found this very nice article on IEEE, I think by a professor actually very close to me, at the University of South Carolina, and she has outlined the history of engineering education from when it started up to now.
00:14:59.200 --> 00:15:11.440
And you can clearly see it: I think 60 or 70 years ago there was a drastic shift, after World War II, where philosophy was taken away from engineering to focus on application, because they had to rebuild, and, you know, the industrial revolution.
00:15:11.440 --> 00:15:13.919
So there wasn't really a lot of time to study philosophy.
00:15:13.919 --> 00:15:19.120
And because of that, we started to shift more toward math, physics, all this, which is great.
00:15:19.120 --> 00:15:23.519
But we lost touch with the context of our methods.
00:15:23.519 --> 00:15:31.279
Like, if somebody would like to use regression, a very simple method, we have at least five hard assumptions that we have to follow.
00:15:31.279 --> 00:15:38.320
And if you look at our data, our methods, even the equations we have in building codes, most of these assumptions are not verified.
00:15:38.320 --> 00:15:40.000
We always have issues with them.
00:15:40.000 --> 00:15:45.840
So, how do you use a method when the assumptions behind this method are limited or not applicable?
00:15:45.840 --> 00:15:50.559
Which means you can use the method, but the outcome is always gonna have some issues.
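For readers who want to see what "checking the assumptions" of regression means in practice, here is a minimal sketch with statsmodels and scipy that tests residual normality, homoscedasticity, and independence for an ordinary least squares fit; the data are synthetic, and the specific list of five assumptions mentioned above is not reproduced here.

```python
# A minimal sketch of verifying a few classical linear-regression assumptions.
# With real test data these checks often fail, which is the point being made above.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 2))          # e.g. load ratio, section size (illustrative)
y = 2.0 + 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.2, 200)

model = sm.OLS(y, sm.add_constant(X)).fit()
resid = model.resid

# 1) Normality of residuals (Shapiro-Wilk)
print("Shapiro-Wilk p-value:", stats.shapiro(resid).pvalue)
# 2) Constant variance of residuals (Breusch-Pagan)
print("Breusch-Pagan p-value:", het_breuschpagan(resid, model.model.exog)[1])
# 3) Independence of residuals (Durbin-Watson, ~2 means little autocorrelation)
print("Durbin-Watson:", durbin_watson(resid))
```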
00:15:50.559 --> 00:16:00.320
And this becomes: okay, well, now the problem is not really the method, the problem is how we understand the method, which means we have to go back and rethink the way that we link philosophy and engineering.
00:16:00.320 --> 00:16:05.360
And we need this right now because AI does a lot of things for us that we don't really understand.
00:16:05.360 --> 00:16:08.399
And if you don't understand the method, you can't apply it properly.
00:16:08.399 --> 00:16:14.559
And if you can't apply it properly, those who do apply it will have issues because they don't understand the whole thing.
00:16:14.799 --> 00:16:25.120
Could you give me an example of that, a practical example of some research task that you've done where you went through that pathway and identified those limitations?
00:16:25.679 --> 00:16:25.919
Yeah.
00:16:25.919 --> 00:16:31.200
The simplest example is with what we were talking about a little bit earlier, explainable AI.
00:16:31.200 --> 00:16:44.159
We have a data set, we have an algorithm, it becomes a model, we predict, and we use explainable AI to tell us the algorithm or the model arrived at this prediction because it saw those features.
00:16:44.159 --> 00:16:51.039
This tells me that if I change the data set with fake numbers, I can still have the same prediction.
00:16:51.039 --> 00:16:54.080
And this prediction will also be accurate because there's no physics.
00:16:54.320 --> 00:17:12.160
You mean, so you train the model on some data set and then you feed it a set of completely made-up numbers, and because it has a pathway from numbers to outcome, it's still gonna predict, like the true definition of garbage in, garbage out.
00:17:12.480 --> 00:17:12.799
Exactly.
00:17:12.799 --> 00:17:15.680
And in philosophy, this is called predictive ignorance.
00:17:15.680 --> 00:17:21.519
You're predicting things very well, but you're ignorant about why and how, because the method is faulty.
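A minimal sketch of that failure mode, with everything synthetic and illustrative: a model trained on one data set will happily return confident predictions even for physically impossible inputs, because nothing in the pipeline encodes physics. The feature names and label rule are made up for the example.

```python
# A minimal sketch of "garbage in, garbage out" / predictive ignorance:
# the model answers confidently for nonsense inputs, with no physics anywhere.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
train = pd.DataFrame({
    "strength_MPa": rng.uniform(30, 120, 300),
    "moisture_pct": rng.uniform(1, 6, 300),
    "load_ratio":   rng.uniform(0.2, 0.8, 300),
})
# Fabricated "spalling" label from a simple rule, purely for illustration.
spalled = ((train["strength_MPa"] > 80) & (train["moisture_pct"] > 3)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(train, spalled)

# Physically meaningless inputs: negative strength, 40% moisture, load ratio of 5.
nonsense = pd.DataFrame({"strength_MPa": [-50.0],
                         "moisture_pct": [40.0],
                         "load_ratio":   [5.0]})
print("Predicted class:        ", model.predict(nonsense))
print("Predicted probabilities:", model.predict_proba(nonsense))
# The model never objects - it only knows the pathway from numbers to outcome.
```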
00:17:21.519 --> 00:17:27.920
And for engineers, when somebody collects a lot of data, massive data, they spend a lot of money doing experiments.
00:17:27.920 --> 00:17:33.839
So they build the experiment, they spend all this money, they build algorithms, now they have models, they use explainable AI.
00:17:33.839 --> 00:17:46.960
The problem becomes when the algorithm says this specimen has spalled because concrete strength is a big feature, or because heat is a big feature, and this matches what you already know from physics.
00:17:46.960 --> 00:17:55.039
We think the algorithm gave us something new, but this is called confirmation bias because the algorithm confirms what you're biased towards in the beginning.
00:17:55.039 --> 00:17:59.200
You already know concrete is gonna be a problem, heat is gonna be a problem.
00:17:59.200 --> 00:18:03.920
And now you see the algorithm confirms this notion, and you think, oh, the algorithm works well.
00:18:03.920 --> 00:18:05.839
In reality, it doesn't work well.
00:18:05.839 --> 00:18:08.160
The problem becomes your data.
00:18:08.160 --> 00:18:12.240
When you did the experiments, we knew strength was going to be our problem, heat was going to be our problem.
00:18:12.240 --> 00:18:15.839
So we design the experiments in a way that controls these features.
00:18:15.839 --> 00:18:21.200
And because we control them this way, these are the only features that change in our data set.
00:18:21.200 --> 00:18:33.680
If you want to do a completely new experiment where you don't control features, then this becomes interesting to see because now you're not really imposing any bias on the experimental design, nor on the algorithmic design.
00:18:34.000 --> 00:18:34.960
What do you mean?
00:18:34.960 --> 00:18:36.960
How would you design such an experiment?
00:18:37.119 --> 00:18:51.279
Just random sampling of the input variables that go in there; like, you would basically randomize the strength of your concrete, the moisture content. But when we do experiments now, say in fire especially, the problem becomes that it's very expensive.
00:18:51.279 --> 00:19:01.839
So maybe we have six columns, and of these six columns you're gonna have maybe, I don't know, three normal-strength concrete and three UHPC, that is, ultra-high-performance concrete.
00:19:01.839 --> 00:19:05.039
So we're limited with the number of samples, we're limited with the type.
00:19:05.039 --> 00:19:06.720
We have one fire exposure.
00:19:06.720 --> 00:19:10.319
But what about all the others? What about changing the moisture content?
00:19:10.319 --> 00:19:13.440
What about changing the loading for each kind of those variables?
00:19:13.440 --> 00:19:18.880
So our data sets will always have the main features limited, and we can't see the whole picture.
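One way to formalize the "random sampling of input variables" idea is space-filling sampling over assumed ranges; below is a minimal sketch using Latin hypercube sampling from scipy. The variables, ranges, and specimen count are illustrative assumptions, not a real test plan.

```python
# A minimal sketch of a randomized experimental design: instead of fixing the
# "main" features by hand, sample the whole input space with a Latin hypercube.
import numpy as np
from scipy.stats import qmc

ranges = {
    "strength_MPa": (25.0, 150.0),
    "moisture_pct": (1.0, 8.0),
    "load_ratio":   (0.1, 0.9),
    "heating_rate": (5.0, 60.0),   # degC/min, illustrative
}

sampler = qmc.LatinHypercube(d=len(ranges), seed=0)
unit_samples = sampler.random(n=12)   # 12 hypothetical specimens

lows = np.array([lo for lo, _ in ranges.values()])
highs = np.array([hi for _, hi in ranges.values()])
design = qmc.scale(unit_samples, lows, highs)

# Each row is one specimen's combination of input variables.
for row in design:
    print({name: round(value, 1) for name, value in zip(ranges, row)})
```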
00:19:19.359 --> 00:19:20.480
I I see the same thing.
00:19:20.480 --> 00:19:24.319
I'm doing a lot of vehicle fires and car parks and stuff like that.
00:19:24.319 --> 00:19:30.480
And I see this kind of experimental bias when people are doing uh electric vehicles, for example.
00:19:30.480 --> 00:19:35.440
Because like you get an electric vehicle to burn, where do you start the fire?
00:19:35.440 --> 00:19:38.480
Come on, in the battery, like where else?
00:19:38.480 --> 00:19:43.759
Like what I'm gonna do with a completely burned vehicle and an untouched battery in the end.
00:19:43.759 --> 00:19:53.440
You start the fire in the battery, and then what you end up with is a massive data set of all the ways, the non-natural ways, a fire could start in a battery.
00:19:53.440 --> 00:20:01.920
And almost zero data points on what happens with an electric vehicle when the fire starts in the arch above your wheel.
00:20:01.920 --> 00:20:04.079
And we have no idea.
00:20:04.559 --> 00:20:09.519
No, I mean, it's also part of the limitation that we have to admit to.
00:20:09.519 --> 00:20:12.400
Like we can't really design experiments for every single variable.
00:20:12.400 --> 00:20:18.960
We have to use a little bit of knowledge to try to zoom in into some of the highly valued variables, and then we go from there.
00:20:18.960 --> 00:20:29.680
But I'm trying to say if somebody would like to really sit down and think about like the essence of the problem, we can't just be relying on what we think is right or what we think is most likely.
00:20:29.680 --> 00:20:32.000
We have to have a much larger picture.
00:20:32.000 --> 00:20:40.960
Think of it this way: when somebody's trying to create a drug in medicine, it's very, very hard to come up with an experiment, and this is why we use randomized trials.
00:20:40.960 --> 00:20:47.920
And you have thousands of people signing up, for vaccines, for drugs, because people are different from specimens, from beams and columns.
00:20:47.920 --> 00:20:57.599
And because you can't control people, their genes and DNA, you end up with thousands and thousands of people in those trials, medical trials.
00:20:57.599 --> 00:21:10.319
In engineering, we end up with a handful of specimens just because we think, you know, the mix is fine, the strength is fine, the moisture content is fine, and we don't really verify.
00:21:10.319 --> 00:21:13.359
And this makes our our window very, very small.
00:21:13.359 --> 00:21:15.920
So all the models we have have problems.
00:21:15.920 --> 00:21:33.519
If you think about this, if you take the equations for predicting the fire resistance of columns in ACI, Eurocode 2, and the Australian code, and you apply these three equations, it's highly unlikely that they will all agree on the same number or on the same range for one column.
00:21:33.519 --> 00:21:40.640
So this tells us basically that the problem is not that we don't know physics, because physics is the same in the US, in Europe, and in Australia.
00:21:40.640 --> 00:21:46.799
The representation of physics, the equations, is gonna be the problem, and that representation comes from our experiments.
00:21:47.119 --> 00:21:59.279
But yeah, I find this kind of challenging, because if I pursue your logic, that would mean that the moment I decide what I am looking at in my experiment, I've kind of narrowed it down already.
00:21:59.279 --> 00:22:03.359
Like if I'm designing an experiment for a fire resistance.
00:22:03.359 --> 00:22:09.279
I've already narrowed it down like significantly, and perhaps I'm losing the discovery.
00:22:09.279 --> 00:22:14.079
Like which is like I love it because I'm doing a lot of exploratory experiments.
00:22:14.079 --> 00:22:24.720
People started questioning it: you're doing it incorrectly, like, the standard says otherwise, you used the wrong heat flux or whatever.
00:22:24.720 --> 00:22:26.079
No, no, this is exploratory.
00:22:26.079 --> 00:22:28.480
I'm just I just wonder what's gonna happen.
00:22:28.480 --> 00:22:30.880
Like uh perhaps we need more of that in fire.
00:22:31.119 --> 00:22:34.480
Yeah, and my only concern is the following.
00:22:34.480 --> 00:22:41.200
I'm always hesitant to say that, because I've done an experiment, this is the cause.
00:22:41.200 --> 00:22:42.400
This is not the cause.
00:22:42.400 --> 00:22:46.480
This is the cause based on your own, you know, experimental setup and data.
00:22:46.480 --> 00:22:50.559
So generalizing this beyond that becomes a problem, at least to me.
00:22:50.559 --> 00:22:54.559
But you know, we all have limitations with experiments, and they tend to be very expensive.
00:22:54.559 --> 00:23:02.400
But like if you really would like to think about it from a different perspective, having more and looking at the problem from a fresh look is always going to be helpful.
00:23:02.400 --> 00:23:13.279
This is why now you hear a lot about those publications, like in the news: somebody used AI to solve this mathematical conjecture that has been there for like 200 years and nobody was able to solve.
00:23:13.279 --> 00:23:14.880
Well, how did AI solve it?
00:23:14.880 --> 00:23:18.559
It's because you're looking at the problem from a completely fresh perspective.
00:23:18.559 --> 00:23:24.720
And this might help you, for the most part, to find unique solutions and new ways to think about it.
00:23:24.720 --> 00:23:30.640
And this might be helpful also to us in engineering, because we have many problems that we don't really know how to solve.
00:23:30.640 --> 00:23:37.200
And honestly, even the building codes we have, they have a big component that is based on tables of empirical equations.
00:23:37.200 --> 00:23:43.920
Empirical equations do not mean physics; they mean observation, which means domain knowledge and, you know, modeling.
00:23:44.160 --> 00:23:54.160
In that case, like if we agree that this is true, do you think there is still new knowledge to find within those classical sets of experiments or data sets?
00:23:54.160 --> 00:24:04.079
Like, imagine you're a researcher and basically you have no ability to collect new data; you only have what's already in the literature.
00:24:04.079 --> 00:24:09.039
How much new knowledge is there to find with AI?
00:24:09.039 --> 00:24:12.160
Because you know, I see a lot of papers.
00:24:12.160 --> 00:24:15.279
I'm an avid reviewer of papers.
00:24:15.279 --> 00:24:21.759
I'm Reviewer 2, apologies to the listeners and to those who have faced me.
00:24:21.759 --> 00:24:24.799
But uh, you know, I I get a lot of those papers.
00:24:24.799 --> 00:24:30.400
Like we have applied this kind of algorithm to this data set, and we found something obvious.
00:24:30.400 --> 00:24:38.559
And I'm like, you know, well, just the fact that you've applied machine learning on something, this is not yet novelty, this is not new science.
00:24:38.559 --> 00:24:41.599
I wonder, I have a data set.
00:24:41.599 --> 00:24:46.079
What new things could AI realistically show me about that data set?