Nov. 26, 2025

228 - Quantifying the expected utility of fire tests with Andrea Franchini

What do you expect from running a fire test? I would hope it improves my state of knowledge. But does it? We often pursue tests blindly, yet it turns out there is a way to choose them in an informed way.

In this episode we explore a rigorous, practical way to select and design experiments by asking a sharper question: which test delivers the most decision-changing information for the least cost, time, and impact. With Dr. Andrea Franchini of Ghent University, we unpack a Bayesian framework that simulates possible outcomes before you touch a sample, updates your state of knowledge, and quantifies the utility of that update as uncertainty reduction, economic value, or environmental benefit.

First, we reframe testing around information gain. Starting from a prior distribution for the parameter you care about, we model candidate experiments and compute how each would shift the posterior. The gap between prior and posterior is the signal; diminishing returns tell you when to stop. In the cone calorimeter case on PMMA ignition time, early trials yield large gains, then the curve flattens, revealing a rational stopping point and a transparent way to plan sample counts and budgets. The same structure scales from simple statistical models to high-fidelity or surrogate models when physics and geometry matter.
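The prior-to-posterior gap mentioned above can be made concrete. As a hedged sketch (the distributions and numbers below are illustrative assumptions, not values from the episode or paper), the Kullback-Leibler divergence between two normal distributions is one common way to score that gap:

```python
import math

# Hedged sketch: score "the gap between prior and posterior" as the
# KL divergence between two normal distributions (nats).
# All numbers are assumed for illustration.
def kl_normal(m1, s1, m0, s0):
    """KL(posterior || prior) for two normal distributions, in nats."""
    return math.log(s0 / s1) + (s1**2 + (m1 - m0)**2) / (2 * s0**2) - 0.5

prior = (210.0, 30.0)       # assumed prior: mean 210 s, sd 30 s
post_small = (207.0, 17.0)  # assumed posterior after a few tests
post_more = (206.0, 16.0)   # assumed posterior after a few more tests

gain_1 = kl_normal(*post_small, *prior)
gain_2 = kl_normal(*post_more, *prior)
# The second batch adds little beyond the first: diminishing returns.
print(round(gain_1, 3), round(gain_2 - gain_1, 3))
```

With these invented numbers, the extra tests buy far less information than the first batch did, which is exactly the flattening curve the episode describes.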

Then we tackle a post-fire decision with real financial stakes: repair a reinforced concrete slab, or accept residual risk. We connect Eurocode-based thermal analysis to two test options—rebound hammer temperature proxies and discoloration depth—and compute their value of information. By translating updated probabilities of exceeding 600°C into expected costs of repair versus undetected failure, we show how to choose the test that pays back the most. In the studied scenario, the rebound hammer provides higher value, even after accounting for testing costs, but the framework adapts to different buildings, cost ratios, and risk appetites.
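The cost side of this comparison can be sketched in a few lines. The following is an illustrative calculation only: the prior probability, costs, and test accuracies are invented numbers, not the paper's values, and the binary sensitivity/specificity test model is a simplification of the actual Eurocode-linked analysis:

```python
# Illustrative value-of-information sketch (all numbers assumed): decide
# whether to repair a fire-exposed slab or accept the residual risk,
# where the bad state is "concrete exceeded 600°C".
P_BAD = 0.30          # assumed prior P(exceeded 600°C)
C_REPAIR = 50_000     # assumed cost of repairing the slab
C_FAIL = 500_000      # assumed cost of an undetected failure

def expected_cost_no_test():
    # Without testing, pick the cheaper of "always repair" vs "accept risk".
    return min(C_REPAIR, P_BAD * C_FAIL)

def value_of_information(sensitivity, specificity, test_cost):
    """Expected cost reduction from a binary test, minus the test cost."""
    # Probability the test flags the slab as damaged
    p_pos = sensitivity * P_BAD + (1 - specificity) * (1 - P_BAD)
    p_neg = 1 - p_pos
    # Posterior P(bad) after each outcome (Bayes' theorem)
    p_bad_pos = sensitivity * P_BAD / p_pos
    p_bad_neg = (1 - sensitivity) * P_BAD / p_neg
    # Act optimally after each outcome, then average over outcomes
    cost_with_test = (p_pos * min(C_REPAIR, p_bad_pos * C_FAIL)
                      + p_neg * min(C_REPAIR, p_bad_neg * C_FAIL))
    return expected_cost_no_test() - cost_with_test - test_cost

# A more accurate but pricier test vs a cheap, blunt one (assumed numbers):
print(value_of_information(0.90, 0.90, 2_000))
print(value_of_information(0.70, 0.70, 500))
```

With these invented numbers, the accurate test pays for itself while the blunt one does not, mirroring the episode's point that the better-paying test can be identified before any measurement is made.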

Beyond pass-fail, this approach helps optimize sensor layouts, justify added instrumentation, and balance multiple objectives—uncertainty, money, and environmental impact—without slipping into guesswork. If you’re ready to move from ritual testing to evidence that changes outcomes, this conversation maps the path. 

Papers to read after this:

----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.

00:00 - Setting The Mission: Better Fire Tests

03:20 - The ATEST Project And Big Picture

06:31 - Expected Utility Versus Traditional Testing

10:12 - Bayesian Updating: Priors To Posteriors

14:35 - Modeling Experiments Before They Happen

18:23 - Cone Calorimeter Case: Ignition Time

22:24 - Information Gain And Diminishing Returns

26:05 - Expanding Design Parameters And Complexity

30:03 - Outliers, Likelihoods, And Data Quality

34:05 - Utility Defined: Uncertainty, Cost, Environment

38:08 - Post‑Fire Assessment: Concrete Slab Decision

42:15 - Linking Tests To Eurocode Models

46:00 - Value Of Information And Economic Payoff

50:00 - Case Result: Rebound Hammer Beats Discoloration

53:05 - Generality, Assumptions, And Cost Models

WEBVTT

00:00:00.320 --> 00:00:02.720
Hello everybody, welcome to the Fire Science Show.

00:00:02.720 --> 00:00:12.800
In today's episode, we will be discussing how to pick the best tests to fulfill your needs, whatever the needs are in your fire safety engineering.

00:00:12.800 --> 00:00:15.439
And I've invited a nice guest, Dr.

00:00:15.439 --> 00:00:19.679
Andrea Franchini from University of Ghent to talk about this.

00:00:19.679 --> 00:00:27.120
And you may think we will be ranking test methods or giving you comparison between standards, and no, we're not gonna do that.

00:00:27.120 --> 00:00:40.640
Because what Andrea is doing is something quite more fundamental, which is gonna help those considerations about what test to use at pretty much any decision level you would be making.

00:00:40.640 --> 00:00:45.200
It is a part of a larger project, ATEST, by Dr.

00:00:45.200 --> 00:00:46.000
Ruben van Coile.

00:00:46.560 --> 00:00:50.640
Ruben is leading an ERC starting grant.

00:00:50.640 --> 00:00:53.840
We have discussed this in the podcast, I believe, two years ago.

00:00:53.840 --> 00:01:02.240
We had uh this uh podcast episode when Ruben was announcing his project and uh giving me an introduction about what he wants to do.

00:01:02.240 --> 00:01:18.719
Uh, some years have passed, some work has been done, great work has been done, a massive paper was published, and here we are today, able to discuss the mathematical approach that is being used to decide upon the test methods.

00:01:18.719 --> 00:01:40.400
In this approach, instead of just thinking about the potential outcomes of a test, or about the pass-fail criteria or other ways we would normally measure, you use those outcomes to inform what kind of information gain the test can give.

00:01:40.400 --> 00:01:52.319
This is a very specific term and Andrea will explain it later, but it roughly describes how much your expectations or uncertainties will move as you pursue the test.

00:01:52.319 --> 00:02:03.200
Because if you would like to do a test and it's not gonna move your knowledge even a bit, it's not gonna inform your decisions anymore, so what's the point of running it?

00:02:03.200 --> 00:02:13.919
And if you have a limited budget, you would like to spend that money on the tests that will move your knowledge the most, so reduce the uncertainties the most or improve some other metric.

00:02:13.919 --> 00:02:18.960
It's quite some complex mathematics, and it's not an easy episode, be warned.

00:02:18.960 --> 00:02:32.879
But I think, looking at the framework that Ruben was proposing and looking today at the first outcomes, the proposals and the case study paper, I see this as a really, really good way forward.

00:02:32.879 --> 00:02:36.000
It's gonna be a challenge to introduce it into our paradigm.

00:02:36.000 --> 00:02:38.400
It doesn't work with our paradigm of fire testing.

00:02:38.400 --> 00:02:43.039
We need to change the paradigm to do this, but perhaps this is a good reason to change it.

00:02:43.039 --> 00:02:50.879
Um, in the episode we go through two case studies, one related to cone calorimetry, one related to post-fire investigations.

00:02:50.879 --> 00:02:58.159
While you're listening to the episode, keep in mind there's a Fire Safety Journal paper that covers the exact details of those two case studies.

00:02:58.159 --> 00:03:02.560
So if you have a chance to go through the Fire Safety Journal paper, I would recommend doing that.

00:03:02.560 --> 00:03:08.400
And uh for now, please enjoy learning from Andrea about what has been done.

00:03:08.400 --> 00:03:11.520
Let's spin the intro and jump into the episode.

00:03:15.280 --> 00:03:17.360
Welcome to the Fire Science Show.

00:03:17.360 --> 00:03:21.039
My name is Wojciech Wegrzynski, and I will be your host.

00:03:36.319 --> 00:03:50.240
The Fire Science Show is into its third year of continued support from its sponsor OFR Consultants, who are an independent multi-award-winning fire engineering consultancy with a reputation for delivering innovative safety-driven solutions.

00:03:50.240 --> 00:04:04.000
As the UK's leading independent fire consultancy, OFR's globally established team have developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property, and the planet.

00:04:04.000 --> 00:04:20.240
Established in the UK in 2016 as a startup business by two highly experienced fire engineering consultants, the business continues to grow at a phenomenal rate with offices across the country in eight locations, from Edinburgh to Bath, and plans for future expansions.

00:04:20.240 --> 00:04:28.959
If you're keen to find out more or join OFR consultants during this exciting period of growth, visit their website at OFRConsultants.com.

00:04:28.959 --> 00:04:31.040
And now back to the episode.

00:04:31.040 --> 00:04:32.160
Hello everybody.

00:04:32.160 --> 00:04:36.000
I am joined today by Andrea Franchini from Ghent University.

00:04:36.000 --> 00:04:37.519
Hey Andrea, good to have you in the show.

00:04:37.759 --> 00:04:38.000
Hi, Wojciech.

00:04:38.639 --> 00:04:39.519
Thanks for having me.

00:04:39.839 --> 00:04:46.720
Thanks for coming to discuss your interesting research, carried out as a postdoc in Ruben van Coile's ERC grant.

00:04:46.720 --> 00:04:48.480
How's working on an ERC grant?

00:04:48.480 --> 00:04:51.680
We've advertised the position with Ruben in the podcast.

00:04:51.680 --> 00:04:53.519
I hope we have not overpromised.

00:04:54.000 --> 00:04:54.480
Yeah, yeah.

00:04:54.480 --> 00:04:57.680
I listened to the podcast before applying for the position.

00:04:57.680 --> 00:04:59.600
And yeah, so far it has been great.

00:04:59.600 --> 00:05:03.199
It's a very mentally challenging and uh interesting project.

00:05:03.199 --> 00:05:10.160
And he gave us the opportunity to collaborate with a lot of people, so it's a very, very interesting project on my side.

00:05:10.480 --> 00:05:11.360
Yeah, super happy.

00:05:11.360 --> 00:05:21.519
And now seeing the first outcomes, or maybe not the first, but seeing the first big outcomes of the project that we're gonna discuss today, I'm I'm really excited for the next years of your research.

00:05:21.519 --> 00:05:26.399
So, Ruben had this presentation at the ESFSS conference in Ljubljana this year.

00:05:26.399 --> 00:05:31.120
Uh, he had a keynote on quantifying the utility of fire tests.

00:05:31.120 --> 00:05:38.000
I've talked with him and he told me that you've done all the hard work, so he wants you to speak on that, which I appreciate.

00:05:38.000 --> 00:05:48.079
Uh, we know that the concept of Ruben's ERC, the whole goal of ATEST, was to figure out how we should test in the future in such a way that it's the best.

00:05:48.079 --> 00:05:50.160
But now you have a framework.

00:05:50.160 --> 00:05:56.800
So let's introduce the listeners to the framework and perhaps let's start with the expected utility.

00:05:56.800 --> 00:05:58.000
I like that keyword.

00:05:58.000 --> 00:06:00.639
So so maybe let's start there and see where we get.

00:06:00.959 --> 00:06:01.920
Yeah, yeah, sure.

00:06:01.920 --> 00:06:07.120
So let me start with repeating the key problem we're trying to address.

00:06:07.120 --> 00:06:17.199
So we want to understand which experimental protocol is best, meaning which experimental protocol an experimenter should choose among all the options that he has.

00:06:17.199 --> 00:06:19.199
Should he go for uh a furnace test?

00:06:19.199 --> 00:06:22.000
Should he run a test in a cone calorimeter?

00:06:22.000 --> 00:06:22.879
Which one?

00:06:23.279 --> 00:06:24.720
Assuming you have a freedom to choose.

00:06:24.959 --> 00:06:26.240
Assuming you have the freedom to choose.

00:06:26.240 --> 00:06:30.319
Yes, that's part of the ATEST framework we envision.

00:06:30.319 --> 00:06:42.399
So, assuming that you have this freedom, the choice among alternative experimental protocols is challenging because of several reasons, including that experimental outcomes are uncertain.

00:06:42.399 --> 00:06:45.600
Experiments are costly and time consuming.

00:06:45.600 --> 00:06:47.920
Probably you will know that better than me.

00:06:47.920 --> 00:06:54.959
And you can also have an environmental impact of your experiments, which you may want to account in your decision making.

00:06:54.959 --> 00:07:10.319
So the scope of our framework is to answer the question of which experimental protocol is best and should be chosen, by quantifying the expected utility of that experimental protocol before we actually conduct the experiment.

00:07:10.319 --> 00:07:41.279
So, in other words, we want to assess the potential benefit of collecting additional information through the experimental protocol we are planning, and we want to do that before doing the experiment, incorporating the available knowledge we have about the parameters we want to study and reflecting the experimental goals, which could be reducing uncertainty, reducing the economic impact of this uncertainty, or reducing, for example, the environmental impact of this uncertainty.

00:07:41.279 --> 00:07:48.240
So, in this sense, the expected utility you quantify captures the scopes of your experiment.

00:07:48.480 --> 00:07:52.399
And that utility is directly linked to your design goal.

00:07:52.399 --> 00:08:05.920
So, in what case would you be using that method? For example, you have to apply some very specific technology, and, I don't know, you want to know which product is the best fit, and then you figure out which tests to do for that.

00:08:06.240 --> 00:08:08.800
Yeah, so this applies to both tests.

00:08:08.800 --> 00:08:15.199
So you want to, for example, demonstrate compliance or you want to classify a product, and it also applies to experiments.

00:08:15.199 --> 00:08:26.160
So for uh explorative experiments, you want to, for example, optimize your testing setup, you want to decide where you should put your sensors, and the utility definition aligns with your scope.

00:08:26.160 --> 00:08:30.879
So, for example, let's say you want to reduce uncertainty about some parameter.

00:08:30.879 --> 00:08:43.600
In that case, you can define a utility function that quantifies how much, in expectation, the outcomes of your experiment will reduce the uncertainty in those target parameters that you're tackling.

00:08:44.000 --> 00:08:54.240
Now that you said that: I had the same feeling when Ruben was presenting in Ljubljana, that this is a kind of universal framework which you can twist into different settings.

00:08:54.240 --> 00:09:00.320
You've uh twisted and fine-tuned it into uh fire testing and uh and and fire safety engineering.

00:09:00.320 --> 00:09:01.519
But indeed, you know what?

00:09:01.519 --> 00:09:07.919
When I when I heard Ruben's talk, I immediately thought about CFD and zone model uh dilemma.

00:09:07.919 --> 00:09:12.799
Like, is it better to run one CFD simulation or a thousand zone models in the same time?

00:09:12.799 --> 00:09:23.039
You know, because it's also a kind of a challenge where you have different levels of insight into the problem and cost limitations.

00:09:23.039 --> 00:09:24.240
I I love it, I love it.

00:09:24.240 --> 00:09:28.559
But let's go back to the tests, to your applications that you've described.

00:09:28.559 --> 00:09:34.879
So in your paper, you have uh some practical examples how these have been implemented.

00:09:34.879 --> 00:09:38.240
So we'll probably discuss more of those in the discussion today.

00:09:38.240 --> 00:09:46.799
Perhaps let's introduce the listeners to a representative problem which we could then solve through the discussion, applying your your your thing.

00:09:46.799 --> 00:09:49.200
So uh let's let's go to the case study.

00:09:49.440 --> 00:10:07.039
Yeah, so for example, one of the case studies we presented pertains to cone calorimeter testing, and we wanted to understand how many tests we should run with the cone calorimeter in order to reduce uncertainty in estimating the ignition time of a PMMA batch.

00:10:07.039 --> 00:10:08.480
That was the idea.

00:10:08.480 --> 00:10:26.080
And to answer this question, we apply the framework that we are discussing, and we define a utility metric that captures the amount of uncertainty you have in different states of knowledge, before you do the experiment and after you do the experiment.

00:10:26.080 --> 00:10:40.720
And then using this calculation that we can discuss more in detail uh later, you basically get an estimate of the number of tests that in expectation will minimize your uncertainty in the prediction of this ignition time.

00:10:40.960 --> 00:10:46.960
Okay, okay, but what are the things that you are playing with, for example, in the cone calorimeter?

00:10:46.960 --> 00:10:57.679
Because, like, I mean, one way you could do it, the the classical way I would do it is I would just run the tests until I get some sort of convergence and say, okay, you know, this seems enough.

00:10:57.679 --> 00:11:03.759
But of course, uh, I do not know when the convergence will happen uh before I start doing that.

00:11:03.759 --> 00:11:04.720
So exactly.

00:11:04.720 --> 00:11:09.200
So this allows me to predict that time when the convergence happens, okay.

00:11:09.200 --> 00:11:10.159
So what do you do with it?

00:11:10.399 --> 00:11:13.840
Yeah, maybe I can explain more about what we mean by experimental protocol.

00:11:13.840 --> 00:11:18.799
Yes, so by experimental protocol, we mean a combination of experimental procedures.

00:11:18.799 --> 00:11:25.039
So you may have the cone calorimeter, you may have the furnace test, and so on, and experimental design parameters.

00:11:25.039 --> 00:11:28.960
So, how do you tune your testing setup to run the experiment?

00:11:28.960 --> 00:11:33.759
For example, in the cone calorimeter, what is the heat flux exposure that you're going to use?

00:11:33.759 --> 00:11:39.679
In a different test, you may choose the temperature to which you're going to expose the sample, and so on.

00:11:39.679 --> 00:11:50.480
And in the case study of the ignition time that we have introduced, we say, okay, we already decided we want to use the cone calorimeter, so that's a fixed parameter.

00:11:50.480 --> 00:11:58.559
And the only experimental design parameter that we want to investigate is the number of cone calorimeter tests that we should run.

00:11:58.559 --> 00:12:03.519
So in this case, we just limit the analysis to one experimental design parameter.

00:12:03.519 --> 00:12:11.759
And we want to calculate how many tests we should do so that the uncertainty in predicting the ignition time is minimized.

00:12:11.759 --> 00:12:13.360
That's the that's the idea.

00:12:13.360 --> 00:12:17.360
But in in principle, you can include many more experimental design parameters.

00:12:17.360 --> 00:12:24.159
For example, the heat flux and other variables that you can play with in your experimental setup.

00:12:24.399 --> 00:12:31.120
And uh the procedure will uh look very very similar, no matter how many outcome variables you include.

00:12:31.120 --> 00:12:32.320
What changes?

00:12:32.639 --> 00:12:35.200
Yeah, so conceptually it works the same.

00:12:35.200 --> 00:12:48.559
The problem is that it becomes more challenging computationally and also conceptually, I think, if you have many design parameters, because you need to build models that are able to capture all your experimental design parameters.

00:12:48.559 --> 00:12:51.200
So conceptually, it works exactly the same.

00:12:51.200 --> 00:13:01.120
You can include as many experimental design parameters as you want, but you need to formulate the problem in a way such that it can account for all these experimental design parameters.

00:13:01.120 --> 00:13:04.159
You will need models that reproduce your experiment.

00:13:04.159 --> 00:13:08.799
So, whatever you want to optimize, you need a model that is able to capture that parameter.

00:13:09.200 --> 00:13:10.639
Yeah, that's what I wanted to ask.

00:13:10.639 --> 00:13:18.000
So, optimally, it's a physical model. But do you need to also know the shape of the distribution of the outcome variable?

00:13:18.240 --> 00:13:26.240
Yeah, so maybe I can give you an overview of how the analysis works so that we know all the different uh pieces that we need.

00:13:26.799 --> 00:13:35.759
Maybe let's even step one step back, because I feel it's gonna be an interesting and intellectually engaging and challenging discussion.

00:13:35.759 --> 00:13:38.320
You use Bayesian framework for this.

00:13:38.320 --> 00:13:43.840
So maybe let's start with the framework itself and then build up into how this is applied.

00:13:43.840 --> 00:13:50.000
And let's try to keep it all in the time-to-ignition-of-PMMA setting that we've set up.

00:13:50.000 --> 00:13:50.720
So that's great.

00:13:51.200 --> 00:13:57.919
Yes, Bayesian analysis is at the core of the methodology, and there are three main ideas in this methodology.

00:13:57.919 --> 00:14:05.360
One is the concept of state of knowledge, the second one is the concept of Bayesian analysis, and then the concept of utility.

00:14:05.360 --> 00:14:08.639
So let me link these three concepts for you.

00:14:08.639 --> 00:14:15.600
So we generally express our uncertain state of knowledge in terms of probability distributions.

00:14:15.600 --> 00:14:28.720
For example, back to the ignition time of PMMA, you may say, I have uncertainty about the ignition time, and to describe this uncertainty, I assign a distribution, let's say a normal distribution with a mean and a standard deviation.

00:14:29.039 --> 00:14:34.159
And then people do this all the time when they say like it's uh one minute plus minus 10 seconds.

00:14:34.159 --> 00:14:39.759
That's already a distribution you've given in this in this information, even if you don't think it's a distribution, you know.

00:14:39.759 --> 00:14:40.080
Okay.

00:14:40.480 --> 00:14:40.960
Absolutely.

00:14:40.960 --> 00:14:41.279
Yes.

00:14:41.279 --> 00:14:49.120
So you basically express your state of knowledge in terms of probability distributions that reflect uncertainty you have about some parameters.

00:14:49.120 --> 00:14:52.639
And the second concept that I introduced is Bayesian analysis.

00:14:52.639 --> 00:14:53.759
So, what's that?

00:14:53.759 --> 00:14:59.840
It's a method to compute and update probabilities after obtaining new data.

00:14:59.840 --> 00:15:05.759
Okay, so all of Bayesian analysis is based on Bayes' theorem.

00:15:05.759 --> 00:15:13.120
And the key idea of Bayes theorem is that evidence should not determine your state of knowledge, but it should update it.

00:15:13.120 --> 00:15:14.320
So, what does it mean?

00:15:14.320 --> 00:15:32.960
It means that, as we said, we assign a distribution to represent our current state of knowledge; we do the experiment, we get some data, and we use Bayes' theorem to get a second distribution that reflects our updated state of knowledge based on the observed experimental outcome.

00:15:32.960 --> 00:15:41.840
And for example, if you go back to the cone calorimeter testing, you have a distribution that reflects your knowledge of the ignition time before you do the experiment.

00:15:42.159 --> 00:15:45.200
So let's say it's 210 seconds plus minus 30.

00:15:45.440 --> 00:15:46.159
Yes, exactly.

00:15:46.159 --> 00:15:52.559
You can say it's a normal distribution with mean 210, as you said, and a standard deviation, as you said.

00:15:52.559 --> 00:16:06.159
So you do the experiment, you get some data points, and using Bayes' theorem, you can calculate a second probability distribution that somehow accounts for this updated knowledge that you get from your new data.

00:16:06.159 --> 00:16:12.480
So you get a second distribution that represents your updated state of knowledge after you do the experiment.

00:16:12.879 --> 00:16:16.399
So for example, now it's 207 plus minus 17.

00:16:16.399 --> 00:16:35.200
I do another one and I find it's plus minus 16, but eventually, every time I do a new one, I just get the result within my standard deviation, it fits the pattern, and eventually I end up with a final normal distribution curve, which does not really change that much every time I run the experiment, potentially.

00:16:35.440 --> 00:16:39.360
Yeah, it may change or it may not, because it depends on what your experiment shows, yes.

00:16:39.360 --> 00:16:40.639
Yeah, yeah, exactly.

00:16:40.639 --> 00:16:51.200
So if your experiments confirm your state of knowledge, meaning, for example, you get a result very close to your mean, to your estimated mean before you do the experiment, the distribution will remain essentially the same.

00:16:51.200 --> 00:17:01.440
But if you get, for example, an experimental data point very far from your initial state of knowledge, your distribution will somehow translate towards that value.

00:17:01.440 --> 00:17:11.279
So you will get an updated state of knowledge that tries to average your prior knowledge and the evidence that you get from the experiment.
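What Andrea describes here is, in the simplest case, the conjugate normal update. A minimal sketch with assumed numbers (the 210 s prior mean echoes the conversation, but the 30 s prior spread and 25 s test scatter are assumptions, not the case study's values):

```python
import math

# Hedged sketch: sequential Bayesian updating of a normal prior on the
# mean ignition time, with known test scatter (all numbers assumed).
def update(prior_mean, prior_sd, obs, noise_sd=25.0):
    """One normal-normal conjugate update with a single observation."""
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)   # weight on the data
    post_mean = prior_mean + w * (obs - prior_mean)
    post_sd = math.sqrt((1 - w) * prior_sd**2)
    return post_mean, post_sd

mean, sd = 210.0, 30.0             # prior: 210 s +/- 30 s
for obs in [205.0, 202.0, 214.0]:  # hypothetical cone test results
    mean, sd = update(mean, sd, obs)
    print(round(mean, 1), round(sd, 1))
# The mean drifts toward the data; the sd shrinks with every test.
```

Data close to the prior mean leaves the distribution nearly unchanged, while a distant point drags the mean toward it, exactly the averaging behavior Andrea describes.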

00:17:11.519 --> 00:17:17.039
Yeah, the the that's probably a biggest challenge in here, like the outliers and how do you handle them.

00:17:17.039 --> 00:17:33.039
Because okay, going back to the world of fire science, it's not that you can afford to run the cone calorimeter on PMMA 10,000 times, because there are also economical and time constraints, and you probably would burn through three cone calorimeters running 10,000 samples anyway.

00:17:33.039 --> 00:17:37.359
So, yeah, there are physical limits in how much you can do.

00:17:37.359 --> 00:17:44.480
And in general, we are usually working with very low sample sizes in fire science.

00:17:44.480 --> 00:17:46.240
Like if you have three, you're good.

00:17:46.240 --> 00:17:48.079
If you have five, you're plenty.

00:17:48.079 --> 00:17:54.480
If you have 30, you're David Morrisset burning chairs, because no one else does like 30 samples of one thing, right?

00:17:54.480 --> 00:18:03.200
Does this theorem also have a way to handle outliers, or if you get one, do you just face the consequences and have to figure it out?

00:18:03.519 --> 00:18:14.799
Yeah, so in the theorem you have something called likelihood, which is the probability of observing that specific data point based on your prior state of knowledge.

00:18:14.799 --> 00:18:27.440
So if the data that you observe is very far from your current knowledge, it will essentially be assigned a very low probability, so that it will tend to influence your state of knowledge less.

00:18:27.440 --> 00:18:29.200
So you you account for this.

00:18:29.599 --> 00:18:37.039
But in this approach, at no point in this kind of consideration do you try to understand why the deviation occurred, right?

00:18:37.039 --> 00:18:40.880
Because here it's just about the statistics of the outcome.

00:18:40.880 --> 00:18:56.720
While in reality, for example, someone was doing cone calorimeter tests at 25 kilowatts, the next day someone was doing them at 50, and the third day a guy came back, didn't notice it had changed, and maybe ran a test in the wrong conditions, so it could have been an error.

00:18:56.720 --> 00:19:02.720
So the theorem will not catch that; it's a separate analysis to clean out the data, I guess.

00:19:03.279 --> 00:19:08.799
Yes, so you need to be critical about your data before you use the theorem.

00:19:08.799 --> 00:19:17.359
But one nice and elegant thing about Bayes' theorem is that it's just centered on updating your belief.

00:19:17.359 --> 00:19:24.880
So even if you get an outlier, for example, once, you can update your state of knowledge based on that outlier.

00:19:24.880 --> 00:19:28.880
So it's up to you to say, should I use the outlier and should I understand it?

00:19:28.880 --> 00:19:32.880
Yes, you definitely should, but that's not the point of Bayes' theorem.

00:19:32.880 --> 00:19:47.119
In other words, you don't need to take random data and throw them inside, although you can do that, and you will still get a posterior distribution that reflects your updated state of knowledge based on the data that you give to the theorem.

00:19:47.519 --> 00:20:01.519
Yeah, and I assume even if you did that, and then you have hundreds of data points from your normal data, the theorem would result in a distribution that's closer to the original, and the outlier will not impact it that much.

00:20:01.519 --> 00:20:03.200
That's what I guess.

00:20:03.200 --> 00:20:03.759
Lovely.

00:20:03.759 --> 00:20:04.400
Yeah, yeah.
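Wojciech's guess holds up numerically in a toy conjugate normal model. A hedged sketch with assumed numbers: the posterior mean is a precision-weighted average of the prior and the data, so one wild point among many consistent ones gets diluted.

```python
# Numerical check of the intuition above (all numbers assumed):
# one outlier among many consistent observations barely moves the
# normal-normal conjugate posterior.
def posterior_mean_sd(prior_mean, prior_sd, data, noise_sd=25.0):
    """Normal-normal conjugate posterior for a list of observations."""
    prec = 1.0 / prior_sd**2 + len(data) / noise_sd**2
    mean = (prior_mean / prior_sd**2 + sum(data) / noise_sd**2) / prec
    return mean, prec ** -0.5

clean = [208.0] * 50              # 50 consistent hypothetical cone tests
with_outlier = clean + [400.0]    # plus one wild outlier

m1, _ = posterior_mean_sd(210.0, 30.0, clean)
m2, _ = posterior_mean_sd(210.0, 30.0, with_outlier)
# A roughly 190 s outlier moves the posterior mean by only a few seconds:
print(round(m1, 1), round(m2, 1))
```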

00:20:04.400 --> 00:20:09.680
And now, okay, so we know the Bayes' theorem assumptions now.

00:20:09.680 --> 00:20:14.240
How do you apply this in the technicalities of the testing?

00:20:14.480 --> 00:20:20.559
Yeah, so the third component that I mentioned for this framework is the concept of utility, right?

00:20:20.559 --> 00:20:24.240
So we define state of knowledge, we explain what Bayesian analysis does.

00:20:24.240 --> 00:20:31.279
So you update your state of knowledge, meaning you get a new distribution representing your state of knowledge after the experiment.

00:20:31.279 --> 00:20:41.599
And then what you can do is to assign to this state of knowledge a metric that describes how that state of knowledge is desirable to the user.

00:20:41.599 --> 00:20:43.200
How do we do that?

00:20:43.200 --> 00:20:52.480
We define a utility function that quantifies the desirability of that state of knowledge based on your objectives.

00:20:52.480 --> 00:21:09.039
So, for example, if you're interested, as we are discussing for the case of the cone calorimeter, in reducing uncertainty, you can define a utility metric that tells you how much uncertainty you have in your prior knowledge and how much uncertainty you have in your updated knowledge.

00:21:09.039 --> 00:21:19.839
And then you take the difference between the two, and you assess whether your experimental data reduces the uncertainty with respect to your prior knowledge or increases it.

00:21:19.839 --> 00:21:20.319
Okay.

00:21:20.319 --> 00:21:24.640
So all this is if you have done the experiment, right?

00:21:24.640 --> 00:21:29.200
After you do it, after you have done the experiment, you can do all these calculations that I described.

00:21:29.200 --> 00:21:36.160
But as I mentioned in the beginning, the framework aims at making this analysis before we do the experiment, right?

00:21:36.160 --> 00:21:37.359
So how do we do that?

00:21:37.359 --> 00:21:37.920
That's clear.

00:21:38.240 --> 00:21:41.759
So you want to budget how many cone calorimeter tests you want in your research grant.

00:21:41.759 --> 00:21:44.000
Do I need to go for five or fifty?

00:21:44.160 --> 00:21:44.480
Yeah.

00:21:44.799 --> 00:21:46.720
Your boss says, let's go for a hundred.

00:21:46.720 --> 00:21:48.880
We need uh exactly, exactly.

00:21:49.039 --> 00:21:51.200
And you want to do that before you run the experiment, right?

00:21:51.359 --> 00:21:51.759
Yes, yes.

00:21:52.079 --> 00:21:53.119
So, how do you do this?

00:21:53.119 --> 00:21:56.640
You need a model that simulates your experimental outcomes.

00:21:56.640 --> 00:21:59.599
Okay, based on your uncertain parameters.

00:21:59.599 --> 00:22:05.279
And you simulate multiple times the possible experimental outcomes of your experiment.

00:22:05.279 --> 00:22:18.160
You use the Bayesian theory that I described before for each of those outcomes, and for each of those analyses, you get an estimate of the utility of the experiment if the outcome were the one you simulated.

00:22:18.160 --> 00:22:25.599
And then you take the expected value of all these outcomes that you get, and that represents the expected utility of your experiment.
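The recipe Andrea outlines (simulate outcomes, update, score, average) can be sketched as follows, using an assumed toy model rather than the paper's actual one:

```python
import math
import random

# Toy sketch of the recipe above (all numbers assumed): simulate possible
# outcomes from the prior, update with Bayes for each simulated dataset,
# score each posterior with a utility, and average the scores to get the
# expected utility of the planned experiment.
random.seed(0)
PRIOR_MEAN, PRIOR_SD, NOISE_SD = 210.0, 30.0, 25.0

def simulate_expected_utility(n_tests, n_sims=2000):
    total = 0.0
    for _ in range(n_sims):
        theta = random.gauss(PRIOR_MEAN, PRIOR_SD)  # a plausible "truth"
        data = [random.gauss(theta, NOISE_SD) for _ in range(n_tests)]
        # Conjugate normal update; in this special case the posterior sd
        # does not depend on the simulated data, so the Monte Carlo loop
        # only shows the general structure (richer models and utilities
        # would actually use `data`).
        prec = 1.0 / PRIOR_SD**2 + n_tests / NOISE_SD**2
        post_sd = prec ** -0.5
        # Utility = reduction in uncertainty (entropy difference, in nats)
        total += math.log(PRIOR_SD / post_sd)
    return total / n_sims

print(round(simulate_expected_utility(3), 3),
      round(simulate_expected_utility(10), 3))
```

The key point is that the whole calculation runs before any sample is burned: only simulated outcomes drawn from the prior are used.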

00:22:25.920 --> 00:22:54.559
So, as I understand, for the case of the cone, a model of a cone would be some sort of, I don't know, let's say a machine-learning model of a cone based on a thousand samples, or some form of previous statistical distribution. And then you expect, based on the literature, that the time to ignition of this material is like 300 seconds, and maybe the standard deviation would be 30 seconds, so you now have some expected outcome.

00:22:54.559 --> 00:23:02.720
Then, based on some model, you run the analysis and see whether you can potentially reduce the uncertainties there.

00:23:02.720 --> 00:23:09.839
Well, in this case, you could probably just use the model to predict your time to ignition, if you have such a good model that predicts it.

00:23:09.839 --> 00:23:18.559
But I assume the real worth of this method comes when you have multiple tests to choose from and multiple utilities to balance.

00:23:18.559 --> 00:23:21.279
So that's when it comes into play.

00:23:21.279 --> 00:23:28.960
Is this gain of information what was described in the papers as the information gain, or the gain in utility?

00:23:28.960 --> 00:23:30.160
Yes, exactly.

00:23:30.160 --> 00:23:32.720
Let's introduce this concept to the listeners.

00:23:32.720 --> 00:23:37.920
So it's about how much more information you get per repeat of the experiment.

00:23:37.920 --> 00:23:39.680
Do I understand that correctly?

00:23:40.160 --> 00:23:40.480
Exactly.

00:23:40.480 --> 00:23:40.880
Yes.

00:23:40.880 --> 00:23:46.799
So what we did was choose different numbers of tests in the cone calorimeter.

00:23:46.799 --> 00:24:01.680
So we went from one all the way up to 30 tests, and we calculated, for each of these possible numbers of tests, how much in expectation that number of tests would reduce the uncertainty in the distribution of the ignition time.

00:24:01.680 --> 00:24:11.440
And then, using the calculation that I described, we find that at a low number of trials, you see a very large increase in information gain.

00:24:11.440 --> 00:24:16.880
So the first few trials, one, three, and so on, will give you a lot of information.

00:24:16.880 --> 00:24:19.839
But then you start seeing a plateau in the curve.

00:24:19.839 --> 00:24:30.400
So you see that the expected information gain that you get from running many tests reduces until at some point the marginal gain in this information is basically zero.

00:24:30.400 --> 00:24:40.240
Which means that if you want to understand an optimal number of tests that you should run, you should stop when this marginal expected information gain basically goes to zero.

00:24:40.240 --> 00:24:47.759
Because beyond that, based on your models, based on your assumptions, the experiment will not give you more information.
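The flattening curve and the stopping rule can be illustrated with the conjugate normal model again. This is a sketch under assumed values (prior std 30, measurement noise std 20), not the paper's numbers; for this model the expected information gain after n measurements has the closed form 0.5·ln(var_prior/var_posterior).

```python
import numpy as np

sigma0, sigma_e = 30.0, 20.0  # assumed prior and measurement-noise stds

def info_gain(n):
    """Expected information gain (KL divergence prior -> posterior, in nats)
    for n independent normal measurements; closed form for this model."""
    var_post = 1.0 / (1.0 / sigma0**2 + n / sigma_e**2)
    return 0.5 * np.log(sigma0**2 / var_post)

gains = np.array([info_gain(n) for n in range(1, 31)])
marginal = np.diff(gains)  # extra information from adding one more test
```

Plotting `gains` against n reproduces the behaviour described here: a steep rise for the first few tests, then a plateau; a rational stopping point is where `marginal` falls below a chosen tolerance.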

00:24:48.079 --> 00:24:48.640
Brilliant.

00:24:48.640 --> 00:24:57.119
It's such a challenging concept, because at some point you may also be seeking those outliers, to understand what happens in the outlier case.

00:24:57.119 --> 00:25:00.720
This is basically how the particle physics works, you know.

00:25:00.720 --> 00:25:11.279
This is how certain experiments run: you just collide billions and billions of particles, and most of them just follow the Standard Model like they should.

00:25:11.279 --> 00:25:17.920
And every now and then one of them goes a little differently, and those are the ones you get the Nobel Prize for.

00:25:17.920 --> 00:25:25.200
So you're basically seeking those outliers, but at like five standard deviations from the model.

00:25:25.200 --> 00:25:27.039
So yeah, that that's super interesting.

00:25:27.039 --> 00:25:31.279
But we're not there yet; we cannot collide billions of cone calorimeter samples.

00:25:31.279 --> 00:25:36.559
Now we're in the world of optimization, because in the end, this is supposed to be useful.

00:25:36.559 --> 00:25:48.640
This is supposed to create utility, not only in terms of the utility of your test, but in terms of being confident that you've run enough tests to get your information and inform your decisions.

00:25:48.640 --> 00:25:57.920
I think the cone was a simple example, but there were some more difficult ones, where different measures were compared for utility.

00:25:57.920 --> 00:26:01.599
So maybe let's switch the case study for something more complicated.

00:26:01.920 --> 00:26:02.720
Yeah, sure.

00:26:02.720 --> 00:26:08.960
So just to build a link, in the previous case, we defined utility as the reduction of uncertainty.

00:26:08.960 --> 00:26:18.960
We can make a step forward and assign an economic value to the effects of this uncertainty reduction, which is what we call value of information.

00:26:18.960 --> 00:26:27.519
So we demonstrate this in the context of post-fire assessment, which is a practice that generally combines calculations and testing.

00:26:27.519 --> 00:26:35.440
But up to now, we don't have a structured framework to decide which tests we should perform and whether we should actually perform a test.

00:26:35.440 --> 00:26:39.839
So this decision is left to the experience of the assessing engineer.

00:26:39.839 --> 00:26:44.799
So we want to show how our calculation approach can support this decision-making process.

00:26:44.799 --> 00:26:48.079
And this is the second example we present in the paper.

00:26:48.079 --> 00:27:00.960
We assume there was a single compartment fire in a building, and the assessing engineer needs to decide whether the reinforced concrete slab in this compartment needs structural repairs.

00:27:00.960 --> 00:27:02.799
So, how do we know that?

00:27:02.799 --> 00:27:18.720
Because steel exhibits a permanent reduction of yield stress if its temperature exceeds 600 degrees, we take a steel temperature of 600 degrees as a simplified threshold for structural repair.

00:27:18.720 --> 00:27:23.680
So the question becomes how do we know if the reinforcement reached 600 degrees?

00:27:23.680 --> 00:27:26.240
We can do that using calculation methods.

00:27:26.240 --> 00:27:36.240
So, for example, we consider the Eurocode parametric fire curve as a thermal boundary condition, and we run a thermal analysis to calculate the temperature in the steel.

00:27:36.240 --> 00:27:44.720
This model requires as input the geometry of the compartment and the thermal properties of the compartment, and we assume we know both of them.

00:27:44.720 --> 00:27:48.799
And then it also requires the fuel load and the opening factor.

00:27:48.799 --> 00:27:58.400
So, since we have uncertainty about these two parameters, we assign them probability distributions representing our prior knowledge.

00:27:58.400 --> 00:27:58.960
Okay.

00:27:58.960 --> 00:28:11.680
And the idea is that if we do some tests, we can get an updated distribution for the fuel load and the opening factor and better support the decision-making process of whether we should repair or not.

00:28:11.680 --> 00:28:14.319
So we consider two possible experiments.

00:28:14.319 --> 00:28:22.799
The first one is a rebound hammer test, which gives us the maximum temperature reached at a depth of 15 millimeters from the cover.

00:28:22.799 --> 00:28:30.640
The second test we consider is a discoloration test, which gives us the maximum depth of the 300-degree isotherm.

00:28:30.640 --> 00:28:39.119
And now the question becomes whether there is any economic benefit from testing with either of the two tests that I mentioned.

00:28:39.119 --> 00:28:42.240
And if so, which of these tests should we choose?

00:28:42.559 --> 00:28:54.160
Sorry, in this case, you have already made up your mind to use the Eurocode calculation method to approximate the rebar temperature, to see whether the 600-degree threshold was met or not.

00:28:54.160 --> 00:29:08.240
And here you are using the hammer or the discoloration test as a means to increase... what is it going to give you? Some information about the peak temperature in the wall.

00:29:08.240 --> 00:29:11.839
Are you gonna loop this to the Eurocode model?

00:29:11.839 --> 00:29:21.039
Or, if you find that at 50 millimeters there were 200 degrees, you don't need a Eurocode calculation to know that there were not 600 at the rebar.

00:29:21.039 --> 00:29:22.960
How does it tie to the Eurocode model?

00:29:23.279 --> 00:29:28.480
Yeah, so the point is that none of the tests gives you the temperature of the reinforcement, right?

00:29:28.480 --> 00:29:36.319
Yes. The first one gives you the temperature at a given depth, and the second one gives you the depth of the 300-degree isotherm.

00:29:36.319 --> 00:29:51.440
So, what we want to do is link the measurement from the test to our calculations, and use Bayes' theorem to update our knowledge of the fuel load and the opening factor based on the measurement we get from the test.

00:29:51.440 --> 00:29:58.319
That's how the test can reduce uncertainty in the distributions of fuel load and opening factor.
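This Bayes-theorem update can be sketched with a toy grid computation. Everything here is a hypothetical stand-in: the forward model `predicted_proxy`, the prior on fuel load, and the measurement values are placeholders for illustration, not the paper's Eurocode-based model.

```python
import numpy as np

# Toy Bayesian update of an uncertain fuel load from one test measurement.
q_grid = np.linspace(100.0, 900.0, 401)              # candidate fuel loads (MJ/m2)
prior = np.exp(-0.5 * ((q_grid - 500.0) / 150.0)**2)
prior /= prior.sum()                                 # discretized normal prior

def predicted_proxy(q):
    """Hypothetical forward model mapping fuel load to the quantity the
    test measures (e.g. a peak-temperature proxy)."""
    return 200.0 + 0.5 * q

y_meas, sigma_y = 520.0, 30.0                        # assumed reading, noise std
likelihood = np.exp(-0.5 * ((y_meas - predicted_proxy(q_grid)) / sigma_y)**2)
posterior = prior * likelihood                       # Bayes' theorem, unnormalized
posterior /= posterior.sum()
```

Comparing the spread of `posterior` with `prior` shows the uncertainty reduction the measurement buys; the same structure applies to the opening factor.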

00:29:58.319 --> 00:29:59.519
So that is the starting point.

00:29:59.519 --> 00:30:03.599
And then, how do we convert this into economic terms?

00:30:03.599 --> 00:30:11.759
We build a cost model that associates a cost to the distributions of the fuel load and of the opening factor.

00:30:11.759 --> 00:30:28.319
And this is done by taking into account the expected repair cost and the expected cost of undetected failure, which is given by the cost of failure multiplied by the probability that the maximum temperature of steel exceeds 600 degrees.

00:30:28.319 --> 00:30:40.880
And we also take into account the choice of a rational decision maker, which is to repair the slab if the expected repair cost is lower than the expected cost of undetected failure.

00:30:40.880 --> 00:30:45.519
Otherwise, the best choice is to leave the reinforced concrete slab as it is.

00:30:45.519 --> 00:30:48.079
So all this is in the cost model.
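The cost model just described reduces to a one-line decision rule. A minimal sketch with normalized costs; the 10x failure-to-repair ratio is the illustrative choice discussed later in the conversation, not a general value.

```python
def expected_cost(p_exceed, c_repair=1.0, c_failure=10.0):
    """Expected cost under the rational decision rule: repair when the
    repair cost is below the expected cost of undetected failure
    (p_exceed * c_failure); otherwise leave the slab as is.
    Costs are normalized and illustrative."""
    return min(c_repair, p_exceed * c_failure)
```

The break-even probability is `c_repair / c_failure` (0.1 here): below it, accepting the residual risk is the cheaper rational choice; above it, repairing caps the expected cost.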

00:30:48.079 --> 00:30:54.720
We use the cost model to calculate the expected cost with prior knowledge, meaning without doing any tests.

00:30:54.720 --> 00:31:04.160
And then we use an experimental model, meaning a model that simulates the possible outcomes of the two tests we consider.

00:31:04.160 --> 00:31:08.319
So again, the rebound hammer test and the discoloration test.

00:31:08.319 --> 00:31:18.319
And what we do is this: we simulate a possible experimental outcome, and we update the distributions of fuel load and opening factor using Bayes' theorem.

00:31:18.319 --> 00:31:30.160
We calculate again the probability of the steel temperature exceeding 600 degrees, and use this probability to calculate the expected cost for that realization of the possible experimental outcome.

00:31:30.160 --> 00:31:37.759
And then we take the expected value of all these costs, and that is the expected cost of our experimental protocol.

00:31:38.000 --> 00:31:46.799
So again, we're not yet talking about solving a specific person's problem with their particular building after a fire.

00:31:46.799 --> 00:31:55.920
It's about how much narrower the uncertainty of the Eurocode method will be if you inform it through the discoloration test.

00:31:55.920 --> 00:32:00.000
How much narrower will it be if you inform it through the hammer test?

00:32:00.000 --> 00:32:08.079
And given the two possibilities and the known costs of those two methods, which makes more sense to do, or perhaps neither of them, right?

00:32:08.400 --> 00:32:08.880
Exactly.

00:32:08.880 --> 00:32:17.920
Because now we have an estimate of the cost with prior knowledge, so without doing any tests, and we have an expected cost for doing either of the two tests.

00:32:17.920 --> 00:32:26.240
So the difference between the expected cost without testing and the expected cost with testing is what we call value of information.

00:32:26.240 --> 00:32:32.240
And if this value of information is positive, it means that the test will give you an economic benefit.
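As a numerical sketch of this difference, the snippet below computes the value of a hypothetical *perfect* test (one that reveals with certainty whether the threshold was exceeded). That is the upper bound on any real test's value of information; the paper's actual tests are noisy, so their value is lower. The prior probability and the costs are assumed numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
C_REPAIR, C_FAILURE = 1.0, 10.0   # illustrative normalized costs (10x ratio)

def decision_cost(p_exceed):
    """Cost of the rational choice: repair, or accept the failure risk."""
    return min(C_REPAIR, p_exceed * C_FAILURE)

p_prior = 0.3                     # assumed prior P(steel exceeded 600 C)
cost_without_test = decision_cost(p_prior)

# Hypothetical perfect test: the outcome reveals the true state, so the
# decision is made with p = 1 or p = 0 depending on what is observed.
exceeded = rng.random(100_000) < p_prior
cost_with_test = float(np.mean(np.where(exceeded, decision_cost(1.0),
                                        decision_cost(0.0))))
value_of_information = cost_without_test - cost_with_test
```

A positive `value_of_information` means the test pays for itself before even accounting for the testing cost; subtracting the test's own cost gives the net benefit.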

00:32:32.240 --> 00:32:39.680
So the result of this analysis tells us, first, that either of the two tests provides an economic benefit.

00:32:39.680 --> 00:32:59.359
And second, we found that for this specific case, the value of information of the rebound hammer test is higher than the value of information of the discoloration test, which means that if you are an assessing engineer, you want to choose the rebound hammer test, because it provides you a higher value of information.

00:32:59.359 --> 00:33:04.160
And we also demonstrate that this is true even if we consider the cost of testing.

00:33:04.640 --> 00:33:09.200
Is this specific for the case study or is it true in general?

00:33:09.200 --> 00:33:20.160
Because you're applying a general model of a compartment to a compartment, more or less, with two tests that are also quite general in their own right.

00:33:20.160 --> 00:33:27.759
So this consideration could be true for anyone who wants to apply the same approach to a compartment fire test, right?

00:33:27.759 --> 00:33:38.160
But I guess in some more complicated cases, you would have to narrow it down to the very specific case of the person who seeks the input and the value of information.

00:33:38.480 --> 00:33:39.440
Yeah, definitely.

00:33:39.440 --> 00:33:50.319
So the approach is general: you can apply it to different compartments, and you can also choose different tests, but the conclusion is specific to the case study you consider and to your assumptions.

00:33:50.319 --> 00:33:52.720
So that's why I say: in this case.

00:33:52.720 --> 00:33:59.519
But yeah, the same approach can be applied if you consider different tests, different experimental protocols.

00:33:59.519 --> 00:34:08.719
For example, now we just assumed we do one measurement, but in a similar way to what we did for the cone calorimeter, we could also assess how many tests we should actually perform.

00:34:09.280 --> 00:34:13.199
And the cost of failure, that's the collapse of the structure.

00:34:13.199 --> 00:34:32.559
I assume this means that with the Eurocode method, you have predicted that the temperature of the rebar has not reached 600 degrees, while in reality, in that fire, it did, which means there's a possibility of failure in the future due to that fire itself, and this is a hidden cost.

00:34:32.559 --> 00:34:40.639
In your paper, there was a number: you are okay if the probability of failure is 3.8%.

00:34:40.639 --> 00:34:51.440
So in the end, you look at whether the probability of exceeding the 600-degree threshold in your distribution of outcomes is above or below 3.8%, I guess.

00:34:51.440 --> 00:35:16.719
Can you comment? Because the fact that it has been above the threshold doesn't yet mean failure; it's just a criterion you set. But you have to assign a cost to that failure, and that probably influences the value of information of the tools, which in this case turns your general considerations of the hammer and discoloration into a specific case, because you're tying them to a very specific potential cost of failure.

00:35:17.039 --> 00:35:17.519
Exactly.

00:35:17.519 --> 00:35:24.159
Yeah, you need a model for costs: a model for the repair cost and a model for the cost of failure.

00:35:24.159 --> 00:35:28.239
And this is very specific to the building you're considering.

00:35:28.239 --> 00:35:36.559
So in our case, we made a simplified assumption, which is that the cost of failure is 10 times the cost of repair.

00:35:36.559 --> 00:35:41.199
And that's a choice we made for illustrative purposes of this example.

00:35:41.199 --> 00:35:49.440
But as soon as you have more details on your building, you can build up in complexity and include other parameters in the cost assessment.

00:35:49.760 --> 00:35:50.159
Very good.

00:35:50.159 --> 00:36:04.960
I think I finally understood the difference between a sensitivity study and a value-of-information study, because this economic model part is the one that really turns a general consideration into a case-specific consideration.

00:36:04.960 --> 00:36:27.119
If you have a historic building of immense value where you're not allowed to do repairs, and the cost of collapse would be absolutely tremendous, then this is probably way more informative than if you have a single-story house or some other building where the cost of failure is not that huge.

00:36:27.119 --> 00:36:27.920
Fantastic.

00:36:27.920 --> 00:36:33.760
We've covered the uncertainty, we've covered a bit of the cost, let's perhaps move to the utility.

00:36:33.760 --> 00:36:40.239
You said that the utility is about three things: uncertainty, cost, and environmental impact.

00:36:40.239 --> 00:36:43.920
I feel that we've touched a lot on the uncertainty.

00:36:43.920 --> 00:36:45.920
I guess we touched on the cost.

00:36:45.920 --> 00:36:52.000
I know it's not in the paper, but how does the environmental part come into play here?

00:36:52.239 --> 00:37:04.320
Yeah, so in the same way as we assign an economic value to the uncertainty you have in your prior and posterior knowledge, you can also assign an environmental cost to this uncertainty.

00:37:04.320 --> 00:37:21.599
So, for example, you have uncertainty about some parameters; with this uncertainty you predict the performance of a building, and in the case of a fire, you will have an environmental impact due to the collapse of this building or this part of the building, and that's your environmental cost with prior knowledge.

00:37:21.599 --> 00:37:42.639
Then, if you estimate the outcomes of your experiment, maybe you want to test the utility of doing a furnace test, or maybe the utility of a compressive strength test at high temperature, any test you want, you can calculate the expected environmental benefit of the information that you would get from this test.

00:37:42.639 --> 00:37:54.239
And the calculation works pretty similarly to the calculation for the economic cost, but in this case, we need to estimate the environmental consequences of your uncertainty.

00:37:54.639 --> 00:38:01.360
Through the same logic, you could potentially go even to life safety and health, you know, FN curves as well.

00:38:01.360 --> 00:38:03.599
Was it a deliberate choice not to do that?

00:38:03.840 --> 00:38:06.719
So the concept is very general.

00:38:06.719 --> 00:38:09.280
We can define utility in many different ways.

00:38:09.280 --> 00:38:16.079
In the paper, we wanted to propose three ways that I think make a case for using this methodology.

00:38:16.079 --> 00:38:27.119
And we also commented that you can define many other utility metrics, including the one you just mentioned, based on what is the end use of the experiment that you are going to do.

00:38:27.119 --> 00:38:33.360
And another uh interesting thing is that you don't need to define a single utility metric for your experiment.

00:38:33.360 --> 00:38:42.000
You can, in principle, define utility in terms of different metrics, for example, reduction of uncertainty, environmental impact, and economic impact.

00:38:42.000 --> 00:38:50.159
And you can perform the assessment considering these three metrics and use all of them to inform your decision making on the experimental protocol.

00:38:50.159 --> 00:39:02.960
And if you want to use the framework to optimize your experimental protocol, you can use something called multi-objective optimization to optimize at the same time all these utility metrics that you defined.
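One common building block of such multi-objective optimization is a Pareto filter: keep only the protocols not dominated on every utility metric. A minimal sketch; the `dominates`/`pareto_front` helpers and the candidate utility tuples are illustrative, not the paper's method.

```python
def dominates(a, b):
    """a dominates b if it is at least as good in every utility metric
    and strictly better in at least one (all metrics to be maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(protocols):
    """Keep only the non-dominated experimental protocols."""
    return [p for p in protocols if not any(dominates(q, p) for q in protocols)]

# Hypothetical utilities per protocol: (information gain, -cost, env. benefit)
candidates = [(0.8, -1.0, 0.5), (0.6, -0.2, 0.4), (0.5, -1.5, 0.3)]
```

Here the third candidate is worse than the first on all three metrics and drops out; the decision maker then trades off among the remaining non-dominated protocols.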

00:39:03.280 --> 00:39:04.079
Okay, yeah, yeah.

00:39:04.079 --> 00:39:12.639
The case studies are a lot of work, and there's a ton of plots in the paper, but they're kind of simple compared to real-world objectives.

00:39:12.639 --> 00:39:27.119
Like, if the world follows Ruben's preaching and starts using tests in a different way, in the way that gives information, we start to talk about complicated design decisions and complicated systems.

00:39:27.119 --> 00:39:34.719
How badly does it complicate the method when you start having those multiple utility functions?

00:39:34.719 --> 00:39:45.360
Like, does the math eventually get so much more complicated that it's ridiculous, or doesn't it? You're the only one I know who does the math for this.

00:39:45.360 --> 00:39:49.119
So tell me how much worse it gets when you start to mess with it.

00:39:49.360 --> 00:40:03.519
Yeah, so the computational cost of this analysis is a relevant aspect to account for, and we need models to represent both the experiment and the performance that we're interested in understanding.

00:40:03.519 --> 00:40:09.360
So the more complex the system and the experiment are, the more complex the model is.

00:40:09.360 --> 00:40:12.320
And in this sense, you need more advanced models.

00:40:12.320 --> 00:40:18.159
So one way to gain computational efficiency is using surrogate models.

00:40:18.159 --> 00:40:25.440
And one part of the project is building these surrogate models to gain computational efficiency.

00:40:25.440 --> 00:40:33.599
Then the other part is what utility metrics you should actually use, and that's something we are going to work on more in the future.

00:40:33.599 --> 00:40:43.199
So for now, we say: okay, you can in principle choose all the utility metrics that you want, and you can use all of them to support decision making.

00:40:43.199 --> 00:40:44.880
Then how do you do that?

00:40:44.880 --> 00:40:54.719
You have different approaches: you have multi-objective optimization if you want to optimize, and multi-criteria decision making if you just want to compare different metrics.

00:40:54.719 --> 00:41:02.239
So there are different tools that we can use to support this decision, and that's definitely something that we are going to work on in the future.

00:41:02.239 --> 00:41:16.880
And as you said, yes, it becomes more complicated as the application becomes more complicated, because you correctly noticed that what we put in the paper are very simplified examples that aim at making the point of the potential of the methodology.

00:41:16.880 --> 00:41:25.679
So we wanted to show that we are able to estimate before doing an experiment the expected utility of that experiment.

00:41:25.679 --> 00:41:29.360
So the examples we put there aim at doing this.

00:41:29.360 --> 00:41:38.559
Then, when we go more complex, yes, it requires more elaborate models, but the concept and the idea that we apply remain the same.

00:41:38.800 --> 00:41:40.960
And again, about the model.

00:41:40.960 --> 00:41:48.960
Can you replace the model with just, you know, an understanding of the statistical distribution of typical outcomes of a test?

00:41:48.960 --> 00:41:52.079
Like, for example, the cone calorimeter, I'll go back to that.

00:41:52.079 --> 00:42:01.360
Like, I mean, you can do a model of flame spread over a flat surface, you can perhaps do some pyrolysis modeling, Gpyro and stuff.

00:42:01.360 --> 00:42:07.840
There are things you can model to understand what the scatter of outcomes of a material is, perhaps.

00:42:07.840 --> 00:42:12.639
Though a model will tend to give you the same value based on the same input.

00:42:12.639 --> 00:42:25.760
So I guess here you also have to have distributions of the variables that go into testing, like the density distribution of your PMMA sample, the roughness of the surface, I don't know, the amount of impurities in it, whatever.

00:42:25.760 --> 00:42:35.920
But is knowing that the statistical distribution of a test looks more or less like that a decent first approximation to find those boundaries?

00:42:36.559 --> 00:42:38.159
Yes, yes, in part, yes.

00:42:38.159 --> 00:42:39.840
Let me rephrase this a bit.

00:42:39.840 --> 00:42:43.360
So the framework is a computational modeling framework.

00:42:43.360 --> 00:42:47.760
So we need computational models, at least for our experiment.

00:42:47.760 --> 00:42:55.280
Now, when you look into what these models should look like, you can integrate different levels of complexity.

00:42:55.280 --> 00:43:10.159
And what you expect is that if you have a very accurate model, you will be able to optimize the experimental protocol more and to understand it more, but nothing stops you from using a very simple model to inform your decision making on the experiment.

00:43:10.159 --> 00:43:19.519
For example, as you mentioned, for the cone calorimeter example, our model is a normal distribution of the possible outcomes of the cone calorimeter.

00:43:19.519 --> 00:43:22.559
So that's a very simple model, and you can build in complexity.

00:43:22.559 --> 00:43:27.840
As you said, we could use an FDS model, and we can go to different complexities.

00:43:27.840 --> 00:43:43.519
So increasing complexity probably enables you to get a better outcome, but you can apply the same methodology with simple models too, and you should actually start like that, and then build in complexity to optimize or support your decision making more.

00:43:43.519 --> 00:43:55.840
So the complexity of the model is definitely important, but you can use very simple models and still inform your experimental protocol decision making.

00:43:56.079 --> 00:43:57.440
No, no, I found the missing link.

00:43:57.440 --> 00:43:59.039
Well, you showed me the missing link.

00:43:59.039 --> 00:44:10.400
So when you referred to the model, I was thinking the statistical distribution was a replacement for the model, but it was actually the model you were using, and there are more advanced models which you have not been using.

00:44:10.400 --> 00:44:12.960
Okay, but this makes a lot of sense, yes, yeah.

00:44:12.960 --> 00:44:15.360
So you consider the statistical distribution as a model.

00:44:15.360 --> 00:44:16.159
That's good.

00:44:16.159 --> 00:44:31.599
In terms of the test outcomes, I think one interesting thing that you can get from this approach is this: if I do a fire resistance test, what my client gets in the end is, you know, a classification.

00:44:31.599 --> 00:44:33.760
Your slab is 60 minutes rated.

00:44:33.760 --> 00:44:34.559
That's it.

00:44:34.559 --> 00:44:54.000
But here, you can take the information about the minutes, but you can also seek the different pieces of information that the test gives you: the raw temperature measurements, the temperature at your rebar, the temperature at the surface, the deflection, the rate of deflection, how fast it went, what the failure looked like.

00:44:54.000 --> 00:44:56.639
When did the failure occur after those 60 minutes?

00:44:56.639 --> 00:45:10.079
And knowing that the test gives you this much more information... Perhaps you just looked at it as: okay, I'm doing a fire resistance test, it just gives me 60 minutes, and the probability it will not pass is like 20%.

00:45:10.079 --> 00:45:13.679
So there's a distribution that's gonna give me this much information.

00:45:13.679 --> 00:45:19.679
But if you look into the raw data, suddenly you have a plethora of information that you can tap into.

00:45:19.679 --> 00:45:25.199
But to do that, you would have to define a separate utility function for each of these parameters, I guess.

00:45:25.440 --> 00:45:26.239
Yes, exactly.

00:45:26.239 --> 00:45:33.599
So I think the one benefit you get out of this is that you can make better use of the experimental setups you already have.

00:45:33.599 --> 00:45:54.400
So, for example, with what you mentioned now with the furnace: if you have a computational model that includes your parameters of interest and reproduces the furnace test, you are able to improve your understanding of these parameters, be they, for example, material parameters or thermal parameters, any parameter you want.

00:45:54.400 --> 00:46:02.880
And this better understanding of these parameters enables a better prediction of the performance of the system in the real world, outside the lab.

00:46:02.880 --> 00:46:13.119
So, in this sense, the framework enables you to link subsystem performance testing in the lab to the utility that this test will have in real-world applications.

00:46:13.360 --> 00:46:13.840
Brilliant.

00:46:13.840 --> 00:46:18.320
Ah, one more thing: you're breaking my paradigm.

00:46:18.320 --> 00:46:31.199
I feel uncomfortable with that, but I see potential, you know, because, for example, when I run a fire test in the furnace, I know where I have to put the thermocouples, because the standard defines it and tells me how many I need.

00:46:31.199 --> 00:46:34.480
And sometimes clients would like to add some thermocouples.

00:46:34.480 --> 00:46:56.320
Now, if you think about it, and I'm making up numbers, let's say a test costs 20,000 euros and a single thermocouple costs you a hundred euros. You can spend 2,000 euros extra and place 20 more thermocouples, and potentially you could increase the information you gain from the test enormously, or perhaps you don't gain any information.

00:46:56.320 --> 00:47:03.760
So the question is: will this give you information or not, and how valuable is that information to you?

00:47:03.760 --> 00:47:06.800
And based on that, you can decide: okay, you know what?

00:47:06.800 --> 00:47:15.440
I should add 25 more thermocouples in this location, just three in this location, and this location doesn't make any sense for us to increase the cost of the test.

00:47:15.440 --> 00:47:25.280
This is really brilliant; this could literally be a service, you know, a tool used to guide testing, really.

00:47:25.280 --> 00:47:26.239
I like it.

00:47:26.639 --> 00:47:38.159
And you can use that to go to your client, or to the interested stakeholder, and show what the benefit would be of them investing a high amount of money in these additional thermocouples, for example.

00:47:38.159 --> 00:47:45.599
And you can map this to the performance of the beam that you're testing in the real building where it will be implemented, for example.

00:47:45.920 --> 00:47:59.440
Or perhaps, instead of doing three full-scale slab experiments, you just over-instrument one of them and do, I don't know, five compressive strength tests under heating or something like that as a replacement.

00:47:59.440 --> 00:48:08.960
I mean, it's the beauty and the problem, because again, here you're doing what's best, and you have a scientific way to prove that it's the best choice.

00:48:08.960 --> 00:48:20.880
And the problem is that it's so incompatible with the current paradigm, which just tells you: you know what, you have to achieve this rating, and you achieve this rating by testing this many samples in these conditions.

00:48:20.880 --> 00:48:28.320
It could also be used to assess how useful the current paradigm is, or to show how bad it is.

00:48:28.320 --> 00:48:34.719
Well, we are escaping the papers and going into future studies, I guess.

00:48:34.719 --> 00:48:40.880
But are you trying to apply that to showcase how much information the current paradigm gives?

00:48:41.199 --> 00:48:42.159
Definitely, yes.

00:48:42.159 --> 00:48:45.199
So one part of the project is dedicated to that.

00:48:45.199 --> 00:48:52.000
We have a postdoc who is working on understanding the economic value of the current fire safety paradigm.

00:48:52.000 --> 00:49:01.519
And then what we want to do is compare the benefit that you get now with what you would get if you used this different approach to testing.

00:49:01.519 --> 00:49:08.239
And the final goal is showing that, hopefully, this approach is beneficial for society.

00:49:08.239 --> 00:49:25.360
So this links back to the overall project, and in that case, adaptive fire testing is basically testing using the methodology and the framework that we discussed now, which is a part of the whole fire safety demonstration paradigm.

00:49:25.360 --> 00:49:37.840
So there is the one that you mentioned now, and then you could think of how that should be changed based on this approach, and that's what will come as part of the ERC project.

00:49:38.079 --> 00:49:38.639
Fantastic.

00:49:38.639 --> 00:49:39.760
Wow, really good.

00:49:39.760 --> 00:49:40.159
Thank you.

00:49:40.159 --> 00:49:45.280
Thank you so much, Andrea, for bringing this up and explaining it to me.

00:49:45.280 --> 00:49:55.760
Sorry for being a little slow, but it is difficult when you think about this from a different perspective, and it's so much easier for me to run a furnace test than do Bayes' theorem.

00:49:55.760 --> 00:50:15.920
But I guess I'm the voice of the audience, and I assume that's the case for many fire engineers. I am highly appreciative that you and Ruben and your team at Ghent are working hard on that, because indeed the potential impact of that is really big, as expected of an ERC grant, of course.

00:50:16.239 --> 00:50:17.440
Thanks a lot, Marzik.

00:50:17.440 --> 00:50:18.719
Thanks a lot for inviting me.

00:50:19.039 --> 00:50:20.719
And that's it, thank you for listening.

00:50:20.719 --> 00:50:29.679
It was not an easy one, but I think we have provided you with the information in the most, how to say it, digestible way.

00:50:29.679 --> 00:50:49.760
It's some tough mathematics, some tough concepts, but their final outcome is quite profound, because if you have some choices in front of you to be made, and those choices can cost a lot of money, then using methods like the ones that Andrea and Ruben propose can really guide you.

00:50:49.760 --> 00:51:02.079
And perhaps I was venturing too far away from the case studies that Andrea was proposing, but I immediately see uses of this approach in so many aspects of fire science.

00:51:02.079 --> 00:51:16.960
I'm not sure if my concepts are correct in terms of the methodology that has been developed at Ghent University, but I still see the potential, because it's quite generalizable and quite useful in many cases.

00:51:16.960 --> 00:51:26.880
If you are able to design your utility functions, objectives, etc., you can really twist this method into working on many, many levels.
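[Editor's note: as a rough illustration of the repair-or-accept decision and the value-of-information idea mentioned in this episode, here is a minimal preposterior sketch. All numbers (costs, prior, test sensitivity and specificity) are hypothetical placeholders, not values from the papers discussed.]

```python
# Hypothetical inputs: prior belief the slab was damaged (e.g. exceeded
# 600 C), decision costs, and the accuracy of a candidate post-fire test.
p_damage = 0.3          # prior probability of damage
cost_repair = 100.0     # cost of repairing the slab
cost_failure = 1000.0   # cost of an undetected failure
sensitivity = 0.9       # P(test positive | damage), assumed test quality
specificity = 0.8       # P(test negative | no damage)

def expected_cost(p):
    """Cost of the best decision given belief p: repair, or accept risk."""
    return min(cost_repair, p * cost_failure)

# Without testing, act on the prior alone (here: repairing is optimal).
cost_no_test = expected_cost(p_damage)

# Preposterior analysis: before running the test, average the cost of the
# best decision over both possible outcomes, weighted by their predictive
# probabilities, using Bayes' theorem to update the belief in each branch.
p_pos = sensitivity * p_damage + (1 - specificity) * (1 - p_damage)
p_damage_given_pos = sensitivity * p_damage / p_pos
p_damage_given_neg = (1 - sensitivity) * p_damage / (1 - p_pos)
cost_with_test = (p_pos * expected_cost(p_damage_given_pos)
                  + (1 - p_pos) * expected_cost(p_damage_given_neg))

# Run the test only if this exceeds the cost of performing it.
value_of_information = cost_no_test - cost_with_test
```

A negative test result here shifts the belief low enough that accepting the residual risk becomes cheaper than repairing, which is exactly what makes the test worth paying for: it can change the decision.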

00:51:26.880 --> 00:51:35.679
So after this podcast episode, I hope you have a general idea of how the approach to optimizing the information gained from testing works.

00:51:35.679 --> 00:51:40.719
But if you would really like to learn about how it works, you need to read the papers.

00:51:40.719 --> 00:51:43.679
And there will be two papers linked in the show notes.

00:51:43.679 --> 00:51:59.920
One is a shorter one, which is Ruben's keynote from the ESFSS conference in Ljubljana earlier this year, where Ruben introduced the method and gave an overview of its potential.

00:51:59.920 --> 00:52:14.239
And then there is a second paper in Fire Safety Journal, which is a very in-depth dive into the topic, with all the mathematics explained based on both examples that we have been discussing in this podcast episode.

00:52:14.239 --> 00:52:31.599
So both the cone calorimetry and the post-fire concrete assessment are shown step by step in the Fire Safety Journal paper, which you can go through, follow, and see what the implementation looks like, what the math looks like, and what the final outcome of the method is.

00:52:31.599 --> 00:52:41.519
After this podcast episode, I think it's quite valuable to quickly, before you forget, jump into the Fire Safety Journal paper and just look at the case studies.

00:52:41.519 --> 00:52:51.280
So I leave you with this interesting homework, and I expect you here next Wednesday, because there is more fire science coming your way again.

00:52:51.280 --> 00:52:52.400
Thank you for being here with me.

00:52:52.400 --> 00:52:53.440
Cheers, bye.