Oct. 1, 2025

220 - Test vs experiment with David Morrisset

In this episode we dive into the gap between standardized tests and experiments, trying to figure out (a) whether there is a difference and (b) if there is, whether not understanding it could quietly erode safety. With guest David Morrisset (Queensland University), we unpack furnace ratings that read like time but aren’t, cladding classifications that were never meant for façades, and the infamous bird-strike test that shows how any standard bakes in choices and consequences. The throughline: context rules everything.

We talk plainly about what tests actually deliver (repeatability, reproducibility, and comparability under fixed boundary conditions) and why that’s powerful but limited. Then we pivot to experiments: how to define a clear question, choose boundary conditions that matter, use standard apparatus for non-standard insights, and document deviations without pretending they’re compliant. We share stories from timber in furnaces to car park fires and design curves, showing when consistency beats a shaky chase for “realistic,” and when exploratory burns are the fastest way to find the unknowns that really drive risk.

If you’ve ever tried to drop a cone calorimeter value into a performance model, equated furnace minutes to evacuation time, or treated a single burn as gospel, this conversation will help you do so safely and steer you clear of some well-known pitfalls. You’ll leave with practical heuristics for reading test data without overreach, structuring experiments that answer narrow questions well, and communicating uncertainty so decision-makers understand what the numbers can and cannot promise.

Essential reading after this episode:



----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.

00:00 - Setting Up The Big Question

03:38 - Why Fire Uses Tests And Experiments

09:20 - Defining Tests Versus Experiments

16:30 - Standard Furnace Minutes Aren’t Real Minutes

24:10 - Timber In Furnaces And Broken Assumptions

31:20 - Misapplied Benchmarks And Euroclass Limits

38:20 - Car Park Fires And The Benchmark Dilemma

45:00 - The Chicken And The Jet Engine Lesson

51:00 - Extracting Engineering Insight From Tests

58:00 - Designing Experiments With Purpose

WEBVTT

00:00:00.320 --> 00:00:02.879
Hello everybody, welcome to the Fire Science Show.

00:00:02.879 --> 00:00:17.920
I am very happy when people reach out to me and ask me difficult questions or drop interesting thoughts on me, especially if those people are potential guests for the podcast and they have something they would like to talk about on the air.

00:00:17.920 --> 00:00:20.879
And this is the case of today's episode.

00:00:20.879 --> 00:00:33.280
I am joined once again by David Morrisset from Queensland University, and David has dropped on me a very interesting thing to consider, and that is differences between testing and experiments.

00:00:33.280 --> 00:00:50.640
And he did it in a very clever way by dropping on me an extremely interesting paper outside of the fire science world about dropping chickens into turbojet fans, in terms of testing the turbojet fan's uh ability to sustain a bird strike.

00:00:50.640 --> 00:00:57.039
It's a very interesting read, it's in the show notes and we're discussing it more in the podcast, so maybe I'll not spoil anything else.

00:00:57.039 --> 00:01:04.640
But uh indeed uh the differences between testing and experiments are something very important in our profession.

00:01:04.640 --> 00:01:18.159
Actually, it's more important than you would uh probably think in the world of testing regimes, where fire safety is essentially largely dominated by codes and standards and standardization.

00:01:18.159 --> 00:01:23.680
So tests are inherently a part of the fire safety system out there.

00:01:23.680 --> 00:01:27.359
They're they're one of the most important key components out there.

00:01:27.359 --> 00:01:40.400
And uh also fire safety engineering is a thing where you need data, when you need models, when you need to understand real-world fire phenomena, and you have to base that fire safety engineering on something.

00:01:40.400 --> 00:01:42.079
And here comes the clash.

00:01:42.079 --> 00:01:46.959
Can you base fire safety engineering on outcomes of fire tests?

00:01:46.959 --> 00:01:56.079
This is probably the key difference between tests and experiments, the context, the reason they were performed, and what you can actually learn from them.

00:01:56.079 --> 00:02:00.400
In this podcast episode, we will learn when a test stops being a good test.

00:02:00.400 --> 00:02:14.800
We will learn what it means to run an experiment, what it means to design experiments, and what caveats are there when you try to get some data or important knowledge out of a test and put it directly into your engineering.

00:02:14.800 --> 00:02:20.319
I think it's just a bunch of really interesting thoughts dropped in this podcast episode.

00:02:20.319 --> 00:02:23.599
A conversation between two good friends who are both fire geeks.

00:02:23.599 --> 00:02:28.800
So I really hope you will enjoy it as much as I did when I've recorded that.

00:02:28.800 --> 00:02:31.759
So let's spin the intro and jump into the episode.

00:02:36.479 --> 00:02:38.319
Welcome to the Fire Science Show.

00:02:38.319 --> 00:02:42.000
My name is Wojciech Wegrzynski, and I will be your host.

00:02:57.919 --> 00:03:04.400
This episode is brought to you in partnership with OFR Consultants, the UK's leading independent fire engineering consultancy.

00:03:04.400 --> 00:03:15.280
With a multi-award-winning team and offices across the country, OFR are experts in fire engineering committed to delivering pre-eminent expertise to protect people, property, and the planet.

00:03:15.280 --> 00:03:20.159
Applications for OFR's 2026 graduate program are now open.

00:03:20.159 --> 00:03:26.719
If you're ready to launch your career with a supportive forward-thinking team, visit OFRconsultants.com to apply.

00:03:26.719 --> 00:03:32.960
You will join a world-class organization recognized for its supportive culture and global expertise.

00:03:32.960 --> 00:03:37.439
Start your journey with OFR and help shape the future of fire engineering.

00:03:37.439 --> 00:03:42.319
Hello everybody, I am here again with David Morrisset from Queensland University.

00:03:42.319 --> 00:03:42.800
Hey David.

00:03:42.800 --> 00:03:44.319
Hey Wojciech, thanks for having me back.

00:03:44.319 --> 00:03:46.000
Yeah, well, welcome back, welcome back.

00:03:46.000 --> 00:03:47.919
A swift comeback to the show.

00:03:47.919 --> 00:03:50.719
You seem to be really into podcasting, my friend.

00:03:51.759 --> 00:03:54.080
I mean, I just like I like having a good conversation with you.

00:03:54.080 --> 00:03:54.800
What can I say?

00:03:55.039 --> 00:03:56.879
Yeah, well, thank you, thank you.

00:03:56.879 --> 00:03:58.159
And uh and vice versa.

00:03:58.159 --> 00:04:10.240
I really enjoy having conversations uh with you, especially when you give me an interesting topic and then you put me into rabbit hole reading about chickens hitting jet fans for an hour.

00:04:10.240 --> 00:04:16.560
And here we are uh now discussing this further and what it means for the broader fire community.

00:04:16.560 --> 00:04:21.920
What an interesting pathline of my life that put me into this uh position right now.

00:04:21.920 --> 00:04:27.439
But anyway, yeah, you brought out like a very interesting topic: test versus experiments.

00:04:27.439 --> 00:04:32.319
And whenever I think this, I hear uh a very loud voice of Guillermo Rein in my head.

00:04:32.319 --> 00:04:34.639
Uh Jake, this is a test, not an experiment.

00:04:34.639 --> 00:04:38.160
And uh this is something very strongly embedded in my consciousness.

00:04:38.160 --> 00:04:41.519
So let's let's probably start with the very hard question.

00:04:41.519 --> 00:04:45.600
What is the difference between a test and experiment?

00:04:46.160 --> 00:04:48.879
Okay, so this is this is really the crux of what I mean.

00:04:48.879 --> 00:04:51.439
We kind of got to this point the last time I was on the show, right?

00:04:51.439 --> 00:04:53.759
And sort of discussing how do we make measurements, right?

00:04:53.759 --> 00:04:59.120
Because we we make measurements in the context of let's say live fire tests or experiments.

00:04:59.120 --> 00:05:08.240
And the really where we draw the line of what becomes a test versus an experiment comes down to basically what is the, I guess what's the the goal and the philosophy behind it, right?

00:05:08.240 --> 00:05:21.120
Because we mentioned this last time too, but something, a place to start with this is I think most would accept this being true, but I think we've accepted that fire is intrinsically a very complicated process, right?

00:05:21.120 --> 00:05:30.319
It combines all these various gas phase and solid phase phenomena, and these include heat transfer, you know, mass transport, chemical reactions, all sorts of complexity.

00:05:30.319 --> 00:05:39.279
And to resolve all of this from first principles, to say I want to be able to assess blank systems uh and and put them on buildings, right?

00:05:39.279 --> 00:05:45.199
There's a degree of complexity there where a lot of what we do is we say, let's just light it up and see how it performs.

00:05:45.199 --> 00:05:45.680
Right.

00:05:45.680 --> 00:05:52.000
And so uh there's actually a great quote by Howard Emmons that sort of rings in my head whenever I start thinking about this, right?

00:05:52.000 --> 00:05:54.319
That in terms of this intrinsic complexity, right?

00:05:54.319 --> 00:05:56.959
And there's a there's a paper, I think it was called The Growth of Fire Science.

00:05:56.959 --> 00:06:00.639
And he he said something to the effect of, now what is fire science?

00:06:00.639 --> 00:06:01.519
You know, quotes.

00:06:01.519 --> 00:06:06.720
It is it's certainly not something as simple as basic chemistry or physics or something to that effect, right?

00:06:06.720 --> 00:06:15.920
And then then he goes on to talk about the interactions between all these physical and chemical phenomena, right, and and how how complex that is.

00:06:15.920 --> 00:06:17.199
And it's really interesting, right?

00:06:17.199 --> 00:06:26.800
And and so I think one one way that that manifests itself is to really fully resolve these complex processes, is we utilize experimentation and testing.

00:06:27.040 --> 00:06:35.120
And you used a very important uh caveat in in when you were introducing this, because you said, I want to do a test or experiment to put something on my building.

00:06:35.120 --> 00:06:44.240
So, besides like scientific curiosity and our need to understand the underlying physics, which I guess is a driver for many of us fire researchers.

00:06:44.240 --> 00:06:55.120
We simply love it, and it's so complex, it's so intellectually engaging to study fire phenomena because of how endlessly complex they are.

00:06:55.120 --> 00:07:02.480
Like, literally, there are layers of complexities, you can just add them and add them and add them beyond human comprehension.

00:07:02.480 --> 00:07:14.639
This is what a lot of us find attractive in fire science, but it's also a practical field where in the end you need to place things on your buildings, you need to have people in those buildings.

00:07:14.639 --> 00:07:21.360
Those buildings will encounter fires, and you want those people to not suffer from those fires.

00:07:21.360 --> 00:07:34.879
So there's a whole space of responsibility and you know, this whole machine of making sure that we are sure of what we're doing.

00:07:34.879 --> 00:07:50.399
And in here, it it kind of narrows it down because you stop experimenting in a blank space of curiosity, but you have a functional goal in the end to have uh something that I would call a safety framework or fire safety framework.

00:07:50.560 --> 00:07:52.720
And I think, wow, that's it's it's an interesting point, really.

00:07:52.720 --> 00:08:04.079
Now that we're going to the weeds here, of really there's like a spectrum of whenever we're doing some sort of, let's say, testing or you know, experiments in a lab, there's a spectrum between practical outputs and fundamental insight.

00:08:04.079 --> 00:08:08.560
Now, if you do them right, you can do both pretty effectively, right?

00:08:08.560 --> 00:08:17.120
Um, because the more that you get fundamental insight, sometimes you need to go to the scale of understanding, you know, the structure of a diffusion flame.

00:08:17.120 --> 00:08:24.000
Sometimes there's a lot of practical insight of understanding things like the actual chemical reactions that produce emissions in a fire, so on.

00:08:24.000 --> 00:08:37.279
Some of that fundamental insight is done on a scale, or the experiments to gain that insight are done on a scale that isn't immediately apparent of what the practical outputs are, but the insight you get from that informs engineering judgment, right?

00:08:37.279 --> 00:08:44.320
And so something that I feel a lot of a lot a lot of criticism can come around experiments not being practical enough, right?

00:08:44.320 --> 00:08:48.559
People doing experiments on tiny little pieces of, you know, PMMA or whatever, right?

00:08:48.559 --> 00:08:49.519
Uh, what have you?

00:08:49.519 --> 00:08:53.440
But the insights that come from that are extremely impactful, right?

00:08:53.440 --> 00:08:58.399
Because if you can gain generalizable insight, right, that informs engineering judgment.

00:08:58.399 --> 00:09:00.559
The engineering judgment gives you practical outputs.

00:09:00.559 --> 00:09:00.799
Right.

00:09:00.799 --> 00:09:06.000
That's kind of like the way that I've always sort of been taught is the progression of the development of knowledge.

00:09:06.000 --> 00:09:08.240
Anyway, all this dance around the initial question, right?

00:09:08.240 --> 00:09:11.759
We haven't even which I haven't even addressed yet, is what is the difference between a test and an experiment?

00:09:11.759 --> 00:09:22.720
And I and I think it's up there the there's multiple ways to cut that definition, but I think for the sake of just today's discussion, I'll define it, then I'll let you define it, and then we'll we'll probably, you know, let's meet somewhere in between.

00:09:22.720 --> 00:09:27.919
I would say a test in particular generally follows a sort of standard procedure.

00:09:27.919 --> 00:09:34.240
We're looking at something that is effectively a standardized procedure by which you can achieve some sort of output, right?

00:09:34.240 --> 00:09:51.279
Uh generally, the idea would be to look at some fire performance under a live fire condition, whether that's in a sort of simulated thermal environment or exposure to a real fire source, um, which allows the output of those tests to be benchmarked against other materials or systems, right?

00:09:51.279 --> 00:09:54.480
And we can probably rattle off dozens of examples of that.

00:09:54.480 --> 00:09:56.480
That's probably a good place to go after this.

00:09:56.480 --> 00:10:04.559
But on the other hand, an experiment would be something that doesn't follow a sort of standardized, agreed upon consensus-based procedure, right?

00:10:04.559 --> 00:10:11.519
Um, instead, you investigate some sort of phenomenon purely for the uh quantification of some element of that, right?

00:10:11.519 --> 00:10:29.679
So you go in with a question, you say, I want to develop an ex an experiment to articulate this relevant bit of physics, whether that's just to explore it for the sake of knowledge, whether that's to validate a theory, or whether that's to compare to something like numerical simulation, all of which are very valid reasons to do an experiment.

00:10:29.679 --> 00:10:39.039
But in terms of the process, it's it's less of a structured, I guess prescribed is the right word, there's prescribed process by which you conduct said experiment.

00:10:39.360 --> 00:11:06.000
When you dropped uh this question at me, my brain immediately went back to an episode which I had not that long ago with Mike Spearpoint and Constantinus from afar about uh balcony fires, where they've done a set of experiments on balcony fires, and uh the outcome was to some extent expected, you know, more combustible material on a balcony equals to worse fire.

00:11:06.000 --> 00:11:10.080
It's uh it's a conclusion that is safe to be made even without one experiment.

00:11:10.080 --> 00:11:28.240
But in the end, they've done those beautiful experiments, and the outcome of those experiments was quantification of the hazard, it was a tangible proof of what the hazard is, it was a ranking of hazards present in that setting.

00:11:28.240 --> 00:11:44.399
Like what they've created, I mean, they've created some new knowledge and they've seen some interesting things during those experiments, but they have created a measurable proof that something happens in one way or another.

00:11:44.399 --> 00:11:58.159
And I think as our profession matures, the less it will be an outcome of community wisdom and the opinions of people, and the more it should be based on proof.

00:11:58.159 --> 00:12:07.440
And here we reach a point where you can gain proof through testing, you can gain proof through experiments, though they will be different proofs.

00:12:07.440 --> 00:12:14.559
For me, if we talk testing, two words come into my mind: repeatability, reproducibility.

00:12:14.559 --> 00:12:23.279
Those are two things that characterize tests for me, because if a test is not reproducible or repeatable, it's not a good test.

00:12:23.279 --> 00:12:34.879
And in the end, the outcome is some sort of ranking or consistency within the testing framework, or placing something within a predefined framework.

00:12:34.879 --> 00:12:55.840
While an experiment, while it can also provide you a proof and a guidance and knowledge necessary to move your project forward, it gives you an exploration and perhaps answers questions you have not asked and allows you to gain insight, but inherently makes comparison between different experiments extremely challenging.

00:12:56.080 --> 00:13:01.759
And I guess this might be a good stage to rattle off a few examples of standard tests that come to mind, right?

00:13:01.759 --> 00:13:04.879
Yeah, let's get for the for the listeners just sort of uh context.

00:13:04.879 --> 00:13:07.679
Let's drop some things on the playground to play with.

00:13:07.679 --> 00:13:08.159
Definitely.

00:13:08.159 --> 00:13:14.720
So I mean, I don't know if this is a a controversial sort of way to frame the problem, but let's lump them into two sort of larger categories.

00:13:14.720 --> 00:13:20.240
I mean, again, this is people from testing agencies are probably going to be rolling their eyes at this oversimplification, of course.

00:13:20.240 --> 00:13:21.600
Like I fully appreciate that.

00:13:21.600 --> 00:13:27.279
But let's let's lump things into some applications of testing include furnace testing, of course, right?

00:13:27.279 --> 00:13:29.519
So looking at standard furnace testing, right?

00:13:29.519 --> 00:13:31.039
Kv4, let's go.

00:13:31.039 --> 00:13:33.039
And then there's another large category.

00:13:33.039 --> 00:13:36.799
We could look at sort of the general reaction to fire classification testing.

00:13:36.799 --> 00:13:37.120
Right.

00:13:37.120 --> 00:13:50.399
So whether that's anything from you know the SBI, the single burning item, uh, whether that's, you know, you can look at there is there are standards out there for things like cone calorimetry, um, whether those are applied in many regulatory environments, there's exceptions to that.

00:13:50.399 --> 00:13:52.159
There are places where that that can be used.

00:13:52.159 --> 00:14:00.639
Um, there's other tests looking at things like the LIFT test, the lateral ignition and flame transport or spread test, however you want to define the acronym.

00:14:00.639 --> 00:14:02.639
You know, again, it's a standard test procedure.

00:14:02.639 --> 00:14:07.360
Uh, and then the outcomes from that are, you know, are standardized in terms of what you should be looking for.

00:14:07.360 --> 00:14:10.080
And we can go on and list off dozens of them, right?

00:14:10.080 --> 00:14:23.679
But the two sort of major categories I would think of is one that has sort of a temperature boundary condition and one that looks at either a sort of the world of exposing things to heat fluxes or exposing things to essentially direct contact with something like a burner.

00:14:23.679 --> 00:14:34.159
Um and also from uh from a cladding perspective, we can even incorporate things like the large-scale cladding tests, looking at you know, BS 8414 type testing and all the European equivalents.

00:14:34.159 --> 00:14:42.799
Um these are when we're talking about standard test methods, we're talking about these kinds of tests where we can take a material, a product, or a system and use them in this test.

00:14:42.799 --> 00:14:52.240
And the outcomes from those tests should allow us to more or less index them against one another within the the, how do I say this, within the context of that test, though, right?

00:14:52.240 --> 00:14:55.440
The outcomes are limited to basically the scenario that we're looking at.

00:14:55.440 --> 00:14:59.759
Because when we're doing something like a standard test, we're accepting a certain scenario, right?

00:15:00.159 --> 00:15:10.159
And in here, I I put forward the question: should that scenario be closely representative to real-world fire?

00:15:10.159 --> 00:15:22.960
Because, you know, perhaps I'm oversimplifying it, but I find fire resistance is especially difficult to understand by people who are not working in fire testing.

00:15:22.960 --> 00:15:32.000
For me, you know, fire resistance, a class of minutes, REI 60, this is a very precise thing.

00:15:32.000 --> 00:15:55.279
This is an extremely precise evaluation of the performance of a given assembly in very specific testing conditions, in a very specific device, measured in an extremely highly specified way, in a very robust system, in a very repeatable and uh reproducible manner by an accredited laboratory that has competencies to do so.

00:15:55.279 --> 00:15:56.960
And it's only that.

00:15:56.960 --> 00:16:01.600
And for a layman, it's usually, oh yeah, this can resist 60 minutes of fire.

00:16:01.600 --> 00:16:08.080
Well, no, it nowhere says in "resistance to fire of 60 minutes" that it resists fire for 60 minutes.

00:16:08.080 --> 00:16:10.159
But but it it's it's kind of the thing, right?

00:16:10.399 --> 00:16:17.120
I think this is this is a really important one to discuss, at least in one aspect of furnace testing, right, is the idea of the of the minutes, right?

00:16:17.120 --> 00:16:20.399
Of the output of this, because every test will have an output, right?

00:16:20.399 --> 00:16:23.039
And the way that you frame that output is really important, right?

00:16:23.039 --> 00:16:32.320
Where you draw the line between something passing or failing, where the where you draw the line of the difference between an A-rated material and a B-rated material, these all have huge significance, right?

00:16:32.320 --> 00:16:36.240
And and the output of a furnace test is basically the exposure time, right?

00:16:36.240 --> 00:16:45.200
And if you go back to the original work, you know, you can read Angus Law and Luke Bisby's paper from the University of Edinburgh looking at the rise and rise of fire resistance, it's an excellent paper.

00:16:45.200 --> 00:16:50.320
But looking at sort of some a bit of a historical perspective of the development of furnace testing, right?

00:16:50.320 --> 00:17:02.159
And you see the need that arose at the time, you know, well over 100 years ago, to develop basically a testing regime for structural members and structural elements, building elements.

00:17:02.159 --> 00:17:05.359
And actually our boundary conditions from that haven't really changed.

00:17:05.359 --> 00:17:06.480
The curve is the same.

00:17:07.039 --> 00:17:09.119
This goes back to what I've asked before.

00:17:09.119 --> 00:17:11.839
Shall it be representative of a real world case?

00:17:11.839 --> 00:17:15.359
And here you you're touching on a very, very important thing.

00:17:15.359 --> 00:17:22.480
Because today we are perfectly aware that it is not representative of a real world fire, the exposure, the boundary condition.

00:17:22.480 --> 00:17:24.640
It is representative of some fires.

00:17:24.640 --> 00:17:30.480
Yes, fires exist that grow and decay like... well, the standard fire curve doesn't decay.

00:17:30.480 --> 00:17:33.119
That's another podcast episode coming to you soon.

00:17:33.119 --> 00:17:36.160
But they could grow like a standard fire.

00:17:36.160 --> 00:17:41.759
There are classes of fires that grow like that, but it's one of a million; there's a lot of different fires out there.

00:17:41.759 --> 00:17:48.559
So so it's very hard to state that this test uses a real-world fire exposure as a boundary condition.

00:17:48.559 --> 00:17:49.279
It does not.

00:17:49.279 --> 00:17:55.920
But does it mean it's worthless because we have a hundred years of using that?

00:17:55.920 --> 00:18:04.880
Because we've tested countless amounts of materials with that, because uh you could not put an argument that it did not create safety.

00:18:04.880 --> 00:18:13.839
It has created, it has resulted in safety, in safe applications and globalization of fire safety engineering in a way.

00:18:13.839 --> 00:18:16.960
So is it bad that the standard doesn't replicate real life?

00:18:16.960 --> 00:18:17.599
I don't know.

00:18:17.599 --> 00:18:21.440
If we consider it an experiment, it's a horrendous experiment.

00:18:21.440 --> 00:18:26.160
If we consider it a test, it perhaps is not as bad as I I would usually claim.

00:18:26.400 --> 00:18:29.839
But even within within that context, I think it's important.

00:18:29.839 --> 00:18:37.039
The key thing I'm getting at with the outputs of a fire resistance test is the structure of the of the output being in minutes, right?

00:18:37.039 --> 00:18:42.480
Because if you go back to the original definition of it, they you know they created this fire condition within a furnace.

00:18:42.480 --> 00:18:45.759
They tested an assembly for a certain amount of time exposed to that.

00:18:45.759 --> 00:18:51.920
The hard part is that that gets translated into the everyday vernacular and stays in minutes, right?

00:18:51.920 --> 00:19:00.799
So, like you said, this is a very specific, this is a very specific condition that is maybe representative of a certain class of fires.

00:19:00.799 --> 00:19:04.400
Not every fire, sure, but you know, you you need to benchmark against something.

00:19:04.400 --> 00:19:05.279
I appreciate that.

00:19:05.279 --> 00:19:08.079
But the scary part is is the output being in minutes.

00:19:08.079 --> 00:19:13.920
To this day, there are many engineers who treat that as time in a quote unquote real fire.

00:19:13.920 --> 00:19:20.000
Yes, so people are using this as a benchmark to compare against things like egress times and things that are based on real time.

00:19:20.000 --> 00:19:30.720
So the time it takes to get people out of a building, or the time it takes for the fire service to arrive, or the time it takes for a fire to spread from compartment to compartment, those are real time.

00:19:30.720 --> 00:19:33.519
Those are based on uh that is in real time.

00:19:33.519 --> 00:19:41.759
And then those cannot be benchmarked against the outputs of something like a standard fire test, because the time in that furnace is a specific condition.

00:19:41.759 --> 00:19:45.039
The the ranking should almost just be called points or something.

00:19:45.039 --> 00:19:45.359
Right?

00:19:45.359 --> 00:19:55.200
You know, like that's uh that's something I remember Luke Bisby used to always say: if we just called it points, that would reduce a lot of the miscommunication to various parties who use these.

00:19:55.200 --> 00:20:01.200
Because of course you and I are buried in these this world of of testing and experimentation every day.

00:20:01.200 --> 00:20:09.839
But it's actually kind of not immediately apparent, whether you're working in the engineering space or, you know, learning about these tests from the onset.

00:20:09.839 --> 00:20:15.440
If it says it in minutes, it's a very easy thing to misinterpret in terms of you know what does that actually mean.

00:20:15.920 --> 00:20:18.799
Which brings us to like practicality of those.

00:20:18.799 --> 00:20:26.559
Like one practical aspect is having some sort of ranking of materials; I see benefits of that.

00:20:26.559 --> 00:20:28.480
It could be considered useful.

00:20:28.480 --> 00:20:31.279
You need sometimes ranking of those to do that.

00:20:31.279 --> 00:20:44.480
But if a fire engineer actually is tasked with uh designing a load-bearing structure that can survive a fire, uh fire resistance has proven to be not the worst proxy in the world uh of that.

00:20:44.480 --> 00:20:53.119
But of course, for many aspects of structural design, it's challenging and inaccurate, and really you can simply do better.

00:20:53.119 --> 00:21:05.680
However, if a fire engineer is burdened with providing a proof, or uh, you know, uh yeah, basically a proof that the load bearing will be maintained, they may be looking into some experimental data.

00:21:05.680 --> 00:21:15.039
And then when you start designing an experiment that answers your question in real-world minutes, like what's the real world minute time of collapse of my structure?

00:21:15.039 --> 00:21:35.359
We this is a very relevant question today when we are building those giant uh warehouses with stacking units, uh, with with the steel structure inside that can span over multiple levels, and you put 2,000 people into that warehouse and you have to evacuate them in five minutes, and there's a good chance it's gonna collapse after 10.

00:21:35.359 --> 00:21:38.799
So this is a very practical question when it will collapse.

00:21:38.799 --> 00:21:41.200
How do we design an experiment to answer that question?

00:21:41.440 --> 00:21:42.240
That's an interesting one.

00:21:42.240 --> 00:21:50.240
Before we move on to discussing the transition to experiments, though, in the context of uh the standard furnace too, I think there's one more element to discuss.

00:21:50.240 --> 00:21:53.119
And this is something that I know your lab is looking into too, right?

00:21:53.119 --> 00:21:57.279
But and and I agree that there is a there's a utility to benchmarking.

00:21:57.279 --> 00:22:14.799
I mean, I mean that's like, I am an absolutely huge believer in the ability to use, you know, standard tests to benchmark performance, uh so long as it's understood within the context of those tests, uh and those tests are done in a way that the outputs are actually applicable to the context, again, of the test.

00:22:14.799 --> 00:22:22.640
But the thing about furnace testing that I think we've discovered recently too is if you put, you know, say a mass timber element uh in the furnace, right?

00:22:22.640 --> 00:22:24.960
What is I mean, you you can speak from your experience too.

00:22:24.960 --> 00:22:31.599
I know there's been many studies that have looked at this, but we know that conditions within that furnace are following a temperature time curve.

00:22:31.599 --> 00:22:37.759
And we know that the combustible elements of the wall assembly are contributing to that, those conditions within the furnace.

00:22:37.759 --> 00:22:42.559
So you just inherently end up with a different amount of fuel being injected into that furnace.

00:22:42.559 --> 00:22:46.400
So you're the the the boundary conditions from a temperature perspective aren't changing.

00:22:46.400 --> 00:22:51.440
But from the perspective of the actual fuel being injected into that furnace, things are changing.

00:22:51.440 --> 00:23:00.400
In the way that, you know, if you have a timber compartment on fire, if there's a couch on fire next to it, the couch doesn't know to regulate itself to burn less to match temperature time curve, right?

00:23:00.400 --> 00:23:02.640
The couch is just gonna do what the couch is gonna do.

00:23:02.640 --> 00:23:14.400
And so we're introducing an interesting complexity here that just highlights that this wasn't, you know, the intent: the original fire resistance framework was to look at structures that were inherently non-combustible.

00:23:14.400 --> 00:23:23.920
That was one of the original intents, and that the fire resistance framework was designed so that the compartment could withstand the burnout of the contents of that compartment.

00:23:23.920 --> 00:23:25.680
Those were like inherent assumptions.

00:23:25.680 --> 00:23:30.400
And I just think it's interesting that we're starting to sort of challenge those assumptions in the context of what we're doing now.

00:23:30.720 --> 00:23:39.359
Uh yeah, so so I've already multiple times I I've uh said that's a really horrible way to assess the properties of timber through fire resistance testing.

00:23:39.359 --> 00:23:41.279
And uh I I have papers on that.

00:23:41.279 --> 00:23:44.799
I truly uh despise this way of thinking about it.

00:23:44.799 --> 00:23:58.240
But if you for a second forget about the utility of it as in those elements reaching the building, if you consider it just within the testing framework, it kind of puts the timber in a rank with other elements.

00:23:58.240 --> 00:24:16.319
Like you you apply the same repetitive thing, you apply the same uh way of testing, you provide pretty much the same uh consistent metrics of performance in terms of minutes, in terms of load bearing, in terms of integrity, in terms of insulation.

00:24:16.319 --> 00:24:23.519
So you kind of perform the same thing to this to the slightly different element, but in a very same way.

00:24:23.519 --> 00:24:24.960
You just perform a test.

00:24:24.960 --> 00:24:29.920
From this perspective, the test has not failed yet because it has been done in the same way.

00:24:29.920 --> 00:24:33.839
And in a way, it allows you to have verified the timber.

00:24:33.839 --> 00:24:48.559
The problems with it are: one, the timber kind of games the boundary condition of the test, so it actually does interfere with the boundary condition in a way that no other tested element does.

00:24:48.559 --> 00:24:55.759
It's like you know, you play blackjack, but you're allowed to look at other people's hands or or the or the casino hand.

00:24:55.759 --> 00:24:57.759
So you're gaming the system in a way.

00:24:57.759 --> 00:24:58.559
That's one problem.

00:24:58.559 --> 00:25:13.519
And the second, the end world utility of that test is completely different than the end utility of different materials where you could perhaps use it as a proxy of your structural fire safety.

00:25:13.519 --> 00:25:16.880
In here, I would say, um no, not truly.

00:25:16.880 --> 00:25:25.680
There are too many other things to be considered, which again have been so like if you just consider this as a testing framework, it it kind of works.

00:25:25.680 --> 00:25:32.640
It's just, you know, the goal of the test suddenly is misaligned and the boundary condition is broken.

00:25:32.960 --> 00:25:48.160
I mean, like you said earlier, uh, we have a track record that shows, for particularly if we're looking at, you know, let's say the tried and true non-combustible materials within the fire resistance framework, we have an idea of whether we're just getting lucky or whether we've actually truly provided implicit safety, right?

00:25:48.160 --> 00:25:50.960
There is an element to which there is at least a track record there.

00:25:50.960 --> 00:26:00.400
Uh and and I think you mentioned a really interesting point that with the introduction of timber, it's fair to say that we're still matching the conditions of the test, right?

00:26:00.400 --> 00:26:03.200
We are still achieving the standard temperature time curve.

00:26:03.200 --> 00:26:10.400
We're achieving some of the standard conditions, but like you also said, we know that the conditions in the furnace are just different, right?

00:26:10.400 --> 00:26:15.599
So it it raises an interesting question of it's not that there's anything intrinsically wrong with the tests.

00:26:15.599 --> 00:26:17.519
Like you said, the test is what the test is.

00:26:17.519 --> 00:26:23.920
It comes down to our interpretation of is this an adequate condition for what we are then using those outputs for.

00:26:23.920 --> 00:26:24.319
Right.

00:26:24.319 --> 00:26:27.680
And I think that's a really uh that's a it's an interesting question.

00:26:27.680 --> 00:26:36.000
And I'm not and I'm not convinced that, like you said, that in all structural applications we have enough information from fire resistance alone for something like timber elements.

00:26:36.559 --> 00:26:45.920
I just find this single piece of information to be perhaps interesting, but not a complete proxy in which you can encapsulate the problem.

00:26:45.920 --> 00:26:58.720
Again, also in this podcast, I had multiple guests with with whom we've agreed that you need to test the products in their end-use condition, like they are supposed to be used in the building.

00:26:58.720 --> 00:27:10.240
And with an assumption that your boundary condition, the thermal boundary condition, the exposure condition, whatever it is, that it kind of represents the hazard.

00:27:10.240 --> 00:27:17.599
So like SBI and uh and external walls, I mean, it's a test method that's very well established.

00:27:17.599 --> 00:27:25.839
It's just uh the characteristics of the test itself are nowhere close to what external walls are exposed to when uh they face venting fires.

00:27:25.839 --> 00:27:32.319
So uh a test is a test, it works, it just doesn't really answer the questions that you're asking.

00:27:32.319 --> 00:27:47.920
Now, the question is, like, as we build up our knowledge and we try to modify those tests, we also kind of lose, you know, the back trace, the history of outcomes that those tests created.

00:27:47.920 --> 00:28:12.799
So by changing and narrowing and fine-tuning those tests, moving closer to individual experiments, not a repetitive, you know, framework of testing, you start to be closer to that reality, whatever reality is and however you define reality, but you lose more and more of that comparativeness, uh, this comparability of those tests altogether, you know.

00:28:12.799 --> 00:28:19.519
And I think it's a very challenging thing to balance out, like when it's worth losing that.

00:28:19.519 --> 00:28:27.440
And uh, answer me that, and I'll give you an example where I found an extremely interesting case.

00:28:27.759 --> 00:28:33.200
Yeah, I think understanding the the history of of a testing regime, I think, is is really important, right?

00:28:33.200 --> 00:28:55.279
You mentioned the SBI being applied to cladding systems, which, we know, you know, for anyone who's followed the Grenfell Tower inquiry, right, led to a myriad of issues: combustible cladding effectively gaming those testing regimes to show that certain products were able to pass those tests, but you know, in a real-world application, those performed terribly, right?

00:28:55.279 --> 00:29:04.960
And this comes down to again, this there's a lot more context actually provided in um there was a paper written by Angus Law and colleagues at the University of Edinburgh, who was my supervisor for my PhD, right?

00:29:04.960 --> 00:29:07.839
So that's why I'm so familiar with all of the bits that he's done.

00:29:07.839 --> 00:29:12.160
But uh that was about the sort of the black box of the Euroclass system.

00:29:12.160 --> 00:29:14.559
And that was it's an interesting paper, it's a fascinating paper.

00:29:14.559 --> 00:29:20.559
But one element to sort of to think about with the with the SBI in particular is really if you go far enough back, right?

00:29:20.559 --> 00:29:26.559
It's it's basically it's a it's a corner test within a you know, a it's a single burning item within a a wall corner.

00:29:26.559 --> 00:29:31.759
Uh it's akin in a lot of similarities to things like an almost like an ISO room corner test, right?

00:29:31.759 --> 00:29:34.160
I mean again, obviously they're they're separate tests.

00:29:34.160 --> 00:29:44.559
It wasn't a direct translation necessarily, but um, and if you look at the context of those tests, that is very different than the scenarios in which you see in an exterior wall system, right?

00:29:44.559 --> 00:29:51.680
Those things are designed to look at a single burning item within a corner, to look at basically interior finishes on a wall within a within a compartment.

00:29:51.680 --> 00:29:53.759
It seems to be the more applicable space there, right?

00:29:53.759 --> 00:29:59.839
Um I wasn't there; there are people who've been on your podcast who were there for these discussions in the development of the Euro

00:29:59.839 --> 00:30:02.000
class system, but I wasn't there for those.

00:30:02.000 --> 00:30:06.559
But I'm imagining that was sort of the you know the original intent for some of these sort of tests.

00:30:06.559 --> 00:30:14.799
And so to apply those to cladding systems, uh you've already broken the sort of the context that was established for these, right?

00:30:14.799 --> 00:30:21.119
And so it's just that there is an intrinsic assumption in using the test that we're using it in a way that, you know, it should be applied.

00:30:21.839 --> 00:30:23.599
I would say again, there's nothing wrong with the test.

00:30:23.599 --> 00:30:27.279
Like if you just consider tests being a test, it's it's just a test.

00:30:27.279 --> 00:30:29.359
It's the way, uh, what you use it for.

00:30:29.359 --> 00:30:39.440
I I promised you an example from my end, how this chase of reality was abandoned, and that was in uh computer simulations of car park fires.

00:30:39.440 --> 00:30:45.200
So I have a long history of uh fighting with everyone else about uh car park fires and design curves.

00:30:45.200 --> 00:30:58.640
And uh actually, what we are doing in the end is that we still always apply the same curve that comes from one single experiment done by TNO in the Netherlands in 1999, I believe.

00:30:58.640 --> 00:31:18.240
And we still use this curve, and we're facing a load of backlash from the community around us, because this curve is so wrong, it doesn't represent a car, it doesn't represent an electric vehicle; there's like a ton of different studies done in different ways, and now I've analyzed all of them.

00:31:18.240 --> 00:31:19.839
Like I had a podcast episode about it.

00:31:19.839 --> 00:31:26.480
We've analyzed all of them, like literally every single available data source that we could find in existence.

00:31:26.480 --> 00:31:30.000
There's no unbiased car park design fire curve.

00:31:30.000 --> 00:31:31.920
Simply does not exist.

00:31:31.920 --> 00:31:34.400
Every one of them comes with caveats.

00:31:34.400 --> 00:31:55.519
So if all of them are wrong, you could say, then let's just like keep one that we're using and be consistent with it, because at least I have ability to compare the outcomes of this simulation with 200 other simulations that I've done in my life, and it gives me a reasonable point of evaluating whether this smoke control system is good or bad.

00:31:55.519 --> 00:32:05.839
And it's not because of the realism of my design curve, it's because of the back catalogue of times it has been applied.

00:32:05.839 --> 00:32:18.799
Uh and and for me, this was an example of why seeking the truth, you know, the realism, the real fire is simply failing you because define what the real fire is.

00:32:18.799 --> 00:32:19.440
Not possible.

00:32:19.440 --> 00:32:19.920
Yeah.

00:32:20.079 --> 00:32:22.880
And I think there's, I mean, we've said this before, right?

00:32:22.880 --> 00:32:28.000
But there isn't there is immense utility in being able to have a benchmark for things, right?

00:32:28.000 --> 00:32:28.640
Absolutely.

00:32:28.640 --> 00:32:33.599
And so I think that on one level, it's important to have things that you can benchmark against, right?

00:32:33.599 --> 00:32:48.559
But I think it's also really important to like you just said, if you choose something as a comparative purpose, for a comparative purpose, I should say, then there is utility in that, but not to convince yourself that this is every fire you'll ever experience within this within this compartment, right?

00:32:48.559 --> 00:32:52.079
Or within this you know context, whether it's a car park or or or a building.

00:32:52.480 --> 00:33:01.119
In a car park, the vehicle can burn in so many different ways that are like completely different causes of fire, completely different consequences.

00:33:01.119 --> 00:33:05.279
It can be like one vehicle in the corner, five vehicles in the middle of it, you know.

00:33:05.279 --> 00:33:08.240
It can be a very tall car park, very low car park.

00:33:08.240 --> 00:33:09.519
It can have smoke control.

00:33:09.519 --> 00:33:12.880
Like there are thousands of ways a vehicle can burn.

00:33:12.880 --> 00:33:22.000
So there is no way you can say one fire is representative, just like the standard time-temperature relationship is not representative of every single fire out there, right?

00:33:22.240 --> 00:33:25.200
But I mean, just go back to that Howard Emmons quote that we started with, right?

00:33:25.200 --> 00:33:30.640
Of like, you know, it's you know, fire science is not a simple subject, you know, it's this complex mix of physics and chemistry.

00:33:30.640 --> 00:33:38.079
And and the reason that we have to we have to resolve the physics down to things like testing and experiments is because it's so complex, right?

00:33:38.079 --> 00:33:43.599
In the same way that if you have that's why we need benchmarks in a lot of uh a lot of cases, yeah, right.

00:33:43.599 --> 00:33:49.200
So it is one of the intrinsic assumptions that we um we sort of started talking about from the out from the outset.

00:33:49.200 --> 00:33:53.359
But I think let's let's maybe even move away from fire testing for a second.

00:33:53.440 --> 00:33:53.599
Yeah.

00:33:53.759 --> 00:33:59.039
And let's talk about, just for a moment, let's talk about shooting chickens into jet fans.

00:33:59.039 --> 00:34:01.279
Because I think why not?

00:34:01.279 --> 00:34:01.599
Why not?

00:34:01.680 --> 00:34:03.039
Yeah, that's a great example.

00:34:03.039 --> 00:34:11.280
It's in the show notes, and uh, I would say it's not yet Angus Law level of writing, but still good enough.

00:34:11.280 --> 00:34:13.599
I highly recommend uh to read it through.

00:34:13.599 --> 00:34:14.800
It's in the show notes.

00:34:14.800 --> 00:34:16.719
Uh check check the paper on your own.

00:34:16.719 --> 00:34:18.800
But please, show me the lesson.

00:34:18.800 --> 00:34:20.079
Show me the lesson, David.

00:34:20.320 --> 00:34:25.119
Well, I mean, uh, you haven't heard this until you've heard a presentation from someone like Angus on this one.

00:34:25.119 --> 00:34:30.320
But um but the the principle of this paper is this paper's called When the Chick Hits the Fan, right?

00:34:30.320 --> 00:34:36.800
And it's a it's an interesting study that's looking at basically the philosophy of standard testing, right?

00:34:36.800 --> 00:34:48.320
And the idea behind this is really if you look at the if you look at the industry of developing basically the turbofan jets for for jet airliners, like for airplanes, basically, right?

00:34:48.320 --> 00:35:13.280
One of the final tests that they have to do to to sort of determine the readiness for bird strikes, so whether or not uh an engine can actually strike a bird and continue on, is they literally will take a compressed air, basically, cannon, right, and they'll spin up one of these turbofans, and they will put effectively what is a frozen chicken in this in this giant air cannon and launch it into the into the engine.

00:35:13.280 --> 00:35:25.039
So, you know, this process by which you're developing these multi-million dollar or pound pieces of equipment, these amazing works of feats of engineering, these turbojet fans, right?

00:35:25.039 --> 00:35:36.239
The final test, really just look at look at readiness, is to literally just chuck a chicken at it, basically, and see if they can like chew it up and continue on without you know completely uh failing the engine.

00:35:36.239 --> 00:35:40.320
And what's really interesting is the paper is marvelously written.

00:35:40.320 --> 00:35:51.679
Where one of my favorite quotes from it is basically, all technological tests, right, um, unavoidably contain irreducible ambiguities that require judgments to bridge, right?

00:35:51.679 --> 00:35:55.119
And so to show these judgments can have real consequences, right?

00:35:55.119 --> 00:36:04.559
Um, and so basically the point of that being that the re the reality is is these engines must be tested to understand if if they can withstand bird strikes, right?

00:36:04.559 --> 00:36:21.440
But if you really take a step back and you look at the in the the absurdity of taking a a you know a chicken and launching it into a jet engine, it's just really sort of it's an interesting sort of thing to comment on because yes, you need a test standard to look at these, right?

00:36:21.440 --> 00:36:28.559
But by assigning that test standard, by by codifying that knowledge to say this is the threshold it must meet, there are implications to that.

00:36:28.559 --> 00:36:32.079
There are consequences where if that that is now what it's designed for, you know.

00:36:32.079 --> 00:36:38.480
So if if if that's your threshold, what happens if the bird strike that it actually experiences is slightly more severe?

00:36:38.480 --> 00:36:39.840
How did we choose that chicken?

00:36:39.840 --> 00:36:40.239
Right.

00:36:40.239 --> 00:36:42.320
What about like, what about other surrogates?

00:36:42.320 --> 00:36:44.960
What if you hit a flock of birds, right?

00:36:44.960 --> 00:36:56.400
So they actually come up with additional tests to look at multiple smaller items or you know, of various combinations of different, you know, sizes and weight distributions of these frozen birds.

00:36:56.400 --> 00:37:06.559
But all of these are really interesting because there's a direct sort of application within fire safety where there is unavoidable ambiguity in developing a test standard.

00:37:06.559 --> 00:37:14.639
But you got to draw the line somewhere and you have to say, you know, what is what is my threshold and what are the consequences of assuming that this is my threshold?

00:37:14.880 --> 00:37:30.000
I think uh a nice parallel in here: you said that uh you want this test of a bird hitting the fan as a proxy of what will happen if a real-world jet engine hits a bird while the plane is flying, hopefully not crashing and killing everyone on board.

00:37:30.000 --> 00:37:31.760
That makes a lot of sense, right?

00:37:31.760 --> 00:37:37.280
Uh but does this prove to you that the real bird will not destroy the jet fan engine?

00:37:37.280 --> 00:37:39.840
In no way does it prove that; it's just a proxy.

00:37:39.840 --> 00:37:56.800
In the same way, a lot of our colleagues, fire safety engineers, would be burdened with having to solve some technical problems, some you know, technical challenge within their building, how to apply a specific piece of equipment in a specific context of a building.

00:37:56.800 --> 00:38:04.000
And they only have this body of knowledge that comes from tests because this is what the manufacturer gives them.

00:38:04.000 --> 00:38:15.760
We will go into how they can design an experiment to solve their problem in a second, but let's for now assume they're not able to design an experiment and they only have the body of knowledge that comes from tests, you know.

00:38:15.760 --> 00:38:24.239
What caveats should those engineers avoid when using that testing data to find an answer?

00:38:24.239 --> 00:38:33.440
Because if the question is, will this uh wall survive 60 minutes and I only have a 60-minute fire resistance test?

00:38:33.440 --> 00:38:42.000
On one hand, it seems very simple, yes, because it's 60 minutes versus 60 minutes, but as you cleverly said before, those are very different minutes.

00:38:42.000 --> 00:38:45.360
So the pathway to achieve the answer is actually quite complicated.

00:38:45.360 --> 00:38:50.880
So how... not always will you use tests to rank things up.

00:38:50.880 --> 00:38:54.960
Sometimes you need to extract information and knowledge from them.

00:38:54.960 --> 00:38:57.119
How can an engineer do that?

00:38:57.519 --> 00:39:00.239
But I mean that is actually the trick, right?

00:39:00.239 --> 00:39:06.239
Because effectively, a test does not provide safety, it provides compliance, right?

00:39:06.239 --> 00:39:07.360
There's a big difference.

00:39:07.360 --> 00:39:11.519
You say that you can tick the boxes that this has been deemed compliant, right?

00:39:11.519 --> 00:39:16.400
And that is an implicit level of safety where you can start extracting information from that.

00:39:16.400 --> 00:39:22.079
And the person to really speak about this would be like Ruben Van Coile and his team, right, looking at adaptive fire testing, right?

00:39:22.079 --> 00:39:40.960
Because there are ways where you can take the frameworks that exist for fire testing, and by you know adding more measurements or or figuring out, you know, the different ways to sort of frame that test, you can still achieve sort of a standard test and extract more information that might actually be usable from an engineering context, right?

00:39:40.960 --> 00:39:47.440
What you can't currently do, right, is basically take things like, let me take a random example that uh comes to mind, right?

00:39:47.440 --> 00:39:50.639
There is an ASTM standard for the LIFT, right?

00:39:50.639 --> 00:39:55.280
The flame, the lateral flame spread test developed by uh Quintiere and his colleagues, right?

00:39:55.280 --> 00:39:56.480
And it's an elegant test.

00:39:56.480 --> 00:40:00.000
It's it's you know, as far as standardized tests go, I could talk all day about that.

00:40:00.000 --> 00:40:07.440
There's there's some really interest, really interesting theory backed up by really clever design of a test system, right?

00:40:07.440 --> 00:40:14.800
And it's very excellent in terms of the way you can sort of drop out some very valuable information very quickly from those tests.

00:40:14.800 --> 00:40:22.320
Now, the caveat being that, you know, one of the parameters that drops out of this is this flame spread parameter called phi, right?

00:40:22.320 --> 00:40:29.280
And you can you can use that to index basically the flame spread behavior across different materials under that test.
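(For context on how that parameter is typically used: in Quintiere's lateral flame spread framework, which underpins the LIFT, the opposed-flow spread rate is often written, as a rough sketch, as

$$V_f \approx \frac{\Phi}{k\rho c\,\left(T_{ig} - T_s\right)^{2}}$$

where $V_f$ is the lateral flame spread velocity, $\Phi$ the flame spread parameter extracted from the test, $k\rho c$ the material's thermal inertia, $T_{ig}$ its ignition temperature, and $T_s$ the preheated surface temperature. Every quantity in that expression is tied to the apparatus's own exposure and preheating conditions, which is exactly the caveat raised next.)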

00:40:29.280 --> 00:40:40.239
One thing that I've seen engineers, or talked with engineers about, using is: can we use those phi values to then approximate flame spread in a design fire scenario?

00:40:40.239 --> 00:40:44.320
And I think uh the the sort of default answer to that will always be no.

00:40:44.320 --> 00:40:49.519
There's always a huge caveat there of these are limited to the context of that test.

00:40:49.519 --> 00:40:52.639
So unless your design fire is a LIFT apparatus, right?

00:40:52.639 --> 00:40:57.360
Like I wouldn't use those values for predictive design conditions, right?

00:40:57.360 --> 00:41:02.639
Again, it's a very, very useful, extremely useful test for benchmarking materials, right?

00:41:03.039 --> 00:41:11.840
And I would say the same challenge would go with using heat release rate per unit area from your cone into your design fire in the building.

00:41:11.840 --> 00:41:25.440
The same would go for taking your oxygen bomb heat of combustion and using it for fuel burning outdoors in a real space, and things like this are endless.
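
[Editor's note: a minimal sketch of the pitfall described above, with invented numbers, showing the naive "cone HRRPUA times burning area" design fire the speakers caution against. The variable names and values are hypothetical.]

    # Illustration of the naive extrapolation warned about in the episode.
    # All numbers are invented; nothing here comes from a real test report.
    hrrpua_cone = 250.0   # kW/m^2, peak HRRPUA from a cone test at some fixed irradiance
    exposed_area = 12.0   # m^2 of lining assumed to burn simultaneously

    naive_peak_hrr = hrrpua_cone * exposed_area  # kW
    print(f"Naive 'design fire' peak: {naive_peak_hrr:.0f} kW")

    # Caveat from the episode: this assumes the real lining sees the same
    # irradiance, ventilation and edge conditions as the small cone sample,
    # and that all 12 m^2 burn at the cone's peak rate at the same time.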

00:41:25.440 --> 00:41:39.440
Sometimes the output looks very familiar and very directly applicable, yet the challenge lies within the context and in what the material actually responded to in that test.

00:41:39.440 --> 00:41:41.920
And I think this is the thing you need to understand.

00:41:41.920 --> 00:41:54.719
Unfortunately, the fire engineer, when they try to apply the outcomes of a test in some form of engineering beyond just ranking, beyond just applying a certificate, code speak, etc.

00:41:54.719 --> 00:42:06.079
If they want to use this in engineering, unfortunately they must be very familiar with the testing regime, standard assumptions, and the practice of how the test is performed.

00:42:06.079 --> 00:42:06.800
That's my opinion.

00:42:06.800 --> 00:42:07.519
Absolutely.

00:42:07.679 --> 00:42:13.840
And that's not to say that we can't get to a stage where we can do engineering analysis with various tests, right?

00:42:13.840 --> 00:42:16.639
But that shouldn't be the baseline assumption.

00:42:16.639 --> 00:42:23.440
The baseline assumption shouldn't be I can use this test to do additional engineering design, right?

00:42:23.440 --> 00:42:24.880
It should be the other way around.

00:42:24.880 --> 00:42:28.000
The outcomes are limited to the test itself.

00:42:28.000 --> 00:42:34.639
And if there is applicability, if I can fully wrap my head around the context, then perhaps there's additional insight I can gain from that.

00:42:34.639 --> 00:42:35.440
Perfect.

00:42:35.679 --> 00:42:48.159
And now, having taken the ability to experiment out of the hands of our engineer, let's give it back into the hands of the fire engineer.

00:42:48.159 --> 00:42:53.360
How can the fire engineer design an experiment to guide them?

00:42:53.360 --> 00:42:58.159
Because you're an experimenter, you know it is very difficult to design those.

00:42:58.159 --> 00:43:08.400
So what would be the pitfalls of designing an experiment, and what does one have to think about when they really need a specific answer to a very specific question?

00:43:08.400 --> 00:43:11.679
How do they get to that answer through experiments?

00:43:11.920 --> 00:43:12.159
Right.

00:43:12.159 --> 00:43:29.360
So in this transition to discussing experiments, let's revisit that definition: an experiment, we'll say, is there to explore fire phenomena, various phenomena or a specific one, and it is not governed by a standard procedure, right?

00:43:29.360 --> 00:43:42.960
So you have the freedom to explore whatever you want and design the experiment the way you want to, but the pitfall is that you don't have the track record of a standardized procedure to lean on.

00:43:43.519 --> 00:43:43.920
Why not?

00:43:43.920 --> 00:43:49.760
Why can't I do that? You can do cone experiments, and they're... Yeah, of course.

00:43:49.840 --> 00:43:50.800
No, no, you're right, you're right.

00:43:50.800 --> 00:43:51.679
And that's an important point.

00:43:51.679 --> 00:43:55.599
All I'm saying is, for example, and that's a great example actually.

00:43:55.599 --> 00:43:58.880
You can take the cone calorimeter, a very standardized piece of equipment.

00:43:58.880 --> 00:44:01.119
And I have used it for experiments all the time.

00:44:01.119 --> 00:44:05.280
Because the second you deviate in one thing from the standard, it's no longer a standard test, right?

00:44:05.280 --> 00:44:08.079
You're using this apparatus to explore something, right?

00:44:08.079 --> 00:44:14.639
And I agree, I think standard testing kit is actually sometimes the best place to start for an experiment, right?

00:44:14.639 --> 00:44:15.360
Absolutely.

00:44:15.360 --> 00:44:20.320
All I'm saying is that once you change one of those things, right?

00:44:20.320 --> 00:44:23.840
You need to understand what the implications of that are, right?

00:44:23.840 --> 00:44:29.760
If I'm changing the size of my sample so it's no longer the standard, but what are the heat transfer effects within that sample?

00:44:29.760 --> 00:44:33.360
Am I truly getting what I think I'm getting? Take the context of the cone calorimeter.

00:44:33.360 --> 00:44:41.920
The reason the sample size is specified as it is, is to achieve a uniform heat flux and create as close to one-dimensional heating conditions as possible.
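
[Editor's note: the idealisation referred to here, sketched under the usual textbook assumptions, is one-dimensional transient conduction into the exposed face of the sample,

    \rho c \, \frac{\partial T}{\partial t} = k \, \frac{\partial^{2} T}{\partial x^{2}}, \qquad -k \left. \frac{\partial T}{\partial x} \right|_{x=0} = \dot{q}''_{\mathrm{net}}

with the imposed heat flux taken as uniform over the surface. Enlarging the sample beyond the heater's uniform footprint, or changing its thickness or edge insulation, makes lateral gradients and edge losses non-negligible, which is exactly the caveat raised in the next lines.]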

00:44:41.920 --> 00:44:46.239
I've done experiments where I've put a sample way bigger than that under the cone.

00:44:46.239 --> 00:44:53.039
And that's okay, but you have to acknowledge the fact that you're probably not going to get nice one-dimensional heat transfer anymore.

00:44:53.039 --> 00:44:54.559
There's going to be other issues at play.

00:44:54.559 --> 00:44:56.880
Which maybe that's a problem, maybe that's not, right?

00:44:56.880 --> 00:45:01.119
This is one that's actually kind of funny, right?

00:45:01.119 --> 00:45:11.679
When we're working on these kinds of things, doing experiments on a standard piece of kit like the cone, we've gotten comments from reviewers and so on that we're not following the standard, right?

00:45:11.679 --> 00:45:12.639
It's an interesting one.

00:45:12.639 --> 00:45:13.760
And I'm sure you've seen some of these.

00:45:13.760 --> 00:45:16.400
Some similar comments to this too, right?

00:45:16.400 --> 00:45:21.119
And it's funny, because nowhere are we necessarily trying to follow the standard.

00:45:21.119 --> 00:45:28.880
We appreciate the amount of effort that went into the structure of the standard procedure and the equipment that came from it.

00:45:28.880 --> 00:45:38.880
But I don't see a downside at all to using that kit to run a different experiment, to change things like the pilot location, to remove the pilot and look at autoignition, right?

00:45:38.880 --> 00:45:42.559
To change the size of your sample, to change the orientation.

00:45:42.559 --> 00:45:49.199
During my master's thesis, I ran a bunch of experiments angling the cone at angles in between horizontal and vertical, right?

00:45:49.199 --> 00:45:53.360
So I would go 45 degrees and 30 degrees and 60 degrees with the cone.

00:45:53.360 --> 00:45:57.039
And of course, that's a non-standard thing, but you have the ability to do that, right?

00:45:57.039 --> 00:46:04.880
And I think there's a lot of benefit in using that kind of equipment to do non-standard, um, truly experimental procedures.

00:46:04.880 --> 00:46:09.280
The question becomes if you change something from that standard, why is that?

00:46:09.280 --> 00:46:11.920
Why are you making the measurement that you're making?

00:46:11.920 --> 00:46:14.000
Why are you changing the conditions that you're looking at?

00:46:14.000 --> 00:46:19.760
Because when you're establishing an experiment, the first question should always be: what is it that I'm trying to articulate?

00:46:19.760 --> 00:46:21.280
What is it that I'm trying to explore?

00:46:21.280 --> 00:46:21.760
Right?

00:46:21.760 --> 00:46:23.760
And then the question becomes, how do I achieve that?

00:46:23.760 --> 00:46:30.400
And one of the big questions to look at is, depending on my question, what are the boundary conditions I'm gonna use, right?

00:46:30.400 --> 00:46:37.920
Is it sufficient to do something on a really small scale and understand what a heat flux exposure at the surface would tell you?

00:46:37.920 --> 00:46:38.239
Right?

00:46:38.239 --> 00:46:39.760
Can you gain insight from that?

00:46:39.760 --> 00:46:41.360
Do you need to go to a large scale?

00:46:41.360 --> 00:46:46.480
Do you need to actually look at, you know, fire behavior of uh at the compartment scale, right?

00:46:46.480 --> 00:46:49.440
And if you're gonna do that, how do you change those boundary conditions?

00:46:49.440 --> 00:46:51.440
What kind of initial fire do you use?

00:46:51.440 --> 00:46:57.360
Do you use things like you know wood cribs within the compartment to sustain a fully developed fire at some stage?

00:46:57.360 --> 00:47:05.840
But you actually assign these boundary conditions in an experiment; there's nothing stopping you from completely changing any of those things, right?

00:47:05.840 --> 00:47:10.320
And sometimes it can be a bit of a decision paralysis kind of situation, right?

00:47:10.320 --> 00:47:21.599
Where, you know, there are so many options to choose from, but you want to choose the combination of boundary conditions and measurements that are going to give you the outputs you're looking for.

00:47:22.000 --> 00:47:35.599
I resonate with your challenges with the cone, because we went through hell when we proposed the heat release rate control regime for a furnace, and the first response was: you're not allowed to do that, it's a time-temperature relation.

00:47:35.599 --> 00:47:36.880
And by the way, it's standard.

00:47:36.880 --> 00:47:38.880
So that that was a fun one.

00:47:38.880 --> 00:47:45.440
I think another thing in those experiments: I also cherish the variable control.

00:47:45.440 --> 00:47:54.239
I cherish the ability to change one thing in the system and find out the outcomes.

00:47:54.239 --> 00:48:00.639
Unfortunately, in the realm of fire, it's not always possible to fiddle with a single variable at a time.

00:48:00.639 --> 00:48:04.880
Uh, the costs and scales of experiments are often way, way too big.

00:48:04.880 --> 00:48:12.880
And eventually you also get into the space of something that I like the most, which are exploratory tests, or experiments.

00:48:12.880 --> 00:48:14.559
I'm not sure they're experiments.

00:48:14.559 --> 00:48:16.400
I think they're experiments, not tests.

00:48:16.400 --> 00:48:25.360
Exploratory in a way that you try to do something that has never really been done, and you're absolutely not aware of what will be the outcomes.

00:48:25.360 --> 00:48:29.360
Like literally, you have no expectations towards the experiment.

00:48:29.360 --> 00:48:43.519
You are doing it for the sake of getting it done, to provide you with that first reference point from which you can move onwards and start playing with the variables, start playing with things. For me, those are the most rewarding experiments, really.

00:48:43.519 --> 00:48:49.119
And surprisingly, there are a lot of things for which we don't have exploratory experiments in fire science yet.

00:48:49.360 --> 00:48:51.440
Yeah, no, it's that's really interesting, right?

00:48:51.440 --> 00:49:08.559
It is the idea that if there's a question, if I'm an engineer and I want to know how a fire would behave in this really random, complex condition that's in my building, I think it's really satisfying to try to develop an exploratory experiment and just see how it goes, right?

00:49:08.559 --> 00:49:10.559
And you can gain so much insight from that.

00:49:10.559 --> 00:49:13.519
Again, let's say, let's say let's go with that example.

00:49:13.519 --> 00:49:30.239
And let's say we have an atrium space, I don't know, in a building that's a bit complicated, and you have a good idea of what the fuel load might look like, and you just want to know what the fire behavior of this fuel load would actually look like. You can try it out.

00:49:30.239 --> 00:49:35.840
Just do an exploratory test, bring it into a lab, light it up, and see what kind of fire behaviors you observe.

00:49:35.840 --> 00:49:42.960
The hard part becomes how you do that, right? The outcomes from that are going to depend on how you do it, right?

00:49:42.960 --> 00:49:48.960
So you're not gonna develop a magical design condition, but you'll see how this particular trial went, right?

00:49:48.960 --> 00:49:52.559
Depending on how you ignite it, depending on how you arrange your fuel.

00:49:52.559 --> 00:49:55.360
Say you're looking at a Christmas tree, right, in an atrium.

00:49:55.360 --> 00:49:55.599
Yeah.

00:49:55.599 --> 00:49:59.360
The way that you light that Christmas tree is gonna change the heat release rate behavior, right?

00:49:59.360 --> 00:50:01.679
Because you're gonna get different rates of fire growth and so on.
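
[Editor's note: one common shorthand for "rate of fire growth" is the t-squared idealisation used for design fires, \dot{Q}(t) = \alpha t^{2}, where the growth coefficient \alpha spans roughly 0.003 kW/s^2 for "slow" up to about 0.19 kW/s^2 for "ultrafast" curves. Changing the ignition location or using multiple ignition points on the same fuel package effectively changes \alpha, and with it the whole curve. This is a generic illustration, not a result from the experiments discussed.]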

00:50:01.679 --> 00:50:11.360
And all those things create a single fire uh condition, which gives you insight for sure, but it doesn't necessarily automatically become a gold standard benchmark.

00:50:12.000 --> 00:50:25.039
And in almost no circumstance will you be able to claim that this is how a fire will develop, and that the consequences of the fire will be exactly like that in the future.

00:50:25.039 --> 00:50:28.320
It will just give you a reference data point.

00:50:28.320 --> 00:50:32.159
The more you do, the better your cloud of points is.

00:50:32.159 --> 00:50:37.360
You can start playing with statistics, uncertainty, you can start controlling for that.
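
[Editor's note: a minimal sketch of what "playing with statistics" on a cloud of repeated burns might look like. The peak heat release rates below are invented, and the interval only describes scatter under those particular boundary conditions, not the variability of real-world fires.]

    import statistics as st

    # Hypothetical peak HRR values (kW) from five nominally identical burns.
    peaks = [3120.0, 2840.0, 3410.0, 2990.0, 3275.0]

    mean = st.mean(peaks)
    sem = st.stdev(peaks) / len(peaks) ** 0.5  # standard error of the mean
    t95 = 2.776                                # two-sided 95% t value, 4 degrees of freedom

    print(f"mean peak HRR: {mean:.0f} kW, "
          f"95% CI: {mean - t95 * sem:.0f} to {mean + t95 * sem:.0f} kW")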

00:50:37.360 --> 00:50:48.639
But again, you will not, even if you set a fire in an office built up the same way you would have it in a real building.

00:50:48.639 --> 00:50:49.840
Look at Dalmarnock.

00:50:49.840 --> 00:50:53.199
I had Wolfram and Guillermo on the podcast here again.

00:50:53.199 --> 00:50:58.320
A simple rug changed the outcome of an experiment in the Dalmarnock tests.

00:50:58.320 --> 00:51:05.599
So that's something that I would love engineers to be absolutely, fully aware of.

00:51:05.599 --> 00:51:11.760
Even if you do it and it looks real, it doesn't mean that the fire will grow like this.

00:51:11.760 --> 00:51:20.239
And this is a struggle I have all the time with car park fires, you know, because BRE has burned a real car, which means cars burn like that, right?

00:51:20.239 --> 00:51:23.360
Well, no, it means in that experiment it burned like that.

00:51:23.360 --> 00:51:27.599
It doesn't mean the vehicles at large will always behave like that.

00:51:27.599 --> 00:51:49.519
This is a very big pitfall, I think, where sometimes people pursue experiments because they need proof of what the truth will look like, whereas in reality they just get a sample, you know, they just get a data point, and the interpretation of that data point is very difficult.

00:51:50.079 --> 00:51:50.400
Definitely.

00:51:50.400 --> 00:51:58.239
I mean, I think so, yeah, there's a lot of benefit in having something like an exploratory experiment, right?

00:51:58.239 --> 00:52:00.639
Let's go back to your car park example, right?

00:52:00.639 --> 00:52:04.960
And let's say we want to understand the dynamics of a fire in this car park.

00:52:04.960 --> 00:52:12.559
And say one of the core elements of this engineering analysis is understanding the heat flux exposure to nearby objects, right?

00:52:12.559 --> 00:52:18.480
You can actually get a really good idea of what kind of heat fluxes you can expect from an actual laboratory experiment, right?

00:52:18.480 --> 00:52:22.559
We can go back to our previous episode on how we measure things like heat fluxes.

00:52:22.559 --> 00:52:24.159
You can you can make these measurements, right?

00:52:24.159 --> 00:52:32.480
And again, that doesn't mean this is the fire that's going to happen every time for your design condition, but you can still gain a lot of insight from that.

00:52:32.480 --> 00:52:40.559
But then you start entering this really slippery slope where, let's say, you want to measure the radiant heat flux from this object, right?

00:52:40.559 --> 00:52:48.159
And that's a really integral part of your analysis, but then you're in a position of: how do you interpret those results?

00:52:48.159 --> 00:53:01.280
What if one of your heat fluxes is, you know, wildly high because actually you had an EV fire and there was a jet flame coming out of one side of it and it was, you know, the jet flame was basically impinging on your heat flux gauge.

00:53:01.280 --> 00:53:10.719
If that is a really high heat flux, can you apply it across the board, say as a design standard? Can we use that as the maximum heat flux?

00:53:10.719 --> 00:53:12.480
And that's a that's a tricky question, right?

00:53:12.480 --> 00:53:17.679
Because you then need to understand, again, the context of why that measurement was so high, right?

00:53:17.679 --> 00:53:24.159
And then can you overlay that context to understand the utility of the outcomes from that kind of test or experiment in that case?

00:53:24.480 --> 00:53:38.880
Oh man, I think we have flooded people with a lot of things, and in the end things are more blurred than they were at the beginning of the episode, because now the difference between tests and experiments is even more unclear.

00:53:38.880 --> 00:53:40.880
But uh it kind of is the point, you know.

00:53:40.880 --> 00:53:53.119
I really want to embed this image in people's heads: a testing regime is a testing regime that just allows comparison between different test outcomes within whatever framework the test has been conceived for.

00:53:53.119 --> 00:53:55.039
And that that's the premise of a test.

00:53:55.039 --> 00:54:07.199
Whatever more you do with that, you do at your own risk, and you do it as your own conscious choice and your own conscious interpretation of the new context, the new application, the new idea.

00:54:07.199 --> 00:54:20.320
And if you are conscious of that, we've achieved success with this podcast episode, because perhaps there will be less extrapolation of things that are not extrapolatable, if that even is a word.

00:54:20.320 --> 00:54:27.760
And also experiments are not like your one single direct proof of what reality is gonna look like.

00:54:27.760 --> 00:54:34.320
They're merely a data point crafted within the realm of variables that you have chosen to scout with.

00:54:34.320 --> 00:54:36.960
That's a message I would like to leave the listener with.

00:54:37.280 --> 00:54:43.039
And I think it's important to highlight: tests are what they are and experiments are what they are, right?

00:54:43.039 --> 00:54:44.480
It's not that one is all pros and the other all cons.

00:54:44.480 --> 00:54:49.440
There are pros and cons to both, right, in terms of what they offer, in terms of utility.

00:54:49.440 --> 00:55:07.039
I think also, particularly with experiments, there's sometimes an over-reliance on them being the magic bullet, the "this is exactly the fire that I'm going to see in my design condition", which is not true, but you can still gain a lot of insight into some of the behavior you might expect.

00:55:07.039 --> 00:55:11.280
Uh in the same way that some experiments are, you know, really fundamental, right?

00:55:11.280 --> 00:55:18.159
You know, I do quite a lot in the world of ignition and flame spread, and in combustion diagnostics, too.

00:55:18.159 --> 00:55:23.920
And it's hard to see the immediate uh engineering application of some of these measurements, right?

00:55:23.920 --> 00:55:34.559
But I think it's really important to remember that the goal of that scale of experiment is to build understanding and engineering judgment, which we can then apply to other problems.

00:55:34.559 --> 00:55:36.320
So it all kind of builds up.

00:55:36.320 --> 00:55:49.440
Again, flame spread on a slab of PMMA is not something that most designers are necessarily ever going to encounter, but the insight you can gain from that can then help develop engineering judgment for other scenarios.

00:55:49.440 --> 00:56:01.199
So basically, it's trying to understand where on the spectrum from testing to experiments the study you're looking at sits, and understanding how you use this information, right?

00:56:01.199 --> 00:56:07.920
Um, and making sure that all of it is within the context of what those tests or experiments were intended for.

00:56:08.320 --> 00:56:08.960
Fantastic.

00:56:08.960 --> 00:56:11.679
Well, thank you so much for this conversation.

00:56:11.679 --> 00:56:14.400
As you said at the beginning, it's always nice to talk.

00:56:14.400 --> 00:56:28.239
And I hope we triggered a lot of people, and if you have very strong opinions about how wrong we are, we are very keen to hear about that from you, because it's the onset for another interesting conversation to be had.

00:56:28.239 --> 00:56:33.840
And uh as promised, laser diagnostics uh will eventually come.

00:56:33.840 --> 00:56:42.800
They have not been covered in this episode, but I have a feeling David will come back to the Fire Science Show once again for another interesting conversation.

00:56:42.800 --> 00:56:44.559
Thanks, David, for coming here today.

00:56:44.559 --> 00:56:45.920
Twist my arm.

00:56:45.920 --> 00:56:46.719
I guess I'll come back.

00:56:46.719 --> 00:56:47.440
And that's it.

00:56:47.440 --> 00:56:48.079
Thank you for listening.

00:56:48.079 --> 00:56:50.159
Oh David, I don't need to twist your arm.

00:56:50.159 --> 00:56:51.519
I know you will come back here.

00:56:51.519 --> 00:56:54.960
And I already have like three different ideas for podcast episodes.

00:56:54.960 --> 00:56:55.679
It's just so nice.

00:56:55.679 --> 00:57:03.280
Chatting with another fire nerd who loves talking about fire and actually shares very useful information alongside.

00:57:03.280 --> 00:57:20.239
I hope, my dear listeners, we have not made your idea of what a test is and what an experiment is even worse than it was before this podcast episode, but it actually comes down to a very simple thing: the context and how you want to use it.

00:57:20.239 --> 00:57:28.400
If you want repeatability, if you want scoring, if you want classification, you're dealing with a test.

00:57:28.400 --> 00:57:37.440
If you want knowledge, if you want variable control, if you want insight, well, you're probably playing with an experiment.

00:57:37.440 --> 00:57:40.000
Can a test be an experiment?

00:57:40.000 --> 00:57:48.000
If you know how to use the data from a test in a new way, as we've done multiple times with Imperial College,

00:57:48.000 --> 00:57:50.320
I would say they are experiments.

00:57:50.320 --> 00:57:52.960
And can experiments be tests?

00:57:52.960 --> 00:57:54.480
Yes, sometimes.

00:57:54.480 --> 00:58:06.400
If they're just repetitions of each other, checking whether a different boundary condition, like a different material, different fuel, different opening factor or something, changes the outcome at large?

00:58:06.400 --> 00:58:08.880
Yes, at some point they start to look like tests.

00:58:08.880 --> 00:58:13.920
It all depends, I guess, on whether they are used for any sort of classification or not.

00:58:13.920 --> 00:58:17.440
That's where I would put the boundary line between the two.

00:58:17.440 --> 00:58:21.280
But anyway, I hope you got something from this episode.

00:58:21.280 --> 00:58:39.440
One important lesson that I really, really wanted to convey in the middle of the episode was that if you are forced to do fire engineering and the only thing you have is access to the outcomes of fire tests from standardized methods, from fire testing laboratories.

00:58:39.440 --> 00:58:41.920
It's difficult, it's not straightforward.

00:58:41.920 --> 00:58:50.320
Minutes are not minutes, heat fluxes are not heat fluxes, flame spread velocities are not flame spread velocities in terms of real world problems.

00:58:50.320 --> 00:59:04.639
You have to have a very, very good, robust understanding of the testing regime, the assumptions, the limitations, the way the sample is built, handled, mounted, conditioned, etc.

00:59:04.639 --> 00:59:11.840
to be able to actually apply anything that comes from a laboratory to reality.

00:59:11.840 --> 00:59:19.679
Whereas when you're designing an experiment, you have to be extremely good in defining those variables to match your real world conditions.

00:59:19.679 --> 00:59:23.679
That is the challenge, and that is the practical lesson behind this episode.

00:59:23.679 --> 00:59:28.079
I hope you found even more beyond this in this nice talk with David.

00:59:28.079 --> 00:59:31.840
Thank you very much for being here with us in the fire science show.

00:59:31.840 --> 00:59:35.679
It's the first day of the academic year here in Poland.

00:59:35.679 --> 00:59:38.559
So I wish all the students all the best.

00:59:38.559 --> 00:59:44.880
If you know a student who is starting fire safety engineering this year, make sure they know about the fire science show.

00:59:44.880 --> 00:59:47.440
It's an important resource for them.

00:59:47.440 --> 00:59:50.559
So make sure they are aware of the podcast.

00:59:50.559 --> 00:59:55.840
Let's get them hooked on fire science, and let's do our best to help their growth and development.

00:59:55.840 --> 00:59:59.679
And in some years we're gonna have more fire engineers around, because we are way,

00:59:59.679 --> 01:00:01.199
way, way too few.

01:00:01.199 --> 01:00:07.280
Anyway, thank you for being here with me this Wednesday, and I am looking forward to meeting you again next Wednesday.

01:00:07.280 --> 01:00:08.559
Same place, same time.

01:00:08.559 --> 01:00:09.039
See you.

01:00:09.039 --> 01:00:09.760
Bye.