March 19, 2024

144 - Design fire generator with Greg Baker


Imagine if we had a tool that we could use to design a design fire. Instead of simply assuming fire growth characteristics by slapping on an alpha t-squared function, we could use a tool that tells us which items in a compartment burn and what the characteristics of that fire are. I would say this dream is shared among many fire safety engineers - I think we can all agree that we could use such a tool.

Today's guest, Dr Greg Baker, has shared this dream and built such a tool within his PhD at the University of Canterbury. The skeleton was a zone model, and the tool he developed plays well with it - it ships within the B-RISK model, so feel free to try it out.

In the episode, we talk about how Greg built up this tool and how it decides on the growth, plateau and decay of the fire. We also discuss how such a tool may be critical in a probabilistic approach to fire safety engineering and general performance-based design. Who knows, perhaps in the near future, such an approach will also help us run our CFDs. 


----
The Fire Science Show is produced by the Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support towards our mission.

Chapters

00:00 - Probabilistic Design Fires in Fire Science

13:29 - Probabilistic Fire Design Model Development

22:26 - Fire Science Research and Data Collection

34:42 - Flux Time Product Methodology Overview

39:13 - Design Fire Methods and Simulations

52:36 - Fire Safety Engineering Crafting Fires

Transcript

Speaker 1:

Hello everybody, welcome to the Fire Science Show. My name is Wojciech Węgrzyński. I'm a professor at the Building Research Institute, ITB, in Poland. Every week I try to bring you the most interesting fire science there is. In the last weeks we've touched the subject of design fires quite a bit in the Fire Science Show, and today I have a follow-up on that. Actually, when I was interviewing David Morrisset a few weeks ago, an idea came to my mind that we should make an episode with Greg Baker, who's done his PhD on designing probabilistic design fires - design fire tools, to be precise - and that's what I did. I've invited Greg, and in today's episode you'll hear about the work he did at the University of Canterbury a decade ago. What's very interesting, the episode has a long start - there's a very long build-up into how the design fire tool works. But this build-up is quite important because it gives the background. Where does this tool fit in? Where was it supposed to fit in the New Zealand performance-based design framework? And actually this would be as good today as it was a decade ago, because everything that Greg said, all of the rationale that he has presented, is still absolutely valid today and we still need such a tool today. It's great that it exists. You can just download B-RISK, a tool from BRANZ, and use it and try and play with it, but definitely, such a tool implemented in a CFD analysis, or just in general fire engineering analysis - I see a huge value in that. So, Dr Greg Baker from Hollywood Fire Research, and let's spin the intro and jump into the episode. Welcome to the Fire Science Show. My name is Wojciech Węgrzyński and I will be your host.


Speaker 1:

This podcast is brought to you in collaboration with OFR Consultants. OFR is the UK's leading fire risk consultancy. Its globally established team has developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property and the environment. Established in the UK in 2016 as a start-up business of two highly experienced fire engineering consultants, the business has grown phenomenally in just seven years, with offices across the country in seven locations, from Edinburgh to Bath, and now employing more than a hundred professionals. Colleagues are on a mission to continually explore the challenges that fire creates for clients and society, applying the best research, experience and diligence for effective, tailored fire safety solutions. In 2024, OFR will grow its team once more and is always keen to hear from industry professionals who would like to collaborate on fire safety futures this year. Get in touch at ofrconsultants.com.


Speaker 1:

Hello everybody, welcome to the Fire Science Show. I'm here today with Greg Baker from Hallwell Fire Research, previously Fire Research Group. Hey, Greg, good to have you on the show. Thanks, Wojciech. Great to be here. Thank you for coming. And today we are discussing another subject on design fires in the Fire Science Show. The Fire Science Show in the previous weeks had a lot of content on design fires, and it seems it's my favorite kind of fire science, and I've invited you because you have done your PhD on developing something called probabilistic design fires. In essence, it's pretty much an approach to how to solve fire spread between objects - so the first object ignites and then the fire spreads. Tell me, what got your interest in this particular problem in fire science, that you thought, oh yeah, that's a great idea to do a PhD on it?


Speaker 2:

Well, that's a really interesting question. I don't know that anyone's ever asked me that direct question before, so I'll give you the background. So I was working at BRANZ, which is a research institute based in New Zealand, where I live, and we had a research project that had been funded primarily by the government science research agency that provides research funding in various areas, and that was a collaborative project between BRANZ and the University of Canterbury, which is the university in New Zealand where the postgraduate fire engineering program is. And I'd always been keen on doing a PhD after my masters, but the opportunity had never arisen. I was working full time and fitting a PhD in was going to be challenging, but at the same time the opportunity had never actually come up. So on this particular project, as often with research projects, they were looking for PhD students to contribute towards the hard work, if you like, on the project. So it seemed like a great opportunity.


Speaker 2:

I had to talk to my wife about whether she'd be happy for me to do a part-time PhD. I had to talk to my boss at work at the time at BRANZ to see if that would be okay, but it turned out that we were struggling to find PhD students back in about 2007, 2008 when I got going, so the topic was there. I'd always been interested in compartment fire dynamics, etc. It seemed like an interesting topic. So there was some work required on this particular project that we were doing and I was in the right place at the right time, basically. So I signed up to do a PhD not having a clue, of course, what was ahead of me, as to how far it was going to take me from a learning perspective, and also physically how long it was going to take. I hadn't signed up for almost nine years' worth of PhD work, but anyway. So that's a bit of a long-winded answer to your simple question.


Speaker 1:

For us in Poland, New Zealand - let's say in the 2010s, around that time - was the glorious land of performance-based fire safety engineering. I highly attribute that to a visit by a certain gentleman, Charlie Fleischmann, to a certain conference in Zakopane in Poland, where he used some magic to inspire Polish people towards performance-based design. But I think at that point it really was the case in New Zealand. New Zealand had PBD in their code. New Zealand was using a lot of that. Is that one of the reasons why BRANZ was going forward with this topic?


Speaker 2:

Yeah, so I haven't explained what the actual project was about. So just sort of stepping back into the historical context from a New Zealand point of view and also, before I forget this, picking up on Charlie's name. Charlie was involved in the project and he was one of my co-supervisors as well. So yeah, this is a topic that Charlie would have been very familiar with, of course, over that period of time. But anyway, just stepping back to sort of the history of performance based design, so New Zealand introduced a new set of building sort of legislation in 1991. And then shortly thereafter, in 1992, the first performance based building code was introduced into New Zealand. So that obviously set the foundation in New Zealand, and fairly early on compared to other countries.


Speaker 2:

So obviously a performance-based building code has been around in New Zealand for a long time and, at the same time, across the various technical areas that the building code covers, fire engineering was sort of a perfect opportunity to apply performance-based design principles, just because of its very nature. So on one hand, fire engineering was a relatively immature discipline, and this performance-based building code came in. So those two things came together at the same time and that was a good opportunity, if you compare it to some of the more mature disciplines of engineering. So seismic engineering is a very topical issue in New Zealand. By comparison, seismic engineering is much more mature and advanced and developed. So the performance-based application of seismic engineering is quite different, whereas fire sort of shot off with all this opportunity to do performance-based design. From a seismic engineering point of view there was a well-established loadings code which specified exactly how to go about doing seismic engineering. There was no starting with a blank paper and working out what you needed to do, as was the case in fire engineering. So anyway, that was the historical background.


Speaker 2:

And then in 2007, this research project that I mentioned was funded. So that was the BRANZ and University of Canterbury collaboration. So at that time in New Zealand there'd been some reviews going on of how performance-based design was working in New Zealand. Because although there was a fantastic opportunity to practice performance-based design in the fire engineering space, one of the really important lessons we've learned, I guess, in New Zealand is that to make the environment for successful performance-based design you need really good, what I used to call, infrastructure in place. So you needed the necessary skills within the broad industry. So not just the fire engineers but the people who are the AHJs, effectively - you need to have a suitable level of knowledge and understanding at that level of the hierarchy. So things weren't going that well in some regards.


Speaker 2:

But at the same time the building regulator was very interested in introducing a sort of more probabilistic dimension to the building code. So one of the things with performance-based design - and it sort of depends on exactly what your philosophy about performance-based design is - but I come from a school of thought where performance-based design must include probabilistic risk analysis. That's one of my fundamentals. I've done a paper and a presentation describing all this stuff, and that's one of my basic fundamental principles of performance-based design, that you have to have this quantitative risk analysis. Not everyone agrees with that, as you can imagine - there are some widely varying views on performance-based design - but that's my personal view on it. So anyway, the building regulator was interested in taking the specific building code clauses and introducing a more probabilistic dimension to those. So to be able to do probabilistic risk analysis in fire safety engineering you have to have suitable probabilistic building code clauses to match that, because otherwise it's difficult to do your analysis if you don't know what the target is, so to speak. So anyway, the building regulatory team at the time were very interested in introducing these probabilistic components to the building code performance requirements.


Speaker 2:

So it was identified at the time that, to be able to introduce this - it would be one thing to write probabilistic clauses, that's one thing, that's the simple bit probably - but to actually make it work in practice you needed a modeling tool that was producing probabilistic outputs from the modeling. And we're talking here about ASET/RSET modeling primarily. So the whole fundamental principle behind the funding for this research project was that these changes were coming to the building code, the introduction of probabilistic elements to the code, and to support that, at the same time, you needed suitable tools. So this project was developing a probabilistic analysis tool.


Speaker 2:

So in New Zealand we use zone modeling quite frequently, compared to some other countries where it's more CFD modeling. In New Zealand at that time zone modeling was used, and the predominant zone model was the BRANZFIRE model that Colleen Wade, our colleague, had developed over a number of years, and so this BRANZFIRE zone model was used quite widely in New Zealand.


Speaker 2:

So the premise behind the project was to take BRANZFIRE, which was just a simple deterministic zone model used for ASET/RSET modeling, and convert that into a probabilistic model, and that's what became the current B-RISK model. So, at a very simple level, what the project was doing was introducing a version of Monte Carlo simulation to the previously deterministic model, and what that amounted to was that there was still this deterministic calculation engine at the core of what became the B-RISK model, but instead of just taking single values for the respective input parameters for the calculations, those were sampled in a Monte Carlo fashion - so distributions were assigned to various input parameters of interest, and then you did a whole lot of simulations and you got cumulative density functions, or distribution functions.


Speaker 2:

I can never remember what CDF stands for - CDF functions - that you could then compare to these probabilistic statements of building performance that the building code was going to introduce. So you sort of had both sides: you had what the target was, and you had a tool that you could use to produce some rational engineering analysis to then compare against the probabilistic statements of building performance.
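To make the idea concrete, here is a minimal Python sketch of a Monte Carlo wrapper around a deterministic calculation engine. The `zone_model` function and the input distributions are hypothetical stand-ins for illustration, not the B-RISK engine.

```python
import random

def zone_model(fire_load_density, growth_rate, opening_width):
    """Hypothetical deterministic calculation engine.
    Returns a single scalar output, e.g. a time to untenable conditions (s)."""
    # placeholder relationship, not real fire physics
    return 600.0 * opening_width / (fire_load_density * growth_rate) ** 0.5

def monte_carlo(n_iterations=1000):
    outputs = []
    for _ in range(n_iterations):
        # sample each input from an assumed distribution instead of a single value
        fled = random.triangular(100.0, 800.0, 400.0)   # fire load energy density, MJ/m2
        alpha = random.lognormvariate(-8.0, 0.5)        # growth coefficient, kW/s2
        width = random.uniform(0.8, 2.0)                # opening width, m
        outputs.append(zone_model(fled, alpha, width))
    return sorted(outputs)

results = monte_carlo()
# empirical CDF: fraction of iterations at or below each output value
cdf = [(value, (i + 1) / len(results)) for i, value in enumerate(results)]
print(cdf[:3], "...")
```

The resulting empirical CDF is the kind of distribution that could then be compared against a probabilistic statement of building performance.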


Speaker 1:

This is a very interesting pathway because, indeed, you're not doing this in a vacuum. You're doing this surrounded by the certain environment that you work in, and if your country wants to go into probabilistic fire design for performance-based design of buildings, the way you do it is by making your inputs probabilistic, so you can relate your outputs to some probabilities of fires occurring.


Speaker 1:

Exactly - and compare that with having a deterministic fire, like we are all doing. You know, I take my room, I assume some alpha t-squared curve. Okay, I have a bed in it, so this one is fast. I have just a wooden chair inside the room and nothing else - okay, this might be a medium fire curve. You know, very simple engineering. Some would even say it's guessing work, not engineering, because in some essence it is. But this does not really capture the fire behavior, even for the first object.


Speaker 1:

I just had David Morrisset on the podcast and we've discussed the variability of fires on the same exact object, the cushioned chair.


Speaker 1:

You know, and he's already observed, on this one single, simple object, a multitude of outcomes. I also recall the conversation with Guillermo Rein and Wolfram Jahn about Dalmarnock, where there was some rug placed on an armchair that kind of changed, you know, the course of the first item to ignite, which then changed the outcome of the entire fire, and it kind of influenced the round-robin study that they were doing at that point - which is quite an interesting case study. I also have a podcast episode on that. So it's not that our deterministic models were already an answer to everything, and I see a merit in doing a probabilistic one. But here you're talking not just about the development of a fire on the first object. It's more like how the fire would spread through the compartment, eventually reaching, or not, the flashover. So maybe tell me the idea about, like, what would go into such a probabilistic design fire model? Sure, sure, sure.


Speaker 2:

So far I've just described the broad outline of the project and a simple explanation of the basic fundamentals. But specifically the part that I did was develop what we call a design fire generator. So we were going to be doing hundreds or thousands of these Monte Carlo simulations and obviously, if we just put in the same design fire every time and varied some other parameters, you know, we'd get a bit of variability, but the design fire would just be the same every time. And, as you say, in quite a lot of fire engineering practice you have to come up with a design fire, and an alpha t-squared fire is the go-to fire for that, and in fact it's effectively mandated or required in some jurisdictions, I would imagine, and in fact is the case in New Zealand. So for one of our compliance pathway options, which is called C/VM2, the verification method, that stipulates the alpha t-squared fires to use and what peak they grow to and those sorts of things - that's all stipulated and specified. But what we wanted to do with this design fire generator, which I worked on for my PhD, was that for every single iteration we wanted to have, you know, a new and unique design fire input that was based on the probability of various things happening. So the way we went about doing that was to come up with this concept of an initial item burning in a room. So if you just think of a typical fire compartment, there are various things that can burn in it. So we randomly populated a space with various objects. So we had a fire object database which had a heat release rate curve for the particular item and various other things, including the amount of energy output, but then also some ignition parameters for the object as well. So we had this whole fire object database with all sorts of relevant bits of information.


Speaker 2:

So the way the design fire generator worked was that you had a specified fire load energy density - as we call it in New Zealand, the FLED - or just a fire load density maybe, depending on what part of the world you come from. So you might specify that as 400 megajoules per square meter of floor area; that's a common fire load energy density in New Zealand. But then we also assigned a distribution to that as well. So it wasn't just always 400 - it could vary between not very much and, you know, quite a bit more than 400. So, for example, we might assign a triangular distribution for that, set up so that 80 percent was below 400 and the remaining 20 percent was above, or something like that.


Speaker 2:

You could assign whatever distribution you wanted to that. So, based on the energy for each of the items, the Monte Carlo process sampled a fire load energy density from the distribution. So let's say it sampled 300. So for that particular iteration you could only get a maximum of 300 megajoules of total energy. So if you had 20 for that object and 50 for that object, you would keep on sampling until you got up to that level of energy density. So it's not just a uniform distribution of megajoules per square meter - it's populating the room with objects at random until you reach the maximum. And then also the location - obviously the location and the proximity have quite a bearing on what will happen afterwards once the first item is ignited.
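A minimal sketch of that population step, assuming a hypothetical object database with a total energy content per item; the names, numbers and distribution are illustrative, not values from B-RISK.

```python
import random

# hypothetical fire object database: item name -> total energy content (MJ)
OBJECT_DATABASE = {"armchair": 350.0, "tv": 150.0, "mdf_cabinet": 500.0, "wooden_chair": 90.0}

def sample_room_contents(floor_area_m2):
    """Populate a compartment with randomly chosen items until the sampled
    fire load energy density (FLED) budget for this iteration is used up."""
    fled = random.triangular(100.0, 800.0, 400.0)   # assumed distribution, MJ/m2
    energy_budget = fled * floor_area_m2            # MJ available in this iteration
    contents, used = [], 0.0
    while True:
        name = random.choice(list(OBJECT_DATABASE))
        energy = OBJECT_DATABASE[name]
        if used + energy > energy_budget:
            break                                    # budget exhausted, stop adding items
        contents.append(name)
        used += energy
    return fled, contents

print(sample_room_contents(floor_area_m2=8.6))       # e.g. a 3.6 m x 2.4 m room
```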


Speaker 2:

And so if you've got a really low fire load energy density and you've only got, say, a couple of objects and they're randomly spaced at opposite ends of the compartment, you're not going to get anything more than the first object burning, whereas if you've sampled a higher value, you've got 10 objects in the space. So the model was clever to the extent that you couldn't just stack objects in the same location - if there was an object there, it had to, you know, go somewhere else in the space. So it went through a sort of room fuel item population process and then randomly selected one of those to ignite, and then basically the energy output just followed the heat release rate curve for that first object, and then we did some radiation calculation.


Speaker 2:

So from the flames of the burning object, radiation goes 360 degrees to all the other objects in the room. We also did a bit of simplification, in that there are some pretty complex energy exchange calculations you can do in compartments with radiation, but we worked out that all you really needed to worry about was the radiation from the flames of the first burning object and the radiation from the underside of the hot upper layer. So basically it was calculating your upper layer and whatever height that was off the floor relative to where the object was. So you'd set up all the properties of the object - how high and wide it was, sitting on the floor - so you got a distance from the top of the object to the hot upper layer. You've got radiation coming from a burning object, and then, as soon as it ignited a second object, you then had two radiation sources to all the other objects.


Speaker 2:

So it carried on doing that until the objects sort of ran out of energy and, you know, burned out, so to speak. So yeah, basically it just goes through this process of working out how many objects ignite and when they ignite, and we ended up using the flux time product method, which is like a sort of radiation accumulation model.
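A very simplified sketch of the two radiation contributions described here, treating the target as a point and using a point-source model for the flame plus a grey-body term for the hot layer; the functions and numbers are assumptions for illustration, not the actual B-RISK formulation.

```python
import math

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2 K4)

def flame_point_source_flux(q_fire_kw, distance_m, radiative_fraction=0.3):
    """Radiant flux (kW/m2) at a target from a burning item,
    point-source model: q'' = X_r * Q / (4 * pi * r^2)."""
    return radiative_fraction * q_fire_kw / (4.0 * math.pi * distance_m ** 2)

def upper_layer_flux(layer_temp_c, emissivity=1.0):
    """Radiant flux (kW/m2) from the hot upper layer, treated as a grey/black emitter."""
    t_kelvin = layer_temp_c + 273.15
    return emissivity * SIGMA * t_kelvin ** 4 / 1000.0

# example: a 500 kW item burning 1.5 m away plus a 300 degC upper layer
incident = flame_point_source_flux(500.0, 1.5) + upper_layer_flux(300.0)
print(f"total incident flux on the target: {incident:.1f} kW/m2")
```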


Speaker 1:

Okay, I have to stop you because I already have so many questions about what you've said and things I really want to go into, but I need some clarifications. So, first of all, your object was exposed to your first object burning and the upper layer of the smoke, which is super convenient because you are programming that into a zone model. That's exactly what you're going to get from a zone model - the upper layer height and its temperature. So I guess in here the simplification was with the shape of the object. You just assume it uniformly, okay?


Speaker 2:

Yep, so it was just assumed to be a rectangular prism. So you'd specify a chair that maybe had a back that was, I don't know, 1.5 meters high, maybe something like that or whatever it was, and you had a sort of width and a depth, and the model just considered that as a sort of rectangular box. So that was definitely one simplification. So it wasn't a complicated shape made up of a back and sides and all those sorts of things. Another one - or a plug-in, before I go to another one, because we're talking about and referring to the zone model.


Speaker 1:

I had an episode with Colleen Wade, and she's an OG of the Fire Science Show, because that's episode eight - a single-digit episode, legendary today - and it's called "Zone modeling is not dead", and it's an excellent episode I highly recommend to everyone if you want to learn more about zone modeling, because it's beautiful, beautiful fire science. The second question is: you've said you had an object database on fire, so was it like an in-house calorimetry database? How did you build this database? Where did you get the data from?


Speaker 2:

So, really good question. So part of the combined project, and part of my PhD, was doing a whole blind modeling exercise. We had a big experimental campaign but, learning from, you know, projects like Dalmarnock, which you mentioned as well, we wanted to do a totally blind, a priori, beforehand sort of modeling exercise where nobody was influenced by results or anything like that, because Mike and I had a lot of philosophical discussions about, you know, what actually is blind modeling, and in fact we ended up writing a paper for a special issue of Fire Technology on exactly that topic. It was like a really esoteric slash random paper.


Speaker 2:

We were sort of fascinated by this concept of what blind modeling actually is. Anyway, that's by the by. So we did a blind modeling exercise, and one of the rules was that in coming up with design fires for your objects and your sort of customized database, you had to use published data. So basically, coming back to your question in a long-winded way, we just pulled together data that was out there. So you know, in the SFPE handbook alone there's an amazing collection of all the stuff that Vyto Babrauskas and the people at NIST did and published, for example, and also, because I was active in the fire research community, I could reach out to all sorts of people and get access to their own in-house data. So you know, I knew people at NRC in Canada.


Speaker 2:

I knew some people at VTT in Finland and I knew people at SP - I actually ended up working there after my PhD, or towards the end of my PhD - so I knew people around and I could just contact people and say, hey, you know, hi John, how are you? I'm looking for some data, have you got anything? And another person was Bart Merci - again, his team.


Speaker 2:

They were doing a whole lot of useful compartment fire work, because I also needed to get data from actual compartment fire experiments as well to try and validate this design fire generator - that was one of the validation exercises. But as far as the individual objects went, basically it was all sorts of data out there. But then also in our experimental campaign we had a mock-up of a single armchair, an item of furniture, and sort of a TV, basically. So we just did mock-ups of those. So for the armchair we just made a steel frame with some standard sizes of 100 mm thick polyurethane foam, and we just had a base, two sides, a back - so four pieces of polyurethane - and that was our sort of standard mock-up armchair.


Speaker 2:

Was it comfortable to sit on? To be honest, I'm not sure we ever actually sat on one. We burned a heck of a lot, but whether we actually sat on one... It's a funny question. The blocks of polyurethane were sort of sitting there with friction, but it was only a sort of angle frame, so if you sat on it, it would have collapsed. There was nothing supporting the cushion, I think.


Speaker 1:

Well, I assume the ABS TV also did not work out that well as a TV, but it was perfect as a target.


Speaker 2:

What we did there - so I'd come from the construction industry. My background is actually in structural engineering and building products manufacturing and construction projects on site, before I got into fire engineering and research. So I just had some frames fabricated. So for the TV we just had a sort of square made out of 51 millimeter RHS, so just a square frame and then a bit of a stand, and then we just had a sheet - sort of a one meter square sheet of, I can't remember, maybe three millimeter thick ABS - on either side of it. So it was the same sort of black plastic material that's used to make TV cabinets and things like that and electronics goods. So I identified what the typical plastic was, and we just got some sheets of that and had a sheet on either side of the steel frame. So you know, they obviously weren't exact replicas of those particular items, but they were close enough to be relevant and appropriate, made from the right materials. And also, obviously, wanting to do multiple experiments, the steel frames were very robust. So the plastic melted and did what it did, and then the compartment cooled down and we could do another experiment later in the day - just slide in sheets of plastic and slide in sheets of polyurethane for the furniture.


Speaker 2:

The other item that we built - so there's the TV and the armchair - the other one was to represent sort of timber joinery, like shelves and tables and chairs made out of timber. So what we did for that was we used an engineered wood product, MDF. I can't remember exactly, I think it might have been 12 millimeters thick, and we just basically built a 600 by 600 cube, so a base and a top and four sides, so six pieces of 600 by 600 MDF, and that was just representative of wood-based furniture products you might have in a typical, you know, residential compartment. So that was really interesting, that one. It taught me something.


Speaker 2:

One of the many, many, many things I learned during my PhD was how difficult it is to actually ignite a timber product. People tend to think, you know, timber is a combustible product and, you know, flick a cigarette butt and you've got a big fire going.


Speaker 2:

But I remember in the experimental campaign we built up with these three furniture items - from cone testing, which we did for the flux time product stuff, that's based on cone testing, and then individual items burning, free burning, where we used as a standard ignition source the ISO 9705 burner at 100 kilowatts, and also we repeated them in the compartment. But for the free-burning ones in particular, it's very vividly recalled in my mind that it took almost an hour of 100 kilowatts from the burner, hard up against the face of one of the sides of the MDF, to actually ignite it, to get flaming combustion. So it was something like 53 minutes to actually get ignition of the MDF. And that just really hit home to me how difficult it actually is to ignite solid timber or such engineered wood products. It's not that easy to actually ignite.


Speaker 1:

No, that is one of the challenges in fire safety, because we are working with highly non-linear things. Radiation goes with the fourth power of temperature, and this is something a human mind simply has very, very big difficulties to grasp. It's very difficult to imagine how much it is in the fourth power. And here, on one hand, you have a single ignition source - your 100 kilowatt burner - which, put nearby MDF, gives some radiant heat flux. In fact it was effectively direct flame, direct flaming, okay.


Speaker 2:

So it was more like, I don't know, 80 to 100 or something.


Speaker 1:

Okay, then it's very surprising. But anyway, the observation, I think, still holds that on one hand you may have difficulty igniting from a small source, but then when you transition into a large fire, like a flashover fire, the amount of heat flying around your compartment is just unfathomable. It's insane. And all those objects that previously perhaps were very difficult to ignite now just ignite like it's nothing.
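For a quick sense of scale behind the fourth-power point, a worked comparison using the Stefan-Boltzmann law (idealised blackbody emission, figures rounded):

```latex
\dot{q}'' = \sigma T^{4}, \qquad \sigma = 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
% ambient surface,    T = 293 K:  \dot{q}'' \approx 0.4 kW/m^2
% 600 degC hot layer, T = 873 K:  \dot{q}'' \approx 33  kW/m^2
% roughly 3x the absolute temperature gives roughly 80x the emitted flux
```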


Speaker 2:

So it wasn't a totally consistent finding, but it sort of was what it was, and maybe I never really took the time to think it through and justify or explain exactly why it happened. Because if I'd replicated that in the cone - and I did, I don't know, hundreds or thousands maybe of cone experiments with the same MDF product - I was getting ignition, albeit piloted ignition, although I guess with a flame you've got the same piloted ignition going on, but you'd be getting ignition at 20 kilowatts per square meter or something like that, so much earlier in the cone calorimeter. So it was an interesting finding from that point of view.


Speaker 2:

Maybe if I'd done some vertical cone experiments - because you know how you can sort of rotate it and try to do the same thing; it's not very often done, because it's expensive, sure - maybe if I'd done more vertical experiments... I did quite a few vertical experiments, but just not for the MDF board that we used for this furniture item. Maybe that would have been a different scenario, because you've got a whole, you know, totally different sort of convection going on with vertical compared to horizontal.


Speaker 1:

So the last thing I wanted to ask - and I kind of touched it just now - the flashover. And I assume if in your room you get another object and another object and another object, and eventually it's cascading, you would reach the point where it's just the smoke layer that's causing the ignition, and then it would spiral into some sort of flashover.


Speaker 2:

Is this something that B-RISK was, or is, predicting when you have enough items? Yeah, so one of the limitations we had is that we only had an ISO 9705 chamber for compartment experiments. So that's the room corner one. So it's 3.6 by 2.4. So it is actually pretty small in a compartment sense. Well, maybe not - I guess 3.6 by 2.4, that's a pretty small compartment, but compared to some larger open-plan spaces and larger compartments you might get, it's quite small. So we weren't actually able to validate the model in larger applications. So B-RISK can definitely do larger applications, but it's maybe not so well validated as the smaller stuff that we did. But anyway, in the smaller room, absolutely - once you get beyond a couple of items burning, you're, you know, into flashover pretty quickly.


Speaker 2:

And I recall with our experiments - whenever you do a new experimental campaign, you do the first experiment and you don't know how well it's going to go. And I remember this from a number of experimental campaigns. The first one, it's like fingers crossed: how's this going to go, and are there going to be any safety issues, things getting out of control? So with the ISO 9705 room corner test apparatus at BRANZ, we had some limitations on what our extract system could handle, and that was roughly about 5 megawatts.


Speaker 2:

So for the experiments we actually had a sort of deluge, a sprinkler deluge system - if things looked like they were getting out of control, we turned that on and cooled it all down and put the fire out. So with the first experiment, unfortunately, we thought it was still growing and it was starting to get a bit scary and we turned on the deluge sprinkler. In fact, when we analyzed the data afterwards, it was into the decay phase and we should have left it going. So we actually lost some valuable data. But yeah, we definitely got some pretty big post-flashover fires. So, you know, lots of flames coming out of the door opening and starting to put pressure on the equipment that we had and, you know, at risk of doing damage to the equipment.


Speaker 1:

So yeah, we were definitely getting a lot of flashover in the experiments that we did. And that buildup of courage as the experimental campaign progresses is something I can very much relate to. It's very interesting - experiments are always a discovery. OK, we've reached the FTP, so please tell me: how did this model decide whether the secondary item is ready to ignite or not, and what were the criteria that set the secondary object on fire?


Speaker 2:

Because of the time that's gone past, I've sort of forgotten the exact specifics, so it might be a case of I stand to be corrected on what I say. But essentially, fundamentally, the flux time product method is a correlation based on data from the likes of the cone calorimeter - there are other similar bits of apparatus you could use - basically getting ignition times under various levels of incident flux. So what I did basically was use the cone calorimeter on the materials of interest. So, for example, part of the work was these three furniture items that I discussed. So there was the black ABS plastic, there was the foam - and it was just bare foam, we didn't worry about fabrics or any interliners, just bare foam, to keep it simple - and then the MDF board.


Speaker 2:

We ran a series of experiments in the cone - and I can't remember what the values were, but it was something like maybe 10, 35, 60 - so various levels of incident radiation, measuring the ignition time, and actually trying to measure them accurately. So I was basically sitting there with a stopwatch just waiting for the flaming to start, you know, clicking the stopwatch. Of course there's always a delay, so the ignition times I recorded were actually longer than the actual ignition times just because of the reaction time - it might've been like half a second, a second or something like that. But in the normal sort of commercial cone testing that's done, there's a guy sitting there with a pen, and at ignition he's writing down, oh yeah, plus or minus - it'll be plus 10 seconds. It's not very accurate, but I was sitting there with a stopwatch accurately trying to record the ignition time.


Speaker 2:

So I did replicates of a minimum of three tests as well, so it wasn't just a single value at each radiation level. And then you do the correlation process where you plot incident radiation versus the inverse of the time to various powers n - I just can't remember the terminology, but I think the n is called the flux time product index. So the n is the power that you take the time to, and that varies between one and two - and once again, I can't remember which way around they are now, but one or two signifies a thermally thick material, the other end of the extreme is a thermally thin material, and then there's an intermediate. So if you read through all the research - the guys who developed the flux time product method; well, it came from various sources, but the guys at Ulster, I think, perfected it, if you like, and started applying it - generally Toal and Silcock and Shields and the guys there, I can't remember all their names. But yeah, the guys from Ulster developed this process, and you do this correlation process and it's a bit of trial and error, I guess, to see which gives you the best straight-line fit for the data.


Speaker 2:

So you vary the various parameters, and then where it hits either the Y or the X axis, whichever way around you're plotting it, that gives you a critical heat flux. So you determine, you know, is the material thermally thick or thin or an intermediate material - so one, 1.5 or two - but then I was actually using the fitted value, so it was anywhere between one and two. And then you just do a back calculation and come up with a parameter that defines when ignition occurs. So you're getting radiation hitting your object, and in your calculation process you're measuring how much it's accumulating. Then you hit the threshold where ignition is estimated to occur, and then, if you've just got your first item burning, giving off energy, you ignite your second item and that triggers your second heat release rate curve, and that starts accumulating, and then you basically build up a cumulative heat release rate curve for the compartment from that point. So anyway, that's the ignition criterion that we developed based on the flux time product methodology.
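A minimal Python sketch of the correlation and accumulation steps described above, with made-up cone calorimeter data; the actual thesis procedure (replicates, experimentally bracketed critical flux, etc.) is more involved.

```python
import statistics

# hypothetical cone calorimeter data: (incident flux kW/m2, time to ignition s)
CONE_DATA = [(20.0, 310.0), (35.0, 95.0), (50.0, 42.0), (60.0, 28.0)]

def fit_ftp(n):
    """Least-squares fit of q_e = q_cr + FTP^(1/n) * t_ig^(-1/n) for a trial index n.
    Returns (critical flux, FTP ignition threshold, correlation coefficient)."""
    xs = [t ** (-1.0 / n) for _, t in CONE_DATA]
    ys = [q for q, _ in CONE_DATA]
    slope, intercept = statistics.linear_regression(xs, ys)
    return intercept, slope ** n, statistics.correlation(xs, ys)

# trial and error over n between 1 (thermally thin) and 2 (thermally thick)
best_n = max([1.0, 1.25, 1.5, 1.75, 2.0], key=lambda n: fit_ftp(n)[2])
q_cr, ftp_ig, _ = fit_ftp(best_n)
print(f"n = {best_n}, critical flux ~ {q_cr:.1f} kW/m2, FTP threshold ~ {ftp_ig:.0f}")

def has_ignited(flux_history_kw, dt_s, q_cr, ftp_ig, n):
    """Accumulate (q'' - q_cr)^n * dt over the flux received by a target item;
    ignition is assumed once the running total reaches the FTP threshold."""
    total = 0.0
    for q in flux_history_kw:
        total += max(q - q_cr, 0.0) ** n * dt_s
        if total >= ftp_ig:
            return True
    return False
```

The `statistics.linear_regression` call needs Python 3.10 or later.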


Speaker 1:

I had a podcast episode with Rory Hadden on ignition, and I also highly recommend that to the listeners if you've missed that one. I've asked Rory about the flux time product and he was not a fan of that concept. Anyway, the way I view it...


Speaker 2:

It's nice and simple.


Speaker 1:

I mean, it's a fitting function, as you described it. It feels like a fitting function that has some purpose in it. One is to capture that you can have a very long ignition time - if your heat flux is very low, it will be a very, very long time, and if the heat flux is very high, it can be almost instantaneous - and there's a spectrum in between those two where it will take a certain amount of time. It also captures the fact that there exists some critical value below which you will never get an ignition.


Speaker 2:

So that was one of the things I didn't explain. So, as well as doing some standard radiation levels in the cone and measuring ignition data, I also went backwards in one kilowatt per square meter increments to find the point at which ignition occurred at least one time out of three replicates, and then the next level down where no ignition occurred over three replicates.


Speaker 1:

And.


Speaker 2:

I ran my experiments for 900 seconds. So one of the many criticisms of the technical robustness of what I did was whether I should have run my cone experiments for more than 900 seconds, 15 minutes - but I ran them for 15 minutes. I did get one specimen, I recall, that ignited at like 14 minutes and 57 seconds, or something like that. I got pretty close to that threshold, but anyway, that's what I did. So yeah, you can work backwards to get that critical level at one kilowatt increments - ignition and no ignition - then go in the middle as your critical value. So it might be seven and eight, and your seven point five is your critical value. So yeah, that's one of the aspects of that methodology. So basically, the correlation procedure - the curve fitting, straight-line fitting - gives you a critical value where it intercepts the axis, but then you can experimentally check it, so a reality check on that with experiments as well.
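A short sketch of that bracketing procedure, with `ignites_within_900s` standing in for actually running a cone exposure at the given flux; it is only an illustration of the stepping logic, not the exact protocol used.

```python
def bracket_critical_flux(ignites_within_900s, start_flux_kw=20, replicates=3):
    """Step down in 1 kW/m2 increments until no ignition occurs in any of the
    replicates; take the critical flux as midway between the last level with
    ignition and the first level without (e.g. ignition at 8, none at 7 -> 7.5)."""
    flux = start_flux_kw
    while flux > 0:
        if not any(ignites_within_900s(flux) for _ in range(replicates)):
            return flux + 0.5
        flux -= 1
    return 0.0
```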


Speaker 1:

Quite interesting. You know, it's not a physical law that describes some fundamental natural process, but it's a useful model and I really appreciate it. The last thing that it captures is - you mentioned thermally thick, thermally thin items. So if you have polyurethane in a solid form and polyurethane in a foam form, it will behave completely differently because of how it accumulates heat. Can you perhaps comment on that?


Speaker 2:

Yeah, well, you know, one of the obvious limitations with cone testing is materials that shrink away from the heat source. So, you know, polyurethane foam, for example - as soon as the heat goes on at any sort of reasonable level it's going to start shrinking away, even if it hasn't ignited, and hence, you know, it's going to be getting less radiation at the surface, the receding surface. So the method doesn't necessarily deal with all of that. But I do remember that I went through, in one of the papers that I did as part of my PhD, a justification of why this particular simplified method was chosen, and as much as anything it was acknowledging the fact - you're probably aware, I'm sure, if you've had a number of conversations with Mike, because "consistent level of crudeness" is one of the mantras that he operates by - very much from that point of view, that ignition is a very complicated scientific area and you can go to a lot of effort to, you know, mathematically represent it, etc., etc. But one of the downsides of that is you need a whole lot of material input parameters for it, and getting those for a wide range of materials that are suitable for the purpose is a challenge in itself.


Speaker 2:

So one of the appeals of the flux time product, along the lines of this consistent level of crudeness, is that it incorporates all that complexity - the material property stuff - into the formulation. So, you know, the scientific basis for the flux time product method takes account of all that, but you don't have to use all those different input parameters for your calculations. It sort of just glosses over that, if you like, with a simplified method. But essentially the argument I made in one of these papers was that, for a purpose where there's such a level of variability in the things you were dealing with - you know, design fires in a compartment - having a super high level of, you know, accuracy, whatever that is, for the ignition process was unjustified. It would give an assumed level of accuracy that wasn't really appropriate for other parts of the methodology that we were using.


Speaker 1:

And we're just after the episode with David Morrisset, who studied that variability in depth with the cone. So again, if the listeners have missed the episode - I think it's going to be two episodes ago - David has gone way, way further than any of us in quantifying this uncertainty. So props to that. And OK, now to close the design fire generator: once the ignition conditions based on this flux time product concept were met on your item, then you proceeded again into your fire object database. You've chosen the HRR curve for that object and applied it - or was there a modification to it?


Speaker 2:

As I was saying, each of the objects in the database that had been sampled had a heat release rate curve assigned to it, and that was generally based, where possible, on experimental data. So it wasn't just an alpha t-squared fire, it was an actual growth, a peak, maybe a plateau, maybe double peaks, depending on what it was, and then a decay phase - so, you know, a natural, real sort of fire curve. And then, yeah, there were these flux time product indices and parameters for the ignition - there were three that you needed for that methodology. And then, basically, as soon as the received radiation arrived - either from the flames of adjacent burning objects or the hot upper layer; and once again that combination was simplified: it didn't take account of you having a top surface and a front surface, it just considered the target as like a point, you know, like a tiny sphere that was receiving radiation from all directions concurrently at the same point, so yeah, another simplification - as soon as that nominal ignition threshold was reached, you had your heat release rate curve from previous items and then you started adding, cumulatively, the heat release rate curve for the new item, starting from zero and then doing what it did for that individual item, and that gave you a cumulative heat release rate curve for the whole compartment as that progressed. And then, yeah, flashover, ventilation-limited conditions, etc., etc., all handled by the zone model functionality. So yeah, overall that was giving you an individual heat release rate curve for every single iteration of the Monte Carlo process, and then it just repeated all that again, started from a blank page for the compartment - so the compartment sort of stayed the same.
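A minimal sketch of how individual item curves might be superimposed once each item's ignition time is known; the curves, times and time step here are placeholders, not database values.

```python
def compartment_hrr(item_curves, ignition_times, dt=1.0, duration=1800.0):
    """Sum item heat release rate curves, each shifted to start at its ignition time.

    item_curves    : list of HRR sample lists (kW), one value per dt for each item
    ignition_times : ignition time (s) for each item; None means it never ignites
    """
    n_steps = int(duration / dt)
    total = [0.0] * n_steps
    for curve, t_ign in zip(item_curves, ignition_times):
        if t_ign is None:
            continue
        start = int(t_ign / dt)
        for i, q in enumerate(curve):
            if start + i < n_steps:
                total[start + i] += q
    return total

# toy example: first item ignites at t=0, second 300 s later, third never ignites
first_item = [min(5.0 * t, 400.0) for t in range(600)]    # ramps to a 400 kW plateau
second_item = [min(2.0 * t, 250.0) for t in range(300)]
hrr = compartment_hrr([first_item, second_item, second_item], [0.0, 300.0, None])
print(max(hrr), "kW peak for the compartment")
```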


Speaker 2:

It was what it was, because generally in most building situations you'd know what the compartment sizes are - that's one of the givens. You'd populate that space again randomly, so it could be packed full or it could have, you know, a single item. It could be down to just a single item in the room and you wouldn't get any flashover. But yeah, I can't remember the percentages - you know, we ran lots and lots of simulations - but yeah, generally you were getting flashover in maybe sort of 70 to 80 percent of the cases, something like that roughly. So a reasonably high proportion. But there were some situations where it was just a single item burning that never got to flashover. Maybe a second item took ages and ages to ignite - so one had virtually burned out and then another one went - so you just got a bit of a peak, a bit of a peak, a bit of a peak, but it never got high enough to reach flashover.


Speaker 2:

And for me, with all the simplicity and lack of complexity, it did actually give you a good range of possibilities, and I always felt comfortable that it was doing a reasonable job of covering the range of possibilities. And think about it - you know, the objective was to have something that gave you a probabilistic sort of range of what might happen. I always felt comfortable that, even if some of the details weren't super robust and it's quite simple, it still did give you this good range of possibilities, which I felt comfortable with.


Speaker 1:

Because we're approaching the end of the interview, I think a good exercise for anyone listening would be to actually download B-RISK and play with it, because it's built in there. It's available. I became a B-RISK user a long time ago for a silly reason: I had reached a point where CFAST would not run on my Windows computer - there was a version of CFAST that was kind of not working that well with Windows - and B-RISK saved me back then because it ran flawlessly, and I kind of played with those probabilistic inputs and outputs. I kind of enjoyed that a lot. So I would recommend that to listeners who are interested in how it actually behaves.


Speaker 1:

It tells you whether the item has ignited. It tells you whether flashover has occurred in your compartment or not, whether the ventilation limit was met in your compartment. It's quite an interesting study and you can play with openings. I really like it as an educational tool, by the way, because you can very quickly demonstrate how different things play out in fires. One final question. I know that this method - I'm not sure how it is in New Zealand, but at least where I am - would not be commonly used, but I would attribute that more to the zone model rather than anything regarding the design fire generator. It's simply that zone models are not that well perceived by the authorities. But now I also see the growth of CFD, the developments in computational power - CFD is getting faster and faster and faster. I think eventually we'll come back to this concept as some sort of design fire generation for our CFD analyses, and then it can make a really huge comeback. I hope you're looking forward to that.


Speaker 2:

Well, of course. Because, I guess, closing out that loop, going back to the earlier comments I made about the history of the performance-based building code and the regulator wanting to introduce these probabilistic code requirements - to finish off that story and to close out that loop - that never really happened. I'd done all this work on the design fire generator, and then it was quite discouraging that the regulations didn't actually go as far as we might have imagined. And later on in the project we had to make a call that, although we had this probabilistic functionality, the way the new regulations were going, or had gone, was that they were introducing this new verification method where there were 10 scenarios you had to consider for your particular design, and for some of them there were defined design fires and things like that.


Speaker 2:

So what we had to do, as well as this probabilistic functionality, was introduce another mode, which we call, surprisingly, C/VM2 mode, where it actually preloaded all the relevant fire scenarios from the new C/VM2 document. That's the mode of B-RISK that is predominantly used and would probably be the one suitable for use in other countries. So off to the side has been this probabilistic functionality, or mode, in B-RISK that's scarcely ever used, except when you have conversations like this and people become aware that there's an option there if they're interested in doing something, maybe in the research sphere, where they want to understand a bit more about some of the variability that can occur with compartment design fires. But it was a wee bit discouraging, after putting what amounted to almost nine years of my life into my PhD, that it never really got used.


Speaker 1:

I'm pretty sure the time will come, because the concept itself - if you take it away from the calculation software, as I said, if you take it away from the zone model... Because I think the reason why it was not used is perhaps, yeah, AHJ-related, and from my perspective it's zone model related, because I would not be able to present zone model calculations to my fire department. They would laugh me out and they would expect colorful images instead.


Speaker 1:

But I'm pretty sure the renaissance of this type of modeling will still come. And now people have a great reference to look into, and I really hope with this conversation we perhaps inspired some listening researcher out there to take it a little further. Because, yeah, the concept that you've introduced at the beginning of the episode - your pathway to, you know, why you would build a tool like that - is still valid today. The validity of this has not changed. The necessity of such tools has not changed since you've done it. So on this, I guess, well, Greg, thank you very much for coming to the Fire Science Show, and I have a feeling we'll see each other a lot more in the next weeks. Oh, and see you in person in Copenhagen - I'm looking forward to that SFPE conference in some weeks.


Speaker 1:

That's going to be a very interesting event. See you there. Thank you, Greg.


Speaker 2:

Well, thank you very much, Wojciech. It's been my pleasure to have a chat with you. It's been very interesting to remember some of the work from my PhD study. So, yeah, thank you for the opportunity. It's been great. I really appreciate it.


Speaker 1:

And that's it. I hope you enjoyed this. Greg sounded a little disappointed at the end that the design fire generator has not found that much practical use in New Zealand, but I think it's a concept that is very timely and, as discussed at the end of the episode, will surely find its way into practical fire engineering. I also know that it's been used by Zahir in his PhD on vehicles. We've kind of discussed this a little bit in the episode with Mike Spearpoint on car parks and, by the way, if you wondered which Mike Greg is referring to, that's of course Mike Spearpoint, who was his PhD supervisor. Fantastic stuff has come out of Canterbury.


Speaker 1:

I was always looking up to this place as one of the most interesting academic hubs in the world of fire science, and I will be bringing more guests from that region in future episodes of the Fire Science Show for sure. I hope this episode was kind of inspiring to you. I hope that you have some of your own ideas on how to craft your design fires better, together with the episode I tried to record on vehicle design fires and the recent episode with David Morrisset on the uncertainty in fire measurements. I think these three, together perhaps with the episode on the NIST calorimetry database, give a pretty solid foundation on how to craft design fires. So, yeah, I hope we moved fire safety engineering a little bit further today together. So that would be it for today's episode.


Speaker 1:

There is amazing stuff coming your way in April. I cannot wait for that. I think the grand reveal of the name and the concept of the project is coming up soon, so stay focused and save some listening time for the beginning of April. You will need more than one hour for the content weekly, so I hope you'll enjoy that. Anyway, that's it for today. And guess what? Next Wednesday I'm here again to bring some interesting fire science to you. So, yeah, see you then. Bye!