WEBVTT
00:00:00.541 --> 00:00:02.548
Hello everybody, welcome to the Fire Science Show.
00:00:02.548 --> 00:00:09.173
When the ChatGPT revolution occurred, I was very happy to tell you all about it as soon as I could.
00:00:09.173 --> 00:00:22.533
I also had an episode with Mike Kinsey where we discussed the possibility of using tools like ChatGPT, and of building your own tools that can support engineers' workflows.
00:00:22.533 --> 00:00:24.663
Fast forward a few years later.
00:00:24.663 --> 00:00:27.507
I think we got used to this technology by now.
00:00:27.507 --> 00:00:30.553
I think it's going to be the defining technology of this decade.
00:00:30.553 --> 00:00:42.072
You know, like Internet defined the 90s, I guess Facebook defined the 2000s, instagram and Twitter probably defined the 2010s.
00:00:42.072 --> 00:00:43.323
Maybe TikTok.
00:00:43.625 --> 00:00:50.567
I'm a different generation and I think this decade will be defined through large language models, chatbots, etc.
00:00:50.567 --> 00:00:54.293
And it's just a part of our lives nowadays.
00:00:54.293 --> 00:00:59.073
But are you really using that in your engineering workflow?
00:00:59.073 --> 00:01:00.639
I'm using it for programming.
00:01:00.639 --> 00:01:04.013
I'm using it to solve pieces of the problems that I work on.
00:01:04.013 --> 00:01:22.121
I find solutions to some issues much quicker with the support of chatbots, but it's not that I'm really incorporating them into my workflows completely. And the real problem with them is privacy.
00:01:22.121 --> 00:01:28.313
And hallucinations, yes, but privacy would be the one that worries me the most.
00:01:28.799 --> 00:01:44.531
A day or two ago, I saw a quote from Sam Altman. When he was asked what's going to happen if a court asks OpenAI to release a user's prompts in some sort of court hearing, Sam said that they will probably have to give that to the court.
00:01:44.531 --> 00:01:53.840
So if you are talking with ChatGPT, or any kind of LLM, it's not that you're having a secure conversation with your computer.
00:01:53.840 --> 00:02:00.281
You're sending all of that into the internet and if you upload a file, it goes somewhere.
00:02:00.281 --> 00:02:04.569
If you upload a confidential file, well, it goes somewhere as well.
00:02:04.569 --> 00:02:12.653
So you probably don't want to do that, and that kind of limits the ability for us to work, because most of the stuff we have here is confidential.
00:02:12.653 --> 00:02:16.751
The amount of NDAs I have to sign to do anything is crazy.
00:02:16.751 --> 00:02:22.473
Therefore, the ability to use AI in my engineering workflow is limited.
00:02:22.800 --> 00:02:34.271
And here comes the solution of my two guests for today: Professor Ruggiero Lovreglio from Massey University and Dr Amir Rafe from Utah State University.
00:02:34.271 --> 00:02:47.911
They've been playing with this technology, but with LLMs, or small language models, that you can install locally on your computer, where you keep ownership of the data that is being sent.
00:02:47.911 --> 00:02:50.271
You don't even need internet for them to work.
00:02:50.271 --> 00:02:51.680
Quite a magical world.
00:02:51.680 --> 00:03:00.889
Instead of relying on the insane computational power of OpenAI or xAI, you can use your own computer to be your own chatbot.
00:03:00.889 --> 00:03:02.944
It comes with requirements, though.
00:03:02.944 --> 00:03:03.829
Not that easy.
00:03:03.829 --> 00:03:09.372
Well, it technically is easy, but it has its challenges, which you will learn in the episode.
00:03:09.372 --> 00:03:15.713
So I think this opens a new pathway where those tools can be really, really useful for fire safety engineering.
00:03:16.116 --> 00:03:23.469
Enough of my rambling, because there's a lot of valuable content behind the intro, so let's spin the intro and jump into the episode.
00:03:23.469 --> 00:03:29.719
Welcome to the Fire Science Show.
00:03:29.719 --> 00:03:33.203
My name is Wojciech Węgrzyński and I will be your host.
00:03:49.193 --> 00:04:02.659
The Fire Science Show is into its third year of continued support from its sponsor, OFR Consultants, who are an independent, multi-award-winning fire engineering consultancy with a reputation for delivering innovative safety-driven solutions.
00:04:02.659 --> 00:04:16.420
As the UK's leading independent fire risk consultancy, OFR's globally established team have developed a reputation for preeminent fire engineering expertise, with colleagues working across the world to help protect people, property and the planet.
00:04:16.420 --> 00:04:32.548
Established in the UK in 2016 as a startup business by two highly experienced fire engineering consultants, the business continues to grow at a phenomenal rate, with offices across the country in eight locations, from Edinburgh to Bath, and plans for future expansions.
00:04:32.548 --> 00:04:40.752
If you're keen to find out more or join OFR Consultants during this exciting period of growth, visit their website at ofrconsultants.com.
00:04:40.752 --> 00:04:43.201
And now back to the episode.
00:04:43.201 --> 00:04:47.425
Hello everybody, welcome to the Fire Science Show, around the globe.
00:04:47.425 --> 00:04:56.175
Today I'm in my studio in Warsaw. My first guest: Dr Amir Rafe from Utah State University.
00:04:56.175 --> 00:04:58.016
Hey, Amir, nice to see you.
00:04:58.360 --> 00:05:11.687
Hello, thank you for welcoming me, thank you. Good afternoon, I guess. And my second guest, Professor Ruggiero Lovreglio from Massey University.
00:05:11.687 --> 00:05:14.970
Hey, Rino, good to see you. Good morning everyone.
00:05:14.970 --> 00:05:17.086
Good morning in New Zealand.
00:05:17.105 --> 00:05:24.750
Wow, that's literally around the globe, nice. We are in the future here and can tell you that the weather looks good.
00:05:31.540 --> 00:05:32.521
I'm so glad that tomorrow looks nice.
00:05:32.521 --> 00:05:33.785
Thank you, that's what I was looking for.
00:05:33.785 --> 00:05:38.182
It's a very late evening in Warsaw. Amir, congratulations on passing your viva; I heard that it was a few days ago.
00:05:38.182 --> 00:05:43.211
So it all makes for a good start to the episode, and it's very interesting content.
00:05:43.211 --> 00:05:48.846
We are talking about AI and how AI will change the industry.
00:05:48.846 --> 00:06:02.302
I remember, I think two years ago, I was talking with Mike Kinsey on the podcast about creating some sort of AI tools, or AI-like tools, because Mike had some explicit tools that were not really AI.
00:06:02.302 --> 00:06:07.754
He also had AI, to be honest, but it felt like, you know, a dream for the future.
00:06:07.754 --> 00:06:09.146
It was very interesting.
00:06:09.146 --> 00:06:13.211
Today, two years later, holy crap, a lot has changed.
00:06:13.211 --> 00:06:18.771
Rino, can you summarize where we are in this madness of AI revolution today?
00:06:19.300 --> 00:06:45.314
So yeah, we can say that three years ago we all had the shake-up when we tried the GPT thing, I think it was back then, and we started typing and we started seeing: oh gosh, it's answering questions, it's looking like a human, it's doing stuff that we're not expecting. And that was the big shock that the whole world had with OpenAI and their first public tool for all of us.
00:06:45.314 --> 00:06:48.009
From there, things have been going wild.
00:06:48.009 --> 00:06:52.170
You can see that there is a lot more competition on cloud services.
00:06:52.170 --> 00:07:01.985
I experimented myself, among them, with Claude, Grok, Gemini, and they are really out there trying to compete with each other.
00:07:01.985 --> 00:07:12.089
Who is going to have the best results with benchmarking? Some of them cheating, because they train the model on the benchmark and then they say: oh look, we're getting a really good mark.
00:07:12.089 --> 00:07:13.865
It's like, of course.
00:07:13.865 --> 00:07:21.461
So there is a lot at stake, especially who is going to be the one still leading forward.
00:07:21.903 --> 00:07:28.067
They've been talking, I don't know, for a year about ChatGPT, and the new one is always about to come.
00:07:28.067 --> 00:07:35.572
God knows when it's going to come, but we could already see, from GPT-3 to 4, the great advancement.
00:07:35.572 --> 00:07:58.526
The latest news in the last couple of weeks was the release of an agent function within ChatGPT, which was like: wow, for the world. Not much wow for me and Amir, because if you are in the field and you see all the open tools that are out there, you've been prototyping a lot of that stuff yourself before ChatGPT or whoever produced those tools.
00:07:58.526 --> 00:08:02.425
So it was like, yeah, nice, let's give it a try, let's see how it works.
00:08:02.425 --> 00:08:08.293
And so now everyone has the buzzword: agentic AI, agentic AI, goodness.
00:08:08.579 --> 00:08:19.312
And if you see what ChatGPT was one year ago, it had the possibility to be an agent, because it was loading a Python environment, writing the code for you, developing the charts.
00:08:19.312 --> 00:08:30.516
And as I tell my wife, it's not the language model itself that develops the charts in ChatGPT; it's because it has agency over the Python code, to do stuff and give the results back to you.
00:08:30.516 --> 00:08:32.988
So agentic is not new.
00:08:32.988 --> 00:08:40.014
It's a nice buzzword to do marketing, to sell fluff, but it's probably already two, three years old stuff.
00:08:40.014 --> 00:08:42.206
Amir probably can tell us more about it.
00:08:42.880 --> 00:08:44.368
Yeah, I'm very happy to hear that.
00:08:44.368 --> 00:08:51.529
If we could just quickly round up what the popular models are: you mentioned ChatGPT, Claude, Grok, Gemini.
00:08:51.529 --> 00:08:54.149
There's also Perplexity, if I'm not wrong.
00:08:54.480 --> 00:08:59.533
Yeah, I'm not even mentioning Perplexity or Copilot, because they are not models.
00:08:59.533 --> 00:09:04.110
They are just AI tools that use as a backbone these big models.
00:09:04.110 --> 00:09:16.787
Those platforms are just capable of reusing something that is already there, through an API, and selling you something that is a bit more customized for specific tasks, and that's the direction we are taking.
00:09:16.787 --> 00:09:20.855
Also for fire protection engineering. And the Chinese one, what was its name?
00:09:20.855 --> 00:09:26.211
Ah yeah, deepcq was like there was another shaker for the wars.
00:09:26.211 --> 00:09:39.972
It's because of the results, because it was pretty cool with its thinking capability, but also because people realized that they spent, at least that's the official data, much less money than everyone else to train a model.
00:09:39.972 --> 00:09:46.532
And they were like, oh my goodness, and in China there are a lot of bans and difficulties in finding advanced graphics cards.
00:09:47.000 --> 00:09:48.725
But underneath the hood,
00:09:48.725 --> 00:09:52.452
it's all instances of a very similar concept.
00:09:52.452 --> 00:09:54.426
It was called Llama, I believe,
00:09:54.426 --> 00:09:54.947
if I'm not wrong.
00:09:54.947 --> 00:10:01.273
But, Amir, tell us more about where we are from a technical point of view and how the environment looks right now.
00:10:01.633 --> 00:10:21.328
Sure. First I wanted to say we have had AI, I think, since the 1940s, and after that, in 1950, Alan Turing designed a test for whether machines can work as a human, or think as a human.
00:10:21.328 --> 00:10:30.687
And I think now, in 2025, we are still working on that, because we are looking for AGI, artificial general intelligence.
00:10:30.687 --> 00:10:42.086
But on the product side, we have a lot of AI models or, as we can say, a lot of large language models or small language models.
00:10:42.086 --> 00:10:50.192
They can do a lot of things in the space of fire engineering or transportation.
00:10:50.192 --> 00:10:56.493
If I wanted to pick up just one part from Rino: Perplexity.
00:10:56.493 --> 00:11:00.990
It has its own large language models, called Sonar.
00:11:00.990 --> 00:11:06.230
So they have a Sonar reasoning model as the thinking model.
00:11:06.552 --> 00:11:24.274
But I think when OpenAI published ChatGPT and the GPT models, GPT-2 and after that 3.5, I think in late 2022, if I'm correct.
00:11:24.274 --> 00:11:36.191
So they changed a lot of things, because, you know, we worked with them for the usual tasks and, you know, ChatGPT was a generative AI.
00:11:36.191 --> 00:11:47.085
It's so important for us because we can communicate with these models and we can ask them questions, and after that the AI agents developed, and a lot of things.
00:11:47.085 --> 00:12:00.542
So after ChatGPT, the Meta company, as we know it for Facebook or WhatsApp or Instagram, published an open-source model, Llama.
00:12:00.542 --> 00:12:10.500
That changed a lot of things, because now we found we can work with open-source models, we can use the API for our tools.
00:12:10.500 --> 00:12:26.347
It's so important for us. Now, in 2025, we have a lot of open-source models, like Llama, and we can use a lot of APIs through OpenRouter or similar products.
00:12:27.139 --> 00:12:43.914
So I think the open-source models that we have on Hugging Face and, you know, Ollama, are so important for us as researchers or as engineers,
00:12:43.914 --> 00:12:47.179
to create products, to create a pipeline for our work.
00:12:47.179 --> 00:12:55.687
So I think it's a good start to talk about how we can use AI in our research or in our field, fire engineering.
00:12:56.481 --> 00:13:03.049
It's an ongoing discussion, and already a practical one, because everyone is using AI in one way or another.
00:13:03.049 --> 00:13:07.869
Even if you're doing Google search today, you are using some sort of AI already in it.
00:13:07.869 --> 00:13:36.902
One thing that was with us from the start of this AI, GPT-fueled revolution, with the first release of the chatbot, was how deep they can go, how good answers they can make. And the immediate problem that we observed was that, with great confidence, they will give you a really bullshit answer, sometimes blatantly wrong, with 100% confidence that they're right.
00:13:36.902 --> 00:13:40.988
For me as a user of these tools, this is absolutely frustrating.
00:13:41.379 --> 00:13:45.250
Not sure if you've seen the meme of an AI surgeon.
00:13:45.250 --> 00:13:48.485
It says: I've removed part of your body. Shouldn't it be on the other side?
00:13:48.485 --> 00:13:51.402
Oh, yes, you are right, it should have been on the other side.
00:13:51.402 --> 00:13:52.586
Then let me remove it again.
00:13:52.586 --> 00:13:54.556
That kind of summarizes this experience for me.
00:13:54.556 --> 00:13:57.804
Has anything changed in this hallucination aspect?
00:13:57.804 --> 00:14:00.110
Have things improved, changed?
00:14:00.110 --> 00:14:02.943
I have a feeling they follow a sinusoidal curve.
00:14:02.943 --> 00:14:08.028
They get better and worse, better and worse, and I can't tell which part of the cycle we're in.
00:14:08.681 --> 00:14:18.572
Yeah, hallucination has been one of the major things that people have been complaining about, especially when you are trying to get something with a reference.
00:14:18.572 --> 00:14:23.129
You get these beautiful paper titles, and then you go there and you don't find them.
00:14:23.129 --> 00:14:24.230
Can we prevent it?
00:14:24.230 --> 00:14:25.153
Definitely yes.
00:14:25.153 --> 00:14:35.639
The problem is that really general models like ChatGPT have been trained to be good at answering a bit of everything, and not to actually report when the knowledge is not there.
00:14:35.639 --> 00:14:53.109
But I've been learning a lot from Amir that, using a system prompt, or using other settings or parameters of the model, like the temperature, you can reduce it and make the model more deterministic.
00:14:53.109 --> 00:14:56.710
It then sticks more to the knowledge it was trained on.
00:14:56.710 --> 00:15:14.551
So I always show an example where I use even a small model, like 3 billion parameters. To give you a reference point, ChatGPT-4 is rumored to be 1.7 trillion parameters; GPT-3 was about 10 times smaller, 175 billion.
00:15:14.551 --> 00:15:17.947
And now you can run those on your own computer.
00:15:17.947 --> 00:15:23.710
Consider that one gigabyte of RAM will allow you roughly a bit more than 1 billion parameters.
00:15:23.710 --> 00:15:28.940
So we can run smaller language models on our own personal PC, locally.
00:15:28.940 --> 00:15:30.886
So you unplug all the internet.
00:15:30.947 --> 00:15:35.029
It's still working, and you start asking things like: who is Rino?
00:15:35.029 --> 00:15:37.769
Of course, it's so small that it doesn't know me.
00:15:37.769 --> 00:15:51.888
It might know a lot about Isaac Newton, but if I don't make any change in the settings, it's most likely going to start telling me that I'm either a Mafia boss or a musician from Naples. I don't know which one is worse, just kidding.
00:15:51.888 --> 00:15:55.289
And it's going to start fabricating a lot of stuff.
00:15:55.289 --> 00:16:06.221
But if I then put in a specific prompt to say: just stay on the facts, don't make hypotheses, and reduce the temperature of the model,
00:16:06.221 --> 00:16:09.990
The same model will tell you: I don't have information about this context.
00:16:09.990 --> 00:16:11.927
Please ask something else.
00:16:11.927 --> 00:16:14.908
So it's something that can be modified.
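The two knobs described here, a restrictive system prompt plus a low temperature, can be sketched as the request you would send to a locally hosted model. This is a minimal sketch assuming an Ollama-style `/api/chat` endpoint; the model name, endpoint and prompt wording are illustrative, not prescribed in the episode:

```python
import json

def build_chat_request(question: str) -> dict:
    """Assemble a request for a locally hosted model (Ollama-style /api/chat).

    The system prompt pins the model to its sources, and temperature 0
    makes sampling as deterministic as possible -- the two knobs mentioned
    above for curbing hallucination. Model name is an example.
    """
    return {
        "model": "llama3.2:3b",           # a small local model (example)
        "stream": False,
        "options": {"temperature": 0.0},  # more deterministic decoding
        "messages": [
            {"role": "system",
             "content": ("Stay on the facts. If you do not have information "
                         "about the question, say so instead of guessing.")},
            {"role": "user", "content": question},
        ],
    }

payload = build_chat_request("Who is Rino?")
print(json.dumps(payload, indent=2))
# Sending it to a running local Ollama server would be something like:
#   urllib.request.urlopen("http://localhost:11434/api/chat",
#                          data=json.dumps(payload).encode())
```

With this payload, a model asked about something outside its training data is more likely to decline than to fabricate an answer.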
00:16:15.539 --> 00:16:18.830
You have hallucination when the model doesn't have information about that.
00:16:18.830 --> 00:16:23.631
If you provide it with all the information that it needs, it will generate an answer.
00:16:23.631 --> 00:16:26.369
So that's the reason behind the hallucination.
00:16:26.369 --> 00:16:27.947
And they are probabilistic models.
00:16:27.947 --> 00:16:32.407
They don't really understand, they don't have consciousness, so they hallucinate.
00:16:32.407 --> 00:16:33.341
You can tell them.
00:16:33.341 --> 00:16:34.888
Hey, you're making some assumption.
00:16:34.888 --> 00:16:35.509
Put it forward.
00:16:35.940 --> 00:16:44.123
The problem with the big models that we use through the cloud is that you don't have access to all these parameters; you don't have the possibility to use system prompts.
00:16:44.123 --> 00:16:51.835
You can do some tweaking on Gemini when you use it with Google AI Studio, but with most of the others, it's all locked.
00:16:51.835 --> 00:16:58.414
You don't even know what system prompt OpenAI is using.
00:16:58.414 --> 00:17:00.385
So it's there, you can't touch it, and that's a big limitation.
00:17:00.385 --> 00:17:20.057
Hence it's much better using those models with an API, or using open-source tools. Like Amir said, just Google Ollama, and you will see that there are so many open solutions out there for you for free, with models nearly close to 1 trillion parameters, and you can even download them.
00:17:20.057 --> 00:17:28.667
But good luck finding a computer that can run them, a cluster that can run them, because they are really GPU-intense, and that's another thing we can discuss later.
00:17:29.402 --> 00:17:31.008
So, check what models your machine can run.
00:17:31.259 --> 00:17:43.009
I want to ask Amir on some of the terminology you have used, because I think for our listeners, the ones who are not that technologically savvy, let's try to clean some concepts.
00:17:43.009 --> 00:17:49.023
You've mentioned parameters, you've mentioned temperature, you've mentioned API tools.
00:17:49.023 --> 00:17:51.865
I would like to go over them in more or less this order.
00:17:51.865 --> 00:17:59.904
So perhaps, Amir: if Rino is telling me one model is like 3 billion parameters, and another is a trillion parameters, what does it mean?
00:17:59.904 --> 00:18:01.546
What is a parameter in this context?
00:18:01.880 --> 00:18:10.559
It's a very good question, because it's a big start to working with AI models on the product side.
00:18:10.559 --> 00:18:20.490
When we call these GPT, or Generative Pre-trained Transformers, they're created based on some data, and after that they work for some tasks.
00:18:20.490 --> 00:18:33.034
So when we talk about parameters: a large language model is created based on some data, textual data or images, depending on which model we are working with.
00:18:33.661 --> 00:18:45.355
And it's a different concept when we are talking about a dataset created based on reinforcement learning, so that's different from how other models are created.
00:18:45.676 --> 00:18:51.188
So parameters are the weights the model learned from its data; the parameter count is the size of the model.
00:18:51.188 --> 00:19:05.215
When we take a model, for example Gemma from Google, it has 4 billion parameters,
00:19:05.215 --> 00:19:15.160
so it's a model with 4 billion of those learned weights, trained on, for example, text data, PDFs or books.
00:19:15.160 --> 00:19:24.790
There is also another technical term here: the context window.
00:19:24.790 --> 00:19:30.063
The context window is how much data a model can read at once.
00:19:30.063 --> 00:19:31.849
It can read, for example, as much as its context window allows.
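A rough, back-of-the-envelope way to see what a parameter count means in practice is to convert it into the memory needed just to store the weights. This is a simplified sketch (it ignores the context cache and runtime overhead, and the quantization levels are typical values, not specific to Gemma):

```python
def model_memory_gb(n_params: float, bits_per_weight: int = 8) -> float:
    """Rough RAM needed just to hold the weights (ignores KV cache etc.)."""
    bytes_per_weight = bits_per_weight / 8
    return n_params * bytes_per_weight / 1e9

# A 4-billion-parameter model like Gemma at 8-bit quantization:
print(round(model_memory_gb(4e9, bits_per_weight=8), 1))   # 4.0 (GB)
# The same model squeezed to 4 bits per weight:
print(round(model_memory_gb(4e9, bits_per_weight=4), 1))   # 2.0 (GB)
```

At 8 bits per weight this also recovers the rule of thumb mentioned earlier: roughly one gigabyte of RAM per billion parameters, which is why quantized versions of models are popular for local use.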
00:19:31.869 --> 00:19:33.054
One more follow-up question, if I can.
00:19:33.054 --> 00:19:55.007
But the simple fact that one model has more parameters does not necessarily mean that it's a better model, because if you are going for a specific use, you would probably have a better fine-tuned model with fewer parameters, very fit to what you're trying to accomplish, rather than a multi-billion-parameter model
00:19:55.007 --> 00:20:12.133
trained on random stuff. And I guess when they trained the next instances of Grok or ChatGPT, they probably just let them read the entire internet as a training set. Yes, as Amir was saying, with a bigger model the brain is much bigger.
00:20:12.880 --> 00:20:15.429
It comes with a much bigger context length.
00:20:15.429 --> 00:20:26.211
The context length is basically the short-term memory of a language model, so it's where you put all the information you want per turn, all the things that you want it to digest and process.
00:20:26.211 --> 00:20:29.347
Based on that, the long-term memory is separate.
00:20:29.347 --> 00:20:34.208
What the model has been trained on can be changed; we can do some fine-tuning on that.
00:20:34.208 --> 00:20:43.319
So that's why really big models are really good, because the reasoning capability improved and also the amount of information that can be worked on is much better.
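The short-term-memory idea can be sketched as the bookkeeping a chat application has to do: only as much of the conversation as fits the context window is sent to the model. The word-based token count and the history text below are illustrative simplifications (real tokenizers count differently):

```python
def fit_to_context(messages, max_tokens,
                   count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit the model's context window.

    Token counting here is a crude word count -- real tokenizers differ,
    but the principle is the same: older turns fall out of the window.
    """
    kept, used = [], 0
    for msg in reversed(messages):          # walk backwards from the newest
        t = count_tokens(msg)
        if used + t > max_tokens:
            break                           # window full: older turns dropped
        kept.append(msg)
        used += t
    return list(reversed(kept))

history = ["hello there", "how big is a 3B model",
           "about two gigabytes at 4-bit", "and the context window",
           "that is its short-term memory"]
print(fit_to_context(history, max_tokens=10))
# → ['and the context window', 'that is its short-term memory']
```

A bigger context length simply raises `max_tokens`, so more of the conversation (or a longer document) survives into each request.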
00:20:43.883 --> 00:20:56.251
And now, could you explain that concept of decreasing temperature and those parameter settings? Because, as Rino said, you don't really play with those at all with your normal chatbots.
00:20:56.251 --> 00:21:00.817
So what do you mean by altering the parameters of the model as a user?
00:21:00.938 --> 00:21:04.145
I think, regarding your question:
00:21:04.145 --> 00:21:06.952
Everything depends on your task.
00:21:06.952 --> 00:21:24.951
Maybe in general tasks, larger models, for example where we have a 170-billion-parameter model, work better, because they are more complex and they can answer you with more accuracy.
00:21:24.951 --> 00:21:45.453
But when you are working on the engineering side, when you want to connect the AI to documents or some specific task, maybe a smaller model works better, because you want to create a specific brain for the AI in a specific area.
00:21:45.453 --> 00:21:47.046
So it depends on your task.
00:21:47.480 --> 00:21:58.173
If you want to ask about the weather, if you want to ask about scheduling something, I think the larger model works better than the smaller one.
00:21:58.173 --> 00:22:05.205
So I want to just mention another thing about the temperature, which I forgot to say.
00:22:05.205 --> 00:22:24.368
Temperature is so important when you are using the API; it sets the creativity level of the model. It's especially important when you are working with a RAG structure, or when you want to extract specific data from a document.
00:22:24.368 --> 00:22:36.272
It's important to set the temperature to zero, or less than 0.5, because the temperature scale for the AI model is between zero and one.
00:22:36.272 --> 00:22:57.051
So when we decrease this and move from one to zero, we decrease the creativity of the model, and, you know, it's a kind of way we say to the model: use our data for answering, not your brain, or the data that you were created from.
00:22:57.051 --> 00:23:02.606
So temperature on the product side is so important, and choosing the model is so important.
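The effect described here can be shown with the actual mechanism samplers use: dividing the model's raw scores (logits) by the temperature before turning them into probabilities. The numbers below are toy values, not from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into next-token probabilities.

    Dividing by the temperature before the softmax is what samplers do:
    low temperature sharpens the distribution (more deterministic),
    high temperature flattens it (more 'creative').
    """
    t = max(temperature, 1e-6)           # temperature -> 0 approaches argmax
    scaled = [x / t for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # toy scores for three candidate tokens
print([round(p, 2) for p in softmax_with_temperature(logits, 0.1)])
# → [1.0, 0.0, 0.0]   near-deterministic: the top token always wins
print([round(p, 2) for p in softmax_with_temperature(logits, 1.0)])
# → [0.63, 0.23, 0.14]  flatter: weaker tokens get sampled sometimes
```

This is why near-zero temperature is recommended for RAG and data extraction: the model keeps picking its single most likely continuation instead of wandering.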
00:23:02.980 --> 00:23:13.272
Temperature is a bit like drinking: if you drink a bit more, you might become more social, more chatty, and tell us things that probably we don't even need.
00:23:13.272 --> 00:23:27.094
And when you get too many shots, then you start telling a bit too much random stuff. Sometimes you need the model to be flexible in what it says.
00:23:27.094 --> 00:23:33.773
And if the temperature is too low then you start becoming a bit boring and not able to come out with something.
00:23:34.480 --> 00:23:41.107
I once spent an entire evening talking in German and I don't know German, so that must have been a very high temperature.
00:23:41.107 --> 00:23:46.588
It's possible to recreate this in some experimental setting at some conference if you want.
00:23:46.588 --> 00:23:54.632
I guess this is also the reason why sometimes a chatbot annoys me with the language it's using, like those ridiculous, you know, texts.
00:23:54.632 --> 00:24:02.030
You can immediately say: oh, this is chatbot-generated, no one speaks like that. And probably, when you go closer to zero, it just gives you more dense, simple answers, more to the point.
00:24:02.030 --> 00:24:04.980
But again, if you want something very creative,
00:24:04.980 --> 00:24:10.412
it's probably good for the temperature to be higher.
00:24:10.412 --> 00:24:13.607
You've again used 'API', and not everyone knows what an API is.
00:24:13.607 --> 00:24:14.611
So what's API?
00:24:15.079 --> 00:24:15.844
API, it's...
00:24:15.844 --> 00:24:17.270
you know, can I keep it simple?
00:24:17.270 --> 00:24:22.589
Because, you know, I'm not a computer scientist, I'm just a user of language models and AI.
00:24:23.300 --> 00:24:24.083
I don't think many are.
00:24:24.083 --> 00:24:28.432
Maybe there are a few listeners of the Fire Science Show who are computer scientists.
00:24:28.432 --> 00:24:30.728
So it's from one fire engineer to another.
00:24:33.202 --> 00:24:33.943
You know,
00:24:34.224 --> 00:24:37.133
API is Application Programming Interface.
00:24:37.133 --> 00:24:47.595
So you can use API keys to use the AI in your code, calling the large language model from the server.
00:24:47.595 --> 00:24:58.373
If you are using a commercial large language model, you can call it from their server, or if you are using an open-source model, you can call it from Hugging Face using the API.
00:24:59.160 --> 00:25:09.490
So an example would be: I would be writing my code, and instead of my code solving something, I could just write a piece of code:
00:25:09.490 --> 00:25:15.708
go ask this to ChatGPT and post me back the answer? More or less, yes, yes. Okay, good. What you need to do:
00:25:15.788 --> 00:25:17.491
you can even try it with OpenAI.
00:25:17.813 --> 00:25:35.729
You can go to their API platform, you can generate a key, and you can see that once you have that key, you can put it in many other user interfaces that allow you to use the API of ChatGPT or any other model, and then you can run it directly in this new user interface.
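Mechanically, "using a key" just means attaching it to each request. Here is a minimal sketch of what such a request looks like, using Python's standard library and OpenAI's chat-completions convention; the key is a placeholder, and the request is deliberately not sent, since a real call needs a valid key and bills per token:

```python
import json
import urllib.request

# Placeholder -- a real key comes from the provider's API platform page.
API_KEY = "sk-...your-key..."

body = json.dumps({
    "model": "gpt-4o-mini",   # example model name
    "messages": [{"role": "user",
                  "content": "Summarise this clause in one line."}],
}).encode()

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",   # the key authenticates you
    },
)
print(req.full_url)
# urllib.request.urlopen(req) would actually send it -- skipped here.
```

Any user interface that "accepts your API key" is doing exactly this under the hood, which is why one key works across many front-ends.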
00:25:35.729 --> 00:25:41.083
And the good news about that is that you don't need to have a subscription.
00:25:41.083 --> 00:25:42.125
You pay as you go.
00:25:42.125 --> 00:25:46.325
In fact, you can see how much every model costs in terms of tokens.
00:25:46.325 --> 00:25:48.428
That's the other keyword that we need to talk about.
00:25:48.428 --> 00:26:05.898
Everything runs on tokens: sets of characters. When you write a prompt, it's converted into a number of tokens that you send to the server, and the server comes back to you with an answer that is also measured in tokens, and you pay the bills as you go.
00:26:05.898 --> 00:26:19.958
So if you have a really big company and you don't want to have 300 subscriptions, if some of your staff don't use ChatGPT intensively,
00:26:19.958 --> 00:26:21.711
the API would probably be cheaper than using the tool via subscription.
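The pay-as-you-go arithmetic is simple enough to write down. The per-million-token prices below are hypothetical placeholders, not any provider's actual rates; check the provider's pricing page for real numbers:

```python
def api_cost_usd(prompt_tokens, completion_tokens,
                 in_price_per_m, out_price_per_m):
    """Pay-as-you-go bill for one call: input and output tokens are
    priced separately, per million tokens."""
    return (prompt_tokens * in_price_per_m
            + completion_tokens * out_price_per_m) / 1e6

# e.g. a 1,200-token prompt and an 800-token answer at hypothetical
# rates of $0.50 / 1M input tokens and $1.50 / 1M output tokens:
cost = api_cost_usd(1200, 800, 0.50, 1.50)
print(f"${cost:.6f}")   # $0.001800
```

A fraction of a cent per call is why occasional users often come out far ahead of a flat monthly subscription.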
00:26:22.153 --> 00:26:28.631
Okay, I mean it's a valid question for very generic use of AI, which happens seldom.
00:26:28.631 --> 00:26:32.391
You use it, let's say once a week or twice a week.
00:26:32.904 --> 00:26:33.907
Is it even worth it to
00:26:33.949 --> 00:26:35.051
go for? It's much cheaper.
00:26:35.313 --> 00:26:42.192
Okay, good. Guys, let's bring this discussion closer to fire safety engineering, because that's what I really want to know.
00:26:42.192 --> 00:26:46.942
I mean, it's fascinating to observe the AI revolution happening in front of our eyes.
00:26:46.942 --> 00:26:48.166
It's absolutely crazy.
00:26:48.166 --> 00:26:52.500
But well, let's get it closer to fire safety engineering.
00:26:52.500 --> 00:27:10.347
Before we started talking, I went to one of my favorite websites, willrobotstakemyjob.com, and this website actually has fire prevention and protection engineers in it, and it gives me minimal risk.
00:27:10.347 --> 00:27:19.106
So it tells me that there's a risk of 19% that AI will overtake fire safety engineers.
00:27:19.106 --> 00:27:27.789
It also tells me the average wage of a fire safety engineer is $103,000 a year, which is very reassuring to me.
00:27:27.789 --> 00:27:30.256
What's so hard about fire safety engineering?
00:27:30.256 --> 00:27:34.310
That we are at minimal risk of being taken over by AI?
00:27:34.811 --> 00:27:40.872
Now, this is only a partial answer, because we can accelerate a lot of the work of a fire protection engineer.
00:27:40.872 --> 00:27:46.345
That means a firm will have the capability to run more projects.
00:27:46.345 --> 00:27:59.315
That means that competition is going to be higher, and probably a company will need fewer engineers, but engineers capable of augmenting their work using AI.
00:27:59.315 --> 00:28:11.994
It's helping to speed up a lot of the work, to make more informed decisions, to have much more context when you make some decisions, but you still need the brain of a human to make the call.
00:28:12.486 --> 00:28:26.612
While I agree, I would rather say that, with the ever-increasing workload, it's just going to allow us to catch up rather than decrease the number of fire engineers needed, which is also a very positive observation. But still, you know, 19%.