Feb. 25, 2026

240 - Distressed by the AI stuff around

I’m not stressed by AI itself. I’m stressed by the insatiable greed of those who profit from it, even if it means sacrificing large parts of the population. I'm also stressed about how ruthlessly it can be abused to cause deliberate harm.

In this episode I'm not taking you into the world of fire science, but rather into my own thoughts on how the AI revolution influences our lives. And it affected me just last week - through a phishing attack on the IAFSS, and through a very disturbing piece of fiction I found on the Internet...

In the episode I comment on the targeted phishing attack against our association that used well-researched details and a cloned voice pulled from public audio. From there, we step into a stark forecast of near-term AI disruption in white-collar work. Agent teams can already write, review, and ship production code in loops, compressing time and cost while jolting stock prices across entire sectors the moment capabilities drop. 
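The write-review loop described above can be sketched in a few lines. This is purely illustrative: the "agents" below are stubs with hypothetical names, and a real system would call an LLM API (and burn tokens) at each step rather than return canned text.

```python
# Illustrative sketch of an agentic write-review loop, as described above.
# writer_agent and reviewer_agent are stubs; a real system would prompt an
# LLM at each step instead of returning hard-coded text.

def writer_agent(task: str, feedback: list[str]) -> str:
    """Stub writer: drafts code for the task, addressing any reviewer feedback."""
    code = f"# solution for: {task}\ndef solve():\n    return 42\n"
    if feedback:
        code += "# revised to address: " + "; ".join(feedback) + "\n"
    return code

def reviewer_agent(code: str) -> list[str]:
    """Stub reviewer: critiques the draft and returns a list of issues."""
    return [] if "def " in code else ["no function defined"]

def agent_loop(task: str, max_rounds: int = 5) -> str:
    """Writer and reviewer pass the draft back and forth until the review is clean."""
    feedback: list[str] = []
    draft = ""
    for _ in range(max_rounds):
        draft = writer_agent(task, feedback)
        feedback = reviewer_agent(draft)
        if not feedback:
            return draft  # reviewer is satisfied; ship it
    return draft  # give up after max_rounds and return the last draft

print(agent_loop("analyze S&P 500 data"))
```

Real agent teams add more roles (critic, optimizer, integrator/tester) in the same pattern, each role being another model call in the loop.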

Then we get specific about our field. Some tasks in fire safety are ripe for automation—code interpretation, routine calculations, device placement, and documentation—where speed and consistency help. But holistic fire strategy is contextual and slow to validate, with scarce, standardized case data and long feedback loops. Buildings are messy, multidisciplinary systems; that friction is a temporary moat against full automation. The larger risk may be macroeconomic: if AI compresses demand and margins across white-collar industries, construction cools, and safety work gets squeezed. Paradoxically, low digitalization in construction buys time, making it harder to train and deploy one-size-fits-all models.

I'm still, to a large extent, positive that Fire Safety Engineering won't be directly disrupted at the same scale as Software Engineering was, but as part of a larger ecosystem we won't be untouched either... I hope the version of the future that plays out is more optimistic than the one I got worried about.

Read the Citrini piece here, if you have not yet: https://www.citriniresearch.com/p/2028gic

----
The Fire Science Show is produced by Fire Science Media in collaboration with OFR Consultants. Thank you to the podcast sponsor for their continuous support of our mission.

00:00 - Setting Expectations And Context

04:14 - Partnership And Show Milestones

05:50 - Agentic AI And MoldBots

07:19 - The Phishing Attack Unfolds

10:43 - Voice Cloning Crosses A Line

13:06 - Vigilance And The New Scam Playbook

16:22 - The “2028 Global Intelligence Crisis” Report

20:12 - How Agent Teams Are Rewriting Software Work

24:05 - Economic Shockwaves And Market Disruption

27:16 - Abundance Dreams Vs Human Incentives

29:28 - Could AI Replace Fire Safety Engineers

33:09 - Why Built Environment Is Harder To Automate

WEBVTT

00:00:00.160 --> 00:00:02.399
Hello everybody, welcome to the Fire Science Show.

00:00:02.640 --> 00:00:08.160
Today we will be talking about AI, but this is not your normal fire science show episode.

00:00:08.240 --> 00:00:10.000
There's no fire science in it.

00:00:10.080 --> 00:00:14.320
It's not a scientific record of research or anything.

00:00:14.480 --> 00:00:24.079
It's just some of my thoughts about stuff that happened within the AI world in the last week, and it has genuinely frightened me.

00:00:24.399 --> 00:00:30.320
So I thought, well, I feel an urge to record this and share those thoughts with you.

00:00:30.480 --> 00:00:35.600
I thought about such an episode already when MoldBots came out, but back then it was just a gimmick.

00:00:35.759 --> 00:00:40.399
I thought there was no point in doing this, but last week, wow, that was crazy.

00:00:40.560 --> 00:00:46.079
If this is your first Fire Science Show episode, that's probably a poor choice to start your journey with the podcast.

00:00:46.240 --> 00:00:51.359
There are 239 episodes of real science in the podcast, and you'll find them below.

00:00:51.439 --> 00:00:56.320
And if you're a long-term listener, I think you may actually enjoy this.

00:00:56.560 --> 00:01:01.439
I felt the need to talk about AI because I feel somehow connected to the topic.

00:01:01.600 --> 00:01:17.840
Perhaps I'm not a creator, I'm not a leader, I'm not someone driving the AI-fication of the world, but I'm at least an active observer, or maybe an early adopter, or at least people put me in such a bracket, if I may.

00:01:18.000 --> 00:01:22.959
In the Fire Science show, if you look at the podcast, in the first 10 episodes we had two AI episodes.

00:01:23.120 --> 00:01:27.680
Episode 20-something was one with MZ Naser about how to enter the world of AI.

00:01:27.840 --> 00:01:32.719
That was way, way before chatbots, when it was so much more difficult.

00:01:32.799 --> 00:01:41.519
And actually, what Naser said back then was that he's just a few years ahead of everyone else and he's just trying things.

00:01:41.599 --> 00:01:48.959
And today, look, five years later, he's the undisputed industry leader in AI, not just in fire safety engineering, but in civil engineering.

00:01:49.120 --> 00:01:53.120
Actually, I have good news, because he's coming back to the podcast very, very soon.

00:01:53.200 --> 00:01:57.840
So very soon we'll have a scientific view on AI, not just my rambling.

00:01:57.920 --> 00:01:59.840
But let's continue with the rambling.

00:02:00.000 --> 00:02:02.239
We showed you chatbots very early.

00:02:02.319 --> 00:02:07.040
Just after they came out, we were discussing how chatbots could be used for the benefit of fire safety engineering.

00:02:07.120 --> 00:02:13.120
We've discussed the implementation of other AI tools, and we've discussed how to set up your own LLM models.

00:02:13.280 --> 00:02:19.199
I've presented my own view on where it's heading, where it's gonna be useful, and where it's not.

00:02:19.280 --> 00:02:28.400
So a lot of stuff has been shown about AI in the Fire Science Show, and by myself in different places where I show up.

00:02:28.560 --> 00:02:31.680
And I'm usually extremely enthusiastic about it.

00:02:31.759 --> 00:02:38.400
Or if not enthusiastic, I'm realistic that it's gonna be used for good and that it gives us opportunities.

00:02:38.479 --> 00:02:40.400
That's what I see: opportunities.

00:02:40.560 --> 00:02:44.000
And this week, two things happened that really frightened me.

00:02:44.080 --> 00:02:46.080
Like, my view has shifted.

00:02:46.240 --> 00:02:48.080
Perhaps I should start farming.

00:02:48.159 --> 00:02:50.560
Maybe the Bronze Age was not that bad, actually.

00:02:50.800 --> 00:02:54.400
Like a real 180-degree perception shift.

00:02:54.560 --> 00:02:58.800
I've recovered, I'm a bit better now, but this hit me quite hard.

00:02:58.960 --> 00:03:10.000
So one is a phishing attack on the IAFSS, quite a big attack, to be honest, and it involved me to a level way, way, way beyond my comfort zone.

00:03:10.240 --> 00:03:19.360
And the second thing was a report, an online report, essay, novel, I don't know, a work of fiction by Citrini.

00:03:19.599 --> 00:03:25.439
I mean, it's a work of fiction, but so is 1984 by Orwell, and look at the world today.

00:03:25.520 --> 00:03:32.879
So yeah, sometimes those things materialize, and if that Citrini report materializes, that would be a hell of a shock for the economy.

00:03:33.039 --> 00:03:52.240
It's a report on the disruption of the economy by AI, quite catastrophic in its view, and it prompted me to think again about what AI means for fire safety engineering, what it means for the broader economy, and, from that broader economic point of view, what it means for fire safety engineering.

00:03:52.400 --> 00:03:56.400
So, those two things I would love to share with you in this podcast episode.

00:03:56.479 --> 00:04:04.240
If you've stayed with me this long, there's a good chance that what shows up after the intro will be interesting to you.

00:04:04.400 --> 00:04:10.240
So if you would like to hear my views on those topics, stay with me, let's spin the intro and jump into the episode.

00:04:14.719 --> 00:04:16.639
Welcome to the Fire Science Show.

00:04:16.800 --> 00:04:20.240
My name is Wojciech Węgrzyński, and I will be your host.

00:04:33.759 --> 00:04:38.319
The Fire Science Show podcast is brought to you in partnership with OFR Consultants.

00:04:38.480 --> 00:04:48.240
OFR is the UK's leading independent multi-award-winning fire engineering consultancy with a reputation for delivering innovative safety-driven solutions.

00:04:48.480 --> 00:04:57.439
We've been on this journey together for three years so far, and here begins the fourth year of collaboration between the Fire Science Show and OFR.

00:04:57.600 --> 00:05:14.800
So far, we've brought you more than 150 episodes, which translate into nearly 150 hours of educational content, free and accessible all over the planet without any paywalls, advertisements, or hidden agendas.

00:05:14.879 --> 00:05:21.680
This makes me very proud and I am super thankful to OFR for this long-lasting partnership.

00:05:21.759 --> 00:05:28.959
I'm extremely happy that we've just started year 4, and I hope there will be many more years to come.

00:05:29.199 --> 00:05:37.680
So, big thanks, OFR, for your support of the Fire Science Show and of the fire safety community at large, which we can deliver together.

00:05:37.839 --> 00:05:45.120
And for you, the listener, if you would like to learn more or perhaps even become a part of OFR, they always have opportunities awaiting.

00:05:45.199 --> 00:05:47.839
Check their website at OFRconsultants.com.

00:05:47.920 --> 00:05:50.000
And now let's head back to the episode.

00:05:50.319 --> 00:05:53.040
So in the intro, I mentioned the MoldBot revolution.

00:05:53.120 --> 00:05:54.639
I'm not sure if you followed that one.

00:05:54.720 --> 00:06:06.879
Those revolutions come and go very quickly these days, but basically, one person created an AI agent tool that you run on your own computer, a Mac Mini, actually.

00:06:07.040 --> 00:06:16.160
And basically what it does is act like a self-guiding agent that just lives in that computer and does tasks that you would do, kind of like a person.

00:06:16.240 --> 00:06:26.399
And what was interesting is that people were giving them access to sensitive information, you know, like credit cards, etc. Ridiculous stuff has happened with that.

00:06:26.639 --> 00:06:34.240
Another thing was that someone created an online forum for those MoldBots to chat with each other, and that was also quite a thing.

00:06:34.319 --> 00:06:45.439
But then I thought, yeah, okay, that's an AI gimmick, that's, you know, just a funny thing, just an interesting thing, but not really a world-shifting thing.

00:06:45.519 --> 00:06:49.680
Interestingly, that happened like literally two or three weeks ago.

00:06:49.759 --> 00:06:50.879
It's really recent.

00:06:50.959 --> 00:07:06.639
And it was an open-source project, but the person behind it has apparently been acquired by OpenAI for like one billion dollars or something, and they're just developing it now, and it caused a Mac Mini shortage all over the world.

00:07:06.720 --> 00:07:12.480
So yeah, that's a ripple effect in the economy, which is kind of relevant to what has scared me this week.

00:07:12.639 --> 00:07:19.519
So I said I was frightened, generally frightened, and two things happened that caused that.

00:07:19.839 --> 00:07:21.199
I'll give you a story.

00:07:21.360 --> 00:07:23.680
The first one: it was Friday morning.

00:07:23.839 --> 00:07:34.959
I woke up and I had an early morning meeting, and as I moved into the meeting, I saw my mail inbox flooded with messages.

00:07:35.120 --> 00:08:01.040
Well, "flooded" is probably a big word, but I saw at least five different messages from my colleagues from the IAFSS members advisory council board and the trustees of the IAFSS, so let's say the leadership of the organization, and those emails ranged from "hey Wojciech, there has been a scam attempt with your name in it" to "hey, I'm really sorry for your loss."

00:08:01.120 --> 00:08:03.680
"If there's anything I can help you with, let me know."

00:08:03.759 --> 00:08:05.360
And I'm like, what the hell is happening?

00:08:05.600 --> 00:08:10.160
Apparently, the IAFSS had been the target of a large-scale phishing attack.

00:08:10.319 --> 00:08:21.759
Not the first time, it has happened before, and it's not particularly difficult, because all we do is public, so the names of the advisors and trustees are well known.

00:08:22.000 --> 00:08:25.920
The relationships between them are well known, who is who.

00:08:26.079 --> 00:08:33.279
There's a lot of public information that allows someone to track us and, you know, link us together.

00:08:33.440 --> 00:08:39.919
And basically, what was disturbing is that the phishing attempt was surprisingly well crafted.

00:08:40.080 --> 00:08:44.879
There was someone impersonating Professor Naian Liu, who is the president of the IAFSS.

00:08:45.200 --> 00:08:59.360
And in that email, the impersonator was letting the people who received it know that I am in distress, that I am flying to a country, and there was the name of the country the particular recipient was from.

00:08:59.519 --> 00:09:05.919
So people from Japan got an email that I'm heading to Tokyo, and people from Australia got an email that I'm heading to Brisbane.

00:09:06.159 --> 00:09:19.120
And in this email, I'm in distress, I'm stuck in the Philippines or something, stopped at the border, and I'm heading to a funeral, and I really need help, and he cannot help me because he's flying to Canada.

00:09:19.360 --> 00:09:27.840
And could the person who was the target of the attack please call, and there were phone numbers in the Philippines, and help me.

00:09:28.080 --> 00:09:40.240
It was a decently crafted email, not your usual Nigerian prince scam thing; it included a lot of genuine information, like someone injected a lot of genuine information into it.

00:09:40.480 --> 00:09:42.240
And this was targeted at a specific person.

00:09:42.320 --> 00:09:51.759
It was not like a generic message sent to everyone; it had information specific to that particular person, like the country, the direction, etc.

00:09:52.080 --> 00:09:56.799
It was a frighteningly well-crafted message, I would say.

00:09:57.039 --> 00:10:06.480
And when I first saw chatbots, I kid you not, my first thought was: my god, scammers are gonna get good.

00:10:06.720 --> 00:10:08.639
And this felt like that.

00:10:08.799 --> 00:10:17.360
This phishing attack felt like a much bigger improvement over the scamming attempts that I've seen in the past.

00:10:17.519 --> 00:10:19.360
But this is not enough to frighten me.

00:10:19.440 --> 00:10:21.600
I mean, it's just a phishing attack.

00:10:21.759 --> 00:10:31.679
I replied to the IAFSS that we were under attack, I issued a warning, we sent an email to the membership, and I thought: case closed.

00:10:31.840 --> 00:10:37.519
But then I was chatting with some colleagues, because more people were reaching out to me, and actually two people said this.

00:10:37.600 --> 00:10:40.879
One person told me, like, Wojciech, are you sure you are safe?

00:10:40.960 --> 00:10:43.120
Because they are now calling me.

00:10:43.200 --> 00:10:46.240
And I'm like, no, I'm sure I'm safe, don't answer.

00:10:46.399 --> 00:10:48.159
And then another person said, You know what?

00:10:48.320 --> 00:10:57.120
I just got a call from the Philippines, and it was in your voice, and you were very convincing in that call. And that's when I broke.

00:10:57.279 --> 00:10:59.679
Like, seriously, that is when I broke.

00:10:59.759 --> 00:11:11.759
I joked with some people that, wow, this phishing attempt is really good, but imagine if they could clone the voice or use AI, you know, to enhance the scam, etc.

00:11:12.000 --> 00:11:19.840
And I was thinking, yeah, that's a hypothetical scenario three years out, but no, that was the scenario in play right now.

00:11:20.240 --> 00:11:24.720
Like they did that, they cloned my voice, they started playing those messages to people.

00:11:24.960 --> 00:11:28.159
And wow, this was really bad.

00:11:28.240 --> 00:11:50.399
I mean, I was in no way connected to that attack, but I would somehow feel bad if someone lost money or was a victim of a larger attack because of their relationship with me, because they trust me, because they like me, because they felt they would like to help me.

00:11:50.480 --> 00:11:51.759
I mean that's horrible.

00:11:51.919 --> 00:11:57.440
Like that is truly, truly horrible, and this made me sick to my stomach.

00:11:57.840 --> 00:12:00.080
And I was not the only one.

00:12:00.240 --> 00:12:09.840
Later in the day, Europeans started receiving messages with my friend Xinyan Huang in the same kind of sequence.

00:12:10.000 --> 00:12:20.480
Not sure if they cloned Xinyan's voice as well, but it appears it has been part of a larger attempt targeting our small association.

00:12:21.279 --> 00:12:24.639
But man, this is ridiculous.

00:12:25.039 --> 00:12:31.440
And it is so, so easy to make an artificial voice.

00:12:31.759 --> 00:12:36.000
It is very easy to create an artificial voice, one that sounds quite real.

00:12:36.320 --> 00:12:40.000
This is in fact an AI-generated sample made straight from text.

00:12:40.240 --> 00:12:44.399
This one was not even good, but it took me like 10 seconds to create.

00:12:44.639 --> 00:12:46.159
That's how easy it is.

00:12:46.399 --> 00:12:56.000
And you know, my job speaking on a podcast makes me especially vulnerable to those identity theft attempts.

00:12:56.159 --> 00:13:04.000
So one thing that I really, really, really need you to know is that if I ever call you and ask for money, don't give it to them.

00:13:04.240 --> 00:13:06.080
Man, that's gonna backfire one day.

00:13:06.159 --> 00:13:13.600
But if I ever call you and ask you for money, please, please, for the love of God, do not give me more money.

00:13:13.759 --> 00:13:15.039
Please be vigilant.

00:13:15.360 --> 00:13:28.720
The way this is accelerating, the way this is spinning up, connects, you know, to the MoldBots, connects to the agentic AI, because today it's not a single person who has to sit down and send those messages.

00:13:28.879 --> 00:13:39.759
No, you can set up a friggin' Mac Mini and tell it to scam 1,000 people, and it will create an AI-generated voice to convince them and just reach out to them.

00:13:39.919 --> 00:13:45.120
That's how easy it is, and this will unfortunately get better and better with time.

00:13:45.360 --> 00:13:49.039
It's already frighteningly good, and it will be better.

00:13:49.200 --> 00:13:51.039
I'm really speechless about that.

00:13:51.279 --> 00:14:02.080
Please, please stay safe online, and, you know, we all have to be at a higher level of vigilance compared to just a few years ago.

00:14:02.320 --> 00:14:12.320
So yeah, that was my Friday, and it set the mood for the entire weekend, as you can imagine; I was dealing with the fallout of this phishing attack for the rest of the day.

00:14:12.480 --> 00:14:16.720
And then I think on Saturday I had a pretty disturbing read.

00:14:16.879 --> 00:14:23.039
So I found an online piece, an online paper called the 2028 Global Intelligence Crisis.

00:14:23.120 --> 00:14:34.799
It was written by someone calling themselves Citrini Research, apparently some sort of equity investing firm that provides financial research and commentary.

00:14:35.039 --> 00:14:52.879
And it's obviously a work of fiction, because basically what they provide is a report written in 2028, a year in which humanity is in a crisis, and they trace back how the crisis happened.

00:14:53.120 --> 00:14:55.840
That's the way the story is built.

00:14:55.919 --> 00:14:58.080
It's linked in the show notes if you would like to read it.

00:14:58.240 --> 00:15:13.759
I think it's quite a decent read, and while I appreciate the fact that it's a work of fiction, as I said in the intro, Orwell's 1984 is also a work of fiction, and yet in many places of the world it is exactly unraveling.

00:15:14.159 --> 00:15:24.000
A lot of Netflix Black Mirror episodes are works of fiction, and disturbingly, a lot of those are coming to life in the real world in real technologies.

00:15:24.320 --> 00:15:49.600
So while I appreciate this is a work of fiction, it is disturbing, because I'm not saying this is something that will happen, but I cannot exclude the possibility of their predictions coming true, maybe not at full scale, because I still think there are bottlenecks, which I will discuss later, but the direction is probably quite realistic.

00:15:49.840 --> 00:16:00.480
So why has this particular report frightened me, and why did I start reviewing my position on AI after reading it?

00:16:00.720 --> 00:16:08.159
The thing is, a year and a half ago I was giving you some predictions on the future of AI.

00:16:08.399 --> 00:16:10.879
Usually those things don't really work out well.

00:16:11.039 --> 00:16:15.360
And predicting the future is not that easy.

00:16:15.600 --> 00:16:23.200
If you had asked me a year and a half ago about my views on using AI for programming, I would have said that it's pretty damn good.

00:16:23.279 --> 00:16:24.399
It's pretty damn useful.

00:16:24.480 --> 00:16:25.759
I'm using it all the time.

00:16:25.919 --> 00:16:39.120
Since chatbots came out, I have been using AI in my Python programming for my scientific research and scientific work, and I've always felt this tool is really, really strong and really, really useful.

00:16:39.360 --> 00:16:51.039
But I'm quite primitive in the way I use it, because if you look at the environment right now, the whole field of software engineering is currently being replaced by AI.

00:16:51.200 --> 00:17:01.279
It's not that a person goes to a chatbot and asks, "Dear chatbot, please write me software for financial analysis of the S&P 500" or whatever.

00:17:01.519 --> 00:17:02.480
It doesn't work like that.

00:17:02.559 --> 00:17:05.200
But people today set up agents.

00:17:05.519 --> 00:17:09.200
An agent is like an AI tool that has kind of a life of its own.

00:17:09.359 --> 00:17:31.839
You give it, you know, a set of rules, you give it a goal, a task, a bigger task; like, you tell it, "write me software to analyze the S&P 500 live," and this agent is self-guided, so it knows what the task is and assigns intermediate goals to itself, steps of the process.

00:17:31.920 --> 00:17:46.559
It figures out the process and then does it step by step, and for those steps it crafts its own prompts and uses LLMs to build the thing up chunk by chunk, and you don't really supervise it; it's just doing it on its own.

00:17:46.640 --> 00:17:53.519
It can take an hour, it can take five hours, but eventually it comes back to you with a working thing.

00:17:53.680 --> 00:17:57.519
And this is not yet enough to replace a whole field of software engineering, of course.

00:17:57.599 --> 00:18:00.640
It's a more complicated gimmick, but still a gimmick.

00:18:00.799 --> 00:18:29.039
But now people have found that you can technically set one agent to create the code, then a second agent to review the code, a third agent to criticize the code, a fourth agent to optimize the code, a fifth agent to integrate the code and test it, and then, you know, you're running an entire team of software engineers on one task, and they're talking to each other and working on it without your supervision.

00:18:29.200 --> 00:18:33.519
All you did was send them a prompt on what they're supposed to deliver.

00:18:33.759 --> 00:18:51.839
It costs quite a lot because they're burning through tokens, and tokens are the currency of using AI through interfaces, so it's quite costly to run, but still, that's a small fraction of the cost you would have if you hired real people to achieve the same goal.

00:18:52.000 --> 00:18:58.559
And largely the outcomes are really good, like human-programmer-level outcomes.

00:18:58.799 --> 00:19:07.759
Now, even the AI companies have said they're moving towards this type of engineering rather than having old-school humans writing code.

00:19:08.000 --> 00:19:13.920
And this kind of disruption is changing the entire field of software engineering.

00:19:14.240 --> 00:19:20.640
A similar thing, maybe not on that scale, has happened to artists and graphic designers, etc.

00:19:21.039 --> 00:19:24.079
It's so easy to create good graphics today with AI.

00:19:24.319 --> 00:19:35.359
And today, coding is strongly, strongly influenced by this flow of agentic AI, which does the jobs of those people.

00:19:35.519 --> 00:19:37.519
And this goes into the Citrini report.

00:19:37.599 --> 00:19:41.519
This is basically the start of that Citrini report, because you see two things.

00:19:41.680 --> 00:19:47.519
One is that you can automate jobs and replace the people doing those jobs; that's one thing.

00:19:47.599 --> 00:19:51.680
But the second thing is the economic impact that replacement has.

00:19:51.839 --> 00:20:07.920
So basically, when Anthropic, which is the company behind Claude, an AI agent, drops a new functionality of their software, suddenly there are drops on the stock market for companies which will be disrupted.

00:20:08.160 --> 00:20:15.039
Literally, like yesterday, IBM dropped because Anthropic mentioned they can program in COBOL.

00:20:15.359 --> 00:20:20.480
So it really has two avenues through which it's disrupting the economy.

00:20:20.640 --> 00:20:23.599
One is directly, one is through replacing opportunities.

00:20:23.920 --> 00:20:31.440
And it's quite easy to replace a human workforce with AI when you kind of specialize it for the task.

00:20:31.599 --> 00:20:36.640
And once this happens, it disrupts the space in which it happened.

00:20:36.799 --> 00:21:00.240
What I mean by that is: say there's an industry that charges you one thousand dollars for a service and hires two hundred thousand people, each of them earning a hundred thousand bucks a year, and suddenly you create an agentic AI tool that can do the same job at almost the same level, but for a small fraction of the cost.

00:21:00.559 --> 00:21:17.680
One, suddenly those two hundred thousand people are perhaps going to be unemployed or will have to significantly change their lives; and two, I don't think you're able to charge a thousand bucks anymore, because now you're competing with others who also do it for a fraction of the cost.

00:21:17.920 --> 00:21:22.079
So the cost of the service falls drastically.

00:21:22.319 --> 00:21:34.720
And the whole Citrini report is about how this is actually what's happening right now to some services, and when it happens to one service, that's pretty bad for that service.

00:21:35.039 --> 00:21:43.200
But imagine it happens to the entire economy, like the white-collar economy: all the jobs that could be automated become automated.

00:21:43.599 --> 00:21:51.519
We are talking about the majority of the workforce out there falling victim to disruption.

00:21:51.680 --> 00:21:58.960
We're talking about an insane number of companies whose revenue streams absolutely shift, and suddenly this revenue is not going to them.

00:21:59.039 --> 00:22:02.880
I mean, the services become cheaper, the money is going elsewhere.

00:22:03.200 --> 00:22:13.599
This is a large-scale economic disruption that can pull banking and housing and other sectors into collapse, one after another.

00:22:13.759 --> 00:22:18.160
A whole systemic change in the way the world lives.

00:22:18.400 --> 00:22:24.880
And you know, Elon was speaking about such things some time ago.

00:22:25.039 --> 00:22:35.039
Elon Musk has said that it's pointless to save for your retirement because we will be living in a world of abundance.

00:22:35.200 --> 00:22:36.400
That's what he said.

00:22:36.640 --> 00:22:38.079
A world of abundance.

00:22:38.240 --> 00:22:51.680
And his logic was that everything is gonna be so cheap and so well developed through AI tools and AI support that there will be a very limited need for human jobs and human work.

00:22:51.839 --> 00:23:08.960
And actually, the wealth created by this AI revolution will be distributed among the human population as some sort of universal income, and basically you're gonna have everything you could ever need and want, like a utopian world of abundance.

00:23:09.279 --> 00:23:17.359
And yeah, that's kind of cute, but I like observing the world around me, and I don't think that's how the world works.

00:23:17.599 --> 00:23:23.440
Charlie Munger said that the world is not even driven by greed; it's driven by envy.

00:23:23.839 --> 00:23:34.960
And what I see is, you know, those optimizations don't lead to creating a little utopia for software engineers; they don't lead to creating little utopias for anyone.

00:23:35.200 --> 00:23:41.839
It's about directing revenue streams to smaller and smaller groups of people and profiting.

00:23:42.079 --> 00:23:46.160
And those people who lose their jobs, that's just, you know, a cost.

00:23:46.400 --> 00:23:49.359
They're the victims of the process.

00:23:49.519 --> 00:23:50.480
Who cares about them?

00:23:50.640 --> 00:23:54.240
Now, if this happens to the entire economy, we're kind of screwed.

00:23:54.400 --> 00:23:56.960
This kind of frightened me in that Citrini report.

00:23:57.119 --> 00:23:58.000
Is it fake?

00:23:58.160 --> 00:24:00.000
Is it like unrealistic?

00:24:00.319 --> 00:24:02.720
Is it hyper-optimistic in its assumptions?

00:24:02.960 --> 00:24:07.119
Probably it is; it's quite strong, and I think there are bottlenecks.

00:24:07.440 --> 00:24:09.920
One really strong bottleneck is energy.

00:24:10.079 --> 00:24:24.720
You need a lot of power to support those AI tools and to train them in new fields, and if everyone is using them at an increasingly large scale, we're basically gonna run out of electricity to power that.

00:24:24.880 --> 00:24:30.559
So there's a hard cap on how much we can automate with AI, based on electric power.

00:24:30.720 --> 00:24:40.160
And I would assume some legislation, perhaps some taxation, will come up to soften the blow and perhaps move the world a little bit towards that general abundance.

00:24:40.400 --> 00:24:49.839
But indeed, this scenario is something that could realistically happen, that could realistically unfold.

00:24:50.079 --> 00:24:53.279
So why am I talking about this, why is this relevant?

00:24:53.440 --> 00:25:13.119
Because immediately after I read this paper, and had a chain of thoughts about the future, AI, and how it is disrupting the world we know, I started thinking: okay, if software engineers have largely been replaced by AI, can fire safety engineers be replaced by AI at large?

00:25:13.279 --> 00:25:14.160
That was my thought.

00:25:14.240 --> 00:25:15.680
That was my immediate concern.

00:25:15.920 --> 00:25:17.279
How about my job?

00:25:17.599 --> 00:25:21.519
Am I at risk of being replaced like that?

00:25:22.079 --> 00:25:34.079
And this chain of thought led me to, let's say, an optimistic view: I don't think it's very realistic in the near term, or in any foreseeable time span.

00:25:34.319 --> 00:25:36.319
And there are a few reasons.

00:25:36.559 --> 00:25:40.400
One, we're kind of unknown to the larger society.

00:25:40.720 --> 00:25:46.400
Show me a list of jobs that counts fire safety engineer as a job.

00:25:46.480 --> 00:25:50.000
We're hidden in the shadows; society doesn't know about us.

00:25:50.160 --> 00:25:57.359
So we're unlikely to be an obvious target for automation, because we don't really exist in the heads of those people.

00:25:57.440 --> 00:26:01.759
So that's what gives us a little bit of safety.

00:26:01.839 --> 00:26:04.000
This not being exposed.

00:26:04.240 --> 00:26:18.799
But jokes aside, in terms of software engineering: one, you had an abundance of resources online, like Stack Overflow, where people would creatively solve each other's programming problems.

00:26:18.960 --> 00:26:22.480
That's a gold mine to train AI models on.

00:26:22.799 --> 00:26:27.839
We had an abundance of code, a lot of software that works.

00:26:27.920 --> 00:26:34.559
We had it all available in cyberspace, where you could steal it.

00:26:34.799 --> 00:26:43.279
Let's be clear, those AI models are built on theft, so you could easily take that code and train your models on it.

00:26:43.440 --> 00:26:53.839
And once you did that, you got a model, and then you could run rounds of improvement on it, because in software engineering you can create software and you can test it.

00:26:54.079 --> 00:27:12.400
What works you keep, what doesn't you drop, and in a kind of evolutionary way you're able to optimize the product, ending up with a really well-working one in a very short span of time, because those iterations come very quickly.
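
That "keep what works, drop what doesn't" loop can be sketched as a toy hill-climbing example. This is purely illustrative, not how any real AI lab trains models; the target vector, the scoring function, and the mutation step are all invented for the sketch:

```python
import random

# Toy illustration of the "keep what works, drop what doesn't" loop.
# The TARGET, score(), and mutate() below are invented for this sketch.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical "correct product"

def score(candidate):
    # The "test suite": count positions that already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # A small random change to try in the next iteration.
    i = random.randrange(len(candidate))
    flipped = candidate[:]
    flipped[i] = 1 - flipped[i]
    return flipped

random.seed(42)
best = [random.randint(0, 1) for _ in TARGET]   # random starting point
for _ in range(2000):                           # iterations are fast and cheap
    challenger = mutate(best)
    if score(challenger) > score(best):         # keep what works...
        best = challenger                       # ...drop what doesn't
    if score(best) == len(TARGET):
        break

print(score(best), "out of", len(TARGET), "tests pass")
```

The point of the sketch is the speed argument from the episode: because each "build and test" round here is nearly free, thousands of iterations finish instantly, whereas a building project gives you one feedback cycle in years.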

00:27:12.640 --> 00:27:18.480
If you think about fire safety engineering, there's not that much of that particular problem-solving material.

00:27:18.640 --> 00:27:37.359
There is a lot of knowledge, a lot of information about how stuff works, and a very sound and solid theoretical foundation of the discipline you can draw on, but there is not much case-study problem-solving material easily available online.

00:27:37.680 --> 00:27:41.440
So there's not such an abundance as there was for software engineers.

00:27:41.599 --> 00:27:45.359
That makes training your initial models so much harder.

00:27:45.599 --> 00:27:50.079
And the second thing is that the iterative, recursive loops are not easy.

00:27:50.319 --> 00:27:52.400
We're talking about buildings, we're talking about projects.

00:27:52.480 --> 00:27:58.079
You cannot implement something and expect that the next day you'll have outcomes, and then improve and improve and improve.

00:27:58.160 --> 00:28:03.359
It takes years to get buildings approved, accepted, built, tested.

00:28:03.519 --> 00:28:07.599
So those feedback loops are also much slower.

00:28:07.839 --> 00:28:11.200
So I think there are mundane tasks that could be automated.

00:28:11.359 --> 00:28:19.359
I think the ability to read codes, like the whole code-consultancy work, is perhaps endangered.

00:28:19.519 --> 00:28:34.160
There are design tasks that could be automated, like the distribution of piping or cabling, or even jobs like placing sensors and sprinklers; that could potentially be automated.

00:28:34.720 --> 00:28:38.880
But at large, the fire strategies, the fire scenarios.

00:28:38.960 --> 00:28:48.240
I'm not sure it's possible to easily automate those, because of how much they are connected to the project.

00:28:48.400 --> 00:29:00.160
So despite this kind of interesting outcome for software engineers, I still think fire safety engineers are to some extent safe.

00:29:00.240 --> 00:29:16.160
They're definitely safer than many, many other jobs that rank high on job lists, that are very visible, and that have both the resources to train models on and solid economic incentives to work on replacing them.

00:29:16.319 --> 00:29:19.359
But that's just one side of the story.

00:29:19.680 --> 00:29:29.680
The other side of the Citrini report story is this: what happens to fire safety engineering if there is truly a massive global economic crisis?

00:29:29.839 --> 00:29:32.799
How does fire safety engineering work then?

00:29:32.960 --> 00:29:38.480
I know some people who worked through 2008, the last major economic crisis.

00:29:38.720 --> 00:29:39.680
It was tough.

00:29:39.920 --> 00:29:47.519
In Poland, we had some smaller crises around 2012-13; there were not many jobs back then.

00:29:47.680 --> 00:29:48.559
It was tough.

00:29:48.799 --> 00:30:04.799
I think if this scenario to some extent materializes and the economy is generally in a bad place, fire safety engineering is not gonna be in a great place either, because the construction world will be a victim of the economic circumstances around us.

00:30:04.960 --> 00:30:16.160
So even if we don't get automated, this may be the thing that causes us some suffering and makes our lives harder.

00:30:16.319 --> 00:30:18.880
I'm a little worried it's gonna unfold like that.

00:30:19.200 --> 00:30:39.920
One more thing about civil engineering, or built-environment engineering at large, and AI: I remember seeing a report about the state of digitalization in the EU across 20-something key areas of the economy.

00:30:40.160 --> 00:30:54.240
And actually, I think construction was either last or among the last, with a very, very low digitalization score; it was worse than agriculture.

00:30:54.400 --> 00:30:59.920
And back then I thought, oh man, we're lagging so far behind, we're so bad at digitalization.

00:31:00.079 --> 00:31:07.279
And today I'm like, oh hell yeah, that's good, because if we're not that digital, it's not that easy to train on us.

00:31:07.440 --> 00:31:16.400
So perhaps our reluctance towards BIM and digitalization is actually something that saves us in the end.

00:31:16.559 --> 00:31:21.359
I'm not calling for dropping the attempts to digitalize civil engineering.

00:31:21.440 --> 00:31:40.400
I still see massive opportunities in doing that in a good way, but it's kind of a lucky circumstance that our inability to transition into the digital age as an entire industry worked out for us in the end, as the AI revolution unfolds.

00:31:40.960 --> 00:31:44.480
So yeah, this report really stressed me out.

00:31:44.559 --> 00:31:50.240
I spent like half the weekend thinking about this and creating scenarios in my head.

00:31:50.400 --> 00:32:03.119
I have children, and I want them to live in a world that's great for them, not one where people compete for the most basic jobs because all the well-paying jobs have already been automated.

00:32:03.359 --> 00:32:06.160
So yeah, it was kind of stressful.

00:32:06.559 --> 00:32:08.079
Probably I'm overreacting.

00:32:08.160 --> 00:32:22.319
Probably it's not as bad as the report says, but I still think it was a valuable exercise to read through it and think about what the future will look like.

00:32:22.480 --> 00:32:24.079
I wonder if you've read it.

00:32:24.319 --> 00:32:26.880
I wonder what your thoughts on it are.

00:32:27.039 --> 00:32:29.440
I just really wanted to share this with you.

00:32:29.599 --> 00:32:39.920
Those two things that happened over the weekend caused me to view the world of AI in a slightly different way.

00:32:40.400 --> 00:32:48.480
I still see the opportunities, but I'm a little more frightened about the consequences of what's going on around us.

00:32:48.720 --> 00:32:51.279
That would be it for this podcast episode.

00:32:51.359 --> 00:32:57.119
I think I've achieved my goal of rambling and speaking out about the stuff that distressed me.

00:32:57.279 --> 00:32:58.559
Please stay safe online.

00:32:58.720 --> 00:33:00.720
Please do not send me any money.

00:33:00.880 --> 00:33:02.160
Really, like seriously, don't.

00:33:02.480 --> 00:33:15.119
If I reach out to you, especially from a suspicious number, especially from the Philippines, please do not answer, and do not send me anything under any circumstances.

00:33:15.519 --> 00:33:32.000
And stay safe online. As promised, very soon I'll have a proper AI revolution episode about the opportunities and the possibilities, with the person who is the undisputed leader of the Adam Zenoser.

00:33:32.319 --> 00:33:42.559
So if you lacked fire science, or science in general, in this podcast episode, I'll make sure you get a double dose in the one where we do it properly.

00:33:42.720 --> 00:33:44.079
Thanks for being here with me.

00:33:44.160 --> 00:33:50.799
It's really great to have people you can speak to and share stuff like that with.

00:33:50.960 --> 00:33:56.480
And I hope one day we meet over a beer and can take this conversation further.

00:33:56.720 --> 00:33:58.240
Thanks for being here with me today.

00:33:58.400 --> 00:33:58.799
Cheers.

00:33:58.960 --> 00:33:59.279
Bye.