Major global IT outage linked to Crowdstrike software

Touring Mars

A major global IT outage is currently happening:


It is being linked to a problem with Crowdstrike software, most likely an overnight update that has gone awry and is causing blue screen of death (BSOD) crashes on Windows machines.

So far, multiple airports, airlines, banks and hospitals across the world have been affected.

Sky News here in the UK was also taken off air for a while, as was CBeebies - so not a good day for children's TV so far.
 
And Mercedes F1 will be affected.

My boss is looking to move our company to Crowdstrike, and because I have previous experience with the software from a past job (mostly thanks to an overly ambitious CISO who wanted to speedrun deployment), I am uneasy about the change.
 
And Mercedes F1 will be affected.
Sounds like it was a problem with a driver...

Formula 1 Racing GIF by George Russell
 
Nothing is working for me at work (engineering center) and the Corewell hospital my wife works at is down as well. Someone's gonna be unemployed after today.
 
I'm doing an afternoon shift. IT guy came in half an hour ago to reboot the computers. Still can't access them. I'm about to clock off anyway.
 
This is being caused by a faulty update. There's a workaround: reboot the machine into safe mode and delete the offending update file from the Crowdstrike folder in the Windows directory.
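For anyone stuck doing this by hand on a pile of machines, the delete step boils down to something like the sketch below. It's only a rough outline: it assumes the box is already booted into safe mode with admin rights, and that the bad files match the C-00000291*.sys pattern from CrowdStrike's guidance, so double-check the path and pattern before running anything.

```python
# Rough sketch of the manual workaround: remove the faulty CrowdStrike channel
# file(s) while the machine is booted into safe mode. Path and filename pattern
# follow CrowdStrike's published guidance, but verify them for your environment.
from pathlib import Path

CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")
PATTERN = "C-00000291*.sys"  # the channel file shipped in the bad update

def remove_faulty_channel_files() -> None:
    matches = list(CROWDSTRIKE_DIR.glob(PATTERN))
    if not matches:
        print("No matching channel files found, nothing to do.")
        return
    for f in matches:
        print(f"Deleting {f}")
        f.unlink()  # needs administrator rights
    print("Done. Reboot normally afterwards.")

if __name__ == "__main__":
    remove_faulty_channel_files()
```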
 
"The emergency 911 lines in the US state of Alaska went down, according to the state troopers service. “Due to a nationwide technology related outage, many 911 and non-emergency call centres are not working correctly across the state of Alaska,” a statement read."

This is why emergency infrastructure should be as robust and as low-tech as it can be.
Anyone remember copper wires for landlines? Those kept working even during power outages.
Anyone remember air horns for alarms? Those don't require some internet switch.
...
But sure, the more stuff requires an IP address, the easier it gets to bring it all down. This isn't doomsaying, it's simply a question of when some random occurrence like this hits again, or worse, when people with bad intent decide now is the time for maximum impact.
 
Most of the power lines here in Mid-Northern Alberta are in the ground. Been in this province for 23 years now and barely ever had any IT problems.

Edit: I'm sorry for whoever this happened to, it sucks when it happens. I know it's not entirely related, but late last winter some weirdo cut a copper power cable under a main bridge on the side of the river. Two days without internet, and our city hospital was also affected by that.
 
We've determined we can't do anything and it's up to our vendor to fix their issues (our system is fine). I'm guessing a lot of places are in the same boat right now too. What's really unfortunate is that it's a holiday week in Utah next week, so we're running a skeleton staff as it is.

I just want to know how stupid of an IT director you need to be to authorize a production change on a Friday. Like, IT 101 is you don't do stuff on a Thursday or Friday, and doubly so during the summer. I have to make changes by Wednesday night at 10 pm, and if I miss that window, I need to wait until the following Monday.
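Spelled out as a check, that window basically looks like the sketch below (the cutoff times are just my own example from above, not any kind of standard):

```python
# Sketch of the change-window rule described above: production changes are
# allowed Monday through Wednesday 22:00, otherwise you wait for Monday.
# The exact cutoffs are just the example from the post, not a standard.
from datetime import datetime

def change_allowed(now: datetime) -> bool:
    """Return True if a production change may be started at `now`."""
    weekday = now.weekday()  # Monday=0 ... Sunday=6
    if weekday < 2:          # Monday or Tuesday
        return True
    if weekday == 2:         # Wednesday, only until 22:00
        return now.hour < 22
    return False             # Thursday through Sunday: frozen

# Friday 19 July 2024, the day of the outage, is well outside the window.
print(change_allowed(datetime(2024, 7, 19, 14, 0)))  # -> False
```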
 
We've determined we can't do anything and it's up to our vendor to fix their issues (our system is fine). I'm guessing a lot of places are in the same boat right now too. What's really unfortunate is that it's a holiday week in Utah next week, so we're running a skeleton staff as it is.

I just want to know how stupid of an IT director you need to be to authorize a production change on a Friday. Like, IT 101 is you don't do stuff on a Thursday or Friday, and doubly so during the summer. I have to make changes by Wednesday night at 10 pm, and if I miss that window, I need to wait until the following Monday.
Especially don't authorize a change that could affect AIRLINES on a Friday or Saturday.
 
And Mercedes F1 will be affected.

My boss is looking to move our company to Crowdstrike, and because I have previous experience with the software from a past job (mostly thanks to an overly ambitious CISO who wanted to speedrun deployment), I am uneasy about the change.
They literally were and it's hilarious.

crowdstrike bsod.jpeg
 
Thankfully we don't use it, so no big headaches here so far. However, it would have been nice to have an easy day at work, seeing as it's sunny and nice out.

I am surprised we haven't been affected, as we use AWS.
 
It's crazy how vulnerable large swathes of western economies are to stupid bugs like this. How are 911 systems not more resilient?
 
It's crazy how vulnerable large swathes of western economies are to stupid bugs like this. How are 911 systems not more resilient?
They're designed to withstand malicious outside attacks, not clumsiness from someone who's supposed to be maintaining them. Also, most telecom systems in the USA are patchworks of older systems that are a PITA to maintain under the best of circumstances.
 
They're designed to withstand malicious outside attacks, not clumsiness from someone who's supposed to be maintaining them. Also, most telecom systems in the USA are patchworks of older systems that are a PITA to maintain under the best of circumstances.
You're not making me feel better. :lol:
 
I have to wonder what the fallout will be for the team that was responsible for this faulty patch. I know at my company we do push out updates as soon as they come out, but we do the lab room first, then a pilot group, wait a week, then push it out to everyone. I believe everyone involved with this may have to leave the IT field, because how are you going to live down the fact that you put out an update that caused major issues around the world?
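For anyone curious what that ring setup looks like in practice, here's a rough sketch. The group names, host lists and soak times are made up; it's just the shape of the thing:

```python
# Sketch of a staged ("ring") rollout like the lab -> pilot -> everyone flow
# described above. Group names, hosts and soak times are invented examples.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Ring:
    name: str
    hosts: List[str]
    soak_days: int  # how long to watch this ring before promoting further

RINGS = [
    Ring("lab", ["lab-01", "lab-02"], soak_days=1),
    Ring("pilot", ["pilot-01", "pilot-02", "pilot-03"], soak_days=7),
    Ring("everyone", ["prod-fleet"], soak_days=0),
]

def roll_out(update: str,
             deploy: Callable[[str, str], None],
             healthy: Callable[[str], bool]) -> None:
    """Push `update` ring by ring, halting if any host in a ring looks unhealthy."""
    for ring in RINGS:
        print(f"Deploying {update} to ring '{ring.name}' ({len(ring.hosts)} targets)")
        for host in ring.hosts:
            deploy(update, host)
        if not all(healthy(h) for h in ring.hosts):
            print(f"Problems in ring '{ring.name}', halting the rollout.")
            return
        print(f"Soaking ring '{ring.name}' for {ring.soak_days} day(s) before continuing...")
```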
 
kjb
I have to wonder what the fallout will be for the team that was responsible for this faulty patch. I know at my company we do push out updates as soon as they come out, but we do the lab room first, then a pilot group, wait a week, then push it out to everyone. I believe everyone involved with this may have to leave the IT field, because how are you going to live down the fact that you put out an update that caused major issues around the world?
Someone will be fired for not testing this before production AND there will probably be increased government scrutiny.
 
I think the thing I don't understand is how this got into production and no one caught it, or maybe it was caught but it was still pushed through because of deadlines or whatever. I don't know, and I don't believe we will ever find out the full truth.
 
kjb
I have to wonder what the fallout will be for the team that was responsible for this faulty patch. I know at my company we do push out updates as soon as they come out, but we do the lab room first, then a pilot group, wait a week, then push it out to everyone. I believe everyone involved with this may have to leave the IT field, because how are you going to live down the fact that you put out an update that caused major issues around the world?
One of IT's best abilities is to shift blame. The finger will be pointed at a vendor, who will point the finger at another vendor, and so on until it gets to some vendor in China or India that just doesn't care. Or they'll just pin it all on some guy they've been wanting to get rid of but couldn't figure out how to. At least that's almost always been my experience when someone nukes production.
 
I've got a family member who works at CS (my sister's husband's cousin) who I used to work with at Twitter. I'm sure he's busy AF right now :lol:



Jerome
 
The number of "this never would happen to Apple" comments on my FB feed is ridiculous.
 
One of IT's best abilities is to shift blame. The finger will be pointed at a vendor, who will point the finger at another vendor, and so on until it gets to some vendor in China or India that just doesn't care. Or they'll just pin it all on some guy they've been wanting to get rid of but couldn't figure out how to. At least that's almost always been my experience when someone nukes production.
If CS has a decent audit log, the person will most certainly be known once the investigation starts.
 
They're designed to withstand malicious outside attacks, not clumsiness from someone who's supposed to be maintaining them. Also, most telecom systems in the USA are patchworks of older systems that are a PITA to maintain under the best of circumstances.
"Well it worked on my machine!"

Speaking from my own experience as a dev, if Crowdstrike are competent then a change like this would've needed approval from a senior developer, plus some sort of automated testing (Jenkins or similar) running in something closer to a production environment than the developer's own PC/laptop.
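Even something as dumb as a pre-release gate that loads the actual shipped artifact somewhere other than the dev's laptop would help. A rough, entirely hypothetical sketch of that kind of check (the file format and rules here are invented, just to show the idea):

```python
# Hypothetical pre-release gate: sanity-check a content/config file before it
# is promoted to production. The size and header rules below are invented;
# the point is simply that the shipped artifact gets exercised automatically
# in a production-like step, not just on a developer's machine.
import sys
from pathlib import Path

MIN_SIZE_BYTES = 1024          # assumed lower bound; a truncated file should fail
EXPECTED_MAGIC = b"CFG1"       # assumed header for this made-up format

def validate(path: Path) -> list:
    errors = []
    data = path.read_bytes()
    if len(data) < MIN_SIZE_BYTES:
        errors.append(f"{path}: only {len(data)} bytes, looks truncated")
    if not data.startswith(EXPECTED_MAGIC):
        errors.append(f"{path}: missing expected header")
    return errors

if __name__ == "__main__":
    problems = [err for arg in sys.argv[1:] for err in validate(Path(arg))]
    for err in problems:
        print("FAIL:", err)
    sys.exit(1 if problems else 0)
```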
 