As an ex-security researcher, I want to pull my hair out reading this. My only response is: you deserve it, TSMC.
I cannot understand how companies can be so lax with their security policies. It shows how little management knows about security, and how they will prioritise short-term profits, take a head-in-the-sand approach, and refuse to update their systems to avoid the slight cost of maintenance downtime.
It tells me that there are likely huge security holes in TSMC's infrastructure that a little bit of social engineering would be able to exploit to copy the latest processor designs directly off their intranet. The way around this is for customers to work in clauses that automatically bring in fines if the security of a manufacturer is not up to scratch. We also need to see governments bring in proper fines for information-leakage hacks to incentivise companies to stop being so lazy about it.
Sorry to break this to you, but as someone who worked in security and with security people extensively, I can only say that these are the people with the narrowest field of view I have ever met. They have a single point of view and imagine it's the only one. They are always the ones "willing" to completely bog down a business with "security" without understanding that there's a compromise.
I always expect that one day one of these people will just say: "Quick, disconnect every device from the network, cut the power, pour concrete all over them, lock the doors and go home; there, perfectly secure from hacks, (my) job well done". Or "you got malware, I'm sure anyone can just use some social engineering thingy and steal all your designs, your monies, your children".
The business has to make a compromise sometimes and take a risk. Sometimes that bet doesn't pay off; most of the time it actually does. And sometimes the fix could be just as risky, or close to it. The deeper you are in the field, the less likely you are to have any kind of perspective and vision.
Yes, it was a fkuc-up, yes it turned out bad and expensive, yes it could have probably been handled better, and maybe even completely avoided if the cards fell just right and the conditions were perfect. But no, try as you might you will never see it as it is. You will always see "oh, they didn't apply every single patch, didn't put every computer on a separate network with penta-factor authentication, oh [something else here that will definitely put a stick in the business' bicycle wheel]".
For one, you fail to understand that this is a private company and it's the customer's place to complain, not yours or the government's. Not in this case, anyway. As an "ex-security researcher" you should know that these vectors are as different as they get and are treated in completely different ways. Social engineering is treated differently because there's really no perceptible downside to providing that education, as opposed to possibly crippling production systems with a patch.
I am sure you are both familiar with Richard I. Cook, MD's famous 18 rules from "How Complex Systems Fail." If you are not, just google it; it is a very famous, relatively quick paper on the subject.
LMFAO, I was thinking the same thing, and I reference Cook's rules all the time in IT. At the end of the day, security flaws almost always come down to money, or the lack of spending it. That isn't the IT department's fault. In many cases, management has been warned months or even years in advance that shit is eventually going to hit the fan if they don't do something about it.
And the last thing we need is govts sticking their noses in, given their dreadful track record in managing security. Most govts can't even secure their national borders.
Geez, you seem to think you're very smart. But, if you really were, you'd know that computer security firms are constantly in communication with their clients to figure out the details of a compromise between security and economics. That's really security 101, no great insight from you there.
As for patching the WannaCry vulnerability: that is a security patch, which TSMC rolled out the moment their systems got infected. Security patches are designed to be deployable with minimal risk to productivity, and in this case it appears to have worked right out of the gate for them. There is no excuse for not installing existing security patches; any system without them is prima facie hackable with commonly available tools, free tools like Kali Linux included.
Online Forum Comments: OMG! Spectre Sub-variant 4.A[2] exists! APOCALYPSE!
Real World: We ran our whole internal network with totally unpatched Windows 7 systems and got hit by a more than one-year old worm that came preloaded on something we bought and didn't check.
The part where you wrote "Real World" should be replaced with "Offline systems with no access to the internet that are generally secure from outside attacks unless we F up".
For the ACTUAL "Real World": If you're on the interwebs, yes, you SHOULD be concerned about newer threats. Derp.
"Offline systems with no access to the internet that are generally secure from outside attacks unless we F up".
No offline system is 'generally secure'. This is not some surprise revelation, just being separate from the internet does not magically bestow security.
I was referring specifically to TSMC's systems. In addition I qualified that they were generally secure from OUTSIDE attacks. Unfortunately they rely too much on employees not failing at their job.
That said, airgapping a system from any outside network does boost security. By itself it does not secure a system, nor did I claim it magically secured the system by itself. If any system *at all* can be considered generally secure, then an offline system can be at least as secure, and then some. If you don't believe that is possible, period (a different argument altogether), well then it can at least be built to be as secure as the most secure online system, and then some.
It's easy to say "OMG, you're running unpatched Windows 7!!!1!1!oneoneone!", but ... imagine the joy of having to qualify an individual patch on a $10B fab. You can't really canary it, you probably don't have a second preproduction system to qualify on, etc. And the individual bits and bobs running on Windows 7 probably often have a worldwide installed base in the dozens, so you can't rely on bake time to prove things out, either. So once you've qualified things at a particular patch level, you most likely leave it at that patch level FOREVER, and introduce an elaborate procedure for vetting new systems to make sure they don't introduce any unknowns.
I think it's a real stretch to assume that they're just winging it on this. I wouldn't want to touch this problem with a ten foot pole.
Agreed. For most use cases I would encourage people to keep their systems patched and run a fairly current (and supported) OS. But mission-critical industrial systems running custom software? Yeah, it's a little more complicated. Especially when the systems are offline - the risk is very low. The only reason they got hit was that their personnel didn't scan the machine before tossing it on their network. As you said, vet new systems. I'd be willing to bet they had such a procedure in place, and the human element failed.
I will say that if and when possible, run your software on VMs.
It has been a long time since I have been anywhere near that topic, but are VMs these days capable of running applications with hard/soft realtime requirements? It used to be quite an issue.
Someone in the movie industry told me this week that pro apps running on VMs now do a better job of allocating machine resources than the native OS, i.e. it's actually slower running on bare metal. I guess it's easier to add the nuances of a hw platform into a VM than into an OS.
You got me wrong, I'm not talking about overall performance, but about the latency contract. In industrial applications you have a requirement to perform certain operations at precise times (imagine assembly-line operations, for example), and you want to execute your device every x ms for y ms... in that case you need those operations scheduled at exact time intervals. A normal scheduler is not able to do that; you need a realtime scheduler. The trouble I'm talking about is that with a VM you pretty much have one scheduler scheduling another scheduler, which brings quite a few issues for applications like these. I can only imagine high-end chip manufacturing is significantly more demanding than a petrochemical plant (where I have seen these requirements).
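To make the latency-contract point concrete, here is a minimal sketch (Python, purely illustrative; a real fab controller would run on a realtime OS scheduler, not Python) of a periodic task driven by absolute deadlines. The "jitter" it records is the lateness of each activation, which is exactly what a realtime scheduler must bound and what a nested VM scheduler tends to inflate:

```python
import time

def run_periodic(task, period_s, iterations):
    """Run `task` at fixed absolute deadlines and record how late each
    activation fired. A realtime scheduler bounds this lateness; a normal
    scheduler (or a scheduler running inside a VM's scheduler) does not."""
    jitter = []
    deadline = time.monotonic() + period_s
    for _ in range(iterations):
        # Sleep toward the *absolute* deadline so errors don't accumulate.
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        jitter.append(time.monotonic() - deadline)  # lateness in seconds
        task()
        deadline += period_s
    return jitter

if __name__ == "__main__":
    lateness = run_periodic(lambda: None, period_s=0.01, iterations=50)
    print(f"worst-case lateness: {max(lateness) * 1000:.3f} ms")
```

On a loaded desktop OS, let alone one scheduler stacked on another, the worst-case lateness can easily exceed the period itself, which is the failure mode being described above.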
Not sure it solves all problems, but I know recent VMware has functionality to essentially skip the virtualization layer (device passthrough), allowing target VMs to address hardware directly. I would think that would fix the double-scheduler latency, but again, I haven't had to implement it. Activating direct hardware access reduces some of the VM's portability, but when the purpose is quicker restore from VM failure on the same or similar hardware, those compromises are probably tolerable.
Um... VMs do not run as fast as bare metal. There is a performance hit due to the overhead of virtualizing hardware; it only works as well as it does because Intel CPUs added a lot of black magic to make it effective. What you gain with VMs is the ability to allocate all available physical resources such as RAM, CPU, and storage. Say you have a server running something and you only use 25% of the resources on that physical server. Well, set up a hypervisor on it and install multiple VMs, each an optimized server with just the necessary RAM, CPU, and storage, and now multiple VMs can take up all the physical server's resources. Nothing wasted. This saves on rack space, cooling, electricity, heat, etc.
But these manufacturing machines, each with its own operating system and custom software, cannot be virtualized. Why do they run vulnerable Windows versions? Because it's easy to write code for them, and once a machine is built and installed you really never need to change the software, so they go many years, until the whole machine is replaced, running an unpatched Windows version. You can't upgrade the OS or even patch it without potentially breaking the machine. This is a serious problem now that all these machines are networked together. The worm jumped from machine to machine and killed a whole lot of them on the chip-fab assembly lines. It was a nightmare scenario. Personally, I think they should all be running a hardened embedded Linux that requires software to be signed before it's allowed to execute, etc. The only reason Windows is being used is that it is easier to find programmers for it. I bet they still have Visual Basic applications powering these machines. I know of other manufacturing environments where a really old version of WinXP is in use across older machines. They cannot be easily upgraded, the software is likely doing things outside the norm of best practices, etc. If you were to even patch them you'd risk breaking them.
"But these manufacturing machines, each with its own operating system and custom software, cannot be virtualized." Why not? If you've got a Win7 machine driving software that's driving a piece of equipment, why would it be impossible to virtualize that Win7 machine (and have newer/more secure underlying software beneath the VM)?
Real or virtual doesn’t affect the patchability of an OS. My work uses tons of outdated software either frozen at a certain version or no longer updateable. That has been my experience at the prior two companies I worked for - it gets very expensive in time and money to update.
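The signed-execution idea floated above (only run binaries the vendor has signed) can be sketched as follows. This is a minimal, hypothetical illustration, not how any real fab gates execution; real code-signing uses asymmetric keys (the vendor signs with a private key, the machine verifies with only the public key), whereas this sketch uses a shared-key HMAC purely to stay within the standard library:

```python
import hmac
import hashlib

def sign_binary(binary: bytes, key: bytes) -> bytes:
    """Produce a MAC over the binary, using a key held by the tool vendor."""
    return hmac.new(key, binary, hashlib.sha256).digest()

def allowed_to_execute(binary: bytes, signature: bytes, key: bytes) -> bool:
    """Gate execution: a loader would refuse to run any program whose
    signature does not verify, so a worm-dropped executable (unsigned,
    or modified after signing) never starts."""
    expected = hmac.new(key, binary, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking the MAC via timing.
    return hmac.compare_digest(expected, signature)
```

Any tampering with the binary after signing, such as a worm appending or patching code, changes the digest and fails the check, which is the property the comment is asking for.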
"...and introduce an elaborate procedure for vetting new systems to make sure they don't introduce any unknowns. I think it's a real stretch to assume that they're just winging it on this."
Given an infected system was connected to an internal network full of unpatched systems, the evidence of 'winging it' is pretty public. "But we need to keep our setups static" does not mean just throwing up your hands in resignation; it means security efforts need to be redoubled to compensate: enhanced scans of new devices, popping them onto a honeypot network to see what crawls out, packet vetting on the internal network, etc.
"""Given an infected system was connected to an internal network full of unpatched systems, the evidence of 'winging it' is pretty public."""
We have evidence that they were compromised, but we have no evidence of how much effort they put into not being compromised. They might have a super-elaborate system to protect against this kind of problem, and someone might have put the wrong stick label on a piece of kit. Having a process which could be improved is distinct from making up your process as you go.
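The "enhanced scans of new devices" idea above can be sketched very simply: before a new tool is allowed onto the internal network, hash everything on it and compare against a blocklist of known-bad digests. This is a hypothetical minimal sketch; a real deployment would pull its blocklist from a threat-intel feed and combine this with AV scanning and honeypot observation:

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist: SHA-256 digests of known malware payloads.
# A real deployment would populate this from a threat-intelligence feed.
KNOWN_BAD_SHA256 = {
    # "<sha256 hex digest>": "WannaCry dropper",  # placeholder entry
}

def sha256_of(path: Path) -> str:
    """Hash a file incrementally so large tool images don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_new_machine(root: Path, blocklist=KNOWN_BAD_SHA256):
    """Hash every file under `root` and report blocklist matches -- the
    kind of check a newly delivered tool should pass *before* it is
    plugged into the fab's internal network."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in blocklist:
            hits.append(path)
    return hits
```

Hash matching only catches known samples, of course; that is why it complements, rather than replaces, the honeypot-network and packet-vetting steps mentioned above.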
Well, it largely does where it really counts: most RDBMSs (modulo SQL Server, of course) that run anywhere run on Linux/*nix. There are lots of other places it can, too. The caveat is that, to the extent we're in an x86 monoculture, clever assembler bad-guy coders can get around that problem.
I inferred he meant from the perspective of the idiots who released it in the first place. Malware doesn't have a perspective, it doesn't have agency. Thank grud not yet anyway.
WannaCry's utility as a viable, semi-controlled weapon is over. The underlying exploits were patched long ago, virus scanners know its signature, and tools have been created to reverse its encryption. Furthermore, the ransom addresses are monitored, and it's well known that paying said ransom won't get your data back.
So all it can do is lurk in the depths of unfixed machines, infecting anyone unlucky enough to stumble upon it. It no longer serves a purpose; just blind destruction.
I think worms should be classified legally the same as arson. Carelessly or intentionally lighting fires carries a huge penalty almost everywhere because of how easily fires can turn catastrophic; Rome burning and Chicago burning are examples from history. In this case it was TSMC burning $100 million, added to the already-burnt millions or billions from the worm's main spread. If your worm infects a hospital and leads to a death, you should be charged with murder by arson. Legal precedents are already set.
Well, once again, third-party software installed on a closed network infected the client's IT infrastructure.
That means the contractor was carrying the malware and infected the client (TSMC). This is the typical cyber-security scenario.
The problem is that some closed networks need custom updates. Also, each update needs to be checked before release onto the network in case some software crashes.
TSMC might have patched their systems; however, maybe one of the patches was incompatible with their network and was not applied.
It is way more complicated than the simple "OMG TEH UPDATES FAIL!".
"the fab expects certain shipments delays and additional charges"
So basically it was their fudge-up for not properly checking the machine, or, if they did, the tech who was supposed to check it most likely never actually checked it. And now they are going to pass the costs off to their clients with additional charges. How is it their clients' fault or problem that they fracked things up? At this point, if I were them, I would be more worried about being sued or losing clients than about trying to recoup what the clean-up and recovery cost. Why should their clients pay for the clean-up?
They screwed up, so get the crap fixed so you can do business and keep your clients happy; do not try to piss them off even more by overcharging them.
I would also like to point out that the only time WannaCry was known to crash systems was when it tried to install on Windows XP, so are they admitting they have a lot of Windows XP machines in their stables? On Windows 7 it will run in the background for a while as it locks your files down, but it never touches Windows itself, basically because if it were to crash your system to the point of being non-usable, how would it put up that sheet on your screen telling you that you are pretty much fracked, and that if you want your family photos and everything else back, you pay the price or you are hooped? Are they sure they actually had WannaCry and not something else? I have dealt with a lot of WannaCry-infected machines and none of them ever crashed except XP machines.
I think...if the answer to the problem was patching fab tools, they probably would have done it.
Furthermore, updating a tool's software from Windows XP to Windows 7 generally isn't just "insert the USB key, press F2 for the boot menu, and boot from said USB key". A lot of the hardware and software is very closely tied together, and upgrading to a new OS, depending on vendor support, would be very expensive or even impossible. Not every tool in the fab is going to be brand new with all the bells and whistles.
So, I'm not saying that TSMC is in the clear, but please do try and have some grace when making assumptions about what they can or cannot do.
As far as the "additional charges," there are a lot of charges beyond just customer loss like wafer scrap, etc. I'm being purposely vague with this post, but think of what can happen when manufacturing gets interrupted.
I was saying that the way they made it sound, it was all Windows 7 machines that got infected with this virus. I then went on to say that, for the most part, if a Windows 7 machine gets infected with this virus, yes, it will cause slowdowns, but that is because all of the user's data is getting locked, so the user will not be able to open those files without paying a fee. It would not be in WannaCry's interest to put the system into a totally non-working state, as in not being able to boot at all, because if it did, how would it get its nasty little ransom demand posted all over the user's screen?
Yes, there may be times the system just crashes, but that has more to do with the hardware config and the software installed on those particular machines. Windows XP will just crash if the virus tries to install or run, not because it is a more secure OS but because it is such an old OS that it lacks what the virus needs for an auto-install. The weird thing is that the virus can actually be installed manually on XP, and it will install, lock your files, and then make the ransom demands.
As for TSMC maybe recouping their losses by charging their clients extra money: again I say, how is it TSMC's customers' problem or fault that TSMC dropped the ball here? Like I said before, they should be more worried about keeping clients happy despite the delays and lack of product, and not worry about recouping the money spent on the clean-up, or at least hide it in the price sheet on future products and deals.
Well, IBM seems to think macOS is far better. They have deployed 150,000+ (and counting) employee Macs, saving them hundreds per Mac in licensing and support costs. They can buy the Macs from Apple under the DEP system, so they are zero-touch: the Mac ships, still shrink-wrapped, to the employee straight from the factory. The employee opens the box, connects power, and connects it to the network (even the Internet at home), and it phones home to Apple, which, thanks to DEP, looks at the serial number, sees it is an IBM Mac, and redirects it to the IBM JAMF Pro servers, which then enroll the Mac with the MDM. Then all the policies and configuration profiles are applied and software is installed; the user sees information about the Mac@IBM program while they wait. It then pops up an IBM app store where they can install Microsoft Office, developer tools, Lotus Notes, etc. The Macs are encrypted and the keys escrowed into JAMF. The Self Service app provides all sorts of handy apps and scripts to fix stuff on your own, and if you do have to call the help desk, they can manage the Mac remotely. This is worlds better than anything Microsoft is doing with Windows: users rarely need to call for help and everything is heavily automated. The Macs check into the MDM on the corporate LAN and on the Internet, and if the user does something they are not supposed to, like enabling something IBM wants disabled, it will either completely prevent the user from doing so or disable it when the Mac reconnects to the MDM on its regular check-in cycle. The Macs also last longer than the PCs. On Macs most things can be locked down with Configuration Profiles; the rest can be scripted, and Apple keeps adding to the Configuration Profiles every year. Apple's new T2 64-bit ARMv8 co-processor controls SSD encryption, provides a secure enclave, and supports Secure Boot, so you can lock the machines down so they cannot boot from USB and the boot chain cannot be infected by malware.
This brings it much closer to being like an iPhone or iPad, with hardware-level security. The future will only tighten this security.
All that is great, but I don't see Apple macOS being used with manufacturing tooling and custom machines. That's a space ideal for Linux, if only there were easier developer tools and APIs. The reason Windows is used is that there are more developers who can code for it. Modern systems would use Win10 and C# applications to run the machines, whereas old machines were WinXP/Win7 and Visual Basic/C#.
Dig through the JAMF YouTube channel for the JAMFNation Conferences; there are a few IBM presentations talking about how they leveraged JAMF Pro to manage their Macs. Most of Silicon Valley is using Macs because they are building Linux-based cloud solutions and the Mac is Unix under the hood, so it plays very well in that space. There are a lot of different ways to manage them besides JAMF, such as Chef, Puppet, and alternative MDMs like SimpleMDM, Munki, etc.
Microsoft is becoming much more cloud-developer friendly as of late because they see the threat that Apple and the cloud present to Microsoft. So they are playing along, with SQL Server being ported, better cloud support in everything, SSH/SSHd in beta for Win10, and the Windows Subsystem for Linux. Those last two go a long way toward bringing developers to Win10 instead of the Mac, but it's still not 100% there yet. The days of PC vs. Mac, though, are pretty much over.
The problem is all the manufacturing tooling machinery that relies on old versions of Windows that haven't been patched against vulnerabilities. Whatever vendor provided the new tooling introduced a WannaCry variant into the internal production-line network, which unleashed the worm across all the vulnerable machines and shut down production. This is absolutely insane! These machines should not be running old, vulnerable Windows operating systems; they should probably be running a patched embedded Linux. But patching these vulnerable Windows systems would probably break the tool just as badly as malware. It's really horrific how these very expensive machines are controlled by such god-awful software running on ancient versions of Windows. Yes, it gets the job done, but at what cost? They hired programmers who could only deal with Windows instead of something a lot more rock solid. So sad... Really...
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
41 Comments
Back to Article
DrizztVD - Thursday, August 9, 2018 - link
As an ex-security researcher, I want to pull my hair out reading this. My only response is: you deserve it TSMC.I cannot understand how companies can be so lax with their security policies. It shows how little management knows about security and how they will prioritise short-term profits and just take a head-in-the-sand approach and not try to update their systems for the slight cost of maintenance downtime.
It tells me that there are likely huge security holes in TSMC infrastructure that a little bit of social engineering will be able to exploit to copy the latest processor designs directly off of their intranet. The way around this is that customers should work in clauses that automatically brings in fines if the security of a manufacturer is not up to scratch. We even need to see governments bring in proper fines for information leakage hacks to incentivise companies to stop being so lazy about it.
close - Thursday, August 9, 2018 - link
Sorry to break this to you but as someone who worked in security and with security people extensively I can only say that these are the people with the most narrow field of view I have ever met. They also have a single point of view and imagine it's the only one. They are always the ones "willing" to completely bog down a business with "security" without understanding that there's a compromise.I always expect that one day one of these people will just say: "Quick, disconnect every device from the network, cut the power, pour concrete all over them, lock the doors and go home; there, perfectly secure from hacks, (my) job well done". Or "you got a malware, I'm sure anyone can just use some social engineering thingy and steal all your designs, your monies, your children".
The business has to make a compromise sometimes and take a risk. Sometimes that bet doesn't pay off. Most of the times it actually does. And sometimes the fix could be just as risky or close to. The deeper you are in the field, the less likely you are to have any kind of perspective and vision.
Yes, it was a fkuc-up, yes it turned out bad and expensive, yes it could have probably been handled better, and maybe even completely avoided if the cards fell just right and the conditions were perfect. But no, try as you might you will never see it as it is. You will always see "oh, they didn't apply every single patch, didn't put every computer on a separate network with penta-factor authentication, oh [something else here that will definitely put a stick in the business' bicycle wheel]".
For one you fail to understand this is a private company and it's the customer's problem to complain, not your's or the governments. Not in this case anyway. As an "ex-security researcher" you should know that these vectors are as different as they get and they are treated in completely different ways. Social engineering is treated differently because there's really no perceptible downside to provide this education as opposed to possibly crippling production systems with a patch.
Roland00Address - Thursday, August 9, 2018 - link
I am sure you are both familiar with Richard I. Cook, MD's famous 18 rules on "How Complex Systems Fail." If you are not just google it for it is a very famous relatively quick paper on this subject.Samus - Thursday, August 9, 2018 - link
LMFAO I was thinking the same thing and reference Cook's rules all the time in IT. At the end of the day, security flaws almost always come down to money, or the lack of spending it. That isn't the IT departments fault. In many cases, management has been warned months or even years in advance that shit is eventually going to hit the fan if they don't do something about it.mapesdhs - Friday, August 10, 2018 - link
And the last thing we need is govts sticking their noses in, given their dreadful track record in managing security. Most govts can't even secure their national borders.FunBunny2 - Friday, August 10, 2018 - link
yeah, get rid of the FDA, which secures our drugs, while your at it. corporations are stellar at protecting patients.DrizztVD - Saturday, August 11, 2018 - link
Geez, you seem to think you're very smart. But, if you really were, you'd know that computer security firms are constantly in communication with their clients to figure out the details of a compromise between security and economics. That's really security 101, no great insight from you there.As for patching the Wannacry vulnerability, that is a security patch which TSMC rolled out the moment their systems got infected. Security patches are designed to be implemented with the lowest risk to productivity, and in this case it appeared to work right out of the gate for them. There is no excuse for not installing existing security patches, and any system without security patches is prima facie hackable with commonly available tools - free tools like Kali Linux.
CajunArson - Thursday, August 9, 2018 - link
Online Forum Comments: OMG! Spectre Sub-variant 4.A[2] exists! APOCALYPSE!Real World: We ran our whole internal network with totally unpatched Windows 7 systems and got hit by a more than one-year old worm that came preloaded on something we bought and didn't check.
Alexvrb - Thursday, August 9, 2018 - link
The part where you wrote "Real World" should be replaced with "Offline systems with no access to the internet that are generally secure from outside attacks unless we F up".For the ACTUAL "Real World": If you're on the interwebs, yes, you SHOULD be concerned about newer threats. Derp.
edzieba - Friday, August 10, 2018 - link
"Offline systems with no access to the internet that are generally secure from outside attacks unless we F up".No offline system is 'generally secure'. This is not some surprise revelation, just being separate from the internet does not magically bestow security.
Alexvrb - Friday, August 10, 2018 - link
I was referring specifically to TSMC's systems. In addition I qualified that they were generally secure from OUTSIDE attacks. Unfortunately they rely too much on employees not failing at their job.That said, airgapping a system from any outside network does boost security. By itself it does not secure a system, nor did I claim it magically secured the system by itself. If any system *at all* can be considered to be generally secure, then an offline system can be at least as secure, and then some. If you don't believe that is possible period (a different argument altogether), well then it can at least be built to be as secure as the most secure online system, and then some.
dshess - Thursday, August 9, 2018 - link
It's easy to say "OMG, you're running unpatched Windows 7!!!1!1!oneoneone!", but ... imagine the joy of having to qualify an individual patch on a $10B fab. You can't really canary it, you probably don't have a second preproduction system to qualify on, etc. And the individual bits and bobs running on Windows 7 can probably often have a wordwide installed base in the dozens, so you can't rely on bake time to prove things out, either. So once you've qualified things at a particular patch level, you most likely leave it at that patch level FOREVER, and introduce an elaborate procedure for vetting new systems to make sure they don't introduce any unknowns.I think it's a real stretch to assume that they're just winging it on this. I wouldn't want to touch this problem with a ten foot pole.
Alexvrb - Thursday, August 9, 2018 - link
Agreed. For most use cases I would encourage people to keep their systems patched and run a fairly current (and supported) OS. But mission critical industrial systems running custom software? Yeah, it's a little more complicated. Especially when the systems are offline - the risk is very low. The only reason they got hit was their personnel didn't scan the machine before tossing it on their network. As you said, vette new systems. I'd be willing to bet they had such a procedure in place, and the human element failed.I will say that if and when possible, run your software on VMs.
HollyDOL - Friday, August 10, 2018 - link
It has been a long time since I have been any near that topic, but are VMs these days capable of running hard/soft realtime requiring applications? It used to be quite an issue.mapesdhs - Friday, August 10, 2018 - link
Someone in the movie industry told me this week that pro apps running on VMs are now doing a better job at allocating machine resources than the native OS, ie. it's actually slower running on bare metal. I guess it's easier to add nuance of a hw platform into a VM than it is into an OS.HollyDOL - Friday, August 10, 2018 - link
You got me wrong, I don't speak about overall performance, but about the latency contract.

In industrial applications, you have a requirement to perform certain operations at a precise time (imagine assembly line operations, for example): you want to execute your device every x ms for y ms, and those operations need to be scheduled at exact time intervals. A normal scheduler is not able to do that; you need a realtime scheduler. The trouble I speak about is that with a VM you pretty much have one scheduler scheduling another scheduler, which brings quite a few issues for applications like these. I can only imagine high-end chip manufacturing is significantly more demanding than a petrochemical plant (where I have seen these requirements).
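The latency-contract point can be seen even without a VM: a periodic loop on a general-purpose scheduler drifts from its deadlines by an unbounded amount. A minimal, hypothetical sketch (plain Python on a stock OS scheduler, nothing fab-specific):

```python
import time

def measure_jitter(period_s=0.01, iterations=50):
    """Run a periodic loop and record how late each wake-up is
    relative to its ideal deadline. A realtime scheduler bounds
    this lateness; a general-purpose one makes no such promise."""
    lateness = []
    start = time.perf_counter()
    for i in range(1, iterations + 1):
        target = start + i * period_s          # ideal deadline for tick i
        delay = target - time.perf_counter()
        if delay > 0:
            time.sleep(delay)                  # ordinary, non-realtime sleep
        lateness.append(time.perf_counter() - target)
    return lateness

jitter = measure_jitter()
print(f"worst-case lateness: {max(jitter) * 1000:.3f} ms")
```

On a loaded desktop the worst-case lateness can easily exceed the whole period; running the same loop inside a VM stacks a second scheduler on top, which is exactly the problem for hard-realtime tool control.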
e_sandrs - Friday, August 10, 2018 - link
Not sure it solves all problems, but I know recent VMware has functionality to kinda skip the virtualization layer -- allowing target VMs to directly address hardware. I would think that would fix the double scheduler layer and the latency, but again, I haven't had to implement it. Activating direct hardware access reduces some of the portability of the VM, but when the purpose is quicker restore from VM failure on the same or similar hardware, those compromises are probably tolerable.

JBrickley - Friday, August 10, 2018 - link
Um... VMs do not run as fast as bare metal. There is a performance hit due to the overhead of virtualizing hardware. It only works because Intel CPUs added a lot of black magic to make it work effectively. What you gain in using VMs is allocating all available physical resources such as RAM, CPU, and storage. Say you have a server running something and you only use 25% of the resources on that physical server. Set up a hypervisor on the server and install multiple VMs, each an optimized server with just the necessary allocated RAM, CPU, & storage, and now multiple VMs can take up all the physical server's resources. Nothing wasted. This saves on rack space, cooling, electricity, heat, etc.

But these manufacturing machines, each with its own operating system and custom software, cannot be virtualized. Why do they run vulnerable Windows versions? Because it's easy to write code for them, and once a machine is built and installed, you really never need to change the software, so they go many years running an unpatched Windows version until the whole machine is replaced. You can't upgrade the OS or even patch it without potentially breaking the machine. This is a serious problem now that all these machines are networked together. The worm jumped from machine to machine and killed a whole lot of them on the chip fab assembly lines. It was a nightmare scenario. Personally, I think they should all be running a hardened embedded Linux that requires software to be signed before it's allowed to execute, etc. The only reason Windows is being used is that it is easier to find programmers for it. I bet they still have Visual Basic applications powering these machines. I know of other manufacturing environments where a real old version of WinXP is in use across older machines. They cannot be easily upgraded, the software is likely doing things outside the norm of best practices, etc. If you were to even patch them you'd risk breaking them.
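The "software must be signed before it's allowed to execute" idea can be sketched in miniature. Real code signing uses asymmetric signatures verified by the OS; the digest allowlist below (hypothetical file name, and a digest that is simply the SHA-256 of an empty file) only illustrates the gatekeeping step:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of the only binaries the tool
# controller may launch. Name and digest are illustrative placeholders.
ALLOWED_DIGESTS = {
    "tool_control.exe":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_execution_allowed(path: Path) -> bool:
    """Permit execution only if the file's digest matches the allowlist,
    so a worm that tampers with or drops a binary is refused."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return ALLOWED_DIGESTS.get(path.name) == digest
```

A real deployment would anchor this in the kernel or firmware (as signed-execution schemes do), since a userland check can itself be tampered with.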
Alexvrb - Friday, August 10, 2018 - link
"But with these manufacturing machines each with it's own operating system and custom software cannot be virtualized."

Why not? If you've got a Win7 machine driving software that's driving a piece of equipment, why would it be impossible to virtualize that Win7 machine (and have newer/more secure underlying software beneath the VM)?
Icehawk - Friday, August 10, 2018 - link
Real or virtual doesn't affect the patchability of an OS. My work uses tons of outdated software, either frozen at a certain version or no longer updateable. That has been my experience at the prior two companies I worked for - it gets very expensive in time and money to update.

edzieba - Friday, August 10, 2018 - link
"and introduce an elaborate procedure for vetting new systems to make sure they don't introduce any unknowns. I think it's a real stretch to assume that they're just winging it on this."
Given an infected system was connected to an internal network full of unpatched systems, the evidence of 'winging it' is pretty public. "But we need to keep our setups static" does not mean just throwing up your hands in resignation, it means security efforts need to be redoubled to compensate. Enhanced scans of new devices, popping it onto a honeypot network to see what crawls out, packet vetting for the internal network, etc.
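One cheap first gate in such a vetting procedure: before a new device touches the production network, check whether it answers on TCP 445, the SMB port WannaCry propagated over. A minimal sketch (a real process would also audit patch level, watch the device on a honeypot segment, capture its traffic, etc.):

```python
import socket

def port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    Port 445 (SMB) was WannaCry's propagation vector; a device
    answering on it deserves extra scrutiny before being admitted."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This proves nothing about patch level by itself - a fully patched machine also serves SMB - so it is only a triage step, not a verdict.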
dshess - Saturday, August 11, 2018 - link
"""Given an infected system was connected to an internal network full of unpatched systems, the evidence of 'winging it' is pretty public."""

We have evidence that they were compromised, but we have no evidence of how much effort they put into not being compromised. They might have a super-elaborate system to protect against this kind of problem, and someone might have put the wrong stick label on a piece of kit. Having a process which could be improved is distinct from making up your process as you go.
baka_toroi - Thursday, August 9, 2018 - link
THIS IS LINUX'S CHANCE TO SHINE IN THE MANUFACTURING INDUSTRY! /s

FunBunny2 - Thursday, August 9, 2018 - link
well, it largely does where it really counts: most RDBMSs (modulo SQL Server, of course) that run anywhere run on linux/*nix. there are lots of other places it can, too. the caveat is that, to the extent we're in an X86 monoculture, clever assembler bad-guy coders can get around that problem.

GreenReaper - Friday, August 10, 2018 - link
<a href="https://www.microsoft.com/en-gb/sql-server/sql-ser... guess you haven't heard the news</a>.

GreenReaper - Friday, August 10, 2018 - link
Let's try that again! "I guess you haven't heard the news:" https://www.microsoft.com/en-gb/sql-server/sql-ser...
FunBunny2 - Friday, August 10, 2018 - link
sure I have. but linux SS isn't "real" SS. yet. may never be. only been around for a very little while.

bji - Thursday, August 9, 2018 - link
What exactly does it mean for malware to have "done its job"?

Malware's job is to infect and usually to monetize based on that infection. That job is never done from the malware's perspective.
mapesdhs - Friday, August 10, 2018 - link
I inferred he meant from the perspective of the idiots who released it in the first place. Malware doesn't have a perspective, it doesn't have agency. Thank grud not yet anyway.

Ryan Smith - Friday, August 10, 2018 - link
Precisely.

WannaCry's utility as a viable, semi-controlled weapon is over. The underlying exploits were patched long ago, virus scanners know its signature, and tools have been created to reverse its encryption. Furthermore the ransom addresses are monitored, and it's well known that paying said ransom won't get your data back.
So all it can do is lurk in the depths of unfixed machines, infecting anyone unlucky enough to stumble upon it. It no longer serves a purpose; just blind destruction.
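"Virus scanners know its signature" boils down, in the simplest case, to matching files against a database of known byte patterns. A toy sketch with a made-up name and pattern (real engines add hashes, heuristics, and behavioral analysis on top):

```python
from pathlib import Path

# Made-up byte pattern standing in for a real malware signature;
# actual scanner databases hold millions of such entries.
SIGNATURES = {
    "DemoWorm.A": b"\xde\xad\xbe\xef-demo-payload",
}

def scan_file(path: Path) -> list:
    """Return the names of all known signatures found in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```

Once a sample like WannaCry is widely analyzed, this kind of matching catches it everywhere, which is part of why its days as a controlled weapon are over.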
ironargonaut - Friday, August 10, 2018 - link
I think worms should be classified legally the same as arson. Carelessly or intentionally lighting fires carries a huge penalty almost everywhere because of how easily fires can turn catastrophic. Rome burning and Chicago burning are examples from history. In this case it was TSMC burning $100 million, added to the already-burnt millions or billions from the main spread. If your worm infects a hospital and leads to a death, you should be charged with murder by arson. Legal precedents are already set.

eva02langley - Friday, August 10, 2018 - link
Well, once again, a third-party software install on a closed network infected the client's IT infrastructure. That means the contractor was carrying the malware and infected the client (TSMC). This is the typical cyber-security scenario.
The problem is that some closed networks need custom updates. Also, each update needs to be checked before release onto the network in case some software crashes.
TSMC might have patched their systems; however, maybe one of the patches that was incompatible with their network was not applied.
It is way more complicated than the simple "OMG TEH UPDATES FAIL!".
rocky12345 - Friday, August 10, 2018 - link
"the fab expects certain shipment delays and additional charges"

So basically it was their fudge-up by not properly checking the machine, or if they did, the tech that checked it most likely never actually checked it. And now they are going to pass the costs off to their clients with additional charges. How is it their clients' fault or problem that they fracked things up? At this point, if I were them, I would be more worried about being sued or losing clients than about trying to recoup what the clean-up cost them to get back up and running. Why should their clients pay for the clean-up, basically?
They screwed up, so get the crap fixed so you can do business, and keep your clients happy; do not try to piss them off even more by overcharging them.
I would also like to point out the only time WannaCry was known to crash systems was when it tried to install on Windows XP, so are they admitting that they have a lot of Windows XP machines in their stables? On Windows 7 it will run in the background for a while as it locks your files down, but it never touches Windows itself, basically because if it were to actually crash your system to the point of a non-usable state, how would it be able to put up that sheet on your screen telling you that you are pretty much fracked, and that if you want your family photos and everything else back then pay the price or else you are hooped? Are they sure they actually had WannaCry and not something else? I have dealt with a lot of WannaCry-infected machines and none of them ever crashed except XP machines.
fuji_T - Friday, August 10, 2018 - link
I think... if the answer to the problem was patching fab tools, they probably would have done it.

Furthermore, updating a tool's software from Windows XP to Windows 7 generally isn't just "insert the USB key, press F2 for the boot menu and boot from said USB key". A lot of the hardware and software is very closely tied together, and upgrading to a new OS, depending on the vendor support, would be very expensive or even impossible. Not every tool in the fab is going to be brand new with all the bells and whistles.
So, I'm not saying that TSMC is in the clear, but please do try and have some grace when making assumptions about what they can or cannot do.
As far as the "additional charges," there are a lot of charges beyond just customer loss like wafer scrap, etc. I'm being purposely vague with this post, but think of what can happen when manufacturing gets interrupted.
rocky12345 - Friday, August 10, 2018 - link
I was saying that the way they made it sound, it was all Windows 7 machines that got infected with this virus. I then went on to say that for the most part, if a Windows 7 machine gets infected with this virus, yes it will cause slowdowns, but that is because all of the user data is getting locked, so the user will not be able to open those files without paying a fee. It would not be in WannaCry's best interest to put the system into a totally non-working state, as in not being able to boot up at all, because if it did, how would it get its nasty little ransom demand posted all over the user's screen? Yes, there may be times that the system might just crash, but that would have more to do with the hardware config and the software installed on the machines that totally crash. Windows XP will just crash if the virus tries to install or run, not because it is a more secure OS but because it is such an old OS that it does not have what is required to let the virus do an auto-install. The weird thing is the virus can actually be installed manually on XP, and it will install, lock your files, and then make the ransom demands.
As for TSMC maybe recouping their losses by charging their clients extra money: again I say, how is it TSMC's customers' problem or fault that they dropped the ball here? Like I said before, they should be more worried about keeping the clients happy despite the delays and lack of product made, and not worry about recouping the money spent on the clean-up, or at least hide it in the price sheet on future products and deals.
iwod - Friday, August 10, 2018 - link
I often wonder which one is worse in an enterprise environment, macOS or Windows, from a security and lockdown perspective.

rocky12345 - Friday, August 10, 2018 - link
That's actually a very good question. If there is anyone involved in that sector, hopefully they can answer it.

JBrickley - Friday, August 10, 2018 - link
Well, IBM seems to think macOS is far better. They have deployed 150,000+ (and counting) employee Macs, saving them hundreds per Mac in licensing and support costs. They can buy the Macs from Apple under the DEP system, so they are zero-touch: Apple ships the Mac still shrink-wrapped to the employee straight from the factory. The employee opens the box, connects power and the network (even the Internet at home), and it phones home to Apple, who, thanks to DEP, looks at the serial number, sees it is an IBM Mac, and redirects it to the IBM JAMF Pro servers, which then enroll the Mac with the MDM. Then all the policies and configuration profiles are applied and software installed. The user sees information displayed about the Mac@IBM program while they wait. It then pops up an IBM app store where they can install Microsoft Office, developer tools, Lotus Notes, etc. The Macs are encrypted and the keys escrowed into JAMF. The Self Service app provides all sorts of handy apps and scripts to fix stuff on your own, and if you have to call the help desk they can remotely manage the Mac. This is worlds better than anything Microsoft is doing with Windows: the users rarely need to call for help and everything is heavily automated. The Macs check into the MDM on the corporate LAN and on the Internet, and if the user does something they are not supposed to, like enabling something IBM wants disabled, it will either completely prevent the user from doing so or disable it when the Mac reconnects to the MDM on its regular check-in cycle. The Macs also last longer than the PCs.

On Macs, most things can be locked down with Configuration Profiles; the rest can be scripted, and Apple keeps adding to the Config Profiles every year. Apple's new T2 64-bit ARMv8 co-processor controls SSD encryption, provides a secure enclave, and supports Secure Boot, so you can lock the machines down so they cannot boot from USB and the boot chain cannot be infected by malware.
This brings it much closer to being like an iPhone or iPad, with hardware-level security. The future will only tighten this security.

All that is great, but I don't see Apple macOS being used with manufacturing tooling and custom machines. That's a space ideal for Linux, if only there were easier developer tools and APIs. The reason Windows is used is that there are more developers who can code for it. Modern systems would use Win10 and C# applications to run the machines, whereas old machines were WinXP / Win7 and VisualBasic / C#.
JBrickley - Friday, August 10, 2018 - link
Dig through the JAMF YouTube channel for the JAMF Nation Conferences; there are a few IBM presentations talking about how they leveraged JAMF Pro to manage their Macs. Most of Silicon Valley is using Macs because they are building Linux-based cloud solutions, and the Mac is Unix under the hood and plays very well in that space. There are a lot of different ways to manage them besides JAMF, such as Chef, Puppet, alternative MDMs like SimpleMDM, Munki, etc.

Microsoft is becoming much more cloud-developer friendly as of late because they see the threat that Apple and the cloud present to Microsoft. So they are playing along, with SQL Server being ported, better support in everything for cloud tech, SSH/SSHd in beta for Win10, and the Linux Subsystem for Win10. Those last two go a long way toward bringing developers to Win10 instead of Mac, but it's still not there 100% yet. But the days of PC vs Mac are pretty much over.
JBrickley - Friday, August 10, 2018 - link
The problem is all the manufacturing tooling machinery that relies on old versions of Windows that haven't been patched against vulnerabilities. Whatever vendor provided the new tooling introduced a WannaCry variant into the internal production line network, which then unleashed the worm across all the vulnerable machines and shut down production. This is absolutely insane! These machines should not be running old vulnerable Windows operating systems; they should probably be running a hardened embedded Linux, and they should be patched. But patching these vulnerable Windows systems would probably break the tool just as badly as the malware did. It's really horrific how these very expensive machines are controlled by such god-awful software running on ancient versions of Windows. Yes, it gets the job done, but at what cost? So they hired programmers who could only deal with Windows instead of something a lot more rock solid. So sad... Really...

zamroni - Saturday, August 11, 2018 - link
If TSMC's IT guys hadn't patched against WannaCry until now, I wonder about the IT guys at other, less IT-focused companies.