WannaCry NHS attack - busting the myths
Des Ward, information governance director at Innopsis, reflects on the real story behind the WannaCry cyber-attack.
Just over a month ago, the headlines were screaming about a cyber-attack against the NHS: the nightmare scenario of Denial of (public) Service was upon us. WannaCry ransomware was tearing through the world, encrypting everything in its wake and wreaking havoc.
It was a great case study in the way that cyber security confuses everyone. Very little of what was reported at the time was accurate.
Was the attack really aimed at Windows XP?
The issue was initially reported as a weakness that affected all systems from Windows XP to Windows 10 – using malware created by the NSA that was stolen and released by hackers onto the dark web.
As this weakness affected Windows XP, the issue had to be the use of Windows XP, didn't it?
The reality, however, is very different: the hacking tools used were unreliable on Windows XP and Windows 10, but effective on Windows 7 and Server 2008. It now looks increasingly likely that:
- The ransomware entered companies because a file-sharing port (TCP 445, used here by the legacy SMBv1 protocol) was exposed to the internet
- The first hacking tool exploited the weakness on the system accessible from the internet
- This ransomware spread as the second hacking tool took control of the computer and continued scanning inside the network
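A simple way to act on the first point is to check whether the SMB port is reachable at all. The sketch below is illustrative only (it was not part of any incident response described here), and the host name used is a placeholder, not a real system:

```python
import socket


def smb_port_open(host: str, port: int = 445, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given port succeeds.

    Port 445 is the standard SMB port; if this succeeds from an
    untrusted network, file sharing is exposed where it shouldn't be.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False


if __name__ == "__main__":
    # "fileserver.example.internal" is a placeholder host name.
    host = "fileserver.example.internal"
    status = "OPEN" if smb_port_open(host) else "closed/filtered"
    print(f"{host}: TCP 445 {status}")
```

Run from outside the network boundary, a check like this would have flagged exactly the exposure that let the ransomware in.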
Was the attack really aimed at the NHS?
The NHS was the name most commonly cited in early reporting of the issue, leading to many commentators describing it as an “attack on the NHS”. In reality, the NHS wasn’t a particular target. As of 20th May, there were an estimated 300,000 affected computers in more than 74 countries.
The spread of WannaCry had the attributes of an infectious disease, not a targeted attack. It propagated into any system where there were vulnerable points of entry. There were reports of infections in Spain, the UK, Portugal, Germany, China, Russia and others.
Organisations confirming they were affected included FedEx, Nissan, Deutsche Bahn, Hitachi, the Russian Central Bank, Telefonica and the NHS. Car manufacturers Nissan and Renault confirmed that they halted car manufacturing to contain the attack.
The business impact of WannaCry was primarily the result of the control measures that organisations implemented to stop it from spreading, not of the ransomware itself. In many cases, email systems were suspended to prevent it from moving from one system to another, and remote access services were also halted.
Was this a new type of threat?
WannaCry may appear like a new and dangerous type of threat, but it's not. This type of attack was used against Sony in 2014 and, 14 years ago, in Blaster. It has been prevalent for over a decade and can be protected against by firewalls and regular patching. In fact, a patch against this specific vulnerability was made available almost two months prior to the attack (well within the timescales required by both Cyber Essentials and PCI DSS). The publication of the NSA tools made the tabloid press, the weakness was easy to exploit, and the patch had been available for over a month – all of which should prompt us to question why this happened on such a scale.
So, it wasn’t new, it wasn’t that clever and it was straightforward to prevent; so how did WannaCry manage to cause such extensive disruption, and what can the public sector do to protect against future attacks?
Technical and systems controls don’t cover it
Most cyber security controls and standards are based on a technical or systems-based view of the world. They assume that if you define the scope of a system, application or network, then you can control and manage the associated security risks.
However, modern organisations are too complex and change too much for these approaches to work. The scope of each security policy can be drawn so that devices, and even entire networks, fall outside it. For example, Cyber Essentials doesn’t include applications or connected cloud services in the assessment – how many organisations are there today that don’t use applications or have any connected cloud services?
The challenge has always been that scoping compliance programmes is preferable for businesses, as it reduces the perceived burden on the organisation’s ability to operate. However, the following examples show why this approach is no longer tenable:
Acquisitions and corporate changes mean the technical environment is constantly changing. Looking at TalkTalk in 2015, the attack was possible because a web server had lain untouched since Tiscali was taken into the company in 2009. That cost TalkTalk over £400,000 in fines and far more in the cost of providing credit checks for their customers.
The impact of poor information governance is rarely assessed as part of cyber security standards. In Dr Deer vs Oxford University, the cost of responding to a Subject Access Request under the Data Protection Act 1998 rose to £116,000 because of the need to search electronic storage. This was not a fine, but it would easily have made the top ten of penalties relating to personal data.
Protecting against an attack can also cause disruption. With WannaCry, the likely reason for the outage of NHS services was computers being switched off – a case of simply not knowing what was happening and taking steps to protect data. This created an effective Denial of Service condition whose costs have not yet been tallied.
We have reached a tipping point with regards to how far scoped compliance for cyber initiatives can take us on their own. We need to ensure that the organisation knows how, where and why to use the plethora of security tools, devices and advice.
The need for governance
The need for information governance to augment cyber initiatives is very real. Within the UK, a basic public sector organisation has over 40 separate laws to comply with when managing information (for private sector organisations the figure rises to over 50), which amount to the following activities for assessment:
- the location of the data (how do you know where it is being stored, or if it has been deleted?)
- the format of the information (what is the asset?)
- the usage requirements (what purpose is the information acquired for?)
- the disclosure requirements (can you share it, and what are the requirements?)
- the retrieval requirements (the retention period and whether you can access the information throughout that period?)
- the handling requirements (does it need encryption, where can it be accessed from, what right of audit is there?)
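One way to make these six activities concrete is to capture them as fields in a simple record per information asset. The sketch below is purely illustrative – the class, field names and example values are assumptions for this article, not drawn from any standard or real register:

```python
from dataclasses import dataclass, field


@dataclass
class InformationAsset:
    """Illustrative record covering the six assessment activities above.

    All field names and values here are invented for illustration.
    """
    name: str                # the format: what the information asset is
    locations: list[str]     # where it is stored (and whether it can be deleted)
    purpose: str             # the usage requirement: why it was acquired
    disclosure: str          # whether, and under what conditions, it may be shared
    retention_years: int     # retrieval: how long it must remain accessible
    handling: dict[str, str] = field(default_factory=dict)  # encryption, access, audit


# A hypothetical entry for a single asset.
patient_letters = InformationAsset(
    name="Outpatient appointment letters",
    locations=["on-premises file share", "cloud document store"],
    purpose="patient correspondence",
    disclosure="patient and care team only",
    retention_years=8,
    handling={"encryption": "at rest and in transit", "audit": "access logged"},
)
print(patient_letters.name, "- retain for", patient_letters.retention_years, "years")
```

Even a minimal register like this forces the questions above to be answered asset by asset, rather than left implicit in a compliance scope.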
My view is that current cyber approaches are hampered by the lack of a standardised approach that concentrates on information. This is supported by Rob Wainwright, director of Europol, who believes that the recent failings in cyber defences were more to do with lack of leadership in large organisations than lack of IT investment.
The issue, therefore, is not addressed merely by buying more technology, but looking at how, why and where the technology is used.
Lessons to be learned
We need to learn from WannaCry in the following ways:
- Work out how, why and where you are undertaking cyber activities - revisit those applications/systems you thought you didn’t need to patch or update because they weren’t in a compliance scope – hacking tools don’t respect scopes
- Review your firewalls to ensure that they are only allowing the access to the applications, networks and/or systems you intend
- Look at your information and understand how well you could answer the activities identified above – they will be crucial for the General Data Protection Regulation (GDPR), which entered into force in 2016 but does not apply until May 2018
- Use this understanding of information to review all your applications/systems again
The clock is ticking for action and the next wave of attacks is quite possibly on the way – this time using seven NSA tools, and far more subtle.
The fix, however, remains the same – ensure that you manage information and related applications/systems within your entire estate.