The quest for cyber-trust

By Robert Bartram

As technology becomes ever more sophisticated, offering enhanced opportunities but also new vulnerabilities and threats, organizations of every type risk leaving themselves open to malicious attacks or data breaches on a massive scale. Risk management, therefore, is just as vital in cyberspace as it is in the physical world. But what are these cyber-risks? How can International Standards help mitigate them? And is ever more sophisticated technology really the only answer?

The Oxford English Dictionary definition is certainly clear enough: “risk”, it says, is “a situation involving exposure to danger”. Risk must be taken to achieve results, but it must also be managed, both to secure positive outcomes and to avoid negative consequences.

Avoiding risk altogether is impossible. Taking risks is an inevitable and necessary part of all our lives, both personal and professional. Indeed, if any company or organization in today’s highly competitive world were to pretend that its activities carried no risk – in effect, that risk did not exist – then, quite apart from defaulting on its statutory and legal obligations, it would very quickly fold and disappear from sight.

But risk can also be a force for good. Managing risks successfully can have positive results, and companies need to take risks in order to achieve their objectives. Organizations quite naturally need a degree of certainty before taking important strategic decisions, and it is essential to understand that risk is really about the likely impact of uncertainty on those decisions. In short, risk is about managing decisions in a complex, volatile and ambiguous world, one that is fast becoming even more complex and ambiguous.


The digital threat

This is particularly true in the field of cyber-risk. In cyberspace, high levels of uncertainty routinely arise from issues of national and corporate security. The threat comes not from the context and circumstances of the marketplace, but from “malicious actors” attempting serious acts of criminality. They are also, in effect, invisible, and like the ghosts and phantoms of ancient folklore, their invisibility only heightens the sense of threat. These malicious actors have both the intent and the capability to do harm, and they are agile and adaptive.

Moreover, technology is becoming more sophisticated by the day, if not by the hour. In the past, a successful industrial criminal could perhaps steal a briefcase’s worth of documents left carelessly on a desk. Now, with USB sticks or exfiltration exploits, the same criminal can steal gigabytes of information – the hard-copy equivalent of suitcases stacked all the way to the moon. And it is not just that data storage has been transformed – from paper to digital – but that the nature and purpose of data have themselves changed. A criminal intent on stealing protected medical products, for instance, no longer needs to break into a storeroom, but can copy the data in digital format and clone the product with a 3D printer.

It goes without saying that organizations of every kind need “cyber protection” of one form or another. Not only that, they also need a system robust enough to alert them to any attack – real or perceived – as quickly as possible. Threats in cyberspace fall into two broad categories: internal and external. To design and successfully run a system of protection against external threats, the emphasis must be placed on the intentions and capabilities of external malicious actors – what they are after, why they are after it, and what technologies are available to them.

But organizations also need to prepare for the threat both of malicious insiders and insiders who have mistakenly left the system vulnerable to possible harm. Careless use of personal data can expose an individual to blackmail and recruitment by an organization with nefarious purposes. Organizations can have the best firewalls in the world, but these mean nothing when faced with an insider with high levels of access who can steal information without being detected.

What really counts

So how do governments, businesses and individuals protect themselves from these threats? ISO technical committee ISO/TC 262, Risk management, has developed ISO 31000, which provides a framework of principles and processes for managing risk of any kind. Jason Brown is the Chair of ISO/TC 262 and has been, amongst other things, responsible for managing cybersecurity assessment and assurance in the Australian Defence Department. He points out that, as with all risk management, if an organization is serious about protecting itself from cyber-risks, it needs to “go back to the objectives of the enterprise and look at what really counts – in other words, know its digital crown jewels”.

Businesses and governments have to carefully assess the value and nature of what they hold dear. For instance, if an organization is the guardian of high-level technical intellectual property in data form, it is obvious that the leaking or theft of such data would have enormous consequences for them. The consequences could, however, be even more destructive if this information were held on behalf of others who are reliant on that organization as part of a supply chain, as a breach in the system could mean the undoing of the entire chain. What counts in the first instance, therefore, is a strategic systemic overview, and not an assessment of the technology itself.
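
To make this concrete, here is one very simple way an organization might rank what it holds before thinking about technology at all. This is a toy sketch only – the asset names, the 1-to-5 scales and the likelihood-times-impact scoring are invented for illustration and are not prescribed by ISO 31000.

    # Illustrative sketch: a toy "risk register" that ranks an organization's
    # digital crown jewels by a simple likelihood x impact score.
    # Assets and scales are hypothetical, not taken from any standard.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        asset: str       # what is at stake
        likelihood: int  # 1 (rare) to 5 (almost certain)
        impact: int      # 1 (negligible) to 5 (catastrophic)

        @property
        def score(self) -> int:
            return self.likelihood * self.impact

    register = [
        Risk("Customer database", likelihood=3, impact=5),
        Risk("Product design files (IP)", likelihood=2, impact=5),
        Risk("Public marketing site", likelihood=4, impact=2),
    ]

    # The highest-scoring assets are the crown jewels to protect first.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(f"{risk.asset:30} score={risk.score}")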

This approach chimes with that of Dr Donald R. Deutsch, Vice President and Chief Standards Officer of Oracle, based in California, and Chair of technical committee ISO/IEC JTC 1, Information technology, subcommittee SC 38, Cloud computing and distributed platforms, a group of experts working under the joint stewardship of ISO and the International Electrotechnical Commission (IEC). The cloud, and its position in the hierarchy of risk, has perhaps the greatest immediate significance for everyday consumers. If we use a computer nowadays, it’s highly likely we will also be using the cloud. But “cloud computing”, says Dr Deutsch, “is more of a deployment and business strategy than it is a technological strategy”. There are certainly recent technological enhancements that come with risks attached – such as the automatic provisioning of computing resources that are shared by multiple users – yet “the risks are much the same as you would have in any computing environment, but exacerbated and magnified by the scale”.


The price of resilience

International Standards underpin this strategic approach to cyber-risk. As Jason Brown points out, when addressing cyber-risks, the ISO 31000 series should be applied in conjunction with the ISO/IEC 27000 series on information security management systems, or ISMS for short. Such an approach balances a focus on technology with one on “human factors”. ISO/IEC 27000 will help an organization assess its purely technological needs, whereas ISO 31000 will help it understand the value of the information or products it holds in cyberspace, and therefore the degree of technological protection it needs to prevent attacks. To put it another way: a thorough risk assessment using ISO 31000 could save an organization a substantial financial outlay when it comes to purchasing technological security. Ignorance of risk can lead just as easily to paying too much for a protective system as to paying too little.

But these two series of standards are by no means the only ones that can help mitigate cyber-risks. Cybersecurity also has to be looked at in terms of business continuity, and the ISO 22301 series for business continuity management does exactly that. This series provides for a “documented management system to protect against […] disruptive incidents when they arise” and enables an organization to assess how its information and telecommunications systems support its objectives, and what the consequences would be should they collapse. An organization’s investment in cybersecurity may be driven by how dependent it is on those systems; a small business may be able to continue with (or even return to) paper-based receipts, whereas a giant such as Amazon depends entirely on connectivity.
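
As a rough illustration of the kind of dependency assessment such a continuity exercise informs, the sketch below estimates the direct cost of an outage. The function and all figures are invented for this example; a real business impact analysis under ISO 22301 is far richer.

    # Illustrative sketch: a toy estimate of what an outage costs, showing
    # why investment in cybersecurity scales with dependency on the system.
    # All figures are invented for this example.
    def downtime_cost(revenue_per_hour: float, hours_down: float,
                      recovery_cost: float = 0.0) -> float:
        """Direct cost of an outage: lost revenue plus recovery."""
        return revenue_per_hour * hours_down + recovery_cost

    # A small shop that can fall back on paper loses little...
    print(downtime_cost(revenue_per_hour=50, hours_down=8))         # 400.0
    # ...while a connectivity-dependent retailer loses vastly more.
    print(downtime_cost(revenue_per_hour=1_000_000, hours_down=8))  # 8000000.0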

Likewise, the work of ISO/IEC JTC 1/SC 38 helps producers – and therefore ultimately consumers – speak a common language for cloud computing. Crucially, the demand for this set of standards was not driven, as is usually the case, by the producers or sellers themselves, but by the customers and buyers. Governments and corporations pointed out that each producer was using its own terminology, making it impossible to compare products and make an informed choice about which one to opt for. This led to the publication of ISO/IEC 17789, Information technology – Cloud computing – Reference architecture, which established a reference architecture and a framework of common vocabulary. Subcommittee SC 38 also oversaw the creation of ISO/IEC 19086, a four-part standard on service-level agreements between cloud providers and their customers, of which two parts are still in development.
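
By way of illustration, a service-level agreement of the kind ISO/IEC 19086 addresses might specify a monthly availability target. The sketch below checks a measured figure against such a target; the 99.9% target and the arithmetic are hypothetical examples, not values taken from the standard.

    # Illustrative sketch: checking measured cloud availability against an
    # agreed service level. The 99.9% target is a hypothetical example;
    # ISO/IEC 19086 defines SLA concepts and metrics, not these numbers.
    MINUTES_PER_MONTH = 30 * 24 * 60  # 43 200 in a 30-day month

    def availability(downtime_minutes: float) -> float:
        """Availability over the month, as a percentage."""
        return 100.0 * (1 - downtime_minutes / MINUTES_PER_MONTH)

    target = 99.9                                 # agreed in the SLA
    measured = availability(downtime_minutes=50)  # about 99.884%
    print(f"{measured:.3f}% - SLA {'met' if measured >= target else 'breached'}")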


Quantum leap

There can be no doubting the positive impact that all these standards have had on cybersecurity in general and cyber-risks in particular. ISO 31000 has been adopted by approximately 40 countries as their national system for risk management. As if this weren’t enough, Google returns over 6.5 million hits in 0.54 seconds when “ISO 31000” is typed into its search engine.

But as technology develops at an ever faster rate, International Standards must keep pace. The tools that work today may not work in the future. As machine learning moves towards artificial intelligence, for instance, systems are likely to acquire an adaptive learning capacity – even a “philosophical” capability – that simply does not exist in today’s world. Data analytics is developing to the point where vast amounts of data can be analysed to pinpoint emerging issues that would not otherwise be detectable. Separately, the advent of quantum computing will increase the speed of computing exponentially. The conjunction of these three changes in the cyber world will “probably be the most disruptive thing we’ve had since the discovery of electricity or the atom,” says Jason Brown. And that is without even taking into account nanotechnologies or the ever-increasing interconnectivity of all things.

When these factors do eventually combine, the competition for advantage – in business, between countries and, not least, between adversary and principal – will be massively accelerated. So much so that, while human input will probably still establish the risks around objectives, the actual human capacity to deal with cyberspace could be negligible. ISO/TC 262 is currently examining an area it has labelled “Managing Emerging Risks”, focusing on those risks likely to prove the most disruptive. As Brown makes clear, both consumers and producers need to approach the future differently, and all of us will “have to be much more open to this highly volatile and highly ambiguous world”.
