COMP 4108 Notes, Chapter 1: Important Ideas
As always, these notes are adapted from Computer Security and the Internet: Tools and Jewels from Malware to Bitcoin by Paul C. Van Oorschot. You can find the main page with the rest of my notes here.
When we study computer and internet security (aka cybersecurity, in most circles), we are primarily interested in how people interact with software, computer systems, and networks, and in how these can be misused by various agents. We are typically not concerned with unintentional mistakes or other types of damage (such as a network failure caused by an outage or a natural disaster). Those are reliability and redundancy concerns, which, while relevant and valuable in security contexts, are not the main focus of cybersecurity.
Security goals and services
The fundamental goals of cybersecurity are also known as security services: security properties that, when delivered, help meet the security expectations placed on a system. There are six fundamental properties:
- confidentiality: protected (or non-public) data (or information) should only be accessible to authorized parties, whether it is at rest or in transit
- Confidentiality is supported by access control, and can be provided by technical means (such as encryption) or procedural means (such as physical access procedures)
- integrity: data, hardware, and/or software should remain unaltered, except by authorized parties
- Access control, digital signatures, and cryptographic checksums are often used to combat malicious integrity violations.
- availability: information, services, systems and resources should remain accessible for authorized use
- Availability requires not only reliable hardware/software, but also protection from intentional deletion and disruption (for example, denial-of-service attacks).
- authorization: resources should only be accessible to authorized entities
- Authorized access is achieved through access control mechanisms.
- authentication: the assurance that the claimed identity of a principal matches its actual identity (entity authentication), or that the source of data or software is as claimed (data origin authentication).
- Data origin authentication and data integrity are tightly linked.
- accountability: the ability to identify principals (aka entities) responsible for previous actions.
- This is supported by authentication, and achieving the accountability property often involves logging and collecting transaction histories.
The first three properties are often grouped together and known as the CIA triad. I also like to see the last three as being connected, because they all rely on being able to identify a user.
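To make the link between integrity, cryptographic checksums, and data origin authentication concrete, here is a minimal Python sketch using an HMAC; the key and messages are invented for illustration, and a real system would manage the key far more carefully.

```python
import hashlib
import hmac

# Illustrative shared secret between sender and receiver (not how real keys are managed).
SHARED_KEY = b"example-shared-secret"

def make_tag(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag (a keyed cryptographic checksum) over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time.

    A match supports both integrity (the message was not altered in transit)
    and data origin authentication (it was produced by a holder of the key).
    """
    return hmac.compare_digest(make_tag(message), tag)

msg = b"transfer $100 to account 42"
tag = make_tag(msg)
print(verify(msg, tag))                             # True: unmodified message
print(verify(b"transfer $900 to account 7", tag))   # False: integrity violation detected
```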
Some additional notes:
- confidentiality is related to, but not the same as, privacy and anonymity (which are not the main focus of security)
- Just because an entity is trusted doesn’t mean it is trustworthy
Security policies
“Secure” is a very ambiguous term, so security policies are often used to describe exactly what the security requirements of a particular system are. In an ideal world, all rules implied by the security policy should have an associated enforcement mechanism.
In theory, a precise security policy should define all possible states of a system and classify them as either being authorized or unauthorized. A policy violation would then occur if the system ever moved into an unauthorized state, and a secure system would prevent all policy violations. In practice, no one actually does this since it is difficult and time-consuming (I also suspect it is largely impossible to do this precisely). Many real-world security policies are informal documents with guidelines and expectations surrounding known security issues.
Security policies are enforced and supported by controls and countermeasures, which could be processes or technical means.
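As a toy illustration of a policy that classifies requests as authorized or unauthorized, plus an enforcement mechanism for it, here is a minimal Python sketch; the subjects, resources, and actions are hypothetical.

```python
# Toy security policy: the set of (subject, action, resource) triples that are
# authorized. Anything not listed is unauthorized.
POLICY = {
    ("alice", "read",  "grades.db"),
    ("alice", "write", "grades.db"),
    ("bob",   "read",  "grades.db"),
}

def enforce(subject: str, action: str, resource: str) -> bool:
    """Enforcement mechanism: permit the request only if the policy authorizes it."""
    allowed = (subject, action, resource) in POLICY
    if not allowed:
        print(f"policy violation: {subject} may not {action} {resource}")
    return allowed

enforce("bob", "read", "grades.db")    # authorized
enforce("bob", "write", "grades.db")   # unauthorized -> reported as a policy violation
```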
Not all threats are credible, and not all attacks or attack vectors are equally viable. A threat agent might not take action if they lack resources, incentive, or a specific objective. The goal of security is to protect assets by either mitigating or removing the most viable attack vectors.
Risk assessment and risk management
The goal of risk assessment is to estimate the probability of success of various attacks and to estimate the impact or expected losses that these would cause. This can be done either quantitatively or qualitatively. Quantitative risk assessment is extremely difficult to do correctly due to incomplete knowledge of adversaries and/or vulnerabilities and the difficulty of correctly estimating the values of different assets; it can often be misleading as a result.
Individual threats are best analyzed in the context of the specific vulnerabilities that will be exploited to create the attack vector.
Important questions for risk assessment:
- Which assets are the most valuable, and what are their values?
- What system vulnerabilities exist?
- What are the relevant threat agents and threat actors?
- What are the associated estimates of attack probabilities, or frequencies?
Most practical risk assessments are based on qualitative ratings (labelling risks as low, medium, medium-high, or high, for example) and comparative reasoning (comparing risks across assets to see which ones should receive the most attention or protection). This is sometimes done using something called a risk rating matrix.
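Here is a rough sketch of what a qualitative risk rating matrix might look like in code; the likelihood/impact levels, the ratings, and the example threats are all invented for illustration.

```python
# Illustrative risk rating matrix: (likelihood, impact) -> qualitative rating.
RATING = {
    ("low",    "low"):  "low",
    ("low",    "high"): "medium",
    ("medium", "low"):  "medium",
    ("medium", "high"): "medium-high",
    ("high",   "low"):  "medium",
    ("high",   "high"): "high",
}

# Comparative reasoning: rate a few hypothetical threats to see which assets
# should receive the most attention or protection.
threats = {
    "theft of the password database": ("medium", "high"),
    "website defacement":             ("high",   "low"),
    "office Wi-Fi outage":            ("low",    "low"),
}
for name, (likelihood, impact) in threats.items():
    print(f"{name}: {RATING[(likelihood, impact)]}")
```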
Not all risks can be eliminated, so risk management suggests several ways of dealing with risks. Risk can be:
- mitigated through procedural or technical means
- transferred to third parties through insurance
- accepted, in hopes that the cost associated with absorbing the risk is less than the cost of mitigating or transferring it (see the rough numeric sketch after this list)
- eliminated, by removing the part of the system that carries the risk
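As a rough numeric illustration of how these options can be compared, here is a small Python sketch; every figure is invented, and in practice the probability and loss estimates are exactly the hard part.

```python
# Hypothetical estimates for a single risk (all values invented for illustration).
attack_probability_per_year = 0.10       # estimated chance of a successful attack each year
loss_if_attack_succeeds     = 200_000    # estimated damage in dollars
expected_annual_loss = attack_probability_per_year * loss_if_attack_succeeds   # 20,000

# Rough annual cost of each way of dealing with the risk.
options = {
    "accept":   expected_annual_loss,    # absorb the expected loss
    "mitigate": 35_000,                  # procedural/technical countermeasure
    "transfer": 15_000,                  # insurance premium
}

choice = min(options, key=options.get)
print(choice, options[choice])           # transfer 15000 (under these made-up numbers)
```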
Security analysis
An important part of security analysis is identifying potential adversaries (or groups thereof) as well as their motivations or capabilities. This is known as adversary modelling. Some important attributes to consider while adversary modelling include:
- the adversary’s objectives – these will suggest which assets might be targeted and therefore need special protection
- methods – the techniques or types of attacks the adversary is likely to use
- capabilities – the skills, resources, knowledge, and access available to the adversary
- funding level – this will influence the adversary’s motivations, objectives, and capabilities
- outsider vs. insider – an adversary that is an insider likely has an advantage due to access, knowledge, or credentials
Some common groups of adversaries, roughly descending in level of capabilities and resources, include:
- foreign intelligence, nation-state actors, and government-funded agencies
- cyber-terrorists and politically motivated adversaries
- industrial espionage agents
- organized crime groups
- lesser criminals and hackers
- malicious insiders, such as disgruntled employees
- non-malicious employees, who are often security-unaware
The goal of security analysis is to consider a system's architecture and components, identify where protection is needed, identify whether and/or how existing designs meet the security requirements, plan defense mechanisms for addressing threats, and note which threats are mitigated. Threat modelling, then, is the cornerstone of security analysis. In addition to this, black-box testing, security review of design documents, and source code review can uncover vulnerabilities. Security analysis also benefits from experience and from the insights captured in design principles.
Threat modelling techniques
A good threat model should identify and take into account all assumptions made about the target system, environment, and/or attackers. There are many different approaches to threat modelling, including diagramming, checklists, attack trees, STRIDE, and hybrid approaches.
Diagram-driven threat modelling
The diagram-driven approach to threat modelling starts with an architectural diagram or representation of the system. On this diagram, one should mark system gateways which filter or restrict communications. It is also important to mark different trust zones and the assumptions associated with them. The idea is to add as much information to the diagram as possible, including any linkages, flows, connections, and assumptions about security. Now look at every part of the diagram and ask where things could go wrong.
Useful questions:
- How might the trust assumptions, or expectations about who controls what, be violated?
- Where can bad things happen, and how?
- Is there a possibility of information/data access from non-standard paths? What about from backup media or cloud storage?
- Who has access to various parts of the system? Who performs maintenance, or installs new hardware?
- Is there any way for an attacker to perform a person-in-the-middle attack?
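One way to capture a simplified version of such a diagram in code and mechanically flag the flows that cross trust boundaries (where gateways, filtering, and authentication belong) is sketched below; the components, zones, and flows are hypothetical.

```python
# Hypothetical components from the architectural diagram and their trust zones.
ZONE = {
    "browser":    "internet",
    "web_server": "dmz",
    "app_server": "internal",
    "database":   "internal",
}

# Data flows marked on the diagram: (source, destination, description).
FLOWS = [
    ("browser",    "web_server", "HTTPS requests"),
    ("web_server", "app_server", "API calls"),
    ("app_server", "database",   "SQL queries"),
]

# Flows that cross a trust zone boundary deserve extra scrutiny when asking
# "where could things go wrong?"
for src, dst, desc in FLOWS:
    if ZONE[src] != ZONE[dst]:
        print(f"crosses trust boundary ({ZONE[src]} -> {ZONE[dst]}): {src} -> {dst} [{desc}]")
```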
Attack trees
Attack trees are very useful for identifying attack vectors for a specific attack goal. Nodes in the tree can be annotated with details indicating that a step is infeasible, or with values indicating costs or mitigation measures.
The process of constructing attack trees is best done iteratively:
- the tree can be extended with new paths or attacks suggested by colleagues
- the tree can be merged with other, independently constructed trees
This is essentially a form of brainstorming that requires creativity and improves with experience. It also requires the security architects to “think like attackers.”
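A minimal sketch of an attack tree with OR/AND nodes, annotated with rough attacker costs, and a helper that finds the cheapest attack vector; the goal, sub-attacks, and cost values are all made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """An attack tree node. Leaves carry an estimated attacker cost; internal nodes
    combine children with OR (any child suffices) or AND (all children are needed)."""
    name: str
    kind: str = "leaf"           # "leaf", "or", or "and"
    cost: float = 0.0            # only meaningful for leaves
    children: list = field(default_factory=list)

def cheapest(node: Node) -> float:
    """Estimated minimum cost for an attacker to achieve this node's goal."""
    if node.kind == "leaf":
        return node.cost
    child_costs = [cheapest(c) for c in node.children]
    return min(child_costs) if node.kind == "or" else sum(child_costs)

# Hypothetical tree: the root goal can be reached through any of three branches.
tree = Node("read a user's email", "or", children=[
    Node("guess the password", cost=5_000),
    Node("phishing", "and", children=[
        Node("craft a convincing email", cost=500),
        Node("host a fake login page",   cost=300),
    ]),
    Node("compromise the mail server", cost=50_000),
])

print(cheapest(tree))   # 800: the phishing branch is the cheapest attack vector
```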
Checklists
There are multiple existing lists of known attacks that a security architect can use while designing a system. The benefit is that there are many of them out there and they always contain the well-known threats. The downside is that they are not system-specific and could overlook design- or environment-specific threats. Checklists are very useful as a complementary tool or cross-reference when using other threat modelling methods, to make sure nothing obvious was missed.
STRIDE
This is a list of six keywords that should help recall different ways in which security might be violated – six categories of problems or threats, really.
The six keywords are:
- Spoofing
- Tampering
- Repudiation
- Information Disclosure
- Denial of Service
- Escalation of Privilege
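Each STRIDE category lines up with one of the six security properties from the start of the chapter (with repudiation threatening accountability/non-repudiation); a small sketch of that mapping:

```python
# Each STRIDE threat category primarily violates one fundamental security property.
STRIDE = {
    "Spoofing":                "authentication",
    "Tampering":               "integrity",
    "Repudiation":             "accountability",
    "Information Disclosure":  "confidentiality",
    "Denial of Service":       "availability",
    "Escalation of Privilege": "authorization",
}

for threat, violated_property in STRIDE.items():
    print(f"{threat:24} threatens {violated_property}")
```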
Threat modelling, model-reality gaps, and real-world outcomes
Gaps in threat modelling
A threat model’s quality is highly dependent on how accurately the model reflects the details of the system it is trying to protect. It is often very difficult for a high-level, abstract model to include all of the details in a system, which can be dangerous because details are important in security.
A mismatch between a model and reality can give a fatal false sense of security. Model-reality gaps arise from two major factors:
- invalid assumptions and misplaced trust
- focus on the wrong threats
Another issue that can lead to a broken threat model is the failure to identify and explicitly record implicit assumptions so that they can be scrutinized. Getting threat models wrong, or only partially correct, is the source of many unmitigated attacks in the real world.
If a security proof relies on an incorrect assumption, no matter how tight the proof is, it will not provide the claimed protection. Alternatively, a proof can be correct, but the model might be incomplete and not consider other viable attack paths.
Tying security policies to real outcomes
Security can be very ambiguous, and using clear language (for example, agreeing on what “secure” actually means) is very helpful.
Some key questions to shape the development of a security policy:
- What are the protection goals?
- What assets are valuable?
- What potential attacks put the assets at risk?
- How can potentially damaging attacks be mitigated or stopped?
Options to mitigate future damage include:
- prevention by countermeasures
- detection, response, and recovery (quick recovery can reduce impact, and backing up data is extremely important for this reason)
- reducing consequences, for example by obtaining insurance
Security is unobservable
It is very difficult to test whether security requirements have been met, because 1) only known and existing attacks can be tested against and 2) it is not possible to test for certain classes of attacks. This means that testing is always incomplete.
The problem is that proving that a system is secure requires showing that there are no exploitable flaws in the system, but it is impossible to test for the absence of problems. We can’t observe security or demonstrate it; we can only observe when security is missing. Furthermore, just because no bad things are being observed doesn’t mean that bad things aren’t happening – there could be unnoticed or unobservable bad things happening.