COMP 4108 Notes, Chapter 1: Security Principles and Why Security is Hard
As always, these notes are adapted from Computer Security and the Internet: Tools and Jewels from Malware to Bitcoin by Paul C. van Oorschot. You can find the main page with the rest of my notes here.
Design Principles for Security
- Simplicity and necessity: designs should be as simple and small as possible. Minimize functionality, favour minimal installs, and disable unused functionality. Aka: minimize the attack surface.
- Safe defaults: deny-by-default. Design systems to fail closed (denying access) and favour allowlists over denylists. (See the allowlist sketch after this list.)
- Open design: avoid relying on security by obscurity. (Also see Kerckhoffs' principle.) Invite open review and analysis. This does not, however, mean that unpredictability can't be leveraged, or that it's fine to expose sensitive data that could leak secrets.
- Complete mediation: authorization to access an object should always be verified immediately before access is granted. (Otherwise you may run into time-of-check-to-time-of-use (TOCTOU) errors; see the TOCTOU sketch after this list.)
- Isolated compartments: system components should be isolated from each other – this could be through the use of containers, process/memory isolation, firewalls, zoning, etc. This limits damage in the case of failures and helps prevent privilege escalation.
- Least privilege: similar to the military need-to-know principle – allocate the least amount of privilege for the least amount of time necessary. (See the privilege-dropping sketch after this list.)
- Modular design: related to the separation-of-duties principle. Avoid monolithic designs that embed full privileges into single large components.
- Small trusted bases: minimize secrets; minimize the size and complexity of trusted components.
- Time-tested tools: do not write or implement your own security tools; rely on expert-built tools, protocols, and cryptographic primitives that have stood the test of time.
- Least surprise: align the design and functionality of user interfaces with users' mental models of how they should work.
- User buy-in: design security mechanisms that users are motivated (or at the very least willing) to use rather than bypass. Anything perceived as too time-consuming or inconvenient may remain unused, or worse, be circumvented.
- Sufficient work factor: the cost to defeat a security mechanism should be several orders of magnitude larger than the expected resources of anticipated adversaries. (See the password-hashing sketch after this list.)
- Defence-in-depth: defences should be built in multiple layers that back each other up so that there is no single point of failure. Redundancy is good.
- Evidence production: use logging and monitoring tools to record system activity, to promote accountability and help with forensic analysis when things go wrong.
- Datatype validation: validate inputs and outputs. Make sure all data matches the expected datatype, and sanitize user input to avoid injection attacks. (See the parameterized-query sketch after this list.)
- Remnant removal: remove all traces of sensitive data or secrets after completing tasks; use secure deletion to make data unrecoverable by forensic tools.
- Trust anchor justification: make sure your trust anchors are actually trustworthy – verify trust assumptions wherever possible.
- Independent confirmation: use independent cross-checks (such as comparing hash values) to increase confidence in data, especially data from untrusted channels. (See the hash-verification sketch after this list.)
- Request-response integrity: beware of protocols lacking authentication; verify that responses correspond to the requests actually sent, e.g., in name resolution such as DNS.
- Reluctant allocation: authenticate agents whenever possible; place a higher burden of effort on agents that initiate interactions; be reluctant to expend resources on interactions with unauthenticated, external agents.
- Security-by-design: consider security as early as possible in the design process and continue to include it in the design throughout. Specify security goals as early as possible in the process.
- Design for evolution: design architectures, systems, and protocols to support evolution and updates with minimal impact on existing functionality.
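
A few of these principles are concrete enough to sketch in code. The Python snippets below are illustrative sketches under stated assumptions, not production implementations. First, safe defaults: an allowlist check that fails closed. The action names are hypothetical.

```python
# Allowlist: anything not explicitly listed is denied (deny-by-default).
ALLOWED_ACTIONS = {"read_report", "list_reports"}  # hypothetical action names

def is_permitted(action: str) -> bool:
    # Fail closed: unknown or unexpected actions are denied.
    return action in ALLOWED_ACTIONS

assert is_permitted("read_report")
assert not is_permitted("delete_report")  # never listed, so denied
```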
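
Next, complete mediation and TOCTOU: a POSIX-only sketch contrasting a racy check-then-use pattern with validating the already-opened file descriptor, so the check and the use refer to the same object.

```python
import os
import stat

# Vulnerable check-then-use: the path can be swapped (e.g., replaced by a
# symlink to a sensitive file) between the access() check and the open().
def read_vulnerable(path: str) -> str:
    if os.access(path, os.R_OK):   # time of check
        with open(path) as f:      # time of use: race window in between
            return f.read()
    raise PermissionError(path)

# Safer: open first, then check the already-opened descriptor.
def read_safer(path: str) -> str:
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)  # refuse to follow symlinks
    try:
        st = os.fstat(fd)                 # inspect the open file itself
        if not stat.S_ISREG(st.st_mode):  # only plain files
            raise PermissionError(path)
        return os.read(fd, st.st_size).decode()
    finally:
        os.close(fd)
```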
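
For least privilege, a POSIX-only sketch of a service that uses root just long enough to bind a privileged port and then drops to an unprivileged account; the uid/gid values are hypothetical placeholders.

```python
import os
import socket

def start_service(port: int = 443, uid: int = 1000, gid: int = 1000) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))   # binding a port < 1024 is what needs root
    srv.listen()
    # Drop privileges immediately after the one operation that needed them.
    os.setgid(gid)  # drop group first: setuid() would remove the right to setgid()
    os.setuid(uid)
    return srv
```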
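
For sufficient work factor, password hashing with a tunable cost using PBKDF2 from the standard library. The iteration count below is only an illustrative value, to be tuned against the resources of anticipated adversaries.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes, int]:
    salt = os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```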
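
For datatype/input validation, the classic injection example: a parameterized query treats hostile input strictly as data, where string interpolation would let it rewrite the query. The table and values are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "x' OR '1'='1"  # hostile input

# Vulnerable: interpolation lets the input rewrite the query:
#   conn.execute(f"SELECT role FROM users WHERE name = '{name}'")  # returns all rows
# Safer: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- the hostile string matches no user
```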
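
Finally, independent confirmation: cross-checking a file obtained over an untrusted channel against a digest published through an independent channel. The expected digest here is a placeholder (it happens to be the SHA-256 of the empty string).

```python
import hashlib

# Placeholder: in practice this comes from the publisher's independent channel.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def matches_published_digest(path: str, expected_hex: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash in chunks
            h.update(chunk)
    return h.hexdigest() == expected_hex
```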
Why Security is Hard
- Adversaries are smart, adaptable, and often economically motivated.
- Attackers don’t have to follow the rules, whereas defenders are bound by lots of rules and protocols.
- Attackers only need to find one point of attack, but defenders need to shield from all possible attacks.
- The Internet enables large-scale attacks at a low cost; also, attackers can be geographically distant, decreasing their traceability and physical risk.
- Technology evolves very quickly, which means it’s very hard to keep up, and software continues to grow in complexity, making it harder to defend.
- Many developers have no security knowledge or training, and automated security tools are insufficient and difficult to build and use.
- Interoperability and backwards-compatibility concerns mean vulnerabilities might not always get patched, even when a fix is available.
- It is often hard to convince people of the real-world importance and consequences of security breaches, which aren’t always easily visible or identifiable.
- Managing secrets properly is a nightmare.
- Retrofitting security into existing designs (which often did not consider security at all) is difficult and costly at best and impossible at worst.
- Security also involves humans, and getting humans to change their behaviours to be more secure is quite difficult. Designing security mechanisms that are easily understood and used (and not bypassed) by humans is hard. Educating humans about security is hard.
- Designing and implementing new security mechanisms sometimes introduces new weaknesses and attack vectors.
- Governments like to monitor and spy on people, and are therefore often opposed to practices like encryption by default.