Trusted Computing and Digital Rights Management in a De-perimeterised Environment
Seminar to Dennis Soong's group at Lenovo R&D, Beijing
Prof. Clark Thomborson, 3rd April 2007
Outline
• An operational definition of "trust".
• Requirements analysis of e-government and corporate DRM (ECM) at three levels: static, dynamic, governance.
• Suggested design improvements:
  • DRM: emphasise integrity and availability, not confidentiality.
  • TC: more support for audit.
  • Relationship management: support for hierarchical, bridging, and peering trust with other systems and individuals.
• Steps toward uniform "purchase requirements", with emphasis on interoperability and appropriate security.
  • In progress at the Jericho Forum.
• Eventually: develop an appropriate audit standard for DRM, TC, and relationship management.
Trust and Privilege
• We must develop operational definitions for these terms, if we wish to develop trustworthy computer systems.
Technical and non-technical definitions of Trust
• In security engineering, placing trust in a system is a last resort.
  • It's better to rely on an assurance (e.g. a proof, or a recourse mechanism) than on a trusting belief that "she'll be right".
• In non-technical circles, trust is a good thing: more trust is generally considered to be better.
• Trustworthiness (an assurance) implies that trust (a risk-aware basis for a decision) is well-placed.
• A completely trustworthy system (in hindsight) is one that has never violated the trust placed in it by its users.
  • Just because some users trust a system, we cannot conclude that the system is trustworthy.
• A rational and well-informed person can estimate the trustworthiness of a system.
  • Irrational or poorly-informed users will make poor decisions about whether or not, and under what circumstances, to trust a system.
Privilege in a Hierarchy
• Information flows upwards, toward the most powerful actor (at the root).
• Commands and trust flow downwards.
• The King is the most privileged.
• The peons are the most trusted.
[Diagram: a hierarchy with "King, President, Chief Justice, Pope, or …" at the root and "Peons, illegal immigrants, felons, excommunicants, or …" at the leaves.]
• Information flowing up is "privileged".
• Information flowing down is "trusted".
• Orange Book TCSEC, e.g. LOCKix.
Trustworthiness in a Hierarchy
• Information flows upwards, toward the most powerful actor.
• Commands and trust flow downwards.
• Peons must be trusted with some information!
  • If the peons are not trustworthy, then the system is not secure.
[Diagram: "King, President, Chief Justice, Pope, or …" at the root; "Peons, illegal immigrants, felons, excommunicants, or …" at the leaves.]
• If the King does not show good leadership (by issuing appropriate commands), then the system will not work well. "Noblesse oblige"!
Email in a Hierarchy
• Information flows upwards, toward the leading actor. Actors can send email to their superiors.
• Non-upwards email traffic is trusted:
  • not allowed by default;
  • should be filtered, audited, …
[Diagram: "King, President, Chief Justice, Pope, or …" at the root; "Peons, illegal immigrants, felons, excommunicants, or …" at the leaves.]
• Email up: "privileged" (allowed by default).
• Email down: "trusted" (disallowed by default, a risk to confidentiality).
• Email across: privileged & trusted routing.
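A minimal sketch of these default email rules, assuming each actor is assigned an integer rank in the hierarchy (higher = closer to the King); the names, ranks, and function below are illustrative only, not part of any real system.

```python
# Hypothetical sketch: classifying email flows in a hierarchy.
# Ranks are illustrative; a higher rank means a more powerful actor.
RANK = {"king": 3, "minister": 2, "clerk": 1, "peon": 0}

def classify_email(sender: str, recipient: str) -> str:
    """Upward mail is 'privileged' (allowed by default); downward mail is
    'trusted' (disallowed by default, and should be filtered and audited);
    sideways mail needs both privileged and trusted routing."""
    if RANK[sender] < RANK[recipient]:
        return "privileged: allow by default"
    if RANK[sender] > RANK[recipient]:
        return "trusted: deny by default; filter and audit if allowed"
    return "across: privileged and trusted routing required"

if __name__ == "__main__":
    print(classify_email("peon", "king"))       # privileged
    print(classify_email("king", "peon"))       # trusted
    print(classify_email("clerk", "minister"))  # privileged
```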
Email across Hierarchies
Q: How should we handle email between hierarchies?
[Diagram: Company X and Agency Y merged into a single hierarchy, "X+Y".]
Answers:
• Merge
• Subsume
• Bridge
On merging:
• Not often desirable, or even feasible.
• Cryptography doesn't protect X from Y, because the CEO/King of the merged company has the right to know all keys.
• Can an appropriate King(X+Y) be found?
Email across Hierarchies
Q: How can we manage email between hierarchies?
[Diagram: Agency X and Company Y as separate hierarchies.]
Answers:
• Merge
• Subsume
• Bridge
Email across Hierarchies
Q: How can we manage email between hierarchies?
[Diagram: Company X and Agency Y connected by a bridge.]
Answers:
• Merge
• Subsume
• Bridge!
• Bridging connection: trusted in both directions.
Bridging Trust
• We use "bridges" every time we send personal email from our work computer.
• We build a bridge by constructing a "bridging persona".
• Even Kings can form bridges.
  • However, Kings are most likely to use an actual person, e.g. their personal secretary, rather than a bridging persona.
[Diagram: Agency X and Hotmail, bridged by "C, acting as a governmental agent" and "C, acting as a hotmail client".]
• Bridging connection: trusted in both directions.
• Used for all communication among an actor's personae.
• C should encrypt all hotmail to avoid revelations.
Personae, Actors, and Agents
• I use "actor" to refer to
  • an agent (a human, or a computer program),
  • pursuing a goal (risk vs. reward),
  • subject to some constraints (social, technical, ethical, …).
  • In Freudian terms: ego, id, superego.
• Actors can act on behalf of another actor: "agency".
• In this part of the talk, we are considering agency relationships in a hierarchy.
[Diagram: Company X and Hotmail, bridged by "C, acting as an employee" and "C, acting as a hotmail client".]
• When an agent takes on a secondary goal, or accepts a different set of constraints, they create an actor with a new "persona".
Bridging Trust: B2B e-commerce
• Use case: employee C of X purchasing supplies through employee V of Y.
• Employee C creates a hotmail account for a "purchasing" persona.
• Purchaser C doesn't know any irrelevant information.
[Diagram: Company X and Company Y; "C, acting as an employee" and "C, acting as a purchaser" bridge across to "Employee V".]
• Most workflow systems have rigid personae definitions (= role assignments).
• Current operating systems offer very little support for bridges. Important future work!
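One way to picture personae and bridges is as a small data structure: an agent owns several personae, and a bridge is a trusted, bidirectional link between two of that agent's personae, recorded so it can be audited later. The class and field names below are my own illustrative assumptions, not an existing OS interface.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Persona:
    agent: str              # the underlying human or program, e.g. "C"
    role: str               # e.g. "employee of Company X" or "hotmail client"
    constraints: tuple = () # social/technical/ethical constraints accepted

@dataclass
class Agent:
    name: str
    personae: list = field(default_factory=list)
    bridges: list = field(default_factory=list)  # pairs of personae

    def add_persona(self, role: str, constraints: tuple = ()) -> Persona:
        p = Persona(self.name, role, constraints)
        self.personae.append(p)
        return p

    def bridge(self, a: Persona, b: Persona) -> None:
        """A bridge is trusted in both directions, and is recorded so that
        relationship management can audit its creation later."""
        assert a.agent == b.agent == self.name, "a bridge joins one agent's personae"
        self.bridges.append((a, b))

# Illustrative use: employee C of Company X creates a purchasing persona.
c = Agent("C")
employee = c.add_persona("employee of Company X")
purchaser = c.add_persona("purchaser via hotmail", constraints=("knows no irrelevant data",))
c.bridge(employee, purchaser)
```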
Why can't we trust our leaders?
• Commands and trust flow upwards (by majority vote, or by consensus).
• Information flows downwards by default ("privileged").
• Upward information flows are "trusted" (filtered, audited, etc.).
• In a peerage, the leading actors are trusted, have minimal privilege, don't know very much, and can safely act on anything they know. "Our leaders are but trusted servants…"
[Diagram: Peers.]
• By contrast, the King of a hierarchy has an absolute right ("root" privilege) to know everything, is not trusted, and cannot act safely.
Turn the picture upside down!
• Information flows upwards by default ("privileged").
• Commands and trust flow downwards.
• Downward information flows are "trusted" (filtered, audited, etc.).
• A peerage can be modeled by Bell-La Padula, because there is a partial order on the actors' privileges.
[Diagram: "Peers, Group members, Citizens of an ideal democracy, …" above; "Facilitator, Moderator, Democratic Leader, …" below.]
• Equality of privilege is the default in a peerage, whereas inequality of privilege is the default in a hierarchy.
Peer trust vs. Hierarchical trust
• Trusting decisions in a peerage are made by peers, according to some fixed decision rule.
  • There is no single root of peer trust.
  • There are many possible decision rules, but simple majority and consensus are the most common.
  • Weighted sums in a reputation scheme (e.g. eBay for goods, Poblano for documents) are a calculus of peer trust -- but "we" must all agree to abide by the scheme.
  • "First come, first served" (e.g. Wiki) can be an appropriate decision rule, if the cost per serving is sufficiently low.
• Trusting decisions in a hierarchy are made by its most powerful members.
  • Ultimately, all hierarchical trust is rooted in the King.
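Two of the decision rules above, made concrete as a minimal sketch: simple majority, and a weighted-sum reputation rule in the spirit of (but not copied from) eBay or Poblano. The thresholds and weights are illustrative assumptions.

```python
def majority(votes: list[bool]) -> bool:
    """Simple-majority rule: the peerage trusts the candidate if more than
    half of the peers vote in favour."""
    return sum(votes) * 2 > len(votes)

def weighted_reputation(ratings: dict[str, float], weights: dict[str, float],
                        threshold: float = 0.5) -> bool:
    """Weighted-sum rule, in the spirit of a reputation scheme: each peer's
    rating (0..1) is weighted by how much the peerage trusts that peer.
    All peers must agree in advance to abide by the scheme."""
    total = sum(weights.values())
    score = sum(ratings[p] * weights[p] for p in ratings) / total
    return score >= threshold

print(majority([True, True, False]))                                     # True
print(weighted_reputation({"a": 0.9, "b": 0.2}, {"a": 2.0, "b": 1.0}))   # True
```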
Legitimation and enforcement
• Hierarchies have difficulty with legitimation.
  • Why should I swear fealty (give ultimate privilege) to this would-be King?
• Peerages have difficulty with enforcement.
  • How could the least privileged actor possibly be an effective facilitator?
• This isn't Political Science 101!
  • I will not try to model a government. It's hard enough to build a model that will help us develop a better computer system!
• I have tried to convince you that hierarchical trust is quite different to peer trust, that bridging trust is also distinct, and that all three forms are important in our world.
• My thesis: because our applications software will help us handle all three forms of trust, our trusted operating systems should support all three forms.
Requirements for Relationship Management
• Orange-book security is hierarchical.
  • This is a perfect match to a military or secret-service agency.
  • This is a poor match to e-government and corporate applications.
• A general-purpose TC must support bridging and peering relationships.
• Rights-management languages must support bridges and peerages, as well as hierarchies.
• We cannot design an attractive, general-purpose DRM system until we have designed the infrastructure properly!
Vapourware
• Closed-source methodology is appropriate for designing hierarchical systems.
  • These systems have trouble with legitimation.
  • Why should a user trust that the system designers (and administrators) won't abuse their privilege?
• Open-source methodology is appropriate for designing peerage systems.
  • These systems have trouble with enforcement.
  • Why should anyone trust a user not to abuse their privilege?
• Real-world peerages can legitimise hierarchies, and hierarchies can enforce peerages.
• Can our next-generation OS use both design patterns?!?
A Legitimised Hierarchy
• Each assurance group may want its own Audit (different scope, objectives, Trust, …).
• The OS Administrator may refuse to accept an Auditor.
• The OS Administrator makes a Trusting appointment when granting auditor-level Privilege to a nominee.
• Assurance organizations may be hierarchical, e.g. if the Users are governmental agencies or corporate divisions.
[Diagram: OS Root Administrator, Auditor, and Users on one side; an Inspector-General (an elected officer), IG1, IG2, and the Chair of a User Assurance Group on the other.]
Summary of Static Trust
• Three types of trust: hierarchical, bridging, peering.
• Information flows are either trusted or privileged.
• Hierarchical trust has been explored thoroughly in the Bell-La Padula model.
  • A subordinate actor is trusted to act appropriately, if a superior actor delegates some privileges.
  • Bell-La Padula, when the hierarchy is mostly concerned about confidentiality.
  • Biba, when the hierarchy is mostly concerned about integrity.
  • A general-purpose TC OS must support all concerns of a hierarchy.
• Actors have multiple personae.
  • Bridging trust connects all of an actor's personae.
  • A general-purpose TC OS must support personae.
• Peering trust is a shared decision to trust an actor who is inferior to the peers.
  • Peerages have trouble with enforcement; hierarchies have trouble with legitimation.
  • A trusted OS must be a legitimate enforcement agent!
Dynamic Trust and System Trust
• When we join a hierarchy, form a bridge, or join a peerage, we make a trusting choice.
  • We also make trusting choices when we leave a hierarchy, dismantle a bridge, or resign from a peerage.
  • Hierarchies and peerages make trusting choices whenever they accept or reject a member.
• A trusted operating system should assist us with making, and recording, these structural operations: dynamic trust.
  • Reputation systems could help us to make dynamic trust decisions, but they do not help us record these decisions.
  • Current workflow systems have very little support for dynamic trust.
• System trust could be measured by our level of confidence in predicting the future behaviour of a hierarchical or peering system.
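A sketch of the "recording" half of dynamic trust: an append-only, hash-chained log of structural operations (joining or leaving a hierarchy or peerage, forming or dismantling a bridge, accepting or rejecting a member). The event names and log format are assumptions for illustration; a real TC OS would protect such a log with its own integrity mechanisms.

```python
import hashlib
import json
import time

STRUCTURAL_OPS = {"join", "leave", "form_bridge", "dismantle_bridge",
                  "accept_member", "reject_member"}

class TrustLog:
    """Append-only, hash-chained record of structural trust operations."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # hash of the (empty) predecessor

    def record(self, actor: str, op: str, target: str) -> dict:
        assert op in STRUCTURAL_OPS, f"not a structural operation: {op}"
        entry = {"time": time.time(), "actor": actor, "op": op,
                 "target": target, "prev": self._prev}
        # Chain each entry to its predecessor so tampering is detectable.
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry = {**entry, "hash": self._prev}
        self.entries.append(entry)
        return entry

log = TrustLog()
log.record("C", "join", "Company X")
log.record("C", "form_bridge", "hotmail persona")
```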
My Goals
• I am trying to convene a broadly-representative group of purchasers to act as "our" governance body.
  • Large corporations and governmental agencies have similar requirements for interoperability, auditability, static security, and multiple vendors.
• A first goal: develop buyer's requirements for TC, DRM, and relationship management.
  • International agreement and political "buy-in" is required if we are to have a system that is broadly acceptable.
  • Regulatory requirements, such as protection of individual privacy, must be addressed.
  • Law-enforcement and national-security requirements must also be addressed.
• A second goal: develop a trustworthy auditing process.
• The Jericho Forum has a congruent goal.
  • It is developing buyer's requirements for information security in large multinational corporations.
  • However, it is not a standards organisation, and it is not focussed on TC.
  • The Jericho Forum is defining "de-perimeterized security".
Jericho's De-perimeterized Security
• A corporate perimeter is not an easily-defensible security perimeter.
• A corporate perimeter defines a QoS boundary: performance, not security.
• We must harden our platforms, and our data objects, in order to take advantage of the high connectivity and low cost of the internet.
• Jericho members want to make trustworthy connections on an untrusted network (the internet), between authenticated (and trustworthy) users with authenticated (and trustworthy) platforms.
• The connections must use open standards, to improve interoperability and integration, both within our own IT systems and with our business partners.
Organisation of the Jericho Forum
• User members:
  • own the Forum;
  • vote on the deliverables;
  • run the Board of Managers.
• Vendor members:
  • have no votes;
  • participate fully in discussions.
  • We now have 12 vendor members, and want more.
• Academic members:
  • offer their expertise in exchange for information of interest to their research.
The Jericho Commandments: Fundamentals (1-3)
• The scope and level of protection must be specific and appropriate to the asset at risk.
• Security mechanisms must be pervasive, simple, scalable, and easy to manage.
• Assume context at your peril: security solutions designed for one environment may not be transferable.
• The first two commandments are appropriate design goals for any secure system, including my vapourware TC.
• The third commandment suggests that there will be more than one TC OS, with differing levels of hardness. We'll need help with our bridging trust across TC OSes!
Surviving in a Hostile World
• Devices and applications must communicate using open, secure protocols.
• All devices must be capable of maintaining their security policy on an untrusted network.
• My vapourware TC could use VPNs on hierarchical links.
• Implementing peerages on a completely untrusted network may be very difficult.
The Need for Trust
• All people, processes, and technology must have declared and transparent levels of trust for any transaction to take place.
• Mutual trust assurance levels must be determinable.
• These are requirements on dynamic trust in my TC OS.
  • Decision support will take the form of an interoperable "reputation system".
• These are also requirements on the use of an established bridge.
  • For example, a company-confidential data object should not be transmitted over a bridge to a system that does not respect confidentiality.
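A sketch of the bridge-usage check implied by the last bullet: before a data object crosses a bridge, compare the object's confidentiality requirement with the remote system's declared (and mutually determinable) assurance level. The level names and numbers are illustrative assumptions, not part of any standard.

```python
# Illustrative assurance levels; a higher number means stronger
# confidentiality handling by the receiving system.
LEVELS = {"none": 0, "basic": 1, "company-confidential": 2, "secret": 3}

def may_cross_bridge(object_confidentiality: str, remote_assurance: str) -> bool:
    """Allow a transfer only if the remote system's declared assurance level
    is at least as strong as the object's confidentiality requirement."""
    return LEVELS[remote_assurance] >= LEVELS[object_confidentiality]

# A company-confidential document must not cross to a system that only
# offers 'basic' assurance of confidentiality.
print(may_cross_bridge("company-confidential", "basic"))   # False
print(may_cross_bridge("company-confidential", "secret"))  # True
```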
Identity, Management and Federation
• Authentication, authorisation and accountability must interoperate out of your area of control.
• My vapourware TC uses explicit "bridges" for interoperation.
• The devil will be in the details here...
  • What control can be exerted, at reasonable cost, by a hierarchy over its members' bridges?
  • Workflow systems are very expensive and inflexible, suggesting that hierarchical control (= strong DRM) over bridges will be infeasible.
• I think we should focus on accountability rather than direct control over bridges.
  • The TC OS should keep complete records of bridge creations (= relationship management).
  • The TC OS should not keep records of bridge usage, except when highly confidential material is transferred.
Access to Data
• Access to data should be controlled by security attributes of the data itself.
• Data privacy (and security of any asset of sufficiently high value) requires a segregation of duties/privileges.
• By default, data must be appropriately secured when stored, in transit, and in use.
• These are requirements on the DRM (ECM) system that would be hosted by my vapourware TC OS.
Static Security for Corporate and Governmental DRM (a.k.a. ECM)
• CIA: confidentiality, integrity, and availability.
• The primary vulnerabilities are operational difficulties unrelated to DRM:
  • the link between a user and their platform ("shared" login, unattended, or stolen);
  • the link between the platform and the server (especially while roaming);
  • the links between a user and their workgroups (unstable).
• Three categories of internally-authored documents:
  • I > A > C: internal correspondence. The author must be identified. Keys must be shared widely within the agency, to ensure high availability. Confidential within a workgroup and its line managers. Note: workgroups can cross corporate boundaries!
  • I = A > C: operational data, e.g. citizen (or customer) records. Accuracy is very important. Downtime is very expensive. Group-level confidentiality.
  • I = C > A: highly sensitive data, such as state (or corporate) secrets, requiring expensive, fine-grained DRM control. Very rare, except in secret-service or military agencies -- these are a very specialised market.
• Three categories of externally-authored documents:
  • I > A > C: unsigned objects, e.g. downloads from the web.
  • I = A > C: signed objects, e.g. contracts, tax returns.
  • I = C > A: objects whose confidentiality is controlled by an external party, e.g. licensed software and media. Very rare.
• Conclusion: we should design DRM systems to handle the I = A > C case, without increasing the operational difficulties of workplace computing.
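The I/A/C orderings above can be written down as small, machine-readable profiles. The sketch below uses invented category names (they are not terminology from the talk) to show how a DRM policy engine might look up a document class's priority ordering before choosing protections.

```python
# Hypothetical mapping of document categories to their CIA priority ordering,
# written as tuples ordered from highest to lowest priority.
# 'I' = integrity, 'A' = availability, 'C' = confidentiality.
PROFILES = {
    "internal_correspondence": ("I", "A", "C"),    # I > A > C
    "operational_records":     (("I", "A"), "C"),  # I = A > C
    "state_secrets":           (("I", "C"), "A"),  # I = C > A
}

def top_priorities(category: str) -> set[str]:
    """Return the property (or tied properties) that the category values most."""
    first = PROFILES[category][0]
    return set(first) if isinstance(first, tuple) else {first}

print(top_priorities("operational_records"))  # {'I', 'A'}
```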
Dynamic Security Requirements
• The gold standard: Authentication, Authorisation, Audit.
• Dynamic security is expensive. We must avoid "gold-plated" system design! Requirements:
  • Offline server: the user's platform handles document-level authorisations.
  • Platforms only occasionally re-authenticate/re-authorise with the server: once per week, and when the individual joins a new group.
  • Platforms hold a master-key with read authority for all group-level documents.
  • Platform credentials remain valid after a reboot or disconnect.
  • Almost all documents are individually-signed and group-encrypted. The document includes a copy of the author's signing certificate.
  • Audit trails must assure the completeness of key escrow (for availability) and user enrolment/disenrolment (for integrity and confidentiality).
  • All signing certificates, all group-master keys, and all other identity-management functions must be handled through an open-standard interface to the DRM server.
• These requirements are supportive of an I = A > C design:
  • Integrity is high, due to the auditable and standardized ID management;
  • Availability is high, even while roaming;
  • Confidentiality is moderate. It takes a week to revoke an authority; however, most authority revocations are due to workgroup reassignments of trusted individuals.
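A minimal sketch of "individually-signed and group-encrypted" documents, using the Python cryptography package: Ed25519 for the author's signature and Fernet symmetric encryption standing in for the group master key. The key handling and the 64-byte signature framing are simplifications assumed for illustration, not the design of any particular DRM server.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Group master key, held by every platform in the workgroup (and escrowed
# for availability); the author also holds a personal signing key.
group_key = Fernet(Fernet.generate_key())
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()

def publish(document: bytes) -> bytes:
    """Sign individually, then encrypt under the group key."""
    signature = author_key.sign(document)
    # In the talk's design the author's signing certificate travels with the
    # document; here we simply prepend the 64-byte Ed25519 signature.
    return group_key.encrypt(signature + document)

def open_document(blob: bytes) -> bytes:
    """Decrypt with the group key, then verify the author's signature."""
    plaintext = group_key.decrypt(blob)
    signature, document = plaintext[:64], plaintext[64:]
    author_pub.verify(signature, document)  # raises InvalidSignature on tampering
    return document

print(open_document(publish(b"operational record #42")))
```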
Security Governance
• Governance should be pro-active, not reactive.
• Governors should constantly be asking questions, considering the answers, and revising plans:
  • Specification, or Policy (answering the question of what the system is supposed to do);
  • Implementation (answering the question of how to make the system do what it is supposed to do); and
  • Assurance (answering the question of whether the system is meeting its specifications).
• The monumental failures of early DRM systems were the result of inadequate governance:
  • poorly-conceived specifications,
  • overly-ambitious implementations, and
  • scant attention to assurance when specifying.
Malware Scans in TC/DRM
• An infected document may have been encrypted before its malware payload is recognisable by a scanner.
• An infected document may be opened at any time in the future.
• Adding a comprehensive, online malware scan would significantly increase the multi-second latency of a first-time access in IRM v1.0.
• Third-party malware scans are problematic in a security-hardened kernel.
  • The scanner must be highly privileged and trustworthy.
Summary
• There are three types of operational trust: hierarchical, bridging, peering.
  • A hierarchical system can be legitimated by a peerage.
  • A peering system can be enforced by a hierarchy.
• I am trying to convene a broadly-representative group of purchasers to act as "our" governance body for Trusted Computing and Digital Rights Management.
  • Large corporations and governmental agencies have similar requirements for interoperability, auditability, static security, and multiple vendors.
• The Jericho Forum is developing buyer's requirements for information security in large multinational corporations, but it is not a standards organisation and it is not focussed on TC and DRM.
• Goals: develop an audit standard and a trustworthy auditing process.
Acknowledgements & Sources
• Privilege and Trust, LOCKix: Richard O'Brien and Clyde Rogers, "Developing Applications on LOCK", 1991.
• Trust and Power: Niklas Luhmann, Wiley, 1979.
• Personae: Jihong Li, "A Fifth Generation Messaging System", 2002; and Shelly Mutu-Grigg, "Examining Fifth Generation Messaging Systems", 2003.
• Use case (WTC): Qiang Dong, "Workflow Simulation for International Trade", 2002.
• Use case (P2P): Benjamin Lai, "Trust in Online Trading Systems", 2004.
• Use case (ADLS): Matt Barrett, "Using NGSCB to Mitigate Existing Software Threats", 2005.
• Use case (SOEI): Jinho Lee, "A Survey-based Analysis of HIPAA Security Requirements", 2006.
• Trusted OS: Matt Barrett, "Towards an Open Trusted Computing Framework", 2005; and Thomborson and Barrett, "Governance of Trusted Computing", ITG 06, Auckland.
• White papers and privileged communications in the Jericho Forum, www.jerichoforum.org.