Some are not thieves! Alexandr Andoni (MIT) (work done while at PARC) Jessica Staddon (PARC)
Model • Content distributor • Broadcast channel (accessible to all) • E.g., Pay-TV, online service • Content encrypted to limit access • Users • Privileged – ones that can decrypt the content • Revoked – ones whose privileges were revoked due to non-payment, expiration, etc. • Key management protocol (revocation protocol) • More on this later
Problem • The 0/1 (privileged/revoked) user hierarchy is too rigid • Ineffective and disruptive when revocation happens unexpectedly, in error, etc. • Imagine an unfortunate scenario • User is late on the monthly payment • => is revoked by the distributor • => misses favorite TV show • => has to ask for reinstatement: high logistical cost • Want: • Graceful revocation • Cues on pending revocation: inherent to the content
Basic Solution • Service degradation • Degrade quality of service (e.g., content is delayed or partial) • Affects users that are "a little late" on payment • Cue of pending revocation: the degradation itself • What does "degradation" mean? • Our definition: • Degraded = it takes more effort to decrypt the content, but all content can still be decrypted in the end • Other possible definitions (not considered here): • Video is choppy [Abdalla-Shavitt-Wool'03]
How? • Enforce user classes via key management protocols (a.k.a. revocation protocols) • Revocation protocol = can target any set P of users • A degradation protocol is a specialization of a revocation protocol, but we hope to improve its parameters • Effort to decrypt: via variably hard functions • Computing the function incurs computational effort • The amount of computational effort is parametrizable • Related to "pricing functions" [Dwork-Naor'92] and "proofs of work" [Jakobsson-Juels'03] (in the context of spam-fighting)
Variably Hard Functions • Inspired by the idea of "proofs of work", proposed mostly for fighting spam: • For an email m, one has to attach F(m) such that: • F(m) is "moderately hard" to compute (e.g., 10 seconds) • It is easy/fast to check that <m, F(m)> is valid • We need: • A parametrizable "moderately hard" function F • A degraded user gets "m" and a hardness parameter p • For fixed m, F(m) must be the same for all p
Definition: Variably Hard Functions • F is variably hard if: • There is some test function g(x) (think g(x)=m) • For each x, there is a collection of hints Hints(x) • A hint is a set Y^(p)(x) of size 2^p s.t. x ∈ Y^(p)(x) • It takes Ω(2^p) time to compute F(x) given only g(x) and some Y^(p)(x) (x itself is not given) • The "hardness" comes from not knowing x • Can compute F(x) in 2^p time given g(x), Y^(p)(x): • Just try all candidates x' ∈ Y^(p)(x) and test with g(x)
Construction via OW Permutation • Let P be a one-way permutation • Define the test function g(x)=P(x) • Define F(x)=x • Computing F(x) knowing g(x) is equivalent to inverting P • A hint Y^(p)(x) is the set of y's that share the same first k−p bits with x • E.g., for a k-bit x = 01001…11010…, the hint Y^(p)(x) is 01001…*****…, with the last p bits unknown
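A minimal sketch of this construction, with SHA-256 standing in for the one-way permutation P (a hash is one-way but not a permutation, so this is only illustrative); the helper names (make_hint, recover_x) and the tiny bit-length k are assumptions for the demo, not part of the paper.

```python
# Variably hard function sketch: g(x) = P(x), F(x) = x, hint = first k-p bits of x.
import hashlib
import secrets

K_BITS = 32  # bit-length k of the secret x (kept tiny so the demo runs fast)

def g(x: int) -> bytes:
    """Test function g(x) = P(x); here P is modelled by a hash."""
    return hashlib.sha256(x.to_bytes(K_BITS // 8, "big")).digest()

def make_hint(x: int, p: int) -> int:
    """Hint Y^(p)(x): reveal the first k - p bits of x (the last p bits stay unknown)."""
    return (x >> p) << p

def recover_x(gx: bytes, hint: int, p: int) -> int:
    """Compute F(x) = x from g(x) and a hint, by trying all 2^p completions."""
    for suffix in range(1 << p):
        candidate = hint | suffix
        if g(candidate) == gx:
            return candidate
    raise ValueError("no preimage found (hint was inconsistent)")

if __name__ == "__main__":
    x = secrets.randbits(K_BITS)
    p = 20                          # hardness parameter: ~2^20 trial hashes
    gx = g(x)
    hint = make_hint(x, p)
    assert recover_x(gx, hint, p) == x   # degraded user's effort: up to 2^p tests
```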
Using Variably Hard Functions • Encrypt the content with a session key SK=F(x) • Broadcast g(x) • Distribute hints of x using a revocation protocol • Privileged users P: receive a complete hint (all of x) => easy to compute SK • Degraded users D: receive a partial hint => moderately hard to compute SK • Revoked users R: receive no hint => infeasible to compute SK • Inefficient: • Have to be able to target only P • More direct approach?
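An illustrative end-to-end sketch of this flow: SK = F(x) = x, the broadcast carries g(x) plus the encrypted content, and the three user classes differ only in how much of x they receive. The toy XOR "encryption" and all names are assumptions for the demo, not the paper's scheme.

```python
# Session key SK = F(x) = x; classes differ only in the size of their hint.
import hashlib, secrets

K_BITS = 32

def g(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(4, "big")).digest()

def enc(sk: int, msg: bytes) -> bytes:
    stream = hashlib.sha256(b"key" + sk.to_bytes(4, "big")).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))   # toy XOR "cipher"

dec = enc  # XOR keystream: decryption = encryption

x = secrets.randbits(K_BITS)                 # session secret
broadcast = (g(x), enc(x, b"tonight's show"))

hint_privileged = x                          # full hint: decrypt immediately
hint_degraded   = (x >> 16) << 16            # partial hint: ~2^16 work to get SK
hint_revoked    = None                       # no hint: recovering x is infeasible

# A degraded user brute-forces the missing 16 bits before decrypting:
gx, ct = broadcast
for suffix in range(1 << 16):
    cand = hint_degraded | suffix
    if g(cand) == gx:
        print(dec(cand, ct))                 # b"tonight's show"
        break
```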
Revocation Protocols • Non-trivial: • If all users have the same key, how do we "take back" the key from a revoked user? • Studied since the '90s: • Stateful – users maintain "state"; can be fatal if they miss part of the broadcast • Stateless • The most common (stateless) ones are based on, e.g., Shamir-like secret sharing
Improve Revocation • Illustration for revocation based on secret sharing • Revocation protocol of [Kumar-Rajagopalan-Sahai'99] in two steps: • 1st step: uses cover-free families • Let U be a universe of keys • Users get distinct subsets Su ⊆ U (all the Su form a cover-free family) • A message SK is broadcast as: E_k1[SK], E_k2[SK], …, E_ks[SK], for some T = {k1, …, ks} ⊆ U • If Su ∩ T ≠ ∅, then the user can decrypt SK • Design the sets Su such that: • for any Su (privileged user) and S1, S2, …, Sr (revoked) • |Su \ S1 \ S2 \ … \ Sr| ≥ a|Su|, where a is a constant
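A minimal sketch of this first step, under assumptions: randomly chosen key sets stand in for a real cover-free family, and a dictionary entry stands in for each encryption E_k[SK]; all names (UNIVERSE, try_decrypt, …) are illustrative.

```python
# Cover-free-family revocation sketch: broadcast SK under every key in
# T = U minus the revoked users' keys; a user recovers SK iff Su ∩ T ≠ ∅.
import secrets

rng = secrets.SystemRandom()
UNIVERSE = set(range(100))                       # universe U of key indices
users = {u: set(rng.sample(sorted(UNIVERSE), 10)) for u in range(20)}  # sets Su

revoked = {0, 1}
revoked_keys = set().union(*(users[u] for u in revoked))   # S1 ∪ ... ∪ Sr
T = UNIVERSE - revoked_keys                      # keys the broadcast may use

SK = secrets.token_bytes(16)
broadcast = {k: SK for k in T}                   # each entry stands for E_k[SK]

def try_decrypt(u):
    """A user decrypts with any of their keys that appears in the broadcast."""
    usable = users[u] & T                        # Su ∩ T
    return broadcast[next(iter(usable))] if usable else None

assert all(try_decrypt(u) is None for u in revoked)
# With a real cover-free family every privileged user keeps >= a|Su| usable
# keys; with random sets this holds here with overwhelming probability.
assert all(try_decrypt(u) == SK for u in users if u not in revoked)
```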
Revocation via Secret Sharing (2) • 2nd step: reduce the communication blow-up • For revoked S1, S2, …, Sr, encrypt with all of T = U \ S1 \ S2 \ … \ Sr • Parameters so far: • User storage: |Su| = O(r log n) keys • Communication blow-up: |U| = O(r² log n) • Can improve: a privileged user gets a|Su| copies of SK • Use a secret sharing scheme! • Create |U| shares of SK such that any a|Su| shares are enough to reconstruct SK • Obtain parameters [KRS'99, randomized]: • User storage: O(r log n) • Communication blow-up: O(r)
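A small, self-contained sketch of the threshold sharing used in this step, assuming a Shamir-style scheme over a prime field; the field size, the helper names (share, reconstruct), and the toy parameters t and n are illustrative choices, not values from [KRS'99].

```python
# Shamir-style threshold sharing: any t = a|Su| of the |U| shares reconstruct SK.
import secrets

P = 2**127 - 1          # prime modulus (field for the shares)

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them suffice to reconstruct."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x): return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

SK = secrets.randbelow(P)
shares = share(SK, t=5, n=30)            # e.g. t = a|Su|, n = |U| (toy sizes)
assert reconstruct(shares[:5]) == SK     # a privileged user's >= a|Su| shares
assert reconstruct(shares[3:8]) == SK    # any t shares work
```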
Secret Sharing for Degradation • [KRS'99] establishes: • A privileged user gets a|Su| = O(r log n) shares of SK • A revoked user gets 0 shares • Design such that a degraded user gets, e.g., (1−c)·a|Su| shares (0<c<1): • These shares constitute a hint Y^(p)(x), with p = ca|Su| • A degraded user recovers SK in 2^(ca|Su|) steps • Indeed, we can modify the [KRS'99] cover-free family: • If a key k ∈ U belongs to D but not to R, choose k to be in T with some probability ≈ 1−c
Deficiencies • Can obtain slightly better bounds, but they are messy • Many parameters (max # revoked, max # degraded) • Have to know the parameters in advance (same as for KRS'99) • Not collusion resistant against degraded users • Several degraded users may together obtain all the necessary shares • Not a big problem: • Degradation mainly serves as a cue • The act of colluding is itself sufficient to serve as a cue
Towards (more) practical protocols • Observations: • Not necessary to redistribute hints for each new session if user classes don’t change • Want finer division into classes: • Privileged class P • Degraded classes D1, D2,… DL (progressively worse service quality) • Revoked class R • Known degradation schedule: sometimes we know when somebody will probably be degraded
Practical Degradation Protocols • Will present two: • Known degradation schedule: trial period scenario • Unknown degradation schedule: general scenario
Trial Period Scenario: Model • [Timeline figure: subscription at t=0; normal service until t=30; degraded service during days 30–40; revoked after t=40] • Trial period scenario • During the period of days [30,40], the service gets progressively worse • 1 degraded class per day: D1, D2, …, D10 • Each Di has its own "hardness" parameter
Trial Period Scenario: Construction • [Figure: chains … ← A19 ← A20 ← A21 ← … ← A29 ← A30 ← A31 ← …, where ← denotes application of a one-way function/permutation; the figure marks which Ai make up x at t=30 and at t=31] • Broadcast on day t: E_Kt[SK], E_F(x)[SK], g(x) • Ki is a series such that Ki = W(Ki+1); W is one-way • Ai is defined the same way • A user gets K29 and A29 • On day t<30, the user can decrypt SK with Kt • On day t≥30, the user can compute F(x) from g(x) and an incomplete hint based on At−10…A29
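A sketch of the key-chain part of this construction, assuming SHA-256 as the one-way function W: since K_i = W(K_{i+1}), a user holding K_29 can derive any earlier K_t but not K_30. The A_i chain is built the same way. Chain length and names are illustrative.

```python
# One-way key chain: K_i = W(K_{i+1}), so the chain can only be walked backwards in time.
import hashlib, secrets

def W(k: bytes) -> bytes:
    """One-way step of the chain (hash as a stand-in)."""
    return hashlib.sha256(k).digest()

CHAIN_LEN = 40
seed = secrets.token_bytes(32)

# Distributor builds the chain from the end: K[39] = seed, K[i] = W(K[i+1]).
K = [b""] * CHAIN_LEN
K[-1] = seed
for i in range(CHAIN_LEN - 2, -1, -1):
    K[i] = W(K[i + 1])

user_key = K[29]                 # handed out at subscription time

def user_derive(day: int) -> bytes:
    """User recomputes K[day] for day <= 29 by applying W repeatedly."""
    k = user_key
    for _ in range(29 - day):
        k = W(k)
    return k

assert user_derive(10) == K[10]          # normal service before day 30
# Deriving K[30] from K[29] would require inverting W, which is infeasible.
```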
General Scenario • Can generalize the previous protocol • Same idea of using the At series to create many degradation classes • But need more careful distribution of At and Kt: this time using revocation protocols • Can be based on any revocation protocol • Communication is expensive only when classes change (somebody is degraded/revoked)
Final Remarks • Computational effort may vary across different machines: • In that case, use the "memory-bound" functions of [Dwork-Goldberg-Naor'03] instead • Can guarantee O(2^p) memory accesses • More uniform across platforms • We adapted "memory-bound" functions to be variably hard
Conclusions • Introduced the notion of service degradation • Degraded users: between privileged and revoked • Have degraded quality • Serves as a cue to impending revocation • Construction based on: • Variably hard functions • Revocation protocols
Interesting Questions • How much can degradation buy us in terms of user storage and communication? • Is this the right approach to degradation? Are there other (better) ones?