
236608 - Coding and Algorithms for Memories Lecture 3 – WOM Codes



  1. 236608 - Coding and Algorithms for Memories, Lecture 3 – WOM Codes

  2. Write-Once Memories (WOM) • Introduced by Rivest and Shamir, “How to reuse a write-once memory”, 1982 • The memory elements represent bits (2 levels) and are irreversibly programmed from ‘0’ to ‘1’ Q: How many cells are required to write 100 bits twice? P1: Is it possible to do better…? P2: How many cells to write k bits twice? P3: How many cells to write k bits t times? P3’: What is the total number of bits that is possible to write in n cells in t writes? [Table: the Rivest–Shamir [3,2;4,4] code – the four 2-bit messages with their 1st-write and 2nd-write codewords]
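
The Rivest–Shamir scheme is small enough to spell out. Below is a minimal Python sketch of a [3,2;4,4]-style code; the particular message-to-codeword table is one common convention (any assignment of the four weight-≤1 vectors to the four messages works the same way), so treat it as illustrative rather than canonical:

```python
# A minimal sketch of a Rivest-Shamir-style [3,2;4,4] WOM code.
# Convention assumed here: a 2-bit message is stored on the first write as a
# cell vector of weight <= 1, and on the second write as its complement.
FIRST = {"00": "000", "01": "100", "10": "010", "11": "001"}
SECOND = {m: "".join("1" if b == "0" else "0" for b in c) for m, c in FIRST.items()}

def decode(cells):
    # Weight <= 1 means the state is still a first-write codeword.
    table = FIRST if cells.count("1") <= 1 else SECOND
    return {c: m for m, c in table.items()}[cells]

def second_write(cells, msg):
    # Rewrite msg by programming cells from 0 to 1 only.
    if decode(cells) == msg:                   # same message: change nothing
        return cells
    new = SECOND[msg]
    assert all(o <= n for o, n in zip(cells, new)), "only 0 -> 1 allowed"
    return new

cells = FIRST["01"]                            # 1st write: store 01 -> "100"
cells = second_write(cells, "11")              # 2nd write: store 11 -> "110"
assert decode(cells) == "11"
```

So 2 bits are written twice using 3 cells instead of 4, which is exactly the sum-rate 4/3 computed on the next slides.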

  3. Binary WOM Codes • k1,…,kt: the number of bits written on each write • n cells and t writes • The sum-rate of the WOM code is R = (k1 + … + kt)/n • Rivest–Shamir: R = (2+2)/3 = 4/3

  4. Definition: WOM Codes • Definition: An [n,t;M1,…,Mt] t-write WOM code is a coding scheme which consists of n cells and guarantees any t writes of alphabet sizes M1,…,Mt by programming cells from zero to one • A WOM code consists of t encoding and decoding maps Ei, Di, 1 ≤ i ≤ t • E1: {1,…,M1} → {0,1}^n • For 2 ≤ i ≤ t, Ei: {1,…,Mi}×Im(Ei-1) → {0,1}^n such that for all (m,c) ∊ {1,…,Mi}×Im(Ei-1), Ei(m,c) ≥ c • For 1 ≤ i ≤ t, Di: {0,1}^n → {1,…,Mi} such that Di(Ei(m,c)) = m for all (m,c) ∊ {1,…,Mi}×Im(Ei-1) • The sum-rate of the WOM code is R = (log2M1 + … + log2Mt)/n • Rivest–Shamir: [3,2;4,4], R = (log4 + log4)/3 = 4/3
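
These conditions can be verified exhaustively on the [3,2;4,4] example. The sketch below reuses FIRST, SECOND, decode and second_write from the snippet above and checks both the monotonicity condition E2(m,c) ≥ c and correct decoding:

```python
# Exhaustive check of the WOM-code definition on the [3,2;4,4] tables above:
# every write sequence only raises cells, and Di(Ei(m, c)) = m.
from itertools import product

MSGS = ["00", "01", "10", "11"]
for m1, m2 in product(MSGS, repeat=2):
    c1 = FIRST[m1]
    assert decode(c1) == m1                      # D1(E1(m1)) = m1
    c2 = second_write(c1, m2)
    assert all(a <= b for a, b in zip(c1, c2))   # E2(m2, c1) >= c1 cell-wise
    assert decode(c2) == m2                      # D2(E2(m2, c1)) = m2
print("all 16 write pairs verified")
```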

  5. The Coset Coding Scheme • C[n,n-r] is an ECC with an r×n parity check matrix H • Write r bits: given a syndrome s of r bits, find a length-n vector e, flipping only 0s to 1s, such that the updated cell state has syndrome s: H⋅(c+e)T = s • Example: H is a parity check matrix of a Hamming code • s=100, v1 = 0000100: c = 0000100 • s=000, v2 = 1001000: c = 1001100 • s=111, v3 = 0100010: c = 1101110 • s=010, …  can’t write! • This matrix gives a [7,3;8,8,8] WOM code • The Golay (23,12,7) code: [23,3; 2^11,2^11,2^11], R = 33/23 = 1.43 • The Hamming code: r bits, 2^(r−2)+2 times, 2^r−1 cells: R = r(2^(r−2)+2)/(2^r−1) • Improved by Godlewski (1987) to 2^(r−2)+2^(r−4)+2 times: R = r(2^(r−2)+2^(r−4)+2)/(2^r−1)

  6. The Coset Coding Scheme • Cohen, Godlewski, and Merkx ’86 – the coset coding scheme • Use Error-Correcting Codes (ECC) in order to construct WOM codes • Let C[n,n-r] be an ECC with parity check matrix H of size r×n • Write r bits: given a target syndrome s of r bits, find a length-n vector e, supported on still-unprogrammed cells, such that the new state c+e satisfies H⋅(c+e)T = s • Use ECCs that guarantee, on successive writes, vectors that do not overlap the previously programmed cells • The goal is to find a vector e of minimum weight such that only 0s flip to 1s
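
The write step can be run mechanically. The sketch below brute-forces the minimum-weight update on n = 7 cells with the [7,4] Hamming parity check matrix of the example above (exhaustive search is only viable at this toy size; the point of the construction is that code structure replaces the search). It reproduces the three successful writes of slide 5 and the failing fourth one:

```python
# Brute-force coset write step on n = 7 cells: given the current state c and
# a target 3-bit syndrome s, find a minimum-weight e that touches only
# unprogrammed cells and makes the new state decode to s: H.(c+e)^T = s.
from itertools import combinations

H = [[1, 1, 1, 0, 1, 0, 0],   # parity check matrix of the [7,4] Hamming code
     [1, 0, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(v):
    return tuple(sum(h[i] * v[i] for i in range(7)) % 2 for h in H)

def coset_write(c, s):
    zeros = [i for i in range(7) if c[i] == 0]        # still-writable cells
    for w in range(len(zeros) + 1):                   # smallest weight first
        for pos in combinations(zeros, w):
            new = [1 if i in pos else c[i] for i in range(7)]
            if syndrome(new) == s:
                return new
    raise ValueError("cannot write this syndrome")

c = [0] * 7
for s in [(1, 0, 0), (0, 0, 0), (1, 1, 1)]:           # the writes of slide 5
    c = coset_write(c, s)
    print(s, "->", c)      # 0000100, then 1001100, then 1101110
coset_write(c, (0, 1, 0))  # raises ValueError: s = 010 can't be written
```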

  7. Binary Two-Write WOM-Codes • C[n,n-r] is a linear code with an r×n parity check matrix H • For a vector v ∊ {0,1}^n, Hv is the matrix H with 0’s in the columns that correspond to the positions of the 1’s in v • Running example: v1 = (0 1 0 1 1 0 0)

  8. Binary Two-Write WOM-Codes • First Write: program only vectors v such that rank(Hv) = r, VC = { v ∊ {0,1}^n | rank(Hv) = r } • For H we get |VC| = 92, so we can write 92 messages • Assume we write v1 = 0 1 0 1 1 0 0

  9. Binary Two-Write WOM-Codes • First Write: program only vectors v such that rank(Hv) = r, VC = { v ∊ {0,1}^n | rank(Hv) = r } • Second Write Encoding: write a vector s2 of r bits • Calculate s1 = H⋅v1 • Find v2 such that Hv1⋅v2 = s1+s2 • v2 exists since rank(Hv1) = r • Write v1+v2 to memory • Second Write Decoding: multiply the stored word by H: H⋅(v1+v2) = H⋅v1 + H⋅v2 = s1 + (s1+s2) = s2 • Example: v1 = (0 1 0 1 1 0 0), s1 = H⋅v1 = 010, s2 = 001, Hv1⋅v2 = s1+s2 = 011, v2 = 0 0 0 0 0 1 1, v1+v2 = 0 1 0 1 1 1 1

  10. Example Summary • Let H be the parity check matrix of the [7,4] Hamming code:
        1 1 1 0 1 0 0
  H  =  1 0 1 1 0 1 0
        1 1 0 1 0 0 1
  • First write: program only vectors v such that rank(Hv) = 3, VC = { v ∊ {0,1}^n | rank(Hv) = 3 } • For H we get |VC| = 92, so we can write 92 messages • Assume we write v1 = 0 1 0 1 1 0 0 • Write 0’s in the columns of H corresponding to 1’s in v1:
         1 0 1 0 0 0 0
  Hv1 =  1 0 1 0 0 1 0
         1 0 0 0 0 0 1
  • Second write: write r = 3 bits, for example s2 = 0 0 1 • Calculate s1 = H⋅v1 = 0 1 0 • Solve: find a vector v2 such that Hv1⋅v2 = s1 + s2 = 0 1 1 • Choose v2 = 0 0 0 0 0 1 1 • Finally, write v1 + v2 = 0 1 0 1 1 1 1 • Decoding: H ⋅ [0 1 0 1 1 1 1]T = [0 0 1] = s2
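
The whole two-write scheme of slides 7–10 fits in a few lines. The sketch below reuses H and syndrome() from the coset-coding snippet above; the second-write solver is again a brute-force search over fresh cells (adequate for n = 7) and recovers exactly the v2 = 0000011 of this slide:

```python
# Two-write WOM code over the [7,4] Hamming code (H, syndrome() as above).
from itertools import combinations

def gf2_rank(matrix):
    """Rank over GF(2) by Gaussian elimination."""
    rows, rank = [list(r) for r in matrix], 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def H_masked(v):
    """H_v: H with zeroed columns wherever v has a 1."""
    return [[h[i] * (1 - v[i]) for i in range(7)] for h in H]

# First write: v1 is an allowed message since rank(H_v1) = 3.
v1 = [0, 1, 0, 1, 1, 0, 0]
assert gf2_rank(H_masked(v1)) == 3

# Second write: solve H_v1 . v2^T = s1 + s2 with v2 on unprogrammed cells
# (then H_v1 . v2 = H . v2, since v2 avoids the masked columns).
s2 = (0, 0, 1)
s1 = syndrome(v1)                                      # (0, 1, 0)
target = tuple((a + b) % 2 for a, b in zip(s1, s2))    # (0, 1, 1)
zeros = [i for i in range(7) if v1[i] == 0]
v2 = None
for w in range(len(zeros) + 1):                        # minimum weight first
    for pos in combinations(zeros, w):
        cand = [1 if i in pos else 0 for i in range(7)]
        if syndrome(cand) == target:
            v2 = cand                                  # -> 0 0 0 0 0 1 1
            break
    if v2 is not None:
        break

state = [a | b for a, b in zip(v1, v2)]                # 0 1 0 1 1 1 1
assert syndrome(state) == s2                           # decoding returns s2
```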

  11. Sum-rate Results • The construction works for any linear code C • For any C[n,n-r] with parity check matrix H, VC = { v ∊ {0,1}^n | rank(Hv) = r } • The rate of the first write is R1(C) = (log2|VC|)/n • The rate of the second write is R2(C) = r/n • Thus, the sum-rate is R(C) = (log2|VC| + r)/n • In the last example: R1 = log2(92)/7 = 6.52/7 = 0.93, R2 = 3/7 ≈ 0.43, R ≈ 1.36 • Goal: choose a code C with parity check matrix H that maximizes the sum-rate
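
The count |VC| = 92 is easy to reproduce by enumeration; a small sketch reusing gf2_rank() and H_masked() from the previous snippet:

```python
# Enumerate all 2^7 cell vectors and keep those with rank(H_v) = 3.
from itertools import product
import math

V_C = [v for v in product([0, 1], repeat=7) if gf2_rank(H_masked(v)) == 3]
print(len(V_C))                          # 92, as on the slide

R1 = math.log2(len(V_C)) / 7             # ~0.932
R2 = 3 / 7                               # ~0.429
print(round(R1 + R2, 4))                 # ~1.3605
```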

  12. Sum-rate Results • The (23,11,8) Golay code: (R1,R2) = (0.9415, 0.5217), R = 1.4632 • The (16,5,8) Reed–Muller code: (R1,R2) = (0.7691, 0.6875), R = 1.4566 • We can limit the number of messages available for the first write so that both writes have the same rate: R1 = R2 = 0.6875, and R = 1.375 • Computer search results: the best code found has sum-rate 1.4928; with the rate fixed to be equal on both writes, 1.4546

  13. Capacity Region and Achievable Rates of Two-Write WOM codes

  14. Capacity Achieving Results • The capacity region: C2-WOM = {(R1,R2) | ∃p ∊ [0,0.5], R1 ≤ h(p), R2 ≤ 1−p} • Theorem: For any (R1,R2) ∊ C2-WOM and ε > 0, there exists a linear code C satisfying R1(C) ≥ R1 − ε and R2(C) ≥ R2 − ε

  15. The Capacity of WOM Codes • The capacity region for two writes: C2-WOM = {(R1,R2) | ∃p ∊ [0,0.5], R1 ≤ h(p), R2 ≤ 1−p}, where h(p) is the binary entropy function, h(p) = −p·log(p) − (1−p)·log(1−p) • The capacity region for t writes: Ct-WOM = {(R1,…,Rt) | ∃p1,…,pt−1 ∊ [0,0.5], R1 ≤ h(p1), R2 ≤ (1−p1)h(p2), …, Rt−1 ≤ (1−p1)⋯(1−pt−2)h(pt−1), Rt ≤ (1−p1)⋯(1−pt−1)} • p1 – the probability of programming a cell on the 1st write: R1 ≤ h(p1) • p2 – the probability of programming a cell on the 2nd write (out of the cells left unprogrammed): R2 ≤ (1−p1)h(p2) • pt−1 – the probability of programming a cell on the (t−1)th write (out of the remainder): Rt−1 ≤ (1−p1)⋯(1−pt−2)h(pt−1) • Rt ≤ (1−p1)⋯(1−pt−1) because a (1−p1)⋯(1−pt−1) fraction of the cells is still unprogrammed before the last write • The maximum achievable sum-rate is log(t+1)
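
That the sum of these bounds reaches log(t+1) can be checked numerically. The sketch below assumes the choice p_i = 1/(t−i+2) (so p_1 = 1/(t+1), …, p_{t−1} = 1/3), which is the standard maximizing assignment for this region, not something stated on the slide:

```python
# Evaluate the region's upper bounds at p_i = 1/(t-i+2) and compare the
# resulting sum-rate with log2(t+1).
import math

def h(p):
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sum_rate(t):
    total, remaining = 0.0, 1.0        # remaining = (1-p1)...(1-p_{i-1})
    for i in range(1, t):
        p = 1 / (t - i + 2)            # assumed maximizing choice
        total += remaining * h(p)      # bound on R_i
        remaining *= 1 - p
    return total + remaining           # bound on R_t

for t in (1, 2, 3, 10):
    print(t, round(sum_rate(t), 6), round(math.log2(t + 1), 6))  # equal
```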

  16. The Capacity for Fixed Rate • The capacity region for two writes: C2-WOM = {(R1,R2) | ∃p ∊ [0,0.5], R1 ≤ h(p), R2 ≤ 1−p} • When forcing R1 = R2 we get h(p) = 1−p • The (numerical) solution is p = 0.2271; the sum-rate is 2(1−p) = 1.5458 • Multiple writes: a recursive formula gives the maximum achievable sum-rate: RF(1) = 1, RF(t+1) = (t+1)·root{h(z·t/RF(t)) − z}, where root{f(z)} is the minimum positive value z s.t. f(z) = 0 • For example: RF(2) = 2·root{h(z) − z} = 2·0.7729 = 1.5458, RF(3) = 3·root{h(2z/1.54) − z} = 3·0.6437 = 1.9311
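
The recursion is straightforward to evaluate by bisection: f(z) = h(z·t/RF(t)) − z is concave with f(0) = 0 and f positive just right of 0, so the "min positive root" is the unique sign change. A sketch:

```python
# Bisection evaluation of the fixed-rate recursion RF(t).
import math

def h(p):
    return 0.0 if p <= 0 or p >= 1 else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def root(f, lo, hi, iters=60):
    # Assumes f(lo) > 0 > f(hi); returns the sign change in between.
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def RF(t):
    r = 1.0                                        # RF(1) = 1
    for k in range(1, t):
        f = lambda z, r=r, k=k: h(z * k / r) - z
        r = (k + 1) * root(f, 1e-9, r / k)         # z*k/r <= 1 on this range
    return r

print(round(RF(2), 4))   # 1.5458 (z = 0.7729)
print(round(RF(3), 4))   # ~1.937 at full precision; the slide's 1.9311
                         # plugs in RF(2) rounded to 1.54
```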

  17. More Constructions • Shpilka, “New constructions of WOM codes using the Wozencraft ensemble” • An efficient capacity-achieving two-write construction • 1st write – program any vector of weight at most m (for a fixed m) • 2nd write – instead of using one matrix, use a set of matrices such that at least one of them succeeds on the second write • The index of the matrix used must be stored for the 2nd write – a negligible overhead if the number of matrices is small • Use the Wozencraft ensemble of linear codes to construct a good set of matrices

  18. Polar WOM Codes • A probabilistic approach to constructing WOM codes that works with high probability • On each write, encode more bits and write a vector that agrees with the cells already programmed • Can be combined with ECC, so the redundancy serves both rewriting and error correction • Another recent construction uses LDPC codes
