Kimmo Kekäläinen / KJK-Tieto Oy
22 September 2014, Eastbourne, England

New tricks to old dogs

"It's not difficult to teach new tricks to old dogs. What is difficult is to make them forget the old ones."
Paul Berry, APL1984 Conference, Finland

Development is fast. Changes are slow.
New things in the long line of versions (6–14, 1992–2014)
• Base GUI objects (1992), ⎕NA => DDE, ...
• Grid and namespaces in 1994 (version 7)
• Dfns
• OLE, SQAPL/ODBC, TCP/IP, Conga, web services, ...
• Control structures, OO, classes, ...
• Unicode, SALT, user commands, ⎕XML, ...
• Extensions to the language and system functions, ...
• Improvements to the session working environment
• ...
Some in use, some not
Reasons for ignoring or delaying the adoption of new techniques
• Human
  • too busy, too lazy, or maybe both
• Practical: "If it works, don't touch it"
  • Many general-purpose utilities were written long ago, have been tested by years of heavy use, and are safe and robust.
  • Clients generally consider Dyalog applications fast compared with those built by mainstream vendors and technologies.
  • Lack of functionality has never been a major problem: APL normally allows many ways to the same goal (some faster, some more elegant, etc.).
• Economic (the consultant's situation)
  • Clients would rather pay me for playing the instrument than for tuning it.
• General hardware development => speed and capacity
  • ... which has exploded into gigabytes and gigahertz at less and less cost
• And so on ...
Conclusion
Dyalog has matured over time as a system development environment, and at some point you found that you could do with it most of the things ordinary clients wanted when they asked you for a Windows-based solution to their daily problems.*
* which, by the way, mostly were (and are) not rocket science, just practical needs
Productivity
• ... is the key success factor that has kept us alive under the constant pressure of an ever-changing mainstream. We have seen many "APL killers" come and go while we are still here.
• ... is being able to provide a solution
  • on time
  • at a decent cost
  • that works the way it is supposed to
Dyalog v14
In my case, after many years of sleep, it tempts you to open your toolbox once again for evaluation and rewriting:
• Key operator ⌸ (⎕U2338 in Classic)
• Rank ⍤ (⎕U2364 in Classic)
• Tally ≢
• Row extension to iota ⍳
• Support for the use of inverted tables (8⌶)
• etc.
A small sketch of the first few is shown below.
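To make these concrete, here is a minimal sketch with toy data (my own illustration, not from the presentation) of Key, Tally and the row extension to iota:

      codes←'ABACB' ⋄ amts←10 20 30 40 50
      codes {⍺,+⌿⍵}⌸ amts     ⍝ Key: sum the amounts per unique code
A 40
B 70
C 40
      ≢amts                   ⍝ Tally: the number of major cells
5
      m←3 2⍴1 2 3 4 5 6
      m⍳1 2⍴3 4               ⍝ dyadic ⍳ with a matrix left argument looks up rows
2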
Some benchmarks, using Dyalog 14 Beta Classic, of a calculation with the Key operator vs traditional APL code

    ∇ z←ctl key_sum dm
[1]   z←ctl{⍺,+⌿⍵}⎕U2338 dm
[2]   z←z[⍋z[;1];]         ⍝ extracting the equivalent part
    ∇

    ∇ z←ctl trad_calc_sum dm;i;p;r;l;x
[1]
[2]   :If 1=⍴r←⍴dm         ⍝ ensuring
[3]       dm←(r,1)⍴dm      ⍝ that dm is a matrix
[4]   :EndIf
[5]
[6]   ⍝ sort ascending
[7]   i←⍋ctl ⋄ ctl←ctl[i] ⋄ dm←dm[i;]
[8]   ⍝ calculating field parts from the FINNAPL Idiom Library
[9]   p←(1↓i)-¯1↓i←(l/⍳⍴l),⎕IO+⍴l←1,(1↓ctl)≠¯1↓ctl
[10]  ⍝ ... as is the logic below for summing the parts
[11]  z←(∪ctl),(~z=x)×z-x←¯1 0↓0,[1]z←(+⍀dm)[+\p;]
    ∇

vs calc_sum from my utilities
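For reference, a sketch of how such a comparison can be run (cmpx comes from the dfns workspace shipped with Dyalog; testdata is assumed here to be an n×3 numeric matrix with the key in column 1):

      'cmpx' ⎕CY 'dfns'
      cmpx 'testdata[;1] key_sum testdata[;2 3]' 'testdata[;1] trad_calc_sum testdata[;2 3]'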
The results were a bit unexpected:

      BenchMark 100
Amount of rows: 100
  keysums←testdata[;1] key_sum testdata[;2 3]         → 6.1E¯5 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 5.1E¯5 | -16% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 2.5E¯5 | -59% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1

      BenchMark 10000
Amount of rows: 10000
  keysums←testdata[;1] key_sum testdata[;2 3]         → 1.2E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 9.4E¯4 | -23% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 9.0E¯4 | -26% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1

      BenchMark 100000
Amount of rows: 100000
  keysums←testdata[;1] key_sum testdata[;2 3]         → 1.2E¯2 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 8.9E¯3 | -27% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 8.8E¯3 | -27% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1

      BenchMark 1000000
Amount of rows: 1000000
  keysums←testdata[;1] key_sum testdata[;2 3]         → 1.3E¯1 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 8.9E¯2 | -30% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 9.0E¯2 | -30% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1

      BenchMark 10000000
Amount of rows: 10000000
  keysums←testdata[;1] key_sum testdata[;2 3]         → 1.3E0  |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 9.2E¯1 | -30% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 9.2E¯1 | -31% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1

      BenchMark ''
Amount of rows: 39043672
  keysums←testdata[;1] key_sum testdata[;2 3]         → 5.3E0  |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  oldsums←testdata[;1] #.Clc.calc_sum testdata[;2 3]  → 3.7E0  | -29% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  tradsums←testdata[;1] trad_calc_sum testdata[;2 3]  → 3.6E0  | -32% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
      (⊂keysums)≡¨oldsums tradsums
1 1
Time to contact Morten, who passed the ball to Roger!
Roger's key_sum1

    ∇ z←ctl key_sum1 dm;u;r
[1]   :If 1=⍴r←⍴dm ⋄ dm←(r,1)⍴dm ⋄ :EndIf
[2]   u←{⍵[⍋⍵]}∪ctl
[3]   z←u,(u,ctl){+⌿⍵}⎕U2338(((≢u),1↓⍴dm)⍴0)⍪dm
    ∇

and his comments preceding the benchmark:
• The cmpx function already includes a check on the results of the expressions to be timed and puts a * on the output if they are different.
• I took the indexing out of the individual expressions to bring the timing differences into sharper focus.
• The key operator includes "special code" for specific operands. Currently {+⌿⍵}⌸ has special code but {⍺,+⌿⍵}⌸ does not. We can change that in the future with usage experience.
• {⍵[⍋⍵]} and {⍵[⍋⍵;]} are idioms (since 12.x?). key_sum1 exploits this but neither key_sum nor trad_calc_sum do.
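A small worked example with toy data (my own illustration, not from the slides) of the trick in key_sum1: prepending one all-zero row per key, tagged with the sorted unique keys u, guarantees that every key is present and that Key delivers the groups in the order of u, while the zero rows do not change the sums:

      ctl←3 1 3 2 ⋄ dm←4 1⍴10 20 30 40
      u←{⍵[⍋⍵]}∪ctl                        ⍝ 1 2 3
      u,(u,ctl){+⌿⍵}⌸(((≢u),1↓⍴dm)⍴0)⍪dm
1 20
2 40
3 40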
Now the results looked like this:

      BenchMark1¨10*⍳5
Amount of rows: 10
  tradsums←x trad_calc_sum y → 1.5E¯5 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  keysums←x key_sum y        → 2.8E¯5 |  +90% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  k1←x key_sum1 y            → 7.4E¯6 |  -50% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
Amount of rows: 100
  tradsums←x trad_calc_sum y → 2.1E¯5 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  keysums←x key_sum y        → 4.3E¯5 | +110% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  k1←x key_sum1 y            → 9.1E¯6 |  -56% ⎕⎕⎕⎕⎕⎕⎕⎕
Amount of rows: 1000
  tradsums←x trad_calc_sum y → 7.8E¯5 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  keysums←x key_sum y        → 1.1E¯4 |  +35% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  k1←x key_sum1 y            → 2.2E¯5 |  -73% ⎕⎕⎕⎕⎕⎕⎕⎕
Amount of rows: 10000
  tradsums←x trad_calc_sum y → 6.3E¯4 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  keysums←x key_sum y        → 7.2E¯4 |  +13% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  k1←x key_sum1 y            → 1.5E¯4 |  -77% ⎕⎕⎕⎕⎕⎕⎕⎕
Amount of rows: 100000
  tradsums←x trad_calc_sum y → 6.2E¯3 |    0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  keysums←x key_sum y        → 6.9E¯3 |  +11% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  k1←x key_sum1 y            → 1.4E¯3 |  -77% ⎕⎕⎕⎕⎕⎕⎕⎕
Some surprises
• Sorting is faster than grading, even though grading is part of sorting?!
• Should this make sense?

      numbers←100?100
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 1.1E¯6 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 1.6E¯6 | +38% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

• Well, OK, just as expected!
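As a side note on why cmpx marks the second expression with * (the results differ, not just the timings): grade returns a permutation, while the idiom returns the sorted data itself. A minimal sketch with toy data:

      numbers←3 1 4 5 2
      ⍋numbers           ⍝ the permutation that sorts numbers
2 5 1 3 4
      {⍵[⍋⍵]} numbers    ⍝ the sorted values
1 2 3 4 5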
But with growing amounts:

      numbers←1000?1000          ⍝ a thousand
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 1.1E¯5 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 1.3E¯5 | +21% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

      numbers←10000?10000        ⍝ ten thousand
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 1.3E¯4 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 1.5E¯4 | +17% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

      numbers←100000?100000      ⍝ a hundred thousand
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 8.2E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 1.7E¯3 | -79% ⎕⎕⎕⎕⎕⎕⎕⎕

      numbers←1000000?1000000    ⍝ a million
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 1.1E¯1 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 2.7E¯2 | -77% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

      numbers←10000000?10000000  ⍝ ten million
      cmpx '⍋numbers' '{⍵[⍋⍵]} numbers'
  ⍋numbers         → 2.0E0  |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
* {⍵[⍋⍵]} numbers  → 4.1E¯1 | -80% ⎕⎕⎕⎕⎕⎕⎕⎕

A likely explanation: {⍵[⍋⍵]} is recognised as an idiom (as Roger noted) and executed by dedicated sorting code, so for large arguments it beats grading followed by indexing, and even the grade on its own.
Inverted tables
• A table stored as a nested vector with one item per column; each column is either a vector or a matrix (typically a numeric vector or a character matrix)
• Fits well with the old APL convention of storing data in component files as a logical table: each column in its own component, with the first component serving as a directory that includes the column names, etc.
• An SQAPL database fetch with the option "columnwise" returns data blocks this way, for example
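A minimal sketch with toy data (my own illustration, not from the presentation) of what such an inverted table looks like in the workspace, and of picking rows from it:

      names←3 5⍴'AliceBob  Carol'    ⍝ character column as a matrix
      ages←34 28 45                  ⍝ numeric column as a vector
      salary←52000 41000 63000
      inv←names ages salary          ⍝ the inverted table: one item per column
      ⍴¨inv
 3 5  3  3
      picked←(⊂⊂1 3)⌷¨inv            ⍝ rows 1 and 3 from every column
      ⍴¨picked
 2 5  2  2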
Inverted tables

      tn←'o' 'c:\nexttab\demodata.tbl' #.Cmp.c_m ''
      vars←'r' tn #.Cmp.c_m '?'
      vars
LASTNAME FIRSTNAME SEX DATEOFBIRTH DIVISION POSITION SALARY PROVISION STARTDATE
      matrix←'m' tn #.Cmp.c_m '' '*'
      ⍴matrix
49 9
      inverted←'r' tn #.Cmp.c_m '' '*'
      ⍴inverted
9
      ⍴¨inverted
 49 9  49 8  49 49 49 49 49 49 49
      ⎕dr¨inverted
82 82 83 323 163 163 323 163 323
      ⎕size 'matrix'
9820
      ⎕size 'inverted'    ⍝ about five times smaller than the nested matrix
1980
Optimized ⍳ for inverted tables: X (8⌶) Y

      some_picked←3 6 9 Inv_get_by_index inverted
      inverted (8⌶) some_picked
3 6 9
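A small sketch with toy data (my own, not from the presentation) of the same primitive: both arguments are inverted tables with matching columns, and the result gives, for each row of the right argument, its index among the rows of the left argument:

      keys←(3 5⍴'AliceBob  Carol')(34 28 45)   ⍝ a two-column inverted table
      look←(2 5⍴'CarolBob  ')(45 28)           ⍝ two rows to look up
      keys (8⌶) look
3 2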
Picking data from an inverted table

      bool←49⍴0 ⋄ ind←2 3 4 6 ⋄ bool[ind]←1
      ⎕vr 'Inv_get_by_index'
    ∇ z←indexes Inv_get_by_index invtable
[1]   z←(⊂⊂indexes)⌷¨invtable
    ∇
      ⎕vr 'Inv_get_by_scan'
    ∇ z←loc Inv_get_by_scan invtable
[1]   z←(⊂loc)⌿¨invtable
    ∇
      ind_picked←ind Inv_get_by_index inverted
      scan_picked←bool Inv_get_by_scan inverted
      ind_picked≡scan_picked
1
      cmpx 'x←ind Inv_get_by_index inverted' 'x←bool Inv_get_by_scan inverted'
  x←ind Inv_get_by_index inverted  → 5.2E¯6 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  x←bool Inv_get_by_scan inverted  → 4.2E¯6 | -20% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

    ∇ z←Inv2Mtx invmat;fz
[1]   z←⍉↑{1<⍴⍴⍵:↓⍵ ⋄ ⍵}¨invmat
    ∇
      Inv2Mtx ind_picked
 Smith   Lloyd  1 19611012 200 2000 37000    0 19960701
 Brown   Kim    2 19670721 300 3000 27000 8000 20010924
 Wilson  Peter  1 19700701 400 4000 32000    0 20011011
 Davis   Ken    1 19750308 200 2200 28000    0 19981101

Or the dfn from the manual, which uses the Rank operator and needs ⎕ML≤1:
      unvert←{⍉ ↑ ⊂⍤¯1 ¨ ⍵}
With larger amounts of data, cmpx gives about the same execution speed for Inv2Mtx and unvert.
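For completeness, a possible inverse of Inv2Mtx (my own sketch, not part of the utilities; like unvert it assumes ⎕ML≤1 so that monadic ↑ means Mix): split the nested matrix back into an inverted table, remixing character columns into matrices and leaving numeric columns as vectors:

      Mtx2Inv←{{1≥|≡⍵:⍵ ⋄ ↑⍵}¨↓⍉⍵}   ⍝ simple columns stay vectors, nested ones are mixed
      inverted≡Mtx2Inv Inv2Mtx inverted
1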
The amount of data to pick and the size of the inverted table seem to matter

      tn←'o' 'h:\konserni\data\2012100.dcf' #.Cmp.c_m ''
      data←'r' tn #.Cmp.c_m '' 'org,tl,selite'
      ]display ⍴¨data
┌→──────────────────────────────┐
│ ┌→─────┐ ┌→─────┐ ┌→────────┐ │
│ │236970│ │236970│ │236970 30│ │
│ └~─────┘ └~─────┘ └~────────┘ │
└∊──────────────────────────────┘
      ind←100000?236970
      ind←{⍵[⍋⍵]} ind
      bool←236970⍴0
      bool[ind]←1
      cmpx 'x←ind Inv_get_by_index data' 'y←bool Inv_get_by_scan data'
  x←ind Inv_get_by_index data  → 4.3E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  y←bool Inv_get_by_scan data  → 3.0E¯3 | -31% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕

      ind←150000?236970
      ind←{⍵[⍋⍵]} ind
      bool←236970⍴0
      bool[ind]←1
      cmpx 'x←ind Inv_get_by_index data' 'y←bool Inv_get_by_scan data'
  x←ind Inv_get_by_index data  → 6.2E¯3 |   0% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
  y←bool Inv_get_by_scan data  → 3.3E¯3 | -47% ⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕⎕
About speed! What is fast enough?