
Research perspectives on Artificial Intelligence, transparency, privacy & law

Explore how AI struggles in the real world through five key challenges: games, data quality, black boxes, unfair models, and shortcuts. Delve into the importance of robustness, data engineering, interpretability, fairness, and auditing in AI applications under legal frameworks.

Presentation Transcript


  1. Research perspectives on Artificial Intelligence, transparency, privacy & law. Anders Løland, assistant research director, co-director. NeIC 2019

  2. amazon.com $106.23

  3. Seller 1 = 0.9983 × Seller 2; Seller 2 = 1.270589 × Seller 1

  4. amazon.com $23,698,655.93 https://www.wired.com/2011/04/amazon-flies-24-million/
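
A minimal sketch of the feedback loop implied by the two pricing rules on slide 3. The multipliers are taken from that slide; the starting price and the number of rounds are illustrative assumptions.

```python
# A minimal sketch of the feedback loop implied by the two pricing rules on
# slide 3. The multipliers (0.9983 and 1.270589) are from the slide; the
# starting price and the number of rounds are illustrative assumptions.

def run_pricing_bots(start_price: float = 106.23, rounds: int = 52) -> None:
    seller1 = start_price
    for day in range(1, rounds + 1):
        seller2 = 1.270589 * seller1  # seller 2 re-prices just above seller 1
        seller1 = 0.9983 * seller2    # seller 1 undercuts seller 2 slightly
        if day % 10 == 0 or day == rounds:
            print(f"round {day:2d}: seller 1 = ${seller1:,.2f}")

run_pricing_bots()
# Each round multiplies the price by 0.9983 * 1.270589 ≈ 1.268, so it roughly
# quadruples every six rounds and passes $23 million after about 50 rounds.
```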

  5. AI = a combination of myriads of such simple algorithms

  6. why AI struggles in the real world – five key challenges: #1 games, #2 data, #3 black boxes, #4 unfair models, #5 shortcuts

  7. reason #1: the real world is not a game

  8. Financial Times, May 24 2018

  9. Silver, David, et al. "Mastering chess and shogi by self-play with a general reinforcement learning algorithm." arXiv preprint (2017).

  10. 4 hours = 20 million training games

  11. 2 hours/game = 1.7 million days of chess
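
A back-of-the-envelope check of slides 10–11, assuming the 2 hours/game pace of human play given on the slide:

```python
# Back-of-the-envelope check of slides 10-11: 20 million self-play games,
# at the slide's assumed human pace of roughly 2 hours per game.
games = 20_000_000
hours_per_game = 2
total_hours = games * hours_per_game
days = total_hours / 24
years = days / 365.25
print(f"{days:,.0f} days ≈ {years:,.0f} years of round-the-clock chess")
# → 1,666,667 days (the slide's "1.7 million days"), about 4,563 years
```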

  12. Simple rules

  13. Complicated rules

  14. We would need 20 million drives: ½ crashes, ½ successful

  15. Simply not possible to simulate realistic drives

  16. Photo: Bjørn Jarle Kvande

  17. key challenge #1: AI must be more robust

  18. reason #2: bad data

  19. Simulated sensor data Alarm

  20. Errors must be annotated. Real-world sensor data: engine temperature (°C). Alarm? Missing data. Measurement error?
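
A minimal sketch of how the issues marked on slide 20 – gaps and implausible readings in an engine-temperature series – might be flagged for annotation. The column name, thresholds and data values are illustrative assumptions, not part of the original.

```python
# A minimal sketch of flagging the issues marked on slide 20: gaps and
# implausible readings in an engine-temperature series. The column name,
# thresholds and data values are illustrative assumptions.
import numpy as np
import pandas as pd

temps = pd.Series([82.1, 83.0, np.nan, 84.2, 250.0, 85.1, np.nan, 86.3],
                  name="engine_temperature_c")

flags = pd.DataFrame({
    "value": temps,
    "missing": temps.isna(),                        # gaps in the sensor feed
    "out_of_range": (temps < -40) | (temps > 150),  # physically implausible readings
})
flags["needs_annotation"] = flags["missing"] | flags["out_of_range"]
print(flags)
```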

  21. a whole new field is emerging: data engineering

  22. key challenge #2: how can we ensure quality data – and where can you study to be a data engineer?

  23. reason #3: artificial intelligence is a black box

  24. “Big data is a step forward,” […] “But our problems are not lack of access to data, but understanding them. [Big data] is very useful if I want to find out something without going to the library, but I have to understand it, and that’s the problem.” – Noam Chomsky in 2013 https://www.cnbc.com/2013/11/22/a-brief-history-of-big-data-the-noam-chomsky-way.html

  25. AI: a probability of 30% that you will default on your loan

  26. AI: loan application denied

  27. you:

  28. Zhang, Wu and Zhu (2018). Interpretable CNNs. The IEEE Conference on Computer Vision and Pattern Recognition

  29. major problem: dependent variables can/will cause havoc
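
Reading slide 29 as referring to correlated input variables, a minimal sketch of the havoc they cause for explanations: with two nearly identical features, the fitted coefficients – a naive per-variable explanation – swing wildly between bootstrap fits even though their sum, which is all the model really uses, stays stable. All data and model choices here are illustrative assumptions.

```python
# A minimal sketch of the havoc correlated ("dependent") input variables cause
# for explanations. With two nearly identical features, the fitted coefficients
# (a naive per-variable explanation) swing wildly between bootstrap fits, while
# their sum (what the model actually uses) stays stable. All data and model
# choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)      # nearly identical to x1
y = x1 + x2 + rng.normal(scale=0.5, size=n)   # truth: both variables matter equally

for trial in range(3):
    idx = rng.integers(0, n, size=n)           # bootstrap resample
    X = np.column_stack([x1[idx], x2[idx]])
    coef, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    print(f"fit {trial}: x1 coefficient = {coef[0]:7.2f}, "
          f"x2 coefficient = {coef[1]:7.2f}, sum = {coef.sum():.2f}")
```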

  30. what can happen if AI gives you a wrong explanation?

  31. key challenge #3: open the black box to everyone

  32. reason #4: artificial intelligence is unfair

  33. What is fair?

  34. GDPR 2016/679, Recital 71: […] In order to ensure fair and transparent processing […], the controller should use appropriate mathematical or statistical procedures for the profiling,

  35. GDPR 2016/679, Recital 71 (cont.): […] and that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, […] or that result in measures having such an effect.

  36. Courtland: "The bias detectives". Nature, vol. 558 (2018). «high risk»: 2/3 chance of being rearrested after two years («predictive parity»)

  37. In one group, 3/10 are rearrested overall and 2/3 of the high-risk group are rearrested; in the other group, 6/10 are rearrested overall and 2/3 of the high-risk group are rearrested

  38. 1/7 = 14% incorrectly classified as high risk in the first group; 2/4 = 50% incorrectly classified as high risk in the second group – false positives are more likely in this group
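
A minimal sketch reproducing the arithmetic of slides 36–38. The group compositions below are an assumption chosen to match the fractions on the slides (3/10 vs 6/10 rearrested overall, 2/3 of each high-risk group rearrested); the false-positive rates of 1/7 and 2/4 then follow.

```python
# Reproduces the arithmetic of slides 36-38. Each person is a pair
# (labelled_high_risk, rearrested); the counts are an assumption chosen to
# match the fractions on the slides (3/10 vs 6/10 rearrested, 2/3 predictive parity).

def rates(group):
    high_risk = [p for p in group if p[0]]
    not_rearrested = [p for p in group if not p[1]]
    predictive_parity = sum(p[1] for p in high_risk) / len(high_risk)
    false_positive_rate = sum(p[0] for p in not_rearrested) / len(not_rearrested)
    return predictive_parity, false_positive_rate

group_a = [(True, True)] * 2 + [(True, False)] + [(False, True)] + [(False, False)] * 6
group_b = [(True, True)] * 4 + [(True, False)] * 2 + [(False, True)] * 2 + [(False, False)] * 2

for name, group in [("group A (3/10 rearrested)", group_a),
                    ("group B (6/10 rearrested)", group_b)]:
    pp, fpr = rates(group)
    print(f"{name}: P(rearrest | high risk) = {pp:.2f}, false-positive rate = {fpr:.0%}")
# → predictive parity is 0.67 for both groups, but the false-positive rates
#   are 14% (1/7) and 50% (2/4)
```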

  39. as long as the members of the two groups are rearrested at different rates, it is difficult to achieve predictive parity («2/3 of high risk group rearrested») and equal false-positive rates

  40. mathematically impossible to satisfy predictive parity and equal false-positive rates and equal false-negative rates

  41. universal fairness is impossible – help!

  42. COMPAS – Correctional Offender Management Profiling for Alternative Sanctions. Dressel and Farid: "The accuracy, fairness, and limits of predicting recidivism." Science Advances 4.1 (2018)

  43. COMPAS: calibrated probabilities. Critics: unequal false-positive/false-negative rates

  44. key challenge #4: we must agree on what is fair + how to audit AI?

  45. reason #5: artificial intelligence finds the shortest path
