IST 402 Emerging Topics (Fall 2019)

Emerging Trends in Machine Learning

Course Description and Evaluation Criteria Document


Time | Paper Readings (to be finished before class) | Notes and Presentations
Week 1
Week 2 - Sep 3rd class
  1. A Non-Technical Introduction to Machine Learning
  2. Machine Learning for Everyone
Week 2 - Sep 5th class
  1. Machine Learning for Everyone (Continued)
  1. PowerPoint
  2. Notes
Week 3 - Sep 10th class
  1. The Human Face of Big Data. (Video)
Week 3 - Sep 12th class
  1. Machine Learning: Living in the Age of AI (Video)
  2. The Deep Learning and Artificial Intelligence Revolution (Video)
  3. What is Deep Learning? How will it Change the World? (Video)
  1. PowerPoint
  2. Notes
Week 4 - Sep 17th class
  1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, p. 436, May 2015.
Week 4 - Sep 19th class
  1. Don’t Call AI “Magic”
  2. “Machine learning has become alchemy.” | Ali Rahimi, Google (Video)
  3. Deep Learning: A Critical Appraisal
  1. PowerPoint
  2. Notes
Week 5 - Sep 24th class
  1. Never mind killer robots—here are six real AI dangers to watch out for in 2019
  2. The GANfather: The man who’s given machines the gift of imagination
  3. Artificial Intelligence: It will kill us (Video)
  4. The Hidden Dangers of Artificial Intelligence: From Yelp to AI Backdoors - Ben Zhao on Big Brains (Video)
Week 5 - Sep 26th class
  1. Ali Rahimi’s talk at NIPS (NIPS 2017 Test-of-time award presentation) (Video)
  2. Yann LeCun’s response to Ali Rahimi’s talk
  3. Ali Rahimi’s response to Yann LeCun’s response
  4. Reddit thread discussing this “fight” between LeCun and Rahimi
  1. PowerPoint
Week 6 - Oct 1st class
  1. The Great AI Debate - NIPS 2017 - Yann LeCun
  2. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Chapter 2.
Week 6 - Oct 3rd class
  1. Z. C. Lipton, “The mythos of model interpretability,” Commun. ACM, vol. 61, no. 10, pp. 35–43, 2018.
  2. F. Doshi-Velez and B. Kim, “Towards A Rigorous Science of Interpretable Machine Learning,” pp. 1–13, 2017.
  1. PowerPoint
  2. Notes
Week 7 - Oct 8th class
  1. A. A. Freitas, “Comprehensible classification models,” ACM SIGKDD Explor. Newsl., vol. 15, no. 1, pp. 1–10, 2014.
  2. Interpretability Methods in Machine Learning: A Brief Survey
  3. B. Goodman and S. Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’,” arXiv:1606.08813v3 [stat.ML], Aug 2016, pp. 1–9.
Week 7 - Oct 10th class
  1. P. Bracke, A. Datta, and C. Jung, “Machine learning explainability in finance: an application to default risk analysis,” Staff Working Paper No. 816, 2019.
  2. C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead,” pp. 1–20.
  3. Video from Zoom.
Week 8 - Oct 15th class
  1. How big data is unfair: Understanding unintended sources of unfairness in data-driven decision making
  2. The Hidden Biases in Big Data
  3. Technology Is Biased Too. How Do We Fix It?
  4. The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford
Week 8 - Oct 17th class
  1. Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks
  2. COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity.
Week 9 - Oct 22nd class
  1. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
  2. FairML: Auditing Black-Box Predictive Models. Github Webpage.
Week 9 - Oct 24th class
  1. Equality of Opportunity in Supervised Learning
  2. Attacking discrimination with smarter machine learning

Site design modified by Dr. Amulya Yadav and Yu Liang.