IST 402 Emerging Topics (Fall 2019)

Emerging Trends in Machine Learning

Course Description and Evaluation Criteria Document


Time | Paper Readings (to be finished before class) | Notes and Presentations
Week 1
Week 2 - Sep 3rd class
  1. A Non-Technical Introduction to Machine Learning
  2. Machine Learning for Everyone
Week 2 - Sep 5th class
  1. Machine Learning for Everyone (Continued)
  1. PowerPoint
  2. Notes
Week 3 - Sep 10th class
  1. The Human Face of Big Data. (Video)
Week 3 - Sep 12th class
  1. Machine Learning: Living in the Age of AI (Video)
  2. The Deep Learning and Artificial Intelligence Revolution (Video)
  3. What is Deep Learning? How will it Change the World? (Video)
  1. PowerPoint
  2. Notes
Week 4 - Sep 17th class
  1. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, p. 436, May 2015.
Week 4 - Sep 19th class
  1. Don’t Call AI “Magic”
  2. “Machine learning has become alchemy.” | Ali Rahimi, Google (Video)
  3. Deep Learning: A Critical Appraisal
  1. PowerPoint
  2. Notes
Week 5 - Sep 24th class
  1. Never mind killer robots—here are six real AI dangers to watch out for in 2019
  2. The GANfather: The man who’s given machines the gift of imagination
  3. Artificial Intelligence: It will kill us (Video)
  4. The Hidden Dangers of Artificial Intelligence: From Yelp to AI Backdoors - Ben Zhao on Big Brains (Video)
Week 5 - Sep 26th class
  1. Ali Rahimi's talk at NIPS (NIPS 2017 Test-of-Time Award presentation) (Video)
  2. Yann LeCun's response to Ali Rahimi's talk
  3. Ali Rahimi's response to Yann LeCun's response
  4. Reddit thread that discusses this “fight” between LeCun and Rahimi
  1. PowerPoint
  2. Notes
Week 6 - Oct 1st class
  1. The Great AI Debate - NIPS 2017 - Yann LeCun
  2. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Chapter 2.
Week 6 - Oct 3rd class
  1. Z. C. Lipton, “The mythos of model interpretability,” Commun. ACM, vol. 61, no. 10, pp. 35–43, 2018.
  2. F. Doshi-Velez and B. Kim, “Towards A Rigorous Science of Interpretable Machine Learning,” pp. 1–13, 2017.
  1. PowerPoint
  2. Notes
Week 7 - Oct 8th class
  1. A. A. Freitas, “Comprehensible classification models,” ACM SIGKDD Explor. Newsl., vol. 15, no. 1, pp. 1–10, 2014.
  2. Interpretability Methods in Machine Learning: A Brief Survey
  3. B. Goodman and S. Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’,” arXiv:1606.08813v3 [stat.ML], Aug. 2016, pp. 1–9.
Week 7 - Oct 10th class
  1. P. Bracke, A. Datta, and C. Jung, “Machine learning explainability in finance: an application to default risk analysis,” Staff Working Paper No. 816, 2019.
  2. C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead,” pp. 1–20.
  3. Video from Zoom.
  1. PowerPoint
Week 8 - Oct 15th class
  1. How big data is unfair: Understanding unintended sources of unfairness in data driven decision making
  2. The Hidden Biases in Big Data
  3. Technology Is Biased Too. How Do We Fix It?
  4. The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford
Week 8 - Oct 17th class
  1. Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks
  2. COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity.
  1. PowerPoint
  2. Notes
Week 9 - Oct 22nd class
  1. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
  2. FairML: Auditing Black-Box Predictive Models. GitHub webpage.
Week 9 - Oct 24th class
  1. Equality of Opportunity in Supervised Learning
  2. Attacking discrimination with smarter machine learning
  1. PowerPoint
Week 10 - Oct 29th class
  1. E. Awad et al., “The Moral Machine experiment,” Nature, vol. 563, no. 7729, pp. 59–64, 2018.
  2. The Moral Dilemmas of Self-Driving Cars
  3. What would the average human do?
  4. Researchers go after the biggest problem with self-driving cars
  5. Let’s all vote: should we crowdsource the morality of driverless cars?
Week 10 - Oct 31st class
  1. V. Conitzer, W. Sinnott-Armstrong, J. S. Borg, Y. Deng, and M. Kramer, “Moral decision making frameworks for artificial intelligence,” AAAI Workshop Technical Reports, WS-17-01–WS-17-15, pp. 105–109, 2017.
  2. R. Kim et al., “A Computational Model of Commonsense Moral Decision Making,” AIES 2018 - Proc. 2018 AAAI/ACM Conf. AI, Ethics, Soc., pp. 197–203, 2018.
  1. PowerPoint
  2. Notes
Week 11 - Nov 5th class
  1. Why AI Is Still Waiting For Its Ethics Transplant.
  2. K. B. Korb, “The Ethics of Artificial Intelligence,” Encycl. Inf. Ethics Secur., pp. 279–284, 2007.
  3. AdNauseam. (Browser extension software.)
  4. Building Consentful Tech
  5. M. Zook et al., “Ten simple rules for responsible big data research,” PLOS Comput. Biol., vol. 13, no. 3, pp. 1–10, 2017.
Week 11 - Nov 7th class
  1. P. Henderson et al., “Ethical Challenges in Data-Driven Dialogue Systems,” AIES 2018 - Proc. 2018 AAAI/ACM Conf. AI, Ethics, Soc., pp. 123–129, 2018.
  2. D. Vanderelst and A. Winfield, “The Dark Side of Ethical Robots,” AIES 2018 - Proc. 2018 AAAI/ACM Conf. AI, Ethics, Soc., no. 1, pp. 317–322, 2018.
  1. Notes
Week 12 - Nov 12th class
  1. DeepIndex website
  2. A. Kube, S. Das and P. J. Fowler, "Allocating Interventions Based on Predicted Outcomes: A Case Study on Homelessness Services," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 622-629, 2019.
Week 12 - Nov 14th class
  1. Amulya Yadav, Bryan Wilder, Eric Rice, Robin Petering, Jaih Craddock, Amanda Yoshioka-Maxwell, Mary Hemler, Laura Onasch-Vera, Milind Tambe, and Darlene Woo. 2017. Influence Maximization in the Field: The Arduous Journey from Emerging to Deployed Application. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS '17). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 150-158.
  2. N. Jean, M. Burke, M. Xie, W. M. Davis, D. B. Lobell, and S. Ermon, “Combining satellite imagery and machine learning to predict poverty,” Science, vol. 353, no. 6301, pp. 790–794, Aug. 2016.
Week 13 - Nov 19th class
  1. Garren Gaut, Andrea Navarrete, Laila Wahedi, Paul van der Boor, Adolfo De Unánue, Jorge Díaz, Eduardo Clark, and Rayid Ghani. 2018. Improving Government Response to Citizen Requests Online. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS '18). ACM, New York, NY, USA, Article 13, 10 pages.
  2. Eric Potash, Joe Brew, Alexander Loewi, Subhabrata Majumdar, Andrew Reece, Joe Walsh, Eric Rozier, Emile Jorgenson, Raed Mansour, and Rayid Ghani. 2015. Predictive Modeling for Public Health: Preventing Childhood Lead Poisoning. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '15). ACM, New York, NY, USA, 2039-2047.
Week 13 - Nov 21st class
  1. Avishek Kumar, Syed Ali Asad Rizvi, Benjamin Brooks, R. Ali Vanderveld, Kevin H. Wilson, Chad Kenney, Sam Edelstein, Adria Finch, Andrew Maxwell, Joe Zuckerbraun, and Rayid Ghani. 2018. Using Machine Learning to Assess the Risk of and Prevent Water Main Breaks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18). ACM, New York, NY, USA, 472-480.
  2. Debarun Kar, Benjamin Ford, Shahrzad Gholami, Fei Fang, Andrew Plumptre, Milind Tambe, Margaret Driciru, Fred Wanyama, Aggrey Rwetsiba, Mustapha Nsubaga, and Joshua Mabonga. 2017. Cloudy with a Chance of Poaching: Adversary Behavior Modeling and Forecasting with Real-World Poaching Data. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems (AAMAS '17). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 159-167.

Site design modified by Dr. Amulya Yadav and Yu Liang.