Synthetic Data Simulation for Autonomous Driving: Duke University (Duke Datathon), October 2020
Algorithmic Bias & High-Stakes Gambling: Summer STEM Institute, July 2020
An introduction to algorithmic bias (racial, gender, and socioeconomic bias in algorithmic decision making) in models used for high-stakes decisions, ranging from predicting recidivism to approving loans to forecasting disease. Programs and algorithms inherit biases from unrepresentative data and from the subconscious decisions of their designers, so researchers should understand algorithmic bias and its implications in order to limit or eliminate it in their models.
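One way bias from unrepresentative data can be illustrated concretely: a minimal, hypothetical sketch in which a single-threshold classifier is trained on data where group "B" is badly underrepresented. All data, group names, and the `fit_threshold` helper are invented for illustration and are not from any real study.

```python
# Hypothetical sketch: a threshold classifier fit on data where group "B"
# is underrepresented. All data here are invented for illustration.

def fit_threshold(data):
    """Pick the threshold that minimizes overall training error."""
    candidates = sorted({x for x, _, _ in data})
    best_t, best_err = None, float("inf")
    for t in candidates:
        err = sum(1 for x, y, _ in data if (x >= t) != (y == 1))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Group A (majority): true decision boundary near x = 5.
group_a = [(4.0, 0, "A")] * 45 + [(6.0, 1, "A")] * 45
# Group B (minority): true decision boundary near x = 3.
group_b = [(2.5, 0, "B")] * 5 + [(3.5, 1, "B")] * 5

data = group_a + group_b
t = fit_threshold(data)

def error_rate(group):
    return sum(1 for x, y, _ in group if (x >= t) != (y == 1)) / len(group)

# Minimizing *overall* error lets the majority group dominate: the model
# is perfect on group A and badly wrong on group B.
print(f"learned threshold: {t}")
print(f"error rate, group A: {error_rate(group_a):.0%}")
print(f"error rate, group B: {error_rate(group_b):.0%}")
```

The model is "accurate" in aggregate, which is exactly why per-group evaluation matters.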
An Introduction to Gradient Boosting Decision Trees: Duke University (Phoenix Project), July 2020
A theoretical introduction to gradient boosting decision trees (GBDT), including a general overview of decision trees and a comparison with other tree-based methods (C4.5, CART, and random forests). Why does gradient boosting work, and why are decision trees a good base learner? What are the trade-offs versus other tree-based methods? Which hyperparameters are most important to tune? We will then briefly look at two commonly used GBDT libraries, XGBoost and LightGBM: what features distinguish the two?
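The core idea behind gradient boosting can be sketched in a few lines: repeatedly fit a weak learner (here a depth-1 "stump") to the current residuals, which are the negative gradient of squared error, and add it with a small learning rate. This toy sketch and its data are invented for illustration; real libraries such as XGBoost and LightGBM add regularization, second-order gradients, histogram-based splits, and much more.

```python
# Toy sketch of gradient boosting for regression with decision stumps.
# Purely illustrative; data and hyperparameters are invented.

def fit_stump(xs, residuals):
    """Best single split minimizing squared error on the residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x < t]
        right = [r for x, r in zip(xs, residuals) if x >= t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x < t else rmean

def gradient_boost(xs, ys, n_rounds=50, lr=0.3):
    """Each round fits a stump to the residuals (the negative gradient
    of squared error) and adds it, shrunk by the learning rate."""
    base = sum(ys) / len(ys)
    stumps = []
    for _ in range(n_rounds):
        preds = [base + lr * sum(s(x) for s in stumps) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Toy target: a step function no single stump can capture alone.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 1, 1, 4, 4, 5, 5]
model = gradient_boost(xs, ys)
print([round(model(x), 2) for x in xs])
```

Note how the ensemble of binary splits, accumulated additively, fits a multi-level function that any one stump cannot.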
An Introduction to Machine Learning: Duke University (Phoenix Project), June 2020
A non-technical introduction to machine learning, geared towards freshmen and sophomores who are curious about machine learning but have not yet had any exposure to it. The talk will introduce machine learning, including supervised learning (regression and classification), unsupervised learning (clustering), and common evaluation methods. When is machine learning useful? What models should we consider? How much data do we need? We will also discuss modern achievements of applied machine learning in computer vision, natural language processing, and reinforcement learning, from real-time object detection to language translation to AlphaGo.
An Introduction to Machine Learning: Durham Technical Community College, February 2020
The talk will introduce machine learning, including supervised learning (regression and classification), unsupervised learning (clustering), and common evaluation methods. When is machine learning useful? What models should we consider? How much data do we need? We will also discuss modern achievements of applied machine learning in computer vision, natural language processing, and reinforcement learning, from real-time object detection to language translation to AlphaGo.
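Supervised classification, one of the topics in the introductory talks above, can be shown in its simplest form with a 1-nearest-neighbor classifier in plain Python. The feature values and labels below are invented for illustration (loosely in the spirit of the classic iris dataset), not drawn from any real source.

```python
# Minimal supervised classification: 1-nearest-neighbor.
# All data below are invented for illustration.

def nearest_neighbor(train, point):
    """Predict the label of the closest training example (squared
    Euclidean distance; the square root is unnecessary for argmin)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# (petal length, petal width) -> class; values made up.
train = [
    ((1.4, 0.2), "small"),
    ((1.3, 0.3), "small"),
    ((4.7, 1.4), "large"),
    ((4.9, 1.5), "large"),
]

print(nearest_neighbor(train, (1.5, 0.25)))  # "small"
print(nearest_neighbor(train, (5.0, 1.6)))   # "large"
```

Even this trivial model raises the talk's core questions: how much labeled data do we need, and how do we evaluate on examples the model has never seen?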
Elements of Machine Learning: Duke University, Fall 2018 (COMPSCI 371D)
Fundamental concepts of supervised machine learning, with sample algorithms and applications. Focuses on how to think about machine learning problems and solutions, rather than on a systematic coverage of techniques. Serves as an introduction to the methods of machine learning.
Applied Machine Learning: Duke University, Fall 2018 (HOUSECS 59-01)
Introduction to topics in machine learning from an applied perspective. The course assumes basic fluency in programming and mathematics at the single-variable calculus level, and covers specific machine learning concepts (listed below), their historical origins, and their existing and potential applications to modern society. Concepts studied include: classification (naive Bayes, support vector machines, kernel methods, and neural networks), regression (spline interpolation and linear and polynomial regression), mixture-of-Gaussians clustering, object detection (convolutional neural networks, feature extraction, edge detection, and processing methods), principal component analysis, and evaluation of machine learning models.
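One concept from the list above, linear regression, fits in a few lines using the textbook ordinary-least-squares formulas for slope and intercept. The data points are invented; they lie exactly on a line so the fit recovers it exactly.

```python
# Simple linear regression by ordinary least squares.
# Closed-form slope/intercept; the data points are invented.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = cov(x, y) / var(x), in unnormalized form.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying exactly on y = 2x + 1.
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Polynomial regression follows the same least-squares idea with extra feature columns for higher powers of x.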
Web Development and Society: Duke University, Fall 2017 (HOUSECS 59-03)