Colin Raffel

I am an Assistant Professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. I also spend one day a week as a Faculty Researcher at Hugging Face. Much of my recent research focuses on machine learning algorithms for learning from limited labeled data, including semi-supervised, unsupervised, and transfer learning.

Lab members

Nikhil Kandpal, PhD student at UNC
Derek Tam, PhD student at UNC (co-advised with Mohit Bansal)
Michael Matena, PhD student at UNC
Zhenlin Xu, PhD student at UNC (co-advised with Marc Niethammer)
Jay Mohta, Master's student at NC State

Recent publications

(full list; * denotes equal contribution)

On Training Sample Memorization: Lessons from Benchmarking Generative Modeling with a Large-scale Competition
Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, and Wendy Chih-wen Kan
27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021 (to appear).

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel
30th USENIX Security Symposium, 2021 (to appear).

An Empirical Survey of Data Augmentation for Limited Data Learning in NLP
Jiaao Chen*, Derek Tam*, Colin Raffel, Mohit Bansal, and Diyi Yang
arXiv preprint arXiv:2106.07499, 2021.

mT5: A Massively Multilingual Pre-Trained Text-to-Text Transformer
Linting Xue*, Noah Constant*, Adam Roberts*, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel
Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.

ByT5: Towards a token-free future with pre-trained byte-to-byte models
Linting Xue*, Aditya Barua*, Noah Constant*, Rami Al-Rfou*, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel
arXiv preprint arXiv:2105.13626, 2021.

Robust and Generalizable Visual Representation Learning via Random Convolutions
Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, and Marc Niethammer
9th International Conference on Learning Representations (ICLR), 2021.

Improving and Simplifying Pattern Exploiting Training
Derek Tam*, Rakesh R Menon*, Mohit Bansal, Shashank Srivastava, and Colin Raffel
arXiv preprint arXiv:2103.11955, 2021.

Do Transformer Modifications Transfer Across Implementations and Applications?
Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, and Colin Raffel
arXiv preprint arXiv:2102.11972, 2021.

Learning with Fewer Labeled Examples
Colin Raffel and Kevin P. Murphy (ed.)
Book chapter in Probabilistic Machine Learning: An Introduction, 2021.

NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, and EfficientQA Participants
arXiv preprint arXiv:2101.00133, 2021.

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
Kihyuk Sohn*, David Berthelot*, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel
34th Conference on Neural Information Processing Systems (NeurIPS), 2020.

Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples
Samarth Sinha, Anirudh Goyal, Colin Raffel, and Augustus Odena
34th Conference on Neural Information Processing Systems (NeurIPS), 2020.

How Much Knowledge Can You Pack Into the Parameters of a Language Model?
Adam Roberts*, Colin Raffel*, and Noam Shazeer
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.

Talks

Scaling up Models and Data at CIFAR Deep Learning and Reinforcement Learning Summer School, 2021.

Explicit and Implicit Entropy Minimization in Proxy-Label-Based Semi-Supervised Learning at CVPR Workshop on Learning with Limited and Imperfect Data, 2021.

The benefits of unified frameworks for language understanding at Conceptual Understanding of Deep Learning Workshop, 2021.

T5 and large language models: The good, the bad, and the ugly at Stanford University CS 224n Lecture, CU Boulder Applied Mathematics Colloquium, Twitter Machine Learning Seminar, Google Graduate Symposium & TTIC NLP Seminar, 2020.

Responsible publication: NLP case study at Navigating the Broader Impacts of AI Research Workshop Panel, 2020.

What Can MIR Learn From Transfer Learning in NLP? at NLP for Music and Audio Workshop Keynote, 2020.

Transfer Learning for NLP: T5 and Beyond at Montreal Institute for Learning Algorithms Tea Talk & Spotify Research Seminar, 2020.

Answering Questions by Querying the Implicit Knowledge Base Inside T5 at Unstructured and Structured Knowledge Bases Workshop, 2020.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer at Allen Institute for Artificial Intelligence & New York University CILVR Seminar, 2019.

Outskirts of Deep Generative Modeling at Faculty Job Talk, 2019.

Why are GANs Interesting? at New York University CILVR Seminar, 2018.

A Few Unusual Autoencoders at Vector Institute, New York University & San Francisco State University, 2018.

Leveraging MIDI Files for Music Information Retrieval at 18th International Society for Music Information Retrieval Conference Tutorials, 2017.

Doing Strange Things with Attention at AI With The Best & 1st USF Data Institute Conference, 2017.

The Lakh MIDI Dataset: How It Was Made, and How to Use It at BISH Bash Meetup, Centre for Digital Music Seminar & Jukedeck Lunch and Learn, 2016.

Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching at 2nd ICML Machine Learning for Music Discovery Workshop, 2016.

Accelerating Large-Scale Sequence Retrieval with Convolutional Networks at IIT Bombay Electrical Engineering Seminar, 2015.

Learning Efficient Representations for Sequence Retrieval at Boston Data Festival, 2015.

Using Convolutional Networks (with Attention) for Orders-of-Magnitude Speedup of DTW-Based Sequence Retrieval at Spotify Machine Learning Seminar, 2015.

Recurrent Networks in Lasagne at Mount Sinai Hammer Lab Seminar, 2015.

Lasagne Tutorial at Next.ml Boston, 2015.

Theano Tutorial at Next.ml Boston, 2015.

mir_eval at Objective Evaluation in Semantic Audio Analysis and Processing Panel at the 138th Convention of the Audio Engineering Society, 2015.

Large-Scale Content-Based Matching of Audio and MIDI Data at Stanford University DSP Seminar, 2015.

Advances and Challenges in Large-Scale Music Information Retrieval at Digital Music Research Network Workshop (DMRN+8), 2013.

Quantifying Rhythmic Synchrony at Midwestern Music Cognition Symposium, 2013.

A Sequential Approach to Musical Event Detection at Carnegie Mellon University Music and Technology Seminar, 2011.

ROW-mp3: An Enhanced MP3-Compatible Audio Codec at Stanford University DSP Seminar, 2010.

An Effective Model of Bucket-Brigade Device-Based Audio Circuits at Stanford University DSP Seminar, 2010.

Voltage-Controlled Resistance: Modulate Anything at Circuitastrophe Circuit Bending Music Festival, 2008.