A remarkable amount of information crosses our eyes and ears each day, yet we identify what is familiar with seemingly no effort at all. In many cases, what we see or hear has been shaped by recognizable prior sources. Such instances of intertextuality reveal a wealth of data about authorship, influence, and style, making them attractive targets for automatic identification. This lightning talk introduces quantitative intertextuality [Forstall-and-Scheirer-2019], a new approach to the algorithmic study of information reuse in text, sound, and images. Using tools drawn from machine learning, natural language processing, and computer vision, we will describe how to trace patterns of reuse across diverse sources for both scholarly work and practical applications.
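
As a minimal sketch of what "quantifying" reuse can mean in the textual case, the snippet below scores two passages by the Jaccard similarity of their shared word bigrams. This is an illustrative baseline assumed for this abstract, not the specific method of the cited book; the helper names `ngrams` and `reuse_score` are hypothetical.

```python
# Baseline text-reuse measure: Jaccard similarity over shared word
# n-grams. Illustrative only; practical systems add lemmatization,
# stop-word filtering, and alignment on top of matching like this.

def ngrams(text: str, n: int = 2) -> set:
    """Lowercase, split on whitespace, and collect word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(a: str, b: str, n: int = 2) -> float:
    """Jaccard similarity of the two passages' n-gram sets."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

if __name__ == "__main__":
    source = "arms and the man I sing who forced by fate"
    echo = "the man I sing has long been forced by fate"
    print(f"bigram Jaccard similarity: {reuse_score(source, echo):.3f}")
```

Bigram overlap is deliberately crude: it catches verbatim and near-verbatim echoes but is blind to paraphrase, which is where the richer machine-learning, NLP, and computer-vision features discussed in the talk come in.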