Data Science Initiative Seminar Series
A list of upcoming seminars can be viewed on the new DSI website. If you are interested in attending the seminars or potentially getting on the spring schedule or fall wait list, please email Blake LeBaron.
The Brandeis Data Science Initiative launched a new seminar series in March 2021. The series gives Brandeis researchers who know little to nothing about data science the opportunity to learn from faculty members who use these methods in their own research, and to be inspired to adapt data science to their own projects. Speakers come from across the School of Arts and Sciences, the Heller School, the International Business School and the Rabb School.
The series shows how data science has been used across multiple disciplines at Brandeis and beyond, and how it can serve as a force for social good, especially during the pandemic. In addition to discussing how to teach data science to the uninitiated, the seminars present examples of its impact on global health, its use in combating misinformation campaigns, and its applications in medicine, social science, music and archeology. You can register for upcoming events and watch videos of the previous seminars below.
From self-driving cars to electoral districts, higher education, phones, and medical diagnostics and imaging, algorithms have penetrated almost every aspect of modern daily life. It remains a largely open question, however, whether data analysis and statistical techniques can advance our understanding of legal reasoning, and whether, based on the history of past cases, algorithms can accurately predict the outcome of a trial. This question of the computability of law is currently the topic of an active debate in the legal literature. Beyond being a key theoretical question, it also has a potentially tremendous social impact, should some legal questions prove computable.
To explore these questions concretely, we will present our recent research on labor law in Canada, focusing on two specific legal questions: the evaluation of a severance package for wrongfully terminated workers, and whether a worker is entitled to employee status (and its associated benefits) or whether their work warrants the status of independent contractor, a question of particular importance today with the boom of the gig economy. We will further discuss the biases, good and bad, that our algorithms detect, and conclude by presenting MyOpenCourt.org, a web platform we developed to provide access to personalized algorithmic legal information in Canada.
This talk explores trends and occupational details of data scientific capacity in higher ed over the past decade. The analysis begins with computer science faculty jobs and ends by illuminating the relevance of data science, machine learning and artificial intelligence across the non-faculty workforce. The standardized nature of job posting analytics makes it possible to study faculty and non-faculty jobs through the same skills taxonomy, which results in a clear view of the distinctiveness of research universities as headquarters of data scientific talent. This clarification, in turn, points to untapped resources in the higher ed sector that can support innovation in data science education and in university operations.
Redistricting is one of the most politicized procedures within electoral management. Mexico's approach is unique — since 1996 an independent board has been creating plans algorithmically. Parties, however, are able to formulate hundreds of counter-proposals in a closed-door environment. This session will offer context on how redistricting processes work outside of the U.S. and explain how R's tidyverse collection of packages can be used to analyze hundreds of partisan electoral maps to answer the following questions: What have been the political consequences of redistricting in Mexico? Why and when do political parties engage in the process? Were the most recent redistricting processes compliant with the law? Have discretionary rules been applied consistently within and across processes? Are the observed outputs consistent with how the process was officially portrayed?
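As a rough illustration of the kind of map-comparison analysis the talk describes, the sketch below tallies which party wins each district under competing redistricting plans. It is written in plain Python rather than the speaker's R/tidyverse code, and every number (party vote shares, the plans themselves) is invented for the example.

```python
# Hypothetical sketch: comparing partisan outcomes across redistricting
# plans. All data is invented; this is not the speaker's actual analysis.

from collections import Counter

# Each plan is a list of districts; each district maps party -> votes.
plans = {
    "official_plan": [
        {"PRI": 5200, "PAN": 4100, "PRD": 1700},
        {"PRI": 3900, "PAN": 4800, "PRD": 2300},
        {"PRI": 4400, "PAN": 2900, "PRD": 4700},
    ],
    "counter_proposal_A": [
        {"PRI": 6100, "PAN": 3200, "PRD": 1700},
        {"PRI": 3000, "PAN": 5700, "PRD": 2300},
        {"PRI": 5000, "PAN": 2900, "PRD": 4100},
    ],
}

def seats_by_party(districts):
    """Count how many districts each party wins under plurality rule."""
    return Counter(max(votes, key=votes.get) for votes in districts)

for name, districts in plans.items():
    print(name, dict(seats_by_party(districts)))
```

With hundreds of counter-proposals, the same per-plan tally (a group-by and summarise in tidyverse terms) makes it straightforward to ask which plans shift seats toward the party that drew them.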
Archaeological research in the neo-tropics has been transformed over the past decade by lidar — the first remote sensing methodology that allows us to peek beneath the often dense forest canopy, to virtually peel away the trees and see the bare earth beneath, with its ancient cityscapes and agrarian infrastructure. Yet, for all the research potential of the vast and growing body of digital data produced, we still face the interpretive challenge of getting from point clouds to human behavior and meaning. In this presentation we explore some of the results produced, and questions raised, by recent airborne and drone-based lidar surveys of ancient Maya urban and rural landscapes carried out by the Proyecto Arqueológico Busiljá - Chocoljá in the Usumacinta River region, along the modern border of Mexico and Guatemala.
Many complex systems can be modeled as network graphs, with nodes and edges representing objects and the relationships between them. Deep-learning-based graph analytics (mainly graph representation learning) is one of the most important techniques in the modern artificial intelligence community, and it has found valuable application in the social sciences in the development of neural networks that can classify text and images. Read Steven's recent article in The Conversation about vaccine misinformation on social media and how disinformation gets in the way of countering the pandemic.
A discussion of experiences with data science pedagogy and the teaching of applied data science to students and researchers in social science and the humanities. Tingley teaches "Data Science Ready," a course focused on providing nontechnical professionals and students with the skills required to interpret and apply data in their fields of work.
This talk will present an introduction to common machine learning techniques, analytical toolkits, and big data technologies. Our focus will be on applications of these methods in the social sciences, along with some of the unique research and computational challenges that social data presents. Examples using natural language processing, computer vision, prediction, inference, and classification will be discussed. We will also touch upon some of the recent and growing controversy around "ethical AI" and the pitfalls of using "biased" data.
This session will focus on the digitization and digital visualization of music scores and music production. Karen, an expert in medieval polyphony, will show how digital transcriptions can use the original shapes of music notation in medieval manuscripts to create modern scores that can be performed or analyzed by musicians, students and researchers. Timothy and Sarah will demonstrate how multi-camera interfaces can give viewers of a musical performance a more interactive experience, letting them switch between views of different instrumentalists and see where each is in the music.
How do you record the shape of something that will likely break as you excavate it? How do you record something shiny or reflective in situ as it is removed from the dirt? We seek to develop new ways of analyzing the material culture of the ancient and historical world. The initial goal of our project is to prototype a "Single Camera Automated Photogrammetry Platform" (SCAPP), with the final designs available open-source. The SCAPP is intended to be a relatively low-cost, easily reproduced alternative to expensive digital imaging equipment, one that often exceeds such equipment in quality or covers gaps it cannot image.