Instructor Spotlight: Meia Chita-Tegmark
Tell us about your background and what inspired you to teach this course.
I am a researcher and activist working on making AI beneficial for humans, and the co-founder of several nonprofit organizations focused on using powerful technologies for good. As an activist who has done a lot of outreach work, I know there are many ways to communicate with people and inspire them to take positive action on the great challenges of our time. But as a teacher and lifelong learner, I also know that a course can be what changes someone's life the most. Having the opportunity to sit with a set of topics for a semester and explore them with others can build lifelong motivation to make the world better. Many of my fellow activists can trace their inspiration back to a class they took in college. I believe that sometimes all the world needs is one person in the right context who once upon a time took the right class. I think of this class as a mini-hero incubator. I hope the students will take the ideas they come up with and go create positive change in the world. That change may be something that fires up their entire career, or it may be a simple decision, at a crucial point, to listen to their moral compass and build a tech product that is more humane and more compatible with the human mind. The world needs both big and small acts of heroism.
Your course focuses on bridging the gaps between AI and human psychology. In what ways do students commonly encounter AI and what effects does this have on them?
AI is being embedded in the tech products we use, and I expect it will soon be pervasive in our lives. Right now, what seems to strike a chord with my students is how AI is being used to shape their online experience, particularly their content consumption on different media platforms. For this generation, life online constitutes a significant portion of life, so thinking critically about the kinds of experiences being curated for them with AI is crucial. In addition to this more passive interaction with AI through content curation, we are also seeing a surge in AI products, such as large language models, where the interaction is more active. Students, and society more generally, are trying to negotiate the terms of that interaction: Which of our activities should we delegate to these products? Is this truly good and helpful for us, or is it robbing us of opportunities to grow? I am very fortunate to have a highly international group of students in my class, pursuing a multitude of majors; they bring a truly diverse set of perspectives to these questions.
How can Tufts students remain socially and morally engaged with their community despite the ways that AI can promote disengagement?
Engagement is discussed with both excitement and concern, but we often fail to really ponder: when we engage with AI products, what is it that we disengage from? I hope the answer is not that we disengage from others and from our moral responsibility to them. In our first class, I tried to convince my students that dark psychology is not just about engaging others with predatory intent; it is also about disengaging the psychological mechanisms that keep us from caving in the face of dark forces, such as our moral compass and our empathy for others. How can we make sure that we do not follow the path of disengagement? I hear the solution voiced by the students themselves in various forms; it comes up in class over and over, almost like a theme or mantra: love something more than you love the technology. It may sound too lyrical at first, but I believe it is in fact quite actionable. Cultivate a love for learning, and use AI intentionally to help you become a better learner rather than to cut corners. Cultivate a love for your community, and use AI to serve it rather than for escapism and isolation. Cultivate a love for the natural world around you, and use AI to preserve it rather than to dismiss and destroy it.
Your final project focuses on creating “Enlightened Solutions for Complex Challenges.” Could you tell us a bit more about this final project and what topics students may be gravitating towards?
I felt that a course on dark psychology needed to have some light at the end of the tunnel. In the first class, I made a pact with the students that, even though throughout the semester we would hold space for concerns and engage in some dark and heavy philosophical pondering, ultimately we would orient ourselves toward solutions and action. We've already started brainstorming and discussing potential solutions to AI challenges, guided by questions such as: Are there healthy habits for using AI that people could cultivate? Are there design features that would make a particular AI product more human-compatible? Are there default protections that should be embedded in the product (e.g., privacy protections)? Is there a piece of legislation that would incentivize companies to create and deploy safer, more beneficial technologies, or disincentivize them from releasing products that endanger people's safety and wellbeing? Are there professional, legal, or ethical standards to which AI developers and deployers should adhere? It is wonderful to see students engage creatively with all these different avenues for meaningful change. They've really cast a wide net in their exploration, and I am very excited to see what they choose to flesh out more thoroughly in their final projects.
What is something that has happened in your course that you are excited about?
In this class, students alternate between submitting a reflective piece (a mindful musing) and a solution-oriented piece (a mindful mend). I've been blown away by how thoughtful and creative this group can be! For example, one student proposed an educational and awareness-oriented solution: educate people about the business models behind different AI products, so that they can better foresee psychological harms by grasping the underlying incentives. In a similar vein, another student submitted a design-oriented solution: create a tracker that shows people, in real time, how their compulsive engagement on social media is being monetized and how much money other entities are making off of their mindless scrolling sessions. Still other students have very creatively begun borrowing and adapting successful ideas from other fields, for example, creating a version of carbon credits but for data, and then using these data credits to disincentivize companies from selling the data they collect on users to third parties. I look forward to seeing some of these ideas fleshed out in the final projects.
Meia Chita-Tegmark is a researcher and activist focused on the impacts of artificial intelligence on human psychology. She is the co-founder of the Future of Life Institute, a nonprofit organization aimed at steering transformative technologies towards benefiting life and away from extreme, large-scale risks. She conducted postdoctoral research on human-robot interaction at Tufts and holds a PhD in Psychology from Boston University.