What happens when technology knows more about us than we do? A computer can now detect our slightest facial microexpressions and tell the difference between a real smile and a fake one. And that's only the beginning. Technology has become incredibly intelligent and already knows a lot about our internal states. Whether we like it or not, we are already sharing parts of our inner lives that are out of our control. That seems like a problem, because a lot of us like to keep what's going on inside separate from what people actually see. We want agency over what we share and what we don't. We all like to have a poker face.
But I'm here to tell you that I think that's a thing of the past. And while that might sound scary, it's not necessarily a bad thing. I've spent a lot of time studying the circuits in the brain that create the unique perceptual realities we each have. Now I bring that together with the capabilities of current technology to create new technology that makes us better, helps us feel more, connect more. And I believe that to do that, we have to be OK with losing some of our agency.
With some animals, it's really amazing: we get to see directly into their internal experiences. We get this upfront look at the mechanistic interaction between how they respond to the world around them and the state of their biological systems. Evolutionary pressures like eating, mating and making sure they don't get eaten drive deterministic behavioral responses to information in the world, and that gives us a window into their internal states and their biological experiences. It's really pretty cool. Now, stay with me for a moment. I'm a violinist, not a singer, but the spider's already given me a critical review.
It turns out, some spiders tune their webs like violins to resonate with certain sounds. And likely, the harmonics of my voice as it went higher, coupled with how loud I was singing, recreated either the predatory call of an echolocating bat or a bird, and the spider did what it should: it predictively told me to bug off. I love this. The spider is responding to its external world in a way that lets us see and know what's happening in its internal world. Biology is controlling the spider's response; it's wearing its internal state on its sleeve.
But us, humans—we're different. We like to think we have cognitive control over what people see, know and understand about our internal states—our emotions, our insecurities, our bluffs, our trials and tribulations—and how we respond. We get to have our poker face.
Or maybe we don't. Try this with me. Your eye responds to how hard your brain is working. The response you're about to see is driven entirely by mental effort; it has nothing to do with changes in lighting. We know this from neuroscience. I promise, your eyes are doing the same thing as the subject's in our lab, whether you want them to or not. At first, you'll hear some voices. Try to understand them and keep watching the eye in front of you. It's going to be hard at first; then one voice will drop out, and it should get really easy. You're going to see the change in effort in the diameter of the pupil.
Your pupil doesn't lie. Your eye gives away your poker face. When your brain has to work harder, your autonomic nervous system drives your pupil to dilate. When it doesn't, it constricts. When I take away one of the voices, the cognitive effort needed to understand the talkers drops dramatically. I could have put the two voices in different spatial locations, or made one louder. You would have seen the same thing. We might think we have more agency over the reveal of our internal state than that spider, but maybe we don't.
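To make the demo concrete: here is a minimal Python sketch of how task-evoked pupil dilation is typically quantified, as percent change from a pre-stimulus baseline. The sampling rate, smoothing window and toy data below are illustrative assumptions, not the actual recordings from our lab.

```python
import numpy as np

def effort_from_pupil(diameter_mm: np.ndarray, fs: float = 60.0,
                      baseline_s: float = 2.0) -> np.ndarray:
    """Baseline-correct and smooth a pupil-diameter trace (mm) sampled at fs Hz.

    Returns percent change from the pre-stimulus baseline, a common
    proxy for task-evoked cognitive effort in pupillometry.
    """
    n_base = int(baseline_s * fs)
    baseline = diameter_mm[:n_base].mean()       # pre-stimulus average
    pct_change = 100.0 * (diameter_mm - baseline) / baseline
    # Light moving-average smoothing to suppress measurement jitter.
    win = max(1, int(0.25 * fs))                 # 250 ms window
    kernel = np.ones(win) / win
    return np.convolve(pct_change, kernel, mode="same")

# Toy trace: the pupil dilates while two voices compete, relaxes when one drops out.
t = np.arange(0, 20, 1 / 60)
trace = 4.0 + 0.4 * (t > 2) * (t < 12) + 0.05 * np.random.randn(t.size)
effort = effort_from_pupil(trace)
print(f"peak task-evoked dilation: {effort.max():.1f}% above baseline")
```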
Today's technology is starting to make it really easy to see the signals and tells that give us away. The amalgamation of sensors paired with machine learning, on us, around us and in our environments, amounts to a lot more than cameras and microphones tracking our external actions.
Our bodies radiate our stories through changes in the temperature of our physiology. We can look at these changes as infrared thermal images, like the ones showing up behind me, where reds are hotter and blues are cooler. The dynamic signature of our thermal response gives away our changes in stress, how hard our brain is working, whether we're paying attention and engaged in the conversation we might be having, and even whether we're experiencing a picture of fire as if it were real. We can actually see people give off heat on their cheeks in response to an image of flame.
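As a rough illustration of what that thermal analysis involves, here is a minimal Python sketch that tracks the mean temperature of a cheek region across thermal frames. The region coordinates, frame sizes and the 0.3-degree warming are hypothetical stand-ins for what a real face tracker and infrared camera would provide.

```python
import numpy as np

def cheek_warming(frames_c: np.ndarray, cheek_roi: tuple) -> np.ndarray:
    """Mean temperature (deg C) inside a cheek region for each thermal frame.

    frames_c: array of shape (n_frames, height, width), values in Celsius.
    cheek_roi: (row_slice, col_slice) bounding the cheek, which would come
    from a face tracker; here it's just a fixed hypothetical rectangle.
    """
    rows, cols = cheek_roi
    return frames_c[:, rows, cols].mean(axis=(1, 2))

# Hypothetical 30-frame clip: cheeks warm ~0.3 C after the image of flame appears.
frames = np.full((30, 120, 160), 34.0) + 0.02 * np.random.randn(30, 120, 160)
frames[15:] += 0.3                               # post-stimulus warming
temps = cheek_warming(frames, (slice(60, 80), slice(40, 60)))
print(f"cheek delta: {temps[15:].mean() - temps[:15].mean():+.2f} C")
```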
But aside from giving away our poker bluffs, what if dimensions of data from someone's thermal response gave away a glow of interpersonal interest? Tracking the honesty of feelings in someone's thermal image might become a new part of how we fall in love and see attraction. Our technology can listen, develop insights and make predictions about our mental and physical health just by analyzing the timing dynamics of our speech and language picked up by microphones. Research groups have shown that changes in the statistics of our language, paired with machine learning, can predict the likelihood that someone will develop psychosis.
I'm going to take it a step further and look at linguistic changes and changes in our voice that show up with a lot of different conditions. Dementia and diabetes can alter the spectral coloration of our voice. Changes in our language associated with Alzheimer's can sometimes show up more than 10 years before clinical diagnosis. What we say and how we say it tells a much richer story than we used to think. And devices we already have in our homes could, if we let them, give us invaluable insight back. The chemical composition of our breath gives away our feelings. There's a dynamic mixture of acetone, isoprene and carbon dioxide that changes when our heart speeds up and when our muscles tense, all without any obvious change in our behavior.
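For a sense of what "statistics of our language" can mean in practice, here is a simplified Python sketch computing a few crude linguistic markers: sentence length, vocabulary diversity and a word-overlap proxy for sentence-to-sentence coherence. These are illustrative stand-ins, not the features or models used in the published psychosis-prediction studies.

```python
import re
from statistics import mean

def language_stats(transcript: str) -> dict:
    """Crude linguistic markers of the kind studied in clinical-speech work.

    Simplified stand-ins for published features: mean sentence length,
    type-token ratio for vocabulary diversity, and a Jaccard word-overlap
    proxy for the sentence-to-sentence semantic coherence that psychosis
    studies track with richer semantic models.
    """
    sentences = [s for s in re.split(r"[.!?]+", transcript.lower()) if s.strip()]
    words_per = [re.findall(r"[a-z']+", s) for s in sentences]
    all_words = [w for ws in words_per for w in ws]
    overlap = [len(set(a) & set(b)) / max(1, len(set(a) | set(b)))
               for a, b in zip(words_per, words_per[1:])]
    return {
        "mean_sentence_len": mean(len(ws) for ws in words_per),
        "type_token_ratio": len(set(all_words)) / len(all_words),
        "adjacent_coherence": mean(overlap) if overlap else 0.0,
    }

print(language_stats("I went to the store. The store was closed. Purple trains sing."))
```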
Alright, I want you to watch this clip with me. Some things might be happening on the side screens, but try to focus on the image at the front and the man at the window.
Sorry about that. I needed to get a reaction.
I'm actually tracking the carbon dioxide you exhale in the room right now. We've installed tubes throughout the theater, low to the ground, because CO2 is heavier than air. They're connected to a device in the back that lets us measure, in real time, with high precision, the continuous differential concentration of CO2. The clouds on the sides are the real-time data visualization of the density of our CO2. You might still see a patch of red on the screen, because we're showing increases as larger red clouds. And that's the point where a lot of us jumped. It's our collective suspense driving a change in carbon dioxide. Alright, now, watch this with me one more time.
You knew it was coming. But it's a lot different when we change the creator's intent. Changing the music and the sound effects completely alters the emotional impact of that scene. And we can see it in our breath. Suspense, fear, joy: they all show up as reproducible, visually identifiable moments. We broadcast a chemical signature of our emotions. It is the end of the poker face.
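For readers curious how a differential CO2 stream could drive that kind of visualization, here is a minimal Python sketch mapping baseline-relative increases in concentration to cloud sizes. The baseline, scaling constant and simulated spike are assumptions for illustration, not the theater system's actual calibration.

```python
import numpy as np

def co2_cloud_sizes(ppm: np.ndarray, baseline_ppm: float,
                    max_radius: float = 1.0) -> np.ndarray:
    """Map differential CO2 readings (ppm) to visualization cloud radii.

    Readings at or below baseline render no cloud; increases scale the
    cloud up, saturating smoothly at max_radius. The numbers here are
    illustrative, not a real sensor calibration.
    """
    rise = np.clip(ppm - baseline_ppm, 0, None)
    return max_radius * np.tanh(rise / 50.0)     # soft saturation

# Simulated stream: a collective gasp spikes exhaled CO2 near sample 40.
ppm = 600 + 5 * np.random.randn(80)
ppm[40:50] += 80                                 # the moment we all jumped
print(np.round(co2_cloud_sizes(ppm, baseline_ppm=600.0), 2)[35:55])
```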
Our spaces and our technology will know what we're feeling. We will know more about each other than we ever have. We get a chance to reach in and connect to the experiences and sentiments that are fundamental to us as humans: our senses, our emotions, our social selves. I believe it is the era of the empath. And we are enabling the capabilities that true technological partners can bring to how we connect with each other and with our technology. If we recognize the power of becoming technological empaths, we get an opportunity for technology to help us bridge the emotional and cognitive divide. In that way, we get to change how we tell our stories. We can enable a better future for technologies like augmented reality to extend our own agency and connect us at a much deeper level.
Imagine a high school counselor being able to realize that an outwardly cheery student is really having a deeply hard time, where reaching out can make a crucial, positive difference. Or authorities being able to tell the difference between someone having a mental health crisis and a different type of aggression, and responding accordingly. Or an artist knowing the direct impact of their work. Leo Tolstoy defined his perspective on art by whether what the creator intended was experienced by the person on the other end. Today's artists can know what we're feeling. But regardless of whether it's art or human connection, today's technologies will know what we're experiencing on the other side, and this means we can be closer and more authentic.
But I realize a lot of us have a really hard time with the idea of sharing our data, and especially with the idea that people know things about us that we didn't actively choose to share. Anytime we talk to someone, look at someone or choose not to look, data is exchanged and given away, data that people use to learn and make decisions about their lives and about ours.
I'm not looking to create a world where our inner lives are ripped open and our personal data and our privacy are given away to people and entities we don't want to see them go to. But I am looking to create a world where we can care about each other more effectively, where we can know when someone is feeling something we ought to pay attention to, and where we can have richer experiences with our technology.
Any technology can be used for good or bad. Transparency, engagement and effective regulation are absolutely critical to building the trust for any of this. But the benefits that "empathetic technology" can bring to our lives are worth solving the problems that make us uncomfortable. And if we don't solve them, there are too many opportunities and feelings we're going to miss out on.
Thank you.