AI Therapy: Should we believe Silicon Valley's Bold Claim at Solving Mental Health? With Daniel Oberhaus
In this episode of SecureTalk, Justin Beals welcomes Daniel Oberhaus, the author of Silicon Shrink, to discuss the revolutionary and controversial integration of artificial intelligence (AI) in mental health care. Daniel demystifies the central theme of his book, explaining the concept of Silicon Shrink and exploring how AI tools are increasingly being used to diagnose and treat mental health conditions.
He highlights the alarming implications of leveraging AI in psychiatry, the historical intersection of these two fields, and the potential pitfalls and ethical challenges this marriage presents. He also delves into the technical, policy, and philosophical dimensions of using AI in psychiatry, bringing attention to various case studies and real-world applications such as emotion-recognition technology and AI-driven triage systems like those used by the Crisis Text Line.
Daniel's insights present a compelling narrative, urging a cautious yet hopeful approach to adopting AI technologies in areas as sensitive as mental health, underscoring the need for transparency, privacy, and ethical considerations.
Book:
Oberhaus, Daniel. The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. MIT Press, 2025.
Secure Talk - EP 211 Daniel Oberhaus
Justin Beals: Hello, everyone, and welcome to SecureTalk. I'm your host, Justin Beals. We have a great author to chat with today, and we're going to cover some really disparate topics: everything from the nature of consciousness to computer science, to mental health services, to investment by Silicon Valley.
It's intriguing that all these topics come together, but I'll just plant a little seed of where I first experienced some of these ideas.
In the 80s, I was very interested in home computing and learning how to program. And back then we shared a lot of software when we had a chance to. One of the programs making the rounds was a chatbot, and the chatbot purported to be a psychotherapist. I think, in a lot of ways, the bot was meant to poke fun at the idea of therapy during that time.
The bot did a very simple thing. It took whatever you said to it and, with a little bit of lexical sleight of hand, turned it back into a question, essentially pushing you to keep the conversation going. So if you said, "Hey, I'm having an argument with my friend," it might say, "How does this argument make you feel?"
Many of the flips to a question were pretty narrow in what they could do, and you knew right away, as you were playing with it, that it wasn't a very intelligent machine. But it certainly was fun chatting with it, because you could say anything and it wouldn't really care.
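That lexical sleight of hand is simple enough to capture in a few lines. Below is a minimal, illustrative sketch of an Eliza-style reflection bot in Python; the pronoun table and question templates are invented for the example and are far cruder than the original DOCTOR script.

```python
import random
import re

# Swap first- and second-person words so a statement can be mirrored back.
PRONOUN_SWAPS = {
    "i": "you", "i'm": "you're", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my", "yours": "mine",
}

# A few illustrative templates; the real ELIZA ranked keywords and used
# a much richer script of decomposition and reassembly rules.
TEMPLATES = [
    "How does {0} make you feel?",
    "Why do you say that {0}?",
    "Tell me more about {0}.",
]

def reflect(fragment: str) -> str:
    """Flip pronouns so 'my friend' becomes 'your friend', and so on."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Turn the user's statement back into an open-ended question."""
    cleaned = re.sub(r"[.!?]+$", "", statement.strip())
    cleaned = re.sub(r"^(hey|well|so)[, ]+", "", cleaned, flags=re.IGNORECASE)
    return random.choice(TEMPLATES).format(reflect(cleaned))

print(respond("Hey, I'm having an argument with my friend"))
# e.g. "How does you're having an argument with your friend make you feel?"
```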
Of course, the technology that we have today for AI and conversational bots has leapt ahead dramatically from that era. And we're all using tooling that can hold a seemingly human conversation with us online on a very regular basis. There's been a side current of innovation in the marketplace of trying to develop computer software that functions as a therapist.
Hundreds of millions of dollars, maybe more, have been invested in trying to build computer programs that replace therapists.
It's an interesting idea, because we certainly anthropomorphize the systems that we chat with day in, day out; we form emotional attachments with them in a very human way. But there's also a danger to it, of course: a danger from a patient health care information perspective, a danger to the effectiveness of the medical care provided, and, of course, I think a danger to the arc of the culture and society that we want to build together. And so that's why I'm really excited to introduce our guest today.
Our guest today is Daniel Oberhaus. Daniel Oberhaus is a science writer based in Brooklyn, New York, and he's the founder of a deep tech communications agency, HAUS.
He was previously a staff writer at WIRED magazine. He's the author of Extraterrestrial Languages, from MIT Press, which explores the art, science, and philosophy of interstellar communication. He also wrote the book we're going to chat about today, The Silicon Shrink: How Artificial Intelligence Made the World an Asylum.
This book examines the dangers of AI in psychiatry and offers a ton of insights into the new tech-driven psychiatric paradigm. I hope you'll all join me in welcoming Daniel to SecureTalk today.
—
Daniel, thanks for joining us today on SecureTalk.
Daniel Oberhaus: Thank you so much for having me, Justin.
Justin Beals: So I had the real treat of reading your most recent book, and when I first picked it up, I wasn't quite sure what to expect from the title, Silicon Shrink. Perhaps you and I could start there a little bit.
Who or what is Silicon Shrink?
Daniel Oberhaus: That is a great question. You're the first one to ask me that. So it's a bit of a play on the term "shrink," which in many circles is maybe considered a derogatory term for a psychiatrist. And the book is fundamentally about the use of artificial intelligence in mental health applications and, you know, psychiatry writ large.
So it was mostly a playful way of trying to encapsulate everything that the book tries to cover into a pithy title. At the end of the day, it really is looking at all aspects: the philosophy, the policy, and, you know, the technical side of what it means to apply artificial intelligence in this particular area of medicine.
So the silicon shrink is an idea in the sense that it's something that's quickly emerging. There are a lot of tools in use today in psychiatry and just in the general world that fall into this category. And so it is both an exploration and maybe a bit of a warning bell about what is happening currently and what might happen in the future.
Justin Beals: Yeah. You know, when I dug into it, I quickly got the idea. When I saw the title, I was like, are we going to talk about Silicon Valley getting smaller here? We're always interested in that a little bit, or its influence maybe reducing. But I have seen a lot of systems developed around human interaction.
And, of course, around health care, medical care, and just an intriguing story around mental health care and what technology can do. And the book centers really well around that. I was shocked by the corollaries between the two different fields. Did you have an idea of how interconnected the ideas were when you started writing the book?
Daniel Oberhaus: I had a little bit of an idea. So, my first career was as a science and technology reporter. And so through that, I was staying very up-to-date on AI, you know, before ChatGPT made it something that even your grandma's thinking about. I was paying attention to developments in this space.
I also just love the history of technology. And one of the more fun parts of this book was getting to really dive into that, because I think, perhaps you do, but a lot of people don't realize how closely intertwined the development of artificial intelligence and psychiatry has been for about the past 70 years.
These two fields really came into their own together, in the sense that modern psychiatry, with, you know, pharmaceuticals, was getting kicked off right around the same time that the Dartmouth conference was happening. People were looking to the psychiatry department for insights about how to build intelligence in machines decades ago, before we had backpropagation and all this fun stuff.
So these two fields have always been in really close conversation. And for decades, people were dreaming about the true confluence of using artificially intelligent machines in this particular context of psychiatry. People were talking about this in the 1960s, but it has really only started to come to fruition in about the past 10 years. And it has come true.
So the things that these people were dreaming about, back when we were dealing with a perceptron looking at numbers, are now starting to be things that we carry around in our pocket. So it's actually pretty fun to be at this critical juncture in this 70-80-year history.
Justin Beals: Yeah, that was pretty amazing to me.
I'd certainly been aware, I think, just tangentially or through popular media, of concepts in psychiatry, and been closely aware of some of the innovations in technology and computer science. And certainly, as I read the book, there were little moments where I was like, "Oh, I remember playing with that thing."
But before we get into that too deeply, one statement in the beginning of the book that I think highlights some of the history you were covering in psychiatry is a question you asked: how can we explain this explosive growth in mental disorders? You know, we've just seen mental health become such a huge topic of conversation, and 20 years ago, 60 years ago, 120 years ago, it wasn't quite the same concept, I think.
Daniel Oberhaus: We could probably spend the entire episode talking just about this, but I'll spare the listeners that exercise. With the explosive growth of mental disorders in particular, it's a challenging question to answer, because what it really boils down to, at least in my mind, is: is there something that can be referred to as a mental disorder, and is the incidence of that thing increasing?
Like, is there something called depression, and are there more people with it? Or have we defined depression in such a way that, you know, we've started looking for a bogeyman, and every rock we turn over, we find it there? And that is a point that I think is often lost in discussions when we talk about having a mental health crisis.
This is not to at all underplay the fact that there are people out there who are suffering. It's more asking, how are we defining their suffering? And what are the implications for the individual and society in the way that we talk about what that person is going through? And when you start to pathologize certain symptoms in the way that the DSM does, you're going to start finding a lot more of them.
And so there was actually a definite moment in time that led to where we are today. The DSM, if you haven't heard of it before, is essentially psychiatry's Bible. It's a list of, at this point, hundreds of diagnoses for mental disorders, and then symptoms that are associated with those disorders.
And then psychiatrists can use this manual to make diagnoses of their patients. If you cross a certain threshold for symptoms, you then qualify to be given that diagnosis. That didn't happen until the DSM-III, which was around the 1970s. Prior to that, it was something that was barely used by practicing clinicians; it was largely influenced by psychoanalytic theory, which is basically based on no science whatsoever. And so with the DSM-III, they really started to take this more checklist approach, which is a whole other can of worms in terms of its implications. But in this particular context with AI, it was very important in allowing AI systems to start to be used, because these systems are effectively trying to look at symptoms and, you know, make diagnostic and therapeutic decisions based on criteria that would be defined within the DSM.
So that was a really roundabout way of answering your question of, like, what is the explosion of mental disorders?
All of which is to say, I don't have an answer to that. But my suspicion is that we're seeing a higher incidence of mental disorders due to the way that we're defining them, and we should actually expect that to increase. Psychiatry is the only field of medicine where, despite billions of dollars being spent on research and a massive amount of effort being put into treating these patients, patient outcomes have gotten markedly worse over the past few decades rather than better.
And when you see that happening, given the billions of dollars, all the technologies, all the biopharma development that's happening, and patient outcomes are getting worse: last year we hit all-time highs for suicide rates, youth depression rates, deaths of despair from drug overdose.
Things have not gotten better despite these efforts, and the efforts of millions of well-meaning people around the world. And so that's been, historically, a persistent challenge for psychiatry as a field. And what happens when you have a field in crisis that is looking at this? Like, I talk with people all the time.
They recognize this. They're not denying it, but they don't know how to fix it. And so that's a really ripe opportunity for someone to come in and say, hey, I have the magic bullet here. And psychiatry goes through these phases every few decades where it's like they've found the thing that's going to fix it.
It was lobotomy and electric shock therapy a hundred years ago. Then it was pharmaceuticals, and now it's artificial intelligence. And every single one came with the same promise, and basically every single time you look back and we're like, holy fuck, there were a lot of downsides to this that we hadn't maybe considered. But it was with the best intentions in mind to help these patients: people are suffering, they need help, clinicians want to help them.
This book is basically saying, Hey, maybe we should stop to think about this particular one because it is different in kind than what we've seen before.
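Daniel's description of the DSM's checklist approach (cross a symptom threshold and you qualify for the diagnosis) is exactly the kind of rule an AI system can encode directly, which is part of why the DSM-III mattered so much for this field. A minimal sketch follows, using the widely cited major-depressive-episode criterion of five or more of nine symptoms over at least two weeks, at least one of them a core symptom, as the illustration; the symptom names are paraphrased, and the real criteria add impairment and exclusion rules that are omitted here.

```python
# Paraphrased symptom list for a major depressive episode (DSM-style);
# the first two are the "core" symptoms, at least one of which is required.
SYMPTOMS = [
    "depressed_mood",          # core
    "loss_of_interest",        # core
    "weight_or_appetite_change",
    "sleep_disturbance",
    "psychomotor_change",
    "fatigue",
    "worthlessness_or_guilt",
    "poor_concentration",
    "suicidal_ideation",
]
CORE = {"depressed_mood", "loss_of_interest"}

def meets_checklist(endorsed: set[str], duration_weeks: int) -> bool:
    """Checklist-style rule: at least 5 of 9 symptoms, at least one core,
    present for at least two weeks. Simplified: no biomarker, no mechanism,
    just a symptom count compared against a threshold."""
    endorsed = endorsed & set(SYMPTOMS)
    return (
        len(endorsed) >= 5
        and bool(endorsed & CORE)
        and duration_weeks >= 2
    )

print(meets_checklist(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_or_guilt"},
    duration_weeks=3,
))  # True: crossing the threshold is what qualifies the diagnosis
```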
Justin Beals: Yeah. It's funny when you say best intentions in mind; I think about a lot of the software we've built over the years as well.
It's very human, including some of our security practices. And the whole thing feels like a very human story in a way, because the explosion in the mental health crisis, for example, or at least how it's typically described in a lot of the media, may just have a lot to do with us, you know, sharing and recognizing common human emotions that we go through throughout life, right?
And how challenging those can be, and looking for effective and productive ways to cope with them. And there's nothing wrong with that, but it gets a little wonky when we start diagnosing. I think you start to point that out in your book: some of these things we want to be scientific don't wind up feeling very scientific.
Daniel Oberhaus: Yeah, and I think if people read the book and walk away with nothing else, that would probably be the point I hope they walk away with. In the process of putting this together, what was made very clear to me is that, you know, I have lots of people in my personal life who go to therapists or a psychiatrist.
They might be on various drugs for whatever they've been diagnosed with, but I think in general people don't really have a good understanding of psychiatry and how it actually works. And what I mean by that is actual psychiatry, like you go to a psychiatrist, someone who can prescribe you drugs legally. It is a field of medicine, but it is the only field of medicine where there is not a single biomarker for a single disease that it claims to treat.
And that's really important for how you conceptualize what psychiatrists are trying to do. And I guess the important point here, and it's discussed at length in the book, is: when you are given a diagnosis by a psychiatrist, how are they arriving at that diagnosis? Well, we already talked about the DSM and that kind of checklist approach to symptoms. In medicine, symptoms are kind of the worst-case scenario, right? Because if I go to a doctor and I have the flu and they're only diagnosing my symptoms, they might give me things that give me relief, in the sense that it lowers my fever, but it has not actually done anything to address the pathogen.
And that is where psychiatry is, and has been for its entire existence: we don't understand, in this metaphor, the pathogen. There's no lesion in the brain or in the body. We cannot actually tell you if you have depression in any way that isn't pretty subjective. And it gets even worse, because, you know, symptoms can hang together; syndromes exist in other fields of medicine. But the problem with psychiatry, and the disorders it claims to treat, is that a lot of them aren't even based on that much scientific evidence, if any at all. It's the only area of medicine where diseases are created by committee.
A really well-known example of this, which also happens to be exploding in terms of diagnoses right now, is post-traumatic stress disorder. That was not a diagnosis you could receive prior to the DSM-III. And the reason it is in the DSM-III is not because there is a preponderance of evidence that something called PTSD actually exists in the same way that another disease treated by other fields of medicine exists.
It was created because, in the aftermath of the Vietnam War, there were a lot of vets who were in acute need of care, but because their suffering did not have a name, they could not get, you know, insurance-provided clinical support. And so the veterans lobbied for something to be included so that they could go talk to therapists and get the help that they needed. But a disease was essentially created out of that.
And that is astounding. That is crazy. And the reverse happened with homosexuality. That was considered a mental disease until the DSM-III, when the gay rights lobby had it removed. And today we look back on that and say, yes, that was unarguably a good thing, but it is crazy.
I can't go to the APA and lobby cancer out of existence. It cannot happen; we know it exists. So a lot of these diagnostic categories were essentially created for one purpose and one purpose only, which was to enhance clinician agreement about what they were treating. The goal was to get away from a situation where, if I went to one therapist, they might tell me I have one thing, and if I go to another, they're going to tell me something else.
We actually want both of those therapists, if they're looking at the same person, to arrive at the same diagnosis. Because we don't have biomarkers or any sort of like a quantifiable way to do that, we don't know what these diseases are at like a biophysical level.
We need to rely on symptoms and groupings of symptoms to say if you have clusters of these symptoms, you likely have X, Y, and Z. So that was literally the entire purpose of the DSM. It was not, you know, to say that these diseases are valid. It was to say that these disease categories are reliable. And the, the, the kicker to this, and I'll shut up for a second, is that it didn't work.
It didn't work and it hasn't worked for the past 50 years and the NIMH knows this in 2013 they stopped allowing the SM symptom-based criteria to be the basis of research project that was insufficient because they recognize that this doesn't work as advertised. We don't have anything better. And so that has led, especially under Tom Insel, who was the most recent NIMH director that has led to a big push for quantification.
So, you know, looking for the biology of mental disorder is still a huge area of research, but this also has a huge behavioral component, and now that we have this increasingly networked society where I carry a tracking device on me at all times and interact with it all the time, I sit in front of a computer for most of my day.
That's an incredible data stream. And the thinking behind using artificial intelligence in psychiatry is that the data that can be extracted from digital and networked devices will actually allow us, for the first time, to quantify mental disorders in a way that is at least reliable.
And if it's reliable, that means we might also be able to use that data to then develop valid disease categories that do allow us to make intelligent decisions about how people are treated. Hell of a goal. I love that goal. The problem, again, is that it's not working as advertised, but you wouldn't know it if you just looked at the label on the box.
Justin Beals: Yeah. And certainly I've heard a lot about the reproducibility crisis. It's happened in a lot of scientific fields, but it does feel like some of the social sciences, or psychology, have struggled the most, you know, with this reproducibility issue. And they've pushed the biomarker thing. You identify some of the areas in your book, I think, that Insel himself worked on, like genomics, fMRI machines. You know, it does feel like we're getting a little more sophisticated or precise with the tooling.
But we're not quite there yet where we can point out, you know, a set of biological features that lead to a particular mental health issue.
Daniel Oberhaus: Yeah, I think the challenge to me is that we are unarguably getting better at measuring, but we don't know what we're measuring. We're getting more and more precise measurements of something that we don't understand, which is a really interesting and unique problem. Just because you can quantify my behavior down to the second, and know every single time I twitch my thumb, we can try to correlate that and say, well, that correlated with the onset of mania or something. But what the hell is mania? What is that, actually?
And the problem is, we don't know. And a lot of mental disorders are socially prescribed. And so, to the extent that technology is used as a control vector, and I say control with hopefully no moral implication behind it, because to me a pretty precise definition of technology is an extension of human control over the environment, we should really think about what that means when we're saying you have X, Y, and Z, and as a result we're going to do A, B, and C. What is that thing? You're now making decisions and using this control vector.
You're telling me that I have some pathology and that this is the best course of treatment. Is that actually true? Does this lead to better patient outcomes? There's very little, if any, data that shows that these systems, as they currently exist, improve patient outcomes or are even better than, you know, just having a human do the same thing.
So we don't even know if they work, and we certainly know that they're not, you know, equivalent. We might get there. And because this is probably a more technically savvy audience: the point of the book is that it's not about AI. I love AI, and I'm actually rooting for its success here.
It's not the tool; it's the area it's being applied to. It's psychiatry that has the crisis, not artificial intelligence. But that does mean we have a responsibility to maybe not use this tool in this way, in this field, right now. So it's more just saying, hey guys, psychiatry isn't ready for AI, and here's why, as opposed to saying AI is the problem, because I think there are a lot of people who just kind of hate AI at a gut level for various reasons. That's not what this is about at all.
Justin Beals: I can speak from the technical side. I have used AI, or machine learning, you know, data-driven modeling and probabilistic predictions, for a long time in software and code, but I am very frustrated personally with the way some companies put AI tools out into the marketplace and the claims we make about them.
And I think I struggle when, and I see this in some of your work in the book, we can certainly analyze aspects of behavior utilizing AI: things like lexical data, you know, biometric or smartphone data, movement data, vocal tonality, emotion and facial recognition. We can measure those things, but ethically we really don't do a good job of establishing the accuracy and reliability of the predictions that come out of those systems. So I felt this other parallel between AI development and psychology.
Daniel Oberhaus: No, and it's a good and important point because, you know, it's what I was trying to get to earlier was that, you know, we are getting very good at measuring behavior and behavior is certainly a dimension of mental disorders.
Like, there's no disagreement there that we can measure our behaviors better than ever. All I'm asking is like, what are you telling me that my behavior means? And like, what is the outcome of you making that decision about, you know, these behaviors correlate with this thing that we're calling depression?
And if the reason we're doing this is to provide help to you, then I want to know a little bit more about this thing called depression that my behavior supposedly correlates with. Because it kind of just feels like, again, maybe we've defined ghosts in a way where we're going to find them.
And now, with AI, we can do that at scale. Because prior to this, you know, clinical interventions, for better or worse, were limited largely to physical institutions, right? So you could be an inpatient in a hospital, you could go to weekly therapy, and then they could prescribe you drugs, for instance, to manage whatever you're dealing with in the interim between clinical visits.
The vast majority of the time, you're not in a clinic, so clinicians are kind of operating with a black box. That no longer has to be the case, and that's part of the promise here. But again, we don't actually know the thing that we're trying to apply it to well enough to say, I need to measure your behavior in this dimension to understand this. We're just kind of at the point where it's like, let's just Hoover up everything and see if we can start drawing correlations with it.
And there have been famous blowups already within the tech sector for this. Mindstrong, for instance, which was co-founded by Thomas Insel, was trying to do exactly this. It's called digital phenotyping. And the objective there is to look at, effectively, metadata and digital exhaust and try to correlate that with, you know, flagging early-onset crises for mental health patients, identifying disorders.
And that's a really interesting proposition that came out of Harvard about a decade ago, this idea that it doesn't actually matter what I'm typing. It's my scrolling speed, my geophysical location data, my call records, the tenor of my voice and how it registers when I talk on the phone. All of that data has been used to try to correlate with the presence or onset of mental disorders, to varying usefulness. But Mindstrong raised 100 million dollars, and about a year ago they just shut down, kind of tried not to make a big deal out of it, didn't really say why. Was it because it didn't work? I would expect, I guess, that if it worked as well as everyone was apparently hoping it would, we'd know; they were one of the most well-funded mental health AI startups ever. They had every reason to succeed, but they never really published any data showing that this was effective.
So I can't say whether or not it was; maybe it was and they just didn't tell anyone, but that would be a weird choice. Yeah, that's kind of where we're at here.
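Digital phenotyping, as Daniel describes it, is at bottom a correlation exercise: derive behavioral features from a phone's metadata and look for relationships with some mood or symptom label. The sketch below uses synthetic data and invented feature names purely for illustration; the hard part he is pointing at, namely what the label itself measures, is precisely what no amount of code can settle.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 60

# Synthetic daily behavioral features of the kind digital phenotyping uses:
# screen unlocks, outgoing calls, distance traveled, typing cadence.
features = {
    "screen_unlocks": rng.poisson(80, days).astype(float),
    "outgoing_calls": rng.poisson(3, days).astype(float),
    "km_traveled": rng.gamma(2.0, 3.0, days),
    "typing_gap_ms": rng.normal(220, 40, days),
}

# A self-reported mood score (0-10). In a real study this label is the
# weak link: it is itself a subjective, DSM-shaped construct.
mood = np.clip(rng.normal(6, 2, days), 0, 10)

# Digital phenotyping in miniature: correlate each feature with the label.
for name, values in features.items():
    r = np.corrcoef(values, mood)[0, 1]
    print(f"{name:>15s}  r = {r:+.2f}")

# With random synthetic data the correlations hover near zero; with real
# data you will find *some* correlations, which is exactly why the
# question "what is the label measuring?" matters more than the model.
```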
Justin Beals: There's a corollary to me with this concept of behaviorism that you talk about in the book and the need of psychologists to systematize, you know, their delivery of care.
And of course, as computer scientists, we're always looking for these systems and thinking about how we might replicate them. And we do a lot of, you know, we throw a lot of data in a black box, put an outcome in there and see if the results are probabilistically reproducible. But we don't, we oftentimes don't care what's in the black box too much.
Even on the AI side, that's been incredibly dangerous to me. I went through this when I built a hiring app a long time ago, and we started dealing with unstructured data. This is how we thought about it: with unstructured data, we would use things like lexical data, like essays people had written, and we'd get correlations out of it, you know, every single time.
But when we really looked at the data we put in, we'd say, yeah, you'd expect that correlation if you'd asked the system for it. And it has nothing to do with actually telling us what is fundamentally wrong or inherent about that person at the end of the day. Now, HireVue, which you mentioned in the book, has used, you know, both vocal and emotional recognition to try to inform things like hiring. And these are psychological judgments that we're making in business.
Daniel Oberhaus: Yeah. And I mean, the structured and unstructured data question is a really interesting point on its own. This is probably an overly simplistic example, but maybe a relevant one: if I create a model and it's a cat recognizer, I can give it a picture of a cat and then give it a picture of a dog, and if it identifies the picture of a cat correctly, I can say this model is working as advertised. Maybe sometimes it'll mistake a cat for a dog, but, like, 98 percent of the time I can be pretty confident that it got it right.
The problem is, we know how to define a cat. No one's asking, is it a cat or isn't it a cat? We can all look at it and say the model fucked up, or it didn't. Because, you know, whatever, I'll go get its DNA if I have to, but I can have a definitive answer to whether or not that thing was a cat.
We can't do that with mental disorders. So it's like when the model, like for instance, in this case, if the model is just hoovering up my data, when I'm interacting with digital tech, and then it's correlating this to, you know, the way we define mental disorders in the DSM, like based on symptom-based criteria, people are trying to move beyond that where it's like less about saying like, is this, this and this happening?
And it's more about just saying, Hey, if this person isn't calling a lot and like, they're probably disconnected from their friends, and like, this is maybe an implication, but like the general point still holds, which is like. The model might spit out depression, and then you'll bring in 10 psychiatrists to say what do they have and they'll give you 11 different answers, and it's like no one can even agree if it like the model got it right, like how do we even know if it's doing a good job it's it's and like, that's the problem.
It's like, the problem is not the models. There are lots of challenges, but like, they're not unique to psychiatry on the AI side. It's the same challenges you'd have applying AI anywhere in terms of like bias and things of that nature. The problem is on the psychiatry side is like, we can't actually define the cat.
And that's an example for psychiatry.
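One way to make the "can't define the cat" problem concrete is inter-rater agreement. A cat/dog classifier can be scored against a ground truth everyone accepts; for a psychiatric label, the raters themselves disagree, and a statistic like Cohen's kappa makes that visible. The ratings below are made up for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Ground truth exists for cats: two labelers looking at the same photos.
cats_a = ["cat", "cat", "dog", "cat", "dog", "cat", "dog", "cat"]
cats_b = ["cat", "cat", "dog", "cat", "dog", "cat", "dog", "dog"]

# Hypothetical diagnoses of the same eight patients by two clinicians.
dx_a = ["mdd", "gad", "mdd", "bipolar", "gad", "mdd", "adjustment", "mdd"]
dx_b = ["gad", "gad", "bipolar", "mdd", "mdd", "dysthymia", "gad", "mdd"]

print(f"cat/dog kappa:   {cohens_kappa(cats_a, cats_b):.2f}")  # high
print(f"diagnosis kappa: {cohens_kappa(dx_a, dx_b):.2f}")      # near zero
```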
Justin Beals: Yeah. You know, in the book, one of the stories that really floored me, and I think is an example of how we struggle with accuracy on both the AI side and the psychiatry side, was the work that had been done around facial imaging and emotion.
Could you tell us a little bit about the early psychiatric studies that used photos of faces and tried to identify whether there was a pan-cultural emotion behind them?
Daniel Oberhaus: Yeah, so this was one of the more interesting areas in writing the book that I wasn't super familiar with before. In terms of the vectors for mental disorder and mental health, you normally think about cognition, behavior, and emotion; those are kind of the three broad dimensions where you would look for indications that something is going wrong.
And so, you know, behaviorism is obviously very well known, and we can look at, you know, disadvantageous behavior that becomes harmful to somebody. Cognition: CBT kind of combines those two, and it's one of the most successful therapeutic programs ever in terms of patient outcomes, actually, just plain old talk therapy.
And then there's the emotion thing. And emotions are really interesting. We actually have a decent sense that there's a science to emotions, but it's so multifactorial in terms of what influences it; your stomach has a role to play in how you feel today at an emotional level.
And so, you know, when psychiatrists were looking at this, that's a bit of a problem. But what they started to do in the early 20th century was look at facial expressions in particular. And actually it kind of predates that; it was in the 19th century as well. Darwin was very interested in this, and the study of how a person's facial expressions map to their internal psychology, which is how they were thinking about it in the early days, led to a lot of interesting and sometimes very problematic experiments that involved, like, shocking people to get their faces to twitch a certain way. And then, in the 1960s or so, things started to get a little bit more buttoned up.
And the question was: are emotions, in terms of how they are outwardly presented, universal? The answer to that is no, but the question was whether there are universal emotions in terms of how they're displayed. Can I look at a person from every different continent, all these different cultures, and be able to say, that person's happy, that person is sad, across the seven or eight odd dimensions they were looking at? So that was this foundational question everyone was interested in, including the defense industry; there was a lot of intelligence money flowing into answering these sorts of questions. And there was one gentleman within this field who kind of gave birth to what we might call emotive technology today.
And he ran an experiment where he basically printed out a bunch of pictures of people's faces on note cards and assigned five or six different emotions to each of them, in terms of what a Western audience would look at that picture and say: this person is smiling, they're happy; this person is frowning, they're sad; whatever.
And then he went to, I believe it was Papua New Guinea; it's in the book, I know it's accurate there, I'm just a little bit rusty on it. He went to a, you know, not a completely uncontacted tribe, but they had had virtually no exposure to the outside world. And he ran a series of studies where he showed them these faces and asked them to ascribe emotions to them, to see: does the way we interpret these faces in the U.S. map to the way a tribe who has never interacted with literally anyone besides themselves and desert lizards would see them? Would they see the same thing?
And the answer was yes. The problem was that you only had five options to pick from, right? So it's like giving a multiple-choice test, as opposed to asking someone, hey, when I show you this picture, what emotion is this? And now we know there are lots of different ways that people express emotions, and they vary across cultures.
But the problem was that it created a standardized index that was then used in the initial facial recognition projects. A lot of this was driven by the government; it was a lot of policing work, as you might expect. But the goal was to train machines to automatically be able to recognize emotion.
And then the next step beyond that was: well, if we can recognize emotion, and a lot of the people involved in this early research were very explicit about this, that has clear psychiatric benefits, right? If we can train a machine to analyze a person's outward expression of their emotions, then, because emotions have such a big impact on mental health, we can quickly start to map that to disorders and have the face as the royal road to understanding what disorder they might have. And that has spiraled out of hand. There's been a lot of work, obviously, on emotive technology, some better than others. Facebook is interested in this. Microsoft famously started a program around this and then shut it down not too long ago.
And for the reasons that I'm describing, which is that it doesn't work. If you look at the data, it does not work as advertised; these systems are ineffective. The data does not back up the promise whatsoever, but for whatever reason, no one stopped to say, hey guys, AI can't do this.
It can barely recognize emotions as accurately as we want it to, and it certainly can't use those emotions to decide what is going on inside that person's head. And so there was a big meta-study that came out about five years ago that I would encourage everyone to read if they're interested in this, because it goes through basically every single study that has ever been done on this.
And it conclusively came to the conclusion that, yeah, this doesn't work. So, I know there are a lot of people in this field who are very vocal about raising the alarm and saying, hey guys, don't use AI in this way because it's not effective. But it continues apace, and it's a huge part of the application in psychiatry, for those reasons.
Justin Beals: Yeah. You quote a UK researcher in your book who says, "Emotion recognition AI is akin to trying to measure the mass of an object in meters." It's an incredible encapsulation. And the other thing that's so incredible about this story to me is the confirmation bias in a lot of the things we do.
Everything from a psychiatrist doing a research project in Papua New Guinea, wanting this outcome and wanting to be able to categorize, you know, emotion, all the way to when we build these companies and these models, there's a confirmation bias. We want them to work. We're putting investor dollars behind them.
We're putting parts of our life behind them. We want it to be successful and to help people. And that's so dangerous, in my mind, both on the technology side and on the researcher side.
Daniel Oberhaus: Yeah. And I mean, I think I agree with you. We exist in a paradigm, especially on the research side, where it's publish or perish, and null results tend not to get published.
So it's not just people making sure that their VCs are happy and seeing some progress, and spinning the data to make it look a little bit better than it is. It also goes down to the people who are doing the research. And I think the challenging thing is, I do actually think that everyone working on this has the best interests of people who are suffering in mind, but we shouldn't let that blind us to the important questions, which are: does it work?
Is it safe? And is it better than what we have already? And the answer to all of those right now is: I don't know. I don't know. I don't know. And if you ask me personally, it's probably not, probably not, probably not. But I'm an open sceptic. I personally want it to work. Calling it for what it is is what I'm really trying to do with the book, not necessarily saying this isn't worth working on, or that it will never be as effective as people want it to be.
It's just saying, hey, it isn't right now, and no one is really, you know, being honest about this. And yeah, there are a lot of personal motivations for why you might not be.
Justin Beals: Right. And I think a lot of the time our own brains don't do us the best service here.
One of the things that really brought back some memories for me is you talked about a software program called Eliza, from a long time ago. And I remember this program as a kid. It was built as an early kind of therapist, but it was more of a joke than a real therapist. But people would actually talk to it forever, you know?
Daniel Oberhaus: Yeah. And you know, I guess this is maybe something that people listening to this might appreciate. There's also a great video game, a more recent one called Eliza. That's about AI and psychiatry. I'd highly recommend playing it. It's beautiful. It's very poignant and also fun. It's like a kind of interactive story.
So if you're interested in that kind of thing, I highly recommend it. It's saying essentially the same thing as my book, but in a narrative video game. But Eliza is really interesting to me because its progenitor never intended it to be used as a therapeutic tool. It was explicitly for studying computer and human interaction.
And the therapist framing was just more or less a convenient hack to do that, because a therapist persona can just spit questions back at you; on the backend, that's relatively easy to implement at a time when AI was in its infancy. And man, what a bummer for him, because people took it and ran with it and really did start using it as a therapist, which he hated.
He saw it as incredibly dehumanizing. What was interesting is he had kind of a foil, a man named Kenneth Colby, who was a psychiatrist and very interested in AI, and came at it from the opposite direction. He was saying, hey, I think we can use artificial intelligence, this is the 1960s, to model mental disorder and use it as a way to study mental disorder, to try to actually figure out what it is.
So rather than practicing on patients: if we understand what psychosis is, we should, in principle, be able to model it on a machine. And so that was his route in, and he created, I think he called it Eliza with attitude, a program called Parry, and it was meant to model a paranoid patient.
Whereas with Eliza you play the role of the patient and the computer is the therapist, in this role you're playing the therapist and trying to figure out what's up with the patient. And during the first-ever public demonstration of ARPANET, both of these were among the handful of apps that were accessible on a terminal.
So psychiatry and AI have literally been with us since, like, the birth of the internet. And actually, I had an opportunity as part of this book to talk with Vint Cerf, of course well known for all the protocols he worked on that made the internet what it is today.
And when he was at UCLA, just as a lark, he connected Eliza and Parry to have them have a conversation, and, you know, the full transcript is in the book, but it's nonsense. It was kind of just a fun experiment. But it's all to say people were really paying attention to this and taking it very seriously 50 years ago.
And it's kind of come full circle, because the thing that Kenneth Colby always wanted, and what Weizenbaum, the creator of Eliza, always dreaded, has now come true, which is this idea of a chatbot therapist. There are plenty of these available today; you can go download them on the App Store, like Woebot, things of that nature.
Kenneth Colby always dreamed of having a chatbot therapist in everyone's pocket. It's here, but the problems haven't gone away or even really been addressed in any meaningful way. And he was very aware of those challenges and problems even when he was developing this. He was not, I think, blinded by ideology; he actually used to be a psychoanalyst and left the field because of the lack of scientific rigor. So he was truly making an honest attempt to bring some scientific and medical credibility to a field that he saw as essentially based on nothing. And I guess now it's finally here, but we just don't really have the results I think he was hoping for. We do have the tech, though.
Justin Beals: You have some recommendations in your book about how this particular area of research or innovation can improve itself. One of the ones that I really liked was the explainable algorithm concept. What would you hope that data scientists or AI engineers, perhaps even pairing with psychiatrists to develop applications, would explain about their algorithms?
What would you want them to expose?
Daniel Oberhaus: Yeah, challenging question. I think the way I would approach it is by point of comparison. And I guess maybe we can use chatbots as an example, because there are many different instantiations of psychiatric AI, and I think this is probably the one that's most mainstream.
There are news articles that come out all the time now about people using GPT as their therapist. And with explainable algorithms, and a lot of the other challenges that are more on the AI side, such as data bias, the reason it's hard to implement a chatbot therapist is not necessarily because it won't work, but because of all the other things in therapy that it can't really implement.
And so one of these things like a very basic thing, is like patient data protection. So, in the United States, at least there are very strong protections through HIPAA around how people can collect, use and store sensitive patient information. The problem is like when you interact with the chatbot, a lot of them don't fall under HIPAA restrictions because they're not technically marketed as a medical device, right?
Like they're considered wellness devices. They're not FDA-approved. Some of them are seeking FDA approval so they can be prescribed as a medical device, but they've been launched for years now without having that; even though they're effectively being used as a medical tool, they do not call themselves therapists.
That's extremely intentional. And so the short answer to your question is that the bar I would set for any explainable algorithm is the same bar I would set for a therapist or a psychiatrist. If I go and visit a psychiatrist and they tell me that I have a certain diagnosis and that I must do X, Y, and Z, they prescribe me a drug or, you know, implement some sort of behavioral regimen, then they have the obligation to tell me why, to explain the thinking around it, and to not just tell me what to do but give me the opportunity, as their patient, to engage with them on it and make sure it's something I'm comfortable with.
Maybe I don't actually agree with what they're doing; I can go see another psychiatrist. You don't really have the opportunity to interrogate the algorithm in the same way. And, you know, that's a high bar, because it is, in many ways, a black box. And maybe that is just something we have to take as a given.
But that, in my mind, makes it very risky, because the algorithm can't explain to me why it arrived at a certain diagnosis. And, you know, we're not at the point where they're prescribing drugs yet, but it is 100 percent moving in that direction, and moving really quickly. Doctors, psychiatrists, are beginning to take recommendations from these machines.
They can still use their professional judgment, but look at what's happened in every other field of medicine when AI starts to be used in clinical practice. The doctors are still highly trained, but this is kind of the challenge for psychiatry, which is a bit of a laggard here: when you look at, say, radiology, the algorithms are outperforming human radiologists all the time. And so, in fact, what you're starting to see is doctors beginning to trust the algorithm too much, and maybe doubt their own gut intuition around things, because they have seen the data, and the data says that in most cases this is actually going to find that tumor better than you are.
So if psychiatrists are now being told the same thing and begin to use these things to inform their clinical judgment, even if it's being delivered by a human, but being recommended by an algorithm, if that psychiatrist can't say how that algorithm arrived at the thing, then they are making an irresponsible recommendation to the extent that they're using it to inform how they're treating you.
So, yeah, the really short answer is that with explainable algorithms, with data, with anything on the technical side, we need to hold these particular implementations to the same standards that we would hold a psychiatrist to, because psychiatrists are also responsible for you, in the sense that there are legal obligations they must fulfill as your provider. Whereas if an algorithm tells you to do something that results in serious harm, at present it's unclear who should be responsible for that. Obviously, and I'm sure you're aware, it's a very active area of debate, but we don't have a great answer there yet; it hasn't been decided. If I were to go visit a therapist, there are very clear rules and, you know, operating procedures for how that interaction must be handled.
So I think a great starting point would be to just say, hey, look: we hold these people to a high standard for a reason, and these algorithms must be able to clear it, or else they cannot be deployed in these scenarios.
Justin Beals: Yeah. I want to say to all the computer scientists that are listening to us and people who have been developing AI tools that I've been challenged with this problem in my work in education and personnel management to explain how an algorithm might work.
And it is work. I get it. But guess what? You can do it, guys and gals. It's very possible to find a way to really bake this into your code. And I think ethically we as an industry need to hold ourselves to these types of standards no matter what we're doing, but especially when it impacts people. The other side of it that you mentioned is that by treating these more like games, not actual medical devices, but selling them as creating better health,
they've skirted HIPAA requirements, and it's led to some really serious privacy breaches. You talk a little bit in your book about, I believe, the Crisis Text Line. Could you tell us a little bit about what the Crisis Text Line is and how we had a major privacy breach there?
Daniel Oberhaus: Yeah, I'd be happy to. So this actually was one of the more recent examples in the book. The challenge of writing a book about an actively evolving field is that by the time it comes out, so much has already happened.
But with Crisis Text Line: this was during the early COVID pandemic. It was a hotline used to help people in extreme mental distress. It started out mostly focused on young adults, but then kind of just became used by everybody, much like many of the suicide hotlines that exist, but not just for suicide.
And they were incredibly effective relative to all these other systems. The way they were able to be so effective is that they used an AI-driven triage system, which meant that they would be able to look at inbound inquiries from people in distress. And if the system identified someone who's like, I'm about to kill myself, that's probably a higher priority for someone to get on the phone with than someone who's upset with their mom and dad.
And so through that, they were able to get people timely support, because fast intervention does matter for a lot of these really high-risk situations, such as suicide; intervening at the right time can truly save someone's life. They were very, very effective at that. What ended up happening, though, involves their founder and former CEO. Crisis Text Line was a nonprofit, very mission-driven, and she was also the founder and a board member of another company that had nothing at all to do with it, a company that was developing chatbots for call centers.
And what happened was that, in exchange for some equity for Crisis Text Line, all of that very sensitive patient data, from people who are reaching out because they are ideating suicide, was used to train models meant for call centers. In the aftermath, and you can go read all about it, it was very prominent for a little bit, the response was, hey, the patient data was anonymized, blah, blah, blah. Well, I'm sure a lot of your listeners know to take that with a huge grain of salt.
When we say patient data is anonymized, security researchers understand that it's probably not as anonymized as you think. It's shocking how easy it is to identify people with data you wouldn't expect could be used that way. So, all to say, it might seem like, oh, whatever, they trained it, and maybe the chatbot at the call center is better, maybe they're nicer and more responsive.
But no one who texted the line agreed to that or even knew that it was happening. And that is something that, if a psychiatrist did it, they would be in jail, or they would at least lose their license. So, a huge problem. And then, just like every other field where we apply AI, there are related challenges around data storage that we go into, where there have been leaks of very personal information onto the dark web.
So, yeah, just given the nature of what you talk about in a therapist's office, that data is not being handled how it should be.
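Daniel's caveat about "anonymized" data is easy to demonstrate. Stripping names does little when a handful of quasi-identifiers, say ZIP code, birth date, and gender, can be joined against another dataset that does carry names. The records below are synthetic and the linkage deliberately trivial, but it has the same shape as well-documented linkage attacks on supposedly de-identified data.

```python
# "Anonymized" crisis-line records: names removed, quasi-identifiers kept.
anonymized = [
    {"zip": "98101", "dob": "1991-04-02", "gender": "F", "transcript": "..."},
    {"zip": "98052", "dob": "1987-11-19", "gender": "M", "transcript": "..."},
]

# A separate dataset (voter rolls, marketing lists, a breach dump) that
# carries the same attributes plus identities. Entirely synthetic here.
public = [
    {"name": "A. Example", "zip": "98101", "dob": "1991-04-02", "gender": "F"},
    {"name": "B. Sample",  "zip": "98052", "dob": "1987-11-19", "gender": "M"},
    {"name": "C. Person",  "zip": "98052", "dob": "1990-01-05", "gender": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "dob", "gender")):
    """Join on quasi-identifiers; any match re-attaches a name."""
    index = {tuple(row[k] for k in keys): row["name"] for row in public_rows}
    for row in anon_rows:
        match = index.get(tuple(row[k] for k in keys))
        if match:
            print(f"re-identified: {match} -> transcript {row['transcript']!r}")

reidentify(anonymized, public)
# Both "anonymous" records come back with a name attached.
```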
Justin Beals: And two things there. One is that, as you mentioned, and I firmly believe this, the concept of anonymization is a foible. It's not true. We know statistically, mathematically, how easy it is for just a little bit of data to identify someone.
And the second is that if you actually want to be a source of medical therapies, HIPAA states that you can't share any patient health care information, period, and this company would have been brought under investigation under those laws. So, you know, I appreciated the revealing of that; we're already dealing with this.
And on the back side of it, I think this plays really into what's happening with OpenAI: their status as a nonprofit, what they want to do with that data, bringing in external investment from corporate investors, and then trying to switch the nature of the organization after our data has been collected. So, yeah, I've got a lot to rant about.
Daniel Oberhaus: I'm sure we could do a multi-hour episode, but for the sake of our time and everyone listening, we can avoid that. And maybe just to put a finer point on this: when I started writing this book in 2018, it was mostly rejected by publishers because no one thought it was a topic with a wide enough audience.
Certainly an interesting academic one, but maybe not something that most people care about. Since then, ChatGPT has happened. People have started using it in this way, exactly as you might expect would happen when you start digging into it. And all of a sudden it's become something that people are starting to pay real attention to.
If six years ago it might have sounded a little bit crackpot to have this guy come out and say, hey, we're on the verge of turning the world into an AI-monitored asylum, just a couple of weeks ago in the New York Times a big story came out about the use of suicide detection software in schools. It's being implemented on office computers, it's being implemented in prisons, it's being implemented in courthouses. It's being implemented across all sectors of your life.
Apple is looking at this. Google is looking at this. Facebook is looking at this. This is very real, and it is, I think, worth taking very seriously, both for the sake of patients and for everyone else who might get caught in this kind of catch-all dragnet of surveillance that is in the name of your own good.
It's for your own mental health. It's to make sure you're safe. That is one of the most nefarious roads to loss of liberty, in my opinion, because it's hard to argue against something when it's positioned that way. So take a look at the data, do your research. And really, I think we should maybe continue studying this a little more before we release it into the world.
Justin Beals: Daniel, I'm so grateful for your book, The Silicon Shrink. I thoroughly enjoyed reading it. It's in-depth research, carefully thought out, and a really revealing book to read. And also, thank you for joining SecureTalk today and sharing your expertise with us.
We really appreciate it.
Daniel Oberhaus: It's my pleasure. Thanks for having me, Justin.
About our guest
Daniel Oberhaus is a science writer based in Brooklyn, New York. He is the founder of the deep tech communications agency HAUS and was previously a staff writer at WIRED. He is the author of Extraterrestrial Languages (MIT Press), which explores the art, science, and philosophy of interstellar communication, and The Silicon Shrink: How Artificial Intelligence Made the World an Asylum (MIT Press), which examines the dangers of applying AI in psychiatry and offers insights into the new tech-driven psychiatric paradigm.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.