Redefining Personhood: The Legal and Ethical Challenges of an Advanced General Intelligence with James Boyle
In a groundbreaking conversation on SecureTalk, legal scholar James Boyle explores the complex landscape of artificial intelligence and biological innovation, challenging our understanding of personhood and consciousness. Drawing from his recent book “The Line: Artificial Intelligence and the Future of Personhood”, Boyle dissects the potential future of artificial general intelligence and biological engineering through the lens of legal and ethical frameworks. We shine a light on how our current technological advancements are forcing us to reexamine fundamental questions about what constitutes a "person" – a journey that parallels historical shifts like human rights and the evolution of corporate personhood.
Boyle also delves into the equally provocative realm of biological engineering, where technologies like CRISPR are blurring the lines between species and challenging our ethical boundaries. He warns that we're entering an era where genetic modifications could fundamentally alter human capabilities, raising critical questions about ownership, consent, and the rights of an invention. For cybersecurity professionals, AI researchers and corporate leaders, Boyle's legal insights offer a crucial roadmap for navigating the complex ethical terrain of emerging technologies, emphasizing the importance of proactive, critical thinking in shaping our technological future.
#AGI #AI #Ethics #Law #Cybersecurity #medev
You can find the book here
Secure Talk - James Boyle
Justin Beals: Hello, and welcome to another episode of Secure Talk. I'm your host, Justin Beals.
Today we're gonna delve into the whirling vortex of artificial intelligence and the deeper philosophical questions of our digital age, with a splash of humor and a nod to my favorite AI, the HAL 9000. Now, I'm not gonna ask you to open your pod bay doors, but I will ask you to consider that what was once science fiction may be closer than we think.

You see, what we're about to discuss isn't just another firewall upgrade or the latest phishing scam. We're venturing into a topic that could redefine what we know as personhood. For me, it was HAL 9000, the star of 2001: A Space Odyssey, that first got me to wonder: can computers share the same sky-high complexities as humans, minus the whole need-to-kill-the-crew thing?

Well, HAL aside, we're now living in a world where this isn't purely science fiction. We can all imagine a near future where AI develops beyond its cold silicon existence, taking a leap from artificial narrow intelligence to artificial general intelligence. Could we someday see an AI that isn't just executing commands but actually wondering, why did the designer build this software this way?

Anyway, the question is no longer whether we should prevent AI from achieving personhood, but rather how we protect it once it does. Do we grant it rights and responsibilities? Do we give it its own advocacy groups? And what happens when AI starts picking its own favorite colors and, heaven forbid, exhibits attachments to its technology?
Today we're going to jump into the rabbit hole with legal expert and author James Boyle on how and where the line between human tool and sentient consciousness becomes incredibly blurred. Our journey will examine not just the legal and ethical frameworks, but also the philosophical ponderings challenging us as engineers of this new frontier.
James Boyle is the William Neal Reynolds Professor of Law at Duke Law School and the founder of the Center for the Study of the Public Domain. We will discuss his newest book, The Line: Artificial Intelligence and the Future of Personhood, from MIT Press. Professor Boyle was one of the original board members, and eventually the chairman of the board, of Creative Commons, which works to facilitate the free availability of art, scholarship, and cultural materials by developing innovative, machine-readable licenses that individuals and institutions can attach to their work.
He was also the co-founder of Science Commons, which aimed to expand the Creative Commons mission into the realm of scientific and technical data, and he served as a member of the Board of the Public Library of Science. Professor Boyle was awarded the World Technology Network Award for Law for his work on the public domain and the second enclosure movement that threatens it.
The Electronic Frontier Foundation gave him their Pioneer Award for his work on digital civil liberties.
Professor Boyle is also the author of Shamans, Software, and Spleens: Law and the Construction of the Information Society. He wrote The Public Domain: Enclosing the Commons of the Mind and an open textbook, Intellectual Property: Law and the Information Society, now in its sixth edition. He is the editor of Critical Legal Studies, of collected papers on the public domain, and, with Larry Lessig, of Cultural Environmentalism @ 10.

He has also written a distressing number of articles on intellectual property, internet regulation, and legal theory, both for scholarly journals and the popular press. His other books include two scholarly comic books co-written with Jennifer Jenkins: Bound by Law, about fair use, and Theft! A History of Music, a 2,000-year-long story about music and musical borrowing.
Both can be freely downloaded under Creative Commons licenses. Before newspapers disappeared behind paywalls, he wrote a regular column for the Financial Times' New Economy Policy Forum. His 280-character observations on various issues can be found on the site that used to be Twitter, under the handle @thepublicdomain.
Join me on Secure Talk as we unravel the prospects of digital personhood and the immense responsibility we carry in shaping the inhabitants of tomorrow's world.
—-
Justin Beals: James, we really appreciate you joining SecureTalk today. Thanks for sharing with us.
James Boyle: It's a pleasure to be here, Justin.
Justin Beals: Excellent. Well, as I was doing my research for the podcast, I noticed something very prevalent in your background. Obviously you're a lawyer, but you had a big participation in the Creative Commons group early on.

I'm a huge fan. Creative Commons has been important both for some of the content I utilize, you know, to build things, as well as something I've contributed to myself every once in a while, maybe poorly. But I'm a little curious: any stories you like to tell, or any key moments in the early days of Creative Commons? Just coming up with the idea.
James Boyle: Oh, delighted to. It's something I'm passionate about. I believe in open access. The moral warrant for access to knowledge is a pulse, not a wallet. And back in the late 1990s and early 2000s, a group of people, Larry Lessig, Hal Abelson, Eric Eldred, were noodling around this idea of having some way of, at that point they were thinking, dedicating things to the public domain.

They very kindly invited me in, and I was part of the planning from a relatively early stage. And back then we thought, okay, well, there must be a way to do this. There are going to be people who want to share material. There are some people who want to keep their copyrights in the traditional manner and exercise them exclusively, and we say, great, good for you, that's fine. We're not here to preach that you should do it any particular way. As we said later, we want to help you exercise your copyright.
We also realized there were going to be millions and millions of people who were going to be coming online who were very different than the content providers of the past.
The content providers of the past, movie studios, publishers, broadcasters, they had like legions of lawyers, the kind of people I train, on hand. The copyright system was designed to work for them, for these, like large industrial entities, that produced content that was default closed.
It wasn't designed to work for ordinary people like you and me, and it certainly wasn't designed to build this sort of community of openness, where you could build on something created by someone else, and I, in turn, could build on what you did, and you could take bits from the stuff I did.
And so we started thinking about that. At first it was like, we'll have a big website somewhere and we'll put it all on there, because this is early days, right? And then Hal Abelson says, it's the internet, it's distributed. And we thought, okay, what we need instead is tools that will enable people to share content under the terms they choose. And so we contact the Copyright Office, and we say, okay, what's your preferred way for putting content in the public domain?

And they were sort of quite shocked, and wrote back and said, we don't provide that service. Because they were sort of like, we're about locks, you know, we're about locking things up; we're not about opening things up. And that was obviously, you know, there was a space there, a missing thing.
I mean, as a business person, you understand this: you look for what is not being provided. This was a space in civil society; it didn't need to be a business. This was something that was a public good. Everybody should have this way of doing it. So we thought, great, let's write.

First of all, write really, really good licenses, using the best legal minds there are, that allow people to do things like, say, you can use it but only non-commercially; or you can use it, and you can even do it commercially, but you can't edit it, or change it, or do anything to it.
You can imagine a documentarian doing a documentary on Gaza might want to present both sides and certainly wouldn't want one side chopped off and only that given. And some people might want to say, no, you can take it, you can change it, but you've got to share it under the same terms, a so-called viral license going forward.
And so we sort of noodled all of those. And then we thought, well, what we've got to do is make them comprehensible to two entities that currently don't get to, quote, read this stuff. The first is human beings, just normal people, non-lawyers, who look at these licenses and go, I have no idea what this says.

So we thought, well, we can simplify it. Let's have a front end with radio buttons, just really simple: the commons deed that you now know, the dollar, or the euro, with the line straight through it, right? Okay, non-commercial, and so on. And then we also wanted to make it readable to computers, which is why, when you get on Google and you look at advanced search, you can search for content that's free to use and share.
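[Editor's note: the machine-readable layer Boyle describes is typically expressed as `rel="license"` markup, the convention Creative Commons recommends for declaring a license in a page, which is what lets a crawler or advanced search find openly licensed content. The parser below is just an illustrative sketch of how such markup can be detected, not any tool Creative Commons actually ships.]

```python
# Illustrative sketch: detecting a Creative Commons license declared
# with rel="license" markup, the machine-readable layer Boyle describes.
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect href values of <a> / <link> tags marked rel="license"."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        rel = (a.get("rel") or "").split()
        if tag in ("a", "link") and "license" in rel and a.get("href"):
            self.licenses.append(a["href"])

def find_licenses(html_text):
    """Return every license URL declared in the given HTML snippet."""
    parser = LicenseFinder()
    parser.feed(html_text)
    return parser.licenses

page = (
    '<p>Photo by Example Author, licensed under '
    '<a rel="license" href="https://creativecommons.org/licenses/by-nc/4.0/">'
    'CC BY-NC 4.0</a>.</p>'
)
print(find_licenses(page))
# → ['https://creativecommons.org/licenses/by-nc/4.0/']
```

A search engine doing the same thing at scale is essentially how "free to use and share" filtering works: the license terms live in the markup, so software can act on them without reading the legal text.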
And so all of that, and the sort of dead ends we stumbled into, which will be familiar to anyone who's ever done a project like this, because your first idea of what it's going to be is never what it actually ends up being, all of that was just enormously fun. And then there was the social engineering, which is persuading people that this was a good idea.
Everyone's like, they're gonna make my work into a porn! It's sort of like, you're writing about Renaissance Italy. I mean, if anybody wanted to make a porn about Renaissance Italy, I don't think they would need your work! And then it's sort of like, Nazis! Well, again, not so much of a danger.

So we started educating people, museums, scientific societies, funders of governmentally funded research, and saying, look, this can be a tool you can use. And the social engineering, to be honest, was harder than the legal engineering, and maybe even than the computer science engineering to get it through the W3C.
But people started going, wow, this actually is useful. We found publishers, academic publishers, who were like, since I put my stuff under CC licenses, I'm selling more books as well as having more people engage with them. And once people understood the notion, then it was all downhill.
Wikipedia ends up adopting our licenses. More than 90 percent of all open-access articles on the web are under our licenses. And it really was this wonderful thing where we created a commons, a privately constructed commons. It's sort of like the equivalent of saying, hey, look, do you want a nice park in your neighborhood?

Well, how about we get all the people who would like this to be green space, and you'll all agree to sort of buy up this land and make it a land conservancy? And you know what, the local housing developer might actually be into that, because maybe that would make his houses sell better. And suddenly you realize that no one's coming to take away your stuff; what we're trying to do is give you another tool to exercise it.
So that's the story of Creative Commons from my side. It was an extraordinarily inspiring journey. I mean, I'm no longer on the board, but, you know, eventually I became the chair, and I was with it through some of the geeky adolescent years. And it was just a hugely wonderful experience, working with some of the most creative people I've ever met.
Justin Beals: There felt like a zeitgeist moment to me on the consumer side of what was going on, and I experienced Creative Commons through a couple of different lenses. One, I was really interested in producing music at the time, and of course, sampling was all the rage. We were building not just on the rhythmic patterns or the melodic patterns of other artists; we were literally copying and creating brand new music, things that wouldn't have been thought of by the original artists, by mashing together these different aspects.

And it was a big legal battle. I think it was KRS-One that finally got caught and had to deal with the legal ramifications of using corporate copyrighted material. And the idea that we could just produce art and put it into the public domain, because we wanted other artists to be able to work with it.
It was really inspiring, and it felt like it caught a moment of what we wanted culturally: the type of relationship that, I think, young people in the 90s wanted to share with each other.
James Boyle: Yeah, and that was, again, one of the amazing things. One of the things we learned is that we had our use cases in mind, and what happens is your users teach you that those may be good use cases, but they may have much more compelling ones.
The music community was definitely one. And, you know, now I think that's done through TikTok blanket licenses and so forth. But there was this period, ccMixter, that I think was very much in tune with what you're saying. But there are just so many uses we hadn't thought of. I put all my books under Creative Commons licenses, including the one we're going to talk about today.

So your listeners are free to download it for free. MIT Press would love you to buy a copy, and they did let me put it under a CC license, so they're nice; think about it. But if you don't have the moolah, you can download it for free. I want it to be available to anyone, anywhere in the world, who has access to the interwebs.
So I put a previous book out under a CC license, and I got an email from a guy whose nickname was the Blind Flaneur. And he says, you know, I'm visually impaired, and honestly, getting access to audiobooks is a pain; frequently, you know, their publishers stand in the way.

And he said, when you put it under a CC license, you probably didn't realize that that was going to allow me to create an audiobook, where I just do straight text-to-speech conversion, and then have that for my own use and listen to it. He said, but that's one of the things you enabled.
And I just love that story, because it's sort of like the whole point about making things open: you can't quite imagine all the uses people will have, both for the tools we created for them and for the material people will use. So we have an open-access intellectual property casebook.

If you want to learn about IP, download it for free. Jennifer and I wrote it, so it's up there. We get emails from people all over the world who've just decided to teach themselves American intellectual property law. One of them is some sailor out in the middle of the Mediterranean, sort of like, hi, my name's, well, I shouldn't say his name, but it was a hilarious name.

And I'm a merchant seaman, and I'm just learning about trademarks. And, you know, it's just this wonderful thing that harks back to the good old days of the internet, less toxic than the, as Cory Doctorow would say, enshittified world we live in now.

So, in that sense, I think this showed me the really positive sides of openness, and it really was a formative experience for me.
Justin Beals: Yeah, we're there again, intriguingly, with some of these open-source models and the corporate models in the revolution in AI and technology.

The conversation is being had once again about these copyright issues. Before I dig too deeply into The Line, you know, reading your book was really amazing. You obviously are a fan of popular culture around artificial intelligence, and I had to ask: do you have a favorite kind of artificial sentient being?

Is there one that stands out to you?
James Boyle: I'm gonna have to go with Murderbot, a troubled security unit whose secret vice is watching just hours and hours and hours of schlocky TV soap operas. I think Murderbot currently has my appeal, but I've definitely been influenced by others.

I mean, in the book I write a lot about Blade Runner and Do Androids Dream of Electric Sheep?, and they, I think, present some of the deepest issues that we have to deal with, both the movie and the book, quite brilliantly, in a way that, you know, really helped me explain those issues better than any philosopher or historian could have done about the precise issues they were talking about.
Justin Beals: Yeah, mine has been HAL, from 2001: A Space Odyssey. It's interesting; with the dark ones, I think we find an emotional attachment to their struggles in the world. Blade Runner is one of my favorite films of all time, of course, and really brilliant. So, you know, I had this thought that most of these sci-fi explorations of AI personhood concentrate on artificial entities that are making moral arguments about why they should be considered as persons.

You know, another one is Data in Star Trek. You have spent a lot of time on those ethical arguments; this is not, you know, the first opportunity you've had to consider these entities and what they mean. But your tool set is legal, of course. I'm interested in how you started thinking about these entities from this legal perspective.

Like, what was a touch point for you? And how did this concept of the legality of these entities grow?
James Boyle: Well, in my work in general, I've found that the best place to be is on the margins of things. That could be margins in the sense of people whose voices aren't being listened to: history from the working-class perspective, for example.

Or it could be margins like the things that we haven't quite figured out. Like, you know, Newtonian physics really just didn't explain light terribly well, and it turned out that there were some messages in there. And so I think the first thing I thought about was, well, we have these margins in law.
We have these margins in morality; it's certainly not just law. We have these margins in philosophy: when does life begin and end? So just take the end. It used to be, certainly during my lifetime and perhaps during yours, that the vision of what counted as death was much more focused on whether or not your heart was still beating, right?

And then we moved to this concept of brain death. So literally our conception of the line, the end of our lives, starts to change, to the point that you're having screaming fights in hospital waiting rooms between relatives: one group says grandpa is gone, you know, the brain has flatlined, and the other one's going, his heart's still beating, he's alive.
And so, you know, we have these clashes over the line, and they're passionate. It's the same between us and non-human animals, really the same in sort of trying to explain how humans get to have, if they do get to have, a special moral status. And so I thought, well, it seems inevitable that one of those margins is going to be the creation of entities whose existence makes us doubt whether or not they have some of the cognitive aspects of our personhood, the things that we think make us different from a rock or even a chicken. And so that's sort of where the thought started. And as you said, when I first started thinking about this, I thought in terms of empathy on the one hand and ethics on the other.

Right? So the empathy is the flash when I look in your eyes and go, that guy's in pain; I empathize with him. The ethics is as I then try to figure out what my obligations are to the person with whom I have seen a moral connection. And so I was sort of writing about that oscillation, about how our experiences with increasingly able, increasingly capable artificial agents would change.
And I started writing about this in 2010, which is like the Jurassic era in AI time.
Justin Beals: It was science fiction then.
James Boyle: Right. I mean, it really was. Back then, people were very skeptical. And then, and I think this is where your question maybe leads, I realized, well, there's actually another way that we might end up giving personhood.

It's not through this, you know, even though you're silicon, under your shiny carapace I can see the content of your character, you are my brother, this is the next stop in the civil rights movement. Instead, it might be the same way we give personality to corporations, because they are artificial entities.
They are our creations. They are not alive, and we do not empathize with them. In that case, we do it for efficiency. We do it because we want something to be able to sue and be sued, something that can make contracts and so forth. And so the book is basically about the two main roads, rail lines, that we might take into AI personhood.

One of which, as I said, is the ethics and empathy, and the other is this analogy to, or perhaps just straight-out use of, corporate personhood. And each of those, I think, has all kinds of fascinating twists and turns that, at least, I found hard to imagine when I first started thinking about it.
Justin Beals: Yeah. In the opening of your book, you write: there is a line. It is a line that separates persons, entities with moral and legal rights, from non-persons: things, animals, machines, stuff we can buy, sell, or destroy. And I think the corporate thing is really interesting here, right? Because corporations both take on an identity, like, in some ways I view a corporation like Nintendo as having a personality.

I've had a relationship with it for many, many years. But I also know, from the business work that I do, they're bought and sold; they are changed many times over. It is a fringe area, isn't it, James? And it's continuing to come closer and closer to being defined.
James Boyle: I think that's right. And I think the fights in our own society are not about whether or not corporations should have personality in the sense that they should be able to sue or be sued or, you know, make contracts.

I think most of us would go, yes, a society needs to make bets on innovation, and not all forms of innovation or economic activity are going to pay off.

We need a way, in advance, to basically put a neat little package around this particular effort to provide a service or create a new good or do whatever it is, so that if it succeeds, we know who to pay off and how to pay them off, and if it fails, we know what our liability is going to be and what the limits of that liability are. All of that makes, I think, a fair amount of sense. Where I think the next step comes is: do they have political rights? Are they allowed to lobby? Do they have First Amendment rights?
They're certainly not going to be able to vote, are they? Right now, at least, corporations can't vote, though they certainly can influence how we vote. And so, if you came at it through the sort of empathy side, particularly if you're a sci-fi person like me, from the time I was a very small child, you're like, oh, the wonderful AIs out there, we'll be cruel to them, and we need to recognize their humanity. And I think that makes this appear like a sort of classic liberal issue, the next frontier of the civil rights movement. If you come at it through the corporation side, you're like, oh my God, is this Citizens United on steroids?
Are these immortal, superhuman entities with no ethics, which are now going to do everything that we worry about with corporations, but even more so? Then suddenly it looks very different. And I think part of the thing I try to do in the book is show how indeterminate it is, how we will end up coding this politically, whether it will be a liberal issue or a conservative issue.

And I think we just understate how difficult it is to figure that stuff out in advance.
Justin Beals: And it is coming closer and closer to bear, because we have some strong voices, I think, in our public sphere that are talking about, especially, you know, advanced general intelligence types of AI models. I think this is still a little ways into the future, technically, personally, from my perspective.

But they're certainly fighting hard for it and trying to gather together the resources to build one, you know, here in the near term. Where are your thoughts and fears at, especially as the silicon-based AIs that we're building seem to be advancing quite rapidly?
James Boyle: Right. Well, I mean, first of all, I should say I'm not a computer scientist, and I don't play one on TV. But I've talked to a lot of people who are very expert in AI and machine learning, some of whom have been in the field for decades: Hal Abelson at MIT, for example.

And you have to notice a few things, which go in different directions. The first is that even people who are deeply knowledgeable in the field, who used to, let's say, set their timeline for when AGI might be at least conceivable, that end time has been coming back towards the present moment quite rapidly, and that's a clear shift. There are scholarly surveys of people who write on these issues, and the sort of expected time horizon has decreased. Now, that doesn't mean they're right. So that's number one.
The second thing, which seems to point in the other direction, is I think there's no doubt that there's an AI boom right now, and probably an AI bubble. Part of the reason is for the AI CEOs to go on and say, we're gonna do everything, we're gonna change the world, and even sometimes to say, this is so good, it's scary.

Because if I convince you that I'm so good I'm scary, you're like, wow, this is so powerful, this is coming. And I think definitely some of the investments will prove to be unwise, and there will be a bubble, and the bubble will pop. So don't think, because I wrote this book, that I'm a person who thinks that all of the hype is real.
And then the third thing is the architecture of the current systems, the large language models, which are glorified chatbots. That's not going to produce consciousness, and adding more data doesn't change the nature of the encounter: this is an entity that understands syntax, how to form sentences, not semantics, what the content of the sentences actually is.

So in some ways, this technology, which has made people finally pay attention to some issues I've been thinking about for a long time, is arguably the worst technology that we could have in order genuinely to probe the limits of consciousness, because you pretty much couldn't do better, if you wanted to design a system that imitates many salient aspects of consciousness while possessing none of it, than our current large language models.
Right? So in that sense, it's kind of like when you have a philosophical discussion and the example the person uses is just a terrible example. In one sense, that's what we've got. But in another sense, it's really wonderful. You know about the uncanny valley?

As the robot gets more like a human being: if the robot looks nothing like a human being, we're fine with it; I love my Roomba. If the robot looks exactly like a human being, then we could deal. If the robot's just somewhere in between, then we freak out, right? Well, I think that our current chatbots are in a cognitive uncanny valley: they possess many of the attributes that I use to determine who's a conscious entity and who isn't, while actually possessing none of the underlying capacities, the ones that I truly care about. And so I think that's what makes some people freak out about them.
That's what makes some people, delusionally, right now believe that their AI is actually sentient. But it's also, I think, wonderfully, a really humbling kind of mirror to point back on ourselves and go, well, we can't believe anymore that sentences imply sentience.

If language was the only thing that made human beings somehow special, well, they've got language now. They don't have comprehension, but they've got language. So where does that leave us? And I would say that the process of the book you asked about is me starting out thinking about HAL or Data or whatever, and ending up thinking that this is basically a book about the same thing as the Victorians dealing with the theory of evolution, a scientific development that threw their worldview into chaos.
And that might be the most interesting thing about AI. It might be the mirror looking back at us that is actually the thing that is most insightful and most important.
Justin Beals: Yeah, n reading your book. I took a little break. I Pulled up I'm concerned about privacy with a lot of the corporate ownership of some of these models, so I run them locally on my local system, and I pulled it up, and I had to ask it.
I said I was using the llama model, uh, version three and I said, uh, do you experience? And I thought I'm going to try this very simple question, see how it responds. And his response to us, while I can simulate conversation, answer questions, and even create creative content, my existence remains fundamentally different from that of living beings.
But then it says, however, it's worth noting that some researchers and philosophers argue that consciousness and subject subjective experience might be emergent properties. And so I both start off saying, Oh, that's a good definition of how the model works and winds up on the backside saying, Are you self aware?
And I think one of the things you point out is that we, as human beings, are a little hackable with these sentence constructors.
James Boyle: Absolutely. I mean, the story I begin with is Blake Lemoine, the Google engineer who started to believe that the AI system he was working on, which was LaMDA, Google's early one, was essentially a ghost in the machine.
Google ends up firing him, and I think he's wrong. But this was not, you know, technologically naive. This was a person who was literally working on the system, actually testing to see whether or not it could be used to produce hateful, harmful speech. And I think it shows that we have this tendency to anthropomorphize.
And obviously there are downsides to that. I mean, it is a human tendency. We put gods in our fields and streams; we give personalities to the objects we deal with. You know, I talk to my Roomba. And to some extent that can lead us into error.
On the other hand, in the book I point out that that tendency to be, as it were, generous with our attribution of personhood is really very important, morally speaking, because the opposite tendency is the horrifying ability that human beings have shown throughout history to depersonalize other groups: to call Jewish people rats, or, you know, in Rwanda, the Hutus referring to their Tutsi enemies as cockroaches.
And this is a familiar pattern, and to be honest it's going on in our current politics right now: dehumanize the other side so that their moral claims are muted altogether. And I think that this sort of generosity of empathy, sometimes to the point of delusion, is really important, because it helps fight that equal and opposite tendency to close the boundaries around my tribe, however one defines that tribe, whether it's by sex or race or language or country or religion.
And so I think that AI is triggering something really deep inside us here, and that it will lead us into mistakes, but some of those mistakes, I think, are more benign in their long-term consequences than others. So that's one of the things that I became convinced of while writing the book.
Justin Beals: Yeah. I liked your statement about the mirror, that some of these things are a mirror that we look back into. And you actually bring forward a pretty poignant point as we manage empathy versus, you know, reality in a way. You write that our culture, morality, and law will have to face new challenges to what it means to be human.
And you point out, and I thought this was a really interesting story, the ending of Blade Runner, where one of the androids is confronting their own creator. It's a creation-and-creator kind of issue. And I think you highlight that that's what we're dealing with in some ways.
James Boyle: The first article I wrote on this was called “Endowed by Their Creator,” after the magnificent words in the Declaration of Independence, which of course the country did not live up to, because it says all men are endowed by their Creator with certain unalienable rights.
And of course we proceeded to deny those rights to a large portion of the population. But, endowed by their creator: we've had fights before about who gets to be a natural or legal person, about where the line is, where life begins and ends. And we've shamefully denied personhood, legal personhood and moral personhood, to members of our own species. So we have this long history, and not all of it is a good one. But now, for the first time, the entities that are going to be troubling us about the line aren't going to be, say, some marginalized group saying, hey, I have all of the attributes you have and you are wrong to treat me as property, but rather entities that we have created ourselves.
And in the book, I create some hypotheticals, some thought pieces, some hypothetical stories. In one of them, a guy invents the Chimpy, a docile, biddable chimpanzee-human hybrid with an IQ of 60, guaranteed not to form unions. And the creator of the Chimpy, on being told that he's denying them the unalienable rights of all beings, says, look, I am their creator, and I gave them no such rights.
And I think that is genuinely a different thing. That's a first, a species first. We have never before created things that share in the very capacities that we think are the most morally significant: those concerned with thought, judgment, reflection, morality, and perhaps even humor.
And right now, we're not there yet. They're just chatbots. But my point in the book is: this is early days. So let's maybe think about that now, before the sides are all entrenched, before it's all become a partisan issue, my tribe against your tribe. Let's think about it now, and maybe we'll learn something, not just about how we should act towards these infinitely strange others that the future will bring us.
Maybe we'll figure out something about how to understand ourselves better. And I think that, to be honest, is already happening. I think we're getting insights about the nature of consciousness because of people thinking about AI architectures. The process goes both ways.
Justin Beals: We've been through this a little bit before, I think. You quote the writer Samuel Butler from 1872.
My favorite quote from him was, “Even a potato in a dark cellar has a certain low cunning about him.” But you had a much deeper quote; I'll kick it off if you'd like. He says, “There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. Assume for the sake of argument that conscious beings have existed for some twenty million years, and see what strides machines have made in the last thousand.”
I think that's powering our fears about all of this.
James Boyle: He does, and he goes on to say: should we not nip it in the bud and just forbid it now? And that line, for those of your listeners who are science fiction lovers, and thus surely lovers of Frank Herbert's great original novel Dune, they'll remember that in that world, that imaginary future, the Butlerian Jihad has banned any kind of advanced machine intelligence, to the point that instead they have to develop, effectively, human computers, the Mentats. And people reading Butler go, my God, was this guy in the 1870s genuinely worrying about superhuman AI? And the irony is that, so far as we can tell from the historical record:
No, Butler wasn't. He was using this as an allegory to talk about the big issue of his time, the enormously disruptive theory of evolution, which had a similarly mind-blowing effect on people's conception of themselves. If you're a Victorian, you think of yourself as at the pinnacle; you stand far above the animals, all because of your lofty consciousness. And along comes this Darwin guy and says you're basically a highly evolved nematode. It's sort of like: nematodes aren't conscious, E. coli is not conscious, and I'm just a fancy nematode? Is that what you're telling me? And so Butler, because that was so deeply controversial in his time, used this allegory of machines possessing intelligence, and, as you mentioned, asked: where does consciousness begin and end? Look at the potato in the cellar, overshadowing one neighbor, sending out a sprout this way. The potato tells us that that's what it wants to do. It tells us by doing, and that is the best form of speech.
Is this not consciousness? And if it's not, then where must the line be drawn? And what he's trying to do is tell his own time, and people dispute whether he's attacking evolution or attacking evolution's critics, but I think what he's clearly doing is setting up this fascinating thought experiment, sort of like the trolley problem, that he thinks will help his readers come to terms with their own intellectual disruption.
Now the irony: we're reading Butler while we're actually facing the very disruption that he imagined as pure fantasy. I mean, as I say in the book, it's as if you discovered that Gulliver's Travels is actually a Yelp review: you know, watch out for the little guys with the ropes, the little Lilliputians; would rate this place zero stars if I could.
You know, this thing that was actually a philosophical musing, a sort of crazy idea designed to probe the very edges of our conception of consciousness, is now what we're living through, and Butler, I think, would smile at the irony of that. Although it is truly a tragedy, I think we can all agree, that Timothée Chalamet's elegant cheekbones never pronounced the words “Butlerian Jihad,” because surely that would have been an immortal moment of cinema.
Justin Beals: “Fear is the mind-killer” is one of my catchphrases for life. And the construct in Dune is brilliant, with the Mentats and the Bene Gesserit and some of the ways he thought about biology as the computational power that was available. As a matter of fact, that's one thing I wanted to touch on in your book.
I started reading your book thinking, of course, about AI like the models we build today, but I wound up feeling that the more pressing ethical issues were the biological ones. I mean, even today I read a new article about a better-designed kidney grown in a pig, with biomarkers giving it a lower likelihood of rejection in human transplant.
This is, I think, going to be the thornier challenge. I anthropomorphize my dog; I think my dog has just as much consciousness as I have, for example.
James Boyle: Yeah, no, I think you're right. And I talk about this in the book; there's a sort of intuitive yuck factor there. To the extent that we think scientists are playing with the species line, they are, and for a very good reason: there are lots of things where it would be unethical to experiment on human beings or use human beings for a particular purpose, but where we think it may be ethically permissible, if many lives are to be saved, for example, to use non-human animals.
Some animal rights activists would disagree. But, for example, some of our best test beds for discovering whether something is carcinogenic, or whether a cancer treatment might work, are so-called OncoMice, which have human DNA spliced in, or are transgenic hybrids, where we are mixing human and mouse genetic attributes precisely because we want a mouse that has the susceptibility to human cancers.
And so if it looks like the scientists are playing with the line between our species and others, that's because they are, right? And that's where your kidney example comes in. Some of us have a reaction like, I don't know why that's wrong, but it's wrong. And in the book I kind of explore that, and I think the intuitive reaction is not enough on its own.
That's not enough to carry the day. People had an intuitive yuck reaction towards mixed-race couples. They had an intuitive yuck reaction towards gay people. They thought it was bizarre and crazy that women could vote, right? So having an intuitive yuck reaction doesn't get to count as a moral argument, but it ought to make us think.
And in the book, I explore some of the ways that we might police, morally and legally speaking, our species line, some of the things that people appeal to. Some of it has to do with genetic similarity. Some of it has to do with, do they look like us? That's almost a sort of anti-idolatry principle: if it looks like me, then it's something, even if it has no genetic kinship to me. Some of it has to do with human remains or human tissues.
Are they being used in a disrespectful way? Some of it has to do with the environmental consequences, or the consequences in terms of public reaction and public willingness to fund certain kinds of science. And some of it has to do with concerns about the entities themselves: whether, if we create something like this, we then have to give the hybrid or transgenic entity greater human-rights protection, or treat the non-human animals' concerns more seriously.
All of those things get bundled together into this sort of roiling mass of discontent and concern. And what I try to do in the book is say: look, don't have just that reaction. Let's pick apart each one and look at them one at a time, to see which of these concerns you personally, and I'm not trying to speak for you, my readers, take more seriously.
I want this to be not a what-to-think-about-the-issue book but a how-to-think-about-the-issue book. And I believe that's the great contribution of the humanities to our lives: to offer that kind of how, to think through things critically but also empathically. And hopefully some people will agree that I did that with both non-human animals and transgenic species.
Justin Beals: Yeah, certainly for me, it's easy to think about the computer science side of a general intelligence. I think it's been more prevalent in popular culture, the idea that we built a machine that looks a little bit like us. But I actually think the biology situation is much more hackable, and when I think about the opportunity for innovation, that's the thing I'm actually most interested in.
After reading the book, I'm more interested in our society having these discussions, because I think it's sooner to happen, and with greater potential benefit for, you know, life care, or actually, I think, exploration too, right? I think some of our concepts about building machines to go explore are going to find limitations over the centuries to come.
James Boyle: No, I mean, I think that, you know, hacking our own species, in some sense our species has been slowly hacked. We all spread out from Africa, and genetically there are divergences due to things like melanin and so forth.
And we respond to diet. If you look at the prevalence of dairy intolerance, you can see it versus the cultures that came to live with cows and so forth. So we've been doing that sort of thing as we evolved, but we're now doing it not through a gradual process of evolution; there's the potential for intervening between one generation and the next.
And there, I think, there are a great number of ethical concerns. There's also a vast amount of money sloshing around, and there are also countries which don't have the same kind of regulatory apparatus that the United States or Great Britain has. And there is the possibility, using CRISPR or any of these other techniques, well, everybody wants the best for their kiddies.
And if you can, you know, give your kid fast-twitch muscle fiber reaction, increased height, and maybe greater resistance to early-onset Alzheimer's, parents are going to start wanting that. And so we actually have to start thinking about the species line as something that is potentially malleable, and then, if we want to say no to that, we have to ask what our concerns are.
Are we worried that, effectively, this will dampen our ethical concerns, that this is the first step into a nightmarish world where we do too much, either environmentally or ethically, because each step seems so trivial? Should we be able to do this at all? I mean, if you're a theist, you may believe that the attributes of human beings were set by an actual creator.
And you may think that it is not just hubris but original sin to mess with them. So to say that these concerns will be heated and confused and will involve a lot of shouting is an understatement.
Justin Beals: That's true. And to your point, the book helps us with the how: what aspects we should be considering when we think about what's going to happen.
And I also appreciate, and agree with you, that the time is now. We're certainly starting to think about corporations in these ways; how are we going to think about these other things that we're inventing?
James, I'm very grateful for you joining the podcast today and sharing your expertise and your work so deeply. Certainly, we'll have links to the book and more, because there's a ton of other work that you've produced over the years. Thank you for joining us.
James Boyle: Well, I really enjoyed the conversation. Thanks to you and to Maria for making this such a pleasant and smooth interview. I enjoyed it very much, and I look forward to seeing the final result.
Justin Beals: Excellent. All right. Thanks, everyone on SecureTalk. We'll be back with you in a week.
About our guest
James Boyle is the William Neal Reynolds Professor of Law at Duke Law School, founder of the Center for the Study of the Public Domain, and former Chair of Creative Commons.
His latest book is The Line: Artificial Intelligence and the Future of Personhood, published by MIT Press in 2024. He is also the author of The Public Domain and Shamans, Software, and Spleens, coauthor of two comic books, and recipient of the Electronic Frontier Foundation’s Pioneer Award for his contributions to digital civil liberties.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.