The Algorithmic Mirror: Reflecting data's role in modern life

August 22, 2024

Ever wonder how data invisibly shapes our world? Or what the TikTok controversy really reveals about global cybersecurity threats? The SecureTalk episode "The Secret Life of Data: Navigating Hype and Uncertainty in the Age of Algorithmic Surveillance" dives into these questions and more with authors Aram Sinnreich and Jesse Gilbert.

Highlights include:

1. The real implications of the TikTok ban, examined through a cybersecurity lens.

2. Unpacking how our digital habits are influenced by algorithms we seldom understand.

3. Exploring avenues for ethical data management and the role of individuals in data stewardship.


Join us for a deep dive into the interconnected world of data, security, and societal transformation. Your thoughts on reshaping our digital futures are welcome!


Book: "The Secret Life of Data: Navigating Hype and Uncertainty in the Age of Algorithmic Surveillance": https://mitpress.mit.edu/9780262048811/the-secret-life-of-data/


Full transcript

SecureTalk: Aram Sinnreich and Jesse Gilbert

Justin Beals: Hello, everybody, and welcome back to SecureTalk. This is Justin Beals, your host. I'm very excited about our conversation today. Reading this particular material brought back a lot of memories of my career and my work in technology, and it's just an exceptional topic. I think it drives a lot of interest around security, and we have two amazing experts with us who have written the book we're going to be discussing today.

So today we're really excited to have Aram Sinnreich. He is a professor and chair at American University, and he has a PhD in communications from the University of Southern California.

Also joining us is his co-author, Jesse Gilbert. Jesse is a transdisciplinary artist who is the founder and principal at Dark Matter Media LLC. He has an MFA from CalArts. Together, they wrote "The Secret Life of Data." Welcome, both of you, to the podcast today.

Aram Sinnreich: Thanks so much.

Jesse Gilbert: Thanks for having us. 

Justin Beals: It's my pleasure. I really enjoyed reading the book, and I think one of its story arcs is the timeline of data as we've probably experienced it as technology professionals.

And so I was a little curious: what prompted you to consider this subject and write this particular book?

Aram Sinnreich: It began as an aside in a conversation between Jesse and myself. The two of us have been friends since our early teens, and we've been basically talking about the role of technology in society and culture ever since then. That was one of the subjects that brought us together, and we often shoot the breeze about new tech that's coming down the pike or some tech-oriented news that's in the headlines. I don't remember exactly what the prompt was, but there seemed to be a bunch of stories about the second-order consequences of data: not what happens once it's absorbed and collected and analyzed, but the unexpected secondary consequences that come after that.

And we were like, huh, somebody should really come up with a term for that. And then the little light bulb went off, and we said, oh, that somebody should be us. So we wrote an academic article where we talked about a lot of the ideas that ended up in the book. It was called the carrier wave principle, and it was published in an academic journal in 2019.

And it's kind of entered into the tech-scholar lexicon. We also figured out, as we were working on that article, that the implications of this idea weren't merely academic; they would affect everybody's lives for basically the rest of time, as long as we live in a data-saturated, network-permeated society.

And so we decided we wanted to popularize the idea, and we reframed it as the secret life of data. We wrote this book in order to bring all the people who aren't tech professionals and are not policy wonks into the conversation too, so that they can feel like they have a seat at the table and play a role in shaping how tech is going to change society for the rest of their lives.

Justin Beals: Jesse, as someone on the artistic side, maybe the expression of what's happening with us culturally, this has got to be a different perspective for you as you came in to co-author.

Jesse Gilbert: Well, you know, it's interesting that you say that. Of course, that is true on one hand, but actually, a lot of my artistic practice centers around data and certainly is very technology-centric.

So it's actually a pretty natural extension of my interests. And as Aram was saying, we've been engaged in a dialogue about these topics for decades, really. Our careers paralleled the growth and popularization of the internet, and we've both made careers providing expert input into a variety of tech-centric fields.

My business is a tech consultancy in which I produce artworks, but I also work on large software-centric installation projects. A lot of the concerns informing the impulse to write the book were actually very central to both of our practices. I think the overlap, and the different perspectives that we each bring, made the book stronger than it would have been if either of us had written it alone.

Justin Beals: Yeah, Aram, you alluded to this a little bit: the title, "Secret Life," I think points out how unaware we are of the impact data has on us personally, professionally, and collectively. I'm curious, having invented some of the tech we work with and the way we consume it: what do you think is contributing to the lack of understanding on the consumer side of what they're giving up and what the data means to the companies or organizations that are collecting it?

Aram Sinnreich: That's a great question. I don't think it's a grand conspiracy, but I do think it's largely intentional, and it's intentional for a variety of reasons. One of the reasons is that there's a kind of inverse linear relationship between the usability of a system and the amount of power it gives to the end user to determine what happens to data within the system.

So Apple, in popularizing everyday computing devices, did a lot of work and borrowed a lot of design sensibilities from post-war German firms like Braun to hide all of the works inside a candy-colored shell and to tell people: don't worry, everything works just fine; you don't have to think too hard about what happens after you interact with this machine. So part of it is that, and that's neither altruistic nor malevolent. It's just in the interest of usability, for better and for worse. But I also think there's a more sinister dimension to it, which is that data systems are, after all, just social systems.

And social systems have consequences that affect the quality of people's lives, and sometimes whether they actually live or die. Those kinds of consequences we expect to carry liability: somebody hits you with a car and breaks your leg, you are going to want to take them to court, punish them, and recover your losses. But if an algorithm driving a car does the same thing, it's harder to establish a chain of liability. That's a very obvious example, but it's true of all the data systems we invite into our lives right now: they're all passing the buck for someone. And so the obscurity of data systems serves a function of protecting vested interests who might otherwise have to pay the price for the social impact of their inventions.

Justin Beals: I think you're exactly right. There's an intentionality to it, but not a conspiracy. It's also hard to express what's going on with the data sometimes. And I feel like sometimes there's an almost self-imposed ignorance: we want to feel individually autonomous, able to make decisions outside of the context in which we live. But Jesse, you must find, especially in your work, that that's impossible, right? Like we are machines inside our context on some level.

Jesse Gilbert: I mean, we take great pains within the book to clearly differentiate between the concept of data, in its raw computational form, and information, which is data evaluated in a context.

And then there's also the concept of knowledge or intelligence within that, which is essentially actionable inference taken from the information derived. What's interesting to me, and I think part of the underlying premise of the book, is that data itself may have been acquired in one context, but it generally sits in this quote-unquote neutral place and then gets evaluated in a variety of contexts.

So data may be relevant to an insurance company, or to a law enforcement agency, or to someone evaluating traffic patterns in an urban environment. These are all very specific contexts in which that raw data might be analyzed and applied. And what's interesting is that many of the times we're really talking about the secret life of data, we are talking about the moment at which data is transformed into information and used to make decisions.

That data itself then also has its own kind of afterlife, right? It has a longer-term life than just the original application. A lot of what we're trying to point out in the book is that with the advent of global networks, cheaper and cheaper storage, and the ability to archive and warehouse data at a scale we have never really seen, we will have future applications of that contextual information gathering that we cannot even predict at this point.

Justin Beals: Yeah, one of the big epiphanies for me in reading the book, and the way it really struck me, was how much I had done this myself; I had been a progenitor. I don't think I'm quoting exactly, but it felt like the discussion was that there's no end to the data we can generate on top of the data we generate.

We're constantly taking a small subset of data and metastasizing it into a larger set of metadata, as you point out in the early chapters. But then it gets very mushy. Is there much difference, or do things just go full circle between data and metadata? There didn't seem to be as clear a distinction between the two concepts as maybe there was in the past.

Aram Sinnreich: No, I think that's very true. And part of the reason is that when metadata originated, they were used in libraries to keep track of books and other archived materials. But now metadata don't just keep track of books and artifacts; they keep track of human beings and human social systems, right?

We use metadata when we're using a dating app. We use metadata when we are hunting for the best car loan. All of our interactions with the social institutions we rely on daily, and with the cultural forms we create and consume, are mediated through metadata. Our profile on Spotify is metadata.

And because the subject of metadata has switched from being about objects to being about people and cultures, it's much more generative than it used to be, right? There's only so much new data you can extract from phone numbers. You could turn them into a phone book and then have metadata about the phone book, right?

But if you're abstracting metadata about people's music-listening habits, that generates all kinds of new insights, which become really valuable data about the regions where people live, the age cohorts they're part of, their ethnic identities, and a gazillion other things that Spotify has fields for in their databases that we're not even aware of when we use them. This is skipping around a little bit, but I'm so fascinated when I'm browsing through Netflix and it gives me some micro-micro-genre that it's decided I like, you know, edgy neo-noir with a sci-fi twist and a comedic angle, and that'll be a whole subset of shows it thinks I want to watch.

When we experience that, we see the shadow ontologies of the algorithm presented back to us, and we get this distorted mirror image of how the algorithm sees us. In the book, we call that algo-vision, which is something we can talk more about.

Justin Beals: Yes, absolutely. And we have jumped the shark a little bit as a society, right? The algorithms are generating new data on top of the old data, and that drives whole new swaths of data. I was bemoaning with a friend recently the state of the popular music we enjoy, and we came up with this phrase: that which cannot be categorized cannot be streamed, essentially.

And without metadata, it is impossible to find the things you might love, as opposed to stumbling onto them almost randomly, or with a cultural affect in the moment. One thing you quote that was really intriguing was from Bruce Schneier, the Harvard-based security technologist, who said that metadata equals surveillance data.

It's a pretty scary way to perceive it, but as you point out in the algo-vision example, metadata about you is surveillance of what you like and don't like, what you're doing, and what you may like, right?

Jesse Gilbert: Yeah, I think that's one way to think about it. One of the things that was an emergent finding as we were researching the book (we should point out that the book contains quotes and excerpts from dozens of interviews we did with experts across a variety of tech fields)

as we did those interviews, was that the concept of a forensic approach to culture, a forensic approach to computation, really seems to have permeated quite a number of fields. We can think of that as data analytics, or we can think of it in a law enforcement context or in a cybersecurity context.

There are a lot of respects in which the forensic mindset has become almost the default mindset for people within computation, which I don't think was always true. Hackers really were in almost a separate category. Now it's not just the derivation of the metadata, but the correlations between different data sets that are used, for example, to try to profile someone's behavior or spending habits, or to look for weaknesses in their file management or communication systems. All of those activities bring into the mainstream this idea of forensic reconstruction of truth, reconstruction of a model that we can use to evaluate a person or an action.

It was quite surprising to me, as we wrote the book, to see that this has really become the lingua franca of computation and doing business on the internet. That really wasn't something I think we would have predicted at the dawn of the computing age, when programmers were working in much more of a self-contained system.

Again, we could think of that as a network effect, right? If we look at the scaling of the network into this global system, you're talking about things that actually are fairly predictable. But it's a shift in the framing and the way we think about the risks that we're taking,

the exposure that we might have ourselves, but also create for others within our network. That's maybe another level of the conversation we are trying to push in writing the book.

Aram Sinnreich: Just to circle back to your question, Justin, there are many cases in which metadata is literally surveillance, right?

The most famous is the National Security Agency's longstanding practice of harvesting telephony and internet communications metadata from American citizens without a warrant. And just last week, Congress and the President renewed the FISA law that putatively allows them to do those kinds of things.

The line from the intelligence community has always been: we're not spying on you. We're not listening to your phone conversations or reading your emails. We're just looking at who you called and for how long, who emailed you, and who emailed them. As it turns out, the distinction between data and metadata in cases like that is really blurry, because the NSA collects literally trillions of communications records from American citizens every year.

And from that, they're able to infer pretty much exactly what we are actually saying to each other, because the metadata provide enough clues to figure out who is a coworker, who's a friend, who's married, who is stepping out with a side piece, who is shopping for guns, who's considering getting an abortion, who is a drug user, right?

All this sensitive information is immediately evident from the metadata. And on top of that, it turns out it's very easy to de-anonymize metadata with a tiny number of clues. This has been shown over and over again by information scientists since the 1990s. In a very real sense, the NSA is surveilling us and the intimate details of our daily lives when it collects metadata about our communications.
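To make that concrete, below is a minimal sketch of the classic linkage attack those researchers described: joining an "anonymized" dataset to a public one on shared quasi-identifiers such as ZIP code, birth date, and sex. Every record, name, and field in this sketch is invented for illustration.

```python
# Hypothetical linkage attack: re-identify "anonymized" records by joining
# them to a public dataset on quasi-identifiers (ZIP code, birth date, sex).
# All records and field names here are invented for illustration.

anonymized_records = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "98101", "dob": "1980-02-14", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe",   "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "John Smith", "zip": "98101", "dob": "1980-02-14", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def link(anonymous, public, keys=QUASI_IDENTIFIERS):
    """Attach names to 'anonymous' rows whose quasi-identifiers match a public row."""
    index = {tuple(row[k] for k in keys): row["name"] for row in public}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anonymous
        if tuple(row[k] for k in keys) in index
    ]

for match in link(anonymized_records, public_voter_roll):
    print(f"{match['name']} -> {match['diagnosis']}")
```

The notorious finding from that 1990s research is that these three quasi-identifiers alone are enough to uniquely identify most Americans.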

Justin Beals: Yeah, it's an area where you spend some time discussing the statistics of how few pieces of information you might need to identify an individual or an activity. And certainly, in consultation with customers and colleagues of mine, it's a tough conversation to have. I've had it on a number of occasions where they say: oh, well, we're anonymizing the data so we can build the model, and therefore that's okay.

But I question whether anonymization is possible. And even when we get into things like, oh, we tokenized or encrypted the database: we've recently been looking at the issues for the one-pass team, and when this data gets exfiltrated, even if it's encrypted, they have plenty of time to find a way to decrypt it or understand meta-information about that data that could be really dangerous, right?

Aram Sinnreich: Absolutely. And that's one of our key points: the more time, computational power, and distance you put into it, the more knowledge you can ultimately extract, regardless of how encrypted it is or how anonymized it is. And we haven't even talked about the quantum cliff yet, when computer architectures change in a way that will allow them to rapidly decrypt essentially everything that was encrypted before that point.

Justin Beals: Yeah, it's funny. I was thinking about the quantum thing as I was reading your book, and I felt like it's not that big a deal, because it's happening today without quantum computers, right? Computers are getting powerful enough that there are decryption methodologies available to us even with standard computing practices.

Aram Sinnreich: I think that's right. The insights from decryption will probably dribble out rather than be switched on over the course of one day, but I do anticipate that in the next 10 years there will be several instances of massive data dumps of information previously assumed to be completely inaccessible.

Justin Beals: Excellent. I wanted to talk a little bit about a concept in your book, null data space, and this is not something I had considered myself, so of course it really struck me. I found the power of null data space really intriguing, and I'm wondering if you, Jesse or Aram, could highlight examples of how the absence of data in a sea of data stands out as important information.

Jesse Gilbert: Sure. There are several examples that we cover in the book, one of them being the use of mapping, right? Any kind of satellite mapping, and then the erasure of sensitive sites from publicly available maps, which in a sense gives away their existence, right?

So this applies to military sites. I can't remember the country we were specifically focusing on in the book, Aram. There was a...

Aram Sinnreich: I think it might have been Russia. It was the index maps. Exactly.

Jesse Gilbert: That's right. And the interesting thing is that what you were speaking about earlier, with regard to de-anonymization, is actually applicable here, because we're really talking about pattern recognition and correlations, right?

The absence of data within a data set may indicate a larger pattern, which itself is fairly easily detectable and understandable, because you may understand the point of view, or you may understand the purpose of that silence within the data. Mapping is the clearest example because it gives us such a physical metaphor: this place doesn't exist on the map.

Well, actually, no, I can see it right here. You haven't identified it, but it's pretty clearly something; if you've erased it, there's a reason why you did.
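The map example generalizes: a deliberate blank in an otherwise dense dataset is itself a detectable signal. As a toy illustration (with an invented grid of map tiles), locating the "silence" takes only a few lines:

```python
# Toy illustration of null data as signal: in an otherwise complete grid of
# map tiles, the one redacted tile is trivially locatable; the erasure
# itself points at the sensitive site. The grid is invented for illustration.

map_tiles = [
    ["forest", "forest", "road"],
    ["road",   None,     "town"],   # None marks a redacted tile
    ["forest", "road",   "town"],
]

redactions = [
    (row, col)
    for row, tiles in enumerate(map_tiles)
    for col, tile in enumerate(tiles)
    if tile is None
]

print("Redacted tiles (likely sensitive sites):", redactions)
# -> Redacted tiles (likely sensitive sites): [(1, 1)]
```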

Justin Beals: Yeah, absolutely. And one of the things I thought about as you were talking about null data was actually the value of no data.

Especially with the raft of ransomware attacks that we've had. Traditionally in the security field, our concern about data was more about leaking data, personally identifiable information. But I think what many of the threat actors in the space have realized is that losing your data is almost more scary than having it leaked.

Aram Sinnreich: One of the sad ironies of "The Secret Life of Data" is that just as we can't predict when and how and for whom our data will have an afterlife, we also can't predict what's going to stick around and what isn't. And there's no time frame from which the distinction is obvious. There are all kinds of examples of data that were long believed to be lost, for centuries or even millennia, and that now, in the 21st century, thanks to new forensic techniques, are being rediscovered.

Whether they're networks of ancient Mayan pyramids, or letters printed on the page of a palimpsest that got written over by medieval monks, or the hard drive of a corporate spy who thought they'd covered their tracks, right? So the loss of data is a functional problem, not an existential one.

What we really mean when we say that data has been deleted, or that it's disappeared, or that it's not retrievable, is that you don't have access to it at the moment, given the tools at your disposal. But you can't guarantee that nobody else ever will for the rest of time, just because you don't have access to it.

So then it becomes a question not so much of the presence or absence of data, or the volume of data; it becomes a challenge of data flows: who has the permission and the power to access a certain data point or data set at a certain point in time. And ransomware attacks are all about removing that power from the people who believed they had it, and demanding that they share some of their power, in the form of currency, in exchange for getting it back.

Justin Beals: Wow. Aram, I had not thought about it from a power-struggle perspective, but it's really a comprehensive way to see it: you have the data and therefore the power; I take away access to the data, and then I want you to share with me your wealth and power.

Very quid pro quo. Yeah. Sounds like a really interesting art piece.

Jesse Gilbert: Well, there's also the tacit acknowledgement of data, or access to data, as power, as something with clear economic value, right? And so the datafication of society, the over-reliance on computation as the means of communication and transaction for every industry,

which was the dream of computer scientists at the dawn of the personal computing era, right? This concept of ubiquitous computing. The idea that there would be a sort of economy of disruption of that flow, and of its intermediation, is very clearly predictable, but not from the first-person perspective we might have had

when we were thinking about starting a career or building a business in this information economy.

Justin Beals: Yeah. And it really resonated with me in my personal career and experience, because, and I'm going to take us literally through the thread the book takes, which made a lot of sense: I started out writing code, building websites to move information out to those who wanted to learn about it.

Then we wanted to collect information; I remember developing early bill-pay systems, where we were collecting personal information so customers could pay their bills. Then came creating metadata about that information: we started developing knowledge assessment systems in the education space, taking what students answered on questions and scoring them along vectors of intelligence. And then we moved to developing algorithms on top of that information to make predictions about even more people who had never given us data.

And it led to, I think, one of the challenges we have today, which felt like the top of the pyramid as we were climbing it together in your book: this concept of algo-vision and the development of algorithmic data. Your book highlights, and I've been aware of this, that the data and the algorithms we're building (and I include myself in that community) in this highly digitized world are shifting our perspective of reality. Do you all agree?

Aram Sinnreich: Completely. But with the caveat that every tool we've ever created has shifted our perspective.

Justin Beals: So it's an ongoing invention process, for good or bad.

Aram Sinnreich: Yeah. I'm stealing a little bit from Marshall McLuhan here, but when we invent new tools, we're reinventing ourselves, right? What's endlessly fascinating to both of us, and what a lot of our conversations rotate around, is how rapidly humans adapt. It's like the capacity for all of these technologies was latent in our psyche from the day the human species first emerged. If you took a Neanderthal child and gave them an iPad with Snapchat on it, they'd be vomiting rainbows by the end of the day.

And blocking trolls by the end of the next day. So it's an interesting set of questions about where the hall of mirrors begins. Does our technology reflect us, or are we such a mimetic species that we always adapt ourselves to whatever the technology is? If aliens took over and forced us to live in their buildings and use their technology and live on their planets, would it just be Tuesday at a certain point?

Justin Beals: It is hard to extract ourselves. It's hard for me to extract myself from the work I do, from the algorithms that influence me and the algorithms I might be developing, to get some perspective. I both swim in it and can't be separate from it, and yet I want to view it objectively. We're kind of hindered, yet we have desires on all sides. Right, Jesse?

Jesse Gilbert: Yeah, absolutely. One of the things Aram is hinting at, and it's apparent if you look at the work of a scholar like Sherry Turkle, is that from the dawn of computing, people have been engaged in a kind of anthropomorphization of tech. That is fascinating to me in the current marketplace, right? You have these named entities like Siri and Alexa, but you also have unnamed algorithms that we interact with every day, algorithms that are making recommendations or curating social feeds or interacting with the content we're generating. And we have really started, on a mass scale, to internalize the language of the algorithm as a kind of thinking, tangible presence.

I've been programming since the late seventies, and I remember an era when it would have sounded like a complete pipe dream that this would be adopted across cultures as a way to describe an everyday activity. But gaming an algorithm is a mainstream concept now.

From my standpoint, that is not only a reflection of our tendencies but also a reflection of the fact that influencing our behaviors is actually the aim of a lot of the people developing the algorithms. So I think we can also begin to read back to the actual creators of the algorithms. For many of us who engage with systems like ChatGPT, one of the more enjoyable parts is finding the places within the algorithm where you hit the limit of the system's reasoning, and you get to the point where the author has to speak to you.

The example I give quite often is that I tried to get ChatGPT to analyze end-user license agreements for me. To me, the natural use of an AI system would be to summarize the terms of its own end-user license agreement, and it refused to do it. It refused because of the legal risk, not only of analyzing its own but of analyzing its competitors'.

For end-user license agreements, a lawyer had intervened and written a very interesting canned response. There was a very tangible difference between the conversational tone of the algorithm and that canned text; it's like The Truman Show, where he butts up against the edge of the world, right? And the metaphor collapses.

Justin Beals: Yeah. In my experience building machine learning models, data science systems that do predictive work, you really can only understand them by knowing what data you put into the thing and then how it responds when you bump up against it. You talk in your book about younger folks enjoying the process of gaming algorithms or finding ways around them.

I found that really intriguing, maybe the same curiosity I had with my old Apple computer a long, long time ago. 

Aram Sinnreich: Yeah, the kids are all right. There's something in the adolescent spirit that yearns to break free of social constraints, and when those constraints are levied via technology, they find ways to game the technology.

There are some really great examples. One of my favorites is kids who resist the surveillance capitalism of social media profiles by all logging into the same social media profile. One of them will use TikTok to watch dance videos, another will use it to watch cooking videos, another to watch old movies. And the algorithm gets really confused and can't build an accurate profile of the end user. There are many different examples we talk about in the book of people gainfully gaming the algorithm, and in most cases it's a form of either political or economic resistance.
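As a toy illustration of why that tactic works, consider a naive profiler that labels an account by its dominant watch category; pooling several users' disjoint tastes flattens the distribution until no label clears a confidence threshold. The histories and the threshold below are invented:

```python
# Toy sketch: a naive profiler that labels an account by its dominant watch
# category. One shared account mixing three users' disjoint tastes yields no
# confident label. Histories and the 0.6 threshold are invented for illustration.
from collections import Counter

solo_account   = ["dance"] * 25
shared_account = ["dance"] * 8 + ["cooking"] * 9 + ["old_movies"] * 8

def profile(history, confidence_threshold=0.6):
    counts = Counter(history)
    category, n = counts.most_common(1)[0]
    share = n / len(history)  # fraction of views in the top category
    return category if share >= confidence_threshold else "unclear"

print(profile(solo_account))    # -> dance
print(profile(shared_account))  # -> unclear
```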

But in some cases, it's just having fun. Since Jesse's talking about the late seventies, I remember the first video game that I played was a game called Number Crunchers for the Atari 2600, which I didn't know at the time was based on this other game called Death Race 2000, where you were running over human beings with a stick-figure car. In this one, you were running over numbers with a stick-figure car, and whoever got to 100 first won. It was a very elementary game, but there was a bug in it. And, of course, cartridges were unpatchable back then, so once the bug shipped, that zero-day was there every day, right? If you ran sideways onto the number eight, I think it was, just at the right angle with your stick-figure car, it would get caught on the number and keep counting it every half a second, and you'd win the game. And the only reason I wanted to play the game was to game the game. The actual gameplay itself was boring to me, but, God, did I want to mess with that algorithm. And I was like six years old or something. I wasn't a super genius. This is just how people are.

Justin Beals: Yeah, I think there's a natural curiosity about the machines that are part of our universe, right? Speaking of TikTok, I feel like there's this pregnant moment in the United States with a decidedly data-driven, algorithmic platform in TikTok, and how we feel about that organization and how it's utilizing the data.

You have a section in your book titled "Beware of Geeks Bearing Gifts." In that vein, how do you consider this concept of the banning of TikTok in the United States?

Aram Sinnreich: It's a big subject. Two things that seem contradictory are true at the same time. One is that TikTok is a massive surveillance and disinformation apparatus that has very little air between it and the Chinese government, which is engaged in ongoing antagonistic relations with the U.S., and it would be naive and ludicrous to assume that the Chinese government was not exploiting TikTok to achieve those ends the same way that it exploits other platforms. That being said, American social media companies have also been used by antagonistic powers to surveil and disinform American citizens; very famously, Facebook during the Cambridge Analytica scandal.

And so banning TikTok will not achieve any of the claimed ends of the legislation. It's like there's a flood coming at your house and you close one window; that's not going to keep you dry. What you really need to do isn't to close one window. It's to stem the flood, or the conditions that created the flood to begin with.

So the notion that Congress can come together and ban TikTok, but can't manage to pass meaningful data privacy regulation, is all you need to know about what a messed-up policy this is.

Jesse Gilbert: Yeah, I think the other interesting tidbit here is that it's a ban, or a preemptive forcing of a sale of the company to a company not controlled by the Chinese government, presumably a Western company.

What is fascinating to me is that preemptive legislation was passed in China to make sure that if that ever happened, the algorithm itself would never be transferred, which, of course, is the valuable thing for the company, right? It's again a tacit acknowledgement of the power and value of algorithms.

In this case, quite sophisticated biometric algorithms that are actually tracking autonomic responses to media, to stimuli, at a level that really hasn't been seen, although, of course, Aram is correct that there are parallel companies that have had similar types of tech. And it's not only the power of the algorithms but also the dominance of this platform for a generation. TikTok really is a place where a lot of young people get news and entertainment, communicate with one another, interact with memes, and have a kind of collective culture.

The idea that this would be a kind of preemptive information warfare to determine who can control that platform goes well beyond the rhetoric of geopolitical battles between governments in which it's usually couched. There's really a generational battle happening here as well, and I haven't seen that discussed much in the coverage of the event.

Justin Beals: Very true. And I think it's good to separate some of these things, right? I have to applaud the software; it seems like a very creative tool in a small package, you know? People were building very interesting media. I think that's really fun.

Aram Sinnreich: But the small package, and you know this better than us, is deceptive, right? Because the front end is the small package. The algorithm is not living on your phone.

Justin Beals: Yeah, you're maximizing the use of the hardware that you're carrying around, right?

Aram Sinnreich: Yeah, all those sensors. Jesse's right: Apple and other device manufacturers have been building that stuff in very deliberately to create new business models where they would be the gateways for biometric data. So they deserve some of the blame, and some of the credit, as well.

Justin Beals: But if we really wanted security, or just a way to help people understand what it is they're consuming, like a label on a nutrition product, then we would pass a data privacy act. This seems to me more like a digital cold war with China than anything else.

And maybe there is one; I'm sure if I spoke to folks in the intelligence community, there would be good reason for concern. But why not solve the problem more fundamentally for ourselves as a culture?

Aram Sinnreich: Couldn't have said it better ourselves. 

Justin Beals: Well, we're certainly going to see how it plays out. I was actually reading the American Data Privacy Act in its latest form the other day, reviewing it for a couple of clients.

What was interesting is that the current draft says a lot about companies that design algorithms having to publish information about those algorithms: what was the methodology of the design, what data came in, how do they test it, for example.

Aram Sinnreich: Yeah. The accountability policies have really been moving forward, and in recent years that's been driven more by the executive branch than by the legislative in the U.S. You see organizations like the FTC especially, but also the White House and the Office of Science and Technology Policy, saying: we actually know how to audit algorithms in a meaningful way, and the onus is on you, the operator of the algorithms, to do that for consumer benefit.

And, of course, the problem with it not being law yet is that if a different administration staffs the federal agencies, they can turn on a dime and make a different decision. That creates an environment in which business owners like yourself, and your clients, have to hedge your bets; you don't know what the policy is going to be in two years. So maybe you'll take one step, make a feint towards algorithmic accountability, but you're not really going to invest heavily in it because it might not pay off for you.

Justin Beals: The other thing I think you point out, Jesse and Aram, is that we have this historical experience with Facebook and Cambridge Analytica and the Russian government and democracy.

And I love how in the book you tie together the impact that some of these decisions can have on our politics and well-being. The thing that really struck me about that experience, and what gets me so nervous about the future, is that, as y'all point out, we've long been trying to influence each other to gain power.

But now there's an ability to do it at scale and with a new form of precision, right?

Jesse Gilbert: Absolutely. The building blocks for a quite dystopian future in terms of political campaigning have really already been laid down, certainly by Cambridge Analytica, given its origins. And it's quite diabolical and also quite brilliant.

It's a recognition of the cultural conditions as well as the technological conditions, right? Citizens United is just as responsible for Cambridge Analytica as Facebook's advertising policies, as are the types of emotional profiling that were done via personality tests on Facebook.

The combination of those conditions is what set the stage for what we saw with Brexit and with the 2016 election in the States. What we're looking at right now, staring down the barrel of this whole new field of generative AI, is that messages can be crafted, disinformation can be crafted, that is not just a blunt instrument but is actually targeting individuals and their tendencies and their psychographic profiles,

with quite convincing media that is very difficult even for experts to determine to be deepfakes or otherwise, right? I think we're going to see this kind of weaponized and customized information warfare at scale in this upcoming election, if we're not already seeing it. We've seen a few examples that have been detected, and I'm sure there are many others we don't know about yet, or that are in the works for this coming fall.

Justin Beals: Yeah, I think it's a given now. We almost need to become media-savvy all over again; I remember the old yellow-journalism discussion in grade school. Now, not to make so much of this particular episode doom and gloom:

one of the things y'all are careful to point out in the book is that these tools can be used for good or bad. In y'all's vision of the future, how would you like to see these tools used for good? Or what examples should we be pushing forward?

Aram Sinnreich: That's a great question. And there are a number of wonderful answers that we address in the book.

I'd say the most important takeaway from our research for this book was that the biggest change we can make for the better is changing our paradigm of what data and AI are for. The shift is almost akin to a change in painting: at one point in art history there was the invention of what's known as single-point perspective.

It happened at some point in the Italian Renaissance, when all of a sudden every building and every object within the frame was oriented towards an idealized viewer standing six feet away and looking at the canvas. Before that, if you look at medieval-era paintings, you look in the upper left-hand corner, and that's what happened yesterday. You look in the upper right-hand corner, and that's what happened in heaven before the beginning of time. As you traverse the canvas on your own track, you navigate your own story through it. Data can operate the same way. Right now, we're very much stuck inside this paradigm of a single fixed perspective.

When you look at Google Maps, you are looking down from one spot above. When you decide what movie you want to watch on Netflix, or who you want to date on Tinder, it tells you this person is a 92% match or that movie is a 97% match. It's telling you very specifically that you are a fixed object and that what it's offering you is tailored to your perspective.

But we profiled all of these different projects that actually use what we call triangulation to create a multi-perspective, multi-stakeholder data system: one that uses inputs from multiple sources and then presents the data in a way that tells different stories to different users, and overlays conflicting stories in a way that gives the user the power to navigate between what they think is relevant or true and what isn't.

Some of that is about breaking this illusion of certainty that statistical algorithms present themselves with, when in fact, by virtue of being statistical, they're all about probabilities, not certainties. And some of it is about opening up the black box of the algorithm so that people can tinker with it and play a deeper role in changing the presentation of the data than simply submitting a photo or a piece of media into its inbox. So that kind of triangulation and collective understanding of data through an algorithm is the utopian alternative we see to the black-box, fixed-point-perspective algorithms that are currently ruling our world. But there are a bunch of other great examples too, which I'm sure Jesse can speak to.

Jesse Gilbert: To jump onto that a little bit and extend it: one of the concepts that we explored quite a bit in the book is this idea of crowdsourced stewardship, right?

It's another network effect, where you have emergent communities that cohere around specific issues or specific shared concerns. They may not be in the same region of the world; they may be a global network. But there are communities of affinity and care that emerge around a shared set of goals, right?

You see that, for example, in a lot of the work being done in citizen science, where people are collectively contributing to expanding our knowledge base about migratory patterns of insects or birds, or where people are submitting data from dash cams and doorbell cameras on the trajectories of meteorites entering the Earth's atmosphere.

These are really augmenting what is possible within science, because scientists have a limited purview, limited resources, and limited time to do that type of data collection and analysis. We're actually seeing significant gains within basic materials science and the biological sciences in understanding patterns in data that could only be accessed by the creation of these communities.

And I think that points to another latent desire that people have. Once we are oriented towards these kinds of common goals, we have the technologies to serve them: we all have cameras in our pockets, and we may have cameras on moving vehicles or geotagged to our front doors, right?

There are obviously dystopian potentials in all of those technologies: they might be used against our will, or to compel evidence in a criminal case, or whatever it might be. But there are also really positive things that can come out of them, which challenge us to articulate what our actual goals are with the tech and then to act accordingly. The book is really trying to point out that, with some enlightened guidance and mitigation of some of the harms, we can try to emphasize some of those collective goods

that we would all like to see in the world.

Aram Sinnreich: Yeah. And it doesn't have to be a large-scale project like Jesse was talking about, right? You can be kind or cruel through technology at every scale, from the macro to the micro. There are forms of kindness that have to do with making the tech in your home more accessible to somebody who has accessibility issues, or toning down the level of automated surveillance if somebody visits you who's an undocumented immigrant or who has an important secret to keep, right? The more aware we are of the ambient computing and networks in our lives, the more empowered we feel to be kind to one another by playing a role in shaping those information flows. It all goes back to what we were talking about before: it really is about power, and determining who gets to decide where information comes from and where it goes.

Justin Beals: I'm struck with an idea here: maybe one of the pillars of ethics we need to hold on to as a broad community is to share that power, both individually and collectively, as opposed to storing it. It's hard for us to see that through a zero-sum perception of our communities, but it doesn't have to work that way, right?

Jesse Gilbert: Absolutely. 

Aram Sinnreich: And this relates to a lot of the work that Jesse and I have done in the past, but all of our legal systems and our social norms are based on material scarcity. This is my object; I control this object. And that's important, because if it's my food and you take it, I won't be able to eat it.

And if it's my home and you move into it, maybe there won't be room for me. But information doesn't behave that way, right? People don't suffer when information is shared, at least not because of the sharing itself, not unless there's some malicious external factor at work. And so we really have to reimagine not only our legal system but our ethics around this notion of plenitude and collaborative information sharing.

There have been a lot of efforts, even before the 2020s; think about some of the debates that surrounded peer-to-peer file sharing, or some of the more optimistic imaginations of what blockchain could do. Those were very much about creating a rhizomatic technical network that everybody could come into on equal footing.

We're not technological determinists, so we don't think that just building the tools will make that happen. But the existence of the tools allows us to explore different ways of building our society together, with potentially democratizing outcomes.

Justin Beals: I am such a big fan of the book and the work both of you are doing, Aram and Jesse.

I really appreciate this conversation; I feel like I could have kept you for another hour, so maybe we'll do a part two sometime in the future. I highly recommend the book to all our listeners. It's an amazing read, and I think it really sets the stage for what we're doing collectively in security with the technology we build and how we need to treat each other.

Jesse and Aram, thank you so much for joining the podcast today. We really appreciate it. 

Aram Sinnreich: Thank you, Justin. And for my part, hearing a security expert like yourself say that is incredibly validating, because if we pass the smell test with you, that means we did something right. So thank you.

Jesse Gilbert: Thank you so much for having us. It was a great conversation. 

Justin Beals: All right. Thanks, everyone, for tuning in to SecureTalk today. I hope everyone has a wonderful day.


About our guest

Aram Sinnreich and Jesse Gilbert

Aram Sinnreich is an author, professor, and musician. He is Chair of Communication Studies at American University. His books include Mashed Up, The Piracy Crusade, The Essential Guide to Intellectual Property, and A Second Chance for Yesterday (published as R. A. Sinn).

Jesse Gilbert is an interdisciplinary artist who explores the intersection of visual art, sound, and software design at his firm, Dark Matter Media. He was the founding Chair of the Media Technology department at Woodbury University and has taught interactive software design at CalArts and UC San Diego.

Keep up to date with Strike Graph.

The security landscape is ever-changing. Sign up for our newsletter to make sure you stay abreast of the latest regulations and requirements.