What Matters Next: Decision Making in a Rapidly Changing World with Kate O'Neill

February 4, 2025
In this episode of Secure Talk, host Justin Beals welcomes Kate O'Neill, a passionate tech humanist dedicated to crafting technology solutions that genuinely prioritize people. Together, they explore the key themes of Kate's books, “Tech Humanist” and “What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast.”

Their engaging discussion shines a light on the power of systems thinking, the significance of thoughtful decision-making in the tech industry, and the vital balance between achieving business objectives and enhancing the human experience. 

This episode is a delightful must-listen for cybersecurity professionals who are excited to navigate the important intersection of technology, ethics, and human dignity in our ever-evolving digital world.

Books:

What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast (2025)

Tech Humanist: How You Can Make Technology Better for Business and Better for Humans (2018) 

00:00 Welcome to SecureTalk: Introduction and Host Background

00:32 Building AI Tools: Structured vs. Unstructured Data

05:30 Introducing Kate O'Neill: The Tech Humanist

07:06 The Value of Liberal Arts in Tech

09:25 Tech and Human Experience: Systems Thinking

12:52 Decision Making in Tech: Incrementalism vs. Radical Change

19:41 Kate O'Neill's Bibliography and Insights

26:02 Amazon Go: Encoding Human Behavior into Machines

30:14 Balancing Technology and Human Experience

31:21 Security Challenges in the Digital Age

32:16 Innovative Solutions with Human-Centric Design

34:02 Measuring Meaningful Experiences

38:48 The Role of Soft Skills in a Tech-Driven World

39:37 Preparing the Next Generation for a Digital Future

42:31 The Pace of Technological Change

45:08 The Importance of Meaning in Decision Making

49:30 Security and Trust in Technology

51:23 A Future So Bright: Optimism and Responsibility

53:23 Conclusion and Final Thoughts

 

 

 

Full transcript

Secure Talk - Kate O'Neill

Justin Beals: Hello everyone, and welcome to Secure Talk. I'm your host, Justin Beals. About seven or eight years ago, I was the chief technology officer at a new startup in Seattle, Washington. We were focused on helping our customers hire the best-fit talent for their open positions. We really wanted to harness the scale of the internet in outreach to find great talent in an expanding talent marketplace, to be able to look outside of, you know, strict university graduation requirements for many of our customers, and also to try and find some of the soft skills that best accentuated an employee's opportunity for success at the business.

We built a lot of different types of products on the timeline. We built a school, we built an online job board with a certain amount of assessment technology, and we built a very powerful AI tool, machine learning-driven data science focused on helping to essentially score and match people that were coming into the job marketplace with jobs that were available.

One of the things that was really interesting when we were building our tooling is that we had a difficult conversation and a couple of different ideas in-house on how to build the best AI tools. The two camps really fit into two models. One was an unstructured data model. Think everything you write on Twitter, or all the social media data that we can gather, your browsing history.

We call that unstructured data; some of it has structure. The other camp favored strictly structured data. This would be more along the lines of a quiz, an assessment, some type of test where we identify questions and box in the answers that are available, and then we're able to score and measure those particular questions.

There's actually a part of data science called psychometrics that allows you to score assessments and build those types of assessments. We measured both of these systems for their accuracy, of course, because the most important way to build an AI model is to focus on the accuracy of the model's predictions.

We found something really interesting when we did this work. With the structured data, we found good correlation with outcomes, and we were also able to assess that data for its biases, making sure the model focused on the attributes that matter and not on things we didn't want to bias the model on, say protected classes in employment like race or gender.
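As an aside, here is a minimal sketch of what that kind of check might look like in code. The column names, the logistic-regression model, and the use of pandas and scikit-learn are illustrative assumptions; the episode does not describe the actual stack Justin's team used.

```python
# Illustrative sketch only: evaluating a structured-data hiring model for
# overall accuracy and for differences across a protected attribute.
# Column names and the pandas/scikit-learn tooling are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("assessment_scores.csv")      # structured quiz/assessment data
features = ["problem_solving", "communication", "conscientiousness"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["successful_hire"], test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Bias check: selection rate per group for an attribute the model should not
# be keying on (e.g., gender). Large gaps warrant a closer look.
held_out = df.loc[X_test.index].assign(predicted=model.predict(X_test))
print(held_out.groupby("gender")["predicted"].mean())
```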

And then, on the other side, with the unstructured data, when we first looked at it, it seemed very accurate. It often worked; it managed to find a fit between the employer's job and a potential applicant. We then did another test, and we said, what if we used a different format of unstructured data, or some other source of unstructured data alongside the initial unstructured data we built the model with, to see what would happen?

And guess what we found? The model changed completely. The unstructured data really built only probabilistic projections that were a best fit, as good as the model could get. But there was no measure of accuracy within the model or an effective prediction; it didn't care. It was more interested in whether or not it could actually find someone that it liked.
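One way to picture that swap-the-source test is sketched below: train the same simple pipeline on two different unstructured sources for the same candidates and see whether the resulting match scores agree. This is a hypothetical sketch; the field names, TF-IDF features, and rank-correlation check are assumptions, not the team's actual method.

```python
# Hypothetical sketch of the stability test: if the model had learned something
# real about job fit, rankings from two different unstructured sources for the
# same candidates should broadly agree.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("candidates.csv")   # assumed columns: social_text, browsing_text, successful_hire

def match_scores(text_column: str):
    """Fit the same simple pipeline on one unstructured source, return scores."""
    X = TfidfVectorizer(max_features=5000).fit_transform(df[text_column])
    model = LogisticRegression(max_iter=1000).fit(X, df["successful_hire"])
    return model.predict_proba(X)[:, 1]          # match score per candidate

scores_a = match_scores("social_text")
scores_b = match_scores("browsing_text")

# A low rank correlation suggests the model is pattern-matching its source
# rather than predicting the outcome you actually care about.
print("rank correlation:", spearmanr(scores_a, scores_b).correlation)
```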

That brings us to a really interesting point with our guest today. A quote that I'll use from the book: “we encode ourselves into our machines”. When I realized what was happening, what we were doing was taking both people's best intentions about representing themselves and employers' pictures of their most desired employees, and giving them an opportunity to find things that seemed to fit but might not accurately represent the outcome you wanted, which was a great employee who could be successful at the job.

Just like we might make a decision about who we hire because they come from a college we know. So the unstructured data operated in a pattern-matching fashion. I've been involved in a lot of things where I'm not sure if the tech we built made humanity better. You know, there's always a question to it.

Do the tools that we're developing allow us to be better to each other, better to ourselves, more honest and transparent and open about the things that matter, the things that we want to prioritize? That is why we had so much fun talking with our guest today. Today, we're going to get a chance to hear from Kate O'Neill.

Kate is known as the Tech Humanist and has been a trailblazer in influencing humanity's future in a tech-centric world. She merges strategic optimism with extensive expertise to create significant change. She's the founder of KO Insights, a strategic advisory firm that enhances human experience at scale, particularly in AI-driven interactions.

And she includes amongst her clients Adobe, Amsterdam, Austin, Coca-Cola, Google, IBM, and the United Nations. Kate has been the recipient of various awards, including Technology Entrepreneur of the Year and recognition in Google's global campaign for women in entrepreneurship. Thinkers50 named her among the world's management thinkers to watch.

Her insights have been featured in the New York Times, the Wall Street Journal, and Wired, and she appears as a tech expert on BBC and NPR. She's a highly sought-after keynote speaker who has addressed hundreds of thousands globally. Kate has authored six books, including Tech Humanist, Pixels and Place, A Future So Bright, and What Matters Next.

Please join me in welcoming Kate O'Neill to the SecureTalk podcast today. 


—-

Hello, everybody, and welcome to SecureTalk. This is your host, Justin Beals. We have an amazing guest, Kate O'Neill, with us today. Kate, thanks for joining SecureTalk. 

Kate O'Neill: Thanks for having me, Justin.

Justin Beals: Yeah, it's a treat. I'm really excited for our discussion today.

So, I'm gonna kick off with something we share a little bit. I think you have a liberal arts degree. You see, my progress in academia was my bachelor's in theater, and then I went professional. So one of my first questions, something I'm always interested in talking with people about, is how a liberal arts education was valuable to you in a technology career.

Kate O'Neill: It's funny that you bring that up, because I've been aware for a long time that, you know, there's a relationship there, and it's something that I've talked about maybe on stage or in a panel discussion once in a while but had never really addressed in any of my books until this latest book coming out this month, What Matters Next, and I'm sure we'll talk about that.

I did actually explore it at the beginning of the book and talked about the idea that, you know, I was a German major with a Russian and linguistics double minor in undergrad. And then my grad work was in linguistics and language development. And that background really had me thinking about languages. And, you know, I had a real passion for music.

I spent years of my life in Nashville, you know, moonlighting as a songwriter in addition to my tech entrepreneurship kind of life. And theater, you know, I also had a background in the theater, which I'm sure plays into being comfortable on stage now as a professional speaker. All of these things, I think, just have this sort of DNA in my work.

I think the other piece of it that's really key, that I think is really important for people to hear, is that it taught me systems thinking, and it taught me to see connections between seemingly disparate fields and ideas. And I think that this has become kind of a calling card of my work, you know, “tech humanist”, like these are kind of seemingly contradictory ideas, or “strategic optimism”.

I had a blog years ago called Corporate Idealist. So I'm always drawn to that sort of seeming contradiction that actually illuminates some greater truth. And I actually find that one of the most powerful ways we arrive at insights is by looking for tensions between ideas, and those insights can help us make better decisions and understand better how to solve problems.

So, obviously, I think that has a great degree of applicability in tech. I think it's invaluable in how we approach tech. We're not just, you know, building features. We're shaping how humans interact with the world, and it's incredibly important to understand, in really dimensional ways, you know, human nature, cultural context, the broader implications of the choices we make in that sense.

Justin Beals: I love that phrase like shaping the experience of humans in the world with tech. I certainly saw that. That was why I was interested in building software because I felt like I could make someone happy in a way or bring a little joy to their day, but you need that ability to humanize the tech, right?

Kate O'Neill: Yeah, yeah, I think so. I think it's really important to think about, you know, I think we'll probably get into a lot of different facets of this, but my sense is that you've got to know what you're doing that's of value, or else there's really no point. What problem are you trying to solve? Because we have problems in this world.

I don't know if your listeners agree, but I would imagine that we're all in agreement that there are, you know, a few problems to solve in the world. So if you ask yourself, what problem am I solving with the technology that I'm using or the product that I'm creating or designing, and the answer is, well, not a very good one, not a very interesting problem...

Then why are you not solving one of the more interesting problems that are out there to be solved?

Justin Beals: Yeah, it does feel like, especially in the last three months, that we're in a little bit of a zeitgeist moment culturally around corporations, wealth inequality, and certainly the influence on politics.

And it does feel like we need a deeper way of communicating about the tech we want to build and why that's valuable for us as a community. 

Kate O'Neill: Yeah, I think so. And you know, one of the funny things about What Matters Next and the work I've done around being an executive advisor, speaking to audiences of leaders, is that when I talk about this with people, say, at a cocktail party, just meeting, you know, strangers in New York or something, someone might ask what I do for a living.

And if I describe the idea that I'm helping leaders make better decisions using technology to create more aligned experiences for people, so that their business can succeed and humanity can thrive alongside it, a lot of times the response from people is, do you really think leaders care about that sort of thing?

And I'm like, oh, I understand where it's coming from, but that's a really cynical reaction. I think everyone cares about that sort of thing. I think, you know, we're split on our incentives. People who are in executive roles and, you know, high leadership roles are often beholden to incentives that feel at odds with doing the right thing or, you know, creating better experiences for humans.

But that doesn't necessarily mean that they don't care. And I find that often people will approach me after a keynote and say,  “This is just what I needed to hear. You know, this is just the vocabulary that I needed to go and have this conversation with my board or to, you know, gather my team and have this conversation”.

So I think that we're doing a disservice to people who are in positions of decision-making authority and power, and to everyone, honestly. I also want to make clear that decision-making happens all through life, right? Like, decision-making is happening through every organization at all levels, not just at the very, very tippy top, right?

So I think it's important that we don't automatically assume that people who are in positions of authority have no interest whatsoever in relearning how to make decisions that can make sure their company thrives and that humanity thrives. And I think we also need to recognize that every decision matters; no matter where you are in an organization, your influence in that organization matters.

Justin Beals: Yeah. You know, to your point, and maybe leaning on my old theater degree, you know, I'm always curious about motivation, but also, the concept of motivation comes from the context in which we operate. And if you're an executive or a decision-maker in a certain situation, the context in which you're operating can really change the decisions you're taking.

And, you know, it's oftentimes hard, hard to put yourself in those shoes and think about them. Yeah, I think about the shift in political environment in the United States and some of the context of decision-making in that shift in political environment. And the responsibility that some decision-makers have with the companies that they built in operating in that political environment.

And you can start to figure out why, why certain changes are happening. 

Kate O'Neill: Yeah. And I think it's also important to step back to that very first discussion point we had, which was about, you know, the idea of systems thinking and connecting disparate dots, right? Like I think it's very easy for someone who's sitting within the United States to look at United States politics and say, oh, this is crazy, like everything is in chaos, and it is, don't get me wrong. But I think it's also really important for us to have the ability and the skill to look beyond the United States context, to look at the greater, you know, geopolitical horizon and to see patterns. And there are patterns: many countries are experiencing their own versions of what we're going through in the United States, and that points to some more universal human experience at play here. You know, I think about some of the things that make us feel uncertain right now: we can talk about the climate crisis, we can talk about, you know, people who are in migration patterns because of climate upheaval and the strain that places inside of different communities with the sense of jobs being at risk. And then, of course, AI feeling like an existential threat in the context of the future of jobs and work.

All of these things are happening at the same time. So it creates an incredible vacuum in which someone who promises a strongman type of leadership is going to do very, very well with people who are experiencing fear and uncertainty. So I think if we can contextualize things like that, and you know, if you're listening and you go like, that's not my political analysis of the situation, that's completely fine.

But I think what's important is to be able to create a political analysis of the situation, like a meta-analysis that connects these dots. If you're not connecting the dots in a way that forms some kind of sense-making narrative, then you're missing an opportunity, you're missing a trick, because there's so much happening that is connected at levels that we do injustice to if we are only operating with a very, you know, close-in, zoomed-in kind of vision.

Justin Beals: Yeah, I do have to say, and now I'm going to get on my soapbox for a second, but as a consumer of technology myself, I have been more and more critical of what I decide to spend time with or buy or, you know, invest my energy in, and I think it's really hard. Sometimes I look and I'm like, yes, we could all complain about some of these platforms, but are you actively posting to it?

Are you clocking in? Are you a part of the ecosystem that allows it to thrive?

Kate O'Neill: Yeah. And you know, it's tough. I understand there are extenuating circumstances in many cases; like, many people belong to groups on Facebook whose value they can't get anywhere else.

Or, you know, I had tens of thousands of followers on Twitter that I had to abandon when, you know, Elon Musk took it over and there was a mass exodus to, you know, Threads and Bluesky and all the rest. So, you know, it's tough to walk with integrity through those conflicting decisions.

And I think, you know, as long as we're being intellectually honest about what our motivations are, as you talked about with the idea of motivation, and if we're being intellectually honest about how we can move ourselves incrementally toward a better outcome in the future, then we're doing the right kinds of things.

And, you know, incrementalism is, as I say, moving incrementally toward that better future. This is a theme that comes up a lot in What Matters Next, because I think incrementalism gets kind of a bad rap. There's the sense that, you know, if we move incrementally, we're missing the chance to make big, radical, sweeping changes and, you know, overhaul what's not working.

That may be true, but I think most of the time, what happens, especially in business environments, is that things just don't change. If we don't move carefully, we don't move at all, or we make sweeping changes that are ill-considered, that haven't really taken into consideration, you know, what the downstream consequences for communities are going to be and so on.

So I think especially in the context of a Secure Talk discussion, right? Like thinking about how security-type decisions are made, and anything that relates to, you know, privacy or data protection or, you know, insurance or mortgages, or any of these kinds of industries that are making big decisions that affect human thriving and how people experience the real world around them.

These kinds of things have to be thought of very, very carefully. And so I think making very carefully considered decisions that move us incrementally toward the future we would like to be in is much better than either sitting idly and doing nothing when we know we should be doing something, or moving too fast and, you know, reacting in ways that might overstep our understanding of what the consequences may be.

Justin Beals: Yeah. So your book, What Matters Next, it feels like there's some continuity between your prior books. Like there's a little bit of a sequence here. I was really curious whether you could step us through, you know, your bibliography a little bit and be like, yeah, I kind of started here and now I'm here.

Kate O'Neill: It's funny, I've done this; I've actually created a slide for keynotes in case that comes up, if I want to try to explain, like, if you're following along at home, here's the logic, right? So, four books ago (this is my sixth book by some measures; I did a very brief ebook about some of the lessons learned from Netflix, and I also did a memoir), but apart from those, the four business and tech books have been, first, Pixels and Place, which is really looking at the interconnected digital and physical experiences. That landscape, when the book was published in 2016, was really emerging at the time, thinking about the Internet of Things and thinking about, you know, augmented and virtual reality, like how these different kinds of experiences stood to change the way that companies were going to be able to interact with people and so on.

And so I was presenting about that very frequently to a lot of user experience and design kinds of audiences and finding that a lot of the questions that came back took it to a higher, more macro level. And some of these questions had to do with ethics and responsibility and that sort of thing. So that naturally led to Tech Humanist, which is more of this, like, let's talk about a methodology of designing and building solutions that incorporate tech.

How do we do that in a way that aligns business objectives with user objectives, or with human objectives, really? And then, you know, building on that, again, kind of presenting on this, a lot of times the questions that would come back would be like, well, but whose idea of what's best for people really comes into play?

And how do we deal with the fact that there are so many big macro issues that are demanding our attention, like the climate crisis, like supply chain issues, like geopolitical upheaval? And so A Future So Bright was the book that came out of examining how we think about the opportunity for business to solve human problems at scale using technology, and really not pitting these things against each other. Like, it's not a contest over whether businesses should be profitable and growing, but about how businesses can align with, you know, the United Nations Sustainable Development Goals, for example, in a way that's profit-seeking and aligned with humanity overall.

So that was that. And then out of that has come, again, some presenting with audiences and having executives and leaders come up to me and say, you know, this is really fantastic, but what I find is I'm really daunted by the pace of change. Things are happening so fast, I can't keep up. You know, this AI moment has come in the last couple of years, people are feeling like digital transformation is demanding this kind of now, now, now, you know, and constant change in the business sphere, and yet they're feeling very disconnected from this larger picture of how humans need to think about our place in the world and what value we bring, what our contribution means in the context of AI and so on.

So this book, What Matters Next, is really about, ostensibly, helping leaders make better decisions. But as I said earlier, I think it's really a guide for everyone who's in a position to make a decision, which is everyone, to try to reconcile a few different things. One is an understanding of what now is versus the future.

Like I think a lot of times, people have a difficult time conceptualizing the future, and that's become something that, over the last several years, I've developed some fluency in talking with people about. So it's breaking that down for people, but it's also about bringing these tech and human considerations back into the equation and really trying to roll some of the tech humanist philosophy back into the discussion at a level that's relevant for leaders.

You know, to think about what it means to be a tech humanist leader, for example, and the decision-making we do.

Justin Beals: I mean, certainly a lot of the security folks that I've talked to describe their job as part of an arms race, you know. And I think your point about the future versus now is blurred today because it feels like the future is now, right?

Like we could talk about AI in that way: it just came out, and yet many people are making hiring decisions based on the fact that a large language model can produce some code. It has only been out for six months, and they're going to immediately change, instead of maybe even giving themselves the human space to absorb those differences or what's happening.

Kate O'Neill: Right, right. Yeah. We saw that in the year after the rollout of ChatGPT to the public in November 2022; you saw a lot of newsrooms, for example, you know, like CNET and Rolling Stone and a few of these others, making egregious decisions about, you know, removing editors from the human landscape of the job function and leaving a lot of editorial creation and review to large language models, which are fantastic for iterating you through so many different discrete types of content development and review.

But especially at that time, we were nowhere near ready for large language models to take over the editorial function in major newsroom-type capacities. So, you know, I think we saw some embarrassing, red-faced moments in public there, but I don't necessarily think it daunted too many leaders.

I think a lot of leaders saw that and went like, well, I'm not going to make that same decision, but then went on to make their own equivalent decision of how they rolled out GPT and large language models.

Justin Beals: Yeah. You know, speaking of the AI change, in Tech Humanist you write that being a tech humanist means recognizing that we encode ourselves in our machines. And in reading that line in your book, I was like, this doesn't feel like the AI overlord that is separate from us, but our own invention, in a way, reflecting back who we are or what we prioritize.

Kate O'Neill: Yeah. I think that's one of the biggest themes of many of my keynotes, and the message I most often try to get across to people is that, to the extent that we project fear and uncertainty onto AI as, you know, the bogeyman that we're afraid of, it actually represents decisions that are being made by humans and encoded into machines. And so what we need is for people to make better decisions, and then encode those better decisions, those better values, those better versions of ourselves into the algorithms and the data models and the machines.

And even people who hear that oftentimes react by thinking, well, that's not my role. You know, that's fine for you to say at this level of abstraction, but my role isn't to encode humanity into machines. Like, yes, it is. It is, by each of us consuming and using technology.

We are encoding ourselves into machines. We are data points that become consumed, you know, and part of a much larger data model. Everything you talked about earlier, the decisions we make about the technology we use day to day, very much influences the data model that will evolve from our usage. And of course, many of us have actual roles in technology, in some kind of product development, product design, or technology architecture influence, and that's incredibly important. One of the stories I love to share is Amazon Go. You know, a lot of people by now are familiar, and you live in the Pacific Northwest, so you're probably very familiar with Amazon Go.

Justin Beals: Big company. 

Kate O'Neill: Yeah. Yeah. Like, if anybody is not as familiar, you know, it's that just-walk-out grocery concept where you're taking your phone, and there's an app on your phone, and you're scanning it through the gates as you come into this grocery store, and then it's just like a grocery store.

You're gathering things off the shelf, and then you're just walking out, you know, scanning your phone again, and it rings you up for everything that you've carried out with you. And this is a brilliant innovation. It's facilitated by the fact that there are sensors on the shelves and cameras and a whole constellation of surveillance technology, really taking care of, you know, monitoring all that.

But what's super interesting to me is that when you open up the Amazon Go app for the first time, it gives you this onboarding sort of tutorial about how to use it. And what it says is that since you are charged for everything you take off the shelf, don't take anything off the shelf for anyone else.

And that struck me immediately; the very first time I saw it, I was like, that's not how this works. That's not how any of this works. Have you ever been in a grocery store? And I have asked this in audiences around the world. And every time I ask, you know, have you ever helped someone in a grocery store or been helped by someone in a grocery store?

Almost every hand goes up. There aren't very many truly universal human experiences, and this is one of them.

So the idea is that some engineer somewhere, while trying to solve the problem of, well, how are we going to make this so seamless that it charges you when you take something off the shelf, when we know that someone could be removing it for someone else, said: you know what we'll do? We'll just make that a rule in the system, that you cannot do that. And I think that right there is a decision made by someone who's not the CEO. That wasn't a Jeff Bezos decision; that never made it up to him.

You know, that was an engineer. That was an architect. That was someone somewhere who was saying, I can't figure out a better, more graceful way to preserve a meaningful human experience and yet solve this technological problem, so I'm just going to do the easy thing and I'm going to restrict human experience.
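As a purely hypothetical illustration of the trade-off being described here, and not a reflection of Amazon's actual implementation: the "easy" rule bills whoever lifted the item, so the app has to forbid helping someone else, while a slightly more graceful design could let shoppers reassign items before they're charged.

```python
# Hypothetical sketch of the two design choices; nothing here reflects how
# Amazon Go actually works.
from dataclasses import dataclass, field

@dataclass
class Cart:
    shopper_id: str
    items: list[str] = field(default_factory=list)

def on_item_lifted(carts: dict[str, Cart], shopper_id: str, sku: str) -> None:
    # "Easy" rule: whoever lifts the item pays for it, so the onboarding
    # tutorial has to tell people not to grab anything for anyone else.
    carts[shopper_id].items.append(sku)

def on_exit(carts: dict[str, Cart], transfers: list[tuple[str, str, str]]) -> None:
    # Friendlier alternative: before charging, apply any "I grabbed this for
    # them" transfers (from_shopper, to_shopper, sku) confirmed in the app.
    for from_id, to_id, sku in transfers:
        if sku in carts[from_id].items:
            carts[from_id].items.remove(sku)
            carts[to_id].items.append(sku)
```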

And I think that that is only one of many, many, many examples we could look at across a wide range of fields. And I don't mean to throw Amazon under the bus here, but why not? Because it's fun. 

Justin Beals: They're going to survive, I think. Yeah, 

Kate O'Neill: Yeah, they're going to be fine. But the point is, and it probably happens in your organization, listener, it probably happens in the apps that you use, the technologies we're all facing every day: decisions we're faced with all the time require us to choose between, you know, the easiest kind of lowest-hanging-fruit solution to the problem that's in front of us versus really thinking about how this is going to potentially change the way that people experience the solution that we're putting in front of them, and what opportunities they have going forward.

It's incredibly important when you think about things like whether someone's being approved for a mortgage or whether someone's getting approved for insurance or, you know, all of these kinds of things that happen within the purview of what many security jobs look at. 

Justin Beals: Oh, there are a ton of easy ones in the security space, right?

Like, don't share your password. Are you kidding me? We have people that, you know, we also want to log on to our accounts and share information with. Don't write your password down. Are you kidding me? Like, that's really hard.

And I have railed against this in the security space: instead of being great engineers and crafting our tech around great human experiences, we limit the human in the engineering. And I think you're missing the opportunity to build some cool tech, actually, if you want to boil it down.

Kate O'Neill: Yeah, exactly. I think, and I'll say this probably a bunch of times in a bunch of ways in our discussion, the most innovative and promising solutions that we have in technology are always those that keep humans at the center.

And that's just, that's just fact. I mean, I can't get around that. Every time I look at opportunities to think about other ways to solve, it's always back to humans. Back to keeping humans at the center.

And when I say that, by the way, I don't mean keeping humans at the center at the expense of nature. I don't mean keeping humans at the center at the expense of the planet, or that we're the most important thing and not other living things. I just mean in the equation of business, humans, and tech, thinking about, you know, kind of that taxonomy, that ecosystem.

We need to be at the center of that equation. There's no way that trying to solve a business problem that doesn't align with human needs is going to get any better by throwing tech at it. 

Justin Beals: Yeah.  You know, and there's a chapter title in your book that I also really loved. You have this phrase, “make experiences meaningful”.

You know, and that reinvigorated my analysis of the product work that I do, right? Like, where are we making a customer's interaction with our systems a meaningful interaction? You know, we have a lot of representations about efficiency, but I think, as an industry, we're moving beyond efficiency.

Where do you see the opportunities for meaningful interaction that go beyond just an efficient summarization of data? 

Kate O'Neill: Yeah, I think it has to go beyond making things faster or easier, the meaningful experiences. A lot of times, you know, I'll talk about meaning at a keynote, and people will be like, that's very inspiring.

I love the philosophy of it, but then how do you measure that? Like, how do we get closer to where we can measure what is meaningful about the experiences we create? And I always say that there are perhaps no direct ways to measure meaning, but there are proxies. And one of the proxies might be faster or easier.

That's sometimes going to be a proxy. Like, if you know that that's a meaningful part of the experience for someone: I want to get through this process simply. Like, I've lost my password, I need to get logged into my account. And so fast and easy is a proxy for meaning in that case, right?

But usually, meaningful experiences go beyond that, and they create moments of genuine human connection, or of insight, or of growth. And those are really important considerations. So instead of just, you know, automating customer service, we want to be thinking about how technology can help make those experiences more empathetic and more helpful.

So, you know, not necessarily faster or easier might be the answer; it might be, how do we make an agent better prepared to answer questions and, you know, ready to be empathetic, to be helpful, to show contextual awareness and emotional sympathy for, you know, the problem you're going through, and to have the answers right there at their disposal because AI has anticipated, based on, you know, the cues within the conversation, and provided the right kinds of prompts to get moving through the transaction.

So it's not just, how can we make this faster? We're asking, how can we make it matter more? How can we make it so this interaction has the right kind of appreciation for what matters both to the company and to the person interacting with the company at that point? So if we're enhancing and amplifying human potential and not just automating tasks, we're doing the right thing.

Justin Beals: Yeah. When I first started working with machine learning, or data science, especially when we started working with unstructured data and producing results, one of my criticisms was, you know, how do we measure how well this is doing? And I pushed us, I said, you know, we've got to roll back to an initially more qualitative way of analysis.

We should ask people whether they liked the results. Why did everything have to be a dashboard and a data point? Because we may miss some important insights along the way.

Kate O'Neill: That's right. Yeah. No, that's a very important point too. And there certainly is a time and a place for those qualitative, more subjective insights.

And how those lead us to the kinds of nuances that we would never get otherwise; you know, we probably won't think to structure questions in such a way that a 10-point scale is going to tell us what we need to know compared with just asking someone. You also end up, you know, overfitting in many cases where you do that sort of over-quantified type of response.

For example, and this is again like the Amazon bashing, but I remember buying a $500 blender on Amazon years ago, and then I got a series of emails from them like, because you bought this Vitamix blender, here are some other things you might appreciate. And it's like a Blendtec blender and whatever, like some other blender.

I'm not buying another $500 blender, Amazon. Like, these do not make sense as recommendations. And, you know, I'm sure they've fixed that in their analytics since then; it's been a few years. But those are the kinds of things we risk doing if we rely too much on data-based systems to provide the meaning, as opposed to having occasional, you know, human reviews and having a human in the loop that's going to add some understanding of what's likely to matter, some awareness, some emotional context, some, you know, good judgment.
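A small, hypothetical sketch of the kind of human-authored guardrail being pointed at here: a rule, written by a person who knows that nobody buys two $500 blenders, layered on top of whatever the recommender produces. The category names and data shapes are illustrative, not any real retailer's system.

```python
# Hypothetical human-in-the-loop guardrail on top of a data-driven recommender:
# suppress suggestions from a durable-goods category the customer just bought in.
DURABLE_CATEGORIES = {"blenders", "mattresses", "refrigerators"}

def filter_recommendations(recent_purchases: list[dict], candidates: list[dict]) -> list[dict]:
    """Drop candidate items that duplicate a durable category already purchased."""
    bought = {p["category"] for p in recent_purchases if p["category"] in DURABLE_CATEGORIES}
    return [c for c in candidates if c["category"] not in bought]

recent = [{"sku": "vitamix-750", "category": "blenders"}]
suggested = [
    {"sku": "blendtec-800", "category": "blenders"},             # filtered out
    {"sku": "nut-milk-bag", "category": "kitchen-accessories"},  # kept
]
print(filter_recommendations(recent, suggested))
```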

And those are the skills. It's funny. I think, you know, people are often wondering what kinds of jobs and work are most in jeopardy and what kinds of jobs and work are most secure as we move forward with increasing automation and increasing AI. And what I find is that we just can't do better than those kinds of core human soft skills, right?

Like those things we have always called soft skills, like contextual awareness and emotional intelligence and good judgment. Being able to be tactful as you handle a situation as opposed to, you know, being rude or curt or whatever. Like, those kinds of things are incredibly important in every interaction, every context.

There's hardly any interaction where they don't matter. So we're going to find, I think, continuous ways to wrap the more synthetic experiences with a more human sort of softness, and we're going to continually figure that out. Prepare for that. I think, you know, people are always asking, like, what should my kids be studying?

What should they be prepared for? Like, prepare for that. Make sure that they're, you know, reading the classics; make sure that they're, you know, learning the humanities, so that they're ready to go, they're up to speed with everything that we're going to need that fits in and around, you know, the STEM fields.

STEM is incredibly important. Obviously, you know, you and I have built rich, fine careers in it. But I don't think that either of us would say, you know, double down on STEM and get rid of the humanities.

Justin Beals: The pendulum certainly swings, right? Like for how many years did I hear everyone needs to do STEM? My niece and nephew need to learn how to code.

But I was just last night sitting at a dinner with a colleague, and they have a couple of young children. And they're reading some books about screen time and access to tech, and some of the recommendations are like, yeah, maybe not till they're 16 and they've built that emotional intelligence. And I could see a day, Kate, where we're full circle, and we're back at, hey, you know, read the classics, write more compare-and-contrast papers, because that's the career that's going to be available.

Kate O'Neill: Yeah, I think there's a real opportunity here in creating kid-safe tech, because I think that the skill that kids develop, you know, the sort of digital-native skill of touching a phone or a tablet or something for the first time and intuitively knowing how to navigate it, you know, learning that really quickly, that's a great ability, and I think it's an incredible shortcut to allow kids to develop.

But obviously we don't want to do that at the expense of, you know, feeding an attention-starved kind of economy, creating addiction cycles, and depleting their ability to function in real physical space. So, you know, there's a balance there, and it's not my expertise.

Plenty of other people are doing this work where they're thinking about, you know, kid-safe tech, but man, what a huge opportunity that is going forward.

Justin Beals: Yeah. So you have a new book coming out on January 25th. I'm super excited.

Kate O'Neill: Yeah, it's January 28th or 29th. It's that week at any rate. So launch week is the 28th, 29. It's funny because, you know, normally, a book launches on a Tuesday of a week. So January 28th would be the day, but apparently, when it follows a major holiday, it sometimes gets pushed back a day. So we'll see. We'll see. It's kind of like, who knows at this point, one of those days. 

Justin Beals: And it's titled What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast. Unless the title has changed. Do I still have it right?

Kate O'Neill: No, no, that's it. 

Justin Beals: All right. Good.

Kate O'Neill: The title's not moving too fast. The book is staying, staying put.

Justin Beals: Well, I am curious, you know, what's your reflection on the pace of change today?

It's hard because we're all living in the moment to your point. Do we continue to see a lot of acceleration in the pace of change? 

Kate O'Neill: Yeah. I think, I feel like we are not imagining it: there really is this kind of increase in the touch points of change that we interact with on a regular basis.

I think a lot of people are very dismissive, like, well, everybody thought that every new technology was, you know, the brink of new change and it was all overwhelming. Like, yeah, they probably did. And it really was, you know, and every new invention brought about new touch points and new kinds of contexts to be aware of.

So that is at the same time true and also, I think, more manageable than we're giving ourselves credit for. I think some of it requires us to just rethink some of the things we do, or revisit some of the skills we already have, like focus and discipline and how we organize our days, you know, allowing ourselves to do things like turn off notifications on our phone and use things like the iPhone's Focus modes, you know, so that during work we have a work focus, so we're not being disrupted all day, and things like that.

There are all kinds of interventions that I think we can use, but I think the truth is, yeah, the pace of change is increasing. And it feels like decisions have more consequence. So the movement toward effective accelerationism within Silicon Valley, you know, the sense that, oh, we have to go as fast as we can, as fast as the technology will allow us.

That is also driving the sense that we can't keep up because, literally, no one can keep up. I mean, you've got leaders at the big AI companies that are in a breakneck race with each other to try to create AGI and to try to outdo one another on the capabilities of their models, their frontier models, and so on.

I mean, it's really kind of bonkers. So it's not fiction, but I think where that leaves us is that those of us who are not necessarily leading companies that are building frontier models are left to try to make our decisions in the context of what remains. And what remains is that humans are still human.

We still have very much the same needs for connection and for a sense of what's real around us, what matters to one another and to ourselves. And that sense is why I always come back to meaning. Meaning is the most fundamental human experience and the most fundamental aspect of the human condition.

And meaning at every level is about what matters. So, you know, when you think about semantics, when we try to communicate with one another, that's about what matters in what we're saying. And when we think about patterns or purpose or truth or significance, or if you take it out to the biggest picture, cosmic or existential questions of meaning, like, you know, what's it all about and why are we here?

What we're asking is, what really matters about any of this, right? And so when you distill that down, when you kind of collapse all of those different ways of looking at meaning and understand that, okay, meaning is the lens through which we as humans make sense of the world around us, how can we use that in a way that's fit to make good decisions about business and technology?

Well, what matters is our key. And then we can look at innovation as being about what is likely to matter, what is going to matter. That gives us the opportunity to look at, like we talked about, now and the future, kind of putting those things in relation to one another. So we have what matters and what's likely to matter.

Then we need to understand, how do we know what the risks are of each? I talk about this in the book as the harms of action and the harms of inaction. So if we're moving like the accelerationists, at breakneck speed, we're risking the harms of action. We're barreling forward at speed into things we don't know about; we don't know what risks we're bringing about.

Those are the harms of action that we're incurring there. On the other hand, if we're aware that things need to change and we're not making those changes, and the climate crisis is a really good illustration of that, a space in which it's very easy to find examples, those are the harms of inaction.

And so we have to find ourselves operating meaningfully within the parameters of action, with the right approach to that space and that speed, integrating the picture of what matters now and what's likely to matter in the future, and then making that next incremental decision that moves us forward.

So that's, you know, what the title of What Matters Next is about: moving mindfully and very intentionally from what we know matters now, what our values are and what our priorities have been up to this point, to what we believe is very likely to be what pulls us into the future and where we need to be in the future.

So how do we take that very next step that moves us there? 

Justin Beals: It seems like a big ask for intentionality and, you know, thoughtfulness from us as leaders. You know, I certainly feel like there is opportunity for technology and the innovations that are available to make the human experience more meaningful, to your point, but also the danger that they would remove some meaningfulness from, you know, the experience. I think a lot of what the Internet has been is about breaking down the barriers to information distribution.

You know, democratizing it in a lot of ways. There have been pros and cons, right? Like we can see, you know, the New York Times having to respond to a BuzzFeed, which is getting more traffic, and a desire from us to be more in the now of knowledge of what's happening today, limiting the opportunity we have to look at the information that's being disseminated and evaluate it for its truthfulness or honesty or factuality.

And I think we've got to grapple, you know, with both sides of that. And as I look on the AI side, you know, I remember reading Neal Stephenson's Snow Crash and thinking how wonderful it would be to have a safe assistant, an AGI I could communicate with that would be another part of my knowledge outside of me, or a reflection tool for me. But then I also think about the ability to control how these things respond, creating a context that is not reality in the world but the reality another organization wanted to put around me.

Kate O'Neill: Yeah. Yeah. I think that's a really important consideration, especially in the security and  data protection space.

You know, thinking about security evolving from being just about protecting data to protecting human dignity and trust, the very notion of trust. And so that becomes really central to creating meaningful experiences, because people aren't going to engage deeply with technology that they don't trust.

So it needs to be both robust and human-friendly, and that sense of security needs to be integrated into the experience design right from the start, as opposed to, you know, bolted on as an afterthought. 

Justin Beals: Yeah, I mean, I've certainly talked to some very innovative companies that are solely focused on better security as the selling point of their solution.

They may not be too different from the old bulletin board service, chatroom-style product. But the major difference is the privacy that they bring, the security that they bring, you know, the ability to retain the information that they want and utilize it. And it's become a selling point for companies that are not in security but whose product is focused on security.

Kate O'Neill: Yeah. And I think, you know, it's very easy to think about identity verification services as being a core security offering that is very much in line with protecting human user privacy in a very human-dignity sort of way, or, you know, safeguarding against deepfakes that undermine trust in digital communications and in institutions, for that matter.

So, you know, these are not just technical challenges. They're really fundamental to how humans flourish in an increasingly digital world.

Justin Beals: Well, I'm going to quote another title of one of your books. I do believe the future is quite bright. I mean, the fact that we are actively talking about the challenges and concerns that we have, that we have the channels and opportunities to discuss it, I think means that, hopefully, you know, in some ways in a consumer-driven environment, what we want as humans will be the things we prioritize investing our time and resources into. And over the long run, we'll wind up with a better society.

Kate O'Neill: Yeah. The funny thing is, I wanna gently correct that. It's not "the future is bright"; it's "a future so bright." Oh yes. And the difference there, I think, is the implication that the future could be bright, that we have the potential for a future that is so bright if we make the right choices.

Right? Yeah. If we invest the future with these strategically optimistic types of approaches, if we are balanced in the way that we are positioning the business objectives with human needs and all of that. So it's not so much about correcting your quotation of it as that I think it was an oversight on my part, naming the book something that was such a close analog to the Timbuk3 song of the '80s.

That was a big mistake, because I think people are always like, oh yeah, just like "the future's so bright, I gotta wear shades." I'm like, oh shoot. I wasn't only saying that the future is bright. I'm saying it can be bright. It's on us. It's totally up to us. And so that's where I think, to your earlier point, there's this kind of organic flow through my work.

And where we find ourselves with What Matters Next is a natural progression to saying, look, if we're going to have a bright future, it's on us. It's on every one of us to make better decisions. And this book really walks through a different way of thinking about the future and about decision-making about technology, and tries to put all of that into a better system for insight- and foresight-driven thinking and decision-making, so that we can find ourselves moving into incrementally better futures every day.

Justin Beals: Wonderful, Kate. I am very grateful for your work, and I loved reading it. It was certainly inspiring for me and has been a part of my thought process now as I think about what we build and why it matters. Thanks for joining SecureTalk today.

Kate O'Neill: Well, thank you, and thanks to all your listeners. And I'm happy to direct people to KO Insights if they want to read any further on my blog or any of the work we've done.

I am also on LinkedIn and on Bluesky and Threads as just, slash, Kate O'Neill, K-A-T-E-O-N-E-I-L-L. So yeah, happy to connect with folks. And I would love to hear back from anyone who's been listening and wants to share any thoughts or observations or questions that they have as they've listened to our great conversation. Thank you, Justin.

Justin Beals: Yeah, you bet. And we'll include links for all that stuff in our show notes so that people can easily find you. Um, great. Thanks, Kate. Have a great day. 

Kate O'Neill: Thanks, Justin. You too.

About our guest

Kate O'Neill, Tech Humanist, Global Keynote Speaker, Author, KO Insights

Kate O'Neill, known as the “Tech Humanist,” is a trailblazer influencing humanity's future in a tech-centric world. She merges strategic optimism with extensive expertise to create significant change. She founded KO Insights, a strategic advisory firm that enhances human experience at scale, particularly in AI-driven interactions. Her clients include Adobe, Amsterdam, Austin, Coca-Cola, Google, IBM, and the United Nations.

Kate has earned various awards, including “Technology Entrepreneur of the Year” and recognition in Google's global campaign for women in entrepreneurship. Thinkers50 named her among the World’s Management Thinkers to Watch.

Her insights feature in the New York Times, The Wall Street Journal, and WIRED, and she appears as a tech expert on BBC and NPR. A sought-after keynote speaker, she has addressed hundreds of thousands globally. Kate has authored six books, including Tech Humanist, Pixels and Place, A Future So Bright, and What Matters Next.

Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.
