The Audit - Presented by IT Audit Labs

AI & Emerging Tech for a Better Future with Marsha Maxwell

IT Audit Labs Season 1 Episode 52

In this episode, we dive into emerging tech with Marsha Maxwell, co-founder of If These Lands Could Talk and Head of Innovation at Atlanta International School. Marsha shares insights on empowering indigenous and underserved communities through AI and VR, the ethical challenges of integrating AI, and the importance of digital inclusion. We discuss the impact of AI on knowledge, culture, and education and examine how to responsibly bridge gaps in tech access worldwide. 

In this episode we cover: 

  • Exploring AI and VR for indigenous and underserved communities 
  • Bridging digital divides: Tech access for all 
  • Ethical challenges in AI and identity 
  • How to navigate digital authenticity in the age of deepfakes 
  • The future of AI in creative and cultural spaces 
  • Practical strategies for blending AI with education and learning 

Tune in for a compelling look at the intersection of technology, education, and culture. Don’t forget to like, subscribe, and share to stay updated with our latest episodes! 

#ArtificialIntelligence #EmergingTech #DigitalInclusion #CyberSecurity #DataProtection #AIinSecurity 

Speaker 1:

Welcome to the Audit, presented by IT Audit Labs. I'm your co-host and producer, Joshua Schmidt. Today we're joined by Managing Director Eric Brown, our associate Bill Harris, and our illustrious guest, Marsha Maxwell. Marsha has quite the track record. Just a couple of highlights here that I pulled from your LinkedIn: you're the co-founder of If These Lands Could Talk, with a mission of empowering native and indigenous communities globally through digital technology. That's how we found you. I saw your TED Talk, which was great. It's called Stories for the Future. I highly suggest our listeners check that out. So we have lots to talk about. Today we're going to be diving into AI and VR, your take on that, kind of passing ideas around and maybe talking about the future and where that's going to lead us. So, without further ado, Marsha, maybe you could give us a little background on yourself and tell us what you've been working on lately.

Speaker 2:

Yes, so I'm working on several projects. One is with If These Lands Could Talk, which is an organization that I co-founded with my partner, Natasha Rapsad, and we are trying to introduce indigenous and native peoples around the world to emerging tech. Another nonprofit that I have is Liminal Spaces, which is doing the same kind of work but with underserved communities, mostly in America. In the meantime, I'm also the head of innovation, research and technology at Atlanta International School, which has a mission to spread education around the world and help people have a more global mindset as we enter into some very interesting times that are coming up for all of us, not just in America but in the world.

Speaker 1:

Absolutely, looking forward to hearing more about that. So in this part of the world we're bracing for the winter that's coming, the cold weather here in Minnesota, and Bill's out east in Maryland, so we're kind of dreaming about warmer climates, maybe a winter vacation. Do you have any travels coming up that you have planned?

Speaker 2:

Yeah, I'm going from hot to hotter. So I'm going to Brazil in a couple of weeks to work on a project, and later in the year, in December, I'm going to Ghana to work on another project, actually revolving around AI and identity.

Speaker 3:

Oh, that's really interesting. Marsha, what sort of work are you doing that involves Ghana in this kind of emerging technology?

Speaker 2:

Yeah, so I'm working in partnership with Google. Google has an AI for Africa hub in Accra, which is the capital of Ghana. So we're going over there, looking at identity for peoples that have moved away from Ghana, and we're doing kind of a mini documentary that involves people who have Ghanaian ancestry. We're doing that here in America and we're taking it back to Ghana to showcase, and hopefully there are some interesting stories coming out of this. Like I said, we're working in conjunction with Google, and we're just trying to introduce AI to a continent that's been really innovative over the years, but people have a different perception of Africa and what African thought leaders are doing. So this is just to show that, yes, the future has not left certain continents behind. Everybody has a role to play in this digital future that we're creating.

Speaker 3:

As you're talking through that: as we're on this podcast here, I'm in a southern part of Pennsylvania at a cabin that's way out in the woods, and I'm on Starlink right now. That's how I'm getting to the internet and being able to work and do things like this podcast. How have you seen technologies like Starlink open up access for other areas of the world, or maybe not open them up? Have you seen these types of emerging technologies have an influence on any of the projects that you're working on?

Speaker 2:

Yeah, well, I work on a different side of it. For me, the penny dropped during the pandemic, where, as a technology leader in a school (my school is fairly well-funded), we pivoted really quickly, gave children devices, sent them home, and never once thought about somebody not having internet access. Then, as we were going through the pandemic, we heard from different parts of the community that you can give somebody a device, but they don't have any method of actually using that device. So that kind of kick-started a lot of these ideas.

Speaker 2:

So programs like Starlink are very important to be able to get the connectivity, or at least the ability to connect, to people who are not in the general hub where we have everything connected.

Speaker 2:

But also I think it kind of shines a light on a problem that we think of as being a third-world, developing-country problem, when actually it's in our own backyard. You see the reports from many of the areas where there are indigenous peoples who don't even have electricity sometimes. So we're thinking about these really high-level problems, and there are some very low-level problems, like connectivity and actual electricity, that people are still struggling with. So these kinds of projects, going to Africa, going to Asia, going to South America, I think keep us globally alert or aware of the problems that are out there, and not just thinking, okay, this is a really cool widget or gadget, and what can it do for me? It's really about how we bring everybody along on the journey and not leave anybody out, because there's so much that needs to be done and so very few people that have the ability to do it, especially with the new AI stuff that's coming out.

Speaker 3:

So, as you are working to bring emerging technology to underserved communities or indigenous communities, what are those emerging technologies that you're bringing and how are you integrating that into some of these communities that may not have things like electricity?

Speaker 2:

Yeah, well, the ones without electricity, I have to admit that I haven't been working with those communities so much. It's usually working with entities that are also involved in the community. One thing that I've learned over the past couple of years working with If These Lands Could Talk, and really getting into the indigenous communities, is that there has been a history of appropriation and definitely of distrust. So the main part of the work, the biggest part of the work, is establishing that trust and allowing people to determine how they're going to interact with the technology, and also what parts of their communities they are going to allow to be affected or shown by the technology. And that's something that I had to learn.

Speaker 2:

I didn't have to learn it the hard way, because I'm a good listener, but through empathy interviews and things like that I'm hearing things that I would never have thought about. And when you think about it, you see there are many people who get, like, Maori tattoos, and it doesn't mean anything to certain people, but when you go talk to a Maori person, it means a lot, right? In the same way, especially when the NFT boom was going, people would be kind of appropriating different symbols that were either sacred or very important to certain peoples and just using them without permission. So when working with indigenous communities, it's really important that you understand how they want to use the materials, what things they allow to be shown and what things they don't allow to be shown, and follow along like that.

Speaker 2:

Working in Africa has been really different, and that's been mainly with the students that I work with there. They've heard of stuff, right? They've heard of VR, they've heard of AI, but they haven't actually interacted with it. So when I talk about AI and identity, it's really about how to not forget who you are, how you use AI to augment who you are, and how you use AI to connect with people around the world who have different experiences and different perspectives than yours.

Speaker 1:

Just from the small amount of generative AI that I've been using to create text or images, I could really see how that would homogenize a lot of the things that people are creating. So when you're working with generative AI or VR, what kind of recommendations do you make to your pupils? How do they maintain a voice, stay authentic, and have that personality shine through the technology?

Speaker 2:

Yeah, I think one of the things I talk about is knowing who you are and understanding who you are: knowing what your ethics are, knowing what your values are, and understanding where you come from. All of us come from somewhere. I think in America we hear 'ethnic' and we think of people of color, but every single person in America has an ethnicity, has a background, has family history. So we start sometimes by saying, okay, tell somebody a story, a memory of your childhood. For some people it revolves around food; for others it maybe revolves around a certain holiday or visiting a certain place, and there are certain feelings that evokes. And after you think about this offline, I do an exercise where I say, well, how would you explain that to somebody from fill in the blank, who has a different perspective? What were the commonalities?

Speaker 2:

And then we use AI to help bridge those gaps. At the same time, especially in the beginning, when everybody was hopping on ChatGPT, it was okay, you know, but it was kind of spitting out very homogeneous stuff.

Speaker 2:

Right, it wasn't really differentiated. But the better the prompt, the better the response. So I help them with prompt engineering so that they can get a better response, and then we give it to the person of that background and have them read it and see how close it is, whether it's better or worse than the first iteration of what the person had written freehand from their own thoughts. Sometimes it was better, and we had some laughs about what the AI spat out. But really it's not necessarily about the tool; it's a tool to help people start thinking about what they're putting out there and how it's being perceived, and also, like I said, using the AI to augment that: finding more ways and better words to actually meet other people where they are.

Speaker 3:

I'm glad you brought that up, Marsha, about AI being yet another tool, another arrow in our quiver that we can use. It reminds me of a story about a colleague of mine recently.

Speaker 3:

He's going through some education, taking some college classes, and one of the classes he's taking is around religion. One of the assignments he was given was to research a religion that's not his own and then write a 12- or 15-page children's story explaining that religion. And his teacher said, you know, you can't use AI to do this, which is kind of like a perfect use case for AI if you wanted to just take a shortcut. So we sat around and chatted through what this really means: how are we using these tools at our disposal, versus what is the assignment attempting to have us learn, and how is that going to enrich us? You could really just have AI spit that out and probably do it in 20 minutes, versus really learning about that other religion, and learning it well enough to be able to write about it in a way that you could explain it to someone else at a very basic level. So we were just chatting through some ways where you could achieve that same thing but use the technology that's available to us today, and you know I always enjoy having those conversations.

Speaker 3:

You know, how do we incorporate the technology at our disposal? When I was growing up, it was, oh, you can't use calculators when you're in school. Well, calculators are ubiquitous. I'm never going to be on a desert island where I need to do long division on paper; it's just not going to happen. So is it better that I know how to use the tool well to achieve the outcomes that I want, or do I pretend it doesn't exist? How do we continue to evolve our education and our learning, but incorporate those things that are advancing with us as an overall society?

Speaker 2:

I think one thing that I hear a lot in educational circles, and keep in the back of your mind that I believe in using AI when I say this, is that it's not important that students know things anymore; they can just plug it into AI and figure it out, and they don't really have to think. I think that's 100% wrong. I never want a doctor who's going to be asking ChatGPT how to fix me when I'm on the operating table, right? So I think the combination is important. Having that knowledge is really, really important; otherwise you're not going to be able to test what's being spat out by the AI. If I don't know anything about this other religion or whatever I'm comparing, then whatever AI tells me, I'm going to say, okay, that's right. Whereas if I know what the beliefs are, and I know how the processes work, and I know how to do long division, then I can do spot checks and see how much is valid and how much isn't. So this idea that we're going to cede all responsibility and all thought to AI, I think that's very, very dangerous.

Speaker 2:

So I encourage students to use it to make what they're doing better and to give them ideas: maybe spit out a couple of ideas that they can then riff off of and do something great, and also as a test, to see where they can maybe make something better or worse, but not 100% just AI. In the beginning, people were using it and it was spitting out fake bibliographies that looked okay but were fake. If you didn't know that such-and-such was not a real person, you would just turn it in, right? But if you know, okay, this is not a leading scientist in this field, or you at least do a double check of the references to make sure, you catch it. So I think it's yes and yes: we should use AI, and we should also learn the fundamentals and some of the harder stuff about the different topics.

Speaker 3:

I really love the way you're blending the technology with the philosophy of education and learning. When ChatGPT was coming out a couple of years ago, and I think this still holds true today: if you get, say, a paragraph from ChatGPT, or you ask it for the first page of War and Peace and it spits it out, and then you ask the generative AI how many words were in the content it just sent you, and you ask it five times, it's going to give you five different answers, because it doesn't have that concept. At least, I haven't checked this in a couple of months, but it didn't have a way of accurately counting how many words, or maybe it didn't even really understand that concept, because as a large language model it's really just a great predictive tool; it's just predicting that next word. So we kind of blend it all together and think of artificial intelligence as a magic black box, but really it's just a really great way of guessing the next word that humans would use in language. And Bill's done a few talks on some futuristic computing things, quantum being one of them.

Speaker 3:

Another one on storage: storage using crystals, holograms and DNA, stuff that's kind of on the forefront of science fiction, right? And we're all headed there, hopefully in our lifetimes, where we're going to see some of these things come out of the lab and into practice. AI, or quote-unquote AI, and large language models will just become what we consider a calculator today, compared to how far we will have come in the future. It's really quite a lot to stay on top of for those of us who are in technology day in and day out, so for those that are on the fringes or don't interact with it much at all, you could really sweep past a few generations of people quite quickly.

Speaker 2:

Yeah, and that's scary too, and I think that's one of the reasons why I started working in this area, especially with these other communities: it moves so quickly. We talked about how OpenAI is just a couple of years old; I don't even think it's two, maybe two and a half or something like that, in the public consciousness anyway. And in that time there have been a million and one other OpenAIs that have started. Everything that we do has an AI component to it, whether it's a chat, or something helping you with your emails, or whatever. So if you don't have the luxury of having context or having the ability, I can't imagine what's going to happen in ten years, right, or five years. You're going to be so far behind. A couple of years ago we were talking about the digital divide; that was a big thing, and it was just having access to computers. But now computers are kind of passe. I mean, it's like, okay, next, there's something else. Everything is compounding, and so I kind of feel that pressure that if you miss this wave, you're really going to be lost.

Speaker 2:

You think about our parents; think about my parents. They wouldn't know what to do with AI. I think it would freak them out, or it kind of freaks them out, right? The ability for it to be so human-like. And I think most of the AIs now are quite good. I mean, they can write a decent email or letter, and you have the ones that are calling you, the robocalls, and they sound like a human being. I mean, you can tell when it's not, but it kind of sounds like a human being and it seems really realistic, and as time goes on, that's going to get better and better. So, yeah, I think we have a really interesting near future.

Speaker 2:

I'm excited about it. We'll see what happens. I hope it turns out well. I think sometimes people don't get the right lesson from the movies: you're not meant to be doing certain things, and that wasn't the point. Which goes back to education again, right? Maybe they didn't understand good guy, bad guy, the moral of the story, that kind of thing.

Speaker 1:

So yeah, I'd just like to jump in, and since we have an educational expert here, I want to get your take on this, Marsha. And then, Bill, I know you're a father as well, so I'm going to bring it to you in a second. One thing I've been thinking about with my kids is that knowledge is easily acquired these days, whether it's through ChatGPT or Google or what have you, but wisdom is another matter. I was reading a book called Breaking the Habit of Being Yourself by Joe Dispenza. He's a little out there; he's a healer, kind of an alternative guy.

Speaker 1:

But one of his quotes that I really enjoyed is: knowledge without experience is merely philosophy; experience without knowledge is ignorance. So it really takes personal experience combined with knowledge to create wisdom. It's easy for me to throw my kids in front of PBS Kids or put a tablet up at the restaurant, but then you're really missing that piece of wisdom that you get by being bored, or that creativity that's sparked by just having your mind wander. So I'm wondering what your approach is to that. I'd like to rope in Bill, too, because I know he's a father as well. How do you navigate that experience with your children and devices? When do you bring in certain things? When do you give them a tablet? When do you give them a cell phone? When do you give them some freedom?

Speaker 2:

Well, I would say that for me, I've started this kind of low-tech, no-tech type of thing, which is kind of weird, right? Because I'm super into emerging tech, I love it, but I'm also thinking that I'm old enough to appreciate it. In a way, I feel very, very lucky to have been born at a time when we had a lot of manual stuff, especially photography. There was a film camera, so you couldn't do what kids do now, where they just push the button and hopefully one of the 300 pictures they take in two seconds is going to look good, right? When you have a film camera with 24 or 12 shots, you're really going to pick your shots. Film's expensive; you're not going to be wasting those shots. So now I've bought a film camera for the kids. I even bought the Polaroids, the ones you have to kind of shake. So now it's like, okay, what am I going to take a picture of? Because I only have ten; that's all I have, I do not have another. So it forces them to slow down, to think more, to compose the shots better.

Speaker 2:

And it's the same with writing. I have a million notepads and I love pens. I do a lot of writing with no computer; just write it. Re-engaging, you know? So you have to think.

Speaker 2:

The next thing I'm going to buy is a typewriter. I remember being in school, and we had to type our reports. You'd type the whole thing, and then in the very last paragraph you'd make a mistake, and if you didn't have the whiteout thingy, or the whiteout would look really bad and you didn't want to turn it in because it looked so bad, you'd have to scrap it and type the whole page over again. But that teaches you to be careful about what you do, and I think a lot of times now kids aren't really careful about what they do. This kind of easy access has made a lot of people, and this is going to sound really bad, sloppy and lazy, right? So they just one-shot it.

Speaker 2:

You know, I'll write something and turn in my first draft, or I'll turn in the paper and it looks really bad, folded or whatever, and I give it to the teacher. So I'm a stickler: don't give me something crumpled or stained. Have pride in your work and think about what you're doing. The first draft isn't it; go write it again. Second and third drafts are always better, no matter how perfect the first one seems to be.

Speaker 4:

Well, I really like that answer you provided, Marsha, about making sure that people take pride in what they do and how they think. So the question I've got, related to that: I was at a conference earlier this week where the keynote speaker was talking about the impact of influence operations on our culture today, and it made me think that you've got this confluence of a huge amount of data, social media, and now artificial intelligence. So, going back to Josh's comment about having the wisdom to navigate these things, do you feel that AI will help, hinder, or have no impact on how these things come together, so that one can differentiate fact from fiction, see the nuances in the data that we have, and apply some of that wisdom to this enormous landscape and the tools before them?

Speaker 2:

Yeah, I think that's a hard question, because I want to say that AI is going to help, but I think in reality, especially right now, it's going to hurt a little bit, because people already don't know how to distinguish fact from fiction, and it's always relied on being able to trust the people that they're listening to or seeing. Now, I gave a talk at a school, and I made a deepfake of their principal saying something that he would never say. I got his permission to do it; I didn't just do it. When they first were watching it, he started off fine, and then he kind of went off the script. Some of the kids, you could tell, were really happy with what he said, because it was something they wanted to do. I don't know what it was, but maybe you can run around the school or something like that, and they were really happy. Other kids kind of cocked their heads a little bit, like, hmm, is that Mr...? They were confused, because it went against what they knew about that person. So in a context like that, if it's somebody that you know, and you can trust what they would or would not say, then that's fine. But in this world of social media, you're halfway around the world; you have influence all around the world, right? So if somebody on social media says something, I don't know if it's something they would actually say or not. And then younger students, young people, are going to run off and do what this person said. So imagine if you made a deepfake of Taylor Swift. She has a billion Swifties, and now, all of a sudden, all these ten-year-olds are going off and doing something because Taylor Swift told them to do it. So that worries me a lot.

Speaker 2:

Right now, I don't know how people will distinguish truth from fiction. I personally think that's a problem, and I don't know how to solve it.

Speaker 2:

I'm sure somebody smarter than me will come up with a way to solve it, but for me that's the biggest problem now: you can't actually trust what you see. For the first time, I think, in human history, you can't trust what you're actually seeing. And with the advent of better holographic technology, which I think is the next thing, it's going to be really, really hard to distinguish fact from fiction. I'm hoping that it doesn't spark a backlash where technology is banned or something like that so people can kind of catch up. But right now, I think the bad actors are going to be acting faster than the good actors can to stem whatever is coming.

Speaker 3:

In your school, Marsha, are the students able to carry their cell phones with them throughout the day? The reason I ask: in Minnesota, some schools have adopted a no-cell-phone policy, because the students were so distracted that it was impacting the education and learning of other students. So now they've taken the stance of no phones in the classroom; they have to put them away in a locker or something like that. I'm just wondering how your school deals with that and what you might have seen as an educator.

Speaker 2:

Yeah, we're dealing with that right now. In primary, it's not allowed at all. In middle school, it stays in your locker; you can't have it out during the school day. And in high school, they're allowed to use it, but not during class.

Speaker 2:

The phone thing is huge. I think a lot of kids are addicted to their phones. I remember I was working in Turkey, and I took a phone from a kid once. I'm a big no-phone person in class. I mean, I'm getting paid to teach you, right? So I want you to pay attention to what I'm talking about.

Speaker 2:

And I did the draconian thing: take away your phone for a week. She about had a heart attack because she couldn't have her phone. As a person who has phones that I don't pay that much attention to, and who isn't really on social media, I think that was the worst punishment you could ever inflict on somebody, taking their phone away. So I think a lot of schools are now grappling with it.

Speaker 2:

I saw a couple of things in the news fairly recently about kids who've gotten into fights with a teacher if they take their phone or threaten to take their phone. That feels to me almost like an addictive behavior. It's weird, the attachment that kids have, and it's the same with tablets. I've seen a lot of babies; people give their really young children these tablets and they're just on the tablet. And what you notice with the phones, more so in middle school than in high school, is that you'll have a bunch of kids together.

Speaker 2:

We had a screening of Black Panther 2 a couple of years ago. It was a fun thing; kids got to stay in, kind of like a sleepover at school. And in an audience of about 50 kids, every single kid was on their phone during the movie. They weren't talking. I don't know what they were doing, but they were on the phone, kind of paying attention to the movie, and then at maybe a couple of scenes everybody would look up, watch the scene, and then go back to whatever they were doing. So they're in community, they're together, but they're alone, right? They have a buddy nearby, so they can see another human being, while they're not interacting with whoever or whatever is in their hand. I think that's strange. It's going to be a very interesting world in the next couple of years, full of people who don't know how to actually interact with one another, unless we do something about it.

Speaker 1:

One of our local legends here, Prince, when he was alive, was known for making people lock up their phones before going into his shows at Paisley Park and elsewhere. As a musician, I think that's a great idea, because, like you say, people are so distracted on their phones, and it really takes them out of the moment. It comes back to that wisdom of felt experience. I'd like to back up just a little bit. Since we are a cybersecurity firm, I would like to explore the deepfakes you mentioned, and I was wondering if we could chew on the topic of how AI could threaten security in the future, or the near future. Deepfakes, we've probably all seen them on Instagram or Facebook, and there are some obvious ways that could manifest as a threat. We've also heard of the voice-generated phishing calls and things like that. But are there any other areas that you all see that could be an emerging threat in the cybersecurity domain?

Speaker 2:

Well, I think it really depends. People not having good passwords, or people not using two-factor properly. There are ways you can kind of cheat with two-factor, right? Instead of fingerprints, you can use things like your parents' names or the street you grew up on, all kinds of stuff. So I think as the technology gets more advanced and AI gets better and better, it can maybe figure out what Marsha's password would be, or it can search through my stuff and figure out a likely password. It could pretend to be me better. With the calls or the emails we get, it will know the tricks, because it's reading the information. So it's kind of like having a spy in the works, right? It knows what the cybersecurity protocols are, so it could get around them better than a human being who has to figure out what those protocols are, things like that.

Speaker 2:

And I also believe there's been a lot of money, of course, in trying to figure out how to hack systems and things like that. There's tons of money behind that. So if attackers are using AI to make themselves better, it'll be interesting to see the same on the side of good, right? Both sides, I guess, will be trying to win the AI war. Whoever has the best algorithms will get ahead. I don't know. I think it's worrisome for me. I'll put that out there.

Speaker 3:

It is worrisome for me too, what the next couple of years will be like. Six months ago, I took a lot of content that I had generated, a lot of written content, business emails, and ran them through a language model, and then had that language model respond as me to some other emails that I needed to respond to. I was thinking, well, can I take a shortcut here? Because a lot of the time I'm saying the same thing, and if it's just kind of a B-tier email where I don't need to really do anything but I need to respond somehow, can I leverage the model to respond and sound very similar to me?

Speaker 3:

And it was actually pretty scary how close it did sound to me. So I don't think we're far away from having an integrated email client that knows the history of years of email you've generated and can respond on your behalf. We're probably only 18 months or two years away from auto-generated replies, where instead of predictive text that might finish your sentence, it asks, do you want to say this in response to the email? On one hand, I could see people totally embracing that. But that goes down a rabbit hole pretty quickly, and it's probably an area we don't want to be in, because now we're having technology generate content for us. Our opinions and our values shift over time as we learn, but if you're not re-influencing that model, it's staying the same.
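For readers curious how the experiment described above might work mechanically, here is a minimal sketch of the few-shot "reply in my voice" idea: past emails become style examples in a prompt for a chat model. The function name and message layout are illustrative assumptions, not any real product's API; the sketch only builds the prompt, and actually sending it to a chat-completion endpoint is left out.

```python
# Sketch: assemble a few-shot prompt so a chat model drafts replies "in my voice".
# The helper name and message format are assumptions for illustration; the list of
# {role, content} dicts follows the common system/user convention used by chat APIs.

def draft_reply_messages(past_emails, incoming_email, max_examples=3):
    """Build chat messages: style examples from my sent mail, then the
    email that needs a reply. Returns a list of {role, content} dicts."""
    examples = "\n---\n".join(past_emails[:max_examples])
    system = (
        "You draft email replies that imitate the writing style shown in the "
        "examples below. Match tone, greeting, and sign-off.\n\n"
        f"Style examples:\n{examples}"
    )
    user = f"Draft a reply to this email:\n\n{incoming_email}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = draft_reply_messages(
    past_emails=["Hi Sam,\nThanks for the update. Sounds good.\nBest, Eric"],
    incoming_email="Hi Eric,\nCan we move Friday's call to 2pm?\nSam",
)
# `messages` could now be sent to any chat-completion endpoint.
```

The point the sketch makes is how little is needed: a handful of sent emails as examples is often enough for a model to pick up someone's tone, which is exactly what makes the impersonation risk discussed here so real.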

Speaker 2:

Yeah, and as you're talking, I was thinking about when I was creating the deepfake, and I'm no wonderful deepfake creator or anything, but the fact that I could do it, both imitating the voice and manipulating the image, was really scary, because it took me like 20 minutes to do it, right? So if I was really motivated, I could join a Zoom call, some kind of video conferencing like we're having, with a made-up me (I am real, by the way), and just type my responses, and my mouth is doing the right things, my hands are doing the right things. If I'm not very good, there might be some glitches, but if I'm really talented at this, and I'm sure there are people around the world who are really talented at this, you could do it. And then someone could say, yeah, I spoke to Marsha and she told me, this is the evidence, this is what she told me to do, so I did it. And I'm on a beach somewhere, not even aware that this is happening.

Speaker 2:

So identity, I think, is going to be really, really important. And like you said, you chose to use the bot to do this. So how much are we buying into it for convenience, and where is that going to come back to bite us in the end? There has to be a lot more thinking going on about how we use AI and what AI is allowed to do, and of course there are going to be people who get around whatever rules we set. So for the general public, how are we going to use it, and what's going to be allowed? I'm sure the whole legal profession is going to have something to do with it, or policies, governments, because it's getting better faster than we can keep up with it regulation-wise.

Speaker 3:

On that regulation topic, and Bill, you might have something to add here as well, because I know you do a lot of regulatory work around auditing. Just rewinding a little bit, Marsha, to where we were talking about AI and how we might not want AI providing our medical direction: there might be areas where AI is actually better suited, maybe a radiology review, where today it takes time for that image to be reviewed and then go to a doctor and then back to you, so it could be maybe a week before you get your results.

Speaker 3:

Whereas a good use for AI could be running that image through a model that has seen not just thousands of these images but millions, has access to a large database of images, and may just have a finer-tuned way of ascertaining what is actually in that image, providing a better diagnosis from the image than a practitioner could.
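As a side note for readers, one hedged sketch of how an imaging model might be deployed responsibly along the lines discussed above is confidence-based triage: the model screens first, and every case still routes to a human radiologist, just at different priorities. The thresholds and score format here are invented for illustration, not clinical guidance.

```python
# Sketch: confidence-based triage for an imaging model. Scores are assumed to
# be model probabilities in [0, 1]; the thresholds are illustrative, not clinical.

def triage(finding_probability, low=0.05, high=0.95):
    """Route a case based on the model's confidence.

    Very low scores are provisionally cleared, very high scores are flagged
    urgent, and everything in between goes to a radiologist as usual.
    Every route still assumes eventual human sign-off."""
    if finding_probability >= high:
        return "urgent: radiologist review now"
    if finding_probability <= low:
        return "provisionally clear: routine human sign-off"
    return "uncertain: standard radiologist review"

print(triage(0.98))  # urgent: radiologist review now
print(triage(0.01))  # provisionally clear: routine human sign-off
print(triage(0.50))  # uncertain: standard radiologist review
```

The design choice is that the model reorders the queue rather than replacing the diagnosis, which sidesteps some (though not all) of the licensing questions raised next.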

Speaker 3:

And we may not be there today, but in 10 years we might be. I think our legal and regulatory system has not caught up to that, where today, I believe, in order for radiologists to review your image, not only do they have to be licensed in the US, but they also have to be licensed in that state. So I think during COVID there were times when physicians had moved out of state, but they were VPNing in and were able to view the images that way, because technically, if they're VPNing in, they're in state.

Speaker 3:

But to me that just seemed a little bit backwards, because we could take advantage of radiologists in a time zone 12 hours opposite of ours. You get an image at 8 pm in the ER, and that image could be read by somebody in daytime somewhere. Maybe they're not in your state, but they're half a world away and they're just as educated, and maybe all they do is review these images, so they're going to provide a great reading. But that wasn't legal to do medically, because they're not licensed in the state. Now we can take that one step further: are we going to be licensing different AI technologies, and will those AI technologies have to have a medical license in order to provide a diagnosis? It's just that legal, efficacy, and outcomes don't all seem aligned right now.

Speaker 2:

Right. Yeah, I think there have been studies done where AI can spot something that doctors haven't spotted. But I think you're right about the whole regulatory issue. It's going faster than we humans can comfortably process and come up with something that makes us feel comfortable. And also, I think some industries might feel more threatened than others. Take legal, for instance. A lot of the, I guess, the bite in legal is that they have this body of knowledge: they have read all the things, they've seen all the things, and they know what law applies to what. Whereas if the AI can do the same thing, then it's going to be like, hey, why am I going to pay this person a million dollars when I could take five minutes on ChatGPT and it spits out the same advice, right? So there are some protectionist things, I think, that are going to be going on, and people are going to have to figure out what they are going to do to stay relevant. So I think there's going to be some pushback, and that'll be true for all of the knowledge-intensive or information-intensive jobs, even programming. With programmers now, you can ask it to write code for something in any language. You don't need to know the language yourself: write Python code that will do XYZ, and it can do that for you. So, yeah, I think it's going to be really interesting again.

Speaker 2:

You know, I think a lot of issues are going to be happening right now, especially for people in college. What are they going to study? What should they study? Is their job going to be obsolete when they graduate?

Speaker 2:

You know, if the AI systems are getting so good, what should we do as humans? There's that WALL-E movie, where all the people are just doing nothing, I guess vacationing for the rest of their lives while the bots do everything. Is that the fate? We need to figure out, as a people, a global people: What do we want our future to look like? What do we want AI to do for us? Why are we doing these particular things with AI, as opposed to having it help us solve some other types of problems?

Speaker 2:

Is the social structure going to change? For the past hundred and some years, it's been industry producing stuff, and then it went to knowledge workers. Now knowledge is being outsourced to AI. Is it going to flip back to physical work and people? AI can't unclog a toilet, right? So is it going to flip? I don't know. It'll be interesting. Again, I think we live in really fascinating times. We're lucky and unlucky at the same time.

Speaker 1:

Yeah, I want to kick it to Bill really quick. I want to get his take on how that ethical concern dovetails into cybersecurity. Have you come across anything like that that raises any flags in your experience, Bill?

Speaker 4:

Yeah, so I mean from a cybersecurity perspective, we work pretty closely with compliance and with legal and some of the concerns that we've seen are around AI and its usage of intellectual property.

Speaker 4:

You know, potential violations of copyrights, of course; ethical concerns about plagiarism; ethical concerns about what AI could do to put human life at risk in some unwitting fashion. Certainly a lot of that. And I'm wondering if you've seen any of that in some of your work. As a follow-up, you just spoke, I think, very sagely, about how regulations aren't keeping up with any of this. But I was also wondering, in your domestic and international experience, are you seeing any cultures or governments who are at least trying to approach it more aggressively or more innovatively than we might otherwise see from our positions?

Speaker 2:

Okay, so I think China has been at the forefront for a while; it seems like a lot of work is coming out of China around AI. I don't read Chinese, unfortunately, so I can't keep up with the Chinese literature, but I think a lot of work is being done there. Part of it is, of course, that they have a billion-plus people, so they need systems at scale, and they have so much data they can work with, because they don't have the same data-protection or privacy laws that we do here. So I think a lot of interesting things might come out of China just because they have access to so much data. As for the other part of your question: oh, copyright. I think copyright is a really interesting thing. In the beginning, when the kids first heard about AI, everybody was doing their papers: write me an essay on XYZ, and okay, it spits it out. But now, as people are getting more sophisticated, you can say, write me a paper on XYZ in the style of William Faulkner, or write this in the style of whoever. And when you're doing a paper and it's trawling the net to find information, you don't really know whose ideas it has picked up out of the hundreds of millions of works, so you can't accurately cite anything or give credit to anyone. It just gives you the compilation, unless, again, back to prompt engineering, you ask it to annotate and cite. But I think copyright is a big one.

Speaker 2:

Even music. I know they had a couple of examples, I think Jay-Z was one, Taylor Swift another, Drake another, where they would say, sing a song that sounds like this person, in this style. In one of the workshops I did, I used some of that music and had people rate it. In the beginning it was easier, because everybody now knows AI can do a lot of things. I said, what if I told you this was written by a computer, that it's AI-generated? Before you told them it was AI-generated, it was fine: this nuance and that and whatever. But once you said, oh, this is AI-generated, then they started to find issues with it.

Speaker 2:

So I think, yeah, if I were a creative, I'd also worry about, I guess, copying an artist. I don't know the legal term, but is copying an artist's style a copyright issue? I can say, draw in the style of Picasso, or of somebody living. I don't need artists to draw things for me anymore, really; I can ask AI to create an illustration. And then you have the issue of who that illustration belongs to. I mean, I guess it's mine, because I asked it to generate it. But then can someone else use it or not? And if they do, do they have to credit me? It's really complicated. Like I said, our laws currently have no reference, in my mind, to what's being done, because the people who made the laws 50 or 100 years ago were not thinking that one day artificial intelligence could do this kind of work.

Speaker 3:

Yeah, and that ownership you mentioned, where you prompted it to create that particular work of art. Well, did somebody else? What if you used ChatGPT to create a Midjourney prompt to create the art? How do you rewind that? And it kind of goes back to the digital art we were talking about at the beginning of the episode, where art was maybe borrowed or taken from different cultures without people really understanding the meaning, and then they were creating these digital representations of it and monetizing it during that boom in digital art technology. So that does really get into a gray area pretty quickly, from what I've been seeing.

Speaker 1:

So I'm currently composing music for NBC and ABC, and the one benefit we have at the moment (job security, I should say) is that they won't touch anything that can't be copyrighted, because the last thing they want is to get into legal hot water over something that's on air.

Speaker 1:

So, as it stands right now, they won't touch anything AI-generated, because you're not able to copyright it. So we have some sort of protection there. However, I also think it's interesting, and maybe you've come across this too, Marsha, with your students: the younger kids, Gen Z, Gen Alpha, are starting to be able to identify AI-generated music, for now. Sometimes I can too, as a seasoned musician. There are little nuances, little glitches, artifacts if you will, still embedded in that stuff that are kind of undeniably AI.

Speaker 1:

But yeah, you're right, there's the public domain, and it will be interesting to see. We're still using a ton of baby-boomer-era music in our commercials, in our culture, from the Beatles to Elvis to Chuck Berry to Little Richard. That stuff still gets a lot of traction in the advertising world, and it's soon going to be in the public domain; I think it's 75 years. I think they just had this happen with Disney, where the original Mickey Mouse is now public domain. So all sorts of crazy things are happening with that. So yeah, you're right, we're on a new frontier, and it'll be interesting to see how fast the lawyers and the ethics of our culture can catch up to what's happening here.

Speaker 2:

I think that's the beautiful thing, though. Going back to what I said in the beginning, right now I feel like I can tell the difference between AI-generated and real, whatever that means, but I think it's the combination of both that's really great. You can have AI start something, or you could have AI finish something; it doesn't have to be a hundred percent. I'm thinking about music. I think they did a thing where they had like 50 country songs and it was all the same song, maybe a little bit higher or lower. But if you have a melody, AI can maybe complete it or help you go to that next stage, and it could be a mashup between human intelligence and artificial.

Speaker 2:

I think for me, right now, that's where the beauty is. You have an idea, you write something, and AI can help you figure out, okay, this is where I want the story to go, or this is where I want the art to go, and I can erase it without losing everything, or archive it, or save it, or have a hundred iterations. So it helps you augment your own thinking, and I think that should be copyrightable, I'm hoping, because it is a combination of your work and the generative work. Which is what real musicians do (and I'm not a musician, so, Joshua, you can shut me down): nothing is from nothing, right? You are reminded of something, or you might see a link back to a composer, or a folk tune or whatever, in what you're doing. And I think AI is kind of helping you do that, but in a different way.

Speaker 1:

Well, absolutely. I mean, all American music is appropriated from different cultures. We're a melting pot here, and the dumbed-down version of it is that American music is Western classical mixed with the intricacies of African rhythms, and blending those two things together is where we come up with blues and jazz and folk music, and that's a beautiful thing. So when you have that melting pot, I hope AI can contribute to it in a positive way. And then also, to your point, make creativity more accessible to people who might not have the means to buy a baby grand piano or a Fender Stratocaster. Maybe it opens things up. That comes with its own ramifications, the clutter and all those things that can happen when everyone is able to create. But hopefully it's largely for the better, a positive thing for humanity, I would hope.

Speaker 2:

I think that's true. I'm thinking about all the art that I generate now. I love art-filled presentations; mine have lots of art in them, and before, that would be almost impossible. I mean, I would do it, but it would take really long hours trying to get the thing right, or to find a photograph that matches what you want, or, if you could afford it, to go on Fiverr and have somebody try to replicate what you want.

Speaker 2:

But now, between Midjourney and even Canva, it's really simple. You can generate what you want and tweak it, take this part, ask it for this kind of f-stop, whatever. You can really incorporate what you know about real art in real life and have it manufacture unique things that suit you, and I'm really thankful that AI exists for that. If there were just one thing it could do, creating art for me would be it. And the translations, just being able to translate whatever into whatever language I want. Now you can reach so many more people. The world literally is at your feet, and I think that's a beautiful thing.

Speaker 1:

I think that's a great place to end it today, on a positive note. We're at an hour, unless anyone else has something they want to get in before we wrap.

Speaker 3:

Josh, I just wanted to ask Marsha: if people are interested in learning more about what you do, or in getting involved in working in communities outside of their own, maybe outside of the domestic communities here (it sounds like you've got some interesting travel, and you're seeing things all around the world), where should people go to learn more and maybe participate locally?

Speaker 2:

Yes, so we have a website. The email is events@iftheselandscouldtalk.org; it's a really horrible URL, we've got to work on that. But go to iftheselandscouldtalk.org and you can see all the things we're actively doing, and you can also just email me; my email is on there as well. So: events@iftheselandscouldtalk.org.

Speaker 3:

Great, thank you.

Speaker 1:

Wonderful. Well, you've been listening to the Audit, presented by IT Audit Labs. Today our guest was Marsha Maxwell; you can find her online by searching her name. We've also had Eric Brown and Bill Harris from IT Audit Labs on today. My name is Joshua Schmidt, your co-host and producer. You can catch us every other week; we publish on Mondays, and you can find us on Spotify, Apple, and wherever you get your podcasts. Please tell a friend, like and subscribe, and we'll see you in two weeks.

Speaker 3:

You have been listening to the Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact, and our security control assessments rank your level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and by subscribing to this podcast on Apple, Spotify, or wherever you source your security content.