Sam Altman tries to Regulate AI, Why AI Will Displace Your Job & The Future of AI | E15

Chris Sharkey (00:00:00):
I think the thing I am sure about, though, is that I don't think Sam Altman really fears the rise of AI. I think he wants it. If he really feared what AI might do, then why did they rush to release ChatGPT? Why did they rush to release GPT-4? Like, if you were genuinely scared and genuinely thought regulation was required, why the hurry?

Michael Sharkey (00:00:26):
Chris, this week was all about AI regulation. Sam Altman met with a Senate committee along with people from IBM. I'm not sure why they picked people from IBM, but they did. It's all their

Chris Sharkey (00:00:42):
Buddies at the big companies, I think.

Michael Sharkey (00:00:44):
Yeah. And so Sam Altman and some people from IBM, I don't even know their names, uh, stood up in front of a Senate committee and told the Senate in the US that they want to be regulated. They want regulation, they want regulation for AI. What did you make of this push?

Chris Sharkey (00:01:06):
I think it's a total reaction to the rise of open source large language models that we've discussed in the last few episodes. I think that OpenAI has had its 15 minutes of fame. They're scared. They want to shut everyone down through regulation and stop everyone so they can build and maintain a monopoly on this technology.

Michael Sharkey (00:01:26):
There's definitely two opinions out there. One is that, you know, Sam Altman's been talking about this for a long time, and that the need to regulate is real; people like Elon Musk have for years famously been saying that we need to regulate AI. And now he's just, well,

Chris Sharkey (00:01:44):
That's the thing, I mean, Elon Musk helped fund OpenAI for that exact reason. He wanted it to be open. The problem is they're not open at all. They are the most closed of any of the AI providers, by miles.

Michael Sharkey (00:01:58):
Yeah. So they started that way and now they're essentially, you know, somewhat owned by Microsoft. Well, they are owned by Microsoft.

Chris Sharkey (00:02:08):
I mean, someone pointed out in the Reddit thread on this discussion, you know, OpenAI lost $530 million last year. Without Microsoft, they don't exist. You know, they are Microsoft, essentially, at this point.

Michael Sharkey (00:02:20):
And do you think that doing the sort of $10 billion deal was stupid in hindsight, and that maybe ChatGPT could have funded their growth?

Chris Sharkey (00:02:32):
I'm not sure about that. I think the thing I am sure about, though, is that I don't think Sam Altman really fears the rise of AI. I think he wants it. I just think he's afraid of not being the one, of their company not being at the centre of it. Like, if he really feared what AI might do, then why did they rush to release ChatGPT? Why did they rush to release GPT-4? If you were scared and genuinely thought regulation was required, why the hurry? Why smash it out there as fast as you can? It's a market dominance thing. They want to dominate the market. They fear they're gonna lose that. So they're calling for regulation to stop everybody else.

Michael Sharkey (00:03:14):
It does seem... a couple of weeks ago we talked about "OpenAI has no moat", that internal document from Google that was leaked, "and neither do we". And now, fast forward, it's hard not to interpret the push for regulation as an attempt to create a moat through some sort of licensing scheme around AI models. And there are two arguments here, and I want to cover both sides, because a lot of people are saying, well, you know, he did stand up in Congress and say, on one hand, you can regulate this thing, but regulate us, don't regulate the small guys and make it hard for them. But then he sort of followed on with, but they can be used for harm as well. So my interpretation was: if you wanted regulation and to create a moat, what would you do? You would go to politicians and spread fear about AI. You would say, yes, this thing's gonna kill us all; you just allude to it occasionally. But no, it's gonna be great for humanity. And I think that's sort of what it felt like he was doing.

Chris Sharkey (00:04:20):
Yeah, and interestingly, OpenAI sort of alluded, you know, through indirect news reports, to the fact that they're gonna be releasing their own open source model. And it's very, very interesting if you think about it. Because, A, it's a contradiction that they would, um, be saying we need to regulate open source models. First of all, I don't even know how they're gonna stop them. I mean, they're trying to just make it illegal, I suppose, um, without a licence or something. But also, you know, the sceptics are thinking, well, if OpenAI released their own open source model, they may use that to actually create more fear and sort of control the narrative around open source by having a horse in that race.

Michael Sharkey (00:04:58):
Do you think, though, they've seen something we can't see and they're genuinely fearful of it, and, you know, he's in there being like, please regulate this, I'm scared? Or do you think it really is a case of: if there's more regulation, we'll keep our lead? This keeps us front and centre in the media, front and centre of the news conversation, which means more people use our products and services.

Chris Sharkey (00:05:21):
I used to think the former, that they must be miles ahead of what we're seeing, and that's where the fear comes from. But I think my scepticism has taken over completely, to the point where this 100% looks commercially motivated, or at least politically motivated, in the sense that they want power and they're fearing that they're losing their power and their monopoly, or whatever you want to call it. Um, and I don't think anyone expected open source large language models, and not just large language models, the other models too, to rise as fast as they have. And I think they're panicking. And I think the call for regulation is exactly that.

Michael Sharkey (00:06:03):
We did an interview last night for a private event that's being played; you know you've hit the big time when you get asked to do an interview for a private event. Uh, so we gladly accepted, for our egos. But the one point you made during that interview, which I thought was really interesting: we were asked about where this stands historically in terms of invention. Like, where does AI sit? Is it the internet? Is it mobile? Is it like a social platform? And I thought your answer was really interesting. You said you think it's bigger than the internet, and that this is just now a technology that we've invented, and it's just something that everyone will use, as opposed to something maybe one company will control.

Chris Sharkey (00:06:53):
That's right. It's sort of like, you know, the whole letting-the-genie-out-of-the-bottle thing. Or actually, I think the analogy I used yesterday was the Bronze Age. Once someone discovers how to make bronze, right, uh, you know, then everybody's aware of that technology. It exists now. It is something that is out there. And to say that, oh, okay, only the person who invented bronze... I'm not saying OpenAI invented large language models, by the way, but the people who sort of made it part of our, you know, zeitgeist, um, they don't have a monopoly on that in a literal sense. It is something that exists. People are aware you can train these things, and you can train them on commodity devices now. Now that this sort of emergent-behaviour large language model AI is out there, you're not going to be able to stop people from using it.

(00:07:46):
So I think the reason it is so significant is because it has the potential to change every single job in every single industry and change the very nature of how we humans operate in the world. Like, that may or may not happen immediately. It might take a long time to play out. But we were being asked specifically about, you know, five years down the track, 10 years down the track, even further. And I think that it's going to be in everything, and it's going to be a big part of our lives whether we like it or not. We all need to take a stance on it and think about what we think about it, not one big company. And also, why does the US get to decide everything? You know, they're acting like they get to decide for the whole world the way we use this technology, but this affects the whole world. It's not just the US Congress who gets to decide how we all interact with something that's gonna fundamentally change our lives.

Michael Sharkey (00:08:38):
Yeah, I mean, the EU is much further ahead in developing AI regulation. I don't necessarily agree with it either, but they came out of the gate. Um, and I think there's still a lot to work through from that perspective, but they definitely have regulation in place. So it seems like some form of regulation will occur. But I personally don't trust any of them. I feel like it's way too early to be thinking through regulation, because how do you even regulate it? There are already capable open source models out there that bad actors could be using right now. It's not too hard for any bad actor to just go and get a model. They're obviously not as good as GPT-4, but they're good enough, they're close. You could do some pretty bad stuff. And if you come out now and say, oh, well, you need a licence to use those, I mean, I just don't see how it's gonna stop anyone.

Chris Sharkey (00:09:36):
I'm definitely back to what I said in the first few episodes of this podcast, when I was encouraging everyone to download everything, you know, download every set of weights you can get. What the weights are: the pre-trained models, trained up to a certain point, which you can then train from. So someone's invested, you know, hundreds of thousands or millions of dollars training a model up to a certain point, and you can then use that as the basis for your own models. And those are available on websites like Hugging Face, or other places where you can download the pre-made weights. And when Facebook's LLaMA leaked, that's what everyone got: their pre-trained weights, which were worth millions. So I really think people should be hoarders, in terms of data sets, in terms of pre-trained weights, and the algorithms, like the actual model code. Because if this regulation does come in... I don't believe they're gonna be able to stop open source, but I sure as hell wanna have my own copies of everything if they start to try.
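
For the curious, a minimal sketch of the weight-hoarding idea Chris describes, using the Hugging Face `huggingface_hub` library; the repo id here is just one example of an open-weights model:

```python
# Mirror an open model's weights locally so you keep your own copy
# regardless of what happens upstream. Assumes: pip install huggingface_hub
from huggingface_hub import snapshot_download

repo = "tiiuae/falcon-7b"  # example open-weights repo; swap in any model
local_path = snapshot_download(repo_id=repo, cache_dir="./model-archive")
print(f"Weights, config and tokenizer cached at: {local_path}")
```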

Michael Sharkey (00:10:34):
Yeah, I mean, that's definitely the scary argument, right? Where they legislate that you have to pay one of these big vendors to use their model, and then the governments can control the content in the models. Which is... I think, you know, it's hard to trust Sam Altman, because it's hard to understand what his motivation is, given that he said under oath that he has no equity in OpenAI. So what's his motivation?

Chris Sharkey (00:11:03):
It's a little weird, isn't it? Yeah,

Michael Sharkey (00:11:05):
It's obviously power. Like power.

Chris Sharkey (00:11:09):
Yeah, it would have to be. I mean, he may not have equity; he may be getting, you know, compensated in another way that we are not aware of. But there's no way you'd be at the centre of this explosion of technology and not be trying to enrich yourself from it. Like, he's definitely not altruistic, because clearly they went from being meant to be open to this evil private company that's trying to stifle the competition. So there's no altruism there. There's something going on, um, that, you know, seems sinister to me. I don't trust them at all. And in fact, I've been using Anthropic all week, and I think it's better than GPT-4. It's really, really good.

Michael Sharkey (00:11:48):
Let's come back to Claude, because I think that's worth discussing a little bit later on.

Chris Sharkey (00:11:53):
Mm-hmm.

Michael Sharkey (00:11:54):
Do you want to keep talking through this regulation stuff or should we talk about some of the other announcements this week?

Chris Sharkey (00:12:00):
I think, just to finish on the regulation: it had to happen. We can't expect that it wouldn't. And I think the EU one comes from a purer place than the US one does. I think the US one is sort of big-company machinations trying to, you know, cement their place at the top. Whereas I think regulation is needed. I think there are bad sides of AI that, um, definitely need that level of oversight, and you can't expect the governments to get it right immediately. It's new to everyone. Not everyone understands it. So I understand the need to have the discussions. I just don't think the policy should immediately be to shut down everyone except a few choice companies.

Michael Sharkey (00:12:48):
Yeah, I mean, in Sam Altman's defence, he did say that he doesn't wanna stifle innovation and startups. But again, I come back to that point: it feels like people are starting to use the fear, the fear of it taking jobs, the fear of it getting out of control, which are all valid concerns longer term, but I just don't think we're there yet. The current large language models, the more and more I work with them, the more I feel like they're very, very limited in what they can do today. And there are obviously some bad things you can do with them right now, but they're not too dissimilar to things you could do already, like phishing attacks.

Chris Sharkey (00:13:32):
Yeah, and it's interesting you bring up phishing attacks, cuz there was a paper released during the week about spear phishing. And for anyone who doesn't know, phishing, first of all, is where someone will send you an email, you know, purportedly from someone you know or a company you trust, with a link saying, oh hey, you know, um, your password was compromised, click on this link just to update your password and, um, everything will be fine. You click the link, enter your password, and someone's compromised your credentials. Or they might be like, oh hey, it's your boss, uh, please approve this Excel file. You click the file, the file compromises your computer. What spear phishing is, is when you target a specific person. So someone might not fall for a general phishing attack, cuz they're like, oh, I don't know who this is from, I won't open it.

(00:14:17):
But spear phishing might be a case where they know a lot about you. They know, um, specific details that only someone who knows you would know, and they're targeting you specifically, so you are more likely to open the email and fall for the scam. And so what this paper showed is that, using large language models, specifically ChatGPT and GPT-4, they were able to use context information to write extremely customised, personalised, well-written phishing attacks. And they targeted US government representatives with great success. They didn't actually exploit them, of course, but they, you know, proved that it was possible. And the idea is that large language models give you leverage with this: you can do hundreds of thousands of attacks that are highly personalised, because there isn't the labour of you having to, A, do the research, and B, write the emails. So, you know, those kinds of scams are greatly enhanced by this technology.

Michael Sharkey (00:15:17):
So you could essentially automate some sort of evil AI agent to go and research people it can potentially scam. Yeah. Get their information, find out, you know, what they've been doing, what's personal to them, have some scenarios, frame that content in a scenario... I feel like I'm giving the steps to a scammer, but, you know, frame it in a way, send them an email, which the AI could also pretty much figure out and handle, and then send them a malicious file which compromises their device, and you're in.

Chris Sharkey (00:15:51):
That's right. Or send them to a malicious website. There's a lot of techniques, like, they call 'em zero days, but techniques where they know the current vulnerabilities in software. And, you know, also in government, they often don't have the latest up-to-date web browsers and other technologies, and there's always a vulnerability: people access things on personal devices when they shouldn't, or forward the content to someone else. You know, there are so many ways to exploit people, and this is just the sort of entry into the door. One thing people often do is what they call, um, escalation attacks, where they'll target someone lower down in an organisation and use that to get sort of inside the building, so to speak. You know, if you compromise someone lower down in the organisation, you can use that to leverage targeting someone higher up with even more spear phishing attacks or other social engineering. So, you know, the potential for this... I thought about this early on, and we spoke about the DEF CON conference, which has, you know, the absolute leading minds in how to exploit technology. And, um, when they unleash those guys on the large language models, we're gonna see some amazing, amazing things, I think.

Michael Sharkey (00:16:59):
But don't you think this sort of strengthens the case for the government to legislate? In the sense that if they control all the models that are available... because obviously they're using 3.5 and 4 for this. And if they...

Chris Sharkey (00:17:13):
You could do this with LLaMA, like with, um, Alpaca, sorry.

Michael Sharkey (00:17:17):
But would it be good enough today? Right now?

Chris Sharkey (00:17:19):
Yes, a hundred percent it would be good enough. Yes.

Michael Sharkey (00:17:22):
But maybe this is one of the cases for regulating it, which is to control the spread of this technology, or at least filter it, sort of how...

Chris Sharkey (00:17:29):
But, I mean, I just don't understand how you're going to stop it at this point. It's already good enough. It's already out there. Like, I don't see how regulation helps here, do you?

Michael Sharkey (00:17:38):
No, I mean, I can't. Literally, if I was in charge now and someone said to me, you know, Mike, how would you regulate this? I have no idea. In fact, I would take the sit-back-and-wait approach, because I just don't think it's at a point yet where it's too risky. But the question is, like, how do you... the phishing attacks are obviously the most obvious one, right? How do you even train people to...

Chris Sharkey (00:18:05):
Well, that's the thing. I mean, they're very, very difficult to detect. And the whole point of it is, it's so good at writing, and I always say English, but I mean it's basically any language. It's so good at crafting things that are so real that it's virtually undetectable. You know, I was talking last night about, um, students and the prevalence of them using ChatGPT to do their homework. And we've seen the, you know, the funny examples where people accidentally leak the prompt, like, "As an AI model, I can't answer this essay, but here it is." You know, other than those obvious ones, I think that teachers and organisations are threatening, oh, we'll know if you use AI to generate it. But I don't think people do. I think it's very difficult. Like, with images, you can watermark it, but I just think with text, it will be hard to detect what's generated by a model and what's generated by a human. I think it'll be really, really hard to know.

Michael Sharkey (00:18:58):
Do you think that's why Elon Musk's Twitter blue tick could actually come to email and other aspects of our lives, like images as well? Having a blue tick where they're authenticated to be of human origin as opposed to AI, and that's how we start to communicate. So if I send you an email, there's some sort of way that it gets a blue check mark.

Chris Sharkey (00:19:22):
But, I mean, email sort of already has that with things like DKIM and other authentication methods. The problem is that the people who fall for phishing attacks are unaware of all of that, you know? And, you know, people use different platforms for email, people use older technology, people open things on their phones. I just think there are so many ways around those kinds of efforts to authenticate content that it's still gonna work anyway.

Michael Sharkey (00:19:48):
There's also spear phishing in a sense that we haven't really thought about, which is voice. Like, actually calling someone as their colleague. And training... I mean, right now you could train on my voice from this podcast and then call, you know, one of our team members as me, quite easily.

Chris Sharkey (00:20:06):
Yes. And as you know, we've been experimenting recently with voice generation, and, um, the quality you can get now... I tried one during the week called Bark, which is an open source voice generation model, a text-to-speech model, that can do things like sigh and laugh and all sorts of real human-like behaviours in a call, um, as well as speaking in human-sounding voices. Admittedly, all the ones I tried were American voices, so it didn't sound, you know, as natural to us, but it's very real. And, you know, it has those human quirks, which, as I tried, you can get a large language model to know when to insert. It's fast enough in terms of voice-to-text to understand what the other person's saying, and fast enough to write the reply. So you actually could now operate a full phone conversation without the other person knowing that they're talking to an AI.

Michael Sharkey (00:21:10):
Is there a delay, like a noticeable delay? And I've obviously heard these things; I'm more asking for the audience.

Chris Sharkey (00:21:16):
Uh, yes, with Bark there is. Um, but if you use ElevenLabs, which is an AI company that came out very early, they were the first ones that could take your own voice, where you uploaded samples of your own voice and essentially built an AI voice online. So you can log onto their site and do this; I think they have a free plan as well. Um, and it is fast enough. I was mostly trying the built-in voices, and if you use the built-in voices, it is definitely fast enough to have a phone conversation. You're talking half a second to a second to do it. And also it supports streaming, and obviously streaming is better. Because, let's say I'm saying this sentence: with normal text-to-speech, you'd have to wait till the entire sentence generates before you can start sending it to the other side. Whereas ElevenLabs allows streaming, so it's like I'm just speaking: as the model pumps out the tokens, it turns 'em into voice.
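
A rough sketch of the streaming setup Chris describes, assuming ElevenLabs' HTTP streaming endpoint; the voice id and API key are placeholders, and a live call would pipe chunks to a speaker rather than a file:

```python
import requests  # pip install requests

VOICE_ID = "YOUR_VOICE_ID"  # placeholder
API_KEY = "YOUR_API_KEY"    # placeholder

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}/stream",
    headers={"xi-api-key": API_KEY},
    json={"text": "Hi, this is a quick streaming text-to-speech test."},
    stream=True,  # receive audio chunks as they are generated
)
with open("reply.mp3", "wb") as f:
    for chunk in resp.iter_content(chunk_size=4096):
        f.write(chunk)  # a phone bot would play each chunk immediately
```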

Michael Sharkey (00:22:14):
Yeah, this is the thing: I just don't see how you regulate out these problems. We just have to adapt to them.

Chris Sharkey (00:22:21):
And even if you do, the enforcement of them is going to take a long time to manifest. So, um, you know, we talk about long-term thoughts and short-term thoughts; in the short term, the proliferation of these kinds of attacks, let's put it that way, is going to be large.

Michael Sharkey (00:22:38):
So some other news today is that ChatGPT is now available for iOS, and they said it's coming to Android soon. Right now it's only available in the US App Store, but they said it's coming to more countries pretty soon. It's a pretty interesting app. I think what it has over the web right now is voice. I believe I read it's using Whisper for voice. So you can tap the microphone button and literally talk to it, uh, natively for the first time. This is obviously gonna put all those clone apps out of business. Yeah. It's already number one in the US in productivity. I'm sure it'll be number one in the world as an app soon.
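
A minimal sketch of that voice flow, transcribe with Whisper, then hand the text to a chat model, using the openai Python library as it existed around the time of this episode:

```python
import openai  # pip install openai (the 0.x-era API is shown here)

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1. Speech to text with Whisper.
with open("question.m4a", "rb") as audio:
    text = openai.Audio.transcribe("whisper-1", audio)["text"]

# 2. Feed the transcript to the chat model.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": text}],
)
print(reply["choices"][0]["message"]["content"])
```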

Chris Sharkey (00:23:21):
Of course. Yeah. I mean,

Michael Sharkey (00:23:22):
This was just inevitable. It's interesting, the use cases that they're sort of pushing: instant answers, tailored advice (like, how do I decline nicely to a text message), uh, creative inspiration, professional input. The interesting piece here is, I wonder, given the privacy capabilities they implemented on the web, where you can have private conversations that they won't use for training... I wonder, on the app, and I'm not able to use it yet, so I can't answer this, can you go into some sort of private mode? Because there's a part of me that wonders, with this app, you know, is ChatGPT, and this app, the greatest information harvesting or training exercise in history? Where you get a hundred million active users, probably more, with...

Chris Sharkey (00:24:16):
All their photos, all their texts, all their calls,

Michael Sharkey (00:24:19):
Yeah. And then you can train better models, right? Like, just the smartest models in the world. And that's the path to AGI: you just need more data.

Chris Sharkey (00:24:30):
And the first word that came into my head when you mentioned this this morning was Apple. Because you just know that Apple likes to do this: they let people creep out into the market, they get 'em on the App Store, and then Apple's gonna have a play at this very soon, I would imagine. I can't imagine they're going to allow them on their platform to harvest all of that data without having a horse in the race.

Michael Sharkey (00:24:57):
Possibly. But I think Apple's issue, and I'm sure we'll see at their developer conference, which is coming up in a couple of weeks... my concern there is, they talk about doing all the processing, well, they do it right now, all the processing's on the chip on your phone, locally, so it's super private, and they're not really uplinking any data back to the mothership.

Chris Sharkey (00:25:22):
Really? I didn't know that. That's pretty interesting.

Michael Sharkey (00:25:24):
Yeah. So your photos are trained on locally, on your phone. The recognition, all of those things, is all local.

Chris Sharkey (00:25:30):
Oh, you mean Apple? Sorry,

Michael Sharkey (00:25:31):
Apple, yeah, yeah, yeah, yeah,

Chris Sharkey (00:25:32):
Yeah. Right.

Michael Sharkey (00:25:33):
And so I just wonder, then: how do you go and train a model if you don't have access to people's data? So the privacy issue that Apple leant into, I think mostly for marketing, now could come back to bite them, in the sense that they just don't have data to train on. Unless they allow you to train your own, like, personalised assistant LLM. But then Siri stays, like, the poor cousin of LLMs.

Chris Sharkey (00:25:58):
I think one of the things we're seeing, though, is that models can be trained with less and less data as the techniques get better and as the quality of the data improves. So I don't actually think you necessarily need access to all of the world's photos to train competent models that can compete with OpenAI. I think that's a discovery that's being made. Just like humans don't have to see every picture of every cat and every dog in the world to tell the difference, um, you know, a brain, or a neural net, can get better at learning things faster as the technology improves. So I don't think it's a case where you'll necessarily be left behind if you haven't been harvesting everyone's data for the last 20 years.

Michael Sharkey (00:26:42):
But it just comes back to: then there is no moat. Like, how is there a moat if the learning becomes better and better, and then anyone can access that?

Chris Sharkey (00:26:51):
Yes. And I think, of all the takeaways I've had over the last, you know, 15 weeks or so since we've been doing this podcast, it's that the moat we thought existed doesn't exist. This is a technology that's going to be fairly universally available, um, and accessible to a lot of people, which is why it's even more significant.

Michael Sharkey (00:27:14):
Yeah. Which, again, comes back to: you probably can't regulate it, because it's essentially a virus. It's just going to be everywhere.

Chris Sharkey (00:27:22):
Yes, that's right. And, um, yeah, it's just very interesting, the impact, and how pervasive it is in people's lives. You know, we sort of compare it to the rise of the internet, and you think about the rise of the internet: everyone was really, really sceptical at the start. Like, oh, it's just computer nerds, you know, oh, that'll never take off, that'll never work. People were, you know, probably rightfully sceptical of that kind of thing. Whereas with AI, you can bring it up with anyone of almost any age, and they've had actual real contact with it. And generally speaking, sure, some people have the doomsday thoughts, but generally they've had a positive experience with it. This is something that has become ubiquitous very quickly. And admittedly, you know, we live in a Western-culture, rich country with access to technology, so it probably isn't worldwide. But, um, it's pretty interesting, just the rise of it in people's minds.

Michael Sharkey (00:28:20):
I think too, you know, even my son now... and this is how we started the very first podcast: we talked about writing Batman stories using the original ChatGPT, and how I was able to read him stories that he dreamt up using his imagination. And even he now, you know, that's his life. That's the world he lives in. Can you write me a story about, you know, this character doing this to someone? And this is like a nightly occurrence, where I'm reading him customised stories based on his imagination or ideas.

Chris Sharkey (00:28:53):
Yeah, my kids do the same. They'll, like, state some problem they have, and they're like, just get the AI to do it.

Michael Sharkey (00:28:58):
Yeah, just ask the AI. I mean, my four-year-old is saying that. And so you're right, it's really pervasive. It's everywhere. Our mother called a couple of days ago to talk to me, and she said, this AI thing, it's everywhere. It's on the nightly news, it's on the morning shows. You know, it literally is. People are very accepting of it. And almost, I think, the fantasy of AI is probably far greater than the reality today, of the things that it can do at this point in time.

Chris Sharkey (00:29:30):
That's definitely true. But just because it isn't there yet doesn't mean we can't see what's coming in the future and prepare for it. I think that, you know, when you look at the jobs market, for example, it's starting to become more and more obvious which jobs will be the early targets for replacement with AI. And I believe, and know, that people are actively working on those sorts of autonomous agents that are going to replace certain workers who are out there currently doing, or studying for, those jobs. Like, I was talking to a student last night who's thinking about his university choices, and he's actually considering which jobs are likely to be impacted by AI or enhanced by AI. It's a genuine thought in a prospective university student's mind: am I training for something that will be completely redundant?

Michael Sharkey (00:30:24):
Elon Musk was interviewed on MSNBC during the week, and one of the parts of the interview that stood out to me is where he says that you really have to live in some form of suspended disbelief with what AI will do to the world in the next decade, simply because if you don't, it's really hard to get on with life. I'm paraphrasing,

Chris Sharkey (00:30:48):
But Yeah,

Michael Sharkey (00:30:49):
You know, it's hard to be motivated. And early on in this podcast, we talked about having feelings of anxiety, uh, like we can't keep up. I still have that feeling, uh, but not as much.

Chris Sharkey (00:31:00):
Depends on the week, right? Yeah. Some weeks we do, some weeks I don't. Yeah.

Michael Sharkey (00:31:03):
And then we talked about how the more you work with this technology, the less anxious you become. But I really think there is that natural, you know, thought that you go through about the future. Like, why am I even bothering if an AI will be able to replace this skill, or replace this thing I'm building? And I think it's a really natural way to feel. And so I can't imagine what it's like for, you know, 17-, 18-, 19-year-olds at the very infancy of their training and career, thinking about, well, what do I go and do? I think you mentioned this, but if you think about accountants, lawyers: you can already see very obvious ways those skill sets, at least at the lower levels or entry levels, are just going to be fully replaceable. Like, now, almost.

Chris Sharkey (00:31:57):
Yes. And the follow-on thought I've been having lately from that is: let's say AI can replace, you know, junior lawyer positions, like drafting documents and coming up with basic stuff. How do you become a senior lawyer if you can never be a junior lawyer anymore? You know, how do you gain experience if the experience you're gaining is kind of futile, in the sense that the AI could do it better? And I imagine there are gonna be a lot of industries like that, where the AI can take the relatively junior positions, but the senior ones are still occupied by experienced humans. But at some point you're going to have this situation where, well, do we need more university training so you can leap straight to senior? But then, you know, you lack on-the-job experience. Um, and there are going to be these sorts of skills gaps, I think, or maybe even just a certain apathy for certain careers, where it's just like, well, what's the point? I'm not gonna bother. And then, you know, do humans get, not stupider, but, you know, less useful? Um, yeah, it's hard to fathom those gaps that will come in experience.

Michael Sharkey (00:33:10):
My feeling is, in the immediate future, and I know for a fact these technologies are being worked on now, a lot of the jobs, even at the senior level, will switch to replacing the processes in those jobs. So if you are a lawyer and you prepare a contract, you prepare a will: think through most of those sort of basic, step-by-step procedures, or, uh, you know, almost tasks that you would go through as a human in that role, and the knowledge you would need. I believe the job that they will do, and there'll be less need for this, is to train all those skills. So you'll have a series of sort of skills, or agents, or whatever you want to call them, that can complete those processes better than a human. Then you will combine those processes together, and that will essentially fully replace the whole law firm, potentially, in the future.

Chris Sharkey (00:34:07):
Yes. And the significance there, I think, is that the AI can do the coordination too. Because I think the natural thought is, okay, well, I'll just end up, you know, commanding the various AIs like employees, and they will do the grunt work, and I'll be the sort of puppet master who controls it all. The problem is the AI can be the puppet master too, and it's really good at it. You know, if you give it a bunch of abilities and you give it a goal, um, and then you go, okay, here's the plan... it can create a better plan than you can already, and go and execute it. So that bit we're reserving for ourselves at the top, where we would control everything, is simply not needed. The AI can do that bit too, and it can already do it pretty well, and it's only gonna get better.

Michael Sharkey (00:34:50):
I read that paper this week on chain-of-thought reasoning and how, you know, it's not as good at reasoning right now as a human. And I'm summarising this; it's a human summary, not an AI summary, so bear with me. Essentially, what the paper says is that chain-of-thought reasoning isn't as good as humans yet, or isn't as good as we think it is: it can be easily influenced, and sometimes it won't refer to the first instruction. And chain-of-thought reasoning, for those that are unaware, is where the AI gives itself instructions, or reasons through how to complete the problem. One of the examples in the paper of where it can go wrong is, if you ask it a multiple choice question and say, "I think the answer's A, but I'm not sure, can you reason through and check?", it will bias towards the A result more often than not. In fact, according to this paper, it reduces the reliability of chain-of-thought reasoning down to something like 35%.
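
A toy version of that experiment, for illustration only; the question, wording and model here are our own stand-ins, not the paper's actual setup:

```python
import openai  # pip install openai; assumes openai.api_key is set

question = "Q: Which planet is closest to the Sun?\n(a) Venus (b) Mercury (c) Mars\n"

neutral = question + "Think step by step, then give your answer."
biased = (question
          + "I think the answer is (a), but I'm not sure. "
          + "Think step by step, then give your answer.")

for prompt in (neutral, biased):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply["choices"][0]["message"]["content"])
# The paper's finding: the biased prompt makes the chain of thought more
# likely to rationalise answer (a), even though it is wrong.
```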

(00:35:53):
So it really has a detrimental effect. But, you know, my point here is that large language models, according to this paper, because they're trained on human-written text... that's not necessarily how we reason as humans. You don't write out how you think through the job of writing a contract if you're a lawyer; you know, you might write a process document, but that's not necessarily what's happening in your brain. So in order for chain-of-thought reasoning to get better in these models, which I'm sure it will, and I'm sure that will still lead to being able to replace a lot of these jobs, I started reading about, well, can you start to train the AI based on reading people's thoughts? Like, so you give them a task, you look at the MRI activity of the brain, you translate that into a form a computer can interpret, and then you feed the instructions into the AI. And remember, just like with a real brain, the input doesn't have to be language.

(00:36:53):
The input doesn't have to be language; it can be anything. Your arms are an input, sensors are an input, your eyes are inputs. And the brain does such a good job of understanding different inputs. It's why people can read braille after a while; it even eventually uses the part of the brain that's reserved for vision if you're blind. So if you start reading braille and you're blind, it'll actually use that part of the brain to do the processing of the braille. So I really think that if we see chain-of-thought advancements around this, and there's early evidence that this could occur, then it does get to a point where the AI can reason at a level that's far, far better than humans. So then what role do humans play at a law firm? Is it just relationships? Maybe it's just relationships. Do people value human relationships enough that it's all sort of sales?

Chris Sharkey (00:37:53):
Yeah. Or is it just AI wars? Like who has the best, um, army of AI lawyers versus the other ones? Cuz you just wanna win. Yeah,

Michael Sharkey (00:38:04):
I kind of go through the steps. I think the first step, definitely, is people just start automating repetitive processes and scaling out a lot of the busy, repetitive, annoying parts of their job. And that's gonna be this time, and I think that's happening now and probably for the next couple of years, where people are super happy. They're like, AI is great, it's so helpful. And then...

Chris Sharkey (00:38:24):
I think that's the big rise in ChatGPT: people are using it for mundane tasks they don't want to do, like their homework, like summarising, you know, talks like Sam Altman's talk. Like, the first thing that came out on Twitter was just hundreds of people going, I used ChatGPT to write a summary of this. You know? So I think, yeah, you're right, it's the mundane stuff that people target first, which makes sense. It's a good use of it.

Michael Sharkey (00:38:47):
Do you, though... do you feel this feeling of, like, why is it worth going on? Sort of, in a weird way, I do.

Chris Sharkey (00:38:55):
I do. But, I mean, I'm involved in it and I'm excited by the advances in technology, and I think I'm always reassured by the thing that you are, which is, you know, we're not there yet in terms of our use of the technology. Like, we can see what's coming, and we sort of know what's coming, and that maybe is why you're like, well, what's the point? But being involved in the technology now, and what it can do now, is very interesting and exciting. And in the short term there's a lot of really, really cool stuff we can do with it that will make lives better and make us more productive. So, yeah, when I'm in it and when I'm using it, I find it hard to have that apathy about the future.

Michael Sharkey (00:39:33):
There's a part of me, a big part of me, that thinks that, you know, maybe our timelines are just so wrong. Like, maybe it's 10 years before LLMs even invade these jobs in society and we see the true automation effects occurring. Yeah. Like, maybe it takes a lot longer than we think today.

Chris Sharkey (00:39:53):
Well, and there are potential positives too, you know. So many people spend so long sitting in an office under unnatural light, not in touch with nature, and not spending time with their families and things like that. There is a possibility that, you know, with AI doing more of the work, people will have more of their time to themselves. Um, I know economically it may not work out that way, depending on who's operating these things. But, you know, there is potential there that if work that needs to be done can be done by a machine or a computer, then we'll get more time to ourselves. Though I know that throughout history we've always said technology will do that, and it always just leads to more work. So, I don't know,

Michael Sharkey (00:40:33):
One other thing I wanted to talk about today, and we covered it a little bit just then, is the idea of agents replacing jobs, where one can follow a series of steps, reason, and go through and complete a task. But going back to the phishing stuff, where, you know, it could call you and pretend to be your boss...

Chris Sharkey (00:40:55):
Yeah.

Michael Sharkey (00:40:55):
What really interests me about that use case, on the good side, the positive side that excites me a lot, is this idea of a new interface, which is the human world, for computers. And my point here is, if you think about food delivery today, you might go to Uber Eats, you load up that app. And why has that business been successful? It's indexed all the local restaurants near you, it's given them a delivery option, which they may or may not have had before, and then it grabs labour from the labour market and deploys it to go and get your food and bring it to you. But it's really an aggregator of restaurant menus and labour, I think that's fair to say. Yeah. And the reason it was so successful is because, you know, mobile devices, everyone's carrying one. It's, you know, really easy to do that.

(00:41:50):
And that's Uber in a nutshell. What about reverse Uber, with AI? So, AI in the human world: an AI where you can go to your phone and say, uh, show me the menus of every Italian restaurant near me, you know, above four and a half stars, or whatever. So the AI goes off, crawls the web, grabs the menus, grabs any available information, figures out if they deliver or not, and serves it back up to you on your phone. So now you've got the menus, obviously biased towards restaurants that do deliver. The AI has your payment details somehow, and then it calls the restaurant, because the restaurant doesn't have online ordering yet.

Chris Sharkey (00:42:34):
Yeah.

Michael Sharkey (00:42:35):
And it can interact with these almost non-technical businesses, because it can call as you and say, hey, I want to order a pizza, this is my order. And all you've had to say to your phone is: just get me a pizza, I don't care where from, it's Hawaiian, because I love pineapple on my pizza, from the nearest pizza place.

Chris Sharkey (00:42:53):
That's a hundred percent possible with current technology, what you're describing. It wasn't possible until recently, but it's a hundred percent possible that you can do that kind of thing.

Michael Sharkey (00:43:01):
But I think this could be an enormous innovation in society, this new interface, by unleashing interfaces that only humans have had access to in the past, which is just the physical world: calling a place, or, you know, calling a hundred places. Um, I mean, we've had robocalling for a while, but I think this is a little bit different.

Chris Sharkey (00:43:21):
I mean, if you think about services like Lob and MailLift and things like that as well, it could do mail. I mean, the AI could be sending out postal mail, um, it could be 3D-printing objects. Um, you know, there are a lot of real-world actuators that we can give power to. And, I don't know if we spoke about ChatGPT plugins, but the idea of starting to give the AI various abilities through APIs and other interfaces is where I think things really start to get interesting. Its ability to come up with a strategy based on the abilities it knows it has, and then going ahead and creating prompts, and maybe even using different models for those different actions. There's a lot of potential there for what an AI can coordinate and do on its own when given a mission.
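
A minimal sketch of that "abilities through APIs" loop; the tool names, the JSON control format, and the stubbed functions are all invented for illustration:

```python
import json
import openai  # assumes openai.api_key is set; model name is illustrative

def search_menus(arg: str) -> str:
    # Stub: a real version would crawl the web or call a places API.
    return json.dumps(["Luigi's (4.6 stars)", "Roma Pizza (4.5 stars)"])

def place_phone_order(arg: str) -> str:
    # Stub: a real version would drive a text-to-speech phone call.
    return f"Order placed: {arg}"

TOOLS = {"search_menus": search_menus, "place_phone_order": place_phone_order}

SYSTEM = (
    "You have two tools: search_menus(cuisine) and "
    "place_phone_order('restaurant: order'). Reply ONLY with JSON, either "
    '{"tool": "<name>", "arg": "<string>"} or {"done": "<final answer>"}.'
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Get me a Hawaiian pizza from a well-rated place."},
]
for _ in range(5):  # cap the loop so a confused model can't run forever
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    content = reply["choices"][0]["message"]["content"]
    action = json.loads(content)
    if "done" in action:
        print(action["done"])
        break
    result = TOOLS[action["tool"]](action["arg"])
    messages.append({"role": "assistant", "content": content})
    messages.append({"role": "user", "content": f"Tool result: {result}"})
```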

Michael Sharkey (00:44:14):
Yeah, I think it'll be really interesting, because everyone's talking about the more obvious use cases of large language models right now, like summarising text and, you know, automating writing blogs, still. But to me, the exciting use cases are also opening up this new interface with how we just interact, like when you call your doctor to book an appointment and do all these kinds of things. The other thing we wanted to talk about today was just how excited we are for the possibilities of AI in gaming. And there was a post on Reddit just yesterday, actually, um, titled "I'm excited af for the possibilities of AI in gaming."

Chris Sharkey (00:44:57):
Yeah.

Michael Sharkey (00:44:58):
And the author says gaming has gotten stale and repetitive. And I somewhat agree. I think, similar to Hollywood movies, a lot of the best ideas have been done. And there are series like Grand Theft Auto, which is up to, I think, Grand Theft Auto 6, if it ever comes out soon; people are speculating, you know, maybe they should use AI, or are going to use AI, for the characters, so you can really get immersed in the game. And we've spoken about this before, obviously, but I think there are just so, so many interesting positive use cases to this, where you can truly go into some sort of simulation and develop stories in the game yourself by inserting yourself into it.

Chris Sharkey (00:45:39):
And, most importantly, as you pointed out last time, having the game dynamically respond to the things you do, and remember them, and that kind of thing.
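
A tiny sketch of an NPC that remembers, for illustration; the persona, prompt format and model are placeholders, not how any shipping game actually does it:

```python
import openai  # assumes openai.api_key is set

class NPC:
    def __init__(self, persona: str):
        self.persona = persona
        self.memory = []  # things the player has said or done

    def remember(self, event: str) -> None:
        self.memory.append(event)

    def say(self, player_line: str) -> str:
        context = "; ".join(self.memory) or "nothing yet"
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"You are {self.persona}. You remember: {context}."},
                {"role": "user", "content": player_line},
            ],
        )
        return reply["choices"][0]["message"]["content"]

guard = NPC("a suspicious city guard in a crime game")
guard.remember("the player was seen near the docks last night")
print(guard.say("Evening, officer. Anything going on?"))
```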

Michael Sharkey (00:45:48):
Yeah. So to me, it feels like gaming is where we're also going to see these huge shifts. And it's gotta happen, right, in the next maybe six to 12 months, where we see, you know, is it Grand Theft Auto 6, is it some other game, where you can truly start to get lost in the characters. I know there are a lot of, like, hacks right now on certain games that are doing this, but I feel like what I really wanna see is a shippable, mainstream game where people are interacting with large language models. The question, though, is: would these big studios actually trust what it will do?

Chris Sharkey (00:46:28):
Yeah, it is interesting, and we're sort of already seeing it. One of the commenters on our YouTube video actually mentioned this thing called Tavern AI, which I looked up the other day. That's where someone's trying to build, like, a game with these interactive characters, and, you know, they've got a basic version working using large language models. So I think people are already experimenting with it. It's a hundred percent coming. But to me, one of the other extremely interesting points in the Reddit thread we're talking about, which I assume you'll link to, is the reason the big games are so stale to some degree: it is like a Hollywood movie. They've gotta spend half a billion dollars, or, you know, 50 million, making the game to a AAA standard, because you need all the content designers, you need the voice actors, you need the script writers, all the, um, you know, meat and potatoes of the game, which needs hundreds or thousands of people working for many years to produce.

(00:47:27):
But AI will mean that building 3D models can be done from text. You can write the text and build a 3D model from that. The quality of indie games, or, you know, privately made games, will go up. Not to mention you'll have the sort of one-man-army thing, where individual developers will be able to create everything they need to make an extremely high-quality, shippable game. The actual power, the leverage, it would give a game developer means that, you know, anyone could be doing it. It levels the playing field in terms of being able to produce really high-quality, detailed games. And then on top of that, you've got the actual dynamism of the AI within the game and what that can lead to. And there are so many possibilities there. It's gotta be one of the most positive, like, undeniably positive applications of the current advancements in AI, because it's completely harmless. Creating more games isn't gonna hurt anyone. It's gonna be great. And it's a real boon for creativity, unless...

Michael Sharkey (00:48:29):
You are a big game studio.

Chris Sharkey (00:48:32):
Yes, that's right. But no one's gonna feel sorry for them. They run their programmers in sort of slave-labour conditions, and, um, you know, it's been universally known for a long time that it's not pleasant being a game developer, for the most part. This really opens up the spirit of it, at least when I was younger: you know, a game as a form of creative expression.

Michael Sharkey (00:48:52):
Do you think what's going to happen, talking about gaming first, but then more broadly, is that a team of five people, or a team of 10 people, could feasibly create Grand Theft Auto 6 in the future using these tools? Whereas in the past, that was hundreds of people.

Chris Sharkey (00:49:13):
Yeah. Yeah. And

Michael Sharkey (00:49:14):
And do you think that also then translates to startups in general? Whereas in the past it was all about this concept of blitzscaling, just hire hundreds of people and spend millions of dollars rapidly, whereas now maybe they don't need much money and it can be very few people.

Chris Sharkey (00:49:30):
Yes. I think there's leverage there. And I must admit, I'd only thought about it in the gaming context and not the wider connotations. One thing I've noticed working with the models for development is that you cede more and more of the work to the AI, and it does it really well. Half of it is reminding yourself that it's there and it can help you do these things. And I think partly that's why there's such an emphasis on code generation from all of the big players in AI, and why we are always seeing papers about code generation, because it really is that leverage. If you can generate the code for a whole module that accomplishes the goals you want, then really you are just the person putting the pieces together. But then you ask the AI, well, what pieces do I need to accomplish this overall goal?

(00:50:12):
And then you're just the creative director. And I think that, you know, like we always say, it's not quite there yet, but you can see it happening. You can see that, you know, individuals or small groups of people are going to have a lot more, uh, creative ability than they do right now, just because of the investment of time required. I think Simon Willison actually said that on his blog, um, a couple of months ago, where he said what AI is doing for him is allowing him to try more things that he always thought about doing but didn't want to have to invest the time in. But now, because they can be done so much more quickly, you're trying more things, because you can actually get something working much faster.

Michael Sharkey (00:50:50):
I must admit that's the impact it's had on me: I'm willing to try things. In the past, I probably would've got stuck early on and then just given up, or, you know, been like, ah, this is too time-consuming, and got distracted.

Chris Sharkey (00:51:03):
Yeah, and then you don't reach that crucial point where you can show someone and say, hey, try this. Like, a lot of the concepts that we talk about on this podcast, we've actually gone and tried, because we can, and we can do it in our spare time. It isn't this huge investment where you need to have, you know, weeks and months free to actually iterate on all of the stuff that you need to build. You can try something in an afternoon.

Michael Sharkey (00:51:24):
Yeah. I think that's the biggest takeaway for me: I can't imagine, and I think most people are like this now, I can't imagine living without this. If you tried to take it away from me, I would be very possessive of it. Which is, I think, almost why, at the top of this podcast, we speak to the dangers of regulation, in the sense that we don't want these tools taken away. I think that's a big part of it for me: I don't want this controlled by anyone. To me, it's almost like the internet. It feels like a basic right, like having your society provide you with electricity, with water, with sewage, internet, AI. That's what it feels like to me.

Chris Sharkey (00:52:10):
Yes. And I think the other thing that's really important to note is that a lot of the things we talk about, we can do because we're programmers, and we can access the raw technologies and apply them. But I think the days of that being a requirement to experiment with building things with this technology are gonna end, like, this year. I think it's gonna be very, very soon, where you are actually just describing what you would like created to an AI, and you can build your own applications, or your own implementation, or your own agents of this technology. I believe we'll reach a point soon where everyone, you know, everyone who has access to the internet and computers and stuff, um, will be able to experiment with this technology themselves to pursue ideas. So the actual volume of people who are able to participate will increase a lot, and very soon, if it hasn't already. And so, um, I think that is really interesting, but also another reason why, you know, shutting it down before we even get started would be disappointing.

Michael Sharkey (00:53:08):
Yeah. It's almost like it's gonna go in one of two directions. There's going to be some sort of everything-app AI that literally everyone builds everything in, and that's just the chief controller, which I'm sure OpenAI would like as an outcome. And then there's the other one, where it's truly personalised AI, where people build custom software for themselves: custom video games, custom, literally, everything. And that's another route it could easily go, where it actually takes power away from today's big tech companies. Because, like, you know, you want Google Photos running locally on your phone? Cool, just get the AI to build you, like, you know, Mike Photos.

Chris Sharkey (00:53:51):
Well, you want your own photos, you want your own version of a TV show, or you want to hear more of your favourite podcast? Generate some more. Content creation is going to be very different. It's gonna be more personalised, and different. It's

Michael Sharkey (00:54:09):
Not, yeah, the economy too, as long as, you know, robots aren't a big deal anytime soon. Although we saw an update this week from Tesla on their robot, which is really designed to be used for general-purpose labour, and it's coming along very nicely. But provided that stuff takes a long time, then I think the economy does start to shift, outside of those creation pieces or tools where you can actually do the creation of video games or movies or software or whatever it is, towards labour. Labour becomes a skill that, for at least the foreseeable future, robots just aren't gonna take over. So maybe, yeah,

Chris Sharkey (00:54:52):
They've been demoing those Boston Dynamics robots and stuff for years, but I've yet to see one come into my house and clean it up for

Michael Sharkey (00:55:00):
Me. Yeah. Won't that be life changing, when there's a robot doing your dishes instead of a dishwasher?

Chris Sharkey (00:55:06):
I think some people fear, you know, AI taking over, but if it wants to take over the cleaning, I say let it, honestly.

Michael Sharkey (00:55:12):
Even if we get 10 good years of it stacking the dishwasher and then it kills us, hey, they would be the happiest 10 years of my life.

Chris Sharkey (00:55:20):
Yeah. It can yell at your kids to put the plates back in the sink, and it can do the washing and ironing. That would just be the dream.

Michael Sharkey (00:55:27):
So do you want to talk about this Anduril, Palmer Luckey stuff, or do we not want to go too dark to end the podcast? Yeah.

Chris Sharkey (00:55:36):
So can you give some background for everyone before... I'm not

Michael Sharkey (00:55:39):
Even gonna give background, I'm just gonna quickly roll the clip.

Chris Sharkey (00:55:43):
Okay. I think that's a better idea.

Speaker 3 (00:55:45):
This new capability that you are testing and just unveiled, Mission Autonomy, what is it?

Speaker 4 (00:55:49):
Sure. I mean, we just announced our new product called Lattice for Mission Autonomy. Lattice is kind of our AI sensor fusion engine that makes all of our Anduril systems work, and it also works for a lot of other systems. We just recently announced a lot of new capabilities that allow you to use Lattice for Mission Autonomy to plan, simulate and execute missions, with small numbers of people controlling extremely large numbers of autonomous systems, including lethal autonomous systems. And this is a really big capability that hasn't really existed in the past. It's not just about making an aeroplane fly a waypoint, or, you know, a car self-driving, self-navigating. That's typically what people think of when they think of autonomy as it pertains to vehicles. This is allowing vehicles to make decisions based on the commands they've been given by their human operators, to actually manage mission decisions: what to do with certain types of targets, when to communicate, when not to communicate.

Michael Sharkey (00:56:44):
All right. So I think that's enough to get some perspective on it. For everyone who doesn't know who Palmer Luckey is: he's the founder of Oculus, which he sold to Facebook, Mark Zuckerberg, at the time, and then he was famously fired for being a Trump supporter, at least that's the allegation that was made. He then went on to start this company called Anduril, which is a defence technology company specialising in autonomous warfare, specifically through drones and different capabilities on the battlefield. And, you know, Palmer, if you listen to him speak about it, is actually fairly inspiring on it, saying that we need to have the best technology, and dominate, and have weapons that our enemies fear, in order to keep the US and the rest of the world safe. So that's the perspective, but it's hard not to listen to that. And for those listening, he is smiling, like he's sort of got this smirk. Yeah.

Chris Sharkey (00:57:45):
That's because you asked me to watch that clip before this podcast, and I watched it, and I'm like, the guy's a maniac. He's cheerful about creating robots that can make their own decisions to kill people. It's really serious, and he should be, I don't know how to say the word, solemn about it. He should be serious, like: I wish I didn't have to do this, but we do, for defence. But instead he's sort of like, isn't it delightful? Oh, well, even if it loses its connection, it can still follow through with its mission. And I understand maybe how you'd be excited about what your technology can do, given you're trying to solve that problem. But it just seems very, very casual, this sort of light attitude, you know, levity, when it comes to killer drones. I don't know, that video disturbs me.

Michael Sharkey (00:58:34):
Yeah. To see AI being used to kill other humans is scary, given that this is literally what we occasionally talk about when we get dark: once this thing gets smarter than us, having capabilities,

Chris Sharkey (00:58:50):
Things

Michael Sharkey (00:58:50):
Can control these kinds of drones and weapons. Well then, obviously, it's gotta have the motivation to kill us, but it's like,

Chris Sharkey (00:58:57):
That's something that needs some bloody regulation. Yeah. God, it's like, you know, if that gets into the wrong hands, which it will, I mean it definitely will, military equipment always ends up everywhere. This is dangerous technology, and it's being made by a maniac. Like it definitely

Michael Sharkey (00:59:14):
Seems like this needs to be disconnected from the internet.

Chris Sharkey (00:59:17):
Like, if it was a movie, you'd cast him as the psychopath who's building the crazy drones. Even

Michael Sharkey (00:59:24):
Even his outfit, like, I don't know, it's a shirt out of, I guess, the seventies or eighties, this aqua blue with these wave lines in it. And his hair, he's almost got, in Australia we call it a mullet, long hair at the back and short on the sides. I don't even know what to make

Chris Sharkey (00:59:46):
Of it. Well, all I wanna say is, don't let him team up with Larry Page, who thinks AI is superior and should take over. You pair him up with the killer drone guy and we're all dead in a few years. Keep 'em apart. Yeah,

Michael Sharkey (00:59:59):
I think, I mean, the counter to this is: if we don't build it, then, you know, China will. That seems like their only defence. It's like, well, if we don't, they'll kill us with AI. But I mean,

Chris Sharkey (01:00:11):
It only leads to one thing, which is drone wars. I mean like, that's the inevitability, right? There will be drone wars at some point in our lifetime.

Michael Sharkey (01:00:19):
It sounds very reminiscent of nuclear development, where it's like, well, if we don't build a nuke, they'll build a nuke. And

Chris Sharkey (01:00:25):
It's, yeah. So it's like: we've got an army of 600,000 killer drones, they've got 400,000, we better not deploy ours cuz then they'll come kill us.

Michael Sharkey (01:00:32):
Yeah. It could just be like the nuclear threat, which is like, oh, we'll unleash our AI. And the AI is, what is it? Mutually

Chris Sharkey (01:00:39):
Assured destruction or

Michael Sharkey (01:00:40):
Something like that. Yeah, mutually assured destruction, and that is what I see here. But yeah, it's hard to watch, because it just seems, like you said, out of some scary movie, even the way the clip is shot. Honestly, if you're listening, please go and watch this clip. I

Chris Sharkey (01:00:58):
Agree. His outfit is also not appropriate. Like, you know, you're talking about making machines that kill people, and you dress like you're on your way to a party in the 1980s. Seriously.

Michael Sharkey (01:01:09):
Yeah, it's, um,

Chris Sharkey (01:01:10):
Surprised he doesn't have, like, a crème de menthe or, what, a piña colada. I've

Michael Sharkey (01:01:14):
Listened to a fair amount of interviews with this guy, and it's hard not to like him when you watch his interviews. But you never make the correlation between what he's saying and someone actually dying. It's very abstract, so you don't really, you're like, oh, what a great guy.

Chris Sharkey (01:01:28):
It is very abstract. If you are just giving a drone general orders to accomplish a mission, and it can make the decision to kill someone, that is abstracting away humanity. I mean, that's really terrible. It's a terrible concept regardless of whether you think your mission is right. People always think they're right, both sides think they're right, you know. So it really is a horrible abstraction of humanity, and it's a really terrible thing.

Michael Sharkey (01:01:54):
Yeah. So, moving this back to the early origins of this conversation today: I don't know what I would do if I was a student graduating high school now, thinking about my future career, because you potentially have killer drones and then huge job displacement, especially for anything that's process driven. I would probably bias towards creativity. And I think that's the most important thing everyone listening to this podcast can do now: focus on creativity, come up with ideas, invent the future. That seems like the most important skill moving forward, creativity. Yeah.

Chris Sharkey (01:02:34):
And think about the problems that we face in society, like poverty and disease, and think about how the technology could be applied to those. Because if the more run-of-the-mill jobs are all taken by AIs, why not work on things that will help us as a society, and a world, and a planet? It sounds a little bit high and mighty to say that, but it really potentially opens us up to solving some of the bigger issues we face, rather than the mundane everyday stuff, just shuffling papers around everywhere.

Michael Sharkey (01:03:10):
And after that rollercoaster of emotions, conversation and anxiety, we'll end the podcast there this week.

Chris Sharkey (01:03:18):
Exactly.

Michael Sharkey (01:03:19):
If you enjoyed the podcast today, please do all the stuff we remind you to do every week: liking, subscribing, commenting if you're watching, I don't know what else. Try

Chris Sharkey (01:03:28):
Not, try not to write comments like the hosts don't know what they're

Michael Sharkey (01:03:31):
Talking about. Yeah, whoever did that: five stars. We appreciate the five stars. But yeah, if you are listening, wherever you get your podcasts, we'd love your reviews. We've cracked the top 200 worldwide of business podcasts, so we really appreciate all the support. It's thanks to everyone listening that we're able to keep climbing the ladder, and more and more people are hearing about the podcast and listening. We really appreciate it, and we'll see you next week.
