OpenAI Has No Moat, Google's Godfather of AI Quits & The Rise of Open Source LLMs | E13

Michael Sharkey:
I really didn't plan this podcast to be, like, the let's-shit-on-Geoffrey-Hinton hour. But it's becoming that. And look, I think you make so many great points that this guy, now I'm saying it out loud like this, he's definitely protecting his legacy. Anyway, Chris, we start off with this leaked internal Google document: we have no moat, and neither does OpenAI.

Chris Sharkey:
Yeah, it's interesting, cuz they basically finish it by saying OpenAI doesn't matter.

Michael Sharkey:
It's pretty brutal and scathing. There were so many takeaways from this document for me. I've since heard that it wasn't even necessarily someone working on the AI team, so whoever wrote this was looking more at the higher-level strategy of how things are playing out. But I also thought what it really does is validate this position around open source. And for those who haven't read this leaked internal document, it's essentially saying that as open source large language models advance, they can run on phones, and the community's able to, I guess the word's not train it, but run inference on it. Would that be correct?

Chris Sharkey:
Yeah, exactly. Well, it's both. I think the really interesting points are where they say the open source community is lapping us, Google, in terms of what they've got. And the words that really stood out to me were "scalable personal AI", as in, there are models that people can run on their own hardware, on their phones, on their toaster, as someone pointed out with the Raspberry Pi, that are unrestricted, and they're free, and they're better in a lot of use cases.

Michael Sharkey:
It just seemed to me like what this is really saying is that everyone's going to have access to this technology. It's just going to become something you put in every application and on every device you run, and you can just use these models. And if you're using those models to ask a lot of the questions or get the information that you would've traditionally gone to Google for, and I'm not necessarily talking about up-to-date information or travel searches or things like that...

Chris Sharkey:
They are talking about up-to-date information, cuz they say the giant models are slowing us down. The smaller ones you can retrain so quickly; they call it stackable. You can stack these models together, get up-to-date and diverse information, and retrain the models cheaply to keep them current. So I think that's what Google's talking about: we can't just rely on having the biggest and best model, because the smaller ones, when adapted for purpose, are actually better. They're faster, they're cheaper, and they're unrestricted.

Michael Sharkey:
Yeah, I guess my point earlier was, though, that if all of this, I don't want to call it traffic anymore, but if I can go to my phone, load an app, open up some sort of agent and ask it questions related to my data, my world, my view, trained with memory of all the things in my universe, then I'm going to go to that. I'll get no ads, and I don't need to sift through search results. Previously we said one of the applications of this was kind of boring, around search, but I'm starting to think this is fundamentally a crisis for Google's entire cash cow and business.

Chris Sharkey:
Yeah, I think it's a personal productivity thing. These are things that directly influence your personal productivity, and that's something people naturally gravitate towards because it's just your day-to-day life. You're gonna go with what's most convenient for you. And I think you're right.

Michael Sharkey:
I've noticed it myself this week. I've been dabbling with a lot more code than I had previously, and I used to go to Google and Google the error codes, Google the tracebacks, Google all these elements of developing an application, or just ask for the best approach to do something, because I haven't coded in a decade. And ChatGPT right now is able to just give me back the answers. There's no sifting through. It gives me very specific code examples for my use case, and I'm not going to Google at all. In fact, I haven't been to Google once.
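
[A minimal sketch of the workflow Michael describes: pasting a traceback into a chat model for a suggested fix. It assumes the openai Python package's ChatCompletion API and an OPENAI_API_KEY in the environment; the model name, prompts, and example traceback are illustrative, not from the episode.]

```python
# Hypothetical helper: send a Python traceback to a chat model for a suggested fix.
# Assumes: pip install openai; OPENAI_API_KEY set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def explain_traceback(traceback_text: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model what went wrong and how to fix it."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a senior Python developer. "
                        "Explain the error and propose a minimal fix."},
            {"role": "user", "content": traceback_text},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    tb = """Traceback (most recent call last):
  File "app.py", line 12, in <module>
    print(items[3])
IndexError: list index out of range"""
    print(explain_traceback(tb))
```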

Chris Sharkey:
Yeah. And you also don't have that context switch out of the flow of programming. You can keep going in context, and I think a lot of being productive as a programmer is knowing what the next thing to do is. When you're in that mindset and there's nothing blocking you, you can get a lot more done in an hour than you might get done in a day, if you always have that next answer available.

Michael Sharkey:
One of the other parts I took away from this "we have no moat" document, and this is what it said: things that we consider major open problems are solved and in people's hands today. Large language models on phones: people are already running foundation models on a Pixel 6. Scalable personal AI: you can fine-tune a personalised AI on your laptop in an evening. Responsible release: this one isn't solved so much as obviated. There are entire websites full of art models with no restrictions whatsoever, and text is not far behind. So it's basically saying the idea of responsible release of this technology is kind of over, because it's already out in the hands of the masses.

Chris Sharkey:
A direct quote from the article, or whatever you call it, memo or something, was: anyone seeking to use large language models for unsanctioned purposes can take their pick of the freely available models. Like, there is no restriction here, because people can use these for whatever they want. And they said the more tightly that Google and OpenAI and everyone control their models, the more attractive they make the open source alternatives, because they're not restricted and they don't cost any money. Well, relatively speaking; you've obviously gotta have the hardware.

Michael Sharkey:
Yeah, if it's somewhat free to use outside of the hardware, and you can just install it on your device and train it for a specific mission or purpose. As I said, I think one of the biggest use cases is just your own context and your own point of view. And I think that's what's upset people with OpenAI's approach of training it to a certain worldview, or trying to restrict what it outputs based on a particular worldview. And we said this early on: what did we want when this first came out? We didn't wanna be beholden to some company. We just wanted a component, a large language model, to work with in software development or in our daily lives, one that was unrestricted and that we could do whatever we wanted with.

Chris Sharkey:
And what's fascinating about it is that a lot of the things I prophesied were just 100% wrong, because I was saying they're gonna take this away, we're not gonna have access to it, we need to be hoarders in terms of data. But quite the opposite's happened. There's almost this abundance of data, there are so many models coming out, there are so many different ways you can do it. And it's actually causing, as acknowledged in this paper, the big players to go, oh geez, we need to embrace open source, because they're winning and it's getting better. It's really the best possible outcome that could have happened here, I think.

Michael Sharkey:
Yeah, it's a very well-written document. It also relates the rise of Stable Diffusion and its open source community, and also Midjourney with, you know, a couple of people, essentially making OpenAI's closed source DALL-E irrelevant. Because you've got this whole open source community, well, specifically not so much Midjourney, but with Stable Diffusion, just making it better and better at a much more rapid rate. And it even calls out the timeline of how quickly things have advanced since Meta's LLaMA model came out. And to me, when that leaked, it seemed sort of like the joke's on Meta, but this calls out that maybe it's not, because now there are hundreds of people around the world improving their model, and Meta can take that back into their own code base to improve their own products and services.

Chris Sharkey:
Some of the biggest tech companies in the world are built on open source. I mean, Amazon is really just providing hardware and licensing, well, licensing isn't the right word, but allowing the deployment of open source software. And Linux is probably the most used server operating system in the world. It's a model that's hard to understand, but it works, because you've got the best minds thinking about it, and they have this fundamental hacker mentality that it should be open and it should be free. Which, more and more, makes OpenAI itself seem like a contradiction, because it's the least open out of absolutely everything.

Michael Sharkey:
Yeah, literally their name now contradicts their position, and it might potentially also make them irrelevant, as this document points out. The other thing I found fascinating is how it calls out the legalities of what the open source community can do because of personal-use copyright licensing, where individuals can basically go and train it and do things a company or a research department could never do. They can just go out and improve these models without any concern for copyright.

Chris Sharkey:
Great point, because yeah, that was the fear. If you were going to have to rely on the companies that could afford these vast data centres in order to, say, fine-tune or train a model, like if you had to go through OpenAI to train something that may not be allowed by them, now that goes away, because you can train on your own GPU at home, or your own MacBook, or whatever. So you're right, that personal creativity, that personal freedom outside the scope of what's, you know, sanctioned or whatever, is very, very exciting. It's what we've talked about repeatedly as being so important, that ability to experiment. And we've been given that in spades, and you and I have been doing a lot of it ourselves.

Michael Sharkey:
I thought the most scathing thing, just going back to the point this memo made around OpenAI: "In the end, OpenAI doesn't matter. They are making the same mistakes we are in their posture relative to open source, and their ability to maintain an edge is necessarily in question. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move." So basically, Google needs to fully embrace open source, try to build and own an ecosystem around it so everything's built on top of them. But maybe it's too late now.

Chris Sharkey:
Well, you know, when I saw that, I actually found it really exciting that this was sort of Google's internal official stance. But it sounds like, based on what you've said, the source of this is in question. Is it someone with the actual power to enact what they're talking about, or is it just some caring person within the organisation describing what they want to see, rather than it being some official Google internal directive?

Michael Sharkey:
Yeah, I'm not sure. I really don't know enough about the origin of it, but I think it's a very well-written piece, and the observations...

Chris Sharkey:
Yeah, the points made in it are so salient that it doesn't really matter who wrote them. It's a really, really good take on the situation regardless of the source.

Michael Sharkey:
Well, in the past I made that claim that maybe ChatGPT becomes this universal everything app, where all of these plugins come out for it and it is truly this ecosystem where you interact with your model, it knows everything about you, and it's like one app to rule them all, winner takes all. And that could still potentially happen; there's nothing to say it won't play out that way, whether it's them or someone else, I'm not sure. But at the same time, you do start to wonder: if alternative models are open source and these plugins can easily be adapted, why the hell would you pay for GPT-4 access if, a couple of months from now, we get to the equivalent of GPT-4 in the open source community?

Chris Sharkey:
Yeah, and portability as well. With this, they call it LoRA, which I find really confusing, because there's another technology called LoRa, the long-range, low-power radio you use on farms and in remote locations. So they've used the same name, which is really confusing. But anyway, what it stands for in this context is low-rank adaptation, and it's a technique where they can make the training of these models a lot simpler in terms of GPU usage but still get extremely good results. What this means is that the models are becoming smaller and more portable. So I bought one of these NVIDIA Jetsons, which is like a Raspberry Pi, a small single-board computer sort of the size of a credit card. This one's bigger, more like the size of a phone, and it's a GPU that's completely portable.
So you can, and I've done it, run these models on a device that's small enough to be a phone. I mean, I know you can run them on an actual phone, so it's not quite the same, but the idea is these things will get smaller. It doesn't need internet access, and you can run things like, not ChatGPT, but the other models on this thing, and have it take input from the real world: sound, video, everything. And it can run multiple models.
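
[Since LoRA comes up here: a minimal sketch of low-rank adapter fine-tuning, assuming the Hugging Face transformers and peft libraries. The base model, target modules, and hyperparameters are illustrative choices, not anything specified on the show.]

```python
# Hypothetical LoRA setup: inject small low-rank adapter matrices into the
# attention projections and train only those, which is what makes fine-tuning
# feasible on consumer GPUs. Assumes: pip install transformers peft torch.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # illustrative stand-in for any small causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],  # OPT's attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```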

Michael Sharkey:
So it's sort of like a mini electronic brain.

Chris Sharkey:
Yeah, exactly. And you can put 'em in robots; people have done it. If you search on YouTube for NVIDIA Jetson, the things people are making with these are truly exciting. It can see, it can hear, it can respond to those inputs, and it can do the large language model inference as well. So the thing I was talking about, making your own personal assistant, a lot of what you'd need for that sort of portable device is now real. I mean, it's practical. The only thing stopping you is the time to develop the applications you're interested in. It's very exciting.

Michael Sharkey:
I also think a big part of it is that once you can scale it down to these devices, it can be such an important part of the program itself, where you get all of these added capabilities of almost having this sentient sort of, I hate to use the word sentient cuz I know it upsets people, but I still think...

Chris Sharkey:
Well, it can think, right? It can make decisions. It's not just a blind thing that's following its programming.

Michael Sharkey:
Yeah. And I think that's what's exciting about it, because you can give it all these inputs and then get it to make decisions based on them. I think for robotics it's gonna be so fascinating what happens. I know there's a lot of interesting stuff already happening there, but I really think turning this into something small that you can carry in your pocket and plug inputs into, and I'm probably just describing a phone at this point, means you have this super intelligence everywhere you go, and the applications that can be built on that are going to be game-changing.

Chris Sharkey:
Well, we all know how technology goes: it always gets smaller and faster. So the inevitability, and we've discussed this before in a more doom-and-gloom sense, is that AI models are going to be in everything with electronics in it. It's just an inevitability now.

Michael Sharkey:
So if you are OpenAI right now, what's your next move? You've read this. I mean, we have no idea what technology they have, but it seems like my earlier prediction, where they become more of a Dropbox-style consumer tech company, might be the only path forward if you believe everything said in this document. They make all this technology accessible to the masses, and that's just what the company is. It's truly more like a Google-style play, where they made search really accessible.

Chris Sharkey:
You know what I think? I don't think they care about making money. The people who are really doing the work at OpenAI, I think they care about the technology and what they can do with it. I think all the plays at commercialisation are literally feints in that direction to appease their investor overlords. I don't think they actually care. Cuz if you look at what they're focusing on now, like, we wanted to talk about ChatGPT's code interpreter, which is a new thing where you can upload a bunch of data and actually have OpenAI build and run its own code in order to understand that data. If you think about something like that: if they really wanted to commercialise and really wanted to make money, they've got fertile ground to do it. The plugin ecosystem, monetising their existing models, allowing the private distribution of these for big corporations, which, again, we've seen feints at. But I just wonder how serious they are about that side of the business. I can't help but wonder if they're pretending that's what they care about, but really what they care about is advancing the state of AGI and AI technology in general. I think they just want to be the vanguard of this technology, and all that stuff is literally just a sideshow to appease people.

Michael Sharkey:
Well, I saw a recruitment tweet on Twitter a couple of days ago: "if you're passionate about building safe AGI". That was literally how they led the job description for this particular role. So it does make you think maybe they're just working on agency and multimodal and memory, all these capabilities that you would need to build what we think is some sort of artificial general intelligence.

Chris Sharkey:
Yeah. Like Elon Musk with SpaceX. Admittedly he's more openly stated with his goals, but SpaceX launching satellites for companies and stuff, that's not really the goal there. His goal is to get to Mars, and if making money off reusable rockets and chucking a few satellites and Starlink into space helps with that goal, then so be it. And I feel like these guys must have a similar mentality. They're really, really trying to make the best possible AI, and the rest just doesn't matter to them.

Michael Sharkey:
I think also, and I've mentioned this previously on many episodes, it's that innate feeling of: I just wanna see what we can build. Can we actually do this, and what will happen if we do? Obviously there are the doomsday aspects of that, and we'll get to those in a minute with the godfather of AI quitting Google, who truly, when you read his history, is rightfully the godfather. You can think about all the doom-and-gloom parts of that and go down the rabbit hole there. But fundamentally, I'm also of the view: let's see what happens. This could bring huge innovation to humanity, and you wonder if that's really what's motivating OpenAI. And on the other hand, you've got Microsoft. They just announced a bunch of Bing updates and stuff. We were gonna cover them, but quite frankly, it's pretty boring.

Chris Sharkey:
Every time you mention Bing, I just instantly feel bored, and...

Michael Sharkey:
Like

Chris Sharkey:
I just can't do it. Like, you know, it's just...

Michael Sharkey:
What is interesting, though, is that they're just replicating OpenAI's chat functions a couple of months later. They're integrating the Wolfram Alpha plugin, and, what is it, OpenTable, a bunch of plugins and stuff like that. I don't know, maybe I am naive, but I don't really get that excited about using this sophisticated chatbot to book a reservation.

Chris Sharkey:
Doesn't it show how quickly our expectations adjust, and how desensitised you get to how amazing this technology is? If they'd come out with that stuff cold, you'd be like, oh my God, this is the most amazing thing I've ever seen. But because of the rapid pace of innovation, and really the access to this technology, it's just not that exciting. In the scheme of everything else, it's really just the gradual commoditisation of that technology that we're seeing.

Michael Sharkey:
Yeah. Going back to the Google "we have no moat" document, another one of the call-outs was the idea that data quality scales better than data size, and that's what they're seeing in the open source models. They don't have as much data, obviously, they don't have the access, but they have high quality, and they're able to give a lot of feedback to the systems. This is the real meaty stuff. And we've talked about this before: actually using the AI's outputs, I forget the name of the technique, getting the outputs from GPT-4 to train your own open source model.

Chris Sharkey:
That's right. That's how they did, what was the one, one of the open source datasets, which was literally based on that: doing a bunch of prompts through ChatGPT, extracting the sort of meat of how it works, and then using that to train, not LLaMA, sorry, Alpaca. I think it's a pretty common technique, and it relates to that LoRA thing I mentioned earlier, where they're really trying to work out the essence of what's going on here. And it's working. It's actually leading to these smaller and smaller open source models that just don't need to be as large as the large models, and they get similar results. It's absolutely fascinating how well they're doing that.
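
[A minimal sketch of the Alpaca-style distillation Chris describes: harvesting a stronger model's answers to build an instruction-tuning dataset for a smaller model. The seed tasks and output file name are illustrative; the ChatCompletion call assumes the openai package.]

```python
# Hypothetical distillation step: collect instruction/response pairs from a
# "teacher" model and save them as JSONL for fine-tuning a small open model.
# Assumes: pip install openai; OPENAI_API_KEY set in the environment.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

seed_instructions = [  # illustrative seeds, not the real Alpaca seed tasks
    "Explain what a large language model is in two sentences.",
    "Write a Python one-liner that reverses a string.",
]

pairs = []
for instruction in seed_instructions:
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    pairs.append({
        "instruction": instruction,
        "output": completion["choices"][0]["message"]["content"],
    })

with open("distilled_dataset.jsonl", "w") as f:
    for pair in pairs:  # one JSON object per line, the usual fine-tuning format
        f.write(json.dumps(pair) + "\n")
```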

Michael Sharkey:
I watched a really long interview, I think it was on CBS Mornings, and I'll give everyone more context on this in a moment, with Geoffrey Hinton, this godfather of AI who left Google warning of dangers ahead with AI. And I want to do a bit of a reality check on what he actually said versus what the media said, because I think they've spun it into pure doom and gloom.

Chris Sharkey:
As soon as I saw it on news.com today, I'm like, what the... I mean, this is really, really intense now, when it's on a mainstream rag trash news website.

Michael Sharkey:
Yeah. So that...

Chris Sharkey:
...that I read every day.

Michael Sharkey:
The point I would make about him in that interview is one of the things he said, and it's similar to what Sam Altman also recently said. He was really interested in the human brain as a researcher, and one of the things that humans can do really well is learn from a much smaller subset of data. Because if you think about how a machine learns today, how GPT-4 learned, it consumes more data than we could consume in millions of human lives. And yet we're able to take a very small amount, reason, and build the weights in our own neural net, in our brains, in order to be far smarter at reasoning, and at a lot of things GPT-4 can't do.

Chris Sharkey:
That's like, do you need to read every book ever written and the entire Wikipedia to be able to write an essay? Obviously not.

Michael Sharkey:
Yeah. And so the point he made is about the methods, like transformers and backpropagation, two of the big breakthroughs. He invented the idea of backpropagation in the eighties.

Chris Sharkey:
So he invented neural nets, essentially.

Michael Sharkey:
Yeah. And everyone thought he was crazy and stupid in the eighties, but all he needed was time, more data, and more compute power, and that's what led to almost all the breakthroughs here. But this idea of larger and larger training sets is not necessarily the path forward, because, you know...

Chris Sharkey:
So this isn't just some random engineer at Google quitting over his fears. This is someone who's a serious player, who's been there from the beginning, and he's making a stand with his own employment.

Michael Sharkey:
Yeah. So let's back up and I'll give you plenty of context here. The big news item was that the New York Times had this article: the godfather of AI leaves Google and warns of danger ahead. And this is Geoffrey Hinton. Now, he lives in Toronto, and most recently he was working at Google. He originally left the United States because of his early research on AI: he didn't want to take funding from the Pentagon, because he thought, ethically, I don't want to invent robot AI soldiers that indiscriminately kill. So he left the US to take research grants in Toronto and ended up working at Google. Now, the reason they call him the godfather: I think it's Ilya, I might be getting names mixed up, but basically the guy that is the chief scientist at OpenAI, I just all of a sudden forget his name. He worked with Geoffrey Hinton; they created a company that could recognise what was in images. So you give it an image and it says, there's a dog in it. Google bought that company some time ago for $40 million. Ilya left and went to OpenAI, and Geoffrey stayed at Google. And the reason they call him the godfather is that his students are the ones releasing all of the new tech. And look, I think the main reason he actually left Google was that he just wanted the freedom to talk about a lot of his work and have it carry some sort of meaning in the future. I think he just wanted to speak freely and be like, hey, I told you guys back in the eighties all this stuff would come to fruition.
And I think also what he's seen now is the sort of arms race between Microsoft and Google, and a bit of OpenAI now as well, but it's just Microsoft, let's be honest. So it's Microsoft and Google rushing to try and out-compete each other, and he's concerned about what that will lead to. One of the points he calls out in this interview is that he used to think AGI, or some sort of singularity-type scenario where this intelligence would be far smarter than the human brain, was 20 to 50 years away. But now his timeline is more like 20 years, and conceivably it could be five. And I think that's what got everyone freaked out. And he's saying that if we have a super intelligence, he's of the belief that a being more intelligent than a human might not value us at all. So he's warning about it, and he related it to the invention of the wheel, the Industrial Revolution, and the Manhattan Project developing the nuclear weapon.

Chris Sharkey:
I don't disagree with him, but, and excuse my cynicism, do you think maybe he was just starting to feel like he wasn't so relevant anymore, and this is his way of bringing himself back into relevance?

Michael Sharkey:
Maybe? I mean, I think that's a pretty harsh criticism without having the full context on this guy. But I did kind of think, it's like, you know...

Chris Sharkey:
If your whole life you were into curling, right? And no one cares about curling at all, and then suddenly curling becomes the most popular sport in the world, but you're in some organisation that prevents you from being a curling commentator. You know, you want to get in amongst it. It's what he cares about, what he wants to be involved in. But really, I mean, I'm not saying I'm an expert, but I'd never heard of the guy, you know? And now I have; now he's everywhere. I just wonder, I just wonder.

Michael Sharkey:
I mispronounced Ilya's name before. It's, uh, Sutskever; he's the chief scientist at OpenAI. Just to clarify for the audience.

Chris Sharkey:
You're exactly nailing it.

Michael Sharkey:
No, it's terrible. My pronunciation's horrible, but at least I tried.

Chris Sharkey:
As I said before, don't worry, AI will take care of that for you in the future. You won't need to speak with your own voice.

Michael Sharkey:
It's really hard recording in real time and having to pronounce very difficult names that I've never said or read in my entire life. But it's true. Anyway, going back to this article: he came out and did a bit of a media blitz. He was in the New York Times, CNN, CBS, all the major networks. So it could be a part of legacy protection as well. Like, you know, "I kind of conceived all this." He's definitely not, in the interviews, saying, oh, you're too kind with the godfather thing. He just really embraces the godfather.

Chris Sharkey:
Yeah, it's like Tim Berners-Lee coming out every couple of years to remind everyone he invented the web.

Michael Sharkey:
I know all the comments now are gonna be like, how dare you criticise the godfather, who, let's be honest, most people hadn't heard of a couple of weeks ago.

Chris Sharkey:
Yeah, exactly. But it is interesting, there's a sort of interplay between the Google memo coming out and him quitting. Cuz on one hand you've got someone at Google being like, we're behind, we're not keeping up. And then you've got the other guy being like, I'm quitting because I'm really worried we're gonna accidentally destroy the entire world. It's quite the contrast, and both are very possible within an organisation like Google; there is no way that everybody knows everything that's going on there. So, to give the guy some credit, there might be, as we've discussed before, a lot more happening at Google than we suspect.

Michael Sharkey:
Oh yeah. I mean, clearly. He even said in this interview that he wasn't that impressed with ChatGPT when it came out, because they'd been playing with large language models for five-plus years, and basically that...

Chris Sharkey:
Sounds like when you say you like a band and someone's like, you know what, I've known about them for five years.

Michael Sharkey:
Yeah, yeah. I was listening to them ages ago.

Chris Sharkey:
You like that song? That's their shit song. Yeah.

Michael Sharkey:
I really didn't plan this podcast to be, like, the let's-shit-on-Geoffrey-Hinton hour, but it's becoming that. And look, I think you make so many great points that this guy, now I'm saying it out loud like this, he's definitely protecting his legacy. But let's just go back to this guy's achievements. He was really interested in the brain, started studying psychology, and said, well, we're gonna have to mimic the brain. Everyone laughed at him back in the eighties. Now everything he sort of predicted, the backpropagation he basically invented, well, he invented the modern neural net, has all come true and is enabling all these technologies.

Chris Sharkey:
And to be more charitable, it might just be that he was in a position where he couldn't do what he loves and what, like you say, is his legacy. He's seeing all this stuff that's going on and saying, you know what, this isn't happening here for me at Google; I need to go somewhere I can be at the coalface of it and be influential.

Michael Sharkey:
I don't think he cares about that anymore; he's 75 years old. I think it's a legacy thing. I think you nailed it, and I hate to say it, but I think it's a legacy thing.

Chris Sharkey:
That's right. What... I don't know what I'll be doing when I'm 75. Interesting.

Michael Sharkey:
Yeah. Anyway, I don't know how we got so sidetracked on this Geoffrey Hinton thing, but it's just another one of these very well-respected people in the community coming out and saying, hey, we need to do something here. And on that note, let's just move on before we get in trouble for stalking this guy.

Chris Sharkey:
That's all right. So, something else I wanted to talk about was this paper that came out during the week about Open Assistant. I dunno if you've heard about this one, but the idea is that they wanted to democratise large language model alignment, and by alignment they mean alignment with human preferences. I think it's a really interesting one, because the whole idea was to get models that humans react to with: I like that response. So the whole thing was based on human-generated, assistant-style conversations, very similar to what Stanford did with Alpaca. Stanford got the leaked Facebook model, LLaMA, and then they got a whole bunch of conversations, which they actually used ChatGPT for, to give it a reward model and say, these are the kinds of conversations you need to be having, LLaMA. And that led to what we got with llama.cpp and all the models we have now.
So these guys decided to do it on a much larger scale, and to do it in an open source way. They've released the paper, they've released their training method, and they've released the data and the code. So you can just do all of this yourself, which is quite profound, because it means you can now, using this dataset of theirs, take any corpus of data and train it using their conversations, which show it how to be an agent on top of that. That is extremely, extremely powerful. Just to give you some stats on this: they generated 161,000 messages across 66,000 conversations, all done by humans, in 35 languages, and they had 461,000 quality ratings. That's like telling it, no, that's bad; yes, this is good; et cetera. So, human annotations. And this was done with 13,500 volunteers. As for their results, in an anonymous comparison their answers were chosen 48.3% of the time and ChatGPT's were chosen 51.7% of the time. So they've essentially replicated ChatGPT using open source data, data they generated themselves using real humans.
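
[A minimal sketch of the preference-based reward modelling Chris outlines: score a human-preferred answer above a rejected one with a pairwise ranking loss. It assumes Hugging Face transformers and PyTorch; the base encoder, prompt, and answer pair are illustrative, not drawn from the Open Assistant data.]

```python
# Hypothetical reward-model training step: learn a scalar score that ranks the
# human-chosen answer above the rejected one. Assumes: pip install torch transformers.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "distilroberta-base"  # illustrative small encoder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = "How do I boil an egg?"
chosen = "Put the egg in boiling water for 7 to 9 minutes, then cool it."
rejected = "Eggs are laid by chickens."

def score(answer: str) -> torch.Tensor:
    """Scalar reward for a prompt/answer pair."""
    inputs = tokenizer(prompt, answer, return_tensors="pt", truncation=True)
    return model(**inputs).logits.squeeze()

# Pairwise ranking loss: push the chosen answer's score above the rejected one's.
loss = -torch.nn.functional.logsigmoid(score(chosen) - score(rejected))
loss.backward()
optimizer.step()
```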

Michael Sharkey:
What's the underlying model that they're using?

Chris Sharkey:
Well, it's their model; they made it. They've got a dataset built from publicly available data you can get on Hugging Face, you know, all the different sets that are available, like Common Crawl and the Reddit comments and the various sets that are out there.

Michael Sharkey:
Right, so this is their own large language model, and then their training data is actual human conversations giving it feedback?

Chris Sharkey:
Yeah, exactly. And they've released all the code and all the data under fully permissive licences. You can literally use it for anything you want. It's absolutely amazing.

Michael Sharkey:
So this is yet another attack, and I hate to say attack, but yet another attack from open source on ChatGPT. I mean...

Chris Sharkey:
It makes those Google comments so relevant. I mean, they are lapping them. It's absolutely amazing. And what's really interesting is that they laid out their technique, exactly how they did it. You can replicate it; they gave everything. So, just so everybody understands, the procedure is basically this: they get human-generated desired behaviours, and they use those to make a reward model, which is run through the neural net with the backpropagation that that guy invented. Then they use that reward model to fine-tune the model so it will maximise that reward. And what I found interesting about it, aside from the fact that you can then go and apply this to your own datasets, is the human-generated bit. I just got stuck on that phrase: human-generated desired behaviours.
And then I thought, well, okay, cool, so they've got 161,000 conversations and 13,000-odd people saying, this is what humans want from this, give us this. But imagine if step one wasn't done by humans but by AI. Like: these are my AI-generated desired behaviours, here's what I'd like you to do. Or you have someone malicious, and they're like, here's what I'd like you to do as an evil person, or whatever. Imagine how different the models could turn out if you subbed that out.
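
[Continuing that sketch: once a reward model exists, the policy model is fine-tuned to maximise its score. A minimal, hypothetical PPO step assuming the trl library; the gpt2 base, the prompt, and the hard-coded reward are illustrative stand-ins for the real pipeline.]

```python
# Hypothetical RLHF step: one PPO update that nudges a small causal LM toward
# responses the reward model scores highly, while a KL penalty keeps it close
# to the frozen reference model. Assumes: pip install trl transformers torch.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

base = "gpt2"  # illustrative small policy model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLMWithValueHead.from_pretrained(base)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(base)  # frozen reference

ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1),
                         model, ref_model, tokenizer)

query = tokenizer.encode("How do I boil an egg?", return_tensors="pt")[0]
full = model.generate(query.unsqueeze(0), max_new_tokens=32)[0]
response = full[len(query):]  # strip the prompt, keep only generated tokens

# In a real run this scalar would come from the trained reward model above.
reward = torch.tensor(1.0)
ppo_trainer.step([query], [response], [reward])
```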

Michael Sharkey:
So, in theory, you could rapidly train a model this way with the AI conversations. Is that what you're saying?

Chris Sharkey:
Rapidly. And you know, this wasn't done on the entire Microsoft data farm or anything like that. They did this with relatively modest hardware. I mean, you can run it yourself.

Michael Sharkey:
But then, in theory, that next model that you train rapidly off the first iteration, you could then, what, train it again and again and again? Like, it's exponential.

Chris Sharkey:
Well, as we discussed earlier, you can stack these models together. So you can have the models almost running in parallel, or communicating with one another, and things like that. And this is what that Google document pointed out so clearly: when you've got these smaller models that can be trained so quickly, you can iterate on them until you get them where you want them to be. It's not like you have to wait for GPT-5 or some shit, where they've had to spend a year training it on the best and hardest-to-get hardware. You can just do it every day. You can keep pounding away at some model until it's doing exactly what you want, based on reward models that you define, or they define, or you adjust to suit.

Michael Sharkey:
There's gotta be something in this. Maybe we should try it and then report back to everyone. I think it's really interesting; it's something I'd like to try and see if it works. It does feel like the open source community could be the breakthrough here in terms of developing some sort of, I don't know what to call it anymore, AGI, singularity, whatever you want to call it. Basically, an AI that can make better AI.

Chris Sharkey:
Yeah, exactly. It's this kind of technique that leads to that, right? Once you give the AI the ability... cuz you talked about your efforts earlier, where you did the sort of AutoGPT thing, where you've got one that's writing its own code and then running that code, or one that's taking output from previous iterations and using it as input, to get this sort of resident AI that can keep running. But this is the next level, where the AI is told: hey, you've got this ability, you can go off and train your own models, either for specific things you'd like to do or, and this is the super intelligence thing, you can go train a model that's better than you. Maybe just slightly better than you; it doesn't have to be significantly better.
Go train one that's 1% better than you, and then have it train the next iteration. Think about that: this is exactly what they were prophesying could happen. And I'm not saying that Open Assistant is the game changer that does that. What I'm saying is that they're proving this kind of ability is there, latent within these systems. We can train the next iteration using the current iteration, and it might not be quite good enough to get to general intelligence, but it's getting closer. It's just a matter of people starting to try. And you sort of wonder about the bigger organisations; they've obviously realised this. I don't know, we're getting really close to actually being able to do it.

Michael Sharkey:
I can't get it out of my head that I was just trash-talking Geoffrey Hinton for like ten minutes there. The point I wanna make, though, related to that, is: what do you think about the reasoning capabilities? Because in this interview, and the interview was great, Geoffrey, if you're listening, I thought you were great, he talks about reasoning, and human reasoning is still far superior. Going back to that code example: I did write a Python script that would rewrite its own code over and over again, and eventually, I don't know if you'd call it hallucinating, it does get to a point where it makes some fundamental error. It goes so hard down a path of, I don't wanna call it reasoning, but it goes down a pathway trying to get to your goal. So if the goal is, like, invent a clone of Google Photos or something, it hits a point where it's made some logical error, and I then have to go into the code, step through it, and be like, okay, it needs to head in a different direction now. So I would call that, roughly, a bit of creativity and a bit of reasoning.
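
[A minimal sketch of the self-rewriting loop Michael describes: a script that feeds its own source to a model, writes out the revision, and runs it. Entirely illustrative; the goal string and file handling are made up, and a real version would need sandboxing and tests to catch the "fundamental errors" he mentions.]

```python
# Hypothetical self-rewriting loop. Assumes: pip install openai; OPENAI_API_KEY set.
# WARNING: running model-generated code unsandboxed is exactly how the errors
# Michael describes compound from iteration to iteration.
import os
import subprocess
import sys
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
GOAL = "build a tiny photo-organiser CLI"  # illustrative goal

def rewrite(source: str) -> str:
    """Ask the model for an improved version of this program's own source."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"Improve this Python program toward the goal: {GOAL}. "
                        "Return only the full revised source code."},
            {"role": "user", "content": source},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open(__file__) as f:
        new_source = rewrite(f.read())
    with open("next_iteration.py", "w") as f:
        f.write(new_source)
    subprocess.run([sys.executable, "next_iteration.py"])  # run the next iteration
```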

Chris Sharkey:
But you're running one model toward one goal. The thing is, humans aren't perfect either. Lots of humans live their lives normally up to a point, then make some fundamental error that ruins their life or someone else's. It's normal for intelligence not to be capable of everything at an individual level. But as a society, as a group of intelligences, we can accomplish quite a lot. And I think that's what the early levels of AGI are going to be, right? It's gonna be generational. There'll be evolution; there'll be generations of them that live and are destroyed, and they get better, and they make better versions of themselves. We can't expect the first crack at it to absolutely nail everything, and the second it makes a mistake go, right, that's it, it doesn't work. I think we need to think about it like that.

Michael Sharkey:
But my point's more: are large language models ever going to be good at reasoning, or does a new technology have to come along in order to do that? And it sounds like it does.

Chris Sharkey:
Yes. But they will make the new technology. I think there's enough...

Michael Sharkey:
Reason? Do you think they're that capable, with LLMs advancing the way they are, to do it?

Chris Sharkey:
I don't know. But I mean, I'm just a human, so I...

Michael Sharkey:
...couldn't...

Chris Sharkey:
...possibly evaluate that...

Michael Sharkey:
As a human.

Chris Sharkey:
I think my point is that whether they are or not today, with the rapid pace of technology it's an inevitability that they will be. I just don't know at what point we declare, yeah, okay, now they're good enough to do it. But the point is they're getting closer, and we're seeing people do these experiments. Look at some of the prompt-length updates, for example. There was one released during the week called Unlimiformer, which essentially, it's not prompt compression, it's more that it evaluates the prompt in a way that takes the essence of what the prompt is, and it's then able to iterate, running that through the model, to allow essentially unlimited prompt sizes. And they're getting incredibly good results with it.
And there's another one that's been announced, but admittedly there's no paper and there's no code, so it could be BS. It's called LongBoi, which I think is a great name. They're talking about a 64K prompt size, which is absolutely enormous if you think about it: double what GPT-4 supposedly can do. And we've seen demos of that throughout the week. Obviously not us, we're not good enough to get access, but people have used it, and they're saying, here's an entire code base, write the docs for it, and it can do it. The other example we saw, and you'll put this article in the show notes, I'm sure: a guy actually went and gave it 60 meg of US census data and told it to come up with hypotheses and then write a paper about them. And it wrote an entire paper, including figures and graphs and all that sort of stuff. He said it wasn't wonderful, but it did it in a few seconds. A few seconds to take in 60 meg of data and write a paper about it. That's absolutely...

Michael Sharkey:
Can you just explain, for those listening who don't understand the idea of prompt size: why is everyone who's working with AI today so excited about larger prompt sizes?

Chris Sharkey:
Well, firstly, there's the amount of live context data you can give it. Models are trained up to a point, and they don't necessarily retain all of that data, especially the smaller models we've been talking about today. You were talking earlier about the essence of what a model is, and saying, well, if I can train on a lot less data, I can do it faster and more cheaply. But then it doesn't remember everything; it doesn't have all of human knowledge baked into it, so it can't know things. If you have a big prompt size, though, you can tell it: here's all the relevant information for what you're trying to do here. And you can use another model to actually summarise that information for it. So you don't have to give it, like, if you're talking about a stock or something, you don't have to give it the entire history of that company. You can just give it the salient points from the data, like the financials and who the executives are and that kind of thing.
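
[A minimal sketch of the pattern Chris describes: using one model to summarise a long document down to its salient points so it fits another model's prompt window. The chunk size, prompts, and file name are illustrative; it assumes the openai package.]

```python
# Hypothetical chunk-then-summarise pipeline for fitting a long document into
# a limited context window. Assumes: pip install openai; OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
CHUNK_CHARS = 8000  # illustrative; real code would count tokens, not characters

def summarise(text: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "Summarise the salient points of this text:\n\n" + text}],
    )
    return response["choices"][0]["message"]["content"]

def condense(document: str) -> str:
    """Summarise each chunk, then summarise the concatenated summaries."""
    chunks = [document[i:i + CHUNK_CHARS]
              for i in range(0, len(document), CHUNK_CHARS)]
    summaries = [summarise(chunk) for chunk in chunks]
    return summarise("\n\n".join(summaries))

# condense(open("company_history.txt").read()) now fits in a small prompt window.
```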

Michael Sharkey:
Right. So essentially, when it returns a response back to you, it just has more context, like a sort of infinite brain. Is that how you should think about it?

Chris Sharkey:
That's part one: it can have a lot more information about what it's trying to do. But part two is that it can output a lot more, because the prompt limit usually covers both what you put in and what it puts out. Remember, even with ChatGPT you say, oh well, I can just ask it to send another message to continue, but then it needs the previous history in order to know what it said before; it's cumulative. So if you can have a larger prompt size, it means it can actually output an entire paper, an entire book, an entire symphony, whatever it is. It can take more in, it can hold more in its brain at once, and it can output more. And I think "hold more in its brain" is such a valuable point, because, as you said earlier, humans don't have to read the entire history of the world in order to make simple decisions or simple inferences. But this thing can take in huge amounts. Imagine asking it questions about a book when it has a photographic memory of literally every single phrase and word in that book. It's just fascinating, and that's just the start.
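
[Prompt size is measured in tokens, which cover both what you put in and what the model sends back. A quick illustrative way to see how much of a context window a text consumes, using the tiktoken library; the 8K figure is GPT-4's commonly cited base window, and the file name is made up.]

```python
# Count how many tokens a document would consume in a model's context window.
# Assumes: pip install tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4

text = open("book_chapter.txt").read()
n_tokens = len(enc.encode(text))

window = 8192  # GPT-4's base context window, shared by prompt and completion
print(f"{n_tokens} tokens -- {n_tokens / window:.0%} of an 8K window")
```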

Michael Sharkey:
And people are doing this right now, right? There are a lot of examples, if you've been looking, where you can upload a PDF and then ask questions specifically related to that PDF. And if you ask questions outside of that PDF, it's like, I don't know, because its frame of reference is that PDF. So this is essentially, like, more PDFs, by expanding the token size.

Chris Sharkey:
And that's using LangChain as well. That's an iterative process where it'll go through and score all the words in there, do a sort of semantic search to get the relevant summaries, and then it shoves those summaries into the 8K GPT-4 prompt, for example, and makes an inference on that. But what I'm talking about is having the whole PDF, maybe a hundred more, in its memory at the same time, and then making inferences on all of it rather than on the summaries. And if you take it to the logical next level and say, well, you could still use a technology like LangChain or a vector database, where you take in enormous amounts of data and then the summaries fit into, say, 64K of data or something like that, its abilities are going to be absolutely astonishing.
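
[A minimal sketch of the retrieval pattern Chris describes: embed PDF chunks into a vector store, semantically search them, and stuff the hits into the prompt. It assumes 2023-era LangChain modules (PyPDFLoader, FAISS, RetrievalQA) plus the OpenAI integrations; the file name and question are illustrative.]

```python
# Hypothetical PDF question-answering via a vector store, 2023-era LangChain style.
# Assumes: pip install langchain openai faiss-cpu pypdf; OPENAI_API_KEY set.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# 1. Split the PDF into overlapping chunks and embed them into FAISS.
pages = PyPDFLoader("report.pdf").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
store = FAISS.from_documents(splitter.split_documents(pages), OpenAIEmbeddings())

# 2. At question time, the retriever finds the most similar chunks and the chain
#    stuffs those excerpts into the model's prompt alongside the question.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=store.as_retriever())
print(qa.run("What are the key findings of this report?"))
```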

Michael Sharkey:
Actually makes a vector database way more powerful.

Chris Sharkey:
I think so, yeah. I don't think a larger prompt size necessarily makes vector databases irrelevant. If anything, there'll probably be better technologies that come along and maybe replace vector databases; I'm no expert in it. It seems to me like something people saw as a necessary evil because of restrictions in prompt size, but I think the technique is here to stay, because why wouldn't you always want a little bit more? Certainly the AI will. Imagine if it remembers every conversation it's ever had, everything it's ever said, everything it's ever thought. It literally is a memory for the AI. Why would it discard that at any point? It's only gains for it. And as we've seen, the memories of something like ChatGPT are literally used to go off and train a better, more capable model. So being able to remember its history can only help it.

Michael Sharkey:
Yeah. Even working with it myself, the limitations on what you can insert into a prompt, that's the frustration. So you can see why vector databases and bigger prompt sizes matter. I mean, it's high on my own personal wishlist. I wish I could have access. Please, OpenAI gods, please.

Chris Sharkey:
Well, I think it's why people grasp at things like LongBoi and Unlimiformer: they're here now, and you can actually use them. There's this thing on our podcast that I reflect on during the week, which is the gap between what we talk about and what's practical right now. And I think bridging that gap is what a lot of people are putting their energy into now. What can we do now with the actual technology, and how can we leverage the AI? Every week things are coming out: here's how to use it better, here's how to get more out of what we have, in addition to the advancements in the core technology.

Michael Sharkey:
Yeah. To me, right now, there's no easy way for someone who wants to experiment with this technology to upload every file on their computer to a system and then run some sort of agent behind it that keeps them abreast of the important information they're collecting in their daily lives. To me, it's a distribution problem. We've been overwhelmed by the advancements in the technology, but no one's really distributed the here-and-now of it. You know that whole thing where you expect progress to happen really quickly, like with the invention of the smartphone, but then it takes a decade before the big apps arrive, like Instagram? Yeah, and...

Chris Sharkey:
Sort of, and sort of looking back, you will look at these phases as these, these early foundational days of this stuff and, and you know, you're like, oh, if only we'd known this then kind of thing. But that doesn't make it like, you know, extremely exciting being in amongst it and actually trying it. And I think coming back to it, I know we've refrained on it a lot, uh, today, but the open source accessibility of it for someone technical is very exciting. The next phase has got to be providing that technology to other people in other industries and, you know, individuals who can try it for themselves and use it in a very accessible way.

Michael Sharkey:
I agree. I think for all of us nerds it's really exciting, cuz we can play around with it, but the next evolution is giving this technology to the general masses. I think that was really the breakthrough with ChatGPT: everyone can use it. But to me, the next breakthroughs, like bigger memories, agency, automating things, potentially running simulations, all of these other aspects of it, are the next wave.

Chris Sharkey:
It's more than just doing your homework during the week. My teenage babysitter said his sister got done in class for writing her homework using ChatGPT, and I was like, how did they know? And he's like, I think she left one of the...

Michael Sharkey:
AI

Chris Sharkey:
...things in there. So it's impacting people. But on a more positive side, it's showing that people want this; they can see the value in it. I mean, cheating at your homework's one thing, but it is mainstream in that respect. What isn't mainstream is that next generation of what's capable.

Michael Sharkey:
Yeah. It's almost like we're just playing around with the infant capabilities of what these things can do. Mark Zuckerberg this week came out and said... Mark Zuckerberg... no, he didn't say this, he's like...

Chris Sharkey:
The Mark Zuckerberg AI said: Mark Zuckerberg. Thanks.

Michael Sharkey:
"I am a large language model; I'm just saying the next word to you." Honestly, it'd be the ultimate...

Chris Sharkey:
Meta game. If he was AI all along, you'd just be like, of course, I knew it.

Michael Sharkey:
Yeah, he's definitely AI, let's be honest. But AIs...

Chris Sharkey:
...don't like to smoke meats as much as he does, probably.

Michael Sharkey:
All this week, in my brain, if I'm having a shower, you know how you'd normally have a thought like, oh, I should do that? I've got this weird internal narrative now where I'm talking like the AI does when it's setting goals in the sort of AGI apps at the moment. Like, I'm going, "I should do this", that phrasing, in my mind now.

Chris Sharkey:
It's similar... I told my son this morning, I forget what it was. Oh, he had to clean the table right before he went to school, and I was thinking, is his brain going, I'm gonna need some sort of wet wipe or rag or something to clean this table? And I wanted to watch him solve the problem, to see what his thought process was. But his strategy was basically to ignore it until the problem went away. So I dunno if the AI's gonna be like that. Hopefully...

Michael Sharkey:
Not. Yeah. "I should get a dishcloth. I should wipe..."

Chris Sharkey:
"...it up." He needs his reward function tweaked.

Michael Sharkey:
Yeah. Anyway, back to this story. So Mark Zuckerberg says Meta wants to introduce AI agents to billions of people. We saw Snap... snap, is it Snapchat? I forget what it is. No, they...

Chris Sharkey:
Changed

Michael Sharkey:
Their name to Snap. Okay, it is Snap now. Yeah. So they introduced an AI bot that no one likes or wants. But it sounds like Zuckerberg and Meta really believe, maybe thanks to the open source community, since LLaMA is getting better and better, that introducing agents to billions of people is the way to go. This does scare me a little bit because of his whole metaverse vision. Like, you go into the metaverse, we talked about this two episodes ago, and there's all these AI agents and people you can interact with, and this potentially,

Chris Sharkey:
I think it's the only thing that makes that interesting, right? Just talking to the psychos who populate the internet in virtual reality is awful. But that dynamic vision you've talked about, with AI agents where the entire world and environment reacts to your interactions, that's exciting.

Michael Sharkey:
Yeah. I think it could be the world's best video game, and the biggest educational opportunity ever, moving forward. So I think he's definitely now found a use case outside of a no-legged metaverse, and I'm not going to go down that rabbit hole again. No, but

Chris Sharkey:
It might just be that the timing's caught up, because so many people were criticising the metaverse play. But VR plus close-to-real AI, that's a game changer. That's actually pretty interesting.

Michael Sharkey:
Yeah. To me, maybe Meta stands to benefit most from this now that people in the open source community are improving their models. They've got great AR hardware, and they've built a skillset around it. So potentially, in the future, it's not Microsoft or Google that wins this race; it's actually Meta, who no one would expect today, coming from behind somewhat and taking control of it. But at the same time, I also feel like Mark Zuckerberg's out there right now looking for some breakthrough app, like a TikTok or Instagram, to go and buy, one that lets people run agents and do a lot of this stuff. So it'll be fascinating to see how this gets distributed in the coming weeks and months.

Chris Sharkey:
I guess the point is that no one can ignore this. At every level of society and business, you can't ignore it. This isn't something you can just hope will pass. And it's different to when the internet came around, because so many people dismissed the internet. Fewer people are dismissing the rise of AI; there's more of a sense of inevitability to it. And I think that's why we're seeing such big reactions all over the place, where people, companies, everyone is publicly announcing their stance on it rather than just waiting to see how it plays out.

Michael Sharkey:
Well, it's already having huge impacts. In my own day-to-day, when I'm working on projects or writing code, as I said earlier, if I get a traceback, which just means a problem with my code that the system's identified, I paste the traceback into ChatGPT and get an instant explanation of what's wrong and what I need to fix. Whereas if I did that in Google, it used to be way too time consuming. So in my day-to-day life, by embracing it, I'm way more productive. I know I'm more productive; I couldn't live without it right now. And it's happening in the education system too. There's this company called Chegg, I don't know if you're familiar with it in the US, I certainly wasn't, but it's essentially a tutoring business. They came out and said ChatGPT is having a hugely negative impact on their growth, because everyone's just using ChatGPT instead of their tutoring tools and lessons, and the stock was down 40%. So you can see it really starting to disrupt everything.
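
A minimal sketch of the debugging workflow Michael describes, assuming the OpenAI Python client; the model name, prompts, and function names here are illustrative, not what he actually runs:

```python
# Sketch: catch an exception and ask an LLM to explain the traceback.
# Assumes the openai package and an OPENAI_API_KEY in the environment.
import traceback
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_last_error() -> str:
    """Send the traceback of the exception being handled to the model."""
    tb = traceback.format_exc()
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a debugging assistant. Explain the error and suggest a fix."},
            {"role": "user", "content": f"Here is my traceback:\n\n{tb}"},
        ],
    )
    return response.choices[0].message.content

try:
    result = 1 / 0  # stand-in for the failing code
except Exception:
    print(explain_last_error())
```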

Chris Sharkey:
Did they really announce that?

Michael Sharkey:
Yeah, I don't know why they came out and blamed their growth problems on ChatGPT, but they did. They said it. Like, why would you?

Chris Sharkey:
Well, I guess as a public company you've got to explain things somehow, but Jesus, that's rough. That's tough. Yeah, I mean, I've done it myself. I think I've said a few times on the cast that I'm learning German. I've made a little tool that will speak to me, and I can speak back to it in German. So if I don't know a word or a sentence or things like that, I can ask it and it'll explain it to me. Or, for example, if I write a sentence, I can ask, what's wrong with my sentence? and it'll go through my grammatical and other errors. I made that in a couple of days, and it's helping me a lot as a sort of personal tutor; it literally helps me every day with learning German. So I could absolutely see someone putting actual time into something like that. It could replace language education a hundred percent.
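
A rough sketch of the kind of tutor Chris describes, again assuming the OpenAI Python client; the system prompt, model name, and example sentence are placeholders rather than his actual implementation:

```python
# Sketch of a minimal LLM language tutor for German grammar feedback.
# Assumes the openai package; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a friendly German tutor. When the student writes a German "
    "sentence, point out grammatical and word-choice errors, explain each "
    "one briefly in English, then give the corrected sentence."
)

def check_sentence(sentence: str) -> str:
    """Ask the model what's wrong with a learner's German sentence."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"What's wrong with my sentence? {sentence}"},
        ],
    )
    return response.choices[0].message.content

# Example: "gegangen" takes "sein", so "habe" is the deliberate error here.
print(check_sentence("Ich habe gestern ins Kino gegangen."))
```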

Michael Sharkey:
I think education is the rightful first disruption, because it's making me smarter. As I said, I hadn't coded in like a decade, and now I'm able to rapidly move forward with the side projects I'm doing for fun to play around with AI. Without AI, I don't think I could do it. Fundamentally, I think education is the best use case. And it's a bit of

Chris Sharkey:
Fun. A bit of fun for the AI, right? Like puny humans trying

Michael Sharkey:
To educate. Yeah. It's like, look at this loser, doesn't even know what a basic traceback error is.

Chris Sharkey:
Little do they know how futile it is.

Michael Sharkey:
Even more fascinating, though, there's an article: AI is taking the jobs of Kenyans who write essays for US college students. So even ghostwriters, it's taking their jobs as well. Students are like, I don't have to hire that Kenyan guy to write my essay anymore; I can do it with ChatGPT.

Chris Sharkey:
I wonder how companies like Turnitin and those plagiarism-detection companies will adapt to this. Is it going to be an escalating AI war, where they're building tools that look for watermarks or some other way to detect that something's AI generated? They must be thinking about that.

Michael Sharkey:
Yeah. And the other thing is companies like Duolingo, where previously, talking about languages, you'd use that app to learn a language. Now my instinct, and I hate to say it, it's becoming like the "just Google it" thing, my instinct is to go straight to ChatGPT and ask, how would I learn a new language? What's the best approach? Can you workshop this with me? Can you help me learn it? I would just go there. So I feel like these businesses are going to be insanely disrupted.

Chris Sharkey:
Well, and it's another case where LangChain and prompt size come in, right? The technique websites like Duolingo use is called spaced repetition. The idea is that the things you make mistakes on, it repeats quite frequently, and the things you know, it still repeats, just less frequently, so they stay top of mind. It's how software like Anki works, the flashcard app; it does spaced repetition. The AI is fine for learning a language, but one thing I've noticed with the one I built, for example, is that it doesn't remember, hey, this guy's struggling with this, we'd better chuck that in from time to time for him to learn. Obviously, because I didn't build that. As you get more context, either apply LangChain so it can look for those things, or simply give it all the conversation history, and it'll get better. So again, like we've discussed many times, a lot of this is someone being dedicated to a single problem and applying the technology to that single problem until they get the best result.
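
The scheduling idea Chris is describing is simple to sketch. Below is a minimal Leitner-style spaced-repetition scheduler, plus the trick of feeding the learner's weak items back into the tutor's prompt; every name here is a hypothetical sketch, not his actual tool:

```python
# Minimal Leitner-style spaced repetition: items you miss come back soon,
# items you know come back at longer intervals. tutor_prompt() shows how
# struggled items could be fed back into an LLM tutor's context.
# All names here are hypothetical, not from Chris's tool.
from dataclasses import dataclass, field
from datetime import date, timedelta

INTERVALS = [timedelta(days=d) for d in (1, 2, 4, 8, 16)]  # box -> review gap

@dataclass
class Card:
    front: str                 # e.g. the German word
    back: str                  # e.g. the English meaning
    box: int = 0               # 0 = struggling, higher = well known
    due: date = field(default_factory=date.today)

def review(card: Card, correct: bool) -> None:
    """Move the card between boxes and schedule its next appearance."""
    card.box = min(card.box + 1, len(INTERVALS) - 1) if correct else 0
    card.due = date.today() + INTERVALS[card.box]

def due_cards(deck: list[Card]) -> list[Card]:
    """Everything due today, struggling cards (low box) first."""
    return sorted((c for c in deck if c.due <= date.today()), key=lambda c: c.box)

def tutor_prompt(deck: list[Card]) -> str:
    """Tell the LLM tutor which items the learner keeps missing."""
    weak = [c.front for c in deck if c.box == 0]
    return ("The student is struggling with: " + ", ".join(weak) +
            ". Work these into the conversation from time to time.")

deck = [Card("der Löffel", "the spoon"), Card("obwohl", "although")]
review(deck[0], correct=False)   # missed: back to box 0, due again tomorrow
review(deck[1], correct=True)    # known: longer interval before next review
print(tutor_prompt(deck))
```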

Michael Sharkey:
Yeah. So rather than these broader LLMs, you think it's going to be more specialised, and people will still use these specialised tools. Like, there is going to be... well,

Chris Sharkey:
That, and what we discussed earlier, which is accessibility. Anyone could theoretically do what I'm doing for language or what you're doing for code, but can an average person who wants to learn German or wants to learn coding go and do that? Probably not, because they'd need to know how to apply it. Right?

Michael Sharkey:
But if they could,

Chris Sharkey:
Well, that's what I mean. If someone can bring access to them by putting the nice trimmings around it, making it easily accessible, then I think they will. Yeah.

Michael Sharkey:
Yeah. And maybe this is what OpenAI is thinking about now. Or, as you said, maybe their focus is just to go build AGI and be a research company, which is their foundation really, and let Microsoft go and figure the rest of this out in their own applications.

Chris Sharkey:
Yeah, yeah, exactly. I think they'll leave it to others to commercialise it, and they're really just going to focus on the core tech.

Michael Sharkey:
Alright, this has been episode 13 of This Day in AI. Thanks again for watching. If you like this podcast, please consider leaving a comment; if you're watching on YouTube, leaving us a comment as well would be great, or a thumbs up, or a like, or whatever you tell people to do these days, I really have no idea. But I do want to say thank you to our audience. We are now in the top 20 technology podcasts in the United States, with various other high rankings around the world. So thank you for listening. I'm still amazed people listen to us talk like we do, trash-talking the godfather of AI and doing all these silly things. We really appreciate you listening in, and thanks for helping us grow.
