Futuristic #25 – GPT4o and Free Therapy


May 18, 2024 · 56 mins

This week on Futuristic we’re talking about the new GPT-4o model; GPT officially passing the Turing test; the OpenAI founder who thinks AGI is only 2-3 years away; Ilya Sutskever leaving OpenAI; Sam Altman saying we aren’t worried enough about how AI will impact the economy; Google’s medical AI beating GPT’s benchmark and outperforming doctors; and ChatGPT-4 beating 100% of psychologists in a study of social intelligence.


00:00 Kicking Off with Technical Glitches and AI Intrigues
00:31 Parenting in the Age of AI: A GPT-3 Experiment
01:45 Exploring Global Hypocrisy with GPT Conversations
03:55 Innovative Uses of GPT in Analyzing Budgets and Historical Figures
09:59 The Evolution of AI and Personal Computing: A 50-Year Journey
13:27 Tackling the Turing Test with GPT-4: Human or AI?
20:47 The Departure of OpenAI’s Chief Scientist: A Turning Point
23:12 The AI Timeline: Predictions and Implications
24:30 Exploring AGI Supremacy and Its Societal Impact
39:54 The Future of AI in Healthcare: A New Hope
43:31 AI’s Role in Psychology and Personal Well-being
54:17 Envisioning a Future with Personal Services AI


FULL TRANSCRIPT

Futuristic 25


[00:00:00] Cameron: Welcome to the Futuristic Episode 25. We’re back. It’s the


[00:00:09] Cameron: 17th of May. Steve Sammartino got some technical issues this morning.


[00:00:13] Cameron: Sorry about that. Don’t know what’s going on. I think the AI is trying


[00:00:17] Cameron: to stop us from having this conversation. This is the conversation that doesn’t want you to hear.


[00:00:23] Steve: Sentient.


[00:00:24] Steve: Sentient.


[00:00:25] Cameron: Steve, um, tell me what’s been


[00:00:26] Cameron: going on in your corner of the world


[00:00:29] Cameron: since we last spoke.


[00:00:30] Steve: Yeah. Well, I, uh, I’ve been


[00:00:33] Steve: using GPT


[00:00:36] Steve: to influence my son. He won’t


[00:00:38] Steve: listen to me. So what I did the other day, we’re lying in bed and I


[00:00:41] Steve: was trying to tell him how social media isn’t as good as he


[00:00:45] Steve: thinks it is, certainly the wormholes that he goes down. He wasn’t interested in what I had to say. He


[00:00:51] Steve: said, what would you know?


[00:00:51] Steve: I said, well, why don’t we ask someone who knows


[00:00:53] Steve: everything, has a PhD in every single subject. And although he’s


[00:00:56] Steve: across ChatGPT and everything, he’s never really had a live conversation with


[00:01:00] Steve: the voice-activated version. So I did that and I gave it to him. And, you know, I said, oh, my son really likes TikTok, but he goes down some terrible wormholes with his algorithms, and I’m not sure it’s making him smarter, and I asked him to ask a question back and forth, and he actually


[00:01:17] Steve: stopped and listened.


[00:01:18] Steve: It was almost like it had this


[00:01:20] Steve: separation or a sense of authority, and he paid more


[00:01:23] Steve: attention to it. A little bit like, you know, when you, your clever uncle


[00:01:27] Steve: or someone that is not your dad, because who wants to listen to their dad? And it was a really


[00:01:32] Steve: interesting way to get some information across to him that he just wasn’t prepared to hear from me.


[00:01:39] Steve: And it was a really cool little


[00:01:41] Steve: thing, you know.


[00:01:42] Cameron: That is really cool. I like that. Well, the only thing that I’ve been doing any


[00:01:47] Cameron: different from what I normally do, which is coding and all that kind of crap, is, you know, I go into GPT often, and I have conversations with


[00:01:53] Cameron: it about subjects that I’m trying to unpick. What I’ve started doing is blogging those.


[00:01:59] Cameron: So if you go over to CameronReilly.com and have a look at some of my recent conversations, I just did one on American hypocrisy over


[00:02:06] Cameron: tariffs. How they’re always pushing


[00:02:08] Cameron: for free trade and they’re using the IMF and the World Bank to enforce free trade


[00:02:12] Cameron: and uh, no tariffs, no duties on other countries to open up their markets, but then they will apply tariffs


[00:02:19] Cameron: as and when they see fit to protect their domestic industries, to punish, uh, China, as they’ve been doing again recently under the Biden administration. I did a


[00:02:29] Cameron: post, uh, before that on, um, the conversation with


[00:02:34] Cameron: it about, uh, American hypocrisy over China, um, again, looking at world trade


[00:02:41] Cameron: disputes and, uh, complaining about China producing too much stuff! Like the U.S.


[00:02:48] Cameron: hasn’t been flooding the world with stuff for decades. Now China’s flooding the world with stuff. They’re like, you gotta stop it!


[00:02:54] Cameron: You’re producing too much


[00:02:55] Cameron: stuff! Too much clean energy! Stop, stop producing so many solar panels! You might make, you might clean up the world! Might do something about climate change! But anyway, the point is just, I’m just blogging these conversations because I think, okay, well I’m having


[00:03:09] Cameron: these conversations and uh, I should blog


[00:03:12] Cameron: them and share them.


[00:03:13] Cameron: I don’t know if anyone cares, probably not, but,


[00:03:15] Steve: know if anyone’s even done that. I mean, I’m just reading it now, and I agree with GPT, had some very


[00:03:21] Steve: good insights on why they do uh, have trade, uh, tariffs, um, embargoes, all those kind of things.


[00:03:31] Steve: It’s always been that way. And every economy


[00:03:33] Steve: is filled with contradictions, right? Uh, vested


[00:03:36] Steve: interests. And, and, and that’s the whole idea that, you know, pure capitalism has never existed, pure socialism, pure anything


[00:03:41] Steve: has never really existed. Um, but I like that idea that


[00:03:45] Steve: instead of having the conversation with yourself or talking about an interview, you’re just taking the screenshot and throwing it down there.


[00:03:54] Steve: It’s interesting. One interesting thing that I saw this week, a good friend of mine, I think you might know him now, Josh Rowe


[00:03:59] Steve: from Melbourne,


[00:04:00] Cameron: hmm,


[00:04:01] Steve: part of the early Twitter cohort back in


[00:04:04] Steve: And he, this week for the budget, he took all of the


[00:04:07] Steve: papers and the PDFs, and it might have been, you know, 3,000 words of PDF documents that have been prepared


[00:04:13] Steve: for, for a time, and he put it on his,


[00:04:16] Steve: um, he’s got a Substack called AI at Every


[00:04:19] Steve: Scale, and


[00:04:21] Steve: what he did was upload all of the documents, give some parameters for GPT on what it was, what the documents are about, um, uploaded to the brain the


[00:04:30] Steve: documents that wouldn’t be in GPT


[00:04:32] Steve: given, uh, you know, the cut-off


[00:04:35] Steve: time and said, okay, now you’re in a room discussing.


[00:04:37] Steve: Discussing the various


[00:04:39] Steve: implications of the budget. In the room is Milton Friedman, in the room is John Maynard Keynes, in the room is


[00:04:46] Steve: Chalmers, uh, in the room


[00:04:48] Steve: is


[00:04:49] Steve: someone who’s paying a mortgage with three kids,


[00:04:52] Steve: and it was really brilliant, it was a long post, but it had the discussion points


[00:04:56] Steve: that each of them would say, and he created a mock round


[00:04:59] Steve: table using GPT, and I thought, you know, you’ve reminded me of this, what a clever way to interpret


[00:05:06] Steve: the tool for a new use case which hasn’t happened before, combining modern, uh, new recent, uh, data that’s


[00:05:13] Steve: come out with the budget, current people in society and their challenges, historical economic figures, and it kind of, not only


[00:05:20] Steve: was it brilliant and insightful, it just reminded me again, that everyone that


[00:05:25] Steve: says that, oh, every job is going to go away.


[00:05:28] Steve: it’s


[00:05:28] Steve: like, yeah, of course they are. And there’s not that many bison hunters anymore either, right? Whenever a job goes


[00:05:33] Steve: away, new ones come and take their place. And I know that we’ve got some stuff with Sam Altman that we’re going to talk about with


[00:05:39] Steve: that. And I’ve got some views on that as well, but I love what


[00:05:42] Steve: you’ve done here, Cameron, it’s really smart.


[00:05:44] Cameron: hmm. Thanks, mate. Yeah, I just want to encourage people to use the tools in intelligent ways


[00:05:50] Cameron: and, um,


[00:05:51] Cameron: you know, have intelligent conversations


[00:05:53] Cameron: with them. Don’t just use them for, you know, checking the spelling of your emails or rewriting your emails. Use them to have intelligent conversations.


[00:06:01] Steve: Yeah.


[00:06:01] Cameron: Well, Steve, it’s been a huge


[00:06:03] Cameron: week.


[00:06:04] Cameron: I mean, for the last couple of weeks, you and I have been pinging each other late in the week. Are we going to do a show? There’s not much happened, but this week’s one of those weeks where a big thing happened,


[00:06:12] Cameron: so we had to. The big thing, of course, is OpenAI


[00:06:15] Cameron: released version 4o of ChatGPT.


[00:06:21] Cameron: 4o. The o stands for, oh my god, I think we’ve arrived at 4o.


[00:06:26] Steve: Does it? Cause I did watch, I’m sure someone has said what it stands for, but I actually don’t know what


[00:06:31] Steve: it


[00:06:31] Cameron: It stands for Omni officially, but I prefer to think of it as, oh my God, I can’t believe how cool those demos were. Um, I, uh, was


[00:06:42] Cameron: driving my son Tyler to the airport. He’s back in LA


[00:06:45] Cameron: hanging out with Will Smith today. And, um, He and I,


[00:06:50] Steve: over. Sorry. No go. We are out. I refuse to


[00:06:53] Steve: watch or anyway,


[00:06:55] Cameron: and Martin Lawrence,


[00:06:56] Cameron: he’s at a Bad Boys thing. Anyway, um, he and I were talking about it


[00:07:00] Cameron: and going over the demos. So for people that haven’t seen or caught up with the news


[00:07:04] Cameron: yet, basically this new version that they’re rolling out, a couple of


[00:07:09] Cameron: big things. Number one is that they’re making it available


[00:07:12] Cameron: for free to everybody.


[00:07:13] Cameron: You don’t have to have a premium subscription to get GPT-4o.


[00:07:19] Cameron: But if you’re a paid subscriber, you get more messages. You get


[00:07:23] Cameron: higher limits on, on, on how much you can use it over the course of a day for the premium users, the


[00:07:30] Cameron: super users. Secondly, the


[00:07:32] Cameron: other big thing was they’re introducing


[00:07:35] Cameron: a video model into it.


[00:07:37] Cameron: So not only, uh, Omni


[00:07:39] Cameron: being multimodal Omni, so not only can you interact with it using text and voice.


[00:07:46] Cameron: And pictures, uh, images, now you can


[00:07:49] Cameron: interact with it with video, it’ll use the video camera on your phone, it can see around you, it can see you, and you can point it at things and go, tell me more


[00:07:59] Cameron: about this, what am I looking at, what’s going on? And


[00:08:02] Cameron: it will give you real time feedback. They’ve optimized it and


[00:08:07] Cameron: cut down the lag between question and answer to what


[00:08:09] Cameron: they say are human like response


[00:08:12] Cameron: levels. Um, and the demos


[00:08:16] Cameron: looked absolutely super cool. What did you think of the launch?


[00:08:20] Steve: Yeah, I really liked the demo, like the idea: I’m going on a date or I’ve got a job


[00:08:24] Steve: interview. How do I look? I mean, some of that stuff


[00:08:27] Steve: is, you know, I think it crosses one of the chasms that’s been lacking with the


[00:08:33] Steve: GPTs, and that is the lag on the


[00:08:36] Steve: data that they’ve been trained on, and having that real


[00:08:39] Steve: time, we had it with images, but having it with video where it can create interactions based on what


[00:08:43] Steve: it’s been trained on previously, it creates a new layer of value, it


[00:08:47] Steve: really does.


[00:08:49] Cameron: Yeah, I think there’s going to be a whole bunch of applications for


[00:08:51] Cameron: this that we haven’t even thought of yet. I mean, one of the demo videos, I


[00:08:56] Cameron: saw somebody was, I don’t know, in Europe or something, and they’re pointing it at an


[00:09:01] Cameron: old palace or something, or a flag. What’s this flag mean? What’s this palace?


[00:09:05] Cameron: Who built this? When was it built? You could have done that with photos. Videos aren’t really


[00:09:09] Cameron: necessary. But, you know, I saw Sal


[00:09:11] Cameron: Khan from the Khan Academy with his son doing a demo where they had


[00:09:16] Cameron: GPT running on an iPad alongside a drawing app, not sure if it was Khan Academy or some other drawing


[00:09:22] Cameron: app, and Sal said to GPT,


[00:09:25] Cameron: my son’s doing geometry, he’s trying to figure out the angles in this triangle. Don’t give him the


[00:09:32] Cameron: answers, but tell him how to work out the answers, and then they could, they were, they must have been sharing the screen with GPT, and they would draw, they were touching


[00:09:41] Cameron: the, you know, the hypotenuse. And it was saying, okay, so that’s the hypotenuse, and here’s how, you know, all three angles add up to 180 degrees, and you can calculate this one by that. It was, um, an interesting use case, um, but,


[00:09:56] Cameron: like, you know, it’s just another evolution.


[00:09:59] Cameron: You know, I’ve been saying to people recently,


[00:10:02] Cameron: you know, I think it was 1976


[00:10:04] Cameron: that the first personal computer came out from Apple. ’75, ’76?


[00:10:11] Cameron: We are almost at the 50th anniversary of the


[00:10:13] Cameron: first personal computer. We’ve gone, in 50 years, we’ve gone from the invention of the personal


[00:10:20] Cameron: computer to having


[00:10:22] Cameron: an AI device in our back pockets.


[00:10:24] Cameron: That’s mind blowing to me. 50 years. In our lifetime.


[00:10:30] Cameron: A lifespan. We’ve gone from, hey, let’s invent a computer,


[00:10:33] Cameron: to


[00:10:34] Cameron: let’s put AI in your back pocket.


[00:10:36] Steve: can do, it could count a few things, and maybe, yeah, I mean, we just, you’ve got to remember


[00:10:41] Steve: how rudimentary the early


[00:10:43] Cameron: Didn’t even have a monitor


[00:10:45] Cameron: at


[00:10:45] Steve: Yeah, right. The first one is X, right? They didn’t have monitors. You’re right about that. And


[00:10:48] Steve: punch cards and Well, the other thing too,


[00:10:52] Steve: and I did a presentation to Mercedes Benz trucks, Daimler, um, for the launch of their electric vehicles last week.


[00:11:00] Steve: I did three in a


[00:11:00] Steve: row. as part of it, I studied a


[00:11:03] Steve: fair bit of The industrialization of, uh,


[00:11:08] Steve: you know, what happened in the industrial era with, with transport and how incremental the


[00:11:13] Steve: improvements were. That was so incremental. Yes, the internal combustion engine


[00:11:18] Steve: has become more efficient, but really in real terms,


[00:11:23] Steve: not that much more efficient in a hundred years, like almost compared to what computers do, almost


[00:11:29] Steve: nothing.


[00:11:31] Steve: Almost like no advance if you compare the 50 years of computational technology and how much it’s improved exponentially compared to industrial technology. And I guess it just comes down to simple things like this, and this is what I said to everyone on stage. I said a barrel of oil has the same number


[00:11:46] Steve: of kilojoules it’s always had.


[00:11:48] Steve: That’s


[00:11:49] Steve: it. The joules of energy just does not change. A lump of coal, a lump of coal is the same. And computational


[00:11:55] Steve: technology is just fundamentally in the way that it works. It’s just not the same species. And so, we see, and we see that now because energy is now an exponential technology as well


[00:12:06] Steve: because it’s based on the same stuff.


[00:12:07] Steve: And if you have enough


[00:12:08] Steve: information, you can create energy from information because energy and mass are the same thing, right? And it, it comes back to that whole


[00:12:14] Steve: full circle.


[00:12:16] Cameron: yeah. Yeah, it’s been an incredible


[00:12:20] Cameron: half century of progress. I was even thinking about it in terms of electricity. Edison and Tesla figuring out how we could


[00:12:27] Cameron: use electricity, uh, a little over a hundred years


[00:12:31] Cameron: ago. We’ve gone from


[00:12:34] Cameron: rolling out electric lights


[00:12:35] Cameron: in the first cities a hundred years ago to an AI in your back pocket. It’s like, in terms of the time span of human history, it’s just an insane level of progress. Anyway.


[00:12:51] Cameron: Speaking of progress, oh, I did use ChatGPT to help me diagnose what was wrong with my washing machine,


[00:12:57] Cameron: um, the other day, and took it apart and was showing it photos of what I


[00:13:02] Cameron: could see inside, and it was telling me what to look at, and I realized I needed a new washing machine, so that helped.


[00:13:07] Cameron: There’s a new preprint out, Cameron Jones, uh, no relation, from the Department of Cognitive Science at UC San


[00:13:14] Cameron: Diego,


[00:13:14] Steve: People with their first name are usually related to each other with the same first names. A lot of people don’t realize


[00:13:18] Steve: that


[00:13:19] Steve: Cameron.


[00:13:19] Cameron: yeah, it’s in the DNA, um,


[00:13:21] Cameron: he, he, he tweeted this the other day, he and a colleague, Benjamin Bergen: We


[00:13:27] Cameron: have done a study on GPT-4 and the Turing test. We recruited 500 participants who were assigned to one of five conditions: human witnesses, or interrogators


[00:13:42] Cameron: for one of four witness types: GPT-4, human, or ELIZA.


[00:13:51] Cameron: and basically they gave them a selection of questions and tried to figure, you know, ask the humans


[00:13:57] Cameron: to determine if the responses were coming from a human or from one of the AIs.


[00:14:05] Cameron: People judged GPT-4 to be human 54 percent of the time,


[00:14:11] Cameron: compared to 22 percent for ELIZA and 67 percent for humans. The implication is that people are


[00:14:18] Cameron: at chance in determining that GPT-4 is an AI, even though the study is


[00:14:23] Cameron: powerful enough to detect differences from 50 percent accuracy. So, I mean, we’ve talked a bit on the show about, um, the Turing test


[00:14:36] Cameron: And uh, how we’ve kind of blown past it. I think we all sort of assumed


[00:14:41] Cameron: that we had blown past the Turing test. Um,


[00:14:45] Cameron: one day it seemed


[00:14:46] Cameron: relevant. And, uh, the next day we were like, Oh, remember that? Wasn’t that


[00:14:50] Cameron: cute?


[00:14:51] Steve: Yeah, well, again, doing a chatbot Turing test, I don’t know how much


[00:14:57] Steve: relevance there is to that. And one of the things that I noticed, and I actually wasn’t that impressed with the study, to be honest, when I had a look at, yeah, I was


[00:15:04] Steve: really unimpressed, um, but, but the limited amounts that I saw on what the conversations were, and,


[00:15:09] Steve: and, and this is why, this is not based on the researchers, this is actually more


[00:15:14] Steve: based on the reality of human behavior.


[00:15:17] Steve: Because what I noticed was the


[00:15:19] Steve: questions that the humans asked weren’t really valuable to understand whether or not, uh, it


[00:15:27] Steve: was a computer. Like you got to be careful that you don’t ask


[00:15:30] Steve: questions that are too intelligent. There’s a real, there’s a real balance in trying to decipher whether or not the AI is a human.


[00:15:39] Steve: And


[00:15:39] Steve: I just feel like the average person doesn’t have the awareness on computational systems to ask valuable


[00:15:47] Steve: questions about whether or not it’s an AI or a human. That, that was my outtake on it. I was really


[00:15:54] Steve: unimpressed by it. And 54 percent isn’t really a huge ratio. Yes, it’s hit or miss,


[00:15:59] Steve: you might get it.


[00:16:00] Steve: I didn’t think that was huge. For me, for anything to be significant, it’s got to be a


[00:16:04] Steve: 95 in my view. Like, you’ve got to get it right 95 percent of the time. Otherwise, it’s kind of hit or miss, there’s a bit of luck. I wasn’t that impressed, I really


[00:16:14] Steve: wasn’t.


[00:16:16] Cameron: Oh, wow.


[00:16:17] Steve: Like, like, like, for example, okay, what is the, one of the questions was, what is the


[00:16:22] Steve: capital of Ethiopia?


[00:16:23] Steve: You know, 90 percent of people don’t know the answer to, right? Is that a good or a bad question?


[00:16:28] Steve: The questions that anyone ought to be asking to


[00:16:31] Steve: determine whether or not it’s an AI is questions That would be


[00:16:36] Steve: physical and


[00:16:37] Steve: personal, not general. Like we already know that these AIs have a far greater general


[00:16:43] Steve: intelligence than, um,


[00:16:46] Steve: (phone going off) There’s


[00:16:47] Steve: a far greater


[00:16:47] Steve: intelligence than the average person.


[00:16:49] Steve: They’ve got a PhD in every


[00:16:50] Steve: single subject. So the questions that really matter are the mundane things.


[00:16:53] Steve: What did you have for breakfast? What was the weather like? How did you get to work? It’s all of that stuff. The really mundane, humane


[00:17:00] Steve: stuff. That gives us an indication of the nuance rather than the intellectual stuff.


[00:17:05] Steve: Anyway, that’s just, you know, look, I am


[00:17:08] Steve: Falling Down. I’ve got my Falling Down Michael Douglas glasses on today. So there’s a lot of


[00:17:11] Steve: that attitude that you’re going to get,


[00:17:15] Cameron: Well, okay, so let’s go back a few steps. What’s, uh, for people who don’t know ELIZA, ELIZA’s an early


[00:17:19] Cameron: natural language processing engine that was built in the 60s, uh, so that’s why it performed quite


[00:17:26] Cameron: badly. But, you know, I think that there’s a couple of important things here. Number one is that for


[00:17:32] Cameron: 50 years we’ve talked about the Turing test as being one of the defining metrics that we would have to determine whether or


[00:17:40] Cameron: not we had an artificial intelligence that could pass as a human.


[00:17:44] Cameron: The fact that it was fooling in a way, in a manner of speaking, the


[00:17:50] Cameron: human interrogators more than 50 percent of the time means it passes the test.


[00:17:55] Cameron: Uh,


[00:17:55] Steve: agree on, right? We agree on that.


[00:17:57] Steve: Yeah. I mean, we agree that it can do that. Anyway, keep going.


[00:18:01] Cameron: so the second thing in terms of the sort of questions that it’s asking, you, you don’t want to make it


[00:18:06] Cameron: too hard. Like, what did I have for breakfast? What did you have


[00:18:09] Cameron: for breakfast this morning? I think you’re, you’re setting a, um,


[00:18:14] Cameron: a hurdle there that, for the current state of AIs, is probably unrealistic, um, because it’s gonna say, I didn’t eat breakfast, I’m an


[00:18:23] Cameron: AI, you know, generally speaking, unless you’re telling it


[00:18:26] Steve: That’s the point. The point of the Turing test


[00:18:28] Steve: is that the, the AI should


[00:18:31] Steve: be able to adapt to the


[00:18:32] Steve: situation, to be able to fool the other person at the other end. That’s the point of the test.


[00:18:40] Cameron: Um, yeah, I’m not sure,


[00:18:43] Steve: being a general intelligence test.


[00:18:44] Steve: ’cause we already know that AI is


[00:18:46] Steve: A GPT knows


[00:18:48] Steve: more on a broad range of topics than anyone in the world. Not


[00:18:53] Steve: even close


[00:18:55] Cameron: yes. But I


[00:18:57] Cameron: don’t think that’s necessarily what we’re trying to do with the Turing test. And I don’t think you want to bias it either


[00:19:02] Cameron: way. You don’t want to bias it in a way that it’s going to be too hard for an AI to pass. You don’t want to bias it in a way that it’s going to be too easy for it to pass. You need some sort of a test that’s reasonable, has a reasonable list of questions that


[00:19:18] Cameron: both a machine and a human should be able to answer. And then it’s up to the


[00:19:22] Cameron: human interrogator to determine whether or not


[00:19:24] Steve: the direction I


[00:19:25] Cameron: It’s, it’s a


[00:19:26] Cameron: human or a machine.


[00:19:28] Cameron: Anyway, I don’t want to


[00:19:29] Cameron: spend too much time on this, but I think the fact that somebody has actually sat down and tried to do


[00:19:35] Cameron: a scientific experiment to determine whether or not it


[00:19:40] Cameron: actually can pass the Turing test, and in their


[00:19:44] Cameron: uh, conclusions, it has. And it’s interesting, there’s a whole bunch of implications


[00:19:50] Cameron: for that, like, as


[00:19:51] Cameron: they point out in the


[00:19:52] Cameron: study, um, can humans tell now when you’re chatting to someone online, whether or


[00:19:59] Cameron: not you’re chatting to a human or an AI.


[00:20:02] Cameron: At least 54 percent of the time it seems that humans won’t be able to tell that they’re chatting to an AI.


[00:20:07] Steve: get. Even some of the bots on


[00:20:08] Steve: websites that you talk to when you’re trying to solve a little problem these days are pretty damn good. But I’d always ask to say, is


[00:20:12] Steve: this a person? And, and I would want that, not because I care


[00:20:15] Steve: if it’s an AI, it’s just to give me a context of where to lead the conversation to get the result that I’m after.


[00:20:20] Cameron: Yeah, right, sure. And I would expect a higher level of service from an AI


[00:20:25] Cameron: than I would from a


[00:20:25] Steve: You always exactly like ai. Thank God for that. Right. Now


[00:20:28] Cameron: Yeah.


[00:20:30] Cameron: Yeah. Uh, look, I know you’ve got to go in five minutes because you’ve got a radio show to


[00:20:36] Cameron: do.


[00:20:37] Steve: doing a radio show with Old World Media, Cameron,


[00:20:41] Steve: 3AW,


[00:20:42] Cameron: Good old 3AW.


[00:20:43] Steve: all our global listeners.


[00:20:44] Cameron: they’re still


[00:20:44] Cameron: around,


[00:20:45] Steve: can’t either.


[00:20:47] Cameron: Couple of quick things in the news, um, Ilya Sutskever, the Chief Scientist at OpenAI, officially left this


[00:20:54] Cameron: week. Uh, Sam and Ilya independently announced his


[00:20:58] Cameron: leaving. Not much surprise


[00:21:00] Cameron: in that, um, after Ilya sided against Sam


[00:21:04] Cameron: in the board kerfuffle late


[00:21:06] Cameron: last year, he’s sort of been, um, hiding for the last six months, and


[00:21:12] Cameron: he’s finally, uh, popped his head up to say that he’s leaving to


[00:21:14] Cameron: do some sort of project that’s personal and special, hasn’t said what, but, um, that’s


[00:21:20] Cameron: a sad moment, uh, I think for a lot of


[00:21:23] Cameron: us that have been following this story for


[00:21:24] Cameron: a while, that, the guy who is probably one of the key people behind the AI revolution


[00:21:32] Cameron: has had to leave, uh, the place where he worked for 10 years to build it.


[00:21:37] Steve: So, just trying to think back to when we had the kerfuffle last year when


[00:21:42] Steve: Altman was


[00:21:43] Steve: abruptly sacked, um, where Ilya


[00:21:47] Steve: was.


[00:21:48] Steve: On that and him to be leaving


[00:21:51] Steve: out six months later. Um,


[00:21:54] Cameron: Ilya was a, Ilya was a board member, and he voted for


[00:21:58] Cameron: Sam’s, uh, execution.


[00:22:02] Steve: yeah, right. Okay. And, and,


[00:22:04] Steve: and do you think that he was pushed? There was no indication of why, how


[00:22:06] Steve: other than the mutual love that they had for each other on, on their tweets


[00:22:10] Steve: announcing, uh, his exit, he’s going to work


[00:22:13] Steve: on something personal apparently. So, um,


[00:22:18] Cameron: Yeah. Like I think,


[00:22:19] Steve: bring other interests


[00:22:21] Cameron: a major, a major


[00:22:22] Cameron: breach of, um, trust obviously between the two of them, you know, the internet is still full of what did Ilya see? You know, the, the conclusion that a lot of


[00:22:31] Cameron: conspiracy theorists on Reddit have had in the last six months is that


[00:22:35] Cameron: for Ilya to do that he must have


[00:22:37] Cameron: been seriously concerned about what OpenAI was capable of.


[00:22:42] Cameron: Uh, what ChatGPT or whatever the


[00:22:43] Cameron: underlying technology is, what it was capable of. And


[00:22:48] Cameron: he was concerned that Sam wasn’t doing enough to take it seriously. We don’t know if there’s


[00:22:54] Cameron: any truth to that, but it was a


[00:22:56] Cameron: very strange time. And, um, we, neither


[00:22:59] Cameron: of them


[00:22:59] Cameron: have talked about what really happened. We still don’t know, none of the


[00:23:03] Cameron: board members at the time have really spoken frankly about it.


[00:23:06] Cameron: Sam hasn’t, it’s been six months. I wonder if we’ll ever know. Anyway, he’s


[00:23:12] Cameron: left. Um, one of the other OpenAI founders, a guy by the name of, um, uh, John Schulman,


[00:23:22] Cameron: who I think is in charge of, uh, post-training ChatGPT these days, was interviewed on


[00:23:30] Cameron: Dwarkesh Patel’s podcast this week. And at one point said he thinks


[00:23:35] Cameron: we’re two to three years


[00:23:36] Cameron: away. Two to three years. I saw Sam on another podcast I was watching this morning where he said, um, it won’t be the next model,


[00:23:51] Cameron: but it could be one or two models after that,


[00:23:55] Cameron: So again, that’s sort


[00:23:56] Cameron: of putting it, the two to three years time uh, depending on the, the time gap between models, the iteration


[00:24:02] Cameron: time gap.


[00:24:03] Cameron: But like we’ve got like a last time we did a show, we said a lot of experts were saying five years. Now they’re starting to say two to three years. And I know we haven’t done a show for a couple of weeks, but that’s a big time compression in expectations.


[00:24:18] Steve: a month while the Kurzweil number


[00:24:20] Steve: of


[00:24:20] Steve: 2029 seems to be within most people’s scope now.


[00:24:25] Steve: Yeah, the various


[00:24:27] Steve: definitions of AGI


[00:24:28] Steve: is,


[00:24:30] Steve: I mean, two things came up for me that I thought about. The first one is: do we have such a thing as AGI supremacy? I know that with


[00:24:38] Steve: quantum computing, we’ve spoken


[00:24:40] Steve: about the first person to truly nail a functional quantum


[00:24:44] Steve: computer kind of wins because there can only be one which supplants everything else.


[00:24:48] Steve: And I just wonder if AGI


[00:24:51] Steve: supremacy is a thing. Because if it becomes that intelligent, then it can whitewash over all the other competitors and not give them a chance to catch up. I don’t know.


[00:25:01] Steve: And the other one that was interesting for me as well


[00:25:04] Steve: is, you know, they're very capital and energy


[00:25:06] Steve: intensive, uh, so what happens


[00:25:10] Steve: with that?


[00:25:11] Steve: Do the AGIs


[00:25:12] Steve: stay within the hands of a conglomerate, a tech conglomerate, or big tech, or a couple of firms? Or is it


[00:25:19] Steve: something that gets released to the general public? Like if this AGI happens, do we get open


[00:25:25] Steve: source versions of this software where we all get access to it?


[00:25:29] Steve: Or do we end up,


[00:25:31] Steve: you know, as serfs to AGI


[00:25:33] Steve: supremacy? Over to you, Mr. Reilly, you geopolitical warrior.


[00:25:38] Cameron: Yeah, I’ve been thinking a lot about that Steve because obviously if we have AGI but it’s in the hands of


[00:25:46] Cameron: one or even two or more capitalist


[00:25:50] Cameron: corporations that are going to enslave it and serve you an ad every time it answers a question, that is not going to be the optimal result that we're looking for here.


[00:26:02] Cameron: That's not the utopian result that we want. The opportunity for them to exist, being as resource intensive as they


[00:26:11] Cameron: seem to be in an open source or some sort of a freeware


[00:26:16] Cameron: model really depends, I think, a lot


[00:26:19] Cameron: on where we end up with optimization like this. I still get the sense listening to Sam and,


[00:26:26] Cameron: um, Hassabis and Elon and these guys is that the sort of levels of compute that they are throwing at them at the moment is partly because we don't really


[00:26:40] Cameron: understand how these things work.


[00:26:43] Cameron: We just have learned that if you throw way more compute at


[00:26:46] Cameron: it, they can do way more amazing


[00:26:48] Cameron: things. But, oh, there’s my alarm telling me that I


[00:26:53] Cameron: have to let you go in a minute. Uh, but if we, if we figure out how to, you know, like the 80/20 rule, if we figure out that actually you can get most of the functioning that you need from an AI with 20 percent of the compute resources, um, we might end up with a situation where you can run one locally on your iPhone that will do nearly everything that you need it to do. But if you want something to cure cancer, you may need to throw a lot more compute at it, right?


[00:27:21] Steve: But do you really need that much compute too? Because most of these models are so general and trained on


[00:27:26] Steve: so many things. Can you have a thinned up version of what it’s trained


[00:27:30] Steve: on? You know, so,


[00:27:32] Steve: for example, the, the iPhone, you know, to have a personal AI that can really navigate your phone and everything that


[00:27:38] Steve: you know, and give you what you need, and only tap in to the wider, um, large language models, surely wouldn’t require


[00:27:45] Steve: that much compute, or even, like you say, a research, medical research one could be thinner.


[00:27:50] Steve: Um, everything at the moment is so


[00:27:52] Steve: general, and I know


[00:27:54] Steve: that we’ve been in a general purpose computing paradigm for a long time, but what are the, I don’t know, the GPTs, the smaller versions that are like the apps of the smartphone era, that are thinner and just have a set functionality, which doesn’t require the same level of compute?


[00:28:11] Steve: Because, you know, one of the interesting ideas is that in some ways, big tech has now become big energy. And Scott Galloway, Professor Scott Galloway, did a really interesting post on that last week on his No Mercy, No Malice podcast. And he's a marketing professor with a tech bent. I don't know if you've followed his work much.


[00:28:29] Steve: Um, yeah, pretty smart guy. And his kind of idea was, well, it was all about access to the resource. The


[00:28:36] Steve: resource in the industrial era was


[00:28:39] Steve: energy and oil and electricity, and, and then now it's kind of, it's compute. And,


[00:28:45] Steve: and they become quasi energy companies and compute and access to compute


[00:28:51] Steve: is really what changes things.


[00:28:53] Steve: And he drew a really interesting


[00:28:54] Steve: analogy between those two and that that’s the energy of


[00:28:58] Steve: the modern era. And he again referenced the, the Einstein idea that, you know, mass and


[00:29:03] Steve: energy are the same thing as well. And,


[00:29:05] Cameron: So, Steve's back from his 3AW interview, let's get on with the rest of the show. So, Steve, um, couple of other quick things in the news. Sam Altman's done a couple of interviews recently, um, he's agreeing with us that we're not worried enough about how the AI will impact the economy. He keeps saying that.


[00:29:27] Cameron: People aren't talking about it enough and, and I agree with him. I'm not worried about it taking people's jobs and that kind of thing that you often hear it talked about, although that will happen. It'll take jobs, it'll create jobs. You know, as I've said before, the thing that boggles my mind is, I don't think people realize what's about to happen.


[00:29:51] Cameron: And they're still, like, businesses and, and governments are just going about their knitting. Yeah, they're looking at AI and they're thinking about it and they're playing with it around the edges, but they're not really thinking about what is going to be the impact of a massive explosion of available intelligence in the next couple of years. What's going to happen to their business, to their industry, to society in general? We're just not talking about it enough. All of the conversation I hear in the public spaces about AI is all about either p(doom)-related stuff, or people are going, oh, they're going to take jobs, and we need a UBI, and that kind of stuff.


[00:30:34] Cameron: But I don't think these are the issues that we need to be talking about. How do we start to gear up our society for an explosion of intelligence?


[00:30:44] Steve: Intelligence explosion, a Cambrian explosion of intelligence. We're going to have to come up with a name for that. Well, I agree with you. Uh, I'm not worried about p(doom), uh, we've discussed that. I'm, I'm certainly not worried about it taking jobs. Yes, AI will supplant certain jobs, but as we've always said, you know, there's not that many bison hunters anymore. Jobs will go out and jobs will come in.


[00:31:07] Steve: And there's a zillion examples of jobs that didn't exist 20 years ago, and a lot that are gone, right? And that happens slower than we think. A company just doesn't turn up on a Monday and go, AI solved every problem, see you later, pink slips for everyone. It just doesn't happen that quick. You know, one of the reasons it doesn't happen is, guess what managers love?


[00:31:26] Steve: As many staff under their control as they can have. That's what they like. I like that. I'm more interested in the societal implications of what happens if people only have relationships with AIs. What is the algorithmic social media equivalent of people going into wormholes with AIs? I'm not so worried about the computational intelligence.


[00:31:49] Steve: I’m worried about the wormholes we get into in a world where we can just interact with AIs. Like, do we become like a borg, where we’re all just tapped into this system? We’re almost just plugged into it and oblivious to what’s happening in the world around us. They’re the kind of implications that I think about.


[00:32:08] Steve: The economic


[00:32:09] Steve: and job ones don't concern me at all.


[00:32:13] Steve: Not even one bit.


[00:32:15] Cameron: I mean, I'm concerned for my own job. I'm concerned for the jobs of my children. I'm not concerned for my wife. I don't think it's going to replace violin teachers quickly, but, um, you know, I do think that people will probably listen to fewer podcasts. They'll probably, um, you know, pay less attention to social media influencers, which is what my boys, my older boys are involved in.


[00:32:41] Cameron: But, um, you know, like everything else, man, I'm like, okay, I'm just going to have to, I've had like 27 career changes already, I'm just going to have to have another career change, like that doesn't


[00:32:51] Steve: that’s actually the point.


[00:32:52] Cameron: much.


[00:32:53] Steve: we need to teach kids the ability to adapt, not teach them for a job. Like, you know, in the past, your, your world used to be one where you have these stone foundations; your life is this stone building that you build.


[00:33:05] Steve: And yeah, your name is Carpenter or Banks or Smith, uh, you know, whatever job you do becomes your name. Mm. Well, in the future, we're not going to have these stone foundations; our life becomes like a tent that's mobile, and you have to wrap it up and move on to the next place and then put it down. You know, we have this mobility in our life, and the ability to adapt is really important again, and the speed with which you do it. And yeah, my job might change, or the way that I do work; maybe humanoid robots will be on stage giving more corporate advice than a human can, I don't know. Why, why are people gonna pay a human to predict the future when they can ask an AI to predict it? Well, I mean, the one reason that you would do it is those edge cases, because of the nuance, and they're probability-based engines. I mean, you can ask them to go, give me something that's a low probability event and let's scenario


[00:33:58] Steve: plan around it. You can do that. You can do that right away. Um, may not be as entertaining as you are on stage, and that's the only thing, I think. It'd maybe be more entertaining than I am on stage, actually, maybe more, who knows? Yeah. But anyway,


[00:34:15] Cameron: Sam says GPT 4 didn’t


[00:34:17] Cameron: have this huge detectable impact on the economy and so people were kind of like, oh well we were too worried about that and that’s not a problem.


[00:34:25] Cameron: I have a fear that we just won't take that one seriously enough going forward, and it's a massive, massive issue. Obviously, he is expecting and predicting that GPT 5, or whatever they call it, the next model, is going to be a massive step change in terms of reliability and intelligence over GPT 4, um, and, you know, he, he is concerned.


[00:34:56] Cameron: And he is talking about it eliminating many jobs and that we're not, uh, thinking hard enough about, uh, how to put things into place, not to prevent that, but to prevent the societal collapse that could come from that. But, you know, I think it's just one of many factors that we need to think about, and we're going to talk about one of those a little bit later on in the show.


[00:35:19] Steve: Let's talk about the economic impact. The only reason that you would employ, um, AIs or AI agents, which it feels like is where it's going to go, agents who can work towards objectives rather than, uh, discrete tasks.


[00:35:35] Steve: Um, the only reason you would employ those, and I use the word "employ" in inverted commas, is if it creates efficiency, which in a corporate setting is reduced costs. If it reduces cost, unless you have monopolies, reduced costs create lower cost products; that frees up money, which goes to other places where there's more margin, and then the people follow that margin.


[00:36:00] Steve: That's kind of the economic doctrine: it sort of moves sideways into other places. You know, and the example I use often on stage is, what happened to the $30 that got spent on a CD for 10 songs? What happened to that $30? Well, the economy didn't decline by that. Yes, the music industry declined, but that moved, that went into data, it went into smartphones, it went into Spotify, it went into YouTube Premium.


[00:36:23] Steve: It just changed places. And then the staff that were in that industry changed places too, into those other support realms. This happens again with AI. It always


[00:36:36] Steve: happens. Remembering again that 90% of people worked in agriculture before the Industrial Revolution.


[00:36:43] Cameron: But I think Sam's concern is that


[00:36:44] Cameron: there will be a very fast


[00:36:49] Cameron: tectonic shift in the economy, where we have AI that is good enough, and it's trusted enough, to quickly replace 20, 30, 40 percent of the jobs of knowledge workers. But of course, as people always have to realize, if people don't have jobs, then they're not earning money.


[00:37:13] Cameron: If they're not earning money, they're not spending money. If they're not spending money, yeah, businesses don't survive. If people aren't earning money, they're not paying taxes. If they're not paying taxes, governments can't run infrastructure, pay for infrastructure. So it's not just about people losing jobs.


[00:37:32] Cameron: People having jobs and having spendable cash is what keeps the economy and governments functioning.


[00:37:40] Steve: All of it disappears if you don't have it; it's all a function of that spending spiral. I mean, how does the economy get here? A lot of it's based on debt and expectations. Yeah, the way a house or a building or an investment gets made is the promise that you will make more money later, or you wouldn't invest in it.


[00:37:58] Steve: Most of it's just imagined; like, money is just this imagined, uh, thing that we all just buy into, that there'll be more tomorrow than there is today. Um, Yuval Harari talks about this a lot. He talks about the idea that money is the greatest invention of all time because we just all collectively buy into it.


[00:38:19] Steve: And the idea that technology will just take away money, they’re not the same thing. Like, and we have this viewpoint that technology replaces jobs and because it replaces jobs, it replaces money. It doesn’t. Yes, they work in concert with each other, but money is a separate organism. that works on its own.


[00:38:40] Steve: And yes, technology leads to money and tasks, whether or not they’re automated or done by humans, but the money evolves and moves across to where the most high value thing is. And because technology reduces things and makes them lower value, they still create value, but they cost less, that money changes as well.


[00:38:57] Steve: And people really need to understand that. I think, you know, if, if anyone really truly wanted to understand how money works and the collective myth that we buy into, I would highly recommend reading Harari's bit


[00:39:08] Steve: in Sapiens where he talks about the myth of money.


[00:39:13] Cameron: The bottom line is, uh, the AI stuff is going to change work. It's going to change society, and it's going to be way bigger and happen way faster than I think most people expect. You know, people still seem to think this either isn't going to have a huge impact or that it's still decades away, as opposed to two to three years away, uh, which isn't a lot of time.


[00:39:45] Cameron: I want to just, uh, move on to the Futurist Forecast, Steve. Oh, one last story I wanted to talk about, uh, before we do that. Google's medical AI destroys GPT's benchmark and outperforms doctors. That was in newatlas.com. Google Research and Google's AI research lab DeepMind have detailed the impressive reach of MedGemini, a family of advanced AI models specialized in medicine.


[00:40:12] Cameron: It's a huge advancement in clinical diagnostics with massive real-world potential. So, just talking about how they've been building dedicated medical AIs. They've put out a 58-page paper, I haven't read the whole thing, but I've read the synopsis of it. And it's basically helping doctors to quickly assess


[00:40:39] Cameron: uh, a patient's, uh, uh, symptoms and come up with, uh, you know, uh, recommendations for things to look at. And the results have been pretty impressive. Basically the doctors have a conversation with MedGemini. Here's an example. It says, in a test of real-world usefulness, MedGemini was asked about an itchy skin lump by a patient user after asking for an image.


[00:41:07] Cameron: The model asked appropriate follow-up questions and correctly diagnosed the rare lesion, recommending what the user would do next. MedGemini was also asked to interpret a chest X-ray for a physician while they were waiting for a formal radiologist's report, and formulate a plain-English version of the report that could be provided to the patient.


[00:41:28] Cameron: The multimodal conversation capabilities of MedGemini M 1.5 are promising, given they are attained without any specific medical dialogue fine-tuning, the researchers said. Such capabilities allow for seamless and natural interactions between people, clinicians, and AI. About all this,


[00:41:49] Steve: you can feed it so much information and it can recall that. And of course, it's going to do it better than a human's memory and capacity to learn.


[00:41:59] Steve: And the breadth of medical knowledge is so vast. That I’m not surprised, but you see, this is one of the things that gives me hope. I mean, one of the things that we don’t have, and we’ve been struggling, you know, my son’s had some illnesses that you and I have spoken about off air that getting a pediatrician is really difficult.


[00:42:16] Steve: There's like six-month waiting lists to get on waiting lists, right? For something like that. This gives me hope. And I wonder if the future of medical science, or doctors, is that you have someone who is an AI medical expert, not a doctor per se, but who works with you with the AI to help diagnose you and put you in the right direction and circumvent some of that process.


[00:42:40] Steve: And we all use, you know, Dr. Google as well now when we get ill. But this, this gives me hope that it can give us resources, because there just isn't enough time and people out there to study medicine. This is the exciting stuff. Okay, we're going to have to have a serious conversation within the medical fraternity and healthcare.


[00:42:59] Steve: How do we work with the AIs to have better medical outcomes for more people, more frequently, around the globe?


[00:43:05] Steve: This is, this is such good news. This is my heart, right? Yeah. And, you know, this is one of the things that I think we're not thinking enough about: what happens when we have a million more doctors that are available, a million more scientists, a million more psychologists. And that's the last story I wanted to drill down on in the Futurist Forecast.


[00:43:31] Cameron: I read this study a week or two ago. Um, it's based on a study that was done in Saudi Arabia, where they took about 150, 180 students of counselling psychology.


[00:43:45] Steve: And cut them all up with


[00:43:46] Steve: bone saws. They got the journalist, they put them into a room, and they, they really let... Is that?


[00:43:52] Cameron: to the Saudi regime test. They did that?


[00:43:55] Cameron: They’re the ones that passed the chainsaw test.


[00:43:59] Steve: Once it made it through.


[00:44:01] Cameron: Oh, we should not joke. It’s


[00:44:03] Cameron: brutal stuff. Um, 180 students of counseling psychology from the bachelor's and doctoral stages at King Khalid University were selected for this study. So basically what they did, they were testing, they were testing social intelligence, um, so basically the ability to understand the feelings, emotions, and needs of people during the counseling process.


[00:44:27] Cameron: And, um, you know, this is, this is important and I think one of the things, when I get into conversations at parties about AI, people always say, well, you know, AI,


[00:44:38] Steve: Must be a real hoot at these parties, Cam. Everyone's like, oh, I need the bathroom, I'll be back


[00:44:43] Cameron: usually it’s


[00:44:44] Cameron: my wife coming up and, like, shutting me down: okay, I think it's time for us to go now. You know, people think, oh, well, an AI will never be able to really understand the emotions or the feelings of a human, because an AI doesn't have emotions. So, anyway, these researchers constructed this test, which was, they created a bunch of questions that a new patient might ask, or things that they might say, and then they got the AI and they got the human psychology students to respond to those things that the patients were saying. Bottom line is that the AI won, particularly GPT 4.


[00:45:35] Cameron: There was a number of AIs and they performed at different levels, but GPT 4 beat 100 percent of the psychology students. And they're not just bachelor students, but also, as I said, PhD students. Now, these are students, they're not people that have been practicing for 10, 20, 30 years, but, um, these are the results.


[00:45:57] Cameron: In ChatGPT 4, the score on the SI scale was 59, exceeding 100 percent of specialists, whether at the doctoral or the bachelor's levels. Bing, whose score on the SI scale was 48, outperformed 50 percent of doctoral specialists. Which is interesting, because I thought Bing was based on ChatGPT 4, so you'd expect them to have the same result.


[00:46:21] Cameron: Google Bard, uh, was superior to only 10 percent of the doctoral holders. So, anyway, like, that's interesting in and of itself, uh, and that's where we are circa 2024, early 2024. So, I, you know, I've been, I've been talking to people about this for the last couple of weeks. I'm saying, imagine a year, two years from now, where we all have a 2025 or 2026 model AI.


[00:46:55] Cameron: It’s on our phones. I mean, it looks like the rumors are that, um, Apple has done a deal for GPT to be integrated into the next iOS. It’s on your phone and it’s obviously the 2025 or 26 model, two years, one or two years better than the one we have today. So if Sam’s right, and the next version of GPT is going to be, you know, a major step forward, then we can expect it to be even better than the current results.


[00:47:22] Cameron: And the current results are beating a hundred percent of these psychology students. So this is going to be better. But then it's going to be freely available on your phone, unless the business model changes. And not only will it be as good or better than nearly every human psychologist, it will have read all of your emails, listened to all of your phone calls, read all of your text messages, it'll have listened to,


[00:47:51] Steve: It'll see my internet history,


[00:47:52] Cameron: yes, it’ll see your internet history, it’ll have listened to all of your in person


[00:47:57] Cameron: conversations, it’ll be tracking your steps, your diet, your exercise, your media habits, your sleep habits, your social media habits.


[00:48:07] Steve: watch your weight, I guess, diet,


[00:48:10] Cameron: it’s going to, like, when you go to see a therapist, uh, or a psychologist, whatever, first of all,


[00:48:17] Steve: If it knew all


[00:48:18] Steve: of that about me, I would definitely need a


[00:48:19] Steve: therapist, Cam. Let’s just


[00:48:22] Cameron: look, my starting point is that everyone


[00:48:24] Cameron: needs a therapist. Everyone should have a therapist, you know, life is complicated


[00:48:29] Steve: we’ve always had that, they used to be parents and uncles and whatever, and now maybe


[00:48:33] Steve: we’re, you know, more disparate, but yes, we, we need consigliere, right,


[00:48:38] Steve: consigliere.


[00:48:39] Cameron: but these are professionals that study this. Your parents and grandparents and uncles didn't study human psychology, you know, they were amateurs, they could share their experiences with you. We're talking about professionals. Like if you, if you have a pain in your stomach, you don't go to see your uncle and say, I've got a pain in my stomach, can you


[00:48:56] Steve: I do need more Italian relatives, I tell you what.


[00:49:01] Cameron: Certo, certo. Yeah, mio zio è molto professionale come psicologia. Um, so you, if you have a pain in your brain, you need to go see a professional, but there just aren’t enough of them. And, and, you know, if you try to get in to see one, again, it’s like six months to get an appointment. They’re hella expensive when you get them.


[00:49:27] Cameron: And then you get a 45, 50 minute appointment. It takes them six months to get to know you, really what's going


[00:49:36] Steve: they do. And, and I've, look, over periods of time, I had a divorce where I was, you know, crying in my boots, um, you know, 20 years ago. And seeing some people was really helpful. It made me not feel like a loser. I had this thing, I felt like I was a loser, which is weird, you know. They just help you see things, cause they've experienced it, and you realize that you're not the only one, and that this happens and this is normal and this is fine. But you know, when you, you go through all of those things, and the Apple one is really interesting, because that's what the Apple AI would be able to do, absorb all of this data.


[00:50:13] Steve: So maybe it is that, and I hope they change its name from Siri, because Siri is really the Clippy of the AI era, let's be honest. Um, this Apple AI could potentially be far more than just your personal assistant or your digital twin. It could in many ways be the psychologist, I mean, surely it would, it would have that capability as well. Like, what capability wouldn't it have? Not just psychotherapy, but being able to help you with everything from challenges in the workplace, marriage challenges, parenting. I mean, I was using it with my son, uh, earlier,


[00:50:52] Steve: you know, that was the first thing we spoke about


[00:50:54] Steve: on today’s podcast. Just


[00:50:56] Cameron: we’re talking, I mean, there’s a lot of talk in the media about


[00:50:59] Cameron: an epidemic of loneliness. I know you saw the same video in the New York Times that I did, where a guy was testing out AI friends. One of my adult sons, uh, talks to Pi; he uses Pi a lot just to, you know, workshop girlfriend issues and life challenges and that kind of stuff.


[00:51:16] Cameron: You know, there's... every middle-aged woman I know of is self-diagnosed with ADHD. I mean, there's, there's an explosion of recognition today of neurodiversity that we didn't have 10, 20, 50 years ago, right? People are starting to realize, oh, it's not just that I'm bad at this, or I'm, you know, I'm a loser, or, you know, like I, I'm incapable of getting out of the house on time.


[00:51:40] Cameron: I actually, there’s something with my brain that works a little bit differently to the norm and you know, I don’t need to feel bad about myself. It’s just that my brain works differently, but trying to get into, again, to get diagnosed with ADHD and to see a therapist, to give you some tools, some coping tools, some life hacks is difficult and expensive.


[00:51:59] Cameron: So I am thinking about what happens to society when everyone has a free therapist on their phone that knows everything about you. What happens to marriages? What happens in the workplace? What happens to parenting? What happens to suicide rates? What happens to spousal abuse rates? Because we know we have an epidemic of that as well.


[00:52:22] Cameron: What happens when your AI is monitoring your psychological and emotional state and well-being? Not only could it say, hey, Steve, um, I noticed you haven't gotten out of bed for a couple of days. Um, I've already made an appointment for you with your GP. Um, I've explained to the GP what your, you know, conditions, your symptoms are, uh, she's got it.


[00:52:50] Cameron: She's going to see you on Tuesday and she's going to give you this new anxiety medication, uh, or depression medication. Or it could be, hey, I noticed things aren't going well for you lately. Um, I've already contacted two of your mates and suggested that they take you to the pub, or you go to the footy on the weekend, or you go fishing, or you catch up with them. You know, or, uh, if you're an older person, you know, I've already contacted your kids and let them know that I think they need to spend a bit more face time with you. And it's not just in terms of it acting as a therapist. You know, one of these things that Chrissy and I talk about all the time is how, you know, people's, uh, social skills, particularly Gen Y, Gen Z, Millennials, their social skills have disappeared because they're stuck in a phone, they don't know how to make eye contact. Um, and this, this again, then leads to the loneliness epidemic and all these sorts of things. What if it's not only acting as your therapist and talking to you, but it's also acting as your social coordinator? Hey, uh, I've identified that you need to get around some people, and I've, I've taken the liberty of speaking to your sibling's AI, or your best friend's AI.


[00:54:12] Cameron: And we’ve, we’ve booked a time in for you two to go bowling or whatever it is.


[00:54:17] Steve: To me, like, what you're describing is the ultimate personal services AI, which is beyond psychology. Obviously, psychology would be there, and I think it could definitely improve, well, certainly give you somewhere to go with relationships, workplace, all that kind of stuff, parenting, definitely be helpful. But it feels like it's more than that, Cam.


[00:54:36] Steve: I think what you're describing is a future where you have a personal services AI, which is an expert in quite a few things, based on the data that it's fed. So, uh, a medical expert can have, you know, preventative recommendations with your health, not just your mental health, but also your physical health.


[00:54:51] Steve: Understands, um, what sort of social interactions you've had, the number of social interactions. Have you seen a friend a lot? Haven't you? You know, understands, you know, your well-being and holidays. Looks at your finances. Says, hey, Steve, you're starting


[00:55:02] Steve: to spend far too much money gambling on the GGs on weekends.


[00:55:05] Steve: I’ve noticed it’s gone up 30 percent in the last, whatever, I’ve, I’ve stopped your account. I mean, I would


[00:55:10] Cameron: Your OnlyFans account,


[00:55:12] Steve: Oh, come on now, keep it clean, PG, PG. But, you know, financial advice, medical advice, psychology advice, um, social, social calendar, um, you know, organise things for the family. It feels like we're going to have our personal AI, and I like where you've started with the psychology, and all of that makes sense, but the breadth of knowledge that something like Apple has, if they integrate it really well, could solve an inordinate number of problems and stop many of them, circumvent


[00:55:45] Steve: them from happening in the first place.


[00:55:50] Cameron: That's where we'll leave it today, Steve. I want people just to think about the implications of that one tiny little snapshot of what's possible. And again, I think it's going to happen faster than people realize. I give it a year, two years, for these things to be integrated with our devices properly, and then all of these possibilities start to emerge.


[00:56:09] Steve: Yeah. I think it’s going to be an extraordinary time. I just can’t wait to see what happens.


[00:56:15] Cameron: Me too. Thanks,