In this episode of Futuristic, Cam and Steve discuss OpenAI’s groundbreaking o1 model and Advanced Voice Mode. They voice concerns over social media’s impact on youth, touching on legislative measures and the Instagram Kids launch. Cam talks about how he’s used ChatGPT on his weight loss journey. The episode also looks back at the 1973 launch of the first personal computer, MCM 70, the relevance of Moore’s Law and Marvin Minsky, using of AI to digitise handwritten notes and Sam Altman’s vision for The Intelligence Age.
02:36 The Impact of Social Media on Youth
15:46 Using GPT to lose weight
21:07 OpenAI’s New Model: O1
34:31 Advanced Voice Mode in ChatGPT
41:03 The Future of AirPods: Integrating AI
43:54 OpenAI Leadership Changes and Implications
46:43 The Evolution of Silicon Valley
56:45 The Bystander Effect and Social Media Legislation
01:03:43 Sam Altman’s Vision for the Intelligence Age
01:16:15 Technology Time Warp: The First Personal Computer
FULL TRANSCRIPT
FUT 30
[00:00:00] SS: Freestyler, futuristic microphone, straight from the top of our domes. We got the Cam Reilly, a Rocker microphone and a Sammotron, s s s s s sammotron.
[00:00:17] CR: Uh, welcome back to Futuristic, uh, episode 30. I think this is, it’s been a while, Steve. Our last show was August 3rd. We’re recording this on August the, uh, September the 27th. So almost two months since we’ve done a show and partially because you had a lot of stuff going on, partially because I had a lot of stuff going on and partially because there was a lot going on in the.
[00:00:47] CR: AI world up until about a week ago, and then everything happened. So, you know, look, there’s a lot of, I follow a lot of, uh, AI based YouTube shows and podcasts. And a lot of time it’s like watching cable news. There’s just. filler stories happening and I’m like, really, this isn’t really, there’s a lot of gossip, there’s a lot of rumour, there’s a lot of minor news and I’m like, nah, I’m not going to waste your time, my time, our audience’s time with minor stuff.
[00:01:21] CR: I’m really here to talk about the big stuff and so if there’s no big stuff, I’m saying, we don’t do a show. There’s nothing really worth talking about, but of course in the last week or two, the big stuff has hit. Stuff that had been rumoured for a long time. Some of it finally landed and it’s definitely worth talking about.
[00:01:40] CR: What do you think, Sammartino?
[00:01:43] SS: You made a really good point on filler stuff in media. And so much of our social feeds are like that as well. And even podcasts that I like to listen to, I’m like, yeah, it was a whatever episode, cause they got to do one every week. You know, it’s all the game is frequency. We used to say that columns need to be filled and now it’s, you know, pixels need to be put across screens, but I like that we’re pretty straight up on it, but definitely a lot happening now.
[00:02:09] SS: And it feels like. AI has kind of crossed another bridge where it’s starting to piece together the islands and the archipelago of intellectual riches that AI is becoming.
[00:02:25] CR: Whoa, there’s some wordsmithing from Steve Sammartino. It’s beautiful. There’s a, all those years of getting up on stage, spitting your rhymes. Uh, let’s, before we get into the news, Steve, let’s, uh, talk about what’s been going on in your personal life from a futuristic perspective. What do you want to, how do you want to start off the show?
[00:02:47] CR: What insights do you want to share with
[00:02:50] SS: I’ve got, got a few insights. My little boy’s been a little bit ill, um, which has kind of kept me out for four weeks. When he was ill, he was using his smartphone a lot because it was one of the only things where he could have a reprieve because he was quiet. Um, you know, without going into too much detail, couldn’t move around much at all.
[00:03:07] SS: Um, and. Yeah, talking to a lot of the doctors and, uh, and various psychologists and people recently, I think we don’t take social media’s impact on youth seriously enough. Now, I know that in Australia and in the U. S. they’re looking at passing legislation. I actually still don’t think It, despite them looking to pass legislation, know the potential serious impact this could have.
[00:03:36] SS: I feel like we might be heading to a quasi lost generation of sorts. The Australian government has looked to ban social media for children under the age of, I think it was touted as 14 and potentially 16. It’s got to be 18 because I’m convinced That this is potentially more dangerous than underage drinking or underage driving.
[00:04:02] SS: Uh, we know that children’s brains and certainly boys brains don’t develop till they’re about 27. The impact it has on the frontal cortex and the lack of ability to delineate what’s worth watching, the fact that the algorithms put them down wormholes of terrible information, uh, misogyny, uh, rude, Uh, get some even quick viral loops.
[00:04:25] SS: One of the other things that we see a lot is that the speed of content and the truncation of content length is having a really significant impact on I won’t be surprised if in 10 years we say, wow, that has really had a developmental issue on the youth’s brain and their ability to think and their ability to Maintain an attention span, which is required for a lot of things to function as an adult human.
[00:04:53] SS: Uh, it’s not just a joke and I’ll make something short and make sure you have a hook. Yeah, that’s all good to get views. Um, but a little bit like junk food, like alcohol or driving dangerously, I would be flummoxed. Um, if we don’t come back and find out that that’s really bad. I did one LinkedIn post which had so much resonance, um, which I, I said, and I’ll, I’ll read it pretty quickly for the audience here.
[00:05:20] SS: The words that I used was, I’m just going to say it. Social media could be more dangerous for our youth than underage drinking or driving. Both of which are age gated at 18. Kids spend five hours plus a day, which is a stat based on Australia, it’s probably more in some areas, on tech purposely designed to create addiction.
[00:05:37] SS: And I said that I’m, I’m going to write a manifesto on how easy this is to fix, which I’m going to do this week. And I’ve said that we just need the political and social will to do it, to care about our kids more than we do about corporate profits. And the reason that that’s really important is that it had like six reposts, more than 10, 000 views, hundreds of comments with.
[00:05:55] SS: Anecdotal stories of so many, uh, kids and families having major issues with this. And, you know, it’s touched me personally in my house. And we had kept my boy away from social media for a long time. And it really just opened my eyes. The conversations that I’m having are that this is really causing a whole lot of social issues.
[00:06:21] SS: So ironic, social
[00:06:23] CR: Yes. And I, and I see it at two different age levels, right? Cause my older boys, uh, are nearly 24 and Fox is 10. And Fox doesn’t have a device of his own, but he borrows our devices to do things with. He used to have a device, uh, for, for a couple of years, he had devices, but then he broke his last iPad and we said, that’s it, you’re not, not getting another one for a while anyway.
[00:06:51] CR: And so, but he borrows our things, but we, we try to limit. He’s a, like one of the things that I limit a lot is YouTube, because he will just go down, it’ll take him down rabbit
[00:07:05] SS: Bad rabbit
[00:07:06] CR: he’s sitting there watching something that is, I deem is good for your brain content, I’ll let him watch it, but if I leave the room for half an hour and I come back and he’s watching shorts, I’m like, dude, this is just bad for your brain.
[00:07:20] CR: Get off it right now. You know, this is rubbish. And we don’t let him on, you know, TikTok or, or anything like that. My older boys, Hunter and Taylor, you know, they, you know, they, they sort of grew up in the early stages of this stuff. Um, and of course they’re both, they, they both TikTok world now. Hunter’s a creator, Taylor’s a manager of creators and You know, we have constant debates about short form content and the value or the danger of it.
[00:07:53] CR: They tend to agree by the way, that there’s a lot of bad stuff associated with this. Um, but you know, I see them like they don’t read. They struggle with long form content, paying attention to it. Um, and they’ve both had a certain amount of success in their early twenties. Um, but, which is great. You’re like, they’re doing okay in many ways, but you know, they, they, they suffer, Hunter suffers a lot from.
[00:08:24] CR: Loneliness. Um, and it’s cause he’s in his room all the time making content for strangers, for 2 million followers, but he doesn’t have any real friends.
[00:08:33] SS: That’s funny because my boy, um, is really struggling for connection. He’s, he’s been quite isolated through his illness and the thing that he’s struggling for his own offer with the people in his house, and yet he looks for it from strangers. There’s, there’s some really interesting element there and the struggling with the short form content.
[00:08:50] SS: You know, we know that it has like a dopa cycle and you’re getting into a dopamine hit. I wonder how similar it is to the physicality of junk food and, and fatty and sugary foods, where it gives you a short term hit, but you know that the broccoli and the protein, long form food, let’s call it, and long form content.
[00:09:08] SS: I do wonder again, now we’re expert in this, whether or not there’s some sort of physical analogy where those things, even though one is food related and one’s content related, whether or not they have the same chemical impact. Um, on your physical development.
[00:09:23] CR: I don’t think, I think the science is already in on that. I don’t think there’s any debate about that. I mean, the brain has a, um, limited number of hormones and neurochemicals that it uses to motivate behaviour and the social media companies have, Paid lots of money to consultants to, and gaming development companies.
[00:09:45] CR: It’s all shit, not just social media companies, to develop the technology in a way that creates those addictive responses. So you want more and more of it. It’s been, as you said earlier, deliberately designed to hack. Our brains to bring us back, to keep us on the devices. No doubt about that. And I can remember early days of this sort of stuff, like 2007, 2008, when Twitter was fairly new.
[00:10:16] SS: Yep. I
[00:10:16] CR: I can remember spending a lot of time on Twitter, having conversations, having debates, and when I wasn’t on my phone, when I was in my car, driving somewhere, feeling the need to have Twitter open to see what the last comment was, or the last part of this conversational debate, and feeling the need.
[00:10:36] CR: Increasingly icky about that. Like the same, the same feeling I get if I’m playing an Xbox game, you know, you get addicted to an Xbox game or something, and then you realize you’ve been playing it for two hours and you’re in a frustrating level and you’re trying to get through the level. I remember this most recently with Red Dead Redemption, which is a terrific, Red Dead Redemption 2, which is a terrific game, really beautiful, well designed game.
[00:11:05] CR: But I remember getting to a point where, um, I just felt kinda queasy after playing it for an hour, and I’d be like, you know what? Fuck this. I’m off. And you know, I got the same with Twitter 10, 15 years ago, whenever it was, where I just was like, you know what, this makes me feel bad. I, and I had to go cold turkey off of it. Like, you know what, I don’t want to, I don’t want this in my life. I don’t want to be part of this. It was a sort of a, a cycle. And I, and I’ve said this, Chrissy and I have had these debates for years and years, the pros and cons of these new technologies. You know, when they’re new as adults, we can get.
[00:11:46] CR: Swept up in them, but then as adults, if you, if you aren’t malfunctioning, I think you have the ability to go, you know what? This isn’t good for me. It’s like eating Tim Tams. Not good for me. I’m gonna step away from this and do something healthier. Kids! You know, as you said before, we know boys brains don’t fully mature until 25, something like that.
[00:12:08] CR: Particularly younger kids don’t have the ability, their brains don’t have the, literally don’t have the ability, to make wise decisions, why we don’t let them drink, why we don’t let them drive. We’ll send them off to war, but outside of that,
[00:12:21] SS: Look, look, I’ve always said Cameron. A toy is his gun, and all he’ll ever know is to fight forever, fight forever. Little bit of the noise works, circa 1987, welcome to the world. I’ve always said that,
[00:12:33] CR: And we’ll, we’ll teach them religion when they don’t have the ability to discern between what’s true and what’s bullshit. Nah, look, I’m, I, I’ve seen the gnashing of teeth over the social media ban, and I’m actually all for it. I, I, I, I don’t have any issue with a ban on this stuff for kids. I don’t think it’s good for their brains.
[00:12:53] CR: I don’t think it’s good for any of us, quite frankly. Um.
[00:12:57] SS: it’s actually really, really easy to do. And this is, and, and, two things I want to say. The first one is, What did I despise? And as you know, this is the futuristic, we talk about the future. And people often say to me, I thought you would be all for technology. Aren’t you a futurist? Isn’t that about technology?
[00:13:16] SS: And my answer is, well, we need to be better than that and think better. The future is about creating human flourishing. Which may involve new technology and may involve some of the things which are the oldest things we’ve had. Like, uh, slow food is a classic example. In the 60s and the 70s was irradiated food and packaged goods and just add water and that was the future.
[00:13:40] SS: And then we went, actually, I like slow made coffee and grow our vegetables. Like we learned that with food, right? And, uh, you know, meditation, thousands of years, like, A flourishing future isn’t just about technology, often it will, and technology, as, uh, we hear from Kevin Kelly, is 51 percent good most times, but what we need is the wisdom to delineate where technology use is good, and where we need to say, actually, now we’ve learned this, we need to, Uh, moderate that, or legislate, or put regulation around it, like we did with cars, like we did with factories, like we did with so many other things, and for some reason, society have this view that you’re a Luddite if you don’t embrace all new technology, no, no, no, I’m far more thoughtful than that, and I’m certainly not a Luddite, and in fact, not being a Luddite is finding civilize the technology, and that’s one of the things that we need to get better at.
[00:14:35] CR: Look, I’m, I’m very cautious about, um, government affairs in personal lives and censorship and having to legislate behaviour. Uh, you know, I, I think it’s necessary in certain Places for a safe and civil society, but you want to be really careful about how much of it you allow and where the line is. It’s something that needs constant vigilance.
[00:15:03] CR: I would rather see parents be responsible for their children’s access to this sort of technology. The parents be the gatekeepers and decide what’s best for them. what is good or what is bad for the children. But if parents aren’t doing that because they themselves are addicted to this sort of stuff and they’re not able to turn it off, then perhaps there is a role for the future.
[00:15:27] CR: Because if the parents aren’t protecting the children, it’s a bit like, uh, I don’t know, paedophile priests. If the parents are allowing the paedophile priests
[00:15:37] SS: Wow,
[00:15:38] CR: close to
[00:15:38] SS: you just Cameron Reilly,
[00:15:40] CR: then the
[00:15:41] SS: Reilly will go
[00:15:42] CR: to step in and do something about it. Well. On my front, Steve, um, since we last spoke, in the last two months, I’ve lost 12 or 13 kilos.
[00:15:53] CR: And I have to say that you are partly the inspiration and my mentor in this. We had this conversation on the show a while back. I said, how do you stay skinny? You said, I don’t eat until the afternoon. So. I started doing that. I started just not eating until after lunch, but the other side of it is Kung Fu.
[00:16:16] CR: I’ve got a grading coming up in a couple of weeks and I decided that I wanted to get down to a lower weight for my grading, uh, for my fitness and my health, but also ChatGPT. Now, as you know, cause we’ve talked about this on the show, I started using ChatGPT as my calorie. tracker about a year ago because I wanted to be more conscious of how much I was eating for my health and to lose weight and it wasn’t working.
[00:16:41] CR: I wasn’t losing weight or even with the amount of Kung Fu that I do, which is insane. Like I’m doing eight to 10 hours of Kung Fu a week. I still wasn’t losing weight. So, A couple of months ago, I jumped into GPT and I said, look, I’ve been tracking my calories for the last 10 months, nine months, whatever it is.
[00:17:01] CR: This is the amount of calories as my input. This is the amount of Kung Fu that I’m doing. I’m not losing weight. What should I do? And it said, well, try cutting your calories back a little bit extra. I was like, Okay, I said I want to lose about a kilo a week. Can I do that? Yeah, you can do that. You have to cut it down by about 1, 100 calories a day.
[00:17:25] CR: So I was eating about 2, 800 calories a day on average. I cut it back. Um, to about 1700 and weight just started dropping off me. So I’ve lost 12 kilos. I’ve lost eight centimetres off my waist, jumped into my swimming trunks for the first time in six or seven months on the weekend. And they fell off. I went into the, I went into a clothing store the other day.
[00:17:53] CR: I’ve dropped down from an XL to an L. in two months. Um, and you know, again, partly because of you, partly because of Kung Fu, largely because of ChatGPT’s given me the ability to track that. Like I used to use MyFitnessPal and some of these calorie tracking apps, and you have to look up everything that you’re reading and, you know, plug it in.
[00:18:16] CR: And it used to take, longer to plug in what I was eating than it actually took to eat what I was eating. And there’s a certain level of commitment that after a while just gets difficult. ChatGPT, I just go, yeah, I had a plate of this. I had some chicken and some vegetables and a bowl of lentil soup and a slice of bread with butter and honey and a coffee and whatever.
[00:18:38] CR: And it just works it out. Now is it completely accurate? Probably not, but it gives me a rough indication. Yeah, so when I said, hey, I need to cut back a bit, it goes, okay, well try cutting back the yogurt and frozen berries and honey that you’re having for dessert every night and replace X with Y. And, you know, not eating until after lunch helped as well.
[00:19:00] CR: And so, although now Tim Ferriss is for our body, I’m having two boiled eggs first thing in the morning when I get up, he reckons protein first thing in the morning, um, puts your body into a different mode. So when you have carbs or sugars later on in the day, it doesn’t, uh, start off your insulin response if you have protein anyway, so that’s been.
[00:19:21] CR: How I’ve used technology most in the last couple of months is just to lose weight. And it’s been fascinating because I’ve been trying to lose weight for 15 years. I haven’t been this light for 20 years. And it,
[00:19:36] SS: it’s easy. Like, I mean, the basic thing is you can never exercise away a bad diet. That’s number one. Often when you’re hungry, you can’t, you just can’t do it. You can no amount of exercise, right? Diet is everything. Because I think 80 percent of your calories are just keeping your body alive.
[00:19:54] CR: Yeah. Just your base
[00:19:55] SS: yeah, just basic,
[00:19:56] CR: through the day, right?
[00:19:57] SS: metabolism. A couple of other things, just simple hacks, is often you think you’re hungry when you’re really thirsty, so have a big chug, a big thing of water, and you won’t be hungry for an hour. So, that’s a good one. And the other one is the density of your food. So, food density. So, for me, high fat, high protein works.
[00:20:15] SS: And I think it works for most people, but most people just don’t have the discipline. If something’s, you know, Carbohydrates are low density foods, like breads and all those other high density. So, you know, broccolis, even bananas, they’re carb based foods, but they’re fine. Um, and, and basically, yeah, the less processed it is, the better it is.
[00:20:32] SS: But I just have a high protein. I can lose a half a kilo a day if I have a bad week. Pretty easy. Because I just not really, I’ve never really counted calories. But I just don’t eat until the afternoon and I just have protein only. Won’t really have any carbs. I just avoid bread, pasta, um, rice. And I eat as much fat as I want.
[00:20:52] CR: Yeah.
[00:20:53] SS: And
[00:20:54] CR: There you go.
[00:20:54] SS: that. Anyway.
[00:20:55] CR: you’re a good looking skinny dude. Let’s get, let’s move,
[00:21:00] SS: with skinny.
[00:21:01] CR: speaking of Kung Fu, I’ve got a hard time, I’ve got to get out of here to get to training soon. So let’s, uh, get into tech news. Steve, uh, obviously I think the biggest thing that’s happened in AI news in the last, This month has been OpenAI finally releasing the model formerly known as Strawberry, uh, which they have called O1. Now um, O1, Strawberry, we started hearing about Strawberry in November last year when Sam Altman was fired from the company. And there were these rumours that Ilya saw something, and there were rumours coming out about something that was codenamed Strawberry or Q Star, and we’ve been hearing rumours about it ever since, and it was this A new model that they had developed that could do thinking and reasoning, but they were very tight lipped about it until it finally dropped a week or so ago, O1 Preview and O1 Mini.
[00:22:12] CR: It’s called Preview because it’s not the full model, there are strict limits which they have increased, but I think you get 50. Queries to O1 preview a week at the moment and 50 a day for O1 mini. And then you get cut off and you have to wait a week. But these are the models for people that haven’t heard about it or haven’t played with it yet.
[00:22:37] CR: It does advanced thinking and reasoning before it answers. And look, it’s not perfect. And in fact, on the launch day, Sam tweeted, ah, it seems impressive when you first use it, but over time, not so much, but they’re also calling it. PhD level intelligence and
[00:22:58] SS: I’ve been using that on stage. They stole my bit. You’ve all got a PhD in every single subject now,
[00:23:05] CR: they, they did a put out a bunch of launch videos for, with, um, physicists and chemists and biologists and coders talking about, uh, how impressive they had found they’d been early testers of the model and what, you know, I’ve seen some other videos.
[00:23:23] CR: There’s a guy who gave it, uh, as a physicist on YouTube who gave it a. Question out of a PhD level physics textbook, which the textbook said it should take a week and a half to complete and O1 did it in 122 seconds and got the
[00:23:45] SS: Now we’re all, now we’re all goodwill hunting. Physicists and mathematics experts just on a chalkboard. I’m just going to, I’m going to get ChatGPT in my pocket and just become a janitor at MIT or Harvard and just try to smash up on the chalkboard some answers. Just bam.
[00:24:02] CR: like apples? Do you like apples,
[00:24:04] SS: like apples?
[00:24:05] SS: That’s what they should call the next one. Do you like apples? That should be called, Do you like apples? I
[00:24:12] CR: Uh, and the fascinating thing, uh, for me playing with O1 is you can watch its reasoning, particularly O1 Preview. You can give it a question and then watch it go through the reasoning steps, how it’s thinking about it. And the, the physicist I mentioned who did the textbook example. He showed the reasoning.
[00:24:34] CR: It started to try and solve this problem, and it spent like a minute going down one path, and then it went, no, no, this is the wrong approach. So I went back to the beginning and started again. So, uh, now they haven’t explained how this works, how they’ve done it. I think the QSTAR methodology came out of a university originally, and I haven’t bothered to drill into how it works, but
[00:24:57] SS: was just about to ask you to take me through the QSTAR methodology. You can’t drop that on me and not take me through it, Cameron.
[00:25:03] CR: man. Nah, nah. I mean, I haven’t had the time to drill down on that yet. I do plan on trying to get my head around it though, but it’s a huge step forward in the ability of LLMs and people like Sam Altman are now saying they have reached the conclusion that there is no upper limit of what we can do with LLMs.
[00:25:25] CR: That LLMs are now capable of way more than just, uh, fancy autocomplete, as people were saying two years ago. It’s,
[00:25:35] SS: That is just a bugbear of mine and Hinton and others, some of the others say that. It’s like,
[00:25:41] CR: oh, Hinton
[00:25:42] SS: what all intelligence is. An advanced level of autocomplete. You work out what comes next. That’s like, it’s just so mind numbingly frustrating that people do not understand. Sorry.
[00:25:54] CR: Cory Doctorow, not Hinton. Geoffrey Hinton’s,
[00:25:56] SS: Oh, sorry. Not Hinton.
[00:25:57] SS: Doctorow. My bad. My bad. Doctorow. Stand corrected. Which I love Doctorow too, by the way, which shows you can still love someone and disagree with some of their bits, right?
[00:26:06] CR: should be able to. I’m exactly in the same camp. I love Ayn Rand. I love Ayn Rand’s novels. I’ve read her letters. I love Ayn Rand. I think she was completely misled and wrong on lots of things. But then, you know, I appreciate a lot of the stuff that she
[00:26:20] SS: Yeah, exactly.
[00:26:23] CR: Um, okay, so let me read a little bit from OpenAI’s launch document, uh, for O1.
[00:26:30] CR: How it works. We train these models to spend more time thinking through problems before they respond much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes. In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.
[00:26:52] CR: Note PhD students, not PhD graduates, PhD students. Big difference between somebody who’s had a PhD for 20 years and somebody who’s trying to get a PhD, right?
[00:27:09] SS: Yep, okay.
[00:27:11] CR: We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad, GPT 4. 0 correctly solved only 13 percent of problems, while the reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions.
[00:27:34] CR: Now, Um, you know, there’s been some benchmarking of this done by other people, they disagree or agree to varying degrees, but everyone seems to be blown away by what O1 can do. My personal experience with it from a coding perspective is some days I’m super impressed, other days I’m not so impressed, which was also true of, say, Claude, 3.
[00:28:02] CR: 5 Sonnet, which I was using before O1 came out. Extensively. But, uh, there’s a lot of, if you go into Reddit and look at the OpenAI subreddits, et cetera, people tend to be fairly impressed with what O1 is able to do. And again, it’s only sort of a week old, but, uh, people are pretty impressed. September 12th, it came out.
[00:28:26] CR: So, uh, yeah, two weeks, maybe. Mm
[00:28:29] SS: Yeah, I mean, the coding thing is just so often now, I’ll just get it to create like a little script for what I need. Even this morning, I’ve been trying to get my substack back and start publishing again on a weekly basis. And I’ve got some old MailChimp emails, uh, that people have been subscribing to there and not my substack.
[00:28:49] SS: And I couldn’t get things uploaded, so I just grabbed the data and just said, Write me a little, you know, write me a little CSV, get me some scripts, extract these emails from this, double checks, some things that you would have to muck around a hundred times within the substack or the MailChimp format to find its formatting.
[00:29:05] SS: I just grabbed raw data and then asked it to format in what I needed and I also asked it What is the best format for Substack? And it said, this is the best format. So can you make it? Yes. And it just starts writing the Python script and then it, and then it executes against it, just doing that scripting in there for you really is a magical feeling.
[00:29:23] SS: Cause it’s not like you have to download it onto your client, just getting it to do it in Python. In the interface is really kind of special. It reminds me of the early days of the first web browsers where it does it in the browser, where it writes the script, executes against it and gives you the thing that you need.
[00:29:39] SS: I don’t need it downloaded on my client. I don’t want to have that script sitting there and then have to put data in it and get it to execute against it. It’s a really empowering feeling. And that’s like something I think we should be getting people to use in companies. And one of my clients, I’m like, well, take information from your different data, like dump it in there and, and just get it to execute against and find the links that you need.
[00:30:00] SS: It’s really good at that. And the answer that I always get back is, oh, we’re too worried about the, uh, The safety and security concerns. I’m like, listen, no one’s really worried about your company. You’re going to go in there and steal your little piece of sales information. Seriously dudes, just, just be that, just be that company that waits 10 years.
[00:30:17] SS: That’ll work out. That always works out. What a great
[00:30:19] CR: that always works out. That’s never a bad idea. Yeah, I’ve used it. I’ve had a lot of, um, server issues. Um, I’ve increasingly over the last six to 12 months, my server that I host all my shows on has been struggling and it got to a point where it was going down two or three times a day for five to 10 minutes at a time.
[00:30:37] CR: My IT guy, the company that hosts the server that’s supposed to do support has been completely useless. Um, not able to give me any sort of guidance or fix anything. It’s in the too hard basket for them. And I tried to use GPT in the past and, and didn’t get very far. I jumped into O1 and I said, here’s the situation.
[00:30:59] CR: It said, download your log and give me a look at your log. So I download the log file. Which, if you’ve ever seen a log file, it’s full of unintelligible bullshit. Uploaded that to it, it goes, okay, I can see you’re getting hit by bots, there’s certain processes that are running cron jobs that are taking up a lot of Bandwidth, a lot of, uh, CPU time.
[00:31:25] CR: It got me to run some reports. It read the reports. It gave me the fixes and then told me what to do. All right, install this plugin and set it up like this. Go into CloudFare, set up these rate limiting options. Do this. I spent a day following its instructions and so far, so good. Fixed all of my, um, server performance issues.
[00:31:45] CR: So it was like having a. First tier, um, sysadmin to walk me through the, it couldn’t do it for me. And that’ll be the next stage is when it’s integrated into our OS’s. Yeah. But it could tell me what I needed to do, guide me through it, answer my questions, hold my hand in a friendly way, and enable me to solve problems that I would, no way could I solve them myself.
[00:32:14] CR: It’s like, again, it’s, I said this many times. It’s like having a superpower. Yeah.
[00:32:18] SS: Yeah. The way that it’s sewing together, the pieces of the puzzle, and I used in the first inferences, bridges between those islands of what it can do. Uh, the fact that it, you can ask it, how do I solve this problem? And then it comes up with the idea and then it helps you execute against the idea.
[00:32:35] SS: That’s super powerful, but it makes me feel that, you know, And I’ve, I’ve been reading a lot about the emergence of agents and I’ve been using agent G-P-T-A-A little bit and I still demo it on stage in keynotes to show how it’s objective driven rather than task driven. I mean, I think that’s gonna be the next phase at some point in 2025.
[00:32:55] SS: ’cause so many people are talking about AI agents and it seems like. It has all the pieces of the puzzle to make agency where someone actually manages a situation and a problem for you, just, just for the listeners to understand. Instead of doing discrete tasks, it accumulates all of the tasks to get from point A to point Z, let’s say.
[00:33:16] SS: And we’ll do all of those that, that achieves the objective. And that move towards agency based AI is going to be, I think, the really big part. And that’ll be big for a couple of reasons. It enables individuals to have an agent infiltrate their OS and have a look at all the various data points and find those.
[00:33:34] SS: You won’t even actually have to upload necessarily, oh, upload the file like you’ve just described. You’ll say, here, are you in there? It’ll go, oh, have a look around. And then it finds it. Think about how many calculations it can do per second, you know, trillions of calculations. It’ll be able to find it and find the pieces of the puzzle that it needs and then execute against it.
[00:33:53] SS: And you, Do that in a corporate sense. It’s really the administrative burden of modern life. You would have to feel like 90 percent of that is just going to go away. Like do my tax, go through everything I’ve spent, understand where it was spent, the places, what my benefits are, where I go, what can I claim back?
[00:34:12] SS: I mean, that’s a really complex task that you feel that agents would be able to do.
[00:34:17] CR: Yeah, particularly when they have access to your bank accounts, your emails, your calendar. They know all about, you know, your Xero, when it’s integrated into Xero, you know, all that kind of stuff. So the other thing that happened, Steve, this week is that OpenAI released what they call Advanced Voice Mode for nearly all Plus subscribers. This has been coming for months.
[00:34:47] CR: People have been complaining endlessly that they hadn’t launched it as fast as they suggested they were going to months and months ago. I think they’re still yet to roll it out to the EU for some unknown reason. But, uh, there’s a whole bunch of new voices. Now the voices have supposedly got a lot more capability of expression and personality in them.
[00:35:11] CR: People have been doing all sorts of demos of getting them to sing or to whisper or to do accents, but I just tried and I can’t get my, uh, ChatGPT to do any of those things successfully. So. I don’t know if I don’t have that new functionality, if it hasn’t rolled out into this geography yet and we’re just getting the stock standard voices.
[00:35:34] CR: So Kim in the editing booth here, folks, um, I did play around with chat GBT on my desktop. When we were recording this and tried to get it to do the advanced voice mode stuff. And there was really no difference between it and the normal voice mode, except there were new voices. However. After we finished the show, I jumped onto my iPhone version and oh yeah, it worked. It was far more expressive and I could interrupt it in the middle of it talking. Which is actually kind of useful.
[00:36:10] CR: I don’t know if you’ve ever been using the voice mode and then you pause to think about something and it assumes you’re finished and it starts replying. This new version does the same thing, but you can just stop it and say, hold on, I wasn’t finished and it will pause and you can finish your conversation. The answers are far quicker than they were before.
[00:36:30] CR: It used to be. You would talk to it, then you’d wait. Five seconds and then it would reply. Now it’s almost instantaneous. And I played around with the usual stuff. I asked him to whisper. Something to me. And it did. Um, my son, Taylor said he tried to get it to do an evil laugh and a did Ivan played around with that? And I like a lot of that stuff is kind of gimmicky right now, but over time, I’m sure that we will find it. Just more human, like to have an AI that has a full range of emotion and expression. So, if you haven’t played around with advanced voice mode on your phone or your iPad yet, uh, Definitely give it a go. I stay away from the desktop version for now. Stick with the, um, uh, the, the phone.
[00:37:20] CR: Uh, versions of it. Uh, have you had a play around with any of the advanced voice stuff?
[00:37:27] SS: I’ve had a play around with the advanced voice stuff, uh, not any of the singing or that, the hacks or tricks. I didn’t notice a huge difference. In the way it interacts, it’s pauses and it’s intonation. Um, because it was launched, I played with it more than I have previously. Normally I would only use it really, uh, to talk to it when I’m on my mobile, but when I’m on my desktop, I would always type.
[00:37:53] SS: But the one thing that I did notice is that when you get conversational with an AI, it creates almost like a collective thinking process. You know how you and I, when we chat on here, we’ll, we’ll propose different ideas that put us on different trajectories. When you’re typing, I think it slows down what you’re doing.
[00:38:13] SS: You become more specific and put more pieces into the puzzle that you want to be considered. But when you’re speaking, that has this live thought process where there’s an interchange. And I actually think it helps you go on different paths just by having that. And also the pauses of the AI, to me, just.
[00:38:34] SS: It says, and I do wonder if it is actually thinking harder and looking at more of the data to give its answer or whether it’s just been designed that way to give it a human feel. I don’t know the answer to that. Uh, but it does, I feel like, put you down on a different trajectory that you otherwise might be on because of the way that thoughts happen when they’re expressed verbally versus written.
[00:38:58] CR: Yeah. I don’t know, but I think you’re right. I, even though I’ve had the basic voice for quite a while now, I tend to I’m going to use typing more than I use voice. I use voice in the car. Sometimes I use it in the kitchen if I’m cooking and I want to ask it something, but I will, I will default to using text more than voice for some reason.
[00:39:23] CR: Now, this advanced voice stuff is supposed to And when they did the demo of it, uh, whenever it was, months and months ago, it was supposed to be much smoother in the conversation, not as many pauses. You could interrupt it in the middle of a conversation just by talking. Uh, it, it had, it was multimodal, so it could see what was going on in the front and back cameras of your phone.
[00:39:51] CR: And you could integrate those into the conversation. I haven’t noticed any of that in this new rollout. Really, all I’ve got is new voices. There’s, uh, nothing really that different that I can tell about how it, uh, is performing. So, I don’t know if they’re doing it in increments. Releasing the voices first and then there’s additional technology and different geographies as they go.
[00:40:20] CR: Maybe it’s just a bandwidth thing, they’re rolling it out slowly and incrementally, I don’t know.
[00:40:26] SS: Yeah, I didn’t notice a huge amount, but it does seem, and I’ve always kind of thought that I’ve always said that language is our killer app. We’ve just got talked about that a lot. And also the ability to speak is just so quick. And I think it’s the required element. Again, we talked about screen and addiction and, uh, earlier on, but that idea that you can talk to something and have an intelligence, uh, for me is really interesting.
[00:40:53] SS: Uh, and I, I think that this speaking thing could potentially be an interesting play for. Apple. And the reason I think it could be interesting for Apple is that the AirPods are such an extraordinary product. And the revenue, I can’t remember what the revenue was, but it was, would, would make it, I think, in a top, just the revenue of Apple AirPods alone would make it a top 20 company in the world.
[00:41:20] SS: Just that. You talk about extraordinary product launches. I can see you’re wearing yours. I wear mine all the time. Imagine if you had an intelligence in your ear. You don’t have to touch anything, talk to anything. You have to. Pull your phone out, uh, even potential for little cameras to be on the side of the earpod, sort of, you know, looking forward, um, pointing it with my hands from my ears for everyone who’s just listening.
[00:41:44] SS: It would be an in an incredible form of AI if Apple could get that right, uh, with the voice and integrate that into the AirPods. That would be An absolute product necessity. Certainly delineate it and differentiate it from all the other white AirPod copycats out on the market for, you know, 20 in a Mars bar.
[00:42:07] SS: Feels like that could, that could be extraordinary.
[00:42:10] CR: yeah. Speaking of Apple, my son Taylor got the new iPhone 16 Pro Max the other day. And, uh, after he’d had it for a few days and he’d travelled with it, he’d gone down to Melbourne to meet Katy Perry. And he came back, I picked him up from the airport. I said, uh, what do you think of your phone? He goes, don’t get it, man.
[00:42:32] CR: It’s like, there’s nothing. His battery life was a bit better. He said, instead of it being a. 30%, it’s at 50%, but, uh, apart from that, he’s not, I’m not impressed with the level, but you know, he got it because he knows Apple intelligence will be rolling out and he wants to have that when they finally start to deploy it.
[00:42:51] CR: But right now he said, it’s just, he’s gone from a 13 He goes, yeah, it’s barely noticeable.
[00:42:58] SS: Oh yeah. Yeah. They’re definitely in their bumper bar era. Uh, 100%. Well, sales of AirPods were 14. 5 billion in revenue in 2022, according to the SEC. I mean, that’s,
[00:43:12] CR: and I live in our Airpods. Our Airpods are working most of the day, you know.
[00:43:19] SS: Yeah, so I, I think that that’s a really interesting potential overlap, uh, whether or not Apple, and I think that they should, because they’ve got every resource in the world, they should be doing whatever they can to make an open AI like Uh, AI Engine, and then if they plug that into both their phones, then I think even in their, uh, earpods, that’s kind of, I think, where we would go, and that would enable us to have everything that we need, you know, without the screen necessarily, I think.
[00:43:50] CR: Yeah. Well, I’m sure those sorts of things will happen. Meanwhile, more OpenAI news. Mira Murari, the Chief Technology Officer of OpenAI, who’s been pretty much the face of them from a technological front since Ilya. Well, actually, for the last year since the attempted coup against Sam, just announced yesterday that she’s leaving the company as well.
[00:44:19] CR: So, of the, uh, sort of original team that was there a year ago, um, Sam, Greg, Ilya, and Mira, Now there’s only Sam left and he’s not a technology guy. He’s a marketing, venture capital CEO guy, which I’m not
[00:44:40] SS: I didn’t know, I thought it was a, I thought it was a developer as well. Like
[00:44:44] CR: No, Sam is not a developer.
[00:44:47] SS: like Jobsy, ask the average man on the street and they, they think, they think that Elon Musk has got the, the, the Sigweld out, and he’s out just welding up a couple of rockets, and Steve Jobs had the soldering iron just putting a couple of microchips in the back of a smartphone, that’s what people think, ask the person on the street, he’s a genius, he invented everything,
[00:45:06] CR: he is a coder though. He is an engineer and a coder. He’s not
[00:45:10] SS: uh, Elon is.
[00:45:11] CR: Altman. Yeah.
[00:45:12] SS: yeah, Steve Jobs wasn’t, but I just always loved that Bill Burr when he said Steve Jobs genius, he told other people what to invent.
[00:45:21] CR: That is the genius though. Yeah. That guy, you want to, have a look at what Apple did in the 10 years Steve wasn’t running the
[00:45:31] SS: know. No, he definitely did great
[00:45:33] CR: it did before and after.
[00:45:35] SS: Yeah,
[00:45:36] CR: You can’t, you can’t
[00:45:37] SS: no, no. They’re cool. Well, I’m not dizzying him. I’m just trying to, I’m just trying to, I’m just, I didn’t know that Sam was a marketing business guy. I actually thought he was a developer, but anyway, that’s okay. You learn something every day, don’t you?
[00:45:49] SS: Can
[00:45:50] CR: Sam’s just an entrepreneur. And, and, but look, again, I’m not taking anything away from that. You know, the suggestion seems to be that, I mean, Ilya obviously left because of the political situation, but he’s since announced his own startup. Probably Myra is going to like, she must be getting, You know, unlimited amounts of money thrown at her to go and do something.
[00:46:14] CR: So it’s going to be interesting. It is this Cambrian explosion already happening in the AI space. Now we’re going to have the startups and even we already have that within Anthropic. And Anthropic is run by the people that put out Claude, is run by ex OpenAI people. And then the ex OpenAI people are all ex Google DeepMind people in a lot of cases.
[00:46:35] CR: So Google spawned OpenAI, OpenAI spawned Anthropic, and now you’ll have people leaving from there to go and do stuff as there’s just unlimited money.
[00:46:43] SS: was it, was it the Fairchild 8? Was that what they were called?
[00:46:47] CR: Yeah, I think so. Yeah. That started the entire Silicon Valley
[00:46:52] SS: so Silicon Valley emerged out of Fairchild, was it Fairchild Semiconductor? Which was one of the first microchip plants, which just so happened to be in a particular area near some orange groves, uh, a valley, uh, in California.
[00:47:06] SS: Uh, which then spawned, uh, Intel, and a whole lot of other technology firms, and Hewlett Packard, and all of these guys emerged out of that.
[00:47:17] CR: the, the traitorous eight. We’re the guys that left Shockley Semiconductor in 1957 and founded Fairchild Semiconductor,
[00:47:30] SS: Right.
[00:47:31] CR: right?
[00:47:32] SS: then there was those that left Fairchild and founded Intel. And it is really interesting. And this hearkens back to one of the things that I say to people, I say, how do you get such a good view of the future? And, and, and, and I know that you do the same Cam, you more than anyone, is to be a student of history while the technology changes, the patterns don’t.
[00:47:54] SS: Because humans haven’t had a software upgrade in 200, 000 years. Although, it looks like we’re about to have one with some BMIs coming in real soon. So we’re about to have our first upgrade when we merge with the machines, according to Kurzweil. So
[00:48:07] CR: Mm.
[00:48:08] SS: Uh, yeah, but it is interesting that we’re seeing this same pattern again with the AI Cambrian explosion.
[00:48:13] SS: And we saw that, I think, we definitely saw that with the semiconductor explosion. And we also saw that with, um, a lot of the big tech companies as well, where they all spawned out in the early 2000s and a lot of startups have since come from there.
[00:48:29] CR: Do you remember FLIR, by the way? F L A I R? FLIR
[00:48:34] SS: No, I don’t. You better tell me.
[00:48:36] CR: was the Fairchild Laboratory for Artificial Intelligence Research started in 1980 at Fairchild.
[00:48:46] SS: Wow. That’s
[00:48:48] CR: And you could probably draw a line from Fairchild to OpenAI, right?
[00:48:54] CR: so, uh, anyway, expect to see more announcements in the not too distant future from what, from Mira Murati about what she’s doing. Um, of course, the other big news that’s just happened in the last 24 hours as well, coincidentally, when she announced she was leaving, is that OpenAI, uh, really, really, really, really According to rumours, are removing their non profit control and giving Sam Altman, Altman equity.
[00:49:20] CR: Don’t know if there’s any connection between those two stories, but what we all knew was gonna happen at some point.
[00:49:26] SS: feel like there, there, there, there could be a connection there. One of the things that clearly has happened, OpenAI obviously started, uh, it was meant to be open artificial intelligence for all non profit, and then it just became incredibly profit centric. Uh, When there’s a lot of money involved, it’s very, very hard for something to remain non profit.
[00:49:46] SS: It’s a miracle that Wikipedia has remained, um, as is still, I think, a pretty important, uh, website. It’s the only one that I can really think of. Uh, but that non profit thing was inevitable. When you have that much venture money chasing it, remember, uh, venture capitalists only ever get married with divorce in mind and, and, you know, the prenup is big and the numbers are big when they make that split.
[00:50:09] SS: And I’ll tell you what, a non profit just, just does not Fit that at all.
[00:50:15] CR: Yeah. And you know, you’re right. You really got to give Jimmy Wales credit for what he’s done. The establishment of Wikipedia and how he’s maintained that. I know a lot of people, including Jason Calacanis, have been telling him for years that he should take advertising. He should take money and should do this and that.
[00:50:33] CR: I think Calacanis tried to set up his own competitor to Wikileaks, uh, Wikimedia, Wikipedia, 10 years ago or so, which didn’t get very far. But, uh, Jimmy Wales set up a good board of trustees that have maintained the
[00:50:48] SS: rare. I can’t think of any kind of commercial media format that is as powerful. And I think it’s still a top 10 website in the world as powerful. And there’s no doubt that if it was commercial, it would have been poisoned. No doubt.
[00:51:05] CR: And you got to wonder how long it has left now that, uh, you know, AI is where people are going to go to get their questions answered. Um,
[00:51:14] SS: good point. I find now just in hindsight that you’ve mentioned that, that now a lot of things that I would go to Wikipedia for I now go to ChatGPT for, because that’s actually what you’re after, is a truncated answer to a particular topic, which it can give you, but you can also guide which parts of that you want and how long you want it to be in the format.
[00:51:33] SS: Whereas Wikipedia is in a fixed format, let’s say, um, I guess you could take Wikipedia pages and summarize it. Uh, but that’s actually, I’d be interested to see if Wikipedia’s traffic has declined since then.
[00:51:50] CR: Yeah. But you know, you could also, uh, try perplexity if you haven’t. Are you using perplexity as
[00:51:57] SS: I have used it. I have used it, but not
[00:52:01] CR: I go to. I go to Perplexity if it’s something that I would Google because its answers are much more truncated than OpenAI’s, GHRGPT’s, but it, um, standard provides you with sources and links is how it’s been designed.
[00:52:18] CR: So you can ask ChatGPT for sources, but it won’t always provide them straight up. Perplexity provides it straight up every time. So you can go and check the validity of the information. Uh, Steve, moving right along. Do you want to talk about Instagram kids?
[00:52:35] SS: Yeah, look, we opened up on the idea that around the world, America and Australia are moving pretty rapidly to get some form of legislation. I just think it’s kind of disgraceful that Instagram kids Miracle. It gets launched on the eve of legislation. And what it says to me is that we have a very nefarious corporation that understands the dangers of exposing kids and teenagers to a product.
[00:53:09] SS: And at the 11th hour, they’ve. Buy Miracle started to put it in some of the features that we’ve been asking for a long time. It feels like a corporation getting on the front foot to further delay and obfuscate potential legislation when they know that they have a potentially very damaging product. Um, you know, some of the features that go into Instagram Kids.
[00:53:29] SS: Under 18’s web account set to private by default. Wow, what a miracle. And 16’s only allowed to say, change the permission, uh, change the setting with parental permission. Teens only able to receive messages. From people they’re already connected with, as opposed to all the sexual predators and, uh, scammers that they’re getting to now.
[00:53:49] SS: Content restrictions, blocking of offensive words and messages. To the maximum set possible. Notifications turned off between 10 PM and 7 AM because yeah, we know it’s bad for kids while they should be sleeping. A reminder to leave 60, uh, to leave Instagram after 60 minutes of use each day. Ironic we’ve, uh, given my boy a 60 minute screen time of all screens limit, uh, to 60 minutes while he’s trying to get better.
[00:54:15] SS: Uh, I mean, for me, this really tells us that we’ve had a flaccid set of legislators Who really should have acted a lot quicker and a very deceptive set of corporations, which are driven by what corporations are always driven by, which is to maximize profit and to obfuscate the difference between something that’s possible and profitable.
[00:54:40] SS: And, and if anything, it shouldn’t make us far more suspicious that it is even worse. than what we think, based on the fact that they’ve flicked a switch and put these forward. And anyone who went out there and clapped Meta Corporation for doing this, really need to have a good hard look at themselves.
[00:54:58] CR: But you can trust Zuck now, Steve. He’s got his, he’s grown his hair out
[00:55:03] SS: Well, listen, I think the fact that he’s, he’s got, he’s got the chain, the gold chain of the curly hair and, and you know, he’s a surfer, apparently he’s a cool surfer. And look, we just
[00:55:16] CR: between a surfer and a rapper.
[00:55:18] SS: And a wrapper, and then we just put Jensen
[00:55:20] CR: And there’s nobody we trust more right now than hip hop stars. I think if
[00:55:27] SS: They’re right up
[00:55:28] CR: us anything,
[00:55:29] SS: can we just, can we just divert to a little bit of P. Diddy? Look, P. Diddy wasn’t really on my radar, but it sounds like he’s really done some iniquitous work there. I mean, what’s going on here?
[00:55:44] CR: uh, yeah, let’s not go down that rabbit hole. Anywho, yeah, Instagram kids, like you’re right, these tech companies, um, A, can’t be trusted, B, are going to try and find ways to manipulate governments and, um, To, you know, get around any legislative changes that are made. Um, that’s what we expect corporations to do, right?
[00:56:10] CR: Is to try and manipulate things to their advantage, to enable them to keep. generating profits. And, uh, yeah, but again, I think it’s the primary responsibility in this area is and should be on parents to be the first line of defense for their kids. Uh, but then if parents aren’t doing the job, then, uh, I, I do think governments have a responsibility to protect, uh, the people
[00:56:42] SS: look, you’re right about the parents. You’re right about that. Parents, what is that social phenomenon where if someone’s being injured, people all stand around and no one does anything about it? It’s got, there’s a certain psychological phenomenon that has a name for it. I can’t remember what that is.
[00:56:58] SS: You can think of it while I explain why I think this is really important that legislators
[00:57:02] CR: Samaritan, you’re talking about the Samaritan legislation, Good Samaritan?
[00:57:07] SS: No, I’m not talking about good Samaritan legislation. I’m talking about the idea that when someone’s getting injured or something and no one acts and everyone just stands around and watches it, even though they know that they should be doing something, there’s
[00:57:19] CR: That was it.
[00:57:20] SS: for it,
[00:57:21] CR: That’s what the Seinfeld episode was based on, the Samaritan laws that were introduced. Yeah, if you stand by and watch someone get hurt, and you’re not a Good Samaritan, you don’t step in and help them, it’s based
[00:57:33] SS: But there’s a psychological phenomenon where there’s a number, a large number of people not helping. Uh, you can go to ChatGPT and ask that while I explain why this is important. So, so,
[00:57:45] CR: We can ask ChatGPT, hold on, let’s ask ChatGPT live.
[00:57:50] CR: What’s the phenomenon called where a lot of people will stand around and watch somebody getting hurt and do nothing about it? Um, the phenomenon you’re describing is called the bystander effect. It happens when individuals are less likely to help someone in need when there are others around.
[00:58:07] SS: I thought it had a far fancier name than that, like the Dunning-Kruger or something, you know, where two psychologists named it after themselves, which if I ever invent anything, I’m going to call it the Sammartino. Although we’ve already got one. We’ve already got the Sammartino
[00:58:19] CR: Yeah, I coined that on your behalf. Yeah.
[00:58:22] SS: So look, I think what happens is you get this, um, bystander effect where it’s a real challenge socially. And Scott Galloway has said some really important things on this. He, I think, is really courageous. Again, I don’t agree with everything he says, but on this one he said there’s a challenge because if anyone is on these social forums, the deep desire for connection, for other teenagers to be on it and not be the one left out, has this collective pull where it becomes increasingly difficult for parents to isolate their children, because some are on it.
[00:58:55] SS: If you make it a law that none of them are on it, it makes it easier for the parents to implement that because they can use the law as a way to show that this isn’t just us. This is what society agrees upon. These are the values of society, and there’s reasons behind this, as it is with alcohol, as it is with cars.
[00:59:15] SS: And I think all of the research shows that a large majority of parents wish their children weren’t on social media up to the age of 18. And this is where the law becomes our proxy to enable us to collectively implement something which is difficult, because if you even get 10 or 20 percent of the cool kids or whoever on it, others feel isolated and left out, and you get this social dynamic where they all go down into it.
[00:59:38] SS: So I think that this is where legislation is important because it’s not the government handing down a bunch of, uh, laws which reduce free speech or our ability to connect. This is what the government should do, which is implement against the collective value systems that we have so that we can have a more civilized society.
[00:59:57] SS: Rather than them deciding that they want that for their benefit. We, we collectively, and the research shows it, don’t want our kids exposed to this. The help of a government implementing laws that we would want would make it easier for us to develop a society that benefits our kids by reducing their exposure to a negative tech.
[01:00:16] SS: That’s my thought.
[01:00:18] CR: Yeah, but the, the, the, I agree. The challenge though, and um, I talked about this in the Psychopath book is we don’t have, let’s take Australia as an example, we don’t have a commonly agreed upon set of values. We don’t have a ministry of ethics or a ministry of values.
[01:00:39] SS: No, you never do. And it’s always a moving, it’s always a moving beast as well. Values
[01:00:44] CR: how do we know what the common values are of a society at any given point in time?
[01:00:49] SS: Well, this is what research is for, right? The way that we know is that we do, we conduct robust research that can represent an entire population. And those research methods have been long established, for over a hundred years, where we can understand what the large majority feel, and we need to govern to the majority.
[01:01:11] SS: I mean, that, that’s not particularly hard. Now, values as a term is difficult because you and I might agree on some things and not others, but on a particular singular issue that we can isolate like this, exposure to social media, it’s actually not that difficult. The only difficult thing I think in this legislation is what is social media?
[01:01:32] SS: The definition of that, I think, is probably the most difficult thing. And I would take YouTube as the example. Is YouTube social media? I mean, that, that’s an interesting question. Maybe social media is something where you can publish. If you can publish, then it is social media.
[01:01:50] SS: I mean, that, that’s actually the hardest part of this equation. It’s determining what it is.
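An aside on the mechanics Steve leaned on a moment ago: for a simple yes/no question, classical sampling theory says a modest random sample can speak for a whole population, with a margin of error that shrinks with the square root of the sample size. A minimal sketch in Python; the percentages and sample size are illustrative, not from any real survey.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 70% of 1,000 randomly sampled parents say they don't want their
# kids on social media before 18 (illustrative figures, not real data).
p, n = 0.70, 1000
moe = margin_of_error(p, n)
print(f"{p:.0%} +/- {moe:.1%}")  # ~70% +/- 2.8%: the 'large majority' reading holds
```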
[01:01:55] CR: You know, I would love to take it up a level and say that we need a Department of Ethics, and then it should be responsible for looking at Murdoch media. Uh, you know, I mean, we already have some bodies that are supposedly there to restrain media companies from stirring up the shit or publishing falsehoods, but, uh, it’s not very effective.
[01:02:20] CR: I mean, there is, again, there’s a line between censorship and holding media organizations, be they traditional one-to-many media or social media, to certain values and standards that we as a society believe are important to uphold. Not that it’s the government just implementing things because the people in government think that this is good or this is not good.
[01:02:47] CR: It’s that we as a society, the majority of a society, believe certain standards are necessary in order to live together. There are always going to be people bitching on the sidelines who want ultimate freedom of speech and ultimate freedom of behaviour. But as we were saying during the COVID times, I don’t have the freedom to walk up to your table at a restaurant, drop my pants and take a dump on your meal.
[01:03:10] CR: So there are restrictions. If we’re all going to get along and live together in a society, there have to be certain standards that we all agree to. And it’s the same when it comes to freedom of speech as it is with freedom of behaviour. You can’t do 150 kilometres an hour past a school. I mean, for good reason.
[01:03:33] CR: And I think the same applies to what we allow in publishing and what we should allow with social media. So anyway. Instagram for kids. What’s next, Steve? I wanted to talk about one other thing, I know it’s all OpenAI today, but Sam Altman published a piece on his blog a couple of days ago. It’s called The Intelligence Age.
[01:03:54] CR: I’m just going to quickly flick through it for people and then you and I can discuss. He says, In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time.
[01:04:12] CR: We can already accomplish things now that our predecessors would have believed to be impossible. I like to tell the story that my, my mother grew up in a house with no electricity until I think she was four years old when they got electricity. Now she has AI on her mobile phone. Like it’s an incredible change in 75 years.
[01:04:34] CR: This phenomenon, oh no, we are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us. In an important sense, society itself is a form of advanced intelligence. Our grandparents and the generations that came before them built and achieved great things.
[01:04:55] CR: They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.
[01:05:16] CR: Um, and then later on he says, how did we get to the doorstep of the next leap in prosperity? In three words, deep learning worked. And this, I think, is the crux. In 15 words, deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. That’s really it. Humanity discovered an algorithm that could really, truly learn any distribution of data, or, really, the underlying rules that produce any distribution of data, to a shocking degree of precision.
[01:05:53] CR: The more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is. There are lots of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge.
[01:06:14] CR: Deep learning works and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world. You know, it reminds me of conversations I was having with my friend Dennis Bastas.
[01:06:37] CR: 20, 25 years ago. Dennis is now the billionaire owner of one of Australia’s largest pharmaceutical companies. But back then he was the CEO of a startup and we used to hang out a lot together in Melbourne. We used to talk about consciousness and, and, uh, artificial intelligence and how we thought it would play out.
[01:07:00] CR: And we, we agreed back then that consciousness and intelligence were probably just very, very high levels of computation, in that any sufficiently advanced system of computation would exhibit something that looked like consciousness and or intelligence, and it would be increasingly difficult to discern between what is really conscious and what is not, what is really intelligent and what is not.
[01:07:30] CR: Sam is basically saying that deep learning works. Throw enough compute and enough data at a system and it becomes intelligent.
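For anyone who wants “got predictably better with scale” made concrete: the scaling-law papers fit test loss to a smooth power law in model size. A minimal sketch; the constants loosely follow the parameter-scaling fit reported by Kaplan et al. (2020), but treat them as illustrative rather than authoritative.

```python
# Illustrative power-law scaling of test loss with model size.
# Constants roughly follow the Kaplan et al. (2020) parameter fit,
# L(N) = (N_c / N) ** alpha, and are for illustration only.
def loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Test loss as a power law in parameter count."""
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
# Each 10x in parameters shaves a predictable, diminishing slice off the
# loss. That smooth curve is what let labs forecast gains before training.
```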
[01:07:40] SS: It’s funny. It’s kind of, when you describe it that way, this is where my mind went. Not to say that this is right, but it’s almost like consciousness is the space between thoughts. Just a general awareness of being, like you have a bunch of thoughts and then you kind of, there, I’m here, I’m not thinking of anything in particular, but I have this general awareness. And it’s almost a bit ironic that we were talking about some of the recent evolutions in the large language models, obviously OpenAI and ChatGPT, with those chains of thought and deep thought and thinking between spaces. It’s like it’s progressing towards that.
[01:08:20] SS: The way that you described it then kind of reminds you of the human neural network and how it has this idea of consciousness. And I often think about this a lot when I’m gardening, Cameron. About a worm. I’ll dig up a worm, and I’m like, does this worm know that it’s here? Like, where does consciousness kick in, in terms of creatures, whether it’s plants or grass or worms or bees or birds or other mammals?
[01:08:46] SS: When do they, you know, what level does it kick in?
[01:08:52] SS: Because we already have computation, which far exceeds many organic species in their computations per second and the size of their neural networks in their brains versus what we have now in large language models.
[01:09:04] CR: Yeah. I mean, my thinking on this back then was led by Marvin Minsky’s book, The Society of Mind, which I read at some point in the 90s. Um, Marvin Minsky, for people who don’t know, was one of the guys who was at the beginning of artificial intelligence research and cognitive science. Um, he was, um, at MIT, co-founded MIT’s AI Laboratory.
[01:09:36] CR: You know, he was, uh, one of Kurzweil’s, um, uh, teachers and mentors. K. Eric Drexler, who wrote the first book on nanotechnology, came out of Minsky’s lab. Um, Danny Hillis and, uh, you know, the clock he set up, the Clock of the Long Now or whatever it is. A lot of the greatest current thinkers around AI and cognitive science, um, came out of Marvin Minsky’s lab.
[01:10:05] CR: And in his book, Society of Mind, he basically just talked about how he believed that the concept of self was just a society, it was a society of thoughts. Basically, the brain is just a whole bunch of single unit processes happening, billions of single unit processes that somehow become aware of each other.
[01:10:29] CR: And the interaction between those, agents is what he referred to them as, is what we call mind. It was a society of agents, billions of agents operating in conjunction, which became a society, the way millions of people living together, doing their own thing, become a society. There’s a meta level of behaviour that emerges, emergent behaviour that comes out of enough people doing their own thing, and the brain, the mind, is really no different.
[01:11:05] CR: So, you know, these ideas go right back to, you know, at least his book in the 80s, 86, when he came out with Society of Mind, one of the foundational books on artificial intelligence. So anyway, I thought it was really interesting. And, you know, I remember talking about this a long time ago, an early interview that, um, Jensen Huang at NVIDIA did with Ilya Sutskever.
[01:11:30] CR: Um, he asked Ilya what was the most surprising thing about ChatGPT, and he said it was that it worked. Like, we had this idea that if you threw enough data and enough compute at a large language model, it might show intelligence, and it did. You know, that was still surprising to him, um, a year and a half ago.
[01:11:53] SS: Yeah. It’s interesting. I mean, again, in Kurzweil’s book, which I’m savouring the words of, almost don’t want to read it because I’m enjoying it so much, because once I’ve read it, it’ll be read. The Singularity Is Near. I just love that he talks about the connectionist versus symbolic AIs. And the connectionist approach was the first type of AI they developed, back in ’52, but they just didn’t have the computation to make it possible. They thought it would be the right approach because it had a sense of biomimicry about it. Which is just, if you have enough, uh, computation and enough input, which is really all there is, computation and input, data in and data out, that’s all there really is.
[01:12:34] CR: But he talks about, um, how this, like, what became the LLM model, where there is no hard architecture between symbols and concepts, you just feed it data. Everyone thought that was, uh, the wrong approach and wouldn’t work. And, and so for 30, 30 years, we just gave up on that.
[01:12:55] SS: Yeah, for a really long time. I mean, the simple example is, uh, words that have the exact same sound and spelling but two different meanings, like boot and boot. If you feed it enough information, boot, car boot, put it in the boot, um, the boot was full, it’ll eventually work out that that boot, in the right context, is to do with cars, versus the boot you’re climbing up a mountain with, chewy on your boot, all that. And it can work it out just by the words that are near it.
[01:13:23] SS: Yeah,
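Steve’s boot example is, loosely, how contextual embeddings behave: the same string gets a different vector depending on its neighbours. A minimal sketch using Hugging Face’s transformers library with a stock BERT checkpoint; the model choice and the test sentences are our assumptions, not anything from the episode.

```python
# Same word, different vector depending on context: a sketch of the "boot"
# example with contextual embeddings (requires: pip install transformers torch).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def boot_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'boot' in the sentence."""
    inputs = tok(sentence, return_tensors="pt")
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    idx = tokens.index("boot")
    with torch.no_grad():
        return model(**inputs).last_hidden_state[0, idx]

car1 = boot_vector("put the luggage in the boot of the car")
car2 = boot_vector("the boot of the sedan was full of shopping")
shoe = boot_vector("he laced up his boot before climbing the mountain")

cos = torch.nn.functional.cosine_similarity
print("car vs car :", cos(car1, car2, dim=0).item())  # typically higher
print("car vs shoe:", cos(car1, shoe, dim=0).item())  # typically lower
# The two car 'boot's usually sit closer together than car vs footwear:
# the model works it out just by the words that are near it.
```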
[01:13:24] CR: Yeah, so, like, those three words I think are incredibly profound. As Sam said in the article, deep learning worked. It’s one of the pivotal statements. It will be one of the pivotal turning points, tipping points, of not just human society and human history, but the history of intelligence on this planet.
[01:13:54] CR: And potentially in the entire universe, because we don’t know how much intelligence there is in the universe; we can’t assume that there’s anything outside of this.
[01:14:04] SS: Wow.
[01:14:04] CR: Deep learning worked. To me, that is going to be as profound a discovery as the taming of fire. Or the wheel, or electricity, or general relativity, or quantum mechanics.
[01:14:24] CR: It is one of the most profound, and it’s just slid by. And I don’t think, most, I mean, most people still don’t know how electricity
[01:14:32] SS: I too, I too am absolutely astounded by the capability and I think that anyone who isn’t maybe hasn’t really played with it enough. Really?
[01:14:42] CR: Oh, absolutely. Yeah, yeah, yeah. Anyone who’s dismissive. I had lunch with one of my listeners from Boston, shout out to Doug if he’s listening,
[01:14:50] SS: Aggie boy!
[01:14:52] CR: He was here in Brisbane a couple of weeks ago. I had lunch with him and he, he runs a biotechnology company, which is really interesting and we should get him on to talk about the work that they’re doing in solving
[01:15:02] SS: that’d be cool.
[01:15:04] CR: But I asked him about how much they’re using AI and he was still kind of dismissive about AI, and I said, have you played around with it much? He goes, no, not really. And I was like, man, I pulled out my phone and did my usual parlour trick, which is like, let’s ask it a question about oncology or biotechnology or cancer research and have a voice conversation with it.
[01:15:24] CR: This is a few weeks ago before advanced voice came in. Uh, I just think I said, have you played with the voice? He goes, no. And I was like, seriously,
[01:15:33] SS: Get
[01:15:33] CR: just have a listen to this, talk to it, man. Like, anyway. Yeah. People who aren’t blown away either haven’t played with it or don’t understand it. But as I keep saying to people, the most profound thing is we didn’t know this would work.
[01:15:51] CR: We just had a, we, I mean, the people who developed it
[01:15:55] SS: Well, I mean, when you and I were developing the LLMs back in 2007, we couldn’t believe what we stumbled upon. Could we,
[01:16:02] CR: Using the, the broad we of humans,
[01:16:05] SS: way. Look, I’m part of the human species. I’m one of the developers. If humanity is the
[01:16:10] CR: we. Yeah. Well, yeah. All right. Let’s move on. Let’s move on. Technology Time Warp, Steve, what have you got for us
[01:16:18] SS: I just wanted to put in a titbit. Every now and again, I’ll just do a today in technology, and I did find out that the first ever personal computer was launched on this day in 1973, and it was the Microcomputer Machines of Canada MCM 70, which had Intel’s early 8008 processor in it. But it failed, and no one really knows about the Microcomputer Machines of Canada corporation.
[01:16:45] SS: Big companies, Chevron, Mutual Life Assurance, NASA, and the US Army were its first customers, and it was launched at a, uh, microcomputer programmers’ conference in Toronto in 1973. So it’s interesting how often it’s kind of 10 years before a product gets into the zeitgeist and becomes available to general users, late seventies, early eighties.
[01:17:13] SS: I just thought that was an interesting little titbit as a technology time warp.
[01:17:18] CR: The company, Microcomputer Machines, was founded by Canadian inventor Mers Kutt. Born 1933, still alive.
[01:17:30] SS: Oh,
[01:17:32] CR: Former professor of mathematics at Queen’s University. Um, worked at Phillips, IBM, and Honeywell. Wow. He was a professor of mathematics at Queen’s during the 1960s, served as president of the Canadian Information Processing Society, was the director of the university’s computing center, and then in 1968 he was interacting with punched card based input systems and thought, maybe there’s an idea to put a keyboard on one of these things.
[01:18:10] SS: It’s one of the crazy things about all of the early computers: none of them had screens, and most of them just had tapes. And that’s the interesting thing, you would do little printouts of what you get, a little punch card analysis. Uh, which is an interesting idea, going to a screen, and then from a screen to a voice.
[01:18:31] SS: I mean, it actually is really interesting how things have this anthropological, natural flow. First, it’s something physical, whether it’s a cave wall drawing or whether it’s a printout or a punch card, then it’s a screen, then it’s a voice. It’s really interesting in the way that that flows. And it almost shows that in the first instance, it’s hands driven, it’s typing, punch cards, physicality, and then it’s screen, it’s visual, it’s eyes.
[01:18:58] SS: And our eyes were working before our language developed. And then it’s language. There’s almost this anthropological pattern that technological developments have followed, and they’ve done it with AI again. Now we’re at voice. Actually, that’s ironic. I didn’t plan this. It sounds like I planned that this technology time warp would fit into all the other things, but I didn’t, it just turned out that way, just like large language models.
[01:19:23] SS: I can’t believe it actually worked.
[01:19:26] CR: And now we have vision, like visual inputs into the AI as well. Um, not just a screen as an output. I mean, I do this increasingly with ChatGPT: take a screenshot of whatever it is I’m working on, a report or a spreadsheet or an error message, and just drop it in and say, what do you make of this? You know, screenshots.
[01:19:50] CR: We can’t do live video input yet, but that’s what we’ll have. Oh, you do it with
[01:19:54] SS: my scrappy notes. I’m writing a piece on dead internet theory, and you would be astounded at the accuracy with which it can decipher my scribbles, which I don’t think a lot of humans would be able to make sense of. And I’m guessing that the reason it can do that is that it looks at the words around a word and guesses what it should be, even if it’s not spelled right.
[01:20:18] CR: He just gave me a great idea. I’m going to scan all my old paper diaries
[01:20:23] SS: I do that. I do it all the time.
[01:20:26] CR: and say,
[01:20:26] SS: when I’m doing a briefing, when I’m doing a briefing with a client on the phone for an event or some consulting work, I’ll write it all down as they talk. I take a, I take a photo of it, dump it into ChatGPT, turn it into a Word doc, bam, I’ve got it all done, and then I’ll file it.
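Steve’s photo-to-Word-doc workflow can be scripted end to end. A minimal sketch against OpenAI’s Python SDK and python-docx; the model name, prompt wording, and file paths are assumptions for illustration, not his actual setup.

```python
# Photo of handwritten notes -> transcribed text -> Word doc, in the spirit
# of the workflow described above (requires: pip install openai python-docx,
# plus an OPENAI_API_KEY in the environment). Model and paths are assumptions.
import base64
from docx import Document
from openai import OpenAI

client = OpenAI()

with open("briefing_notes.jpg", "rb") as f:  # hypothetical photo of scribbles
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Transcribe these handwritten notes verbatim. "
                     "Infer illegible words from the surrounding context."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

# Dump the transcription into a Word document and save it for filing.
doc = Document()
doc.add_paragraph(response.choices[0].message.content)
doc.save("briefing_notes.docx")
```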
[01:20:42] CR: Wow. Good thinking. Just gave me another project to waste my weekend on. Uh, by the way, that, uh, computer we talked about, the MCM 70, based of course on Intel’s 8008 processor, one of the earliest eight bit microprocessors, which could address 16 kilobytes of memory
[01:21:07] SS: it was.
[01:21:08] CR: introduced in April 1972.
[01:21:13] SS: And my first computer was the 16K RAM TRS-80, which had a similar early microchip in it, which was way back in about 1981. So the turnover then was a lot slower, wasn’t it? I mean, you were still selling the older, slower ones for quite a long period of time, whereas now I imagine, you know, things don’t even last 12 months before they’re off the shelf.
[01:21:33] SS: Uh, but that, that was interesting.
[01:21:37] CR: Just the leap from the 8008 through to what Apple are putting in their iPhone 16s, the processing power in the chips, all the way through to the NVIDIA chips that are running AI now, that is just a mind boggling demonstration of Moore’s Law. I was talking to my mum about AI the other night, and I was explaining Moore’s Law.
[01:22:04] CR: And how over the last 20 years there have been so many people saying, oh, Moore’s Law is coming to an end, we can’t get the gates small enough now for an electron to pass through them. And then everyone’s like, no, well, now we just put them together. You know, what NVIDIA is doing where they’re just slamming these things together.
[01:22:23] CR: Uh, Moore’s Law has not, uh, died. It continues to speed up our ability
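The 8008-to-iPhone leap Cam describes is easy to sanity-check against Moore’s Law. A back-of-the-envelope sketch; the 8008’s roughly 3,500 transistors is a widely published figure, and the modern end uses Apple’s stated ~19 billion for the A17 Pro, since we haven’t seen a confirmed count for the iPhone 16 chips.

```python
import math

# Back-of-the-envelope Moore's Law check: Intel 8008 (1972, ~3,500
# transistors) to Apple's A17 Pro (2023, ~19 billion, Apple's own figure).
old_count, old_year = 3_500, 1972
new_count, new_year = 19e9, 2023

doublings = math.log2(new_count / old_count)
years = new_year - old_year
print(f"{doublings:.1f} doublings in {years} years "
      f"-> one every {years / doublings:.1f} years")
# ~22.4 doublings in 51 years, one roughly every 2.3 years: remarkably
# close to the canonical 'doubling every two years'.
```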
[01:22:28] SS: Yeah. I mean, I always liked Kurzweil’s take, and I think he did it in How to Create a Mind, where he talked about the transistor being the fourth iteration of truncated ways of transmitting information, where he starts with punch cards, and then he goes from punch cards to vacuum tubes, vacuum tubes to transistors, transistors to integrated circuits, and that, you know, eventually you’ll invent a new technology or a new way to weave information together to hold and process more information.
[01:22:58] SS: People confuse Moore’s Law with the overriding idea of the law of accelerating returns, where technology finds a way to fit more information into less. Um, you could probably go all the way back to cave wall drawings, you know, bits of
[01:23:12] CR: Yeah.
[01:23:13] SS: right?
[01:23:14] CR: think Kurzweil does, right? I think he goes back that far. Yeah. All right. Uh, well, how are we doing for time? An hour and a half, Steve. Geez. And we’re not
[01:23:25] SS: Jeez. Well, what happens when you have
[01:23:27] CR: a line under it.
[01:23:29] SS: we tidy this up,
[01:23:32] CR: Yeah. Like, um, I think we’ve, we’ve talked about social media gating anyway. I think that’s pretty much it.
[01:23:39] CR: Um, yeah. Big couple of weeks. Let’s see how long it is before we do another one of these, Steve. But, uh, good to chat and
[01:23:48] SS: Oh, so enjoyable. Look, just, just, you know, exploring things with you, uh, and hopefully, yeah, and hopefully Lewis has enjoyed some of the explorations we went on, but so good. Mate, appreciate the chat.
[01:24:03] CR: Yeah. Thanks for taking time, man. And, uh, anyone out there, you got any ideas you’d like to, like us to cover in future episodes, uh, let us know, reach out. You know where to find us. Thanks, Steve.
[01:24:15] SS: Thanks, Cam.