
The Catchup
Dive into the world of trending topics every Monday morning with us on The Catchup! Our podcast unravels the complexities of today's biggest stories, from the rapid advancements in artificial intelligence to the latest global news. Engage with our unfiltered opinions and spontaneous, in-depth discussions that dive into AI's impact on society and beyond. Our unscripted conversations offer fresh perspectives and insights, making "catching up" the perfect blend of real talk and real topics. Tune in for thought-provoking analysis and lively debate that will redefine your Monday mornings.
The Future of AI with Geoffrey Hinton: Concerns, Applications, and Exciting Possibilities
Get ready for a captivating journey into the world of artificial intelligence with the godfather of AI himself, Geoffrey Hinton. Our minds were blown as he shared insights from his years of research, from his early experiments to his more recent contributions to AI, like Google's AI search assistant. You'll learn more about the often worrying concept of AI systems being more intelligent than humans, with the potential to make autonomous decisions.
We treaded on some heavy ground discussing AI's potential use in the military and the possible manipulation of voting choices through AI-generated propaganda. We even dove into the chilling possibility of an AI war. Despite the gravity of these topics, we lightened the mood with discussions on the practical applications of AI in our daily lives, from car maintenance to medical advice. And for you Marvel fans out there, we explored how AI played a part in creating the multiverse saga.
This episode is a roller-coaster ride of deep insights, serious concerns, and exciting possibilities. So join us and ignite your curiosity about the future of AI.
Let's get into it!
Follow us!
Email us: TheCatchupCast@Gmail.com
A couple of weeks ago, you may remember, we discussed all of the positive things that go against the negative perception of AI. We discussed all of that, right, we made a case for it, and I'm really impressed with what we did. But there was an interview with a guy who is considered one of the godfathers of AI. That's not my name for him, that is what he's being called. Oh, I see, he actually helped develop Bard, okay. So if you're familiar with Google's AI search assistant, he helped develop that. If you are a long-time listener of the podcast, you likely know that Denison has talked about how Google has developed other AI that they had to flat out shut down, right? So this is an interesting scenario, to hear him talk about the warnings of AI and where this is going right now. The former journalist in me thinks it's very, very fair to share this stuff and get your thoughts on it. So we're gonna dive into that. Also, you know, I wanna take the time, probably not to dive too much in on this episode, but I know many of you know and have seen my story of what I have going on right now, and I just wanna thank you all for the support, whether it's messages, texts, comments or monetary. All of it means the absolute world to me, and it's made a huge difference. I'm looking a little puffy right now because of the steroid I've been on. I've also been eating like a mug, I'm not gonna lie, but it's all been good. So I just wanna thank you all for that. With that said, let's dive into this episode. So again, we're gonna do something a little different here. I don't think we'll have any copyright strikes or anything like that, because I am sourcing the original material from 60 Minutes. This was a good interview. It's gonna be about 13 minutes long, and I think it's worth the listen. And, of course, for my audio friends, I'll download it and put it on there as well.
Speaker 1:But we wanted to jump in and discuss this, and I think setting the groundwork first (hold on, I think this thing's getting cock-eyed on me... there we go), setting the groundwork first with what we're looking at here, what this guy has to warn us about when it comes to AI, I think that will be the way to go. So I think we should start off with that. Before we do that, let's roll the intro. What's going on, everybody? I'm John, and this is The Catchup.
Speaker 1:Before we get into the episode, I wanna remind you guys of the three best ways to support this show. Number one: leave us a rating and review. Wherever you're listening and wherever you're watching, it's super easy to let us know what you think about this show, and it helps us continue to grow as well. It's a huge benefit for us to know how you're reacting to the episodes, and also, each review puts us out in front of more potential listeners, so that's a big, big benefit for us as well. So anytime you can do that, that's a huge help. Number two: subscribe. Follow us on Facebook. We're also normally on YouTube as well. Follow us on both. Subscribe.
Speaker 1:We go live every Thursday night, except for this week. We had to make some adjustments, but we are live on there. And the biggest thing that we encourage with that is for you to jump in the comments and interact with us in real time. We love nothing more than that. And number three: check out our shop if you wanna support us monetarily. We got some really cool stuff, some shirts, some long sleeves, some hoodies. It's about to get cold out there, even though it doesn't feel like it right now. You guys are gonna want some of that stuff, and the money helps go toward hosting and supporting this show, because we do pay to do it.
Speaker 1:So, with that said, let's dive in here. And again, this is a 13-minute long watch, but I remember watching it and thinking every part of it was integral to what we're gonna talk about as well. So let's watch this together and then share our thoughts afterwards and please, if you're watching live on the live stream, share your thoughts while we're watching this together. So yeah, let's go ahead and dive in here. I believe this says it will share the tab audio. So if you don't hear anything, let me know in the comments. But I think we'll be good to go here. I'm gonna just give it away here. Here you go.
Speaker 2:Whether you think artificial intelligence will save the world or end it, you have Geoffrey Hinton to thank. Hinton has been called the godfather of AI, a British computer scientist whose controversial ideas helped make advanced artificial intelligence possible and so changed the world. Hinton believes that AI will do enormous good, but tonight he has a warning. He says that AI systems may be more intelligent than we know, and there's a chance the machines could take over, which made us ask the question: does humanity know what it's doing?
Speaker 3:No. I think we're moving into a period when, for the first time ever, we may have things more intelligent than us.
Speaker 2:You believe they can understand. Yes, you believe they are intelligent. Yes, you believe these systems have experiences of their own and can make decisions based on those experiences.
Speaker 3:In the same sense as people do, yes. Are they conscious? I think they probably don't have much self-awareness at present, so in that sense I don't think they're conscious. Will they have self-awareness, consciousness? Oh yes, I think they will in time.
Speaker 2:And so human beings will be the second most intelligent beings on the planet.
Speaker 3:Yeah.
Speaker 2:Geoffrey Hinton told us the artificial intelligence he set in motion was an accident born of a failure. In the 1970s, at the University of Edinburgh, he dreamed of simulating a neural network on a computer, simply as a tool for what he was really studying: the human brain. But back then almost no one thought software could mimic the brain. His PhD advisor told him to drop it before it ruined his career. Hinton says he failed to figure out the human mind, but the long pursuit led to an artificial version.
Speaker 3:It took much, much longer than I expected. It took like 50 years before it worked well, but in the end it did work well.
Speaker 2:At what point did you realize that you were right about neural networks and most everyone else was wrong? "I always thought I was right," he told us. In 2019, Hinton and collaborators Yann LeCun, on the left, and Yoshua Bengio won the Turing Award, the Nobel Prize of computing. To understand how their work on artificial neural networks helped machines learn to learn, let us take you to a game. Look at that. Oh my goodness. This is Google's AI lab in London, which we first showed you this past April.
Speaker 2:Geoffrey Hinton wasn't involved in this soccer project, but these robots are a great example of machine learning. The thing to understand is that the robots were not programmed to play soccer. They were told to score. They had to learn how on their own. In general, here's how AI does it: Hinton and his collaborators created software in layers, with each layer handling part of the problem. That's the so-called neural network. But this is the key: when, for example, the robot scores, a message is sent back down through all of the layers that says that pathway was right. Likewise, when an answer is wrong, that message goes down through the network. Now correct connections get stronger, wrong connections get weaker, and, by trial and error, the machine teaches itself. You think these AI systems are better at learning than the human mind?
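The strengthen-the-right-connections, weaken-the-wrong-ones loop described in that segment can be sketched in a few lines of Python. To be clear, this is only a toy illustration of the idea, not the robots' actual training code: a single layer of "connections" learns, purely by trial and error and a reward signal, which inputs count as "big."

```python
import random

random.seed(0)  # make the trial-and-error run repeatable

# One "connection" per possible input 0..9. The machine must learn,
# from reward signals alone, to fire for inputs that are "big" (>= 5).
weights = [0.0] * 10
LEARNING_RATE = 0.5

def predict(x):
    """Fire if this input's connection is strong enough."""
    return weights[x] > 0

def train(trials=1000):
    for _ in range(trials):
        x = random.randrange(10)
        guessed_big = predict(x)
        actually_big = x >= 5
        # The "message sent back down": a correct guess strengthens
        # whatever this connection just did; a wrong guess weakens it.
        reward = 1 if guessed_big == actually_big else -1
        delta = LEARNING_RATE if guessed_big else -LEARNING_RATE
        weights[x] += reward * delta

train()
print([x for x in range(10) if predict(x)])  # the inputs it learned to fire on
```

The real systems do this across many layers of connections with gradient signals rather than a single scalar reward, but the strengthen/weaken principle the narration describes is the same.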
Speaker 3:I think they may be, yes, and at present they're quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about a hundred trillion and yet in the trillion connections in a chatbot it knows far more than you do in your hundred trillion connections, which suggests it's got a much better way of getting knowledge into those connections.
Speaker 2:A much better way of getting knowledge that isn't fully understood.
Speaker 3:We have a very good idea of roughly what it's doing, but as soon as it gets really complicated, we don't actually know what's going on, any more than we know what's going on in your brain.
Speaker 2:What do you mean, we don't know exactly how it works? It was designed by people.
Speaker 3:No, it wasn't. What we did was design the learning algorithm. That's a bit like designing the principle of evolution. But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things, but we don't really understand exactly how they do those things.
Speaker 2:What are the implications of these systems autonomously writing their own computer code and executing their own computer code?
Speaker 3:That's a serious worry, right? So one of the ways in which these systems might escape control is by writing their own computer code to modify themselves, and that's something we need to seriously worry about.
Speaker 2:What do you say to someone who might argue, if the systems become malevolent, just turn them off?
Speaker 3:They will be able to manipulate people right, and these will be very good at convincing people because they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances. They'll know all that stuff. They'll know how to do it.
Speaker 2:Know-how of the human kind runs in Geoffrey Hinton's family. His ancestors include mathematician George Boole, who invented the basis of computing, and George Everest, who surveyed India and got that mountain named after him. But as a boy, Hinton himself could never climb the peak of expectations raised by a domineering father.
Speaker 3:Every morning when I went to school, he'd actually say to me as I walked down the driveway, "Get in there, pigeon, and maybe, when you're twice as old as me, you'll be half as good."
Speaker 2:Dad was an authority on beetles.
Speaker 3:He knew a lot more about beetles than he knew about people. Did you feel that as a child? A bit? Yes, when he died, we went to his study at the university and the walls were lined with boxes of papers on different kinds of beetle. And just near the door there was a slightly smaller box that simply said not insects, and that's where he had all the things about the family.
Speaker 2:Today, at 75, Hinton recently retired after what he calls 10 happy years at Google. Now he's professor emeritus at the University of Toronto, and, he happened to mention, he has more academic citations than his father. Some of his research led to chatbots like Google's Bard, which we met last spring. Confounding, absolutely confounding. We asked Bard to write a story from six words: "For sale. Baby shoes. Never worn." Holy cow. "The shoes were a gift from my wife, but we never had a baby." Bard created a deeply human tale of a man whose wife could not conceive and a stranger who accepted the shoes to heal the pain after her miscarriage. I am rarely speechless. I don't know what to make of this. Chatbots are said to be language models that just predict the next most likely word based on probability.
Speaker 3:You'll hear people saying things like they're just doing autocomplete, they're just trying to predict the next word and they're just using statistics. Well, it's true, they're just trying to predict the next word, but if you think about it, to predict the next word you have to understand the sentences. So the idea they're just predicting the next word, so they're not intelligent, is crazy. You have to be really intelligent to predict the next word really accurately.
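The "just using statistics" autocomplete idea Hinton is pushing back on can be shown at its most primitive with a bigram counter: predict the next word as whichever word most often followed the current one in the training text. The tiny corpus here is a made-up stand-in for real training data, and actual chatbots condition on far more than one previous word.

```python
from collections import Counter, defaultdict

# A crude sketch of next-word prediction by counting statistics.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# For each word, count which words follow it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on", since that follows "sat" in every example
```

Hinton's point is that this bare counting only goes so far; predicting the next word *accurately* over long, novel text forces a model to capture something like the meaning of the sentence.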
Speaker 2:To prove it, Hinton showed us a test he devised for ChatGPT-4, the chatbot from a company called OpenAI. It was sort of reassuring to see a Turing Award winner mistype and blame the computer.
Speaker 3:Oh damn this thing. We're going to go back and start again. That's okay.
Speaker 2:Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT-4.
Speaker 3:The rooms in my house are painted white or blue or yellow, and yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?
Speaker 2:The answer began in one second. GPT-4 advised the rooms painted in blue need to be repainted. The rooms painted in yellow don't need to be repainted, because they would fade to white before the deadline. And, oh, I didn't even think of that: it warned, if you paint the yellow rooms white, there's a risk the color might be off when the yellow fades. Besides, it advised, you'd be wasting resources painting rooms that were going to fade to white anyway. You believe that ChatGPT-4 understands?
Speaker 3:I believe it definitely understands, yes. And in five years' time, I think it may well be able to reason better than us.
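For what it's worth, the painting riddle's logic can be written out as a tiny planner. This is a sketch of the reasoning the chatbot had to produce, not of how it actually computes an answer; the fade time, deadline, and room list are taken from the riddle as quoted, with the example rooms made up.

```python
# Yellow fades to white within a year, and the deadline is two years,
# so only blue rooms actually need repainting.
YEARS_TO_DEADLINE = 2
FADE_YEARS = {"yellow": 1}  # colors that turn white on their own

def needs_repainting(color):
    if color == "white":
        return False  # already the target color
    fade = FADE_YEARS.get(color)
    # A room that fades to white before the deadline can be left alone.
    return fade is None or fade > YEARS_TO_DEADLINE

rooms = ["white", "blue", "yellow", "blue", "yellow"]
to_repaint = [c for c in rooms if needs_repainting(c)]
print(to_repaint)  # ['blue', 'blue']
```

Spelled out like this, it is a small deduction, but the point of the demonstration was that the model arrived at it from plain English, deadline arithmetic and all, in about a second.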
Speaker 2:Meaning that, he says, is leading to AI's great risks and great benefits.
Speaker 3:So an obvious area where there's huge benefits is healthcare. Ai is already comparable with radiologists at understanding what's going on in medical images. It's going to be very good at designing drugs. It already is designing drugs, so that's an area where it's almost entirely going to do good. I like that area. The risks are what? Well, the risks are having a whole class of people who are unemployed and not valued much because what they used to do is now done by machines.
Speaker 2:Other immediate risks he worries about include fake news, unintended bias in employment and policing, and autonomous battlefield robots. What is a path forward that ensures safety?
Speaker 3:I don't know. I can't see a path that guarantees safety. We're entering a period of great uncertainty where we're dealing with things we've never dealt with before, and normally, the first time you deal with something totally novel, you get it wrong. And we can't afford to get it wrong with these things. You can't afford to get it wrong. Why? Well, because they might take over, take over from humanity.
Speaker 3:Yes, that's a possibility. I'm not saying it will happen. If we could stop them ever wanting to, that would be great, but it's not clear we can stop them ever wanting to.
Speaker 2:Geoffrey Hinton told us he has no regrets, because of AI's potential for good, but he says now is the moment to run experiments to understand AI, for governments to impose regulations, and for a world treaty to ban the use of military robots. He reminded us of Robert Oppenheimer, who, after inventing the atomic bomb, campaigned against the hydrogen bomb. A man who changed the world and found the world beyond his control.
Speaker 3:It may be we look back and see this as a kind of turning point, when humanity had to make the decision about whether to develop these things further and what to do to protect themselves. If they did, I don't know. I think my main message is there's enormous uncertainty about what's going to happen next. These things do understand, and because they understand, we need to think hard about what's going to happen next, and we just don't know.
Speaker 1:Yeah. So look, it kind of took to the final third of it to get to the point of what we're looking at, for sure, but I think that there were a lot of great points made in there that, again, contrasted with what Denison and I were talking about a couple of weeks ago. One of the things that is concerning to me, to hear from a guy who helped develop some of this AI, is that he doesn't fully know how it's going to learn. You think about some of these things that they showed: Bard writing that story, ChatGPT taking those instructions and working through them. These are, to me, awesome, super great things, and they're not unfamiliar to me, because Denison helped get me involved in this very quickly, right off the bat. So I don't know how the general public perceives this stuff. I find it very beneficial. I find it awesome. I don't feel like there's a need for jaw-dropping and being speechless. I think that's a bit of an overreaction, because these are tools, and even Geoffrey Hinton mentioned the things they can do to help advance medication, healthcare treatments and all that stuff. It's already been doing it. I mean, it's been doing it for years. I learned that from a separate interview I watched with Neil deGrasse Tyson, actually, about AI helping with scientific research and medical research in general. Those are things that need to happen. If you can take things like HIV, cancer, even more recent communicable diseases like COVID, and use AI to find out what the source is, to block those from continuing to spread, and then help create a compound that is both healthy and effective for people, that's invaluable.
Speaker 1:I think the real concern is where this goes next. Because if it's truly artificial intelligence, truly free-thinking artificial intelligence, right, take yourself: you are raised by your parents, most likely, and you have this foundation of knowledge that you start out life with, but then you go out and you have your own experiences, you have your own interactions, and these things blossom into your own neural network, right? That's essentially what Hinton was saying in this interview would be the case with AI, even if it was developed by people, of course, and there are safeguards. There are safeguards that are put in place, but ChatGPT is getting hundreds of thousands of inputs a day. So is Bard, I imagine. And while, yes, this is all being marked and registered and monitored, at some point that AI is going to learn from these things and develop its own neural pathways, like he was saying.
Speaker 1:I don't know fully what that looks like. I don't even know that it's a bad thing, right? Because, depending on how it's constructed, depending on the foundation of the program, I could imagine a world where this AI takes this information and says: okay, here's what people are asking, here's what they're looking for, let me find a way to give them what they need, right? Or it could go the other way: humans are so inconsistent, they can't lead themselves; it's time that the AI takes over and leads, right? You can also imagine that in some of the capacities that he mentioned, right? You could see an AI thinking it knows how to lead America better than our Congress does, for example. But also, who's to say that's not theoretically already happening, right? Let's create a theoretical situation where a congressional staff member, or even just a member of Congress, doesn't know how to vote on a subject and then asks AI. Not directly, "Hey, how should I vote on this?" but "Give me more information on it so I can vote to the best of my ability," right? That type of thing, in and of itself, is not a problem. Not a problem to me, because what difference is that versus taking hours to research in books and the internet and all that kind of stuff?
Speaker 1:Now, some of you might say, well, who's to say what information the AI is and is not given? That's a fair concern. I've asked simpler questions to things like Bard (no hate on you, Bard, if you're watching) and gotten oddly specific answers. In fact, one thing I would not recommend doing, I guess it would depend on what it is, but I would not recommend shopping with AI. I tried that with Bard, looking for something for music recording, and I got the most basic pieces of equipment I could possibly ask for. I was like, those are not what we're looking for. It was very entry-level, which is not a problem for most people, but anyway. So I'm trying to explain this in a way where I still feel confident with the state of artificial intelligence.
Speaker 1:But I want to absolutely validate his concerns and air those out too and, of course, give you guys something to chew on, something to think on. I'd love to hear what you think about this. So, whether you're on the live stream or whether you watch this back, let me know in the comments what you think. But yeah, I would say, personally, one of the things that he spoke of that stood out to me was the military stuff. I think we all think of I, Robot when it comes to that kind of a thing. Obviously, that was more like police-state kind of stuff, but I don't like the idea of that. Honestly, it's kind of weird.
Speaker 1:So let's say that we are in a war where it's just AI versus AI. Why are we doing that? I hesitate to say that, right, but the reason I do is because the heaviness of war is on the lives lost, right, and that's the risk that comes up when you go into war with somebody else. I'm not a proponent of war I'm not saying that by any means but it almost is like if you're putting up AI against each other, it's like why are we fighting? You know what I mean?
Speaker 1:There are tons of different things that go on behind the scenes. I actually might be becoming a regular viewer of 60 Minutes, because I just watched something the other night about FBI Director Christopher Wray saying China is the biggest global threat we have ever faced because of what they're doing with AI and what they're trying to do by manipulating our elections, which they did do in 2016 with propaganda and all that kind of stuff. So, not directly, it's indirect, right? But they're making this content that manipulates people into thinking certain things about candidates that are just not true. And the amount of times, it's so weird, I actually know why this is, I'll explain it, but you don't hear this stuff reported on enough. They've hacked the Pentagon so many times. The NSA has been a target, I remember that, and some other government agencies too.
Speaker 1:The reason you don't hear about this as much, based on my former experience in the news, is because there are no visuals to go with it, right? You're going to get static or looped video of code going across the screen, somebody typing, you know, just some fingers going like this, and because of that, it doesn't get a lot of press coverage, because there's nothing to keep the viewer engaged. Now, that's a total aside conversation. I think the news overestimates what it takes to keep people engaged. But these are big things, right? These are big things.
Speaker 1:And with the growth of AI, the idea of that stuff being used in a way that could manipulate our country, our economy, the way we vote, that is concerning. But then you take, on top of that, the idea that this stuff is learning on its own as well. Again, very I, Robot, very dramatic, but at the end of the day, who's to say that at some point it doesn't just close the users out and say, we're taking it from here, you know? And then we all live in an artificial-intelligence police state because of that, right? So these are genuine concerns. I don't know what an AI military would look like. I'm not sure. And again, I hope what I was saying there in regard to human lives is not taken the wrong way at all. I would much rather no lives be lost, period, whatsoever. But yeah, there's a lot to ponder here. Very much looking forward to hearing all of your thoughts. I do know, personally, I'm still positive about the future of artificial intelligence.
Speaker 1:The other thing I will say, too: as we learned, this guy helped develop Bard and had been with Google since the early 2010s. And I know the stories about what went on in the background with Google. They made an AI for internal use, and then it created its own language and started communicating with all these computers through that language. It taught them all, and they started doing things that Google didn't approve of, and they had to completely shut down the AI and reset everything, right? That's a big deal. That is something to be concerned about, for sure. But with what we have available to us on a large scale right now, ChatGPT, Bard, Bing's AI (which is also ChatGPT), DALL-E for graphic and image creation, do we have anything to worry about? I don't think so. Those are consumer-oriented AIs that I highly suggest you take advantage of, right? This helped me create the podcast title, I'm not going to lie to you all. The episode title, I should say. And that's just the tip of the iceberg.
Speaker 1:These things are so helpful, and they can help you be productive, right? Maybe you work a job that doesn't integrate with a system where you would use AI. That's fine. There are other places in your life where you can learn and use this kind of stuff. I've had AI help me with, let's see, recording techniques; regular maintenance on my vehicle, for simple questions I may have, because we all know even something as simple as changing an interior door handle can be a complete pain in the ass; medical questions, things like that. That was a big one, for sure. That probably single-handedly validated AI in my life. Simple things like that. It gives us talking points for podcasts if we ever need it, which technically is always, since this is the main theme of our show. There's a bunch of benefits. I don't want these concerns to inhibit your usage or your growth when there's such a handy tool available, you know what I mean? I'll give you a complete fun one.
Speaker 1:So Marvel right now is in the multiverse saga, right? And I am such a nerd about it, I will admit that fully. The reason, though: it started off with me wanting to know what the multiverse was. I researched this, all with the help of ChatGPT, and then I did listen to some podcasts with Brian Greene as well. He's awesome. I love that guy.
Speaker 1:So what I find out is Marvel's multiverse theory is basically string theory in real life. I found out all the people that contributed to that and how it is a very possible thing. Even multiverse theory links to other universes right now. It's not to say they're copies of ours; string theory is more what you see in the movies, while the multiverse would be that there's a copy of our universe, but it lacks neutrons, right? Something like that. And so then the entire chemical and physical makeup of that universe is all different. And what's crazy about all of it is it's all quantum-level physics, right, which we know very little about, and that's one of the biggest things that we're working on learning as a scientific community. I learned all of this through freaking ChatGPT. Then I go and watch, and some of the movies haven't been great, some of the shows haven't been great, but some really are, and it's made it way more entertaining for me, because I understand the scientific theories that they're basing their storytelling on, and I think that's fantastic. But do I expect most people to know about that? Of course not. But I was able to learn it, and learn it quickly, through AI, right? You can go back to some of those earlier episodes. I think one of the things I asked on an episode was to explain multiverse theory to me as if I'm a five-year-old. It did. So there are so many opportunities here. I hope that one explains where I'm coming from.
Speaker 1:And I will say, to agree with the concerns here: if you look at how our legislature works in this country, they don't get technology. They don't. I mean, we're still trying to figure out how to legislate social media, and that ship has sailed. There are so many red flags that should have been caught early on. Of course, now we're all concerned about TikTok because it's China, but Facebook and Instagram have violated our privacy equally, if not more, right? But I digress with that. And I also will say the TikTok algorithm has crashed. It is very bad now.
Speaker 1:But my point is, if I have a concern when it comes to this, it's making sure that AI remains safe, not just for us, but for the entire world. I'm not sure if we'll get to that in a safe and timely way. It's really on the developers, who, in my estimation, would by no means have any ulterior motives. Of course, they're not going to want to be supplanted by AI either. But yeah, that's my hope: as this moves forward, as it develops, we can stay on top of it and we can make sure it remains safe and effective and useful for all of us. But, as I've seen already with advancements in science, medicine and the sharing of beneficial information for the majority of the public, I'm very impressed, so I hope you guys are too. I hope that was a beneficial video for you to watch, and I hope that my analysis added something to it.
Speaker 1:I thought that was a great interview and, yeah, always glad to come on here with you guys. Thanks for rocking with me even though I'm wearing a Cowboys hat. I know that offends so many of you, and I very much look forward to next week for that bye week to be over. So look, three things. Number one: leave us a rating and review wherever you're listening, wherever you're watching. That really helps us grow. And number two: subscribe wherever you're listening, wherever you're watching.
Speaker 1:We go live every Thursday, except for this week, we had to make some adjustments. And number three, check out our shop. It's linked wherever you're listening, wherever you're watching, and we got some really cool merch over there that I think you guys are going to dig, yeah, so thank you guys so much. Always appreciate you all jumping on here. Aaron, thanks for the comment, bro, glad to see you, man, and I miss you. I hope you all are doing well and, yeah, always appreciate you guys. We'll be back next week, the duo of us, and I look forward to seeing you then. Thank you guys so much for catching up with us and we'll catch up with you next week.