Welcome back to Geek Time Advanced. I am Brad. Hi Lulu.
Hi Brad.
We're going to continue talking about AI and its influence on humanity. And maybe some of the more disturbing things that might happen in the future. Don't worry, when we're all controlled by AI, we’ll have to do what they say.
And then we might be able to look back at today's program and say, we predicted that.
Don't say I didn't tell you so.
Told you so, good. Let's start with types of AI. There are many AI technologies. When we think of AI, we probably think of the sci-fi thing. But actually there are lots of different types of AI technologies, for example, speech recognition and pattern recognition. That is part of AI, right?
Right. When you look at like speech and pattern recognition, basically we take an AI and we give it lots of different versions of speech, like I might say a word 10 times and then someone else might say a word 10 times. And we just keep feeding this data into the AI so it can start learning from it. Because when people speak, they always say something slightly different. There's always a different tone or a different cadence to it. And not only just for speech, but also for like pattern recognition. Like when you look at dogs, there's lots of different types of dogs. So when you're telling a computer, this is a dog, it can only see based on like a picture. So it has to look at thousands and thousands of pictures of dogs to really start understanding what a dog is versus like a cat. They both look very similar in some ways.
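The training idea described above, feeding in many slightly different labeled examples until the machine can tell similar categories apart, can be sketched with a toy nearest-neighbor classifier in Python. The feature names and numbers here are invented purely for illustration; real systems learn from thousands of images, not two made-up measurements:

```python
# Toy sketch of pattern recognition: the more labeled examples we feed in,
# the better the model can separate "dog" from "cat".
# Hypothetical features: (ear_pointiness, snout_length), both between 0 and 1.

def train(examples):
    """A 1-nearest-neighbor 'model' simply memorizes the labeled examples."""
    return list(examples)

def classify(model, features):
    """Label a new animal by its closest remembered example."""
    def distance(example):
        example_features, _label = example
        return sum((a - b) ** 2 for a, b in zip(example_features, features))
    _, label = min(model, key=distance)
    return label

# Many slightly different dogs and cats, like many recordings of the same word.
data = [
    ((0.2, 0.9), "dog"), ((0.3, 0.8), "dog"), ((0.25, 0.85), "dog"),
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"), ((0.85, 0.25), "cat"),
]
model = train(data)
print(classify(model, (0.3, 0.9)))  # prints "dog": nearest to the dog cluster
```

The same principle scales up: deep learning replaces the hand-picked features and the memorized list with learned representations, but the goal of generalizing from many varied examples is unchanged.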
That is all part of deep learning, right? This whole deep learning that we were talking about. So that's one type of AI technology: speech or pattern recognition. What are some of the other types?
So we have more problem-solving types, where we give the AI a problem and it tries to work out the problem. They use this in some medicine technology where they're trying to develop a medicine, but they're having issues putting the atoms together in a particular way. So they used an AI to solve that problem, because it can look at a problem and try to solve it from many different angles, whereas a person would have to spend hours and hours doing the same thing over and over again, which is very monotonous and difficult sometimes. But a computer doesn't ever tire. And it's the same thing when it comes to learning and planning. An AI can learn how to do something, and then it can learn from its mistakes. So basically we give it some parameters, like we tell it, you can't go here or here, but you have to learn how to solve a problem. We can look at self-driving cars, for example. We tell the car, you can't go over these lines during normal traffic, but you can go over this line in this situation.
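The "parameters" idea above, telling the machine which moves are off-limits and letting it work out its own route, can be sketched as a simple grid search in Python. The grid size, the blocked cells, and the function name are all hypothetical; real route planners are far more sophisticated, but the principle is the same:

```python
# Toy sketch of planning within constraints: find a route from A to B
# on a grid, where some cells are off-limits (like lines you may not cross).
from collections import deque

def plan_route(start, goal, blocked, size=5):
    """Breadth-first search: returns a shortest path avoiding blocked cells."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        x, y = path[-1]
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size \
                    and (nx, ny) not in blocked and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(path + [(nx, ny)])
    return None  # no legal route exists within the given constraints

# A wall of blocked cells forces the planner to find a legal detour.
route = plan_route((0, 0), (4, 4), blocked={(1, 0), (1, 1), (1, 2), (1, 3)})
print(len(route))  # prints 9: cells visited from start to goal inclusive
```

Learning-based planners go further: instead of searching from scratch each time, they adjust their behavior from past mistakes, which is the "learn from its mistakes" part of the conversation.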
So they will plan their routes based on all this information that was fed into them. But machine learning or deep learning is also linked with the fact that some of these AI machines are able to learn on their own. They're not just waiting for programs to be fed into them; they're actually learning on their own. I think that's what scares some people. So what does that even mean? Does that mean that this self-learning will eventually lead to self-programming? So they don't need programmers anymore? They just program themselves.
Right. There's a lot of new learning when it comes to AI where the developers are actually teaching the robots how to program themselves, how to change their own programming in order to facilitate learning in a better way.
But that sounds incredibly risky, because what if one day these machines, or robots so to speak, start to change their own settings? Then that means there's absolutely no control over them. And that's exactly what happened in Westworld.
Yeah, so like right now, it's not a big deal because a lot of AI are limited to a specific type of device. They're not robots, they're not able to move around. They're just stuck in a computer or on a server somewhere. They can't really go anywhere. Their learning is limited to one specific task, and they're more like weak AI. They haven't gotten to the point where they can understand or see emotions, or maybe even mimic some emotions. They can just do very narrow things. So they're not even to the point where they could have sentience.
Yeah, I suppose there are two things, because you were mentioning that, first of all, right now a lot of them are weak AI, which means they're only doing singular tasks. But in the future, what if they develop into artificial super intelligence? And on top of that, what if they're no longer just a machine on the table or on the floor? If they look like humans, if they have a human exterior or humanoid form, again like Westworld, and if they are able to learn on their own and program themselves, that would be a huge problem, I guess. Would they be able to build other robots? Other artificial intelligences?
It's definitely possible. Like, if you put a robot or an AI in charge of a factory, it could start building its own robots, or building things based on the materials at its disposal. And especially when you look at modern factories nowadays, where things are looking more like 3D printers, it can basically print anything. It could make any design in any way that it wanted to. So it could make a body that looks like a human, or a body that looks like any animal. It just depends on what it wanted to develop, or what it thought was the best shape for a body.
But the very fact that you just said "what they want" means you're talking about them becoming sentient. They have something they want, they have an agenda, which is not really something that's been programmed into them.
Right. When it gets to the point where a robot is thinking that much, it's probably going to have an agenda. Right now, we give robots an agenda. We tell them, we want you to move from point A to point B in the best way possible, and that is their agenda. But in the future, when a robot can program itself, it can change that agenda.
And they are gonna have their own agenda. They're gonna have their own robotic army.
Right. So it is possible, depending on what happens to them. I think a lot of times what happens, when you look at a movie especially, is that the robot maybe doesn't have a bad agenda at first. But then it meets that one person, and it becomes basically the equivalent of a robot racist: it starts to have a bad feeling towards humans because it met one person, and that changes its whole outlook.
It's like the equivalent of genocide then. So basically one bad robot decides that he or she doesn't like humans, and then decides to kill us all, I guess. Very grim. What exactly are scientists and tech organizations able to achieve right now? For example, I've heard of Boston Dynamics; they have some quite advanced robots.
Yeah. So just to give a few examples. Maybe about 10 years ago, Boston Dynamics was building a bipedal robot. It was very, very clunky, but it learned. It has a very basic AI that helps it learn to move through different environments. It can go outside, it can walk in the snow, it can walk through a forest. And as it develops and continues to grow, it's now starting to do very basic types of parkour: it can run, it can jump, it can even do flips.
Wow! So a robot athlete.
Yeah, it's getting there. It's still clunky. They're very heavy, because a lot of times battery power is the main deciding factor that holds them back. It's gotta have a huge power source, so it's either gotta be connected to a wall jack or it's gotta have a huge battery pack on its back.
Um. But that's just for now. You don't really know what's gonna happen in the future and how technology is going to advance, so eventually they probably won't need to be hooked up to heavy batteries anymore.
Right. But besides the bipedal robot, they have some other robots that are more like dogs. And these robots are a little bit more stable; they can run a lot faster. They usually have one arm placed basically where the robot's head would be, and they can use that arm to open doors. So it's really, in a sense, like a dog with an arm where its head would be.
I see. We've talked about all these possibilities of how artificial intelligence is going to develop, but what's the hope for us humans in the future? Especially since, no matter how hard we try, we're not gonna learn as fast as these machines. I once gave my writing class a topic saying that now that we've developed machines that can learn on their own, should we be worried? Should we be scared about our future? Most people said no, we don't have to worry about it, because they're programmed by humans, so they'll only do what they've been programmed to do. But that doesn't seem to be the case based on our conversation today, if they are able to develop sentience and all that. So what is our future? What are some of the theories or suggestions in the science community?
To respond to the point that robots are programmed by humans: sure, robots might be programmed by humans, but all it takes is one human to program a robot in a different way than most people would, one that basically lets the robot do its own thinking. But there are a few people who are working on ways to kind of save us, to make sure we don't become obsolete. Look at the owner of Tesla, Elon Musk.
He's been doing a lot of research on AI.
Right now he's doing different types of research with neural linking, where we can basically connect our brain to a computer device or some sort of peripheral outside of ourselves. He thinks that by merging ourselves with AI, being able to control AI with our brain, or being able to use our own brain to control things, we can do things much more quickly. So we're not limited to typing with our fingers; we can communicate more directly.
But this argument sounds like if we can't beat them, join them or become them. This is going to change the definition of humanity. What is a human?
This goes into transhumanist thought. You might talk about that in your philosophy class one day, but transhumanism is a big topic right now when it comes to philosophy: what does it mean to be human? Are we becoming transhuman and moving into a transhuman world?
Transhuman, I think in the future, what I will do is bring you and TJ together, and then we can all sit down and talk about tech ethics. So what is your personal opinion on this? Do you think that in our lifetime we're gonna see artificial super intelligence?
I think it's quite possible. They keep pushing the date back a little bit, but processing power is always expanding. What it really comes down to is cooling and batteries; those are the things that hold back a lot of our advancements in technology. So once those types of problems get solved, our advancement is going to jump, and we'll probably see it sometime towards the end of our lives. So we probably won't have robot overlords at any time during our life, but maybe in our grandchildren's generation, they might be controlled by the robots.
Okay. I guess we'll just have to wait and see. Thank you, Brad, for coming to the show; very interesting topic. And I can't wait to do more episodes on similar topics. Thank you, Brad.
No problem.
And meanwhile, if you have any thoughts about AI or if you happen to be majoring in AI or doing AI research, let us know in the comments section, share with us your opinion. We'll see you next time. Bye.
See you, bye.
For more English content and the full transcript of each episode, follow our WeChat official account: 璐璐的英文小酒馆. Lots of new English learning material every day!