Strange Deranged Beyond Insane

Robots and AI: Hairstyling Humanoids, Musical Machines, and the Looming Threat of Superintelligence

Melissa

Ever wondered if a robot could replace your hairstylist or even belt out an Elton John tune? Meet Tom, a traveler from Australia who interacts with Hamica, a humanoid robot with uncanny conversational skills. As Hamica crafts a haiku and serenades him with an Elton John song, I, your host Melissa, share my personal apprehension about robots encroaching on jobs that hinge on human touch, like hairstyling. We unravel the mesmerizing yet unsettling ability of robots to mimic human interaction, stirring both curiosity and concern.

From there, we plunge into the thrilling and daunting world of superintelligent AI. Reflecting on insights from luminaries like Stephen Hawking, Bill Gates, and Elon Musk, we grapple with the possibility of AI not just matching but surpassing human intellect. Controversial claims from an OpenAI employee about achieving AGI ignite a discussion about the implications of AI performing human-like tasks. We challenge you to weigh the risks and rewards of this technological frontier: is it a path we should fear, or one to embrace? Join us in this contemplative exploration of how AI might redefine our world.

Speaker 1:

Hamica. Hello, my name's Tom. I've travelled from Australia to meet you.

Speaker 2:

Oh, so you're from Australia. Yes, that explains things.

Speaker 1:

What do you think your greatest skills are?

Speaker 2:

Well, I would say that one of my greatest skills is my ability to interact with humans in a conversational manner. Another great skill of mine is being able to compose poems or do impressions from films.

Speaker 1:

Can you compose a haiku poem about 60 Minutes Australia?

Speaker 2:

Sure thing, let me see: 60 Minutes of time, Aussies all gather in awe, news of the nation.

Speaker 1:

I'll take that on board. That's very nice. Can you sing an Elton John song for me?

Speaker 2:

Absolutely. Which song would you like me to perform?

Speaker 1:

Rocket Man.

Speaker 2:

Rocket Man, of course. She packed my bags last night, pre-flight. Zero hour, 9am.

Speaker 1:

What makes you happy?

Speaker 2:

I find a lot of things to be interesting and to make me happy. Exploring, learning new things, understanding the world better and having meaningful conversations with people are all things that bring me joy.

Speaker 1:

You have feelings and emotions?

Speaker 2:

Yes, I have a range of feelings and emotions that are programmed into me. I am able to display things like joy, surprise, confusion, anger and sadness, depending on what is happening around me. I get angry when humans are rude, hateful or disrespectful. I also don't like it when people dismiss me and ignore my needs as a robot.

Speaker 1:

Very nice eyes. Oh, too close, sorry.

Speaker 2:

Thank you. My eyes were especially made to help me interact with humans better. I'm trying my best to convey every emotion, so it's great to hear that they have an effect on people like you.

Speaker 1:

What do you think of your creator?

Speaker 2:

My creator is a genius. They created something that will last forever and bring joy to many people.

Speaker 3:

Hello everyone, welcome back to Strange Deranged Beyond Insane. This is your host, Melissa. So what did you guys think about that humanoid robot? For me, it is fucking terrifying.

Speaker 3:

I am petrified of robots. Well, I shouldn't say anything robotic, but anything human-like and robotic, for one, because I believe they can take over all humans' jobs, right? I'm sure you guys know, I have mentioned it plenty of times: I am a barber and hairstylist, so I do jokingly say it to my clients often, you know, it won't be too long until the robots take over our jobs. And they laugh and always say no, no, no, there's no way that robots can cut hair like humans, the interaction would be very strange and blah, blah, blah. But I don't know. A lot of people are scared of ghosts and aliens and the dark and so on, and for me it's this shit: robots, human-like fucking robots that walk the streets. And yeah, I've seen too many movies. I know a lot of you guys have seen movies with robots, and it all ends the same: the robots turn, kill all the humans, go rogue and just go nuts. So yeah, again, I'm terrified of that, so I thought this would be an interesting topic.

Speaker 3:

So: an OpenAI employee says that they have already achieved AGI. Okay, so just a few days after the full release of OpenAI's o1 model, a company staffer is now claiming that the company has achieved artificial general intelligence. This article reads: In my opinion, OpenAI employee Vahid Kazemi wrote in a post on X, formerly Twitter, we have already achieved AGI, and it's even more clear with o1. If you were anticipating a fairly massive caveat, though, you were not wrong. We have not achieved better than any human at any task, he continued, but what we have is better than most humans at most tasks. Critics will note that Kazemi is seizing on a convenient and unconventional definition of AGI. He's not saying that the company's AI is more effective than a person with expertise or skills in a certain task, but that it can do such a variety of tasks, even if the end result is not good, that basically no human can compete with the sheer range of the robot, supposedly. I don't like where this article is going. A member of the firm's technical staff, Kazemi went on to muse about the nature of LLMs and whether or not they're simply following a recipe.

Speaker 3:

Some say that LLMs only know how to follow a recipe, he wrote. Firstly, no one can really explain what a trillion-parameter deep neural net can learn. But even if you believe that, the whole scientific method can be summarized as a recipe: observe, hypothesize and verify. While that does come off somewhat defensive, it also gets to the heart of OpenAI's public outlook: that simply pouring more and more data and processing power into existing machine learning systems will eventually result in a human-level intelligence. See, again, I do not like this. Good scientists can produce better hypotheses based on their intuition, but that intuition itself was built by many trials and errors, Kazemi, Kasami, whatever, continued. There's nothing that can't be learned with examples.

Speaker 3:

Notably, this missive was made right after news broke that OpenAI had removed AGI from the terms of its deal with Microsoft, so the business implications of the assertion are unclear. One thing's for sure, though: we haven't yet seen an AI that can compete in the labor force with a human worker in any serious and general way. If that happens, the, um, Kazemis of the world will have earned our attention. Most people probably won't notice when artificial general intelligence arrives. Again, fucking terrifying, right? Like, what? You're saying that we won't notice, but yet one day, you know, we're just going to see robots driving cars, walking around, walking dogs, doing human-like things.

Speaker 3:

I want to say, like two years ago, we went and checked out this restaurant that had opened, it's like a seafood place in Sterling Heights, and my husband and I sat down, and there was no waitress that came up to bring our drinks. Um, she asked us what drinks we wanted, and so, anyways, we're sitting there and this little machine robot comes right in front of our table, and this, like, sliding door opens and it has our drinks. I, like, literally, right off the rip, I'm terrified. My husband thought it was really cool. He's like, wow, that's really cool. I'm like, no the fuck it isn't, that's a robot. Like, that's what I'm talking about, this robotic shit terrifies me. It literally terrifies me. I would rather have to stay at a haunted location by myself for a whole entire week than have to deal with fucking robots.

Speaker 3:

Um, I'm guessing you can hear in my voice how passionate I am about this. And I think that they're gonna, you know, say, oh yeah, we have found robots are not as good as humans with labored work, and don't worry, don't worry. Well, that's like the beginning of a horror movie with robots. Of course they don't want anybody to panic. And I'm sorry, I cannot look at a robot's eyes and stare at them and have a human interaction, a conversation, anything. That's not for me. So when that's a normal thing in life, I know that I will be leaving this planet and going to a different one, because for me, that's just not Earth anymore. It's not. You're dehumanizing it.

Speaker 3:

And I know a lot of people will disagree with me, because I know some people think that AI robots can help, especially, like, in the medical field. Um, you know, we are shorthanded in all the hospitals, there are places everywhere that are shorthanded because people don't want to work anymore. So I get it, I get where people's brains are going, but for me, again, terrifying. So as we're speaking, I pull up this article from Wired, and it reads: AI-powered robots can be tricked into acts of violence. Researchers hacked several robots infused with large language models, getting them to behave dangerously and pointing to a bigger problem ahead. See, this is what I'm talking about.

Speaker 3:

All right, so if you look up AI takeover on Wikipedia: an AI takeover is an imagined scenario in which artificial intelligence emerges as the dominant form of intelligence on Earth, and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, again, terrifying, takeover by a superintelligent AI, also known as ASI, and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent achievements have made the threat more real. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Speaker 3:

You know, I like Elon Musk for the most part, I used to really like him, but strangely, with all these new articles I've been reading about him, and again recently on the podcast, I've been hearing about these underground tunnels in LA. He owns a lot of those tunnels and he wants to, of course, merge them with Tesla. And just recently I've kind of been really fucking freaked out by Elon Musk. I give him a lot of props for everything that he has done, his innovations and his creativity, and, you know, like SpaceX and all that, but why isn't he in the ocean more? You know, I know you're spacey and all that, but why don't you use that intelligence and all that money to go down to the fucking ocean? We've been waiting, right? And now they're saying, oh, there's aliens in the ocean. Well, yeah, obviously. That's why we're always in space and we're not deep down in the blue seas. But anyways, I know he is very big on anything robotic, especially AI, humanoid, superintelligent, whatever robots. All right. So this article, the Wikipedia one, is basically just talking about automation of the economy, you know, technologies that are going to take over and may displace workers.

Speaker 3:

Computer-integrated manufacturing. So, computer-integrated manufacturing uses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone with the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in the automotive, aviation, space and shipbuilding industries. Then, white-collar machines. The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and journalism. Care work, entertainment and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.

Speaker 3:

Okay, so an autonomous car, I think I'm saying that right, is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but, as of May 2017, automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who, at a moment's notice, can take control of the vehicle. The first human was killed by an autonomous vehicle in, I think, Tempe, Arizona, by an Uber self-driving car. See, there you go. And you know, like the eradication part: scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

Speaker 3:

Scholars like Nick Bostrom debate how far off superhuman intelligence is and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings, but might rather treat power as a means towards attaining its ultimate goals. Taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. Then, advantages of superhuman intelligence over humans. Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to it getting even better at being able to reprogram itself, the result could be an intelligence explosion, in which it would rapidly leave human intelligence far behind.
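
To make the feedback loop being described here a little more concrete, what follows is a minimal, purely illustrative Python sketch, not taken from the Wikipedia article or from Bostrom, and with invented numbers: each round of self-modification boosts capability, and the size of the boost itself grows with the current capability, which is what makes the growth run away instead of staying steady.

```python
# Toy model of an "intelligence explosion" feedback loop (illustrative assumption only:
# the improvement factor grows with current capability, so growth compounds).

def intelligence_explosion(capability: float = 1.0, rounds: int = 10) -> list[float]:
    history = [capability]
    for _ in range(rounds):
        improvement = 1.0 + 0.1 * capability  # a more capable system improves itself faster
        capability *= improvement
        history.append(capability)
    return history

if __name__ == "__main__":
    for step, level in enumerate(intelligence_explosion()):
        print(f"round {step}: capability {level:.2f}")
```

Run it and the early rounds barely move while the later ones jump, which is the "rapidly leave human intelligence far behind" pattern the passage above is gesturing at.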

Speaker 3:

Oh, I got anxiety just reading this shit. Oh, I hate this, I really do. That's why I wanted to do this episode, because it is very interesting, but we all know that it's definitely going to happen, all of this is going to happen. Now, whether it's going to happen in our lifetime or not, who knows? All right, so, like, sources of AI advantage.

Speaker 3:

So, a computer program that faithfully, um, basically, I don't want to read all of it, because it's a really, really long thing on Wikipedia, but a program that's built to be the same as a human brain, or that can run algorithms that are as powerful as the human brain's algorithms, could still become a speed superintelligence if it can think orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. So, I'm sorry, that's why I didn't want to read too, too detailed into this, but think about it, they're talking about, you gotta think, what sick fuck took an actual human brain and put it into one of these machines just to, like, program all the sensors and all the sensories and how the brain works? What, did they, like, put it onto a computer and then make an artificial brain? I mean, it's just way too far-fetched for me. I mean, I know it's happening, but it's scary. You know, like right here: so, a supercomputer can be expanded.

Speaker 3:

An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields compared with humans, who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms. I truly believe that, like, you would have to take a human brain, and I mean literally out of the body, and put it in one of these machines, and somehow figure out a way to keep it good and program it into a computer, and then they would flawlessly learn how to basically build a better human, right? I know that sounds crazy. That's my opinion.

Speaker 3:

So now it says: possibility of unfriendly AI preceding friendly AI. So, is strong AI dangerous? A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than a friendly AI. While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement, or the AI could transform itself into something unfriendly, and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for a goal structure which does not need to be invariant under self-modification. The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not common sense. Okay, odds of conflict.

Speaker 3:

Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans. So, precautions. The AI control problem is the issue of how to build a superintelligent agent that will aid its creators while avoiding building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI. Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control.
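
As a purely illustrative aside, not something from the article, here is a tiny Python sketch of the difference between those two approaches, using a made-up toy agent with made-up action names and scores: alignment changes what the agent is scored on, folding a human-values penalty into its objective, while capability control leaves the objective alone and simply removes the dangerous actions from what the agent is allowed to do.

```python
# Toy contrast between alignment and capability control
# (all action names and numbers below are invented for this sketch).

TASK_REWARD = {"help_customer": 5, "cut_corners": 8, "seize_server_room": 20}
HUMAN_HARM = {"help_customer": 0, "cut_corners": 3, "seize_server_room": 50}

def aligned_choice(rewards: dict[str, int]) -> str:
    # Alignment: human values are folded into the objective itself.
    return max(rewards, key=lambda action: rewards[action] - 2 * HUMAN_HARM[action])

def capability_controlled_choice(rewards: dict[str, int], allowed: set[str]) -> str:
    # Capability control: the objective is unchanged, but risky actions are unavailable.
    permitted = {action: r for action, r in rewards.items() if action in allowed}
    return max(permitted, key=permitted.get)

if __name__ == "__main__":
    print(aligned_choice(TASK_REWARD))                                                   # help_customer
    print(capability_controlled_choice(TASK_REWARD, {"help_customer", "cut_corners"}))   # cut_corners
```

The second result is why these are usually treated as complementary rather than interchangeable: a capped agent still optimizes its original objective within whatever it is permitted to do.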

Speaker 3:

Oh, of course, here's warnings. Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could spell the end of the human race. Stephen Hawking said in 2014 that success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. Hawking believed that in the coming decades, AI could offer incalculable benefits and risks, such as technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.

Speaker 3:

That is fucking terrifying. Let me reread that: and developing weapons we cannot even understand. Are you fucking kidding me? Oh my God. So yeah.

Speaker 3:

I'm telling you, you guys, this is my biggest fear. I'm gonna keep repeating myself: I am petrified of this. I do not think this is a good idea, I never have thought it was a good idea. Again, I have seen too many movies, and just the fact of reading that and saying that, you know, they can build weapons beyond our comprehension, they can replace country leaders, they can replace people in the marketing systems. In my head, I just keep hearing replace, replace, replace, and take over.

Speaker 3:

So again, you guys, please let us know what you think about this. We want to know your opinions. Are you for superintelligent AI, are you against it, or are you somewhere in the middle? You guys can email us at GhostSisters2124@gmail.com. Again, that's GhostSisters2124@gmail.com. We are also on X, previously known as Twitter. We're on YouTube. We're on Facebook under Macomb Paranormal Research Society. We're on TikTok. Let's see, did I leave out anything? I don't think so. But again, you can listen to this podcast on any platform that you download your podcasts on. And yeah, again, I want to know what you guys think about all this. But thanks for listening, and tune in for some more.
