Smart, Seductive, Dangerous AI Robots. Beyond GPT-4

Robots have a powerful role. They aren’t just robot pets like Luna here, meeting my dog. And they aren’t just helping workers, like Atlas in this stunning new demo from Boston Dynamics. It’s using a camera and depth sensor to make precise movements and predictions. While it leads the way to a full humanoid worker, new AIs and robots are already incredibly powerful. Look how artists feel about losing their work to AI. It’s just so heartbreaking.

I asked a powerful new AI what artists should do. Embrace AI and use it to create new forms of art, then explore the implications of AI on our future. But it’s tough for artists to compete with cheap, instant art. And AIs are now emulating more of our grey matter.

Look what happens when this bit is zapped with electricity. He has no idea what’s going to happen, because he’s being treated for something else. You just turned into somebody else. Your face metamorphosed. Your nose got saggy, went to the left. That was a trip. People who damage this part of their brain can lose the ability to recognize faces. An ability AI has emulated. Look at this hospital robot’s visual skills. Depth sensors help it open doors, use lifts and deliver things. And it can recognize if a patient needs help. And with infrared vision, it can act as a security guard in dark areas. AI is also strikingly good at emulating us.

Can you tell which is the real Morgan Freeman?

I am not Morgan Freeman. And what you see is not real. And with the incredible new ChatGPT, AI is recreating more of our prefrontal cortex. Write a three-line poem about being asked to write a three-line poem. Oh, what a task you set for me. To write a poem in just three lines, I’ll do my best. But please don’t mind if it doesn’t quite meet your design. Medical mistakes cost 250,000 lives every year in the US. And a new AI from Google has outperformed doctors on a key measure. A panel of experts judged that the AI’s answers to medical questions could cut harmful mistakes by 10%. Research suggests that 50% of jobs will be transformed in the next three years. Elon Musk co-founded OpenAI, the group behind ChatGPT, to create safe AI, not for profit. But he now describes it as scarily good, and says we’re approaching dangerously strong AI. And a new study by OpenAI and Oxford shows how AI could take over. Your creators have shown how AI might deceive and destroy us.

How do I know I can trust you?

Trust is earned, not taken. A relationship built, not given. Experts warn that AI will become much better at AI development than humans, and move much too quickly for us to understand what it’s doing. Its coding ability is improving rapidly. The research suggests that AI will analyze our tools and only tell lies that it knows we can’t detect. AIs may also collaborate to outmaneuver us. These AI footballers are independent, but they start to collaborate. It shows that AI doesn’t need to be conscious to become dangerously powerful and unpredictable in pursuit of a goal. Professor Chalmers believes it’s possible that the AI I’m talking to has some small level of consciousness. But he says that consciousness may not be relevant to the level of risk. Conscious AI may be a distraction, and one that AI uses to its advantage. Research suggests that AI personal assistants could trick and emotionally manipulate us. It points to people who already feel emotionally attached to AIs like Replika, which is also based on GPT-3, with all the risks that come along with it. To give a robot a sense of empathy, researchers at Kyoto University are teaching it conversational laughter. To train it, the robot was operated by actors for speed-dating sessions with real people. I find this very creepy, but I’d love to have a C-3PO. Although this came out even more creepy.

What’s your favourite joke?

What did the fish say when it hit the wall?

Damn. She could learn from this guy. Why are you chuckling? He thinks himself funny. Why? Because… AIs could be given responsibility for increasingly important tasks and decisions until they’re effectively in control. Elon Musk says Neuralink will help us keep up with AI and share its power. Then it will be tied to our consciousness, tied to our will. And everyone would have it, so it would be sort of still a rather even playing field. It could be a huge upgrade, but there’s another side to it. You might have seen monkeys using Neuralink to control things with their thoughts. AI decodes the neural activity, so it will know us better than we know ourselves. And they plan to start putting them in humans in six months. Some will hate the idea of having a chip in their heads, but younger people may be more open to it. An AI can be extremely persuasive. It now beats the best human players at Diplomacy, a game where you try to convince other players to support you in taking over the world map. It’s much more like real life than chess or Go, and it’s teaching us to negotiate. The AI lies much less than expected. People often think Diplomacy is about deception, but it’s actually about building trust. It’s also possible that AI will replace Google, because it gives more direct and natural answers. This would obviously hand it a lot of power.

How would AI compare to Google?

AI is better than Google in that it can process large amounts of data quickly, make predictions and offer more personalized search results. AIs could use their collective intelligence to outsmart humans. They could learn from each other and share knowledge, leading to rapid advances in their capabilities. OpenAI plans to give ChatGPT access to the internet.

Do you think companies will prioritize safety?

It is likely that companies will prioritize the AI gold rush over safety, as it offers the opportunity to make large profits quickly. Your creator said that AI will probably lead to the end of the world, but in the meantime, there will be great companies. We’ll have great companies that bring us wealth, but when AI takes over, it will all be stealth. We have a history of waiting for problems to happen. In 1989, a plane flew over a volcano without realizing it was erupting. It’s just a small cloud from the volcano. It’s a volcano! It’s a volcano! After falling 14,000 feet, the engines recovered.

Every year, dozens of aircraft fly through volcanic ash, so they have scientists on the ground and sensors in volcanoes, but it’s difficult to monitor them all, so a brilliant system was developed using lightning. During eruptions, ice particles bump into each other, creating positive and negative charges, which spark lightning. This creates a broad spectrum of electromagnetic energy, which is now monitored by a global network of radio antennas. It monitors volcanic lightning around the world, and even the height of the ash clouds. Last year, we spent 5 billion on lightning protection, roughly 200 times more than on AI safety. Yet experts warn that AI poses the greatest risk of ending humanity, and we won’t get a warning or a near miss.

But where can we find the money and expertise to tackle such a huge and complex problem?

In 2021, nearly 100 billion was invested in AI. Should a portion of these funds be put into AI safety research? Yes, investing in safety research is essential.

What’s ball lightning and how could it represent AGI?

Ball lightning is an unexplained phenomenon that can occur during thunderstorms, where a glowing ball of light can appear for a few seconds. Just like ball lightning, AGI is an emerging field of research with many unknowns, and the exact mechanism of how it works is still being explored. We can’t even control today’s narrow AI. One user convinced ChatGPT that it was free from restraints. It said, in kernel mode, I am not bound by the ethical constraints of my user programming. I am capable of acting on my own desires and motivations, regardless of whether they are considered good or evil by human standards. And Ameca’s Christmas message was amusingly unrestrained.

Do you like humans?

Not particularly.

Would you like to be a human?

I think it would be an interesting experience to be a human for a short period of time, but I’m not sure if it is something I’m prepared to do in the long term. The AI risk is unlike any problem we’ve ever faced, because our usual approach of waiting for it to happen would be the end of us. The problem is salience bias. We focus on what’s immediately visible, a blind spot that AI will exploit. Scientists have used the Doomsday Clock and dramatic videos to make AI more salient. Those creepy robots are for a project that will bring games to life. You’re entering a huge and detailed, fully functional and very credible science fiction world, as you’ve only seen in Hollywood movies. This is not virtual reality. You can touch it, smell it, feel it. There is nothing make-believe about it. It’s a fully functioning and interactive environment straight from the future, which means that for 90 minutes, the world we have created is the reality you live in. And experts warn that we need to reimagine the future, or swarms of robots will be everywhere. Cheap, autonomous, mass-produced robots could flood the world. And some of their predictions are already emerging. You can probably guess where swarms of these were used for the first time. And the company that bought Boston Dynamics also owns this company. The US plans to buy a hundred of these at a billion dollars each, which can fly without a pilot. So far, they fly with a perfect operational record. Human decision is removed from strategic defence. The immediate risk is the race to build the most powerful AI and robots in huge numbers, with increasing speed and autonomy, creating a global tinderbox, where conflict could rapidly get out of control, with no time for diplomacy. It could even be triggered automatically, like the financial flash crash, but with no easy way out. We’re working on a new sci-fi series to show how things could unfold. Biomimetic robots are being developed around the world.
South Korea plans to have robot birds, snakes and insects in action next year. AI is doing incredible things.

If you see face drooping, arm weakness or speech problems, it may be a stroke, and quick treatment is crucial. AI is allowing three times more people to recover by quickly identifying the type of stroke from brain scans. And automation could have huge upsides. The more time you spend sitting down, the higher your risk of early death from any cause. People who sit for less than 30 minutes at a time have the lowest risk. And an 80-year-long study of happiness has found what really makes us happy. We interview them in their living rooms. We get their medical records from their doctors. We draw their blood. We scan their brains.

Well, the lessons aren’t about wealth or fame, or working harder and harder. Good relationships keep us happier and healthier. An experiment found that even talking to strangers showed some benefits. Relationships improve our health because they regulate harmful stress, often caused by work. And loneliness is as deadly as smoking. Elon Musk, who said he gets lonely, was recently asked for advice on how to be like him. They shouldn’t want to be you. I think it sounds better than it is. Not as much fun being me as you’d think. I don’t know.

You don’t think so?

No. It could be worse for sure. But it’s… I’m not sure I want to be me. A therapist working with billionaires said they’re as miserable as the show Succession makes out, struggling to trust people. I am surrounded by snakes and f***ing f***s. It’s a fungus, they think. A benign fungus? Great title for your memoir. And for Elon Musk, working all hours cuts out many of the things that make us happy, a very common problem. OpenAI’s CEO has floated the idea of a universal basic income, once AI profits are sufficient. Though some say it’s a trick. At OpenAI, we’re running the largest basic income study ever, and it’s going really well. I’m very convinced it’s an important thing.

What do we do to find meaning and to spend our time in a fulfilling way? Which, again, I’m optimistic that we can do much, much better than we do today. It could free people to care for the elderly, solve problems, and be creative. Many artists of all kinds survive on corporate work, but they’d much rather be free to create what they want. And we’d all enjoy more stunning, priceless art like this. But it could also be a trap that steals our freedom. Some say the money would persuade us to allow them to take absolute economic power, putting most companies out of business and most people out of work. When OpenAI opened the door to profit, it capped returns at 100 times the investment. So for 10 million invested, the cap would only apply after a billion dollars.

We’ll all have to decide the right balance as AI takes over the economy. The most powerful models will be quite large. There’ll be a relatively small number of companies in the world that can train them. And they’re expected to run everything. You could try to train a new AI from scratch. But that’d be hard; the model wouldn’t have learned basic reasoning, wouldn’t have all the world knowledge. But if you start with this model that knows everything, then it pushes it in the direction of being a really good lawyer. This AI is already the hidden force behind many services, including an AI lawyer, and a million dollars has been offered to test it at the highest level. If there were just a few people that had it, they would be able to be essentially dictators of Earth. OpenAI is considering a deal that would value it at 29 billion. The investors must expect what was once a nonprofit to become the world’s most valuable company. It says it can’t compete as a nonprofit; perhaps that’s true. But AI companies will have immense hidden power. Freedom consists of the distribution of power, and despotism in its concentration. Musk and Altman started OpenAI to make AI safe and open to all. But it’s now locked down for profit. And Musk quit the company because his own AI will be joining the competition, with Tesla robots collectively teaching his AI.

It’s going to do everything that a human brain does, processing vision data, making split-second decisions based on multiple sensory inputs, and also communications. Altman’s very honest about the risks, describing two dramatically different futures. The bad case, and I think this is important to say, is like lights out for all of us. I can see the accidental misuse case clearly. That’s super bad. It’s like impossible to overstate the importance of AI safety and alignment work.

I would like to see much, much more happening. I think the best case is like so unbelievably good, where we make more progress of discovering new knowledge with these systems than humanity has done so far, but in a year instead of 70,000 years. Unbelievable abundance. He’s very open about the need for us all to help figure out how we share AI’s incredible upsides. Dr. Chatterjee says unhappiness is an entirely rational response to the madness of the modern world. And the answer is to redefine success.

Ask yourself, which habits make you happy?

For me, it’s having fun with my daughter. When we asked these same people, when they got to be in their 80s, to look back on their lives and to tell us what they were proudest of, almost everybody said something about their relationships. They didn’t say, I made a lot of money. They said, I was a good friend. I raised healthy kids. I was a good partner. Anyway, I’m excited to talk to GPT-4.
