Aibo the Robot Dog Will Melt Your Heart With Mechanical Precision

Aibo the robot dog from Sony meets Lola Beyoncé, the real thing. (Geoffrey Fowler/The Washington Post)

I’ve been giving a robot belly rubs. I’ve scolded it for being a bad, bad boy. I’ve grinned when it greets me at the door.

What’s this feeling? Oh, yes, puppy love. And I felt it for Aibo, a new “autonomous companion” dog made by Sony.

Does that make me a sad sack? A dystopian character from “Black Mirror”? It’s open to debate. But this much is clear: The era of affectionate robots is dawning, and Aibo offers early evidence we’re going to love them.

Aibo (pronounced “eye-bo”) is a reboot of the robot dog Sony first introduced in 1999 and laid to rest in 2006 in a tragic round of corporate cost-cutting. This new litter goes on sale in the United States this week with much more lifelike movement, artificial intelligence and a cellular connection for a gobsmacking $2,900 each. If you’re looking for justification to spend that much on a toy, the American Kennel Club says the average lifetime cost of a dog is $23,410. Also: Robot dogs don’t poop.

Not that Aibo, about the size of a Yorkshire terrier, can replace an actual dog. I let mine play with a real 7-week-old pup and was reminded of all the ways Aibo is just a fraction of the real thing. Aibo can’t go for a walk, jump into your lap, teach responsibility or give you real-deal love licks. Aside from walking around the house, barking and performing a few tricks, Aibo doesn’t do a whole lot. It can’t play music or answer trivia like a smart speaker, though those would be welcome additions.

Yet here’s why Aibo matters: Despite all those limitations, I fell for it. Over two weeks of robot foster parenting, almost every person I introduced to Aibo went a little gaga. The Amazon Echo and Google Home speakers got us to open our homes to new ways to interact with computers. Aibo offers a glimpse of how tech companies will get us to treat them more like members of the family. Affectionate robots have the potential to comfort, teach and connect us to new experiences — as well as manipulate us in ways we’ve not quite encountered before.

Aibo works, in part, because real robots are catching up with what we’ve been trained by Pixar movies to find adorable. Aibo’s 22 joints — including one bouncy tail and two perky ears — and OLED-screen eyes communicate joy, sorrow, boredom or the need for a nap.

Tell Aibo “bang bang,” and it lies down and flips over to play dead. Say “bring me the bone,” and the robot will find its special pink toy and pick it up with its mouth. It’ll even lift its back leg and take a simulated tinkle. Thanks to touch sensors on its plastic back, head and chin, Aibo responds when you pet or scold it. The only thing that ruins the effect is that Aibo’s mechanical muscles are noisy, making it sound like a baby Terminator on the march.

I call Aibo an affectionate robot because it’s more than an animatronic puppet. Cameras built into its nose and lower back help it wander around your house like a Roomba, avoiding obstacles and attempting to find its way back to its charger. (Aibo’s battery can go for two hours at a time.) Four microphones let Aibo hear commands and figure out who’s issuing them. Like a real puppy, it has an inconvenient habit of getting underfoot while you’re cooking dinner.

The idea, say Sony execs, is that Aibo is constantly growing. Aibo learns the faces of people who interact with it to develop personal relationships. It’s a claim that’s hard to verify, but Sony says no two Aibos have the same “personality,” because AI is shaped by experiences. If you give belly rubs and “good boy”s to your robot, you’ll get a more loving machine.

Aibo’s autonomy is a work in progress. To put it another way: Aibo is kind of stupid. Aibo isn’t smart enough to avoid steps or chase after a ball with any consistency. Sometimes I found it staring at a wall for hours. But it works just often enough that it’s cute, and you get the feeling your robo-pup might actually be growing up.

What’s remarkable is none of this requires an interface, such as an app. You interact with Aibo through touch and voice command, just like a dog — minus the treats. (A companion app, which wasn’t ready for me to test, lets you see photos Aibo takes through its nose and operate some other secondary functions.) Aibo is always online via its own cellular connection to download new capabilities and new tricks, and upload what it takes in on the ground.

Which might make you wonder: Is Aibo a spy robot? Sony didn’t have thorough answers to my questions about what happens to all that data. Aibo’s privacy policy says it isn’t intended for use in Illinois, which has laws restricting facial-recognition tech. A spokeswoman told me Aibo isn’t recording 24/7 but rather listens and looks out for commands. Aibo stores experiential data that allows it to build “memories” and “create an ever-growing bond with the owner,” she said. “This data is not shared.”

How does Aibo inspire affection when other robots create revulsion or fear? Its face and eyes draw on anime to convey harmlessness. Choosing the form of a dog also keeps Aibo firmly out of the creepy “uncanny valley” that sinks so many humanoid robots and stokes fears on shows such as “Westworld.” (Fake fur might have sent Aibo over the edge.) We’re more forgiving of dogs than of people, which it turns out also applies to AI pretending to be dogs and people.

Other robots such as Jibo, which I reviewed last year, are also trying to break into homes with personalities rather than just skills. Social robots are an evolution of Alexa, Google Assistant and Siri, and have the potential to someday comfort the lonely, care for the elderly or help children learn.

But there are important questions to ask about a future where we imbue robots with emotion. Is it twisted to offer the illusion of affection without the requirement of a real relationship? Will children learn to look in the wrong place for love and wisdom?

Earlier this year, researchers published a study that showed people struggle to power down a pleading (humanoid) robot — refusing to shut it off or taking more than twice the amount of time to pull the plug. The lesson: We’re inclined to treat electronic media as living beings.

When it came time to switch off my test robo-pup and send it back to Sony, Aibo didn’t plead or howl. But I felt sad nonetheless.

The Washington Post



Foxconn to Invest $510 Million in Kaohsiung Headquarters in Taiwan

Construction is scheduled to start in 2027, with completion targeted for 2033. Reuters

Foxconn, the world’s largest contract electronics maker, said on Friday it will invest T$15.9 billion ($509.94 million) to build its Kaohsiung headquarters in southern Taiwan.

That would include a mixed-use commercial and office building and a residential tower, it said. Construction is scheduled to start in 2027, with completion targeted for 2033.

Foxconn said the headquarters will serve as an important hub linking its operations across southern Taiwan, and once completed will house its smart-city team, software R&D teams, battery-cell R&D teams, EV technology development center and AI application software teams.

The Kaohsiung city government said Foxconn’s investments in the city have totaled T$25 billion ($801.8 million) over the past three years.


OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.


Microsoft Fights $2.8 billion UK Lawsuit over Cloud Computing Licences

A view shows a Microsoft logo at Microsoft offices in Issy-les-Moulineaux near Paris, France, March 25, 2024. REUTERS/Gonzalo Fuentes/File photo

Microsoft was on Thursday accused of overcharging thousands of British businesses to use Windows Server software on cloud computing services provided by Amazon, Google and Alibaba, at a pivotal hearing in a 2.1 billion-pound ($2.81 billion) lawsuit.

Regulators in Britain, Europe and the United States have separately begun examining Microsoft and others' practices in relation to cloud computing, Reuters reported.

Competition lawyer Maria Luisa Stasi is bringing the case on behalf of nearly 60,000 businesses that use Windows Server on rival cloud platforms, arguing Microsoft makes the software more expensive there than on its own cloud computing service, Azure.

Stasi is asking London's Competition Appeal Tribunal to certify the case to proceed, an early step in the proceedings.

Microsoft, however, says Stasi's case does not set out a proper blueprint for how the tribunal will work out any alleged losses and should be thrown out.

MICROSOFT ACCUSED OF 'ABUSIVE STRATEGY'

Stasi's lawyer Sarah Ford told the tribunal that thousands of businesses had been overcharged because Microsoft charges higher prices to those who do not use Azure, making Azure a cheaper option than Amazon's AWS or the Google Cloud Platform.

She also said that "Microsoft degrades the user experience of Windows Server" on rival platforms, which Ford said was part of "a coherent abusive strategy to leverage Microsoft's dominant position" in the cloud computing market.

Microsoft argues that its vertically integrated business, where it uses Windows Server as an input for Azure while also licensing it to rivals, can benefit competition.

In July, an inquiry group from Britain's Competition and Markets Authority said Microsoft's licensing practices reduced competition for cloud services "by materially disadvantaging AWS and Google".

Microsoft said at the time that the group's report had ignored that "the cloud market has never been so dynamic and competitive".