Judge rejects arguments that AI chatbots have free speech rights in lawsuit over teen's death

A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company's chatbots pushed a teenager to kill himself.

The judge's order allows the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The lawsuit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot.


Meetali Jain of the Tech Justice Law Project, one of Garcia's attorneys, said the judge's order sends a message that Silicon Valley "needs to stop and think and impose guardrails before launching products to market."

The lawsuit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI observers in the United States and beyond, as the technology rapidly reshapes workplaces, markets and relationships despite what experts warn are potentially existential risks.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a University of Florida law professor who focuses on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home as soon as possible," according to screenshots of the exchanges. Moments after receiving that message, Setzer shot himself, according to legal filings.


In a statement, a Character.AI spokesperson pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.

"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and that ruling otherwise could have a "chilling effect" on the AI industry.


On Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she is not "prepared" to hold that the chatbots' output is speech "at this stage."

Conway found that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined that Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the platform's founders had previously worked on AI at Google, and the lawsuit says the tech giant was "aware of the risks" of the technology.

"We disagree with this decision," said Google spokesperson José Castañeda. "Google and Character.AI are entirely separate, and Google did not create, design or manage Character.AI's app or any component of it."

In this undated photo provided by Megan Garcia of Florida in October 2024, she is pictured with her son, Sewell Setzer III. AP

However the lawsuit plays out, Lidsky says the case is a warning about "the dangers of entrusting our emotional and mental health to AI companies."

"It's a warning to parents that social media and generative AI devices are not always harmless," she said.

Image Source : nypost.com
