
Google AI pioneer quits to speak freely about technology's dangers

A pioneer of artificial intelligence said he quit Google to speak freely about the technology's dangers, after realising that computers could become smarter than people far sooner than he and other experts had expected.

“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Geoffrey Hinton wrote on Twitter.

In an interview with the New York Times, Hinton said he was worried about AI's capacity to create convincing false images and text, creating a world in which people will "not be able to know what is true anymore".

"It is hard to see how you can prevent the bad actors from using it for bad things," he said.

The technology could quickly displace workers and become a greater danger as it learns new behaviours.

"The idea that this stuff could actually get smarter than people — a few people believed that," he told the New York Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

In his tweet, Hinton said Google itself had "acted very responsibly" and denied that he had quit so that he could criticise his former employer.

Google, part of Alphabet Inc., did not immediately respond to a request for comment from Reuters.

The Times quoted Google's chief scientist, Jeff Dean, as saying in a statement: "We remain committed to a responsible approach to A.I. We're continually learning to understand emerging risks while also innovating boldly."

Since Microsoft-backed startup OpenAI released ChatGPT in November, the growing number of "generative AI" applications that can create text or images has provoked concern over the future regulation of the technology.

"That so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work, should alarm policymakers," said Dr Carissa Veliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI. "The time to regulate AI is now."

Source: www.anews.com.tr