OpenAI, Google DeepMind employees sign open letter calling for whistle-blower protections so they can speak out on AI risks

A group of current and former employees from OpenAI and Google DeepMind are calling for protection from retaliation for sharing concerns about the “serious risks” of the technologies these and other companies are building.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” according to a public letter, which was signed by 13 people who have worked at the companies, seven of whom included their names.

“Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

In recent weeks, OpenAI has faced controversy about its approach to safeguarding artificial intelligence (AI) after dissolving one of its most high-profile safety teams and being hit by a series of staff departures.

OpenAI employees have also raised concerns that staff were asked to sign non-disparagement agreements tied to their shares in the company, potentially causing them to lose out on lucrative equity deals if they speak out against the AI start-up. After some resistance, OpenAI said it would release past employees from the agreements.

Jacob Hilton, one of the former OpenAI employees who signed the letter Tuesday, wrote on X that the company deserves credit for the non-disparagement policy change, “but employees may still fear other forms of retaliation for disclosure, such as being fired and sued for damages”.

In a statement sent to Bloomberg, a spokesperson for OpenAI said the company is proud of its “track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.”

The spokesperson added: “We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world.”

A representative for Google did not immediately respond to a request for comment.

In the letter, which was titled “A Right to Warn about Advanced Artificial Intelligence”, the employees said they are worried because leading AI companies “have strong financial incentives to avoid effective oversight”.

At the same time, the companies have “only weak obligations” to share the true dangers of their AI systems with the public, they said. The letter argued that ordinary whistle-blower protections are insufficient because they focus on illegal activity, whereas many of the risks employees are concerned about are not yet regulated.

In their set of proposals, employees are asking AI companies to commit to prohibiting non-disparagement agreements for risk-related concerns and to create a verifiably anonymous process for staff to raise issues with the company’s boards as well as regulators.

The proposals also call for companies to refrain from retaliating against current and former employees who publicly share information about risks after exhausting other internal processes.

OpenAI said it holds regular Q&A sessions with the board as well as leadership office hours for employees to voice concerns. The company also said it has an anonymous “integrity hotline” for employees and contractors.

Daniel Kokotajlo, a former OpenAI employee who quit earlier this year, said he is worried about whether companies are prepared for the implications of artificial general intelligence (AGI), a hypothetical version of AI that can outperform humans on many tasks. Kokotajlo said he believes there is a 50 per cent chance of reaching AGI by 2027.

“There’s nothing really stopping companies from building AGI and using it for various things, and there isn’t much transparency,” said Kokotajlo, who risked forgoing his equity in order to avoid signing a non-disparagement agreement.

“I quit because I felt like we were not ready. We weren’t ready as a company, and we weren’t ready as a society for this, and we needed to really invest a lot more in preparing and thinking about the implications.”
