OpenAI, Google DeepMind insiders have serious warnings about AI

For OpenAI, the last few weeks have produced a string of headlines – and not for the best reasons. The story hasn’t ended there: several current and former OpenAI employees, along with Google DeepMind employees, are now calling out their companies over safety oversights, a culture that stifles criticism, and a general lack of transparency.

In an open letter, the whistleblowers essentially called for the right to openly criticize AI technology and its associated risks. They wrote that, due to a lack of obligation to share information with government bodies and regulators, “current and former employees are among the few people who can hold [these corporations] accountable to the public”, and said that many of them “fear various forms of retaliation” for doing so.

The signatories asked advanced AI companies to commit to certain principles, including facilitating an anonymous process for employees to raise risk-related concerns and supporting “a culture of open criticism”, so long as trade secrets are protected in the process. They also asked that the companies not retaliate against those who “publicly share risk-related confidential information after other processes have failed.”

In the letter, the group also outlined the AI risks they recognize: from the entrenchment of existing inequalities, to the exacerbation of misinformation, to the possibility of human extinction.

Daniel Kokotajlo, a former OpenAI researcher and one of the group’s organizers, told the New York Times that OpenAI is “recklessly racing” to get to the top of the AI game. The company has faced scrutiny over its safety processes, and the recent launch of an internal safety team has also raised eyebrows because CEO Sam Altman sits at its helm.