
OpenAI suggests its AI safety requirements are flexible if competitors release high-risk models without safeguards


OpenAI said it would consider adjusting its safety requirements if a competing company releases a high-risk artificial intelligence model without protections. In its Preparedness Framework report, the company wrote that if another developer releases a threatening model, it could do the same once it has confirmed that the "risk landscape" has changed.

The document explains how the company monitors, evaluates, forecasts, and protects against catastrophic risks posed by its AI models. "If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements," OpenAI wrote in a blog post published on Tuesday.

"However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making the adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a more protective level."

Before releasing a model, OpenAI evaluates whether it could cause severe harm by identifying plausible, measurable, new, severe, and irremediable risks, and by building safeguards against them. It then classifies these risks as low, medium, high, or critical.

Some of the risks the company already tracks are its models' capabilities in the fields of biology, chemistry, cybersecurity, and self-improvement. The company is also assessing new risks, such as whether its AI models can operate for long periods without human intervention, self-replicate, or pose a threat in the nuclear and radiological fields.

"Persuasion" risks, such as how ChatGPT is used for political campaigns or lobbying, will be handled outside of the framework and will instead be examined through the Model Spec, the document that determines ChatGPT's behaviour.

Scaling back safety commitments

Former OpenAI researcher Steven Adler said on X that the updates to the company's Preparedness Framework show it is "quietly reducing its safety commitments". In his post, he pointed to the company's December 2023 commitment to test "fine-tuned versions" of its AI models, but noted that OpenAI will now only test models whose trained parameters, or "weights", are to be released.

"People can entirely disagree about whether testing fine-tuned models is needed, and it may be better for OpenAI to remove a commitment than to keep it and not follow through," he said. "But in either case, I would like OpenAI to be clearer about having backed off this previous commitment."

The news comes in the same week the company released a new family of AI models, called GPT-4.1, reportedly without a model card or safety report. Euronews Next asked OpenAI for the safety report but had not received a response at the time of publication.

It also comes after 12 former OpenAI employees filed a brief last week in Elon Musk's case against OpenAI, arguing that a transition to a for-profit company could lead to safety being cut back.


