
Anthropic, an artificial intelligence (AI) company, is launching a new program to improve understanding of what AI systems are capable of. The initiative, unveiled on Monday, will fund external organizations that develop novel techniques for evaluating AI performance. According to Anthropic, these new techniques should be able to measure the “advanced capabilities” of AI models accurately. Interested organizations can submit evaluation proposals at any time.

On its official blog, Anthropic stated: “We hope to improve the field of AI safety overall by investing in these evaluations and supplying useful tools that benefit the entire ecosystem.” The company added that developing high-quality, safety-relevant evaluations remains difficult, and that demand is outstripping supply.

The way AI systems are currently tested has a flaw: most benchmarks do not reflect how AI is actually used in the real world. Some tests may not even measure what they claim to, particularly older ones designed before sophisticated generative AI existed. Anthropic’s answer is to create brand-new testing procedures. These new, more demanding evaluations will focus on AI safety and its potential impact on society, and Anthropic intends to provide new resources, testing techniques, and tools to support this work.
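To make the idea of a capability evaluation concrete, the sketch below shows the general shape such a test often takes: a fixed set of prompts with reference answers, a model queried on each, and a simple accuracy score. It is a minimal illustration only; the sample questions, the grading rule, and the stand-in model function are hypothetical placeholders, not Anthropic’s actual evaluation suite or API.

```python
# Minimal sketch of a question-answer capability evaluation (illustrative only).
# The sample questions, grading rule, and the stand-in model below are
# hypothetical placeholders, not Anthropic's actual methodology.

from typing import Callable

EVAL_SET = [
    {"prompt": "What is 17 * 24?", "reference": "408"},
    {"prompt": "What is the chemical symbol for sodium?", "reference": "Na"},
]

def grade(answer: str, reference: str) -> bool:
    # Crude grading rule: the reference string must appear in the model's answer.
    return reference.lower() in answer.lower()

def run_eval(model_fn: Callable[[str], str]) -> float:
    """Query the model on every prompt and return the fraction answered correctly."""
    correct = sum(grade(model_fn(item["prompt"]), item["reference"]) for item in EVAL_SET)
    return correct / len(EVAL_SET)

if __name__ == "__main__":
    # Stand-in "model" that always gives the same reply; a real evaluation
    # would swap in an actual model API call here.
    dummy_model = lambda prompt: "I am not sure."
    print(f"Accuracy: {run_eval(dummy_model):.0%}")
```

Even this toy example illustrates the complaint in the article: a static list of questions scored by string matching says little about how a model behaves in open-ended, real-world use, which is the gap Anthropic says it wants new evaluations to close.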

Anthropic worries that current AI testing methodologies overlook some of the hazards that powerful AI could pose. The main goal of the new initiative will be to create tests that can recognize these hazards.

A few of the new tests, for instance, will look into whether AI models could be used for harmful tasks such as building weapons or conducting cyberattacks. They will also examine ways AI might be used to deceive people through false information or other forms of manipulation. National security is another area of concern. Anthropic has not disclosed the specifics of its proposed “early warning system” for detecting AI threats before they materialize.

The program will also fund research on using AI for good, such as supporting scientific research, translating languages, and reducing bias in decision-making. Anthropic also wants to develop AI that can recognize objectionable or dangerous content and refrain from producing it.
