Anthropic has taken a pivotal step in tightening its platform security measures as the U.S. works to define guidelines to oversee the AI sector. The company began rolling out KYC verifications for subscribers of its Claude platform this week. With this, Anthropic has moved ahead of competitors like OpenAI and Google in implementing the measure.
While Anthropic has not posted an official announcement, screenshots showing its KYC requirements have started to surface on social media. These snapshots show that Claude users would have to present a government-issued photo ID and grant mobile camera access as part of these verifications.
“We need to confirm your identity before you start your subscription. This should only take two minutes,” Anthropic’s direction under the “quick identity check” feature says.
Anthropic has partnered with Persona to facilitate these verifications. Persona is a third-party ID verification service that works by matching government IDs to live selfies. LinkedIn also uses Persona's verification service, awarding a check mark to profiles that pass it.
The development has met with mixed reactions on social media. While some welcomed it as a needed security measure against AI exploitation by criminals, others argued that Anthropic is treating AI like a financial asset, which could threaten the future of open intelligence technologies.
Given the expansion in AI adoption, AI players like Anthropic are drawing major investor attention. According to recent reports, the Claude parent has received proposals that could value the firm at a whopping $800 billion.
The company recently signed a multi-year AI cloud infrastructure deal with CoreWeave as it plans operational expansion.
The company is taking a bullish approach to its growth strategy at a time when U.S. regulators have started opening dialogues to define national AI regulations.
Last month, the White House published a national legislative framework to help U.S. regulators finalize unified AI legislation that would be deployed federally.
Protecting children and empowering parents has been listed as the first area of priority, followed by safeguarding American communities and respecting creators as well as intellectual property rights.
Preventing censorship, protecting free speech, and developing an AI-ready workforce are among the six focus points on which the White House has directed Congress to base the AI laws.