
OpenAI outlines policy blueprint for ‘Intelligence Age’, here’s what it says

Robot Taxes, rogue AI management: OpenAI outlines policy approach for the 'Intelligence Age'

OpenAI on Monday published a blueprint for how policy should evolve now that AI, and the prospect of superintelligence, is gaining adoption momentum. The document, titled “Industrial policy for the intelligence age,” outlines “people-first policy ideas” intended to ensure that AI brings more benefit to the ecosystem than chaos.

The proposals span an array of topics, from recognizing a “Right to AI” to encourage widespread institutional adoption of the technology, to introducing taxes on AI-driven revenues and the use of robotics.

“Without thoughtful policies, AI could widen inequality by compounding advantages for those already positioned to capture the upside. There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used,” the post said, pointing to the need to build an open economy.

The notion that AI could one day deliver breakthroughs in technology and medicine has circulated widely around the world. However, these advances also carry a significant risk of being exploited by threat actors and terrorists.

In the wrong hands, OpenAI warned, AI could go rogue to the point of escaping human oversight, making the design and deployment of adequate safeguards against misuse an urgent priority.

The policy blueprint calls for developing and testing playbooks on “how to contain dangerous systems,” so that the world is prepared for challenging times in the age of intelligence.

Creating safety systems to tackle emerging risks and building AI trust stacks could be a starting point for helping people around the world understand sound trust practices around AI, the U.S.-headquartered company noted.

“Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making,” OpenAI added, suggesting that governments should take note of the need to scale up model auditing and monitor high-risk deployments to prevent AI systems from concentrating power.

While OpenAI called these ideas ambitious and exploratory, it said the suggestions could serve as a starting point for future regulatory dialogue.

The AI firm has invited tech players to review the 13-page set of regulatory suggestions and submit feedback to an email address created specifically to receive input.

“To help sustain momentum, OpenAI is establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas,” the company added.

The company has scheduled an OpenAI Workshop in Washington, D.C., in May, where it plans to take these discussions forward.

In recent weeks, OpenAI raised a total of $122 billion at a valuation of $852 billion from a group of investors including Amazon, NVIDIA, and SoftBank. At the time, the company was seeking fresh sources of private capital to continue funding its product development.

Coin Headlines covers the latest news in crypto, blockchain, Web3, and markets, bringing you credible and up-to-date information on all the latest developments from around the world.
