
AI Bill of Rights – industry should police itself, claims ML company
The Biden White House’s Blueprint for an AI Bill of Rights – ‘making automated systems work for the American people’ – seeks to minimize bias and the potential risks to citizens from technology overreach, data grabs, and intrusion. So why are some tech companies up in arms about it? Perhaps some questions answer themselves. But on the face of it, the Blueprint contains a reasonable set of aims for a country with an insurance-based healthcare system, and where employment, finance, and credit decisions increasingly reside in inscrutable algorithms.
The US government’s stated desire for safer, more effective systems and greater personal privacy – not to mention its call to explore human alternatives to AI where possible – has rattled some in Silicon Valley. Indeed, the proposals have left “many concerned about the future of ethical AI if left in the hands of the government”. At least, that’s the opinion of one opponent: CF Su, VP of Machine Learning at intelligent document processing provider, Hyperscience. In his view, AI ethics should instead rest with “those who know the technology the best”.
