
EU Proposes Strict Regulations for AI

The European Union this week unveiled its first proposed regulations for artificial intelligence (AI) technology, along with a strategy for handling personal digital data. The new regs provide guidance on such AI use cases as autonomous vehicles and biometric identification.

Published online by the European Commission, the proposed regulations would apply to "high-risk" uses of AI in areas such as health care, transportation and criminal justice. The criteria to determine risk would include such considerations as whether a person might get hurt, say, by a self-driving car or a medical device, and how much influence a human has on an AI's decision in areas like job recruiting and law enforcement.

The EU also indicated it wants to end "black box" AI implementations by requiring human oversight of everything, including the large data sets used to train AI systems. The rules would ensure that such data was legally obtained, traceable to its source, and sufficiently broad to train the system. "An AI system needs to be technically robust and accurate in order to be trustworthy," said EU commissioner Margrethe Vestager in a statement.

The regulations would also establish culpability, answering the question of who is responsible for an AI system's actions: the person or entity using it, or the one that designed it? High-risk applications would have to be shown to be compliant with the rules before being deployed in the European Union. The commission also plans to offer a "trustworthy AI" certification to encourage voluntary compliance in low-risk uses. Certified systems later found to have breached the rules could face fines.

The EU's stricter regs are meant, in part, to distinguish the region from the U.S. and China, which have been reluctant to impose restrictions that might slow their respective marches toward AI supremacy. Europe will "develop and pursue its own path to become a globally competitive, value-based and inclusive digital economy and society, while continuing to be an open but rules-based market, and to work closely with its international partners," the European Commission said in a statement.

The EU's proposed rules were not unexpected. Gartner analysts believe that, by 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms, and the teams responsible for them. "In a time where algorithms have access to large data sets and are making critical decisions," the firm said in an email, "responsible development and deployment of AI systems is top of mind for business leaders and governments alike."

"We want every citizen, every employee, every business to stand a fair chance to reap the benefits of digitalization," said Vestager, who is also executive vice president of A Europe Fit for the Digital Age. "Whether that means driving more safely or polluting less thanks to connected cars; or even saving lives with AI-driven medical imagery that allows doctors to detect diseases earlier than ever before."

The proposed regulations are not final. The European Commission is allowing 12 weeks for input from experts, lobby groups, and the public. The final regulation will need to be approved by the European Parliament and national governments, probably no sooner than 2021.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
