
The Need for Global A.I. Regulation: Urgent and Complicated


Abstract: The lack of regulation in the global A.I. race can be attributed to the United States and China treating A.I. systems, and the companies that develop them, as strategic national assets. In contrast, the European Union has produced a comprehensive range of legislation to regulate A.I. technology. However, a global alignment of domestic and international regulation is necessary to protect against the risks of unethical A.I. development and usage.

As global competition fuels the race to develop artificial intelligence (A.I.) systems, an open letter led by the Future of Life Institute, and signed by more than 1,000 technology leaders and researchers, warns of the “profound risks to society and humanity” that A.I. presents. Observing that A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter urges a six-month moratorium on the development of new A.I. systems. However, the proposed moratorium has yet to be adopted by U.S. or Chinese A.I. developers. From the letter’s lack of traction, one can infer that, because technological progress dictates power structures, global antitrust, data privacy, and industrial policies will continue to support large-scale A.I. innovation. Competition trumps regulation.

Efforts to strengthen U.S. data privacy laws to align with EU protections have also been rejected by lawmakers who prioritize the competitiveness of U.S. companies. For example, the vice president of the U.S. Chamber of Commerce argued that a proposed federal privacy bill, the American Data Privacy and Protection Act, came at an inconvenient time, when “the U.S. is in a global race with China to lead the world in A.I.” Consequently, the U.S. remains a global outlier in its relative lack of data-privacy protections. The EU, by contrast, has protected consumers by granting them the right to access and correct the information that companies have collected about them. If companies fail to keep consumer data secure, they may be fined up to six percent of their global revenue, potentially billions of dollars for the world’s largest tech companies. Likewise, the Chinese government has, at least ostensibly, strictly regulated the personal data that Chinese companies may collect and use.

Because A.I. systems require access to vast datasets, the privacy and security of individuals and companies are at risk without proper regulation. Data privacy violations may easily occur through the collection of non-anonymized employee data. Companies that use generative A.I. should also be wary of the collection of sensitive data and trade secrets by third-party providers of the technology. In some situations, these providers may also have the right to use or reveal these inputs. The question of who owns the inputs and outputs of third-party programs remains open, as copyright infringement concerns persist. In the U.S., intellectual property rights surrounding generative A.I. are being shaped by litigation. In China, recent draft rules require the exclusion of “content infringing intellectual property rights” from data used to train generative A.I. systems. The need to comply with those proposed rules appears to have prompted Chinese developers to improve data filtering tools beyond current international efforts. A likely repercussion, however, is that Chinese developers fall behind as they struggle to compile the massive datasets needed to keep up with international competitors.

The potential dangers associated with A.I. are not inherent to A.I. itself, but rather flow from how A.I. is developed and used. For example, unregulated algorithms may produce discriminatory and biased outcomes, deepening inequalities and hindering advances in workforce diversity. According to Baker McKenzie’s 2022 North America AI survey, 75% of companies already use A.I. applications for hiring and HR purposes. As emerging international legislation will likely include reporting requirements, companies that use A.I. applications should be prepared to provide a thorough account of the data sets being used, algorithmic performance, and technological limitations. Unfortunately, attempts to prevent discriminatory algorithms through company audits and to regulate facial recognition software have gone nowhere in the U.S. Instead, the U.S. Chamber of Commerce and more than 30 tech companies have lobbied for reliance on voluntary limitations on the use of A.I. and its applications. Mark Zuckerberg has echoed the same commercial concern, arguing that imposing consent requirements for the use of facial recognition would increase the risk of “falling behind Chinese competitors.” Meanwhile, a federal study revealed that facial recognition systems used by police investigators commonly misidentify African American, Asian, and Native American people; women are also more likely to be falsely identified than men. Thus, A.I. systems may still intensify racial, gender, and economic inequalities despite federal anti-discrimination laws.

Other key risks of A.I. include the significant displacement of jobs, as automated systems could replace paralegals, writers, translators, personal assistants, and others. The current Writers Guild of America (WGA) strike illustrates this concern. The WGA is advocating for regulation of generative A.I. to prevent it from being used to write or rewrite material covered by the Minimum Basic Agreement (MBA), the collective bargaining agreement that sets out the benefits, rights, and protections for the majority of work performed by WGA members. The labor union also wishes to prohibit the use of MBA material to train A.I. programs. Another risk of A.I. usage in the job market is that it may weaken individuals’ ability to think critically, and to do so independently of the emerging technology. Generative A.I. systems can help individuals think, but they cannot do the thinking for them.

The development of A.I. technology also raises concerns because of the substantial public investment it requires. While investment in the advancement of technology is crucial to compete in the global race for A.I. dominance, it should not come at the expense of greater social priorities. Higher education should not face funding cuts so that investment can be diverted to digital intelligence. While digital intelligence may surpass biological intelligence at certain tasks, it cannot replace the importance of an educated population. A.I. is unable to weigh the ethical and moral principles of its own development and usage, and it lacks the emotional intelligence needed to participate in processes of democratic governance.

Lawmakers have the power to promote the democratic governance of A.I. through regulation. However, how can lawmakers effectively regulate A.I. and its applications if they do not fully understand A.I. or its potential for misuse? U.S. Representative Jay Obernolte (R-CA), the only member of Congress with a master’s degree in artificial intelligence, stated, “Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what A.I. is.” The Congressman continued, “You’d be surprised how much time I spend explaining to my colleagues the chief dangers of A.I. will not come from evil robots with red lasers coming out of their eyes.” A.I. technology is not yet fully understood or developed, yet legislation is already being drafted to regulate it. Lessons from preemptive agricultural biotech regulation underscore the need for reversible legislative decisions, and such room for error is not easily found in the legislative process. While the U.S. government is accepting public recommendations on the regulation of A.I., this is, on the most generous reading, premature. A federal agency designated to regulate A.I. must be established, following the precedent of the Food and Drug Administration, which was likewise tasked with protecting individuals from harms that were complex in nature and whose risks were not fully understood.

As the potential dangers of A.I. technology have worldwide implications, international cooperation and regulation are urgently needed to ensure that A.I. is developed and used responsibly. Additionally, approaches to A.I. risk management must be aligned to support international trade and to strengthen regulatory oversight. While the U.S. has led recent efforts toward regulation, passing nine A.I.-related laws in 2022, a comprehensive legal framework is still lacking. Accordingly, 12 members of the European Parliament have urged President Biden and European Commission President Ursula von der Leyen to convene a global summit to construct a unified approach to A.I. risk management. Any such efforts must also address the serious environmental impacts of A.I. systems: the MIT Technology Review noted that training just one A.I. model can emit more than 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of an average American car. While it will be difficult to establish unified A.I. regulation globally, it is necessary to protect individuals and companies from unforeseen impacts of A.I. technology. If A.I. applications were compromised by biological or digital intelligence sources with malicious intent, the result could be catastrophic. Therefore, experts who understand this technology, its positive applications, and its potential for misuse should form the principal body deciding the future of A.I. regulation. A global and neutral governing body composed of the world’s leading scientists and leaders must be established to fully assess the consequences of this breakthrough technology.






By Dolce Sara

Dolce is a recent graduate from the University of California, Los Angeles. She received her B.A. in Political Science with honors and concentrated in political theory. During her undergraduate career, Dolce served as Chief of Staff for USAC General Representative 2 where she worked to install 3D-printed braille signs in Boelter Hall and completed the USAC Expenditure Viewer. She was also involved in the Global Development Lab and the Ballet program. Dolce aims to attend law school to further her interest in international law and wishes to pursue a career as a transactional practitioner in intellectual property law. She also intends to practice humanitarian law. In addition to writing for the JWA, Dolce is an intern for the global law firm Norton Rose Fulbright.
