CQ MAGAZINE – COVER STORY

Government Struggles to Rein In Artificial Intelligence Programs

June 3, 2019 By Gopal Ratnam and Kate Ackley, CQ

It can help trace missing children, but misidentifies people of color. It can help detect cancer, but may recommend the wrong cure. It can help track criminals, but could aid foreign enemies in targeting voters. It can improve efficiency, but perpetuate long-standing biases.


The “it” is artificial intelligence, a technology that teaches machines to recognize complex patterns and make decisions based on them, much like humans do. While the promised benefits of the technology are profound, the downsides could be damaging, even dangerous.

Last year, for example, police in New Delhi traced 2,930 missing children in four days using an experimental facial recognition system that matched them against a database of 45,000 children living in shelters and homes. Yet a facial recognition tool developed by Amazon and tested by the American Civil Liberties Union in 2018 incorrectly identified 28 members of Congress as having been arrested for a crime, disproportionately picking out African-American lawmakers including civil rights icon Rep. John Lewis.

In another case, Boston’s attempts to use an automated system to help students attend higher quality schools closer to their homes and increase racial and geographic integration resulted in shortened commutes but greater segregation because of pre-existing racial disparities in school distribution, according to researchers at Northeastern and Harvard universities.


Significant advances in computers’ ability to recognize visual patterns and human languages, including voice and text recognition, and to learn without supervision have brought machines closer to achieving cognitive tasks once reserved for humans. Vast quantities of data held privately and by governments are the necessary “food” that computers must digest to learn the new skills.

Lawmakers and regulators, still grappling with the downsides of the internet and social media era such as the loss of privacy, criminal hacking and data breaches, are now trying to balance the promises and perils of artificial intelligence. Industry groups, lobbyists and unions are angling to shape the debate over regulations affecting a technology that could one day bring more job losses because of increased automation. Civil rights groups and some technologists are calling for greater oversight to prevent bias and discriminatory practices.


Determining the way forward is also complicated because artificial intelligence has emerged almost organically from existing technology and data, unhampered by restrictions on privacy and use, says Jason Schulz, a law professor at New York University who also oversees law and policy for NYU’s AI Now Institute. “We are now trying to figure out whether the scaffolding we have for the internet era is sufficient for AI or do we need a whole new foundation,” he says.

Northwell Health, New York state’s largest health care provider, for example, uses Amazon’s Echo voice-activated device to assist hospital patients with queries on everything from medications to music. Typical Echo devices store recordings of user requests on Amazon servers, but Northwell uses its own servers to comply with the Health Insurance Portability and Accountability Act (PL 104-191), the landmark law that safeguards patient information.

“But is that enough or do we need new regulations?” Schulz asks.

Good and Evil

Members of Congress are waking up to the potential dangers of widespread use of AI technologies. They have drafted bills that would not only require more transparency and accountability over these automated systems, but also allow users to withhold certain information from the large data sets that drive artificial intelligence.

“It’s a fundamental way in which decisions are made now — by algorithms and computers,” says Sen. Ron Wyden, an Oregon Democrat who is co-sponsoring a bill dubbed the Algorithmic Accountability Act, introduced in April. “And it seems to me that there’s not much transparency, not much disclosure, and that’s what we sought to do in our bill.”

The bill would require the Federal Trade Commission to prepare rules requiring companies to test their AI-powered systems for accuracy, fairness, bias, discrimination, privacy and security, and to correct errors if they find them. The bill is backed by Sen. Cory Booker, a New Jersey Democrat running for president, and New York Democratic Rep. Yvette D. Clarke.

When introducing the legislation, Booker described his African-American parents’ experience with “real estate steering” in 1969, when agents coaxed black couples away from some neighborhoods.

While such practices have been outlawed, opaque automated advertising systems driven by algorithms could perpetuate discrimination and avoid scrutiny, he said.

Elsewhere in Congress, Sens. Roy Blunt, a Missouri Republican, and Brian Schatz, a Hawaii Democrat, have proposed legislation that would prohibit commercial users of facial recognition technology from collecting and sharing data for identifying and tracking consumers without their consent.

Tech-savvy lawmakers say Congress must be better educated before passing legislation addressing artificial intelligence to avoid repeating the failures made with earlier internet technologies.

California Democrat Ro Khanna, a member of the Congressional Artificial Intelligence Caucus whose district includes part of Silicon Valley, says he’s putting together a working group of economists, lawyers, academics and others to help lawmakers draft legislation.

“That may allow us to get ahead of the curve when it comes to AI and preventing misuse of AI in a way that we weren’t ahead of the curve on social media,” says Khanna, a national co-chairman of Vermont independent Sen. Bernie Sanders’ bid for the Democratic presidential nomination.

Rep. Ted Lieu, another California Democrat and a member of the AI Caucus who holds a computer science degree from Stanford University, had a similar idea two years ago when he proposed legislation calling for a federal commission to advise the government on how to develop and regulate artificial intelligence technologies. The bill failed to get any traction.

He believes that guidelines from a national commission and rules developed by appropriate regulatory agencies will be more effective than initiatives from Congress promoting or condemning specific technologies.

“If you get it wrong you need Congress to act again,” says Lieu. Congress paved the way for the emergence of social media companies when it passed the Communications Decency Act (PL 104-104) in 1996, which exempted companies such as Facebook, YouTube, Twitter and others from being considered publishers — meaning that unlike newspapers or book publishers, they are not responsible for the content uploaded to their sites by users.

In the aftermath of the 2016 elections, when Russian operatives pretending to be Americans created fake accounts to influence U.S. voters, lawmakers have been trying to find ways to hold social media companies accountable for the content on their platforms.

The AI Caucus is still mostly a digital forum that has yet to meet in person, says Rep. André Carson, an Indiana Democrat who is part of the group. The goal of the caucus is not to turn lawmakers into tech wizards but to get them up to speed on key issues, he says. In the process of learning about technology, “some members have started Facebook pages or got rid of their flip phone.”

Congress has begun to take baby steps in adopting the technology too. The House of Representatives is developing an AI tool that would help lawmakers, staffers and the public compare legislative proposals line-by-line to current law.
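The House tool has not been described publicly in detail, but the basic idea of line-by-line comparison can be sketched in a few lines of Python. The file names below are hypothetical, and a production system would layer language analysis on top of a plain diff like this one.

```python
# Minimal sketch of line-by-line comparison of a bill against current law.
# File names are hypothetical; this shows only the basic diff step that an
# AI-assisted tool would build on.
import difflib

with open("current_law.txt") as f:
    current = f.readlines()
with open("proposed_bill.txt") as f:
    proposed = f.readlines()

# Lines prefixed "-" would be struck from current law; "+" lines are added by the bill.
for line in difflib.unified_diff(current, proposed,
                                 fromfile="current law",
                                 tofile="proposed bill",
                                 lineterm=""):
    print(line)
```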

As a “Star Wars” fan who also grew up watching the TV series “Knight Rider,” which featured a souped-up Pontiac Trans Am that could talk and help fight injustices, Carson says he’s both excited and concerned about the possibilities for AI.

Technologies that could enable the FBI and other law enforcement agencies to target criminals and potential terrorists could as easily be used by adversaries to target voters and whip up fear, Carson says. “Whatever is created for good can be used for evil purposes,” he says.

Prodding Regulators

Khanna says the Trump administration should bring together industry experts, philosophers and others to devise a national plan that would both advance AI and minimize harm from it.

The administration took a step toward that a year ago after critics complained it was failing to lead on AI efforts, while China and European countries had set ambitious goals. The White House assembled dozens of government officials, technologists and experts to discuss ways to encourage AI technologies while avoiding restrictive regulations.

In February, President Donald Trump issued an executive order emphasizing those goals and outlining priorities to, among other things, spur technological breakthroughs in AI and develop a skilled workforce in the sector.

The White House Office of Management and Budget and the Office of Science and Technology Policy are drafting guidelines on how federal agencies should approach AI regulations, says Lynne Parker, assistant director for artificial intelligence at OSTP.

“We need a consistent regulatory approach for all uses of artificial intelligence,” Parker says. Existing regulations already may cover some uses, she says. The president’s executive order also instructs agencies to make government data more widely available to researchers to broaden AI opportunities so “it’s not just the technology companies that have the data,” she says.

The White House also is seeking public comments on updating the national AI plan first issued by the Obama administration in 2016.

The Food and Drug Administration and the Department of Transportation already are examining rules to regulate AI-enabled devices and cars, and the Pentagon is working with a group of outside advisers, the Defense Innovation Board, to develop ethical principles to govern military applications.

During a recent public meeting at Carnegie Mellon University, Joshua Marcuse, the executive director of the board, said it is seeking a wide array of views and would not shy away from critics and those who strongly disagree with the use of AI in military applications.

The Defense Department “feels the need to view AI differently than other technologies, especially on ethics, and the imperative to get this right on how [the Defense Department] employs these technologies,” Marcuse said. The goal is to understand whether the use of artificial intelligence “precipitates strategic or tactical questions, about which current law on war is not developed,” he said.

The laws of war refer to a body of international law that addresses fundamental principles of armed conflict, such as the justified use of violence, avoiding destruction without purpose, avoiding civilian casualties and proportionality in the use of force.

Eliahu Niewood, a technologist at the Mitre Corp., a government-funded research and development entity, told the meeting that artificial intelligence could be positive for the military. If, for example, AI-enabled technologies had been installed aboard the USS Vincennes, a guided-missile cruiser that shot down an Iranian passenger jet in 1988 after mistaking it for a military aircraft, the accident could have been avoided. The technology could have distinguished a commercial airplane’s beacon signal from a military aircraft signal quicker than a human could have, he said.

At the same meeting, scientists from the MIT Media Lab said tech companies developing artificial intelligence technologies were engaged in a large-scale public relations push that presented an “illusion” of positive change while glossing over obvious problems — crashes of autonomous vehicles, racial bias in criminal sentencing and robots eroding job opportunities. They called the effort “machine-washing,” similar to the fossil fuel industry portraying itself as “friends of the earth” in the early 1970s when public focus turned to environmental pollution — an effort that was later dubbed “greenwashing.”

Big Brother Issues

Civil rights activists and some technologists are concerned that emerging legislation is still largely addressing only commercial uses of artificial intelligence and not government use in surveillance, law enforcement and other areas.

Concerns about AI systems used by Google, Amazon and Facebook are well known, says the AI Now Institute’s Schulz. But the government’s use of AI in decision-making on criminal justice issues — by the FBI and the National Security Agency, for example — raises new concerns, he says.

Police and FBI use of facial recognition systems to identify suspects from driver’s license and other databases subjects as many as 117 million Americans — roughly 1 in 2 adults — to a virtual lineup without their consent, the Center for Privacy and Technology at the Georgetown University Law Center said in a 2016 report.

“Facial recognition is a powerful tool in this respect,” says Jake Laperruque, a senior counsel at the Constitution Project at the Project on Government Oversight. “It is unique, and unlike fingerprints or other forms of identification, you can monitor without consent.”

In a March report titled Facing the Future of Surveillance, Laperruque called for laws that would require police departments and the FBI to seek judicial warrants to build and use facial recognition databases; limit real-time use of such technologies only to emergency situations and serious crimes; and require independent verification of results before they’re used. Laws should also place a moratorium on embedding facial recognition systems in police body cameras, Laperruque wrote.

Amazon’s facial recognition system, known as Rekognition, is used by several police departments around the country, but technologists and activists say it produces false positives and higher error rates when classifying women and darker-skinned faces. Amazon, in response, says it “has not received a single report of misuse by law enforcement” in two years of use.
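One practical issue in such tests is the similarity threshold a user chooses. The sketch below, an illustration rather than the ACLU’s or any police department’s actual workflow, shows where that setting lives in a face comparison call made through Amazon’s boto3 SDK. The image files are hypothetical, and Amazon has said it recommends a far stricter threshold for law enforcement uses than the service’s default.

```python
# Hedged sketch of a single face comparison with Amazon Rekognition via boto3.
# Image file names are hypothetical. SimilarityThreshold controls how strict a
# "match" must be; lower values return more candidate matches and more false positives.
import boto3

client = boto3.client("rekognition")

with open("lawmaker.jpg", "rb") as src, open("arrest_photo.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=80,  # service default; stricter settings reduce false matches
    )

for match in response["FaceMatches"]:
    print(f"Candidate match at {match['Similarity']:.1f}% similarity")
```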

Microsoft, which also has developed a facial recognition system, has called for laws to regulate use of such technologies.

“Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues,” Microsoft President Brad Smith wrote in a December blog post. “By that time, these challenges will be much more difficult to bottle back up.”

Manufacturers should be required to explain how their systems work, outlining limitations that users can understand, he wrote. He also called for independent testing by groups such as Consumer Reports.

In the absence of a federal law, Microsoft has adopted six voluntary principles in developing facial recognition technologies.

The principles include guidance to help engineering and software teams prioritize nondiscrimination and fairness in any artificial intelligence products, Rich Sauer, Microsoft’s deputy general counsel, says in an email. Those steps include employing a diverse team of developers and testing the system for bias before deploying the technology.

Laperruque expects Congress to address government use of facial recognition technologies in a separate bill, especially after the ACLU demonstrated that the technology can wrongfully identify even lawmakers as criminals. But when it comes to regulating facial recognition, the irony is that San Francisco — incubator to many tech startups and just up the road from the world’s largest tech companies — got there first. It has banned city agencies, including police, from using facial recognition and from using information obtained from other systems that employ the technology.

The ban could be a harbinger. Other cities, including Oakland, Calif., and Somerville, Mass., are considering similar bans.

In late May, Congress began grappling with the dangers of facial recognition. At a hearing of the House Oversight and Government Reform committee, a spirit of rare bipartisanship prevailed. Rep. Jim Jordan of Ohio, the panel’s ranking Republican, called for a moratorium on the use of the technology until a federal law can regulate applications. Chairman Elijah E. Cummings, a Maryland Democrat, did not disagree. He said all Americans could become victims of facial surveillance and promised balanced legislation governing its use.

A Customs and Border Protection officer uses a facial recognition device to screen a traveler at Miami International Airport, the first in the country to provide expedited passport screening with the technology. (Joe Raedle/Getty Images)

Data Can Be Biased

In addition to facial recognition, other government AI systems are particularly harmful to African-Americans and other minorities, critics say.

“It has been well-documented that systems have been weaponized against black people,” says Yeshimabeit Milner, founder and executive director of Data for Black Lives, a group that is calling for data science to improve the lives of black people.

In attempts to end cash bail, some states have turned to systems that predict whether a defendant is likely to skip a court date. But reports show these systems can unfairly penalize black defendants, and make it virtually impossible for them to fight back, Milner says.

“The movement to end cash bail is now turning into a risk-assessment industry,” she says.

The group has succeeded in persuading some local governments not to build massive databases, Milner says.

In the Twin Cities of St. Paul and Minneapolis, the group persuaded a coalition of government agencies, including the sheriff’s office and a foster care operation, not to combine what Milner describes as vast, “historically biased” data sets that inadvertently could have worked against minorities.

In September, AI Now issued a report documenting areas where technologies are supplanting private and government decision-making, including public school teacher evaluations, criminal risk assessments and eligibility for Medicaid and other government benefits. Critics have begun pushing for adequate safeguards, oversight and appeals mechanisms, it said.

Even in cases where current laws protect against discrimination, the technologies raise new questions about how laws apply, policy experts say.

The Civil Rights Act, for example, prohibits discrimination based on race, color, religion, sex and national origin. But AI-driven advertising can decide who sees a job opening without explicitly excluding any group, says Miranda Bogen, a senior policy analyst at Upturn, a group that advocates for fairness in digital technologies. Because Facebook shows ads based on what users are interested in, Bogen says, it’s possible “you may not see an ad, and if you aren’t aware of a job, you can’t apply.”

In March, the Department of Housing and Urban Development sued Facebook, alleging that its automated advertising platform broke federal law by letting advertisers decide who can view housing ads. The practice allowed advertisers to exclude groups including foreigners, non-Christians, people of Hispanic origin and those wearing hijabs from viewing ads, the lawsuit said. Facebook has said it was surprised by the agency’s lawsuit and continues to work with the department.

Although the United States has not outlawed any specific application of artificial intelligence, it should, says Natasha Duarte, policy analyst at the Center for Democracy and Technology, which advocates for online civil liberties.

“We should not be using algorithms to predict individuals’ likelihood of future criminality or other highly subjective individualized judgments,” because there’s no fair way to predict criminal behavior or extrapolate future actions based on one’s past, Duarte says.

The center also opposes use of the technologies to analyze social media data that in turn would affect people’s lives, because such data often is unreliable and could lack context, she says.

Self-Policing?

While regulators seek to guide development of trustworthy and ethical AI systems, achieving those goals in all circumstances may not be possible, Duarte says.

The European Union in April outlined seven requirements that artificial intelligence systems should meet to be considered trustworthy. They include having humans in the loop, and being fair, transparent, accountable and safe as well as ensuring privacy and protecting data.

But guidelines for specific applications would likely work better than broad principles for all artificial intelligence development, Duarte says. Some technologists say lawmakers and regulators should enforce existing laws that prohibit discrimination and bias instead of rushing into new rules and legislation for AI.

David Hoffman, associate general counsel and global privacy officer at Intel, takes a cautious position on the government addressing such biases when the technology relies on personal data.

“Privacy laws already address those,” he says.

Regulators such as the Federal Trade Commission lack resources and technical capacity to oversee fast growing technologies, Hoffman says. The agency’s privacy enforcement unit has remained static with about 40 staff members while the country’s population has grown and technological complexity has increased, he says. Intel has proposed that the FTC employ 500 additional staff, including lawyers and technologists.

Amir Husain, founder and CEO of SparkCognition, an Austin, Texas, company that builds artificial intelligence systems for a variety of industries, says the government should look to the past as a model for employing AI in its regulatory strategy.

When cars replaced horse-drawn carriages and bank robbers were able to make quicker getaways, the “answer was not to ban or regulate cars but to supply police with better, faster cars,” Husain says. Similarly, AI should be deployed as an oversight tool, he adds.

Husain also believes that proposals by the Department of Commerce to restrict export of AI and related technologies are pointless because proprietary data is the key to developing artificial intelligence tools, and that’s not subject to export controls.

Others echo Husain’s view that the technology itself may help address questions about bias and discrimination.

Detecting the biases lurking in databases is a task artificial intelligence itself is well equipped to handle, says Roslyn Docktor, IBM’s director of technology policy.

IBM last year launched its AI Fairness 360 online tool that includes 70 fairness metrics and 10 bias mitigation algorithms and can be applied to detect problems in credit scoring, gender bias in image recognition and racial bias in health care decisions, for example, according to the company.

“The tool helps those developing AI solutions to understand if there’s bias in their datasets” and offers suggestions on how to fix them, Docktor says.
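AI Fairness 360 is distributed as an open-source Python package, aif360. The sketch below shows how a developer might check a data set and apply one mitigation step; the toy loan data, column names and choice of metric are illustrative assumptions, not IBM’s own example.

```python
# Hedged sketch using IBM's open-source AI Fairness 360 toolkit (aif360 package).
# The toy loan data and column names are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],        # hypothetical loan decisions
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],        # 1 = privileged group, 0 = unprivileged
    "income":   [60, 35, 80, 52, 58, 33, 75, 40],
})

data = BinaryLabelDataset(df=df, label_names=["approved"],
                          protected_attribute_names=["group"])

privileged, unprivileged = [{"group": 1}], [{"group": 0}]

# Disparate impact: ratio of favorable-outcome rates between groups; 1.0 indicates parity.
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact:", metric.disparate_impact())

# One of the toolkit's mitigation algorithms: reweigh training examples so the
# two groups contribute equally before a model is trained on the data.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(data)
```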

Corporate intent, reflected in the kinds of technological advances a company chooses to pursue, can also offer clues about what is and is not being prioritized, says Philip Resnik, professor of linguistics and computer science at the University of Maryland.

Technologies and systems by themselves have no notion of what is good or ethical, but fortunately a “large community of researchers” is focused on principles such as fairness, accountability and transparency, he says.

If algorithms aiming for efficiency, for example, are trained on biased data, they may produce efficient automation but repeat past harms, Resnik says.

K Street Awakens

Whatever the complexities of the technology and the slow learning curve lawmakers face, lobbyists are on top of the issue on behalf of their clients and have ramped up their outreach to lawmakers over AI.

In 2015, Carnegie Mellon University was the lone organization to disclose “artificial intelligence” as a federal lobbying issue, according to quarterly reports filed with Congress. In the first months of this year, in contrast, nearly 100 interests have identified it, a signal that K Street sees it as an area of growth.

Washington’s largest lobbying practice at Akin Gump Strauss Hauer & Feld offers one example.

Earlier this year the firm hired former Rep. Lamar Smith, a Texas Republican who left at the conclusion of the 115th Congress after serving as chairman of the House Science, Space and Technology Committee. Smith and others at the firm have been pitching existing and potential clients about artificial intelligence. Akin Gump held a roundtable discussion with lawmakers in the AI Caucus plus outside stakeholders. Smith predicts a lot of activity this year with multiple bills touching AI already introduced.

“I think the government is also rightly concerned about privacy but equally concerned about the misuse of AI, malicious use,” Smith says. “Capitol Hill, Congress, the House and Senate, are just sort of trying to feel their way forward. I think they’re really just at the beginning of information gathering and the self-education process. At this point, we’re just at the beginning of the legislative process.”

The U.S. Chamber of Commerce is working through its Chamber Technology Engagement Center, or C_TEC, to help draft legislative ideas, says the group’s Tim Day.

The goal is to help “legislators and regulators to understand the technology before legislating or regulating,” he says.

The tussle to regulate technology also is playing out between employers and unions.

The trucking industry is seeking a federal safety standard for “highly automated vehicles,” arguing that such an approach is better than different state-by-state standards.

Chris Spear, president of the American Trucking Associations, says artificial intelligence can help improve safety and reduce congestion and pollution, citing the industry’s $75 billion a year loss because of congestion.

But unions, fearing that self-driven trucks may take away jobs, lobbied against a bill that would have set a federal standard for automated commercial vehicles.

Despite promises that the technology will make drivers’ lives easier, “it remains to be seen how that stuff’s going to roll out and whether the true goal is really just to take someone’s job,” says Sam Loesche, legislative representative for the Teamsters.

The push and pull between technologists and companies on the one side and labor unions and advocates on the other is likely to continue as lawmakers and regulators begin looking for ways to draw boundaries around artificial intelligence. “Without any question, AI is coming,” former Rep. Smith says. “We need to make sure it’s heading in the right direction.”

AI Timeline

Early 1900s: Mechanical and programmable computers begin to be developed in the United States, the United Kingdom and Germany.

1943: Walter Pitts, a logician, and Warren McCulloch, a neuroscientist, develop the first model of the neural networks of the human brain.

1950: Alan Turing, an English mathematician and computer scientist, publishes a paper called “Computing Machinery and Intelligence” in which he poses the question, “Can machines think?” He estimates that in 50 years computers would be capable of playing a game of questions called the Imitation Game in which a human interrogator may be tricked into believing that a machine is a human.

1951: Marvin Minsky, a computer scientist who later co-founded MIT’s Computer Science & Artificial Intelligence Laboratory, builds the first randomly wired neural network learning machine, the Stochastic Neural-Analog Reinforcement Computer, connecting 40 synapses.

1952: Researchers at IBM develop computer learning programs designed to play checkers that would learn from each game and find better ways to win based on past data.

1955: Herbert Simon, Allen Newell, and Cliff Shaw — cognitive scientists and economists funded by the RAND Corp. — design a software program called Logic Theorist to mimic problem-solving skills of a human — the first artificial intelligence program.

1959: Simon and Newell present a General Problem Solver, considered one of the earliest artificial intelligence programs, to solve mathematical problems in geometry and logic as well as play chess or solve word puzzles.

1960: Frank Rosenblatt, a psychologist at Cornell, builds the Perceptron, an electronic device constructed in accordance with biological principles that shows an ability to learn. The New York Times reports that the U.S. Navy expects the device to identify “its surroundings without any human training or control.”

1965: Alexey Ivakhnenko, a Russian and Ukrainian mathematician, and his associate V.G. Lapa develop a set of algorithms known as the Group Method of Data Handling, which sets the stage for the development of multilayer neural networks — an essential component of teaching computers to learn.

1966: Joseph Weizenbaum of The Massachusetts Institute of Technology’s artificial intelligence lab creates Eliza, an early natural language processing program.

1970s: The Defense Advanced Research Projects Agency funds several artificial intelligence research programs at U.S. universities with the goal of developing a machine that can transcribe and translate spoken languages and also conduct high throughput data processing.

1970: Marvin Minsky tells Time magazine that in “three to eight years we will have a machine with the general intelligence of an average human being” — a goal that has yet to be achieved.

1974-80: AI Winter I: After technology fails to live up to promises, government funding drops off in the United States and the United Kingdom.

1981: Japan’s Ministry of International Trade and Industry promises investment of about $850 million in a 10-year, so-called fifth generation computer project that would result in computers capable of understanding spoken language, translate, and diagnose diseases. The project fails to achieve its goals.

1982: John Hopfield of Princeton University creates a neural network named after him that uses a special type of computer memory to compare inputs with stored data and return matching data. The approach advances deep learning techniques.

1985: Terry Sejnowski, a computational neuroscientist at Johns Hopkins University, and Charles Rosenberg of Princeton develop NETtalk, a network system that converts English text to speech.

1989: Yann LeCun, a French computer scientist using neural networks, designs a system that can read handwritten digits. The technology is later used to read handwritten checks and ZIP codes.

Early 1990s: AI Winter II: Funding for artificial intelligence research drops again.

Mid 1990s: Scientists around the world continue to develop machine learning and neural networks.

1997: World chess champion and grandmaster Garry Kasparov is defeated by IBM’s Deep Blue chess-playing computer program.

1997: Dragon Systems speech recognition software called Naturally Speaking is launched and made available for Windows operating system computers.

2009: Stanford University’s Fei-Fei Li launches ImageNet, which now holds more than 15 million images that are labeled and available for free for researchers. The database is used to train neural networks in supervised learning.

2011: IBM develops Watson, a computer system that combines machine learning, natural language processing and information retrieval techniques. Watson competed and won on “Jeopardy!”

2012: Google Brain, a subsidiary focused on AI systems, launches the Cat Experiment with one of the largest-ever neural networks, feeding the system images of cats taken from 10 million YouTube videos without identifying them as such. The system teaches itself to recognize the felines in about 15 percent of the cases — a significant improvement over earlier attempts.

2014: Facebook launches DeepFace, a neural network image recognition system that identifies human faces with about 97 percent accuracy. Google also uses a similar image recognition technology.

2017: DARPA launches its Explainable AI project to create machine-learning systems that can explain their rationale to human users.

2017: Google’s AlphaGo program, developed by its artificial intelligence subsidiary DeepMind, uses machine learning and search techniques to teach itself how to play the Chinese abstract strategy board game Go and defeats world champion Ke Jie.

2018: U.S. Food and Drug Administration approves the use of AI-based medical devices for imaging applications.

Now: Google and Facebook image recognition, Apple’s Siri, Amazon’s Echo, Netflix’s recommendation engine, automatic email replies, chatbots and Waymo’s self-driving car experiment are some of the examples of the widespread use of deep learning in everyday applications.

© 2019 · CQ - Roll Call, Inc · All Rights Reserved.
