
Try to imagine for a moment a declaration from Congress to the effect that safeguarding the environment is important, that the effects of pollution on the environment ought to be monitored, and that special care should be taken to protect particularly vulnerable and marginalized communities from toxic waste. So far, so good! Now imagine this resolution is enthusiastically endorsed by ExxonMobil and the American Coal Council. You would have good reason to be suspicious. Keep that in mind while you consider the newly announced House Resolution 153.
Last week, several members of Congress began pushing the resolution with the aim of “supporting the development of guidelines for ethical development of artificial intelligence.” It was introduced by Reps. Brenda Lawrence and Ro Khanna — the latter of whom, crucially, represents Silicon Valley, which is to the ethical development of software what West Virginia is to the rollout of clean energy. Representing that district has helped make Khanna a national figure, in part because, far from being a tech industry cheerleader, he has publicly supported cracking down on the data Wild West his home district helped create. For example, he has criticized the wrist-slaps Google and Facebook receive in the wake of their regular privacy scandals and called for congressional action against Amazon’s labor practices.
The resolution, co-sponsored by seven other representatives, has some strange fans. Its starting premises are unimpeachable: “Whereas the far-reaching societal impacts of AI necessitates its safe, responsible, and democratic development,” the resolution “supports the development of guidelines for the ethical development of artificial intelligence (AI), in consultation with diverse stakeholders.” It also supports adherence to a list of crucial values in the development of any kind of machine or algorithmic intelligence, including “[i]nformation privacy and the protection of one’s personal data”; “[a]ccountability and oversight for all automated decision making”; and “[s]afety, security, and control of AI systems now and in the future.”
These are laudable goals, if a little inexact: Key terms like “control” and “oversight” are left entirely undefined. Are we talking about self-regulation here — which algorithmic software companies want because of its ineffectiveness — or real, governmental regulation? When the resolution mentions accountability, are Khanna and company envisioning harsh penalties for AI mishaps, or is this a call for more public relations mea culpas after the fact?
Details in the press release that accompanied the resolution might explain the wiggle room — or make one question the whole spiel. H.R. 153 “has been endorsed by the Future of Life Institute, BSA | The Software Alliance, IBM, and Facebook,” the release says.
The Future of Life Institute is a loose organization of concerned academics, as well as Elon Musk and, inexplicably, actors Alan Alda and Morgan Freeman. Those guys aren’t the problem, though. The real cause for concern is not that a resolution expresses a desire to rein in artificial intelligence, but that it does so with endorsements from Facebook and IBM — two fantastic examples of why such reining in is crucial. It’s hard to square the track records of either company with many of the values listed in the resolution.
Facebook — the world’s largest advertising network that happens to include social sharing features — is already leveraging artificial intelligence in earnest, and not just to track and purge extremist content, as touted by CEO Mark Zuckerberg. According to a confidential Facebook document obtained and reported on last year by The Intercept, the company is courting corporate partners with a new machine learning capability that makes explicit the goal of all marketing: to predict the future choices of consumers and invisibly change those choices without any forewarning. Using a technology called FBLearner Flow, the company boasts of its ability to “predict future behavior”; this allows it to offer corporations the ability to target advertisements at users who are “at risk” of making choices considered unfavorable to a given brand, ideally changing users’ decisions before they even know they are going to make them. The company is also facing a class-action lawsuit over its controversial facial tagging feature, which uses machine intelligence to automatically identify and pair a Facebook user’s likeness with the company’s existing trove of personal information. The feature was rolled out without notice or anything resembling informed consent.
IBM’s machine intelligence adventures so far have been arguably more disquieting. Watson, the firm’s flagship AI product best known for its “Jeopardy!” victories, was found last year to have “often spit out erroneous cancer treatment advice,” according to a report in Stat. Last year, The Intercept revealed that the New York Police Department was sharing troves of surveillance camera footage with IBM to develop software that would allow other police departments to search for people by hair color, facial hair, and skin tone. Another 2018 Intercept report revealed that IBM was one of several tech firms lining up for a crack at aiding the Trump administration’s algorithmic “extreme vetting” program for immigrants — perhaps unsurprising, given that IBM CEO Ginni Rometty personally offered the company’s services to Trump following his election and later sat on a private-sector advisory board supporting the White House.
Although it’s true that artificial intelligence in the full sense has yet to be developed and perhaps never will be, its precursors — lesser machine-learning or self-training algorithms — are already powerful instruments and are growing more so every day. It’s hard to imagine two firms that should be kept further from the oversight of such wide-reaching technology. For Facebook, a company that keeps the functionality of its intelligent software secret with a fervor rarely seen outside of the Pentagon, to endorse a resolution that calls for “[a]ccountability and oversight for all automated decision making” is absurd. That Facebook co-signed a resolution that hailed “[i]nformation privacy and the protection of one’s personal data” is something worse than absurd. So, too, is the fact that IBM, which sought the opportunity to build software to support the Trump administration’s immigration policies, would endorse a resolution to “empower … underrepresented or marginalized populations” through technology.
In a phone interview with The Intercept, Khanna defended the endorsements as being little more than the proverbial thumbs-up, and insisted that Facebook and IBM should have a seat at the table if and when Congress tackles meaningful federal regulation of AI. Such legislation, he thinks, must be “crafted by experts,” if not outright drafted by them. “I think the leaders of Silicon Valley are very concerned about an ethical framework for artificial intelligence,” Khanna said, “whether it’s Facebook or Sheryl Sandberg. That doesn’t mean they’ve been perfect actors.”
Khanna was careful to reject the notion of “self-regulation,” which tech firms have favored for its total meaninglessness. “The past few years have showed self-regulation doesn’t work,” said Khanna. Although he rejected the idea that tech firms could help directly shape future AI regulation, Khanna added, “It would be foolish to not involve some of the leading thinkers who happen to be at these companies.”
Asked if he imagined future AI “oversight,” as mentioned in the resolution, including independent audits of corporate black-box algorithms, Khanna replied that it “depends for what” — as long as it doesn’t mean that Facebook has to run every one of its algorithms before a regulatory agency, which would “stifle innovation.” Khanna suggested, however, that there are scenarios where government involvement would be necessary, such as “periodic checks on algorithms.” He said, “If, for example, the FTC” — the Federal Trade Commission — “received a complaint that an algorithm was systematically showing bias and there was some standard of probable cause, that should trigger an audit.”
Yet hashing out these and countless other specifics on the how, when, and who of algorithmic oversight will be a long slog, with or without Facebook’s endorsement.