
The True Threat of Artificial Intelligence

Hollywood has long portrayed artificial intelligence (A.I.) as a genius, malevolent, conscious, inhuman being that poses a grave threat to humanity. In 2001: A Space Odyssey, the frightening A.I. HAL 9000 delivers one of the most chilling lines in cinematic history to Dr. Bowman, "I'm sorry, Dave. I'm afraid I can't do that," nearly killing him by denying him re-entry into the spaceship. Skynet is the iconic, wicked A.I. of the renowned Terminator series. And in Pixar's 2008 classic WALL-E, the fearsome autopilot A.I., AUTO, is the main villain, interfering with Captain McCrea's noble goal of returning to Earth. However, we are extremely far from the terrifying A.I. future that Hollywood has been depicting; in fact, the A.I. we have today is enormously beneficial to society. Google Translate, a completely free service that lets anyone make sense of languages other than their native tongue, was dramatically improved by Google Brain, a deep learning system, and its new Japanese-to-English translation accuracy astonished a professor at the University of Tokyo (Lewis-Kraus). PayPal is deploying A.I. to stop money laundering, and Deep Instinct is using it to detect malware (Brynjolfsson and McAfee). Technology giants clearly recognize how beneficial A.I. is to society, which explains why they invested an estimated $20 to $30 billion in developing A.I. and acquiring A.I. companies in 2016 (Columbus). Yet machine learning, the technique at the heart of contemporary A.I., is also its central problem, and that makes it a serious problem for a society that is steadily integrating A.I. into daily life.


To understand why machine learning is the problem at the heart of modern A.I., we must first understand what machine learning is and why we use it. The crucial point is that machine learning is a "fundamentally different approach to creating software: the machine learns from examples, rather than being explicitly programmed for a particular outcome" (Brynjolfsson and McAfee). In other words, when we write software the traditional way, we are simply transferring existing knowledge and commands from our heads into a language the machine can understand and execute. That is coding's main weakness: we simply cannot write down all of the innumerable instructions for teaching someone to drive or to recognize a face (Brynjolfsson and McAfee). Machine learning sidesteps this weakness because the computer actually learns, rather than executing instructions we spell out by hand. There are many machine learning techniques, but the most successful have been "supervised learning systems," according to Brynjolfsson and McAfee's Harvard Business Review article.
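To make that contrast concrete, here is a minimal, purely illustrative Python sketch; the spam-filter scenario, the toy messages, and the function names are my own invention rather than anything drawn from the cited articles. The first function encodes hand-written rules, while the second derives its "rules" from labeled examples, which is the essence of learning from data.

# Illustrative toy only: the messages and word lists below are invented for this sketch.

# Explicit programming: every rule has to be written down by hand.
def is_spam_by_rules(subject: str) -> bool:
    return "free money" in subject.lower() or "winner" in subject.lower()

# Machine learning: the "rules" are derived from labeled examples instead.
labeled_examples = [
    ("Claim your free money now", True),
    ("Meeting moved to 3pm", False),
    ("You are a winner", True),
    ("Quarterly report attached", False),
]

def learn_spam_words(examples):
    """Keep the words that appear only in messages labeled as spam."""
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words

def is_spam_by_learning(subject: str, spam_words: set) -> bool:
    return any(word in spam_words for word in subject.lower().split())

spam_words = learn_spam_words(labeled_examples)
print(is_spam_by_learning("Free money inside", spam_words))  # True

Even in this toy form, the learned version improves simply by being handed more labeled examples, without anyone rewriting its rules by hand.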


A "supervised learning system" is one in which a machine is given an enormous number of correct answers to a particular problem in the hope that it will produce correct answers on its own later; successful machine learning systems regularly use gigantic training sets containing thousands to millions of examples labeled as "correct" (Brynjolfsson and McAfee). Generally, the more correctly labeled data we feed it, the higher the system's predictive accuracy when it is faced with unlabeled data. For example, if we give the machine a million examples of the color red and tell it that this is what "red" looks like, then when we show it a colorful painting it has a better chance of identifying the red in it. One renowned example of this kind of large-scale learning is the "Cat Paper" by Quoc Le, a Google engineer, whose team fed Google Brain a vast number of images taken from YouTube, and the system produced a blurry picture that unmistakably resembles a cat (Lewis-Kraus). Older generations of machine learning systems, however, plateau after a certain amount of training data: adding more correct examples no longer improves their predictions. Newer systems use an approach called "deep learning," which relies on huge neural networks running on powerful computers, and their predictions keep getting more accurate the more data we feed them (Brynjolfsson and McAfee). In a sense, the machine learns the way we do, through trial and error, and its experience is stored in deep neural networks loosely modeled on our brains; philosophically speaking, we are designing something in our own image when we build A.I. through machine learning, and biblically speaking, we are playing god.
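As a deliberately tiny sketch of the "red" example above, assuming nothing beyond standard Python (the RGB samples and function names are invented for illustration): the "training" step averages the labeled color samples, and prediction assigns a new, never-before-seen shade to the closest learned average.

# Toy supervised learning on the essay's "red" example; the RGB samples are hand-made.
labeled_colors = {
    "red":   [(255, 0, 0), (200, 30, 30), (180, 0, 40)],
    "green": [(0, 255, 0), (30, 180, 40), (10, 120, 10)],
    "blue":  [(0, 0, 255), (20, 40, 200), (40, 30, 150)],
}

def centroid(samples):
    """Average the labeled examples for one color: the 'learning' step."""
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

centroids = {label: centroid(samples) for label, samples in labeled_colors.items()}

def predict(rgb):
    """Label a new, unlabeled pixel by the closest learned average."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(rgb, centroids[label]))

print(predict((220, 10, 25)))  # "red", a shade the system has never seen before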


It is naive to think that machine learning systems will never surpass human performance; in some areas, they already have. According to Lewis-Kraus of the New York Times, neural networks already do a better job than highly paid radiologists at detecting tumors in medical images and making diagnoses from pathology reports. J.P. Morgan Chase introduced a system that reviews commercial loan contracts in just a few seconds, work that would otherwise take lawyers the equivalent of more than 41 years (Brynjolfsson and McAfee). Instead of spending hundreds of millions, or even billions, of dollars on salaries, a business would only need to spend hundreds of thousands of dollars on an A.I. system's electricity bills and the engineers who maintain it. It is easy to see why companies are pouring billions into developing A.I.: the economic incentive is too great to pass up. It is equally easy to see why A.I. seems to threaten the economy, potentially leaving thousands or even millions of people unemployed.


Kai-Fu Lee's New York Times article raises thought-provoking ideas about how A.I. could create economic inequality. Lee believes that A.I. will replace many jobs, especially low-paying ones, but some high-paying ones as well. He asks us to imagine "how much money a company like Uber would make if it used only robot drivers, the profits if Apple could manufacture its products without human labor, and the gains to a loan company that could issue 30 million loans a year with virtually no human involvement." Lee goes on to argue that China and the United States are the only two nations with huge A.I. markets, which means new talent is drawn almost exclusively to those countries; and as more talent arrives, their A.I. systems get better still, essentially creating two huge monopolies that could control much of the world's wealth in the near future. Put simply, he foresees "enormous wealth concentrated in relatively few hands and enormous numbers of people out of work." It seems easy and logical, then, to assume that A.I. will create huge economic inequality, since it has the potential to force people out of their jobs. The reality, however, is quite different.


The reality of contemporary A.I. is that humans and machines are working together harmoniously and productively. Udacity deployed a machine learning system to increase its salespeople's productivity: the company used its enormous chat room logs as training data for a supervised learning system, labeling the "interactions that led to a sale" as correct and the others as incorrect. The system's goal was "to predict what answers successful salespeople were likely to give in response to certain very common inquiries and then share those predictions with the other salespeople to nudge them toward better performance"; as a result, salespeople served twice as many customers while becoming 54% more effective (Brynjolfsson and McAfee). An earlier example in this paper showed that neural networks are better than radiologists at detecting tumors and making diagnoses, which might suggest we no longer need radiologists, but that is not the case. Instead, it means radiologists have more time to focus on truly critical cases, to communicate with their patients, and to coordinate with other physicians more effectively (Brynjolfsson and McAfee). Thus, even though Lee's concerns are rational, the truth is that A.I., or machine learning systems, can rarely replace an entire job, process, or business model. More often, contemporary A.I. systems complement human activity, making us more effective and efficient in our daily lives. In fact, the problem with A.I. was never its potential to disrupt the economy; ironically, the problem lies in the machine learning process itself.
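The following is a rough reconstruction of the kind of workflow described for Udacity, not their actual code; the questions, replies, and labels are invented. Only the interactions labeled as having led to a sale are used to learn which reply to suggest for each common inquiry.

# Invented chat data standing in for Udacity's real logs; a reconstruction, not their code.
from collections import Counter, defaultdict

# Each record: (customer question, salesperson reply, whether the chat led to a sale)
chat_log = [
    ("price?",  "Here is a link to a live demo first.",  True),
    ("price?",  "It costs $200 per seat.",               False),
    ("price?",  "Here is a link to a live demo first.",  True),
    ("refund?", "Refunds are available within 30 days.", True),
    ("refund?", "Please contact billing.",               False),
]

# "Training": count only the replies from interactions labeled as successful.
replies_by_question = defaultdict(Counter)
for question, reply, led_to_sale in chat_log:
    if led_to_sale:
        replies_by_question[question][reply] += 1

# For each common inquiry, suggest the reply most associated with a sale.
best_reply = {q: counter.most_common(1)[0][0] for q, counter in replies_by_question.items()}
print(best_reply["price?"])  # the answer successful salespeople tended to give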


To reiterate, machine learning is at the heart of modern A.I.: we train the machine on "correct" data. But what if the data we give it is biased and morally wrong? The results so far have ranged from comical to plainly sad. U.S. courts have used COMPAS, software built with machine learning algorithms, for risk assessment; sadly, COMPAS was biased against black defendants, as it would "mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people: 45% to 24%." PredPol, "a program for police departments that predicts hotspots where future crimes might occur," was trained on previous, bigoted crime reports, which made it more likely to recommend extra patrols in black and brown neighborhoods. "A Google image recognition program labelled the faces of several black people as gorillas," and "a LinkedIn advertising program showed a preference for male names in searches," meaning LinkedIn was making men more likely to be found for a job. Microsoft's Tay, an A.I. chatbot, spent a day training on Twitter, only to begin spamming anti-Semitic messages (qtd. in Buranyi). Beauty.ai held the world's first "AI-driven beauty contest," to which 6,000 people applied and from which only 44 winners were chosen, the overwhelming majority of them white; the company later admitted it had trained the machine on data that "were not balanced in terms of race and ethnicity" (qtd. in Dickson). It seems obvious that we should pay far more attention to making machine learning's training data unbiased and objective, and yet we are not.
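The mechanism behind these failures is easy to demonstrate. The sketch below uses entirely made-up labels for two fictional districts, not data from COMPAS or PredPol, and the simplest possible "learner"; the point is only that a model trained on skewed labels faithfully reproduces that skew.

# Made-up labels for two fictional districts; not data from any real system.
from collections import defaultdict

# Each record: (neighborhood, label assigned by past human decisions)
training_data = [
    ("district_a", "high_risk"), ("district_a", "high_risk"),
    ("district_a", "high_risk"), ("district_a", "low_risk"),
    ("district_b", "low_risk"),  ("district_b", "low_risk"),
    ("district_b", "low_risk"),  ("district_b", "high_risk"),
]

# "Training" here is just memorizing the majority label per group,
# the simplest possible learner, but the bias transfer works the same way.
counts = defaultdict(lambda: defaultdict(int))
for group, label in training_data:
    counts[group][label] += 1

model = {group: max(labels, key=labels.get) for group, labels in counts.items()}
print(model)  # {'district_a': 'high_risk', 'district_b': 'low_risk'}
# The model did exactly what it was trained to do: reproduce the skew in its data.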


Knight of the MIT Technology Review reports bluntly that "companies that develop and apply machine learning systems, and government regulators show little interest in monitoring and limiting algorithmic bias." The fact that companies and governments pay so little attention to building unbiased machine learning systems is incomprehensible, because the risk goes far beyond depressing instances of racism; it reaches humanity itself. According to Barrat's Huffington Post article, "more than 50 nations are developing battlefield robots. The most sought-after will be robots that make the 'kill decision' — the decision to target and kill someone — without a human in the loop." The most striking thing in Barrat's report is that nations are developing battlefield robots that will be a literal Grim Reaper, able to decide to take a human life through their own reasoning. Machine learning will be the process by which nations develop these battlefield robots, and we know such systems can be corrupted by biased data. For example, if a nation continually trains its killer robots on data that labels African Americans as "more dangerous" than other ethnicities, then dark-skinned individuals will be more likely to meet that Grim Reaper sooner in their lives.


Will Knight, a senior editor at the MIT Technology Review, plainly states in his article "The Dark Secret at the Heart of AI" that we do not know how A.I. arrives at its conclusions. An experimental car developed by Nvidia taught itself to drive simply by watching humans do it; yet its engineers and programmers could not explain its behavior, and it was completely unclear how the car made its driving decisions, because the system was so complicated (Knight). In 2015, a research team at Mount Sinai Hospital in New York applied deep learning to the hospital's enormous database, training a program called "Deep Patient" on the records of 700,000 people. Besides detecting numerous illnesses, Deep Patient proved surprisingly good at anticipating "the onset of psychiatric disorders like schizophrenia," even though schizophrenia "is notoriously difficult for physicians to predict." Deep Patient left its developers puzzled because it offered no clue as to how it could do this; its lead developer simply stated that "we can build these models, but we don't know how they work" (Knight). We have no idea how these A.I. systems finalize their decisions, and yet "the U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data" (Knight). How can we be sure that A.I. developed by the U.S. military, which will control deadly machinery, is safe for civilians when we do not even know how it thinks? If we cannot know how an A.I. thinks, then the least we can do is ensure that its thinking is not biased.


For future A.I. systems to make objective judgments rather than subjective ones, we must ensure that they are trained on unbiased data. However, there are no international or national regulations governing machine learning's training data, and in fact researchers usually "use and share off-the-shelf frameworks and databases that already have bias ingrained into them" (Dickson). Again, it is incomprehensible that legislatures simply ignore this pressing problem when we are building something with the capability to literally end lives. John Giannandrea, the A.I. chief at Google, plainly stated his concern about biased A.I.: "It is important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems… If someone is trying to sell you a black box system for medical decision support, and you do not know how it works or what data was used to train it, then I would not trust it" (Knight). We should be genuinely concerned when one of the technology industry's prominent leaders speaks up about the perils of bias in A.I., and we must take his recommendations seriously. Building on Giannandrea's recommendations, one solution would be "to create shared and regulated databases that are in possession of no single entity, thus preventing any party from unilaterally manipulating the data to their own favor" (Dickson); in other words, machine learning's training data would have a lower chance of being biased if it were regulated and transparent. It is also a relief that the industry's giants, including Facebook, Amazon, Google, IBM, and Microsoft, came together to found the Partnership on AI, ensuring that A.I. development is examined from many perspectives so that it does not become a problem for the public.
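In practice, the transparency Giannandrea calls for can begin with something very simple: before training, audit how the labels are distributed across groups. The sketch below is a hypothetical illustration of such a check; the field names and records are invented.

# Hypothetical pre-training audit; the field names and records are invented.
from collections import Counter

def audit_label_balance(records, group_field, label_field, positive_label):
    """Report the positive-label rate per group so skews are visible before training."""
    totals, positives = Counter(), Counter()
    for record in records:
        group = record[group_field]
        totals[group] += 1
        if record[label_field] == positive_label:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

# Toy records standing in for a real training set.
records = [
    {"group": "A", "label": "approved"}, {"group": "A", "label": "approved"},
    {"group": "A", "label": "denied"},
    {"group": "B", "label": "denied"},   {"group": "B", "label": "denied"},
    {"group": "B", "label": "approved"},
]

print(audit_label_balance(records, "group", "label", "approved"))
# Roughly {'A': 0.67, 'B': 0.33}: a gap that needs explaining before the model ships.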


It is apparent that we cannot stem the development of A.I., since it is already woven into our daily lives. Open the Photos app on your iPhone and search for something specific, like a dog or a tree; thanks to machine learning, your iPhone can correctly identify the object you searched for. Machine learning is at the heart of contemporary A.I., but it is also its central problem. Machine learning has undoubtedly helped organizations improve their productivity, despite concerns that A.I. will damage the economy; A.I. complements human activity and can rarely replace a job entirely. Still, effective machine learning needs training data, and we must not train the machine on our biased beliefs, because, according to Giannandrea, the A.I. chief at Google, "if we give these systems biased data, they will be biased." We do not want to hand a biased machine responsibility for deadly machinery, whether it is driving a car, piloting a fighter jet, or flying a passenger plane. We must therefore urge our legislatures to regulate machine learning's training data so that it is unbiased and transparent. If we fail, and the machines we create become both self-aware and biased, then the terrifying A.I. future that Hollywood has been portraying could soon become our reality.


Sources:

  1. Barrat, James. “Why Stephen Hawking and Bill Gates Are Terrified of Artificial Intelligence.” The Huffington Post, Oath Inc., 9 Apr. 2015, www.huffingtonpost.com/james-barrat/hawking-gates-artificial-intelligence_b_7008706.html. Accessed 5 Dec. 2017.

  2. Brynjolfsson, Erik, and Andrew McAfee. “The Business of Artificial Intelligence.” Harvard Business Review, Harvard Business School Publishing, 7 Aug. 2017, hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence. Accessed 5 Dec. 2017.

  3. Buranyi, Stephen. “Rise of the racist robots – how AI is learning all our worst impulses.” The Guardian, Guardian News and Media, 8 Aug. 2017, www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses. Accessed 5 Dec. 2017.

  4. Columbus, Louis. “McKinsey's State Of Machine Learning And AI, 2017.” Forbes, Forbes Magazine, 9 July 2017, www.forbes.com/sites/louiscolumbus/2017/07/09/mckinseys-state-of-machine-learning-and-ai-2017/#51e3903675b6. Accessed 5 Dec. 2017.

  5. Dickson, Ben. “Why it's so hard to create unbiased artificial intelligence.” TechCrunch, TechCrunch, 7 Nov. 2016, techcrunch.com/2016/11/07/why-its-so-hard-to-create-unbiased-artificial-intelligence/. Accessed 5 Dec. 2017.

  6. Knight, Will. “Biased algorithms are everywhere, and no one seems to care.” MIT Technology Review, MIT Technology Review, 12 July 2017, www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/. Accessed 5 Dec. 2017.

  7. Knight, Will. “Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead.” MIT Technology Review, MIT Technology Review, 3 Oct. 2017, www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/. Accessed 5 Dec. 2017.

  8. Knight, Will. “There's a big problem with AI: even its creators can't explain how it works.” MIT Technology Review, MIT Technology Review, 12 May 2017, www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed 5 Dec. 2017.

  9. Lee, Kai-Fu. “The Real Threat of Artificial Intelligence.” The New York Times, The New York Times, 24 June 2017, www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html. Accessed 5 Dec. 2017.

  10. Lewis-Kraus, Gideon. “The Great A.I. Awakening.” The New York Times, The New York Times, 14 Dec. 2016, www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html. Accessed 5 Dec. 2017.

