
Negative Impacts of AI: How Big a Threat Is It To Humanity?



Muskan | July 1, 2021

“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization, a billion-fold.” – Ray Kurzweil, American computer scientist and futurist who pioneered pattern-recognition technology.

Almost every industry is embracing Artificial Intelligence and reaping its benefits; in fact, we encounter AI every single day. However, with autonomous technology progressing at a breakneck pace, several stalwarts from science and engineering, including Stephen Hawking, have expressed concerns over the technology's increasing penetration. Tesla and SpaceX founder Elon Musk said, “I am really quite close… to the cutting edge in AI, and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”

Whether Artificial Intelligence is a threat to humanity is a question that has haunted humankind ever since computer scientist Alan Turing began to believe that computers might someday have an unlimited impact on us. We are already aware of some of AI's threatening characteristics. For starters, the pace of technological progress has been, and will continue to be, shockingly fast. OpenAI's GPT-3, for instance, was shockingly good. In another example, people were taken aback by Microsoft's announcement that AI had proven better than professional radiologists. Robots were intended to replace manual labour, not professional work; yet here we are: AI is quickly devouring entire professions, and those jobs will not return.

As Stephen Hawking stated: “Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”

Understanding what some of these negative impacts of AI might be is the first step towards preparing for them. Here are some of the most pressing challenges.

Negative Impacts of AI

Loss of Jobs

AI has broken the bottleneck of human efficiency and has reduced repetitious labour considerably. Perhaps the biggest concern associated with AI is the loss of certain types of jobs. While AI will generate employment, many jobs done by people today will be taken over by machines. In fact, AI has surpassed human capabilities in many areas, including speech translation and accounting.

Once the technology becomes more accessible, it will be very challenging, if not impossible, to restrict it. For instance, if a business can replace 50 human operators with a single chatbot service, imagine the money it will save. Similarly, if a bus operator with a fleet of 500 buses can replace its drivers with driverless buses and save money, it will do so. Besides, there is no legal implication in firing the humans. Indeed, management might feel bad about it, but from a purely shareholder-centric, single-bottom-line perspective, they might simply get over it. As a result, there is a fear that one day AI will take over many jobs. Humans will have to adjust training and educational programs to prepare both the future workforce and the current one to transition to better positions through upskilling.

Losing the Human Touch

The real fear with technology is humans being “replaced”. While basic automation, such as robot-run factories that handle simple, rudimentary tasks, did not really achieve this, new concerns have emerged with developments such as autonomous cars. Even if humans are not literally replaced, we might lose the “human touch” should AI become more prominent. After all, we still strive for human interaction, and AI cannot yet connect with us on an emotional level. Moreover, as impressive as the technology is, AI does not have all the answers. Suppose you had an unpleasant experience on a cab ride: understandably frustrated, you might want to speak to a human who can empathize with you instead of talking to a chatbot.

Algorithmic Bias

AI bias arises when an algorithm generates results that are systematically skewed due to erroneous assumptions. Incorrect, flawed, or incomplete data can result in inaccurate and biased predictions. In some cases, the data used to train the model can also reflect existing prejudices, stereotypes, and other faulty assumptions, which perpetuates real-world biases into the computer system itself. Facial recognition technology, for example, has not been racially inclusive in many instances. A study conducted by the Massachusetts Institute of Technology showed that facial analysis software exhibited an error rate of 34.7% for dark-skinned women, compared with an error rate of merely 0.8% for light-skinned men. In the wrong hands, AI can also be used to manipulate elections and spread misinformation.

Security Concerns and the Terror of Deepfakes

Although job loss is one of the most pressing issues associated with the emergence of AI, there are many other potential risks lingering around AI disruption. Among them are concerns about how AI could be used to tamper with privacy and security. A 2018 paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” sheds light on how the “malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g., through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”

Just as AI can be leveraged to detect and halt cyberattacks, the technology can unfortunately also be used by cybercriminals to launch sophisticated attacks. With the decreasing cost of development and increasing research in the field, access to emerging technologies has eased considerably, which means hackers can build more sophisticated and harmful software with less effort and at a lower cost. As with deepfakes, AI is making it extremely simple to produce fake videos of real people. In the worst scenarios, these videos may be used with the malicious intent of sabotaging an individual's reputation or fueling political propaganda. For instance, an audio recording of a politician may be altered to make it seem as though the person spewed racist ideas when in reality they did not, completely misleading the public.
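The algorithmic-bias mechanism described above can be made concrete with a minimal, purely illustrative sketch (all data, group names, and numbers here are invented): a naive model “trained” on historically skewed approval data does nothing more than reproduce that skew in its predictions.

```python
from collections import defaultdict

# Invented historical loan decisions: group "A" was approved far more
# often than group "B" for reasons unrelated to merit.
history = ([("A", 1)] * 90 + [("A", 0)] * 10 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

# "Training": learn each group's historical approval rate.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group, threshold=0.5):
    """Approve if the group's historical approval rate exceeds the threshold."""
    approvals, total = counts[group]
    return approvals / total >= threshold

print(predict("A"))  # True  (historical rate 0.90)
print(predict("B"))  # False (historical rate 0.30)
```

Real systems are far more complex, but the failure mode is the same: the model faithfully reproduces whatever prejudice its training data encodes.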

Challenges Concerning Transparency

In recent years there has been a push for greater transparency into the inner workings of AI models, and for good cause: transparency may mitigate problems of prejudice, partiality, and injustice, which have drawn attention lately. In one such instance, Apple's credit card business was accused of operating a sexist, discriminatory lending model. Apple's answer only added to the uncertainty and mistrust: no one from the company could explain how the algorithm operated, much less defend its results. Goldman Sachs, the issuing bank, claimed that the algorithm was gender-neutral but failed to provide any evidence. The company then stated that a third party had reviewed the algorithm and that it did not even consider gender as an input. This justification was not well received, because even if the algorithm did not take gender as an input, it could still discriminate based on gender. Moreover, enforcing intentional blindness to a critical variable like gender makes it more difficult for a business to identify, prevent, and reverse prejudice on that variable.

However, AI disclosures come with their own set of risks: explanations can be hacked, exposing more data may render AI more vulnerable to attacks, and public disclosures may expose businesses to regulatory action. In short, while greater transparency can provide tremendous advantages, it may also introduce equally harmful risks and threats. To mitigate them, businesses must consider how they manage the information generated about these risks and how data is shared and safeguarded.

Autonomous Weapons and AI-enabled Terrorism

Of late, there have been many heated debates amongst industry leaders, journalists, and governments about how AI is enabling deadly autonomous weapons systems and what could happen if this technology falls into the wrong hands. Although the original intent of incorporating AI in military operations is to safeguard humans, it is not difficult to imagine the potential negative implications. At present, a variety of weapon systems with differing degrees of human participation are being tested. For instance, the Taranis drone, an autonomous combat aerial vehicle, is anticipated to be fully operational by 2030 and is expected to be capable of replacing the human-piloted Tornado GR4 fighter planes. Though “killer robots” do not exist yet, they are expected to remain a concern for many, and codes of ethics are already being developed. The question is whether a robot combatant can understand and implement the Laws of Armed Conflict, and whether it can differentiate between friend and foe. It is for this reason that Human Rights Watch has urged prohibitions on fully autonomous AI units that can make lethal decisions, recommending a ban similar to those in place for mines and chemical weapons.

Challenges with AI Regulation

It is widely believed that regulation is the only way to prevent, or at least temper, the most malicious AI from wreaking havoc. However, there is a caveat: experts believe that regulating AI implementation is acceptable, but not regulating the research. Regulation of AI research can stifle progress itself, which can turn out to be rather dangerous, kill innovation, or rob the country that regulates it. Peter Diamandis, a Greek-American engineer and entrepreneur, stated: “If the government regulates against the use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.” Among the many benefits AI will bring are improvements in health and transportation and the reshaping of business. The command-and-control paradigm should be replaced with humility, cooperation, and voluntary solutions to change. As intelligent machines become more prevalent, innovative policies must follow.
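Returning briefly to the transparency discussion above: the claim that dropping a sensitive variable does not make a model neutral can be shown with a minimal, invented sketch. Every feature, weight, and value below is hypothetical; the point is only that a “gender-blind” scoring rule can still discriminate through a correlated proxy.

```python
# Hypothetical applicants: the model never reads "gender", but in this
# invented dataset "job_title" is strongly correlated with it.
applicants = [
    {"gender": "F", "job_title": "nurse", "income": 60},
    {"gender": "M", "job_title": "engineer", "income": 60},
]

# Weights supposedly "fit" on biased historical data: titles held mostly
# by women were under-approved, so the learned weight penalizes them.
title_weight = {"engineer": 1.5, "nurse": 0.5}

def score(applicant):
    # Gender is deliberately not used anywhere in this function.
    return applicant["income"] * title_weight[applicant["job_title"]]

for a in applicants:
    print(a["gender"], score(a))
# Same income, different outcomes:
# F 30.0
# M 90.0
```

The proxy does the discriminating, which is exactly why auditors argue that ignoring a variable is not the same as being fair with respect to it.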

Approaching Safe AI

Developing advanced AI systems is fraught with uncertainty and disagreement about the anticipated timeline, but whatever the speed of progress in the field, there is valuable work that can be done right now. In the coming years, AI systems are bound to become more powerful and sophisticated. While much about the future is uncertain, now is the time for serious efforts to lay the groundwork for future systems, with an understanding of the possible dire consequences. If we do our homework and take the appropriate steps now, we will be better prepared for whatever possibilities the future holds. Globally, the community working towards safe and beneficial AI has grown starkly, thanks to AI researchers demonstrating leadership on this issue. Safe-AI research is being led by teams at OpenAI and DeepMind, among others, and research into AI governance is also emerging as a field of study.

Summing Up

A better approach is required to manage a future with growing artificial intelligence and machine-intelligence intermediaries. As we harness the opportunities that AI is creating, we must vigorously confront ethical issues across all areas, including transportation, safety, healthcare, and criminal justice. AI's continued transformation will bring many beneficial impacts, but, as with any change in life, it may also come with potential repercussions. In light of the rapid advancement of the field, we need to start debating how we can develop AI constructively while minimizing its destructive potential.

Muskan
