Tuesday, November 12, 2019
Artificial Intelligence: Are the Machines Taking Over? (Essay)
A machine is just a machine, made of metal, plastic, silicone and computer chips, so it is only as smart as the human who programmed it, right? Yet the strides made thus far are only the beginning of the huge impact and achievements of the computer revolution. Technological advances are creating machines, usually computers, that are able to make seemingly intelligent decisions, or act as if possessing intelligence on a human scale. It may only be a matter of time before we live in a world of robots that serve humans, as portrayed in the 20th Century Fox movie "I, Robot", because researchers are creating systems which can mimic human thought, understand speech and even play games with us. As our minds evolve, so do our imagination and the creations we come up with. Artificial intelligence may have been first imagined as an attempt at replicating our own intelligence, but the possibility of achieving true artificial intelligence is closer than any of us have imagined.

Computers, when first invented, were fast at computing data; now they communicate and calculate much faster than most human beings, yet they still have difficulty with certain functions such as pattern recognition. Today, research in artificial intelligence is advancing rapidly, and many people feel threatened by the possibility of a robot taking over their job, leaving human beings without work. When computers were first developed in the 1950s, the hype about how machines could think like human beings took the scientific world by storm, but the truth of the matter was that computers were very slow and not capable of what their inventors thought they could be. Decades later, an IBM computer defeated world chess champion Garry Kasparov at a game of chess and the hype was reborn. People immediately believed that computers would take over the world and robots would be here to stay.

When thinking about Artificial Intelligence (AI), we have to look at what is considered strong AI and what is considered weak AI. Strong AI makes the bold claim that computers can be made to think on a level at least equal to humans; that they are capable of cognitive mental states. This is the kind of AI portrayed in movies like "I, Robot": the computer thinks and reasons like a human being. This is human-like AI. Also a form of strong AI is non-human-like AI, in which a computer program develops a totally non-human sentience and a non-human way of thinking and reasoning. Weak AI simply states that some "thinking-like" features can be added to computers to make them more useful tools; that machines can simulate human cognition, in other words act as if they are intelligent. This has already started to happen, for example in speech recognition software.

Much of the focus in AI research draws from an experimental approach to psychology, looking at things such as mood and personality, and emphasizes what may be called linguistic intelligence. As an article from the University of Zurich titled "Experimental Standards in Research on AI and Humor when Considering Psychology" puts it: "Laughter is a significant feature of human communication, and machines acting in roles like companions or tutors should not be blind to it. So far, the progress has been limited that allows computer-based applications to deal with laughter and its recognition in the human user. In consequence, only few interactive multimodal systems exist that utilizes laughter in interaction" (Platt et al. 2012).
Laughter partly contributes to mood in human beings, and in research it is just one element that is being recreated in AI. "Understanding the psychological impact of the interface between computer and human allows for the evaluation of the AI's success" (Platt et al. 2012). Linguistic intelligence is best explained, or shown, by the Turing test. Named for Alan Turing, who in 1937 was one of the "first people to consider the philosophical implications of intelligent machines" (Bowles 2010), the Turing test was designed to "prove whether or not a computer was intelligent" (Bowles 2010). The test consisted of a judge having a conversation with both a person and a computer, each hidden behind a curtain, to determine which was the person and which the computer. If the determination could not be made, then the computer was considered to be intelligent. "The Turing Test became a founding concept in the philosophy of artificial intelligence" (Bowles 2010).

AI development also draws information and theories from animal studies, specifically studies of insects. Insect movements have proven easier to emulate with robots than the movements of humans. It has also been argued that other animals, being simpler than humans, should be easy to mimic as well; however, the study of insects has proven more productive.

Practical applications of computers with artificial intelligence could be nearly endless. One such application was presented in 1997 with Deep Blue, a chess-playing computer built by IBM. In that same year, "Deep Blue was able to beat Garry Kasparov, the world's highest ranking chess player, in a series of six matches" (Bowles 2010). Deep Blue was a highly powerful computer programmed to solve the complex, strategic game of chess, but IBM's goal behind Deep Blue was a much grander challenge.

Other applications include optical character recognition, such as the license plate readers used on police cars. License Plate Reading (LPR) technology uses specialized cameras and computers to quickly capture large numbers of photographs of license plates, convert them to text and compare them against a large list of plates of interest (a rough sketch of this matching step appears at the end of this section). LPR systems can identify a target plate within seconds of contact, allowing law enforcement to spot target vehicles that might otherwise be overlooked. This technology is used not only for locating violators of registration and licensing laws but also for Amber Alerts, when a child has been abducted and the plate of the vehicle the perpetrator is driving is known.

Another widely known practical application is speech recognition, such as "Siri" of Apple iPhone fame. This type of software is designed to learn how the operator speaks, and from a sample of the operator's voice it can determine whether to call "home" or "work" just from the spoken command (a simple sketch of this command-matching step also appears at the end of this section).

Despite the conflicting opinions on whether human beings will succeed in creating an artificial intelligence, the possibility is very real and must be considered from both ethical and philosophical perspectives. Substantial thought must be given not only to whether human beings can create an AI, but whether they should. Certainly we have already crossed over the question of "if we should create AI"; in some forms, AI exists today.
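To make the license-plate-reading workflow described above a little more concrete, here is a minimal sketch in Python of the "compare against a list of plates of interest" step. The plate numbers, the normalize helper and the check_plate function are invented for illustration, and the OCR step that actually turns a photograph into text is assumed to have already happened; real LPR systems are far more sophisticated.

# Illustrative sketch of the watch-list matching step in an LPR pipeline.
# The OCR output (raw plate text) is assumed to be available already.

def normalize(plate_text: str) -> str:
    """Strip spaces/dashes and upper-case so 'abc-1234' and 'ABC 1234' match."""
    return "".join(ch for ch in plate_text.upper() if ch.isalnum())

# Hypothetical watch list: stolen vehicles, Amber Alerts, registration violators.
plates_of_interest = {normalize(p) for p in ["ABC-1234", "XYZ 9876"]}

def check_plate(ocr_output: str) -> bool:
    """Return True if the plate read by the camera is on the watch list."""
    return normalize(ocr_output) in plates_of_interest

if __name__ == "__main__":
    for reading in ["abc 1234", "DEF-5555"]:
        print(reading, "->", "ALERT" if check_plate(reading) else "no match")

Storing the watch list as a set makes each lookup effectively instant, which is what lets an LPR unit flag a target vehicle within seconds of contact.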
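Similarly, here is a minimal sketch of the command-matching step behind a voice assistant such as Siri. The contacts dictionary, the phone numbers and the handle_command function are hypothetical, and the genuinely hard part, turning spoken audio into text, is assumed to have been done already by the speech recognizer.

# Illustrative sketch: mapping a transcribed voice command to an action.
# Only the text-matching step is shown; speech-to-text is assumed upstream.

contacts = {"home": "555-0100", "work": "555-0199"}  # made-up numbers

def handle_command(transcript: str) -> str:
    """Pick an action based on the words in the transcribed command."""
    words = transcript.lower().split()
    if "call" in words:
        for name, number in contacts.items():
            if name in words:
                return f"Dialing {name} at {number}"
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(handle_command("Call home"))
    print(handle_command("Please call work"))
    print(handle_command("What's the weather?"))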
Isaac Asimov set out, in his 1950 book "I, Robot", the "Three Laws of Robotics", which are as follows: "1. A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws" (Bowles 2010).

The legal and ethical dimensions of AI are strongly linked. Scientists and researchers argue that ethical considerations, such as making sure AI is programmed to act in an ethical way, making sure the ethics of the people who design and use AI technology are sound, and ensuring that people treat AI agents (robots) in an ethical manner, are increasingly being seen in terms of legal responsibilities. If self-aware AI agents do indeed become more ubiquitous in the future, legal theory provides the framework for considering responsibility and agency. There are those who argue that the more advanced these agents become, the more they will need legal rights of their own. Questions are being asked such as: should AI agents be given partial responsibility for their actions? Another consideration is how responsibility is transferred between humans and AI agents. This means thinking about how to prevent humans from unjustly shifting responsibility for their own actions onto AI agents, and about whether to charge an AI agent's programmer or owner with negligence if the agent causes damage or breaks the law.

Most of us have seen the movie "I, Robot" from 20th Century Fox. The lead robot in the movie, "Sonny", was designed to look and move like a human; Will Smith's character even asks the question, "why do you give them faces?" There have already been great strides in producing computers that are faster than the human brain, and for that matter much more accurate as well. Robots exist today that function based on their programming, even if they could not yet walk down the street without being noticed and pointed out as robots because of their movements.

In conclusion, we are seeing more and more technology that makes our lives easier, from cell phone assistants such as "Siri" to Unmanned Aerial Vehicles (UAVs) that not only spare pilots but also remove the need to put boots on the ground. These machines or agents are just that, machines, right? They are made of metal, plastic, silicone and computer chips. If I tell a UAV to turn left, it will listen, right? I don't believe that the machines are taking over just yet, but with computers such as "Deep Blue" among the founding fathers of AI, and with human curiosity driving us to see if true AI can be created, the possibility exists.