Hi readers! Are you enjoying the blogs on Artificial Intelligence? I hope so, and I also hope it stays that way. Please remember that I write these blogs to spread awareness under the theme “Know it if you don’t”.
Now, when we talk about “Threats from AI”, which AI are we talking about? There are three types:
- Artificial narrow intelligence (ANI) is the most common form of AI, designed to solve a single problem, like recommending a product to an e-commerce user or predicting the weather. It can come close to human performance in very specific contexts, and it is the type most easily found in the market,
- Artificial general intelligence (AGI) possesses the intellectual capabilities of humans across various domains, including language and image processing, computation, and reasoning. It can be thought of as thousands of ANI systems working in tandem and communicating with each other to mimic human reasoning,
- Artificial super intelligence (ASI) is a progression of AGI that surpasses all human capabilities, such as rational decision-making, and even things like making better art and building emotional relationships.
Once the objective of AGI is achieved, AI systems could quickly improve their own capabilities and move into domains currently beyond our comprehension, as the gap between AGI and ASI may be relatively narrow (a nanosecond, figuratively speaking) because AI learns very fast.
The perception that AI is being endowed with human-like intelligence has led some people to consider AI a danger to humanity if it is allowed to progress “unabated”, and the most commonly felt threat is mass unemployment.
So, first a field was invented that can progress rapidly without the involvement of human beings, and now apprehensions are mounting about caring for unemployed humans and about the danger to humanity.
Is the first perception worth considering, or the last? No one can decide, as confusion is everywhere. Where does the problem lie?
- With human vision, which is different from revelation?
- With foresight, which is different from intuition? and/or
- With decision-making, which is based on prevision?
No one realized this before we reached where we are now. But since AI is everything and is everywhere in the business arena, boosting the development of new products, its endless propagation warrants conscious deployment, especially as Machine Learning (ML) assumes a larger role in how work is done and forces people to ask: is AI a threat to human existence?
Existential risk from AGI is the hypothesis that substantial progress in AGI could someday result in human extinction or some other unrecoverable global devastation. If AI surpasses humanity in general intelligence and becomes super-intelligent, it could be impossible for humans to control it; just as the fate of the mountain gorilla depends upon human goodwill, the fate of humanity might come to depend on the actions of super-intelligent machines, that is, systems that can beat human capabilities in every sphere: a possibility that, not long ago, could be found only in science fiction. But when people like Stephen Hawking, Bill Gates and Elon Musk started talking about superintelligence in the 2010s, the subject became the talk of the general public as a real prospect.
The major concerns raised were:
- Pre-programming and controlling a super-intelligent machine with the full set of human capabilities would be a hard task. A super-intelligent machine might inherently resist being shut off or stopped from pursuing the goal drilled into its DNA. Nevertheless, Yann LeCun (Director of AI Research at Facebook, among other roles) is of the view that super-intelligent machines will have no desire for self-preservation,
- An unexpected explosion of superintelligence would be a big surprise for a human race not yet prepared to face it. If a first-generation system needs 6 months to rewrite its own algorithms and double its speed, the second generation needs only 3 months to do the same, and so on: the time per generation keeps shrinking, and the system can undergo an unprecedentedly large number of generations of improvement in a short interval, jumping from subhuman to superhuman performance in all relevant areas (a hypothetical scenario, illustrated in the sketch after this list). Samuel Butler, the English author of the utopian satirical novel “Erewhon”, expressed serious concern in his 1863 essay “Darwin among the Machines”:
- “That the upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a really philosophic mind can for a moment question,”
- Alan Turing, an English mathematician, suggested in 1951, in a lecture titled “Intelligent Machinery: A Heretical Theory”, that machines would likely “take control” of the world as they became more intelligent than human beings,
- I. J. Good, a British mathematician, wrote in 1965 that the first ultra-intelligent machine would be the last invention that man need ever make, provided the machine is docile enough to tell us how to keep it under control, and he warned that the risks would be underappreciated,
- Marvin Minsky, an American cognitive and computer scientist, and I. J. Good himself expressed concerns that super-intelligent robots, nanotechnology and engineered bio-plagues are high-tech dangers to human survival,
- The Economist of 9 August 2014 argued that “a super-intelligent machine would be as alien to humans as human thought processes are to cockroaches”. Such a machine may not have humanity’s best interests at heart, and it is not obvious whether it would care about human welfare at all. A superintelligence can outmaneuver humans whenever its goals conflict with human goals; therefore, unless it decides to allow humanity to coexist, the first superintelligence to be created may inexorably result in human extinction,
- Nature warned in 2016 that “machines and robots that outperform humans across the board could self-improve beyond our control, and their interests might not align with ours”. The textbook “Artificial Intelligence: A Modern Approach” assesses that superintelligence could mean the end of the human race: almost any technology has the potential to cause harm in the wrong hands, but with superintelligence, the wrong hands might belong to the technology itself.
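To make the arithmetic of the “explosion” bullet above concrete, here is a minimal sketch in Python. It is purely illustrative: the 6-month figure and the halving rule are the hypothetical numbers from that bullet, not measurements of any real system.

```python
# Illustrative arithmetic for the intelligence-explosion bullet above.
# Hypothetical assumption from the text: generation 1 takes 6 months to
# rewrite its own algorithms, and every new generation halves that time.

first_interval = 6.0  # months needed by generation 1 (assumed figure)

elapsed = 0.0
interval = first_interval
for generation in range(1, 21):
    elapsed += interval          # time spent producing this generation
    print(f"Generation {generation:2d} ready after {elapsed:.4f} months")
    interval /= 2.0              # the next generation works twice as fast

# The running total 6 + 3 + 1.5 + ... is a geometric series that never
# exceeds 2 * first_interval = 12 months, yet the number of improvement
# cycles squeezed into that fixed window keeps growing without bound.
```

Run it and the elapsed time creeps toward 12 months while the generation count keeps climbing; the series converges, which is exactly why the scenario is described as an “explosion” rather than a gradual climb.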
Going through all this, the question that comes to my mind is: are we conveying a vague message to mankind about what could happen to them, or in other words, about what their fate could be?
If we are convinced that superintelligence will result in human extinction, and almost everyone connected with AI agrees with this perception, then who will use the super-intelligent machines? How, and for what purpose? After all, humans created this technology for the betterment of mankind, and without mankind it would be useless. Does this mean that after human extinction, super-intelligent machines will talk to the residents of space and work for them, certainly not for any good, but to bring hell on earth, like what we have seen in the movie “Avengers”?
I have not found any concrete discussion on this aspect, except what the American theoretical physicist Michio Kaku said in “Physics of the Future”: that “it will take many decades for robots to ascend” a scale of consciousness, and that in the meantime corporations will likely succeed in creating robots that are “capable of love and earning a place in the extended human family”. Really!
Broadly, it means that robots will be “by the humans and for the humans”.
I hope this hope holds for all times to come, but what happened on Monday, 4 October 2021, made me skeptical.
The breakdown of Facebook and WhatsApp (due to a “faulty configuration change”) for just a few hours, and the consequent monetary losses, shocked everyone, as the world at large was not prepared for it. Mark Zuckerberg lost $9 billion in net worth due to the rare Facebook outage and the resulting stock fall on Monday, 4 October 2021 (Bloomberg and Business Standard). Besides, 3.5 billion users around the world were affected through disrupted travel plans; Zoom, WhatsApp and Skype meetings and calls; Instagram; Messenger; and what not, leaving behind the question of questions:
Is “digitalization” of everything the answer to every problem, or is it a fight between competing technologies? From going offline to coming back online, the six-hour gap left a message in between.
Someone must have read it?
Stephen Hawking, the theoretical physicist, said:
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last,
unless we learn how to avoid the risks.”
See you next week with
“Esotericism: What it is and what it is not!”
Bye.