Sorry, But This Makes No Sense
Suffice it to say we all have preferred scenarios. The trick is to make those scenarios as bulletproof as possible in order to move people to your view. Clearly, you are attempting to sell a viewpoint. However, your surface-skimming treatment of the claimed “myths” makes your rebuttals come across as myths themselves. For example:
- “It concerns the myths on AI found in the media, on the web and in all of our heads. Causing fears which, I dare to say, are largely groundless.”
There are indeed MANY AI myths in the media and in our heads. Which ones are groundless, and why? For example, last week Russia’s Putin said, “AI raises ‘colossal opportunities and threats’ [but]…one who becomes the leader in this sphere will be the ruler of the world.” Is that a groundless myth? Are the concerns raised by numerous scientists, AI experts, AI institutions, and AI authors all wrong? Is the fact that China officially announced it intends to dominate AI by 2020 an inconsequential development?
- “Out of many industries, only a few will really be able to benefit from the opportunities presented by robotization.”
Based on what I know, anything related to physical labor and distribution in agriculture and manufacturing, as well as most administrative, clerical, and routine intellectual tasks, can be automated by robots, AI, or both. So how, contrary to all the published studies, are you able to substantiate this claim?
- “I don’t know where the idea that the digital ecosphere must be free of errors and dangers came from. As human beings, we have an existential duty to remain self-reliant.”
The problem is the exact opposite of your phrasing. We are CERTAIN the digital ecosphere will have errors and dangers. The issue is the consequences that impact real lives and fortunes: consequences that are unknown or unforeseen and have the potential to be widespread and profound, especially with increasingly opaque black-box algorithmic outputs. While I am all for, and all in on, the existential duty to remain self-reliant, the simple fact is that this becomes increasingly difficult as machines and civilization grow more complex and scale increases. You are lost in space with this myth.
- “I believe that we are UNABLE to predict the SPECIFIC consequences of the fact that computers will think faster than us within a few years.”
Again, all we have are scenarios. Predictions, especially “specific” ones, are for fools. There are only probabilities. Based on human history, in conflicts both with other species and among ourselves, the ability to think faster is an advantage. As for the hedge, “within a few years,” it is certain to arrive too soon and will put us at a disadvantage.
Finally, while I have long followed and respected both Kurzweil (especially the law of accelerating returns) and Musk, in terms of AI/AGI I am inclined to put more stock in Musk’s concerns, because the consequences of being wrong (encountering a black swan) are too great an existential risk.
(BTW, the movie Transcendence was pure Kurzweil fantasy. As attractive as the upload idea may be, it neglects resistance from those who believe in God (e.g., abortion, stem cells) and in nature (e.g., GMOs, vaccines). Plus, as in the movie, the problem of dictatorship is unaddressed, as is a cacophony that defaults to a Borg-like outcome.)
Doc Huston