Issues Are Spot On, But Unclear If Solvable
Outstanding outline of this complex of problems. Having considered the exact issues you raise for decades and followed AI-related discussions closely, I agree that, as you imply, coding what we accept as ambiguity may simply not be doable given the level of complexity involved. Further, there are a few additional variables you probably want to add to your analysis.
- People in the AI field are making a full-court press to tamp down concerns about AI-related dangers, dismissing those who raise them and playing fast and loose with the timeline for an “intelligence explosion.” To me, the situation is eerily similar to climate-change denial (an analogy they do not like).
- You stated, “Expert AI researchers agree that it will probably take centuries for ‘superintelligence’ to arrive, and a recent Stanford report concurs: …Panel found no cause for concern that AI is an imminent threat to humankind.”
But the MIT article you cited as evidence of that agreement was written by a key supporter of the “it’s okay, just trust us” camp. Even then, the consensus in that article had 40 percent of experts expecting some AGI capability within 40 years. If you read Bostrom or Barrat, the timeframes are sooner and the percentages higher. Even Ray Kurzweil, Google’s AI godfather, gives an outside date of 2045.
I read the Stanford report and found that it frames the issues and concerns in self-serving and misleading ways, especially in that it only looks out to 2030. In fact, when Astro Teller wrote an article for Medium to sell the report, I pushed back with a comment you can find here: Read Report. You are All Smart Guys — But ….
In addition to all the great practical issues you raise, AI, like all technology, is a double-edged sword. Every government is pushing it as hard and fast as possible because each fears there may be no second place in the race to AGI. And the reason for that race is the well-founded concern about a nonlinear event, an “intelligence explosion.”
On my Medium publication, A Passion to Evolve, I have written numerous articles on these issues and concerns that you might find interesting and useful, for example: Why You Should Fear Artificial Intelligence-AI and Doc Says — Our Emotions, Institutions and Technological Capabilities Are Mismatched.
There is also a longish but readable, detailed discussion of nonlinear change and the big-picture context for AGI in an article entitled Macroscopic Evolutionary Paradigm.
Doc Huston