Issues Are Spot On, But Unclear If Solvable

Doc Huston
2 min read · Sep 22, 2016

Outstanding outline of the problem-complex. Having considered the exact issues you raise for decades, and having followed AI-related discussions closely, I agree that, as you imply, attempting to code what we accept as ambiguity may be too complex to be doable. Further, there are a few additional variables you probably want to add to your ideas.

  • People in the AI field are making a full-court press to tamp down concerns about AI-related dangers, dismissing those who raise them, and playing fast and loose with the timeline for an “intelligence explosion.” To me, the situation is eerily similar to that of climate change deniers (an analogy they do not like).
  • You stated, “Expert AI researchers agree that it will probably take centuries for ‘superintelligence’ to arrive, and a recent Stanford report concurs: …Panel found no cause for concern that AI is an imminent threat to humankind.”

But the MIT article you cited for “agree” was written by a key supporter of the “it’s okay, just trust us” camp. Even then, the consensus in that article had 40 percent of experts expecting some AGI capability within 40 years. If you read Bostrom or Barrat, the timeframe is sooner and the percentages are higher. Even Ray Kurzweil, Google’s AI godfather, puts the outside date at 2045.

I read the Stanford report and found that it frames the issues and concerns in self-serving and misleading ways, especially in that it only looks out to 2030. In fact, when Astro Teller wrote an article for Medium to sell the report, I pushed back with a comment you can find here: Read Report. You are All Smart Guys — But ….

In addition to all the great practical issues you raise, AI, like all technology, is a double-edged sword. Every government is pushing it as hard and fast as possible because they fear there may be no second place with AGI. And the reason for this race is the well-founded concern about a nonlinear event, an “intelligence explosion.”

On my Medium publication, A Passion to Evolve, I have written numerous articles on these issues and concerns that you might find interesting and useful, for example: Why You Should Fear Artificial Intelligence-AI and Doc Says — Our Emotions, Institutions and Technological Capabilities Are Mismatched.

There is also a longish but readable, detailed discussion of nonlinear change and the big-picture context for AGI in an article entitled Macroscopic Evolutionary Paradigm.

Doc Huston

