Rather Simplistic & Cavalier
There’s no doubt we’re moving pell-mell to add AI to all societal infrastructure, operational systems, and devices. And, for the most part, that’s good. However, in the process we’ll stack and aggregate AI systems the same way we now do other software. Thus, as we have seen and continue to see, instructions and bugs at connected and/or lower levels can negatively impact connected and/or higher levels, and vice versa.
The analogy to individual human behavior is specious in many ways. When you say, “People can’t explain how they balance, and the world’s greatest go players don’t know why one move seems better than another,” you conveniently ignore that some other humans can explain how and share that knowledge. When speaking about how we establish trust with other people, you neglect to point out how rare and ephemeral trust actually is.
AI will not be an “it” but rather a “them,” because increasingly all AI will be networked, just as we moved from standalone CPUs and devices to a world of interconnected CPUs and devices. So, ultimately, AI is more “Borg-like” than either individuals or human societies.
So, it’s true we’ll “adapt to a world of opaque AI,” as a convenient abstraction — until it’s not.
But why would anyone assume all people are altruists and the emergence of tyrants impossible? Said differently, can we, and should we, be so cavalier as to willingly risk being wrong with this technology? And, note, neither you nor I referenced AGI, a still higher level of risk. We’ve also ignored the opaque global military AI arms race, an even higher level of risk. But I’m sure you see the point.
Doc Huston