Concern Valid. Exposition Seriously Flawed.
Avoiding the existential threat posed by AI (actually AGI) is the moonshot of our era. Agreed. But inserting erroneous and misleading statements as fact damages the case here.
For example, there’s no evidence to support any of this:
“unique power of the human mind comes in part from its ability to integrate opposing qualities, like emotion and reason,”
“biologically compelled to create and to seek knowledge,”
“we are all different, comprising an essential character founded on an innate sense of morality and ethics,”
“machines will soon be able to render comprehensive psychographic profiles that not only help them “read” humans but in turn also allows us to better understand ourselves,”
“self-driving cars are said to be programmed to sacrifice pedestrians to save the driver” (no one has actually done this), and
“constitution is mandating the right “to pursue happiness” for each of its citizens” (that’s the Declaration of Independence, not the Constitution).
References to the Partnership for AI, AI100, and Singularity University should note that these groups deliberately limit their time horizon to near-term goals (2030) and assiduously avoid issues on the longer time horizon where the existential concern (especially AGI) lies.
Sorry to rain on your parade. But, like fake news, inaccuracy damages the basic concerns. On my Medium publication, A Passion to Evolve, there are some articles on this topic you might find interesting:
Artificial Intelligence (AI) Community Is Playing A Risky Game With Us,
Why You Should Fear Artificial Intelligence-AI,
Doc Says — Our Emotions, Institutions and Technological Capabilities Are Mismatched, and
Civilization’s Anti-Human, Not Machines
Doc Huston