Glad You See The Correlation
I’d summarize your thesis as “more complexity, more problems”. This seems generally true.
>>>Okay
I’m unclear from reading this on the mechanism of operation for systems becoming supercritical. I don’t doubt that it happens…. But even if your formulation is not parsimonious, it still seems to point towards a real pattern in reality.
>>>Okay. It is the central nonlinear phenomenon in phenomenology:
https://www.scientificamerican.com/article/self-organized-criticality/
https://en.wikipedia.org/wiki/Self-organized_criticality
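A minimal sketch may help with the “mechanism of operation” question. The Bak-Tang-Wiesenfeld sandpile is the toy model usually used to introduce self-organized criticality: add grains one at a time and the pile drives itself to a critical state where one more grain can trigger an avalanche of any size. The Python below is only illustrative; the grid size, toppling threshold, and grain count are arbitrary assumptions, not parameters taken from the linked articles.

```python
import random

# Illustrative sketch of the Bak-Tang-Wiesenfeld sandpile model.
# SIZE, THRESHOLD, and GRAINS are arbitrary choices for demonstration.
SIZE, THRESHOLD, GRAINS = 50, 4, 100_000

grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []

for _ in range(GRAINS):
    # Drop one grain on a random site.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[y][x] += 1

    # Relax: any site at or above the threshold topples, shedding one grain
    # to each of its four neighbors; grains that fall off the edge are lost.
    unstable = [(x, y)] if grid[y][x] >= THRESHOLD else []
    topples = 0
    while unstable:
        cx, cy = unstable.pop()
        if grid[cy][cx] < THRESHOLD:
            continue
        grid[cy][cx] -= THRESHOLD
        topples += 1
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                grid[ny][nx] += 1
                if grid[ny][nx] >= THRESHOLD:
                    unstable.append((nx, ny))
    if topples:
        avalanche_sizes.append(topples)

print("largest avalanche:", max(avalanche_sizes))
print("mean avalanche size:", sum(avalanche_sizes) / len(avalanche_sizes))
```

Run long enough, the avalanche sizes follow a power law: most drops do nothing, a rare few reorganize the whole grid, and no external tuning is needed to keep the system perched at that critical point.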
As for your conclusions, I specifically disagree that political solutions are likely to matter, because I expect existential risks to dominate, and so far we’ve not seen much reason to think politics is likely to wipe us out.
>>>Actually, after reading your article, I expected as much.
The very first paper I wrote that received acclaim was a comparative analysis of existentialist writers, with emphasis on Sartre and Camus, and in particular Being and Nothingness and The Rebel.
In the end there are three ways to see existential risk.
Kierkegaardian — deer in the headlights
Sartre — Life sucks now but will get better — to do is to be
Camus — Great to be alive but assume life will get worse — to be is to do
For this reason I suspect that any political environment will work so long as it doesn’t collapse our ability to handle existential risks like artificial intelligence, bioweapons, nanotechnology, etc. More specifically, I expect artificial intelligence to dominate our long-term future, since likely the only way not to be destroyed by it is to “win” the strong, self-improving AI race by creating human-values-aligned AI first.
>>>Suggest you read my article, Our Twilight Zone & What Comes Next.
Doc Huston