Thanks For Such A Revealing Response
(Note: my comments are inserted with the >>> symbol.)
If you expect me to read and respond directly to points within your past written pieces, you might want to give me the courtesy of reading my past pieces and incorporating their points into your interpretation and criticism of my ideas.
>>>Initially, only two issues were raised:
1. Governments do not play by the rules
2. Any AI that can comprehend the Internet could be a concern
>>>You did not respond to either in your initial response
>>>The original article I initially commented on did not reference past articles, only your project.
I actually read several of your articles before replying and it seems we are very similar thinkers who could be productive collaborators.
So I am disappointed by your needless hostility and superficial critique.
>>>Hostile?
>>>My initial comment was titled “Appropriate Caution But Overlooks Key Variables,” and my second was titled “Thanks, But Unresponsive to Issues Raised,” which, as noted above, was in fact the case.
>>>Superficial?? Are you suggesting the concerns raised are not valid? These are the most prominent and ubiquitous concerns.
If you had looked into my work, you would have discovered that my proposals are much more detailed — and already implemented as software — than in this one article.
>>>Am certain that is the case, which raises the question: why not articulate it? Or was the article just promotional?
You’ve constructed a straw man by imputing any sort of belief in linearity.
>>>I imputed it because that is how it read.
The achievement of AGI is undoubtedly a non-linearity, which some people refer to as a singularity in that it represents a new version of reality.
>>>This is your first clear acknowledgment of AGI as such. Whether it is a singularity is a separate and more complicated issue.
And yet even non-linearity does not pop out of the blue sky.
>>>The Big Bang, supernovas, biology, and cognition do seem to pop out.
You seem to make that point yourself when you write about nested coevolution. Successful emergence, as far as I can tell, is always preceded by a period of adaptation and coevolution.
>>>That is not what was said. Rather, there are directionality and influences that result in a matrix of options, most of which are bad.
The more successful the emergence, the longer the period of coevolution. Or maybe it’s the other way around.
>>>There is no reference in my work or any literature that makes a time-dependent claim like that. Only that every evolving system has a finite lifespan set at conception.
Perhaps you can explain to me how “society-in-the-loop” is so vastly superior to “individual-human-in-the-loop” as I’ve proposed.
>>>Collective intelligence. If that is not obvious, you need to step back from what you are doing.
Both approaches are coevolutionary, are they not?
>>>Yes, but so are dictators and society, street gangs and communities. Coevolution is not a relativistic concept or a one-size-fits-all solution.
Yet society is supervenient upon individual humans, so from an evolutionary perspective individuals are where the most important mutualistic evolution between humans and potential-AGI can happen. This, to me, makes a focus on individuals the place to begin.
>>>Actually, real coevolution is a dynamic mix of selfish genes (individuals), extended phenotypes (societies), and the external nested environments.
It seems to me that we are coming at the same problem from different perspectives.
>>>To be sure.
Based on what I’ve read, you’re approaching the problem of AGI from the perspective of social institutions or “society”, as would be expected given your political science background.
>>>Given the existential threat AGI poses, which you acknowledge, that is my approach.
Maybe you are secretly powerful and influential, but if appearances do not deceive me, the level of social institutions is not where you personally have the power to effect any change, no matter how many words you write on the Internet
>>>Actually, I give weight to both words and actions. Without words, we are all wholly dependent on the beneficence of politicians (no track record to justify that) or absolute confidence in individual experts or expert communities (despite the scientific method and Kuhn’s paradigmatic shifts to the contrary).
or how quick you are to resort to ad hominem attacks.
>>>Nothing attacked you or your efforts. In fact, my initial response said your approach had merit, albeit insufficient as I saw it. My follow-up response highlighted your non-responsive response. Subsequent comments referencing your own quotes are hard to depict differently.
Yet I recognize where I can effect change, which is amongst individuals, beginning with myself. That decidedly non-naive grasp of reality forms the basis of my perspective. What the institutions and corporations of our society produce — AGI or otherwise — is completely out of my control and out of the control of the vast majority of individuals.
>>>That acknowledgment would have sufficed as an answer to one of my initial questions. That said, the issue itself remains valid and open.
Literally the only thing in my personal power to do is understand what is happening in the world and mitigate the risks that I see, and if possible coordinate with others who hold similar beliefs.
We have 4 billion years of evolution in the form of widely-distributed natural selection to use as evidence of what works in the long term and what does not. Clearly, evolution works.
>>>Yes, but there were approx. 10 billion years of evolution before biology, and there will be billions more after biology. So evolution has worked this far.
>>>Unfortunately, 99.999% of all species that ever existed are extinct. The point is that you are solely dependent on biological evolution, a special form of evolution. That said, even your interpretation of biological evolution seems askew.
>>>As Nobel Laureate Daniel Kahneman has noted, biological evolution does not actually provide solace as a model for AI or AGI.
I am arguing for building AGI atop the proven principles of evolution to counterbalance the risks of what we both seem to agree is inevitable: AGI in some form.
>>>Your reliance solely on biological evolution is part of the problem. Per Kahneman, biological evolution is unlikely to suffice. You might be right, but nothing you have said shows otherwise, and the risks are too high to be wrong.
The risks of AGI are upon me (and us) no matter what I do so defense seems a prudent strategy. I have no power to affect social institutions directly so anything I develop must begin on the level of individuals where I do have at least a small amount of power. And I will always side with evolution; it would be intellectually cavalier to do otherwise.
>>>Again, if you are relying solely on biological evolution, excluding extended phenotypes and nested environments (beyond simple biology), you might actually be the one being intellectually cavalier.
>>>It is unfortunate that you take this so personally, though I do understand the passion and conviction. That said, your comments here actually increase my concern about the project you are undertaking and your approach.
Doc Huston