News — At The Edge — 11/10

Doc Huston
7 min read · Nov 10, 2018

A technological civilization in denial about the political consequences of new technologies — digital authoritarianism, rogue AI & bioterror, & other existential risks — guarantees the worst is yet to come.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The global threat of China’s digital authoritarianism —

“[China’s] providing governments…with technology and training…to control their own citizens…[and] replace the liberal international order with its own authoritarian vision….

The ‘Belt and Road Initiative’…[seeks] Chinese influence…through bilateral loans and infrastructure projects…[and] emphasis on information technology….

[The] Chinese-built IT network of the African Union headquarters…[was] transmitting confidential data to Shanghai daily for five years.

Some Chinese companies are…exporting surveillance technology…to identify and track citizens’ everyday movements…[and] inviting those governments’ officials and media elites to China for training on how to control dissent and manipulate online opinion….

[Democracies] should impose sanctions on companies that knowingly provide technology designed for repressive crackdowns…and prohibit the export to those countries of any items that could be used…[for] censorship or repressive surveillance….

[However, the] best way for democracies to stem the rise of digital authoritarianism is to prove that there is a better model for managing the Internet…[by controlling] social media manipulation and misuse of data in a manner that respects human rights…[and protecting] critical infrastructure and citizens’ personal data from misuse by governments, companies and criminals.

Tech companies should…scale up their work with civil-society experts to maximize their own transparency and ensure that their platforms are not being misused to spread disinformation….

If democracies fail to advance their own principles and interests…digital authoritarianism threatens to become the new reality for all of us.” https://www.washingtonpost.com/opinions/the-global-threat-of-chinas-digital-authoritarianism/2018/11/01/46d6d99c-dd40-11e8-b3f0-62607289efee_story.html?utm_term=.baf1954f5ebe

Rogue A.I., Bioterror, and Other Ways Tech Might Take Us Out —

“[W]e are living through a critical moment in the history of our species, and…will either find a way to responsibly harness [new] technologies or be destroyed by them…because of political constraints….

[The concern is that the] pressures we’re putting…on the biosphere…[lead] to climate change, loss of biodiversity, and so forth….

[Other] new technologies, like in bio, cyber, and A.I., [mean that]…a few people or even an individual can, by error or by design, cause a catastrophe that cascades very widely…[and this] is very new….Politicians focus on the local and the short term….

[Today] a single person could carry out a cyberattack, for example, on the electrical grid that is so severe as to demand a nuclear response….[Or it could take only] a few people to create some sort of a dangerous pathogen that could be released by error or by design….

[Fact is] anything that can be done will be done somewhere by someone….We have evolved…by a process which has favored intelligence but also favored aggression….

[If] biological evolution is replaced by technological evolution, there’s no particular reason why it should be strictly analogous to the harsh survival of the fittest….

[Still] the order in which things happen with A.I. will make a big difference…[and we] want to develop the right kind of A.I. very quickly to help us…to ensure that we don’t accidentally develop A.I. which could go rogue and interact via the internet of things with the external world….

[Such] new ultrahigh-consequence but low-probability threats are very under-discussed…[yet] things will go wrong.” https://medium.com/s/thenewnew/rogue-a-i-bioterror-and-others-ways-tech-might-take-us-out-4e14ec3710b3

10 of the biggest risks to humanity’s survival in the next 50 years, from nuclear war to supervolcanoes —

“Global catastrophes can occur for a number of reasons, and…[the] next 50 years will set the pace for humanity’s survival…and if we wait…caring may no longer matter….[Here are the] 10 greatest challenges facing humans right now.

  1. A nuclear explosion could trigger a ‘nuclear winter,’ with widespread famines to follow….
  2. Technological progress in synthetic biology and genetic engineering is making it easier and cheaper to weaponize pathogens….
  3. Climate change will have devastating consequences….
  4. A collapse of the global ecosystem[’s ability] to support a growing human population….
  5. By 2050, 10 million people could die from antibiotic-resistant bacteria each year….
  6. An asteroid hitting the Earth could lead to global food shortages and the loss of millions of lives….
  7. A supervolcano eruption….
  8. The management of solar radiation…but this technology is not advanced enough….
  9. Intelligent machines could devastate humans if…not controlled….
  10. Risks to humanity that scientists have not even imagined yet.” https://www.businessinsider.com/biggest-risks-survival-of-humanity-in-50-years-2018-10

AI lie detectors to be tested by the EU at border points —

“Lie detectors…with artificial intelligence are…to be tested at border points in Europe…[where travelers will be] asked to upload pictures of their passport, visa and proof of funds, and…use a webcam to answer questions…from a computer-animated border guard….

The technology is…analyzing the micro-expressions…to figure out if the interviewee is lying…[and is] around 76% accurate, but…[developers say they] can increase this to 85%. The AI will be backed by human border officials, who will…cross-check information, comparing facial images captured…to passports and photos taken on previous border crossings….

Police, immigration and passport…departments proposed creating a central system to…share DNA, fingerprint, photograph and…[voice data] so they can cross-check…visa applications or…[assist in] solving crimes.” https://www.telegraph.co.uk/technology/2018/11/01/ai-lie-detectors-tested-eu-border-points/

Privacy group calls on US government to adopt universal AI guidelines to protect safety, security and civil liberties —

“[Guidelines] to ‘inform and improve the design and use of AI by maximizing the benefits while reducing the risks’…[and] ensure the protection of human rights…[like] a right to know the factors, logic and the techniques used to [reach] the outcome of a decision; a fairness obligation that removes discriminatory decision making; and an obligation to secure systems against cybersecurity threats.

The principles also include a prohibition on unitary scoring — to prevent governments from using AI to score their citizens and residents…[like] China’s controversial social credit system….

[The] full set of guidelines:

  • Right to Transparency. All individuals have the right to know the basis of an AI decision….
  • Right to Human Determination. All individuals have the right to a final determination made by a person.
  • Identification Obligation. The institution responsible for an AI system must be made known to the public.
  • Fairness Obligation. Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions.
  • Assessment and Accountability Obligation. An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits[/risks]….
  • Accuracy, Reliability, and Validity Obligations. Institutions must ensure the accuracy, reliability, and validity of decisions.
  • Data Quality Obligation. Institutions must establish data provenance, and assure quality and relevance for the data input into algorithms.
  • Public Safety Obligation. Institutions must assess the public safety risks that arise from the deployment of AI systems that direct or control physical devices, and implement safety controls.
  • Cybersecurity Obligation. Institutions must secure AI systems against cybersecurity threats.
  • Prohibition on Secret Profiling. No institution shall establish or maintain a secret profiling system.
  • Prohibition on Unitary Scoring. No national government shall establish or maintain a general-purpose score on its citizens or residents.
  • Termination Obligation. An institution…[has an] affirmative obligation to terminate…[a] system if human control…is no longer possible.” https://techcrunch.com/2018/10/29/us-government-universal-artificial-intelligence-guidelines/

Illuminating the Dark Web —

“[Most] think of the dark web as a place…people sell drugs…stolen information…[or] Google can’t crawl…[but] dark websites are…built with standard web technologies…[and] can be viewed by a standard web browser….

[Key is] they can only be accessed through special network-routing software…designed to provide anonymity for both visitors…and publishers of these sites.

Websites on the dark web don’t end in ‘.com’ or ‘.org’…[rather] often include long strings of letters and numbers, ending in ‘.onion’ or ‘.i2p’…that tell software like Freenet, I2P or Tor how to find dark websites while keeping users’ and hosts’ identities private….

[Still] dark web is tiny, with tens of thousands of sites at the most….

[Tor] is so prominent that…Facebook, The New York Times and The Washington Post operate versions of their websites accessible on Tor’s network….[Tor] allows users to anonymously browse not only dark websites, but also regular websites….

Defining the dark web only by the bad things that happen [ignores]…privacy-conscious social networking…[and] blogging by political dissidents….

[While] search engines never see huge swaths of the regular internet…the dark web is [searchable]….

It’s inaccurate to assume that online crime is based on the dark web…[and doing so] encourages governments and corporations to…monitor and police…[in] privacy-invading efforts.” https://www.scientificamerican.com/article/illuminating-the-dark-web/
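As a small illustration of the routing point above (my sketch, not from the article): reaching a “.onion” site requires handing the request to Tor rather than resolving the name through normal DNS. The snippet below assumes a Tor daemon is already running locally on its default SOCKS port (9050) and that Python’s requests library is installed with SOCKS support (requests[socks]); the .onion address shown is a made-up placeholder, not a real site.

```python
# Minimal sketch: fetching a dark-web page through a local Tor SOCKS proxy.
# Assumes Tor is listening on 127.0.0.1:9050 and `pip install requests[socks]`.
import requests

# "socks5h" (note the "h") tells the proxy (here, Tor) to resolve the
# hostname itself, which is required because .onion names have no DNS entry.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# Placeholder .onion address for illustration only (not a real site).
url = "http://examplexxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.onion/"

response = requests.get(url, proxies=TOR_PROXIES, timeout=60)
print(response.status_code, len(response.text))
```

The same proxy settings also route requests to ordinary websites through Tor, which is how the anonymous browsing of regular sites mentioned above works.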

A phone that… folds? (2 min video)

Samsung has revealed plans to make a phone that unfolds into a tablet.

Gene drives promise great gains and great dangers —

“[A] gene drive…uses genetic engineering to drive certain traits through a population…[like] resilience to disease among crops or…tolerance to warming waters on the part of corals…[or] make a species extinct…[like] three types of mosquito responsible for transmitting malaria…[and] could save close to half a million lives a year…[or] Lyme disease, Zika and dengue fever…[or] weapon against invasive species such as foxes, mice, rabbits and rats, whose proliferation threatens native species in some parts of the world.

(Humans are unsuited to gene drives, which work best in species that reproduce quickly, with many offspring.)….

Opponents think the technology is simply too dangerous to [use]…[or] could entrench the power of big agritech firms….[Seems] removing a species from the food chain could have unintended consequences, particularly if gene drives can move to a closely related species….[Since] animals carrying gene drives…respect no borders…[a] country’s decision to use gene drives will have consequences for its neighbors….[Critically] mosquito, engineered to inject toxins, could be used as a weapon….

[Still, an] attempt to impose a moratorium…was rejected by governments in 2016 at a United Nations meeting….

[Since] 2016 researchers have made advances on drives that die out over time…[and] could go some way to solving the practical concerns….The ideal would be a set of norms for states and funders to adhere to.

[Still] malevolent actors may still want to use gene drives for malicious purposes…[because they do] not require big organizations in order to be made to work….

[So] gene drives must be managed carefully…[but caution must not] obscure the prize on offer.” https://www.economist.com/leaders/2018/11/08/gene-drives-promise-great-gains-and-great-dangers

Find more of my ideas on Medium at,

A Passion to Evolve.

Or click the Follow button below to add me to your feed.

Prefer a weekly email newsletter? It’s free, no ads, no spam, and I never sell the list. Email me at dochuston1@gmail.com with “add me” in the subject line.

May you live long and prosper!
Doc Huston


Consultant & Speaker on future nexus of technology-economics-politics, PhD Nested System Evolution, MA Alternative Futures, Patent Holder — dochuston1@gmail.com