5 Matching Annotations
  1. Jan 2024
    1. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    2. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      likelihood of existing and existential AI risks

    3. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI


    4. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.


    5. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting
