474 Matching Annotations
  1. Mar 2024
    1. Experts expect some inference to start moving from specialist graphics-processing units (GPUs), which are Nvidia’s forte, to general-purpose central processing units (CPUs) like those used in laptops and smartphones, which are AMD’s and Intel’s.

      So what does this mean in practical terms? Is Nvidia losing ground because the more common (?) general-purpose CPUs become relevant for AI? ||JovanK||


  2. Jan 2024
    1. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv, attempts to detect and remove such two-faced behaviour are often useless — and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse “was something that was particularly surprising to us … and potentially scary”, says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California.

      Kind of scary indeed. It takes us back to the question of trust in developers.


    1. including by scaling up the use of open science, affordable and open-source technology, research and development

      open science and open source

    2. We reaffirm that the creation, development and diffusion of innovations and new technologies and associated know-how, including the transfer of technology on mutually agreed terms, are powerful drivers of economic growth and sustainable development. We reiterate the need to accelerate the transfer of environmentally sound technologies to developing countries on favourable terms, including on concessional and preferential terms, as mutually agreed, and we note the importance of facilitating access to and sharing accessible and assistive technologies

      details for the part on tech transfers in the chapeau

    3. commit to exploring measures to address the risks involved in biotechnology and human enhancement technologies applied to the military domain

      Might have been good not to limit this to the applications in the military domain

    4. developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process

      also interesting

    5. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

    6. commit to concluding without delay a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems

      LAWS - another proposed strong commitment

    7. international norms, rules and principles to address threats to space systems and, on that basis, launch negotiations on a treaty to ensure peace, security and the prevention of an arms race in outer space

      interesting commitment to working on a treaty for outer space security. curious if it will stay in the final doc

    8. We acknowledge that the accelerating pace of technological change necessitates ongoing assessment and holistic understanding of new and emerging developments in science and technology impacting international peace and security, including through misuse by non-State actors, including for terrorism

      tech, peace and security

    9. including through the transfer of technology on mutually agreed terms to help close the digital and innovation divide.

      Seems a win for developing countries. Tech transfers were highlighted quite a lot during the SDG summit last year. Let's see reactions...


    1. summary of Russell’s argument—which claims that with advanced AI, “you get exactly what you ask for, not what you want”

      Q: What is the alignment problem?

    2. 70% of respondents thought AI safety research should be prioritized more than it currently is.

      need for more AI safety research

    3. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    4. Amount of concern potential scenarios deserve, organized from most to least extreme concern

      very interesting, in particular the relative limited concern related to the sense of meaning/purpose

    5. Even among net optimists, nearly half (48.4%) gave at least 5% credence to extremely bad outcomes, and among net pessimists, more than half (58.6%) gave at least 5% to extremely good outcomes. The broad variance in credence in catastrophic scenarios shows there isn’t strong evidence understood by all experts that this kind of outcome is certain or implausible

      basically difficult to predict the consequences of high-level machine intelligence

    6. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      likelihood of existing and exclusion AI risks

    7. Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems’ choices, with only 20% giving it better than even odds.

      predictions on explainability / interpretability of AI systems

    8. Answers reflected substantial uncertainty and disagreement among participants. No trait attracted near-unanimity on any probability, and no more than 55% of respondents answered “very likely” or “very unlikely” about any trait.


    9. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI


    10. ‘intelligence explosion,’

      Term to watch for: intelligence explosion

    11. The top five most-suggested categories were: “Computer and Mathematical” (91 write-in answers in this category), “Life, Physical, and Social Science” (77 answers), “Healthcare Practitioners and Technical” (56), “Management” (49), and “Arts, Design, Entertainment, Sports, and Media”

      predictions on occupations likely to be among the last automatable

    12. predicted a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 survey

      Timeframe prediction for full automation of labour: 50% chance it would happen by 2116.

      But what does this prediction - and the definition of full automation of labour - mean in the context of an ever-evolving work/occupation landscape? What about occupations that might not exist today? Can we predict how those might or might not be automated?

    13. Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [. . . ] Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers

      Q: What is full automation of labour?

    14. the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.

      Survey: 10% chance that machines become better than humans in 'every possible task' by 2027, but 50% by 2047.

    15. High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption

      Q: What is high-level machine intelligence?

    16. ‘High-Level Machine Intelligence’

      new term: high-level machine intelligence

    17. six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)

      Expectations on the tasks that AI is expected to take over only more than 10 years from now

    18. 2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR). This to our knowledge constitutes the largest survey of AI researchers to date

      Who participated in the survey

    19. They are experts in AI research, not AI forecasting and might thus lack generic forecasting skills and experience, or expertise in non-technical factors that influence the trajectory of AI.

      Good to note this caveat

    20. lack of apparent consensus among AI experts on the future of AI

      This has always been the case, no?

    21. was disagreement about whether faster or slower AI progress would be better for the future of humanity.

      interesting also

    22. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.


    23. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting

    24. the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
  3. Jul 2023
    1. In this classroom, each student should not only have the opportunity to practice but also actively participate in discussions, creating an inclusive and deeply participatory learning environment.
    2. Teaching others is a more powerful learning technique than re-reading or summarizing
    3. Issues that keep teams from fulfilling their potential, known as process loss, and these include social loafing (when individuals put forth less effort when working in a group) and groupthink (when group members’ desire for conformity can lead to bad decisions)
    4. For any assignment, it’s not enough to cite the AI; students should include an Appendix noting what they used the AI for and where its output fits into their work
    5. you should try it out a number of times for your specific topic or concept and see how it reacts
    6. Because the AI can “get it wrong” students need to be aware of those limits and discussing this in class is one way to highlight its limitations
    7. the tutor's value is not merely subject knowledge, but also their capacity to prompt the student to make an effort, pay close attention to the material, make sense of new concepts, and connect what they know with new knowledge
    8. Tutoring involves small group or one-on-one sessions with a tutor focusing on skills building.

      Q: What does tutoring involve?

    9. That feedback should be considered critically, and students should be asked to articulate how and why the feedback they received is effective (or not).
    10. while the feedback may be helpful it should be coupled with an in-class discussion and clear guidelines
    11. feed-up, feedback, and feed-forward. Feed-up serves to clearly articulate the goals and expectations students are to meet. Feedback reflects students' current progress and pinpoints areas requiring further development; it provides actionable advice, helping students to achieve their goals. Feed-forward helps teachers plan and tweak their lessons based on student work

      Components of feedback: feed-up, feedback, and feed-forward

    12. they are most common when asking for quotes, sources, citations, or other detailed information

      When hallucination is most likely to appear in LLMs.


    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed


    3. Treat unreleased AI model weights for models in scope as core intellectual property for their business,
    4. establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety,
    5. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So applying to what comes next...

    6. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    7. These voluntary commitments to remain in effect until regulations covering substantially the same issues come into force
    8. designed to advance a generative AI legal and policy regime
    9. Realizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement.
    10. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations


    1. Collaborative could acquire or develop and then distribute AI systems to address these gaps, pooling resources from member states and international development programs, working with frontier AI labs to provide appropriate technology, and partnering with local businesses, NGOs, and beneficiary governments to better understand technological needs and overcome barriers to use

      What would be the relation with big tech companies?

      And 'acquire' how?

    2. Acquires and distributes AI systems
    3. institutions taking on the role of several of the models above
    4. addressed by offering safety researchers within AI labs dual appointments or advisory roles in the Project,
    5. it diverts safety research away from the sites of frontier AI development.
    6. such an effort would likely be spearheaded by frontier risk-conscious actors like the US and UK governments, AGI labs and civil society groups
    7. Accelerate AI safety research by increasing its scale, resourcing and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general purpose capabilities
    8. Exceptional leaders and governance structures
    9. an institution with significant compute, engineering capacity and access to models (obtained via agreements with leading AI developers), and would recruit the world’s leading experts
    10. An AI Safety Project need not be, and should be organized to benefit from the AI Safety expertise in civil society and the private sector

      And funded how?

    11. like ITER and CERN.
    12. conduct technical AI safety research at an ambitious scale
    13. y to exclude from participation states who are likely to want to use AI technology in non-peaceful ways, or make participation in a governance regime the precondition for membership
    14. consistently implement the necessary controls to manage frontier system
    15. The Collaborative to have a clear mandate and purpose
    16. diffusing dangerous AI technologies around the world, if the most powerful AI systems are general purpose, dual-use, and proliferate easily.
    17. would need to significantly promote access to the benefits of advanced AI (objective 1), or put control of cutting-edge AI technology in the hands of a broad coalition (objective 2)
    18. The resources required to overcome these obstacles is likely to be substantial,


    19. Understanding the needs of member countries, building absorptive capacity through education and infrastructure, and supporting the development of a local commercial ecosystem to make use of the technology
    20. being deployed for “protective” purposes
    21. Increase global resilience to misused or misaligned AI systems
    22. existence of a technologically empowered neutral coalition may also mitigate the destabilizing effects of an AI race between states


    23. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    24. reduce geopolitical instability amidst fierce AI competition among states


    25. ensure the benefits of cutting-edge AI reach groups that are otherwise underserved by AI development
    26. legitimate international access to advanced AI
    27. a Frontier AI Collaborative could take the form of an international private-public partnership that leverages existing technology and capacity in industry, for example by contracting access to or funding innovation in appropriate AI technology from frontier AI developers.
    28. Aligned countries may seek to form governance clubs, as they have in other domains. This facilitates decision-making, but may make it harder to enlist other countries later in the process

      Budapest Convention is a case in point

    29. its effectiveness will depend on its membership, governance and standard-setting processes
    30. Governance Organization should focus primarily on advanced AI systems that pose significant global risks, but it will be difficult in practice to decide on the nature and sophistication of AI tools that should be broadly available and uncontrolled versus the set of systems that should be subject to national or international governance
    31. Automated (even AI-enabled) monitoring
    32. some states may be especially reluctant to join due to fear of clandestine noncompliance by other states
    33. Other AI-specific incentives for participation include conditioning on participation access to AI technology (possibly from a Frontier AI Collaborative) or computing resources. States might also adopt import restrictions on AI from countries that are not certified by a Governance Organization

      Surely interesting. Though in the current geopolitical context, it is not difficult to imagine how this would work.

    34. while urgent risks may need to be addressed at first by smaller groups of frontier AI states, or aligned states with relevant expertise

      Geopolitics comes into play again? Surely 'frontier AI states' is different from 'aligned states'.

    35. The impact of a Governance Organization depends on states adoptingits standards and/or agreeing to monitoring.
    36. standard setting (especially in an international and multistakeholder context) tends to be a slow process
    37. detection and inspections of large data centers
    38. self-reporting of compliance with international standards
    39. Where states have incentives to undercut each other's regulatory commitments, international institutions may be needed to support and incentivize best practices. That may require monitoring standards compliance.
    40. therefore enable greater access to advanced AI

      Implementing international standards enables greater access to tech?

    41. international standard setting would reduce cross-border frictions due to differing domestic regulatory regimes

      If those standards (whatever we mean by them...) are taken up broadly.

    42. Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    43. The International Telecommunication Union (ITU)

      In what sense?

    44. Advanced AI Governance Organization.
    45. intergovernmental or multi-stakeholder
    46. as less institutionalized and politically authoritative scientific advisory panels on advanced AI

      So an advisory panel stands a better chance of reaching that consensus than a commission?

    47. Representation may trade off against a Commission’s ability to overcome scientific challenges and generate meaningful consensus
    48. broad geographic representation in the main decision-making bodies, and a predominance of scientific experts in working groups
    49. The scientific understanding of the impacts of AI should ideally be seen as a universal good and not be politicized
    50. foundational “Conceptual Framework”
    51. Commission might undertake activities that draw and facilitate greater scientific attention, such as organizing conferences and workshops and publishing research agendas


    52. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in the implementation. E.g. whose standards would determine what counts as a 'highly competent' AI expert?

    53. internationally representative group of experts
    54. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    55. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    56. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    57. intergovernmental body
    58. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    59. norms and standards

      Are norms and standards rules?

    60. standards

      Again, what kind of standards are we talking about?

    61. Establish guidelines

      Don't we have enough of these?

    62. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    63. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    64. developing and/or enabling safe forms of access to AI.

      What does this mean?

    65. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

    66. to build consensus on risks and how they can be mitigated, and set safety norms and standards and support their implementation to help developers and regulators with responsible development and use. International efforts to conduct or support AI safety research
    67. national regulation may be ineffective for managing the risks of AI even within states. States will inevitably be impacted by the development of such capabilities in other jurisdictions
    68. building capacity to benefit from AI through education, infrastructure, and local commercial ecosystems

      Building capacity is always good. Not quite sure whose capacities we are talking about here and how/who would build them.

    69. build consensus on AI opportunities

      Do we really need to build consensus on this?

    70. Standards

      The more we talk/write about standards, the more I feel we need a bit of clarity here as well: Are we talking about technical standards (the ones developed by ITU, ISO, etc) or 'policy' standards (e.g. principles) or both?

    71. ISO/IEC

      ITU, IEEE...

    72. whether these institutions should be new or evolutions of existing organizations, whether the conditions under which these institutions are likely to be most impactful will obtain,

      But aren't these essential questions if all this is to work?

    73. A failure to coordinate or harmonize regulation may also slow innovation.

      A cynical question: Is this coming from a commercial interest angle, then?


  4. Jun 2023
    1. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company

      why exactly?

    2. worry that as the models get better and better the users can have sort of less and less of their own discriminating thought process around it. But, but I think users are more capable than we give often, give them credit for in, in conversations like this.
    3. but also the genome project
    4. to us as the American people to write the answer
    5. It’s really like the invention of the internet in scale, at least, and potentially far, far more significant than that

  5. May 2023
    1. in the text such as multistakeholders and networked-multilateralism

      Also in the GDC policy brief, where we have a combination of MSH, tripartite, networked cooperation.


  6. Feb 2023
    1. 2017: ESA announces its astronauts will train alongside Chinese ones, with the overall goal of having Europeans sent to China's space station. Jan 2023: ESA: "For the moment we have neither the budgetary nor the political, let’s say, green light or intention to engage in a second space station—that is participating on the Chinese space station"

    2. European space officials like the Artemis program and are seeking areas for greater involvement. This is drawing them closer to NASA.
    3. However, the more significant reason is probably a political one
    4. In January, Josef Aschbacher, director general of the European Space Agency, said his focus remains on the International Space Station Partnership with NASA, Russia, Canada, and Japan. "For the moment we have neither the budgetary nor the political, let’s say, green light or intention to engage in a second space station—that is participating on the Chinese space station,"
    5. Nearly six years ago the European Space Agency surprised its longtime spaceflight partners at NASA, as well as diplomatic officials at the White House, with an announcement that some of its astronauts were training alongside Chinese astronauts. The goal was to send European astronauts to China's Tiangong space station by 2022.

      2017: ESA announces its astronauts will train alongside Chinese ones, with the overall goal of having Europeans sent to China's space station.

    Created with Sketch.