36 Matching Annotations
  1. Jan 2024
    1. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

  2. Jul 2023
    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed

      !

    3. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So this applies to what comes next...

    4. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    5. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations

    1. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    2. future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    3. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in implementation. E.g.: whose standards would determine what counts as a 'highly competent' AI expert?

    4. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    5. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    6. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    7. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing or developing the tech?

    8. standards

      Again, what kind of standards are we talking about?

    9. Establish guidelines

      Don't we have enough of these?

    10. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?

    11. developing and/or enabling safe forms of access to AI.

      What does this mean?

    12. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

  3. May 2023
    1. three principles: transparency, accountability, and limits on use.

      3 principles for AI governance

    2. Number one, you’re here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what’s gonna happen. But there’s a big fear you’ve expressed to all of you about what bad actors can do and will do if there’s no rules of the road. Number three, as a member who served in the house and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.

      A good summary of the current situation with AI technology.

    3. the central scientific issue

      Is it a 'scientific issue'? I do not think so. It is more a philosophical, and possibly even theological, issue. Can science tell us what is good and bad?

    4. the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.

      The EU AI Act uses precision regulation: regulating the use of AI technology in specific contexts.

    5. a reasonable care standard.

      Another vague concept. What is 'reasonable'? There will be a lot of work for AI-powered lawyers.

    6. Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to high, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?

      How to avoid repeating with AI governance what happened with Section 230 and social media governance?

    7. the current version of GPT-4 ended training in 2021.

      The 2021 training cutoff is becoming a 'safety net' for OpenAI.

    8. Sen. Marsha Blackburn (R-TN):

      This is probably the most practical approach to AI governance. The Senator from Tennessee asked many questions about protecting musicians' copyright: is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.

    9. that people own their virtual you.

      People can own it only with 'bottom-up AI'.

    10. When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and each, for each jurisdiction, every company has to train another model.

      This is a naive view, because AI is shaped by ethics, and ethics is very 'local'. Yes, there are some global ethical principles, such as protecting human life and dignity. But many other ethical rules are very 'local'.

    11. need a cabinet level organization within the United States in order to address this.

      Who can govern AI?

    12. If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.

      This is what worries politicians: how to win elections. They like 'to see' (use AI for their own needs) but 'not to be seen' (have it used by somebody else). The main problem with political elites worldwide is that they may win elections with the use of AI (or not), while humanity slides into 'knowledge slavery' under AI.

    13. large language models can indeed predict public opinion.

      They can, as they could, for example, predict the continuation of this debate in the political space.

    14. So I think the most important thing that we could be doing and can, and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for, for years now in doing that in focusing on skills-based hiring in educating for the skills of the future. Our skills build platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.

      It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if we need less human labour.

    15. The National Institute of Standards and Technology actually already has an AI accuracy test,

      It would be interesting to see how this works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.

    16. Ultimately, we may need something like CERN: global, international and neutral, but focused on AI safety rather than high energy physics.

      He was probably thinking of an analogy with the IPCC as a supervisory space. But CERN could play a role as a place for AI research and for processing huge amounts of data.

    17. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.

      An important stakeholder.

  4. Nov 2021
    1. 46

      This paragraph shifts the previous balanced formulation on data governance (a proper balance between data sovereignty and free data flows) towards more data sovereignty.
