223 Matching Annotations
  1. Last 7 days
    1. Types of vector embeddings

      What types of embeddings do we have in our vector database: word, sentence, or document? Could we have embeddings of all three?

      ||JovanNj|| ||anjadjATdiplomacy.edu||
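      All three levels can coexist in one store. A toy sketch of the relationship (the vectors below are made up; real systems use trained encoders such as word2vec or a transformer, and simple averaging is only the crudest way to go from words to sentences to documents):

```python
import numpy as np

# Made-up word vectors; in practice these come from a trained model.
word_vectors = {
    "digital": np.array([0.2, 0.7, 0.1]),
    "policy": np.array([0.9, 0.1, 0.3]),
    "matters": np.array([0.4, 0.5, 0.5]),
}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Average the word vectors -- the simplest sentence-level embedding."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def document_embedding(sentences: list) -> np.ndarray:
    """Average the sentence embeddings to get one vector per document."""
    return np.mean([sentence_embedding(s) for s in sentences], axis=0)

doc = ["digital policy", "policy matters"]
vec = document_embedding(doc)  # one 3-d vector representing the whole document
```

      A database could store all three granularities side by side, tagged by level, so queries can match at whichever granularity fits.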


  2. Feb 2024
    1. Computing Power and the Governance of Artificial Intelligence

      @jovan @sorina This article (published just a week ago) presents new research on regulating frontier AI models by imposing restrictions and safety measures on AI chips.

      I previously shared a similar article, 'How to Catch a Chinchilla', which laid out the preliminary ideas behind this one.

      The article focuses on regulating the 'hardware' layer of frontier models, both because it is more visible and easier to track/locate and because, given how AI is currently built, the development of advanced models still relies on securing a large number of AI chips.


    1. Another Hanna paper, presented at the Resistance AI workshop, urges the machine learning community to go beyond scale when considering how to address systemic social issues and asserts that resistance to scale thinking is needed.

      From what I observed, the notion of 'scaling an algorithm' always results in high decontextualization of data and outputs (i.e., training data is taken out of the context from which it was derived, and the outputs are applied to various contexts instead of merely the one the model was trained for).

      And a major reason for the lack of reflection in this regard was, in my opinion, the tech community's blatant refusal to think about the systemic, societal risks of their models--they framed these as problems for policymakers and ethicists, claiming the social could be mitigated separately from the technical conception of the models.

    2. “We argue that fixes that focus narrowly on improving datasets by making them more representative or more challenging might miss the more general point raised by these critiques, and we’ll be trapped in a game of dataset whack-a-mole rather than making progress, so long as notions of ‘progress’ are largely defined by performance on datasets,” the paper reads. “Should this come to pass, we predict that machine learning as a field will be better positioned to understand how its technology impacts people and to design solutions that work with fidelity and equity in their deployment contexts.”

      I suspect that current AI regulation discussions are missing the long-term risks that certain paradigms in the AI development space might pose; one is the predominant idea that we can resolve fairness simply by codifying fairness metrics and making them a requirement for a model to 'hit the market'.

      This logic fails to grasp exactly what this paragraph points out: ensuring fairness goes beyond obtaining a 'fair dataset' that performs well; it concerns the deployment context, how developers and users interact with the AI model, and how the model becomes interwoven in the lives of the people working around it.


  3. Jan 2024
    1. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv, attempts to detect and remove such two-faced behaviour are often useless — and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse “was something that was particularly surprising to us … and potentially scary”, says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California.

      Kind of scary indeed. It takes us back to the question of trust in developers.


    1. developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process

      also interesting

    2. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

    3. commit to concluding without delay a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems

      LAWS - another proposed strong commitment


    1. 70% of respondents thought AI safety research should be prioritized more than it currently is.

      need for more AI safety research

    2. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    3. Amount of concern potential scenarios deserve, organized from most to least extreme concern

      very interesting, in particular the relative limited concern related to the sense of meaning/purpose

    4. Even among net optimists, nearly half (48.4%) gave at least 5% credence to extremely bad outcomes, and among net pessimists, more than half (58.6%) gave at least 5% to extremely good outcomes. The broad variance in credence in catastrophic scenarios shows there isn’t strong evidence understood by all experts that this kind of outcome is certain or implausible

      basically difficult to predict the consequences of high-level machine intelligence

    5. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      likelihood of existing and exclusion AI risks

    6. Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems’ choices, with only 20% giving it better than even odds.

      predictions on explainability / interpretability of AI systems

    7. Answers reflected substantial uncertainty and disagreement among participants. No trait attracted near-unanimity on any probability, and no more than 55% of respondents answered “very likely” or “very unlikely” about any trait.

      !

    8. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI

      .

    9. ‘intelligence explosion,’

      Term to watch for: intelligence explosion

    10. The top five most-suggested categories were: “Computer and Mathematical” (91 write-in answers in this category), “Life, Physical, and Social Science” (77 answers), “Healthcare Practitioners and Technical” (56), “Management” (49), and “Arts, Design, Entertainment, Sports, and Media”

      predictions on occupations likely to be among the last automatable

    11. predicted a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 survey

      Timeframe prediction for full automation of labour: 50% chance it would happen by 2116.

      But what does this prediction - and the definition of full automation of labour - mean in the context of an ever-evolving work/occupation landscape? What about occupations that might not exist today? Can we predict how those might or might not be automated?

    12. Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [...] Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers

      Q: What is full automation of labour?

    13. the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.

      Survey: 10% chance that machines become better than humans in every possible task by 2027, and 50% by 2047.

    14. High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption

      Q: What is high-level machine intelligence?

    15. ‘High-Level Machine Intelligence’

      new term: high-level machine intelligence

    16. six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)

      Expectations about which tasks AI will only become capable of taking over more than 10 years from now

    17. lack of apparent consensus among AI experts on the future of AI

      This has always been the case, no?

    18. there was disagreement about whether faster or slower AI progress would be better for the future of humanity.

      interesting also

    19. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

      .

    20. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting

    21. the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
  4. Dec 2023
    1. Andrew Ng: ‘Do we think the world is better off with more or less intelligence?’

      A very lucid interview. He highlights the importance of open-source AI, the danger of regulating LLMs as opposed to applications, the negative lobbying of most big tech companies, and the need to focus on good regulation targeting today's problems rather than speculating about extinction. ||JovanK|| and ||sorina||


  5. Oct 2023
    1. The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

      ||Jovan||||sorina||||JovanNj||

      Probably old news to you, but here's an article about the billionaire behind Open Philanthropy (a core funder of EA activities) and about its reach into politics.


    1. William Isaac

      Staff Research Scientist on DeepMind’s Ethics and Society Team and Research Affiliate at Oxford University's Centre for the Governance of AI: https://wsisaac.com/#about

      Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!

    2. Arvind Narayanan

      Professor of computer science from Princeton University and the director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.

      Haven't read his work closely yet, but it seems sensible to me.

    3. Sara Hooker,

      Director at Cohere: https://cohere.com/ (an LLM AI company).

    4. Yoshua Bengio

      Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

      A very influential computer scientist, considered a leading force in AI. Also an AI doomer, though I can't find a clear link between him and EA.

    5. Irene Solaiman

      Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/

    6. Paul Christiano
    7. Capabilities and risks from frontier AI

      ||Jovan|| ||sorina|| ||JovanNj||

      This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).


    1. On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

      ||Jovan||||sorina||||JovanNj||

      This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.

      Also, it shouldn't come as a surprise to us that the UK government is the most convinced by EA's depiction of AI ending humanity, considering that many EA organisations are based at and attached to UK universities (Oxbridge).


    1. We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

      @Jovan @Sorina

      The people warning us of extinction-level AI risks are the very same people developing technologies in a way that leads us toward them. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the market's most important, and only, goal.

    2. What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

      @Jovan @Sorina

      This is my concern with solely using a longtermist view to make policy judgments.


  6. Sep 2023
    1. Inside effective altruism, where the far future counts a lot more than the present

      ||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".


    1. The Reluctant Prophet of Effective Altruism

      ||Jovan||

      This is an in-depth interview with the original founder of the effective altruism movement. There have been several iterations of this philanthropic/philosophical movement, which turned political and ambitious very quickly.


    1. This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.

      ||Jovan|| ||JovanNj||

      An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little of how the LLM "thinks" (very differently from a human, who may or may not have memorized the Shakespeare quote but would nonetheless understand that the prompt matters more than reciting the most plausible quote).


    1. The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).

      It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning--to see where its algorithmic reasoning will break.

      Also, it might be a good comparison study to show on paper how the Diplo model works better with higher-quality data (bottom-up approach). It'd be good evidence to show, I suppose. ||Jovan|| ||JovanNj||
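      The frozen-model distinction the excerpt draws (zero-shot vs. few-shot vs. fine-tuning) can be illustrated with prompts alone; in the first two cases the weights never change, only the prompt does. A minimal sketch (task wording and format are my own illustration, not from the paper):

```python
# Zero-shot: the frozen model receives only a task description and the query.
def zero_shot_prompt(task: str, query: str) -> str:
    return f"{task}\nInput: {query}\nOutput:"

# Few-shot: the same frozen model, but the prompt carries worked examples.
def few_shot_prompt(task: str, examples: list, query: str) -> str:
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n{shots}\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    "Label the sentiment as positive or negative.",
    [("great talk", "positive"), ("waste of time", "negative")],
    "insightful session",
)
```

      Fine-tuning, by contrast, would update the model weights on such input/output pairs, which is where a bottom-up, higher-quality dataset like Diplo's would come in.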

    2. The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).

      ||Jovan|| ||JovanNj||

      I found this interesting, as it might shed light on a problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT: the model was really good at generating a quote that doesn't exist and was never said by the person it was attributed to.

      This is very plausible because, in the "reality" the model lives in, multiple data sources make it probable that "this quote might have existed, and it makes sense that this quote follows that quote, that name, and that organization." It is interesting to see where a model that is very good at inductive reasoning and association sometimes fails, because induction and association aren't the only logics humans use to approach reality.


  7. Aug 2023
    1. Sakana’s approach could potentially lead to AI that’s cheaper to train and use than existing technology. That includes generative AI

      Influence on costs

    2. Sakana is still in the early stages: It hasn’t yet built an AI model and doesn’t have an office.

      Very early stage.

    3. The startup plans to make multiple smaller AI models, the kind of technology that powers products like ChatGPT, and have them work together. The idea is that a “swarm” of programs could be just as smart as the massive undertakings from larger organizations.

      This sounds similar to our "bottom-up AI" approach

      ||JovanK||||sorina||||anjadjATdiplomacy.edu||


    1. Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.

      ||Jovan|| ||JovanNj||

      I'm not sure this actually was the problem, though. The Perspective API researchers deliberately designed the prompt given to annotators to be ambiguous and left to the annotators' own perspective: they asked annotators to mark a comment as toxic or not based on whether it would make them want to stay in or leave the conversation.

      The reasoning behind this ambiguity seems to be that the researchers didn't want to define a priori what counts as "good" and "bad", relying instead on the ambiguities of how people feel. This makes sense to me: if we fix a dictionary of good and bad words in the system, we are also exercising our own bias and taking words out of context (ignoring contextual significance ourselves).

    2. Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      ||Jovan|| ||JovanNj||

      The problem happens to be that in the training dataset, most "toxic" comments include words referring to and directed against historically discriminated groups, so Perspective API learned the linkage that the mere presence of words referring to these groups automatically makes a comment toxic.

      I recently came across the concept of "contextual significance" that was created by early pattern recognition researchers in the 1950s, which basically means that a successful machine should be able to judge which meaning of the word is invoked in a given context (the "neighborhood", the nearby words of a sentence/pixels of a picture) and what effect it would create for which group of people. Perspective API lacked this.

      The Perspective researchers apparently decided to feed the algorithm more "non-toxic" comments that include terms relating to minorities or discriminated groups to balance out the adverse score associated with them.

    3. Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      If I recall correctly, Perspective API by Google Jigsaw was trained on a dataset of Wikipedia Talk page comments labelled by crowdsourced workers. Just some background information.

    4. OpenAI proposes a new way to use GPT-4 for content moderation

      ||Jovan|| ||JovanNj||

      GPT-4 apparently can take a defined policy and check whether new inputs violate it. I'm more curious about how we could understand the logic or reasoning the model uses to classify inputs as policy-compliant or non-compliant.
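      One way to probe that reasoning is to ask the model to return a rationale alongside its verdict. A minimal sketch, with the caveat that the prompt format and JSON schema below are my assumptions rather than OpenAI's published method, and the model call is stubbed with a hand-written reply:

```python
import json

def moderation_prompt(policy: str, content: str) -> str:
    # Request a verdict *and* a rationale so the model's reasoning is visible.
    return (
        "You are a content moderator. Policy:\n" + policy + "\n\n"
        "Content:\n" + content + "\n\n"
        'Reply only with JSON: {"violates": true or false, "rationale": "..."}'
    )

def parse_verdict(model_reply: str) -> dict:
    return json.loads(model_reply)

# In a real system, moderation_prompt(...) would be sent to GPT-4;
# here we only parse a hand-written stand-in reply.
reply = '{"violates": false, "rationale": "No policy clause is triggered."}'
verdict = parse_verdict(reply)
```

      The rationale field does not prove the model actually used that reasoning internally, which is exactly the interpretability question raised above.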


    1. "We're enabling ownership of these technologies by African founders," he told DW. "This is being built and served by people from these communities. So the financial rewards will also go directly back to them."

      ||Jovan||

      Might be worthwhile to investigate how this mode of grassroots model-building is done. To me, it is even more interesting to think about how start-ups that work closely with local communities and embrace this "bottom-up" approach thrive in the places most left out by the biggest/hottest machine learning systems of this day (like ChatGPT, DeepMind, etc.).


    1. Consistent with the findings of the 2021 edition, software tools and reports are the most common outputs of UN AI projects, which can be used to address challenges impeding progress on the SDGs

      ||Jovan|| ||JovanNj||

      It might be interesting to look at the software tools and how many of those projects have come to fruition. Just as how Diplo is now exploring ways to incorporate AI tools in our line of work, other UN organisations are doing so, too. What is the scale of their incorporation (does AI replace a core/small function, or does AI assist humans in their job?)

      This might teach us about organisational thinking with regard to the incorporation of AI beyond weightless calls for more awareness of AI-induced transformation.


  8. Jul 2023
    1. Debunking Common Misconceptions about Prompt Engineering

      Misconceptions about AI prompting


    1. they are most common when asking for quotes, sources, citations, or other detailed information

      When hallucination is most likely to appear in LLMs.

    2. This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementation of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.

      ||Andrej||||Dragana||||sorina||||Jovan||

      this seems to be a paper worth consulting for our approach of using AI in the learning process


    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed

      !

    3. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So applying to what comes next...

    4. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    5. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations


    1. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    2. Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    3. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in the implementation. E.g. whose standards would determine what a 'highly competent' AI expert is?

    4. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    5. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    6. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    7. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    8. standards

      Again, what kind of standards are we talking about?

    9. Establish guidelines

      Don't we have enough of these?

    10. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    11. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    12. developing and/or enabling safe forms of access to AI.

      What does this mean?

    13. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?


  9. Jun 2023
    1. Interesting discussion on ways to regulate AI use, and the role (limitations) of open source there, by Bruce Schneier and Waldo.

      It raises some interesting questions about the accountability of the open source community. They argue, as many others do, that the OS community is too fluid to be regulated. I tend to disagree: the OS community has many levels, and a certain OS component (say, a GitHub code) gets picked up by others at certain points and pushed to a mass market for benefit (commercial or other). It is when such OS products are picked up that the risk explodes - and it is then that we see tangible entities (companies or orgs) that should be and are accountable for how they use the OS code and push it to the mass market.

      I see an analogy with vulnerabilities in digital products, and the responsibility of the OS community for supply chain security. While each coder should be accountable, for individuals it probably boils down to ethics (as the effect of a single GitHub product is very limited); but there are entities in this supply chain that integrate such components and that clearly should be held accountable.

      My comments below. It is an interesting question for Geneva Dialogue as well, not only for AI debates.

      cc ||JovanK|| ||anastasiyakATdiplomacy.edu||


    1. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company

      why exactly?


  10. May 2023
    1. three principles, transparency, accountability, and limits on use.

      3 principles for AI governance

    2. Number one, you’re here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what’s gonna happen. But there’s a big fear you’ve expressed to all of you about what bad actors can do and will do if there’s no rules of the road. Number three, as a member who served in the house and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.

      A good summary of the current situation with AI technology.

    3. And what auto GPT does is it allows systems to access source code, access the internet and so forth. And there are a lot of potential, let’s say cybersecurity risks. There, there should be an external agency that says, well, we need to be reassured if you’re going to release this product that there aren’t gonna be cybersecurity problems or there are ways of addressing it.

      ||VladaR|| Vlada, please follow-up on this aspect on AI and cybersecurity.

    4. the central scientific issue

      Is it a 'scientific issue'? I do not think so. It is more a philosophical, and possibly even theological, issue. Can science tell us what is good and bad?

    5. the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.

      The EU AI Act uses precision regulation: regulating the use of AI in specific contexts.

    6. a reasonable care standard.

      Another vague concept. What is 'reasonable'? There will be a lot of work for AI-powered lawyers.

    7. Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to high, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?

      How to avoid repeating with AI governance what happened with Section 230 and social media governance?

    8. the current version of GPT-4 ended to training in 2021.

      2021 is starting to become a 'safety net' for OpenAI.

    9. Sen. Marsha Blackburn (R-TN):

      It is probably the most practical approach to AI governance. The senator from Tennessee asked many questions on the protection of musicians' copyright. Is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.

    10. that people own their virtual you.

      People can own it only with 'bottom-up AI'.

    11. When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and each, for each jurisdiction, every company has to train another model.

      It is a naive view, because AI is shaped by ethics, and ethics is very 'local'. Yes, there are some global ethical principles: protect human life and dignity. But many other ethical rules are very 'local'.

    12. need a cabinet level organization within the United States in order to address this.

      Who can govern AI?

    13. And we probably need scientists in there doing analysis in order to understand what the political influences of, for example, of these systems might be.

      Marcus tries to make a case for 'scientists'. But, frankly speaking, how can scientists decide whether AI should rely on a book written in favour of Republicans or Democrats or, even more as AI develops with more sophistication, what 'weight' it should give to one source or another?

      It is VERY dangerous to place ethical and political decisions in the hands of scientists. It is also unfair towards them.

    14. If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.

      This is what worries politicians: how to win elections? They like 'to see' (use AI for their needs) but 'not to be seen' (have it used by somebody else). The main problem with political elites worldwide is that they may win elections with the use of AI (or not), but humanity is sliding into 'knowledge slavery' by AI.

    15. large language models can indeed predict public opinion.

      They can, as they can, for example, predict the continuation of this debate in the political space.

    16. so-called artificial general intelligence really will replace a large fraction of human jobs.

      It is a good point. There won't be more work.

    17. And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years?

      It is a crucial question. One generation will be 'thrown under the bus' in the transition. The generation aged 25-50 should 'fasten their seat belts'. They were educated in the 'old system' while having to work in a very uncertain new economy.

    18. So I think the most important thing that we could be doing and can, and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for, for years now in doing that in focusing on skills-based hiring in educating for the skills of the future. Our skills build platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.

      It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if we need less human labour.

    19. not a creature,

      Good point on avoiding anthropomorphism.

    20. The National Institutes of Standards and technology actually already has an AI accuracy test,

      It would be interesting to see how it works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.

    21. Ultimately, we may need something like cern Global, international and neutral, but focused on AI safety rather than high energy physics.

      He probably had in mind an analogy with the IPCC as a supervisory space. But CERN could play a role as a place for research on AI and for processing huge amounts of data.

    22. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.

      An important stakeholder.

    23. We all more or less agrees on the values we would like for our AI systems to honor.

      Are we? Maybe in the USA, but not globally. Consult the work of Moral Machine, which shows that different cultural contexts imply whom we would save in the trolley experiment: young vs elderly, men vs women, rich vs poor. See more: https://www.moralmachine.net/

    24. a threshold of capabilities

      What is 'a threshold'? As always, the devil is in the detail.

    25. I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed.

      A good reminder of Jung's work. It is in line with the Frankenstein warnings of Mary Shelley.

      ||Jovan||


    1. Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.

      ||JovanNj|| ||anjadjATdiplomacy.edu|| Is it possible to have a personalised AI in an evening?


    1. it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on.

      An interesting concept of the machine heuristic.


    1. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.

      ||sorina|| Here is an interesting distinction between horizontal (EU) and vertical (China) approaches to AI regulation.


    1. The people who are already well versed in something are going to be the ones capable of making the most helpful applications for that particular field or industry.

      ||VladaR|| This is our main advantage, which we should activate via cognitive proximity. We know what we are talking about, and we know how to use AI.

    2. arent there already LLM models that cite their sources? or I heard that new plugin with chat GPT can cite its sources

      ||JovanNj|| Are there models that can cite their sources?

    3. The general consensus is that, especially customer facing automation, MUST be "explainable." Meaning whenever a chat bot or autonomous system writes something or makes a decision, we have to know exactly how and why it came to that conclusion.

      explainability is critical

    4. They are caught up in the hype and just like everyone else have zero clue what's actually going to happen.

      narrative


  11. Mar 2023
    1. ||JovanNj|| ||Katarina_An|| ||anjadjATdiplomacy.edu|| ||VladaR|| This is an interesting story about style of communication.


  12. Feb 2023