224 Matching Annotations
  1. Last 7 days
    1. Experts expect some inference to start moving from specialist graphics-processing units (GPUs), which are Nvidia’s forte, to general-purpose central processing units (CPUs) like those used in laptops and smartphones, which are AMD’s and Intel’s.

      So what does this mean in practical terms? Is Nvidia losing ground because the more common, general-purpose CPUs are becoming relevant for AI? ||JovanK||

  2. Feb 2024
    1. Types of vector embeddings

      What type of embeddings do we have in our vector database: word, sentence, or document? Could we have embeddings of all three?

      ||JovanNj|| ||anjadjATdiplomacy.edu||
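
      A minimal sketch of how all three levels could coexist in one vector store (assuming the sentence-transformers package and the all-MiniLM-L6-v2 model; the texts and the records list are made-up stand-ins for a real collection): each entry is just a vector plus metadata saying whether it represents a word, a sentence, or a document.

      ```python
      import numpy as np
      from sentence_transformers import SentenceTransformer

      model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small embedding model

      doc = "Diplomacy shapes digital governance. AI policy is debated in Geneva."
      sentences = doc.split(". ")
      words = doc.replace(".", "").split()

      records = []  # stand-in for a vector-database collection
      records += [{"level": "word", "text": w, "vec": v}
                  for w, v in zip(words, model.encode(words))]
      records += [{"level": "sentence", "text": s, "vec": v}
                  for s, v in zip(sentences, model.encode(sentences))]
      # One simple document-level embedding: the mean of the sentence vectors.
      records.append({"level": "document", "text": doc,
                      "vec": np.mean(model.encode(sentences), axis=0)})

      print({r["level"] for r in records})  # {'word', 'sentence', 'document'}
      ```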

    1. Computing Power and the Governance of Artificial Intelligence

      @jovan @sorina This article (published just a week ago) presents new research on regulating frontier AI models by imposing restrictions and safety measures on AI chips.

      I previously shared a similar article called 'How to Catch a Chinchilla', which laid out preliminary ideas for this one.

      The article focuses on regulating the 'hardware' part of frontier models, both because it is more visible and easier to track/locate and because, as far as the current way AI is built is concerned, the development of advanced AI models still relies on securing a large number of AI chips.

    1. Another Hanna paper, presented at the Resistance AI workshop, urges the machine learning community to go beyond scale when considering how to address systemic social issues and asserts that resistance to scale thinking is needed.

      From what I observed, the notion of 'scaling an algorithm' always results in high decontextualization of data and outputs (i.e., training data is taken out of the context from which it was derived, and the outputs are applied to various contexts instead of merely the one the model was trained for).

      And a major reason for the lack of reflection in this regard was, in my opinion, the tech community's blatant refusal to think about the systemic, societal risks of their models: they framed these as a problem for policymakers and ethicists, assuming that the social dimension can be mitigated separately from the technical conception of the models.

    2. “We argue that fixes that focus narrowly on improving datasets by making them more representative or more challenging might miss the more general point raised by these critiques, and we’ll be trapped in a game of dataset whack-a-mole rather than making progress, so long as notions of ‘progress’ are largely defined by performance on datasets,” the paper reads. “Should this come to pass, we predict that machine learning as a field will be better positioned to understand how its technology impacts people and to design solutions that work with fidelity and equity in their deployment contexts.”

      I suspect that the current AI regulation discussions are missing the long-term risks that certain paradigms in the current AI development space might pose; one is the predominant idea that we can resolve fairness simply by codifying fairness metrics and making them a requirement for a model to 'hit the market'.

      This logic fails to engage with the point this paragraph is making: ensuring fairness goes beyond getting a 'fair dataset' that performs well; it concerns what the deployment context is, how developers and users are to interact with the AI model, and how the model becomes interwoven in the lives of the people working around it.

  3. Jan 2024
    1. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv, attempts to detect and remove such two-faced behaviour are often useless — and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse “was something that was particularly surprising to us … and potentially scary”, says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California.

      Kind of scary indeed. It takes us back to the question of trust in developers.

    1. developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process

      also interesting

    2. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

    3. commit to concluding without delay a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems

      LAWS - another proposed strong commitment

    1. 70% of respondents thought AI safety research should be prioritized more than it currently is.

      need for more AI safety research

    2. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    3. Amount of concern potential scenarios deserve, organized from most to least extreme concern

      very interesting, in particular the relatively limited concern related to the sense of meaning/purpose

    4. Even among net optimists, nearly half (48.4%) gave at least 5% credence to extremely bad outcomes, and among net pessimists, more than half (58.6%) gave at least 5% to extremely good outcomes. The broad variance in credence in catastrophic scenarios shows there isn’t strong evidence understood by all experts that this kind of outcome is certain or implausible

      basically difficult to predict the consequences of high-level machine intelligence

    5. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      likelihood of existing and exclusion AI risks

    6. Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems’ choices, with only 20% giving it better than even odds.

      predictions on explainability / interpretability of AI systems

    7. Answers reflected substantial uncertainty and disagreement among participants. No trait attracted near-unanimity on any probability, and no more than 55% of respondents answered “very likely” or “very unlikely” about any trait.

      !

    8. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI

      .

    9. ‘intelligence explosion,’

      Term to watch for: intelligence explosion

    10. The top five most-suggested categories were: “Computer and Mathematical” (91 write-in answers in this category), “Life, Physical, and Social Science” (77 answers), “Healthcare Practitioners and Technical” (56), “Management” (49), and “Arts, Design, Entertainment, Sports, and Media”

      predictions on occupations likely to be among the last automatable

    11. predicted a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 survey

      Timeframe prediction for full automation of labour: 50% chance it would happen by 2116.

      But what does this prediction - and the definition of full automation of labour - mean in the context of an ever-evolving work/occupation landscape? What about occupations that might not exist today? Can we predict how those might or might not be automated?

    12. Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [. . . ] Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers

      Q: What is full automation of labour?

    13. the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.

      Survey: 10% chance that machines become better than humans in 'every possible task' by 2027, but 50% by 2047.

    14. High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption

      Q: What is high-level machine intelligence?

    15. ‘High-Level Machine Intelligence’

      new term: high-level machine intelligence

    16. six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)

      Expectations on the tasks that AI is likely to take over more than 10 years from now

    17. lack of apparent consensus among AI experts on the future of AI

      This has always been the case, no?

    18. was disagreement about whether faster or slower AI progress would be better for the future of humanity.

      interesting also

    19. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

      .

    20. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting

    21. the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
  4. Dec 2023
    1. Andrew Ng: ‘Do we think the world is better off with more or less intelligence?’

      A very lucid interview. He highlights the importance of open-source AI, the danger of regulating LLMs as opposed to applications, the negative lobbying of most big tech, and the need to focus on good regulation targeting the problems of today rather than speculating about extinction, etc. ||JovanK|| and ||sorina||

  5. Oct 2023
    1. The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

      ||Jovan||||sorina||||JovanNj||

      Probably old news to you, but here's an article about the billionaire who founded Open Philanthropy (a core funder of EA activities). It also explains its reach into politics.

    1. William Isaac

      Staff Research Scientist on DeepMind’s Ethics and Society Team and Research Affiliate at the University of Oxford’s Centre for the Governance of AI: https://wsisaac.com/#about

      Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!

    2. Arvind Narayanan

      Professor of computer science from Princeton University and the director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.

      Haven't read his work closely yet, but it seems sensible to me.

    3. Sara Hooker,

      Director at Cohere: https://cohere.com/ (an LLM AI company).

    4. Yoshua Bengio

      Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

      Very influential computer scientist, and considered a leading force in AI. Also an AI doomer, though I can't find a clear link between him and EA.

    5. Irene Solaiman

      Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/

    6. Paul Christiano
    7. Capabilities and risks from frontier AI

      ||Jovan|| ||sorina|| ||JovanNj||

      This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).

    1. On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

      ||Jovan||||sorina||||JovanNj||

      This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.

      Also, it shouldn't come as a surprise to us that the UK government would be the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based in and attached to UK universities (Oxbridge).

    1. We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

      @Jovan @Sorina

      The very same people warning us of extinction-level AI risks are the same people who are developing technologies in a way that leads us to it. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.

    2. What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

      @Jovan @Sorina

      This is my concern with solely using a longtermist view to make policy judgments.

  6. Sep 2023
    1. Inside effective altruism, where the far future counts a lot more than the present

      ||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".

    1. The Reluctant Prophet of Effective Altruism

      ||Jovan||

      This is an in-depth interview with the original founder of the effective altruism movement. There were several iterations of the same philanthropic/philosophical movement, which turned political and ambitious very soon.

    1. This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.

      ||Jovan|| ||JovanNj||

      An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little bit of how the LLM "thinks" (which is very different from a human, who may or may not have memorized the Shakespeare quote by heart but would nonetheless understand that the "prompt" is more important than reciting the most plausible quote).

    1. The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).

      It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning--to see where its algorithmic reasoning will break.

      Also, it might be a good comparison study to show on paper how the Diplo model works better with higher-quality data (bottom-up approach). It'd be good evidence to show, I suppose. ||Jovan||||JovanNj||

    2. The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).

      ||Jovan|| ||JovanNj||

      I found this interesting as it might be insightful for the problem I faced when I was writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was really good at generating a quote that doesn't exist and was never said by the person it was attributed to.

      It is very plausible because from the "reality" the model lives in, multiple data sources made it probable that "this quote might have existed and it makes sense that this quote follows that quote and follows that name and that organization." It is interesting to actually see where the model that is very good at inductive reasoning and association fails sometimes, because induction and association aren't the only two logics humans use to approach reality.

  7. Aug 2023
    1. Sakana’s approach could potentially lead to AI that’s cheaper to train and use than existing technology. That includes generative AI

      Influence on costs

    2. Sakana is still in the early stages: It hasn’t yet built an AI model and doesn’t have an office.

      Very early stage.

    3. The startup plans to make multiple smaller AI models, the kind of technology that powers products like ChatGPT, and have them work together. The idea is that a “swarm” of programs could be just as smart as the massive undertakings from larger organizations.

      This sounds similar to our "bottom-up AI" approach

      ||JovanK||||sorina||||anjadjATdiplomacy.edu||

    1. Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.

      ||Jovan|| ||JovanNj||

      I'm not sure if this actually was the problem, though. The Perspective API researchers deliberately designed the prompt given to the annotators to be ambiguous and left up to the latter's own perspective. The researchers asked the annotators to mark a comment as toxic or not based on whether that comment would make them want to stay in or leave the conversation.

      The reasoning behind this ambiguity seems to be that the researchers didn't want to give, a priori, a set of what is defined as "good" and "bad", and instead relied on the ambiguities of how people feel. This makes sense to me: if we have a dictionary of good words and bad words fixed in the system, we are also exercising our own bias and taking words out of context (we ourselves are ignoring contextual significance as well).

    2. Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      ||Jovan|| ||JovanNj||

      The problem happens to be that, in the training dataset, most "toxic" comments include words referring to and directed against historically discriminated groups, so Perspective API made the linkage that the presence of words referring to these groups automatically makes a comment toxic.

      I recently came across the concept of "contextual significance" that was created by early pattern recognition researchers in the 1950s, which basically means that a successful machine should be able to judge which meaning of the word is invoked in a given context (the "neighborhood", the nearby words of a sentence/pixels of a picture) and what effect it would create for which group of people. Perspective API lacked this.

      The Perspective researchers apparently decided to feed the algorithm more "non-toxic" comments that include terms relating to minorities or discriminated groups to balance out the adverse score associated with them.

    3. Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      If I recall correctly, Perspective API by Google Jigsaw was trained on a dataset consisting of the comment section of Wikipedia Talk labelled by crowd-sourced workers. Just some background information.

    4. OpenAI proposes a new way to use GPT-4 for content moderation

      ||Jovan|| ||JovanNj||

      GPT-4 apparently can take a defined policy and check whether new inputs violate it. I'm more curious about how we could understand the logic or reasoning the model uses for classifying these policy-compliant or policy-non-compliant inputs.
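
      A minimal sketch of the pattern described above: hand the model a written policy plus a piece of content and ask for a structured judgement. It assumes the openai Python package, an OPENAI_API_KEY in the environment, and a placeholder policy and model name; it is not OpenAI's own moderation setup.

      ```python
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # Hypothetical policy text; the appeal of the approach is that changing this
      # string changes the moderation behaviour without retraining anything.
      POLICY = """Content is non-compliant if it contains personal insults,
      threats, or calls for violence. Everything else is compliant."""

      def moderate(text: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-4",  # placeholder model name
              messages=[
                  {"role": "system",
                   "content": f"You are a content moderator. Apply this policy:\n{POLICY}\n"
                              "Answer 'compliant' or 'non-compliant' and give a one-line reason."},
                  {"role": "user", "content": text},
              ],
          )
          return resp.choices[0].message.content

      print(moderate("Have a great day, everyone!"))
      ```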

    1. "We're enabling ownership of these technologies by African founders," he told DW. "This is being built and served by people from these communities. So the financial rewards will also go directly back to them."

      ||Jovan||

      Might be worthwhile to investigate how this mode of grassroots model-building is done. To me, it is even more interesting to think about how start-ups working closely with local communities and embracing this "bottom-up" approach thrive in places that are the most left out by the biggest/hottest machine learning algorithms of this day (like ChatGPT, DeepMind, etc.).

    1. Consistent with the findings of the 2021 edition, software tools and reports are the most common outputs of UN AI projects, which can be used to address challenges impeding progress on the SDGs

      ||Jovan|| ||JovanNj||

      It might be interesting to look at the software tools and how many of those projects have come to fruition. Just as Diplo is now exploring ways to incorporate AI tools in our line of work, other UN organisations are doing so, too. What is the scale of their incorporation (does AI replace a core/small function, or does AI assist humans in their jobs)?

      This might teach us about organisational thinking with regard to the incorporation of AI beyond weightless calls for more awareness of AI-induced transformation.

  8. Jul 2023
    1. Debunking Common Misconceptions about Prompt Engineering

      Misconceptions about AI prompting

    1. they are most common when asking for quotes, sources, citations, or other detailed information

      When hallucination is most likely to appear in LLMs.

    2. This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementation of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.

      ||Andrej||||Dragana||||sorina||||Jovan||

      this seems to be a paper worth consulting for our approach of using AI in the learning process

    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed

      !

    3. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So applying to what comes next...

    4. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    5. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations

    1. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    2. Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    3. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in implementation. E.g.: whose standards would determine what counts as a 'highly competent' AI expert?

    4. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    5. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    6. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    7. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    8. standards

      Again, what kind of standards are we talking about?

    9. Establish guidelines

      Don't we have enough of these?

    10. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    11. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    12. developing and/or enabling safe forms of access to AI.

      What does this mean?

    13. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

  9. Jun 2023
    1. Interesting discussion on ways to regulate AI use, and the role (limitations) of open source there, by Bruce Schneier and Waldo.

      It raises some interesting questions about the accountability of the open source community. They argue, as many others do, that the OS community is too fluid to be regulated. I tend to disagree - the OS community has many levels, and a certain OS component (say, code on GitHub) gets picked up by others at certain points and pushed to a mass market for benefit (commercial or other). It is when such OS products are picked up that the risk explodes - and it is then that we see tangible entities (companies or organisations) that should be, and are, accountable for how they use the OS code and push it to the mass market.

      I see an analogy with vulnerabilities in digital products, and the responsibility of the OS community for supply chain security. While each coder should be accountable, for individuals it probably boils down to ethics (as the effect of a single GitHub product is very limited); but there are entities in this supply chain that integrate such components and that clearly should be held accountable.

      My comments are below. It is an interesting question for the Geneva Dialogue as well, not only for AI debates.

      cc ||JovanK|| ||anastasiyakATdiplomacy.edu||

    1. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company

      why exactly?

  10. May 2023
    1. three principles, transparency, accountability, and limits on use.

      3 principles for AI governance

    2. Number one, you’re here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what’s gonna happen. But there’s a big fear you’ve expressed to all of you about what bad actors can do and will do if there’s no rules of the road. Number three, as a member who served in the house and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.

      A good summary of the current situation with AI technology.

    3. And what auto GPT does is it allows systems to access source code, access the internet and so forth. And there are a lot of potential, let’s say cybersecurity risks. There, there should be an external agency that says, well, we need to be reassured if you’re going to release this product that there aren’t gonna be cybersecurity problems or there are ways of addressing it.

      ||VladaR|| Vlada, please follow up on this aspect of AI and cybersecurity.

    4. the central scientific issue

      Is it a 'scientific issue'? I do not think so. It is more a philosophical and possibly even theological issue. Can science tell us what is good and bad?

    5. the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.

      The EU AI Act uses precision regulation, regulating AI in specific contexts.

    6. a reasonable care standard.

      Another vague concept. What is 'reasonable'? There will be a lot of work for AI-powered lawyers.

    7. Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to high, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?

      How to avoid repeating with AI governance what happened with Section 230 and social media governance?

    8. the current version of GPT-4 ended training in 2021.

      2021 is starting to be a 'safety net' for OpenAI

    9. Sen. Marsha Blackburn (R-TN):

      It is probably the most practical approach to AI governance. The senator from Tennessee asked many questions on the protection of musicians' copyright. Is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.

    10. that people own their virtual you.

      People can own it only with 'bottom-up AI'

    11. When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and each, for each jurisdiction, every company has to train another model.

      It is a naive view because AI is shaped by ethics, and ethics is very 'local'. Yes, there are some global ethical principles: protect human life and dignity. But many other ethical rules are very 'local'.

    12. need a cabinet level organization within the United States in order to address this.

      Who can govern AI?

    13. And we probably need scientists in there doing analysis in order to understand what the political influences of, for example, of these systems might be.

      Marcus tries to make a case for 'scientists'. But, frankly speaking, how can scientists decide whether AI should rely on a book written in favour of Republicans or Democrats or, even more so as AI develops greater sophistication, what 'weight' it should give to one source or another?

      It is VERY dangerous to place ethical and political decisions in the hands of scientists. It is also unfair towards them.

    14. If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.

      This is what worries politicians - how to win elections? They like 'to see' (use AI for their needs) but 'not to be seen' (used by somebody else). The main problem with political elites worldwide is that they may win elections with the use of AI (or not), but humanity is sliding into 'knowledge slavery' by AI.

    15. large language models can indeed predict public opinion.

      They can, as they can, for example, predict the continuation of this debate in the political space.

    16. so-called artificial general intelligence really will replace a large fraction of human jobs.

      It is a good point. There won't be more work.

    17. And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years?

      It is a crucial question. One generation will be 'thrown under the bus' in the transition. The generation aged 25-50 should 'fasten their seat belts'. They were educated in the 'old system' while they will have to work in a very uncertain new economy.

    18. So I think the most important thing that we could be doing and can, and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for, for years now in doing that in focusing on skills-based hiring in educating for the skills of the future. Our skills build platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.

      It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if we need less human labour.

    19. not a creature,

      Good point on avoiding anthropomorphism.

    20. The National Institutes of Standards and technology actually already has an AI accuracy test,

      It would be interesting to see how it works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.

    21. Ultimately, we may need something like CERN, global, international and neutral, but focused on AI safety rather than high energy physics.

      He probably had in mind an analogy with the IPCC as a supervisory space. But CERN could play a role as a place for research on AI and for processing huge amounts of data.

    22. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.

      An important stakeholder.

    23. We all more or less agrees on the values we would like for our AI systems to honor.

      Are we? Maybe in the USA, but not globally. Consult the work of the Moral Machine project, which shows that different cultural contexts influence whom we would save in the trolley experiment: young vs elderly, men vs women, rich vs poor. See more: https://www.moralmachine.net/

    24. a threshold of capabilities

      What is 'a threshold'? As always, the devil is in the detail.

    25. I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed.

      A good reminder of Jung's work. It is in line with the Frankenstein warnings of Mary Shelley.

      ||Jovan||

    1. Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.

      ||JovanNj||||anjadjATdiplomacy.edu|| Is it possible to have a personalised AI in an evening?
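
      A minimal sketch of what an 'evening on a laptop' fine-tune might look like with LoRA adapters (assumptions: the transformers, peft, and datasets packages, a small base model such as facebook/opt-350m, and a toy my_notes.txt file; an illustration, not the setup the quoted text describes).

      ```python
      from datasets import load_dataset
      from peft import LoraConfig, get_peft_model
      from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                                TrainingArguments, DataCollatorForLanguageModeling)

      base = "facebook/opt-350m"  # assumed small base model; any small causal LM works
      tok = AutoTokenizer.from_pretrained(base)
      model = AutoModelForCausalLM.from_pretrained(base)

      # LoRA trains only small low-rank adapter matrices, keeping memory and time modest.
      model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

      data = load_dataset("text", data_files={"train": "my_notes.txt"})["train"]
      data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512), batched=True)

      trainer = Trainer(
          model=model,
          args=TrainingArguments(output_dir="personal-lora", num_train_epochs=1,
                                 per_device_train_batch_size=1, learning_rate=2e-4),
          train_dataset=data,
          data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
      )
      trainer.train()
      model.save_pretrained("personal-lora")  # adapter weights are only a few MB
      ```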

    1. it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on.

      An interesting concept of machine heuristic.

    1. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.

      ||sorina|| Here is an interesting distinction between horizontal (EU) and vertical (China) approaches to AI regulation.

    1. The people who are already well versed in something are going to be the ones capable of making the most helpful applications for that particular field or industry.

      ||VladaR|| This is our main advantage which we should activate via cognitive proximity. We know what we are talking about and we know how to use AI.

    2. arent there already LLM models that cite their sources? or I heard that new plugin with chat GPT can cite its sources

      ||JovanNj|| Are there models that can cite sources?

    3. The general consensus is that, especially customer facing automation, MUST be "explainable." Meaning whenever a chat bot or autonomous system writes something or makes a decision, we have to know exactly how and why it came to that conclusion.

      explainability is critical

    4. They are caught up in the hype and just like everyone else have zero clue what's actually going to happen.

      narrative

  11. Mar 2023
    1. ||JovanNj|| ||Katarina_An|| ||anjadjATdiplomacy.edu|| ||VladaR|| This is an interesting story about style of communication.

  12. Feb 2023
    1. OpenAI announced they've "trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers", saying it is not "fully reliable": it correctly identifies 26% of AI-written text (true positives) as "likely AI-written," while incorrectly labeling human-written text as AI-written 9% of the time (false positives).

      ||JovanNj|| ||Jovan||
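
      A quick back-of-the-envelope check of what those rates imply (the base rates below are assumptions, not OpenAI's figures): with a 26% true-positive rate and a 9% false-positive rate, a "likely AI-written" flag is only moderately informative.

      ```python
      def precision(tpr: float, fpr: float, base_rate: float) -> float:
          """P(text really is AI-written | classifier says 'likely AI-written')."""
          true_pos = tpr * base_rate
          false_pos = fpr * (1 - base_rate)
          return true_pos / (true_pos + false_pos)

      # Assumed base rates (share of AI-written text in the pool being checked).
      for base in (0.1, 0.5):
          print(f"base rate {base:.0%}: precision {precision(0.26, 0.09, base):.0%}")
      # ~24% at a 10% base rate, ~74% at a 50% base rate
      ```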

  13. Jan 2023
    1. A python module to generate optimized prompts, Prompt-engineering & solve different NLP problems using GPT-n (GPT-3, ChatGPT) based models and return structured python object for easy parsing

      ||JovanNj||||anjadjATdiplomacy.edu|| Could this 'promptify' software be interesting for us to use?

    1. OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.

      OpenAI apparently working on a tool to watermark AI-generated content and make it 'easier to spot'.

      ||JovanNj||||Jovan||
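
      A toy sketch of the general idea behind statistical text watermarking, not OpenAI's actual (unpublished) tool: a secret key seeds a pseudo-random 'green list' of tokens at each step, generation slightly favours green tokens, and a detector holding the key counts how many tokens landed on their green list. The vocabulary and key below are made up.

      ```python
      import hashlib
      import random

      VOCAB = [f"tok{i}" for i in range(1000)]  # hypothetical toy vocabulary
      SECRET_KEY = "s3cret"                     # known only to the watermarking party

      def green_list(prev_token: str, fraction: float = 0.5) -> set:
          """Derive the per-step 'green list' from the secret key and the previous token."""
          seed = int(hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest(), 16)
          return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * fraction)))

      def pick_token(prev_token: str, candidates: list) -> str:
          """Prefer a candidate from the green list when one exists (the 'secret signal')."""
          green = green_list(prev_token)
          preferred = [c for c in candidates if c in green]
          return random.choice(preferred or candidates)

      def detect(tokens: list) -> float:
          """Fraction of tokens on their green list; watermarked text scores well above ~0.5."""
          hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
          return hits / max(len(tokens) - 1, 1)

      text = ["tok0"]
      for _ in range(50):
          text.append(pick_token(text[-1], random.sample(VOCAB, 10)))
      print(detect(text))  # typically well above the ~0.5 expected by chance
      ```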

  14. Oct 2022
    1. This text, sent by ||sorina||, discusses how machines can simulate common sense.

      It is rather realistic because it starts with the assumption that AI cannot replace human consciousness, but can 'simulate' it by observing and measuring.

      It is based on 'heuristics', a philosophical concept that deals with the way we make decisions.

      Practically speaking, AI learns from experience through human evaluation of AI decisions and 'reinforced' learning. In that sense, what we do with the text is methodologically similar: we ask AI to provide us with drafts and we react to them based on our intelligence and knowledge.

      ||Jovan||

    2. Common sense is different from intelligence in that it is usually something innate and natural to humans that helps them navigate daily life, and cannot really be taught.

      common sense vs intelligence

    1. Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.

      Open Loop project of Meta/Facebook on linking policymakers and technology companies.

      ||sorina||

    1. TITLE: NATO establishes review board to govern responsible use of AI

      CONTENT: NATO has established a Review Board to govern the responsible development and use of artificial intelligence (AI) and data across the organisation. The decision was taken at the meeting of NATO Ministers of Defence which took place in Brussels on 12–13 October 2022. The Data and Artificial Intelligence Review Board (DARB) will work on developing a user-friendly responsible AI certification standard to help align new AI and data projects with NATO's Principles of Responsible Use. The board is also expected to act as a platform allowing the exchange of views and best practices to help create quality controls, mitigate risks, and adopt trustworthy and interoperable AI systems. NATO member states will designate one national nominee to serve on the DARB. Nominees could come from governmental entities, academia, the private sector, or civil society.

      TECHNOLOGY: AI

      DATE: 13 October 2022

    1. TITLE: US White House publishes Blueprint for an AI Bill of Rights

      CONTENT: The US White House, through the Office of Science and Technology Policy, has issued a Blueprint for an AI Bill of Rights to guide the development, deployment, and use of automated systems. The blueprint outlines five key principles and is accompanied by a framework to help incorporate the protections into policy and practice.

      The five principles are:

      • Safe and effective systems: Users should be protected from unsafe and ineffective systems.
      • Algorithmic discrimination protection: Users should not face discrimination by algorithms and systems should be used and designed in an equitable way.
      • Data privacy: Users should be protected from abusive data practices via built-in protections and should have agency over how data about them is used.
      • Notice and explanation: Users should know that an automated system is being used and understand how and why it contributes to outcomes that impact them.
      • Human alternatives, consideration, and fallback: Users should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

      Within the scope of the blueprint are automated systems that have the potential to meaningfully impact the public's rights, opportunities, or access to critical resources or services.

      It is important to note that the blueprint does not have a regulatory character, and is meant to serve as a guide.

      TOPICS: AI

      TRENDS: AI governmental initiatives

      DATE: 4 October

      COUNTRY: USA

    1. Synthetic data, however it is produced, offers a number of very concrete advantages over using real world data.

      advantages of synthetic data

    2. There are a couple of ways this synthetic data generation happens

      How synthetic data is produced.
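
      A minimal sketch of one simple way synthetic data can be produced (the 'table' below is made up): fit basic statistics of the real data and sample new rows from the fitted distribution. Production tools (GAN-, copula-, or simulation-based) preserve far more structure, but the principle is the same.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Pretend this is the real dataset: ages and incomes of 1,000 people.
      real_age = rng.normal(40, 12, 1000).clip(18, 90)
      real_income = 20_000 + real_age * 800 + rng.normal(0, 5000, 1000)
      data = np.column_stack([real_age, real_income])

      # Fit a joint Gaussian to the two columns and sample "new people" from it.
      mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)
      synthetic = rng.multivariate_normal(mean, cov, size=1000)

      print("real corr:     ", np.corrcoef(data, rowvar=False)[0, 1].round(2))
      print("synthetic corr:", np.corrcoef(synthetic, rowvar=False)[0, 1].round(2))
      ```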

  15. Aug 2022
    1. AI and other new technologies will increase strategic instability.

      Another important element that the document recognises: the link between cybersecurity and AI. This is missing in OEWG discussions. There will need to be links between the OEWG and AI-related processes like LAWS as well - or, at least, diplomats will need to be aware of all those other related processes.

  16. Jul 2022
    1. Here is an interesting interview with the head of AI at Salesforce. I compare it with our efforts.

      Good and solid data is essential. We are getting good data via two main sources:

      • structured data organised via geography (countries). Later on we can introduce a time component. In this way we will have two main determinants for any phenomenon: space and time.
      • semi- and unstructured data: Textus annotations

      He also highlights the question of classification, which we have ready with taxonomies. There is also the importance of conversation, where we are also doing well via Textus and event analysis.

      All in all, we seem to be on the right track to having a well-designed AI system.

      ||JovanNj||||anjadjATdiplomacy.edu||||dusandATdiplomacy.edu||||Katarina_An||

    2. if you don’t have the data, then you have a problem.

      we have data.

    1. Title: DeepMind uses AI to predict the structure of almost all proteins. Text: DeepMind, in partnership with the European Molecular Biology Laboratory's European Bioinformatics Institute, has released predicted structures for nearly all catalogued proteins known to science. The announcement comes a year after the two partners released and open-sourced AlphaFold – an artificial intelligence (AI) system used to predict the 3D structure of a protein – and created the AlphaFold Protein Structure Database to share this scientific knowledge with the researchers. The database now contains over 200 million predicted protein structures, covering plants, bacteria, animals, and other organisms. It is expected to help researchers advance work on issues such as neglected diseases, food insecurity, and sustainability.

  17. Jun 2022
    1. We express our concerns on the risk, and ethical dilemma related to Artificial Intelligence, such as privacy, manipulation, bias, human-robot interaction, employment, effects and singularity among others. We encourage BRICS members to work together to deal with such concerns, sharing best practices, conduct comparative study on the subject toward developing a common governance approach which would guide BRICS members on Ethical and responsible use of Artificial Intelligence while facilitating the development of AI.

      BRICS cooperation on AI. Nothing too specific (unlike in some other fields). Interesting that they devote space to addressing concerns. ||JovanK|| ||sorina||

    1. Text-to-image processes are also impressive. The illustration at the top of this article was produced by using the article’s headline and rubric as a prompt for an ai service called Midjourney. The next illustration is what it made out of “Speculations concerning the first ultraintelligent machine”; “On the dangers of stochastic parrots”, another relevant paper, comes later. Abstract notions do not always produce illustrations that make much or indeed any sense, as the rendering of Mr Etzioni’s declaration that “it was flabbergasting” shows. Less abstract nouns give clearer representations; further on you will see “A woman sitting down with a cat on her lap”.

      Shall we try to use this Midjourney platform to illustrate some of our books? For example, we could have some segments of the Geneva Digital Atlas illustrated by this tool.

      ||Jovan||||MarcoLotti||||JovanNj||||anjadjATdiplomacy.edu||

    2. “Enlightenment”—a trillion-parameter model built at the Beijing Academy of Artificial Intelligence

      Do we know anything about this model or Chinese research?

  18. Apr 2022
    1. Global policy AI initiative to follow

      ||sorina||||Jovan||

    1. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.

      AI rhetorics

    1. “These simulation engines and the data and everything that goes into them are incredibly important for us to act in these complex environments when it’s not just one crisis but it’s a set of compounding crises,”

      Q: Why does the 'Destination Earth' platform matter?

    2. “Destination Earth,” will draw on a host of environmental, socioeconomic and satellite data to develop digital “twins” of the planet that aim to help policymakers — and eventually the public — better understand and respond to rising temperatures.

      Q: What is the EU's 'Destination Earth' initiative?

    1. Our Textus system performs the data labelling function in an integrated way, as part of everyday routines.

      ||anjadjATdiplomacy.edu||||JovanNj||

    1. GANs are deep learning models that use two neural networks working against each other—a generator that creates synthetic data, and a discriminator that distinguishes between real and synthetic data—to generate synthetic images that are almost indistinguishable from real ones. GANs are popular for generating images and videos, including deepfakes.

      Q: What are GANs?
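
      To make the generator/discriminator pairing concrete, here is a minimal training-step sketch in PyTorch. All layer sizes and names are illustrative assumptions, not taken from the annotated text; it shows the adversarial setup rather than a production-quality GAN.

        # Minimal GAN sketch (illustrative only): a generator maps random noise to
        # synthetic samples, a discriminator scores samples as real or fake, and
        # the two networks are trained against each other.
        import torch
        import torch.nn as nn

        latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed sizes)

        generator = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, data_dim), nn.Tanh(),
        )
        discriminator = nn.Sequential(
            nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

        g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
        d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
        bce = nn.BCELoss()

        def train_step(real_batch: torch.Tensor) -> None:
            batch_size = real_batch.size(0)
            real_labels = torch.ones(batch_size, 1)
            fake_labels = torch.zeros(batch_size, 1)

            # 1) Train the discriminator to separate real samples from generated ones.
            fake_batch = generator(torch.randn(batch_size, latent_dim)).detach()
            d_loss = bce(discriminator(real_batch), real_labels) + \
                     bce(discriminator(fake_batch), fake_labels)
            d_opt.zero_grad()
            d_loss.backward()
            d_opt.step()

            # 2) Train the generator to make the discriminator label its output as real.
            g_loss = bce(discriminator(generator(torch.randn(batch_size, latent_dim))), real_labels)
            g_opt.zero_grad()
            g_loss.backward()
            g_opt.step()

      The key point is the opposition: the discriminator improves at telling real from synthetic data, which in turn forces the generator to produce samples that are harder and harder to distinguish from real ones.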

    1. I recently learned that uppercase and lowercase letters got their names from actual wooden cases of lead that were used by compositors for printing.

      Q: What is the etymology of 'lowercase' and 'uppercase'?

    2. It began as a term from French railroad engineering referring to the layers of material that go beneath (“infra”) the tracks. Its meaning expanded to include roads, bridges, sewers and power lines, and very recently expanded again to include people, specifically caregivers, as in this fact sheet from the Biden White House

      Q: What is the etymology of the term 'infrastructure'?

    3. I.C.E. is short for internal combustion engine, a modifier that was superfluous until electric cars came on the scene.

      Q: What is I.C.E.?

    4. It refers simply to the physical world, where we have tangible bodies made of … meat. “Meatspace” is a word that didn’t need to exist until the invention of cyberspace. Technological progress gives us a new perspective on things we once took for granted, in this case reality itself.

      Q: What is meatspace?

    1. The EU Cybersecurity Act defines cybersecurity as “the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats”.

      What is cybersecurity?

    1. GPT-3 only needs a few (2-3) examples to deliver on specific writing tasks.

      Is it true?

      ||JovanNj||||anjadjATdiplomacy.edu||
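
      One way to test this claim is a few-shot prompt: two or three worked examples of the writing task, followed by a new input. Below is a minimal sketch using the pre-1.0 `openai` Python client; the prompt wording, the engine name, and the API key placeholder are assumptions for the test, not details from the source.

        # Few-shot prompt sketch: a task description, 2-3 worked examples, then a
        # new input for the model to complete.
        import openai  # pre-1.0 client, e.g. pip install "openai<1"

        openai.api_key = "YOUR_API_KEY"  # placeholder

        task = "Rewrite each sentence in a formal diplomatic style."
        examples = [
            ("We don't like this proposal.",
             "We have reservations regarding this proposal."),
            ("You broke the agreement.",
             "We note with concern apparent departures from the agreement."),
        ]
        new_input = "Fix this quickly or we will leave the talks."

        prompt = task + "\n\n"
        for source, rewrite in examples:
            prompt += f"Input: {source}\nOutput: {rewrite}\n\n"
        prompt += f"Input: {new_input}\nOutput:"

        response = openai.Completion.create(
            engine="text-davinci-002",  # assumed engine name; any GPT-3 completion engine would do
            prompt=prompt,
            max_tokens=60,
            temperature=0.7,
        )
        print(response["choices"][0]["text"].strip())

      Running a handful of such prompts on our own writing tasks would be a quick, low-cost way to check whether two or three examples are really enough.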

  19. Mar 2022
    1. Probability in Medieval and Renaissance Philosophy

      Origins of the modern theory of probability, which has been influencing modern AI.

    1. ||dejand|| sent me this link. I like Kahneman's framing of System 1 (our daily thinking for solving problems - a sort of mix of inertia and intuition) and System 2 (deep logical and analytical thinking).

      This AI system tries to combine the two. Once we get past these 'daily tasks', an analysis of this paper and approach could be an interesting framing for both our research and teaching on AI.

      ||sorina||||MariliaM||||kat_hone||||anjadjATdiplomacy.edu||||JovanNj||||JovanK||

    2. The division of labor between System 1 and System 2 is nature’s solution to creating a balance between speed and accuracy, learning and execution.

      Good division of tasks.

  20. Feb 2022
    1. This article provides a good survey of the (dis)advantages of the Chinese and US approaches to AI, as summarised in the following paragraph:

      the combined resources, scientific contributions, and technological superiority shared by US academic and corporate institutions in the field of AI is more than enough to overcome the advantages given China by its policy of socialized data.

      ||kat_hone||||VladaR||||sorina||||anjadjATdiplomacy.edu||||JovanNj||

  21. Jan 2022
    1. The right often maintains that the law here is clear, and it is not the job of judges to legislate from the bench- even where the law will lead to tragedy as in this case.

      In the European legal tradition, this is the tension between positivists (Kelsen) and naturalists (Grotius) on the purpose and interpretation of law. Kelsen would be on the right side and Grotius on the left side of this debate.

    2. Maybe by teasing out the transdisciplinary nature of the problem, we’ll encourage cross-pollination, or at least that’s my hope.

      the key challenge for comprehensive AI.

    1. but future forms of AI may not.

      ||JovanNj||||anjadjATdiplomacy.edu|| It is our hope to develop AI on a small set of data (data generated by Diplo via Textus interactions, etc.).

    1. This article shows the limits of the use of AI in health, mainly related to low data quality.

      ||anjadjATdiplomacy.edu||||JovanNj||

    1. RAN Intelligent Controller. The RIC collects data from the RAN components of dozens or hundreds of base stations at once and uses machine-learning techniques to reconfigure network operations in real time.

      Benefits of a software-driven RAN: fine-tuning performance in real time, including through AI.
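
      Conceptually, the RIC described above is a closed control loop: collect telemetry from many base stations, let a model propose new settings, push them back to the RAN, and repeat. A highly simplified sketch of that loop follows; all class names, fields, and parameters are invented for illustration and do not correspond to an actual RIC API.

        # Simplified near-real-time RAN control loop (illustrative only):
        # telemetry in, ML-proposed configuration out, applied continuously.
        from dataclasses import dataclass
        from typing import Callable, List
        import time

        @dataclass
        class CellTelemetry:          # hypothetical per-cell metrics
            cell_id: str
            load: float               # fraction of capacity in use
            interference: float       # measured interference level

        @dataclass
        class CellConfig:             # hypothetical tunable parameters
            cell_id: str
            tx_power_dbm: float
            handover_margin_db: float

        def control_loop(collect_fn: Callable[[], List[CellTelemetry]],
                         predict_fn: Callable[[List[CellTelemetry]], List[CellConfig]],
                         apply_fn: Callable[[List[CellConfig]], None],
                         interval_s: float = 1.0) -> None:
            """Run the loop: telemetry -> ML policy -> reconfiguration."""
            while True:
                telemetry = collect_fn()         # data from dozens/hundreds of base stations
                configs = predict_fn(telemetry)  # ML model proposes new settings
                apply_fn(configs)                # push the settings back to the RAN
                time.sleep(interval_s)

      In a real RIC the collect/predict/apply callables would be interfaces to the base stations and to a trained model; here they are placeholders that only define the shape of the loop.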

    1. A year later, with much less fanfare, Tsinghua University’s Beijing Academy of Artificial Intelligence released an even larger model, Wu Dao 2.0, with 10 times as many parameters—the neural network values that encode information.

      ||JovanNj||||anjadjATdiplomacy.edu|| Is Wu Dao 2.0 (China's GPT-3) available for public testing or other use?

    1. Under current law, medical algorithms are classified as medical devices and can be approved with the 510(k)-approval process.

      regulation of medical algorithms

    1. This is interesting coverage. It has a lot of references to the Chinese political space.

      Any follow-up for updates or for our courses?

      ||VladaR||||AndrijanaG||||kat_hone||

  22. Dec 2021
    1. “On Artificial Intelligence—A European Approach to Excellence and Trust” and its 2021 proposal for an AI legal framework

      Need to work more on it.

    1. In October 2021, the Center organized its first event – the roundtable discussion “AI Ethics: Searching for Consensus.” In December, it hosted the conference “AI Global Dimension: From Discussion to Practice.” The Center works in both Russian and English languages. It is also tasked with publishing research results in specialized media outlets.

      Do we know anything about these events?

      ||Jovan||||TerezaHorejsova||||AndrijanaG||||VladaR||||StephanieBP||

    2. Moscow State Institute of International Relations. In October, it inaugurated its AI Center, which is aimed at researching ethical problems and foreign economic relations surrounding the technology, as well as boosting scientific collaboration with investigative centers from Russia and abroad.

      ||Jovan|| To see how we can cooperate with MGIMO on this project.

    3. the ‘Priority 2030’ academic leadership program

      to learn more about this project

      ||Jovan||

    1. Meteorologists turned to computers in the 1950s; social scientists began computerising “human factors” a decade ago.

      This is why I always argue that the WMO was the first AI organisation.

    1. In September the Cyberspace Administration of China (CAC) announced a three-year plan to regulate predictive algorithms, and Chinese companies scrambled to comply with new regulations. News of the plan came on the heels of two other stringent policies – the Data Security Law (DSL) and Personal Information Protection Law (PIPL) – which were passed earlier this year and came into full effect in November.

      Two new cyber laws to be followed: the Data Security Law (DSL) and the Personal Information Protection Law (PIPL). In addition, the Cyberspace Administration of China (CAC) announced a plan to regulate predictive algorithms.

    1. The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.

      It is the key point.

    2. I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.

      It is transhumanism. I tried to cover it as the first value of human embodiment. Would we allow a machine into our brains and consciousness?

    3. AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.

      It is really fascinating. There are some open issues. It is not that we are inherently ethical (good or bad). We are judged by our actions, which may or may not be driven by ethics. They could be driven by circumstances, luck, etc.

      Whether it is a human or an AI, our ethics is judged by the impact of our actions (good or bad).

    4. It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources.

      ||JovanNj||||anjadjATdiplomacy.edu|| Let us see what was used behind this system.

    1. Like Koch, he insisted that humans would not and should not be subsumed under a concept of agency in the future that could include AI and humanity as equal partners.

      It is an interesting aspect, arguing that humans should not be reduced to the concept of 'agency'.

      ||MarcoLotti||

    1. ||JovanNj||||JovanK||||VladaR||||kat_hone|| Here is an interesting experiment in which AI tries to win at the game 'Diplomacy'. It is a strategy game. In brief, the evolution is:

      • chess (a complex game)
      • Go (a complicated game)
      • Diplomacy (an even more complicated game, combining cooperation and competition).

      It is interesting that they combined two approaches: reinforcement learning and search, together with heuristics.

  23. Nov 2021
    1. aspects of anticipation, and effective protection,

      Useful for rights of future generations

    2. impact assessments

      Various aspects of impact assessment

    3. The adoption of open standards and interoperability to facilitate collaboration should be in place.

      Call for open standards

    4. Stakeholders include but are not limited to governments, intergovernmental organizations, the technical community, civil society, researchers and academia, media, education, policy-makers, private sector companies, human rights institutions and equality bodies, anti-discrimination monitoring bodies, and groups for youth and children.

      Various stakeholders of AI governance

    5. 46

      This paragraph shifts the previous balancing formulation on data governance (a proper balance between data sovereignty and free data flows) towards more data sovereignty.

    6. 40

      ||JovanNj||||Jovan||||sorina|| How could this political claim be implemented in practice?

    7. 38.

      Paragraph for trade-off decisions.

    8. ethical deliberation, due diligence and impact assessment

      new mechanisms for implementation.

    9. use of social dialogue

      link to social contract

    10. a contextual assessment will be necessary to manage potential tensions, taking into account the principle of proportionality

      Two important aspects that will take a lot of time to be negotiated: contextual assessment and the principle of proportionality.

    11. both to natural and legal persons, such as researchers, programmers, engineers, data scientists, end-users, business enterprises, universities and public and private entities, among others

      List of different 'stakeholders'

    12. the AI system life cycle, understood here to range from research, design and development to deployment and use, including maintenance, operation, trade, financing, monitoring and evaluation, validation, end-of-use, disassembly and termination.

      Important time-component in governance of AI systems.

    13. data collected by sensors

      Important new power of digital systems.

    14. information-processing technologies

      use 'information' - not 'data' processing

    15. AI systems as systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and typically includes aspects of reasoning, learning, perception, prediction, planning or control.

      ||Jovan||||JovanNj|| Does this definition cover all main aspects of AI?

    16. It approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment and ecosystems, and offers them a basis to accept or reject AI technologies.

      Interesting definition that needs to be unpacked

    17. mental well-being,

      It is important that the mental aspect of using digital technology is entering political documents.

    18. as a standard-setting instrument

      'Standard' is probably used in the wider context of setting guidelines (not technical standards).

  24. Oct 2021
    1. The new NATO AI strategy has a few interesting aspects:

      • The Principle of Responsible Use is the cornerstone of the new policy; it should be operationalised and used in the development and deployment of AI technologies within NATO, with an emphasis on the reliability principle of responsible use.
      • NATO as the transatlantic forum for AI in defence and security
      • reliance on international law, with a qualifier ('as applicable')
      • stressing human responsibility and accountability
      • use of AI for 'intended functions', with practical steps on how to ensure it
      • the need to preserve AI talent
      • use of the concept of Bias Mitigation
      • anticipating the risk of disinformation around NATO's projects for the use of AI
      • a focus on AI standards (to be followed within our standardisation project)

      ||sorina|| ||VladaR||||AndrijanaG||||Pavlina||||kat_hone||||Katarina_An||||JovanNj||||StephanieBP||

    2. NATO will further work with relevant international AI standards setting bodies to help foster military-civil standards coherence with regards to AI standards.

      ||sorina|| Here is another push for international standardisation in the field of AI.

    3. Bias Mitigation

      Important concept

    4. appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.

      A set of mechanisms to ensure intended behaviour.

    5. to their intended functions

      intended functions

    6. appropriately

      Again a qualifier. How to make it 'appropriate'? In the case of neural networks it will be very difficult.

    7. appropriate levels of judgment and care

      by human or machine?

    8. as applicable.

      Always a tricky 'add-on' (to be analysed).
