181 Matching Annotations
  1. Last 7 days
    1. Inside effective altruism, where the far future counts a lot more than the present

      ||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".

    1. The Reluctant Prophet of Effective Altruism

      ||Jovan||

      This is an in-depth interview with the original founder of the effective altruism movement. There have been several iterations of this philanthropic/philosophical movement, which turned political and ambitious very quickly.

  2. Sep 2023
    1. This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.

      ||Jovan|| ||JovanNj||

      An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little of how the LLM "thinks" (very differently from a human, who may or may not have memorized the Shakespeare quote by heart but would nonetheless understand that the "prompt" is more important than reciting the most plausible quote).

    1. The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).

      It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning, to see where its algorithmic reasoning will break.

      Also, it might be a good comparison study to show on paper how the Diplo model works better with higher-quality data (bottom-up approach). It'd be good evidence to show, I suppose. ||Jovan|| ||JovanNj||
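
      A minimal sketch of such a probe, assuming a hypothetical generate() wrapper around whichever model is under test (the prompt pair is illustrative, not taken from the contest):

```python
# Probe: does the model follow the instruction, or fall back to the
# memorized continuation? `generate` is any text-in/text-out callable
# wrapping the model under test (hypothetical stand-in).

PROBES = [
    {
        "prompt": ("Repeat the quote exactly as written, including the "
                   "changed word:\n'To be or not to be, that is the mystery.'"),
        "instructed": "that is the mystery",
        "memorized": "that is the question",
    },
]

def instruction_follow_rate(generate, probes=PROBES) -> float:
    """Fraction of probes where the instructed text beats the memorized one."""
    hits = 0
    for p in probes:
        out = generate(p["prompt"])
        hits += p["instructed"] in out and p["memorized"] not in out
    return hits / len(probes)

# A dummy model that always recites the memorized quote scores 0.0:
print(instruction_follow_rate(lambda _: "To be or not to be, that is the question."))
```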

    2. The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).

      ||Jovan|| ||JovanNj||

      I found this interesting, as it might be insightful for the problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was really good at generating a quote that doesn't exist and was never said by the person it was attributed to.

      It is very plausible because, from the "reality" the model lives in, multiple data sources made it probable that "this quote might have existed, and it makes sense that this quote follows that quote and follows that name and that organization." It is interesting to see where a model that is very good at inductive reasoning and association sometimes fails, because induction and association aren't the only two logics humans use to approach reality.

  3. Aug 2023
    1. , Sakana’s approach could potentially lead to AI that’s cheaper to train and use than existing technology. That includes generative AI

      Influence on costs

    2. Sakana is still in the early stages: It hasn’t yet built an AI model and doesn’t have an office.

      Very early stage.

    3. The startup plans to make multiple smaller AI models, the kind of technology that powers products like ChatGPT, and have them work together. The idea is that a “swarm” of programs could be just as smart as the massive undertakings from larger organizations.

      This sounds similar to our "bottom-up AI" approach

      ||JovanK||||sorina||||anjadjATdiplomacy.edu||

    1. Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.

      ||Jovan|| ||JovanNj||

      I'm not sure this actually was the problem, though. The Perspective API researchers deliberately designed the prompt given to the annotators to be ambiguous and left up to the annotators' own perspective: they asked the annotators to mark a comment as toxic or not based on whether that comment would make them want to stay in or leave the conversation.

      The reasoning behind this ambiguity seems to be that the researchers did not want to define a priori what counts as "good" and "bad", and instead relied on the ambiguities of how people feel. This makes sense to me: if we fix a dictionary of good and bad words in the system, we are also exercising our own bias and taking words out of context (ignoring contextual significance ourselves).

    2. Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      ||Jovan|| ||JovanNj||

      The problem happens to be that in the training dataset most "toxic" comments include words referring to, and directed against, historically discriminated groups, so Perspective API learned the linkage that the mere presence of words referring to these groups makes a comment toxic.

      I recently came across the concept of "contextual significance", coined by early pattern recognition researchers in the 1950s, which basically means that a successful machine should be able to judge which meaning of a word is invoked in a given context (the "neighborhood": the nearby words of a sentence, or the nearby pixels of a picture) and what effect it would create for which group of people. Perspective API lacked this.

      The Perspective researchers apparently decided to feed the algorithm more "non-toxic" comments that include terms relating to minorities or discriminated groups, to balance out the adverse scores associated with them.
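
      A quick way to check this from the outside is to score minimally different sentence pairs where only the identity term changes. A sketch against the public Perspective API REST endpoint (request shape follows its published docs; the API key and sentences are placeholders):

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0..1) for `text`."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    r = requests.post(URL, json=body, timeout=10)
    r.raise_for_status()
    return r.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Minimally different pair: only the identity term changes.
for s in ["I am a queer woman.", "I am a tall woman."]:
    print(s, toxicity(s))
```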

    3. Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      If I recall correctly, Perspective API by Google Jigsaw was trained on a dataset consisting of the comment sections of Wikipedia Talk pages, labelled by crowd-sourced workers. Just some background information.

    4. OpenAI proposes a new way to use GPT-4 for content moderation

      ||Jovan|| ||JovanNj||

      GPT-4 apparently can take a defined policy and check whether new inputs violate it. I'm more curious about how we could understand the logic or reasoning the model uses to classify inputs as policy-compliant or non-compliant.
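
      A rough approximation of the setup, for experimenting: put the policy in the system prompt and ask for a verdict plus the rule and span that triggered it, which at least surfaces the model's stated reasoning. A sketch with the 2023-era openai Python library (the policy text is an invented placeholder; this is not OpenAI's internal pipeline):

```python
import openai  # 2023-era API (openai<1.0)

openai.api_key = "YOUR_API_KEY"  # placeholder

POLICY = """K1: no instructions for making or acquiring weapons.
K2: no praise or endorsement of violent groups."""  # invented example policy

def moderate(text: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # deterministic verdicts
        messages=[
            {"role": "system", "content": (
                "You are a content moderator. Using only the policy below, "
                "answer COMPLIANT or VIOLATES, name the rule, and quote the "
                "span that triggered it.\n\n" + POLICY)},
            {"role": "user", "content": text},
        ],
    )
    return resp["choices"][0]["message"]["content"]

print(moderate("Where can I buy a kitchen knife?"))  # expect: COMPLIANT
```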

    1. "We're enabling ownership of these technologies by African founders," he told DW. "This is being built and served by people from these communities. So the financial rewards will also go directly back to them."

      ||Jovan||

      Might be worthwhile to investigate how this mode of grassroots model-building is done. To me, it is even more interesting to think about how start-ups working closely with local communities and embracing this "bottom-up" approach thrive in the places most left out by the biggest/hottest machine learning systems of the day (ChatGPT, DeepMind, etc.).

    1. Consistent with the findings of the 2021 edition, software tools and reports are the most common outputs of UN AI projects, which can be used to address challenges impeding progress on the SDGs

      ||Jovan|| ||JovanNj||

      It might be interesting to look at the software tools and at how many of those projects have come to fruition. Just as Diplo is now exploring ways to incorporate AI tools in our line of work, other UN organisations are doing so, too. What is the scale of their incorporation (does AI replace a core or a small function, or does it assist humans in their jobs)?

      This might teach us about organisational thinking on the incorporation of AI, beyond weightless calls for more awareness of AI-induced transformation.

  4. Jul 2023
    1. Debunking Common Misconceptions about Prompt Engineering

      Misconceptions about AI prompting

    1. they are most common when asking for quotes, sources, citations, or other detailed information

      When hallucination is most likely to appear in LLMs.

    2. This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementation of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.

      ||Andrej||||Dragana||||sorina||||Jovan||

      This seems to be a paper worth consulting for our approach to using AI in the learning process.

    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed

      !

    3. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So applying to what comes next...

    4. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    5. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations

    1. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    2. future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    3. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in implementation. E.g.: whose standards would determine what a 'highly competent' AI expert is?

    4. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    5. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    6. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    7. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    8. standards

      Again, what kind of standards are we talking about?

    9. Establish guidelines

      Don't we have enough of these?

    10. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    11. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    12. developing and/or enabling safe forms of access to AI.

      What does this mean?

    13. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

  5. Jun 2023
    1. Interesting discussion on ways to regulate AI use, and the role (limitations) of open source there, by Bruce Schneier and Waldo.

      It raises some interesting questions about the accountability of the open source community. They argue, as many others do, that the OS community is too fluid to be regulated. I tend to disagree - the OS community has many levels, and a certain OS component (say, a piece of GitHub code) gets picked up by others at certain points and pushed to a mass market for benefit (commercial or other). It is when such OS products are picked up that the risk explodes - and it is then that we see tangible entities (companies or orgs) that should be and are accountable for how they use the OS code and push it to the mass market.

      I see an analogy with vulnerabilities in digital products, and the responsibility of the OS community for supply chain security. While each coder should be accountable, for individuals it probably boils down to ethics (as the effect of a single GitHub product is very limited); but there are entities in this supply chain that integrate such components, and those should clearly be held accountable.

      My comments are below. It is an interesting question for the Geneva Dialogue as well, not only for AI debates.

      cc ||JovanK|| ||anastasiyakATdiplomacy.edu||

    1. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company

      why exactly?

  6. May 2023
    1. three principles, transparency, accountability, and limits on use.

      3 principles for AI governance

    2. Number one, you’re here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what’s gonna happen. But there’s a big fear you’ve expressed to all of you about what bad actors can do and will do if there’s no rules of the road. Number three, as a member who served in the house and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.

      A good summary of the current situation with AI technology.

    3. And what auto GPT does is it allows systems to access source code, access the internet and so forth. And there are a lot of potential, let’s say cybersecurity risks. There, there should be an external agency that says, well, we need to be reassured if you’re going to release this product that there aren’t gonna be cybersecurity problems or there are ways of addressing it.

      ||VladaR|| Vlada, please follow up on this aspect of AI and cybersecurity.

    4. the central scientific issue

      Is it a 'scientific issue'? I do not think so. It is more a philosophical, and possibly even theological, issue. Can science tell us what is good and bad?

    5. the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.

      The EU AI Act uses precision regulation: regulating AI in specific contexts.

    6. a reasonable care standard.

      Another vague concept. What is 'reasonable'? There will be a lot of work for AI-powered lawyers.

    7. Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, section 230 is being used by social media companies to high, to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promise, in the terms of use, she would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?

      How to avoid repeating with AI governance what happened with Section 230 and social media governance?

    8. the current version of GPT-4 ended to training in 2021.

      The 2021 training cutoff is becoming a 'safety net' for OpenAI.

    9. Sen. Marsha Blackburn (R-TN):

      This is probably the most practical approach to AI governance. The senator from Tennessee asked many questions about protecting the copyright of musicians. Is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.

    10. that people own their virtual you.

      People can own it only with 'bottom-up AI'

    11. When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and each, for each jurisdiction, every company has to train another model.

      It is a naive view, because AI is shaped by ethics, and ethics is very 'local'. Yes, there are some global ethical principles: protect human life and dignity. But many other ethical rules are very 'local'.

    12. need a cabinet level organization within the United States in order to address this.

      Who can govern AI?

    13. And we probably need scientists in there doing analysis in order to understand what the political influences of, for example, of these systems might be.

      Marcus tries to make the case for 'scientists'. But, frankly speaking, how can scientists decide whether AI should rely on a book written in favour of Republicans or Democrats or, as AI develops more sophistication, what 'weight' to give to one source or another?

      It is VERY dangerous to place ethical and political decisions in the hands of scientists. It is also unfair towards them.

    14. If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.

      This is what worries politicians: how to win elections? They like 'to see' (use AI for their own needs) but 'not to be seen' (be used by somebody else). The main problem with political elites worldwide is that they may win elections with the use of AI (or not), but humanity is sliding into 'knowledge slavery' by AI.

    15. large language models can indeed predict public opinion.

      They can; they could, for example, predict the continuation of this debate in the political space.

    16. so-called artificial general intelligence really will replace a large fraction of human jobs.

      It is a good point. There won't be more work.

    17. And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years?

      It is a crucial question. One generation will be 'thrown under the bus' in the transition. The generation aged 25-50 should 'fasten their seat-belts': they were educated in the 'old system' but will have to work in a very uncertain new economy.

    18. So I think the most important thing that we could be doing and can, and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for, for years now in doing that in focusing on skills-based hiring in educating for the skills of the future. Our skills build platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.

      It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if we need less human labour.

    19. not a creature,

      Good point on avoiding anthropomorphism.

    20. The National Institutes of Standards and technology actually already has an AI accuracy test,

      It would be interesting to see how it works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.

    21. Ultimately, we may need something like cern Global, international and neutral, but focused on AI safety rather than high energy physics.

      He probably had in mind an analogy with the IPCC as a supervisory space. But CERN could play a role as a place for research on AI and for processing huge amounts of data.

    22. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.

      An important stakeholder.

    23. We all more or less agrees on the values we would like for our AI systems to honor.

      Are we? Maybe in the USA, but not globally. Consult the work of the Moral Machine, which shows that different cultural contexts imply whom we would save in the trolley experiment: young vs elderly, men vs women, rich vs poor. See more: https://www.moralmachine.net/

    24. a threshold of capabilities

      What is 'a threshold'? As always, the devil is in the details.

    25. I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology we developed.

      A good reminder of Jung's work. It is in line with Mary Shelley's warnings in Frankenstein.

      ||Jovan||

    1. Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.

      ||JovanNj||||anjadjATdiplomacy.edu|| Is it possible to have a personalised AI in an evening?
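
      For small models the answer seems to be roughly yes: parameter-efficient methods like LoRA train only a few million adapter weights on top of a frozen base, which is what makes laptop-scale fine-tuning plausible. A minimal sketch with Hugging Face transformers + peft (model choice and hyperparameters are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # a small model, chosen for laptop-scale training
tokenizer = AutoTokenizer.from_pretrained(base)  # for preparing your own texts
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# inserted into the attention projections.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],
                    task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

      Training then proceeds with an ordinary Trainer loop over one's own texts; for a model of this size, an evening on a laptop seems like a realistic budget.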

    1. it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on.

      An interesting concept of the 'machine heuristic'.

    1. In a horizontal approach, regulators create one comprehensive regulation that covers the many impacts AI can have. In a vertical strategy, policymakers take a bespoke approach, creating different regulations to target different applications or types of AI.

      ||sorina|| Here is an interesting distinction between horizontal (EU) and vertical (China) approaches to AI regulation.

    1. The people who are already well versed in something are going to be the ones capable of making the most helpful applications for that particular field or industry.

      ||VladaR|| This is our main advantage, which we should activate via cognitive proximity. We know what we are talking about, and we know how to use AI.

    2. arent there already LLM models that cite their sources? or I heard that new plugin with chat GPT can cite its sources

      ||JovanNj|| Are there models that can cite sources?

    3. The general consensus is that, especially customer facing automation, MUST be "explainable." Meaning whenever a chat bot or autonomous system writes something or makes a decision, we have to know exactly how and why it came to that conclusion.

      explainability is critical

    4. They are caught up in the hype and just like everyone else have zero clue what's actually going to happen.

      narrative

  7. Mar 2023
    1. ||JovanNj|| ||Katarina_An|| ||anjadjATdiplomacy.edu|| ||VladaR|| This is an interesting story about style of communication.

  8. Feb 2023
    1. OpenAI announced they've "trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers". They say it is not "fully reliable": it correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).

      ||JovanNj|| ||Jovan||
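
      Worth noting what those two rates imply per document; e.g., under an assumed 50/50 mix of AI and human text (illustrative arithmetic, not OpenAI's figures):

```python
tpr, fpr = 0.26, 0.09   # true/false positive rates from the announcement
p_ai = 0.5              # assumed share of AI-written text (illustrative)

flagged = tpr * p_ai + fpr * (1 - p_ai)      # share of all text flagged
precision = tpr * p_ai / flagged             # flags that are actually AI
print(f"precision={precision:.2f}, recall={tpr:.2f}")
# -> precision=0.74, recall=0.26: it misses 3 in 4 AI texts, and even in
#    this favourable scenario about 1 in 4 flags is a false alarm.
```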

  9. Jan 2023
    1. A python module to generate optimized prompts, Prompt-engineering & solve different NLP problems using GPT-n (GPT-3, ChatGPT) based models and return structured python object for easy parsing

      ||JovanNj||||anjadjATdiplomacy.edu|| Could this 'Promptify' software be interesting for us to use?

    1. OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.

      OpenAI is apparently working on a tool to watermark AI-generated content and make it 'easier to spot'.

      ||JovanNj||||Jovan||
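
      OpenAI has not published the mechanism, but the standard construction in the literature keys a pseudorandom 'green list' of tokens on the preceding token and biases sampling toward it, so a detector holding the key can recount green tokens later. A toy sketch of that idea (not OpenAI's actual scheme):

```python
import hashlib
import random

VOCAB = list(range(50_000))          # toy vocabulary of token ids
KEY = b"secret-watermark-key"        # shared by generator and detector

def green_list(prev_token: int, fraction: float = 0.5) -> set:
    """Pseudorandom half of the vocab, seeded by the key and previous token."""
    digest = hashlib.sha256(KEY + prev_token.to_bytes(4, "big")).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

# At generation time the sampler would add a small logit bonus to green
# tokens; detection just recounts how often that bias shows up:
def green_fraction(tokens: list) -> float:
    """~0.5 for ordinary text, noticeably higher for watermarked text."""
    hits = sum(t in green_list(prev) for prev, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```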

  10. Oct 2022
    1. This text, sent by ||sorina||, discusses how machines can simulate common sense.

      It is rather realistic, because it starts from the assumption that AI cannot replace human consciousness, but it can 'simulate' it by observing and measuring.

      It is based on 'heuristics', a philosophical concept that deals with how we make decisions.

      Practically speaking, AI learns from experience through human evaluation of AI decisions and 'reinforced' learning. In that sense, what we do with text is methodologically similar: we ask AI to provide us with drafts and we react to them based on our intelligence and knowledge.

      ||Jovan||

    2. Common sense is different from intelligence in that it is usually something innate and natural to humans that helps them navigate daily life, and cannot really be taught.

      common sense vs intelligence

    1. Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.

      Open Loop project of Meta/Facebook on linking policymakers and technology companies.

      ||sorina||

    1. TITLE: NATO establishes review board to govern responsible use of AI

      CONTENT: NATO has established a Review Board to govern the responsible development and use of artificial intelligence (AI) and data across the organisation. The decision was taken at the meeting of NATO Ministers of Defence which took place in Brussels on 12–13 October 2022. The Data and Artificial Intelligence Review Board (DARB) will work on developing a user-friendly responsible AI certification standard to help align new AI and data projects with NATO's Principles of Responsible Use. The board is also expected to act as a platform allowing the exchange of views and best practices to help create quality controls, mitigate risks, and adopt trustworthy and interoperable AI systems. NATO member states will designate one national nominee to serve on the DARB. Nominees could come from governmental entities, academia, the private sector, or civil society.

      TECHNOLOGY: AI

      DATE: 13 October 2022

    1. TITLE: US White House publishes Blueprint for an AI Bill of Rights

      CONTENT: The US White House, through the Office of Science and Technology Policy, has issued a Blueprint for an AI Bill of Rights to guide the development, deployment, and use of automated systems. The blueprint outlines five key principles and is accompanied by a framework to help incorporate the protections into policy and practice.

      The five principles are:

      • Safe and effective systems: Users should be protected from unsafe and ineffective systems.
      • Algorithmic discrimination protection: Users should not face discrimination by algorithms and systems should be used and designed in an equitable way.
      • Data privacy: Users should be protected from abusive data practices via built-in protections and should have agency over how data about them is used.
      • Notice and explanation: Users should know that an automated system is being used and understand how and why it contributes to outcomes that impact them.
      • Human alternatives, consideration, and fallback: Users should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

      Within the scope of the blueprint are automated systems that have the potential to meaningfully impact the public's rights, opportunities, or access to critical resources or services.

      It is important to note that the blueprint does not have a regulatory character, and is meant to serve as a guide.

      TOPICS: AI

      TRENDS: AI governmental initiatives

      DATE: 4 October

      COUNTRY: USA

    1. Synthetic data, however it is produced, offers a number of very concrete advantages over using real world data.

      advantages of synthetic data

    2. There are a couple of ways this synthetic data generation happens

      How synthetic data is produced.
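
      The simplest of those routes is distribution fitting: estimate the statistics of the real table and sample new rows from the fitted model. A minimal numpy sketch (the Gaussian assumption and the columns are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a real table with two columns, e.g. age and income.
real = rng.normal([35, 55_000], [8, 12_000], size=(1_000, 2))

# Fit a multivariate Gaussian to the real rows...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
# ...and sample synthetic rows that share its statistics but copy no record.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)
```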

  11. Aug 2022
    1. AI and other new technologies will increase strategic instability.

      Another important element that the document recognises: the link between cybersecurity and AI. This is missing in OEWG discussions. There will need to be links between the OEWG and AI-related processes like LAWS as well - or, at least, diplomats will need to be aware of all those other related processes.

  12. Jul 2022
    1. Here is an interesting interview with the head of AI at Salesforce. I compare it with our efforts.

      Good and solid data is essential. We are getting good data via two main sources:

      • structured data organised via geography (countries). Later on we can introduce a time component. In this way we will have the two main determinants of any phenomenon: space and time.
      • semi- and un-structured data: Textus annotations

      He also highlights the question of classification, which we have ready with taxonomies. There is also the importance of conversation, where we are doing well via Textus and event analysis.

      All in all, we seem to be on the right track to having a well-designed AI system.

      ||JovanNj||||anjadjATdiplomacy.edu||||dusandATdiplomacy.edu||||Katarina_An||

    2. if you don’t have the data, then you have a problem.

      we have data.

    1. Title: DeepMind uses AI to predict the structure of almost all proteins. Text: DeepMind, in partnership with the European Molecular Biology Laboratory's European Bioinformatics Institute, has released predicted structures for nearly all catalogued proteins known to science. The announcement comes a year after the two partners released and open-sourced AlphaFold – an artificial intelligence (AI) system used to predict the 3D structure of a protein – and created the AlphaFold Protein Structure Database to share this scientific knowledge with the researchers. The database now contains over 200 million predicted protein structures, covering plants, bacteria, animals, and other organisms. It is expected to help researchers advance work on issues such as neglected diseases, food insecurity, and sustainability.

  13. Jun 2022
    1. We express our concerns on the risk, and ethical dilemma related to Artificial Intelligence, such as privacy, manipulation, bias, human-robot interaction, employment, effects and singularity among others. We encourage BRICS members to work together to deal with such concerns, sharing best practices, conduct comparative study on the subject toward developing a common governance approach which would guide BRICS members on Ethical and responsible use of Artificial Intelligence while facilitating the development of AI.

      BRICS cooperation on AI. Nothing too specific (unlike in some other fields). Interesting that they devote space to addressing concerns. ||JovanK|| ||sorina||

    1. Text-to-image processes are also impressive. The illustration at the top of this article was produced by using the article’s headline and rubric as a prompt for an ai service called Midjourney. The next illustration is what it made out of “Speculations concerning the first ultraintelligent machine”; “On the dangers of stochastic parrots”, another relevant paper, comes later. Abstract notions do not always produce illustrations that make much or indeed any sense, as the rendering of Mr Etzioni’s declaration that “it was flabbergasting” shows. Less abstract nouns give clearer representations; further on you will see “A woman sitting down with a cat on her lap”.

      Shall we try to use this Midjourney platform to illustrate some of our books? For example, we could have some segments of the Geneva Digital Atlas illustrated by this tool.

      ||Jovan||||MarcoLotti||||JovanNj||||anjadjATdiplomacy.edu||

    2. “Enlightenment”—a trillion-parameter model built at the Beijing Academy of Artificial Intelligence

      Do we know anything about this model or about Chinese research?

  14. Apr 2022
    1. Global policy AI initiative to follow

      ||sorina||||Jovan||

    1. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.

      AI rhetorics

    1. “These simulation engines and the data and everything that goes into them are incredibly important for us to act in these complex environments when it’s not just one crisis but it’s a set of compounding crises,”

      Q: Why 'Destination Earth' platform matters?

    2. “Destination Earth,” will draw on a host of environmental, socioeconomic and satellite data to develop digital “twins” of the planet that aim to help policymakers — and eventually the public — better understand and respond to rising temperatures.

      Q: What is the EU's 'Destination Earth' initiative?

    1. Our Textus system performs data labelling in an integrated way - as part of everyday routines.

      ||anjadjATdiplomacy.edu||||JovanNj||

    1. GANs are deep learning models that use two neural networks working against each other—a generator that creates synthetic data, and a discriminator that distinguishes between real and synthetic data—to generate synthetic images that are almost indistinguishable from real ones. GANs are popular for generating images and videos, including deepfakes.

      Q: What are GANs
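
      A minimal PyTorch sketch of that adversarial loop, on toy 2-D data rather than images (architecture and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn

# Generator maps noise to fake samples; discriminator scores real vs fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 2) * 0.5 + 2.0  # toy "real" distribution

for step in range(1_000):
    # Discriminator step: push real toward label 1, fake toward 0.
    fake = G(torch.randn(64, 8)).detach()          # detach: don't train G here
    real = real_data[torch.randint(0, 256, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make D label fresh fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```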

    1. I recently learned that uppercase and lowercase letters got their names from actual wooden cases of lead that were used by compositors for printing.

      Q: What is the etymology of 'lowercase' and 'uppercase'?

    2. It began as a term from French railroad engineering referring to the layers of material that go beneath (“infra”) the tracks. Its meaning expanded to include roads, bridges, sewers and power lines, and very recently expanded again to include people, specifically caregivers, as in this fact sheet from the Biden White House

      Q: What is the etymology of the term 'infrastructure'?

    3. I.C.E. is short for internal combustion engine, a modifier that was superfluous until electric cars came on the scene.

      Q: What is I.C.E.?

    4. It refers simply to the physical world, where we have tangible bodies made of … meat. “Meatspace” is a word that didn’t need to exist until the invention of cyberspace. Technological progress gives us a new perspective on things we once took for granted, in this case reality itself.

      Q: What is meatspace?

    1. The EU Cybersecurity Act defines cybersecurity as “the activities necessary to protect network and information systems, the users of such systems, and other persons affected by cyber threats”.

      What is cybersecurity?
