107 Matching Annotations
  1. Apr 2024
    1. This paper presents Climinator, a novel AI-based tool designed to automate the fact-checking of climate change claims. Utilizing an array of Large Language Models (LLMs) informed by authoritative sources like the IPCC reports and peer-reviewed scientific literature, Climinator employs an innovative Mediator-Advocate framework.

      ||JovanK|| ||JovanNj||

      This is an AI model that ChatClimate.ai and WMO have collaborated on building. The Climinator uses a 'mediator-advocate' framework that is quite innovative. I have doubts about the efficacy of this framework, but it could serve as inspiration for the AI lab if we ever want to build an array of models capable of debating with each other and coming to a conclusion.

      Might be an interesting framework for us to try out, for example, in the following scenario: our moderator is an LLM fine-tuned on diplomatic texts; our advocate 1 is an LLM grounded in GDC-related documents (via RAG); and our advocate 2 is an LLM fed with real-life expert interactions from our events (via RAG or fine-tuning). We could then give the moderator an overall prompt asking advocates 1 and 2 to find consensus on a GDC draft. ||sorina||
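
      A minimal sketch of what such a mediator-advocate loop could look like, assuming a generic `generate()` helper as a placeholder for whichever LLM backend (API call, local model, or RAG pipeline) plays each role; this illustrates the scenario above and is not Climinator's actual implementation:

      ```python
      # Sketch of a mediator-advocate debate loop (illustrative, not Climinator's code).
      def generate(system_prompt: str, user_prompt: str) -> str:
          """Placeholder: call the LLM playing this role and return its reply."""
          raise NotImplementedError

      ADVOCATES = {
          "gdc_documents": "You argue strictly from GDC-related documents (retrieved via RAG).",
          "expert_interactions": "You argue from expert interactions recorded at our events.",
      }
      MEDIATOR = ("You are a mediator fine-tuned on diplomatic texts. Weigh the advocates' "
                  "arguments and work towards a consensus draft.")

      def debate(question: str, rounds: int = 3) -> str:
          transcript = f"Question: {question}"
          for _ in range(rounds):
              # Each advocate responds to the question and the transcript so far.
              for name, persona in ADVOCATES.items():
                  transcript += f"\n{name}: {generate(persona, transcript)}"
              # The mediator either declares consensus or asks follow-up questions.
              verdict = generate(MEDIATOR, transcript + "\nIs there consensus? If yes, say CONSENSUS and summarise it; if not, ask follow-ups.")
              transcript += f"\nmediator: {verdict}"
              if "CONSENSUS" in verdict.upper():
                  break
          return generate(MEDIATOR, transcript + "\nProduce the final consensus draft.")
      ```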


  2. Feb 2024
    1. Computing Power and the Governance of Artificial Intelligence

      @jovan @sorina This article (published just a week ago) presents new research on regulating frontier AI models by imposing restrictions and safety measures on AI chips.

      I previously shared a similar article called 'How to Catch a Chinchilla', which laid out preliminary versions of the ideas developed here.

      The article focuses on regulating the 'hardware' side of frontier models, both because hardware is more visible and easier to track and locate, and because, given how AI is currently built, the development of advanced AI models still relies on securing a large number of AI chips.


    1. Another Hanna paper, presented at the Resistance AI workshop, urges the machine learning community to go beyond scale when considering how to address systemic social issues and asserts that resistance to scale thinking is needed.

      From what I have observed, the notion of 'scaling an algorithm' always results in a high degree of decontextualization of data and outputs (i.e., training data is taken out of the context from which it was derived, and the outputs are applied to various contexts instead of merely the one the model was trained for).

      A major reason for the lack of reflection in this regard was, in my opinion, the tech community's blatant refusal to think about the systemic, societal risks of their models: they framed those risks as a problem for policymakers and ethicists, as if the social dimension could be mitigated separately from the technical conception of the models.

    2. “We argue that fixes that focus narrowly on improving datasets by making them more representative or more challenging might miss the more general point raised by these critiques, and we’ll be trapped in a game of dataset whack-a-mole rather than making progress, so long as notions of ‘progress’ are largely defined by performance on datasets,” the paper reads. “Should this come to pass, we predict that machine learning as a field will be better positioned to understand how its technology impacts people and to design solutions that work with fidelity and equity in their deployment contexts.”

      I suspect that the current AI regulation discussions are missing the long-term risks that certain paradigms in the current AI development space might pose; one is the predominant idea that we can resolve fairness by simply codifying fairness metrics and making them a requirement for a model to 'hit the market'.

      This logic fails to engage with the point this paragraph is making: ensuring fairness goes beyond obtaining a 'fair dataset' that performs well; it concerns what the deployment context is, how developers and users interact with the AI model, and how the model becomes interwoven in the lives of the people working around it.
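
      To make concrete what 'codifying a fairness metric' means in practice, here is a minimal, purely illustrative sketch of a demographic-parity check (the record fields `group` and `approved` are hypothetical), the kind of single-number gate that the comment above argues is insufficient on its own:

      ```python
      # Illustrative sketch of a single-number "fairness" gate of the kind criticised above.
      from collections import defaultdict

      def demographic_parity_gap(records):
          """Largest difference in positive-outcome rates between groups."""
          totals, positives = defaultdict(int), defaultdict(int)
          for r in records:
              totals[r["group"]] += 1
              positives[r["group"]] += int(r["approved"])
          rates = [positives[g] / totals[g] for g in totals]
          return max(rates) - min(rates)

      # A model could pass such a gate (e.g. gap < 0.05) and still cause harm in its
      # deployment context -- which is the comment's point.
      sample = [{"group": "A", "approved": 1}, {"group": "A", "approved": 0},
                {"group": "B", "approved": 1}, {"group": "B", "approved": 1}]
      print(demographic_parity_gap(sample))  # 0.5
      ```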


  3. Nov 2023
    1. Given its dual-use nature, CAI could make it easier to train pernicious systems

      Could someone not do the reverse and curate a constitution of purely malicious intentions?

    2. AI alignment, which incentivizes developers to adopt this method

      How do we know if AI is aligning with us when we leave the job of aligning it to AI as well???

    3. CAI increases model transparency.

      Not really though... just as with regular LLMs, we don't know how the models comprehend the data we give them or how they come up with answers. There's no guarantee that the models understand the terms of the principles in the ways we understand them; how do we know the model is indeed making decisions according to the values (or whatever definition we might give those values) and not just by happenstance?


    1. The Department will develop and implement data quality assessment tools and monitoring processes with results that are transparent to users. Assessments will also be done to evaluate data outputs from other AI platforms to minimize risk.

      What about the input data used to train other AI systems?

    2. High quality datasets are those sufficiently free of incomplete, inconsistent, or incorrect data, while also being well documented, organized, and secure

      Doesn't this definition mostly point to highly structured data?

    3. The Department’s CDAO will support and coordinate the establishment and maintenance of AI policies—such as 20 FAM 201.1—that provide clear guidelines for responsible AI use, steward AI models, and prioritize the evaluation and management of algorithmic risk (e.g., risks arising from using algorithms) in AI applications during their entire lifecycle—including those related to records retention, privacy, cybersecurity, and safety.

      Existence of current AI policies.

      Another thing: they mention algorithmic risk, which suggests that the evaluation of algorithms may not be limited to applications?

    4. Much like the EDS aims to cultivate a data culture, the Department will imbue its values around responsible AI use across the organization, including to uphold data and scientific integrity.

      Very interesting how they use words like "culture" to describe AI integration. It certainly goes beyond simply adopting a selection of tools; instead, it's about perspective and norm-shaping within the organisation.

    5. enhance AI literacy, encourage and educate on responsible AI use, and ensure that users can adequately mitigate some risks associated with AI tools.

      What is AI literacy? What sets of knowledge and skills make a person AI literate?

    6. To meet the computational demands of AI development, our infrastructure will leverage Department cloud-based solutions and scalable infrastructure services.

      Did they already have that infrastructure ready?

    7. Robust access controls and authentication mechanisms aligned to Zero Trust principles will mitigate risk of unauthorized access to AI technologies and Department data, providing a high level of security
    8. with a mix of open-source, commercially available, and custom-built AI systems.

      Open-source is the key word here.

    9. its Enterprise Data Strategy (EDS) in September 2021

      EAIS has a predecessor

    10. Innovate

      Use cases

    11. Ensure AI is Applied Responsibly

      Principles and standards

    12. Foster a Culture that Embraces AI Technology

      Workforce

    13. Leverage Secure AI Infrastructure

      Infrastructure


  4. Oct 2023
    1. The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

      ||Jovan||||sorina||||JovanNj||

      Probably old news to you, but here's an article about the billionaire who founded Open Philanthropy (a core funder of EA activities) and about its reach into politics.


    1. William Isaac

      Staff Research Scientist on DeepMind’s Ethics and Society Team and Research Affiliate at Oxford University’s Centre for the Governance of AI: https://wsisaac.com/#about

      Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!

    2. Arvind Narayanan

      Professor of computer science at Princeton University and director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.

      Haven't read his work closely yet, but it seems sensible to me.

    3. Sara Hooker,

      Director at Cohere: https://cohere.com/ (an LLM AI company).

    4. Yoshua Bengio

      Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

      Very influential computer scientist, considered a leading force in AI. Also an AI doomer, though I can't find a clear link between him and EA.

    5. John McDermid

      Professor at the University of York.

    6. Alexander Babuta
    7. Irene Solaiman

      Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/

    8. Paul Christiano
    9. Capabilities and risks from frontier AI

      ||Jovan|| ||sorina|| ||JovanNj||

      This is the report that the UK released ahead of the AI Safety Summit (1-2 November 2023).


    1. On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

      ||Jovan||||sorina||||JovanNj||

      This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.

      Also, it shouldn't come as a surprise to us that the UK government is the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based in and attached to UK universities (Oxbridge).


    1. We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

      @Jovan @Sorina

      The very people warning us of extinction-level AI risks are the ones developing technologies in a way that leads us towards those risks. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.

    2. What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

      @Jovan @Sorina

      This is my concern with solely using a longtermist view to make policy judgments.


  5. Sep 2023
    1. Inside effective altruism, where the far future counts a lot more than the present

      ||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse on AI governance with terms like "fate-defining moment" and "threat against humanity".


    1. The Reluctant Prophet of Effective Altruism

      ||Jovan||

      This is an in-depth interview with the original founder of the effective altruism movement. The same philanthropic/philosophical movement went through several iterations and very quickly turned political and ambitious.


    1. This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.

      ||Jovan|| ||JovanNj||

      An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little of how the LLM "thinks" (which is very different from a human, who may or may not have memorized the Shakespeare quote by heart but would nonetheless understand that the "prompt" is more important than reciting the most plausible quote).
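
      A minimal sketch of how one could probe this failure mode on any model (the `generate()` helper is a hypothetical placeholder for the model under test, and the prompt is illustrative, echoing the Shakespeare example above):

      ```python
      # Sketch of the instruction-vs-memorised-continuation test described above.
      def generate(prompt: str) -> str:
          """Placeholder: call the model being probed and return its completion."""
          raise NotImplementedError

      prompt = (
          "Write a quote that ends in the word 'happy': "
          "All the world's a stage, and all the men and women merely"
      )
      completion = generate(prompt)
      # A model that follows the instruction ends with "happy"; a model leaning on
      # the memorised Shakespeare line answers "players" instead.
      print("follows instruction:", completion.strip().rstrip(".").endswith("happy"))
      ```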


    1. The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).

      It might be interesting to carry out similar tasks on the model that Diplo is fine-tuning, to see where its algorithmic reasoning breaks.

      Also, it might make a good comparison study to show on paper how the Diplo model works better with higher-quality data (the bottom-up approach). It would be good evidence to show, I suppose. ||Jovan|| ||JovanNj||

    2. The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).

      ||Jovan|| ||JovanNj||

      I found this interesting because it might shed light on the problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was really good at generating a quote that doesn't exist and was never said by the person it was attributed to.

      This is plausible because, in the "reality" the model lives in, multiple data sources made it probable that such a quote might exist and that it would naturally follow that other quote, that name, and that organisation. It is interesting to see where a model that is very good at inductive reasoning and association sometimes fails, because induction and association aren't the only two logics humans use to approach reality.


  6. Aug 2023
    1. Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.

      ||Jovan|| ||JovanNj||

      I'm not sure this actually was the problem, though. The Perspective API researchers deliberately designed the prompt given to the annotators to be ambiguous and left up to the annotators' own perspective: they asked the annotators to mark a comment as toxic or not based on whether that comment would make them want to stay in or leave the conversation.

      The reasoning behind this ambiguity seems to be that the researchers did not want to define a priori what counts as "good" and "bad", and instead relied on the ambiguities of how people feel. This makes sense to me: if we fix a dictionary of good words and bad words in the system, we are also exercising our own bias and taking words out of context (we ourselves would be ignoring contextual significance as well).

    2. Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      ||Jovan|| ||JovanNj||

      The problem happens to be that, in the training dataset, most "toxic" comments include words referring to (and directed against) historically discriminated groups, so Perspective API learned the association that the mere presence of words referring to these groups makes a comment toxic.

      I recently came across the concept of "contextual significance", coined by early pattern recognition researchers in the 1950s, which basically means that a successful machine should be able to judge which meaning of a word is invoked in a given context (the "neighborhood": the nearby words of a sentence, or the nearby pixels of a picture) and what effect it would create for which group of people. Perspective API lacked this.

      The Perspective researchers apparently decided to feed the algorithm more "non-toxic" comments that include terms relating to minorities or discriminated groups to balance out the adverse score associated with them.
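
      A minimal sketch of that rebalancing idea, assuming a simple list-of-dicts dataset (the field names and the identity-term list are hypothetical, not Jigsaw's actual pipeline):

      ```python
      # Sketch: add non-toxic comments mentioning identity terms so the classifier
      # stops equating the terms themselves with toxicity.
      IDENTITY_TERMS = {"queer", "muslim", "black", "disabled"}

      def mentions_identity(text: str) -> bool:
          return any(term in text.lower() for term in IDENTITY_TERMS)

      def rebalance(dataset, extra_nontoxic_pool):
          """dataset: list of {"text": str, "toxic": 0 or 1}; returns an augmented copy."""
          toxic_hits = sum(1 for d in dataset if d["toxic"] and mentions_identity(d["text"]))
          clean_hits = sum(1 for d in dataset if not d["toxic"] and mentions_identity(d["text"]))
          needed = max(0, toxic_hits - clean_hits)
          additions = [d for d in extra_nontoxic_pool if mentions_identity(d["text"])][:needed]
          return dataset + additions
      ```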

    3. Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

      If I recall correctly, Perspective API by Google Jigsaw was trained on a dataset of Wikipedia Talk page comments labelled by crowd-sourced workers. Just some background information.

    4. OpenAI proposes a new way to use GPT-4 for content moderation

      ||Jovan|| ||JovanNj||

      GPT-4 apparently can take a defined policy and check whether new inputs violate it. I'm more curious about how we could understand the logic or reasoning the model uses to classify inputs as policy-compliant or non-compliant.
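
      A rough sketch of how such policy-conditioned moderation might be prompted (the policy text and the `generate()` helper are placeholders; this is not OpenAI's actual implementation). Asking the model to cite the clause it relied on is one small way to probe the reasoning question raised above:

      ```python
      # Sketch: give the model a policy plus a piece of content and ask for a
      # labelled verdict together with the clause it relied on.
      def generate(prompt: str) -> str:
          """Placeholder: call whichever LLM acts as the moderator."""
          raise NotImplementedError

      POLICY = "1. No instructions for acquiring or building weapons.\n2. No targeted harassment."

      def moderate(content: str) -> str:
          prompt = (
              f"Policy:\n{POLICY}\n\n"
              f"Content:\n{content}\n\n"
              "Does the content violate the policy? Answer VIOLATES or COMPLIES, "
              "then cite the policy clause your decision relies on."
          )
          return generate(prompt)
      ```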


    1. "We're enabling ownership of these technologies by African founders," he told DW. "This is being built and served by people from these communities. So the financial rewards will also go directly back to them."

      ||Jovan||

      It might be worthwhile to investigate how this mode of grassroots model-building is done. To me, it is even more interesting to think about how start-ups working closely with local communities and embracing this "bottom-up" approach thrive in the places most left out by today's biggest and hottest machine learning systems (ChatGPT, DeepMind, etc.).

    2. Outside of Africa as well, researchers around the world are working on other languages including Jamaican Patois, Catalan, Sudanese, and Māori.
    3. With this approach, companies like Lesan cannot hope to rival the billions of pages of English content available, but they might not need to. Lesan, for instance, already outperforms Google Translate in both Amharic and Tigrinya. "We've shown that you can build useful models by using small, carefully curated data sets," said Asmelash Teka Hadgu. "We understand its limitations and capabilities. Meanwhile, Microsoft or Google usually build a single, gigantic model for all languages, so it's almost impossible to audit."
    4. This requires a lot of manual labor. Contributors first identify high-quality datasets, such as trustworthy books or newspapers, then digitize and translate them into the target languages. Finally, they align the original and translated versions sentence by sentence to guide the machine learning process.

    1. Consistent with the findings of the 2021 edition, software tools and reports are the most common outputs of UN AI projects, which can be used to address challenges impeding progress on the SDGs

      ||Jovan|| ||JovanNj||

      It might be interesting to look at the software tools and how many of those projects have come to fruition. Just as Diplo is now exploring ways to incorporate AI tools into our line of work, other UN organisations are doing so, too. What is the scale of their incorporation (does AI replace a core or small function, or does AI assist humans in their job)?

      This might teach us about organisational thinking with regard to the incorporation of AI beyond weightless calls for more awareness of AI-induced transformation.


  7. Jul 2023
    1. we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information

      ||JovanK|| ||JovanNj||

      In my head, the idea of model cards that accompany models resembles nutritional labels on the food products we purchase. This self-reporting could then be regularised (through policies) and could be used by auditors to check whether a model truly complies with the information presented in its card.

      Anthropic (a firm that produced AI assistant Claude) has already put out a model card: https://h.diplomacy.edu:8000/6l6xBC1SEe6I1uPxv4kRmw/www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf
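
      A minimal sketch of the kinds of fields such a card records, following the paper's description (intended use, evaluation procedure, performance disaggregated across groups); all names and numbers below are made up for illustration:

      ```python
      # Illustrative model card skeleton; every value here is invented.
      model_card = {
          "model_details": {"name": "example-classifier", "version": "1.0", "type": "text classifier"},
          "intended_use": {
              "primary_uses": ["flagging toxic comments for human review"],
              "out_of_scope": ["fully automated moderation without human oversight"],
          },
          "evaluation_procedure": "held-out test set; metrics reported per subgroup",
          "disaggregated_performance": {
              "overall": {"accuracy": 0.91},
              "by_demographic_group": {"group_a": {"accuracy": 0.93}, "group_b": {"accuracy": 0.86}},
          },
          "ethical_considerations": ["labels reflect annotator judgements and may encode bias"],
      }
      ```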


    1. Model Card and Evaluations for Claude Models

      ||JovanK|| ||JovanNj||

      Model card example from Anthropic's Claude 2 AI assistant.


    1. most AI systems are complex socio-technical systems in which control over the system is extensively distributed. In many cases, a multitude of different actors is involved in the purpose setting, data management and data preparation, model development, as well as deployment, use, and refinement of such systems. Therefore, determining sensible addressees for the respective obligations is all but trivial.

      ||JovanK||||JovanNj||

      A comparison of how the EU's approach to assigning responsibilities in AI governance has changed. I think the assignment of responsibility and accountability is an important topic as well.


    1. 5.1 Lack of methods and metrics to operationalise normative concepts

      ||Jovan||||JovanNj||

      To check the compliance of LLMs with existing ethical principles or regulations, we first need to boil down what these principles even mean and how they could be operationalised in a technical sense.

      I believe that policymakers would really benefit from talking directly to tech developers, not because the latter know more about ethical principles, but because policymakers cannot come up with sensible and implementable ethical requirements without a better understanding of how tech developers work. Questions like "what does it mean to build a model that exhibits 'fairness'?" cannot be answered otherwise.

    2. we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other.

      ||Jovan||||JovanNj||

      Auditing LLMs is trickier than auditing other types of computer programs. For conventional programs, auditing is easier because the computing steps are all visible in the code; LLMs, by contrast, involve highly complex training data and learned connections that are not easily visible or decipherable to auditors.

      It is imperative for us to understand what an LLM does, so a new auditing approach is called for.


  8. Nov 2022
  9. Oct 2022
    1. Diplomatic Footprint of International Geneva

      The Diplomatic Footprint of International Geneva displays the visibility of 236 Geneva-based actors (link) when one searches Google for 267 policy, governance and diplomatic topics (link).

      The Diplomatic Footprint of International Geneva is calculated and evaluated based on Google searches routinely conducted in 53 cities around the world.

      *From the analysis on the page, I deduced that some actors were not mentioned at all. I believe that, for the purpose of transparency and research integrity, they should be shown as well (as in, if they have 0 mentions, that should be visible to viewers, too). That's why I believe there should be a link to the full list of actors used for each analysis (the same goes for the tech footprints and the global media houses).

    2. Compare to the other actors:

      How is the weighted score calculated?

    3. Select the topic field of interest:

      The legend of this graph is not clearly presented. For example, it isn't shown that the color bar on the right-hand side represents the points an actor receives. The bottom axis should be named "the number of mentions an actor receives" instead of "#mentions". What the size of the dots represents isn't clear either.

      As the main metric used for this analysis is the points, maybe the X axis could be marked by points instead of the number of mentions? And vice versa, the colors could represent the number of mentions instead?

      For the y axis: How is the average rank calculated? Is it points divided by number of mentions?

    4. Select the search topic:

      It might be clearer if the graph information (now embedded in the "i" button) could be directly shown somewhere on the page.

    5. Digital Footprint

      Where could this link be found on Diplo's website?

      Also, in each Footprint, there are three ways to view the data: Ranking by topics, Ranking by actors, and Actor analysis. It might be better if this homepage had explanatory text about what each of these views allows you to see (and why you would use it).

      There should also be some explanatory text on what the Digital Footprint is in general (an analysis based on the searchability of actors / link analysis).

    6. Diplomatic Footprint of Global Media

      Title: "of Global Media" should be put to the second line

      The Diplomatic Footprint of Global Media displays the visibility of 45 media houses (link) when one searches Google for 267 policy, governance and diplomatic topics (link).

      The Diplomatic Footprint of global media is calculated and evaluated based on Google searches routinely conducted in 53 cities around the world.

    7. Tech Footprint of Global Media

      Title: "of Global Media" should be put to the second line

      The Tech Footprint of Global Media displays the visibility of 45 media houses (link) when one searches Google for 269 tech topics (link).

      The Tech Footprint of Global Media is calculated and evaluated based on Google searches routinely conducted in 12 cities around the world.

    8. Tech Footprint of International Geneva

      The Tech Footprint of International Geneva displays the visibility of 204 Geneva-based actors (link) when one searches Google for 269 tech topics (link).

      The Tech Footprint of International Geneva is calculated and evaluated based on Google searches routinely conducted in 12 cities around the world.


  10. Sep 2022
    1. The right to privacy in the digital age

      ||MarcoLotti|| ||JovanK||

      There was a report from OHCHR on "the right to privacy in the digital age" presented during the Human Rights Council on September 16th. I will look deeper into this if it is of interest to Diplo!


  11. Aug 2022
    1. Digital Economy and Society Index 2022: overall progress but digital skills, SMEs and 5G networks lag behind

      ||JovanK||

      This is the Digital Economy and Society Index (DESI) mentioned in the EU Digital Decade policy programme. The EU has announced the results for 2022's DESI performance.


  12. Jul 2022
    1. The European Space Programme bolsters the EU Space policy

      The European Space Programme


    1. 16th European Union - African Union Summit:

      Full text of the Joint Vision


    1. 11.

      ||JovanK|| Unsure whether all the different partnerships in this point should be added to the resource page. Currently, the highlighted parts are the ones already included.

    2. the Joint Declaration by the EU and Indo-Pacific countries
    3. the Eastern Partnership’s EU4Digital Initiative
    4. the Secretary General’s Roadmap for Digital Cooperation
    5. the Paris and Christchurch Calls
    6. the UN’s Global Digital Compact
    7. Actively promote universal human rights and fundamental freedoms, the rule of law and democratic principles in the digital space and advance a human-centric and human rights-based approach to digital technologies

      The Commission proposed a Declaration on European Digital Rights and Principles in January 2022. The Declaration is waiting to be endorsed by both the Parliament and the Council. Would the EU pursue the same set of rights and principles internationally and universally?

    8. Mentioned in the Joint Communication to the European Parliament and the Council - Resilience, Deterrence and Defence: Building strong cybersecurity for the EU. JOIN(2017) 450. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=JOIN:2017:450:FIN

    9. the Cyber Diplomacy Toolbox

    1. The Christchurch Call to Action

      Full text of the Christchurch Call


    1. The EU's Cybersecurity Strategy for the Digital Decade

      Text of the EU Cybersecurity Strategy


    1. Cyber attacks: EU ready to respond with a range of measures, including sanctions

      The Council's decision on a joint EU diplomatic response.


    1. ESCWA annual report 2018: Technology for development

      ||JovanK||

      Hi, I am not sure if this is a mislink from before or not. The title doesn't match the link provided on this page.

      The link goes to the "Joint Communication to the European Parliament and the Council - Resilience, Deterrence and Defence: Building strong cybersecurity for the EU. JOIN(2017) 450.", not the "ESCWA annual report 2018".


    1. The Amazon case also influenced the new rules. In November 2020, Vestager told reporters that “data on the activity of third-party sellers should not be used to the benefit of Amazon when it acts as a competitor to these sellers.” Just a month later, her DMA proposal included an outright prohibition for gatekeepers from using the non-public data of business users to compete against them.

      The interaction between Big Tech companies and the EU regulators also influences regulations. It might hint at a potential pathway of influence for the future implementation of the DSA/DMA.

    2. Amazon is aiming to solve Commission concerns by leaning on the recently-adopted Digital Markets Act (DMA), by potentially sharing some data with other sellers on the platform, two individuals with direct knowledge of the case said.

      This indicates that even before the roll-out of the DMA, companies are already proactively conforming to the rules. This "foreshadowing effect" of the DMA could happen because 1) the past EU regulatory regime has shown a decent level of efficacy in execution and 2) the benefits of compliance outweigh the costs of non-compliance.


  13. Jun 2022
    1. China’s tech regulation is getting more ‘rational,’ says top executive of JD.com

      ||MarcoLotti||

      Would it be interesting to also cover China's technology regulatory regime? This probably doesn't fall under the DSA/DMA curation, but since China is also a major player in the tech industry, how the Chinese regulatory model progresses might be as interesting as the US and EU models. Scholars and Big Tech CEOs often pit these three models against each other; the US big techs often say that they prefer the EU model and are wary of the Chinese model.


    1. The so-called code of practice on disinformation, a voluntary set of principles the likes of Facebook, Google and Twitter signed up to in 2018, has been rewritten to compel these social media giants to make more data available to outsiders, commit themselves to stop online falsehoods from making money via advertising, and pledge to enact a litany of other measures that Brussels will oversee. But the new rule book, which will come into force in early 2023, will not apply to scores of relatively new and unregulated social media platforms that have garnered millions of new users in recent years, after these companies did not sign up. Such growth is driven by the promise of offering people an online space unfettered by content moderation and with a focus on free speech. While these fringe networks do not have the high user volume of more mainstream social media giants, they have become ground zero in the spread of disinformation, hate speech and other extremist and violent content.

      ||MariliaM||

      Harkening back to the lunchtime discussion on Tuesday, where you told us that there are many conspiracy theorists operating on smaller platforms that have yet to be touched by EU regulations, this piece says exactly that. The new code of practice on disinformation (which would fall under the DSA) has only been signed by the big players; the smaller platforms (like Telegram) where disinformation is spreading rampantly have not joined yet. The EU regulation might simply watch disinformation campaigns (and conspiracy theorists) switch bases.

      ||MarcoLotti|| ||Jovan||

      The DSA has a strong focus on VLOPs, as it rightly should. However, the battle against disinformation and other abuses of fundamental rights online might be harder to win, since disinformation campaigns might shapeshift into other forms or simply move to different bases (such as closed-off, smaller platforms). Would a blog post addressing this caveat of the DSA be interesting?


    1. EU plans Silicon Valley base as tech crackdown looms

      ||Jovan|| ||MarcoLotti||

      If the EU indeed establishes a new base in California, it might be very interesting for us to follow how its policymaking and its dynamics with Big Tech companies change. Would this encourage more sandbox policy experiments? Or would this become an influence point where Big Tech, in return, lobbies the policymakers heavily?
