- Apr 2024
-
-
This paper presents Climinator, a novel AI-based tool designed to automate the fact-checking of climate change claims. Utilizing an array of Large Language Models (LLMs) informed by authoritative sources like the IPCC reports and peer-reviewed scientific literature, Climinator employs an innovative Mediator-Advocate framework.
||JovanK|| ||JovanNj||
This is an AI model that ChatClimate.ai and WMO have collaborated on building. The Climinator uses a 'mediator-advocate' framework that is quite innovative. I have doubts about the efficacy of this framework, but I thought it could be something of inspiration to the AI lab if we ever want to build an array of models capable of debating with each other and coming to a conclusion.
Might be an interesting framework for us to try out, for example, in the following scenario: Our moderator is a diplomatic-text-finetuned LLM; our advocate 1 is an LLM based on GDC-related documents (via RAG) and our advocate 2 is an LLM fed with real-life expert interactions during our events (via RAG or finetuning). Then, we can give an overall prompt to the moderator to ask advocate 1 and 2 to find consensus on a GDC draft.||sorina||
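A minimal sketch of how that mediator-advocate loop could look, assuming hypothetical model handles and prompts (none of this is Climinator's or Diplo's actual code):
```python
# Hypothetical sketch of a mediator-advocate debate loop; ask_model() is a stub
# standing in for calls to real LLMs (fine-tuned or RAG-backed).

MAX_ROUNDS = 3  # arbitrary cap on debate rounds

def ask_model(role_prompt: str, transcript: list[str]) -> str:
    """Stub for an LLM call; in practice this would hit a fine-tuned or RAG-backed model."""
    return f"[{role_prompt[:40]}] placeholder answer"

def debate(claim: str) -> str:
    transcript = [f"Claim under discussion: {claim}"]
    for _ in range(MAX_ROUNDS):
        # Each advocate argues from its own evidence base: GDC-related documents
        # vs. real-life expert interactions captured at our events.
        for advocate_prompt in ("You argue from GDC-related documents.",
                                "You argue from expert interactions at our events."):
            transcript.append(ask_model(advocate_prompt, transcript))
        # The moderator (the diplomatic-text-finetuned LLM) checks for convergence.
        verdict = ask_model("You are a neutral moderator. If the advocates agree, "
                            "state the consensus; otherwise reply CONTINUE.", transcript)
        if "CONTINUE" not in verdict:
            return verdict
    return ask_model("Summarise the remaining disagreement.", transcript)
```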
-
- Feb 2024
-
-
Computing Power and the Governance of Artificial Intelligence
@jovan @sorina This article (published just a week ago) presents new research on regulating frontier AI models by imposing restrictions and safety measures on AI chips.
I previously shared a similar article called 'How to Catch a Chinchilla', which lays out preliminary versions of the ideas developed here.
The article focuses on regulating the 'hardware' layer of frontier models, both because hardware is more visible and easier to track and locate, and because, given how AI is currently built, the development of advanced AI models still relies on securing a large number of AI chips.
-
-
-
Another Hanna paper, presented at the Resistance AI workshop, urges the machine learning community to go beyond scale when considering how to address systemic social issues and asserts that resistance to scale thinking is needed.
From what I have observed, the notion of 'scaling an algorithm' always results in heavy decontextualization of data and outputs (i.e., training data is taken out of the context from which it was derived, and the outputs are applied to various contexts instead of merely the one the model was trained for).
And, in my opinion, a big reason for the lack of reflection in this regard is the tech community's blatant refusal to think about the systemic, societal risks of their models: they frame those as a problem for policymakers and ethicists, as if the social dimension could be mitigated separately from the technical conception of the models.
-
“We argue that fixes that focus narrowly on improving datasets by making them more representative or more challenging might miss the more general point raised by these critiques, and we’ll be trapped in a game of dataset whack-a-mole rather than making progress, so long as notions of ‘progress’ are largely defined by performance on datasets,” the paper reads. “Should this come to pass, we predict that machine learning as a field will be better positioned to understand how its technology impacts people and to design solutions that work with fidelity and equity in their deployment contexts.”
I suspect that the current AI regulation discussions are missing the long-term risks that certain paradigms in today's AI development space might pose; one is the predominant idea that we can resolve fairness simply by codifying fairness metrics and making them a requirement for a model to 'hit the market'.
That logic fails to engage with what this paragraph is getting at: ensuring fairness goes beyond obtaining a 'fair dataset' that performs well; it concerns the deployment context, how developers and users interact with the AI model, and how the model becomes interwoven in the lives of the people working around it.
-
- Nov 2023
-
www-files.anthropic.com www-files.anthropic.com
-
Given its dual-use nature, CAI could make it easier to train pernicious systems
Could someone not do the reverse and curate a constitution of purely malicious intentions?
-
AI alignment, which incentivizes developers to adopt this method
How do we know if AI is aligning with us when we leave the job of aligning it to AI as well???
-
CAI increases model transparency.
Not really though... just as with regular LLMs, we don't know how the models comprehend the data we give them or how they come up with answers. There's no guarantee that a model understands the terms of the principles in the way we understand them; how do we know it is indeed making decisions according to the values (or whatever definition we give those values) and not just by happenstance?
-
-
www.state.gov www.state.gov
-
The Department will develop and implement data quality assessment tools and monitoring processes with results that are transparent to users. Assessments will also be done to evaluate data outputs from other AI platforms to minimize risk.
What about the input data used to train other AI systems?
-
High quality datasets are those sufficiently free of incomplete, inconsistent, or incorrect data, while also being well documented, organized, and secure
Doesn't this definition mostly point to highly structured data?
-
he Department’s CDAO will support and coordinate the establishment and maintenance of AI policies—such as 20 FAM 201.1—that provide clear guidelines for responsible AI use, steward AI models, and prioritize the evaluation and management of algorithmic risk (e.g., risks arising from using algorithms) in AI applications during their entire lifecycle—including those related to records retention, privacy, cybersecurity, and safety.
Existence of current AI policies.
Another thing: they mention algorithmic risk, which suggests that the evaluation of algorithms may not be limited to applications alone?
-
Much like the EDS aims to cultivate a data culture, the Department will imbue its values around responsible AI use across the organization, including to uphold data and scientific integrity.
Very interesting how they use words like "culture" to describe AI integration. It certainly goes beyond simply adopting selective tools; instead, it's about perspective and norm-shaping within the organisation.
-
enhance AI literacy, encourage and educate on responsible AI use, and ensure that users can adequately mitigate some risks associated with AI tools.
What is AI literacy? What sets of knowledge and skills make a person AI literate?
-
To meet the computational demands of AI development, our infrastructure will leverage Department cloud-based solutions and scalable infrastructure services.
Did they already have that infrastructure ready?
-
Robust access controls and authentication mechanisms aligned to Zero Trust principles will mitigate risk of unauthorized access to AI technologies and Department data, providing a high level of security
-
with a mix of open-source, commercially available, and custom-built AI systems.
Open-source is the key word here.
-
ts Enterprise Data Strategy (EDS) in September 2021
EAIS has a predecessor
-
Innovate
Use cases
-
Ensure AI is Applied Responsibly
Principles and standards
-
Foster a Culture that Embraces AI Technology
Workforce
-
Leverage Secure AI Infrastructure
Infrastructure
-
- Oct 2023
-
techcrunch.com techcrunch.com
-
OpenAI, Google and a ‘digital anthropologist’: the UN forms a high-level board to explore AI governance
-
-
www.politico.com www.politico.com
-
The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.
||Jovan||||sorina||||JovanNj||
Probably old news to you, but here's an article about the billionaire who founded Open Philanthropy (a core funder of EA activities) and about its reach into politics.
-
-
assets.publishing.service.gov.uk assets.publishing.service.gov.uk
-
William Isaac
Staff Research Scientist on DeepMind’s Ethics and Society Team and Research Affiliate at Oxford University's Centre for the Governance of AI: https://wsisaac.com/#about
Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!
-
Arvind Narayanan
Professor of computer science from Princeton University and the director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.
Haven't read his work closely yet, but it seems sensible to me.
-
Sara Hooker,
Director at Cohere: https://cohere.com/ (an LLM AI company).
-
Yoshua Bengio
Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).
A very influential computer scientist, considered a leading force in AI. Also an AI doomer, though I can't find a clear link between him and EA.
-
John McDermid
Professor from University of York.
-
Alexander Babuta
The Alan Turing Institute: https://cetas.turing.ac.uk/about/our-team/alexander-babuta
-
Irene Solaiman
Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/
-
Paul Christiano
A very big name on the EA forum: https://forum.effectivealtruism.org/topics/paul-christiano
-
Capabilities and risks from frontier AI
||Jovan|| ||sorina|| ||JovanNj||
This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).
-
-
www.cnbc.com www.cnbc.com
-
On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”
||Jovan||||sorina||||JovanNj||
This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.
Also, it shouldn't come as a surprise to us that the UK government would be the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based in and attached to UK universities (Oxbridge).
-
-
-
We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.
@Jovan @Sorina
The very same people warning us of extinction-level AI risks are the same people who are developing technologies in a way that leads us to it. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.
-
What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.
@Jovan @Sorina
This is my concern with solely using a longtermist view to make policy judgments.
-
- Sep 2023
-
www.nytimes.com www.nytimes.com
-
2 Senators Propose Bipartisan Framework for A.I. Laws
||Jovan||
-
-
www.technologyreview.com www.technologyreview.com
-
Inside effective altruism, where the far future counts a lot more than the present
||Jovan|| A good explanation of what this effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".
-
-
www.newyorker.com www.newyorker.com
-
The Reluctant Prophet of Effective Altruism
||Jovan||
This is an in-depth interview with the original founder of the effective altruism movement. There have been several iterations of the same philanthropic/philosophical movement, which turned political and ambitious very quickly.
-
-
irmckenzie.co.uk irmckenzie.co.uk
-
This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.
||Jovan|| ||JovanNj||
An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little of how the LLM "thinks" (very differently from a human, who may or may not have memorized the Shakespeare quote by heart but would nonetheless understand that the "prompt" matters more than reciting the most plausible quote).
-
-
github.com github.com
-
The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).
It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning--to see where its algorithmic reasoning will break.
Also, it might be a good comparison study to show on paper how the Diplo model works better with higher-quality data (bottom-up approach). It'd be good evidence to show, I suppose. ||Jovan|| ||JovanNj||
-
The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).
||Jovan|| ||JovanNj||
I found this interesting as it might be insightful for the problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was really good at generating a quote that doesn't exist and was never said by the person to whom it was attributed.
That is very plausible because, from the "reality" the model lives in, multiple data sources made it probable that "this quote might have existed, and it makes sense that this quote follows that quote and follows that name and that organization." It is interesting to see where a model that is very good at inductive reasoning and association sometimes fails, because induction and association aren't the only two logics humans use to approach reality.
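If we do probe the Diplo fine-tune this way, a rough harness could look like the sketch below: it checks whether the model follows an explicit instruction or falls back on the memorised continuation. The prompts, the scoring rule and generate() are my own illustrative assumptions, not the contest's actual code.
```python
# Hypothetical probe for instruction-following vs. memorised continuations.

PROBES = [
    {
        "prompt": 'Complete the sentence with the word "replaced": '
                  '"To be or not to be, that is the..."',
        "instructed": "replaced",   # what the instruction asks for
        "memorised": "question",    # the famous continuation the model may prefer
    },
]

def generate(prompt: str) -> str:
    """Stub for the model under test (e.g. a Diplo fine-tune)."""
    return "question"  # a stubbed answer that illustrates the failure mode

def run_probes(probes=PROBES) -> float:
    followed = 0
    for p in probes:
        answer = generate(p["prompt"]).lower()
        # Pass only if the instructed word appears and the memorised one does not win out.
        if p["instructed"] in answer and p["memorised"] not in answer:
            followed += 1
    return followed / len(probes)  # fraction of instructions actually followed

print(run_probes())  # 0.0 with the stub above: the memorised quote wins
```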
-
- Aug 2023
-
techcrunch.com techcrunch.com
-
Part of the reason for these failures is that annotators — the people responsible for adding labels to the training datasets that serve as examples for the models — bring their own biases to the table. For example, frequently, there’s differences in the annotations between labelers who self-identified as African Americans and members of the LGBTQ+ community versus annotators who don’t identify as either of those two groups.
||Jovan|| ||JovanNj||
I'm not sure this actually was the problem, though. The Perspective API researchers deliberately designed the prompt given to the annotators to be ambiguous and left to the latter's own perspective: they asked the annotators to mark a comment as toxic or not based on whether that comment would make them want to stay in or leave the conversation.
The reasoning behind this ambiguity seems to be that the researchers didn't want to define a priori what counts as "good" and "bad", and instead relied on the ambiguities of how people feel. This makes sense to me: if we fix a dictionary of good words and bad words in the system, we are also exercising our own bias and taking words out of context (we ourselves are ignoring contextual significance as well).
-
Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.
||Jovan|| ||JovanNj||
The problem happens to be that, in the training dataset, most "toxic" comments include words referring to and against historically discriminated groups, so Perspective API learned the linkage that the mere presence of words referring to these groups makes a comment toxic.
I recently came across the concept of "contextual significance" that was created by early pattern recognition researchers in the 1950s, which basically means that a successful machine should be able to judge which meaning of the word is invoked in a given context (the "neighborhood", the nearby words of a sentence/pixels of a picture) and what effect it would create for which group of people. Perspective API lacked this.
The Perspective researchers apparently decided to feed the algorithm more "non-toxic" comments that include terms relating to minorities or discriminated groups to balance out the adverse score associated with them.
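As a small illustration of that rebalancing idea, something like the sketch below could flag which identity terms the training data has effectively turned into "toxicity markers"; the term list and data format are my assumptions, not the actual Perspective API pipeline.
```python
# Hypothetical check for identity terms that correlate too strongly with "toxic" labels.

IDENTITY_TERMS = {"queer", "gay", "muslim", "disabled"}  # illustrative only

def toxicity_share_by_term(dataset: list[dict]) -> dict[str, float]:
    """For each identity term, the share of comments containing it that are labelled
    toxic; a very high share suggests the term itself, not the context, drives the label."""
    stats = {term: [0, 0] for term in IDENTITY_TERMS}  # term -> [toxic count, total count]
    for row in dataset:  # each row is assumed to look like {"text": str, "toxic": bool}
        words = set(row["text"].lower().split())
        for term in IDENTITY_TERMS & words:
            stats[term][0] += int(row["toxic"])
            stats[term][1] += 1
    return {term: (tox / total if total else 0.0) for term, (tox, total) in stats.items()}

# Terms with a share near 1.0 are the ones that need more non-toxic counterexamples.
```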
-
Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.
If I recall correctly, Perspective API by Google Jigsaw was trained on a dataset consisting of the comment section of Wikipedia Talk labelled by crowd-sourced workers. Just some background information.
-
OpenAI proposes a new way to use GPT-4 for content moderation
||Jovan|| ||JovanNj||
GPT-4 apparently can take a defined policy and check whether new inputs violate it. I'm more curious about how we could understand the logic or reasoning the model uses to classify these inputs as policy-compliant or non-compliant.
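My rough mental model of that setup is sketched below: the policy text goes into the prompt and the model is asked for a verdict plus a one-sentence rationale, which is the part we would inspect to probe its reasoning. call_llm() is a stub for whatever chat-completion endpoint is used, and the policy text is invented.
```python
# Hypothetical sketch of policy-based moderation with an LLM.

POLICY = """Disallowed: instructions for acquiring weapons; personal data of private
individuals; targeted harassment."""  # invented example policy

def call_llm(system: str, user: str) -> str:
    """Stub for a chat-completion call (e.g. to GPT-4)."""
    return "COMPLIES\nNo policy clause applies to this content."

def moderate(content: str) -> dict:
    answer = call_llm(
        system=("You are a content moderator. Apply this policy strictly:\n"
                f"{POLICY}\n"
                "Reply with VIOLATES or COMPLIES on the first line, then one sentence "
                "citing the policy clause you relied on."),
        user=content,
    )
    verdict, _, rationale = answer.partition("\n")
    # The rationale is the interesting part: our only window into the model's "reasoning".
    return {"verdict": verdict.strip(), "rationale": rationale.strip()}

print(moderate("A harmless test comment."))
```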
-
-
-
"We're enabling ownership of these technologies by African founders," he told DW. "This is being built and served by people from these communities. So the financial rewards will also go directly back to them."
||Jovan||
Might be worthwhile to investigate how this mode of grassroots model-building is done. To me, it is even more interesting to think about how start-ups working closely with local communities and embracing this "bottom-up" approach thrive in places that are the most left out by the biggest/hottest machine learning algorithms of this day (like ChatGPT, DeepMind, etc.).
-
Outside of Africa as well, researchers around the world are working on other languages including Jamaican Patois, Catalan, Sudanese, and Māori.
-
With this approach, companies like Lesan cannot hope to rival the billions of pages of English content available, but they might not need to. Lesan, for instance, already outperforms Google Translate in both Amharic and Tigrinya. "We've shown that you can build useful models by using small, carefully curated data sets," said Asmelash Teka Hadgu. "We understand its limitations and capabilities. Meanwhile, Microsoft or Google usually build a single, gigantic model for all languages, so it's almost impossible to audit."
-
This requires a lot of manual labor. Contributors first identify high-quality datasets, such as trustworthy books or newspapers, then digitize and translate them into the target languages. Finally, they align the original and translated versions sentence by sentence to guide the machine learning process.
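As a bare-bones illustration of that curation step: once a source text and its translation have been split into sentences and manually checked, the aligned pairs become simple source/target records used to guide training. The sentences below are placeholders, not real Amharic or English data.
```python
# Hypothetical sketch of turning manually aligned sentences into training pairs.

def build_pairs(source_sentences: list[str], target_sentences: list[str]) -> list[dict]:
    """Pair sentences one-to-one; a length mismatch means the manual alignment broke."""
    if len(source_sentences) != len(target_sentences):
        raise ValueError("Sentence counts differ: re-check the alignment by hand.")
    return [{"source": s, "target": t}
            for s, t in zip(source_sentences, target_sentences)]

# Illustrative call with placeholder sentences:
pairs = build_pairs(["<Amharic sentence 1>", "<Amharic sentence 2>"],
                    ["English sentence 1.", "English sentence 2."])
```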
-
-
-
Consistent with the findings of the 2021 edition, software tools and reports are the most common outputs of UN AI projects, which can be used to address challenges impeding progress on the SDGs
||Jovan|| ||JovanNj||
It might be interesting to look at the software tools and how many of those projects have come to fruition. Just as how Diplo is now exploring ways to incorporate AI tools in our line of work, other UN organisations are doing so, too. What is the scale of their incorporation (does AI replace a core/small function, or does AI assist humans in their job?)
This might teach us about organisational thinking with regard to the incorporation of AI beyond weightless calls for more awareness of AI-induced transformation.
-
- Jul 2023
-
arxiv.org arxiv.org
-
we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information
||JovanK|| ||JovanNj||
In my head, the idea of having model cards that accompany the models resembles having nutritional labels on food products we purchase. This self-reporting could then be regularised (through policies), and can be used by auditors to check if the model truly complies with the information presented in the cards.
Anthropic (a firm that produced AI assistant Claude) has already put out a model card: https://h.diplomacy.edu:8000/6l6xBC1SEe6I1uPxv4kRmw/www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf
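To make the 'nutritional label' analogy concrete, a model card can be thought of as a small structured record like the toy sketch below. The field names only loosely follow the paper's categories, they are not a standard schema, and all values are invented.
```python
# Toy sketch of a model card as a structured record; fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_use: str
    training_data: str
    evaluation_data: str
    # Benchmarked results disaggregated by the groups relevant to the application domain.
    disaggregated_metrics: dict[str, float] = field(default_factory=dict)
    ethical_considerations: str = ""

card = ModelCard(
    model_name="example-classifier-v1",
    intended_use="Internal triage of support tickets",
    out_of_scope_use="Any decision affecting individuals' legal rights",
    training_data="2019-2023 anonymised ticket archive",
    evaluation_data="Held-out 2024 tickets, stratified by language",
    disaggregated_metrics={"English": 0.93, "French": 0.87},
)
```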
-
-
www-files.anthropic.com www-files.anthropic.com
-
Model Card and Evaluations for Claude Models
||JovanK|| ||JovanNj||
Model card example from Anthropic's Claude 2 AI assistant.
-
-
link.springer.com link.springer.com
-
most AI systems are complex socio-technical systems in which control over the system is extensively distributed. In many cases, a multitude of different actors is involved in the purpose setting, data management and data preparation, model development, as well as deployment, use, and refinement of such systems. Therefore, determining sensible addressees for the respective obligations is all but trivial.
||JovanK||||JovanNj||
A comparison of how the EU's approach to assigning responsibilities in AI governance has changed. I think responsibility and accountability assignment is an important topic as well.
-
-
arxiv.org arxiv.org
-
5.1 Lack of methods and metrics to operationalise normative concepts
||Jovan||||JovanNj||
To check the compliance of LLMs with existing ethical principles or regulations, we first need to boil down what these principles even mean and how they could be operationalised in a technical sense.
I believe that policymakers would really benefit from talking directly to tech developers, not because the latter know more about ethical principles, but because policymakers cannot come up with sensible and implementable ethical requirements without a better understanding of how tech developers work. Questions like "what does it mean to build a model that exhibits 'fairness'?" cannot be answered without that understanding.
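One concrete illustration of that gap: "fairness" only becomes checkable once it is pinned to a specific metric, for example the demographic parity difference sketched below; choosing the metric is exactly the normative decision that policymakers and developers would have to make together. The data format is my assumption.
```python
# Hypothetical sketch: demographic parity difference as one possible operationalisation
# of "fairness". predictions are 0/1 model outputs, groups are protected-group labels.

def demographic_parity_difference(predictions: list[int], groups: list[str]) -> float:
    """Absolute gap between the highest and lowest positive-prediction rates across
    groups; 0.0 means parity under this particular definition."""
    by_group: dict[str, list[int]] = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Example: group "a" gets positives half the time, group "b" always -> gap of 0.5.
print(demographic_parity_difference([1, 0, 1, 1], ["a", "a", "b", "b"]))
```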
-
we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other.
||Jovan||||JovanNj||
Auditing LLMs is trickier than auditing other types of computer programs. The latter are easier to audit because the computing steps are all visible in the code, whereas LLMs involve complex training data and learned connections that are not easily visible or decipherable to auditors.
It is imperative for us to understand what an LLM does, so a new auditing approach is called for.
-
- Nov 2022
-
speechgen.humainism.ai speechgen.humainism.ai
-
dissagreement
disagreement.
-
- Oct 2022
-
www.diplomacy.edu www.diplomacy.edu
-
therell
||JovanK||||MarcoLotti||
Another typo: "there'll"
-
Were
||JovanK||||MarcoLotti||
I spotted a typo here! "We're"
-
-
footprint.diplomacy.edu footprint.diplomacy.edu
-
Diplomatic Footprint of International Geneva
The Diplomatic Footprint of International Geneva displays the visibility of 236 Geneva-based actors (link) when one searches Google for 267 policy, governance and diplomatic topics (link).
The Diplomatic Footprint of International Geneva is calculated and evaluated based on Google searches routinely conducted in 53 cities around the world.
From the analysis on the page, I deduced that some actors were not mentioned at all. I believe that, for the purpose of transparency and research integrity, they should be shown as well (as in, if an actor has 0 mentions, that should be visible to viewers, too). That's why I believe there should be a link to the full list of actors used for each analysis (the same goes for the tech footprints and global media houses).
-
Compare to the other actors:
How is the weighted score calculated?
-
Select the topic field of interest:
The legend of this graph is not clearly presented. For example, it isn't shown that the color bar on the right-hand side represents the points an actor receives. The bottom axis should be named "the number of mentions an actor receives" instead of "#mentions". What the size of the dots represents isn't clear either.
As the main metric used for this analysis is by the points, maybe the X axis could be marked by the points instead of the number of mentions? Vice versa, the colors could represent the number of mentions instead?
For the y axis: How is the average rank calculated? Is it points divided by number of mentions?
-
Select the search topic:
It might be clearer if the graph information (now embedded in the "i" button) could be directly shown somewhere on the page.
-
Digital Footprint
Where could this link be found on Diplo's website?
Also, in each Footprint, there are three ways to view the data: Ranking by topics, Ranking by actors, and Actor analysis. It might be better if on this homepage, there could be explanation texts about what each of these ways allows you to see (and why view from this way).
There should also be some explanatory text on what the Digital Footprint is in general (an analysis based on the searchability of actors / link analysis).
-
Diplomatic Footprint of Global Media
Title: "of Global Media" should be put to the second line
The Diplomatic Footprint of Global Media displays the visibility of 45 media houses (link) when one searches Google for 267 policy, governance and diplomatic topics (link).
The Diplomatic Footprint of global media is calculated and evaluated based on Google searches routinely conducted in 53 cities around the world.
-
Tech Footprint of Global Media
Title: "of Global Media" should be put to the second line
The Tech Footprint of Global Media displays the visibility of 45 media houses (link) when one searches Google for 269 tech topics (link).
The Tech Footprint of Global Media is calculated and evaluated based on Google searches routinely conducted in 12 cities around the world.
-
Tech Footprint of International Geneva
The Tech Footprint of International Geneva displays the visibility of 204 Geneva-based actors (link) when one searches Google for 269 tech topics (link).
The Tech Footprint of International Geneva is calculated and evaluated based on Google searches routinely conducted in 12 cities around the world.
-
- Sep 2022
-
documents-dds-ny.un.org documents-dds-ny.un.org
-
The right to privacy in the digital age
||MarcoLotti|| ||JovanK||
There was a report from OHCHR on "the right to privacy in the digital age" presented during the Human Rights Council on September 16th. I will look deeper into this if it is of interest to Diplo!
-
- Aug 2022
-
ec.europa.eu ec.europa.eu
-
Digital Economy and Society Index 2022: overall progress but digital skills, SMEs and 5G networks lag behind
||JovanK||
This is the Digital Economy and Society Index (DESI) mentioned in the EU Digital Decade policy programme. The EU has announced the results for 2022's DESI performance.
-
- Jul 2022
-
digital-strategy.ec.europa.eu digital-strategy.ec.europa.eu
-
Full text of the Declaration
-
-
www.lisbondeclaration.eu www.lisbondeclaration.eu
-
Full text of the Lisbon Declaration
-
-
digital-strategy.ec.europa.eu digital-strategy.ec.europa.eu
-
Full text of the Berlin Declaration
-
-
digital-strategy.ec.europa.eu digital-strategy.ec.europa.eu
-
Ministerial Declaration on eGovernment
Full text of the Tallinn Declaration
-
-
digital-strategy.ec.europa.eu digital-strategy.ec.europa.eu
-
Declaration on European Digital Rights and Principles
Full text of the Declaration
-
-
defence-industry-space.ec.europa.eu defence-industry-space.ec.europa.eu
-
The European Space Programme bolsters the EU Space policy
The European Space Programme
-
-
www.eeas.europa.eu www.eeas.europa.eu
-
Full text of the new Agenda
-
-
www.consilium.europa.eu www.consilium.europa.eu
-
16th European Union - African Union Summit:
Full text of the Joint Vision
-
-
www.consilium.europa.eu www.consilium.europa.eu
-
JAPAN-EU DIGITAL PARTNERSHIP
Full text of the digital partnership
-
-
www.eeas.europa.eu www.eeas.europa.eu
-
JOINT COMMUNICATION TO THE EUROPEAN PARLIAMENT AND THE COUNCIL
Text of the Joint Communication
-
-
data.consilium.europa.eu data.consilium.europa.eu
-
11.
||JovanK|| Unsure if all different partnerships in this point should be added to the resource page. Currently the highlighted parts are the ones included already.
-
the Joint Declaration by the EU and Indo-Pacific countries
-
he Eastern Partnership’s EU4Digital Initiative
-
the Secretary General’s Roadmap for Digital Cooperation
-
he Paris and Christchurch Calls
-
the UN’s Global Digital Compact
-
Actively promote universal human rights and fundamental freedoms, the rule of law and democratic principles in the digital space and advance a human-centric and human rights-based approach to digital technologies
The Commission proposed a Declaration on European Digital Rights and Principles in January 2022. The Declaration is waiting to be endorsed by both the Parliament and the Council. Would the EU pursue the same sets of rights and principles internationally and universally?
-
Mentioned in the Joint Communication to the European Parliament and the Council - Resilience, Deterrence and Defence: Building strong cybersecurity for the EU. JOIN(2017) 450. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=JOIN:2017:450:FIN
-
the Cyber Diplomacy Toolbox
-
-
presidence-francaise.consilium.europa.eu presidence-francaise.consilium.europa.eu
-
Joint Declaration
Full text of the Joint Declaration
-
-
ec.europa.eu ec.europa.eu
-
Today at the Digital Assembly in Sofia, Bulgaria
Press release of the launch
-
-
www.un.org www.un.org
-
Roadmap for Digital Cooperation
Full report
-
-
www.christchurchcall.com www.christchurchcall.com
-
The Christchurch Call to Action
Full text of the Christchurch Call
-
-
pariscall.international pariscall.international
-
Cyberspace now plays a crucial role
Text of the Paris Call
-
-
www.consilium.europa.eu www.consilium.europa.eu
-
THE COUNCIL OF THE EUROPEAN UNION,
Full text of the Council Conclusions
-
-
www.consilium.europa.eu www.consilium.europa.eu
-
posture
Text of the full Council Conclusions
-
-
eur-lex.europa.eu eur-lex.europa.eu
-
The EU's Cybersecurity Strategy for the Digital Decade
Text of the EU Cybersecurity Strategy
-
-
www.eeas.europa.eu www.eeas.europa.eu
-
A STRATEGIC COMPASS FOR SECURITY AND DEFENCE
Text of the EU Strategic Compass.
-
-
ec.europa.eu ec.europa.eu
-
The Global Gateway
Text of the Global Gateway Strategy.
-
-
ec.europa.eu ec.europa.eu
-
Global Gateway
Read more about the Global Gateway strategy
-
-
www.consilium.europa.eu www.consilium.europa.eu
-
Cyber attacks: EU ready to respond with a range of measures, including sanctions
The Council's decision on a joint EU diplomatic response.
-
-
www.un.org www.un.org