- Jul 2023
-
www.abc.net.au
-
Monash submitted that even if regulations were introduced to require AI tools to inject "watermarks" into their code to make AI detectable, other generative AI technologies could still be used to strip out those watermarks.
Monash is active in this context.
-
-
deliverypdf.ssrn.com
-
In this classroom, each student should not only have the opportunity to practice but also actively participate in discussions, creating an inclusive and deeply participatory learning environment.
-
Teaching others is a more powerful learning technique than re-reading or summarizing
-
Issues that keep teams from fulfilling their potential are known as process loss; these include social loafing (when individuals put forth less effort when working in a group) and groupthink (when group members’ desire for conformity can lead to bad decisions)
-
Example Prompt
Important: the basic prompt structure according to the authors, regardless of the AI implementation: Role and Goal (who the AI is and what it is supposed to do); Constraints (instructions to prevent unexpected AI actions); Step-by-step instructions; Chain-of-thought instruction; Personalization (optional: instruct the AI to ask students for additional information, such as their interests or level of prior knowledge); Pedagogy (optional: instructions on how the AI should behave in a more pedagogical way); Specific instructions (optional: e.g., present students with a summary, ask them for reflections).
AI Team suggestions for additional prompt steps/techniques: evaluation and validation (additional step: ask the AI to check whether its answers may be offensive, inappropriate, not aligned with Diplo stances, etc.); Retrieval-Augmented Generation (RAG) (additional step and technique: ask the AI to generate answers based on information retrieved from Diplo materials); inner monologue (technique: ask the AI to discuss a question with itself to reach a deeper analysis before sending the answer to students); prompt chaining (technique: ask the AI to break a complex question/task into multiple simple tasks).
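The prompt structure above can be sketched as a small assembly helper. This is a minimal illustration only; the section names and wording are my assumptions, not an actual Diplo prompt:

```python
# Minimal sketch: assembling a tutoring prompt from the sections listed above.
# All wording here is illustrative, not an actual Diplo prompt.

def build_prompt(role_and_goal, constraints, steps,
                 personalization=None, pedagogy=None, specifics=None):
    """Concatenate the prompt sections in the order the authors suggest,
    skipping any optional section that was not provided."""
    sections = [
        ("Role and goal", role_and_goal),
        ("Constraints", constraints),
        ("Step-by-step instructions", steps),
        ("Personalization", personalization),
        ("Pedagogy", pedagogy),
        ("Specific instructions", specifics),
    ]
    return "\n\n".join(f"{name}: {text}" for name, text in sections if text)

prompt = build_prompt(
    role_and_goal="You are a friendly tutor helping a student practice.",
    constraints="Do not give away full answers; do not discuss unrelated topics.",
    steps="First ask what the student knows, then explain, then quiz them.",
    personalization="Ask about the student's prior knowledge and interests.",
)
print(prompt)
```

The additional steps (evaluation/validation, RAG, inner monologue) would then be extra sections or, for prompt chaining, separate calls to such a builder for each sub-task.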
-
Large Language Models and Prompt Compatibility
Very important! Test it on ChatGPT-4 connected to the Diplo knowledge base via a custom-made retriever.
Also, not tested on Claude 2 and Perplexity. We do not have access to Claude 2 yet, and there is no API access to Perplexity; we could test Perplexity manually.
-
Students should report out their interactions with the AI and write a reflection about the guidance and help the AI provided and how they plan to incorporate (or not) the AI’s feedback to help improve their work.
Important: Expectations from students in the process of AI as Mentor implementation.
-
It can be very convincing and can have strong “viewpoints” about facts and theories that the models “believe” are correct.
A bit confusing; it seems to me that this is the same as the Bias risk, but applied to the teaching process.
-
While ChatGPT offers a privacy mode that claims to not use input for future AI training, the current state of privacy remains unclear for many models, and the legal implications are often also uncertain.
Data privacy issue with models
-
AI is trained on a vast amount of text, and then receives additional training from humans to create guardrails on LLM output.
Model training + RLHF (Reinforcement Learning from Human Feedback).
-
GPT-4 (accessible via ChatGPT Plus or Microsoft Bing in Creative Mode) is the only model that consistently executes on the given prompts.
Advantage of GPT4 compared to other models
-
This approach helps to sharpen their skills while having the AI serve as a supportive tool for their work, not a replacement. Although the AI’s output might be deemed “good enough,” students should hold themselves to a higher standard and be accountable for their AI use.
Also Diplo approach to AI. Very important and relevant to other Diplo tasks, such as writing updates
-
For any assignment, it’s not enough to cite the AI; students should include an Appendix noting what they used the AI for and where its output fits into their work
-
you should try it out a number of times for your specific topic or concept and see how it reacts
-
Because the AI can “get it wrong,” students need to be aware of those limits, and discussing this in class is one way to highlight its limitations
-
the tutor's value is not merely subject knowledge, but also their capacity to prompt the student to make an effort, pay close attention to the material, make sense of new concepts, and connect what they know with new knowledge
-
Tutoring involves small group or one-on-one sessions with a tutor focusing on skills building.
Q: What does tutoring involve?
-
That feedback should be considered critically, and students should be asked to articulate how and why the feedback they received is effective (or not).
-
while the feedback may be helpful it should be coupled with an in-class discussion and clear guidelines
-
feed-up, feedback, and feed-forward. Feed-up serves to clearly articulate the goals and expectations students are to meet. Feedback reflects students' current progress and pinpoints areas requiring further development; it provides actionable advice, helping students to achieve their goals. Feed-forward helps teachers plan and tweak their lessons based on student work
Components of feedback: feed-up, feedback, and feed-forward.
-
they are most common when asking for quotes, sources, citations, or other detailed information
When hallucination is most likely to appear in LLMs.
-
metacognitive prompts.
-
While AI has the potential to help students learn
I disagree with this statement.
-
In this scenario, the AI aids with personalized, readily available tutoring and coaching outside of the classroom, and the classroom transforms into a hub of systematic engagement
-
Such changes foster an active learning environment, inviting each student to engage with class concepts, articulate reasoning, and actively construct knowledge
-
AI as Simulators: Build Your Own
-
AI as Simulator: Creating Opportunities for Practice
||Dragana||||Andrej||||JovanNj||||sorina||||anjadjATdiplomacy.edu||||VladaR||
We can use AI to help us with simulation exercises.
-
applying that concept actively in a novel situation requires a level of automation – students have to “think on their feet” as they apply what they know in a new way
-
Your assessment should focus on how well the AI has explained and illustrated the concept, not on the quality of its creative output; consider how the AI has applied the concept and not whether the poem or dialogue is engaging or unique.
-
Students can assess the AI's examples and explanations, identify gaps or inconsistencies in how the AI adapts theories to new scenarios, and then explain those issues to the AI. The student’s assessment of the AI’s output and their suggestions for improvement of that output is a learning opportunity.
-
By asking students to explicitly name what the AI gets wrong (or right) and teach the AI the concept, the prompt challenges student understanding of a topic and questions their assumptions about the depth of their knowledge
-
This is because teaching involves “elaborative interrogation,” or explaining a fact or topic in detail; this requires deep processing of the material and invokes comparison mechanisms: to generate an explanation, students must compare concepts and consider differences and similarities between them.
-
Teaching others helps students learn
-
AI as Coach: Reflection Prompt
||JovanNj||||anjadjATdiplomacy.edu|| I have started building a learning model for our first course on AI in diplomacy. This is an excellent text that explains writing prompts for different purposes. What is not clear to me here is whether we should insert this entire prompt into ChatGPT at once or part by part.
-
Metacognition plays a pivotal role in learning, enabling students to digest, retain, and apply newfound knowledge.
-
This type of metacognition involves “reflection after action”
-
Metacognitive exercises can help students generalize and extract meaning from an experience or simulate future scenarios.
-
AI Tutor: Instructions for students
-
students get more opportunities to restate ideas in their own words, explain, think out loud, answer questions, and elaborate on responses than they would in a classroom, where time is limited and one-on-one instruction isn’t possible
-
Tutoring is inherently interactive and can involve a number of learning strategies, including: questioning (by both the tutor and the student); personalized explanations and feedback (the tutor can correct misunderstandings in real time and provide targeted advice based on the student's unique needs); collaborative problem-solving (tutors may work through problems together with students, not just show them the solution); and real-time adjustment (based on the student's responses and progress, a tutor may adjust the pace and difficulty level, making the learning process dynamic and responsive)
-
Tutoring, particularly high-dosage tutoring, has been shown to improve learning outcomes
-
In a paragraph, briefly discuss what you learned from using the tool. How well did it work? Did anything surprise you? What are some of your takeaways in working with the AI? What did you learn about your own work? What advice or suggestions did it give you? Was the advice helpful?
-
That reflection can also serve as a springboard for a class discussion that serves a dual purpose: a discussion about the topic or concept and about how to work with the AI
-
Getting feedback on their work from the AI is an opportunity to practice and improve, but that feedback should be considered critically, and students should be asked to articulate how and why the feedback they received is effective (or not).
-
Unlike educators in the classroom, it doesn’t know the students or understand the students’ context; while the feedback may be helpful, it should be coupled with an in-class discussion and clear guidelines
-
As one possible form of feedback
-
That feedback should be concrete and specific, straightforward, and balanced (tell the student what they are doing right and what they can do to improve)
-
Can also give you a sense of where students are in their learning journey
-
Students should report out their interactions with the AI and write a reflection about the guidance and help the AI provided and how they plan to incorporate (or not) the AI’s feedback to help improve their work.
-
While ongoing, tailored feedback is important, it is difficult and time-consuming to implement in a large class setting. The time and effort required to consistently provide personalized feedback to numerous students can be daunting.
-
When feedback is coupled with practice, it creates an environment that helps students learn
-
Researchers note the significance of incorporating feedback into the broader learning process, as opposed to providing it at the conclusion of a project, test, or assignment.
Importance of continuous feedback.
-
Effective feedback pinpoints gaps and errors, and offers explanations about what students should do to improve
-
Making mistakes can help students learn, particularly if those mistakes are followed by feedback tailored to the individual student
-
Large Language Models are prone to producing incorrect, but plausible facts, a phenomenon known as confabulation or hallucination.
AI risks
-
Prompts are simply the text given to the LLM in order to produce an output.
Q: What are prompts?
-
Our guidelines challenge students to remain the “human in the loop” and maintain that not only are students responsible for their own work, but they should actively oversee the AI’s output, check it against reliable sources, and complement any AI output with their unique perspectives and insights. Our aim is to encourage students to critically assess and interrogate AI outputs, rather than passively accept them.
Aim of the approach
-
increasing metacognition
-
to help students learn with AI and to help them learn about AI
Dual approach of relevance for Diplo's AI approach.
-
how and when to use AI as they instill best practices in AI-assisted learning.
-
These tools offer the potential for adaptive learning experiences tailored to individual students’ needs and abilities, as well as opportunities to increase learning through a variety of other pedagogical methods.
-
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI’s output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementation of AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.
||Andrej||||Dragana||||sorina||||Jovan||
this seems to be a paper worth consulting for our approach of using AI in the learning process
-
-
arxiv.org
-
we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information
||JovanK|| ||JovanNj||
In my head, the idea of having model cards that accompany the models resembles having nutritional labels on food products we purchase. This self-reporting could then be regularised (through policies), and can be used by auditors to check if the model truly complies with the information presented in the cards.
Anthropic (a firm that produced AI assistant Claude) has already put out a model card: https://h.diplomacy.edu:8000/6l6xBC1SEe6I1uPxv4kRmw/www-files.anthropic.com/production/images/Model-Card-Claude-2.pdf
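Following the nutrition-label analogy, a model card can be pictured as a small structured record. The field names below are loose assumptions based on the quoted proposal, not an official schema:

```python
# Illustrative sketch of a model card as a structured record.
# Field names are assumptions loosely following the quoted proposal,
# not an official schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_procedure: str = ""
    # Benchmarked results disaggregated by relevant groups,
    # e.g. {"sex: female": 0.91, "sex: male": 0.93}
    disaggregated_metrics: dict = field(default_factory=dict)

    def worst_group(self):
        """Return the group with the lowest reported score --
        the kind of gap an auditor might look at first."""
        return min(self.disaggregated_metrics,
                   key=self.disaggregated_metrics.get)

card = ModelCard(
    model_name="toy-classifier",
    intended_use="Illustration only",
    disaggregated_metrics={"group A": 0.93, "group B": 0.88},
)
print(card.worst_group())  # -> "group B"
```

An auditor could then compare such self-reported fields against independent measurements, in line with the regularisation idea above.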
-
-
www-files.anthropic.com
-
Model Card and Evaluations for Claude Models
||JovanK|| ||JovanNj||
Model card example from Anthropic's Claude 2 AI assistant.
-
-
link.springer.com
-
most AI systems are complex socio-technical systems in which control over the system is extensively distributed. In many cases, a multitude of different actors is involved in the purpose setting, data management and data preparation, model development, as well as deployment, use, and refinement of such systems. Therefore, determining sensible addressees for the respective obligations is all but trivial.
||JovanK||||JovanNj||
A comparison of how the EU's decision on how to assign responsibilities in AI governance changed. I think responsibility and accountability assignment is an important topic as well.
-
-
arxiv.org
-
5.1 Lack of methods and metrics to operationalise normative concepts
||Jovan||||JovanNj||
To check the compliance of LLMs with existing ethical principles or regulations, we first need to boil down what these principles even mean and how they could be operationalised in a technical sense.
I believe that policymakers would really benefit from directly talking to tech developers not because the latter knows more about ethical principles, but because policymakers cannot come up with sensible and implementable ethical requirements without having a better understanding of how tech developers work. Questions like "what does it mean to build a model that exhibits 'fairness?'" cannot be answered unless we know how tech developers work.
-
we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate LLMs), model audits (of LLMs after pre-training but prior to their release), and application audits (of applications based on LLMs) complement and inform each other.
||Jovan||||JovanNj||
Auditing methods needed for LLMs are trickier than traditional auditing methods for other types of computer programs. The latter's auditing methods are easier because the computing steps are all visible in the code, but the former shows great complexity in training data and establishes connections that are not easily visible or decipherable to auditors.
It is imperative for us to understand what an LLM does with auditing, so a new approach is called for.
-
-
-
Figure 3: Components, types, and subfields of AI based on Regona et al. (2022).
||JovanNj||||sorina|| A very nice summary of AI (visual and content).
-
Educators are also aware of new risks
List of worries for educators
-
AI can be defined as “automation based on associations.”
It is a very interesting definition of AI!!! ||JovanNj||
-
We will consider “educational technology” (edtech) to include both (a) technologies specifically designed for educational use, as well as (b) general technologies that are widely used in educational settings
Q: What is educational technology?
-
-
www.aljazeera.com
-
created a safe corridor for Ukraine’s grain exports from three Ukrainian ports – Odesa, Yuzhny and Chornomorsk.
Could have more theory about this?
-
-
www.newyorker.com
-
A company that doesn’t like the rules could threaten to pack up and leave. Then what?
-
“Keeping the details of AI technologies secret is likely to thwart good-faith researchers trying to protect the public interest, as well as competition, and open science,”
-
agreed not to share: the parameters that are known as the “weights” of their algorithms.
-
a detailed metadata trail that reflects the history of a given image.
-
It is also not clear who those experts will be, how they will be chosen, whether the same experts will be tasked with examining all the systems, and by what measure they will determine risk.
||sorina|| Good point
-
He outlined a plan to bring lawmakers up to speed on emerging technology by convening at least nine panels of experts to give them a crash course in A.I. and help them craft informed legislation.
||sorina||||StephanieBP||||VladaR||||Pavlina|| Our training with the US embassy is timely. A similar initiative was proposed by Senate Majority Leader Chuck Schumer for members of the Senate.
Sorina, you may share this parallel with the US Embassy.
-
-
crfm.stanford.edu
-
foundation model providers generally do not comply with draft requirements to describe the use of copyrighted training data, the hardware used and emissions produced in training, and how they evaluate and test models.
-
-
-
I support an Anti-AI movement in the sense that we should have some opposition questioning what's going on.
||sorina|| Do you agree with this reference?
-
-
miniszterelnok.hu
-
national unification programme for 2030
-
Bargaining is possible in relation to issues linked to tactical time – or even strategic time; but never on issues that belong to historical time.
-
by rejecting Christianity, we have in fact become hedonistic pagans.
||sorina|| hedonistic pagans is a new term.
-
thought that the rejection of religion and Christianity would be followed by the emergence of an ideal, enlightened community based on an understanding of the good and the common good, living a free and superior life according to recognised, sociologically based societal truths.
-
with spiritual foundations in mind, and digging a shovelful deeper, it is also worth saying that at the base of the Hungarian Constitution and the intellectual foundations of the new era there lies an anthropological insight
-
If you read the constitutions of other European countries, which are liberal constitutions, you will see that at the heart of them is the “I”. If you read the Hungarian Constitution, you will see that it is centred on the “we”.
-
the federalists are carrying out an attempt to oust us; they have openly said that they wanted a change of government in Hungary.
-
These could only be introduced in the European Union because the British left and we V4 members could not prevent them – and indeed the V4 was attacked by the federalists. We can all see the result.
-
I am not even going to talk about clever little European tricks such as the sudden doubling – in a single year – of the volume of goods exported from Germany to Kazakhstan.
-
either to decouple, or to participate in international competition. As they say in Brussels, “de-risking or connectivity”.
-
The amount paid for the European Union’s imports of gas and oil – the two together – was 300 billion euros before the Russian war, and 653 billion euros last year.
-
They call this seclusion “decoupling” – or, more subtly, “de-risking”, which is a form of risk reduction.
-
by the size of their economies, by their national GDPs, we see that in the rankings for 2030 Britain, Italy and France will have dropped out of the top ten where they still are today; and Germany – now fourth – will have slid down to tenth place.
-
“native genocide”, which I think means the extermination of indigenous peoples; slavery and the slave trade; and “reparatory justice”, meaning reparations for injustices.
-
The EU has about 400 million people; and if I add in the rest of the Western world, that is another 400 million. So this amounts to 800 million people, surrounded by another seven billion. And the European Union has an accurate view of itself: it is a rich union, but a weak one.
-
the settling of the new equilibrium will not happen overnight – or even from one month to the next
-
the “Thucydides Trap”,
-
Experience shows that the dominant great power tends to see itself as more benevolent and better-intentioned than it really is, and attributes malice to its challenger more often than is – or should be – justified
-
The bad news is that of the sixteen instances thus identified, twelve have ended in war, and only four were peacefully resolved
-
And it can neutralise the chief US weapon, the chief US weapon of power, which we call “universal values”
-
“Ending the century of humiliation” – or, to paraphrase the Americans, “Make China Great Again”.
-
So China has quite simply created its own: we see the BRICS and the One Belt One Road Initiative; and we also see the Asian Infrastructure Investment Bank, the development resources of which are several times greater than the development resources of all the Western countries.
-
there are no eternal winners and no eternal losers
-
In 2010 the US and the European Union contributed 22 – 23 per cent of total world production; today the US contributes 25 per cent and the European Union 17 per cent. In other words, the US has successfully repelled the European Union’s attempt to move up alongside it – or even ahead of it.
||sorina|| Very interesting statistics.
-
we also had a plan, which we expressed as the need to create a great free trade zone stretching from Lisbon to Vladivostok.
-
after its own civil war, from the 1870s onwards the United States grew to be the preeminent country, and its inalienable right to world economic supremacy is part of its national identity, and a kind of article of faith.
-
What has happened is that China has made the roughly three-hundred-year journey from the Western industrial revolution to the global information revolution in just thirty years.
-
But it has turned out that in fact this issue, the liberation of China, belongs to the historical timeframe; because as a result of that liberation, the United States – and all of us – are now facing a greater force than the one we wanted to defeat.
-
Back then the US decided to free China from its isolation, obviously to make it easier to deal with the Russians; and so it put that issue in the strategic timeframe.
-
tactical time, strategic time, and historical time.
Three historical times inspired by Braudel's thinking.
-
you have to simultaneously visualise three timeframes
-
then today “Western values” mean three things: migration, LGBTQ, and war.
||sorina|| There is an interesting discussion of Orbán in Romania. I do not understand Hungarian-Romanian dynamics well, which you will know better.
-
because the Ministry of Foreign Affairs of Romania – which I understand to belong more to the presidential branch of power – has come to my aid and sent me a démarche.
-
-
www.reddit.com
-
Some key takeaways include:
Export control takeaways
-
"They" being the multiple competing entities working in the field right now, most of whom are trying to make the barrier to entry in this field millions of dollars higher than it otherwise would be through mandatory paid consultations with "Safety experts" that could hypothetically double as corporate espionage assets to steal your secrets before you publish them.
||sorina|| A possible explanation of AI 'noise'
-
This form of analysis is antithetical to Wittgenstein's approach, particularly in his later work, where he stressed the importance of seeing things as interconnected wholes rather than reducing them to their constituent parts.
-
Wittgenstein introduced the idea of "language games," arguing that the meaning of words is determined by their usage in specific forms of life, or social practices. This stands somewhat in opposition to the detailed, precise and generalized logical framework outlined in the provided ontology, which assigns fixed roles and properties to human brains, AI models, and animal consciousness across all contexts.
-
There are a couple of reasons for this, and they predominantly center around Wittgenstein's ideas of language games, private language arguments, and his opposition to reductionism.
-
Heidegger's Dasein refers to the unique manner in which humans exist, embodying a subjective, in-the-world mode of being.
-
This restriction will stop these people from proliferating and creating fully open source LLMs based on Llama 2 outputs. So, say goodbye to improvements in some of the open source models.
||JovanNj|| Will Llama 2 be a new language model?
-
-
www.gov.uk
-
-
-
www.whitehouse.gov
-
significant model public releases within scope
! Also, what is 'significant'?
-
introduced after the watermarking system is developed
!
-
Treat unreleased AI model weights for models in scope as core intellectual property for their business,
-
establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety,
-
Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope
So applying to what comes next...
-
Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (
Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||
-
These voluntary commitments to remain in effect until regulations covering substantially the same issues come into force
-
designed to advance a generative AI legal and policy regime
-
Realizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement.
-
only a first step in developing and enforcing binding obligations to ensure safety, security, and trust
commitments to be followed by binding obligations
-
-
www.whitehouse.gov
-
the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI.
-
robust technical mechanisms to ensure that users know when content is AI generated
-
will be carried out in part by independent experts
-
immediately
-
the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI.
||sorina|| 3 key processes for the USA on AI.
-
It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
-
model weights.
||JovanNj|| We need to focus on model weights issues.
-
safety, security, and trust
-
President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.
-
-
www.sk.rs
-
about ten years of experimentation
Test SK
-
-
arxiv.org
-
The Collaborative could acquire or develop and then distribute AI systems to address these gaps, pooling resources from member states and international development programs, working with frontier AI labs to provide appropriate technology, and partnering with local businesses, NGOs, and beneficiary governments to better understand technological needs and overcome barriers to use
What would be the relation with big tech companies?
And 'acquire' how?
-
Acquires and distributes AI systems
-
institutions taking on the role of several of the models above
-
Addressed by offering safety researchers within AI labs dual appointments or advisory roles in the Project,
-
it diverts safety research away from the sites of frontier AI development.
-
such an effort would likely be spearheaded by frontier risk-conscious actors like the US and UK governments, AGI labs, and civil society groups
-
Accelerate AI safety research by increasing its scale, resourcing, and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general-purpose capabilities
-
Exceptional leaders and governance structures
-
an institution with significant compute, engineering capacity, and access to models (obtained via agreements with leading AI developers), and would recruit the world’s leading experts
-
An AI Safety Project need not be, and should be organized to benefit from the AI safety expertise in civil society and the private sector
And funded how?
-
like ITER and CERN.
-
conduct technical AI safety research at an ambitious scale
-
To exclude from participation states who are likely to want to use AI technology in non-peaceful ways, or make participation in a governance regime the precondition for membership
-
consistently implement the necessary controls to manage frontier systems
-
The Collaborative to have a clear mandate and purpose
-
diffusing dangerous AI technologies around the world, if the most powerful AI systems are general purpose, dual-use, and proliferate easily.
-
would need to significantly promote access to the benefits of advanced AI (objective 1), or put control of cutting-edge AI technology in the hands of a broad coalition (objective 2)
-
The resources required to overcome these obstacles are likely to be substantial,
Indeed.
-
Understanding the needs of member countries, building absorptive capacity through education and infrastructure, and supporting the development of a local commercial ecosystem to make use of the technology
-
being deployed for “protective” purposes
-
increase global resilience to misused or misaligned AI systems
-
existence of a technologically empowered neutral coalition may also mitigate the destabilizing effects of an AI race between states
feasible?
-
access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).
But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?
-
reduce geopolitical instability amidst fierce AI competition among states
hmm
-
ensure the benefits of cutting-edge AI reach groups that are otherwise underserved by AI development
-
legitimate international access to advanced AI
-
a Frontier AI Collaborative could take the form of an international private-public partnership that leverages existing technology and capacity in industry, for example by contracting access to or funding innovation in appropriate AI technology from frontier AI developers.
-
Aligned countries may seek to form governance clubs, as they have in other domains. This facilitates decision-making, but may make it harder to enlist other countries later in the process
The Budapest Convention is a case in point.
-
its effectiveness will depend on its membership, governance and standard-setting processes
-
A Governance Organization should focus primarily on advanced AI systems that pose significant global risks, but it will be difficult in practice to decide on the nature and sophistication of AI tools that should be broadly available and uncontrolled versus the set of systems that should be subject to national or international governance
-
Automated (even AI-enabled) monitoring
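At its simplest, the automated monitoring gestured at here could be a threshold check over self-reported data. A minimal sketch, assuming a hypothetical compute reporting threshold and record format (neither comes from the paper):

```python
# Hypothetical sketch of automated compliance monitoring: flag
# self-reported training runs whose compute meets or exceeds a
# governance reporting threshold. The threshold value and record
# fields are illustrative assumptions only.

REPORTING_THRESHOLD_FLOP = 1e26  # assumed threshold for illustration

def flag_runs(reported_runs):
    """Return the runs at or above the compute reporting threshold."""
    return [run for run in reported_runs
            if run["flop"] >= REPORTING_THRESHOLD_FLOP]

runs = [
    {"lab": "A", "flop": 3e25},
    {"lab": "B", "flop": 2e26},
]
print(flag_runs(runs))  # only lab B's run is flagged
```

Real monitoring would of course also need verification (e.g. data-center inspections, as mentioned below in the text); a pure self-reporting check only catches honest reporters.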
-
some states may be especially reluctant to join due to fear of clandestine noncompliance by other states
-
Other AI-specific incentives for participation include conditioning access to AI technology (possibly from a Frontier AI Collaborative) or computing resources on participation. States might also adopt import restrictions on AI from countries that are not certified by a Governance Organization
Surely interesting. Though in the current geopolitical context, it is difficult to imagine how this would work.
-
while urgent risks may need to be addressed at first by smallergroups of frontier AI states, or aligned states with relevant expertise
Geopolitics comes into play again? Surely 'frontier AI states' is a different set from 'aligned states'.
-
The impact of a Governance Organization depends on states adoptingits standards and/or agreeing to monitoring.
-
standard setting (especially in an international and multistakeholder context) tends to be a slow process
-
detection and inspections of large data centers
-
self-reporting of compliance with international standards
-
Where states have incentives to undercut each other's regulatory commitments, international institutions may be needed to support and incentivize best practices. That may require monitoring standards compliance.
-
therefore enable greater access to advanced AI
Implementing international standards enables greater access to tech?
-
international standard setting would reduce cross-border frictions due to differing domestic regulatory regimes
Only if those standards (whatever we mean by them...) are taken up broadly.
-
Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance
Assuming those jurisdictions won't be able to develop their own powerful AI tech?
-
The International Telecommunication Union (ITU)
In what sense?
-
Advanced AI Governance Organization.
-
intergovernmental or multi-stakeholder
-
as less institutionalized and politically authoritative scientific advisory panels on advanced AI
So an advisory panel stands a better chance of reaching that consensus than a commission?
-
Representation may trade off against a Commission's ability to overcome scientific challenges and generate meaningful consensus
-
broad geographic representation in the main decision-making bodies, and a predominance of scientific experts in working groups
-
The scientific understanding of the impacts of AI should ideally be seen as a universal good and not be politicized
-
foundational "Conceptual Framework"
-
Commission might undertake activities that draw and facilitate greater scientific attention, such as organizing conferences and workshops and publishing research agendas
:))
-
To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape
Sounds good; with the devil being in the implementation. E.g., whose standards would determine what counts as a 'highly competent' AI expert?
-
internationally representative group of experts
-
there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI
And so what makes us think that these disagreements would evolve into a consensus if a commission is created?
-
International consensus on the opportunities and risks from advanced AI
What does 'international consensus' mean?
-
the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI's potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed
What a Commission on Frontier AI would do.
Silly question: Why 'frontier AI'?
-
intergovernmental body
-
dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China
So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing or developing the tech?
-
norms and standards
Are norms and standards rules?