1. Feb 2024
    1. as well as clarifying the principles and norms under which various organizations should operate.

      Does it mean that it will instruct the ITU and UNESCO about principles and norms?

    2. A ‘distributed-CERN’ reimagined for AI, networked across diverse states and regions, could expand opportunities for greater involvement

      CERN is already distributed.

    1. Chaos theory is about systems where small changes to the initial conditions result in extremely large changes in the results.

      Problem with predictions

    1. Sometimes referred to as data-centre alley, northern Virginia is home to just over three square kilometres of data centres, most of which are within 75 square kilometres in Loudoun County.

      ||sorina|| Here is a relevant paragraph for you.

    2. Because AI is based on matrix maths, it involves large blocks of computation being done at once, which means a lot of transistors have to change states very quickly. That draws a lot more power than normal computer tasks, which flip far fewer transistors at once for a typical calculation.

      Why does AI take more energy? (A rough illustration follows below.)
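
      A rough back-of-envelope sketch of the excerpt above (my own illustration, with hypothetical layer sizes, not figures from the article): a single matrix multiplication of the kind AI workloads run constantly performs billions of multiply-accumulate operations at once, while a typical scalar task touches each value only once.

      ```python
      # Back-of-envelope comparison (illustrative sizes, not from the article):
      # one dense-layer matrix multiplication vs. a simple pass over the same data.

      def matmul_macs(m: int, k: int, n: int) -> int:
          """Multiply-accumulate operations for an (m x k) @ (k x n) product."""
          return m * k * n

      tokens, hidden = 512, 4096                  # hypothetical transformer-style layer
      macs = matmul_macs(tokens, hidden, hidden)  # work done by one matrix multiply
      scalar_ops = tokens * hidden                # touching each value once

      print(f"matrix multiply: {macs:,} multiply-accumulates")  # ~8.6 billion
      print(f"simple pass:     {scalar_ops:,} operations")      # ~2.1 million
      print(f"ratio:           {macs // scalar_ops:,}x")
      ```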

    1. The GDPR protects the personal data of individuals in the EU. Personal data is defined as: “any information relating to an identified or identifiable natural person (‘data subject’).”

      Milos testing

    1. However, it warned the cables at some points run at a depth of 100 metres, reducing the need for hi-tech submarines. In 2013, three divers were arrested in Egypt for attempting to cut an undersea cable near the port of Alexandria that provides much of the internet capacity between Europe and Egypt.

      This is a problem with cables ||sorina||

    1. for the first time

      ||AndrijanaG||||sorina|| Was it adopted for the first time?

    2. its own Global AI Governance Initiative in October, calling on major powers to take “a prudent and responsible attitude” on military use of artificial intelligence technologies.

      Do we have t

    3. the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy a year ago.

      ||sorina||||VladaR||||MariliaM|| Do we have anything on this declaration?

    1. A paper by Rebecca Johnson, a researcher at the University of Sydney, published in 2022, found that Chat GPT -3 gave replies on topics such as gun control and refugee policy that aligned most with the values displayed by Americans in the World Values Survey, a global questionnaire of public opinion.

      To be checked

    1. Firms in countries that have not signed on to the American export-control regime, like Singapore, can buy chips and send them on to Chinese entities without the knowledge of the American firms or the Department of Commerce.

      Singapore does not join American export restrictions to China.

    1. Publishers around the world are all too aware of this shift; over half of those recently surveyed by the Reuters Institute for the Study of Journalism said that they plan to devote more effort into putting stories on TikTok this year.

      @jovan Major shift

    2. “It has to be short, it has to be fast,”

      These are useful guidelines for DW ||Jovan||

    1. The emphasis on assessing academic performance by how many papers a researcher can publish, for example, acts as a powerful incentive for fraud at worst, and for gaming the system at best.

      This could call into question the whole academic system and its push to publish more.

    2. Checking models against reality is what science is supposed to be about, after all.

      Good analogy with the role of science in human society.

    3. That can cause “model collapse”.

      Model collapse can be triggered by using AI-generated data to develop the model (see the toy sketch below). ||JovanNj||||anjadjATdiplomacy.edu||||sorina||
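
      A toy sketch of the model-collapse idea (my own illustration, not the experiment from the article): repeatedly fit a simple Gaussian model to data sampled from the previous fit; with small samples, the estimated spread tends to drift downward, so later generations lose the tails of the original distribution.

      ```python
      # Toy "model collapse": each generation is trained only on data
      # generated by the previous generation's model.
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(loc=0.0, scale=1.0, size=50)   # small "real" dataset

      for generation in range(41):
          mu, sigma = data.mean(), data.std()          # "train" a tiny Gaussian model
          if generation % 10 == 0:
              print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
          data = rng.normal(mu, sigma, size=50)        # next training set is model-generated
      ```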

    1. the company dominates the market for AI accelerator chips, accounting for 86% of such components sold globally;

      Nvidia accounts for 86% of AI accelerator chips sold globally.

    1. The concept intentionally obfuscates (clouds, one might say) the user’s ability to see the existence of hardware.

      Cloud i

    1. Taylor Swift, the latest high-profile victim of a deepfake, might disagree.

      H

    1. let’s get rid of bloody nationalism, let’s eradicate corruption, let’s stop the brain drain, let’s provide a decent life and, above all, a future to all our citizens, let’s take care of nature and let’s live in peace. The rest – EU, UN etc. – may come sooner or it may come later: what matters most is that we take better care of people. 

      Very good point! But elites and mafia cannot make money in such a scenario. It is easier to divide people and take money and resources from them.

    2. Bioregionalism, thus the rivers (White Drin, Lepenac, Ibar) and their plains and basins on all sides, could function as an excellent ‘negotiator’ and connect efficiently Kosovo to Albania, Macedonia and Serbia without mobilizing the narrative of ‘Great Serbia’, ‘Great Albania’, etc.

      Great proposal. Unfortunately, it is utopian. But it is worth keeping alive.

    3. The reality check is mind-blowing: the end of SFR Yugoslavia (1991-1992), the end of FR Yugoslavia (Serbia and Montenegro) in 2006, the loss of Kosovo (2008) and, let’s not forget, a ‘brain drain’ of unprecedent magnitude. Except for the potential breakup of the United Kingdom (Northern Ireland and Scotland), no other European country has faced such devastating consequences of its policy failures.

      Agree!

    4. the EU is quite supportive of Serbia... as it needs a strong player in the region.

      That is open for debate. I agree that it is the case, but I think it is more 'tactical' than 'strategic' reasoning. The EU does not need a conflict in the Balkans and wants to close the 'Kosovo chapter'. It does not see Serbia as a long-term partner.

    5. Both Vučić and Bosnian Serb leader Milorad Dodik have invested so much in pro-Russian sentiment that they are unable to wean themselves off it.

      True!

    6. What is noteworthy is that Serbia is increasingly being mentioned in the same way.

      I am not sure that is the case. Serbia is mentioned more as part of the 'Western Balkans'.

    7. Consequently, the natural European anchorage for the post-Yugoslav states is Central Europe – naturally, I am viewing this in the framework of the European Union.

      Interesting 'shift of geography'.

    8. the Western powers are not in control of this new conflictual world where power has become powerless, while weakness has given rise to power to the point of destabilizing the agenda of the strongest.

      There is a good point here.

    9. Politics is driven less by state initiatives and more by social dynamics –

      I am not sure that is the case. It could be true at a deeper level, but we are facing exactly the opposite: power politics everywhere.

  2. Jan 2024
    1. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv, attempts to detect and remove such two-faced behaviour are often useless — and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse “was something that was particularly surprising to us … and potentially scary”, says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California.

      Kind of scary indeed. It takes us back to the question of trust in developers.

    1. four-fifths of grade six maths teachers in South Africa did not understand the concepts they were supposed to teach.

      What about training them?

    2. A chatbot can give undivided attention to each child, at any time of day, and never gets tired

      Makes sense.

    3. The third reason is that developing countries have gaping shortages of skilled workers: there are nowhere near enough teachers, doctors, engineers or managers.

      Should we use AI, or invest in educating people? I can understand using AI in Europe, with its labour shortages, but in Africa it does not make sense at all ||sorina||

    4. There are three main reasons for optimism.

      These arguments are not related only to Africa; they are general. Can they apply to Africa? ||sorina||

    1. Some developers in India are already taking Western models and fine-tuning them with local data to provide a whizzy language-translation service, avoiding the heavy capital costs of model-building.

      b

    2. the phone in their pockets.

      Ok

    3. Pupils in Kenya will soon be asking a chatbot questions about their homework, and the chatbot will be tweaking and improving its lessons in response.

      Is it g

    4. The imf says that a fifth to a quarter of workers there are most exposed to replacement, compared with a third in rich countries.

      Interesting.

    5. Because emerging countries have fewer white-collar workers, the disruption and the gain to existing firms may be smaller than in the West.

      Possibly true.

    6. Most exciting of all, it could help income levels catch up with those in the rich world.

      How?

    7. As it spreads, the technology could raise productivity and shrink gaps in human capital faster than many before it.

      Interesting.

    8. New technology brings with it both the sweet hope of greater prosperity and the cruel fear of missing out.

      Opportunity/threat binary.

    1. The boss has to show their face to employees regularly, and it cannot be the face of someone who looks like they haven’t slept for two weeks. They have to glad-hand the board, meet investors, attend endless networking events and make time for actual work. It is exhausting to contemplate, let alone

      ||sorina|| Why it is not good to be the boss.

    1. with national

      What about 'international standardisation organisations'?

    2. as the WTO, dispute resolution can also be facilitated through global forums.

      Dispute resolution failed in the WTO. Again, we have governments as actors.

    3. 72. Reporting frameworks can be inspired by existing practices of the IAEA for mutual reassurance on nuclear safety and nuclear security, as well as the WHO on disease surveillance.

      In both examples, reporting is done by member states. How realistic is it to use this in the case of AI? Is the analogy useful?

    4. a techno-prudential model, akin to the macro-prudential framework used to increase resilience in central banking

      How useful is this analogy between AI, which is very diverse, and the rather centralised system of central banking?

    5. The possibility of rogue AI escaping control and posing still larger risks cannot be ruled out.

      Extinction risk sentence.

    6. A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent

      New structures/mechanisms

    7. The borderless nature of AI tools

      What is this 'borderless' nature? AI is always created and used within specific jurisdictions.

    8. could also be coordinated through a body that harmonises policies, builds common understandings, surfaces best practices, supports implementation and promotes peer-to-peer learning

      new body

    9. A consensus on the direction and pace of AI technologies

      What type of consensus is expected?

    10. 45. Rather than proposing any single model for AI governance at this stage, the preliminary recommendations offered in this interim report focus on the principles that should guide the formation of new global governance institutions for AI and the broad functions such institutions would need to perform.

      Slight inherent contradiction.

    11. the possibility that it could pose an existential threat to humanity (even if there are debates over whether and how to assess such threats).

      Mentioning existential threat

    12. 28. Still others relate to human-machine interaction. At the individual level, this includes excessive trust in AI systems (automation bias) and potential de-skilling over time. At the societal level, it encompasses the impact on labour markets if large sections of the workforce are displaced, or on creativity if intellectual property rights are not protected. Societal shifts in the way we relate to each other as humans as more interactions are mediated by AI cannot also be ruled out. These may have unpredictable consequences for family life and for physical and emotional well-being.

      Societal risks

    13. New and existing institutions could form nodes in a network of governance structures.

      This sounds nice, but nobody knows how it will work in reality, as many are for 'coordination' but few like to be 'coordinated'.

    14. New horizontal coordination and supervisory functions are required and they should be entrusted to a new organizational structure.

      There is a call for a new organisational structure. It is interesting that the text uses 'should' instead of 'may' or 'might', which are typically used in policy documents.

    15. What should be the threshold or the trigger for identifying red lines (analogous, perhaps, to the ban on human cloning in biomedical research)? How would any such red line be policed and enforced?

      extinction risk

    16. shared and differentiated responsibilities

      Key concept

    17. At the global level, international organizations, governments, and private sector would bear primary responsibility for these functions. Civil society, including academia and independent scientists, would play key roles in building evidence for policy, assessing impact, and holding key actors to account during implementation.

      Different functions

    18. (a) build scientific consensus on risks, impact, and policy (IPCC); (b) establish global standards (ICAO, ITU, IMO), iterate and adapt them; (c) provide capacity building, mutual assurance and monitoring (IAEA, ICAO); (d) network and pool research resources (CERN); (e) engage diverse stakeholders (ILO, ICANN); (f) facilitate commercial flows and address systemic risks (SWIFT, FATF, FSB).

      inspiration for governance mechanisms.

    19. 36. We also need to meet member states where they are and assist them with what they need in their own contexts given their specific constraints in terms of participation in and adherence to global AI governance, rather than telling them where they should be and what they should do based on a context to which they cannot relate.

      Novelty: avoid lecturing

    20. 25. We examined AI risks firstly from the perspective of technical characteristics of AI. Then we looked at risks through the lens of inappropriate use, including dual-use, and broader considerations of human-machine interaction. Finally, we looked at risks from the perspective of vulnerability.

      Novelty: Comprehensive approach to AI risks combining technical characteristics, inappropriate use and perspective of vulnerability.

    21. Repositories of AI models that can be adapted to different contexts could be the equivalent of generic medicines to expand access, in ways that do not promote AI concentration or consolidation.

      Novelty

    22. Open-Source and sharing of data and models could play an important role in spreading the benefits of AI and developing beneficial data and AI value chains across borders.

      Open source

    23. to develop local AI ecosystems, the ability to train local models on local data, as well as fine-tuning models developed elsewhere to suit local circumstances and purposes.

      Novelty

    24. AI presents distinctly global challenges and opportunities that the UN is uniquely positioned to address, turning a patchwork of evolving initiatives into a coherent, interoperable whole, grounded in universal values agreed by its member states, adaptable across contexts.

      An important 'coordinating function' of the UN

    1. including by scaling up the use of open science, affordable and open-source technology, research and development

      open science and open source

    2. We reaffirm that the creation, development and diffusion of innovations and new technologies and associated know-how, including the transfer of technology on mutually agreed terms, are powerful drivers of economic growth and sustainable development. We reiterate the need to accelerate the transfer of environmentally sound technologies to developing countries on favourable terms, including on concessional and preferential terms, as mutually agreed, and we note the importance of facilitating access to and sharing accessible and assistive technologies

      details for the part on tech transfers in the chapeau

    3. commit to exploring measures to address the risks involved in biotechnology and human enhancement technologies applied to the military domain

      Might have been good not to limit this to the applications in the military domain

    4. developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process

      also interesting

    5. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

    6. commit to concluding without delay a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems

      LAWS - another proposed strong commitment

    7. international norms, rules and principles to address threats to space systems and, on that basis, launch negotiations on a treaty to ensure peace, security and the prevention of an arms race in outer space

      interesting commitment to working on a treaty for outer space security. curious if it will stay in the final doc

    8. We acknowledge that the accelerating pace of technological change necessitates ongoing assessment and holistic understanding of new and emerging developments in science and technology impacting international peace and security, including through misuse by non-State actors, including for terrorism

      tech, peace and security

    9. including through the transfer of technology on mutually agreed terms to help close the digital and innovation divide.

      Seems a win for developing countries. Tech transfers were highlighted quite a lot during the SDG summit last year. Let's see the reactions...

    1. Their family had spent weeks agonizing over whether to flee as Israeli troops moved into Gaza City’s al-Rimal neighborhood, tanks rolling past their front door and a terrifying cacophony of bombs, quadcopter drones and gunfire thundering all around them.

      @sorina This paragraph is critical for understanding the legal aspects of the Palestine crisis.

    1. These models encapsulate a wealth of human knowledge, linguistic patterns, and cultural nuances.

      Elements for AI training

    1. The founders' (or France's?) vision

      The French version of open source AI.

    1. Instagram walls or experiences attracted visitors to a locale and kept them engaged by giving them an activity to perform with their phones, like a restaurant providing colouring books for kids.

      Critical aspect

    2. In non-places, “people are always, and never, at home”, Augé wrote.

      Key statement.

    1. The group has its main base on the Israel-Lebanon border and has been exchanging fire with Israel since the Gaza war began. The movement is close to Hamas in Gaza.

      ||VladaR|| Vlada, this is a test. It relates to our conversation yesterday about encryption.

    1. We have Arati Prabhakar, who is the Director of the White House Office of Science and Technology.

      Important sentence

    1. Tencent launched PhotoMaker,

      to be tested.

    1. written communication

      Not just written; this has to be preceded by a clear idea in your mind of what it is that you want/need. So the first step is to articulate your need/thought clearly, which, again, comes as a result of different factors like critical thinking, the ability to understand the context, and knowing the audience you are addressing/want to reach, all in order to best phrase the message you want to send. Once these criteria are met, the 'written communication' (or oral, for that matter) comes 'naturally'.

    1. Product and engineering teams need to work closely together

      ||sorina||||JovanNj|| This is a technical explanation of 'cognitive proximity'.

    2. Fine-tuning is when you slightly adjust the model parameters of a pre-trained AI model using example data. RAG is when you augment a generative AI model with traditional information retrieval to give the model access to private data.

      Two key techniques at Diplo (see the sketch after this list). ||JovanNj||||sorina||

    3. They make it possible for non-technical experts to directly shape AI products through prompt engineering.

      ||JovanNj||||Jovan||
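
      A minimal sketch of the RAG step described in item 2 above (placeholder `embed` function and made-up documents; this is an assumption-laden illustration, not Diplo's actual pipeline): retrieve the most relevant private documents and pass them to the model as context. Fine-tuning, by contrast, would adjust the model's weights using example data.

      ```python
      # Minimal RAG sketch (placeholders; a real system would use a proper
      # embedding model and send the final prompt to an LLM).
      import numpy as np

      def embed(text: str) -> np.ndarray:
          """Placeholder embedding: consistent within a run, random per string."""
          rng = np.random.default_rng(abs(hash(text)) % (2**32))
          return rng.normal(size=128)

      def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
          """Rank documents by cosine similarity to the question; keep the top k."""
          q = embed(question)
          def score(doc: str) -> float:
              d = embed(doc)
              return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
          return sorted(documents, key=score, reverse=True)[:k]

      def build_prompt(question: str, documents: list[str]) -> str:
          context = "\n".join(retrieve(question, documents))
          return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

      docs = ["Internal note on AI governance.", "Course outline on diplomacy.", "IT helpdesk FAQ."]
      print(build_prompt("What do we teach about diplomacy?", docs))
      ```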

    1. first, its still-advanced military power; second, its central role in the global financial system, which provides an international settlement infrastructure and a convertible currency; third, its strong position in a number of technological fields; and fourth, its ideology and values platform, which, together with the other three dimensions, provide what can be tentatively called a "pyramid of credibility" for American strategy in the world.

      Four pillars of the US strategic advantage.

    1. At the G20 Summit, the president of the EU Commission, Ursula von der Leyen, proposed the Intergovernmental Panel on Climate Change (IPCC) as a model.

      von der Leyen on AI.

    2. “AI should be governed inclusively, by and for the benefit of all; AI must be governed in the public interest; AI governance should be built in step with data governance and the promotion of data commons; AI must be universal, networked and rooted in adaptive multistakeholder collaboration; AI governance should be anchored in the UN Charter, International Human Rights Law, and other agreed international commitments such as the Sustainable Development Goals.”

      AI principles.

    3. The EU, with its “risk-based approach,” prefers specific regulations for various applications. The US prefers a “framework approach”.

      Difference between the EU and the USA.

    4. There will be two additional rounds of public consultations, both with governments and non-governmental stakeholders, in February and March 2024, followed by three rounds of intergovernmental negotiations in April and May. Written contributions can be delivered until March 10, 2024. The final text should be ready in July or August. If everything goes smoothly, the GDC will be adopted by acclamation on September 23, 2024.

      Description of GDC process.

    5. We are still in the early years of the “age of cyberinterdependence.”

      Search for interdependence.

    1. to GAO's Watchdog Report

      Name of the report

    1. 1) Be concise.

      How to be concise?

    2. Nielsen’s research found that 79% of people scan web pages. That begs the question: If the majority of readers already prefer skimming, why wouldn’t you want to make it an easy, enjoyable, and efficient process for them?

      Scanning website

    3. Whitespace. Blocks of text look daunting and intimidating to readers. Whitespace, like bullet points, organizes your text, giving it a more scannable and manageable appearance.

      Use whitespace

    1. fluency in cyber topics as selection criteria for ambassadors.

      Cyber as career progression criteria.

    2. there is a trained Cyber and Digital Policy Officer at every embassy by the end of 2024.

      Interesting initiative.

    3. The Framework consists of three pillars: international law; voluntary norms establishing what states should and should not do in the digital realm; and confidence building measures strengthening transparency, predictability, and stability.

      ||VladaR|| Three elements of 'The Framework'

    1. Artwork for Visual Data, the column on "La Lettura", the cultural supplement of "Corriere Della Sera".

      Visual Art

    1. a set of voluntary company commitments

      Another legal instrument.

    2. that 10 top philanthropies have committed

      ||sorina|| Do we know anything about this initiative?

    3. 30 countries have joined our commitment to the responsible use of military AI.

      ||VladaR||||AndrijanaG|| Have we followed up on this initiative?

    4. we created the AI Bill of Rights. 

      ||sorina|| Do we have this act?

    5. A future where AI is used to advance human rights and human dignity, where privacy is protected and people have equal access to opportunity, where we make our democracies stronger and our world safer.  A future where AI is used to advance the public interest.

      US AI priorities.

    6. There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.

      ||sorina|| Here is an interesting shift towards 'existing' threats.

    7. AI-formulated bio-weapons that could endanger the lives of millions

      Link between AI and bio-weapons

    1. What all these countries have in common is their desire to run their own affairs; to be independent countries.

      Common for all rebel countries.

    1. summary of Russell’s argument—which claims that with advanced AI, “you get exactly what you ask for, not what you want”

      Q: What is the alignment problem?

    2. 70% of respondents thought AI safety research should be prioritized more than it currently is.

      need for more AI safety research

    3. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    4. Amount of concern potential scenarios deserve, organized from most to least extreme concern

      very interesting, in particular the relative limited concern related to the sense of meaning/purpose

    5. Even among net optimists, nearly half (48.4%) gave at least 5% credence to extremely bad outcomes, and among net pessimists, more than half (58.6%) gave at least 5% to extremely good outcomes. The broad variance in credence in catastrophic scenarios shows there isn’t strong evidence understood by all experts that this kind of outcome is certain or implausible

      basically difficult to predict the consequences of high-level machine intelligence

    6. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      Likelihood of existing and extinction-level AI risks

    7. Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems’ choices, with only 20% giving it better than even odds.

      predictions on explainability / interpretability of AI systems

    8. Answers reflected substantial uncertainty and disagreement among participants. No trait attracted near-unanimity on any probability, and no more than 55% of respondents answered “very likely” or “very unlikely” about any trait.

      !

    9. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI

      .

    10. ‘intelligence explosion,’

      Term to watch for: intelligence explosion

    11. The top five most-suggested categories were: “Computer and Mathematical” (91 write-in answers in this category), “Life, Physical, and Social Science” (77 answers), “Healthcare Practitioners and Technical” (56), “Management” (49), and “Arts, Design, Entertainment, Sports, and Media”

      predictions on occupations likely to be among the last automatable

    12. predicted a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 survey

      Timeframe prediction for full automation of labour: 50% chance it would happen by 2116.

      But what does this prediction - and the definition of full automation of labour - mean in the context of an ever-evolving work/occupation landscape? What about occupations that might not exist today? Can we predict how those might or might not be automated?

    13. Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [...] Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers

      Q: What is full automation of labour?

    14. the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.

      Survey: 10% chance that machines become better than humans in every possible task by 2027, but 50% by 2047.

    15. High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption

      Q: What is high-level machine intelligence?

    16. ‘High-Level Machine Intelligence’

      new term: high-level machine intelligence

    17. six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)

      Expectations on which tasks will become feasible for AI only more than 10 years from now

    18. 2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR). This to our knowledge constitutes the largest survey of AI researchers to date

      Who participated in the survey

    19. They are experts in AI research, not AI forecasting, and might thus lack generic forecasting skills and experience, or expertise in non-technical factors that influence the trajectory of AI.

      Good to note this caveat

    20. lack of apparent consensus among AI experts on the future of AI

      This has always been the case, no?

    21. was disagreement about whether faster or slower AI progress would be better for the future of humanity.

      interesting also

    22. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

      .

    23. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting

    24. the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).

    1. Virtually all the policies that EAs and their allies are pushing — new reporting rules for advanced AI models, licensing requirements for AI firms, restrictions on open-source models, crackdowns on the mixing of AI with biotechnology or even a complete “pause” on “giant” AI experiments — are in furtherance of that goal.

      Key calls of extinction risk community.

  3. Dec 2023
    1. Andrew Ng: ‘Do we think the world is better off with more or less intelligence?’

      A very lucid interview. He highlights the importance of open source AI, the danger of regulating LLM as opposed to applications, the negative lobby of most big tech, and the need to focus on good regulation targeting the problems of today, not speculating about extinction, etc. ||JovanK|| and ||sorina||

    1. promote research, development and innovation in various data-based areas, including Big Data Analytics, Artificial Intelligence, Quantum Computing, and Blockchain.

      For the Serbian Chamber of Commerce this could be critical, since they do not have any linkages between data, AI, quantum computing, and blockchain.

      They should encourage Serbian start-ups to look at these linkages.

    1. that chess could be crunched by brute force once hardware got fast enough, databases got big enough, algorithms got smart enough.

      How Kasparov lost the game.

    1. Lynn Margulis [14] has made strong arguments for the view that mutualism is the great driving force in evolution.

      Mutualism is the doctrine that mutual dependence is necessary for social well-being.

    1. The table below illustrates the complexity of models and data used to train common language models.

      Sources of data for foundational models.

    1. Generative AI turned one in November 2023

      End of the year

    1. Sunak said that was because up to that point the government’s scientists were not pushing for it. The aim had been to “flatten the curve” and manage the spread, rather than suppress it.

      ||sorina|| Sorina, this is relevant for our discussion yesterday on the future of the digital economy.

    1. Essentially what this means is, you can test a smaller model and accurately predict how a model 10⁶× larger will perform.

      ||sorina|| Scaling intelligence (see the sketch after this list).

    2. the only differentiating factor between any two LLMs is the dataset.

      ||sorina||

    3. to create a model with the ability to solve math problems without having previously seen them.

      OpenAI may be able to solve mathematical problems, which is a key challenge for probabilistic AI.

      ||sorina|| ||VladaR||
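
      A sketch of the general idea behind such predictions, assuming the commonly used power-law form of neural scaling laws (the quoted article does not spell out its exact method, and the numbers below are made up): fit the law on a few small training runs, then extrapolate to a run with far more compute.

      ```python
      # Fit a power-law scaling curve on small runs and extrapolate (illustrative data).
      import numpy as np

      compute = np.array([1e15, 1e16, 1e17, 1e18])  # hypothetical small-run compute budgets
      loss = np.array([4.2, 3.4, 2.8, 2.4])         # hypothetical measured losses

      # A power law, loss ~ a * C**(-b), is a straight line in log-log space.
      slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), deg=1)

      def predict_loss(c: float) -> float:
          return 10 ** (intercept + slope * np.log10(c))

      print(f"fitted exponent b = {-slope:.3f}")
      print(f"predicted loss at 1e24 compute (~10^6x the largest run): {predict_loss(1e24):.2f}")
      ```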

    1. On the banks of Hallstätter See and surrounded by soaring Alpine peaks, the town of Hallstatt and its stunning landscape enjoy UNESCO protection.

      ||sorina|| This paragraph explains why you should go to Austria.

  4. Nov 2023
    1. Given its dual-use nature, CAI could make it easier to train pernicious systems

      Could someone not do the reverse and curate a constitution of pure malicious intentions?

    2. AI alignment, which incentivizes developers to adopt this method

      How do we know if AI is aligning with us when we leave the job of aligning it to AI as well???

    3. CAI increases model transparency.

      Not really, though... just as with regular LLMs, we don't know how the models comprehend the data that we give them and how they come up with answers. There's no guarantee that the models understand the terms of the principles in the ways that we understand them; how do we know if the model is indeed making decisions according to the values (or whatever definition we might give to those values) and not just by happenstance?

    1. dependence on a specific AI technology will diminish, so that end-users can avoid ‘lock-in’ effects and benefit from reduced switching costs

      The risk is that it could be too late if there is no immediate push against the monopolies of a few major companies.

    2. Interoperability of pre-trained models across platforms should also drastically reduce the need to retrain large models.

      Good point!

    3. Prompt templates and standardized prompt optimizers

      Any suggestion?

    4. have assembled to develop an LLM called BLOOM, should be valuable.

      A good example.

    5. public institutions can actively incentivize data-sharing partnerships, which, in combination with federated learning, may promote AI across institutional boundaries while ensuring data privacy.

      Open data access is tricky. There is growing concern in developing countries that open data can benefit only those with processing power. Thus, big tech platforms could be the main beneficiaries of open data access.

      This issue must be sorted out by having traceability of AI to specific data. The data can be open, but it should be attributed to somebody.

    6. under a trustworthy and responsible governance model.

      Here is a possible role for Switzerland as 'ICRC for AI'

    7. the development of a LLM is estimated to cost between 300 and 400 million euros.

      It can be less.

    8. source codes for formalizing the training task

      It is too specific. Training tasks are part of one type of AI.

    9. AI is programmed to learn to perform a task.

      It is not the case with all AI systems. It is only the case with reinforcement learning systems.

    10. arbitrarily decide

      It is the case today. Internet companies are free to decide what, where, and how they will provide services.

    11. the concentration of power over technology is known to hamper future innovation, fair competition, scientific progress, and hence human welfare and development at large

      Main concern

    12. concentrated power

      It is the main concern.

    13. An example is OpenAI, which was founded to make scientific research openly available but which eventually restricted access to research findings.

      OpenAI is not an open-source platform. It is a typical 'internet economy' business which provides a service for 'free' in exchange for data. This has been the business model of Google, Facebook, Twitter, etc. OpenAI takes this model to the next level by capturing knowledge (instead of data).

    1. The Department will develop and implement data quality assessment tools and monitoring processes with results that are transparent to users. Assessments will also be done to evaluate data outputs from other AI platforms to minimize risk.

      What about the input data used to train other AI systems?

    2. High quality datasets are those sufficiently free of incomplete, inconsistent, or incorrect data, while also being well documented, organized, and secure

      Doesn't this definition mostly point to highly structured data?

    3. The Department’s CDAO will support and coordinate the establishment and maintenance of AI policies—such as 20 FAM 201.1—that provide clear guidelines for responsible AI use, steward AI models, and prioritize the evaluation and management of algorithmic risk (e.g., risks arising from using algorithms) in AI applications during their entire lifecycle—including those related to records retention, privacy, cybersecurity, and safety.

      Existence of current AI policies.

      Another thing is, they mention algorithmic risk, which means the evaluation of algorithms may not just be on applications?

    4. Much like the EDS aims to cultivate a data culture, the Department will imbue its values around responsible AI use across the organization, including to uphold data and scientific integrity.

      Very interesting how they use words like "culture" to describe AI integration. It certainly goes beyond simply adopting selective tools; instead, it's about perspective and norm-shaping within the organisation.

    5. enhance AI literacy, encourage and educate on responsible AI use, and ensure that users can adequately mitigate some risks associated with AI tools.

      What is AI literacy? What sets of knowledge and skills make a person AI literate?

    6. To meet the computational demands of AI development, our infrastructure will leverage Department cloud-based solutions and scalable infrastructure services.

      Did they already have that infrastructure ready?

    7. Robust access controls and authentication mechanisms aligned to Zero Trust principles will mitigate risk of unauthorized access to AI technologies and Department data, providing a high level of security

    8. with a mix of open-source, commercially available, and custom-built AI systems.

      Open-source is the key word here.

    9. its Enterprise Data Strategy (EDS) in September 2021

      EAIS has a predecessor

    10. Innovate

      Use cases

    11. Ensure AI is Applied Responsibly

      Principles and standards

    12. Foster a Culture that Embraces AI Technology

      Workforce

    13. Leverage Secure AI Infrastructure

      Infrastructure

    1. ||sorina||||StephanieBP||||VladaR|| This is, so far, one of the best analyses of the current geopolitical moment, which will inevitably impact our work as well. The main question is whether there will be any space at all for supporting interdependence and inclusion. ||Pavlina||

    1. 50 Global Hubs for Top AI Talent

      @jovan 50 global hubs for Top AI talents

    1. if Senator Pascale Gruny has anything to say about it. She has just taken a first step toward a proposed law making everything, or really anyone — at least in official documents — well, masculine.

      I disagree with this view. ||sorina|| This is what we discussed last evening during dinner. What about this view?

    1. At Diplo, the organisation I lead, we’ve been successfully experimenting with an approach that integrates data labelling into our daily operations,

      Jovan, I DO NOT AGREE WITH THIS

    1. However, in an environment where data moves across borders in seconds, and can easily be destroyed or removed, cooperation based on a traditional MLAT is blamed for being slow and insufficient since it often takes months to address requests for assistance (Kent, 2015).  

      Totally agree that procedural frameworks between countries or regions can take a long time to deal with tracking digital traces.

    1. US Secretary of State Antony Blinken has been touring the region again. He's currently in talks in Ankara, Turkey. And we are also being told that the head of the CIA, William Burns, who used to be the top US diplomat on the Middle East, is in the region too.

      Michael, have a look at this.

    1. Nishida’s philosophy is critical of dualistic perspectives that often influence our understanding of humans versus machines. He would likely argue that humans and machines are fundamentally interlinked. In this interconnected arena, beyond traditional dualistic frameworks (AI vs humans, good vs bad), we should formulate new approaches to AI.

      Q: What is Nishida's view on interconnectedness?

  5. Oct 2023
    1. The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

      ||Jovan||||sorina||||JovanNj||

      Probably old news to you, but here's an article that explains about the billionaire that founded Open Philanthropy (a core funder in EA activities). It also explains about its reach into politics.

    1. William Isaac

      Staff Research Scientist on DeepMind's Ethics and Society Team and Research Affiliate at the University of Oxford's Centre for the Governance of AI: https://wsisaac.com/#about

      Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!

    2. Arvind Narayanan

      Professor of computer science from Princeton University and the director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.

      Haven't read his work closely yet, but it seems sensible to me.

    3. Sara Hooker,

      Director at Cohere: https://cohere.com/ (an LLM AI company).

    4. Yoshua Bengio

      Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

      Very influential computer scientist, and considered a leading force in AI. Also an AI doomer, though I can't find his clear link with EA.

    5. John McDermid

      Professor from University of York.

    6. Alexander Babuta
    7. Irene Solaiman

      Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/
