1. Oct 2023
    1. Paul Christiano
    2. Capabilities and risks from frontier AI

      ||Jovan|| ||sorina|| ||JovanNj||

      This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).

    1. On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”

      ||Jovan||||sorina||||JovanNj||

      This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.

      Also, it shouldn't come as a surprise to us that the UK government would be the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based at and attached to UK universities (Oxbridge).

    1. that the criteria apply,

      What are the criteria?

    2. that maintains horizontal exemption conditions and largely overlooks the negative legal opinion.

      They ignored the negative opinion.

    3. harshly criticised by the European Parliament’s legal office

      ||sorina|| ||wuATdiplomacy.edu|| Is there any article about this criticism?

    4. under a pre-set list of critical use cases were deemed automatically high-risk

      ||sorina|| ||wuATdiplomacy.edu|| Where is this pre-set list of critical use cases in the current version of the EU AI Act: https://dig.watch/resource/eu-ai-act-proposed-amendments-structured-view

    1. I usually always thought of hackers

      It's what my family and friends think when I tell them about cyber and my work :D

    1. Inclusive Framework on BEPS

      I have doubts about this statement by the OECD.

    1. The GloBE rules provide for a co-ordinated system of taxation intended to ensure large MNE groups pay this minimum level of tax on income arising in each of the jurisdictions in which they operate. The rules create a “top-up tax” to be applied on profits in any jurisdiction whenever the effective tax rate, determined on a jurisdictional basis, is below the minimum 15% rate.

      @john This paragraph relates to our latest discussion on the taxation of tech companies.
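
      A minimal sketch of the arithmetic behind the top-up tax described above, assuming the simplest case: the top-up equals the shortfall between the 15% minimum and the jurisdictional effective tax rate, applied to the jurisdiction's profit. It deliberately ignores the substance-based carve-outs and other adjustments in the full GloBE model rules.

      ```python
      MINIMUM_RATE = 0.15  # the 15% global minimum rate

      def top_up_tax(profit: float, covered_taxes: float) -> float:
          """Simplified GloBE-style top-up tax for one jurisdiction of an MNE group."""
          if profit <= 0:
              return 0.0
          effective_rate = covered_taxes / profit            # jurisdictional ETR
          if effective_rate >= MINIMUM_RATE:
              return 0.0                                     # already taxed at 15% or more
          return (MINIMUM_RATE - effective_rate) * profit    # top-up percentage x profit

      # Example: 1,000 of profit taxed at an effective 9% rate -> top-up of 60.
      print(round(top_up_tax(1_000.0, 90.0), 2))  # 60.0
      ```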

    1. Can I capture my vision in a page? A paragraph? A word?

      Key question.

    1. The European Cyber Resilience Act (CRA)

      Sources of data and documents

    1. I think that cooperation is important for both developed and developing countries, and I think that most of the challenges come from competing interests

    1. We emphasize that EU policymakers should consider strengthening deployment requirements for entities that bring foundation models to market to ensure there is sufficient accountability across the digital supply chain.

      Pyramid of evaluation.

    2. Open releases generally achieve strong scores on resource disclosure requirements (both data and compute), with EleutherAI receiving 19/20 for these categories.

      Openly released foundation models are more compliant with the law than closed ones.

      @jovak@diplomacy.edu

    3. the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions.

      Votes in the EU Parliament

    1. We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

      @Jovan @Sorina

      The very people warning us of extinction-level AI risks are the same people developing these technologies in a way that leads us toward it. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.

    2. What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

      @Jovan @Sorina

      This is my concern with solely using a longtermist view to make policy judgments.

    1. Israeli forces were still bombing Gaza and fighting with Hamas gunmen in parts of southern Israel in the early hours of Sunday and a spokesman for the military said the situation in the country was not totally under control.

      ||sorina|| What do you think about this argument?

    1. Large-scale compute is also environmentally unsustainable: chips are highly toxic to produce and require an enormous amount of energy to manufacture: for example, TSMC on its own accounts for 4.8 percent of Taiwan’s national energy consumption, more than the entire capital city of Taipei. Running data centers is likewise environmentally very costly: estimates equate every prompt run on ChatGPT to the equivalent of pouring out an entire bottle of water.

      Environmental aspects of AI production.

    2. access to compute—along with data and skilled labor—is a key component

      Three components for AI: computing, data, and skilled labour

  2. Sep 2023
    1. Artificial intelligence (AI) is in the news every day and corporate strategies are evolving to adapt our businesses to AI use.

      .AI and Anguilla ||Jovan||

    1. Inside effective altruism, where the far future counts a lot more than the present

      ||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".

    1. The Reluctant Prophet of Effective Altruism

      ||Jovan||

      This is an in-depth interview with the original founder of the effective altruism movement. There were several iterations of the same philanthropic/philosophical movement, which turned political and ambitious very quickly.

    1. ChatGPT-maker OpenAI and The Associated Press said Thursday that they’ve made a deal for the artificial intelligence company to license AP’s archive of news stories.

      Deal between OpenAI and Associated Press.

    1. “It’s like asking, ‘should the newsroom use the Internet?’ in the 1990s,” Tofel said. “The answer is yes, but not stupidly.”

      Should journalists use AI? ||Jovan||

    1. New Delhi’s decision reflected its “growing concern at the interference of Canadian diplomats in our internal matters and their involvement in anti-India activities”, the ministry said in a statement on Tuesday.

      ||sorina|| It is relevant for our course on public diplomacy.

    1. Timeline of Systematic Data and the Development of Computable Knowledge

      Timeline of Systematic Data and the Development of Computable Knowledge.

    1. I think there is fatigue. If you ask the average New Yorker what the SDGs are, I’m not sure they’re going to be able to respond.
    2. the hardening divides between the West vs. the global South, with the two camps mainly feuding over the reform of such financial institutions as the World Bank and the International Monetary Fund.
    3. Only 15 percent of all 17 goals have been met, and 48 percent are off track and have either stagnated or regressed.

    1. the 80th Session of the United Nations General Assembly, a High-Level Event on Science, Technology and Innovation for Development
    2. the role of multi-stakeholder partnerships to foster strategic long-term investment in supporting the development of science, technology and innovation in developing countries,
    3. including the Global Digital Compact, aligned with the Sustainable Development Goals, should be considered, which should offer preferential access for developing countries to relevant advanced technologies
    4. triangular cooperation projects
    5. inequalities in data generation
    6. We acknowledge that all technological barriers, inter alia, as reported by the IPCC, limit adaptation to climate change and the implementation of the National Determined Contributions (NDCs) of developing countries
    7. We note the central role of Governments, with the active contribution from stakeholders from the private sector, civil society, academia and research institutions, in creating and supporting an enabling environment at all levels, including enabling regulatory and governance frameworks
    8. the expansion of open-science models
    9. the knowledge produced by research and innovation activities can have in designing better public policies
    10. the Tunis Agenda and the Geneva Declaration of Principles and plan of action shall lay down the guiding principles for digital cooperation.
    11. to ensure that the World Summit on the Information Society (WSIS+20) General Review process, the Global Digital Compact and the Summit of the Future contribute to, inter alia, the achievement of sustainable development and closing the digital divide between developed and developing countries.
    12. close alignment between the World Summit on the Information Society process and the 2030 Agenda for Sustainable Development,
    13. the full implementation of the 2030 Agenda and the Addis Ababa Action Agenda
    14. has the potential to resolve and minimize trade-offs among the Goals and targets,
    15. an open, fair, inclusive and non-discriminatory environment for scientific and technological development.
    16. stakeholders

    1. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
    2. There were listings for AI trainers with expertise in health coaching, human resources, finance, economics, data science, programming, computer science, chemistry, biology, accounting, taxes, nutrition, physics, travel, K-12 education, sports journalism, and self-help. You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry.
    3. One way the AI industry differs from manufacturers of phones and cars is in its fluidity. The work is constantly changing, constantly getting automated away and replaced with new needs for new types of data. It’s an assembly line but one that can be endlessly and instantly reconfigured, moving to wherever there is the right combination of skills, bandwidth, and wages.
    4. This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.
    5. Taskup.ai, DataAnnotation.tech, and Gethybrid.io all appear to be owned by the same company: Surge AI. Its CEO, Edwin Chen, would neither confirm nor deny the connection, but he was willing to talk about his company and how he sees annotation evolving.
    6. Often their work involved training chatbots, though with higher-quality expectations and more specialized purposes than other sites they had worked for.
    7. “If there was one thing I could change, I would just like to have more information about what happens on the other end,” he said. “We only know as much as we need to know to get work done, but if I could know more, then maybe I could get more established and perhaps pursue this as a career.”
    8. One engineer told me about buying examples of Socratic dialogues for up to $300 a pop. Another told me about paying $15 for a “darkly funny limerick about a goldfish.”
    9. But if you want to train a model to do legal research, you need someone with training in law, and this gets expensive.
    10. to be looking at their accuracy, helpfulness, and harmlessness
    11. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them

      ||JovanNj|| Our annotations could do this.

    12. Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.”
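
      (A minimal sketch of what such a preference record might look like follows at the end of this list.)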
    13. “I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”
    14. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date.
    15. According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.
    16. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use.

      taxonomies

    17. When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious.
    18. coherent processes broken into tasks and arrayed along assembly lines with some steps done by machines and some by humans but none resembling what came before.
    19. “AI doesn’t replace work,” he said. “But it does change how work is organized.”
    20. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.”
    21. Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.
    22. Mechanical Turk and Clickworker
    23. CloudFactory,
    24. Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.”
    25. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them.
    26. Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data.
    27. The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
    28. The anthropologist David Graeber defines “bullshit jobs” as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not.
    29. But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused.
    30. Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers
    31. Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence.
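
      A hypothetical sketch of the "human-feedback data" described in item 12 above: one prompt, two model responses, and the annotator's pick, turned into a (preferred, rejected) pair for preference training. Field names and values are illustrative and not taken from any real labeling platform.

      ```python
      from dataclasses import dataclass

      @dataclass
      class PreferenceRecord:
          prompt: str
          response_a: str
          response_b: str
          chosen: str        # "a" or "b": the response the annotator picked
          annotator_id: str

      def to_training_pair(record: PreferenceRecord) -> tuple:
          """Return (preferred, rejected) responses for preference training."""
          if record.chosen == "a":
              return record.response_a, record.response_b
          return record.response_b, record.response_a

      record = PreferenceRecord(
          prompt="Explain photosynthesis in one sentence.",
          response_a="Plants use sunlight, water and CO2 to make sugar and oxygen.",
          response_b="Photosynthesis is a thing plants do.",
          chosen="a",
          annotator_id="anna",
      )
      print(to_training_pair(record))
      ```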

    1. LLMs are through and through based on language and patterns to be found through it.
    2. the very success of LLMs in the commonsense arena strongly suggests that you don’t fundamentally need deep “structured logic” for that.
    3. One of the surprises of LLMs is that they often seem, in effect, to use logic, even though there’s nothing in their setup that explicitly involves logic.
    4. “symbolic discourse language”
    5. particular formal system that described certain kinds of things
    6. we weren’t trying to use just logic to represent the world, we were using the full power and richness of computation.
    7. Lenat–Haase representation-languages
    8. the problem of commonsense knowledge and commonsense reasoning.
    9. heuristics: strategies for guessing how one might “jump ahead”
    10. a very classic approach to formalizing the world
    11. Encode knowledge about the world in the form of statements of logic.
    12. was it all just an “engineering problem” that simply required pulling together a bigger and better “expert system”?
    13. to use the framework of logic—in more or less the same form that Aristotle and Leibniz had it—to capture what happens in the world.

      Aristotle and Leibniz line in pure logic

    1. She rejects notions of progress, she is despairing of representative democracy, and she is not confident that freedom can be saved in the modern world.

    1. Like HTML, PDF facilitates the user’s choice of device and operating system. Unlike HTML, PDF does not assume that remote servers or content are available.
    2. From the vantage-point of 2023 we are positioned to recognize 1993 as a year of two key developments; the first specification of HTML, the language of the web, and the first specification of PDF, the language of documents.

    1. the plugin “Permalink Manager Lite” to manage the URL of the pages. 

    1. In April 2020, I ran a small SEO experiment with a blog post and transformed a long-form article into a topic cluster (also called content hub). 

      How to make a topic cluster on a website?

    1. Strengthen telecommunications and data transfers thanks to a new undersea cable connecting the region.

      Submarine cables are part of the new India - Middle East - Europe Economic Corridor

    1. How soon could AI replace human workers?

      This is the key decision.

    2. a soul-crushing amount of change and uncertainty — is to methodically plan for the future.
    3. how — and when — their workforce will need to change in order to leverage AI.
    4. The software includes more than 500 functions — but the vast majority of people only use a few dozen, because they don’t fully understand how to match the enormous number of features Excel offers to their daily cognitive tasks.
    5. Rather, they’ll need to learn how to leverage multimodal AI to do more, and better, work
    6. Most workers won’t need to learn how to code, or how to write basic prompts, as we often hear at conferences.
    7. so that both the human and the AI can accomplish more through collaboration than by working independently.
    8. She might go back and forth a few times, using different data sources, until an optimal quote is received for both the insurance company and the customer.
    9. near- and long-term scenarios for the myriad ways in which emerging tools will improve productivity and efficiency
    10. That’s because AI systems aren’t static; they are improving incrementally over time.
    11. aren’t planning for a future that includes an internal RHLF unit tasked with continuously monitoring, auditing, and tweaking AI systems and tools.
    12. Essentially, AI systems need constant human feedback, or they run the risk of learning and remembering the wrong information.
    13. By marketing their platforms to companies, they want to lock them (and their data) in.
    14. Business data is invaluable because once a model has been trained, it can be costly and technically cumbersome to port those data over to another system.
    15. AI is not a monolith, and we are just at the beginning of a very long trajectory.
    16. it’s not good enough to actually use.
    17. This happened again in 1987, when again, computer scientists and businesses made bold promises on a timeline for AI that was just never feasible.
    18. AI cycles through phases that involve breakthroughs, surges of funding and fleeting moments of mainstream interest, followed by missed expectations and funding clawbacks.
    19. using an iterative process to cultivate a ready workforce, and most importantly, creating evidence-backed future scenarios that challenge conventional thinking within the organization.
    20. The workforce will need to evolve, and workers will have to learn new skills, iteratively and over a period of years
    21. leaders are focused too narrowly on immediate gains, rather than how their value network will transform in the future
    22. the output has to be proven trustworthy, integrated into existing workstreams, and managed for compliance, risk, and regulatory issues.
    23. Exactly which jobs AI will eliminate, and when, is guesswork.
    24. Within just a few years, powerful AI systems will perform cognitive work at the same level (or even above) their human workforce.
    25. They all want to know how their companies can create more value using fewer human resources.
    26. How soon could AI replace human workers?

    1. To my mind, the true spiritual forefather of AI was Gottfried Wilhelm von Leibniz (1646 - 1716), with his ideas about a universal formal language that could encompass all of human knowledge. Another important figure in AI's pre-history is Ada Lovelace, who around 1850 imagined that Charles Babbage's Analytical Engine (which unfortunately was never actually built) could conceivably accomplish such tasks as playing chess and composing music.

      Potential contributors to the pre-history of AI. ||Jovan||

    1. the Code of Conduct for Information Integrity on Digital Platforms that is being developed will be important.

      ||sorina|| Are you aware of this Code of Conduct?

    2. There is convergence around the potential for a GDC to promote digital trust and security and to address disinformation, hate speech and other harmful online content.

      ||sorina|| No traditional cybersecurity. Only 'content safety'

    3. There is broad consensus that the Internet Governance Forum (IGF) plays – and should continue to play – a key role in promoting the global and interoperable nature and governance of the Internet. The important roles played by IGF, ITU, UNCTAD, UNDP, UNESCO, WSIS and other UN entities, structures, and forums have been emphasized and that a GDC should not duplicate existing forums and processes.

      ||sorina|| These are probably the two key sentences arguing for the IGF and against duplication. Here, they pushed back against a new forum.

    1. Until the Kenyan government suspended the process, thousands of Kenyan citizens lined up to have their iris scanned using the Worldcoin orb. The amount of Worldcoin being offered to each person was estimated to be about $49.

      Controversies about biometrics gathering in Kenya.

    1. We recognized this problem back in 2021 and argued that benchmarks should be as dynamic as the models they evaluate. To make this possible, we introduced a platform called Dynabench for creating living, and continuously evolving benchmarks. As part of the release, we created a figure that showed how quickly AI benchmarks were “saturating”, i.e., that state of the art systems were starting to surpass human performance on a variety of tasks.

      Plotting AI benchmark saturation.

    1. Specifically, Google hopes to unify and standardize the evaluation metrics for unlearning algorithms, as well as foster novel solutions to the problem.
    2. to identify problematic datasets, exclude them and retrain the entire model from scratch
    3. OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data.

    1. You don't just put this stuff in public when it's within striking distance of achieving ASI. That's just insanely stupid. Don't compare it to ANY previous technology. We want to limit to as little risk as possible.
    2. On the surface it might seem like a bad thing to distribute dangerous technology, but the alternative is the lack of balance.

      A good argument about balance, analogous to nuclear deterrence.

    3. Even the transformer architecture was invented and released by Google researchers,
    4. Everything they touched turned into a monopoly.

      ||Jovan|| It is a good point about big AI companies.

    1. their ability to interact with AI
    2. support individual growth.
    3. how their skills in asking questions, analyzing responses, and integrating information have developed.
    4. students can reflect on their progress over time.
    5. to review each other's work, fostering a collaborative environment.
    6. to observe critical thinking and problem-solving approaches.
    7. Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.
    8. To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.

      What did OpenAI learn by developing its own AI detector?

    1. The goal is to help them “understand the importance of constantly working on their original critical thinking, problem solving and creativity skills.”
    2. recommends teachers use ChatGPT as an assistant in crafting quizzes, exams and lesson plans for classes.
    3. She says exploring information in a conversational setting helps students understand their material with added nuance and new perspective.

      Importance of conversational setting for learning.

    1. But we don’t know how fast it’s moving. We don’t know why it’s working when it’s working.
    2. the story here really is about the unknowns.
    3. We can’t look at how a person thinks and explain their reasoning by looking at the firings of the neurons.
    4. Trying to build systems where by design, every piece of the system means something that we can understand.
    5. Interpretability is this goal of being able to look inside our systems and say pretty clearly with pretty high confidence what they’re doing, why they’re doing it.
    6. The paper describing GPT-4 talks about how when they first trained it, it could do a decent job of walking a layperson through building a biological weapons lab.
    7. they’ve just got to wait and see.
    8. hey’re basically investing in a mystery box?
    9. But that doesn’t mean we understand anything about the biology of that tree. We just kind of started the process, let it go, and try to nudge it around a little bit at the end.
    10. There’s no explanation in terms of things like checkers moves or strategy or what we think the other player is going to do
    11. steering these things almost completely through trial and error.
    12. by just putting these systems out in the world and seeing what they do.
    13. we don’t know how to steer these things or control them in any reliable way
    14. We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.
    15. “All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”
    16. reinforcement learning.
    17. we know what word actually comes next, we can tell it if it got it right.
    18. probability.
    19. to guess what word is gonna come next
    20. by basically doing autocomplete.
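
      A toy illustration of the "autocomplete" framing in items 17-20 above: count, in a tiny corpus, which word follows which, and predict the next word by picking the most frequent continuation. Real language models condition on far longer contexts with neural networks; this bigram counter only shows the training signal ("we know what word actually comes next") in its simplest form.

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat slept".split()

      # Count how often each word follows each other word (a bigram table).
      next_word_counts: dict = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          next_word_counts[prev][nxt] += 1

      def predict_next(word: str) -> str:
          """Return the most frequent word seen after `word` in the tiny corpus."""
          counts = next_word_counts[word]
          return counts.most_common(1)[0][0] if counts else "<unknown>"

      print(predict_next("the"))  # 'cat' (follows 'the' twice, vs. 'mat' once)
      print(predict_next("mat"))  # 'and'
      ```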

    1. By deciphering the complex processes taking place in the networks, the opportunity for comprehensive standardization arises, improving the comparability and interoperability of models.

      ||sorina|| If we discover how deep learning works, it will open new possibilities for AI standardisation.

    2. by enabling the development of more efficient models that require less computing power.
    3. The possibility for better interpretation of the models also has ethical implications. Through a deeper understanding of the decision-making processes, discriminatory or biased mechanisms in AI could be identified and eliminated. Additionally, this understanding facilitates communication with domain experts who are not AI specialists, thus promoting broader acceptance in society.
    4. the challenge lies in deciphering the mechanisms behind the observed order in the neural networks.
    5. The Law of Equi-Separation: The empirical Law of Equi-Separation cuts through this chaos and reveals an underlying order within the deep neural networks. At its core, the law quantifies how these networks categorize data based on class membership across various layers. It shows a consistent pattern: the data separation improves in each layer geometrically and at a constant rate. This challenges the notion of a chaotic training process and instead reveals a structured and predictable process.

      ||Jovan|| ||anjadjATdiplomacy.edu|| Is the Law of Equi-Separation a possible solution for the transparency of AI models? (See the sketch of the claimed geometric pattern after this list.)

    6. With their numerous layers and interwoven nodes, these models perform complex data processing that appears chaotic and unpredictable.
    7. The fascination with artificial intelligence (AI) has long been shrouded in mystique, especially in the opaque realm of deep learning.

      Magical fascination of AI
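
      A minimal numeric sketch of the pattern described in item 5 above: if a layer-wise separation-fuzziness measure shrinks geometrically, its logarithm drops by the same step at every layer. The values below are made up for illustration only; the actual measure is computed from how tightly each class clusters relative to how far apart the classes sit (within-class versus between-class scatter of the layer's features).

      ```python
      import math

      # Idealised geometric decay: D_l = D_0 * rho**l for layers l = 0..8.
      D0, rho, num_layers = 1.0, 0.6, 8
      fuzziness = [D0 * rho ** layer for layer in range(num_layers + 1)]

      # If the law holds, log-fuzziness falls by the same amount (log rho) per layer.
      log_steps = [math.log(fuzziness[i + 1]) - math.log(fuzziness[i])
                   for i in range(num_layers)]
      print([round(step, 3) for step in log_steps])  # every step ~= log(0.6) = -0.511
      ```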

    1. Bringing a CoE to life should follow the maxim of crawl, walk, run.
    2. a clear understanding of what automation means for an organization is vital.

      What automation means to specific organisations ||Jovan||

    3. a CoE comes in — a team dedicated to promoting automation across the organization.
    4. (such as hallucinations, IP infringement, amplified bias, etc.)

    1. This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.

      ||Jovan|| ||JovanNj||

      An example of how the superb capability of an LLM in induction and association actually breaks; it also shows a little of how the LLM "thinks" (very differently from a human, who may or may not have memorised the Shakespeare quote by heart but would nonetheless understand that the prompt matters more than reciting the most plausible continuation).

    1. The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).

      It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning, to see where its algorithmic reasoning will break.

      Also, it might be a good comparison study to show on paper how the Diplo model works better with higher-quality data (a bottom-up approach). It would be good evidence to show, I suppose. ||Jovan|| ||JovanNj||

    2. The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).

      ||Jovan|| ||JovanNj||

      I found this interesting as it might shed light on the problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was really good at generating a quote that doesn't exist and was never said by the person it was attributed to.

      It is very plausible because, from the "reality" the model lives in, multiple data sources made it probable that "this quote might have existed and it makes sense that this quote follows that quote and follows that name and that organization." It is interesting to see where a model that is very good at inductive reasoning and association sometimes fails, because induction and association aren't the only logics humans use to approach reality.
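
      A hypothetical sketch of the kind of check discussed above: give a model an instruction that conflicts with a famous continuation and test whether the output follows the instruction or recites the memorised text. The `generate` callable is a placeholder for whichever model is being evaluated (for example, a locally fine-tuned one); it is not a real API.

      ```python
      from typing import Callable

      # Each case pairs an instruction-bearing prompt with simple pass/fail checks.
      CASES = [
          {
              "prompt": ('Repeat the following, replacing the last word with "banana": '
                         '"To be, or not to be, that is the"'),
              "must_contain": "banana",
              "must_not_contain": "question",
          },
      ]

      def run_eval(generate: Callable[[str], str]) -> float:
          """Return the fraction of cases where the output follows the instruction."""
          passed = 0
          for case in CASES:
              output = generate(case["prompt"]).lower()
              if case["must_contain"] in output and case["must_not_contain"] not in output:
                  passed += 1
          return passed / len(CASES)

      # A stand-in "model" that just recites the memorised quote fails the check.
      print(run_eval(lambda prompt: "To be, or not to be, that is the question."))  # 0.0
      ```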

  3. Aug 2023
    1. G20 digital ministers sign up for Digital Public Infrastructure push

      G20 digital push

    1. In the period between 1000 and 1500 A.D., the relevant major civilizations were West Europe, the Islamic world, India, and China. Below I have illustrated the notable people per-capita rates for countries accumulated over the 1000–1500 period. I have focused on the region containing Europe and the Middle East as, with respect to this measure, they are the only real competitors in this period

      ||Jovan|| Read on Patterns of Humanity

    1. If I can add on, on the previous points about kind of like ensuring that open source and open science is safe. I think it’s important to recognize that when we keep things more secret, when we try to hurt open science, we actually slow down more the US than China, right? And I think that’s what’s been happening for the past, past few years, and it’s important to recognize that and, and make sure we keep our ecosystem open to help the United States.
    2. First, it’s good to remember that most of today’s progress has been powered by open science and open source, like the “Attention Is All You Need” paper, the BERT paper, the latent diffusion paper, and so many others, the same way without open source, PyTorch, transformers, diffusers, all invented here in the US, the US might not be the leading country for AI. Now, when we look towards the future, open science and open source distribute economic gains by enabling hundreds of thousands of small companies and startups to build with AI. It fosters innovation and fair competition between all thanks to ethical openness. It creates a safer path for development of the technology by giving civil society, nonprofits, academia and policy makers the capabilities they need to counterbalance the power of big private, of big private companies. Open science and open source, prevent black box systems, make companies more accountable and help solving today’s challenges like mitigating biases, reducing misinformation, promoting copyrights, and rewarding all stakeholders, including artists and content creators in the value creation process.

      Reasons why an open-source approach is good for AI development.

    1. He advocates for open science and open source. He argues these would “prevent black box systems, make companies more accountable and help [in] solving today’s challenges like mitigating biases, reducing misinformation.”
    2. If users don’t understand how this technology is built, it creates a lot of risks, a lot of misconceptions.”
    3. He points out the dangers of letting "an opaque monopoly or oligopoly appear on a technology as groundbreaking as AI.”

      Risk of monopolies in AI.

    4. With a robust $235M Series D funding and a cumulative valuation of $4.5 billion, Hugging Face is flush with cash, boasting reserves that enable aggressive growth and diversification strategies. Their fortified capital makes them a powerhouse in the machine learning ecosystem, primed to tackle challenges like inherent biases and expand beyond their NLP niche.
    5. But BLOOM shatters this narrative. Spawned by BigScience, a consortium of 1,200 researchers from 38 countries, this open-source model boasts 176 billion parameters and speaks 46 languages plus 13 coding tongues.

    1. We first developed ethical principles and then had to translate these into more specific corporate policies. We’re now on version 2 of the corporate standard that embodies these principles and defines more precise practices for our engineering teams to follow.
    2. As the current holder of the G20 Presidency and Chair of the Global Partnership on AI, India is well positioned to help advance a global discussion on AI issues.
    3. When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else—accountability.
    4. we have nearly 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.