10,911 Matching Annotations
  1. Jul 2023
    1. the settling of the new equilibrium will not happen overnight – or even from one month to the next
    2. the “Thucydides Trap”,
    3. Experience shows that the dominant great power tends to see itself as more benevolent and better-intentioned than it really is, and attributes malice to its challenger more often than is – or should be – justified
    4. The bad news is that of the sixteen instances thus identified, twelve have ended in war, and only four were peacefully resolved
    5. And it can neutralise the chief US weapon, the chief US weapon of power, which we call “universal values”
    6. “Ending the century of humiliation” – or, to paraphrase the Americans, “Make China Great Again”.
    7. So China has quite simply created its own: we see the BRICS and the One Belt One Road Initiative; and we also see the Asian Infrastructure Investment Bank, the development resources of which are several times greater than the development resources of all the Western countries. 
    8. there are no eternal winners and no eternal losers
    9. In 2010 the US and the European Union contributed 22 – 23 per cent of total world production; today the US contributes 25 per cent and the European Union 17 per cent. In other words, the US has successfully repelled the European Union’s attempt to move up alongside it – or even ahead of it.

      ||sorina|| A very interesting statistic.

    10. we also had a plan, which we expressed as the need to create a great free trade zone stretching from Lisbon to Vladivostok.
    11. after its own civil war, from the 1870s onwards the United States grew to be the preeminent country, and its inalienable right to world economic supremacy is part of its national identity, and a kind of article of faith.
    12. What has happened is that China has made the roughly three-hundred-year journey from the Western industrial revolution to the global information revolution in just thirty years.
    13. But it has turned out that in fact this issue, the liberation of China, belongs to the historical timeframe; because as a result of that liberation, the United States – and all of us – are now facing a greater force than the one we wanted to defeat. 
    14. Back then the US decided to free China from its isolation, obviously to make it easier to deal with the Russians; and so it put that issue in the strategic timeframe.
    15. tactical time, strategic time, and historical time.

      Three historical timeframes, inspired by Braudel's thinking.

    16. you have to simultaneously visualise three timeframes
    17. then today “Western values” mean three things: migration, LGBTQ, and war.

      ||sorina|| There is an interesting discussion of Orban in Romania. I do not understand Hungarian-Romanian dynamics well; you will know them better.

    18. because the Ministry of Foreign Affairs of Romania – which I understand to belong more to the presidential branch of power – has come to my aid and sent me a démarche.

    1. the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI. 

      ||sorina|| 3 key processes for the USA on AI.

    2. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
    3. model weights.

      ||JovanNj|| We need to focus on model weights issues.

    4. safety, security, and trust
    5. President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.   
    1. It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations:

      A four-model structure for AI governance


    1. the Media in the Digital Age.

    1. But if the superforecasters are so good at predictions, and experts have so much topic-specific knowledge, you might at least expect the two groups to influence each other’s beliefs.
    2. But superforecasters and AI experts seemed to hold very different views of how societies might respond to small-scale damage caused by AI. Superforecasters tended to think that would prompt heavy scrutiny and regulation to head off bigger problems later. Domain experts, by contrast, tended to think that commercial and geopolitical incentives might outweigh worries about safety, even after real harm had been caused.
    3. Such people share a few characteristics, such as careful, numerical thinking and an awareness of the cognitive biases that might lead them astray.

      Q: Who are superforecasters?

    4. The emergence of modern, powerful machine-learning models dates to the early years of the 2010s. And the field is still developing quickly. That leaves much less data on which to base predictions.
    5. If humans used AI to help design more potent bioweapons, for instance, it would have contributed fundamentally, albeit indirectly, to the disaster.
    6. One reason for AI’s strong showing, says Dan Maryland, a superforecaster who participated in the study, is that it acts as a “force multiplier” on other risks like nuclear weapons
    7. The median superforecaster reckoned there was a 2.1% chance of an AI-caused catastrophe, and a 0.38% chance of an AI-caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.

      Q: What is the likelihood of a catastrophe or extinction?

    8. A “catastrophe” was defined as something that killed a mere 10% of the humans in the world, or around 800m people. (The second world war, by way of comparison, is estimated to have killed about 3% of the world’s population of 2bn at the time.) An “extinction”, on the other hand, was defined as an event that wiped out everyone with the possible exception of, at most, 5,000 lucky (or unlucky) souls.

      Q: What is the difference between catastrophe and extinction?

    9. On the one hand were subject-matter, or “domain”, experts in nuclear war, bio-weapons, AI and even extinction itself. On the other were a group of “superforecasters”—general-purpose prognosticators with a proven record of making accurate predictions on all sorts of topics, from election results to the outbreak of wars.
    10. These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence (AI). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    1. India insists data must be stored locally: to give its law-enforcement agencies easy access, to protect against foreign snooping and as a way to boost investment in the tech sector.
    2. most governments lack policymakers with relevant technical expertise and most digital issues cut across different domains, extending beyond the traditional remit of trade negotiators.
    3. In 2019 Abe Shinzo, the late Japanese prime minister, proposed the concept of Data Free Flow with Trust. That rather nebulous idea is materialising as a set of global norms to counter digital protectionism. As Matthew Goodman of CSIS, a think-tank in Washington, puts it: “It’s about the un-China approach to data governance.”
    4. “There’s a vacuum in terms of rules, norms and agreements that govern digital trade,” laments Nigel Cory of the Information Technology and Innovation Foundation, a research institute in Washington .
    5. The Philippines and Guam have emerged as attractive substitutes.
    6. Hong Kong was traditionally one of three major data hubs in Asia, with Japan and Singapore.
    7. Intra-Asia data flows make up over 50% of the region’s bandwidth, up from 47% in 2018, while the share going to America and Canada has dipped from 40% to 34% over the same period.
    8. The most congested cable route in Asia is also its most contested: the South China Sea is the “main street” of submarine cables, especially between Japan, Singapore and Hong Kong, notes Murai Jun, a Japanese internet pioneer.
    9. “Customers are asking more about the security of cables and routes,” says Uchiyama Kazuaki of NTT World Engineering Marine Corporation, the firm that owns the Kizuna.
    10. Aside from a heavy state hand in China’s cable industry, such infrastructure tends to be privately financed and owned. A small handful of companies dominate the production and installation of cables; big tech firms are their main users.
    11. While in the past constructing internet infrastructure tended to be a “collaborative effort” between countries and between firms, in recent years its enabling environment has soured amid growing friction between China and America.
    12. Asia saw international bandwidth usage grow by 39% in 2022, compared to the global average of 36%, according to TeleGeography, a research firm.

    1. migrant and refugee

    1. Our continued investment in innovation and our specific regulatory environment is what helped us lead the world in critical tech industries like quantum computing.

      I disagree with this point.

    2. The average American is already starting to see the benefits of AI technology in accessibility, efficiency, and reduction of human error.

    1. The resulting generative AI models need not be trained from scratch but can build upon open-source generative AI that has used lawfully sourced content.
    2. Vendor and customer contracts can include AI-related language added to confidentiality provisions in order to bar receiving parties from inputting confidential information of the information-disclosing parties into text prompts of AI tools.
    3. they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feed their AI.
    4. Developing these audit trails would assure companies are prepared if (or, more likely, when) customers start including demands for them in contracts as a form of insurance that the vendor’s works aren’t willfully, or unintentionally, derivative without authorization
    5. would increase transparency about the works included in the training data.
    6. has announced that artists will be able to opt out of the next generation of the image generator.
    7. Stable Diffusion, Midjourney and others have created their models based on the LAION-5B dataset, which contains almost six billion tagged images compiled from scraping the web indiscriminately, and is known to include a substantial number of copyrighted creations.
    8. Customers of AI tools should ask providers whether their models were trained with any protected content
    9. There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools.
    10. to become unequivocally “transformative,”
    11. Google successfully defended itself against a lawsuit by arguing that transformative use allowed for the scraping of text from books to create its search engine, and for the time being, this decision remains precedential.
    12. without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,”
    13. the interpretation of the fair use doctrine,
    14. the bounds of what is a “derivative work” under intellectual property laws
    15. Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection.
    16. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply.
    17. Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles
    18. how the laws on the books should be applied
    19. does copyright, patent, trademark infringement apply to AI creations?
    20. This process comes with legal risks, including intellectual property infringement
    21. to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.

    1. With humans and AI working to their respective strengths, they can transform unknown unknowns into known unknowns, opening the door to breakthrough thinking: logical and conceptual leaps that neither could make without the other.
    2. our brains work in a reductive manner; we generate ideas and then explain them to other people.
    3. By focusing on areas where the human brain and machines complement one another.
    4. the technology is fundamentally backward-looking, trained on yesterday’s data — and the future might not look anything like the past
    5. How might you use those opportunities to throw people off balance so they’ll generate questions that reach beyond what they intellectually know to be right, what makes them emotionally comfortable, and what they are accustomed to saying and doing?
    6. Increased question velocity, variety, and especially novelty facilitate recognizing where you’re intellectually wrong, and becoming emotionally uncomfortable and behaviorally quiet — the very conditions that, we’ve found, tend to produce game-changing lines of inquiry.
    7. AI can take really obscure variables and make novel connections.
    8. sift through much more data, and connect more dots,
    9. “category jumping” questions — the gold standard of innovative inquiry
    10. uncover patterns and correlations in large volumes of data — connections that humans can easily miss without the technology
    11. more questions don’t necessarily amount to better questions, which means you’ll still need to exercise human judgment in deciding how to proceed.
    12. we found that 79% of respondents asked more questions, 18% asked the same amount, and 3% asked fewer.
    13. humans can start exploring the power of more context-dependent and nuanced questions
    14. to reveal deeply buried patterns in the data
    15. we’ve defined “artificial intelligence” broadly to include machine learning, deep learning, robotics, and the recent explosion of generative AI.)
    16. design-thinking sessions
    17. to help people become more inquisitive, creative problem-solvers on the job.
    18. can help people ask smarter questions, which in turn makes them better problem solvers and breakthrough innovators.
    19. from identification to ideation.
    20. Paired with “soft” inquiry-related skills such as critical thinking, innovation, active learning, complex problem solving, creativity, originality, and initiative, this technology can further our understanding of an increasingly complex world
    21. it can help people ask better questions and be more innovative.
    22. still view AI rather narrowly, as a tool that alleviates the costs and inefficiencies of repetitive human labor and increases organizations’ capacity to produce, process, and analyze piles and piles of data
    23. AI increases question velocity, question variety, and question novelty.

    1. “Cables are an enormous lever of power,” Wicker said. “If you can’t control these networks directly, you want a company you can trust to control them.”
    2. In 1997, AT&T sold its cable-laying operation, including a fleet of ships, to Tyco International, a security company based in New Jersey. In 2018, Tyco sold the cable unit, by this time dubbed TE SubCom, for $325 million to Cerberus, the New York private equity firm.
    3. That project, known as the Oman Australia Cable, was spearheaded by SUBCO, a Brisbane-based subsea cable investment company owned by Australian entrepreneur Bevan Slattery.
    4. “Silicon Valley is waking up to the reality that it has to pick a side,”
    5. Microsoft – whose President Brad Smith said in 2017 that the tech sector needed to be a “neutral digital Switzerland” – announced in May that it had discovered Chinese state-sponsored hackers targeting U.S. critical infrastructure, a rare example of a big tech firm calling out Beijing for espionage.
    6. America’s SubCom, Japan’s NEC Corporation, France’s Alcatel Submarine Networks and China’s HMN Tech.
    7. First, Washington needs SubCom to expand the Navy’s undersea cable network so that it can better coordinate military operations and enhance surveillance on China’s expanding fleet of submarines and warships, the people said. Second, the Biden administration wants SubCom to build more commercial subsea internet cables controlled by U.S. companies, a strategy aimed at ensuring that America remains the primary custodian of the internet, according to the two industry officials.
    8. Subsea cables are vulnerable to sabotage and espionage, and Beijing and Washington have accused each other of tapping cables to spy on data or carry out cyberattacks.
    9. SubCom’s journey from Cold War experiment to global cable constructor and now a shadowy player in the U.S.-China tech war is detailed in this story for the first time.
    10. This dual role has made SubCom increasingly valuable to Washington as global internet infrastructure – from undersea cables to data centers and 5G mobile networks – risks fracturing into two systems, one backed by the United States, the other controlled by China.
    11. SubCom is the exclusive undersea cable contractor to the U.S. military, laying a web of internet and surveillance cables across the ocean floor
    12. The CS Dependable is owned by SubCom, a small-town New Jersey cable manufacturer that’s playing an outsized role in a race between the United States and China to control advanced military and digital technologies that could decide which country emerges as the world’s preeminent superpower.

    1. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.
    2. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.
    3. either through training or policies — include:

      good strategies.

    4. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work.
    5. To realize opportunities and manage potential risks of generative AI applications to knowledge management, companies need to develop a culture of transparency and accountability that would make generative AI-based knowledge management systems successful.
    6. User prompts into publicly-available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

      Our model

    7. the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
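
      A minimal sketch of how such a "pre-prompt" might look in practice, assuming an OpenAI chat model behind the knowledge system; the wording, model name, and example question are illustrative assumptions, not details from the article.

```python
# Illustrative sketch only: a "pre-prompt" (system message) that scopes which
# questions the assistant should answer and which it should politely decline.
# The model name, wording, and user question are assumptions, not from the source.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pre_prompt = (
    "You answer questions only about the firm's published research documents. "
    "Politely decline requests for personal investment, legal, or tax advice."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": pre_prompt},
        {"role": "user", "content": "Summarize our latest note on semiconductor supply chains."},
    ],
)
print(response.choices[0].message.content)
```
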
    8. many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws).
    9. For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks.
    10. The good news is that companies who have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.
    11. if content authors are aware of how to create effective documents.

      This is basically embedding SEO in the process of creating documents.

    12. Most companies that do not have well-curated content will find it challenging to do so for just this purpose.

      Here is why our textus approach integrates curation in the process.

    13. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these determine the suitability for incorporation into the GPT-4 system.
    14. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada).

      Used by Diplo
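
      A minimal sketch of the vector-embedding approach mentioned above, using the OpenAI Ada embedding model named in the quote; the document texts and query are invented for illustration.

```python
# Minimal sketch: turning text chunks into vector embeddings with OpenAI's Ada model
# and ranking them against a query by cosine similarity. The texts are made-up examples.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "Overview of our investment process for balanced portfolios.",
    "Summary of second-quarter research on semiconductor supply chains.",
]
query = "How do we construct balanced portfolios?"

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vecs = embed(docs)
query_vec = embed([query])[0]

# Cosine similarity between the query and each document chunk.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(docs[int(np.argmax(scores))])  # the most relevant chunk for the query
```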

    15. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.
    16. Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge.
    17. It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors.
    18. with Google’s general PaLM2 LLM
    19. to add specific domain content to a system that is already trained on general knowledge and language-based interaction.
    20. Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time.
    21. some of the same factors that made knowledge management difficult in the past are still present.
    22. the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task

      Diplo was there in the pioneering phase of knowledge management.

    23. a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers.

      One of the areas where AI can help.

    24. can’t respond to prompts or questions regarding proprietary content or knowledge.

      This is the key aspect.

    25. to express complex ideas in articulate language
    26. knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings.

      Capturing various sources of knowledge.

    27. through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how.

    1. We have to be very, very careful about ensuring that it doesn’t come across as AI surveillance,
    2. introducing AI ethicists
    3. While there are laws being whipped up around how employers should ethically and responsibly implement AI around, for example, hiring, so as to ensure job posts aren’t discriminatory in any way (like NYC’s Law 144), there are still relatively few guardrails.

      NYC Law 144 on AI and employment. ||sorina||

    4. more traditional kind, which detects patterns from data and provides predictive analysis, which has been used for the last few years by some businesses, but isn’t yet mainstream.
    5. they need to invest in reskilling their workforces
    6. A bit like the customer relationship management software that airlines use, to create a more personalized travel experience.

      ||Andrej|| We can have something like a 'customised' student experience.

    7. “I’m a big believer in human interaction and connection. I don’t think that will go away. Automation and speed will help us do different things, focusing on more strategic, creative endeavors. But it will still be on us to understand our people on a human level.”
    8. The theory goes, that if people are saving time on more administrative, tedious aspects of their work, they’ll have more brain capacity for creative and strategic thinking.
    9. improve the meaning and purpose of work.
    10. “Knowing that AI will change many roles in the workforce, CEOs should encourage people to embrace experimentation with the technology and communicate the upskilling opportunities for them to work in tandem with the AI, rather than be replaced by it,”

      ||Jovan||

    11. “AI is not a technology that can be given to a CTO or a similar position, it is a tool that must be understood by the CEO and the entire management team.”
    12. four “trust-building” processes: improve the governance of AI systems and processes; confirm AI-driven decisions are interpretable and easily explainable; monitor and report on AI model performance; and protect AI systems from cyber threats and manipulations, per a 2023 Trust survey from management consultancy PwC.
    13. to road test AI.
    14. but leaders still need to be able to frame the issues, make the smart calls and ensure they’re well executed.”
    15. the human element of interpreting data and asking AI the right questions to make sound judgments, will be crucial.
    16. Most (94%) business leaders agree that AI is critical for success, per a 2022 Deloitte report.

    1. we must not dismiss legitimate concerns regarding the alignment problem
    2. We should prioritize targeted regulation for specific applications — recognizing that each domain comes with its own set of ethical dilemmas and policy challenges.
    3. shift in the allocation of National Science Foundation (NSF) funding toward responsible AI research
    4. addressing data privacy reform is essential.
    5. we implement retention and reproducibility requirements for AI research
    6. the government must limit abuse across applications using existing laws, such as those governing data privacy and discrimination
    7. Proposals for centralized government-backed projects underestimate the sheer diversity of opinions among AI researchers.
    8. while the Manhattan Project’s ultimate goal was relatively singular — design and build the atomic bomb — AI safety encompasses numerous ambiguities ranging from the meaning of concepts like “mechanistic interpretability” to “value alignment.
    9. to address the “alignment problem,” the fear that powerful AI models might not act in a way we ask of them; or to address mechanistic interpretability, the ability to understand the function of each neuron in a neural network.
    10. gets mired in unprecedented alarmism rather than focusing on addressing these more proximate — and much more likely — challenges.
    11. rogue actors using large AI models to dismantle cybersecurity around critical infrastructure; political parties using disinformation at scale to destabilize fragile democratic governments; domestic terrorists using these models to learn how to build homemade weapons; and dictatorial regimes using them to surveil their populations or build dystopian social credit systems, among others.
    12. distracts from the grounded conversations necessary to develop well-informed policies around AI governance.
    13. The technology is neither inherently good nor evil; in contrast, philosophers, ethicists, and even the pope have argued that the same could not necessarily be said about nuclear weapons, because their mere possession is an inherent threat to kill millions of people.
    14. These types of common exaggerations ultimately detract from effective policymaking aimed at addressing both immediate risks and potential catastrophic threats posed by certain AI technologies.

    1. Bundling occurs when a company offers multiple products together as a single package.
    2. Experience has also shown how firms can use “open first, closed later” tactics in ways that undermine long-term competition.
    3. The open-source innovation explosion in image generation, coupled with new developments in optimizations, made it possible for nearly anyone to develop, iterate on, and deploy the models using smaller datasets and lower-cost consumer hardware
    4. Additionally, some markets for specialized chips are—or could be, without appropriate competition policies and antitrust enforcement—highly concentrated.
    5. the fine-tuning technique LoRA (Low-Rank Adaptation) can enable a developer to fine-tune a model to perform a specific task using consumer-grade GPUs
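
      A minimal sketch of LoRA fine-tuning with the Hugging Face peft library; the base model ("gpt2") and the hyperparameters are illustrative assumptions, not taken from the report.

```python
# Minimal sketch: attaching LoRA adapters to a small causal language model so that
# only a low-rank set of extra weights is trained. Base model and settings are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a larger model

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the updates
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights
```
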
    6. Access to computational resources is a third key input in generative AI markets.
    7. Companies that can acquire both the engineering experience and professional talent necessary to build and package the final generative AI product or service will be better positioned to gain market share.
    8. Developing a generative model requires a significant engineering and research workforce with particular—and relatively rare—skillsets, as well as a deep understanding of machine learning, natural language processing, and computer vision.
    9. This may be particularly true in specialized domains or domains where data is more highly regulated, such as healthcare or finance
    10. First, more established companies may benefit from access to data collected from their users over many years—especially if the incumbents also own digital platforms that amass large amounts of data.
    11. The volume and quality of data required to pre-train a generative AI model from scratch may impact the ability of new players to enter the market.
    12. The pre-training step creates a base model with broad competency in a specific domain, such as language or images.
    13. It is especially important that firms not engage in unfair methods of competition or other antitrust violations to squash competition and undermine the potential far-reaching benefits of this transformative technology.

    1. AI has proved to be a controversial topic in education resulting in questions over authenticity. The Russell Group, which includes Cambridge University, has set out principles for its use

    1. What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
    2. "...experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law."
    3. This fall I will be using it in my courses for the first time. As an example of what I intend, I am going to make some written assignments a contest for students to come up with the best essay that has been generated by AI. This will force them to engage with AI, enter a number of prompts, and then evaluate the output with specific grading rubrics and their own knowledge of the subject.
    4. linear regression.
    5. Just fyi this is known as “pointed ai”… or predictive analytics.
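
      A minimal sketch of linear regression used as simple predictive analytics; the data points are invented for illustration.

```python
# Minimal sketch: fitting a straight line to past observations and extrapolating one
# period ahead. The numbers are invented for illustration.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6], dtype=float)       # past periods
sales = np.array([10.2, 11.1, 12.3, 12.9, 14.0, 15.1])   # observed values

slope, intercept = np.polyfit(months, sales, 1)           # least-squares fit
print(f"forecast for month 7: {slope * 7 + intercept:.1f}")
```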

    1. By posing a series of related questions, the model develops a coherent chain of thought, leading to more coherent and contextually appropriate responses.
    2. By encouraging structured reasoning, the model can generate more reliable and accurate responses.
    3. By leveraging such comprehensive textual resources, language models can exhibit a broader understanding of various topics.
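
      A minimal sketch of a chain-of-thought style prompt of the kind described above; the example question, the decomposition into sub-steps, and the model name are illustrative assumptions.

```python
# Minimal sketch: prompting a model to reason through related sub-questions before
# answering. The question, decomposition, and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "A train leaves at 14:10 and arrives at 16:45. How long is the journey?\n"
    "Reason step by step: first the minutes to the next full hour, then the "
    "remaining whole hours and minutes, and then add them together."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```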

    1. "Mandating direct payments to telecom operators in the EU absent assurances on spending could reinforce the dominant market position of the largest operators,"

    1. And so it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior.
    2. The defeat of Garry Kasparov by Deep Blue, and then going on to Go systems, Go programs, well, systems that could defeat some of the best Go players in the world. And then systems got better and better at translation between languages, and then at producing intelligible responses to difficult questions in natural language, and even writing poetry.

    1. Ambassador George Mina of Australia encouraged members to emphasize the significance of the initiative on e-commerce with trade ministers and to ensure they are engaged not only in the economic aspects of it but also in the larger strategic significance of the initiative.
    2. “While e-commerce has helped small businesses, and especially women-owned firms, tap into international markets, there is a great deal of room to improve. Digital trade can be an even more effective economic lifeline for marginalized groups and geographically remote areas.”
    3. To date, the initiative has “parked” 11 provisions, including paperless trading, e-contracts, e-signatures, e-invoicing, spam, consumer protection, cybersecurity and an electronic transactions framework. The negotiators hope to bridge differences on other provisions, such as “single windows”, personal information and data protection, by the summer.

      Agreements and disagreements in WTO ecommerce negotiations.


    1. But if the ocean nodules can be brought to market affordably, the sheer volume of metal available may start to ease the pressure on Indonesian forests.
    2. There are other environmental arguments in favour of mining the seabed. The nodules contain much higher concentrations of metal than deposits on land, which means less energy is required to process them. Peter Tom Jones, the director of the KU Leuven Institute for Sustainable Metals and Materials, in Belgium, reckons that processing the nodules will produce about 40% less greenhouse-gas emissions than those from terrestrial ore.

      Q: Why would mining nickel from the seabed be more environmentally friendly than mining it on land?

    3. Around 13 kilograms of biomass would be lost for every tonne of CCZ nickel mined.
    4. the island nation of Nauru
    5. Over the past five years most of the growth in demand has been met by Indonesia, which has been bulldozing rainforests to get at the ore beneath. In 2017 the country produced just 17% of the world’s nickel, according to CRU , a metals research firm. Today it is responsible for around half, or 1.6m tonnes a year, and that number is rising. CRU thinks Indonesia will account for 85% of production growth between now and 2027. Even so, that is unlikely to be enough to meet rising demand. And as Indonesian nickel production increases, it is expected to replace palm-oil production as the primary cause of deforestation in the country.

      Q: Why is Indonesia's nickel production critical for battery production and the future of digitalisation?

    6. Nickel in particular is in short supply.
    7. Demand for the minerals from which those batteries are made is soaring.

    1. A proposed mandate from Trudeau’s office says the group would issue reports aimed at guiding the development of policies that could keep AI technology grounded in human rights. It lists several areas of interest, including how data for AI projects is collected and accessed, the effect of AI on human rights, and whether people trust AI technology. The group will also discuss military uses of AI.

      Q: What would IPAI do?


    1. The celebrated psycholinguist argues that “the curse of knowledge” is the biggest cause of bad writing: like children, writers forget that others often do not know what they know.

      Good point about 'the curse of knowledge'.
