1. Jul 2023
    1. “I’m a big believer in human interaction and connection. I don’t think that will go away. Automation and speed will help us do different things, focusing on more strategic, creative endeavors. But it will still be on us to understand our people on a human level.”
    2. The theory goes that if people are saving time on the more administrative, tedious aspects of their work, they’ll have more brain capacity for creative and strategic thinking.
    3. improve the meaning and purpose of work.
    4. “Knowing that AI will change many roles in the workforce, CEOs should encourage people to embrace experimentation with the technology and communicate the upskilling opportunities for them to work in tandem with the AI, rather than be replaced by it,”

      ||Jovan||

    5. “AI is not a technology that can be given to a CTO or a similar position, it is a tool that must be understood by the CEO and the entire management team.”
    6. four “trust-building” processes: improve the governance of AI systems and processes; confirm AI-driven decisions are interpretable and easily explainable; monitor and report on AI model performance; and protect AI systems from cyber threats and manipulations, per a 2023 Trust survey from management consultancy PwC.
    7. to road test AI.
    8. but leaders still need to be able to frame the issues, make the smart calls and ensure they’re well executed.”
    9. the human element of interpreting data and asking AI the right questions to make sound judgments will be crucial.
    10. Most (94%) business leaders agree that AI is critical for success, per a 2022 Deloitte report.

    1. we must not dismiss legitimate concerns regarding the alignment problem
    2. We should prioritize targeted regulation for specific applications — recognizing that each domain comes with its own set of ethical dilemmas and policy challenges.
    3. shift in the allocation of National Science Foundation (NSF) funding toward responsible AI research
    4. addressing data privacy reform is essential.
    5. we implement retention and reproducibility requirements for AI research
    6. the government must limit abuse across applications using existing laws, such as those governing data privacy and discrimination
    7. Proposals for centralized government-backed projects underestimate the sheer diversity of opinions among AI researchers.
    8. while the Manhattan Project’s ultimate goal was relatively singular — design and build the atomic bomb — AI safety encompasses numerous ambiguities ranging from the meaning of concepts like “mechanistic interpretability” to “value alignment.”
    9. to address the “alignment problem,” the fear that powerful AI models might not act in a way we ask of them; or to address mechanistic interpretability, the ability to understand the function of each neuron in a neural network.
    10. gets mired in unprecedented alarmism rather than focusing on addressing these more proximate — and much more likely — challenges.
    11. rogue actors using large AI models to dismantle cybersecurity around critical infrastructure; political parties using disinformation at scale to destabilize fragile democratic governments; domestic terrorists using these models to learn how to build homemade weapons; and dictatorial regimes using them to surveil their populations or build dystopian social credit systems, among others.
    12. distracts from the grounded conversations necessary to develop well-informed policies around AI governance.
    13. The technology is neither inherently good nor evil; in contrast, philosophers, ethicists, and even the pope have argued that the same could not necessarily be said about nuclear weapons, because their mere possession is an inherent threat to kill millions of people.
    14. These types of common exaggerations ultimately detract from effective policymaking aimed at addressing both immediate risks and potential catastrophic threats posed by certain AI technologies.

    1. Bundling occurs when a company offers multiple products together as a single package.
    2. Experience has also shown how firms can use “open first, closed later” tactics in ways that undermine long-term competition.
    3. The open-source innovation explosion in image generation, coupled with new developments in optimizations, made it possible for nearly anyone to develop, iterate on, and deploy the models using smaller datasets and lower-cost consumer hardware
    4. Additionally, some markets for specialized chips are—or could be, without appropriate competition policies and antitrust enforcement—highly concentrated.
    5. the fine-tuning technique LoRA (Low-Rank Adaptation) can enable a developer to fine-tune a model to perform a specific task using consumer-grade GPUs
    6. Access to computational resources is a third key input in generative AI markets.
    7. Companies that can acquire both the engineering experience and professional talent necessary to build and package the final generative AI product or service will be better positioned to gain market share.
    8. Developing a generative model requires a significant engineering and research workforce with particular—and relatively rare—skillsets, as well as a deep understanding of machine learning, natural language processing, and computer vision.
    9. This may be particularly true in specialized domains or domains where data is more highly regulated, such as healthcare or finance
    10. First, more established companies may benefit from access to data collected from their users over many years—especially if the incumbents also own digital platforms that amass large amounts of data.
    11. The volume and quality of data required to pre-train a generative AI model from scratch may impact the ability of new players to enter the market.
    12. The pre-training step creates a base model with broad competency in a specific domain, such as language or images.
    13. It is especially important that firms not engage in unfair methods of competition or other antitrust violations to squash competition and undermine the potential far-reaching benefits of this transformative technology.
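The LoRA technique quoted in item 5 fine-tunes a model by training a small low-rank update on top of a frozen weight matrix, which is why consumer-grade GPUs suffice. A minimal numpy sketch (the dimensions, rank, and alpha below are illustrative, not taken from the article):

```python
import numpy as np

# Low-Rank Adaptation (LoRA) sketch: instead of updating a full
# d_out x d_in weight matrix W, train two small matrices B (d_out x r)
# and A (r x d_in), and apply W_eff = W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 768, 768, 8, 16

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, rank))                   # trainable, zero init, so W_eff == W at start

W_eff = W + (alpha / rank) * B @ A

full_params = d_out * d_in            # parameters a full fine-tune would touch
lora_params = rank * (d_out + d_in)   # parameters LoRA actually trains
print(f"full fine-tune params: {full_params:,}")  # 589,824
print(f"LoRA params:           {lora_params:,}")  # 12,288
```

The roughly 50x reduction in trainable parameters (and the matching drop in optimizer state) is what brings fine-tuning within reach of a single consumer GPU.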

    1. AI has proved to be a controversial topic in education, resulting in questions over authenticity. The Russell Group, which includes Cambridge University, has set out principles for its use

    1. What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
    2. "...experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law."
    3. This fall I will be using it in my courses for the first time. As an example of what I intend, I am going to make some written assignments a contest for students to come up with the best essay that has been generated by AI. This will force them to engage with AI, enter a number of prompts, and then evaluate the output with specific grading rubrics and their own knowledge of the subject.
    4. linear regression.
    5. Just fyi this is known as “pointed ai”… or predictive analytics.
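The “predictive analytics” the commenter mentions often reduces to models as simple as the linear regression named in item 4. A minimal least-squares sketch on synthetic data (the data and coefficients are made up for illustration):

```python
import numpy as np

# Ordinary least squares: fit y = a*x + b to noisy data with np.polyfit.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)  # true slope 2, intercept 1

a, b = np.polyfit(x, y, deg=1)  # returns highest-degree coefficient first
print(round(a, 1), round(b, 1))
```

The fitted slope and intercept recover the values used to generate the data, which is the whole predictive-analytics loop in miniature: fit on history, extrapolate forward.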

    1. By posing a series of related questions, the model develops a coherent chain of thought, leading to more coherent and contextually appropriate responses.
    2. By encouraging structured reasoning, the model can generate more reliable and accurate responses.
    3. By leveraging such comprehensive textual resources, language models can exhibit a broader understanding of various topics.
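The chain-of-thought prompting described in items 1 and 2 can be sketched as a prompt template: a worked example with explicit reasoning steps is prepended so the model is nudged to reason before answering. The example question and wording below are hypothetical:

```python
# Chain-of-thought prompt construction (illustrative sketch).
def build_cot_prompt(question: str) -> str:
    # One worked example showing step-by-step reasoning before the answer.
    example = (
        "Q: A shop has 3 boxes of 12 apples and sells 10 apples. How many remain?\n"
        "A: Let's think step by step. 3 * 12 = 36 apples. 36 - 10 = 26. "
        "The answer is 26.\n\n"
    )
    # The trailing cue invites the model to continue with its own reasoning.
    return example + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?")
print(prompt)
```

The template itself does nothing clever; the gain comes from the model imitating the structured reasoning pattern it sees in the example.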

    1. "Mandating direct payments to telecom operators in the EU absent assurances on spending could reinforce the dominant market position of the largest operators,"
    Created with Sketch. Visit annotations in context

    Created with Sketch. Annotators

    Created with Sketch. URL

    1. And so it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior.
    2. The defeat of Garry Kasparov by Deep Blue, and then going on to Go systems, Go programs, well, systems that could defeat some of the best Go players in the world. And then systems got better and better at translation between languages, and then at producing intelligible responses to difficult questions in natural language, and even writing poetry.

    1. Ambassador George Mina of Australia encouraged members to emphasize the significance of the initiative on e-commerce with trade ministers and to ensure they are engaged not only in the economic aspects of it but also in the larger strategic significance of the initiative.
    2. “While e-commerce has helped small businesses, and especially women-owned firms, tap into international markets, there is a great deal of room to improve. Digital trade can be an even more effective economic lifeline for marginalized groups and geographically remote areas.”
    3. To date, the initiative has “parked” 11 provisions, including paperless trading, e-contracts, e-signatures, e-invoicing, spam, consumer protection, cybersecurity and an electronic transactions framework. The negotiators hope to bridge differences on other provisions, such as “single windows”, personal information and data protection, by the summer.

      Agreements and disagreements in WTO ecommerce negotiations.


    1. But if the ocean nodules can be brought to market affordably, the sheer volume of metal available may start to ease the pressure on Indonesian forests.
    2. There are other environmental arguments in favour of mining the seabed. The nodules contain much higher concentrations of metal than deposits on land, which means less energy is required to process them. Peter Tom Jones, the director of the KU Leuven Institute for Sustainable Metals and Materials, in Belgium, reckons that processing the nodules will produce about 40% less greenhouse-gas emissions than those from terrestrial ore.

      Q: Why would mining nickel from the seabed be more environmentally friendly than mining on land?

    3. Around 13 kilograms of biomass would be lost for every tonne of CCZ nickel mined.
    4. the island nation of Nauru
    5. Over the past five years most of the growth in demand has been met by Indonesia, which has been bulldozing rainforests to get at the ore beneath. In 2017 the country produced just 17% of the world’s nickel, according to CRU , a metals research firm. Today it is responsible for around half, or 1.6m tonnes a year, and that number is rising. CRU thinks Indonesia will account for 85% of production growth between now and 2027. Even so, that is unlikely to be enough to meet rising demand. And as Indonesian nickel production increases, it is expected to replace palm-oil production as the primary cause of deforestation in the country.

      Q: Why is Indonesia's nickel production critical for battery production and the future of digitalisation?

    6. Nickel in particular is in short supply.
    7. Demand for the minerals from which those batteries are made is soaring.

    1. A proposed mandate from Trudeau’s office says the group would issue reports aimed at guiding the development of policies that could keep AI technology grounded in human rights. It lists several areas of interest, including how data for AI projects is collected and accessed, the effect of AI on human rights, and whether people trust AI technology. The group will also discuss military uses of AI.

      Q: What would IPAI do?


    1. The celebrated psycholinguist argues that “the curse of knowledge” is the biggest cause of bad writing: like children, writers forget that others often do not know what they know.

      Good point about 'the curse of knowledge'.

    2. Orwell exhorted writers to “never use the passive where you can use the active”, Williams explains how passives can sometimes help create a sense of flow.
    3. But if you can first grasp the origins and qualities of bad writing, you may learn to diagnose and cure problems in your own prose (keeping things simple helps a lot).

      Can AI do it?


    1. Britain becoming an AI superpower.

      Q: Can Britain become an AI superpower?


    1. As the US expands the number of critical data sets that are publicly available, their conditions of use and access will become increasingly important (Bates 2014).

      What is this?


    1. “It appears there was some sort of confrontation between the two men in the house and the Israeli army, and the two men in the house were shot dead,”

      @sorina What about your considerations?


    1. Some regulations intervene in the market while others facilitate the market; some regulations benefit large companies and others harm them.
    2. Both Democrats and Republicans support regulations that would require companies to label AI creations as such,
    3. As a self-professed pro-union president, Biden must answer to unions across the country that worry that the technology could eliminate workers’ jobs.
    4. By racing to regulate AI, lawmakers could miss the opportunity to address some of the technology’s less obvious dangers. Machine learning algorithms are often biased, explained Eric Rice, who founded USC’s Center for AI in Society. Several years ago, researchers found that a popular healthcare risk-prediction algorithm was racially biased, with Black patients receiving lower risk scores.
    5. Schaake expressed concern that when Altman and others warn of existential threats from AI, they are putting the regulatory focus on the horizon, rather than in the present. If lawmakers are worrying about AI ending humanity, they’re overlooking the more immediate, less dramatic worries.
    6. AI companies should adhere to “an appropriate set of safety requirements,” which could entail a government-run licensing or registration system.

      OpenAI is keen to have government-run licensing and registration system for AI.
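The risk-prediction bias described in item 4 can be surfaced with a simple group-wise audit: compare average predicted risk across demographic groups. The records and scores below are entirely hypothetical, only the auditing pattern matters:

```python
# Minimal bias-audit sketch: mean predicted risk per demographic group.
# Data is synthetic; real audits would also compare against actual outcomes.
records = [
    {"group": "A", "risk": 0.62}, {"group": "A", "risk": 0.58},
    {"group": "B", "risk": 0.41}, {"group": "B", "risk": 0.37},
]

def mean_risk_by_group(rows):
    totals = {}
    for r in rows:
        totals.setdefault(r["group"], []).append(r["risk"])
    return {g: sum(v) / len(v) for g, v in totals.items()}

print(mean_risk_by_group(records))  # group B scores systematically lower
```

In the healthcare study the article cites, exactly this kind of gap (lower risk scores for Black patients at equal illness levels) revealed that the model's proxy target, healthcare cost, encoded historical inequity.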


    1. AI governance is a journey, not a destination.

      C: nice slogan for speeches

    2. The concept of safety brakes, along with licensing for highly capable foundation models and AI infrastructure obligations, should be key elements of the voluntary, internationally coordinated G7 code that signatory nation states agree to incorporate into their national systems.

      ||sorina|| It won't be voluntary any more.

    3. a means for mutual recognition of compliance and safety across borders
    4. against internationally agreed standards
    5. the OECD to develop principles for trustworthy AI
    6. an AI system certified as safe in one jurisdiction can also qualify as safe in another.
    7. a voluntary AI Code of Conduct.

      ||sorina|| It will be presented next week in the EU Parliament.

    8. As well as India and Indonesia
    9. In each area, the key to success will be to develop concrete initiatives and bring governments, companies, and NGOs together to advance them.
    10. Unless academic researchers can obtain access to substantially more computing resources, there is a real risk that scientific and technological inquiry will suffer, including that relating to AI itself.
    11. Transparency requirements in the AI Act, and potentially several of the to-be-developed standards related to the Act, present an opportunity to leverage such industry initiatives towards a shared goal.
    12. One of these is the Coalition for Content Provenance Authenticity, or C2PA, a global standards body with more than 60 members including Adobe, the BBC, Intel, Microsoft, Publicis Groupe, Sony, and Truepic. The group is dedicated to bolstering trust and transparency of online information including releasing the world’s first technical specification for certifying digital content in 2022, which now includes support for Generative AI.

      ||sorina|| New content standard in the making.

    13. whenever an AI system is used to create artificially generated content, this should be easy to identify.
    14. The AI Act will require that AI providers make it clear to users that they are interacting with an AI system.
    15. The AI Act requires developers of high-risk systems to put in place a risk management system to ensure that systems are tested, to mitigate risks to the extent possible, including through responsible design and development, and to engage in post-market monitoring.
    16. It’s also important to make sure that obligations are attached to powerful AI models, with a focus on a defined class of highly capable foundation models and calibrated to model-level risk. This will impact two layers of the technology stack. The first will require new regulations for these models themselves. And the second will involve obligations for the AI infrastructure operators on which these models are developed and deployed. The blueprint we developed offers suggested goals and approaches for each of these layers.

      ||sorina|| This is an article arguing for banning certain AI development, but well camouflaged.

    17. The AI Act acknowledges the challenges to regulating complex architecture through its risk-based approach for establishing requirements for high-risk systems.
    18. They would be akin to the braking systems engineers have long built into other technologies such as elevators, school buses, and high-speed trains, to safely manage not just everyday scenarios, but emergencies as well.

      ||VladaR|| Are you and Anastasija getting into AI safety and security? This is seriously heating up and will probably become the most important story.

    19. Our blueprint proposes new safety requirements that, in effect, would create safety brakes for AI systems that control the operation of designated critical infrastructure.
    20. As the EU finalizes the AI Act, the EU could consider using procurement rules to promote the use of relevant trustworthy AI frameworks. For instance, when procuring high-risk AI systems, EU procurement authorities could require suppliers to certify via third-party audits that they comply with relevant international standards.

      C: Is there something on positioning Microsoft and OpenAI as actor.

      ||sorina||

    21. such as the AI Risk Management Framework developed by the U.S. National Institute of Standards and Technology, or NIST, and the new international standard ISO/IEC 42001 on AI Management Systems, which is expected to be published in the fall of 2023.

      C: Other risk-management approaches, including ISO/IEC 42001 ||sorina||

    22. we’ve been supportive of a regulatory regime in Europe that effectively addresses safety and upholds fundamental rights while continuing to enable innovations that will ensure that Europe remains globally competitive.
    23. People who design and operate AI systems cannot be accountable unless their decisions and actions are subject to the rule of law.
    24. This is the fundamental need to ensure that machines remain subject to effective oversight by people, and the people who design and operate machines remain accountable to everyone else.

      Q: What does AI accountability mean?

    25. accountability
    26. There are enormous opportunities to harness the power of AI to contribute to European growth and values. But another dimension is equally clear. It’s not enough to focus only on the many opportunities to use AI to improve people’s lives. We need to focus with equal determination on the challenges and risks that AI can create, and we need to manage them effectively.

      C: opportunities-risk framing


    1. the Algorithmic Accountability Act.

      C: new proposal to be considered

    2. the banning of certain applications of AI, like sentiment analysis or facial recognition, echoing parts of the EU regulation
    3. it would require that tech companies identify and label AI-made text and images, which is a massive undertaking.
    4. should tech companies have that same ‘get out of jail free’ pass for AI-generated content?
    5. A giant unanswered question for AI regulation in the US is whether we will or won’t see Section 230 reform.
    6. AI must reflect “communist values.”)
    7. The subtext here is the narrative that US AI companies are different from Chinese AI companies.
    8. Technology, and AI in particular, ought to be aligned with “democratic values.”
    9. regulations from the European Union, which some tech companies and critics say will stifle innovation.
    10. regulators will probably be calling on tech CEOs to ask how they’d like to be regulated.
    11. Schumer called innovation the “north star” of US AI strategy

      C: Metaphor for US regulation

    12. Individual agencies like the FTC, the Department of Commerce, and the US Copyright Office have been quick to respond to the craze of the last six months, issuing policy statements, guidelines, and warnings about generative AI in particular.

      C: There is issue-specific approach to AI governance in the United States.

    13. a National AI Commission to manage AI policy,
    14. a bill that would exclude generative AI from Section 230 (the law that shields online platforms from liability for the content their users create).

    1. remain a high-stakes cat-and-mouse game
    2. AI has become a double-edged sword in the era of digital information.
    3. the potential of AI in safeguarding truth and trust in the digital age is vast.
    4. may generate false positives and negatives,
    5. AI models need large volumes of labeled data for training, but there's a shortage of labeled deepfake datasets.

      Need data

    6. by analyzing the linguistic structure of the content, cross-referencing it with verified databases, tracking the origin and propagation patterns, and more.
    7. to realistically simulate blinking
    8. AI models are being trained to spot deepfakes by identifying subtle patterns that humans might miss.

      C: AI helps dealing with deepfakes

    9. The generator improves its fakes based on feedback from the discriminator, resulting in extremely realistic deepfakes.

      C: Adversarial training (the generator learns from the discriminator's feedback)

    10. Generative Adversarial Networks (GANs)

      C: new terminology

    11. Deepfakes, powered by advanced machine learning techniques, can create hyper-realistic videos or audio where individuals appear to say or do things that they never did
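The generator/discriminator loop behind the GANs in items 9–11 can be shown on a toy problem. Below, a one-parameter "generator" shifts noise by mu, a logistic "discriminator" learns to separate real from fake, and the generator is updated purely from the discriminator's feedback until its output matches the real distribution. All parameters are illustrative, and real deepfake GANs are deep networks, not this 1-D sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
real_mean = 3.0                    # the "real data" distribution: N(3, 0.5)
mu, w, b = 0.0, 0.0, 0.0           # generator shift; discriminator weight/bias
lr_d, lr_g, batch = 0.05, 0.1, 64

for _ in range(2000):
    real = real_mean + 0.5 * rng.standard_normal(batch)
    fake = mu + 0.5 * rng.standard_normal(batch)

    # Discriminator step: logistic regression, label real=1 / fake=0.
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    p = sigmoid(w * x + b)
    w -= lr_d * np.mean((p - y) * x)
    b -= lr_d * np.mean(p - y)

    # Generator step: nudge mu so the discriminator scores fakes as real
    # (gradient ascent on log D(fake); d/dmu = (1 - D) * w).
    p_fake = sigmoid(w * fake + b)
    mu += lr_g * np.mean((1.0 - p_fake) * w)

print(round(mu, 2))  # drifts from 0 toward the real mean
```

At equilibrium the discriminator can no longer tell real from fake, which is exactly why GAN-made deepfakes are hard to detect and why detection becomes the cat-and-mouse game of item 1.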

    1. some form of “guidance or policy” before the US presidential elections next year, and hopes the industry can “converge on a small set of high-level principles.”
    2. He also said regulation could create a more level playing field for companies that are already trying to follow safe and ethical best practices.
    3. plans to “actively engage with governments and all stakeholders to establish risk-based regulatory frameworks” around AI.
    4. pointed to the CAI’s “nutritional label” format, which gives viewers a sense of AI involvement in creation while not revealing too much about a given creative professional’s “secret sauce.”

      ||JovanK|| how to use nutritional label for SDGs

    5. the Content Authenticity Initiative (CAI) aimed at bringing more transparency to AI-generated content, said they hope that their efforts will help the group stay ahead of any potential regulations.
    6. We have all kinds of use cases for AI, and it has to expand beyond GPT and large language models, or what we’re gonna legislate is GPT and large language models—not AI,
    7. “Maybe there should be some other method to make sure that people are acting well—have some scoring system,
    8. ‘Please give licenses to make big models,’” he said. “And if you see this from the perspective of trying to use regulation as a way to close the door behind you, all of the testimony fits in perfectly.”
    9. Senators on both sides of the aisle generally agreed on the need for regulation
    10. Microsoft President Brad Smith also threw the company’s support behind the creation of a new government agency and licensing system for pre-trained AI models at an event in Washington
    11. Altman, along with IBM’s Christina Montgomery, made suggestions for how Congress should take action, with Altman proposing “licensing and testing requirements” and Montgomery suggesting “precision regulation” of specific use cases.

    1. Search Console recommendation:

      ||aleksandarsATdiplomacy.edu|| @misas@diplomacy.edu GA4 and Search Console integration.

    2. New Predictive Insights

      Critical new feature of predictive AI.

    3. Having a complete guide to the behavior and interactions of your active users.
    4. If Universal Analytics was all about page views, GA4 is all about events.

      ||MilicaVK||||Jovan|| This is a critical conceptual shift from page to event in web and online design.

    5. to collect web and app data

      It is very important as we develop more and more app tools.

    6. Changes to online privacy policies for one, and changes in consumer behavior for another.

      Two reasons why GA4 was introduced.
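The page-to-event shift in item 4 is visible in the shape of GA4's data: everything, including a page view, is an event with a name and parameters, as in the Measurement Protocol payload sketched below. The client ID and event details are placeholder values, not real credentials:

```python
import json

# Sketch of a GA4-style event payload (Measurement Protocol shape):
# a client identifier plus a list of named events with parameters.
def build_ga4_payload(client_id: str, name: str, params: dict) -> dict:
    return {
        "client_id": client_id,
        "events": [{"name": name, "params": params}],
    }

payload = build_ga4_payload("555.123", "file_download", {"file_name": "report.pdf"})
print(json.dumps(payload, indent=2))
```

In Universal Analytics a download like this had to be shoehorned into a pageview or a category/action/label event; in GA4 it is just another named event alongside page_view, which is the conceptual shift the comment flags.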


  2. Jun 2023
    1. Man is born free; and everywhere he is in chains.

      C: collection of the first sentences in the books.

    2. The nature of this guiding sentiment is explained in the Discourse on Inequality (p. 197, note 2), where egoism (amour-propre) is contrasted with self-respect (amour de soi). Naturally, Rousseau holds, man does not want everything for himself, and nothing for others.
    3. The General Will is, then, above all a universal and, in the Kantian sense, a "rational" will.
    4. The whole complex of human institutions is not a mere artificial structure; it is the expression of the mutual dependence and fellowship of men.
    5. "Why ought I to obey the General Will?" is that the General Will exists in me and not outside me. I am "obeying only myself," as Rousseau says.
    6. The Sovereign must, therefore, treat all its members alike; but, so long as it does this, it remains omnipotent. If it leaves the general for the particular, and treats one man better than another, it ceases to be Sovereign; but equality is already presupposed in the terms of the Contract.
    7. We have seen that the theory of the Social Contract is founded on human freedom: this freedom carries with it, in Rousseau's view, the guarantee of its own permanence; it is inalienable and indestructible.
    8. Rousseau bases his political doctrine throughout on his view of human freedom; it is because man is a free agent capable of being determined by a universal law prescribed by himself that the State is in like manner capable of realising the General Will, that is, of prescribing to itself and its members a similar universal law.
    9. The justification of democracy is not that it is always right, even in intention, but that it is more general than any other kind of supreme power.
    10. regarding it as a purely ideal conception, to which human institutions can only approximate, and holding it to be realised actually in every republican State, i.e. wherever the people is the Sovereign in fact as well as in right
    11. Every association of several persons creates a new common will; every association of a permanent character has already a "personality" of its own, and in consequence a "general" will
    12. By the General Will Rousseau means something quite distinct from the Will of All, with which it should never have been confused.
    13. "There is often," he says, "a great deal of difference between the will of all and the general will; the latter takes account only of the common interest, while the former takes private interest into account, and is no more than a sum of particular wills."
    14. The body politic is also a moral being, possessed of a will, and this general will, which tends always to the preservation and welfare of the whole and of every part, and is the source of the laws, constitutes for all the members of the State, in their relations to one another and to it, the rule of what is just or unjust."
    15. The effect of the Social Contract is the creation of a new individual.
    16. "Doubtless," says Rousseau, "there is a universal justice emanating from reason alone; but this justice, to be admitted among us, must be mutual. Humanly speaking, in default of natural sanctions, the laws of justice are ineffective among men."
    17. Rousseau saw the only means of securing effective popular government in a federal system, starting from the small unit as Sovereign.
    18. democracy is possible only in small States, aristocracy in those of medium extent, and monarchy in great States
    19. Government, therefore, exists only at the Sovereign's pleasure, and is always revocable by the sovereign will.
    20. Sovereignty, on the other hand, is in his view absolute, inalienable, indivisible, and indestructible.
    21. Government, therefore, will always be to some extent in the hands of selected persons.
    22. Rousseau regards as inalienable a supreme power which Hobbes makes the people alienate in its first corporate action.

      Q: What is the difference between Rousseau and Hobbes when it comes to sovereignty?

    23. It is the view that the people, whether it can alienate its right or not, is the ultimate director of its own destinies, the final power from which there is no appeal.
    24. This would leave us still in the realm of mere fact, outside both right and philosophy.
    25. essential to distinguish between the legal Sovereign of jurisprudence, and the political Sovereign of political science and philosophy.
    26. Where Sovereignty is placed is, on this view, a question purely of fact, and never of right.
    27. "Sovereignty is the exercise of the general will."
    28. He wished to break up the nation-states of Europe, and create instead federative leagues of independent city-states.
    29. he therefore held that self-government was impossible except for a city.

    1. In the future, the new modulation format they developed is likely to increase bandwidths in other data transmission methods where the energy of the beam can become a limiting factor.
    2. The French space company Thales Alenia Space is an expert in targeting lasers with centimetre accuracy over thousands of kilometres in space. ONERA, also French, is an aerospace research institute with expertise in MEMS-based adaptive optics, which has largely eliminated the effects of shimmering in the air. The most effective method of signal modulation, which is essential for high data rates, is a specialty of Leuthold's ETH Zurich research group.
    3. Paris-based project partner ONERA deployed a microelectromechanical system (MEMS) chip with a matrix of 97 tiny adjustable mirrors. The mirrors' deformations correct the phase shift of the beam on its intersection surface along the currently measured gradient 1,500 times per second, ultimately improving the signals by a factor of about 500.
    4. Laser optical systems, in contrast, operate in the near-infrared range with wavelengths of a few micrometres, which are about 10,000 times shorter. As a result, they can transport more information per unit of time.
    5. However, transmitting data between satellites and ground stations uses radio technologies, which are considerably less powerful. Like a wireless local area network (WLAN) or mobile communications, such technologies operate in the microwave range of the spectrum and thus have wavelengths measuring several centimetres.
    6. The laser beam travels through the dense atmosphere near the ground. In the process, many factors – diverse turbulence in the air over the high snow-covered mountains, the water surface of Lake Thun, the densely built-up Thun metropolitan area and the Aare plain – influence the movement of the light waves and consequently also the transmission of data. The shimmering of the air, triggered by thermal phenomena, disturbs the uniform movement of light and can be seen with the naked eye on hot summer days.
    7. "For optical data transmission, our test route between the High Altitude Research Station on the Jungfraujoch and the Zimmerwald Observatory at the University of Bern is much more challenging than between a satellite and a ground station,"
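      The excerpts above compare carrier wavelengths; the claimed ~10,000× factor can be checked with a quick back-of-the-envelope calculation (a minimal sketch using illustrative round-number wavelengths, not figures from the project):

      ```python
      # Quick check of the wavelength comparison in the excerpts: microwave links
      # (wavelengths of several centimetres) vs. near-infrared laser links
      # (a few micrometres). Round numbers chosen for illustration only.
      C = 299_792_458  # speed of light, m/s

      microwave_wavelength = 0.03  # 3 cm, a typical satellite radio wavelength
      laser_wavelength = 3e-6      # 3 um, near-infrared

      f_microwave = C / microwave_wavelength  # ~10 GHz carrier
      f_laser = C / laser_wavelength          # ~100 THz carrier

      # The laser wavelength is ~10,000x shorter, so its carrier frequency is
      # ~10,000x higher -- which is what allows far more information per unit time.
      print(f"microwave carrier: {f_microwave / 1e9:.1f} GHz")
      print(f"laser carrier:     {f_laser / 1e12:.1f} THz")
      print(f"wavelength ratio:  {microwave_wavelength / laser_wavelength:,.0f}x")
      ```

      A higher carrier frequency leaves room for proportionally more modulation bandwidth, which is why optical links can outpace radio links by orders of magnitude.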

    1. Interesting discussion on ways to regulate AI use, and the role (and limitations) of open source there, by Bruce Schneier and Waldo.

      It raises some interesting questions about the accountability of the open source community. They argue, as many others do, that the OS community is too fluid to be regulated. I tend to disagree - the OS community has many levels, and a certain OS component (say, code on GitHub) gets picked up by others at certain points and pushed to a mass market for benefit (commercial or other). It is when such OS products are picked up that the risk explodes - and that is when we see tangible entities (companies or orgs) that should be, and are, accountable for how they use the OS code and push it to the mass market.

      I see an analogy with vulnerabilities in digital products, and the responsibility of the OS community for supply chain security. While each coder should be accountable, for individuals it probably boils down to ethics (as the effect of a single GitHub product is very limited); but there are entities in this supply chain that integrate such components and that clearly should be held accountable.

      My comments are below. It is an interesting question for the Geneva Dialogue as well, not only for AI debates.

      cc ||JovanK|| ||anastasiyakATdiplomacy.edu||

    2. Now that the open-source community is remixing LLMs, it’s no longer possible to regulate the technology by dictating what research and development can be done;

      There is a certain analogy with the security of open source, and how to ensure that open source code, which ends up being an integral part of commercial products, is secure at the outset. It might not be possible to hold every code-writer in the open source community accountable for vulnerabilities, but there are certain moments later on when that code is picked up and commercialised by others, which open a window of accountability. It is similar with LLMs: it is when certain code is picked up by others (often for monetisation or some other benefit) that accountability exists as well.

    3. Open source isn’t very good at original innovations, but once an innovation is seen and picked up, the community can be a pretty overwhelming thing.

      It is exactly this 'pick-up' which is the milestone to look at: this is when the actors involved go beyond a single GitHub contributor and include certain entities (organisations or companies) that put resources into the promotion and reach of the product they have integrated, in order to create a mass-market effect. This is where accountability for development can be sought as well.

    4. The only governance mechanism available to governments now is to regulate usage (and only for those who pay attention to the law), or to offer incentives to those (including startups, individuals, and small companies) who are now the drivers of innovation in the arena.

      Is it really so? While the open source community is diverse and numerous (and often boils down to a single person), its products become significant when they are put together and brought to the market (whether for free or for monetisation). In other words, the 'danger' is not in each piece of code itself, but arises once those pieces are integrated into a powerful and mass-used product. This means there are certain milestones at which the risks become big enough to address - and those milestones also involve certain entities (typically companies or organisations) that benefit from the reach of the product in one way or another. The devil is in the details: we need to closely monitor how and when open source products come to a mass market and cause concern - and who the main actors are at that very point who could be held accountable. This still belongs to 'development', and addresses the developers (or integrators), not the users.


    1. National defence is a paradigmatic example of a public good.
    2. goods are usually defined as public goods if and only if they are both non-rivalrous and non-excludable (e.g., Varian 1992: 414)
    3. Bob’s consumption of a grain of rice makes it impossible for Sally to consume the same grain of rice. By contrast, Sally’s enjoyment of Bruckner’s Symphony No. 9 in no way diminishes Bob’s ability to do the same. Rice is thus rivalrous while music is not.
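    The definition in the excerpts is a simple conjunction of two properties, which can be made concrete in a small sketch (the example goods and their labels are my own illustrative assumptions, following the rice and music examples above):

    ```python
    # A good is a public good iff it is both non-rivalrous and non-excludable
    # (the standard textbook definition quoted above).
    def is_public_good(rivalrous: bool, excludable: bool) -> bool:
        return not rivalrous and not excludable

    # Illustrative classifications (labels are assumptions for the example):
    goods = {
        "rice":             dict(rivalrous=True,  excludable=True),   # private good
        "national defence": dict(rivalrous=False, excludable=False),  # public good
        "broadcast music":  dict(rivalrous=False, excludable=True),   # club good
    }

    for name, props in goods.items():
        verdict = "public good" if is_public_good(**props) else "not a public good"
        print(f"{name}: {verdict}")
    ```

    The third row shows why both conditions are needed: music is non-rivalrous, but a subscription service can still exclude non-payers, so it fails the second test.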

    1. The experience gained through the engagement of the EU Office in San Francisco with the tech sector should be used
    2. the establishment of informal digital hubs
    3. The Council calls for the informal EU Digital Diplomacy Network to continue to engage in strategic discussions on key emerging and challenging issues of tech and digital policy and regularly to convene in enlarged format, bringing in, as appropriate, other European and like-minded partners, as well as other stakeholders and relevant networks, and to further strengthen its coordination with the EU Cyber Ambassadors' Network
    4. the Council calls on the High Representative, the Commission, and Member States to enhance digital capacity building and cooperation with Africa
    5. The Digital for Development (D4D) Hub is a good example of the Team Europe approach to digital cooperation with partner regions globally.
    6. as well as digital commons which contribute to increasing the usability of new technologies and data for the benefit of society as a whole, offering trusted and secure international connectivity, such as subsea and terrestrial cables, or wireless networks, and taking into account ICT supply chain security as an important element of building a resilient digital ecosystem
    7. coordination in order to ensure that an improved Internet Governance Forum (IGF) remains the main global platform for multistakeholder digital dialogue after 2025, in order to maintain support for the open, global, free, interoperable and decentralised internet, including in the context of the negotiations for a Global Digital Compact

      ||sorina|| The EU made a strong endorsement of the IGF.

    8. an ambitious agreement on e-commerce in the context of the World Trade Organisation (WTO), including rules on the data free flow with trust;
    9. Strengthen the role of the EU in the International Telecommunication Union (ITU), by clarifying strategic goals, notably in view of the Plenipotentiary Conference in 2026, developing coordinated positions, including, where appropriate, with other partners in the European Conference of Postal and Telecommunications Administrations (CEPT), particularly on telecommunication standardisation, including future generations such as 6G, radio-communication and development, conducting cross-regional outreach and promoting as a strategic objective the ITU's commitment to achieving universal, meaningful connectivity that respects human rights and fundamental freedoms; and increasing cooperation among EU Member States represented in the ITU Council. The EU should also aim to strengthen coordination in the International Organization for Standardization (ISO) and other standard-setting fora to ensure that new technologies develop on the basis of interoperable and/or open standards.
    10. Notably the negotiations of the Global Digital Compact (GDC) and close cooperation with the UN Tech Envoy, in particular on matters concerning human rights and the multistakeholder model of Internet Governance, which is open, inclusive and decentralised.
    11. Towards Geneva-based organizations such as the International Telecommunication Union (ITU) and the World Trade Organization (WTO)
    12. In a Team Europe approach
    13. The rights of those in vulnerable and/or marginalized situations, including women, youth, children, older people and persons with disabilities, continue to address inequalities, such as the digital gender divide, and step up action to strongly oppose and combat all forms of discrimination on any ground, with specific attention to multiple and intersecting forms of discrimination, including on grounds of sex, race, ethnic or social origin, religion or belief, political or any other opinion, disability, age, sexual orientation and gender identity.

      List of weak constituencies.

    14. in line with the vision of digital humanism and preserving human dignity.
    15. by fostering digital literacy as well as advancing the human-centric and human rights-based approach to digital technologies, such as Artificial Intelligence, throughout their whole lifecycle.
    16. the twin digital and green transitions offer a huge opportunity for sustainable development worldwide