474 Matching Annotations
  1. Mar 2024
    1. Experts expect some inference to start moving from specialist graphics-processing units (GPUs), which are Nvidia’s forte, to general-purpose central processing units (CPUs) like those used in laptops and smartphones, which are AMD’s and Intel’s.

      So what does this mean in practical terms? Is Nvidia losing ground because the more common (?) general-purpose CPUs become relevant for AI? ||JovanK||


  2. Jan 2024
    1. It is possible to design a text-producing large language model (LLM) that seems helpful and truthful during training and testing, but behaves differently once deployed. And according to a study shared this month on arXiv, attempts to detect and remove such two-faced behaviour are often useless — and can even make the models better at hiding their true nature. The finding that trying to retrain deceptive LLMs can make the situation worse “was something that was particularly surprising to us … and potentially scary”, says co-author Evan Hubinger, a computer scientist at Anthropic, an AI start-up company in San Francisco, California.

      Kind of scary indeed. It takes us back to the question of trust in developers.


    1. including by scaling up the use of open science, affordable and open-source technology, research and development

      open science and open source

    2. We reaffirm that the creation, development and diffusion of innovations and new technologies and associated know-how, including the transfer of technology on mutually agreed terms, are powerful drivers of economic growth and sustainable development. We reiterate the need to accelerate the transfer of environmentally sound technologies to developing countries on favourable terms, including on concessional and preferential terms, as mutually agreed, and we note the importance of facilitating access to and sharing accessible and assistive technologies

      details for the part on tech transfers in the chapeau

    3. commit to exploring measures to address the risks involved in biotechnology and human enhancement technologies applied to the military domain

      Might have been good not to limit this to the applications in the military domain

    4. developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process

      also interesting

    5. strengthening oversight mechanisms for the use of data-driven technology, including artificial intelligence, to support the maintenance of international peace and security

      which mechanisms?

    6. commit to concluding without delay a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems

      LAWS - another proposed strong commitment

    7. international norms, rules and principles to address threats to space systems and, on that basis, launch negotiations on a treaty to ensure peace, security and the prevention of an arms race in outer space

      interesting commitment to working on a treaty for outer space security. curious if it will stay in the final doc

    8. We acknowledge that the accelerating pace of technological change necessitates ongoing assessment and holistic understanding of new and emerging developments in science and technology impacting international peace and security, including through misuse by non-State actors, including for terrorism

      tech, peace and security

    9. including through the transfer of technology on mutually agreed terms to help close the digital and innovation divide.

      Seems a win for developing countries. Tech transfers were highlighted quite a lot during the SDG summit last year. Let's see reactions...


    1. summary of Russell’s argument—which claims that with advanced AI, “you get exactly what you ask for, not what you want”

      Q: What is the alignment problem?

    2. 70% of respondents thought AI safety research should be prioritized more than it currently is.

      need for more AI safety research

    3. Between 41.2% and 51.4% of respondents estimated a greater than 10% chance of human extinction or severe disempowerment

      predictions on the likelihood of human extinction

    4. Amount of concern potential scenarios deserve, organized from most to least extreme concern

      very interesting, in particular the relatively limited concern related to the sense of meaning/purpose

    5. Even among net optimists, nearly half (48.4%) gave at least 5% credence to extremely bad outcomes, and among net pessimists, more than half (58.6%) gave at least 5% to extremely good outcomes. The broad variance in credence in catastrophic scenarios shows there isn’t strong evidence understood by all experts that this kind of outcome is certain or implausible

      basically difficult to predict the consequences of high-level machine intelligence

    6. scenarios worthy of most concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

      likelihood of existing and exclusion AI risks

    7. Most respondents considered it unlikely that users of AI systems in 2028 will be able to know the true reasons for the AI systems’ choices, with only 20% giving it better than even odds.

      predictions on explainability / interpretability of AI systems

    8. Answers reflected substantial uncertainty and disagreement among participants. No trait attracted near-unanimity on any probability, and no more than 55% of respondents answered “very likely” or “very unlikely” about any trait.

      !

    9. Only one trait had a median answer below ‘even chance’: “Take actions to attain power.” While there was no consensus even on this trait, it’s notable that it was deemed least likely, because it is arguably the most sinister, being key to an argument for extinction-level danger from AI

      .

    10. ‘intelligence explosion,’

      Term to watch for: intelligence explosion

    11. The top five most-suggested categories were: “Computer and Mathematical” (91 write-in answers in this category), “Life, Physical, and Social Science” (77 answers), “Healthcare Practitioners and Technical” (56), “Management” (49), and “Arts, Design, Entertainment, Sports, and Media”

      predictions on occupations likely to be among the last automatable

    12. predicted a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 survey

      Timeframe prediction for full automation of labour: 50% chance it would happen by 2116.

      But what does this prediction - and the definition of full automation of labour - mean in the context of an ever-evolving work/occupation landscape? What about occupations that might not exist today? Can we predict how those might or might not be automated?

    13. Say an occupation becomes fully automatable when unaided machines can accomplish it better and more cheaply than human workers. Ignore aspects of occupations for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. [. . . ] Say we have reached ‘full automation of labor’ when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers

      Q: What is full automation of labour?

    14. the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047.

      Survey: 10% chance that machines become better than humans in 'every possible task' by 2027, but 50% by 2047.

    15. High-level machine intelligence (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption

      Q: What is high-level machine intelligence?

    16. ‘High-Level Machine Intelligence’

      new term: high-level machine intelligence

    17. six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)

      Expectations on the tasks likely to be taken over by AI more than 10 years from now

    18. 2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR). This to our knowledge constitutes the largest survey of AI researchers to date

      Who participated in the survey

    19. They are experts in AI research, not AI forecasting, and might thus lack generic forecasting skills and experience, or expertise in non-technical factors that influence the trajectory of AI.

      Good to note this caveat

    20. lack of apparent consensus among AI experts on the future of AI

      This has always been the case, no?

    21. There was disagreement about whether faster or slower AI progress would be better for the future of humanity.

      interesting also

    22. “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.

      .

    23. While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.

      Survey results on AI extinction risks. Quite interesting

    24. the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
  3. Jul 2023
    1. In this classroom, each student should not only have the opportunity to practice but also actively participate in discussions, creating an inclusive and deeply participatory learning environment.
    2. Teaching others is a more powerful learning technique than re-reading or summarizing
    3. Issues that keep teams from fulfilling their potential, known as process loss; these include social loafing (when individuals put forth less effort when working in a group) and groupthink (when group members’ desire for conformity can lead to bad decisions)
    4. For any assignment, it’s not enough to cite the AI; students should include an Appendix noting what they used the AI for and where its output fits into their work
    5. you should try it out a number of times for your specific topic or concept and see how it reacts
    6. Because the AI can “get it wrong”, students need to be aware of those limits, and discussing this in class is one way to highlight its limitations
    7. the tutor's value is not merely subject knowledge, but also their capacity to prompt the student to make an effort, pay close attention to the material, make sense of new concepts, and connect what they know with new knowledge
    8. Tutoring involves small group or one-on-one sessions with a tutor focusing on skills building.

      Q: What does tutoring involve?

    9. That feedback should be considered critically, and students should be asked to articulate how and why the feedback they received is effective (or not).
    10. while the feedback may be helpful it should be coupled with an in-class discussion and clear guidelines
    11. feed-up, feedback, and feed-forward. Feed-up serves to clearly articulate the goals and expectations students are to meet. Feedback reflects students' current progress and pinpoints areas requiring further development; it provides actionable advice, helping students to achieve their goals. Feed-forward helps teachers plan and tweak their lessons based on student work

      Components of feedback: feed-up, feedback, and feed-forward

    12. they are most common when asking for quotes, sources, citations, or other detailed information

      When hallucinations are most likely to appear in LLMs.


    1. significant model public releases within scope

      ! Also, what is 'significant'?

    2. introduced after the watermarking system is developed

      !

    3. Treat unreleased AI model weights for models in scope as core intellectual property for their business,
    4. establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety,
    5. Companies commit to advancing this area of research, and to developing a multi-faceted, specialized, and detailed red-teaming regime, including drawing on independent domain experts, for all major public releases of new models within scope

      So applying to what comes next...

    6. Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier

      Very interesting... Not quite sure what is meant by 'particular models', though. ||JovanK||

    7. These voluntary commitments to remain in effect until regulations covering substantially the same issues come into force
    8. designed to advance a generative AI legal and policy regime
    9. Realizing the promise and minimizing the risk of AI will require new laws, rules, oversight, and enforcement.
    10. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations


    1. Collaborative could acquire or develop and then distribute AI systems to address these gaps, pooling resources from member states and international development programs, working with frontier AI labs to provide appropriate technology, and partnering with local businesses, NGOs, and beneficiary governments to better understand technological needs and overcome barriers to use

      What would be the relation with big tech companies?

      And 'acquire' how?

    2. Acquires and distributes AI systems
    3. institutions taking on the role of several of the models above
    4. addressed by offering safety researchers within AI labs dual appointments or advisory roles in the Project,
    5. it diverts safety research away from the sites of frontier AI development.
    6. such an effort would likely be spearheaded by frontier risk-conscious actors like the US and UK governments, AGI labs and civil society groups
    7. Accelerate AI safety research by increasing its scale, resourcing and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general purpose capabilities
    8. Exceptional leaders and governance structures
    9. an institution with significant compute, engineering capacity and access to models (obtained via agreements with leading AI developers), and would recruit the world’s leading experts
    10. An AI Safety Project need not be, and should be organized to benefit from the AI Safety expertise in civil society and the private sector

      And funded how?

    11. like ITER and CERN.
    12. conduct technical AI safety research at an ambitious scale
    13. try to exclude from participation states who are likely to want to use AI technology in non-peaceful ways, or make participation in a governance regime the precondition for membership
    14. consistently implement the necessary controls to manage frontier systems
    15. The Collaborative to have a clear mandate and purpose
    16. diffusing dangerous AI technologies around the world, if the most powerful AI systems are general purpose, dual-use, and proliferate easily.
    17. would need to significantly promote access to the benefits of advanced AI (objective 1), or put control of cutting-edge AI technology in the hands of a broad coalition (objective 2)
    18. The resources required to overcome these obstacles are likely to be substantial,

      Indeed.

    19. Understanding the needs of member countries, building absorptive capacity through education and infrastructure, and supporting the development of a local commercial ecosystem to make use of the technology
    20. being deployed for “protective” purposes
    21. Increase global resilience to misused or misaligned AI systems
    22. existence of a technologically empowered neutral coalition may also mitigate the destabilizing effects of an AI race between states

      feasible?

    23. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    24. reduce geopolitical instability amidst fierce AI competition among states

      hmm

    25. ensure the benefits of cutting-edge AI reach groups that are otherwise underserved by AI development
    26. legitimate international access to advanced AI
    27. a Frontier AI Collaborative could take the form of an international private-public partnership that leverages existing technology and capacity in industry, for example by contracting access to or funding innovation in appropriate AI technology from frontier AI developers.
    28. Aligned countries may seek to form governance clubs, as they have in other domains. This facilitates decision-making, but may make it harder to enlist other countries later in the process

      Budapest Convention is a case in point

    29. its effectiveness will depend on its membership, governance and standard-setting processes
    30. Governance Organization should focus primarily on advanced AI systems that pose significant global risks, but it will be difficult in practice to decide on the nature and sophistication of AI tools that should be broadly available and uncontrolled versus the set of systems that should be subject to national or international governance
    31. Automated (even AI-enabled) monitoring
    32. some states may be especially reluctant to join due to fear of clandestine noncompliance by other states
    33. Other AI-specific incentives for participation include conditioning on participation access to AI technology (possibly from a Frontier AI Collaborative) or computing resources. States might also adopt import restrictions on AI from countries that are not certified by a Governance Organization

      Surely interesting. Though in the current geopolitical context, it is not difficult to imagine how this would work.

    34. while urgent risks may need to be addressed at first by smaller groups of frontier AI states, or aligned states with relevant expertise

      Geopolitics comes into play again? Surely 'frontier AI states' is different from 'aligned states'.

    35. The impact of a Governance Organization depends on states adoptingits standards and/or agreeing to monitoring.
    36. standard setting (especially in an international and multistakeholder context) tends to be a slow process
    37. detection and inspections of large data centers
    38. self-reporting of compliance with international standards
    39. Where states have incentives to undercut each other's regulatory commitments, international institutions may be needed to support and incentivize best practices. That may require monitoring standards compliance.
    40. therefore enable greater access to advanced AI

      Implementing international standards enables greater access to tech?

    41. international standard setting would reduce cross-border frictions due to differing domestic regulatory regimes

      If those standards (whatever we mean by them...) are taken up broadly.

    42. Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    43. The International Telecommunication Union (ITU)

      In what sense?

    44. Advanced AI Governance Organization.
    45. intergovernmental or multi-stakeholder
    46. as less institutionalized and politically authoritative scientific advisory panels on advanced AI

      So an advisory panel stands a better chance of reaching that consensus than a commission?

    47. Representation may trade off against a Commission’s ability to overcome scientific challenges and generate meaningful consensus
    48. broad geographic representation in the main decision-making bodies, and a predominance of scientific experts in working groups
    49. The scientific understanding of the impacts of AI should ideally be seen as a universal good and not be politicized
    50. foundational “Conceptual Framework”
    51. Commission might undertake activities that draw and facilitate greater scientific attention, such as organizing conferences and workshops and publishing research agendas

      :))

    52. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in the implementation. E.g. whose standards would determine who counts as a 'highly competent' AI expert?

    53. internationally representative group of experts
    54. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    55. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    56. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    57. intergovernmental body
    58. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    59. norms and standards

      Are norms and standards rules?

    60. standards

      Again, what kind of standards are we talking about?

    61. Establish guidelines

      Don't we have enough of these?

    62. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    63. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    64. developing and/or enabling safe forms of access to AI.

      What does this mean?

    65. Controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

    66. to build consensus on risks and how they can be mitigated, and set safety norms and standards and support their implementation to help developers and regulators with responsible development and use. International efforts to conduct or support AI safety research
    67. national regulation may be ineffective for managing the risks of AI even within states. States will inevitably be impacted by the development of such capabilities in other jurisdictions
    68. building capacity to benefit from AI through education, infrastructure, and local commercial ecosystems

      Building capacity is always good. Not quite sure whose capacities we are talking about here, and who would build them, and how.

    69. build consensus on AI opportunities

      Do we really need to build consensus on this?

    70. Standards

      The more we talk/write about standards, the more I feel we need a bit of clarity here as well: Are we talking about technical standards (the ones developed by ITU, ISO, etc) or 'policy' standards (e.g. principles) or both?

    71. ISO/IEC

      ITU, IEEE...

    72. whether these institutions should be new or evolutions of existing organizations, whether the conditions under which these institutions are likely to be most impactful will obtain,

      But aren't these essential questions if all this is to work?

    73. A failure to coordinate or harmonize regulation may also slow innovation.

      A cynical question: Is this coming from a commercial interest angle, then?


  4. Jun 2023
    1. My worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company

      why exactly?

    2. I worry that as the models get better and better the users can have sort of less and less of their own discriminating thought process around it. But, but I think users are more capable than we give often, give them credit for in, in conversations like this.
    3. but also the genome project
    4. to us as the American people to write the answer
    5. It’s really like the invention of the internet in scale, at least, and potentially far, far more significant than that

  5. May 2023
    1. in the text such as multistakeholders and networked-multilateralism

      Also in the GDC policy brief, where we have a combination of MSH, tripartite, networked cooperation.


  6. Feb 2023
    1. 2017: ESA announces its astronauts will train alongside Chinese ones, with the overall goal of having Europeans sent to China's space station. Jan 2023: ESA: "For the moment we have neither the budgetary nor the political, let’s say, green light or intention to engage in a second space station—that is participating on the Chinese space station"

    2. European space officials like the Artemis program and are seeking areas for greater involvement. This is drawing them closer to NASA.
    3. However, the more significant reason is probably a political one
    4. January, Josef Aschbacher, director general of the European Space Agency, said his focus remains on the International Space Station Partnership with NASA, Russia, Canada, and Japan. "For the moment we have neither the budgetary nor the political, let’s say, green light or intention to engage in a second space station—that is participating on the Chinese space station,"
    5. Nearly six years ago the European Space Agency surprised its longtime spaceflight partners at NASA, as well as diplomatic officials at the White House, with an announcement that some of its astronauts were training alongside Chinese astronauts. The goal was to send European astronauts to China's Tiangong space station by 2022.

      2017: ESA announces its astronauts will train alongside Chinese ones, with the overall goal of having Europeans sent to China's space station.


    1. A new study argues that the boundary between Earth's atmosphere and outer space—known as the Kármán line—is 20 kilometers, or about 20%, closer than scientists thought
    2. Until now, most scientists have said that outer space is 100 kilometers away.

      Earth atmosphere - space boundary (Kármán line): was thought to be at 100 km. Now scientists say it may be 20 km closer.

    3. World Air Sports Federation (FAI) in Lausanne, Switzerland, the keeper of outer space records

    1. A country’s sovereign airspace theoretically extends to the (disputed) boundary with space.

      state sovereignty in space

    2. More serious challenges for balloons might be regulatory and political.
    3. This kind of technology could eventually be a threat—or a complement—to existing space data businesses.
    4. The benefits include higher resolution imagery: World View’s investor presentation claims sensor resolutions of 3 to 5 centimeters per pixel, compared to the 50 cm per pixel resolution that is top of line in space-based Earth observation. The balloons also offer more persistent monitoring by floating above one location for weeks at a time, whereas Earth observation satellites can typically gather data a few times a day at most. World View already has a partnership with Maxar, a leading satellite firm with an extensive remote-sensing business.

      High-altitude balloons vs Earth observation satellites


    1. Axiom, a space company that flies passengers to the International Space Station and is developing its own space station, said this week that most of the demand for passenger services is coming from governments without their own space programs, not tourists with deep pockets.

      Axiom - private company flying passengers to ISS. Also building own space station.

    2. the project is still on the drawing board, with hopes to finalize its design by 2025. The goal is to fly the vehicle in 2027—but, as with most space technology forecasts, that’s an optimistic projection.
    3. The US has only put a nuclear reactor in space once before, through an Air Force program called SNAPSHOT.
    4. But the dream of space fanatics is a proper nuclear rocket, one using a fission reactor to run an engine two to three times more powerful than any motor dependent on combusting fossil fuels. Launched into space on a conventional rocket, it could shorten trips to Mars or give the Space Force unprecedented maneuverability. Last week, NASA and DARPA, the US military’s advanced tech lab, announced a collaborative project called DRACO to build and test exactly such a vehicle.

      towards a 'nuclear space rocket'

    5. Deep space missions regularly rely on nuclear power, using the heat emitted by radioactive substances to generate electricity without an atom-splitting chain reaction. NASA’s Perseverance rover, currently exploring the surface of Mars, is powered by one of these devices.

      Deep space missions relying on nuclear power


    1. OpenAI announced they've "trained a classifier to distinguish between text written by a human and text written by AIs from a variety of providers", while noting it is not "fully reliable": it correctly identifies 26% of AI-written text (true positives) as "likely AI-written," while incorrectly labeling human-written text as AI-written 9% of the time (false positives).

      ||JovanNj|| ||Jovan||
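To see what those rates imply in practice, here is a small hypothetical calculation. The 26% true-positive and 9% false-positive rates come from the announcement above; the 50/50 split between AI- and human-written texts is an assumption invented for this example:

```python
# Toy illustration of OpenAI's reported classifier rates (not OpenAI code).
# TPR = P(flagged "likely AI" | AI-written)    = 0.26
# FPR = P(flagged "likely AI" | human-written) = 0.09
tpr, fpr = 0.26, 0.09

# Assumed prior: half the texts screened are AI-written (hypothetical).
p_ai = 0.5

# Bayes' rule: probability a flagged text is actually AI-written.
p_flagged = tpr * p_ai + fpr * (1 - p_ai)
p_ai_given_flag = tpr * p_ai / p_flagged

print(f"P(AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.74 under these assumptions
```

In other words, even under this generous prior, roughly one in four "likely AI-written" flags would hit a human author.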


  7. Jan 2023
    1. Asteroid miners prepare for launch. AstroForge, which wants to pluck platinum-group metals from near-Earth objects, plans to launch a payload in April that will demonstrate the ability to mine metals in space. The company also expects to launch another spacecraft onboard Intuitive Machines’ lunar mission later this year that will fly to a target asteroid and assess its viability for future mining.

      Asteroid mining missions (private) to launch in 2023.


    1. Germany's new strategy for Africa (just launched).

      A few points on digital:

      • Digital transformation among the focus areas for development cooperation (although in the same basket with employment, fair trade, and migration)
      • Support for AfCFTA
      • Mobilise investment in digital infra
      • Support for the digital economy. Specifically, support for: enhancing economic and political frameworks; creating digital markets; enabling secure, universal internet access and bridging digital divides; fostering legal standards and data privacy regulations.
      • Stimulate the creation of ICT jobs
      • Support the digitalisation of healthcare
      • Supporting women's economic participation, including through providing training for women with a special focus on digital expertise.
      • Supporting the digitalisation of the public sector and the use of digital tech to strengthen political participation

      Update published on DW and Diplo:

      https://www.diplomacy.edu/updates/digital-transformation-among-the-priorities-of-germanys-new-strategy-for-africa/

      ||mwendenATdiplomacy.edu|| FYI

    2. training for women, with a special focus on digital expertise.

      Supporting women's economic participation, including through providing training for women with a special focus on digital expertise.

    3. The focus is to be increasingly on software solutions (digital health),

      Support the digitalisation of healthcare

    4. stimulating the creation of jobs offering decent working conditions. It focuses in particular on the promising industries of the future, such as information and communication technologies (ICT),

      stimulating the creation of ICT jobs

    5. Support digitalisation of the African economy. The BMZ aims to effectively support the rapidly developing digital economy, for example through the Make-IT in Africa initiative, the establishment of digicentres and activities to assist African initiatives including the Smart Africa Alliance. It helps African partner countries to enhance the economic and political framework for digital transformation, to create digital markets, provide secure, universal internet access and bridge the “digital divide” within the population. It is also fostering legal standards and data privacy regulations, for example through Team Europe Initiatives such as the African European Digital Innovation Bridge Network and the EU-AU Data Flagship.

      Support for the digital economy. Specifically, support for: enhancing economic and political frameworks; creating digital markets; enabling secure, universal internet access and bridging digital divides; fostering legal standards and data privacy regulations.

    6. digital infrastructure and health infrastructure, the BMZ aims to mobilise investment –

      Mobilise investment in digital infra

    7. Support the AfCFTA and ensure trade agreements are pro-development

      Support for AfCFTA

    8. German development policy faces the challenge of finding differentiated and flexible responses that take account of the fact that African states have their own interests, and that each state has its own view of the world and its own vision of the best economic, political and social order.
    9. digital transformation

    1. TITLE: US government launches Digital Transformation with Africa

      TEXT: The US government has launched a Digital Transformation with Africa (DTA) initiative dedicated to 'expand[ing] digital access and literacy and strengthen[ing] digital enabling environments across the continent'. The USA plans to dedicate over US$350 million to this initiative, which is expected to support the implementation of both the African Union's Digital Transformation Strategy and the US Strategy Towards Sub-Saharan Africa. DTA's objectives revolve around three pillars:

      1. Digital economy and infrastructure: (a) expanding access to an open, interoperable, reliable, and secure internet; (b) expanding access to key enabling digital technologies, platforms, and services and scaling the African technology and innovation ecosystem; (c) facilitating investment, trade, and partnerships in Africa’s digital economy.
      2. Human capital development: (a) facilitating inclusive access to digital skills and literacy, particularly for youth and women; (b) fostering inclusive participation in the digital economy; (c) strengthening the capacity of public sector employees to deliver digital services.
      3. Digital enabling environment: (a) strengthening the capacities of authorities and regulators to develop, implement, and enforce sound policies and regulations; (b) supporting policies and regulations that promote competition, innovation, and investment; (c) promoting governance that strengthens and sustains an open, interoperable, reliable, and secure digital ecosystem.

      Date: 14 December 2022

    2. Pillar 2: Human Capital Development
    3. Pillar 1: Digital Economy and Infrastructure
    4. DTA will foster an inclusive and resilient African digital ecosystem, led by African communities and built on an open, interoperable, reliable, and secure internet
    5. invest over $350 million and facilitate over $450 million in financing for Africa

    1. That seems likely to change in the next few weeks, when an uncrewed lander becomes the first commercial vehicle to touch down on the Moon

      private sector making its way into Moon missions/exploration

      • 1 JP and 2 US private-led missions underway or planned
    2. The Outer Space Treaty of 1967, space law’s foundational text, is showing its age. It dates back to the era when only governments had access to space. And it states that no claims of sovereignty can be made, on the Moon or elsewhere. Efforts to update the treaty to establish rules around resource extraction have run into the lunar regolith. America has refused to sign the Moon Agreement, adopted by 18 countries in 1984, whereas China and Russia have rejected America’s latest proposal, the Artemis accords of 2020.

      existing governance frameworks for moon/space resource exploration; fragmentation

    3. Peregrine lander built by Astrobotic Technology, in Pittsburgh, Pennsylvania, also operates under the CLPS programme
    4. “commercial lunar payload services” (CLPS) programme
    5. Nova-C, created by Intuitive Machines, a startup in Houston, Texas
    6. One vehicle, HAKUTO-R Mission 1, operated by ispace, a Japanese firm, is already on its way and is scheduled to land in late April
    7. Of 178 successful missions in 2022, 90 were by companies (in many cases subcontracted by governments), and of those 61 were by one firm, SpaceX.

      overview of orbit launches in 2022


    1. growing support for a ban on anti-satellite weapons tests as one sign of progress, but that effort came after a series of particularly messy orbital tests, and can be seen as an effort to limit Chinese and Russian weapons development.

      To look into: ban on anti-satellite weapons tests

      ||sorina||

    2. What might change that, the report suggests, is a disaster
    3. most influential actors in the space economy, whether nation-states or companies, are still happier with a free hand than an insurance policy
    4. global governance: Rules for space traffic management, protocols for space debris mitigation and removal, and norms for economic activity in space, from resource extraction to property rights.

      Space governance areas


    1. OpenAI is developing a tool for “statistically watermarking the outputs of a text [AI system].” Whenever a system — say, ChatGPT — generates text, the tool would embed an “unnoticeable secret signal” indicating where the text came from.

      OpenAI apparently working on a tool to watermark AI-generated content and make it 'easier to spot'.

      ||JovanNj||||Jovan||

    2. he expressed the belief that, if OpenAI can demonstrate that watermarking works and doesn’t impact the quality of the generated text, it has the potential to become an industry standard.Not everyone agrees. As Devadas points out, the tool needs a key, meaning it can’t be completely open source — potentially limiting its adoption to organizations that agree to partner with OpenAI. (If the key were to be made public, anyone could deduce the pattern behind the watermarks, defeating their purpose.)But it might not be so far-fetched. A representative for Quora said the company would be interested in using such a system, and it likely wouldn’t be the only one.

      potential standard

    3. “If [it] becomes a free-for-all, then a lot of the safety measures do become harder, and might even be impossible, at least without government regulation,”

      how regulation comes into play

    4. Aaronson acknowledged the scheme would only really work in a world where companies like OpenAI are ahead in scaling up state-of-the-art systems — and they all agree to be responsible players. Even if OpenAI were to share the watermarking tool with other text-generating system providers, like Cohere and AI21Labs, this wouldn’t prevent others from choosing not to use it.
    5. Unaffiliated academics and industry experts, however, shared mixed opinions.

      Potential challenges/shortcomings of the tool

    6. OpenAI also declined, saying only that watermarking is among several “provenance techniques” it’s exploring to detect outputs generated by AI.
    7. Watermarking AI-generated text isn’t a new idea
    8. OpenAI’s watermarking tool acts like a “wrapper” over existing text-generating systems, Aaronson said during the lecture, leveraging a cryptographic function running at the server level to “pseudorandomly” select the next token. In theory, text generated by the system would still look random to you or me, but anyone possessing the “key” to the cryptographic function would be able to uncover a watermark.

      What the watermark tool would be.

    9. “We want it to be much harder to take [an AI system’s] output and pass it off as if it came from a human,” Aaronson said in his remarks. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda — you know, spamming every blog with seemingly on-topic comments supporting Russia’s invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them.”

      why the tool is developed

    10. the hope is to build it into future OpenAI-developed systems
    11. why it is working on a way to “watermark” AI-generated content
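The keyed “wrapper” described in these excerpts can be sketched in toy form. This is not OpenAI’s implementation — the HMAC-based scoring, the key, and the tiny vocabulary are all invented for illustration — but it shows how a cryptographic function can bias otherwise random-looking token choices so that only a key-holder can detect the pattern:

```python
import hashlib
import hmac
import random

KEY = b"demo-secret-key"  # hypothetical watermarking key

def keyed_score(key: bytes, context: str, token: str) -> float:
    """Pseudorandom score in [0, 1) derived from the key, context, and candidate token."""
    digest = hmac.new(key, f"{context}|{token}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(key: bytes, context: str, candidates: list) -> str:
    """Choose the candidate with the highest keyed score (the watermark bias)."""
    return max(candidates, key=lambda t: keyed_score(key, context, t))

def watermark_strength(key: bytes, tokens: list) -> float:
    """Average keyed score over a text; high values suggest this key produced it."""
    scores = [keyed_score(key, " ".join(tokens[:i]), t) for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)

# Generate 20 tokens, each time picking the highest-scoring of 5 random candidates.
vocab = ["the", "a", "cat", "dog", "sat", "ran", "quickly", "slowly"]
rng = random.Random(0)
tokens = []
for _ in range(20):
    candidates = rng.sample(vocab, 5)
    tokens.append(pick_token(KEY, " ".join(tokens), candidates))

# The key-holder sees a high average score; with the wrong key it looks uniform (~0.5).
print(round(watermark_strength(KEY, tokens), 2))
print(round(watermark_strength(b"wrong-key", tokens), 2))
```

In the scheme Aaronson describes, the keyed selection is applied over the model’s actual next-token distribution rather than a uniform candidate set, so the output still reads like ordinary sampling; the toy above only captures the key-detectable statistical bias.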

    1. will fund musical curricula focused on conflict resolution and foreign exchange programs for young musicians across the globe
    2. Alongside funding for military jets and naval warships, the new $857.9 billion US defense spending bill includes a program to fund musical exchanges around the globe. Dubbed the PEACE Through Music Diplomacy Act, the legislation funds US State Department cultural exchange projects that encourage artistic collaboration across borders.

      Music diplomacy in the US defence spending bill.

      ||JovanK||


  8. Dec 2022
    1. People using the stimulator and their physicians could no longer access the proprietary software needed to recalibrate the device and maintain its effectiveness.

      Key question: What happens to neurotech and to the people using it when the companies behind the tech are no longer around?

      Issues: access to software, battery drain, ...
