10,924 Matching Annotations
  1. Jun 2023
    1. banning biological weapons at the Biological Weapons Convention during the Cold War
    2. a system of prohibitions, regulations, ethics, and norms that ensures the wellbeing of society and individuals.
    3. Yet the ethos of this research is not to ‘move fast and break things,’ but to innovate as fast and as safely as possible.
    4. Due to a handful of early missteps with nuclear energy, we have been unable to capitalize on its clean, safe power, and carbon neutrality and energy stability remain a pipe dream.
    5. These weapons, representing the first time in human history that man had developed a technology capable of ending human civilization, were the product of an arms race prioritizing speed and innovation over safety and control.

    1. Away from geopolitics, Mr Sunak is keen to interest Mr Biden in his ideas about new international structures to regulate artificial intelligence. As sound and rational as these ideas may be, however, they are likely to be swamped should America decide to work something out with the EU.

    1. Mr Rama’s confidence encompasses the region. The entire Balkans, he notes, even traditionally pro-Russian Serbia, is united behind Ukraine. A member of Nato since 2009 and since 2010 part of the Schengen group that grants visa-free travel within Europe for up to three months, Albania has no serious tussles with its neighbours, he adds. “This is unique in our history.”

    1. Each party owns half of Debswana, which mines 95% of the diamonds in Botswana, the second biggest producer after Russia

    1. It seems likely that many poor countries will want to emulate it, to their advantage—and India’s too. ■
    2. India’s reputation is much better in the global south than America’s or China’s.
    3. India’s technology could in such ways be tainted by the vishwaguru’s growing authoritarianism.
    4. The system also suffers security breaches. Experts say it is very easy to access it with false credentials or spoof fingerprints. India’s technology offer, says one analyst, includes a lot of “hot air”.
    5. By promoting its technology as a means to transform poor countries, India hopes to position itself as a neutral third force between what it sees as the transactional West and an authoritarian China.
    6. Cross-border linkages of such systems could bypass America’s financial architecture. In February NPCI connected UPI with Singapore’s digital payments system, PayNow. In April it did the same with the United Arab Emirates’ system. Indians should, in theory, now be able to use UPI in shops and restaurants in Dubai. “India is self-sufficient on the domestic payments. We would like to be self-sufficient on cross-border payments and remittances as well,” says Dilip Asbe, NPCI’s boss.
    7. In the event of a future crisis, domestic payments systems based on UPI could be insulated; they would be harder for American sanctions to target.
    8. And just as Europe’s influence on global technology has been boosted by its regulatory power, so India’s will grow if many countries adopt Indian-made digital systems.
    9. What we are trying to do is get them to build their own systems with building blocks which are interoperable,”
    10. The Philippines was the first to sign up; 76m of its 110m people have been issued with digital IDs using MOSIP’s technology, says its boss, S. Rajagopalan. Morocco conducted a trial of the technology in 2021 and has made it available to 7m of its 36m people. Other countries using or piloting MOSIP include Ethiopia, Guinea, Sierra Leone, Sri Lanka and Togo.
    11. The International Institute of Information Technology, a university in Bangalore, launched the Modular Open Source Identity Platform (MOSIP) in 2018 to offer a publicly accessible version of Aadhaar-like technology to other countries.
    12. DPI is “infrastructure that can enable not just government transactions and welfare but also private innovation and competition,” says C.V. Madhukar of Co-Develop, a fund recently launched to help countries interested in building DPI pool resources.
    13. At the club’s meetings, delegates are hammering out a definition of DPI. India is also trying to launch a multilateral funding body to push DPI globally. It hopes to unveil both at a G20 leaders’ summit in September, marking the end of its presidency.
    14. Starting without legacy systems such as credit cards and desktop computers, developing countries can leapfrog the West.
    15. Partly to that end, India invited 125 such countries to a “Voice of the Global South Summit” in Delhi in January. “I firmly believe that countries of the global south have a lot to learn from each other’s development,” Mr Modi told their delegates, offering DPI as an example.
    16. Aadhaar is run by the government; UPI is managed by a public-private venture, the National Payments Corporation of India (NPCI). Other platforms, such as for health and sanitation management, are created by NGOs and sold to state and local governments. Many have been designed by IT experts with private-sector experience.
    17. The Open Network for Digital Commerce is a newish government-backed non-profit dedicated to helping e-commerce services work together.
    18. The IMF thinks the government thereby saved 2.2trn rupees ($34bn), or 1.1% of GDP, between 2013 and March 2021.
    19. a triad of identity, payments and data management
    20. Once known as the “India Stack”, they have been rebranded “digital public infrastructure” (DPI) as the number and ambition of the platforms have grown. It is this DPI that India hopes to export—and in the process build its economy and influence. Think of it as India’s low-cost, software-based version of China’s infrastructure-led Belt and Road Initiative.

    1. Every year, approximately $48 trillion are invested in projects. Yet according to the Standish Group, only 35% of projects are considered successful. The wasted resources and unrealized benefits of the other 65% are mind-blowing.

      Sobering statistics on the waste of resources in project management.

    2. Only 35% of projects today are completed successfully.

    1. These private interests can change without public input, as when OpenAI effectively abandoned its nonprofit origins, and we can’t be sure that their founding intentions or operations will survive market pressures, fickle donors, and changes in leadership.
    2. Nonprofit projects are still beholden to private interests, even if they are benevolent ones.
    3. The open-source community is proof that it’s not always private companies that are the most innovative.
    4. A publicly funded LLM could serve as an open platform for innovation, helping any small business, nonprofit, or individual entrepreneur to build AI-assisted applications.
    5. But Washington can take inspiration from China and Europe’s long-range planning and leadership on regulation and public investment.
    6. The EU also continues to be at the cutting edge of aggressively regulating both data and AI.
    7. AI’s horrendous carbon emissions, and the exploitation of unlicensed data.
    8. Yet corporations aren’t the only entities large enough to absorb the cost of large-scale model training. Governments can do it, too. It’s time to start taking AI development out of the exclusive hands of private companies and bringing it into the public sector. The United States needs a government-funded and -directed AI program to develop widely reusable models in the public interest, guided by technical expertise housed in federal agencies.
    9. Silicon Valley has produced no small number of moral disappointments.

    1. governments working in partnership with the private sector should vigorously engage in each area of potential risk to use AI to maximize society’s defensive capabilities.
    2. Open source AI should be allowed to freely proliferate and compete with both big AI companies and startups. There should be no regulatory barriers to open source whatsoever.
    3. Startup AI companies should be allowed to build AI as fast and aggressively as they can.
    4. but not allowed to achieve regulatory capture, not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk.
    5. The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.
    6. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe.
    7. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.
    8. I’m not aware of a single actual bad use for AI that’s been proposed that’s not already illegal. And if a new bad use is identified, we ban that use.
    9. we have laws on the books to criminalize most of the bad things that anyone is going to do with AI.
    10. The level of totalitarian oppression that would be required to arrest that would be so draconian – a world government monitoring and controlling all computers? jackbooted thugs in black helicopters seizing rogue GPUs? – that we would not have a society left to protect.
    11. The AI cat is obviously already out of the bag.
    12. It’s the opposite, it’s the easiest material in the world to come by – math and code.
    13. Any technology can be used for good or bad. Fair enough. And AI will make it easier for criminals, terrorists, and hostile governments to do bad things, no question.
    14. AI will not come to life and kill us, AI will not ruin our society, AI will not cause mass unemployment, and AI will not cause a ruinous increase in inequality.
    15. To summarize, technology empowers people to be more productive. This causes the prices for existing goods and services to fall, and for wages to rise.
    16. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.
    17. don’t let the thought police suppress AI.
    18. As the proponents of both “trust and safety” and “AI alignment” are clustered into the very narrow slice of the global population that characterizes the American coastal elites – which includes many of the people who work in and write about the tech industry – many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society.
    19. demand ever greater levels of censorship and suppression of whatever speech they view as threatening to society and/or their own personal preferences
    20. So any technological platform that facilitates or generates content – speech – is going to have some restrictions.
    21. no absolutist free speech position
    22. And the same concerns of “hate speech” (and its mathematical counterpart, “algorithmic bias”) and “misinformation” are being directly transferred from the social media context to the new frontier of “AI alignment”. 
    23. from “AI safety” – the term used by people who are worried that AI would literally kill us – to “AI alignment” – the term used by people who are worried about societal “harms”.
    24. If the murder robots don’t get us, the hate speech and misinformation will.
    25. But their extreme beliefs should not determine the future of laws and society – obviously not.
    26. “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation.
    27. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.
    28. This explains the mismatch between the words and actions of the Baptists who are actually building and funding AI – watch their actions, not their words.
    29. What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well.
    30. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.
    31. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – it is not going to come alive any more than your toaster will.
    32. My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.
    33. new technologies, and in practice inflames destructive emotion rather than reasoned analysis.
    34. the Prometheus Myth – Prometheus brought the destructive power of fire, and more generally technology (“techne”), to man
    35. AI will decide to literally kill humanity.
    36. “Bootleggers” are the self-interested opportunists who stand to financially profit by the imposition of new restrictions, regulations, and laws that insulate them from competitors.
    37. it takes what may be a legitimate concern and inflates it into a level of hysteria that ironically makes it harder to confront actually serious concerns.

      ||Jovan|| very valid point

    38. The fine folks at Pessimists Archive have documented these technology-driven moral panics over the decades; their history makes the pattern vividly clear. It turns out this present panic is not even the first for AI.

      previous panics on technology.

    39. anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
    40. I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

      Will AI really reduce casualties in war?

    41. What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

      augmented intelligence.

    42. The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it.

      Q: what is AI?


    1. Mr Fick said the deadline seemed “quite an aggressive timeline to get to a [US] domestic [privacy] consensus, let alone a transatlantic consensus. But I think it’s imperative that we figure this one out.”
    2. a “deft touch”
    3. “I believe it’s incumbent upon those of us in developed wealthy economies to articulate a positive, compelling affirmative vision for what these technologies can do in the lives of our citizens.”
    4. The “positive power” of new technologies such as artificial intelligence, quantum science, and biotech “outweighs the risks”, according to Nathaniel Fick, US ambassador at large for cyberspace and digital policy.

    1. My father's work was an attempt to make the proceedings understood by the farmers. Up to now, legal processes have remained inaccessible to poor Filipinos. But that is a story for another day. During my first year in the priesthood, on several occasions, my bishop then, Leonardo Legaspi, asked me to translate his homilies and pastoral letters from English to Bikol.

    1. The Broadband Access and in Home Solutions engineering process uses a top-down and bottom-up approach and several data triangulation methods to evaluate and validate the size of the entire market and other dependent sub-markets listed in Broadband Access and in Home Solutions report.

    1. Google has rolled out some interesting AI applications. Judging by this story, they seem simple to use. Most interesting for us is whether and how we can add 'weight' to sentences and paragraphs.

      There are three offerings: Cloud portfolio, Vertex AI and Gen App Builder.

      ||Jovan|| ||JovanNj|| ||milosvATdiplomacy.edu|| ||anjadjATdiplomacy.edu||

    2. Cloud portfolio, Vertex AI and Gen App Builder

      What exactly is included in these offerings?

    3. Enterprise Search on Generative AI App Builder (Gen App Builder)
    4. including multiple tuning methods for large models,

      Can 'weight' be added here?

    5. ML platform to provide Reinforcement Learning with Human Feedback, or RLHF

      How does this work?

    6. Embeddings API for text

      Can we use this?

    7. Model Garden

      What is Model Garden?

    8. Model Garden and Generative AI Studio
    9. Vertex AI,

      What is Vertex AI?

    10. our text model powered by PaLM 2, Embeddings API for text, and other foundation models in Model Garden, as well as leverage user-friendly tools in Generative AI Studio for model tuning and deployment.

      What are these technologies?
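On the question above of whether we can use the Embeddings API: a text embeddings endpoint maps each text to a vector, so that semantic similarity becomes vector similarity. The sketch below is a toy illustration only; fake_embed() is a stand-in for a real API call (a real Vertex AI client would return dense float vectors), so only the retrieval logic carries over.

```python
# Toy semantic search over embeddings. fake_embed() is an invented stand-in
# for a real embeddings endpoint; it builds word-count vectors for the demo.
import math
from collections import Counter

def fake_embed(text):
    # Stand-in for an API call that would return a dense vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

docs = ["AI governance and policy", "digital policy and AI rules", "banana bread recipe"]
query = fake_embed("AI policy")
best = max(docs, key=lambda d: cosine(query, fake_embed(d)))
print(best)  # → AI governance and policy
```

With a real embeddings API the loop is the same: embed the query, embed the documents once, rank by cosine similarity.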


    1. The next race is to discover what it is for. Apple has just fired the starting gun. ■
    2. the technological struggle to make spatial computing a reality is being won.
    3. Its aim is to get the product to the people who will work out what spatial computing can do.
    4. It could be commercial (surgeons, engineers and architects have dabbled in the tech) or educational (Apple previewed a “planetarium” in its demo) or in entertainment (Disney made a cameo with ideas for immersive cinema and sports coverage).
    5. whose eyes also appear on the outside of the glasses to make wearing them less antisocial.
    6. hand gestures and eyeball tracking are in.
    7. after desktop and mobile computing, the next big tech era will be spatial computing—also known as augmented reality—in which computer graphics are overlaid on the world around the user.

      New terms are 'spatial computing' and 'augmented reality'.


    1. China and the U.S. have been developing artificial intelligence (AI) systems at a rapid pace that has evolved into a race for dominance, but should China surpass the U.S. in its technological capability, experts warn of dire consequences for America.

    1. The developing countries feel that greater attention needs to be given to ensuring that the SDGs adopted in 2015 are implemented.
    2. “we have no other option than to try to use (the preparations for the Summit of the Future) as an opportunity.”
    3. Guterres is expected to release later this month a report called the new agenda for peace, one of the many items to be discussed at the future summit.

    1. How to create a Topic Cluster with WordPress?

      ||Jovan|| How to create a topic cluster with WordPress?


    1. Every post in the cluster set needs to be linked to at least once with the same anchor text (the part that is hyperlinked) so that a search engine knows it’s part of a topic cluster.
    2. Algorithms have evolved to the point where they can understand the topical context behind the search intent, tie it back to similar searches they have encountered in the past, and deliver web pages that best answer the query.
    3. Multiple content pages that are related to that topic link back to the pillar page. This linking action signals to search engines that the pillar page is an authority on the topic, and over time, the page may rank higher for the topic it covers.
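The anchor-text rule in point 1 can be checked mechanically. A minimal Python sketch; the post snippets and helper names are invented for illustration and are not part of any WordPress or SEO-plugin API:

```python
# Check that every post in a topic cluster links to the pillar page
# at least once using one consistent anchor text.
import re

def anchor_texts(html, pillar_url):
    """Return the anchor texts of all links in html that point at pillar_url."""
    pattern = re.compile(
        r'<a\s+[^>]*href="' + re.escape(pillar_url) + r'"[^>]*>(.*?)</a>',
        re.IGNORECASE | re.DOTALL,
    )
    return [m.strip() for m in pattern.findall(html)]

def cluster_is_consistent(posts, pillar_url, expected_anchor):
    """True if every post links to the pillar with the expected anchor text."""
    return all(expected_anchor in anchor_texts(html, pillar_url) for html in posts)

posts = [
    '<p>Read our <a href="/seo-guide">SEO guide</a> for more.</p>',
    '<p>See the full <a href="/seo-guide">SEO guide</a>.</p>',
]
print(cluster_is_consistent(posts, "/seo-guide", "SEO guide"))  # → True
```

Running this over exported post HTML would flag any cluster page that forgot the pillar link or used a different anchor.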

    1. But in a nutshell topic clusters are clusters of search optimized pages tied together under the umbrella of a high level topic.

    1. Support for Human Rights will be driven by the centrality of AI-driven data flows and the demand of the Global South for promoting AI data for development.
    2. SDGs provide tangible goals and metrics complementary to Human Rights
    3. The SDGs can be seen as part of a process of realizing Human Rights in the context of overall Well-being.
    4. The links between human dignity, expressed as Human Rights and human Well-being, expressed as Development Goals, were initially not well appreciated.
    5. “Human Rights” were intentionally not a significant component of the 2000 MDGs.

    1. The genie is out of the bottle. Cat's out of the bag. If these people thought AI was/is so dangerous, why release and continue to release the tool to the mass public, which inevitably includes bad actors?
    2. It would probably be more feasible imo to regulate the uses of a.i., for example if you wanted to utilize A.I. in some type of autonomous function, such as a self driving car, or some kind of guard robot, or traffic lights, or who knows what else, that you would need to ensure some level of standardized security and other such potential regulations.

      regulate uses or development of AI.

    3. What are we talking about here? Phishing scams? Advanced warfare capabilities that can result in a shifting unipolar world? A tertiary threat of mutual destruction politics? Whaaaaat are we talking about?
    4. We are headed for extinction either way. Global warming if it continues is going to cost trillions of damages, in flooded coastal cities, (most people live near coasts, need i remind you), more hurricanes, more pandemic (feral creatures forced to move north and come in more frequent contact with humans), deadlier pathogens (global rising temperatures makes pathogens adapt to higher temperatures, making our fever defense mechanisms less effective). Not to mention the USA-Russia-China are on the brink of wanting to go to war with each other. Russia invading an european country, and straight up threatening to use NUKES. Hello? China thinking about moving into Taiwan, an american protectorate because it's struggling economically
    5. I'm tired of these gloom and gloom without substance.
    6. I wish someone would actually address the elephant in the room
    7. OpenAI just keeps saying it's coming, it's dangerous, but when congress asked for "nutrition label" of their AI model, they avoided the question.

      ||Jovan|| good point

    8. It honestly feels like selling doom to limit control of AI to only select few.
    9. How would it defend itself from a Solar Flare or Nuclear EMP blast?
    10. The risk from AI going evil is hypothetical, the risk of AI being used by evil humans is not. Funny how these billionaires and their lackeys don't want us to police THEM.
    11. They let the genie out and now they want to put the genie back in the bottle.
    12. OpenAI seems to be calling for AI regulation of only "cutting-edge" models and seems to think open-source wouldn't qualify as cutting-edge, but that could be a fallacy as open-source continues to rapidly improve.
    13. To protect their positions, monopolists can: A) influence public opinion by drawing on science-fiction imagery; B) position themselves as the first supporters of the need for regulation; C) induce legislators to enact very restrictive regulations with them as the main interlocutors. Expected outcome: stringent regulations; strong limitations on open-source AI; long live the new monopolists!

      ||Jovan|| This is a good summary.

    14. The only “solution” I’ve seen offered up is to regulate who can build AI: only the larger “reputable” companies who will be able to get government contracts to develop AI. That’s the regulation they’re advocating for. News flash: it ain’t about protecting the human race from extinction, it’s about limiting competition so they can maximize profits.
    15. survival pressures
    16. Actually, we would be safer with a multilarity than a singularity, because other very advanced AIs, or parts of AIs, would be able to counter one very advanced AI outside the realm of humanity's basic needs.

      No singularity but multilarity

    17. to get more than 2 or 3 people to ever agree on exactly what "human values"

    1. ||Jovan|| An important letter from European open-source community.

    2. By fostering a legislative environment that supports open-source R&D, the Parliament can promote safety through transparency, drive innovation and competition, and accelerate the development of a sovereign AI capability in Europe
    3. Deterring open-source AI will put at risk the digital security, economic competitiveness, and strategic independence of Europe.
    4. This will accelerate the safe development of next-generation foundation models under controlled conditions with public oversight and in accordance with European values.
    5. the definition of “general purpose AI”, which is vague and is not supported by broad scientific consensus
    6. A “one size fits all” framework that treats all foundation models as high-risk could make it impossible to field low-risk and open-source models in Europe
    7. The Act should recognise important distinctions between closed-source AI models offered as a service (e.g. via app or API like chatGPT or GPT-4) and AI models released as open-source code (including open-source data, training source code, inference source code, and pre-trained models)
    8. Building on open-source foundation models, European researchers, businesses and Member States can develop their own AI capabilities – overseen, trained, and hosted in Europe.
    9. Europe may cross a point-of-no-return, falling far behind in AI development, and being relegated to a consumer role without its own decision-making on critical technologies that will shape our societies
    10. Those rules could entrench proprietary gatekeepers, often large firms, to the detriment of open-source researchers and developers
    11. rules that treat all foundation models as high-risk could make it difficult or impossible to research and develop open-source foundation models in Europe
    12. Public and private sector organizations can adapt open-source models for specialized applications without sharing private or sensitive data with a proprietary firm
    13. promotes security
    14. open-source AI promotes competition. Small to medium enterprises across Europe can build on open-source models to drive productivity, instead of relying on a handful of large firms for essential technology
    15. to audit the performance of a model or system; develop interpretability techniques; identify risks; and establish mitigations or develop anticipatory countermeasures.
    16. open-source AI promotes safety through transparency
    17. The draft Act is expected to introduce new requirements for foundation models that could stifle open-source R&D on AI

    1. making AI models safer (more aligned) leads to a performance trade-off known as an alignment tax.
    2. AGI is the intelligence of a machine that can understand, learn, plan, and execute any intellectual task that a human being can.

      Q: What is Artificial General Intelligence (AGI)?

    3. In the field of AI, there's often a trade-off between safety and performance, known as the alignment tax.

      Q: what is the alignment tax?

    4. because it encourages the model to follow a human-approved process, making its reasoning more interpretable.
    5. Outcome supervision involves giving feedback based on the final result, whereas process supervision provides feedback for each individual step in a process.

      ||Jovan|| Is this a real breakthrough or part of OpenAI's current spin?
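The distinction quoted in point 5 can be made concrete with toy reward functions: outcome supervision scores only the final answer, while process supervision scores every intermediate step. The step format and checker below are invented for illustration; this is not OpenAI's actual training setup.

```python
# Toy contrast between outcome and process supervision on a reasoning chain.
def outcome_reward(steps, final_answer, correct_answer):
    # One signal for the whole chain: was the end result right?
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, step_checker):
    # One signal per step: average correctness of each intermediate step.
    scores = [1.0 if step_checker(s) else 0.0 for s in steps]
    return sum(scores) / len(scores)

# A 3-step arithmetic chain where step 2 is wrong, yet the chain still
# happens to end on the right number.
steps = ["2 + 2 = 4", "4 * 3 = 11", "11 + 1 = 12"]
checker = lambda s: eval(s.split("=")[0]) == int(s.split("=")[1])

print(outcome_reward(steps, "12", "12"))  # → 1.0, the flawed step is invisible
print(process_reward(steps, checker))     # ≈ 0.67, the flawed step is penalized
```

The gap between the two scores is the point: outcome supervision rewards a lucky answer reached through faulty reasoning, while process supervision surfaces the faulty step.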


    1. Propaganda is picking up. The title does not reflect the content, which actually offers rather balanced coverage of the 'AI doomsday' debate. It also raises a few good points, such as ethical training for AI engineers.

      ||Jovan||

    2. "It's never too late to improve," says Prof Bengio of AI's current state. "It's exactly like climate change.

      Dangerous use of analogies.

    3. "We also need the people who are close to these systems to have a kind of certification... we need ethical training here. Computer scientists don't usually get that, by the way."

      Ethical training is fine. Certification, I do not know.

    4. "Governments need to track what they're doing, they need to be able to audit them, and that's just the minimum thing we do for any other sector like building aeroplanes or cars or pharmaceuticals," he said.

      Argument for centralised control.

    5. The third "godfather", Prof Yann LeCun, who along with Prof Bengio and Dr Hinton won a prestigious Turing Award for their pioneering work, has said apocalyptic warnings are overblown.
    6. "It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous.
    7. a voluntary code of conduct for AI could be created "within the next weeks".

  2. May 2023
    1. Open-source may pose a challenge as well for global cooperation. If everyone can cook AI models in their basements, how can AI truly be aligned to safe objectives?

      ||sorina|| it is a concentrated effort to stop bottom-up AI. It is a very dangerous development.

    2. About 58% of U.S. adults are familiar with ChatGPT.

      ||JovanNj|| ||sorina|| ||anjadjATdiplomacy.edu|| Relatively low awareness of ChatGPT and low use (only 14%). It is interesting that Asian minorities are more active in using ChatGPT.


    1. “The irony is that the G77 is arguably winning the intellectual battle about what the U.N. should be focusing on right now.”
    2. “They fear that the rich world is going to tiptoe away from the SDGs, and the Summit of the Future is a sort of diplomatic smokescreen for that,”

      ||sorina||||VladaR||||Pavlina||||Katarina_An|| This article provides background about the atmosphere in New York. We should be aware of it as we prepare the event in NY. We may strengthen linkages between the SDGs and the Summit of the Future.

    3. Its call for climate justice also echoes the calls by many nations, including Pakistan, to step up international funding to help countries respond to devastating climate-driven catastrophes such as the torrential storm that inundated a third of the country’s territory last year.

      Climate concerns of developing countries.

    4. “The countries of the European Union, to which my country belongs — 500 million people, a little bit less — received 160 billion U.S. dollars,” he said. “The African continent, three times the population, received 34 billion. There is something fundamentally wrong in the rules because these are the rules of the system that allow for these injustices to take place.”
    5. He also foresees a potentially contentious set of negotiations over a broad range of issues, from human rights to climate justice, the environment, and a newly articulated peace agenda.

      Other issues of concern to developing countries.

    6. “There is a sense that, you know, this is an effort to change the intergovernmental structure of the United Nations and the General Assembly and the General Assembly is an organization of member states,” he said.

      Developing countries' concern about the multistakeholder approach.

    7. The United States and other key Security Council members killed off the proposal to remake the Trusteeship Council expressing concern it could trigger a messy reopening of negotiations on the U.N. Charter.
    8. It called for the creation of a Futures Lab to measure the impact and risks of policies over the long haul; the reform of the Trusteeship Council, established to manage decolonization, to advocate on behalf of future generations, and the appointment of a “special envoy to ensure that policy and budget decision take into account the impact of future generations.” Guterres proposed hosting a Summit of the Future this year so world leaders could turn his plan into action.

      Set of proposals for the Summit of the Future.

    9. “many member states have expressed their desire for a stronger link between the SDG Summit and Summit of the Future processes.”
    10. Dujarric added that the SDG Summit and the Summit of the Future are “sequential opportunities and interrelated opportunities to achieve both.”
    11. Officials familiar with the deliberation say the Cuban delegation had not consulted widely within the G-77 membership before issuing the letter.
    12. we request the different co-facilitators on Our Common Agenda related processes and the secretariat, to stop conducting meetings, to avoid jeopardizing the focus we all should devote to the SDG Summit,”
    13. But a coalition of 134 developing countries, known as the Group of 77 and China, called last month on the U.N. and Germany and Namibia to halt preparations for the 2024 event altogether for the remainder of the year, citing the need to maintain a laser focus on groundwork for the September 2023 Summit on Sustainable Development Goals.
    14. Guterres believes it can do both at the same time
    15. an honest desire to focus on the implementation of the SDGs, because that is a priority.
    16. In a compromise, U.N. states last year agreed to convene their foreign ministers in New York in September to try and strike a deal on a statement laying out the basic contours, or “scope and elements,” of the Future Summit.
    17. The delay was partly triggered by concerns among developing countries that the U.N. chief’s signature initiative would detract attention from promoting the U.N.’s 17 Sustainable Development Goals, stalled for at least four years by the pandemic, climate change, and conflict.
    18. to halt the event’s preparations until next year and contending the U.N. must focus this year on implementing its existing, and faltering, development goals,

    1. The turnaround was two minutes, which is lightning-fast, given the accuracy.
    2. I looked for apps containing basic editing options within the app—like highlighting, inviting teammates to comment/edit, and adjusting playback speed.

      This feature is essential for us: how to annotate transcripts.

    3. I calculated how many mistakes each platform made and whether it was able to detect different speakers correctly.
    4. evaluating over 65 transcription apps and doing in-depth testing on all the top contenders.
    5. it's becoming a more saturated category, making it harder to pick out the right one.

    1. It’s up to you to develop your own expert level understanding of this knowledge, and learn to leverage AI tools to make your workflow more efficient while also guarding against the knowledge dilution they cause.
    2. for maximizing the impact of your company’s collective knowledge
    3. building the processes and workflows for how uniquely valuable knowledge
    4. a company culture that values the knowledge of all your individual team members
    5. your company’s collective knowledge
    6. to pre-load your unique knowledge in long form prompts
    7. But I’ll likely use AI to help me re-purpose some of this content for other channels like social and email.
    8. And in cases where your knowledge sits far outside the bubble of consensus that AI tools draw from, it will likely look a lot more like co-writing.
    9. Because authority is ultimately something earned from the community you are involved in.
    10. The “average” outputs that AI generates can’t capture your unique life experiences.