1. Jul 2023
    1. only a first step in developing and enforcing binding obligations to ensure safety, security, and trust

      commitments to be followed by binding obligations


    1. the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI.
    2. robust technical mechanisms to ensure that users know when content is AI generated
    3. will be carried out in part by independent experts
    4. immediately
    5. the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI. 

      ||sorina|| 3 key processes for the USA on AI.

    6. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
    7. model weights.

      ||JovanNj|| We need to focus on model weights issues.

    8. safety, security, and trust
    9. President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.   
    1. Collaborative could acquire or develop and then distribute AI systems to address these gaps, pooling resources from member states and international development programs, working with frontier AI labs to provide appropriate technology, and partnering with local businesses, NGOs, and beneficiary governments to better understand technological needs and overcome barriers to use

      What would be the relation with big tech companies?

      And 'acquire' how?

    2. acquires and distributes AI systems
    3. institutions taking on the role of several of the models above
    4. addressed by offering safety researchers within AI labs dual appointments or advisory roles in the Project,
    5. it diverts safety research away from the sites of frontier AI development.
    6. such an effort would likely be spearheaded by frontier risk-conscious actors like the US and UK governments, AGI labs and civil society groups
    7. accelerate AI safety research by increasing its scale, resourcing and coordination, thereby expanding the ways in which AI can be safely deployed, and mitigating risks stemming from powerful general purpose capabilities
    8. exceptional leaders and governance structures
    9. an institution with significant compute, engineering capacity and access to models (obtained via agreements with leading AI developers), and would recruit the world’s leading experts
    10. An AI Safety Project need not be, and should be organized to benefit from the AI Safety expertise in civil society and the private sector

      And funded how?

    11. like ITER and CERN.
    12. conduct technical AI safety research at an ambitious scale
    13. to exclude from participation states who are likely to want to use AI technology in non-peaceful ways, or make participation in a governance regime the precondition for membership
    14. consistently implement the necessary controls to manage frontier systems
    15. the Collaborative to have a clear mandate and purpose
    16. diffusing dangerous AI technologies around the world, if the most powerful AI systems are general purpose, dual-use, and proliferate easily.
    17. would need to significantly promote access to the benefits of advanced AI (objective 1), or put control of cutting-edge AI technology in the hands of a broad coalition (objective 2)
    18. the resources required to overcome these obstacles are likely to be substantial,

      Indeed.

    19. understanding the needs of member countries, building absorptive capacity through education and infrastructure, and supporting the development of a local commercial ecosystem to make use of the technology
    20. being deployed for “protective” purposes
    21. increase global resilience to misused or misaligned AI systems
    22. existence of a technologically empowered neutral coalition may also mitigate the destabilizing effects of an AI race between states

      feasible?

    23. access to its safe technology could be offered as an incentive for countries to participate in a governance regime that enforces responsibility (such as agreements to enact stricter regulation, or restrict military AI development).

      But the cynical question would be: If country X has the research and development capacities to develop advanced AI, why would it want access to the tech of this Collaborative?

    24. reduce geopolitical instability amidst fierce AI competition among states

      hmm

    25. ensure the benefits of cutting-edge AI reach groups that are otherwise underserved by AI development
    26. legitimate international access to advanced AI
    27. a Frontier AI Collaborative could take the form of an international private-public partnership that leverages existing technology and capacity in industry, for example by contracting access to or funding innovation in appropriate AI technology from frontier AI developers.
    28. Aligned countries may seek to form governance clubs, as they have in other domains. This facilitates decision-making, but may make it harder to enlist other countries later in the process

      Budapest Convention is a case in point

    29. its effectiveness will depend on its membership, governance and standard-setting processes
    30. Governance Organization should focus primarily on advanced AI systems that pose significant global risks, but it will be difficult in practice to decide on the nature and sophistication of AI tools that should be broadly available and uncontrolled versus the set of systems that should be subject to national or international governance
    31. Automated (even AI-enabled) monitoring
    32. some states may be especially reluctant to join due to fear of clandestine noncompliance by other states
    33. Other AI-specific incentives for participation include conditioning access to AI technology (possibly from a Frontier AI Collaborative) or computing resources on participation. States might also adopt import restrictions on AI from countries that are not certified by a Governance Organization

      Surely interesting. Though in the current geopolitical context, it is not difficult to imagine how this would work.

    34. while urgent risks may need to be addressed at first by smaller groups of frontier AI states, or aligned states with relevant expertise

      Geopolitics come to play again? Surely 'frontier AI states' is different from 'aligned states'.

    35. The impact of a Governance Organization depends on states adopting its standards and/or agreeing to monitoring.
    36. standard setting (especially in an international and multistakeholder context) tends to be a slow process
    37. detection and inspections of large data centers
    38. self-reporting of compliance with international standards
    39. Where states have incentives to undercut each other's regulatory commitments, international institutions may be needed to support and incentivize best practices. That may require monitoring standards compliance.
    40. therefore enable greater access to advanced AI

      Implementing international standards enables greater access to tech?

    41. international standard setting would reduce cross-border frictions due to differing domestic regulatory regimes

      If those standards (whatever we mean by them...) are taken up broadly.

    42. Future regulations will limit access to powerful AI technologies in jurisdictions with inadequate AI governance

      Assuming those jurisdictions won't be able to develop their own powerful AI tech?

    43. the International Telecommunication Union (ITU)

      In what sense?

    44. Advanced AI Governance Organization.
    45. intergovernmental or multi-stakeholder
    46. as less institutionalized and politically authoritative scientific advisory panels on advanced AI

      So an advisory panel stands a better chance of reaching that consensus than a commission?

    47. Representation may trade off against a Commission’s ability to overcome scientific challenges and generate meaningful consensus
    48. broad geographic representation in the main decision-making bodies, and a predominance of scientific experts in working groups
    49. the scientific understanding of the impacts of AI should ideally be seen as a universal good and not be politicized
    50. foundational “Conceptual Framework
    51. Commission might undertake activities that draw and facilitate greater scientific attention, such as organizing conferences and workshops and publishing research agendas

      :))

    52. To increase chances of success, a Commission should foreground scientific rigor and the selection of highly competent AI experts who work at the cutting edge of technological development and who can continually interpret the ever-changing technological and risk landscape

      Sounds good, with the devil being in implementation. E.g. whose standards would determine what counts as a 'highly competent' AI expert?

    53. internationally representative group of experts
    54. there is significant disagreement even among experts about the different opportunities and challenges created by advanced AI

      And so what makes us think that these disagreements would evolve into a consensus if a committee is created?

    55. International consensus on the opportunities and risks from advanced AI

      What does 'international consensus' mean?

    56. the Commission on Frontier AI could facilitate scientific consensus by convening experts to conduct rigorous and comprehensive assessments of key AI topics, such as interventions to unlock AI’s potential for sustainable development, the effects of AI regulation on innovation, the distribution of benefits, and possible dual-use capabilities from advanced systems and how they ought to be managed

      What a Commission on Frontier AI would do.

      Silly question: Why 'frontier AI'?

    57. intergovernmental body
    58. dangerous inputs: computing resources have been targeted by US, Japanese and Dutch export controls that prevent the sale of certain AI chips and semiconductor manufacturing equipment to China

      So 'controlling dangerous inputs' is actually about preventing non-friendly countries from accessing/developing the tech?

    59. norms and standards

      Are norms and standards rules?

    60. standards

      Again, what kind of standards are we talking about?

    61. Establish guidelines

      Don't we have enough of these?

    62. through education, infrastructure, and support of the local commercial ecosystem

      So building capacities and creating enabling environments

    63. develop frontier AI collectively or distribute and enable access

      A bunch of questions here. It sounds good, but:

      • Collectively by whom?
      • How exactly would that distribution of access work?
    64. developing and/or enabling safe forms of access to AI.

      What does this mean?

    65. controlling AI inputs

      How could this be done?

      ||JovanNj|| Any thoughts?

    66. to build consensus on risks and how they can be mitigated, and set safety norms and standards and support their implementation to help developers and regulators with responsible development and use. International efforts to conduct or support AI safety research
    67. national regulation may be ineffective for managing the risks of AI even within states. States will inevitably be impacted by the development of such capabilities in other jurisdictions
    68. building capacity to benefit from AI through education, infrastructure, and local commercial ecosystems

      Building capacity is always good. Not quite sure whose capacities we are talking about here and how/who would build them.

    69. build consensus on AI opportunities

      Do we really need to build consensus on this?

    70. Standards

      The more we talk/write about standards, the more I feel we need a bit of clarity here as well: Are we talking about technical standards (the ones developed by ITU, ISO, etc) or 'policy' standards (e.g. principles) or both?

    71. ISO/IEC

      ITU, IEEE...

    72. whether these institutions should be new or evolutions of existing organizations, whether the conditions under which these institutions are likely to be most impactful will obtain,

      But aren't these essential questions if all this is to work?

    73. A failure to coordinate or harmonizeregulation may also slow innovation.

      A cynical question: Is this coming from a commercial interest angle, then?

    74. It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations:

      A four-model structure for AI governance


    1. the Media in the Digital Age.

    1. But if the superforecasters are so good at predictions, and experts have so much topic-specific knowledge, you might at least expect the two groups to influence each other’s beliefs.
    2. But superforecasters and AI experts seemed to hold very different views of how societies might respond to small-scale damage caused by AI. Superforecasters tended to think that would prompt heavy scrutiny and regulation to head off bigger problems later. Domain experts, by contrast, tended to think that commercial and geopolitical incentives might outweigh worries about safety, even after real harm had been caused.
    3. Such people share a few characteristics, such as careful, numerical thinking and an awareness of the cognitive biases that might lead them astray.

      Q: Who are superforecasters?

    4. The emergence of modern, powerful machine-learning models dates to the early years of the 2010s. And the field is still developing quickly. That leaves much less data on which to base predictions.
    5. If humans used AI to help design more potent bioweapons, for instance, it would have contributed fundamentally, albeit indirectly, to the disaster.
    6. One reason for AI’s strong showing, says Dan Maryland, a superforecaster who participated in the study, is that it acts as a “force multiplier” on other risks like nuclear weapons
    7. The median superforecaster reckoned there was a 2.1% chance of an AI-caused catastrophe, and a 0.38% chance of an AI-caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.

      Q: What is the likelihood of catastrophe or extinction?

    8. A “catastrophe” was defined as something that killed a mere 10% of the humans in the world, or around 800m people. (The second world war, by way of comparison, is estimated to have killed about 3% of the world’s population of 2bn at the time.) An “extinction”, on the other hand, was defined as an event that wiped out everyone with the possible exception of, at most, 5,000 lucky (or unlucky) souls.

      Q: What is the difference between catastrophe and extinction?

    9. On the one hand were subject-matter, or “domain”, experts in nuclear war, bio-weapons, AI and even extinction itself. On the other were a group of “superforecasters”—general-purpose prognosticators with a proven record of making accurate predictions on all sorts of topics, from election results to the outbreak of wars.
    10. These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence (AI). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    1. India insists data must be stored locally: to give its law-enforcement agencies easy access, to protect against foreign snooping and as a way to boost investment in the tech sector.
    2. most governments lack policymakers with relevant technical expertise and most digital issues cut across different domains, extending beyond the traditional remit of trade negotiators.
    3. In 2019 Abe Shinzo, the late Japanese prime minister, proposed the concept of Data Free Flow with Trust. That rather nebulous idea is materialising as a set of global norms to counter digital protectionism. As Matthew Goodman of CSIS, a think-tank in Washington, puts it: “It’s about the un-China approach to data governance.”
    4. “There’s a vacuum in terms of rules, norms and agreements that govern digital trade,” laments Nigel Cory of the Information Technology and Innovation Foundation, a research institute in Washington.
    5. The Philippines and Guam have emerged as attractive substitutes.
    6. Hong Kong was traditionally one of three major data hubs in Asia, with Japan and Singapore.
    7. Intra-Asia data flows make up over 50% of the region’s bandwidth, up from 47% in 2018, while the share going to America and Canada has dipped from 40% to 34% over the same period.
    8. The most congested cable route in Asia is also its most contested: the South China Sea is the “main street” of submarine cables, especially between Japan, Singapore and Hong Kong, notes Murai Jun, a Japanese internet pioneer.
    9. “Customers are asking more about the security of cables and routes,” says Uchiyama Kazuaki of NTT World Engineering Marine Corporation, the firm that owns the Kizuna.
    10. Aside from a heavy state hand in China’s cable industry, such infrastructure tends to be privately financed and owned. A small handful of companies dominate the production and installation of cables; big tech firms are their main users.
    11. While in the past constructing internet infrastructure tended to be a “collaborative effort” between countries and between firms, in recent years its enabling environment has soured amid growing friction between China and America.
    12. Asia saw international bandwidth usage grow by 39% in 2022, compared to the global average of 36%, according to TeleGeography, a research firm.

    1. migrant and refugee

    1. Our continued investment in innovation and our specific regulatory environment is what helped us lead the world in critical tech industries like quantum computing.

      I disagree with this point.

    2. The average American is already starting to see the benefits of AI technology in accessibility, efficiency, and reduction of human error.

    1. The resulting generative AI models need not be trained from scratch but can build upon open-source generative AI that has used lawfully sourced content.
    2. Vendor and customer contracts can include AI-related language added to confidentiality provisions in order to bar receiving parties from inputting confidential information of the information-disclosing parties into text prompts of AI tools.
    3. they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feed their AI.
    4. Developing these audit trails would assure companies are prepared if (or, more likely, when) customers start including demands for them in contracts as a form of insurance that the vendor’s works aren’t willfully, or unintentionally, derivative without authorization
    5. would increase transparency about the works included in the training data.
    6. has announced that artists will be able to opt out of the next generation of the image generator.
    7. Stable Diffusion, Midjourney and others have created their models based on the LAION-5B dataset, which contains almost six billion tagged images compiled from scraping the web indiscriminately, and is known to include a substantial number of copyrighted creations.
    8. Customers of AI tools should ask providers whether their models were trained with any protected content
    9. There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools.
    10. to become unequivocally “transformative,”
    11. Google successfully defended itself against a lawsuit by arguing that transformative use allowed for the scraping of text from books to create its search engine, and for the time being, this decision remains precedential.
    12. without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,”
    13. the interpretation of the fair use doctrine,
    14. the bounds of what is a “derivative work” under intellectual property laws
    15. Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection.
    16. If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply.
    17. Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles
    18. how the laws on the books should be applied
    19. does copyright, patent, trademark infringement apply to AI creations?
    20. This process comes with legal risks, including intellectual property infringement
    21. to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.

    1. With humans and AI working to their respective strengths, they can transform unknown unknowns into known unknowns, opening the door to breakthrough thinking: logical and conceptual leaps that neither could make without the other.
    2. our brains work in a reductive manner; we generate ideas and then explain them to other people.
    3. By focusing on areas where the human brain and machines complement one another.
    4. the technology is fundamentally backward-looking, trained on yesterday’s data — and the future might not look anything like the past
    5. How might you use those opportunities to throw people off balance so they’ll generate questions that reach beyond what they intellectually know to be right, what makes them emotionally comfortable, and what they are accustomed to saying and doing?
    6. Increased question velocity, variety, and especially novelty facilitate recognizing where you’re intellectually wrong, and becoming emotionally uncomfortable and behaviorally quiet — the very conditions that, we’ve found, tend to produce game-changing lines of inquiry.
    7. AI can take really obscure variables and make novel connections.
    8. sift through much more data, and connect more dots,
    9. “category jumping” questions — the gold standard of innovative inquiry
    10. uncover patterns and correlations in large volumes of data — connections that humans can easily miss without the technology
    11. more questions don’t necessarily amount to better questions, which means you’ll still need to exercise human judgment in deciding how to proceed.
    12. we found that 79% of respondents asked more questions, 18% asked the same amount, and 3% asked fewer.
    13. humans can start exploring the power of more context-dependent and nuanced questions
    14. to reveal deeply buried patterns in the data
    15. we’ve defined “artificial intelligence” broadly to include machine learning, deep learning, robotics, and the recent explosion of generative AI.)
    16. design-thinking sessions
    17. to help people become more inquisitive, creative problem-solvers on the job.
    18. can help people ask smarter questions, which in turn makes them better problem solvers and breakthrough innovators.
    19. from identification to ideation.
    20. Paired with “soft” inquiry-related skills such as critical thinking, innovation, active learning, complex problem solving, creativity, originality, and initiative, this technology can further our understanding of an increasingly complex world
    21. it can help people ask better questions and be more innovative.
    22. still view AI rather narrowly, as a tool that alleviates the costs and inefficiencies of repetitive human labor and increases organizations’ capacity to produce, process, and analyze piles and piles of data
    23. AI increases question velocity, question variety, and question novelty.

    1. “Cables are an enormous lever of power,” Wicker said. “If you can’t control these networks directly, you want a company you can trust to control them.”
    2. In 1997, AT&T sold its cable-laying operation, including a fleet of ships, to Tyco International, a security company based in New Jersey. In 2018, Tyco sold the cable unit, by this time dubbed TE SubCom, for $325 million to Cerberus, the New York private equity firm.
    3. That project, known as the Oman Australia Cable, was spearheaded by SUBCO, a Brisbane-based subsea cable investment company owned by Australian entrepreneur Bevan Slattery.
    4. “Silicon Valley is waking up to the reality that it has to pick a side,”
    5. Microsoft – whose President Brad Smith said in 2017 that the tech sector needed to be a “neutral digital Switzerland” – announced in May that it had discovered Chinese state-sponsored hackers targeting U.S. critical infrastructure, a rare example of a big tech firm calling out Beijing for espionage.
    6. America’s SubCom, Japan’s NEC Corporation, France’s Alcatel Submarine Networks and China’s HMN Tech.
    7. First, Washington needs SubCom to expand the Navy’s undersea cable network so that it can better coordinate military operations and enhance surveillance on China’s expanding fleet of submarines and warships, the people said. Second, the Biden administration wants SubCom to build more commercial subsea internet cables controlled by U.S. companies, a strategy aimed at ensuring that America remains the primary custodian of the internet, according to the two industry officials.
    8. Subsea cables are vulnerable to sabotage and espionage, and Beijing and Washington have accused each other of tapping cables to spy on data or carry out cyberattacks.
    9. SubCom’s journey from Cold War experiment to global cable constructor and now a shadowy player in the U.S.-China tech war is detailed in this story for the first time.
    10. This dual role has made SubCom increasingly valuable to Washington as global internet infrastructure – from undersea cables to data centers and 5G mobile networks – risks fracturing into two systems, one backed by the United States, the other controlled by China.
    11. SubCom is the exclusive undersea cable contractor to the U.S. military, laying a web of internet and surveillance cables across the ocean floor
    12. The CS Dependable is owned by SubCom, a small-town New Jersey cable manufacturer that’s playing an outsized role in a race between the United States and China to control advanced military and digital technologies that could decide which country emerges as the world’s preeminent superpower.

    1. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.
    2. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.
    3. either through training or policies — include:

      good strategies.

    4. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work.
    5. To realize opportunities and manage potential risks of generative AI applications to knowledge management, companies need to develop a culture of transparency and accountability that would make generative AI-based knowledge management systems successful.
    6. User prompts into publicly-available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

      Our model

    7. the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
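
      Such “pre-prompts” are typically implemented as a system message prepended to every user query. A minimal sketch in Python, assuming the OpenAI chat API; the scope rules and model name below are illustrative placeholders, not the firm’s actual pre-prompts.

      ```python
      # Hypothetical "pre-prompt" (system message) that scopes which questions
      # the assistant will answer. Rules and model name are illustrative only.
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      PRE_PROMPT = (
          "You are an internal research assistant. Answer only questions about "
          "the firm's published investment research. If asked for personal "
          "financial advice, legal opinions, or anything outside that scope, "
          "politely decline and refer the user to a human advisor."
      )

      def ask(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4",  # placeholder model name
              messages=[
                  {"role": "system", "content": PRE_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content

      print(ask("Summarize our latest research on semiconductor equities."))
      ```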
    8. many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws).
    9. For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks.
    10. The good news is that companies who have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.
    11. if content authors are aware of how to create effective documents.

      This is basically embedding SEO in the process of creating documents.

    12. Most companies that do not have well-curated content will find it challenging to do so for just this purpose.

      Here is why our textus approach integrates curation in the process.

    13. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these determine the suitability for incorporation into the GPT-4 system.
    14. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada).

      Used by Diplo
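
      To make the idea concrete: an embedding maps each text to an array of numbers, and retrieval compares those arrays. A minimal sketch, assuming OpenAI’s Ada embedding model; the document snippets and query are invented for illustration.

      ```python
      # Sketch of retrieval via vector embeddings (OpenAI's Ada model).
      import numpy as np
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      docs = [
          "Quarterly guidance on fixed-income allocation.",
          "Onboarding checklist for new client accounts.",
      ]

      def embed(texts: list[str]) -> np.ndarray:
          resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
          return np.array([item.embedding for item in resp.data])

      doc_vectors = embed(docs)  # one numeric vector per document
      query_vector = embed(["How should I allocate bonds this quarter?"])[0]

      # Cosine similarity: documents whose vectors point in a direction close
      # to the query vector are treated as semantically related.
      scores = doc_vectors @ query_vector / (
          np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
      )
      print(docs[int(np.argmax(scores))])  # best-matching snippet
      ```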

    15. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.
    16. Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge.
    17. It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors.
    18. with Google’s general PaLM2 LLM
    19. to add specific domain content to a system that is already trained on general knowledge and language-based interaction.
    20. Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time.
    21. some of the same factors that made knowledge management difficult in the past are still present.
    22. the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task

      Diplo was there in the pioneering phase of knowledge management.

    23. a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers.

      One of areas where AI can help.

    24. can’t respond to prompts or questions regarding proprietary content or knowledge.

      This is the key aspect.

    25. to express complex ideas in articulate language
    26. knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings.

      Capturing various sources of knowledge.

    27. through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how.

    1. We have to be very, very careful about ensuring that it doesn’t come across as AI surveillance,
    2. introducing AI ethicists
    3. While there are laws being whipped up around how employers should ethically and responsibly implement AI around, for example, hiring, so as to ensure job posts aren’t discriminatory in any way (like NYC’s Law 144), there are still relatively few guardrails.

      NYC Law 144 on AI and employment. ||sorina||

    4. more traditional kind, which detects patterns from data and provides predictive analysis, which has been used for the last few years by some businesses, but isn’t yet mainstream.
    5. they need to invest in reskilling their workforces
    6. A bit like the customer relationship management software that airlines use, to create a more personalized travel experience.

      ||Andrej|| We can have something like a 'customised' student experience.
