- Jul 2023
-
deliverypdf.ssrn.com
-
Getting feedback on their work from the AI is an opportunity to practice and improve, but that feedback should be considered critically, and students should be asked to articulate how and why the feedback they received is effective (or not).
-
Unlike educators in the classroom, the AI doesn't know the students or understand their context; while the feedback may be helpful, it should be coupled with an in-class discussion and clear guidelines
-
One possible form of feedback
-
That feedback should be concrete and specific, straightforward, and balanced (tell the student what they are doing right and what they can do to improve)
-
Can also give you a sense of where students are in their learning journey
-
Students should report out their interactions with the AI and write a reflection about the guidance and help the AI provided and how they plan to incorporate (or not) the AI’s feedback to help improve their work.
-
While ongoing, tailored feedback is important, it is difficult and time-consuming to implement in a large class setting. The time and effort required to consistently provide personalized feedback to numerous students can be daunting.
-
When feedback is coupled with practice it creates an environment that helps students learn
-
Researchers note the significance of incorporating feedback into the broader learning process, as opposed to providing it at the conclusion of a project, test, or assignment.
Importance of continuous feedback
-
Effective feedback pinpoints gaps and errors, and offers explanations about what students should do to improve
-
Making mistakes can help students learn, particularly if those mistakes are followed by feedback tailored to the individual student
-
Large Language Models are prone to producing incorrect, but plausible facts, a phenomenon known as confabulation or hallucination.
AI risks
-
Prompts are simply the text given to the LLM in order to produce an output.
Q: What are prompts?
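A minimal sketch of the point above: a prompt is just a string of text assembled and handed to the model. The `build_prompt` helper and the tutor instruction are hypothetical illustrations, not from the paper.

```python
# A prompt is just the text given to the LLM; instructions, examples,
# and the user's question are all layered into that one string.
# `build_prompt` is a hypothetical helper, not an API from the paper.

def build_prompt(instruction: str, question: str) -> str:
    """Combine a fixed instruction and a user question into one prompt string."""
    return f"{instruction}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    "You are a tutor. Give feedback on the student's work, not answers.",
    "Is my summary of the article accurate?",
)
print(prompt)
```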
-
Our guidelines challenge students to remain the “human in the loop” and maintain that not only are students responsible for their own work, but they should actively oversee the AI's output, check it against reliable sources, and complement any AI output with their unique perspectives and insights. Our aim is to encourage students to critically assess and interrogate AI outputs, rather than passively accept them.
Aim of the approach
-
increasing metacognition
-
To help students learn with AI and to help them learn about AI
Dual approach relevant for Diplo's AI approach.
-
how and when to use AI as they instill best practices in AI-assisted learning.
-
These tools offer the potential for adaptive learning experiences tailored to individual students' needs and abilities, as well as opportunities to increase learning through a variety of other pedagogical methods.
-
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools, despite their inherent risks and limitations. The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks. The aim is to help students learn with and about AI, with practical strategies designed to mitigate risks such as complacency about the AI's output, errors, and biases. These strategies promote active oversight, critical assessment of AI outputs, and complementing AI's capabilities with the students' unique insights. By challenging students to remain the "human in the loop", the authors aim to enhance learning outcomes while ensuring that AI serves as a supportive tool rather than a replacement. The proposed framework offers a guide for educators navigating the integration of AI-assisted learning in classrooms.
||Andrej||||Dragana||||sorina||||Jovan||
this seems to be a paper worth consulting for our approach of using AI in the learning process
-
-
www.newyorker.com
-
A company that doesn’t like the rules could threaten to pack up and leave. Then what?
-
“Keeping the details of AI technologies secret is likely to thwart good-faith researchers trying to protect the public interest, as well as competition, and open science,”
-
agreed not to share: the parameters that are known as the “weights” of their algorithms.
-
a detailed metadata trail that reflects the history of a given image.
-
It is also not clear who those experts will be, how they will be chosen, whether the same experts will be tasked with examining all the systems, and by what measure they will determine risk.
||sorina|| Good point
-
he outlined a plan to bring lawmakers up to speed on emerging technology by convening at least nine panels of experts to give them a crash course in A.I. and help them craft informed legislation.
||sorina||||StephanieBP||||VladaR||||Pavlina|| Our training with the US embassy is timely. A similar initiative has been proposed by Senate Majority Leader Chuck Schumer for members of the Senate.
Sorina, you may share this parallel with US Embassy.
-
-
crfm.stanford.edu
-
foundation model providers generally do not comply with draft requirements to describe the use of copyrighted training data, the hardware used and emissions produced in training, and how they evaluate and test models.
-
-
-
I support an Anti-AI movement in the sense that we should have some opposition questioning what's going on.
||sorina|| Do you agree with this reference?
-
-
www.reddit.com
-
Some key takeaways include:
Export control takeaways
-
"They" being the multiple competing entities working in the field right now, most of whom are trying to make the barrier to entry in this field millions of dollars higher than it otherwise would be through mandatory paid consultations with "Safety experts" that could hypothetically double as corporate espionage assets to steal your secrets before you publish them.
||sorina|| A possible explanation of AI 'noise'
-
This form of analysis is antithetical to Wittgenstein's approach, particularly in his later work, where he stressed the importance of seeing things as interconnected wholes rather than reducing them to their constituent parts.
-
Wittgenstein introduced the idea of "language games," arguing that the meaning of words is determined by their usage in specific forms of life, or social practices. This stands somewhat in opposition to the detailed, precise and generalized logical framework outlined in the provided ontology, which assigns fixed roles and properties to human brains, AI models, and animal consciousness across all contexts.
-
There are a couple of reasons for this, and they predominantly center around Wittgenstein's ideas of language games, private language arguments, and his opposition to reductionism.
-
Heidegger's Dasein refers to the unique manner in which humans exist, embodying a subjective, in-the-world mode of being.
-
This restriction will stop these people from proliferating and creating fully open source LLMs based on Llama 2 outputs. So, say goodbye to improvements in some of the open source models.
||JovanNj|| Will Llama 2 be a new language model?
-
-
miniszterelnok.hu
-
Bargaining is possible in relation to issues linked to tactical time – or even strategic time; but never on issues that belong to historical time.
-
by rejecting Christianity, we have in fact become hedonistic pagans.
||sorina|| "Hedonistic pagans" is a new term.
-
thought that the rejection of religion and Christianity would be followed by the emergence of an ideal, enlightened community based on an understanding of the good and the common good, living a free and superior life according to recognised, sociologically based societal truths.
-
with spiritual foundations in mind, and digging a shovelful deeper, it is also worth saying that at the base of the Hungarian Constitution and the intellectual foundations of the new era there lies an anthropological insight
-
If you read the constitutions of other European countries, which are liberal constitutions, you will see that at the heart of them is the “I”. If you read the Hungarian Constitution, you will see that it is centred on the “we”.
-
the federalists are carrying out an attempt to oust us; they have openly said that they wanted a change of government in Hungary.
-
These could only be introduced in the European Union because the British left and we V4 members could not prevent them – and indeed the V4 was attacked by the federalists. We can all see the result.
-
I am not even going to talk about clever little European tricks such as the sudden doubling – in a single year – of the volume of goods exported from Germany to Kazakhstan.
-
either to decouple, or to participate in international competition. As they say in Brussels, “de-risking or connectivity”.
-
The amount paid for the European Union’s imports of gas and oil – the two together – was 300 billion euros before the Russian war, and 653 billion euros last year.
-
They call this seclusion “decoupling” – or, more subtly, “de-risking”, which is a form of risk reduction.
-
by the size of their economies, by their national GDPs, we see that in the rankings for 2030 Britain, Italy and France will have dropped out of the top ten where they still are today; and Germany – now fourth – will have slid down to tenth place.
-
“native genocide”, which I think means the extermination of indigenous peoples; slavery and the slave trade; and “reparatory justice”, meaning reparations for injustices.
-
The EU has about 400 million people; and if I add in the rest of the Western world, that is another 400 million. So this amounts to 800 million people, surrounded by another seven billion. And the European Union has an accurate view of itself: it is a rich union, but a weak one.
-
the settling of the new equilibrium will not happen overnight – or even from one month to the next
-
the “Thucydides Trap”,
-
Experience shows that the dominant great power tends to see itself as more benevolent and better-intentioned than it really is, and attributes malice to its challenger more often than is – or should be – justified
-
The bad news is that of the sixteen instances thus identified, twelve have ended in war, and only four were peacefully resolved
-
And it can neutralise the chief US weapon, the chief US weapon of power, which we call “universal values”
-
“Ending the century of humiliation” – or, to paraphrase the Americans, “Make China Great Again”.
-
So China has quite simply created its own: we see the BRICS and the One Belt One Road Initiative; and we also see the Asian Infrastructure Investment Bank, the development resources of which are several times greater than the development resources of all the Western countries.
-
there are no eternal winners and no eternal losers
-
In 2010 the US and the European Union contributed 22 – 23 per cent of total world production; today the US contributes 25 per cent and the European Union 17 per cent. In other words, the US has successfully repelled the European Union’s attempt to move up alongside it – or even ahead of it.
||sorina|| A very interesting statistic.
-
we also had a plan, which we expressed as the need to create a great free trade zone stretching from Lisbon to Vladivostok.
-
after its own civil war, from the 1870s onwards the United States grew to be the preeminent country, and its inalienable right to world economic supremacy is part of its national identity, and a kind of article of faith.
-
What has happened is that China has made the roughly three-hundred-year journey from the Western industrial revolution to the global information revolution in just thirty years.
-
But it has turned out that in fact this issue, the liberation of China, belongs to the historical timeframe; because as a result of that liberation, the United States – and all of us – are now facing a greater force than the one we wanted to defeat.
-
Back then the US decided to free China from its isolation, obviously to make it easier to deal with the Russians; and so it put that issue in the strategic timeframe.
-
tactical time, strategic time, and historical time.
Three historical times inspired by Braudel's thinking.
-
you have to simultaneously visualise three timeframes
-
then today “Western values” mean three things: migration, LGBTQ, and war.
||sorina|| There is an interesting discussion of Orbán in Romania. I do not understand Hungarian-Romanian dynamics well; you will know them better.
-
because the Ministry of Foreign Affairs of Romania – which I understand to belong more to the presidential branch of power – has come to my aid and sent me a démarche.
-
-
www.whitehouse.gov
-
the G-7 Hiroshima Process—as a critical forum for developing shared principles for the governance of AI—as well as the United Kingdom’s leadership in hosting a Summit on AI Safety, and India’s leadership as Chair of the Global Partnership on AI.
||sorina|| 3 key processes for the USA on AI.
-
It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
-
model weights.
||JovanNj|| We need to focus on model weights issues.
-
safety, security, and trust
-
President Biden is convening seven leading AI companies at the White House today – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to announce that the Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.
-
-
arxiv.org
-
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations:
Four institutional models for AI governance
-
-
dev-ai.diplomacy.edu:8541
-
the Media in the Digital Age.
-
-
dev-ai.diplomacy.edu:8541
-
But if the superforecasters are so good at predictions, and experts have so much topic-specific knowledge, you might at least expect the two groups to influence each other’s beliefs.
-
But superforecasters and AI experts seemed to hold very different views of how societies might respond to small-scale damage caused by AI . Superforecasters tended to think that would prompt heavy scrutiny and regulation to head off bigger problems later. Domain experts, by contrast, tended to think that commercial and geopolitical incentives might outweigh worries about safety, even after real harm had been caused.
-
Such people share a few characteristics, such as careful, numerical thinking and an awareness of the cognitive biases that might lead them astray.
Q: Who are superforecasters?
-
The emergence of modern, powerful machine-learning models dates to the early years of the 2010s. And the field is still developing quickly. That leaves much less data on which to base predictions.
-
If humans used AI to help design more potent bioweapons, for instance, it would have contributed fundamentally, albeit indirectly, to the disaster.
-
One reason for AI ’s strong showing, says Dan Maryland, a superforecaster who participated in the study, is that it acts as a “force multiplier” on other risks like nuclear weapons
-
The median superforecaster reckoned there was a 2.1% chance of an AI -caused catastrophe, and a 0.38% chance of an AI -caused extinction, by the end of the century. AI experts, by contrast, assigned the two events a 12% and 3% chance, respectively.
Q: What is the likelihood of catastrophe or extinction?
-
A “catastrophe” was defined as something that killed a mere 10% of the humans in the world, or around 800m people. (The second world war, by way of comparison, is estimated to have killed about 3% of the world’s population of 2bn at the time.) An “extinction”, on the other hand, was defined as an event that wiped out everyone with the possible exception of, at most, 5,000 lucky (or unlucky) souls.
Q: What is the difference between catastrophe and extinction?
-
On the one hand were subject-matter, or “domain”, experts in nuclear war, bio-weapons, AI and even extinction itself. On the other were a group of “superforecasters”—general-purpose prognosticators with a proven record of making accurate predictions on all sorts of topics, from election results to the outbreak of wars.
-
These days, worries about “existential risks”—those that pose a threat to humanity as a species, rather than to individuals—are not confined to military scientists. Nuclear war; nuclear winter; plagues (whether natural, like covid-19, or engineered); asteroid strikes and more could all wipe out most or all of the human race. The newest doomsday threat is artificial intelligence ( ai ). In May a group of luminaries in the field signed a one-sentence open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
-
-
dev-ai.diplomacy.edu:8541
-
India insists data must be stored locally: to give its law-enforcement agencies easy access, to protect against foreign snooping and as a way to boost investment in the tech sector.
-
most governments lack policymakers with relevant technical expertise and most digital issues cut across different domains, extending beyond the traditional remit of trade negotiators.
-
In 2019 Abe Shinzo, the late Japanese prime minister, proposed the concept of Data Free Flow with Trust. That rather nebulous idea is materialising as a set of global norms to counter digital protectionism. As Matthew Goodman of CSIS, a think-tank in Washington, puts it: “It’s about the un-China approach to data governance.”
-
“There’s a vacuum in terms of rules, norms and agreements that govern digital trade,” laments Nigel Cory of the Information Technology and Innovation Foundation, a research institute in Washington .
-
The Philippines and Guam have emerged as attractive substitutes.
-
Hong Kong was traditionally one of three major data hubs in Asia, with Japan and Singapore.
-
Intra-Asia data flows make up over 50% of the region’s bandwidth, up from 47% in 2018, while the share going to America and Canada has dipped from 40% to 34% over the same period.
-
The most congested cable route in Asia is also its most contested: the South China Sea is the “main street” of submarine cables, especially between Japan, Singapore and Hong Kong, notes Murai Jun, a Japanese internet pioneer.
-
“Customers are asking more about the security of cables and routes,” says Uchiyama Kazuaki of NTT World Engineering Marine Corporation, the firm that owns the Kizuna.
-
Aside from a heavy state hand in China’s cable industry, such infrastructure tends to be privately financed and owned. A small handful of companies dominate the production and installation of cables; big tech firms are their main users.
-
While in the past constructing internet infrastructure tended to be a “collaborative effort” between countries and between firms, in recent years its enabling environment has soured amid growing friction between China and America.
-
Asia saw international bandwidth usage grow by 39% in 2022, compared to the global average of 36%, according to TeleGeography, a research firm.
-
-
dev-ai.diplomacy.edu:8541
-
migrant and refugee
-
-
dev-ai.diplomacy.edu:8541
-
Our continued investment in innovation and our specific regulatory environment is what helped us lead the world in critical tech industries like quantum computing.
I disagree with this point.
-
The average American is already starting to see the benefits of AI technology in accessibility, efficiency, and reduction of human error.
-
-
dev-ai.diplomacy.edu:8541
-
The resulting generative AI models need not be trained from scratch but can build upon open-source generative AI that has used lawfully sourced content.
-
Vendor and customer contracts can include AI-related language added to confidentiality provisions in order to bar receiving parties from inputting confidential information of the information-disclosing parties into text prompts of AI tools.
-
they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feed their AI.
-
Developing these audit trails would assure companies are prepared if (or, more likely, when) customers start including demands for them in contracts as a form of insurance that the vendor’s works aren’t willfully, or unintentionally, derivative without authorization
-
would increase transparency about the works included in the training data.
-
has announced that artists will be able to opt out of the next generation of the image generator.
-
Stable Diffusion, Midjourney and others have created their models based on the LAION-5B dataset, which contains almost six billion tagged images compiled from scraping the web indiscriminately, and is known to include a substantial number of copyrighted creations.
-
Customers of AI tools should ask providers whether their models were trained with any protected content
-
There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools.
-
to become unequivocally “transformative,”
-
Google successfully defended itself against a lawsuit by arguing that transformative use allowed for the scraping of text from books to create its search engine, and for the time being, this decision remains precedential.
-
without the owner’s permission “for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,”
-
the interpretation of the fair use doctrine,
-
the bounds of what is a “derivative work” under intellectual property laws
-
Getty, an image licensing service, filed a lawsuit against the creators of Stable Diffusion alleging the improper use of its photos, both violating copyright and trademark rights it has in its watermarked photograph collection.
-
If a court finds that the AI’s works are unauthorized and derivative, substantial infringement penalties can apply.
-
Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles
-
how the laws on the books should be applied
-
Does copyright, patent, or trademark infringement apply to AI creations?
-
This process comes with legal risks, including intellectual property infringement
-
to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.
-
-
dev-ai.diplomacy.edu:8541
-
With humans and AI working to their respective strengths, they can transform unknown unknowns into known unknowns, opening the door to breakthrough thinking: logical and conceptual leaps that neither could make without the other.
-
our brains work in a reductive manner; we generate ideas and then explain them to other people.
-
By focusing on areas where the human brain and machines complement one another.
-
the technology is fundamentally backward-looking, trained on yesterday’s data — and the future might not look anything like the past
-
How might you use those opportunities to throw people off balance so they’ll generate questions that reach beyond what they intellectually know to be right, what makes them emotionally comfortable, and what they are accustomed to saying and doing?
-
Increased question velocity, variety, and especially novelty facilitate recognizing where you’re intellectually wrong, and becoming emotionally uncomfortable and behaviorally quiet — the very conditions that, we’ve found, tend to produce game-changing lines of inquiry.
-
AI can take really obscure variables and make novel connections.
-
sift through much more data, and connect more dots,
-
“category jumping” questions — the gold standard of innovative inquiry
-
uncover patterns and correlations in large volumes of data — connections that humans can easily miss without the technology
-
more questions don’t necessarily amount to better questions, which means you’ll still need to exercise human judgment in deciding how to proceed.
-
we found that 79% of respondents asked more questions, 18% asked the same amount, and 3% asked fewer.
-
humans can start exploring the power of more context-dependent and nuanced questions
-
to reveal deeply buried patterns in the data
-
we’ve defined “artificial intelligence” broadly to include machine learning, deep learning, robotics, and the recent explosion of generative AI.)
-
design-thinking sessions
-
to help people become more inquisitive, creative problem-solvers on the job.
-
can help people ask smarter questions, which in turn makes them better problem solvers and breakthrough innovators.
-
from identification to ideation.
-
Paired with “soft” inquiry-related skills such as critical thinking, innovation, active learning, complex problem solving, creativity, originality, and initiative, this technology can further our understanding of an increasingly complex world
-
it can help people ask better questions and be more innovative.
-
still view AI rather narrowly, as a tool that alleviates the costs and inefficiencies of repetitive human labor and increasing organizations’ capacity to produce, process, and analyze piles and piles of data
-
AI increases question velocity, question variety, and question novelty.
-
-
dev-ai.diplomacy.edu:8541
-
“Cables are an enormous lever of power,” Wicker said. “If you can’t control these networks directly, you want a company you can trust to control them.”
-
In 1997, AT&T sold its cable-laying operation, including a fleet of ships, to Tyco International, a security company based in New Jersey. In 2018, Tyco sold the cable unit, by this time dubbed TE SubCom, for $325 million to Cerberus, the New York private equity firm.
-
That project, known as the Oman Australia Cable, was spearheaded by SUBCO, a Brisbane-based subsea cable investment company owned by Australian entrepreneur Bevan Slattery.
-
“Silicon Valley is waking up to the reality that it has to pick a side,”
-
Microsoft – whose President Brad Smith said in 2017 that the tech sector needed to be a “neutral digital Switzerland” – announced in May that it had discovered Chinese state-sponsored hackers targeting U.S. critical infrastructure, a rare example of a big tech firm calling out Beijing for espionage.
-
America’s SubCom, Japan’s NEC Corporation, France’s Alcatel Submarine Networks and China’s HMN Tech.
-
First, Washington needs SubCom to expand the Navy’s undersea cable network so that it can better coordinate military operations and enhance surveillance on China’s expanding fleet of submarines and warships, the people said. Second, the Biden administration wants SubCom to build more commercial subsea internet cables controlled by U.S. companies, a strategy aimed at ensuring that America remains the primary custodian of the internet, according to the two industry officials.
-
Subsea cables are vulnerable to sabotage and espionage, and Beijing and Washington have accused each other of tapping cables to spy on data or carry out cyberattacks.
-
SubCom’s journey from Cold War experiment to global cable constructor and now a shadowy player in the U.S.-China tech war is detailed in this story for the first time.
-
This dual role has made SubCom increasingly valuable to Washington as global internet infrastructure – from undersea cables to data centers and 5G mobile networks – risks fracturing into two systems, one backed by the United States, the other controlled by China.
-
SubCom is the exclusive undersea cable contractor to the U.S. military, laying a web of internet and surveillance cables across the ocean floor
-
The CS Dependable is owned by SubCom, a small-town New Jersey cable manufacturer that’s playing an outsized role in a race between the United States and China to control advanced military and digital technologies that could decide which country emerges as the world’s preeminent superpower.
-
-
dev-ai.diplomacy.edu:8541
-
The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.
-
Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.
-
either through training or policies — include:
good strategies.
-
Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work.
-
To realize opportunities and manage potential risks of generative AI applications to knowledge management, companies need to develop a culture of transparency and accountability that would make generative AI-based knowledge management systems successful.
-
User prompts into publicly-available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.
Our model
-
the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
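A hedged sketch of how such "pre-prompts" might look. The wording, the blocked-topic list, and the `should_answer` gate below are illustrative assumptions, not the company's actual setup.

```python
# Hypothetical "pre-prompt": a fixed system message that constrains what
# the assistant will answer, plus a crude keyword gate as a fallback.

PRE_PROMPT = (
    "You are a company knowledge assistant. Answer questions about "
    "published policies and products. Politely decline questions about "
    "legal advice, personnel matters, or unreleased financials."
)

BLOCKED_TOPICS = {"legal advice", "personnel", "unreleased financials"}

def should_answer(question: str) -> bool:
    """Return False if the question touches a topic the pre-prompt rules out."""
    q = question.lower()
    return not any(topic in q for topic in BLOCKED_TOPICS)

messages = [
    {"role": "system", "content": PRE_PROMPT},
    {"role": "user", "content": "What is our refund policy?"},
]
print(should_answer(messages[-1]["content"]))  # True
```

In practice the steering is mostly done by the model itself via the system message; the keyword check is only a crude illustration of the same intent.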
-
many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws).
-
For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks.
-
The good news is that companies who have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.
-
if content authors are aware of how to create effective documents.
This is basically embedding SEO in the process of creating documents.
-
Most companies that do not have well-curated content will find it challenging to do so for just this purpose.
Here is why our textus approach integrates curation in the process.
-
Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these determine the suitability for incorporation into the GPT-4 system.
-
The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada).
Used by Diplo
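A toy sketch of embedding-based retrieval as described above. The `embed` function below is a word-count stand-in for a real pre-trained model such as OpenAI's Ada, and the documents and query are invented; only the shape of the approach (text → numeric vectors → similarity search) matches the article.

```python
# Embeddings turn text into arrays of numbers; retrieval then finds the
# stored document whose vector is most similar to the query's vector.
import math
from collections import Counter

VOCAB = ["fee", "fund", "tax", "rate", "risk"]  # toy vocabulary

def embed(text: str) -> list[float]:
    """Toy embedding: word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["fund fee schedule", "tax rate tables", "risk disclosures"]
vectors = [embed(d) for d in docs]          # precomputed once, stored
query = embed("what is the fee for this fund")
best = max(range(len(docs)), key=lambda i: cosine(query, vectors[i]))
print(docs[best])  # "fund fee schedule"
```

A production system would swap `embed` for calls to the embedding model and store the vectors in a vector database instead of a Python list.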
-
The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.
-
Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge.
-
It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors.
-
with Google’s general PaLM2 LLM
-
to add specific domain content to a system that is already trained on general knowledge and language-based interaction.
-
Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time.
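A quick sanity check on the scale these figures imply (taking the training corpus as roughly 700 billion tokens):

```python
# Ratio implied by the reported BloombergGPT training figures:
# ~700 billion tokens vs. ~350 billion words.
tokens = 700e9
words = 350e9
print(words / tokens)  # 0.5 words per token, i.e. about 2 tokens per word
```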
-
some of the same factors that made knowledge management difficult in the past are still present.
-
the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task
Diplo was there in the pioneering phase of knowledge management.
-
a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers.
One of the areas where AI can help.
-
can’t respond to prompts or questions regarding proprietary content or knowledge.
This is the key aspect.
-
to express complex ideas in articulate language
-
knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings.
Capturing various sources of knowledge.
-
through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how.
-
-
-
We have to be very, very careful about ensuring that it doesn’t come across as AI surveillance,
-
introducing AI ethicists
-
While there are laws being whipped up around how employers should ethically and responsibly implement AI around, for example, hiring, so as to ensure job posts aren’t discriminatory in any way (like NYC’s Law 144), there are still relatively few guardrails.
NYC Law 144 on AI and employment. ||sorina||
-
more traditional kind, which detects patterns from data and provides predictive analysis, which has been used for the last few years by some businesses, but isn’t yet mainstream.
-
they need to invest in reskilling their workforces
-
A bit like the customer relationship management software that airlines use, to create a more personalized travel experience.
||Andrej|| We can have something like a 'customised' student experience.
-
“I’m a big believer in human interaction and connection. I don’t think that will go away. Automation and speed will help us do different things, focusing on more strategic, creative endeavors. But it will still be on us to understand our people on a human level.”
-
The theory goes, that if people are saving time on more administrative, tedious aspects of their work, they’ll have more brain capacity for creative and strategic thinking.
-
improve the meaning and purpose of work.
-
“Knowing that AI will change many roles in the workforce, CEOs should encourage people to embrace experimentation with the technology and communicate the upskilling opportunities for them to work in tandem with the AI, rather than be replaced by it,”
||Jovan||
-
“AI is not a technology that can be given to a CTO or a similar position, it is a tool that must be understood by the CEO and the entire management team.”
-
four “trust-building” processes: improve the governance of AI systems and processes; confirm AI-driven decisions are interpretable and easily explainable; monitor and report on AI model performance; and protect AI systems from cyber threats and manipulations, per a 2023 Trust survey from management consultancy PwC.
-
to road test AI.
-
but leaders still need to be able to frame the issues, make the smart calls and ensure they’re well executed.”
-
the human element of interpreting data and asking AI the right questions to make sound judgments, will be crucial.
-
Most (94%) business leaders agree that AI is critical for success, per a 2022 Deloitte report.
-
-
-
we must not dismiss legitimate concerns regarding the alignment problem
-
We should prioritize targeted regulation for specific applications — recognizing that each domain comes with its own set of ethical dilemmas and policy challenges.
-
shift in the allocation of National Science Foundation (NSF) funding toward responsible AI research
-