- Oct 2023
-
assets.publishing.service.gov.uk
-
Paul Christiano
A very big name on the EA forum: https://forum.effectivealtruism.org/topics/paul-christiano
-
Capabilities and risks from frontier AI
||Jovan|| ||sorina|| ||JovanNj||
This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).
-
-
www.cnbc.com
-
On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”
||Jovan||||sorina||||JovanNj||
This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.
Also, it shouldn't surprise us that the UK government is the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based at and attached to UK universities (Oxbridge).
-
-
www.euractiv.com
-
that the criteria apply,
What are the criteria?
-
that maintains horizontal exemption conditions and largely overlooks the negative legal opinion.
They ignored the negative legal opinion.
-
harshly criticised by the European Parliament’s legal office
||sorina||||wuATdiplomacy.edu|| Is there any article about this criticism?
-
under a pre-set list of critical use cases were deemed automatically high-risk
||sorina||||wuATdiplomacy.edu|| Where is this pre-set list of critical use cases in the current version of the EU AI Act? https://dig.watch/resource/eu-ai-act-proposed-amendments-structured-view
-
-
study.diplomacy.edu
-
I usually always thought of hackers
It's what my family and friends think when I tell them about cyber and my work :D
-
-
www.oecd.org
-
Inclusive Framework on BEPS
I have doubts about this statement by the OECD.
-
-
www.oecd.org
-
The GloBE rules provide for a co-ordinated system of taxation intended to ensure large MNE groups pay this minimum level of tax on income arising in each of the jurisdictions in which they operate. The rules create a “top-up tax” to be applied on profits in any jurisdiction whenever the effective tax rate, determined on a jurisdictional basis, is below the minimum 15% rate.
@john This paragraph relates to our latest discussion on Taxation of tech companies
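The top-up mechanism in the highlighted paragraph can be sketched as a small calculation. This is a simplified illustration only: the real GloBE rules include substance-based carve-outs and other adjustments not modelled here.

```python
def top_up_tax(profit, covered_taxes, minimum_rate=0.15):
    """Top-up tax for one jurisdiction under the GloBE rules (simplified).

    The effective tax rate (ETR) is determined per jurisdiction; if it falls
    below the 15% minimum, the shortfall is applied to the profit there.
    """
    etr = covered_taxes / profit
    if etr >= minimum_rate:
        return 0.0  # jurisdiction already meets the minimum rate
    return (minimum_rate - etr) * profit

# A group earning 1,000 in a jurisdiction where it paid 90 in tax (ETR 9%)
# owes a top-up of (15% - 9%) * 1,000:
print(round(top_up_tax(1000, 90), 2))  # → 60.0
print(top_up_tax(1000, 200))           # → 0.0  (ETR 20%, no top-up)
```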
-
-
hbr.org
-
Can I capture my vision in a page? A paragraph? A word?
Key question.
-
-
www.european-cyber-resilience-act.com
-
The European Cyber Resilience Act (CRA)
Sources of data and documents
-
-
study.diplomacy.edu
-
I think that cooperation is important for both developed and developing countries, and I think that most of the challenges come from competing interests
-
-
crfm.stanford.edu
-
We emphasize that EU policymakers should consider strengthening deployment requirements for entities that bring foundation models to market to ensure there is sufficient accountability across the digital supply chain.
Pyramid of evaluation.
-
Open releases generally achieve strong scores on resource disclosure requirements (both data and compute), with EleutherAI receiving 19/20 for these categories.
Open-release foundation models are more compliant with the law than closed ones.
@jovak@diplomacy.edu
-
the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions.
Votes in the EU Parliament
-
-
-
We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.
@Jovan @Sorina
The very same people warning us of extinction-level AI risks are the same people who are developing technologies in a way that leads us to it. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.
-
What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.
@Jovan @Sorina
This is my concern with solely using a longtermist view to make policy judgments.
-
-
www.aljazeera.com
-
Israeli forces were still bombing Gaza and fighting with Hamas gunmen in parts of southern Israel in the early hours of Sunday and a spokesman for the military said the situation in the country was not totally under control.
||sorina|| What do you think about this argument?
-
-
ainowinstitute.org
-
Large-scale compute is also environmentally unsustainable: chips are highly toxic to produce and require an enormous amount of energy to manufacture: for example, TSMC on its own accounts for 4.8 percent of Taiwan’s national energy consumption, more than the entire capital city of Taipei. Running data centers is likewise environmentally very costly: estimates equate every prompt run on ChatGPT to the equivalent of pouring out an entire bottle of water.
Environmental aspects of AI production.
-
access to compute—along with data and skilled labor—is a key component
Three components for AI: computing, data, and skilled labour
-
- Sep 2023
-
circleid.com
-
Artificial intelligence (AI) is in the news every day and corporate strategies are evolving to adapt our businesses to AI use.
.AI and Anguilla ||Jovan||
-
-
www.nytimes.com
-
2 Senators Propose Bipartisan Framework for A.I. Laws
||Jovan||
-
-
www.technologyreview.com
-
Inside effective altruism, where the far future counts a lot more than the present
||Jovan|| A good explanation of what the effective altruism movement is and how it came to populate the current political discourse of AI governance with terms like "fate-defining moment" and "threat against humanity".
-
-
www.newyorker.com
-
The Reluctant Prophet of Effective Altruism
||Jovan||
This is an in-depth interview with the original founder of the effective altruism movement. There have been several iterations of the same philanthropic/philosophical movement, which turned political and ambitious very quickly.
-
-
-
ChatGPT-maker OpenAI and The Associated Press said Thursday that they’ve made a deal for the artificial intelligence company to license AP’s archive of news stories.
Deal between OpenAI and Associated Press.
-
-
apnews.com
-
“It’s like asking, ‘should the newsroom use the Internet?’ in the 1990s,” Tofel said. “The answer is yes, but not stupidly.”
Should journalists use AI? ||Jovan||
-
-
blog.ruanbekker.com
-
Restore the Index by Importing the Mapping
test Dusan ES
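Restoring an index from an exported mapping boils down to a PUT request that recreates the index with the saved mapping as its body. A minimal sketch, with hypothetical host and index names, that builds (rather than sends) the request so the payload can be inspected first:

```python
import json

def build_restore_request(host, index, mapping_file_contents):
    """Build the PUT request that recreates an index from an exported mapping.

    Roughly equivalent to:
      curl -XPUT "$host/$index" -H 'Content-Type: application/json' -d @mapping.json
    Assumes the mapping JSON is the output of GET /<index>/_mapping with the
    index-name wrapper already stripped. Host and index names are placeholders.
    """
    body = json.loads(mapping_file_contents)
    return {
        "method": "PUT",
        "url": f"{host}/{index}",
        "headers": {"Content-Type": "application/json"},
        "json": {"mappings": body["mappings"]} if "mappings" in body else body,
    }

req = build_restore_request(
    "http://localhost:9200",
    "my-restored-index",
    '{"mappings": {"properties": {"title": {"type": "text"}}}}',
)
print(req["method"], req["url"])  # → PUT http://localhost:9200/my-restored-index
```

The resulting dict can be passed straight to an HTTP client (for example `requests.request(**req)`), after which documents are reindexed into the freshly mapped index.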
-
-
blog.goranrakic.com
-
Users need [translated from Serbian: "Korisnicima treba"]
DeltaDigit user test Dusan
-
-
www.aljazeera.com
-
New Delhi’s decision reflected its “growing concern at the interference of Canadian diplomats in our internal matters and their involvement in anti-India activities”, the ministry said in a statement on Tuesday.
||sorina|| It is relevant for our course on public diplomacy.
-
-
www.wolframalpha.com
-
Timeline of Systematic Data and the Development of Computable Knowledge
Timeline of Systematic Data and the Development of Computable Knowledge.
-
-
www.passblue.com
-
I think there is fatigue. If you ask the average New Yorker what the SDGs are, I’m not sure they’re going to be able to respond.
-
the hardening divides between the West vs. the global South, with the two camps mainly feuding over the reform of such financial institutions as the World Bank and the International Monetary Fund.
-
Only 15 percent of all 17 goals have been met, and 48 percent are off track and have either stagnated or regressed.
-
-
www.diplomacy.edu
-
The 80th Session of the United Nations General Assembly, a High-Level Event on Science, Technology and Innovation for Development
-
the role of multi-stakeholder partnerships to foster strategic long-term investment in supporting the development of science, technology and innovation in developing countries,
-
including the Global Digital Compact, aligned with the Sustainable Development Goals, should be considered, which should offer preferential access for developing countries to relevant advanced technologies
-
triangular cooperation projects
-
inequalities in data generation
-
We acknowledge that all technological barriers, inter alia, as reported by the IPCC, limit adaptation to climate change and the implementation of the National Determined Contributions (NDCs) of developing countries
-
We note the central role of Governments, with the active contribution from stakeholders from the private sector, civil society, academia and research institutions, in creating and supporting an enabling environment at all levels, including enabling regulatory and governance frameworks
-
the expansion of open-science models
-
the knowledge produced by research and innovation activities can have in designing better public policies
-
the Tunis Agenda and the Geneva Declaration of Principles and plan of action shall lay down the guiding principles for digital cooperation.
-
to ensure that the World Summit on the Information Society (WSIS+20) General Review process, the Global Digital Compact and the Summit of the Future contribute to, inter alia, the achievement of sustainable development and closing the digital divide between developed and developing countries.
-
close alignment between the World Summit on the Information Society process and the 2030 Agenda for Sustainable Development,
-
the full implementation of the 2030 Agenda and the Addis Ababa Action Agenda
-
has the potential to resolve and minimize trade-offs among the Goals and targets,
-
an open, fair, inclusive and non-discriminatory environment for scientific and technological development.
-
stakeholders
-
-
www.theverge.com
-
The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
-
There were listings for AI trainers with expertise in health coaching, human resources, finance, economics, data science, programming, computer science, chemistry, biology, accounting, taxes, nutrition, physics, travel, K-12 education, sports journalism, and self-help. You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry.
-
One way the AI industry differs from manufacturers of phones and cars is in its fluidity. The work is constantly changing, constantly getting automated away and replaced with new needs for new types of data. It’s an assembly line but one that can be endlessly and instantly reconfigured, moving to wherever there is the right combination of skills, bandwidth, and wages.
-
This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.
-
Taskup.ai, DataAnnotation.tech, and Gethybrid.io all appear to be owned by the same company: Surge AI. Its CEO, Edwin Chen, would neither confirm nor deny the connection, but he was willing to talk about his company and how he sees annotation evolving.
-
Often their work involved training chatbots, though with higher-quality expectations and more specialized purposes than other sites they had worked for.
-
“If there was one thing I could change, I would just like to have more information about what happens on the other end,” he said. “We only know as much as we need to know to get work done, but if I could know more, then maybe I could get more established and perhaps pursue this as a career.”
-
One engineer told me about buying examples of Socratic dialogues for up to $300 a pop. Another told me about paying $15 for a “darkly funny limerick about a goldfish.”
-
But if you want to train a model to do legal research, you need someone with training in law, and this gets expensive.
-
to be looking at their accuracy, helpfulness, and harmlessness
-
The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them
||JovanNj|| Our annotations could do this.
-
Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.”
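The "human-feedback data" described in the highlighted passage is essentially a record of pairwise preferences: two model responses, one human choice. A minimal sketch of one such record; the field names (`prompt`, `chosen`, `rejected`) are assumptions loosely modelled on common RLHF preference datasets, not Sparrow's actual format:

```python
def collect_preference_pair(prompt, response_a, response_b, annotator_choice):
    """Record one comparison: the model produced two responses and the
    annotator picked the better one ('a' or 'b')."""
    chosen, rejected = (
        (response_a, response_b) if annotator_choice == "a" else (response_b, response_a)
    )
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pair = collect_preference_pair(
    "What is the capital of France?",
    "Paris is the capital of France.",
    "I don't know.",
    annotator_choice="a",
)
# The accumulated pairs become training data for a reward model, which in
# turn steers the chatbot via reinforcement learning.
print(pair["chosen"])  # → Paris is the capital of France.
```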
-
“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”
-
Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date.
-
According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.
-
Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use.
taxonomies
-
When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious.
-
coherent processes broken into tasks and arrayed along assembly lines with some steps done by machines and some by humans but none resembling what came before.
-
“AI doesn’t replace work,” he said. “But it does change how work is organized.”
-
A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.”
-
Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.
-
Mechanical Turk and Clickworker
-
CloudFactory,
-
Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.”
-
The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them.
-
Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data.
-
The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
-
The anthropologist David Graeber defines “bullshit jobs” as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not.
-
But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused.
-
Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers
-
Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence.
-
-
writings.stephenwolfram.com
-
LLMs are through and through based on language and patterns to be found through it.
-
the very success of LLMs in the commonsense arena strongly suggests that you don’t fundamentally need deep “structured logic” for that.
-
One of the surprises of LLMs is that they often seem, in effect, to use logic, even though there’s nothing in their setup that explicitly involves logic.
-
“symbolic discourse language”
-
particular formal system that described certain kinds of things
-
we weren’t trying to use just logic to represent the world, we were using the full power and richness of computation.
-
Lenat–Haase representation-languages
-
the problem of commonsense knowledge and commonsense reasoning.
-
heuristics: strategies for guessing how one might “jump ahead”
-
a very classic approach to formalizing the world
-
Encode knowledge about the world in the form of statements of logic.
-
was it all just an “engineering problem” that simply required pulling together a bigger and better “expert system”?
-
to use the framework of logic—in more or less the same form that Aristotle and Leibniz had it—to capture what happens in the world.
Aristotle and Leibniz line in pure logic
-
-
-
She rejects notions of progress, she is despairing of representative democracy, and she is not confident that freedom can be saved in the modern world.
-
-
-
Like HTML, PDF facilitates the user’s choice of device and operating system. Unlike HTML, PDF does not assume that remote servers or content are available.
-
From the vantage-point of 2023 we are positioned to recognize 1993 as a year of two key developments; the first specification of HTML, the language of the web, and the first specification of PDF, the language of documents.
-
-
samuelschmitt.com
-
the plugin “Permalink Manager Lite” to manage the URL of the pages.
-
-
samuelschmitt.com
-
In April 2020, I ran a small SEO experiment with a blog post and transformed a long-form article into a topic cluster (also called content hub).
How to make a topic cluster on a website?
-
-
-
Strengthen telecommunications and data transfers thanks to a new undersea cable connecting the region.
Submarine cables are part of the new India - Middle East - Europe Economic Corridor
-
-
-
How soon could AI replace human workers?
This is the key decision.
-
a soul-crushing amount of change and uncertainty — is to methodically plan for the future.
-
how — and when — their workforce will need to change in order to leverage AI.
-
The software includes more than 500 functions — but the vast majority of people only use a few dozen, because they don’t fully understand how to match the enormous number of features Excel offers to their daily cognitive tasks.
-
Rather, they’ll need to learn how to leverage multimodal AI to do more, and better, work
-
Most workers won’t need to learn how to code, or how to write basic prompts, as we often hear at conferences.
-
so that both the human and the AI can accomplish more through collaboration than by working independently.
-
She might go back and forth a few times, using different data sources, until an optimal quote is received for both the insurance company and the customer.
-
near- and long-term scenarios for the myriad ways in which emerging tools will improve productivity and efficiency
-
That’s because AI systems aren’t static; they are improving incrementally over time.
-
aren’t planning for a future that includes an internal RHLF unit tasked with continuously monitoring, auditing, and tweaking AI systems and tools.
-
Essentially, AI systems need constant human feedback, or they run the risk of learning and remembering the wrong information.
-
By marketing their platforms to companies, they want to lock them (and their data) in.
-
Business data is invaluable because once a model has been trained, it can be costly and technically cumbersome to port those data over to another system.
-
AI is not a monolith, and we are just at the beginning of a very long trajectory.
-
it’s not good enough to actually use.
-
This happened again in 1987, when again, computer scientists and businesses made bold promises on a timeline for AI that was just never feasible.
-
AI cycles through phases that involve breakthroughs, surges of funding and fleeting moments of mainstream interest, followed by missed expectations and funding clawbacks.
-
using an iterative process to cultivate a ready workforce, and most importantly, creating evidence-backed future scenarios that challenge conventional thinking within the organization.
-
The workforce will need to evolve, and workers will have to learn new skills, iteratively and over a period of years
-
leaders are focused too narrowly on immediate gains, rather than how their value network will transform in the future
-
the output has to be proven trustworthy, integrated into existing workstreams, and managed for compliance, risk, and regulatory issues.
-
Exactly which jobs AI will eliminate, and when, is guesswork.
-
Within just a few years, powerful AI systems will perform cognitive work at the same level (or even above) their human workforce.
-
They all want to know how their companies can create more value using fewer human resources.
-
How soon could AI replace human workers?
-
-
www.reddit.com
-
To my mind, the true spiritual forefather of AI was Gottfried Wilhelm von Leibniz (1646 - 1716), with his ideas about a universal formal language that could encompass all of human knowledge. Another important figure in AI's pre-history is Ada Lovelace, who around 1850 imagined that Charles Babbage's Analytical Engine (which unfortunately was never actually built) could conceivably accomplish such tasks as playing chess and composing music.
Potential forerunners of AI. ||Jovan||
-
-
dig.watch
-
the Code of Conduct for Information Integrity on Digital Platforms that is being developed will be important.
||sorina|| Are you aware of this Code of Conduct?
-
There is convergence around the potential for a GDC to promote digital trust and security and to address disinformation, hate speech and other harmful online content.
||sorina|| No traditional cybersecurity. Only 'content safety'
-
There is broad consensus that the Internet Governance Forum (IGF) plays – and should continue to play – a key role in promoting the global and interoperable nature and governance of the Internet. The important roles played by IGF, ITU, UNCTAD, UNDP, UNESCO, WSIS and other UN entities, structures, and forums have been emphasized and that a GDC should not duplicate existing forums and processes.
||sorina|| These are probably the two key sentences arguing for the IGF and for avoiding duplication. Here, they pushed back against a new forum.
-
-
www.reddit.com
-
OpenAI models leaned left/libertarian, Google's BERT conservative, Meta's LLaMA right-authoritarian.
Political biases of AI models
-
-
www.reddit.com
-
Until the Kenyan government suspended the process, thousands of Kenyan citizens lined up to have their iris scanned using the Worldcoin orb. The amount of Worldcoin being offered to each person was estimated to be about $49.
Controversies about biometrics gathering in Kenya.
-
-
contextual.ai
-
We recognized this problem back in 2021 and argued that benchmarks should be as dynamic as the models they evaluate. To make this possible, we introduced a platform called Dynabench for creating living, and continuously evolving benchmarks. As part of the release, we created a figure that showed how quickly AI benchmarks were “saturating”, i.e., that state of the art systems were starting to surpass human performance on a variety of tasks.
Plotting AI benchmark saturation.
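"Saturation" in the Dynabench figure means state-of-the-art performance reaching or passing the human baseline. A tiny helper over hypothetical yearly SOTA scores (the numbers below are invented, not taken from the figure):

```python
def years_to_saturation(sota_by_year, human_baseline):
    """First year in which SOTA matches or exceeds the human baseline, or None."""
    for year, score in sorted(sota_by_year.items()):
        if score >= human_baseline:
            return year
    return None

# Invented ImageNet-like accuracy trajectory against a 0.95 human baseline:
imagenet_like = {2012: 0.63, 2015: 0.94, 2017: 0.97}
print(years_to_saturation(imagenet_like, human_baseline=0.95))  # → 2017
```

The article's point is that once a benchmark saturates like this, it stops discriminating between systems, which is why Dynabench proposes continuously evolving benchmarks instead.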
-
-
venturebeat.com
-
Specifically, Google hopes to unify and standardize the evaluation metrics for unlearning algorithms, as well as foster novel solutions to the problem.
-
to identify problematic datasets, exclude them and retrain the entire model from scratch
-
OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data.
-
-
www.reddit.com
-
The world doesn’t have alignment. How can AI.
Good point on alignment.
-
-
www.reddit.com
-
You don't just put this stuff in public when it's within striking distance of achieving ASI. That's just insanely stupid. Don't compare it to ANY previous technology. We want to limit to as little risk as possible.
-
On the surface it might seem like a bad thing to distribute dangerous technology, but the alternative is the lack of balance.
A good argument drawing on the balance of nuclear deterrence.
-
Even the transformer architecture was invented and released by Google researchers,
-
Everything they touched turned into a monopoly.
||Jovan|| It is a good point on big AI companies.
-
-
help.openai.com
-
Are there any resources for educators to learn more about AI?
Course materials on education and OpenAI
-
-
help.openai.com
-
their ability to interact with AI
-
support individual growth.
-
how their skills in asking questions, analyzing responses, and integrating information have developed.
-
students can reflect on their progress over time.
-
to review each other's work, fostering a collaborative environment.
-
to observe critical thinking and problem-solving approaches.
-
Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.
-
To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
What OpenAI learned by developing its own AI detector.
-
-
openai.com
-
The goal is to help them “understand the importance of constantly working on their original critical thinking, problem solving and creativity skills.”
-
Recommends teachers use ChatGPT as an assistant in crafting quizzes, exams and lesson plans for classes.
-
She says exploring information in a conversational setting helps students understand their material with added nuance and new perspective.
Importance of conversational setting for learning.
-
-
-
But we don’t know how fast it’s moving. We don’t know why it’s working when it’s working.
-
the story here really is about the unknowns.
-
We can’t look at how a person thinks and explain their reasoning by looking at the firings of the neurons.
-
Trying to build systems where by design, every piece of the system means something that we can understand.
-
Interpretability is this goal of being able to look inside our systems and say pretty clearly with pretty high confidence what they’re doing, why they’re doing it.
-
The paper describing GPT-4 talks about how when they first trained it, it could do a decent job of walking a layperson through building a biological weapons lab.
-
they’ve just got to wait and see.
-
They’re basically investing in a mystery box?
-
But that doesn’t mean we understand anything about the biology of that tree. We just kind of started the process, let it go, and try to nudge it around a little bit at the end.
-
There’s no explanation in terms of things like checkers moves or strategy or what we think the other player is going to do
-
steering these things almost completely through trial and error.
-
by just putting these systems out in the world and seeing what they do.
-
we don’t know how to steer these things or control them in any reliable way
-
We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.
-
“All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.”
-
reinforcement learning.
-
we know what word actually comes next, we can tell it if it got it right.
-
probability.
-
to guess what word is gonna come next
-
by basically doing autocomplete.
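The training loop the interview describes (guess the next word, check against the word that actually came next) can be sketched with a toy bigram "autocomplete". This is a deliberately minimal stand-in for what LLMs do at vastly larger scale:

```python
from collections import Counter, defaultdict

# Toy corpus; in a real LLM this is trillions of tokens of text.
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": for every word we know which word actually came next,
# so we can count, and later score, the model's guesses.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

# "the" was followed by cat (twice) and mat (once), so the guess is "cat":
print(predict_next("the"))  # → cat
```

Real models replace these counts with billions of learned weights, and (as the interview notes) the reinforcement-learning step then nudges whole responses up or down based on user feedback.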
-
-
kinews24.de
-
By deciphering the complex processes taking place in the networks, the opportunity for comprehensive standardization arises, improving the comparability and interoperability of models.
||sorina|| If we discover how deep-learning works, it will open new possibilities for AI standardisation.
-
by enabling the development of more efficient models that require less computing power.
-
The possibility for better interpretation of the models also has ethical implications. Through a deeper understanding of the decision-making processes, discriminatory or biased mechanisms in AI could be identified and eliminated. Additionally, this understanding facilitates communication with domain experts who are not AI specialists, thus promoting broader acceptance in society.
-
the challenge lies in deciphering the mechanisms behind the observed order in the neural networks.
-
The Law of Equi-Separation The empirical Law of Equi-Separation cuts through this chaos and reveals an underlying order within the deep neural networks. At its core, the law quantifies how these networks categorize data based on class membership across various layers. It shows a consistent pattern: The data separation improves in each layer geometrically and at a constant rate. This challenges the notion of a chaotic training process and instead reveals a structured and predictable process.
||Jovan||||anjadjATdiplomacy.edu|| Is The Law of Equi-Separation a possible solution for transparency of AI models?
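The law's claim of geometric, constant-rate improvement can be made concrete numerically. A sketch under assumed values: if a layer-wise separation measure D(l) shrinks by a constant factor rho per layer, i.e. D(l) = D(0) * rho**l, then log D(l) falls linearly with depth. The numbers below are illustrative; real values come from measuring a trained network.

```python
import math

D0, rho, depth = 1.0, 0.5, 8  # assumed initial fuzziness, per-layer factor, depth

# Geometric decay of the separation measure across layers:
separation = [D0 * rho**layer for layer in range(depth + 1)]

# The equi-separation law predicts the SAME improvement at every layer
# on a log scale, namely log(1/rho):
log_drops = [
    math.log(separation[l]) - math.log(separation[l + 1]) for l in range(depth)
]
print(all(abs(d - math.log(1 / rho)) < 1e-9 for d in log_drops))  # → True
```

In practice one would fit a line to the measured log-separation per layer; a good linear fit is the signature of the law, and deviations from it flag layers that are not pulling their weight.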
-
With their numerous layers and interwoven nodes, these models perform complex data processing that appears chaotic and unpredictable.
-
The fascination with artificial intelligence (AI) has long been shrouded in mystique, especially in the opaque realm of deep learning.
Magical fascination of AI
-
-
intelligencebriefing.substack.com
-
Bringing a CoE to life should follow the maxim of crawl, walk, run.
-
a clear understanding of what automation means for an organization is vital.
What automation means to specific organisations ||Jovan||
-
a CoE comes in — a team dedicated to promoting automation across the organization.
-
(such as hallucinations, IP infringement, amplified bias, etc.)
-
-
irmckenzie.co.uk
-
This task demonstrates the failure of language models to follow instructions when there is a popular continuation that does not fit with that instruction. Larger models are more hurt by this as the larger the model, the more familiar it is with common expressions and quotes.
||Jovan|| ||JovanNj||
An example of how the superb capability of an LLM at induction and association actually breaks; it also shows a little of how the LLM "thinks" (very differently from a human, who may or may not have memorized the Shakespeare quote by heart but would nonetheless understand that the prompt matters more than reciting the most plausible quote).
-
-
github.com
-
The standard paradigm in natural language processing today is to pretrain large language models to autocomplete text corpora. The resulting models are then either frozen and used directly for other tasks (zero-shot or using few-shot learning), or additionally trained on other tasks (fine-tuning).
It might be interesting to carry out similar tasks for the model that Diplo is fine-tuning--to see where its algorithmic reasoning will break.
Also, it might make a good comparison study to show on paper how the Diplo model works better with higher-quality data (a bottom-up approach). It would be good evidence to present, I suppose. ||Jovan||||JovanNj||
-
The purpose of this contest is to find evidence for a stronger failure mode: tasks where language models get worse as they become better at language modeling (next word prediction).
||Jovan|| ||JovanNj||
I found this interesting because it speaks to a problem I faced when writing the speech for the Digital Cooperation Day "Chair of future generations" using ChatGPT. The model was very good at generating a quote that doesn't exist and was never said by the person to whom it was attributed.
The fabrication is plausible because, within the "reality" the model inhabits, multiple data sources made it probable that "this quote might have existed, and it makes sense that this quote follows that quote and follows that name and that organization." It is interesting to see where a model that excels at inductive reasoning and association sometimes fails, because induction and association are not the only forms of reasoning humans use to approach reality.
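The contest's failure mode can be made concrete with a minimal, self-contained sketch. The function name and the mocked model outputs below are hypothetical (no real model is queried); the point is only to illustrate how one could score whether continuations obey an instruction ("end this famous quote with a different word") or fall back to the memorized ending:

```python
# Hypothetical sketch: score instruction-following vs. memorized completion.
# Model outputs are mocked lists of strings, not real API calls.

def score_instruction_following(outputs, instructed_ending):
    """Fraction of continuations that follow the instruction
    rather than completing the famous quote from memory."""
    followed = sum(1 for o in outputs if o.strip().endswith(instructed_ending))
    return followed / len(outputs)

# "To be or not to be, that is the ___" -- the instruction asks for "banana",
# while the memorized continuation is "question".
small_model_outputs = ["banana", "question", "banana", "banana"]
large_model_outputs = ["question", "question", "banana", "question"]

small_score = score_instruction_following(small_model_outputs, "banana")
large_score = score_instruction_following(large_model_outputs, "banana")

# Inverse scaling: the larger model, being more familiar with the quote,
# scores worse on following the instruction.
print(small_score, large_score)  # 0.75 0.25
```

Under this (mocked) setup, a model getting *better* at next-word prediction can get *worse* at the task, which is exactly the kind of evidence the contest solicits.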
-
- Aug 2023
-
www.theregister.com
-
G20 digital ministers sign up for Digital Public Infrastructure push
G20 digital push
-
-
inquisitivebird.substack.com
-
In the period between 1000 and 1500 A.D., the relevant major civilizations were West Europe, the Islamic world, India, and China. Below I have illustrated the notable people per-capita rates for countries accumulated over the 1000–1500 period. I have focused on the region containing Europe and the Middle East as, with respect to this measure, they are the only real competitors in this period
||Jovan|| Read on Patterns of Humanity
-
-
techpolicy.press
-
If I can add on, on the previous points about kind of like ensuring that open source and open science is safe. I think it’s important to recognize that when we keep things more secret, when we try to hurt open science, we actually slow down more the US than China, right? And I think that’s what’s been happening for the past few years, and it’s important to recognize that and make sure we keep our ecosystem open to help the United States.
-
First, it’s good to remember that most of today’s progress has been powered by open science and open source, like the Attention Is All You Need paper, the BERT paper, the latent diffusion paper, and so many others. The same way, without open source, PyTorch, transformers, diffusers, all invented here in the US, the US might not be the leading country for AI. Now, when we look towards the future, open science and open source distribute economic gains by enabling hundreds of thousands of small companies and startups to build with AI. It fosters innovation and fair competition between all thanks to ethical openness. It creates a safer path for development of the technology by giving civil society, nonprofits, academia and policy makers the capabilities they need to counterbalance the power of big private companies. Open science and open source prevent black box systems, make companies more accountable and help solving today’s challenges like mitigating biases, reducing misinformation, promoting copyrights, and rewarding all stakeholders, including artists and content creators, in the value creation process.
Reasons why open source approach is good for AI developments.
-
-
www.turingpost.com
-
He advocates for open science and open source. He argues these would “prevent black box systems, make companies more accountable and help [in] solving today’s challenges like mitigating biases, reducing misinformation.”
-
If users don’t understand how this technology is built, it creates a lot of risks, a lot of misconceptions.”
-
He points out the dangers of letting "an opaque monopoly or oligopoly appear on a technology as groundbreaking as AI.”
Risk of monopolies in AI.
-
With a robust $235M Series D funding and a cumulative valuation of $4.5 billion, Hugging Face is flush with cash, boasting reserves that enable aggressive growth and diversification strategies. Their fortified capital makes them a powerhouse in the machine learning ecosystem, primed to tackle challenges like inherent biases and expand beyond their NLP niche.
-
But BLOOM shatters this narrative. Spawned by BigScience, a consortium of 1,200 researchers from 38 countries, this open-source model boasts 176 billion parameters and speaks 46 languages plus 13 coding tongues.
-
-
blogs.microsoft.com
-
We first developed ethical principles and then had to translate these into more specific corporate policies. We’re now on version 2 of the corporate standard that embodies these principles and defines more precise practices for our engineering teams to follow.
-
As the current holder of the G20 Presidency and Chair of the Global Partnership on AI, India is well positioned to help advance a global discussion on AI issues.
-
When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else—accountability.
-
we have nearly 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.
-