- Jan 2024
-
aiimpacts.org aiimpacts.org
-
‘High-Level Machine Intelligence’
new term: high-level machine intelligence
-
six tasks expected to take longer than ten years were: “After spending time in a virtual world, output the differential equations governing that world in symbolic form” (12 years), “Physically install the electrical wiring in a new home” (17 years), “Research and write” (19 years) or “Replicate” (12 years) “a high-quality ML paper,” “Prove mathematical theorems that are publishable in top mathematics journals today” (22 years), and solving “long-standing unsolved problems in mathematics” such as a Millennium Prize problem (27 years)
Expectations about tasks that AI will only be able to take over more than 10 years from now
-
2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR). This to our knowledge constitutes the largest survey of AI researchers to date
Who participated in the survey
-
They are experts in AI research, not AI forecasting and might thus lack generic forecasting skills and experience, or expertise in non-technical factors that influence the trajectory of AI.
Good to note this caveat
-
lack of apparent consensus among AI experts on the future of AI [
This has always been the case, no?
-
was disagreement about whether faster or slower AI progress would be better for the future of humanity.
interesting also
-
substantial” or “extreme” concern is warranted about six different AI-related scenarios, including spread of false information, authoritarian population control, and worsened inequality.
-
While 68.3% thought good outcomes from superhuman AI are more likely than bad, of these net optimists 48% gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.
Survey results on AI extinction risks. Quite interesting
-
the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey).
-
-
www.politico.com www.politico.com
-
Virtually all the policies that EAs and their allies are pushing — new reporting rules for advanced AI models, licensing requirements for AI firms, restrictions on open-source models, crackdowns on the mixing of AI with biotechnology or even a complete “pause” on “giant” AI experiments — are in furtherance of that goal.
Key calls of extinction risk community.
-
- Dec 2023
-
www.ft.com www.ft.com
-
Andrew Ng: ‘Do we think the world is better off with more or less intelligence?’
A very lucid interview. He highlights the importance of open-source AI, the danger of regulating LLMs as opposed to applications, the negative lobbying of most big tech, and the need to focus on good regulation targeting the problems of today, not speculation about extinction, etc. ||JovanK|| and ||sorina||
-
-
-
promote research, development and innovation in various data-based areas, including Big Data Analytics, Artificial Intelligence, Quantum Computing, and Blockchain.
For the Serbian Chamber of Commerce this could be critical, since they do not have any linkages between data, AI, quantum computing, and blockchain.
They should encourage Serbian start-ups to look at these linkages.
-
-
marginalrevolution.com marginalrevolution.com
-
Average is Over
||Jovan|| to read this book
-
-
conversationswithtyler.com conversationswithtyler.com
-
that chess could be crunched by brute force once hardware got fast enough, databases got big enough, algorithms got smart enough.
How Kasparov lost the game.
-
-
edoras.sdsu.edu edoras.sdsu.edu
-
Lynn Margulis [14] has made strong arguments for the view that mutualism is the great driving force in evolution.
Mutualism is the doctrine that mutual dependence is necessary to social well-being.
-
-
www.turingpost.com www.turingpost.com
-
The table below illustrates the complexity of models and data used to train common language models.
Sources of data for foundational models.
-
-
-
Generative AI turned one in November 2023
End of the year
-
-
www.bbc.co.uk www.bbc.co.uk
-
Sunak said that was because up to that point the government’s scientists were not pushing for it. The aim had been to “flatten the curve” and manage the spread, rather than suppress it.
||sorina|| Sorina, this is relevant for yesterday's discussion on the future of the digital economy
-
-
www.euractiv.com www.euractiv.com
-
AI Act: EU Commission attempts to revive tiered approach shifting to General Purpose AI
EU AI Act regulation of General Purpose AI
-
-
nofil.beehiiv.com nofil.beehiiv.com
-
Essentially what this means is, you can test a smaller model and accurately predict how a model 10⁶x larger will perform.
||sorina|| Scaling intelligence
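The claim above rests on neural scaling laws: loss follows an approximate power law in model size, so a fit on small models extrapolates to much larger ones. A minimal sketch, with invented parameter values standing in for real measured losses:

```python
import numpy as np

# Fit loss = a * N^(-b) in log-log space on small-model measurements,
# then extrapolate to a much larger model. The constants below are
# illustrative, not real benchmark numbers.
def fit_power_law(sizes, losses):
    # log(loss) = log(a) - b * log(N): a straight line in log-log space
    slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
    return np.exp(intercept), -slope  # (a, b)

def predict_loss(a, b, size):
    return a * size ** (-b)

# Simulated small-model losses generated from loss = 5 * N^(-0.07)
sizes = np.array([1e6, 3e6, 1e7, 3e7])
losses = 5.0 * sizes ** (-0.07)

a, b = fit_power_law(sizes, losses)
big = predict_loss(a, b, 1e13)  # ~10^6x the largest fitted model
```

With clean power-law data the fit recovers the generating constants exactly; in practice the extrapolation carries error bars, which is what makes the "accurately predict" claim notable.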
-
the only differentiating factor between any two LLMs is the dataset.
||sorina||
-
to create a model with the ability to solve math problems without having previously seen them.
OpenAI may be able to solve mathematical problems, which is a key challenge for probabilistic AI
||sorina|| ||VladaR||
-
-
www.europarl.europa.eu www.europarl.europa.eu
-
certain criteria
||sorina|| What will the criteria be?
-
-
-
On the banks of Hallstätter See and surrounded by soaring Alpine peaks, the town of Hallstatt and its stunning landscape enjoy UNESCO protection.
||sorina|| This paragraph explains why you should go to Austria.
-
- Nov 2023
-
www-files.anthropic.com www-files.anthropic.com
-
Given its dual-use nature, CAI could make it easier to train pernicious systems
Could someone not do the reverse and curate a constitution of pure malicious intentions?
-
AI alignment, which incentivizes developers to adopt this method
How do we know if AI is aligning with us when we leave the job of aligning it to AI as well?
-
CAI increases model transparency.
Not really though...just as with regular LLMs, we don't know how the models comprehend the data that we give it and how it comes up with answers. There's no guarantee that the models understand the terms of the principles in the ways that we understand them; how do we know if the model is indeed making decisions according to the values (or whatever definition we might give to those values) and not just because of a happenstance?
-
-
www.nature.com www.nature.com
-
dependence on a specific AI technology will diminish, so that end-users can avoid ‘lock-in’ effects and benefit from reduced switching costs
The risk is that it could be too late if there is no immediate push against the monopolies of a few major companies.
-
Interoperability of pre-trained models across platforms should also drastically reduce the need to retrain large models.
Good point!
-
Prompt templates and standardized prompt optimizers
Any suggestion?
-
have assembled to develop an LLM called BLOOM, should be valuable.
A good example.
-
public institutions can actively incentivize data-sharing partnerships, which, in combination with federated learning, may promote AI across institutional boundaries while ensuring data privacy.
Open data access is tricky. There is growing concern in developing countries that open data can benefit only those with processing power. Thus, big tech platforms can be the main beneficiaries of open data access.
This issue must be sorted out by having traceability of AI to specific data. It can be open, but it should be attributed to somebody.
-
under a trustworthy and responsible governance model.
Here is a possible role for Switzerland as 'ICRC for AI'
-
the development of a LLM is estimated to cost between 300 and 400 million euros.
It can be less.
-
source codes for formalizing the training task
It is too specific. Training tasks are part of one type of AI.
-
AI is programmed to learn to perform a task.
It is not the case with all AI systems. It is only the case with reinforcement learning AI systems.
-
arbitrarily decide
It is the case today. Internet companies are free to decide what, where, and how they will provide services.
-
the concentration of power over technology is known to hamper future innovation, fair competition, scientific progress, and hence human welfare and development at large
Main concern
-
concentrated power
It is the main concern.
-
An example is OpenAI, which was founded to make scientific research openly available but which eventually restricted access to research findings.
OpenAI is not an open-source platform. It is a typical 'Internet economy' business which provides services for 'free' in exchange for data. This has been the business model of Google, Facebook, Twitter, etc. OpenAI takes this model to the next level by capturing knowledge (instead of data).
-
-
www.state.gov www.state.gov
-
The Department will develop and implement data quality assessment tools and monitoring processes with results that are transparent to users. Assessments will also be done to evaluate data outputs from other AI platforms to minimize risk.
What about the input data used to train other AI systems?
-
High quality datasets are those sufficiently free of incomplete, inconsistent, or incorrect data, while also being well documented, organized, and secure
Doesn't this definition mostly point to highly structured data?
-
The Department’s CDAO will support and coordinate the establishment and maintenance of AI policies—such as 20 FAM 201.1—that provide clear guidelines for responsible AI use, steward AI models, and prioritize the evaluation and management of algorithmic risk (e.g., risks arising from using algorithms) in AI applications during their entire lifecycle—including those related to records retention, privacy, cybersecurity, and safety.
Existence of current AI policies.
Another thing: they mention algorithmic risk, which suggests the evaluation may cover algorithms themselves, not just applications?
-
Much like the EDS aims to cultivate a data culture, the Department will imbue its values around responsible AI use across the organization, including to uphold data and scientific integrity.
Very interesting how they use words like "culture" to describe AI integration. It certainly goes beyond simply adopting selective tools; instead, it's about perspective and norm-shaping within the organisation.
-
enhance AI literacy, encourage and educate on responsible AI use, and ensure that users can adequately mitigate some risks associated with AI tools.
What is AI literacy? What sets of knowledge and skills make a person AI literate?
-
To meet the computational demands of AI development, our infrastructure will leverage Department cloud-based solutions and scalable infrastructure services.
Did they already have that infrastructure ready?
-
Robust access controls and authentication mechanisms aligned to Zero Trust principles will mitigate risk of unauthorized access to AI technologies and Department data, providing a high level of security
-
with a mix of open-source, commercially available, and custom-built AI systems.
Open-source is the key word here.
-
its Enterprise Data Strategy (EDS) in September 2021
EAIS has a predecessor
-
Innovate
Use cases
-
Ensure AI is Applied Responsibly
Principles and standards
-
Foster a Culture that Embraces AI Technology
Workforce
-
Leverage Secure AI Infrastructure
Infrastructure
-
-
twitter.com twitter.com
-
||sorina|| This is so far the most reasonable view on the AI regulation and EU AI Act.
-
-
ecfr.eu ecfr.eu
-
||sorina||||StephanieBP||||VladaR|| This is - so far - one of the best analyses of the current geopolitical moment, which will inevitably impact our work as well. The main question is whether there will be any space at all for supporting interdependence and inclusion. ||Pavlina||
-
-
hbr.org hbr.org
-
50 Global Hubs for Top AI Talent
@jovan 50 global hubs for Top AI talents
-
-
-
if Senator Pascale Gruny has anything to say about it. She has just taken a first step toward a proposed law making everything, or really anyone — at least in official documents — well, masculine.
I disagree with this view. ||sorina|| This is what we discussed last evening during dinner. What about this view?
-
-
www.diplomacy.edu www.diplomacy.edu
-
At Diplo, the organisation I lead, we’ve been successfully experimenting with an approach that integrates data labelling into our daily operations,
Jovan, I DO NOT AGREE WITH THIS
-
-
study.diplomacy.edu study.diplomacy.edu
-
However, in an environment where data moves across borders in seconds, and can easily be destroyed or removed, cooperation based on a traditional MLAT is blamed for being slow and insufficient since it often takes months to address requests for assistance (Kent, 2015).
Totally agree that procedural frameworks between countries or regions can take a long time to deal with tracking digital traces.
-
-
www.bbc.co.uk www.bbc.co.uk
-
US Secretary of State Antony Blinken has been touring the region again. He's currently in talks in Ankara, Turkey. And we are also being told that the head of the CIA, William Burns, who used to be the top US diplomat on the Middle East, is in the region too.
Michael, have a look at this.
-
-
www.diplomacy.edu www.diplomacy.edu
-
Nishida’s philosophy is critical of dualistic perspectives that often influence our understanding of humans versus machines. He would likely argue that humans and machines are fundamentally interlinked. In this interconnected arena, beyond traditional dualistic frameworks (AI vs humans, good vs bad), we should formulate new approaches to AI.
Q: What is Nishida's view on interconnectedness?
-
- Oct 2023
-
techcrunch.com techcrunch.com
-
OpenAI, Google and a ‘digital anthropologist’: the UN forms a high-level board to explore AI governance
-
-
www.politico.com www.politico.com
-
The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.
||Jovan||||sorina||||JovanNj||
Probably old news to you, but here's an article about the billionaire who founded Open Philanthropy (a core funder of EA activities). It also explains its reach into politics.
-
-
assets.publishing.service.gov.uk assets.publishing.service.gov.uk
-
William Isaac
Staff Research Scientist on DeepMind’s Ethics and Society Team and Research Affiliate at Oxford University's Centre for the Governance of AI: https://wsisaac.com/#about
Both DeepMind and Centre for the Governance of AI (GovAI) have strong links to EA!
-
Arvind Narayanan
Professor of computer science from Princeton University and the director of the Center for Information Technology Policy: https://www.cs.princeton.edu/~arvindn/.
Haven't read his work closely yet, but it seems sensible to me.
-
Sara Hooker,
Director at Cohere: https://cohere.com/ (an LLM AI company).
-
Yoshua Bengio
Professor at Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).
Very influential computer scientist, and considered a leading force in AI. Also an AI doomer, though I can't find his clear link with EA.
-
John McDermid
Professor from University of York.
-
Alexander Babuta
The Alan Turing Institute: https://cetas.turing.ac.uk/about/our-team/alexander-babuta
-
Irene Solaiman
Head of Global Policy at Hugging Face: https://www.irenesolaiman.com/
-
Paul Christiano
A very big name on the EA forum: https://forum.effectivealtruism.org/topics/paul-christiano
-
Capabilities and risks from frontier AI
||Jovan|| ||sorina|| ||JovanNj||
This is the report that the UK released ahead of the AI Safety Summit (1-2 November, 2023).
-
-
www.cnbc.com www.cnbc.com
-
On Wednesday, the U.K. government released a report called “Capabilities and risks from frontier AI,” in which it explains frontier AI as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.”
||Jovan||||sorina||||JovanNj||
This report might be a good source for understanding narrative change in the field of AI safety. I will find out where it is and send it to you via email.
Also, it shouldn't come as a surprise to us that the UK government is the most convinced by EA's description of AI ending humanity, considering that many EA organisations are based in and attached to UK universities (Oxbridge).
-
-
www.euractiv.com www.euractiv.com
-
that the criteria apply,
What are the criteria?
-
that maintains horizontal exemption conditions and largely overlooks the negative legal opinion.
They ignored the negative opinion.
-
harshly criticised by the European Parliament’s legal office
||sorina||||wuATdiplomacy.edu|| Is there any article about this criticism?
-
under a pre-set list of critical use cases were deemed automatically high-risk
||sorina||||wuATdiplomacy.edu|| Where is this pre-set list of critical use cases in the current version of the EU AI Act: https://dig.watch/resource/eu-ai-act-proposed-amendments-structured-view
-
-
study.diplomacy.edu study.diplomacy.edu
-
I usually always thought of hackers
It's what my family and friends think when I tell them about cyber and my work :D
-
-
www.oecd.org www.oecd.org
-
Inclusive Framework on BEPS
I have doubts about this statement by the OECD.
-
-
www.oecd.org www.oecd.org
-
The GloBE rules provide for a co-ordinated system of taxation intended to ensure large MNE groups pay this minimum level of tax on income arising in each of the jurisdictions in which they operate. The rules create a “top-up tax” to be applied on profits in any jurisdiction whenever the effective tax rate, determined on a jurisdictional basis, is below the minimum 15% rate.
@john This paragraph relates to our latest discussion on Taxation of tech companies
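The mechanics of the quoted rule reduce to simple arithmetic: where a jurisdiction's effective tax rate falls below 15%, the top-up tax is the shortfall applied to profits there. A minimal sketch (simplified: it ignores the substance-based income exclusion and other GloBE adjustments):

```python
# Simplified GloBE top-up tax logic, per jurisdiction.
MIN_RATE = 0.15  # the 15% global minimum rate

def top_up_tax(profit, taxes_paid):
    """Top-up tax owed when the effective tax rate (ETR) is below 15%."""
    etr = taxes_paid / profit
    if etr >= MIN_RATE:
        return 0.0
    return (MIN_RATE - etr) * profit

# e.g. 1,000 of profit taxed at 9% leaves a 6-point shortfall,
# so roughly 60 of top-up tax brings the total to the 15% minimum.
```

The per-jurisdiction blending is the key design choice: a low-tax jurisdiction cannot be offset against a high-tax one.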
-
-
hbr.org hbr.org
-
Can I capture my vision in a page? A paragraph? A word?
Key question.
-
-
www.european-cyber-resilience-act.com www.european-cyber-resilience-act.com
-
The European Cyber Resilience Act (CRA)
Sources of data and documents
-
-
study.diplomacy.edu study.diplomacy.edu
-
I think that cooperation is important for both developed and developing countries, and I think that most of the challenges come from competing interests
-
-
crfm.stanford.edu crfm.stanford.edu
-
We emphasize that EU policymakers should consider strengthening deployment requirements for entities that bring foundation models to market to ensure there is sufficient accountability across the digital supply chain.
Pyramid of evaluation.
-
Open releases generally achieve strong scores on resource disclosure requirements (both data and compute), with EleutherAI receiving 19/20 for these categories.
Openly released foundation models are more in compliance with the law than closed ones.
@jovak@diplomacy.edu
-
the European Parliament adopted a draft of the Act by a vote of 499 in favor, 28 against, and 93 abstentions.
Votes in the EU Parliament
-
-
-
We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.
@Jovan @Sorina
The very same people warning us of extinction-level AI risks are the same people who are developing technologies in a way that leads us to it. In a way, the public release of GPT and other generative models created the very market pressure that makes "creating the best, most intelligent AGI" the most important and only goal for the market.
-
What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.
@Jovan @Sorina
This is my concern with solely using a longtermist view to make policy judgments.
-
-
www.aljazeera.com www.aljazeera.com
-
Israeli forces were still bombing Gaza and fighting with Hamas gunmen in parts of southern Israel in the early hours of Sunday and a spokesman for the military said the situation in the country was not totally under control.
||sorina|| What do you think about this argument?
-
-
ainowinstitute.org ainowinstitute.org
-
Large-scale compute is also environmentally unsustainable: chips are highly toxic to produce17 and require an enormous amount of energy to manufacture:18 for example, TSMC on its own accounts for 4.8 percent of Taiwan’s national energy consumption, more than the entire capital city of Taipei.19 Running data centers is likewise environmentally very costly: estimates equate every prompt run on ChatGPT to the equivalent of pouring out an entire bottle of water.20
Environmental aspects of AI production.
-
access to compute—along with data and skilled labor—is a key component
Three components for AI: computing, data, and skilled labour
-
- Sep 2023
-
circleid.com circleid.com
-
Artificial intelligence (AI) is in the news every day and corporate strategies are evolving to adapt our businesses to AI use.
.AI and Anguilla ||Jovan||
-
-
www.nytimes.com www.nytimes.com
-
2 Senators Propose Bipartisan Framework for A.I. Laws
||Jovan||
-
-
www.technologyreview.com www.technologyreview.com
-
Inside effective altruism, where the far future counts a lot more than the present
||Jovan|| A good explanation on what this effective altruism movement is and how that came to populate the current political discourse of AI governance with many terms like "fate-defining moment" and "threat against humanity".
-
-
www.newyorker.com www.newyorker.com
-
The Reluctant Prophet of Effective Altruism
||Jovan||
This is an in-depth interview with the original founder of effective altruism movement. There were several iterations of the same philanthropic/philosophical movement that turned political and ambitious very soon.
-
-
-
ChatGPT-maker OpenAI and The Associated Press said Thursday that they’ve made a deal for the artificial intelligence company to license AP’s archive of news stories.
Deal between OpenAI and Associated Press.
-
-
apnews.com apnews.com
-
“It’s like asking, ‘should the newsroom use the Internet?’ in the 1990s,” Tofel said. “The answer is yes, but not stupidly.”
Should journalists use AI? ||Jovan||
-
-
blog.ruanbekker.com blog.ruanbekker.com
-
Restore the Index by Importing the Mapping
test Dusan ES
-
-
blog.goranrakic.com blog.goranrakic.com
-
Korisnicima treba
DeltaDigit user test Dusan
-
-
www.aljazeera.com www.aljazeera.com
-
New Delhi’s decision reflected its “growing concern at the interference of Canadian diplomats in our internal matters and their involvement in anti-India activities”, the ministry said in a statement on Tuesday.
||sorina|| It is relevant for our course on public diplomacy.
-
-
www.wolframalpha.com www.wolframalpha.com
-
Timeline of Systematic Data and the Development of Computable Knowledge
Timeline of Systematic Data and the Development of Computable Knowledge.
-
-
www.passblue.com www.passblue.com
-
I think there is fatigue. If you ask the average New Yorker what the SDGs are, I’m not sure they’re going to be able to respond.
-
the hardening divides between the West vs. the global South, with the two camps mainly feuding over the reform of such financial institutions as the World Bank and the International Monetary Fund.
-
Only 15 percent of all 17 goals have been met, and 48 percent are off track and have either stagnated or regressed.
-
-
www.diplomacy.edu www.diplomacy.edu
-
The 80th Session of the United Nations General Assembly, a High-Level Event on Science, Technology and Innovation for Development
-
the role of multi-stakeholder partnerships to foster strategic long-term investment in supporting the development of science, technology and innovation in developing countries,
-
including the Global Digital Compact, aligned with the Sustainable Development Goals, should be considered, which should offer preferential access for developing countries to relevant advanced technologies
-
triangular cooperation projects
-
inequalities in data generation
-
We acknowledge that all technological barriers, inter alia, as reported by the IPCC, limit adaptation to climate change and the implementation of the National Determined Contributions (NDCs) of developing countries
-
We note the central role of Governments, with the active contribution from stakeholders from the private sector, civil society, academia and research institutions, in creating and supporting an enabling environment at all levels, including enabling regulatory and governance frameworks
-
the expansion of open-science models
-
the knowledge produced by research and innovation activities can have in designing better public policies
-
the Tunis Agenda and the Geneva Declaration of Principles and plan of action shall lay down the guiding principles for digital cooperation.
-
to ensure that the World Summit on the Information Society (WSIS+20) General Review process, the Global Digital Compact and the Summit of the Future contribute to, inter alia, the achievement of sustainable development and closing the digital divide between developed and developing countries.
-
close alignment between the World Summit on the Information Society process and the 2030 Agenda for Sustainable Development,
-
the full implementation of the 2030 Agenda and the Addis Ababa Action Agenda
-
has the potential to resolve and minimize trade-offs among the Goals and targets,
-
an open, fair, inclusive and non-discriminatory environment for scientific and technological development.
-
stakeholders
-
-
www.theverge.com www.theverge.com
-
The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
-
There were listings for AI trainers with expertise in health coaching, human resources, finance, economics, data science, programming, computer science, chemistry, biology, accounting, taxes, nutrition, physics, travel, K-12 education, sports journalism, and self-help. You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry.
-
One way the AI industry differs from manufacturers of phones and cars is in its fluidity. The work is constantly changing, constantly getting automated away and replaced with new needs for new types of data. It’s an assembly line but one that can be endlessly and instantly reconfigured, moving to wherever there is the right combination of skills, bandwidth, and wages.
-
This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.
-
Taskup.ai, DataAnnotation.tech, and Gethybrid.io all appear to be owned by the same company: Surge AI. Its CEO, Edwin Chen, would neither confirm nor deny the connection, but he was willing to talk about his company and how he sees annotation evolving.
-
Often their work involved training chatbots, though with higher-quality expectations and more specialized purposes than other sites they had worked for.
-
“If there was one thing I could change, I would just like to have more information about what happens on the other end,” he said. “We only know as much as we need to know to get work done, but if I could know more, then maybe I could get more established and perhaps pursue this as a career.”
-
One engineer told me about buying examples of Socratic dialogues for up to $300 a pop. Another told me about paying $15 for a “darkly funny limerick about a goldfish.”
-
But if you want to train a model to do legal research, you need someone with training in law, and this gets expensive.
-
to be looking at their accuracy, helpfulness, and harmlessness
-
The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them
||JovanNj|| Our annotations could do this.
-
Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.”
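The "human-feedback data" described above is essentially a stream of pairwise preference records: a prompt, the response the rater picked, and the one they rejected. A minimal sketch of how such a record might look; the field names are invented for illustration, not taken from any real pipeline:

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One unit of pairwise human-feedback data."""
    prompt: str
    chosen: str    # the response the human rater preferred
    rejected: str  # the other response

def record_choice(prompt, response_a, response_b, picked_a):
    # Store the pair with the preferred response first, so a reward
    # model can later be trained to score `chosen` above `rejected`.
    if picked_a:
        return PreferencePair(prompt, response_a, response_b)
    return PreferencePair(prompt, response_b, response_a)

pair = record_choice("What is the capital of France?",
                     "Paris.", "I don't know.", picked_a=True)
```

Datasets of such pairs are what reward models are trained on in reinforcement learning from human feedback, which is why each individual click like Anna's has value.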
-
“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”
-
Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date.
-
According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.
-
Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use.
taxonomies
-
When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious.
-
coherent processes broken into tasks and arrayed along assembly lines with some steps done by machines and some by humans but none resembling what came before.
-
“AI doesn’t replace work,” he said. “But it does change how work is organized.”
-
A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.”
-
Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.
-
Mechanical Turk and Clickworker
-
CloudFactory,
-
Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.”
-
The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them.
-
Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data.
-
The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
-
The anthropologist David Graeber defines “bullshit jobs” as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not.
-
But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused.
-
Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers
-
Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence.
-
-
writings.stephenwolfram.com
-
LLMs are through and through based on language and the patterns to be found in it.
-
the very success of LLMs in the commonsense arena strongly suggests that you don’t fundamentally need deep “structured logic” for that.
-
One of the surprises of LLMs is that they often seem, in effect, to use logic, even though there’s nothing in their setup that explicitly involves logic.
-
“symbolic discourse language”
-
particular formal system that described certain kinds of things
-
we weren’t trying to use just logic to represent the world, we were using the full power and richness of computation.
-
Lenat–Haase representation-languages
-
the problem of commonsense knowledge and commonsense reasoning.
-
heuristics: strategies for guessing how one might “jump ahead”
-
a very classic approach to formalizing the world
-
Encode knowledge about the world in the form of statements of logic.
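This classic approach can be sketched in a few lines: represent knowledge as facts and if-then rules, then derive new facts by forward chaining until nothing new follows. The rule set below is an invented toy example in the spirit of expert systems, not any actual Cyc or Wolfram representation.

```python
# A minimal sketch of the classic symbolic approach: encode world knowledge as
# logical statements (facts plus if-then rules) and derive new facts by
# forward chaining to a fixpoint. Toy rules invented for illustration.

facts = {"bird(tweety)"}
rules = [
    # (premises, conclusion): if all premises are known, conclude the conclusion.
    (["bird(tweety)"], "has_wings(tweety)"),
    (["has_wings(tweety)"], "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known, until no rule adds anything."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# Derives has_wings(tweety) and can_fly(tweety) from the single fact bird(tweety).
```

The difficulty the passage alludes to is not this mechanism, which is simple, but scale: commonsense knowledge needs vast numbers of such statements, plus exceptions (penguins are birds that cannot fly), which is where the pure-logic program ran aground.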
-
was it all just an “engineering problem” that simply required pulling together a bigger and better “expert system”?
-
to use the framework of logic—in more or less the same form that Aristotle and Leibniz had it—to capture what happens in the world.
The Aristotle and Leibniz line of pure logic.
-
-
-
She rejects notions of progress, she is despairing of representative democracy, and she is not confident that freedom can be saved in the modern world.
-
-
-
Like HTML, PDF facilitates the user’s choice of device and operating system. Unlike HTML, PDF does not assume that remote servers or content are available.
-
From the vantage-point of 2023 we are positioned to recognize 1993 as a year of two key developments; the first specification of HTML, the language of the web, and the first specification of PDF, the language of documents.
-
-
samuelschmitt.com
-
the plugin “Permalink Manager Lite” to manage the URL of the pages.
-
-
samuelschmitt.com
-
In April 2020, I ran a small SEO experiment with a blog post and transformed a long-form article into a topic cluster (also called content hub).
How to make a topic cluster on a website?
-
-
-
Strengthen telecommunications and data transfers thanks to a new undersea cable connecting the region.
Submarine cables are part of the new India - Middle East - Europe Economic Corridor
-
-
-
How soon could AI replace human workers?
This is the key decision.
-
a soul-crushing amount of change and uncertainty — is to methodically plan for the future.
-
how — and when — their workforce will need to change in order to leverage AI.
-
The software includes more than 500 functions — but the vast majority of people only use a few dozen, because they don’t fully understand how to match the enormous number of features Excel offers to their daily cognitive tasks.
-
Rather, they’ll need to learn how to leverage multimodal AI to do more, and better, work
-
Most workers won’t need to learn how to code, or how to write basic prompts, as we often hear at conferences.
-
so that both the human and the AI can accomplish more through collaboration than by working independently.
-
She might go back and forth a few times, using different data sources, until an optimal quote is received for both the insurance company and the customer.
-
near- and long-term scenarios for the myriad ways in which emerging tools will improve productivity and efficiency
-
That’s because AI systems aren’t static; they are improving incrementally over time.
-
aren’t planning for a future that includes an internal RLHF unit tasked with continuously monitoring, auditing, and tweaking AI systems and tools.
-
Essentially, AI systems need constant human feedback, or they run the risk of learning and remembering the wrong information.
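The monitoring loop implied here can be sketched minimally: sample model outputs, have humans rate them, and flag the system for corrective feedback when approval drifts below a threshold. The function name and the 0.8 threshold below are invented for illustration, not drawn from the article.

```python
# Hedged sketch of a continuous human-feedback loop (the "RLHF unit" idea):
# humans rate sampled model outputs, and a drop in approval triggers
# corrective feedback or retraining. Threshold and names are illustrative.

def needs_retraining(ratings, threshold=0.8):
    """Flag the model when the share of human-approved outputs falls below threshold."""
    approval = sum(1 for r in ratings if r == "good") / len(ratings)
    return approval < threshold

# Example: humans review 10 sampled outputs and approve 7 of them.
ratings = ["good"] * 7 + ["bad"] * 3
print(needs_retraining(ratings))  # True: approval 0.7 is below the 0.8 threshold
```

The point of the passage is that this loop never ends: without it, a deployed system can drift toward "learning and remembering the wrong information."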
-
By marketing their platforms to companies, they want to lock them (and their data) in.
-
Business data is invaluable because once a model has been trained, it can be costly and technically cumbersome to port those data over to another system.
-
AI is not a monolith, and we are just at the beginning of a very long trajectory.
-
it’s not good enough to actually use.
-
This happened again in 1987, when again, computer scientists and businesses made bold promises on a timeline for AI that was just never feasible.
-
AI cycles through phases that involve breakthroughs, surges of funding and fleeting moments of mainstream interest, followed by missed expectations and funding clawbacks.
-
using an iterative process to cultivate a ready workforce, and most importantly, creating evidence-backed future scenarios that challenge conventional thinking within the organization.
-
The workforce will need to evolve, and workers will have to learn new skills, iteratively and over a period of years
-
leaders are focused too narrowly on immediate gains, rather than how their value network will transform in the future
-
the output has to be proven trustworthy, integrated into existing workstreams, and managed for compliance, risk, and regulatory issues.
-
Exactly which jobs AI will eliminate, and when, is guesswork.
-
Within just a few years, powerful AI systems will perform cognitive work at the same level (or even above) their human workforce.
-
They all want to know how their companies can create more value using fewer human resources.
-
How soon could AI replace human workers?
-
-
www.reddit.com
-
To my mind, the true spiritual forefather of AI was Gottfried Wilhelm von Leibniz (1646 - 1716), with his ideas about a universal formal language that could encompass all of human knowledge. Another important figure in AI's pre-history is Ada Lovelace, who around 1850 imagined that Charles Babbage's Analytical Engine (which unfortunately was never actually built) could conceivably accomplish such tasks as playing chess and composing music.
Potential contributors to AI's pre-history. ||Jovan||
-
-
dig.watch
-
the Code of Conduct for Information Integrity on Digital Platforms that is being developed will be important.
||sorina|| Are you aware of this Code of Conduct?
-
There is convergence around the potential for a GDC to promote digital trust and security and to address disinformation, hate speech and other harmful online content.
||sorina|| No traditional cybersecurity. Only 'content safety'.
-
There is broad consensus that the Internet Governance Forum (IGF) plays – and should continue to play – a key role in promoting the global and interoperable nature and governance of the Internet. The important roles played by IGF, ITU, UNCTAD, UNDP, UNESCO, WSIS and other UN entities, structures, and forums have been emphasized and that a GDC should not duplicate existing forums and processes.
||sorina|| These are probably the two key sentences arguing for the IGF and against duplication. Here, they pushed back against a new forum.
-