- Jun 2023
-
samuelschmitt.com
-
How to create a Topic Cluster with WordPress?
||Jovan|| How to create a topic cluster with WordPress?
-
-
blog.hubspot.com
-
Every post in the cluster set needs to be linked to at least once with the same anchor text (the part that is hyperlinked) so that a search engine knows it’s part of a topic cluster.
-
Algorithms have evolved to the point where they can understand the topical context behind the search intent, tie it back to similar searches they have encountered in the past, and deliver web pages that best answer the query.
-
Multiple content pages that are related to that topic link back to the pillar page. This linking action signals to search engines that the pillar page is an authority on the topic, and over time, the page may rank higher for the topic it covers.
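As a rough illustration of this linking pattern, here is a minimal sketch (hypothetical URLs, anchor text, and post contents, not a WordPress API) that checks whether every cluster post links to the pillar page with the same anchor text:

```python
# Hedged sketch: the posts dict and pillar URL below are invented for illustration.
posts = {
    "/what-is-a-topic-cluster/": 'See our <a href="/topic-clusters/">topic clusters</a> guide.',
    "/pillar-pages-explained/": 'Back to the <a href="/topic-clusters/">topic clusters</a> pillar.',
    "/internal-linking-tips/": "This post forgot to link to the pillar page.",
}
pillar_link = '<a href="/topic-clusters/">topic clusters</a>'  # the same anchor text everywhere

for url, html in posts.items():
    status = "ok" if pillar_link in html else "missing pillar link"
    print(f"{url}: {status}")
```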
-
-
seomodels.com
-
But in a nutshell, topic clusters are clusters of search-optimized pages tied together under the umbrella of a high-level topic.
-
-
circleid.com
-
Support for Human Rights will be driven by the centrality of AI-driven data flows and the demand of the Global South for promoting AI data for development.
-
SDGs provide tangible goals and metrics complementary to Human Rights
-
The SDGs can be seen as part of a process of realizing Human Rights in the context of overall Well-being.
-
The links between human dignity, expressed as Human Rights and human Well-being, expressed as Development Goals, were initially not well appreciated.
-
“Human Rights” were intentionally not a significant component of the 2000 MDGs.
-
-
www.reddit.com
-
The genie is out of the bottle. Cat's out of the bag. If these people thought AI was/is so dangerous, why release and continue to release the tool to the mass public, which inevitably includes bad actors?
-
It would probably be more feasible, imo, to regulate the uses of AI: for example, if you wanted to utilize AI in some type of autonomous function, such as a self-driving car, some kind of guard robot, traffic lights, or who knows what else, you would need to ensure some level of standardized security and other such potential regulations.
regulate uses or development of AI.
-
What are we talking about here? Phishing scams? Advanced warfare capabilities that can result in a shifting unipolar world? A tertiary threat of mutual-destruction politics? Whaaaaat are we talking about?
-
We are headed for extinction either way. Global warming, if it continues, is going to cost trillions in damages: flooded coastal cities (most people live near coasts, need I remind you), more hurricanes, more pandemics (feral creatures forced to move north and come into more frequent contact with humans), deadlier pathogens (rising global temperatures make pathogens adapt to higher temperatures, making our fever defense mechanisms less effective). Not to mention the USA, Russia, and China are on the brink of wanting to go to war with each other. Russia invading a European country and straight up threatening to use NUKES. Hello? China thinking about moving into Taiwan, an American protectorate, because it's struggling economically.
-
I'm tired of this doom and gloom without substance.
-
I wish someone would actually address the elephant in the room
-
OpenAI just keeps saying it's coming, it's dangerous, but when Congress asked for a "nutrition label" for their AI model, they avoided the question.
||Jovan|| good point
-
It honestly feels like selling doom to limit control of AI to only select few.
-
How would it defend itself from a Solar Flare or Nuclear EMP blast?
-
The risk from AI going evil is hypothetical, the risk of AI being used by evil humans is not. Funny how these billionaires and their lackeys don't want us to police THEM.
-
They let the genie out and now they want to put the genie back in the bottle.
-
OpenAI seems to be calling for AI regulations of only "cutting-edge" models and seems to think open-source wouldn't qualify as cutting-edge --- but that could be a fallacy as open-source continues to rapidly improve.
-
To protect their positions, monopolists can:
A) Influence public opinion by drawing on science fiction imagery;
B) Position themselves as the first supporters of the need for regulation;
C) Induce legislators to enact very restrictive regulations with them as the main interlocutors.
Expected outcome:
- Stringent regulations;
- Strong limitations on AI open source;
- Long live the new monopolists!
||Jovan|| This is a good summary.
-
The only “solution” I’ve seen offered up is to regulate who can build AI: only the larger “reputable” companies who will be able to get government contracts to develop AI. That’s the regulation they’re advocating for. News flash: it ain’t about protecting the human race from extinction, it’s about limiting competition so they can maximize profits.
-
survival pressures
-
Actually, we would be safer with a multilarity than a singularity, because other very advanced AIs, or parts of AIs, would be able to counter one very advanced AI operating outside the realm of humanity's basic needs.
No singularity but multilarity
-
to get more than 2 or 3 people to ever agree on exactly what "human values" means
-
-
laion.ai
-
||Jovan|| An important letter from the European open-source community.
-
By fostering a legislative environment that supports open-source R&D, the Parliament can promote safety through transparency, drive innovation and competition, and accelerate the development of a sovereign AI capability in Europe
-
Deterring open-source AI will put at risk the digital security, economic competitiveness, and strategic independence of Europe.
-
This will accelerate the safe development of next-generation foundation models under controlled conditions with public oversight and in accordance with European values.
-
the definition of "general purpose AI", which is vague and is not supported by broad scientific consensus
-
A "one size fits all" framework that treats all foundation models as high-risk could make it impossible to field low-risk and open-source models in Europe
-
The Act should recognise important distinctions between closed-source AI models offered as a service (e.g. via app or API like ChatGPT or GPT-4) and AI models released as open-source code (including open-source data, training source code, inference source code, and pre-trained models)
-
Building on open-source foundation models, European researchers, businesses and Member States can develop their own AI capabilities – overseen, trained, and hosted in Europe.
-
Europe may cross a point-of-no-return, falling far behind in AI development, and being relegated to a consumer role without its own decision-making on critical technologies that will shape our societies
-
Those rules could entrench proprietary gatekeepers, often large firms, to the detriment of open-source researchers and developers
-
rules that treat all foundation models as high-risk could make it difficult or impossible to research and develop open-source foundation models in Europe
-
Public and private sector organizations can adapt open-source models for specialized applications without sharing private or sensitive data with a proprietary firm
-
promotes security
-
open-source AI promotes competition. Small to medium enterprises across Europe can build on open-source models to drive productivity, instead of relying on a handful of large firms for essential technology
-
to audit the performance of a model or system; develop interpretability techniques; identify risks; and establish mitigations or develop anticipatory countermeasures.
-
open-source AI promotes safety through transparency
-
The draft Act is expected to introduce new requirements for foundation models that could stifle open-source R&D on AI
-
-
www.reddit.com
-
making AI models safer (more aligned) leads to a performance trade-off known as an alignment tax.
-
AGI is the intelligence of a machine that can understand, learn, plan, and execute any intellectual task that a human being can.
Q: What is Artificial General Intelligence (AGI)?
-
In the field of AI, there's often a trade-off between safety and performance, known as the alignment tax.
Q: what is the alignment tax?
-
because it encourages the model to follow a human-approved process, making its reasoning more interpretable.
-
Outcome supervision involves giving feedback based on the final result, whereas process supervision provides feedback for each individual step in a process.
||Jovan|| Is this a real breakthrough or part of the current OpenAI spin?
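For intuition, a minimal sketch of the distinction (hypothetical labels and rewards, not OpenAI's actual training code): outcome supervision scores only the final answer, while process supervision scores each reasoning step, so a flawed step is penalised even when the final answer happens to be right.

```python
# Hypothetical 3-step solution: step 2 is flawed, yet the final answer is correct.
step_labels = [True, False, True]   # per-step human ratings (process supervision data)
final_answer_correct = True         # single end-of-trajectory rating (outcome supervision data)

# Outcome supervision: one reward for the whole trajectory.
outcome_reward = 1.0 if final_answer_correct else 0.0

# Process supervision: a reward per step, exposing the flawed step.
process_rewards = [1.0 if ok else 0.0 for ok in step_labels]

print(outcome_reward)   # 1.0 -> the faulty reasoning step stays invisible
print(process_rewards)  # [1.0, 0.0, 1.0] -> the faulty step is penalised
```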
-
-
-
Propaganda is picking up. The title does not reflect the content: the article itself offers rather balanced coverage of the 'AI doomsday' debate. It also makes a few good points, such as ethical training of AI engineers.
||Jovan||
-
"It's never too late to improve," says Prof Bengio of AI's current state. "It's exactly like climate change.
Dangerous use of analogies.
-
"We also need the people who are close to these systems to have a kind of certification... we need ethical training here. Computer scientists don't usually get that, by the way."
Ethical training is fine. Certification, I do not know.
-
"Governments need to track what they're doing, they need to be able to audit them, and that's just the minimum thing we do for any other sector like building aeroplanes or cars or pharmaceuticals," he said.
Argument for centralised control.
-
The third "godfather", Prof Yann LeCun, who along with Prof Bengio and Dr Hinton won a prestigious Turing Award for their pioneering work, has said apocalyptic warnings are overblown.
-
"It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it's easy to program these AI systems to ask them to do something very bad, this could be very dangerous.
-
a voluntary code of conduct for AI could be created "within the next weeks".
-
- May 2023
-
www.reddit.com
-
Open-source may pose a challenge as well for global cooperation. If everyone can cook AI models in their basements, how can AI truly be aligned to safe objectives?
||sorina|| It is a concentrated effort to stop bottom-up AI. It is a very dangerous development.
-
About 58% of U.S. adults are familiar with ChatGPT.
||JovanNj|| ||sorina|| ||anjadjATdiplomacy.edu|| Relatively low awareness of ChatGPT and low use (only 14%). It is interesting that Asian minorities are more active in using ChatGPT.
-
-
www.devex.com
-
The irony is that the G77 is arguably winning the intellectual battle about what the U.N. should be focusing on right now.”
-
They fear that the rich world is going to tiptoe away from the SDGs, and the Summit of the Future is a sort of diplomatic smokescreen for that,”
||sorina||||VladaR||||Pavlina||||Katarina_An|| This article provides background about the atmosphere in New York. We should be aware of it as we prepare the event in NY. We may strengthen linkages between the SDGs and the Summit of the Future.
-
Its call for climate justice also echoes the calls by many nations, including Pakistan, to step up international funding to help countries respond to devastating climate-driven catastrophes such as the torrential storm that inundated a third of the country’s territory last year.
Climate concerns of developing countries.
-
“The countries of the European Union, to which my country belongs — 500 million people, a little bit less — received 160 billion U.S. dollars,” he said. “The African continent, three times the population, received 34 billion. There is something fundamentally wrong in the rules because these are the rules of the system that allow for these injustices to take place.”
-
He also foresees a potentially contentious set of negotiations over a broad range of issues, from human rights to climate justice, the environment, and a newly articulated peace agenda.
Other issues of concern to developing countries.
-
“There is a sense that, you know, this is an effort to change the intergovernmental structure of the United Nations and the General Assembly and the General Assembly is an organization of member states,” he said.
Developing countries' concern about the multistakeholder approach.
-
The United States and other key Security Council members killed off the proposal to remake the Trusteeship Council expressing concern it could trigger a messy reopening of negotiations on the U.N. Charter.
-
It called for the creation of a Futures Lab to measure the impact and risks of policies over the long haul; the reform of the Trusteeship Council, established to manage decolonization, to advocate on behalf of future generations, and the appointment of a “special envoy to ensure that policy and budget decision take into account the impact of future generations.” Guterres proposed hosting a Summit of the Future this year so world leaders could turn his plan into action.
Set of proposals for the Summit of the Future.
-
“many member states have expressed their desire for a stronger link between the SDG Summit and Summit of the Future processes.”
-
Dujarric added that the SDG Summit and the Summit of the Future are sequential and interrelated opportunities to achieve both.
-
Officials familiar with the deliberation say the Cuban delegation had not consulted widely within the G-77 membership before issuing the letter.
-
we request the different co-facilitators on Our Common Agenda related processes and the secretariat, to stop conducting meetings, to avoid jeopardizing the focus we all should devote to the SDG Summit,”
-
But a coalition of 134 developing countries, known as the Group of 77 and China, called last month on the U.N. and Germany and Namibia to halt preparations for the 2024 event altogether for the remainder of the year, citing the need to maintain a laser focus on groundwork for the September 2023 Summit on Sustainable Development Goals.
-
Guterres believes it can do both at the same time
-
an honest desire to focus on the implementation of the SDGs, because that is a priority.
-
In a compromise, U.N. states last year agreed to convene their foreign ministers in New York in September to try and strike a deal on a statement laying out the basic contours, or “scope and elements,” of the Future Summit.
-
The delay was partly triggered by concerns among developing countries that the U.N. chief’s signature initiative would detract attention from promoting the U.N.’s 17 Sustainable Development Goals, stalled for at least four years by the pandemic, climate change, and conflict.
-
to halt the event's preparations until next year, contending the U.N. must focus this year on implementing its existing, and faltering, development goals,
-
-
-
The turnaround was two minutes, which is lightning-fast, given the accuracy.
-
I looked for apps containing basic editing options within the app—like highlighting, inviting teammates to comment/edit, and adjusting playback speed.
This feature is essential for us - how to annotate transcripts.
-
I calculated how many mistakes each platform made and whether it was able to detect different speakers correctly.
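One standard way to count such mistakes is word error rate (WER): word-level edit distance divided by the reference length. A minimal sketch (my own illustration, not the article's exact methodology):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions, insertions, deletions) / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[-1][-1] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```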
-
evaluating over 65 transcription apps and doing in-depth testing on all the top contenders.
-
it's becoming a more saturated category, harder to pick out the right one.
-
-
-
It’s up to you to develop your own expert level understanding of this knowledge, and learn to leverage AI tools to make your workflow more efficient while also guarding against the knowledge dilution they cause.
-
for maximizing the impact of your company’s collective knowledge
-
building the processes and workflows for how uniquely valuable knowledge
-
a company culture that values the knowledge of all your individual team members
-
your company's collective knowledge
-
to pre-load your unique knowledge in long form prompts
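A minimal sketch of what such pre-loading might look like in practice (the knowledge snippets and template are invented placeholders, not the author's):

```python
# Hypothetical company knowledge used to ground the model's output.
company_knowledge = [
    "Churn drops sharply when onboarding includes a live demo.",
    "Enterprise buyers ask about SSO and audit logs first.",
]

prompt = (
    "You are writing for our blog. Ground every claim in the notes below.\n\n"
    "Notes:\n"
    + "\n".join(f"- {fact}" for fact in company_knowledge)
    + "\n\nDraft a short post on reducing churn during onboarding."
)
print(prompt)  # paste into a chat model as a long-form, knowledge-loaded prompt
```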
-
But I’ll likely use AI to help me re-purpose some of this content for other channels like social and email.
-
And in cases where your knowledge sits far outside the bubble of consensus that AI tools draw from, it will likely look a lot more like co-writing.
-
Because authority is ultimately something earned from the community you are involved in.
-
The “average” outputs that AI generates can’t capture your unique life experiences.
-
This gravity toward the average will lead to…
-
that 20% of your company’s collective knowledge will drive 80% of your results.
-
-
seomodels.com
-
have a keyword difficulty of 10 or less.
-
-
unsc.diplomacy.edu
-
Utilizing cutting-edge fine-tuning methodologies
||Jovan|| test
-
-
dig.watch
-
our API
-
also think it's really important to decide whose values we're going to align these models to.
-
I’m a big believer in the democratizing potential of technology,
-
is going to be small just because of the resources required.
Is this correct? Can we train models with fewer resources?
-
I think it's important to democratize the inputs to these systems, the values that we're going to align to.
-
with Microsoft releasing Sydney.
-
there are areas like copyright where we don’t really have laws.
This is not correct. There are rules; the other question is whether they can be enforced.
-
to prioritize ethics and responsible technology as opposed to pausing development.
-
My opinion is that the moratorium that we should focus on is actually deployment until we have good safety cases.
-
try to make these things actually enforced.
-
three principles: transparency, accountability, and limits on use.
3 principles for AI governance
-
These systems are almost like counterfeit people, and we don’t really honestly understand what the consequence of that is.
-
counterfeit people
-
Same with psychiatric advice
-
medical misinformation
-
haven’t been invented yet
-
pre-deployment and post-deployment.
-
we put so much burden that only the big players can do it.
-
regulatory capture.
-
you can still cause great harm with a smaller model.
-
you slow down American industry in such a way that China or somebody else makes faster progress.
-
economic transformation
-
safety concerns
-
Number one, you’re here because AI is this extraordinary new technology that everyone says can be transformative as much as the printing press. Number two is really unknown what’s gonna happen. But there’s a big fear you’ve expressed to all of you about what bad actors can do and will do if there’s no rules of the road. Number three, as a member who served in the house and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.
A good summary of the current situation with AI technology.
-
what these systems get aligned to, whose values,
-
There is a real risk of a kind of technocracy combined with oligarchy, where small number of companies influence people’s beliefs through the nature of these systems.
-
this massive corporate concentration.
-
We have API developers pay us and we have ChatGPT.
-
we need to have some international meetings very quickly with people who have expertise in how you grow agencies in the history of growing agencies.
-
no way to put this genie in the bottle globally.
-
But I, you know, you talked about defining the highest-risk uses. We don't know all of them. We really don't. We can't see where this is going, regulating at the point of risk.
-
we’re not an advertising based model.
But Bing is!
-
the safety for children of you
-
tools that humans use to make human judgments, and that we don’t take away human judgment
-
can predict future human behavior is potentially pretty significant at the individual level.
-
a model that could help create novel biological agents would be a great threshold.
-
capability thresholds
Good point, but difficult to define.
-
And we risk, if we're not thoughtful about it, contributing to the development of tools and approaches that only exacerbate the bias and inequities that exist in our society.
Valid point about inequality.
-
Excited to work with people who have particular data sets and to work to collect a representative set of values from around the world to draw these wide bounds of what the system can do.
-
Can you speak just for a second specifically to language inclusivity?
Many good questions from senators asking for clarity. There are not always clear answers, but clarity of language must prevail even if you display trade-offs.
-
language and cultural inclusivity
Another important topic.
-
And what auto GPT does is it allows systems to access source code, access the internet and so forth. And there are a lot of potential, let’s say cybersecurity risks. There, there should be an external agency that says, well, we need to be reassured if you’re going to release this product that there aren’t gonna be cybersecurity problems or there are ways of addressing it.
||VladaR|| Vlada, please follow-up on this aspect on AI and cybersecurity.
-
the central scientific issue
Is it a 'scientific issue'? I do not think so. It is more a philosophical, possibly even theological, issue. Can science tell us what is good and bad?
-
I can’t envision or imagine right now what kind of a licensing scheme we would be able to create to pretty much regulate the vastness of, of the, this game-changing tool.
Difficulty in establishing an AI licensing scheme.
-
some sort of standard,
-
what do you consider a harmful request?
Critical issue.
-
require independent audits.
-
specific tests that a model has to pass
-
a set of safety standards
-
a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards.
-
basically focus on AI safety research.
-
AI constitution
-
a nimble monitoring agency to follow what’s going on.
-
please don’t just use concepts. I’m looking for specificity.
Great comment for the AI debate.
-
So disclosure of the data that’s used to train AI, disclosure of the model and how it performs and making sure that there’s continuous governance over these models.
Q: What are the main aspects of AI transparency?
-
the IPCC, a UN body
The closest analogy is with IPCC
-
So we have existing regulatory authorities in place who have been clear that they have the ability to regulate in their respective domains. A lot of the issues we’re talking about today span multiple domains, elections, and the like.
How to use existing governance and regulatory agencies?
-
Guardrails need to be in place.
Guardrails are increasing in 'lingo intensity'
-
Different rules for different risks.
Good slogan
-
the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.
The EU AI Act uses precision regulation, regulating AI in specific contexts.
-
we need to give policy makers and the world as a whole the tools to say, here’s the values and implement them.
Use SDGs as guardrails for AI.
-
that interaction with the world is very important.
-
constitutional AI
New concept?
-
a reasonable care standard.
Another vague concept. What is 'reasonable'? There will be a lot of job for AI-powered lawyers.
-
Thank you, Mr. Chairman and Senator Hawley for having this. I'm trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you, you can sue me. If you're a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, Section 230 is being used by social media companies to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promised, in the terms of use, you would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can't sue. Do you all agree we don't wanna do that again?
How to avoid repeating with AI governance what happened with Section 230 and social media governance?
-
And so the quality of the sort of overall news market is going to decline as we have more generated content by systems that aren't actually reliable in the content they generate.
Risks for the news market.
-
the current version of GPT-4 ended its training in 2021.
The 2021 cutoff is becoming a 'safety net' for OpenAI.
-
And other countries are doing this, Australia and the like. And so my question is, when we already have a study by Northwestern predicting that one-third of the US newspapers that existed roughly two decades ago are gonna be gone by 2025, unless you start compensating for everything from movies, books, yes, but also news content, we're gonna lose any realistic content producers. And so I'd like your response to that. And of course, there is an exemption for copyright in Section 230. But I think asking little newspapers to go out and sue all the time just can't be the answer. They're not gonna be able to keep up.
Q: How to protect newspapers and content producers?
-
When some of those fake ads. So that’s number one. Number two is the impact on intellectual property.
Two concerns: fake ads and IPRs.
-
Sen. Marsha Blackburn (R-TN):
It is probably the most practical approach to AI governance. The Senator from Tennessee asked many questions on the protection of the copyright of musicians. Is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.
-
OpenAI Jukebox
Diplo team should follow up on this development.
-
And I wanna come to you on music and content creation, because we've got a lot of songwriters and artists. Yeah. And I think we have the best creative community on the face of the Earth. They're in Tennessee, and they should be able to decide if their copyrighted songs and images are going to be used to train these models. And I'm concerned about OpenAI's Jukebox. It offers some renditions in the style of Garth Brooks, which suggests that OpenAI is trained on Garth Brooks songs. I went in this weekend and I said, write me a song that sounds like Garth Brooks. And it gave me a different version of Simple Man. So it's interesting that it would do that. But you're training it on these copyrighted songs, these MIDI files, these sound technologies. So as you do this, who owns the rights to that AI-generated material? And using your technology, could I remake a song, insert content from my favorite artist, and then own the creative right to that song?
Bring intellectual property into debate
-
that people own their virtual you.
People can own it only with 'bottom-up AI'
-
We’ve done it before with the IAEA.
Now the IAEA comes up as an analogy, probably driven by nuclear power?
-
When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and each, for each jurisdiction, every company has to train another model.
It is a naive view, because AI is shaped by ethics and ethics is very 'local'. Yes, there are some global ethical principles: protect human life and dignity. But many other ethical rules are very 'local'.
-
need a cabinet level organization within the United States in order to address this.
Who can govern AI?
-
So we think that AI should be regulated at the point of risk, essentially, and that’s the point at which technology meets society.
Nice 'meeting' language
-
I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them. In fact, many people in the Senate have base their careers on the opposite that the economy will thrive if government gets the hell out of the way. And what I’m hearing instead today is that ‘stop me before I innovate again’ message. And I’m just curious as to how we’re going to achieve this
Great point and a strategic shift. It is a 'Frankenstein moment': companies have realised that they created something they cannot control.
-
an enterprise technology company, not consumer focused.
It is an interesting distinction. However, technology developed by IBM will be used for consumer services.
-
hyper targeting of advertising is definitely going to come.
-
And we probably need scientists in there doing analysis in order to understand what the political influences of, for example, of these systems might be.
Marcus tries to make a case for 'scientists'. But, frankly speaking, how can scientists decide whether AI should rely on a book written in favour of Republicans or Democrats, or, as AI develops more sophistication, what 'weight' they should give to one or another source?
It is VERY dangerous to place ethical and political decisions in the hands of scientists. It is also unfair towards them.
-
we don’t know what it’s trained on.
It is a good point. OpenAI is not transparent about the datasets used for training GPT. But the problem is that even if they inform us, the question will be who decides which datasets should be used for training.
-
If these large language models can, even now, based on the information we put into them quite accurately predict public opinion, you know, ahead of time. I mean, predict, it’s before you even ask the public these questions, what will happen when entities, whether it’s corporate entities or whether it’s governmental entities, or whether it’s campaigns or whether it’s foreign actors, take this survey information, these predictions about public opinion and then fine tune strategies to elicit certain responses, certain behavioral responses.
This is what worries politicians: how to win elections? They like 'to see' (use AI for their needs) but 'not to be seen' (be used by somebody else). The main problem with political elites worldwide is that they may win elections with the use of AI (or not), but humanity is sliding into 'knowledge slavery' by AI.
-
large language models can indeed predict public opinion.
They can, as they, for example, predict the continuation of this debate in the political space.
-
so-called artificial general intelligence really will replace a large fraction of human jobs.
It is a good point. There won't be more work.
-
And the real question is over what time scale? Is it gonna be 10 years? Is it gonna be a hundred years?
It is a crucial question. One generation will be 'thrown under the bus' in the transition. The generation aged 25-50 should 'fasten their seat-belts': they were educated in the 'old system' while they will have to work in a very uncertain new economy.
-
that scientists be part of that process.
What should scientists do specifically? Can scientists judge if something is true? Who are the scientists (e.g. do we refer to IT specialists)?
-
So I think the most important thing that we could be doing and can, and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for, for years now in doing that in focusing on skills-based hiring in educating for the skills of the future. Our skills build platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.
It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if we need less human labour.
-
And so you see already people that are using GPT-4 to do their job much more efficiently by helping them with tasks. Now, GPT-4 will I think entirely automate away some jobs, and it will create new ones that we believe will be much better. This happens again, my understanding of the history of technology is one long technological revolution, not a bunch of different ones put together, but this has been continually happening. We, as our quality of life raises and as machines and tools that we create can help us live better lives the bar raises for what we do and, and our human ability and what we spend our time going after goes after more ambitious, more satisfying projects. So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.
Fair statement. There is a somewhat naive view that we will increase happiness as we work less. But, so far, the digital revolution has proven the opposite: with the gig economy, people work more and more. There is also a sharp increase in inequality as capital becomes more relevant than labour.
-
not a creature,
Good point on avoiding anthropomorphism.
-
this technology is in its early stages
As with Google and other tech companies, it is likely to remain in permanent 'beta version'.
-
I think that’s a great idea.
To be cynical: it is a 'great idea' because it won't work in practice, but there is a pretension that we are doing something.
-
The National Institute of Standards and Technology actually already has an AI accuracy test,
It would be interesting to see how it works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.
-
Some of us might characterize it more like a bomb in a China shop, not a bull.
Q: Is AI a bull or a bomb in a china shop?
-
Ultimately, we may need something like CERN: global, international and neutral, but focused on AI safety rather than high-energy physics.
He probably thought of the analogy with the IPCC as a supervisory space. But CERN could play a role as a place for research on AI and for processing huge amounts of data.
-
But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.
An important stakeholder.
-
The sums of money at stake are mind-boggling. Missions drift: OpenAI's original mission statement proclaimed our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they're largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up.
Why we should not trust AI companies?
-
We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias and above all else to be safe. But current systems are not in line with these values. Current systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias, and even their makers don’t entirely understand how they work. Most of all, we cannot remotely guarantee that they’re safe. And hope here is not enough. The big tech company’s preferred plan boils down to trust us. But why should we?
What is the current situation in AI industry?
-
We all more or less agree on the values we would like for our AI systems to honor.
Are we? Maybe in the USA, but not globally. Consult the work of Moral Machine, which shows that different cultural contexts shape whom we would save in the trolley experiment: young vs elderly, men vs women, rich vs poor. See more: https://www.moralmachine.net/
-
A law professor, for example, was accused by a chatbot of sexual harassment, untrue. And it pointed to a Washington Post article that didn't even exist. The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy. Poor medical advice could have serious consequences too. An open source large language model recently seems to have played a role in a person's decision to take their own life. The large language model asked the human, if you wanted to die, why didn't you do it earlier? And then followed up with, were you thinking of me when you overdosed? Without ever referring the patient to human help.
Examples of risk narrative
-
What criminals are gonna do here is to create counterfeit people.
Risks narrative
-
Choices about data sets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules shaping society in subtle but powerful ways.
What about each of us choosing datasets? AI has to be bottom-up.
-
Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do.
Risks narrative
-
guardrails
Guardrails are emerging lingo in AI governance.
-