- May 2023
-
dig.watch
-
three principles: transparency, accountability, and limits on use.
3 principles for AI governance
-
Number one, you’re here because AI is this extraordinary new technology that everyone says can be as transformative as the printing press. Number two, it’s really unknown what’s gonna happen. But there’s a big fear, as all of you have expressed, about what bad actors can do and will do if there are no rules of the road. Number three, as a member who served in the House and now in the Senate, I’ve come to the conclusion that it’s impossible for Congress to keep up with the speed of technology.
A good summary of the current situation with AI technology.
-
the central scientific issue
Is it a 'scientific issue'? I do not think so. It is more a philosophical, possibly even theological, issue. Can science tell us what is good and bad?
-
the conception of the EU AI Act is very consistent with this concept of precision regulation where you’re regulating the use of the technology in context.
The EU AI Act applies precision regulation, regulating the use of AI in specific contexts.
-
a reasonable care standard.
Another vague concept. What is 'reasonable'? There will be a lot of work for AI-powered lawyers.
-
Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different from social media and learn from the mistakes we made with social media. The idea of not suing social media companies was to allow the internet to flourish. Because if I slander you, you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, Section 230 is being used by social media companies to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. You promised, in the terms of use, you would prevent bullying. And she calls three times, she gets no response, the child kills herself, and they can’t sue. Do you all agree we don’t wanna do that again?
How can we avoid repeating with AI governance what happened with Section 230 and social media governance?
-
the current version of GPT-4 ended training in 2021.
The 2021 training cut-off is starting to become a 'safety net' for OpenAI.
-
Sen. Marsha Blackburn (R-TN):
It is probably the most practical approach to AI governance. The Senator from Tennessee asked many questions on the protection of musicians' copyright: is Nashville endangered? The more we anchor AI governance questions in the practical concerns of citizens, communities, and companies, the better AI governance we will have.
-
that people own their virtual you.
People can own it only with 'bottom-up AI'.
-
When you think about the energy costs alone, just for training these systems, it would not be a good model if every country has its own policies and, for each jurisdiction, every company has to train another model.
It is a naive view because AI is shaped by ethics, and ethics is very 'local'. Yes, there are some global ethical principles, such as protecting human life and dignity. But many other ethical rules are very 'local'.
-
need a cabinet-level organization within the United States in order to address this.
Who can govern AI?
-
If these large language models can, even now, based on the information we put into them, quite accurately predict public opinion ahead of time, before you even ask the public these questions, what will happen when entities, whether it’s corporate entities, or governmental entities, or campaigns, or foreign actors, take this survey information, these predictions about public opinion, and then fine-tune strategies to elicit certain responses, certain behavioral responses?
This is what worries politicians: how to win elections. They like 'to see' (use AI for their needs) but 'not to be seen' (be used by somebody else's AI). The main problem with political elites worldwide is that, whether or not they win elections with the use of AI, humanity is sliding into 'knowledge slavery' by AI.
-
large language models can indeed predict public opinion.
They can, for example, predict the continuation of this debate in the political space.
-
So I think the most important thing that we could be doing, and can and should be doing now, is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies and using them. And we’ve been very involved for years now in doing that, in focusing on skills-based hiring and in educating for the skills of the future. Our SkillsBuild platform has 7 million learners and over a thousand courses worldwide focused on skills. And we’ve pledged to train 30 million individuals by 2030 in the skills that are needed for society today.
It is probably the only thing to do. But the problem remains that even re-skilling won't be sufficient if less human labour is needed.
-
The National Institute of Standards and Technology actually already has an AI accuracy test,
It would be interesting to see how it works in practice. How can you judge accuracy if AI is about probability? It is not about certainty, which is the first building block of accuracy.
-
Ultimately, we may need something like CERN: global, international, and neutral, but focused on AI safety rather than high-energy physics.
He probably thought of an analogy with the IPCC as a supervisory space. But CERN could play a role as a place for research on AI and for processing huge amounts of data.
-
But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems in evaluating solutions.
An important stakeholder.
-
- Nov 2021
-
www.diplomacy.edu
-
46
This para shifts the previous balancing formulation on data governance (a proper balance between data sovereignty and free data flows) towards more data sovereignty.
-