- May 2023
-
dig.watch
-
Please don’t just use concepts. I’m looking for specificity.
Great comment for AI debate
-
So disclosure of the data that’s used to train AI, disclosure of the model and how it performs and making sure that there’s continuous governance over these models.
Q: What are the main aspects of AI transparency?
-
Guardrails need to be in place.
Guardrails are increasing in 'lingo intensity'
-
Thank you, Mr. Chairman and Senator Hawley for having this. I’m trying to find out how it is different than social media and learn from the mistakes we made with social media. The idea of not suing social media companies is to allow the internet to flourish. Because if I slander you, you can sue me. If you’re a billboard company and you put up the slander, can you sue the billboard company? We said no. Basically, Section 230 is being used by social media companies to avoid liability for activity that other people generate. When they refuse to comply with their terms of use, a mother calls up the company and says, this app is being used to bully my child to death. They promised, in the terms of use, they would prevent bullying. And she calls three times, she gets no response, the child kills herself and they can’t sue. Do you all agree we don’t wanna do that again?
How to avoid repeating with AI governance what happened with Section 230 and social media governance?
-
And other countries are doing this, Australia and the like. And so my question is, when we already have a study by Northwestern predicting that one-third of the US newspapers that existed roughly two decades ago are gonna be gone by 2025, unless you start compensating for everything from movies and books, but also news content, we’re gonna lose any realistic content producers. And so I’d like your response to that. And of course, there is an exemption for copyright in Section 230. But I think asking little newspapers to go out and sue all the time just can’t be the answer. They’re not gonna be able to keep up.
Q: How to protect newspapers and content producers?
-
Some of us might characterize it more like a bomb in a China shop, not a bull.
Q: Is AI a bull or a bomb in a china shop?
-
The sums of money at stake are mind-boggling. Mission drift: OpenAI’s original mission statement proclaimed our goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they’re largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up.
Why should we not trust AI companies?
-
A law professor, for example, was accused by a chatbot of sexual harassment, untrue. And it pointed to a Washington Post article that didn’t even exist. The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy. Poor medical advice could have serious consequences, too. An open source large language model recently seems to have played a role in a person’s decision to take their own life. The large language model asked the human, if you wanted to die, why didn’t you do it earlier? And then followed up with, were you thinking of me when you overdosed? Without ever referring the patient to human help.
Examples of risk narrative
-
Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do.
Risks narrative
-
guardrails
Guardrails are emerging lingo in AI governance.
-
First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent. So AI shouldn’t be hidden. Consumers should know when they’re interacting with an AI system and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact. For higher risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public. And to attest that they’ve done so by following risk-based use case-specific approach.
Q: What are 4 elements of precision regulation as proposed by IBM?
-
a precision regulation
Language
Precision regulation is another concept to follow.
-
We believe that the benefits of the tools we have deployed so far vastly outweigh the risks,
Balancing narrative Opportunities 80 - Risks 20
-
Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment.
Optimistic narrative
-
We think it can be a printing press moment.
Paradigm shift narrative
-
Will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country?
Balance narrative Choice narrative
-
Is it gonna be more like the atom bomb: a huge technological breakthrough, but with consequences, severe, terrible, that continue to haunt us to this day?
Analogy - atomic bomb
-
Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, that led, above all, to greater liberty?
Analogy with Printing press
-
These opportunities are why former U.S. Treasury Secretary Lawrence Summers has said that AI tools such as ChatGPT might be as impactful as the printing press, electricity, or even the wheel or fire.
It is a good example of an argument from authority. This comparison of digital technology to the printing press, electricity, the wheel, or fire is probably the most frequently invoked in any discussion of the impact of digital technology on society.
In this case Lawrence Summers is quoted because of his authority in the US political/academic establishment.
-