- Jun 2023
Interesting discussion by Bruce Schneier and Waldo on ways to regulate AI use, and on the role (and limitations) of open source in that effort.
It raises some interesting questions about the accountability of the open source community. They argue, as many others do, that the OS community is too fluid to be regulated. I tend to disagree: the OS community has many levels, and a given OS component (say, code on GitHub) gets picked up by others at certain points and pushed to a mass market for benefit, commercial or otherwise. It is when such OS products are picked up that the risk explodes - and it is then that we see tangible entities (companies or organisations) that should be, and are, accountable for how they use the OS code and push it to the mass market.
I see an analogy with vulnerabilities in digital products and the OS community's responsibility for supply chain security. While each coder should be accountable, for individuals it probably comes down to ethics, since the effect of a single GitHub project is very limited; but there are entities in this supply chain that integrate such components, and those should clearly be held accountable.
My comments are below. This is an interesting question for the Geneva Dialogue as well, not only for the AI debates.
cc ||JovanK|| ||anastasiyakATdiplomacy.edu||