Reposted from CISA
The accelerated development of new artificial intelligence (AI) capabilities, including large language models (LLMs), has spurred international debate around the potential impact of “open source AI” models. Does open sourcing a model benefit society because it enables rapid innovation, as developers can study, use, share, and collaboratively iterate on state-of-the-art models? Or do such capabilities pose security threats, allowing adversaries to leverage these models for greater harm?
Fortunately, the conversation doesn’t have to start from scratch. As the Cybersecurity and Infrastructure Security Agency’s (CISA) leads on open source software (OSS) security, we’ve spent significant time immersed in open source communities. OSS faced similar debates in the 1990s, and its history holds many lessons for today’s discussion.