Numerous AI safety organizations have pushed for policy changes that would criminalize the release and use of current open-source large language models (LLMs), with the aim of capping AI capabilities at roughly their present levels. Although intended to improve AI safety, these proposals may hinder open-source contribution, research, and development, and may inadvertently entrench the dominance of large corporate entities. Different safety groups take varying stances and propose different measures, but the collective impact of their efforts could significantly, and perhaps detrimentally, shape the evolution of open-source AI technologies.