🦸🏻‍♂️New Trick Blocks AI Misuse

PLUS: Fei-Fei Li Slams California's AI Bill

Reading time: 5 minutes

Key Points 

  • Researchers developed a training technique that alters a model’s parameters so that common methods for stripping its safety features stop working.

  • The approach isn’t foolproof, but it makes tampering with open models harder, with the goal of being costly enough to deter most would-be adversaries.

📒Context of the news - In April, Meta released its large language model, Llama 3, as open-source software to enable more developers to build products using the technology. However, it wasn't long before some bad actors managed to bypass the built-in safety restrictions designed to prevent the model from generating harmful or illegal content, prompting safety experts to advocate for tamper-proof measures.

👨🏻‍🔬News - In response, researchers from the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the Center for AI Safety have developed a new training technique that could make it more difficult to remove safeguards from Llama and other open-source AI models.

🪄What's their magic trick? Their method modifies the model’s parameters so that common techniques for bypassing safety features no longer work. In tests with a simplified version of Llama 3, they adjusted the parameters so the model couldn’t be trained to answer harmful questions, even after numerous attempts.

While not perfect, this approach raises the difficulty of tampering with AI models and aims to make such efforts costly enough to deter most would-be adversaries.
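
The article doesn’t spell out the training procedure, but work in this area is generally framed as adversarial meta-learning: simulate a fine-tuning attack during training, then nudge the weights so the attack stops paying off. Below is a minimal, hypothetical PyTorch sketch of that loop on a toy model, not the researchers’ actual method; the synthetic datasets, attack budget, and first-order gradient shortcut are all illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a tiny classifier in place of an LLM, random tensors in
# place of real benign / "harmful" fine-tuning datasets.
model = nn.Linear(8, 2)
benign_x, benign_y = torch.randn(64, 8), torch.randint(0, 2, (64,))
harmful_x, harmful_y = torch.randn(64, 8), torch.randint(0, 2, (64,))
loss_fn = nn.CrossEntropyLoss()
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-2)

def simulate_attack(released, steps=5, lr=1e-1):
    """Fine-tune a copy of the released weights on the harmful task,
    the way an adversary stripping the safeguards would."""
    attacked = copy.deepcopy(released)
    opt = torch.optim.SGD(attacked.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(attacked(harmful_x), harmful_y).backward()
        opt.step()
    return attacked

for step in range(200):
    # 1) Simulate the attack against the current weights.
    attacked = simulate_attack(model)

    # 2) Gradient of the harmful loss *after* the attack (first-order
    #    shortcut: ignore the inner loop's Jacobian, as in FOMAML).
    attacked.zero_grad()
    loss_fn(attacked(harmful_x), harmful_y).backward()

    # 3) Update the released weights: descend the benign loss while
    #    ascending the post-attack harmful loss.
    outer_opt.zero_grad()
    loss_fn(model(benign_x), benign_y).backward()
    with torch.no_grad():
        for p, q in zip(model.parameters(), attacked.parameters()):
            p.grad.sub_(0.5 * q.grad)  # gradient ascent on post-attack harm
    outer_opt.step()

# A fresh attack should now recover less of the harmful capability
# (its loss stays higher) than against an unprotected model.
print("post-attack harmful loss:",
      loss_fn(simulate_attack(model)(harmful_x), harmful_y).item())
```

The essential design choice is that the outer update optimizes against the model as it looks after the simulated attack, so the released weights settle somewhere fine-tuning can’t cheaply drag back toward harmful behavior, which is how the approach aims to raise the cost of tampering.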

👨🏻‍🏫In conclusion - Since open models are already competing with state-of-the-art closed models from companies like OpenAI and Google, the idea of tamper-proofing open models might become more popular as interest in open-source AI grows.

However, Stella Biderman, executive director of EleutherAI, made an interesting point: if the concern is about LLMs generating harmful information, the focus should be on managing the training data rather than altering the model after it has been trained.

Key Points 

  • Fei-Fei Li warned that California’s SB-1047 could negatively impact the US AI ecosystem, affecting the public sector, academia, and smaller tech firms.

  • Li criticized the bill for holding developers accountable for all model misuse, requiring a “kill switch,” and potentially stifling open-source contributions and academic research.

👩🏻‍💻News - Fei-Fei Li, a leading computer scientist often called the “Godmother of AI,” has recently voiced concerns about California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB-1047.

In a post on X, Li cautioned that if SB-1047 becomes law, it could harm the US AI ecosystem, particularly affecting the public sector, academia, and smaller tech companies that are already at a disadvantage compared to major tech giants. She believes the bill might unfairly penalize developers, stifle the open-source community, and restrict academic AI research, all while failing to address the actual issues it was meant to solve.

🕵🏻‍♀️ What’s Li’s reasoning?

  • Li pointed out that the bill would hold the developers of AI models responsible for any misuse of those models. Since it’s unrealistic for developers to foresee every possible use of their models, this could make them more cautious and less willing to innovate.

  • She further noted that SB-1047 requires a “kill switch” for models exceeding a certain threshold. This could deter developers from contributing to open-source projects, fearing their work might be shut down, which would negatively impact the open-source community that has driven many technological advances.

  • Additionally, Li highlighted that the bill could hinder AI research in academia and the public sector by limiting access to essential models, data, and collaboration. That would make it harder for students and researchers to develop and advance AI technologies, work that is crucial for training future AI professionals.

  • She also criticized the bill for not addressing major issues like AI bias or deepfakes, adding that it instead imposes arbitrary restrictions based on computing power or training costs, which could stifle innovation and limit progress, especially in academic research.

🤔So, is Li against AI regulation? Li isn’t against regulation; in fact, she views it as essential for the safe and effective growth of AI. However, she believes AI policy should support open-source development, establish clear and consistent rules, and build consumer trust. She also advocates for a “moonshot mentality” to drive AI education, research, and development in the country.


👩🏼‍🚒Discover mind-blowing AI tools

  1. OpenTools AI Tools Expert GPT - Find the perfect AI tool to supercharge your workflow. This GPT is connected to our database, so you can ask in-depth questions about any AI tool directly in ChatGPT (free w/ ChatGPT)

  2. Scite - An AI-driven research tool that assists users in finding reliable information from millions of research articles ($20/month)

  3. Video2Recipe - A tool that allows users to convert cooking videos into recipes by pasting the video URL (Free up to 3 recipes/month)

  4. Anyword - An AI copywriting tool that helps performance marketers generate and optimize their copy to get more conversions and drive more sales ($39/month)

  5. Gistty - A Chrome extension that provides brief summaries of product reviews on Amazon (Free)

  6. Hocoos AI - A website builder that uses artificial intelligence to create customized websites for businesses ($15/month)

  7. Branchbob.ai - An AI-powered platform that enables merchants to quickly create and easily manage fully functional online stores ($30/month) 

  8. Sendsteps - An AI-powered presentation maker that helps users create visually appealing and interactive presentations 10x faster ($5/month)

  9. Fibery AI - An AI-powered tool that aids in brainstorming, writing, task automation, and process experimentation within a single context ($10/month)
