Figma, a popular design tool, recently faced backlash over its “Make Designs” generative AI tool. The tool came under scrutiny after a user discovered that it was producing designs that closely resembled Apple’s weather app. This raised concerns about potential legal implications and sparked a discussion about the sources of the tool’s training data.
In response to the controversy, Figma released a statement explaining that it had not trained the AI tool on Figma content or app designs. The company claimed to have carefully reviewed the underlying design systems during development, but acknowledged that some assets were added without proper vetting. Figma identified and removed the problematic assets from the design system and disabled the feature, and is now implementing an improved quality assurance process before reactivating the tool.
Figma’s Vice President of Product Design, Noah Levin, provided insight into how the models powering the tool were set up. The company commissioned two extensive design systems, one for mobile and one for desktop, each with hundreds of components. These design systems serve as the foundation for the AI: given a user prompt, the model assembles and parameterizes components drawn from examples in the design systems to produce a complete design. Amazon Titan, a diffusion model, then generates the imagery within those designs.
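The two-stage pipeline described above can be sketched in miniature. The following is a hypothetical illustration only: the catalog, `Component` class, and `assemble_design` function are invented for this example and do not reflect Figma’s actual architecture or API. It shows the first stage (selecting and parameterizing components from a design-system catalog based on a prompt); the second stage, image generation by a diffusion model, is represented by a placeholder comment.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str           # e.g. "weather_widget", "tab_bar"
    params: dict = field(default_factory=dict)

# A toy "design system": a catalog of component templates keyed by
# keywords a prompt might contain (hypothetical, for illustration).
MOBILE_DESIGN_SYSTEM = {
    "weather": Component("weather_widget", {"layout": "hourly_strip"}),
    "list": Component("list_view", {"rows": 5}),
    "nav": Component("tab_bar", {"tabs": 3}),
}

def assemble_design(prompt: str, catalog: dict) -> list:
    """Stage 1: pick components matching the prompt and parameterize
    copies of them, yielding a fully specified design."""
    design = []
    for keyword, template in catalog.items():
        if keyword in prompt.lower():
            component = Component(template.name, dict(template.params))
            component.params["source_prompt"] = prompt
            design.append(component)
    # Stage 2 (not shown): a diffusion model such as Amazon Titan would
    # fill image slots in the assembled components.
    return design

design = assemble_design("A weather app with a nav bar", MOBILE_DESIGN_SYSTEM)
print([c.name for c in design])  # → ['weather_widget', 'tab_bar']
```

The key point of the design, per Figma’s description, is that generation is grounded in a fixed component catalog rather than free-form synthesis, which is why the provenance of the catalog’s assets mattered so much in this incident.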
Despite the controversy surrounding the Make Designs tool, Figma continues to develop and release other AI tools, such as a text generator for designs. The company has also outlined its AI training policies, giving users the option to opt in or out of allowing Figma to train on their data for future models. Figma’s Chief Technology Officer, Kris Rasmussen, expressed confidence in reenabling the Make Designs feature in the near future.
The incident with Figma’s Make Designs tool serves as a reminder of the importance of thorough quality assurance processes in AI development. It highlights the risks of unintentionally replicating existing designs and the potential legal consequences that can arise. Moving forward, Figma and other companies developing AI tools must prioritize transparency, accountability, and user consent in their training processes to avoid similar controversies.
The controversy surrounding Figma’s Make Designs tool sheds light on the complexities of AI design tools and the need for responsible development and implementation. By learning from this incident, Figma can improve its processes and regain the trust of its users.