A senior Microsoft employee, Shane Jones, has raised significant concerns regarding the safety of Copilot Designer, an AI tool developed by the company for generating images from text. Jones has taken his concerns to both the US Federal Trade Commission and Microsoft’s board of directors, urging them to investigate the potential risks associated with the tool.
In his letters to the regulator and the company’s leadership, Jones outlined his apprehensions about Copilot Designer’s capability to produce inappropriate images, depicting sensitive subjects such as sex, violence, underage drinking, and drug use, along with political bias and conspiracy theories. He emphasized the importance of educating the public, especially parents and educators, about the potential dangers posed by such technology, particularly when it is used in educational environments.
Despite Jones’s repeated attempts over three months to address the issue internally at Microsoft, the company has declined to remove Copilot Designer from public use or implement adequate safeguards. Jones proposed measures such as adding disclosures to the product and adjusting its rating on the Android app store to warn users of its potential risks, but these suggestions were not acted upon.
In response to Jones’s concerns, Microsoft stated its commitment to addressing any employee apprehensions in accordance with its policies, and expressed appreciation for efforts aimed at enhancing the safety of its technology.
This isn’t the first time Jones has voiced concerns about AI safety. Prior to his letter to the FTC, he had publicly called on OpenAI to withdraw DALL-E, the model powering Copilot Designer, from public use over similar safety concerns. Despite pressure from Microsoft’s legal team to retract his statements, Jones persisted, even reaching out to US senators to raise awareness about AI safety risks.
This incident occurs against a backdrop of increased scrutiny of AI technologies across the tech industry. Recently, Google temporarily suspended access to the image generation feature in Gemini, its competitor to OpenAI’s ChatGPT, following complaints about historically inaccurate images related to race. Demis Hassabis, CEO of Google DeepMind, the company’s AI division, assured the public that the feature would be reinstated once the concerns had been addressed.
In summary, Jones’s concerns about the safety of Copilot Designer highlight the need for careful consideration and oversight of AI technologies, especially those capable of generating sensitive or harmful content. Microsoft’s response, along with recent developments in the wider tech industry, underscores the importance of taking such concerns seriously and implementing appropriate safeguards to mitigate the risks associated with AI-powered tools.