Friend or foe turns on a single thought; right and wrong drag on year after year.

OpenAI Says China-Linked Group Tried to Phish Its Employees

The unsuccessful effort involved posing as a ChatGPT user.

The ChatGPT virtual assistant logo on a smartphone.

Photographer: Andrey Rudakov/Bloomberg

By Seth Fiegerman

October 9, 2024 at 6:30 PM GMT+8

OpenAI said a group with apparent ties to China tried to carry out a phishing attack on its employees, reigniting concerns that bad actors in Beijing want to steal sensitive information from top US artificial intelligence companies.

The AI startup said Wednesday that a suspected China-based group called SweetSpecter posed as a user of OpenAI’s chatbot ChatGPT earlier this year and sent customer support emails to staff. The emails included malware attachments that, if opened, would have allowed SweetSpecter to take screenshots and exfiltrate data, OpenAI said, but the attempt was unsuccessful.

“OpenAI’s security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails,” OpenAI said.

The disclosure highlights the potential cybersecurity risks for leading AI companies as the US and China are locked in a high-stakes battle for artificial intelligence supremacy. In March, for example, a former Google engineer was charged with stealing AI trade secrets for a Chinese firm.

China’s government has repeatedly denied allegations by the US that organizations within the country perpetrate cyberattacks, accusing external parties of organizing smear campaigns.

OpenAI revealed the attempted phishing attack as part of its latest threat intelligence report, outlining its efforts to combat influence operations around the world. In the report, OpenAI said it took down accounts from groups with links to Iran and China that used AI for coding assistance, conducting research and other tasks.

— With assistance from Rachel Metz

Continuing on as enemies... it's just... sigh.

Last edited by @suen 2024-10-09T12:22:21Z

https://openai.com/global-affairs/an-update-on-disrupting-deceptive-uses-of-ai/

An update on disrupting deceptive uses of AI

OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. We are dedicated to identifying, preventing, and disrupting attempts to abuse our models for harmful ends. In this year of global elections, we know it is particularly important to build robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms.

Since the beginning of the year, we’ve disrupted more than 20 operations and deceptive networks from around the world that attempted to use our models. To understand the ways in which threat actors attempt to use AI, we’ve analyzed the activity we’ve disrupted, identifying an initial set of trends that we believe can inform debate on how AI fits into the broader threat landscape. Today, we are publishing OpenAI’s latest threat intelligence report, which represents a snapshot of our understanding as of October 2024.

As we look to the future, we will continue to work across our intelligence, investigations, security, safety, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately. We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security.