Just a sample of the Echomail archive
Message 1820
From: Mike Powell to All
Subject: OpenAI bans Chinese, North Korean hacker accounts using ChatGPT to launch surveillance
Date: 08 Oct 25 08:56:22
TZUTC: -0500
MSGID: 1569.consprcy@1:2320/105 2d4ba9a7
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed

OpenAI bans Chinese, North Korean hacker accounts using ChatGPT to launch surveillance
Date: Wed, 08 Oct 2025 12:38:00 +0000
Description: Malicious actors are trying to trick ChatGPT into doing bad things, but are now being banned.

FULL STORY

OpenAI has banned Chinese, North Korean, and other accounts that were reportedly using ChatGPT to launch surveillance campaigns, develop phishing techniques and malware, and engage in other malicious practices.

In a new report, OpenAI said it observed individuals reportedly affiliated with Chinese government entities, or state-linked organizations, using its Large Language Model (LLM) to help write proposals for surveillance systems and profiling technologies. These included tools for monitoring individuals and analyzing behavioral patterns.

Exploring phishing

"Some of the accounts that we banned appeared to be attempting to use ChatGPT to develop tools for large-scale monitoring: analyzing datasets, often gathered from Western or Chinese social media platforms," the report reads.

These users typically asked ChatGPT to help design such tools or generate promotional materials about them, but not to implement the monitoring. The prompts were framed in a way that avoided triggering safety filters, and were often phrased as academic or technical inquiries. While the returned content did not directly enable surveillance, OpenAI said its outputs were used to refine documentation and planning for such systems.

The North Koreans, on the other hand, used ChatGPT to explore phishing techniques, credential theft, and macOS malware development.
OpenAI said it observed these accounts testing prompts related to social engineering, password harvesting, and debugging malicious code, especially targeting Apple systems. The model refused direct requests for malicious code, OpenAI said, but it stressed that the threat actors still tried to bypass safeguards by rephrasing prompts or asking for general technical help.

Like any other tool, LLMs are being used by both financially motivated and state-sponsored threat actors for all sorts of malicious activity. This AI misuse is evolving, with threat actors increasingly integrating AI into existing workflows to improve their efficiency.

While developers such as OpenAI work hard to minimize risk and ensure their products cannot be used this way, many prompts fall between legitimate and malicious use. This gray-zone activity, the report hints, requires nuanced detection strategies.

Via The Register

======================================================================
Link to news story:
https://www.techradar.com/pro/security/openai-bans-chinese-north-korean-hacker-accounts-using-chatgpt-to-launch-surveillance

$$
--- SBBSecho 3.28-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 111 206 300 307 317 400 426 428 470
SEEN-BY: 229/664 700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426