Just a sample of the Echomail archive
Message 1928
From: Mike Powell
To: All
Subject: Illicit AI tools market
Date: 07 Nov 25 12:47:05
TZUTC: -0500
MSGID: 1685.consprcy@1:2320/105 2d736d2e
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed

Google warns criminals are building and selling illicit AI tools - and the market is growing

Date: Thu, 06 Nov 2025 14:03:00 +0000
Description: AI tools are being specially built for cyber crime, new Google research warns.

FULL STORY

Google's Threat Intelligence Group has identified a worrying shift in AI trends, with AI no longer just being used to make criminals more productive, but now also being specially developed for active operations.

Its research found Large Language Models (LLMs) are being used in malware in particular, with "just-in-time" AI like PROMPTFLUX, which is written in VBScript and engages with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate "just-in-time" self-modification, likely to evade static signature-based detection (a sketch of such a fixed-string signature appears after this message). This illustrates how criminals are experimenting with LLMs to develop dynamic obfuscation techniques and to target victims.

The PROMPTFLUX samples examined by Google suggest that this code family is currently in the testing phase - so it could become even more dangerous once criminals develop it further.

Built for harm

The marketplace for legitimate AI tools is maturing, and so is the criminal black market. Underground forums offer purpose-built AI tools that help lower the barrier for criminals to engage in illicit activities. This is bad news for everyone, since criminals no longer have to be particularly skilled to carry out complex cyberattacks, and they have a growing number of options.

Threat actors are using tactics reminiscent of social engineering to side-step AI safety features - pretending to be cybersecurity researchers in order to convince Gemini to provide them with information that might otherwise be prohibited.

But who's behind these incidents? Well, the research identifies, perhaps unsurprisingly, links to state-sponsored actors from Iran and China. These campaigns have a range of objectives, from data exfiltration to reconnaissance - similar to previously observed influence operations by those states, also using AI tools.

Since AI tools have become popularized, both criminals and security teams have been using them to boost productivity and assist in operations - and it's not quite clear who has the upper hand.

======================================================================
Link to news story:
https://www.techradar.com/pro/security/google-warns-criminals-are-building-and-selling-illicit-ai-tools-and-the-market-is-growing

$$
--- SBBSecho 3.28-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)

SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426
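The message's point about "static signature-based detection" is easier to picture with a concrete example. Below is a minimal, purely hypothetical sketch in Python of a fixed-string indicator scan: it flags a script only when an LLM API endpoint and self-rewriting behaviour both appear verbatim. The endpoint and indicator patterns are assumptions made for illustration, not Google's published detections.

# toy_static_scan.py - hypothetical illustration, not a real detection rule
import re
import sys
from pathlib import Path

# Rough indicator groups (assumed for illustration):
# 1) traffic to an LLM API endpoint, 2) signs the script rewrites/executes itself.
LLM_API_PATTERNS = [
    re.compile(rb"generativelanguage\.googleapis\.com", re.I),   # Gemini REST host
    re.compile(rb"models/gemini[-\w.]*:generateContent", re.I),  # generateContent call
]
SELF_MODIFY_PATTERNS = [
    re.compile(rb"WScript\.ScriptFullName", re.I),       # VBScript: path of the running script
    re.compile(rb"\bExecuteGlobal\b", re.I),             # running dynamically built code
    re.compile(rb"OpenTextFile\s*\([^)]*,\s*2", re.I),   # reopening a file for writing
]

def flagged(path: Path) -> bool:
    """True when at least one pattern from each indicator group matches the file bytes."""
    data = path.read_bytes()
    return (any(p.search(data) for p in LLM_API_PATTERNS)
            and any(p.search(data) for p in SELF_MODIFY_PATTERNS))

if __name__ == "__main__":
    for name in sys.argv[1:]:
        print(f"{name}: {'FLAGGED' if flagged(Path(name)) else 'clean'}")

Run as "python toy_static_scan.py suspect.vbs". Because a rule like this keys on literal byte patterns, a sample that asks an LLM to regenerate its own obfuscation between runs can fall outside the match, which is the evasion the report describes.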