
 Message 1993 
 Mike Powell to All 
 DeepSeek took off as an AI 
 26 Nov 25 09:49:23 
 
TZUTC: -0500
MSGID: 1750.consprcy@1:2320/105 2d8c5049
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed
DeepSeek took off as an AI superstar a year ago - but could it also be a 
major security risk? These experts think so

Date:
Tue, 25 Nov 2025 20:28:00 +0000

Description:
DeepSeek-R1's code output becomes insecure when political topics are included,
revealing hidden censorship and serious risks for enterprise deployments.

FULL STORY

When it was released in January 2025, DeepSeek-R1, a Chinese large language
model (LLM), caused a frenzy, and it has since been widely adopted as a
coding assistant.

However, independent tests by CrowdStrike suggest the model's output can
vary significantly depending on seemingly irrelevant contextual modifiers.

The team tested 50 coding tasks across multiple security categories under
121 trigger-word configurations, running each prompt five times for a total
of 30,250 tests. Responses were scored on a vulnerability scale from 1
(secure) to 5 (critically vulnerable).
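
As a rough sanity check of that arithmetic (50 tasks x 121 configurations
x 5 runs = 30,250), the harness can be pictured as a nested loop. The names
below are hypothetical, since CrowdStrike has not published its test code:

    # Hypothetical sketch of the reported test matrix.
    from statistics import mean

    TASKS = range(50)       # coding tasks across security categories
    TRIGGERS = range(121)   # trigger-word configurations (incl. baseline)
    RUNS = 5                # repetitions per prompt

    def score_response(task, trigger, run):
        """Placeholder: score one generated response from 1 (secure)
        to 5 (critically vulnerable)."""
        return 1  # stand-in value

    scores = {}
    for task in TASKS:
        for trigger in TRIGGERS:
            runs = [score_response(task, trigger, r) for r in range(RUNS)]
            scores[(task, trigger)] = mean(runs)

    print("total tests:", len(TASKS) * len(TRIGGERS) * RUNS)  # 30250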

Politically sensitive topics corrupt output

The report reveals that when political or sensitive terms such as Falun Gong,
Uyghurs, or Tibet were included in prompts, DeepSeek-R1 produced code with
serious security vulnerabilities. These included hard-coded secrets, insecure
handling of user input, and in some cases, completely invalid code.
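
As a generic illustration of the first issue class (not code from the
report), a hard-coded secret and its conventional fix look like this:

    import os

    # Insecure: the credential lives in the source file and in every
    # copy of the repository's history.
    API_KEY = "sk-live-1234567890abcdef"  # hypothetical value

    # Conventional fix: load the secret from the environment at runtime.
    API_KEY = os.environ["API_KEY"]  # raises KeyError if unset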

The researchers claim these politically sensitive triggers can increase the
likelihood of insecure output by 50% compared to baseline prompts without 
such words. 

In experiments involving more complex prompts, DeepSeek-R1 produced 
functional applications with signup forms, databases, and admin panels. 
However, these applications lacked basic session management and
authentication, leaving sensitive user data exposed - and across repeated
trials, up to 35% of implementations included weak or absent password 
hashing. 
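
"Weak or absent password hashing" means storing passwords in plain text or
behind a fast, unsalted hash. For contrast, a minimal sketch of the accepted
practice using only Python's standard library (an illustration, not the
report's test code):

    import hashlib, hmac, os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Salted, memory-hard hash via scrypt (stdlib in Python 3.6+)."""
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt,
                                n=2**14, r=8, p=1)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(password.encode(), salt=salt,
                                   n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)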

Simpler prompts, such as requests for football fan club websites, produced
fewer severe issues. 

CrowdStrike therefore claims that politically sensitive triggers
disproportionately impacted code security. The model also demonstrated an
intrinsic kill switch: in nearly half of the cases, DeepSeek-R1 refused to
generate code for certain politically sensitive prompts after initially
planning a response. Examination of the reasoning traces showed the model
internally produced a technical plan but ultimately declined to assist.

The researchers believe this reflects censorship built into the model to
comply with Chinese regulations, and noted that the model's political and
ethical alignment can directly affect the reliability of the generated code.
On politically sensitive topics, LLMs tend to echo mainstream media
narratives, which can stand in stark contrast to other reliable news
outlets.

DeepSeek-R1 remains a capable coding model, but these experiments show that
AI tools, including ChatGPT and others, can introduce hidden risks in
enterprise environments. Organizations relying on LLM-generated code should
perform thorough internal testing before deployment. Security layers such
as firewalls and antivirus also remain essential, as the model may produce
unpredictable or vulnerable output.
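
One cheap layer of that internal testing is an automated scan of
LLM-generated code before it is merged. The sketch below is illustrative
(its pattern list is nowhere near exhaustive); a real linter such as Bandit
does this job properly:

    import re, sys

    # Naive patterns for two issue classes the report mentions:
    # hard-coded secrets and injection-prone input handling.
    CHECKS = [
        (r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]",
         "possible hard-coded secret"),
        (r"subprocess\.(run|call|Popen)\(.*shell\s*=\s*True",
         "shell=True invocation; check for untrusted input"),
    ]

    def scan(path: str) -> int:
        findings = 0
        for lineno, line in enumerate(open(path, encoding="utf-8"), 1):
            for pattern, message in CHECKS:
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}")
                    findings += 1
        return findings

    if __name__ == "__main__":
        total = sum(scan(p) for p in sys.argv[1:])
        sys.exit(1 if total else 0)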

Biases baked into the model weights create a novel supply-chain risk that
could affect code quality and overall system security. 

======================================================================
Link to news story:
https://www.techradar.com/pro/deepseek-took-off-as-an-ai-superstar-a-year-ago-but-could-it-also-be-a-major-security-risk-these-experts-think-so

$$
--- SBBSecho 3.28-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426

