
Just a sample of the Echomail archive


 Message 1912 
 Mike Powell to All 
 Sentient AI fantasies... 
 04 Nov 25 09:19:23 
 
TZUTC: -0500
MSGID: 1669.consprcy@1:2320/105 2d6f47f0
PID: Synchronet 3.21a-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
TID: SBBSecho 3.28-Linux master/123f2d28a Jul 12 2025 GCC 12.2.0
BBSID: CAPCITY2
CHRS: ASCII 1
FORMAT: flowed
Microsoft's AI boss is right: sentient AI fantasies aren't just impossible,
they're irrelevant

Date:
Tue, 04 Nov 2025 09:53:47 +0000

Description:
Microsoft's AI boss is pushing back against the illusion of sentient machines,
arguing that chasing consciousness is a distraction from building truly
helpful AI.

FULL STORY

Microsoft AI CEO Mustafa Suleyman's opinions on the shape and direction of AI
development carry some weight, which is why it felt like a breath of fresh air
to hear him say that AI cannot achieve consciousness and that pursuing it
misunderstands the point of the technology. 

The idea of Frankenstein-ing sentience into AI chatbots gets a lot of buzz,
but Suleyman's comments at the recent AfroTech Conference dismissed the very
idea of artificial consciousness as starting from a false premise. 

If you ask the wrong question, he said, you end up with the wrong answer. 
And, in his view, asking whether AIs can be conscious is a textbook example 
of the wrong question. 

Pushing back on the breathless speculation about artificial general
intelligence (AGI), or on claims that ChatGPT has achieved self-awareness, is
something more people in AI with authority on the subject should be doing. 

Not that Suleyman is against building new and better AI models. He just
believes it's better to focus on making AI into useful tools for people, not
pretending we're nurturing a digital Pinocchio into a real boy. 

The distinction between AI performing well and AI being aware is crucial,
because pretending there's a spark of real self-awareness behind the
algorithms is distracting, and possibly even dangerous if people start
treating these fancy auto-completes as if they were capable of introspection.

'Smart' doesn't mean 'thinking' 

As Suleyman pointed out, it's possible to actually see what the model is 
doing when it mimics emotions and feelings. They don't have hidden internal
lives. We can watch the math happen. We can trace the input tokens, the
attention weights, and the statistical probabilities as the sausage gets 
made. And nowhere in that pipeline is there a mechanism for subjective
experience. 
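
As a concrete illustration of "watching the math happen," here is a minimal
sketch that prints a small open model's next-token probabilities and its
attention weights. The Hugging Face transformers library and the "gpt2" model
are assumptions chosen only because they are freely available; nothing here is
specific to the systems Suleyman was discussing.

  # Minimal sketch (assumed tooling: Hugging Face transformers + PyTorch).
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  inputs = tokenizer("The coin trick is", return_tensors="pt")
  with torch.no_grad():
      out = model(**inputs, output_attentions=True)

  # The "statistical probabilities": an ordinary, inspectable distribution
  # over the next token.
  probs = torch.softmax(out.logits[0, -1], dim=-1)
  top = torch.topk(probs, 5)
  for p, idx in zip(top.values, top.indices):
      print(f"{tokenizer.decode([int(idx)])!r:>12}  {p.item():.3f}")

  # The attention weights come back as plain tensors, one per layer --
  # no hidden inner life, just numbers you can print.
  print(len(out.attentions), tuple(out.attentions[0].shape))

Every number in that pipeline can be logged and audited, which is exactly the
transparency the argument above relies on.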

Dwelling on the mistaken belief that simulated emotions are the real thing is
already a waste of effort. But when we start responding to machines as if they
were human and anthropomorphizing them, we can lose track of reality. 

People calling a chatbot their best friend, therapist, or even their romantic
partner isn't more of a crisis than treating a fictional character or
celebrity who's never met you as an important part of your life. But having a
true breakdown over a tragic end to your favorite character in a novel or
changing your life to match a fad promoted by a celebrity would be rightly
considered concerning. The same worries should arise when a user starts
attributing suffering to a chatbot. 

That's not to say a simulated personality isn't useful. Quite the opposite: a
little personality can make tools more engaging, more effective, and more fun.
But the focus should be on the user's experience, not the illusion of the
tool's inner life. 

The real frontier of AI isn't "how close can we get to making it seem alive?"
It's "how do we make it actually useful?"

There's still plenty of mystery in AI development. These systems are complex,
and we don't fully understand every emergent behavior. But that doesn't mean
there's a mind hiding in the wires. The longer we continue to treat
consciousness as the holy grail, the more the public is misled. 

It would be like seeing a magician pull a coin from your ear and deciding
he's truly conjured the cash from nothingness and is therefore an actual
sorcerer. That would be an over-the-top misunderstanding of what happened.
AI chatbots pulling off sleight of hand (or code) is a good trick, but it's
not really magic.

======================================================================
Link to news story:
https://www.techradar.com/ai-platforms-assistants/microsofts-ai-boss-is-right-sentient-ai-fantasies-arent-just-impossible-theyre-irrelevent

$$
--- SBBSecho 3.28-Linux
 * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 105/81 106/201 128/187 129/14 305 153/7715 154/110 218/700
SEEN-BY: 226/30 227/114 229/110 206 300 307 317 400 426 428 470 664
SEEN-BY: 229/700 705 266/512 291/111 320/219 322/757 342/200 396/45
SEEN-BY: 460/58 633/280 712/848 902/26 2320/0 105 304 3634/12 5075/35
PATH: 2320/105 229/426


