   Message 155,412 of 156,682   
   Wilson to All   
   Something Big Is Happening (1/2)   
   18 Feb 26 13:52:55   
   
   From: Wilson@nowhere.invalid   
      
   By Matt Shumer • Feb 9, 2026   
      
   Think back to February 2020. If you were paying close attention, you   
   might have noticed a few people talking about a virus spreading   
   overseas. But most of us weren't paying close attention. The stock   
   market was doing great, your kids were in school, you were going to   
   restaurants and shaking hands and planning trips. If someone told you   
   they were stockpiling toilet paper you would have thought they'd been   
   spending too much time on a weird corner of the internet. Then, over the   
   course of about three weeks, the entire world changed. Your office   
   closed, your kids came home, and life rearranged itself into something   
   you wouldn't have believed if you'd described it to yourself a month   
   earlier.   
      
   I think we're in the "this seems overblown" phase of something much,   
   much bigger than Covid.   
      
   I've spent six years building an AI startup and investing in the space.   
   I live in this world. And I'm writing this for the people in my life who   
   don't... my family, my friends, the people I care about who keep asking   
   me "so what's the deal with AI?" and getting an answer that doesn't do   
   justice to what's actually happening. I keep giving them the polite   
   version. The cocktail-party version. Because the honest version sounds   
   like I've lost my mind. And for a while, I told myself that was a good   
   enough reason to keep what's truly happening to myself. But the gap   
   between what I've been saying and what is actually happening has gotten   
   far too big. The people I care about deserve to hear what is coming,   
   even if it sounds crazy.   
      
   I should be clear about something up front: even though I work in AI, I   
   have almost no influence over what's about to happen, and neither does   
   the vast majority of the industry. The future is being shaped by a   
   remarkably small number of people: a few hundred researchers at a   
   handful of companies... OpenAI, Anthropic, Google DeepMind, and a few   
   others. A single training run, managed by a small team over a few   
   months, can produce an AI system that shifts the entire trajectory of   
   the technology. Most of us who work in AI are building on top of   
   foundations we didn't lay. We're watching this unfold the same as you...   
   we just happen to be close enough to feel the ground shake first.   
      
      
   For years, AI had been improving steadily. Big jumps here and there, but   
   the jumps were spaced out enough that you could absorb them as they
   came. Then in 2025, new techniques for building these models unlocked a   
   much faster pace of progress. And then it got even faster. And then   
   faster again. Each new model wasn't just better than the last... it was   
   better by a wider margin, and the time between new model releases was   
   shorter. I was using AI more and more, going back and forth with it less   
   and less, watching it handle things I used to think required my expertise.   
      
   Then, on February 5th, two major AI labs released new models on the same   
   day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers   
   of Claude, one of the main competitors to ChatGPT). And something   
   clicked. Not like a light switch... more like the moment you realize the   
   water has been rising around you and is now at your chest.   
      
   I am no longer needed for the actual technical work of my job. I   
   describe what I want built, in plain English, and it just... appears.   
   Not a rough draft I need to fix. The finished thing. I tell the AI what   
   I want, walk away from my computer for four hours, and come back to find   
   the work done. Done well, done better than I would have done it myself,   
   with no corrections needed. A couple of months ago, I was going back and   
   forth with the AI, guiding it, making edits. Now I just describe the   
   outcome and leave.   
      
   Let me give you an example so you can understand what this actually   
   looks like in practice. I'll tell the AI: "I want to build this app.   
   Here's what it should do, here's roughly what it should look like.   
   Figure out the user flow, the design, all of it." And it does. It writes   
   tens of thousands of lines of code. Then, and this is the part that   
   would have been unthinkable a year ago, it opens the app itself. It   
   clicks through the buttons. It tests the features. It uses the app the   
   way a person would. If it doesn't like how something looks or feels, it   
   goes back and changes it, on its own. It iterates, like a developer   
   would, fixing and refining until it's satisfied. Only once it has   
   decided the app meets its own standards does it come back to me and say:   
   "It's ready for you to test." And when I test it, it's usually perfect.   
      
   I'm not exaggerating. That is what my Monday looked like this week.   
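   
   (If you're curious what that loop actually looks like structurally,
   here's a rough sketch in Python. Every name in it is a made-up
   stand-in for a model call or a sandboxed tool, not any real product's
   API; the point is only the shape: draft, self-test, revise, and
   repeat until the model's own check passes.)
   
      # Rough sketch of the draft/test/revise loop described above.
      # Every function is a hypothetical stand-in for a model call or
      # a sandboxed tool; no real product API is implied.
      from dataclasses import dataclass

      @dataclass
      class Feedback:
          satisfied: bool  # did the model judge its own work acceptable?
          notes: str       # what it wants changed on the next pass

      def generate_code(spec: str) -> str:
          """Stand-in: the model drafts the entire app from a spec."""
          raise NotImplementedError

      def self_test(code: str, spec: str) -> Feedback:
          """Stand-in: run the app in a sandbox, let the model click
          through it like a user would, and return its own verdict."""
          raise NotImplementedError

      def revise(code: str, feedback: Feedback) -> str:
          """Stand-in: the model fixes whatever it disliked."""
          raise NotImplementedError

      def build_app(spec: str, max_rounds: int = 10) -> str:
          code = generate_code(spec)
          for _ in range(max_rounds):
              verdict = self_test(code, spec)
              if verdict.satisfied:        # the model's own bar
                  return code              # "It's ready for you to test."
              code = revise(code, verdict)
          return code                      # budget spent; return best effort
   
   The human shows up only at the very top (the spec) and the very
   bottom (the final test). Everything in between is the model working
   on its own.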
      
   But it was the model that was released last week (GPT-5.3 Codex) that   
   shook me the most. It wasn't just executing my instructions. It was   
   making intelligent decisions. It had something that felt, for the first   
   time, like judgment. Like taste. That inexplicable sense of knowing what
   the right call is, the thing people always said AI would never have. This
   model has it, or something close enough that the distinction is starting   
   not to matter.   
      
   I've always been early to adopt AI tools. But the last few months have   
   shocked me. These new AI models aren't incremental improvements. This is   
   a different thing entirely.   
      
   And here's why this matters to you, even if you don't work in tech.   
      
   The AI labs made a deliberate choice. They focused on making AI great at   
   writing code first... because building AI requires a lot of code. If AI   
   can write that code, it can help build the next version of itself. A   
   smarter version, which writes better code, which builds an even smarter   
   version. Making AI great at coding was the strategy that would unlock
   everything else. That's why they did it first. My job started changing   
   before yours not because they were targeting software engineers... it   
   was just a side effect of where they chose to aim first.   
      
   They've now done it. And they're moving on to everything else.   
      
   The experience that tech workers have had over the past year, of   
   watching AI go from "helpful tool" to "does my job better than I do", is   
   the experience everyone else is about to have. Law, finance, medicine,   
   accounting, consulting, writing, design, analysis, customer service. Not   
   in ten years. The people building these systems say one to five years.   
   Some say less. And given what I've seen in just the last couple of   
   months, I think "less" is more likely.   
   "But I tried AI and it wasn't that good"   
      
   I hear this constantly. I understand it, because it used to be true.   
      
   If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff   
   up" or "this isn't that impressive", you were right. Those early   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   
