From: punditster@gmail.com   
      
   On 2/18/2026 11:15 AM, Noah Sombrero wrote:   
   > On Wed, 18 Feb 2026 14:06:31 -0500, Noah Sombrero    
   > wrote:   
   >   
   >> On Wed, 18 Feb 2026 13:52:55 -0500, Wilson    
   >> wrote:   
   >>   
   >>> By Matt Shumer • Feb 9, 2026   
   >>>   
   >>> Think back to February 2020. If you were paying close attention, you   
   >>> might have noticed a few people talking about a virus spreading   
   >>> overseas. But most of us weren't paying close attention. The stock   
   >>> market was doing great, your kids were in school, you were going to   
   >>> restaurants and shaking hands and planning trips. If someone told you   
   >>> they were stockpiling toilet paper you would have thought they'd been   
   >>> spending too much time on a weird corner of the internet. Then, over the   
   >>> course of about three weeks, the entire world changed. Your office   
   >>> closed, your kids came home, and life rearranged itself into something   
   >>> you wouldn't have believed if you'd described it to yourself a month   
   >>> earlier.   
   >>>   
   >>> I think we're in the "this seems overblown" phase of something much,   
   >>> much bigger than Covid.   
   >>   
   >> Right, it happened before, so be perpetually afraid. Surely it will
   >> happen again, sometime, but none of us, zero, zilch, know when it will
   >> happen again or why.
   >   
   > It is the difference between the unforeseen adversity and the foreseen
   > one. For the foreseen one, deny any such possibility; for the
   > unforeseen one, don't force me to take part in any real remedy. Phony
   > ones, sure.
   >   
   > But after the adversity, remain scared. The same might happen
   > tomorrow, so listen to what I am telling you.
   >   
   Past progress is no guarantee of future success.   
      
   >>> I've spent six years building an AI startup and investing in the space.   
   >>> I live in this world. And I'm writing this for the people in my life who   
   >>> don't... my family, my friends, the people I care about who keep asking   
   >>> me "so what's the deal with AI?" and getting an answer that doesn't do   
   >>> justice to what's actually happening. I keep giving them the polite   
   >>> version. The cocktail-party version. Because the honest version sounds   
   >>> like I've lost my mind. And for a while, I told myself that was a good   
   >>> enough reason to keep what's truly happening to myself. But the gap   
   >>> between what I've been saying and what is actually happening has gotten   
   >>> far too big. The people I care about deserve to hear what is coming,   
   >>> even if it sounds crazy.   
   >>>   
   >>> I should be clear about something up front: even though I work in AI, I   
   >>> have almost no influence over what's about to happen, and neither does   
   >>> the vast majority of the industry. The future is being shaped by a   
   >>> remarkably small number of people: a few hundred researchers at a   
   >>> handful of companies... OpenAI, Anthropic, Google DeepMind, and a few   
   >>> others. A single training run, managed by a small team over a few   
   >>> months, can produce an AI system that shifts the entire trajectory of   
   >>> the technology. Most of us who work in AI are building on top of   
   >>> foundations we didn't lay. We're watching this unfold the same as you...   
   >>> we just happen to be close enough to feel the ground shake first.   
   >>>   
   >>>   
   >>> For years, AI had been improving steadily. Big jumps here and there,
   >>> but they were spaced out enough that you could absorb each one as it
   >>> came. Then in 2025, new techniques for building these models unlocked a
   >>> much faster pace of progress. And then it got even faster. And then   
   >>> faster again. Each new model wasn't just better than the last... it was   
   >>> better by a wider margin, and the time between new model releases was   
   >>> shorter. I was using AI more and more, going back and forth with it less   
   >>> and less, watching it handle things I used to think required my expertise.   
   >>>   
   >>> Then, on February 5th, two major AI labs released new models on the same   
   >>> day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers   
   >>> of Claude, one of the main competitors to ChatGPT). And something   
   >>> clicked. Not like a light switch... more like the moment you realize the   
   >>> water has been rising around you and is now at your chest.   
   >>>   
   >>> I am no longer needed for the actual technical work of my job. I   
   >>> describe what I want built, in plain English, and it just... appears.   
   >>> Not a rough draft I need to fix. The finished thing. I tell the AI what   
   >>> I want, walk away from my computer for four hours, and come back to find   
   >>> the work done. Done well, done better than I would have done it myself,   
   >>> with no corrections needed. A couple of months ago, I was going back and   
   >>> forth with the AI, guiding it, making edits. Now I just describe the   
   >>> outcome and leave.   
   >>>   
   >>> Let me give you an example so you can understand what this actually   
   >>> looks like in practice. I'll tell the AI: "I want to build this app.   
   >>> Here's what it should do, here's roughly what it should look like.   
   >>> Figure out the user flow, the design, all of it." And it does. It writes   
   >>> tens of thousands of lines of code. Then, and this is the part that   
   >>> would have been unthinkable a year ago, it opens the app itself. It   
   >>> clicks through the buttons. It tests the features. It uses the app the   
   >>> way a person would. If it doesn't like how something looks or feels, it   
   >>> goes back and changes it, on its own. It iterates, like a developer   
   >>> would, fixing and refining until it's satisfied. Only once it has   
   >>> decided the app meets its own standards does it come back to me and say:   
   >>> "It's ready for you to test." And when I test it, it's usually perfect.   
   >>>   
   >>> I'm not exaggerating. That is what my Monday looked like this week.   
   >>>   
   >>> But it was the model that was released last week (GPT-5.3 Codex) that   
   >>> shook me the most. It wasn't just executing my instructions. It was   
   >>> making intelligent decisions. It had something that felt, for the first   
   >>> time, like judgment. Like taste. The inexplicable sense of knowing what   
   >>> the right call is that people always said AI would never have. This   
   >>> model has it, or something close enough that the distinction is starting   
   >>> not to matter.   
   >>>   
   >>> I've always been early to adopt AI tools. But the last few months have   
   >>> shocked me. These new AI models aren't incremental improvements. This is   
   >>> a different thing entirely.   
   >>>   
   >>> And here's why this matters to you, even if you don't work in tech.   
   >>>   
   >>> The AI labs made a deliberate choice. They focused on making AI great at   
   >>> writing code first... because building AI requires a lot of code. If AI   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   