   From: marcov@stack.nl   
      
   On 2010-01-02, Rugxulo wrote:   
   > On Jan 2, 8:43 am, Marco van de Voort wrote:   
   >> On 2010-01-01, Rugxulo wrote:   
   >>   
   >> FPC uses v2; apparently I'm mistaken/outdated, since go32.exe is no
   >> longer in SVN
   >   
   > From the looks of it, FPC 1.0.10 used to perhaps support v1 also   
   > (source/rtl/go32v1 and source/rtl/go32v2).   
      
   FPC did support go32v1, but I never saw it run. When I joined in 1997/8,
   IIRC it was already v2. People have played with a bunch of other extenders
   too (PMODE, Watcom), but none really stuck.
      
   > However, nowadays it's fairly moot as NTVDM isn't regarded as having   
   > much of a future (even on 32-bit).   
      
   Well, it has been like that ever since w2000, which didn't notably improve
   it. That puts us in the 2000/2001 timeframe, and it hasn't stopped many
   people.

   This is why we never supported Dos on NT-based systems. We did commit
   decent fixes though, which is why it now at least runs half-decently on XP.
      
   > P.S. I could be wrong, but it seems go32v1 apps run fine on Vista but   
   > not on XP (you can use RSX, though). This is probably the only   
   > improvement I know of, offhand. ;-)   
      
   I've never run v1.   
      
   >> When I still used Dos, I used qemm, and had 64MB iirc. No problems with   
   >> fpc/go32v2.   
   >   
   > But did you use QDPMI or not? I think CWSDPMI is still considered   
   > better.   
      
   Yes.   
      
   CWSDPMI, maybe, but Qemm never gave me much trouble after v5, and one
   configuration worked relatively well: 4dos + hyperdisk + qemm, plus quite a
   bit of DV/X, was my setup at the time.
      
   In practice I only had one other boot option, with only himem and 4dos.
   Since nearly all programs that actually used a lot of memory could use
   memory above the conventional area and were at least 286-extended,
   optimizing for maximum conventional memory beyond having 600 KB free was
   not terribly interesting.
      
   >> Sure, but since they are not memory mapped, they have to be loaded first,   
   >> and only then paged out. And that doesn't matter for UPX, if the pages in   
   >> mem were compressed before or not.   
   >   
   > Well, there are other DOS extenders that were demand loaded (e.g.   
   > MOSS, I think), but those were rare. And even that was mostly written   
   > to support a commercial game on 486s, so you kinda had to have   
   > something (as the main .EXE had data combined with it for a 17 MB   
   > whopper).   
      
   I can imagine a separate Dos extender working like a pager. But AFAIK Dos
   itself always loads the binary whole? OTOH, that could be avoided by having
   a PE-like format (where Dos also loads only the stub).
      
   But to be honest, I don't really know in detail how this works in dos.   
      
   >> > UPX is probably more reliable than DoubleSpace or Stacker,   
   >>   
   >> Well, I never had a antivirus balking at stacker. In dos times, I kept the   
   >> sources that were for reference only on a small stacker drive.   
   >   
   > I think various things would have issues, but I never bothered finding   
   > out. The compression ratio is probably subpar anyways   
      
   Of course. But there are reasons, roughly the same reasons why I only used
   solid archives for long-term archival purposes.
      
   > (although I never converted all my .ZIPs to .7z either, so ...). Also not   
   > good for dual booting.   
      
   I converted them all to .Q at some point. It was all the rage back then.
      
   >> I kept one 1GB partition FAT16, and the rest FAT32 for a long time, till   
   >> *nix FAT32 support was mature enough.    
   >   
   > MS patents on FAT32 still exist, which is annoying.   
      
   So I'll create my partition with my licensed XP/Vista. Big deal :-)   
      
   > Last I heard, Linux moved to a "read-only" hack, then made it where it   
   > would only save the LFN part, not SFN, to avoid problems. Kinda silly in   
   > this day and age, but oh well.   
      
   AFAIK that is mostly Debian and its followers, not Linux as a whole. But it
   is a problem indeed. And history seems to be repeating with the now-proposed
   fatex/fatplus. A different solution is to use SUSE, or anything else from
   Novell.
      
   IOW the FAT patents are aimed more at the Sony/Philips/photo-camera
   manufacturers than at Linux.
      
   We can only hope that at some point in the future the MS IP portfolio costs
   more than it brings in, and they drop it a while later. The usual Debian
   holier-than-thou attitude will probably mean that FAT32 support stays a
   post-install extra for a good while longer.
      
   > BTW, what I mainly meant was that several DOSes (e.g. official DR-DOS)   
   > never had FAT32 support (and mine never came with any beta TSRs   
   > either), so you're kinda stuck. In short, never ever use FAT16 with >   
   > 512 MB. I should've split mine up into several FAT16 partitions so   
   > that it would be 8k clusters instead of 16k (horrible for lots of   
   > small files).   
      
   I don't think I have used a non-MSDOS7 version after, say, '98. And even
   those occasions were the exception (gaming and tests mostly); usually it
   was under w9x.
      
   When I used w2000 more and more at work, I started to replace more and more
   tools with Win32 versions, and a few years later (2002 or so) chucked w9x
   altogether. It helped that the license was my employer's, though, since w2k
   was expensive at the time.
      
   >> On fast computers, I never saw TP code outperform the same code in 32-bit if   
   >> the code was non-trivial.   
   >   
   > Well, x86-64 has no 16-bit, so DOSEMU has to emulate it.   
      
   (Or chuck 16-bit; not terribly painful. My main desktop is 64-bit nowadays.
   OTOH, while I'm writing this, I'm programming an 8K, 40 MIPS microprocessor
   in GCC.)
      
   > But you're right, even on 486s, 32-bit code is faster because that was the   
   > design intention. So there usually is no comparison. Doesn't mean there   
   > aren't corner cases, but for the most part it does run pretty fast. DJGPP   
   > has never been considered slow.   
      
   All the 486s that ran 32-bit code here were secondary machines that
   typically ran Linux or BSD. I only had Dos on my Cyrix P166+ main desktop.
      
   >> You only say that because you know I threw out my 386SX25 last year :-)   
   >   
   > My 486 still runs (barely), so if you have any reasonable benchmarks,   
   > I'll try them for ya.   
      
   The only interest I would have (if I suddenly got unlimited time) is to
   test the FPU-emulation situation. But it could be that 2.4.0 now defaults
   to PPro (cmov).
      
   >> I'd guess that FPC speed is roughly GCC minus two specific GCC problems:   
   >> - the C disease to reparse headers again and again   
   >> - IIRC FPC/go32v2 has AS built in, so no need to call a separate assembler.   
   >   
   > Precompiled headers are pretty much unsupported in DOS, last I heard.   
      
   TC++ probably does fine.   
      
   As far as DJGPP/GCC goes, does GCC support them at all? ccache can cache
   the I/O time of finding/loading headers, but that is not the same as
   precompiled headers AFAIK.
      
   > DJGPP's GCC doesn't support -pipe (although EMX under OS/2 does,   
   > IIRC)   
      
   OS/2 does have decent pipes in principle, like cousin NT.   
      
   > As you probably know, OpenWatcom has a built-in assembler, and it   
   > directly outputs to .OBJ, so usually it's much faster than GCC (e.g.   
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   