Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.os.development    |    Operating system development chatter    |    4,255 messages    |
|    Message 3,379 of 4,255    |
|    mutazilah@gmail.com to anti...@math.uni.wroc.pl    |
|    Re: segmentation (1/2)    |
|    30 Oct 22 12:13:41    |
From: muta...@gmail.com

On Saturday, October 29, 2022 at 7:04:14 AM UTC+8, anti...@math.uni.wroc.pl wrote:

> > Sorry for my bad English. I expect the function itself to be
> > less than 64k in size.
> >
> > > In principle the compiler can split a
> > > function into multiple segments.
> >
> > It is the linker I am worried about. I believe it needs to add
> > padding (x'00') before placing the code in the executable
> > if the executable size is going to cross a 64k boundary
> > while laying out the function (or maybe the object file - I'm
> > not sure).

> That is a routine problem. For the 8086, the linker "knew" that a
> segment must start at an address divisible by 16 and arranged the
> executable so that this was true. On the 286 (and 386) a segment can
> start at an arbitrary address, so there is no extra problem. In fact,
> for various reasons addresses divisible by 16 are preferable, so
> placement done for the 8086 will work. Of course, if you compiled code
> for the 8086 you will have a 1M limit on code, because in C you cannot
> dynamically create new code, so only data can benefit from the bigger
> address space.

Oh, that's how they did it.

However, that won't work if, instead of an 80286, I
am running on an 8086-5 with 5-bit segment shifts,
or on an 8086-16 with 16-bit segment shifts.

For that I believe I need the linker to
guarantee that a function (or maybe an object file?)
doesn't cross a 64k boundary.

Also, that's a more complex 80286 implementation
than I want.

I want to line up the segments consecutively, creating
a single simple address space like I have in MSDOS.

> > Also, this assumes that the hardware progressed at
> > exactly the pace it had.
> > If there had been a 200 year
> > delay between the 80286 and the 80386, and a 500
> > year delay between the 8086 and the 80286, how would
> > you develop the software?

> I would code for the 68000.

> > Even though the 8086 was going to be there for 500
> > years, it doesn't mean the clock speed wasn't going
> > to change. It may have been fast enough to enable
> > some solutions.

> There are rules for chips which were observed in the sixties
> (and stated as "Moore's law") which basically say that
> both the number of elements and the clock frequency double
> every few years.

That's not a formal law, and software people
shouldn't have been relying on such a thing.

> > > Well, passing from a linear address to disc geometry requires
> > > division. In 1982 a disc contained no processor, so the disc
> > > had to get CHS info from the OS. The BIOS interface in theory
> > > could accept linear addresses and turn them into CHS.
> > > But IIRC division on the 8086 took 150 clocks and you need
> > > two of them for the conversion. That 300 clocks is equivalent
> > > to something like 50 simpler instructions - at that time,
> > > quite a significant cost.
> >
> > Significant compared to waiting for a disk head to seek
> > and retrieve the desired sector? Are you sure?

> There is no guarantee that you need a seek. The BIOS in general
> was rather slow, so this was probably premature optimization.

And maybe they could have provided both options,
for people who had theoretical applications
that wanted to avoid the division inherent in LBA.

> But if you want an efficient system you need to start somewhere,
> and using CHS is a "cheap" optimization.
No, that caused a lot of pain for programmers,
for dubious benefit; and, at least in hindsight,
machines got faster processors before "efficient"
systems that bypassed the BIOS became a thing.

> You underestimate the trouble of this approach. And I think you
> miss a big advantage of C: C could generate reasonably good code
> _without_ an optimizing compiler.

Thanks for that information.

And before I forget - you mentioned that some people
said that a formal grammar for C was impossible - why
did they say that, and why were they wrong?

I assume a formal grammar is important so that a
compiler can partly be generated automatically.

> Now, if you ask about optimization for C, note that each language
> has its own issues. IBM had experience with Fortran (Fortran H
> should produce quite efficient code from the equivalent of my first
> snippet above), and also with PL/I. But it took several years
> to develop optimizations for PL/I. There was also the question of
> using optimized high-level code for system/kernel work. IBM
> had a special low-level variant of PL/I which they apparently
> treated as a "secret weapon" (the compiler was not available outside
> IBM and the available description was vague). The Multics folks had
> an optimizing compiler for PL/I, but again it was proprietary.

Sorry - what is this question about kernel work?

I don't think the kernel needs to be optimized.
99% of CPU time should be spent running the
application code, not doing kernel calls.

> To put it differently, getting compiler experience required hard
> work, and incentives and conditions for such work appeared only
> in the eighties, when C was popular enough.
> Trying to do the needed
> work on mainframes had almost no incentives: mainframe time
> was expensive and unavailable to most C developers. And if
> you wanted to compile on a mainframe, then other languages could
> be preferable and possibly more accessible. For example, there
> was a reasonably good Pascal compiler available with source.
> Having the source, it was reasonable to re-target it to generate
> code for micros and PCs (but AFAICS most Pascal development on
> micros/PCs was native).

That's interesting. So there was an issue. Someone
had made a Pascal compiler available with source,
but no one had made a C compiler available with
source? That's just the way it turned out; it could
have been the reverse?

I would like to have seen SubC written earlier too.

Given that SubC was written by a single guy, and is
6000 lines of code, and given that the US government
was apparently splashing money around, why didn't
they hire someone like Nils for a year to write a
public domain C compiler - ie SubC - once and for all?

I guess that's what I really want. I want to see
PDOS-generic and SubC written by someone employed
by some government in the 1950s, setting the
industry up for the future.

> > We may have been stuck with 128k of memory until the year 2300.
> > Or 2 MB. If 2 MB had been some sort of magical barrier for
> > decades, maybe we would have seen a 5-bit shift after all.
> >
> > I don't want software development to be tied down to the same
> > timeline as hardware development.

> You are dreaming; that is your right. But simply, in 128k you
> can do some things, in fact quite a lot. But you cannot do

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate...
all the things you hate (1:229/2)    |
(c) 1994, bbs@darkrealms.ca