Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.os.development    |    Operating system development chatter    |    4,255 messages    |
|    Message 2,771 of 4,255    |
|    mutazilah@gmail.com to anti...@math.uni.wroc.pl    |
|    Re: PDOS/86 (1/2)    |
|    05 Aug 21 22:01:51    |
From: muta...@gmail.com

On Friday, July 23, 2021 at 5:32:07 AM UTC+10, anti...@math.uni.wroc.pl wrote:

> > > > It may be cleaner/easier, but it defeats the purpose for
> > > > what segmentation was created for. Intel didn't come
> > > > up with a segmented processor for fun, and they didn't
> > > > use a 4-bit shift because they'd been smoking wacky
> > > > weed. It was the absolute correct engineering solution.
> >
> > > I wonder what, in your opinion, was the purpose of segmentation?
> >
> > To allow multiple tiny-mode applications to run, without
> > requiring alignment on a 64k address boundary, thus
> > wasting space.

> That has at least 3 different solutions: relocation (base)
> register, paging and position-independent code. Note that

Thanks for explaining the underlying theory to me!!!

> relocation register is set up by the operating system, so you can
> enlarge it without change to the application (of course the operating
> system must know (possibly compute) the size of the relocation
> register). For many years paging has been the preferred method to
> run multiple applications.

Ok. But that wasn't available on the 8086. Maybe there was
some way "fake paging" could have been created. Regardless,
we ended up with the "relocation (base) register" option, which
is presumably what a segment register is considered to be.

> > As opposed to what alternative for large memory model
> > 8086 programs? There's some advantage to restricting
> > them to 1 MiB instead of letting them fly on an 80386?

> Simple fact is that a program which has more than 1M of code
> will not run on a 1M machine.

This is true, but I would have stopped right here. There is
nothing special about 1 M. The problem could be restated
as 2 M instead.
The proper thing to do is abstract the
situation right here.

> More generally, if a program
> really needs more than 1M of data, it will not run on a 1M
> machine. In such a case, why bother with segments?

Because the Norks may produce an 8086+ with 5-bit
segment shifts, giving you 2 M, tomorrow. For no change
whatsoever to the application program.

> What remain are programs that are small enough to fit
> in 1M and do some useful work and which can also do
> useful work on larger datasets. IME a significant percentage
> of such programs involved a largish array; they would fail
> with large data in the large model.

There is a more significant percentage that WORK with
the large memory model, which is why they WORK, even
with 1 M.

> At best you could
> use the huge model, at the cost of significant slowdown.

Huge is very rare. Turbo C++, a very popular compiler,
doesn't even generate suitable code.

> So you deal with a small class of programs. And apparently
> you did not realize that if you _have to_ deal with
> segments (say for compatibility with 8086), then what
> the 286 and 386 did is much better.

I don't know what you are talking about. The usage of
segments that I described seems to be the most
appropriate solution.

Are you talking about effectively having two executables
combined into one? I don't consider that to be superior
to the design I outlined.

> [S/360]
> In the instruction set, IBM missed PC-relative instructions

Is that really a problem? I don't see anything particularly
wrong with the assembler generated by GCC.

> and the usefulness of larger immediates.

Ditto. Yes, you can pile on loads of features, but are they
really necessary? To achieve what purpose?
You think
S/360 applications would be 10% faster if IBM had thought
of this? 50% faster?

> In software conventions
> IBM underestimated or overlooked the usefulness of a stack.

Again, I know how the (effective) stack works on S/360,
and it seems perfectly fine to me.

I don't know what problem you are trying to address.

> OTOH, when the 360 was designed, it was not clear if the concept
> of a series of compatible machines was feasible. I mean,
> it sounds nice, but people make errors and it was not
> clear that inevitable errors would not destroy compatibility.
> There was also the question of economic cost. At the hardware
> level, the 360/30 is a machine with 16-bit registers and
> 8-bit memory having a 1.5us cycle. 360 instructions are
> interpreted by microcode. Microcode runs with a 750ns
> cycle from ROM. With different microcode it would be a
> 16-bit mini and could probably execute 3-5 times
> more instructions than as a 360. So there was a nontrivial
> cost of compatibility.

Can you elaborate a bit more on this? You said it was
"not clear" if "compatible machines" could be made -
what was the result? Did we get compatible machines
or not?

> With hindsight, the concept of CKD discs was a non-starter:
> CKD support led to complicated software,

What's wrong with everyone using RECFM=U, BLKSIZE=6233?

> and consequently
> discs were hard to use on small machines (disc access
> routines did not fit in RAM).

How much memory are we talking about? And couldn't
the disk access routines in question be put into ROM?

> You may pretend to find something from first principles,
> but the fact is that many early predictions were wrong.
> It seems that most pioneers underestimated the complexity
> of software, its size and the effort needed to create it.
> Also, at all times specific properties/restrictions of
> hardware play a role. Do you think that anybody would
> bother with the complexity of programming an 8-core processor
> if they could get a single core that is 8 times faster
> and of comparable power and area to 8 cores?

Sure. But isn't this something that could have been
tackled as a theoretical problem, back in 1950 or
whatever, in case we ever encountered multiple CPUs/cores?

> In principle the theory of parsing, finite automata, etc.
> could have been developed, say, around 1910.

Thanks for giving me a timeframe!!!

> But pragmatics
> requires experience: you need to know that it is
> useful to describe programming languages using
> context-free grammars, you need experience to
> know that LL(1) and LALR work for practical languages.

I looked that up:

https://en.wikipedia.org/wiki/LALR_parser

But it is above my head. That's OK though. I just wanted to
know the barrier - as you said, you need experience (feedback).
You can't develop these things in a vacuum. Babbage couldn't
have done it himself.

Almost all of my work was only possible because of feedback
from other people. If I had been born and plonked on a desert
island, the only thing I would have been able to come up with
is coconut recipes.

> > > The only remaining problem is that
> > > OS data structure
> >
> > That doesn't matter. If you directly access those things
> > (and you're sure that is a supported thing to do), it is

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca