Forums before death by AOL, social media and spammers... "We can't have nice things"
|    comp.arch    |    Apparently more than just beeps & boops    |    131,241 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 130,002 of 131,241    |
|    Lawrence D’Oliveiro to BGB    |
|    Re: Crisis? What Crisis? (was Re: On Cra    |
|    17 Oct 25 22:20:49    |
From: ldo@nz.invalid

On Fri, 17 Oct 2025 15:32:39 -0500, BGB wrote:

> On 10/17/2025 2:03 AM, Lawrence D’Oliveiro wrote:
>>
>> On Thu, 16 Oct 2025 15:17:22 -0500, BGB wrote:
>>
>>> Also, the V extension doesn't even fit entirely in the opcode; it
>>> depends on additional state held in CSRs.
>>
>> I know, you could consider that a cheat in some ways. But on the other
>> hand, it allows code reuse, by having different (overloaded) function
>> entry points each do type-specific setup, then all branch to common
>> code to execute the actual loop bodies.
>>
> The SuperH also did this for the FPU:
> Didn't have enough encoding space to fit everything, so they sorta used
> FPU control bits to control which instructions were decoded.

That was probably not cost-effective for scalar instructions, because it
would turn a single operation into multiple instructions: operand-type
setup followed by the actual operation instruction.

Probably better for vector instructions, where one sequence of operand-
type setup lets the machine then chug away and process a whole sequence
of operand tuples in exactly the same way.

>>> Most use-cases for longer vectors tend to be matrix-like rather than
>>> vector-like. And cases that would appear suited to an 8-element
>>> vector are often handled well enough with two vectors.
>>
>> Back in the days of Seymour Cray, his machines were getting useful
>> results out of vector lengths up to 64 elements.
>>
>> Perhaps that was more a substitute for parallel processing.
>>
> Maybe. Just in my own experience, it seems to fizzle out pretty quickly.
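[For concreteness, the CSR-held type/length state discussed at the top of this message is what `vsetvli` writes in RISC-V V. A strip-mined loop in that style might look like the following — an illustrative, untested sketch in RVV 1.0 syntax, not code from either poster:]

```
# a[i] = b[i] + c[i] for a0 = n 32-bit elements; pointers in a1..a3.
# vsetvli writes the CSR state the posters mention: element width (e32),
# register grouping (m1), and the active vector length vl.
vvadd32:
    vsetvli t0, a0, e32, m1, ta, ma   # t0 = vl = min(n, VLMAX)
    vle32.v v0, (a1)                  # load a strip of b
    vle32.v v1, (a2)                  # load a strip of c
    vadd.vv v2, v0, v1                # element-wise add, under CSR state
    vse32.v v2, (a3)                  # store a strip of a
    sub  a0, a0, t0                   # n -= vl
    slli t1, t0, 2                    # vl * 4 bytes
    add  a1, a1, t1
    add  a2, a2, t1
    add  a3, a3, t1
    bnez a0, vvadd32                  # strip-mine until all elements done
    ret
```

[A 16-bit entry point would differ mainly in its `vsetvli` (e16) and load/store widths, and could then share the rest of the loop — the code-reuse argument made above.]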
Maybe that was just a software thing: the Cray machines had their own
architecture(s), which was never carried forward to the new massively-
parallel supers, RISC machines, etc. Maybe the parallelism was thought
to render deep pipelines obsolete -- at least in the early years. (*Cough*
Pentium 4 *Cough*)

Short-vector SIMD was introduced along an entirely separate evolutionary
path, namely that of bringing DSP-style operations into general-purpose
CPUs.

> It may not count for Cray though, since IIRC their vectors were encoded
> as memory addresses and they were effectively using pipelining tricks
> for the vectors.

Certainly if you look at the evolution of Seymour Cray’s designs, explicit
vectorization was for him the next stage after implicit pipelining, so the
two were bound to have underlying features in common.

> So, in this case, a truer analog of Cray-style vectors would not be
> variable-width SIMD that can fake large vectors, but rather a mechanism
> to stream the vector through a SIMD unit.

But short-vector SIMD can only deal with operands in lockstep. If you
loosen this restriction, then you are back to multiple function units and
superscalar execution.

>> Maybe it’s time to look beyond RGB colours. I remember some “Photo”
>> inkjet printers had 5 or 6 different colour inks, to try to fill out
>> more of the CIE space. Computer monitors could do the same. Look at the
>> OpenEXR image format that these CG folks like to use: that allows for
>> more than 3 colour components, and each component can be a float --
>> even single precision might not be enough, so they allow for double
>> precision as well.
>>
> IME:
> The visible difference between RGB555 and RGB24 is small;
> the difference between RGB24 and RGB30 is mostly imperceptible.
> Though, most modern LCD/LED monitors actually only give around 5 or 6
> bits per color channel (unlike the true analog on VGA CRTs, *).

First of all, we have some “HDR” monitors around now that can output a
much greater gradation of brightness levels. These can be used to produce
apparent brightnesses greater than 100%.

Secondly, we’re talking about input image formats. Remember that every
image-processing step is going to introduce some generational loss due to
rounding errors; therefore, the higher the quality of the raw input
imagery, the better the quality of the output.

Sure, you may think 64-bit floats must be overkill for this purpose; but
these are artists you’re dealing with. ;)

> Had noted though that, for me, IRL, monitors can't really represent
> real-life colors. Like, I live in a world where computer displays all
> have a slight tint (with a similar tint and color distortion also
> applying to the output of color laser printers, and a different color
> distortion for inkjet printers).

That is always true; “white” is never truly “white”, which is why those
who work in colour always talk about a “white point” for defining what is
meant by “white”: the colour of a perfect “black body” emitter at a
specific temperature (typically 5500K or above).

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
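[The bit-depth comparison in this message can be checked with simple arithmetic. A quick sketch — format names as used in the post; "step" here is the quantisation step per channel as a fraction of full scale:]

```python
# Distinct levels per colour channel for the formats compared in the post.
formats = {"RGB555": 5, "RGB24": 8, "RGB30": 10}
for name, bits in formats.items():
    levels = 1 << bits              # 2**bits representable values per channel
    step = 100.0 / (levels - 1)     # quantisation step, % of full scale
    print(f"{name}: {levels} levels/channel, step ~{step:.2f}% of full scale")
```

[RGB555's ~3.2% steps can be visible on a smooth gradient, while RGB24's ~0.4% steps mostly are not, which is consistent with the observation above.]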
(c) 1994, bbs@darkrealms.ca