

   alt.os.linux.gentoo      Stupid OS you gotta compile EVERYTHING      17,684 messages   


   Message 16,017 of 17,684   
   J.O. Aho to Aragorn   
   Re: when will 2007.1 be released? (1/2)   
   05 Jan 08 12:18:30   
   
   From: user@example.net   
      
   Aragorn wrote:   
   > J.O. Aho wrote:   
      
   > and all information on the subject I've managed to dig   
   > up suggests that a PCI videocard will always take precedence over a   
   > PCIe/AGP card as the primary video adapter.   
   > I was told by someone who builds computers himself that it's got something   
   > to do with the IRQ, in the sense that a lower IRQ number would take   
   > precedence, but I don't consider myself tech-savvy enough to corroborate or   
   > argue this statement.   
      
OK, you could try switching the IRQ numbering if your BIOS has that option; it used to be more common on those old PCI/ISA motherboards.
      
      
   > nVidia has received an as yet still open invitation to collaborate with the   
   > Xen developers, even without needing to open up their source code, and to   
   > my knowledge they have totally consistently been ignoring this invitation   
   > for over a year already so far.   
      
nVidia has been quite unhelpful; as I understand it, they haven't contributed anything to the nv driver (included in xorg/xfree), nor to the new driver that will include 3D support. Compare that with AMD (formerly ATi), who has given out information and code for the development of the xorg/xfree driver. They may even open up the code of their closed-source driver.
      
      
   > nVidia has always had better drivers than ATI, but with ATI now being owned   
   > by AMD and AMD opening up the driver code - including for older ATI   
   > chipsets - this balance may soon find itself radically tipped over to the   
   > ATI camp, and in that case, I will come to seriously regret that I opted   
   > for an nVidia adapter and their proprietary drivers in the first place,   
   > being a Free Software advocate myself.   
      
Yes, in the old ATi days I think there was only a small handful of part-time developers working on the driver, while nVidia had closer to 100 people working on theirs.
I have been using nVidia on most of my x86-based machines, as the driver is better, but on all my other machines (Sparc and PowerPC) there has never been any support, and nVidia dropped their PowerPC project when Apple dropped the PowerPC as the CPU in Macs, so I have gone with ATi cards on those, getting so-so hardware 3D support from the open source drivers.
      
      
   > (My motherboard has an nForce Professional chipset, but the specs for those   
   > are open.)   
      
The forcedeth driver was developed without any help from nVidia, but nVidia did drop their own network driver when they judged the forcedeth driver good enough. IMHO they haven't contributed much; they could do a lot more, but I guess they feel they are big enough not to care about 5% of the user market.
      
      
   > Certain ports will have to be forwarded to designated virtual machines - I   
   > will be running three or four of them, including /dom0/ - such as port 22   
   > for /ssh,/ port 80 for /http,/ the port range between 6660 and 6669 for an   
   > IRC server - there will also be additional needed ports for this but I   
   > don't know them from memory right now - and then most of the userspace   
   > ports will have to be forwarded to the X11 /domU/ machine.   
      
I would suggest you not use port 22 for ssh. When I did, I had a lot of script kiddies trying to brute-force their way into my system; even though it never worked, I didn't like the long log reports about all these attempts, so I switched to an alternative port.
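For example, with OpenSSH you move the daemon to a non-standard port in /etc/ssh/sshd_config (2222 here is just an example; pick any unused port above 1024):

```
# /etc/ssh/sshd_config -- example port only, use whatever is free
Port 2222
```

After restarting sshd you connect with `ssh -p 2222 host`; the kiddies scanning port 22 never see it.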
      
      
   > I'm sure it'll all be quite easy to accomplish for someone experienced   
   > in /iptables,/ but unfortunately I consider myself quite a novice in that   
   > area as I have thusfar always relied upon GUI utilities such as /webmin/   
   > for this purpose. :-/   
      
As long as you manage to get things running on virtual NICs, the NATing will be the easy part; you just need to follow one of the many HOWTOs you can find on the net.
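A minimal sketch of the forwarding rules, assuming dom0's public interface is eth0 and the server domU sits at 10.0.0.2 behind the bridge (both names are examples, not your setup):

```shell
# Forward incoming http (port 80) on the public interface to the domU
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 10.0.0.2:80
# Masquerade the domUs' outbound traffic behind dom0's address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# And make sure the kernel forwards packets at all
echo 1 > /proc/sys/net/ipv4/ip_forward
```

Repeat the PREROUTING rule per port (or use a range like --dport 6660:6669 for the IRC ports) and you are most of the way there.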
      
      
   >> One of the disadvantages I see is the high load on the domU while making   
   >> disk access, even if you have dedicated slices for them. I have been   
   >> thinking of testing KVM instead and see if it's kinder on the load or not,   
   >> but been too busy with other things.   
   >   
   > The above said, I doubt that the load inside /dom0/ - you wrote /domU/ but I   
   > take it you mean /dom0/ instead? - would really differ a lot when using KVM   
   > rather than Xen.  It is after all virtualization, which means that there   
   > are multiple operating systems, each with their respective I/O   
   > requirements, trying to access the same hardware simultaneously.  So I   
   > think the phenomenon is rather indigenous to the concept of virtualization   
   > itself rather than to the virtualization technology used.   
      
No, I mean domU; the dom0 has a low load.
The test I made was setting up 3 domUs running Apache and making approx 10,000 requests per second per domU.

With the disk image on a file, the load on each domU went up to 2.8, while dom0 was at 0.3.

With dedicated slices, the load on each domU went up to 0.9, while dom0 was at 0.2.

Doing the same kind of attack on the dom0 itself doesn't result in a load much higher than 0.15 (if I remember right).

This big difference in loads will make it difficult to write rules that prevent high loads.
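For reference, a test like mine is easy to reproduce with ApacheBench while watching the load average; 10.0.0.2 is just an example domU address:

```shell
# Hammer one domU with 100,000 requests, 50 concurrent
ab -n 100000 -c 50 http://10.0.0.2/
# Meanwhile, in another terminal, watch its load average
ssh 10.0.0.2 'cat /proc/loadavg'
```

Compare the domU's load figure against dom0's during the same run and you will see the difference I mean.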
      
      
   > By the way, I do indeed intend to give each operating system instance its   
   > own dedicated slices where possible/needed and use NFS where otherwise   
   > possible/recommended.   
      
If you want read-only access, then NFS has the advantage that it's the server that decides whether the client gets read or write access, while a dedicated slice can always be remounted as read/write. But usually a local hard drive is faster.
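If you go the NFS route, the server enforces this in /etc/exports; the hostnames and paths below are examples only:

```
# /etc/exports -- example entries, hostnames are placeholders
/usr/portage   domu1(ro,no_subtree_check) domu2(ro,no_subtree_check)
/home          domu1(rw,no_subtree_check)
```

A client can remount its side of /usr/portage read/write all it wants; the server still refuses the writes.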
      
      
   > Ergo, I will have to set up LVM(2)   
   > slices instead, which I've only done once before so far, but which also   
   > seems like a very interesting technology to me.  I do wonder as to how much   
   > this extra level of hardware abstraction will impede on performance,   
   > though.   
      
I have to say I haven't noticed any downside with my LVM volume that I share over NFS to all my machines (just a small 500G /home).
I have read a bit about ZFS and it seems interesting, but I don't want it as a userspace file system, which is what you get on Linux today.
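Setting up the LVM slices is only a few commands anyway; the device and volume names below are examples, adjust to your disks:

```shell
pvcreate /dev/sda3                  # mark the partition as an LVM physical volume
vgcreate vg0 /dev/sda3              # create a volume group on top of it
lvcreate -L 10G -n domu1-root vg0   # carve out one slice per domU
mke2fs -j /dev/vg0/domu1-root       # put a filesystem on it (ext3 here)
```

Growing a slice later is just lvextend plus a filesystem resize, which is the part that makes LVM interesting for virtual machines.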
      
      
   > Of course, in a virtualization environment where you have several virtual   
   > machines running quite dedicated server tasks, your partitioning layout may   
   > not need to be all that elaborate.  There's a huge chance that */opt* and   
   > */usr/local* - which I normally also split off from */usr* - will be empty   
   > anyway.  */usr/src* and */usr/portage* can be shared via NFS.  */tmp* can   
   > be a /tmpfs./  */usr* itself could even be shared over NFS in its entirety   
   > between minimally installed server VMs.  */boot* is unneeded by any virtual   
   > machine other than /dom0,/ et al.   
      
I never split out /usr/local, as it's always been more or less empty on all my installs; /usr/src I did split out when I went over to Gentoo. I try to split out those parts where you have a lot of writes, as I see it more likely that
      
   [continued in next message]   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca