Forums before death by AOL, social media and spammers... "We can't have nice things"
|    alt.os.linux.gentoo    |    Stupid OS you gotta compile EVERYTHING    |    17,684 messages    |
[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]
|    Message 16,016 of 17,684    |
|    Aragorn to J.O. Aho    |
|    Re: when will 2007.1 be released? (1/2)    |
|    05 Jan 08 04:30:41    |
From: aragorn@chatfactory.invalid

J.O. Aho wrote:

> Aragorn wrote:
>
>> It's just that the Live CD attempted to start up X11 - which I had not
>> expected it to do - and then went a little crazy on the fact that there
>> is a PCI Radeon card (which always takes precedence as the primary
>> videocard) and a PCIe GeForce card.
>
> At least on older machines you could set the primary graphics card in the
> BIOS, wouldn't surprise me if there is a PCIe/PCI option in your BIOS.

Trust me, there isn't one, nor are there any jumpers on the motherboard to
select a primary graphics adapter - the motherboard doesn't even have
onboard graphics - and all the information on the subject I've managed to
dig up suggests that a PCI videocard will always take precedence over a
PCIe/AGP card as the primary video adapter.

I was told by someone who builds computers himself that it has something
to do with the IRQ, in the sense that a lower IRQ number takes precedence,
but I don't consider myself tech-savvy enough to corroborate or dispute
that claim.

>> As for nVidia releasing a Xen-compatible driver - or let us have a wild
>> dream for a moment: having them open up their source code - that is not
>> going to happen. The best they will do is offer a driver that no longer
>> needs to be patched in order to run it inside a GNU/Linux system in a
>> Xen /dom0./
>
> No, they will not open up the source and the majority of distros won't
> have it as a default driver due the license.

nVidia has received an as yet still open invitation to collaborate with
the Xen developers, even without opening up their source code, and to my
knowledge they have consistently ignored this invitation for over a year
now.
On the other hand, it would appear that they have at least been silently
monitoring some of the Xen development and are now at least acknowledging
that there is such a thing as Xen and that virtualization really exists,
through their recommended use of a compile-time "IGNORE_XEN" variable.

nVidia has always had better drivers than ATI, but with ATI now owned by
AMD, and AMD opening up the driver code - including for older ATI
chipsets - this balance may soon tip radically toward the ATI camp. In
that case, I will come to seriously regret having opted for an nVidia
adapter and their proprietary drivers in the first place, being a Free
Software advocate myself.

My stance was that no usable 3D acceleration was available from FOSS
drivers for any fairly recent videocard, and that it was thus ethically
permissible, by sheer necessity, to use a proprietary video driver. With
AMD opening up their driver code now, I find myself "committing a sin"
against my own principles by using nVidia. :(

(My motherboard has an nForce Professional chipset, but the specs for
those are open.)

>> In addition, I have very little knowledge of iptables and routing, and
>> I'll have to set up everything using a custom routing table, as I'll
>> only have one public IP address, but all virtual machines should have
>> access to the internet. For /dom0,/ this only need be /ssh/ access,
>> but I might set up a /ssh/ DMZ inside one of the other virtual
>> machines, so that one must /ssh/ into that virtual machine first and
>> then from there /ssh/ into the /dom0./ This is probably the wisest
>> solution. ;-)
>
> I did setup Xen on a test machine running CentOS, had no need of
> configuring iptables rules to access internet with domU.
> Of course if you want to protect ports, you have to do something. I
> would have recommended you to take a look at FireStarter, but it hasn't
> been maintained for a while and has some troubles with later kernels
> and gnome2 libs.

My main needs are the following: the /domU/ machines must be able to
access the internet, but as I will have only one public IP address
available, it'll have to be done through NAT/routing. Firewalling will
need to be taken care of in a VM-specific manner, depending on the
individual needs of each virtual machine.

Certain ports will have to be forwarded to designated virtual machines -
I will be running three or four of them, including /dom0/ - such as port
22 for /ssh,/ port 80 for /http,/ and the port range from 6660 to 6669
for an IRC server - there will also be additional ports needed for that,
but I don't know them from memory right now - and then most of the
userspace ports will have to be forwarded to the X11 /domU/ machine.

In addition, the motherboard has a second onboard Gigabit NIC, which will
be connected to my switch, and through which the other machines on my
network must be able to connect to the internet via NAT on the same
machine that runs the virtual machine set-up.

I'm sure it'll all be quite easy to accomplish for someone experienced
in /iptables,/ but unfortunately I consider myself quite a novice in that
area, as I have thus far always relied upon GUI utilities such
as /webmin/ for this purpose. :-/

> One of the disadvantages I see is the high load on the domU while
> making disk access, even if you have dedicated slices for them. I have
> been thinking of testing KVM instead and see if it's kinder on the load
> or not, but been too busy with other things.
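For what it's worth, a minimal sketch of that NAT/port-forwarding set-up
in /iptables/ might look like the following. The interface names (eth0 as
the public NIC, xenbr0 as the Xen bridge) and the 192.168.1.x addresses
of the virtual machines are assumptions for illustration only; adjust
them to the actual set-up.

```shell
#!/bin/sh
# Assumed layout: eth0 = public NIC, xenbr0 = Xen bridge,
# 192.168.1.10 = ssh/IRC domU, 192.168.1.11 = http domU.

# Enable IP forwarding and masquerade everything leaving the public NIC.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Forward selected inbound ports to their designated virtual machines.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
         -j DNAT --to-destination 192.168.1.10:22
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.11:80
# Without an explicit port, DNAT keeps the original destination port,
# so the whole 6660-6669 range maps through unchanged.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6660:6669 \
         -j DNAT --to-destination 192.168.1.10

# Let the forwarded traffic (and the replies) through the FORWARD chain.
iptables -A FORWARD -i eth0 -o xenbr0 -p tcp -m multiport \
         --dports 22,80,6660:6669 -j ACCEPT
iptables -A FORWARD -i xenbr0 -o eth0 -j ACCEPT
```

The same MASQUERADE rule also covers the LAN machines behind the second
NIC, as long as their traffic leaves through eth0 as well.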
KVM is an interesting approach to virtualization, and its existence could
give the Linux kernel even more credibility as a viable corporate
operating system kernel - previously, UNIX-style kernel-based
virtualization was only available in Solaris, which still allowed some
FUD to circulate in the minds of those Pointy-Haired Bosses (TM) who were
at least willing to consider something other than Crimosoft Wintendo on
their server hardware - and the project has certainly come a long way in
a short time. But there is still a lot of work to be done, and at the
moment I still consider Xen the best and most stable option: it is
faster than KVM and has less memory overhead, because of the small
hypervisor codebase versus a full-blown Linux kernel.

That said, I doubt that the load inside /dom0/ - you wrote /domU,/ but I
take it you mean /dom0/? - would really differ a lot with KVM rather
than Xen. It is, after all, virtualization, which means that multiple
operating systems, each with their respective I/O requirements, are
trying to access the same hardware simultaneously. So I think the
phenomenon is inherent to the concept of virtualization itself rather
than to the particular virtualization technology used.

On the other hand, both Xen and KVM - and who knows, even VMWare - will
probably benefit a lot from the new and improved scheduler in the Linux
kernel, as well as from using the leaner /SLUB/ allocator instead of the
traditional /SLAB./

By the way, I do indeed intend to give each operating system instance its

[continued in next message]

--- SoupGate-Win32 v1.05
 * Origin: you cannot sedate... all the things you hate (1:229/2)
(c) 1994, bbs@darkrealms.ca