|    linux.debian.kernel    |    Debian kernel discussions    |    3,019 messages    |
|    Message 1,636 of 3,019    |
|    Filippo Giunchedi to Filippo Giunchedi    |
|    Bug#1121006: raid10 and component device    |
|    21 Nov 25 12:10:01    |
XPost: linux.debian.bugs.dist
From: filippo@debian.org

Hello linux-raid,

I'm seeking assistance with the following bug: recent versions of mpt3sas
started announcing a drive optimal_io_size of 0xFFF000, and when such drives
are part of an mdraid raid10 the array's optimal_io_size also ends up as
0xFFF000.

When an LVM PV is created on the array, its metadata area is by default
aligned to the optimal_io_size, resulting in an abnormally large size of
~4GB. During GRUB's LVM detection an allocation is made based on the
metadata area size, which results in an unbootable system. This problem
shows up only for newly-created PVs, so systems with existing PVs are not
affected in my testing.

I was able to reproduce the problem on qemu using scsi-hd devices as shown
below and on https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1121006. The
bug is present both on Debian's stable kernel and on Linux 6.18, though I
haven't yet determined when the change was introduced in mpt3sas.

I'm wondering where the problem lies in this case and what could be done to
fix it?

thank you,
Filippo

On Thu, Nov 20, 2025 at 02:43:24PM +0000, Filippo Giunchedi wrote:
> Hello Salvatore,
> Thank you for the quick reply.
>
> On Wed, Nov 19, 2025 at 05:59:48PM +0100, Salvatore Bonaccorso wrote:
> [...]
> > > Capabilities: [348] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
> > > Capabilities: [380] Data Link Feature <?>
> > > Kernel driver in use: mpt3sas
> >
> > This sounds like quite an interesting finding but probably hard to
> > reproduce without the hardware if it comes to be specific to the
> > controller type and driver.
>
> That's a great point re: reproducibility, and it got me curious about
> something I hadn't thought of testing, namely whether there's another
> angle to this: does any block device with the same block I/O hints
> exhibit the same problem? The answer is actually "yes".
>
> I used qemu's 'scsi-hd' device to set the same values so I could test
> locally. On an already-installed VM I added the following to present four
> new devices:
>
> -device virtio-scsi-pci,id=scsi0
>
> -drive file=./workdir/disks/disk3.qcow2,format=qcow2,if=none,id=drive3
> -device scsi-hd,bus=scsi0.0,drive=drive3,physical_block_size=4096,logical_block_size=512,min_io_size=4096,opt_io_size=16773120
>
> -drive file=./workdir/disks/disk4.qcow2,format=qcow2,if=none,id=drive4
> -device scsi-hd,bus=scsi0.0,drive=drive4,physical_block_size=4096,logical_block_size=512,min_io_size=4096,opt_io_size=16773120
>
> -drive file=./workdir/disks/disk5.qcow2,format=qcow2,if=none,id=drive5
> -device scsi-hd,bus=scsi0.0,drive=drive5,physical_block_size=4096,logical_block_size=512,min_io_size=4096,opt_io_size=16773120
>
> -drive file=./workdir/disks/disk6.qcow2,format=qcow2,if=none,id=drive6
> -device scsi-hd,bus=scsi0.0,drive=drive6,physical_block_size=4096,logical_block_size=512,min_io_size=4096,opt_io_size=16773120
>
> I used 10G files with 'qemu-img create -f qcow2
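
For anyone wanting to confirm the same chain of events, the relevant values
can be inspected directly. This is a minimal sketch assuming the raid10
array is /dev/md0 with the PV created straight on top of it (adjust device
names to your setup); 0xFFF000 corresponds to the 16773120-byte opt_io_size
used above:

# optimal I/O size as reported by the md device
cat /sys/block/md0/queue/optimal_io_size
blockdev --getioopt /dev/md0

# metadata area start and size of the PV created on the array
pvs -o pv_name,pe_start,pv_mda_size /dev/md0

As a possible workaround, which I have not verified, pvcreate's
--metadatasize and --dataalignment options should let one override the
detected alignment when creating new PVs.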