Commit Graph

3553 Commits

Author SHA1 Message Date
Jes Sorensen 59416da78f tests/func.sh: Fix some total breakage in the test scripts
We will never mandate an obsolete file system such as ext[2-4] for
running the test suite, nor should the version of mdadm under test have
to be installed on the system for the tests to run.

Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Fixes: 20d10b4be8 ("mdadm/test: Refactor and revamp 'test' script")
2018-04-11 17:27:28 -04:00
Michal Zylowski b91ad097d6 imsm: Allow create RAID volume with link to container
After 1db03765 ("Subdevs can't be all missing when create raid device"),
a raid volume can no longer be created with a link to the container. This
should not be blocked in the Create function. The IMSM code already
forbids creating a container with a missing disk, so the case of all
devices missing is handled there.

Permit IMSM volume creation when the devices are given as a link to the container.

Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-04-10 16:12:00 -04:00
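
An illustrative sketch of the workflow this re-enables (device and array
names are hypothetical): create the IMSM container first, then create the
volume by passing the container link instead of the member disks.

mdadm -C /dev/md/imsm0 -e imsm -n 2 /dev/sda /dev/sdb  # container from two disks
mdadm -C /dev/md/vol0 -l 1 -n 2 /dev/md/imsm0          # volume given only the container link
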
Zhipeng Xie 1c7c65a3e5 mdadm: fix use-after-free after free_mdstat
e->percent accesses the mdstat_ent that was already freed by free_mdstat().

Signed-off-by: Zhipeng Xie <xiezhipeng1@huawei.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-04-10 11:12:24 -04:00
Jes Sorensen 6a173ed317 mdadm: 4.1-rc1
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-22 13:06:56 -04:00
Jes Sorensen 94d7a6c361 makedist: Fix to handle rc releases
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-22 13:06:56 -04:00
Artur Paszkiewicz e397cefe13 imsm: fix assemble with ppl during rebuild
When assembling an array that is undergoing a rebuild, the kernel will
switch to resync if there are no PPL entries to recover. Prevent that by
adding an empty entry when validating the PPL header.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-22 13:06:56 -04:00
Zhilong Liu 548b2a3d2f clustermd_tests: add test case to test switch-recovery against cluster-raid10
03r10_switch-recovery:
Create a new array with 2 active disks and 1 spare, then mark 1 active disk
as 'fail'; this triggers a recovery in which the spare disk replaces the
failed one. Stop the array on the node doing the recovery; the other node
takes it over and completes the recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:40:34 -05:00
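
A rough sketch of the scenario, assuming a two-node cluster with dlm
running and hypothetical disk names; the array is assembled on both nodes:

mdadm -C /dev/md0 --bitmap=clustered -l10 -n2 -x1 /dev/sda /dev/sdb /dev/sdc  # node1
mdadm -A /dev/md0 /dev/sda /dev/sdb /dev/sdc                                  # node2
mdadm /dev/md0 --fail /dev/sda   # node1: the spare /dev/sdc starts rebuilding
mdadm -S /dev/md0                # node1: node2 takes over and finishes the recovery
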
Zhilong Liu 305a051bdf clustermd_tests: add test case to test switch-recovery against cluster-raid1
03r1_switch-recovery:
Create a new array with 2 active disks and 1 spare, then mark 1 active disk
as 'fail'; this triggers a recovery in which the spare disk replaces the
failed one. Stop the array on the node doing the recovery; the other node
takes it over and completes the recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:40:17 -05:00
Zhilong Liu 2ec2fb76ad clustermd_tests: add test case to test switch-resync against cluster-raid10
03r10_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:40:03 -05:00
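
A rough sketch of how the takeover can be observed, assuming the array is
assembled on both nodes of a two-node cluster (names are illustrative):

cat /proc/mdstat    # node1 shows the resync running, node2 reports it as PENDING
mdadm -S /dev/md0   # stop on node1; node2 picks up and completes the resync
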
Zhilong Liu 9042a561b1 clustermd_tests: add test case to test switch-resync against cluster-raid1
03r1_switch-resync:
Create a new array; 1 node does the resync while the other node stays
PENDING. Stop the array on the resyncing node; the other node takes it
over and completes the resync.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:39:21 -05:00
Zhilong Liu 56928762ba clustermd_tests: add test case to test manage_re-add against cluster-raid10
02r10_Manage_re-add:
With 2 active disks in the array, mark 1 disk as 'fail' and 'remove' it
from the array, then re-add the disk; this triggers a recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:39:04 -05:00
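
A rough sketch of the re-add flow (hypothetical device names):

mdadm /dev/md0 --fail /dev/sdb
mdadm /dev/md0 --remove /dev/sdb
mdadm /dev/md0 --re-add /dev/sdb   # the returning disk is re-added and recovery starts
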
Zhilong Liu 05f1959b19 clustermd_tests: add test case to test manage_re-add against cluster-raid1
02r1_Manage_re-add:
With 2 active disks in the array, mark 1 disk as 'fail' and 'remove' it
from the array, then re-add the disk; this triggers a recovery.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:38:32 -05:00
Zhilong Liu ffa22ea2f3 clustermd_tests: add test case to test manage_add-spare against cluster-raid10
02r10_Manage_add-spare: covers 2 scenarios for manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk into
   the array, then check the spares.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:38:18 -05:00
Zhilong Liu bb76bc6825 clustermd_tests: add test case to test manage_add-spare against cluster-raid1
02r1_Manage_add-spare: covers 2 scenarios for manage_add-spare.
1. With 2 active disks in the md array, use add-spare to add a spare disk.
2. With 2 active disks and 1 spare in the array, add-spare 1 new disk into
   the array, then check the spares.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:38:03 -05:00
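
A rough sketch of scenario 1 (hypothetical device name), with a check of
the result:

mdadm /dev/md0 --add-spare /dev/sdd
mdadm -D /dev/md0 | grep -i spare   # the new disk should be listed as a spare
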
Zhilong Liu 1070b7f867 clustermd_tests: add test case to test manage_add against cluster-raid10
02r10_Manage_add: covers 2 scenarios for manage_add.
1. With 2 active disks in the md array, mark 1 disk as 'fail' and 'remove'
   it from the array, then add 1 clean disk to the array.
2. With 2 active disks in the array, add 1 new disk directly; in this case
   'add' is equivalent to 'add-spare'.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:37:10 -05:00
Zhilong Liu db76b9b156 clustermd_tests: add test case to test manage_add against cluster-raid1
02r1_Manage_add: covers 2 scenarios for manage_add.
1. With 2 active disks in the md array, mark 1 disk as 'fail' and 'remove'
   it from the array, then add 1 clean disk to the array.
2. With 2 active disks in the array, add 1 new disk directly; in this case
   'add' is equivalent to 'add-spare'.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:34:59 -05:00
Zhilong Liu 6374882f21 clustermd_tests: add test case to test grow_add against cluster-raid1
01r1_Grow_add: covers 3 ways of growing the array.
1. With 2 active disks in the md array, grow the array and add a new disk.
2. With 2 active disks and 1 spare in the md array, grow the array and add
   a new disk.
3. With 2 active disks and 1 spare in the md array, grow the device number
   so that the spare disk becomes an active disk in the array.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:34:40 -05:00
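
A rough sketch of case 1 (hypothetical device name); case 3 is the same
--grow step without the preceding --add, so the existing spare is promoted
instead:

mdadm /dev/md0 --add /dev/sdd
mdadm --grow /dev/md0 --raid-devices=3   # recovery onto the new third device starts
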
Zhilong Liu b3d330436a clustermd_tests: add test case to test switching bitmap against cluster-raid10
01r10_Grow_bitmap-switch:
Tests switching the bitmap among three modes: clustered, none and
internal. This case tests clustered raid10.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:34:20 -05:00
Zhilong Liu 4a557d9d1d clustermd_tests: add test case to test switching bitmap against cluster-raid1
01r1_Grow_bitmap-switch:
Tests switching the bitmap among three modes: clustered, none and
internal. This case tests clustered raid1.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:32:33 -05:00
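
A rough sketch of the switching sequence (array name illustrative),
assuming the existing bitmap has to be removed via 'none' before a
different type can be added:

mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=clustered
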
Zhilong Liu b2a613ddd5 manpage: add prompt in --zero-superblock against clustered raid
A clustered raid can be damaged if --zero-superblock is called
incorrectly, so add a warning to the --zero-superblock section of the
manpage. For example: cluster node1 has assembled the cluster-md array,
but --zero-superblock is called on another cluster node.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:30:53 -05:00
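
An illustrative example of the hazardous pattern the manpage now warns
about (hypothetical names):

mdadm -A /dev/md0 /dev/sda /dev/sdb   # node1: cluster-md array assembled and in use
mdadm --zero-superblock /dev/sda      # node2: wipes a member node1 is still using
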
Zhilong Liu b3872c0284 mdadm/clustermd_tests: delete meaningless commands in check
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:30:36 -05:00
Zhilong Liu f7331a1158 mdadm/clustermd_tests: add nobitmap in check
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:30:15 -05:00
Zhilong Liu 064bd3f5ca mdadm/test: add do_clean to ensure each case only catch its own testlog
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:29:50 -05:00
Zhilong Liu 7d81135e8a mdadm/test: add disk metadata infos in save_log
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:28:39 -05:00
Zhilong Liu 05e0e58f70 mdadm/test: improve filtering r10 from raid1 in raidtype
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:28:03 -05:00
Guoqing Jiang 57908e9eba Assemble: cleanup the failure path
Several failure paths share common code before returning, so simplify
them by moving the common code to the end of the function and jumping to
it with goto out when a failure happens.

Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:22:25 -05:00
Guoqing Jiang 76781701a4 Assemble: provide protection when clustered raid do assemble
The previous patch provides protection for other modes such as CREATE,
MANAGE, GROW and INCREMENTAL. For ASSEMBLE mode, we also need protection
during the process of assembling a clustered raid.

However, we only know whether the array is clustered once the metadata is
ready, so lock_cluster is called after select_devices(). The metadata may
also be re-read during auto-assembly, so refresh the locking there.

Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:18:41 -05:00
Guoqing Jiang 1b7eb962db mdadm: improve the dlm locking mechanism for clustered raid
Previously, the dlm locking only protected the functions that write to
the superblock (update_super, add_to_super and store_super), and we
missed other functions such as add_internal_bitmap. The functions that
read the superblock also need to be called under the lock to avoid
consistency issues.

So remove the dlm code from super1.c and provide the locking mechanism in
main(), except for assemble mode, which will be handled in the next
commit. Since we can identify whether it is a clustered raid by checking
the conditions of each mode, the change should have no effect on native
arrays.

We also improve the existing locking as follows:

1. Replace ls_unlock with ls_unlock_wait, since we should only return
once the unlock operation is complete.

2. Inspired by lvm, try to reuse an existing lockspace first instead of
blindly creating one, in case the lockspace was not released for some
reason.

3. Retry a few more times before giving up if locking returns EAGAIN.

Note: for MANAGE mode, we do not need to take the lock if the node just
wants to confirm a device change; otherwise we could not add a disk to
the cluster, since all nodes would compete for the lock.

Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-03-08 14:16:42 -05:00
Xiao Ni 9c816fe2ad Add one sanity check for missing device
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-02-23 11:10:00 -05:00
BingJing Chang 62f1aee7ad mdadm: prevent out-of-date reshaping devices from force assemble
With "--force", we can assemble the array even if some superblocks
appear out-of-date. But their data layout is regarded to make sense.
In reshape cases, if two devices claims different reshape progresses,
we cannot forcely assemble them back to array. Kernel will treat only
one of them as reshape progress. However, their data is still laid on
different layouts. It may lead to disaster if reshape goes on.

Reproducible Steps:
mdadm -C /dev/md0 --assume-clean -l5 -n3 /dev/loop[012]
mdadm -a /dev/md0 /dev/loop3
mdadm -G /dev/md0 -n4
mdadm -f /dev/md0 /dev/loop0 # after a period
mdadm -S /dev/md0 # after another period
mdadm -E /dev/loop[01] # make sure that they claim different positions

mdadm -Af -R /dev/md0 /dev/loop[023] # give too few devices, so that
force_array() has to pick non-fresh devices
cat /sys/block/md0/md/reshape_position # the kernel resumes the reshape
from either of the recorded positions

Note: mdadm -E reports positions in KB, but reshape_position is in sectors.

To prevent disaster, add logic that keeps devices with different reshape
progress from being added to the array.

Reported-by: Allen Peng <allenpeng@synology.com>
Reviewed-by: Alex Wu <alexwu@synology.com>
Signed-off-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-02-23 11:05:00 -05:00
Michal Zylowski 8b75124792 imsm: update product name in error message
This commit extends ab0c6bb ("imsm: update name in --detail-platform").
Refer the user to the RSTe/VROC manual when needed.

Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-02-20 14:27:29 -05:00
Jonathan Underwood b96c193b9f Add udev-md-raid-safe-timeouts.rules
These udev rules attempt to set a safe kernel controller timeout for
commodity disks that contain RAID level 1 or higher partitions and that
either do not have SCTERC capability or have it disabled.

No attempt is made to change the SCTERC settings on devices which
support it.

This attempts to mitigate the problem described here:

    https://raid.wiki.kernel.org/index.php/Timeout_Mismatch
    http://strugglers.net/~andy/blog/2015/11/09/linux-software-raid-and-drive-timeouts/

where the kernel controller may time out on a read from a
disk after the default timeout of 30 seconds and consequently
cause mdraid to regard the disk as dead and eject it from the
RAID array.

The mitigation is to set the timeout to 180 seconds for disks
which contain a RAID level 1 or higher partition.

Signed-off-by: Jonathan G. Underwood <jonathan.underwood@gmail.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-02-01 09:08:51 -05:00
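
A hedged sketch of the equivalent manual mitigation (device name
illustrative); the smartctl line is only an assumption about one way to
check SCTERC support:

smartctl -l scterc /dev/sda                # check whether SCTERC is supported/enabled
echo 180 > /sys/block/sda/device/timeout   # raise the kernel command timeout to 180 seconds
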
Xiao Ni 1db0376585 Subdevs can't be all missing when create raid device
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-26 13:51:01 -05:00
Mariusz Tkaczyk a3b831c9e1 Grow.c: Block any level migration with chunk size change
Mixing level and chunk-size changes in one grow operation is not
supported. Mdadm performs the level migration correctly and ignores the
new chunk size, but after the migration it tries to write this chunk size
to the sysfs properties. This is dangerous and can cause unexpected
behaviour.

Block it before level migration starts.

Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-25 14:31:47 -05:00
Andrea Righi 31b6f0cdc1 Assemble: prevent segfault with faulty "best" devices
I was able to trigger this curious problem, which seems to happen only on
one of our servers:

Segmentation fault

This md volume is a raid1 volume made of 2 device mapper (dm-multipath)
devices and the underlying LUNs are imported via iSCSI.

Applying the following patch (see below) seems to fix the problem:

mdadm: /dev/md/10.4.237.12-volume has been started with 2 drives.

But I'm not sure if it's the right fix or if there are other problems
that I'm missing.

More details about the md superblocks that might help to better
understand the nature of the problem:

dev: 36001405a04ed0c104881100000000000p2
/dev/mapper/36001405a04ed0c104881100000000000p2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 5f3e8283:7f831b85:bc1958b9:6f2787a4
           Name : 10.4.237.12-volume
  Creation Time : Thu Jul 27 14:43:16 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1073729503 (511.99 GiB 549.75 GB)
     Array Size : 536864704 (511.99 GiB 549.75 GB)
  Used Dev Size : 1073729408 (511.99 GiB 549.75 GB)
    Data Offset : 8192 sectors
   Super Offset : 8 sectors
   Unused Space : before=8104 sectors, after=95 sectors
          State : clean
    Device UUID : 16dae7e3:42f3487f:fbeac43a:71cf1f63

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Aug  8 11:12:22 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 518c443e - correct
         Events : 167

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
dev: 36001405a04ed0c104881200000000000p2
/dev/mapper/36001405a04ed0c104881200000000000p2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 5f3e8283:7f831b85:bc1958b9:6f2787a4
           Name : 10.4.237.12-volume
  Creation Time : Thu Jul 27 14:43:16 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1073729503 (511.99 GiB 549.75 GB)
     Array Size : 536864704 (511.99 GiB 549.75 GB)
  Used Dev Size : 1073729408 (511.99 GiB 549.75 GB)
    Data Offset : 8192 sectors
   Super Offset : 8 sectors
   Unused Space : before=8104 sectors, after=95 sectors
          State : clean
    Device UUID : ef612bdd:e475fe02:5d3fc55e:53612f34

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Aug  8 11:12:22 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : c39534fd - correct
         Events : 167

   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

dev: 36001405a04ed0c104881100000000000p2
00001000  fc 4e 2b a9 01 00 00 00  01 00 00 00 00 00 00 00  |.N+.............|
00001010  5f 3e 82 83 7f 83 1b 85  bc 19 58 b9 6f 27 87 a4  |_>........X.o'..|
00001020  31 30 2e 34 2e 32 33 37  2e 31 32 2d 76 6f 6c 75  |10.4.237.12-volu|
00001030  6d 65 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |me..............|
00001040  64 50 7a 59 00 00 00 00  01 00 00 00 00 00 00 00  |dPzY............|
00001050  80 cf ff 3f 00 00 00 00  00 00 00 00 02 00 00 00  |...?............|
00001060  08 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00001070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00001080  00 20 00 00 00 00 00 00  df cf ff 3f 00 00 00 00  |. .........?....|
00001090  08 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000010a0  00 00 00 00 00 00 00 00  16 da e7 e3 42 f3 48 7f  |............B.H.|
000010b0  fb ea c4 3a 71 cf 1f 63  00 00 08 00 48 00 00 00  |...:q..c....H...|
000010c0  54 f0 89 59 00 00 00 00  a7 00 00 00 00 00 00 00  |T..Y............|
000010d0  ff ff ff ff ff ff ff ff  9c 43 8c 51 80 00 00 00  |.........C.Q....|
000010e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001100  00 00 01 00 fe ff fe ff  fe ff fe ff fe ff fe ff  |................|
00001110  fe ff fe ff fe ff fe ff  fe ff fe ff fe ff fe ff  |................|
*
00001200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002000  62 69 74 6d 04 00 00 00  5f 3e 82 83 7f 83 1b 85  |bitm...._>......|
00002010  bc 19 58 b9 6f 27 87 a4  a7 00 00 00 00 00 00 00  |..X.o'..........|
00002020  a7 00 00 00 00 00 00 00  80 cf ff 3f 00 00 00 00  |...........?....|
00002030  00 00 00 00 00 00 00 01  05 00 00 00 00 00 00 00  |................|
00002040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00003100  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  |................|
*
00004000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
003ffe00
dev: 36001405a04ed0c104881200000000000p2
00001000  fc 4e 2b a9 01 00 00 00  01 00 00 00 00 00 00 00  |.N+.............|
00001010  5f 3e 82 83 7f 83 1b 85  bc 19 58 b9 6f 27 87 a4  |_>........X.o'..|
00001020  31 30 2e 34 2e 32 33 37  2e 31 32 2d 76 6f 6c 75  |10.4.237.12-volu|
00001030  6d 65 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |me..............|
00001040  64 50 7a 59 00 00 00 00  01 00 00 00 00 00 00 00  |dPzY............|
00001050  80 cf ff 3f 00 00 00 00  00 00 00 00 02 00 00 00  |...?............|
00001060  08 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00001070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00001080  00 20 00 00 00 00 00 00  df cf ff 3f 00 00 00 00  |. .........?....|
00001090  08 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000010a0  01 00 00 00 00 00 00 00  ef 61 2b dd e4 75 fe 02  |.........a+..u..|
000010b0  5d 3f c5 5e 53 61 2f 34  00 00 08 00 48 00 00 00  |]?.^Sa/4....H...|
000010c0  54 f0 89 59 00 00 00 00  a7 00 00 00 00 00 00 00  |T..Y............|
000010d0  ff ff ff ff ff ff ff ff  5b 34 95 c3 80 00 00 00  |........[4......|
000010e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001100  00 00 01 00 fe ff fe ff  fe ff fe ff fe ff fe ff  |................|
00001110  fe ff fe ff fe ff fe ff  fe ff fe ff fe ff fe ff  |................|
*
00001200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00002000  62 69 74 6d 04 00 00 00  5f 3e 82 83 7f 83 1b 85  |bitm...._>......|
00002010  bc 19 58 b9 6f 27 87 a4  a7 00 00 00 00 00 00 00  |..X.o'..........|
00002020  a7 00 00 00 00 00 00 00  80 cf ff 3f 00 00 00 00  |...........?....|
00002030  00 00 00 00 00 00 00 01  05 00 00 00 00 00 00 00  |................|
00002040  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00003100  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  |................|
*
00004000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
003ffe00

Assemble: prevent segfault with faulty "best" devices

In Assemble(), after context reload, best[i] can be -1 in some cases,
and before checking if this value is negative we use it to access
devices[j].i.disk.raid_disk, potentially causing a segfault.

Check if best[i] is negative before using it to prevent this potential
segfault.

Signed-off-by: Andrea Righi <andrea@betterlinux.com>
Fixes: 69a481166b ("Assemble array with write journal")
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Robert LeBlanc <robert@leblancnet.us>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:43 -05:00
Zhilong Liu 2920144ec9 mdadm/clustermd_tests: add test case to test grow_resize cluster-raid10
01r10_Grow_resize:
1. Create a clustered raid10 with a smaller size, then resize the mddev
to the max size, and finally change it back to the smaller size.
2. Create a clustered raid10 with a smaller chunk size, then resize it to
a larger one and trigger a reshape.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
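
A rough sketch of scenario 1 (sizes in KiB, names illustrative):

mdadm -C /dev/md0 --bitmap=clustered -l10 -n2 -z 102400 /dev/sda /dev/sdb  # create with a smaller size
mdadm --grow /dev/md0 --size=max                                           # resize the mddev to the maximum
mdadm --grow /dev/md0 --size=102400                                        # change back to the smaller size
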
Zhilong Liu be2287de4d mdadm/clustermd_tests: add test case to test creating cluster-raid10
00r10_Create: covers 4 scenarios of creating a clustered raid10.
1. General creation: the master node does the resync and the slave node
   stays Pending.
2. Creating a clustered raid10 with --assume-clean.
3. Creating a clustered raid10 with a spare disk.
4. Creating a clustered raid10 with --name.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Zhilong Liu fd24893e7b mdadm/clustermd_tests: add test case to test grow_resize cluster-raid1
01r1_Grow_resize: Create a clustered raid1 with a smaller size, then
resize the mddev to the max size, and finally change it back to the
smaller size.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Zhilong Liu 258735fecb mdadm/clustermd_tests: add test case to test creating cluster-raid1
00r1_Create: covers 4 scenarios of creating a clustered raid1.
1. General creation: the master node does the resync and the slave node
   stays Pending.
2. Creating a clustered raid1 with the --assume-clean parameter.
3. Creating a clustered raid1 with a spare disk.
4. Creating a clustered raid1 with --name.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Zhilong Liu 6c33d34df2 mdadm/test: add '--testdir=' to switch choosing test suite
mdadm now has two test suites, covering traditional soft-raid testing
and clustermd testing; the '--testdir=' option selects which suite to
run, tests/ or clustermd_tests/.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
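
Illustrative invocations, assuming they are run from the top of the mdadm
source tree:

./test --testdir=tests              # traditional soft-raid suite
./test --testdir=clustermd_tests    # clustered md suite
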
Zhilong Liu beb71de04d mdadm/test: enable clustermd testing under clustermd_tests/
For clustermd testing, the user needs to deploy the basic cluster
manually; the test scripts do not cover automatic cluster deployment
because Linux distributions differ too much. Then complete the
configuration in cluster_conf; please refer to the detailed comments in
'cluster_conf'.

1. 'func.sh': source file that provides the feature functions for
   clustermd testing.
2. 'cluster_conf': configuration file that contains the two parts used
   as input for the testing.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Zhilong Liu cbb8d34a81 mdadm/test: move some functions to new source file
To keep the 'test' file concise, move some functions to the new file
tests/func.sh and leave the core functions in 'test'.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Zhilong Liu ed5e31aa21 mdadm/test: correct the logic operation in save_log
1. Delete the mdadm -As, keeping the original testing scene intact.
2. Move some actions inside the check on 'array'; 'mdadm -D $array' would
   complain if $array is empty.

Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:36:08 -05:00
Mariusz Tkaczyk 3bf9495270 policy.c: Avoid to take spare without defined domain by imsm
Only the imsm get_disk_controller_domain returns a disk controller domain
for each disk. As a result, mdadm automatically creates a disk controller
domain policy for imsm metadata, and imsm containers in the same disk
controller domain can take a spare for recovery.

Ignore spares if only one imsm domain is matched.

Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:19:25 -05:00
Artur Paszkiewicz ab0c6bb9fc imsm: update name in --detail-platform
For IMSM enterprise firmware starting with major version 6, present the
platform name as Intel VROC.

Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 16:17:27 -05:00
Guoqing Jiang 18160d3455 mdadm: allow clustered raid10 to be created with default layout
Since the default layout of raid10 is n2, we should allow this behavior.

Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2018-01-21 15:56:27 -05:00
Tomasz Majchrzak a44c262abc managemon: Don't add disk to the array after it has started
If a disk disappears from the system and appears again, it is added to
the corresponding container as long as the metadata matches and the disk
number is set. This code had no effect on imsm until commit 20dc76d15b
("imsm: Set disk slot number"). Now the disk is added to the container
but not to the array - which is correct, as the disk is out-of-sync. A
rebuild should start for the disk but it doesn't. The behaviour is the
same for both imsm and ddf metadata.

There is no point in handling an out-of-sync disk as a "good member of
the array", so remove that part of the code. There are no scenarios in
which the monitor is already running and the disk can be safely added to
the array. Just write the initial metadata to the disk so it is picked
up for rebuild.

Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2017-12-07 09:20:16 -05:00
Zhilong Liu 56e1e6ace0 mdadm/grow: correct the s->size > 1 to make 'max' work
s->size > 1: s->size is '1' when the '--grow --size max' parameter is
specified, so correct this check here.

Fixes: 1b21c449e6 ("mdadm/grow: adding a test to ensure resize was required")
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2017-11-28 11:05:47 -05:00
Maksymilian Kunt 8b9cd157dc imsm: continue resync on 3-disk RAID10
If a RAID10 gets degraded during resync and is stopped, it does not
continue the resync after automatic assembly and is reported to be in
sync. The resync is blocked because a disk is missing. This should not
happen for RAID10, as it can still continue with 3 disks.

Count the missing disks. Block the resync only if the number of missing
disks exceeds the limit for the given RAID level (only different for
RAID10). Check if the disk under recovery is present; if not, the resync
should be allowed to run.

Signed-off-by: Maksymilian Kunt <maksymilian.kunt@intel.com>
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2017-11-21 13:31:39 -05:00
Mariusz Tkaczyk 1ea0462990 Monitor/msg: Don't print error message if mdmon doesn't run
Commit 4515fb28a5 ("Add detail information when can not connect
monitor") was added to warn about a failed connection to the monitor in
the WaitClean function (see the link below).

Mdmon runs for IMSM containers when they contain an array with
redundancy, so if mdmon is not running, mdadm prints this error. This is
misleading and unnecessary; just print it in the WaitClean function.

The sock in WaitClean is deprecated, so it is removed.

Link: https://bugzilla.redhat.com/show_bug.cgi?id=1375002
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
2017-11-21 13:26:09 -05:00