super-intel marks a number of structures 'packed', but this
doesn't change the layout - they are already well organized.
This is a problem as gcc warns when code takes the address
of a field in a packed struct - as super-intel sometimes does.
So remove the marking where it isn't needed.
To ensure this does not introduce a regression, add a compile-time
assertion that the size of the structure is exactly the value
it had before the 'packed' annotation was removed.
Note that a couple of structures do need to be packed.
As the address of their fields is never taken, that is safe.
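A minimal sketch of one way such a compile-time size check can be expressed
in C; the macro name and the example size are illustrative, not necessarily
what super-intel actually uses:

    /* Fails to compile if 'struct type' is not exactly 'size' bytes:
     * the array gets a negative size and gcc rejects it. */
    #define ASSERT_SIZE(type, size) \
            typedef char assert_size_##type[(sizeof(struct type) == (size)) ? 1 : -1];

    /* Example (hypothetical size): keep imsm_disk at 48 bytes after
     * dropping the 'packed' attribute. */
    ASSERT_SIZE(imsm_disk, 48)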
Signed-off-by: NeilBrown <neilb@suse.de>
Acked-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Removed the checks which limited the size of a second volume to the max value
(the largest size that fits on all current drives). It is now permitted
to create a second volume with a size smaller than the maximum possible.
Signed-off-by: Krzysztof Smolinski <krzysztof.smolinski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
The imsm container_content routine sets the curr_volume index in super
to get volume information. This index was never restored to its original
value, and other functions may later rely on it.
Restore it to its original value.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When a member drive fails, managemon prepares a metadata update and adds
the drive to disk_mgmt_list with the DISK_REMOVE flag. It fills in only
the major and minor numbers, which is enough to recognize the device later.
The monitor thread, while processing this update, removes the drive from
super only if it is a spare. It never removes a failed member from the
disks list. As a result, it still keeps an open descriptor to a
non-existent device.
If the removed drive is not a spare, fill in the fd in the disk_cfg structure
(prepared by managemon); the monitor will close the fd when freeing it.
Also set this drive's fd to -1 in super to avoid a double close, because
the monitor will close the fd (if needed) while replacing the removed drive
in the array.
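A minimal sketch of that hand-over, with illustrative helper and field names
(the real fields live in disk_cfg and struct dl):

    /* managemon: hand the open descriptor over with the remove request */
    if (!disk_is_spare(dl)) {               /* disk_is_spare() is illustrative */
            new_disk->fd = dl->fd;          /* monitor closes it on free */
            dl->fd = -1;                    /* avoid a second close via super */
    }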
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Shut up some gcc9 errors by using put_unaligned() accessors. Not pretty,
but better than it was.
Also switch to the correct swap macros.
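For reference, the core idea behind a put_unaligned()-style accessor is a
memcpy() store that never lets the compiler assume natural alignment; a
minimal sketch (the helper name is illustrative, not mdadm's exact macro):

    #include <stdint.h>
    #include <string.h>

    /* Store a 32-bit value at a possibly unaligned address. */
    static inline void put_unaligned32(uint32_t val, void *p)
    {
            memcpy(p, &val, sizeof(val));
    }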
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a member drive disappears and is set faulty by the kernel during
mdmon startup, after ss->load_container() but before manage_new(), mdmon
will try to re-add the faulty drive to the array and start rebuilding.
Metadata on the active drive is updated, but the faulty drive is not
removed from the array and is left in a "blocked" state and any write
request to the array will block. If the faulty drive reappears in the
system e.g. after a reboot, the array will not assemble because metadata
on the drives will be incompatible (at least on imsm).
Fix this by adding a new option for sysfs_read(): "GET_DEVS_ALL". This
is an extension for the "GET_DEVS" option and causes all member devices
to be returned, even if the associated block device has been removed.
Use this option in manage_new() to include the faulty device on the
active_array's devices list. Mdmon will then properly remove the faulty
device from the array and update the metadata to reflect the degraded
state.
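A minimal sketch of how such an option could gate the skipping of removed
members inside sysfs_read(); the bit value and the block_device_missing()
helper are assumptions:

    #define GET_DEVS_ALL    (1 << 10)       /* illustrative bit value */

    /* inside the per-member loop of sysfs_read() */
    if (!(options & GET_DEVS_ALL) && block_device_missing(dev_name))
            continue;       /* old behaviour: skip members whose bdev is gone */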
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When the passed size is smaller than the chunk size, mdadm rounds it down to
0, but 0 there means maximum available space.
Block this for every metadata format. Remove the same check from the imsm
routine.
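A minimal sketch of the generic check, assuming size and chunk are given in
the same units (variable names are illustrative):

    /* Reject explicit sizes smaller than one chunk: rounding them down
     * would yield 0, which means "use the maximum available space". */
    if (size > 0 && size < (unsigned long long)chunk) {
            pr_err("Size must be at least the chunk size (%dK).\n", chunk);
            return 1;
    }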
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
During spare activation get_extents() calculates the metadata reserved space
based on the smallest active RAID member, or it takes the defaults. Since
commit 611d9529 ("imsm: change reserved space to 4MB") the default has been
increased. If the array was created prior to that patch, the reserved space is
smaller. In the case of matrix RAID the spare is activated in each array
one-by-one, so it is a spare during the first activation, but treated as
"active" during the second one.
When adding a spare drive with the same size as an already existing member
drive to an old matrix RAID, the routine takes the defaults during the second
run and mdmon refuses to rebuild the second volume, claiming that the drive
does not have enough free space.
Add a parameter to get_extents(), so that during spare activation the reserved
space is always based on the smallest active drive - even if the given drive
is already active in some other array of the matrix RAID.
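A sketch of the extended prototype, with an illustrative parameter name:

    /* When get_minimal_reservation is non-zero (spare activation path),
     * base the reserved space on the smallest active member instead of
     * the defaults, even if this drive already belongs to another array
     * of the matrix RAID. */
    static struct extent *get_extents(struct intel_super *super, struct dl *dl,
                                      int get_minimal_reservation);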
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a reshape is performed on drives larger than 2 TB,
the calculated migration checkpoint area exceeds a 32-bit value.
This checkpoint area is a reserved space treated as a backup
during reshape - at the end of the drive, right before the metadata.
As a result the wrong space is used and any data that may exist there
is overwritten.
Add an additional field to the migration record to track the high-order
32 bits of the PBA of this area. High-order fields for three other values
that may exceed 32 bits on large drives are added as well.
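For reference, such a split pair is typically recombined as below (field
names are illustrative):

    /* Rebuild the 64-bit checkpoint-area PBA from the low and high
     * 32-bit halves stored in the migration record. */
    unsigned long long pba =
            ((unsigned long long)__le32_to_cpu(migr_rec->ckpt_area_pba_hi) << 32) |
            __le32_to_cpu(migr_rec->ckpt_area_pba_lo);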
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Commit d7a1fda276 ("imsm: update metadata correctly while raid10 double
degradation") resolves the main IMSM double degradation problems but
omits one case. Currently the metadata hangs in the rebuilding state if the
drive under rebuild is removed during recovery from double degradation.
The root cause of this problem is comparing the new map_state with the
current one and, if both are degraded, assuming that nothing new has happened.
Don't rely on map states; just check whether a device has failed. If the drive
under rebuild fails then finish the migration, otherwise update the map
state only (a second failure means the destination map state can't be normal).
To avoid problems with reassembling, move end_migration() (called after
successful recovery from double degradation) after the check whether recovery
has really finished; for details see 7ce057018 ("imsm: fix: rebuild does not
continue after reboot").
Remove the redundant code responsible for finishing the rebuild process;
end_migration() does exactly the same. Set last_checkpoint to 0 to prepare
it for the next rebuild.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mdmon calls end_migration() when the map state changes from normal to
degraded. This is not valid because in the RAID10 double degradation case
mdmon stops checkpointing even though the array is still rebuilding.
In this case mdmon has to mark the map as degraded and continue recording
the recovery checkpoint in metadata. The migration can be finished only if
the newly failed device is the rebuilding device.
Add handling of the double-degraded to degraded transition. The migration is
finished but the map state doesn't change; the array is still degraded.
Update failed_disk_num correctly. If double degradation happens,
the rebuild starts on the lowest slot, but this variable points
to the first failed slot. If a second failure happens during rebuild, this
variable shouldn't be updated until the rebuild has finished.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When IMSM_NO_PLATFORM is exported, mdadm allows creating an array on
partitions or adding a partition to an existing array, but there is no
possibility to assemble it after stopping; see commit 691c6ee1b6
("IMSM/DDF: don't recognised these metadata on partitions.").
When searching for HBA capabilities, test the device first and print a
corresponding error if it is a partition.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
There is a possibility to create a RAID array with an empty name. Block it.
Also remove trailing and leading whitespace from the given name.
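A minimal sketch of the trimming step in C (the function name is
illustrative):

    #include <ctype.h>
    #include <string.h>

    /* Strip leading and trailing whitespace in place. */
    static void trim_name(char *name)
    {
            char *start = name, *end;

            while (isspace((unsigned char)*start))
                    start++;
            memmove(name, start, strlen(start) + 1);

            end = name + strlen(name);
            while (end > name && isspace((unsigned char)end[-1]))
                    *--end = '\0';
    }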
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When migrating an array from RAID0 to RAID10, num_data_stripes in the
metadata map is not updated. Update it to allow the migration to proceed
correctly.
Also adjust the RAID10 to RAID0 migration path for clarity of the code.
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
The Grow feature for IMSM metadata is now fully supported and tested.
The reshape operation is no longer in an experimental state, so use of this
flag is unnecessary.
Do not require the MDADM_EXPERIMENTAL flag and remove the obsolete information
from the manual.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Acked-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Acked-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Currently, when a container holds disks with mixed sector sizes (a few
4K disks and some 512-byte disks), there is no possibility to create a volume
from the disks with one sector size.
Allow volume creation when the given disks belong to such a mixed container.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When adding ":0" to the serial number and copying it back, use memcpy()
instead of strncpy(), as we know the actual length. This stops gcc
from complaining with -Werror=stringop-truncation enabled.
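The pattern in question, sketched with illustrative variable names: the
length of the serial with ":0" appended is already known, so memcpy() copies
exactly that many bytes and gcc has nothing to warn about:

    size_t len = strlen(buf);       /* serial with ":0" already appended */

    if (len > MAX_RAID_SERIAL_LEN)
            len = MAX_RAID_SERIAL_LEN;
    memcpy(serial, buf, len);       /* instead of strncpy(serial, buf, len) */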
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
gcc-8.1's -Werror=stringop-truncation is easily confused. Rather than
disabling the check, make it explicit that we are OK with truncation here.
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Fix 'gemetry' to 'geometry' in the error message about geometry validation
failure.
Fix the misspelled word 'alignment' in the imsm_component_size_alignment_check
function.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Block creation of an imsm volume when the given size is smaller than 1M and
print an appropriate message.
Commit b53bfba611
("imsm: use rounded size for metadata initialization") introduced an issue
with rounding volume sizes smaller than 1M down to 0. The behaviour when a
size smaller than 1M is given is inconsistent, depending on what is passed as
the target device:
1) When block devices were given, the created volume had the maximum available
size.
2) When the container symlink was given, the created volume had size 0.
Additionally it causes the call trace below:
[69587.891556] WARNING: CPU: 28 PID: 22485 at ../drivers/md/md.c:7582 md_seq_show+0x764/0x770 [md_mod]
[69588.066405] Call Trace:
[69588.066409] seq_read+0x336/0x430
[69588.066411] proc_reg_read+0x40/0x70
[69588.066412] __vfs_read+0x26/0x140
[69588.066414] vfs_read+0x89/0x130
[69588.066415] SyS_read+0x42/0x90
[69588.066417] do_syscall_64+0x74/0x140
[69588.066419] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
mdadm assumes that the blocks_per_member value is equal to num_data_stripes *
blocks_per_stripe, but this is not true. For IMSM arrays created in the OROM,
NUM_BLOCKS_DIRTY_STRIPE_REGION sectors are added on top of this value. Because
of this, mdadm shows an invalid size for arrays created in the OROM; to fix
this we need to use an array size calculation based on num_data_stripes and
blocks_per_stripe.
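The corrected calculation, sketched with illustrative variable names:

    /* Per-member size derived from the stripe layout rather than from
     * blocks_per_member, which may include the extra
     * NUM_BLOCKS_DIRTY_STRIPE_REGION sectors added by the OROM. */
    unsigned long long member_sectors =
            (unsigned long long)num_data_stripes * blocks_per_stripe;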
Signed-off-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
In almost every place where imsm_num_data_members is called there is
already an existing map, so it can be used to avoid mistakes when specifying
the map for imsm_num_data_members.
Signed-off-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
For compatibility with the newest OROM, the imsm reserved space has to be
expanded to 4MB.
Signed-off-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When assembling an array undergoing rebuild the kernel will switch to
resync if there are no ppl entries to recover. Prevent that by adding an
empty entry when validating the ppl header.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
This commit extends ab0c6bb ("imsm: update name in --detail-platform").
Refer the user to the RSTe/VROC manual when needed.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
For IMSM enterprise firmware starting with major version 6, present the
platform name as Intel VROC.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If RAID10 becomes degraded during resync and is stopped, it doesn't continue
the resync after automatic assembly and is reported to be in sync. The resync
is blocked because a disk is missing. This should not happen for RAID10 as
it can still continue with 3 disks.
Count missing disks. Block the resync only if the number of missing disks
exceeds the limit for the given RAID level (it is only different for RAID10).
Check if the disk under recovery is present. If not, the resync should be
allowed to run.
Signed-off-by: Maksymilian Kunt <maksymilian.kunt@intel.com>
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When a RAID array is created across VMD and SATA disks, the printed message is
"Mixing devices attached to different VMD domains is not allowed". This message
is unclear and misleading because creating spanned containers across different
VMD domains is allowed. Change the error message to more precise text.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If the first disk of an IMSM RAID1 has failed but is still present in the
system, the array is not auto-assembled. Auto-assembly uses the raid disk slot
from the metadata to index the disks. As it's not set, the valid disk is seen
as a replacement disk and its metadata is ignored. The problem is not
observed for other RAID levels as they have more than 2 disks -
replacement disks are only stored under odd indexes, so the third disk's
metadata is used in such a scenario.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Reviewed-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When rebuild is initiated by the UEFI driver it is possible that the new
disk will not contain a valid ppl header. Just write the initial ppl
and don't abort assembly.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Use the first map to get the correct disk when rebuilding and not the
failed disk from the second map.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Set resync_start to 0 when starting a rebuilding array to make the
kernel perform ppl recovery before the rebuild.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If the array was initially assembled with a kernel without PPL support,
the initial header was never written to the drive.
If the initial resync was completed and the system is rebooted to a kernel
with PPL support, mdadm prevents assembling a normal clean array
due to the lack of a valid PPL.
Write an empty header when assembling a normal clean array, so that
its assembly is no longer blocked.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If validate_ppl_imsm() detects an invalid ppl header it will be
overwritten with a valid, empty ppl header. But if we are assembling an
array after unclean shutdown this will cause the kernel to skip resync
after ppl recovery. We don't want that because if there was an invalid
ppl it's best to assume that the ppl recovery is not enough to make the
array consistent and a full resync should be performed. So when
overwriting the invalid ppl add one ppl_header_entry with a wrong
checksum. This will prevent the kernel from skipping resync after ppl
recovery.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If the user has an array with a single PPL,
update the metadata to use multiple PPLs.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
The PPL area should be cleared before creation/forced assembly.
If the drive was used in another RAID array, it might contain a PPL from it.
There is a risk that mdadm recognizes those PPLs and
refuses to assemble the RAID due to a PPL conflict with the created
array.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Change the validation algorithm to check the validity of the multiple PPLs
that are stored in the PPL area.
If a read error occurs, treat all the PPLs as invalid -
there is no guarantee that this one was not the latest. If the header CRC is
incorrect, assume that there are no further PPLs in the PPL area.
If the whole PPL area has been written at least once, there is a possibility
that an old PPL (with a lower generation number) will follow the most recent
one (with the highest generation number). Compare those generation numbers to
determine which PPL is the latest.
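A minimal sketch of that scan over the PPL area; all helper names and types
here are illustrative:

    /* Walk the PPL area, keep the header with the highest generation;
     * a bad CRC ends the walk, a read error invalidates everything. */
    while (offset < ppl_area_size) {
            if (read_ppl_header(fd, offset, &hdr) < 0)
                    return -1;                      /* treat all PPLs as invalid */
            if (!ppl_header_crc_ok(&hdr))
                    break;                          /* no further PPLs */
            if (__le64_to_cpu(hdr.generation) > best_generation) {
                    best_generation = __le64_to_cpu(hdr.generation);
                    best_offset = offset;
            }
            offset += ppl_total_size(&hdr);         /* advance past this PPL */
    }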
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Add interpretation of the new rwh_policy bits. Set the PPL size to 1MB.
If a new array with PPL is created, use the new PPL implementation by
default.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Change the behavior of assemble and create for consistency-policy=ppl
for external metadata arrays. If the kernel does not support ppl, don't
abort but print a warning and start the array without ppl
(consistency-policy=resync). No change for native md arrays because the
kernel will not allow starting the array if it finds an unsupported
feature bit in the superblock.
In sysfs_add_disk() check consistency_policy in the mdinfo structure
that represents the array, not the disk, and read the current consistency
policy from sysfs in mdmon's manage_member(). This is necessary to make
sysfs_add_disk() honor the actual consistency policy and not what is in
the metadata. Also remove all the places where consistency_policy is set
for a disk's mdinfo - it is a property of the array, not the disk.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Add disk controller domain for nvme and vmd devices to prevent moving
spares between different domains.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When RAID10 loses 2 disks and it is still operational, it cannot be
rebuilt. The rebuild process starts for the first disk and completes,
however completion is not recorded in metadata. There is an assumption
that rebuild completion corresponds to transition from degraded to
normal state. It's not the case for 2-disk RAID10 as it's still degraded
after rebuild to first disk completes.
Check if disk rebuild flag is set in the second map and clear it. So far it
has been checked only in the first map (where it was not set). The flag in
the second map has not been cleared but rebuild completion dropped second
map so the problem was not visible.
If rebuild completion is notified and array still has failed disks and is in
degraded state, check first if rebuild position is really unset (the same
check as for array in normal state). If so, mark migration as done but don't
change array state (it should remain degraded). Update failed disk number.
On rebuild start don't clear the rebuild flag in the destination map for all
the drives, because the failed state is lost for one of them. Just copy the
map and clear the flag in the destination map for the disk that goes into
rebuild. Similarly, preserve the rebuild flag in the map during disk removal.
If a disk is missing on array start and a migration has been in progress,
don't just cancel it. Check first whether one of the disks was not under
rebuild (rebuild flag present in both the source and destination maps). If so,
the rebuild was running despite the failed disk, so there is no need to cancel
the migration.
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>