Substantial corrections to man pages.

From: "Scott Weikart" <Scott.W@Benetech.org>

Thanks Scott!
This commit is contained in:
Scott Weikart 2007-07-13 15:13:43 +10:00 committed by Neil Brown
parent bf40ab857f
commit 93e790afef
4 changed files with 156 additions and 159 deletions

md.4

@@ -6,7 +6,7 @@
''' See file COPYING in distribution for details.
.TH MD 4
.SH NAME
-md \- Multiple Device driver aka Linux Software Raid
+md \- Multiple Device driver aka Linux Software RAID
.SH SYNOPSIS
.BI /dev/md n
.br
@@ -29,7 +29,7 @@ supports RAID levels
If some number of underlying devices fails while using one of these
levels, the array will continue to function; this number is one for
RAID levels 4 and 5, two for RAID level 6, and all but one (N-1) for
-RAID level 1, and dependant on configuration for level 10.
+RAID level 1, and dependent on configuration for level 10.
.PP
.B md
also supports a number of pseudo RAID (non-redundant) configurations
@@ -57,7 +57,7 @@ device down to a multiple of 64K and then subtract 64K).
The available size of each device is the amount of space before the
super block, so between 64K and 128K is lost when a device in
incorporated into an MD array.
-This superblock stores multi-byte fields in a processor-dependant
+This superblock stores multi-byte fields in a processor-dependent
manner, so arrays cannot easily be moved between computers with
different processors.
@@ -67,7 +67,7 @@ and 12K from the end of the device, on a 4K boundary, though
variations can be stored at the start of the device (version 1.1) or 4K from
the start of the device (version 1.2).
This superblock format stores multibyte data in a
-processor-independent format and has supports up to hundreds of
+processor-independent format and supports up to hundreds of
component devices (version 0.90 only supports 28).
The superblock contains, among other things:
@@ -78,7 +78,7 @@ The manner in which the devices are arranged into the array
.TP
UUID
a 128 bit Universally Unique Identifier that identifies the array that
-this device is part of.
+contains this device.
When a version 0.90 array is being reshaped (e.g. adding extra devices
to a RAID5), the version number is temporarily set to 0.91. This
@@ -90,8 +90,8 @@ that can complete the reshape processes is used.
.SS ARRAYS WITHOUT SUPERBLOCKS
While it is usually best to create arrays with superblocks so that
-they can be assembled reliably, there are some circumstances where an
-array without superblocks in preferred. This include:
+they can be assembled reliably, there are some circumstances when an
+array without superblocks is preferred. These include:
.TP
LEGACY ARRAYS
Early versions of the
@@ -114,19 +114,19 @@ a MULTIPATH array with no superblock makes sense.
.TP
RAID1
In some configurations it might be desired to create a raid1
-configuration that does use a superblock, and to maintain the state of
+configuration that does not use a superblock, and to maintain the state of
the array elsewhere. While not encouraged for general us, it does
have special-purpose uses and is supported.
.SS LINEAR
A linear array simply catenates the available space on each
-drive together to form one large virtual drive.
+drive to form one large virtual drive.
One advantage of this arrangement over the more common RAID0
arrangement is that the array may be reconfigured at a later time with
-an extra drive and so the array is made bigger without disturbing the
-data that is on the array. However this cannot be done on a live
+an extra drive, so the array is made bigger without disturbing the
+data that is on the array. This can even be done on a live
array.
If a chunksize is given with a LINEAR array, the usable space on each
@@ -145,7 +145,7 @@ device, the second chunk to the second device, and so on until all
drives have been assigned one chunk. This collection of chunks forms
a
.BR stripe .
-Further chunks are gathered into stripes in the same way which are
+Further chunks are gathered into stripes in the same way, and are
assigned to the remaining space in the drives.
If devices in the array are not all the same size, then once the
@@ -166,7 +166,7 @@ requests across all devices to maximise performance.
All devices in a RAID1 array should be the same size. If they are
not, then only the amount of space available on the smallest device is
-used. Any extra space on other devices is wasted.
+used (any extra space on other devices is wasted).
.SS RAID4
@@ -176,10 +176,10 @@ array. Unlike RAID0, RAID4 also requires that all stripes span all
drives, so extra space on devices that are larger than the smallest is
wasted.
-When any block in a RAID4 array is modified the parity block for that
+When any block in a RAID4 array is modified, the parity block for that
stripe (i.e. the block in the parity device at the same device offset
as the stripe) is also modified so that the parity block always
-contains the "parity" for the whole stripe. i.e. its contents is
+contains the "parity" for the whole stripe. I.e. its content is
equivalent to the result of performing an exclusive-or operation
between all the data blocks in the stripe.
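The parity rule described in this hunk (the parity block is the exclusive-or of all data blocks in the stripe) can be sketched as follows. This is a minimal illustration, not mdadm's implementation; the helper names are invented for this example.

```python
# RAID4-style parity sketch: the parity block is the XOR of the data
# blocks in a stripe, so any one missing block can be rebuilt by XORing
# the parity with the surviving data blocks.

def parity_block(blocks):
    """XOR a list of equal-sized blocks together, byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def rebuild(surviving_blocks):
    """Recover a lost block: XOR of the parity plus surviving data blocks."""
    return parity_block(surviving_blocks)

stripe = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
p = parity_block(stripe)
# If stripe[1] is lost, it can be recomputed from the rest plus parity.
assert rebuild([stripe[0], stripe[2], p]) == stripe[1]
```

The same XOR identity is what lets a degraded RAID4/5 array serve reads for a failed device, at the cost of touching every other device in the stripe.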
@@ -192,10 +192,10 @@ parity block and the other data blocks.
RAID5 is very similar to RAID4. The difference is that the parity
blocks for each stripe, instead of being on a single device, are
distributed across all devices. This allows more parallelism when
-writing as two different block updates will quite possibly affect
+writing, as two different block updates will quite possibly affect
parity blocks on different devices so there is less contention.
-This also allows more parallelism when reading as read requests are
+This also allows more parallelism when reading, as read requests are
distributed over all the devices in the array instead of all but one.
.SS RAID6
@@ -210,12 +210,12 @@ disk failure mode, however.
.SS RAID10
-RAID10 provides a combination of RAID1 and RAID0, and sometimes known
+RAID10 provides a combination of RAID1 and RAID0, and is sometimes known
as RAID1+0. Every datablock is duplicated some number of times, and
the resulting collection of datablocks are distributed over multiple
drives.
-When configuring a RAID10 array it is necessary to specify the number
+When configuring a RAID10 array, it is necessary to specify the number
of replicas of each data block that are required (this will normally
be 2) and whether the replicas should be 'near', 'offset' or 'far'.
(Note that the 'offset' layout is only available from 2.6.18).
@@ -243,7 +243,7 @@ suitably large chunk size is used, but without as much seeking for
writes.
It should be noted that the number of devices in a RAID10 array need
-not be a multiple of the number of replica of each data block, those
+not be a multiple of the number of replica of each data block; however,
there must be at least as many devices as replicas.
If, for example, an array is created with 5 devices and 2 replicas,
@@ -251,7 +251,7 @@ then space equivalent to 2.5 of the devices will be available, and
every block will be stored on two different devices.
Finally, it is possible to have an array with both 'near' and 'far'
-copies. If and array is configured with 2 near copies and 2 far
+copies. If an array is configured with 2 near copies and 2 far
copies, then there will be a total of 4 copies of each block, each on
a different drive. This is an artifact of the implementation and is
unlikely to be of real value.
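The RAID10 capacity rule mentioned in the hunks above (5 devices with 2 replicas give space equivalent to 2.5 devices) amounts to simple arithmetic. A hypothetical helper, not part of mdadm:

```python
# RAID10 usable space sketch: usable capacity is roughly
# (number of devices / number of replicas) * device size,
# and the device count need not be a multiple of the replica count,
# though there must be at least as many devices as replicas.

def raid10_usable_kib(n_devices, device_kib, replicas=2):
    if n_devices < replicas:
        raise ValueError("need at least as many devices as replicas")
    return n_devices * device_kib // replicas

# 5 devices of 1000 KiB with 2 replicas: space equivalent to 2.5 devices.
assert raid10_usable_kib(5, 1000) == 2500
```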
@@ -283,7 +283,7 @@ read/write at the address will probably succeed) or persistent
faults can be "fixable" meaning that they persist until a write
request at the same address.
-Fault types can be requested with a period. In this case the fault
+Fault types can be requested with a period. In this case, the fault
will recur repeatedly after the given number of requests of the
relevant type. For example if persistent read faults have a period of
100, then every 100th read request would generate a fault, and the
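The period behaviour of the FAULTY personality described here (a period of 100 means every 100th request of the relevant type fails) can be sketched with a trivial predicate; the function name is illustrative only:

```python
# FAULTY-personality period sketch: with a period N, every Nth request
# of the relevant type generates a fault.

def is_fault(request_number, period):
    """Return True when this 1-based request should generate a fault."""
    return request_number % period == 0

# With a period of 100, requests 100, 200, 300, ... fault.
faults = [n for n in range(1, 301) if is_fault(n, 100)]
assert faults == [100, 200, 300]
```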
@@ -301,8 +301,8 @@ failure modes can be cleared.
When changes are made to a RAID1, RAID4, RAID5, RAID6, or RAID10 array
there is a possibility of inconsistency for short periods of time as
-each update requires are least two block to be written to different
-devices, and these writes probably wont happen at exactly the same
+each update requires at least two block to be written to different
+devices, and these writes probably won't happen at exactly the same
time. Thus if a system with one of these arrays is shutdown in the
middle of a write operation (e.g. due to power failure), the array may
not be consistent.
@@ -320,7 +320,7 @@ known as "resynchronising" or "resync" is performed in the background.
The array can still be used, though possibly with reduced performance.
If a RAID4, RAID5 or RAID6 array is degraded (missing at least one
-drive) when it is restarted after an unclean shutdown, it cannot
+drive, two for RAID6) when it is restarted after an unclean shutdown, it cannot
recalculate parity, and so it is possible that data might be
undetectably corrupted. The 2.4 md driver
.B does not
@@ -333,11 +333,11 @@ this behaviour can be overridden by a kernel parameter.
If the md driver detects a write error on a device in a RAID1, RAID4,
RAID5, RAID6, or RAID10 array, it immediately disables that device
(marking it as faulty) and continues operation on the remaining
-devices. If there is a spare drive, the driver will start recreating
-on one of the spare drives the data what was on that failed drive,
+devices. If there are spare drives, the driver will start recreating
+on one of the spare drives the data which was on that failed drive,
either by copying a working drive in a RAID1 configuration, or by
doing calculations with the parity block on RAID4, RAID5 or RAID6, or
-by finding a copying originals for RAID10.
+by finding and copying originals for RAID10.
In kernels prior to about 2.6.15, a read error would cause the same
effect as a write error. In later kernels, a read-error will instead
@@ -345,7 +345,7 @@ cause md to attempt a recovery by overwriting the bad block. i.e. it
will find the correct data from elsewhere, write it over the block
that failed, and then try to read it back again. If either the write
or the re-read fail, md will treat the error the same way that a write
-error is treated and will fail the whole device.
+error is treated, and will fail the whole device.
While this recovery process is happening, the md driver will monitor
accesses to the array and will slow down the rate of recovery if other
@@ -384,7 +384,7 @@ causing an enormous recovery cost.
The intent log can be stored in a file on a separate device, or it can
be stored near the superblocks of an array which has superblocks.
-It is possible to add an intent log or an active array, or remove an
+It is possible to add an intent log to an active array, or remove an
intent log if one is present.
In 2.6.13, intent bitmaps are only supported with RAID1. Other levels
@@ -419,9 +419,9 @@ also known as
.IR Reshaping ,
is the processes of re-arranging the data stored in each stripe into a
new layout. This might involve changing the number of devices in the
-array (so the stripes are wider) changing the chunk size (so stripes
+array (so the stripes are wider), changing the chunk size (so stripes
are deeper or shallower), or changing the arrangement of data and
-parity, possibly changing the raid level (e.g. 1 to 5 or 5 to 6).
+parity (possibly changing the raid level, e.g. 1 to 5 or 5 to 6).
As of Linux 2.6.17, md can reshape a raid5 array to have more
devices. Other possibilities may follow in future kernels.
@@ -445,29 +445,29 @@ Disable writes to that section of the array (using the
.B sysfs
interface),
.IP \(bu 4
-Take a copy of the data somewhere (i.e. make a backup)
+take a copy of the data somewhere (i.e. make a backup),
.IP \(bu 4
-Allow the process to continue and invalidate the backup and restore
+allow the process to continue and invalidate the backup and restore
write access once the critical section is passed, and
.IP \(bu 4
-Provide for restoring the critical data before restarting the array
+provide for restoring the critical data before restarting the array
after a system crash.
.PP
.B mdadm
-version 2.4 and later will do this for growing a RAID5 array.
+versions from 2.4 do this for growing a RAID5 array.
For operations that do not change the size of the array, like simply
increasing chunk size, or converting RAID5 to RAID6 with one extra
-device, the entire process is the critical section. In this case the
-restripe will need to progress in stages as a section is suspended,
+device, the entire process is the critical section. In this case, the
+restripe will need to progress in stages, as a section is suspended,
backed up,
-restriped, and released. This is not yet implemented.
+restriped, and released; this is not yet implemented.
.SS SYSFS INTERFACE
-All block devices appear as a directory in
+Each block device appears as a directory in
.I sysfs
-(usually mounted at
+(which is usually mounted at
.BR /sys ).
For MD devices, this directory will contain a subdirectory called
.B md
@@ -486,8 +486,8 @@ This value, if set, overrides the system-wide setting in
.B /proc/sys/dev/raid/speed_limit_min
for this array only.
Writing the value
-.B system
-to this file cause the system-wide setting to have effect.
+.B "system"
+to this file will cause the system-wide setting to have effect.
.TP
.B md/sync_speed_max
@@ -561,7 +561,7 @@ operation is started.
As mentioned above, md will not normally start a RAID4, RAID5, or
RAID6 that is both dirty and degraded as this situation can imply
hidden data loss. This can be awkward if the root filesystem is
-affected. Using the module parameter allows such arrays to be started
+affected. Using this module parameter allows such arrays to be started
at boot time. It should be understood that there is a real (though
small) risk of data corruption in this situation.
@@ -603,15 +603,15 @@ is ignored (legacy support).
Contains information about the status of currently running array.
.TP
.B /proc/sys/dev/raid/speed_limit_min
-A readable and writable file that reflects the current goal rebuild
+A readable and writable file that reflects the current "goal" rebuild
speed for times when non-rebuild activity is current on an array.
The speed is in Kibibytes per second, and is a per-device rate, not a
-per-array rate (which means that an array with more disc will shuffle
+per-array rate (which means that an array with more disks will shuffle
more data for a given speed). The default is 100.
.TP
.B /proc/sys/dev/raid/speed_limit_max
-A readable and writable file that reflects the current goal rebuild
+A readable and writable file that reflects the current "goal" rebuild
speed for times when no non-rebuild activity is current on an array.
The default is 100,000.
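Since both limits are per-device rates, the aggregate I/O an array moves during rebuild scales with its disk count, as the hunk above notes. A hypothetical helper, purely for illustration:

```python
# The speed_limit_min/max values are per-device rates in KiB/s.
# The total data shuffled by the array scales with the disk count.

def array_rebuild_rate_kib(per_device_kib, n_disks):
    """Aggregate rebuild throughput for the whole array, in KiB/s."""
    return per_device_kib * n_disks

# At the default minimum of 100 KiB/s per device, a 6-disk array still
# moves about 600 KiB/s in total.
assert array_rebuild_rate_kib(100, 6) == 600
```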

mdadm.8

@@ -9,7 +9,7 @@
.SH NAME
mdadm \- manage MD devices
.I aka
-Linux Software Raid.
+Linux Software RAID
.SH SYNOPSIS
@@ -43,8 +43,7 @@ and
.B MULTIPATH
is not a Software RAID mechanism, but does involve
-multiple devices. For
-.B MULTIPATH
+multiple devices:
each device is a path to one common physical storage device.
.B FAULTY
@@ -91,7 +90,7 @@ provides a layer over a true device that can be used to inject faults.
mdadm has several major modes of operation:
.TP
.B Assemble
-Assemble the parts of a previously created
+Assemble the components of a previously created
array into an active array. Components can be explicitly given
or can be searched for.
.B mdadm
@@ -106,7 +105,7 @@ sorts of arrays,
.I mdadm
cannot differentiate between initial creation and subsequent assembly
of an array. It also cannot perform any checks that appropriate
-devices have been requested. Because of this, the
+components have been requested. Because of this, the
.B Build
mode should only be used together with a complete understanding of
what you are doing.
@@ -120,7 +119,7 @@ Create a new array with per-device superblocks.
.TP
.B "Follow or Monitor"
Monitor one or more md devices and act on any state changes. This is
-only meaningful for raid1, 4, 5, 6, 10 or multipath arrays as
+only meaningful for raid1, 4, 5, 6, 10 or multipath arrays, as
only these have interesting state. raid0 or linear never have
missing, spare, or failed drives, so there is nothing to monitor.
@@ -128,8 +127,8 @@ missing, spare, or failed drives, so there is nothing to monitor.
.B "Grow"
Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options including changing the active size
-of component devices in RAID level 1/4/5/6 and changing the number of
-active devices in RAID1/5/6.
+of component devices and changing the number of active devices in RAID
+levels 1/4/5/6, as well as adding or removing a write-intent bitmap.
.TP
.B "Incremental Assembly"
@@ -219,7 +218,7 @@ mode to be assumed.
.TP
.BR \-h ", " \-\-help
Display general help message or, after one of the above options, a
-mode specific help message.
+mode-specific help message.
.TP
.B \-\-help\-options
@@ -259,17 +258,17 @@ gives an intermediate level of verbosity.
.TP
.BR \-f ", " \-\-force
-Be more forceful about certain operations. See the various modes of
+Be more forceful about certain operations. See the various modes for
the exact meaning of this option in different contexts.
.TP
.BR \-c ", " \-\-config=
Specify the config file. Default is to use
.BR /etc/mdadm.conf ,
-or if that is missing, then
+or if that is missing then
.BR /etc/mdadm/mdadm.conf .
If the config file given is
-.B partitions
+.B "partitions"
then nothing will be read, but
.I mdadm
will act as though the config file contained exactly
@@ -278,26 +277,25 @@ and will read
.B /proc/partitions
to find a list of devices to scan.
If the word
-.B none
+.B "none"
is given for the config file, then
.I mdadm
will act as though the config file were empty.
.TP
.BR \-s ", " \-\-scan
-scan config file or
+Scan config file or
.B /proc/mdstat
for missing information.
In general, this option gives
.B mdadm
-permission to get any missing information, like component devices,
-array devices, array identities, and alert destination from the
-configuration file:
-.BR /etc/mdadm.conf .
-One exception is MISC mode when using
+permission to get any missing information (like component devices,
+array devices, array identities, and alert destination) from the
+configuration file (see previous option);
+one exception is MISC mode when using
.B \-\-detail
or
-.B \-\-stop
+.B \-\-stop,
in which case
.B \-\-scan
says to get a list of array devices from
@@ -320,11 +318,11 @@ Options are:
.RS
.IP "0, 0.90, default"
Use the original 0.90 format superblock. This format limits arrays to
-28 componenet devices and limits component devices of levels 1 and
+28 component devices and limits component devices of levels 1 and
greater to 2 terabytes.
.IP "1, 1.0, 1.1, 1.2"
Use the new version-1 format superblock. This has few restrictions.
-The different subversion store the superblock at different locations
+The different sub-versions store the superblock at different locations
on the device, either at the end (for 1.0), at the start (for 1.1) or
4K from the start (for 1.2).
.RE
@@ -333,13 +331,13 @@ on the device, either at the end (for 1.0), at the start (for 1.1) or
.B \-\-homehost=
This will override any
.B HOMEHOST
-setting in the config file and provides the identify of the host which
+setting in the config file and provides the identity of the host which
should be considered the home for any arrays.
When creating an array, the
.B homehost
will be recorded in the superblock. For version-1 superblocks, it will
-be prefixed to the array name. For version-0.90 superblocks part of
+be prefixed to the array name. For version-0.90 superblocks, part of
the SHA1 hash of the hostname will be stored in the later half of the
UUID.
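The idea of storing part of the SHA1 hash of the hostname in the later half of the 128-bit UUID, as the hunk above describes for version-0.90 superblocks, can be illustrated like this. The exact byte layout here is an assumption for illustration, not mdadm's actual on-disk format:

```python
# Illustration only: tag a 16-byte UUID with part of the SHA1 of the
# hostname in its later half. Which bytes are replaced, and how many,
# is an assumption made for this sketch (not mdadm's real layout).
import hashlib
import os

def tag_uuid_with_homehost(uuid16, homehost):
    """Keep the first 8 bytes, replace the last 8 with SHA1(hostname) bytes."""
    digest = hashlib.sha1(homehost.encode()).digest()
    return uuid16[:8] + digest[:8]

u = tag_uuid_with_homehost(os.urandom(16), "myhost")
assert len(u) == 16
assert u[8:] == hashlib.sha1(b"myhost").digest()[:8]
```

This is what lets auto-assembly recognise arrays that "belong" to the current host without a separate metadata field.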
@@ -381,7 +379,7 @@ number of spare devices.
.TP
.BR \-z ", " \-\-size=
-Amount (in Kibibytes) of space to use from each drive in RAID1/4/5/6.
+Amount (in Kibibytes) of space to use from each drive in RAID level 1/4/5/6.
This must be a multiple of the chunk size, and must leave about 128Kb
of space at the end of the drive for the RAID superblock.
If this is not specified
@@ -436,8 +434,8 @@ The layout of the raid5 parity block can be one of
The default is
.BR left\-symmetric .
-When setting the failure mode for
-.I faulty
+When setting the failure mode for level
+.I faulty,
the options are:
.BR write\-transient ", " wt ,
.BR read\-transient ", " rt ,
@@ -447,10 +445,10 @@ the options are:
.BR read\-fixable ", " rf ,
.BR clear ", " flush ", " none .
-Each mode can be followed by a number which is used as a period
+Each failure mode can be followed by a number, which is used as a period
between fault generation. Without a number, the fault is generated
once on the first relevant request. With a number, the fault will be
-generated after that many request, and will continue to be generated
+generated after that many requests, and will continue to be generated
every time the period elapses.
Multiple failure modes can be current simultaneously by using the
@@ -466,23 +464,23 @@ the level of the array ("faulty")
must be specified before the fault mode is specified.
Finally, the layout options for RAID10 are one of 'n', 'o' or 'f' followed
-by a small number. The default is 'n2'.
+by a small number. The default is 'n2'. The supported options are:
-.I n
+.I 'n'
signals 'near' copies. Multiple copies of one data block are at
similar offsets in different devices.
-.I o
+.I 'o'
signals 'offset' copies. Rather than the chunks being duplicated
within a stripe, whole stripes are duplicated but are rotated by one
device so duplicate blocks are on different devices. Thus subsequent
copies of a block are in the next drive, and are one chunk further
down.
-.I f
+.I 'f'
signals 'far' copies
-(multiple copies have very different offsets). See md(4) for more
-detail about 'near' and 'far'.
+(multiple copies have very different offsets).
+See md(4) for more detail about 'near' and 'far'.
The number is the number of copies of each datablock. 2 is normal, 3
can be useful. This number can be at most equal to the number of
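The 'near' layout described above places the copies of each block at similar offsets on adjacent devices. A simplified sketch of that placement, ignoring chunk-size details (the function and its mapping are illustrative, not the driver's exact arithmetic):

```python
# Simplified 'near' (e.g. n2) layout sketch: replicas of one logical
# block sit at the same offset on consecutive devices, wrapping around
# the device set. Chunk-size handling is deliberately omitted.

def near_layout(block, n_devices, copies=2):
    """Map a logical block to its (device, offset) replica positions."""
    first = (block * copies) % n_devices
    offset = (block * copies) // n_devices
    return [((first + c) % n_devices, offset) for c in range(copies)]

# With 4 devices and 2 copies, block 0 lives on devices 0 and 1 at
# offset 0, and block 1 on devices 2 and 3 at offset 0.
assert near_layout(0, 4) == [(0, 0), (1, 0)]
assert near_layout(1, 4) == [(2, 0), (3, 0)]
```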
@@ -504,10 +502,10 @@ exist unless
.B \-\-force
is also given. The same file should be provided
when assembling the array. If the word
-.B internal
+.B "internal"
is given, then the bitmap is stored with the metadata on the array,
and so is replicated on all devices. If the word
-.B none
+.B "none"
is given with
.B \-\-grow
mode, then any bitmap that is present is removed.
@@ -523,7 +521,7 @@ Storing bitmap files on other filesystems may result in serious problems.
Set the chunksize of the bitmap. Each bit corresponds to that many
Kilobytes of storage.
When using a file based bitmap, the default is to use the smallest
-size that is atleast 4 and requires no more than 2^21 chunks.
+size that is at-least 4 and requires no more than 2^21 chunks.
When using an
.B internal
bitmap, the chunksize is automatically determined to make best use of
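The default rule above (smallest chunk size that is at least 4 KiB and needs no more than 2^21 chunks) can be sketched as follows. The power-of-two stepping is an assumption made for this illustration:

```python
# Bitmap chunk-size sketch: pick the smallest chunk size that is at
# least 4 KiB and covers the array in no more than 2**21 chunks.
# Doubling the candidate each step is an assumption for illustration.

def bitmap_chunk_kib(array_kib):
    chunk = 4
    while (array_kib + chunk - 1) // chunk > 2**21:
        chunk *= 2
    return chunk

assert bitmap_chunk_kib(1 << 20) == 4        # small array: minimum 4 KiB chunks
assert bitmap_chunk_kib(1 << 34) == 1 << 13  # huge array needs larger chunks
```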
@@ -560,7 +558,7 @@ when trying to recover from a major failure as you can be sure that no
data will be affected unless you actually write to the array. It can
also be used when creating a RAID1 or RAID10 if you want to avoid the
initial resync, however this practice \(em while normally safe \(em is not
-recommended. Use this ony if you really know what you are doing.
+recommended. Use this only if you really know what you are doing.
.TP
.BR \-\-backup\-file=
@@ -697,10 +695,10 @@ will look for super blocks with a minor number of 0.
.BR \-N ", " \-\-name=
Specify the name of the array to assemble. This must be the name
that was specified when creating the array. It must either match
-then name stored in the superblock exactly, or it must match
+the name stored in the superblock exactly, or it must match
with the current
.I homehost
-is added to the start of the given name.
+prefixed to the start of the given name.
.TP
.BR \-f ", " \-\-force
@@ -721,10 +719,10 @@ an attempt will be made to start it anyway.
.B \-\-no\-degraded
This is the reverse of
.B \-\-run
-in that it inhibits the started if array unless all expected drives
+in that it inhibits the startup of array unless all expected drives
are present. This is only needed with
-.B \-\-scan
-and can be used if you physical connections to devices are
+.B \-\-scan,
+and can be used if the physical connections to devices are
not as reliable as you would like.
.TP
@@ -859,10 +857,10 @@ update the relevant field in the metadata.
.TP
.B \-\-auto\-update\-homehost
-This flag is only meaning with auto-assembly (see discussion below).
+This flag is only meaningful with auto-assembly (see discussion below).
In that situation, if no suitable arrays are found for this homehost,
.I mdadm
-will recan for any arrays at all and will assemble them and update the
+will rescan for any arrays at all and will assemble them and update the
homehost to match the current host.
.SH For Manage mode:
@@ -888,7 +886,7 @@ and
can be given to
.BR \-\-remove .
The first causes all failed device to be removed. The second causes
-any device which is no longer connected to the system (i.e and open
+any device which is no longer connected to the system (i.e an 'open'
returns
.BR ENXIO )
to be removed. This will only succeed for devices that are spares or
@@ -908,19 +906,19 @@ same as
.BR \-\-fail .
.P
-Each of these options require that the first device list is the array
-to be acted upon and the remainder are component devices to be added,
-removed, or marked as fault. Several different operations can be
+Each of these options require that the first device listed is the array
+to be acted upon, and the remainder are component devices to be added,
+removed, or marked as faulty. Several different operations can be
specified for different devices, e.g.
.in +5
mdadm /dev/md0 \-\-add /dev/sda1 \-\-fail /dev/sdb1 \-\-remove /dev/sdb1
.in -5
Each operation applies to all devices listed until the next
-operations.
+operation.
If an array is using a write-intent bitmap, then devices which have
been removed can be re-added in a way that avoids a full
-reconstruction but instead just updated the blocks that have changed
+reconstruction but instead just updates the blocks that have changed
since the device was removed. For arrays with persistent metadata
(superblocks) this is done automatically. For arrays created with
.B \-\-build
@@ -928,10 +926,9 @@ mdadm needs to be told that this device we removed recently with
.BR \-\-re\-add .
Devices can only be removed from an array if they are not in active
-use. i.e. that must be spares or failed devices. To remove an active
-device, it must be marked as
-.B faulty
-first.
+use, i.e. that must be spares or failed devices. To remove an active
+device, it must first be marked as
+.B faulty.
.SH For Misc mode:
@@ -1137,14 +1134,15 @@ is not given, then
.I mdadm
acts as though
.B \-\-scan
-was given and identify information is extracted from the configuration file.
+was given and identity information is extracted from the configuration file.
The identity can be given with the
.B \-\-uuid
option, with the
.B \-\-super\-minor
-option, can be found in the config file, or will be taken from the
-super block on the first component-device listed on the command line.
+option, will be taken from the md-device record in the config file, or
+will be taken from the super block of the first component-device
+listed on the command line.
Devices can be given on the
.B \-\-assemble
@@ -1179,7 +1177,6 @@ intent is clear. i.e. the name must be in a standard form, or the
.B \-\-auto
option must be given to clarify how and whether the device should be
created.
This can be useful for handling partitioned devices (which don't have
a stable device number \(em it can change after a reboot) and when using
"udev" to manage your
@@ -1189,7 +1186,7 @@ initialisation conventions).
If the option to "auto" is "mdp" or "part" or (on the command line
only) "p", then mdadm will create a partitionable array, using the
-first free one that is not in use, and does not already have an entry
+first free one that is not in use and does not already have an entry
in /dev (apart from numeric /dev/md* entries).
If the option to "auto" is "yes" or "md" or (on the command line)
@@ -1200,7 +1197,7 @@ It is expected that the "auto" functionality will be used to create
device entries with meaningful names such as "/dev/md/home" or
"/dev/md/root", rather than names based on the numerical array number.
-When using this option to create a partitionable array, the device
+When using option "auto" to create a partitionable array, the device
files for the first 4 partitions are also created. If a different
number is required it can be simply appended to the auto option.
e.g. "auto=part8". Partition names are created by appending a digit
@@ -1232,7 +1229,7 @@ anything that it finds which is tagged as belonging to the given
homehost. This is the only situation where
.I mdadm
will assemble arrays without being given specific device name or
-identify information for the array.
+identity information for the array.
If
.I mdadm
@@ -1248,8 +1245,8 @@ so for example
If the array uses version-1 metadata, then the
.B name
from the superblock is used to similarly create a name in
-.BR /dev/md .
-The name will have any 'host' prefix stripped first.
+.BR /dev/md
+(the name will have any 'host' prefix stripped first).
If
.I mdadm
@@ -1274,7 +1271,7 @@ devices from one host to another.
.HP 12
Usage:
.B mdadm \-\-build
-.I device
+.I md-device
.BI \-\-chunk= X
.BI \-\-level= Y
.BI \-\-raid\-devices= Z
@@ -1297,7 +1294,7 @@ once complete.
.HP 12
Usage:
.B mdadm \-\-create
-.I device
+.I md-device
.BI \-\-chunk= X
.BI \-\-level= Y
.br
@@ -1447,7 +1444,7 @@ The exit status of
.I mdadm
will normally be 0 unless
.I mdadm
-failed to get useful information about the device(s). However if the
+failed to get useful information about the device(s); however, if the
.B \-\-test
option is given, then the exit status will be:
.RS
@@ -1472,9 +1469,9 @@ The device should be a component of an md array.
will read the md superblock of the device and display the contents.
If
.B \-\-brief
-is given, or
+or
.B \-\-scan
-then multiple devices that are components of the one array
+is given, then multiple devices that are components of the one array
are grouped together and reported in a single entry suitable
for inclusion in
.BR /etc/mdadm.conf .
@@ -1553,11 +1550,11 @@ The result of monitoring the arrays is the generation of events.
These events are passed to a separate program (if specified) and may
be mailed to a given E-mail address.
-When passing event to program, the program is run once for each event
-and is given 2 or 3 command-line arguments. The first is the
-name of the event (see below). The second is the name of the
+When passing events to a program, the program is run once for each event,
+and is given 2 or 3 command-line arguments: the first is the
+name of the event (see below), the second is the name of the
md device which is affected, and the third is the name of a related
-device if relevant, such as a component device that has failed.
+device if relevant (such as a component device that has failed).
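The 2-or-3-argument calling convention described above can be sketched with a minimal handler; the function and the formatting are hypothetical, not part of mdadm:

```python
# Sketch of a handler for mdadm's event program: it is invoked once per
# event with 2 or 3 arguments: event name, affected md device, and
# optionally a related device (such as a failed component).

def handle_event(argv):
    """Format one monitor event from a program-style argv list."""
    event, md_device = argv[1], argv[2]
    component = argv[3] if len(argv) > 3 else None
    line = f"{event} on {md_device}"
    if component:
        line += f" (component {component})"
    return line

assert handle_event(["prog", "Fail", "/dev/md0", "/dev/sdb1"]) \
    == "Fail on /dev/md0 (component /dev/sdb1)"
assert handle_event(["prog", "TestMessage", "/dev/md0"]) \
    == "TestMessage on /dev/md0"
```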
If
.B \-\-scan
@@ -1566,7 +1563,7 @@ command line or in the config file. If neither are available, then
.B mdadm
will not monitor anything.
Without
-.B \-\-scan
+.B \-\-scan,
.B mdadm
will continue monitoring as long as something was found to monitor. If
no program or email is given, then each event is reported to
@@ -1614,7 +1611,7 @@ faulty. (syslog priority: Critical)
.TP
.B FailSpare
A spare component device which was being rebuilt to replace a faulty
-device has failed. (syslog priority: Critial)
+device has failed. (syslog priority: Critical)
.TP
.B SpareActive
@@ -1636,7 +1633,7 @@ generated when
notices a drive failure which causes degradation, but only when
.I mdadm
notices that an array is degraded when it first sees the array.
-(syslog priority: Critial)
+(syslog priority: Critical)
.TP
.B MoveSpare
@@ -1652,7 +1649,7 @@ If
has been told, via the config file, that an array should have a certain
number of spare devices, and
.I mdadm
-detects that it has fewer that this number when it first sees the
+detects that it has fewer than this number when it first sees the
array, it will report a
.B SparesMissing
message.
@ -1667,14 +1664,14 @@ flag was given.
.RE
Only
.B Fail ,
.B FailSpare ,
.B DegradedArray ,
.B SparesMissing ,
.B Fail,
.B FailSpare,
.B DegradedArray,
.B SparesMissing
and
.B TestMessage
cause Email to be sent. All events cause the program to be run.
The program is run with two or three arguments, they being the event
The program is run with two or three arguments: the event
name, the array device and possibly a second device.
Each event has an associated array device (e.g.
@ -1692,16 +1689,16 @@ the second device is the array that the spare was moved from.
For
.B mdadm
to move spares from one array to another, the different arrays need to
be labelled with the same
be labeled with the same
.B spare-group
in the configuration file. The
.B spare-group
name can be any string. It is only necessary that different spare
name can be any string; it is only necessary that different spare
groups use different names.
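For illustration (the UUIDs and the group name below are made up), two arrays that may exchange spares would be declared in the configuration file like this:

```
# both arrays share one pool of spares via a common spare-group name
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef spare-group=pool1
ARRAY /dev/md1 UUID=fedcba98:76543210:fedcba98:76543210 spare-group=pool1
```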
When
.B mdadm
detects that an array which is in a spare group has fewer active
detects that an array in a spare group has fewer active
devices than necessary for the complete array, and has no spare
devices, it will look for another array in the same spare group that
has a full complement of working drives and a spare. It will then
@ -1724,7 +1721,7 @@ for RAID1, RAID5 and RAID6.
.IP \(bu 4
increase the "raid-disks" attribute of RAID1, RAID5, and RAID6.
.IP \(bu 4
add a write-intent bitmap to any array which support these bitmaps, or
add a write-intent bitmap to any array which supports these bitmaps, or
remove a write-intent bitmap from such an array.
.PP
@ -1752,7 +1749,7 @@ inactive devices.
When reducing the number of devices in a RAID1 array, the slots which
are to be removed from the array must already be vacant. That is, the
devices that which were in those slots must be failed and removed.
devices which were in those slots must be failed and removed.
When the number of devices is increased, any hot spares that are
present will be activated immediately.
@ -1778,7 +1775,7 @@ to restore the backup and reassemble the array.
.SS BITMAP CHANGES
A write-intent bitmap can be added to, or removed from, an active
array. Either internal bitmaps, or bitmaps stored in a separate file
array. Either internal bitmaps, or bitmaps stored in a separate file,
can be added. Note that if you add a bitmap stored in a file which is
in a filesystem that is on the raid array being affected, the system
will deadlock. The bitmap must be on a separate filesystem.
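As a sketch (the device and file names are hypothetical, and a file-based bitmap must live on a filesystem that is not on the array itself), adding and removing a write-intent bitmap looks like:

```
# add an internal write-intent bitmap to a running array
mdadm --grow /dev/md0 --bitmap=internal
# or keep the bitmap in a file on some other filesystem
mdadm --grow /dev/md0 --bitmap=/boot/md0-bitmap
# remove the bitmap again
mdadm --grow /dev/md0 --bitmap=none
```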
@ -1808,7 +1805,7 @@ to be conditionally added to an appropriate array.
.I mdadm
performs a number of tests to determine if the device is part of an
array, and which array is should be part of. If an appropriate array
array, and which array it should be part of. If an appropriate array
is found, or can be created,
.I mdadm
adds the device to the array and conditionally starts the array.
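A typical use (the partition name here is hypothetical) is from a udev rule or hotplug script, feeding devices to mdadm one at a time as they are discovered:

```
# newly discovered partition; mdadm decides which array it belongs to
mdadm --incremental /dev/sdc1
# later, once device discovery is complete, start any arrays still waiting
mdadm --incremental --run --scan
```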
@ -1820,8 +1817,8 @@ will only add devices to an array which were previously working
automatic inclusion of a new drive as a spare in some array.
.B "mdadm \-\-incremental"
requires a bug present in all kernels through 2.6.19, to be fixed.
Hopefully this will be fixed in 2.6.20. Alternately apply the patch
requires a bug-fix in all kernels through 2.6.19.
Hopefully, this will be fixed in 2.6.20; alternately, apply the patch
which is included with the mdadm source distribution. If
.I mdadm
detects that this bug is present, it will abort any attempt to use
@ -1865,11 +1862,11 @@ The metadata can match in two ways. Either there is an array listed
in
.B mdadm.conf
which identifies the array (either by UUID, by name, by device list,
or by minor-number), the array was created with a
or by minor-number), or the array was created with a
.B homehost
specified, and that
specified and that
.B homehost
matches that which is given in
matches the one in
.B mdadm.conf
or on the command line.
If
@ -1879,7 +1876,7 @@ current host, the device will be rejected.
.IP +
.I mdadm
keeps a list of arrays that is has partly assembled in
keeps a list of arrays that it has partially assembled in
.B /var/run/mdadm/map
(or
.B /var/run/mdadm.map
@ -1918,7 +1915,7 @@ devices present for the data to be accessible. For a raid1, that
means one device will start the array. For a clean raid5, the array
will be started as soon as all but one drive is present.
Note that neither of these approaches is really ideal. If it is can
Note that neither of these approaches is really ideal. If it can
be known that all device discovery has completed, then
.br
.B " mdadm \-IRs"
@ -1939,12 +1936,12 @@ one, and will provide brief information about the device.
.B " mdadm \-\-assemble \-\-scan"
.br
This will assemble and start all arrays listed in the standard config file
This will assemble and start all arrays listed in the standard config
file. This command will typically go in a system startup file.
.B " mdadm \-\-stop \-\-scan"
.br
This will shut down all array that can be shut down (i.e. are not
This will shut down all arrays that can be shut down (i.e. are not
currently in use). This will typically go in a system shutdown script.
.B " mdadm \-\-follow \-\-scan \-\-delay=120"
@ -1971,9 +1968,9 @@ contain unwanted detail.
.B " echo 'DEVICE /dev/hd[a\-z] /dev/sd*[a\-z]' > mdadm.conf"
.br
.B " mdadm \-\-examine \-\-scan \-\-config=mdadm.conf >> mdadm.conf"
.ber
This will find what arrays could be assembled from existing IDE and
SCSI whole drives (not partitions) and store the information is the
.br
This will find arrays which could be assembled from existing IDE and
SCSI whole drives (not partitions), and store the information in the
format of a config file.
This file is very likely to contain unwanted detail, particularly
the
@ -1988,7 +1985,7 @@ actual config file.
Create a list of devices by reading
.BR /proc/partitions ,
scan these for RAID superblocks, and print out a brief listing of all
that was found.
that were found.
.B " mdadm \-Ac partitions \-m 0 /dev/md0"
.br
@ -2060,7 +2057,7 @@ for more details.
.SS /var/run/mdadm/map
When
.B \-\-incremental
mode is used. this file gets a list of arrays currently being created.
mode is used, this file gets a list of arrays currently being created.
If
.B /var/run/mdadm
does not exist as a directory, then
@ -2077,7 +2074,7 @@ behaviour when creating device files via the
option.
The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) either of
array available in 2.4 and earlier) are either of
.IP
/dev/mdNN
.br
@ -2085,7 +2082,7 @@ array available in 2.4 and earlier) either of
.PP
where NN is a number.
The standard names for partitionable arrays (as available from 2.6
onwards) is one of
onwards) are either of
.IP
/dev/md/dNN
.br
@ -6,7 +6,7 @@
''' See file COPYING in distribution for details.
.TH MDADM.CONF 5
.SH NAME
mdadm.conf \- configuration for management of Software Raid with mdadm
mdadm.conf \- configuration for management of Software RAID with mdadm
.SH SYNOPSIS
/etc/mdadm.conf
.SH DESCRIPTION
@ -211,7 +211,7 @@ line and it should have only one address.
.B MAILFROM
The
.B mailfrom
line (which can only be abbreviate at leat 5 characters) gives an
line (which can only be abbreviated to at least 5 characters) gives an
address to appear in the "From" address for alert mails. This can be
useful if you want to explicitly set a domain, as the default from
address is "root" with no domain. All words on this line are
@ -295,7 +295,7 @@ DEVICE /dev/sd[bcdjkl]1
.br
DEVICE /dev/hda1 /dev/hdb1
# /dev/md0 is known by it's UID.
# /dev/md0 is known by its UID.
.br
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
.br
@ -305,7 +305,7 @@ ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
.br
ARRAY /dev/md1 superminor=1
.br
# /dev/md2 is made from precisey these two devices
# /dev/md2 is made from precisely these two devices
.br
ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
@ -3,7 +3,7 @@
.SH NAME
mdassemble \- assemble MD devices
.I aka
Linux Software Raid.
Linux Software RAID
.SH SYNOPSIS
@ -12,7 +12,7 @@ Linux Software Raid.
.SH DESCRIPTION
.B mdassemble
is a tiny program that can be used to assemble MD devices inside an
initial ramdisk (initrd) or initramfs, it is meant to replace the in-kernel
initial ramdisk (initrd) or initramfs; it is meant to replace the in-kernel
automatic raid detection and activation.
It can be built statically and linked against lightweight libc alternatives, like
.B dietlibc,