.\"	$OpenBSD: softraid.4,v 1.23 2009/06/23 18:49:10 jmc Exp $
.\"
.\" Copyright (c) 2007 Todd T. Fries   <todd@OpenBSD.org>
.\" Copyright (c) 2007 Marco Peereboom <marco@OpenBSD.org>
.\"
.\" Permission to use, copy, modify, and distribute this software for any
.\" purpose with or without fee is hereby granted, provided that the above
.\" copyright notice and this permission notice appear in all copies.
.\"
.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
.\"
.Dd $Mdocdate: June 23 2009 $
.Dt SOFTRAID 4
.Os
.Sh NAME
.Nm softraid
.Nd Software RAID
.Sh SYNOPSIS
.Cd "softraid0 at root"
.Cd "scsibus*  at softraid?"
.Sh DESCRIPTION
The
.Nm
device emulates a Host Bus Adapter (HBA) that provides RAID and other
I/O-related services.
The
.Nm
device provides a scaffold to implement more complex I/O transformation
disciplines.
For example, chunks can be tied together into a mirroring discipline.
There is effectively no limit on the types of disciplines that can be
written, as long as they fit the SCSI model.
.Pp
.Nm
supports a number of
.Em disciplines .
A discipline is a collection of functions
that provides specific I/O functionality.
This includes I/O path, bring-up, failure recovery, and statistical
information gathering.
Essentially, a discipline is a lower-level driver that provides the I/O
transformation for the
.Nm
device.
.Pp
A
.Em volume
is a virtual disk device that is made up of a collection of chunks.
.Pp
A
.Em chunk
is a partition or storage area of fstype
.Dq RAID .
.Xr disklabel 8
is used to alter the fstype.
.Pp
Currently
.Nm
supports the following disciplines:
.Bl -ohang -offset indent
.It RAID 0
A
.Em striping
discipline.
It segments data over a number of chunks to increase performance.
RAID 0 provides no redundancy and does not protect against data loss.
.It RAID 1
A
.Em mirroring
discipline.
It copies data across more than one chunk to protect against data loss.
Read performance is increased,
though at the cost of write speed.
Unlike traditional RAID 1,
.Nm
supports the use of more than two chunks in a RAID 1 setup.
.It RAID 4
A striping discipline with a fixed parity chunk.
It stripes data across chunks and provides parity to prevent data loss
in the event of a single chunk failure.
Read performance is increased,
though write performance is limited by the parity chunk.
.It RAID 5
A striping discipline with floating parity across all chunks.
It stripes data across chunks and provides parity to prevent data loss
in the event of a single chunk failure.
Read performance is increased;
write performance should be faster than RAID 4.
.It CRYPTO
An
.Em encrypting
discipline.
It encrypts data on a single chunk to provide data confidentiality.
CRYPTO does not provide redundancy.
.El
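.Pp
As a rough sketch of the usable capacity each discipline yields,
assuming three chunks of 100MB each
(hypothetical sizes; softraid metadata overhead is ignored):
.Bd -literal -offset indent
n=3; s=100                           # chunk count and per-chunk size in MB
echo "RAID 0: $((n * s))MB"          # striping: sum of all chunks
echo "RAID 1: ${s}MB"                # mirroring: size of one chunk
echo "RAID 4/5: $(((n - 1) * s))MB"  # one chunk's worth lost to parity
echo "CRYPTO: ${s}MB"                # single chunk
.Ed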
.Sh EXAMPLES
An example of creating a 3-chunk RAID 1 volume from scratch follows:
.Pp
Initialize the partition tables of all disks:
.Bd -literal -offset indent
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
.Ed
.Pp
Now create RAID partitions on all disks:
.Bd -literal -offset indent
# printf "a\en\en\en\enRAID\enw\enq\en\en" | disklabel -E wd1
# printf "a\en\en\en\enRAID\enw\enq\en\en" | disklabel -E wd2
# printf "a\en\en\en\enRAID\enw\enq\en\en" | disklabel -E wd3
.Ed
.Pp
Assemble the RAID volume:
.Bd -literal -offset indent
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
.Ed
.Pp
The console will show what device was added to the system:
.Bd -literal -offset indent
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: \*(LtOPENBSD, SR RAID 1, 001\*(Gt SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
.Ed
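.Pp
The status of the volume and its chunks can be inspected at any time
(the exact output depends on the chunks in use):
.Bd -literal -offset indent
# bioctl sd0
.Ed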
.Pp
It is good practice to wipe the front of the disk before using it:
.Bd -literal -offset indent
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
.Ed
.Pp
Initialize the partition table and create a filesystem on the
new RAID volume:
.Bd -literal -offset indent
# fdisk -iy sd0
# printf "a\en\en\en\en4.2BSD\enw\enq\en\en" | disklabel -E sd0
# newfs /dev/rsd0a
.Ed
.Pp
The RAID volume is now ready to be used as a normal disk device.
See
.Xr bioctl 8
for more information on configuration of RAID sets.
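.Pp
A CRYPTO volume is created in a similar fashion from a single
.Dq RAID
chunk.
For example, assuming a chunk prepared on wd1a as above,
.Xr bioctl 8
will prompt for a passphrase:
.Bd -literal -offset indent
# bioctl -c C -l /dev/wd1a softraid0
.Ed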
.Sh SEE ALSO
.Xr bioctl 8 ,
.Xr disklabel 8 ,
.Xr fdisk 8 ,
.Xr newfs 8
.Sh HISTORY
The
.Nm
driver first appeared in
.Ox 4.2 .
.Sh AUTHORS
.An Marco Peereboom .
.Sh CAVEATS
The driver relies on underlying hardware to properly fail chunks.
Currently the RAID 1 discipline is unable to recover a
failed chunk.
.Pp
The RAID 1 discipline does not initialize the mirror upon creation.
This is by design:
any sector that is read will have been written first,
so there is no point in wasting time syncing random data.
.Pp
The RAID 4 and 5 disciplines do not initialize the parity upon creation.
This is because the scrub functionality is not yet implemented.
.Pp
Currently there is no automated mechanism to recover from failed disks.
.Pp
There is no boot support at this time for any disciplines.
.Pp
Sparc hardware needs to use fstype
.Dq 4.2BSD
instead of
.Dq RAID .
.Pp
Certain RAID levels can protect against some data loss
due to component failure.
RAID is
.Em not
a substitute for good backup practices.