Linux Beginner Notes 14


RAID:

Redundant Arrays of Inexpensive Disks (RAID)

RAID addresses two problems: data read/write performance and data safety.

RAID 0 (Stripe): improves read/write performance.

Stripe: the size of the data chunk allocated to each disk in turn.

If the two disks have unequal capacities, then once the amount of data written exceeds what can be striped across both, the performance benefit of RAID 0 is gone.
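For example, striping a 100 GB disk with a 200 GB disk gives striped performance only for the first 200 GB of data (100 GB on each disk); writes beyond that point land solely on the larger disk and behave like a single plain disk.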

Requires at least two disks. Disk utilization: 100%.

RAID 1 (Mirror): provides data safety.

Requires at least two disks.

Disk utilization is 1/n, where n is the number of disks in the RAID 1 array.

RAID 01 or RAID 10

Provides both read/write performance and data safety, but at a higher cost.

Note the difference in fault tolerance between the two when disks fail.
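For example, with four disks: RAID 10 (a stripe of two mirrored pairs) survives two failures as long as they hit different pairs; RAID 01 (a mirror of two stripes) loses a whole stripe on the first failure, so any second failure on the surviving side destroys the array.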

RAID 5: addresses both the performance problem and the safety problem.

Requires at least three disks. Disk utilization: (n-1)/n.
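For example, three 500 GB disks give (3-1)/3 = 2/3 utilization, i.e. roughly 1000 GB of usable space; one disk's worth of capacity goes to parity.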

Tip: run man mdadm and search with /layout.

spare disk: a standby disk. Software RAID vs. hardware RAID (an external disk array).

Configuring RAID with mdadm:

# rpm -qa | grep mdadm
mdadm-2.6.9-2.el5

Create five partitions first (or add a virtual disk).

RAID 0:

# mdadm --create --auto=yes /dev/md0 --level=0 --raid-devices=2 /dev/sdb5 /dev/sdb6

mdadm: array /dev/md0 started.

# mdadm --detail /dev/md0

Raid Level : raid0

Array Size : 995712 (972.54 MiB 1019.61 MB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 64K

Number Major Minor RaidDevice State
0      8     21    0          active sync /dev/sdb5
1      8     22    1          active sync /dev/sdb6
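To actually use the new array, put a filesystem on it and mount it; a minimal sketch (the mount point name is an assumption):

# mkfs.ext3 /dev/md0          # create an ext3 filesystem on the array
# mkdir /mnt/raid0            # hypothetical mount point
# mount /dev/md0 /mnt/raid0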

RAID 1:

# mdadm --create --auto=yes /dev/md1 --level=1 --raid-devices=2 /dev/sdb7 /dev/sdb8

mdadm: array /dev/md1 started.

RAID 5: (if /dev/md0 from the RAID 0 example still exists, stop it first with mdadm --stop /dev/md0)

# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
/dev/sdb5 /dev/sdb6 /dev/sdb7 /dev/sdb8

# mdadm --detail /dev/md0

Number Major Minor RaidDevice State
0      8     21    0          active sync /dev/sdb5
1      8     22    1          active sync /dev/sdb6
2      8     23    2          active sync /dev/sdb7
3      8     24    -          spare       /dev/sdb8
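While an array is building or rebuilding, you can watch its progress through the kernel's md status file; a quick check (the watch interval is an arbitrary choice):

# cat /proc/mdstat            # shows array state and resync progress
# watch -n 1 cat /proc/mdstat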

RAID 10:

# mdadm --create --auto=yes /dev/md1 --level=10 --raid-devices=4 /dev/sdb[5678]

Managing RAID devices:

Simulating a disk failure:

# mdadm --manage /dev/md0 --fail /dev/sdb6

# mdadm --detail /dev/md0
State : clean, degraded

Removing the failed disk:

# mdadm --manage /dev/md0 --remove /dev/sdb6
mdadm: hot removed /dev/sdb6

Adding a new disk:

# mdadm --manage /dev/md0 --add /dev/sdb6

Writing the configuration file:

# mdadm -Ds > /etc/mdadm.conf

# cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=0.90 UUID=cd10e0dc:f34c795e:a3b34fbc:2560484c

Deleting a RAID device:

Unmount the RAID device: umount

Stop the RAID device: mdadm --stop /dev/md1

Delete the related entry from the configuration file: vim /etc/mdadm.conf and remove the ARRAY line.

Delete the RAID device node /dev/md1: rm -f /dev/md1
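One more step worth knowing (an addition, not in the original notes): each former member disk still carries an md superblock, which mdadm can wipe so the partition can be cleanly reused:

# mdadm --zero-superblock /dev/sdb5   # repeat for each former member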

Going further: monitoring RAID devices, and sharing one spare disk among several arrays. Let Google and Baidu do the work.

Homework: create three partitions and combine them into a RAID 5 named /dev/md1.

Create a mount point /mnt/raid5, then copy the files under /etc/ into the RAID device. After the system reboots, the RAID device should mount automatically on /mnt/raid5 and the files on it should be readable.
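One possible solution sketch (the partition names are assumptions; adjust them to your own disk layout):

# mdadm --create --auto=yes /dev/md1 --level=5 --raid-devices=3 /dev/sdb5 /dev/sdb6 /dev/sdb7
# mkfs.ext3 /dev/md1
# mkdir /mnt/raid5
# mount /dev/md1 /mnt/raid5
# cp -a /etc/* /mnt/raid5
# mdadm -Ds > /etc/mdadm.conf                                  # so the array is assembled at boot
# echo "/dev/md1 /mnt/raid5 ext3 defaults 0 0" >> /etc/fstab   # so it is mounted at boot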

********************************* LVM

LVM provides elastic, resizable disk storage.

PV: Physical Volume

VG: Volume Group

PE: Physical Extent

The number of PEs making up a volume group is limited: at most 65534. The default PE size is 4 MB.
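The limit and the PE size together cap the volume group: with the default 4 MB PE, 65534 × 4 MB ≈ 256 GB; creating the VG with a 16 MB PE (vgcreate -s 16M, shown later) raises the ceiling to about 1 TB.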

LV: Logical Volume

This is the thing the user ultimately works with.

Workflow:

First, you need candidates: disk partitions.

/dev/sdb5 /dev/sdb6 /dev/sdb7 /dev/sdb8 /dev/sdb9

Note: change the partition type to 8e.

This is like signing up for the Super Girl auditions.
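Changing the type could look like this (a sketch of fdisk's interactive commands, prompts abbreviated; /dev/sdb and partition 5 are the example targets):

# fdisk /dev/sdb
Command (m for help): t               # change a partition's type
Partition number: 5
Hex code (type L to list codes): 8e   # 8e = Linux LVM
Command (m for help): w               # write the table and exit
# partprobe /dev/sdb                  # make the kernel re-read the partition table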

Becoming a PV: pvcreate:

# pvcreate /dev/sdb5

Wiping software RAID md superblock on /dev/sdb5

Physical volume "/dev/sdb5" successfully created

# pvcreate /dev/sdb6

Wiping software RAID md superblock on /dev/sdb6
Physical volume "/dev/sdb6" successfully created

Forming a small group, the VG:

# vgcreate vg0 /dev/sdb5 /dev/sdb6
Volume group "vg0" successfully created

Creating the final winner, the LV:

# lvcreate -L 100M -n lv0 vg0
Logical volume "lv0" created

-L: specifies the size; -l: specifies the number of PEs.
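The two are interchangeable given the PE size; with the default 4 MB PE, the same 100 MB LV could be requested by extent count (names as in the example above):

# lvcreate -l 25 -n lv0 vg0    # 25 PEs x 4 MB = 100 MB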

Create a filesystem on the LV device:

# mkfs.ext3 /dev/vg0/lv0
# mkdir /mnt/lv0

# mount /dev/vg0/lv0 /mnt/lv0
# mount | grep lv

/dev/mapper/vg0-lv0 on /mnt/lv0 type ext3 (rw)

# ll /dev/vg0/lv0

lrwxrwxrwx 1 root root 19 03-30 17:02 /dev/vg0/lv0 -> /dev/mapper/vg0-lv0

Status display commands:

# pvscan

PV /dev/sdb5  VG vg0 lvm2 [484.00 MB / 384.00 MB free]
PV /dev/sdb6  VG vg0 lvm2 [484.00 MB / 444.00 MB free]
PV /dev/sda12 VG web lvm2 [1.87 GB / 1.48 GB free]
Total: 3 [2.81 GB] / in use: 3 [2.81 GB] / in no VG: 0 [0 ]
// lists all PVs in the system

# pvs

PV         VG  Fmt  Attr PSize   PFree
/dev/sda12 web lvm2 a-   1.87G   1.48G
/dev/sdb5  vg0 lvm2 a-   484.00M 384.00M
/dev/sdb6  vg0 lvm2 a-   484.00M 444.00M

# pvdisplay

--- Physical volume ---
PV Name         /dev/sdb5
VG Name         vg0
PV Size         486.28 MB / not usable 2.28 MB
Allocatable     yes
PE Size (KByte) 4096
Total PE        121
Free PE         96
Allocated PE    25
PV UUID         GqCwho-lKRY-NZxj-4R3T-cbYI-ps3T-nxyHbA

# vgscan
# vgs

VG  #PV #LV #SN Attr   VSize   VFree
vg0 2   2   0   wz--n- 968.00M 828.00M
web 1   1   0   wz--n- 1.87G   1.48G

# vgdisplay
--- Volume group ---
VG Name               vg0
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               0
Max PV                0
Cur PV                2
Act PV                2
VG Size               968.00 MB
PE Size               4.00 MB
Total PE              242
Alloc PE / Size       35 / 140.00 MB
Free PE / Size        207 / 828.00 MB
VG UUID               684B91-8UY8-3A1l-Xx4G-fhs2-V5sW-drJBx3

# lvscan
# lvs

LV   VG  Attr   LSize   Origin Snap% Move Log Copy% Convert
lv0  vg0 -wi-a- 100.00M
lv1  vg0 -wi-a- 40.00M
lv01 web -wi-a- 400.00M

# lvdisplay

--- Logical volume ---
LV Name              /dev/vg0/lv0
VG Name              vg0
LV UUID              WsG7Ry-g3sJ-tvMA-w90C-K7i2-UdbP-1aXf8G
LV Write Access      read/write
LV Status            available
# open               0
LV Size              100.00 MB
Current LE           25
Segments             1
Allocation           inherit
Read ahead sectors   auto
- currently set to   256
Block device         253:1

Create a VG with a specified PE size:

# vgcreate -s 16M vg1 /dev/sdb{7,8,9}
Volume group "vg1" successfully created

-s: specifies the PE size.

Note: the PE is the smallest unit in which a volume group allocates space when an LV is created.

# lvcreate -L 100M -n biglv1 vg1

Rounding up size to full physical extent 112.00 MB
Logical volume "biglv1" created
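The rounding follows directly from the 16 MB PE: 100 MB is not a multiple of 16 MB, so lvcreate rounds up to the next multiple, 7 × 16 MB = 112 MB.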

Extending an LV:

# lvextend -L 200M /dev/mapper/vg0-lv0    // grow the LV to 200 MB
Extending logical volume lv0 to 200.00 MB
Logical volume lv0 successfully resized

# lvextend -L +200M /dev/mapper/vg0-lv0   // grow the LV by 200 MB

# resize2fs /dev/mapper/vg0-lv0
resize2fs 1.39 (29-May-2006)

Filesystem at /dev/mapper/vg0-lv0 is mounted on /mnt/lv0; on-line resizing required
Performing an on-line resize of /dev/mapper/vg0-lv0 to 204800 (1k) blocks.
The filesystem on /dev/mapper/vg0-lv0 is now 204800 blocks long.

Syntax: resize2fs <LV device name>

# df -h | grep lv

/dev/mapper/vg0-lv0 194M 5.6M 179M 4% /mnt/lv0

Extending a VG:

First you need a spare PV, or create a new one:

# pvcreate /dev/sdb10

Physical volume "/dev/sdb10" successfully created

# vgextend vg0 /dev/sdb10
# pvs

PV         VG  Fmt  Attr PSize   PFree
/dev/sda12 web lvm2 a-   1.87G   1.48G
/dev/sdb10 vg0 lvm2 a-   100.00M 100.00M
/dev/sdb5  vg0 lvm2 a-   484.00M 284.00M
/dev/sdb6  vg0 lvm2 a-   484.00M 444.00M
/dev/sdb7  vg1 lvm2 a-   480.00M 368.00M
/dev/sdb8  vg1 lvm2 a-   480.00M 480.00M
/dev/sdb9  vg1 lvm2 a-   480.00M 480.00M

Shrinking an LV:

Note: unmount the LV device first:

# umount /dev/mapper/vg0-lv0

# resize2fs /dev/mapper/vg0-lv0 150M
resize2fs 1.39 (29-May-2006)

Please run 'e2fsck -f /dev/mapper/vg0-lv0' first.

# e2fsck -f /dev/mapper/vg0-lv0

# resize2fs /dev/mapper/vg0-lv0 150M

# lvreduce -L 150M /dev/mapper/vg0-lv0

Rounding up size to full physical extent 152.00 MB
WARNING: Reducing active logical volume to 152.00 MB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
Reducing logical volume lv0 to 152.00 MB
Logical volume lv0 successfully resized

Note: this operation is very, very dangerous. If you can avoid it, do.
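Putting the steps in one place, since the order matters (a recap sketch of the commands above; shrink the filesystem before the LV, never the other way around):

# umount /dev/mapper/vg0-lv0
# e2fsck -f /dev/mapper/vg0-lv0           # check the filesystem first
# resize2fs /dev/mapper/vg0-lv0 150M      # shrink the filesystem
# lvreduce -L 150M /dev/mapper/vg0-lv0    # then shrink the LV
# mount /dev/vg0/lv0 /mnt/lv0             # remount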

Shrinking a VG:

# vgreduce vg0 /dev/sdb6
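A caveat (an addition, not from the original notes): vgreduce only succeeds if the PV being removed holds no allocated extents; if it does, migrate them off first:

# pvmove /dev/sdb6    # move any extents on /dev/sdb6 to free space elsewhere in vg0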

Removal:

Removing an LV: unmount it first:

# umount /mnt/lv0
# lvs

LV  VG  Attr   LSize   Origin Snap% Move Log Copy% Convert
lv0 vg0 -wi-a- 152.00M
lv1 vg0 -wi-a- 40.00M

# lvremove /dev/vg0/lv0

Do you really want to remove active logical volume lv0? [y/n]: y
Logical volume "lv0" successfully removed

# lvremove /dev/vg0/lv1

Do you really want to remove active logical volume lv1? [y/n]: y
Logical volume "lv1" successfully removed

# vgremove vg0

Volume group "vg0" successfully removed

What should you do when removing a volume group produces messages like these?

# vgremove vg1

Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Volume group ...
Consider vgreduce --removemissing if metadata is inconsistent.

[root@localhost vg1]# vgreduce --removemissing
Please give volume group name

Run `vgreduce --help' for more information.

[root@localhost vg1]# vgreduce --removemissing vg1

Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Couldn't find device with uuid 'ko93xQ-xXz0-kDjW-0KLd-ii0D-GJnu-0c0rKw'.
Wrote out consistent volume group vg1

# vgs

VG  #PV #LV #SN Attr   VSize   VFree
vg1 2   0   0   wz--n- 960.00M 960.00M
web 1   1   0   wz--n- 1.87G   1.48G

[root@localhost vg1]# pvs
PV         VG  Fmt  Attr PSize   PFree
/dev/sda12 web lvm2 a-   1.87G   1.48G
/dev/sdb10     lvm2 --   101.94M 101.94M
/dev/sdb5      lvm2 --   486.28M 486.28M
/dev/sdb6      lvm2 --   486.31M 486.31M
/dev/sdb8  vg1 lvm2 a-   480.00M 480.00M
/dev/sdb9  vg1 lvm2 a-   480.00M 480.00M

[root@localhost vg1]# vgremove vg1
Volume group "vg1" successfully removed

Removing a PV:

# pvremove /dev/sdb5

Labels on physical volume "/dev/sdb5" successfully wiped
