Storage Foundation for HA (cluster + storage foundation)
Storage Foundation (VxVM + VxFS); the Enterprise edition adds snapshots
View installed licenses
vxlicrep
Install a license
vxlicinst
# multiple license keys can be applied
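A minimal example of reporting and installing keys (the key string below is a placeholder; older releases may prompt for the key instead of taking -k):
vxlicrep | more                          # report installed keys and licensed features
vxlicinst -k AAAA-BBBB-CCCC-DDDD-EEEE    # placeholder key; run once per key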
Before installing, the cluster must be stopped: hastop -all
Install the VxVM software
./installer
- I (install a product)
- 4 (Storage Foundation)
- license key
- the patch level must meet the requirements of vxinstall
-- enter
-- disks for Enterprise or other_disk
-- no, no, no
-- 2 choose custom install
-- 2 install one time
-- 1 install as a new disk   ## repeat for every new disk to put under VxVM (not LVM); disks already under VxVM are recognized
# rootdg is not needed for VxVM version 4, so answer no
# the "create rootdg" menu is for version 3.5; version 4 does not need it
# rootdg is what VxVM 3.5 needs in order to start
# rootdg = 10 MB or 1 hdisk
vea (GUI for Volume Manager; log in as user root)
vxadm
Hard disks under VxVM
vxdisk list
DISK = disk media name for VxVM, GROUP = disk group (like a VG)
- "online" means the disk is initialized and usable
- "online invalid" means the disk is not yet initialized for VxVM
Disk usage
vxdisk -o alldgs list
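Illustrative output (device names and the TYPE column vary by platform and VxVM version); a DG name in parentheses means the DG is deported or imported elsewhere:
DEVICE       TYPE            DISK         GROUP        STATUS
c2t3d0       auto:cdsdisk    datadg01     datadg       online
c2t4d0       auto:none       -            -            online invalid
c2t5d0       auto:cdsdisk    -            (testdg)     online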
Convert a physical disk to a VxVM disk (destroys data)
vxdisksetup -i cXtXdX
Unconvert a VxVM disk back to a plain physical disk
vxdiskunsetup cXtXdX
Create/delete/manage disks (text-based menu)
vxdiskadm
Print subdisk
vxprint -ht
Create and manage volumes in a DG
vxassist
Extend a volume and its file system
vxresize resizes the volume and the file system together
-- or --
vxassist -g datadg growto vol01 967340032
Extend the file system only
fsadm
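For example, growing a volume and its VxFS in one step with vxresize (names and the size are illustrative; vxresize is usually under /etc/vx/bin):
/etc/vx/bin/vxresize -F vxfs -g datadg vol01 +2g    # grow vol01 and its file system by 2 GB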
Physical disks -> subdisks -> plexes -> volumes
A plex is one copy of a volume; a mirrored volume has 2 plexes
Subdisk = PE/PP
Plex = LE/LP
Volume = LV
DG = VG
High-level creation: 1 VM disk = 1 subdisk
Low-level creation: 1 VM disk = several subdisks
Layout types
mirror-concat (concatenate first, then mirror on top)
concat-mirror (layered: concatenation on top, mirrors underneath)
Layered layouts are different: stripe-pro (stripe-mirror) and concat-pro (concat-mirror)
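A sketch of creating the different layouts with vxassist (DG, volume names and sizes are placeholders):
vxassist -g datadg make vol_mc 1g layout=mirror-concat         # plain mirror of concatenated plexes
vxassist -g datadg make vol_cm 1g layout=concat-mirror         # layered: concat on top, mirrors underneath
vxassist -g datadg make vol_sm 1g layout=stripe-mirror ncol=2  # layered striped-pro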
The GUI covers every command except creating rootdg
vea is the server GUI for configuring disks
vxsvc &   # GUI server process for VEA
vxconfigd and the VEA service (vxsvc) must be running first
-- or --
Veritas Enterprise Administrator client
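A quick check before launching the GUI (assuming vxsvc is in the PATH):
vxdctl mode    # should report "mode: enabled" (vxconfigd is running)
vxsvc &        # start the VEA server process if it is not already running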
Disk
- create a disk group (DG)
- add a disk to a DG
Export DG = deport
Import DG
- clear host ID to import on a different host (for CA disks)
- force import when quorum is lost (for CA disks)
Volume
- create a volume
- create a mirror
- create snapstart (snapshot mirror)
- create a space-optimized snapshot (creates a small snapshot volume)
Snapstart (attach a snapshot mirror)
Snapshot (split off the mirror)
Snapback (resync the mirror; with fast resync the snapback is quick)
Snapclear (make the snapshot independent, e.g. for another machine)
Disk group
- split DG (to create a new DG)
- join DG (to merge into another DG)
(command-line equivalents are sketched after this list)
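The same operations from the command line; DG, volume and snapshot names below are placeholders:
vxdg deport datadg                           # export (deport) a DG
vxdg import datadg                           # import a DG
vxdg -C import datadg                        # clear the host ID, import on a different host
vxdg -f import datadg                        # force the import when quorum is lost
vxassist -g datadg snapstart vol01           # attach a snapshot mirror
vxassist -g datadg snapshot vol01 snapvol01  # split it off as a snapshot volume
vxassist -g datadg snapback snapvol01        # resync the snapshot back to vol01
vxassist -g datadg snapclear snapvol01       # make the snapshot independent
vxdg split datadg newdg vol02                # split objects off into a new DG
vxdg join newdg datadg                       # join newdg back into datadg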
For AIX only -- smitty vxvm
To convert a RAID 0+1 quickly to RAID 1+0, run:
# vxassist -g <dgname> convert <volname> layout=stripe-mirror
This command SHOULD take about 1 second to run.
For example, here is a RAID 0+1 volume:
# vxprint -htrL -g ckdg
v vol01 fsgen ENABLED ACTIVE 35356672 SELECT -
pl vol01-01 vol01 ENABLED ACTIVE 35356672 STRIPE 2/128 RW
sd ckdg02-01 vol01-01 ckdg02 0 17678336 0/0 c2t3d0 ENA
sd ckdg03-01 vol01-01 ckdg03 0 17678336 1/0 c2t26d0 ENA
pl vol01-03 vol01 ENABLED ACTIVE 35356672 STRIPE 2/128 RW
sd ckdg01-01 vol01-03 ckdg01 0 17678336 0/0 c2t9d0 ENA
sd ckdg04-01 vol01-03 ckdg04 0 17678336 1/0 c2t24d0 ENA
We then run the command to convert this to a RAID 1+0 (striped-pro) volume:
# vxassist -g ckdg convert vol01 layout=stripe-mirror
The following vxprint output shows the resulting striped-pro volume.
# vxprint -htrL -g ckdg
v vol01 fsgen ENABLED ACTIVE 35356672 SELECT vol01-02
pl vol01-02 vol01 ENABLED ACTIVE 35356672 STRIPE 2/128 RW
sv vol01-S01 vol01-02 vol01-L01 1 17678336 0/0 2/2 ENA
v2 vol01-L01 fsgen ENABLED ACTIVE 17678336 SELECT -
p2 vol01-P01 vol01-L01 ENABLED ACTIVE 17678336 CONCAT - RW
s2 ckdg02-02 vol01-P01 ckdg02 0 17678336 0 c2t3d0 ENA
p2 vol01-P02 vol01-L01 ENABLED ACTIVE 17678336 CONCAT - RW
s2 ckdg01-02 vol01-P02 ckdg01 0 17678336 0 c2t9d0 ENA
sv vol01-S02 vol01-02 vol01-L02 1 17678336 1/0 2/2 ENA
v2 vol01-L02 fsgen ENABLED ACTIVE 17678336 SELECT -
p2 vol01-P03 vol01-L02 ENABLED ACTIVE 17678336 CONCAT - RW
s2 ckdg03-02 vol01-P03 ckdg03 0 17678336 0 c2t26d0 ENA
p2 vol01-P04 vol01-L02 ENABLED ACTIVE 17678336 CONCAT - RW
s2 ckdg04-02 vol01-P04 ckdg04 0 17678336 0 c2t24d0 ENA
Note that no data is moved from disk to disk in this process (this is why
it only takes a second to perform). The only thing Volume Manager has to
do is change the way the volume is laid out.
To convert back to a RAID0+1, if needed:
# vxassist -g ckdg convert vol01 layout=mirror-stripe
VxVM
Scan for new disks (enable the disk devices)
vxdctl enable
Initialize a disk for VxVM (from "online invalid" to "online")
vxdisksetup -i hdisk7
Create a new DG
vxdg init <diskgroup> <diskname>=devicename
vxdg init testdg testdg01=hdisk4
vxdg -g testdg adddisk testdg02=hdisk2   (add a disk to the DG)
vxassist -g data1dg -p maxsize layout=stripe ncol=4   (show the maximum size available)
vxassist -g data1dg make testvol 200M layout=stripe ncols=4
mkfs -F vxfs -o largefiles,bsize=8192 /dev/vx/rdsk/data1dg/vol
mount the new file system
edit /etc/fstab
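A hedged example of the mount and the /etc/fstab entry (HP-UX-style fstab assumed; Solaris uses /etc/vfstab and AIX uses /etc/filesystems):
mount -F vxfs /dev/vx/dsk/data1dg/vol /data
# illustrative /etc/fstab line:
/dev/vx/dsk/data1dg/vol /data vxfs delaylog 0 2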
List disks
vxdisk list
List disks and their DGs (including deported DGs)
vxdisk -o alldgs list
Show disks, DGs, volumes and layouts
vxprint -rth
List volumes in a DG
vxprint -g <dgname> -ht | more
Show the layout of a volume
vxprint -rth <volumename>
Remove a VxVM disk that is not the last disk in its DG
vxdg -g testdg rmdisk testdg01
vxdiskunsetup -C hdisk4    # use the device name (here hdisk4, which backed testdg01)
Remove the last disk of a DG by destroying the DG
vxdg destroy testdg
vxdiskunsetup -C hdisk4    # use the device name
Grow a volume
/etc/vx/bin/vxdisksetup -i c2t1d0
vxdg -g datadg adddisk testdg02=c2t1d0
vxassist -g datadg maxgrow vol01
vxassist -g datadg growto vol01 967340032
Grow the file system
cd /usr/lbin/fs/vxfs
./fsadm -F vxfs -b 967340032 -r /dev/vx/rdsk/datadg/vol01 /data
Add a disk to a DG
vxdg -g newdg adddisk diskname=c1t2d2
Remove a disk from a DG
vxdg -g diskgroup rmdisk diskname
Migrate data from one disk to another
vxevac -g diskgroup from_disk to_disk
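For example, with placeholder disk media names, move everything off datadg02 onto datadg03:
vxevac -g datadg datadg02 datadg03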
Show all disk groups, including those deported or imported on other hosts
vxdisk -o alldgs list
Create a volume
vxassist -g diskgroup make vol_name vol_size
Delete a volume
vxassist -g diskgroup remove volume vol_name
Add a mirror to a volume
vxassist -g diskgroup mirror vol_name
Mirror every volume
/etc/vx/bin/vxmirror -g diskgroup -a
Remove a mirror
vxassist -g diskgroup remove mirror vol_name !dm_name
vxplex -g diskgroup dis plex_name
vxedit -g diskgroup -rf rm plex_name
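For example, to drop one plex of a mirrored volume (the plex name vol01-02 is illustrative; confirm it with vxprint -ht first):
vxplex -g datadg dis vol01-02      # dissociate the plex from the volume
vxedit -g datadg -rf rm vol01-02   # remove the plex and its subdisks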
Resize the file system and the volume together
vxresize -F fstype -g diskgroup vol_name +size|-size
Resize the volume only
vxassist -g diskgroup (growto|growby|shrinkto|shrinkby) vol_name size
Resize the file system only
fsadm -F vxfs -b newsize -r rawdev mount_point
Take a snapshot of a file system
mount -F vxfs -o snapof=/dev/vx/dsk/datadg/uservol /dev/vx/dsk/datadg/snapvol /snapmount
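A sketch of the full sequence (volume name, size and backup command are placeholders): create a small snapshot volume, mount it as a snapshot of the live file system, back it up, then unmount:
vxassist -g datadg make snapvol 500m
mkdir -p /snapmount
mount -F vxfs -o snapof=/dev/vx/dsk/datadg/uservol /dev/vx/dsk/datadg/snapvol /snapmount
cd /snapmount && tar cvf /backup/uservol.tar .    # placeholder backup step
umount /snapmount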
Repair objects at the low level
vxmend fix [stale|clean|active] object    # object = plex, subdisk or volume
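For example, to mark a stopped plex clean after repair and restart the volume (names are illustrative):
vxmend -g testdg fix clean vol01-02   # mark the plex CLEAN
vxvol -g testdg start vol01           # start the volume again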
Replace a disk
- cfgmgr (AIX), ioscan -fnC disk (HP-UX)    (scan for the disk at the OS level)
- vxdctl enable    (rescan disks in VxVM)
- vxdisksetup -i hdiskX    (initialize the replacement disk for VxVM)
- vxdisk list    (identify the failed VM disk; the new device (hdisk) will be attached to it in the next step)
- vxdg -k -g <dgname> adddisk <failed vmdisk>=<replacement device>
ex. --> vxdg -k -g testdg adddisk testdg05=hdisk13
- vxrecover -bs -g <dgname> <volname>    (recover onto the new disk)
ex. --> vxrecover -bs -g testdg testvol
- vxprint -rth testvol    (check that the plex state changes from STALE to ACTIVE)
Convert to RAID 1+0 (striped-pro = stripe-mirror)
- vxassist -g diskgroup convert vol01 layout=stripe-mirror