iSCSI with MySQL
When economy meets performance, iSCSI comes into play!
I am using 1Gb Ethernet between the target and the initiator (both on CentOS 6.3).
The target consists of a 3ware controller with four OCZ Vertex 3 SSDs, so the backing store is the device /dev/sda (configured in /etc/tgt/targets.conf).
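For reference, the matching /etc/tgt/targets.conf entry looks roughly like this (a sketch; the IQN is the one used further below, the initiator address is a placeholder):
<target iqn.2012.local.fs07:ssd.lun0>
    backing-store /dev/sda
    initiator-address 10.0.0.2
</target>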
Next, parted and LVM are configured on the target, with XFS on top.
parted /dev/sde
mklabel gpt
mkpart
[name] primary
[fs] xfs
[start] 0%
[end] 100%
set 1 lvm on
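The same can be done non-interactively in one shot (a sketch):
parted -s /dev/sde mklabel gpt mkpart primary xfs 0% 100% set 1 lvm on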
pvcreate /dev/sde1
vgcreate vg_s7v0_ssd /dev/sde1
lvcreate -C y -n data -l 98%FREE vg_s7v0_ssd # rest left for snapshots
mkfs.xfs -f -d su=256k,sw=3 /dev/vg_s7v0_ssd/data
# su = <RAID controller's stripe size in bytes (or KiB when used with k)>
# sw = <number of data disks (don't count parity disks)>
xfs_info /dev/vg_s7v0_ssd/data
The problem I encountered was that after a target reboot, LVM on the target activated the exported iSCSI device locally, so it was impossible to connect it from the initiator… The solution was to set filters in /etc/lvm/lvm.conf on the target. However, it did NOT work while the rule “a/.*/” was still in the config. This is my case (only one “variable” filter is used):
# the default “a/.*/” rule removed; reject the exported device explicitly:
filter = [ "r|/dev/sda1|", "r|/dev/sda|", "a|/dev/sdd|", "a|/dev/sdc|" ]
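After changing the filter you can quickly verify that the rejected devices are no longer picked up:
pvscan # the exported device should not show up any more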
You have to ensure that the initiator connects to the target on startup; you do that with the following command:
iscsiadm -m node --target iqn.2012.local.fs07:ssd.lun0 -p 10.0.0.1 --op update -n node.startup -v automatic
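If the node record does not exist yet, discover the target and log in first (a sketch; same portal and IQN as above):
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node --target iqn.2012.local.fs07:ssd.lun0 -p 10.0.0.1 --login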
Add elevator=noop (I/O scheduler) to the kernel line in grub.conf, and mount with the options noatime,nodiratime,nobarrier.
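For example, the kernel line in grub.conf ends up looking roughly like this (kernel version and root device are placeholders):
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root elevator=noop
You can confirm the active scheduler at runtime (the one in brackets is active):
cat /sys/block/sda/queue/scheduler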
If you use iSCSI – add “_netdev” (the device requires the network to be available) – thank God! 🙂
UUID="79a594d6-de0b-477c-bad5-8f4bc77267be" /db_storage_ssd xfs noatime,nodiratime,nobarrier,_netdev 0 0
Finally, I obtained the following numbers on the SSD RAID 10 (measured with sysbench); note that the random reads below come close to saturating the 1Gb link (~125MB/s theoretical):
200Mb total file size
Block size 16Kb
Number of random requests for random IO: 10000
Read/Write ratio for combined random IO test: 1.50
Calling fsync() at the end of test, Enabled.
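For reference, a sysbench 0.4 fileio run with these parameters looks roughly like this (a sketch for the random-write pass; use --file-test-mode=rndrd for the read pass):
sysbench --test=fileio --file-total-size=200M prepare
sysbench --test=fileio --file-total-size=200M --file-test-mode=rndwr --file-block-size=16K --max-requests=10000 run
sysbench --test=fileio --file-total-size=200M cleanup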
Random write test:
Operations performed: 0 Read, 10000 Write, 1 Other = 10001 Total
Read 0b Written 156.25Mb Total transferred 156.25Mb (80.4Mb/sec)
5145.57 Requests/sec executed
Random read test:
Operations performed: 10000 Read, 0 Write, 0 Other = 10000 Total
Read 156.25Mb Written 0b Total transferred 156.25Mb (110.72Mb/sec)
7085.88 Requests/sec executed
Common administration tasks on target:
Dynamic initiator management:
# add ACL
tgtadm --lld iscsi --mode target --op bind --tid 4 --initiator-address 10.0.15.206
# del ACL
tgtadm --lld iscsi --mode target --op unbind --tid 4 --initiator-address 10.0.15.206
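To list the current targets together with their LUNs and ACLs:
tgtadm --lld iscsi --mode target --op show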
Add a new initiator to the config and then activate it (without restarting services):
# Update configuration for targets. Only targets which
# are not in use will be updated.
/etc/init.d/tgtd reload
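You can inspect what tgtd actually picked up with tgt-admin (part of scsi-target-utils):
tgt-admin --show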
Common administration tasks on initiator:
Cleanly disconnect an initiator that has LVM on top:
umount /db_storage_ssd1
umount /db_storage_ssd2
vgchange -an vg_mysql_ssd1 && vgchange -an vg_mysql_ssd2 # deactivate the VGs before logging out
iscsiadm -d 1 --mode node -p 10.0.15.233 --logout # all sessions to this portal
# or separately
iscsiadm -d 1 --mode node --target iqn.2012.local.fs05:ssd.lun0 -p 10.0.15.233 --logout
# active sessions
iscsiadm -m session
# detailed
iscsiadm -m session -P1
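To reconnect later, reverse the steps above (a sketch, assuming fstab entries exist for the mount points):
iscsiadm -m node -p 10.0.15.233 --login
vgchange -ay vg_mysql_ssd1 && vgchange -ay vg_mysql_ssd2
mount /db_storage_ssd1 && mount /db_storage_ssd2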
You can find very interesting information in one of Oracle's white papers about the Sun Flash Accelerator F80 PCIe Card (a different elevator and a different filesystem – ext4).