Friday, November 2, 2012

Draft - pykota


Draft - setting up WINBIND/PYKOTA/CUPS

apt-get install winbind
apt-get install krb5-config krb5-user
scp -r root@druka:/etc/krb5.conf /etc/
scp -r root@druka:/etc/cups /etc/
scp -r root@druka:/etc/samba /etc/
net ads join -S vaserv -U admin_vea
vim /etc/nsswitch.conf
/etc/init.d/winbind restart
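
For reference, after the join the winbind-related entries in /etc/nsswitch.conf usually end up looking roughly like this (a sketch of the common Debian defaults, not a copy of the actual file):
passwd:         compat winbind
group:          compat winbind
shadow:         compat
The join itself can be sanity-checked with:
wbinfo -t
wbinfo -u | head
getent passwd | tail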




apt-get install subversion
cd /usr/local/src/
svn co http://svn.pykota.com/pykota/trunk pykota
cd pykota
/* # python -V
Python 2.7.3 */
python checkdeps.py
apt-get install python-pygresql python-jaxml python-reportlab python-imaging pkpgcounter python-pam
cd /usr/local/src
svn co http://svn.pykota.com/pkipplib/trunk pkipplib
cd pkipplib/
python setup.py install
cd /usr/local/src/pykota
python checkdeps.py
python setup.py install -v -f | tee -a install.log

adduser --system --group --home /etc/pykota --gecos PyKota pykota
cp /usr/local/share/pykota/cupspykota /usr/lib/cups/backend/cupspykota
cp /usr/local/share/pykota/conf/pykota.conf.sample /etc/pykota/pykota.conf
cp /usr/local/share/pykota/conf/pykotadmin.conf.sample /etc/pykota/pykotadmin.conf
chmod 644 /etc/pykota/pykota.conf
chmod 640 /etc/pykota/pykotadmin.conf
chown pykota.pykota /etc/pykota/pykota.conf /etc/pykota/pykotadmin.conf
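
A note on the CUPS side (a sketch based on the PyKota docs; the URI below is just a placeholder): the cupspykota backend only takes effect for queues whose DeviceURI is prefixed with it, so an existing entry in /etc/cups/printers.conf such as
DeviceURI socket://10.0.0.200:9100
has to become
DeviceURI cupspykota:socket://10.0.0.200:9100
followed by a CUPS restart:
/etc/init.d/cups restart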


/*
[global]
Debug: Yes
config_charset : UTF-8
storagebackend: pgstorage
storageserver: stende
storagename: XXX
storageuser : XXX
storageuserpw : XXX
storagecaching: No
disablehistory: No
logger: system
logourl : http://www.pykota.com/pykota.png
logolink : http://www.pykota.com/
smtpserver: localhost
maildomain: venta.lv
usernamecase: lower
privacy : no
onbackenderror : nocharge
keepfiles : no
accounter: software()
skipinitialwait : no
preaccounter: software()
onaccountererror: stop
admin: John Doe
adminmail: root@localhost
mailto : both
balancezero: -0.5
gracedelay : 7
poorman : 1.0
poorwarn : Your Print Quota account balance is low.
 Soon you'll not be allowed to print anymore.
softwarn: Your Print Quota Soft Limit is reached.
 This means that you may still be allowed to print for some
 time, but you must contact your administrator to purchase
 more print quota.
hardwarn: Your Print Quota Hard Limit is reached.
 This means that you are not allowed to print anymore.
 Please contact your administrator at root@localhost
 as soon as possible to solve the problem.
policy: external(/usr/local/bin/pkusers --add --limitby balance --skipexisting  %(username)s > /dev/null &&  /usr/local/bin/edpykota --add --skipexisting -n %(username)s)
maxdenybanners: 0
enforcement : strict
trustjobsize : yes
denyduplicates : no
duplicatesdelay : 0
noprintingmaxdelay : 60
statusstabilizationloops : 5
statusstabilizationdelay : 4.0
snmperrormask : 0x4FCC
*/
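
With the configuration in place, users can also be added and checked by hand; a sketch from memory of the PyKota command-line tools, reusing the same options as the policy line above (double-check each tool's --help):
pkusers --add --limitby balance --skipexisting testuser
edpykota --add --skipexisting -n testuser
repykota testuser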

Friday, October 26, 2012

Building LTSP with an rdesktop client


Building an LTSP network boot image for rdesktop

Used:

  • An OpenVZ server for the network image
  • Any PXE-capable computer
  • Some Windows 2008R2 terminal server


  1. Set up an OpenVZ Ubuntu 12.04 container, plus networking etc.
  2. Strip everything unnecessary out of the OpenVZ container (saslauth, apache, sendmail, bind etc.)
  3. Update, upgrade
  4. Download the ltsp-server package (so we don't have to pull in the ~600 MB of recommended packages that would come with it)
    apt-cache show ltsp-server | grep Filename (???)
    wget http://archive.ubuntu.com/ubuntu/pool/main/l/ltsp/ltsp-server_5.3.7-0ubuntu2.2_all.deb
    dpkg --unpack ltsp-server_5.3.7-0ubuntu2.2_all.deb
    ltsp-build-image (throws an error)
    apt-get -f install (installs debootstrap nbd-server liblzo2-2 squashfs-tools tcpd ltsp-server libc-bin)
    apt-get install tftpd-hpa

  5. Build the client
    ltsp-build-client 
  6. Update, upgrade
    ltsp-chroot
    apt-get update && apt-get upgrade -y --force-yes && apt-get clean
    apt-get install ltspfs
    exit

  7. Add rdesktop 1.7.1 - it is needed so that Windows2008R2 can do "Per Device" licensing.
    ltsp-chroot
    wget http://launchpadlibrarian.net/103630514/rdesktop_1.7.1-1ubuntu1_i386.deb
    dpkg -i rdesktop_1.7.1-1ubuntu1_i386.deb
    apt-get -f install
    dpkg -i rdesktop_1.7.1-1ubuntu1_i386.deb
    exit

  8. Rebuild the boot image
    ltsp-update-sshkeys
    ltsp-update-kernels
    ltsp-update-image
    service nbd-server restart

  9. Create the /var/lib/tftpboot/ltsp/i386/pxelinux.cfg file (see the sketch after this list)
  10. Create /var/lib/tftpboot/ltsp/i386/lts.conf with the following content:
    [default]
    SEARCH_DOMAIN = skola.venta.lv || Change this for your own network
    DNS_SERVER = 10.0.0.50  || Change this for your own network
    LOCALDEV=True
    SOUND=True
    XKBLAYOUT=en
    SCREEN_03=shell
    SCREEN_07="rdesktop -a 24 -x b -f -k en-us -d SKOLA  -r sound:local -r disk:Drives=/media/root w2k8r2RDS"
  11. Boot the client - the login screen should appear on console 7
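
A minimal sketch of /var/lib/tftpboot/ltsp/i386/pxelinux.cfg/default for step 9 - the file is normally generated by ltsp-update-kernels, so the generated one is authoritative and the kernel parameters here are illustrative only:
DEFAULT ltsp
LABEL ltsp
  KERNEL vmlinuz
  APPEND ro initrd=initrd.img quiet root=/dev/nbd0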

What has not been tested:
  • whether USB flash drives work
  • whether sound works
  • whether the mouse cursor shows up black
And why on earth can icons on the Desktop be opened with a single click?!

Wednesday, August 22, 2012

A total slowdown with LVM snapshots

Fighting, so far unsuccessfully, with RAID1, LVM volumes and snapshots.
The setup: 4 mdadm RAID1 volumes (each made of two 2 TB disks on an SRC16HI controller), with an LVM volume striped across all 4 on top (created with "lvcreate ... -i 4 ..."). If one snapshot of this volume is created, or two, disk performance drops dramatically - while rsyncing the existing data, the average IO access time at times reaches 80 s and rsync dies with a hung task.
So, the bkpa volume :(
lvs -o +devices
  LV          VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert Devices                                                                                   
  bkpa        data owi-aos-   2,00t                                            /dev/md126(196608),/dev/md124(196608),/dev/md125(196608),/dev/md127(196608)

[root@bfsa clilin]# tiotest -f 2000 -t 4 -d /data2/ -k 1 -k 3
Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   64.1 s | 124.868 MB/s |   4.8 %  | 159.7 % |
| Read         8000 MBs |   25.4 s | 314.513 MB/s |   7.1 %  | 125.1 % |
`----------------------------------------------------------------------'
If we create a snapshot:
lvcreate -s /dev/data/bkpa -n aa -L 100G -i 4
...
lvs -o +devices
  LV          VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert Devices                                                                    
  aa          data swi-a-s- 100,00g      bkpa     7,84                         /dev/md126(327680),/dev/md124(327680),/dev/md125(327680),/dev/md127(327680)
...
then the tiotest numbers look like this:
tiotest -f 2000 -t 4 -d /data2/ -k 1 -k 3
Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |  652.6 s |  12.259 MB/s |   0.3 %  |  54.0 % |
| Read         8000 MBs |   28.8 s | 278.213 MB/s |   7.1 %  | 336.3 % |
`----------------------------------------------------------------------'
Adding one more snapshot drops the speed even further. Changing the adapter controller parameters (WT vs WB, Direct vs Cached, EnDskCache vs DisDiskCache) did not improve anything. These results were obtained with the following settings:
CmdTool2 -ldinfo -lall -aall
...
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
..
Disk Cache Policy   : Enabled
..

I am running CentOS 6.3, because on Ubuntu 12.04 the disks on this controller simply went offline under load.
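
For reference, one way to keep an eye on the per-array latency while such an rsync is running (just a sketch - not necessarily how the 80 s figure above was measured):
iostat -x 5 md124 md125 md126 md127
# the await column is the average I/O wait in milliseconds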

Wednesday, August 8, 2012

NFS in OpenVZ on Debian

How to get NFS mounts working in OpenVZ containers.

This is done on a Debian host (kernel 2.6.32-5-openvz-amd64) - the NFS client works in a container with this Debian kernel, but if an NFS kernel server also has to run inside a container, the host must use the RHEL OpenVZ kernel ( http://forum.openvz.org/index.php?t=msg&goto=43097 ).
When creating the virtual machine, you do have to enable the NFS feature for the container (if you don't, the container gets the message
# mount -t nfs SERVER:/DIR /mnt
mount.nfs: No such device
):
vzctl create 4001 --ostemplate debian-6.0-x86 --hostname ovz-4001-nfstest
vzctl set 4001 --features nfs:on --save
vzctl set 4001 --ipadd 10.1.4.1 --save
vzctl start 4001
vzctl enter 4001
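
A quick way to confirm that the feature flag really got saved (the config path below is the stock OpenVZ layout - adjust if yours differs):
grep FEATURES /etc/vz/conf/4001.conf
# expected: FEATURES="nfs:on"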

The next difficulty was installing nfs-common in the container. Installing it the default way:
apt-get install nfs-common

produced these error messages:
Setting up nfs-common (1:1.2.2-4squeeze2) ...
insserv: Service portmap has to be enabled to start service nfs-common
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing nfs-common (--configure):
 subprocess installed post-installation script returned error exit status 1
configured to not write apport reports
Errors were encountered while processing:
 nfs-common
E: Sub-process /usr/bin/dpkg returned an error code (1)

If the portmap service was additionally installed and started, and /sbin/rpc.statd was run manually, the NFS client worked (mount -t nfs IP:/home/nfsout /mnt), but after a reboot rpc.statd has to be started by hand again.
Nowadays rpcbind is used instead of portmap; the nfs-common installation was finally sorted out by the following sequence:
apt-get remove --purge portmap
apt-get remove --purge rpcbind
apt-get remove --purge nfs-common
apt-get install portmap
apt-get install nfs-common

After that, NFS volumes could be mounted from the container.
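
If the mount should also survive container restarts, the usual /etc/fstab line inside the container does the job (SERVER and DIR being the same placeholders as above):
SERVER:/DIR  /mnt  nfs  defaults  0  0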

Friday, August 3, 2012

RAIDs and LVMs

Testing and looking for the optimal approach to working with RAID.

GIVEN:
A server with an SRCSASPH16i adapter and 8 SATA disks, 2 TB, 7200 rpm, 32 MB cache each.

What is the better way to use them - RAID6, RAID10, or 4 x RAID1 with LVM on top? As always, it is a trade-off between fault tolerance and speed.
In the end I am leaning towards 4 RAID1 volumes of two disks each, with a striped LVM volume on top. I think that if a single disk fails, it will be much easier and faster to tell LVM which arrays to use than to hope that a spare disk resyncs correctly in the RAID10 or RAID6 case.

The disks are configured on the RAID controller as individual JBODs, which the OS sees as:
/dev/sd{e,f,g,h,m,n,o,p}

1. RAID6


mdadm --create --verbose /dev/md6 --level=raid6  --raid-devices=8 /dev/sd{e,f,g,h,m,n,o,p}
Wait until the synchronization finishes (>10 h)
mkfs.ext4 /dev/md6
mount /dev/md6 /mnt/
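
While waiting, the resync can be followed - and, if needed, its minimum speed raised - via the standard md interfaces:
watch -n 30 cat /proc/mdstat
echo 100000 > /proc/sys/dev/raid/speed_limit_min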

TIOTEST script (on CentOS 6.3, tiobench kept hanging with a division-by-zero error)

for i in 1 2 4 8 ; do echo ================================= ; echo THREADS  $i ; echo SIZE PER THREAD $((8000/$i)) ; tiotest -d /mnt/ -f $((8000/$i))  -t $i  ; done
=================================
THREADS 1
SIZE PER THREAD 8000
Tiotest results for 1 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   46.2 s | 173.178 MB/s |   1.3 %  |  32.2 % |
| Random Write    4 MBs |    2.5 s |   1.536 MB/s |   0.0 %  |   0.9 % |
| Read         8000 MBs |   10.3 s | 774.033 MB/s |   4.1 %  |  59.6 % |
| Random Read     4 MBs |    3.6 s |   1.090 MB/s |   0.0 %  |   0.2 % |
`----------------------------------------------------------------------'
=================================
THREADS 2
SIZE PER THREAD 4000
Tiotest results for 2 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   62.0 s | 129.005 MB/s |   2.0 %  |  70.6 % |
| Random Write    8 MBs |    5.5 s |   1.415 MB/s |   0.0 %  |   0.4 % |
| Read         8000 MBs |   15.9 s | 504.067 MB/s |   5.5 %  |  76.5 % |
| Random Read     8 MBs |    4.0 s |   1.933 MB/s |   0.1 %  |   0.0 % |
`----------------------------------------------------------------------'
=================================
THREADS 4
SIZE PER THREAD 2000
Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |  158.1 s |  50.591 MB/s |   2.6 %  | 136.7 % |
| Random Write   16 MBs |   11.7 s |   1.333 MB/s |   0.1 %  |   0.0 % |
| Read         8000 MBs |   16.5 s | 484.436 MB/s |   9.8 %  | 143.8 % |
| Random Read    16 MBs |    4.6 s |   3.421 MB/s |   0.0 %  |   0.0 % |
`----------------------------------------------------------------------'

=================================
THREADS 8
SIZE PER THREAD 1000
Tiotest results for 8 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |  208.1 s |  38.449 MB/s |   4.4 %  | 351.5 % |
| Random Write   31 MBs |   21.5 s |   1.454 MB/s |   0.2 %  |   9.2 % |
| Read         8000 MBs |   16.3 s | 491.984 MB/s |  21.2 %  | 284.4 % |
| Random Read    31 MBs |    5.5 s |   5.714 MB/s |   0.4 %  |   0.0 % |
`----------------------------------------------------------------------'

========
========
========

time  for D in `seq 1000 1999` ; do echo $D ; mkdir -p /mnt/A/$D ; for F in `seq 1000 1234 1000000` ; do echo $D $F ; dd if=/dev/zero  bs=$F count=1 of=/mnt/A/$D/$F.txt ; done ; done
real    50m29.980s
user    4m2.111s
sys     36m16.997s

========
========
========


rsync -a --stats /mnt/A /mnt/C
..
sent 405221629673 bytes  received 15394018 bytes  52844366.39 bytes/sec
..

========
========
========

time cp -al /mnt/A /mnt/B 

real    1m0.912s
user    0m3.396s
sys     0m43.982s

========
========
========

time rm -rf /mnt/A  ; time rm -rf /mnt/B ; time rm -rf /mnt/C

real    0m49.380s
user    0m0.330s
sys     0m8.737s

real    0m59.611s
user    0m0.517s
sys     0m27.437s

real    1m15.621s
user    0m0.545s
sys     0m35.994s
=========================

RAID 10 

Prepare:

umount /mnt
mdadm -S /dev/md6
mdadm --remove /dev/md6
mdadm --zero-superblock /dev/sd{e,f,g,h,m,n,o,p}
mdadm --create --verbose /dev/md6 --level=raid10  --raid-devices=8 /dev/sd{e,f,g,h,m,n,o,p}
## Wait for the resync to finish
mkfs.ext4 /dev/md6
mount /dev/md6 /mnt

Test:

=================================
THREADS 1
SIZE PER THREAD 8000
Tiotest results for 1 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   56.3 s | 142.088 MB/s |   1.0 %  |  26.9 % |
| Random Write    4 MBs |    0.2 s |  18.616 MB/s |   0.5 %  |   6.7 % |
| Read         8000 MBs |   17.2 s | 466.319 MB/s |   2.7 %  |  37.4 % |
| Random Read     4 MBs |    4.0 s |   0.976 MB/s |   0.0 %  |   0.2 % |
`----------------------------------------------------------------------'
=================================
THREADS 2
SIZE PER THREAD 4000
Tiotest results for 2 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   54.0 s | 148.037 MB/s |   2.4 %  |  67.7 % |
| Random Write    8 MBs |    0.7 s |  11.882 MB/s |   0.3 %  |   0.0 % |
| Read         8000 MBs |   14.8 s | 540.108 MB/s |   5.5 %  |  88.9 % |
| Random Read     8 MBs |    4.2 s |   1.882 MB/s |   0.0 %  |   0.0 % |
`----------------------------------------------------------------------'
=================================
THREADS 4
SIZE PER THREAD 2000
Tiotest results for 4 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   54.2 s | 147.615 MB/s |   8.0 %  | 188.9 % |
| Random Write   16 MBs |    3.5 s |   4.501 MB/s |   0.5 %  |   0.0 % |
| Read         8000 MBs |   16.1 s | 496.289 MB/s |   7.2 %  | 158.6 % |
| Random Read    16 MBs |    4.8 s |   3.236 MB/s |   0.0 %  |   0.0 % |
`----------------------------------------------------------------------'
=================================
THREADS 8
SIZE PER THREAD 1000
Tiotest results for 8 concurrent io threads:
,----------------------------------------------------------------------.
| Item                  | Time     | Rate         | Usr CPU  | Sys CPU |
+-----------------------+----------+--------------+----------+---------+
| Write        8000 MBs |   56.7 s | 141.097 MB/s |  20.4 %  | 490.5 % |
| Random Write   31 MBs |    5.6 s |   5.541 MB/s |   1.3 %  |   0.0 % |
| Read         8000 MBs |   16.3 s | 492.223 MB/s |  12.2 %  | 301.0 % |
| Random Read    31 MBs |    5.6 s |   5.559 MB/s |   1.4 %  |   0.0 % |
`----------------------------------------------------------------------'

File creation:
========
========
========

time  for D in `seq 1000 1999` ; do echo $D ; mkdir -p /mnt/A/$D ; for F in `seq 1000 1234 1000000` ; do echo $D $F ; dd if=/dev/zero  bs=$F count=1 of=/mnt/A/$D/$F.txt ; done ; done

real    42m58.039s
user    3m52.933s
sys     26m18.912s

========
========
========
rsync -a --stats /mnt/A /mnt/C
...
sent 405221480087 bytes  received 15394018 bytes  51606096.67 bytes/sec
...

========
========
========
time cp -al /mnt/A /mnt/B 

real    1m21.329s
user    0m3.292s
sys     1m8.058s

========
========
========

for i in A B C ; do time rm -rf /mnt/$i ; done 

real    0m12.508s
user    0m0.362s
sys     0m7.445s

real    0m36.581s
user    0m0.463s
sys     0m25.834s

real    1m1.690s
user    0m0.570s
sys     0m47.764s

LVM on top of the RAID1 arrays:
mdadm -S /dev/md6
mdadm --remove /dev/md6
mdadm --zero-superblock /dev/sd{e,f,g,h,m,n,o,p}
mdadm --create --verbose /dev/md11 --level=raid1 --raid-devices=2 /dev/sd{e,m}
mdadm --create --verbose /dev/md12 --level=raid1 --raid-devices=2 /dev/sd{f,n}
mdadm --create --verbose /dev/md13 --level=raid1 --raid-devices=2 /dev/sd{g,o}
mdadm --create --verbose /dev/md14 --level=raid1 --raid-devices=2 /dev/sd{h,p}
Create the PVs and the VG:
pvcreate /dev/md11
pvcreate /dev/md12
pvcreate /dev/md13
pvcreate /dev/md14

vgcreate test /dev/md11 /dev/md12 /dev/md13 /dev/md14
Create 3 LVs - without and with striping:
lvcreate      -L1T -n tests test
lvcreate -i 3 -L1T -n tests3 test
lvcreate -i 4 -L1T -n tests4 test
Testing. Without striping:
mkfs.ext4 /dev/test/tests 

real    8m52.703s
user    0m1.248s
sys     0m17.725s


Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load
(column order in the result rows below: identifier, file size, block size, threads, rate, CPU%, avg latency, max latency, Lat% >2s, Lat% >10s, CPU eff)

Sequential Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1  133.91 12.61%     0.029     1308.61   0.00000  0.00000  1062
2.6.32-279.2.1.el6.x86_64     8000  4096    2  307.13 60.66%     0.025     1069.12   0.00000  0.00000   506
2.6.32-279.2.1.el6.x86_64     8000  4096    4  178.61 58.89%     0.080     1439.95   0.00000  0.00000   303
2.6.32-279.2.1.el6.x86_64     8000  4096    8  159.53 82.49%     0.159     2918.88   0.00005  0.00000   193

Random Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1    0.96 0.263%     4.076       17.82   0.00000  0.00000   363
2.6.32-279.2.1.el6.x86_64     8000  4096    2    1.94 0.074%     4.013       19.58   0.00000  0.00000  2604
2.6.32-279.2.1.el6.x86_64     8000  4096    4    2.31 0.340%     6.088       89.15   0.00000  0.00000   679
2.6.32-279.2.1.el6.x86_64     8000  4096    8    2.64 0.135%     8.743      144.99   0.00000  0.00000  1953

Sequential Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1   38.24 7.624%     0.095     1826.67   0.00000  0.00000   502
2.6.32-279.2.1.el6.x86_64     8000  4096    2   42.03 22.86%     0.172     8895.27   0.00093  0.00000   184
2.6.32-279.2.1.el6.x86_64     8000  4096    4   50.89 84.20%     0.268    12996.51   0.00200  0.00034    60
2.6.32-279.2.1.el6.x86_64     8000  4096    8   49.56 254.4%     0.541    18589.47   0.00474  0.00142    19

Random Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1    1.29 0.733%     0.005        0.03   0.00000  0.00000   176
2.6.32-279.2.1.el6.x86_64     8000  4096    2    1.44 0.312%     0.007        0.04   0.00000  0.00000   460
2.6.32-279.2.1.el6.x86_64     8000  4096    4    1.27 0.146%     0.009        0.05   0.00000  0.00000   868
2.6.32-279.2.1.el6.x86_64     8000  4096    8    1.46 0.223%     0.012       10.15   0.00000  0.00000   651
3 stripes:
time  mkfs.ext4 /dev/test/tests3

real    3m19.238s
user    0m1.241s
sys     0m17.049s


Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

Sequential Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1  287.61 28.51%     0.013      472.71   0.00000  0.00000  1009
2.6.32-279.2.1.el6.x86_64     8000  4096    2  292.54 61.72%     0.026      788.40   0.00000  0.00000   474
2.6.32-279.2.1.el6.x86_64     8000  4096    4  245.74 101.4%     0.063     1150.85   0.00000  0.00000   242
2.6.32-279.2.1.el6.x86_64     8000  4096    8  239.19 195.8%     0.125      597.14   0.00000  0.00000   122

Random Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1    1.04 0.434%     3.742       35.00   0.00000  0.00000   240
2.6.32-279.2.1.el6.x86_64     8000  4096    2    2.06 1.239%     3.668       30.53   0.00000  0.00000   166
2.6.32-279.2.1.el6.x86_64     8000  4096    4    3.68 0.895%     4.156      156.84   0.00000  0.00000   411
2.6.32-279.2.1.el6.x86_64     8000  4096    8    5.31 0.271%     5.207       60.89   0.00000  0.00000  1953

Sequential Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1  108.41 21.86%     0.033     2525.84   0.00015  0.00000   496
2.6.32-279.2.1.el6.x86_64     8000  4096    2   97.02 45.85%     0.073     3191.88   0.00015  0.00000   212
2.6.32-279.2.1.el6.x86_64     8000  4096    4  104.68 125.0%     0.134     6294.07   0.00103  0.00000    84
2.6.32-279.2.1.el6.x86_64     8000  4096    8   96.39 268.3%     0.272     9232.73   0.00229  0.00000    36

Random Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1    5.35 2.907%     0.005        0.03   0.00000  0.00000   184
2.6.32-279.2.1.el6.x86_64     8000  4096    2    5.31 5.636%     0.007        0.05   0.00000  0.00000    94
2.6.32-279.2.1.el6.x86_64     8000  4096    4    4.54 3.838%     0.009        0.06   0.00000  0.00000   118
2.6.32-279.2.1.el6.x86_64     8000  4096    8    5.40 1.106%     0.011        6.28   0.00000  0.00000   488
4 stripes
[root@bfsa ~]# cat t.4
mkfs.ext4 /dev/test/tests4

real    2m28.422s
user    0m1.211s
sys     0m17.627s


Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

Sequential Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1  352.56 34.89%     0.011      361.91   0.00000  0.00000  1010
2.6.32-279.2.1.el6.x86_64     8000  4096    2  300.66 61.63%     0.025      550.33   0.00000  0.00000   488
2.6.32-279.2.1.el6.x86_64     8000  4096    4  297.59 121.8%     0.052      594.57   0.00000  0.00000   244
2.6.32-279.2.1.el6.x86_64     8000  4096    8  297.59 242.5%     0.103      498.79   0.00000  0.00000   123

Random Reads
2.6.32-279.2.1.el6.x86_64     8000  4096    1    1.07 0.431%     3.651       37.60   0.00000  0.00000   248
2.6.32-279.2.1.el6.x86_64     8000  4096    2    2.02 0.710%     3.767       31.34   0.00000  0.00000   284
2.6.32-279.2.1.el6.x86_64     8000  4096    4    3.77 0.289%     3.991       38.33   0.00000  0.00000  1302
2.6.32-279.2.1.el6.x86_64     8000  4096    8    5.74 1.614%     5.079       68.38   0.00000  0.00000   355

Sequential Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1  140.81 28.79%     0.026     1226.49   0.00000  0.00000   489
2.6.32-279.2.1.el6.x86_64     8000  4096    2  160.22 76.74%     0.045     1535.99   0.00000  0.00000   209
2.6.32-279.2.1.el6.x86_64     8000  4096    4  160.26 208.7%     0.087     3242.12   0.00068  0.00000    77
2.6.32-279.2.1.el6.x86_64     8000  4096    8  143.09 462.8%     0.179     5060.46   0.00161  0.00000    31

Random Writes
2.6.32-279.2.1.el6.x86_64     8000  4096    1    7.26 3.252%     0.005        0.04   0.00000  0.00000   223
2.6.32-279.2.1.el6.x86_64     8000  4096    2    6.93 2.839%     0.007        0.04   0.00000  0.00000   244
2.6.32-279.2.1.el6.x86_64     8000  4096    4    7.16 0.549%     0.009        0.05   0.00000  0.00000  1302
2.6.32-279.2.1.el6.x86_64     8000  4096    8    5.77 0.590%     0.008        0.06   0.00000  0.00000   977
File creation test (creating 2x fewer files than in the previous tests!):
for i in 4 3 1 ; do echo  ===== CREATING FILES $i ==== ; time  for D in `seq 1000 1499` ; do  mkdir -p /mnt/$i/A/$D ; for F in `seq 1000 1234 1000000` ; do  dd  status=noxfer if=/dev/zero  bs=$F count=1 of=/mnt/$i/A/$D/$F.txt 2>/dev/null ; done ; done ; done

===== CREATING FILES 4 (split 4) ====
real    22m9.756s
user    1m55.964s
sys     14m11.965s

===== CREATING FILES 3 (split 3) ====
real    30m37.752s
user    1m48.219s
sys     13m24.232s

===== CREATING FILES 1 (no split) ====
real    78m11.036s
user    1m45.851s
sys     12m59.008s
OK - it looks like LVM with 3 or 4 stripes will do. The current volume layout:
lvs -o +seg_pe_ranges --segments 
  LV     VG   Attr     #Str Type    SSize PE Ranges                                                                              
  tests  test -wi-a---    1 linear  1,00t /dev/md11:0-262143                                                                     
  tests3 test -wi-a---    3 striped 1,00t /dev/md11:262144-349525 /dev/md12:0-87381 /dev/md13:0-87381                            
  tests4 test -wi-a---    4 striped 1,00t /dev/md11:349526-415061 /dev/md12:87382-152917 /dev/md13:87382-152917 /dev/md14:0-65535
So let's try to remove one array from the VG:

# vgreduce -d -v test /dev/md14
    Finding volume group "test"
    Using physical volume(s) on command line
  Physical volume "/dev/md14" still in use

:(
Of course - the test/tests4 volume is using /dev/md14. Let's try to move it:
# pvmove -v /dev/md14
    Finding volume group "test"
    Archiving volume group "test" metadata (seqno 4).
    Creating logical volume pvmove0
    Moving 65536 extents of logical volume test/tests4
  Insufficient suitable allocatable extents for logical volume : 65536 more required
  Unable to allocate mirror extents for pvmove0.
  Failed to convert pvmove LV to mirrored
Judging by everything, it's not a bug that several stripes cannot be placed on one PV - it's a "feature", one that has been fixed in RHEL6/CentOS6 ( https://bugzilla.redhat.com/show_bug.cgi?id=580155 ). So we have to try to reduce the number of stripes of the tests4 LV from 4 to 3:
## Find out the size in extents
lvdisplay /dev/test/tests4 | grep LE
  Current LE             262144

## Change the number of stripes while keeping the volume size (maybe the PVs to use can be specified right here??)
time lvextend -v -i 3 -l 262144 /dev/test/tests4 
    Finding volume group test
  New size (262144 extents) matches existing size (262144 extents)
  Run `lvextend --help' for more information.

real    0m2.186s
user    0m0.127s
sys     0m0.024s
[root@bfsa ~]# time lvextend -v -i 3 -l 262146 /dev/test/tests4 
    Finding volume group test
  Using stripesize of last segment 64,00 KiB
  Rounding size (262146 extents) up to stripe boundary size for segment (262147 extents)
    Archiving volume group "test" metadata (seqno 4).
  Extending logical volume tests4 to 1,00 TiB
    Found volume group "test"
    Found volume group "test"
    Loading test-tests4 table (253:2)
    Suspending test-tests4 (253:2) with device flush
    Found volume group "test"
    Resuming test-tests4 (253:2)
    Creating volume group backup "/etc/lvm/backup/test" (seqno 5).
  Logical volume tests4 successfully resized

real    0m2.626s
user    0m0.133s
sys     0m0.047s
Let's try to move the data off the disk:
pvmove -v /dev/md14 /dev/md13
    Finding volume group "test"
    Archiving volume group "test" metadata (seqno 5).
    Creating logical volume pvmove0
    Moving 65536 extents of logical volume test/tests4
  Insufficient suitable allocatable extents for logical volume : 65536 more required
  Unable to allocate mirror extents for pvmove0.
  Failed to convert pvmove LV to mirrored


:(
# See what is going on:
lvs -o +seg_pe_ranges --segments
  LV     VG   Attr     #Str Type    SSize  PE Ranges                                                                              
  tests  test -wi-a---    1 linear   1,00t /dev/md11:0-262143                                                                     
  tests3 test -wi-a---    3 striped  1,00t /dev/md11:262144-349525 /dev/md12:0-87381 /dev/md13:0-87381                            
  tests4 test -wi-a---    4 striped  1,00t /dev/md11:349526-415061 /dev/md12:87382-152917 /dev/md13:87382-152917 /dev/md14:0-65535
  tests4 test -wi-a---    3 striped 12,00m /dev/md11:415062-415062 /dev/md12:152918-152918 /dev/md13:152918-152918                



Monday, July 23, 2012

MegaCli on Ubuntu 12.04 64-bit

apt-get install alien
wget http://www.lsi.com/downloads/Public/MegaRAID%20Common%20Files/8.04.07_MegaCLI.zip
unzip 8.04.07_MegaCLI.zip
unzip CLI_Lin_8.04.07.zip
unzip MegaCliLin.zip
alien -k  MegaCli-8.04.07-1.noarch.rpm
apt-get install libsysfs2
ln -s /lib/libsysfs.so.2.0.1 /lib/libsysfs.so.2.0.2
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll
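
Note that alien -k only converts the package - the resulting .deb still has to be installed (use whatever file name alien actually printed; the one below is an assumption):
dpkg -i megacli_8.04.07-1_all.deb
A couple of other frequently used queries in the same option style:
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll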

Sunday, July 1, 2012

OpenVZ on KVM virtual machine. Network configuration

Required scenario:

  • Build Linux machines as OpenVZ containers (CT101, CT102, ...) on an OpenVZ host, which itself is a KVM virtual machine KvmH on the KVM virtualization host HstA
  • Both the OpenVZ host KvmH and the KVM host HstA use the internal network by default, but the OVZ containers should be directly connected to the external net or to other internal nets, which are separated by VLANs.
  • The KVM host HstA should be connected to the switch with two Ethernet cards, bonded for redundancy/bandwidth/NAS reasons.

HstA network configuration

OS - Ubuntu 12.04 server

... install ifenslave, bridge-utils, vlan ...

/etc/network/interfaces:
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.

auto lo 

iface lo inet loopback

auto bond0
iface bond0 inet manual
    post-up    ifenslave bond0 eth0 eth1
    pre-down ifenslave -d bond0 eth0 eth1
    dns-nameservers 10.0.0.1
    dns-search internal.example.com

## - br0 IntLAN A  - on default VLAN
auto br0
iface br0 inet manual
    up ifconfig  bond0 up
    up brctl addbr br0
    up brctl addif br0 bond0
    up brctl stp  br0 on
    up ifconfig br0 10.0.0.11 netmask 255.255.0.0
    up route add default gw 10.0.0.1
    down brctl delbr br0

## - br4 - on tagged ExtLAN VLAN4 -  (192.0.2.0/24)
auto vlan4
iface vlan4 inet manual
    up ifconfig vlan4 up
    vlan_raw_device bond0

auto br4
iface br4 inet manual
    up ifconfig vlan4 up
    up brctl addbr br4
    up brctl addif br4 vlan4
    up brctl stp br4 on
    up ifconfig br4 0.0.0.0 up
    down brctl delif br4 vlan4
    down brctl delbr br4

## - br6 - on tagged intLAN B VLAN6 -  (192.168.1.0/24)
auto vlan6
iface vlan6 inet manual
    up ifconfig vlan6 up
    vlan_raw_device bond0

auto br6
iface br6 inet manual
    up ifconfig vlan6 up
    up brctl addbr br6
    up brctl addif br6 vlan6
    up brctl stp br6 on
    # up ifconfig br6 192.168.1.2 netmask 255.255.255.0
    up ifconfig br6 0.0.0.0 up
    down brctl delif br6 vlan6
    down brctl delbr br6

##  - br8 - on tagged intLAN C VLAN8 (192.168.2.0/24)
auto vlan8
iface vlan8 inet manual
    up ifconfig vlan8 up
    vlan_raw_device bond0

auto br8
iface br8 inet manual
    up ifconfig vlan8 up
    up brctl addbr br8
    up brctl addif br8 vlan8
    up brctl stp br8 on
    # up ifconfig br8 192.168.2.2 netmask 255.255.255.0
    up ifconfig br8 0.0.0.0 up
    down brctl delif br8 vlan8
    down brctl delbr br8


/etc/modprobe.d/bonding.conf
alias bond0 bonding
options bonding mode=4 miimon=100

KvmH network configuration:


OS - CentOS 6.2

Virtual NIC eth0 is connected to br0 on HstA
Virtual NIC eth1 is connected to br4 on HstA

... install vzkernel vzctl vzquota bridge-utils ...
... configure/disable iptables, allow ip_forwarding via sysctl ...

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE="eth0"
TYPE="Ethernet"
HWADDR="52:54:00:E3:AB:CD"
BOOTPROTO=none
ONBOOT="yes"
NM_CONTROLLED="no"
BRIDGE=vzbr0

/etc/sysconfig/network-scripts/ifcfg-vzbr0:
DEVICE=vzbr0
TYPE=Bridge
IPADDR=10.0.0.4
NETMASK=255.255.0.0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
DELAY=0

/etc/sysconfig/network-scripts/ifcfg-eth1:
DEVICE="eth1"
TYPE="Ethernet"
BOOTPROTO=none
ONBOOT="yes"
NM_CONTROLLED="no"
BRIDGE=vzbr4

/etc/sysconfig/network-scripts/ifcfg-vzbr4:
DEVICE=vzbr4
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
DELAY=0

/etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=kvmh.int.test
GATEWAY=10.0.0.1

Prepare VZ to automatically add configured container veth interfaces to host bridges 

http://wiki.openvz.org/Virtual_Ethernet_device#Making_a_bridged_veth-device_persistent

Just create /etc/vz/vznet.conf containing the following.
#!/bin/bash
EXTERNAL_SCRIPT="/usr/sbin/vznetaddbr"

Build containers:


Download OpenVZ template:

cd /vz/template/cache
wget http://download.openvz.org/template/precreated/ubuntu-12.04-x86.tar.gz

Create container:

vzctl create 101 --ostemplate ubuntu-12.04-x86

Add veth interface:

vzctl set 101 --netif_add eth0,,,,vzbr0 --save

Start the container and check that veth101.0 has been added to bridge vzbr0:
vzctl start 101
brctl show

Go inside container 101 (vzctl enter 101) and configure eth0 as usual (e.g. via /etc/network/interfaces); see the sketch below.
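
A sketch of what /etc/network/interfaces inside CT101 could look like for a static address on the internal LAN (the address is purely illustrative):
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 10.0.0.101
    netmask 255.255.0.0
    gateway 10.0.0.1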

Add other containers and bridges:

vzctl create 102 --ostemplate ubuntu-12.04-x86
vzctl set 102 --netif_add eth0,,,,vzbr4 --save

vzctl create 145 --ostemplate ubuntu-12.04-x86
vzctl set 145 --netif_add eth0,,,,vzbr0 --save
vzctl set 145 --netif_add eth1,,,,vzbr4 --save




Monday, February 27, 2012

Network interface bonding, connected to a ProCurve switch with a static LACP trunk.

In English the title could read as follows:
Network interface bonding with Procurve LACP static trunk and VLAN tagging.

If a server has 2 or more network cards, it can be worthwhile to combine them (NIC bonding, teaming) to get higher throughput as well as fault tolerance.

The following was done on an Ubuntu server and a ProCurve network switch.

On the server

Install ifenslave, the bonding module and its parameters.
apt-get install ifenslave 
echo alias bond0 bonding >  /etc/modprobe.d/bonding.conf 
echo options bonding mode=4 miimon=100 >>  /etc/modprobe.d/bonding.conf
Check how the bonding module has loaded:

===== cat /proc/net/bonding/bond0 ======
Ethernet Channel Bonding Driver: v3.2.3 (December 6, 2007)


Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0


802.3ad info
LACP rate: slow
bond bond0 has no active aggregator
=============================================

Configure the bond0 network interface by editing /etc/network/interfaces:


===== cat /etc/network/interfaces ======
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).


# The loopback network interface
auto lo
iface lo inet loopback


# The primary network interface
auto bond0
iface bond0 inet static
  address 10.0.0.4
  netmask 255.255.0.0
  gateway 10.0.0.1
  post-up ifenslave bond0 eth0 eth1
  pre-down ifenslave -d bond0 eth0 eth1
==============================================


On the ProCurve


Creating the trunk


If you don't want to use VLANs, you can simply put all the relevant switch ports into LACP active or passive(?) mode - then, as soon as the configured server's network ports are plugged in, dynamic trunks such as Dyn1 etc. are created automatically. Unfortunately VLAN tagging cannot be applied to dynamic trunks, so static trunks have to be created:

config t 
trunk 9-10 trk5 lacp

To check the created trunk:
show lacp
....

                           LACP


   PORT   LACP      TRUNK     PORT      LACP      LACP
   NUMB   ENABLED   GROUP     STATUS    PARTNER   STATUS
   ----   -------   -------   -------   -------   -------
   9      Active    Trk5      Down      No        Success
   10     Active    Trk5      Down      No        Success


Attaching a VLAN


VLANs are attached as usual, but instead of a switch port you specify Trk5, e.g.:

vlan 8 tagged Trk5
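
To double-check which VLANs ended up on the trunk, the switch can be queried as well (ProCurve syntax from memory - it may vary between firmware versions):
show vlans ports Trk5 detail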

Connecting the server to the trunk

The server can be moved from a single-cable connection to the LACP trunk in a "hot-swap" manner:

1. Connect a network cable from the so-far-unused network interface (e.g. eth1) to a Trk5 port (e.g. port 10)
2. On the server, bring down the currently active interface eth0 and immediately bring up bond0 -
 ifconfig eth0 down
 ifconfig bond0 up
3. After a 1-2 second interruption the bond0 interface starts working
4. Move the previously used network interface (eth0) over to the new trunk as well (to switch port 9).

Once the server is connected:


==== cat /proc/net/bonding/bond0  =====
Ethernet Channel Bonding Driver: v3.2.3 (December 6, 2007)


Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0


802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 17
Partner Key: 289
Partner Mac Address: 00:1f:fe:1f:92:c0


Slave Interface: eth0
MII Status: up
Link Failure Count: 5
Permanent HW addr: 00:15:17:5e:d8:34
Aggregator ID: 2


Slave Interface: eth1
MII Status: up
Link Failure Count: 2
Permanent HW addr: 00:15:17:5e:d8:35
Aggregator ID: 2
========================================