Wednesday, July 30, 2008
How to configure a 3M mouse with scrolling in Linux
Add this to your xorg.conf file.
Section "InputDevice" # Configuration for 3M mouse, with scrolling Identifier "Mouse2" Driver "mouse" Option "Device" "/dev/input/mouse2" Option "EmulateWheel" "on" Option "EmulateWheelButton" "2" Option "EmulateWheelInertia" "20" Option "YAxisMapping" "4 5" Option "XAxisMapping" "6 7" Option "EmulateWheelTimeout" "150" EndSection
Note: make sure the other mouse's InputDevice section refers to that specific mouse device, not the general aggregate device (typically /dev/input/mice). You can test which device is which by running cat /dev/input/mouse1, cat /dev/input/mouse2, etc. and moving a mouse to see which one produces output.
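Another way to tell the mice apart (an optional check, assuming a 2.6-series kernel with /proc/bus/input) is to look at /proc/bus/input/devices, which lists each input device's name next to the mouseN/eventN handlers it is bound to:

# Show each input device's name and the handler nodes (mouseN / eventN) it maps to.
grep -E '^(N|H):' /proc/bus/input/devices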
Tuesday, July 22, 2008
Changing storage under lvm
I’ve got an LVM volume group that I want to change from using RAID0 to using RAID1.
The volume group is made up of a few RAID0 volumes. To change the storage, I do the following:
pvmove the data off at least one of the physical volumes.
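A sketch of that step, using /dev/md3 as the example physical volume (the volume group needs enough free space on its other PVs):

# Move all allocated extents off /dev/md3 onto the remaining PVs in the same VG;
# pvmove picks the destinations automatically.
pvmove /dev/md3

# Or push the extents to one specific PV instead:
pvmove /dev/md3 /dev/md5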
[root@wizards nelg]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md1   system lvm2 a-    44.92G   8.32G
  /dev/md2   system lvm2 a-    46.87G  19.87G
  /dev/md3   other  lvm2 --    93.75G  93.75G
  /dev/md4   other  lvm2 --    93.75G  93.75G
  /dev/md5   other  lvm2 a-    93.75G   1.73G
  /dev/md6   other  lvm2 a-    92.62G       0
  /dev/md7   data   lvm2 a-   306.41G  52.67G
  /dev/md8   data   lvm2 a-   306.41G  62.67G
  /dev/md9   data   lvm2 a-   318.68G 133.21G
[root@wizards nelg]#
vgreduce the physical volume out of the volume group, then remove its LVM label with pvremove:
lvm> vgreduce other /dev/md3
  Removed "/dev/md3" from volume group "other"
lvm> pvremove /dev/md3
  Labels on physical volume "/dev/md3" successfully wiped
lvm> pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md1   system lvm2 a-    44.92G   8.32G
  /dev/md2   system lvm2 a-    46.87G  19.87G
  /dev/md3          lvm2 --    93.75G  93.75G
  /dev/md4   other  lvm2 --    93.75G  93.75G
  /dev/md5   other  lvm2 a-    93.75G   1.73G
  /dev/md6   other  lvm2 a-    92.62G       0
  /dev/md7   data   lvm2 a-   306.41G  52.67G
  /dev/md8   data   lvm2 a-   306.41G  62.67G
  /dev/md9   data   lvm2 a-   318.68G 133.21G
lvm>
The device currently looks like this in mdadm:
mdadm -D /dev/md3
/dev/md3:
        Version : 00.90.03
  Creation Time : Sun Jan  8 23:02:59 2006
     Raid Level : raid0
     Array Size : 98301440 (93.75 GiB 100.66 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon Jul  7 20:46:54 2008
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

           UUID : 2b3493e2:0a59d252:d9ed1514:2cb69c87
         Events : 0.3

    Number   Major   Minor   RaidDevice State
       0       3        7        0      active sync   /dev/hda7
       1       3       71        1      active sync   /dev/hdb7
Stop the device
[root@wizards nelg]# mdadm -S /dev/md3
mdadm: stopped /dev/md3
pvs now shows:
[root@wizards nelg]# pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/md1   system lvm2 a-    44.92G   8.32G
  /dev/md2   system lvm2 a-    46.87G  19.87G
  /dev/md4   other  lvm2 --    93.75G  93.75G
  /dev/md5   other  lvm2 a-    93.75G   1.73G
  /dev/md6   other  lvm2 a-    92.62G       0
  /dev/md7   data   lvm2 a-   306.41G  52.67G
  /dev/md8   data   lvm2 a-   306.41G  62.67G
  /dev/md9   data   lvm2 a-   318.68G 133.21G
Now, all that remains is to recreate the device as RAID1, update /etc/mdadm.conf and rebuild the initrd to reflect the change.
So, double-check that the devices are not part of a currently running array:
grep hda7 /proc/mdstat
grep hdb7 /proc/mdstat
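Both greps should print nothing. The same check as a single command, if you prefer:

# If either partition is still in an array, grep prints the matching md line;
# otherwise the echo runs.
grep -E 'hda7|hdb7' /proc/mdstat || echo "hda7/hdb7 not in any running array"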
Build the new device
mdadm --create /dev/md3 -l 1 -n 2 /dev/hda7 /dev/hdb7
This will warn, as shown below:
[root@wizards nelg]# mdadm --create /dev/md3 -l 1 -n 2 /dev/hda7 /dev/hdb7
mdadm: /dev/hda7 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Sun Jan  8 23:02:59 2006
mdadm: /dev/hdb7 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Sun Jan  8 23:02:59 2006
Continue creating array? y
mdadm: array /dev/md3 started.
There is now a new device, as shown below.
[root@wizards nelg]# mdadm -D /dev/md3
/dev/md3:
        Version : 00.90.03
  Creation Time : Tue Jul 22 20:49:22 2008
     Raid Level : raid1
     Array Size : 49150720 (46.87 GiB 50.33 GB)
  Used Dev Size : 49150720 (46.87 GiB 50.33 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Tue Jul 22 20:49:22 2008
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 3% complete

           UUID : 60f87451:d385361e:c4cdb003:fcc220ea
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       3        7        0      active sync   /dev/hda7
       1       3       71        1      active sync   /dev/hdb7
[root@wizards nelg]#
Notice that the devices are being synced.
/proc/mdstat shows:
md3 : active raid1 hdb7[1] hda7[0]
      49150720 blocks [2/2] [UU]
      [==>..................]  resync = 13.3% (6540864/49150720) finish=23.3min speed=30358K/sec
Now, I take the UUID: 60f87451:d385361e:c4cdb003:fcc220ea and update /etc/mdadm.conf to reflect this.
I.e.
-ARRAY /dev/md3 UUID=2b3493e2:0a59d252:d9ed1514:2cb69c87 auto=yes
+ARRAY /dev/md3 UUID=60f87451:d385361e:c4cdb003:fcc220ea auto=yes
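Rather than editing the UUID by hand, the replacement line can also be generated from the running array (the exact fields printed vary a little between mdadm versions):

# Print an ARRAY line for the freshly created md3, suitable for /etc/mdadm.conf.
mdadm --detail --scan | grep '/dev/md3'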
Next is to update the initrd.
There are two choices: build a new initrd, or just change the existing one.
As I like to be able to boot my system, I’ll do both, so I have a spare if one does not work.
Approach 1
[root@wizards boot]# mkdir tt2
[root@wizards boot]# cd tt2
[root@wizards tt2]# cat ../initrd-2.6.24.5-server-2mnb.img | gzip -d -c | cpio -i
12546 blocks
[root@wizards tt2]# ls etc
blkid/  ld.so.cache  ld.so.conf  ld.so.conf.d/  lvm/  mdadm.conf  suspend.conf
[root@wizards tt2]# pwd
/boot/tt2
So, as you can see, there is an mdadm.conf inside the initrd image.
cp /etc/mdadm.conf /boot/tt2/etc/
Then, put the initrd back together:
find . | cpio -H newc --quiet -o | gzip -9 > ../initrd-2.6.24.5-server-2mnb-new.img
Note: I gave this a new name, as I don’t like overwriting my existing initrd, just in case. So I’ll just change my symlink to use this one.
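A sketch of that symlink change, assuming /boot/initrd.img is the name the bootloader entry points at (adjust to whatever your menu.lst or lilo.conf actually references):

cd /boot
# Point the symlink at the rebuilt image; the original initrd file is left untouched.
ln -sf initrd-2.6.24.5-server-2mnb-new.img initrd.img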
Approach 2
mkinitrd /boot/initrd-2.6.24.5-server-2mnb-new1.img $(uname -r)
Both approaches should work. I’ll comment further if one does not.
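Either way, a quick sanity check that the image the bootloader will load really carries the new UUID (assuming a gzip-compressed cpio initrd, as unpacked above):

# Pull etc/mdadm.conf straight out of the new initrd and look for the md3 line.
zcat /boot/initrd-2.6.24.5-server-2mnb-new.img | cpio -i --quiet --to-stdout '*mdadm.conf' | grep md3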
Now, last but not least, it’s time to make use of our new device.
In my case, that means creating a new volume group for my virtual machines, after adding this device to LVM.
lvm> pvcreate /dev/md3
  Physical volume "/dev/md3" successfully created
lvm> vgcreate virtualmachines /dev/md3
  Volume group "virtualmachines" successfully created
lvm> vgs
  VG              #PV #LV #SN Attr   VSize   VFree
  data              3   3   0 wz--n- 931.51G 248.54G
  other             3   1   0 wz--n- 280.12G  95.48G
  system            2   6   0 wz--n-  91.79G  28.19G
  virtualmachines   1   0   0 wz--n-  46.87G  46.87G
lvm> lvcreate -n ms -L 5G virtualmachines
  Logical volume "ms" created
lvm> lvdisplay /dev/mapper/virtualmachines-ms
  --- Logical volume ---
  LV Name                /dev/virtualmachines/ms
  VG Name                virtualmachines
  LV UUID                GGV1PY-mi9Q-yqBz-3bFE-U3Fa-bpS9-pou7fX
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:10
lvm>
Now, just format it:
[root@wizards boot]# mkfs.xfs /dev/virtualmachines/ms
meta-data=/dev/virtualmachines/ms isize=256    agcount=4, agsize=327680 blks
         =                        sectsz=512   attr=2
data     =                        bsize=4096   blocks=1310720, imaxpct=25
         =                        sunit=0      swidth=0 blks
naming   =version 2               bsize=4096
log      =internal log            bsize=4096   blocks=2560, version=2
         =                        sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                    extsz=4096   blocks=0, rtextents=0
[root@wizards boot]#

and mount:
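The mount command itself isn't captured above; a minimal sketch, using the mount point shown in the df output below (add a matching /etc/fstab entry if it should survive reboots):

# Create the mount point and mount the new XFS filesystem.
mkdir -p /virtualmachines/ms
mount /dev/virtualmachines/ms /virtualmachines/ms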
[root@wizards /]# df -h /virtualmachines/ms/
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/virtualmachines-ms  5.0G  4.2M  5.0G   1% /virtualmachines/ms
[root@wizards /]#
Sunday, July 20, 2008
How to find out which boot loader is installed.
dd if=/dev/hda of=/tmp/bootsec.img bs=512 count=1
file /tmp/bootsec.img
grep LILO /tmp/bootsec.img
grep GRUB /tmp/bootsec.img
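Since the boot sector is binary, the greps above will usually just report "Binary file matches". A variant that skips the temporary file and prints the signature text it finds:

# Read the MBR and search it for the boot loader's embedded signature string.
dd if=/dev/hda bs=512 count=1 2>/dev/null | strings | grep -iE 'GRUB|LILO'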
Monday, July 07, 2008
mdadm buggy
Well, just had a rather nasty experience.
After rebooting my computer (because I had put a new fan in it), I watched it fail to start the /dev/md4 and /dev/md5 RAID0 devices during startup. This then meant that none of the volumes within my "other" volume group could be started. Nasty! So, of course, /storage could not be mounted.
So, TO WORK…
After some careful investigation, /proc/mdstat showed that /dev/md4 only had one device (hda8), and not hdb8. It showed md4 as inactive.
and dmesg showed:
md: array md4 already has disks!
md: array md5 already has disks!
Very strange, because it should have shown something like this:
md4: setting max_sectors to 128, segment boundary to 32767
raid0: looking at hda8
raid0: comparing hda8(49150720) with hda8(49150720)
raid0: END
raid0: ==> UNIQUE
raid0: 1 zones
raid0: looking at hdb8
raid0: comparing hdb8(49150720) with hda8(49150720)
raid0: EQUAL
raid0: FINAL 1 zones
raid0: done.
raid0 : md_size is 98301440 blocks.
raid0 : conf->hash_spacing is 98301440 blocks.
raid0 : nb_zone is 1.
raid0 : Allocating 8 bytes for hash.
lvs showed that it could not find all the physical volumes to start the “other” volume group.
So, I stopped the device:
mdadm -S /dev/md4
Then reassembled it, using:
mdadm -A /dev/md4 /dev/hda8 /dev/hdb8
This then started fine, and showed it as clean.
So, an
lvchange -ay /dev/other/storage
had me back in business.
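For the record, activating the whole volume group in one go would also have worked:

# Activate every logical volume in the "other" volume group at once.
vgchange -ay other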