Roy's notes

From MalinWiKi

== Low level disk handling ==
=== "Unplug" drive from system ===
<nowiki>
# echo 1 > /sys/bus/scsi/devices/H:B:T:L/delete
</nowiki>
=== Fail injection with debugfs ===
[https://lxadm.com/Using_fault_injection https://lxadm.com/Using_fault_injection]
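To bring a removed drive back, or to detect newly attached ones, the SCSI host can be rescanned. A minimal sketch; the host number (here host0) is an assumption, check /sys/class/scsi_host/ for the right one:
<nowiki>
# echo "- - -" > /sys/class/scsi_host/host0/scan
</nowiki>
The three dashes are wildcards for channel, target and LUN, so this rescans everything on that host.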
== LVM ==
LVM is Linux' Logical Volume Manager. It is designed as an abstraction layer on top of physical drives or RAID, typically [https://raid.wiki.kernel.org/index.php/RAID_setup mdraid] or [https://help.ubuntu.com/community/FakeRaidHowto fakeraid]. Keep in mind that fakeraid should be avoided unless you really need it, for instance when dual-booting Linux and Windows on the same fakeraid. LVM broadly consists of three elements: the "physical" devices (PV), the volume group (VG) and the logical volume (LV). There can be multiple PVs, VGs and LVs, depending on requirements. More about this below. All commands are given as examples, and all of them can be fine-tuned with extra flags if need be. In my experience, the defaults work well for most use cases.
I'm also mentioning filesystems below. Where I write ext4, the same applies to ext2 and ext3.
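To inspect each of the three layers on an existing system, LVM ships a listing command per layer:
<nowiki>
# pvs    (list physical volumes)
# vgs    (list volume groups)
# lvs    (list logical volumes)
</nowiki>
These are handy for checking the result of the commands below, and their more verbose siblings pvdisplay, vgdisplay and lvdisplay give full details.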
==== Create a PV ====
A PV is the "physical" part. This does not need to be a physical disk; it can also be another RAID, be it an mdraid, fakeraid, hardware RAID, a virtual disk on a SAN, or a partition on any of those.
<nowiki>
# pvcreate /dev/sdb
# pvcreate /dev/md1 /dev/md2
</nowiki>
These add three PVs: one on a drive (or hardware RAID) and another two on mdraids. For more information about pvcreate, see [https://linux.die.net/man/8/pvcreate the manual].
==== Create a VG ====
The volume group consists of one or more PVs grouped together, on which LVs can be placed. If several PVs are grouped in a VG, it's generally a good idea to make sure these PVs have some sort of redundancy, as in mdraid/fakeraid or hwraid. Otherwise it will be like using a [https://en.wikipedia.org/wiki/RAID RAID-0] with a single point of failure on each of the independent drives. LVM has [https://unix.stackexchange.com/questions/150644/raiding-with-lvm-vs-mdraid-pros-and-cons RAID code in it] as well, so you can use that instead. I haven't done so myself, as I generally stick to mdraid. The reason is that mdraid is, in my opinion, older and more stable and has more users (meaning bugs are reported and fixed faster whenever they are found). That said, I believe the actual RAID code used in LVM RAID is the same function calls as for mdraid, so there may not be much of a difference. I still stick with mdraid. To create a VG, run
<nowiki>
# vgcreate vgname /dev/md1
</nowiki>
Note that if vgcreate is given a device (as /dev/md1 above) that has not been initialised as a PV, the pvcreate step is done implicitly, so if you don't need any special flags to pvcreate, you can simply skip it and let vgcreate do it for you.
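As a sketch of that implicit form, assuming /dev/sdc is an unused disk with no PV label on it:
<nowiki>
# vgcreate vgname /dev/sdc
</nowiki>
Here /dev/sdc is initialised as a PV and added to the new VG in a single step.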
==== Create an LV ====
LVs can be compared to partitions, in a way, since they bound a fraction (or all) of a VG. The difference between them and a partition, however, is that they can be grown or shrunk easily, and also moved around between PVs without downtime. This flexibility makes them superior to partitions, as your system can be changed without users noticing it. By default, an LV is allocated "thickly", meaning all the space given to it is allocated from the VG, and thus from the PV. The following makes a 100GB LV named "thicklv". When making an LV, I usually allocate what's needed plus some more, but not everything, just to make sure there's space available for growth of any of the LVs in the VG, or for new LVs.
<nowiki>
# lvcreate -n thicklv -L 100G vgname
</nowiki>
After the LV is created, a filesystem can be placed on it, unless it is meant to be used directly. Applications for direct use include swap space, VM storage and certain database systems. Most of these will, however, work on filesystems too, although my testing has shown that for swap space, there is a significant performance gain in using dedicated storage without a filesystem. As for filesystems, most Linux users use either ext4 or XFS. Personally, I generally use XFS these days. The only thing left that can't be done on XFS is shrinking a filesystem.
<nowiki>
# mkfs -t xfs /dev/vgname/thicklv
</nowiki>
Then just add an entry to /etc/fstab with the correct data and run mount -a, and you should be all set.
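A minimal sketch of such an fstab entry; the mount point /srv/data is an assumption and must exist before mounting:
<nowiki>
/dev/vgname/thicklv  /srv/data  xfs  defaults  0  0
</nowiki>
After adding the line, mount -a mounts everything in fstab that isn't already mounted, so the new filesystem comes up without a reboot.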
==== Growing devices ====
LVM objects can be grown and shrunk. If a PV resides on a RAID where a new drive has been added or otherwise grown, or on a partition or virtual disk that has been extended, the PV must be updated to reflect these changes. The following command will grow the PV to the maximum available on the underlying storage.
<nowiki>
# pvresize /dev/md1
</nowiki>
If a new PV is added, the VG can be grown to add the space on that in addition to what's there already.
<nowiki>
# vgextend vgname /dev/md2
</nowiki>
With more space available in the VG, the LV can now be extended. Let's add another 50GB to it.
<nowiki>
# lvresize -L +50G vgname/thicklv
</nowiki>
After the LV has grown, run ''xfs_growfs'' (XFS) or ''resize2fs'' (ext4) to make the filesystem use the new space.
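As a sketch of that last step, assuming thicklv is mounted at /srv/data: note that xfs_growfs takes the mount point, while resize2fs takes the device.
<nowiki>
# xfs_growfs /srv/data               (XFS)
# resize2fs /dev/vgname/thicklv      (ext4)
</nowiki>
With no size argument, both grow the filesystem to fill the LV.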
=== LVthin ===
(to be continued)
==== Create a thin pool ====
(to be continued)
==== Create a thin volume ====
(to be continued)

Revision as of 17:30, 16 October 2017
