Scripts

The scripts in this repo, lustreOSSinstall.sh and lustreOSSpostRebootInstall.sh, attempt to automate the install. They are very hacky and not 100% reliable: sometimes installing the lustre packages results in a clean dkms module and other times it doesn't. There's a fix to remove a warning when the lustre module is installed (see the script, which modifies a config file and then rebuilds the module). There are two scripts because I found a reboot helps, so run lustreOSSinstall.sh first, which ends in a reboot, then run the second script. lwastor07 still isn't building the modules correctly; there are missing dependencies when trying to load the osd-zfs mount module. Go figure.
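The intended order is roughly the following (running them with sudo is my assumption; anything else the scripts need is in the scripts themselves):

sudo ./lustreOSSinstall.sh            # installs packages, ends in a reboot
sudo ./lustreOSSpostRebootInstall.sh  # run after the node comes back up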

Install lustre repo config file

Copy lustre.repo from this repo to /etc/yum.repos.d, then enable it (I don't think the update is needed):
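For example, from a checkout of this repo (the relative path is an assumption):

sudo cp lustre.repo /etc/yum.repos.d/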

yum-config-manager --enable \*

Then run yum update:

yum update

MDS SW install

create md array. This was already done:

[ubuntu@lwastormds ~]$ cat /proc/mdstat 
Personalities : [raid10] 
md127 : active raid10 nvme5n1p1[5] nvme2n1p1[2] nvme4n1p1[4] nvme0n1p1[0] nvme3n1p1[3] nvme1n1p1[1]
      9376450560 blocks super 1.2 512K chunks 2 near-copies [6/6] [UUUUUU]
      bitmap: 0/70 pages [0KB], 65536KB chunk

unused devices: <none>
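For reference, a similar RAID10 array could be created with something like the following. This was not run here (the array already existed); the device list and chunk size are taken from the mdstat output above:

sudo mdadm --create /dev/md127 --level=10 --raid-devices=6 --chunk=512 \
    /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 \
    /dev/nvme3n1p1 /dev/nvme4n1p1 /dev/nvme5n1p1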

Install the kmod packages on the MDS using the above repo

sudo yum install kmod-lustre-osd-ldiskfs lustre lustre-osd-ldiskfs-mount

This resulted in a kernel version change, so reboot afterwards, then load the modules:

sudo modprobe lustre
sudo modprobe ldiskfs
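A quick sanity check that the modules actually loaded:

lsmod | grep -E 'lustre|ldiskfs'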

make filesystem on /dev/md127

sudo mkfs.lustre --reformat --mdt --mgs --backfstype=ldiskfs --fsname=lstore --mgsnode=10.41.0.86 --index=0 /dev/md127

mount it

sudo mkdir /metastor
sudo mount -t lustre /dev/md127 /metastor
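Optionally, confirm the MGS/MDT came up; lctl dl lists the local Lustre devices:

sudo lctl dl
df -h /metastor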

Finish configuring the MDS

Install lustre on the OSSs with ZFS support

Use the same repo definition, then install the zfs modules

sudo yum install kmod-lustre-osd-zfs lustre lustre-osd-zfs-mount e2fsprogs e2fsprogs-libs
sudo modprobe lustre
sudo modprobe zfs

The above works some of the time; I'm not sure why. The lustre*.sh scripts attempt to do what is necessary but are really hacky. Fiddling seems to be needed, and some reboots are necessary, likely because I'm unaware of the right command-line programs.
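When the module build goes sideways, checking dkms is a reasonable first step (a suggestion, not something the scripts require):

sudo dkms status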

make vdevs using zpool create
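A sketch of the kind of zpool create I mean. The pool name has to match what's used in the mkfs.lustre and mount commands below (here lwastor01b); the raid layout and member disks are assumptions:

sudo zpool create lwastor01b raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg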

make filesystem on vdev

Note: all OSTs, to my knowledge, need the same fsname in order to be added to the overall storage allocation seen by the client. There may be other ways to do this. Also, the index number is used to create the fsname-OSTnnnn label, which has to be unique. nnnn is in hex. The script ensures this as long as the hostname is of the form lwastorXX, which they are.

sudo mkfs.lustre --reformat --ost --backfstype=zfs --fsname=lstore --mgsnode=10.41.0.86 --index=2 lwastor01b/ost
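For example, the index becomes the hex nnnn in the label, so the index 2 used above gives lstore-OST0002 (and index 16 would give lstore-OST0010):

printf 'lstore-OST%04x\n' 2    # -> lstore-OST0002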

Mount filesystem
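The mount point has to exist first (the path matches the mount command below):

sudo mkdir -p /mnt/lustre/lustre-ost01b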

sudo mount -t lustre lwastor01b/ost /mnt/lustre/lustre-ost01b/

Enable lustre service

sudo systemctl enable lustre

The service mount points are actually /mnt/lustre/local/<mountpoint>, which it somehow creates. When it's all working, these mounts come back after a reboot.
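After a reboot, a quick way to confirm the OST mounts came back is to list the mounted lustre filesystems:

mount -t lustre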

lwastor07 osd-zfs missing symbols

Fixed. The zfs repo definition file had been moved out of the way during a previous yum update. I moved it back (/etc/yum.repos.d/zfs.repo) and did an update of the zfs and zfs-dkms packages. The zfs dkms module built fine, and I rebuilt the lustre-zfs dkms module (after fixing up the conf file; see scripts). It built without error. I then reformatted the lustre filesystems on the vdevs and mounted them.
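Roughly the sequence that got it working (the dkms version is a placeholder; the conf-file fix itself is in the scripts):

sudo yum update zfs zfs-dkms
sudo dkms build lustre-zfs/<version>
sudo dkms install lustre-zfs/<version>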