Building a Lustre Filesystem on a Mini PC
Introduction
Lustre is a high-performance parallel filesystem used in many of the world’s largest supercomputers, but you don’t need a million-dollar HPC cluster to learn how it works.
In this post, I’ll walk through setting up a functional Lustre filesystem using nothing more than a mini PC running Proxmox VE in my homelab.
While this setup won’t handle production workloads, it’s perfect for understanding Lustre’s architecture and getting hands-on experience with its components.
Hardware and Software Specifications
The hardware I’m using is a Beelink SER5 MAX mini PC: an AMD Ryzen 7 5800H CPU (8 cores / 16 threads), 64GB of DDR4 memory, and a 1TB Samsung 970 EVO Plus NVMe SSD.
I’m running Proxmox Virtual Environment 8.4 as the hypervisor. Proxmox VE is a versatile virtualization platform that allows us to create multiple virtual machines on a single host, making it ideal for simulating a distributed Lustre environment.
This setup uses a single physical server with one storage device, so we won’t experience Lustre’s key production benefits: high availability, aggregated bandwidth, or true storage parallelism.
This is purely educational—real Lustre deployments span multiple servers with dedicated metadata and object storage nodes connected via high-speed networks.
Architecture Overview
The Lustre cluster consists of the following virtual machines:
- 1x MGS (Management Server, 1x vCPU, 4GB RAM) with 1x MGT (Management Target, 2GB)
- 1x MDS (Metadata Server, 2x vCPU, 8GB RAM) with 1x MDT (Metadata Target, 4GB)
- 2x OSS (Object Storage Server, 2x vCPU, 4GB RAM) with 1x OST (Object Storage Target, 48GB) each
- 2x Client (1x vCPU, 4GB RAM)
All servers run AlmaLinux 8.10 with kernel version 4.18.0.
Installation Overview
- Set up Lustre file system network
- Set up Lustre hosts and storage devices
- Prepare Lustre installation
- Add Lustre package repo
- Install dependencies
- Install Linux kernel with Lustre support
- Install Lustre modules
- Create MGT @ MGS, start MGS
- Create MDT @ MDS, start MDS
- Create OST @ OSS, start OSS
- Set up Lustre clients
Network Setup
All VMs are connected via a Linux Bridge on the PVE host, with the following IP addresses:
10.100.0.11 mgs1
10.100.0.21 mds1
10.100.0.31 oss1
10.100.0.32 oss2
10.100.0.41 client1
10.100.0.42 client2
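These entries double as /etc/hosts lines; adding them on every VM keeps the later commands and log messages readable. A minimal sketch:
# append the hostname mappings to /etc/hosts on every node and client
cat >> /etc/hosts <<'EOF'
10.100.0.11 mgs1
10.100.0.21 mds1
10.100.0.31 oss1
10.100.0.32 oss2
10.100.0.41 client1
10.100.0.42 client2
EOF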
Lustre Server Setup (All Servers)
Disable firewalld and SELinux
# stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
# disable SELinux
sed -i '/^SELINUX=/c\SELINUX=disabled' /etc/selinux/config
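The sed change only takes effect after a reboot; to drop SELinux to permissive mode for the current session as well, you can run:
# switch SELinux to permissive immediately, then confirm
setenforce 0
getenforce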
Add Lustre repo
The repository configuration below follows the Whamcloud download site and section 8.1 of the Lustre Operations Manual:
cat /etc/yum.repos.d/lustre.repo
[lustre-server]
name=Lustre Server - EL8.10
baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el8.10/server
enabled=1
gpgcheck=0
priority=50
module_hotfixes=1
[lustre-client]
name=Lustre Client - EL8.10
baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el8.10/client
enabled=1
gpgcheck=0
priority=50
module_hotfixes=1
[e2fsprogs]
name=Ext2/3/4 Filesystem Utilities
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el8
gpgcheck=0
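With the repo file in place, it's worth confirming that dnf can see the new repositories before installing anything:
# refresh metadata and confirm the lustre-server, lustre-client and e2fsprogs repos show up
dnf clean all
dnf repolist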
Install Linux Kernel with Lustre patch:
dnf install kernel-4.18.0-553.53.1.el8_lustre.x86_64
reboot
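After the reboot, a quick check confirms the node came back up on the Lustre-patched kernel:
# should print 4.18.0-553.53.1.el8_lustre.x86_64
uname -r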
Install Lustre Related Packages
dnf install lustre kmod-lustre kmod-lustre-osd-ldiskfs lustre-osd-ldiskfs-mount
Configure LNet
echo "options lnet networks=tcp" > /etc/modprobe.d/lustre.conf
Create MGT, Start MGS
mkfs.lustre --mgs /dev/sdb
mkdir -p /mnt/mgt
mount -t lustre /dev/sdb /mnt/mgt
[root@mgs1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 2G 0 disk
[root@mgs1 ~]# mkfs.lustre --mgs /dev/sdb
Permanent disk data:
Target: MGS
Index: unassigned
Lustre FS:
Mount type: ldiskfs
Flags: 0x64
(MGS first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters:
checking for existing Lustre data: not found
device size = 2048MB
formatting backing filesystem ldiskfs on /dev/sdb
target name MGS
kilobytes 2097152
options -q -O uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L MGS -q -O uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -E lazy_journal_init="0",lazy_itable_init="0" -F /dev/sdb 2097152k
Writing CONFIGS/mountdata
[root@mgs1 ~]# mkdir -p /mnt/mgt
[root@mgs1 ~]# mount -t lustre /dev/sdb /mnt/mgt
[root@mgs1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 2G 0 disk /mnt/mgt
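With the MGT mounted, a quick sanity check on mgs1 is to list the local Lustre devices; the MGS-related entries should be reported as UP:
# list local Lustre devices on the MGS
lctl dl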
Create MDT, Start MDS
mkfs.lustre --fsname=scratch --mgsnode=10.100.0.11@tcp --mdt --index=0 /dev/sdb
mkdir -p /mnt/mdt
mount -t lustre /dev/sdb /mnt/mdt/
[root@mds1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 4G 0 disk
[root@mds1 ~]# mkfs.lustre --fsname=scratch --mgsnode=10.100.0.11@tcp --mdt --index=0 /dev/sdb
Permanent disk data:
Target: scratch:MDT0000
Index: 0
Lustre FS: scratch
Mount type: ldiskfs
Flags: 0x61
(MDT first_time update )
Persistent mount opts: user_xattr,errors=remount-ro
Parameters: mgsnode=10.100.0.11@tcp
checking for existing Lustre data: not found
device size = 4096MB
formatting backing filesystem ldiskfs on /dev/sdb
target name scratch:MDT0000
kilobytes 4194304
options -J size=163 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L scratch:MDT0000 -J size=163 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,project,huge_file,ea_inode,large_dir,^fast_commit,flex_bg -E lazy_journal_init="0",lazy_itable_init="0" -F /dev/sdb 4194304k
Writing CONFIGS/mountdata
[root@mds1 ~]# mkdir -p /mnt/mdt
[root@mds1 ~]# mount -t lustre /dev/sdb /mnt/mdt/
[root@mds1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 4G 0 disk /mnt/mdt
Create OST, Start OSS
mkfs.lustre --fsname=scratch --ost --mgsnode=10.100.0.11@tcp --index=0 /dev/sdb
mkdir -p /mnt/ost
mount -t lustre /dev/sdb /mnt/ost/
[root@oss1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 48G 0 disk
[root@oss1 ~]# mkfs.lustre --fsname=scratch --ost --mgsnode=10.100.0.11@tcp --index=0 /dev/sdb
Permanent disk data:
Target: scratch:OST0000
Index: 0
Lustre FS: scratch
Mount type: ldiskfs
Flags: 0x62
(OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=10.100.0.11@tcp
checking for existing Lustre data: not found
device size = 49152MB
formatting backing filesystem ldiskfs on /dev/sdb
target name scratch:OST0000
kilobytes 50331648
options -J size=1024 -I 512 -i 69905 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L scratch:OST0000 -J size=1024 -I 512 -i 69905 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init="0",lazy_itable_init="0" -F /dev/sdb 50331648k
Writing CONFIGS/mountdata
[root@oss1 ~]# mkdir -p /mnt/ost
[root@oss1 ~]# mount -t lustre /dev/sdb /mnt/ost/
mount.lustre: increased '/sys/devices/pci0000:00/0000:00:05.0/0000:01:02.0/virtio3/host1/target1:0:0/1:0:0:1/block/sdb/queue/max_sectors_kb' from 1280 to 16384
[root@oss1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 48G 0 disk /mnt/ost
[root@oss2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 48G 0 disk
[root@oss2 ~]# mkfs.lustre --fsname=scratch --ost --mgsnode=10.100.0.11@tcp --index=1 /dev/sdb
Permanent disk data:
Target: scratch:OST0001
Index: 1
Lustre FS: scratch
Mount type: ldiskfs
Flags: 0x62
(OST first_time update )
Persistent mount opts: ,errors=remount-ro
Parameters: mgsnode=10.100.0.11@tcp
checking for existing Lustre data: not found
device size = 49152MB
formatting backing filesystem ldiskfs on /dev/sdb
target name scratch:OST0001
kilobytes 50331648
options -J size=1024 -I 512 -i 69905 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init="0",lazy_itable_init="0" -F
mkfs_cmd = mke2fs -j -b 4096 -L scratch:OST0001 -J size=1024 -I 512 -i 69905 -q -O extents,uninit_bg,dir_nlink,quota,project,huge_file,^fast_commit,flex_bg -G 256 -E resize="4290772992",lazy_journal_init="0",lazy_itable_init="0" -F /dev/sdb 50331648k
Writing CONFIGS/mountdata
[root@oss2 ~]# mkdir -p /mnt/ost
[root@oss2 ~]# mount -t lustre /dev/sdb /mnt/ost/
mount.lustre: increased '/sys/devices/pci0000:00/0000:00:05.0/0000:01:02.0/virtio3/host1/target1:0:0/1:0:0:1/block/sdb/queue/max_sectors_kb' from 1280 to 16384
[root@oss2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 48G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 47G 0 part
├─almalinux-root 253:0 0 42.2G 0 lvm /
└─almalinux-swap 253:1 0 4.8G 0 lvm [SWAP]
sdb 8:16 0 48G 0 disk /mnt/ost
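Note that these target mounts are not persistent. If you want them to survive a reboot, one option (a sketch, not a required step) is an /etc/fstab entry on each server, for example on the OSS nodes; the MGS and MDS entries are analogous with their own mount points:
# /etc/fstab on oss1 and oss2
/dev/sdb  /mnt/ost  lustre  defaults,_netdev  0 0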
Set Up Lustre Client
Install DKMS
dnf config-manager --set-enabled powertools
dnf install epel-release
dnf install dkms
Install Lustre Related Packages
dnf install kmod-lustre-client lustre-client lustre-client-dkms
Configure LNet
echo "options lnet networks=tcp" > /etc/modprobe.d/lustre.conf
Mount Lustre
mkdir /mnt/scratch
mount -t lustre 10.100.0.11@tcp:/scratch /mnt/scratch/
[root@client1 ~]# mkdir /mnt/scratch
[root@client1 ~]# mount -t lustre 10.100.0.11@tcp:/scratch /mnt/scratch/
[root@client1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 8.5M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/mapper/almalinux-root 43G 3.7G 39G 9% /
/dev/sda1 1014M 300M 715M 30% /boot
tmpfs 367M 0 367M 0% /run/user/1000
10.100.0.11@tcp:/scratch 93G 2.5M 88G 1% /mnt/scratch
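df only shows the aggregate size; lfs df breaks the usage down per MDT and OST, which makes it easy to see both OSTs contributing capacity:
# per-target usage as seen from the client
lfs df -h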
Check Lustre Filesystem
[root@client1 ~]# lfs mdts
MDTS:
0: scratch-MDT0000_UUID ACTIVE
[root@client1 ~]# lfs osts
OBDS:
0: scratch-OST0000_UUID ACTIVE
1: scratch-OST0001_UUID ACTIVE
Final Thoughts
Setting up Lustre on a mini PC demonstrates that you don’t need enterprise hardware to learn how high-performance parallel filesystems work. While this homelab version lacks the performance and fault tolerance of production deployments, it shows how MGS, MDS, and OSS components work together to create a distributed filesystem.
The real value is hands-on experience with Lustre’s architecture and administration commands. Having a working environment means you can experiment, break things, and learn without impacting production systems—perfect preparation for working with Lustre in real HPC environments.