Arch Linux Installation
Download the ISO image
Download the latest Arch Linux ISO from the official download page at https://archlinux.org/download/.
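You can optionally verify the image signature (assuming gnupg is available and the accompanying .sig file was downloaded next to the ISO; this is the command documented on the download page):
gpg --keyserver-options auto-key-retrieve --verify archlinux-2023.09.01-x86_64.iso.sig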
Create empty disk image
Create an empty file of size 64 GB filled with zeros:
dd if=/dev/zero of=bios.img bs=2G count=32
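If you want to avoid actually writing 64 GB of zeros, a sparse file gives an equivalent result:
truncate -s 64G bios.img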
Boot the ISO image
Make sure to have the edk2-ovmf package installed:
pacman -S edk2-ovmf
This package contains the UEFI firmware for QEMU. Copy the UEFI firmware to the current directory:
cp /usr/share/edk2/x64/OVMF_VARS.fd .
Boot the ISO image with QEMU:
qemu-system-x86_64 \
-enable-kvm \
-drive file=bios.img,format=raw \
-cdrom /usr/share/nswi106/archlinux-2023.09.01-x86_64.iso \
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=OVMF_VARS.fd \
-m 2G \
-cpu host \
-smp 2 \
-vnc :4232
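The -vnc :4232 option exposes VNC display 4232, i.e. TCP port 5900 + 4232 = 10132. You can connect to it with any VNC client, e.g. with TigerVNC:
vncviewer localhost::10132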
Prepare the disk
See available disks:
lsblk
Partition the disk:
fdisk /dev/sda
Then enter these commands inside fdisk:
- g to create a new GPT partition table
- n to create a new partition
- Enter to select the default partition number
- Enter to select the default first sector
- +512M to create a 512MB partition
- t to change the partition type
- 1 to select the first partition (if only one partition was created, it will be selected by default)
- 1 to select the EFI System partition type
- n to create a new partition
- Enter to select the default partition number
- Enter to select the default first sector
- Enter to select the default last sector
- w to write the changes to the disk
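If you prefer a non-interactive approach, the same layout can be created with an sfdisk script (a sketch equivalent to the steps above; U is the shorthand for the EFI System partition type, L for a Linux filesystem):
sfdisk /dev/sda <<'EOF'
label: gpt
,512MiB,U
,,L
EOF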
Check with lsblk that the partitions were created correctly.
Create a filesystem on the partitions:
mkfs.fat -F32 /dev/sda1
mkfs.btrfs /dev/sda2
You can check the filesystems with lsblk -f.
Mount the filesystems:
mount /dev/sda2 /mnt
mount --mkdir /dev/sda1 /mnt/boot
Install the base system
Install the essential packages:
pacstrap -K /mnt base linux linux-firmware
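Since the root filesystem is btrfs, it is worth adding btrfs-progs (and optionally an editor) already in this step, for example:
pacstrap -K /mnt base linux linux-firmware btrfs-progs vim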
Generate the fstab file:
genfstab -U /mnt >> /mnt/etc/fstab
Change root to the newly installed system
Change root into the new system:
arch-chroot /mnt
Time zone, localization, hostname and root password (optional)
Set the time zone:
ln -sf /usr/share/zoneinfo/Europe/Prague /etc/localtime
hwclock --systohc
Install the vim editor:
pacman -S vim
Uncomment the en_US.UTF-8 UTF-8 and other needed locales in /etc/locale.gen, then generate them with:
locale-gen
Create the /etc/locale.conf file and set the LANG variable:
echo "LANG=en_US.UTF-8" > /etc/locale.conf
Set the hostname:
echo "arch" > /etc/hostname
Set the root password:
passwd
Time synchronization
Use the timedatectl command to ensure the system clock is accurate:
timedatectl set-ntp true
Bootloader
Install the grub bootloader:
pacman -S grub efibootmgr
Install the bootloader:
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=grub
Generate the grub configuration file:
grub-mkconfig -o /boot/grub/grub.cfg
Exit the chroot environment and shut down QEMU.
Start the new system
Start the QEMU with the new system:
qemu-system-x86_64 \
-enable-kvm \
-drive file=bios.img,format=raw \
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=OVMF_VARS.fd \
-m 2G \
-cpu host \
-smp 2 \
-vnc :4232
Network Setup
In the following we assume a host machine a running two VMs, ns1 and gw, and we want to connect them to the Internet such that ns1 can access the Internet through gw.
VDE Switch
Create a vde_switch network:
/usr/bin/vde_switch -sock /home/jankovys/vde/sw1/comm -daemon
Or create a service that will start the vde_switch automatically. Create a file /etc/systemd/system/sw1.service or ~/.config/systemd/user/sw1.service with the following content:
[Unit]
Description=Vde Switch
After=network.target
[Service]
ExecStart=/usr/bin/vde_switch -sock /home/jankovys/vde/sw1/comm -daemon
Type=forking
Restart=always
RestartSec=1
[Install]
WantedBy=default.target
Then enable and start the service:
systemctl --user enable --now sw1.service
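You can check that the switch is running and that its control socket was created:
systemctl --user status sw1.service
ls /home/jankovys/vde/sw1/comm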
VMs setup
ns1
Connect the VM to the vde_switch network with the option
-nic vde,mac=de:ad:be:ef:20:03,sock="$HOME/vde/sw1/comm"
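A full invocation for ns1 might then look like this (the disk image name ns1.img and the VNC display number are assumptions, adjust them to your setup):
qemu-system-x86_64 \
-enable-kvm \
-drive file=ns1.img,format=raw \
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.fd \
-drive if=pflash,format=raw,file=OVMF_VARS.fd \
-m 2G \
-cpu host \
-smp 2 \
-nic vde,mac=de:ad:be:ef:20:03,sock="$HOME/vde/sw1/comm" \
-vnc :4233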
Create a network interface on ns1 by creating the file /etc/systemd/network/ens3.network with the following content:
[Match]
Name=ens3
[Network]
Address=10.0.42.2/24
Gateway=10.0.42.1
DNS=8.8.8.8
Then enable and start the service:
systemctl enable --now systemd-networkd
You can check the network configuration with:
ip addr show
ip route show
Add a nameserver to /etc/resolv.conf:
echo "nameserver 8.8.8.8" > /etc/resolv.conf
gw
Connect the VM to the vde_switch network and to the external network with the options
-nic vde,mac=de:ad:be:ef:20:01,sock=/var/run/vde/backbone/comm \
-nic vde,mac=de:ad:be:ef:20:02,sock="$HOME/vde/sw1/comm" \
Create an ens3 network interface connected to the external network on the gw by creating the file /etc/systemd/network/ens3.network with the following content:
[Match]
Name=ens3
[Network]
Address=10.0.0.42/24
Gateway=10.0.0.1
DNS=10.0.0.1
Additionally create an ens4 network interface connected to the vde_switch network by creating the file /etc/systemd/network/ens4.network with the following content:
[Match]
Name=ens4
[Network]
Address=10.0.42.1/24
Then enable and start the service:
systemctl enable --now systemd-networkd
Add a nameserver to /etc/resolv.conf:
echo "nameserver 8.8.8.8" > /etc/resolv.conf
Set up masquerading
Add the IPMasquerade=yes line to /etc/systemd/network/ens4.network and /etc/systemd/network/ens5.network on the gw to enable forwarding and masquerading of the traffic from ens4 and ens5 to the Internet (IPMasquerade=yes also enables IP forwarding on the link):
/etc/systemd/network/ens4.network:
[Match]
Name=ens4
[Network]
Address=10.0.42.1/24
IPMasquerade=yes
/etc/systemd/network/ens5.network:
[Match]
Name=ens5
[Network]
Address=10.0.142.1/24
IPMasquerade=yes
Then restart the systemd-networkd service:
systemctl restart systemd-networkd
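At this point ns1 should be able to reach the Internet through gw. A quick check from ns1 (addresses taken from the configs above):
ping -c 3 10.0.42.1
ping -c 3 8.8.8.8
ping -c 3 archlinux.org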
VM base setup
Create users:
useradd -m jankovys
useradd -m eval
Change the password for the users:
passwd jankovys
passwd eval
Install the sudo package:
pacman -S sudo
Allow the users to use sudo without a password by editing the /etc/sudoers file with visudo (install vi first, since visudo uses it by default):
pacman -S vi
visudo
Add the following lines:
## User privilege specification
jankovys ALL=(ALL) ALL
eval ALL=(ALL) ALL
##
Defaults:jankovys !authenticate
Defaults:eval !authenticate
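You can verify the resulting privileges with:
sudo -l -U jankovys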
Install the openssh package:
pacman -S openssh
Enable the sshd service:
systemctl enable --now sshd
Add the public key to the .ssh/authorized_keys file:
ssh-ed25519 .....
Disable password authentication by editing the /etc/ssh/sshd_config file and setting the following options:
PasswordAuthentication no
# ChallengeResponseAuthentication no
Restart the sshd service:
systemctl restart sshd
Snapper setup
Set up cronie
Install the cronie package:
pacman -S cronie
Enable and start the cronie service:
systemctl enable --now cronie
The cron jobs can be edited with:
crontab -e
and the list of jobs can be displayed with:
crontab -l
Setup snapper
Snapper is a tool for creating snapshots of the filesystem.
Install the snapper package:
pacman -S snapper
Generate the default configuration:
snapper -c root create-config /
This will create the configuration file /etc/snapper/configs/root, which you can edit to your needs.
The default config file looks like this:
# subvolume to snapshot
SUBVOLUME="/"
# filesystem type
FSTYPE="btrfs"
# btrfs qgroup for space aware cleanup algorithms
QGROUP=""
# fraction or absolute size of the filesystems space the snapshots may use
SPACE_LIMIT="0.5"
# fraction or absolute size of the filesystems space that should be free
FREE_LIMIT="0.2"
# users and groups allowed to work with config
ALLOW_USERS=""
ALLOW_GROUPS=""
# sync users and groups from ALLOW_USERS and ALLOW_GROUPS to .snapshots
# directory
SYNC_ACL="no"
# start comparing pre- and post-snapshot in background after creating
# post-snapshot
BACKGROUND_COMPARISON="yes"
# run daily number cleanup
NUMBER_CLEANUP="yes"
# limit for number cleanup
NUMBER_MIN_AGE="1800"
NUMBER_LIMIT="30"
NUMBER_LIMIT_IMPORTANT="30"
# create hourly snapshots
TIMELINE_CREATE="yes"
# cleanup hourly snapshots after some time
TIMELINE_CLEANUP="yes"
# limits for timeline cleanup
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="20"
TIMELINE_LIMIT_DAILY="20"
TIMELINE_LIMIT_WEEKLY="1"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
# cleanup empty pre-post-pairs
EMPTY_PRE_POST_CLEANUP="yes"
# limits for empty pre-post-pair cleanup
EMPTY_PRE_POST_MIN_AGE="1800"
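With the configuration in place, snapshots can also be created and listed manually:
snapper -c root create --description "manual snapshot"
snapper -c root list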
Firewall setup
Install the nftables package:
pacman -S nftables
Edit the /etc/nftables.conf file to your needs:
- Set a default drop policy on the input and forward hooks
- Allow all outbound traffic
- Allow established and related traffic (inbound and forwarded)
- Allow SSH traffic
- Allow ICMP traffic
- Allow localhost (loopback) traffic
A config file implementing the above:
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
    chain input {
        type filter hook input priority filter; policy drop;
        ct state invalid drop comment "early drop of invalid connections"
        ct state { established, related } accept comment "allow tracked connections"
        iifname "lo" accept comment "allow from loopback"
        ip protocol icmp accept comment "allow icmp"
        meta l4proto ipv6-icmp accept comment "allow icmp v6"
        tcp dport 22 accept comment "allow sshd"
    }
    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state { established, related } accept
    }
}
Load the configuration:
nft -f /etc/nftables.conf
Check that the ruleset was loaded with:
nft list ruleset
Enable and start the nftables service:
systemctl enable --now nftables
BIRD
Install the bird package:
pacman -S bird
Edit the /etc/bird.conf file to your needs to configure the BGP protocol:
- Set the router ID
- Enable the device protocol
- Import and export all routes from the direct protocol and the kernel protocol
- Set up the BGP protocol
A config file implementing the above:
log syslog all;
router id 10.0.0.42;
protocol device {
}
protocol direct {
    ipv4; # Connect to default IPv4 table
    ipv6; # ... and to default IPv6 table
}
protocol kernel {
    ipv4 {
        import all; # Import to table, default is import all
        export all; # Export to protocol, default is export none
    };
}
# Another instance for IPv6, skipping default options
protocol kernel {
    ipv6 { export all; };
}
protocol static {
    ipv4; # Again, IPv4 channel with default options
}
protocol bgp {
    local 10.0.0.42 as 65042; # Local AS number
    neighbor 10.0.0.1 as 65001; # Remote AS number
    multihop; # Allow multihop connections
    ipv4 {
        export all; # Export all routes to the neighbor
        import all; # Import all routes from the neighbor
        next hop self; # Use local address as nexthop for all routes
    };
};
Enable and start the bird service:
systemctl enable --now bird
The routing table can be displayed with:
birdc show route
Or more verbosely with:
birdc show route all
To view all protocols use:
birdc show protocols
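Since the kernel protocol exports routes to the kernel routing table, BGP-learned routes should also be visible with:
ip route show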