These are miscellaneous projects and how-to guides which I find useful and hope that others will too.
Many cellular providers artificially throttle tethered (hotspot) traffic while leaving on-device traffic at full speed. This guide demonstrates how to tunnel WireGuard VPN traffic through an iOS device using iSH, effectively disguising tethered laptop traffic as native phone traffic. The carrier sees only UDP traffic to/from an app on the phone, not recognizing it as tethered data.
How it works: the laptop connects to the phone's WiFi hotspot and points its WireGuard endpoint at the phone's tethering IP; socat, running inside iSH on the phone, then relays those UDP packets on to the real WireGuard server.
Prerequisites: iOS device with iSH app installed, WireGuard server accessible on UDP port (51820 or 443), Linux laptop with NetworkManager, and WiFi tethering enabled on iOS.
Enable the screen-on setting in iSH, as the app will suspend when the screen locks.
Start socat on the iOS device to forward UDP traffic from the tethering interface to your WireGuard server:
socat udp-listen:8000 UDP:your_wireguard_server:51820
Replace your_wireguard_server with your WireGuard server's IP or hostname. Port 51820 is the standard WireGuard port, though port 443 can be used if 51820 is blocked.
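If socat ever exits (for example after iSH has been suspended), a small wrapper loop can restart it automatically. This is my own convenience addition, not a required step, and it uses the same placeholder server name as above:
# Restart socat whenever it exits (e.g. after iSH is briefly interrupted).
while true; do
    socat udp-listen:8000 UDP:your_wireguard_server:51820
    sleep 1
done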
Notes: the phone's tethering interface uses the address 172.20.10.1; this is the address the laptop will use as its WireGuard endpoint.
On the Linux laptop, create a NetworkManager WireGuard connection profile that points to the iOS device instead of directly to the WireGuard server:
$EDITOR /etc/NetworkManager/system-connections/wg0-proxy.nmconnection
[connection]
id=wg0-proxy
uuid=<generate_uuid>
type=wireguard
interface-name=wg0
autoconnect=false

[wireguard]
private-key=<your_wireguard_private_key>

[wireguard-peer.<peer_public_key>]
endpoint=172.20.10.1:8000
allowed-ips=<vpn_server_ip>/32;<other_allowed_ips>;0.0.0.0/0;

[ipv4]
address1=<your_vpn_ip>/32
dns=<vpn_dns_server>;
dns-search=~;
method=manual

[ipv6]
addr-gen-mode=default
method=disabled

[proxy]
Key configuration points:
endpoint=172.20.10.1:8000 - Points to the iOS tethering IP and socat's listening port
uuid - Generate with uuidgen
<peer_public_key> - Your WireGuard server's public key
allowed-ips - Should include your VPN server's IP and 0.0.0.0/0 for full tunneling
address1 - The IP address assigned to you by the WireGuard server
Set proper permissions and activate the connection:
chmod 600 /etc/NetworkManager/system-connections/wg0-proxy.nmconnection
systemctl restart NetworkManager
nmcli connection up wg0-proxy
Verify the tunnel is working correctly:
ip addr show wg0       # Check that the WireGuard interface is up
ip route | grep wg0    # Verify routing
ping 8.8.8.8           # Test connectivity through the VPN
curl ifconfig.me       # Should show your WireGuard server's IP
Optional: For testing or debugging, run a simple HTTP proxy through iSH without WireGuard. On iOS (iSH):
ncat -l 3128 --proxy-type http
On laptop, configure browser to use HTTP proxy: <ios_ip>:3128
This is useful for verifying the tethering connection works before setting up the WireGuard tunnel.
Troubleshooting: confirm the iOS tethering IP is 172.20.10.1 (check in the laptop's network settings). Check that your WireGuard server is accessible from the iOS device's cellular connection.
Note: This technique works because carriers typically identify tethering by TTL values, user-agent strings, or packet inspection. Tunneling through an on-device app makes the traffic indistinguishable from normal app traffic. Battery usage on iOS will be higher due to keeping the screen on and continuous network activity.
See: /home/mmh/Documents/Various Research/foobar2000-x64_v2.24.1/
Set up a clean WINE prefix and install required dependencies:
export WINEPREFIX=${HOME}/.wine-foobar2000
winetricks allfonts
for packg in vcrun2003 vcrun2005 vcrun2008 vcrun2010 vcrun2012 vcrun2013 vcrun2015 vcrun2017 vcrun2019 vcrun2022; do winetricks -q $packg; done
winecfg    # Set the Windows version to 11
Things needed for the setup:
%RATING% tag support
Quick Search old favorites:
NOT (%GENRE% HAS k-pop OR %GENRE% HAS j-pop OR %GENRE% HAS c-pop) AND %RATING% GREATER 2
%RATING% GREATER 1
%GENRE% HAS k-pop OR %GENRE% HAS j-pop
(%GENRE% HAS k-pop OR %GENRE% HAS j-pop) AND %RATING% GREATER 2
default font: @Arial Unicode MS
custom columns:
$repeat(★,%rating%)$repeat( ,$sub(5,%rating%))
[$substr(%date%,1,4)]
%track%
Shell Integration → Sort Incoming Files:
%ARTIST%|%DATE%|%DISCNUMBER%|%ALBUM%|%TRACKNUMBER%|%TITLE%
Advanced → Display → Standard Sort Patterns:
file path=%path_sort%;artist=%artist%|%date%;album=%album%|%discnumber%|%tracknumber%;track
I cannot get Columns UI to work (specifically, the Fonts section cannot be configured), so the default UI is currently in use.
A script to remove OceanofPDF.com branding and advertisements from EPUB files. The script properly handles the EPUB format specification, extracts the archive, removes ad content from HTML/XHTML files, deletes ad-related files, and repackages the EPUB correctly.
Download and make the script executable:
curl -O https://matthewheadlee.com/projects/miscellaneous/remove-oceanofpdf-ads.sh
chmod +x remove-oceanofpdf-ads.sh
Run the script on an EPUB file:
./remove-oceanofpdf-ads.sh book_OceanofPDF.com.epub
Removed: book_OceanofPDF.com.epub
Output: book.epub
Successfully removed all ads.
The script will automatically strip _OceanofPDF.com_ from the filename, remove all ad-related content, and create a clean EPUB file.
The script performs several operations: it extracts the EPUB archive, removes <div> elements containing OceanofPDF.com references from all HTML/XHTML files, deletes ad-related files, and repackages the EPUB according to the format specification.
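For reference, the repackaging step is the part a naive zip of the directory gets wrong: the EPUB specification requires the mimetype file to be the first entry in the archive and stored uncompressed. A rough sketch of that step using Info-ZIP (my illustration of the idea, not the script's actual code; extracted_epub/ and book.epub are placeholder names):
cd extracted_epub/
zip -X0 ../book.epub mimetype          # mimetype must be the first entry and stored uncompressed
zip -rX9 ../book.epub . -x mimetype    # add the remaining files with normal compression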
A workflow automation script that automatically switches your kubectl context and namespace based on your current working directory path. This eliminates the need to manually run kubectl ctx and kubectl ns commands when navigating between different Kubernetes projects.
Create the script in your ~/bin/ directory:
$EDITOR ~/bin/kubectl-sw
#!/bin/bash
cluster_namespace=$(grep -Po '(eks|k6s)-[^/]+(/[^/]+)?' <<<"${PWD}")
IFS='/' read -r cluster namespace <<<"${cluster_namespace}"
# Cluster switching:
if [[ ! -z "${cluster}" ]]; then
kubectl ctx "${cluster}"
fi
# Namespace switching:
if [[ -z "${namespace}" ]]; then
namespace="default"
fi
kubectl ns "${namespace}"
# Tmux window renaming:
if [[ -n "${TMUX}" ]]; then
tmux rename-window "${cluster}/${namespace}"
fi
Make the script executable:
chmod +x ~/bin/kubectl-sw
The script parses directory paths containing cluster and namespace information. For example:
~/work/eks-production/backend/ → switches to the eks-production cluster, backend namespace
~/projects/k6s-staging/frontend/ → switches to the k6s-staging cluster, frontend namespace
~/code/eks-dev/ → switches to the eks-dev cluster, default namespace
Add to your shell configuration to run automatically on directory change. For bash, add to ~/.bashrc:
$EDITOR ~/.bashrc
cd() {
builtin cd "$@" && kubectl-sw 2>/dev/null
}
For zsh, add to ~/.zshrc:
$EDITOR ~/.zshrc
chpwd() {
kubectl-sw 2>/dev/null
}
Prerequisites: This script requires kubectl with the ctx and ns plugins (from kubectx). Install with:
kubectl krew install ctx ns
Project repository: https://github.com/gsauthof/dracut-sshd
Add the COPR repository and install required packages:
curl -sSL https://copr.fedorainfracloud.org/coprs/gsauthof/dracut-sshd/repo/fedora-41/gsauthof-dracut-sshd-fedora-41.repo > /etc/yum.repos.d/gsauthof-dracut-sshd-fedora-41.repo
dnf install dracut-sshd dracut-network systemd-networkd
The file /usr/lib/dracut/modules.d/46sshd/module-setup.sh copies /root/.ssh/authorized_keys into the new initramfs for authentication.
Create a systemd-networkd configuration file:
$EDITOR /etc/systemd/network/20-wired.network
[Match]
Name=e*

[Network]
DHCP=ipv4
Configure Dracut to copy in the systemd-networkd configuration file and load the systemd-networkd module:
$EDITOR /etc/dracut.conf.d/90-networkd.conf
install_items+=" /etc/systemd/network/20-wired.network "
add_dracutmodules+=" systemd-networkd "
Rebuild the initramfs for all installed kernels:
for kernel in $(rpm -q kernel --queryformat "%{VERSION}-%{RELEASE}.%{ARCH}\n"); do
echo "Rebuilding initramfs for kernel: ${kernel}"
dracut --force "/boot/initramfs-${kernel}.img" "${kernel}"
done
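To confirm the sshd module and the network configuration actually made it into the image, the initramfs can be inspected with lsinitrd (a verification step I find useful; adjust the kernel version as needed):
lsinitrd "/boot/initramfs-$(uname -r).img" | grep -e sshd -e 20-wired.network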
Because the network configuration is set by Dracut, NetworkManager will not be able to change the IP address. Create a systemd service to flush dracut's network configuration:
$EDITOR /etc/systemd/system/flush-dracut-network@.service
# Remove Dracut's network configuration
# https://access.redhat.com/solutions/3017441
# https://unix.stackexchange.com/questions/506331/networkmanager-doesnt-change-ip-address-when-dracut-cmdline-pr>
[Unit]
Description=Remove dracut's network configuration for %I
Before=network-pre.target
Wants=network-pre.target

[Service]
ExecStartPre=/usr/sbin/ip address show %i
ExecStart=/usr/sbin/ip -statistics address flush dev %i

[Install]
WantedBy=default.target
Enable the service for your network interface (adjust interface name as needed):
systemctl enable flush-dracut-network@enp7s0
On an EFI system, this is the correct way to reinstall GRUB2, as opposed to using grub2-install (but this no longer works after Fedora:32 due to Fedora's GRUB patches):
export ZPOOL_VDEV_NAME_PATH=YES
dnf reinstall shim-* grub2-efi-* grub2-common
As of Fedora:33 and higher, GRUB2 has been stripped of all ZFS support. Thus, to boot ZFS, a custom stand-alone grub2.efi needs to be built. The easiest way to do this is from a Fedora:32 system.
Use a Fedora:32 container to generate a stand-alone ZFS-supporting grub2-efi binary with an embedded configuration (Note the --label tank option may be different on different systems):
podman container run -v /tmp/:/mnt/ --rm -ti fedora:32
dnf install -y 'grub2*'
Complete!
cat << 'EOF' > grub.cfg
search --no-floppy --label tank --set=dev
set prefix=($dev)/root/boot@/grub2
export $prefix
configfile $prefix/grub.cfg
EOF
grub2-mkstandalone -o /mnt/matthewcfg.efi --modules="zfs part_gpt all_video increment gzio " --format=x86_64-efi /boot/grub/grub.cfg=grub.cfg
ll -h /mnt/matthewcfg.efi
-rw-r--r--. 1 root root 7.2M Oct 20 20:54 /mnt/matthewcfg.efi
exit
sudo -i
mv /tmp/matthewcfg.efi /boot/efi/EFI/fedora/matthewcfg.efi
efibootmgr -c -d /dev/nvme0n1 -p 1 -L 'Matthew with Config' -l '\EFI\fedora\matthewcfg.efi'
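To confirm the new boot entry was registered (a quick sanity check, not part of the original procedure):
efibootmgr -v | grep -i 'Matthew with Config'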
To forward all logs from various Podman containers to a log file /var/log/containers/container_name.log, create a new file in /etc/rsyslog.d/ and restart rsyslog:
$EDITOR /etc/rsyslog.d/podman.conf
template(name="container-format" type="list") {
property(name="timestamp" dateFormat="rfc3339")
constant(value=" ")
property(name="$!CONTAINER_NAME")
constant(value=" ")
property(name="$!CONTAINER_ID")
constant(value=" ")
property(name="$!MESSAGE")
}
template(name="name-log-by-container" type="list") {
constant(value="/var/log/containers/")
property(name="$!CONTAINER_NAME")
constant(value=".log")
}
if($!CONTAINER_NAME != "") then {
action(type="omfile" dynaFile="name-log-by-container" template="container-format")
stop
}
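Restart rsyslog for the configuration to take effect. The CONTAINER_NAME and CONTAINER_ID properties come from the journal, so the containers need to use the journald log driver. A quick test; the container name and image here are just examples:
systemctl restart rsyslog
podman run -d --name web --log-driver journald docker.io/library/nginx
tail -f /var/log/containers/web.log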
The RPMFusion-packaged NVIDIA driver does almost all of the required setup, but on my system it would crash whenever the monitor went to sleep unless a few additional steps were taken to disable power management.
Install the NVIDIA driver following the RPMFusion instructions.
After installing the NVIDIA driver through RPMFusion, copy the provided Xorg configuration file into place:
cp /usr/share/X11/xorg.conf.d/nvidia.conf /etc/X11/xorg.conf.d/nvidia.conf
Modify the nvidia.conf file, adding modesetting support to the embedded Intel card and set the NVIDIA card to be the primary GPU:
$EDITOR /etc/X11/xorg.conf.d/nvidia.conf
Section "OutputClass"
    Identifier "intel"
    MatchDriver "i915"
    Driver "modesetting"
EndSection

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "SLI" "Auto"
    Option "BaseMosaic" "on"
    Option "PrimaryGPU" "yes"
EndSection

Section "ServerLayout"
    Identifier "layout"
    Option "AllowNVIDIAGPUScreens"
EndSection
Configure the system to keep the /dev/nvidia-uvm device file when the NVIDIA device goes into power saving mode by enabling the nvidia-persistenced service:
systemctl enable nvidia-persistenced.service
Disable runtime power management on the NVIDIA device:
cp /lib/udev/rules.d/60-nvidia.rules /etc/udev/rules.d/60-nvidia.rules
$EDITOR /etc/udev/rules.d/60-nvidia.rules
# Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
# Changed power/control from "auto" to "on" so the card stays powered:
#ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
#ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"
Reboot or restart Xorg.
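After rebooting, the runtime PM change can be verified by reading the card's power/control attribute (the PCI address below is only an example; find yours with lspci):
lspci | grep -i nvidia                                  # note the card's PCI address, e.g. 01:00.0
cat /sys/bus/pci/devices/0000:01:00.0/power/control     # should now print "on"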
Instructions to clone a system using zfs send and zfs recv
Boot the system receiving the ZFS Snapshot from the rescue disk.
On the system receiving the ZFS Snapshot, remove all previous snapshots:
zpool import -Nf tank
zfs destroy -Rv 'tank@%'
From the sender, create a new snapshot, and send it:
zfs snapshot -r 'tank@p9snap'
zfs send -v --props --replicate --large-block --compressed tank@p9snap | ssh root@receiver 'zfs receive -F tank'
On the receiver, mount the freshly received tank, and chroot into it:
zfs set mountpoint=/sysroot tank/root
zfs set mountpoint=/sysroot/boot tank/root/boot
zfs set mountpoint=/sysroot/tmp tank/root/tmp
zfs set mountpoint=/sysroot/var tank/root/var
zfs mount -aO
mount --rbind /dev/ /sysroot/dev/
mount --rbind /proc/ /sysroot/proc/
mount --rbind /sys/ /sysroot/sys/
mount --rbind /run/ /sysroot/run/
chroot /sysroot/ /bin/bash
Set the system to do an SELinux relabel:
touch /.autorelabel
Edit /etc/fstab, and fix any UUIDs:
$EDITOR /etc/fstab
UUID="3AB8-F13E" /boot/efi vfat defaults 0 0
Correct the receiver's hostname:
echo receiver.xn0.org > /etc/hostname
If the system receiving the clone is tiny.xn0.org, add the zfs configuration limiting memory usage:
cp /root/tiny-modprobe.d-zfs.conf /etc/modprobe.d/zfs.conf
If the system receiving the clone is p50.xn0.org, copy the nvidia configuration files:
cp /root/nvidia-p50-x11.conf /etc/X11/xorg.conf.d/nvidia-p50-x11.conf
cp /root/Xsetup-p50 /etc/sddm/Xsetup
Depending on system UEFI setting, the PrimaryGPU option may need to be removed.
Rebuild the initial ram-filesystem for all kernels, rebuild the grub.cfg, and disable grub's pager:
zfsver=$(rpm -q --qf "%{VERSION}\n" zfs)
while read -r kernelver; do
for module in spl zfs; do
dkms add -k "${kernelver}" "${module}/${zfsver}"
dkms build -k "${kernelver}" "${module}/${zfsver}"
dkms install -k "${kernelver}" "${module}/${zfsver}"
done
initfilename="/boot/initramfs-${kernelver}.img"
dracut --force "${initfilename}" "${kernelver}"
if [[ "$(lsinitrd "${initfilename}" | grep -c -e spl.ko -e zfs.ko)" -ne 2 ]]; then
echo "ERROR: failed to find spl or zfs modules in the "${initfilename}" initramfs."
exit 1
fi
done < <(rpm -q --qf "%{VERSION}-%{RELEASE}.%{ARCH}\n" kernel)
ZPOOL_VDEV_NAME_PATH=YES grub2-mkconfig > /boot/grub2/grub.cfg
sed -i '/^set pager/ s/1/0/' /boot/grub2/grub.cfg
Exit the chroot, unmount filesystems, and export tank:
exit
umount -l /sysroot/{dev,proc,sys,run}/
zfs unmount -a
zfs set mountpoint=/ tank/root
zfs set mountpoint=legacy tank/root/boot
zfs set mountpoint=legacy tank/root/tmp
zfs set mountpoint=legacy tank/root/var
zpool export tank
Reboot the receiver:
reboot
On the receiver, remove the snapshot p9snap:
zfs destroy -vr 'tank@p9snap'
On the sender, remove the snapshot p9snap:
zfs destroy -vr 'tank@p9snap'
A modification I like to make to my shell is to change the prompt to include the complete working directory path, and most of the time this isn't a problem. However, every now-and-then I find myself deep in the file system and the prompt starts taking up a large portion of my screen.
Go in deep:
cd /usr/lib/systemd/user/sockets.target.wants
[mmh@p9 /usr/lib/systemd/user/sockets.target.wants]$
Yikes! A 53-character-long prompt.
Both bash and zsh provide solutions to shorten the prompt based on path depth.
This step applies to bash
Configure bash to display only the last two directories of the path:
export PROMPT_DIRTRIM=2
[mmh@p9 .../user/sockets.target.wants]$
This step applies to zsh
zsh is a little more complicated to configure, but also more flexible.
Configure zsh to trim the path:
export PS1="[%n@%m %(3~|…/%2~|%2~)]%(#.#.$) "
[mmh@p9 …/user/sockets.target.wants]$
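For reference, the %(3~|…/%2~|%2~) conditional only abbreviates when the current path is at least three components deep, printing … followed by the last two directories; shallower paths are shown in full. The trailing %(#.#.$) prints # for root and $ for everyone else.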
When the sort options -n and -u are used together, unexpected side effects can occur. Notably, sort only considers uniqueness up to the first decimal place.
From the sort(1) manual page:
-n, --numeric-sort
       compare according to string numerical value
-u, --unique
       with -c, check for strict ordering; without -c, output only the first of an equal run
A basic file, notice the IP address 211.149.136.164 appears twice, but with two different port numbers. There are no duplicate lines in this file:
cat sortbug
67.207.248.25:54321
211.149.136.164:8000    <- Same IP
122.228.218.163:8090
67.207.248.125:54321
211.149.136.164:8080    <- Same IP
Piping sorted text to uniq, sort of works (pun intended), though the first two lines aren't sorted correctly:
sort -n sortbug | uniq
67.207.248.125:54321
67.207.248.25:54321     <- Wrong position
122.228.218.163:8090
211.149.136.164:8000    <- Same IP
211.149.136.164:8080    <- Same IP
One would think running sort -un sortbug would give the same output as sort -n sortbug | uniq, however:
sort -un sortbug
67.207.248.25:54321
122.228.218.163:8090
211.149.136.164:8000
What the hell just happened? Two unique lines were just removed.
The info page for sort does explain this behavior, though the man page makes no mention of it:
Numeric sort uses what might be considered an unconventional method to compare strings representing floating point numbers. Rather than first converting each string to the C `double' type and then comparing those values, `sort' aligns the decimal-point characters in the two strings and compares the strings a character at a time. One benefit of using this approach is its speed. In practice this is much more efficient than performing the two corresponding string-to-double (or even string-to-integer) conversions and then comparing doubles. In addition, there is no corresponding loss of precision. Converting each string to `double' before comparison would limit precision to about 16 digits on most systems.
(Source: coreutils.info.gz (v8.25), sort invocation)
The sort command has many problems sorting numbers with multiple decimal places in them. In the above example, notice the IP addresses 67.207.248.25 and 67.207.248.125 did not get sorted in the proper order.
Use sort to correctly sort IP addresses:
sort -t . -k1,1n -k2,2n -k3,3n -k4,4n sortbug
67.207.248.25:54321
67.207.248.125:54321
122.228.218.163:8090
211.149.136.164:8000
211.149.136.164:8080
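As an aside, GNU sort's version sort compares each run of digits numerically and so also orders dotted quads correctly with far less typing. This is an alternative worth knowing, not something from the original write-up:
sort -V sortbug    # version sort; should give the same ordering as the -t/-k invocation above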
Did you wipe your laptop, and install a non-approved operating system?
Did you pull all the stickers off your laptop, including the Windows license key?
Did you realize the mistake you've made by not running a Microsoft product?
Are you crying because you need your Windows key to get out of this depression?
Fear not! Your Windows License Key may be in the ACPI MSDM table:
xxd /sys/firmware/acpi/tables/MSDM
0000000: 4d53 444d 5500 0000 03df 4c45 4e4f 564f  MSDMU.....LENOVO
0000010: 4342 2d30 3120 2020 0100 0000 4143 5049  CB-01   ....ACPI
0000020: 0000 0400 0100 0000 0000 0000 0100 0000  ................
0000030: 0000 0000 1d00 0000 3348 3837 4e4e 4e4e  ........3H8NN-NN   <- The Windows License begins here.
0000040: 4e4e 4e4e 4e4e 4e4e 4e4e 4e4e 4e4e 4e4e  NNN-NNNNN-NNNNN-
0000050: 4e4e 4243 44                             NNBCD
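Since the key is the last field in the table, it can also be pulled out directly, assuming the usual 29-character key format (an extra convenience, not shown in the original dump):
tail -c 29 /sys/firmware/acpi/tables/MSDM; echo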
Of course to any reasonable person, all the above questions are preposterous, and no one in their right mind would ever run Windows β Microsoft's spyware.
Assume there is a directory with a dozen files, each only a byte or so in size. How do you view the content with associated filename? A for-loop would work, but that's a lot of typing for very little benefit. The commands xargs and cat could do it, but that's just ridiculous.
Enter the magic of grep!
Often such files are found under the /proc/ and /sys/ directories. An example here:
ls /sys/module/iwlwifi/parameters/
11n_disable  antenna_coupling  d0i3_disable  debug       fw_restart   led_mode  power_level  swcrypto
amsdu_size   bt_coex_active    d0i3_timeout  fw_monitor  lar_disable  nvm_file  power_save   uapsd_disable
By using grep to search for null recursively through a directory, grep will print each file's name, and that file's content:
There is nothing inside the single-quotes.
grep -R '' /sys/module/iwlwifi/parameters/
/sys/module/iwlwifi/parameters/nvm_file:(null)
/sys/module/iwlwifi/parameters/debug:0
/sys/module/iwlwifi/parameters/swcrypto:0
/sys/module/iwlwifi/parameters/power_save:N
/sys/module/iwlwifi/parameters/lar_disable:N
/sys/module/iwlwifi/parameters/d0i3_disable:Y
/sys/module/iwlwifi/parameters/led_mode:0
/sys/module/iwlwifi/parameters/fw_restart:Y
/sys/module/iwlwifi/parameters/bt_coex_active:N
/sys/module/iwlwifi/parameters/d0i3_timeout:1000
/sys/module/iwlwifi/parameters/11n_disable:0
/sys/module/iwlwifi/parameters/uapsd_disable:Y
/sys/module/iwlwifi/parameters/amsdu_size:0
/sys/module/iwlwifi/parameters/antenna_coupling:0
/sys/module/iwlwifi/parameters/fw_monitor:N
/sys/module/iwlwifi/parameters/power_level:0
And, finally. If there is a desire to get needlessly profligate, there's always the column command:
grep -R '' /sys/module/iwlwifi/parameters/ | column -s : -t
/sys/module/iwlwifi/parameters/disable_11ac      N
/sys/module/iwlwifi/parameters/power_save        N
/sys/module/iwlwifi/parameters/swcrypto          0
/sys/module/iwlwifi/parameters/power_level       0
/sys/module/iwlwifi/parameters/amsdu_size        0
/sys/module/iwlwifi/parameters/uapsd_disable     3
/sys/module/iwlwifi/parameters/d0i3_disable      Y
/sys/module/iwlwifi/parameters/lar_disable       N
/sys/module/iwlwifi/parameters/d0i3_timeout      1000
/sys/module/iwlwifi/parameters/11n_disable       0
/sys/module/iwlwifi/parameters/fw_restart        Y
/sys/module/iwlwifi/parameters/led_mode          0
/sys/module/iwlwifi/parameters/debug             0
/sys/module/iwlwifi/parameters/antenna_coupling  0
/sys/module/iwlwifi/parameters/bt_coex_active    Y
/sys/module/iwlwifi/parameters/fw_monitor        N
/sys/module/iwlwifi/parameters/nvm_file          (null)
In most web and file browsers the key combinations alt+left (back) and alt+up (parent directory) can be used for navigation. So why not in the shell? The following steps are specific to zsh; however, it is possible to do the same in bash.
Add the following functions and keybindings to $HOME/.zshrc:
$EDITOR $HOME/.zshrc
setopt AUTO_PUSHD  # AUTO_PUSHD is needed for alt+left to function intuitively.
function cdBack() {
local sBackWD=~1
print -P "\r\e[?7\e[0K${PS1}cd ${sBackWD/\~1/.}"
unset sBackWD
popd &> /dev/null
zle reset-prompt
}
function cdUp() {
print -P "\r\e[?7\e[0K${PS1}cd $(realpath ..)"
pushd .. &> /dev/null
zle reset-prompt
}
zle -N cdUp
zle -N cdBack
bindkey '^[[1;3A' cdUp # Alt+Up-Arrow
bindkey '^[[1;3D' cdBack # Alt+Left-Arrow
Now CLI directory navigation is identical to GUI navigation:
pwd
/home/mmh/Desktop/
[mmh@host /home/mmh/Desktop/]$
    press alt+up
[mmh@host /home/mmh/Desktop/]$ cd /home/mmh
    press alt+up
[mmh@host /home/mmh/]$ cd /home
    press alt+up
[mmh@host /home/]$ cd /
    press alt+left
[mmh@host /]$ cd /home
    press alt+left
[mmh@host /home/]$ cd /home/mmh
    press alt+left
[mmh@host /home/mmh/]$ cd /home/mmh/Desktop
[mmh@host /home/mmh/Desktop/]$
One benefit to using zle reset-prompt is it will not foobar any already typed commands on the prompt:
[mmh@host /home/mmh/Desktop/]$
    press alt+up
[mmh@host /home/mmh/]$ touch testfile
    press alt+left
[mmh@host /home/mmh/]$ cd /home/mmh/Desktop
[mmh@host /home/mmh/Desktop/]$ touch testfile
In the default bash or zsh configuration, aliases do not expand past the first word. This is generally the desired behavior; however, it is highly annoying when using sudo:
which ll
ll='ls -l --color=auto'
/usr/bin/ls
sudo ll /root/
sudo: ll: command not found
There's a trick to get around this limitation though. By assigning sudo to an alias of itself with a trailing space, both the first and second words will be expanded:
The trailing space is critical.
alias sudo='sudo '
sudo ll /root/
total 222K
drwx------.  6 root root  8 2018-Jan-15 04:23:09 PM .cache
drwx------. 10 root root 13 2018-Apr-14 01:49:29 PM .config
drwx------.  3 root root  3 2017-Feb-18 03:39:55 PM .dbus
drwxr-xr-x.  3 root root  3 2017-Feb-24 11:11:08 PM .local
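The reason this works: both bash and zsh document that when an alias's value ends in a space, the next word on the command line is also checked for alias expansion, which is exactly what the trailing space in alias sudo='sudo ' triggers.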
These are the steps to do a recursive, incremental backup of the data stored on pod/tank to the back up drives tank-backup-a and tank-backup-b.
The following commands are to be run on ssds9
Attach the backup disk to the virtual machine:
# tank-backup-a
virsh attach-disk kvm_c8_pod /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJPLP6X vde --serial 68UW8N0_VKJPLP6X --sourcetype block --cache none
# tank-backup-b
virsh attach-disk kvm_c8_pod /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJNK77X vdf --serial 68UW8N0_VKJNK77X --sourcetype block --cache none
The following commands are to be run on pod
Import the backup tank and make a current set of snapshots:
zpool import tank-backup-a
zpool import tank-backup-b
sCurrSnap="tank@backup-time-$(date '+%Y%m%d')"
# tank-backup-a
sPrevSnapA="$(zfs list -t snapshot -S name -o name -d 1 -H tank-backup-a | grep -om1 'backup-time-.*')"
# tank-backup-b
sPrevSnapB="$(zfs list -t snapshot -S name -o name -d 1 -H tank-backup-b | grep -om1 'backup-time-.*')"
zfs snapshot -r "${sCurrSnap}"
Transfer the snapshots to the backup disk:
# tank-backup-a
zfs send --replicate --large-block --compressed --props -I "tank@${sPrevSnapA}" "${sCurrSnap}" | pv | zfs receive -F tank-backup-a
# tank-backup-b
zfs send --replicate --large-block --compressed --props -I "tank@${sPrevSnapB}" "${sCurrSnap}" | pv | zfs receive -F tank-backup-b
Export the backup tank:
zpool export tank-backup-a
zpool export tank-backup-b
The following commands are to be run on ssds9
Remove the backup disk from the virtual machine, power down the disk, and remove it from the kernel:
# tank-backup-a
virsh detach-disk kvm_c8_pod /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJPLP6X
hdparm -Y /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJPLP6X
echo 1 > /sys/block/$(basename $(realpath /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJPLP6X))/device/delete
# tank-backup-b
virsh detach-disk kvm_c8_pod /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJNK77X
hdparm -Y /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJNK77X
echo 1 > /sys/block/$(basename $(realpath /dev/disk/by-id/ata-WDC_WD80EFZX-68UW8N0_VKJNK77X))/device/delete
The smartd service automatically detects the new disks and begins monitoring them. Stupidly, the smartd service flips-the-fuck-out when the drive is removed. Restart the smartd service:
systemctl restart smartd
Remove the backup disk.
The zfs receive above uses -F, which forces a rollback of the backup pool to its most recent snapshot before the receive begins, discarding any stray changes made on the backup disk since the previous backup.
The zfs send command above has many options being passed; a brief overview of what they're accomplishing:
--replicate - sends the entire dataset tree, including properties, snapshots, and descendent datasets
--large-block - permits blocks larger than 128 KiB in the stream
--compressed - sends blocks in their on-disk compressed form rather than decompressing them first
--props - includes the datasets' properties in the stream
-I - sends an incremental stream containing all intermediate snapshots between the previous backup snapshot and the current one
This script is written for the Ubiquiti USG.
This script downloads various anti-ad hosts files then merges them together, sorts in alphabetical order, and removes all duplicate entries.
Current host providers for this script are listed below; more can be added on request:
A previous version of this script saved all host entries to the file /etc/hosts; however, this caused problems on the Ubiquiti USG because the USG rewrites that file every time a DHCP address is leased or renewed.
Now this script works by automatically configuring dnsmasq through the configuration file /etc/dnsmasq.d/buildhosts-blacklist.conf to read an additional hosts file: /etc/hosts-blacklist, where all blocked host entries are stored to.
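For reference, the generated dnsmasq drop-in only needs to point at the extra hosts file; it should look something like the following (a sketch assuming the standard addn-hosts directive; the exact contents are produced by the script):
cat /etc/dnsmasq.d/buildhosts-blacklist.conf
addn-hosts=/etc/hosts-blacklist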
The following commands are to be run as root on the Ubiquiti USG
Download buildhosts-unifi.sh and make it executable:
curl 'http://matthewheadlee.com/projects/miscellaneous/buildhosts-unifi.sh' > /usr/local/sbin/buildhosts-unifi.sh
chmod 700 /usr/local/sbin/buildhosts-unifi.sh
Run buildhosts-unifi.sh once to get the initial hosts file in-place:
buildhosts-unifi.sh
Create a new cron job to run buildhosts-unifi.sh once a week:
vi /etc/cron.d/buildhosts
# Run buildhosts every Thursday at 4:44AM
44 4 * * 4 root /usr/local/sbin/buildhosts-unifi.sh
The dnsmasq service running on an Ubiquiti USG uses the name servers listed in /etc/resolv.conf in a round-robin fashion. When split-view DNS is used on an internal name server, this causes the USG to sometimes resolve the internal address and other times the external address.
Configure dnsmasq to use the DNS servers in listed order:
echo strict-order > /etc/dnsmasq.d/strict-order.conf
service dnsmasq restart
The default font selected by GUI applications which utilize the fontconfig library can be changed by editing the ~/.config/fontconfig/fonts.conf configuration file. New "alias" elements can be added to this configuration file which will prepend to the system's default list.
The command fc-match can be used to determine how the system will match a font name:
fc-match -a Monospace
DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"
Configure fontconfig to pick Noto Mono when applications request the monospace font:
$EDITOR ~/.config/fontconfig/fonts.conf
<?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<dir>~/.fonts</dir>
<alias>
<family>monospace</family>
<prefer>
<family>Noto Mono</family>
</prefer>
</alias>
<alias>
<family>monospace</family>
<prefer>
<family>DejaVu Sans Mono</family>
</prefer>
</alias>
</fontconfig>
Confirm the configuration change was effective:
fc-match -a Monospace
NotoMono-Regular.ttf: "Noto Mono" "Regular"
DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"
This script was written for routers running DD-WRT. It fetches various anti-ad hosts files, merges, sorts, and de-duplicates them, then installs the result.
This script grabs the locations of all Tesla chargers, including Superchargers and private chargers from teslamotors.com
Copyright Β© 2019 Matthew Headlee
This is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Copyright Β© 2019 Matthew Headlee. This work is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
Please feel free to reach out to me with any questions or feedback on the content presented here. My email, GPG key, and other contact methods can be found at the top of the page on my primary domain: http://www.matthewheadlee.com/