Home

last update: 2024-03-29 at 14:00:07 CET

Tips And Tricks On The Shell

Using GNU/Linux isn’t a matter of choice. It’s a matter of freedom!

WARNING - Some commands may manipulate or delete files!

Laptop

Control Display Backlight

ls /sys/class/backlight/
cat /sys/class/backlight/acpi_video0/max_brightness
echo 8 > /sys/class/backlight/acpi_video0/brightness

Get Battery Runtime

acpi -V

Get Battery Info

upower -d

System Info

List Processes by their Memory Usage

Constantly watch (every 2 seconds) which processes consume the most memory (RSS) and sort them by size. Additionally, truncate the command lines to the current line width of the terminal ($COLUMNS, normally 80). Finally, head shows only the top 30 memory users.

watch "ps -e -orss=,args= | sort -brn | pr -TW$COLUMNS | head -n30"

Memory Usage

Memory can either be used or free. That seems easy :) But if the system's memory is exhausted, memory pages can also be swapped to disk. Multiple processes can share the same memory region. Processes can map video RAM, files, libraries etc. into their virtual address space VmSize (see: /proc/pid/maps). Identifying how much memory a process really uses might not be as simple as it sounds.

Virtual memory 1x1 (see man proc):
  • VmPeak: Peak virtual memory size.

  • VmSize: Virtual memory size.

  • VmLck: Locked memory size.

  • VmHWM: Peak resident set size ("high water mark").

  • VmRSS: Resident set size.

  • VmData, VmStk, VmExe: Size of data, stack, and text segments.

  • VmLib: Shared library code size.

  • VmPTE: Page table entries size (since Linux 2.6.10).
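
A quick way to look at these fields for a single process (here the inspecting process itself via /proc/self):

grep '^Vm' /proc/self/status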

3 Ways to display various memory stats of the system:

vmstat -s
cat /proc/meminfo
free

Sum up the VmSize from all processes:

sudo grep VmSize /proc/*/status | awk '{sum += $2} END {print sum}'

Sum up the VmSize of all firefox processes (PIDs returned by pidof) and print the result every second

while true; do for p in $(pidof firefox); do grep VmSize /proc/$p/status; done | \
        awk '{sum += $2} END {print sum}'; sleep 1; done

Get very detailed memory usage information from all browser processes (here chromium). This can nicely be combined with the sum command above

for p in $(pidof chromium-browser); do cat /proc/$p/smaps;done

Further memory info

Display slabinfo (kernel slab allocator usage):

vmstat -m

See How Much Space is Used

See which directories take up the most space. The following command prints the sizes in megabytes (-m) and makes sure du stays in the same filesystem (-x). At the end everything is sorted numerically (-n) by size.

sudo du -mx --max-depth=2 / | sort -n

Monitor All Processes Matching a String

Use top to display all processes matching the string X

top $(pgrep X | sed 's|^|-p |g')

Display Processes that Spawned the Most Threads

Sometimes applications like Java programs spawn new threads like hell. The following command displays a top 10 of the processes with the highest thread count. -A selects all processes, -o nlwp,pid,cmd chooses the columns we want ps to show. The rest of the command cuts the output to the terminal width (pr), sorts descending by nlwp count, merges identical lines (uniq), and head displays the first 10 entries.

ps -Ao nlwp,pid,cmd | pr -TW$COLUMNS | sort -rn | uniq | head -n10
Tip
nlwp means: Number of Lightweight Processes (basically a thread)

To see the total number of allowed processes on the system do:

cat /proc/sys/kernel/threads-max
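
To check the thread count of a single process (the current shell $$ is just an example PID):

ps -o nlwp= -p $$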

Test Hard Drive Performance

Using hdparm

For meaningful results, this operation should be repeated 2-3 times

sudo hdparm -tT /dev/sda
Timing cached reads (-T):

This displays the speed of reading directly from the Linux buffer cache without disk access. This measurement is essentially an indication of the throughput of the processor, cache, and memory of the system under test.

Timing buffered disk reads (-t):

This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead.

(from the man page)

Using dd

Measure the hard drive's read and write performance by writing to and reading from a file

dd if=/dev/zero of=perf-test bs=4M count=1000 oflag=direct
dd if=perf-test of=/dev/null bs=4M count=1000 iflag=direct

from the man page:

`iflag=FLAG[,FLAG]...'
     Access the input file using the flags specified by the FLAG
     argument(s).  (No spaces around any comma(s).)

`oflag=FLAG[,FLAG]...'
     Access the output file using the flags specified by the FLAG
     argument(s).  (No spaces around any comma(s).)

    `direct'
          Use direct I/O for data, avoiding the buffer cache.  Note
          that the kernel may impose restrictions on read or write
          buffer sizes.  For example, with an ext4 destination file
          system and a linux-based kernel, using `oflag=direct' will
          cause writes to fail with `EINVAL' if the output buffer size
          is not a multiple of 512.

Monitor Disk Activity

Useful for:
  • Find processes which cause the disk to spin up (e.g. saving power on a laptop)

  • Verify that certain processes really write to the correct filesystem

It might be important to stop your syslogger because it will also write to the disk, which may cause a loop. Use dmesg to see which process writes to the disk.

To start monitoring
/etc/init.d/syslog-ng stop
echo 1 > /proc/sys/vm/block_dump
dmesg
To stop monitoring
echo 0 > /proc/sys/vm/block_dump
/etc/init.d/syslog-ng start

Display the System Load in Percent

cat /proc/loadavg | cut -c 1-4 | echo "scale=2; ($(</dev/stdin)/`nproc`)*100" | bc -l

Copy/Sync/Backup/Encrypt/Decrypt Files

Migrate System to another Drive/Stick

Boot your system with a live Linux from a stick or a CD. Assuming the source system is on drive sdb, the destination drive is sdc, and the booted live system is sda. My approach is to just mount both drives into the corresponding source and destination directories and then copy all relevant directories over. Some empty system directories have to be created. The last step is to install the boot loader grub. To make this succeed we have to change our root directory (chroot) to the newly copied destination system. Grub needs access to some system-relevant directories created by the kernel, so we have to mount them before chrooting.

mkdir /mnt/source /mnt/destination
mount /dev/sdb1 /mnt/source
mount /dev/sdc1 /mnt/destination
cd /mnt/source

# copy files using a whitelist (relative to /mnt/source)
cp -aRv bin boot etc home lib lib64 opt root run sbin selinux srv usr var vmlinuz* initrd.img* /mnt/destination
# copy files using a blacklist (requires: shopt -s extglob)
cp -aRv !(lost+found|dev|run|proc|sys|mnt|media|tmp) /mnt/destination

cd /mnt/destination
mkdir dev media mnt proc sys tmp
chmod 1777 tmp

# update fstab with current uuid
blkid
vim /mnt/destination/etc/fstab

# install grub
mount -o bind /dev /mnt/destination/dev
mount -o bind /sys /mnt/destination/sys
mount -o bind /proc /mnt/destination/proc
chroot /mnt/destination
grub-setup /dev/sdc
grub-install /dev/sdc
update-grub
init 0

Backup whole System

Use tar to back up the whole system. p preserves file permissions, S handles sparse files efficiently, --one-file-system makes sure tar stays in the same local file system, omitting directories like /proc. I also exclude /home here because it is backed up elsewhere. Everything is compressed using gzip.

 tar -pSczvf $(hostname).backup.tar.gz --one-file-system --exclude=/home /

Copy Many Small Files Fast

Local copy:

(cd /src/dir && tar cf - . )|(cd /dest/dir && tar xvf - )

Over network using ssh:

(cd /src/dir && tar cf - . )| ssh user@host "cd /dest/dir && tar xvf -"

from current dir to users home dir:

tar cf - prg|ssh user@host "tar xvf -"

Backup a Directory to Another Host

now=$(date +%Y%m%d-%H%M%S);s=directory;file=$now-$s.tar.bz2; \
time tar cfj $file $s; \
scp $file user@host:/dest/dir;

Encrypt/Decrypt File With OpenSSL

# encrypt
openssl enc -aes-256-cbc -salt -in file -out file.ssl -k pass
# decrypt
openssl enc -aes-256-cbc -salt -d -in file.ssl -out file -k pass

Terminal Multiplexing

I always tend to have lots of terminals open. Even if there are multiple desktops, it always seems that I am running out of desktop space. Terminal multiplexers are tools which simplify the management of multiple terminals. You can group/split/link windows and do lots of funny stuff.

Using Tmux

I suggest using tmux instead of screen, because screen is not very intuitive to use.

Table 1. Keybindings

ctrl+b c       create a new tab/window
ctrl+b N       move to window number N, where N is a number 0-9
ctrl+b n       move to the next window
ctrl+b p       move to the previous window
ctrl+b w       display the window list (a window can be selected with the cursor keys)
ctrl+b ,       rename a window (useful in combination with find)
ctrl+b f       find a window (useful if you renamed it before)
ctrl+b :       display the command prompt

Table 2. Leave and come back

ctrl+b &                    kill the currently active window
ctrl+b x                    kill the currently active pane
ctrl+b !                    break the current pane out into its own window
tmux new-session -s name    create a new tmux session
ctrl+b d                    detach from the session (like ctrl+a d in screen)
tmux ls                     list all tmux sessions
tmux a -t name              reattach to a detached session
tmux rename-session -t [current name] [new name]    rename a session

Table 3. Window Splitting

ctrl+b %         split the window 50/50 vertically
ctrl+b "         split the window 50/50 horizontally
ctrl+b cursor    move to the pane in cursor key direction
ctrl+b+cursor    (hold ctrl+b) resize the current pane in cursor direction

Table 4. Other Things

ctrl+b t                             show the time
ctrl+b : source-file ~/.tmux.conf    reload the tmux configuration
ctrl+b : clear-history               clear the tmux buffer history

Copy and Paste Using the Mouse

If copy and paste (left click + drag to select, middle click to paste) does not work, try holding down the shift key for the entire operation.

Copy and Paste Using the Keyboard

Press ctrl+b [ to switch to cursor-movement mode, press space (or in some cases ctrl+space) to enter text-selection mode, press ctrl+w to copy the text, then press ctrl+b ] to paste it.

Using Screen

Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells)

Multisession

User A starts screen with screen -S name, then presses ctrl+a to switch to command mode, then : to open the internal command line, and switches on multiuser mode with multiuser on. A second terminal of the same user can then join the session using screen -rx name.
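
The same steps as a short command/keystroke sequence ("name" is just an example session name):

screen -S name          # user A starts a named session
# inside screen: press ctrl+a then :  and enter
multiuser on
# from a second terminal of the same user:
screen -rx name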

Keyboard

Caps Lock is inverted/broken/always on

Try this wonderful command if your caps lock key does not do what you want

xdotool key Caps_Lock

Remote Control

Who doesn't know it: you sit with your laptop on the couch and you need to do some stuff on your workstation, but you don't want to stand up. x2x is a tool that forwards keyboard and mouse movements over ssh from your laptop to the workstation. Make sure OpenSSH is installed on all boxes involved.

On the workstation (the host going to be controlled)

sudo apt-get install x2x

Log into an X session on the workstation and log into the same user account on your laptop. Assuming the workstation is to the left (west) of your laptop.

On your laptop do:

ssh -XC user@workstation "x2x -west -to :0.0"

Now slide your mouse over the left border of the screen and it should appear on the workstation.

Print or Manipulate Text

Sum up the 10th field of a csv file and divide by the number of values (average of the column)

cat file.csv|awk -F ',' '{sum+=$10}{x+=1} END {print sum/x}'

Print all Unicode Characters

for i in {0..65535}; do echo -en "\u$(printf '%04x' "$i")"; done

Create A Random String

Useful for passwords, filenames and others

md5sum <<< $(cat /dev/urandom | head -c 100) | sed "s/ .*$//"

Create Random Text

If you don't have lorem ipsum

tr -dc a-z0-9 < /dev/urandom | tr 0-8 \ | tr 9 \\n | sed 's/^[ \t]*//' | fmt -u

Using Pipes

Pipe 1 gigabyte of zeroes through pv into /dev/null at a data rate of 50 megabytes/s. head is used to stop after 1 gigabyte.

cat /dev/zero | head -c 1G | pv -L 50m -s 1G > /dev/null

Date

Get The Number of Days between two Dates

Use date and bc to calculate the number of days between 15 April 2013 and 30 June 2014. The answer is 441. date returns the number of seconds since the epoch (%s) for each date; the difference is then divided by the number of seconds per day.

echo $"(( $(date --date="140630" +%s) - $(date --date="130415" +%s)))/(60*60*24)"|bc

Display Date and Time Nicely Formatted

This uses figlet to display a nicely formatted date and time string in your console window.

watch -n1 "date '+%D%n%T'|figlet -k"

Another funny way to display a colorful box with the current date and time inside

while true;do echo "$(date '+%D %T'|toilet -f term -F border --gay)";sleep 1;done

MySQL

Dump a Database

mysqldump --user=root --add-drop-table databasename > databasename.sql

Conversion

Convert all ogg files to mp3

The ogg format is considered better than mp3, but sadly some old or cheap devices can't play ogg files. Here is how to convert them to mp3 with a bitrate of 80 kbit/s:

sudo apt-get install ffmpeg

for name in *.ogg; do ffmpeg -i "$name" -ab 80k -map_meta_data 0:0,s0 "${name/.ogg/.mp3}"; done;

Convert a mp4 Video to mp3 Audio

ffmpeg -i "video.mp4" -vn -acodec libmp3lame -ac 2 -qscale:a 4 -ar 48000 "audio.mp3"

Downsize a mp4 Podcast for A Mobile Tracking Device

I scaled down the video by 1/2, set the crf (Constant Rate Factor) to 32 and tuned for stillimage (which additionally saves some MB)

ffmpeg -i "test.mp4" -vf "scale=iw/2:ih/2" -vcodec libx264 -crf 32 -tune stillimage "test.x264.mp4"

To do this for lots of files I would recommend first fixing whitespace errors in the filenames (some tools still don't get it). A whitespace is a separator; using it in filenames is just bad.

Note
ffmpeg tries to determine the output format based on the extension of the output filename, which is IMO a little bit odd. So make sure to pass -f mp4, otherwise you get confusing error messages (it took me 20 minutes to figure out what was really going on).
OLDIFS=$IFS
IFS=$(echo -en "\n\b")
for f in $(ls);do mv "$f" "$(echo $f | sed 's/ /_/g')"; done
for f in $(find -name \*.mp4 -type f); \
        do ffmpeg -i $f -vf "scale=iw/2:ih/2" -vcodec libx264 -crf 32 -tune \
        stillimage -f mp4 $f.x264; \
done
IFS=$OLDIFS

Youtube

Download an entire youtube channel

/usr/local/bin/youtube-dl -f best -ciw -o '%(title)s.%(ext)s' -v <channel-url>

Scrape Twitter

Twitter is information. To download tweets from a certain user, twitterscraper is a very nice tool. The information can easily be archived in JSON format for later processing.

# install
pip install --user twitterscraper

# where installed
python -m site --user-base
/home/timo/.local

# run
/home/timo/.local/bin/twitterscraper --help
/home/timo/.local/bin/twitterscraper --begindate 2019-12-29 \
        -o 2019-12-29__2020-02-04.user.tweets.json --user <USER>

# display
jq '.[].text' 2019-12-29__2020-02-04.user.tweets.json

Image Manipulation

Resize all files in a Directory

Often I have the problem that I have a couple of high-res images in a directory and want to resize all of them to a smaller size. This example shows a way to rename all files in the directory to a name with an ascending number and then convert all files to a resolution of 800x600 pixels.

i=0; for f in $(ls *.jpg); do mv $f server_$i.jpg; (( i+=1 )); done

for i in $(ls *.jpg); do convert $i -resize 800x600 _$i; echo "resizing $i";
done
Note
The original images are not deleted! The resized images just have an underscore prefixed in their name.
Note
The convert tool is part of the ImageMagick software package.

Denoise an Image (gimp)

using wavelet
  • Download Wavelet denoise plugin

  • install it with make && make install

  • open gimp and go to: Filters → Enhance

using GRAYCstoration
  • Use GRAYCstoration (I have it already in gimp 2.6.12)

  • open gimp and go to: Filters → Enhance

Downloading

Download a Certificate using OpenSSL

To download a certificate on the command line you can use the openssl client to show the certificate. It will also display more information, which we cut away using sed. The certificate will be saved in the /tmp directory. The echo -n closes stdin so that s_client exits after the handshake instead of waiting for input.

echo -n | openssl s_client -connect HOSTNAME:PORT | \
        sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/SERVER.cert
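
To inspect what was saved, the certificate can then be printed in human-readable form:

openssl x509 -in /tmp/SERVER.cert -noout -text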

Download Specific Files From a Website Using Wget

Download all .mp4 files and .sha1 checksums from a website

 wget -r -l2 -nd -Nc -A.mp4,.sha1 -c URL

Find (GNU findutils)

Recursive Delete All Directories

Delete all .svn directories in all subdirectories under the current directory.

find . -name ".svn" -type d -exec rm -R {} \;

Recursive change Rights

Change the access rights of all makefiles in the current and all subdirectories to read write for current user.

find . -name "Makefile" -type f -exec chmod 0600 {} \;

Recursive Search and Replace

This command replaces "searchtext" with "replacetext" in all *.txt files recursive from current directory.

sed -i 's/searchtext/replacetext/' $(find . -name "*.txt" -type f -print)

Search A File Modified in Given Year

Search all files (here all mp4 videos) recursively from the current directory and check whether the modification year is 2010. If yes, print the file name to stdout.

find . -iname \*.mp4 -exec ls -la {} \; | awk '{ if ( $8==2010) printf $0"\n" }'

Search for a file Containing Specific Text

Search recursively and case-insensitively from the current directory in all .txt files. If a file contains the string SEARCH, print the file name.

for i in $(find . -iname "*.txt*"); do grep -i SEARCH $i; if [ $? -eq 0 ]; then \
echo $i; fi; done

The file name will be echoed if the file exists and is a symbolic link (see man bash, conditional expressions).

for f in $(find .); do if [ -h $f ]; then echo $f; fi; done

Find Duplicated Files and Md5sum them

find . -type f -exec md5sum {} \; > allSongs.md5
cat allSongs.md5 | sort > allSongs.md5.sort
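
With GNU uniq the sorted list can then be reduced to the duplicated entries by comparing only the 32-character checksum column:

uniq -w32 --all-repeated=separate allSongs.md5.sort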

Get a Random File from current Directory

Useful if you want to play a random music file. The command finds all files below the current directory, counts them and prints a random file name from the list (not very efficient, but it works).

allfiles=$(find . -type f); files=( $allfiles );c="${#files[@]}"; \
r=$(($RANDOM % $c)); echo ${files[$r]}

Grep

Grep for two Different Strings

Grep matches if either foo or bar is present in the line.

echo 'hello foo' | egrep -i '(foo)|(bar)'

Sed

Command1

Searches all *.txt files recursively from . and prints the matching lines if a file contains the string "foo".

Command2

Removes all lines containing the string "foo" in all those files. BE CAREFUL, THIS COMMAND WILL MODIFY YOUR FILES(!)

All files that contain the string ".svn" in their filename are skipped.

for f in $(find . -name "*.txt*" | egrep -v ".svn"); do echo $f; cat $f | \
grep foo; done

for f in $(find . -name "*.txt*" | egrep -v ".svn"); do echo $f; sed -i \
'/foo/d' $f; done

Filesystems

Create a Ramdisk

Create an ext2 filesystem in RAM with a size of 8 megabytes

sudo mkfs -q /dev/ram1 8192
sudo mkdir /mnt/ramdisk
sudo mount /dev/ram1 /mnt/ramdisk
df -H

Creating Files

Create Recursive Directories And Files

for i in {0..9}; do for c in {0..9}; do i="$i/$c"; \
        (mkdir -p $i; cd $i; echo 12345 > file_$c); done; echo $i; done

Create Sparse Files

Speed: ultra fast

Several ways to create a 10GB sparse file. Sparse files do not actually take up real space on the disk. It is possible to create a 1TB file even if you only have 1GB of free disk space. The filesystem only records the file size; blocks are allocated when data is actually written.

truncate -s 10G 10gb-file
dd if=/dev/zero of=filename bs=1 count=1 seek=10G
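
To verify that the file is really sparse, compare the apparent size with the blocks actually allocated:

ls -lh 10gb-file    # apparent size: 10G
du -h 10gb-file     # actually allocated: (almost) 0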

Create a 300GB File

Speed: very fast

This might be the fastest solution if creating a file that should take up real disk space (non-sparse) but where the content is not important. The file is allocated but no data (e.g. zeroes) is written to the filesystem.

fallocate -l 300G 300gb

Create a 300GB File filled with Zeroes

Speed: fast

Allocate a file filled with zeroes (non sparse file). In this example the file contains real zeroes. Generating zeroes is very fast so the speed is comparable to the max writing speed of the disk.

dd if=/dev/zero of=300gb bs=1024M count=300

Create a 3GB File filled with Good Random

Speed: (very) very slow

Generating good random numbers is very hard. This is why /dev/random is very, very slow. A compromise might be using /dev/urandom, which is much faster at the cost of some predictability of the random numbers.

dd if=/dev/random of=3gb bs=1024M count=3

Create a 32GB File Filled with Repeated Random

Speed: slow

dd if=/dev/urandom of=32gb bs=1M count=1
for i in {1..15}; do cat 32gb 32gb > tmp && mv tmp 32gb; done

Create a 10GB File Filled with Pseudo Random

Speed: moderate

shred -s 10G - > 10gb

If you need multiple numbered files each in a separate directory you can also generate them in parallel with this command (here we create 20 x 60GB random files!)

for i in {1..20};do mkdir $i && (cd $i && shred -s 60G - > movie${i}.mxf)&done

finally create md5 sums in parallel (optional):

for i in {1..20};do (cd $i && md5sum movie${i}.mxf > movie${i}.mxf.md5)&done

Rename Files

Rename all files that start with any number of digits followed by an underscore and contain whitespace, e.g. "0001_file name", to a filename without the number prefix, e.g. "file name".

OLDIFS=$IFS
IFS=$(echo -en "\n\b")
for f in $(ls);do mv "$f" "$(echo $f | sed 's/[0-9].*_//')"; done
IFS=$OLDIFS

Fix white space errors in filenames (replace ' ' with _)

OLDIFS=$IFS
IFS=$(echo -en "\n\b")
for f in $(ls);do mv "$f" "$(echo $f | sed 's/ /_/g')"; done
IFS=$OLDIFS

Manipulate File Attributes

Set the SGID (set group ID) bit on all directories recursively below /tmp/dir. Note that the IFS (Internal Field Separator) is temporarily set to newline \n. This is necessary to also handle directories containing whitespace. Afterwards it is set back.

$OIFS="$IFS"; IFS=$'\n'; for d in $(find /tmp/dir/ -type d); do chmod g+s "$d";done; IFS="$OIFS"

Show Libraries a Binary was Linked Against

show direct libs:

readelf -d $(which ls) | grep NEEDED

Another way to show direct libs:

objdump -x $(which ls)

show all direct and indirect libs:

ldd $(which ls)

Managing Resources

The ulimit Parameter

Show the current limits: ulimit -a

To set the maximum number of processes to 2048 you have to edit /etc/security/limits.conf

user        soft    nproc          1024
user        hard    nproc          2048

A hard limit for nproc (maximum number of processes) should always be set to prevent users from building a fork bomb. A fork bomb can be something like this:

:(){ :|:& };:
Caution
If your process limit is set to unlimited, a fork bomb will quickly consume considerable system resources and may take your system down(!)
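
The same bomb written with a named function is easier to read (do not run this either):

bomb() { bomb | bomb & }   # each call starts two copies of itself in the background
bomb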

List the Processes by their open file count

thanks to Alfe (stackoverflow)

find /proc -maxdepth 1 -type d -name '[0-9]*' -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; -printf "fds (PID = %P), command: " -exec bash -c "tr '\0' ' ' < {}/cmdline" \; -exec echo \; | sort -rn | head

Execute Command Without Revealing Password

If you issue a command like wget --user=timo --password=woo, it will go straight into the bash history and can be seen by using the history command. This is generally bad.

Put a " " (space) in front of your command:

" wget --password=woo ..."

Put the password into a local variable

read -es pass
wget --password=$pass ...
Caution
The command including the password is still visible in ps while the command runs

Bash Brace Expansion

Brace expansion is useful for lots of tasks.

Creating directories:

mkdir D{1..5}

This lazily creates the directories: D1, D2, D3, D4, D5
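
A few more things brace expansion is handy for (file.txt is just an example name):

cp file.txt{,.bak}      # expands to: cp file.txt file.txt.bak
echo {a..e}{1..3}       # a1 a2 a3 b1 ... e3
echo {10..0..2}         # 10 8 6 4 2 0 (step syntax needs bash >= 4)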

Bash Scripting

Get Absolute Path of Script Directory

If the script file is /home/timo/script and it is executed from /, then $cwd will be /home/timo (the directory the script resides in, regardless of where it was called from).

cwd=$(dirname $(readlink -f ${0}))
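
A typical use is referencing files relative to the script location, no matter from where it was started (config.sh is just a hypothetical example):

source "$cwd/config.sh"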

Defining Colors

R='\e[0;31m'
RB='\e[1;31m'
G='\e[0;32m'
GB='\e[1;32m'
Y='\e[0;33m'
YB='\e[1;33m'
B='\e[0;34m'
M='\e[0;35m'
C='\e[0;36m'
X='\e[0m'

Use it this way:

echo -e "Hi there, I am ${GB}GREEN${X}"

or use a function:

printc()
{
        local text=$1
        local color=$2
        echo -e ${color}${text}${X}
}
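
Called like this, for example:

printc "Hi there, I am GREEN" "$GB"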

Line Continuation

It's now the second time I've wondered how to correctly break lines in a bash script. Here is a small real-life example of how it works:

$cat -n ./test
     1  #!/bin/bash
     2
     3  A="I want to line break here ^
     4  and not here ^"
     5
     6  B="I want to line break here ^
     7          and not here ^"
     8
     9  C="I want to line break here ^"\
    10  "and not here ^"
    11
    12  D="I want to line break here ^"\
    13          "and not here ^"
    14
    15  E='I want to line break here ^
    16           and not here ^'
    17
    18  echo A: $A
    19  echo B: $B
    20  echo C: $C
    21  echo D: $D
    22  echo E: $E

This comes out:

$./test
./test: line 13: and not here ^: command not found
A: I want to line break here ^ and not here ^
B: I want to line break here ^ and not here ^
C: I want to line break here ^and not here ^
D:
E: I want to line break here ^ and not here ^

A and C do not use indentation whereas B and E do. D is an error because bash interprets it as two commands (an assignment followed by a command). So I prefer B or E.

Cycle through all Hosts in Subnet

And Show Running OS

Note
This works best if you have configured a passwordless ssh login
for h in $(sudo nmap 10.200.127.* -p22 | grep ports | \
        sed 's/.*on //;s/ (.*$//;s/://'); do ssh $h "lsb_release -a"; done

Troubleshooting SSH Logins

Too many authentication failures for user

If you get the following message:

$ssh root@host
Received disconnect from 10.200.127.66: 2: Too many authentication failures for
root

If your SSH agent offers multiple SSH keys to the server but none of them matches, the server stops accepting any key after too many keys have been offered. To access the host and fix the problem (e.g. put your key into ~/.ssh/authorized_keys), you can force password login:

ssh -o PubkeyAuthentication=no root@host

Increase Verbosity

Add the -v parameter to increase verbosity. You can increase the verbosity even more by using -vv; -vvv is the maximum verbosity level.
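
For example:

ssh -vvv user@host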

Start Another Instance in Debug Mode

A nice way to troubleshoot SSH issues is to temporarily start a separate SSH daemon. That way the problem can be isolated and does not interfere with other SSH daemons. Also see below why ssh login can still fail.

on the server:

sudo $(which sshd) -d -p 1234

on the client:

ssh host -v -p 1234

Turn on DEBUG Logging of existing SSHD Instance

To raise the log level of an existing SSHD instance, edit the following file

sudo vim /etc/ssh/sshd_config

Add the line

LogLevel DEBUG

Restart SSHD and watch the logfile for errors while logging in

sudo /etc/init.d/sshd restart
sudo tail -f /var/log/secure

SSH Login is Failing only in Daemon Mode

If you run sshd in the foreground, login is possible, but if sshd runs as a daemon the login fails with:

Failed publickey for timo from 10.200.127.66 port 53228 ssh2

Check for SELinux denials in the audit log

cat /var/log/audit/audit.log

The solution is to run restorecon to restore the SELinux context

sudo restorecon -r -vv /home/timo/.ssh

No Random Number Generator Available

The following command ssh host produced this output

PRNG is not seeded

check if both devices are listed

ls /dev/*random
/dev/urandom
/dev/random

If /dev/urandom, /dev/random, or both are missing, create them with

mknod -m 644 /dev/random c 1 8
mknod -m 644 /dev/urandom c 1 9
chown root:root /dev/random /dev/urandom

Wine

Run Elster Steuererklärung

Tested with wine-2.15

wine msiexec /i ElsterFormularKomplett.msi

Video Editing

I wanted to cut together some video scenes from the holidays

  • Copy the RAW material together

  • apt-get install mpv, a media player which also displays milliseconds

  • use ffmpeg to cut all interesting scenes for later composition

ffmpeg -i "P1060099.MOV" -ss 00:00:10.566 -to 00:00:22.666 -vcodec libx264 -acodec copy "P1060099_scene_name.MOV"