Table of contents
New shell gimmicks
So, I decided to streamline my shell config the other day. The very first thing I did was to write an awk replacement in Perl. That sounds a little senseless, but sometimes I need more power inside an awk command, and sometimes I just want to save typing. This is how it looks when used:
# awk version:
ps | grep sleep | grep -v grep | awk '{print $1}' | xargs kill
# pwk version:
ps | grep sleep | grep -v grep | pwk 1 | xargs kill
This is the simple variant, which just saves typing; pretty handy. The other variant is more perlish and at first glance looks like the original awk syntax. However, you can add almost any kind of Perl code to it:
ps | pwk 'if($5 =~ /^python$/) { $t=`fetch -o - "http://$8/"`; if($t =~ /<title>(.+?)<\/title>/) { print "$8: $1"} }'
Here's the shell function, just put it into your .bashrc:
pwk () {
if test -z "$*"; then
echo "Perl awk. Usage:"
echo "Perlish: pwk [-F/regex/] [-Mmodule] <perl code>"
echo " Simple: pwk <1,2,n | 1..n>"
echo "Perlish helpers:"
echo " p() - print field[s], eg: p(\$1,\$4,\$7)"
echo " d() - dump variables, without param, dump all"
echo " e() - exec code on all fields, eg: e('s/foo/bar/')"
echo
echo "Default loaded modules: Data::Dumper, IO::All"
echo "Enable \$PWKDEBUG for debugging"
echo "Simple mode has no helpers or anything else"
else
# determine pwk mode
if echo "$*" | egrep '^[0-9,\.]*$' > /dev/null 2>&1; then
# simple mode
code=`echo "$*" | perl -pe 's/([0-9]+)/\$x=\$1-1;\$x/ge'`
perl -lane "print join(' ', @F[$code]);"
else
# perl mode
# prepare some handy subs
uselib="use lib qw(.);"
subprint="sub p{print \"@_\";};"
subsed='sub e{$r=shift; foreach (@F) { eval $r; }};'
subdump='sub d {$x=shift||{_=>$_,S=>\@F}; print Dumper($x);};'
begin="; BEGIN { $uselib $subprint $subdump $subsed }; "
# extract the code and eventual perl parameters, if any
code=""
args=""
last=""
for arg in "$@"; do
args="$args $last"
last="$arg"
done
code=$last
# fix perl -F /reg/ bug, complains about file /reg/ not found,
# so remove the space after -F
args=`echo "$args" | sed -e 's/-F /-F/' -e 's/-M /-M/'`
# convert $1..n to $F[0..n]
code=`echo "$code" | perl -pe 's/\\\$([0-9]+)/\$x=\$1-1;"\\\$F[\$x]"/ge'`
# rumble
defaultmodules="-MData::Dumper"
if perl -MIO::All -e0 > /dev/null 2>&1; then
defaultmodules="$defaultmodules -MIO::All"
fi
if test -n "$PWKDEBUG"; then
set -x
fi
perl $defaultmodules $args -lane "$code$begin"
if test -n "$PWKDEBUG"; then
set +x
fi
fi
fi
}
Another new shell function is extr, which unpacks any kind of archive. In contrast to its sisters out there (there are a couple of generic unpack shell functions to be found on the net), it takes much more care about what it does. Error checking, you know. It also looks inside the archive to check whether it extracts into its own directory, which is not always the case and very annoying. In such instances it generates a directory name from the archive name and extracts the archive there. Usage is simple: extr archivefile. Here's the function:
extr () {
act() {
echo "$@"
"$@"
}
n2dir() {
tarball="$1"
suffix="$2"
dir=`echo "$tarball" | perl -pe "s/\.$suffix\$//i"`
dir=`basename "$dir"`
echo "$dir"
}
tarball="$1"
if test -n "$tarball"; then
if test -e "$tarball"; then
if echo "$tarball" | grep -Ei '\.(tar|jar|tgz|tar\.gz|tar\.Z|tar\.bz2|tbz)$' > /dev/null 2>&1; then
# tarball
if echo "$tarball" | grep -E '\.(tar|jar)$' > /dev/null 2>&1; then
# plain old tarball
extr=""
elif echo "$tarball" | grep -E '(bz2|tbz)$' > /dev/null 2>&1; then
extr="j"
elif echo "$tarball" | grep -E 'Z$' > /dev/null 2>&1; then
extr="Z"
else
extr="z"
fi
if ! tar ${extr}tf "$tarball" | cut -d/ -f1 | sort -u | wc -l \
    | egrep ' 1$' > /dev/null 2>&1; then
# does not extract into own directory
dir=`n2dir "$tarball" "(tar.gz|tgz|tar.bz2|tbz|tar|jar|tar.z)"`
act mkdir -p "$dir"
extr="-C $dir -${extr}"
fi
act tar ${extr}vxf "$tarball"
elif echo "$tarball" | grep -Ei '\.zip$' > /dev/null 2>&1; then
# zip file
if ! unzip -l "$tarball" | grep '[0-9]' | awk '{print $4}' | cut -d/ -f1 | sort -u \
    | wc -l | egrep ' 1$' > /dev/null 2>&1; then
# does not extract into own directory
dir=`n2dir "$tarball" zip`
act mkdir -p "$dir"
opt="-d $dir"
fi
act unzip ${opt} "$tarball"
elif echo "$tarball" | grep -Ei '\.rar$' > /dev/null 2>&1; then
if ! unrar vt "$tarball" | tail -5 | grep '.D...' > /dev/null 2>&1; then
# does not extract into own directory
dir=`n2dir "$tarball" rar`
act mkdir -p "$dir"
(cd "$dir"; act unrar x -e "$tarball")
else
act unrar x "$tarball"
fi
elif echo "$tarball" | grep -Ei '\.gz$' > /dev/null 2>&1; then
# pure gzip file
act gunzip "$tarball"
else
echo "$tarball: unknown archive type"
fi
else
echo "$tarball does not exist!"
fi
else
echo "Usage: extr <archive>"
fi
}
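The "extracts into its own directory" check boils down to counting distinct top-level entries in the archive listing: a count of 1 means the archive is well-behaved. Here's a standalone sketch of that check against a throwaway tarball (file names are made up for the demo):

```shell
#!/bin/sh
# Sketch of extr's single-directory check: count distinct top-level
# entries in a tar listing. A count of 1 means the tarball extracts
# into its own directory; anything else calls for a generated one.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/pkg-1.0"
touch "$tmp/src/pkg-1.0/README" "$tmp/src/stray-file"
(cd "$tmp/src" && tar cf "$tmp/messy.tar" pkg-1.0 stray-file)
# pkg-1.0/ and stray-file are two distinct top-level entries
top=$(tar tf "$tmp/messy.tar" | cut -d/ -f1 | sort -u | wc -l)
echo "top-level entries: $top"
```

With two top-level entries the count is 2, so extr would create a directory from the archive name and extract there via tar's -C.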
And finally, an updated version of my h function, which can be used for DNS resolving. Usage is pretty simple:
% h theoatmeal.com
; dig +nocmd +noall +answer theoatmeal.com
theoatmeal.com.  346  IN  A  208.70.160.53

% h 208.70.160.53
; dig -x 208.70.160.53 +short
oatvip.gpdatacenter.com.

% h theoatmeal.com mx
; dig +nocmd +noall +answer theoatmeal.com mx
theoatmeal.com.  1800  IN  MX  5 eforwardct2.name-services.com.
theoatmeal.com.  1800  IN  MX  5 eforwardct3.name-services.com.
theoatmeal.com.  1800  IN  MX  5 eforwardct.name-services.com.
It uses dig to do the work, or host if dig cannot be found. The source:
h () {
if type dig > /dev/null 2>&1; then
args="$*"
if echo "$args" | egrep '^[0-9\.:]*$' > /dev/null 2>&1; then
# ip address
cmd="dig -x $* +short"
else
# hostname
cmd="dig +nocmd +noall +answer $*"
fi
echo "; $cmd"
$cmd
else
# no dig installed, use host instead
host="$1"
type="a"
debug=""
cmd="host $debug"
if test -z "$host"; then
echo "Usage: h <host> [<querytype>]"
return
else
if test -n "$2"; then
type=$2
fi
if test -n "$debug"; then
set -x
fi
case $type in
ls)
$cmd -l $host
;;
any)
cmd=`echo $cmd | sed 's/\-d//'`
$cmd -d -t any $host | grep -v ^\; | grep -v "^rcode ="
;;
mx|a|ns|soa|cname|ptr)
$cmd -t $type $host
;;
*)
echo "*** unsupported query type: $type!"
echo "*** allowed: mx, a, ns, any, *, soa, cname, ptr"
return
;;
esac
if test -n "$debug"; then
set +x
fi
fi
fi
}
subst update (1.1.3)
So, after a couple of idle years I made an update to my subst script. Although I use it every day, there were still some glitches here and there. For one, I just could not rename files with spaces in them. Very annoying. It was also inflexible in that I could not use additional Perl modules when using /e. STDIN was not supported, among other minor things.
So, the new version fixes all this; see the link above. Download it, rename it (remove the .txt extension) and put it into your bin directory. Usage:
Usage: subst [-M <perl module>] [-t] -r 's/old/new/<flags>' [ -r '...', ...] [<file> ... | /regex/]
subst [-M <perl module>] [-t] -m 's/old/new/<flags>' [ -m '...', ...] [<file|dir> ... | /regex/]
Options:
-r replace contents of file(s)
-m rename file(s)
-M load additional perl module to enhance /e functionality.
-t test mode, do not overwrite file(s)
Samples:
- replace "tom" with "mac" in all *.txt files:
  subst -r 's/tom/mac/g' *.txt
- rename all jpg files containing whitespace:
  subst -m 's/ /_/g' '/\.jpg/'
- decode base64 encoded contents:
  subst -M MIME::Base64 -r 's/([a-zA-Z0-9]*)$/decode_base64($1)/gem' somefile
- turn every URI into a link:
  subst -M "Regexp::Common qw /URI/" -r 's#($RE{URI}{HTTP})#<a href="$1">link</a>#g' somefile
If <file> is -, STDIN will be used as input file, results will be printed to STDOUT. -t does not apply for STDIN input.
Substitution regex must be perlish. See 'perldoc perlre' for details.
Version: 1.1.3. Copyright (c) 2002-2014 - T.v.Dein <tom AT linden DOT at>
So, in order to remove spaces from filenames, I can now just issue:
subst -m 's/ /_/g' '/\.mp3$/'
As you can see, instead of giving a shell wildcard as the last argument, I provide a regex, which is resolved by the script itself against the current directory. Bam!
ipv4.l.google.com
No comment:
host -t aaaa ipv4.l.google.com
ipv4.l.google.com has IPv6 address 2a00:1450:4019:800::1003
The first wild pansies
Every year, the missus grows masses of greens as animal feed. This year, this is the first bloom from her sowing:
Springer. No comment
Note these two Heise news items:
- 05.05.2014 20:47: Springer: Google, Facebook & Co. "want to destroy us publishers"
- 06.05.2014 08:23: Axel Springer generates more than half of its revenue digitally for the first time
Obviously they employ too many unpaid interns in their PR department. Demagogue rabble.