If there is some set of keystrokes that you find yourself using day in, day out, you can almost certainly write something to automate the process. This could be something as simple as a bash alias in your ~/.bashrc:
echo "alias ls='ls -lh --color=auto'" >> ~/.bashrc
But what if you’d like to be able to give arguments or use the output of commands? In that case, a bash script is usually the way to go. Anything placed in a directory that appears in the PATH environment variable, with the correct permissions, can be run from the command line:
$ echo $PATH
/usr/bin:~/local/bin
$ echo "echo \"Hello world.\"" > ~/local/bin/hello_world
$ hello_world
bash: ~/local/bin/hello_world: Permission denied
$ chmod +x ~/local/bin/hello_world
$ hello_world
Hello world.
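As an aside, the echo $PATH output above shows that ~/local/bin is already on the PATH; if your shell doesn't have a personal bin directory like this, a line such as the following in ~/.bashrc adds one:
export PATH="$HOME/local/bin:$PATH"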
The chmod +x command marks the file as executable, so it can be run as live code rather than just read and written like a text file.
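After the chmod, the execute bit is set for the owner (and, assuming a typical umask, for group and others too); on GNU systems you can confirm this with stat:
$ stat -c '%A' ~/local/bin/hello_world
-rwxr-xr-x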
This is not a particularly useful script; for something more useful we can take arguments passed after the name of the script. Placing the following script, named add_numbers, in a PATH-accessible directory and giving it the appropriate permissions lets us perform some simple arithmetic on the inputs:
echo $(($1 + $2))
There are no prizes for guessing the result:
$ add_numbers 1 3
4
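In practice I would guard against missing arguments; here is a sketch of a slightly more defensive version (the shebang, usage message and exit code are my own additions):
#!/bin/bash
# add_numbers: print the sum of the first two arguments
# $# holds the number of arguments passed to the script
if [ $# -lt 2 ]; then
    echo "usage: add_numbers NUM1 NUM2" >&2
    exit 1
fi
echo $(($1 + $2))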
The nth argument to the script is denoted $n, and indexing begins at 1. $(( ... )) performs arithmetic expansion, and echo prints the result.
One use of arguments I rely on a lot is checking in on Slurm jobs. The following script takes a single argument, the Slurm job ID, and prints live updates of the most recent 40 lines of the job's output:
watch -n 0.1 "tail -n 40 ~/slurm_logs/slurm_$1.out"
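Saved as, say, check_job (the name is just an example), checking on a job becomes a single short command, and watch keeps refreshing until you interrupt it with Ctrl-C:
$ check_job 831489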
Sometimes the console output of a command is itself useful: we can capture it, process it, and use it in subsequent commands. In the following script, the single argument is the job file; passing this filename to sbatch submits the job to Slurm and prints the associated job ID to the console. That job ID can then be used to monitor the progress of the job, all from a single command with a single argument:
OUTPUT=$(sbatch $1)
JOBID=$(echo "$OUTPUT" | awk -F ' ' '{print $4}')
watch -n 0.1 "tail ~/slurm_logs/slurm_$JOBID.out -n 50"
If we have a file called slurm_job.sh, running sbatch on it will give the following output:
$ sbatch slurm_job.sh
Submitted batch job 831489
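The job ID is the fourth space-separated field of that line, which is exactly what the awk step in the script pulls out; you can try it directly at the prompt:
$ echo "Submitted batch job 831489" | awk -F ' ' '{print $4}'
831489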
In our script, the string Submitted batch job 831489 is captured in the OUTPUT variable (line 1); we then use awk, with the delimiter argument (-F) set to a space, to split this string and print the 4th and final field (the six-digit job ID) into the JOBID variable (line 2). This is used in the final line to observe the progress of the job via its Slurm log.
These are simple examples, but I try to automate any command I use more than a few times per day in this manner. Bash scripting is extremely powerful, and learning the basics can save a lot of time in the long run.