While this is nothing new, I just wanted to list out some of my preferred daily commands in Linux and how and why I use them. It's not ordered in any way.
- change directory (cd):
Nothing new here, everyone uses it every day, but there are some cool things you can do with it to save time.
cd ~ # or plain cd, both take you to the home dir, normally I use it when I am lost
cd - # takes you back to the last dir you were in, especially useful when you keep switching between two dirs to work; rather than remembering the paths you can just toggle
cd ../../ # this one is a nightmare but everyone has to use it, it walks you back up the tree, in this specific example it goes up 2 dirs
cd /var/log/sssd/ # jumps straight to a specific dir, in this case /var/log/sssd/
cd "AWS Prepartion" # when i end up creating weird directory due to requirements that quotes help you with traveling weird named location like one from the example.
- print working dir (pwd):
pwd # plain and simple, tells you your present working directory. This has been handy especially when different folders share similar file or folder names (same file hierarchy); if you ever get confused you can trace where you are and verify the parent dir.
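pwd also slots into scripts; a tiny sketch (the BASE_DIR variable name is just for illustration):
BASE_DIR=$(pwd) # capture where the script was started from
echo "running from ${BASE_DIR}"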
- global regular expression print (grep):
Without this my life would have been a nightmare, especially in places where I cannot use an editor with a mouse and have to rely on the terminal and keyboard to find files with specific contents in them, like tags, keys, values etc.
You can use it with the -r (recursive) and -i (ignore case) flags, and -n to show the line number of each match.
grep -rn "200"
grep -rn "tag_mockapi_error"
# here 200 could be a status code in application logs, in a dir where you don't know how many files contain that value, and tag_mockapi_error could be the tag used by, say, rsyslog to ship logs to the log aggregation server, where you have thousands of files and lines of logs and need to find the error logs for one specific service or one specific instance of that service
grep -in "timeout" lambda.yaml
cat lambda.yaml | grep timeout
dnf list installed | grep httpd
ps aux | grep httpd
# this is helpful when you know the filename but there are so many params that you want to quickly verify you didn't miss one, rather than using cat, less, more, vi, vim or nano to read the entire file; it also works to check whether a specific package is installed on the system, or just to confirm whether a process is running
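One combination I find handy, sketched here with a placeholder pattern and path, is limiting a recursive search to one file type:
grep -rin --include="*.yaml" "timeout" . # search only yaml files under the current dir, ignoring case, with line numbers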
- find or locate:
find is mainly for realtime results and locate is mainly for speed (it reads from a prebuilt index), but I have found myself using locate for simple stuff like quickly finding files on an instance, VM or container.
find is used to find stuff, and in combination with its flags it is a very powerful tool. I normally use it to list files or directories based on timestamp, to delete things like old logs on instances that have no retention and need a custom cron to clean up frequently, or Jenkins jobs which can quickly fill up your EBS volumes, or very commonly just to find lost content.
-type f for files, -type d for directories, -name for the name, . for the current location, or you can provide a specific location, along with a combination of other flags like:
locate httpd # locate files with httpd in the name
locate "*.log" # locate all logs, especially since sometimes some logs are not under /var/log/ (quote the glob so the shell doesn't expand it first)
find /var/log/httpd/ -name "access.log" -type f
find /opt/logs/ -type f -mtime +30 # finds files older than 30 days in /opt/logs/ dir and its sub directories
find /opt/logs/ -type f -mtime +30 | wc -l # wc -l counts lines, so this returns the number of matching files
find /opt/logs/ -type f -mtime +30 -exec rm {} \; # delete selected files
You might be wondering why I am using -type f: sometimes the requirement is to delete the files but maintain the folder structure, so that your cleanup doesn't break the existing implementation, or sometimes it is just for compliance.
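A minimal sketch of that cleanup idea (the path, pattern and 30-day retention are just assumptions):
find /opt/logs/ -type f -name "*.log" -mtime +30 -delete # removes only the old files, the directory tree under /opt/logs/ stays intact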
- tail and head:
tail and head are used to view the last few lines and the first few lines of a file, of files in a dir, of specific extensions etc., with the -n flag controlling how many lines are shown.
I tend to use these for following host logs, container logs or app logs that I am debugging, or just to make sure logs are coming in as expected.
head -n 50 /var/log/httpd/access.log # first 50 lines (head has no -f flag)
tail -fn 50 /var/log/sssd/sssd_pam.log # start from the last 50 lines and keep following
tail -f /opt/vault/vault-audit.log # follows content as it is appended to the file so you can monitor it live
tail -f /var/log/httpd/*.log # follow all log files at once, useful when you are not sure which file is logging the current errors or what the content or format of the error is
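One variant worth knowing when logrotate is in play (same file as above, just as an example):
tail -F /var/log/httpd/access.log # capital -F keeps following even after the file is rotated and recreated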
- concatenate (cat) and echo:
I have used these two commands mainly for reading files, generating new files with content, updating files with dynamic values in combination with other commands, and for things like encoding and decoding, reading env vars and so on.
cat -n /var/log/zabbix/zabbix.log # read a file with line numbers
cat /var/log/mockapi/error.log2 >> /var/log/mockapi/error.log # useful when you are testing whether log aggregation picks up new lines appended to a file
cat -n /var/log/httpd/*.log | grep tag_mockapi_error # check whether any file under httpd contains the tag_mockapi_error tag
echo $NODE_ENV # read shell env
echo "environment=${NODE_ENV" >> .env # set dynamic environment value in .env file
echo -n "IAMASECUREPASSWORD" | base64 # to encode
echo -n "SWFtU2VjdXJl" | base64 --decode # decode encoded passwords handy for storing credentials indirectly in files or in secrets manager/parameter store/vault/secrets/configmap etc
- list (ls):
I have been using this command for things like checking whether a file exists, checking a file's timestamp, listing files in a dir in order, and listing file sizes to verify log rotation, and sometimes just to validate that all the expected files are in the dir and to see hidden files. It comes in handy when you need to know when a file was last updated, to make sure its modification time matches the timestamp you are expecting, to check the type of a file, whether you expect it to be hidden, or just to spot new files you expect to be created in a dir.
The most common options I use are -ltras, which do almost all the stuff I need: -l long format, -t sort by modification time, -r reverse the order (newest last), -a include hidden files, -s show the allocated size.
ls -ltras # lists everything
ls -ltras --block-size=M # list everything with file sizes in MB
ls -ltras /etc/rsyslog.conf # validate that the rsyslog.conf file exists and check its details
ls -ltras /etc/logrotate.d/ # list all log rotation configuration files
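If the raw block sizes are hard to read, adding -h prints them human readable:
ls -ltrash /var/log/httpd/ # sizes show as 4.0K, 12M etc. instead of block counts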
- process status (ps):
Very useful when you want to quickly check how many processes are running, with details in various output formats, or quickly verify the status of a process. I have been using it to check whether a process I am expecting, like httpd, hutch etc., is running on the server.
ps aux
ps aux | grep hutch
ps -ef
ps -ef | grep httpd
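Beyond grepping, ps can also print exactly the columns you care about; the column list here is just one I find useful:
ps -eo pid,user,%cpu,%mem,cmd --sort=-%mem | head -n 5 # top 5 processes by memory usage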
- list open files (lsof) and kill:
I use this combination when my application port shows as already in use and I want to close the owning process, either forcefully or gracefully. Sometimes I end up with a background process, or I unknowingly end up running instances of the same app in different terminals after a break.
Here lsof lists the process id of whatever is running on port 5000 or 3000, and kill does the killing; the -t flag makes lsof print just the PIDs so they can be piped straight to kill.
lsof -t -i :5000 | xargs kill -9 # forcefully (SIGKILL)
lsof -t -i :3000 | xargs kill # gracefully (default SIGTERM)
kill -9 32145 # kill directly using the process id
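Before reaching for -9 it can be worth confirming what the PID actually is (32145 being the example PID from above):
ps -p 32145 -o pid,cmd # shows which command owns the PID before you kill it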
- netstat (legacy) or ss:
These have been good commands for checking network connections. I commonly use whichever is available by default when working on RHEL or AL2023, but both do the same thing. Typically I am expecting certain ports to be opened by specific services, for example TCP port 514 for rsyslog, or app ports like 8080.
netstat -plunt
ss -plunt
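To check one specific port rather than scanning the whole table (514 being the rsyslog example above):
ss -plunt | grep :514 # confirm rsyslog is actually listening on 514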
- telnet:
While working on servers or containers this has been very handy to confirm whether a certain port is reachable from a certain location: for example, to confirm that a port I newly opened in a security group and NACL works from one AWS account to another, or from one instance to another in the same account but on a different server; to check whether a container can reach another container on the same or a different network; whether my DB ports are accessible from my application containers; whether I can reach another service's load balancer; or whether I can SSH into another instance from the bastion server. It ensures that, if I still cannot perform certain actions, at least network connectivity is not one of the issues.
telnet <ip> <port>
telnet 10.0.0.12 8080
- nslookup and dig:
I have used them a few times, given the type of issue they deal with; in my case it was when a subdomain was intermittently inaccessible from servers in one of our AWS accounts while I was using a DNS resolver. These commands were handy to verify whether I was getting the correct record type in the result, to see whether the issue was constant or intermittent, to trace the issue and get detailed information on the record, or to find the server IP address behind a given private local record.
dig +trace jaeger.ydvsailendar.com
dig @8.8.4.4 jaeger.ydvsailendar.com # query using specific dns server
dig -x 35.179.107.142 # reverse dns lookup
nslookup prometheus.ydvsailendar.com
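When I just want the value and none of the ceremony, +short trims the output (same hostname as above):
dig +short jaeger.ydvsailendar.com # prints only the resolved records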
- sed:
This has been very handy while writing Jenkins pipelines in bash, or bash scripts for building AMIs, where values in files need updating dynamically.
sed -i 's/localhost:8080/nessus.ydvsailendar.com/' httpd.conf # replaces the first occurrence on each line
sed -i 's/localhost:8080/nessus.ydvsailendar.com/g' *.conf # the g flag replaces every occurrence, here in all files ending with .conf
echo $BUILD_URL | sed 's/localhost:8080/nessus.ydvsailendar.com/' # no -i flag when sed reads from stdin
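One trick for values containing slashes, like URLs: sed accepts other delimiters, so you don't have to escape every / (hostnames reused from the examples above):
sed 's|http://localhost:8080|https://nessus.ydvsailendar.com|' httpd.conf # pipe as the delimiter instead of slash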
- cp and mv:
These I use a lot, like everyone, especially when taking backups of config files, moving configs generated from templates to their target locations (httpd, rsyslog, sssd, ldap, docker compose files, k8s secrets files, env files and many more), when creating a project from a boilerplate, for example terraform modules or a nodejs boilerplate, or just when I am tidying up scattered projects in a dir.
cp -r terraform-boilerplate-module azure-pipeline # copy a directory and its files recursively
cp /tmp/sssd.conf /etc/sssd/sssd.conf # sssd conf
COPY . . # the Dockerfile equivalent, copying the build context into the image
mv azure-pipeline archived/ # archiving a project
mv /tmp/*.backup /mnt/data/
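For the config backup case I like stamping the copy with the date; a sketch using the standard httpd path (the .bak naming is just my habit):
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak.$(date +%F) # e.g. httpd.conf.bak.2025-01-31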
- rm:
This I have used mainly to get rid of stuff: for example, on servers where I need to clean up logs via cron on a schedule, to get rid of local projects on my computer, or just to remove files from a project I am working on. Though sometimes it makes you cry after running it, since it removes files or folders from the system directly, so please do be careful, assuming you must have heard the saying: with great power comes great responsibility.
rm -rf backup/* # get fired from job
rm -rf azure-pipeline/*.yaml # remove all yaml files from azure-pipelines
rm -rf *.swp # remove all vim swap files
rm -rf terraform-provider-aws-5.71.0.zip
rm -rf /var/log/syslog/<container_name>/<datetime>.log # delete specific timestamped logs from container
# you must also have seen one good example above in find, combining find with delete
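When the path feels risky, the -i flag is a cheap safety net (backup/ reused from the example above):
rm -ri backup/ # prompts before each removal instead of deleting silently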
- touch:
Whenever I am bored and don't want to use VS Code, I use this to create files; alternatively I use vim to do the same.
touch http.conf.template
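touch also takes several names at once, and shell brace expansion makes that compact (the filenames are just examples, and the logs/ dir must already exist):
touch logs/{app,error,access}.log # creates app.log, error.log and access.log inside logs/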