Most Linux tutorials teach you ls and cd and call it a day. These are the commands that actually separate beginners from people who know what they are doing.

The first time most people open a Linux terminal, they feel the same thing.

A black screen. A blinking cursor. Silence.

No hints. No menu. No suggestions. Just a prompt waiting for you to know what to type.

If you typed ls, something happened. If you typed cd, you moved somewhere. And then you ran out of commands you knew and stared at the cursor again.

Here is the thing nobody tells you: the gap between someone who just started Linux and someone who looks genuinely competent is not years of study. It is maybe thirty commands. Specifically, the thirty commands that actually come up in real work, not the theoretical ones that fill textbooks but never appear in practice.

This article covers twenty of them. By the end you will not just know what they do; you will understand why they work, when to reach for them, and how to combine them into something more powerful than any single command alone.

Let us start.

Before We Get into the Commands, One Thing You Need to Understand

Linux is built around a philosophy called "do one thing and do it well."

Every command is a small, focused tool. grep searches text. sort sorts lines. cut extracts columns. awk processes structured data. None of them do everything, but all of them work together through something called a pipe.

A pipe, the | character, takes the output of one command and feeds it directly as input to the next. This is the single most powerful concept in the Linux command line. Once you understand pipes, you stop thinking about individual commands and start thinking about workflows: chains of small tools that together solve complex problems in a single line.

Keep that in mind as you read through these commands. Most of them are powerful alone. Combined with pipes, they become something else entirely.
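To make the idea concrete before we go further, here is a minimal pipe chain you can run anywhere; the fruit names are invented sample data, and each tool in the chain does exactly one job:

```shell
# printf emits lines, grep filters them, sort orders them,
# and head keeps only the top of the list.
printf 'cherry\napple\nbanana\napricot\n' \
  | grep '^a' \
  | sort \
  | head -2
# Output: apple, then apricot
```

Four tiny tools, none of which knows the others exist, cooperating through pipes. That is the whole philosophy in one line.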

The Navigation Commands You Actually Need

1. ls -lah

You probably know ls. But ls by itself is the tourist version. ls -lah is what you actually type.

-l gives you the long format: permissions, owner, file size, and modification date for every file. -a shows hidden files (files starting with a dot, which Linux hides by default). -h makes file sizes human-readable: 4.2M instead of 4302848.

bash

ls -lah /etc

Run this in /etc and you immediately see every configuration file on the system, who owns it, what permissions it has, and when it was last changed. This single command tells you an enormous amount about a system's state.

Why it matters for security: Hidden files are where malware loves to hide. World-writable configuration files are a misconfiguration waiting to be exploited. ls -lah shows you both.

2. cd -

Everyone knows cd. Almost nobody knows cd -.

A single dash after cd takes you back to wherever you just were, like a back button for the terminal. If you navigated deep into a directory structure to check something and now want to return to where you started, cd - gets you there instantly without retyping the full path.

bash

cd /var/log/nginx
# check some files
cd -
# back where you were

Small thing. Saves time every single day.

3. find with useful flags

The basic find command is taught everywhere. What is not taught is how to make it useful.

bash

# Find all files modified in the last 24 hours
find /var/www -mtime -1
# Find all files larger than 100MB
find / -size +100M -type f
# Find all SUID binaries (critical for security audits)
find / -perm -4000 -type f 2>/dev/null
# Find all world-writable files
find / -perm -0002 -type f 2>/dev/null
# Find and immediately delete all .tmp files
find /tmp -name "*.tmp" -delete

The 2>/dev/null at the end of the security-focused commands redirects error messages to nowhere. Without it, find floods your terminal with "Permission denied" errors for every directory it cannot access. With it, you only see the results you care about.
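You can see what the redirection does with a quick experiment; the nonexistent path below is just a stand-in for any directory you lack permission to read:

```shell
# Listing a path that does not exist writes a complaint to stderr (fd 2).
# Redirecting fd 2 to /dev/null discards that complaint entirely.
ls /no/such/path 2>/dev/null   # prints nothing: the error is discarded

# stdout (fd 1) is untouched, so real results still appear normally:
echo "results still appear" 2>/dev/null
```

The key insight is that errors and results travel on separate streams, so you can silence one without losing the other.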

The Text Processing Commands That Change Everything

This is where Linux starts to feel genuinely powerful. The ability to search, filter, and transform text from the command line is one of the most practical skills you can develop.

4. grep, and how to actually use it

bash

# Basic search
grep "error" /var/log/syslog
# Case-insensitive search
grep -i "failed" /var/log/auth.log
# Show line numbers
grep -n "root" /etc/passwd
# Search recursively through an entire directory
grep -r "password" /var/www/html/
# Show lines that do NOT match
grep -v "DEBUG" application.log
# Show 3 lines before and after each match (context)
grep -B 3 -A 3 "CRITICAL" error.log
# Count how many lines match
grep -c "Failed password" /var/log/auth.log

That last one is worth pausing on. grep -c "Failed password" /var/log/auth.log tells you instantly how many failed SSH login attempts are in your auth log. On an exposed server, that number is often in the thousands. Knowing that in one command is genuinely useful.

5. grep with pipes, where it gets interesting

bash

# Show all currently logged-in users and filter to one
who | grep alice
# List running processes and find a specific one
ps aux | grep nginx
# Check open ports and filter to HTTPS only
ss -tulpn | grep :443
# Count unique IP addresses attempting SSH login
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn | head -20

That last command is worth reading carefully. It chains six commands together to produce a ranked list of IP addresses by how many times they have failed to log into your server via SSH. Six commands. One line. Something that would take significant code in most programming languages.

This is what Linux people mean when they say the command line is powerful.
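To see exactly what each stage contributes, here is the same pipeline run against two fabricated auth.log lines. One caveat: the awk field number depends on the exact message format (it shifts for "invalid user" entries), so check a real line from your own log before trusting $11:

```shell
# Fabricated sample standing in for /var/log/auth.log:
log='Jan 10 10:00:01 host sshd[101]: Failed password for root from 203.0.113.5 port 4242 ssh2
Jan 10 10:00:02 host sshd[102]: Failed password for root from 203.0.113.5 port 4243 ssh2
Jan 10 10:00:03 host sshd[103]: Failed password for root from 198.51.100.9 port 4244 ssh2'

# Stage by stage: grep keeps failed-login lines, awk pulls the IP
# (field 11 in this format), sort groups duplicates together so that
# uniq -c can count them, and sort -rn ranks the counts highest-first.
printf '%s\n' "$log" | grep "Failed password" | awk '{print $11}' | sort | uniq -c | sort -rn
```

Run it and 203.0.113.5 comes out on top with a count of 2, which is precisely the "ranked list of attackers" shape you want from a real log.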

6. awk, for structured data

awk frightens beginners because its syntax looks unusual. But for pulling specific columns out of structured output, nothing beats it.

bash

# Print only the first and fifth columns of ps output
ps aux | awk '{print $1, $5}'
# Print usernames from /etc/passwd (colon-separated, first field)
awk -F: '{print $1}' /etc/passwd
# Print lines where the third column is greater than 1000
awk -F: '$3 > 1000 {print $1, $3}' /etc/passwd
# Calculate the total size of files listed by ls
ls -la | awk '{sum += $5} END {print "Total:", sum, "bytes"}'

You do not need to master awk to use it. The pattern '{print $N}', where N is a column number, covers the majority of real-world use cases.

7. sed, find and replace at scale

sed is a stream editor. Its most common use is replacing text across files or in command output.

bash

# Replace first occurrence of 'http' with 'https' on each line
sed 's/http/https/' config.txt
# Replace ALL occurrences (g flag = global)
sed 's/http/https/g' config.txt
# Replace in place (modifies the actual file)
sed -i 's/old_password/new_password/g' config.php
# Delete all blank lines from a file
sed '/^$/d' messy_file.txt
# Print only lines 10 through 20 of a file
sed -n '10,20p' large_file.log

The -i flag is the dangerous one: it modifies the original file without creating a backup. Always test your sed command without -i first to make sure it does what you expect.
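One safer habit worth adopting: GNU sed lets you attach a backup suffix to -i, so the original file survives the edit. A small sketch on a throwaway file (the path and contents are invented for the demo):

```shell
# Work on a scratch file so nothing real is at risk.
printf 'url=http://example.com\n' > /tmp/demo_config.txt

# -i.bak edits the file in place AND saves the original as .bak (GNU sed)
sed -i.bak 's/http/https/g' /tmp/demo_config.txt

cat /tmp/demo_config.txt       # url=https://example.com
cat /tmp/demo_config.txt.bak   # url=http://example.com  (untouched original)
```

If the replacement goes wrong, the .bak copy is your undo button.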

The System Monitoring Commands

8. htop

top shows you running processes and system resource usage. htop does the same thing but in a way that is actually readable.

bash

sudo apt install htop   # install it first if needed
htop

Inside htop: press F6 to sort by CPU or memory. Press F9 to kill a process. Press F5 to see the process tree. Press q to quit. That is genuinely all you need to know to use it effectively from day one.

9. df -h and du -sh

Two commands. One shows disk space by partition; one shows disk usage by directory.

bash

# Show free and used space on all mounted filesystems
df -h
# Show how much space a specific directory is using
du -sh /var/log
# Show sizes of everything inside a directory, sorted
du -sh /var/log/* | sort -rh | head -10

That last command, sizes of everything in /var/log sorted from largest to smallest, is the first thing to run when a server is running out of disk space. It tells you immediately what is eating your storage.

10. ss -tulpn

ss shows network connections and open ports. The flags stand for: TCP, UDP, Listening, Processes, Numeric.

bash

ss -tulpn

The output shows every port currently open on your machine, which protocol it is using, and which process owns it. Any port you cannot explain is a question worth answering.

bash

# Just show listening TCP ports
ss -tlpn
# Check if a specific port is open
ss -tulpn | grep :22

Why this matters: After setting up any server, running ss -tulpn should be one of the first things you do. If you see ports open that you did not intentionally open, something is wrong.

11. journalctl

On modern Linux systems, journalctl is how you read system logs. It is far more powerful than directly reading files in /var/log.

bash

# Show all logs from the last hour
journalctl --since "1 hour ago"
# Follow a specific service's logs in real time
journalctl -u nginx -f
# Show only error-level and above
journalctl -p err
# Show logs from the last boot
journalctl -b
# Show logs between two times
journalctl --since "2024-01-01 09:00" --until "2024-01-01 10:00"

The -f flag is particularly useful: it follows the log in real time, printing new lines as they are written. It is equivalent to tail -f but with better filtering options.

The Network Commands

12. curl, more than just downloading

Most people know curl downloads files. Fewer people use it as a debugging tool.

bash

# Fetch just the HTTP headers to check the server response without downloading the body
curl -I https://example.com
# Time every stage of a request
curl -w "\nDNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTotal: %{time_total}s\n" -o /dev/null -s https://example.com
# Follow redirects
curl -L http://example.com
# Send a POST request with JSON data
curl -X POST -H "Content-Type: application/json" -d '{"key":"value"}' https://api.example.com
# Download a file and save it with its original filename
curl -O https://example.com/file.zip
# Test if a port is open
curl -v telnet://192.168.1.1:22

The timing command is particularly useful: it shows you exactly how long each phase of an HTTP request takes, which immediately tells you whether a slowness problem is DNS, network connectivity, or server processing.

13. ssh with useful options

bash

# Basic connection
ssh user@192.168.1.100
# Connect on a non-standard port
ssh -p 2222 user@192.168.1.100
# Forward a remote port to your local machine
ssh -L 8080:localhost:80 user@remote-server.com
# Run a single command without opening an interactive session
ssh user@server.com "df -h && free -h"
# Copy your SSH key to a server for passwordless login
ssh-copy-id user@192.168.1.100

Port forwarding deserves a special mention. ssh -L 8080:localhost:80 creates a tunnel: anything sent to port 8080 on your local machine travels through the encrypted SSH connection and comes out at port 80 on the remote server. This is how you securely access web interfaces on servers that are not exposed to the internet.

The File and Permission Commands

14. chmod with the commands that matter

bash

# Make a script executable
chmod +x script.sh
# Set permissions precisely with octal notation
chmod 755 script.sh    # rwxr-xr-x: owner full, others read+execute
chmod 644 file.txt     # rw-r--r--: owner rw, others read only
chmod 600 id_rsa       # rw-------: owner only (required for SSH keys)
# Recursively set permissions on a directory
chmod -R 755 /var/www/html/
# Find and fix overly permissive files
find /var/www -perm 777 -exec chmod 644 {} \;

That last command finds every file with wide-open 777 permissions in your web directory and resets it to sane defaults, a useful one-liner after a messy deployment.

15. tar, the command everyone looks up every time

bash

# Create a compressed archive
tar -czf backup.tar.gz /home/alice/documents/
# Extract an archive
tar -xzf backup.tar.gz
# Extract to a specific directory
tar -xzf backup.tar.gz -C /tmp/restore/
# List contents without extracting
tar -tzf backup.tar.gz
# Create archive with bzip2 compression (smaller but slower)
tar -cjf backup.tar.bz2 /home/alice/

The flags: c = create, x = extract, z = gzip compression, j = bzip2 compression, f = filename follows, t = list. Combine them differently for different tasks.

The Power User Commands

16. history and !

bash

# Show command history
history
# Search history interactively (Ctrl+R then type)
# Press Ctrl+R and start typing; the terminal finds matching commands
# Run the last command again
!!
# Run command number 42 from history
!42
# Run the last command that started with 'ssh'
!ssh
# Clear your history (useful for security reasons)
history -c

Ctrl+R is the one that changes how you use the terminal. Instead of pressing the up arrow fifty times to find a command you ran last week, press Ctrl+R and start typing any part of it. The terminal searches backwards through your history and finds matches instantly.

17. screen and tmux, for sessions that survive disconnection

When you are connected to a remote server over SSH and your connection drops, any running processes die. screen and tmux solve this by creating terminal sessions that persist independently of your SSH connection.

bash

# Start a new named screen session
screen -S mysession
# Detach from the session (leaves it running)
# Press: Ctrl+A then D
# List all running sessions
screen -ls
# Reattach to a session
screen -r mysession
# tmux equivalent
tmux new -s mysession
# Detach: Ctrl+B then D
tmux attach -t mysession

This is essential when running long tasks on remote servers: backups, downloads, compilations, or any process that takes longer than your SSH session might stay alive.

18. xargs, apply a command to a list

xargs takes a list of items from stdin and applies a command to each one. It bridges the gap between commands that produce lists and commands that act on individual items.

bash

# Find all .log files and delete them
find /tmp -name "*.log" | xargs rm
# Find all Python files and search them for a pattern
find . -name "*.py" | xargs grep "import os"
# Download a list of URLs from a file
cat urls.txt | xargs -n 1 curl -O
# Run a command on multiple servers
echo "server1 server2 server3" | xargs -n 1 -I {} ssh user@{} "uptime"
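One caveat worth knowing: plain xargs splits its input on whitespace, so filenames containing spaces break it. Pairing find -print0 with xargs -0 passes NUL-delimited names instead. A quick demonstration in a scratch directory (the path and filename are invented for the demo):

```shell
# Set up a scratch directory containing an awkward filename.
mkdir -p /tmp/xargs_demo
touch "/tmp/xargs_demo/my report.log"

# -print0 and -0 use NUL separators, so the space in the name survives.
find /tmp/xargs_demo -name "*.log" -print0 | xargs -0 rm

ls /tmp/xargs_demo   # empty: the file was deleted correctly
```

Without -print0/-0, xargs would have tried to delete two files named "my" and "report.log" and failed at both.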

19. tee, write to a file and the terminal simultaneously

bash

# Run a command, see the output, AND save it to a file
ping google.com | tee ping_results.txt
# Append to an existing file instead of overwriting
./long_script.sh | tee -a script_output.log
# Save command output with a timestamp
ps aux | tee "processes_$(date +%F_%H%M).txt"

tee is named after the T-shaped pipe fitting in plumbing: it splits the flow in two. Output goes to both your screen and the file simultaneously. Invaluable for logging the output of long-running processes.

20. alias, make your own commands

bash

# Create a shortcut for a long command
alias ll='ls -lah'
alias ports='ss -tulpn'
alias update='sudo apt update && sudo apt upgrade -y'
alias myip='curl ifconfig.me'
# Make aliases permanent by adding them to ~/.bashrc
echo "alias ll='ls -lah'" >> ~/.bashrc
source ~/.bashrc
# List all current aliases
alias
# Remove an alias
unalias ll

Aliases compound over time. Every command you type more than a few times per day is a candidate for an alias. Senior Linux users have dozens of them. Their terminal feels faster because it is: they are doing less typing for the same outcome.
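One limitation: an alias cannot place its arguments in the middle of a command. For that, define a shell function in ~/.bashrc instead. A classic example, sketched here with the conventional name mkcd (not a built-in command):

```shell
# mkcd: create a directory (parents included) and cd into it in one step.
# An alias cannot do this because "$1" has to appear in two places.
mkcd() {
  mkdir -p "$1" && cd "$1"
}

# Usage:
mkcd /tmp/projects/new_app   # creates the path and drops you inside it
```

Functions live alongside aliases in ~/.bashrc and reload the same way, with source ~/.bashrc.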

Putting It All Together (A Real Example)

Here is a scenario that combines several of these commands into something genuinely useful.

Imagine you suspect your Linux server has been accessed by an unauthorized user. Here is how you investigate using only what you have learned in this article:

bash

# 1. Check who has logged in recently
last | head -20
# 2. Check for failed login attempts and rank attackers by frequency
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn | head -10
# 3. Find files modified in the last 24 hours in sensitive directories
find /etc /usr/bin /usr/sbin -mtime -1 -type f
# 4. Check for unexpected SUID binaries
find / -perm -4000 -type f 2>/dev/null
# 5. See what is currently listening on the network
ss -tulpn
# 6. Check running processes for anything unusual
ps aux | grep -v "^root\|^www-data\|^nobody" | head -30
# 7. Save the entire investigation to a file for later review
last | head -20 | tee investigation.txt
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn >> investigation.txt

Twenty minutes of reading this article. The ability to run a basic security investigation on a Linux server.

That is the power of knowing the right commands.

The One Thing That Will Actually Make You Better

Reading about commands is useful. Typing them is what makes them stick.

Open your Linux terminal, or your Kali VM, or your Ubuntu WSL instance, and run every command in this article. Not next week. Today. Right now, while the context is fresh.

The commands that feel awkward the first time will feel natural by the fifth. By the fiftieth, you will type them without thinking. That automaticity, where your hands know what to type before your brain has fully formed the thought, is what people mean when they say someone "knows Linux."

It is not magic. It is just repetition applied to the right things.

What to Learn Next

These twenty commands are a foundation. The next layer is understanding how to combine them into scripts: files full of commands that run automatically, handle errors gracefully, and do in seconds what would take minutes manually.
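As a small taste of that next layer, here is a minimal script skeleton that wraps a few of the commands above into a repeatable report; the filename and output path are arbitrary choices for the example:

```shell
#!/bin/bash
# healthcheck.sh: a tiny report built from commands covered above.
# set -e stops the script at the first command that fails.
set -e

report="/tmp/healthcheck_$(date +%F).txt"

{
  echo "=== Health check: $(date) ==="
  echo "--- Disk usage ---"
  df -h
  echo "--- Top processes by CPU ---"
  ps aux | sort -rn -k3 | head -5
} > "$report"

echo "Report written to $report"
```

Save it, chmod +x healthcheck.sh, and you have turned three one-off commands into a tool you can rerun, schedule, or hand to someone else.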

If you want to go deeper on the Linux command line, file permissions, process management, networking from the terminal, shell scripting from scratch, and system administration fundamentals, I put together a complete beginner's guide that covers all of it with annotated examples and real-world exercises.

No prior Linux experience required. Everything builds from the ground up.

You can find it here: http://bit.ly/3PWKjMM

And if you want to understand why Linux matters so much in cybersecurity specifically, there is a companion guide covering the cybersecurity landscape, every major career path, and a step-by-step roadmap to your first role or certification.

https://bit.ly/4t33fbp

The two books were written to work together. Start with either one.