When a Java or Spring Boot service slows down, the first thing I touch is not the code. It is the log file. Logs tell you the story. They tell you what went wrong, where it happened, and how bad the damage is.

But a log file is usually a giant wall of text, so reading it comfortably takes a few simple Ubuntu commands that make the job easier. Nothing fancy. No heavy tools. Just basic commands that help you cut through the noise.

In this article, I will walk through the commands I use almost every day. To make it real, I am using a sample log file. All examples come from this file.

Sample Log File

Here is a simple Spring Boot style log that I will use in all examples.

2025-01-10 10:01:12.453  INFO 12345 --- [http-nio-8080-exec-1] c.e.a.OrderController      : Received request GET /orders/102
2025-01-10 10:01:12.611  INFO 12345 --- [http-nio-8080-exec-1] c.e.a.OrderService         : Fetching order 102 from DB
2025-01-10 10:01:13.102  WARN 12345 --- [http-nio-8080-exec-1] c.e.a.OrderService         : Slow DB query took 491 ms
2025-01-10 10:01:13.133 ERROR 12345 --- [http-nio-8080-exec-1] c.e.a.OrderService         : Failed to fetch discounts for order 102
java.lang.NullPointerException: Cannot invoke "Discount.getAmount()" because "discount" is null
        at com.example.app.DiscountService.getDiscount(DiscountService.java:42)
        at com.example.app.OrderService.calculatePrice(OrderService.java:85)
        at com.example.app.OrderController.getOrder(OrderController.java:55)

2025-01-10 10:02:04.884  INFO 12345 --- [http-nio-8080-exec-3] c.e.a.PaymentController    : Received POST /payments
2025-01-10 10:02:05.105 ERROR 12345 --- [http-nio-8080-exec-3] c.e.a.PaymentService       : Payment failed for user 77
java.sql.SQLTimeoutException: Query timed out after 30000 ms

2025-01-10 10:04:22.315 DEBUG 12345 --- [http-nio-8080-exec-5] c.e.a.InventoryService     : Checking stock for item ABC123
2025-01-10 10:04:22.897  INFO 12345 --- [http-nio-8080-exec-5] c.e.a.InventoryService     : Stock available for item ABC123

2025-01-10 10:04:31.002 ERROR 12345 --- [http-nio-8080-exec-7] c.e.a.OrderService         : Failed to update order 105
java.lang.RuntimeException: Update conflict for order 105

2025-01-10 10:05:57.431  INFO 12345 --- [http-nio-8080-exec-4] c.e.a.HealthController     : GET /health took 10 ms

This file has everything we need. INFO. DEBUG. WARN. ERROR. Stack traces. Slow queries. Multiple controllers. Perfect for real troubleshooting examples.

1. Using tail to watch logs live

If something is failing right now, I start with tail.

tail -f application.log

This shows new logs as they appear. It feels like watching the app breathe.

To start with the last 20 lines and then keep following:

tail -n 20 -f application.log

If payments are failing in real time, I filter them:

tail -f application.log | grep "Payment"

This will show logs like:

Payment failed for user 77

Simple and effective.
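One gotcha: if you add a second pipe after grep, the middle grep often buffers its output and the lines stop showing up live. GNU grep has --line-buffered for exactly this. A quick sketch, assuming you only care about the failed payments:

tail -f application.log | grep --line-buffered "Payment" | grep "failed"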

2. cat to print the full file

cat prints the file from top to bottom.

cat application.log

I use this for:

  • Small files
  • Quick checks
  • Piping logs into grep or awk

cat application.log | grep "Payment"

This prints:

Payment failed for user 77

Simple and clean.
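cat also shines at its original job: concatenation. If the previous rotated file is still uncompressed (application.log.1, as in the rotation example later), you can read both files as one stream, in chronological order:

cat application.log.1 application.log | grep "ERROR"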

3. tac to print the file in reverse

tac is the opposite of cat. It prints the file from bottom to top.

tac application.log

This is helpful when:

  • The issue just happened
  • The log is huge
  • You want the latest events first

This brings the most recent log:

GET /health took 10 ms

right to the top.

I use this often during outages.
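A combo I reach for a lot: tac plus grep -m 1 gives the most recent match without scanning the whole file from the top.

tac application.log | grep -m 1 "ERROR"

On the sample file, this prints the order 105 failure, because it is the last ERROR in the log.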

4. zcat to view compressed logs

On Ubuntu, rotated logs usually end up compressed as .gz files. Instead of extracting them, use:

zcat application.log.1.gz

This prints the content like cat but for compressed logs.

Search inside gzipped logs:

zcat application.log.1.gz | grep "ERROR"

Very handy when yesterday's logs hold the key.
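zcat also takes several archives at once and streams them as one file. Handy for questions like "how many errors across all the rotated logs", assuming the archives follow the usual application.log.N.gz naming:

zcat application.log.*.gz | grep -c "ERROR"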

5. Using grep to search for errors

Grep is the workhorse. I use it without thinking.

Find all errors:

grep "ERROR" application.log

Output looks like this:

2025-01-10 10:01:13.133 ERROR ...
2025-01-10 10:02:05.105 ERROR ...
2025-01-10 10:04:31.002 ERROR ...

Search for NullPointerExceptions:

grep "NullPointer" application.log

Find slow DB queries:

grep "Slow DB query" application.log

Case insensitive search:

grep -i "timed out" application.log

Count the number of errors:

grep -c "ERROR" application.log

If the count is high, something serious is cooking.
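Grep is also a quick way to narrow down to a time window, because the timestamp is just text at the start of each line. For example, to see only the 10:01 and 10:02 entries from the sample file:

grep "2025-01-10 10:0[12]" application.log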

6. zgrep to search inside compressed logs

If you only want to search and not scroll, use zgrep.

Search for errors:

zgrep "ERROR" application.log.1.gz

Search case insensitive:

zgrep -i "timeout" application.log.2.gz

Count matches:

zgrep -c "ERROR" application.log.1.gz

This is the same as grep, but for .gz files.

Useful when:

  • Logs are archived
  • The bug happened last week
  • You want fast pattern matching without extraction
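Like grep, zgrep accepts several files, and with -c it reports a count per archive, which quickly shows you which night went wrong. Assuming the usual application.log.N.gz names:

zgrep -c "ERROR" application.log.*.gz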

7. Using less to explore giant logs

I use less when I want to scroll inside the log file.

less application.log

Once you see the output, you can use any of the following:

  • /order 102 searches forward and jumps to the first log of that request
  • ?timeout to search backward
  • n goes to the next match
  • Shift + G jumps to the end
  • g jumps to the start

The best part is that less handles even 5 GB logs without crying. VS Code will hang. Less won't.

I also use it to scroll through stack traces like:

java.lang.NullPointerException...
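less can also follow a file like tail -f. Start it with +F, press Ctrl+C to stop following and scroll or search, then press F to follow again:

less +F application.log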

8. zless to scroll inside compressed logs

It works exactly like less, but for compressed .gz files.

You do not need to unzip the file first.

zless application.log.1.gz

Inside zless, all the navigation works the same as less:

  • /error to search forward
  • ?timeout to search backward
  • n for next match
  • Shift + G to jump to end
  • g to jump to start

This is extremely useful when:

  • logs rotate nightly
  • logs are huge
  • older logs hold the root cause

9. Using awk to extract useful fields

Awk is powerful but simple once you get used to it.

Extract timestamps and log levels:

awk '{print $1, $2, $3}' application.log

The output:

2025-01-10 10:01:12.453 INFO
2025-01-10 10:01:12.611 INFO
2025-01-10 10:01:13.102 WARN
2025-01-10 10:01:13.133 ERROR
java.lang.NullPointerException: Cannot invoke
at com.example.app.DiscountService.getDiscount(DiscountService.java:42)
at com.example.app.OrderService.calculatePrice(OrderService.java:85)
at com.example.app.OrderController.getOrder(OrderController.java:55)
2025-01-10 10:02:04.884 INFO
2025-01-10 10:02:05.105 ERROR
java.sql.SQLTimeoutException: Query timed
2025-01-10 10:04:22.315 DEBUG
2025-01-10 10:04:22.897 INFO
2025-01-10 10:04:31.002 ERROR
java.lang.RuntimeException: Update conflict
2025-01-10 10:05:57.431 INFO

The stack trace lines show up too, because awk prints the first three fields of every line, not just the ones that start with a timestamp.

Find operations that took more than 400 ms. The number sits just before the final "ms" field, so the check uses $(NF-1):

awk '/took/ && $(NF-1)+0 > 400' application.log

This gives:

Slow DB query took 491 ms

List unique services. In this log format the logger name is the 7th field:

awk '{print $7}' application.log | sort | uniq

Useful when errors come from only one service.
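A similar trick counts how many lines you have per log level. The level is the 3rd field, and the regex filter skips stack trace lines that do not start with a timestamp:

awk '$3 ~ /^(INFO|WARN|ERROR|DEBUG)$/ {print $3}' application.log | sort | uniq -c

On the sample file this prints:

1 DEBUG
3 ERROR
5 INFO
1 WARN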

10. Using sed to clean or extract logs

Remove all DEBUG logs:

sed '/DEBUG/d' application.log

Replace thread names:

sed 's/http-nio-8080-exec-1/EXEC-1/g' application.log

Extract the full stack trace for a NullPointerException:

sed -n '/NullPointerException/,/OrderController/p' application.log

Sed is great when someone wants only the error part.
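Sed ranges also work nicely for cutting out a time window. This prints everything from the first 10:01 entry up to and including the first 10:02 entry:

sed -n '/10:01:/,/10:02:/p' application.log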

11. Using head for a quick peek

To see how the log file starts:

head application.log

You may see the earliest entries, like:

Received request GET /orders/102

Fast sanity check.

12. Useful command combos that always help

Troubleshooting is rarely one command. It is usually a chain.

Find the slowest operations

grep "took" application.log | sort -k10 -nr | head

You will get something like:

Slow DB query took 491 ms
GET /health took 10 ms

Show errors and their next 3 lines (stack traces)

grep -A 3 "ERROR" application.log

Most common service in error logs

grep "ERROR" application.log | awk '{print $9}' | sort | uniq -c | sort -nr

Output may look like:

2 OrderService
1 PaymentService

This quickly tells you where to focus.
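Another combo I like: errors per minute. The 2nd field is the time, and substr grabs the HH:MM part, so spikes become obvious:

grep "ERROR" application.log | awk '{print substr($2,1,5)}' | sort | uniq -c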

13. Count total lines with wc

wc -l application.log

If the file has 3 million lines, I know I need coffee.

14. Check file size with du

du -sh application.log

If the log is 10 GB, rotation is broken.
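To check the whole family of log files at once, point du at the glob:

du -sh application.log*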

15. Find rotated log files

find . -name "application.log*"

Useful when logs are split like:

  • application.log
  • application.log.1
  • application.log.2.gz

This saves a lot of time.
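You can also let find do the searching for you. This lists which archives mention the SQL timeout from earlier, assuming the usual rotation names:

find . -name "application.log*.gz" -exec zgrep -l "SQLTimeoutException" {} \;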

A small story

Once, a service was slow and everyone was hunting for answers. Dashboards were useless. Metrics were delayed. The magic answer came from a simple command:

grep "took" application.log | sort -k10 -nr | head

# -k10 = Use the 10th field (column) as the sort key.
# Fields are split by spaces by default.

#-n Means numeric sort, not alphabetical.
# -r Means reverse order. So largest number first.

# So sort -k10 -nr means:

#Sort lines by the 10th column,
#as numbers,
#from largest to smallest.

It showed that a single query inside OrderService was taking 2.1 seconds. We fixed the index. The issue vanished.

Sometimes the simplest command gives the sharpest insight.

Troubleshooting is not about tools. It is about knowing where to look. Ubuntu gives you small commands that save time and keep your head clear when the system is burning.

These commands have helped me across many production outages. I hope they save you some time too.