Long Polling

Long polling is a technique used to achieve real-time communication between two systems (typically client-server) over HTTP, even though HTTP is inherently request-response based and stateless.

  1. The client sends a request to the server and waits.
  2. The server holds the request open until:
  • It has new data to send.
  • Or a timeout occurs.
  3. Once the server responds, the client:
  • Processes the data.
  • Immediately sends a new request to wait again.
  4. This cycle repeats.

Unlike regular polling (which sends frequent requests every few seconds), long polling reduces unnecessary requests by waiting for actual data.

Connection Type:

  • Type: Long polling uses a standard HTTP or HTTPS connection (usually HTTP/1.1).
  • Nature: It is unidirectional per request — each HTTP connection is from client to server, though multiple requests simulate a two-way interaction.
  • Not a persistent connection like WebSockets.
  • Each request is stateless, as with all HTTP, but holding it open for a long duration gives near real-time behavior.

How Two Services Communicate Using Long Polling:

Let's say we have Service A (client) and Service B (server):

  1. Service A (Client) sends a request:

GET /updates

  2. Service B (Server) holds the connection open until there is new data (e.g., a database change, a message, etc.).
  3. When data is ready, Service B responds:

{ "event": "new_message", "data": "Hello" }

  4. Service A handles the data, then immediately sends a new request.

This loop gives the illusion of a real-time "push", even though the client is technically "pulling".
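
Service A's side of that loop might look like this sketch, where the URL and the event shape mirror the example above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// event mirrors the JSON payload Service B sends.
type event struct {
	Event string `json:"event"`
	Data  string `json:"data"`
}

// parseEvent decodes one long-poll response body.
func parseEvent(b []byte) (event, error) {
	var e event
	err := json.Unmarshal(b, &e)
	return e, err
}

// poll is Service A's loop: request, process, immediately re-request.
func poll(url string) {
	for {
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(time.Second) // brief back-off on network errors
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			if e, err := parseEvent(body); err == nil {
				fmt.Printf("handling %s: %s\n", e.Event, e.Data)
			}
		}
		// On timeout responses (no data) the loop simply re-polls.
	}
}

func main() {
	poll("http://service-b:8080/updates")
}
```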

Use Cases:

  • Chat applications
  • Notifications
  • Stock price updates
  • Real-time dashboards (when WebSockets aren't available)

Idempotent APIs

An API is idempotent if making the same request multiple times results in the same effect as making it once.

Examples:

  • GET /user/123: always returns the same user data.
  • DELETE /user/123: deleting a user once or multiple times has the same result—the user is deleted.
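
A minimal sketch of why DELETE is idempotent, with a map standing in for the data store:

```go
package main

import "fmt"

// users stands in for a persistence layer in this sketch.
var users = map[string]string{"123": "Alice"}

// deleteUser removes the user if present. Calling it once or many
// times leaves the store in the same state: the user is gone.
func deleteUser(id string) {
	delete(users, id)
}

func main() {
	deleteUser("123")
	deleteUser("123") // repeated call: no error, no extra effect
	fmt.Println(len(users)) // prints 0
}
```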

Request Coalescing

Request coalescing is a performance optimization technique where multiple identical requests are merged (or "coalesced") into a single request to the backend.

Example scenario:

  • Ten users request the same resource at the same time.
  • Rather than hitting the backend ten times, the system sends one request, caches the result, and returns the same response to all ten users.

How idempotency and request coalescing relate:

1. Safe to Merge Because of Idempotency

  • Request coalescing relies on the fact that handling a request multiple times should be safe or lead to the same result.
  • Idempotent operations guarantee that merging requests will not cause side effects or inconsistent state, making it safe to coalesce them.

2. Reduces Load and Prevents Thundering Herd

  • Coalescing helps reduce backend load when there is a burst of identical requests (e.g., cache miss).
  • Idempotent APIs ensure that even if coalescing isn't possible (e.g., due to network retries), repeating requests won't corrupt data or state.

3. Retry-Friendly

  • Both support retry logic:
  • Idempotent APIs allow clients to safely retry failed requests.
  • Request coalescing can batch retries or serve stale data temporarily.

1. Implementing In-Memory Coalescing in Go

Use singleflight (from golang.org/x/sync/singleflight) to deduplicate identical in-flight requests.

import (
    "net/http"
    "golang.org/x/sync/singleflight"
    "fmt"
)

var requestGroup singleflight.Group

func handler(w http.ResponseWriter, r *http.Request) {
    userID := r.URL.Query().Get("id")
    key := fmt.Sprintf("user:%s", userID)

    result, err, _ := requestGroup.Do(key, func() (interface{}, error) {
        // Simulate DB or expensive call
        data := fetchUserFromDB(userID)
        return data, nil
    })

    if err != nil {
        http.Error(w, "Internal Server Error", http.StatusInternalServerError)
        return
    }

    fmt.Fprintf(w, "User Data: %+v", result)
}

func fetchUserFromDB(userID string) string {
    // simulate expensive call
    return "user_data_for_" + userID
}

Pros:

  • Easy to use
  • Handles concurrent deduplication
  • Standard for coalescing logic

2. Manual Map + Channels (More Control)

You can implement your own coalescing layer:

import "sync"

type result struct {
    value interface{}
    err   error
}

var (
    mu       sync.Mutex
    inFlight = make(map[string][]chan result)
)

func coalesce(key string, fn func() (interface{}, error)) (interface{}, error) {
    ch := make(chan result, 1)

    mu.Lock()
    if waiters, ok := inFlight[key]; ok {
        // A request for this key is already running: join it and wait.
        inFlight[key] = append(waiters, ch)
        mu.Unlock()
        return waitForResult(ch)
    }

    // First request for this key: register ourselves and do the work.
    inFlight[key] = []chan result{ch}
    mu.Unlock()

    value, err := fn()

    // Fan the single result out to every waiter, then clear the key.
    mu.Lock()
    for _, waiter := range inFlight[key] {
        waiter <- result{value, err}
        close(waiter)
    }
    delete(inFlight, key)
    mu.Unlock()

    return value, err
}

func waitForResult(ch chan result) (interface{}, error) {
    res := <-ch
    return res.value, res.err
}

  • Works well for read-heavy, duplicate-prone APIs.
  • Good for caching-layer integration (e.g., Redis or an in-memory cache).
  • Use timeouts if requests may hang.
  • Avoid for high-cardinality keys unless memory is tightly managed.

Real-World Use Cases

  • CDN edge servers (e.g. deduplicating origin fetches)
  • Image or video processing APIs
  • SSR frameworks caching expensive templates
  • Microservices calling other services

HTTP Interceptor

An HTTP interceptor on the server side is a piece of middleware logic that intercepts incoming HTTP requests or outgoing responses, allowing you to modify, log, block, or perform additional actions before the request reaches your main handler or after the response is generated.

It's a middleware that wraps around the request/response lifecycle.

Best practices:

  • Interceptors are order-dependent.
  • Keep each interceptor single-responsibility.
  • Use them for cross-cutting concerns only.
  • Avoid heavy computation — they run for every request.

How It Works (Request Flow)

Client → Interceptor 1 → Interceptor 2 → Handler → Response Interceptor(s) → Client

Each interceptor can:

  • Inspect and modify the request
  • Halt the request (return 403, 401, etc.)
  • Allow it to pass to the next handler

Example in Go (net/http)

Step 1: Define an Interceptor as Middleware

func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        log.Printf("Incoming request: %s %s", r.Method, r.URL.Path)
        next.ServeHTTP(w, r) // Pass to the next handler
    })
}

Step 2: Chain Interceptors and Use

func authMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        token := r.Header.Get("Authorization")
        if token != "valid-token" {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func finalHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Request processed successfully")
}

func main() {
    final := http.HandlerFunc(finalHandler)
    http.Handle("/", loggingMiddleware(authMiddleware(final)))
    http.ListenAndServe(":8080", nil)
}

Use Cases

  • Authentication/Authorization
  • Logging (e.g., request/response logs, trace IDs)
  • Rate Limiting
  • Request/Response Modification
  • Metrics collection
  • Error recovery

GZIP

GZIP is a compression format (GNU Zip) used to reduce the size of HTTP responses, making web traffic faster and more efficient. Using GZIP with HTTP helps:

  • Reduce bandwidth usage
  • Speed up page/API load times
  • Improve performance over slow connections

How it works (HTTP layer):

  1. Client sends a request with:

Accept-Encoding: gzip

  2. Server compresses the response body using gzip and replies with:

Content-Encoding: gzip

  3. Client decompresses the response before using it.

Implementation in Go:

import (
    "compress/gzip"
    "net/http"
    "strings"
)

func gzipMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
            next.ServeHTTP(w, r)
            return
        }

        w.Header().Set("Content-Encoding", "gzip")
        w.Header().Del("Content-Length") // body length changes after compression
        gz := gzip.NewWriter(w)
        defer gz.Close()

        gzipWriter := gzipResponseWriter{Writer: gz, ResponseWriter: w}
        next.ServeHTTP(gzipWriter, r)
    })
}

type gzipResponseWriter struct {
    http.ResponseWriter
    Writer *gzip.Writer
}

func (w gzipResponseWriter) Write(b []byte) (int, error) {
    return w.Writer.Write(b)
}
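
Note that Go's http.Client decompresses gzip transparently when it negotiates the encoding itself; when handling it manually, the round trip from steps 2 and 3 looks like this sketch:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gzipBytes compresses b the way a server would before sending a
// Content-Encoding: gzip response body.
func gzipBytes(b []byte) []byte {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	gz.Write(b)
	gz.Close()
	return buf.Bytes()
}

// gunzipBytes is the client side of step 3: decompress before use.
func gunzipBytes(b []byte) ([]byte, error) {
	gz, err := gzip.NewReader(bytes.NewReader(b))
	if err != nil {
		return nil, err
	}
	defer gz.Close()
	return io.ReadAll(gz)
}

func main() {
	body := []byte(`{"message":"hello"}`)
	compressed := gzipBytes(body)
	restored, _ := gunzipBytes(compressed)
	fmt.Println(string(restored)) // prints {"message":"hello"}
}
```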

Tracing

Tracing is a method to track and measure the execution of code across requests — especially useful in distributed systems or microservices. It is used to:

  • Monitor request flow through services
  • Diagnose performance bottlenecks
  • Debug distributed systems
  • Visualize latency (spans) across APIs and DB calls

Key terms:

  • Trace — the entire journey of a request through the system
  • Span — a single operation (e.g., a DB call or a handler)
  • Context — carries trace/span info across function calls

Implementation in Go:

import (
    "context"

    "go.opentelemetry.io/otel"
)

var tracer = otel.Tracer("my-service")

func myHandler(ctx context.Context) {
    ctx, span := tracer.Start(ctx, "myHandler")
    defer span.End()

    // do work
}

Tracing often starts in middleware, attaches a trace to context.Context, and propagates it downstream through:

  • Handlers
  • DB access
  • External service calls

CSRF (Cross-Site Request Forgery)

An attacker tricks a logged-in user's browser into making a malicious request to your server (like changing email or making a payment) without the user's consent.

Example:

While logged in to your bank account, you visit a malicious website that auto-submits this hidden form:

<form action="https://bank.com/transfer" method="POST">
  <input type="hidden" name="amount" value="10000">
</form>
<script>document.forms[0].submit();</script>

Your browser sends cookies automatically, and the server may process it unless protected.

Protection Techniques:

  • CSRF Token: Server sends a hidden random token per session/form, and validates it on each sensitive request.
  • SameSite Cookies: Use Set-Cookie: ...; SameSite=Strict|Lax to prevent cookies from being sent cross-site.
  • Check Referer/Origin Headers

XSS (Cross-Site Scripting)

An attacker injects malicious JavaScript into your page. If rendered, it runs in the victim's browser under your domain.

Example:

User submits:

<script>alert('Hacked!');</script>

If your API or app reflects it without escaping, the script runs in any other user's browser.

Protection Techniques:

  • Escape HTML output (<, >)
  • Content Security Policy (CSP) headers to limit scripts
  • Sanitize inputs in forms and APIs
  • Avoid innerHTML, use textContent

HTTP Cache Headers

Control how clients (browsers, proxies, CDNs) store and reuse responses to reduce redundant requests.

  • Cache-Control: main directive for caching (e.g., no-cache, max-age=3600)
  • Expires: HTTP/1.0 fallback for expiration time
  • ETag: unique content hash; used for conditional GETs
  • Last-Modified: timestamp of the last resource change
  • Vary: tells caches to vary the response by header(s) like Accept-Encoding

Example:

Cache-Control: max-age=3600
ETag: "abc123"
Last-Modified: Mon, 20 Jun 2025 12:00:00 GMT

On future requests, client sends:

If-None-Match: "abc123"

If the content hasn't changed, the server replies:

304 Not Modified

Saves bandwidth & improves performance.
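
A handler sketch of this flow in Go, deriving the ETag from a SHA-1 of the body (any stable hash works):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"net/http"
)

// etagFor derives a stable validator from the response body.
func etagFor(body []byte) string {
	return fmt.Sprintf(`"%x"`, sha1.Sum(body))
}

// cachedHandler serves the body with cache headers, and answers
// 304 Not Modified when the client's ETag still matches.
func cachedHandler(body []byte) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		etag := etagFor(body)
		w.Header().Set("Cache-Control", "max-age=3600")
		w.Header().Set("ETag", etag)
		if r.Header.Get("If-None-Match") == etag {
			w.WriteHeader(http.StatusNotModified) // no body: saves bandwidth
			return
		}
		w.Write(body)
	}
}

func main() {
	http.Handle("/resource", cachedHandler([]byte("hello")))
	http.ListenAndServe(":8080", nil)
}
```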

Client-Side Caching

Browser or HTTP client stores previous responses (pages, images, API data) locally, and reuses them until expired or changed.

  • Faster response
  • Reduces server load
  • Offline support for SPAs/PWAs

How it works:

  1. Server sends a response with cache headers like:

Cache-Control: max-age=300

  2. Browser stores it.
  3. For subsequent requests to the same resource:
  • It serves from cache until 5 minutes pass.
  • After that, it may revalidate using ETag or Last-Modified.

Reverse Proxies (e.g., NGINX)

A reverse proxy sits between clients and backend servers. It handles the client's HTTP request and forwards it to your server, often adding caching, load balancing, or security.

Client ──▶ NGINX ──▶ Your Go API Server

Example NGINX Config:

server {
    listen 80;

    location /api/ {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /static/ {
        root /var/www/html;
        expires 1d;
    }
}

Check out this article to learn the basics of HTTP and API development.

If you're new to Go or looking to deepen your understanding, be sure to check out our other articles on Go concepts here. Don't forget to follow for more insightful content, and if you find it helpful, please like and share to support the community!
