We talk a lot about shifting left in AppSec. Run your scans earlier. Find bugs before they hit production. The math backs it up: NIST and IBM data show fixing a bug in production can cost 15x more than catching it in design.

But here's where we need to be careful: don't add friction without adding value. A scanner that blocks every PR with 300 false positives, or one that takes 20 minutes to run regardless of what changed, trains developers to hate security tooling. That defeats the purpose.

The goal isn't just scanning. It's scanning smarter. And it doesn't have to cost anything.

What I Built

A GitHub Actions workflow that runs on every pull request, covers multiple languages and file types, and generates an aggregated report directly in the PR. Everything is open source. No license fees. No per-seat costs. No procurement process. Just a text editor and a YAML file.
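The overall shape, sketched as a minimal skeleton (workflow and job names here are illustrative, not the real pipeline; the actual file has one job per scanner plus a final aggregation job):

```yaml
name: security-scan
on: pull_request

permissions:
  contents: read
  pull-requests: write   # lets the final job post the aggregated report comment

jobs:
  # One job per scanner; each produces a SARIF file for the report job.
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4, placeholder SHA
      - run: semgrep scan --sarif --output semgrep.sarif

  # Final job depends on every scanner and builds the PR comment.
  report:
    needs: [semgrep]
    runs-on: ubuntu-latest
    steps:
      - run: echo "aggregate SARIF files and post the summary here"
```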

The tools it runs:

  • Semgrep — SAST across Go, C#, Python, JavaScript/TypeScript, PHP, Ruby, Rust, plus OWASP Top 10 rules and secrets detection
  • Gosec — Go-specific security analysis
  • Staticcheck — Go linting with security relevance
  • Brakeman — Rails/Ruby security scanning
  • Checkov — IaC scanning across Terraform, Docker, Kubernetes, and CloudFormation
  • Trivy — Filesystem vulnerability and secret scanning
  • TruffleHog — Secret detection across commit history
  • SQLFluff + TSQLLint — SQL linting and analysis
  • PSScriptAnalyzer — PowerShell security analysis
  • cargo-audit — Rust dependency scanning against the RustSec advisory database
  • dotnet list package --vulnerable — NuGet vulnerability detection across .NET 8, 9, and 10 (all currently supported versions; .NET 6 hit end of life in November 2024 and has been removed)

One design decision worth calling out explicitly: every action in this pipeline is pinned to a full commit SHA rather than a mutable tag. actions/checkout@v4 looks clean, but the tag can be moved to point to different code at any time; the tj-actions compromise in 2025 demonstrated exactly this attack vector.

Pinning to a SHA means what you audited is what runs, permanently. The version is preserved in a comment so Dependabot can track and update the pins automatically. Two exceptions in this pipeline — bridgecrewio/checkov-action and aquasecurity/trivy-action — use @master refs in their official documentation, which is worth flagging: your security tooling is part of your supply chain too, and @master is the least safe reference you can use. Pin those to release SHAs the same as everything else.
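In workflow YAML, the pattern looks like this (the SHA below is a placeholder, not a real checkout release commit):

```yaml
steps:
  # Pinned to a full commit SHA; the trailing comment records the
  # human-readable version so Dependabot can track and update the pin.
  # Placeholder SHA: substitute the actual release commit you audited.
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4.2.2
```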

If you want a tool to help you do this automatically, use frizbee or pinact. Frizbee covers more than just Actions; pinact covers only GitHub Actions.

Here is some regex to find anything that isn't using a SHA:
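A minimal grep along these lines works (a sketch; adjust the path if your workflows live elsewhere):

```shell
# Flag any `uses:` reference that isn't pinned to a 40-character commit SHA.
# Run from the repository root.
grep -rnE 'uses:\s*[^ ]+@' .github/workflows/ | grep -vE '@[0-9a-f]{40}'
```

Note that the command exits nonzero when everything is already pinned, since the final grep finds nothing; invert the check accordingly if you wire it into CI.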

Also note that you can add a Dependabot check for new versions of Actions alongside your other dependency checks:
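A Dependabot configuration for Actions updates looks like this (a standard github-actions ecosystem entry; the weekly interval is a choice, not a requirement):

```yaml
# .github/dependabot.yml
# Dependabot updates the pinned SHA and the trailing version comment together.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```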

Reducing Time and Noise: Only Run What's Relevant

Every job in this pipeline checks what files actually changed in the PR before running. Changed a .go file? Gosec and Staticcheck run. No Go changes? They skip and produce an empty SARIF result so the report still aggregates cleanly. The Rust job uses a two-part check — changed .rs or .toml files AND a Cargo.toml must exist in the repo, since .toml alone is too broad a filter.
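That gating pattern can be sketched like this (job name, step id, and the diff command are assumptions for illustration, not the pipeline's exact implementation):

```yaml
jobs:
  gosec:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4, placeholder SHA
        with:
          fetch-depth: 0   # full history so we can diff against the base branch
      - id: changes
        name: Detect changed Go files
        run: |
          if git diff --name-only "origin/${{ github.base_ref }}"...HEAD | grep -q '\.go$'; then
            echo "go=true" >> "$GITHUB_OUTPUT"
          fi
      - if: steps.changes.outputs.go == 'true'
        name: Run Gosec
        run: gosec -fmt sarif -out gosec.sarif ./...
```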

This matters for two reasons. First, it keeps PRs fast — a frontend-only change doesn't wait for .NET restore and vulnerability scanning. Second, it reduces noise. Developers see findings relevant to what they actually changed, not a wall of existing issues from other parts of the codebase.

All Results in One Place

Every scanner outputs SARIF. The final aggregation job collects all of them, parses severity levels, and generates a markdown summary that posts directly to the PR. Developers see a table: tool name, high/medium/low counts, total findings. High severity items get expanded with file location and rule detail.
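The aggregation step can be modeled roughly like this (a simplified sketch, not the pipeline's actual script; the severity mapping and the "warning" fallback are assumptions, since real SARIF results can inherit their level from the rule definition):

```python
# Map SARIF result levels onto the report's high/medium/low buckets.
SEVERITY = {"error": "high", "warning": "medium", "note": "low"}

def summarize(sarif: dict) -> dict:
    """Count results in one SARIF document, bucketed by severity."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")  # fallback is an assumption
            counts[SEVERITY.get(level, "medium")] += 1
    return counts

def markdown_row(tool: str, counts: dict) -> str:
    """One row of the PR comment table: tool, high, medium, low, total."""
    total = sum(counts.values())
    return f"| {tool} | {counts['high']} | {counts['medium']} | {counts['low']} | {total} |"
```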

One practical note on the location data: code scanners like Semgrep, Gosec, Brakeman, Checkov, Staticcheck, and PSScriptAnalyzer emit precise file, line, and column information, so findings link directly to the relevant code. Dependency scanners — cargo-audit, dotnet vulnerable packages, and TruffleHog — point to the manifest file (Cargo.toml, .csproj, etc.) at line 1, because the finding is about a package version, not a specific line of code. The report renders both correctly. Knowing which category a tool falls into helps developers interpret output without assuming something is broken when they see Cargo.toml:1.

No hunting through separate job logs. No trying to remember which artifact download had the Semgrep results. One comment, full picture.

The Cost

GitHub Actions minutes for public repos are free. All of these tools are open source. The only investment is setup time, and once it's in a shared workflow, every repo that adopts it gets the full pipeline for free.

Compare that to enterprise SAST licensing, which can run into six figures annually, and the ROI conversation becomes straightforward even before you factor in the cost of finding a vulnerability in production. The "we don't have budget for security tooling" argument doesn't hold up when the tooling is free.

Noise Reduction Is a Feature, Not an Afterthought

The fastest way to kill developer trust in security tooling is irrelevant findings. A finding on a file the developer didn't touch, in a language they didn't write, for a PR that changed two lines of CSS, is noise. Noise gets ignored. Ignored findings defeat the purpose of scanning.

The file-change detection, the per-tool scoping, and the single aggregated report all exist to solve this problem. The signal developers receive is scoped to their actual changes. That's what makes security feedback feel like a code review comment rather than a compliance checkpoint.

What This Enables for Developers

When a developer opens a PR and gets specific, actionable findings scoped to their changes, security becomes part of their normal workflow rather than a gate at the end. They fix issues the same way they address code review comments. The feedback loop closes inside the PR rather than weeks later when a finding surfaces from a quarterly scan.

This is the enablement model — security as a teammate in the review process, not a blocker at the end of it.

The workflow is available on my GitHub. In the next post in this series we'll look at what happens after code passes the pipeline: signing container images, generating and attesting SBOMs, and building a full supply chain security posture — again, with nothing but open source tooling.

Here is a link to the file itself.

Originally published at https://www.linkedin.com.