Straight to the point — I recently built an Android security scanner:

🔗 KishorBal/deep-C: Android Deep Link Misconfiguration Detector & Exploitation Tool

Deep-C is designed to identify misconfigured deep links by going beyond just analyzing the AndroidManifest.xml. It also inspects the decompiled source code to detect real-world exploitability.

It now supports analysis of both Java and Kotlin codebases.

The goal is simple: not just to detect exposed deep links, but to validate whether they're actually exploitable and to generate working ADB PoCs.

[Screenshot: Web Dashboard]

[Screenshot: CLI version]

AI Verification (Optional)

Deep-C also includes a pluggable AI verification feature.

By providing your OPENAI_API_KEY and enabling the corresponding option, the findings will be reviewed using OpenAI to help:

  • Reduce false positives
  • Validate exploitability
  • Provide impact classification
  • Offer technical explanations

This makes the scanner more practical for real-world assessments and reporting.
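As an illustration, the review step might be wired up along the lines of the sketch below. The finding fields and the `build_review_prompt` helper are hypothetical, not Deep-C's actual internals, and the OpenAI call itself is shown only as a commented stub since it requires a live API key.

```python
import os

def build_review_prompt(finding: dict) -> str:
    """Format a scanner finding into a review prompt for the LLM (hypothetical schema)."""
    return (
        "Review this Android deep link finding and assess exploitability, "
        "impact severity, and likelihood of a false positive.\n"
        f"Activity: {finding['activity']}\n"
        f"Deep link: {finding['uri']}\n"
        f"Sink: {finding['sink']}\n"
        f"Evidence: {finding['evidence']}"
    )

finding = {
    "activity": "com.example.app.WebViewActivity",
    "uri": "myapp://open?url=",
    "sink": "WebView.loadUrl",
    "evidence": 'host.endsWith("example.com")',
}
prompt = build_review_prompt(finding)

# The actual review call would use the openai package, e.g.:
# from openai import OpenAI
# client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
print(prompt)
```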

🔧 Backend Dependencies

Make sure the following tools are installed and available in your environment:

  • jadx
  • apktool
  • openai (Python package)

🌐 Frontend Requirements

For the web dashboard version:

  • nodejs
  • npm

⚙️ How the Scanner Works

  1. APK Decompilation (Stage 1): the Python script decompiles the APK using Apktool and analyzes the AndroidManifest.xml to identify exported deep link activities.
  2. Source Code Analysis (Stage 2): the scanner then decompiles the DEX files using JADX and locates the corresponding source files (Java/Kotlin).
  3. Pattern-Based Vulnerability Detection: using defined detection patterns, it checks for:
     • WebView loading sinks
     • Query parameter handling
     • Weak host validation (e.g., endsWith, contains)
     • Missing validation flows
  4. Exploit Generation: if vulnerable patterns are confirmed, Deep-C:
     • Extracts affected paths
     • Identifies relevant query parameters
     • Generates ready-to-use ADB PoC commands
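To make the detection and PoC-generation steps concrete, here is a minimal, self-contained sketch of the idea. The regexes, the decompiled snippet, and the package/activity names are illustrative assumptions, not Deep-C's actual detection logic (real detection needs data-flow context, not just regex hits).

```python
import re

# Illustrative patterns: weak host checks and a WebView loading sink.
WEAK_HOST_PATTERNS = [
    re.compile(r'\.endsWith\(\s*"[^"]+"\s*\)'),
    re.compile(r'\.contains\(\s*"[^"]+"\s*\)'),
]
WEBVIEW_SINK = re.compile(r'\.loadUrl\(')

# A toy decompiled snippet: endsWith("example.com") is weak because
# "attackerexample.com".endsWith("example.com") is also true.
DECOMPILED_SNIPPET = '''
String host = uri.getHost();
if (host != null && host.endsWith("example.com")) {
    webView.loadUrl(uri.getQueryParameter("url"));
}
'''

def scan(source: str) -> list[str]:
    """Return a list of matched vulnerability patterns in the source text."""
    issues = []
    if WEBVIEW_SINK.search(source):
        issues.append("WebView loading sink")
    for pat in WEAK_HOST_PATTERNS:
        if pat.search(source):
            issues.append(f"Weak host validation: {pat.pattern}")
    return issues

def adb_poc(package: str, activity: str, uri: str) -> str:
    """Build a standard `am start` PoC targeting an explicit component with a data URI."""
    return (f'adb shell am start -n {package}/{activity} '
            f'-a android.intent.action.VIEW -d "{uri}"')

issues = scan(DECOMPILED_SNIPPET)
poc = adb_poc("com.example.app", ".WebViewActivity",
              "myapp://attackerexample.com?url=https://attacker.example/phish")
print(issues)
print(poc)
```

The PoC uses the stock `adb shell am start` syntax (`-n` component, `-a` action, `-d` data URI), which is what a generated command of this kind typically looks like.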

Role of AI

The AI module acts as a second-layer validator.

After Deep-C identifies potential vulnerabilities:

  • OpenAI reviews the findings
  • Evaluates real exploitability
  • Confirms impact severity
  • Helps eliminate edge-case false positives

This bridges static pattern detection with contextual security reasoning.

Deep-C is still evolving, and I'm continuously refining detection logic and exploit validation.

Feedback, suggestions, and contributions are always welcome.