*Scroll down for the invitation to defend this paper and concept against my criticism, in the comments.
The "Invisible Encryption" That Requires Showing Everyone Your Secrets
A New Paper Promises Unbreakable Image Security. There's Just One Problem.
The Promise
A team of researchers from Beijing University and the University of Surrey just published a paper with an impressive title: "Secure Intellicise Wireless Network: Agentic AI for Coverless Semantic Steganography Communication."


Stripped of jargon, their pitch is this: We can hide your secret photos inside innocent-looking images so well that even smart AI-powered hackers can't find them.
Imagine wanting to send a confidential medical scan to your doctor. Instead of just encrypting it (which might attract attention), you hide it inside a picture of a landscape. Anyone intercepting the transmission just sees a pretty mountain scene. Your doctor, with the right password, extracts the hidden medical image. Invisible encryption.
Sounds clever, right?
The Old Trick With a New Name
Here's what the paper doesn't emphasize: this concept is decades old.
Steganography — the art of hiding information inside other information — has been around since ancient Greece (invisible ink, hidden messages in wax tablets). The digital version emerged in the 1990s.
How traditional digital steganography works (a minimal code sketch follows the list):
- Take a cover image (a photo of a cat)
- Hide secret data in it (by slightly changing pixel values)
- Send the modified image
- Receiver extracts the hidden data
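For the curious, the classic least-significant-bit (LSB) version of this fits in a few lines of Python. This is a generic illustration of the decades-old technique with made-up data, not code from the paper:

```python
import numpy as np

def lsb_embed(cover: np.ndarray, secret_bits: np.ndarray) -> np.ndarray:
    """Hide a bit stream in the least significant bit of each pixel value."""
    flat = cover.flatten()  # flatten() returns a copy, so the cover is untouched
    flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits hidden bits from a stego image."""
    return stego.flatten()[:n_bits] & 1

# Hide one byte (0b10110010) in a stand-in 8-bit grayscale "cat photo"
cover = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
secret_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = lsb_embed(cover, secret_bits)
assert np.array_equal(lsb_extract(stego, 8), secret_bits)
```

Because only the lowest bit of each pixel changes, the picture looks identical to the eye.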
This has been used for years. Terrorists, criminals, and intelligence agencies all know about it. Security researchers have built detectors for it. It's established technology.
What's "new" in this 2026 paper?
Instead of taking an existing photo and hiding data in it, they generate a brand new image from scratch designed specifically to hide your secret. They use modern AI image generators (like Stable Diffusion) to create custom cover images.
That's it. That's the core "innovation."
Plus they add:
- Using simple numeric passwords instead of text descriptions (sketched in code after this list)
- Applying "agentic AI" (we'll get to why that's mostly marketing speak)
- Combining several existing AI techniques (EDICT, ControlNet, IPAdapter)
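To make the numeric-password idea concrete, here is a rough sketch of how a shared number can deterministically drive an off-the-shelf image generator. It uses the public diffusers library and is my illustration of the general approach, not the authors' pipeline, which layers EDICT inversion, ControlNet, and IPAdapter on top:

```python
import torch
from diffusers import StableDiffusionPipeline

SHARED_KEY = 271_828  # the "simple numeric password" both parties agree on

# ~4 GB of weights; needs a GPU with several GB of VRAM to run at usable speed
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The same key always seeds the same initial noise, so sender and receiver
# can regenerate an identical cover image without ever transmitting it.
generator = torch.Generator(device="cuda").manual_seed(SHARED_KEY)
cover = pipe("a mountain landscape at sunset", generator=generator).images[0]
cover.save("cover.png")
```

Even this stripped-down sketch shows where the hardware bill comes from: the model weights alone are measured in gigabytes, and generation takes seconds on a dedicated GPU.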
Why Generate Custom Covers?
It's actually a sensible idea in theory. If you're creating the cover image instead of modifying an existing one, you have more control. You can design it specifically to hide data without creating suspicious statistical patterns that detectors might spot.
But here's the thing: researchers have been doing this since 2017.
- 2017–2019: Generative Adversarial Networks (GANs) were used to create custom cover images for steganography
- 2020–2023: Diffusion models (the tech behind DALL-E, Midjourney, Stable Diffusion) entered the steganography game
- 2024–2025: Multiple papers on "coverless steganography" with diffusion models were published
This paper, published in January 2026, is an incremental variation on a five-year-old concept. They've combined existing techniques in a slightly different way and given it a trendy name ("Agentic AI").
The "Agentic AI" Marketing
The authors make much of their "agentic AI" approach. Sounds futuristic and intelligent, right?
What "agentic AI" actually means in 2025–2026:
- Systems that can make autonomous decisions
- Plan multiple steps ahead
- Use various tools to accomplish goals
- Learn and adapt in real-time
What their "agentic AI" actually does:
IF image contains faces THEN use face-detection tool
IF image is a landscape THEN use scene-segmentation tool
IF wireless signal is weak THEN adjust error correction
This is… basic conditional logic. It's an if-then statement. Every piece of software uses this. Calling it "agentic AI" is like calling a thermostat "an autonomous climate control agent" because it turns the heat on when it's cold.
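To be concrete about how little "agency" is involved, the entire decision layer could be written as a few lines of ordinary Python. The names and the signal threshold below are illustrative placeholders, not identifiers from the paper:

```python
def pick_tool(has_faces: bool, is_landscape: bool, snr_db: float) -> str:
    """Route to a preprocessing tool with fixed rules: no planning, no learning."""
    if has_faces:
        return "face-detection"
    if is_landscape:
        return "scene-segmentation"
    if snr_db < 10.0:  # assumed threshold for a "weak" wireless signal
        return "stronger-error-correction"
    return "default-pipeline"

print(pick_tool(has_faces=False, is_landscape=True, snr_db=25.0))  # scene-segmentation
```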
Real agentic AI systems (like advanced ChatGPT implementations or research robots) reason through complex problems, combine tools in novel ways, and adapt to unexpected situations. This paper's system follows predefined rules.
It's not "agentic" — it's automated.
The Practical Problem: It's Completely Impractical
Now we get to the real issues. Even if this were genuinely novel, there's a crushing problem: you can't actually use it.
What their system requires:
To hide one 512×512 pixel image, you need:
- Stable Diffusion model: ~4GB of data
- ControlNet model: ~1.5GB
- Additional AI models: ~2GB more
- Graphics card (GPU) memory: 8–16GB of VRAM
- Processing time: 10–20 seconds per image
What this means in practice:
Your laptop can't run it. Most laptops don't have the kind of graphics card needed, and even the ones that do would drain their battery in minutes and overheat.
Your phone can't run it. Not even close. Modern phones don't have 8–16GB of dedicated graphics memory, and even if they did, the processing would take minutes and kill the battery.
Your desktop computer probably can't run it either. Unless you have a gaming PC with a high-end graphics card (costing $300–$2,000), you don't have the hardware.
So how are you supposed to use this "secure" system?
The Fatal Flaw: Cloud Processing Destroys the Security
Here's where it gets absurd.
Since most people don't have powerful graphics cards at their desk, and nobody has them in their phone, the only practical way to use this system is to upload your data to a cloud service that has the necessary computing power.
The "secure" workflow becomes:
- You have a secret image (the thing you desperately want to protect)
- You upload it to a cloud service (Google Cloud, Amazon AWS, Microsoft Azure)
- The cloud runs the AI models to hide your secret in a generated cover image
- You download the result
- Now you can securely transmit it over wireless
Do you see the problem?
You just uploaded your unencrypted secret to a third-party cloud provider to "secure" it for wireless transmission.
It's like:
- Being worried about pickpockets
- So you mail your wallet to a stranger
- Who puts it in a locked box
- And mails it back to you
- And now you carry the locked box confidently
- Claiming your money is "secure"
You've protected against the least likely threat (wireless eavesdropping) by creating exposure to a much more likely threat (cloud provider data breach, employee access, government surveillance, hacking).
The Impossible Choice
The authors face an unsolvable dilemma:
Option 1: Process Locally (Securely)
- ✅ Your secret never leaves your device
- ❌ Requires $2,000+ gaming computer per user
- ❌ Doesn't work for remote workers
- ❌ Doesn't work on mobile devices
- ❌ Takes 10–20 seconds per image
- ❌ For a company with 100 employees: $200,000 in hardware costs
Option 2: Process in the Cloud (Practically)
- ✅ Works from anywhere
- ✅ Works on any device
- ✅ No hardware investment needed
- ❌ Your secret is exposed in plaintext to the cloud provider
- ❌ Completely defeats the purpose of security
There is no third option.
If you deploy it securely (local processing), it's economically impossible for most users. If you deploy it practically (cloud processing), the security is fake.
What About Just Using Encryption?
The obvious question: Why not just encrypt the image?
Traditional encryption (what we've been using for decades; a minimal local-encryption sketch follows the list):
- Secret image → Encrypt with password → Encrypted blob → Send → Decrypt with password
- Processing time: Less than 1 millisecond
- Works on: Every device (phone, laptop, desktop, smartwatch)
- Hardware needed: None (built into everything)
- Cost: $0
- Security: Proven for 20+ years (AES-256)
- Cloud exposure: Zero (encrypt locally)
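As a baseline, here is roughly what "just encrypt it locally" looks like with the widely used Python cryptography package (a password-derived key plus AES-256-GCM). The file handling and scrypt parameters are illustrative, not prescriptive:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def encrypt_file(path: str, password: bytes) -> bytes:
    """Encrypt a file on-device with AES-256-GCM; the plaintext never leaves it."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)
    with open(path, "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    return salt + nonce + ciphertext  # send this blob over any channel

def decrypt_file(blob: bytes, password: bytes) -> bytes:
    """Recover the original file from the blob using the same password."""
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```

This runs in a fraction of a second on a phone and never hands the plaintext to a third party, which is the bar the proposed system has to clear.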
This new steganography system:
- Secret image → Upload to cloud or use $2,000 PC → AI processing (10–20 seconds) → Stego image → Send → Receiver processes (10–20 seconds) → Secret recovered
- Processing time: 20–40 seconds total
- Works on: High-end gaming PCs only
- Hardware needed: 8–16GB GPU, or cloud service access
- Cost: $200,000 for 100-user company, OR security failure via cloud
- Security: Unproven (2026 research paper)
- Cloud exposure: 100% if practically deployed
The comparison isn't even close.
But Wait — What About Hiding That You're Hiding Something?
The authors would argue: "Steganography isn't just about security — it's about undetectability. An encrypted file looks suspicious. A vacation photo doesn't."
This is a fair point. There are scenarios where you want to hide the existence of secret communication:
- Journalists in authoritarian countries
- Whistleblowers
- Dissidents under surveillance
But even here, the cloud processing problem ruins everything. If you're so worried about surveillance that you need steganography, why would you upload your secrets to Google Cloud?
And if you're in a high-security situation where you have local hardware (military, intelligence agencies), you probably just use proven encryption rather than experimental AI steganography from a 2026 research paper.
The 1:1 Inefficiency
There's another puzzling limitation: their system hides one image inside one image. That's it.
If you're generating a custom cover image from scratch using AI, you have complete control over every pixel. In theory, you should be able to hide multiple secrets, or pack data much more densely.
Think of it this way: if you're building a custom landscape to hide things, you could hide a house, a factory, a plane, and a warehouse all in the same terrain — caves, mountains, forests, camouflage patterns designed around all the hidden objects.
Instead, they're using all that generative power to hide one single thing at one-to-one ratio. It's like using a cargo plane to transport a bicycle.
Why? The paper doesn't really explain. They claim it's about maintaining "imperceptibility" (not getting detected), but that doesn't explain why the generative capacity is left so underutilized.
The Real Target Audience: Nobody
Who is this for?
Consumers?
- Can't run it (no hardware)
- Don't need it (encryption works fine)
- Won't accept 20-second delays
Businesses?
- Too expensive ($200K for 100 users)
- Can't support remote work
- Legal liability if cloud-processed secrets leak
- Simpler to use corporate VPN
Military/Intelligence?
- Have specialized hardware (could run it locally)
- But would never trust experimental academic code
- Already use proven, tested encryption systems
- Security clearance processes prohibit untested technology
Researchers?
- Only group with local GPUs
- Don't need steganography (publish openly)
- If they need security, use established encryption
There is no realistic user base.
What They Actually Contributed
To be fair to the authors, they did do legitimate technical work:
✅ Combined several AI techniques in a novel configuration
✅ Tested performance across different wireless conditions
✅ Showed improvements over some previous steganography methods
✅ Conducted security analysis against hypothetical eavesdroppers
✅ Published reproducible experiments with open datasets
This is competent engineering and valid research.
But it's not innovative. It's an incremental improvement on five-year-old concepts, wrapped in marketing language ("agentic AI," "intellicise networks," "semantic communications"), with practical deployment problems that are fundamentally unsolvable.
The Deeper Problem: Academic Incentive Misalignment
This paper represents a broader issue in academic computer science: publishing for publication's sake.
The formula:
- Take existing techniques (diffusion models, steganography)
- Combine them in a slightly new way
- Add trendy buzzwords ("agentic AI," "6G," "semantic")
- Show incremental improvement (14% better than baseline)
- Publish in conference proceedings
- Advance your career
✅ Paper gets accepted
✅ Authors get credentials
✅ Citations accumulate
❌ Nobody ever deploys it
❌ Real-world impact: zero
The problem isn't malicious — the authors aren't trying to deceive anyone. It's structural: the academic system rewards publications, not deployability. A paper showing "+14% improvement" is publishable. A paper saying "we couldn't make this work practically" is not.
So we get a steady stream of papers about systems that:
- Work in the lab
- Are technically interesting
- Will never be used
- Solve problems that don't need solving (or are already solved)
The Bottom Line
This paper proposes a security system that:
- Isn't new (coverless generative steganography is 5+ years old)
- Isn't practical (requires $2,000 hardware or cloud processing)
- Isn't secure if practically deployed (cloud exposure defeats the purpose)
- Isn't better than existing solutions (AES-256 encryption works perfectly)
- Isn't deployable (no realistic user base exists)
The core flaw is breathtakingly simple: to "securely" hide your secret image, you must first send it — unprotected — to someone else's computer.
That's not security. That's security theater.
What Should Have Been Done Instead?
If the authors wanted to make a real contribution, they could have:
1. Built lightweight on-device steganography
- 50–100MB models (not 7GB)
- Runs on phones
- 100–500ms processing (not 20 seconds)
- Actually deployable
2. Developed privacy-preserving cloud processing
- Homomorphic encryption (compute on encrypted data)
- Secure multi-party computation
- Zero-knowledge proofs
- Never expose plaintext secrets
3. Focused on the real problem
- Make encryption undetectable (covert channels)
- Develop practical post-quantum cryptography
- Solve actual deployment challenges
Instead, they optimized the wrong thing: making AI-generated steganography 14% better while ignoring that nobody can use it.
Conclusion: When Innovation Defeats Its Own Purpose
The ultimate irony: this paper proposes hiding secrets using a method that requires revealing your secrets to use it.
It's research for research's sake — technically competent, experimentally sound, and practically useless. The kind of work that looks impressive in an academic CV but would never survive contact with reality.
Sometimes the old ways are old because they work. AES-256 encryption is roughly a quarter of a century old, runs on every device, costs nothing, and has never been practically broken.
This new approach is zero years old, runs on almost no hardware people actually own, costs $200,000 to equip a medium-sized company, and requires trusting cloud providers with your unencrypted secrets.
Progress? Not quite.
Postscript: The saddest part is that steganography genuinely has value in some scenarios... But by making it require cloud processing or $2,000 hardware, the researchers have ensured it won't help the people who actually need it.
The people with $200,000 budgets already have security. The people without hardware access can't use this. The solution serves neither group.
It's innovation without purpose — a technical achievement that achieves nothing.
Author's Note: Let's Hear the Defense
This article presents a critical analysis of the practical deployment challenges in the "AgentSemSteCom" paper. However, critiques can be wrong, and I'm genuinely interested in hearing counterarguments.
To the paper's authors (Rui Meng, Song Gao, Bingxuan Xu, Xiaodong Xu, and colleagues):
You've done legitimate technical work combining EDICT, ControlNet, and other components in novel ways. If I've misunderstood your deployment model, target applications, or the problems you're solving, I'd welcome your clarification. Specifically:
- What deployment scenarios address the cloud/local processing dilemma?
- Are there existing specialized environments where this is already viable?
- What timeline do you envision for mobile-capable implementations?
- How do you respond to the security-versus-practicality trade-off?
To steganography researchers and security experts:
- Am I underestimating the value of generative steganography advances?
- Are there high-security applications where 10–20 second latency is acceptable?
- Does the defense-in-depth argument (steganography + encryption) hold merit?
- What's the actual state-of-the-art in 2026 — is this more novel than I credit?
To practitioners in military/intelligence/aerospace:
- Do specialized deployments with local GPU infrastructure already exist?
- Would systems like this ever pass security certification processes?
- Is there real demand for this capability over proven encryption?
I could be wrong. Perhaps there are deployment contexts I haven't considered. Perhaps hardware acceleration is closer than I think. Perhaps the academic contribution is more significant than my critique suggests.
Defend the concept. Challenge the analysis. Provide the missing context.
The comments are open. Let's have the technical debate this paper deserves.