We just open-sourced the steganography engine that powers Phasm. Not the iOS app. Not the Android app. Not the website. Just the core — the Rust crate that handles encryption, embedding, and everything that touches your secrets.
You can read it right now: github.com/cgaffga/phasmcore.
This isn’t a half-hearted gesture or a marketing stunt. The boundary between what’s open and what’s not is deliberate, and it’s the whole point. We drew the line exactly where it matters: at the border between your data and everything else.
Everything That Touches Your Data
Here’s what’s in the open-source crate:
- The custom JPEG coefficient codec — roughly 1,000 lines of pure Rust that reads and writes quantized DCT coefficients without ever touching the pixel domain
- Ghost mode: the J-UNIWARD cost function and syndrome-trellis code embedding that make hidden data statistically invisible
- Armor mode: STDM embedding, Reed-Solomon error correction, Watson perceptual masking for the Fortress sub-mode, and DFT template embedding for geometric resilience
- All encryption: AES-256-GCM-SIV with Argon2id key derivation
- The permutation layer: Fisher-Yates shuffle seeded by ChaCha20 PRNG
- Deterministic math: our FDLIBM-based trigonometry and in-house FFT that ensure identical results across every platform
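The permutation layer in the list above is easy to sketch. The real crate seeds a ChaCha20 PRNG from key material; to keep this example dependency-free, a SplitMix64-style generator stands in for ChaCha20, and all names here are illustrative rather than phasmcore's actual API. The point the sketch makes is the determinism one: a key-derived seed must produce the exact same coefficient ordering on every platform, or extraction walks a different path than embedding did.

```rust
// Sketch of a seeded Fisher-Yates permutation layer.
// NOTE: phasmcore seeds ChaCha20 from key material; this example uses a
// SplitMix64-style generator as a dependency-free stand-in. Names are
// illustrative, not the crate's real API.

/// Minimal deterministic PRNG (SplitMix64) -- a placeholder for ChaCha20.
struct StandInRng(u64);

impl StandInRng {
    fn next_u64(&mut self) -> u64 {
        self.0 = self.0.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.0;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }

    /// Uniform integer in [0, bound), with rejection sampling to avoid
    /// modulo bias -- a detail any auditor would check in a stego tool.
    fn below(&mut self, bound: u64) -> u64 {
        let zone = u64::MAX - (u64::MAX % bound);
        loop {
            let v = self.next_u64();
            if v < zone {
                return v % bound;
            }
        }
    }
}

/// Fisher-Yates shuffle: the same seed yields the same traversal order
/// everywhere, so embedder and extractor visit identical positions.
fn shuffle<T>(items: &mut [T], seed: u64) {
    let mut rng = StandInRng(seed);
    for i in (1..items.len()).rev() {
        let j = rng.below(i as u64 + 1) as usize;
        items.swap(i, j);
    }
}

fn main() {
    let mut order: Vec<usize> = (0..8).collect();
    shuffle(&mut order, 42);
    println!("{:?}", order); // same seed, same permutation, every run
}
```

Swapping the stand-in for a cryptographic generator changes the security of the seed-to-order mapping, not the shape of the algorithm.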
The principle is simple: if it can see your plaintext, it’s open source.
Every function that encrypts, embeds, extracts, or decrypts your message is in the public crate. The key derivation. The random number generation. The error correction. The capacity calculations. All of it. If you want to know exactly what happens to your data between the moment you type a message and the moment it’s woven into a JPEG’s coefficient structure, the answer is in the source code.
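To make the "capacity calculations" step concrete, here is a simplified sketch of one common first-order proxy in JPEG-domain steganography: count the nonzero AC coefficients (the DC term and zero-valued coefficients are typically skipped as carriers), then apply a payload rate. This is my illustration, not phasmcore's actual capacity math; the crate's real numbers also account for STC payload rates and Reed-Solomon overhead, and the names below are invented for the example.

```rust
// Illustrative capacity estimate for JPEG-domain embedding.
// Proxy: count nonzero AC coefficients, then apply an embedding rate.
// A sketch only -- phasmcore's real capacity math (STC rates,
// Reed-Solomon overhead) lives in the open-source crate.

/// One 8x8 block of quantized DCT coefficients in zig-zag order:
/// index 0 is the DC term, indices 1..64 are AC.
type Block = [i16; 64];

/// Count usable carrier positions: nonzero AC coefficients.
/// DC and zero coefficients are skipped, as is conventional.
fn usable_coeffs(blocks: &[Block]) -> usize {
    blocks
        .iter()
        .flat_map(|b| b[1..].iter()) // skip the DC term
        .filter(|&&c| c != 0)
        .count()
}

/// Rough payload in bytes at a given embedding rate in bits per
/// usable coefficient (e.g. a conservative fraction of a bit for STC).
fn capacity_bytes(blocks: &[Block], bits_per_coeff: f64) -> usize {
    let bits = (usable_coeffs(blocks) as f64 * bits_per_coeff) as usize;
    bits / 8
}

fn main() {
    let mut block: Block = [0; 64];
    block[0] = 100; // DC: never counted as a carrier
    for i in 1..17 {
        block[i] = if i % 2 == 0 { -1 } else { 2 };
    }
    println!("usable coefficients: {}", usable_coeffs(&[block]));
    println!(
        "approx capacity at 0.5 bit/coeff: {} bytes",
        capacity_bytes(&[block], 0.5)
    );
}
```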
Everything Else Is a Delivery Mechanism
What stays proprietary:
- The iOS app (SwiftUI)
- The Android app (Jetpack Compose)
- The web frontend (vanilla JS + WASM bridge)
- The cloud sharing service at phasm.link
- The App Clip, Universal Links, and platform integration glue
Why? Because these are packaging. They’re UI, platform adaptation, and distribution. They call into phasmcore — they don’t implement cryptography, and they don’t touch plaintext. The iOS app converts your photo to JPEG bytes using UIImage and hands those bytes to the Rust core. The Android app does the same with BitmapFactory. The web frontend does the same with a canvas element. Then the core does all the security-critical work.
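The division of labor described above can be sketched in a few lines. Everything here is hypothetical: the function name, the signature, and the module layout are invented to illustrate the boundary, not copied from phasmcore's public API. The architectural claim is just that the proprietary layer produces bytes and shuttles them across; everything that sees plaintext sits on the open side.

```rust
// Sketch of the app/core boundary. Names and signatures are
// hypothetical -- they illustrate the division of labor, not
// phasmcore's actual API.

/// Stand-in for the open-source core: everything that sees plaintext.
mod core_sketch {
    /// Hypothetical entry point. In the real crate, encryption,
    /// permutation, and coefficient embedding all happen behind
    /// this boundary.
    pub fn embed(cover_jpeg: &[u8], _message: &[u8], _passphrase: &str) -> Vec<u8> {
        // Placeholder: return the cover unchanged. A real implementation
        // rewrites the JPEG's quantized DCT coefficients.
        cover_jpeg.to_vec()
    }
}

/// Stand-in for a platform app: it only shuttles bytes.
fn app_layer(photo_bytes: Vec<u8>, user_message: &str, passphrase: &str) -> Vec<u8> {
    // The proprietary layer produces JPEG bytes (UIImage, BitmapFactory,
    // or a canvas element in the real apps) and hands the plaintext
    // straight across the boundary without processing it.
    core_sketch::embed(&photo_bytes, user_message.as_bytes(), passphrase)
}

fn main() {
    let stego = app_layer(vec![0xFF, 0xD8, 0xFF, 0xE0], "hello", "correct horse");
    println!("stego output: {} bytes", stego.len());
}
```

The auditable surface is exactly the `embed`-shaped boundary: anything the platform layer could do to your data happens before that call, and it amounts to encoding a photo.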
Someone reviewing the apps can verify that we have a nice share sheet and a dark mode. Someone reviewing the core can verify that we actually implemented AES-256-GCM-SIV correctly, that the RNG is properly seeded, and that the embedding algorithm matches the published papers.
This isn’t “open-washing.” It’s a conscious boundary. The security-critical code is public. The product experience is not.
Trust No One — Including Us
Steganography has a unique trust problem that most software doesn’t face.
A steganography tool's users are hiding secrets. Not preferences, not analytics events, but actual secrets that could endanger them if exposed. The tool they use to hide those secrets must itself be trustworthy. And “trust us, we implemented AES-256” is a claim. An auditable crate with CI, tests, and a git history is evidence.
Security through obscurity is the opposite of what we do. Phasm’s entire premise is that security comes from mathematical guarantees, not hidden implementations. Ghost mode uses J-UNIWARD because its security properties are proven in peer-reviewed literature, not because the algorithm is secret. Armor mode uses STDM because its robustness characteristics are well-understood and published. The algorithms come from academic papers. The code that implements them should be public too.
Peer review is the only path to credibility. Security researchers break things for a living. If they can review the STC implementation, inspect the cost function, trace the RNG seeding, probe the key derivation parameters — and find nothing wrong — that’s worth more than any marketing claim we could make. We’ve published ten deep-dive blog posts walking through the algorithms and design decisions. The source code lets anyone verify that the implementation matches.
Copyleft Was Intentional
We chose GPL-3.0, not MIT. Not Apache. This was deliberate.
MIT and Apache are permissive licenses. They’d allow anyone to fork phasmcore, strip the encryption, weaken the RNG, and ship a backdoored version as a proprietary product. Users of that fork would never know.
GPL-3.0 ensures that any derivative work must also be open source. If someone forks phasmcore and modifies the key derivation, those modifications are visible. If someone builds a competing product on our engine, their users get the same right to inspect the security-critical code that our users do.
This matters because the “freedom to inspect” isn’t just about the original project. It’s about the ecosystem. A permissive license protects the right to build proprietary software. A copyleft license protects the right to verify security claims. For a steganography engine, the second matters more.
Read the Code. Break It If You Can.
We’re not publishing the source code as a formality. We genuinely want people to read it, test it, and try to break it.
Security researchers: Audit the implementation. File issues. Publish your findings. If there’s a weakness in the STC embedding, a bias in the permutation, or a flaw in the key derivation, we want to know — and we want the fix to be public too.
Steganography implementers: Compare approaches. The crate includes working implementations of J-UNIWARD, binary STC with constraint height h = 7, STDM, BA-QIM with Watson masking, and DFT template embedding. Use them as reference. Tell us where we got it wrong.
Privacy advocates: Verify our claims independently. Don’t take our word for it. The about page describes what Phasm does. The source code proves whether it actually does it.
Students and academics: These are reference implementations of algorithms from published papers — Holub & Fridrich’s J-UNIWARD, Filler, Judas & Fridrich’s STC, Watson’s perceptual model. If you’re implementing these for a thesis or a course, having working Rust code alongside the papers might save you some pain.
The repo is at github.com/cgaffga/phasmcore. The blog posts provide context for what each module does and why we made the design decisions we did.
Steganography hides the existence of a message. Open-sourcing the engine proves there’s nothing hidden in the tool itself.
The core is open because the people who need Phasm most — journalists, activists, anyone with something to protect — deserve to verify that the tool protecting them is doing exactly what it claims. No trust required. Just code.
Hidden in plain sight. The message, not the method.