Enterprise Python services increasingly need FIPS 140-3-style behavior: OpenSSL 3 with the FIPS provider active, strict algorithm policy, and predictable crypto across the whole process. When that service is frozen with cx_Freeze and deployed as a relocatable tree (bin/, lib/*.so, zipped packages, vendored wheels), several failures look like random SSL bugs but are actually packaging + linkage + OpenSSL global state.
This article walks through what broke for us, why, how we fixed it (especially soname mangling), and how we validate builds so regressions are caught in CI instead of on customer appliances.
Stack and goals
- Python 3.12+ (we used 3.13) with ssl backed by OpenSSL 3 (libcrypto.so.3, libssl.so.3).
- FIPS via openssl-fips.cnf, FIPS module config, OPENSSL_MODULES, and application code that enables FIPS early.
- cx_Freeze to produce a self-contained directory: entry ELF(s), lib/ full of native extensions and shared libs, plus cryptography and friends under the product prefix.
- Hard requirement: one coherent OpenSSL in the process: FIPS provider, DRBG, digest fetches, and pyOpenSSL/cryptography must all see the same libcrypto.
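One quick way to check whether a running process actually satisfies that requirement is to look at its own memory mappings. Below is a minimal, Linux-only sketch (the helper name openssl_mappings is ours, not part of any library): more than one distinct libcrypto path in the output means the process carries split OpenSSL state.

import ssl  # importing ssl maps _ssl and its libcrypto/libssl into the process
from pathlib import Path

def openssl_mappings():
    """Paths of every libcrypto/libssl object currently mapped into this process."""
    paths = set()
    for line in Path("/proc/self/maps").read_text().splitlines():
        fields = line.split()
        if len(fields) >= 6 and ("libcrypto" in fields[-1] or "libssl" in fields[-1]):
            paths.add(fields[-1])
    return paths

if __name__ == "__main__":
    for p in sorted(openssl_mappings()):
        print(p)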
Challenge 1: cx_Freeze and "mangled" OpenSSL names
Freezers often copy OpenSSL into lib/ under non-canonical filenames and wire dependents to those names, for example:
libcrypto-<suffix>.so.3
libssl-<suffix>.so.3
That is normal packaging behavior: reduce basename collisions and pin exactly which .so was collected.
Inspect dependencies (authoritative for DT_NEEDED):
patchelf --print-needed /path/to/bundle/lib/_ssl.cpython-313-x86_64-linux-gnu.so
Inspect runtime resolution:
ldd /path/to/bundle/lib/_ssl.cpython-313-x86_64-linux-gnu.so
print-needed shows the requested soname strings. ldd shows where the loader maps them today. Both are required for debugging.
Challenge 2: two libcrypto instances in one process
OpenSSL keeps critical state in the loaded libcrypto object: providers, default properties, DRBG, error stacks, etc. Two different files on disk (even byte-identical) mean two mappings unless one is a symlink/hardlink and all references resolve through one path consistently.
We hit the classic split:
- _ssl.so depended on libcrypto-<hash>.so.3 (mangled).
- cryptography's native module (e.g. _rust.abi3.so) depended on libcrypto.so.3 (canonical).
- On the appliance, libcrypto.so.3 and libcrypto-<hash>.so.3 were separate inodes.
Symptoms were misleading:
- ssl.SSLError: unable to fetch drbg deep in code that ultimately calls ssl.RAND_bytes.
- cryptography errors around EVP_fetch / "unsupported" for digests that "should" work under FIPS.
- Logs showing FIPS enabled from one bootstrap path while another path still behaved like a different OpenSSL.
Rule: until ldd agrees across _ssl and cryptography, do not chase FIPS policy bugs.
Quick inode check:
ls -li /path/to/bundle/lib/libcrypto.so.3 /path/to/bundle/lib/libcrypto-*.so.3
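The same inode comparison can be scripted for CI. A minimal sketch (the argument paths are placeholders, exactly as in the command above):

import os
import sys

def same_object(path_a, path_b):
    """True only if both paths resolve to the same device and inode."""
    a, b = os.stat(path_a), os.stat(path_b)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)

if __name__ == "__main__":
    # e.g. python check_inode.py lib/libcrypto.so.3 lib/libcrypto-<hash>.so.3
    if not same_object(sys.argv[1], sys.argv[2]):
        sys.exit("Two distinct libcrypto files on disk: expect split OpenSSL state.")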
Challenge 3: ctypes / env-based loading does not fix the whole tree
A common partial fix is aligning ctypes.CDLL(...) (or an env var like PHOENIX_LIBCRYPTO) to the mangled filename so bootstrap matches _ssl.
That can fix one mismatch, but cryptography does not follow your env var for which libcrypto it uses. It follows DT_NEEDED on its own .so.
So the real invariant is:
Every ELF that touches OpenSSL must list (DT_NEEDED) and resolve to the same libcrypto / libssl object in the bundle.
The mangling fix: approach (post-cx_Freeze, patchelf)
cx_Freeze did not expose a portable "never rename OpenSSL" switch in our setup. The reliable fix is a post-processing step on the frozen output directory, before packaging into deb/rpm/tar or golden images.
Strategy we standardized on: canonical sonames everywhere
Objective:
- Every ELF lists libcrypto.so.3 and libssl.so.3 only.
- Those files exist under the bundle lib/.
- Hashed copies are removed after all NEEDED rewrites, so nothing can load a second OpenSSL by accident.
Discovery: find the mangled names actually referenced
Walk shared objects under the frozen root and collect NEEDED lines, then filter OpenSSL-like entries. A practical pattern:
find /path/to/frozen-root -name '*.so' -print0 \
| xargs -0 -r sh -c 'for f; do patchelf --print-needed "$f" 2>/dev/null; done' _ \
| grep -E '^libcrypto-.+\.so\.3$|^libssl-.+\.so\.3$' | sort -u
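If you would rather keep the audit in Python so it runs in the same CI job as the later gates, the same discovery can be done by shelling out to patchelf. A sketch; the helper name and regex are ours:

import re
import subprocess
from pathlib import Path

MANGLED = re.compile(r"^lib(crypto|ssl)-.+\.so\.3$")

def mangled_openssl_needed(root):
    """Map each .so under root to the mangled OpenSSL sonames it requests."""
    hits = {}
    for so in Path(root).rglob("*.so"):
        out = subprocess.run(["patchelf", "--print-needed", str(so)],
                             capture_output=True, text=True)
        if out.returncode != 0:
            continue  # not an ELF, or patchelf could not parse it
        needed = [n for n in out.stdout.splitlines() if MANGLED.match(n)]
        if needed:
            hits[str(so)] = needed
    return hits

if __name__ == "__main__":
    for path, names in mangled_openssl_needed("/path/to/frozen-root").items():
        print(path, names)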
Also explicitly check anchors:
patchelf --print-needed /path/to/frozen-root/lib/_ssl*.so
patchelf --print-needed /path/to/frozen-root/lib/cryptography/hazmat/bindings/_rust*.so
Ensure canonical libraries exist on disk
If the tree has libcrypto-<hash>.so.3 but packaging also expects libcrypto.so.3, create the canonical file by copying from the mangled artifact (preserving metadata):
cp -a /path/to/frozen-root/lib/libcrypto-<hash>.so.3 /path/to/frozen-root/lib/libcrypto.so.3
cp -a /path/to/frozen-root/lib/libssl-<hash>.so.3 /path/to/frozen-root/lib/libssl.so.3
(Exact hash strings vary per build; treat them as data discovered in the previous step.)
Rewrite DT_NEEDED on every consumer ELF
For each ELF under the frozen root that lists a mangled OpenSSL dependency, replace it:
patchelf --replace-needed libcrypto-<hash>.so.3 libcrypto.so.3 /path/to/some.so
patchelf --replace-needed libssl-<hash>.so.3 libssl.so.3 /path/to/some.so
Patch targets include:
- _ssl
- cryptography native modules (_rust*.so, etc.)
- Any other .so showing mangled OpenSSL NEEDED lines in print-needed
- Top-level frozen executables, if they directly depend on OpenSSL (verify with print-needed)
Order matters: complete all replace-needed operations before deleting hashed .so files, otherwise you can temporarily leave ELFs pointing at files you removed.
Remove hashed copies (only after rewrites)
Once print-needed everywhere shows canonical names:
rm -f /path/to/frozen-root/lib/libcrypto-<hash>.so.3 /path/to/frozen-root/lib/libssl-<hash>.so.3
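Putting the three steps together, here is a sketch of the post-processing pass (not our exact release tooling), assuming patchelf is on PATH and the OpenSSL libraries live directly under lib/:

import re
import shutil
import subprocess
from pathlib import Path

CANONICAL = {"crypto": "libcrypto.so.3", "ssl": "libssl.so.3"}

def needed(elf):
    """DT_NEEDED entries for one file, or [] if patchelf cannot read it."""
    out = subprocess.run(["patchelf", "--print-needed", str(elf)],
                         capture_output=True, text=True)
    return out.stdout.splitlines() if out.returncode == 0 else []

def canonicalize_openssl(root):
    root = Path(root)
    libdir = root / "lib"

    # Step 1: make sure canonical copies exist for every hashed OpenSSL library.
    mangled = {}  # hashed filename -> canonical filename
    for kind, canonical in CANONICAL.items():
        for f in libdir.glob(f"lib{kind}-*.so.3"):
            mangled[f.name] = canonical
            if not (libdir / canonical).exists():
                shutil.copy2(f, libdir / canonical)  # like cp -a, minus ownership

    # Step 2: rewrite DT_NEEDED on every ELF that requests a hashed name.
    elfs = [p for p in root.rglob("*") if p.is_file() and ".so" in p.name]
    for elf in elfs:
        for name in needed(elf):
            if name in mangled:
                subprocess.run(["patchelf", "--replace-needed", name,
                                mangled[name], str(elf)], check=True)

    # Step 3: only after all rewrites succeeded, drop the hashed copies.
    for name in mangled:
        (libdir / name).unlink(missing_ok=True)

if __name__ == "__main__":
    canonicalize_openssl("/path/to/frozen-root")

Note that the canonical copies created in step 1 are included in step 2, so a copied libssl.so.3 that still requests a hashed libcrypto name gets rewritten too.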
Alternative strategy (not our default): unify on mangled names
Some teams instead rewrite libcrypto.so.3 → libcrypto-<hash>.so.3 across the tree and update explicit loads to match. That works, but it fights human expectations and any code that assumes canonical SONAMEs. Canonicalization was the better operational default for us.
OpenSSL configuration: what "good" looks like (and what it does not prove)
A typical strict layout activates FIPS and base only, with global properties like fips=yes, and often omits default.
Important: openssl list -providers in an interactive shell only proves what the CLI saw. The service must be tested with the same exports as runtime:
export OPENSSL_CONF=/path/to/openssl-fips.cnf
export OPENSSL_MODULES=/path/to/openssl/modules
export LD_LIBRARY_PATH="/path/to/bundle/lib:${LD_LIBRARY_PATH}"
openssl list -providers
openssl rand -hex 16
If openssl rand fails under the app-equivalent environment, you likely have a pure OpenSSL/module/path issue before Python even enters the story.
Validations we treat as release gates
1) Linkage audit on the built artifact (not the dev venv)
ldd /path/to/bundle/lib/_ssl*.so
ldd /path/to/bundle/lib/cryptography/hazmat/bindings/_rust*.so
Pass criteria: same resolved libcrypto path and same libssl path.
2) DT_NEEDED audit (catches drift before ldd does)
patchelf --print-needed /path/to/bundle/lib/_ssl*.so
patchelf --print-needed /path/to/bundle/lib/cryptography/hazmat/bindings/_rust*.so
Pass criteria: identical OpenSSL soname strings (per chosen canonical vs mangled policy).
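Gates 1 and 2 are easy to run as one script against the artifact. A minimal sketch; the anchor paths mirror the commands above, and the failure handling is only an example to adapt to your test runner:

import glob
import subprocess

def openssl_deps(elf, tool):
    """OpenSSL-related lines from one tool's view of one ELF, load addresses stripped."""
    out = subprocess.run(tool + [elf], capture_output=True, text=True, check=True).stdout
    return sorted(
        line.split(" (0x")[0].strip()
        for line in out.splitlines()
        if "libcrypto" in line or "libssl" in line
    )

def audit(bundle_lib):
    anchors = (glob.glob(f"{bundle_lib}/_ssl*.so")
               + glob.glob(f"{bundle_lib}/cryptography/hazmat/bindings/_rust*.so"))
    for tool in (["patchelf", "--print-needed"], ["ldd"]):
        views = {elf: openssl_deps(elf, tool) for elf in anchors}
        if len({tuple(v) for v in views.values()}) != 1:
            raise SystemExit(f"OpenSSL linkage drift ({tool[0]}): {views}")

if __name__ == "__main__":
    audit("/path/to/bundle/lib")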
3) In-process checks (startup / self-tests)
We run early probes that mirror real usage:
- ssl.RAND_bytes (DRBG path)
- Representative cryptography digest / fetch paths used in production modules
- Any existing "FIPS verification" module, but interpreted strictly: digest self-checks passing does not imply DRBG is healthy if linkage was split.
4) Packaging hygiene
Mixed-ABI leftovers (e.g. stale cpython-39 .so next to cpython-313) confuse audits and sometimes affect packaging. We treat find + inventory of lib/ as part of the release checklist.
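A short inventory script makes the ABI check mechanical. A sketch, intended to be run with the interpreter that ships in the bundle so the expected cpython tag is the right one:

import re
import sys
from pathlib import Path

def foreign_abi_modules(libdir):
    """Extension modules under libdir whose cpython ABI tag is not this interpreter's."""
    ours = f"cpython-{sys.version_info.major}{sys.version_info.minor}"
    tagged = re.compile(r"\.cpython-\d+")
    return [so for so in Path(libdir).rglob("*.so")
            if tagged.search(so.name) and ours not in so.name]

if __name__ == "__main__":
    for so in foreign_abi_modules("/path/to/bundle/lib"):
        print(f"stale ABI: {so}")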
5) The true litmus test!
We use the cryptography backend to perform two definitive checks:
- SHA-256 (FIPS-approved path) must succeed.
- MD5 (non-approved) must fail.
If MD5 unexpectedly succeeds, the module treats FIPS enforcement as fundamentally broken and raises a fatal error to prevent the service from starting in an insecure state.
# This is an AI-generated example script.
import builtins
import logging
import os
import ssl
import sys

from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import UnsupportedAlgorithm

logger = logging.getLogger(__name__)


def run_fips_post():
    """
    FIPS Power-on Self-Test (POST).
    Validates that the OpenSSL stack is correctly unified and enforcing policy.
    """
    if getattr(builtins, "fips_post_run", False):
        return

    logger.info("Starting FIPS Power-on Self-Test (POST)")
    logger.info(f"Python: {sys.version}")
    logger.info(f"OpenSSL Version: {ssl.OPENSSL_VERSION}")
    logger.info(f"OPENSSL_CONF: {os.environ.get('OPENSSL_CONF', 'NOT SET')}")
    logger.info(f"Main File: {getattr(sys.modules['__main__'], '__file__', 'REPL')}")

    # 1. Early DRBG Probe
    try:
        # Ensures ssl (libcrypto) can access the entropy source/DRBG
        ssl.RAND_bytes(16)
        logger.info("DRBG Probe: Success")
    except Exception as e:
        # We log but don't necessarily crash here, as some POSTs might
        # allow for lazy initialization, but it's a major red flag.
        logger.warning(f"DRBG Probe: Failed (Check linkage/entropy) - {e}")

    # 2. Cryptography Backend Policy Checks
    # Check A: Approved algorithm (SHA-256) must succeed
    try:
        digest = hashes.Hash(hashes.SHA256())
        digest.update(b"fips-test")
        digest.finalize()
        logger.info("FIPS Approved Path (SHA-256): Success")
    except Exception as e:
        raise RuntimeError(f"FIPS Failure: Approved algorithm SHA-256 blocked! {e}")

    # Check B: Non-approved algorithm (MD5) must fail
    try:
        digest = hashes.Hash(hashes.MD5())
        digest.update(b"fips-test")
        digest.finalize()
        # If we reach this line, MD5 worked, which means FIPS is NOT enforced.
        raise RuntimeError("FIPS Failure: Non-approved algorithm MD5 succeeded. Enforcement broken.")
    except UnsupportedAlgorithm:
        logger.info("FIPS Enforcement Path (MD5): Success (Algorithm blocked as expected)")
    except RuntimeError:
        # Re-raise the enforcement failure; the generic handler below must not swallow it.
        raise
    except Exception as e:
        # Some providers raise internal OpenSSL errors instead of UnsupportedAlgorithm
        logger.info(f"FIPS Enforcement Path (MD5): Success (Blocked with error: {e})")

    # Mark POST as complete
    builtins.fips_post_run = True
    logger.info("FIPS POST completed successfully.")


# Run the validation
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    try:
        run_fips_post()
    except RuntimeError as e:
        logger.error(f"FATAL: {e}")
        sys.exit(1)

After linkage was unified: real FIPS policy failures
Once OpenSSL was truly singular, we started seeing legitimate policy errors rather than ghost failures.
Example: SHA-1 for X.509 signing rejected under FIPS (digest not allowed from pyOpenSSL). Fix pattern: use SHA-256 for cert generation and align shared cert helpers so debug servers and production paths do not default to legacy digest choices.
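For reference, the fix pattern looks roughly like this in cryptography terms (a self-signed example with a hypothetical hostname; the point is only that the signing digest is SHA-256, not SHA-1):

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # FIPS-acceptable size
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "debug-server.local")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=30))
    # SHA-256 keeps debug and production certificate paths inside FIPS policy.
    .sign(key, hashes.SHA256())
)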
This is the expected second phase: FIPS is not "load providers," it is "every operation your app performs is allowed in that configuration."
Operational lessons
- Treat the frozen tree like an OS image: linkage and sonames are part of the security boundary.
- Never debug FIPS from a shell that does not match the service's OPENSSL_* and library paths.
- Assume cx_Freeze will rename OpenSSL until proven otherwise per release; keep the post-steps in CI.
- Keep the patchelf step intentionally narrow: we applied OpenSSL canonicalization broadly, but did not generalize the pattern to unrelated hashed libs without a concrete failure mode.
- Automate: artifact-stage ldd / print-needed checks plus a minimal Python smoke import under the FIPS env catch regressions when cx_Freeze or wheels change.
Closing
FIPS with frozen Python fails in ways that look like SSL randomness bugs or cryptography bugs, but the highest-leverage discovery was structural: multiple libcrypto instances caused by mismatched DT_NEEDED strings and duplicate on-disk files.
Post-cx_Freeze patchelf --replace-needed, ensuring canonical libcrypto.so.3 / libssl.so.3, and removing hashed copies after rewrites gave us a single OpenSSL story across _ssl, cryptography, and ctypes-based bootstrap. Only then did FIPS configuration and algorithm policy work become trustworthy and testable.
If you are shipping something similar: start with patchelf --print-needed + ldd on _ssl and _rust. It is the fastest way to separate linkage catastrophes from provider configuration work.