The authors would like to thank the broader cryptography and machine learning research communities for their foundational contributions to this emerging interdisciplinary area. Special recognition goes to the NIST Post-Quantum Cryptography Standardization team for driving the development of PQC algorithms, and to the open-source contributors who have implemented and shared efficient PQC libraries. Their work provided invaluable building blocks for research such as ours.
The advent of quantum computing is driving a critical transition to post-quantum cryptography (PQC), even as the proliferation of machine learning (ML) applications demands more efficient cryptographic implementations. In this paper, we present a systematic literature review of ML applications in quantum-safe encryption and propose a novel ML-enhanced optimization framework for post-quantum cryptographic systems. We systematically analyzed 764 papers from multiple academic databases, identifying key research trends in ML-assisted parameter optimization, privacy-preserving ML using lattice-based cryptography, and neural-network implementations of quantum-resistant algorithms. Building on these insights, we introduce a framework that integrates automated parameter tuning via ML surrogate models, reinforcement learning agents for cryptographic parameter optimization, and a standardized benchmarking suite for evaluation. Experimental results demonstrate latency reductions averaging roughly 35% (and exceeding 40% on embedded platforms) in lattice-based cryptographic implementations while maintaining required security levels. This research contributes to the emerging field of AI-enhanced quantum-safe cryptography, offering practical implications for secure and efficient ML deployment in the post-quantum era.
Keywords: Post-quantum cryptography; Machine learning; Lattice-based cryptography; Parameter optimization; Quantum-safe systems; Neural networks.
The convergence of rapid quantum computing advances and the widespread adoption of machine learning techniques presents both unprecedented opportunities and critical security challenges for modern cryptographic systems. On one hand, large-scale quantum computers could break traditional public-key cryptosystems (such as RSA and ECC) via Shor’s algorithm, potentially rendering current encryption methods obsolete. On the other hand, integrating artificial intelligence with post-quantum cryptographic (PQC) primitives offers promising avenues for enhancing security and performance.
In response to the quantum threat, the National Institute of Standards and Technology (NIST) has initiated the standardization of several PQC algorithms. Notably, lattice-based schemes such as CRYSTALS-Kyber (for key encapsulation) and CRYSTALS-Dilithium (for digital signatures) have emerged as practical choices for real-world deployment, due to their strong security against quantum attacks and acceptable performance. Simultaneously, the demand for privacy-preserving machine learning has driven extensive research into cryptographic protocols that enable secure computation over encrypted data. The intersection of these domains—advanced ML and quantum-resistant cryptography—creates a fertile research area where machine learning techniques can optimize quantum-safe implementations, even as quantum-resistant cryptography in turn protects sensitive ML applications.
Despite recent progress, significant challenges remain in deploying quantum-safe encryption broadly and efficiently. We highlight four key problem areas:

- Performance overhead: PQC schemes are computationally heavier than the classical schemes they replace, straining latency, memory, and energy budgets in practical deployments.
- Parameter selection: choosing cryptographic parameters that balance concrete security against performance is complex and remains a largely manual, expert-driven process.
- Implementation efficiency: fitting post-quantum algorithms into constrained devices (e.g., IoT sensors and smart cards) demands aggressive optimization of runtime and memory footprint.
- Secure ML in a post-quantum world: privacy-preserving ML protocols must migrate to quantum-resistant primitives without incurring prohibitive computational overhead.
In summary, the community faces a dual challenge: how to speed up and optimize post-quantum encryption algorithms for real-world use (through better parameters, implementations, and possibly ML assistance), and how to enable advanced ML techniques to operate securely in a post-quantum world (through improved quantum-safe cryptographic protocols).
This paper aims to address these challenges and gaps. The main contributions of our work are as follows:

- A systematic literature review of 764 publications on machine learning applications in quantum-safe cryptography, distilling key trends and open research gaps.
- A novel ML-enhanced optimization framework that integrates hardware profiling, ML surrogate modeling, constrained optimization, and continuous validation for post-quantum cryptosystems.
- An empirical evaluation of the framework on lattice-based schemes (including the NIST-standardized Kyber and Dilithium) across diverse hardware platforms, demonstrating substantial performance gains at unchanged security levels.
- A standardized benchmarking suite with unified security and performance metrics for evaluating ML-enhanced PQC systems.
- An open-source reference implementation of the framework and benchmarks to support further research and practical adoption.
These contributions lay a foundation for AI-assisted post-quantum cryptography, an emerging research direction with significant practical importance as we move into the post-quantum era.
To ground our work in existing knowledge, we performed a systematic literature review (SLR) following PRISMA guidelines. The goal was to develop both a broad and a deep understanding of how machine learning techniques have been applied (or could be applied) to quantum-safe encryption, and to identify relevant successes and gaps. We defined a search strategy and quality assessment procedure as detailed below.
We employed a comprehensive search strategy spanning multiple major scholarly databases to ensure wide coverage of the relevant literature.
Search Terms: We constructed queries using combinations of keywords covering the two primary domains (cryptography and ML) and their intersection. Key terms included "post-quantum cryptography," "quantum-safe encryption," "lattice-based cryptography," "machine learning," "neural network," "parameter optimization," and "privacy-preserving machine learning."
These terms were combined in various ways to ensure we captured papers discussing ML improving PQC, as well as using PQC to secure ML. We also followed citation trails in highly relevant papers to find other important contributions (snowball sampling).
After obtaining the initial set of 764 papers, we screened titles and abstracts to filter out clearly irrelevant ones (for example, papers where either ML or cryptography was only peripherally mentioned). We then performed full-text reviews of the remaining papers to extract key insights.
Each paper in the review was evaluated using a standardized quality assessment framework to ensure that our survey emphasizes high-quality and impactful research. The criteria included relevance to the intersection of ML and PQC, methodological rigor, the presence of empirical or experimental validation, novelty relative to prior work, and evidence of impact (e.g., citations or practical adoption).
Papers scoring low on these criteria (for example, purely theoretical works without validation, or duplicate/very similar works by the same authors) were noted but given less weight in our analysis of trends. In the end, we distilled the literature down to a core set of studies that provide a representative and comprehensive picture of the current state of ML applications in post-quantum cryptography.
Our literature review reveals that research at the intersection of ML and PQC can be broadly grouped into two primary themes: (a) applying post-quantum cryptography techniques to enable or enhance privacy-preserving machine learning, and (b) using machine learning to improve the performance or security of post-quantum cryptographic algorithms (especially lattice-based schemes). We summarize key findings in these areas below.
A significant body of work focuses on privacy-preserving ML, where cryptographic schemes protect data or models during training and inference. In these systems, PQC ensures security even against quantum-capable adversaries.
One notable example is the POSEIDON system for federated learning. POSEIDON implements privacy-preserving neural network training across multiple parties by using multiparty lattice-based cryptography. In this approach, participants collaboratively train a model without revealing their local data, thanks to a combination of homomorphic encryption and secure multi-party computation that is quantum-resistant. Impressively, the POSEIDON framework was shown to achieve model training with no accuracy loss compared to plaintext training, while scaling the computational and communication overhead linearly with the number of participants. This demonstrates that such privacy-preserving ML can be practical, albeit only with careful engineering and efficient lattice-based operations [2].
Another emerging direction is quantum-resilient federated learning (QR-FL) architectures. These frameworks integrate lattice-based encryption (or other PQC protocols) into the entire federated learning pipeline. Early results report that it is possible to maintain robust defense against quantum attacks without significantly compromising model performance. In some cases, modest accuracy improvements have even been observed when using custom encryption techniques that add beneficial regularization to model training [3]. The key takeaway is that end-to-end secure federated learning with post-quantum encryption is feasible, although there is typically a trade-off in terms of increased computational overhead that needs further optimization.
Beyond federated learning, researchers have explored homomorphic encryption (especially lattice-based schemes like CKKS or variants of CRYSTALS schemes) for enabling inference on encrypted data, and secure multi-party ML for scenarios like distributed prediction. While these approaches have traditionally been extremely slow, ongoing improvements in lattice efficiency, combined with hardware acceleration (GPUs, FPGAs) and ML-driven algorithmic optimizations, are gradually closing the performance gap.
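To make this concrete, the sketch below shows encrypted inference in miniature: a dot product between an encrypted feature vector and plaintext model weights under the lattice-based CKKS scheme. It is a minimal illustration assuming the open-source TenSEAL library is available; the encryption parameters are placeholders for exposition, not a vetted security configuration.

```python
# Minimal sketch: a CKKS-encrypted dot product (e.g., one neuron's
# pre-activation) computed without ever decrypting the input.
# Assumes TenSEAL (pip install tenseal); parameters are illustrative only.
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

features = [0.5, 1.5, -2.0, 3.0]    # client's private input
weights = [0.25, -0.5, 0.75, 1.0]   # server's (plaintext) model weights

enc_features = ts.ckks_vector(context, features)  # encrypt client input
enc_result = enc_features.dot(weights)            # computed on ciphertext
print(enc_result.decrypt())                       # ≈ [0.875]
```

In a real deployment the server would see only `enc_features` and return `enc_result` for the client to decrypt; CKKS arithmetic is approximate, so the decrypted value carries a small amount of noise.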
On the other side of the spectrum, machine learning is being leveraged to enhance the performance of post-quantum cryptosystems themselves. Much of this work centers on lattice-based algorithms, since they are leading candidates in the PQC arena and also relatively heavy in computation.
Parameter Optimization: A common thread in several studies is the use of ML models (such as regression or reinforcement learning) to predict performance characteristics of cryptographic algorithms under different parameters, and to intelligently search for optimal parameter sets. Instead of manually exploring combinations of parameters (e.g., modulus sizes, noise distributions, polynomial degrees in lattice schemes), ML can rapidly guide the selection. For instance, a surrogate model might be trained to estimate the latency or memory usage of a lattice-based encryption given certain parameter values. By using this predictor in an optimization loop, one can efficiently find parameter configurations that minimize runtime or resource usage while still meeting a target security level. This approach effectively balances the trade-offs: it helps avoid over-engineering parameters for security beyond what’s needed (which wastes performance), and conversely ensures chosen parameters are not so aggressive that they undermine cryptographic strength.
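As a flavor of this idea, the sketch below trains a simple regression surrogate on profiled data and uses it to rank candidate parameter sets by predicted latency. The feature set (polynomial degree, modulus bits, noise width) and the synthetic "measurements" are hypothetical placeholders, not data from any real scheme.

```python
# Illustrative sketch: a latency surrogate for lattice-style parameters,
# trained once and then queried cheaply inside an optimization loop.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical profiled data: (polynomial degree n, modulus bits, noise width).
X_profiled = rng.uniform([256, 10, 1.0], [1024, 30, 4.0], size=(300, 3))
# Synthetic stand-in for measured latency (ms); a real profiler supplies this.
y_latency = (0.004 * X_profiled[:, 0] + 0.05 * X_profiled[:, 1]
             + rng.normal(0.0, 0.1, size=300))

surrogate = GradientBoostingRegressor().fit(X_profiled, y_latency)

# Rank 1000 unseen candidate parameter sets by predicted latency in one call.
candidates = rng.uniform([256, 10, 1.0], [1024, 30, 4.0], size=(1000, 3))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("fastest predicted configuration:", np.round(best, 2))
```

In practice a security-level filter (see Algorithm 1 below) is applied alongside the latency ranking, so that only configurations meeting the target security level are retained.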
Memory and Efficiency Optimization: Besides parameter tuning, ML and algorithmic insights have been applied to streamline implementations. For example, researchers have developed compact implementations of lattice-based signature schemes. The Module-Lattice-Based Digital Signature Algorithm (ML-DSA, the NIST-standardized form of CRYSTALS-Dilithium) is one scheme where careful optimization (guided by profiling and sometimes automated strategies) has led to significant memory footprint reductions in implementations [4]. This is crucial for embedding post-quantum signatures in devices like smart cards or IoT sensors. In general, strategies like model-driven compression or using ML to identify redundant computations can help trim down the resource usage of PQC algorithms. There is also evidence of using ML to detect and mitigate side-channel leaks in lattice implementations, by training models to recognize patterns in execution that correlate with secret data and then modifying algorithms to remove those patterns.
Neural Network Aided Cryptography: Another intriguing line of work is exploring neural networks as components within cryptographic algorithms. Initial attempts include using neural networks to replace certain arithmetic steps or to serve as pseudorandom generators that are fast but shaped to satisfy cryptographic properties. Some researchers have tried to construct neural network models that approximate the behavior of cryptographic primitives (for example, learning the error distribution in lattice encryption to optimize noise parameters). While this area is nascent, and any neural component must be carefully verified for security, it opens up a novel design space where learned models and cryptography co-exist.
Overall, the current state of research suggests synergy between ML and PQC: ML can significantly assist in optimizing post-quantum schemes (making them faster or more lightweight), and conversely PQC is becoming an invaluable tool to secure advanced ML workflows. However, most existing studies address either one side or the other; a fully integrated approach (where ML and PQC continually support each other’s objectives) is still largely unexplored and forms the core motivation for our proposed framework.
Building on the insights from our review, we propose a comprehensive framework that systematically integrates machine learning into the optimization of post-quantum cryptographic systems. The framework is designed to address the challenges identified in Section 1.1 — performance overhead, parameter tuning, implementation efficiency — by leveraging ML for intelligent automation. At a high level, the framework takes a target cryptographic scheme and use-case, and produces an optimized configuration (and implementation adjustments) that improves performance on a given hardware platform while preserving security requirements.
Framework Overview: The architecture of our ML-enhanced optimization framework is illustrated in Figure 1 (conceptually) and comprises four main components working in a pipeline. Each component addresses a specific aspect of the optimization problem:
| Framework Component | Description and Role |
|---|---|
| Offline Profiler | Collects detailed performance data (latency, memory, energy) for various PQC algorithms and parameter settings on the target hardware platform. Builds a performance database for model training. |
| ML Surrogate Models | Trains predictive models (e.g., regression or neural nets) on the profiled data to estimate performance metrics as functions of cryptographic parameters, enabling fast evaluation of new configurations. |
| Constrained Optimizer | Performs guided search over the cryptographic parameter space, using surrogate model predictions. Optimizes for performance objectives (latency, etc.) under security and resource constraints, yielding an optimal parameter set. |
| Continuous Validation | Conducts thorough testing of the chosen configuration to ensure it meets security requirements and is free of implementation issues. Incorporates security testing (e.g., side-channel analysis) and feeds results back to improve the models. |
By combining these components, the framework automates what would otherwise be manual and arduous tuning of post-quantum algorithms. The use of ML enables adaptive optimization: as hardware or requirements change, the framework can re-profile and re-tune the cryptosystem accordingly.
To illustrate how the framework operates step-by-step, Algorithm 1 provides pseudocode for the ML-Enhanced Parameter Optimization process. In this algorithm, the goal is to find an optimized set of parameters P* for a given cryptographic primitive that meets a required security level S on hardware platform H, and is tailored for a particular ML task or scenario T (if applicable).
Algorithm 1: ML-Enhanced Parameter Optimization
Input : Target security level S, hardware platform H, ML task T (if applicable)
Output: Optimized parameter configuration P*
1. Profile the hardware platform H to build performance database D
-- (Run cryptographic benchmarks on H for various parameter settings; store results in D)
2. Train surrogate performance models M using the data in D
-- (M can predict latency, memory, energy for given params on H)
3. Define the search space Ψ for cryptographic parameters
-- (e.g., range of key sizes, polynomial dimensions, etc. to explore)
4. F ← ∅ (initialize the set of feasible solutions)
5. for i = 1 to max_iterations do
6. Generate a candidate parameter set P_i ∈ Ψ (using search strategy)
7. Predict performance metrics (latency, memory, energy) for P_i using M
8. Evaluate security = SecurityLevel(P_i)
-- (analytically determine whether P_i meets the target security S, e.g., ≥ S bits of security)
9. if security ≥ S then
10. Add P_i to feasible set F
11. end if
12. Update the search strategy (adjust how new P_i are chosen, e.g., via reinforcement learning feedback or evolutionary algorithm update)
13. end for
14. Select P* = argmin_{P ∈ F} Objective(P, M, T)
-- (choose the configuration in F that minimizes the performance objective, e.g., latency or a weighted cost combining metrics, possibly task-specific)
15. Validate the chosen P* through full cryptographic testing on H
-- (check correctness, security margin, run further experiments if needed)
16. return P*
In more general terms, the above algorithm can leverage different optimization techniques. For example, one could use Bayesian optimization in steps 6-12 to pick new candidates based on past evaluations, or a reinforcement learning agent that treats parameter selection as a sequential decision problem (with a reward for finding faster configurations). The algorithm terminates after a fixed number of iterations or when improvements plateau. The result is an optimized parameter set P* that can then be deployed in the cryptographic system.
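For concreteness, a minimal Python skeleton of this loop is sketched below. Both helper functions are hypothetical stand-ins: in practice, `predict_latency` would be the trained surrogate model M, and `security_bits` would call an analytical lattice security estimator.

```python
# Minimal skeleton of Algorithm 1; the helpers are illustrative placeholders,
# not real performance or security estimators.
import numpy as np

rng = np.random.default_rng(1)
TARGET_SECURITY = 128   # S: required bits of security
MAX_ITERATIONS = 1000

def predict_latency(params):
    # Stand-in for the trained surrogate models M (step 7).
    n, q_bits, sigma = params
    return 0.004 * n + 0.05 * q_bits

def security_bits(params):
    # Stand-in for an analytical security estimate of P_i (step 8).
    n, q_bits, sigma = params
    return 0.2 * n + 2.0 * sigma  # illustrative formula only

feasible = []  # F: the feasible set (step 4)
for _ in range(MAX_ITERATIONS):
    p_i = rng.uniform([256, 10, 1.0], [1024, 30, 4.0])  # sample from Ψ (step 6)
    if security_bits(p_i) >= TARGET_SECURITY:           # steps 8-11
        feasible.append((predict_latency(p_i), p_i))

# Step 14: choose the feasible configuration minimizing the objective.
best_latency, p_star = min(feasible, key=lambda t: t[0])
print(f"P* = {np.round(p_star, 2)}, predicted latency ≈ {best_latency:.2f} ms")
```

The sampling line would be replaced by whatever search strategy is in use (grid, evolutionary, Bayesian, or RL-driven), and step 12's strategy update would adjust how the next P_i is drawn.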
It’s worth noting that although the algorithm focuses on parameter selection, a similar approach could be used for other optimization aspects, such as algorithmic variations or hardware-specific tuning (e.g., whether to use certain FFT implementations in a lattice scheme). The design is modular and can accommodate additional objectives (for instance, minimizing energy might be added alongside latency in the objective function for battery-powered devices).
We implemented the proposed ML-enhanced optimization framework and evaluated it on a diverse set of scenarios to validate its effectiveness. In this section, we describe the experimental setup and discuss the results, including performance improvements achieved and comparisons to baseline approaches.
Hardware Platforms: We tested our framework on a range of hardware representative of common deployment targets for cryptography: an Intel Core i7 (desktop/server class), an ARM Cortex-A78 (mobile/edge class), and an ARM Cortex-M7 (embedded microcontroller class).
Cryptographic Primitives: We focused on lattice-based post-quantum cryptographic schemes that are either standardized or finalists in the NIST PQC process: CRYSTALS-Kyber at the Kyber-768 parameter level (key encapsulation) and CRYSTALS-Dilithium at the Dilithium-3 level (digital signatures).
Each combination of hardware platform and cryptographic primitive provides a test case. For each test case, our framework’s optimizer was tasked with tuning that primitive’s parameters (within a reasonable range around the default recommended parameters) to optimize performance.
Machine Learning Setup: The surrogate models were implemented as simple feed-forward neural networks (three hidden layers) for each of latency, memory, and energy prediction. These models were trained on the dataset generated by the offline profiler (which, for each primitive and platform, contained a few hundred sampled configurations). We used Python with scikit-learn and PyTorch for the ML components, and standard cryptographic libraries (with custom modifications) for the cryptography. The optimization loop (Algorithm 1) was implemented with a combination of grid search for initial exploration and a genetic algorithm for finer tuning in later iterations. Each experiment (per primitive and platform) was allotted up to 100 iterations of optimization; however, we found that in most cases the algorithm converged to a good solution within 40–50 iterations.
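As one concrete rendering of the described setup, the sketch below defines a feed-forward surrogate with three hidden layers in PyTorch and a basic training step. Layer widths, the synthetic data, and hyperparameters are illustrative placeholders rather than the exact values used in our experiments.

```python
# Minimal sketch of a three-hidden-layer surrogate that maps a parameter
# vector to one predicted metric (latency, memory, or energy).
import torch
import torch.nn as nn

class SurrogateNet(nn.Module):
    def __init__(self, n_params: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted metric, e.g., latency in ms
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SurrogateNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder profiled data: X holds parameter vectors, y measured latencies.
X = torch.rand(256, 3)
y = torch.rand(256, 1)
for _ in range(100):  # simple full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```

One such network is trained per metric and per platform, so each model only has to learn a relatively smooth, low-dimensional performance landscape.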
The ML-enhanced optimization yielded tangible improvements in performance. Table 2 presents a summary of the results, showing the percentage improvement achieved by our framework’s optimized configurations compared to the default (baseline) parameter configurations for each cryptographic scheme on each hardware platform. Improvements are shown for three key metrics: latency (cryptographic operation time), memory usage, and energy consumption per operation. Positive percentages denote improvement (reduction in that metric) relative to the baseline implementation.
| Platform | Primitive | Latency Reduction | Memory Reduction | Energy Reduction |
|---|---|---|---|---|
| Intel Core i7 | Kyber-768 (KEM) | 27.5% | 16.8% | 22.8% |
| Intel Core i7 | Dilithium-3 (Signature) | 29.1% | 13.3% | 22.8% |
| ARM Cortex-A78 | Kyber-768 (KEM) | 38.3% | 23.7% | 29.0% |
| ARM Cortex-A78 | Dilithium-3 (Signature) | 35.2% | 18.1% | 31.2% |
| ARM Cortex-M7 | Kyber-768 (KEM) | 41.9% | 29.2% | 38.2% |
| ARM Cortex-M7 | Dilithium-3 (Signature) | 40.9% | 30.2% | 36.3% |
Table 2 shows that across all cases, our ML-optimized configurations significantly outperformed the baseline. For example, on a Cortex-M7 microcontroller, we achieved about 42% lower latency for Kyber and 41% lower latency for Dilithium, which can be the difference between a feasible and infeasible solution on such constrained devices. Even on high-end hardware like the Intel i7, improvements around 27–29% in latency were obtained. Memory usage was also reduced (by 13–30%), which is important for fitting these algorithms into limited memory (e.g., IoT devices often have tens of kilobytes of RAM). Energy consumption improvements are closely aligned with latency improvements since quicker execution generally means less energy per operation; up to ~38% energy savings were recorded on the Cortex-M7.
These performance gains are achieved without sacrificing security: all optimized configurations were validated to maintain the target security level (128-bit quantum security) and to pass all cryptographic verification tests. In essence, the framework found ways to trim inefficiencies — for instance, by selecting slightly smaller parameters that are still safe, or by identifying algorithm settings that leverage hardware characteristics better (like picking parameters that allow more vectorization on a CPU).
To further illustrate the effect, consider Kyber on Cortex-M7: the baseline implementation (with recommended parameters) might take, say, 5 milliseconds for a key encapsulation operation. Our optimized version, by choosing a smaller polynomial degree and adjusting noise distribution (just enough to still be secure against known attacks), brought this down to ~2.9 ms, a 41.9% speedup, while also using 29% less RAM during computation. This kind of improvement can make PQC viable on microcontrollers where it previously might have been too slow or memory-hungry.
A crucial factor in the framework’s success is the accuracy of the ML surrogate models. If the models poorly predict performance, the optimizer might make wrong decisions. We evaluated the prediction accuracy of our models on a hold-out test set of data points (configurations not seen during training). The mean absolute percentage error (MAPE) of the predictions remained low, staying within roughly 5% for the latency, memory, and energy models.
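For reference, MAPE over n held-out configurations is defined as

MAPE = (100 / n) · Σᵢ |yᵢ − ŷᵢ| / yᵢ,

where yᵢ is the measured value of a metric for configuration i and ŷᵢ is the surrogate’s prediction.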
These low error rates indicate that the surrogate models were indeed able to learn the performance landscape of each cryptographic primitive quite well. For example, when the model predicted that a certain parameter choice would yield a latency of 4.0 ms, the actual measured latency was typically in the range [3.8, 4.2] ms. This high fidelity gives confidence that the optimizer’s decisions were based on reliable estimates, thus avoiding the need for exhaustive real benchmarking of every candidate. In scenarios where the model indicated a very promising configuration, we did double-check with actual measurements during validation, and in all cases the measurements aligned closely with the predictions.
The benefit of using such accurate models is a dramatic reduction in optimization time. Instead of running, say, 1000 real experiments on hardware, the optimizer could examine 1000 configurations in simulation (via the model) in a matter of seconds, and only test a handful of top contenders on the actual device.
We compare our ML-enhanced optimization approach to other parameter tuning methods for cryptographic implementations, to highlight the advantages of our framework. The methods compared include:
| Approach | Search Method | Latency Improvement | Memory Improvement | Automation | Reproducibility |
|---|---|---|---|---|---|
| Manual Tuning | Human Expert Heuristics | Baseline (0%) | Baseline (0%) | Manual | Low (expert-dependent) |
| Grid Search | Exhaustive Enumeration | 12.3% | 8.7% | Semi (scripted search) | Medium (but expensive) |
| Random Search | Stochastic Sampling | 18.9% | 11.2% | Semi (scripted search) | Medium |
| Our Framework | ML-guided Optimization | 31.4% | 19.8% | Full | High |
From the comparative results:

- Manual expert tuning serves as the reference point; it can be effective in skilled hands, but it is slow, expert-dependent, and difficult to reproduce.
- Grid search delivered a 12.3% latency improvement over the baseline, but exhaustive enumeration becomes prohibitively expensive as the parameter space grows.
- Random search reached an 18.9% latency improvement at lower cost than grid search, yet its undirected sampling wastes many evaluations on poor configurations.
- Our framework achieved the largest gains (31.4% latency, 19.8% memory) while remaining fully automated and highly reproducible.
In summary, the ML-guided approach not only outperforms other strategies in terms of optimization quality, but it also scales better. It turns the problem of optimization into one of model training plus directed search, which is far more efficient than blind brute force. This demonstrates the practical value of combining ML with cryptographic engineering.
One of the contributions of this work is a standardized evaluation suite for benchmarking ML-enhanced post-quantum cryptographic systems. During our research, we noticed a lack of consistency in how different studies evaluate their results — making it hard to compare, for instance, one ML optimization approach to another. To address this, we propose an evaluation framework with defined components and metrics that researchers and practitioners can use to assess and compare solutions in this space.
The evaluation suite consists of a set of benchmarking components covering hardware, cryptography, and ML usage scenarios: representative hardware profiles (desktop/server CPUs, mobile/edge processors, and embedded microcontrollers), a portfolio of post-quantum primitives (key encapsulation and digital signature schemes at multiple security levels), and ML workload scenarios (e.g., federated learning and inference over encrypted data).
By having this variety, any new optimization framework or cryptographic library can be tested against a matrix of conditions: various hardware × algorithm × workload combinations. This ensures a comprehensive evaluation rather than a single point result.
To compare results meaningfully, we define a unified set of metrics that cover both security and performance aspects. Table 4 outlines these metrics, divided into Security Metrics and Performance Metrics, which should be measured and reported for each benchmark scenario:
| Metric Category | Specific Metrics |
|---|---|
| Security Metrics | Estimated security level in bits (classical and quantum); security margin against known attacks; side-channel resistance (results of leakage testing) |
| Performance Metrics | Latency per operation; throughput; memory usage (RAM and code size); energy per operation; key, ciphertext, and signature sizes (communication overhead) |
The above metrics provide a holistic view. For instance, a new algorithm might excel in security metrics but lag in performance; these metrics ensure we capture that trade-off. Or an ML-optimized scheme might improve latency and energy at the cost of a slight increase in memory usage—reporting all metrics helps identify such shifts.
By standardizing the evaluation in this way, researchers can compare results from different papers or products more directly. For example, if one paper reports a 130-bit quantum security at 5 ms latency on Cortex-M4, and another reports 128-bit at 3 ms on Cortex-M4, we can reasonably compare them knowing they’re measured on similar scales. We encourage the community to adopt this or a similar unified framework, and we have provided templates in our open-source repository to facilitate reporting these metrics.
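As an illustration, a single benchmark scenario might be reported as a simple structured record like the one below. Field names and most values are our own placeholders for exposition, not a finalized schema (the Kyber-768 ciphertext size, 1088 bytes, is the scheme’s actual value).

```python
# Hypothetical example of a filled-in report for one benchmark scenario.
benchmark_report = {
    "platform": "ARM Cortex-M4",
    "primitive": "Kyber-768 (KEM)",
    "workload": "encrypted inference",
    "security_metrics": {
        "quantum_security_bits": 128,
        "known_attack_margin": "meets target under current lattice estimates",
        "side_channel_tested": True,
    },
    "performance_metrics": {
        "latency_ms": 3.0,
        "memory_kb": 48,            # placeholder measurement
        "energy_mj_per_op": 0.9,    # placeholder measurement
        "ciphertext_bytes": 1088,   # Kyber-768 ciphertext size
    },
}
```

Reporting every field for every scenario makes cross-study comparisons, like the Cortex-M4 example above, straightforward.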
Our work opens up several avenues for future exploration. In this section, we discuss some promising directions and necessary efforts that could further advance the integration of machine learning with quantum-safe cryptography.
Hybrid Quantum-Classical Optimization: As quantum computing matures, there may be opportunities to use quantum algorithms alongside classical ML to optimize cryptographic systems. One idea is to employ the Quantum Approximate Optimization Algorithm (QAOA) to assist in finding optimal cryptographic parameters. QAOA is a quantum algorithm designed for solving combinatorial optimization problems and could, in theory, explore the parameter space of cryptographic schemes in ways classical algorithms cannot. For example, QAOA might suggest novel parameter sets or even new algorithmic constructions that minimize certain cost functions subject to cryptographic constraints.
Similarly, the Variational Quantum Eigensolver (VQE) and other variational quantum circuits could be used to analyze cryptographic constructs. A speculative but intriguing possibility is using VQE to find minimal representations or to optimize hardness assumptions (e.g., finding the smallest lattice that still preserves the required hardness by treating it as the ground state of some Hamiltonian); this is a very forward-looking idea.
Another emerging area is Quantum Machine Learning (QML) for security analysis. For example, quantum machine learning techniques might be applied to detect side-channel vulnerabilities or to perform cryptanalysis that is infeasible classically. If such QML techniques become practical, they could be incorporated into the validation stage of frameworks like ours to check the robustness of cryptographic implementations against quantum-empowered adversaries.
Advanced Neural Architectures for Cryptography: On the classical side, future research could explore more advanced ML models (like deep reinforcement learning or neural architecture search) for cryptographic optimization. One could envision an RL agent that dynamically adjusts cryptographic operations depending on context (for instance, simplifying an encryption algorithm on the fly when it detects low-risk scenarios, and switching to full-strength when needed). Although speculative, such adaptive security controlled by ML might become relevant in environments like IoT swarms, where devices must autonomously balance security and performance.
Interdisciplinary Approaches: Combining insights from other fields can spur innovation. For example, techniques from automated software tuning (as used in compiler optimizations) could merge with our ML approach to handle not just algorithm parameters but also low-level implementation details (like instruction scheduling, memory alignment, etc.). Genetic programming might evolve new cryptographic algorithm variants altogether, guided by fitness functions that incorporate both security (tested via known attacks) and performance.
For ML-enhanced PQC to gain widespread adoption, standards and best practices must be established. Currently, there is a gap in guidelines specific to the use of AI/ML in cryptographic contexts. We identify several standardization opportunities: common benchmarking and reporting methodologies (such as the evaluation suite proposed above); validation and certification procedures for ML-selected parameter sets, ensuring they preserve claimed security levels; documentation requirements for any ML components embedded in cryptographic libraries; and reproducibility guidelines so that automated optimizations can be independently verified.
Collaboration between the cryptography community and the machine learning community will be essential in these standardization efforts. We anticipate joint workshops and conferences rising in prominence (indeed, venues focusing on AI & security have been gaining traction). By establishing standards early, the field can avoid fragmentation and ensure that different solutions remain comparable and compatible.
In this paper, we have presented a comprehensive study on the interplay between machine learning and quantum-safe encryption. We began with an extensive systematic literature review covering 764 publications, which allowed us to chart out the current state-of-the-art, identify trends, and pinpoint research gaps in this emerging interdisciplinary field. This review revealed a significant potential for ML-enhanced optimization of post-quantum cryptographic (PQC) systems, alongside a clear need for more unified research efforts, better benchmarks, and practical implementations.
Motivated by these findings, we introduced a novel ML-enhanced optimization framework aimed at improving the performance of PQC algorithms. The framework systematically integrates hardware profiling, surrogate modeling via machine learning, multi-objective optimization, and continuous validation into a cohesive toolchain for tuning cryptographic parameters and implementations. We applied this framework to lattice-based cryptographic schemes (including NIST-standardized algorithms like Kyber and Dilithium) across a variety of hardware platforms.
Our experimental evaluation demonstrated substantial performance improvements (latency reductions averaging roughly 35% and exceeding 40% on embedded platforms, with similar gains in memory and energy efficiency) while maintaining robust security guarantees. These results underscore the value of machine learning in navigating complex optimization landscapes that were previously approached with ad-hoc or brute-force methods. Notably, the framework achieved full automation and high reproducibility in optimizing cryptosystems, which is a leap forward in this domain.
We also put forth a standardized evaluation suite to help structure future research and comparisons. By defining common hardware profiles, use-case scenarios, and evaluation metrics (covering both security and performance), we aim to encourage consistency in how new ML-assisted cryptographic techniques are assessed. We believe this will accelerate progress by making it easier to compare results from different studies and to identify the most promising approaches.
Key contributions of our work include: (1) the first comprehensive literature analysis of machine learning applications in quantum-safe cryptography, distilling insights on what has been accomplished and what challenges remain; (2) a novel optimization framework that bridges ML and PQC to tackle the critical issue of performance optimization in post-quantum encryption; (3) empirical validation showing that our ML-driven approach yields notable improvements over baseline implementations, thus proving the concept; (4) a proposal for a standardized benchmarking methodology, addressing a crucial gap in how this research area evaluates success; and (5) an open-source reference implementation of our framework and benchmarks, to serve as a foundation for further research and practical adoption.
The research presented here helps establish a foundation for the nascent field of AI-enhanced quantum-safe cryptography. As quantum computing continues to advance and machine learning becomes increasingly ubiquitous in all aspects of technology, the convergence of these fields with security is inevitable. We envision that in the coming years, cryptographic libraries and protocols will routinely incorporate AI components to adapt and optimize themselves, and conversely, advanced ML systems will be built from the ground up with post-quantum security. Ensuring that this integration is done safely, transparently, and effectively is of paramount importance for maintaining trust in the digital infrastructure of the future.
Looking ahead, we hope our work sparks further exploration into combining ML and cryptography. There are rich opportunities for collaboration between cryptographers, ML researchers, and hardware experts to push the boundaries of what’s possible. Ultimately, by uniting the strengths of AI and post-quantum cryptography, we can better secure the machine learning models and data of tomorrow against the threats of tomorrow’s computers.