Dr Robert Granger
Academic and research departments
Surrey Centre for Cyber Security, Computer Science Research Centre, School of Computer Science and Electronic Engineering.
About
Biography
Robert Granger holds an MSci from the School of Mathematics, University of Bristol (2002) and a PhD from the Department of Computer Science, University of Bristol (2006).
He has held postdoctoral positions at: the Centre for Applied Cryptographic Research, University of Waterloo, Canada; his alma mater, again in the Cryptography and Information Security Group; and at the Claude Shannon Institute, University College Dublin, Ireland. Before joining the University of Surrey as a Lecturer in Secure Systems in December 2018, he was a scientist in the Laboratory for Cryptologic Algorithms, EPFL, Switzerland. He became a Senior Lecturer in Secure Systems in August 2021 and a Reader (what the University now designates as an Associate Professor) in August 2023.
He is the Cryptography Research Theme Lead for the Computer Science Research Centre.
Research
Research interests
I am a computational number theorist whose main specialism is the number-theoretic assumptions, applications and algorithms underlying public key cryptography. I have over a decade of experience conceiving and leading often groundbreaking research projects in this area, and am an expert in foundational cryptographic security assumptions. In particular, I have designed several highly original discrete logarithm algorithms for various algebraic groups (one receiving the Best Paper Award, CRYPTO 2013) and have set several world records for discrete logarithm computations with my teams. I have a natural passion for attacking interesting and hard problems (in cryptography and elsewhere), combining deep insights with powerful mathematical and computational techniques. In addition to cryptography and cryptanalysis, number theory, algebraic geometry, algorithm design, discrete mathematics and finite field theory, I am interested in various theoretical properties of irreducible polynomials and curves over finite fields.
Research projects
Public key cryptography (PKC) depends on the existence of computational problems that are incredibly hard - but not impossible - to solve. Classical examples that were fundamental to the origins of PKC in the 1970s (and indeed were prominent centuries earlier) are the integer factorisation problem and the discrete logarithm problem (DLP). While there are no known efficient, i.e., polynomial-time algorithms for solving these problems that run on classical computers, thanks to Shor's astounding breakthrough ideas in 1994, both can be solved efficiently on a quantum computer of sufficient size. Intense research in the areas of quantum computation, quantum information theory and quantum algorithms ensued, and replacement post-quantum (PQ) cryptosystems have been studied in earnest for the past 15 years or so, with standardisation efforts in progress at both NIST and ETSI. PQ cryptosystems must be secure against both classical and quantum computers and therefore their underlying hardness assumptions must be studied intensely before they can be fully trusted to replace our existing PKC hardness assumptions. Until these standards have been established and cryptographic practice migrates entirely to PQ cryptography, it is also essential that the study of classical hardness assumptions persists, particularly as sporadic and sometimes spectacular progress can occur: for instance, for a special but large family of finite fields the DLP can be solved on a classical computer in quasi-polynomial time, i.e., 'very nearly' efficiently, thanks to a series of results due to Dr Granger and his collaborators, and Joux and his collaborators.
In this project we will research and develop algorithms for solving computational problems that are foundational to the security of PKC, both now and in the future. In particular, we will study: the DLP in the aforementioned special family of finite fields, for which an efficient classical algorithm is potentially on the horizon; the security of the Legendre pseudo-random function, which is extremely well suited for multi-party computation and has been proposed for use in the next iteration of Ethereum - the de facto standard blockchain platform - but is not so well-studied; and finally the security of supersingular isogeny-based PQ cryptography, which although a relatively young field offers many very promising applications. Due to their nature, any cryptographic assumptions based on mathematical constructions are potentially weaker than currently believed, and we will deepen our understanding and assess the hardness of these natural and fundamental problems, thus providing security assurances to the cryptography community and more generally all users of cryptography.
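The Legendre pseudo-random function mentioned above admits a very short description: the key k and input x are combined as k + x, and the output bit records whether this sum is a quadratic residue modulo p, computed via Euler's criterion. The sketch below is illustrative only; the prime, key and inputs are toy choices, not parameters from any proposal.

```python
# Sketch of the Legendre pseudo-random function (PRF): the output bit is 1
# iff key + x is a quadratic residue mod p. Toy parameters for illustration.

def legendre_symbol(a: int, p: int) -> int:
    """Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 otherwise."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_prf_bit(key: int, x: int, p: int) -> int:
    """One output bit of the Legendre PRF: map the symbol of key + x to {0, 1}."""
    return 1 if legendre_symbol(key + x, p) == 1 else 0

p = 2**61 - 1          # a Mersenne prime, chosen here only for illustration
key = 123456789
stream = [legendre_prf_bit(key, x, p) for x in range(16)]
print(stream)
```

The PRF's suitability for multi-party computation comes from the multiplicativity of the Legendre symbol, which makes the evaluation essentially one shared multiplication and one shared exponentiation.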
Supervision
Postgraduate research supervision
Previous PhD students as Principal Supervisor:
- Bruno Sterner
Previous PhD students (co-supervised):
- Benjamin Wesolowski (EPFL 2018)
- Naomi Benger (CSI 2010)
Teaching
This year (2023/24) I am teaching:
- Asymmetric Cryptography (COMM045) in the Information Security MSc
I also supervise Professional Projects (COM3001) and MSc dissertations (COMM002).
Since 2020/21 I have been the Examinations Officer for the department.
Publications
In this paper we present a new multiplication algorithm for residues modulo the Mersenne prime 2^521 − 1. Using this approach, on an Intel Haswell Core i7-4770, constant-time variable-base scalar multiplication on NIST's (and SECG's) curve P-521 requires 1,108,000 cycles, while on the recently proposed Edwards curve E-521 it requires just 943,000 cycles. As a comparison, on the same architecture OpenSSL's ECDH speed test for curve P-521 requires 1,319,000 cycles. Furthermore, our code was written entirely in C and so is robust across different platforms. The basic observation behind these speedups is that the form of the modulus allows one to multiply residues with as few word-by-word multiplications as is needed for squaring, while incurring very little overhead from extra additions, in contrast to the usual Karatsuba methods.
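The "form of the modulus" observation rests on the identity 2^521 ≡ 1 (mod 2^521 − 1), which lets the high half of a double-length product fold back into the low half with a shift and an add. A minimal Python-integer sketch of that folding (the paper's actual implementation operates on 64-bit words in C):

```python
# Sketch of the key identity behind fast arithmetic mod p = 2^521 - 1:
# since 2^521 ≡ 1 (mod p), a double-length product a = hi*2^521 + lo
# reduces to hi + lo. Illustration only; real code works on machine words.

P521 = 2**521 - 1

def reduce_mod_p521(a: int) -> int:
    """Reduce a non-negative integer modulo 2^521 - 1 by shift-and-add folding."""
    while a >> 521:                      # while a has more than 521 bits
        a = (a >> 521) + (a & P521)      # fold the high part back in
    return 0 if a == P521 else a         # the loop leaves a in [0, p]

import random
x, y = random.randrange(P521), random.randrange(P521)
assert reduce_mod_p521(x * y) == (x * y) % P521
```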
In this paper we propose a binary field variant of the Joux-Lercier medium-sized Function Field Sieve, which results not only in complexities as low as L_{q^n}(1/3, (4/9)^{1/3}) for computing arbitrary logarithms, but also in a heuristic polynomial time algorithm for finding the discrete logarithms of degree one and two elements when the field has a subfield of an appropriate size. To illustrate the efficiency of the method, we have successfully solved the DLP in the finite fields with 2^1971 and 2^3164 elements, setting a record for binary fields.
In this paper we show that the Ate pairing, originally defined for elliptic curves, generalises to hyperelliptic curves and in fact to arbitrary algebraic curves. It has the following surprising properties: The loop length in Miller’s algorithm can be up to g times shorter than for the Tate pairing, with g the genus of the curve, and the pairing is automatically reduced, i.e. no final exponentiation is needed.
We count the number of isogeny classes of Edwards curves over odd characteristic finite fields, answering a question recently posed by Rezaeian and Shparlinski. We also show that each isogeny class contains a complete Edwards curve, and that an Edwards curve is isogenous to an original Edwards curve over F_q if and only if its group order is divisible by 8 if q ≡ −1 (mod 4), and 16 if q ≡ 1 (mod 4). Furthermore, we give formulae for the proportion of d ∈ F_q \ {0, 1} for which the Edwards curve E_d is complete or original, relative to the total number of d in each isogeny class.
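For readers unfamiliar with Edwards curves E_d : x^2 + y^2 = 1 + d·x^2·y^2, here is a minimal sketch of the addition law. The parameters are toy choices made here purely for illustration: p = 13 with d = 2, a non-square mod 13, so the curve is complete and the formulae never hit a zero denominator.

```python
# Minimal sketch of the Edwards addition law on E_d : x^2 + y^2 = 1 + d*x^2*y^2.
# Toy parameters: p = 13, d = 2 (a non-square mod 13), so the curve is complete.

p, d = 13, 2

def edwards_add(P, Q):
    """Complete Edwards addition; (0, 1) is the identity element."""
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, p - 2, p) % p   # division via Fermat
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, p - 2, p) % p
    return (x3, y3)

def on_curve(P):
    x, y = P
    return (x * x + y * y) % p == (1 + d * x * x * y * y) % p

P4 = (1, 0)                      # a point of order 4
Q = edwards_add(P4, P4)          # doubling: expect (0, -1) = (0, 12)
assert on_curve(P4) and on_curve(Q) and Q == (0, 12)
```

Completeness (d a non-square) is exactly what guarantees the denominators 1 ± t are never zero for points on the curve, which is why complete curves are singled out in the abstract above.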
We show that for any elliptic curve E(F_{q^n}), if an adversary has access to a Static Diffie-Hellman Problem (Static DHP) oracle, then by making O(q^{1−1/(n+1)}) Static DHP oracle queries during an initial learning phase, for fixed n > 1 and q → ∞ the adversary can solve any further instance of the Static DHP in heuristic time Õ(q^{1−1/(n+1)}). Our proposal also solves the Delayed Target DHP as defined by Freeman, and naturally extends to provide algorithms for solving the Delayed Target DLP, the One-More DHP and One-More DLP, as studied by Koblitz and Menezes in the context of Jacobians of hyperelliptic curves of small genus. We also argue that for any group in which index calculus can be effectively applied, the above problems have a natural relationship, and will always be easier than the DLP. While practical only for very small n, our algorithm reduces the security provided by the elliptic curves defined over F_{p^2} and F_{p^4} proposed by Galbraith, Lin and Scott at EUROCRYPT 2009, should they be used in any protocol where a user can be made to act as a proxy Static DHP oracle, or if used in protocols whose security is related to any of the above problems.
Recently, several striking advances have taken place regarding the discrete logarithm problem (DLP) in finite fields of small characteristic, despite progress having remained essentially static for nearly thirty years, with the best known algorithms being of subexponential complexity. In this expository article we describe the key insights and constructions which culminated in two independent quasi-polynomial algorithms. To put these developments into both a historical and a mathematical context, as well as to provide a comparison with the cases of so-called large and medium characteristic fields, we give an overview of the state-of-the-art algorithms for computing discrete logarithms in all finite fields. Our presentation aims to guide the reader through the algorithms and their complexity analyses ab initio.
For q a prime power, the discrete logarithm problem (DLP) in F_q consists in finding, for any g ∈ F_q^× and h ∈ ⟨g⟩, an integer x such that g^x = h. We present an algorithm for computing discrete logarithms with which we prove that for each prime p there exist infinitely many explicit extension fields F_{p^n} in which the DLP can be solved in expected quasi-polynomial time. Furthermore, subject to a conjecture on the existence of irreducible polynomials of a certain form, the algorithm solves the DLP in all extensions F_{p^n} in expected quasi-polynomial time.
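The quasi-polynomial algorithms of this paper are far beyond a short snippet, but the DLP statement itself can be illustrated with the generic O(√n) baby-step giant-step method in a toy prime field (all parameters below are illustrative choices, not from the paper):

```python
# Generic baby-step giant-step solver for g^x ≡ h (mod p), shown only as a
# baseline illustration of the DLP defined above; the quasi-polynomial
# algorithms in the abstract are vastly more sophisticated.
from math import isqrt

def bsgs(g: int, h: int, p: int) -> int:
    """Return x with g^x ≡ h (mod p), assuming h lies in the subgroup ⟨g⟩."""
    n = p - 1                                    # upper bound on the order of g
    m = isqrt(n) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    factor = pow(pow(g, p - 2, p), m, p)         # g^(-m), inverse via Fermat
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    raise ValueError("h is not in the subgroup generated by g")

p = 101
h = pow(2, 47, p)
x = bsgs(2, h, p)
assert pow(2, x, p) == h
```

Any x = i·m + j with 0 ≤ i, j < m is found because the loop matches h·g^{−im} against the stored baby steps g^j; this square-root-time/space trade-off is the benchmark that index calculus and the quasi-polynomial methods dramatically improve upon for finite fields.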
At Crypto 2004, van Dijk and Woodruff introduced a new way of using the algebraic tori T_n in cryptography, and obtained an asymptotically optimal n/φ(n) savings in bandwidth and storage for a number of cryptographic applications. However, the computational requirements of compression and decompression in their scheme were impractical, and it was left open to reduce them to a practical level. We give a new method that compresses orders of magnitude faster than the original, while also speeding up the decompression and improving on the compression factor (by a constant term). Further, we give the first efficient implementation that uses T_30, compare its performance to XTR, CEILIDH, and ECC, and present new applications. Our methods achieve better compression than XTR and CEILIDH for the compression of as few as two group elements. This allows us to apply our results to ElGamal encryption with a small message domain to obtain ciphertexts that are 10% smaller than in previous schemes.
In this paper we investigate the efficiency of the function field sieve to compute discrete logarithms in the finite fields F_{3^n}. Motivated by attacks on identity based encryption systems using supersingular elliptic curves, we pay special attention to the case where n is composite. This allows us to represent the function field over different base fields. Practical experiments appear to show that a function field over F_3 gives the best results.
This paper describes an extremely efficient squaring operation in the so-called 'cyclotomic subgroup' of F_{q^6}^×, for q ≡ 1 (mod 6). Our result arises from considering the Weil restriction of scalars of this group from F_{q^6} to F_{q^2}, and provides efficiency improvements for both pairing-based and torus-based cryptographic protocols. In particular we argue that such fields are ideally suited for the latter when the field characteristic satisfies p ≡ 1 (mod 6), and since torus-based techniques can be applied to the former, we present a compelling argument for the adoption of a single approach to efficient field arithmetic for pairing-based cryptography.
In this paper we show how some recent ideas regarding the discrete logarithm problem (DLP) in finite fields of small characteristic may be applied to compute logarithms in some very large fields extremely efficiently. By combining the polynomial time relation generation from the authors' CRYPTO 2013 paper, an improved degree two elimination technique, and an analogue of Joux's recent small-degree elimination method, we solved a DLP in the record-sized finite field of 2^6120 elements, using just a single core-month. Relative to the previous record set by Joux in the field of 2^4080 elements, this represents a 50% increase in the bitlength, using just 5% of the core-hours. We also show that for the fields considered, the parameters for Joux's L_Q(1/4 + o(1)) algorithm may be optimised to produce an L_Q(1/4) algorithm.
A popular approach to tweakable blockcipher design is via masking, where a certain primitive (a blockcipher or a permutation) is preceded and followed by an easy-to-compute tweak-dependent mask. In this work, we revisit the principle of masking. We do so alongside the introduction of the tweakable Even-Mansour construction MEM. Its masking function combines the advantages of word-oriented LFSR- and powering-up-based methods. We show in particular how recent advancements in computing discrete logarithms over finite fields of characteristic 2 can be exploited in a constructive way to realize highly efficient, constant-time masking functions. If the masking satisfies a set of simple conditions, then MEM is a secure tweakable blockcipher up to the birthday bound. The strengths of MEM are exhibited by the design of fully parallelizable authenticated encryption schemes OPP (nonce-respecting) and MRO (misuse-resistant). If instantiated with a reduced-round BLAKE2b permutation, OPP and MRO achieve speeds up to 0.55 and 1.06 cycles per byte on the Intel Haswell microarchitecture, and are able to significantly outperform their closest competitors.
We present three proofs of an observation of Ahmadi on the number of irreducible polynomials over GF(2) with certain traces and cotraces, the most interesting of which uses an explicit natural bijection. We also present two proofs of a related observation. Keywords: finite fields; irreducible binary polynomials with certain traces and cotraces; bijective proofs.
The Lucas-Lehmer (LL) test is the most efficient known for testing the primality of Mersenne numbers, i.e., the integers M_l = 2^l − 1, for l ≥ 1. The Mersenne numbers are so-called in honour of the French scholar Marin Mersenne (1588-1648), who in 1644 published a list of exponents l ≤ 257 which he conjectured produced all and only those M_l which are prime, for l in this range, namely l = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257. Mersenne's list turned out to be incorrect, omitting the prime-producing l = 61, 89 and 107 and including the composite-producing l = 67 and 257, although this was not finally confirmed until 1947, using both the LL test and contemporary mechanical calculators.
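The LL test itself is short enough to state directly: for an odd prime l, M_l is prime exactly when s_{l−2} ≡ 0 (mod M_l), where s_0 = 4 and s_{i+1} = s_i^2 − 2. A sketch that reproduces the corrections to Mersenne's list:

```python
# The Lucas-Lehmer test as a direct sketch: for an odd prime l, the Mersenne
# number M_l = 2^l - 1 is prime iff s_{l-2} ≡ 0 (mod M_l), where s_0 = 4 and
# s_{i+1} = s_i^2 - 2.

def lucas_lehmer(l: int) -> bool:
    """Primality of M_l = 2^l - 1; valid for odd prime exponents l."""
    m = 2**l - 1
    s = 4
    for _ in range(l - 2):
        s = (s * s - 2) % m
    return s == 0

# Reproducing two corrections to Mersenne's list:
assert lucas_lehmer(61)          # omitted by Mersenne, but M_61 is prime
assert not lucas_lehmer(67)      # included by Mersenne, but M_67 is composite
```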
Generalised Mersenne Numbers (GMNs) were defined by Solinas in 1999 and feature in the NIST (FIPS 186-2) and SECG standards for use in elliptic curve cryptography. Their form is such that modular reduction is extremely efficient, thus making them an attractive choice for modular multiplication implementation. However, the issue of residue multiplication efficiency seems to have been overlooked. Asymptotically, using a cyclic rather than a linear convolution, residue multiplication modulo a Mersenne number is twice as fast as integer multiplication; this property does not hold for prime GMNs, unless they are of Mersenne's form. In this work we exploit an alternative generalisation of Mersenne numbers for which an analogue of the above property -- and hence the same efficiency ratio -- holds, even at bitlengths for which schoolbook multiplication is optimal, while also maintaining very efficient reduction. Moreover, our proposed primes are abundant at any bitlength, whereas GMNs are extremely rare. Our multiplication and reduction algorithms can also be easily parallelised, making our arithmetic particularly suitable for hardware implementation. Furthermore, the field representation we propose also naturally protects against side-channel attacks, including timing attacks, simple power analysis and differential power analysis, which is essential in many cryptographic scenarios, in contrast to GMNs.
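The "extremely efficient reduction" enjoyed by GMNs can be illustrated with the NIST prime p192 = 2^192 − 2^64 − 1 from FIPS 186-2: since 2^192 ≡ 2^64 + 1 (mod p192), the high part of a product folds back in with shifts and adds. Note the primes proposed in the paper itself are a different, more abundant family; the sketch below shows only the standard GMN idea.

```python
# Illustration of cheap reduction modulo a Generalised Mersenne Number, using
# the NIST prime p192 = 2^192 - 2^64 - 1 (FIPS 186-2). Since
# 2^192 ≡ 2^64 + 1 (mod p192), the high part of a double-length product folds
# back in with shifts and adds; no trial division or general-purpose mod needed.

P192 = 2**192 - 2**64 - 1
MASK = 2**192 - 1

def reduce_mod_p192(a: int) -> int:
    """Reduce 0 <= a < p192^2 using the congruence 2^192 ≡ 2^64 + 1 (mod p192)."""
    while a >> 192:
        hi, lo = a >> 192, a & MASK
        a = lo + (hi << 64) + hi         # hi*2^192 ≡ hi*2^64 + hi
    return a - P192 if a >= P192 else a
```

Real implementations unroll this folding into a fixed sequence of word additions (the FIPS 186-2 reduction procedures), which is what makes GMNs attractive for constant-time arithmetic.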
We present an efficient deterministic algorithm which outputs exact expressions in terms of n for the number of monic degree n irreducible polynomials over F_q of characteristic p for which the first l < p coefficients are prescribed, provided that n is coprime to p. Each of these counts is (1/n)(q^{n−l} + O(q^{n/2})). The main idea behind the algorithm is to associate to an equivalent problem a set of Artin-Schreier curves defined over F_q whose numbers of F_{q^n}-rational affine points must be combined. This is accomplished by computing their zeta functions using a p-adic algorithm due to Lauder and Wan. Using the computational algebra system Magma one can, for example, compute the zeta functions of the arising curves for q = 5 and l = 4 very efficiently, and we detail a proof-of-concept demonstration. Due to the failure of Newton's identities in positive characteristic, the l ≥ p cases are seemingly harder. Nevertheless, we use an analogous algorithm to compute example curves for q = 2 and l ≤ 7, and for q = 3 and l = 3. Again using Magma, for q = 2 we computed the relevant zeta functions for l = 4 and l = 5, obtaining explicit formulae for these open problems for n odd, as well as for subsets of these problems for all n, while for q = 3 we obtained explicit formulae for l = 3 and n coprime to 3. We also discuss some of the computational challenges and theoretical questions arising from this approach in the general case and propose some natural open problems.
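For tiny parameters the counts in question can be checked by exhaustion. The sketch below represents binary polynomials as integer bitmasks (bit i is the coefficient of x^i) and tests irreducibility by trial division; it is a brute-force companion only, not the paper's zeta-function algorithm, and the helper names are ours.

```python
# Brute-force check of irreducible-polynomial counts over F_2 for tiny n:
# polynomials are int bitmasks, irreducibility is tested by trial division.

def poly_mod2(a: int, b: int) -> int:
    """Remainder of binary-polynomial division of a by b (coefficients mod 2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible2(f: int) -> bool:
    """Trial division by every polynomial of degree 1..deg(f)//2."""
    n = f.bit_length() - 1
    return all(poly_mod2(f, g) != 0
               for d in range(1, n // 2 + 1)
               for g in range(1 << d, 1 << (d + 1)))

def count_with_prescribed(n: int, l: int, bits: int) -> int:
    """Count degree-n irreducibles over F_2 whose l coefficients just below
    the leading term match the given bit pattern."""
    return sum(1 for f in range(1 << n, 1 << (n + 1))
               if (f >> (n - l)) & ((1 << l) - 1) == bits and is_irreducible2(f))

# Each count is roughly 2^(n-l)/n, matching the displayed estimate for q = 2.
print(count_with_prescribed(8, 1, 0), count_with_prescribed(8, 1, 1))
```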
The value of the Tate pairing on an elliptic curve over a finite field may be viewed as an element of an algebraic torus. Using this simple observation, we transfer techniques recently developed for torus-based cryptography to pairing-based cryptography, resulting in more efficient computations, and lower bandwidth requirements. To illustrate the efficacy of this approach, we apply the method to pairings on supersingular elliptic curves in characteristic three.
In late 2012 and early 2013 the discrete logarithm problem (DLP) in finite fields of small characteristic underwent a dramatic series of breakthroughs, culminating in a heuristic quasi-polynomial time algorithm, due to Barbulescu, Gaudry, Joux and Thomé. Using these developments, Adj, Menezes, Oliveira and Rodríguez-Henríquez analysed the concrete security of the DLP, as it arises from pairings on (the Jacobians of) various genus one and two supersingular curves in the literature, which were originally thought to be 128-bit secure. In particular, they suggested that the new algorithms have no impact on the security of a genus one curve over F_{2^1223}, and reduce the security of a genus two curve over F_{2^367} to 94.6 bits. In this paper we propose a new field representation and efficient general descent principles which together make the new techniques far more practical. Indeed, at the '128-bit security level' our analysis shows that the aforementioned genus one curve has approximately 59 bits of security, and we report a total break of the genus two curve.
Although identity-based cryptography offers a number of functional advantages over conventional public key methods, the computational costs are significantly greater. The dominant part of this cost is the Tate pairing, which, in characteristic three, is best computed using the algorithm of Duursma and Lee. However, in hardware and constrained environments, this algorithm is unattractive since it requires online computation of cube roots or enough storage space to precompute required results. We examine the use of normal basis arithmetic in characteristic three in an attempt to get the best of both worlds: an efficient method for computing the Tate pairing that requires no precomputation and that may also be implemented in hardware to accelerate devices such as smart-cards.
We give estimates for the running-time of the function field sieve (FFS) to compute discrete logarithms in F_{p^n}^× for small p. Specifically, we obtain sharp probability estimates that allow us to select optimal parameters in cases of cryptographic interest, without appealing to the heuristics commonly relied upon in an asymptotic analysis. We also give evidence that for any fixed field size some fields may be weaker than others of a different characteristic or field representation, and compare the relative difficulty of computing discrete logarithms via the FFS in such cases.
Using a recent idea of Gaudry and exploiting rational representations of algebraic tori, we present an index calculus type algorithm for solving the discrete logarithm problem that works directly in these groups. Using a prototype implementation, we obtain practical upper bounds for the difficulty of solving the DLP in the tori T_2(F_{p^m}) and T_6(F_{p^m}) for various p and m. Our results do not affect the security of the cryptosystems LUC, XTR, or CEILIDH over prime fields. However, the practical efficiency of our method against other methods needs further examining, for certain choices of p and m in regions of cryptographic interest.
We propose a simple deterministic test for deciding whether or not an element a ∈ F_{2^n}^× or F_{3^n}^× is a zero of the corresponding Kloosterman sum over these fields, and rigorously analyse its runtime. The test seems to have been overlooked in the literature. The expected cost of the test for binary fields is a single point-halving on an associated elliptic curve, while for ternary fields the expected cost is one-half of a point-thirding on an associated elliptic curve. For binary fields of practical interest, this represents an O(n) speedup over the previous fastest test. By repeatedly invoking the test on random elements of F_{2^n}^× we obtain the most efficient probabilistic method to date to find nontrivial Kloosterman sum zeros. The analysis depends on the distribution of Sylow p-subgroups in the two families of associated elliptic curves, which we ascertain using a theorem due to Howe.
This paper reports on the computation of a discrete logarithm in the finite field F_{2^30750}, breaking by a large margin the previous record, which was set in January 2014 by a computation in F_{2^9234}. The present computation made essential use of the elimination step of the quasi-polynomial algorithm due to Granger, Kleinjung and Zumbrägel, and is the first large-scale experiment to truly test and successfully demonstrate its potential when applied recursively, which is when it leads to the stated complexity. It required the equivalent of about 2900 core years on a single core of an Intel Xeon Ivy Bridge processor running at 2.6 GHz, which is comparable to the approximately 3100 core years expended for the discrete logarithm record for prime fields, set in a field of bit-length 795, and demonstrates just how much easier the problem is for this level of computational effort. In order to make the computation feasible we introduced several innovative techniques for the elimination of small degree irreducible elements, which meant that we avoided performing any costly Gröbner basis computations, in contrast to all previous records since early 2013. While such computations are crucial to the L(1/4 + o(1)) complexity algorithms, they were simply too slow for our purposes. Finally, this computation should serve as a serious deterrent to cryptographers who are still proposing to rely on the discrete logarithm security of such finite fields in applications, despite the existence of two quasi-polynomial algorithms and the prospect of even faster algorithms being developed.
The security and performance of pairing based cryptography has provoked a large volume of research, in part because of the exciting new cryptographic schemes that it underpins. We re-examine how one should implement pairings over ordinary elliptic curves for various practical levels of security. We conclude, contrary to prior work, that the Tate pairing is more efficient than the Weil pairing for all such security levels. This is achieved by using efficient exponentiation techniques in the cyclotomic subgroup backed by efficient squaring routines within the same subgroup.
We give a comparison of the performance of the recently proposed torus-based public key cryptosystem CEILIDH, and XTR. Underpinning both systems is the mathematics of the two-dimensional algebraic torus T_6(F_p). However, while they both attain the same discrete logarithm security and each achieve a compression factor of three for all data transmissions, the arithmetic performed in each is fundamentally different. In its inception, the designers of CEILIDH were reluctant to claim it offers any particular advantages over XTR other than its exact compression and decompression technique. From both an algorithmic and arithmetic perspective, we develop an efficient version of CEILIDH and show that while it seems bound to be inherently slower than XTR, the difference in performance is much smaller than what one might infer from the original description. Also, thanks to CEILIDH's simple group law, it provides a greater flexibility for applications, and may thus be considered a worthwhile alternative to XTR.
For any positive integers n ≥ 3, r ≥ 1 we present formulae for the number of irreducible polynomials of degree n over the finite field F_{2^r} where the coefficients of x^{n−1}, x^{n−2} and x^{n−3} are zero. Our proofs involve counting the number of points on certain algebraic curves over finite fields, a technique which arose from Fourier-analysing the known formulae for the F_2 base field cases, reverse-engineering an economical new proof and then extending it. This approach gives rise to fibre products of supersingular curves and makes explicit why the formulae have period 24 in n.
In 2013 and 2014 a revolution took place in the understanding of the discrete logarithm problem (DLP) in finite fields of small characteristic. Consequently, many cryptosystems based on cryptographic pairings were rendered completely insecure, which serves as a valuable reminder that long-studied so-called hard problems may turn out to be far easier than initially believed. In this article, Robert Granger gives an overview of the surprisingly simple ideas behind some of the breakthroughs and the many computational records that have so far resulted from them.