I am a principal researcher at
Stellar Development Foundation, on leave from
the School of Computer Science
at Tel Aviv University. My main research interests are Cryptography and Computational
Complexity.
A distribution is k-incompressible (Yao [FOCS '82]) if no efficient compression scheme compresses it to less than k bits. While this is a natural measure, its relation to other computational analogs of entropy, such as pseudoentropy (Håstad, Impagliazzo, Levin, and Luby [SICOMP '99]), and to other cryptographic hardness assumptions, was unclear.
We advance towards a better understanding of this notion, showing that a k-incompressible distribution has (k−2) bits of next-block pseudoentropy, a refinement of pseudoentropy introduced by Haitner, Reingold, and Vadhan [SICOMP '13]. We deduce that the existence of a samplable distribution X that is (H(X)+2)-incompressible implies the existence of one-way functions.
Succinct non-interactive arguments (SNARGs) have become a fundamental primitive in the cryptographic community. The focus of this work is constructions of SNARGs in the Random Oracle Model (ROM). Such SNARGs enjoy post-quantum security and can be deployed using lightweight cryptography to heuristically instantiate the random oracle. A ROM-SNARG is \emph{(t,ε)-sound} if no t-query malicious prover can convince the verifier to accept a false statement with probability larger than ε. Recently, Chiesa-Yogev (CRYPTO '21) presented a ROM-SNARG of length Θ(log(t/ε)·log t) (ignoring log n factors, where n is the instance size). This improvement, however, is still far from the (folklore) lower bound of Ω(log(t/ε)).
Assuming the \textit{randomized exponential-time hypothesis}, we prove a tight lower bound of Ω(log(t/ε)·log t) for the length of (t,ε)-sound ROM-SNARGs. Our lower bound holds for constructions with non-adaptive verifiers and a strong soundness notion called \textit{salted soundness}, restrictions that hold for \emph{all} known constructions (ignoring contrived counterexamples). We prove our lower bound by transforming any short ROM-SNARG (of the considered family) into a ROM-SNARG of the same length in which the verifier asks only a \emph{few} oracle queries, and then applying the recent lower bound of Chiesa-Yogev (TCC '20) for such SNARGs.
We present a new OT-based two-party multiplication protocol that is almost as efficient as Gilboa's semi-honest protocol [Crypto '99], but has a high level of security against malicious adversaries without further compilation. The achieved security suffices for many applications, and, assuming DDH, can be cheaply compiled into full security.
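To make the structure of such an OT-based multiplication concrete, here is a minimal Python sketch in the spirit of Gilboa's bit-decomposition approach. The insecure_ot helper is a hypothetical stand-in that provides no security whatsoever; the snippet only illustrates how additive shares of a·b are assembled from K oblivious transfers.

    # Toy sketch of OT-based two-party multiplication in the spirit of Gilboa's
    # protocol: the parties end up with additive shares of a*b modulo 2^K.
    # The "oblivious transfer" below is a plain function call with no security;
    # it only illustrates the information flow.
    import random

    K = 32                      # bit length of the shared values
    MOD = 1 << K

    def insecure_ot(msg0, msg1, choice_bit):
        """Stand-in for a 1-out-of-2 OT: the receiver learns msg_{choice_bit} only."""
        return msg1 if choice_bit else msg0

    def multiply_shares(a, b):
        """Alice holds a, Bob holds b; they obtain additive shares of a*b mod 2^K."""
        alice_share, bob_share = 0, 0
        for i in range(K):
            r = random.randrange(MOD)              # Alice's fresh mask for this OT
            msg0 = r                               # sent if Bob's i-th bit is 0
            msg1 = (r + (a << i)) % MOD            # sent if Bob's i-th bit is 1
            received = insecure_ot(msg0, msg1, (b >> i) & 1)
            bob_share = (bob_share + received) % MOD
            alice_share = (alice_share - r) % MOD
        return alice_share, bob_share

    if __name__ == "__main__":
        a, b = random.randrange(MOD), random.randrange(MOD)
        sa, sb = multiply_shares(a, b)
        assert (sa + sb) % MOD == (a * b) % MOD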
In distributed differential privacy, the parties perform analysis over their joint data while preserving the privacy of both datasets. Interestingly, for a few fundamental two-party functions such as inner product and Hamming distance, the accuracy of the distributed solution lags far behind what is achievable in the client-server setting. McGregor, Mironov, Pitassi, Reingold, Talwar, and Vadhan [FOCS '10] proved that this gap is inherent, showing upper bounds on the accuracy of (any) distributed solution for these functions. These limitations can be bypassed when settling for computational differential privacy, where the data is differentially private only in the eyes of a computationally bounded observer, using public-key cryptography primitives.
We prove that the use of public-key cryptography is necessary for bypassing the limitation of McGregor et al., showing that a non-trivial solution for the inner-product, or the Hamming distance, implies the existence of a key-agreement protocol. Our bound implies a combinatorial proof for the fact that non-Boolean inner product of independent (strong) Santha-Vazirani sources is a good condenser. We obtain our main result by showing that the inner-product of a (single, strong) SV source with a uniformly random seed is a good condenser, even when the seed and source are dependent.
We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an s-bit advice for a randomly chosen function f:[n]→[n] and using q oracle queries to f,
tries to invert a randomly chosen output y of f (i.e., to find x such that f(x)=y).
Much progress has been made regarding adaptive function inversion---where the inverter is allowed to make adaptive oracle queries.
Hellman [IEEE Transactions on Information Theory '80] presented an adaptive inverter that inverts a random f with high probability.
Fiat and Naor [SICOMP '00] proved that for any s,q with s²q = n² (ignoring low-order terms), an s-advice, q-query
variant of Hellman's algorithm inverts a constant fraction of the image points of any function. Yao [STOC '90] proved a lower bound of sq ≥ n for this problem.
Closing the gap between the above lower and upper bounds is a long-standing open question.
Very little is known for the non-adaptive variant of the question---where the inverter chooses its queries in advance.
The only known upper bounds, i.e., inverters, are the trivial ones (with s + q = n), and the only lower bound is the above bound of Yao.
In a recent work, Corrigan-Gibbs and Kogan [TCC '19] partially justified the difficulty of finding lower bounds on non-adaptive inverters,
showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits, bounds that, for a strong enough choice of parameters, are notoriously hard to prove.
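As a point of reference for the parameters above, the following Python sketch shows the trivial inverter behind the s + q = n upper bound (it is not Hellman's algorithm): the advice is a plain lookup table for part of the domain, and the online phase brute-forces the rest with non-adaptive queries. The parameter choices are illustrative only.

    import random

    def preprocess(f, s):
        """Advice: the images of the first s domain points."""
        return {f(x): x for x in range(s)}

    def invert(f, n, s, q, advice, y):
        """Try the advice table, then at most q (non-adaptive) oracle queries."""
        if y in advice:
            return advice[y]
        for x in range(s, min(s + q, n)):          # q additional queries to f
            if f(x) == y:
                return x
        return None                                # can fail when s + q < n

    n = 1 << 10
    table = [random.randrange(n) for _ in range(n)]   # a random function [n] -> [n]
    f = lambda x: table[x]
    s, q = 300, n - 300                               # s + q = n guarantees success
    advice = preprocess(f, s)
    y = f(random.randrange(n))
    x = invert(f, n, s, q, advice, y)
    assert x is not None and f(x) == y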
We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters:
In a distributed coin-flipping protocol (Blum [ACM Transactions on Computer Systems '83]), the parties try to output a common (close to) uniform bit,
even when some adversarially chosen parties try to bias the common output. In an adaptively secure full-information coin flip (Ben-Or and Linial [FOCS '85]), the parties communicate over a broadcast channel and a computationally unbounded adversary can choose which parties to corrupt during the protocol execution. Ben-Or and Linial proved that the n-party majority protocol is resilient to O(√n) corruptions (ignoring poly-logarithmic factors), and conjectured this is a tight upper bound for any n-party protocol (of any round complexity). Their conjecture was proved correct for the limited variant of single-turn (each party sends a single message), single-bit (a message is one bit) protocols (Lichtenstein, Linial, and Saks [Combinatorica '89]), for symmetric protocols (Goldwasser, Kalai, and Park [ICALP '15]), and recently for (arbitrary message length) single-turn protocols (Kalai, Komargodski, and Raz [DISC '18]). Yet, the question for many-turn (even single-bit) protocols was left completely open.
In this work we close the above gap, proving that no n-party protocol (of any round complexity) is resilient to ω(√n) (adaptive) corruptions. That is, up to polylog factors, the single-bit, single-message majority protocol is the optimal protocol against adaptive corruptions.
Hardness amplification is a central problem in the study of interactive protocols.
While "natural" parallel repetition transformation is known to reduce the soundness error of some special cases of interactive arguments:
three-message protocols (Bellare, Impagliazzo, and Naor[FOCS '97]) and public-coin protocols (Haastad, Pass, Wilkstrom, and Pietrzak [TCC '10], Chung and Lu [TCC '10] and Chung and Pass [TCC '15]),
it fails to do so in the general case (the above Bellare, Impagliazzo, and Naor; also Pietrzak and Wikstroom [TCC '07]).
The only known round-preserving approach that applies to all interactive arguments is Haitner's random-terminating transformation [FOCS '09, SiCOMP '13],
who showed that the parallel repetition of the transformed protocol reduces the soundness error at a weak exponential rate:
if the original m-round protocol has soundness error 1- ε, then the n-fold
repetition of its random-terminating variant has soundness error (1- ε)εn/m4
(omitting constant factors). Hastad et al. have generalized this result to partially simulatable interactive arguments,
showing that the n-fold repetition of an m-round δ-simulatable argument of
soundness error 1- ε has soundness error (1- ε)εδ2n/m2.
When applied to random-terminating arguments, the Hastad et al. bound matches that of Haitner.
In this work we prove that parallel repetition of random-terminating arguments reduces the soundness error at a much stronger exponential rate:
the soundness error of the n-fold repetition is (1-ε)^{n/m}, only an m
factor away from the optimal rate of (1-ε)^n achievable in public-coin and three-message arguments.
The result generalizes to δ-simulatable arguments, for which we prove a bound of (1-ε)^{δn/m}.
This is achieved by presenting a tight bound on a relaxed variant of the KL-divergence between the distribution induced by our reduction and its ideal variant,
a result whose scope extends beyond parallel repetition proofs. We prove the tightness of the above bound for random-terminating arguments,
by presenting a matching protocol.
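For a rough sense of the gap between these rates, the following Python snippet evaluates them for one (arbitrary) choice of parameters; δ is set to 1/m so that the Håstad et al. bound coincides with Haitner's bound on random-terminating arguments, as stated above.

    eps, m, n = 0.1, 10, 10_000        # illustrative values only
    delta = 1.0 / m                    # makes the Hastad et al. bound match Haitner's

    bounds = {
        "Haitner [FOCS '09]":    (1 - eps) ** (eps * n / m ** 4),
        "Hastad et al.":         (1 - eps) ** (eps * delta ** 2 * n / m ** 2),
        "this work":             (1 - eps) ** (n / m),
        "optimal (public-coin)": (1 - eps) ** n,
    }
    for name, value in bounds.items():
        print(f"{name:>23}: {value:.3e}")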
We put forth a new computational notion of entropy, measuring the (in)feasibility of sampling
high-entropy strings that are consistent with a given generator. Specifically, the i'th output block
of a generator G has accessible entropy at most k if the following holds: when conditioning on its
prior coin tosses, no polynomial-time strategy G*
can generate valid output for G's i'th output block with entropy greater than k.
A generator has inaccessible entropy if the total accessible
entropy (summed over the blocks) is noticeably smaller than the real entropy of G's output.
As an application of the above notion, we improve upon the result of Haitner, Nguyen, Ong, Reingold, and Vadhan [SICOMP '09], presenting a much simpler and more efficient construction of statistically hiding commitment schemes from arbitrary one-way functions.
Iftach Haitner, Thomas Holenstein, Omer
Reingold, Salil Vadhan and Hoeteck Wee
Inaccessible Entropy II: IE Functions and Universal One-Way Hashing
Theory of Computing 2020 [link]
A preliminary version appeared as Universal One-Way Hash Functions via Inaccessible Entropy [Eurocrypt, 2010]
This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel [STOC 1990].
We give a simpler construction of UOWHFs, which also obtains better efficiency and security. The construction exploits a strong connection to the recently
introduced notion of inaccessible entropy (Haitner et al. [STOC 2009]). With this perspective, we observe that a small tweak of
any one-way function f is already a weak form of a UOWHF: Consider
F(x,i) that outputs the i-bit long
prefix of f(x). If F were a UOWHF then given a
random x and i it would be hard to come up with x' ≠ x such
that F(x,i)=F(x',i). While this may not be the case, we show (rather easily) that it is hard
to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction
simply amplifies and exploits this basic property.
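For concreteness, here is a toy Python rendering of the function F described above, with SHA-256 serving as an arbitrary stand-in for the one-way function f (any one-way function would do; the stand-in is for illustration only).

    import hashlib

    def f(x: bytes) -> str:
        """Stand-in one-way function: SHA-256, written out as a 256-bit string."""
        digest = hashlib.sha256(x).digest()
        return ''.join(f"{byte:08b}" for byte in digest)

    def F(x: bytes, i: int) -> str:
        """The i-bit-long prefix of f(x), as in the construction above."""
        return f(x)[:i]

    # Finding x' != x with F(x', i) == F(x, i) for random x and i may be easy;
    # the claim above is that sampling such x' with almost full entropy among
    # all colliding values is hard.
    print(F(b"some input", 12))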
With this and other recent works we have that the constructions of
three fundamental cryptographic primitives (Pseudorandom Generators,
Statistically Hiding Commitments and UOWHFs) out of one-way functions
are to a large extent unified. In particular, all three constructions
rely on and manipulate computational notions of entropy in similar
ways. Pseudorandom Generators rely on the well-established notion of
pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely
on the newer notion of inaccessible entropy.
We present new lower bounds on the round complexity of randomized Byzantine agreement (BA) protocols, bounding the halting probability of such protocols after one and two rounds. In particular, we prove that:
Consider a ppt two-party protocol Π = (A,B) in which the parties get no private inputs and obtain outputs O_A, O_B ∈ {0,1}, and let V_A, V_B denote the parties' individual views.
Protocol Π has α-agreement if Pr[O_A = O_B] ≥ 1/2 + α. The leakage of Π is the amount of information a party obtains about the event {O_A = O_B}; that is, the leakage ε is the maximum, over P ∈ {A,B}, of the distance between V_P|O_A=O_B and V_P|O_A≠O_B. Typically, this distance is measured in statistical distance or, in the computational setting, in computational indistinguishability. For this choice, Wullschleger [TCC '09] showed that if ε ≪ α then the protocol can be transformed into an OT protocol.
We consider measuring the protocol leakage by the log-ratio distance (which was popularized by its use in the differential privacy framework). The log-ratio distance between X and Y over domain Ω is the minimal ε ≥ 0 for which, for every v ∈ Ω, log(Pr[X=v]/Pr[Y=v]) ∈ [-ε,ε]. In the computational setting, we use computational indistinguishability from having log-ratio distance ε. We show that a protocol with (noticeable) accuracy α ∈ Ω(ε²) can be transformed into an OT protocol. We complete the picture, in this respect, showing that a protocol with α ∈ o(ε²) does not necessarily imply OT. Our results hold both in the information-theoretic and in the computational settings, and can be viewed as a "fine-grained" approach to "weak OT amplification".
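For concreteness, the following Python routine computes the log-ratio distance defined above for finite distributions given as outcome-to-probability dictionaries (a direct, inefficient computation, included only to pin down the definition).

    import math

    def log_ratio_distance(p: dict, q: dict) -> float:
        """Minimal eps such that |log(Pr[X=v]/Pr[Y=v])| <= eps for every outcome v."""
        eps = 0.0
        for v in set(p) | set(q):
            pv, qv = p.get(v, 0.0), q.get(v, 0.0)
            if pv == 0.0 and qv == 0.0:
                continue
            if pv == 0.0 or qv == 0.0:
                return math.inf
            eps = max(eps, abs(math.log(pv / qv)))
        return eps

    # Example: two slightly biased coins.
    print(log_ratio_distance({0: 0.5, 1: 0.5}, {0: 0.45, 1: 0.55}))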
We then use the above result to fully characterize the complexity of differentially private two-party computation for the XOR function, answering the open question posed by Goyal, Khurana, Mironov, Pandey, and Sahai [ICALP '16] and Haitner, Nissim, Omri, Shaltiel, and Silbak [FOCS '18]. Specifically, we show that for any (noticeable) α ∈ Ω(ε²), a two-party protocol that computes the XOR function with α-accuracy and ε-differential privacy can be transformed into an OT protocol. This improves upon Goyal et al., who only handle α ∈ Ω(ε), and upon Haitner et al., who showed that such a protocol implies (infinitely-often) key agreement (but not OT). Our characterization is tight, since OT does not follow from protocols in which α ∈ o(ε²), and it extends to functions (over many bits) that "contain" an "embedded copy" of the XOR function.
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well understood. On the one hand, distributional collision resistant hash functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way functions.
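The following toy Python snippet makes the definition concrete by brute-force sampling a distributional collision for a small, arbitrary compressing function; for real hash functions, producing such samples efficiently is exactly what the assumption rules out.

    import random

    def toy_hash(x: int) -> int:
        return (x * 37 + 11) % 16              # an arbitrary compressing map [64] -> [16]

    def sample_distributional_collision(domain_size: int = 64):
        x = random.randrange(domain_size)      # x uniform in the domain
        colliders = [y for y in range(domain_size) if toy_hash(y) == toy_hash(x)]
        y = random.choice(colliders)           # y uniform among the values colliding with x
        return x, y

    print(sample_distributional_collision())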
Assuming distributional collision resistant hash functions, we construct constant-round statistically hiding commitment schemes. Such commitments are not known based on one-way functions and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner, Reingold, Vadhan, and Wee (STOC '09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the two notions.
A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class SZK (which is known to imply distributional collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more direct than the one going through distributional collision resistance.
Key-agreement protocols whose security is proven in the random oracle model are an important alternative to the more common public-key based key-agreement protocols. In the random
oracle model, the parties and the eavesdropper have access to a shared random function (an "oracle"), but they are limited in the number of queries they can make to it. Unfortunately,
as shown by Impagliazzo and Rudich [STOC '89] and Barak and Mahmoody [Crypto '09], such
protocols can only guarantee limited secrecy: the key of any ℓ-query protocol can be revealed by an O(ℓ²)-query adversary. This quadratic gap between the query complexity of the honest parties and the eavesdropper matches the gap obtained by Merkle's Puzzles [CACM '78].
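To recall why the honest parties in Merkle's Puzzles already pay a lot in communication, here is a toy Python sketch of a birthday-style variant of the protocol over a shared random oracle (SHA-256 stands in for the oracle, and the parameters are illustrative): each party makes about L queries, Alice's message consists of about L hash values, and an eavesdropper who sees only the transcript must search most of the L²-sized domain.

    import hashlib, random

    def oracle(x: int) -> bytes:                       # the shared "random oracle"
        return hashlib.sha256(str(x).encode()).digest()

    L = 200
    DOMAIN = L * L                                     # domain of size L^2

    alice_queries = random.sample(range(DOMAIN), L)    # L oracle queries
    bob_queries = random.sample(range(DOMAIN), L)      # L oracle queries
    alice_msg = {oracle(x): x for x in alice_queries}  # Alice sends these L hashes

    # By the birthday bound, with constant probability the query sets intersect;
    # the common point serves as the shared key, and Bob announces its hash.
    common = [x for x in bob_queries if oracle(x) in alice_msg]
    if common:
        key = common[0]
        announced = oracle(key)                        # appears on the transcript
        assert alice_msg[announced] == key             # Alice recovers the same key
    else:
        print("no intersection this run; the honest parties would simply retry")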
In this work we tackle a new aspect of key-agreement protocols in the random oracle model:
their communication complexity. In Merkle's Puzzles, to obtain secrecy against an eavesdropper that makes roughly ℓ² queries, the honest parties need to exchange Ω(ℓ) bits. We show that for protocols with certain natural properties, properties that Merkle's Puzzles has, such high communication is unavoidable. Specifically, this is the case if the honest parties' queries are uniformly random, or alternatively if the protocol uses non-adaptive queries and has only two
rounds. Our proof for the first setting uses a novel reduction from random-oracle protocols to
the set-disjointness problem in two-party communication complexity, which is known to have
high communication cost. For the second setting we prove the lower bound directly, using
information-theoretic arguments.
Understanding the communication complexity of protocols whose security is proven in the
random-oracle model is an important question in the study of practical protocols. Our results
and proof techniques are a first step in this direction.
In their breakthrough result, Moran, Naor, and Segev
[Journal of Cryptology '16] show how to construct an r-round two-party coin-flipping protocol with bias
Θ(1/r).
This improves over the Θ(1/√r)-bias protocol of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript '85], and matches the lower bound of Cleve [STOC '86].
The protocol of Moran et al., however, uses oblivious transfer, to be compared with the protocol of Awerbuch et al. that can be based on any one-way function.
An intriguing open question is whether oblivious transfer, or more generally "public-key primitives", is required for an o(1/r)-bias coin flipping. This question was partially answered in the black-box setting by Dachman-Soled et al.
[TCC '11] and Dachman-Soled et al. [TCC '14], who showed that restricted types of fully black-box reductions cannot establish such o(1/r)-bias coin-flipping protocols from one-way functions.
We make progress towards answering the above question, showing that for any (constant) r ∈ N, the existence of an r-round o(1/r)-bias coin-flipping protocol implies the existence of an infinitely-often key-agreement protocol. Our reduction is non-black-box, and makes a novel use of the
recent dichotomy for two-party protocols of Haitner et al. [ECCC '18] to facilitate, for the two-party
case, the recent attack of Beimel et al. [ECCC '17] on multi-party coin-flipping protocols.
Consider an efficient two-party protocol in which, given security parameter k, the parties output
single bits X_k and Y_k, respectively. We are interested in how (X_k, Y_k) "appears" to an efficient
adversary that only views the transcript T_k. We make the following contributions:
In his seminal work, Cleve [STOC 1986] proved that any r-round coin-flipping protocol
can be efficiently biased by Ω(1/r). The above lower bound was matched for the two-party case by
Moran, Naor, and Segev [Journal of Cryptology '16], and the three-party case (up to a polylog
factor) by Haitner and Tsfadia [SICOMP '17], and was approached for n-party protocols when n < loglog r by Buchbinder, Haitner, Levi, and Tsfadia [SODA '17]. For n > loglog r, however,
the best bias for n-party coin-flipping protocols remains Θ(n/√r) achieved by the majority
protocol of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript '85].
Our main result is a tighter lower bound on the bias of coin-flipping protocols, showing that, for every constant ε > 0, an r^ε-party r-round coin-flipping protocol can be efficiently biased by Ω(1/√r). As far as we know, this is the first improvement of Cleve's bound that holds in the
standard model, and it is only an r^ε (multiplicative) factor away from the aforementioned upper bound of Awerbuch et al.
We prove the above bound using two new results that we believe are of independent interest.
The first result is that a sequence of (``augmented'') weak martingales has a large gap: with constant probability, there exist two adjacent variables whose gap
is at least the ratio between the gap between the first and last variables and the square root of the number of variables.
This generalizes the result of Cleve and Impagliazzo [Manuscript '93], who showed that the above holds for strong martingales,
and, in some settings, it allows exploiting this gap by efficient algorithms. We prove the above using a novel argument that does not follow the more complicated
approach of Cleve and Impagliazzo.
The second result is a new sampling algorithm that uses a differentially private mechanism to minimize the effect of data divergence.
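To give a feel for the first result, the following Python simulation tracks the Doob martingale of the majority of r fair coins (an illustrative stand-in, not the augmented weak martingales used in the paper); in a typical run the largest jump between adjacent variables is on the order of the total gap divided by √r.

    import math, random

    def majority_martingale(r):
        """X_i = Pr[majority of the r coins is heads | first i coins], one random run."""
        heads, xs = 0, []
        for i in range(r + 1):
            remaining = r - i
            need = r // 2 + 1 - heads              # heads still needed for a majority
            tail = sum(math.comb(remaining, j) for j in range(max(need, 0), remaining + 1))
            xs.append(tail / 2 ** remaining)
            if i < r:
                heads += random.randint(0, 1)      # toss the next coin
        return xs

    r = 301                                        # an odd number of coins
    xs = majority_martingale(r)
    max_jump = max(abs(b - a) for a, b in zip(xs, xs[1:]))
    total_gap = abs(xs[-1] - xs[0])                # always 1/2 for majority
    print(f"total gap {total_gap:.3f}, largest adjacent jump {max_jump:.4f}, "
          f"gap/sqrt(r) = {total_gap / math.sqrt(r):.4f}")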
Ran Cohen and Iftach Haitner and Eran Omri and Lior Rotem
From Fairness to Full Security in Multiparty Computation
SCN 2018
Journal of Cryptology 2021 [link]
Draft of full version [ePrint]
In the setting of secure multiparty computation (MPC), a set of mutually distrusting parties wish to jointly compute a function, while guaranteeing the privacy of their inputs and the correctness of the output. An MPC protocol is called \emph{fully secure} if no adversary can prevent the honest parties from obtaining their outputs. A protocol is called \emph{fair} if an adversary can prematurely abort the computation, but only before learning any new information. We present highly efficient transformations from fair computations to fully secure computations, assuming the fraction of honest parties is constant (e.g., $1\%$ of the parties are honest). Compared to previous transformations that require linear invocations (in the number of parties) of the fair computation, our transformations require only super-logarithmically many, and sometimes even super-constantly many, such invocations. The main idea is to delegate the computation to chosen random committees that invoke the fair computation. Apart from the benefit of uplifting security, the reduction in the number of parties is also useful, since only committee members are required to work, whereas the remaining parties simply ``listen'' to the computation over a broadcast channel. One application of these transformations is a new $\delta$-bias coin-flipping protocol, whose round complexity has a super-logarithmic dependency on the number of parties, improving over the protocol of Beimel, Omri, and Orlov (Crypto 2010) that has a linear dependency. A second application is a new fully secure protocol for computing the Boolean OR function, with a super-constant round complexity, improving over the protocol of Gordon and Katz (TCC 2009) whose round complexity is linear in the number of parties. Finally, we show that our positive results are in a sense optimal, by proving that for some functionalities, a super-constant number of (sequential) invocations of the fair computation is necessary for computing the functionality in a fully secure manner.
Computational analogues of information-theoretic notions have given rise to some of the most interesting phenomena in the theory of computation. For example, computational indistinguishability, Goldwasser and Micali [Journal of Computer and System Sciences 1984], which is the computational analogue of statistical distance, enabled the bypassing of Shannon's impossibility results on perfectly secure encryption, and provided the basis for the computational theory of pseudorandomness. Pseudoentropy, Håstad, Impagliazzo, Levin, and Luby [SICOMP 1999], a computational analogue of entropy, was the key to the fundamental result establishing the equivalence of pseudorandom generators and one-way functions, and has become a basic concept in complexity theory and cryptography. This tutorial discusses two rather recent computational notions of entropy, both of which can be easily found in any one-way function, the most basic cryptographic primitive. The first notion is next-block pseudoentropy, Haitner, Reingold and Vadhan [SICOMP 2013], a refinement of pseudoentropy that enables simpler and more efficient construction of pseudorandom generators. The second is inaccessible entropy, Haitner, Reingold, Vadhan and Wee [STOC 2009], which relates to unforgeability and is used to construct simpler and more efficient universal one-way hash functions and statistically hiding commitments.
In a multi-party fair coin-flipping protocol, the parties output a common (close to) unbiased bit, even when some corrupted parties try to bias the output.
In this work we focus on the case of an arbitrary number of corrupted parties. Cleve [STOC 1986] showed that in any such m-round coin-flipping protocol, the corrupted parties can bias the honest parties'
common output bit by Ω(1/m). For more than two decades, however, the best known coin-flipping protocol was the one of Awerbuch, Blum, Chor, Goldwasser, and Micali [Manuscript 1985], who presented a t-party, m-round protocol with bias
Θ(t/√m). This was changed by a recent breakthrough result of Moran et al. [TCC 2009],
who constructed an m-round, two-party coin-flipping protocol with optimal bias Θ(1/m).
Recently, Haitner and Tsfadia [STOC '14] constructed an m-round, three-party coin-flipping protocol with bias O(log³m/m).
Still for the case of more than three parties, the best known protocol remained the Θ(t/√m)-bias protocol of Awerbuch et al.
We make a step towards eliminating the above gap, presenting a t-party, m-round coin-flipping protocol,
with bias O(t³·2^t·√(log m) / m^{1/2 + 1/(2^{t−1}−2)}).
This improves upon the protocol of Awerbuch et al. for any t < loglog m/2; in particular, for t ∈ O(1), it is a 1/m^{1/2+Θ(1)}-bias protocol. For the three-party case,
it is an O(√(log m)/m)-bias protocol, improving over the protocol of Haitner and Tsfadia.
Our protocol generalizes that of Haitner and Tsfadia
by presenting appropriate ``defense protocols'' for the remaining parties to interact in, in case some parties abort or are caught cheating.
We analyze the new protocols by presenting a new paradigm for analyzing the fairness of coin-flipping protocols. We map the set of adversarial strategies that try to bias the honest parties' outcome in the protocol to the set of feasible solutions of a linear program. The gain each strategy achieves is the value of the corresponding solution. We then bound the optimal value of the linear program by constructing a feasible solution to its dual.
Ran Cohen and Iftach Haitner and Eran Omri and Lior Rotem
Characterization of Secure Multiparty Computation Without Broadcast
TCC 2016 (A)
Journal of Cryptology 2017 [link]
A major challenge in the study of cryptography is characterizing the necessary and sufficient assumptions required to carry out a given cryptographic task. The focus of this work is the necessity of a broadcast channel for securely computing symmetric functionalities (where all the parties receive the same output) when one third of the parties, or more, might be corrupted. Assuming all parties are connected via a peer-to-peer network, but no broadcast channel (nor a secure setup phase) is available, we prove the following characterization:
Iftach Haitner and Yuval Ishai and Eran Omri and Ronen Shaltiel
Parallel Hashing via List Recoverability
Crypto 2015 [link]
Motivated by the goal of constructing efficient hash functions, we investigate the possibility of hashing a long message by only making parallel, non-adaptive calls to a hash function on short messages. Our main result is a simple construction of a collision-resistant hash function h:{0,1}k
Iftach Haitner, Jonathan J. Hoch, Omer Reingold
and Gil Segev
Finding Collisions in Interactive Protocols — Tight Lower Bounds on the Round Complexity and Communication Complexity of Statistically Hiding Commitments
SIAM Journal of Computing 2015 [link]
Draft of full version [pdf]
We study the round complexity and communication complexity of various cryptographic protocols. We give tight lower bounds on the round complexity and communication complexity of any fully black-box reduction of a statistically hiding commitment scheme from one-way permutations, and from trapdoor permutations. As a corollary, we derive similar tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties. Our techniques extend the collision-finding oracle due to Simon [EUROCRYPT '98] to the setting of interactive protocols, and the reconstruction paradigm of Gennaro and Trevisan [FOCS '00].
In their seminal work, Impagliazzo and Rudich [STOC '89] showed that no key-agreement protocol exists in the random-oracle model, yielding that key agreement cannot be black-box reduced to one-way functions. In this work, we generalize their result, showing that, to a large extent, no-private-input, semi-honest, two-party functionalities that can be securely implemented in the random oracle model can be securely implemented information theoretically (where parties are assumed to be all powerful, and no oracle is given). Using a recent information-theoretic impossibility result by McGregor et al. [FOCS '10], our result yields that certain functionalities (e.g., inner product) cannot be computed both accurately and in a differentially private manner in the random oracle model, implying that protocols for computing these functionalities cannot be black-box reduced to the existence of one-way functions.
Itay Berman and Iftach Haitner and Aris Tentes
Coin Flipping of Any Constant Bias Implies One-Way Functions
 [slides]
STOC 2014
Journal of the ACM 2018 [link]
Draft of full version [pdf]
We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., 0.499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (√2−1)/2 ≈ 0.207. Unlike the result of Haitner and Omri, our result holds also for weak coin-flipping protocols.
In a multiparty, fair coin-flipping protocol, the parties output a common (close to) unbiased bit, even when some corrupted parties try to bias the output.
Cleve [STOC 1986] showed that in the case of dishonest
majority (i.e., at least half of the parties can be corrupted), the corrupted parties can bias the honest parties' common output bit
by Ω(1/m) in any m-round coin-flipping protocol. For more than two decades, however, the best known
coin-flipping protocols had bias Θ(t/√m)
against dishonest majority, where t is the number of
corrupted parties. This was changed by a recent breakthrough result of Moran et al. [TCC 2009],
who constructed an m-round,
two-party coin-flipping protocol with optimal bias Θ(1/m). In a
subsequent work, Beimel et al. [Crypto 2010] extended this result to the multiparty
case, in which less than 2⁄3 of the parties can be corrupted. More specifically, for any t < 2⁄3·k, they presented an m-round, k-party
protocol with bias Θ(2^{2t−k}/m) against (up to) t corrupted parties. Still, for the case of 2⁄3 (or more) corrupted parties, the best known protocol had bias Θ(t/√m). In particular,
this was the state of affairs for the natural three-party case (against
two corrupted parties).
We make a step towards eliminating the above gap, presenting an m-round, three-party coin-flipping protocol with bias O(log³m/m). Our approach (which we also apply to the two-party case) does not
follow the “threshold round” paradigm used in the work of Moran et al. and Beimel et al.,
but rather is a takeoff on the majority protocol of Cleve (used to obtain the Θ(t/√m)-bias protocol mentioned above).
Itay Berman and Iftach Haitner
From Non-Adaptive to Adaptive Pseudorandom Functions
TCC 2012
Journal of Cryptology 2013 [link]
Draft of full version [pdf]
Iftach Haitner and Eran Omri
Coin Flipping with Constant Bias Implies One-Way Functions
  [slides]
FOCS 2011
SIAM Journal of Computing 2013 [link]
Draft of full version [pdf]
It is well known (cf. Impagliazzo and Luby [FOCS '89]) that the existence
of almost all ``interesting'' cryptographic applications, i.e., ones that cannot hold information theoretically, implies one-way functions.
An important exception where the above implication is not known, however, is the case of coin-flipping protocols. Such protocols allow
honest parties to mutually flip an unbiased coin, while guaranteeing that even a cheating (efficient) party cannot bias the output of the
protocol by much. Impagliazzo and Luby proved that coin-flipping protocols that are safe against negligible bias do imply one-way
functions, and, very recently, Maji, Prabhakaran, and Sahai [FOCS '10]
proved the same for constant-round protocols (with any non-trivial bias). For the general case, however, no such implication was known.
We make progress towards answering the above fundamental question, showing that (strong) coin-flipping protocols safe against a constant
bias (concretely, (√2−1)/2 ≈ 0.207) imply one-way functions.
Iftach Haitner, Omer Reingold and Salil Vadhan
Efficiency Improvements in Constructing Pseudorandom Generators from One-way Functions
[slides]
STOC 2010
SIAM Journal of Computing 2013 [link]
Draft of full version [pdf]
We give a new construction of pseudorandom generators from any one-way function. The construction achieves better parameters and is simpler than that given in the seminal work of Håstad, Impagliazzo, Levin and Luby [SICOMP '99]. The key to our construction is a new notion of next-block pseudoentropy, which is inspired by the notion of ``inaccessible entropy'' recently introduced in Haitner, Reingold, Vadhan and Wee [STOC '09]. An additional advantage over all previous constructions is that our pseudorandom generators are highly parallelizable and invoke the one-way function in a non-adaptive manner. Using a result of Applebaum, Ishai and Kushilevitz [SICOMP '06], this implies the existence of pseudorandom generators in NC⁰ based on the existence of one-way functions in NC¹.
Iftach Haitner
A Parallel Repetition Theorem for Any Interactive Argument
[slides]
FOCS 2009
SIAM Journal of Computing 2013 [link]
Draft of full version [eccc]
(The SICOMP version contains an error in Prop. 2.5, which we fix in the ECCC version. The error is minor
and has no effect on the rest of the text.)
The question of whether parallel repetition reduces the soundness error is a fundamental one in the theory of protocols.
While parallel repetition reduces (at an exponential rate) the error in interactive proofs and (at a weak exponential rate) in special cases of interactive
arguments (e.g., 3-message protocols - Bellare, Impagliazzo and Naor [FOCS '97], and
public-coin protocols - Håstad, Pass, Pietrzak and
Wikström [TCC '10]), Bellare et al. gave an example of interactive arguments for which parallel repetition does not reduce the soundness
error at all.
We show that by slightly modifying any interactive argument, in a way that preserves its completeness and only
slightly deteriorates its soundness, we get a protocol for which parallel repetition does reduce the error at a weak exponential rate.
In this modified version, the verifier flips, at the beginning of each round, a (1 − 1/4m, 1/4m)-biased
coin (i.e., 1 is tossed with probability 1/4m), where m is the round complexity of the
(original) protocol. If the coin is one, the verifier halts the interaction and accepts; otherwise, it sends the same message that the
original verifier would. At the end of the protocol (if reached), the
verifier accepts if and only if the original verifier would.
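Schematically, the modified verifier can be written as the following Python sketch; the next_message/receive/accepts interfaces and the dummy parties are hypothetical placeholders, included only so that the control flow described above is explicit and the snippet runs.

    import random

    def random_terminating_verifier(original_verifier, prover, m):
        """Run the m-round interaction, halting early and accepting with prob. 1/4m per round."""
        for i in range(m):
            if random.random() < 1.0 / (4 * m):    # the biased coin came up 1
                return True                        # halt the interaction and accept
            prover.receive(original_verifier.next_message(i))
            original_verifier.receive(prover.next_message(i))
        return original_verifier.accepts()         # the original decision, if reached

    class DummyVerifier:                           # minimal stubs so the sketch runs
        def next_message(self, i): return f"challenge-{i}"
        def receive(self, msg): pass
        def accepts(self): return False

    class DummyProver:
        def next_message(self, i): return f"response-{i}"
        def receive(self, msg): pass

    print(random_terminating_verifier(DummyVerifier(), DummyProver(), m=10))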
Yevgeniy Dodis, Iftach Haitner and Aris Tentes
On the Instantiability of Hash-and-Sign RSA Signatures
[slides]
TCC 2012
Draft of full version [pdf]
The hash-and-sign RSA signature is one of the most elegant and well known signature schemes, extensively used in a wide variety of cryptographic
applications. Unfortunately, the only existing analysis of this popular signature scheme is in the random oracle model, where the resulting
idealized signature is known as the RSA Full Domain Hash signature scheme (RSA-FDH). In fact, prior work has shown several
``uninstantiability'' results for various abstractions of RSA-FDH, where the RSA function was replaced by a family of trapdoor random
permutations, or the hash function instantiating the random oracle could not be keyed. These abstractions, however, do not allow the
reduction and the hash function instantiation to use the algebraic properties of RSA function, such as the multiplicative group structure
of Z_n^*. In contrast, the multiplicative property of
the RSA function is critically used in many standard model analyses of various RSA-based schemes.
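As a reminder of the scheme under discussion, here is a toy and entirely insecure Python sketch of hash-and-sign RSA (RSA-FDH), with textbook-sized parameters and SHA-256 reduced modulo N playing the role of the full-domain hash; it is meant only to fix notation, not to reflect any of the abstractions studied in this work.

    import hashlib

    N, e, d = 3233, 17, 2753                 # toy RSA modulus 61*53 and matching exponents

    def H(message: bytes) -> int:
        """A toy "full-domain hash" into Z_N."""
        return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

    def sign(message: bytes) -> int:
        return pow(H(message), d, N)         # sigma = H(m)^d mod N

    def verify(message: bytes, sigma: int) -> bool:
        return pow(sigma, e, N) == H(message)

    msg = b"hello"
    assert verify(msg, sign(msg))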
Motivated by closing this gap, we consider the setting where the RSA function representation is generic (i.e., black-box) but
multiplicative, whereas the hash function itself is in the standard model, and can be keyed and exploit the multiplicative properties of
the RSA function. This setting abstracts all known techniques for designing provably secure RSA-based signatures in the standard model, and aims to address
the main limitations of prior uninstantiability results. Unfortunately, we show that it is still impossible to reduce the
security of RSA-FDH to any natural assumption even in our model. Thus, our result suggests that in order to prove the security of a given instantiation
of RSA-FDH, one should either use a non-black-box security proof, or use specific properties of the RSA group
that are not captured by its multiplicative structure alone.
We complement our negative result with a positive result, showing that the RSA-FDH signatures can be proven
secure under the standard RSA assumption, provided that the number of signing queries is a-priori bounded.
Iftach Haitner, Yuval Ishai, Eyal
Kushilevitz, Yehuda Lindell and Erez Petrank
Black-Box Constructions of Protocols for Secure Computation
SIAM Journal of Computing 2011 [link]
Draft of full version [pdf]
It is well known that secure computation without an honest majority requires computational assumptions. An interesting question that
therefore arises relates to the way such computational assumptions are used. Specifically, can the secure protocol use the underlying
primitive (e.g., a one-way trapdoor permutation) in a black-box way, treating it as an oracle, or must it be non-black-box (by referring to
the code that computes the primitive)?
Despite the fact that many general constructions of cryptographic
schemes refer to the underlying primitive in a black-box way only,
there are some constructions that are inherently nonblack-box. Indeed,
all known constructions of protocols for general secure computation
that are secure in the presence of a malicious adversary and without an
honest majority use the underlying primitive in a nonblack-box way
(requiring to prove in zero-knowledge statements that relate to the
primitive).
In this paper, we study whether such nonblack-box use is essential. We
answer this question in the negative. Concretely, we present a fully
black-box reduction from oblivious transfer with security against
malicious parties to oblivious transfer with security against
semi-honest parties. As a corollary, we get the first constructions of
general multiparty protocols (with security against malicious
adversaries and without an honest majority) which only make a black-box
use of semi-honest oblivious transfer, or alternatively a black-box use
of lower-level primitives such as enhanced trapdoor permutations or
homomorphic encryption.
Boaz Barak, Iftach Haitner, Dennis Hofheinz
and Yuval Ishai
Bounded Key-Dependent Message Security
Eurocrypt 2010
Draft of full version [pdf]
We construct the first public-key encryption scheme that is proven secure
(in the standard model, under standard assumptions) even when the attacker gets access to encryptions of arbitrary efficient functions of the secret key.
Specifically, under either the DDH or LWE assumption, for all polynomials L and N we obtain a
public-key encryption scheme that resists key-dependent message (KDM) attacks for up to N(k)
public keys and functions of circuit size up to L(k), where k denotes the size of the secret
key. We call such a scheme bounded KDM secure. Moreover, we show that our scheme suffices for
one of the important applications of KDM security: the ability to securely instantiate symbolic protocols with
axiomatic proofs of security.
We also observe that any fully homomorphic encryption scheme which
additionally enjoys circular security and circuit privacy is fully KDM secure in the sense that the encryption and
decryption algorithms can be independent of the polynomials L and N as
above. Thus, the recent fully homomorphic encryption scheme of Gentry [STOC 2009] is fully KDM
secure under certain non-standard hardness assumptions. Previous works obtained either full KDM security in the random oracle
model (Black et al. [SAC 2002]) or security with respect to a very restricted
class of functions (e.g., clique/circular security and affine functions, Boneh et al. [CRYPTO 2008] and
Applebaum et al. [CRYPTO 2009]).
Our main result is based on a combination of the circular-secure
encryption scheme of either Boneh et al. or Applebaum et al. with
Yao's garbled circuit construction.
Finally, we extend the impossibility result of Haitner and Holenstein [TCC 2009], showing that it
is impossible to prove KDM security against a family of query functions that contains exponentially hard pseudorandom functions,
using only black-box access to the query function and the adversary attacking the scheme. This proves that the non-black-box
use that our proof of security makes of the KDM query function is inherent.
Iftach Haitner, Mohammad Mahmoody and David Xiao
New Sampling Protocol and Applications to Basing Cryptographic Primitives on the Hardness of NP
Complexity 2010
Draft of full version [pdf]
We investigate the question of what languages can be decided efficiently with the help of a recursive collision-finding oracle. Such an oracle can be used to break
collision-resistant hash functions or, more generally, statistically hiding commitments. The
oracle we consider, Sam_d, where d is the
recursion depth, is based on the identically-named oracle defined in the work of
Haitner et al. (FOCS '07). Our main result is a constant-round public-coin protocol AM-Sam
that allows an efficient verifier to emulate a Sam_d
oracle for any constant depth d = O(1) with the help of a BPP^NP
prover. AM-Sam allows us to conclude that if L
is decidable by a k-adaptive randomized oracle algorithm with access to a Sam_{O(1)}
oracle, then L ∈ AM[k] ∩ coAM[k].
The above yields the following corollary: if there exists an
O(1)-adaptive reduction that bases constant-round statistically hiding commitment on NP-hardness,
then NP ⊆ coAM and the polynomial hierarchy collapses. The same result holds for any
primitive that can be broken by Sam_{O(1)}, including collision-resistant hash functions and O(1)-round
oblivious transfer where security holds statistically for one of the parties. We also obtain
non-trivial (though weaker) consequences for k-adaptive reductions for any k = poly(n).
Prior to our work, most results in this research direction either applied only to non-adaptive reductions (Bogdanov and Trevisan
[SIAM J. of Comp. '06]) or to primitives with special regularity properties (Brassard [FOCS '79], Akavia et al. [FOCS '06]).
The main technical tool we use to prove the above is a new constant-round public-coin protocol (SampleWithSize)
that we believe may be interesting in its own right, and that guarantees the following. Given an efficient function
f on n bits, let D = f(U_n) be its output distribution; then SampleWithSize allows an efficient verifier Arthur to use an all-powerful prover Merlin's help to sample a random y ← D along with a good multiplicative approximation of the probability p_y = Pr_{y'←D}[y' = y]. The crucial feature of SampleWithSize is that it extends even to distributions of the form D = f(U_S), where U_S is the uniform distribution on an efficiently decidable subset S ⊆ {0,1}^n (such D are called efficiently samplable with post-selection), as long as the verifier is also given a good approximation of the value |S|.
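The guarantee of SampleWithSize can be stated concretely by brute force on a toy function: the protocol lets an efficient verifier obtain, with the prover's help, what the following Python snippet computes by enumerating a tiny domain (the function f below is an arbitrary example).

    import random
    from collections import Counter

    n = 12                                        # toy input length
    def f(x: int) -> int:
        return (x * x + 3 * x) % 97               # an arbitrary efficient function

    counts = Counter(f(x) for x in range(2 ** n)) # the output distribution D = f(U_n)

    x = random.randrange(2 ** n)
    y = f(x)                                      # a random sample y <- D
    p_y = counts[y] / 2 ** n                      # its exact probability under D
    print(f"sampled y = {y} with p_y = {p_y:.6f}")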
Iftach Haitner, Minh-Huyen Nguyen, Shien Jin Ong, Omer Reingold and Salil Vadhan
Statistically Hiding Commitments and Statistical Zero-Knowledge Arguments from Any One-Way Function
SIAM Journal of Computing 2009 [link]
Draft of full version [pdf]
We give a construction of statistically hiding commitment schemes (those in which the hiding property holds against even computationally unbounded adversaries) under the minimal complexity assumption that one-way functions exist. Consequently, one-way functions suffice to give statistical zero-knowledge arguments for any NP statement (whereby even a computationally unbounded adversarial verifier learns nothing other than the fact that the assertion being proven is true, and no polynomial-time adversarial prover can convince the verifier of a false statement). These results resolve an open question posed by Naor et al. [J. Cryptology 98].
Iftach Haitner, Alon Rosen and Ronen Shaltiel
On the (Im)Possibility of Arthur-Merlin Witness Hiding Protocols
[slides]
TCC 2009 [pdf]
The concept of witness-hiding suggested by Feige and Shamir [STOC 90] is a natural relaxation of zero-knowledge. In this paper we
identify languages and distributions for which many known constant-round
public-coin protocols with negligible soundness error cannot be shown to be witness-hiding using
black-box techniques. In particular, our results imply that it is impossible to prove that
parallel repetition of 3-Colorability or Hamiltonicity is witness-hiding for distributions that support instances with exactly one witness, if the
proof of security is by a black-box reduction that is independent of the choice of the commitment scheme used in the protocol. This lower bound
conceptually matches an upper bound of Feige and Shamir that uses such a black-box reduction to show that parallel repetition of 3-Colorability
or Hamiltonicity is witness-hiding for distributions with ``two independent witnesses''.
We also consider black-box reductions for 3-Colorability or Hamiltonicity that depend on a specific
implementation of the commitment scheme. While we cannot rule out such reductions completely, we show that
``natural reductions'' cannot bypass the limitations above.
Our proofs use techniques developed by Goldreich and Krawczyk [ICALP 90] for the
case of zero knowledge. The setup of witness-hiding, however, presents new technical and
conceptual difficulties that do not come up in the setup of zero-knowledge. The
high level idea is that if a black-box reduction establishes the witness-hiding
property for a protocol that is also a proof of knowledge, then this latter
property can be used ``against the reduction" to find witnesses unconditionally.
Iftach Haitner and Thomas Holenstein
On the (Im)Possibility of Key Dependent Encryption
[slides]
TCC 2009
Draft of full version [pdf]
We study the possibility of constructing encryption schemes secure under messages that are chosen depending on the key k of the encryption scheme itself. We give the following separation results that hold both in the private and in the public key settings:
Iftach Haitner
New Implications and Improved Efficiency of Constructions Based on One-way Functions
PhD. Thesis, 2008
[pdf]
Iftach Haitner
Semi-Honest to Malicious Oblivious Transfer -- The Black-Box Way
[slides]
TCC 2008
Full version is part of Black-Box Constructions of Protocols for Secure Computation
Until recently, all known constructions of oblivious transfer protocols based on general hardness assumptions had the following form. First, the hardness assumption is used in a black-box manner (i.e., the construction uses only the input/output behavior of the primitive guaranteed by the assumption) to construct a semi-honest oblivious transfer, a protocol whose security is guaranteed to hold only against adversaries that follow the prescribed protocol. Then, the latter protocol is ``compiled'' into a (malicious) oblivious transfer using non-black-box techniques (a Karp reduction is carried out in order to prove an NP statement in zero-knowledge). In their recent breakthrough result, Ishai, Kushilevitz, Lindell and Petrank [STOC '06] deviated from the above paradigm, presenting a black-box reduction from oblivious transfer to enhanced trapdoor permutations and to homomorphic encryption. Here we generalize their result, presenting a black-box reduction from oblivious transfer to semi-honest oblivious transfer. Consequently, oblivious transfer can be black-box reduced to each of the hardness assumptions known to imply a semi-honest oblivious transfer in a black-box manner. This list currently includes, besides the hardness assumptions used by Ishai et al., the existence of families of dense trapdoor permutations and of non-trivial single-server private information retrieval.
Iftach Haitner, Jonathan J. Hoch and Gil Segev
A Linear Lower Bound on the Communication Complexity of Single-Server Private Information Retrieval
[slides]
TCC 2008
Full version is part of Finding Collisions in Interactive Protocols — Tight Lower Bounds on the Round Complexity and
Communication Complexity of Statistically Hiding Commitments
We study the communication complexity of single-server Private Information Retrieval (PIR) protocols that are based on fundamental cryptographic
primitives in a black-box manner. In this setting, we establish a tight lower bound on the number of bits communicated by the server in any
polynomially-preserving construction that relies on trapdoor permutations. More specifically, our main result states that in such
constructions Ω(n) bits must be communicated by the server, where n
is the size of the server's database, and this improves the Ω(n / log n) lower bound due to
Haitner, Hoch, Reingold and Segev [FOCS '07]. Therefore, in the setting under consideration, the naive solution in which the user downloads the
entire database turns out to be optimal up to constant multiplicative factors. We note that the lower bound we establish holds for the most
generic form of trapdoor permutations, including in particular enhanced trapdoor permutations.
Technically speaking, this paper consists of two main contributions from which our lower bound is obtained. First, we derive a tight lower
bound on the number of bits communicated by the sender during the commit stage of any black-box construction of a statistically-hiding
bit-commitment scheme from a family of trapdoor permutations. This lower bound asymptotically matches the upper bound provided by the
scheme of Naor, Ostrovsky, Venkatesan and Yung [CRYPTO '92]. Second, we improve the efficiency of the reduction of statistically-hiding
commitment schemes to low-communication single-server PIR, due to Beimel, Ishai, Kushilevitz and Malkin [STOC '99]. In particular, we
present a reduction that essentially preserves the communication complexity of the underlying single-server PIR protocol.
Iftach Haitner, Jonathan J. Hoch, Omer Reingold
and Gil Segev
Finding Collisions in Interactive Protocols -- A Tight Lower Bound on the Round Complexity of Statistically-Hiding Commitments
[slides]
FOCS 2007
Full version is part of Finding Collisions in Interactive Protocols — Tight Lower Bounds on the Round Complexity and
Communication Complexity of Statistically Hiding Commitments
We study the round complexity of various cryptographic protocols. Our main result is a tight lower bound on
the round complexity of any fully-black-box construction of a statistically-hiding commitment scheme from one-
way permutations, and even from trapdoor permutations. This lower bound matches the round complexity of the
statistically-hiding commitment scheme due to Naor, Ostrovsky, Venkatesan and Yung [CRYPTO ’92].
As a corollary, we derive similar tight lower bounds for several other cryptographic protocols, such as single-server private information retrieval, interactive hashing, and oblivious transfer that guarantees statistical security for one of the parties.
Our techniques extend the collision-finding oracle due to Simon [EUROCRYPT ’98] to the setting of interactive protocols
(our extension also implies an alternative proof for the main property of the original oracle). In addition, we substantially extend the reconstruction paradigm of
Gennaro and Trevisan [FOCS '00]. In both cases, our extensions are quite delicate and may be found useful in proving
additional black-box separation results.
Iftach Haitner and Omer Reingold
Statistically Hiding Commitment from Any One-Way Function
[slides]
STOC 2007 [pdf]
Full version is part of Statistically Hiding Commitments and Statistical Zero-Knowledge Arguments from Any One-Way Function
We give a construction of statistically-hiding commitment schemes (ones where the hiding property holds information theoretically), based on the minimal cryptographic assumption that one-way functions exist. Our construction employs two-phase commitment schemes, recently constructed by Nguyen, Ong and Vadhan [FOCS '06], and universal one-way hash functions introduced and constructed by Naor and Yung [STOC '89] and Rompel [STOC '90].
Iftach Haitner and Omer Reingold
A New Interactive Hashing Theorem
[slides]
Complexity 2007
Journal of Cryptology 2014 [link]
Draft of full version [pdf]
Interactive hashing, introduced by Naor, Ostrovsky, Venkatesan and Yung [CRYPTO '92],
plays an important role in many cryptographic protocols. In particular, interactive hashing is a major component in all known constructions
(which are based on general one-way permutations and one-way functions) of statistically hiding commitment schemes and of statistical
zero-knowledge arguments. Interactive hashing with respect to a one-way permutation f, is a
two-party protocol that enables a sender that knows y=f(x) to transfer a random hash
z=h(y) to a receiver. The receiver is guaranteed that the sender is committed to y (in the sense that it cannot
come up with x and x' such that f(x) ≠ f(x'), but h(f(x))=h(f(x'))=z). The sender
is guaranteed that the receiver does not learn any additional information on y. In particular, when h is a two-to-one hash
function, the receiver does not learn which of the two preimages {y,y'} = h⁻¹(z)
is the one the sender can invert with respect to f.
This paper reexamines the notion of interactive hashing. We give an alternative proof for the Naor et al. protocol, which seems
significantly simpler and more intuitive than the original one. Moreover, the new proof achieves much better parameters (in terms of
how security preserving the reduction is). Finally, by applying our new proof to a close variant of the Naor et al. protocol, we achieve a more
versatile interactive hashing theorem for a more general setting than that of the Naor et al. protocol.
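A toy Python transcript in the spirit of the protocol: the receiver sends random linear queries over GF(2)^n one at a time, and the sender answers with the inner products of its value y against each query, so that the receiver's view pins y down to (at least) two candidates. The sketch omits the linear-independence checks and all security considerations, and the brute-force enumeration at the end is there only to display the surviving candidates.

    import random

    n = 16
    y = random.getrandbits(n)                      # the sender's value y = f(x)

    def inner_product(a: int, b: int) -> int:
        return bin(a & b).count("1") % 2           # inner product over GF(2)

    queries = [random.getrandbits(n) for _ in range(n - 1)]   # receiver's messages
    answers = [inner_product(q, y) for q in queries]          # sender's replies

    candidates = [z for z in range(2 ** n)
                  if all(inner_product(q, z) == a for q, a in zip(queries, answers))]
    print(f"{len(candidates)} candidates remain; y is among them: {y in candidates}")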
Iftach Haitner, Danny Harnik, and Omer Reingold
On the Power of the Randomized Iterate
[slides]
Crypto 2006
SIAM Journal of Computing 2011 [link]
Draft of full version [pdf]
We consider two of the most fundamental theorems in Cryptography. The first, due to Håstad, Impagliazzo, Levin and Luby [STOC '89, STOC '90,
SIAM J. on Computing '99], is that pseudorandom generators can be constructed from any one-way function. The second, due to Yao [FOCS
'82], states that the existence of weak one-way functions implies the
existence of full-fledged one-way functions. These powerful plausibility results shape our understanding of hardness and randomness
in Cryptography, but unfortunately their proofs are not as tight (i.e., security preserving) as one may desire.
This work revisits a technique that we call the randomized
iterate, introduced by Goldreich, Krawczyk and Luby [SIAM J.
on Computing '93]. This technique was used by Goldreich et al. to give a construction of pseudorandom generators from regular one-way
functions. We simplify and strengthen this technique in order to obtain a similar construction where the seed length of the resulting
generators is as short as Θ(n log n) rather than the Θ(n³)
achieved by Goldreich et al. Our technique has the potential of implying seed-length Θ(n),
and the only bottleneck for such a result is the parameters of current generators against space bounded computations. We give a construction
with similar parameters for security amplification of regular one-way functions. This improves upon the construction of Goldreich,
Impagliazzo, Levin, Venkatesan and Zuckerman [FOCS '90] in that the
construction does not need to ``know'' the regularity parameter of the functions (in terms of security, the two reductions are incomparable).
In addition, we use the randomized iterate to show a construction of a pseudorandom generator based on an exponentially hard one-way function
that has a seed length of only Θ(n²). This improves upon a recent result of Holenstein [TCC '06] that shows a construction with seed length Θ(n⁵).
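For readers unfamiliar with the technique, here is a toy Python sketch of the randomized-iterate structure: starting from a random x_0, one alternately applies the one-way function and fresh, publicly chosen pairwise-independent hash functions. The function f and the domain below are arbitrary stand-ins used only for illustration.

    import random

    P = 2 ** 13 - 1                                # a small prime; the toy domain is Z_P

    def f(x: int) -> int:
        return pow(x, 3, P)                        # arbitrary stand-in for a (regular) OWF

    def random_pairwise_hash():
        a, b = random.randrange(P), random.randrange(P)
        return lambda x: (a * x + b) % P           # h(x) = a*x + b mod P

    def randomized_iterate(x0: int, k: int):
        """Return x_1, ..., x_k with x_1 = f(x_0) and x_{i+1} = f(h_i(x_i)); the h_i are public."""
        hashes = [random_pairwise_hash() for _ in range(k - 1)]
        xs = [f(x0)]
        for h in hashes:
            xs.append(f(h(xs[-1])))
        return xs, hashes

    xs, hs = randomized_iterate(random.randrange(P), k=8)
    print(xs)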
Iftach Haitner, Danny Harnik
and Omer Reingold
On Efficient Pseudorandom Generators from Exponentially Hard One-Way Functions
[slides]
ICALP 2006
Full version is part of On the Power of the Randomized Iterate
We show a construction of a pseudorandom generator from any exponentially hard one-way function with a blowup of only Θ(n²). This improves upon the recent Θ(n⁵) construction of Holenstein [TCC '06]. Our technique uses the tools recently presented in Haitner et al. [Crypto '06] for the setting of regular one-way functions, and further develops them.
Iftach Haitner, Omer Horvitz, Jonathan Katz, Chiu-Yuen Koo, Ruggero Morselli and Ronen Shaltiel
Reducing Complexity Assumptions for Statistically-Hiding Commitment
[slides]
Eurocrypt 2005
Journal of Cryptology 2009 [link]
Draft of full version [pdf]
We revisit the following question: what are the minimal
assumptions needed to construct statistically-hiding commitment schemes?
Naor et al. [J. Cryptology 98] show how to construct such schemes based on any one-way
permutation. We improve upon this by showing a construction based on any approximable preimage-size one-way function.
These are one-way functions for which it is possible to efficiently approximate the number of pre-images of a given output. A special case
is the class of regular one-way functions where all points in the image of the function have the same (known) number of
pre-images.
We also prove two additional results related to statistically hiding commitment. First, we prove a (folklore) result
showing, roughly speaking, that the statistical hiding property of any such commitment scheme is amplified exponentially when multiple independent parallel
executions of the scheme are carried out. Second, we show a compiler which transforms any commitment scheme which is statistically hiding
against an honest-but-curious receiver into one which is statistically hiding even against a malicious receiver.
Iftach Haitner
Implementing Oblivious Transfer Using a Collection of Dense Trapdoor Permutations
[slides]
TCC 2004
Draft of full version [pdf]
Until recently, the existence of a collection of trapdoor permutations (TDP) was believed (and claimed) to imply almost all of the
major cryptographic primitives, including public-key encryption (PKE), oblivious transfer (OT), and non-interactive zero-knowledge (NIZK). It
was recently realized, however, that the commonly accepted general definition of TDP needs to be strengthened slightly in order to make the
security proofs of TDP-based OT go through. We present an implementation of oblivious transfer based on a collection of dense trapdoor permutations.
The latter is a collection of trapdoor permutations, with the property that the permutation domains are polynomially dense in the
set of all strings of a particular length. Previous TDP-based implementations of oblivious transfer assumed an enhancement of the hardness
assumption (of the collection).
Here we present an alternative construction that only requires the TDP to have dense domains. Specifically, we present an implementation of OT based on any dense TDP.