Review of the
Financial Cryptography 2005 Conference
Roseau, Dominica
February 28 - March 3, 2005

Review by Allan Friedman
May 15, 2005


Introduction (Sven Dietrich, Carnegie Mellon University)

Financial Cryptography 2005 was held in Roseau, Dominica (that's Dominica, between Guadeloupe and Martinique, NOT the Dominican Republic on the island of Hispaniola shared with Haiti). The venue was the Fort Young Hotel in the heart of Roseau, and about 60 representatives from research and industry around the world attended. The hotel was booked solid very early, so many of us had to seek alternate housing at surrounding hotels.

The Balas Bar in the Fort Young ended up being the break area for the sessions and a general meeting area before and after the talks. Wireless service (sometimes reliable, sometimes not, your mileage did vary) was available both in the conference room and outside in the break area, which was quite useful in case someone needed information or references. However, there were ample opportunities to network with colleagues throughout the day until late in the night, e.g. at the Rump Session and Banquet.

The conference was peppered with cryptograms (mostly - if not all - courtesy of Ray Hirschfeld) to jog your brain: the obligatory T-shirt contained them front and back, and the meal coupons were anagrams of each meal (Monday breakfast, Monday lunch, etc.). Yes, what cryptographers do for fun...

Motto: It's not a junket! Really. [Ed. Really?] See you at FC'06 in Anguilla, BWI!

Notes on presentations (Allan Friedman, Harvard University)
Monday, February 28, 2005
Threats and Attacks (Session chair: Avi Rubin)

Fraud with Asymmetric Multi-hop Cellular Networks - Gildas Avoine

The problem is how to encourage nodes in a mobile ad hoc network to forward traffic to the next hop on its path to or from the station. The originator of the message should be charged, but intermediaries should be rewarded. The scheme has to be lightweight. Small amounts of cheating are tolerable, but systematic fraud should be detectable and punishable. Every message carries the sender's MAC, computed with a symmetric key known only to the sender and the station, which the forwarders then submit to the station for some (50-75%) chance of reward. The system can be subverted by two nodes in the same cell that simply bypass the station and so avoid a charge; this can be solved by forcing message authentication. Another problem is the reliance on the one key for authentication, which leaves the station vulnerable to an oracle attack to determine the key. Using a hash of the key helps.
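The submit-for-reward step can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the names, packet format, and the 50% payout probability are placeholders, and the real scheme's anti-fraud accounting is omitted.

```python
import hmac, hashlib, secrets, random

# Symmetric key shared between each sender and the station (illustrative).
STATION_KEYS = {"alice": secrets.token_bytes(16)}

def send(sender: str, payload: bytes) -> dict:
    # The sender attaches a MAC computed under the key it shares with the station.
    tag = hmac.new(STATION_KEYS[sender], payload, hashlib.sha256).digest()
    return {"sender": sender, "payload": payload, "tag": tag}

def station_claim(packet: dict, reward_prob: float = 0.5) -> bool:
    # A forwarder submits the packet it relayed; the station verifies the MAC
    # and, with some probability (50-75% in the talk), pays out a reward.
    key = STATION_KEYS[packet["sender"]]
    expected = hmac.new(key, packet["payload"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, packet["tag"]):
        return False  # forged claim: no reward
    return random.random() < reward_prob
```

The probabilistic payout keeps the bookkeeping lightweight while still making systematic cheating unprofitable on average.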

Protecting Secret Data from Insider Attacks - David Dagon, Wenke Lee

The goal is to make it as hard as possible for someone who has gained access to a database (intruder or insider) to learn something useful. They present a storage system that is incredibly large, so that finding useful data is difficult even if someone has subverted some of the defenses. It comprises a tera-scale padded table with encrypted data broken into shares. It should be big enough that it can't be stored in memory, forcing a disk access and the ensuing performance drag. Lagrange interpolation can be used for data integrity. Linear scans of the table are useless, as is any single chunk of the table. A brute-force online attack is really slow, since it requires disk I/O, while proper usage with the appropriate information can run 300 disk lookups/second. The speed difference between memory (proper usage) and disk (brute-force search for shares) can be thought of as a "poor man's" one-way function.

Countering Identity Theft through Digital Uniqueness, Location Cross-Checking, and Funneling - P.C. van Oorschot and S. Stubblebine

Document-driven authentication has failed: easy to duplicate, hard to detect duplicates, no back-channel for document management. Assert uniqueness of credential through location. Use cell phones with geolocation enabled (911 service) which will allow for authenticator entry tied to a geographic space. For each authentication, query the device: if there is one signal (PIN) entered, assume proper use. If no signal or more than one signal is received, there is a system error. Whenever ID is asserted, verify through local phone signal. Theft of device is noticed, cloning causes multiple signals, so neither is a very strong attack. There are still many legal and economic details to work out for privacy issues.

First Keynote (Session chair: Moti Yung)
Trust and Swindling on the Internet - Bezalel Gavish

Fraud in online auctions has grown. Trust is important, and Gavish detailed many fraud schemes currently being used on online auction sites that prey on flaws in the trust-based system. Many auction sites claim a fraud rate of less than 1%; this doesn't correspond with anecdotal evidence, but we can't easily get data from the auction houses themselves. He ran a survey of 130 auction winners (a 10% response rate from 1300), 21% of whom were dissatisfied with their experience. BUT - that's only 2% of all auctions, and there is a likelihood of reporting bias. It's harder than you'd think to get good data about the results of auctions, but there is reason to believe the fraud rate is higher than we'd think.

Digital Signing Methods (Session chair: Giovanni di Crescenzo)
Identity-based Partial Message Recovery Signatures (or How to Shorten ID-based Signatures) - Fangguo Zhang (Sun Yat Sen University, P.R. China) and Yi Mu and Willy Susilo (University of Wollongong, Australia)

Can we get the security of an L-bit key's signature (difficulty to forge) with fewer than L bits if we want to recover the message? Using Abe-Okamoto's (1999) method for shortening signatures with message recovery, they are able to use the signature to recover discarded bits of a truncated message. This is a good work-around for schemes that have fixed message lengths.

Time Capsule Signature - Yevgeniy Dodis (NYU) and Dae Hyun Yum (POSTECH)

Goal: a signature that becomes valid at a future time t, but is not valid now. They use absolute time to define the "time capsule" signature's activation, but the time server can be independent of this protocol (it doesn't need to contact anyone at any specific time). The other party should be able to verify that the signature will be valid at time t. They construct "identity-based trap-door hard-to-invert relations". Clever idea, but unclear how it would be used.

Policy-Based Cryptography and Applications - Walid Bagga and Refik Molva (Eurecom)

Most security systems use policy for access control and crypto for confidentiality and authentication. These systems are usually slapped together, since they are hard to combine. Policies are monotonic logical expressions of ands and ors, defined through trusted authority and assertion. Policy-based encryption integrates policy with crypto. Keys can be policies: decrypt a message only if the predefined policy/key allows. Suppose a client sends an encrypted request to service provider with policy for privacy certification. The SP can only decrypt if it has a cert issued by an authority that the sender likes. While trust negotiation is terribly flawed on this level, this scheme sidesteps the negotiation phase, making communication more efficient.
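A monotonic policy of ands and ors over credential assertions can be represented as a small expression tree. The sketch below shows only the policy-evaluation side, with made-up credential names; it does not show the cryptographic part, where the scheme binds the policy into the encryption itself so decryption succeeds only with satisfying credentials.

```python
# Policy is a nested tuple: ("and", p1, p2, ...) / ("or", p1, ...) / an
# atomic assertion string such as "privacy-cert:TRUSTe" (illustrative names).

def satisfies(policy, credentials: set) -> bool:
    if isinstance(policy, str):
        return policy in credentials
    op, *terms = policy
    if op == "and":
        return all(satisfies(t, credentials) for t in terms)
    if op == "or":
        return any(satisfies(t, credentials) for t in terms)
    raise ValueError(f"unknown operator {op!r}")

# A service provider can decrypt only if it holds a privacy cert AND a
# recent audit assertion from an authority the sender trusts.
policy = ("and", "privacy-cert:TRUSTe",
                 ("or", "audit:2004", "audit:2005"))
```

Because the policy is monotonic (no negation), adding credentials can only help, which is what lets the scheme skip the back-and-forth of trust negotiation.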

Panel: Phishing
Organizer: Steve Myers, with Drew Dean, Stuart Stubblebine, Richard Clayton, and Mike Szydlo

Steve reminded us that phishing is an attack that combines social engineering and technology to get valuable information, or otherwise take advantage of the victim. Phishers fool their victims with convincing reasons to visit misleading sites, often using fake addresses or even fake SSL certs. RSA ran a survey showing that consumers are now more scared. Anti-phishing.org shows phishing emails are on the rise.

Stuart talked about the futility of both technical and social countermeasures. Some advice we give consumers is conflicting (Citi says don't click on unsolicited emails, then sends them) or less than helpful in the long run (eliminate the use of clicking in email??). Fooling the phishing site by always giving the wrong password first won't work for long as scammers become more sophisticated. Merchants don't want to give up the direct channel of email, but as email becomes riskier, who knows?

Drew reminded us that man-in-the-middle web-spoofing attacks are as old as the web. Phishing should be easier to protect against, since it is spam rather than true MitM, but it spreads more easily. Phishing attacks the user, not the computer. The attacks aren't that technically sophisticated, and some client-side defenses should work. Why don't we synch our spam filter with our browser? Tools like the Stanford SpoofGuard would also help.

Steve proposed an alternate protocol for authenticating websites. It's easy to duplicate a website, and hard for firms to police against trademark usage from others. The password model authenticates the user to the site, but not the site to the user. Use images sent back from the website for the user to authenticate as the password is typed in letter by letter. User just has to *recall* the images, not remember them explicitly. No hardware is required, and can have a large image space. The plan is still vulnerable to a MitM attack, but it makes phishing harder.

Richard reiterated that any single tool has enough flaws to conclude that it will not solve the problem. Even if a user can reliably prove her ID, there are no mechanisms binding authentication to action, so Alice can think she is paying her gas bill while the attacker withdraws 1000 pounds. The intermediary software must be trustworthy, and the distribution itself is vulnerable to more phishing, as is the distribution of client certificates. He charged that current browser technology doesn't guarantee the credibility of anything on the screen, and that fully defensive behavior would probably prevent the user from using the bank's services. However, he argued that implementing many small steps would raise the cost of phishing enough to potentially reduce it.

Discussion after the panel wondered about the impact of these security flaws on the consumer market, and whether the market could drive better security. There was some debate over whether regulation and liability needed to be adjusted to foster the appropriate environment. Someone in the audience pointed out that the bad guys were better at cooperating than the good guys. Several people suggested attacking phishing at the level of email media: ISPs can check source domains or blacklist known sites. These solutions are part of a larger network security debate.


Tuesday, March 1, 2005

Privacy (Session chair: David Pointcheval)
A Privacy Protecting Coupon System - Liqun Chen, Matthias Enzmann, Ahmad-Reza Sadeghi, Markus Schneider, Michael Steiner

Repeated-use coupons are useful to both the consumer (get free stuff) and the seller (customer lock-in, brand loyalty). Unlike paper coupons, digital coupons make identity tracking the easiest solution. The vendor needs protection against forging and double use, and use control to prevent multiple users from pooling together. The privacy-sensitive consumer wants to prevent linking between issuance and redemption. The consumer buys a multi-coupon, each part good for one redemption (e.g., buy 10 tunes for the price of 9). They present a signing scheme based on Camenisch-Lysyanskaya (2002) that uses a different exponent root for each signature. To issue a coupon, the customer chooses a random string, computes a binding factor, and computes a value D using the vendor's public key. The vendor then computes a blinded signature, which the customer can then unblind. To redeem a single coupon, the vendor only needs to know that at least one unredeemed coupon exists, but shouldn't be able to link any two coupons to a single multi-coupon. The solution is to show that a provable but undisclosed multi-coupon signature covers the single coupon.
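The paper's construction uses Camenisch-Lysyanskaya signatures, but the blind/unblind step it relies on can be illustrated with a classic RSA blind signature (toy key sizes, purely a sketch): the vendor signs a value it cannot see, and the customer removes the blinding factor to obtain a valid signature the vendor cannot later link to the issuance.

```python
import math, random

# Toy RSA vendor key pair (tiny primes, illustration only).
p, q = 5003, 5009
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)

def blind(m):
    # Customer picks a random blinding factor r and sends m * r^e.
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return m * pow(r, e, n) % n, r

def sign_blinded(mb):
    return pow(mb, d, n)           # vendor signs without seeing m

def unblind(sb, r):
    return sb * pow(r, -1, n) % n  # customer recovers the signature on m

def verify(m, s):
    return pow(s, e, n) == m % n
```

Because (m * r^e)^d = m^d * r mod n, dividing out r leaves exactly the ordinary signature m^d, which the vendor never observed in the clear.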

Testing Disjointness of Private Datasets - Aggelos Kiayias and Antonina Mitrofanova

Suppose two distrustful parties with private datasets from the same lexical universe want to compare these datasets without revealing the contents to each other. The presenters offered three protocols based on homomorphic encryption principles, each optimizing for different attributes. Any private intersection predicate evaluation (PIPE) system should be designed to reflect the size of the lexicon and the sets, revealing only a yes or no as to whether there are any common elements in the two sets. In the first solution, each actor encrypts the entire alphabet using an added primitive denoting the members they possess: any common elements will be observable due to homomorphic properties. This is straightforward, but impractical with large lexicons. The second and third schemes use "superposed encryption", where one party computes a polynomial based on encrypting the held set and superposing it, then encrypting the combination. The latter two are more efficient for smaller sets in larger universes. It is not clear exactly what the ability to test private datasets for overlap would be used for.
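The indicator-vector idea in the first protocol can be sketched with any additively homomorphic scheme. Below is a toy Paillier instance over a five-word universe; the primes are tiny and the protocol is collapsed into one function for illustration (in the real protocol the two parties exchange ciphertexts and only the decryption result is revealed).

```python
import math, random

# Toy Paillier cryptosystem (tiny primes, illustration only).
p, q = 5003, 5009
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

UNIVERSE = ["apple", "banana", "cherry", "date", "elderberry"]

def disjoint(set_a, set_b):
    # Party A publishes an encrypted indicator bit for every lexicon word.
    cts = {w: enc(1 if w in set_a else 0) for w in UNIVERSE}
    # Party B homomorphically accumulates E(sum of r_i * a_i) over its words;
    # the plaintext is 0 exactly when the sets share no element.
    acc = enc(0)
    for w in set_b:
        r = random.randrange(1, 1000)
        acc = acc * pow(cts[w], r, n2) % n2
    return dec(acc) == 0
```

Randomizing each term hides which (and how many) elements overlap: a non-disjoint result decrypts to a random-looking nonzero value, revealing only the yes/no predicate.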

Hardware-oriented mechanisms (Session chair: Jacques Traore)

RFID Traceability: A Multilayer Problem - Gildas Avoine and Philippe Oechslin

The presenters summarized how RFID technology works, and observed that the privacy issues have to do with information leakage (the ID can convey other information about the bearer) and traceability. The former issue can be solved at the vendor level. RFID tags are bad for traceability: they cannot be turned off, they are not always evident to the bearer, the range of readability is increasing, and they present an equipped observer with easy-to-analyze logs. Physical solutions involve physically destroying the tags, preventing them from being read with Faraday cages, or preventing the response from being read with blocker tags. Software solutions that allow the authorized party to read the tag but prevent unauthorized tracking across time/space are just being developed, and have issues. RFID technology has a stack of a physical layer, a communication layer, and an application layer. Any of these layers can be attacked to trace the tag, so a privacy scheme needs to address all layers. The data-based application layer can be attacked based on the multiple signals sent. The communication layer can be attacked where the reader is responsible for preventing collisions, both with deterministic and probabilistic protocols. The interface of the physical layer is very open to eavesdropping, radio fingerprinting, etc. A focus on the application layer won't suffice; ideally, we'll need crypto at the communication layer. Stronger protocols are more expensive. Discussion generated some controversy about killing and jamming tags.

Information-theoretic security analysis of physical unclonable functions - P. Tuyls, B. Skoric, S. Stallinga, A.H. Akkermans and W. Ophey

In crypto-based systems, storing keys on weak media can be the weakest point, allowing cloning and abuse. The solution is to use physical artifacts that are unclonable. They must be easy to evaluate, hard to invert, and unpredictable even to someone who has the function. Multiple different inputs should produce random-looking outputs. Philips has a prototype optical system: shine a light on a pattern, and get a pattern, from which you generate a key. Each angle forms the first half of a different key pair. Like biometrics, they are robust in perfect conditions, but very sensitive to noise. Security-wise, physical cloning requires the actual device. Electronic cloning requires covering the full challenge-response space, or at least extracting all the entropy, which they go on to prove. Questions about how large the space is were mollified, since the prototype has 10^8 challenge options, with thousands of bits of output space.

Supporting Financial Transactions (Session chair: Liqun Chen)

Risk Assurance for Hedge Funds using Zero Knowledge Proofs - Michael Szydlo

Hedge fund investors want to know as much as possible about the risks of their investment. At the same time, the fund follows private, proprietary investment strategies and wishes to keep its holdings as secret as possible. The funds use statistical arbitrage, so they need to keep their actions secret; they are also subject to less regulation, so investors are more exposed if the fund's positions are (statistically) riskier than what the investors approved. If the fund managers make statements about their risk exposure using ZK proofs, investors will be able to measure their risk without being able to learn the actual investments. Risk proofs can be about firm earnings, geopolitics, asset allocation percentages, etc. The investment contract includes ZK proofs based on market predicates, verified by a trusted third party. This is less for individual investors than institutions, who have precise risk models for their investments. Dishonest actions now leave a clear paper trail for fraud. Discussion revealed that the proofs must be defined strictly enough to be useful, but broadly enough to prevent an oracle-like attack.

Systems, Applications and Experiences (Session chair: Lynne Coventry)

Views, Reactions and Impact of Digitally-Signed Mail in e-Commerce - Simson L. Garfinkel, Jeffrey I. Schiller, Erik Nordlander, David Margrave, Robert C. Miller

Secure email has been around for a while, and S/MIME has been around since 1998, but neither seems to be too popular. S/MIME allows the mail client to easily verify a signature; in business circumstances where getting the sender's key is easy, it should be easy to use. S/MIME may not be the best standard (document updates, subject not signed), but it's common. The project is based on the results of a survey of Amazon merchants, since Amazon uses S/MIME for VAT notices. 417 surveys were completed (25% EU, 75% US), and the respondents were reasonably well-off, educated online merchants. Most respondents didn't know if their email client could handle encryption. Most felt that bank and online merchant information should be signed, with some respondents desiring that tax returns and personal correspondence also be signed. Merchants didn't believe that signed messages were more trustworthy. Some felt that signed mail was important, but that it was too complicated or not worth the effort: they had no reason to send out signed emails. In the discussion, it came up that people don't think of a signature as a message verifier, but as a sender verifier.

Securing Sensitive Data with the Ingrian DataSecure Platform - Andrew Koyfman

There is a large need to keep databases secure, but we can't focus on perimeter defenses alone: there are too many breaches, and too many insider attacks. Application-level security that encrypts data before it goes into the DB requires firms to change their apps. So it's easier to add database-level encryption that is transparent to existing apps. And we don't need all the data encrypted, just the sensitive stuff. Encrypting columns can break existing read mechanisms, so they use triggers that only authorized parties can invoke to decrypt data. A network-attached centralized server handles keys and crypto for all users, with each user having different permissions (time of day, rate of processing, etc.). Like all database apps, there are hassles: indexed fields are tricky, and there are legacy issues.

Ciphire Mail Email Encryption and Authentication - Lars Eilebrecht

Why a new encrypted email scheme? The hard part of encrypted email is key distribution: the web of trust doesn't really work for normal users, and trusted third parties needed for X.509 or PKIX are unproven. Ciphire transparently sits between user's mail client and server. The setup uses a centralized server to generate a key for an email address - not a person - without much user involvement. It uses ASN.1 (based on X.509) and encrypts the header as well as the message, and each MIME part separately. The hybrid trust model uses the hierarchical nature of PKI (with some detection against server subversion) but each client checks its own certs, other certs and compares summary hash values with communication partners, so many users would be able to see if something had changed. It's not compatible with extant systems, and the dispute resolution and key revocation bits are sketchy. But the system is trying to be as simple for the user as possible.


Wednesday, March 2, 2005
Second Keynote (Session chair: Andrew Patrick)
Usable Security: A Conundrum? - Lynne Coventry

The goal of most systems is something other than security. The question is whether reconciling usability and security is impossible or just difficult. The default is to make things convenient, rather than secure. Many parties are involved in the burgeoning fraud arena: attackers, legal users of applications, programmers, ISP owners, company managers, etc., all party to the tradeoff, BUT legal users have the steepest cost/benefit range. Since users think of money as safe to start with, security is seen as an added cost to them. A dizzying array of ATM frauds was presented, most capitalizing on users' trust in the banks and the people around them. We can ask users to secure themselves, but PINs have usability issues, and letting users set their own risk by pre-specifying hours or locations may or may not work. Memory sets hard limits on "something you know," and natural behavior around biometric systems leads to false reject rates of 20-30%, so that wraps it up for "something you are." We need more HCI and user studies: people can be their own worst enemy. Even increasing public awareness can backfire by reducing confidence and use. A participant asked how banks make their final decisions on security measures, but there appears to be no clear metric.

Message Authentication (Session chair: Yuliang Zheng)

A User-Friendly Approach to Human Authentication of Messages - Jeff King and Andre dos Santos

If the user trusts a smart card, but doesn't trust the intermediary computer, how can the user trust the smart card to sign a message to a server? They propose using hard AI problems that the device can initiate, the human can solve, but the computer can't. The user needs to be able to extract a unique message from the smart card without the computer identifying (and being able to tamper with) that message. The device generates a 3D ray-traced image with text in it. The user can verify the image and read the text message. Any attacking intermediary would have to redraw the image with a new message, which would require interpreting the scene and re-running a ray tracer. There are other mechanisms one could use: speech or handwriting. The system must be easy to use, and depends on the AI problem's parameters. The developed image system can transmit 20 characters, so it's good for status messages, but not meaningful communication.

Auctions and Voting (Session chair: Yvo Desmedt)

Small Coalitions Cannot Manipulate Voting - Edith Elkind and Helger Lipmaa

Plurality systems create incentives for malicious coalitions to not vote their true preferences in order to manipulate the outcome. Arrow's result shows that manipulation cannot be completely prevented, but we can make it hard to do. A random preround of pair-wise comparisons chooses which half of the candidates go on the ballot, but this can be a source of manipulation itself. They extend Conitzer and Sandholm's results to more than one manipulator, but there is no clear, systematic rule to determine the maximum fraction of manipulators they can allow. A question highlighted the fact that the model also assumes equal weights across preference strength for each actor.

Efficient Privacy-Preserving Protocols for Multi-Unit Auctions - Felix Brandt and Tuomas Sandholm

In a multi-unit auction of identical items, the auction is an efficient way to allocate goods based on value, but you have to trust the auctioneer. Auction fraud is easy with an untrustworthy auctioneer, and hard to detect. The bidders don't want to reveal their bids to each other, since that can expose strategy, so the protocol should be robust against coalitions. The protocol uses distributed homomorphic encryption to generate keys, publish encrypted bids, then jointly compute the outcome vectors, followed by distributed decryption. Bad-faith participation can be identified, those bids removed, and the computation restarted.

Event Driven Private Counters - Eu-Jin Goh and Philippe Golle

In an instant runoff election, preferences are revealed publicly, though anonymously. Still, it can be possible to submit preference orderings that are combinatorially identifiable, allowing for credible vote-selling. Common solutions such as encrypting counters add too much work to the tallying of votes. They present a "private counter" protecting a value that updates based on events, without revealing what the new value is. Individual preferences are never actually revealed. Results are computed collectively, and if a winner is not reached, the removal of one candidate is announced as an event. All private counters update, and another result is computed. If the private counter is secure, then this system is as well.


Thursday, March 3, 2005
User Authentication (Session chair: Mike Szydlo)

Secure Biometric Authentication for Weak Computational Devices - Mikhail J. Atallah, Keith B. Frikken, Michael T. Goodrich and Roberto Tamassia

Biometric authentication should be fairly straightforward, but standard systems need to compare the cleartext form of biometric data, because hashing similar readings produces different outcomes. The system assumes a smart card with a biometric reader, and a server to access, with the goal that attacking the server should not let you impersonate the user. Even sending the biometric reading XOR'd with a one-time pad is not enough, since it will leak information over time. They use Boolean vectors and the Hamming distance between the encrypted measured value and the encrypted stored value. This forms a very lightweight protocol that requires possession of the smart card AND either the source biometric or access to the server. Since the smart card and server are both involved in the authentication, each authentication system requires a separate smart card.
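The core observation — XOR-masking both vectors with the same pad preserves their Hamming distance — can be sketched as follows. The paper's actual protocol is more elaborate precisely because naively reusing one pad leaks over time; this sketch (with made-up template bytes) only shows the distance-preservation property that lets the server compare masked vectors.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hamming(a: bytes, b: bytes) -> int:
    # Number of bit positions where the two vectors differ.
    return sum(bin(byte).count("1") for byte in xor(a, b))

# Enrollment: card and server share a random mask; the server stores only
# the masked template, never the cleartext biometric.
template = bytes([0b10110010, 0b01001101])
mask = secrets.token_bytes(len(template))
stored = xor(template, mask)

# Authentication: the card masks a fresh reading with the same pad.
reading = bytes([0b10110011, 0b01001101])   # one bit flipped vs. template
masked_reading = xor(reading, mask)

# Masking with the same pad preserves Hamming distance, so the server can
# threshold the distance without seeing either cleartext vector.
assert hamming(masked_reading, stored) == hamming(reading, template)
```

The server accepts when the distance falls below a noise threshold, so small sensor variations still authenticate while the cleartext biometric never leaves the card.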

Panel: Incentives, Markets and Information Security
Organizer: Allan Friedman, with Bezalel Gavish, Paul Syverson, Sven Dietrich and Richard Clayton

Bezalel presented a straightforward economic analysis of a commonly discussed information security issue: spam. Gavish argued that the problem stems from the low marginal cost of sending messages in a digital environment, and proposed a fee-based system that gives a credit to each recipient, claimable from the sender. While the general idea has been discussed before [5], this approach involves both end parties and the service providers. Gavish advocated a dynamic pricing scheme, and highlighted important areas of research for implementation.

Paul shifted the focus from mechanisms to institutions, arguing that "identity theft is about neither identity nor theft." Syverson highlighted flaws in the current state of consumer authentication, where information that has a very high value in specific contexts (a social security number can open a line of credit or obtain a new password) is undervalued by some actors, leading to arbitrage and fraud. This also introduced the concept of a security externality, where poor protection or overuse of identifying and authenticating information can raise fraud rates for other parties.

Sven demonstrated that a single security issue like distributed denial of service (DDoS) attacks presents the opportunity for multiple levels of analysis that stem from unique features of information systems. The nature of the attack stems from the decentralized environment, where the coordination costs of a botnet are less than the damage inflicted on the target. Networks of subverted machines also raise the question of who should bear responsibility for the damage caused, since the software manufacturer, the machine owner, and the local ISP could all have theoretically prevented the machine from causing damage. Dietrich explained that networks of subverted machines are even traded in illicit marketplaces, raising questions of trust and quality guarantees. While no single approach can solve the problem of DDoS attacks, each layer of analysis opens an opportunity to raise the costs, reduce the damages, and mitigate the harms of this critical issue.

Finally, Richard took a step back, acknowledging the importance of economics in the field of security, but tempering this enthusiasm with several observations. Using the example of email payments, he illustrated that proposed economic solutions might fall flat from simple economic or technical realities. Furthermore, economics is a nice tool, but good numbers are needed to judge efficacy. It is one thing to build internally consistent models, but to further extend the field, these models should be consistent with empirical data. Clayton summed up by urging people to learn more about economics, but suggested that it was "perhaps not yet time to change departments."