Review of
Financial Cryptography and Data Security 2012
Divi Flamingo Beach Resort, Bonaire, Dutch Antilles, Feb 26-Mar 3, 2012
Review by Benjamin Mood, University of Oregon
The 16th annual IFCA Financial Cryptography and Data Security conference took place in late February 2012 in Bonaire, the easternmost island of the ABC Islands just north of Venezuela. In attendance were about 90 participants from academia, government, and industry. The program chair was Angelos Keromytis, who put together an excellent program, as you will see below, and the local organization was handled by the general chair Ray Hirschfeld. The program included a meeting with local government representatives at the welcome session in the morning ("You're the first group of scientists I meet that hasn't studied reefs!") and the welcome reception at the Flamingo's Nest. The latter is a small terrace at the resort facing west, making it ideal for observing the green flash. Among the many social activities (besides scuba diving, an obvious choice in Bonaire that many took advantage of) were the island excursion up north, which let us observe some flamingos (some pink dots in the distance), wild pigs, iguanas, and wild donkeys, and the sailing trips over to snorkeling spots and Klein Bonaire.
The obvious meeting spots were the open-air lobby (good wireless coverage) or the small pier by the beach bar, where conference participants, hotel guests, and locals would congregate at the end of the day. The nearby town of Kralendijk offered the opportunity to go off in small dinner groups on the days when we didn't have dinners planned. A beach BBQ on Wednesday night was a wrap-up for those who left on Thursday and didn't stay for the co-located workshops. The main FC sessions took place in the Peter Hughes Meeting Room, at the northern end of the resort.
There were two workshops: one on usability (USEC, http://infosecon.net/usec12/) and one on ethics in computer security research (WECSR, http://www.cs.stevens.edu/~spock/wecsr2012/). The workshops shared the keynote speaker Ross Anderson, who spoke on ethics committees and IRBs (after the original joint guest speaker Jody Westby experienced travel problems getting to Bonaire), and a panel discussion led by Lenore Zuck on ethics in data sharing. The morning break was spontaneously supplied with fresh coconuts, leaving participants to return (with their individual coconuts) to their respective conference rooms: the Peter Hughes meeting room above the dive shop and the Capture Shop next to the Activities office.
The IFCA Annual General Meeting took place on Tuesday night before the rum(p) session. It was announced there that the next location for FC (in 2013) will be in Okinawa, Japan, with Ahmad-Reza Sadeghi as the program chair and Kazue Sako as the general chair.
See you in Okinawa!
The first session was given by Scott Zoldi of FICO, who talked about fraud detection methods used in the real world by customers such as banks and credit card companies. Scott provided a good overview of the methods, including neural network approaches, for detecting fraud in ATM usage and profiling credit card usage in different scenarios.
Without further ado, Benjamin Mood's notes on the FC sessions...
Why privacy? We only have a few hundred friends, unlike celebrities, but we can still find ourselves in embarrassing situations that we don't want our friends or significant others to know about. Impersonation is possible if people have small sets of friends and those sets have a high clustering coefficient. With disparate groups of friends, social authentication becomes more effective. The best performing face-recognition algorithms give about 65% accuracy, and recognition can be bad with certain images (e.g., a beefeater). Facebook knew this would be weak against your jilted former lover etc., and as a policy matter you can easily log in from a friend's machine. Their argument is that police and courts are the proper place to deal with "insider" threats. It can also be used for targeted attacks (spear phishing), but they haven't seen much of this. What their system does is kill industrial-scale phishing. Social CAPTCHAs as implemented by Facebook may appear to provide security for us, but in reality they don't help us so much as help Facebook deal with its own problems. "It's really security theatre".
Rachel Greenstadt: Their goals seemed straightforward - they do care about phishing. It does protect your friends. Ross: Makes it more difficult to do social phishing from a remote location, but not from your machine if it's compromised.
Q: Does it help against spam on FB page?
A: yes, it should - FB really wants to cut down on spam
Q: but this is a user benefit
A: this is a fair comment - one of the reasons why people want FB but not MySpace
Q: why is this surprising? Preventing fraud is good for companies as keynote said
A: when we set out to investigate, we thought these mechanisms would help user privacy, so it was of interest to see the benefits are aligned in a slightly different direction. Automated mechanisms were a lot weaker than expected, but have significant value to the company, if not the user
Jean Smart: If investigating someone and wanting to find out whom they were talking with over time, could I log in multiple times and get pictures of the people they're speaking to most frequently? Wouldn't that be an easier way to trace targets through the social network?
A: good point - these might be leaking and could be a covert channel
Moti Yung: how to define topics and what is good for company vs user? If something is good for the user with less spam, that's better for user and means ads will be more valuable
A: Remember Facebook Connect is part of their business model; they want everyone to log in through FB - so it's financial too. If banks want to use other authentication forms, there are others like Amazon doing 1-click and hashing; if they line up with FB then we're depending on their protections. Governments might eventually say you can use FB to claim welfare or pension benefits. The UK is going entirely online next year, so this isn't inconceivable: 160 billion pounds a year, 8% fraud. Imagine the problems if FB is part of this.
Q: Paper develops intuition, this might also trigger violations of US law - criminal statutes. Even fairly lousy mechanisms like this could trigger these legal protections.
A: yes, other works we've done that show more money should be spent on policing speak to this. Yet another evidential straw against them.
The MVP Web-based Authentication Framework (short paper)
Sonia Chiasson, Chris Deschamps, Elizabeth Stobert, Max Hlywa, Bruna Freitas Machado, Alain Forget, Nicholas Wright, Gerry Chan and Robert Biddle
presenter: Sonia Chiasson
Lots of "new and improved" authentication schemes, but it's hard to compare them. No consistent evaluation from a security or usability standpoint. The MVP framework is meant to address this by facilitating user studies. Can do longer-term studies and users can use their own systems. Modified several open source systems, e.g., Wordpress blogs. Logging in brings up the correct authentication scheme. The process stays the same since the authentication string is sent back to the website; just the way that the user sees the authentication scheme differs. Many characteristics are common, such as being web-based and allowing for interchangeable schemes (about 20 implemented so far). These are all instrumented for analysis, including user interaction and details of the user environment. Password reset emails can be sent or delayed to encourage trying to log on. Can be used for crowdsourcing activities as well. Some folks reset at each login, which might model realistic behavior. Unexpected findings: as a non-US researcher, can't use Mechanical Turk directly (must be a US citizen!) Can use Crowdflower instead. Posting tasks at different times will give you different types of users because it depends on who is awake in that part of the earth. Moving on to larger and longer-term studies with more comparisons and mobile device support.
Stuart Schecter: how do users know they're supposed to be practicing their password? Do they know the study is about passwords?
A: They're not told but some people do figure it out
Angelos Keromytis: is this currently available?
Stuart Schecter: big reveal at the end saying this is what we're actually doing?
A: do debrief at the end but haven't filtered based on whether people figured it out
Q: Several mechanical turk communities where tasks are discussed, has this been looked at?
A: no, but it runs in a block at the same time - even with debriefing, most of the study is done beforehand
Q: Still have to do own authentication?
A: Yes - modified Wordpress but not saying conclusively use one or another
A birthday present every eleven wallets? The security of customer-chosen banking PINs
Joseph Bonneau, Sören Preibusch and Ross Anderson
presenter: Joseph Bonneau
What's a stolen wallet worth? Do PINs make pickpocketing less of a problem? There hasn't been a big PIN leak like with passwords. There was a leak from RockYou of passwords which had 4-digit sequences in them; can these correlate with PINs? Most of these were text plus four digits that weren't related. Last June, an iPhone developer had a PIN lock password and he shared the dataset: 200,000 users - sadly just below what you'd really like, and not every combination picked; 9954 possibilities covered. Correlating data sets, you need 10,000 users to get to everyone. Worse than random. Trends in PIN selection can be represented by color intensity graphs. Can reverse-engineer from the diagram how the Gregorian calendar works! Built a linear model of PIN probability based on factors like the probability that a user chooses a date or sequence vs a random value. Models showed that dates and patterns are very popular. About 25% choose PINs that look random, judged against the RockYou and iPhone datasets with the regression model. A survey of banking customers showed that over half of people share their PINs (almost exclusively with family) and about one third re-use them for things like online banking and SIM cards. To model the banking PIN distribution from the survey data, users answered questions about their PINs. More people choose random data for their PINs. Lots of people use the default banking PIN; some didn't know they can change it. Many use the same PIN they've had from their first account. Some use a number based on a student ID or old phone number. So real PINs are better than what the model might suggest. What if banks used a blacklist? Maybe not a great idea, but it does make the PINs look more random. However, 8% of people use their own birthday, and 99% of users in the US have their DOB in their purse or wallet. Implications: if attackers have 6 tries (e.g., 3 at an ATM, 3 in a chip-and-PIN reader) and guess just dates or sequences, and you know the birthday, you can get up to 8% success.
If you can steal 4 cards you can bring this up to 10.9% with known DOB. Users had a lot to say about PINs.
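The birthday attack described above can be sketched as follows; the specific date encodings are my assumption of plausible candidates, not the paper's exact list:

```python
# Hypothetical sketch: with a stolen wallet the attacker typically knows
# the victim's DOB and gets about 6 tries (3 at an ATM, 3 in a
# chip-and-PIN reader). These date encodings are assumptions.
def birthday_pin_guesses(day: int, month: int, year: int) -> list[str]:
    candidates = [
        f"{day:02d}{month:02d}",               # DDMM
        f"{month:02d}{day:02d}",               # MMDD
        f"{year:04d}",                         # YYYY
        f"{day:02d}{year % 100:02d}",          # DDYY
        f"{month:02d}{year % 100:02d}",        # MMYY
        f"{year % 100:02d}{year % 100:02d}",   # YY repeated
    ]
    # de-duplicate while preserving order, then cap at the 6 allowed tries
    return list(dict.fromkeys(candidates))[:6]

guesses = birthday_pin_guesses(26, 2, 1984)
print(guesses)
assert len(guesses) <= 6 and all(len(g) == 4 for g in guesses)
```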
Q: how to track second order effects of people gaming the survey?
A: took precautions: respondents didn't have to answer and could drop out (some people did), the researchers said who they are, and didn't ask for demographic data (people didn't like this in pilot studies). Out of 1000 people only 2-3 left very negative comments, and perhaps 50 said this is really good and worth studying; people thought a lot about the answers, and it made them think about this data.
Q: Survey was about banking but did you ask about how many unique accounts etc do they have?
A: asked what they reuse PINs for (e.g., smartphone, gate code), asked if same PIN used for multiple cards (about 2/3rds said yes)
Jean Smart: is this biased against the less paranoid? Also, if you call yourself an engineer, they are statistically the most trusted group.
A: might be biased against less paranoid given demographics of who takes MTurk survey.
Q: On moving toward larger PINs: customers would often choose 4-digit PINs and add "11" or "12" if they need a 6-digit PIN.
A: Switching to more digits doesn't work - CMU looked at this in passwords. Switching to random PINs, even if short (3 digits), would be better than longer PINs. The history of banking is moving towards 4-digit PINs chosen by the user.
High yield investment programs (HYIPs) promise multiple percentage points of interest _per day_. Also known as Ponzi or pyramid schemes. Advertised as legitimate investments even when they aren't, and a sophisticated ecosystem is set up surrounding them. One example is Macrotrade, which claims to be certified by Comodo (a CA). Similar to Paypal, but you probably haven't heard of them... These are dominated by two payment mediators (based in Costa Rica and Panama). Observed 1500 HYIP programs and over 140K events. Found the aggregators were reporting on the same set of data but not always agreeing - this shows collusion is probably not happening. They agree about 80% of the time. HYIPs tend to collapse within a few weeks - 50% last 50 days or less, but some last over a year. Collapse is due to unsustainable return rates among other factors. Users are less likely than aggregators to give a bad rating and more likely to give a good one. Don't trust previous investors, but maybe the aggregators. The impact based on the model is estimated at $6 million/month. Thousands of these are online, some persisting for years. The reputation ecosystem produces mostly reliable assessments, but don't trust users. Future goals include an interactive web-based visualization and a way of linking scams together.
Q: How to know when collapse is happening?
Consider a trade consortium where members may want to collaborate but don't trust each other with each other's financial data. This can be solved with secure multiparty computation, where multiple users use secret sharing and distribute the pieces amongst the other parties. The Sharemind framework is used to perform this (based on a paper in ESORICS'08), improved with currently-unpublished protocols. The SecreC programming language is used for implementing data analysis algorithms. The input form is web-based and secret sharing is done in the web browsers, with the parties receiving shares over secure channels. SecreC looks similar to C. Several MPC primitives are used and implemented in Sharemind, which can also be used for anonymization. A user study shows that there is a positive reaction to the framework. This is the first practical secure MPC application with geographically separated computation nodes. Looking for collaborators for new prototypes or applications for academia and research.
Q: Any interest in using this for browser security? e.g., malware
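As a rough illustration of the additive secret sharing that underpins Sharemind-style three-party computation (a simplification of the actual protocols):

```python
import random

# A value is split into three random shares that sum to it modulo a
# modulus; no single party learns the value, yet sums of shared values
# can be computed share-wise with no communication.
MOD = 2 ** 32

def share(value: int) -> list[int]:
    s1, s2 = random.randrange(MOD), random.randrange(MOD)
    return [s1, s2, (value - s1 - s2) % MOD]

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MOD

def add_shared(a: list[int], b: list[int]) -> list[int]:
    # each party adds its own shares locally
    return [(x + y) % MOD for x, y in zip(a, b)]

x, y = share(1200), share(34)
assert reconstruct(add_shared(x, y)) == 1234
```

Multiplication of shared values (which Sharemind also supports) requires interaction between the parties and is not shown here.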
Stock market orders aren't expressive. Ordering must be managed outside exchanges, and managing is expensive (fast networks and perfect computing). Exchanges aren't always fair either - simply trusting the firm in question isn't necessarily in your best interest. Limit orders can create put options: if the value of a stock drops below the limit price, the standing order gives someone a free put option. What about pushing trading logic into the exchange itself? This can make individuals safer by allowing for dynamic orders, e.g., buy shares at a given price only if trading volume is less than a certain threshold per hour, so that you're not caught up in panic selling. People don't want to reveal rules because they can be taken advantage of. Adding crypto can help by creating a zero-knowledge style trustworthy audit trail: the exchange now proves it is operating correctly. The downside is that crypto adds complexity and time in fast markets. Rules use vectors of encrypted integers, while interval proofs allow traders to prove when a trigger is hit without revealing their securities or trigger values. Pitfalls include flash crashes due to unimaginative rules, but these happen anyway since people are already doing algorithmic trading.
Q: What's in this for the stock market?
This talk was about private proximity testing, which lets two users determine whether they are near each other. The goals were constant time and liar-freeness. There are popular non-privacy-preserving solutions, but they care about the privacy of the information. They use secure multi-party computation to compute the distance between users and answer whether or not the users are close by. A previously published paper used WiFi location tags for proximity testing. However, that tag system takes O(N) time instead of the desired O(1) time. A location tag consists of a location and a time.
For GSM networks: these tags are unpredictable and reproducible. They used a custom Android kernel for the devices. Cell phones are given a unique identifier which includes a location area code (LAC) for where the phone currently exists. They used a method called k-shingling, which breaks a document into k-length pieces. Each shingle has a numeric value, and the minimum values of these shingles are compared; if the minima are the same, then the two documents are likely alike. They use shingling to separate the location data and then compare it. Shingling prevents dishonesty from working.
They tested three different distances: < 10 meters (in proximity), < 1 mile (in proximity), and > 10 miles (not in proximity). In the same room there was an 85% success rate; on the same university campus, high 70s; at large distances, the tags do not compute to the same place. Each test was repeated 5 times, which prevented false results.
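A minimal sketch of k-shingling with min-hash comparison, as I understand the general technique (the paper's exact parameters and hashing differ):

```python
import hashlib

# Split each "location document" into overlapping k-length shingles,
# hash each under several salted hash functions, and compare the
# minimum hash values: matching minima indicate similar documents.
def shingles(doc: str, k: int = 4):
    return {doc[i:i + k] for i in range(len(doc) - k + 1)}

def min_hashes(doc: str, k: int = 4, n: int = 32):
    # n independent hash functions, simulated by salting SHA-256
    return [min(int(hashlib.sha256(f"{salt}:{s}".encode()).hexdigest(), 16)
                for s in shingles(doc, k))
            for salt in range(n)]

def similarity(a: str, b: str) -> float:
    ha, hb = min_hashes(a), min_hashes(b)
    return sum(x == y for x, y in zip(ha, hb)) / len(ha)

# hypothetical GSM observations: nearby phones see overlapping data
near = similarity("LAC:1234 CID:88 CID:89 CID:90",
                  "LAC:1234 CID:88 CID:89 CID:91")
far = similarity("LAC:1234 CID:88 CID:89 CID:90",
                 "LAC:9999 CID:11 CID:12 CID:13")
assert near > far  # similar location documents share more minima
```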
Q: What happens on the borders of LAC?
A: Reasonable that it is not a problem.
Q: How far apart towers are affect results?
A: Have not tested this yet.
Q: Any plans to do CDMA?
A: Chose GSM due to currently possessed equipment. Should work on CDMA.
Q: Are the modifications to the phones available? Can we use them?
A: I can point you to who did the modifications of the phones.
This talk described the problem of spam and focused on bad ISPs and whether they can be taken down to prevent the spam. They revealed a couple of interesting statistics, including that spam comprises 90% of all email and costs businesses $100B a year. It is difficult for recipients alone to manage, so they propose that filtering spam on the sending side would be better. The ISPs can do something about outbound spam, but bad ISPs make this difficult. The majority of spam comes from a few IP addresses controlled by a few ISPs.
A question raised during this research is what the legal issues are regarding removing an ISP. To determine whether an ISP can be removed, they used a few metrics. The exclusive customer cone is the set of customers which will be cut off if the ISP is removed; the exclusive customer prefix size measures how many larger customers are included in that cut-off. They also keep track of how many bad users there are and what the ratio of good to bad users is.
For their tests they did not include forged headers, since 80% of spam does not have forged headers. They used spam email headers to determine IPs (for un-forged headers), drawing on the spam database which Georgia Tech gathered from July 2008 to January 2010, and used databases to determine the metrics listed above. Their results show that some ISPs which spam comes from are needed, but at least one could be removed, judged by the amount of spam vs legitimate data.
A: Some mechanism to put pressures on the ISP and shut it down
This talk was on finding the least congested routes in Tor. Tor is an onion routing system for preserving the client's privacy. They presented two algorithms for finding these routes. The first algorithm, circuit choosing, works as follows: the clients pre-build and maintain a number of circuits and use the circuit which has the least amount of congestion. This reduces the duration time by about half, though some overhead is needed for gathering the information required by this algorithm. The second algorithm, called "circuit dropping", picks a new route if the current route has too much congestion. This reduces the time by about half as well, with no overhead.
They also have a long-term path suggestion algorithm. Since some relays may always be congested, they find and remove them from use. However, probing these may have a large amount of overhead, so they use the route's overhead to determine a relay's overhead. If a relay keeps a history of congestion, then they remove it. They were able to reduce the long-term path by 10%.
General attacks are not affected by their approach. There are specific attacks against the long-term path selection, since an attacker can flood nodes to make them slower; however, this does not dramatically decrease the run time.
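The two circuit-selection strategies can be sketched at pseudocode level (my own simplification; the `congestion` field and threshold values are assumptions, not the paper's):

```python
import random

# "Circuit choosing": pick the least congested circuit from a pre-built
# pool. "Circuit dropping": replace a circuit whose measured congestion
# exceeds a threshold.
def choose_circuit(pool):
    return min(pool, key=lambda c: c["congestion"])

def drop_if_congested(circuit, build_new, threshold=0.8):
    return build_new() if circuit["congestion"] > threshold else circuit

def build_new():
    # stand-in for building a fresh Tor circuit
    return {"id": "fresh", "congestion": random.random()}

pool = [{"id": i, "congestion": c} for i, c in enumerate([0.9, 0.2, 0.5])]
best = choose_circuit(pool)
assert best["id"] == 1                # the 0.2-congestion circuit wins
kept = drop_if_congested(pool[0], build_new)
assert kept["id"] == "fresh"          # 0.9 > 0.8, so it was rebuilt
```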
Ross Anderson: 25 years ago people used to do "sticky routing" - use a route until congested; can this be applied?
A: This fits into route dropping.
Q: What if a large number of users use this process? Won't this prevent it from working?
A: Guess the benefit will not be as great but still use the routes more evenly.
Q: reverse of smear attack, make a relay look better than everyone else?
A: best way to avoid this is to drop that particular circuit.
Internet voting systems are hard to get right. Verifiability and auditability are particularly difficult properties to obtain, as is ballot secrecy - and these are potentially at odds with each other. Washington, DC deployed Internet voting for overseas absentee voters in the 2010 general election and invited the academic security community to "hack" a mock election using the system, with three days' notice and two weeks before it was to be deployed for the general election. The system consists of a web server in a DMZ between two firewalls, with the application server and DB server behind the web server firewall; an IDS sits in front of the web server. Name, zip code, and voter ID are entered, as is a hex-based PIN mailed in advance to voters. The ballot is an interactive PDF, saved to the voter's computer after being filled in and then uploaded to the system. The system is built with Ruby on Rails, open source, available on GitHub; the team was able to download the code in advance. The ballot is encrypted with GPG. By using double rather than single quotes in the GPG operation, the code is susceptible to a shell injection attack. Other problems include unencrypted ballots persisting in /tmp, and the deployed session secret wasn't changed from what's in the public GitHub repository. They created an exploit ballot with a small Python shell, which allowed making fake ballots. All database credentials were effectively stolen; they replaced existing and incoming votes with their own choices, installed a back door to reveal new votes, and cleared the audit logs. The ballot return page played "Hail to the Victors" to make it abundantly clear it was compromised. Based on the network diagram, they were able to find server-room webcams and determine who was coming in and out. They even got access to the real voter roll, which would allow voting in the real election. All of this was made possible by using double rather than single quotes.
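For illustration, here is a minimal, hypothetical example of the quoting bug class (using `echo` in place of the actual GPG invocation): in POSIX shells, `$(...)` is still expanded inside double quotes, while a single-quoted string is inert:

```python
import shlex
import subprocess

# Hypothetical sketch of the bug class: a user-controlled filename is
# interpolated into a shell command inside double quotes, so shell
# metacharacters in the name are expanded; shlex.quote() single-quotes
# the string, rendering it inert.
filename = 'ballot-$(echo INJECTED).pdf'

unsafe = subprocess.run(f'echo "{filename}"', shell=True,
                        capture_output=True, text=True).stdout.strip()
safe = subprocess.run(f'echo {shlex.quote(filename)}', shell=True,
                      capture_output=True, text=True).stdout.strip()

print(unsafe)  # the $(...) was executed by the shell
print(safe)    # the filename passed through literally
assert unsafe == "ballot-INJECTED.pdf"
assert safe == "ballot-$(echo INJECTED).pdf"
```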
Q: didn't change code so just had hash of PIN - how would you have voted?
A: if you have the PIN, you can just log in. If the same session secret is used and the attacker knows it, you can craft a ballot with cookies; you need the ID number from the DB, but that is sequential
Q: that's if a real person voted?
A: that's without modifying the server at all
Q: The DC government can't fix much of anything - is this just a bad implementation, or is the problem fundamentally hard?
A: People hired for this were pretty good - open source, good work done in the past, this was much better than many closed source systems in the past. Wasn't a problem with them, these are issues that everyone has.
Q: After all this, some discussion probably. Any insight into QA process?
A: Not too familiar with their QA. Probably not much of one. Even doing it in advance isn't the right approach. Enough vulnerabilities. Illustrates real pressures election officials are under. Changes required due to election timeline pressure. Elections are administered by municipalities and not everyone has resources to get this right.
In the past, there were many entities that worked independently of each other. Nowadays there is more specialization, which means more networking of independent actors to form things such as supply chains. This raises the question of who is in charge of security in these organizational networks, and the answer is that everyone is usually in charge - but there are some selfish actors. To get around this, audits are proposed. Game theory can model the productivity, interdependence, and thoroughness of these audits. Interdependence is modeled in terms of probabilities of loss and attack. Security investments generate positive externalities in this model, which helps all actors. Is there a benefit to investing, though, if others won't? Consider that the total cost is that of loss plus that of adding security. There is also a social cost of two firms together, with a local minimum called the social optimum. The curve changes depending on interdependence, and more investment is required as interdependence increases. Security audits can be deployed in this environment, but this is hampered because the security of an organization isn't directly observable. The thoroughness of the audit is thus important, and the audit result can be attested to third parties. Symmetric Nash equilibria show the optimal security level given the security investments of two parties. A certain security level can be reached without audits. Though audits are useful, baseline audits are useless, since both players already have incentive to invest up to the Nash equilibrium; thus, only thorough audits will be useful. The implications are that audit procedures must be tailored to adjust thoroughness, that systems should be built to be easily auditable, and that systems should be decoupled whenever possible. Audit requirements should similarly be tailored and criteria defined for thoroughness. Limitations include uncertainty about an attacker's actions. The takeaway is that security audits are no panacea.
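A toy numeric sketch of the interdependence idea (all parameters are my assumptions, not the paper's model): each of two symmetric firms picks a security level, and best-response iteration finds a Nash level below the social optimum, illustrating the underinvestment the talk describes:

```python
# Two symmetric firms each pick a security level s in [0, 1].
# Expected cost = P(breach) * loss + quadratic investment cost, where
# P(breach) falls with the firm's own s and, via interdependence DEP,
# with the partner's s. All numbers below are assumed for illustration.
LOSS, COST, DEP = 50.0, 40.0, 0.5
GRID = [i / 200 for i in range(201)]

def firm_cost(s, s_other):
    return (1 - s) * (1 - DEP * s_other) * LOSS + COST * s * s

def best_response(s_other):
    return min(GRID, key=lambda s: firm_cost(s, s_other))

# symmetric Nash equilibrium by best-response iteration
s_ne = 0.0
for _ in range(100):
    s_ne = best_response(s_ne)

# symmetric social optimum minimizes the two firms' joint cost
s_soc = min(GRID, key=lambda s: 2 * firm_cost(s, s))

print(s_ne, s_soc)
assert s_ne < s_soc  # firms underinvest relative to the social optimum
```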
Q: How can this be applied or introduced, particularly interconnectedness
A: Customer diversity, length of supply chain - this is theoretical - look at ISP system as interconnected system who themselves use graph-related approaches. Finding best indicators is important.
Q: What is timing?
A: Simultaneous and symmetric
Q: Set of compatible or similar with audits
A: As long as originating - stability of equilibria determines whether audits required symmetrically or whether asymmetric audits are OK. Reputation effects can also be introduced.
Audit logs should be designed to be unforgeable. When a log entry is signed and the signature is generated, the key should be deleted to prevent logs from being tampered with. Forward security is guaranteed, but there is no guarantee against log erasure. Another scheme aggregates signatures together. Evolving the key and keeping a signature of the entire stream prevents compromise but is expensive, particularly for the verifier. Previous schemes are not necessarily publicly verifiable or have a dependency on an online trusted third party. More recent schemes using public keys are scalable and secure, but if they fail, it's hard to determine what parts are valid and invalid. Public key schemes must have all signatures computed in order to verify the signature of the entire scheme, and each private key is associated with an individual public key, raising the question of whether it's possible to verify the entire scheme once and directly. The LogFAS system was created with a constant number of expensive operations regardless of input size, as well as a public key independent of the number of signers or log entries, and fine-grained verification is possible. It is also provably secure. A single public key verifies signatures across all private keys using the Schnorr scheme. Tag values are verified rather than individual log entries. The signature and tag are bound using a token that evolves with the private key and log entry. Private keys are deleted as used, and loggers support a given number of entries before new keys are necessary. Tokens are aggregated and comprise tags. Based on the tags, the signature can be calculated, and exponentiation (expensive) only needs to be done twice. Corrupt log entries can be identified with a binary-search-like strategy in O(log n) expensive operations. An external counter signature protects against truncation. Security is proven in a supplemental paper. Evaluation shows this scheme is much faster in terms of verification time with little overhead in signing.
Future work involves building truncation protection implicit into the log.
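A minimal sketch of the forward-secure logging idea that LogFAS builds on (this is the generic evolve-and-delete pattern with MACs, not LogFAS itself, which uses Schnorr signatures):

```python
import hashlib
import hmac

# Each entry is authenticated with the current key, then the key is
# evolved by hashing and the old key deleted, so compromising the
# machine later does not let an attacker forge or silently alter
# earlier entries.
def evolve(key: bytes) -> bytes:
    return hashlib.sha256(b"evolve" + key).digest()

def append(log, key: bytes, message: bytes) -> bytes:
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    log.append((message, tag))
    return evolve(key)  # caller must discard the old key

def verify(log, initial_key: bytes) -> bool:
    key = initial_key
    for message, tag in log:
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True

log, k0 = [], b"initial-secret"
k = append(log, k0, b"entry 1")
k = append(log, k, b"entry 2")
assert verify(log, k0)
```

Note that a MAC-based variant like this needs the verifier to know the initial key; LogFAS's contribution is making verification public-key based while keeping verification cost low.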
Q: Isn't there an issue if a machine is broken into multiple times, if the private key is stolen?
A: Assume if you break into the system this causes a log entry. If you can break in and target the key without incurring a reporting event, you've subverted the logging mechanism, not the security primitives.
Q: How do you handle randomization? In Schnorr, knowing the one-time random value gives you the private key. Deterministic or pseudorandom?
A: Truly random selection. Don't rely on pseudorandom.
Q: hash tree logging mechanism in the past?
A: Sounds familiar
A method for secure integer division. They use a threshold homomorphic encryption system (Paillier) to accomplish the secure computation; it is specific to two-party computation, though they mention it could be extended. For their homomorphic system, the public key is known to everyone and the private key is shared such that no party can decrypt on its own; decryption must be done in collaboration. The scheme they use has addition and multiplication. Division divides h/d and returns the quotient to both parties. This can be used for private statistics and is useful for data analysis like (n_a + n_b)/(d_a + d_b). They use a Taylor series to compute the division and fix the approximation of the Taylor series by comparison. For the division protocol, they needed truncation (using encryption and homomorphic addition) and a way to determine the bit length of a number (using a binary-search-like technique or a regression protocol).
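A toy Paillier example (insecure tiny primes, a single key rather than the threshold variant used in the talk) showing the additive homomorphism the division protocol builds on:

```python
import math
import random

# Toy Paillier (illustration only, NOT secure): n = p*q, g = n+1,
# Enc(m) = g^m * r^n mod n^2. The key property used by protocols like
# this one: Enc(a) * Enc(b) mod n^2 decrypts to a + b mod n.
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):  # Paillier's "L" function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.choice([r for r in range(2, n) if math.gcd(r, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 20, 30
assert dec((enc(a) * enc(b)) % n2) == (a + b) % n  # homomorphic addition
```

In the actual two-party setting, the private key material (`lam`, `mu` here) would be secret-shared so decryption requires both parties.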
Q: You are doing this on integer mod N, correct?
A: Yes, using Paillier, and using a thousand bits to be secure
Q: Have you implemented this?
A: No, we have not yet. We wanted to create a general framework before implementing this.
The system uses a tree-based derivation method for distributing keys and tokens, and each user can only decrypt the files which he is allowed to read. Users can also let other users read and write the data which they own.
For write access control at the coarse-grained level, oblivious mapping still works. For fine-grained control, encryption alone does not work - it may allow unauthorized writes or not let the cloud know which file was written. Their solution is to use oblivious update tags, which contain information about which file was updated and let the cloud know which files are being written to. Each node in the resource tree has a key pair, and each file is encrypted with a symmetric key.
The cloud observes requests at block level and will not see patterns. It was noted that cache optimizations are possible to increase the performance of the system.
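A sketch of tree-based key derivation of the kind described (an assumed hash-based construction for illustration, not necessarily the paper's exact scheme):

```python
import hashlib

# Each node's key is derived from its parent's key and a label, so
# handing a user one subtree key grants read access to every file
# under that subtree without distributing per-file keys.
def child_key(parent_key: bytes, label: str) -> bytes:
    return hashlib.sha256(parent_key + label.encode()).digest()

root = b"owner-root-key"             # held only by the data owner
dept = child_key(root, "dept-A")     # given to users of subtree dept-A
file_key = child_key(dept, "file-1")

# a user holding only `dept` can re-derive `file_key`...
assert child_key(dept, "file-1") == file_key
# ...but cannot derive keys outside the subtree without `root`
assert child_key(root, "dept-B") != dept
```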
Q: This is like an ID-based approach - like a capability, where the user has a token which can allow access. This is a capability, right?
Q: What about solutions like PIR with access control?
A: We wanted something in-between cloud side completely and user completely.
Q: Any estimate of performance? This is important for knowing the price of privacy.
A: The question becomes what you want to store in the blocks (i.e., how large the data is). This was not implemented for these measurements.
The main topic was how a user can perform analysis on data while preserving the privacy of the data. Previous work allows the aggregator to learn only the noisy sum. They added support for fault tolerance and for dynamic join and leave operations. They use differential privacy, which prevents adversaries from noticing small changes such as the contribution of a single user. In the basic scheme a one-time key distribution is used, enabled by homomorphic encryption. To achieve fault tolerance, they use a binary-tree-like construction, which also allows for dynamic joins and leaves. Whenever the number of users reaches the next power of two, a new tree must be created. Differential privacy allows for dealing with the untrusted aggregator.
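A simplified sketch of the noisy-sum idea (my own toy version; the paper layers encryption and the tree construction on top of this, and distributes the noise differently):

```python
import math
import random

# Each user perturbs its private value with a share of Laplace-style
# noise before submission, so the aggregator learns only an
# approximate total rather than individual values.
def laplace_noise(scale: float) -> float:
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

values = [3, 7, 4, 6]             # users' private readings (assumed)
epsilon, sensitivity = 1.0, 10.0  # assumed privacy parameters
reports = [v + laplace_noise(sensitivity / epsilon) / len(values)
           for v in values]
noisy_sum = sum(reports)
print(noisy_sum)                  # close to 20, individual values hidden
assert abs(noisy_sum - sum(values)) < 150  # loose sanity bound
```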
Q: Could you combine secret sharing with differential privacy?
A: Differential privacy allows for a smaller noise value. We use some secret sharing.
Q: I think you can turn this model into secret sharing?
A: Yes, but then you may lose some privacy.
Q: You would have to treat K as N?
Discretionary access control is popular, but there are issues with this model in conjunction with outsourced storage. The storage provider (e.g., a cloud provider) could be "honest but curious": it runs the protocol correctly but might be interested in properties such as access patterns. The storage provider may learn who owns the data objects but shouldn't be able to identify the users accessing them. A desired property is unlinkability, where the provider can't link accesses to users. Pseudonyms hide identities but don't provide unlinkability of access patterns. Instead, have an ACL per permission and have the user prove to the trusted reference monitor that one valid pseudonym is possessed, without specifying which one. Cryptographic accumulators are used to provide this, and the second of the two schemes presented uses a dynamic accumulator, which allows efficient updating. Representing the ACL, the size of values, and the proof complexity are all constant.
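For readers unfamiliar with accumulators, the core membership idea can be sketched with a toy RSA-style accumulator. Everything here is a simplifying assumption: real parameters are a 2048-bit modulus with secret factorization, members are mapped to primes, and the membership proof is done in zero knowledge (which is what gives the unlinkability above) rather than by revealing the witness.

```python
# Toy RSA accumulator: set members are small primes; a witness for x is the
# accumulator computed over all members except x. Verification raises the
# witness to x and checks it equals the accumulator.
n = 1009 * 2003          # RSA-style modulus (factors must stay secret in practice)
g = 3                    # base element

members = [5, 7, 11]     # set elements, represented as primes

def accumulate(elements):
    acc = g
    for x in elements:
        acc = pow(acc, x, n)
    return acc

acc = accumulate(members)

def witness(x):
    """Accumulator over every member except x."""
    return accumulate([m for m in members if m != x])

def verify(x, w):
    return pow(w, x, n) == acc

assert verify(7, witness(7))          # 7 is in the set
assert not verify(13, witness(7))     # 13 is not
```

The accumulator value and each witness are single group elements, which matches the constant-size claims in the talk.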
Q: Does this model consider a particular document: if there are a number of people on an ACL and there are accesses per person, are there ways to link these?
A: Perhaps through a side channel. But the provider can't tell the users themselves apart, i.e., whether two accesses are by different users or the same user.
Q: Can you see if the same user accesses different data?
Q: You can see what is being accessed though?
A: Yes, but not by who
Q: Accumulator looks like group signature sometimes, is a trusted authority needed?
A: No, not for this
Two parties want a commitment without revealing specific values, such as age (e.g., prove someone is over 18 without giving the specific age). This can be done with a non-interactive zero-knowledge (NIZK) proof, with applications to e-voting, auctions, and age-verification sites. Historically, Lagrange proofs can be used and are short, but finding the values takes time, and it is not known how to do factoring-based NIZK. Binary representations have shortcomings as well, namely non-constant communication and non-interactivity only through random oracles. The range proof in this paper is pairing-based, using knowledge assumptions and common reference strings rather than random oracles, providing low computational complexity for the prover. Communication can be tuned through parameterization: trade-offs can be made between constant communication/verification, small CRS length, and prover's complexity.
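The binary-representation approach mentioned above can be sketched with Pedersen commitments. This is a toy illustration under loud assumptions: the group parameters are tiny and insecure, h's discrete log relative to g would have to be unknown in practice, and a real range proof additionally proves in zero knowledge that each committed bit is 0 or 1. What the sketch shows is the homomorphic step: bit commitments combine into a commitment to the full value.

```python
import random

# Toy group parameters (p = 2*q + 1, both prime); g and h generate the
# order-q subgroup.
p, q = 2039, 1019
g, h = 4, 9

def commit(m, r):
    """Pedersen commitment C = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

value = 13                                   # claim: 0 <= value < 2**4
bits = [(value >> i) & 1 for i in range(4)]
rands = [random.randrange(q) for _ in bits]
bit_commits = [commit(b, r) for b, r in zip(bits, rands)]

# Verifier combines the bit commitments homomorphically:
combined = 1
for i, c in enumerate(bit_commits):
    combined = (combined * pow(c, 2 ** i, p)) % p

R = sum(r * 2 ** i for i, r in enumerate(rands)) % q
assert combined == commit(value, R)
```

The communication cost grows with the number of bits, which is exactly the non-constant-communication drawback the paper's pairing-based proof avoids.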
About 76 million smart meters are deployed worldwide, 10 million in the US alone. They enable consumers to help distribute and reduce loads, and they gather consumption information, which raises privacy issues. Thought has been given to privacy-preserving solutions, but not to the feasibility of using these techniques on low-cost microcontrollers. This work shows privacy-preserving computations are possible in such limited environments. Privacy can be provided with zero-knowledge proofs; however, microcontrollers have fairly low MIPS rates and small code space. This work shows it is possible to implement zero-knowledge billing on these platforms, and that the choice of cryptographic primitives affects feasibility. Even low-end, cheap microcontrollers can produce cryptographic commitments in a matter of seconds. Certified readings can be done within 10 seconds on a $3 microcontroller, while the ones in current smart meters can produce readings every 28 seconds. ECC primitives maximize the capabilities.
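As a point of comparison for what "cheap commitments" means, here is a minimal hash-based commitment sketch, the kind of primitive even a constrained device can afford (a single hash per reading). This is an illustrative assumption, not the paper's scheme, which uses number-theoretic commitments to support zero-knowledge billing proofs.

```python
import hashlib
import os

def commit(reading):
    """Commit to a meter reading; one hash, cheap enough for a microcontroller."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + reading.to_bytes(4, "big")).digest()
    return digest, nonce

def open_commitment(digest, nonce, reading):
    """Verifier recomputes the hash to check the opened reading."""
    return digest == hashlib.sha256(nonce + reading.to_bytes(4, "big")).digest()

c, nonce = commit(1234)
assert open_commitment(c, nonce, 1234)
assert not open_commitment(c, nonce, 1235)
```

Hash commitments are hiding and binding but not homomorphic, which is why billing schemes that must sum readings in zero knowledge pay for algebraic commitments instead.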
Q: What about power issues? Also, how can software upgrades be done, given the high amount of energy required for writing flash? You could do the privacy processing separately and attach it to the meter; these seem like policy issues.
A: Agreed on some points.
Q: What do utilities have to say about this? What is their incentive to support this?
A: Could prevent lawsuits from fraud/stealing, also potentially future legislation.
Many financial transactions demand confidentiality, and these are increasingly done on mobile devices. Privacy-preserving computation through secure function evaluation (SFE) is feasible on desktops but is slow and requires large amounts of memory, too much for mobile devices. Clouds aren't available everywhere and can track usage patterns. Better would be to do all of the computation on the mobile devices themselves. This work uses circuit templating, taking small pre-optimized pieces of garbled circuits to create vastly larger circuits for SFE than were previously possible on phones. The Fairplay compiler generally used for SFE was modified with a new intermediate language that allows easy generation of these templates. This language, called PAL, is transparent to the user, who can write in Fairplay-standard SFDL and output SHDL, also standard. Evaluation shows that a large number of programs can be compiled on mobile devices that were previously infeasible, with reductions in required memory of over 95%. The circuit generation techniques are modular and can be used with other efficient techniques for garbled circuit execution.
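For readers new to garbled circuits, the building block being templated can be shown with a single garbled AND gate. This is a classic textbook sketch, not the paper's construction: labels, key length, and the trailing-zeros validity check are simplifying assumptions (real systems use point-and-permute and other optimizations).

```python
import hashlib
import os
import random

KEYLEN = 16
ZERO = b"\x00" * KEYLEN

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(i ^ j for i, j in zip(x, y))

# Generator: two random labels per wire, one meaning 0 and one meaning 1.
wa = [os.urandom(KEYLEN) for _ in range(2)]   # input wire a
wb = [os.urandom(KEYLEN) for _ in range(2)]   # input wire b
wc = [os.urandom(KEYLEN) for _ in range(2)]   # output wire c

# Garbled table for AND: encrypt the output label under both input labels.
# Appending a block of zeros lets the evaluator recognize the valid row.
table = [xor(H(wa[a], wb[b]), wc[a & b] + ZERO)
         for a in (0, 1) for b in (0, 1)]
random.shuffle(table)                         # hide which row is which

def evaluate(label_a, label_b):
    """Evaluator: try every row; only the right one decrypts to label||zeros."""
    for row in table:
        plain = xor(H(label_a, label_b), row)
        if plain.endswith(ZERO):
            return plain[:KEYLEN]
    raise ValueError("no row decrypted")

# Holding the labels for a=1, b=1 yields exactly the label for output 1,
# and nothing about the other wire values.
assert evaluate(wa[1], wb[1]) == wc[1]
```

A full circuit chains thousands of such gate tables, which is why pre-optimized templates and memory footprint matter so much on phones.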
Q: Tried to compare size of circuits? Are they similar?
A: Some are smaller, some the same size, a couple are larger
Q: Why is this difference?
A: Sometimes compilers can increase resulting circuit size
Q: Is the system available?
A: Yes - online soon and available on request
Binary decision trees provide a means for determining program execution for a branching program. Ordered binary decision diagrams (OBDDs) are directed acyclic graphs where variables are processed in a fixed order; they are used for applications such as formal verification and circuit design, and multivariate branching programs can be represented with them. Secure two-party computation aims to keep both the program and its input private, for example for diagnostics. Private database queries, such as private information retrieval and keyword search, can represent server data as a branching program, and OBDDs allow for these. The protocol uses oblivious transfer: the server sends an encrypted decision tree to the client, and the client decrypts with a key received via oblivious transfer. Answers are randomly permuted to hide the structure. This requires a stronger oblivious transfer where queries and answers can't be correlated. The approach can be generalized to decision trees. Client computation is constant time rather than logarithmic as in the Yao model, while server computation decreases from linear to logarithmic. Future work is trying to achieve efficiency in both communication and computation.
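A plain (non-secure) OBDD evaluation can be sketched to show what the protocol is protecting. The node encoding below is a hypothetical illustration: each internal node tests one variable in the fixed order and branches to its 0- or 1-child; leaves hold the output. In the protocol above, each node would be encrypted and the client would learn only the path it decrypts.

```python
# OBDD for the 3-variable majority function maj(x0, x1, x2).
# Nodes: name -> (variable index, low child, high child); leaves are 0/1.
obdd = {
    "n0": (0, "n1", "n2"),
    "n1": (1, 0, "n3"),      # x0 = 0: need both x1 and x2
    "n2": (1, "n3", 1),      # x0 = 1: need x1 or x2
    "n3": (2, 0, 1),
}

def evaluate(node, assignment):
    """Follow branches in variable order until a leaf value is reached."""
    while node in obdd:
        var, low, high = obdd[node]
        node = high if assignment[var] else low
    return node

assert evaluate("n0", [1, 1, 0]) == 1
assert evaluate("n0", [0, 1, 0]) == 0
assert evaluate("n0", [0, 0, 1]) == 0
```

Because variables appear in a fixed order and shared subgraphs are merged (node n3 here), OBDDs stay compact, which is what makes them attractive to garble.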
The security of searchable symmetric encryption (SSE) against passive attacks has been considered by many, but security against active attackers has not received similar attention. This work considers universally composable (UC) security. A naive approach to securing searches is to add a MAC for integrity, but a malicious server can replace search pairs, so a stronger model is needed, namely the use of tags computed over keyword aggregates. UC security can be proved using a simulation proof: in the ideal world, the adversary only learns the size of the documents, the size of the keywords, and the indices of the keywords. Reliability is also proven for this scheme. No questions.
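The "tags over aggregates" point can be made concrete with a toy searchable-index sketch. All names and keys here are hypothetical, and this is a drastically simplified stand-in for the paper's scheme: the server matches deterministic keyword tokens without learning keywords, and one MAC tag binds each token to its whole result list, so a malicious server cannot swap in a result list from a different search.

```python
import hashlib
import hmac

k_search = b"search-key-0000!"   # hypothetical client keys
k_mac = b"mac-key-00000000"

def token(keyword):
    """Deterministic search token; the server never sees the keyword."""
    return hmac.new(k_search, keyword.encode(), hashlib.sha256).digest()

documents = {1: ["galleon", "ledger"], 2: ["ledger"], 3: ["doubloon"]}

# Client builds the index: token -> doc ids, plus one tag over the aggregate.
index = {}
for doc_id, words in documents.items():
    for w in words:
        index.setdefault(token(w), []).append(doc_id)
tags = {t: hmac.new(k_mac, t + bytes(ids), hashlib.sha256).digest()
        for t, ids in index.items()}

# Client searches, then verifies the tag over the returned aggregate.
t = token("ledger")
ids = index[t]
expected = hmac.new(k_mac, t + bytes(ids), hashlib.sha256).digest()
assert hmac.compare_digest(tags[t], expected)
assert ids == [1, 2]
```

Per-pair MACs would verify individually yet still allow mix-and-match answers; MACing the token together with the full result list is what closes that gap.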
They propose a new way to verify when a vulnerability was found, called "carbon dating", which is a way to verify a timestamp (the traditional method being trusted time-stamping). Carbon dating involves the creation and solution of puzzles: given a puzzle and a solution, any user should be able to verify it, but solving the puzzle should be hard. Carbon dating creates a puzzle which takes time N to solve, so if the puzzle has been solved, it must have been created at least time N ago. Past puzzle types were based on repeated squaring or on hashing. Repeated squaring does not allow for easy verification; hash-based puzzles are verifiable but easily parallelizable. The drawbacks of this method are that it ties up a computer for time N, there is no ideal proof-of-work protocol, nothing prevents a user from carbon dating multiple outputs, and it is very fuzzy (each computer takes a different amount of time to solve a puzzle). Their next idea was to use Bitcoin to carbon-date; they also created a method to avoid deflating the value of Bitcoin. Applications include time-release encryption and commitments, and digital cash schemes.
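The repeated-squaring puzzle type mentioned above can be sketched in a few lines (this is the Rivest-Shamir-Wagner time-lock idea, with tiny illustrative parameters; a real puzzle uses an RSA modulus whose factorization is secret). It also shows the verification drawback: without the creator's trapdoor, checking a solution means redoing the sequential work.

```python
# Computing x^(2^t) mod n takes t sequential squarings for everyone except
# the creator, who knows phi(n) and can reduce the exponent first.
p, q = 1009, 2003
n = p * q
phi = (p - 1) * (q - 1)
t = 10_000                       # number of sequential squarings (the "delay")
x = 2

# Creator's shortcut: exponent 2^t mod phi(n), one modular exponentiation.
solution = pow(x, pow(2, t, phi), n)

# Solver must square sequentially; squarings cannot be parallelized.
y = x
for _ in range(t):
    y = (y * y) % n
assert y == solution
```

The sequential nature is the feature (elapsed wall-clock time is forced) and the bug (it ties up a machine for time N, and speed varies across machines), which is what motivated the Bitcoin-based approach.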
Q: Do you see this being integrated anywhere?
A: It could be useful in some applications [did not say which].
Q: What are some anonymity concerns?
A: The account you create is a fresh account; you have to send money from one account to another, and one account must have a name from somewhere. It's possible, but you would have to take extra steps.
Bitter to Better - How to Make Bitcoin a Better Currency
Simon Barber, Xavier Boyen, Elaine Shi and Ersin Uzun
Presenter: Xavier Boyen
How to make Bitcoin better. Bitcoin is an online currency which is decentralized, transparent, flexible, and an alternative to other currencies. They compared e-cash and Bitcoin. Bitcoin is trustworthy, meaning there is no "unfair" manipulation of the currency, no double spending, and transactions are irreversible. They noted the real "kicker" of the system is its scripting language. Transactions can be divided and reconstructed in Bitcoin, and hashes are used for verification. Transactions are public, which allows checks against double-spending. A group of transactions is put together into a "block". Legitimate transactions in blocks which were discarded can be recollected.
They observed a few issues with Bitcoin. Deflation: it allows adversaries who enter the currency later to still gain most of the money. Theft: coins can be lost forever. Their possible solutions are "checkpointing the past" (saving hashes of a set of coins), backing up coins, a static master secret, postdated backup transactions, and spending rate limits. They also suggest a new method for anonymity: a two-party mixer for transactions. Their conclusion is that Bitcoin is a good idea and their paper shows a couple of ways to fix its problems.
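The block/hash structure behind the "checkpointing" and history-revision discussion can be sketched with a toy chain (a deliberately simplified stand-in for Bitcoin's real block format and difficulty rules; the transaction strings and difficulty value are illustrative assumptions).

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash, transactions, difficulty=4):
    """Find a nonce so the block hash starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1

genesis = mine("0" * 64, ["coinbase -> alice"])
block2 = mine(block_hash(genesis), ["alice -> bob: 5"])

# Rewriting genesis would change its hash and orphan block2's `prev` link:
# a history-revision attacker must redo all proof of work downstream, which
# is the cost the Q&A's "negative feedback loop" refers to.
assert block2["prev"] == block_hash(genesis)
```

Checkpointing pins known-good block hashes so that even an attacker with enormous hash power cannot rewrite history behind a checkpoint.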
Q: If you have the computational power for a history revision attack, why not just take all the money?
A: There is a negative feedback loop in the history revision attack.
Q: is language Turing complete?
Q: If we can save SSL certificates, can we revoke sketchy things automatically?
A: Leaving humans to make decisions creates many problems.
Q: Can it be fixed?
A: Yes; the problem is the parameters, but it would need to be restarted from scratch.
Q: What is the purpose of slowing down the coin minting rate?
A: They want to control the shape of the minting curve, placing deflation at specific points.
Jason: FC is held in a virtual tax haven; laws sometimes respect privacy and sometimes do not.
Stewart: We can broadly categorize PETs by the different problems they raise with the law. Category 1, first-party PETs: used by the actual data subjects, tools which reduce visibility; Tor is the main example. SOPA has anti-circumvention provisions, which might make distributing Tor illegal and would inflict lots of collateral damage. Category 2, enterprise PETs: tools used by enterprises to help manage personal information more responsibly. Laws exist to prevent information being used to make distinctions about individuals, like hiring laws. When secure multi-party computation PETs become commercially available, things will get interesting, since they prevent information from being revealed at all (i.e., there is no information left to reveal). The bottom line is there are problems if you don't use PETs and problems if you do. Category 3, infrastructural PETs: federated identity management and the like, with selective credential revealing.
Travis: Software requirements must comply with security laws. There are two types of laws: rules (like specifying a type of encryption) and standards (very general, not specifying what is reasonable). Reasonable standards identify foreseeable internal and external risks. Companies must disclose data breaches, which lets the FTC know what people should use; the regulators must know what can happen in order to know what they should require. Rules can conflict with each other (examples were given of laws in various states).
Peter: (see Peter's paper at http://ssrn.com/abstract=1960602) Some countries have limits on cryptography. India: the maximum key length is 40 bits. China: requires Chinese-created standards. The Internet is rather insecure: lots of nodes, encryption, many intercepts. Crypto was classified as munitions until 1999; now one can distribute it unless sending to North Korea or the like. In India, security agencies want to wiretap in real time, commercial people want to encrypt, and the government is stuck in the middle. Keys are only held by individuals and corporations. Is it a good idea to ban SSL? In China there is Internet surveillance, and there are limits on effective crypto. To sell in China, make it in China; they insist on non-standard crypto systems. A 1999 law prohibits commercial use of crypto unless you have a license, though a soft law says no license is needed unless the core function is encryption. VPNs are not OK in China, and there is great uncertainty about what "core functionality" means. Three Chinese algorithms were publicly released recently, with no interoperability with global standards. The bottom line is that major nations are ramping up legal regulation of crypto.
Q to Travis: For laws which dictate particular technologies, is there an effort to make them flexible?
A: Regulators reach out to communities to find out what the standards should be. Experts say there is no set amount; the people who are most informed typically have other problems they are interested in.
Q to Peter: What happened with RIM?
A (by someone else): With the enterprise architecture, governments can get keys from corporations; with the personal architecture, everything is routed through RIM's servers.
Q to Peter: What about encrypting before transmission?
A: There are laws against it, and companies typically don't like to break laws. A possibility is to prevent enforcement somehow; international pressure is the only way.
Observation: need to write laws which do not mess crypto up.
Q to Peter: A couple of issues: there is a vast amount of crypto made in China and then shipped to India, and if crypto is outlawed, then only outlaws will have crypto. However, most privacy problems come from inside adversaries and people with warrants. Crypto helps protect metadata.
A (by someone else): We live in a world where both things are true: we must protect things on wires and protect things online too. You still have to disclose your passphrase.
Q: If you believe what was released at OBL's death, is the pushback through social issues, or a ruse, or something else?
A (Stewart): Very good point. It is common knowledge that most of the powers in the Patriot Act had been lobbied for over a long period; authorities are opportunistic. However, there are still legitimate threats, though these powers may not address the specific threats.
Q: Have you thought about Patriot Act section 215?
A: Section 215 is the part that says that for national security purposes the government can get any information it wants. The fight is continuing in the EU.
Q: What have we learned about the secret interpretation of section 215? (It was suggested that all communications are fair game: a third party has the information, so there is no 4th Amendment right.)
A: There would be outrage if true, but the issue kind of died down.
Observation: At least one Supreme Court justice has said the third-party doctrine is messed up.
Observation: Judges do not believe metadata is content, so it can be taken without a warrant.
Observation: In the UK, which metadata can be taken by police is restricted. With Facebook going to HTTPS, the only real way to take information is to have a high-bandwidth interface into Facebook.
Observation: In the EU, since data is private, there should be some sort of audit trail for data so the government must confess to what it does.
The PACE|AA Protocol for Machine Readable Travel Documents, and its Security
Jens Bender, Özgür Dagdelen, Marc Fischlin and Dennis Kügler
Presenter: Özgür Dagdelen
A new way for passports to be validated. The past protocol used for passports (PACE) did not have an active authentication method for the passports; there is passive authentication, but it does not prevent cloning. They noted this means the passports have a public/private key pair; however, then there is no deniability. They addressed active authentication with deniability. The protocol uses a Diffie-Hellman type exchange. They call their protocol PACE|AA; it is faster, deniable, and still secure. They use Schnorr signatures for active authentication, and to accomplish deniability they use a MAC result for the challenge response. They reuse randomness, save one computation and one round of communication, and almost achieve deniability (they noted they have a fully deniable version using Schnorr). To prove the protocol secure they use reductions; their security proof works in a strong model, assuming random oracles and an ideal cipher. This protocol also prevents impersonation, since more secrets are stored on the chip in the passport. They prove in the paper that the key used is good. For deniability they use different variables to get a Schnorr signature.
Oblivious Printing of Secret Messages in a Multi-party Setting
Aleksander Essex and Urs Hengartner
Presenter: Aleksander Essex
They developed oblivious printing, which allows a set of printers to print a secret message without knowing what it is. Given secure multi-party computation, it might be helpful to print pieces of a computation physically. They use invisible-ink overprinting: the message is printed in invisible ink, and a pen can activate the message so it becomes visible. Each message takes multiple printers to be completely printed. In this protocol each printer receives an encrypted ciphertext. There is a translation table which translates the ciphertext to the plaintext; this table is mixed and encrypted. They use a plaintext equality test to find which row should be printed. Each printer then shares which portion it prints using a secret sharing technique, and the printers use an operation like a visual XOR. They use a cut-and-choose protocol to prove the printers are honest. The last printer in the sequence uses black ink for the XOR of the previous pixels so each printer can verify the result.
The applications they mentioned were trustworthy voting and generation of an ElGamal key pair (which could be used for a digital-cash-like system). The limitations of this process are the printing process itself, document authentication, contrast, and the chain-of-custody requirement. Their takeaway was that users are now able to print a secret, human- or machine-readable, as a paper document. This system can fit into a wider protocol.
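The "visual XOR" sharing step can be sketched as a one-time-pad split of a bitmap. This is a toy stand-in for the paper's invisible-ink construction: each printer's share is uniformly random on its own, and only overprinting all layers (pixel-wise XOR) reveals the message.

```python
import random

message = [[1, 0, 1, 1],
           [0, 1, 0, 1]]          # hypothetical 2x4 bitmap to print

def make_shares(msg, n=3):
    """Split a bitmap into n random shares whose pixel-wise XOR is the bitmap."""
    rows, cols = len(msg), len(msg[0])
    shares = [[[random.randint(0, 1) for _ in range(cols)]
               for _ in range(rows)] for _ in range(n - 1)]
    # Last share fixes the XOR so all layers together reproduce the message.
    final = [[msg[r][c] ^ (sum(s[r][c] for s in shares) % 2)
              for c in range(cols)] for r in range(rows)]
    return shares + [final]

def overprint(shares):
    """Simulate stacking all ink layers: pixel-wise XOR of every share."""
    rows, cols = len(shares[0]), len(shares[0][0])
    return [[sum(s[r][c] for s in shares) % 2 for c in range(cols)]
            for r in range(rows)]

shares = make_shares(message)
assert overprint(shares) == message
```

Any proper subset of the shares is statistically independent of the message, which is why no single (or colluding minority of) printer learns the secret.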
Q: Did you develop this completely, even the pens?
A: We developed the pens and tested a single layer, but we haven't tested the multi-layer idea.
Q: Did you modify any printers?
A: We replaced the ink cartridges with our own ink. The original ink would only last for about 100 prints before the printers had to be replaced.
Reverse Fuzzy Extractors: Enabling Lightweight Mutual Authentication
for PUF-enabled RFIDs
Anthony Van Herrewege, Stefan Katzenbeisser, Roel Maes, Roel Peeters, Ahmad-Reza Sadeghi, Ingrid Verbauwhede and Christian Wachsmann
Presenter: Christian Wachsmann
The presentation was on a lightweight authentication scheme for RFIDs. They use physical unclonable functions (PUFs), which are like a hardware fingerprint: inherently unclonable, infeasible to predict, and tamper-evident. Two previous applications of PUFs were authentication and key storage, and both had problems. Their contribution is a lightweight PUF scheme with a small footprint on the tag. They added cryptographic hardware and error correction to the tags to complete the PUF scheme; to reduce the hardware footprint, they used reverse secure sketches. They also added mutual authentication, and proved security using the secure sketch. Their conclusion was that their system allows mutual authentication, has a small footprint, is scalable, and has applications to other devices and scenarios.
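A secure sketch can be illustrated with the classic code-offset construction over a 5-bit repetition code (an assumption for illustration; the paper uses stronger codes). Helper data lets a noisy re-reading of the PUF be corrected back to the enrolled response; in the paper's reverse variant, the cheap sketching step runs on the tag and the costly decoding runs on the reader.

```python
import random

BLOCK = 5  # repetition length; majority vote corrects up to 2 flips per block

def sketch(response):
    """Enrollment: XOR each 5-bit block with a random repetition codeword."""
    helper = []
    for i in range(0, len(response), BLOCK):
        bit = random.randint(0, 1)            # random codeword: bit repeated
        helper += [r ^ bit for r in response[i:i + BLOCK]]
    return helper

def recover(noisy, helper):
    """Reproduction: majority-decode each block, then undo the offset."""
    out = []
    for i in range(0, len(noisy), BLOCK):
        block = [n ^ h for n, h in zip(noisy[i:i + BLOCK], helper[i:i + BLOCK])]
        bit = 1 if sum(block) > BLOCK // 2 else 0   # decoded codeword bit
        out += [h ^ bit for h in helper[i:i + BLOCK]]
    return out

response = [random.randint(0, 1) for _ in range(20)]   # enrolled PUF reading
helper = sketch(response)
noisy = list(response)
noisy[3] ^= 1                                          # simulate PUF noise
noisy[11] ^= 1
assert recover(noisy, helper) == response
```

Each block of helper data leaks at most the codeword's redundancy, so the corrected response still contains extractable secret entropy.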
Q: What do you mean by a lightweight hash?
A: A hash with a low implementation cost, but still cryptographic.
Q: How many bits is the communication complexity?
A: The communication protocol is not optimal; it needs about 220 bytes. It takes a 64-bit challenge, generates 1785 bits in total based on the challenge, then sends that result back.
Q: is this actually built?
A: just proof of concept
CTL: A Platform-Independent Crypto Tools Library Based on Dataflow
Junaid Jameel Ahmad, Shujun Li, Ahmad-Reza Sadeghi and Thomas Schneider
Presenter: Junaid Jameel Ahmad
A new platform-independent cryptography library called the Crypto Tools Library (CTL). Since there are many types of devices which need a variety of cryptography, a platform-independent library could be used. Previous libraries are mostly dependent on a few programming languages. One example of where this could be used is video encoding: there is already a reconfigurable video coding (RVC) system, and RVC can be written in a variety of languages. CTL has a number of cryptosystems and cryptographic primitives; a small subset of what it includes is block ciphers, different modes of operation, and stream ciphers. They tested their library on single-core and multicore machines, achieving a performance increase of 300% to 400% on multicore machines. For future work, they wish to include a privacy-preserving garbled circuit system as well as multimedia applications, and to add more cryptographic algorithms to the CTL.
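The dataflow style behind CTL can be loosely illustrated with composable generator stages (a rough analogy, not CTL's actor language; the XOR "cipher" is a placeholder for a real block or stream cipher actor). Each stage consumes tokens from the previous one, so stages compose without caring what platform, or how many cores, runs them.

```python
def source(data):
    """Emit one token (byte) at a time."""
    for byte in data:
        yield byte

def xor_cipher(stream, key):
    """A toy cipher actor: XOR each token with a repeating key."""
    for i, b in enumerate(stream):
        yield b ^ key[i % len(key)]

def sink(stream):
    """Collect the processed tokens."""
    return bytes(stream)

key = b"\x13\x37"
ciphertext = sink(xor_cipher(source(b"attack at dawn"), key))
plaintext = sink(xor_cipher(source(ciphertext), key))
assert plaintext == b"attack at dawn"
```

Because stages only share a token stream, a dataflow runtime is free to schedule them on separate cores, which is one intuition for the multicore speedups reported.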
Q: How does the performance differ between paradigms such as VHDL and C?
A: We have not studied this difference yet.
A Cache Timing Attack on AES in Virtualization Environments
Michael Weiss, Benedikt Heinz and Frederic Stumpf
Presenter: Michael Weiss
A cache timing attack on AES in a virtualization environment using side channels. Many users use virtualization to improve security. Their attack exploits the cache of modern CPUs: the AES lookup table cannot be put completely into the cache, which enables the attack. It is based on the Bernstein attack, which requires the weakest attack model; the attacker needs to observe a whole AES encryption and assumes known plaintexts. They attack the trusted execution environment (TEE), a virtualization-based security architecture with both trusted and untrusted components, where AES can be used for mutual authentication. When the challenge starts the adversary starts timing, and when the response occurs he stops the clock; he can then recover the key.
For their implementation they used a BeagleBoard with a Cortex-A8 32-bit processor, the Fiasco.OC L4 microkernel, and L4Linux for the environment, with precise timestamps from the kernel. For AES they used five different implementations, including OpenSSL, Barreto, Bernstein, Gladman, and Niyaz. Depending on the implementation, keys can be recovered more easily in some cases than in others. They showed the feasibility of the attack on a TEE and provided measurements showing all the AES implementations are vulnerable; simple attacks need lots of samples, which cannot be gained from a few possible plaintext observations. Their conclusion is that cache timing leakage is a real threat.
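The statistical core of a Bernstein-style attack can be shown with a pure simulation (no real cache is measured; the leak model, noise level, and evicted-line choice are all assumptions for illustration). Table lookups T[p XOR k] are slower when the index falls in an evicted cache line, so averaging timings per plaintext byte exposes which plaintexts are slow, and that pattern reveals part of the key byte.

```python
import random

random.seed(1)
key_byte = 0x6B
EVICTED = set(range(0x00, 0x10))    # table indices sharing the evicted line

def encrypt_time(p):
    """Simulated encryption time: slow when T[p ^ key] misses the cache."""
    slow = 50 if (p ^ key_byte) in EVICTED else 0
    return 100 + slow + random.gauss(0, 5)

# Attacker: average many timings for each plaintext byte value.
avgs = [sum(encrypt_time(p) for _ in range(200)) / 200 for p in range(256)]

# The 16 slowest plaintexts are exactly {e ^ key_byte for e in EVICTED};
# since the evicted line starts at index 0x00, their shared high nibble
# is the key byte's high nibble.
slow_ps = sorted(range(256), key=lambda p: avgs[p], reverse=True)[:16]
recovered_high = slow_ps[0] & 0xF0
assert recovered_high == key_byte & 0xF0
```

Averaging is what defeats the noise, which is why the talk notes that simple attacks need lots of samples and fail when only a few plaintext observations are available.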
Q: Why did you choose a test bed close to mobile? Why not Android?
A: We chose it since it has debugging possibilities; mobile phones are more complicated to do this on. The hardware is the same.
Softer Smartcards: Usable Cryptographic Tokens with Secure Execution
Franz Ferdinand Brasser, Sven Bugiel, Atanas Filyanov, Ahmad-Reza Sadeghi and Steffen Schulz
Presenter: Steffen Schulz
A usable cryptographic token (or smartcard) with secure execution. Smartcards allow two-factor authentication. For their implementation they used Intel TXT (Trusted Execution Technology) to implement the smartcard. The main portion they needed to create was a driver for the PKCS#11 middleware; once this driver was created, a smartcard implementation on TXT became possible. Deploying the system would be like token deployment, and their system can also provide a trusted channel if needed. Currently it implements RSA signatures. They used openCryptoki for the middleware and also have a user interface. The trusted code base is about 3000 lines of code and the untrusted code is about 2000 lines. One application is encrypting or signing emails, and he demonstrated the user interface for the email application.
Their system is a two-factor authentication scheme: a user must possess the platform and know the PIN. They mention this is less secure than actual smartcards, since it has little resistance to tampering and it is possible to try many PIN entries to find the correct one. For future work they might look at web authentication, transaction confirmation for HBCI, synchronization across multiple platforms, and a formal analysis of separating the public/private token state.
Q: What happens if someone SSHes into the machine?
A: The dialogue will ask for the PIN. Also, the machine would have already been compromised.
Q: Only works for single user environment?
A: Yes a single user thing.
Q: Aren't lots of system services interrupted (including the WiFi)?
A: Yes, this is a problem; it suspends all PCI devices.
Q: How much assurance is there that the message you want to sign is what gets signed?
A: The same problem as with smartcards. Perhaps use a PKCS interface to include the message; for now it is RSA-signed.
Q: Public-key operations happen outside; where is the assurance? (How can one tell if verification has succeeded?)
A: We don't focus on that, but it is possible.
FC adjourned at noon on Thursday, Mar 1, 2012. In Papiamento we will add: "Masha danki! Ayoo!" (Thank you very much! Goodbye!) -SD