>Date: Fri, 1 Nov 1996 05:59:54 -0800
>From: "Paul C. Kocher" 
>Subject: Fault-induced crypto attacks and the RISKS of press releases
>
>I've been watching the recent announcements about fault-induced
>cryptanalysis with interest [e.g., RISKS-18.50,52,54,55,56].  While the
>attacks are extremely powerful tools, they aren't at all new to the crypto
>community -- they have been discussed widely for years, they've been
>implemented by criminals and security system evaluators, and they are
>reasonably well documented.
>
>For example, NIST specifically discusses such attacks and the need to prevent
>them.  FIPS PUB 74-1 (see http://csrc.nist.gov/fips), "Guidelines for
>Implementing and Using the NBS Data Encryption Standard," was published way
>back in 1981 and says in section 5.2.2 on Error Handling:
>
>>       Errors associated with the primary encryption device should be
>> detected and handled by the secondary device. Physical tampering detectors
>> (vibration or intrusion sensors) may be used to detect physical tampering
>> or unauthorized access to the encryption unit. Sensors which detect
>> abnormal changes in the electrical power or the temperature may be used to
>> monitor physical environment changes which could cause a security problem.
>> However, the major requirement for error detection or correction involves
>> the application itself. The type of error control utilized will depend on
>> the sensitivity of the data and the application. The method selected may
>> range from no error handling capability for some systems to full redundancy
>> of encryption devices in other systems. Errors may be ignored when detected
>> or the entire system may be immediately shutdown.  Errors which could
>> compromise the plaintext or key should never be ignored.
>
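>To make the last point of that excerpt concrete, here is a rough sketch in
>C of what "never ignore" can look like in practice.  The names and fault
>codes are purely illustrative (none of this comes from FIPS 74-1 itself):
>any anomaly reported by the monitoring hardware wipes the key and
>suppresses output rather than letting a possibly compromised result out of
>the module.
>
>  #include <stdint.h>
>  #include <stdio.h>
>
>  #define KEY_LEN 8
>
>  static uint8_t device_key[KEY_LEN] = {1, 2, 3, 4, 5, 6, 7, 8};
>
>  /* Illustrative fault codes a secondary monitoring device might report. */
>  enum fault { FAULT_NONE = 0, FAULT_TAMPER, FAULT_PARITY, FAULT_POWER };
>
>  static void zeroize_key(void)
>  {
>      volatile uint8_t *p = device_key;  /* volatile so the wipe isn't
>                                            optimized away */
>      for (size_t i = 0; i < KEY_LEN; i++)
>          p[i] = 0;
>  }
>
>  /* Errors that could compromise the key are never ignored: assume the
>     worst, wipe the key, and tell the caller to shut the system down. */
>  static int handle_fault(enum fault f)
>  {
>      if (f == FAULT_NONE)
>          return 0;
>      zeroize_key();
>      return -1;
>  }
>
>  int main(void)
>  {
>      if (handle_fault(FAULT_PARITY) != 0)
>          printf("fault detected: key zeroized, output suppressed\n");
>      return 0;
>  }
>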
>Anyone interested in issues relating to secure hardware design should also
>study FIPS 140-1, "Security Requirements for Cryptographic Modules."  It's
>the best public document I know of for anyone designing tamper-resistant
>hardware: it covers the basics well and describes measures to prevent these
>attacks, suggesting, for example, the use of "two independent cryptographic
>algorithm implementations whose output are continually compared in order to
>ensure the correct functioning of the cryptographic algorithm," etc.  In
>general, these attacks are fairly straightforward to implement once the
>appropriate errors are available.
>
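>To illustrate the comparison measure just quoted, here is a minimal sketch
>in C.  A toy XOR transform stands in for the real cipher, and everything
>here is my own illustration rather than code from FIPS 140-1: two
>independently coded routines compute the same result, the outputs are
>compared, and ciphertext is released only when they agree; a mismatch is
>treated as a possible induced fault.
>
>  #include <stdint.h>
>  #include <stdio.h>
>  #include <string.h>
>
>  #define BLOCK_LEN 8
>
>  /* Implementation A: byte-by-byte XOR (toy cipher, stand-in only). */
>  static void encrypt_a(const uint8_t key[BLOCK_LEN],
>                        const uint8_t in[BLOCK_LEN], uint8_t out[BLOCK_LEN])
>  {
>      for (size_t i = 0; i < BLOCK_LEN; i++)
>          out[i] = in[i] ^ key[i];
>  }
>
>  /* Implementation B: the same transform, coded independently (word-wise). */
>  static void encrypt_b(const uint8_t key[BLOCK_LEN],
>                        const uint8_t in[BLOCK_LEN], uint8_t out[BLOCK_LEN])
>  {
>      uint64_t k, p, c;
>      memcpy(&k, key, BLOCK_LEN);
>      memcpy(&p, in, BLOCK_LEN);
>      c = p ^ k;
>      memcpy(out, &c, BLOCK_LEN);
>  }
>
>  /* Release ciphertext only when both implementations agree; a mismatch
>     is treated as a possible induced fault, so intermediates are wiped
>     and no output is produced. */
>  static int checked_encrypt(const uint8_t key[BLOCK_LEN],
>                             const uint8_t in[BLOCK_LEN],
>                             uint8_t out[BLOCK_LEN])
>  {
>      uint8_t a[BLOCK_LEN], b[BLOCK_LEN];
>      encrypt_a(key, in, a);
>      encrypt_b(key, in, b);
>      int ok = (memcmp(a, b, BLOCK_LEN) == 0);
>      if (ok)
>          memcpy(out, a, BLOCK_LEN);
>      memset(a, 0, sizeof a);
>      memset(b, 0, sizeof b);
>      return ok ? 0 : -1;
>  }
>
>  int main(void)
>  {
>      const uint8_t key[BLOCK_LEN] = {1, 2, 3, 4, 5, 6, 7, 8};
>      const uint8_t pt[BLOCK_LEN]  = {10, 20, 30, 40, 50, 60, 70, 80};
>      uint8_t ct[BLOCK_LEN];
>      if (checked_encrypt(key, pt, ct) == 0)
>          printf("outputs agree: ciphertext released\n");
>      else
>          printf("mismatch: possible fault, output suppressed\n");
>      return 0;
>  }
>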
>In addition to published sources, I've had many discussions with other
>cryptographers about error attacks and other hardware issues.  (Ross Anderson in
>particular is extremely knowledgeable about hardware attacks and has done
>much to raise awareness about them.  [See RISKS-18.52]) It's also important
>to note that there are quite a few other attacks which haven't been
>published but which are widely known to the community.  (For example, I've
>widely discussed my work on using timing attack math to analyze power
>consumption, use of error analysis to reverse-engineer secret algorithms,
>implementations of attacks using software pointer errors to damage secret
>keys and encryption function tables, etc.)
>
>With the timing attack I was alarmed by the amount of confusion and
>misinterpretation that followed my initial release of the paper (though I
>didn't send out any press releases or contact any reporters), even though
>it had been reviewed by many cryptographers prior to its release and was
>available online.  I haven't seen the actual Bellcore paper yet and don't
>know whether it was reviewed before they sent press releases to the media,
>but in general I worry about the consequences of the public trying to
>evaluate the importance, novelty, and quality of unreviewed work.
>
>Paul Kocher  pck@cryptography.com (or http://www.cryptography.com)