NDSS 2003: Conference report

Pekka Nikander

This is a personal report from my trip to San Diego for the NDSS conference. This time the document describes mostly technical and research-related material, since the food was worse than average, there were no wines worth mentioning, and I didn't have much chance to talk with interesting people.

Executive summary

The ISOC 2003 Symposium on Network and Distributed System Security was held at the Catamaran Resort Hotel in San Diego, as it has been several times before. The actual conference took place on February 6-7, with tutorials held on the previous day.

The conference program was quite interesting on average, but also diverse both in topics and quality. From LMF / NomadicLab / my own personal point of view, the most interesting issues included the following:

Furthermore, it was very interesting to note that within this community Mac OS X is really taking off. Based on casual observation, about one third of the laptops were Apples, mostly PowerBooks but also some iBooks. Many of the interesting people seemed to have switched over to Mac OS X, including Avi Rubin, Alper Yegin, and Steve Kent. So, I am in good company with my new PowerBook G4 DVD-R. :-)

Thursday, Feb 6th

After having arrived on Wednesday evening, after 21 hours of travelling, and somehow surviving the welcome reception, I woke up at 3:30am after some seven hours of sleep. Feeling much better than the night before, I was able to finalize my slides and process e-mail before it started to dawn. Walking along the shore of Mission Bay as the sun was rising was lovely, but it was also very cold for California, almost freezing. However, after a couple of Tai Chi 24s I was fresh and ready for the day.

Welcome

Clifford Neumann

Clifford started by briefly describing the history of NDSS. NDSS was started in 1993, initially as an independent workshop. It has always been intended to be a fairly small conference, to promote interaction between the speakers and the audience. The focus is on actual system design and implementation, not theory.

In 1994 NDSS joined forces with the Internet Society. The conferences grew quite large for a couple of years, but they are again smaller now. This year there were 135 attendees. The tutorial day was added in 1998, and tutorials have been available ever since; an outstanding paper award has been given out since 2000.

The National Security Agency (NSA) is the largest sponsor of NDSS, and always has been.

Clifford handed the podium over to Virgil Gligor, program co-chair, who thanked the program committee. This year 82 papers were submitted, 17 of which were accepted, by authors from Finland, France, Sweden, and the U.S. The number of accepted papers has always been, and will stay, at the level of 17-19 papers, due to the structure of the conference. Thus, given the acceptance ratio, NDSS is reaching the same level as the Oakland and ACM CCS conferences.

Session 1: Invited Speaker

The invited speech was planned to be on Total Information Awareness (TIA), by Dr. Robert L. Popp from DARPA, but it was cancelled on very short notice (the previous day), since he seems to be very busy on Capitol Hill. Thus, we had the opportunity to enjoy JI instead.

Why Don't we Still Have IPsec, dammit

John "JI" Ionnadis

JI described his talk as six talks in one, since most of the same could be said for IPv6, Mobile IP, PKIs, secure e-mail, and DNSSEC. In general, it is easy to blame people, vendors (or a particular vendor...), and the users. Basically, it is a question of the network effect, in the sense that all of these are kinds of technology which are only useful if other people have them, too. They are just like fax or telephone; a single telephone by itself is completely useless.

JI continued his rant by noting that people who really know IPsec can write it right, not IPSEC, IPSec, or Ipsec. He also blamed Microsoft Word, since it is almost impossible to spell IPsec correctly with it.

Getting to the real subject, IPsec is still missing configuration languages, GUIs, management software, etc. In a word, it is lacking ease of deployment. It is still hard to configure IPsec for use; even JI usually turns to Angelos when he wants to configure IPsec for something, preferring to use SSH for his own needs.

Perhaps one of the problems with IPsec is that it is too modular and too secure; the modularity and security goals may also have slowed down the standardization too much. The IPsec design decouples key management, policy, and the on-the-wire security protocols. Making the first versions of the standards took from 1992 to 1997, roughly 4-5 years. In the meantime, SSL (1995) happened, SSH started to catch on, firewalls got more sophisticated, NAT spread, and layer-4 redirectors appeared. Thus, the need for IPsec diminished and its operating environment changed.

Why, then, did SSL and SSH spread while IPsec didn't? SSL and SSH do not need to be configured, at least not by the end user. Whether SSL connections are really secure, and whether that matters, is a topic for a separate talk. SSL is there, dammit.

So why don't we have IPsec, part I? The standardization took so long that in the meanwhile SSL removed the urgency, as did SSH. The existing IPsec implementations were created by kernel hackers, and they did not care too much about the users. Most IPsec implementations today have disgusting command line interfaces for key management.

IPsec is currently everywhere and nowhere. All major operating systems have it, even embedded systems, routers, broadband modems, VPN access software, etc. But nobody really uses it because it is too bloody hard to configure. It is being used for VPNs and some remote/road warrior access, and it may come up on 3GPP phones, but it is not in general use.

From the management point of view, there are no standard APIs for applications to ask the network stack about security. The APIs that do exist, getsockopt/setsockopt and PF_KEY, have different semantics on different operating systems. There is no real way of defining policy at a high level. The IETF IPSP WG is not making much progress.

So why, part II? IKE is too complex. JI is known for distributing his "I dislike IKE" buttons. IKE is a real pain to configure; I have some personal experience of this, too. It really is a royal pain, not just in JI's but also in my opinion.

So why, part III? Even though IPsec is being used by businesses, integrating legacy systems with it is fairly hard. Peer-to-peer IPsec is not really useful without real PKIs. Currently it is really hard to set up app-to-app secure communications. Application writers don't know about IPsec, or they don't know they can use it, even on Windows.

IPsec still accounts for less than 1% of the Internet traffic.

What needs to be done? Firstly, remote access should be made easier. There is a great need for standardized, useful APIs. We need better configuration and management tools. We need to make IPsec work more nicely with the other tunneling protocols. And finally, we need to fix the implementations so that IPsec integrates better with routing within the hosts.

Furthermore, we have to prepare for future needs. If IPsec indeed happens, firewalls need to be redefined as policy enforcement points. When IPsec is used, "inside" and "outside" become defined by keys, not by network topology.

Steven Bellovin's Distributed Firewalls paper, in the November 1999 issue of ;login:, described the basic ideas in distributing the firewall into the hosts. There also appears to be a dedicated website for spreading information about distributed firewalls.

In the discussion afterwards, Steve Kent noted that the SPD design allows one to have symbolic identifiers in the SPD, so that the identifiers can be dynamically mapped to IP addresses. Thus, in theory, the IPsec SPD would be able to support mobility better than it does today, but people have not implemented that. This may be something that we have to have a look at.

Session 2: Mobility and Secure Routing

The first paper session was chaired by Bill Arbaugh, and focused on secure routing and mobility. Thus, it appeared that ours was the only paper about secure mobility, and it was lumped together with two routing security papers.

Efficient Security Mechanisms for Routing Protocols

Yih-Chun Hu

The first paper in the conference focused on efficient security for routing protocols. The goal of the work was to make it possible to verify routing packets at or near wire speed, using hash chains instead of signatures. In general, the talk was very interesting, and the paper is most probably well worth reading. (I haven't read it yet, though.)

The paper is available in electronic form. Related to the paper, Adrian Perrig's projects web page contains more information about the project.

The speaker started by describing distance vector routing protocols. In a distance vector protocol, a router basically distributes its hop-count based routing tables periodically. Sequence numbers are used to distinguish between older and newer versions of the distributed routing info.

Most of the distance vector routing related work was already presented in a companion paper, called SEAD, where the authors introduced the idea of using long hash chains to secure information in routing messages. In the original approach, they used a few consecutive hash values for representing a single sequence number. Within such a group, the different hash values were used to represent different metrics, so that an earlier hash within the group in the chain represents a better route. Furthermore, larger sequence numbers were represented by earlier hashes in the chain. Thus, when a new sequence number was taken into use, all the routes using old sequence numbers started to look much worse, thereby becoming obsolete.
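To make the idea concrete, here is a minimal Python sketch of how a single one-way chain can encode both the sequence number and the metric. This is my own illustration with made-up toy parameters and function names, not code from the SEAD paper.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    MAX_SEQ = 4   # toy parameters for illustration only
    METRICS = 4   # metric values 0 .. METRICS-1

    def make_chain(seed: bytes):
        """One-way chain h[0..n]; the final element h[n] is the authenticated anchor."""
        n = MAX_SEQ * METRICS
        chain = [seed]
        for _ in range(n):
            chain.append(H(chain[-1]))
        return chain

    def index_of(seq: int, metric: int) -> int:
        # Newer sequence numbers and smaller metrics map to earlier chain
        # positions, which cannot be derived from later (worse) ones.
        return (MAX_SEQ - 1 - seq) * METRICS + metric

    def forward(value: bytes) -> bytes:
        """A router increments the metric by hashing once before re-advertising."""
        return H(value)

    def verify(anchor: bytes, value: bytes, seq: int, metric: int) -> bool:
        """Hash the claimed element forward; it must land exactly on the anchor."""
        for _ in range(MAX_SEQ * METRICS - index_of(seq, metric)):
            value = H(value)
        return value == anchor

    chain = make_chain(b"secret seed")
    anchor = chain[-1]                      # distributed authentically, e.g. signed
    adv = chain[index_of(2, 1)]             # advertise sequence number 2, metric 1
    assert verify(anchor, adv, 2, 1)
    assert verify(anchor, forward(adv), 2, 2)   # honest one-hop metric increment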

However, there remained a number of problems in this scheme. The first problem was called the same metric problem. In a distance vector protocol, the routers increment the distances as they forward messages received from other hosts. In SEAD, the hosts are assumed to increment the distances by hashing the received hash values once more, thereby creating hash values that represent larger distances. Now, in a same metric attack, an attacking router simply replays the received metric instead of incrementing it.

As a solution to the same metric problem, the authors introduced Hash Tree Chains. In a hash tree chain, they used a converging hash tree for each sequence number, using separate hash nodes in the tree for separate forwarding nodes. This allows the receiver to associate the metrics with the specific nodes for which they were created, thereby noticing missing increments.

According to the speaker, the scheme used is similar to the HORS signature scheme; this is probably something that we should check later on.

There is also the possibility of a DoS attack, in which a router simply claims a very large sequence number whose checking requires the recipients to follow very long hash chains. To alleviate this, the authors introduced skiplists. Basically, a skiplist is a hash structure that allows one to "skip" fast forward within a hash chain. Thus, it is not a linear hash chain, but one that contains "skips" over spans of hashes, allowing faster checking of large chains. To implement skiplists, the authors used Merkle-Winternitz signatures.
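The following is my own rough sketch of the skip idea only; crucially, the real construction authenticates the skip values themselves with Merkle-Winternitz one-time signatures, which I simply assume away here by treating them as pre-trusted checkpoints. All names and parameters are mine.

    import hashlib

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    CHAIN_LEN = 4096
    SKIP = 64          # hypothetical skip interval

    def make_chain(seed: bytes):
        chain = [seed]
        for _ in range(CHAIN_LEN):
            chain.append(H(chain[-1]))
        return chain

    chain = make_chain(b"seed")
    # In the real scheme these checkpoint values are authenticated with
    # Merkle-Winternitz one-time signatures; here they are just assumed trusted.
    checkpoints = {i: chain[i] for i in range(0, CHAIN_LEN + 1, SKIP)}

    def verify_with_skips(value: bytes, idx: int) -> bool:
        """Hash forward only to the nearest later checkpoint (at most SKIP hashes)
        instead of all the way to the end of a possibly huge chain."""
        target = min(((idx // SKIP) + 1) * SKIP, CHAIN_LEN)
        for _ in range(target - idx):
            value = H(value)
        return value == checkpoints[target]

    assert verify_with_skips(chain[1000], 1000)   # 24 hashes instead of 3096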

After describing their work on distance vector protocols, the author continued with path vector routing protocols. In a path vector protocol, the routing messages carry the names of the nodes on a path instead of just a hop count.

There are two basic attacks against path vector protocols:

The basic approach to securing against path alteration had again already been presented in another paper, describing the authors' Ariadne approach. They now improved Ariadne with cumulative authentication that requires only a constant space overhead, independent of the path length.

The details of the approaches are in the paper. In general, I think that this was one of the most interesting talks in the conference.

Working around BGP: An Incremental Approach ...

Geoffrey Goodell

The second talk considered BGP security. A large part of the talk described BGP in general, how it hides some information, and why that information needs to be unhidden so that one can perform source verification and policy checking. As we know, BGP misconfiguration is a common problem; having more information available may help in better determining the sources of the misconfigured information.

The speaker continued by noting that S-BGP (Secure BGP by Steve Kent and others) has different goals than the present approach. According to their classification, S-BGP aims for synchronous authentication while the present approach, Interdomain Routing Validation (IRV), provides asynchronous authentication.

The rest of the talk described the IRV system. It is basically a centralized, per-AS repository of BGP data, implemented in XML. The IRV repositories exchange data with each other, if I understood correctly. Whenever routers receive routing updates, they consult their local IRV repository to verify that the updates match the currently known configuration.

In my personal opinion, the approach was not very interesting. It seemed to be very complex, at least in implementation, and some of the questions afterwards also questioned its security.

Integrating Security, Mobility and Multi-Homing in a HIP Way

Pekka Nikander

The third talk in the morning session was my talk. In general, I think the talk went well, and there were a few clarifying questions afterwards. However, it appears that learning to think in a new way is very hard for people in general. Most of the current practitioners learned the structure of the Internet architecture in college, and they have never questioned whether it is good or not; it just is. Thus, understanding HIP requires that they start to think in a new way, and that is not particularly easy.

Both the slides and the paper are available on the web.

Lunch

Lunch was served outside, on the lawns in front of Mission Bay. It was lasagne, hardly worth mentioning.

Session 3: Panel on Authentication and Privacy

The afternoon started with a panel on authentication and privacy. Basically, the panel consisted of different points of view from some members of the Computer Science and Telecommunications Board study committee investigating the implications authentication has for privacy.

Authentication and Privacy -- Some Observations

Steven Kent

The first presentation was given by Steve Kent, describing the overall purpose and goals of the study. Basically, their aim has been to understand the interplay between authentication and privacy, and their report is coming out soon. The membership of the committee has been quite diverse, representing different areas of expertise. He also underlined that affecting privacy does not always mean violating privacy, and the committee's goal has been to understand all kinds of consequences, not just violations.

When proposing this panel to NDSS, Steve's original intent was to have the report out, and on the conference CD-ROM. That didn't happen, but the report will be out fairly soon.

Terminology

Bob Blakley

The second presentation, given by Bob Blakley, covered mostly terminology. Starting from fairly standard and uncontroversial definitions for individual, name, and attribute, he continued to ones with more subtleties.

Firstly, the committee had come to the conclusion that identity is in the eye of the observer. Consequently, the observer is called an identity system. This has some interesting implications. For example, according to this definition, each of us humans possesses a separate identity system. The definitions of the identities of individuals within our mental identity systems overlap, but they are not necessarily identical.

Another interesting consequence is that not all identities must refer to real individuals. For example, Lara Croft is an identity that many of us have, but there is no individual behind it. Furthermore, it is also possible for individuals to have multiple identities, though it didn't become clear to me whether an individual can have multiple identities within a single identity system.

Using these definitions, identification is the process of using attributes of an individual to infer who the individual is, i.e., which identity the individual possesses within an identity system. It is noteworthy that people get this wrong relatively often within their own mental identity systems.

Secondly, authentication is the process of establishing confidence in the truth of some claim, and it doesn't have anything to do with identification per se. Even though the claim often concerns identities and individuals, it does not need to. Very seldom, if ever, is authentication itself the end goal. It is usually done for a purpose. Authentication is typically used for access control or for establishing accountability.

An authenticator is a piece of evidence which is presented to support the authentication of a claim. It increases confidence in the truth of the claim.

Bob next came to the distinction between individual authentication and identity authentication. From my point of view, this is a very important but certainly also very subtle issue. Unfortunately Bob covered these so quickly that I didn't have time to take any notes, and we have to wait for the report for the final truth. However, according to my (limited) understanding, individual authentication consists of providing evidence that a user/suspect/whatever really is a given individual, i.e., a known human being. For example, fingerprints or face recognition (performed by a human) can be considered individual authentication. On the other hand, identity authentication consists of providing evidence that a user/suspect/whatever really possesses or is entitled to an identity defined by an identity system. Providing an account name and a password is a good example of (weak) identity authentication. It does not provide any confidence about the actual individual using the system, but it does link the user and an identity together.

Finally, authorization is the process of deciding what an individual is allowed or not allowed to do. I found it very interesting that they defined authorization in terms of individuals instead of identities, but so they did.

In general it was very encouraging to me to see these definitions, since they largely agree with the analysis I did in the introduction of my Ph.D. Thesis back in 1999. Having these definitions soon out by a committee of this high reputation should gradually lead to better use of the terms in the literature.

Technologies

Steven Bellovin

The next presentation was a pretty standard description of the available authentication technologies, starting from the usual trinity of something you are, something you know, and something you possess.

Even though these were mostly old topics, Steve emphasized that the technologies must be used in the right context. For example, most people consider passwords easy and cheap, though not necessarily secure. However, depending on the context, passwords can be really expensive, in terms of revocation, replacement procedures, etc.

Privacy

Deirdre Mulligan

The next presentation was perhaps the most interesting one to me. Deirdre Mulligan, a privacy expert, started by discussing how the concept of privacy is very much dependent on the political and legal system. It is contextual, cultural, and depends on the time period. Thus, the difficulty of defining exactly what privacy means leads to difficulties in the discussion.

According to her, privacy can be divided into the following four dimensions:

Of these, information privacy seems to be most in jeopardy from modern information systems. Basically, information privacy means that even when an individual is giving out information in a transaction, that does not mean that the recipient may do anything he or she wishes with that information. The information is given only for a specific purpose, and by the information privacy principle it must not be used for other purposes.

Bodily integrity is easy to understand; in our culture, everybody considers their body private, and there must be an explicit reason for bodily searches or other violations of bodily integrity.

Decision privacy relates to our ability to make our own choices. Basically, it includes the freedom to decide how to raise your children, to choose your religion, etc. In my personal view, decision privacy may be the most culturally dependent of the four.

Communications privacy is very much a subset of information privacy. It covers issues such as our expectations about the privacy of first class mail, telephone calls, etc. In a word, it provides special protection for information privacy in the communication context.

Deirdre spoke fairly long and about many important issues. However, I was getting tired (can you spell jet lag? I guess I couldn't at that time). Thus, I just couldn't take notes on everything, and I had to settle for noting a few important issues that she mentioned.

One of the issues that she talked about was consent and the requirement of interaction by the individual. Today, most authentication technologies require some interaction by the individual. But some, like automated face scanning, i.e., covert authentication systems, don't require any interaction by the individual being identified. This is clearly a problem, but I don't know how to classify it within the four categories mentioned, and I don't remember if she did.

A couple of other issues mentioned were the proliferation of (low-grade) authentication systems everywhere and the problems with aggregation of data.

Overall, Deirdre, like Steve Bellovin earlier, emphasized that the privacy implications always need to be evaluated in some context. Otherwise the discussion remains extremely abstract, and she apologized that her talk this time was exactly that, extremely abstract.

Other issues

Steve Kent

The final speaker before the questions was Steve Kent, again. This time he described some of the problems, such as secondary use of identifiers, identity theft being very much a side effect of authentication system design choices, etc. He also spoke about the special role of the government both as an issuer and a consumer of credentials. With private organizations you can make a choice, but it is a little harder to change your government.

In general, again, Steve emphasized that privacy is very much a system issue, very context dependent.

Questions

The presentations were followed by a half-hour session of questions from the audience. Some of the more interesting notes are collected below.

There is a difference between the kind of information available through Google and the kind available by examining transaction records. We should really be worried about the possible linkage of transaction records and other kinds of more readily available information.

Bob Blakley noted that identity theft is possible because we deal much more with strangers than we used to. There is a difference between a small community and current society. One of the problems is that third-party assurances are of unknown value.

It is noteworthy that the committee did consider some measures that could be taken, like recommending limited-scope PKIs, recognizing the fact that people legitimately have different identities for different contexts, separating authorization from authentication, and acknowledging that there are times when you don't have to authenticate someone in order to find out whether they have authorization.

Rest of the day

After the panel I was so tired that I went to sleep. I had a wake up alarm at 7pm for the banquet, but I just couldn't get up. Thus, I continued sleeping until 3am, and started to work after that.

Friday, Feb 7th

The skies were clear in the morning, like the day before, and this time I was prepared for the cold. I also crossed the peninsula to see the ocean. There were people surfing, in wetsuits, already that early (6:30am). Californian lifestyle, I guess.

Session 5: Fault and intrusion detection

The first session of the second day concentrated on vulnerability and intrusion detection. It was a long session, with four papers, and I didn't find any of them particularly interesting, but that may just be because I don't care that much about the topic areas.

Comparison of tools for detecting buffer overflows and other programming errors

John Wilander

The presentation and paper compared various tools for dynamic (runtime) discovery of buffer overflows, format string attacks, and other similar attacks utilizing programming mistakes. The same authors had compared tools for static (compile-time) discovery in an earlier paper that appeared at NordSec'02.

In the current paper, four tools were tested: StackGuard, StackShield, ProPolice, and Libsafe/Libverify. ProPolice appeared to be the best of the four.

Traps and Pitfalls in System Call Interposition based Security Tools

Tal Garfinkel

Intrusion detection by interposing on (tracing/spying on) system calls has become a popular technique since the Janus paper won the best paper award at Usenix. Existing tools include Janus, MapBox, Systrace, and various kinds of software wrappers.

The main point of the presentation was that the system call interface is complex, not well documented, changes over time, and was not designed for interposition. Consequently, the tools turn out to be quite broken in practice; the authors found holes in every tool they looked at. However, all the problems described in the talk were fairly subtle, seeming to require detailed knowledge of the tool used.

I am not very convinced about the value of the paper, but it certainly helps to puncture some of the overconfidence people may have in Janus and friends.

Detecting Service Violations and DoS Attacks

Ahsan Habib

The speaker had a very heavy accent, making the talk very hard to understand. Consequently, I didn't really try to listen to the talk. However, based on the slides, the paper seemed to include a very good analysis of the various DoS prevention tools and techniques available. I would recommend studying the paper to those people who are interested in DoS attacks and their prevention.

A Virtual Machine Introspection based Architecture for Intrusion Detection

Tal Garfinkel

The idea of this talk was to use a Virtual Machine Monitor to co-locate an Intrusion Detection System at a physical host, but to run it in a really isolated protection domain, a separate virtual machine. This leads to a situation where the IDS is isolated from the watched operating environment, but where it has good visibility into its target to detect suspicious-looking activity. In a way, the technique gives a view from below.

Since the topic of the talk was really beyond my area of expertise and my interests, I didn't listen too closely, even though the practical implementation using VMware sounded interesting.

Session 6: Spam and IP Telephony

The second session before lunch concentrated on spam prevention, but also contained a talk about IP telephony security. I basically skipped the latter; the paper is probably interesting to the SIP people, though.

Why do we still have spam, dammit

JI, again

We had another chance to enjoy listening to JI. As everybody agrees, spam is bad. A significant fraction of JI's e-mail is spam; mine likewise.

Spam is a social disease. Spammers are not in the same community as we are. JI doesn't hang out with spammers; neither do I.

We can try legislating against spam, but that has its own problems, which JI didn't want to go into in his talk.

Spam starts with exposure: by giving out your e-mail address on an on-line forum, by buying something on-line, or even sometimes through person-to-person messages. We give out the e-mail address because we want the recipients to be able to answer us. There is always a reason why we want them to be able to respond to us.

The situation is complicated because e-mail addresses are long-lived, and therefore they sooner or later end up in spam lists, and there is no real way of getting them removed. So, we want the e-mail addresses to have an expiration date.

From spam to SPAs: Single Purpose Addresses

Arriving at the core of his talk, he proposed combating spam with SPAs. Basically, we need to define a policy for the conditions under which a received e-mail message is acceptable:

In most transactions you want the e-mail address to be usable for a while, maybe once or twice. This is somewhat similar to single-use credit card numbers.

Now, the idea of SPAs is to use unique e-mail addresses that encode policy. The policy must be enforced by the creator of the address in such a way that random addresses do not work. In practice, a simple SPA address encodes an expiration date and an expected sender's address into a base32-encoded user-name portion of the e-mail address.
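As an illustration of how such an address might look, here is a hedged Python sketch. The field layout, the use of a keyed MAC to make guessed addresses fail, the "+" sub-addressing style, and all names and domains are my own guesses; JI's actual format is described in the paper.

    import base64, hashlib, hmac, time

    SECRET = b"recipient-side key"   # assumption: known only to the address owner

    def make_spa(expected_sender: str, days_valid: int, local_part: str = "ji") -> str:
        """Encode an expiry date and the expected sender into the user-name part,
        with a truncated MAC so that made-up addresses do not verify."""
        expiry = int(time.time()) + days_valid * 86400
        blob = f"{expiry}|{expected_sender}".encode()
        tag = hmac.new(SECRET, blob, hashlib.sha256).digest()[:6]
        token = base64.b32encode(blob + tag).decode().rstrip("=").lower()
        return f"{local_part}+{token}@example.com"

    def accept(address: str, envelope_sender: str) -> bool:
        token = address.split("@")[0].split("+", 1)[1].upper()
        token += "=" * (-len(token) % 8)              # restore base32 padding
        raw = base64.b32decode(token)
        blob, tag = raw[:-6], raw[-6:]
        if not hmac.compare_digest(tag, hmac.new(SECRET, blob, hashlib.sha256).digest()[:6]):
            return False                              # forged or mangled address
        expiry, sender = blob.decode().split("|", 1)
        return int(expiry) > time.time() and sender == envelope_sender

    addr = make_spa("alice@example.org", days_valid=30)
    assert accept(addr, "alice@example.org")
    assert not accept(addr, "spammer@example.net")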

Mailing lists are a problem for SPAs since the recipient's address is only in the envelope. Thus, JI made a little hack that adds the envelope address into an X-SPA-To: header so that the recipient address is visible in the received e-mail message.

It must be noted that SPAs are just another set of tools in the toolbox, with the big benefit that they do not need any local state to be remembered. They are not a panacea. For example, a challenge-response system is much more appropriate for already exposed addresses. On the other hand, what is really good about SPAs is that if enough people start using them, they will make the spam lists larger and impose more costs on the spammers.

During the discussion, somebody suggested that the system could be augmented with a web site that can be used to request one-time, short-lived addresses that are valid for only one sender. In that way real people can get an e-mail address for you, but such addresses don't do spammers much good. Of course, if needed, such a web site can include a human recognition feature, making it virtually impossible for spammers to create any automatic way of getting such addresses.

A downside of SPAs is that it is currently practically impossible to get code released from AT&T, and therefore the system is not available. However, JI urged people to read the paper, to reimplement the system, and to put it out in public. I think this would be a great student project for someone. Besides, having JI on your side is not a bad thing...

Moderately hard, memory-bound functions

Mike Burrows, Microsoft Research

The second paper in the session got the outstanding paper award of the conference. Like the paper, the talk was good, as you would expect from Mike Burrows. Basically, he described a class of memory-intensive functions that can be used to implement protocol puzzles.

The functions can be described as a form of weak cryptography that runs at approximately the same speed on most computers. Since they are memory bound and not CPU bound, the difference in CPU speeds does not affect the run-time of the functions that much. Instead, these functions are designed in such a way that they cause constant cache misses, making the CPU - memory interface the bottleneck. Since there is less variation in the speed of main memory, the present functions run at approximately the same speed on a large class of current computers, from top-model servers to PDAs.

Mike started his talk by discussing the basic models of preventing spam, and DoS in general. While there are several methods, the current work is based on the idea of requiring a payment of some kind, making people pay with money, attention, or computation. And it is computation, or actually memory access, that people pay with this time.

Earlier works have relied on CPU-intensive computations, but machines vary widely in CPU power. The trick to getting away from CPU speed variations is to spend not CPU but memory bandwidth, since the gap in memory-system performance is much smaller. Thus, memory-bound functions should be more egalitarian.

Thus, the goal of this work was a family of problems that take many cache misses to solve, e.g. on the order of 2^23 cache misses, are much faster to set and check than to solve, can be expressed and answered concisely, can be made harder by changing a parameter, and where solving one problem doesn't help with the others. The requirements are pretty similar to those of CPU-intensive puzzle schemes, apart from the use of cache misses.

After considering a number of possible constructs, they decided to use a near-random function F() and to require reversing a series of its applications. That is, the recipient selects a random number x, applies F() repeatedly to it, producing the series F(x), F(F(x)), F(F(F(x))), ..., and finally computes a checksum over the series. It then hands the final value in the series, F^k(x), and the checksum to the sender. The sender needs to search out the path from the final value back to x and return it to the recipient.

The function F() is constructed in such a way that the most efficient way of inverting it is to store the inverse values in a table and to index that table directly. Furthermore, since F() is (near) random, any given image y is likely to have a number of pre-images F^-1(y). This results in a tree of pre-images rooted at the value F^k(x). The sender has to search over this tree space, test each path in the tree against the checksum, and finally return the path that matches the checksum.

Most modern computers have a few megabytes of cache. To force cache misses it is necessary to use a large enough pre-image space. A space of size 2^22 - 2^23 seems to be appropriate, i.e., using more than 8 MB but less than 64 MB of memory. With the tree functions used, an appropriate tree depth appears to be 2^11 - 2^13, or about the square root of the space size. However, it should not be much larger, or some CPU-intensive shortcuts become available.
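To make the construction concrete, here is my own toy Python reconstruction of the idea as I understood it from the talk. The parameters are shrunk by orders of magnitude so it runs instantly, F() is an arbitrary stand-in, and the real scheme's function family and checksum differ; at realistic sizes, building and probing the inverse table is what forces the cache misses.

    import hashlib, random
    from collections import defaultdict

    SPACE = 1 << 16      # toy space; the paper suggests around 2^22 - 2^23
    CHAIN_LEN = 1 << 8   # toy chain length; roughly sqrt(SPACE)

    def F(x: int) -> int:
        """A fixed near-random function on the space (an arbitrary stand-in)."""
        return int.from_bytes(hashlib.sha256(x.to_bytes(4, "big")).digest()[:4], "big") % SPACE

    def checksum(path) -> bytes:
        return hashlib.sha256(b"".join(v.to_bytes(4, "big") for v in path)).digest()

    def set_puzzle():
        """Setter: cheap forward walk plus a checksum over the whole path."""
        x = random.randrange(SPACE)
        path = [x]
        for _ in range(CHAIN_LEN):
            path.append(F(path[-1]))
        return path[-1], checksum(path)

    def solve(end, chk):
        """Solver: build the inverse table, then search the pre-image tree."""
        inverse = defaultdict(list)
        for x in range(SPACE):
            inverse[F(x)].append(x)

        def search(value, suffix):          # depth-first over the pre-image tree
            if len(suffix) == CHAIN_LEN + 1:
                return suffix if checksum(suffix) == chk else None
            for pre in inverse[value]:
                found = search(pre, [pre] + suffix)
                if found:
                    return found
            return None

        return search(end, [end])

    end, chk = set_puzzle()
    path = solve(end, chk)
    assert path is not None and path[-1] == end and checksum(path) == chk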

In their empirical experiments, a WinCE-based 300 MHz PDA was about five times slower than a 2.4 GHz top-end server.

In general, the talk was very good, and I found the paper to be excellent. Required reading for anyone dealing with DoS resistance.

Secure IP telephony Using Multi-layered Protection

Brennen Reynolds, UC Davis

This talk focused largely on how IP telephony and SIP work. Since I personally couldn't care less about SIP, I basically just skipped the talk. However, reading the paper is probably useful for people working with SIP and friends.

Lunch

Lunch was served outside, on the lawns, like yesterday. This time the dish was slightly better, roasted something, and actually almost delicious.

Session 7: Cryptography

The first session after the lunch concentrated on crypto. As you probably know, I am not a cryptographer, and never will be. Consequently, I didn't get too much from this session.

Proxy cryptography revisited

Anca-Andreea Ivan

I have to confess I didn't understand the point of this talk at all. Read the paper if you are interested.

Proactive Two-Party Signatures for User Authentication

Antonio Nicolosi

The authors looked for solutions for a situation where the user wants to create signatures using a software-only client but does not completely trust the client computer. Under these conditions, the user can't use her private key directly at the client.

One possible solution is to split the private key, and to create a signature at two different computers: An initial but still invalid signature is created by the user at the partially trusted client, using a key half available only at the client. The signature is then validated by a second key half, available at a partially-trusted server.

The server endorses signatures and keeps a log of all accesses and of the signatures it helps to create. Additionally, the server can also act as a repository for the user's encrypted key half. But it is still hard to recover if Alice's key half gets compromised.

Thus, we arrive at proactive two-party signatures: add a refresh protocol that re-randomizes the sharing of the key. Whenever the sharing is refreshed, a possibly compromised user's half of the key is rendered useless.
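To illustrate just the refresh idea, here is a minimal Python sketch using a plain additive sharing of a private exponent. The actual two-party signature scheme in the paper is different and more involved; all names, the modulus, and the sharing form are my own choices for the sketch.

    import secrets

    q = 2**255 - 19          # an arbitrary large prime, used only as the modulus here

    def split(key: int):
        """Additive sharing: client_share + server_share == key (mod q)."""
        client = secrets.randbelow(q)
        return client, (key - client) % q

    def refresh(client: int, server: int):
        """Re-randomize the shares. The shared key stays the same, but an old,
        possibly stolen client share is now useless with the new server share."""
        r = secrets.randbelow(q)
        return (client + r) % q, (server - r) % q

    key = secrets.randbelow(q)
    c0, s0 = split(key)
    c1, s1 = refresh(c0, s0)
    assert (c1 + s1) % q == key      # the same key is still shared
    assert (c0 + s1) % q != key      # stale share plus fresh share: useless
                                     # (fails only with negligible probability)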

Source code available at http://www.fs.net

Good talk and good work, if you happen to need this kind of a scheme.

Session 8:

Efficient Multicast Packet Authentication

Refik Molva

A multicast setting is an asymmetric setting. A generator generates messages, and a number of verifiers verify them. The setting is lossy; not all packets reach all recipients. Due to the nature of the application protocols, the situation is usually also space and time constrained. That is, the authentication code should use a fairly small amount of space in a packet, and it must be fast to either verify (pre-recorded data) or generate and verify (real-time) packets.

To meet the space and time constraints, the usual method applied is signature amortization. That is, the cost of a signature verification is spread over a number of packets. There are three popular techniques for signature amortization:

This paper concentrated on using error correction codes to recover hashes over a lossy channel. A special type of error correction codes, called erasure codes, work on blocks (as block ciphers do); an example is Tornado codes, which are based on XOR.

The basic idea is to compute hashes over the packets to be sent, and then create a signature over all of the hashes. Additionally, to provide robustness against packet losses, one creates error correction codes and conceptually adds them to the signature. The error correction codes allow the receiver to recover from packet losses while still being able to compute the hash values for the packets and verify the signature.
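Here is a hedged Python sketch of the amortization idea, using a single XOR parity block in place of a real erasure code (so it tolerates only one lost packet per batch) and a keyed hash as a stand-in for the public-key signature; the actual scheme in the paper uses proper erasure codes such as Tornado codes, and all names here are mine.

    import hashlib

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def sign(data: bytes) -> bytes:
        # Stand-in for a real public-key signature over the concatenated hashes.
        return H(b"signing key" + data)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def protect(packets):
        """Sender: one signature plus one parity block amortized over the batch."""
        hashes = [H(p) for p in packets]
        parity = hashes[0]
        for h in hashes[1:]:
            parity = xor(parity, h)          # XOR parity: recovers one lost hash
        return parity, sign(b"".join(hashes))

    def verify(received, parity, signature):
        """Receiver: 'received' is the batch with None for one lost packet."""
        hashes = [H(p) if p is not None else None for p in received]
        if None in hashes:                   # recover the missing hash from parity
            missing = hashes.index(None)
            rec = parity
            for i, h in enumerate(hashes):
                if i != missing:
                    rec = xor(rec, h)
            hashes[missing] = rec
        return sign(b"".join(hashes)) == signature   # one check per batch

    packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
    parity, sig = protect(packets)
    assert verify([b"pkt0", None, b"pkt2", b"pkt3"], parity, sig)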

The signature and error correction codes can be sent in a previous batch of packets, along with the packets to be checked, or piggybacked on the next batch of packets. The method depends on the needs of the application. In all cases there is a need for buffering, and therefore the present method is not suitable for real-time media.

Efficient Distribution of Key Chain Commitments for Broadcast Authentication in Distributed Sensor Networks

Peng Ning

Sensors need to authenticate commands received from the (more powerful) base stations. Sensors are resource constrained: low computational capability, little storage space, slow communications, and possibly energy constraints as well.

The work described is based on uTESLA and extends it into what the authors called multi-level uTESLA. Basically, they enhanced uTESLA with a new hash chain initialization protocol that is based on key predetermination and is thereby able to use broadcast instead of unicast. In a way, the method may be described as using a high-level key chain to authenticate the commitments of low-level key chains.
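As a reminder of how the underlying uTESLA key chain commitment works, here is a minimal Python sketch. It ignores the time-interval bookkeeping and delayed-disclosure rules, as well as the multi-level part where a higher-level chain authenticates the commitments of lower-level chains; all names and parameters are mine.

    import hashlib, hmac

    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    N = 100                                  # toy chain length

    # Base station: build the key chain by repeatedly hashing a secret seed.
    keys = [H(b"secret chain seed")]
    for _ in range(N):
        keys.append(H(keys[-1]))
    keys.reverse()                           # keys[i] hashes down to keys[0]
    commitment = keys[0]                     # pre-distributed authentically to sensors

    def mac_command(i: int, command: bytes) -> bytes:
        """MAC a broadcast command with the still-undisclosed key of interval i."""
        return hmac.new(keys[i], command, hashlib.sha256).digest()

    def sensor_verify(commitment, i, command, tag, revealed_key) -> bool:
        """When the key of interval i is later disclosed, hash it back to the
        commitment, then check the MAC that was received earlier."""
        k = revealed_key
        for _ in range(i):
            k = H(k)
        return k == commitment and hmac.compare_digest(
            tag, hmac.new(revealed_key, command, hashlib.sha256).digest())

    tag = mac_command(7, b"report temperature")
    assert sensor_verify(commitment, 7, b"report temperature", tag, keys[7])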

While I didn't find this paper too interesting on its own, it gave a nice contrast to the first regular paper of the conference, which described how to use hash chains to secure routing messages.

Rest of the day

Similar to the previous day, I became really tired during the afternoon. There were relatively few people left after the closing remarks, and I didn't find anyone to go to dinner with. Thus, I bought a roll, some crisps, and a beer from a local deli and went to bed early, allowing me to wake up at 4 am, fresh and ready for the return flights.