Dave Mazieres argued that the state of computer security is rather abysmal because of a lack of control on the part of applications. In his talk, Mazieres argued that capabilities on physical names provide the ultimate control (without privileged parties doing translations in the background). In addition, in pursuance of the principle of least privilege, he introduced the concept of hierarchically named capabilities. Hierarchically named capabilities allow an application to forge new capabilities whose names extend (have as a prefix) one of the application's current capabilities. The application can then selectively choose which resources to associate with the new capability and pass the restricted capability to other parties, giving them limited rights over the application's resources. The hierarchical naming also ensures that the owner of any capability is clearly identifiable.
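The prefix idea can be pictured with a toy check; the names (disk/app42), the path-style separators, and the functions below are hypothetical illustrations, not Mazieres's actual interface.

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 if `parent` names `child` or a path-style prefix of it. */
    static int is_prefix(const char *parent, const char *child) {
        size_t n = strlen(parent);
        return strncmp(parent, child, n) == 0 &&
               (child[n] == '/' || child[n] == '\0');
    }

    /* An application holding capability `held` may forge `wanted` only
     * if `wanted` extends a name it already holds. */
    static int may_forge(const char *held, const char *wanted) {
        return is_prefix(held, wanted);
    }

    int main(void) {
        const char *app_cap = "disk/app42";        /* capability the app holds */
        const char *sub_cap = "disk/app42/logs";   /* restricted capability    */

        if (may_forge(app_cap, sub_cap))
            printf("forge ok: %s -> %s\n", app_cap, sub_cap);
        if (!may_forge(app_cap, "disk/other"))
            printf("forge denied: disk/other does not extend %s\n", app_cap);
        return 0;
    }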
Mazieres answered that the solution was to use secure network protocols.
Mazieres - you don't give both file systems unrestricted access to the whole disk; give each permissions to ranges of blocks.
Mazieres - Everything is shared in the exokernel. Capabilities allow for safe sharing by granting permission for read or write.
Mazieres - Sure - no reason these principles couldn't be added to the UNIX system.
Mazieres - Some trusted guy, as before. He just has less opportunity to mess up in a way that will compromise the system.
Robert Grimm sketched an architecture for security in extensible systems. He motivated the problem by pointing out that, while the OS community had focused extensively on safety, security had been neglected. Specifically, security was implemented in an ad hoc fashion that was difficult to verify. He said that the mandatory access control model was a good model for security in extensible systems. The principal challenge was to pick a model from the many out there in the security world and to fit it into extensible systems.
Originally, the paper recommended the lattice model. Recent work has indicated that this was wrong: the lattice model is not sufficiently flexible. Instead, he pointed to Domain and Type Enforcement (DTE), which is flexible, allows for controlled change of privileges, and makes access modes explicit.
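A toy illustration of the DTE flavor of checking, with invented domain and type names (extension_d, kernel_iface, and so on) rather than SPIN's or Grimm's actual policy: subjects run in domains, objects carry types, and a table makes the allowed access modes for each (domain, type) pair explicit.

    #include <stdio.h>
    #include <string.h>

    enum { READ = 1, WRITE = 2, EXEC = 4 };

    struct dte_rule {
        const char *domain;   /* subject domain, e.g. an extension's domain */
        const char *type;     /* object type it may touch                   */
        unsigned    modes;    /* explicit access modes                      */
    };

    /* Hypothetical policy: extensions may read and execute the kernel
     * interface, but only a trusted domain may write system state. */
    static const struct dte_rule policy[] = {
        { "extension_d", "kernel_iface", READ | EXEC },
        { "trusted_d",   "system_state", READ | WRITE },
    };

    static int dte_allowed(const char *domain, const char *type, unsigned want) {
        for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++)
            if (!strcmp(policy[i].domain, domain) && !strcmp(policy[i].type, type))
                return (policy[i].modes & want) == want;
        return 0;   /* default deny */
    }

    int main(void) {
        printf("extension write to system_state: %s\n",
               dte_allowed("extension_d", "system_state", WRITE) ? "allow" : "deny");
        printf("extension exec of kernel_iface:  %s\n",
               dte_allowed("extension_d", "kernel_iface", EXEC) ? "allow" : "deny");
        return 0;
    }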
He went on to argue that SPIN was a good platform to explore these issues on, especially because of the explicit linking of extensions.
He showed a slide indicating that the price of security on application performance is negligible, even though microbenchmarks showed a significant increase in the cost of cross-domain system calls.
Grimm said that the granularity of the protection can be changed to accommodate performance needs.
Stephanie Forrest, from the University of New Mexico, argued that every computer system should be unique, so that a single attack cannot take down every system in the world. She likened this to diversity within a species, which keeps one disease from wiping out the entire species. Just because an attacker has found a way past one defense mechanism does not mean the attacker has succeeded against all members of the species.
There are big trends in the computer industry opposing this. That is, computers are becoming increasingly homogeneous due to economies of scale and the advantages of consistent behavior. Her goal, however, is to prevent widespread attacks and make intrusions much harder to replicate.
The challenges: can you randomize the code enough to be effective against attackers while minimizing the number of errors you introduce? Are the techniques too expensive for the benefit they provide? The answer: we'll see.
Examples - randomized compilers which vary memory layout, re-order code, or insert no-ops. Other possible randomization targets: runtime checks, process initialization, system calls, and dynamic libraries.
Her group implemented a randomized stack growth algorithm that was able to defeat stack overflow attacks in many cases.
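One way to picture the stack randomization idea is a startup pad of random size, so that exploit code relying on fixed stack addresses no longer lines up. This is just a sketch, not the New Mexico group's compiler-based implementation.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static void run_program(void) {
        int local = 0;
        printf("address of a stack variable: %p\n", (void *)&local);
    }

    int main(void) {
        srand((unsigned)time(NULL) ^ (unsigned)getpid());

        /* Consume a random, bounded amount of stack before doing any real
         * work, so every run sees different stack addresses. */
        size_t pad = (size_t)(rand() % 4096) + 1;
        volatile char shift[pad];
        memset((void *)shift, 0, pad);

        run_program();
        return 0;
    }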
Liedtke went through a complex derivation which argued that you can't really do fast security without address spaces - the hardware mechanisms for security. Savage interjected that he was mixing security with safety.
Liedtke went through the performance numbers for his high-speed server, which delivered about 600 megabits per second - limited only by the PCI bus. In simulation, it ran at 850 megabits per second. Flexibility was there because of the microkernel architecture and speed was there because of cheap IPC.
Liedtke went on to treat the problem of denial-of-service attacks on microkernels. To avoid denial-of-service attacks, a microkernel must ensure two things. First, servers must be able to control any necessary resources. Second, servers must be constructible in such a way that they cannot be blocked by an attacker.
On analyzing L4, Liedtke determined that a denial-of-service attack could be staged on the uKernel if a malicious server used up the page frames by creating a tremendous number of mappings for a single page. The solution is to limit the number of page frames granted to each pager and to require requesting services to donate page frames when the number of free frames runs low.
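The quota idea can be sketched roughly as follows, with invented structures and functions rather than L4's real interfaces: each pager's mappings consume frames against a bounded grant, and once the grant is exhausted the pager must donate frames of its own before further mappings succeed.

    #include <stdio.h>

    struct pager_quota {
        unsigned granted;   /* frames the kernel has granted this pager */
        unsigned in_use;    /* frames consumed by this pager's mappings */
    };

    /* Hypothetical kernel-side check before creating a new mapping. */
    static int map_page(struct pager_quota *q) {
        if (q->in_use >= q->granted)
            return -1;              /* quota exhausted: require a donation */
        q->in_use++;
        return 0;
    }

    /* The pager donates one of its own frames to raise its quota. */
    static void donate_frame(struct pager_quota *q) {
        q->granted++;
    }

    int main(void) {
        struct pager_quota pager = { .granted = 2, .in_use = 0 };

        for (int i = 0; i < 3; i++)
            printf("map %d: %s\n", i, map_page(&pager) == 0 ? "ok" : "quota exhausted");

        donate_frame(&pager);       /* pager supplies a frame of its own */
        printf("after donation: %s\n", map_page(&pager) == 0 ? "ok" : "quota exhausted");
        return 0;
    }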
The other resources, including CPU time, thread creation, and I/O ports, are entirely controllable by the servers. It is up to the servers, then, to put forth policies that prevent denial-of-service attacks on these resources.
Of the inter-task operations allowed by the L4 uKernel, only IPC causes potential problems for the server. These attacks can be minimized by running the server at sufficiently high priorities, limiting the receive buffer size for each thread, and using one thread per client.
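The receive-buffer limit can be pictured with a toy per-client queue; the structures below are invented for illustration and are not L4's IPC primitives. A flooding client can only fill its own thread's bounded buffer, leaving other clients unaffected.

    #include <stdio.h>

    #define BUF_SLOTS 4

    struct client_thread {
        int msgs[BUF_SLOTS];    /* bounded receive buffer for one client */
        int count;
    };

    /* Returns 0 on success, -1 if this client's buffer is already full. */
    static int enqueue(struct client_thread *t, int msg) {
        if (t->count >= BUF_SLOTS)
            return -1;          /* drop: flooding only hurts this client */
        t->msgs[t->count++] = msg;
        return 0;
    }

    int main(void) {
        struct client_thread flooder = { .count = 0 };
        for (int i = 0; i < 6; i++)
            printf("msg %d: %s\n", i, enqueue(&flooder, i) == 0 ? "queued" : "dropped");
        return 0;
    }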
Suspicious tasks can be grouped under a leader (called a chief). All messages from the suspicious tasks are serialized through the leader, reducing the possible breadth of the attack and possibly halting it, if the chief can detect a violation of some policy.
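A rough sketch of the chief's role, with a made-up message format and policy rather than L4's actual clans-and-chiefs interface: all clan traffic funnels through one routine, which either forwards a message or drops it when it violates the policy.

    #include <stdio.h>

    struct msg {
        int from;       /* clan member id     */
        int to;         /* destination server */
        int size;       /* payload size       */
    };

    /* Hypothetical chief policy: refuse oversized messages. */
    static int chief_allows(const struct msg *m) {
        return m->size <= 256;
    }

    /* All clan traffic is serialized through this single routine. */
    static void chief_forward(const struct msg *m) {
        if (chief_allows(m))
            printf("chief: forward %d -> %d (%d bytes)\n", m->from, m->to, m->size);
        else
            printf("chief: drop message from %d (policy violation)\n", m->from);
    }

    int main(void) {
        struct msg ok  = { .from = 1, .to = 9, .size = 64 };
        struct msg bad = { .from = 2, .to = 9, .size = 4096 };
        chief_forward(&ok);
        chief_forward(&bad);
        return 0;
    }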