SAMSON Network Memory Server Project

Prof. Eugene W. Stark, Project Director

Overview

A network memory server (NMS) is a device that provides clients with access to a large amount of RAM via a fast network memory paging service, in analogy to the way in which a network file server provides clients with access to a large amount of disk storage. The ready availability of high-speed (>1Gb/sec), low-latency (<10us) networking hardware now makes it feasible to use commodity components to construct a network memory server that can offer a memory paging service two orders of magnitude faster than paging to and from local disks. Such a server could provide a seemingly "infinite" memory resource for memory-hungry applications on client systems, in such a way that the client applications can execute nearly as fast using the network memory server as they would if all the memory were local to the client.
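To make the idea concrete, here is a minimal user-level sketch, in C, of a client fetching a single 8K page from a remote memory server over TCP. The request format (a bare page number), the server address and port, and the function names are illustrative assumptions only; they do not reflect the actual SAMSON protocol or code, which runs inside the kernel paging path.

    /*
     * Illustrative sketch only: a client requesting one 8K page from a
     * hypothetical remote memory server over TCP.  The request format
     * (a bare page number), server address, and port are assumptions,
     * not the actual SAMSON protocol or code.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    #define PAGE_SIZE 8192

    static int fetch_page(const char *server_ip, int port,
                          unsigned long page_no, void *buf)
    {
        struct sockaddr_in addr;
        size_t got = 0;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        if (inet_pton(AF_INET, server_ip, &addr.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            goto fail;

        /* Send the page number we want, then read back PAGE_SIZE bytes. */
        if (write(fd, &page_no, sizeof(page_no)) != sizeof(page_no))
            goto fail;
        while (got < PAGE_SIZE) {
            ssize_t n = read(fd, (char *)buf + got, PAGE_SIZE - got);
            if (n <= 0)
                goto fail;
            got += n;
        }
        close(fd);
        return 0;
    fail:
        close(fd);
        return -1;
    }

    int main(void)
    {
        char *page = malloc(PAGE_SIZE);

        if (page != NULL && fetch_page("192.168.1.10", 9000, 42UL, page) == 0)
            printf("fetched page 42 (%d bytes)\n", PAGE_SIZE);
        free(page);
        return 0;
    }

The point of the sketch is simply the request/response pattern: a small request goes out, one page of data comes back, and on a low-latency network the whole round trip is far cheaper than a disk access.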

SAMSON stands for "Scalable Active Memory Server on a Network". This project, which has been funded by NSF grant NSF-CISE EIA-98-18342 (Prof. Tzi-cker Chiueh, PI, and eight co-PIs), has as its objective the construction of a prototype network memory server using a cluster of commodity PC-class systems. A block diagram of the system can be found here.

News Flash (June, 2003)!

Professor Stark has implemented the event-logging mechanism that was part of the original design. It is now possible to make some cool graphs showing detailed system performance. Also see here for a report on the performance of the "Visible Human" application running on the system.
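As an illustration of the kind of mechanism involved, here is a minimal, hypothetical sketch of a timestamped event log: events are recorded in a preallocated fixed-size buffer so that logging stays cheap while measurements are being taken, and the buffer is dumped afterwards for plotting. The names and structure are assumptions for illustration, not the actual SAMSON event-logging code.

    /*
     * Illustrative sketch of a simple timestamped event log (not the
     * actual SAMSON event-logging code).  Events are recorded in a
     * preallocated fixed-size buffer and dumped later for plotting.
     */
    #include <stdio.h>
    #include <sys/time.h>

    #define LOG_ENTRIES 4096

    struct log_entry {
        struct timeval when;    /* time the event was recorded */
        int event;              /* hypothetical event code */
        unsigned long arg;      /* e.g., a page number or byte count */
    };

    static struct log_entry log_buf[LOG_ENTRIES];
    static unsigned int log_next;

    static void log_event(int event, unsigned long arg)
    {
        struct log_entry *e;

        if (log_next >= LOG_ENTRIES)
            return;                     /* log full: drop further events */
        e = &log_buf[log_next++];
        gettimeofday(&e->when, NULL);
        e->event = event;
        e->arg = arg;
    }

    static void log_dump(FILE *out)
    {
        unsigned int i;

        for (i = 0; i < log_next; i++)
            fprintf(out, "%ld.%06ld %d %lu\n",
                    (long)log_buf[i].when.tv_sec,
                    (long)log_buf[i].when.tv_usec,
                    log_buf[i].event, log_buf[i].arg);
    }

    int main(void)
    {
        log_event(1, 42);       /* e.g., page fault started on page 42 */
        log_event(2, 42);       /* e.g., page fault completed on page 42 */
        log_dump(stdout);
        return 0;
    }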

Current Status (Summer 2003)

The construction of a network memory server that provides basic paging services to applications running on client nodes is now complete, the "Visible Human" raycasting rendering application has been run on the system, and detailed performance information has been obtained. In the current configuration the system has 7 server nodes (933MHz Intel CPUs), each with 3GB of RAM, and three working client nodes, which are 500MHz Alpha 21164 systems with 1GB of RAM. The nodes are connected by Myrinet and Fast Ethernet. After indexing overhead, the system can serve approximately 13.5GB of data to client applications. Average page fault service latency has been measured at just under 300 microseconds for an 8K page, and paging rates of up to about 3100 pages per second (24MB/sec) to a single client have been observed. Some further improvements are probably possible, but have not yet been attempted. The system comprises about 13,700 lines of code implementing a kernel module that runs within the Linux 2.2.17 kernel on either the Intel or Alpha platform. A few bugs have been observed during system operation.
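Latency and paging-rate figures of this kind can be obtained entirely from user level. The following is a generic sketch, not the measurement harness actually used: it times how long it takes to touch every page of a large mapped region once and reports the average cost per page. On a client whose memory is backed by the network memory server, a loop of this kind exercises the remote paging path; the region size and the 8K page size (the Alpha client page size) are illustrative assumptions.

    /*
     * Generic user-level sketch (not the measurement harness actually
     * used): touch every page of a large mapped region once and report
     * the average time per page touch.  PAGE_SIZE here is 8K, the page
     * size of the Alpha clients; on a 4K-page system the loop simply
     * touches every other hardware page.
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/time.h>

    #define PAGE_SIZE 8192
    #define NPAGES    4096              /* 32MB region; adjust to taste */

    int main(void)
    {
        size_t len = (size_t)NPAGES * PAGE_SIZE;
        char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct timeval t0, t1;
        long i, usec;

        if (region == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        gettimeofday(&t0, NULL);
        for (i = 0; i < NPAGES; i++)
            region[i * PAGE_SIZE] = 1;  /* fault each page in once */
        gettimeofday(&t1, NULL);

        usec = (t1.tv_sec - t0.tv_sec) * 1000000L
             + (t1.tv_usec - t0.tv_usec);
        printf("%d pages in %ld us: %.1f us/page, %.0f pages/sec\n",
               NPAGES, usec, (double)usec / NPAGES, NPAGES / (usec / 1e6));

        munmap(region, len);
        return 0;
    }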

The system development is now at a stage where a variety of projects are possible. Some of these projects would involve kernel coding and some could be performed entirely at the application level.

Students seriously interested in working on any of these projects are invited to contact Professor Stark.

Project Members

Main developer (Spring 1999 - Summer 2003):
Prof. Stark

Former project member (Fall 2002):
Harikesavan Krishnan

Former developer (Summer 2000 to Fall 2001):
Eric Nuzzi

Spring 1999 Design Group (course CSE 679):
Prof. Stark, Kartik Gopalan, Tulika Mitra, Guizhen Yang, Ganesh Venkitachalam, Ashish Raniwala, Susan Frank.

Links to Detailed Information

Other stuff that is now here (mostly useful for current and prospective project members):


Gene Stark
Last modified: Fri Aug 29 10:18:17 EDT 2003