ERIC Number: ED532239
Record Type: Non-Journal
Publication Date: 2011
Pages: 155
Abstractor: As Provided
ISBN: 978-1-1248-3122-0
Minimizing End-to-End Interference in I/O Stacks Spanning Shared Multi-Level Buffer Caches
Patrick, Christina M.
ProQuest LLC, Ph.D. Dissertation, The Pennsylvania State University
This thesis presents a uniquely designed, high-performance I/O stack that minimizes end-to-end interference across multi-level shared buffer cache hierarchies accessing shared I/O servers. In this thesis, I show that I can build a superior I/O stack which minimizes inter-application interference, and consequently increases application performance, based on an understanding of application characteristics such as reuse and locality, application data access patterns, application execution history, and disk characteristics such as spin time and seek time. The first contribution of this thesis is an intelligent client-side prefetching module called "APP," which automatically infers and configures parameters to minimize inter-application disk interference. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and utilizing idle capacity on remote nodes along with idle network time, thus avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases overall messaging overhead between servers. APP clients use aggressive prefetching, data offloading to selected remote buffer caches in multi-level buffer cache hierarchies, and background network transfers to minimize disk interference and temper the side effects of aggressive prefetching. The second contribution of this thesis is a cluster application scheduler, "AppMap," which maps applications to the different nodes in a multi-level shared buffer cache hierarchy. The challenge here is to predict inter-application buffer cache interference without actually executing the applications, and to schedule the applications such that this interference is minimized.
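The prefetching idea behind APP can be pictured with a minimal sketch. This is purely illustrative and is not the dissertation's algorithm: APP infers its parameters automatically and also offloads data to remote buffer caches and uses background network transfers, all of which this stride-detection sketch omits; the function name and parameters are hypothetical.

```python
def plan_prefetch(recent_accesses, window=4, depth=8):
    """Detect a fixed-stride stream in the last `window` block accesses
    and, if one exists, return the next `depth` blocks to prefetch.
    Returns an empty list when no single stride is evident."""
    tail = recent_accesses[-window:]
    strides = {b - a for a, b in zip(tail, tail[1:])}
    if len(strides) != 1:          # accesses are not a single stream
        return []
    stride = strides.pop()
    if stride == 0:                # repeated block, nothing to prefetch
        return []
    last = tail[-1]
    return [last + stride * i for i in range(1, depth + 1)]
```

For a sequential read of blocks 10, 11, 12, 13, the sketch would plan blocks 14 onward; for a random access pattern it declines to prefetch, which is the conservative behavior an interference-minimizing prefetcher needs.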
We devise two models based on reuse and locality, derive the inter-application interference on all shared nodes, and propose a locally optimal algorithm that maps applications to nodes in the hierarchy such that interference throughout the buffer cache hierarchy is minimized. Our models are computationally efficient and require only the reuse distance metric and I/O rates of the respective applications. We define I/O rate as the number of buffer cache blocks touched by an application per second, counting both the blocks read and the blocks written by the application. AppMap is especially pertinent in data centers, where several applications are scheduled on shared resources to avoid over-provisioning. Next, I propose a novel end-to-end hint-exploiting, high-performance I/O stack called "Mnemosyne," which accepts user-specified hints, specifically data access pattern hints, to increase the effectiveness of multi-level caching by striving for exclusivity in buffer cache contents. Multi-level caches are often plagued by content duplication, which reduces their effective capacity. Additionally, because of this duplication, the same interference manifests itself in both the lower- and upper-level caches. The challenge here is to assimilate the information provided by the user into a global strategy, which unburdens the user from making decisions for every loop in the program. Additionally, Mnemosyne uses these hints to predict an application's next access and prefetch the data before it is requested. I/O servers in multi-tiered architectures are often characterized by slack time and peak I/O bursts. Mnemosyne reduces disk interference by issuing requests in a layout-aware manner, preventing the disk from being flooded with requests all at once, which would increase the chance of random seeks in the I/O streams.
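The two inputs the AppMap models require, reuse distance and I/O rate, are both standard to compute from a block-access trace. The sketch below shows one common formulation (LRU stack distance); the function names are illustrative, and the dissertation's exact definitions may differ.

```python
from collections import OrderedDict

def reuse_distances(trace):
    """LRU reuse distance per access: the number of distinct blocks
    touched between two consecutive accesses to the same block
    (float('inf') for a block's first access)."""
    stack = OrderedDict()          # most recently used block is last
    distances = []
    for block in trace:
        if block in stack:
            keys = list(stack.keys())
            # Distinct blocks touched since this block was last accessed.
            distances.append(len(keys) - 1 - keys.index(block))
            stack.move_to_end(block)
        else:
            distances.append(float('inf'))
            stack[block] = True
    return distances

def io_rate(trace, duration_seconds):
    """I/O rate as defined in the abstract: buffer cache blocks touched
    per second, counting both reads and writes."""
    return len(trace) / duration_seconds
```

For the trace a, b, c, a, the second access to a has reuse distance 2 (blocks b and c were touched in between); an application touching 400 blocks over 100 seconds has an I/O rate of 4 blocks/s.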
The fourth and final contribution of this thesis is an innovative server cache partitioning module called Caerus, which dynamically partitions the server buffer cache to minimize inter-application buffer cache interference, such that both the individual hit rates of the applications sharing the buffer cache and the overall hit rate of the server buffer cache increase. The challenge here is to find the partition sizes dynamically, such that applications that do not benefit from larger cache space receive a small portion of the cache, while applications that require more cache space to achieve higher hit rates receive a larger portion. At the same time, Caerus takes into account the different phases of application behavior and adapts its allocation to the phase in which the application is executing. (Abstract shortened by UMI.) [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by Telephone: 1-800-521-0600. Web page:]
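The allocation goal Caerus pursues, giving cache only to applications that can convert it into hits, is often approximated with greedy marginal-gain partitioning. The sketch below illustrates that general technique, not the dissertation's actual Caerus algorithm (in particular, it omits phase detection); all names and parameters are hypothetical.

```python
def partition_cache(total_blocks, hit_curves, step=1):
    """Greedy marginal-gain cache partitioning.

    hit_curves maps each application to a function hits(size) estimating
    its hit count for a cache allocation of `size` blocks. Blocks are
    handed out `step` at a time to whichever application gains the most
    additional hits, so applications with flat curves stay small."""
    alloc = {app: 0 for app in hit_curves}
    remaining = total_blocks
    while remaining >= step:
        best_app, best_gain = None, 0.0
        for app, curve in hit_curves.items():
            gain = curve(alloc[app] + step) - curve(alloc[app])
            if gain > best_gain:
                best_app, best_gain = app, gain
        if best_app is None:       # no application benefits from more cache
            break
        alloc[best_app] += step
        remaining -= step
    return alloc
```

With a working set of 50 blocks for one application and 10 for another, 60 blocks split 50/10 under this policy: once an application's curve flattens, further blocks flow to whoever still gains hits, matching the behavior the abstract describes.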
ProQuest LLC. 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106. Tel: 800-521-0600; Web site:
Publication Type: Dissertations/Theses - Doctoral Dissertations
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A