
Efficient Dynamic Program Monitoring on Multi-Core Platforms

Guojin He
University of Minnesota
University of Minnesota, 2012
@phdthesis{he2012efficient,
  title  = {Efficient Dynamic Program Monitoring on Multi-Core Platforms},
  author = {He, Guojin},
  school = {University of Minnesota},
  year   = {2012}
}

Software security and reliability have become increasingly important in the modern world. An effective approach to enforcing software security and reliability is to monitor a program's execution at run time. However, instrumentation-based implementations of dynamic program monitors on single-core systems incur significant performance overhead. As multi-core architectures become mainstream, assigning monitoring activities to separate processor cores becomes not only a feasible but also an appealing way to reduce this overhead while enforcing software security and reliability. To achieve efficient and flexible multi-core based dynamic program monitoring, however, three challenging issues must be carefully considered and adequately addressed: the hardware support, the monitoring model, and the parallelization of monitoring tasks. This dissertation proposes novel solutions to these problems.

The hardware support proposed in this dissertation, referred to as extraction logic, selectively extracts execution information from the monitored program and forwards it to a monitor running on a separate CPU core. The extraction logic is generic and configurable by the monitor, so it can support a large spectrum of monitoring tasks. Building on this generic hardware support, the dissertation proposes a novel monitoring model, referred to as the distill-based monitor model, in which monitors are generated with special compiler support. The distill-based monitor model is based on the observation that a monitor needs only partial information from the monitored execution, and that some of this needed information can be easily computed by the monitor from other information that has already been communicated. We implemented a code generator and optimization techniques that decide which information to forward and which to compute so as to minimize the total execution time of the monitor. This compiler support can optimize a variety of monitors with diverse monitoring requirements, taking as input the control flow graph of the monitored program and the set of monitoring requirements.

To parallelize monitoring tasks, this dissertation proposes a novel parallelization paradigm built on General-Purpose Computing on Graphics Processing Units (GPGPU). In the following chapters, we first propose a generic, purely software-based GPGPU monitor framework that is flexible enough to support parallelization of various kinds of monitoring tasks. We then propose software-based optimization techniques built on this framework that effectively exploit characteristics of monitoring tasks such as taint propagation and memory-bug detection, and thus achieve significant performance improvements.

This dissertation reports the performance improvement achieved by the proposed monitoring model and parallelization paradigm. Relative to a traditional instrumentation-based monitor, the proposed compiler support reduces performance overhead by 3.7 times for taint propagation and 2.2 times for memory-bug detection on the SPEC2006INT benchmarks. The proposed GPGPU-based monitor with optimization achieves even more for memory-bug detection, reducing performance overhead by 5.2 times.
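The two main ideas in the abstract lend themselves to small illustrations. The sketches below are not taken from the dissertation; every type and function name (DistilledRecord, BasicBlock, replay, AccessRecord, check_accesses) is hypothetical, and the code only illustrates the general style of monitoring the abstract describes. The first is a host-side C++ sketch of the distill-based idea: the forwarded record carries only what the monitor cannot cheaply recompute (branch outcomes and effective memory addresses), and the monitor reconstructs the rest by replaying its own copy of the program's control flow graph.

#include <cstdint>
#include <vector>

// Hypothetical "distilled" record forwarded by the extraction logic:
// only information the monitor cannot cheaply recompute is sent.
struct DistilledRecord {
    uint8_t  branch_taken;   // outcome of the block's terminating branch
    uint64_t mem_addr;       // effective address of the block's memory access, if any
};

// Simplified basic block in the monitor's copy of the monitored program's CFG.
struct BasicBlock {
    bool has_memory_access;  // whether the block contains a monitored load/store
    int  taken_succ;         // successor index if the branch is taken
    int  fallthrough_succ;   // successor index otherwise
};

// The monitor replays control flow from forwarded branch outcomes and
// recomputes which static instruction each forwarded address belongs to,
// instead of having every instruction's details forwarded to it.
void replay(const std::vector<BasicBlock>& cfg,
            const std::vector<DistilledRecord>& trace) {
    int bb = 0;  // assume the entry block is index 0
    for (const DistilledRecord& r : trace) {
        if (cfg[bb].has_memory_access) {
            // ... apply the monitoring check (taint update, bounds check, ...)
            //     to r.mem_addr here ...
        }
        bb = r.branch_taken ? cfg[bb].taken_succ : cfg[bb].fallthrough_succ;
    }
}

The second is a CUDA sketch of a GPGPU-parallelized memory-bug check under the simplifying assumption that access records in a batch can be validated independently against a byte-granularity shadow map (1 = addressable, 0 = redzone/freed), with one thread per record.

#include <cstdint>

// Hypothetical forwarded memory-access record.
struct AccessRecord {
    uint64_t addr;   // effective address of the load/store
    uint32_t size;   // access size in bytes
};

// One thread per record: look up the shadow byte for every byte of the
// access and flag the record if any of them is unaddressable.
__global__ void check_accesses(const AccessRecord* records, int n,
                               const uint8_t* shadow, uint64_t shadow_base,
                               int* error_flags) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const AccessRecord r = records[i];
    error_flags[i] = 0;
    for (uint32_t b = 0; b < r.size; ++b) {
        if (shadow[r.addr - shadow_base + b] == 0) {
            error_flags[i] = 1;   // report an invalid access for this record
            return;
        }
    }
}

Checks of this form are independent across records, which is one reason memory-bug detection maps naturally onto a GPGPU monitor; taint propagation carries dependences between records, which is presumably why the dissertation pairs the generic framework with task-specific optimizations.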