Refereed International Conference Publications

Architecture-aware Automatic Computation Offload for Native Applications (ACM DL, PDF)
Although mobile devices have evolved enough to support complex mobile
programs, their performance still lags behind that of servers. To bridge
this performance gap, computation offloading allows a mobile device to
execute heavy tasks remotely on a server. However, due to architectural
differences between mobile devices and servers, most existing computation
offloading systems rely on virtual machines and therefore cannot offload
native applications. Some offloading systems can offload native mobile
applications, but their applicability is limited to simple, easily
analyzable applications. This work presents automatic cross-architecture
computation offloading for general-purpose native applications with a
prototype framework called Native Offloader. At compile time, Native
Offloader automatically finds heavy tasks without any annotation and
generates offloading-enabled native binaries with memory unification for
the mobile device and the server. At run time, Native Offloader
efficiently supports seamless migration between the mobile device and the
server through a unified virtual address space and communication
optimization. Native Offloader automatically offloads 17 native C
applications from the SPEC CPU2000 and CPU2006 benchmark suites without a
virtual machine, achieving a geometric-mean program speedup of 6.42x and
battery savings of 82.0%.
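
To make the mechanism concrete, below is a minimal C sketch of the kind of wrapper a compile-time pass could emit around an identified heavy task: at run time the wrapper either runs the task locally or hands it to the server. The helper names (should_offload, run_remote) and the argument struct are illustrative assumptions for this sketch, not Native Offloader's actual interface.

/* A minimal sketch (assumed names, not Native Offloader's published API) of
   the wrapper a compile-time pass could emit around an identified heavy task.
   At run time the wrapper either executes the task locally or ships it to the
   server; pointer arguments stay valid remotely because both sides share a
   unified virtual address space. */
#include <stddef.h>
#include <stdio.h>

/* Stubbed runtime hooks: a real runtime would consult cost estimates and
   perform the migration and communication optimization described above. */
static int should_offload(const char *task) { (void)task; return 0; }
static long run_remote(const char *task, void *args) { (void)task; (void)args; return 0; }

/* Original heavy computation, left untouched by the compiler pass. */
static long heavy_task_local(const int *data, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += (long)data[i] * data[i];
    return sum;
}

/* Argument record handed to the server; pointers and sizes are enough because
   the unified address space makes them meaningful on both machines. */
typedef struct { const int *data; size_t n; } heavy_args;

/* Compiler-emitted wrapper replacing direct calls to the heavy task. */
static long heavy_task(const int *data, size_t n) {
    if (should_offload("heavy_task")) {
        heavy_args a = { data, n };
        return run_remote("heavy_task", &a);
    }
    return heavy_task_local(data, n);
}

int main(void) {
    int v[4] = { 1, 2, 3, 4 };
    printf("%ld\n", heavy_task(v, 4));   /* prints 30 (runs locally in this stub) */
    return 0;
}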