An Enhancement of Futures Runtime in Presence of Cache Memory Hierarchy

Botinčan, Matko; Runje, Davor
December 2008
Journal of Computing & Information Technology;Dec2008, Vol. 16 Issue 4, p339
Academic Journal
A future is a simple abstraction mechanism for exposing potential concurrency in programs. In this paper, we propose an enhancement of our previously developed runtime for scheduling and executing futures based on the lazy task creation technique that aims to reflect the cache memory hierarchy present in modern multi-core and multiprocessor systems.
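The paper's runtime itself is not reproduced in this record, but the future abstraction it builds on can be sketched with Python's standard concurrent.futures module (an assumption for illustration only; the paper targets its own lazy-task-creation runtime, not this library):

```python
# Minimal sketch of the future abstraction, using Python's standard
# concurrent.futures. The paper's cache-aware scheduler and lazy task
# creation are NOT modeled here; this only shows the programming model.
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    # Placeholder workload; any pure function could stand in here.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Submitting a call creates a future: the computation *may* run
    # concurrently with the rest of the program.
    fut = pool.submit(fib, 20)
    # Touching the future forces its value, blocking until it is ready.
    result = fut.result()

print(result)  # fib(20) == 6765
```

In a lazy-task-creation runtime, by contrast, the call would initially run inline and only be split off into a separate task if an idle worker stole it, which is what makes the scheduling decision sensitive to the cache hierarchy.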


Related Articles

  • The Significance of Memory Costs in Answer Set Solver Implementation. Brain, Martin; De Vos, Marina // Journal of Logic & Computation;Aug2009, Vol. 19 Issue 4, p615 

    Implementation costs linked to processor memory subsystems (cache miss costs, stalls due to bandwidth limits, etc.) have been shown to be a factor in the performance of a variety of declarative programming tools. This article investigates their impact on answer set solvers and the factors that...

  • Enhancing Cache Performance Based on Improved Average Access Time. Ghaeb, Jasim A. // World Academy of Science, Engineering & Technology;2012, Issue 64, p815 

    A high performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and its speed, the cache has become a common feature of high performance...

  • A Smart Cache Designed for Embedded Applications. Naz, Afrin; Kavi, Krishna M. // International Journal of Advanced Research in Computer Science;Jan/Feb2012, Vol. 3 Issue 1, p114 

    In this paper, we extend our previous investigation of split array and scalar data caches to embedded systems. More specifically we explore reconfigurable data caches where L-1 data caches are optimally partitioned into scalar caches augmented with victim caches and array caches. We do not...

  • Exploiting application-level similarity to improve SSD cache performance in Hadoop. Chen, Zhijian; Luo, Wenhai; Wu, Dan; Huang, Xiang; He, Jian; Zheng, Yuanhuan; Wu, Di // Journal of Supercomputing;Dec2014, Vol. 70 Issue 3, p1331 

    To boost the performance of massive data processing, solid-state drives (SSDs) have been used as a kind of cache in the Hadoop system. However, most existing SSD cache management algorithms are ignorant of the characteristics of upper-level applications. In this paper, we propose a novel SSD...

  • Adaptively and Efficiency: Key Features for Getting Performance on Process Migration. Righi, Rodrigo da Rosa; Graebin, Lucas // Journal of Communication & Computer;May2012, Vol. 9 Issue 5, p495 

    In this paper we describe a model for BSP (Bulk Synchronous Parallel) process rescheduling called MigBSP. Considering the scope of BSP applications, its differential approach is the combination of three metrics (Memory, Computation and Communication) in order to measure the Potential of...

  • Restructuring Computations for Temporal Data Cache Locality. Pingali, Venkata K.; McKee, Sally A.; Hsieh, Wilson C.; Carter, John B. // International Journal of Parallel Programming;Aug2003, Vol. 31 Issue 4, p305 

    Data access costs contribute significantly to the execution time of applications with complex data structures. As the latency of memory accesses becomes high relative to processor cycle times, application performance is increasingly limited by memory performance. In some situations it is useful...

  • Cache oblivious algorithms for nonserial polyadic programming. Tan, Guangming; Feng, Shengzhong; Sun, Ninghui // Journal of Supercomputing;Feb2007, Vol. 39 Issue 2, p227 

    The nonserial polyadic dynamic programming algorithm is one of the most fundamental algorithms for solving discrete optimization problems. Although the loops in the nonserial polyadic dynamic programming algorithm are similar to those in matrix multiplication, the available automatic...

  • Teapot: A Domain-Specific Language for Writing Cache Coherence Protocols. Chandra, Satish; Richards, Bradley; Larus, James R. // IEEE Transactions on Software Engineering;May/Jun99, Vol. 25 Issue 3, p317 

    In this paper, we describe Teapot, a domain-specific language for writing cache coherence protocols. Cache coherence is of concern when parallel and distributed systems make local replicas of shared data to improve scalability and performance. In both distributed shared memory systems and...

  • Fast Cache, Fresh Data. McCarthy, Tim // SQL Server Magazine;Nov2004, Vol. 6 Issue 11, p23 

    Discusses the significance of caching static database data in memory. Usability of caching Web applications and databases; System requirements of .NET Framework 2.0 for adding support for database cache dependencies; Differences between the SQL Server 2000 and SQL Server 2005; Steps to perform...

