Traditional caching mechanisms store individual items (relational tuples, pages, or objects) in the cache; thus, cache contents can be exploited only when future requests address these items by their identifiers (tuple ID, object ID, and so on).
In contrast to this item-based cache-granularity model, semantic caching stores in the cache both the retrieved data and the queries that fetched them. Semantic caching is thus a query-based cache-granularity model, in which each cached query is called a semantic region. Its purpose is to increase the reusability of cache contents: a future query can be answered from the cache when it is contained in, or intersects, an already-cached query and its associated data. This model introduces significant challenges, namely merging and splitting semantic regions and keeping the cached data up to date.
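To make the containment-versus-intersection distinction concrete, the following is a minimal, hypothetical sketch of a semantic cache over one-dimensional range predicates (queries of the form lo <= x < hi). It is an illustration under simplifying assumptions, not the paper's actual algorithm: each semantic region stores its interval plus its rows, a fully contained query is answered entirely from the cache, and a partial intersection yields cached rows plus a "remainder" query that would have to be sent to the server.

```python
# Hypothetical sketch: a semantic cache over 1-D range predicates,
# e.g. SELECT * FROM t WHERE lo <= x < hi.  Each cached entry (a
# "semantic region") stores the query's interval and its result rows.
# This illustrates containment/intersection only; it is not the
# paper's proposed method.

class SemanticCache:
    def __init__(self):
        self.regions = []  # list of ((lo, hi), rows)

    def lookup(self, lo, hi):
        """Return (cached_rows, remainder_intervals).

        If the new query is contained in cached regions, it is answered
        entirely from the cache (empty remainder).  On partial
        intersection, overlapping rows come from the cache and the
        uncovered sub-intervals form remainder queries for the server.
        """
        hits, remainders = [], [(lo, hi)]
        for (rlo, rhi), rows in self.regions:
            new_rem = []
            for qlo, qhi in remainders:
                olo, ohi = max(qlo, rlo), min(qhi, rhi)
                if olo < ohi:  # overlap: serve these rows from cache
                    hits.extend(r for r in rows if olo <= r < ohi)
                    if qlo < olo:          # uncovered left part
                        new_rem.append((qlo, olo))
                    if ohi < qhi:          # uncovered right part
                        new_rem.append((ohi, qhi))
                else:
                    new_rem.append((qlo, qhi))
            remainders = new_rem
        return hits, remainders

    def insert(self, lo, hi, rows):
        self.regions.append(((lo, hi), rows))


cache = SemanticCache()
cache.insert(0, 10, list(range(0, 10)))  # cache region for 0 <= x < 10
hits, rem = cache.lookup(5, 15)          # intersects the cached region
# hits covers x in [5, 10); rem == [(10, 15)] is the remainder query
```

A fully contained query, e.g. `cache.lookup(2, 8)`, returns an empty remainder list, so no server round trip is needed; this is exactly the reusability gain the reviewed paper attributes to semantic caching.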
This paper addresses these two challenges and provides scalable, efficient solutions. Detailed experiments show that the proposed methods are effective and outperform previously proposed ones. Overall, the paper is an important contribution to the semantic-caching literature.