Introduction

In distributed applications, object caching offers significant performance gains compared to direct database access. Historically, performance and scalability have been treated as competing goals: a system could either be tuned to perform better or be optimized to scale, but rarely both at once.

Distributed in-memory caching helps not only with performance, but also with scalability. When you cannot scale up any further, you have to scale out, and that is exactly how distributed in-memory caching in Windows Server AppFabric works.

Note

Scale out, or horizontal scaling, refers to the scalability approach of adding new (compute and/or storage) nodes to a deployment to handle additional load. With the traditional scale-up approach, by contrast, additional workload is handled by adding more memory and compute power to an existing node (server).

In Windows Server AppFabric Cache, the data is kept in memory, but instead of being limited to a single server node, the cache can scale out to hundreds of nodes, on demand and seamlessly.

This distributed in-memory architecture makes a dynamically scalable and highly available cache possible. The cache can then be used for storing large amounts of data in memory, and as a result, applications and services perform faster and become more reliable.

Windows Server AppFabric uses the notion of a Cache Cluster to represent a logical collection of Cache Hosts (nodes). The cache is transparently distributed across the hosts, and each host may contain zero or more named cache regions. Importantly, the Cache Client is abstracted from the details of this distributed architecture. The following diagram is a schematic representation of a Cache Cluster in Windows Server AppFabric:

[Diagram: a Cache Cluster made up of multiple Cache Hosts, accessed by Cache Clients as a single logical cache]
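
To give a flavor of how the client API hides this distributed topology, the following is a minimal sketch that connects to a cluster and works with the default cache. The host name CacheServer1 is an assumption made purely for illustration; the types and calls used (DataCacheFactory, DataCacheFactoryConfiguration, DataCacheServerEndpoint, Put, and Get) come from the Microsoft.ApplicationServer.Caching namespace.

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class CacheClientSketch
{
    static void Main()
    {
        // Point the client at one (or more) cache hosts; the host name
        // "CacheServer1" is a placeholder, and 22233 is the default cache port.
        var servers = new List<DataCacheServerEndpoint>
        {
            new DataCacheServerEndpoint("CacheServer1", 22233)
        };

        var configuration = new DataCacheFactoryConfiguration
        {
            Servers = servers
        };

        // The factory talks to the cluster as a whole; the client never needs
        // to know which host actually stores a given item.
        using (var factory = new DataCacheFactory(configuration))
        {
            DataCache cache = factory.GetDefaultCache();

            cache.Put("customer:42", "Jane Doe");
            var value = (string)cache.Get("customer:42");

            Console.WriteLine(value);
        }
    }
}

Whether an item lands on the first host or the hundredth, the Put and Get calls look exactly the same, which is the point of the abstraction described above.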

In this chapter we will go through some of the most common caching-related scenarios for Windows Server AppFabric Cache.

We will start with initializing Windows Server AppFabric Cache using code.