AWS ElastiCache: So much more than just a cache
Whether Redis or Memcached, AWS ElastiCache is an essential component in a scalable architecture.
AWS ElastiCache is a highly performant managed service from AWS, providing two of the most popular in-memory datastore engines with sub-millisecond response times. The name undersells its utility, as the features and use cases of Redis OSS go far beyond caching.
When used as a cache, it can dramatically increase the performance and efficiency of your application and reduce the workload on your database; often the costs of ElastiCache will be entirely offset by savings on database-related fees.
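As a rough sketch of what that looks like in practice, here's a cache-aside read path using the redis-py client in Python; the endpoint, the get_user_from_db helper, and the five-minute TTL are placeholders for your own database layer and configuration.

```python
import json

import redis

# Placeholder endpoint - use your own ElastiCache node's address.
cache = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com",
                    port=6379, decode_responses=True)

def get_user(user_id: int) -> dict:
    """Cache-aside: serve from ElastiCache when possible, fall back to the database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit - no database round trip

    user = get_user_from_db(user_id)              # hypothetical expensive database query
    cache.set(key, json.dumps(user), ex=300)      # keep it for five minutes
    return user
```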
Whether you choose the Memcached engine for simplicity and efficiency, or Redis OSS for more complex operations and optional persistence, ElastiCache brings the typical advantages of an AWS managed service and eliminates the operational overhead of running the datastore and its servers yourself.
Used with the Redis OSS engine, it can assume multiple roles in your application beyond simply caching with a key/value store. It can take the role of database engine, act as a message broker, or drive a distributed system, powering leaderboards, session storage, real-time analytics, and even geospatial queries, to name just a few use cases.
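A leaderboard, for example, is just a Redis sorted set. The sketch below uses redis-py against a placeholder endpoint: ZINCRBY keeps a running score per player and ZREVRANGE returns the top entries.

```python
import redis

r = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com",
                port=6379, decode_responses=True)

# Add points to each player's running total in the "leaderboard" sorted set.
r.zincrby("leaderboard", 25, "alice")
r.zincrby("leaderboard", 40, "bob")
r.zincrby("leaderboard", 10, "carol")

# Top 10 players, highest score first, with their scores.
top_ten = r.zrevrange("leaderboard", 0, 9, withscores=True)
print(top_ten)  # e.g. [('bob', 40.0), ('alice', 25.0), ('carol', 10.0)]
```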
Getting started with ElastiCache
ElastiCache comes in two distinct offerings: serverless and instance-based.
Provisioning instances works much like EC2: there are various instance types to suit your needs, and instances can be reserved in advance for cost optimisation.
Instances can be provisioned as “nodes” within clusters for autoscaling, or as a single instance with up to 5 read replicas.
The serverless option, while potentially more expensive, is ideal for flexible, spiky workloads, scaling instantly to cope with demand. Unfortunately, due to its minimum pricing, it's generally uneconomic for staging/test sites, or other setups with infrequent usage.
ElastiCache is secure and compliant, meeting various industry standards like PCI DSS, HIPAA, and SOC. In Multi-AZ configurations it guarantees 99.9% availability, and data can be encrypted in transit and at rest. When using Redis OSS as the engine we can even take advantage of AWS Backup and replicate our ElastiCache instances across multiple regions using Global Datastore.
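To make that concrete, here's roughly what provisioning such a setup might look like with boto3: a small Redis OSS replication group with one primary, two read replicas, Multi-AZ failover, and encryption in transit and at rest. The names, subnet group, and security group ID below are made-up placeholders.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="eu-west-1")

elasticache.create_replication_group(
    ReplicationGroupId="my-app-cache",                     # placeholder name
    ReplicationGroupDescription="Primary + 2 replicas for my-app",
    Engine="redis",
    CacheNodeType="cache.t4g.micro",
    NumCacheClusters=3,                                    # 1 primary + 2 read replicas
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    TransitEncryptionEnabled=True,
    AtRestEncryptionEnabled=True,
    CacheSubnetGroupName="my-private-subnets",             # placeholder subnet group
    SecurityGroupIds=["sg-0123456789abcdef0"],             # placeholder security group
)
```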
Understanding the costs
On-demand
Getting started with ElastiCache is surprisingly cheap - with instances such as cache.t4g.micro available from $0.016 per hour, about $12 a month (with continuous use) - and there are instance types available in the free tier (for the first 12 months). Pricing is the same for either engine - Redis OSS or Memcached - so you can pick the best option for your use case without considering the cost.
Of course, production workloads will need beefier instances, and the costs can add up quickly. If your workload is predictable, you can take advantage of reserved instances, with savings of over 30% against the on-demand price for a one-year commitment and up to 50% with a three-year plan, depending on the instance type and region.
Serverless
If your workload isn't predictable though, ElastiCache Serverless may better suit your needs. Able to scale instantly to meet any demand, its pricing is based on both data storage and data access. Storage costs are simple: around $0.125 per GB-hour, depending on the region. Access is billed in ElastiCache Processing Units (ECPUs), with 1 unit representing 1 KB read or written, at around $0.0034 per million ECPUs - though some CPU-intensive commands may consume more ECPUs than you'd expect, particularly with Redis, as it offers more advanced operations.
Sadly, it's not 'truly' serverless, as each serverless cache is billed for a minimum of 1GB of storage, which equates to approximately $90 a month - considerably more than the cheapest on-demand instances. However, with spiky traffic the serverless option is a compelling offering, provided care is taken to reduce unnecessary data transfer and the storage of stale data.
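A quick back-of-the-envelope calculation, using the example rates above, shows how the two billing dimensions add up; the workload figures in this sketch are invented purely for illustration.

```python
# Rough monthly estimate for ElastiCache Serverless, using the example rates above:
# storage at $0.125 per GB-hour (1GB minimum) and $0.0034 per million ECPUs.
HOURS_PER_MONTH = 730

stored_gb = 2.5               # average data stored (made-up figure)
requests_per_second = 1_500   # steady request rate (made-up figure)
avg_request_kb = 2            # ~2 KB per read/write, so roughly 2 ECPUs per request

storage_cost = max(stored_gb, 1) * 0.125 * HOURS_PER_MONTH

monthly_requests = requests_per_second * 3_600 * HOURS_PER_MONTH
ecpus = monthly_requests * avg_request_kb
ecpu_cost = (ecpus / 1_000_000) * 0.0034

print(f"Storage: ${storage_cost:,.2f}, ECPUs: ${ecpu_cost:,.2f}, "
      f"total: ${storage_cost + ecpu_cost:,.2f} per month")
```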
Choosing the Memcached engine
If all you need is a simple key/value store to cache data, then the Memcached engine is a great fit. Though it's widely supported by many systems, particularly legacy ones, Memcached is not a legacy product. It's faster than Redis for simple key/value requests and supports multithreading, so you can expect higher throughput on a similarly sized machine, making Memcached a cost-effective choice.
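Using it is as simple as it sounds - the sketch below uses the pymemcache client in Python, with a placeholder node endpoint and a made-up fragment of cached HTML.

```python
from pymemcache.client.base import Client

# Placeholder endpoint - use your own Memcached node's address.
client = Client(("my-cache.xxxxxx.cache.amazonaws.com", 11211))

# Cache a rendered fragment for 60 seconds.
client.set("homepage:rendered", b"<html>...</html>", expire=60)

html = client.get("homepage:rendered")   # returns bytes, or None on a miss
```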
Some might also see its lack of more advanced features as a benefit - developers are less likely to misuse the cache infrastructure to solve adjacent issues. Instead, for more complex needs, you can always provision a separate Redis ElastiCache instance to handle those features.
Or going with Redis OSS
Redis is the Swiss Army knife of datastores: it can of course act as a high-performance cache, but it offers so much more than basic key/value storage. Redis supports various data types such as strings, hashes, lists, sets, and sorted sets, and provides a range of advanced features like replication, persistence, Lua scripting, transactions, pub/sub messaging, and even geospatial indexing. Its integration with AWS Backup and replication between availability zones allow it to function as a database, with "Global Datastore" taking this a step further, making your data available across multiple regions.
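As a small taste of those features, here's a minimal pub/sub round trip with redis-py; the channel name and message are placeholders, and in a real application the subscriber would normally run in a separate process with a listening loop.

```python
import redis

r = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com",
                port=6379, decode_responses=True)

# Subscribe to a channel (normally done in a separate consumer process).
listener = r.pubsub(ignore_subscribe_messages=True)
listener.subscribe("orders")

# Publish an event from the producer side.
r.publish("orders", "order:1234 created")

# Poll for the message - in practice you'd loop or use listener.listen().
message = listener.get_message(timeout=1.0)
print(message)  # {'type': 'message', 'channel': 'orders', 'data': 'order:1234 created', ...}
```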
For applications that require more than just simple caching, Redis OSS is a powerful choice. It can handle everything from distributed locking mechanisms to session storage, all while maintaining the typical sub-millisecond performance that makes it such an attractive option. If your application demands both caching and more advanced data operations, Redis is the right tool for the job.
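One common example is a simple distributed lock built on SET with the NX and EX options, sketched below with redis-py; the lock name, the run_nightly_report job, and the 30-second expiry are all hypothetical.

```python
import uuid

import redis

r = redis.Redis(host="my-cache.xxxxxx.cache.amazonaws.com",
                port=6379, decode_responses=True)

token = str(uuid.uuid4())

# NX: only set if the key doesn't exist, so only one worker claims the lock.
# EX: expire after 30 seconds, so a crashed worker can't hold it forever.
if r.set("lock:nightly-report", token, nx=True, ex=30):
    try:
        run_nightly_report()                       # hypothetical job - your code here
    finally:
        # Only release the lock if we still own it.
        if r.get("lock:nightly-report") == token:
            r.delete("lock:nightly-report")
```

In production you'd make that final check-and-delete atomic - a short Lua script is the usual approach - but the shape of the pattern is the same.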
Conclusion
Whether you’re using Memcached for straightforward, high-performance caching or Redis OSS for a feature-rich datastore, ElastiCache offers powerful tools to enhance your application’s performance.
With its managed service model, you can eliminate the operational overhead of managing your own infrastructure while taking advantage of AWS's security, compliance, and availability guarantees, and the choice between serverless and auto-scaling instance-based deployments helps you scale to meet peak demand.
No matter your application's needs - whether it's simple caching, real-time analytics, complex data processing, or scaling globally - ElastiCache can provide the performance, reliability, and scalability to take your systems to the next level.
About James Babington
A cloud architect and engineer with a wealth of experience across AWS, web development, and security, James enjoys writing about the technical challenges and solutions he's encountered, but most of all he loves it when a plan comes together and it all just works.