Introduction

For the last couple of weeks, I have been discussing one question with my colleagues:
How can we improve the performance of some of our data access paths?
First, some background about the system, so that we share the same assumptions throughout this post.
Let's say the system stores structured data in a database (Amazon DynamoDB) and unstructured data blobs in a blobstore (Amazon S3). Even though these stores already provide sub-10 ms latency for GET operations, we want to reduce data-access latency further for both of them.
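One common way to shave latency off repeated GETs against a backing store is a read-through cache in front of it. The sketch below is a minimal, hypothetical illustration (none of these names come from the post): an in-memory dict stands in for a real cache, and the `fetch` callback stands in for a DynamoDB `GetItem` or S3 `GetObject` round trip.

```python
import time

class ReadThroughCache:
    """Minimal read-through cache sketch with per-entry TTL expiry."""

    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch      # backend loader, e.g. a DynamoDB/S3 GET
        self._ttl = ttl_seconds
        self._store = {}         # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value     # cache hit: no backend round trip
        value = self._fetch(key)  # cache miss (or expired): go to the store
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

# Usage with a stubbed backend that counts round trips:
calls = {"n": 0}

def stub_backend_get(key):
    calls["n"] += 1              # each call simulates one slow store GET
    return f"value-for-{key}"

cache = ReadThroughCache(stub_backend_get, ttl_seconds=60)
cache.get("user#42")             # miss -> one backend round trip
cache.get("user#42")             # hit  -> served from memory
```

After the two `get` calls, the stubbed backend has been hit only once; the second read is served from memory, which is where the latency win comes from.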