As the product matured, more and more customers began to rely on our application. We observed that user payloads started to grow over the application's lifecycle.

Aware of the 400 KB limit on a single Amazon DynamoDB item, we started thinking about alternative ways to store the entity. One such option was splitting the single entity into multiple sub-items, following the logical separation of the data inside it.

At the same time, prompted by an AWS region outage that rendered the service unavailable, we decided to re-architect the application to increase system availability and prevent the application from going down due to AWS region outages. Therefore, we deferred splitting the entity to the last responsible moment and pivoted our work towards system availability, exercising one of the Stedi core standards – "bringing the pain forward" – and focusing on operational excellence.

We opted for an active-active architecture backed by DynamoDB global tables to improve the service's availability. The following depicts the new API architecture in a simplified form.

Amazon Route 53 routes each user request to the endpoint that responds the fastest, while Amazon DynamoDB global tables take care of data replication between the AWS regions.

After some time, customers expected the application's API to accept bigger and bigger payloads. We got a strong signal that the 400 KB limit imposed by the underlying architecture no longer fit the product requirements – that was the last responsible moment I alluded to earlier.

Considering prior exploratory work, we identified two viable solutions that would let us expand the amount of data the entity consists of, allowing the service to accept bigger payloads. The first approach would be to switch from Amazon DynamoDB to Amazon S3 as the storage layer.
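The sub-item split mentioned above can be sketched in plain Python. This is a hypothetical illustration, not Stedi's actual schema: the `pk`/`sk` key names and the `split_entity`/`reassemble` helpers are assumptions, showing only the general idea of sharding one logical entity across items that share a partition key so a single Query can rebuild it.

```python
import json

# DynamoDB's hard limit on a single item (attribute names + values).
MAX_ITEM_SIZE = 400 * 1024  # 400 KB


def split_entity(entity_id: str, entity: dict) -> list[dict]:
    """Split one logical entity into multiple DynamoDB sub-items.

    Each top-level section of the entity becomes its own item. All
    sub-items share the partition key (pk) and are distinguished by
    the sort key (sk), so one Query on pk returns the whole entity.
    """
    items = []
    for section, data in entity.items():
        item = {
            "pk": f"ENTITY#{entity_id}",
            "sk": f"SECTION#{section}",
            "payload": json.dumps(data),
        }
        # Each sub-item must individually stay under the 400 KB limit.
        if len(json.dumps(item).encode("utf-8")) > MAX_ITEM_SIZE:
            raise ValueError(f"Section '{section}' still exceeds 400 KB")
        items.append(item)
    return items


def reassemble(items: list[dict]) -> dict:
    """Rebuild the logical entity from the sub-items a Query returned."""
    return {
        item["sk"].removeprefix("SECTION#"): json.loads(item["payload"])
        for item in items
    }
```

The trade-off is that writing several sub-items is no longer a single-item `PutItem`; keeping them consistent would require a DynamoDB transaction, which is one reason the split was deferred in favor of the availability work.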