What is Amazon DynamoDB used for?
When looking into the database options available from Amazon Web Services, one interesting option is Amazon DynamoDB. At first glance it is quite different from traditional SQL databases, so new users often ask: what is Amazon DynamoDB used for?
Amazon DynamoDB is a managed NoSQL database used for storing a virtually unlimited amount of key-value or document data. Usage examples include user data stores, metadata caches, graph relationship stores, game state, leaderboards, user event streams, and many others.
Given that list of use cases, a closer look at how each one maps onto Amazon DynamoDB should help clear things up.
For most applications, user records are self-contained chunks of data specific to a given user. Since Amazon DynamoDB locates records primarily by a single identifier, the user id is usually a good choice for the partition key of a user data table. The user item can have many attributes attached to it, such as first name, last name, age, address, gender, and more. Essentially, if you look at Amazon DynamoDB as a key-value table in this scenario, the key is the user id while the value is the set of attributes attached to the user record.
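The key-value shape of such a user record can be sketched in a few lines. The table name, key name, and attributes below are illustrative assumptions, not a required schema:

```python
# A sketch of a user record as a DynamoDB item: the partition key is the
# user id, and everything else becomes attributes on that item.
def build_user_item(user_id, **attributes):
    """Build a user item: the key is user_id, the value is all attributes."""
    item = {"user_id": user_id}
    item.update(attributes)
    return item

item = build_user_item(
    "u-1001",
    first_name="Ada",
    last_name="Lovelace",
    age=36,
)
# With boto3 this dict could be written with: table.put_item(Item=item)
```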
The benefit of using DynamoDB for this type of data is that it scales easily to handle any load of user creation, attribute updates, or record reads, since the table can be configured for on-demand pricing. In that mode the table processes whatever reads and writes are sent to it and charges only for those requests, with no need to pre-configure read and write capacity. This can save a lot of planning headache when initially getting things off the ground.
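In practice, on-demand pricing is a single setting at table creation time. Here is a hypothetical set of CreateTable parameters (the table and key names are assumptions); with boto3 these would be passed to `client.create_table(**create_table_params)`:

```python
# Hypothetical parameters for creating an on-demand (pay-per-request) table.
create_table_params = {
    "TableName": "Users",
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "user_id", "KeyType": "HASH"},
    ],
    # PAY_PER_REQUEST means there is no provisioned read/write capacity to
    # plan for; the table is billed per request instead.
    "BillingMode": "PAY_PER_REQUEST",
}
```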
Similar to a user data store, DynamoDB can be used to store any type of metadata. For example, a table could hold records of cars, or items from a furniture store, each alongside its metadata. Really, any type of object can be stored in the table along with attributes specific to that object.
By keeping a version or last-modified attribute on each item, an application can also determine whether an object has been updated since the last read and fetch the full metadata only when it has changed, saving on data transfer fees. Used this way, Amazon DynamoDB effectively becomes a single-digit millisecond latency metadata cache. For a table serving millions of reads against items that rarely change, this can be a real money-saving feature.
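One way to implement this "fetch only if changed" pattern is sketched below, with an in-memory dict standing in for the table; the key format and attribute names are assumptions made for illustration:

```python
# Each item carries a "version" attribute. The caller remembers the
# version from its last read and re-fetches the full item only when the
# stored version differs.
table = {
    "car#42": {"version": 3, "make": "Volvo", "model": "XC60", "year": 2021},
}

def fetch_if_changed(key, last_seen_version):
    item = table[key]
    # A real implementation would first do a cheap read that projects only
    # the version attribute (e.g. GetItem with a ProjectionExpression),
    # then fetch the full item only on a mismatch.
    if item["version"] == last_seen_version:
        return None  # cached copy is still current
    return item
```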
When looking at graph relationship data, there are usually many nodes linked together by edges. Amazon DynamoDB can store the nodes in a table along with the properties or metadata associated with them, much like the user data store described above.
Once these node records are stored, edge-like records can be stored in the same table. Each edge may need to be stored twice, with the second copy reversed so that bi-directional paths can be determined. For example, an edge record may use the source node id as the partition key and the target node id as the sort key, which allows a single node to have many outgoing edges to other nodes in the graph. The flipped copy is an edge record whose partition key is the original target id and whose sort key is the original source id. Storing both copies makes it possible to find a path between two nodes when travelling in either direction, and also allows the original node to have many inbound connections.
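The double-write of each edge can be sketched as follows. The key attribute names (`pk`, `sk`) and the `direction` marker are illustrative assumptions:

```python
# Store each graph edge twice so it can be traversed in either direction.
def build_edge_items(source_id, target_id, **edge_attrs):
    # Forward copy: partition key is the source node, sort key the target.
    forward = {"pk": source_id, "sk": target_id, "direction": "OUT", **edge_attrs}
    # Reversed copy: keys swapped, so a Query on the target node's id also
    # finds this edge, exposing the node's inbound connections.
    reverse = {"pk": target_id, "sk": source_id, "direction": "IN", **edge_attrs}
    return forward, reverse

fwd, rev = build_edge_items("node-A", "node-B", weight=5)
```

A Query on `pk = "node-B"` would now return the reversed copy, so node-B can discover its inbound edge from node-A without scanning the table.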
Even though the design described so far uses a single DynamoDB table, that is not the only way of setting things up. The graph relationship store can just as easily use two tables: the first to store the node objects along with their metadata, and the second to store the graph edges along with any metadata those edges might have. For example, an edge may carry a weight value that needs to be saved between two nodes.
Many online and mobile games need a way to track game state across very large, distributed worlds or environments. One really good way of doing this is with Amazon DynamoDB. Each object in the game can be stored as an item, and all of the object's properties and state can be saved as attributes of that DynamoDB record. This allows the game to store, read, and update an incredibly large amount of state for every object or player in the game.
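Updating one game object's state is typically a single UpdateItem call. The parameters below are a hypothetical sketch (table name, key, and attributes are all assumptions); with boto3 they would go to `client.update_item(**update_params)`:

```python
# Hypothetical UpdateItem parameters for saving one game object's state.
update_params = {
    "TableName": "GameState",
    "Key": {"object_id": {"S": "player#77"}},
    # "position" is a DynamoDB reserved word, so it is aliased via
    # ExpressionAttributeNames.
    "UpdateExpression": "SET #pos = :pos, health = :hp",
    "ExpressionAttributeNames": {"#pos": "position"},
    "ExpressionAttributeValues": {
        ":pos": {"M": {"x": {"N": "12"}, "y": {"N": "34"}}},
        ":hp": {"N": "95"},
    },
}
```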
A major benefit of storing game state this way is that scaling for player load becomes much simpler for the game owner. All that needs to be done is to increase the read and write capacity of the table, or tables, as player demand increases. This can also be configured to happen automatically, so that capacity rises as demand spikes and drops back down as demand dies off, meaning the game owner isn't paying for capacity they aren't using. This is a win-win for the game owner and the players: the company isn't paying more than it needs to, and players get single-digit millisecond latency access to the state of their objects whenever they need it.
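The automatic behaviour is configured through Application Auto Scaling. The parameters below are a hedged sketch (table name, capacity limits, and target value are illustrative assumptions); with boto3 they would be passed to the `application-autoscaling` client's `register_scalable_target` and `put_scaling_policy` calls:

```python
# Illustrative auto scaling configuration for a table's write capacity.
scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameState",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "MinCapacity": 5,
    "MaxCapacity": 500,
}
scaling_policy = {
    "PolicyName": "GameStateWriteScaling",
    "PolicyType": "TargetTrackingScaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameState",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "TargetTrackingScalingPolicyConfiguration": {
        # Aim to keep roughly 70% of provisioned write capacity in use;
        # capacity scales up on spikes and back down as demand falls.
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
}
```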
Similar to game state, leaderboard data is another great use case for DynamoDB. Since the data is very key-value like, it can simply be stored in a table with the user id, the game id, the score, and any other metadata the leaderboard requires, such as level completion time. Ideally the sort key is built from the game id and score so that results can be returned quickly using the DynamoDB Query API call, without needing a full table scan.
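A common trick for score-based sort keys is zero-padding the score so that lexicographic order matches numeric order. The sketch below simulates this key design in memory; the attribute names and the `top_scores` helper are illustrative assumptions standing in for a Query with `ScanIndexForward=False`:

```python
# Zero-pad scores so string comparison sorts them numerically.
def score_sort_key(score, width=10):
    return str(score).zfill(width)

def top_scores(items, game_id, limit=3):
    # Stand-in for a DynamoDB Query on the game id partition, in
    # descending sort-key order (ScanIndexForward=False).
    rows = [i for i in items if i["game_id"] == game_id]
    rows.sort(key=lambda i: i["score_key"], reverse=True)
    return rows[:limit]

items = [
    {"game_id": "asteroids", "user_id": "u1", "score_key": score_sort_key(9800)},
    {"game_id": "asteroids", "user_id": "u2", "score_key": score_sort_key(120500)},
    {"game_id": "asteroids", "user_id": "u3", "score_key": score_sort_key(455)},
]
```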
Using this configuration for storing leaderboard data, the DynamoDB table can hold hundreds of millions of user scores per game, for many games, all while receiving large amounts of writes for new scores, and large amounts of reads from players checking the top scores. All of this load can easily be handled by Amazon DynamoDB with the correct read and write capacity set on the table, or with the auto scaling feature enabled.
User event streams are another great use case for DynamoDB: website click data, mobile app events, or Internet-of-Things (IoT) applications. These are mostly write-heavy workloads, with events from potentially millions of devices being written to a DynamoDB table. This type of workload is again easily handled by Amazon DynamoDB, simply by setting the appropriate write capacity on the table or letting automatic scaling do its job. Even better, the on-demand pricing model simply charges for the write capacity used, with no need to guess a capacity setting for the table.
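For high-volume event ingestion, writes are usually grouped into BatchWriteItem requests, which accept at most 25 items each. The sketch below shows the batching step only; the event shape and table name are assumptions:

```python
# Buffer incoming events into batches of 25 items, the maximum a single
# DynamoDB BatchWriteItem request accepts.
def chunk_events(events, batch_size=25):
    """Split an event list into BatchWriteItem-sized chunks."""
    return [events[i:i + batch_size] for i in range(0, len(events), batch_size)]

events = [{"device_id": f"d-{n}", "ts": n} for n in range(60)]
batches = chunk_events(events)
# Each batch would become one BatchWriteItem call, e.g. with boto3:
# client.batch_write_item(RequestItems={"Events": [...PutRequest items...]})
```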
Even though this is a write-heavy scenario, analytics can still be performed on the data once it has been written. Dashboards can be built against the table to show any metric based on the attributes of the saved records, and secondary indexes can be created on the appropriate attributes to make these queries and analyses possible in a time- and cost-effective manner.
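As one hypothetical example, a global secondary index keyed on event type and timestamp would let a dashboard query recent events of one type without scanning the whole table. All names below are illustrative assumptions; this structure would go in the `GlobalSecondaryIndexes` list of a CreateTable or UpdateTable request:

```python
# A hypothetical global secondary index definition for event analytics.
gsi_definition = {
    "IndexName": "event-type-index",
    "KeySchema": [
        {"AttributeName": "event_type", "KeyType": "HASH"},
        {"AttributeName": "ts", "KeyType": "RANGE"},
    ],
    # Project only the attributes the dashboard needs, keeping the index
    # small and the queries cheap.
    "Projection": {
        "ProjectionType": "INCLUDE",
        "NonKeyAttributes": ["device_id"],
    },
}
```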