Optimizing DynamoDB data ingestion with limited capacity

Recently we had a data ingestion task: loading a large number of source files from S3, reading the information, and writing it to Amazon DynamoDB. Here are some things we considered as we came up with a solution.

  • The total number of S3 source files was about 1,500 and each file contained 4 million rows, so we had to write to DynamoDB about 6 billion times.
  • Amazon DynamoDB writes are expensive (roughly $3,000/month for 100 writes/second), so we had to control the write rate: anything over the provisioned capacity would be lost.
  • Our application would be running in pods managed by Kubernetes. We wanted to make sure that each pod would be fully utilized.
  • Each app would be a multi-threaded Java program. This made things more complicated, but also helped the task finish as soon as possible.

Here's the relationship between DynamoDB write capacity, the number of pods, and the number of threads:

Writing capacity (writes/second) = (number of threads) * (number of pods) * (number of rows each thread can handle per second)

It’s very straightforward, and we needed a way to make sure this equation was satisfied. The Guava library (https://github.com/google/guava) was the first thing that came to mind. It has a well-designed rate limiter that can be called before every write to DynamoDB. For example:

import com.google.common.util.concurrent.RateLimiter;

int numberOfWritesPerSecond = 100;
RateLimiter limiter = RateLimiter.create(numberOfWritesPerSecond);
// Blocks until a permit is available, keeping us under the write limit.
limiter.acquire();
updateDynamoDB();
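Under the hood, a rate limiter like Guava's is essentially a token bucket: permits accumulate at a fixed rate, and each acquire blocks until one is available. As a rough self-contained sketch (the class name and details here are ours for illustration, not Guava's actual implementation):

```java
// A minimal token-bucket rate limiter sketch; illustrative only,
// not Guava's actual implementation.
public class SimpleRateLimiter {
    private final double permitsPerSecond;
    private double availablePermits;
    private long lastRefillNanos;

    public SimpleRateLimiter(double permitsPerSecond) {
        this.permitsPerSecond = permitsPerSecond;
        this.availablePermits = permitsPerSecond; // start with a full bucket
        this.lastRefillNanos = System.nanoTime();
    }

    // Block until one permit is available, then consume it.
    public synchronized void acquire() {
        refill();
        while (availablePermits < 1.0) {
            try {
                // Sleep roughly one permit interval, then re-check.
                Thread.sleep((long) (1000.0 / permitsPerSecond) + 1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            refill();
        }
        availablePermits -= 1.0;
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1e9;
        availablePermits = Math.min(permitsPerSecond,
                availablePermits + elapsedSeconds * permitsPerSecond);
        lastRefillNanos = now;
    }
}
```

Because the instance is shared and `acquire()` is synchronized, all threads in a pod draw from the same bucket, which is exactly what keeps the pod's aggregate write rate under the limit.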

Now the equation becomes simpler:

Writing capacity (writes/second) = (rate limiter permits per second) * (number of pods)

It looked good, but when we tested it we ran into problems:

  • If we set 3,000 write capacity on Amazon, ran 5 pods in Kubernetes, and gave each pod 5 threads, a single thread would get 3000 / 5 / 5 = 120 writes per second. One source file could only be processed by one thread, and each file had 4 million records, so it would take 4,000,000 / 120 ≈ 33,333 seconds, or over 9 hours, to finish one file.
  • It’s actually even longer than 9 hours, because each row involves two operations: reading from S3 and writing to DynamoDB. The slower operation limits the actual throughput.
  • With such a long processing time, the S3 connection could time out, the thread pool could time out, and/or Kubernetes could restart. Too many things could happen to cause the job to fail.
  • Increasing write capacity didn't help either: single-thread throughput was already limited by file-reading and DynamoDB write speed. Each file still had a high chance of failing.
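The per-thread arithmetic above is worth checking directly (all numbers are from the scenario described: 3,000 writes/second of capacity, 5 pods, 5 threads per pod, 4 million rows per file):

```java
public class ThroughputMath {
    public static void main(String[] args) {
        int writeCapacity = 3000;    // provisioned DynamoDB writes/second
        int pods = 5;
        int threadsPerPod = 5;
        int rowsPerFile = 4_000_000;

        // Capacity is split evenly across every thread in every pod.
        int writesPerThread = writeCapacity / pods / threadsPerPod;  // 120
        // One file is handled by one thread, so this is the file time.
        int secondsPerFile = rowsPerFile / writesPerThread;          // 33,333
        double hoursPerFile = secondsPerFile / 3600.0;               // ~9.3

        System.out.println(writesPerThread + " writes/s per thread");
        System.out.println(secondsPerFile + " s, about "
                + hoursPerFile + " hours per file");
    }
}
```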

Here's how we solved these problems:

  • After carefully examining the files, we found that we could do a small aggregation: every 4 rows could be combined into one row and written to DynamoDB together. 4 million writes suddenly became 1 million.
  • We switched to asynchronous DynamoDB operations to overcome the write-speed limitation. Now the only bottleneck was file reading, so we tuned the number of threads per pod to get the best CPU utilization.
  • We read each file with multiple threads.
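The aggregation step can be sketched as follows. The `RowAggregator` class, the string-join payload format, and the group size parameter are illustrative; the assumption is simply that consecutive source rows can be merged into a single DynamoDB item:

```java
import java.util.ArrayList;
import java.util.List;

public class RowAggregator {
    // Combine every `groupSize` consecutive rows into one record, so a
    // single DynamoDB write carries the data of several source rows.
    public static List<String> aggregate(List<String> rows, int groupSize) {
        List<String> combined = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += groupSize) {
            int end = Math.min(i + groupSize, rows.size());
            // Join the group into one item payload (format is illustrative).
            combined.add(String.join("|", rows.subList(i, end)));
        }
        return combined;
    }
}
```

With a group size of 4, a 4-million-row file becomes 1 million writes, which is what cut the per-file write count in the first bullet above.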

After we made these three changes, one file could be finished within 2 hours.

Conclusion
From this task, we gained a deeper understanding of how to control application throughput in a container cluster. Kubernetes makes container orchestration easier, but at the same time it brings more complexity to the application. A cloud-based database (DynamoDB) reduces maintenance cost, but it requires extra work to ensure that the app works correctly under its constraints.