Buy vs. build decision

The bursty nature of the user population and its associated load calls for a public cloud, which provides the needed elasticity.

One of the significant benefits of the modern cloud is the scalability of resources: we can request more resources when the load is high and return them when it is low. Such scalability allows a service to use computational resources commensurate with the load.

At any given moment, the load might spike beyond the current capacity. We expect new resources to come online within a few seconds, but to absorb sudden spikes we might also keep some spare capacity online and use a machine-learning-based predictor to estimate the resources we will need.
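The sketch below illustrates this idea of scaling to the predicted load plus spare headroom. The per-instance capacity, the 20% headroom, and the `predict_peak_rps` stub are illustrative assumptions, not figures from the text; a real predictor and an autoscaling API would take their place.

```python
import math

# Illustrative assumptions, not values from the lesson.
REQUESTS_PER_INSTANCE = 500   # sustainable requests/sec per instance (assumed)
SPARE_FRACTION = 0.2          # keep 20% spare capacity for sudden spikes (assumed)

def predict_peak_rps(recent_rps: list[float]) -> float:
    """Stand-in for an ML-based load predictor: a simple moving average,
    scaled up slightly to anticipate near-term growth."""
    return 1.1 * sum(recent_rps) / len(recent_rps)

def target_instances(recent_rps: list[float]) -> int:
    """Instances needed to serve the predicted load plus spare headroom."""
    expected = predict_peak_rps(recent_rps)
    with_headroom = expected * (1 + SPARE_FRACTION)
    return max(1, math.ceil(with_headroom / REQUESTS_PER_INSTANCE))

print(target_instances([4200, 4800, 5100]))  # 13 with these illustrative numbers
```

In a public cloud, the output of such a calculation would feed an autoscaling policy that requests or releases instances automatically.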

Job queuing

We need to differentiate between user-facing jobs and the service's internal jobs. Not all internal jobs are latency-sensitive; many can be queued for later execution. This calls for an appropriate job-queuing system, as sketched below.
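The following is a minimal sketch, assuming an in-process queue, of separating latency-sensitive user-facing work from internal jobs that can wait. In production this role would be played by a dedicated queuing system (for example, Kafka, RabbitMQ, or a cloud-managed queue); the job names and handlers here are hypothetical.

```python
import queue
import threading

# Internal jobs that are not latency-sensitive go into a queue
# and are processed later by background workers.
background_jobs: "queue.Queue[tuple[str, dict]]" = queue.Queue()

def handle_user_request(question_id: str) -> dict:
    """Latency-sensitive path: respond immediately, and defer
    non-critical work (counters, notifications) to the queue."""
    response = {"question_id": question_id, "answers": ["sample answer"]}
    background_jobs.put(("update_view_counters", {"question_id": question_id}))
    background_jobs.put(("notify_followers", {"question_id": question_id}))
    return response

def background_worker() -> None:
    """Internal path: drain queued jobs at its own pace."""
    while True:
        job_name, payload = background_jobs.get()
        print(f"processing {job_name} with {payload}")
        background_jobs.task_done()

threading.Thread(target=background_worker, daemon=True).start()
handle_user_request("q-123")
background_jobs.join()  # wait for the deferred jobs to finish
```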

Continuous integration and deployment

A service might update its codebase tens of times in a single day. There is a tradeoff between the velocity of feature deployment and the risk of introducing bugs, which implies that the service needs mechanisms to retract changes and roll back to the last known good version if any problems arise.

Quora reports that they deploy individual code commits to production, and the time between a commit and its actual deployment can be five to seven minutes.
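The control flow of such a deploy-and-roll-back mechanism can be sketched as follows. The `build_release`, `switch_traffic_to`, and `healthy` helpers are hypothetical placeholders for CI/CD tooling; the point is only the deploy, verify, and revert-on-failure sequence described above.

```python
# A minimal sketch of deploy-with-rollback logic; helper functions are
# hypothetical stand-ins for real CI/CD and monitoring tooling.

def build_release(commit: str) -> str:
    """Pretend build step: returns an artifact/version identifier."""
    return f"release-{commit[:7]}"

def switch_traffic_to(version: str) -> None:
    print(f"routing traffic to {version}")

def healthy(version: str) -> bool:
    """Stand-in for post-deploy checks (error rate, latency, smoke tests)."""
    return True

def deploy(commit: str, last_known_good: str) -> str:
    """Deploy a new commit; roll back to the last known good version on failure."""
    candidate = build_release(commit)
    switch_traffic_to(candidate)
    if healthy(candidate):
        return candidate                 # becomes the new last known good version
    switch_traffic_to(last_known_good)   # automatic rollback
    return last_known_good

current = deploy("3f2a9c1d0e", last_known_good="release-prev01")
```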
