LeetCode API Design Evaluation and Latency Budget

Learn to meet non-functional requirements and estimate the response time of the LeetCode service.

Introduction

This lesson discusses how the design meets the non-functional requirements to improve the efficiency and response time of the LeetCode API. We'll pay particular attention to the response time of the contest service, since it is a key factor in the efficiency of the overall service.

Non-functional requirements

The following sections discuss how the different non-functional requirements are achieved.

Availability and scalability

The GraphQL-based architecture reduces the number of network calls by bundling multiple requests into one, which helps the service stay available and scale to more users. The pub-sub service decouples the microservices and enables asynchronous communication between them. Moreover, pagination reduces the load on the network and the backend, further improving the availability and scalability of the overall service.
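To make the bundling and pagination points concrete, here is a minimal sketch of a single GraphQL request that fetches a paginated problem list and the user's profile in one round trip. The endpoint URL, field names, and pagination arguments are assumptions for illustration, not the actual LeetCode schema.

```python
import requests

GRAPHQL_ENDPOINT = "https://api.example-leetcode.com/graphql"  # hypothetical endpoint

# One query replaces what would otherwise be two separate REST round trips.
QUERY = """
query Dashboard($limit: Int!, $offset: Int!) {
  problems(limit: $limit, offset: $offset) {   # pagination keeps each payload small
    title
    difficulty
  }
  userProfile {
    username
    solvedCount
  }
}
"""

def fetch_dashboard(token: str, limit: int = 20, offset: int = 0) -> dict:
    """Fetch the problem list page and the user profile in a single call."""
    response = requests.post(
        GRAPHQL_ENDPOINT,
        json={"query": QUERY, "variables": {"limit": limit, "offset": offset}},
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["data"]
```

Because the client asks only for the fields it needs, the response stays small even as the schema grows, which is the main reason GraphQL helps here.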

Security

The security of the service focuses on the users and backend services. The following steps are taken to ensure the security of the service:

  • Only authenticated and authorized users are allowed to access the services. Users with basic authentication (registered users) can perform basic operations in the LeetCode service, such as listing problems and reading comments. Authorized users (holding valid JWT tokens) can additionally perform primary operations such as solving problems, commenting, and participating in contests. A minimal token-verification sketch follows this list.

  • Code submissions are executed inside containers to guard against malicious code. The containers run submitted code in isolation, without letting it interact with the host system, and with only the minimum privileges required, so that a submission cannot affect the system at any point. A sandboxing sketch is also shown after this list.
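The sketch below shows one way JWT-based authorization could be checked before a primary operation is allowed. It assumes tokens are signed with a shared HS256 secret; the secret, claim names, and error handling are simplified for illustration.

```python
import jwt  # PyJWT

SECRET_KEY = "replace-with-a-real-secret"  # hypothetical shared signing key

def authorize(token: str) -> dict:
    """Return the token's claims if it is valid; raise otherwise."""
    claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    # A real service would also check a role or scope claim here before
    # allowing primary operations (submitting code, commenting, contests).
    if "user_id" not in claims:
        raise jwt.InvalidTokenError("missing user_id claim")
    return claims
```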
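For the sandboxing point, here is a minimal sketch using the Docker SDK for Python. The image name, resource limits, and command are assumptions; a production judge would add time limits, seccomp profiles, and output-size caps.

```python
import docker

def run_submission(source_dir: str) -> str:
    """Run a submitted program in an isolated, resource-limited container."""
    client = docker.from_env()
    # No network access, a memory cap, a process cap, a read-only code mount,
    # and a non-root user keep the submission from touching the host system.
    output = client.containers.run(
        image="python:3.11-slim",              # hypothetical runtime image
        command=["python", "/code/main.py"],
        volumes={source_dir: {"bind": "/code", "mode": "ro"}},
        network_disabled=True,
        mem_limit="256m",
        pids_limit=64,
        user="nobody",
        remove=True,
    )
    return output.decode()
```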

Latency

Containers not only scale well but also reduce latency because of their lightweight nature. Caching container configurations further reduces the time spent spinning up containers and executing code for frequently used environments. This is particularly helpful during contests, where many submissions run in containers with the same configuration. We also use GraphQL, which fetches only the targeted data, further reducing latency.
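One possible way to realize this reuse is a warm pool of already-started containers per runtime environment, sketched below with the Docker SDK for Python. The image names and pooling policy are assumptions for illustration; the point is only that frequently used environments skip the container start-up cost.

```python
import docker

class WarmContainerPool:
    """Keep idle, already-started containers around, keyed by runtime image."""

    def __init__(self):
        self.client = docker.from_env()
        self.pool = {}  # image name -> list of idle containers

    def acquire(self, image: str):
        """Reuse a warm container for this image if one exists, else start one."""
        idle = self.pool.get(image, [])
        if idle:
            return idle.pop()
        return self.client.containers.run(
            image, command="sleep infinity", network_disabled=True, detach=True
        )

    def release(self, image: str, container) -> None:
        """Return a container to the pool for the next submission."""
        self.pool.setdefault(image, []).append(container)
```

During a contest, most submissions target the same handful of images, so the pool hit rate is high and the per-submission latency is dominated by execution time rather than container start-up.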
