How Request Pricing Affects Deployment Architecture
With serverless applications, developers write functions to coordinate and perform business features unique to their application, and use platform services to manage state or communicate with users. In the case of AWS, the pricing for most of those services is also structured around utilization, not reserved capacity. Amazon Simple Storage Service (S3), a scalable object storage service, charges based on the amount of data stored and transferred. Amazon Simple Notification Service (SNS), a publish/subscribe messaging service, charges for each sent message. The whole platform is designed so that how much you pay depends on how much you use it. Lambda is the universal glue that brings all those services together.
In his conference talk Why the Fuss about Serverless, Simon Wardley argued that serverless is effectively platform-as-a-service, or more precisely what platform-as-a-service was supposed to be before marketers took over the buzzword. No doubt history will repeat itself, and in a few years things that have nothing to do with these ideas will be sold as ‘serverless’. But for now, these are the three critical aspects of a serverless application:
- Infrastructure providers are responsible for handling incoming network requests.
- Operational cost is based on actual utilization, broken down by individual requests.
- Technical operations (deployment, scaling, security, and monitoring) are included in the price.
These three factors make up an interesting set of deployment constraints. For example, with request-based pricing and no overhead to set up or operate environments, it costs the same to send a million user requests to a single version of your application, to two different versions, or to 50 different versions.
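The cost argument above can be sketched as a small calculation. The per-request price below is an illustrative assumption, not the real AWS Lambda price list; the point is only that the total depends on request volume, never on the number of environments:

```python
# Hypothetical request-based pricing (illustrative numbers, not AWS's actual rates).
PRICE_PER_MILLION_REQUESTS = 0.20  # assumed flat price in USD
FIXED_COST_PER_ENVIRONMENT = 0.00  # serverless: no reserved capacity per environment

def monthly_cost(total_requests: int, environments: int) -> float:
    """Cost depends only on total requests served, not on how many
    application versions (environments) those requests are split across."""
    request_cost = (total_requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return request_cost + environments * FIXED_COST_PER_ENVIRONMENT

# A million requests cost the same whether 1 version or 50 versions serve them:
print(monthly_cost(1_000_000, environments=1))   # 0.2
print(monthly_cost(1_000_000, environments=50))  # 0.2
```

With a traditional reserved-capacity model, `FIXED_COST_PER_ENVIRONMENT` would be nonzero, and every extra version deployed would add to the bill regardless of traffic.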
The number of requests matters, not the number of environments.