Serverless is cheaper, not simpler

The Emit conference last week featured a lineup of excellent talks, an engaging panel discussion, and plenty of time to meet and exchange notes with the awesome fellows of the serverless community.

The alluring economics of serverless

Cost is unanimously cited as the key driver for serverless adoption. The economic benefits stem from two main architectural attributes:

On-demand execution and built-in elasticity

By automatically spinning up and down compute resources per request, serverless platforms optimize utilization while keeping uptime and reliability high. There is no need to provision for peak capacity that sits idle most of the time.

Studies have shown serverless platforms utilize compute resources 25-50% more efficiently than containers and virtual machines:

Architecture           Average CPU Utilization
Virtual machines       20-50%
Containers             30-70%
Serverless functions   70-90%

Data source: Forrester

This translates into 50-70% potential compute cost savings in production workloads by going serverless.
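As a back-of-the-envelope illustration of how utilization drives that saving, here is a minimal sketch in Python. The utilization figures are midpoints from the table above; the per-vCPU-hour rate is a hypothetical placeholder, not a real price:

```python
# Rough model: cost scales inversely with average CPU utilization,
# because low utilization means paying for idle provisioned capacity.
def effective_cost(useful_compute_hours, utilization, rate_per_hour):
    """Billable cost to deliver the useful work at a given utilization."""
    return useful_compute_hours / utilization * rate_per_hour

RATE = 0.05   # hypothetical $/vCPU-hour
WORK = 1_000  # vCPU-hours of actual business computation per month

vm_cost = effective_cost(WORK, 0.35, RATE)          # VMs at ~35% utilization
serverless_cost = effective_cost(WORK, 0.80, RATE)  # functions at ~80% utilization

savings = 1 - serverless_cost / vm_cost
print(f"VMs: ${vm_cost:.0f}, serverless: ${serverless_cost:.0f}, savings: {savings:.0%}")
```

With these midpoint assumptions the model lands at roughly 56% savings, squarely inside the 50-70% range cited above.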

Pay-per-use billing

With serverless, you pay only for the exact resources used to execute your code, metered down to 1 ms increments. This makes your cloud cost directly quantifiable, with no guesswork required. Studies by Forrester Research found enterprises save 42-59% in infrastructure costs by running production workloads on serverless:

Serverless leads to substantial cloud cost savings
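To make the billing model concrete, here is a sketch of an AWS Lambda-style monthly cost estimate: a flat per-request fee plus GB-seconds of duration, billed in 1 ms increments. The rates below are illustrative defaults; real pricing varies by region and architecture:

```python
import math

def lambda_monthly_cost(invocations, avg_ms, memory_mb,
                        per_request=0.20 / 1_000_000,
                        per_gb_second=0.0000166667):
    """Estimate a monthly bill under a pay-per-use model.
    Rates are illustrative; check your provider's current price list."""
    gb = memory_mb / 1024
    # Duration is metered in 1 ms increments.
    billed_seconds = invocations * math.ceil(avg_ms) / 1000
    return invocations * per_request + billed_seconds * gb * per_gb_second

# 5M requests/month, 120 ms average duration, 256 MB functions
cost = lambda_monthly_cost(5_000_000, 120, 256)
print(f"~${cost:.2f}/month")
```

Five million requests a month comes out to a few dollars under these assumptions, with zero cost for idle time.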

The savings can be dramatic for some apps, reaching over 90% reductions in hosting bills. Anne Thomas, a Gartner analyst, shared at the Emit panel discussion that her enterprise clients consistently cite "cost" as the #1 benefit attracting them to serverless.

But there ain’t no such thing as a free lunch. To gain the economic benefits of serverless, complexity must increase somewhere…

Trading simplicity for scale & cost: A fundamental law

There is no free lunch in closed systems, as the laws of thermodynamics remind us: to gain a benefit, something must be sacrificed. In technology, the most common currency we pay is complexity.

When microservices replaced monoliths, organizations gained benefits like independent scalability, faster iteration, and reliability under load. But it was not a free lunch – engineers took on immense extra complexity:

  • Handling eventual consistency and race conditions
  • Managing asynchronous processes and event architectures
  • Implementing fault tolerance & reliability patterns
  • Load balancing across dynamic infrastructure
  • Supporting multiple message formats, data schemas and APIs
  • Running zero-downtime deployments on hundreds of moving parts

We paid for microservices gains with a great deal of complexity.

The same fundamental law operates as we move from microservices to serverless architectures. Functions become simpler, but total system complexity tends to grow.

Where does this complexity go, and how can we manage it?

Mapping the complexity delta from microservices to serverless

To conceptualize where complexity resides, let’s visualize a transition from a monolith to microservices to serverless:

Complexity increases from monolith to microservices to serverless

The orange blocks represent code/business logic. As we move from monoliths → microservices → serverless, each individual block becomes smaller and simpler, focusing on specific tasks.

But the total system complexity grows as seen by more overall blocks and connections. There is much more "wiring" complexity in serverless systems relating to configurations, deployment orchestration, resource management, automation scripts, etc.

Let’s analyze where things simplify versus where complexity tends to increase in serverless.

Simplifications

By adopting a serverless model, the following areas generally become simpler:

Infrastructure

  • No more provisioning and managing virtual machines, networking, operating systems, storage
  • Significantly reduced ops burden around infrastructure reliability and scalability

Environment

  • Lightweight execution contexts (containers) with faster startup times
  • Auto-scaled, event-driven compute resources
  • Built-in high availability and fault tolerance

Code

  • Business logic decomposed into simple, stateless functions
  • Clear separation of concerns following single responsibility principle
  • Easy troubleshooting and instrumentation with execution tracing
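What such a decomposed function looks like can be sketched with an AWS Lambda-style handler signature. The event shape here is invented for illustration:

```python
import json

def handler(event, context=None):
    """Stateless order-total function: no shared state, no infrastructure code.
    Everything it needs arrives in the event; everything it produces goes out
    in the response."""
    items = event.get("items", [])
    total = sum(i["price"] * i["quantity"] for i in items)
    return {"statusCode": 200, "body": json.dumps({"total": round(total, 2)})}

# Local invocation for a quick check -- no servers, no deployment
resp = handler({"items": [{"price": 9.99, "quantity": 2}]})
print(resp)
```

The single-responsibility shape is what makes these functions easy to reason about, trace, and test in isolation.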

Sources of increased complexity

The complexity saved from not needing to manage infrastructure and runtimes shifts to other areas in serverless:

Architecture

  • Visualizing and understanding flow through chained, event-driven services
  • Identifying bottlenecks and debugging complex request fan-outs
  • Modeling state in a stateless function world
  • Handling cross-service transactions & data consistency
  • Having to reinvent integration and automation logic in business code

Operations

  • Coordinating releases across message buses, databases, CDNs and hundreds of functions
  • Lack of visibility into lower level metrics beyond cloud provider tooling
  • Tracing performance issues across components owned by different teams
  • Batch processing/scheduling logic moved into application code

Testing

  • Simulating asynchronous processes and event triggers
  • Generating valid test datasets across interconnected services
  • No control over underlying infrastructure to inject failures
  • Testing infrastructure configurations and policy changes
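One mitigation for the trigger-simulation problem is to unit-test handlers against hand-built event payloads, so at least the function logic is covered without real infrastructure. A sketch follows; the S3-style event shape is simplified, not the full schema:

```python
def on_upload(event, context=None):
    """Handler that reacts to object-created events and extracts the keys."""
    records = event.get("Records", [])
    return [r["s3"]["object"]["key"] for r in records]

# Simulate the trigger: build the event by hand instead of uploading a file.
fake_event = {
    "Records": [
        {"s3": {"object": {"key": "reports/2024-01.csv"}}},
        {"s3": {"object": {"key": "reports/2024-02.csv"}}},
    ]
}
keys = on_upload(fake_event)
print(keys)  # handler logic exercised with no cloud resources involved
```

This covers the business logic, but not the trigger configuration itself, which is exactly the gap described above.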

Tooling

  • Gluing together cloud vendor services (API Gateway, SQS) with custom logic
  • Configuring multiple functions/triggers to act as a unified app
  • Losing DevOps capabilities for shell access, network controls, etc.

Architectural principles for taming serverless complexity

While serverless introduces messy complexity in some areas as shown above, there are architectural principles and patterns we can use to minimize and manage it:

Managed services over custom infrastructure
Prefer cloud-managed databases, message queues, object stores (S3), and CDNs out of the box rather than configuring and running your own instances, unless absolutely necessary.

Event-driven design
Embrace event-driven messaging across decoupled services over direct API requests. This simplifies scaling, delivery guarantees, audit logging, and reliability.
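The decoupling can be illustrated with a toy in-memory event bus, a stand-in for SNS/SQS-style messaging; the API is invented for illustration:

```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub bus: publishers don't know who consumes their events."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, fn):
        self.subscribers[topic].append(fn)

    def publish(self, topic, payload):
        for fn in self.subscribers[topic]:
            fn(payload)

bus = EventBus()
audit_log = []
bus.subscribe("order.placed", lambda e: audit_log.append(e["id"]))
bus.subscribe("order.placed", lambda e: print("shipping", e["id"]))

# The order service emits an event; it never calls shipping or audit directly.
bus.publish("order.placed", {"id": "A-17"})
```

New consumers can subscribe without touching the publisher, which is the property that simplifies scaling and audit logging in the real, managed versions of this pattern.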

Thinking in MapReduce
Many problems can be modeled as parallel map operations over data sets followed by summary reduce steps. This takes advantage of serverless scale-per-request.
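The pattern in miniature, with a local thread pool standing in for parallel function invocations:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def mapper(chunk):
    """Map step: each invocation independently processes one chunk."""
    return sum(x * x for x in chunk)

def reducer(a, b):
    """Reduce step: combine partial results."""
    return a + b

chunks = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(mapper, chunks))  # fan out: one "function" per chunk
total = reduce(reducer, partials)              # fan in: summarize
print(total)  # sum of squares of 1..9 = 285
```

In a serverless setting each chunk would go to its own function invocation, so the map phase scales out automatically with input size.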

Function composition
Break down complex workflows into choreographed steps that call out to specialized functions. Chain together multiple fan-out/fan-in stages rather than doing all the orchestration inside one giant function.
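A composition sketch: small specialized steps chained into a workflow rather than one monolithic function. The step names and order shape are invented for illustration:

```python
def validate(order):
    """Step 1: reject bad input early."""
    assert order["qty"] > 0, "quantity must be positive"
    return order

def price(order):
    """Step 2: compute the total."""
    return {**order, "total": order["qty"] * order["unit_price"]}

def confirm(order):
    """Step 3: mark the order confirmed."""
    return {**order, "status": "confirmed"}

def pipeline(order, steps=(validate, price, confirm)):
    """The workflow is just the composition of small functions."""
    for step in steps:
        order = step(order)
    return order

result = pipeline({"qty": 3, "unit_price": 4.5})
print(result["total"], result["status"])
```

Each step could be its own deployed function; a step-function-style orchestrator would then play the role of `pipeline`.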

Data streaming
Use streaming data platforms over complex microservice data pipelines. Kinesis, Kafka streams, Flink, etc handle ordering, parallelization, delivery semantics, replayability out of the box.
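What "replayability out of the box" buys you can be sketched with a toy offset-based log, a stand-in for Kinesis/Kafka semantics:

```python
class Stream:
    """Toy append-only log: consumers track their own offsets, so any
    consumer can rewind and replay from an earlier position."""
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)

    def read(self, offset=0):
        """Return every record from the given offset onward."""
        return self.records[offset:]

stream = Stream()
for evt in ["created", "paid", "shipped"]:
    stream.append(evt)

print(stream.read(0))  # full replay from the beginning
print(stream.read(2))  # a late consumer picks up from offset 2
```

The real platforms add durability, partitioning, ordering, and delivery guarantees that this toy omits, which is precisely why it pays to use them rather than rebuilding those semantics in microservice pipelines.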

Adhering to these and a few other best practices helps keep the complexity unleashed in serverless systems under control.

But even with robust architecture and design principles, is it enough?

The missing middleware for serverless complexity

Unfortunately, the processes and tools have not yet caught up to the complexity of serverless in production:

  1. Most serverless tooling remains stuck at Function-as-a-Service: Frameworks like Serverless Framework, SAM, and Apex provide handy abstractions over AWS Lambda/Azure Functions but don’t fully address composed, event-driven architectures with complex logic.

  2. Orchestration still lives as tribal knowledge: Looking at reference serverless applications like the Nordstrom Hello Retail demo, a ton of complexity around cross-service dependencies, message formats, deployment order, etc. remains unaddressed by frameworks and lives in scripts or developers’ heads.

  3. DevOps disruption: Adopting serverless renders some mature DevOps capabilities, like shell access and network controls, obsolete. But the event-driven, policy-based automation needed to fill this void is still emerging. Terraform lags behind the configurations needed for serverless-first infrastructures.

The complexity will have to shift left into frameworks and middleware. Serverless applications demand a robust underlying platform addressing issues like:

  • Event flow monitoring
  • Policy based deployments
  • Scaling dashboards
  • Infrastructure security scanning
  • Cross-stack troubleshooting
  • Immutable infrastructure, etc.

Open source projects like Stackery, SEED, and the Amazon States Language, and commercial offerings from startups like Contrast Security and Thundra, show early progress.

But serverless DevOps remains fragmented compared to the maturity of the Kubernetes ecosystem. Filling these gaps will fuel the next stage of explosive enterprise adoption.

So while architects and developers must carry the complexity burden today, it is only temporary. Economic forces have a way of self-correcting systems towards efficiency and productivity. And the economies unlocked by serverless over current architectures are too massive to ignore.

Once tools catch up, we will look back at early serverless complexity like we do at pre-DevOps days today.

The road ahead

Like many technological inflection points before, what lies ahead is a great opportunity.

DevOps came to tame complexity introduced by microservices – making cloud-native delivery possible.

The next wave of innovation stands to tame complexity of serverless – making cloud-native delivery economical at any scale.

When that happens, and frameworks advance to handle wired-together architectures natively, I predict we will see mainstream serverless adoption explode across many verticals.

The incentives are astronomical – up to 10x cost savings in some cases. The shift is inevitable. Overall, the cloud industry recognizes serverless as the next evolution following virtual machines, containers, and PaaS.

So don’t be dismayed by the early complexity humps. They are a temporary payment toward the permanent rewards of ridiculously scalable and cost-efficient infrastructure – not just for Silicon Valley tech companies but for enterprises worldwide.

What we make simple, scales; what we make economical, spreads. By that logic, the serverless wave is just getting started…

Did you find this post useful? Share your serverless experiences or feedback on Twitter or in the comments below.
