5 tips to manage serverless security – while coding at full steam

Serverless security is becoming a really hot topic these days – especially with the ever-growing adoption of serverless computing. Yet, should it really become a developer’s nightmare? Can we not simply bake serverless security into our DevOps lifecycle, sit back, and relax while automated security tooling does the job for us?

Can we automate serverless security?

By 2021, annual global financial losses due to security breaches are expected to reach a whopping $6 trillion – and serverless is not spared. The harsh truth, however, is that there is no silver bullet: as long as systems continue to be designed by people – error-prone humans – there will be security loopholes; and other humans will keep stumbling upon them, sooner or later.

So, there is no way to achieve perfect, 100% serverless security – not even through automation, with the most sophisticated artificial intelligence yet to be invented. However, we can mitigate the risks significantly, simply by adopting common serverless security best practices. And the good news is, automation can already play a major role here.

In fact, automating serverless security reinforcements is becoming more and more important in today’s highly volatile FaaS development and DevOps spaces.

Serverless Security: what should we focus on?

As numerous articles have pointed out, serverless computing offers you a well-managed, well-constrained and well-secured environment – despite occasional hiccups here and there. Securing this environment is the responsibility of the provider – the public cloud, or the infra teams in case of on-premise deployments. However, what you do with the platform falls on your side of the responsibility line; this includes:

  • Deployment artifacts.
  • Sanitizing user input.
  • Accessing external services, securely and reliably.

So, it’s our own work that we should mostly be concerned about.
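Take input sanitization, for example: even a few lines of allowlist validation and escaping inside a function handler close off whole attack classes. Here is a minimal sketch in Python – the field names and rules are illustrative assumptions, not a complete defense:

```python
import html
import re

# Allowlist for a hypothetical "username" field: only safe characters,
# bounded length. Anything outside the pattern is rejected outright.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{3,32}$")

def sanitize_comment(text: str) -> str:
    """Escape HTML metacharacters so stored input can't later be
    replayed as markup (stored XSS)."""
    return html.escape(text.strip())

def validate_username(name: str) -> bool:
    """Allowlist validation: accept only known-safe input shapes
    instead of trying to blocklist every dangerous one."""
    return bool(USERNAME_RE.match(name))
```

The allowlist approach matters: rejecting everything that does not match a known-good pattern is far more robust than trying to enumerate bad inputs.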

The five tips, without further ado

  1. Put up barriers, gates and alarms at platform level
  2. Add validators to your CI/CD workflows
  3. Enforce and automate best practices
  4. Retain post-deployment observability
  5. Remain updated, a.k.a. keep your eyes peeled

1: Put up barriers, gates and alarms at platform level

Cloud providers often have a better idea of security than we do. While they may not be in a position to analyze or modify your application code, they sure have mechanisms in place to detect and mitigate anomalies that arise from security breach attempts.

  • AWS Shield offers protection in two tiers. The free Standard tier covers CloudFront, Route 53 and other AWS utility services, many of which are serverless by nature. The Advanced tier offers richer control – including EC2, ELB and the Global Accelerator network layer service.
  • AWS Lambda has account-level maximum concurrency throttles, so if your functions start getting flooded with invocations – either during an attack or due to an innocent programming error – AWS will automatically hop in and keep things (fairly) under control.
  • AWS API Gateway has request throttling for similar purposes, in addition to API key-driven usage plans and authorizers for access control – greatly reducing the attack surface at no extra cost.
  • Google Cloud also has its own array of attack mitigation tooling and services, such as Cloud Armor that works with Load Balancing (although they do not quite fit the serverless computing model yet).
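Under the hood, this style of request throttling generally boils down to a token bucket: a steady refill rate plus a burst capacity, which is also how API Gateway describes its usage-plan limits. A minimal sketch of the idea (not the providers' actual implementation):

```python
import time

class TokenBucket:
    """Toy token-bucket throttle: tokens refill at a steady rate up to
    a burst capacity; each allowed request consumes one token."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Admit the request if a token is available, else throttle it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket with `rate=10, burst=20` would admit a 20-request spike, then settle to a sustained 10 requests per second.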

Additionally, there are billing strategies to detect anomalous workloads, such as billing alarms and budget alerts on AWS that act on both actual and forecast values. Google Cloud does offer hard limits in the form of quotas, although one could argue they are less convenient than hard dollar-amount budget limits – which are everybody’s dream, after all. Consolidated billing can also help detect anomalies across multiple accounts, governing them under one budget while still allowing a clear demarcation of, say, test and production cloud resources.
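The forecast-based alerting mentioned above can be sketched as a simple linear projection – AWS Budgets uses a richer forecasting model, so treat this purely as an illustration of the idea:

```python
def forecast_overrun(spend_to_date: float, day_of_month: int,
                     days_in_month: int, budget: float) -> bool:
    """Flag when a linear projection of month-to-date spend would
    exceed the monthly budget. Hypothetical helper for illustration;
    real budget services use more sophisticated forecasting."""
    forecast = spend_to_date / day_of_month * days_in_month
    return forecast > budget
```

For instance, $120 spent by day 10 of a 30-day month projects to $360 at month end, which would trip a $300 budget alert well before the money is actually gone.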

These precautions mainly act on DoS-type attacks; but you can always set up third-party monitoring tools at cloud platform level, to analyze your execution path and detect other types of breaches – more on that later.

2: Add validators to your CI/CD workflows

Prevention is better than cure; similarly, fighting serverless security loopholes at the development and deployment stages is the most effective way to safeguard your serverless applications.

Tools like the serverless-snyk plugin for the Serverless Framework and the Snyk CLI scanner allow you to run analyses on your codebase at development and build time. There are numerous other CI/CD plugins for Jenkins, Travis CI, etc. that do the same at deployment time, acting as fail-fast gates to prevent vulnerable code from spoiling your production serverless deployment.
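The essence of such a fail-fast gate is a script that parses the scanner’s report and returns a non-zero exit code when blocking issues are found, so the pipeline stops there. A sketch follows; the report schema ("vulnerabilities" entries carrying a "severity") is an assumption for illustration, not Snyk’s exact output format:

```python
import json
import sys

# Severities that should block a deployment (illustrative policy).
FAIL_ON = {"high", "critical"}

def gate(report: dict) -> int:
    """Return a CI exit code: 1 if any blocking vulnerability exists,
    0 otherwise. Prints the blockers so the build log shows why."""
    blocking = [v for v in report.get("vulnerabilities", [])
                if v.get("severity") in FAIL_ON]
    for v in blocking:
        print(f"BLOCKED: {v.get('id', '?')} ({v['severity']})")
    return 1 if blocking else 0

if __name__ == "__main__":
    # e.g. scanner --json | python gate.py  (hypothetical invocation)
    sys.exit(gate(json.load(sys.stdin)))
```

Wired into Jenkins or Travis CI as a pipeline step, the non-zero exit code is all it takes to stop vulnerable code from reaching production.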

3: Enforce and automate best practices

Enforcing serverless security best practices on developers is a good thing, but how do you ensure that they adhere to them? Integrating security validation tooling into the workflow, as discussed above, could get us started. However, developers may prefer more robust approaches – rather than having to make hasty last-minute “fixed security issue” commits to push their app through the CI/CD pipeline.

AWS IAM permission generation is a good example. It is quite easy for a dev to hack together an overpermissive Lambda policy – and they are surprisingly common among samples and tutorials as well. However, if your production Lambda somehow gets compromised, the IAM security loophole will open up a gateway to the rest of the infrastructure in your AWS account; that could be destructive, to say the least.

Tooling like the Serverless PureSec plugin can auto-generate least-privilege IAM permissions on demand. Moreover, enhanced IDEs like SLAppForge Sigma make this an integral part of the deployment cycle, so that devs can completely forget about the permission-management aspect of serverless security.
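What these generators produce is conceptually simple: a policy that lists exactly the actions and resources the code needs, instead of a blanket `"Action": "*"`. A hypothetical sketch (the action names follow IAM’s `service:Operation` convention, but the inputs and the example ARN are purely illustrative – real tools derive them from the code’s actual API calls):

```python
def least_privilege_policy(statements):
    """Build an IAM policy document granting exactly the given
    (actions, resource) pairs - and nothing more."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": sorted(actions), "Resource": resource}
            for actions, resource in statements
        ],
    }

# Example: a function that only reads and writes one DynamoDB table
# (hypothetical table ARN).
policy = least_privilege_policy([
    (["dynamodb:GetItem", "dynamodb:PutItem"],
     "arn:aws:dynamodb:us-east-1:123456789012:table/orders"),
])
```

If that function is ever compromised, the blast radius is one table – not your whole AWS account.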

4: Retain post-deployment observability

Without preemptive serverless observability, you will never know when an attacker might stab you in the back – until they actually do.
While tools like SigmaDash offer visualizations of raw runtime metrics of serverless functions – like throttled executions, memory usage, and cost – you need to further process the raw data to visualize access patterns, and then, anomalies.

Tools like Dashbird and Serverless Framework Pro offer this to some extent. However, if you are comfortable with sprinkling some instrumentation onto your code, there are more advanced solutions like AWS X-Ray, as well as agents from New Relic and Dynatrace, which can monitor your application’s execution paths more robustly and provide deeper analytics for identifying security anomalies.
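As a toy illustration of turning raw metrics into anomaly signals, the sketch below flags a per-minute invocation count that sits far outside its historical distribution – a crude stand-in for what these tools do with access patterns:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` (e.g. this minute's invocation count) if it sits
    more than `threshold` standard deviations above the historical
    mean. Toy z-score detector, not a production algorithm."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean  # flat history: any change stands out
    return (latest - mean) / stdev > threshold
```

Feeding it a baseline of roughly 100 invocations per minute, a sudden spike to 200 stands out immediately, while ordinary jitter around the mean does not.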

5: Remain updated, a.k.a. keep your eyes peeled

As I said, there is no “secure application”; the same is true for serverless.
Hackers will continue to invent new ways to exploit applications and platforms. Security pros will continue to patch them, and introduce newer tools, strategies and best practices.
Even with a fully automated serverless security pipeline, if you do not keep paying attention to new threats and solutions, you will become obsolete and vulnerable – before you know it.

So it is important to keep in touch with the latest security vulnerability disclosures, serverless or not – especially those on your libraries and dependencies. You don’t really need to change your browser homepage to The CVE List; following popular security feeds and curated lists like awesome-serverless-security will get you far enough.

In Closing

Serverless security is critical, but it should not slow down your DevOps pace. With a few tricks and tools up your sleeve, you can automate serverless security to a great extent – and continue to enjoy a sleek DevOps pipeline that still delivers near-perfect security, with far less room for human error.