Serverless: A Quick and Dirty Overview
If you keep listening, you’re going to hear people talk about serverless computing. Here’s a simplified history that projects into the future.
In the beginning was the mainframe, the most successful of which was the IBM System 360. Back then, hardware was far less capable than it is now, and for the tasks hardware could do, operating systems were much more specialized. IBM took batch processing to time-sharing, and time-sharing begat the most clever operating system of all, VM/CMS. At least, I thought so in 1986.
This triumph by IBM’s engineers, enabling a single machine to run multiple operating systems, dates back to the 1970s. It wasn’t until personal laptops got really powerful in the late 90s that VMware was able to revive the idea, and so the latest era of virtualization began. The solidification and specialization of Linux made the demand real. Running Fedora and Windows 95 Pro on my ThinkPad made me feel like a boss in 2000.
Virtualization started happening in small, smart companies, and it wasn’t long before I could request a Linux server from IT in the morning and get one that afternoon. I could configure it, load my specialty software, and be productive the next day. If you’ve ever been in a situation where running out to Best Buy to cover a hardware shortage has been a tempting option, you know what a miracle virtualization seemed to be. But that was before the AWS cloud.
I don’t want to get into comparisons, but by about 2010 the air was thick with nascent DevOps talk and people speaking about Ansible, Salt, Puppet, and Chef. Huh? What was all that? Well, briefly, the great mystery of the day was how to predict whether your website would get huge or not. How do you do capacity planning in the age of virality? Virtualization could certainly help, but what if I need 20 new virtual webservers, like now? These deployment tools created ways to configure multiple servers in an automated fashion. A godsend for the caretakers of websites, but a mystery to us Enterprise types. Asking for 20 new servers in my world took the equivalent of passing a UN resolution, but these orchestration tools put me back in control as a programmer. As I joined the open source world there was one phrase that stuck with me.
Infrastructure as code.
I got it. Someday over the rainbow, I’d be able to write something like
check_every_hour:
    server_count = integer(user_count / 50)
    server_config = {'4GB RAM', '1TB hard drive', 'database = true'}
    deploy_to_prod
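To make the daydream concrete, here is a minimal sketch of that pseudocode in Python. The `deploy_to_prod` call, the 50-users-per-server ratio, and the config shape are hypothetical stand-ins taken from the pseudocode above, not any real provider’s API; a real version would call out to something like Terraform, CloudFormation, or Ansible.

```python
import math

# Hypothetical capacity ratio from the pseudocode above.
USERS_PER_SERVER = 50

SERVER_CONFIG = {
    "ram": "4GB",
    "disk": "1TB",
    "database": True,
}

def desired_server_count(user_count: int) -> int:
    """Round up so a partial server's worth of users still gets capacity."""
    return max(1, math.ceil(user_count / USERS_PER_SERVER))

def deploy_to_prod(count: int, config: dict) -> None:
    # Stand-in for a real orchestration API call.
    print(f"deploying {count} servers with {config}")

def check_every_hour(user_count: int) -> int:
    """Run on a schedule: size the fleet to the current user count."""
    count = desired_server_count(user_count)
    deploy_to_prod(count, SERVER_CONFIG)
    return count
```

The one tweak over the pseudocode is rounding up rather than truncating, so a fleet never comes up one server short of the load.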
That would be amazing, right? Well, believe it or not, that’s already happening. Think Netflix. How do they know how many people want to watch ‘The Martian’ on the day it becomes available? They don’t, and they don’t have to know. They’ve got deployment code that manages it for them.
So those of us who have been in the cloud, where all this magic is actually coded into APIs, have been building environments that pop into existence like the intro to Game of Thrones. But what if we could do that on a smaller scale?
What if I could have a little process daemon that watches one little condition tripwire and brings to bear a quantum of compute resource for that little job? Like, if somebody drops a spreadsheet into this file server, parse it and load it into the database. Then I wouldn’t have to pay for an actual server even for one hour. Well, that’s basically what serverless computing like AWS Lambda does: micro-pricing for micro-processing counted in milliseconds. Instead of paying 50 cents per hour to run a virtual server that handles maybe 10 jobs a day, I can pay 20 cents per million transactions. Just over 20 cents to read ten 100,000-line spreadsheets and load them up. This is the ultimate.
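The spreadsheet tripwire above is a classic Lambda shape. Here is a simplified sketch of what such a function might look like, with assumptions flagged: the event layout and the `load_row` stand-in are hypothetical, and a real handler would fetch the file from S3 (via boto3) using the bucket and key in the event rather than receiving the text inline.

```python
import csv
import io

def parse_spreadsheet(csv_text: str) -> list:
    """Turn raw CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def load_row(row: dict) -> None:
    # Stand-in for an INSERT into the real database.
    pass

def handler(event, context=None):
    """Invoked once per dropped file; you pay only for these milliseconds.

    Hypothetical event shape: the CSV text arrives in event["body"].
    A production handler would instead read the S3 bucket/key from the
    event and download the object.
    """
    rows = parse_spreadsheet(event["body"])
    for row in rows:
        load_row(row)
    return {"rows_loaded": len(rows)}
```

The point of the shape: no server sits idle waiting for spreadsheets; the function exists only for the milliseconds the event takes to process.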
We’ve pretty much come full circle back to time-sharing. I don’t need to think about the machine. Virtualization lets me choose whatever operating system I want. Continuous deployment allows me to configure my compute resource with exactitude. Serverless goes to the final step where I can write very small event-driven programs that execute whatever I want. It’s like setting alerts for program trading, but for any compute job I can think of.
That’s serverless computing.
In the future, you can imagine an open source marketplace for optimized functions that run in a serverless environment. Then you can imagine functional processing tags that direct certain types of transactions to certain hardware and others to other specialized hardware. The economics of these architectures are revolutionary. Now you know.