Kubernetes versus serverless – the battle of the decade! Both deserve their status as exciting and powerful platforms that offer organizations tremendous boosts in agility, scalability, and computing performance, but it is easy to forget that Kubernetes offers advantages that serverless alternatives don’t – and vice versa. In this episode, Michael Nunez, Dave Anderson, and William Jeffries discuss the pros and cons of each platform, what it is like working with Kubernetes versus serverless on a project, and compare complexity, load, and function call and time costs, as well as different cloud hosting services. Tune in today for all this and so much more!
Key Points From This Episode:
Transcript for Episode 176. Kubernetes VS Serverless
[0:00:01.9] MN: Hello and welcome to The Rabbit Hole, the definitive developer’s podcast. Live from the boogie down Bronx. I’m your host, Michael Nunez. Our co-host today.
[0:00:09.3] DA: Dave Anderson.
[0:00:10.1] MN: And our producer.
[0:00:11.3] WJ: William Jeffries.
[0:00:14.1] MN: Today, it’s the battle of the decade. Kubernetes versus serverless. Which side will you choose?
[0:00:22.0] DA: My god, so many buzzy bees, so many buzzy bees!
[0:00:27.9] WJ: What decade are we in? Are we in the ‘20s?
[0:00:31.5] DA: This is the 22nd century.
[0:00:34.3] MN: The 22nd century, it’s going down right now. Kubernetes versus serverless. We’ll explore some of the things that –
[0:00:40.5] DA: 2099
[0:00:40.6] MN: The benefits of Kubs and the benefits of serverless. I call it Kub. What is it like, the cool kids call it K8s because there’s eight characters in between the K and the S, is that like the thing?
[0:00:52.0] DA: Is that like –
[0:00:52.9] WJ: K8s.
[0:00:56.0] DA: K8er boy.
[0:00:57.4] MN: Yeah, K8er boy, there you go, like sk8er boy.
[0:00:59.5] DA: See you later boy. Yeah, it’s like we learned about internationalization and accessibility, right? A11y.
[0:01:10.4] MN: A11y. I18n.
[0:01:12.5] DA: That’s the one, that’s how I know that one. Kubernetes tries to be cool and it’s like, K8s. All right, whatever, you got it.
[0:01:19.7] MN: Right, it’s elevated to the same level as those basic needs of accessibility and internationalization. You’ve got to get the Kubernetes, you’ve got to get that helmsman in your application, steering you to success for your infrastructure, which, you know, we’ve got some episodes on – 104.
[0:01:19.7] DA: 104, Kubernetes migration. Man, William, you mentioned some time before that you had worked with Kubernetes on a small project, yeah?
[0:01:57.5] WJ: Yeah, I had worked with Kubernetes for clients with bigger deployments, where it made more sense to use Kubernetes, and then I was like, “I’m going to use this on my Chinese side project, which has no traffic.” That turned out to be a huge mistake, because I started getting hit with these massive server bills. What? I don’t have any users, why am I paying for all these servers?

I was on GCP using their Kubernetes offering, and it just comes with a whole bunch of extra infrastructure. When I went to shut it down – this was the worst part – I thought I shut everything down, and then I came back several weeks later to even more server bills, because somehow there were some static IP addresses that I was required to set up in order to get the Kubernetes platform offering to run, and those aren’t a turn-off-and-turn-on thing. You have to go and release them in a separate section of GCP.
[0:03:06.6] MN: You had to turn on more things in order to turn it off? Or –
[0:03:13.5] WJ: I was like – there is a section in the GCP dashboard where you can go to shut down all of the infrastructure relating to your deployment. For whatever reason, they don’t put the static IP addresses you’re required to reserve in that section. I had shut everything down, I thought everything was off, and it turned out that there was another section of GCP where I had these static IP addresses still reserved.
[0:03:46.0] MN: That’s how Google gets your money. It’s not the big businesses, it’s all the side projects. They just get all the static IPs, just milking it, taking your $5 or $10 or whatever.
[0:04:00.6] WJ: Yeah, I switched to serverless.
[0:04:04.3] MN: Did you use – what platform was it? Also GCP or did you use AWS?
[0:04:10.8] WJ: AWS. It seems like the ecosystem is not really that robust yet for serverless on GCP, unfortunately. It’s just hard to find [inaudible 0:04:20] to get it set up, and I figured, why fight the river?
[0:04:25.5] MN: There’s another Kubernetes reference with the oarsman. I thought that using the serverless framework would potentially make it a little bit easier to use other platforms outside of AWS, like, you’re able to use different providers and I think GCP is one of them, but the last project that I worked on was an AWS serverless project and it was pretty cool to work on that.
I imagine William that for your side project, you saved a lot of money on when you switched over to serverless.
[0:05:07.3] WJ: Yeah, it’s free so far.
[0:05:10.1] MN: Yeah, it’s free for like the first million functions that run or something like that or like –
[0:05:15.1] WJ: Two million functions free.
[0:05:17.8] MN: Which is insane. If you have a –
[0:05:22.0] WJ: Calls we should say.
[0:05:23.9] MN: Right, if you have like a pet project where you’re the only user and you’re hitting these functions, you don’t get billed or dinged for your server cost, which is great. I’m going to pull up the actual prices, because I think those numbers are kind of ridiculous. If you search for Amazon serverless prices, the paragraph is, “The monthly compute price is $0.00001667 per GB-second, and the free tier provides 400,000 GB-seconds.”

“The monthly request price is $0.20 per one million requests, and the free tier provides one million requests per month.” If you are your own only user on this pet project, you’d have to hit that bad boy one million times to pay 20 cents, which is insane. How many requests did you make when you were using the Kubernetes infrastructure on your pet project? I’m sure it wasn’t a million, William.
[0:06:29.6] WJ: No, it was probably fucking less than a hundred.
[0:06:32.8] MN: Less than a hundred, just a hundred.
[0:06:35.6] WJ: On the order of the hundreds.
[0:06:37.5] MN: Right, just a hundred, not even a hundred thousand, none of that. One of the benefits, I guess, that we should share about serverless is that it is dirt cheap to get a pet project started. It’s a really cool way to just write your functions and deploy them, and then you have access to them. If it just has to do with small computations, serverless is definitely the way I’d go for stuff like that.
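The free-tier math Michael reads out can be sanity-checked with a few lines of arithmetic. This is a rough sketch using only the prices quoted above, not a billing calculator; check AWS’s current pricing page before relying on the numbers:

```python
# Rough AWS Lambda monthly cost estimate using the prices quoted above:
# $0.00001667 per GB-second (400,000 GB-seconds free) and
# $0.20 per million requests (1 million requests free).

GB_SECOND_PRICE = 0.00001667
FREE_GB_SECONDS = 400_000
PRICE_PER_REQUEST = 0.20 / 1_000_000
FREE_REQUESTS = 1_000_000

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Estimate the monthly bill after the free tier is applied."""
    gb_seconds = requests * avg_duration_s * memory_gb
    compute = max(0.0, gb_seconds - FREE_GB_SECONDS) * GB_SECOND_PRICE
    request_charge = max(0, requests - FREE_REQUESTS) * PRICE_PER_REQUEST
    return compute + request_charge

# A pet project with a few hundred 200 ms calls a month is squarely free:
print(monthly_cost(300, 0.2, 0.128))                  # → 0.0
# Two million half-second, 512 MB calls a month costs under two dollars:
print(round(monthly_cost(2_000_000, 0.5, 0.5), 2))    # → 1.87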
[0:07:06.6] WJ: Is Kubernetes dead?
[0:07:08.7] MN: I don’t know about whether we need to kill it. I mean, I don’t know, you tell me. Your pockets felt the brunt of it on a pet project, but a lot of the clients that we work with are still using Kubernetes. I imagine that they want to be able to spin up their servers, ensure that they are running 100% of the time, and self-heal if things go awry. I’m sure that Kubernetes has its benefits that I definitely need to do some more research on. I’m sure, William or Dave, you guys have more context on the Kubs.
[0:07:50.0] WJ: Yeah, I think that Kubernetes is still a better fit for really big projects that have complex infrastructure. It seems like serverless stuff has a lot of limitations, and also, I think at scale, it works out to be cheaper to use Kubernetes. My issue is that I don’t have any traffic. If you have a large volume of traffic, as long as it’s not super spiky, then I think it works out to be cheaper to just spin up your own machines. You have so much more control when you use Kubernetes or, you know, whatever container orchestration tool you want. Even not using container orchestration, just using your own dedicated infrastructure.
You have much more control over like how many instances you’re going to have spun up, and you can still do auto scaling if you're worried about paying too much for servers.
[0:08:52.4] MN: Right, if you’re starting a pet project and, you know, you’re figuring out whether the user base is going to latch on, it’s good to start maybe on the serverless front and then migrate over to Kubernetes when you find that you have traffic coming in for your application. Is that safe to say?
[0:09:16.7] WJ: That makes sense to me.
[0:09:19.0] MN: I think we mentioned that one of the cons of serverless is that, if the functions are computation heavy, you may end up paying a lot more in the long run, depending on how many users you have or how long these functions need to run. I know that serverless has some limitations where things can’t run longer than 15 seconds, I believe it is. There are many other limitations that one should definitely research, but you don’t have to worry about that if you’re working on your own Kubernetes infrastructure.
[0:09:53.0] DA: Yeah, it’s kind of interesting, because we’re talking about the kind of load that you have, that you’re expecting. Obviously, if you have serverless, then it is pretty optimized for short and sporadic loads – loads that aren’t necessarily even one way or the other. Maybe it is spotty; sometimes it is really heavy and sometimes it is not there at all.
Whereas Kubernetes and traditional container orchestration are kind of better suited to a load that has some baseline level. You’re already going to have something on, and they’ll be controlling it, and you will be able to spin up and spin down as needed, but there is also something to be said for like –
[0:10:47.8] WJ: Does the Serverless Framework support multi-cloud? That, I think, would be really interesting, if you could allocate your traffic to whichever cloud hosting provider has the cheapest Lambda function calls at the time. I don’t know if they are going to do a spot market for Lambda function calls.
[0:11:09.6] DA: Are there bidding wars and stuff like that? I have definitely seen bidding wars, kind of. I have heard of tools that will help you do that for large loads for distributed computing, like machine learning, where you’re like, “Okay, I am only going to run this if you will let me pay this rate,” you know, over time, and otherwise I don’t really care. I will let it run for longer on a less expensive machine, because I don’t want to pay the higher rate to get it done quicker.
[0:11:50.7] WJ: I keep hoping that multi-cloud is going to become a thing, just so that there is more competition and incentive for the cloud hosting providers to lower costs and compete on price.
[0:12:05.3] MN: It is also different tech as well. A quick Google of “Serverless Framework multi-cloud” yields a blog post about the ability to, or the thoughts behind, doing multi-cloud related computations. For example, if you want to do some machine learning related process, you may want to run those pieces on GCP, on the Google platform, as opposed to Amazon, if you feel that Google has a better machine learning offering than Amazon.

But then for other things, you could potentially point to other serverless platforms, say, Azure. I know Azure is another one that people often use for their functions as a service, and maybe calculating which functions will cost less is probably the thing that people are trying to do with this multi-cloud functionality within the Serverless Framework.

It seems fairly new, though. I am not 100% sure whether the Serverless Framework does it, but it appears that there may be some multi-cloud solutions that use GCP, or AWS Lambda, or Microsoft Azure to, I guess, arbitrage which price point is the lowest and then run the functions there.
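The arbitrage idea being circled here can be sketched trivially: given a per-million-call price for each provider’s functions offering, route to the cheapest. The providers and prices below are made up for illustration; real per-call cost also depends on memory and duration, as noted above:

```python
# Toy "multi-cloud arbitrage": pick whichever provider's function calls are
# cheapest right now. Prices are illustrative placeholders, not real quotes.

PRICE_PER_MILLION_CALLS = {
    "aws_lambda": 0.20,
    "gcp_cloud_functions": 0.40,
    "azure_functions": 0.25,
}

def cheapest_provider(prices):
    """Return the provider name with the lowest per-call price."""
    return min(prices, key=prices.get)

print(cheapest_provider(PRICE_PER_MILLION_CALLS))   # → aws_lambda
```

A real spot market would need live price feeds and identical deployments on every provider, which is exactly the hard part the conversation points at.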
[0:13:33.4] WJ: Yeah, I think IBM also has an equivalent of AWS Lambda now? I think they are getting into the space.
[0:13:42.0] MN: Everybody, get in here, got to get in here. More options, more functions, more fuss. More functions as a service.
[0:13:51.6] DA: And N-plus-one options. Multi-cloud means it’s like that old XKCD cartoon. Like, why are there always competing standards? Why can’t there just be one unifying standard? And then there is just one more competing standard.
[0:14:08.2] MN: And there are 17 new standards.
[0:14:10.7] DA: Great. Good. I mean, it’s an interesting challenge that we are talking about, through the lens of just starting something, right? Getting something off the ground and going. Cost is one aspect where Kubernetes might fall down versus serverless. Complexity is another thing Kubernetes has a bit more of, but there are also a lot of other tools out there, like Heroku and other platform-as-a-service hosting options that you can use to get started quickly and relatively cheaply. You can use Terraform, things like that. There are a lot of cool options out there, and if you’re trying to get started quickly, then those can be very useful as well.
[0:15:10.3] WJ: I was thinking, you know, I am comfortable with containers. I don’t really know that much about serverless, so I was like, I think it will be faster for me to get set up with Docker. Then my main concern with serverless was whether or not there would be support for the dependencies that I needed. So that was the tradeoff in my mind. It’s like, “Okay, so serverless is going to be a lot cheaper, but it’s new technology that I need to learn, and I am not sure whether or not it is going to be as flexible.”

With a Docker container, I know I can put anything on there that I could put on a Linux machine, and with serverless, with these Lambdas, you are much more restricted as to what you can put on them, because you have to be able to spin them up on very short notice.
[0:16:04.3] MN: Right, the idea that if you have a ton of, say, Node modules – if your application is built on a lot of node modules, for example – having all of that set up, having all of those third-party libraries set up, could be a hindrance to your application loading and doing the right thing. Whereas if you have Docker, you know exactly what you have. It is already spun up and it is ready to make those calls. It doesn’t have to turn on and activate to run your function and then shut down when it is done.

I think we talked about this before in the previous episode about serverless, where the idea of a cold start is that a function may need to load up before actually running if it has been down for too long, and I don’t think that you will have that problem in terms of speed with a Docker container that is waiting for those API calls to come in.
[0:17:05.3] WJ: Yeah, it seems like the solution for the cold start that the community has come up with is to set up a cron job that just hits your function at least once every five minutes, which seems like cheating.
[0:17:19.3] MN: Yeah, it’s kind of wild.
[0:17:21.1] WJ: Kind of like, how long is Amazon going to put up with that? But I don’t know, I guess it works.
[0:17:25.6] DA: I mean you are paying them by the function call. So how many seconds do you get for free?
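The keep-warm trick William describes – hitting the function every few minutes so its container never goes cold – is usually wired up as a scheduled event on the provider’s side, but the core of it is just a periodic ping. A minimal sketch, assuming a hypothetical function URL; the pinger and sleeper are injectable so the loop can be exercised without a network:

```python
import time
import urllib.request

FUNCTION_URL = "https://example.com/my-function"  # hypothetical endpoint

def ping(url=FUNCTION_URL):
    """Hit the function once; a failed ping is harmless, so swallow errors."""
    try:
        urllib.request.urlopen(url, timeout=10)
        return True
    except OSError:
        return False

def keep_warm(pinger=ping, interval_s=300, iterations=12, sleep=time.sleep):
    """Call the pinger every `interval_s` seconds, `iterations` times."""
    for _ in range(iterations):
        pinger()
        sleep(interval_s)
    return iterations
```

Per Dave’s point, each of these pings is itself a billed request, though at a five-minute cadence that is under 10,000 calls a month, well inside the free tier quoted earlier.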
[0:17:36.5] WJ: Yeah, I think that if it is in-language packages, then you’re good, but if it is a C library that you need to shell out to, then that is more complicated.
[0:17:52.4] MN: Right.
[0:17:53.7] DA: Totally. I think also, maybe you don’t even need to do very much in the way of server-side computation at all. So, you know, maybe you don’t need Kubernetes or serverless or Heroku or whatever. Maybe you just need a static file, and that is always an option too. Just put it in S3, do some JAMstack thing, because those rates can be even cheaper, although comparing these things becomes like a weird calculus problem.

I don’t know what the proper math is to compare the S3 bucket cost and load versus the serverless function call and time cost versus a Kubernetes pod spinning up. It becomes pretty complicated, and I guess you just have to have a sense of what kind of application you’re building and what your historical load is going to be, and maybe play around with it.
[0:19:01.5] WJ: I am curious what the cloud providers would prefer. Are they going to try to incentivize that people switch to serverless by making it so much cheaper for new projects that are just getting started? Is that part of the road map for them?
[0:19:19.3] DA: Oh there’s like the deep state in AWS land. They are pulling the strings.
[0:19:27.7] WJ: We’ll have to interview somebody who works at a cloud platform and find out which is the most profitable service offering for them.
[0:19:36.0] DA: Right, like, NDAs be damned. They could go into witness protection.
[END OF INTERVIEW]
[0:19:41.8] MN: Follow us now on Twitter @radiofreerabbit so we can keep the conversation going. Like what you hear? Give us a five star review and help developers like you find their way into The Rabbit Hole and never miss an episode, subscribe now however you listen to your favorite podcast. On behalf of our producer extraordinaire, William Jeffries and my amazing co-host, Dave Anderson and me, your host, Michael Nunez, thanks for listening to The Rabbit Hole.
Links and Resources: