Kubernetes is a red flag signalling premature optimisation (jeremybrown.tech)
522 points by tenfourty 1 day ago | 532 comments





Doesn't look like the author knows what he is talking about. His point that an early stage startup should not use K8S is fine. But the next advice about not using a different language for frontend and backend is wrong. I think the most appropriate advice is to choose a stack which the founding team is most familiar with. If that means RoR then RoR is fine. If it means PHP then PHP is fine too. Another option is to use a technology which is best suited for the product you are trying to build. For example, if you are building a managed cloud service, then building on top of K8S, Firecracker, or Nomad can be a good choice. But then it means you need to learn the tech being used inside out.

Also he talks about all of this and then gives the example of WhatsApp at the end. WhatsApp chose Erlang for the backend and their frontend was written in Java and Objective-C. They could have chosen Java for the backend to keep the frontend and backend languages the same, but they didn't. They used Erlang because they based their architecture on Ejabberd, which was open source and was built with Erlang. Also, WhatsApp managed all their servers themselves and didn't even move to managed cloud services when they became available. They were self hosting till FB acquired them and moved them to FB data centres later on (Source: http://highscalability.com/blog/2014/2/26/the-whatsapp-archi...).


I don't think their advice about not using it in a startup is correct either. You just need to somewhat know what you're doing.

I know of such a case, where a single engineer could leverage the helm chart open source community, and set up a scalable infrastructure, with prometheus, grafana, worker nodes that can scale independently of web service, a CI/CD pipeline that can spin up complete stacks with TLS automated through nginx and cert-manager, do full integration tests, etc.

I found that to be quite impressive for one person and one year, and it would probably have been completely impossible if it wasn't for k8s.


The thing is, unless using those technologies was somehow core to what the single engineer was trying to do, it might be technically impressive but might not have actually provided value for users.

Users don't really care if you have a really impressive stack with cool technologies if it doesn't offer anything more than a couple of web servers and a DB server.


Right on. Previous devs at a co I joined wanted to play DevOps cowboys. They used Ansible scripts to spin up various AWS services costing the company over 100K/yr.

New lead came in, got rid of that crap by using 3rd party services to spin up infrastructure. Got a load balancer, a few VMs + DB. Reduced the cost down by 85% and greatly simplified the entire stack.

I learned a really valuable lesson without having to make that mistake myself.

I understand why people get excited about tooling. It's cool to learn new things and automate stuff away. I'm prone to that myself and do this on my own server when I get that itch.

Having said that, it's wrong to foist this stuff onto an unsuspecting company where the owners don't know any better about tech; that's why they hire other people to do that for them. Seeing that just left a bad taste in my mouth for overcomplicated setups.

I get that SV is different; that's why tools like K8s are made, and I would jump on those tools in a heartbeat as needed.

But for other smaller businesses, the truth is they just need a boring monolithic load balanced app with a few VMs and a DB, sprinkled with 3rd party services for logging or searching or other stuff not core to the business.


I know this utterly misses the larger point of your comment, but:

> They used Ansible scripts to spin up various AWS services

This seems less about using the "cool/new" tech... rather it's about using the "right" tech. Config management tools like Ansible/Chef/Puppet are very much previous-generation when it comes to cloud infrastructure.

They... can manage cloud infrastructure, but they were created prior to the ubiquity of cloud deployments, and the features are glued on. Not choosing a more modern IaC framework tells me they (those devs) were going to be making sub-optimal implementation decisions regardless.


Yeah, this project was several years old. Take this with a grain of salt, I'm not familiar with timelines in terms of k8s, but I would guess that it had not yet risen to popularity as it has in more recent years.

Yeah but k8s isn't hard at all if you know it, it's actually substantially easier than a couple of web servers and a db server, and provides a whole lot more

Rocket science isn't hard if you know it. Should we all build spaceships to deliver groceries? Good luck finding a few local rocket scientists in a pinch.

You can find plenty of auto mechanics though. Cars are cheaper and ubiquitous. Maybe they can't drive to the moon, but they can get most things done.

Unless your business is flying to the moon, stick to cars and trucks over spaceships.


lol what? k8s isn't rocket science, for a basic web app it's a single yaml file

lol yes it is

So, your argument is that you should use a tool if you know how to use it, regardless of if it's actually needed?

Personally I would take managing "a couple of webservers and a db" any day over k8s.


His point is to use whatever simplifies workflow / reduces operational overhead. To some people, that indeed would be k8s. To you, that may be "managing a couple of webservers and a db". And that is great for you.

yeah I use k8s for basic webapps and it works wonderfully, and is way way easier than anything else, and yes I started developing in the 90s so I've seen it all. There is a bit of overhead in learning k8s, but once you know it, it's dead simple for every use case I've found and takes you way further than anything else.

> I know of such a case, where a single engineer could leverage the helm chart open source community, and set up a scalable infrastructure, with prometheus, grafana, worker nodes that can scale independently of web service, a CI/CD pipeline that can spin up complete stacks with TLS automated through nginx and cert-manager, do full integration tests, etc. I found that to be quite impressive, for one person, one year, and would probably be completely impossible if it wasn't for k8s.

But that's the thing though: they didn't do it alone. You literally pointed out that this wasn't true almost immediately: "leverage the helm chart open source community". They used the work of others to get to the result.

Also, I highly doubt they could debug something if it went wrong.

I simply cannot believe anyone would advocate, or believe, that because a Helm chart makes it simple to create a complex piece of infrastructure it must also be true that maintaining it is simple. It's really not.


I was able to do something similar by myself in only about a month or so (ops and production ready with backups etc) using https://www.digitalocean.com/community/tech_talks/getting-st...

Once you understand the concepts it's not hard to debug. It's fair to acknowledge that kubernetes is complex, but also we should not ignore the real work that has been done in the past few years to make this stuff more accessible to the developers that want to learn it.

Also, saying it's not "alone" in this example I think is not fair. What would you count as "alone"? Setting up the kubernetes cluster from scratch and writing your own helm charts? Using that same logic, I can't say I did anything alone, because someone else designed and built the hardware it's running on. I think it's fair to say that if someone, independent of coaching, regardless of the underlying infrastructure, produced some production grade infrastructure by themselves, they certainly did it alone.


Using the helm charts from the community is arguably still doing it alone. There isn't any back and forth. It's just the right tool for the right job. But, this starts being about language and semantics. It's like saying that following best practices on how to configure nginx isn't doing it alone, because someone else wrote them. Helm charts just very often expose what needs to be configured, and otherwise follow those best practices.

As for debugging. You do have a point that it becomes more difficult. But, this also holds true for any of the alternatives discussed here (lambdas, terraform). I'd argue that when it all comes down to it, the fact that you can spin up the entire infrastructure locally on something like minikube makes it many times easier to debug than other cloud-provider-only solutions.


> They used the work of others to get to the result.

Should everyone go back to assembly on bare metal, then?


Lol you think you get computer. Use hands to mine lithium, go.

One would also need to independently discover much of math and physics in order to say they did it themselves according to that definition.

I think you mean silicon though.


I agree with that. Setting up k8s on bare-metal took me 2 days, and we needed it to deploy elastic and some other helm charts as quickly as possible without losing our minds maintaining nodes with some clunky shell scripts.

Also it immediately bought us an easy approach to build gitlab ci/cd pipelines + different environments (dev, staging, production) on the same cluster. Took me a week to set everything up completely, and it has saved our team, which was rapidly developing features, a lot of time and headache since then. But the point is, I knew how to do it, focus on the essentials, and deliver quick reasonable results with large leverage down the road.


> deploy elastic and some other helm charts as quickly as possible

Bad culture alert! No one needs Elastic "as quickly as possible" unless their business, or the business they work for, is being very poorly run.

I would also argue that you might have got it running quickly, but how are you patching it? Maintaining it? Securing it? Backing it up? Have you got a full D/R plan in place? Can you bring it back to life if I delete everything within 3-6 hours? Doubt it.

> maintaining nodes with some clunky shell scripts.

Puppet Bolt, Ansible, Chef, ...

There are so many tools that are easy to understand that solve this issue.


That’s all solved for you, helm upgrade in ci/cd and bump of versions has been straight forward, if not snapshot rollback via Longhorn, also for DR. Accidentally deleted data => get the last snapshot, 5 minutes it’s back (except that there is of course CI/CD in place for new code + no write permissions for devs on the cluster and „sudden“ data deletion somewhat rare).

The Elastic use case is crawling a crazy amount of data and making it searchable, aggregatable, and historically available. I don't know any solution other than Elastic that has reasonable response times and easy-to-use access (plus we can add some application logging and APM).

> Puppet Bolt, Ansible, Chef, ...

Helm chart values.yaml and you’re all set, security + easy version bump included.
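
As an illustrative sketch (keys differ per chart, and the names below aren't taken from any specific chart), overriding a community chart usually comes down to a handful of lines in values.yaml, and an upgrade is then just bumping the tag and re-running the chart's upgrade:

    # values.yaml -- illustrative only; every chart exposes its own keys
    image:
      repository: docker.elastic.co/elasticsearch/elasticsearch
      tag: "8.5.1"          # bump this to upgrade
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
    podSecurityContext:
      runAsNonRoot: true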


I believe elastic is available as a service from AWS and elastic.co; if you need it fast, use that. If you need it long term it may be worthwhile to deploy your own for cost and flexibility purposes.

Managing an Elastic cluster is on par with managing a Kubernetes cluster. It's not easy.

Setting it up is easy, but not operations.

I've been managing multiple clusters on AWS and on Azure.

I would take a managed EKS/OS in a heartbeat if I was a startup.


> with prometheus, grafana, worker nodes that can scale independently of web service, a CI/CD pipeline that can spin up complete stacks with TLS automated through nginx and cert-manager, do full integration tests, etc.

> I found that to be quite impressive, for one person, one year, and would probably be completely impossible if it wasn't for k8s.

I've always found this interesting about web based development. I have no idea what Prometheus, Grafana, etc. do. I've never used k8s.

And yet, as a solo dev, I've written auto-scaling architecture using, for example, the AWS EC2 APIs that let you launch, configure, and shut down instances. I don't know what else you need.

Really the only advantage I see to the morass of services is that you get a common language so other devs can have a slightly easier time of picking up where someone left off. As long as they know all the latest bs.


> Prometheus, grafana

In short. Prometheus is a worker that knows about all your other services. Each service of interest can expose an endpoint that prometheus scrapes periodically. So the services just say what the current state is, and prometheus, since it keeps asking, knows and stores what happens over time. Grafana is a web service that uses prometheus as a data source and can visualize it very nicely.

Prometheus also comes with an Alert Manager, where you can set up rules for when to trigger an alert, that can end up as an email or slack integration.

They are all very useful, and give much needed insight into how things are going.
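
If the prometheus-operator / kube-prometheus-stack chart is what's deployed, pointing Prometheus at a new service is roughly a ServiceMonitor like this (the names, labels and port are illustrative):

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: api-metrics
      labels:
        release: prometheus        # must match the operator's serviceMonitorSelector
    spec:
      selector:
        matchLabels:
          app: api                 # matches the labels on the Service to scrape
      endpoints:
      - port: metrics              # named port on the Service exposing /metrics
        interval: 30s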


I don't know the AWS EC2 APIs and for sure I'm not capable of writing auto-scaling architecture. This is the reason why I default to K8s. I have used it easily and successfully by myself for the last 4 years. It just keeps on running and hasn't given me any problems.

> And yet, as a solo dev, I've written auto-scaling architecture using, for example, the AWS ec2 apis that let you launch configure and shutdown instances. I don't know what else you need.

This is fine, if you’re on AWS and can use AWS APIs. If you’re not (especially if you’re on bare metal), something like K8s can be nice.


If you're on bare metal there is a case for K8s. How many people are on bare metal?

That isn't the point. If he had a whole year, was there a tangibly better use of his time to get a product to market faster? What might the business implications be for doing or not doing so?

It seems many are focused on the time estimate. That was in creating the overall solution. About two months was to set up the infrastructure mentioned.

These often get developed side by side. GitLab, unit tests, api-server, nginx, cert-manager, deployments, integration tests, prometheus, metrics in services, grafana, alert-manager, log consolidation, work services and scaling, etc.

Just spinning up a cluster, nodepool, nginx, cert-manager w/let's encrypt cluster issuer, prometheus, grafana, can easily be done in a day. So, time estimates kinda depend entirely on what you mean by it.

Spinning up prometheus and grafana with automatic service discovery: one day. Making good metrics, visualizations, and alerts: anything from a week to a month or two. So, take the time estimates with a grain of salt.
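
For reference, the cert-manager piece of that is essentially one ClusterIssuer; after it exists, annotated ingresses get certificates issued and renewed automatically (the name and email below are placeholders):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: ops@example.com                 # placeholder
        privateKeySecretRef:
          name: letsencrypt-prod-account-key   # secret cert-manager creates for the ACME account
        solvers:
        - http01:
            ingress:
              class: nginx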


I think that people are focusing on the time estimate just to estimate a cost, and asking what the return was. The material return.

> and set up a scalable infrastructure

As a rule, for anything in a startup, adding that "scalable" adjective is a waste.

Of course, exceptions exist, but it's one of those problems that if you have them, you probably know it, and if you have them and don't know it, you won't succeed anyway. So any useful advice has the form of "don't".


In a typical startup you are going to be strapped for cash and time. You need to optimize and compromise. In general, if you don't know how to do something, then figure out if you need to learn it, or whether you can get by with a "good enough" solution, because there will be a queue of other things you need to do that might be more business-critical.

So if you already know Kubernetes, then great, use it. Leverage your existing skills. If you don't, then just use Heroku or fly.io or whatever, or go with AWS if that's your competence. Maybe revisit in a year or two and maybe then you'll have funding to hire a devops person or time to spend a week or two learning to do it yourself. Right now you want to get your SAAS MVP in front of customers and focus on the inevitable product churn of building something people want to pay for. The same advice goes for anything else in your stack. Do you know it well enough? Do you need it right now? Or is there a "good enough" alternative you can use instead?


How long did it take him to do this setup, a year you say, and that is impressive? I am not trying to be cute here, my question comes from a genuine place of curiosity. I'd love to learn to spin up a system like that, but from the tech/sales talks I see I am made to believe this can be done in a day. Expectation management is important: if people say ops is just a solved problem then I expect this to take very little time and to be easy to learn. Maybe I am learning the wrong thing here, and should learn Helm or something more high level.

It took a year, but that was somewhat on the side of also building an OpenAPI based web service and the gRPC based workers. So, it wasn't just the infrastructure stuff. If I were to estimate how much time for just the infrastructure and devops tooling, then two months. It's been up and running with less than 15 minutes of downtime over the course of two years.

I do consider this impressive. And, to be clear, I wouldn't say this is because of a "super-developer". In fact, he had no prior k8s experience. But rather that there are thousands upon thousands of infrastructure hours devoted to the helm charts, often maintained by the people who develop the services themselves. It is almost mind boggling how much you get for almost free. Usually with very good and sensible default configurations.

In my previous workplace, we had a team of 5 good engineers purely devoted to infrastructure, and I honestly believe that all five would have been able to spend their time doing much more valuable things, if k8s had existed.

As for whether or not such devops solutions could be done in a day. Hm. I don't know. These things should be tailored to the problem. If you've done all of this a few times, then maybe you can adjust a bunch of charts that you are already familiar with and do what took a couple of months and impressed me in a couple of weeks. There's a lot more than just "helm install, done" that goes into architecting a scalable solution: implementing monitoring, alerting and logging, load testing stuff, etc.


Sounds like a waste of months that could have gone into building product by choosing simpler operational tech

That seems like a very negative take in my opinion. This 'simpler operational tech' would still need to be able to scale, correct? If you consider a good and easy way of deploying 10-15 services, all of which can scale, and all of it defined in rather neat code, to be anything but "simple operational tech", then I believe you are confusing "solving a complex problem" with "simplifying the requirements of a complex problem". The latter of which has been stripped of many important features. K8S isn't anything magic, but it certainly isn't a bad tool to use. At least not in my experience, though I've heard of horror stories.

That does remind me that when that employee started, the existing "simple operational tech" was in fact to SSH into a VM and kill the process, git pull the latest changes, and start the service.

The only way you can solve the actual problem (not a simplified one) would in my opinion either be k8s or terraform of some kind. The latter would mostly define the resources in the cloud provider system, most of which would map to k8s resources anyways. So, I honestly just consider k8s to better solve what terraform was made for.

I'm sure the "simpler operational tech" meets few requirements for short disaster recovery. Unless you have infrastructure as code, I don't think that is possible.


Yeah, to be honest, I run a k8s cluster now for my saas. But it's about 4 times more expensive than my previous company, which I ran on a VPS.

And scaling is the same: that VPS I could just scale the same way, by running a resize in my hosting company's panel. (I don't use autoscaling atm.)

Only if I hit about 100x the numbers would I get the advantage of k8s, but even then I could just split up customers into different VPSes.

CI/CD can be done well or badly with both.

And in practice k8s is a lot less stable. Maybe because I'm less experienced with k8s, but also because I think it's more complex.

To be honest, k8s is one of those dev tools that has to reinvent every concept again, so it has its own jargon. And then there are these ever-changing tools on top of it. It reminds me of JS a few years ago.


>This 'simpler operational tech' would still need to be able to scale, correct?

Only if "scaling" is the problem that your startup is solving.


Any startup that knows what their product is and is done with PoCs should be able to deal with the consequences of succeeding without failing. Scaling is one of those things that should be in place before you need it. In our case, scaling was a main concern.

> In our case, scaling was a main concern.

and ... you might be justified in that concern. However... after having been in the web space for 25+ years, it's surprising to me how many people have this as a primary concern ("we gotta scale!") while simultaneously never coming close to having this concern be justified.

I'm not saying it should be an either/or situation, but... I've lost count of how many "can it scale?" discussions I've had where "is it tested?" and "does it work?" almost never cross anyone's lips. One might say "it's assumed it's tested" or "that's a baseline requirement" but there's rarely verification of the tests, nor any effort put in to maintaining the tests as the system evolves.

EDIT: so... when I hear/read "scaling is a main concern" my spidey-sense tingles a bit. It may not be wrong, but it's often not the right questions to be focused on during many of the conversations I have.


> I'm not saying it should be an either/or situation, but... I've lost count of how many "can it scale?" discussions I've had where "is it tested?" and "does it work?" almost never cross anyone's lips.

Also, discussions about rewrites to scale up service capacity, but nobody has actually load tested the current solution to know what it can do.


Just keep it simple, and if you take off, scale vertically while you work on a scalable solution. Since most businesses fail, premature optimisation just means you're wasting time that could have gone on adding more features or performing more tests.

It's a trap many of us fall into - I've done it myself. But next time I'll chuck money at the problem, using whatever services I can buy to get to market as fast as possible to test the idea. Only when it's proven will I go back and rebuild a better product. I'll either run a monolith or 1-2 services on VPSs, or something like Google cloud run or the AWS equivalent.

Scaling something no one wants is pointless.


> good and easier way to deploying 10-15 services

Why are so many micro-services needed? Could the software be deployed in a more concise manner?

Not getting into the whole monolith-vs-services arguments. In both cases, complexity of deployment is part of the cost of each option.


I should perhaps have clarified, but the 10-15 are not self maintained services. You need nginx for routing and ingress; you set up cert-manager and the other ingress endpoints are automatically configured to have TLS; you deploy prometheus, which comes with node-exporter and alert-manager; you deploy grafana.

So far, we're up at 6 services, yet still at almost zero developer overhead cost. Then add the SaaS stack for each environment (api, worker, redis) and you're up at 15.
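
To give an idea of the near-zero overhead per endpoint: with nginx and cert-manager in place, exposing a service with TLS is roughly this much YAML (the hostname, service name and issuer are illustrative, and it assumes a ClusterIssuer by that name already exists):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: api
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # assumes an issuer by this name
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - api.example.com
        secretName: api-tls          # cert-manager creates and renews this certificate secret
      rules:
      - host: api.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80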


Sometimes it's faster to implement certain features in another language and deploy them as a microservice instead of fighting your primary language/framework to do it. Deploying microservices in k8s is as easy as writing a single yaml file.
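
As a rough sketch of what that "single yaml file" amounts to (names and image are made up; a real one would usually also add probes, resource limits and an ingress):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: thumbnailer                  # hypothetical microservice
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: thumbnailer
      template:
        metadata:
          labels:
            app: thumbnailer
        spec:
          containers:
          - name: thumbnailer
            image: registry.example.com/thumbnailer:1.0.0   # placeholder image
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: thumbnailer
    spec:
      selector:
        app: thumbnailer
      ports:
      - port: 80
        targetPort: 8080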

In a lot of cases it's pattern abuse. I'm dealing with this all the time. People like to split things that can work perfectly as one whole, just for the sake of splitting it.

>That seems like a very negative take in my opinion. This 'simpler operational tech' would still need to be able to scale, correct?

Premature optimization is a top problem in startup engineering. You have no idea what your startup will scale to.

If you have 1,000 users today and a 5 year goal of 2,000,000 users, then spending a year building infrastructure that can scale to 100,000,000 is an atrociously terrible idea. A good principal can set up a working git hook, CircleCI integration, etc., capable of automated integration testing and rather close to CI/CD, in about a weekend. Like, you can go from an empty repo to serving a web app as a startup in a matter of days. A whole year is just wasteful insanity for a startup.

The reality for start-ups running on investor money with very specific plans full of OKRs and sales targets is very different: you need to be building product as fast as possible and not giving any fuck about scale. Your business may pivot 5 times before you get to a million users. Your product may be completely different and green-fielded two times before you hit a million users.

I can't imagine any investor being ok with wasting a quarter of a million+ and a year+ on a principal engineer derping around with k8s while the product stagnated and sales had nothing to drive business -- about as useful as burning money in a pit.

You hire that person in the scale-up phase during like the third greenfield to take you from the poorly-performing 2,000,000 user 'grew-out-of-it' stack to that 100,000,000+ stack, and at that point, you are probably hiring a talented devops team and they do it MUCH faster than a year


If you have a website with 1000 users today and product is going to be re-designed 5 times, it's probably best just to use sqlite and host on a single smallish machine. Not all problems are like this however.

for example lambda (not microservices, running mini monoliths per lambda function)

yes by simple I mean covering high availability requirements, continuous deployment, good DORA measures - not simple as in half-baked non-functional operations (such as manually sshing to a server to deploy)


Ah, I see. Well, lambdas are also a nice tool to have, but they certainly do not fit all applications (same as with k8s). I'd also point out that lambdas replace only a small subset of k8s's capabilities, and of the types of systems you can put together. You would end up needing to set up the rest either through a terrible AWS UI or terraform. Neither of which I find to simplify things all that much, but perhaps this is a matter of taste.

In our case, the workers were both quite heavy in size (around 1 GB), and heavy in number crunching. For this reason alone (and there are plenty more), lambdas would be a poor fit. If you start hacking them to keep them alive because of long cold starts, you would lose me at the simple part.


>If you start hacking them to keep them alive because of long cold starts,

this is a few years out of date of platform capability, just fyi


How would you possibly know one way or the other?

Having very recently done this (almost, another dev had half time on it) solo, It's not _too_ terrible if you go with a hosted offering. Took about a month/month and a half to really get set up and has been running without much of a blip for about 5 months now. Didn't include things like dynamic/elastic scaling, but did include CD, persistent volumes, and a whole slew of terraform to get the rest of AWS set up (VPCs, RDS, etc). I'd say that it was fairly easy because I tinkered with things in my spare time, so I had a good base to work off of when reading docs and setting things up, so YMMV. My super hot take, if you go hosted and you ignore a ton of the marketing speak on OSS geared towards k8s, you'll probably be a-ok. K8s IME is as complex as you make it. If you layer things in gradually but be very conservative with what you pull in, it'll be fairly straightforward.

My other hot take is to not use helm but rather something like jsonnet or even cue to generate your yaml. My preference is jsonnet because you can very easily make a nice OO interface for the yaml schemas with it. Helm's approach to templating makes for a bit of a mess to try and read, and the values.yml files _really_ leak the details.


With 1YoE I did most of that in about 3 months. Had a deadline of 6 months to get something functional to demonstrate the proposed new direction of the company, and I did just that. If I were to do it today I could probably rush it to a week, but that would mean no progress on the backend development that I was doing in parallel. A day is probably doable with more on-rails/ batteries included approaches.

Not because I'm amazing, but there's a frankly ridiculous amount of information out there, and good chunks of it are high quality too. I think I started the job early January, and by April I had CI/CD, K8s for backend/frontend/DBs, Nginx (server and k8s cluster), auto-renewing certs, Sentry monitoring, Slack alerts for ops issues, K8s node rollback on failures, etc.

The best way to learn, is to do. Cliche, but that's what it really comes down to. There's a fair few new concepts to grasp, and you probably have picked some of these up almost by osmosis. It sounds more overwhelming than it is, truly.


The problem is never spinning things up, it's in maintenance and ops. K8s brings tons of complexity. I wouldn't use it without thinking very carefully for anything other than a very complex startup while you're finding product-market fit.

You can get a majority of those things "running" in few days. If you don't want it to fall over every other day, then you need to have a ton of ancillaries which will take at least several months to set up, not to mention taking care of securing it.

Use a managed k8s cluster (EKS, AKS or GKE). Creating a production ready k8s cluster on VMs or bare metal can be time consuming. Yes, you can do lambda, serverless, etc., but k8s gives you the same thing and is generally cheaper.

It's actually pretty easy to do these days, even on bare metal servers. My go to setup for a small bare metal k8s cluster:

- initial node setup: networking configuration (private and public network), sshd setup (disallow password login), setting up docker, prepping an NFS share accessible on every node via the private network

- install RKE and deploy the cluster, deploy nginx ingress controller

- (optional) install rancher to get the rest of the goodies (grafana, istio, etc). These eat a lot of resources though, so I usually don't do this for small clusters

Done in a single afternoon.


And yet, to me it sounds like NIH since it's a pretty standard stack; couldn't they just get something like google app engine and get all of that from day one? Because did any of those things mentioned result in a more successful company?

I'd argue that using helm charts is the exact opposite of NIH. The things that take time are not the stack themselves, but the software and solutions. K8s just makes the stack defined in code, and written and managed by dedicated people (helm maintainers) as opposed to a bit "all over the place" and otherwise in-house, directly using cloud provider lock-in resources.

I'm sure there are plenty of use cases where that makes sense, and is a better approach. But, I disagree that k8s suggests a NIH-mindset.


Most startups get basic security for networking and compute wrong, K8s just adds even more things to mess up. Odds are even if you use an out of the box solution, unless you have prior experience you will get it wrong.

I will always recommend using whatever container / function as a service e.g. ECS, GCF, Lambda any day over K8s for a startup. With these services its back to more similar models of security such as networking rules, dependency scanning, authorization and access...


So question then - is it possible to found a tech startup without paying rent to a FAANG? Before I get the answer that anything is possible, I should say is it feasible or advisable to start a company without paying rent to the big guys?

Who would you prefer to pay rent to?

The reality is unless you’re some rich dude who can borrow dad’s datacenter (And that’s cool if so), you’re either going to be renting colo space, virtual servers, etc.

It’s always a challenge in business to avoid the trap of spending dollars to save pennies.

IMO, you’re better off working in AWS/GCP/Azure and engineering around the strengths of those platforms. That’s all about team and engineering discipline. I’m not in the startup world, but I’ve seen people lift and shift on-prem architecture and business process to cloud and set money on fire. Likewise, I’ve seen systems that reduced 5 year TCO by 80% by building to the platform strengths.


> Who would you prefer to pay rent to?

I'm aware that no man is an island in some sense, but I'm not comfortable with locking myself into one of 3 companies who need to increase their revenue by double digits year over year. And as you say, a lift and shift is basically setting money on fire. Currently I run sort of a hybrid approach with a small IaaS provider and a colo. It seems to work well for us both technically and financially though that seems to go contrary to what is considered conventional wisdom these days.


That’s awesome. The most important thing is to understand why you’re making the decisions that you do.

Where I work, we can deliver most services cheaper on-prem due to our relative scale and cloud margins. But… we're finding that vendors in the hardware space struggle to meet their SLAs. (Look at HPE — they literally sold off their field services teams and only have 1-2 engineers covering huge geographic regions.) So increasingly critical workloads make the most sense in the cloud.


re: advisable

If and only if your business model depends on it. A startup's job is mostly to find product market fit; if being decoupled from AWS isn't part of your market, you are spending money on a non-problem.


There is nothing stopping you from hosting your own OpenStack, managed k8s, and all that, on your own hardware. You would need a good reason to not let someone else deal with all of this though.

For a small enough company you could even just use k3s + offsite backups. Once you grow large enough you can set up machines in 2-4 locations across the land mass where your users exist. If you have enough of them, a hardware fault in one isn't an emergency, and you'd be able to fly out to fix things if needed.

Realistically, on all flash, you are very unlikely to need to maintain anything on a server for a few years after deployment.


That is probably a good idea for many startups. However, once you get into the world of audits and compliance certifications, things become a lot harder. But then again, at this point, I suppose it is easy enough to transition to some managed hardware.

If your priorities are 'which companies do my values align with among generally very high integrity companies to begin with', then you might want to reconsider.

Google is not evil. They're just big, and define some practices which we might think should change in the future.

Once you have the thing up and running, you can start to think about hosting your own.

Also, you don't need to use fancy services because most startups can run just fine on a single instance of whatever, meaning, there are a lot of cloud providers out there.


Right.. But scale?

I've seen places hire a dev that wrote all the ops stuff and they scaled awesomely. I mean, if they had purchased 100 servers full time on Amazon, they would have spent a fraction of the cost to scale, but they could scale. In 5 years I think they've never once had to reach even near the 100 servers.

At the same time. I can scale heroku to 500 servers, and still be under the cost of one ops person. I can make that change and leave it there. I can do that all in under 30 seconds. Oh. And CICD is built in as a github hook. Even with blue-green deploys.

I think his point was most start-ups don't need to scale more than a site like heroku can offer. If you need more than 500 servers running full time then it's time to start looking to "scale"


> At the same time. I can scale heroku to 500 servers, and still be under the cost of one ops person. I can make that change and leave it there. I can do that all in under 30 seconds. Oh. And CICD is built in as a github hook. Even with blue-green deploys.

And then Heroku shuts down.

If you're building something that needs to scale up rapidly if it succeeds, k8s is worth thinking about. Either you don't succeed, in which case it doesn't matter what your stack was, or you do, in which case you'll be glad that you can scale up easily, you'll be glad you are using a common platform which is easy to hire competent people in, and, if you were smart about how you used k8s, you'll be glad that you can relatively easily move between clouds or move to bare metal.


I think the set of cases where "we need to scale up rapidly if it succeeds" and "Kubernetes solves all of our scaling needs and we aren't going to have problems with other components" is almost empty. On the other hand, there are quite a lot of startups that fail because they put too much focus on the infrastructure and Kubernetes and the future and too little on the actual product for the users. Which is the point of the article, I think. Ultimately what matters is whether you sell your product or not.

> I think the set of cases where "we need to scale up rapidly if it succeeds" and "Kubernetes solves all of our scaling needs and we aren't going to have problems with other components" is almost empty.

I agree, but so what? K8s isn't magic, it won't make all your problems go away, but if you have people who are genuinely skilled with it, it solves a lot of problems and generally makes scaling (especially if you need to move between clouds or move onto bare metal) much smoother. Of course you'll still have other problems to solve.

Given that most startups never need to scale up much, it's not surprising that k8s is mostly used where it's not needed. But people usually prefer not to plan for failure, so it's also not surprising that people keep using it.


I mean, you still have to invest time on putting k8s to work, get people skilled with it, maintain and debug the problems... If Kubernetes didn't cost anything to deploy I'd agree that using it is the better idea, but it costs time and people, and those things might be better invested in features that matter to the users.

It depends. There are many things that carry a cost early but pay for themselves many times over later. Whether that will be the case for your startup depends whether you end up needing to scale quickly or not.

It's also worth considering that appropriate use of k8s can quite likely save you time and money early on as well. It standardises things, making it very easy for new ops people to onboard, and you might otherwise end up spending time reinventing half-baked solutions to orchestration problems anyway.


> It depends. There are many things that carry a cost early but pay for themselves many times over later. Whether that will be the case for your startup depends whether you end up needing to scale quickly or not.

Well, precisely what I said is that 99.9% of startups won't find themselves in a situation where they need to scale quickly and the only scale problems they find can be solved with Kubernetes.

> It's also worth considering that appropriate use of k8s can quite likely save you time and money early on as well. It standardises things, making it very easy for new ops people to onboard, and you might otherwise end up spending time reinventing half-baked solutions to orchestration problems anyway.

The point is that you might not even need orchestration from the start. Instead of thinking how to solve an imagined scenario where you don't even know the constraints, go simple and iterate from that when you need it with the actual requirements in hand. And also, "make it easier for new ops people to onboard" doesn't matter if you don't have a viable product to support new hires.


You seem to be describing very early stage companies, and if so I agree, host it on your laptop if you need to, it makes zero difference. But it's not binary with Netflix on one side and early stage on the other.

There are a lot of companies in the middle, and following dogma like "you don't need k8s" leads them to reinvent the wheel, usually badly, and consequently waste enormous amounts of time and money as they grow.

Knowing when is the right time to think about architecture is a skill; dogmatic "never do it" or "always do it" helps nobody.


What about CD of similar but not identical collections of services to metal? No scaling problem, other than that the number of bare metal systems is growing, and potentially the variety of service collections. For instance, would you recommend k8s to Tesla for the CD of software to their cars?

Meanwhile, random_pop_non-tech_website exploding in traffic wasn't set up to scale despite years of actively seeking said popularity through virtually any means and spending top dollar on hosting, and it slows down to a crawl.

"Why no k8s?", you ask, only to be met with incredulity: "We don't have those skills", says the profiteering web agency. Sure, k8s is hard… Not. Nevermind that it's pretty much the only important part of your job as of 2022.


That’s clearly not a startup!

Obviously not, I was just pointing out that infra like k8s even under-the-hood for intermediaries (like web agencies) is still not always the norm given the real-world failures. There's this intermediary world between startups and giant corporations, you know. ;-)

>infra like k8s even under-the-hood for intermediaries (like web agencies) is still not always the norm

That's because 'the norm' for web agencies is a site that does basically zero traffic. If a company hires a 'web agency' that's by definition because the company's business model does not revolve around either a web property or app.

Whether that's a gas station company or a charity or whatever, the website is not key to their business success and won't be used by most customers apart from incidentally.

With that in mind most agencies know only how to implement a CMS and simple deployment perhaps using Cloudflare or a similar automated traffic handling system. They don't know anything about actual infrastructure that's capable of handling traffic, and why would they?

A lot of agencies are 100% nontechnical (i.e. purely designers) and use a MSP to configure their desktop environment and backups and a hosting agency to manage their deployed sites.


I very much agree with you. I must have been unnecessarily critical in my initial comment, I did not mean it as a rant, more like an observation about where-we're-at towards what seems an inevitable conclusion to me. Sorry that came out wrong, clearly I got carried away.

In asking if "Kubernetes is a red flag signalling premature optimisation", you correctly explain why we're yet on the "yes" side for the typical web agency category.

[Although FWIW I was hinting at a non-trivial category who should know better than not to setup a scale-ready infra for some potentially explosive clients; which is what we do in the entertainment industry for instance, by pooling resources (strong hint that k8s fits): we may not know which site will eventually be a massive hit, but we know x% of them will be, because we assess from the global demand side which is very predictable YoY. It's pretty much the same thing for all few-hits-but-big-hits industries (adjust for ad hoc cycles), and yes gov websites are typically part of those (you never know when a big head shares some domain that's going to get 1000x more hits over the next few days/weeks), it's unthinkable they're not designed to scale properly. Anyway, I'm ranting now ^^; ]

My unspoken contention was that eventually, we move to a world where k8s-like infra is the de facto norm for 99% of infrastructure out there, and on that road we move to the "no" side of the initial question for e.g. web agencies (meaning, we've moved one notch comparable to the move from old-school SysAdmin to DevOps maybe, you know those 10 years circa 2007-2018 or so).

[Sorry for a too-terse initial comment, I try not to be needlessly verbose on HN.]


>My unspoken contention was that eventually, we move to a world where k8s-like infra is the de facto norm for 99% of infrastructure out there, and on that road we move to the "no" side of the initial question for e.g. web agencies (meaning, we've moved one notch comparable to the move from old-school SysAdmin to DevOps maybe, you know those 10 years circa 2007-2018 or so).

This is very very hard to parse BTW. I don't want to reply to what you've written because I can't determine for sure what it is that you're saying.


Sorry, my bad. I'm tired, I shouldn't post.

Essentially I mean: scalable infra may be premature optimization today in a lot of cases, but eventually it becomes the norm for pretty much all systems.

You could similarly parse the early signs of a "devops" paradigm in the mid-2000's. I sure did see the inception of the paradigm we eventually reached by 2018 or so. Most of it would have been premature optimization back then, but ten-ish years later the landscape has changed such that a devops culture fits in many (most?) organizations. Devops being just one example of such historical shifts.

I anticipate the general k8s-like paradigm (generic abstractions on the dev side, a full 'DSL' so to speak, scalable on the ops side) will be a fit for many (most?) organizations by 2030 or so.

I hope that makes sense.


> Either you don't succeed, in which case it doesn't matter what your stack was, or you do, in which case you'll be glad that you can scale up easily

This take brushes right past the causes of success and failure. Early stage success depends on relentless focus on the right things. There will be 1000 things you could do for every 1 that you should do. Early on this is going to tend to be product-market fit stuff. If things are going very well then scalability could become a concern, but it would be a huge red flag for me as an investor if an early stage company was focusing on multi-cloud.


I certainly wouldn't recommend that anyone "focus on multi-cloud" in an early-stage company (unless of course multi-cloud is a crucial part of their product in some way).

Kubernetes is basically an industry standard at this point. It's easy to hire ops people competent in it, and if you do hire competent people, it will save you time and money even while you are small. As an investor "we use this product for orchestration rather than trying to roll our own solutions to the same problems, so that we can focus on $PRODUCT rather than reinventing half-baked solutions to mundane ops problems" should be music to your ears.


I agree with all of that. That said, I don't think competence is a binary proposition, and if you hire people who have only worked at scale they will be calibrated very differently to the question of what is table stakes. One of the critical components of competence for early stage tech leadership is a keen sense of overhead and what is good enough to ratchet up to the next milestone.

As many problems as containerization solves, it's not without significant overhead. Personally I'm not convinced the value is there unless you have multiple services which might not be the case for a long time. You can get huge mileage out of RDS + ELB/EC2 using a thinner tooling stack like Terraform + Ansible.


The overhead of containerisation is mostly in the learning curve for teams that are not already familiar with it (and the consequent risk of a poor implementation). A well designed build pipeline and deployment is at least as efficient to work with as your Terraform+Ansible.

If you have such a team, it can of course make sense to delay or avoid containerisation if you don't see obvious major technical benefits.

But those teams will get rarer as time goes on, and since we're talking about startups, honestly it would be questionable to build a new ops team from people with no containers knowledge in 2022.


Success is rarely so rapid that you can't throw money at a problem temporarily and build something more robust.

No one is advocating for a single server running in your closet, but a large and well funded PaaS can handle any realistic amount of growth at least temporarily, and something like Heroku is big enough (and more importantly, owned by a company big enough) that shutting down without notice is not a possibility worth considering.


Almost every k8s project I've looked at in the last few years is database bound. k8s is not really going to solve their scaling needs. They needed to plan more up front about what their application needed to look like in order to avoid that.

Yes, if your application looks like a web application that is cache friendly, k8s can really take you a long way.


In case it's not clear, nothing in my comment suggests that k8s will magically solve all your problems. It just provides abstractions that make growth (in size and complexity) of systems easier to manage, and helps to avoid lock-in to a single cloud vendor. The wider point is that thinking about architecture early will make scaling easier, and for most companies, k8s is likely to end up being a part of that.

The "web application" / cache-friendly part of your comment doesn't make much sense to me; k8s is pretty well agnostic to those kinds of details. You can orchestrate a database-bound system just as well as you can anything else, of course.


> if you were smart about how you used k8s, you'll be glad that you can relatively easily move between clouds or move to bare metal.

I'd argue you should definitely consider multi-cloud strategy from the get-go indeed in 2022. Something like Terraform helps statically setting k8s clusters on most clouds. Especially for startups, it's better to default to vanilla stuff and only complicate on a need-to basis.


Yes, completely agreed. Multi-cloud is really not that difficult nowadays, and it puts you in a better negotiating position (when you end up spending enough to be able to negotiate), as well as giving you more location flexibility and the ability to pick and choose the best services from each cloud.

Oh yes, negotiation is a strong argument in that context. One that makes or breaks a CTO's mission, me thinks, if that company expects a lean path to ROI.

A multi-cloud paradigm is also a great way to teach you about your application and about those clouds themselves. A good reminder that "implementation is where it's at", and "the devil is in the details".


The fact that they purchased 100 nodes has nothing to do with k8s but with their incompetence. You can run it on one machine. Also you can set up auto scaling easily based on whichever parameters.
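
For example, CPU-based autoscaling of an existing Deployment is a few lines of stock YAML (the target name and threshold are illustrative; it does assume metrics-server is installed):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: webapp
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: webapp                 # the Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out above ~70% average CPU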

Basically none of that needs, or is helped by, kube

Agree the author is wrong on that specific point, though thankfully the bulk of the article content deals with the headline, and is mostly fine wrt k8s.

Rather than the author "not knowing" what they're talking about, I suspect they're taking narrow experience and generalising it to the entire industry. Their background is selling k8s as a solution to small/medium enterprises: it strikes me that there may be a strong correlation between startups interested in that offering and those deploying failed overengineered multilang micro-architectures. Suspect the author has seen their fair share of bad multilang stacks and not a lot of counter examples.


The whole advice of using the same language is especially silly - iOS is stuck with Swift, and the web is stuck with JS, and maybe you need an application that scales using actors across multiple machines with Golang or Java, or maybe you need to plug into Windows tightly and need C#.

Kubernetes is not 'harder' if all you need is to host a webapp. Where it falls on the hardness spectrum depends on what you are trying to do, and what is the alternative. I am very fluent with Kubernetes but have no skills in managing traditional virtual machines.


> The whole advice of using the same language is especially silly - iOS is stuck with Swift, and the web is stuck with JS, and maybe you need an application that scales using actors across multiple machines with Golang or Java, or maybe you need to plug into Windows tightly and need C#

And you're also forgetting Android and macOS and Linux.

That's why cross-platform frameworks like Electron and React Native are so popular. The time wasted in going native for every single platform is just infeasible for most non-huge companies.


What is a huge company?

Here is an example of a team that is doing great work in mobile, frontend and backend:

3 people doing native iOS, 2 people doing native Android, 3 backend engineers, 1 frontend and 1 QA.

Two engineering managers/team leaders: one for mobile and one for web.

Of course this is one single product offering native mobile apps and some limited web app functionalities.

The apps are great, smooth, nice UX, works fast, offers native experience.

Is this a huge company? I don't think so.


But you could also have 2 people working on React Native and have 1 person each for getting it to play nice with iOS/Android, and eliminate the need for an extra engineer.

And end up with a subpar product because of your decision to use terrible tech that can't hold 60 FPS while scrolling a list

Well, if React native is anything like the many react websites, then this isn't too far off actually. "modern" websites can already send your CPU puffing, when you hover over some element with your mouse pointer and it triggers some JS emulated style for :hover.

tss.. some people don't like being reminded that their favourite tech performs worse on an Nvidia 3090 than WinForms did on an 800 MHz CPU running Windows 98

Complexity is often conflated with lack of familiarity.

Choosing more than one language as a startup can become really expensive quickly. As long as your tribes are small, chances are high that you one day run out of, e.g., Python developers while you still have a lot of Java guys (or vice versa). This introduces unnecessary pain. (And obviously, you should have used Rust or Haskell from the get-go for everything.)

The sole exception to this rule I would make is javascript which is more or less required for frontend stuff and should be avoided like the plague for any other development. As soon as you can get your frontend done in Rust, though, you should also switch.


Idk, I am someone who has looked at many programming languages, including all of those you mentioned. But a capable developer can be expected to learn a new language over the course of a few weeks if needed. I don't see how you could "run out of devs of language x" if you have capable devs on board, especially when those languages are all in the same programming language family/club.

Even the most capable developer that learns a new language in a few weeks will not be an expert in it. The difference in productivity and quality of the code will be huge. This is because in different languages things can be done very differently, it is not about the syntax as much as the best ways to do things.

>Doesn't look like the author knows what he is talking about.

This was my first thought, and I was going to comment so, but saw you already did. The only reason we see this comment is because HN has an irrational hate of K8s; for those of us that do run things in production at scale, k8s is the best option. The rest is either wrapped in licenses or lacks basic functionality.


I also thought WhatsApp is a bad example. They not only hosted themselves, but they used solely FreeBSD (as far as I know) in their servers. (which don't get me wrong, I find great as a FreeBSD sysadmin myself).

Using WhatsApp as an example of a lean engineering org should almost be banned at this point. WhatsApp had a high performing engineering team that used basically the perfect set of tools to build their application (which also had a narrow feature scope; plaintext messaging). Even with hindsight there is very little you could do to improve on how they executed.

Just because WhatsApp scaled to almost half a billion users with a small engineering team doesn't mean that's the standard, or even achievable, for almost all teams.


I suspect a lot of the gripes and grousing about Kubernetes comes from SMEs trying to run it themselves. That will often result in pain and cost.

Kubernetes is a perfectly good platform for any size operation, but until you are a large org, just use a managed service from Google/Amazon/DigitalOcean/whoever. Kubernetes, the data plane, is really no more complex than e.g. Docker Compose, and with managed services, the control plane won't bother you.

K8s allows composability of apps/services/authentication/monitoring/logging/etc in a standardised way, much more so than any roll-your-own or 3rd-party alternative IMO, the OSS ecosystem around it is large and getting larger, and the "StackOverflowability" is strong too (ie you can look up the answers to most questions easily).

So, TLDR, just use a managed K8s until you properly need your own cluster.


Yep. In fact, the front/back language bit is the most egregious premature optimization I can think of.

> But the next advice about not using a different language for frontend and backend is wrong.

Being charitable, what I think they are getting at is maybe more about having fully separated frontend and backend applications (since the front-end examples he gives are not languages but frameworks / libraries). Otherwise it seems really backwards - I'm definitely an advocate of not always needing SPA-type libraries, but using literally zero JavaScript unless your backend is also JS seems like it goes too far to the other extreme.


> I think the most appropriate advice is to choose a stack which the founding team is most familiar with. If that means RoR then RoR is fine. If it means PHP then PHP is fine too.

Taking human resource information into consideration sounds very wise. Although learning a new language is generally not that huge a barrier, changing your whole stack once the minimum viable product stage is passed can be very expensive. And if you need to scale the team, the available developer pool is not the same depending on which technology you have to stick with.

It doesn’t invalidate your point, but maybe it brings some relevant nuances.


Re: single language, there's a grain of truth to it - see http://boringtechnology.club/ - but that one mainly says there is a cost to adding more and more languages. When it comes to back- and frontend though, I would argue there is a cost to forcing the use of a single language. e.g. NodeJS is suboptimal, and web based mobile apps are always kinda bleh.

"I think the most appropriate advice is to choose a stack which the founding team is most familiar with." I'd think that's exactly what typically happens most of the time. But the degree of stack-lockin that occurs with startups still surprises me even when it's clear a better choice might have been made. Mostly due to management not being prepared to grant the necessary rewrite time.

sounds like it just boils down to: try to choose the technology your team is familiar with, not what other teams are successful with

Of course there's some balance needed. If your team is familiar with some niche language then long term that might not be a good strategy if you intend to bring more devs on board later.

One side of this which I don't think is discussed often is the fun of choosing new technology. How do you balance having fun and being realistic at the same time?

Fun meaning trying new technology, learning as you go, setting up systems that make you feel proud, etc. It can lead to failure, but I think having fun is important too.


> But the next advice about not using a different language for frontend and backend is wrong.

Er.

I read this as him saying "one of the things I've seen as a bad reason to use Kubernetes is that there are multiple languages in use."

I've seen people do this. Frontend in one container, backend in another, Kube to manage it.

If that's what author meant, author is right, that's a profoundly stupid (and common) reason to involve Kube.


I agree entirely.

I like to call what the author is referring to as, "What-If Engineering". It's the science of thinking you'll be Google next week, so you build for that level of scale today. It involves picking extremely complicated, expensive (both in compute and skilled labour) technologies to deploy a Rails app that has two features. And it all boils down to, "But what if..." pre-optimising.

It happens at all levels.

At the individual unit level: "I'll make these four lines of code a function in case I need to call it more than once later on - you know, what if that's needed?"

It also happens at the database level: "What if we need to change the schema later on? Do we really want to be restricted to a hard schema in MySQL? Let's use MongoDB".

What's even worse is that Helm and the likes make it possible to spin up these kinds of solutions in a heartbeat. And, as witnessed and evidenced by several comments below, developers think that's that... all done. It's a perfect solution. It won't fail because K8s will manage it. Oh boy.

Start with a monolith on two VMs and a load balancer. Chips and networks are cheaper than labour, and right now, anyone with K8s experience is demanding $150k + 10% Superannuation here in Australia... minimum!

https://martinfowler.com/bliki/MonolithFirst.html


I've told this story before on HN, but a recent client of mine was on Kubernetes. He had hired an engineer like 5 years ago to build out his backend, and the guy set up about 60 different services to run a 2-3 page note taking web app. Absolute madness.

I couldn't help but rewrite the entire thing, and now it's a single 8K SLOC server in App Engine, down from about 70K SLOC.


My most recent job, and the job before that and the job before that all have one thing in common:

Someone convinced the right person to “put stuff on kubernetes” and then booked it for another job/greener pastures, with barely any production services or networking configured.

Thus an opening for a new SRE and once again I find myself looking at a five headed monster of confusing ingress rules, unnamed/unlabeled pods, unclaimed volume specs…

Sigh.


Absolutely nuts to think what that would look like. Is he building services to abstract out English grammar? Have one service called VerbManager that returns a boolean indicating whether a given word is a verb, have another one called AdjectiveManager that does similar, and so on.

What's your plan when (not if) Google deprecates GAE?

Eh, it's just a simple node server, no real attachments to GAE. I could move it over to Digital Ocean or something within a day or two.

It is very likely that GAE will last a lot longer than the Kubernetes cowboy you hired to set up the undocumented and untested version of it in house.

Not OP, but App Engine has actually been around for a long time.

Also, there's no inherent lock-in, you can basically just deploy it somewhere else.

Data is where the lock-in lies. Moving can be hard if you use proprietary databases. Can still be worth it.


Yeah the one good decision by that former dev was to use Mongo Atlas, so the data layer is completely decoupled from GCP.

>App Engine has actually been around for a long time

That means very little, I hope you realize. Reader, Voice, Chat, etc.[0] were all around a long time.

>Also, there's no inherent lock-in

GAE has plenty of proprietary APIs you can depend on. Whether or not you do is up to the programmer.

0 - A comment below notes that Voice and Chat aren't deprecated yet. Voice was announced as deprecated, and Google has had so many chat apps I'm not sure which ones are gone. Anyway, here is a more complete list of things Google has abandoned: https://killedbygoogle.com/


GAE is nearly twice as old as Reader was when it was deprecated. Voice and Chat still exist and aren't deprecated.

> It's the science of thinking you'll be Google next week,

There are other reasons to use K8s than just thinking of massive scale.

Setting up environments becomes a massive PITA when working directly with VMs. The end result is custom scripts, which are a messier version of Terraform, which ends up being messier than just writing a couple of manifest files for a managed k8s.

> anyone with K8s experience is demanding $150k + 10% Superannuation here in Australia... minimum!

sheds a tear for CAD salaries and poor career decisions


> Setting up environments becomes a massive PITA when working directly with VMs. The end result is custom scripts, which are a messier version of Terraform, which ends up being messier than just writing a couple of manifest files for a managed k8s.

The author advocates using a high-level PaaS. For sure working directly with VMs is the wrong answer to premature optimization, but as an early stage startup, there's plenty of services around that you basically just need to point your Git repo at and you'll have a reasonable infrastructure set up for you.


OP here, I agree. You can get super far with a service like fly.io/Heroku/Netlify/Vercel etc. (pick the one that works with your stack). VMs are an anti-pattern as well for the early stages of an application or startup.

> Setting up environments becomes a massive PITA when working directly with VMs.

I guess I've just never found this to be true.

My main goal when engineering a solution is always, "Keep It Super Simple (KISS), so that a junior engineer can maintain and evolve it."

Working directly with operating systems, VMs, networking, etc. (purely in Cloud - never on-prem... come on it's 2022!) is the simplest form of engineering and is much easier than most claim.


>> Helm and the likes make it possible to spin up these kinds of solutions in a heartbeat

Genuine question, why is this bad? Is it because k8s can spin it up but becomes unreliable later? I think the industry wants something like k8s - define a deployment in a file and have that work across cloud providers and even on premise. Why can't we have that? It's just machines on a network after all. Maybe k8s itself is just buggy and unreliable but I'm hopeful that something like it becomes ubiquitous eventually.
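
For concreteness, "define a deployment in a file" looks roughly like the sketch below. It is the same structure you would normally write as YAML and kubectl apply; it's expressed here as a typed TypeScript object only because this thread has no YAML, and every name, image and port in it is a placeholder.

  // Minimal sketch of a Kubernetes Deployment manifest (placeholder values).
  const deployment = {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: 'web-app', labels: { app: 'web-app' } },
    spec: {
      replicas: 2,                                   // how many identical pods to run
      selector: { matchLabels: { app: 'web-app' } },
      template: {
        metadata: { labels: { app: 'web-app' } },    // must match the selector above
        spec: {
          containers: [
            {
              name: 'web-app',
              image: 'registry.example.com/web-app:1.2.3',
              ports: [{ containerPort: 8080 }],
            },
          ],
        },
      },
    },
  };

The same file works on any conformant cluster, managed or on premise, which is the portability the comment above is asking for.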


Oh it's most certainly NOT bad! It's very, very good. But it's not the end of the story. Imagine if you could press a button in a recipe book and a stove, pan, some oil, and some eggs appeared and the eggs started cooking... amazing! But that's not the end of the story. You don't have scrambled egg yet. There's still work to be done, and after that, there's yet more work to be done, the washing up being one example.

It's everything that comes afterwards that gets neglected. Not by everyone, granted, but by most.


We had J2EE almost 30 years ago: one file which described everything and contained everything.

According to the following link, that literally sounds nothing like Kubernetes. Perhaps a more appropriate analogy to older tech is something like LSF or Slurm.

https://www.webopedia.com/definitions/j2ee/


LSF and Slurm are still around and kicking on HPC systems. But they're nothing like Kubernetes. Maybe close to k8s batch jobs, but that's it.

Finding machines with available cores and memory to run a workload is the fundamental feature that they share.

Sorry but managed k8s is really simple and a wildly better pattern than just running VMs. You don’t need Google scale for it to help you, and spinning things up without understanding the maintenance cost is just bad engineering

> Sorry but managed k8s is really simple ...

If you need a service to manage K8s for you, then that's a red flag already (regarding K8s, not you personally). If the service is so complicated that experienced engineers tell me constantly that only managed K8s is the way to do it, that tells me enough about why it's going to be a rough journey that should probably be avoided with IaaS or PaaS.

> ... and a wildly better pattern than just running VMs.

I've never had an issue running VMs, personally. And when I join a firm and begin helping them, I find I can very quickly and easily come up to speed with their infrastructure when it's based on straight IaaS/SaaS/PaaS. If it's K8s, it's often way more complicated ("Hey! We should template our YAML configs and then use Ansible to produce the Helm charts and then run those against the infra!" - ha ha!)


> If you need a service to manage K8s for you, then that's a red flag already

It's really not, k8s does a ton for you, trying to do the same with VMs would be unbelievably complex

> that should probably be avoided with IaaS or PaaS.

PaaS is great till it isn't, seen plenty of companies hit the edges of PaaS and then need to move to k8s

> infrastructure when it's based on straight IaaS/SaaS/PaaS

Again this is great till it isn't (see Heroku) and then people move to k8s. Having control and understanding of the underlying infrastructure is important unless you're just running some basic web app


You want simple? Heroku, Render, DigitalOcean AppPlatform

Yeah, isn't Heroku dead? PaaS is great till it isn't, then you're screwed; seen plenty of companies be forced to move off of PaaS to k8s because of the edges. It's fine if you are running a basic web app

No, it's not. If going for managed services: a load balancer + an ASG is stupid simple to set up and it just works.

A basic k8s service is just as easy if not easier and takes you way further

How do you deploy your code in this scenario, ssh into VMs?

Level 1) you package your service into a zip/rpm/deb/etc and have an agent on the machine that periodically pulls

Level 2) you pack your software into an AMI and update the ASG config. You can periodically "drain" the ASG of old instances

Level 3) you deploy your stack again, with the new stack referencing the AMI that you've built at level 2. You start shifting traffic between the old stack and the new stack. You monitor and roll back if something is wrong.
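
A hedged sketch of what levels 2 and 3 can look like with the AWS SDK for JavaScript v3: register the freshly baked AMI as a new launch template version, point the ASG at it, and let an instance refresh drain the old instances. The launch template ID, ASG name and thresholds are placeholders, and error handling and monitoring are omitted.

  import { EC2Client, CreateLaunchTemplateVersionCommand } from '@aws-sdk/client-ec2';
  import {
    AutoScalingClient,
    UpdateAutoScalingGroupCommand,
    StartInstanceRefreshCommand,
  } from '@aws-sdk/client-auto-scaling';

  const ec2 = new EC2Client({});
  const autoscaling = new AutoScalingClient({});

  async function rollOut(newAmiId: string): Promise<void> {
    // 1. Register the freshly baked AMI as a new launch template version.
    await ec2.send(
      new CreateLaunchTemplateVersionCommand({
        LaunchTemplateId: 'lt-0123456789abcdef0',   // placeholder
        SourceVersion: '1',                         // base it on an existing version
        LaunchTemplateData: { ImageId: newAmiId },
      })
    );

    // 2. Point the ASG at the latest template version.
    await autoscaling.send(
      new UpdateAutoScalingGroupCommand({
        AutoScalingGroupName: 'web-asg',            // placeholder
        LaunchTemplate: { LaunchTemplateId: 'lt-0123456789abcdef0', Version: '$Latest' },
      })
    );

    // 3. Gradually replace old instances, keeping most of the fleet healthy.
    await autoscaling.send(
      new StartInstanceRefreshCommand({
        AutoScalingGroupName: 'web-asg',
        Preferences: { MinHealthyPercentage: 90 },
      })
    );
  }

Rolling back is the same flow pointed at the previous AMI, which is roughly the "shift traffic and roll back if something is wrong" step described above.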


I find it's easier to use Ansible/Salt/Puppet Bolt and Packer to bake an AMI every night, update the launch template in a DB (which Terraform pulls the value from, thus there is no drift), and update the ASG. Then you just force a drain.

Now you've got automatic, constantly updating VMs every night if you want them. And a new deployment is just committing code to master and pushing, and that whole pipeline triggers for you.

People like to overcomplicate things, Mirceal. You're on the right path :-)


Worst solutions I've heard in a while, no offense...

Sure thing. Share your solutions and why they are better?

I'll be honest, I haven't fully explored AMIs as a solution, but how do you run the AMI in your local dev environment? I can replicate the same K8s with Docker images easily in local dev.

If you can't run your software locally without Docker, I'd be worried.

But to answer the question, VMs have been a thing on the desktop for a very long time.


How do you download AMIs of Redis, Postgres, etc? Are you building this all by hand?

That's the crux of the problem. People no longer know or understand, or want to know and understand, what their software is vs what is around their software. They treat Docker and k8s as a way of packaging software and just ignore all the lessons that generations of software engineers have learned when it comes to how to properly manage your dependencies and how to correctly package your software so that it's resilient and runs anywhere.

We also live in a world that does not appreciate well-crafted software, and a lot of things are driven by the desire to build a resume. I've maintained code that was decades old, was amazing to work with, and was still generating ridiculous amounts of money. I've also worked on code that was just written, used all the possible bells and whistles, and where development speed ground to a halt once it had been around for more than a couple of months.

My worst case scenario is having to work on code where the original developer didn't understand what they were doing and they just wanted to use X. Double the trouble if they didn't master X when the thing was put together.


The truth is at scale the last thing you want is a nest of unmanaged complexity, so it’s also the wrong instinct there. It’s usually contact with the real world that dictates what needs the extra engineering effort, and trying to do it ahead of time just means you’ll sink time up front and in maintenance on things that didn’t turn out to be your problem.

I think at scale, K8s is a good choice. I run a Discord server with like, 3,400 members now, and some of them are working at mental scale. They've claimed the same as you: K8s at scale is the only way.

I would very likely agree in all cases.

However, they only represent 4-5 users out of all 3,400. And that's the issue - only a small fraction of the industry operates at that scale :-)


> I'll make these four lines of code a function in case I need to call it more than once later on - you know, what if that's needed?

The code example is not always right.

Beware: if you know it will be needed, you might as well make it a function now. Likewise, if you think it will probably be needed, why not make it a function now?

It’s not a good review comment or rejection to say “yeah but I don’t want to do that because it’s not yet needed”. Sure, but what if you are just being lazy and you don’t appreciate what it should look like long term?

The “I don’t want to write a function yet not needed” is not a clear cut example.


> Sure, but what if you are just being lazy and you don’t appreciate what it should look like long term?

I wasn't aware that some devs have a side hustle as fortune tellers?

On a more serious note, you should take a look at Sandi Metz's "the wrong abstraction". https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction


You're making a gamble either way. The article you linked is correct that duplication is usually cheaper than abstraction. So if you really have no idea what your code will do in the future, then cheaper is the way to go. But an experienced dev can start to fortune-tell: they know what parts tend to be re-used, which abstractions are common and powerful. And if you also plan your architecture before you code, you can also see what abstractions might be needed ahead of time. If you are sure an abstraction is better, then duplication is tech debt.

A simple example. If you are making a multi-page website that contains a header on all pages, you can separate the header into a component from the get go, instead of duplicating the header for each page and then abstracting it in the future (where it starts to become more work).
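
A minimal sketch of that header example in React/TypeScript (the component and page names are made up). Because the abstraction boundary is obvious up front, extracting it from the start costs almost nothing:

  // Header.tsx: the shared piece, written once.
  export function Header({ active }: { active: 'home' | 'pricing' }) {
    return (
      <header>
        <a href="/" aria-current={active === 'home' ? 'page' : undefined}>Home</a>
        <a href="/pricing" aria-current={active === 'pricing' ? 'page' : undefined}>Pricing</a>
      </header>
    );
  }

  // HomePage.tsx: every page reuses the component instead of duplicating the markup.
  import { Header } from './Header';

  export function HomePage() {
    return (
      <main>
        <Header active="home" />
        <h1>Welcome</h1>
      </main>
    );
  }

The same judgment call flips for logic whose future shape you can't yet see; there, the linked article's "prefer duplication over the wrong abstraction" advice applies.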


You're forgetting that many people will want to use K8s for a project because they want it on their CV to get the high paying jobs. I saw the term on HN a couple of weeks ago -- CVOps

I'm not forgetting that fact, I'm simply choosing to ignore such people. They're not really what the industry is about. That's not in the spirit of a healthy society. That's just leeching.

Good luck to them, but they're not going to occupy time and space in my mind.


> They're not really what the industry is about.

We'd like that (I'd like that), but resume-driven choices are a very large driver of technology direction, unfortunately.

It means those of us who want to build something very maintainable and very stable using the most boring (stable, secure) technology possible are often outnumbered.


And underpaid.

Sigh. Tell me about it :-/

Maybe it's just my organization, but I see the behavior across the corporation far more than I'd like, and inevitably these people move on leaving a complex mess in their wake that long term support staff have to deal with.

We seem to mostly manage to avoid that in my department, but we have a very low turnover.


Sounds rough, buddy! Sorry to hear that. I hope you're being well compensated.

Hey, can I borrow your crystal ball? Seriously, my man. Why were you trying to read someone's mind?

Next time why don't you give the man some credit for making a rock solid argument?

If you felt the need to point out the obvious (which, obviously, I don't think needed to be pointed out) you might have opined, "You didn't mention that many people will want to use K8s for a project..." But, like, ummmm, really nobody likes someone who comes across as Captain Obvious. We know the sky is normally blue. We don't need you to tell us that.

Here on Hacker News it's generally understood that many engineers unnecessarily advocate the use of trendy technology to burnish their resumes. It's despicable, but, hey, it's reality. Far from being dispassionate "men of science", many engineers are all too willing to flat out lie to their current managers in order to help themselves get a new and better-paying job at another company. These charlatans are essentially thieves.

Bring on the downvotes... cowards.


At least in the sense of code, you aren't doing any real harm I can think of, and there are other benefits like testing and organization.

>> I agree entirely.

I agree entirely too.

>> Start with a monolith on two VMs and a load balancer. Chips and networks are cheaper than labour,

Kudos to you! You are a dangerous man for you opine the truth.

My advice is generally, "Build something. Then see if you can sell it." or "Sell something and then go build it." Either way, it all starts soooo small that the infrastructure is hardly a problem.

If you "get lucky" and things really take off. Sure. Yeah. Then get a DevOpps superstar to build what you need. In reality, your business will very probably fail.


You can’t just hire 1 DevOps superstar though, because they need to sleep and not burn out. You’ll need ~7 people on a rotation if you need to support anything worth really supporting. DevOps is about giving Developers a dedicated System Operations job for some small fraction of their time.

> DevOps is about giving Developers a dedicated System Operations job for some small fraction of their time.

No. DevOps is about the development and operations disciplines working together in a cross functional way to deliver software faster and more reliably.

In a small enough startup both disciplines may be represented by a single person, though.


I respectfully disagree. I’ve scaled these teams myself, and it’s about giving developers in a small organization the job of deploying the solution and ensuring that it runs. In larger organizations, DevOps becomes impossible and it naturally splits into Dev and Ops. It’s important to understand where it works and when it stops working to effectively manage the transition as the business grows.

> You can’t just hire 1 DevOps superstar though

You don't go from zero to needing global 24x7 support overnight.

Hiring 1 DevOps superstar is exactly what we did a few startups back and it worked great. Of course there was no after-hours support, it's a small startup. Eventually the team grew.


Fair enough. If you don’t need 24/7 and the bus factor risk is tolerable, then you might not need a rotation.

I’d just caution that when it comes to the mental health of the one person in this role, even if they seem like they are doing ok, check in frequently.


Good grief. First and foremost I was simplifying to make a point.

>> You can’t just hire 1 DevOps superstar

Secondly, that assertion is not necessarily true. Obviously, you should just hire 1 DevOps superstar... in some cases.

Don't nitpick and don't argue foolishly.


I’ve run DevOps and know from experience the pitfalls. I’m sorry that you’ve interpreted my general agreement and elaboration of your comment as nitpicking foolishness.

>> I’ve run DevOps and know from experience the pitfalls.

That is an obvious straw man. Don’t you see that?

In an organization where "everything is on fire", sure you might need a team of DevOps folks to parachute in and put out the fires… yesterday!!!

But let’s imagine something that isn’t actually impossible, shall we? Let’s imagine a company where the CTO says, “Hmmm, we don’t actually need to implement a DevOps solution today. Our monolith on two VMs and a load balancer is actually working just fine. But it looks like we’ll probably need a robust DevOps (due to what looks to be hockey stick growth) in another 6 to 12 months, especially since we secured our series B funding last week. Therefore, I think now would be a good time to hire a DevOps guru to start architecting a solution for us.”

Your experience is just that: it’s your experience. It’s not the totality of all likely scenarios. I’ve dealt with countless guys like you who “innocently” make simple logical errors when trying to defend specious assertions. It’s not innocent; it’s deceitful.

I don’t know if you are an engineer and/or an engineering manager, but I’d probably hate to work with you.

If you claim to be an engineer, well, then you don’t seem like an engineer to me; you seem like a mere technician (such as an ordinary plumber or carpenter). I’ve got nothing against ordinary plumbers or carpenters per se (though many I’ve dealt with have clearly been swindlers), but they don’t claim to be engineers.

And if you are an engineering manager, then may God help your unfortunate subordinates.

>> I’m sorry that you’ve interpreted my general agreement and elaboration of your comment as nitpicking foolishness.

It was nitpicking foolishness. I don’t accept your apology. Your ignorance and arrogance are obvious to me. See, even a bright six year old boy could grasp that, “If we aren’t in a rush to build a new, bigger and better system, to start with we might only need to bring in one really smart guy to plan it out.” When a normal adult misses such an obvious argument I assume they have been blinded by ignorance and/or arrogance.


We've banned this account for repeatedly breaking the site guidelines.

https://news.ycombinator.com/newsguidelines.html


I lean conservative in my tech choices but I just don't see the big issue with Kubernetes. If you use a managed service like GKE it is really a breeze. I have seen teams with no prior experience set up simple deployments in a day or two and operate them without issues. Sure, it is often better to avoid the "inner platform" of K8s and run your application using a container service + managed SQL offering. But the difference isn't huge and the IaC ends up being about as complex as the K8s YAML. For setting up things like background jobs, cron jobs, managed certificates and so on I haven't found K8s less convenient than using whatever infrastructure alternatives are provided by cloud vendors.

The main issue I have seen in startups is premature architecture complexity. Lambda soup, multiple databases, self-managed message brokers, unnecessary caching, microservices etc. Whether you use K8s or not, architectural complexity will bite your head off at small scales. K8s is an enabler for overly complicated architectures but it is not problematic with simple ones.

>Did users ask for this?

Not an argument. Users don't ask for implementation details. They don't ask us to use Git or build automation or React. But if you always opt for less sophisticated workflows and technologies in the name of "just getting stuff done right now" you will end up bogged down really quickly. As in, weeks or months. I've worked with teams who wanted to email source archives around because Git was "too complicated." At some point you have to make the call of what is and isn't worth it. And that depends on the product, the team, projected future decisions and so on.


As a startup founder that's not VC funded, I would totally recommend you look into building with Kubernetes from the get go. The biggest learning curve is for the person setting up the initial deployments, services, ingress, etc. Most other team members may just need to maybe change the image name and kubectl apply to roll things out. Knowing that rollouts won't bring down prod and that they can be tested in different environments consistently is really valuable.

I started out with IaaS namely Google App Engine and we suffered a ton with huge bills especially from our managed db instance. Once the costs were too high we moved to VMs. Doing deployments was fine but complicated enough that only seasoned team members could do it safely. We needed to build a lot of checks, monitoring etc to do this safely. A bunch of random scripts existed to set things up, and migrating the base operating system etc required a ton of time. Moving to Kubernetes was a breath of fresh air and I wish we'd done it earlier. We now have an easy repeatable process. Infra is easier to understand. Rollouts are safer and honestly, the system is safer too. We know exactly what ports can allow ingress, what service boundaries exist, what cronjobs are configured, their state etc, with simple kubectl commands.

Using Kubernetes forces you to write configurable code and is very similar to testing: it sounds like it'll slow you down and shouldn't be invested in until the codebase is at a certain size, but we've all learned from experience how it actually speeds everything up, makes larger changes faster, makes customer support cheaper, and saves you from explaining why a certain feature has been broken for 10 without anyone's knowledge


> The biggest learning curve is for the person setting up the initial deployments, services, ingress, etc. Most other team members may just need to maybe change the image name and kubectl apply to roll things out.

This is a huge red flag.

It's basically admitting that you expect most later employees to not understand k8s or how it's being used. You may think they don't need to because it works, but you have to think about what happens when it doesn't work.

The shops I've been to all had the same mindset: the Docker/k8s infra was set up by one guy (or two), and no one else on the team understands what's going on, let alone has the ability to debug the configuration or fix problems with it.

Another thing that might happen is some team members understand just barely enough to be able to "add" things to the config files. Over time the config files accumulate so much cruft, no one knows what configuration is used for what anymore.


You have this with literally every deployment mechanism, except with Kubernetes the boundary is standardized and you can easily find/hire/teach new team members to work on the complicated parts.

Custom VM/cloud/$randomSaaS deployments are much worse when it comes to "the one guy who understands the intricate details is on vacation".


Of course, if your deployment mechanism needs to be complicated, standardizing on something everyone knows is useful.

The underlying assumption behind my comment is that you really really want to simplify your deployment as much as you can.

Unfortunately this is nearly impossible with the currently accepted set of standard best practices for developing web applications, where you use a scripting language and like six different database systems (one for the source of truth on data, one for caching, one for full text search, and who knows what else everyone is using these days; I honestly can't keep track).

There are many ways to make the deployment process simple. My rule of thumb is to use a compiled language (Go) and an embedded database engine.

The levels.io guy famously did it even with PHP: https://twitter.com/levelsio/status/1102487697220820994


At some point, this becomes the dividing line between a "nu-ops" team and the dev team.

>It's basically admitting that you expect most later employees to not understand k8s or how it's being used.

That's also exactly what would happen if you homebrewed your own system. You need to centralize some part of expertise around infra at some point, but hopefully around more than two people.


It's healthy to depend on your coworkers for their specific knowledge. The landscape is too large for everyone to know everything, and honestly if I heard someone say this in an interview I would chalk it down to social deficits because this is not how life, businesses, etc. work

I think the key with this though is that it's good when everyone on a team has a working knowledge of something, and one person has expert knowledge. If one person knows everything there is to know, and everyone else knows nothing, you've created a massive dependency on the single person (which in the case of infrastructure code, could easily be a near existential problem for the company).

It’s unrealistic to expect the entire team to know how to build and safely operate IPv6 BGP anycast with an HTTP/2 TLS load balancer, authentication and authorization, integrated with a modern observability platform and a zero downtime deployment CI process.

It is realistic to expect a small team to build the same in an industry standard way and hand off a clear, well documented API to the rest of the team.

Bespoke solutions need to deal with this complexity somehow, k8s provides a standard way to do so.


I'd consider knowing how to use the API to configure services and deploy new services to be a good definition of "working knowledge". You're right that everyone doesn't need to know the ins and outs of everything for sure, but if you observe at your company that everyone is relying on "the Kubernetes guy" to do everything related to Kubernetes, you've just re-invented an old-school ops team in an especially brittle way.

Then write documentation or train your team.

YAML supports comments.

Just like how you don't really expect later devs to learn the intricacies of your hand rolled deployment setup.

The difference is, kubernetes is pretty standardized and therefore learnable in a repeatable way, unlike the Frankenstein of tooling you might otherwise cobble together.


That would only be true if you actually need 90% of what k8s provides and would otherwise end up reproducing it poorly.

> kubernetes is pretty standardized

Sure. Though the standard changes weekly.


My favorite Kubernetes joke can be told in one word:

v1alpha1


> I started out with IaaS namely Google App Engine and we suffered a ton with huge bills especially from our managed db instance

Are you factoring in the salaries of the people setting up Kubernetes? And the cost of those people/salaries not working on the actual product? And the cost of those people leaving the company and leaving a ton of custom infrastructure code behind that the team can't quickly get up to speed with?

> ton with huge bills especially from our managed db instance

This doesn't have much to do with App Engine, right? Last time I used it, we were using a PostgreSQL instance on AWS and had no problems with that.

> Doing deployments was fine but complicated enough that only seasoned team members could do it safely

I just plain don't believe this. I bet you were doing something wrong. How is it possible that the team finds it too difficult to do an App Engine deployment but then they're able to set up a full Kubernetes cluster with all the stuff surrounding it? It's like saying I'm using React because JavaScript is too difficult.

> Using Kubernetes forces you to write configurable code and is very similar to testing: it sounds like it'll slow you down and shouldn't be invested in until the codebase is at a certain size, but we've all learned from experience how it actually speeds everything up, makes larger changes faster, makes customer support cheaper, and saves you from explaining why a certain feature has been broken for 10 without anyone's knowledge

This is far, far, far from my own experience.

Some other questions:

How did you implement canary deployment?

How much time are you investing in upgrading Kubernetes and the underlying nodes' operating systems?

How did kubernetes solve the large database bills issue? How are you doing backups and restoration of the database now?

If I were to found a company, especially one not VC funded, dealing with Kubernetes would definitely be far down my list of priorities. But that's just me.


(I agree with your overall points, but when the GP said "Doing deployments was fine but complicated enough that only seasoned team members could do it safely" they were referring to their post-App Engine, pre-Kubernetes "manually managed VMs". I have no problem believing that deploys are very complicated in that scenario)

If setting up a PostgreSQL instance behind a VPC with an EC2 instance in front of it was too difficult, there is now a serverless database product from AWS that costs 90% less than what it used to.

No more load balancers, no more VMs, no more scaling up or down to match demands.


Please don’t do this. I’m dealing with the mess caused by following this line of thinking.

One guy (okay it was two guys) set up all the infrastructure and as soon as it was working bounced to new jobs with their newfound experience. The result is that dozens of engineers have no idea what the heck is going on and are lost navigating the numerous repos that hold various information related to deploying your feature.

In my opinion (and I’m sure my opinion has flaws!), unless you have global customers and your actual users are pushing 1k requests per second load on your application servers/services, there is no reason to have these levels of abstractions. However once this becomes reality I think everyone working on that responsibility needs to learn k8s and whatever else. Otherwise you are screwed once the dude who set this up left for another job.

And honestly.. I’ve built software using node for the application services and managed Postgres/cache instances with basic replication to handle heavy traffic (10-20k rps) within 100ms. It requires heavy use of YAGNI and a bit of creativity tho which engineers seem to hate because they may not get to use the latest and shiniest tech. Totally understand but if you want money printer to go brrr you need to use the right tool for the job at the right time.


The mistake people make thinking about Kubernetes is that it's about scale, when really it's just a provider for common utility, with a common interface, that you need anyway. You still need to ingress traffic, you still need to deploy your services, etc.

Thank you for putting it so simply! I agree.

This, so much...

Totally agree!

Kubernetes isn't just about global scale that most people will never need, which would agree with the article. It is about deploying new apps to an existing production system really quickly and easily. We can deploy a new app alongside an old app and proxy between them. Setting up a new application on IIS or a new web server to scale is a mare, doing the same on AKS (managed!) is a breeze. It is also really good value for money, because we can scale relatively quickly compared to dedicated servers.

It is also harder to break something existing with a new deployment because of the container isolation. We might not need 1000 email services now but we could very quickly need that kind of scale and I don't want to be running up 100s of VMs at short notice as the business starts taking off when I can simply scale out the K8S deployment and add a few nodes. There is relatively little extra work (a dockerfile?) compared to hosting the same services on a web server.


> Setting up a new application on IIS or a new web server to scale is a mare.

I disagree.

With Octopus Deploy I can add a step, set a package and hostname, press a button, and have a new deployment of a new API or website in IIS pushed out to however many servers currently exist, in a few minutes.

There are many ways to manage deploying services, and scaling, without K8S or containers in general.

While I could set up a new Octopus Deploy quickly, it would be like you setting up something in AKS. We are both good at the tools we know. But saying my way or your way is wrong - that is the thing that's wrong.


That is very interesting, I'd love to know more about how this works.

- How does it decide which servers to deploy to?

- Does it scale up the number of servers if the deployment is too large?

- What happens when a server dies?


Well I guess it all depends on your setup.

Servers have a tentacle, which has an environment and tag. So you could say a server is in the test environment and tagged with main-web-app, while another server is in production with the same tag. You promote a package from test to production. You configure your setup to say a package is deployed to a server tagged main-web-app.

Octopus is an orchestration tool so isn’t responsible for scaling up. But in AWS with an autoscaling group you can configure Octopus to auto-detect a new server, tag it and deploy. Part of the deployment would add it to the load balancer.

As a side note you can deploy containers using octopus too. Tho I’ve never found a reason to use containers in production yet.


We’ll see if Octopus Deploy is around in 10 years


The only rationale to do what you described is if and only if you have outside capital. If you are spending your hard-earned bootstrapped cash on this, I'm sorry but it's a poor business decision that won't really net you any technical dividends.

Again, I really see this as the result of VC money chasing large valuations, and business decisions influencing technical architecture; a sign of our times, of exuberance and senselessness.

Engineering has to raise the cost of engineering to match it (a 280-character-limit CRUD app on AWS Lambda with 2 full stack developers vs 2000 devs in an expensive office).


Why should using different languages for front-end and back-end be a problem? I rather think that it is better to use languages that are appropriate for the given problem. It is not premature optimization to have parts of a back-end implemented in C/C++/Go/whatever else if high performance is needed. It would rather be a waste of resources, money and energy not to use a high-performance language for high-performance applications. Of course using the same language for the front-end might make no sense at all.

It seems to be a new thing with younger generations.

We never had an issue with multiple languages across tier-n architectures.

Suddenly with the uptake of HTML 5, it became an issue not being able to use JavaScript everywhere.


That's sad; JavaScript was already not great for the front-end and we now get it on the backend and even at the edge.

Most job offers are for a mythical full-stack developer that'll master web design, CSS/HTML, front-end interactions and code, networking, back-end architectures, security,... You end up with people who don't have time to get enough expertise and write and build clean stuff. Hacking poor JavaScript code everywhere.

With the same language, you may think you can somehow reuse stuff between front and back. A bad idea in most projects; it will typically create more problems than it'll solve.


I'm far from a full stack developer, but really, how much code would actually be common across the front and back end? I would have thought maybe some validation code, not sure how much else?

It depends.

If you do server-side rendering with something like Next.js, then it's quite a lot of code.

With tRPC you can share types without going through an intermediary schema language (https://trpc.io/), although I think it would've still benefitted from separating the schema from the implementation (hope you don't forget the type specifier and import your backend into your frontend by accident).
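
Roughly what that looks like with tRPC's v10-style API (the procedure name and URL are made up). Note the "import type" line, which is the type specifier mentioned above: it keeps the backend implementation out of the frontend while still sharing the types.

  // server.ts
  import { initTRPC } from '@trpc/server';
  import { z } from 'zod';

  const t = initTRPC.create();

  export const appRouter = t.router({
    greet: t.procedure
      .input(z.object({ name: z.string() }))
      .query(({ input }) => `Hello, ${input.name}`),
  });

  // Only this type crosses to the frontend, not the implementation.
  export type AppRouter = typeof appRouter;

  // client.ts
  import { createTRPCProxyClient, httpBatchLink } from '@trpc/client';
  import type { AppRouter } from './server';

  const client = createTRPCProxyClient<AppRouter>({
    links: [httpBatchLink({ url: 'http://localhost:3000/trpc' })],
  });

  // Inferred as string; the input object is type-checked against the router too.
  const greeting = await client.greet.query({ name: 'world' });

Newer tRPC releases rename some of these helpers, so treat this as a sketch of the idea rather than the exact current API.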

For business logic, as always, the answer is "it depends". Does your app benefit from zero-latency business logic updates? Do you need rich "work offline" functionality combined with a rich "run automation while I'm away" for the same processes? etc.


Even when using Next.js, it's still pretty common to have a separate API backend that doesn't also have to be in JavaScript/Node. There are parts of backend code (HTML generation) that very strongly benefit from being unified with the frontend code, and there are parts of backend code (like database persistence code) that benefit much less strongly, if at all, from unification with the frontend code. Many people split these parts of backend code up between a separate Next.js backend service and an API backend service, and Next.js lends itself well to this.

I'm a big Next.js fan; I just think it's useful to emphasize that using it doesn't necessarily have to mean "only JavaScript/TypeScript on the backend".


This. I like how the Remix.run devs frame the "BFF" (Backend-For-Frontend) pattern: https://remix.run/docs/en/v1/guides/bff

That document starts with this:

> While Remix can serve as your fullstack application,

Isn't that where most teams should start, and where they should stay unless they have a really good reason to get more complicated? This is what I was thinking of while reading the GP comment about Next.js.


Sounds about right. I have an API service written in Go for which we also offer a client library in Go. We moved the common ground between the two into a separate module to avoid code duplication. That module ended up having the type declarations that appear on the API in JSON payloads, plus some serialization/validation helper methods, and that's it.

Serialization/deserialization and templates are a huge pain to keep identical. The rest, not so much.

So, if you can keep all of your templates in a single place, and don't have a lot of data diversity, you won't have a problem. But if that's not your case, you do have a problem.

Personally, I think the frontend code should be more generic on its data and organized around operations that are different from the ones the backend sees. But I never could turn that ideal into a concrete useful style either.


In my mind it is not code reuse between frontend and backend, but expertise and standard library reuse that is the winner.

Better to have a full stack developer that can concentrate on becoming an expert and fluent in just one language rather than being kinda-ok in 3 or 4 IMHO.


The struggle is not learning a new language nor becoming fluent.

The real expertise is being a front-end expert, authoring efficient architecture around user interactions, browsing the MDN without effort, mastering the DOM... Then, being a back-end expert, knowledgeable on scaling, cloud architectures, security issues, being able to author good API design...

If you can be considered an expert in all of this, and also edge computing, I don't think switching language would be an issue for you. Language is a tiny fraction of required expertise and you might be more productive by switching to an appropriate one.


You really think programming languages are that different?

It's not a question of "if" a developer can learn a new language, but of "how long" it takes. And it's not a question of how long it takes to become moderately productive, but how long it takes to reach an expert level of proficiency.

The impression I've gotten from some of my co-workers is that in bootcamps and college they only learned one language, remember the time and effort that went into that, and assume learning another language will take the same time and effort. Because they haven't really put effort into a second one, they don't yet realize just how much conceptually transfers between the languages.

While concepts transfer over between languages, the language is only 10% of it. The rest of it is the standard library, ecosystem, build system, and all kinds of intricacies you have to know about, which are specific to the language (ecosystem).

This is an underestimation of how hard it is to learn to program.

Someone learning a first language isn't just learning a new language: they're learning how to program. It's a new profession, a new hobby, a new superpower.

The rest of the stuff (standard library, ecosystem, buildsystem and all kinds of intricacies) is just a mix of trivia and bureaucratic garbage you gotta fill your brain with but will all be replaced in 10 years anyway. Sure it takes time but it's nowhere near as important as actually knowing how to program.

Even changing paradigms (imperative to functional) isn't as hard as learning the first language.


I think a lot of people here have been doing this programming thing for so long we've forgotten we once had trouble understanding things like:

  x = 1
  for i in [1, 2, 3] {
     x = i
  }
What is the value of "x" at the end? Assuming block scope, it will be 1, or assuming it doesn't have block scope (or that it uses the already defined "x" in this pseudo-example) it will be 3.

A lot of beginning programmers struggle with this kind of stuff, as did I. Reading this fluently and keeping track of what variables are set to takes quite a bit of practice.

I've been hired to work on languages I had no prior experience with, and while there was of course some ramp-up time and such, overall I managed pretty well because variables are variables, ifs are ifs, loops are loops, etc.


Well, loops are not loops in functional languages...

I'm not saying there are zero differences or that $other_language never has any new concepts to learn, but in functional languages variables are still variables, functions are still functions, conditionals are still conditionals, etc. Important aspects differ, but the basics are still quite similar as is a large chunk of the required reasoning and thinking.

The concepts still roughly translate and help, though. You have projections/mappings in FP. Or if you want to go deeper, recursion. Understanding loops before those will definitely make them easier since they are equivalent.

The argument that learning a second language is as difficult as learning the first doesn’t really hold water in practice either; lots of people have done so.


There is also an issue with conceptual leakage, most noticeably I've found with devs well versed in one language bending another language into behaving like the former.

Agreed. You see this a lot with people with a C# and/or Java background using TypeScript with annotation based decorators and libraries/frameworks that implement dependency injection, etc. They don't really embrace the nature of the new language and ecosystem. And I can sympathise, it is simpler to do things as you have been doing them before.

You also see it when people who've mostly done imperative programming try their hand at a Lisp or some ML-based language. I've been there myself. You still find yourself using the equivalent of variable bindings, trying to write in an imperative rather than declarative style.

I guess when trying to learn a new language in a different paradigm you also need to unlearn a lot of the concepts from the former language.


Well, that is the reason why, outside the HN bubble, Angular still wins the hearts of most enterprise consulting shops, as it's quite close to JEE/Spring and ASP.NET concepts.

That wears off if they have decent PR processes, and mentors.

I write Elixir that smells like Rails, but as the months move on, I change.


> ...the standard library, ecosystem, build system, and all kinds of intricacies you have to know about, which are specific to the language (ecosystem)

IME a lot of the conceptual stuff transfers pretty well there too.


I think the argument was/is not "it's a problem we cannot have JavaScript" but "if we can have JavaScript everywhere, we only need to hire JavaScript devs and only need to care about JavaScript tooling", which is a fair point.

That does ignore the fact that an experienced frontend JS dev is not necessarily also a good productive backend JS dev, but at least they know the language basics, can use the same IDE, etc.

Whether that's worth it is something that depends on what you are trying to achieve, I guess. I personally would not pick JS (nor TS) for the backend.


I think this has been a huge failing of our industry of late.

The rise of the "fullstack developer" has mostly reduced quality across the board. When you hire a "fullstack developer with 5 years experience" you aren't getting someone who is as good as a frontend developer with 5 years AND a backend developer with 5 years but someone that adds up to 5 years split between those 2 endeavors but probably with less depth as a result of switching.

(as a side note I also think it's contributed to developer title inflation)

Learning a new language and its tooling is comparatively easy compared to learning the required domain knowledge to be effective in a new area. i.e transition from frontend -> backend or visa versa has very little to do with the language or tooling.

Your average frontend dev doesn't know squat about RDBMS schema design or query optimisation, probably isn't familiar with backend observability patterns like logging, metrics and tracing, and most likely has very little experience thinking about consistency and concurrency in distributed systems, etc, etc.

Just like the backend dev is going to struggle with the constraints of working in a frontend context, understanding UI interactions, optimizing for browser paint performance, dealing with layout and responsiveness, etc.

Meanwhile, if you know, say, Java and some scripting language, say Python, and you end up in a new job doing backend in JS, it's not going to take long for you to pick up JS and hit the ground running, because you are going to encounter exactly the same stuff, just with different syntax and runtime.

Backend being substantially divorced from frontend isn't a bad thing, it's generally a good thing that results in nice clean separation of concerns and healthy push-pull in design decisions about where certain logic and responsibilities should lie etc.


> The rise of the "fullstack developer" has mostly reduced quality across the board.

Having a solo developer that can do it all well enough also allows useful products to reach users faster (and for me it's really about solving user problems, not just making money), without getting derailed by communication overhead, bureaucracy, fighting within the team, etc. Just make sure you don't pick a developer who cares too much about things that mostly matter to other nerds, because then they might get derailed with premature optimization. Yeah, I've been there.


Most 'fullstack' positions don't have nearly the complexity where worrying about concurrency etc. is actually that relevant. The idea that most frontend devs have any knowledge about optimising for browser paint performance beyond using the correct CSS or framework is funny ;-)

I know, we have seen insane developer inflation the last 10 years.

The bar is generally just a lot lower now across the board. We have people with the title "Senior Engineer" that still regularly need mentorship and peer-programming sessions.

The title I have now (Principal Engineer) I feel like I don't deserve; it was reserved for folks I looked up to when I was starting, and I don't think I have reached their level. Yet at the same time it's essentially necessary to distinguish myself from what is now considered "Senior".

I have a separate rant for the lack of proper juniors and lack of structured mentoring etc but that is for another day.


It's a taxonomy problem.

The chunking unit is programming languages, which means as long as you can label the task with the specific programming language, people will believe it's the same thing.

In reality you have a hammer expert and are in the business of making everything look like a nail.

So now we have complicated, bloated applications of JavaScript in places where it's absolutely inappropriate, and you need far more skill to navigate those waters than someone who uses more appropriate tools.

It's a perversion in the name of simplicity because we're forcing too coarse of a model on too fine of a problem and as a result everything explodes in complexity, takes too long, is too expensive and comes out working like trash.

We could do better but first we have to eat crow and admit how foundationally wrong we were and frankly things aren't bad enough to make that mandatory so the circus continues just like it did when we tried to make everything OOP and it made the same smell.

They're useful tools, but that's their perimeter; they aren't zeitgeists... well they shouldn't be.


And isn’t Wasm supposed to (some day) free us from the need to pick JS for the front end?

Unfortunately, that doesn't seem to be the way WASM is going. It's been 5 years and we still can't access the DOM without going through JS.

There isn't the political willpower to make it happen and force all the browser vendors to agree and implement. Hence the status quo remains.

Which is ironic as the DOM interface was designed as an abstract interface (the IDL used in the spec is more interested in compatibility with Java than JS).

In practice though the main reason is that to have decent DOM bindings you need to stabilize many other specs first (unless you do an ultra-specific DOM-only extension, but nobody wants that)


WASM can access the DOM through its foreign function interface. It still needs some data conversion, but it's not through JS.

Rust, for example, has a huge package with interfaces for every DOM function.
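
For anyone curious, here's a minimal sketch of what that looks like, assuming the package being referred to is the web-sys crate (paired with wasm-bindgen), which generates Rust bindings for the browser's DOM APIs. The data conversion still happens in generated glue, but the application code stays in Rust:

    // A minimal sketch, assuming the crate meant here is web-sys (used with
    // wasm-bindgen). Cargo needs the relevant web-sys features enabled
    // (Window, Document, Element, HtmlElement, Node).
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen(start)]
    pub fn run() -> Result<(), JsValue> {
        // Reach the document through the generated bindings.
        let window = web_sys::window().expect("no global window");
        let document = window.document().expect("window has no document");
        let body = document.body().expect("document has no body");

        // Create and append a DOM node without hand-writing any JavaScript.
        let p = document.create_element("p")?;
        p.set_text_content(Some("Hello from Rust compiled to WASM"));
        body.append_child(&p)?;
        Ok(())
    }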


I have been very successful in replacing a JS browser application with Rust. It has been great, because Rust is much easier to change, and since the application is complex, I need to change it a lot.

But I wouldn't recommend it in general, because JS (and, for slightly more complex things, TS) is much easier to write quickly, given all the well-known Rust pain points that surface every time the language is mentioned. And most GUI code is simple enough that you only have to write it once, maybe fix a few details, and forget about it.

Wasm would be much more compelling if it were targeted by higher-level languages.


I am waiting to be able to run the JVM in WASM and have client-side Swing again.

Why wait when it is already available?

> which is a fair point

It's a very effective way to make sure your team is composed only of junior developers.


The sooner they accept that there is no such thing as one language to rule them all, the better developers they become. I have never seen the "isomorphic" claim seriously analyzed. One example: how much of the logic behind the wall actually overlaps with the optimistic UI logic? Some logic may seem reusable, but it may not be. It was insane when I saw a popular JS framework author on Twitter say that JavaScript is a language of the web and other backend languages are not (not the exact words). Like WTH.

It needs to be all JavaScript, because JavaScript in itself is already at least 5 languages, what with ES6, browser runtimes, Node, ESM and CJS, TypeScript, some CoffeeScript remnants, and the list doesn't end. There is no end to the complexity.

It's about hiring talent that can drive customer value without concern for anything else; the business can eventually hire more experienced individuals to fix the mountain of mud if there's market fit and continued need.

It is far easier to find affordable talent that has come out of a boot camp knowing JavaScript, or more specifically "React/Node.js", which lets them work on both frontend and backend. They rarely know best practices, or how to properly debug and troubleshoot a problem that isn't a Google search away, but they will be hungry and work their butts off to ship their CRUD-style features.


Most developers don't seem to want to learn more than "one" thing. Once they do one tutorial, it seems they're done for life with learning.

And they don't really have time to learn new stuff, as they spend too much of their time on Hacker News complaining there's too many new frontend frameworks or something like that.


I have never met such a developer - most devs I have met are way more eager to learn the next shiny thing than to just get the job done.

I know a lot of them; they're easy to identify, as they constantly complain about shiny things.

Honestly these are not optimizations at all, but rather architectural decisions. Large architectural decisions are generally best made at the outset, based on problem domain analysis. They are costly to change later.

Like so many others, the author appears to be latching on to the phrase "premature optimization" as a popular buzzword (buzz...phrase?). This is so far from what Knuth actually wrote in his book that it hurts.


If you are building a SaaS company and build your site in RoR, but also have experience in, say, Go, and decide 'hmm, instead of using RoR I'll use Go for this backend thing so it's faster'

That's fine.

Premature optimization would be saying:

I don't know Go/C++/C... but I know it's fast, so instead of using what I know to get up and running quickly, I'll waste time building it in something I don't know, which probably won't be as fast as doing it in something I know well.

The thing is, when you're building it, no one is using it! If it's slower in RoR than Go, who cares - get it up and running, and fix it later when people are actually using it.


> It is not premature optimization to have parts of a back-end implemented in C/C++/Go/whatever else if high performance is needed.

But the overwhelming majority of the time you don't need it, at least not yet. I would say that unless you have actual evidence that your other language would not be adequate - i.e. an implementation of your system or some representative subset of it, in your main language, that you spent a reasonable amount of time profiling and optimizing, and that still proved to have inadequate performance - then it is indeed premature.


That assumes it’s harder to build it in the other language, though. Maybe if that language is C then that will be the case, but building in, say, Go may be just as easy as building in JavaScript (or close enough that it doesn’t really matter), whereas rewriting it later would be a massive undertaking.

This is very different to, say, starting with a microservice architecture, which imposes relatively high overheads with little benefit, as splitting up a well-designed monolith is easy.


> That assumes it’s harder to build it in the other language, though. Maybe if that language is C then that will be the case, but building in, say, Go may be just as easy as building in JavaScript (or close enough that it doesn’t really matter)

The cases where Go is significantly faster than JavaScript are vanishingly small.

> whereas rewriting it later would be a massive undertaking.

This is vastly overstated IME. Porting existing code as-is between languages is actually pretty easy.

> This is very different to, say, starting with a microservice architecture, which imposes relatively high overheads,

Disagree. You're imposing a huge overhead on hiring (if you want people with both languages) or on people's ability to work on the whole system (if you're happy hiring people who only cover one side or the other). Debugging also gets a lot harder. There's essentially twice as many tools to learn.


> This is vastly overstated IME. Porting existing code as-is between languages is actually pretty easy.

Yep. We always implemented (parts of) embedded software in Java first (and later in JS) and then ported it. If no additional functionality is added, this is trivial and saves a lot of work and errors, as you have already tested, debugged and fixed the logic.


> The cases where Go is significantly faster than JavaScript are vanishingly small.

It's not only about performance but also energy and resource efficiency.


The experts in how to profile/optimize/etc a system aren't going to be JS devs though. They're going to be people who are used to dealing with systems that need to be written in languages from the machine-code-compiled lineage.

Which is to say ... while JS developers do know how to profile code, people who are routinely exposed to this problem are not going to be JS developers. The people who are good at identifying when a system needs to be re-written in Go for performance reasons are probably already Go developers and people who have lots of experience writing performant code.

Plus writing in that sort of language from the start means there is a chance to segue into performant code without needing to rewrite things in a new language.


> The experts in how to profile/optimize/etc a system aren't going to be JS devs though. They're going to be people who are used to dealing with systems that need to be written in languages from the machine-code-compiled lineage.

Sure they are. The skills aren't really language-dependent, and nowadays machine code is so far away from the actual hardware behaviour that it doesn't actually help a lot. Besides, the biggest speedups still come from finding errors or inappropriate algorithms or datastructures, and that's if anything easier to spot in a higher-level language where there's less ceremony to get in your way.


> finding errors or inappropriate algorithms or datastructures

When I interview developers, folks coding in Java can usually tell me what an appropriate data structure would be, while half the JS developers can't explain how an array is different from a linked list - because most of the time, JS developers don't have to think about it much.
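
For what it's worth, here is a small sketch of the distinction in Rust (chosen purely for illustration; the trade-off is the same in any language): an array/Vec keeps its elements in one contiguous block, while a linked list keeps each element in its own heap node linked by pointers.

    use std::collections::LinkedList;

    fn main() {
        // Vec (growable array): one contiguous allocation, so indexing is O(1)
        // and iteration is cache-friendly; inserting in the middle shifts the tail.
        let mut v: Vec<u32> = (0..5).collect();
        v.insert(2, 99);                      // O(n): shifts everything after index 2
        let third = v[2];                     // O(1): direct indexing

        // LinkedList: one heap node per element, linked by pointers, so pushing
        // at either end is cheap, but there is no O(1) indexing and traversal
        // chases pointers across the heap.
        let mut l: LinkedList<u32> = (0..5).collect();
        l.push_front(99);                     // O(1): relink the head
        let third_of_list = l.iter().nth(2);  // O(n): walk node by node

        println!("{} {:?}", third, third_of_list);
    }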


That makes sense assuming you have already written your backend in JS. In that case, yeah the bar for rewriting in another language should be high (as it should be for any ground-up rewrite). But it's not a "premature optimization" when you are deciding on the tech stack to begin with.

Using a second language for performance (which is what the post I replied to was suggesting) is a premature optimization - you're paying all the costs of having your code in two different languages, for a benefit that will only start paying off when you're much bigger if at all.

Sounds like "consultant talk": too much fluff and big words, but no nuance.

"Using different languages" is just Tuesday in most places. Sure, don't use languages needlessly, but it's not a big hurdle unless you're just a "nodejs bro"


How do you know what the performance will be in what language or needs to be, in numerical terms, before you are running the system in practice?

You can't really tell for a Web App whether it'll be faster in JS or Python, but you can definitely expect a Computer Vision application with lots of heavy number crunching to be a lot faster in C++ than in Python. We have actually also made comparisons, and even if you use things like numpy and Python bindings for OpenCV, you won't reach the speed that a C++ application achieves easily without optimization.

That depends a lot on what that CV application does and what hardware it runs on. Naive C++ (that runs on CPU) is usually much slower than using Python as glue for libraries that run on GPU.

For a lot of applications there are pretty straightforward calculations to meet a desired frame rate.
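For example, targeting 30 fps leaves a budget of roughly 33 ms per frame (1000 ms / 30) for capture, processing and rendering combined, which you can compare against rough per-stage estimates before writing any code.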

Also, I've seen UX research that established guidelines for how long something can take while still feeling responsive and allowing a user to remain focused.
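(The commonly cited thresholds are on the order of 100 ms for an interaction to feel instantaneous and roughly one second before the user's flow of thought is broken.)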

Knowing you can achieve the required/desired performance in any given language is mostly a matter of experience solving similar problems.


> It would rather be a waste of resources, money and energy not to use an high-performance language for high-performance applications.

Given that most developers probably support the push to tackle climate change, they seem to be making no effort to ensure their apps execute in as short a time as possible using as few resources as possible. You would expect people to actually embrace doing things in C or Go to save energy. Maybe cloud providers should think of showing the carbon footprint as a first-class performance metric.


Just like with code optimization, we should first make sure that making code run slightly more efficiently really has an impact on the climate, because I very much doubt it does.

When you want to save the climate there are many, many low-hanging fruits. The choice of programming language is likely not one of them, as much as I like efficient code.


Exactly what I do. C++ backends, JS frontends. I do not see any problems with using more than one language.

It's not, but this article is just blogspam.

We try to limit the number of languages we use, but we have high-performance Computer Vision code that is written in C++, we're interfacing that with Python for simplicity, and there's a web app in JS. Right tool for the job!

> Right tool for the job!

Well said: this should be the only rule!

