We started using Kubernetes almost four years ago. We had new services to deploy, and even though we're big users of bare-metal machines, we needed more flexibility. Therefore, we decided to test and use Kubernetes on these new systems. Two years later, most of our products are deployed on Kubernetes, following Kubernetes best practices. As more and more teams started to use it internally, we created an internal training course. And today, we're proud to make this training open source, so anyone can learn from it and contribute.
Two years into our implementation, we extracted eight practices from this training that we consider to be key for using Kubernetes correctly. We’re republishing these Kubernetes best practices as a blast from the past and to lay the foundation for future articles on how we and Kubernetes have grown over the last two years.
The container paradigm, and the way it's implemented on Linux, wasn't built with security in mind. As the Docker documentation explains, it only exists to restrict resources such as CPU and RAM. Running a program in a container is almost the same as running a program on the host itself, which implies that your containers shouldn't run commands as the “root” user. If you are interested in knowing more, check this article to understand why.
Thus, add these lines to all your images to make your application run as a dedicated user. Replace “appuser” with a name more relevant to you.
```dockerfile
# Create a dedicated, unprivileged user and group (Alpine/busybox syntax)
ARG USER=appuser
RUN addgroup -S ${USER} && adduser -S ${USER} -G ${USER}

# Run all subsequent instructions, and the container itself, as this user
USER ${USER}

# Set the workdir to the home directory of the user
WORKDIR /home/${USER}
```
This can also be ensured at the cluster level with pod security policies.
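For instance, here is a minimal sketch of a PodSecurityPolicy that rejects pods trying to run as root (the policy name is a placeholder; note that PodSecurityPolicy was later deprecated and removed in Kubernetes 1.25 in favor of Pod Security admission):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: non-root   # illustrative name
spec:
  # Reject any pod that tries to run as UID 0
  runAsUser:
    rule: MustRunAsNonRoot
  # The remaining fields are required by the PSP schema; RunAsAny leaves them unrestricted
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```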
Kubernetes sends the “SIGTERM” signal whenever it wants to gracefully stop a container. Your application should listen for it and react accordingly (closing connections, saving state, etc.). In general, following the twelve-factor app recommendations for your application is considered good practice. Also, don't forget to configure terminationGracePeriodSeconds on your pods. The default is 30 seconds, but your application might need more (or less) time to terminate properly.
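As an illustration, here's a minimal sketch in Go of an HTTP server that shuts down gracefully on SIGTERM (the port and timeout are placeholders; pick a timeout below your terminationGracePeriodSeconds):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"} // placeholder port

	// Serve in the background so the main goroutine can wait for signals.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Kubernetes sends SIGTERM before killing the pod; wait for it.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Let in-flight requests finish. Keep this timeout below
	// terminationGracePeriodSeconds, or Kubernetes will SIGKILL the pod first.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}
```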
Use declarative manifests so you can roll back your code and infrastructure efficiently. This means that your version control system should be the source of truth for your manifests.
It implies that you only use kubectl apply to create or update your Kubernetes resources, and also that you never use the latest tag for your container images. Each version of your containers should be unique, and using Git hashes is a good practice. When deploying a new version of your application, update the manifest by specifying a new version for the containers, commit the manifest to your source control, and finally run kubectl apply.
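Put together, a deployment then looks something like the following sketch (the registry, image name, and manifest path are hypothetical placeholders):

```sh
# Tag the image with the current Git hash so every version is unique
VERSION=$(git rev-parse --short HEAD)
docker build -t registry.example.com/my-app:${VERSION} .
docker push registry.example.com/my-app:${VERSION}

# Update the manifest, commit it (version control stays the source of truth), then apply it
sed -i "s|image: registry.example.com/my-app:.*|image: registry.example.com/my-app:${VERSION}|" deployment.yaml
git add deployment.yaml
git commit -m "Deploy my-app ${VERSION}"
kubectl apply -f deployment.yaml
```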
YAML is a tricky format. We use yamllint because it supports multiple documents in a single file. You can also use Kubernetes-specific linters, such as kubeval or kube-score.
In Kubernetes 1.13, the --dry-run option appeared on kubectl, which lets Kubernetes check your manifests without applying them. You can use this feature to check whether your YAML files are valid for Kubernetes.
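For example (the file name is a placeholder; on recent kubectl versions the flag takes an explicit client or server value):

```sh
# Lint the YAML syntax itself
yamllint deployment.yaml

# Validate the manifest without creating anything.
# Modern kubectl uses --dry-run=client (local) or --dry-run=server (API server validation).
kubectl apply -f deployment.yaml --dry-run=client
```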
Liveness and readiness probes are ways for an application to communicate its health to Kubernetes. Configuring both helps Kubernetes handle your pods correctly and react to state changes.
The liveness probe is here to assess whether a container is still alive; meaning, whether the container is in a broken state, a deadlock, or anything similar. From there, Kubernetes can take action, such as restarting the container.
The readiness probe is here to detect whether a container is ready to accept traffic, to block a rollout, to influence the Pod Disruption Budget (PDB), and so on. It's particularly useful when Kubernetes routes external traffic to your container (most of the time, when it's an API).
Usually, using the same probe for readiness and liveness is acceptable. In some cases, though, you might want them to be different. A good example is a container running a single-threaded application that accepts HTTP calls (like PHP). Say you have an incoming request that takes a long time to process. Your application can't receive any other requests, as it's blocked by the incoming request; therefore it's not “ready”. On the other hand, it is processing a request, so it is “alive”.
Another thing to keep in mind: your probes shouldn't call dependent services of your application. This prevents cascading failures.
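As a sketch, the probe configuration for a container could look like this (the endpoints, port, and timings are hypothetical placeholders; tune them to your application):

```yaml
livenessProbe:
  httpGet:
    path: /healthz   # cheap check: is the process responsive at all?
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready     # stricter check: can we accept new traffic right now?
    port: 8080
  periodSeconds: 5
```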
Kubernetes lets you configure “requests” and “limits” of the resources for pods (CPU, RAM and disk). Configuring the “requests” helps Kubernetes schedule your pods more easily, and better pack workloads on your nodes.
Most of the time, you can define "requests" = "limits". But be careful: your pod will be terminated if it goes above its memory limit, while going above its CPU limit only gets it throttled. Unless your applications are designed to use multiple cores, it is usually a best practice to keep the CPU request at "1" or below.
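Following these guidelines, a container's resource section could look like this sketch (the values are illustrative; size them for your own workload):

```yaml
resources:
  requests:
    cpu: "1"        # at most one core unless the app uses multiple cores
    memory: 512Mi
  limits:
    cpu: "1"        # requests = limits keeps scheduling predictable
    memory: 512Mi   # the pod is terminated if it exceeds this
```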
When you deploy an application with a lot of replicas, you most probably want them to be evenly spread across all nodes of the Kubernetes cluster. If all your pods run on the same node and that node dies, all your replicas die with it. Specifying a pod anti-affinity for your deployments ensures that Kubernetes schedules your pods across all nodes.
A good practice is to specify a podAntiAffinity on the hostname of the node:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: my-pod
          image: my-image:my-version
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - my-application
                topologyKey: kubernetes.io/hostname
```
Here we have a deployment “my-application” with two replicas, and we specify a podAntiAffinity rule with a soft requirement (preferredDuringSchedulingIgnoredDuringExecution, see here for more details), so that the pods are preferably not scheduled on the same hostname (topologyKey: kubernetes.io/hostname).
In Kubernetes, pods have a limited lifespan and can be terminated at any time. This phenomenon is called a “disruption”.
Disruptions can be either voluntary or involuntary. Involuntary disruptions are, as the name suggests, events that nobody could expect (a hardware failure, for example). Voluntary disruptions are initiated by someone or something: the upgrade of a node, a new deployment, etc.
Defining a “Pod Disruption Budget” helps Kubernetes manage your pods when a voluntary disruption happens. Kubernetes will try to ensure that enough pods matching a given selector remain available at the same time. Specifying a PDB improves the availability of your services.
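For instance, here is a minimal sketch of a PDB for the deployment above (the name and threshold are placeholders; recent clusters use the policy/v1 API group, older ones policy/v1beta1):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-application-pdb
spec:
  minAvailable: 1          # keep at least one replica running during voluntary disruptions
  selector:
    matchLabels:
      app: my-application
```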
Four years ago, we adopted these practices as sane defaults, and we still apply them to all our apps on Kubernetes. We recommend you adapt them based on the specifics of your applications and workloads.
You can find more details on these good practices in the dedicated section of the training.