Articles

In the previous blog post about Kubernetes autoscaling, we looked at the different concepts and terminology related to autoscaling, such as the Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler. In this post, we'll walk through how Kubernetes autoscaling can be implemented for custom metrics generated by the application.

Why Custom Metrics?

An application's CPU or RAM consumption is not always the right metric to scale on. Suppose you have a message queue consumer that can handle 500 messages per second without crashing. Once a single instance of this consumer is handling close to 500 messages per second, you will want to scale the application to two instances so the load is distributed across both. Measuring CPU or RAM is a fundamentally flawed approach for scaling such an application; you have to look at a metric that relates more closely to the application's nature. The number of messages an instance is processing at a given point in time is a better indicator of the actual load on that application. Similarly, there are applications where other metrics make more sense, and these can be defined as custom metrics in Kubernetes.
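To make this concrete, here is a minimal sketch (in Go, using client-go) of a HorizontalPodAutoscaler that scales a hypothetical queue-consumer Deployment on a per-pod messages_per_second metric instead of CPU. The Deployment name, metric name, and the 400 msg/s target are illustrative, and the metric is assumed to be exposed through a custom metrics adapter such as the Prometheus adapter.

```go
package main

import (
	"context"
	"log"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	minReplicas := int32(1)

	// Scale the hypothetical "queue-consumer" Deployment on a per-pod custom
	// metric instead of CPU or RAM utilization.
	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "queue-consumer-hpa", Namespace: "default"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "queue-consumer",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.PodsMetricSourceType,
				Pods: &autoscalingv2.PodsMetricSource{
					Metric: autoscalingv2.MetricIdentifier{Name: "messages_per_second"},
					// Add replicas when the average per-pod rate nears the 500 msg/s limit.
					Target: autoscalingv2.MetricTarget{
						Type:         autoscalingv2.AverageValueMetricType,
						AverageValue: resource.NewQuantity(400, resource.DecimalSI),
					},
				},
			}},
		},
	}

	if _, err := clientset.AutoscalingV2().HorizontalPodAutoscalers("default").
		Create(context.TODO(), hpa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```

The same object can, of course, be written as a YAML manifest and applied with kubectl; the essential point is that the scaling signal is a metric the application itself reports rather than raw CPU or memory.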

Source of the article on DZONE

As a consistent user of and developer on the OpenShift platform over the years, I've tried to help users by sharing my application development content as we've journeyed from cartridges all the way to container-based development.

With container-based development, we've also transitioned from using templates to define how to deploy our tooling and applications to using operators. There are many examples of how to work with the templated versions of our decision management and process automation applications on Red Hat Demo Central and JBoss Demo Central.

Source of the article on DZONE

EclipseCon Community Day takes place on Monday, October 19, from 14:00 to 18:00 CET (the day before the main EclipseCon conference begins). Community Day at EclipseCon has always been a great event for Eclipse working groups and project teams. This year, both EclipseCon and Community Day are virtual and free. Space for Community Day is limited, so please register and save your spot soon.

We have a packed agenda centered on the Jakarta EE, MicroProfile, and Cloud Native Java communities. If you are looking for a focused set of sessions on these topics, this agenda is the one place this year to find them. The sessions are intended not only for learning but also for the community to engage actively with some key leaders. Note that after you register for EclipseCon, you will need to reserve your spot for Community Day through the Swapcard platform (let me know if you run into any issues).

Source of the article on DZONE

This is the final part of our Kubernetes logging series. In case you missed part 1, you can find it here. In this tutorial, we will learn how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster in order to ship logs to the Elasticsearch backend. We are using Filebeat instead of Fluentd or Fluent Bit because it is an extremely lightweight utility with first-class support for Kubernetes, which makes it a good fit for production-level setups.
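As a rough preview of what we will build, here is a hedged Go sketch (using client-go) that creates a minimal Filebeat DaemonSet. The image tag, Elasticsearch host, namespace, and mount path are assumptions, and the filebeat.yml ConfigMap, ServiceAccount, and RBAC objects a real deployment needs are omitted for brevity.

```go
package main

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	labels := map[string]string{"app": "filebeat"}
	hostPathType := corev1.HostPathDirectory

	// DaemonSet: one Filebeat pod per node, reading that node's container logs
	// from the host filesystem and shipping them to Elasticsearch.
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "filebeat", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "filebeat",
						Image: "docker.elastic.co/beats/filebeat:7.9.3",
						Env: []corev1.EnvVar{
							{Name: "ELASTICSEARCH_HOST", Value: "elasticsearch.logging.svc"},
							{Name: "ELASTICSEARCH_PORT", Value: "9200"},
						},
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "varlog",
							MountPath: "/var/log/containers",
							ReadOnly:  true,
						}},
					}},
					Volumes: []corev1.Volume{{
						Name: "varlog",
						VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{
								Path: "/var/log/containers",
								Type: &hostPathType,
							},
						},
					}},
				},
			},
		},
	}

	if _, err := clientset.AppsV1().DaemonSets("kube-system").
		Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```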

Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster. It will be:

Source of the article on DZONE


What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and effectively. In a configuration file, you describe to Terraform which components are needed. Terraform then generates an execution plan describing how it will reach the desired state, and then executes that plan to build it. Terraform tracks all of this through a state file. There are two flavors of Terraform:

  • An open-source version
  • An enterprise version

Terraform supports a wide variety of cloud and infrastructure platforms, including AWS, OpenStack, Azure, GCP, Kubernetes, and many more.
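As an illustration of that init, plan, and apply workflow, here is a small sketch that drives the open-source Terraform binary from Go using HashiCorp's terraform-exec library. The working directory and binary path are assumptions; the directory is expected to contain ordinary *.tf configuration files.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/hashicorp/terraform-exec/tfexec"
)

func main() {
	ctx := context.Background()

	// Working directory with the *.tf configuration, plus the path to a locally
	// installed terraform binary (both paths are illustrative).
	tf, err := tfexec.NewTerraform("./infra", "/usr/local/bin/terraform")
	if err != nil {
		log.Fatal(err)
	}

	// terraform init: download providers and set up the backend/state.
	if err := tf.Init(ctx); err != nil {
		log.Fatal(err)
	}

	// terraform plan: compute the execution plan against the state file.
	changes, err := tf.Plan(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("changes pending:", changes)

	// terraform apply: build the infrastructure described by the configuration.
	if changes {
		if err := tf.Apply(ctx); err != nil {
			log.Fatal(err)
		}
	}
}
```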

Source of the article on DZONE

Kubernetes enforces security access and permissions in three steps: Authentication, Authorization, and Admission. In this article, we are going to consider Authentication first.

[Figure: The Authentication, Authorization and Admission Control Process]

The first thing in Authentication is Identity.
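To illustrate, here is a minimal Go sketch of client-certificate authentication, one way an identity is presented to the API server: Kubernetes takes the certificate's Common Name as the username and its Organization entries as the user's groups. The host, file paths, and user below are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Client-certificate authentication: the API server reads the identity
	// directly from the certificate presented over TLS.
	config := &rest.Config{
		Host: "https://api.example.com:6443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/etc/kubernetes/pki/jane.crt", // CN=jane, O=dev-team
			KeyFile:  "/etc/kubernetes/pki/jane.key",
			CAFile:   "/etc/kubernetes/pki/ca.crt",
		},
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Every request now carries Jane's identity; whether it is allowed is
	// decided later, by the authorization and admission stages.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pods visible to this identity:", len(pods.Items))
}
```

Service account tokens, bearer tokens, and OpenID Connect are other common authenticators; whichever one is in play determines how that identity is established.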

Source of the article on DZONE


Intro

Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their applications and to understand key metrics about the infrastructure they run on.

In this post, I am introducing you to a cloud-native benchmarking tool known as Kubestone. This tool is meant to help your development teams get performance metrics from your Kubernetes clusters.

How Does Kubestone Work?

At its core, Kubestone is implemented as a Kubernetes Operator written in Go with the help of Kubebuilder. You can find more info on the Operator Framework via this blog post.
Kubestone leverages open-source benchmarks to measure core Kubernetes and application performance. Because benchmarks are executed in Kubernetes, they must be containerized to run on the cluster. A certified set of benchmark containers is provided via xridge's DockerHub space. Here is a list of currently supported benchmarks:
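To make the custom-resource pattern concrete, here is a hedged Go sketch that submits a fio disk benchmark to a cluster where the Kubestone operator is installed. The API group, spec fields, and namespace reflect my reading of the Kubestone CRDs and should be checked against the project's documentation.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// A fio disk benchmark expressed as a Kubestone custom resource.
	// Field names are assumptions; verify them against the Kubestone docs.
	fio := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "perf.kubestone.xridge.io/v1alpha1",
		"kind":       "Fio",
		"metadata":   map[string]interface{}{"name": "fio-sample"},
		"spec": map[string]interface{}{
			"cmdLineArgs": "--name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M",
		},
	}}

	gvr := schema.GroupVersionResource{
		Group:    "perf.kubestone.xridge.io",
		Version:  "v1alpha1",
		Resource: "fios",
	}

	// The Kubestone operator watches for this resource and runs the
	// containerized benchmark in the "kubestone" namespace (assumed to exist).
	if _, err := client.Resource(gvr).Namespace("kubestone").
		Create(context.TODO(), fio, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```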

Source of the article on DZONE

Unlike analysts at the large firms, who have to specialize in narrow market segments to avoid stepping on each other’s toes, we at Intellyx have the luxury of covering cross-cutting topics that align with business needs.

One of the tools of our trade: looking closely at how two different markets interrelate and thus provide business value. In today's Cortex, I'll consider the relationship between low-code and cloud-native computing.

Source of the article on DZONE

Modern businesses are highly consumer-driven. Delivering value to our customers should therefore be our first priority, and making their tasks more convenient and efficient should be our primary goal. To do that, we need ways to figure out what, exactly, makes our customers more efficient and brings convenience to their tasks.

This requires a lot of trial and error: building and experimenting with systems and features to see whether these capabilities actually bring significant value to our customers. This is the primary motivation driving enterprise architecture to become much more disaggregated and composable. Heard of "microservices," anyone?

Source of the article on DZONE

Eclipse + Cloud = Love

There have been two major announcements in the Java community over the last few days! Today, the Eclipse Foundation announced both the Jakarta EE 8 release and the Eclipse Che 7 release. And it's all about the cloud!

You may also like: Jakarta EE and Beyond

Jakarta EE 8

Two years after Oracle handed Enterprise Java over to the Eclipse Foundation, the platform became Jakarta EE, and the foundation has now released version 8. As its name suggests, this version is compatible with Java EE 8, but it is now completely open source and therefore royalty-free.

Source of the article on DZONE