Articles

The Prometheus Operator has become a common choice for running Prometheus in a Kubernetes cluster. It can manage Prometheus and Alertmanager for us with the help of Kubernetes CRDs. The kube-prometheus-stack Helm chart (formerly known as prometheus-operator) comes with Grafana, node_exporter, and more out of the box.

In a previous blog post about Prometheus, we looked at setting up Prometheus and Grafana using manifest files, and we explored a few of the metrics exposed by YugabyteDB. In this post, we will set up Prometheus and Grafana using the kube-prometheus-stack chart and configure Prometheus to scrape the YugabyteDB pods. At the end, we will take a look at the YugabyteDB Grafana dashboard that can be used to visualize all the collected metrics.
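
For orientation, getting the chart running is typically just a couple of Helm commands. A minimal sketch, where the release name prometheus and the namespace monitoring are arbitrary choices:

    # Add the community repository that hosts kube-prometheus-stack
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update

    # Install the chart; the release name and namespace are illustrative
    helm install prometheus prometheus-community/kube-prometheus-stack \
      --namespace monitoring --create-namespace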

Source of the article on DZONE

We all love web badges. You might have spotted many of them in the READMEs of repositories, including the repository of my blog, The Cloud Blog. In general, web badges serve two purposes.

  1. They are visually appealing.
  2. They display key information instantly.

If you scroll to my website’s footer section, you will find GitHub and Netlify badges that display the status of the latest build and deployment. I use them to quickly check whether everything is fine with the world without navigating to their dashboards. In essence, a badge is an SVG image with dynamic content embedded in it.
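
For example, a GitHub Actions status badge is embedded by pointing an image at the badge's SVG URL. A sketch in README Markdown, where the owner, repository, and workflow file name are placeholders:

    <!-- Owner, repository, and workflow file are hypothetical placeholders -->
    ![Build status](https://github.com/OWNER/REPO/actions/workflows/build.yml/badge.svg)

The service regenerates the SVG on every workflow run, which is what makes the badge's content dynamic.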

Source of the article on DZONE

In this series:

I am happy to see that many people are enthusiastic about this series and wish to make their IaC applications better with Ansible. What I intend to do is quite simple. I will write an Ansible playbook that uses the template module (see Templating with Jinja2) and a little magic of Jinja2 templates to load appropriate variables and configurations for each Terraform environment. Finally, I will use the Terraform CLI to deploy and delete the infrastructure.
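
To make that concrete before we dive in, here is a minimal sketch of such a playbook; the directory layout, the template path, and the env variable are assumptions for illustration:

    # Hypothetical playbook: render per-environment variables, then run the Terraform CLI
    - hosts: localhost
      connection: local
      vars:
        env: dev                                  # assumed environment name (dev, staging, prod, ...)
      tasks:
        - name: Render environment-specific tfvars from a Jinja2 template
          ansible.builtin.template:
            src: templates/terraform.tfvars.j2
            dest: "environments/{{ env }}/terraform.tfvars"

        - name: Deploy the infrastructure with the Terraform CLI
          ansible.builtin.command:
            cmd: terraform apply -auto-approve -var-file=terraform.tfvars
            chdir: "environments/{{ env }}"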

Source of the article on DZONE

A data scientist extracts, manipulates, and generates insights from humongous amounts of data. To leverage the power of data science, data scientists apply statistics, programming languages, data visualization, databases, and more.

So, when we look at the required skills for a data scientist in any job description, we understand that data science is mainly associated with Python, SQL, and R. The common skills and knowledge expected from a data scientist in the data science industry include probability, statistics, calculus, algebra, programming, data visualization, machine learning, deep learning, and cloud computing. Employers also expect non-technical skills like business acumen, communication, and intellectual curiosity.

Source of the article on DZONE

Distributed SQL databases combine the resilience and scalability of a NoSQL database with the full functionality of a relational database. In this Refcard, we explore the essentials of building a distributed SQL architecture, including key concepts, techniques, and operational metrics.
Source of the article on DZONE

This is the final part of our Kubernetes logging series. In case you missed part 1, you can find it here. In this tutorial, we will learn how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster in order to ship logs to the Elasticsearch backend. We are using Filebeat instead of FluentD or Fluent Bit because it is an extremely lightweight utility with first-class support for Kubernetes, which makes it a good fit for production-level setups.
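
To give a flavor of what that looks like, here is a heavily trimmed DaemonSet sketch; the namespace, image version, and Elasticsearch service name are assumptions, and a real deployment would also mount a Filebeat ConfigMap and the container log directories:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: filebeat
      namespace: kube-system                              # assumed namespace
    spec:
      selector:
        matchLabels:
          app: filebeat
      template:
        metadata:
          labels:
            app: filebeat
        spec:
          containers:
            - name: filebeat
              image: docker.elastic.co/beats/filebeat:7.9.3   # assumed version
              env:
                - name: ELASTICSEARCH_HOST
                  value: elasticsearch                    # assumed Elasticsearch service name
              volumeMounts:
                - name: varlog                            # read host logs on each node
                  mountPath: /var/log
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log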

Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster. It will be:

Source of the article on DZONE


What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. In a configuration file, you describe to Terraform which components are needed. Terraform then generates an execution plan describing the desired state, and then executes that plan and builds it. Terraform manages all of this through a state file. There are two flavors of Terraform:

  • An open-source version
  • An enterprise version

Terraform supports a wide variety of cloud and infrastructure platforms, including AWS, OpenStack, Azure, GCP, Kubernetes, and many more.
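
As a small, hypothetical illustration, a configuration file describing a single AWS instance might look like the following (the region and AMI ID are placeholders); terraform plan then produces the execution plan, and terraform apply builds it:

    # Illustrative configuration only; region and AMI ID are placeholders
    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "example" {
      ami           = "ami-0abcdef1234567890"   # hypothetical AMI ID
      instance_type = "t2.micro"
    }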

Source of the article on DZONE

As the great Mark Twain once wrote in response to reading his own obituary in May of 1897, "reports of my death have been greatly exaggerated." Fast forward nearly a hundred years to 1995, when a Finnish computer scientist named Tatu Ylönen created a secure transport protocol known simply as Secure Shell (SSH). What do these things have to do with each other? Nothing, aside from perception.

In its most practical terms, SSH enables users to establish a secure, remote connection with a Linux-based machine via a Command Line Interface (CLI). SSH is the de facto standard for secure server access and has stood the test of time, despite a significant shift in how infrastructure is operated in the cloud.
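
In practice, that interface boils down to a single command. A minimal sketch, where the key path, user name, and host are placeholders:

    # Open a secure shell on a remote Linux machine (all values are placeholders)
    ssh -i ~/.ssh/id_ed25519 ec2-user@203.0.113.10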

Source of the article on DZONE

In AWS, we have several ways to deploy Django (and non-Django) applications with Docker. We can use ECS or EKS clusters, but if we don't already have an ECS or Kubernetes cluster up and running, that can be complex. Today, I want to show how to deploy a Django application in production mode on an EC2 host. Let's start.

The idea is to create one EC2 instance (a simple, AWS-supported Amazon Linux AMI image). This host doesn't initially have Docker installed, so we need to install it. When we launch the instance, we can specify user data to configure it or to run a configuration script during launch.
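
A sketch of such a user data script, assuming an Amazon Linux 2 AMI (the package and service commands below are specific to that distribution):

    #!/bin/bash
    # Runs once at first boot: install and start Docker (Amazon Linux 2 assumed)
    yum update -y
    amazon-linux-extras install -y docker
    service docker start
    # Allow the default ec2-user to run docker without sudo
    usermod -a -G docker ec2-user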

Source of the article on DZONE

There are three steps that Kubernetes uses to enforce security access and permissions: Authentication, Authorization, and Admission. In this article, we are going to consider Authentication first.

Figure: The Authentication, Authorization, and Admission Control Process

The first thing in Authentication is Identity.
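
Kubernetes has no built-in User object; an identity is presented to the API server by the client. One common mechanism is an X.509 client certificate, where the certificate's Common Name is taken as the user name and its Organization fields as groups. A hedged sketch, with the user and group names as placeholders:

    # Create a key and a certificate signing request for a hypothetical user "jane"
    # (Kubernetes reads CN as the user name and O as a group)
    openssl genrsa -out jane.key 2048
    openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=developers"

    # Once the CSR is signed by the cluster CA, register the credentials in kubeconfig
    kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key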

Source of the article on DZONE