Articles

AWS is known as a high-performance, scalable computing infrastructure, which more and more organizations are adopting to modernize their IT. However, no system is immune to failure, so you must have a disaster recovery plan in place to ensure business continuity. In this article, we discuss the top three disaster recovery (DR) scenarios that show the use of AWS:

  1. Backup and restore
  2. Pilot light for simple recovery into AWS
  3. Multi-site solution

Amazon Web Services (AWS) enables you to operate each of these three DR strategies in a cost-effective manner. However, it’s essential to note that these are only examples of potential approaches; variations and combinations of them are also possible.
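To make the first option concrete, here is a minimal sketch of the backup-and-restore pattern using boto3; the volume ID, bucket name, and file path are hypothetical placeholders rather than anything from the article.

```python
# A minimal sketch of the "backup and restore" pattern with boto3.
# The volume ID, bucket name, and file path below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

# 1. Snapshot an EBS volume so it can be restored later.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume
    Description="Nightly DR backup",
)
print("Snapshot started:", snapshot["SnapshotId"])

# 2. Ship an application-level backup (e.g. a database dump) to S3.
s3.upload_file(
    Filename="/backups/app-db.dump",    # hypothetical local dump file
    Bucket="my-dr-backup-bucket",       # hypothetical bucket
    Key="nightly/app-db.dump",
)
```

In a real setup you would schedule this and rehearse the restore path (creating volumes from snapshots and re-hydrating the database) just as regularly as the backup itself.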

Source of the article on DZONE

At a previous job (I won’t tell you which), I was the main architect responsible for a platform of 250,000 lines of C# code and a team of 6 developers. Our system was built in its entirety around Azure Functions and Cosmos DB. This was a huge company, with some 30,000 employees around the world, and our CEO got a deal with AWS. At that point we were paying 8,000 euros per month for our development environment – seriously!

Our CEO was smart, though, and struck a deal with AWS, probably because the company as a whole (I can only imagine) paid millions of euros per month for its cloud services in total, and was able to significantly reduce this number by porting “everything” to AWS. At this point we started pondering how to “port” our Azure Functions and Cosmos DB to something we could run in AWS. And yes, we even considered running the Azure Functions debugger executable locally on servers inside of AWS – needless to say, this was simply suicide. The whole idea was canned, the project had to be scrapped, and a “brand new AWS lock-in project” was initiated – the irony … :/

Source of the article on DZONE

I’ve been furious for the larger part of a decade – partially due to how our governments and the industrialised war machine of the “United Slaves of America” have treated Assange, Snowden, and Manning, arguably spearheaded by Silicon Valley – but also because the internet I grew up with no longer exists. However, I am ready, bring on the storm!

In the 90s, when I started hanging out in USENET forums, the internet was a machine for good. Its atmosphere was a feeling that everything was possible, and that together we could create better democratic tools, resulting in a better world, by simply coming together and each doing our part. If everybody pulled a little bit more than their own weight, we could all enter “paradise on Earth”.

Source of the article on DZONE

For many enterprise-grade applications, providing a point where you can access in-depth analysis about your data has become a crucial feature. There are many approaches to this — you can build your own web application and backend that has views allowing customers to filter and analyze data. Alternatively, you can use the embedded analytics capabilities of Looker, Tableau, or Sisense — all of which are large business intelligence tools, with a host of features and connectors into all sorts of data sources.
But if you’re already on AWS, then it really is worth considering QuickSight to present analytics in your web application.

This series will guide you through the intricacies of creating a multi-tenant solution with QuickSight, dealing with data security across customers and within organizations. We’ll need to go beyond the AWS console and dive into the CLI/API commands that you’ll need to manage all of this.
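To give a flavour of those API calls, here is a boto3 sketch that registers a reader in a per-tenant QuickSight namespace and then requests a dashboard embed URL; the account ID, namespace, user, and dashboard ID are hypothetical placeholders, not the article’s actual commands.

```python
# Sketch only: registering a tenant user and fetching an embed URL with boto3.
# Account ID, namespace, user, and dashboard ID are hypothetical placeholders.
import boto3

qs = boto3.client("quicksight", region_name="us-east-1")

ACCOUNT_ID = "123456789012"     # hypothetical AWS account
NAMESPACE = "tenant-acme"       # hypothetical per-tenant namespace

# Register a QuickSight reader inside the tenant's namespace.
user = qs.register_user(
    AwsAccountId=ACCOUNT_ID,
    Namespace=NAMESPACE,
    IdentityType="QUICKSIGHT",
    Email="analyst@acme.example",
    UserRole="READER",
    UserName="acme-analyst",
)

# Request an embeddable URL for a dashboard shared with that user.
resp = qs.get_dashboard_embed_url(
    AwsAccountId=ACCOUNT_ID,
    DashboardId="11111111-2222-3333-4444-555555555555",  # hypothetical dashboard
    IdentityType="QUICKSIGHT",
    UserArn=user["User"]["Arn"],
    SessionLifetimeInMinutes=60,
)
print(resp["EmbedUrl"])
```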

Source of the article on DZONE

In This Series:

  1. Distributed Tracing With Jaeger
  2. Simplifying the Setup With Tye (this article)

Tye is an experimental dotnet tool from Microsoft that aims to make developing, testing, and deploying microservices easier. Tye’s opinionated nature greatly simplifies the lifecycle of development and deployment of .NET Core microservices.

To understand the benefits of Tye, let’s enumerate the steps involved in the development and deployment of the DCalculator application to Kubernetes:

Source of the article on DZONE

The serverless journey started with functions – small snippets of code running on demand for a short period, as shown in Figure 1. AWS Lambda in the “1.0” phase made this paradigm very popular, but it had its limitations around execution time and protocols, as well as a poor local development experience.

Since then, developers have realized that the same serverless traits and benefits can be applied to microservices and Linux containers. This leads us into what we’re calling the “1.5” phase in Figure 1. Some serverless container platforms here completely abstract away Kubernetes, delivering the serverless experience through an abstraction layer that sits on top of it, such as Knative.
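For readers who haven’t written one, a function in the “1.0” sense is just a small handler that the platform invokes on demand; the Python sketch below is a generic AWS Lambda handler, not tied to any particular application.

```python
# A minimal AWS Lambda handler in Python, illustrating the "1.0" functions phase:
# a small snippet of code that runs on demand for a short period.
import json


def lambda_handler(event, context):
    # 'event' carries the trigger payload (API Gateway request, S3 event, etc.).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```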

Source of the article on DZONE

A VPN is a way to keep your identity secret and protect your traffic on the internet. When you connect to a VPN server, your internet traffic passes through an encrypted tunnel so that no one – not even your internet service provider or your government – can see your data.

Let us list the reasons why you need to use a VPN.

Source of the article on DZONE


When building integration components, it’s almost a given that we will have to process data in different formats like JSON, XML, and YAML. It’s imperative that any integration product has very good support for handling these data formats. This kind of robust support for data in different formats makes the product flexible enough to be adapted to different use cases.

In this article, we will look into the support provided by Kumologica for handling data in these different formats.
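As a generic illustration of what handling the same record in several formats involves (plain Python below, not Kumologica’s own nodes or API; the yaml import assumes the third-party PyYAML package is installed), consider:

```python
# Generic illustration of handling the same data in JSON, XML, and YAML.
# This is plain Python, not Kumologica's API; PyYAML must be installed for yaml.
import json
import xml.etree.ElementTree as ET

import yaml  # third-party: PyYAML

order_json = '{"id": 42, "status": "shipped"}'
order_xml = "<order><id>42</id><status>shipped</status></order>"
order_yaml = "id: 42\nstatus: shipped\n"

# JSON and YAML parse straight into dictionaries.
print(json.loads(order_json))        # {'id': 42, 'status': 'shipped'}
print(yaml.safe_load(order_yaml))    # {'id': 42, 'status': 'shipped'}

# XML needs a little traversal to get to the same shape.
root = ET.fromstring(order_xml)
print({child.tag: child.text for child in root})  # {'id': '42', 'status': 'shipped'}
```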

Source of the article on DZONE

Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is obvious for many, but legacy infrastructure must be maintained, integrated, and (maybe) replaced over time. Event streaming with the Apache Kafka ecosystem is a perfect technology for building hybrid replication in real time at scale.

App Modernization and Streaming Replication With Apache Kafka at Bayer

Most enterprises require a reliable and scalable integration between legacy systems such as IBM Mainframe, Oracle, SAP ERP, and modern cloud-native applications like Snowflake, MongoDB Atlas, or AWS Lambda.
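As a small sketch of what such a bridge can look like in code, the snippet below publishes change events from a legacy source into Kafka with the confluent-kafka Python client so that cloud-native consumers can react; the broker address, topic, and record fields are hypothetical.

```python
# Sketch: publishing change events from a legacy system into Kafka so that
# cloud-native consumers (Snowflake, MongoDB Atlas, AWS Lambda, ...) can react.
# Uses the confluent-kafka Python client; broker and topic names are hypothetical.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "kafka-broker:9092"})  # hypothetical broker


def publish_change_event(record: dict) -> None:
    producer.produce(
        topic="legacy.orders.changes",      # hypothetical topic
        key=str(record["order_id"]),
        value=json.dumps(record),
    )


publish_change_event({"order_id": 4711, "status": "SHIPPED"})
producer.flush()  # block until the event is delivered
```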

Source of the article on DZONE


Introduction

While many of us are used to executing Spark applications with the ‘spark-submit’ command, with the popularity of Databricks this seemingly simple activity is getting relegated to the background. Databricks has made it very easy to provision Spark-enabled VMs on the two most popular cloud platforms, namely AWS and Azure. A couple of weeks ago, Databricks announced their availability on GCP as well. The beauty of Databricks is that they have made it very easy to get started on their platform. While Spark application development will continue to have its challenges – depending on the problem being addressed – the Databricks platform has taken away the pain of having to set up and manage your own Spark cluster.
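For anyone who has only ever used notebooks, a self-contained PySpark application launched via ‘spark-submit’ looks roughly like the sketch below; the file name and input path are hypothetical, and it is exactly this packaging and cluster wrangling that Databricks hides from you.

```python
# wordcount.py - a minimal self-contained PySpark application.
# Traditionally launched with something like: spark-submit wordcount.py
# (the input path below is a hypothetical placeholder).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wordcount").getOrCreate()

lines = spark.read.text("s3://my-bucket/input/*.txt")   # hypothetical input
words = lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
counts = words.groupBy("word").count().orderBy(F.col("count").desc())

counts.show(20)
spark.stop()
```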

Using Databricks

Once we are registered, the Databricks platform allows us to define a cluster of one or more VMs, with configurable RAM and executor specifications. We can also define a cluster that launches a minimum number of VMs at startup and then scales to a maximum number of VMs as required. After defining the cluster, we have to define jobs and notebooks. Notebooks contain the actual code executed on the cluster. We need to assign notebooks to jobs, because the Databricks cluster executes jobs (and not notebooks). Databricks also allows us to set up the cluster so that it can download additional JARs and/or Python packages during cluster startup. We can also upload and install our own packages (I used a Python wheel).
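As a sketch of how this wiring can be automated, the snippet below creates such a job through the Databricks REST Jobs API using plain Python requests; the workspace URL, token, runtime version, node type, notebook path, and wheel location are hypothetical, and payload details may differ between Jobs API versions.

```python
# Sketch: creating a Databricks job via the REST Jobs API using 'requests'.
# Workspace URL, token, runtime version, node type, notebook path, and wheel
# location are hypothetical; payload details may vary between API versions.
import requests

DATABRICKS_HOST = "https://example.cloud.databricks.com"   # hypothetical workspace
TOKEN = "dapiXXXXXXXXXXXXXXXX"                             # hypothetical token

job_spec = {
    "name": "nightly-etl",
    "new_cluster": {
        "spark_version": "11.3.x-scala2.12",   # hypothetical runtime
        "node_type_id": "i3.xlarge",           # hypothetical node type
        "autoscale": {"min_workers": 1, "max_workers": 4},
    },
    "libraries": [{"whl": "dbfs:/libs/my_package-1.0-py3-none-any.whl"}],
    "notebook_task": {"notebook_path": "/Repos/etl/nightly_notebook"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```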

Source of the article on DZONE