Articles


Agile 

AI

Big Data

Cloud

Database

DevOps

Integration

  • Mulesoft 4: Continuous Delivery/Deployment With Maven by Ashok S — This article is a great example of what we want every tutorial on DZone to look like. Its main aim is to provide a standard mechanism for releasing project artifacts and deploying them to Anypoint Platform, whether from a local machine or from a continuous delivery pipeline.
  • Integration With Social Media Platforms Series (Part 1) by Sravan Lingam — This article helps you build a RESTful API through MuleSoft that integrates with LinkedIn and shares a post on behalf of one’s personal account. I like this article because, in the age of social media, it’s so important for businesses to be connected and integrated!

IoT

Java

Microservices

Open Source

Performance

  • What Is Big O Notation? by Huyen Pham — Silly name aside, this article is an in-depth analysis of a concept that is rarely discussed. Take a look at this short guide to get to know Big O Notation and its uses.
  • Is Python the Future of Programming? by Shormisthsa Chatterjee — Where is programming going? This article attempts to answer this question in a well-rounded way. The author writes, "Python will be the language of the future. Testers will have to upgrade their skills and learn these languages to tame the AI and ML tools".

Security

Web Dev

  • A Better Way to Learn Python by Manas Dash: There are so many resources available for learning Python — so many that it’s difficult to find a good and flexible place to start. Check out Manas’ curated list of courses, articles, projects, etc. to get your Python journey started today.
  • Discovering Rust by Joaquin Caro: I’m a sucker for good Rust content, as there are still so many gaps in what’s available. Joaquin does a great job of giving readers his perspective on the language’s features in a way that traditional docs just don’t.

Source of the article on DZONE

Change Data Capture Architecture Using Debezium, Postgres, and Kafka was a tutorial on how to use Debezium for change data capture from Azure PostgreSQL, sending the change events to Azure Event Hubs for Kafka; it used the wal2json output plugin.

What About the pgoutput Plugin?

This blog will provide a quick walkthrough of how to use the pgoutput plugin. I will not repeat a lot of the details, and I will use containerized versions (via Docker Compose) of Kafka Connect, Kafka, and ZooKeeper to keep things simple. So, the only thing you need is Azure PostgreSQL, which you can set up using a variety of options, including the Azure Portal, Azure CLI, Azure PowerShell, or an ARM template.
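For concreteness, here is a minimal sketch of registering the connector against a local Kafka Connect worker on its default port 8083. The connector name, Azure PostgreSQL hostname, and credentials below are placeholders; the config keys follow Debezium's PostgreSQL connector, with plugin.name switched to pgoutput (the text block requires Java 15+):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterPgoutputConnector {
    public static void main(String[] args) throws Exception {
        // Debezium PostgreSQL connector config. The only functional change from
        // a wal2json setup is "plugin.name": "pgoutput". Hostname, credentials,
        // and server name are placeholders for your Azure PostgreSQL instance.
        String config = """
            {
              "name": "pg-cdc-connector",
              "config": {
                "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
                "plugin.name": "pgoutput",
                "database.hostname": "<your-server>.postgres.database.azure.com",
                "database.port": "5432",
                "database.user": "<user>@<your-server>",
                "database.password": "<password>",
                "database.dbname": "postgres",
                "database.server.name": "mypgserver"
              }
            }""";

        // Register the connector via the Kafka Connect REST API.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(config))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```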

Source of the article on DZONE

These are strange times. Cities are in lockdown, and few are venturing outside. Therefore, the increased use of on-demand logistics services, like online food delivery, doesn’t come as a surprise.

Most of these applications provide near real-time tracking of the ETA once you place an order. Building a scalable, distributed, and real-time ETA prediction system is a tough task, but what if we could simplify its design? We’ll break our system into pieces such that each component is responsible for one primary job, as sketched below.
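As an illustration of that one-job-per-component idea, here is a hypothetical sketch of the seams such a system might have; the interface and method names are invented for this example, not taken from any particular implementation:

```java
import java.time.Duration;

// Each component owns exactly one job, so the pieces can be
// scaled, tested, and replaced independently.

interface LocationIngestor {
    // Accept raw courier position updates as they stream in.
    void ingest(String courierId, double lat, double lon, long timestampMillis);
}

interface EtaPredictor {
    // Estimate the time remaining for an order given the courier's latest position.
    Duration predict(String orderId, double courierLat, double courierLon);
}

interface EtaPublisher {
    // Push the fresh estimate out to the customer-facing app.
    void publish(String orderId, Duration eta);
}
```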

Source of the article on DZONE

Event sourcing, aka "the great myth". I’ve been thinking about writing a series of articles about this for a while, and now it’s time to put my hands back on the keyboard. 

I thought that with this long period of confinement I would at least have more time to write some nice articles, but it turns out reality has been slightly different so far.

Source of the article on DZONE


Learn how to develop a custom Spring Cloud Stream binder from scratch.

Spring Cloud Stream is a framework built on top of Spring Boot and Spring Integration that is designed to build event-driven microservices communicating via one or more shared messaging systems.

The core Spring Cloud Stream component is called the “Binder,” a crucial abstraction that’s already been implemented for the most common messaging systems (e.g., Apache Kafka, Kafka Streams, Google PubSub, RabbitMQ, Azure EventHub, and Azure ServiceBus).
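To make the contract concrete, here is a minimal, non-functional skeleton of what a custom binder implements. The class name and the exception-throwing bodies are illustrative only; the interface and method signatures come from Spring Cloud Stream's Binder abstraction:

```java
import org.springframework.cloud.stream.binder.Binder;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.cloud.stream.binder.ConsumerProperties;
import org.springframework.cloud.stream.binder.ProducerProperties;
import org.springframework.messaging.MessageChannel;

// Illustrative skeleton of a custom binder. A real implementation would wire
// the channels to the target messaging system and return Bindings that know
// how to unbind themselves.
public class MyCustomBinder
        implements Binder<MessageChannel, ConsumerProperties, ProducerProperties> {

    @Override
    public Binding<MessageChannel> bindConsumer(String name, String group,
            MessageChannel inboundTarget, ConsumerProperties properties) {
        // Deliver messages arriving on destination `name` into inboundTarget.
        throw new UnsupportedOperationException("sketch only");
    }

    @Override
    public Binding<MessageChannel> bindProducer(String name,
            MessageChannel outboundTarget, ProducerProperties properties) {
        // Forward messages sent to outboundTarget on to destination `name`.
        throw new UnsupportedOperationException("sketch only");
    }
}
```

A real binder is then declared in a META-INF/spring.binders file, which maps a binder name to its configuration class so the framework can discover it.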

Source of the article on DZONE

Learn more about the benefits of Digital Twin tech in IIoT and its relation to Apache Kafka!

This blog post discusses the benefits of a Digital Twin in Industrial IoT (IIoT) and its relation to Apache Kafka. Kafka is often used as a central event streaming platform to build a scalable and reliable digital twin for real-time streaming sensor data.
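As a rough sketch of that pattern, the consumer below materializes a per-device view of the latest sensor reading, which is the essence of a simple digital twin. The topic name, group id, and String-encoded JSON values are assumptions made for the example:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DeviceTwin {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "digital-twin");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // The "twin": latest known state per device, keyed by device id.
        Map<String, String> latestState = new HashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sensor-readings"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    latestState.put(record.key(), record.value());
                }
            }
        }
    }
}
```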

In November 2019, I attended the SPS Conference in Nuremberg. This is one of the most important events for Industrial IoT (IIoT). Vendors and attendees from all over the world fly in to do business and discuss new products. Hotel prices in the region jump from the usual 80-100€ to over 300€ per night. Germany is still known for its excellent engineering and manufacturing industry. German companies drive a lot of innovation and standardization around the Internet of Things (IoT) and Industry 4.0.

Source of the article on DZONE

Everything you need to get started analyzing Kafka Event Streams

Events are messages that are sent by a system to notify operators or other systems about a change in its domain. With event-driven architectures powered by systems like Apache Kafka becoming more prominent, there are now many applications in the modern software stack that make use of events and messages to operate effectively. In this blog, we will examine the use of three different data backends for event data – Apache Druid, Elasticsearch, and Rockset.
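As a concrete, hypothetical example of such an event, a small Java record like the one below captures a single domain change; systems typically serialize it to JSON before publishing it to a stream (records require Java 16+):

```java
import java.time.Instant;

// A hypothetical domain event: a system announces that an order's status changed.
// Backends such as Druid, Elasticsearch, or Rockset would ingest this as a JSON
// document, typically consumed from a Kafka topic.
public record OrderStatusChanged(String orderId, String status, Instant occurredAt) {}
```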

Using Event Data

Events are commonly used by systems in the following ways:

Source of the article on DZONE

Apache Spark supports many different data formats, such as the ubiquitous CSV format and web-friendly JSON format. Common formats used primarily for big data analytical purposes are Apache Parquet and Apache Avro.

In this post, we’re going to cover the properties of these four formats (CSV, JSON, Parquet, and Avro) with Apache Spark.
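Here is a minimal sketch of moving between the four formats with Spark's Java API. The file paths are placeholders, and writing Avro assumes the external spark-avro package is on the classpath:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FormatComparison {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("format-comparison")
                .master("local[*]")
                .getOrCreate();

        // Read a CSV file with a header row (path is a placeholder).
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("data/input.csv");

        // Write the same data out in the other three formats.
        df.write().json("out/json");
        df.write().parquet("out/parquet");
        // Avro support lives in the external spark-avro package.
        df.write().format("avro").save("out/avro");

        spark.stop();
    }
}
```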

Source of the article on DZONE

This article is a continuation of Part 1 (Kafka Technical Overview), Part 2 (Kafka Producer Overview), Part 3 (Kafka Producer Delivery Semantics), and Part 4 (Kafka Consumer Overview). Let’s look at the different consumer configurations and consumer delivery semantics.

Subscribe

To read records from a Kafka topic, create an instance of a Kafka consumer and subscribe to one or more Kafka topics. You can also subscribe to a set of topics using a regular expression, for example, myTopic.*.
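Here is a minimal sketch of pattern-based subscription with the Java client; the broker address, group id, and String deserializers are assumptions made for the example:

```java
import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RegexSubscriber {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");  // required for pattern subscription
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribe to every topic whose name matches myTopic.*
            consumer.subscribe(Pattern.compile("myTopic.*"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```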

Source of the article on DZONE

This article is a continuation of Part 1, ‘Kafka Technical Overview.’ In Part 2 of the series, let’s look at how a Kafka producer works and its important configurations.

Producer Role

The primary role of a Kafka producer is to take producer properties and records as inputs and write the records to an appropriate Kafka broker. Producers serialize, partition, compress, and load-balance data across brokers based on partitions.
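That flow (properties plus a record in, a partitioned and compressed write out) looks roughly like this with the Java client. The topic name, key, and broker address are placeholders for the example:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSketch {
    public static void main(String[] args) {
        // Producer properties: where to connect, how to serialize, how to compress.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("compression.type", "snappy");
        props.put("acks", "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key hash to the same partition, which is
            // how the producer spreads data across brokers.
            producer.send(new ProducerRecord<>("demo-topic", "key-1", "hello"),
                    (metadata, exception) -> {
                        if (exception != null) exception.printStackTrace();
                        else System.out.println("wrote to partition " + metadata.partition());
                    });
        }
    }
}
```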

Source of the article on DZONE