Articles

In this article, we will discuss a use case where data from one Kafka cluster has to be migrated to another Kafka cluster. Here the target is Strimzi and the source is a standalone Kafka cluster. The target is where the data has to be copied to, and the source is where we want to copy or migrate the data from. I have a separate article on how to use MirrorMaker version 1 with Apache Kafka clusters. This article is about MirrorMaker 2, which has more features than MirrorMaker 1.

At the time of writing this article, the latest version of Strimzi is 0.22.1 and can be downloaded from here.

Source of the article on DZONE

Developers are working with new applications every day, using Apache Kafka as the backbone to implement an event-driven architecture (EDA) that supports distributed systems. However, this adds new challenges when sharing across teams, even within the same organization. What endpoints are available? What is the structure of the message? That's why payload examples have become critical to speeding up development. For this reason, having a reliable and enterprise-grade service to mock Apache Kafka should be an item on your EDA checklist. This post takes a quick look at the Microcks General Availability (GA) release and its support for Kafka.

What is Microcks?

Source of the article on DZONE

Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is obvious for many, but legacy infrastructure must be maintained, integrated, and (maybe) replaced over time. Event streaming with the Apache Kafka ecosystem is a perfect technology for building hybrid replication in real time at scale.

App Modernization and Streaming Replication With Apache Kafka at Bayer

Most enterprises require reliable and scalable integration between legacy systems, such as IBM Mainframe, Oracle, and SAP ERP, and modern cloud-native applications, like Snowflake, MongoDB Atlas, or AWS Lambda.

Source of the article on DZONE

There are multiple ways to ingest data streams into an Apache Kafka topic and subsequently deliver them to the various types of consumers hooked to that topic. The data collected continuously from the topic by consumers passes through multiple data pipelines and then through stream processing engines such as Apache Spark, Apache Flink, or Amazon Kinesis, eventually landing in real-time applications that deliver the final data-driven decision. Across finance, manufacturing, insurance, telecom, healthcare, commerce, and more, real-time applications are becoming the best way for organizations to take immediate action and gain insights from up-to-date data. Today, Apache Kafka forms the central nervous system that brings data from all aspects of the business to the large operational information hubs where decisions are made.

Text files contain unformatted ASCII text and are commonly used to store information. Each line of the file represents a data record, and the file can be appended to continuously. Every insertion of a new line or lines into the text file can be considered a new data insertion. Hence, continuously appending new lines to the text file, whether by humans or by applications (with no modification of already inserted lines), and subsequently moving or sending them to a different location can be considered data streaming from the file. Every new line or row added to the text file can be analyzed continuously by exporting the new lines to a Kafka topic and importing them with consumers that hook up to the topic, as sketched below.
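To make the export side concrete, here is a minimal sketch in Java that tails a text file and publishes each newly appended line to a Kafka topic. The broker address localhost:9092, the file name data.txt, and the topic name file-lines are assumptions made for this example, not values from the article.

```java
// A minimal sketch: tail a text file and publish each newly appended line
// to a Kafka topic. Assumes the Apache Kafka Java client (kafka-clients)
// is on the classpath; broker, file, and topic names are illustrative.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Properties;

public class FileLineStreamer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
            while (true) {
                String line = reader.readLine();
                if (line == null) {
                    Thread.sleep(500); // no new data yet; wait for the file to grow
                } else {
                    // Each appended line becomes one record on the topic.
                    producer.send(new ProducerRecord<>("file-lines", line));
                }
            }
        }
    }
}
```

Kafka Connect also ships with a FileStreamSource example connector that covers this file-to-topic pattern without custom code.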

Source of the article on DZONE


Agile 

AI

Big Data

Cloud

Database

DevOps

Integration

  • Mulesoft 4: Continuous Delivery/Deployment With Maven by Ashok S — This article is a great example of what we want every tutorial on DZone to look like. Its main aim is to provide a standard mechanism for releasing project artifacts and deploying them to the Anypoint Platform, either from a local machine or from within continuous delivery pipelines.
  • Integration With Social Media Platforms Series (Part 1) by Sravan Lingam — This article helps you build a RESTful API through MuleSoft that integrates with LinkedIn and shares a post on behalf of one’s personal account. I like this article because, in the age of social media, it’s so important for businesses to be connected and integrated!

IoT

Java

Microservices

Open Source

Performance

  • What Is Big O Notation? by Huyen Pham — Aside from its silly name, this article is an example of an in-depth analysis of a rarely discussed concept. Take a look at this short guide to get to know Big O notation and its uses.
  • Is Python the Future of Programming? by Shormisthsa Chatterjee — Where is programming going? This article attempts to answer this question in a well-rounded way. The author writes, "Python will be the language of the future. Testers will have to upgrade their skills and learn these languages to tame the AI and ML tools".

Security

Web Dev

  • A Better Way to Learn Python by Manas Dash: There are so many resources available for learning Python — so many that it’s difficult to find a good and flexible place to start. Check out Manas’ curated list of courses, articles, projects, etc., to get your Python journey started today.
  • Discovering Rust by Joaquin Caro: I’m a sucker for good Rust content, as there are still so many gaps in what’s available. Joaquin does a great job of giving readers his perspective on the language’s features in a way that traditional docs just don’t.

Source of the article on DZONE

This article is a continuation of Part 1 (Kafka technical overview), Part 2 (Kafka producer overview), Part 3 (Kafka producer delivery semantics), and Part 4 (Kafka consumer overview). Let’s look at the different consumer configurations and consumer delivery semantics.

Subscribe

To read records from a Kafka topic, create an instance of a Kafka consumer and subscribe to one or more Kafka topics. You can also subscribe to all topics matching a regular expression, for example, myTopic.*, as in the sketch below.
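Here is a minimal sketch in Java of a regex subscription, assuming the standard Apache Kafka client library (kafka-clients); the broker address localhost:9092 and the consumer group demo-group are illustrative assumptions, not details from the article.

```java
// A minimal sketch of subscribing to topics by regular expression with the
// Apache Kafka Java client. Broker address and group id are illustrative.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;

public class RegexSubscriber {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "demo-group");              // required for subscribe()
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribes to every topic whose name starts with "myTopic".
            consumer.subscribe(Pattern.compile("myTopic.*"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("topic=%s offset=%d value=%s%n",
                            record.topic(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Note that regex subscription requires a consumer group, and topics created later that match the pattern are picked up automatically when the consumer refreshes its metadata.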

Source of the article on DZONE