Articles


Introduction

In this article, I will explain how we can use the power of MuleSoft to send a PDF file from an Experience API to a Process/System API using the multipart/form-data content type, and how to convert it back into a PDF file in the second API.

Use Case

In the Experience API, we will read the PDF file from the local disk using Mule's out-of-the-box (OOTB) File connector. We will then send this PDF as binary data, along with some other fields, to another API (we can call it the Process API) that accepts data as multipart/form-data. Finally, we will extract the PDF binary from the received payload, convert it back into a PDF, and write the file to the local disk using the OOTB File connector.
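The article itself builds this with Mule flows, the File connector, and DataWeave transformations. Purely as a language-neutral sketch of what a multipart/form-data upload looks like on the wire, here is a minimal Java example that posts a PDF plus one text field to a hypothetical Process API endpoint; the file path, field names, boundary, and URL are assumptions for illustration, not values from the article.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class MultipartPdfClient {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical file path and endpoint; replace with your own values.
        Path pdfPath = Path.of("/tmp/input/invoice.pdf");
        String processApiUrl = "http://localhost:8081/process/documents";

        String boundary = "----DemoBoundary" + System.currentTimeMillis();
        byte[] pdfBytes = Files.readAllBytes(pdfPath);
        String crlf = "\r\n";

        // Build the multipart/form-data body by hand:
        // one text field ("documentType") plus the PDF as a binary part.
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        body.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"documentType\"" + crlf + crlf
                + "invoice" + crlf).getBytes(StandardCharsets.UTF_8));
        body.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"file\"; filename=\"invoice.pdf\"" + crlf
                + "Content-Type: application/pdf" + crlf + crlf).getBytes(StandardCharsets.UTF_8));
        body.write(pdfBytes);
        body.write((crlf + "--" + boundary + "--" + crlf).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(processApiUrl))
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body.toByteArray()))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Process API responded with status " + response.statusCode());
    }
}
```

On the receiving side, the Process API would parse the parts by the same boundary, pull out the part named "file" as binary, and write those bytes back to disk as a PDF, which is what the Mule File connector's write operation does in the article's flow.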

Source of the article on DZONE

There are multiple ways to ingest data streams into an Apache Kafka topic and subsequently deliver them to the various types of consumers hooked to that topic. The data that consumers continuously collect from the topic passes through data pipelines and stream processing engines such as Apache Spark, Apache Flink, and Amazon Kinesis, and eventually lands in real-time applications that deliver the final data-driven decision. From finance, manufacturing, insurance, telecom, and healthcare to commerce and beyond, real-time applications are becoming the preferred way for organizations to take immediate action and gain insight from up-to-date data. Today, Apache Kafka forms the central nervous system that brings data from all parts of the business to the large operational data hubs where decisions are made.
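As a minimal sketch of the consuming side described above, the following Java snippet subscribes to a topic and polls records in a loop. The broker address, consumer group id, and topic name are assumptions for illustration, not values from the article.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicLineConsumer {

    public static void main(String[] args) {
        // Hypothetical broker address, group id, and topic name.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "text-file-consumers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("text-file-lines"));
            while (true) {
                // Each record value is one line that was appended to the source file.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d line=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```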

Text files contain unformatted ASCII text and are commonly used to store information. Each line of the file represents a data record, and the file can be appended to continuously. Every new line or set of lines written to the text file can be considered a new data insertion. Hence, when new lines are continuously appended to the file, whether by humans or by applications (without modifying the lines already written), and are then moved or sent to a different location, this can be treated as streaming data from the file. Each new line added to the text file can be analyzed continuously by exporting it to a Kafka topic and importing it with the consumers hooked to that topic.
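The article likely relies on a Kafka Connect file source connector or a similar tool for this export. Purely to illustrate the idea, here is a hand-rolled Java sketch that polls a text file for newly appended lines and publishes each one to a Kafka topic; the file path, broker address, and topic name are hypothetical.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FileLineProducer {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical file path, broker address, and topic name.
        String filePath = "/var/data/records.txt";
        String topic = "text-file-lines";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             RandomAccessFile file = new RandomAccessFile(filePath, "r")) {

            long position = 0; // start from the beginning; use file.length() to skip existing lines
            while (true) {
                file.seek(position);
                String line;
                // Publish every line appended since the last pass.
                // (A production tailer would also guard against partially written last lines.)
                while ((line = file.readLine()) != null) {
                    producer.send(new ProducerRecord<>(topic, line));
                }
                position = file.getFilePointer();
                Thread.sleep(1000); // poll the file for newly appended lines
            }
        }
    }
}
```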

Source of the article on DZONE