Articles

The sports world is changing. Digitalization is everywhere. Cameras and sensors analyze matches. Stadiums get connected and incorporate mobile apps and location-based services. Players use social networks to influence and market themselves and consumer products. Real-time data processing is crucial for most innovative sports use cases. This blog post explores how data streaming with Apache Kafka helps reimagine the sports industry, showing a concrete example from the worldwide table tennis organization. 

Innovation in Sports and Gaming With Real-time Analytics

Reimagining a data architecture to provide real-time data flow for sporting leagues and events is an enormous challenge. However, digitalization enables a ton of innovative use cases to improve user experiences and engage better with players, fans, and business partners.

Think about wonderful customer experiences with gamification when watching a match, live betting, location-based services in the stadium, automated payments, coupons, integration with connected fan shops and shopping malls, and so on.

Article source: DZONE

IT modernization and innovative new technologies are changing the healthcare industry significantly. This blog series explores how data streaming with Apache Kafka enables real-time data processing and business process automation. Real-world examples show how traditional enterprises and startups increase efficiency, reduce cost, and improve the human experience across the healthcare value chain, including pharma, insurance, providers, retail, and manufacturing. This is part five: Open API and Omnichannel. Examples include Care.com and Invitae.

Blog Series – Kafka in Healthcare

Many healthcare companies leverage Kafka today. Use cases exist in every domain across the healthcare value chain. Most companies deploy data streaming in different business domains, and use cases often overlap. I have tried to categorize a few real-world deployments into different technical scenarios and added concrete examples:

Article source: DZONE

IT modernization and innovative new technologies are changing the healthcare industry significantly. This blog series explores how data streaming with Apache Kafka enables real-time data processing and business process automation. Real-world examples show how traditional enterprises and startups increase efficiency, reduce cost, and improve the human experience across the healthcare value chain, including pharma, insurance, providers, retail, and manufacturing. This is part five: Machine Learning and Data Science. Examples include Recursion and Humana.

Blog Series – Kafka in Healthcare

Many healthcare companies leverage Kafka today. Use cases exist in every domain across the healthcare value chain. Most companies deploy data streaming in different business domains, and use cases often overlap. I have tried to categorize a few real-world deployments into different technical scenarios and added concrete examples:

Article source: DZONE

Many, if not all, data science projects require some data visualization front-end to display the results for humans to analyze. Python seems to boast the most potent libraries, but do not lose hope if you’re a Java developer (or if you’re proficient in another language). In this post, I will describe how you can benefit from such a data visualization front-end without writing a single line of code.

The Use Case: Changes From Wikipedia

I assume that you are already familiar with Wikipedia. If you are not, Wikipedia is an online encyclopedia curated by its community. In their own words:

Article source: DZONE

If you’re paying attention to anything that’s happening in the development world, you’re likely familiar with the term “observability.” We’re seeing more and more monitoring companies from all different backgrounds jumping on the term to describe their solutions, many claiming their observability tool to be the factor that will take businesses to the next level.

Growing out of control-systems engineering, observability allows dev teams to unify and study the behavior of various IT systems through the external outputs of their internal components. In the case of software, that means log events, distributed traces, and time-series metrics. By unifying the data streaming through today’s complex IT environments, observability certainly gives SREs and DevOps practitioners a leg up over traditional monitoring. But the data alone is no longer enough.

Article source: DZONE

There are multiple ways to ingest data streams into an Apache Kafka topic and subsequently deliver them to the various consumers subscribed to that topic. The data that consumers continuously collect from the topic passes through multiple data pipelines and then through stream processing engines such as Apache Spark, Apache Flink, or Amazon Kinesis, and eventually lands in real-time applications that deliver the final data-driven decisions. From finance, manufacturing, insurance, telecom, and healthcare to commerce and beyond, real-time applications are becoming the best way for organizations to take immediate action and gain insights from up-to-date data. Today, Apache Kafka forms the central nervous system that brings data from every part of the business to the operational hubs where decisions are made.
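As a minimal sketch of this producer-to-consumer flow, the snippet below uses the kafkajs Node.js client to publish a record to a topic and then consume from it continuously; the broker address, topic name, and group id are illustrative placeholders rather than values from the article.

```typescript
import { Kafka } from "kafkajs";

// Placeholder broker and topic names -- adjust for your environment.
const kafka = new Kafka({ clientId: "demo-app", brokers: ["localhost:9092"] });

async function produceAndConsume(): Promise<void> {
  // Produce a single record to the topic.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "events",
    messages: [{ key: "order-1", value: JSON.stringify({ amount: 42 }) }],
  });
  await producer.disconnect();

  // Consume records continuously from the same topic.
  const consumer = kafka.consumer({ groupId: "demo-group" });
  await consumer.connect();
  await consumer.subscribe({ topic: "events", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      // A real pipeline would forward this to Spark, Flink, or another application.
      console.log(`received: ${message.value?.toString()}`);
    },
  });
}

produceAndConsume().catch(console.error);
```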

Text files contain unformatted ASCII text and are commonly used to store information. Each line of the file represents a data record, and the file can be appended to continuously. Every new line or set of lines added to the file can be considered a new data insertion. Hence, continuously appending new lines to a text file, whether by humans or by applications (without modifying lines that were already written), and subsequently moving or sending them to a different location can be considered streaming data from the file. Each newly added line or row can be analyzed continuously by exporting it to a Kafka topic and importing it with consumers subscribed to that topic.
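To make this concrete, here is a rough sketch (not the article’s implementation) that polls a text file for appended lines and exports each new line to a Kafka topic with the kafkajs client; the file path, topic name, and broker address are assumptions chosen for illustration. In practice, Kafka Connect’s FileStream source connector covers the same pattern without custom code.

```typescript
import * as fs from "fs";
import * as readline from "readline";
import { Kafka } from "kafkajs";

// Illustrative names -- not taken from the article.
const FILE_PATH = "./records.txt";
const TOPIC = "file-lines";

const kafka = new Kafka({ clientId: "file-tailer", brokers: ["localhost:9092"] });
const producer = kafka.producer();

let offset = 0; // byte position up to which the file has already been exported

async function exportNewLines(): Promise<void> {
  const size = fs.statSync(FILE_PATH).size;
  if (size <= offset) return; // nothing appended since the last poll

  // Read only the bytes appended since the previous poll.
  const stream = fs.createReadStream(FILE_PATH, { start: offset, encoding: "utf8" });
  const lines = readline.createInterface({ input: stream });

  for await (const line of lines) {
    // Each appended line becomes one Kafka record.
    // Assumes writers append complete, newline-terminated lines.
    await producer.send({ topic: TOPIC, messages: [{ value: line }] });
  }
  offset = size;
}

async function main(): Promise<void> {
  await producer.connect();
  // Poll the file every second for newly appended lines.
  setInterval(() => exportNewLines().catch(console.error), 1000);
}

main().catch(console.error);
```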

Article source: DZONE

With an Enterprise Edition license, you get unrestricted access to the SAP HANA database. Discover how moving to a full-use license can benefit your business data and applications.

Ten years ago, SAP introduced a next-generation database management tool, SAP HANA. The offering has several key characteristics:

  • In-memory: data is read and written in memory, for extreme performance
  • Row-oriented: this mode optimizes writes (one record per row)
  • Column-oriented: this mode makes querying easier (one data type per column)

This dual row/column capability lets SAP HANA handle both transactional and analytical workloads. Advanced technologies revolve around this core: an application server, scripting, predictive analytics, machine learning, OLAP views, graphs, spatial data management, and more.

The platform offers connectivity both to SAP applications (BICS) and to non-SAP ones (SQL and MDX). It is also possible to access third-party data sources via Smart Data Streaming and Smart Data Access, and to integrate almost any type of data, structured or not, including Hadoop sources, through Smart Data Integration. All of this is combined with partitioning, high availability, load balancing, query parallelization, disaster recovery support, and more.

SAP HANA is now at the heart of many SAP applications. It can also be used in standalone mode. “In both cases, the full set of features is available, because there is only one version of SAP HANA,” explains Olivier Demeusy, Director at Center of Excellence, EMEA North for SAP Business Technology Platform.

Runtime vs. Enterprise

The main difference between SAP HANA Runtime Edition and SAP HANA Enterprise Edition lies in how the database is accessed and the restrictions that apply:

  • The Runtime Edition is designed for SAP applications and can only be addressed through those applications
  • The Enterprise Edition is accessible without restrictions from any system or application, SAP or not.

The Runtime Edition therefore only allows interaction with the database through SAP applications, which take care of issuing the queries. The Enterprise Edition, on the other hand, is accessible from SAP applications, third-party applications, or your own business applications.

Access can happen directly through SQL queries. The data integration and data quality functions can be used freely, as can SAP HANA’s advanced engines. Finally, multiple bridges are available to connect business code to SAP HANA, up to hosting your applications inside SAP HANA itself: SAP HANA XS Advanced enables the development of native SAP HANA applications that run as close to the data as possible.
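As an illustration of that direct SQL access from a custom application, here is a minimal sketch assuming the @sap/hana-client Node.js driver; the host, credentials, and table name are placeholders, not values from the article.

```typescript
// Assumes the @sap/hana-client driver; a require()-style import may be needed
// if no TypeScript declarations are available in your setup.
import * as hana from "@sap/hana-client";

// Placeholder connection details -- replace with your own HANA system.
const connParams = {
  serverNode: "hana-host:30015",
  uid: "MY_USER",
  pwd: "MY_PASSWORD",
};

const conn = hana.createConnection();

conn.connect(connParams, (connectErr: Error) => {
  if (connectErr) throw connectErr;

  // Any SQL supported by HANA can be issued directly from the application.
  // SALES_ORDERS is a hypothetical table used only for illustration.
  conn.exec(
    "SELECT CUSTOMER_ID, TOTAL FROM SALES_ORDERS WHERE TOTAL > 1000",
    (queryErr: Error, rows: unknown[]) => {
      if (queryErr) throw queryErr;
      console.log(rows); // result rows as plain JavaScript objects
      conn.disconnect();
    }
  );
});
```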

A simplified license switch

Moving from the Runtime Edition to the Enterprise Edition is straightforward, since SAP HANA remains identical in both cases. “Switching from one license to the other involves no technical change,” confirms Olivier Demeusy.

Pricing includes an acquisition cost and annual maintenance. “The price applied depends directly on the volume of data that SAP HANA will handle, calculated in blocks of 64 GB.” Whether you run a 500 GB or a 20 TB database, you are therefore always guaranteed an offer that is precisely sized to your needs.


Article source: sap.com

In this post, we’ll see how to stream data in ASP.NET Core SignalR. With the release of ASP.NET Core 2.1, SignalR now supports streaming content.

What Is a Stream?

Streaming or media streaming is a technique for transferring data so that it can be processed as a steady and continuous stream. – webopedia.com
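As a rough illustration of consuming such a stream on the client side, the snippet below assumes the @microsoft/signalr TypeScript client and a hypothetical hub method named Counter that pushes a fixed number of items at a given interval; the hub URL and method are not taken from the article.

```typescript
import * as signalR from "@microsoft/signalr";

// Hub URL and method name are illustrative -- they must match your ASP.NET Core hub.
const connection = new signalR.HubConnectionBuilder()
  .withUrl("https://localhost:5001/streamHub")
  .build();

async function consumeStream(): Promise<void> {
  await connection.start();

  // Ask the hub method "Counter" for 10 items, one every 500 ms,
  // and process each item as soon as it arrives instead of waiting for all of them.
  connection.stream<number>("Counter", 10, 500).subscribe({
    next: (item) => console.log(`received item: ${item}`),
    complete: () => console.log("stream completed"),
    error: (err) => console.error(err),
  });
}

consumeStream().catch(console.error);
```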

Article source: DZONE