As technologies evolve, they typically pass through different stages of maturity, both from a technical and a business perspective. They move from pilot project to a whole new way of working, from an idea to a bold vision to disruptive innovation. New technologies often generate extensive hype before they gain traction and realize their full potential with tangible solutions. This is also true of blockchain technology, which, while it has matured and is moving beyond the experimental phase, still faces a level of uncertainty in the market.

Particularly in the enterprise world, it is a challenge for businesses to know how they can best take advantage of this technology. At the same time, the potential is so remarkable that, according to a survey by the World Economic Forum, by 2027, 10% of the global gross domestic product (GDP) will be stored on blockchain technology. In any case, the idea of decentralized applications that share and distribute control, resources, and data in order to benefit from the network effect and from greater transparency and trust within a network remains a promising opportunity to establish new business models.

Enabling enterprise blockchains

Many people consider blockchain to be one of the most significant innovations in the IT industry, while others consider its biggest strength – that there is no central control – to also be its greatest weakness. However, the majority of enterprise blockchains are consortium or semi-private blockchains. These types allow either a pre-selected group or a single organization to control and run the network and the corresponding consensus mechanisms.

When companies start exploring the application of blockchain technologies, it should not just be for the sake of applying a new technology. Blockchain offers companies an opportunity to rethink their business processes, with significant potential to speed up existing processes, even across corporate boundaries.

Enterprise blockchains have the power to further simplify and accelerate business processes through reduced process steps and greater transaction consistency. From the trading of digital or physical goods to verifying documents and milestones of a specific process, applying blockchain-empowered solutions in the enterprise world can lead to increased profitability.

With SAP HANA®, we have the unique ability to enable enterprise blockchains and integrate them with new and existing business applications so customers can take full advantage of the technology’s benefits. Our goal, therefore, is to embed blockchain in our multi-modal architecture and have blockchain-related processes appear as a typical database transaction to both humans and machines – removing complexity and inefficiency while ensuring the speed, reliability, and security that enterprises require to run their businesses smoothly.

Introducing the SAP HANA Blockchain adapter

At SAPPHIRE this year, we launched the first version of the SAP HANA Blockchain adapter, which integrates with the newly launched SAP Cloud Platform Blockchain. With this new capability, customers can easily consume and build on blockchain data in SAP HANA using an SQL interface and standard SQL commands – both on premise and in the cloud. This consistent experience makes it much easier for developers to build new applications on blockchain technology because they do not need to know the concrete blockchain implementation details; they can simply apply the technology. By removing this barrier, SAP HANA enables developers to adopt blockchain easily and quickly, providing an enterprise-class data platform that harmonizes multi-party transactions and allows customers to establish trusted and transparent business processes and networks.

In addition to connecting to existing blockchain networks, such as Hyperledger Fabric or MultiChain, via the APIs offered by SAP Cloud Platform Blockchain, customers can now work with the relevant data from the blockchain network at the speed of SAP HANA. Data from the blockchain is exposed as virtual tables in SAP HANA and can also be replicated into physical tables. Blockchain transactions triggered through SAP HANA are submitted to the respective blockchain ecosystem; conversely, any additions to the relevant blockchain update the corresponding blockchain table in SAP HANA accordingly.

This makes it possible to run analytics and transactions in real-time on both regular business data and blockchain data – making blockchain enterprise-ready by integrating the technology with new and existing business applications. All these operations can be performed utilizing standard SQL commands since blockchain data resides in database tables.
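To illustrate the idea – not the actual adapter schema; sqlite3 stands in for SAP HANA here, and all table and column names below are invented for the example – blockchain data that has been replicated into a database table can be joined with regular business data using a single standard SQL statement:

```python
import sqlite3

# Illustrative sketch only: sqlite3 stands in for SAP HANA; the table and
# column names are hypothetical, not the real adapter schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A regular business table, plus a table mirroring replicated blockchain data.
cur.execute("CREATE TABLE orders (order_id TEXT, supplier TEXT, amount REAL)")
cur.execute("CREATE TABLE bc_shipments (order_id TEXT, block_hash TEXT, status TEXT)")

cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [("O1", "ACME", 1200.0), ("O2", "Globex", 430.0)])
cur.executemany("INSERT INTO bc_shipments VALUES (?, ?, ?)",
                [("O1", "0xab12", "DELIVERED"), ("O2", "0xcd34", "IN_TRANSIT")])

# The point of the adapter: business data and blockchain data can be
# combined with one plain SQL join.
rows = cur.execute("""
    SELECT o.order_id, o.supplier, b.status
    FROM orders o JOIN bc_shipments b ON o.order_id = b.order_id
    ORDER BY o.order_id
""").fetchall()
print(rows)
```

The same pattern applies whether the blockchain table is a virtual table or a replicated physical one: the consumer only sees SQL.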

 

The SAP HANA Blockchain adapter connects with blockchains through SAP Cloud Platform and brings business data together with blockchain data

With this new capability of SAP HANA, companies can, for example, increase supply chain transparency by extending order-processing information to suppliers. Further use cases include document verification in the public sector and, in the utilities industry, tracking electricity production and consumption so that households can trade rooftop solar power with one another.

As a next step, we will explore further use cases with customers in a controlled beta phase for the SAP HANA Blockchain adapter.

Making use of blockchain technology in the enterprise world is a further step toward decentralizing applications for greater flexibility, transparency, and scalability in the hyper-connected economy.

The post SAP HANA Goes Blockchain – Bringing Together Business and Blockchain Data appeared first on SAP HANA.

source https://blogs.saphana.com/2018/06/12/sap-hana-goes-blockchain-bringing-together-business-and-blockchain-data/

Harnessing hyperscale is no longer the domain of digital giants such as Google® and Amazon®. Innovative utilities are capturing every movement in the lives of millions of digital prosumers to find new sources of revenue and deliver personalized services. Meanwhile, in the pursuit of zero waste, consumer products companies that sell perishable and temperature-sensitive goods are remotely monitoring complex production lines, transportation fleets, and cooling units at thousands of locations in real time to promptly detect performance anomalies and take action, preventing food contamination or spoilage.

Enabling these business transformations requires intelligent applications and analytic solutions that capture, process, and analyze enormous volumes of diverse data coming from inside and outside a company’s walls. Data is at the core of all of these innovations and needs to be instantly processed to support everything from anomaly detection and outcome predictions to in-process decision making and intelligent automation.

 

The In-Memory Difference

In-memory computing has been widely recognized as essential to power this new breed of solutions. By maintaining data close to the processor instead of locking it into traditional disk-based storage, in-memory data management reduces processing latency by several orders of magnitude, eliminating I/O bottlenecks. It also allows for fast execution of complex operations, such as scoring machine learning models or combining advanced analytics and transactions.

Today, however, some companies cannot find hardware configurations with enough memory capacity to store all their data. For many companies, budget constraints make it difficult to justify adopting in-memory data management for warm data – that is, data that is not accessed as often but is still needed to provide historical context, such as for trend analysis. Current computer memory technology, DRAM, provides very fast data access but has limited capacity and is more expensive than other storage technologies, such as solid-state disks (SSDs). DRAM is also not a permanent storage medium, as it loses its data when the power is turned off. As a result, businesses often have no choice but to opt for multi-tier storage solutions, sacrificing data processing speed for lower costs.
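The tiering tradeoff described above can be sketched as a toy two-tier store, with a Python dict standing in for fast memory and a disk-backed shelf standing in for cheaper, slower storage. Everything here – class name, capacity, keys – is invented for illustration and has nothing to do with SAP HANA’s actual implementation:

```python
import os
import shelve
import tempfile

# Hypothetical two-tier store: hot entries in memory, warm entries on disk.
class TieredStore:
    def __init__(self, path, hot_capacity=2):
        self.hot = {}                   # "DRAM" tier: fast, small
        self.warm = shelve.open(path)   # "disk" tier: slower, cheap, large
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.hot_capacity:
            # Demote the oldest hot entry to the warm tier.
            old_key, old_val = next(iter(self.hot.items()))
            self.warm[old_key] = old_val
            del self.hot[old_key]

    def get(self, key):
        if key in self.hot:
            return self.hot[key], "hot"
        return self.warm[key], "warm"   # slower path: disk I/O

path = os.path.join(tempfile.mkdtemp(), "warm_tier")
store = TieredStore(path)
for i in range(4):
    store.put(f"reading-{i}", i * 1.5)

print(store.get("reading-3"))  # recent data, served from memory
print(store.get("reading-0"))  # older data, demoted to disk
```

The speed difference between the two `get` paths is exactly the tradeoff businesses accept when warm data is pushed out of memory.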

 

Ushering in the Next Generation of In-Memory Computing

Hardware innovations such as Intel® Optane™ DC persistent memory are going to transform the current data storage hierarchy and allow more data to be processed at in-memory speed. Persistent memory is non-volatile, preserving its content even if the power goes off, and is almost as fast as DRAM while storing more data at a lower TCO. This means more than 1,000x lower latency in delivering data to each CPU core’s registers than classic SSDs.

With SAP HANA, SAP has been at the forefront of the in-memory data management revolution for many years. With more than 23,000 customers, SAP HANA is powering core business processes and transformational innovations across all major industries. The latest release, SAP HANA 2.0 SPS 03, adds native support for persistent memory, and the benefits are significant. Not only will customers be able to store more data in memory for faster processing, but they will also benefit from shorter start-up times, since data is already available in persistent memory and does not have to be reloaded from I/O-constrained storage.

By the numbers, the gains are also impressive. For example, an internal benchmark of SAP HANA using Intel® Optane™ DC persistent memory, compared to a traditional DRAM and SSD persistent disk configuration with 6 terabytes of data, showed a 12.5-times improvement in SAP HANA start-up time. Intel® Optane™ DC persistent memory will also expand total memory capacity to more than 3 terabytes per CPU socket, enabling SAP HANA to better optimize workloads by keeping larger amounts of data closer to the CPU.

With higher capacity and improved startup time, SAP and Intel® are removing the barriers to fast, always-on data processing[1]. As Lisa Davis, Intel Data Center VP & GM of Enterprise & Government, put it: “Intel and SAP are proud to launch a revolutionary change for SAP HANA. SAP HANA 2.0 SPS03 & Intel® Optane™ DC persistent memory will help deliver a lower TCO through larger in-memory capacity, faster start times & simplified data tiering while moving more data closer to the processor for faster time to insights. After a multi-year collaboration, we are excited to see how this significant technology advancement will accelerate SAP’s customers’ journey to the intelligent enterprise.”

The combination of persistent memory and in-memory data management is poised to transform the price/performance ratio of next-generation compute and storage environments, paving the way for faster, cheaper, and bigger hyperscale systems to process data. With data processing capacity radically improved and the ability to store and retrieve large volumes of data as rapidly as needed, business innovation can thrive.

Further References

 

[1] Intel® Optane™ DC persistent memory is available today for early adoption testing and production shipping to select customers later this year, with broad availability in 2019.

 

The post Harnessing Hyperscale: Processing More Data at Speed with Persistent Memory appeared first on SAP HANA.

source https://blogs.saphana.com/2018/06/12/harnessing-hyperscale-processing-more-data-at-speed-with-persistent-memory/

It’s been a while since my last reader question post. It’s hard to feel too bad, though. I was combining a cross-country relocation with a two-week vacation. So I suppose the internet just had to do without my wisdom for a few weeks.

But I’m back in the saddle, so that changes today.

Article source: DZone

Not multilingual? That’s okay, there’s an app for that. Check out this video for a full walkthrough!

You can now add a professional translator and friendly voice to any mobile app using Amazon Translate and Amazon Polly. If you haven’t tried AWS yet, these two services are possibly the easiest API implementation I’ve seen to date.


Article source: DZone

Since its inception, the SAP HANA 2.0 cockpit has drawn interest from customers for its planned differentiating features, short patch-release cycles, and large development investment (not to mention its usability-tested look and feel). One of the drawbacks of the SAP HANA cockpit until now was the inability to back up and recover SAP HANA 1.0 SPS12 systems.

Customers using the long-term support release of the SAP HANA server, version 1.0 SPS12, need wait no longer. Install the SAP HANA cockpit and test out the functionality on your SAP HANA 1.0 SPS12 systems today!

The support requires, at least:

    • SAP HANA cockpit SP6 (released April 6th)
    • SAP HANA 1.0 SPS12 revision 122.17 (released May 23rd)

To use the new functionality, be sure to patch both your SAP HANA server version to revision 122.17, and your SAP HANA cockpit revision to SP6.

In addition to simple backup and recovery, take advantage of scheduling data backups and copying a database, both now also supported in the SAP HANA cockpit for SAP HANA 1.0 systems.

For more details about installing and configuring the SAP HANA cockpit and registering your SAP HANA server, check out the SAP HANA cockpit documentation.

Also, stay tuned for upcoming SAP HANA cockpit videos from our friends at the SAP HANA Academy!

The post Now you can back up and recover SAP HANA 1.0 in the SAP HANA cockpit appeared first on SAP HANA.

source https://blogs.saphana.com/2018/06/11/now-can-back-recover-sap-hana-1-0-sap-hana-cockpit/

Here’s a question: Do consumers, suppliers, and partners trust companies to do the right thing with their personal and business data? Given the high-profile data security breaches dominating the news lately, it’s unlikely. Lack of trust in organizations’ data practices is rising, and this not only damages a brand’s reputation, it can also adversely affect an organization’s bottom line.

 

GDPR Changes the Way We Use and Distribute Data

As new data protection laws—like the European Union’s General Data Protection Regulation (GDPR)—go into effect, companies are scrambling to implement solid, repeatable data practices. That said, the work that goes into ensuring the availability and accessibility of trusted data within a company is complicated not only by the enormous amount of data produced by enterprises daily, but also by the necessity to move that data from one repository to another.

To be clear, trusted data enables companies to increase automation, make more informed decisions, and improve productivity. Inevitably, trusted data propels innovation and accelerates competitive advantage.

 

Invest in Data Catalogs

But how do you get to the point where you know you can trust your data? First step: invest in data cataloging. This used to be considered a nice-to-have; however, in the digital era, the explosion of available data sources and the increase in self-service access to data have made cataloging a necessity. A data catalog discovers, tracks, organizes, and inventories all data assets and their lineage, whether the data is on premise, in the cloud, in a data lake, or even on an edge IoT device. Data cataloging also supplies the context for data analysis initiatives—driving valuable and trustworthy results for analysts, data scientists, developers, and business decision-makers within the organization.
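As a rough illustration of what a catalog tracks – the structure, dataset names, and field names below are invented for this sketch, not those of any SAP product – a minimal catalog might record each asset’s location, owner, and lineage, and resolve a dataset’s full upstream chain:

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry: name, where the data lives, who owns it,
# and which upstream datasets it was derived from.
@dataclass
class CatalogEntry:
    name: str
    location: str                                # on premise, cloud, data lake, edge...
    owner: str
    lineage: list = field(default_factory=list)  # names of upstream sources

class DataCatalog:
    def __init__(self):
        self.entries = {}

    def register(self, entry):
        self.entries[entry.name] = entry

    def upstream(self, name):
        """Recursively resolve every upstream source of a dataset."""
        seen = []
        for parent in self.entries[name].lineage:
            seen.append(parent)
            if parent in self.entries:
                seen.extend(self.upstream(parent))
        return seen

catalog = DataCatalog()
catalog.register(CatalogEntry("sensor_raw", "edge", "ops"))
catalog.register(CatalogEntry("sensor_clean", "data lake", "analytics",
                              lineage=["sensor_raw"]))
catalog.register(CatalogEntry("daily_report", "cloud", "bi",
                              lineage=["sensor_clean"]))

print(catalog.upstream("daily_report"))  # full lineage back to the source
```

Being able to answer “where did this number come from?” in one call is precisely the trust-building context a catalog provides to analysts and decision-makers.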

 

Link Data Catalogs to Data Architecture and Governance

That said, data catalogs won’t make much of a difference without the data architecture and governance that link the catalogs to a company’s information management plan. Data architecture is at the heart of an organization’s ability to execute its business strategy with clearly defined models, policies, and rules on how data is collected, stored, and used. Data rules and policies around security, privacy, validation, cleansing, usage, retention, and deletion are equally important. Effective architecture governance requires the documentation of business processes and the data flows within them, as well as assigning data and process owners to ensure ongoing organizational compliance.

 

Review and Analyze Data

As a final step, data monitoring and stewardship are vital to creating and maintaining trusted data. A company must actively and consistently review and analyze data quality, integrity, and compliance. To do this successfully, the organization must establish clear data quality metrics for specific business objectives, to ensure the impact of data quality on business outcomes can be properly assessed. Over time, good data stewardship establishes the practices and procedures that assess data for completeness, accuracy, security, accessibility, and quality.
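A minimal sketch of such metrics – the field names, sample records, and business rule below are invented for illustration – might compute completeness and validity per field so they can be tracked against targets over time:

```python
# Hypothetical record set with some quality problems baked in.
records = [
    {"meter_id": "M1", "kwh": 12.4, "region": "NL"},
    {"meter_id": "M2", "kwh": None, "region": "NL"},
    {"meter_id": "M3", "kwh": -3.0, "region": None},
]

def completeness(rows, fld):
    """Share of records where the field is present and non-null."""
    return sum(r.get(fld) is not None for r in rows) / len(rows)

def validity(rows, fld, predicate):
    """Share of non-null values that satisfy a business rule."""
    vals = [r[fld] for r in rows if r.get(fld) is not None]
    return sum(predicate(v) for v in vals) / len(vals)

print(round(completeness(records, "kwh"), 2))                 # 0.67
print(round(completeness(records, "region"), 2))              # 0.67
print(round(validity(records, "kwh", lambda v: v >= 0), 2))   # 0.5
```

Tying each metric to a specific business objective (for example, “billing requires kwh completeness above 99%”) is what makes the impact of data quality on outcomes assessable.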

While organizations with successful trusted data practices use data cataloging, architecture, and monitoring, they also have something else in common—they document the relationship between the data and their strategic business goals, initiatives, and outcome indicators to help drive their data management decisions. With a clear view of how data is linked to the business strategy, organizations can more effectively prioritize data management activities and better meet their goals.

As an example of this, Alliander, a European-based utility company, demonstrates the critical nature of trusted data to their corporate operations and their customers’ well-being. With electric lines and natural gas pipelines that ensure reliable, affordable, and accessible energy to millions of customers, the company relies on trusted data to literally keep the lights on.

Good data architecture, cataloging, and stewardship enable companies like Alliander to constantly monitor equipment operation and track data streams. Their commitment to trusted data means that if something goes wrong, a notification system can automatically trigger a call to a technician who, through geocoding, will know exactly where to find the equipment in need of repair—whether it’s a tiny relay station in the middle of a field or a power line on a city block. Upon arrival, the technician knows exactly which piece of equipment needs work, what it consists of, and the tools and materials they need for the job, including schematics on how to disassemble and repair the equipment in question. If a technician is missing a critical part, an automated request is made to the appropriate supplier to deliver it as quickly as possible.

With trusted data, utilities can be confident their customers have safe, dependable access to gas and electricity. As a bonus, customers can reduce their energy bills by 10-20% a month, through real-time access to their energy usage data.

Trusted data is a strategic asset for every company, which is why businesses need people, processes and technology to make sure their data is sound, accurate, and reliable, and can be used to maximize everything that’s good for a business, while fostering trusted relationships with consumers, suppliers, partners, and employees.

Managed and cultivated correctly, trusted data improves business outcomes and provides the foundation for innovation and transformation. Organizations around the world and across industries rely on data management solutions like SAP HANA Data Management Suite, the technology foundation that enables cleaner, more trusted data. Find out more by reading about information management and governance, watching a data preparation video, or signing up for a data quality management trial, and start delivering trusted data your business can count on—every time.

To learn more about SAP HANA Data Management Suite and its capabilities, visit us in the Platform campus at SAPPHIRE NOW 2018.

The post Investing in Trusted Data appeared first on SAP HANA.

source https://blogs.saphana.com/2018/06/11/investing-in-trusted-data/

While artificial intelligence (AI) promises increased efficiency and could be the key to tackling our world’s most pressing problems, its development in Europe does not enjoy unanimous support.

In a new SAP thought-leadership paper entitled “European Prosperity Through Human-Centric Artificial Intelligence,” prepared by Andreas Tegge, head of Global Public Policy, SAP addresses the anxieties this technology generates and proposes measures to ensure the rapid adoption and advancement of AI in Europe.

Today, artificial intelligence, and machine learning in particular, seem to be the only technologies that inspire so much fascination and so much heated debate at once. Algorithms capable of autonomously deriving insights from data, without being explicitly programmed, already allow machines to see, read, listen, and interact. In the intelligent enterprise, machine learning optimizes processes and unlocks higher levels of productivity, freeing employees to focus on tasks with greater added value.

Sooner or later, machine learning will find applications in nearly every field and industry. Its benefits for businesses are manifold: beyond cost savings, it offers the ability to predict market reactions, customer behavior, and machine lifetimes, to significantly optimize business operations, and to fully personalize customer service and software use. Machine learning can also help address some of today’s most pressing social challenges in areas such as healthcare, disaster prevention, public safety, and many others.

Yet anxieties and uncertainties around machine learning are multiplying. What impact will machine learning have on the labor market? How can data privacy and human control over machine-based decision-making processes be guaranteed? Will machines soon match human intelligence? Will they even surpass it?

Luka Mucic, chief financial officer and member of the Executive Board of SAP SE, believes it is important to address these concerns and fears in a public debate: “People will continue to play the most important role in the future, but that role will be different. The goal is for humans and machines to complement each other at work, with machines making human work easier. To prepare for this, policymakers, industry, and civil society must engage in open, multi-stakeholder discussions. With this thought-leadership paper, SAP wants to contribute to those discussions.”

Europe’s opportunity in the global market

There is no doubt that AI will be a major driver of innovation, growth, and productivity in the future. But what role will Europe play? As things stand, the race for global AI dominance is being run between China and the United States, with Europe, so far at least, seeming to do little more than watch from the sidelines.

The United States is currently in the lead. Companies such as Google, Facebook, and Microsoft are investing in machine learning technologies and also enjoy a distinct advantage: access to colossal volumes of data. In 2016, US investment in machine learning accounted for nearly two-thirds of global investment in the field.

China, for its part, benefits from a formidable talent pool and holds a veritable gold mine in its population of 1.4 billion. It is poised to take on a global AI leadership role alongside the United States. The Chinese government recently established a development plan aimed at securing China’s leadership of the global AI market by 2030. Companies such as Alibaba and Baidu are investing heavily in autonomous cars, intelligent traffic, defense, and healthcare. The Chinese AI market is estimated to reach €5 billion by the end of 2018.

So you might conclude that Europe no longer stands a chance. But when it comes to the B2B market for AI and the intelligent enterprise, Europe has real strengths to play a leading role. Intelligent business processes, smart infrastructure, digital assistants, and chatbots offer a wealth of opportunities for applying machine learning.

Europe also has remarkable industrial expertise, which is essential for developing cutting-edge machine learning solutions. Many European companies, large and small, are world leaders in their fields and have considerable potential for innovation. Startup hubs in Paris, London, and Berlin have a clear focus on AI. Finally, when it comes to data analytics tools, European companies are already well positioned with outstanding machine learning solutions.

Nevertheless, Europe faces very particular challenges: machine learning technologies are not readily accepted by its population. Machine learning will succeed in Europe only if its development and application respect Europe’s legal standards and values.

The advent of the “dark factory” (a fully robotized plant)?

The world of work will be affected to the degree that machine learning is absorbed into the various facets of business. Machine learning will clearly pave the way for workplace automation in many areas. But experts disagree about which types of jobs will be affected and how far the automation will go. Estimates suggest that between 5 and 47 percent of all workplace activities could be affected.

However, machine learning will also create jobs, particularly because specialists are needed to develop machine learning systems and optimize how they run. Not to mention human originality, creativity, and innovation, which will be needed more than ever: entirely new kinds of jobs will therefore emerge. Machine learning could also counteract the effects of Europe’s current labor shortage (a result of demographic change) and give companies less incentive to relocate production to low-wage countries.

The precise effects are hard to predict, but AI will likely be more evolutionary than revolutionary. “Most of these developments will arrive at some point and go far beyond what machine learning can do today,” says Markus Noga, head of the Machine Learning team at SAP. “We hold the reins and can play an active role in choosing what gets automated and to what extent. Ultimately, our goal is to strengthen human potential through technology, not to hinder it.”

For Jürgen Müller, chief innovation officer at SAP, the idea that we are witnessing the advent of the “dark factory” (operating with the lights off, since machines do not need light) is unrealistic: “Machine learning can automate very specific tasks, but AI is far from possessing all the facets of the human mind and may never be capable of it. In the future, work will rest primarily on human-machine interaction. It is therefore crucial that humans use AI to complement and strengthen their own abilities rather than try to compete with it.”

Which path forward? What does SAP recommend?

To remain competitive, Europe must seize the rich opportunities offered by the B2B market. That means addressing legitimate concerns. SAP’s thought-leadership paper offers specific recommendations for European governments and companies on how to join forces to accelerate the adoption and advancement of AI technologies in Europe.

In this area, it is important to establish a social dialogue among the parties concerned (policymakers, businesses, society), both within each European Union member state and at the level of the European Union as a whole, in order to develop a shared vision of AI in Europe.

SAP advises pursuing a uniform legal framework within the EU that enables progress in AI development, along with the establishment of large-scale research and innovation clusters. This approach would guarantee optimal collaboration and allow the use of large datasets for more robust and reliable machine learning models, as well as more effective research projects on the future of work.

Promoting machine-learning-relevant skills and abilities in the workforce is also a top priority. This means not only preparing future entrants to perform tasks in an AI-based environment, but also ensuring, across the entire industry, that the current workforce receives the appropriate additional training and qualifications.

Another major imperative for developing future machine learning solutions is the availability of training data. SAP therefore recommends easing technical and administrative barriers (within the limits of current data protection requirements) to enable the use of data, for example via the European Data Portal for public administration data.

SAP also proposes establishing a code of conduct to guarantee sound AI governance and business practices, under which industry players would adhere to basic principles and follow specific procedures that ensure ethical and legal standards are upheld in the development and use of machine learning solutions.

SAP further believes that Europe’s public sector has a responsibility to pioneer the use of AI, to make its benefits tangible to citizens and foster a better understanding of AI’s capabilities. The same goes for small and midsize enterprises, the backbone of the European economy; AI adoption in this sector represents a tremendous opportunity to accelerate their digital transformation.

“People must be at the center of any discussion about artificial intelligence,” said Bernd Leukert, member of the Executive Board of SAP SE, Products & Innovation. “To respond to people’s concerns while seizing economic opportunities, Europe must find its own path for developing and using artificial intelligence. The technology industry must foster trust in this area. At SAP, we want to play a leading role here.”

Article first posted on February 6, 2018 on news.sap.com

The post L’avis de SAP sur l’importance de l’intelligence artificielle en Europe appeared first on SAP France News.

Article source: SAP website

In a previous article, we discussed the Apache Ignite Machine Learning Grid. At that time, a beta release was available. Subsequently, in version 2.4, machine learning became generally available. Since the 2.4 release, more improvements and developments have been added, including support for partition-based datasets and genetic algorithms. Many of the machine learning examples provided with Apache Ignite work standalone, making it very easy to get started. Later in this series of articles, however, we will perform some analysis on several freely available datasets using some of the algorithms that Ignite supports.

Introduction

In this first article, we will begin with an overview and introduction to the Machine Learning Grid, graphically shown in Figure 1.


Source of the article on DZONE

A comprehensive and thoughtful SEO strategy is what you would turn to if your goal is to improve your website’s visibility and grow traffic and revenue respectively.

While off-page tactics like link building still remain at the top of the agenda, on-page SEO is no less important in the age of semantic search.

Search engines’ attention has gradually shifted from authority alone toward the quality of the content you provide, its structure, its relevance, and the overall user experience, so taking care of those aspects also plays a major role in succeeding online.

In the past, SEO tags proved to have a significant impact on rankings, but now tags are one of the most controversial aspects of on-page SEO, surrounded by debates.

Which tags are obsolete now? Which ones are as crucial as ever?

To answer these questions, it’s important to understand the role of each type of tag and evaluate the impact it may have in terms of user- and search-friendliness.

The Changing Role of Tags

Whether these are meta tags like title and description, or other tags classifying or organizing the content – the way we use tags and their relative impact on rankings has naturally changed over the years.

As the search engines got smarter at reading and interpreting data, using all kinds of tags in a manipulative manner has become obsolete. However, new tags and new ways of organizing data entered the game, and by changing the approach a bit, one can make great use of both old and new ones.

Let’s dive into the variety of tags and investigate their SEO importance.

Title Tags

A title tag is an HTML element in the <head> section that specifies the title of a webpage. It typically appears as a clickable headline in the SERPs and also shows up on social networks and in browsers.
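In markup, the title is a plain text element inside the page's <head> (the store name and wording below are purely illustrative):

```html
<head>
  <!-- Shown as the clickable headline in SERPs, browser tabs, and shares -->
  <title>Women's Trail Running Shoes – Free Shipping | ExampleStore</title>
</head>
```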

Title tags are meant to provide a clear and comprehensive idea of what the page’s content is about. But do they have a major impact on rankings as they used to for many years?

On the one hand, they are no longer “a cure for all ills,” as explicit keyword stuffing just doesn’t seem to convince Google anymore. On the other hand, well-written optimized titles and higher rankings still do go hand in hand, even though the direct correlation got weaker.

Over the past few years, user behavior factors have been discussed a lot as logical proof of relevance and thus a ranking signal – even Google representatives admit their impact here and there.

The page's title is still the first thing a searcher sees in the SERPs when deciding whether the page is likely to answer the search intent. A well-written one may increase the number of clicks and traffic, which have at least some impact on rankings.

A simple experiment can also show that Google no longer needs your title tag to include an exact match keyword to know the topic the page covers.

For instance, if you search for [how to build brand awareness] on Google, you’ll only see one result (Position 7) in the top 10 with the exact match phrase in the title:

[Screenshot: Google SERP for “how to build brand awareness”]

This shows how search engines are getting more powerful in reading and understanding the content and the context rather than relying on keyword instances alone.

You can see how the title isn’t the cure-all, but is a crucial piece of the puzzle that proves your page is relevant and rank-worthy.

Search engines now take a more comprehensive picture into account and tend to evaluate the page's content as a whole, but the cover of a book still matters – especially when it comes to interaction with searchers.

Following best SEO practices, you should:

  • Give each page a unique title that describes the page’s content concisely and accurately.
  • Keep titles 50-60 characters long so they don't get truncated in the SERPs.
  • Put important keywords first, but in a natural manner, as if you were writing titles for your visitors in the first place.
  • Make use of your brand name in titles.

Meta Description Tags

A meta description is another snippet of text, placed in the <head> of a webpage and commonly displayed in a SERP snippet along with the title and page URL. The purpose of a meta description is to reflect the essence of a page, but with more details and context.
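In the markup, a meta description might look like this (the content is illustrative):

```html
<head>
  <!-- Commonly shown as the SERP snippet text under the title -->
  <meta name="description" content="Browse women's trail running shoes with free shipping and 30-day returns.">
</head>
```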

It's no secret that the meta description hasn't been an official ranking factor for almost a decade now. However, its importance lies close to that of the title tag, as it impacts how a searcher interacts with your site.

  • The description occupies the largest part of a SERP snippet and is a great opportunity to invite searchers to click on your site by promising a clear and comprehensive solution to their query.
  • The description impacts the number of clicks you get, and may also improve CTR and decrease bounce rates if the page's content indeed fulfills the promises. That's why the description must be as realistic as it is inviting and distinctly reflect the content.

Surely, no description can perfectly match absolutely all queries you may rank for.

Your meta description can be any length you want, but Google typically shows only around 160 characters in the SERPs – and the snippet Google uses for your site may not be the meta description you've written, depending on the query.

Following best SEO practices, you should:

  • Give each page a unique meta description that clearly reflects what value the page carries.
  • Keep descriptions around 150-160 characters long (including spaces), as Google's snippets typically max out around that length.
  • Include your most significant keywords, but don’t overuse them. Write for people.
  • Optionally, use an eye-catching call-to-action, a unique proposition you offer, or additional hints on what to expect – ‘Learn’, ‘Buy’ constructions, etc.

Heading Tags (H1-H6)

Heading tags are HTML tags used to identify headings and subheadings within your content from other types of text (e.g., paragraph text).

The hierarchy goes from H1 to H6, historically in order of "importance." The H1 is the main heading of a page (visible to users, unlike the meta title) and the most prominent tag showing what the page is about. H2-H6 are optional tags for organizing the content in a way that's easy to navigate.
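A typical heading hierarchy might be structured like this (the topic is illustrative):

```html
<!-- One H1 per page; H2/H3 nest logically beneath it -->
<h1>A Beginner's Guide to Trail Running</h1>

<h2>Choosing the Right Shoes</h2>
<h3>Cushioning and Drop</h3>
<h3>Grip and Outsole</h3>

<h2>Planning Your First Route</h2>
```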

The usage of heading tags these days is a source of some debate. While H2-H6 tags are considered less important to search engines, proper usage of the H1 tag has been emphasized in many industry studies. Conversely, clumsy usage of H1s may keep a site from major ranking and traffic improvements.

Utilizing heading tags certainly contributes to the architecture of the content.

  • For search engines, well-organized content is easier to read and understand than content riddled with structural issues.
  • For users, headings are like anchors in a wall of text, navigating them through the page and making it easier to digest.

Both these factors raise the importance of careful optimization, where small details add up to the big SEO- and user-friendly picture and can lead to ranking increases.

Following best SEO practices, you should:

  • Give each page a unique H1 reflecting the topic the page covers, using your primary keywords in it.
  • Use H2-H6 tags where appropriate (normally, there’s no need to go further than H3), using secondary keywords relevant to each paragraph.
  • Don’t overuse the tags and the keywords in them. Keep it readable for users.

Italic/Bold Tags

Italic and bold tags can be used to highlight the most important parts of the content and to add semantic emphasis to certain words.
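In HTML, the semantic variants <strong> and <em> are generally preferred over the purely visual <b> and <i>, since they convey emphasis rather than just styling (the sentence below is illustrative):

```html
<p>Our shoes are tested on <strong>real mountain trails</strong>,
not just in <em>simulated</em> lab conditions.</p>
```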

In terms of SEO, it is commonly said that bots may appreciate such little tweaks but won't care too much, really.

So these are not crucial tags to utilize, yet they may improve readability and user experience, and that will never hurt – bots tend to appreciate what searchers appreciate.

Following best SEO practices, you should:

  • Only use these tags where it really makes sense. Steer clear of excessive use.
  • Scan a piece of content as a whole, to make sure it isn’t overloaded with accents and is comfortable to read and digest.

Meta Keywords Tags

At the beginning of the optimization race, meta keywords were small snippets of text, visible only in the code, that were supposed to tell the search engines which topics the page relates to.

Naturally, over the years the tag turned into a breeding ground for spamming and stuffing, instead of honestly optimizing the content.

Now, it's a well-known fact that Google ignores meta keywords completely – they neither impact rankings nor cause a penalty if you stuff them.

Bottom line: meta keywords are pretty much obsolete and not worth wasting too much of your time on.

Image Alt Tags

The image alt tag is an HTML attribute added to an image tag to describe its contents. Alt tags are important in terms of on-page optimization for two reasons:

  • Alt text is displayed to visitors if any particular image cannot be loaded (or if the images are disabled).
  • Alt tags provide context, because search engines can’t “see” images.
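A descriptive alt attribute might look like this (the file name and wording are illustrative):

```html
<img src="trail-shoe-blue.jpg"
     alt="Blue women's trail running shoe with a reinforced toe cap">
```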

For ecommerce sites, images often have a crucial impact on how a visitor interacts with a page.

Google also says it outright: helping search engines understand what the images are about and how they go with the rest of the content may help them serve a page for suitable search queries.

Additionally, a clear and relevant description digestible for search engines raises your chances to appear among Google Images results.

Following best SEO practices, you should:

  • Do your best to optimize the most prominent images (product images, infographics, or training images) – images that are likely to be looked up in Google Images search.
  • Add alt text on pages where there's not much content apart from the images.
  • Keep the alt text brief and clear, use your keywords reasonably, and make sure they fit naturally into the rest of the page's content.

Nofollow Link Tags

External/outbound links are the links on your site pointing to other sites. Naturally, these are used to refer to proven sources, point people towards other useful resources, or mention a relevant site for some other reason.

These links matter a lot for SEO: they can make your content look like a hand-crafted comprehensive piece backed up by reliable sources, or like a link dump with not so much valuable content.

Google is well known for its severe antipathy to manipulative linking tactics, which can cause a penalty, and it keeps getting smarter at detecting them.

Apart from that, in the age of semantic search, Google may treat the sources you refer to as the context, to better understand the content on your page. For both these reasons, it’s definitely worth paying attention to where you link, and how.

By default, all hyperlinks are dofollow, and when you place a dofollow link on your site, you basically ‘cast a vote of confidence’ to the linked page.

When you add a nofollow attribute to a link, it instructs search engines’ bots not to follow the link (and not to pass any link equity). Keeping your SEO neat, you would preserve a healthy balance between follow and nofollow links on your pages, but would normally set the following kinds of links to nofollow:

  • Links to any resources that in any way can be considered as “untrusted content.”
  • Any paid or sponsored links (you wouldn’t want Google to catch you selling your “vote”).
  • Links from comments or other kinds of user-generated content which can be spammed beyond your control.
  • Internal “Sign in” and “Register” links, following which is just a waste of crawl budget.
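In markup, the difference is a single rel attribute (the URLs below are illustrative):

```html
<!-- Regular (dofollow) editorial link: passes link equity -->
<a href="https://example.com/study">Original study</a>

<!-- Paid or untrusted link: equity withheld -->
<a href="https://example.com/partner-offer" rel="nofollow">Partner offer</a>
```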

Robots Tags

A page-level noindex tag is an HTML element that instructs search engines not to index a given page. A nofollow tag instructs them not to follow any links on that page.
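Both directives live in a robots meta tag in the page's <head>; they can be combined or used separately:

```html
<head>
  <!-- Keep this page out of the index AND don't follow its links -->
  <meta name="robots" content="noindex, nofollow">

  <!-- Alternative: stay out of the index but still pass link signals -->
  <!-- <meta name="robots" content="noindex, follow"> -->
</head>
```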

While these tags don’t correlate with rankings directly, in some cases they may have some impact on how your site looks in the eyes of search engines overall.

For instance, Google highly dislikes thin content. You may not generate it intentionally, but you may happen to have pages that offer little value to users yet need to be on the site for some reason.

You may also have “draft” or placeholder pages that you need to publish while they are not yet finished or optimized to their best. You probably wouldn’t want such pages to be taken into account while evaluating the overall quality of your site.

In some other cases, you may want certain pages to stay out of SERPs as they feature some kind of special deal that is supposed to be accessible by a direct link only (e.g., from a newsletter).

Finally, if you have a sitewide search option, Google recommends closing custom results pages, which can be crawled indefinitely and waste the bot's resources on non-unique content.

In the above cases, noindex and nofollow tags are of great help, as they give you certain control over your site as it’s seen by the search engines.

Following best SEO practices, you should:

  • Close unnecessary/unfinished pages with thin content that have little value and no intent to appear in the SERPs.
  • Close pages that unreasonably waste crawl budget.
  • Double-check that you don't mistakenly restrict important pages from indexing.

Canonical Tags

The canonical tag (rel="canonical") is a way of telling search engines which version of a page you consider the main one and would like to have indexed by search engines and found by people.

It’s commonly used in cases when the same page is available under multiple different URLs, or multiple different pages have very similar content covering the same subject.
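For example, a duplicate or parameterized version of a page would point to the preferred URL like this (the URL is illustrative):

```html
<head>
  <link rel="canonical" href="https://example.com/shoes/trail-runners">
</head>
```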

Internal duplicate content is not treated as strictly as copied content, as there’s usually no manipulative intent behind it. Yet this may become a source of confusion to search engines: unless you indicate which URL is the one you prefer to rank with, search engines may choose it for you.

The selected URL gets crawled more frequently, while the others are left behind. You can see that while there's almost no penalty risk, such a state of affairs is far from optimal.

Another benefit is that canonicalizing a page makes it easier to track performance stats associated with the content.

John Mueller also mentions that using a rel=canonical for duplicate content helps Google consolidate all your efforts and pass the link signals from all the page’s versions to the preferred one. That is where using the canonical tag may help you steer the SEO effort in one direction.

Following best SEO practices, you should canonicalize:

  • Pages with similar content on the same subject.
  • Duplicate pages available under multiple URLs.
  • Versions of the same page with session IDs or other URL parameters that do not affect the content.

Schema Markup

Schema markup is a shared markup vocabulary recognized by search engines, letting you organize data in a logical way. It has been on everyone’s lips lately as one of the most underrated tweaks.

A “semantic web” is a “meaningful web,” where the focus shifts from keyword instances and backlinks alone to the concepts behind them and the relationships between those concepts. Structured data markup is exactly what helps search engines not only read the content but also understand what certain words relate to.

The SERPs have evolved so much that you may not even need to click through the results to get an answer to your query. But if one is about to click, a rich snippet with a nice pic, a 5-star rating, specified price-range, stock status, operating hours or whatever is useful – is very likely to catch an eye and attract more clicks than a plain-text result.

Assigning schema tags to certain page elements makes your SERP snippet rich on information that is helpful and appealing for users. And, back to square one, user behavior factors like CTR and bounce rate add up to how search engines decide to rank your site.
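As a sketch, a product page might embed JSON-LD markup like this (all values are illustrative; see schema.org for the full vocabulary):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Runner 2000",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  },
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```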

Following best SEO practices, you would:

  • Study available schemas on schema.org.
  • Create a map of your most important pages and decide on the concepts relevant to each.
  • Implement the markup carefully (using Structured Data Markup Helper if needed).
  • Thoroughly test the markup to make sure it isn’t misleading or added improperly.

Social Media Meta Tags

Open Graph was initially introduced by Facebook to let you control how a page looks when shared on social media. It is now recognized by Google+ and LinkedIn as well. Twitter Cards offer similar enhancements but are exclusive to Twitter.

By using these social media meta tags, you can provide a bit more information about your page to social networks. By enhancing the appearance, you make the shared page look more professional and inviting, and increase the likelihood of clicking on it and sharing it further. This is not a crucial tweak, but it’s an absolutely nothing-to-lose one, with a couple of potential benefits.
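A minimal set of Open Graph and Twitter Card tags in the page's <head> might look like this (the URLs and text are illustrative):

```html
<head>
  <meta property="og:title" content="A Beginner's Guide to Trail Running">
  <meta property="og:description" content="Everything you need to get started on the trails.">
  <meta property="og:image" content="https://example.com/images/trail-guide.jpg">
  <meta property="og:url" content="https://example.com/blog/trail-running-guide">
  <meta name="twitter:card" content="summary_large_image">
</head>
```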

To ensure your pages look good when shared across social media platforms, add the relevant Open Graph and Twitter Card tags to the <head> of each shareable page.

Viewport Meta Tag

The viewport meta tag allows you to configure how a page is scaled and displayed on any device. Commonly, the tag and its values look as follows:

<meta name="viewport" content="width=device-width, initial-scale=1">

Here "width=device-width" makes the page match the screen's width in device-independent pixels, and "initial-scale=1" establishes a 1:1 relationship between CSS pixels and device-independent pixels, taking screen orientation into account.

This tag is a no-brainer to add, and a single screenshot from Google's documentation is enough to show the difference it makes.

The viewport meta tag has nothing to do with rankings directly but has a ton to do with the user experience, especially considering the variety of devices being used nowadays and the noticeable shift to mobile browsing.

In the same way as many of the tags and tweaks above, taking care of it will be appreciated by users (or, more likely, not taking care of it will be noticed), and your CTR and bounce rates will reflect the small efforts you make.

Conclusion

To get the most out of your on-page strategy, don't neglect the small tweaks that add up to the big picture.

As of now, some tags are still must-haves, as they make up the taxonomy of your page; others are not vital but can put you one rich snippet ahead of competitors who just didn't bother.

Small changes that improve user experience and help search engines understand your site better will be appreciated by both sides, and will definitely pay off in the long run.


Image Credit

Screenshot taken by author (from Google Developers), June 2018

By Search Engine Journal Source : https://ift.tt/2sNS4GJ

Hello everyone,

With 6,150+ members, SAP HANA customers, partners, and experts are part of an exclusive international community supporting your SAP HANA implementations and adoption initiatives.

This community, known as the SAP HANA International Focus Group (iFG), gives you and your team access to face-to-face activities as well as our acclaimed webinar series to help you deliver SAP HANA solutions to your organization.

We’re excited to bring many compelling SAP HANA topics to you post SAPPHIRE NOW 2018! In his keynote, SAP co-founder Hasso Plattner shared the SAP vision for the intelligent enterprise and the essential importance of SAP HANA. Many of these concepts are shared with the community throughout the year!

Register today for the June – July webinars: (Click on the links. See details below.)

We encourage you to register below, and join us! If you are already an iFG member, you have access to past webinar recordings and materials.

If you are new to this global SAP HANA community, then register for access @ www.SAPHANACommunity.com.

Best regards,

Scott Feldman, Esquire
Global Head, SAP HANA Customer Community
SAP SE
*

Upcoming Sessions:

June 12
Introduction to SAP HANA as a Service
*
In this session, Jeff Wootton and Rudi Leibbrandt will introduce the latest enhancements to SAP HANA as a Service and discuss the roadmap and vision for the offering.
*
Speakers:

  • Jeff Wootton
    Senior Director, SAP HANA Product Management, SAP
  • Rudi Leibbrandt
    Senior Director, SAP HANA Product Management, SAP America

Time: 8am PST, 11am EST, 5pm CEST
*
Click here to register >>

June 13
What’s New with SAP Data Hub
*
Join this webcast to learn how SAP Data Hub helps companies accelerate and expand the flow of data across complex, diverse data landscapes by providing unprecedented data operations management and metadata governance; creating powerful, organization-spanning data flows; and delivering results by leveraging existing data-processing investments.
*
Speaker:

  • Tobias Koebler
    Product Manager for SAP Data Hub, SAP

Time: 8am PST, 11am EST, 5pm CEST
*

Click here to register >>

June 19
An Introduction to Workload Management in SAP HANA
*
Workload management is essential for safeguarding system performance and offers various approaches for customers to tackle issues. Join this webinar, in which Product Management will give an overview of the functionality in general, as well as introduce new features and UI-related improvements in SAP HANA 2.0 SPS03.
*
Speaker:

  • Lucas Kiesow
    Product Manager for SAP HANA, SAP SE

Time: 8am PST, 11am EST, 5pm CEST
*
Click here to register >>

June 21
Accelerate your Adaptive Server Enterprise (ASE) based data-driven insights to near real-time with HANA
*
Learn how SAP HANA can accelerate your current data-driven insights on SAP Adaptive Server Enterprise (ASE) from several hours to a few minutes, with little to no changes to your existing code. Using best practices from recent successes, this session will explain how HANA provides acceleration that reduces your runtime to a fraction.
*
Speaker:

  • Kaleem Aziz
    Senior Product Management, SAP

Time: 8am PST, 11am EST, 5pm CEST
*
Click here to register >>

June 28
#askSAP: Become an Intelligent Enterprise with SAP Analytics for SAP S/4HANA
*
Join us on June 28 to hear from our panel of experts and users how SAP Analytics helps SAP S/4HANA customers seize the opportunity to drive strategic initiatives across the entire organization.
*
Speakers:

  • Owen Pettiford, SVP SAP Digital Transformation, BackOffice (Moderator)
  • Ido Shamgar, Sr. Director, Global Product Marketing, SAP S/4HANA & SAP S/4HANA Cloud, SAP
  • Tom Chelednik, Global Business Development Executive, SAP
  • Christian Blumhoff, Senior Director, Global Center of Excellence for Analytics, SAP
  • Tom Pollock, Head of Smart Information Management, Northern Gas Networks
  • Kevin McConnell, Machine Learning Solutions, Leonardo & Analytics Global CoE, SAP

Time: 8am PDT, 11am EDT, 5pm CEST
*
Click here to register >>

July 3 — *APJ Time Zone*
Create Powerful, Enterprise-Wide Data Pipelines with SAP Data Hub
*
Attend this webinar to learn more about SAP Data Hub. Topics include:

  • The major use cases for SAP Data Hub
  • SAP Data Hub key features and architecture
  • A live demo of SAP Data Hub in action

Speaker:

  • Tobias Koebler
    Product Manager for SAP Data Hub, SAP

Time: 8am CEST, 11pm PST, 2pm CST
*
Click here to register >>

July 11
Adaptive Server Enterprise (ASE) Cloud Overview including Subscription Model Support
*
SAP Adaptive Server Enterprise (ASE) continues to extend its footprint into the public cloud environment. Learn about certifications with the major cloud vendors to provide new ways to use it for dev/test, disaster recovery and production environments. Learn about our cloud strategy, new licensing models (including subscription pricing), use cases and best practices.
*
Speaker:

  • Andrew Neugebauer
    Director of Product Management, SAP

Time: 8am PST, 11am EST, 5pm CEST
*
Click here to register >>

July 11
Migration to SAP HANA: Roadmap to Success
*
Learn how Atos helped Siemens successfully migrate to SAP HANA. Hear the methodology, roadmap, and best practices for a successful SAP HANA migration from both Atos and Siemens during this webcast.
*
Speakers:

  • Martin Pfeil
    CTO Global Siemens Account, Atos
  • Stefanie Wessely
    Global Service Architect SAP, Global Product Practice SAP, Atos
  • Stephan Baehr
    Global Technical Architect SAP, Atos
  • Gerd Zimmermann
    Head of IT, Siemens

Time: 8am PST, 11am EST, 5:00pm CEST
*
Click here to register >>

July 18
SAP HANA Goes Blockchain
*
For enterprise customers, the new SAP HANA Blockchain service lowers the barrier to blockchain adoption by unifying blockchain data with an existing enterprise landscape, without users having to understand the complexities of blockchain systems and networks.
*
Speakers:

  • Andreas Schuster
    Product Management, SAP
  • Karen Sun
    Product Marketing, SAP

Time: 11am PST, 2pm EST
*
Click here to register >>

July 24
Unlock Real-Time Insights with Geo-Driven Decision Making featuring Chris Cappelli from Esri and Matt Zenus from SAP
*
Learn how you can use SAP HANA and the spatial capabilities to provide a one-stop shop to geo-enable your intelligent enterprise. We’ll also discuss the power of Esri ArcGIS, with HANA as the enterprise geodatabase.
*
Speakers:

  • Matthew Zenus
    Global Vice President, Database & Data Management, SAP
  • Christopher Cappelli
    Director, Global Business Development, Esri

Time: 8am PST, 11am EST, 5:00pm CEST
*
Click here to register >>

Join the Community!

Be a part of the exclusive SAP HANA International Focus Group (iFG) Jam Community today!
*
This community provides a single, central global location for unique SAP HANA updates only available to our members. Enjoy these benefits:

  • Global SAP HANA Expert and leadership insights exclusive to our community
  • Early access to SAP HANA related product updates and upcoming events
  • Receive the latest information and updates (webinars, recordings, and focused content)
  • HANA Spotlight webcasts from our customer successes

Don’t miss out on the latest SAP HANA updates!

Join the SAP HANA iFG Community today – CLICK HERE if you’re already a member. If not, please register @ www.SAPHANACommunity.com

SAP HANA International Focus Group Sponsors

We’d like to thank our sponsors, NetApp, HPE, and oXya for their continued support of the community.

If you’re already a member of the SAP HANA iFG Community, check out their work with SAP HANA HERE. If not, please register @ www.SAPHANACommunity.com

 

The post SAP HANA Expert Webinar Series – June/July 2018 appeared first on SAP HANA.

By SAP HANA Source : https://ift.tt/2Jy3Yf0