Articles

In computer science, there is nothing more fundamental and intuitive than control structures. Every student learns about them in the first few weeks of any computer science program. We could not code without them, period. But they are not set in stone: we did get rid of the infamous GOTO in the 80s!

At the pre-ALGOL meeting held in 1959, Heinz Zemanek explicitly threw doubt on the necessity for GOTO statements; at the time no one paid attention to his remark, including Edsger W. Dijkstra, who later became the iconic opponent of GOTO.[3] The 1970s and 1980s saw a decline in the use of GOTO statements in favor of the "structured programming" paradigm, with GOTO criticized as leading to "unmaintainable spaghetti code".

Source of the article on DZONE

Message and event payload validation has been a rather thorny problem ever since extensible data structures (XML, JSON, YAML…) started to be used at scale. In fact, very little progress has been made since the good old days of DTDs. Schema definition languages such as XML Schema, JSON Schema, or even the OpenAPI schema are unfamiliar to most developers and often result in a rather anemic set of validation rules, leading to perceived low value and, therefore, a lack of interest.
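
For developers who have never worked with one of these schema languages, here is a minimal sketch of what json-schema validation can look like in practice. It uses the third-party Python package jsonschema, and the schema and payload are made-up illustrations, not something prescribed by the article:

```python
# Minimal json-schema validation sketch (illustrative schema and payload).
# Requires the third-party "jsonschema" package: pip install jsonschema
from jsonschema import ValidationError, validate

order_schema = {
    "type": "object",
    "properties": {
        "orderId": {"type": "string"},
        "quantity": {"type": "integer", "minimum": 1},
    },
    "required": ["orderId", "quantity"],
}

payload = {"orderId": "A-1001", "quantity": 0}

try:
    validate(instance=payload, schema=order_schema)
    print("Payload is valid")
except ValidationError as err:
    # quantity=0 violates "minimum: 1", so this branch runs for the payload above.
    print(f"Invalid payload: {err.message}")
```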

There are three key problems in a schema architecture: 

Source of the article on DZONE

Many websites today use some type of traditional Content Delivery Network (CDN), which brings improvements in website load times, reductions in bandwidth, and better redundancy and security. But not everything is optimized, particularly when it comes to images, and image CDNs can help with that!

Traditional vs. Image CDNs

A traditional CDN treats images as static. If you want to tailor images to better match various mobile device types, then you need to create many variants of each image and upload them to your web server. It also means you must develop responsive code that tells the server and CDN which image variant to deliver. This is clunky, time-consuming, and inefficient. For a large website, the amount of code needed can be astronomical. Using this static image model, there's just no realistic way for each image to be effectively sized and compressed for every possible device model; at this point, there are thousands of them. The combination of these two factors, the sheer volume of code required and the impossibility of covering every device, leads to potentially slow load times and poor UX caused by oversized images delivered to mobile devices.

So what is an image CDN? An image CDN builds on the traditional CDN model with the addition of device detection and image optimization. Instant detection of the device model and browser requesting the images is done right at the device-aware edge server (true edge computing!). Additional information, including screen resolution and dimensions, pixels per inch, and support for next-gen image formats (such as WebP, JPEG 2000/JP2, and AVIF), provides even more details crucial for superior image optimization. Using this information derived from device-aware edge servers, the image CDN optimizes each image and serves the perfect version for each device and resolution, meaning users get the finest webpage experience faster.

A Bit About the Edge (Whoa, Living on the Edge?)

With a single-server website, a web request has to travel from the requestor to the origin server (wherever that is geographically located), be processed, and then travel back to the requestor. Depending on the physical distance between the requestor and the origin server, this can introduce a great deal of latency, which means lag time on page loads.

A traditional content delivery network (CDN) is a global network of servers that optimizes web performance by using the node geographically closest to the user for faster delivery of assets. It takes static content, like images, and stores it on the edge. But usually, these edge servers are relatively simple in terms of their role in business processes. They mostly index, cache, and deliver content. And traditional CDNs like to keep edge servers simple because of concerns over CPU usage, storage, and scalability.

But what if these edge servers could also provide computing power that enhances performance and business processes? This is called edge computing. Slowly, CDNs are starting to open their edge servers to allow enterprises to deploy apps and services on the edge. Likewise, cloud computing networks (e.g., AWS, Azure, Google Cloud) provide virtualized server capacity around the world for those who want to use geographically distributed servers. In a sense, edge computing is a marriage of the CDN (where edge servers synchronize and work with each other) and cloud computing (where servers are open to applications).

Edge computing is a fascinating concept, but what is the killer app that will enhance business processes and improve website performance? Adding device detection to edge computing makes it possible to move from delivering static images to a new model where images are dynamic and tailored exactly to devices.

Edge computing is computing done in a geographically distributed space, with many servers located at or near the source of the web request. Because processing happens close to the user, bandwidth use and latency drop, which leads to fast processing times, increased site speed, and improved customer experience. And edge computing doesn't require new infrastructure: it leverages the networks of existing providers to create Points of Presence (POPs) around the globe.

The Edge Servers are…Aware?

Device-aware edge servers, like those used by the ImageEngine image CDN, take edge computing to a new level. Device detection is actually one of the use cases where edge computing really shines. Normally, the edge server would have to send a JavaScript query to the device to figure out the requesting device's model, browser, and operating system. But with a device-aware edge server, the User-Agent string is captured and decoded. It contains all of the information necessary for device detection without the need for any back and forth, which is a definite speed improvement. So you're starting ahead of the game!
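
To make the idea concrete, here is a toy Python sketch of what decoding a User-Agent string might look like. This is emphatically not ImageEngine's implementation: a real device-aware edge server matches the string against a full device-description database rather than a handful of substring checks.

```python
# Toy User-Agent "decoding" sketch; a production device-aware edge server
# uses a complete device database instead of these simplistic checks.
def describe_device(user_agent: str) -> dict:
    ua = user_agent.lower()
    if "iphone" in ua:
        family = "iPhone"
    elif "android" in ua and "mobile" in ua:
        family = "Android phone"
    elif "android" in ua:
        family = "Android tablet"
    else:
        family = "Desktop"
    # Simplification: real servers also consult device data and the Accept
    # header to determine next-gen format support, not just the UA string.
    supports_webp = "chrome" in ua or "android" in ua
    return {"family": family, "supports_webp": supports_webp}


print(describe_device(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) "
    "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.0 Mobile/15E148 Safari/604.1"
))
# {'family': 'iPhone', 'supports_webp': False}
```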

Each time a new request comes to the device-aware edge server, the image is processed by that server (that is, optimized for that specific device's parameters) and stored right there in the cache, primed for future use. This is done in three stages: resizing the image based on device resolution, compressing it using an image optimization tool, and selecting the most efficient file format for the device.
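
As a rough illustration of those three stages, the sketch below resizes, compresses, and re-encodes an image with the Pillow library. The target width, quality setting, and format choice are illustrative values, not ImageEngine's actual parameters, and the sketch assumes a Pillow build with WebP support (standard wheels include it):

```python
# Illustrative three-stage optimization: resize, compress, pick a format.
# Values (target width, quality=80) are arbitrary examples.
from PIL import Image  # pip install Pillow

def optimize_for_device(src_path: str, dst_path: str,
                        device_width: int, supports_webp: bool) -> None:
    img = Image.open(src_path)

    # Stage 1: resize to the requesting device's resolution (keep aspect ratio).
    if img.width > device_width:
        ratio = device_width / img.width
        img = img.resize((device_width, int(img.height * ratio)))

    # Stages 2 and 3: compress, and choose the most efficient format the
    # device supports (WebP here, falling back to JPEG).
    output_format = "WEBP" if supports_webp else "JPEG"
    img.convert("RGB").save(dst_path, format=output_format, quality=80)

# Example call (assumes a local "hero.png" exists):
# optimize_for_device("hero.png", "hero_mobile.webp",
#                     device_width=430, supports_webp=True)
```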

If the device-aware edge server has already processed a request from a similar device model before, then it can serve the device-optimized image from its edge cache, leading to a lightning-fast server response — and ImageEngine’s device-aware edge servers can serve up cached images 98% of the time! Not only is there geographical proximity because of the distributed global POP network, but the smaller size of the optimized image compared to the full-sized original cuts up to 80% off the image payload. This can cut up to several seconds off page load times. When almost 70% of people say that page speed influences their likelihood of making a purchase, every single second counts! 

Some image CDNs detect the device information, group devices into "buckets" of similar types, and serve an image based on that type. While this is certainly an advancement over a traditional CDN, and works passably well for some common devices, it still isn't a truly optimal solution. There are so many variants of browser, screen size, resolution, etc., even among very similar devices, that images are still often oversized (payloads that are too large) and lead to poor load speed. A true image CDN, such as ImageEngine, serves the perfect image for every device, every time.

So Now You Want To Get Started (Don’t Worry, It’s Really Simple)

One of the best things about the ImageEngine image CDN is the ease of integration, and it can integrate into any platform that supports a third-party CDN. All you need is to sign up for an account and receive a delivery address during the two-minute (yes, two!) signup process. This delivery address is used to redirect image traffic for optimization and superior delivery performance. Next, you'll have to make some slight adjustments to the img tags on your website, but that's really all the work you'll need to do. There are no DNS changes during a standard (generic delivery address) integration. You read that right, none at all. Contrast that with a traditional CDN integration, where there is just no way around some messing around in the DNS; in fact, usually some fairly extensive DNS changes.

This low-code, virtually no-code, integration saves you time. It saves you money. It saves you the hassle of putting multiple team members on a new project. And it means that you can be up and running in about 15 minutes with a standard install. You can be serving optimized images to your site visitors at blazing-fast speeds before lunch! And don't worry, ImageEngine has an experienced integration support team available to answer any questions you might have.

There's also no issue with adding the ImageEngine image CDN on top of an existing CDN. Your traditional CDN may have security features that you prefer to keep for your site. This setup requires slightly more integration work but provides the same benefits as a solo ImageEngine implementation: screaming-fast image load times and perfectly optimized images from device-aware edge servers. The only recommendation is that the ImageEngine image CDN actually serve the images directly, not simply process them, to get the maximum benefit.

Adopt an Image CDN and See The Benefits

We’ve learned that image CDNs bring numerous benefits to your site AND your business. Using device-aware edge servers, image CDNs provide measurably better UX to your visitors. Pages load potentially seconds faster with perfectly optimized images, meaning your customers get to the heart of your message right away, and you don’t lose potential sales. 

Image CDNs are actually 30%+ faster than most traditional CDNs, improving site speed accordingly. From an SEO perspective, that’s huge! And your SEO gets an additional boost from the improvement to your Largest Contentful Paint scores (which can help you gain valuable rank on Google’s SERPs). Implementation is simple and fast. You get all this, plus cost savings: since you have smaller payloads because of the fully optimized images, you’re delivering fewer gigabytes of data.


The post Image CDNs: How Edge Computing Provides a Faster Low Code Image Solution first appeared on Webdesigner Depot.

Source of the article on Webdesignerdepot

Many, if not all, data science projects require some data visualization front-end to display the results for humans to analyze. Python seems to boast the most potent libraries, but do not lose hope if you're a Java developer (or proficient in another language). In this post, I will describe how you can benefit from such a data visualization front-end without writing a single line of code.

The Use Case: Changes From Wikipedia

I assume that you are already familiar with Wikipedia. If you are not, Wikipedia is an online encyclopedia curated by the community. In their own words:

Source of the article on DZONE

By now you’re likely aware of JavaScript Object Notation (JSON). Heck, I’d be willing to bet that there’s even a good chance that you’ve used it for one reason or another. And, honestly, I’m sure that reason was a good one. JSON has become ubiquitous in the software industry because it provides developers with a simple and flexible way of managing data.

In the context of databases, JSON was often thought of as something you’d use with NoSQL solutions. However, over the past few years, JSON integrations have made their way into the relational world. And for good reason. The ability to store JSON documents within a relational database allows you to create hybrid data models, containing both structured and semi-structured data, and enjoy all of the benefits of JSON without having to sacrifice the advantages of relational databases (e.g. SQL and all things data integrity).
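
As a small illustration of that hybrid model, the sketch below uses SQLite from the Python standard library to keep structured columns and a JSON document side by side, and then queries inside the JSON with SQL. It assumes the bundled SQLite includes the JSON1 functions (most recent builds do), and the table and data are made up for the example:

```python
# Hybrid structured + semi-structured storage in a relational table.
# Assumes the bundled SQLite was compiled with the JSON1 functions.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id         INTEGER PRIMARY KEY,  -- structured, relational column
        customer   TEXT NOT NULL,        -- structured, relational column
        attributes TEXT                  -- semi-structured JSON document
    )
""")
conn.execute(
    "INSERT INTO orders (customer, attributes) VALUES (?, ?)",
    ("Acme", json.dumps({"priority": "high", "tags": ["export", "fragile"]})),
)

# SQL can still reach inside the JSON document.
row = conn.execute(
    "SELECT customer, json_extract(attributes, '$.priority') FROM orders"
).fetchone()
print(row)  # ('Acme', 'high')
```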

Source of the article on DZONE

At Intermine, I was tasked with creating new user training documentation. For this project, I entirely rewrote the Intermine user documentation (which included images, code snippets, tables, mathematical formulas, and more) using GitBook. This guide will share my experience creating technical documentation with GitBook and act as a de facto quick-start guide to GitBook.

What is GitBook?

GitBook is a collaborative documentation tool that allows anyone to document anything—such as products and APIs—and share knowledge through a user-friendly online platform. According to GitBook, “GitBook is a flexible platform for all kinds of content and collaboration.” It provides a single unified workspace for different users to create, manage and share content without using multiple tools. For example:

Source of the article on DZONE

Gartner predicts that by 2023, over 50% of medium to large enterprises will have adopted a Low-Code/No-Code application as part of their platform development.

The proliferation of Low-Code/No-Code tooling can be partially attributed to the COVID-19 pandemic, which has put pressure on businesses around the world to rapidly implement digital solutions. However, adoption of these tools, while indeed accelerated by the pandemic, would have occurred either way.

Even before the pandemic, the largest, richest companies had already formed an oligopsony around the best tech talent and most advanced development tools. Low-Code/No-Code, therefore, is an attractive solution for small and mid-sized organizations to level the playing field, and it does so by giving these smaller players the power to do more with their existing resources.

While these benefits are often realized in the short term, the long-term effect of these tools is often shockingly different. The promise of faster and cheaper delivery is the catch, or lure, inside this organizational mousetrap, whereas backlogs, vendor contracts, technical debt, and constant updates are the hammer.

So, what exactly is the No-Code trap, and how can we avoid it?

What is a No-Code Tool?

First, let's clear up any confusion regarding naming. So far I have referred to Low-Code and No-Code as if they were one term. It's certainly easy to confuse them (even large analyst firms seem to have a hard time differentiating between the two), and in the broader context of this article, both can lead to the same set of development pitfalls.

Under the magnifying glass, however, there are lots of small details and capabilities that differentiate Low-Code and No-Code solutions. Most of them aren't apparent at the UI level, which accounts for much of the confusion between the two.

In this section, I will spend a little time exploring the important differences between the two, but only to show that, when it comes to the central premise of this article, they are virtually equivalent.

Low-Code vs. No-Code Tools

The goal behind Low-Code is to minimize the amount of coding necessary for complex tasks through a visual interface (such as drag-and-drop) that integrates existing blocks of code into a workflow.

Skilled professionals have the potential to work smarter and faster with Low-Code tools because repetitive coding and duplicated work are streamlined. Through this, they can spend less time on the 80% of work that builds the foundation and focus more on optimizing the 20% that makes it different. Low-Code, therefore, takes on the role of an entry-level employee doing the grunt work for more senior developers and engineers.

No-Code has a very similar look and feel to Low-Code, but is different in one very important dimension. Where Low-Code is meant to optimize the productivity of developers or engineers who already know how to code (even if just a little), No-Code is built for business and product managers who may not know any actual programming languages. It is meant to equip non-technical workers with the tools they need to create applications without formal development training.

No-Code applications need to be self-contained: everything the No-Code vendor thinks the user may need is already built into the tool.

As a result, No-Code applications create a lot of long-term restrictions in exchange for quick results in the short term. This is a great example of a 'deliberate-prudent' scenario in the context of the Technical Debt Quadrant, but more on this later.

Advantages of No-Code Solutions

The appeal of both Low-Code and No-Code is pretty obvious. By removing code, organizations can remove those who write it (developers), because they are expensive, in short supply, and fundamentally don't produce things quickly.

The benefits of these two approaches, at their best, can be pretty substantial:
  • Resources: Human capital is becoming increasingly scarce, and therefore expensive. This can stop a lot of ambitious projects dead in their tracks. Low-Code and No-Code tools minimize the amount of specialized technical skill needed to get an application off the ground, which means things can get done more quickly and at a lower cost.
  • Low Risk/High ROI: Security processes, data integrations, and cross-platform support are all built into Low-Code and No-Code tools, meaning less risk and more time to focus on your business goals.
  • Moving to Production: Similarly, for both types of tools, a single click is all it takes to deploy a model or application you built to production.

Looking at these advantages, it is no wonder that both Low-Code and No-Code have been taking industries by storm recently. While distinctly different in terms of users, they serve the same goal: faster, safer, and cheaper deployment. Given these similarities, both terms will be grouped together under 'No-Code' for the rest of this article unless otherwise specified.

List of No-Code Data Tools

So far, we have covered the applications of No-Code in a very general way, but for the rest of this article, I would like to focus on data modeling. No-Code tools are prevalent in software development, but they have also started to take hold in this space in particular, and some applications even claim to be an alternative to SQL and other querying languages (crazy, right?!). My reasons for focusing on this are twofold.

Firstly, there is a lot of existing analysis around this problem for software development and very little for data modeling. Secondly, this is also the area in which I have the most expertise.

Now let's take a look at some of the vendors that provide No-Code solutions in this space. This is in no way a complete list, and these tools are, for the most part, not exclusively built for data modeling.

1. No-Code Data Modeling in Power BI

Power BI was created by Microsoft and aims to provide interactive visualizations and business intelligence capabilities to all types of business users. Its simple interface is meant to allow end users to create their own reports and dashboards through a number of features, including data mapping, transformation, and dashboard visualization. Power BI does support some R coding capabilities for visualization, but when it comes to data modeling, it is a true No-Code tool.

2. Alteryx as a Low-Code Alternative

Alteryx is meant to make advanced analytics accessible to any data worker. To achieve this, it offers several data analytics solutions. Alteryx specializes in self-service analytics with an intuitive UI. Its offerings can be used as Extract, Transform, Load (ETL) tools within its own framework. Alteryx allows data workers to organize their data pipelines through custom features and SQL code blocks. As such, it is easily identified as a Low-Code solution.

3. Is Tableau a No-Code Data Modeling Solution?

Tableau is a visual analytics platform and a direct competitor to Power BI. It was recently acquired by Salesforce, which is now hoping to 'transform the way we use data to solve problems—empowering people and organizations to make the most of their data.' It is also a pretty obvious No-Code platform that is meant to appeal to all types of end users. As of now, it offers fewer tools for data modeling than Power BI, but that is likely to change in the future.

4. Looker is a No-Code Alternative to SQL

Looker is a business intelligence software and big data analytics platform that promises to help you explore, analyze, and share real-time business analytics easily. Very much in line with Tableau and Power BI, it aims to make non-technical end-users proficient in a variety of data tasks such as transformation, modeling, and visualization.

You might be wondering why I am including so many BI/Visualization platforms when talking about potential alternatives to SQL. After all, these tools are only set up to address an organization’s reporting needs, which constitute only one of the use cases for data queries and SQL. This is certainly a valid point, so allow me to clarify my reasoning a bit more.

While it is true that reporting is only one of many potential uses for SQL, it is nevertheless an extremely important one. There is a good reason why there are so many No-Code BI tools on the market (to address growing demand from enterprises around the world), and it is therefore worth taking a closer look at their almost inevitable shortcomings.

Source of the article on DZONE

Novo Mesto is a small Slovenian town located on a picturesque bend of the Krka River. The town, whose origins date back to prehistoric times, has always known how to manage its resources intelligently. The idea of securing a clean environment for future generations is deeply rooted in the collective mindset. Citizens and tourists can swim in the river right in the city center.

"We are neither the first nor the last to live on this planet," says the town's deputy mayor, Bostjan Grobler. "Becoming a smart city is not a goal in itself. The goal is to preserve the health of our citizens and the cleanliness of our environment in order to offer sustainable jobs and attractive living spaces. Technology helps us get there."

Clean Air as a Starting Point

Like many other cities in Europe, Novo Mesto has been battling air pollution for the past ten years.

Pollution is especially high in winter, when measurements frequently show soot particles exceeding, several times a week, the particulate matter (PM) limit set by the European Union at 40 micrograms per cubic meter. There are different types of particulate matter. The most frequently measured are suspended particles with a diameter of 10 microns or less, known as PM10. To give you an idea, a micron is one millionth of a meter, and a human hair is about 75 microns thick.

According to the World Health Organization (WHO), the PM10 level should be below 20 micrograms per cubic meter. The German city of Mannheim, for example, records an annual average of 22 micrograms, compared with 27 in Novo Mesto. Even though these averages are low compared with Shanghai, which hovers around 84, they can lead to heart and lung disease as well as respiratory irritation, especially when they exceed 40 micrograms.

Novo Mesto posted high PM10 levels year after year, but city leaders did not know how to address the problem.

"It was obvious that we had to act," explains Peter Gersic, head of project development for the Municipality of Novo Mesto, "because air pollution does not go away on its own. But in all honesty, we did not know what to do with the data."

After some research, the municipality turned to SAP and Telekom Slovenia. Juraj Kovac, a Telekom analyst with the technical expertise needed to implement smart city solutions, explained how the solution works. Sensors were installed throughout the city to collect data not only on air pollution but also on other important environmental indicators, including water use and light pollution.

"We use SAP Leonardo to collect the data and SAP Analytics to analyze it," explains Juraj Kovac. "All our IoT platforms run on SAP Cloud Platform. The data is used by the municipality to make operational decisions and by citizens through mobile apps, for example to find parking spaces."

Improving Urban Life

The deputy mayor now understands that managing the city's resources is not just a matter for government. It is about helping citizens rethink the way they live. "If we want people to use their cars less, we have to offer them alternatives such as public transport and bike paths," says Bostjan Grobler. "It is not enough to motivate people to buy electric vehicles. We have to make sure they can easily park and charge them."

What Novo Mesto wants to achieve on a small scale through smart technology already exists in several cities around the world. From green buildings and sensor-based waste collection to expanded public transport and online municipal services, smart cities are revolutionizing urban life.

New York City, for example, was named the smartest city in the world for two consecutive years, in part for its use of an automated meter-reading system that helps it better understand how its 8.5 million residents use 1 billion gallons of water every day. London, which ranked second, was recognized for its public transit system and its urban planning policies.

The Toronto Transit Commission uses SAP technology to improve process visibility and communication for the staff who keep the city's public transit running. SAP IoT technology helps the city of Antibes better manage its water resources. The city of Nanjing uses SAP traffic sensors to build a greener, more people-centered culture.

Through its visionary use of technology to keep the city attractive and sustainable, Novo Mesto proves that any city, whatever its size, can set the standard in urban quality of life for generations to come.

Originally published in English on Forbes.com

The post Devenir une ville intelligente n’est pas un objectif, c’est un mode de vie appeared first on SAP France News.

Source of the article on sap.com