Articles

Hybrid cloud architectures are the new black for most companies. A cloud-first strategy is obvious for many, but legacy infrastructure must be maintained, integrated, and (maybe) replaced over time. Event streaming with the Apache Kafka ecosystem is a perfect technology for building hybrid replication in real time at scale.

App Modernization and Streaming Replication With Apache Kafka at Bayer

Most enterprises require a reliable and scalable integration between legacy systems such as IBM Mainframe, Oracle, SAP ERP, and modern cloud-native applications like Snowflake, MongoDB Atlas, or AWS Lambda.
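To make the integration pattern concrete, here is a minimal sketch (not from the article) of a bridge that publishes change events from a legacy source into Kafka using the confluent-kafka Python client. The topic name, broker address, and fetch_changes() helper are all hypothetical.

```python
# Sketch: relaying change events from a legacy system into Kafka, where
# cloud-native consumers (e.g., a Snowflake sink) can pick them up.
# The topic, broker address, and fetch_changes() are hypothetical.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "on-prem-broker:9092"})

def fetch_changes():
    # Placeholder for reading change records from a legacy source
    # (e.g., mainframe or Oracle CDC); returns a list of dicts here.
    return [{"order_id": 42, "status": "SHIPPED"}]

for event in fetch_changes():
    producer.produce(
        "legacy.orders",                 # topic replicated to the cloud cluster
        key=str(event["order_id"]),
        value=json.dumps(event),
    )

producer.flush()  # block until all queued messages are delivered
```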

Source of the article on DZONE

As a core component of continuous delivery, feature flagging empowers developers to release software faster, more reliably, and with more control. This Refcard provides an overview of the concept, ways to get started with feature flags, and how to manage features at scale.
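As a rough illustration of the concept, here is a minimal sketch of a runtime flag check with a deterministic percentage rollout; the in-memory FLAGS dict stands in for a real flag-management service.

```python
import hashlib

# In-memory stand-in for a feature-flag service; in production these
# values would be fetched from a flag-management backend.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 25},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is on for this user.

    Users are bucketed deterministically by hashing, so the same user
    always gets the same answer while the rollout percentage is fixed.
    """
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

if is_enabled("new_checkout", user_id="user-123"):
    pass  # serve the new code path
else:
    pass  # serve the stable code path
```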
Source of the article on DZONE


Introduction

While many of us are accustomed to executing Spark applications using the ‘spark-submit’ command, with the popularity of Databricks this seemingly routine activity is being relegated to the background. Databricks has made it very easy to provision Spark-enabled VMs on the two most popular cloud platforms, namely AWS and Azure. A couple of weeks ago, Databricks announced their availability on GCP as well. The beauty of the platform is how easy it is to get started on it. While Spark application development will continue to have its challenges (depending on the problem being addressed), Databricks has taken away the pain of having to establish and manage your own Spark cluster.

Using Databricks

Once registered on the platform, Databricks allows us to define a cluster of one or more VMs with configurable RAM and executor specifications. We can also define a cluster that launches a minimum number of VMs at startup and then scales to a maximum number of VMs as required. After defining the cluster, we define jobs and notebooks. Notebooks contain the actual code executed on the cluster, and we need to assign notebooks to jobs because the Databricks cluster executes jobs (not notebooks). Databricks also allows us to set up the cluster so that it downloads additional JARs and/or Python packages during cluster startup, and we can upload and install our own packages (I used a Python wheel).
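As a rough sketch of what such a notebook cell might contain: in a Databricks notebook the SparkSession is already available as spark, so the builder line below is only needed elsewhere, and the path and column names are illustrative.

```python
# Minimal sketch of a Databricks-style notebook cell. In a Databricks
# notebook, `spark` is preconfigured; outside Databricks you build one
# yourself, as below. The mount path and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example").getOrCreate()

orders = spark.read.parquet("/mnt/data/orders")   # hypothetical mount point
daily_totals = (
    orders
    .groupBy(F.to_date("created_at").alias("day"))
    .agg(F.sum("amount").alias("total"))
)
daily_totals.show()
```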

Source of the article on DZONE

If you are in the world of software development, you must be aware of Node.js. From Amazon to LinkedIn, a plethora of major websites use Node.js. Powered by JavaScript, Node.js can run on a server, and a majority of devs use it for enterprise applications because of the power it gives them to work with. And if you follow Node.js best practices, you can improve your application’s performance at scale.

Automation testing requires a very systematic approach to automating test cases and setting them up for seamless execution of any application. That means following a set of defined best practices for better results. To help you do that, we will let you in on the best Node.js tips for automation testing.

Source of the article on DZONE

Traditionally, testing has been perceived as a bottleneck in the SDLC, something that causes delays in delivery. Organizations have long adopted the Agile/DevOps model, but not without pitfalls and stumbling blocks, especially in achieving the ideal balance of speed and quality.

For enterprise DevOps, it is vital to rethink testing approaches to achieve agility at scale. There is considerable overlap between roles, for instance between business analysts and QA testers. Is the tester’s role diminished because of this overlap, or because of automation?

Source of the article on DZONE

In the era of web scale, every organization is looking to scale its applications on demand while minimizing infrastructure expenditure. Cloud-native applications such as microservices are designed and implemented with scale in mind, and Kubernetes provides the platform capabilities for dynamic deployment, scaling, and management.

Autoscaling and scale-to-zero are critical functional requirements for all serverless platforms, as well as for platform-as-a-service (PaaS) providers, because they help minimize infrastructure costs.
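The control loop behind this is simple to sketch. The toy Python below is not any platform’s actual implementation; it just shows the reasoning: compute a desired replica count from the request rate, and drop to zero when there is no traffic. The thresholds are made up.

```python
import math

# Toy control loop illustrating autoscaling with scale-to-zero.
# Real platforms (e.g., Kubernetes HPA or Knative) use richer metrics
# and stabilization windows; the numbers here are illustrative.
TARGET_RPS_PER_REPLICA = 50
MAX_REPLICAS = 10

def desired_replicas(current_rps: float) -> int:
    if current_rps == 0:
        return 0  # scale to zero: no traffic means no running instances
    return min(MAX_REPLICAS, math.ceil(current_rps / TARGET_RPS_PER_REPLICA))

for rps in [0, 12, 180, 700, 0]:
    print(f"{rps:>4} req/s -> {desired_replicas(rps)} replica(s)")
```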

Source of the article on DZONE

Since the skeuomorphism of the early 00s, the design trend of choice has been minimalism. In fact, minimalism has been de rigueur for substantially longer than that.

You can make a fair case that minimalism was the defining theme of the 20th century. Beginning with the widespread adoption of grotesque typefaces in mass advertising the century before, continuing through the less-is-more philosophies of the middle part of the century, and culminating in the luxury of 80s and 90s consumerism.

Minimalism has been central to the design practice of almost every designer that we recognize as a designer. It underpins the definition of the discipline itself.

With the weight of such illustrious history, it’s no wonder that the fledgling web — and in the scale of history, the web is still a very new phenomenon — adopted minimalism.

And then there’s the fact that a minimalist approach works on the web. Multiple viewports, multiple connection speeds, multiple user journeys, all of these things are so much easier to handle if you reduce the number of visual components that have to adapt to each context.

And yet, despite this, an increasing number of designers are abandoning minimalism in favor of a more flamboyant approach where form is function. It’s clearly happening. What’s not clear is whether this is a short-lived, stylistic fad or something altogether more fundamental. In other words, are designers about to abandon grids, or are they just slapping some gradients on an otherwise minimal design?

Featured image via Pexels.


The post Poll: Is Minimalism a Dead Trend Walking? first appeared on Webdesigner Depot.


Source of the article on Webdesignerdepot

Finnish newspaper Helsingin Sanomat has developed a variable font that is designed to make the effects of human-driven climate change tangible in a simple graphical form.

Whereas most type designers use variable font techniques to embed a range of weights in a single font file, the team — led by Helsingin Sanomat’s art director Tuomas Jääskeläinen and typographer Eino Korkala — used the technique to “melt” the typeface.

In the design process, we tried out countless letter shapes and styles, only to find that most of them visualized the disaster right in the earliest stages of the transformation. That’s something we wanted to avoid because unlike a global pandemic, climate change is a crisis that sneaks up on us.

— Tuomas Jääskeläinen

The default typeface represents the volume of Arctic sea ice in 1979 (when records began). It’s a rather beautiful, chiseled, chunky sans-serif, with cut-aways that open up counters to give it a modern appeal. As you move through the years towards 2050, the shapes appear to melt away, to the point that they’re barely legible.

Set the scale to 2021 and you’ll see an already dramatic loss of Arctic sea ice, and the resulting desalination of the ocean.
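For readers who want to experiment, here is a hypothetical sketch of pinning a variable font to a single point on its axis with the fontTools library. The file name and the “YEAR” axis tag are assumptions; check the font’s actual axes in its fvar table before instancing.

```python
# Sketch: freezing a variable font at one axis value with fontTools.
# The file name and the "YEAR" axis tag are assumptions; inspect the
# font's fvar table (printed below) to find the real tag and range.
from fontTools.ttLib import TTFont
from fontTools.varLib.instancer import instantiateVariableFont

font = TTFont("ClimateCrisis-VF.ttf")  # hypothetical file name
for axis in font["fvar"].axes:
    print(axis.axisTag, axis.minValue, axis.maxValue)

# Freeze the font at its 2021 state and save a static instance.
static_2021 = instantiateVariableFont(font, {"YEAR": 2021})
static_2021.save("ClimateCrisis-2021.ttf")
```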

As depressing as these outlines are, they aren’t estimates. The typeface’s outlines precisely match real data — there was an unexpected uptick in Arctic sea ice in 2000, and that’s reflected in the font.

The historical data is taken from the NSIDC (The US National Snow and Ice Data Center) and the predictive data comes from the IPCC (The Intergovernmental Panel on Climate Change).

We hope that using the font helps people see the urgency of climate change in a more tangible form – it is a call for action.

— Tuomas Jääskeläinen

You can download the font for free, for personal or commercial work.


The post Variable Font Reveals The Full Horror of The Climate Crisis first appeared on Webdesigner Depot.


Source of the article on Webdesignerdepot


Introduction

If selling products online is a core part of your business, then you need to build an e-commerce data model that’s scalable, flexible, and fast. Most off-the-shelf providers like Shopify and BigCommerce are built for small stores selling a few million dollars in orders per month, so many e-commerce retailers working at scale start to investigate creating a bespoke solution.
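As a starting point, here is a minimal sketch of the core entities such a bespoke model usually begins with. The field names and the integer-cents money representation are illustrative choices, not the article’s schema.

```python
# Minimal sketch of the core entities of an e-commerce data model;
# field names and types are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Product:
    sku: str
    name: str
    unit_price_cents: int   # store money as integer cents to avoid float error

@dataclass
class OrderLine:
    product: Product
    quantity: int

    @property
    def subtotal_cents(self) -> int:
        return self.product.unit_price_cents * self.quantity

@dataclass
class Order:
    order_id: str
    customer_id: str
    created_at: datetime
    lines: list[OrderLine] = field(default_factory=list)

    @property
    def total_cents(self) -> int:
        return sum(line.subtotal_cents for line in self.lines)
```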

Read more

This is the eighth article documenting what I’ve learned from a series of 13 Trailhead Live video sessions on Modern App Development on Salesforce and Heroku. In these articles, we’re focusing on how to combine Salesforce with Heroku to build an “eCars” app—a sales and service application for a fictitious electric car company (“Pulsar”) that allows users to customize and buy cars, service techs to view live diagnostic info from the car, and more. In case you missed my previous article, you can find the link here.

Just as a quick reminder: I’ve been following this Trailhead Live video series to brush up and stay current on the latest app development trends on these platforms, which are key for my career and business. I’ll be sharing each step for building the app, what I’ve learned, and my thoughts from each session. These reviews are both for my own edification and for others who might benefit from this content.

Source of the article on DZONE