Articles

Gartner predicts that by 2023, over 50% of medium-to-large enterprises will have adopted a Low-Code/No-Code application as part of their platform development.
The proliferation of Low-code/No-code tooling can be partially attributed to the COVID-19 pandemic, which has put pressure on businesses around the world to rapidly implement digital solutions. However, adoption of these tools — while indeed accelerated by the pandemic — would have occurred either way.
Even before the pandemic, the largest, richest companies had already formed an oligopsony around the best tech talent and most advanced development tools. Low-Code/No-code, therefore, is an attractive solution for small and mid-sized organizations to level the playing field, and it does so by giving these smaller players the power to do more with their existing resources.
While these benefits are often realized in the short term, the long-term effect of these tools is often shockingly different. The promise of faster and cheaper delivery is the bait inside this organizational mousetrap; backlogs, vendor contracts, technical debt, and constant updates are the hammer.
So, what exactly is the No-Code trap, and how can we avoid it?

What is a No-Code Tool?

First, let’s clear up any confusion regarding naming. So far I have referred to Low-Code and No-Code as if they were one term. It’s certainly easy to confuse them — even large analyst firms seem to have a hard time differentiating between the two — and in the broader context of this article, both can lead to the same set of development pitfalls.
Under the magnifying glass, however, there are lots of small details and capabilities that differentiate Low-Code and No-Code solutions. Most of them aren’t apparent at the UI level, which is where much of the confusion between the two comes from.
In this section, I will spend a little time exploring the important differences between the two, but only to show that, when it comes to the central premise of this article, they are virtually equivalent.

Low-Code vs. No-Code Tools

The goal behind Low-Code is to minimize the amount of coding necessary for complex tasks through a visual interface (such as drag-and-drop) that integrates existing blocks of code into a workflow.
Skilled professionals have the potential to work smarter and faster with Low-Code tools because repetitive coding and duplicated work are streamlined. They can spend less time on the 80% of work that builds the foundation and focus more on optimizing the 20% that makes it different. Low-Code, therefore, takes on the role of an entry-level employee doing the grunt work for more senior developers and engineers.
No-Code has a very similar look and feel to Low-Code, but is different in one very important dimension. Where Low-Code is meant to optimize the productivity of developers or engineers that already know how to code (even if just a little), No-Code is built for business and product managers that may not know any actual programming languages. It is meant to equip non-technical workers with the tools they need to create applications without formal development training.
No-Code applications need to be self-contained: everything the vendor thinks the user may need is already built into the tool.
As a result, No-Code applications impose a lot of long-term restrictions in exchange for quick short-term results. This is a great example of a ‘deliberate-prudent’ scenario in the context of the Technical Debt Quadrant, but more on this later.

Advantages of No-Code Solutions

The appeal of both Low-Code and No-Code is pretty obvious. By removing code, organizations can remove those who write it — developers — because they are expensive, in short supply, and fundamentally slow at producing things.
The benefits of these two forms of applications in their best forms can be pretty substantial:
  • Resources: Human capital is becoming increasingly scarce — and therefore expensive. This can stop a lot of ambitious projects dead in their tracks. Low-Code and No-Code tools minimize the amount of specialized technical skill needed to get an application off the ground, which means things can get done more quickly and at a lower cost.
  • Low Risk/High ROI: Security processes, data integrations, and cross-platform support are all built into Low-Code and No-Code tools, meaning less risk and more time to focus on your business goals.
  • Moving to Production: Similarly, for both types of tools, a single click is all it takes to deploy a model or application you built to production.
Looking at these advantages, it is no wonder that both Low-Code and No-Code have been taking industries by storm recently. While distinctly different in terms of users, they serve the same goal — that is to say, faster, safer, and cheaper deployment. Given these similarities, both terms will be grouped together under the ‘No-Code’ term for the rest of this article unless otherwise specified.

List of No-Code Data Tools

So far, we have covered the applications of No-Code in a very general way, but for the rest of this article, I would like to focus on data modeling. No-Code tools are prevalent in software development, but they have also started to take hold in the data space, and some applications even claim to be an alternative to SQL and other querying languages (crazy, right?!). My reasons for focusing on this are two-fold:
Firstly, there is a lot of existing analysis around this problem for software development and very little for data modeling. Secondly, this is also the area in which I have the most expertise.
Now let’s take a look at some of the vendors that provide No-Code solutions in this space. This is in no way a complete list, and the tools are, for the most part, not exclusively built for data modeling.

1. No-Code Data Modeling in Power BI

Power BI was created by Microsoft and aims to provide interactive visualizations and business intelligence capabilities to all types of business users. Its simple interface is meant to allow end-users to create their own reports and dashboards through a number of features, including data mapping, transformation, and visualization through dashboards. Power BI does support some R coding capabilities for visualization, but when it comes to data modeling, it is a true No-Code tool.

2. Alteryx as a Low-Code Alternative

Alteryx is meant to make advanced analytics accessible to any data worker, and to achieve this it offers several self-service data analytics solutions with an intuitive UI. Its offerings can be used as Extract, Transform, Load (ETL) tools within its own framework, and it allows data workers to organize their data pipelines through custom features and SQL code blocks. As such, it is easily identified as a Low-Code solution.
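
To make the comparison concrete, here is a rough sketch, in plain Python with pandas, of the kind of extract-transform-load step that a tool like Alteryx wraps in a visual workflow. The file and column names are illustrative assumptions, not anything from Alteryx itself.

    # A minimal ETL sketch: the hand-coded equivalent of a visual
    # Low-Code pipeline. File and column names are made up.
    import pandas as pd

    # Extract: read raw order data from a CSV export.
    orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

    # Transform: drop incomplete rows and aggregate revenue per month.
    clean = orders.dropna(subset=["customer_id", "amount"])
    monthly = (
        clean.assign(month=clean["order_date"].dt.to_period("M"))
             .groupby("month", as_index=False)["amount"].sum()
             .rename(columns={"amount": "revenue"})
    )

    # Load: write the result where a BI layer can pick it up.
    monthly.to_csv("monthly_revenue.csv", index=False)

A Low-Code tool renders each of these three steps as a draggable node and, in Alteryx’s case, lets you swap the transform for an SQL code block when the built-in features fall short.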

3. Is Tableau a No-Code Data Modeling Solution?

Tableau is a visual analytics platform and a direct competitor to Power BI. It was recently acquired by Salesforce, which is now hoping to ‘transform the way we use data to solve problems—empowering people and organizations to make the most of their data.’ It is also a pretty obvious No-Code platform that is supposed to appeal to all types of end-users. As of now, it offers fewer tools for data modeling than Power BI, but that is likely to change in the future.

4. Looker is a No-Code Alternative to SQL

Looker is business intelligence software and a big data analytics platform that promises to help you explore, analyze, and share real-time business analytics easily. Very much in line with Tableau and Power BI, it aims to make non-technical end-users proficient in a variety of data tasks such as transformation, modeling, and visualization.

You might be wondering why I am including so many BI/Visualization platforms when talking about potential alternatives to SQL. After all, these tools are only set up to address an organization’s reporting needs, which constitute only one of the use cases for data queries and SQL. This is certainly a valid point, so allow me to clarify my reasoning a bit more.

While it is true that reporting is only one of many potential uses for SQL, it is nevertheless an extremely important one. There is a good reason why there are so many No-Code BI tools on the market — they address heightened demand from enterprises around the world — and it is therefore worth taking a closer look at their almost inevitable shortcomings.
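
To ground that point, here is a self-contained sketch of the kind of reporting query these No-Code BI tools generate behind their drag-and-drop tiles; the table and figures are invented for illustration.

    # The hand-written equivalent of a 'total sales by region' dashboard
    # tile, using Python's built-in sqlite3 so the example is runnable.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE sales (region TEXT, amount REAL);
        INSERT INTO sales VALUES ('EMEA', 120.0), ('EMEA', 90.0), ('APAC', 200.0);
    """)

    query = """
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
        ORDER BY total DESC
    """
    for region, total in conn.execute(query):
        print(region, total)  # EMEA 210.0, then APAC 200.0

Every tile in the BI tools above boils down to a query of roughly this shape; the trap is that once requirements outgrow the visual builder, someone still has to write and maintain the SQL.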

Article source: DZone

Oracle Transactional Business Intelligence (OTBI) is built on the power of Oracle’s industry-leading business intelligence tool, Oracle Business Intelligence Enterprise Edition (OBIEE). This allows users to build powerful data visualizations with real-time data that highlight data patterns and encourage data exploration instead of delivering static flat reports. OTBI provides users with a wide variety of data visualization options, from standard graphs to advanced visuals such as trellises, treemaps, performance tiles, KPIs, and others.

Introducing a CI/CD Solution for OTBI

FlexDeploy has an innovative CI/CD solution for managing the build and deployment of OTBI WebCatalog objects across the pipeline. Using FlexDeploy’s partial deployment model, developers can assemble related catalog objects into packages, build them from source control or a development environment, and deploy them into the target environments.

Article source: DZone

In the era of web-scale, every organization is looking to scale its applications on demand while minimizing infrastructure expenditure. Cloud-native applications, such as microservices, are designed and implemented with scale in mind, and Kubernetes provides the platform capabilities for dynamic deployment, scaling, and management.

Autoscaling and scale to zero are critical functional requirements for all serverless platforms as well as platform-as-a-service (PaaS) providers because they help minimize infrastructure costs.
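
As a toy illustration of what scale to zero means, consider this sketch of the replica-count decision; the capacity figures are invented and do not reflect any particular platform’s algorithm.

    # Toy autoscaling logic: replicas track load, and an idle workload
    # drops to zero pods. Numbers are illustrative only.
    import math

    def desired_replicas(requests_per_sec: float,
                         capacity_per_replica: float = 50.0,
                         max_replicas: int = 10) -> int:
        if requests_per_sec == 0:
            return 0  # scale to zero: an idle service costs nothing
        return min(max_replicas,
                   math.ceil(requests_per_sec / capacity_per_replica))

    print(desired_replicas(0))       # 0  -> scaled to zero
    print(desired_replicas(120))     # 3
    print(desired_replicas(10_000))  # 10 -> capped at max_replicas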

Article source: DZone

I was in the first week of a new job, and my assignment was to install a piece of company software on my local machine. At first I found some documents on the internal website that I could follow for the installation; this turned out to be a very optimistic expectation, of course. As soon as I started reading and following the steps, I realized that many things were not mentioned in the document, and I began to ask a lot of questions I had no answers to. I had to ask the current DevOps engineers about each unclear part. Soon I found myself on calls with these engineers for hours every day, as we worked through the installation bit by bit together. I would say 20% of the operations were documented and automated, but the rest had to be done manually. I finally managed to install the software in a week, but there were still many steps I could not explain, and I forgot many other details that were not in the document.

It turns out this situation repeats with every new hire, yet neither the document nor the installation process is ever updated. What amazed me was that the DevOps engineers were proud of what they had built over the years, but did not realize that no one could install the software without them. This created a huge amount of complexity around the software deployment and forced employees to spend hours figuring the same things out again and again. So, no matter how nice all those scripts were, they created a high overhead for the company that remained hidden from management for years.

Article source: DZone

Now we have everything prepared and ready to deploy to a Kubernetes cluster in a cloud provider. Creating a cluster manually in any cloud provider is a difficult task, and if we want to automate the deployment, we need something that helps us with this tedious work. In this article, we will see how to create a Kubernetes cluster and all of its required objects, deploying our Alexa Skill with Terraform using Google Kubernetes Engine.
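
As a rough preview of the workflow we will automate, here is a minimal sketch that scripts the standard Terraform CLI from Python; it assumes terraform is installed and the GKE configuration (the *.tf files) sits in the current directory.

    # Drive the usual init/plan/apply cycle; each step aborts on failure.
    import subprocess

    def run(*cmd: str) -> None:
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

    run("terraform", "init")                 # download providers and modules
    run("terraform", "plan", "-out=tfplan")  # preview the cluster changes
    run("terraform", "apply", "tfplan")      # create the GKE cluster and objects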

Prerequisites

Here are the technologies used in this project:

Article source: DZone

Container registries serve as libraries to store and access the third-party container images required during the build phase of the SDLC and the images produced for deployment to test, staging, and production environments. While public container registries are accessible and convenient, private registries can better integrate into existing CI/CD workflows, offer greater control over access and security, and help ensure build repeatability and reliability. This Refcard covers key container concepts and terminology; common use cases; and guidelines for container registry configuration, operation, security, and storage.
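
For a concrete picture of the workflow, here is a minimal sketch of retagging and pushing a locally built image to a private registry with the standard docker CLI; the registry host and image names are placeholders.

    # Retag a local image for a private registry and push it.
    # Assumes 'docker login registry.example.com' has already been run.
    import subprocess

    IMAGE = "myapp:1.0"
    PRIVATE = "registry.example.com/team/myapp:1.0"

    def sh(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    sh("docker", "tag", IMAGE, PRIVATE)  # point a registry-qualified tag at the image
    sh("docker", "push", PRIVATE)        # upload it to the private registry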
Article source: DZone

Creating a continuous deployment pipeline brings us a step closer to an automated build, test, and deploy strategy. In order to create such a pipeline, we need access to several tools. Instead of installing these on on-premises servers, we can make use of the AWS cloud offering. Let’s see how this can be accomplished!

1. Introduction

We want to create an automated pipeline in order to ensure that no manual, error-prone steps are required for building, testing, and deploying the application. When a failure occurs during one of these steps, we will be notified automatically and can take the necessary actions to resolve the issue.
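
As a back-of-the-envelope illustration of that flow, here is a sketch of a step runner that notifies on failure; the make targets and the notification hook are placeholders, since in the AWS setup these roles are played by managed services rather than a hand-rolled script.

    # Run build, test, and deploy in order; notify and stop on the
    # first failure. Commands and the notify hook are placeholders.
    import subprocess

    STEPS = [
        ("build",  ["make", "build"]),
        ("test",   ["make", "test"]),
        ("deploy", ["make", "deploy"]),
    ]

    def notify(message: str) -> None:
        print("NOTIFY:", message)  # stand-in for email, chat webhook, SNS, ...

    for name, cmd in STEPS:
        if subprocess.run(cmd).returncode != 0:
            notify(f"Pipeline failed at step '{name}'")
            break
    else:
        notify("Pipeline succeeded")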

Article source: DZone

We all love web badges. You might have spotted many of them in the READMEs of repositories, including the repository of my blog, The Cloud Blog. In general, web badges serve two purposes.

  1. They are visually appealing.
  2. They display key information instantly.

If you scroll to my website’s footer section, you will find GitHub and Netlify badges that display the status of the latest build and deployment. I use them to quickly check whether everything is fine with the world without navigating to their dashboards. In essence, a badge is an SVG image with dynamic content embedded in it.
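
To illustrate that last point, here is a minimal sketch of a badge as an SVG with dynamic content; real badge services render far more polished graphics, and the layout values here are arbitrary.

    # Generate a two-panel status badge as an SVG string.
    def badge(label: str, value: str, color: str = "#4c1") -> str:
        return f"""<svg xmlns="http://www.w3.org/2000/svg" width="120" height="20">
      <rect width="60" height="20" fill="#555"/>
      <rect x="60" width="60" height="20" fill="{color}"/>
      <text x="30" y="14" fill="#fff" text-anchor="middle" font-size="11">{label}</text>
      <text x="90" y="14" fill="#fff" text-anchor="middle" font-size="11">{value}</text>
    </svg>"""

    with open("build-badge.svg", "w") as f:
        f.write(badge("build", "passing"))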

Article source: DZone

As more organizations establish DevOps techniques in their Software Development Life Cycle, the need for security becomes even more evident with so much application development going on. But…

Security and DevOps Aren’t Natural Companions

The idea of security in DevOps, or DevSecOps, doesn’t mesh well with the classic DevOps process, which insists on continuous integration, delivery, and deployment. When you’re constantly releasing smaller bits of your code and application to production through the DevOps pipeline, introducing security can slow the process down significantly. You can’t just route every release through a security team that takes several weeks to bring it to production.
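
To make the tension concrete, here is a toy sketch of a security gate inside a pipeline; security-scan is a hypothetical command standing in for a real dependency or container scanner, not an actual tool.

    # Gate the pipeline on a scanner's exit code: findings block the
    # release instead of waiting weeks for a manual review.
    # 'security-scan' is a hypothetical placeholder command.
    import subprocess
    import sys

    result = subprocess.run(["security-scan", "--severity", "high", "."])
    if result.returncode != 0:
        print("High-severity findings: blocking the release.")
        sys.exit(1)
    print("Scan clean: continuing the pipeline.")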

Article source: DZone

As businesses become AI-ready, efficient data management has acquired an unprecedented role in ensuring their success. Bottlenecks in the data pipeline can cause massive revenue loss while having a negative impact on reputation and brand value. Consequently, there’s a growing need for agility and resilience in data preparation, analysis, and implementation.

On the one hand, data-analytics teams extract value from incoming data, preparing and organizing it for the production cycle. On the other, they facilitate feedback loops that enable continuous integration and deployment (CI/CD) of new ideas.

Article source: DZone