Articles

In application development, microservices is an architectural style in which larger applications are structured as a collection of smaller, independent, yet interconnected services. While this allows for highly maintainable and testable applications (as each service can be maintained independently of the larger application), the drawback of this approach is the inherent complexity of the interactions between microservices. It can be difficult for developers and team members to visualize how these microservices are connected to each other. We have been looking for ways to produce architectural diagrams that illustrate these interactions. We found that Graphviz solved part of this problem for us, as it can take the microservices structure of an application described in the DOT language and render it as a PNG image. However, we wanted this process to be even more user-friendly and more automatic, so that the user would not have to generate a DOT file of their microservices architecture by hand.
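
As a rough illustration of that last rendering step, the sketch below uses the Python graphviz package to describe a handful of hypothetical services in DOT and render the graph to PNG. The service names and call relationships are invented for the example; the point of our tool is to produce this description automatically instead of by hand.

    # A minimal sketch of the DOT-to-PNG step using the Python "graphviz" package
    # (pip install graphviz; the Graphviz binaries must also be installed).
    # The services and their call relationships below are hypothetical.
    from graphviz import Digraph

    graph = Digraph("microservices", format="png")

    # One node per service.
    for service in ["gateway", "orders", "inventory", "billing"]:
        graph.node(service)

    # One edge per service-to-service call.
    graph.edge("gateway", "orders")
    graph.edge("orders", "inventory")
    graph.edge("orders", "billing")

    # Writes the DOT source to microservices.gv and the image to microservices.gv.png.
    graph.render("microservices.gv")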

In-Browser Tool

As we could not find such a tool, we decided to build one ourselves. The most user-friendly interface, we felt, would be an in-browser tool that lets the user upload a jar file containing a packaged service and have an image rendered automatically. This article discusses how we went about creating this tool and includes an example of what happens "behind the scenes" of this interface.

Source of the article on DZONE

The introduction of the continuous integration/continuous deployment (CI/CD) process has strengthened the software release mechanism, helping products go to market faster than ever before and allowing application development teams to deliver code changes more frequently and reliably. Regression testing ensures that no new defects have been introduced into an application by testing newly modified code as well as any parts of the software that could potentially be affected. According to reports from Global Market Insights, the software testing market reached roughly $40 billion in 2020 and is projected to grow at about 7% annually through 2027; regression testing accounted for more than 8.5% of that market and is expected to rise at an annual pace of over 8% through 2027.

The Importance of Regression Testing

Regression testing is a must for large software development teams following an agile model. When many developers are committing changes frequently, regression testing is needed to identify any unexpected impact on overall functionality caused by each commit. The CI/CD setup detects such failures, notifies the developers as soon as they occur, and ensures the faulty commit does not get shipped to deployment.
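
As a minimal sketch of what one of these automated checks might look like (pytest is assumed as the test runner, and the pricing module, its function, and the expected values are invented for illustration), a regression test executed by the pipeline on every commit could be as simple as:

    # test_pricing_regression.py -- a hypothetical regression test run by the CI
    # pipeline on every commit. The pricing module, its function, and the expected
    # values are invented for this illustration.
    import pytest

    from pricing import apply_discount  # hypothetical module under test


    def test_discount_is_applied():
        # Behavior that already shipped: a 10% discount on a 100.00 order.
        assert apply_discount(total=100.00, rate=0.10) == pytest.approx(90.00)


    def test_zero_rate_leaves_total_unchanged():
        # Guards the no-discount path against regressions.
        assert apply_discount(total=100.00, rate=0.0) == pytest.approx(100.00)

If either assertion fails after a commit, the pipeline marks the build as failed and the change is blocked from deployment.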

Source of the article on DZONE

If you are trying to decide whether to use a native or a hybrid mobile application development approach for your project, there are numerous considerations, and you will, of course, have to look closely at your business requirements.

This article focuses on just two of the crucial differences between native and hybrid mobile application development and may help get your discussions started.  

Source of the article on DZONE


This is an article from DZone’s 2022 Enterprise Application Integration Trend Report.


In the echo chambers of application development, we constantly hear the mantra "API-first," but this slogan has a fundamental flaw: APIs should typically be the last choice when building a distributed application. The correct war cry ought instead to be: "APIs outside, events inside."
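
As a toy sketch of that slogan (not the author's reference architecture; the in-memory bus and the service names are invented, and a real system would use a broker such as Kafka), the idea is that the only synchronous API is the one exposed to external clients, while internal components communicate by publishing and reacting to events:

    # "APIs outside, events inside": a request/response API at the edge, events within.
    # The in-memory EventBus and the service names are invented for illustration.
    from collections import defaultdict
    from typing import Callable, Dict, List


    class EventBus:
        """Minimal in-memory stand-in for an event broker."""

        def __init__(self) -> None:
            self._handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._handlers[topic].append(handler)

        def publish(self, topic: str, event: dict) -> None:
            for handler in self._handlers[topic]:
                handler(event)


    bus = EventBus()

    # "Inside": internal services react to events rather than calling each other's APIs.
    bus.subscribe("order.placed", lambda e: print(f"inventory: reserving items for order {e['order_id']}"))
    bus.subscribe("order.placed", lambda e: print(f"billing: charging customer for order {e['order_id']}"))


    # "Outside": the only synchronous API is the one offered to external clients.
    def handle_place_order_request(order_id: str) -> dict:
        bus.publish("order.placed", {"order_id": order_id})
        return {"status": "accepted", "order_id": order_id}


    print(handle_place_order_request("42"))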

Source of the article on DZONE

Kubernetes offers developers tremendous advantages… if they can overcome the platform’s inherent complexities. It can be a big "if." Without additional tooling, developers can’t simply develop their applications on Kubernetes; they must also become experts in writing the complex YAML templates that define Kubernetes resources. A relatively new tool called Shipa provides an application management framework that largely relieves developers of this burden, enabling dev teams to ship applications with no Kubernetes expertise required.
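
To give a feel for the boilerplate being referred to, here is a rough sketch of what declaring even a single-container Deployment looks like when expressed with the official Kubernetes Python client (the names, labels, and image are placeholders); the equivalent YAML template carries the same level of detail, and this is the kind of detail Shipa aims to keep away from developers.

    # A rough sketch of declaring a single-container Deployment with the official
    # Kubernetes Python client ("kubernetes" package). Names, labels, and the image
    # are placeholders for illustration only.
    from kubernetes import client

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="demo-app",
                            image="registry.example.com/demo-app:1.0",
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )

    # Applying it still requires cluster credentials and an API call, for example:
    #   from kubernetes import config
    #   config.load_kube_config()
    #   client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)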

Having recently put the tool to the test, I will use this article to demonstrate how to install and use Shipa to simplify Kubernetes and ease some common developer frustrations.

Source of the article on DZONE

Ask any seasoned web app developer about their choice of programming language, and they are sure to mention PHP. PHP is a widely used general-purpose scripting language that is especially suited for web development and can be embedded into HTML. According to BuiltWith, 3,090,319 live websites are still using PHP. However, when it comes to developing massive projects without lag or stability issues, developers tend to use frameworks, and PHP has two remarkable ones: Laravel and Yii. Both frameworks have large, mature communities worldwide, which raises the question of which one to choose.

What Are Laravel and Yii?

Laravel is a simple PHP framework frequently used for web application development, initially created as a better alternative to CodeIgniter. It is known for its MVC support, expressive Eloquent ORM, reliability, modularity, and uncomplicated coding conventions. Some of the key features of the Laravel framework are:

Source of the article on DZONE

Mobile application development has increased tenfold due to the high demand for such digital platforms among users worldwide. According to a report, there are more than 3 billion mobile application users, and this is where most businesses are looking to capitalize.

Mobile application software helps businesses engage users on mobile devices, making it an attractive investment. Mobile applications not only offer higher engagement value for organizations but also help them generate more leads.

Source of the article on DZONE

The application development landscape has fundamentally changed in recent years. In a recent interview with Ambassador Labs, Mario Loria from CartaX said he believes this is still uncharted territory, particularly for developers in the cloud-native space. As he sees it, site reliability engineers (SREs) play a key role in guiding developers through the learning curve toward comprehensive self-service of the supporting platforms and ecosystem, and ultimately to service ownership. This requires a major shift in company and management culture, in developer (and SRE) mindset and tooling, and in the insight needed to make the journey to full lifecycle ownership not just smoother and more transparent, but also technically feasible.

Two Worlds Colliding: The Monolith and Service-Oriented Architecture

The traditional monolith continues to exist in parallel with cloud-native application development. The operations side of the equation, according to Mario, understands that this has caused a big shift in deploying, releasing, and operating applications, and now the role of SREs is to help developers understand and own this shift. Developers know how to code, but building in the necessary understanding (and ownership) of the “ship” and “run” aspects of the lifecycle introduces a steep learning curve. For developers, this means taking on new responsibilities with the support of SREs.

Source of the article on DZONE


Introduction

While many of us are accustomed to executing Spark applications with the ‘spark-submit’ command, with the popularity of Databricks this seemingly simple activity is being relegated to the background. Databricks has made it very easy to provision Spark-enabled VMs on the two most popular cloud platforms, namely AWS and Azure, and a couple of weeks ago it announced availability on GCP as well. The beauty of the Databricks platform is how easy it is to get started with it. While Spark application development will continue to have its challenges – depending on the problem being addressed – the Databricks platform has taken away the pain of having to establish and manage your own Spark cluster.

Using Databricks

Once registered on the platform, we can define a cluster of one or more VMs with configurable RAM and executor specifications. We can also define a cluster that launches a minimum number of VMs at startup and then scales up to a maximum number of VMs as required. After defining the cluster, we have to define jobs and notebooks. Notebooks contain the actual code executed on the cluster, and we need to assign notebooks to jobs because the Databricks cluster executes jobs (not notebooks). Databricks also allows us to set up the cluster so that it can download additional JARs and/or Python packages during cluster startup, and we can upload and install our own packages as well (I used a Python wheel).
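
As a small, hypothetical example of the kind of code that lives in such a notebook (the input path and column names are placeholders), a cell might look like the sketch below. On Databricks the SparkSession is already provided as spark; a standalone script launched with spark-submit would build it explicitly, as shown.

    # A hypothetical notebook cell; the input path and column names are placeholders.
    # On Databricks the SparkSession is pre-created as `spark`; in a standalone script
    # launched with spark-submit we build it ourselves, as below.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sales-summary").getOrCreate()

    # Read a dataset, aggregate it, and write the result back out.
    sales = spark.read.parquet("/mnt/data/sales")
    summary = (
        sales.groupBy("region")
        .agg(F.sum("amount").alias("total_amount"))
        .orderBy(F.col("total_amount").desc())
    )
    summary.write.mode("overwrite").parquet("/mnt/data/sales_summary")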

Source of the article on DZONE

As a consistent user of and developer on the OpenShift platform over the years, I’ve tried to help users by sharing my application development content as we’ve journeyed from cartridges all the way to container-based development.

With container-based development, we’ve also transitioned from templates to operators for defining how our tooling and applications are deployed. There are many examples of how to work with the templated versions of our decision management and process automation applications on Red Hat Demo Central and JBoss Demo Central.

Source of the article on DZONE