Articles

We are moving toward a future where everything is autonomous, fast, and highly efficient. To keep pace with this fast-moving ecosystem, application delivery times will have to be accelerated, but not at the cost of quality. Achieving quality at speed is imperative, which is why quality assurance gets so much attention. To meet the demand for exceptional quality and faster time to market, test automation will take priority. It is becoming necessary for micro, small, and medium-sized enterprises (SMEs) to automate their testing processes, but the most crucial decision is choosing the right test automation framework. So let’s understand what a test automation framework is.

What Is a Test Automation Framework?

A test automation framework is the scaffolding laid down to provide an execution environment for automation test scripts. It gives the user a range of benefits that help them develop, execute, and report on automation test scripts efficiently. Think of it as a system created specifically to automate our tests. In simple terms, a framework is a constructive blend of guidelines, coding standards, concepts, processes, practices, project hierarchies, modularity, reporting mechanisms, test data injection, and so on that underpins automation testing. By following these guidelines while automating applications, the user can take advantage of various productive results.
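
To make the idea concrete, here is a minimal, hypothetical sketch (not from the article) of the kind of structure a framework imposes: a reusable page-object layer, test cases that follow shared conventions, and a place to inject test data. The FakeDriver, LoginPage, and the test data below are all illustrative stand-ins.

```python
class FakeDriver:
    """Stand-in for a real browser/app driver (e.g. Selenium WebDriver)."""
    def fill(self, field, value):
        print(f"fill '{field}' with '{value}'")

    def click(self, element):
        print(f"click '{element}'")


class LoginPage:
    """Page object: hides UI details behind intention-revealing methods."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        self.driver.fill("username", username)
        self.driver.fill("password", password)
        self.driver.click("submit")


def test_valid_login():
    """Test case: short, readable, and follows the framework's conventions."""
    page = LoginPage(FakeDriver())
    page.login("demo_user", "demo_pass")  # test data would normally be injected


if __name__ == "__main__":
    test_valid_login()
```

The point of this layering is that guidelines, naming, test data, and reporting can be enforced in one place instead of being repeated in every individual script.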

Source of the article on DZone

In today’s Quality Sense episode, Federico Toledo sits down for a chat with a colleague and friend, Sofia Palamarchuk. She’s a Director and Board Member of Abstracta and the co-founder and CEO of Apptim, a tool that helps you to test and analyze native mobile app performance.

After beginning her career as a performance engineer at Abstracta, she led our expansion to the United States – heading up business development. After seeing the challenges that mobile development teams face, in 2019, she embarked on a mission to transform the way global mobile teams create quality apps.

Source of the article on DZone

If a product has both a web and a mobile version of an application, their functionality is almost identical. The QA process, however, flows differently on each platform because of its particularities.

“Mobile application” has become an umbrella term that covers three different types of apps – native, PWA (progressive web app), and hybrid. Each is coded in a specific way and has its own distinctive features.

Source of the article on DZone

A few weeks ago, I had some interesting debates on the projects I work on, around questions like:

  • Is the automation engineer a developer?
  • Is a developer the best candidate to be an automation engineer? 
  • Where does the good ol’ Software Engineer in Test (SET, a.k.a SDET) fit in this fierce new world full of code and dependencies? 

It seems the trend nowadays when looking for job candidates in automation is that they need to have the skills of a programmer. I have been conducting technical interviews for Test Engineer candidates for years, and this trend has only grown stronger. That is why I give the same advice to anyone asking me how to get into the automation world: “Start learning the fundamental concepts of object-oriented programming and how they apply to automation testing.”
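
As a hedged illustration of that advice (my own sketch, not taken from the article), here is how two of those fundamentals, encapsulation and inheritance, commonly surface in test automation: a base class owns the shared setup and teardown, and concrete test classes inherit it instead of duplicating that plumbing.

```python
import unittest


class BaseUITest(unittest.TestCase):
    """Encapsulation: shared fixture logic lives in one place."""

    def setUp(self):
        # In a real suite this would start a browser or app session.
        self.session = {"logged_in": False}

    def tearDown(self):
        # ...and release it here.
        self.session = None


class LoginTests(BaseUITest):
    """Inheritance: reuses the base fixture, adds only the test logic."""

    def test_user_can_log_in(self):
        self.session["logged_in"] = True  # stand-in for driving the real UI
        self.assertTrue(self.session["logged_in"])


if __name__ == "__main__":
    unittest.main()
```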

Source of the article on DZone

Software testing and quality assurance act as a recurring wake-up call in the software development lifecycle, nudging the process at regular intervals and improving software delivery. The testing and QA landscape has been transforming over the last decade, especially with practices such as Agile, shift-left, and DevOps. Artificial Intelligence (AI) has added another spin to this game, focusing on speed, accuracy, and efficiency. Can AI transform software delivery and testing? That question has already been answered quite convincingly. Let’s look at the ways AI can change the game for software delivery and testing.

AI can bring value to development teams through shift-left practices that support the software development process. While AI can do much of the heavy lifting, it is important to ensure that the delivery and testing practices themselves are effective and able to leverage the power of AI. Hence, it is essential to embed trust in the processes and ensure effective validation and verification.


Source of the article on DZone (AI)

Consider a scenario where you are moving a file from folder A to folder B. Think about all the possible ways you can test this. Apart from the usual scenarios, you can test the following conditions:

  • Trying to move the file while it is open
  • You do not have the security rights to paste the file into folder B
  • Folder B is on a shared drive and its storage capacity is full
  • Folder B already has a file with the same name

In fact, the list is endless. Suppose you have 15 input fields to test, each with 5 possible values; the number of combinations to be tested would be 5¹⁵ = 30,517,578,125.
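
As a rough illustration (my own sketch, not part of the original article), the arithmetic behind that number, and why exhaustive testing quickly becomes infeasible, looks like this:

```python
from itertools import product

FIELDS = 15           # number of input fields in the example above
VALUES_PER_FIELD = 5  # possible values per field

# Exhaustive testing would need every combination of every field value.
exhaustive = VALUES_PER_FIELD ** FIELDS
print(f"Exhaustive combinations: {exhaustive:,}")  # 30,517,578,125

# Even a tiny slice of the problem grows fast: only 3 fields already need
# 5**3 = 125 combinations if tested exhaustively.
small_case = list(product(range(VALUES_PER_FIELD), repeat=3))
print(f"Combinations for just 3 fields: {len(small_case)}")  # 125

# This explosion is why techniques such as pairwise (all-pairs) testing are
# used: they cover every pair of values with a far smaller set of test cases.
```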

Source of the article on DZone (Agile)

This post presents a view on why Machine Learning systems or models are termed non-testable from a quality control/quality assurance perspective. Before I proceed, let me humbly note that data scientists and the Machine Learning community have long said that ML models are testable: they are first trained and then tested using techniques such as cross-validation to measure, improve, and optimize model performance. However, this kind of "testing" refers to the development (model-building) phase, when data scientists evaluate model performance by comparing the model outputs (predicted values) with the actual values. It is not the same as testing the model for any given input whose expected output is not known beforehand. In this post, I am talking about ML model testability from the overall traditional software testing perspective.
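
To make the distinction concrete, here is a minimal sketch (my own, assuming scikit-learn is available) of what "testing" means during model building: the model is evaluated only against inputs whose true labels are already known.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# "Testing" during model building: compare predictions against known labels.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print("Mean accuracy on held-out folds:", scores.mean())

# For a brand-new input whose true label is unknown, there is no oracle to
# compare the prediction against; that is the traditional software-testing
# sense in which ML models are called non-testable.
```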

Given that Machine Learning systems are non-testable in this sense, performing QA or quality control checks on them is not easy, which is a matter of concern given the trust end users need to have in such systems. Project stakeholders need to understand the non-testability aspects of Machine Learning systems in order to put appropriate quality controls in place and serve trustworthy Machine Learning models to end users in production. This applies especially to healthcare and financial systems, where a couple of false negatives (type-II errors) could spell havoc or trouble for the stakeholders.


Source of the article on DZone (AI)

Pushing the Bounds of What We Can Automate in Software Testing

We have this funny little tagline about how we’re pushing the boundaries of test automation. It’s a simple enough thing when you say it, but what do we really mean by it?

Recently, we were recognized by several industry analysts for the work we’ve been doing pushing those boundaries. At voke, they said, "Parasoft is a company borne of innovation with a relentless focus on software quality," and Forrester said, "Regarding AI, Parasoft has an impressive and concrete roadmap to increase test automation from design to execution, pushing autonomous testing."


Source of the article on DZone (AI)

Over the years, a lot has been done to enhance the control and efficiency of application development processes. From Agile development to change management solutions based on ISO and ITIL standards, the progress has been remarkable. However, like everything else, this too has a downside. They say every cloud has a silver lining, but in the world of technology that silver lining can end up affecting the functionality of the cloud itself. The growing use of Agile development has increased the pressure IT organizations face in deploying new applications.

Each new enterprise application brings several diverse components spread across numerous environments, including application servers, desktops, web servers, mobile devices, databases, and more. Most large organizations also have different departments handling each of these functions, and the potential product users are often not in control of the timelines. Moreover, since security and compliance requirements place a heavy burden on IT teams, companies adopt a "better safe than sorry" approach and discourage employees from readily adopting new applications or new versions. For the product vendor, the total cost of support is directly proportional to the number of older versions out in the field.

Source of the article on DZone

Testers + Scrum = ?

Several times I’ve had conversations with people who work with Scrum or Agile methodologies who claim they don’t have testers and don’t run into any problems. On the other hand, I have seen testers within these schemes who often feel excluded from the development team. Other testers who have not yet worked in Agile teams question whether there is even room for testers in Scrum.

It’s often touted that everyone on a Scrum team is able to perform different tasks and that all are responsible for quality. But there are some things a tester can handle better than others. For example, writing good acceptance criteria requires a tester’s skill set, as one must keep in mind and worry about characteristics such as quality, testability, and maintainability. These are all things the tester role is responsible for obsessing over. Therefore, when you need acceptance criteria written, you’ll be better off delegating it to someone trained in testing than to someone who’s not.

Source: https://dzone.com/articles/can-there-be-testers-in-scrum?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fagile