Articles

Chaining API Requests with API Gateway

With API Gateway, you can easily chain API requests to build richer, more complex solutions.

Why do we need chained API requests?

A chained API request (also called a pipeline request, or sequential API calls) is a technique used in software development to manage the complexity of API interactions when the software needs multiple API calls to accomplish a task. It resembles batch request processing, where you group several API requests into a single request and send them to the server as a batch. Although the two may look similar, a pipeline request involves sending a single request to the server that triggers a sequence of API calls executed in a defined order. Each API request in the sequence can modify the request and response data, and the response of one API request is passed as input to the next API request in the sequence. Pipeline requests can be useful when a client needs to execute a sequence of dependent API requests that must run in a specific order.

How can the Apache APISIX API Gateway help us?

Apache APISIX is an open-source, high-performance API gateway for modern web services. It provides a complete solution for managing chained API requests. Using Apache APISIX, you can create custom plugins to handle client requests that must be called in sequence. For example, you can create a plugin that sends a request to the product search API and then a request to the product details API to retrieve additional information about the products. Apache APISIX also provides tools to monitor and analyze API performance, allowing developers to better understand API behavior and improve it. Finally, Apache APISIX provides security features to protect data and services from malicious attacks.
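To make the sequence concrete, here is a client-side sketch of that search-then-details chain in Kotlin; the endpoints and JSON shape are hypothetical. A pipeline request moves exactly this orchestration out of the client: the client sends one request and the gateway plugin performs the steps server-side.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()

    // Step 1: call the (hypothetical) product search API.
    val search = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/products/search?q=keyboard"))
        .GET()
        .build()
    val searchBody = client.send(search, HttpResponse.BodyHandlers.ofString()).body()

    // Step 2: feed the first response into the next request. A real client
    // would use a JSON parser; this regex stands in for brevity.
    val productId = Regex("\"id\"\\s*:\\s*\"?(\\w+)").find(searchBody)
        ?.groupValues?.get(1) ?: error("no product id in search response")

    // Step 3: call the (hypothetical) product details API with that id.
    val details = HttpRequest.newBuilder()
        .uri(URI.create("https://api.example.com/products/$productId"))
        .GET()
        .build()
    println(client.send(details, HttpResponse.BodyHandlers.ofString()).body())
}
```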

In conclusion, using chained API requests can help manage the complexity of API interactions and improve the quality of web services. Apache APISIX offers a complete solution for managing chained API requests, including tools to monitor and analyze API performance as well as security features to protect data and services from malicious attacks.

Article source: DZone

A Test Design Guide for Your CI/CD Pipeline

Learn how to optimize your CI/CD pipeline with our complete and easy-to-follow test design guide!

To deliver software to market faster, it is essential to integrate automated tests into your continuous delivery pipeline to verify that the software meets the standards your customers expect. Your continuous delivery pipeline can also have many stages that must trigger these automated tests to check the defined quality gates before the software can move to the next stage and finally reach production (see Figure 1). Depending on the stage of your pipeline, your automated tests can vary in complexity, ranging from unit and integration tests to functional and performance tests. Given the number and complexity of the tests, and the possibility of multiple stages in your pipeline, there can be many challenges in integrating, running, and evaluating the quality of your software before release.

How do you integrate automated tests to speed up software delivery while maintaining quality?


This article describes some of these challenges. I will also provide guidelines on best practices so that your automated tests follow a contract, speeding up the delivery of your software to market while maintaining quality. Following a contract helps you integrate your tests in a timely and more efficient way. It also helps when other people in your organization need to troubleshoot problems in the pipeline.

Coding is an essential part of integrating automated tests into a continuous delivery pipeline. Automated tests can be coded to verify that the software meets the standards customers expect and that quality gates are respected before the software moves to the next stage and reaches production. When writing automated tests, however, it is important to consider test quality and reliability. For example, tests can be written to run faster, but this can lower the quality of the results, so it is important to strike a balance between test speed and test quality. It is also important to make sure that automated tests are robust enough to run on different systems and environments. This keeps the tests reliable and usable for verifying the quality of the software before it reaches production.
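To picture the staged quality gates discussed above, here is a minimal sketch in GitHub Actions syntax; the job names and Gradle tasks are hypothetical placeholders for whatever suites your own pipeline runs.

```yaml
# Hypothetical staged pipeline: each job is a quality gate that must
# pass before the next, slower suite is allowed to run.
name: ci
on: [push]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew test              # fast unit tests run first
  integration:
    needs: unit                          # gate on the unit suite
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew integrationTest   # hypothetical task name
  performance:
    needs: integration                   # slowest suite gates the release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./gradlew performanceTest   # hypothetical task name
```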

Article source: DZone

As microservices systems expand beyond a handful of services, we often need some way to coordinate everything and ensure consistent communication (and avoid human error). Tools such as Kubernetes or Docker Compose have quickly become commonplace for these types of workloads. Today’s example will use Docker Compose.

Docker Compose is an orchestration tool that manages containerized applications, and while I have heard many lament the complexity of Kubernetes, I found Docker Compose to have some complexities as well. We will work through these along the way, and I will explain how I solved them.
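For readers new to the tool, here is a minimal docker-compose.yml sketch for two services; the service names, images, and ports are placeholders rather than the article’s actual stack.

```yaml
# Minimal sketch: an API service and the database it depends on.
# Names, images, and ports are hypothetical placeholders.
version: "3.8"
services:
  api:
    build: ./api                # built from a local Dockerfile
    ports:
      - "8080:8080"
    environment:
      DB_URL: jdbc:postgresql://db:5432/app
    depends_on:
      - db                      # start the database first
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: app
      POSTGRES_PASSWORD: example
```

Running `docker compose up` then starts both containers on a shared network where the `api` service can reach the database by its service name, `db`.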

Article source: DZone

Value Stream Management (VSM) is about empowering delivery organizations to measure, mitigate, and monitor complexity. Simply put, it aims to improve the flow of value in your organization. The VSM Consortium recently released its highly anticipated State of Value Stream Management Report 2022.

In this article, we recap some of the findings and look at the report specifically from a software engineering and DevOps point of view. Can we capture some key lessons that lead to healthier and more productive engineering teams? What has worked and what has not? Can we simplify and adapt ideas of organizational change to create a thriving engineering organization?

Article source: DZone

In application development, microservices is an architectural style where larger applications are structured as a collection of smaller, independent, yet interconnected services. While this allows for highly maintainable and testable applications (as each service can be maintained independently of the larger application), the problem with this method is the inherent complexity of interactions between microservices. It can be difficult for developers and team members to visualize how these microservices are connected to each other. We have been looking for ways to produce architectural diagrams that illustrate these interactions. We found that GraphViz helped us to solve part of this problem, as it can take the microservices structure of an application in the DOT language and convert it into a PNG format. However, we wanted this process to be even more user-friendly and more automatic, so that the user would not have to manually generate a DOT file of their microservices architecture.
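For a sense of what GraphViz consumes and produces, here is a toy DOT description of a three-service system; the service names are hypothetical.

```dot
// Toy microservices graph in the DOT language (hypothetical services).
digraph services {
  rankdir=LR;           // lay the graph out left to right
  gateway -> orders;
  gateway -> users;
  orders  -> users;     // the orders service calls the users service
}
```

Rendering it to an image is then a single GraphViz call: dot -Tpng services.dot -o services.png.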

In-Browser Tool

As we could not find such a tool, we decided to create one ourselves. We decided that the most user-friendly interface would be an in-browser tool that allows the user to upload a jar file containing a packaged service and have an image rendered automatically. This article discusses how we went about creating this tool and includes an example of what happens "behind the scenes" of this interface.

Article source: DZone

UX laws are an invaluable tool, providing guidelines for designers that ensure we don’t have to continually reinvent the wheel when crafting experiences for the web.

However, UX laws tend to be devised by scientists and psychologists — people who are more than comfortable with the exceptions and allowances of academic language. By the time they filter down to us in the trenches, the language has invariably been over-simplified, and the wisdom behind the idea diluted.

Today we’re going to look at seven well-known and commonly cited rules of UX design that too many designers get wrong.

1. Jakob’s Law

Jakob’s Law, named for the UX researcher Jakob Nielsen, states that users spend most of their time on other sites and as a result prefer sites that work the same way as the sites they already know.

Jakob’s Law has often been used to limit experimentation and encourage the adoption of common design patterns in the name of usability.

However, the word ‘prefer’ is hugely loaded. While it’s true that a user will more easily understand a familiar design pattern, they do not necessarily prefer familiar experiences.

It has been widely demonstrated that new experiences boost our mood and improve our memory. If your goal is a memorable site that leaves users with a positive impression, introducing novelty is a sound decision.

2. Goal Gradient Hypothesis

The Goal Gradient Hypothesis assumes that the closer users are to their goal, the more likely they are to complete it.

It’s an attractive theory, especially in e-commerce, where it is often used to justify simplifying the initial purchase process and postponing complexity to move users along the funnel — a typical example is leaving shipping charges until the final step.

However, anyone who has studied e-commerce analytics will know that cart abandonment is a huge issue. In North America, shopping cart abandonment is as high as 74%.

We don’t always know what the user’s goals are, and they may not match ours. It may be that users are treating your shopping cart as a bookmark feature, it may be that they have a last-minute change of heart, or they may be horrified by the shipping charges.

While providing a user with an indication of their progress is demonstrably helpful, artificially inflating their proximity to your preferred goal may actually hinder conversions.

3. Miller’s Law

Never in the whole of human history has any scientific statement been as misunderstood as Miller’s Law.

Miller’s Law states that an average person can only hold seven, plus or minus two (i.e., 5–9) items in their working memory. This has frequently been used to restrict UI navigation to no more than five items.

However, Miller’s Law does not apply to items being displayed. While it’s true that too many options can lead to choice paralysis, a human being is capable of considering more than nine different items.

Miller’s Law only applies to UI elements like carousels, which have been widely discredited for other reasons.

4. Aesthetic-Usability Effect

Edmund Burke once said, “Beauty is the promise of happiness.” That belief is central to the Aesthetic-Usability Effect, which posits that users expect aesthetically pleasing designs to be more usable.

Designers often use this as a justification for grey-on-grey text, slick animations, and minimal navigation.

Critical to understanding this is that just because users expect a design to be usable does not mean that it is or that they will find it so. Expectations can quickly be dashed, and disappointment often compounds negative experiences.

5. Peak-End Rule

The Peak-End Rule states that users judge an experience based on how they felt at the peak and the end, rather than an average of the experience.

Designers commonly use the Peak-End Rule to focus design resources on the primary goal of each experience (e.g. adding an item to a cart) and the closing experience (e.g. paying for the item).

However, while the Peak-End Rule is perfectly valid, it cannot apply to open experiences like websites, where it is impossible to identify a user’s starting or ending point.

Additionally, it is easy to see every interaction on a website as a peak and even easier to make assumptions as to which peak is most important. As such, while designing for peaks is attractive, it’s more important to design for exceptions.

6. Fitts’ Law

In the 1950s, Paul Fitts demonstrated that the distance to, and size of, a target affect the error rate of selecting that target. In other words, it’s harder to tap a small button, and exponentially harder to tap a small button that is further away.
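In its commonly used Shannon formulation, the law models movement time MT as a function of the distance D to the target and the target width W:

```latex
% Shannon formulation of Fitts' Law: movement time grows with the
% "index of difficulty" log2(D/W + 1), where D is the distance to the
% target and W its width; a and b are empirically fitted constants
% for a given device and user.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

Halving a button’s width or doubling its distance raises the index of difficulty by roughly one bit, which is why small, far-away targets are disproportionately slow and error-prone.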

UX designers commonly apply this law when considering mobile breakpoints due to the relatively small viewport. However, mobile viewports tend not to be large enough for any distance to affect tap accuracy.

Fitts’ Law can be applied to desktop breakpoints, as the distances on a large monitor can be enough to have an impact. However, the majority of large viewports are used with a mouse, which allows for positional corrections before clicking.

Tappable targets should be large enough to be easily selected, spaced sufficiently, and tab-selection should be enabled. But distance has minimal impact on web design.

7. Occam’s Razor

No collection of UX laws would be complete without Occam’s Razor; unfortunately, this is another law that is commonly misapplied.

Occam’s Razor states that, given any choice, the option with the fewest assumptions (note: not necessarily the simplest, as it is often misquoted) is the correct choice.

In an industry in which we have numerous options to test, measure, and analyze our user interfaces, we shouldn’t need to make assumptions. Even when we don’t need extensive UX testing, we can make decisions based on other designers’ findings.

Occam’s Razor is a classic design trap: the key to avoiding it is to recognize that it’s not your assumptions that matter, it’s the users’. As such, Occam’s Razor applies to a user’s experience, not a design process.

Source

The post 7 UX Laws You’re Probably Getting Wrong first appeared on Webdesigner Depot.

Article source: Webdesigner Depot

Todoist is a to-do list app that 25 million people rely on every day to keep their lives organized. As part of the Doist design team’s goals for 2021, we aimed to redesign the Todoist Android app to take advantage of the latest Google Material Design guidelines.

In this post, we cover the design decisions and processes behind redesigning the Todoist Android app for Material Design. We explore the Design and Android team’s collaboration practices that brought the app update to life, which resulted in winning the Material Design Award 2021 in the large screen category. Let’s get started!

Opportunity

When we started the project, our design implementation on Android was ready for a major overhaul. The last milestone redesign on Android was initiated after the release of the first Material Design guidelines in 2016. Since then, the team had worked successfully on continuous improvements to the Android app, but we saw the opportunity to improve Todoist on Android on a more holistic level.

We set out to clean up instances of older UI components, colors, and text styles and update them with the latest Material Design components. We observed that some interactions and navigational patterns had become inconsistent with what users were expecting on newer Android devices and were eager to modernize this experience. With new hardware and software changes in mind, we set out to make the experience on larger phones and tablets even better, so Todoist could take full advantage of the latest generation of devices. Material 2 and 3 provided an incredible new framework to rethink the current app experience. With this in mind, we set out to challenge what a modern Android app should look like and innovate on top of the default user experience.

Solution

The team set itself the goal of redesigning the Todoist Android app, aspiring to make it the best-designed productivity app on Android. The project was ambitious and scheduled to take several months to complete. We set ourselves the following targets while working on the project:

  • Review the current implementation and older design specs.
  • Study the latest Material Design Guidelines and assess what is relevant for our project.
  • Research great Material Design apps and case studies and learn from their execution.
  • Define the new Todoist Android app design language and document the changes.
  • Design and development work together to assess the proposed solution and implementation.
  • Test an early version of the new app internally to gather feedback and make adjustments.
  • Invite beta testers to the new app to gather feedback and make adjustments.
  • Refine the app and address core issues before launching to the public.

Review

The project was kicked off by reviewing the current Todoist Android app implementation, noting down what areas needed to be fixed and what was up to date. While reviewing, we took screenshots of the app implementation for reference. This way we could easily see the current state of the app and compare it to the new design proposals that would be created. Once the review process was finalized, we had a comprehensive overview of the current state of the app and the layout, component, and styling changes we wanted to make.

Study

We continued the project by studying the latest Material Design Guidelines, assessing the components and practices that were most relevant to Todoist.

When the project kicked off in February 2021, Material 2 was the most recent version of their design system. Since Material 2 had already been released for quite some time, we anticipated that design changes to Material would be announced soon, at the Google I/O event in May 2021. Because we expected the changes to be iterative, we pushed ahead with our work rather than wait.

We identified 25 components and UI patterns that we wanted to change across the app. The changes included buttons, forms, menus, sheets, navigation drawer, app bar, system bars, text and color styles, and more. We started by creating a table view in a Dropbox Paper document with the component changes and references links to Google’s Material Design Guidelines.

This components list was a starting point for discussion to plan the scope and complexity of the changes. Close async discussions between the design and development team in Twist and Dropbox Paper comments helped us make decisions about scope and complexity early on and set a solid foundation for the project.

Research

In the initial Material Design study, we also researched inspiring Material Design apps, Material studies, Play Store apps, and Google Workspace apps to learn from their execution.

We started out by studying the Material Design Award Winners 2020 and tested out the products that were showcased. The showcased winners struck a good balance between implementing the Material Design Guidelines while maintaining their own product’s brand within the system. This balance between Google’s guidelines and the Todoist brand was also key for us to get right and so we strived to find this mix across the work we created and implemented in the project.

Along with the MDA winners, we researched the Material Studies that Google produced to showcase what apps could look like with branding and Material Design guidelines applied. It was a great reference to see how far components could be customized while maintaining the core platform principles. The Reply case study in particular offered valuable insight to us as its content type and layout came closest to Todoist. It showcased how components like the app bar, navigation drawer, and large screen layouts worked while being customized.

We continued our research by searching the Google Play store for inspiring app examples. Google Tasks, Press, Periodic Table, and Kayak stood out to us as the level of polish and quality of the apps were on par with the experience we were aspiring to create.

Sometime later in the project when Material You was released (more on that later), we stumbled upon the Google Workspace apps blog post which previewed Material 3 changes that Google was introducing to their own products. It offered a great glimpse at what was to come before the Material 3 Design Guidelines were officially released. This post sparked new internal discussions and further design explorations that we considered for future Todoist Android updates.

Design Spec

As we started to define the new Todoist Android app design language and document the changes, we opted to create a design framework, focusing on creating components rather than designing every screen in the app. This allowed us to consistently apply the design system in the app. We did so by using the previously defined component list that we created during the review and study process.

Core screens from different areas of the app were chosen to demonstrate how the components could be applied. We chose to mock up the Todoist project view, navigation drawer menu, project view edit screen, settings, and project detail view, among others. These screens gave us a good overview of how buttons, forms, drawers, lists, and other components would work together and in different states: selected, pressed, disabled, etc.

During the project, we were transitioning our Doist design system to Figma and started creating our first components in the new Doist Product Android Library. We started by using some components from the Material Design UI kit – Components library from the official Google Figma resource file and added them to our Doist design system. We then continued to build up the Product Android Library file with our Todoist-specific components such as task list & board views, detail views, sheets, colors, typography, etc.

We continued by documenting color and typography changes that were based on the Material Design guidelines. The design team opted to implement a new Design Token framework that would share the same values between our design system and the development implementation. The development team would output the values they had in the current implementation and the design team would analyze which values were needed and which could be merged, changed, or deleted. This informed the new Design Token color and typography system which we then documented and discussed with the team to implement. Later in the project, we were happy to see a similar token system introduced by Material 3 in the latest guidelines which validated our thinking and principles behind the new design system.
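To illustrate the idea (with made-up names and values, not Doist’s actual token set), such a framework typically stores semantic tokens in a neutral format from which both the Figma styles and the Android resources can be generated:

```json
{
  "comment": "Hypothetical design tokens; names and values are placeholders",
  "color": {
    "surface":    { "value": "#FFFFFF" },
    "on-surface": { "value": "#1F1F1F" },
    "accent":     { "value": "#E44332" }
  },
  "type": {
    "body": { "font-size": "16sp", "line-height": "24sp" }
  }
}
```

Keeping a single source of truth like this is what lets design and development merge, change, or delete values in one place without the two sides drifting apart.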

The design documentation expanded to hold other edge-case mockups that could sit alongside the design system. We documented different responsive screen experiences between phones and tablets against the previous implementation. Additional sections were created to document the motion that should be used for certain components and screens by referencing existing Material Design guidelines examples or prototyping custom motion in Principle and After Effects. The design spec also touched on haptic feedback that should appear on touch targets, how dark mode should work across the new components, documenting Todoist themes within the new design language, and more.

Design Implementation

At Doist, the benefit of the squad is that cross-team collaboration is built into the make-up of the team. Designers, developers, support, and product managers work together in a squad to deliver the project. This close collaboration from the start is key to bridging the gap between scope, estimations, design, development, and delivery. The squad discussed their findings on a daily basis and came up with the best plan of action together.

Designers started by creating components in Figma and shared them with developers in Dropbox Paper. We used screenshots to document the current implementation next to the new designs and linked to the default Google Material Design components. This allowed the team to compare all references in one place. Developers shared their feedback, and adjustments were brainstormed together as the designs were iterated.

Designers on the project would share their work in progress on a weekly basis with the rest of the design team in a design review Twist thread. Here details about the designs were discussed, alternatives mocked up and bigger picture plans made. Design reviews brought up topics like FAB (Floating Action Button) placement, theme options, accent color usage on components, consistency with other platforms, navigation options, and shadow elevation. After thorough discussions and alternative mockups were presented, the design team aimed to find the right balance between Material Design and Todoist brand guidelines. The development team, also part of the design reviews, gave their feedback on the solution and raised technical complexities early on.

Eventually, the design stabilized and inconsistencies were resolved across components and mockups. The design spec was kept up to date so the development team could always review the latest designs in Figma.

Testing

As soon as the development process started, the Android team provided early screenshots and videos in Twist threads while they were implementing the design spec. This practice allowed us to review the app implementation early and often. Designers could review the development work and share feedback in Twist, which resulted in getting the implementation to a high quality. Alongside Twist discussions, the team set up a Todoist project to track ongoing issues and fix bugs. Designers logged new issues, developers would solve them and share the new implementation for designers to review.

When the team had the first stable version of the Android app, we shared it internally at Doist to get more insight and feedback. Other Doisters could access the redesign via a feature flag that could be turned on in the app settings and test the new version for however long they wanted. The feature flag system allowed people to give us early feedback on the design decisions we made and report bugs. Feedback was submitted by the wider team through a dedicated Twist thread and designers and developers could discuss how best to address the feedback during the active project implementation.
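As a sketch of the mechanism described here (the class and preference names are hypothetical, not Doist’s code), such a settings-backed flag can be as simple as a preference read at startup:

```kotlin
import android.content.Context

// Hypothetical settings-backed feature flag: the redesign ships dark
// by default and each tester can toggle it from the app settings.
class FeatureFlags(context: Context) {
    private val prefs =
        context.getSharedPreferences("feature_flags", Context.MODE_PRIVATE)

    var redesignEnabled: Boolean
        get() = prefs.getBoolean("redesign_enabled", false) // off by default
        set(value) = prefs.edit().putBoolean("redesign_enabled", value).apply()
}
```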

After we refined the app implementation further and addressed early feedback we opened up the app update to our beta users. Here users had access to the new Android redesign and were able to give us feedback. Our support team gathered feedback and shared it with us in a dedicated Twist thread. The squad aimed to analyze every comment and looked for patterns where we could make tweaks and improvements to the user experience.

As part of these tweaks, we made changes to how the bottom bar and navigation drawer worked. Some users reported frustrations with the way the new bottom navigation and menu drawer worked. In its first implementation, the drawer was only half raised when opened and had to be swiped up again to reveal the full content list. This was an issue for some users, as it was slower to reach content further down the list. So we decided to fully raise the drawer by default when opening. We also made it easier to open the navigation drawer by sliding up from the bottom app bar. This was a small shortcut, but it enabled users to get to their content faster.

Material You

While we were in the testing phase and about to wrap up the project, Google unveiled Material You, and sometime later the Material 3 Guidelines were published. With the newly announced resources, we went back to study the latest guidelines and references we could find to see where the Todoist Android app redesign fits in and which adjustments we might need to make now or in the future.

Dynamic Color was a big new feature that was announced as part of the Material You update. As Todoist supports many different themes the Material You Dynamic Color feature seemed like a good fit for our product. We decided to prioritize this feature and implement Dynamic Color light and dark themes as part of our Todoist theme settings options.

To implement Dynamic Color, the development team started off by creating a demo prototype that utilized the Dynamic Color system and showcased how we could select from a range of color choices that the system defined based on the wallpaper choice. From there, we tried to incorporate the system behavior in our design mockups. We designed a range of color mockups to see which system colors could fit which components. We then came up with a color system that worked for the Todoist app and the new themes. These new Dynamic Color themes would sit alongside our current theme options in the Todoist app settings. From here users could choose between Dynamic Color Light and Dark themes.
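For readers curious about the mechanics: the Material Components library exposes Dynamic Color through a one-line opt-in. This is a minimal sketch of that platform hook, not Doist’s actual integration.

```kotlin
import android.app.Application
import com.google.android.material.color.DynamicColors

class App : Application() {
    override fun onCreate() {
        super.onCreate()
        // On Android 12+ this re-themes activities with wallpaper-derived
        // colors; on older versions it is a no-op and static themes apply.
        DynamicColors.applyToActivitiesIfAvailable(this)
    }
}
```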

Along with Dynamic Color, the team also created a customizable bottom app bar, allowing users to set up the app in a way that’s most convenient to their workflow. The location of the Dynamic Add Button can be changed to the center, left, or right corner of the screen. The order of the Menu, Search, and Notification buttons can be rearranged to best fit the ergonomics of the user’s dominant (left or right) hand and optimize their navigation patterns.

Launch

As critical beta feedback was addressed and stability tweaks were made, the squad felt ready to release the new Todoist Android app to the public. The team logged the issues that could not immediately be addressed for future reviews and updates.

The design and marketing teams readied the launch by creating What’s New banner artwork and copy displayed within the app when launching the update. The Doist marketing team also created release notes and shared the app update announcements on our social channels. The brand and product design team worked together to create custom image assets and copy that summarized the project work in a simple and beautiful way.

What’s Next: Material 3

After a successful launch of the redesigned Todoist for Android app, Google contacted Doist to announce that Todoist was selected as the Material Design Award 2021 winner in the Large Screen category. The team was excited to be recognized for their hard work and it felt like we achieved the goal we had set out to accomplish.

Internally, designers and developers continued to study and discuss the Material 3 updates. The design team started exploring mockups and design changes inspired by Material 3 and Google’s Workspace app updates. Some of our current Todoist explorations include changing the FAB styling, updating the app bar, further removing elevation shadows, and more. Here is a preview of what a future Todoist update could look like.

We hope these insights into Doist’s design process and collaboration practices have sparked your interest. Thank you for reading and stay tuned for future design updates!

Takeaways

  • Study the Material guidelines, Material Design winners, Material studies, and Google Workspace apps to make informed design decisions when designing your next product or app update.
  • Evaluate which Material Design components and practices are right for you and implement them into your product.
  • Carefully balance the Material Design guidelines with your brand guidelines to create a unique and consistent experience between your product and the platform it lives on.
  • Collaborate with your Android developers early and often to ship app updates efficiently and increase the design implementation quality.
  • Use design components and build a design system along with practical mockups to create an efficient design spec.
  • Consider how the latest Android features fit into your product and which have the most impact on your users before deciding to implement them.
  • Test and review builds with your internal team and external beta users to get valuable feedback and make adjustments before releasing them to the public.
  • Create announcement artwork to showcase your latest app or feature update along with a clear description to share in-app and on social media.

Source

The post Case Study: Redesigning Todoist for Android first appeared on Webdesigner Depot.

Article source: Webdesigner Depot