Articles


What is Deno?

Deno is a new way to write server-side JavaScript. It solves many of the problems that Node has. It was created by the same person as Node. It uses the V8 JavaScript engine under the hood, but the rest of the runtime is implemented in Rust and TypeScript.

Why Does Deno Use Rust?

Deno is a secure TypeScript runtime built on Chrome’s V8. It was initially written in Go and has since been rewritten in Rust to stay away from potential garbage collector issues. Deno is similar to Node.js but is centered on security. Like Node, Deno runs JavaScript. Worse than having a competitor who knows your product inside and out, Deno was created by Dahl expressly to fix what he saw as the crucial weaknesses of Node.js, including security issues, the use of a centralized repository system (npm), and heavy tooling.

Source of the article on DZONE


Introduction

While developing applications using Spring Batch, especially in a micro-service project, we sometimes face one or more of the following cases:

  • The need to get the security context inside the batch items in order to call methods that require authorization within the same micro-service, or to perform remote processing by calling other micro-services using Feign Client (HTTP) or Spring Cloud Stream (a broker such as Kafka, RabbitMQ, etc.).
  • Propagating the Sleuth trace ID and span ID in order to improve log traceability across all the application components, including other micro-services, so the trace is not lost when a Job is run.
  • Getting the connected user’s Locale (i18n) in order to generate internationalized output; otherwise, all the Job outputs will be generated in the default server language.
  • Retrieving objects stored inside the Mapped Diagnostic Context (MDC) for tracing purposes.

The following diagram illustrates the remote calls that can be performed in a micro-service-based application and the context information that Spring Batch items can propagate.
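As a rough illustration of the first, second, and fourth cases, the sketch below (not taken from the original article) shows one common approach: registering a Spring TaskDecorator on the executor that runs an asynchronous or partitioned step, so that the caller’s SecurityContext, MDC map (where Sleuth stores the trace and span IDs), and Locale are captured and then restored on the worker threads. The class name and wiring are assumptions made for the example.

```java
import java.util.Locale;
import java.util.Map;

import org.slf4j.MDC;
import org.springframework.context.i18n.LocaleContextHolder;
import org.springframework.core.task.TaskDecorator;
import org.springframework.security.core.context.SecurityContext;
import org.springframework.security.core.context.SecurityContextHolder;

/**
 * Hypothetical TaskDecorator that snapshots the launching thread's
 * SecurityContext, MDC map, and Locale, and restores them on the
 * worker threads used by a Spring Batch step.
 */
public class ContextPropagatingTaskDecorator implements TaskDecorator {

    @Override
    public Runnable decorate(Runnable runnable) {
        // Captured on the thread that launches the step (e.g. the web request thread).
        SecurityContext securityContext = SecurityContextHolder.getContext();
        Map<String, String> mdcSnapshot = MDC.getCopyOfContextMap();
        Locale locale = LocaleContextHolder.getLocale();

        return () -> {
            try {
                // Restored on the worker thread before the batch item code runs.
                SecurityContextHolder.setContext(securityContext);
                if (mdcSnapshot != null) {
                    MDC.setContextMap(mdcSnapshot);
                }
                LocaleContextHolder.setLocale(locale);
                runnable.run();
            } finally {
                // Clear everything so the context does not leak between pooled threads.
                SecurityContextHolder.clearContext();
                MDC.clear();
                LocaleContextHolder.resetLocaleContext();
            }
        };
    }
}
```

The decorator would then be registered on the executor used by the step, for example with taskExecutor.setTaskDecorator(new ContextPropagatingTaskDecorator()) on a ThreadPoolTaskExecutor.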

Source of the article on DZONE


When running Azure Kubernetes Service (AKS), it can be hard to understand and allocate costs in environments with multiple teams, projects, or even departments. With Kubecost, you gain full transparency into your Kubernetes usage and cost within minutes of installation. Officially launched in 2019 and built on open source, Kubecost now monitors over one billion dollars in Kubernetes spend, and enables startups and global enterprises alike to understand their spend and identify cost savings ranging from 30% to over 50%. Kubecost supports a wide range of self-managed and hosted Kubernetes environments, including Azure Kubernetes Service, which we’ll cover today in this article.

The Microsoft Azure Kubernetes Service (AKS) is a popular fully managed Kubernetes service that offers embedded continuous integration and continuous delivery as well as enterprise-grade security and governance: powerful tools for teams adopting Kubernetes. As with any complex infrastructure, AKS requires proper governance and financial transparency for successful organizational adoption. Kubecost, an open source tool that provides teams with visibility into Kubernetes spend and supports environments hosted in Azure, is a widely recommended solution for engineers and finance teams facing this problem. Note: Kubecost’s documentation page for AKS provides helpful context for using Kubecost to implement a cost governance strategy.

Source of the article on DZONE


Introduction: EnRoute Helm Chart

Helm is a popular package manager for Kubernetes. Installing software, managing versions, upgrading versions, and finding charts in the registry are key benefits of Helm.

The EnRoute Helm chart installs the EnRoute Ingress Controller and provides easy configuration options to define policy for a service. The chart gives fine-grained control for defining L7 policies through its ability to enable or disable plugins for a service, using configuration options that can be specified when Helm is invoked.

Source of the article on DZONE

Critical system-of-record data must be compartmentalized and accessed by the right people and applications, at the right time.

Since the turn of the millennium, the art of cryptography has continuously evolved to meet the data security and privacy needs of doing business at Internet speed, by taking advantage of the ready processing horsepower of mainframe platforms for data encryption and decryption workloads.

Source of the article on DZONE

In this post, you will learn how to execute penetration tests with OWASP Zed Attack Proxy (ZAP). ZAP is a free web app scanner which can be used for security testing purposes.

1. Introduction

When you are developing an application, security must be addressed; it can no longer be ignored. Security must be taken into account from the start of development, not only when you want to deploy to production for the first time. You will often notice that adding security to your application at a later stage of development takes a lot of time. It is better to take security into account from the beginning; this will save you some painful headaches. You probably have security experts inside your company, so let them participate from the start when a new application needs to be developed.

Nevertheless, you will also need to verify whether the application you have developed is secure, and penetration tests can help you with that. OWASP Zed Attack Proxy (ZAP) is a tool that can help you execute penetration tests for your application. In this post, you will learn how to set up ZAP and execute tests with the ZAP desktop client. You will also need a (preferably vulnerable) application; for this purpose, OWASP Webgoat will be used. In case you do not know what Webgoat is, you can read a previous post first. It might be a little outdated because Webgoat has been improved since then, but it will give you a good impression of what Webgoat is. It is advised to disconnect from the internet when using Webgoat because it may expose your machine to attacks.

Source of the article on DZONE

We all get excited about new projects; we’re daydreaming about possibilities from the first contact with a potential client. Most professionals have an established onboarding process, with contracts to sign and business assets to acquire; if you’re a coder, you probably set up a fresh new repository; if you’re a designer, you create a new project folder. All of us start imagining how the case study will look in our portfolio.

But few, if any, plan for the end of a project. Offboarding clients simply isn’t a thing. We build their site, and then one day, we don’t.

It may be that the client moves on; hopefully, you’ve done a good enough job that they can’t resist bringing you on board for their next startup. All too often, projects languish in some half-life, with occasional security patches that net you a whole $5 in service charges; is that why you got into web design? Probably not. There is the desirable option of upselling; if your client’s business grows due to your work, then more work should grow it some more.

If you’re great at startups, you’re probably not great at maintaining sites in the long term. If you’re great at maintaining sites, you’re probably not great at growing them.

For every cycle of a project’s life, there are different kinds of professionals who suit it best. And conversely, different cycles of a project suit you and your skillset better than others.

We all know that a bad client — demanding, rude, late at paying — should be fired. But what about a good client — a client who pays quickly, is friendly, professional, accommodating? Would you fire a good client if you’d outgrown the work?


The post Poll: Should You Fire Good Clients? first appeared on Webdesigner Depot.


Source of the article on Webdesignerdepot

This week Google announced further details of its plan to remove cookies from ad tracking. The strategy, which the ad giant expects to be fully implemented by 2022, has come about due to increasingly stringent privacy laws in a growing number of territories around the globe.

Google’s first step was the announcement in January of FLoC (Federated Learning of Cohorts). Google itself is still testing and fine-tuning the system, but in essence, Google will replace 3rd-party cookies in Chrome with groups of anonymized users.

Critics of the plan have questioned whether users will be genuinely anonymous or whether Google will be tracking individuals to group them properly. The answer came earlier this week in a low-key announcement of KaST.

What is KaST?

KaST (Key and Surface Tracking) is the first iteration of Google’s new tracking technology. It works entirely without cookies and is fully device-agnostic.

The technology behind KaST is surprisingly old. It was first trialed in 1987 as a simple process for auditing the input of stenographers. Although the latest version of the technology draws heavily on voice recognition software algorithms, the original version of KaST — software named TAAA (Typist Account Accuracy Audit) — predates modern voice recognition by at least two years.

KaST uses…biomechanical and cognitive patterns, identifying individual users based on their keystrokes.

Just as your voice has a unique, identifiable modulation — anyone who uses telephone banking will be familiar with speaking their password — so too does your biomechanical input.

When you type on a keyboard or a touchscreen, the force, speed, and accuracy with which you hit characters are dependent on two things: your cognitive process and the unique biomechanics of your hands (the bones, ligaments, and muscles).

For example, when I type WordPress, I almost always type it as WordPRess (with a capitalized R). That is one facet of my combined biomechanical and cognitive process.

KaST uses keyboards and touch screens to track combined biomechanical and cognitive patterns, identifying individual users based on their keystrokes.

Mobile Approaches to KaST

KaST is heavily reliant on BMaC (Bio-Mechanical and Cognitive) input. Although Google hasn’t released any data to support the accuracy of KaST, BMaC is known to be surprisingly accurate.

Reports suggest that the KaST algorithm is 89.7% effective for character strings of 12 characters or more, leaping to 97.6% for 19 characters or more on a single device. That makes it too inaccurate for high-end processes like security but well within the necessary margin of error for a non-critical process like serving ads.

Google will be able to identify you on any machine, on any device, in any context, as soon as you type 19 characters or more

When switching to a touch-screen device, the accuracy plummets to just 87.8%. This may be one reason Google has been low-key in its trumpeting of the new technology so far.

According to TechBeat, initial trials of the tri-axis position of a device (X, Y, and Z rotation) were abandoned as inaccurate. Still, even without those additional tracking signals, Google claims KaST on mobile will achieve ~94% accuracy by the 1st quarter of 2022.

What Does KaST Mean for Users?

Much like many of the algorithms that govern our daily lives, KaST will be largely invisible to most of us. Unlike cookies that can be legislated for and removed from a local machine, your BMaC is as inescapable as your DNA.

Where privacy concerns really grow is that your BMaC follows you from device to device. How you type at home is identical to how you type at work. Your personal and professional profiles are now instantly connectable; Google will be able to identify you on any machine, on any device, in any context, as soon as you type 19 characters or more.

KaST Prompts Pre-M1 MacBook Rush

Within 24 hours of KaST’s announcement, Apple stores were reporting rush orders of pre-M1 MacBook Pros, with some stores reportedly selling out late on Wednesday.

The rush came in the wake of a since-removed Reddit post claiming that the notoriously bad butterfly keyboard on pre-M1 MacBook Pros circumvented KaST: the inaccuracy of the keystrokes and the tendency of the keys to stick introduced a random element that disguised the end user from the KaST algorithm.

Although the Reddit post is unsubstantiated, it transpires that M1 Mac owners may not be the lucky ones after all.

Should You Worry About KaST?

Advocates maintain that KaST — and Google’s wider FLoC strategy — are beneficial to users and the web as a whole. They claim that identifying users without 3rd party cookies does more to protect privacy than hinder it.

Opponents argue that in a digital world rife with user tracking, privacy compromises of this magnitude cannot be contemplated simply to enable more sophisticated ad-serving.

Although KaST is still in the early stages of development, privacy concerns are mounting, and a campaign has been launched to regulate Google’s use of the technology.


The post Key and Surface Tracking Comes to Chrome first appeared on Webdesigner Depot.


Source of the article on Webdesignerdepot

Data privacy and protection are two imperative concerns for all businesses today, as any business can be prone to security breaches. Many small and medium organizations tend to ignore application security because they believe only large enterprises are targeted by hackers. However, statistics tell a different story: 43% of cybercrimes are committed against small businesses.

There are several reasons behind cyber-attacks against these organizations, from old, unpatched security vulnerabilities to malware and human error, all of which can make them a lucrative target for attackers. So, ignoring cyber security can put you on the radar of hackers even if you are a startup.

Source of the article on DZONE


How Does Cloud PLM Differ from On-premise Solutions?

On-premise Agile PLM allows for product development, process management, the creation of product records, and more; these are essential features of any PLM. Moving to the cloud takes you a step further in product conception, with the following advantages:

  • The cloud allows for the identification of individual tasks related to each status of the workflow and the overall change.
  • The cloud has powerful security that enables roles and privileges to be controlled directly; Agile PLM, on the other hand, has no team security.
  • The cloud provides Page Composer, which allows complete customization of the page layout, while Agile does not.
  • Sub-classes can be nested to unlimited levels in the cloud, but only three levels exist in Agile: base class, class, and subclass.

To make the transition to the cloud easier, GoSaaS has a clear and well-defined process that captures input from within the company to ensure every requirement is fulfilled.

Source of the article on DZONE