Articles

A critical PostgreSQL database contains valuable data and should be backed up regularly. The process is quite simple, but it is important to have a clear understanding of the underlying techniques and assumptions.

SQL Dump

The idea behind this dump method is to generate a text file from DataCenter1 with SQL commands that, when fed back to the DataCenter2 server, will recreate the database in the same state it was in at the time of the dump. That way, if the client cannot access the primary server, it can access the BCP server instead. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is: pg_dump dbname > backupoutputfile.db.
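Below is a minimal sketch of that dump-and-restore round trip, driven from Python with the standard subprocess module. The host names are illustrative assumptions, and the target database is assumed to already exist on the BCP server.

    import subprocess

    DB = "dbname"
    DUMP_FILE = "backupoutputfile.db"

    # Dump the database on the primary (DataCenter1) to a plain-text SQL file.
    with open(DUMP_FILE, "w") as out:
        subprocess.run(["pg_dump", "-h", "datacenter1.example.com", DB],
                       stdout=out, check=True)

    # Feed the SQL back to the BCP server (DataCenter2). pg_dump's plain
    # output does not create the database itself, so it must already exist
    # (or be created first, e.g. with createdb).
    with open(DUMP_FILE) as dump:
        subprocess.run(["psql", "-h", "datacenter2.example.com", "-d", DB],
                       stdin=dump, check=True)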

Source of the article on DZONE

With the rapid development of cloud computing technology, an increasing number of developers are deploying applications to Alibaba Cloud Elastic Compute Service (ECS) instances. This tutorial describes how to deploy a Java application developed locally to an Alibaba Cloud ECS instance using Cloud Toolkit.

Develop an Application Locally

The coding method is the same whether you are writing Java applications to run in the cloud or locally. Therefore, this article uses a Java servlet that prints "Hello World" on a web page as an example to explain the deployment method.

Source of the article on DZONE

Database DevOps has come of age. Now seen as a key technical practice that can contribute to the successful implementation of DevOps, it stops the database from being a bottleneck and makes releases faster and easier.

Counterintuitively, perhaps, the automation and audit trails it introduces can help protect personal data within databases and make compliance part of the same process, rather than an additional time-consuming step outside it.

Source of the article on DZONE


“DevOps is Agile on steroids — because Agile isn’t Agile enough.”

So says Jim Bird, the CTO of BiDS Trading, a trading platform for institutional investors. Jim continued, "DevOps teams can move really fast…maybe too fast? This is a significant challenge for operations and security. How do you identify and contain risks when decisions are being made quickly, and often by self-managing delivery teams? CABs, annual pen tests, and periodic vulnerability assessments are quickly made irrelevant. How can you prove compliance when developers are pushing their own changes to production?"

Jim was presenting at the 2018 Nexus User Conference on Continuous Delivery. Drawing on his 20+ years of experience in development, operations, and security in highly regulated environments, Jim laid out how and why Continuous Delivery reduces risk, and how you can get some easy wins toward making it more secure.

Source of the article on DZONE

This is part 3 in a series on alert fatigue. Read up on parts 1 and 2.

In many cases, as you monitor a particular state of a system, you already know the steps to triage the situation, or even fix it automatically. Let's take a look at how we can automate this using check hooks and handlers.
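As a sketch of what such a handler might look like: monitoring systems in the Sensu style pipe the triggering event to the handler command as JSON on standard input, so an auto-remediation handler can be a small script that inspects the event and attempts a first-line fix. The event fields and the service name below are illustrative assumptions.

    import json
    import subprocess
    import sys

    # The monitoring system pipes the triggering event to this handler
    # as JSON on standard input.
    event = json.load(sys.stdin)

    # By convention, a non-zero check status means the check failed.
    if event.get("check", {}).get("status", 0) != 0:
        # First-line triage: attempt an automatic service restart before
        # paging a human. "myservice" is an illustrative placeholder.
        subprocess.run(["systemctl", "restart", "myservice"], check=False)
        print("attempted automatic restart of myservice")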

Source of the article on DZONE

For ERP administrators today, security is always top of mind. But recent warnings from the US Department of Homeland Security about ERP vulnerabilities make securing your Oracle and SAP applications even more urgent. Data breaches and unauthorized access can disrupt business-critical processes and negatively impact your customers. Staying up to date with security patches is the best way to make sure this doesn't happen to your organization, but good protocol requires that you first test patches against a separate test instance of SAP to confirm that they won't impact the operation of your production instance.

So, how can you speed up testing and implement these important patches as soon as they’re available? Automating the SAP system copy process is one way to clear the path of the obstacles that keep you from better security.

Source of the article on DZONE

About a year ago, I was convinced that the key to succeeding with Artificial Intelligence (AI) was to take a platform approach. In other words, I believed the way to go was to capture the synergies that accrue from appropriately bringing together the range of technologies that are making AI a reality for enterprises. I still firmly believe that.

In fact, having personally met more than 200 executives (business and technology) from around the world since then, all seeking relief and new value from AI, I am convinced that opting for best-of-breed capabilities from a variety of vendors is not necessarily going to work out in practice. For one, despite claims that these offerings are built only on open standards, deploying offerings from a variety of vendors in an integrated manner is a challenge. Further, the business and operational challenges that naturally occur in such multi-provider situations are deterrents too.


Source of the article on DZONE (AI)

The following is an excerpt from a presentation by Ron Forrester and Scott Boecker of Nike, titled "DevOps at Nike: There is No Finish Line."

You can watch the video of the presentation, which was originally delivered at the 2017 DevOps Enterprise Summit in San Francisco.

Source of the article on DZONE

In Part 1 of this series, we discussed the need for automation of data science and the need for speed and scale in data transformation and model building. In this part, we will discuss other critical areas of ML-based solutions, such as:

  • Model Explainability
  • Model Governance (Traceability, Deployment, and Monitoring)

Model Explainability

Simpler Machine Learning models like linear and logistic regression have high interpretability but may have limited accuracy. Deep Learning models, on the other hand, have time and again produced highly accurate results, but are considered black boxes because of the machine's inability to explain its decisions and actions to human users. With regulations like GDPR, model explainability is quickly becoming one of the biggest challenges for data scientists, legal teams, and enterprises. Explainable AI, commonly referred to as XAI, is becoming one of the most sought-after research areas in Machine Learning.

Predictive accuracy and explainability are frequently subject to a trade-off: higher levels of accuracy may be achieved, but at the cost of lower explainability. Unlike Kaggle competitions, where complex ensemble models are created to win, in the enterprise model interpretability is very important. A loan default prediction model cannot be used to reject a customer's loan application unless it can explain why the loan is being rejected.

Explainability is often required at the model level as well as at the level of an individual test instance. At the model level, there is a need to identify the key features that matter and to explain how variation in those features affects the model's decisions; variable importance and partial dependence plots are popularly used for this. At the individual test instance level, there are packages like "lime" that help explain how black-box models arrive at a particular decision.
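As a minimal sketch of that instance-level explanation with "lime" (assuming the scikit-learn and lime packages are installed; the dataset and model are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Train a "black box" model on an illustrative dataset.
    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
        mode="classification",
    )

    # Explain one test instance: which features pushed the prediction
    # toward the predicted class, and by how much?
    explanation = explainer.explain_instance(data.data[0], model.predict_proba)
    print(explanation.as_list())  # [(feature condition, weight), ...]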


Source of the article on DZONE (AI)

Only a year ago, industry discourse around artificial intelligence (AI) was focused on whether or not to go the AI way. Businesses found themselves facing an important choice — weighing the considerable value that would manifest against the investment of capital and talent AI would necessitate. But that was yesterday.

Today, we have reached a critical inflection point. With their technology deployments hitting maturity, early adopters of AI have begun to realize incredible advantages — the ability to optimize operations, maximize productivity, derive insights and be more responsive to real-time market demands. The results are out for the world to see.


Source of the article on DZONE (AI)