Product releases, innovation, start-ups

In Part 1 of this series, we discussed the need to automate data science and the need for speed and scale in data transformation and model building. In this part, we will discuss other critical areas of ML-based solutions, such as:

  • Model Explainability
  • Model Governance (Traceability, Deployment, and Monitoring)

Model Explainability

Simpler machine learning models like linear and logistic regression are highly interpretable but may have limited accuracy. Deep learning models, on the other hand, have time and again produced highly accurate results, but they are considered black boxes because the machine cannot explain its decisions and actions to human users. With regulations like GDPR, model explainability is quickly becoming one of the biggest challenges for data scientists, legal teams, and enterprises. Explainable AI, commonly referred to as XAI, is becoming one of the most sought-after research areas in machine learning.

Predictive accuracy and explainability are frequently subject to a trade-off: higher accuracy often comes at the cost of lower explainability. Unlike Kaggle competitions, where complex ensemble models are built purely to win, enterprises place a premium on model interpretability. A loan default prediction model cannot be used to reject a customer’s loan application until it can explain why the loan is being rejected.

Explainability is often required at the model level as well as at the individual test instance level. At the model level, we need to explain which features matter most and how variation in those features affects the model’s decisions; variable importance and partial dependence plots are popular tools for this. At the individual test instance level, packages like “lime” help explain how a black-box model reached a particular decision.
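To make the instance-level case concrete, below is a minimal sketch of explaining a single prediction with the “lime” package. The model, data, and feature names are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of instance-level explanation with the "lime" package,
# using an illustrative random forest trained on synthetic "loan" data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Illustrative training data: 500 applicants, 3 numeric features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] - 0.5 * X_train[:, 2] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["income", "age", "debt_ratio"],  # hypothetical names
    class_names=["approved", "rejected"],
    mode="classification",
)

# Explain one test instance: which features pushed the model's decision?
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=3
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed weights show, for that single applicant, which features pushed the prediction toward approval or rejection.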


Source of the article on DZONE (AI)

If you are planning to experiment with deep learning models, Keras might be a good place to start. It’s a high-level API written in Python with backend support for TensorFlow, CNTK, and Theano.

If you are new to Keras, you can read more at keras.io, or a simple Google search will take you to the basics of Keras and beyond.
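As a quick taste of the API, here is a minimal, self-contained sketch of defining and training a small network with the standalone keras package on a TensorFlow backend; the data is random and purely illustrative.

```python
# A minimal sketch of a Keras model, assuming the standalone `keras`
# package with a TensorFlow backend; the data is purely illustrative.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy data: 1000 samples, 20 features, binary labels.
X = np.random.random((1000, 20))
y = np.random.randint(2, size=(1000, 1))

# Define a small fully connected network layer by layer.
model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])

# Compile with an optimizer, a loss, and metrics, then train.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=1)
```

The same layer-by-layer style scales to convolutional and recurrent architectures; only the layers you stack change.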


Source of the article on DZONE (AI)

The Amazon SageMaker machine learning service is a full platform that greatly simplifies the process of training and deploying your models at scale. However, there are still major gaps in enabling data scientists to do research and development without the heavy lifting of provisioning infrastructure and developing their own continuous delivery practices to obtain quick feedback. In this talk, you will learn how to leverage AWS CodePipeline, CloudFormation, CodeBuild, and SageMaker to create continuous delivery pipelines that give data scientists a repeatable process to build, train, test, and deploy their models.
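As a rough illustration of one step such a pipeline automates, here is a hedged boto3 sketch of launching a SageMaker training job, the kind of call a CodeBuild stage might run. The job name, image URI, role ARN, and bucket paths are placeholders, not values from the talk’s solution.

```python
# A hedged sketch of starting a SageMaker training job with boto3.
# All <angle-bracket> values are placeholders you must fill in.
import boto3

sm = boto3.client("sagemaker")

sm.create_training_job(
    TrainingJobName="demo-training-job",  # hypothetical job name
    AlgorithmSpecification={
        # Placeholder: a built-in or custom training image URI.
        "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/<image>",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account>:role/<sagemaker-role>",  # placeholder
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://<bucket>/train/",  # placeholder bucket
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://<bucket>/output/"},
    ResourceConfig={
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```

In the solution described above, such resources would typically be declared in CloudFormation templates and triggered by CodePipeline rather than run by hand.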

Below, I’ve included a screencast of the talk I gave at the AWS NYC Summit in July 2018, along with a transcript (generated by Amazon Transcribe, another machine learning service, plus lots of human editing). The last six minutes of the talk include two demos on using SageMaker, CodePipeline, and CloudFormation as part of the open source solution we created.


Source of the article on DZONE (AI)

Whenever something serious happens, we usually try to determine cause and effect: what was it that caused events to unfold the way they did? Whilst the theory is nice, we often fall back on rather dubious explanations for the series of events. Superstition, perhaps, or correlation mistaken for causation.

There have been attempts in the past to build mathematical models of general causality, but they haven’t been particularly effective, especially for more complex problems. A new study from the University of Johannesburg, South Africa, and the National Institute of Technology Rourkela, India, attempts to use AI to do a better job.


Source of the article on DZONE (AI)

From ride sharing to smart power grids and from healthcare to our online lives, AI is being propelled out of the lab and into daily life. Microsoft is betting that conservation-focused AI can save our planet, while Facebook sees it as a silver bullet for rooting out harmful content. Tesla CEO Elon Musk and the late physicist Stephen Hawking both warned society of the potential for weaponized AI.

At CA, we wanted to gain insight into how the AI ecosystem has developed over the past year. We partnered with Quid, a San Francisco-based startup whose platform can read millions of news articles, blog posts, company profiles, and patents, and offer immediate insight by organizing that content visually. From its global dataset of 1.8 million companies, Quid classified those that mentioned a specific focus on "Artificial Intelligence" or "Deep Learning."


Source of the article on DZONE (AI)

What was once science fiction in the movies is now reality. Artificial Intelligence and Machine Learning are taking technology to the next level of advancement. Many giant companies are striving to leverage these technologies to understand customers’ demands and engage them more successfully. Even the social media giant Twitter has joined the league.

In a recent announcement, the company declared that it will use Machine Learning to recommend tweets to its users.


Source of the article on DZONE (AI)

Way back in April 2016, Facebook CEO Mark Zuckerberg announced that third parties could use the Facebook Messenger platform to create their own chatbots. This got everyone talking about this artificially intelligent technology and about ways to use it to leverage their business. Marketers were delighted since, besides back-end operations, chatbots could now go on the front end and represent the company’s brand by providing seamless, real-time conversational experiences to clients online.

But the game did not go as planned for Facebook: the hype died down a bit, and social media chatbots started disappearing. It seemed social media marketers could not figure out how to use this new technology. At the same time, however, several technology companies took the idea of bots seriously and started developing programs that used not just artificial intelligence but other advanced technologies like machine learning, natural language processing, and natural language understanding. Bots were becoming self-learning virtual assistants. Almost every business with a virtual presence today has already deployed, or is seriously considering deploying, a virtual assistant on its site for pre-sales or sales service, customer support, and several other purposes.


Source of the article on DZONE (AI)

In this article, I will describe how analytics is related to Machine Learning, try to demystify some of the nonsense around ML, and explain the process and types of machine learning. Finally, I’ll share a couple of videos that describe the next level of Artificial Intelligence: Deep Learning.

Don’t worry if you’re not an artificial intelligence expert — I won’t ever mention Linear Regression and K-Means Clustering again. This is an article in plain English.


Source of the article on DZONE (AI)

Only a year ago, industry discourse around artificial intelligence (AI) was focused on whether or not to go the AI way. Businesses found themselves facing an important choice — weighing the considerable value that would manifest against the investment of capital and talent AI would necessitate. But that was yesterday.

Today, we have reached a critical inflection point. With their technology deployments hitting maturity, early adopters of AI have begun to realize incredible advantages — the ability to optimize operations, maximize productivity, derive insights and be more responsive to real-time market demands. The results are out for the world to see.


Source of the article on DZONE (AI)

Software testing and quality assurance have been leveraged to bring speed and accuracy to enterprises’ digital transformation efforts. Over the last few years, test automation has been increasingly used to ensure accuracy across various digital initiatives. Software development teams are now adopting Artificial Intelligence (AI) to execute testing tasks that are repetitive and time-consuming. The underlying purpose is not only to bring speed but also to ensure accuracy while processing massive volumes of data to derive meaningful inferences.

According to PwC research, "45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand. This is because AI will drive greater product variety, with increased personalisation, attractiveness and affordability over time." AI is indisputably creating a positive stir across various sectors, and when it comes to application testing, its role is equally critical.


Source of the article on DZONE (AI)