Articles

Businesses have always been among the earliest adopters of new technologies. Advances in computing such as Machine Learning have already made a notable impact on the business world. With operations and processes spread across many levels, adopting a Machine Learning framework can pay off in greater efficiency, productivity, and speed.

Machine Learning has found widespread acceptance among enterprises. MIT Technology Review and Google Cloud recently published a report based on their studies of Machine Learning and its adoption. The report states that about 60 percent of the respondents have already implemented Machine Learning in their businesses.


Article source: DZONE (AI)

Computers today can not only automatically classify photos, they can also describe the various elements in a picture and write short, grammatically correct sentences about each segment. This is done with deep Convolutional Neural Networks (CNNs), which learn the patterns that naturally occur in photos. ImageNet is one of the biggest databases of labeled images for training Convolutional Neural Networks using GPU-accelerated Deep Learning frameworks such as Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PaddlePaddle, PyTorch, and TensorFlow, together with inference optimizers such as TensorRT.
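As an illustration, here is a minimal sketch of classifying a photo with a CNN pretrained on ImageNet, using the Keras API that ships with TensorFlow (one of the frameworks named above); the file name cat.jpg is just a placeholder:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Load a CNN pretrained on ImageNet (weights are downloaded on first use).
model = ResNet50(weights="imagenet")

# "cat.jpg" is a placeholder path; ResNet50 expects 224x224 RGB input.
img = image.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Predict and map the output probabilities back to human-readable labels.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.3f}")
```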

Deep neural networks were first applied to speech recognition in 2009 and were only put into production by Google in 2012. Deep Learning, also referred to as deep neural networks, is a subset of Machine Learning that uses a model of computing very much inspired by the structure of the brain.


Article source: DZONE (AI)

How does TensorFlow apply to nuclear physics? In this video, I chat with Ian Langmore to learn about power generated from nuclear fusion, new plasma generator machines, and how TensorFlow is helping with plasma measurement.

To learn more about what we talked about:


Article source: DZONE (AI)

What a special experience. An old friend and colleague, Lynn Pausic, one of the co-founders of Expero — a company with extensive experience in Machine Learning applied to complex business and technical problems — asked if I would help judge a “Machine Learning hackathon for women.” How could I say no to that?

Eight teams of women presented highly innovative and varied ideas for Machine Learning that could be applied to do good in the world, help improve and save lives, and even make home-cooking easier!


Article source: DZONE (AI)

Data science is all about extracting insight from data, and Machine Learning is one of its key areas. Data science is a blend of advanced statistics, problem-solving, mathematical expertise, data inference, business acumen, algorithm development, and real-world programming ability. Machine Learning, in turn, is a set of algorithms that enables software applications to become more accurate at predicting outcomes or taking actions without being explicitly programmed.

The distinction between data science and Machine Learning is a bit fluid, but the main idea is that data science emphasizes statistical inference and interpretability, while Machine Learning prioritizes predictive accuracy over model interpretability. And for both data science and Machine Learning, open source has become almost the de facto license for innovative new tools.
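As a toy illustration of "learning without being explicitly programmed," here is a minimal sketch using scikit-learn (an assumption, since no specific library is named above) in which a classifier infers a decision rule purely from labeled examples:

```python
from sklearn.linear_model import LogisticRegression

# Toy data: hours of product usage per week -> whether the customer renewed.
X = [[1], [2], [3], [8], [9], [10]]   # feature: weekly usage hours
y = [0, 0, 0, 1, 1, 1]                # label: 1 = renewed, 0 = churned

# No hand-written rules: the model learns the decision boundary from examples.
model = LogisticRegression().fit(X, y)

print(model.predict([[2.5], [7.5]]))   # e.g. [0 1]
print(model.predict_proba([[7.5]]))    # class probabilities for one customer
```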


Article source: DZONE (AI)

This article is featured in the new DZone Guide to Artificial Intelligence: Automating Decision-Making. Get your free copy for more insightful articles, industry statistics, and more!

Since February 2018, scientists at Google’s health-tech subsidiary have been pioneering innovative ways of producing healthcare insights with artificial-intelligence prediction algorithms. From a scan of the back of a patient’s eye, their system can predict the patient’s risk of experiencing a severe cardiac incident.


Article source: DZONE (AI)

Conversation drives sales, and this is a well-known fact. For customers, it is important to have someone to ask questions and clarify doubts with, someone who can guide them and recommend the best option. Today, these conversations can be automated: there is no longer a need to attach a physical person to each customer. Conversational commerce has become a fast-growing buzzword, and chatbots play a key role in this field. In this article, I would like to discuss why chatbots have become so popular and why e-commerce and m-commerce companies invest heavily in them.

What Is a Chatbot?

First off, let’s make sure we are on the same page: what is a chatbot? A chatbot is a computer program, or an artificial intelligence, that conducts a conversation via auditory or textual methods. It simulates how a human would behave, in an automated way, improving the efficiency of the process.
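To make the definition concrete, here is a minimal sketch of a text-based chatbot. It is a toy, keyword-matching script rather than the AI-driven bots discussed here, and all intents and replies are invented for illustration:

```python
# Toy keyword-matching chatbot; real e-commerce bots use NLU models instead.
RESPONSES = {
    "price":    "Our plans start at $10/month. Would you like the details?",
    "shipping": "Standard shipping takes 3-5 business days.",
    "human":    "Sure, connecting you to a support agent now.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't get that. Could you rephrase?"

if __name__ == "__main__":
    print("Bot:", reply("What is the price of the premium plan?"))
    print("Bot:", reply("Can I talk to a human?"))
```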


Article source: DZONE (AI)


Why Enterprise Application Companies Should Take a Cue From Apple’s Siri and Google Assistant

Enterprise applications are the next frontier in the adoption of natural language interfaces. Unlike consumer tech, e-commerce, and various chatbots, where NLP/NLU is more of a technical novelty, the world of enterprise is killer ground for natural language interfaces.

A Need for a Unified Interface

One of the key unique properties of natural language is that it provides a unified interface to any data source or sources. It is the one interface that everyone already knows, and at the same time, it is the same interface for any supported system. Think about it: you can easily ask a lawyer, salesman, or marketing professional about any specific topic as long as you can formulate a question in a minimally understandable way.


Article source: DZONE (AI)

In Part 1 of this series, we discussed the need for automation of data science and the need for speed and scale in data transformation and building models. In this part, we will discuss other critical areas of ML-based solutions like:

  • Model Explainability
  • Model Governance (Traceability, Deployment, and Monitoring)

Model Explainability

Simpler Machine Learning models like linear and logistic regression have high interpretability but may have limited accuracy. Deep Learning models, on the other hand, have time and again produced highly accurate results, but they are considered black boxes because of the machine’s inability to explain its decisions and actions to human users. With regulations like GDPR, model explainability is quickly becoming one of the biggest challenges for data scientists, legal teams, and enterprises. Explainable AI, commonly referred to as XAI, is becoming one of the most sought-after research areas in Machine Learning. Predictive accuracy and explainability are frequently subject to a trade-off: higher accuracy may be achieved, but at the cost of lower explainability.

Unlike Kaggle competitions, where complex ensemble models are built purely to win, in the enterprise model interpretability is very important. A loan default prediction model cannot be used to reject a customer’s loan application until the model can explain why the loan is being rejected. Explainability is also often required both at the model level and at the individual test instance level. At the model level, there is a need to explain which features are important and how variations in these features affect the model’s decision; variable importance and partial dependence plots are popularly used for this. At the individual test instance level, packages like “lime” help explain how black-box models arrive at a specific decision.
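As an illustration of instance-level explanation, here is a minimal sketch using the lime package with a scikit-learn classifier; the loan features, the synthetic data, and the trained model are assumptions made purely for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Assumed toy loan data: [income_k, loan_amount_k, credit_score]
rng = np.random.default_rng(0)
X_train = rng.normal([60, 20, 650], [15, 8, 60], size=(500, 3))
y_train = (X_train[:, 2] + 0.5 * X_train[:, 0] - X_train[:, 1] < 650).astype(int)

# A "black box" model standing in for a loan default predictor.
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# Explain a single applicant's prediction in terms of the input features.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=["income_k", "loan_amount_k", "credit_score"],
    class_names=["approve", "reject"],
    mode="classification",
)
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # feature contributions for this one decision
```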


Article source: DZONE (AI)

If you are planning to experiment with deep learning models, Keras might be a good place to start. It’s a high-level API written in Python with backend support for TensorFlow, CNTK, and Theano.

For those of you who are new to Keras, you can read more at keras.io, or a simple Google search will take you to the basics of Keras and more.
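For a first experiment, here is a minimal sketch of defining and training a small Keras model; the data shapes, layer sizes, and dummy dataset are arbitrary choices for illustration:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy data: 1,000 samples with 20 features and binary labels (illustrative only).
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

# A small fully connected network built with the Sequential API.
model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(16, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train and evaluate on the dummy data.
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```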


Article source: DZONE (AI)