Articles

You must have seen videos on YouTube or posts in your news feed in which certain text or a person’s face is blurred. Well, that’s how our digital privacy is protected by the simplest of technologies.
But think about it: in an age of Machine Learning, can’t your digital privacy be easily breached? The answer is a resounding "Yes," and a team of researchers at the University of Texas has proven it. They have developed software that can identify the sensitive content hidden behind blurred or pixelated images, whether that content is someone’s house or vehicle number or simply a human face.

Interestingly, the team didn’t rely on any exotic state-of-the-art technology. Instead, it used standard Machine Learning methods to train neural networks: rather than being explicitly programmed, the computer was fed large volumes of sample images. The algorithm doesn’t actually unblur or restore the image; it identifies the content of the blurred image based on the information it has already learned.
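
To make the idea concrete, here is a minimal, hypothetical sketch of that approach in PyTorch: rather than reconstructing the hidden pixels, a small classifier is trained directly on artificially pixelated images so that it learns to recognise their content. The dataset, model size, and training step are illustrative assumptions, not the researchers’ actual setup.

```python
# Sketch (not the researchers' code): train an ordinary classifier on
# pixelated images so it learns to recognise identities/content directly,
# without ever restoring the original pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pixelate(images: torch.Tensor, block: int = 8) -> torch.Tensor:
    """Simulate mosaic pixelation by downsampling then upsampling."""
    _, _, h, w = images.shape
    small = F.interpolate(images, size=(h // block, w // block), mode="area")
    return F.interpolate(small, size=(h, w), mode="nearest")

class PixelatedClassifier(nn.Module):
    """Small CNN mapping a pixelated image to one of `num_classes` labels."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Hypothetical training step: real `images` and `labels` would come from a
# labelled face/content dataset; only pixelated versions reach the model.
model = PixelatedClassifier(num_classes=100)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images, labels = torch.rand(16, 3, 64, 64), torch.randint(0, 100, (16,))
loss = F.cross_entropy(model(pixelate(images)), labels)
loss.backward()
optimizer.step()
```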


Article source: DZONE (AI)

In Part 1 of this series, we discussed the need for automation of data science and the need for speed and scale in data transformation and model building. In this part, we will discuss other critical areas of ML-based solutions, such as:

  • Model Explainability
  • Model Governance (Traceability, Deployment, and Monitoring)

Model Explainability

Simpler Machine Learning models like linear and logistic regression have high interpretability but may have limited accuracy. Deep Learning models, on the other hand, have time and again produced highly accurate results, but they are considered black boxes because the machine cannot explain its decisions and actions to human users. With regulations like GDPR, model explainability is quickly becoming one of the biggest challenges for data scientists, legal teams, and enterprises, and Explainable AI, commonly referred to as XAI, has become one of the most sought-after research areas in Machine Learning. Predictive accuracy and explainability are frequently subject to a trade-off: higher levels of accuracy may be achieved, but at the cost of decreased explainability.

Unlike Kaggle competitions, where complex ensemble models are built purely to win, model interpretability is very important for enterprises. A loan default prediction model cannot be used to reject a customer’s application until it can explain why the loan is being rejected. Explanations are often required at the model level as well as for individual test instances. At the model level, there is a need to identify the key features and explain how variation in these features affects the model’s decision; variable importance and partial dependence plots are popularly used for this. At the individual test instance level, packages like "lime" help explain how black-box models arrive at a particular decision.
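
As a concrete illustration of both levels, the hedged sketch below uses a hypothetical loan-default dataset: permutation importance and a partial dependence plot for the model-level view, and the "lime" package for an individual application. The column names, data, and model choice are assumptions for illustration only.

```python
# Sketch of model-level and instance-level explainability on a
# hypothetical loan-default dataset (features and labels are synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["income", "loan_amount", "credit_score", "age"]  # hypothetical
X = np.random.rand(500, 4)           # stand-in for real applicant features
y = (X[:, 1] > X[:, 0]).astype(int)  # stand-in for default / no-default labels

model = RandomForestClassifier(n_estimators=100).fit(X, y)  # the "black box"

# Model level: which features matter, and how they move the prediction.
importance = permutation_importance(model, X, y, n_repeats=10)
print(dict(zip(feature_names, importance.importances_mean.round(3))))
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1],
                                        feature_names=feature_names)

# Instance level with LIME: why this particular application was scored as it was.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["repaid", "default"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```

In practice, the permutation importances and partial dependence plot would be reviewed by the modelling team, while the per-instance LIME output is the kind of explanation that could accompany an individual loan decision.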


Article source: DZONE (AI)