Articles

Imagine a room with a wall of screens displaying closed-circuit video feeds from dozens of cameras, like a security office in a film. In the movies, there is often a guard responsible for keeping an eye on the screens who inevitably falls asleep, allowing something bad to happen. Although intuition and other distinctly “people skills” are useful in security, most would agree that the human attention span isn’t well-suited to always-on, 24/7 video monitoring. Of course, footage can always be reviewed after something happens, but it’s easy to see the security value of detecting something out of the ordinary as it unfolds.

Cameras capture our every move, but who watches them?

Now imagine a video artificial intelligence (AI) application capable of processing thousands of camera feeds in real time. The AI constantly compares new footage to historical footage, then classifies anomalous events by their threat level. Humans are still involved, both to manage the system and to review and respond to potential threats, but AI takes over where we fall short. This isn’t a hypothetical situation: from smart police drones to intelligent doorbells sold by Amazon and Google, AI-powered surveillance solutions are becoming increasingly sophisticated, affordable, and ubiquitous.
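One simple way to flag out-of-the-ordinary activity in a feed is to score each new frame against a rolling baseline built from recent frames. The sketch below is a toy illustration in NumPy with synthetic frames, not how any particular commercial system works; real systems use learned features rather than raw pixels, but the compare-new-footage-to-history structure is the same:

```python
import numpy as np

def anomaly_scores(frames, baseline_window=10):
    """Score each frame by its mean absolute deviation from a rolling
    baseline averaged over the preceding frames."""
    scores = []
    for i, frame in enumerate(frames):
        if i == 0:
            scores.append(0.0)  # no history yet
            continue
        start = max(0, i - baseline_window)
        baseline = np.mean(frames[start:i], axis=0)
        scores.append(float(np.mean(np.abs(frame - baseline))))
    return scores

# Simulated feed: mostly static noisy frames, with a sudden change at frame 20.
rng = np.random.default_rng(0)
frames = [rng.normal(0.5, 0.01, (32, 32)) for _ in range(30)]
frames[20] += 0.8  # an "event" appears in the scene

scores = anomaly_scores(frames)
# Threshold calibrated on the quiet early portion of the feed.
threshold = np.mean(scores[:15]) + 5 * np.std(scores[:15])
flagged = [i for i, s in enumerate(scores) if s > threshold]
```

A human operator would then review only the flagged frames instead of watching every feed continuously.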

Article source on DZONE


Exploring GPT-3: A New Breakthrough in Language Generation

Substantial enthusiasm surrounds OpenAI’s GPT-3 language model, recently made accessible to beta users of the “OpenAI API.”

Article source on DZONE

AI-Powered Computer Vision

The impact of AI on human lives is felt most strongly in the healthcare industry, where AI-powered computer vision technology can help bring affordable healthcare to millions of people. Computer vision is already used for sorting and finding images on blogs and retail websites, and it also has applications in medicine.


Medical diagnosis depends on medical images such as CT scans, MRIs, X-rays, sonograms, and other imaging.

Article source on DZONE


Introduction

Some time ago, I came across a life-cycle management tool (or cloud service) called Valohai, and I was quite impressed by its user interface and the simplicity of its design and layout. I had a good chat about the service at the time with one of the members of Valohai and was given a demo. Before that, I had written a simple pipeline using GNU Parallel, JavaScript, Python, and Bash, as well as another one purely using GNU Parallel and Bash.

I also thought about replacing the moving parts with ready-to-use task/workflow management tools like Jenkins X, Jenkins Pipeline, Concourse, or Airflow, but for various reasons, I did not proceed with the idea.

Article source on DZONE


Comparison Between Data Science, AI, ML, and Deep Learning

What Is Data Science?

Data science includes data analysis, which is an important component of the skill set required for many jobs in this area. But it’s not the only necessary skill. Data scientists play active roles in the design and implementation work of four related areas:

  • Data architecture
  • Data acquisition
  • Data analysis
  • Data archiving



Article source on DZONE (AI)

After my article, “Role of Project Manager in Data Science”, a couple of program managers suggested that I elaborate on a use case about meeting release commitments. We are going to explore simulation, one of the amazing concepts in Artificial Intelligence. Quantitative analytic techniques, such as Monte Carlo simulation, help program managers make decisions by producing probabilistic distributions of potential outcomes.

Monte Carlo simulation relies heavily on the randomness of key variables in solving the problem. Along with the key parameters, we also need to understand the relationships between them and have sufficient data to analyze. The five steps listed in “Forecasting the future: Let’s rewind to the basics” are essential to building an accurate model.
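As a minimal sketch of the technique, the snippet below estimates the probability of a team finishing its remaining scope within a fixed number of sprints by resampling historical sprint velocities. The velocity figures and scope are invented for illustration, not taken from any real project:

```python
import random

def simulate_release(remaining_points, velocity_samples, sprints_left,
                     trials=10_000, seed=42):
    """Estimate the probability of finishing `remaining_points` within
    `sprints_left` sprints by randomly resampling past sprint velocities."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        done = 0.0
        for _ in range(sprints_left):
            done += rng.choice(velocity_samples)  # draw one sprint's velocity
            if done >= remaining_points:
                successes += 1
                break
    return successes / trials

# Hypothetical inputs: the last eight sprint velocities and 120 points of scope.
velocities = [18, 22, 15, 25, 20, 17, 23, 19]
p_seven_sprints = simulate_release(120, velocities, sprints_left=7)
p_five_sprints = simulate_release(120, velocities, sprints_left=5)
```

Rather than a single point forecast, the program manager gets a probability for each candidate release date and can commit to the one that meets the organization’s risk tolerance.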


Article source on DZONE (AI)

For autonomous vehicles to successfully navigate myriad road obstacles, AI must be constantly trained to accurately perceive real-world 3D objects for what they are — traffic cones, pedestrians, electric scooters, etc. In order to do so, 2D images and video collected by sensor cameras must be refined and then annotated into 3D cuboid training data, which autonomous vehicle AI systems can leverage to become more intelligent. (This same method of creating 3D cuboid training data is also useful for teaching perception to AI in the field of robotics.) With cuboid annotation, drawings are first done manually and then calibrated for greater precision through a dynamic mathematical process that provides full 3D data for each cuboid. It’s an interesting process, and here’s a look under the hood at how it works.

Manual Cuboid Annotation

Manually annotating 2D images requires, rather simply, drawing boxes representing two sides of a cuboid around an object, like so:
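The result of that manual step can be represented very simply. The layout below is a hypothetical sketch, assuming the two drawn boxes are the front and rear faces of the cuboid in image coordinates; the calibration process described above would then refine these eight image-plane corners into full 3D coordinates:

```python
def cuboid_vertices(front, rear):
    """Combine two 2D face annotations into the eight image-plane vertices
    of a cuboid. Each face is given as (x_min, y_min, x_max, y_max)."""
    def corners(box):
        x0, y0, x1, y1 = box
        # Clockwise from the top-left corner.
        return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    return {"front": corners(front), "rear": corners(rear)}

# Hypothetical annotation of one object (pixel coordinates are invented).
cuboid = cuboid_vertices(front=(40, 60, 120, 140), rear=(60, 50, 130, 120))
```

Connecting each front corner to the corresponding rear corner yields the wireframe box an annotator sees drawn around the object.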


Article source on DZONE (AI)

You must have seen videos on YouTube or posts in your news feed in which certain text or a person’s face is blurred. That’s how our digital privacy is ensured by the simplest of technologies.
But think about it: in an age of Machine Learning, can’t your digital privacy be easily breached? The answer is a big “yes,” and a team of researchers at the University of Texas has proven it. They have developed software that can identify sensitive content hidden behind blurred or pixelated images, whether that content is someone’s house or vehicle number or simply a human face.

Interestingly, the team hasn’t used some exotic new technology to do it. Instead, it used Machine Learning methods to train neural networks: rather than being explicitly programmed, the computer was fed large volumes of sample images. The algorithm doesn’t actually unblur or restore the image; it identifies the content of the blurred image based on the information it already has.
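To make that idea concrete, here is a small, hedged experiment, using scikit-learn’s bundled digits dataset rather than the researchers’ actual software: a plain classifier trained directly on heavily pixelated images can still recognize their content, because it learns from labeled examples instead of trying to restore the pixels:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def pixelate(images, block=2):
    """Average non-overlapping block x block squares, mimicking pixelation."""
    n, h, w = images.shape
    return images.reshape(n, h // block, block, w // block, block).mean(axis=(2, 4))

digits = load_digits()
coarse = pixelate(digits.images)              # 8x8 digits -> 4x4 "pixelated" digits
X = coarse.reshape(len(coarse), -1)
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

# Train on pixelated images only; the model never sees the originals.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Even with three quarters of the pixel detail averaged away, accuracy stays far above the 10% chance level for ten digit classes, which is the same reason pixelated faces or plate numbers remain vulnerable.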


Article source on DZONE (AI)

About a year ago, I became convinced that the key to succeeding with Artificial Intelligence (AI) was to take a platform approach. In other words, I believed the way to go was to appropriately bring together the range of technologies that are making AI a reality for enterprises and capture the synergies that accrue from doing so. I still firmly believe that.

In fact, having personally met more than 200 executives (business and technology) from around the world since then, all seeking relief and new value from AI, I am convinced that opting for best-of-breed capabilities from a variety of vendors is not necessarily going to work out in practice. For one, despite claims that these offerings are built only on open standards, deploying offerings from a variety of vendors in an integrated manner is a challenge. Further, the business and operational challenges that naturally occur in such multi-provider situations are deterrents too.


Article source on DZONE (AI)

Have you ever thought about how your mail inbox is so smart that it can filter spam, label important emails or conversations, and segregate promotional, social, and primary messages? A complex algorithm is in place for this kind of prediction, and it comes under the wide umbrella of Machine Learning. The formula looks at the words in the subject line, the links included in the mail, and/or patterns in the recipients list. This method is certainly helping the business of email providers, and such predictive (as well as prescriptive) algorithms can help all kinds of businesses. But first, let’s define exactly what Machine Learning (ML) is.

What Is Machine Learning?

Simply put, ML is all about understanding data and statistics, much of it hidden, and then mining meaningful insights from the raw dataset. This analytical method, built on algorithms, can help solve intricate, data-rich business problems.
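As a minimal illustration, with an invented toy corpus rather than a production spam filter, a naive Bayes classifier over subject-line word counts captures the “look at the words in the subject line” idea from the inbox example above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hand-made corpus standing in for a labeled mail archive.
subjects = [
    "win a free prize now", "limited offer click here", "cheap loans guaranteed",
    "meeting agenda for monday", "lunch tomorrow?", "quarterly report attached",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Count words in each subject, then fit a naive Bayes model on the counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(subjects, labels)

prediction = model.predict(["free prize offer"])[0]
```

The model is never told any rules; it infers from the examples which words make a subject line more likely to be spam, which is the essence of learning from data rather than being programmed.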


Article source on DZONE (AI)