Machine Learning Patterns and Anti-Patterns

Machine learning can save developers time and resources when implemented with patterns that have been proven successful. However, it is crucial to avoid anti-patterns that will interfere with the performance of machine learning models. This Refcard covers common machine learning challenges — such as data quality, reproducibility, and data scalability — as well as key patterns and anti-patterns, how to avoid MLOps mistakes, and strategies to detect anti-patterns.

Text Preprocessing Methods for Deep Learning

Deep learning, particularly for natural language processing (NLP), has attracted huge interest in recent years. Some time ago, Kaggle hosted an NLP competition called the Quora Question Insincerity Challenge. The competition is a text classification problem, and it becomes easier to understand after working through the competition, as well as by going through the invaluable kernels put up by the Kaggle experts.

First, let’s start by explaining a little more about the text classification problem in the competition. 
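As a taste of what preprocessing for such a competition involves, here is a minimal sketch of a common first pass over raw question text: lowercasing, punctuation stripping, and whitespace tokenization. The function and the sample question are illustrative, not the competition's actual pipeline.

```python
import re
import string

def preprocess(text):
    """Lowercase, strip punctuation, and tokenize a raw sentence."""
    text = text.lower()
    # Remove punctuation characters
    text = text.translate(str.maketrans("", "", string.punctuation))
    # Collapse whitespace and split into tokens
    return re.sub(r"\s+", " ", text).strip().split(" ")

tokens = preprocess("Why do some questions on Quora seem insincere?")
print(tokens)
```

Real pipelines typically go further (handling contractions, misspellings, and out-of-vocabulary words), but every variant starts from a normalization step like this one.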

What is Neural Search?

TL;DR: Neural search is a new approach to retrieving information using neural networks. Traditional search techniques typically meant writing rules to “understand” the data being searched and return the best results. With neural search, developers don’t need to rack their brains for these rules: the system learns them on its own and gets better as it goes along. Even developers who don’t know machine learning can quickly build a search engine using open-source frameworks such as Jina.
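To make the contrast concrete, here is a toy sketch of the core idea behind neural search: embed everything as vectors, then rank by similarity. The character-count "embedding" below is a deliberately crude stand-in; a real system such as Jina would use a trained neural encoder instead.

```python
import numpy as np

def embed(text):
    """Toy embedding: counts of letters a-z.
    A real neural search system would use a trained encoder here."""
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    return vec

def search(query, docs):
    """Return docs ranked by cosine similarity to the query embedding."""
    q = embed(query)
    def score(d):
        v = embed(d)
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
    return sorted(docs, key=score, reverse=True)

docs = ["neural networks", "cooking pasta", "deep learning models"]
print(search("neural nets", docs)[0])
```

The point is that no hand-written matching rules appear anywhere: swap in a learned encoder and the same ranking loop becomes a neural search engine.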


6 Important Types of Neural Networks

An Introduction to the Most Common Neural Networks

Neural nets have become pretty popular today, but understanding of them remains scarce. For one, many people cannot recognize the various types of neural networks or the problems they solve, let alone distinguish between them. Second, and somehow even worse, people indiscriminately use the term "deep learning" when talking about any neural network, without breaking down the differences.

In this post, we will talk about the most popular neural network architectures that everyone should be familiar with when working in AI research.

Using Neural Networks to Discover Antibiotics

Antibiotic resistance is one of the greatest challenges plaguing modern medicine. More than a hundred thousand people die every year because doctors cannot treat bacterial infections. However, there is an unexpected ally in this fight for lives, which can help to solve the problem of bacterial resistance to existing drugs. This ally is neural networks. Scientists from the Massachusetts Institute of Technology demonstrated that well-trained neural networks can successfully identify new antibiotics from millions of candidate molecules.

Why Many Antibiotics Are Becoming Ineffective

Simply put, the mechanism of bacterial adaptation to antibiotics can be described as follows: random mutations constantly occur in bacterial DNA, and due to their huge number, there is always a probability that some of these mutations will help particular bacteria survive in new conditions. The rest of the population might die, but the surviving ones will quickly multiply and take their place. Bacteria are unlikely to survive boiling or intense irradiation, but many of them no longer respond to antibiotics. Developing resistance requires a certain period of time, and with each year, less and less time is needed. For example, by the early 1970s, most of the gonococcus bacteria had developed high-level resistance to antibiotics of the penicillin group.

A Friendly Introduction to Graph Neural Networks

Graph Neural Networks Explained

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Despite being a potentially confusing topic, GNNs can be distilled into just a handful of simple concepts.
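One of those simple concepts is message passing: each node updates its feature by aggregating its neighbors' features. Here is a minimal sketch of a single aggregation step on a hand-built three-node graph; the normalization choice (averaging over neighbors plus self) is illustrative, not specific to any GNN library.

```python
import numpy as np

# Adjacency matrix of a 3-node path graph: edges 0-1 and 1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])  # one scalar feature per node

# One message-passing step: each node averages its own feature
# with its neighbors' features (add self-loops, then row-normalize)
A_hat = A + np.eye(3)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
X_next = D_inv @ A_hat @ X
print(X_next.ravel())
```

A full GNN layer would also multiply by a learned weight matrix and apply a nonlinearity, then stack several such steps, but the neighborhood aggregation above is the heart of it.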

Starting With Recurrent Neural Networks (RNNs)

We’ll pick a likely familiar starting point: recurrent neural networks. As you may recall, recurrent neural networks are well-suited to data that are arranged in a sequence, such as time series data or language. The defining feature for a recurrent neural network is that the state of an RNN depends not only on the current inputs but also on the network’s previous hidden state. There have been many improvements to RNNs over the years, generally falling under the category of LSTM-style RNNs with a multiplicative gating function between the current and previous hidden state of the model. A review of LSTM variants and their relation to vanilla RNNs can be found here.
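The defining recurrence described above fits in a few lines. The weight shapes and random inputs in this sketch are illustrative, chosen only to show the state update.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One vanilla RNN step: the new state depends on the current
    input AND the previous hidden state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))  # input-to-hidden weights
W_h = rng.normal(size=(4, 4))  # hidden-to-hidden weights
b = np.zeros(4)

h = np.zeros(4)                       # initial hidden state
for x_t in rng.normal(size=(5, 3)):   # a sequence of 5 inputs
    h = rnn_step(x_t, h, W_x, W_h, b)
print(h.shape)
```

LSTM-style variants replace the single tanh update with gated combinations of the current and previous state, but the sequential dependence on `h_prev` is the same.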

3 Reasons to Use a Random Forest Over a Neural Network

Neural networks have been shown to outperform a number of machine learning algorithms in many industry domains. They keep learning until they arrive at the best set of features for satisfying predictive performance. However, a neural network transforms your variables into a series of numbers, so once it finishes the learning stage, the features become indistinguishable to us.

If all we cared about was the prediction, a neural net would be the de facto algorithm, used all the time. But in an industry setting, we need a model that can convey the meaning of a feature or variable to stakeholders, and those stakeholders will likely not have a background in deep learning or machine learning.
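To illustrate the interpretability point, here is a hedged sketch using scikit-learn: a random forest exposes per-feature importance scores that can be reported directly to stakeholders. The feature names are made up, and the synthetic data is constructed so that the first feature carries the signal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
# The label depends mostly on feature 0, so it should rank as most important
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Importances sum to 1 and map one-to-one onto the input features
for name, score in zip(["tenure", "usage", "region"], model.feature_importances_):
    print(f"{name}: {score:.3f}")
```

A neural net trained on the same data would give you weights with no such per-feature story, which is exactly the trade-off the paragraph above describes.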

Introduction to Deep Learning

Let's walk down this introduction to deep learning staircase and explore the learning process of artificial neural networks.

In this article, I will give you a very simple introduction to the basics of deep learning, regardless of the language, library, or framework you may choose thereafter.

Introduction

Explaining deep learning in real depth would take quite a while, so that's not the purpose of this article.

Developer Skills for AI

I had the opportunity to meet with Jeff Prosise, Co-founder and Chief Learning Officer, Wintellect, during Skillsoft's Perspectives 2019 user conference. Wintellect is a Microsoft-certified developer consulting and education firm. Jeff has written nine books and hundreds of articles on software development.

Prior to the conference, Skillsoft announced a partnership with Wintellect to offer expanded training for enterprise technology and developer professionals with WintellectNOW's 500 hours of on-demand training.

Intelligent Automation in the Palm of Your Hand

The global workforce has transitioned to a mobile workforce. The number of mobile workers will grow to 1.9 billion by 2022, which will make up 43 percent of the total workforce [1]. This mobile workforce growth influences enterprise mobility needs.

As far as office automation is concerned, business users need the ability to orchestrate and supervise their digital workforce from anywhere. At Automation Anywhere, we offer a Robotic Process Automation (RPA) mobile app to access your live RPA dashboard securely. Users now can start, pause, and stop attended RPA bots from the app, monitor ROI and performance data, or receive automation alerts.

Optimistic About AI

Dr. Nilesh Modi, Vice President of Technology and Innovation at Cygnet Infotech, Ahmedabad, served on the panel for "Artificial Intelligence — Where Next?" during the second O2H Innovation Conference, moderated by Mr. Prashant Shah, co-founder, O2H, held on March 14, 2019 at Courtyard Marriott, Ahmedabad.

He served alongside other esteemed panelists from diverse fields and specializations including Mr. Amit Saraswat, Director — ML and AI, Fidelity Investments, Dr. M Muruganant, Executive President and Vice Chancellor, Indus University, Mr. Mark Warne, CEO and Director, DeepMatter Group plc, and Mr. Yamin Lawar, Full Stack Developer, O2H.

PyTorch Neural Quirks

PyTorch uses some different software models than you might be used to, especially if you migrate to it from something like Keras or TensorFlow. The first is, of course, the Tensor concept (something shared with TensorFlow, but not so obvious in Keras). The second is the nn.Module hierarchy you need to use when building a particular network. The final one is implied dimensionality and the channel concept. Of these, I'd really like to focus on the latter in its own article, so let's get the first two out of the way first.

Tensors in PyTorch are really just values, and they mirror many of the methods available on NumPy arrays, like ones(), zeros(), etc. They also follow specific naming conventions for instance methods. For example, Tensor::add_() adds to the calling tensor in place, while Tensor::add() returns a new Tensor with the new cumulative value. They support list-like indexing semantics, slicing, and comprehensions as well. They convert easily to and from NumPy arrays via the torch.from_numpy() and Tensor::numpy() methods. They also have a sense of location and are affiliated with a specific device, and this is where things can get tricky.
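A short sketch of those conventions, assuming PyTorch is installed:

```python
import numpy as np
import torch

t = torch.ones(3)          # mirrors np.ones(3)
u = t.add(2.0)             # returns a NEW tensor; t is unchanged
t.add_(2.0)                # trailing underscore: in-place, t itself is modified
print(t, u)                # both now hold [3., 3., 3.]

# Round-trip with NumPy
a = np.array([1.0, 2.0, 3.0])
t2 = torch.from_numpy(a)   # shares memory with the NumPy array
back = t2.numpy()          # view back onto the same buffer
print(back)
```

Note that `from_numpy` shares the underlying buffer rather than copying, which is convenient until device placement enters the picture: a tensor moved to a GPU no longer shares anything with the original array, which is part of what makes device affiliation tricky.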

Deep Reinforced Learning: Addressing Complex Enterprise Challenges

Current deep learning algorithms and methods are nowhere near the holy grail of “Artificial General Intelligence (AGI).”

Current algorithms lean more toward narrow learning, meaning they are good at learning and solving specific types of problems under specific conditions. These algorithms require a humongous amount of data compared to humans, who can learn from relatively few encounters. Transferring what they learn from one problem domain to another is somewhat limited as well.

AI Will Not Eat the World

So I work at the intersection of cybersecurity and machine learning. I use a variety of neural network architectures and machine learning techniques to try to create new ways of detecting malware. I've worked on other projects using machine learning and AI too.

And we have nothing to worry about.