Introduction to artificial neural networks, part 2

A deeper dive into the artificial neural networks that drive deep learning and how they process information in medical image analysis tasks.
Written by Aiforia

How do neural networks learn?

In our previous article, we covered the basics of artificial neural networks (ANNs), how they process data, and what convolutional neural networks (CNNs) are.

To quickly recap, CNNs are a type of neural network. They are unique in the way they process data, such as images, and in the fact that they are exceptionally powerful at image analysis. There are different methods they can use for learning; some of the more common ones are supervised, unsupervised, semi-supervised, and reinforcement learning. Now we will dive into the definitions of each.

Types of learning 

In supervised learning, the neural network learns from a labeled dataset, while in an unsupervised model the network gathers information from an unlabeled dataset by extracting features and patterns on its own. In the unsupervised case there is essentially no defined ground truth, so the method is more useful for generating hypotheses than for testing them.
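
To make the distinction concrete, here is a minimal Python sketch using scikit-learn on toy data; the measurements and labels are purely illustrative and not from any Aiforia dataset:

```python
# A minimal sketch contrasting supervised and unsupervised learning.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 2))        # e.g. two measurements per sample
labels = (features[:, 0] > 0).astype(int)   # hypothetical ground-truth labels

# Supervised: the model is fitted to the features *and* their labels.
classifier = LogisticRegression().fit(features, labels)

# Unsupervised: the model sees only the features and groups them on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```

The supervised model can later be checked against the known labels, while the unsupervised clusters only suggest structure in the data.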

Semi-supervised learning is a method sitting between the two, using a mix of labeled and unlabeled data. This can be useful when labeling the data would be time-consuming or resource-intensive, or when it is simply difficult to extract the relevant features from the data.

The fourth of these common learning methods is reinforcement learning, in which AI models are trained via a reward system, meaning that feedback is provided by the system during the learning process. If you would like to learn more about these types of learning, here is a comprehensive article from NVIDIA. Otherwise, we will now focus on the type of learning that Aiforia mostly deploys.

Supervised learning

Aiforia’s AI, and thereby its convolutional neural networks, is in most cases trained with the help of supervised learning. The first step in training is to select the input data; in the case of image analysis, this is a training set of images labeled for certain parameters. The training set should represent the possible variation in the whole material used for the research.
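
As a rough illustration of what happens under the hood, here is a minimal PyTorch sketch of one supervised training step on labeled image patches. It is a generic example under assumed tensor shapes and class names, not Aiforia's actual pipeline:

```python
# A minimal sketch of one supervised training step for a tiny CNN classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # learn simple image features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                            # two classes, e.g. tissue vs. background
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 64, 64)   # a batch of labeled training patches (random stand-ins)
labels = torch.tensor([0, 1, 0, 1])  # the labels supplied by the annotator

optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # compare predictions to the known labels
loss.backward()                        # measure how each weight contributed to the error
optimizer.step()                       # adjust the weights to reduce the error
```

Repeating this step over a representative training set is, in essence, how a supervised model learns the labeled parameters.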

With Aiforia, this training process is unique in how easy it is for the end user, who may even be a novice in computer science. The end user, such as a medical scientist or healthcare professional, is able to train and thus develop their own neural network without the need for coding. Thanks to Aiforia's intuitive interface, the parameters are taught to the neural networks simply by annotating and categorizing certain features in the image, for example liver tissue versus background, as in this image from the Aiforia interface:

[Image: liver tissue annotations in the Aiforia interface]

As we learned in our previous article on the basics of deep learning, this type of artificial intelligence is called deep because its networks can be built from many more layers than traditional machine learning models. This is why analyzing images with Aiforia's neural networks allows a user to train the AI model to learn many different features.
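
Depth here simply means stacking many layers, so that later layers can combine the simpler features found by earlier ones. A minimal sketch of that idea follows; the layer sizes are illustrative only:

```python
# A minimal sketch of "depth": many stacked convolutional layers.
import torch.nn as nn

def conv_block(in_channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        nn.ReLU(),
    )

deep_feature_extractor = nn.Sequential(
    conv_block(3, 16),    # early layers: edges and textures
    conv_block(16, 32),
    conv_block(32, 64),
    conv_block(64, 128),  # later layers: larger structures built from earlier features
)
```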

During the training of these neural networks, they can be taught different categories such as liver tissue, parenchyma, and portal areas. The possibilities are nearly limitless, as the neural networks can learn whatever you wish to teach them. Here is an example of a segmentation project with Aiforia:

[Image: liver segmentation results in the Aiforia interface]
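
Under the hood, a segmentation model of this kind assigns a class to every pixel. Here is a minimal PyTorch sketch of that idea; the class names mirror the example above and the layer sizes are illustrative only:

```python
# A minimal sketch of multi-class semantic segmentation output.
import torch
import torch.nn as nn

CLASSES = ["background", "liver tissue", "parenchyma", "portal area"]

segmentation_head = nn.Conv2d(16, len(CLASSES), kernel_size=1)  # one score map per class
feature_map = torch.randn(1, 16, 128, 128)   # stand-in for features from earlier layers
scores = segmentation_head(feature_map)      # shape: (1, 4, 128, 128)
prediction = scores.argmax(dim=1)            # the most likely class for every pixel
```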

Enhancing accuracy

The accuracy of this learning and teaching can be enhanced, particularly with more iterations of teaching; after all, repetition is the mother of all learning. Running the analysis after teaching, on a different set of images, can then show the user whether the AI model has learned the parameters accurately.
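
One simple way to express that check in code is to measure accuracy on a held-out set of labeled images that were never used for teaching. This is a minimal PyTorch sketch; the `model` and `validation_loader` names are assumed placeholders:

```python
# A minimal sketch of validating a trained model on images it has never seen.
import torch

def accuracy(model, validation_loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in validation_loader:        # held-out, labeled images
            predictions = model(images).argmax(dim=1)   # the model's best guess per image
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    return correct / total
```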

Continuing with the liver project example from Aiforia, here is an analysis result from an AI model trained by a pathologist investigating Primary Sclerosing Cholangitis (PSC):

[Image: PSC analysis result from the Aiforia interface]

While more knowledge is always beneficial, with Aiforia's image analysis software the end user does not need to know any coding in order to train their own AI model and automate image analysis tasks for any application.