Confused about the evolution of Artificial Intelligence (AI)? What does AI even really mean? Learn more about the facts and fiction of this rapidly emerging technology.
AI has been one of the most hyped topics of recent years. One commonly accepted definition, from computer scientist John McCarthy, is “the science and engineering of making intelligent machines, especially intelligent computer programs”. In other words, AI solves tasks that usually require human intelligence. AI encompasses the subfields of machine learning and deep learning, which are often mentioned alongside it.
How is AI revolutionizing workflows?
Throughout history, technology has played a major role in increasing productivity and improving lives. We now talk about facing a fourth industrial revolution, and like all the previous ones, it has been accompanied by the fear of technology replacing human labor. This is one of the most common rumors heard about AI.
Undoubtedly, AI is changing the way we work. Fortunately, it is expected to have a positive effect by creating new jobs and complementing existing ones. In pathology, for example, there is a shortage of professionals even as demand for their services keeps growing. One reason is that global cancer rates are projected to rise by 47% between 2020 and 2040, which will lead to a growing number of pathology tests. Furthermore, in the UK, only 3% of histopathology departments have enough staff to meet clinical demand.
Luckily, AI has stepped in to ease this shortage by giving ‘superpowers’ to pathologists: extreme speed and heightened eyesight. By assisting with manual, time-consuming, and visually demanding tasks, AI not only lets pathologists unlock their full potential but also increases their confidence in their decision-making.
There are a variety of uses for AI, from identifying malaria-infected blood cells to recognising vehicles on a road.
This leads us to the next set of myths. If AI performs so well, should we be worried about it getting smarter than us? Does AI learn by itself? What about the rumors of artificial general intelligence or superintelligence taking over? All of these myths are still far from reality. First, AI does not magically become “better”. Simply put, it is built from rules defined by us, and it only improves if its building blocks, such as data, algorithms, and hardware, improve. These are in fact advancing at enormous speed, which is why AI is developing so rapidly. Second, there are two types of AI: weak and strong. Weak AI is also known as narrow AI or artificial narrow intelligence, whereas strong AI is known as artificial general intelligence.
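To make the idea of “built from rules defined by us” concrete, here is a toy sketch (our illustration, not any production AI system): a single artificial neuron that learns the logical AND function. Notice that it improves only because we supply labeled examples and an update rule; nothing about it gets “smarter” on its own.

```python
# Toy illustration: a single-neuron model that learns logical AND.
# It improves only because we supply data and an update rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred       # zero when the prediction is correct
            w[0] += lr * err * x1    # weights change only when the data
            w[1] += lr * err * x2    # shows the model is wrong
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The "rules" the model ends up with are entirely determined by these labels.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Change the labels and the model learns a different rule; withhold the data and it learns nothing. Real deep learning systems are vastly larger, but the principle is the same.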
What is the difference between weak and strong AI?
Narrow AI is trained to perform specific, narrow tasks according to predefined rules. In other words, it is not genuinely intelligent or self-aware, and it cannot perform any task outside the rules we have defined. Strong AI, by contrast, would be expected to have this capability, if it existed. So far, strong AI is only a theoretical construct: an AI that would possess genuine intelligence and self-awareness, as humans do. Artificial superintelligence would be a step further still, meaning an AI that exceeds human capabilities.
To date, however, all existing AI is classified as weak AI, although the name ‘weak’ can be misleading given some of its most sophisticated applications, such as self-driving cars and protein-folding algorithms. After all, even these work according to predefined rules. Although weak AI can outperform humans in certain narrow tasks, such as quantifying specific cells in a sample with unbeatable efficiency, it is still far from the feared artificial general or super intelligence.
What skills or knowledge are required to get started with AI?
How does AI work, then? Is it only for a tech elite with programming skills and access to large datasets and powerful computers? The answer to the first question is a bit complex, but to the second we can confidently say no. AI is not only for the tech elite: you do not need to know how to code, have access to large datasets, or own highly powerful computers.
Through the cloud-based Aiforia software, anyone can train AI models to detect, quantify, and measure anything in any image. No coding, dedicated hardware, or extensive datasets are needed. The software gives scientists and pathologists easy access to deep learning driven by artificial neural networks, one of the most powerful computational methods for image analysis. With Aiforia’s software, an AI model is trained simply by annotating (marking by drawing) what you want the AI to find.
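What does “training by annotating” mean under the hood? Conceptually, an expert’s drawn annotations pair regions of the image with labels, and those labeled regions become the supervised training data for a deep learning model. Here is a hypothetical, simplified sketch of that first step (not Aiforia’s internals), using small pixel grids:

```python
# Conceptual sketch: drawn annotations become labeled training patches.
# image: 2D grid of pixel intensities
# mask:  same-shaped grid, 1 where the expert drew an annotation

def patches_from_annotation(image, mask, size=2):
    """Slide a window over the image; label each patch from the mask."""
    pairs = []
    h, w = len(image), len(image[0])
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patch = [row[x:x + size] for row in image[y:y + size]]
            # A patch is positive if any annotated pixel falls inside it.
            label = int(any(mask[yy][xx]
                            for yy in range(y, y + size)
                            for xx in range(x, x + size)))
            pairs.append((patch, label))
    return pairs

# Tiny 4x4 "image" with the expert's annotation in the top-left corner.
image = [[9, 8, 0, 0],
         [7, 9, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print([label for _, label in patches_from_annotation(image, mask)])  # [1, 0, 0, 0]
```

A neural network is then trained on these (patch, label) pairs, so the expert’s drawings, not code, define what the model learns to recognise.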
To learn more about AI, machine learning, deep learning, artificial neural networks, and Aiforia’s AI, read more from our other articles:
- AI unbounded
- Introduction to deep learning versus machine learning
- Introduction to artificial neural networks
- Introduction to artificial neural networks part 2
- Introduction to AI in digital pathology