Martin Spano is the author of Artificial Intelligence in a Nutshell, a book that explores the often-mystified subject of artificial intelligence (AI) in simple, non-technical language. Spano's passion for AI began after he watched 2001: A Space Odyssey, but he insists this ever-changing technology is not just the subject of sci-fi novels and movies; artificial intelligence is present in our everyday lives.
Artificial intelligence as a scientific field has a tradition of more than sixty years, and its history is closely linked to the history of computers. At the field's creation, computer scientists assumed that building artificial intelligence would be straightforward: they would simply programme it, the way they programmed everything else. Software, or a computer programme, consists of one or more algorithms written in source code. We can imagine it as one very long, detailed kitchen recipe. The problem with this recipe is that a single mistake is enough to stop it from working. This is one of the reasons why we programmers make so much money. But artificial intelligence cannot work this way. After all, you wouldn't want your autonomous car to be crippled by an unforeseen mistake at 130 km/h.
In classical programming, as this method is called, we have to handle every possibility explicitly, and in real life the number of possibilities grows exponentially. Simply put, we have no chance.
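To make the exponential blow-up concrete, here is a minimal sketch of the classical-programming style the text describes. The function and its yes/no features are invented for illustration; the point is that every combination of inputs needs its own hand-written rule, and the number of combinations doubles with each new feature.

```python
# A hypothetical hand-coded "is this a cat?" check in the classical style:
# every case must be handled explicitly by the programmer.

def looks_like_cat(has_whiskers: bool, meows: bool, has_tail: bool) -> bool:
    # With only 3 yes/no features there are already 2**3 = 8 cases to cover;
    # with n features there are 2**n -- the exponential growth the text mentions.
    if has_whiskers and meows:
        return True
    if has_whiskers and has_tail and not meows:
        return True  # a quiet cat
    # ...every remaining combination would need its own hand-written rule
    return False

print(2 ** 3)   # cases for 3 features: 8
print(2 ** 30)  # cases for 30 features: over a billion
```

Real-world inputs are not three booleans but millions of pixels, which is why this approach collapses long before the rules are complete.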
What to do?
As has happened many times in history, computer scientists turned to nature for inspiration, specifically to how humans acquire intelligence. You do not give your children detailed instructions for every situation. Even if you try, they won't listen to you. Instead, you create the best possible conditions for them to learn on their own. Let us transfer this approach to the field of artificial intelligence. Instead of classic programming, we create and use a learning algorithm. This algorithm is also a computer programme, but one that receives training data and builds an internal structure from that data. After training is complete, the artificial intelligence can make its own decisions about new data it receives. To repeat, for clarity: artificial intelligence learns from the training data all by itself, creating an internal structure, on the basis of which it then makes independent decisions. This learning process is called machine learning.
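The train-then-decide loop described above can be sketched in a few lines. This is a toy illustration with invented data, using one of the simplest possible learners (a nearest-centroid classifier), not the method any particular AI system uses: "training" builds the internal structure (one average point per class), and prediction then uses that structure to decide about data it has never seen.

```python
# Sketch of the machine-learning loop: training data -> internal structure
# -> independent decisions on new data.

def train(samples):
    """samples: list of (features, label) pairs. Returns the internal
    structure: a dict mapping each label to the mean of its feature vectors."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Decide about new data: pick the class whose centroid is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

# Hypothetical training data: (weight in kg, ear length in cm) -> species.
training_data = [
    ([4.0, 6.5], "cat"), ([5.0, 7.0], "cat"),
    ([20.0, 12.0], "dog"), ([30.0, 14.0], "dog"),
]
model = train(training_data)                 # learning builds the structure
print(predict(model, [4.5, 6.8]))            # a new animal it never saw: "cat"
```

Notice that nowhere did we write a rule like "cats weigh less than dogs"; that regularity was extracted from the data by the learning algorithm itself.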
Although machine learning was discovered soon after artificial intelligence emerged as a scientific discipline, only in the last decade has it become the de facto standard for building artificial intelligence. This is because artificial intelligence learns from data. For example, if you want it to learn to distinguish a dog from a cat in pictures, you need to show it pictures of dogs and pictures of cats. And not tens or hundreds of images, not even thousands, but preferably millions or billions. As one joke in the artificial intelligence community goes: "It didn't work the first time? It doesn't matter, it will work after billions of times." This shows how much data artificial intelligence needs in order to learn.
Thanks to social networks and the Internet of Things, such volumes of data have become available only in the last decade. At the same time, big data, as this large amount of data is called, needs sufficient computing power to process, which we have only achieved in recent years. To sum up, the synergistic effect of large amounts of data, sufficient computational power to process them, and the discovery of a method that processes them — machine learning — has caused the current boom in artificial intelligence.