I think most of us have seen Metalhead (episode 5 of the fourth series of Black Mirror). In this black-and-white episode, dog-like machines take control of the Earth, terrorising people with their ruthlessness.
After watching this episode, probably every IT fan is immediately reminded of what Boston Dynamics builds. The resemblance between the film's killers and the real robots made by this company is striking. Interestingly, Boston Dynamics plans to start selling these nice dogs in 2019, at first 100-1000 units per year.
Let's also look at the evolution taking place before our eyes in robotics, in which Boston Dynamics is undoubtedly the leader.
Imagine that these "dogs" are sold in large quantities over the following years and suddenly, due to a software bug or a hacker attack, the apocalypse from the Black Mirror episode begins ...
Let me calm everyone down: NO, these are not "terminators" yet.
Robots of this type will be sold with a remote control, so we decide where they go. For now, they are just sophisticated "remote-controlled toys". Although they are equipped with an autonomous mode, it amounts only to following a straight path from point A to point B.
It's good that artificial intelligence is not yet available ... ("it was not available so far", Sara the AI will soon think to herself).
When browsing the internet, we see "artificial intelligence" practically everywhere, but do we really? Do we see artificial intelligence, or just two attractive marketing words?
We live in times when "AI" describes all kinds of products, from sharp knives to voice assistants. Is this artificial intelligence?
If we look at various descriptions of what AI actually is, we will find generalities vague enough that almost anything can fit. I have the impression that whenever a large company spends a few billion dollars on a new product, or even on a minor phone feature, it adds yet another clause to the definition of AI so that the product can be fully promoted as "AI powered".
I also have the impression that most people nonetheless intuitively understand what real AI should be; a lot of Hollywood films probably have a huge impact on that.
But why, with such powerful computers and billions of dollars spent on research, is it still impossible to hold even a simple conversation with an AI? Why do even the best voice assistants become boring after a moment of use? Well, there are several "small" problems that have not been solved so far.
First of all, in order for programmers to write something, they need to understand what to write, and unfortunately our knowledge of how the human brain produces intelligence is almost none. We know how neurons work, how they communicate, and which parts of the brain are responsible for which activities; on the psychological side we understand our behavior quite well. But we cannot combine this knowledge into a working understanding, let alone describe it and copy it.
The second "small problem" is that computers are really blind, deaf, have no sense of touch, smell or taste. Imagine that a child is born without all the senses, what chance does it have to become intelligent in any degree? This is obviously an extreme case, but it is enough that the child is born blind. Blind children develop well, but they start to talk and understand much later. The sense of hearing and touch are able to quickly sharpen and help in the development of intelligence, but it must take much longer than in people with functional vision, one of the most important of our senses to explore the outside world.
Some of you are probably thinking: but computers have cameras and microphones. They do, but ...
The best image recognition systems available to everyone, such as Google Vision, analyze an image for a long time, see little, and make thousands of basic mistakes, while a child, in every second of life, effectively watches a 3D movie at dozens of frames per second for many hours a day!
Microphones are where the greatest progress has been made: a computer can capture sound direction, loudness, and frequency, but speech recognition systems are still limping, unable to pick out a voice and recognize it in a room disturbed by other sounds. Remember that even at 90% accuracy, every 10th word is lost or converted into another. Try communicating well while turning one word in ten into a random one unrelated to the topic ...
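To get a feel for how disruptive that error rate is, here is a toy sketch (function and filler words are my own invention, not any real recognizer's behavior) that corrupts a transcript the way the paragraph above describes:

```python
import random

def corrupt_transcript(sentence, accuracy=0.9, seed=42):
    """Simulate speech-recognition errors: each word survives with
    probability `accuracy`; otherwise it is replaced by a random
    word unrelated to the topic."""
    rng = random.Random(seed)
    filler = ["banana", "trombone", "carpet", "seven"]
    out = []
    for word in sentence.split():
        if rng.random() < accuracy:
            out.append(word)          # word recognized correctly
        else:
            out.append(rng.choice(filler))  # word garbled into nonsense
    return " ".join(out)
```

Run it on any paragraph at `accuracy=0.9` and read the result aloud: that is roughly what the assistant "hears".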
Now, an explanation of what artificial intelligence, in my opinion, is and what it is not.
It would seem that Elon Musk's autonomous Tesla, which can take us home from work, is an example of artificial intelligence. No: it is a brilliant invention and the future of motoring, but there is no more artificial intelligence in it than in any phone, which is to say none at all. These are just elaborate algorithms operating on the principle of executing programmed conditions, such as: when the red light is on, stop the car. Of course that is a simplification, but it is exactly how it works. You do not really want the car to make decisions based on its own experience, learning from past events, because then we could not predict its behavior. It is better to write a set of rules into it than to wonder why the car suddenly turned left because it came up with such a brilliant idea. After all, we learn from mistakes, and we do not let children drive a car, because mistakes while driving can end tragically.
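The "programmed conditions" idea above can be caricatured in a few lines. This is a deliberately naive sketch with made-up names, nothing like a real autopilot, but it illustrates the point: fixed hand-written rules, evaluated in order, with no learning anywhere:

```python
def decide(perception):
    """Toy rule-based 'autopilot': a fixed list of if-then rules,
    checked top to bottom. No experience, no learning -- and
    therefore completely predictable."""
    if perception.get("traffic_light") == "red":
        return "stop"
    if perception.get("obstacle_ahead"):
        return "brake"
    if perception.get("lane_drift"):
        return "steer_back"
    return "keep_going"
```

Given the same inputs, this always produces the same output, which is exactly why engineers prefer it to a car that improvises.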
Voice assistants: ask one a simple question about an activity whose effect is known to every child, e.g. "can I get into the fire?"
Voice assistants have zero IQ. I will explain how they work and why there is no AI in them.
There are more developed systems that can, for example, summarize a text they have read. It would seem that to summarize a text, one must understand it, know what it is about, and know its context; only then can it be summarized. Nothing could be further from the truth: it is just statistics and enormous knowledge bases.
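To illustrate "just statistics", here is a minimal extractive summarizer (a toy sketch of the general frequency-based technique, not any product's actual method): it scores each sentence by how common its words are in the whole text and returns the top one, with no understanding involved at any point.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Extractive 'summary' by pure word statistics: score each
    sentence by the average frequency of its words in the whole
    text, then return the top-scoring sentence(s)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:n_sentences])
```

The function has no idea what the text means; it only counts words, yet the output often looks deceptively like comprehension.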
How does all this work now, and how does it cheat us by pretending to be AI?
Systems based on deep learning, the basis of NLP (Natural Language Processing), are responsible for "understanding" our speech. This is not a scientific article, so I will summarize quickly: there are many better or worse methods (POS tagging, parsing, Named-Entity Recognition, Semantic Role Labeling, sentiment classification, question answering, dialogue systems, contextualized embeddings) which, in great summary, analyze big knowledge bases, e.g. a database of Twitter dialogues, find the most common words, and assign values to different expressions; the greater the value, e.g. a positive one, the more positive the sentence is judged to be in a given sense. Other manipulations of words, sounds, or characters are also used.
It is all one big pile of statistics that really can fool us a little.
The simplest example of how this works is predicting the completion of a sentence: "Hungry like ..." - "a wolf", "Once upon ..." - "a time", and so on. I know I am simplifying a lot, but Google search suggestions are the same statistics. Enter a word and look at the hints: it is not AI prompting the words, it is just statistics.
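The sentence-completion trick above can be reproduced with a tiny bigram model (a toy sketch trained on a few lines of text, nothing like a production system): count which word most often follows each word, then "predict" by looking up the counts.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    for line in corpus:
        words = line.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(follows, word):
    """Suggest the statistically most frequent next word, or None
    if the word was never seen. No meaning involved, only counts."""
    word = word.lower()
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]
```

Trained on a handful of phrases like "hungry like a wolf" and "once upon a time", it completes "like" with "a" and "a" with whatever followed "a" most often, which is all a search-box hint needs to do.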
If you want to know more about NLP from the technical side, read this article.
When it comes to voice assistants, it is even worse. I have the impression that there is a staff of people sitting there, putting three canned answers under each of the statistically most frequently asked questions.
This is the wrong way!
Identifying words in sentences, detecting context, and predicting statistical answers is by no means AI.
Real AI should work on completely different principles, in which NLP is not the goal but a means to the goal. A method of solving the problems described above to create real AI will be described in the next article.