Artificial intelligence and expert predictions
It used to be trendy to worry about super-intelligent machines taking over the world. But 2018 showed that AI is capable of a different kind of damage long before that happens. Most modern algorithms perform well at perception tasks such as image classification and speech processing, but AI still has a long way to go before it operates at a level comparable to the human brain.
Moreover, 2018 showed that even the most advanced AI systems can fail, and we do not know the exact long-term consequences of using these technologies.
British physicist Stephen Hawking often spoke about the development of artificial intelligence and the danger it poses to the future of humankind.
In April 2017, speaking via video link at the Global Mobile Internet Conference in Beijing, Hawking warned: “The development of artificial intelligence could be either the most positive or the most terrible factor for mankind. We must be aware of the danger it represents.”
As the scientist said in an interview at the end of November 2017, his main concern was that AI could replace people altogether. According to Hawking, humans could create artificial intelligence so powerful that it becomes extremely good at achieving its own goals. At some point, the goals of AI and the goals of people might diverge, with unknown consequences.
Accidents with driverless cars
During the investigation of the fatal accident involving an Uber self-driving car, experts found errors in the car's software that could have been fixed beforehand. This underlines how much companies rush to bring products to market without proper testing procedures.
The greatest success so far belongs to Waymo, an Alphabet subsidiary, which launched a driverless taxi service last year.
What to expect: regulators in the United States and other countries have taken a passive stance for fear of stifling innovation, while hinting that existing safety rules may be relaxed. Drivers and pedestrians are not very enthusiastic about the new types of cars, which could lead to serious tensions if another accident happens in the future.
Political manipulation
Last March, Facebook's influence on American elections was actively discussed. The discussion clearly showed how people can be manipulated through social media tools.
During the Senate hearings, Facebook CEO Mark Zuckerberg said that AI can be trained to block harmful content, but that current systems are not yet capable of doing so reliably.
What to expect: Zuckerberg's claims will be tested during elections in two of the largest African countries, South Africa and Nigeria. Preparations for the 2020 presidential election in the United States may also spur new AI-powered disinformation tools, including malicious chatbots.
Peace Algorithms
Last year, a movement for peaceful AI formed when Google employees learned that their company was supplying the United States Air Force with technology for analyzing drone footage. Workers feared that this cooperation could be a fatal step toward autonomous, lethal drone strikes. In response to the protests, Google wound down the project and published an ethical code for AI.
Scientists and large companies have spoken out strongly against autonomous weapons. Nevertheless, military use of AI is only gaining momentum, and a variety of companies have developed interests in the field.
What to expect: despite the Pentagon's growing spending on AI, the UN is expected to take up a resolution banning autonomous weapon systems.
Surveillance
AI's ability to recognize faces has led many countries to deploy surveillance systems. Face recognition also lets you unlock your phone and automatically tag users in photos on social networks.
This is a powerful tool that can be used for good as well as for ill. In some countries, China in particular, face recognition is actively used by the police, and Amazon sells the technology to the US Immigration Service and other law enforcement agencies.
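To make the matching step behind phone unlocking and photo tagging concrete, here is a minimal sketch using the open-source Python face_recognition library. The file names are hypothetical, and none of the companies mentioned above publish their actual code; this is purely an illustration of the general technique.

```python
# Minimal face-matching sketch using the open-source face_recognition
# library (https://github.com/ageitgey/face_recognition).
# Illustrative only: file names are hypothetical placeholders.
import face_recognition

# Encode the enrolled face (e.g. the phone owner's reference photo).
enrolled_image = face_recognition.load_image_file("owner.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled_image)[0]

# Encode whatever faces the camera sees right now.
probe_image = face_recognition.load_image_file("camera_frame.jpg")
probe_encodings = face_recognition.face_encodings(probe_image)

for encoding in probe_encodings:
    # compare_faces thresholds the distance between 128-d encodings;
    # a smaller tolerance means a stricter match.
    match = face_recognition.compare_faces(
        [enrolled_encoding], encoding, tolerance=0.6
    )[0]
    print("unlock" if match else "stay locked")
```

The same compare-against-known-encodings step, scaled up to millions of enrolled faces, is what turns a convenience feature into a surveillance capability.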
What to expect: face recognition will appear in cars and webcams, where it could be used to verify identity and even to recognize emotions. Authorities are expected to take steps toward introducing preliminary regulation.
Fake videos
The rapid spread of deepfake videos last year showed how easily AI can fabricate footage: pornography, mashups, and even smear campaigns. Nvidia, the graphics processor company, has also demonstrated how easily highly believable fake content can be generated.
What to expect: as deepfake technology matures, more people are likely to fall into its trap. DARPA will test new fake-video detection methods, but since those detectors rely on AI as well, this will be a cat-and-mouse game that only deepens the growing doubt about what is real.
Algorithmic discrimination
Last year, bias was found in a variety of commercial tools. According to a study conducted at the MIT Media Lab, computer vision algorithms trained on skewed data recognize women and dark-skinned people less accurately than white men.
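The kind of audit behind such findings is simple to express in code: evaluate the same classifier separately for each demographic group and compare the accuracies. Below is an illustrative sketch with hypothetical data; it is not the MIT Media Lab's code.

```python
# Illustrative sketch: disaggregating classifier accuracy by group.
# The results list here is hypothetical toy data; a real audit would
# run a commercial classifier over a demographically balanced dataset.
from collections import defaultdict

# (group, true_label, predicted_label) triples from an evaluation set.
results = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
```

An overall accuracy number can look excellent while hiding exactly this kind of per-group gap, which is why disaggregated evaluation has become the standard way to surface it.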
The bias problem is also linked to weak diversity in the AI field: women hold at best 30% of industry jobs and less than 25% of teaching positions at leading universities.
What to expect: biased datasets will be scrutinized and rebalanced in an effort to establish parity across all groups of people. One of the major international machine learning conferences will be held in Ethiopia in 2020, because African scientists who study bias in data might otherwise face visa problems travelling to the venue.