The criminal face of AI: the lesser threats.

In our last edition we published an article in this series examining the ugly side of artificial intelligence: the criminal face of this revolutionary technology, which could do real harm in the days to come. The list of threats begins with "burglar robots", which are frightening but easier to contain than other, far more serious ones.

As we pointed out in our last article, burglar robots that invite themselves into an apartment are ultimately not so dangerous, because they can be outmaneuvered and, in any case, cannot invade a large number of apartments at a time.

But compare them with robots that can spread false information to destroy the image of famous people, and we see why the latter sit at the top of the list of the most dangerous crimes that artificial intelligence makes possible.

Fake videos used to impersonate a person and ruin their reputation; the hacking of autonomous cars so that they can be used as weapons of war; tailor-made phishing to break into secure data and install spyware at the same time; attacks on AI-controlled systems that can shut down critical infrastructure, such as cutting the power or disabling bank ATMs…; large-scale blackmail campaigns that send personalized threat messages; and finally fake news written by artificial intelligence, meaning propaganda articles crafted to appear to draw on reliable sources. These uses of artificial intelligence are considered the most dangerous and are classified in the category of very serious threats.

In this article we will look at the threats classified as less serious. That does not mean their capacity for harm is negligible.

Military robots. This refers to taking control of military robots or weapons to commit crimes. It is a very dangerous threat in itself, but difficult to carry out, because military equipment remains hard to reach: military stock is generally well guarded.

Scam. This activity consists of fraudulently selling services that supposedly use artificial intelligence. There are many scammers in the industry. These scammers have successfully sold fake technologies to some of the most famous companies, and have even managed to sell the same tools to governments and state armies.

Data corruption. The Ukrainian army did not use artificial intelligence to deceive the surveillance of the Crimea bridge during its bombardment; it simply wrapped the explosives in plastic materials that escaped the notice of the Russian controllers. But data corruption makes it possible to deliberately modify or introduce false data in order to instill specific biases. This technique can, for example, make a detector insensitive to weapons, or push an algorithm to invest in a particular market.
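The mechanism can be illustrated with a deliberately simplified sketch (all data, labels and function names here are hypothetical): a toy "weapon detector" built as a nearest-centroid classifier on one-dimensional features. An attacker who relabels just a few training samples drags the decision boundary far enough that a weapon slips through as safe.

```python
def centroid(values):
    """Mean of a list of numbers."""
    return sum(values) / len(values)

def train(samples):
    """Nearest-centroid classifier on 1-D features.

    samples: list of (feature, label) pairs with label "safe" or "weapon".
    Returns the decision threshold halfway between the two class centroids.
    """
    safe = [x for x, y in samples if y == "safe"]
    weapon = [x for x, y in samples if y == "weapon"]
    return (centroid(safe) + centroid(weapon)) / 2

def classify(threshold, x):
    """Anything above the threshold is flagged as a weapon."""
    return "weapon" if x > threshold else "safe"

# Clean training data: "weapon" samples cluster at high feature values.
clean = [(0.1, "safe"), (0.2, "safe"), (0.3, "safe"),
         (0.8, "weapon"), (0.9, "weapon"), (1.0, "weapon")]

# Poisoned data: the attacker injects high-valued samples labeled "safe",
# dragging the "safe" centroid upward and raising the threshold.
poisoned = clean + [(0.85, "safe"), (0.9, "safe"), (0.95, "safe")]

t_clean = train(clean)        # threshold near 0.55
t_poisoned = train(poisoned)  # threshold pushed up to about 0.72

print(classify(t_clean, 0.7))     # "weapon": detected
print(classify(t_poisoned, 0.7))  # "safe": the detector is now blind
```

Real data-poisoning attacks target far larger models, but the principle is the same: a small fraction of deliberately mislabeled training data is enough to move a learned boundary in the attacker's favor.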

Learning-based cyberattack. This method is used to launch attacks that are both targeted and massive. Thanks to artificial intelligence, a learning-based cyberattack can probe the weaknesses of systems before launching several simultaneous attacks.

Autonomous attack drones. This involves hijacking autonomous drones. Diverting them is one thing; using them is another. They are diverted in order to be used against a target. These drones can be particularly deadly when they act en masse.

Denial of access. This makes it possible to destroy, or to bar users from accessing, a financial service, a job, a public service or a social activity. Even if it is not profitable in itself, this technique can be used for blackmail.

Face recognition. This technique makes it possible to create fake identity photos to gain access to smartphones, surveillance cameras, passenger checks… To do this, the attacker must fool facial-recognition systems, using artificial intelligence of course.

Manipulation of financial markets. The researchers may place it in the category of less serious threats, but that is probably not the view of financial-market traders and bankers, who consider this technique a major threat to the markets. Using artificial intelligence, attackers manage to corrupt trading algorithms to harm competitors, artificially lower or raise a stock's value, trigger a financial crash…