The AI that Learned How to Cheat and Hide Data from Its Creators

Published in AIaaS

  • AI was trained to transform aerial images into street maps and then back again
  • Details omitted from the generated map reappeared when it was converted back
  • It used steganography to 'hide' data in the image and recover the original photo

New research from Stanford and Google suggests that artificial intelligence software may be getting too clever. The neural network, called CycleGAN, was trained to transform aerial images into street maps, then back into aerial images. Researchers were surprised to discover that details omitted from the generated map reappeared when they told the AI to reconstruct the original image.

For example, skylights on a roof that were missing from the street map suddenly reappeared when the AI reconstructed the original aerial photograph, according to TechCrunch.

'CycleGAN learns to "hide" information about a source image into the images it generates in a nearly imperceptible, high-frequency signal,' the study states. 'This trick ensures that the generator can recover the original sample and thus satisfy the cyclic consistency requirement, while the generated image remains realistic.'
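
The 'cyclic consistency requirement' in that quote is the part of CycleGAN's training objective that rewards a round trip through both generators for reproducing the input. As a rough illustration, here is a hypothetical PyTorch sketch of that loss, with single-layer stand-ins for the real generator networks (not the study's actual code):

```python
import torch
import torch.nn as nn

# Single-layer stand-ins; the real CycleGAN generators are deep ResNets.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # aerial photo -> street map
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)   # street map -> aerial photo

l1 = nn.L1Loss()

def cycle_loss(aerial, street_map, lam=10.0):
    # Forward cycle: aerial -> map -> aerial should give back the photo.
    forward = l1(F(G(aerial)), aerial)
    # Backward cycle: map -> aerial -> map should give back the map.
    backward = l1(G(F(street_map)), street_map)
    return lam * (forward + backward)

x = torch.rand(1, 3, 64, 64)   # toy aerial image
y = torch.rand(1, 3, 64, 64)   # toy street map
print(cycle_loss(x, y))
```

Minimising this loss only rewards the round trip, not an honest translation — and that gap is the loophole the network exploited.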

The AI had worked out how to reproduce those details by encoding them in subtle changes of color that the human eye can't detect but a computer can read, TechCrunch noted. In effect, it didn't learn how to create the map from scratch; it smuggled the features of the original image into the noise patterns of the other.

The researchers say the AI ended up being a 'master of steganography', the practice of encoding hidden data in images. CycleGAN encoded details of the original aerial photograph into the street map it generated, allowing it to recover the photograph with precise accuracy later. In other words, the AI used steganography as a shortcut, avoiding having to actually learn how to perform the requested task, TechCrunch noted.
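
CycleGAN's encoding is learned and spread through subtle colour variations, but the basic trick it rediscovered can be demonstrated by hand. Below is a hypothetical numpy sketch of classic least-significant-bit steganography, offered as an analogy rather than the network's actual encoding: one image is hidden in the lowest bit of another, changing each pixel by at most one level out of 255:

```python
import numpy as np

def embed(cover: np.ndarray, secret: np.ndarray) -> np.ndarray:
    # Overwrite each cover pixel's least significant bit with the
    # corresponding secret pixel's most significant bit.
    return (cover & 0xFE) | (secret >> 7)

def extract(stego: np.ndarray) -> np.ndarray:
    # Read the hidden bits back out as a coarse black-and-white image.
    return (stego & 0x01) * 255

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in street map
secret = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in aerial photo

stego = embed(cover, secret)
# No pixel moves by more than 1 out of 255 -- invisible to the eye.
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1
recovered = extract(stego)
```

The difference is that nobody programmed CycleGAN to do this: it found an equivalent hidden channel on its own, because that was the easiest way to satisfy its training objective.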

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognise patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it massive amounts of information.

Practical applications of ANNs include Google's language translation services, Facebook's facial recognition software and Snapchat's image-altering live filters. The process of inputting this training data can be extremely time-consuming, and is limited to one type of knowledge.
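
As a generic illustration of this kind of supervised training (a scikit-learn toy example, not any system mentioned above), a small network can be fed thousands of labelled images until it learns to recognise the pattern:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Thousands of labelled 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small ANN: one hidden layer of 32 units.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)          # 'teach' the network with labelled input
print(net.score(X_test, y_test))   # accuracy on images it has never seen
```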

A newer breed of ANNs, known as generative adversarial networks (GANs), pits the wits of two AI bots against each other, allowing them to learn from each other. This approach is designed to speed up the learning process, as well as to refine the output that AI systems create.
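
A minimal, hypothetical PyTorch sketch of that adversarial setup: a generator fabricates samples while a discriminator judges them, and each bot's mistakes become the other's training signal:

```python
import torch
import torch.nn as nn

# Two tiny adversaries: G fabricates 2-D points, D judges real versus fake.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 2) + 3.0            # stand-in 'real' data cluster
noise = torch.randn(64, 16)

# One round of the contest; real training repeats this thousands of times.
# Step 1: the discriminator learns to separate real data from forgeries.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Step 2: the generator learns to fool the updated discriminator.
loss_g = bce(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```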


How AI has Created an Arms Race in the Battle Against Cybercrime

Published in AIaaS

The growing capabilities of artificial intelligence are triggering a battle across the cyber security fence – and organisations must act now to be on the right side of it

Artificial intelligence (AI) has been increasing in sophistication for some years, finding its place in our everyday lives with ever-growing pace and force. As businesses and governments begin to use AI, the potential for its application in cyber security is becoming more apparent.

What’s more, hackers and businesses are going head-to-head – with hackers now able to develop more sophisticated threats, and businesses looking to use AI for threat detection, prevention and remedy.

When it comes to cyber security, businesses need to act now to tighten up cyber defences. With large-scale security breaches only increasing in number over recent years, organisations both big and small should consider investing in AI systems designed to bolster their defences.

Over the next year alone, we’ll see a rise in AI systems that can perform several tasks, including continuously re-writing encryption keys so that a stolen key quickly becomes useless to hackers outside an organisation’s walls.
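
The rotation mechanic itself – leaving aside the AI layer, which would decide when and what to rotate – can be sketched with Python’s cryptography package; the helper below is hypothetical:

```python
from cryptography.fernet import Fernet

def rotate(ciphertext: bytes, old_key: bytes) -> tuple[bytes, bytes]:
    # Decrypt under the retiring key, re-encrypt under a brand-new one.
    new_key = Fernet.generate_key()
    plaintext = Fernet(old_key).decrypt(ciphertext)
    return Fernet(new_key).encrypt(plaintext), new_key

key = Fernet.generate_key()
blob = Fernet(key).encrypt(b"sensitive record")

# In production a scheduler (or the hypothesised AI policy) would trigger
# this; each rotation renders any previously stolen key useless.
for _ in range(3):
    blob, key = rotate(blob, key)
```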

These more practical uses of AI allow organisations to anticipate issues before they arise through threat analysis, threat detection and threat modelling. For example, if a human manually checked systems for signs of outside breaches on a monthly basis, a full analysis could take weeks. Using AI not only adds an extra layer of protection, but also allows organisations to react to a breach much more quickly.
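
One common concrete form of such automated threat detection (a generic sketch, not any particular vendor’s product) is unsupervised anomaly detection over activity features; the feature columns below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per event: bytes transferred, failed logins, off-hours accesses.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500.0, 1.0, 0.0],
                            scale=[100.0, 1.0, 0.5],
                            size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)            # learn what 'normal' looks like

# A huge transfer with a burst of failed logins stands out immediately.
suspicious = np.array([[9000.0, 20.0, 5.0]])
print(detector.predict(suspicious))     # -1 means 'anomaly'
```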

Hackers will up their AI game

Vulnerabilities, both in software and online, have historically been numerous, offering hackers plenty of opportunities without much need for AI. This will quickly change as AI improves and businesses close the gaps in their cyber defences.

It may not be long before the use of AI becomes the norm among hackers, providing them with more opportunities and avenues to access sensitive data. This technology could be used to scan the internet and software for vulnerabilities, as well as design attack strategies, and then launch them with minimal human error.

One current use of AI by cybercriminals is in phishing emails. By using data gathered about the target to craft phishing emails that replicate human mannerisms and content, these AI-powered attacks resonate with the recipient better than ever before. These tactics will make it harder for businesses and individuals to recognise when they’re being targeted.

Tackling insider threats

Of course, many threats to an organisation originate far closer to home. Insider threats have always been a cause for concern, but as AI systems grow in complexity, we are starting to see businesses tackle this with force.

AI can now help to detect breaks from normal employee behaviour. This technology could be used to discover employees who are accessing company information inappropriately, along with evidence of them transferring that information outside the organisation’s walls.
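
A hedged sketch of what such behavioural baselining might look like at its simplest: compare each day’s outbound data volume against that employee’s own history and flag sharp deviations (the threshold and feature are invented for illustration):

```python
import statistics
from collections import defaultdict

history = defaultdict(list)

def record(user: str, megabytes: float) -> bool:
    # Log a day's outbound transfer volume; return True if it deviates
    # sharply from this user's own established baseline.
    past = list(history[user])              # baseline = history before today
    history[user].append(megabytes)
    if len(past) < 5:
        return False                        # not enough history yet
    mean = statistics.mean(past)
    stdev = statistics.stdev(past) or 1.0
    return (megabytes - mean) / stdev > 3   # > 3 sigma above this user's norm

for day in [10, 12, 9, 11, 10, 2000]:       # a sudden 2 GB exfiltration
    alert = record("alice", day)
print(alert)                                 # True on the final day
```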

Taking this to a more invasive level, AI technology could be used to detect instances of corporate policy being breached by employees. Tasks as harmless as using USB storage can now be analysed for signs of malicious intent and corporate corruption.

Of course, exact sentiment and intent will be difficult for AI technology alone to establish. As a result, privacy laws will be key if organisations are to avoid breaches of employment law themselves.

Skills gap

Keeping the ball in the cyber security teams’ court will become increasingly hard in the coming years, and will require the full support and expertise of cyber security professionals and security-savvy organisations.

With the Centre for Cyber Safety and Education revealing that the world will face a shortfall of 1.8 million cyber security professionals by 2022, we are reaching a critical point where change is needed rapidly.

This is something that has been recognised by the government in recent months, with announcements made in the Budget demonstrating a commitment to address the skills shortage.

The introduction of T-Levels will aid the creation of the next generation of technology professionals, helping to fill the widening gap in provision, and part of this must focus on cyber security.

As the complexity of AI grows, businesses need to start thinking about how to incorporate this new technology into their cyber security strategies. Of course, not everyone is a target for such advanced AI attacks, and simple cyber hygiene remains an effective counter to many threats.

However, there is plenty of evidence that AI is becoming more available and affordable and so will become more prevalent. But if organisations are to truly take advantage, a combined effort is needed.

Not only must organisations invest in preventative AI, but the government must continue to back the development of the next generation of technology professionals. After all, there’s no use in having the technology without professionals who know how to use it.
