Learning the normal behaviour of a healthcare organisation’s IT system, identifying malevolent activity and fixing problems before humans have even noticed them is the new frontier in cybersecurity, as artificial intelligence is applied to the fight against hackers.
Healthcare CIOs and CISOs should recognise that AI can enhance technology’s capacity to identify malicious activity and attackers and to protect systems and data, healthcare cybersecurity experts claim. And AI does so in several different ways.
“Machine learning and artificial intelligence can be used to augment and replace traditional signature-based protections,” said Robert LaMagna-Reiter, senior director of information security at First National Technology Solutions.
“One area is security information and event management alerting, or anti-virus solutions.”
With the immense amount of data in healthcare, security personnel cannot efficiently sift through every event or alert, whether legitimate or a false positive. Machine learning and AI solve this problem by looking at behaviour rather than signatures, as well as taking into account multiple data points from a network, LaMagna-Reiter said.
“By acting on behaviour and expected actions versus outdated or unknown signatures, the systems can take immediate actions on threats instead of alerting after the fact.”
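The behaviour-based approach LaMagna-Reiter describes can be illustrated with a minimal sketch: instead of matching events against a list of known-bad signatures, each host is compared against its own historical baseline, and only statistically unusual activity is escalated. The host names, event counts and threshold below are illustrative, not drawn from any real product.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag hosts whose latest activity deviates from their own
    historical baseline, rather than matching a known signature.

    baseline: dict mapping host -> list of past hourly event counts
    current:  dict mapping host -> latest hourly event count
    Returns the set of hosts whose latest count sits more than
    `threshold` standard deviations above their baseline mean.
    """
    flagged = set()
    for host, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on a flat baseline
        z = (current.get(host, 0) - mu) / sigma
        if z > threshold:
            flagged.add(host)
    return flagged

# A workstation behaving as usual is ignored; one suddenly generating
# ten times its normal event volume is flagged, signature or no signature.
baseline = {"ward-pc-01": [10, 12, 11, 9, 10], "mri-ws-02": [5, 6, 5, 7, 6]}
current = {"ward-pc-01": 11, "mri-ws-02": 60}
print(flag_anomalies(baseline, current))  # → {'mri-ws-02'}
```

A real deployment would use far richer features than event counts, but the principle is the same: the model of “normal” is learned per asset, so previously unseen attacks can still stand out.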
Artificial intelligence can also assist with “self-healing” or “self-correcting” actions.
“For example, if an antivirus or next-generation firewall system incorporates AI or behavioural monitoring information, assets with abnormal behaviour – signs of infection, abnormal traffic, anomalies – can automatically be placed in a quarantined group, removed from network access,” he said.
“Additionally, AI can be used to take vulnerability scan results and exploit information to move assets to a safe-zone to prevent infection, or apply different security policies in an attempt to virtually patch devices before an official patch is released.”
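The two self-healing actions described in the quotes above, quarantining assets that show signs of infection and moving vulnerable assets into a safe zone under a stricter “virtual patch” policy, can be sketched as a simple policy loop. The indicator names, group names and `Asset` structure are hypothetical placeholders for whatever telemetry an AV, next-generation firewall or vulnerability scanner would actually supply.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    signs: set = field(default_factory=set)  # behavioural indicators from monitoring
    vulnerable: bool = False                 # from vulnerability scan results
    group: str = "production"                # current network group
    policy: str = "standard"                 # current security policy

# Illustrative behavioural triggers, mirroring the signs named in the quote.
QUARANTINE_TRIGGERS = {"infection", "abnormal_traffic", "anomaly"}

def enforce(assets):
    """Apply the self-healing rules: quarantine anything behaving
    abnormally; pre-emptively isolate known-vulnerable assets under a
    virtual-patch policy until an official patch lands."""
    for asset in assets:
        if asset.signs & QUARANTINE_TRIGGERS:
            asset.group = "quarantine"    # removed from network access
        elif asset.vulnerable:
            asset.group = "safe-zone"     # isolated before infection occurs
            asset.policy = "virtual-patch"
    return assets
```

In practice the group change would be an API call to a NAC or firewall rather than a field assignment, but the decision logic is the part AI informs.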
If abnormal activity is observed, AI can wipe the activity and all preceding actions from a machine before anything executes.
Applying artificial intelligence is an ongoing process, in which a system continually trains on and identifies patterns of behaviour, and learns to discriminate between those considered normal and those that require attention or action, said Rafael Zubairov, a security expert at DataArt.
“For this, the machine can use a variety of available data sources, such as network activity, errors or denial of access to data, log files, and many more,” Zubairov said.
“Continuous interaction with a person and information gathering after deep analysis allow systems to self-improve and avoid future problems.”
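Zubairov’s two points, fusing several data sources into one judgement and folding analyst feedback back into the model, can be sketched in a few lines. The source names, weights and sliding-window size below are illustrative assumptions, not a description of any specific product.

```python
def anomaly_score(signals, weights=None):
    """Combine normalised signals (0..1) from several data sources,
    such as network activity, access-denied errors and log-file
    anomalies, into a single score. Source names and weights are
    purely illustrative."""
    weights = weights or {"network": 0.5, "access_denied": 0.3, "logs": 0.2}
    return sum(w * min(max(signals.get(name, 0.0), 0.0), 1.0)
               for name, w in weights.items())

def refine_baseline(history, observation, confirmed_benign, window=100):
    """After deep analysis, a human labels the observation. Only
    confirmed-benign observations are folded back into the baseline,
    so the system self-improves without learning from attacks."""
    if confirmed_benign:
        history.append(observation)
        if len(history) > window:
            history.pop(0)  # sliding window keeps the baseline current
    return history
```

The feedback step is the “continuous interaction with a person” from the quote: the model only absorbs behaviour a human has vetted, which keeps attackers from quietly retraining it.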
But successful use of artificial intelligence in healthcare requires a top-down approach that includes an executive in the know, according to LaMagna-Reiter.
“An organisation must implement a defence-in-depth, multi-layer security program and have an executive-sponsored information security function in order to fully realise the benefits of implementing machine learning and AI,” he said.