Keys to Better User Training

It is widely documented that users are the biggest vulnerability in our cybersecurity ecosystem.  Technical solutions and policies are foundational and necessary, but a single careless user or a deliberate shadow IT practitioner can easily expose the business to serious threats.  As a result, companies are increasingly looking for effective methods to train their employees to mitigate user risk.  I see three keys to user training, and all of them must be addressed at the executive leadership level.

First, we need to make sure we have the right strategic business objective for the training.  Security professionals and curriculum developers are good at writing courses to cover everything from two-factor authentication to proper use of file-sharing services.  But all of that training will fall on at least semi-deaf ears if you do not get employee buy-in on the importance of standards of behavior on the network.  Employees from the most junior level to the C-suite must believe that information security is critical to the success of the company, that there will necessarily be tradeoffs between convenience and security, and that their personal future with the company is tied to adherence to security policies and standards.

Once you convince employees that their future is tied to their behavior on the network, you need to conduct effective training.  Training once during onboarding, or even annually, is not sufficient.  Nor can it be the same old computer-based slide show if you expect people to be engaged and get something out of it.  The threat changes daily, and so does the technology in use by the company.  Short, frequent, relevant training that is designed to engage users is the key to success.  Unfortunately, few training programs meet these criteria.

The second key to success is making sure there are processes in place to validate that the training worked.  All of us have been subjected to mind-numbing slide show presentations spiced up with cute memes and forced on us with deadlines right around the holiday break.  Training delivery clearly needs to be improved, but we cannot stop there.  We have to test the validity of the training.  Phishing exercises, scans for unauthorized USB devices, and automated logging of file sharing and selected web activity are all ways to see if the training is effective.  Additionally, we need regular incident response exercises that touch the whole company, not just IT and security.  And then we need to close the loop.  When we discover individual violations, we should determine whether there was a training problem and, if so, adjust the training.  From the exercise perspective, make sure we capture lessons learned, not just lessons “observed,” and make the appropriate adjustments to the playbooks.
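As one illustration of closing that loop, here is a minimal sketch that turns phishing-exercise results into per-department click rates and flags groups for refresher training.  The CSV file name and its "department"/"clicked" columns are hypothetical, not any real product's export format.

```python
# Minimal sketch: close the training loop by computing phishing click
# rates per department from a hypothetical CSV export.  The file name
# and the "department"/"clicked" columns are assumptions for illustration.
import csv
from collections import defaultdict

def click_rates(path):
    clicks = defaultdict(int)
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            dept = row["department"]
            totals[dept] += 1
            if row["clicked"].strip().lower() == "true":
                clicks[dept] += 1
    return {dept: clicks[dept] / totals[dept] for dept in totals}

if __name__ == "__main__":
    # Departments above a 5% click rate get flagged for a short,
    # targeted refresher rather than another annual slide show.
    rates = click_rates("phishing_results.csv")
    for dept, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        flag = "  <- schedule refresher training" if rate > 0.05 else ""
        print(f"{dept}: {rate:.1%}{flag}")
```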

The third key to better training is accountability.  In every company, there are standards of behavior pertaining to key business processes.  Some are regulatory, others are safety-related, and many are simply designed to ensure the business is protected and can grow.  In all of those areas, we ensure people are trained and qualified to do their jobs, and if they fall short we hold them accountable.  We need to treat information security the same way.  If we have trained you on the policies and taught you to use the technology, then you must be held accountable for doing your part.  Without accountability throughout the management chain, this all becomes a problem for the CIO and CISO, and they cannot close the user vulnerability gap alone.

Make sure employees internalize the strategic rationale for security training, build a strong program that both trains and verifies, and, in the end, enforce a level of accountability for information security that fits your company culture.

AI-enabled anomaly detection: a people, process, technology challenge

There is a lot of talk about applying artificial intelligence (AI) to the challenge of cybersecurity and our company, IronNet Cybersecurity, is one of many attempting to do so.  I have found over the last two years that you must have a well-defined linkage between people, process and technology to have any chance of creating value.

Before you read the rest of this article, you may want to check out my earlier post on the challenge of moving from anomalies to alerts: Finding anomalies is easy, deriving alerts is hard.

People.  I believe you need three specific groups of people to work this problem.  The first group is hardware and software engineers who are experts at capturing very large data sets at line speeds (10Gbps+).  They must be able to parse the data and, in near real time, make it available to the second group of people, the data scientists.  Not all data scientists are created equal.  They all know the same math, but how they apply the math to the data set is what creates the specialties.  To solve the security problem, you need data scientists who can apply their science/art to network flow data.  This is a different problem than delivering ads at click speed or electronic trading.  The third group you need is the hunters.  These are operators who are highly skilled in both defense and offense and who really understand what it means to “hunt.”

Process.  The process begins with the HW/SW engineers collecting full network flow data and sending it to an analytic engine.  The analytic engine hosts the algorithms created by the data scientists to identify anomalies in the data.  The first challenge to overcome is that network flow data is, almost by definition, anomalous.  The second hurdle is that the algorithms must be informed by some sense of threat intelligence, so the math is targeted at finding the anomalies most likely to indicate the presence of malicious activity.  The third step in the process is to present the output of the algorithms to the hunters, who are going to use their experience, intuition, and understanding of threat intelligence to let the data scientists know what is useful and what is not.  The output of this process may be that the data scientists need to change features and parameters in the algorithms, or there may be a requirement for the engineers to collect different data or to process the data in a different way to produce useful results.  Success will come from a deliberate closed-loop process that produces a metric-driven, interactive relationship between the three groups of people.
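To make that loop concrete, here is a minimal sketch of the three roles interacting in code: a simplified flow record, a baseline anomaly detector, and a feedback step driven by hunter verdicts.  The flow schema, the z-score detector, and the threshold adjustment are illustrative assumptions, not a description of IronNet's actual pipeline.

```python
# Minimal sketch of the closed loop: engineers supply flow records, a
# data-science model scores them, and hunter verdicts feed back into the
# model.  The schema, detector, and feedback rule are assumptions.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Flow:                 # simplified stand-in for a parsed flow record
    src: str
    dst: str
    bytes_out: int

class ZScoreDetector:
    """Flags flows whose outbound byte count deviates from the baseline."""
    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold

    def anomalies(self, flows):
        mu = mean(f.bytes_out for f in flows)
        sigma = pstdev(f.bytes_out for f in flows) or 1.0
        return [(f, abs(f.bytes_out - mu) / sigma)
                for f in flows
                if abs(f.bytes_out - mu) / sigma > self.threshold]

    def apply_hunter_feedback(self, benign_fraction: float):
        # Close the loop: if hunters judge too many hits benign, the
        # threshold (or, in practice, the feature set) needs adjusting.
        if benign_fraction > 0.1:
            self.threshold *= 1.1

flows = [Flow("10.0.0.2", "10.0.0.9", 4_000)] * 50 + [Flow("10.0.0.7", "203.0.113.5", 9_000_000)]
detector = ZScoreDetector()
for flow, score in detector.anomalies(flows):
    print(f"anomaly: {flow.src} -> {flow.dst}, z={score:.1f}")
detector.apply_hunter_feedback(benign_fraction=0.2)  # hunters judged 20% of hits benign
```

In a real deployment, each group owns one part of this loop: the engineers feed the flow records, the data scientists own the detector, and the hunters supply the verdicts that drive the adjustment.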

Technology.  There is a lot of technology required to execute the process I have described.  Much of it is well known in terms of network engineering and data science.  What has not been solved is the ability to create a 1-to-n list of alerts such that the top alert is more important than the second alert, and so on down the list.  At the same time, the list must contain a very small number of benign events, so-called “false positives”; less than 0.1% would probably be a good target.  Getting to the 1-to-n list requires the application of AI.  A human would create the 1-to-n list by examining the output of the data algorithms, putting the output in the context of the network to prioritize critical issues, and applying experience and intuition to focus on the entity at the highest risk of being involved in a compromise.  Humans cannot do this at speed given the volume of network flow, which is why we need machines to take on the task.  The trick is getting the machines to emulate the intelligence of humans, and that is where AI comes in.
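As a sketch of what that might look like, the following code ranks alerts the way the paragraph describes a human would: by blending the raw anomaly score with network context and threat intelligence.  The weights and the asset-criticality map are assumptions for illustration, not a claim about how any product computes its list.

```python
# Sketch of a 1-to-n alert list: blend the detector's anomaly score with
# network context (asset criticality, threat-intel match), as an analyst
# would.  The weights and criticality map are illustrative assumptions.
ASSET_CRITICALITY = {"domain-controller": 1.0, "file-server": 0.7, "workstation": 0.3}

def rank_alerts(alerts):
    """Each alert: anomaly_score in [0, 1], asset_type, intel_match flag."""
    def priority(alert):
        context = ASSET_CRITICALITY.get(alert["asset_type"], 0.1)
        intel = 1.0 if alert["intel_match"] else 0.0
        return 0.5 * alert["anomaly_score"] + 0.3 * context + 0.2 * intel
    return sorted(alerts, key=priority, reverse=True)

alerts = [
    {"anomaly_score": 0.9, "asset_type": "workstation", "intel_match": False},
    {"anomaly_score": 0.6, "asset_type": "domain-controller", "intel_match": True},
]
for rank, alert in enumerate(rank_alerts(alerts), start=1):
    print(rank, alert)
```

Note that in this toy example the domain controller outranks the workstation despite a lower raw anomaly score, which is exactly the kind of contextual judgment described above.  In a real system the weights themselves would be learned from hunter feedback rather than hand-tuned, and that is where the AI comes in.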

If you like this article, please share it.  Also check out my website, www.thecyberspeaker.com, and my Facebook page, https://www.facebook.com/thecyberspeaker.