Keys to Better User Training

It is widely documented that users are the biggest vulnerability in our cybersecurity ecosystem. Technical solutions and policy are foundational and necessary, but a single careless user or a deliberate shadow IT practitioner can easily expose the business to serious threats. As a result, companies are increasingly looking for effective ways to train their employees to mitigate user risk. I see three keys to user training, and all of them must be addressed from the executive leadership level.

First, we need to make sure we have the right strategic business objective for the training. Security professionals and curriculum developers are good at writing courses that cover everything from two-factor authentication to proper use of file sharing services. But all of that training is going to fall on at least semi-deaf ears if you do not get employee buy-in on the importance of standards of behavior on the network. Employees from the most junior level to the C-suite must believe that information security is critical to the success of the company, that there will necessarily be tradeoffs between convenience and security, and that they can connect the dots between their personal future with the company and adherence to security policies and standards.

Once you convince employees that their future is tied to their behavior on the network, you need to conduct effective training. Once during onboarding, or even annually, is not sufficient. And it cannot be the same old computer-based slide show if you expect people to be engaged and get something out of it. The threat changes daily, and so does the technology in use by the company. Short, frequent, relevant training that is designed to engage the users is the key to success. Unfortunately, few training programs meet these criteria.

The second key to success is making sure there are processes in place to validate that the training worked. All of us have been subjected to mind-numbing slide show presentations spiced up with cute memes and forced on us with deadlines right around the holiday break. Training delivery clearly needs to be improved, but we cannot stop there. We have to test the effectiveness of the training. Phishing exercises, scans for unauthorized USB devices, and automated logging of file sharing and selected web activity are all ways to see if the training is working. Additionally, we need regular incident response exercises that touch the whole company, not just IT and security. And then we need to close the loop. When we discover individual violations, determine whether it was a training problem and, if so, adjust the training. From the exercise perspective, make sure we capture lessons learned, not just lessons “observed,” and make the appropriate adjustments to the playbooks.
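To make the “close the loop” idea concrete, here is a minimal sketch of how phishing-exercise results could be rolled up by department to show where the training is not sticking. The CSV layout and the 10% click-rate threshold are assumptions for illustration, not a prescription.

```python
# Minimal sketch: aggregate phishing-exercise results to see whether training is
# sticking. The CSV layout (department, clicked, reported) is hypothetical.
import csv
from collections import defaultdict

def summarize(results_csv: str, click_threshold: float = 0.10) -> dict:
    """Return per-department click and report rates, flagging weak spots."""
    stats = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
    with open(results_csv, newline="") as f:
        for row in csv.DictReader(f):
            s = stats[row["department"]]
            s["sent"] += 1
            s["clicked"] += int(row["clicked"])      # 1 = user clicked the lure
            s["reported"] += int(row["reported"])    # 1 = user reported the email

    summary = {}
    for dept, s in stats.items():
        click_rate = s["clicked"] / s["sent"]
        summary[dept] = {
            "click_rate": round(click_rate, 3),
            "report_rate": round(s["reported"] / s["sent"], 3),
            "needs_retraining": click_rate > click_threshold,
        }
    return summary

if __name__ == "__main__":
    for dept, result in summarize("phishing_results.csv").items():
        print(dept, result)
```

Feeding a summary like this back to the training team, rather than just counting who finished the slides, is what turns an exercise into a measurement.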

The third key to better training is accountability.  In every company there are standards of behavior pertaining to key business processes.  Some are regulatory, others are safety related and many are just designed to ensure the business is protected and can grow.  In all of those areas, we ensure people are trained and qualified to do their jobs, and if they fall short we hold them accountable.  We need to treat information security the same way.  If we have trained you on the policies and taught you to use the technology, then you must be held accountable for doing your part.  Without accountability throughout the management chain, this all becomes a problem for the CIO and CISO, and they cannot close the user vulnerability gap alone.

Make sure employees internalize the strategic rationale for security training, ensure that you have a good program by training and verifying, and in the end, enforce a level of accountability that fits your company culture when it comes to information security.

A Director’s Guide to Cybersecurity “Certification”

Boards get it: they need to exercise the same level of oversight regarding cybersecurity as they do with financial statements. The difference is that there is a well-developed set of standards governing financial accounting, with associated audits and examinations. The standards are not as well defined for cybersecurity, so it is important for directors to know exactly where their company stands in terms of adhering to an accepted framework and what the third-party certification and audit options are.

In 2014 the National Institute of Standards and Technology (NIST) released the Framework for Improving Critical Infrastructure Cybersecurity.  The framework has come to be known as just “NIST” when used by some in the context of “Oh, yeah, our company is NIST compliant.”  And so starts a trail of confusion regarding certification, compliance and standards pertaining to cybersecurity.

The NIST Framework for Improving Critical Infrastructure Cybersecurity is an excellent resource for organizing, planning and implementing cybersecurity controls in any environment, not just in a critical infrastructure company. For Directors, it is easy to grasp how the controls are organized around the five functions of Identify, Protect, Detect, Respond and Recover. It has other features that allow companies to tailor their controls to enterprise risk appetite and resource constraints, and to facilitate program management in the sense of “we are here now and we want to be there next year.” All good, but Directors should understand there is no such thing as “NIST certified” or “NIST compliant” in relation to the framework. The NIST framework is increasingly gaining adoption as the de facto “standard”; however, there is no third party that can come in and certify a company as meeting the “NIST standard.”

The NIST framework is an excellent tool to prepare the company to achieve certification against an information security standard, and a widely accepted international standard is ISO 27001. Published by the International Organization for Standardization, ISO 27001 defines the management processes and structures that must be in place to reduce the risk of information compromise. It also provides guidance on embedding the information security procedures within the larger risk management process of the company. ISO 27001 certifications are issued for three years and require periodic audits to ensure continued compliance. It is important to note that ISO 27001 certification indicates only that the company has satisfactorily designed and documented an Information Security Management System; it does not capture how well the system works. Certification must be performed by an accredited certifying body, and costs range from ~$20K to over $300K depending on how much consulting support the company needs to prepare for the certification, the auditor selected and the complexity of the company’s information environment.

In 2011 the American Institute of CPAs issued its Service Organization Control (SOC) reporting framework. Within that framework, SOC 2, Reporting on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy, is the examination relevant to cybersecurity. The SOC 2 examination is essentially an outside audit that verifies controls are in place and functioning. Depending on the nature of the business, the company should satisfy one or more of the five Trust Services Principles and Criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. SOC 2 examinations can cover a point in time or a period of time; if a company has a well-established information security program, one year is typical. SOC 2 examinations must be conducted by an accredited auditor and cost ~$30K and up depending on the auditor and the complexity of the examination.

Not every company requires an ISO 27001 certification and a SOC 2 examination. But every company should be following the NIST framework, or some other accepted framework, to derive its cybersecurity strategy. While there is no defined “standard,” a company that relies on the NIST framework to prepare for and achieve ISO 27001 certification, and then engages an auditor to conduct a SOC 2 examination to prove that the controls are in place and functioning, is on pretty solid ground for showing it has taken “reasonable measures” to protect critical information.

Directors who understand the difference between frameworks and certifications and can ask informed questions about why the company does or does not conduct an outside audit are in good standing when it comes to executing their oversight responsibilities regarding cyber risk.

Brett Williams with Peter Alexander

I had the opportunity to be interviewed by Peter Alexander on MSNBC to discuss Russian hacking of the U.S. election. It was a good opportunity for me to highlight several points that I think are vitally important. First, the Russians have figured out how to use cyber as part of their national security policy, and the U.S. needs to do the same. Second, some of the blame for this hack needs to be placed on the Clinton campaign for failing to realize very early on that cyber-risk was a strategically important consideration; that is a leadership failure. Third, there was nothing cosmic in this hack: simple phishing and URL-shortening techniques. Finally, we need to get out of the reactive mode with Russia. We cannot afford to spend months debating a response, whether the provocation is a hack or an activity in physical space.

New Year, Same Issue—Training to set a culture of cybersecurity.

Boards continue to be held accountable for cybersecurity failures (Shareholder Suit Against Wendy’s for Cyber Breach). It’s déjà vu all over again. As we go into 2017, it appears we will have to continue to beat the drum in favor of board-level education and training around cybersecurity. You would think they would all get it by now, but the surveys say otherwise.

Certainly, board education is critical, but it is just one part of the puzzle. To make a real dent in managing cyber-risk, the organizational culture must change. Everyone in the company must understand the threat and their role in prevention and recovery, and cyber-risk must be part of every business decision. A company whose culture has inculcated these characteristics is well on its way to effectively managing risk in cyberspace.

So what does it take to change the culture of the company? No matter what aspect of the culture you are trying to change, culture change has to be initiated, supported and implemented under the leadership of the CEO. If the CEO does not lead the culture change, it is not going to happen. Nowhere is this more important than in getting a company to adopt a cybersecurity culture. The CEO needs to aggressively pursue three specific training threads, and they must be synchronized, coordinated and mutually supporting.

The first level is user training. Surveys continue to show that user training is not well done, if it is done at all. The smaller the company, the less likely it is to invest in any type of user training. For those companies that do conduct user training, whether they are large or small, the majority accomplish it during the onboarding process, and recurring training is sporadic at best. Overall, users continue to find the training boring and irrelevant, and simple tests like phishing exercises show that it is not particularly effective. User training must occur regularly since the threat changes constantly and the user technology changes as well. Short, relevant bursts of training are more effective than long annual sessions. And senior executive participation and engagement is a must to show that the training is important for everyone in the company.

The second level of training is Board training. Some type of Board training is becoming more common, although not many boards invest the three to four hours necessary to get a solid foundation in cyber-risk oversight, including an appropriate level of technical education, so they can have an “adult” conversation with the CIO or the CISO. Even when a Board invests in the foundational training, it typically does not get the periodic refresher necessary to stay current. Educated Boards ask the right cyber-risk questions and, more importantly, they understand the answers. Like all education, this cannot be a one-time event. It must be a process of continual learning.

The third level of training, and one that is not typically accomplished, is training for the SVPs, VPs, directors and mid-level managers and supervisors that builds on and is synchronized with the Board training. Companies that do a pretty good job of user training and also invest time with their boards are probably still missing a critical link: training all the managers who work between the CEO and the line employee. Those business unit leaders, HR people, accountants, business development execs and others need to get a version of the Board education so they understand their responsibility in helping to establish a culture of cybersecurity. This mid-level manager training should address cyber-risk management at a level appropriate to their span of supervision. It should support the tone set at the top by the Board and CEO. If the leadership levels below the CEO do not understand and embrace their role in setting a cybersecurity culture, then culture change will never occur, no matter how hard the CEO works at it.

Tying these three levels of training together is best accomplished when the training is specifically designed top to bottom, ensuring that the message is consistent from the Board down to the most junior employee. Everyone has to understand their role in establishing and sustaining a culture that says cybersecurity is critical to our business success.

If you like this article, please share it. Also check out my website, www.thecyberspeaker.com and my Facebook page https://www.facebook.com/thecyberspeaker.

AI enabled anomaly detection—a people, process, technology challenge

There is a lot of talk about applying artificial intelligence (AI) to the challenge of cybersecurity and our company, IronNet Cybersecurity, is one of many attempting to do so.  I have found over the last two years that you must have a well-defined linkage between people, process and technology to have any chance of creating value.

Before you read the rest of this article, you may want to check out my earlier post on the challenge of moving from anomalies to alerts: Finding anomalies is easy, deriving alerts is hard.

People.  I believe you need three specific groups of people to work this problem.  The first group is hardware and software engineers who are expert at capturing very large data sets at line speeds (10 Gbps+).  They must be able to parse the data and, in near real time, make it available to the second group of people, the data scientists.  Not all data scientists are created equal.  They all know the same math, but it is how they apply the math to the data set that creates the specialties.  To solve the security problem, you need data scientists who can apply their science and art to network flow data.  This is a different problem than delivering ads at click speed or executing electronic trades.  The third group you need is the hunters.  These are operators who are highly skilled in both defense and offense and who really understand what it means to “hunt.”

Process.  The process begins with the HW/SW engineers collecting full network flow data and sending it to an analytic engine.  The analytic engine hosts the algorithms created by the data scientists to identify anomalies in the data.  The first challenge to overcome is that network flow data is almost by definition anomalous.  The second hurdle is that the algorithms must be informed by some sense of threat intelligence so the math is targeted at finding anomalies most likely to indicate the presence of malicious activity.  The third step in the process is to present the output of the algorithms to the hunters, who are going to use their experience, intuition and understanding of threat intelligence to let the data scientists know what is useful and what is not.  The output of this process may be that the data scientists need to change features and parameters in the algorithms, or there may be a requirement for the engineers to collect different data or to process the data in a different way to produce useful results.  Success will come from a deliberate closed-loop process that produces a metric-driven, interactive relationship between the three groups of people.
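To make the middle of that process concrete, here is a minimal sketch of the analytic-engine step: scoring flow records with an off-the-shelf anomaly detector and folding hunter feedback back in. The flow features, the model choice and the feedback mechanism are illustrative assumptions on my part, not a description of IronNet’s pipeline.

```python
# Minimal sketch of the "analytic engine" step: score network-flow records for
# anomalies with an off-the-shelf model. Feature names and the feedback loop
# are illustrative assumptions, not a vendor's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

FEATURES = ["bytes_out", "bytes_in", "duration_s", "dst_port", "pkts"]  # assumed flow fields

def score_flows(flows: np.ndarray, contamination: float = 0.001) -> np.ndarray:
    """Fit an isolation forest and return an anomaly score per flow (higher = more anomalous)."""
    model = IsolationForest(n_estimators=200, contamination=contamination, random_state=0)
    model.fit(flows)
    # decision_function returns larger values for normal points, so negate it
    return -model.decision_function(flows)

def apply_hunter_feedback(scores: np.ndarray, benign_idx: list[int]) -> np.ndarray:
    """Closed loop: suppress flows the hunters have already judged benign."""
    adjusted = scores.copy()
    adjusted[benign_idx] = adjusted.min()
    return adjusted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flows = rng.lognormal(mean=3.0, sigma=1.0, size=(10_000, len(FEATURES)))  # stand-in data
    scores = score_flows(flows)
    top = np.argsort(scores)[::-1][:10]            # candidate anomalies for the hunters
    scores = apply_hunter_feedback(scores, benign_idx=[int(top[0])])
    print("top anomalous flow indices:", np.argsort(scores)[::-1][:10])
```

The point of the sketch is the loop, not the model: the hunters’ judgments flow back and change what the engine surfaces next time.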

Technology.  There is a lot of technology required to execute the process I have described.  Much of it is well known in terms of network engineering and data science.  What has not been solved is the ability to create a 1-to-n list of alerts such that the top alert is more important than the second, and so on down the list.  At the same time, the list must contain a very small number of benign events, so-called “false positives”; less than 0.1% would probably be a good target.  Getting to the 1-to-n list requires the application of AI.  A human would create the 1-to-n list by examining the output of the data algorithms, putting that output in the context of the network to prioritize critical issues, and applying experience and intuition to focus on the entity at the highest risk of being involved in a compromise.  Humans cannot do this at speed given the volume of network flow data, which is why we need machines to take on the task.  The trick is getting the machines to emulate the intelligence of humans, and that is where AI comes in.
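As a rough illustration of the kind of prioritization a human would do, here is a minimal sketch that blends an anomaly score with asset value and threat-intelligence relevance to produce a short, ordered alert list. The weights and fields are assumptions; the whole challenge is getting machines to learn that blend rather than hard-coding it.

```python
# Minimal sketch of turning raw anomaly scores into a 1-to-n alert list:
# weight each anomaly by the value of the asset involved and by how well it
# matches current threat intelligence. Fields and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Anomaly:
    entity: str          # host or account the anomaly is attached to
    score: float         # output of the detection algorithms, 0..1
    asset_value: float   # 0..1, from an assumed asset inventory
    intel_match: float   # 0..1, similarity to known adversary tradecraft

def rank_alerts(anomalies: list[Anomaly], max_alerts: int = 20) -> list[Anomaly]:
    """Return a short, prioritized alert list instead of a flood of anomalies."""
    def priority(a: Anomaly) -> float:
        # simple weighted blend standing in for the human judgment described above
        return 0.5 * a.score + 0.3 * a.asset_value + 0.2 * a.intel_match
    return sorted(anomalies, key=priority, reverse=True)[:max_alerts]

if __name__ == "__main__":
    candidates = [
        Anomaly("hr-laptop-17", 0.91, 0.40, 0.10),
        Anomaly("domain-controller-01", 0.74, 0.95, 0.80),
        Anomaly("guest-wifi-client", 0.88, 0.05, 0.00),
    ]
    for a in rank_alerts(candidates):
        print(f"{a.entity:22s} score={a.score} value={a.asset_value} intel={a.intel_match}")
```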

If you like this article please share it. Also check out my website, www.thecyberspeaker.com and my Facebook page https://www.facebook.com/thecyberspeaker.

Kill the Archer, Part II — How do I get permission?

Defending yourself by catching arrows is not much fun. Yet in cyberspace we are hesitant to go after the archer.  Why is that?

If we are defending a piece of ground from air attack, we are not content to build hardened shelters, set up anti-aircraft guns and missiles, and then hope that we can effectively blunt the attack by absorbing the blows.  Instead, we send airplanes over the border to shoot down the enemy en route, we bomb the airfield so the runways cannot be used, and we attack the command and control system so the order to launch the raid cannot be issued.  None of these is an “offensive” action.  All of them, including attacking the airfield, are defensive actions.  We should do the same in cyberspace, but it is hard to get permission to attack in cyberspace even when the purpose is to defend ourselves.

I argue it is easier to get authority to drop a bomb on a building full of hackers than it is to get authority to conduct a cyber-attack against the same group.  That is because we have great confidence in our ability to conduct a very precise attack from the air.  We need the same confidence in cyberspace, and it needs to run up and down the line from the policy makers through the military chain of command.

If I drop a bomb on a building, I know what will happen to that building and what will not happen to the building next door.  In cyberspace, I cannot provide the same level of precision regarding either the desired effects or the potential for undesired effects.  In most cases I cannot guarantee that when I take out the power to the missile site, I will not affect the hospital attached to the same grid.

About 80 years ago, some Airmen at Maxwell Field in Montgomery, Alabama had the idea that we needed to be able to put a bomb in a pickle barrel.  It took another 50 years, but we eventually figured out how to put a bomb in a pickle barrel and to do so with very little chance of error.  (Note that we do not always aim at the right pickle barrel.)  Once we achieved that level of precision with the actual attack, we developed the Joint Munitions Effectiveness Manual (JMEM), “bugsplat” models and all sorts of other techniques so that we could predict with great accuracy what would happen to the pickle barrel we were aiming at and what would not happen to the surrounding barrels.  With repetition, we proved the validity of the models, and precision air attack, while not perfect, has in many cases become the weapon of choice.

If we want to have the same authority to conduct operations in cyberspace, we must demonstrate with the same level of precision that we can hit what we are aiming at, and that we can characterize the desired and undesired effects with the same level of fidelity.  Some say this is impossible: cyberspace is too complex, it changes too frequently, and despite the inherent logic of a domain built on 1s and 0s, we cannot predict the outcome.  I disagree.

Cyberspace is a manmade domain.  We can shape the domain in a way we cannot shape the air, land or water.  It is not going to be easy, but by leveraging advanced applications of machine learning and artificial intelligence we can get to a JMEM-like capability for cyberspace that normalizes the integration of cyberspace operations with operations in the other domains.

We still have to wrestle with a number of issues, such as the ubiquitous and interconnected nature of the domain, which makes fratricide and collateral damage a different problem than it is in physical space.  We still have to sort out issues of borders: if I change a one to a zero on your hard drive, have I violated your sovereignty?  And finally, we have thousands of years of experience attacking each other kinetically, so we can estimate the reaction of the target; we do not have a good understanding of how an enemy will react to a significant cyber attack.  All of those issues must be addressed, but first we have to prove that we can deliver precision in cyberspace like we can through the air.

Please share this article and visit my website, www.thecyberspeaker.com and my Facebook page, facebook.com/thecyberspeaker.

Somebody has to kill the Archer!

President-elect Trump has stated that he is going to task the Joint Chiefs to come up with a plan to defend U.S. critical infrastructure in cyberspace.  That in turn has generated a number of opinion pieces and conversations lamenting the fact that the President-elect does not understand that the military should not be in charge of private sector cybersecurity.

I agree with that, and from my experience, no one in Cyber Command thought they should be responsible for private sector cybersecurity.  We did feel strongly that the military has a responsibility to defend the nation in cyberspace, just as it has responsibilities in the physical domains of air, land, maritime and space.  The private sector is the first line of defense and in most cases will be left to its own devices.  However, in the event the country is threatened at the strategic level by a cyber-attack, the federal government is obligated to act.

Many argue there is almost nothing the military can do to protect private industry from cyberattack.  There I disagree.  The military operating in concert with the rest of the federal government has two distinct capabilities that cannot be duplicated in the private sector.

First, the military can leverage the all-source capabilities of the intelligence community (IC).  If you have not worked closely with the IC, you do not understand how powerful it is to have access to information gleaned from human intelligence, signals intelligence, geospatial intelligence, open source intelligence, foreign materials exploitation and other intelligence disciplines.  No one in the private sector has access to all of those sources of information, nor do they have the capacity to synthesize the information into actionable intelligence.

Our challenge continues to be sharing the information with the private sector in a way that either provides warning of an attack or allows defenders to prioritize their efforts based on adversary capability and intent.  We must get better at identifying information that would be actionable by private entities and targeting it by sector.  Then we need to expedite the process of stripping out the classified bits so it can be distributed quickly and, preferably, by machine.  Finally, we need to build trust and confidence so this sharing is a two-way street.  Even at its best, the IC does not see everything, and the private sector has visibility on critical information that completes the picture.
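As a rough illustration of what machine-speed sharing could look like on the receiving end, here is a minimal sketch that pulls a declassified indicator feed and turns sector-tagged entries into simple block rules. The feed URL and JSON layout are hypothetical; real programs lean on standards such as STIX/TAXII for this kind of exchange.

```python
# Minimal sketch of machine-speed sharing: pull a declassified indicator feed and
# turn it into firewall-style block entries. The feed URL and JSON layout are
# hypothetical placeholders, not a real government service.
import json
from urllib.request import urlopen

FEED_URL = "https://example.gov/declassified-indicators.json"  # placeholder

def fetch_indicators(url: str = FEED_URL) -> list[dict]:
    """Download the indicator feed and parse it as JSON."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def to_block_rules(indicators: list[dict], sector: str) -> list[str]:
    """Keep only indicators tagged for our sector and emit simple block rules."""
    rules = []
    for ind in indicators:
        if sector in ind.get("sectors", []) and ind.get("type") == "ipv4":
            rules.append(f"deny ip from {ind['value']} to any")
    return rules

if __name__ == "__main__":
    try:
        print("\n".join(to_block_rules(fetch_indicators(), sector="energy")))
    except OSError as err:
        print(f"feed unavailable: {err}")  # two-way sharing also needs a reporting path back
```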

The second unique capability possessed by the military is the ability to kill the archer.  Most of our time in cyberspace is spent catching arrows.  As the French figured out in 1415, catching arrows is not that much fun.  We have to be able to kill the archer.  Some downplay our capability to do this and further fear that blocking an attack would require the military to monitor all the internet traffic coming into the U.S.  That is a poor characterization of what is necessary to position a force in cyberspace that can stop an attack at the source.  And it unnecessarily inflames the privacy versus security debate.

Our biggest limitations in being able to kill the cyber archer revolve less around capability and more around our failure to define red lines in cyberspace, coupled with ill-defined national policy regarding how we will use cyberspace operations in combination with other elements of national power to defend ourselves.  It is really important to work these policy issues out now and get on to a meaningful understanding of cyber as part of national security policy.  China and Russia have certainly figured it out.

If you like this, please share it.  Also check out my website, www.thecyberspeaker.com and my Facebook page, facebook.com/thecyberspeaker.

C’mon, is this as far as we have come?

So I just read another article about how boards and C-suites will begin to get serious about cybersecurity in 2017 (http://www.information-age.com/changing-role-cio-boardroom-2017-123463403/).

I am chagrined and frustrated by the fact we continue to make essentially the same three points year after year:

  1. Boards need to ensure that cybersecurity is a strategic business issue, not just an IT “thing.”
  2. CIOs need to be better equipped to articulate the interaction between IT and security, and the relationship of both to the business from a risk and an ROI perspective.
  3. Enterprise perimeter security as we know it is grossly insufficient to defend our interests in today’s cloud- and mobile-based world against threat actors who have access to a wide variety of advanced tools.

I suppose we keep making these three points because there has been little widespread change in corporate culture despite the increased frequency and publicity of attacks. The exception, from my perspective, is at the high end of the financial sector. For example, I had dinner with the CEO of a very large financial entity, and he would be insulted if you showed up with these three points as the basis of your presentation. But most companies appear to still be in the education and awareness phase, and they need to move to the action phase.

Here are three action steps that your company can take to move from awareness to action. And they are not technical steps. These are leadership behaviors that set a corporate culture that recognizes security as a key component of all aspects of the business.

  1. There needs to be a director on the board with real experience in managing cyber-risk. I have seen boards where the designated technical expert is the former CFO of a defunct tech corporation; that is not the right person to serve as the cybersecurity advocate. At the same time, boards cannot afford to seat one-trick ponies. Too many boards have been burned by seating a former CIO or other technically oriented person only to find out they have no way to make an effective contribution to the strategic business discussions. Admittedly, we are looking for unicorn-like directors, but there are execs with both significant leadership and management experience across diverse organizations and meaningful technical skills.
  2. Get promising business leaders real experience in IT and cybersecurity by having them serve a rotation as the deputy CIO or CISO in the company. A three-year stint working the technical issues would do two things. First, it significantly broadens the exec’s ability to understand the core issues facing the CIO and CISO, and that understanding will provide him or her a much better decision-making baseline when they eventually become a COO or CEO. Second, that high-performing business unit leader will bring a new perspective to the CIO shop and help them up their game when it comes to making IT and cybersecurity issues relevant to the board and the executive leadership team.
  3. Designate a specific portion of the IT or security budget for next-generation cyber defense technologies that are focused on leveraging machine learning and artificial intelligence to detect anomalous activity in the environment. Detecting anomalies that are likely to be early indicators of an attack is the only way we are going to defend ourselves in the cloud and mobile device world, as well as in the world of IoT. There is still a role for firewalls and anti-virus to lower the noise threshold, but active defense relies on finding the very subtle first steps a sophisticated attacker uses to establish a foothold in your network.

So I get it that we need to continue to beat the drum on the basics, but we need to do everything we can to move business leaders from awareness to action when it comes to managing cyber-risk. If cybersecurity is not a deliberate point of discussion in every business decision, then there is leadership work to do.

If you like this post please share it. Also check out my website: www.thecyberspeaker.com and like my facebook page, facebook.com/thecyberspeaker.

Hey, that’s my security system…and who are those guys?

Imagine you are sitting on an airplane in Atlanta and just as they close the door you receive this picture.  You look a bit closer and confirm that in the background is the control panel for the security cameras in your new house.

Now luckily I know these guys.  That’s Lee on the left and Stuart on the right, and they work with me at IronNet Cybersecurity.  Lee and Stuart have some special skills, and they put those skills to work testing the security on my new “smart” home.  Turns out I was pretty smart to have them test the system.  Turns out the camera system is not all that smart.

So here is what happened.  We built a new home and moved in this summer.  One of the sub-contractors I engaged early on was the sound and security guy.  He did great work, and overall we are happy with the system he installed.  As we were going through the design and installation process, I asked several times about the security of the system.  The contractor said the security was top notch, nothing to worry about.  I explained that I was going to trust, but verify, once the installation was complete.

During the configuration process, I asked what it would take for me to be able to view the cameras remotely when I was away from the house.  Conveniently, there is a proprietary app, but it requires certain communication ports to be open on my router.  This did not sound like a good idea to me, but I was assured that appropriate access controls were in place, no problem.

After everything was up and working, I gave Lee and Stuart the IP address of my house, and in about three hours they had full control of my security cameras. Sparing you the technical details, suffice it to say that Lee and Stuart found a flaw in the code that runs the camera system and were able to obtain admin access without compromising any user names or passwords.

The first thing I did was close the ports and install a business-grade firewall.  While that does not make me bulletproof, I am a significantly harder target.  The next thing I did was notify the installation contractor.  To my surprise, he was not that excited about the picture I shared or the revelation that he had been installing a product with a significant security vulnerability.  Guess he is lucky that I am not the type to go out on Yelp and make a stink.
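For anyone who wants to run their own “trust, but verify” check, here is a minimal sketch that tests from outside the house whether those remote-viewing ports are still reachable. The address and port list are placeholders; a purpose-built scanner like nmap will do a more thorough job.

```python
# Minimal sketch of the "trust, but verify" step: confirm from outside the house
# that the remote-viewing ports are no longer reachable. IP and ports are placeholders.
import socket

HOME_IP = "203.0.113.10"               # placeholder public address
CAMERA_PORTS = [80, 443, 554, 37777]   # ports the installer asked to open (assumed)

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in CAMERA_PORTS:
        state = "OPEN" if is_open(HOME_IP, port) else "closed"
        print(f"{HOME_IP}:{port} -> {state}")
```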

The next thing I did was call the system distributor.  It took about 15 minutes before I found someone who could understand that I was calling to notify them of a security flaw in their product.  The next day I did get a phone call from the company.  I asked whether this was the type of company that would try to sue me for violating the product terms of agreement, or the type that welcomed the information.  It turned out to be the latter.  According to the gentleman we spoke to, they were aware of this particular vulnerability and there was a fix on the roadmap.  No time frame was suggested, nor was there an offer to retrofit or patch.  I imagine this distributor is at the mercy of the overseas company that builds the product.  He added that this was a common issue with other systems.  I did not find that comforting.  We provided them a detailed report of our findings, at no charge I might add.  Further attempts to contact them regarding the filing of a Common Vulnerabilities and Exposures (CVE) report went unanswered.

Several interesting points to ponder here.  First, as a homeowner or a business owner, do not take the security of your security systems for granted.  There is going to be a tradeoff between security, privacy and convenience; just make sure you are making a conscious choice.  If you want to have Lee and Stuart check out your setup, send me a note.  Second, if you sell and install tech equipment these days, you should care about the security characteristics of the equipment you are putting in people’s homes and businesses.  Reputation damage is hard to measure and hard to fix.  Third, if someone notifies you of a security flaw in a system you produce, have a plan for how you are going to treat them.  Make it easy for them to reach the right person, and follow it through to conclusion.  Finally, if you know there is risk involved that the average consumer would not consider, explain the tradeoffs.  Help them make the right decision.

If you thought this post was useful, please share it.  Also, consider liking my Facebook page, The CyberSpeaker.  And check out my website:  www.thecyberspeaker.com.