BLOG
  • May 27, 2016
    Threat prevention must always begin with the ability to detect threats effectively. Detection must be sweeping and comprehensive if an organization plans to truly understand its risk. As the model for detection continues to shift toward ‘threat intelligence’, CISOs and security teams must understand where these programs excel and where they do not. The issue many groups encounter is not a lack of effort or investment; rather, the fault lies with vendors and their lack of transparency. Threat management solutions such as managed SOCs often hold back intelligence as an add-on, and vendors rarely take the time to explain what is detected and what is not.

    There is also a gap in the ability to detect host-to-host incidents involving an internal threat actor. This means that an attacker who finds himself inside the network is able to pivot and attack freely, going mostly undetected. Why is this? There is nothing in place to detect host-to-host attacks unless they cross a gateway that is logging. Internal LAN-to-LAN traffic most commonly crosses a flat design and therefore doesn’t pass through a device such as a firewall. The same goes for trusted zones and many other design nomenclatures; the trusted traffic often goes unanalyzed.

    Of course there are solutions for this, such as HIDS/HIPS, internal honeypots, and port mirroring analysis, but the unfortunate fact is that these are not often in place, and when they are, the data is not collected or properly investigated. This is an area that organizations need to be paying attention to.
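
    As a rough illustration of what port mirroring analysis can look like in practice, the sketch below watches a SPAN/mirror interface for internal-to-internal connection attempts on ports commonly abused for lateral movement. The interface name, the internal address range, and the port list are assumptions for illustration only, and the example assumes Python with the scapy library available; it is a starting point, not a finished detection control.

    # Minimal sketch: flag host-to-host connection attempts seen on a mirror port.
    # Assumptions (not from the post): interface "eth1", internal range 10.0.0.0/8,
    # and a small set of ports commonly used for pivoting (SMB, RDP, WinRM).
    from ipaddress import ip_address, ip_network
    from scapy.all import IP, TCP, sniff

    INTERNAL = ip_network("10.0.0.0/8")
    WATCHED_PORTS = {445, 3389, 5985}

    def flag_lateral(pkt):
        """Print a note for TCP SYNs between two internal hosts on watched ports."""
        if IP in pkt and TCP in pkt and pkt[TCP].flags == "S":
            src, dst = ip_address(pkt[IP].src), ip_address(pkt[IP].dst)
            if src in INTERNAL and dst in INTERNAL and pkt[TCP].dport in WATCHED_PORTS:
                print(f"possible lateral movement: {src} -> {dst}:{pkt[TCP].dport}")

    # Requires root privileges and a switch port configured to mirror internal traffic.
    sniff(iface="eth1", prn=flag_lateral, store=False)

    In a real deployment this kind of logic would live in an IDS or the SIEM itself; the point is simply that mirrored internal traffic gives the detection layer something to look at.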
  • Nov 13, 2015
    There has long been a sliding scale that says the more secure a solution is, the less usable it becomes. With a boom in the Internet of Things, or “IoT”, many hackers playing with rapid prototyping kits, hacker spaces in every major city, and 3D printing becoming a mainstay in most public areas, we need to pause and understand where the slider currently sits. As more and more elements of our daily lives are driven online, and the cost of convenience is driven down, we expose ourselves to risks not previously considered. Never before has it been possible to unlock your front door, preheat an oven, or surveil a home, all from a remote location. Never before has your TV put your privacy at risk.

    While the consumer elements can provide additional creature comforts, they also drive what we want at work. Why not push for tools in the workplace that make life easier?

    The reason is the sliding scale. When we choose to open elements of our home to the convenience of a cloud-based solution, the risk is primarily compartmentalized into the things within our control, and limited to our own personal sphere of potential loss. It is a fixed risk which affects mainly the individual consumer. When a decision is made to integrate similar technologies within a company, the result is a greater exposure of risk at the corporate scale, and in many cases the risk extends to potentially millions of customers as well.

    A troubling trend is the push to integrate SCADA and Building Management System (BMS) solutions into bridged IP networks. Typically these controllers have been closed-loop or air-gapped designs, secured through isolation. They run older software with a limited upgrade path and are not designed to stand up to the rigors of an untrusted environment. By marrying the ability to turn on a porch light for a house with the perceived need to control critical infrastructure from the comfort of an iPad on the back deck, we put our core utilities more at risk than ever.

    As building engineers see the convenience of solutions offered through hardware stores, and push for these same integrations on a larger scale, the security community must be mindful of where the slider sits and assess the risks proportionally. It won’t be long before even the most mundane tasks are ported to the smartphone, but at what price to protection? There is an absolute benefit to access, immediate control, and newly discovered data points for analysis. These benefits need to be weighed against the risks of outages, loss of control, or even deliberate malicious acts.
  • Oct 22, 2014
    With all the negative publicity this year surrounding major-league retailers and their staggering credit card information breaches, it’s no surprise the question of PCI compliance has moved to the front of many a CISO’s mind. And while the PCI Data Security Standard (PCI DSS) has steadily evolved to meet emerging security concerns (it is currently in its 3rd major revision), many security professionals either eschew the standard altogether or have misconceptions about some of the key tenets of the PCI standard. Here’s a quick look at three top PCI misconceptions:
    #1 -- Since I don't store credit card information, I don't have to worry about being PCI compliant.
    The PCI DSS does not just apply to the storage of credit card data but also to the handling of data while it is processed or transmitted over networks, phone lines, faxes, etc. While not storing credit card data does eliminate some compliance requirements, the majority of the controls dictated by the DSS remain in effect. The only way to avoid PCI compliance is to transfer the risk entirely to someone else, such as with PayPal's Website Payments Standard service, where customers interact with the PayPal software directly and credit card information never traverses your own servers.

    #2 -- I don't process a large number of credit cards (e.g., I'm too small, only a Level 3 or Level 4 merchant), so I don't have to be compliant.
    While merchants processing fewer than 20,000 total transactions a year are generally not required to seek compliance validation, the obligation for PCI compliance is still there, as are the consequences if the data you store or process is compromised. For merchants processing between 20,000 and 1 million total transactions -- a large majority of small businesses -- the requirement to fill out a self-assessment questionnaire, or “SAQ”, means that many if not all of the full PCI DSS requirements must be met and attested to.


    #3 -- We’ve just made it through a PCI audit and received our ROC (Report on Compliance)
  • May 02, 2014
    A great many organizations implement various layers and tools within their security management program – IPS, malware intelligence, spam filtering, firewall logs, Active Directory events, and more – that feed into centralized event correlation engines. This is a great start to a security management program. But how can a security team with limited resources and time manage to prioritize streams consisting of thousands of events a second? Here are two basic strategies that can help:
    1)    Implement an asset value filter that prioritizes critical assets over less critical assets. For example, a security incident involving a server should take priority over one involving a workstation, and a server containing highly sensitive data should take priority over one without sensitive data. An upfront analysis of these assets must be performed and the prioritization implemented.
    2)    Implement a filter based upon available time. All too often incidents pile up and, given the large number of elements to tackle, items do not get addressed in a timely manner. In order to limit the incidents created, a series of threshold changes is needed. For example, if the security engineer has 4 hours a day allotted to investigate events, and each event takes 30 minutes to investigate, the prioritized incidents should be capped at approximately 8 per day (a rough sketch combining both filters follows below).
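
    As a minimal sketch of how these two filters might be combined, the example below ranks correlated events by asset value and severity, then keeps only as many incidents as the daily analyst time budget allows. The asset tiers, event fields, and thresholds are illustrative assumptions rather than features of any particular SIEM; a real correlation engine would express the same idea through its own rules and thresholds.

    # Minimal sketch: asset-value prioritization plus a daily time-budget cap.
    # The asset tiers and event fields below are assumed for illustration.
    from dataclasses import dataclass

    ASSET_PRIORITY = {"sensitive-server": 3, "server": 2, "workstation": 1}

    TIME_BUDGET_MIN = 4 * 60                          # 4 analyst hours per day
    MINUTES_PER_EVENT = 30                            # average investigation time
    DAILY_CAP = TIME_BUDGET_MIN // MINUTES_PER_EVENT  # = 8 incidents per day

    @dataclass
    class Event:
        source_ip: str
        asset_class: str   # e.g. "sensitive-server", "server", "workstation"
        severity: int      # raw severity from the correlation engine

    def triage(events):
        """Rank events by asset value, then severity, and keep only what fits the budget."""
        ranked = sorted(
            events,
            key=lambda e: (ASSET_PRIORITY.get(e.asset_class, 0), e.severity),
            reverse=True,
        )
        return ranked[:DAILY_CAP]

    # Example: only the highest-value events become today's incident queue.
    queue = triage([Event("10.0.1.5", "sensitive-server", 7),
                    Event("10.0.3.17", "workstation", 9)])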

    This is a departure from many current security management program implementations, where criticality and workload are not addressed up front. It is a more realistic implementation that helps analysts cover the critical events quickly rather than wasting precious time sifting through events of less importance. It is also a great method to help management understand workload and scale event analysis with additional resources.


  • Feb 06, 2013
    The number of articles on cloud computing security risks is growing daily. However, is cloud computing any different from traditional architectures in terms of risk exposure?

    First, let’s clarify the definition of cloud computing to mean a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Let’s also mention that there are several flavors of this offering, two of the fastest growing being SaaS and IaaS.

    Taking away the dynamic scalability, neither of these is a new offering. Spam filtering (SaaS) has been accepted as a commoditized activity for some time now, and it is almost always done in the cloud today. Admins feel comfortable with this decision even though email may contain some of the most sensitive data sent within an organization. Infrastructure as a Service (IaaS) has been the mainstay of hosting companies, colocation facilities, and third-party data centers for decades. So how is today different?

    The difference is that administrators are placing more reliance upon these cloud services and, in turn, losing some control. It is important to keep in mind that the premise of cloud computing is not flawed. By thoroughly vetting a vendor before conducting business with them, a cloud computing solution is oftentimes more efficient, cost effective, and reliable than a traditional deployment. The right questions need to be asked as they relate to your environment and risk threshold: How is the data protected from other customers? Who has access to the information? What does the architecture look like? Discussions that would previously have been internal are now shifted to the hosting provider.

    Administrators should take care now to begin understanding the risks posed to their environment by cloud computing and what they can do to minimize them. At the same time they should start looking for ways to leverage the benefits of SaaS and IaaS. In the long term, this is a trend that is here to stay and one that we can all benefit from.