BLOG
  • Jun 30, 2022

    In the age of sophisticated phishing attacks and credential harvesting, multi-factor authentication (MFA) is a crucial part of any security program. It is often trivial for attackers to gain access to a user’s credentials through password spraying or phishing campaigns. Even though most organizations educate users regarding phishing threats, a percentage of employees will still likely fall for them, providing their credentials unknowingly to attackers. In an environment of distractions, users will fail to see the 'external' banner, the mistyped domains, and the lack of encryption on the web page collecting their credentials – some of the common tell-tale signs of a phishing attack.

    You can enforce user lockouts for login pages and put technology in place to block most phishing attempts, but it is very likely that something, or someone, will get through. MFA gives your organization the peace of mind to know that if a user does fall victim to credential theft, a breach to the environment is less likely since an MFA token would be needed for access. 

    Note that it is not enough to just lock down VPN; all external access points into the environment must be protected: VPN, Webmail, Collaboration Tools, etc. Services that allow for multi-factor bypass (like EWS) should be limited or shut off completely. Users should not be allowed 'exceptions' to bypass this control. 

    Keep in mind that MFA is not a magic pill for every problem. Sophisticated phishing attacks can still trick a user into letting the attacker into the environment by providing the passcode or approving a push request, but these attacks happen less frequently than other types. User education about what a 'real' phishing attack looks like today will help as well. Without MFA you are not only susceptible to attack, but there could be a hard-to-spot attacker in your internal network environment right now. Depending on the compromised account, an attacker without an MFA hurdle may gain significant access privileges.

    What we know is that no security measure is perfect or infallible. The security layers in place – with MFA close to the top of the list (right up there with logging!) – will help to create an environment that will hold up to many of today's malicious activities.

  • May 31, 2022

    The landscape of technology has changed drastically over the past two decades. The days of companies doing it all themselves are over. SaaS, PaaS, and IaaS are all the rage, and organizations are adopting these solutions to run many aspects of their businesses. With this evolution comes risk. Even as companies look to lessen the burden of onsite data and regulatory obligations by pushing them off to third-party vendors, security remains a major concern. 

    Today, organizations must enter into these relationships with trust, but not trust alone. Contracts are a must, and so is verification of security practices. All too often, companies look past security concerns, assuming that if the software they are going to use is out in the marketplace, it must be secure. 

    Unfortunately, many software companies go to market without a clear security plan and without the proper controls in place. It is imperative that organizations take the time to ask about security practices and obtain attestation letters stating that the solution has been appropriately penetration tested. It is also important to receive concrete assurances that security protocols are being followed, that the regulatory needs of the company are being met, and that the solution aligns with the organization's risk appetite. 

    Be sure to vet new third-party vendors and ask security questions ahead of signing the contract. Before a signature, the vendor is more likely to share information quickly since it is holding up the contract from being executed. Additionally, if this information is requested early enough in the process, you can make decisions about moving forward without the legal trouble of undoing a contract.

    If you decide to move forward despite security gaps, track the risk and follow up with the company periodically to see if the risks have been remediated. And if your third-party software vendor doesn't have penetration test results, feel free to send them our way.

  • Mar 31, 2022

    Many organizations have rolled out multi-factor authentication (a must) and other controls to protect their networks. Email threat detection is deployed and URL rewriting is in place. Investments are made in antivirus, EDR, and threat detection solutions; vulnerability scanners are used to scan for known risks. Even with all the layers, technologies cannot protect your organization fully. Your Human Firewall is critically important.

    Your Human Firewall = your users. Education of users is usually considered, and even implemented to a point. Once is not enough though! Even annually does not cut it anymore. Ongoing security education through emails, newsletters, team presentations, training, phishing simulations, and individual follow-ups is all part of a comprehensive program. Reinforcing the tools available for data protection, detailing social engineering scenarios and what to look for, and reiterating acceptable use policies should all be included. 

    Let's be honest, your users are busy. They are looking at emails quickly on cell phones and are not paying as close attention to security threats as you would like. Security is not always top-of-mind and needs to be reinforced as a regular part of everyone's role - NOT just annually when completing compliance training.

    There are certainly lots of tools available for security awareness and phishing if you have the budget. If you don't, maybe consider allocating budget next year. But keep in mind that education can happen via tools you already own - emails, newsletters, PowerPoint slides, hand-outs, team meetings, etc. However you do it, just make sure you do it. You will be glad you did.

  • Feb 28, 2022

    Recently there have been vulnerabilities out in the wild that have had security teams racing to patch systems and gather an inventory of their assets. We believe in being proactive. As with working out – it is easier to stay in shape and form good habits that keep you there than it is to get back in shape. The same can be said for the health and hygiene of your network. Keeping the inventory up-to-date, and running ongoing vulnerability scans proactively, will save you time and stress when a new time-sensitive vulnerability pops up.

    What are the steps you need to follow to make running after vulnerabilities less stress-inducing? Here are a few things to consider: 

    1) Catalogue your inventory, including what applications are exposed externally and what services your assets are running. 

    2) Understand what vulnerabilities exist in your network by running ongoing vulnerability scans or hiring a company to do it. 

    3) Know what domains and assets are managed by your company or by a third party and how to get in touch with the owners if needed.

    4) Investigate what security controls are in place or can easily be put in place to protect your network while updating configurations or patching systems.

    When a new vulnerability that promises to bypass your controls and infiltrate your network comes again (and they will come again!) you'll have a plan and can take steps forward in a logical and orderly way.
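    The steps above lean heavily on having the inventory in a queryable form. As a minimal sketch (the hostnames, owners, and service strings below are invented examples, not a real CMDB schema), even a flat list of structured records lets you answer "where do we run the affected product, is it exposed, and who owns it?" the day an advisory drops:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    hostname: str
    owner: str              # step 3: who to contact
    external: bool          # step 1: exposed to the Internet?
    services: List[str] = field(default_factory=list)  # step 1: what is running

# Toy inventory; in practice this would come from your CMDB or scanner exports.
inventory = [
    Asset("web01",  "Web Team", True,  ["nginx 1.18", "OpenSSH 8.2"]),
    Asset("mail01", "IT Ops",   True,  ["Exchange 2016", "OpenSSH 8.2"]),
    Asset("db01",   "DBA Team", False, ["PostgreSQL 12", "OpenSSH 8.2"]),
]

def affected(assets, product):
    """Triage for steps 2 and 4: which assets run a product named in an advisory?"""
    return [a for a in assets if any(product in s for s in a.services)]

hits = affected(inventory, "OpenSSH")
exposed = [a.hostname for a in hits if a.external]   # patch these first
```

    With the data in this shape, the stressful part of a zero-day news cycle becomes a lookup rather than a scramble.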

  • Jan 10, 2022

    Our troops do not go into battle without the proper training, knowledge, or practice under their belts. They learn their roles. They practice as a unit. They learn about their opponents.  They perform test runs. They plan for all feasible scenarios. 

    The landscape of security today requires a similar approach. Not only do security teams need to know their specific roles day to day and how to perform during an actual event, but they also need to understand the threats they face, plan for them, and perform test simulations. Running through this process during a tabletop exercise helps to avoid delays, gaps, and confusion in the event of an actual incident. Tabletop exercises are not new, but they offer organizations a way to play out a situation and identify any areas missing coverage before an incident occurs. 

    Here are six tips to get you started with your next Tabletop Exercise: 

    1. Make it a game with a time limit. Brainstorm, be creative, don't expect to be perfect, but box it in. 90-120 minutes is likely long enough.
    2. Come up with plausible scenarios. There are no points on the board for coming up with a farfetched, unlikely scenario. Start with the realistic threats and go from there. 
    3. Get it on the calendar - today! Don't get stuck in the 'we should do it' stage. Schedule it or it won't happen. 
    4. Get the right people in the room. In small organizations it may be all the leaders in the organization. In large organizations it may need to be groups split into several smaller teams/meetings.
    5. Divide and conquer. In a real scenario, tasks would be split up and small groups would work to tackle the incident from various angles. Allow members of the exercise to split up and brainstorm for part of the exercise.
    6. Create an after-action plan. What worked? What didn't? What gaps did you identify that you need to work to fill? Write it down and communicate it to the team. 

    If you need support, we are here to help! 

  • Nov 10, 2021

    Do you have backups in place? A simple yes or no answer, or is it? In the past, backups were not necessarily seen as a security issue, but ransomware changed that. Backups, and the security of those backups, are now more important than ever. Backups have become Information Security's best friend.

    But is simply having a backup really the only concern? As security professionals we are asked about backups regularly. Here is our take on the subject:
     
    Having backups in place is critical.
    Having working backups in place is critical.
    Having frequent backups in place is critical.
    Having tested backups in place is critical.
    Having isolated or offline backups in place is critical.
    Having backups only accessible to the necessary staff is critical.
    Ensuring there are backups of everything necessary to run your business is critical.
     
    Cloud assets, AD, applications... all of these are in scope and should have backups at some level. Multiple copies and retention should also be considered.
     
    If we are testing your environment and we can get into your backups, so can a malicious actor, so give them the appropriate level of attention. Make sure resources are put into ensuring that you know what is backed up, the frequency of backups, and how they are secured. Make this part of your Business Continuity planning and testing. Recover and test your backups regularly. Do not wait for a loss of data to investigate this crucial component of your IT environment.
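    Restore testing can start very small. The sketch below (the file paths and the 24-hour freshness window are arbitrary examples) checks that a file-level backup exists, is recent, and restores byte-for-byte; real programs would drive their backup software's own verify and restore APIs on top of checks like these:

```python
import hashlib, os, shutil, tempfile, time

def sha256(path):
    # Hash in chunks so large backup files do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source, backup, max_age_hours=24):
    """A backup only counts if it exists, is fresh, and matches the source."""
    result = {"exists": os.path.exists(backup), "fresh": False, "intact": False}
    if result["exists"]:
        age_seconds = time.time() - os.path.getmtime(backup)
        result["fresh"] = age_seconds < max_age_hours * 3600
        result["intact"] = sha256(source) == sha256(backup)
    return result

# Demo with throwaway files standing in for real data and its backup copy.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "data.db")
bak = os.path.join(tmp, "data.db.bak")
with open(src, "w") as f:
    f.write("critical business records")
shutil.copy2(src, bak)
checks = verify_backup(src, bak)   # all three checks pass for a good, fresh copy
```

    The point is not the specific checks but the habit: a scheduled job that fails loudly when a backup is missing, stale, or corrupt.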

  • Oct 14, 2021

    It used to be that employees stayed at a company for a long time. People did the same job for years, and you could go to someone in-house and ask where something was or how it was done. Good, bad or indifferent, there was a 'way' things got done. In today's day and age, with the complexity of technology and the movement of jobs, this is no longer something you can count on. This is one of the reasons that documentation becomes so important for institutions. The need to have consistent implementation and education is paramount to being an efficient, and properly protected, organization. Often when our team conducts a security assessment, the inconsistencies show through... when documentation is requested, there is none to fall back on. Lack of consistent implementation creates the cracks that attackers squeeze through. 

    For some organizations, this is a shift in mindset: stopping the practice of just doing, and instead pausing to ask: Is what I am doing the right thing? Is it consistent and secure? Getting started requires your team to ask: What technologies are in place? How are they managed today? Is how they are managed today appropriate? What configurations are necessary to secure the systems in place while meeting the business needs? Are there regulatory requirements that need to be considered? What are the minimum acceptable security controls to put in place? Then, the next step is putting pen to paper - or fingers to keyboard - and recording it all.  

    The upfront 'pain' of documenting configurations and getting consensus on the right controls to have in place can be an investment in time, but efficiencies in communication, and consistencies in the management and implementation of technologies can save you time in the long run. It can also save you money on extra staffing costs, provide more effectiveness in the education and onboarding of new IT hires, and help your organization meet regulatory requirements. 

    The tough part, of course, is really putting it all into action and recording the gaps and risks present in the process. Progress over time is the goal. With appropriate tracking of the gaps, there will be fewer crevasses for malicious actors to slide through. That is a topic for another post.

  • Sep 22, 2021

    Every day there are new cyber threats facing organizations. Often when an article makes its way to mainstream media, there is a flurry of action and response. This is often well-intentioned, but many times is a poor use of resources. So how do we determine what is real and what is hype?

    The first answer, the one nobody will want to hear, is that you need to know your threats. You need to think about them, conduct tabletop exercises around them, understand them inside and out. What information do you stand to lose? What will the impact of a successful attack be? If your data is of low value, or a successful attack is unlikely to disrupt operations or customers, how important is it to protect against? Is the threat a realistic one within your market? Even ransomware can be distinct within various types of industries – from the threat actor to the impact.

    This evaluation can be difficult for data owners and IT security teams to conduct at times, especially if they are not fully versed in the datasets, compliance aspects, or operational dependence on the data. Thinking about this proactively and having a plan to address threats can be the difference between falling victim to an attack and being able to stop (or prevent) it.

    Most attack types are not new. That is why the MITRE ATT&CK framework works well. In fact, we can take a wider view and map MITRE to general military attack techniques going back as far as history will allow. So as the next big urgent risk is played out in the public forum, take a moment to consider how it applies to you or your organization. Is it something that needs to be urgently addressed, or does it slot in behind other evaluated and prioritized risks in the register?

  • Jul 30, 2021

    Making sure security controls are working as expected is just as important as making sure backups are operating properly or patching is applied correctly. It is not uncommon for controls to fail, creating significant exposures: an EDR agent not blocking an exploit, for example, or a guardrail failing to stop a public S3 bucket. Very rarely do companies revisit a security control after it is initially set up unless it is pointed out in a security assessment.

    Simply having the security control in place isn’t enough, controls must be regularly tested and adjusted according to any changes in risk profile or environment. Without regular check-ins, the practical effectiveness of a control declines over time. Without an initial verification that it is in place as expected, the effectiveness is capped well below potential capability. Here are a few tips to get started:

    - Clearly outline the expected areas of applicability for a control. Define these and share with any relevant parties.

    - Align to a control framework such as ATT&CK or CSF, keeping things easily transferable to new risk evaluations.

    - Select an appropriate methodology, using configuration guides that fit best with the environment. CIS provides many working documents as do most vendors.

    - Perform annual security reviews to ensure controls are still relevant and working as expected.

    Keeping to this guidance will ensure longer-term usefulness and the best ROI on security control investments.
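    One way to operationalize the tips above is a scripted baseline: each control gets an identifier, a description, and a pass/fail test run against whatever telemetry you can observe. This is only a sketch; the control names, thresholds, and the `observed` snapshot below are invented for illustration:

```python
# Map of control id -> (description, pass/fail test on observed telemetry).
CONTROLS = {
    "mfa_enforced":      ("MFA required on all external logins",
                          lambda o: o.get("mfa_external") is True),
    "s3_public_blocked": ("No publicly readable storage buckets",
                          lambda o: o.get("public_buckets", 1) == 0),
    "edr_coverage":      ("EDR agent on at least 95% of endpoints",
                          lambda o: o.get("edr_pct", 0) >= 95),
}

def failing_controls(observed):
    """Return (id, description) for every control that is not holding."""
    return [(cid, desc) for cid, (desc, test) in CONTROLS.items()
            if not test(observed)]

# Example snapshot, as might be pulled from identity, cloud, and EDR consoles.
observed = {"mfa_external": True, "public_buckets": 2, "edr_pct": 81}
failures = failing_controls(observed)
```

    Run on a schedule, a check like this turns "we assume the control still works" into a recurring, reviewable answer.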

  • Jun 15, 2021

    Attacks today seem to follow the same playbook: crack the perimeter, exploit bad internal fundamentals, deploy ransomware, profit. The playbook is the same each time because it continues to work. How do you stay ahead of it?

    Organizational response to external breaches is mostly reactive. Wait until a breach makes the news cycle, then dig into intel feeds hoping for a meaty bundle of IOCs to explore. Later, speculate for a few days over how sophisticated the attack was, only to find out that the root cause of the breach was phishing, patching, or a bad password. Finally, internally search out and fix the specific patch or block the phishing email subject line and wait until the cycle repeats. These steps are indeed important, but each time the prevention bulletins come out, they seem to be based more on good security practices than any other element.

    What are good security practices? Good (tested) backups, two-factor authentication, patching, logging, monitoring, etc. Basically, all the areas covered in a solid cybersecurity plan. Unfortunately, the basics are not always easy, and they are certainly not the coolest technology everyone wants to play with. Instead of diligent patching, configuration management, and a solid monitoring program, many organizations rely on expensive EDR. Instead of good coding practices, organizations deploy application firewalls. Security has always been about layers, and as attacks have continued to become more complex, layers are what is needed. However, these layers must be built on the implementation of good foundational security practices.

    Organizational focus on cleaning things up and getting the basics right will stop many attacks up front.  Assessment of your current gaps and setting a plan for filling them in will also pay dividends. If you are looking for a framework to follow, using the NIST CSF is something that most organizations can easily align to. Start small, be realistic, and keep re-assessing as you work towards a goal. Risks will shift, threats will shift, but if you have a good foundation, adjusting to meet them will not be difficult.



  • May 16, 2021

    We work with organizations large and small. Something they all have in common? – A need to protect themselves against cyber threats. Budgets and security layers come in all shapes and sizes, but no matter what, there are two items needed at the base of any robust cybersecurity program. 1) Security Awareness and Education, and 2) IT Security Policies and Standards. We will go into detail in later posts, but here are some high-level thoughts on these areas of focus. 

    Security Awareness and Education. Like it or not, attackers will continue to use email, social engineering, and phishing campaigns to target users. Ensuring your workforce is educated regularly on cybersecurity issues is a cornerstone to combating these threats – not to mention it is also a standard regulatory requirement. Additional education on regulatory requirements, password management, data protection and other key security components will help to teach all users that cybersecurity is the responsibility of everyone in the organization. When your employees know better, they will do better. Understanding the fundamentals to staying safe online and protecting organizational data will bolster your cybersecurity program on day one. And remember, education comes in many forms – emails, videos, meetings, formal trainings, etc. Make it part of the culture and it will pay off. 

    Policies and Standards. This is a huge topic – the more policies and standards we write, the more pop up that need to be written. Security policies help you outline the beliefs of the organization. What are the tenets the organization will live by? How will data be protected? Will data be encrypted at rest and in transit? Will multi-factor authentication be required? How will data on mobile devices be protected? Will security training be required? If so, how often? On a tight budget, you can skim the Internet (safe sites of course) for sample policies and use internal resources to customize them to your environment. On a looser budget, you can hire someone to solely work on creating and ratifying this documentation, or outsource its creation. No matter how you go about it, we would urge you to get these in place to lay out the framework for your organization's approach to cybersecurity. From there, get the details documented within your standards so the organization is clear on the methods you are using to secure your environment.  

  • Apr 19, 2021

    As cybersecurity professionals, it is important to understand real-world threats facing your organization. Although there are plenty of tools and technologies to help identify thousands of possible vulnerabilities, threat hunting helps narrow these down into more realistic probabilities, and also helps you formulate appropriate countermeasures. 

    Threat hunting is a process that organizations of almost every size should engage in. Based on your organization's industry, it will help identify who likely attackers are, their methods, and their motivations (e.g., information, money). This is a critical component to ensuring that you understand the gaps that your organization may have and strategize the best ways to secure the organization. 

    At the end of the day, how can you stop what you do not understand? Understanding the common points of attack and the methods in play through routine threat hunting is an important part of a holistic cybersecurity program. 

    Here are a few ways to get started:

    • Join groups specific to security in your industry – they have a wealth of knowledge.
    • Take a look at the MITRE ATT&CK Framework – to understand the most common methods of attack (see https://attack.mitre.org/).
    • Stay up-to-date with CISA alerts on cybersecurity attack methods – and keep an eye out in your own organizations for signs of compromise (see https://www.cisa.gov/).


  • Mar 11, 2021

    We perform penetration testing on organizations that often use the latest and greatest tools in security defense, yet our testing sometimes goes unseen. The key to detection and defense? Security layers. If you assume your primary defense does not work, what is its backup? How do you spot activity based on the original outlined phases of an attack if your primary tooling is inadequate? Layers!

    Let us focus on the period after attackers have gained access, when they attempt to expand laterally within your network. Expansion is a goal of both ransomware and individual attackers. Both wish to make the most of their efforts and obtain the greatest level of success, which often involves searching for additional targets.

    There are several ways to look for this type of movement, such as reviewing network logs or looking for unexpected connection attempts to different devices. Unfortunately, while academically possible, the reality is that not every organization has these capabilities or resources. Watching lateral movement inside a subnet may be impossible without specific technologies to capture the traffic. Similarly, reviewing unexpected connections may not be possible without access to all logs and platforms on the network. Often teams are time-crunched and need to focus first on alerts with low false-positive rates.

    Another way to detect lateral movement within your network is with the use of honeypot technology. This type of technology is often deployed in a network segment of interest and set up to look like other targets in the network. The difference with the honeypot, however, is that it is not actually a valid network asset, so no valid connections should ever be made to it. Therefore, when an alert is generated, security teams can react with high confidence that it is not legitimate behavior. CTInfoSec's patented NARC® Deception Network Technology successfully detects threats in this manner, including attacks such as ransomware during the expansion phase. 

    The ability to detect lateral movement with a low false-positive rate is a very important defensive layer in any network security program. Attackers do not know up front which devices are real and which are not, which is why a honeypot is so successful at what it does. It sits and listens, and hopefully never reports an alert. When it does, you know it is time to react quickly. 
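    The principle is simple enough to demonstrate in a few lines. This is only an illustration of the decoy idea, not the NARC product: bind a socket where no legitimate service lives, and treat any connection at all as a high-confidence alert.

```python
import socket, threading

def run_decoy(host="127.0.0.1", port=0, max_alerts=1):
    """Listen on an unused port; nothing legitimate should ever connect,
    so every accepted connection becomes a high-confidence alert."""
    alerts = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port=0 lets the OS pick a free port
    srv.listen(5)
    decoy_port = srv.getsockname()[1]

    def accept_loop():
        while len(alerts) < max_alerts:
            conn, addr = srv.accept()
            alerts.append({"source": addr[0], "source_port": addr[1]})
            conn.close()
        srv.close()

    watcher = threading.Thread(target=accept_loop, daemon=True)
    watcher.start()
    return decoy_port, alerts, watcher

# Simulate an attacker's scan touching the decoy.
port, alerts, watcher = run_decoy()
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
watcher.join(timeout=5)
```

    A production deception platform adds convincing service banners, fleet management, and alert routing, but the detection logic rests on this same asymmetry: the decoy has no reason to receive traffic.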


  • Feb 11, 2021

    We follow a pretty standard pattern for gaining access within a Network Penetration Test (aka pentest) – recon, exploit, escalate, expand, execute. First, we look at the environment for any exposed information or misconfigured systems. Next, the goal is to gain control of or access to a resource using known methods or exploits. From there, we shoot to obtain as high a privilege level as possible. Once we have the appropriate levels of access, we expand laterally looking for targets or data. When we have achieved a level of control over the environment equal to our goals, we execute our intended actions. This is a straightforward high-level process with a not-so-straightforward multitude of steps in between. The uniqueness of the path between each step is what can make detection difficult for security teams and SOCs watching the wheel. 

    Threat groups and malware campaigns use approaches similar to the above. Understanding the attack and infiltration methods of pentesters and 'real world' attackers will give your organization an advantage when identifying security gaps.

    There are several resources that are available that will cover detections at various points in an attack. For example, the MITRE ATT&CK framework is a strong reference to use when looking at the techniques and methodologies used in successful network attacks. There are others, but we like this one. This framework gives organizations a place to begin when thinking about what gaps may exist within their environment in relation to real-world attacks. It is a good exercise for all security teams to look at the framework and determine the security layers your organization has in place to protect against the various attack methodologies. From there, you can create a plan to address the gaps.


  • Jan 11, 2021

    If you do not have Multi-Factor Authentication (MFA) in place yet, get a move on! If by some chance you have selected this post to read and do not understand MFA, let us provide a brief definition.

    MFA – sometimes known as two-factor authentication (2FA) – requires users to sign in with two out of three of the following credential types: something you know (e.g., a password), something you have (e.g., an application on your phone), or something you are (e.g., a fingerprint).
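    The "something you have" factor is most often a time-based one-time password (TOTP, RFC 6238): the phone app and the server share a secret and independently derive a short code from the current 30-second window. A stdlib-only sketch of the algorithm, checked against the RFC's own test vector:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC the current time window with a shared secret,
    then dynamically truncate the digest to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, secret "12345678901234567890"):
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at_time=59, digits=8))  # 94287082
```

    Because both sides compute the code independently, an attacker who phished only the password is still missing the second factor at login time.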

    While many organizations have made the shift to using MFA, there are still some that are struggling to get this technology in place – due to timelines, budgets, or just overall understanding of its benefit. Today, MFA is no longer a nice-to-have; it has become a requirement, a layer that is a must-have for all institutions. Phishing campaigns have made it easy for attackers to harvest credentials from unsuspecting employees and gain unauthorized access to company networks. If MFA is in place and enforced, an attacker who tries to leverage a harvested password to access company systems will be stopped, not having the second factor available to complete the login.

    MFA is not perfect, but when organizations finally make the move, it brings some peace of mind. While the sentiment was once that MFA would render all phishing or credential stuffing irrelevant, as we know now, it did not; nonetheless, it does stop many attacks short of full success and unauthorized access.

    As an aside, MFA is now available on many consumer services and we urge all users to enable this throughout their personal accounts as well (e.g., Gmail, Facebook).

  • May 27, 2016

    Threat prevention must always begin with the effective ability to detect threats. Detection must be sweeping and comprehensive if an organization plans to truly understand its risk. As the model for detection continues to shift toward 'threat intelligence', CISOs and security teams must understand where these programs excel and where they do not. The issue many groups encounter is not due to lack of effort or investment; instead, the fault lies with the vendors for lack of transparency. Threat management solutions such as managed SOCs will often hold back intelligence as an add-on, and vendors rarely take the time to present what is detected and what is not.

    There is also a gap between an internal threat actor and the ability to detect host-to-host incidents. This means that when attackers find themselves inside the network, they are able to pivot and attack freely, going mostly undetected. Why is this? There is nothing in place to detect host-to-host attacks unless they cross a gateway that is logging. Internal LAN to internal LAN is most commonly a flat design, and traffic therefore doesn't pass through a device such as a firewall. The same goes for trusted zones and many other design nomenclatures; the trusted traffic often goes unanalyzed.

    Of course there are solutions for this, such as HIDS/HIPS, internal honeypots, and port mirroring analysis, but the unfortunate fact is that these are not often in place, and even when they are, the data is not collected or properly investigated. This is an area that organizations need to be paying attention to.

  • Nov 13, 2015

    There has long been a sliding rule that says the more secure a solution is, the less usable it becomes. With a boom in the Internet of Things, or "IoT", many hackers playing with rapid prototyping kits, hacker spaces in every major city, and 3D printing becoming a mainstay in most public areas, we need to pause and understand where the slide rule currently sits. As more and more elements of our daily lives are driven online, and the cost of convenience is driven down, we expose ourselves to risks not previously considered. Never before has it been possible to unlock your front door, preheat an oven, or surveil a home all from a remote location. Never before has your TV put your privacy at risk.

    While the consumer elements can provide additional creature comforts, they are also drivers towards what we want at work. Why not push for tools in the workplace which make life easier?

    The reason is the sliding rule. When we choose to open elements of our home to the convenience of a cloud-based solution, the risk is primarily compartmentalized into the things within our control, and limited to our own personal sphere of potential loss. It is a fixed risk which affects mainly the individual consumer. When a decision is made to integrate similar technologies within a company, the result is a greater exposure of risk—at the corporate scale, and in many cases, the risk extends to potentially millions of customers as well.

    A troubling trend is the focused desire to implement SCADA and Building Management System (BMS) solutions into bridged IP networks. Typically these controllers have been closed-loop or air-gapped designs, secured through isolation. They run older software with limited upgrade paths and are not designed to stand up to the rigors of an untrusted environment. By marrying the ability to turn on a porch light at a house with the perceived need to control critical infrastructure from the comfort of an iPad on the back deck, our core utilities are more at risk than ever.

    As building engineers see the convenience of solutions offered through hardware stores and push for the same integrations on a larger scale, the security community must be mindful of where the scale sits and assess the risks proportionally. It won’t be long before even the most mundane tasks are ported to the smartphone, but at what price to protection? There is an absolute benefit to access, immediate control, and newly discovered data points for analysis. These benefits need to be weighed against the risks of outages, loss of control, or even deliberate malicious acts.
  • Oct 22, 2014
    With all the negative publicity this year surrounding major-league retailers and their staggering credit card breaches, it’s no surprise the question of PCI compliance has moved to the front of many a CISO’s mind. And while the PCI Data Security Standard (PCI DSS) has steadily evolved to meet emerging security concerns (it is currently in its 3rd major revision), many security professionals either eschew the standard altogether or hold misconceptions about some of its key tenets. Here’s a quick look at three top PCI misconceptions:
    #1 -- Since I don't store credit card information, I don't have to worry about being PCI compliant.
    The PCI DSS does not apply only to the storage of credit card data but also to the handling of that data as it is processed or transmitted over networks, phone lines, faxes, etc. While not storing credit card data does eliminate some compliance requirements, the majority of the controls dictated by the DSS remain in effect. The only way to avoid PCI compliance is to transfer the risk entirely to someone else -- for example, with PayPal's Website Payments Standard service, customers interact with the PayPal software directly and credit card information never traverses your own servers.

    #2 -- I don't process a large number of credit cards (e.g., too small, only Level 3, only Level 4), so I don't have to be compliant
    While merchants processing fewer than 20,000 total transactions a year are generally not required to seek compliance validation, the obligation for PCI compliance is still there, as are the consequences if the data you store or process is compromised. For merchants processing between 20,000 and 1 million total transactions -- a large majority of small businesses -- the requirement to fill out a self-assessment questionnaire, or “SAQ”, means that many if not all of the full PCI DSS requirements must be met and attested to.


    #3 -- We’ve just made it through a PCI audit and received our ROC (Report on Compliance)
  • May 02, 2014
    A great many organizations implement various layers and tools within their security management program -- IPS, malware intelligence, spam filtering, firewall logs, Active Directory events, and more -- that feed into centralized event correlation engines. This is a great start. But how can a security team with limited resources and time prioritize streams consisting of thousands of events a second? Here are two basic strategies that can help:
    1)    Implement an asset value filter that prioritizes critical assets over less critical ones. For example, a security incident involving a server should take priority over one involving a workstation, and a server containing highly sensitive data should take priority over one without. An upfront analysis of these assets must be performed and the prioritization implemented.
    2)    Implement a filter based upon available time. All too often incidents pile up and, given the large number of items to tackle, do not get addressed in a timely manner. To limit the incidents created, a series of threshold changes is needed. For example, if the security engineer has 4 hours a day allotted to investigate events, and each event takes 30 minutes to investigate, the prioritized incident queue should be capped at approximately 8 per day.
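    The two filters above can be sketched in a few lines of Python. This is a minimal illustration only; the asset names, asset values, and time budget are hypothetical examples, not drawn from any particular SIEM product.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Event:
        source_host: str
        description: str

    # 1) Upfront asset-value analysis: higher number = more critical.
    # These hosts and scores are hypothetical examples.
    ASSET_VALUE = {
        "db-server-01": 10,   # server holding highly sensitive data
        "web-server-01": 7,   # server without sensitive data
        "workstation-42": 2,  # ordinary workstation
    }

    def triage(events, hours_per_day=4.0, minutes_per_event=30.0):
        """Return the events an analyst can realistically investigate today,
        highest-value assets first (unknown assets default to lowest value)."""
        # 2) Time-budget filter: cap the queue at what fits in the allotted hours
        # (4 hours / 30 minutes per event = 8 incidents per day).
        capacity = int(hours_per_day * 60 // minutes_per_event)
        ranked = sorted(events,
                        key=lambda e: ASSET_VALUE.get(e.source_host, 0),
                        reverse=True)
        return ranked[:capacity]
    ```

    With ten workstation alerts and one database-server alert queued, the server alert lands at the top of the list and the queue is trimmed to the eight events that fit the analyst's day.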

    This is a departure from many current security management implementations, where criticality and workload are not addressed up front. It is a more realistic approach that helps analysts cover the critical events quickly rather than wasting precious time sifting through events of lesser importance. It is also a great way to help management understand workload and scale event analysis with additional resources.


  • Feb 06, 2013
    The number of articles on cloud computing security risks is growing daily. However, is cloud computing any different than traditional architectures in terms of risk exposure?

    First, let’s define cloud computing as a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. There are several flavors of this offering, two of the fastest growing being Software as a Service (SaaS) and Infrastructure as a Service (IaaS).

    Setting aside the dynamic scalability, neither of these is a new offering. Spam filtering (SaaS) has been accepted as a commoditized activity for some time now and is almost always done in the cloud today. Admins feel comfortable with this decision even though email may contain some of the most sensitive data an organization sends. Infrastructure as a Service has been the mainstay of hosting companies, co-los, and third-party data centers for decades. So how is today different?

    The difference is that administrators are placing more reliance on these cloud services and, in turn, losing some control. It is important to keep in mind that the premise of cloud computing is not flawed. By thoroughly vetting a vendor before conducting business with them, a cloud computing solution is oftentimes more efficient, cost-effective, and reliable than a traditional deployment. The right questions need to be asked, relative to your environment and risk threshold: How is the data protected from other customers? Who has access to the information? What does the architecture look like? Discussions that once happened internally now shift to the hosting provider.

    Administrators should start now to understand the risks cloud computing poses to their environment and what they can do to minimize them. At the same time, they should look for ways to leverage the benefits of SaaS and IaaS. In the long term, this is a trend that is here to stay and one we can all benefit from.