BLOG
  • Oct 14, 2021

    It used to be that employees stayed at a company for a long time. People did the same job for years, and you could go to someone in-house and ask where something was or how it was done. Good, bad, or indifferent, there was a 'way' things got done. Today, with the complexity of technology and the frequency of job changes, that is no longer something you can count on. This is one of the reasons documentation becomes so important for institutions. Consistent implementation and education are paramount to running an efficient, and properly protected, organization. Often when our team conducts a security assessment, the inconsistencies show through... when documentation is requested, there is none to fall back on. Lack of consistent implementation creates the cracks that attackers squeeze through.

    For some organizations, this is a shift in mindset: stop the practice of just doing, and instead pause to ask whether what you are doing is the right thing, and whether it is consistent and secure. Getting started requires your team to ask: What technologies are in place? How are they managed today? Is how they are managed appropriate? What configurations are necessary to secure the systems in place while still meeting business needs? Are there regulatory requirements to consider? What are the minimum acceptable security controls to put into place? Then the next step is putting pen to paper - or fingers to keyboard - and recording it all.

    The upfront 'pain' of documenting configurations and reaching consensus on the right controls is an investment of time, but the efficiencies in communication and the consistency in how technologies are managed and implemented can save you time in the long run. It can also save you money on extra staffing costs, make the education and onboarding of new IT hires more effective, and help your organization meet regulatory requirements.

    The tough part, of course, is really putting it all into action and recording the gaps and risks present in the process. Progress over time is the goal. With appropriate tracking of the gaps, there will be fewer crevasses for malicious actors to slide through. That is a topic for another post.

  • Sep 22, 2021

    Every day there are new cyber threats facing organizations. Often, when an article makes its way to mainstream media, there is a flurry of action and response. This is usually well intentioned, but many times it is a poor use of resources. So how do we determine what is real and what is hype?

    The first answer, the one nobody wants to hear, is that you need to know your threats. You need to think about them, conduct tabletop exercises around them, and understand them inside and out. What information do you stand to lose? What will the impact of a successful attack be? If your data is of low value, or a successful attack is unlikely to disrupt operations or customers, how important is it to protect against? Is the threat a realistic one within your market? Even ransomware looks different across industries – from the threat actor to the impact.

    This evaluation can be difficult for data owners and IT security teams to conduct, especially if they do not have full knowledge of the datasets, compliance requirements, or operational dependence on the data. Thinking about this proactively and having a plan to address threats can be the difference between falling victim to an attack and being able to stop (or prevent) it.

    Most attack types are not new. That is why the MITRE ATT&CK framework works so well; in fact, we can take a wider view and map ATT&CK to general military attack techniques going back as far as history will allow. So as the next big urgent risk plays out in the public forum, take a moment to consider how it applies to you or your organization. Is it something that needs to be urgently addressed, or does it slot in behind other evaluated and prioritized risks in the register?
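    To make that last step concrete, here is a minimal sketch of a risk register scored by likelihood times impact; the entries and scores are hypothetical and exist only to show how a new headline threat can be slotted in against risks you have already evaluated.

        # Minimal risk-register sketch: score = likelihood x impact (1-5 scales).
        # Risk names and scores below are hypothetical, for illustration only.
        from dataclasses import dataclass

        @dataclass
        class Risk:
            name: str
            likelihood: int  # 1 (rare) to 5 (almost certain)
            impact: int      # 1 (negligible) to 5 (severe)

            @property
            def score(self) -> int:
                return self.likelihood * self.impact

        register = [
            Risk("Ransomware via phishing", likelihood=4, impact=5),
            Risk("Unpatched VPN appliance exploit", likelihood=3, impact=4),
            Risk("This week's headline threat", likelihood=2, impact=3),
        ]

        # Highest score first: the new headline threat may land well down the list.
        for risk in sorted(register, key=lambda r: r.score, reverse=True):
            print(f"{risk.score:>2}  {risk.name}")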

  • Jul 30, 2021

    Making sure security controls are working as expected is just as important as making sure backups are operating properly or patching is applied correctly. It is not uncommon for controls to fail, creating significant exposure – an EDR agent that does not block an exploit, for example, or a guardrail that fails to stop a public S3 bucket. Very rarely do companies revisit a security control after it is initially set up unless a gap is pointed out in a security assessment.

    Simply having the security control in place isn’t enough; controls must be regularly tested and adjusted according to any changes in the risk profile or environment. Without regular check-ins, the practical effectiveness of a control declines over time. Without an initial verification that it is in place as expected, its effectiveness is capped well below its potential. Here are a few tips to get started:

    - Clearly outline the expected areas of applicability for a control. Define these and share with any relevant parties.

    - Align to a control framework such as ATT&CK or CSF, keeping things easily transferable to new risk evaluations.

    - Select an appropriate methodology, using configuration guides that fit best with the environment. CIS publishes benchmarks for many platforms, as do most vendors.

    - Perform annual security reviews to ensure controls are still relevant and working as expected.

    Keeping to this guidance will ensure longer-term usefulness and the best ROI on security control investments.
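    As one concrete example of verifying a control rather than assuming it works, the sketch below checks whether the S3 public-access guardrail mentioned earlier is actually applied. It assumes the boto3 SDK with working AWS credentials, and the bucket names are hypothetical placeholders.

        # Sketch: verify the "block public access" guardrail is actually applied.
        # Assumes boto3 is installed and AWS credentials are configured; the
        # bucket names are hypothetical.
        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")

        def public_access_blocked(bucket: str) -> bool:
            """Return True only if all four public-access-block settings are enabled."""
            try:
                cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
                return all(cfg.values())
            except ClientError:
                # No configuration at all means the guardrail is not applied.
                return False

        for bucket in ["example-app-data", "example-logs"]:
            status = "OK" if public_access_blocked(bucket) else "CONTROL GAP"
            print(f"{bucket}: {status}")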

  • Jun 15, 2021

    Attacks today seem to follow the same playbook: crack the perimeter, exploit bad internal fundamentals, deploy ransomware, profit. The playbook is the same each time because it continues to work. How do you stay ahead of it?

    Organizational response to external breaches is mostly reactive. Wait until a breach makes the news cycle, then dig into intel feeds hoping for a meaty bundle of IOCs to explore. Speculate for a few days over how sophisticated the attack was, only to find out that the root cause was phishing, a missing patch, or a bad password. Finally, search internally, apply the specific patch or block the phishing email subject line, and wait until the cycle repeats. These steps are important, but each time the prevention bulletins come out, they seem to rest more on good security practices than on any other element.

    What are good security practices? Good (tested) backups, two-factor authentication, patching, logging, monitoring, etc. Basically, all the areas covered in a solid cybersecurity plan. Unfortunately, the basics are not always easy, and they are certainly not the coolest technology everyone wants to play with. Instead of diligent patching, configuration management, and a solid monitoring program, many organizations rely on expensive EDR. Instead of good coding practices, organizations deploy application firewalls. Security has always been about layers, and as attacks have become more complex, layers are what is needed. However, those layers must be built on a foundation of good security practices.

    An organizational focus on cleaning things up and getting the basics right will stop many attacks up front. Assessing your current gaps and setting a plan to fill them will also pay dividends. If you are looking for a framework to follow, the NIST CSF is one most organizations can easily align to. Start small, be realistic, and keep re-assessing as you work toward a goal. Risks will shift, threats will shift, but if you have a good foundation, adjusting to meet them will not be difficult.
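    If it helps to make that assessment tangible, here is a minimal sketch of tracking a maturity score against the five CSF functions so each re-assessment has something concrete to compare against; the scores and target below are hypothetical placeholders.

        # Sketch: track a simple maturity score (0-4) per NIST CSF function and
        # flag the biggest gaps. All numbers are hypothetical placeholders.
        TARGET = 3  # the maturity level the organization is aiming for

        current = {
            "Identify": 2,
            "Protect": 3,
            "Detect": 1,
            "Respond": 2,
            "Recover": 1,
        }

        gaps = {fn: TARGET - score for fn, score in current.items() if score < TARGET}
        for fn, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{fn}: {gap} level(s) below target")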



  • May 16, 2021

    We work with organizations large and small. Something they all have in common? A need to protect themselves against cyber threats. Budgets and security layers come in all shapes and sizes, but no matter what, two items are needed at the base of any robust cybersecurity program: 1) Security Awareness and Education, and 2) IT Security Policies and Standards. We will go into detail in later posts, but here are some high-level thoughts on these areas of focus.

    Security Awareness and Education. Like it or not, attackers will continue to use email, social engineering, and phishing campaigns to target users. Ensuring your workforce is educated regularly on cybersecurity issues is a cornerstone of combating these threats – not to mention a standard regulatory requirement. Additional education on regulatory requirements, password management, data protection, and other key security components will help teach all users that cybersecurity is everyone's responsibility. When your employees know better, they will do better. Understanding the fundamentals of staying safe online and protecting organizational data will bolster your cybersecurity program from day one. And remember, education comes in many forms – emails, videos, meetings, formal training, etc. Make it part of the culture and it will pay off.

    Policies and Standards. This is a huge topic – the more policies and standards we write, the more pop up that need to be written. Security policies help you outline the beliefs of the organization. What are the tenets the organization will live by? How will data be protected? Will data be encrypted at rest and in transit? Will multi-factor authentication be required? How will data on mobile devices be protected? Will security training be required? If so, how often? On a tight budget, you can skim the Internet (safe sites, of course) for sample policies and use internal resources to customize them to your environment. With a larger budget, you can hire someone dedicated to creating and ratifying this documentation, or outsource its creation. No matter how you go about it, we urge you to get policies in place to lay out the framework for your organization's approach to cybersecurity. From there, get the details documented within your standards so the organization is clear on the methods you are using to secure your environment.

  • Apr 19, 2021

    As cybersecurity professionals, it is important to understand real-world threats facing your organization. Although there are plenty of tools and technologies to help identify thousands of possible vulnerabilities, threat hunting helps narrow these down into more realistic probabilities, and also helps you formulate appropriate countermeasures. 

    Threat hunting is a process that organizations of almost every size should engage in. Based on your organization's industry, it will help identify who the likely attackers are, their methods, and their motivations (e.g., information, money). This is a critical component of understanding the gaps your organization may have and of strategizing the best ways to secure it.

    At the end of the day, how can you stop what you do not understand? Understanding the common points of attack and the methods in play through routine threat hunting is an important part of a holistic cybersecurity program.

    Here are a few ways to get started:

    • Join groups specific to security in your industry – they have a wealth of knowledge.
    • Take a look at the MITRE ATT&CK framework to understand the most common methods of attack (see https://attack.mitre.org/) – a small scripting sketch follows this list.
    • Stay up to date with CISA alerts on cybersecurity attack methods, and keep an eye out in your own organization for signs of compromise (see https://www.cisa.gov/).
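    For the ATT&CK suggestion above, here is a small sketch that pulls MITRE's public enterprise-attack STIX bundle and lists technique names for a single tactic. The URL points at MITRE's public CTI repository on GitHub; the requests library and the chosen tactic are illustrative assumptions.

        # Sketch: list ATT&CK technique names for one tactic using MITRE's
        # public STIX bundle. Assumes the requests library is installed; the
        # tactic choice is illustrative.
        import requests

        URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
               "enterprise-attack/enterprise-attack.json")
        TACTIC = "lateral-movement"  # kill-chain phase name in the bundle

        bundle = requests.get(URL, timeout=60).json()

        for obj in bundle["objects"]:
            if obj.get("type") != "attack-pattern" or obj.get("revoked"):
                continue
            phases = [p["phase_name"] for p in obj.get("kill_chain_phases", [])]
            if TACTIC in phases:
                print(obj["name"])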


  • Mar 11, 2021

    We perform penetration testing on organizations that often use the latest and greatest tools in security defense, yet our testing sometimes goes unseen. The key to detection and defense? Security layers. If you assume your primary defense does not work, what is its backup? How do you spot activity across the phases of an attack if your primary tooling is inadequate? Layers!

    Let us focus on the period after initial access, when threats attempt to expand laterally within your network. Expansion applies both to ransomware and to individual attackers; both want to make the most of their efforts and obtain the greatest level of success, which often involves searching for additional targets.

    There are several ways to look for this type of movement, such as reviewing network logs or looking for unexpected connection attempts to different devices. Unfortunately, while academically possible, the reality is that not every organization has these capabilities or resources. Watching lateral movement inside a subnet may be impossible without specific technologies to capture the traffic. Similarly, reviewing unexpected connections may not be possible without access to all logs and platforms on the network. Teams are often time-crunched and need to focus first on alerts with low false-positive rates.

    Another way to detect lateral movement within your network is with honeypot technology. This type of technology is often deployed in a network segment of interest and set up to look like other targets in the network. The difference with the honeypot, however, is that it is not actually a valid network asset, so no valid connections should ever be made to it. Therefore, when an alert is generated, security teams can react with high confidence that it is not legitimate behavior. CTInfoSec's patented NARC® Deception Network Technology successfully detects threats in this manner, including attacks such as ransomware during the expansion phase.
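    To make the concept concrete (this is a generic illustration, not a description of how NARC works internally), here is a minimal sketch of a honeypot-style listener: it accepts TCP connections on a port nothing legitimate should ever touch and treats every attempt as a high-confidence alert.

        # Minimal honeypot-style listener sketch: any connection to this port is
        # suspect, because no legitimate system should ever talk to it.
        # Generic illustration only; the port choice is arbitrary.
        import socket
        from datetime import datetime, timezone

        LISTEN_PORT = 4444  # arbitrary; pick something plausible for your segment

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", LISTEN_PORT))
            srv.listen()
            while True:
                conn, (src_ip, src_port) = srv.accept()
                ts = datetime.now(timezone.utc).isoformat()
                # In practice this would feed a SIEM or paging system, not stdout.
                print(f"[{ts}] ALERT: unexpected connection from {src_ip}:{src_port}")
                conn.close()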

    The ability to detect lateral movement with a low false-positive rate is a very important defensive layer in any network security program. Attackers do not know up front which devices are real and which are not, which is why a honeypot is so successful at what it does. It sits and listens, and hopefully never reports an alert. When it does, you know it is time to react quickly.


  • Feb 11, 2021

    We follow a pretty standard pattern for gaining access within a Network Penetration Test (aka pentest) – recon, exploit, escalate, expand, execute. First, we look at the environment for any exposed information or misconfigured systems. Next, the goal is to gain control of or access to a resource using known methods or exploits. From there, we shoot to obtain as high a privilege level as possible. Once we have the appropriate level of access, we expand laterally, looking for targets or data. When we have achieved a level of control equal to our objectives, we execute on them. This is a straightforward high-level process with a not-so-straightforward multitude of steps in between. The uniqueness of the path between each step is what can make detection difficult for security teams and SOCs watching the wheel.

    Threat groups and malware campaigns use approaches similar to the above. Understanding that the attack and infiltration methods of a pentester mirror those of 'real world' attackers will give your organization an advantage when identifying security gaps.

    There are several resources available that cover detections at various points in an attack. For example, the MITRE ATT&CK framework is a strong reference when looking at the techniques and methodologies used in successful network attacks. There are others, but we like this one. The framework gives organizations a place to begin when thinking about what gaps may exist within their environment in relation to real-world attacks. It is a good exercise for every security team to look at the framework and determine which security layers the organization has in place to protect against the various attack methodologies. From there, you can create a plan to address the gaps.
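    One simple way to run that exercise is to map each ATT&CK tactic to the layers you believe cover it and flag the empty rows. The sketch below shows the shape of it; the tactic subset and control names are hypothetical.

        # Sketch: map ATT&CK tactics to the security layers believed to cover them
        # and flag tactics with no coverage. Control names are hypothetical.
        coverage = {
            "Reconnaissance": ["external attack-surface scans"],
            "Initial Access": ["email filtering", "MFA"],
            "Privilege Escalation": ["EDR", "patch management"],
            "Lateral Movement": [],          # a gap worth planning around
            "Exfiltration": ["egress filtering"],
        }

        for tactic, controls in coverage.items():
            if controls:
                print(f"{tactic}: covered by {', '.join(controls)}")
            else:
                print(f"{tactic}: NO COVERAGE - add to the remediation plan")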


  • Jan 11, 2021

    If you do not have Multi-Factor Authentication (MFA) in place yet, get a move on! And if by some chance you have selected this post to read and do not understand MFA, let us provide a brief definition.

    MFA – sometimes known as two-factor authentication (2FA) – requires users to sign in with two of the following three credential types: something you know (e.g., a password), something you have (e.g., an application on your phone), or something you are (e.g., a fingerprint).
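    As a small illustration of the 'something you have' factor, the sketch below uses the pyotp library (an assumption on our part; any TOTP implementation would do) to show a login that still fails when the password is correct but the one-time code is not.

        # Sketch: a login that requires both a password check and a TOTP code.
        # Assumes the pyotp library (pip install pyotp); secret and password
        # handling are simplified for illustration.
        import pyotp

        totp_secret = pyotp.random_base32()   # provisioned to the user's phone app
        totp = pyotp.TOTP(totp_secret)

        def login(password_ok: bool, otp_code: str) -> bool:
            # A harvested password alone is not enough: the TOTP check must also pass.
            return password_ok and totp.verify(otp_code)

        print(login(password_ok=True, otp_code="000000"))    # almost certainly False
        print(login(password_ok=True, otp_code=totp.now()))  # True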

    While many organizations have made the shift to MFA, there are still some struggling to get this technology in place – due to timelines, budgets, or just an overall understanding of its benefit. Today, MFA is no longer a nice-to-have; it has become a requirement, a layer that is a must-have for all institutions. Phishing campaigns have made it easy for attackers to harvest credentials from unsuspecting employees and gain unauthorized access to company networks. If MFA is in place and enforced, an attacker who tries to leverage a harvested password to access company systems will be stopped, lacking the second factor needed to complete the login.

    MFA is not perfect, but when organizations finally make the move, it brings some peace of mind. The early sentiment was that MFA would render all phishing and credential stuffing irrelevant; as we now know, it did not, but it nonetheless stops many attacks short of full success and unauthorized access.

    As an aside, MFA is now available on many consumer services and we urge all users to enable this throughout their personal accounts as well (e.g., Gmail, Facebook).

  • May 27, 2016
    Threat prevention must always begin with the ability to detect threats effectively. Detection must be sweeping and comprehensive if an organization plans to truly understand its risk. As the model for detection continues to shift toward 'threat intelligence', CISOs and security teams must understand where these programs excel and where they do not. The issue many groups encounter is not a lack of effort or investment; instead, the fault lies with vendors and their lack of transparency. Threat management solutions such as managed SOCs often hold back intelligence as an add-on, and vendors rarely take the time to explain what is detected and what is not.

    There is also a gap in the ability to detect host-to-host incidents from an internal threat actor. This means that an attacker who gets inside the network can pivot and attack freely, going mostly undetected. Why is this? There is nothing in place to detect host-to-host attacks unless they cross a gateway that is logging. Internal LAN to internal LAN is most commonly a flat design and therefore does not pass through a device such as a firewall. The same goes for trusted zones and many other design nomenclatures; the trusted traffic often goes unanalyzed.

    Of course, there are solutions for this – HIDS/HIPS, internal honeypots, port-mirroring analysis – but the unfortunate fact is that these are not often in place, and when they are, the data is frequently not collected or properly investigated. This is an area organizations need to be paying attention to.
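    For teams that do have some flow or connection logs, even a crude pass over them can surface unexpected host-to-host traffic. The sketch below flags workstation-to-workstation connections on ports commonly used for lateral movement; the CSV layout, subnet, and port list are hypothetical and would need to match your own data.

        # Sketch: flag workstation-to-workstation connections on ports commonly
        # used for lateral movement. The CSV layout, subnet, and port list are
        # hypothetical; adapt to whatever flow/connection logs you actually have.
        import csv
        import ipaddress

        WORKSTATIONS = ipaddress.ip_network("10.20.0.0/16")
        SUSPECT_PORTS = {135, 139, 445, 3389, 5985}  # RPC, SMB, RDP, WinRM

        with open("flows.csv", newline="") as fh:   # columns: src_ip,dst_ip,dst_port
            for row in csv.DictReader(fh):
                src = ipaddress.ip_address(row["src_ip"])
                dst = ipaddress.ip_address(row["dst_ip"])
                if src in WORKSTATIONS and dst in WORKSTATIONS \
                        and int(row["dst_port"]) in SUSPECT_PORTS:
                    print(f"Possible lateral movement: {src} -> {dst}:{row['dst_port']}")
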
  • Nov 13, 2015
    There has long been a sliding rule that says the more secure a solution is, the less usable it becomes. With a boom in the Internet of Things, or "IoT", many hackers playing with rapid prototyping kits, hacker spaces in every major city, and 3D printing becoming a mainstay in most public areas, we need to pause and understand where that slide rule currently sits. As more and more elements of our daily lives are driven online, and the cost of convenience is driven down, we expose ourselves to risks not previously considered. Never before has it been possible to unlock your front door, preheat an oven, or surveil a home from a remote location. Never before has your TV put your privacy at risk.

    While the consumer elements can provide additional creature comforts, they are also drivers towards what we want at work. Why not push for tools in the workplace which make life easier?

    The reason is the sliding rule. When we choose to open elements of our home to the convenience of a cloud-based solution, the risk is primarily compartmentalized into the things within our control, and limited to our own personal sphere of potential loss. It is a fixed risk which affects mainly the individual consumer. When a decision is made to integrate similar technologies within a company, the result is a greater exposure of risk—at the corporate scale, and in many cases, the risk extends to potentially millions of customers as well.

    A troubling trend is the push to implement SCADA and Building Management System (BMS) solutions on bridged IP networks. Typically these controllers have been closed-loop or air-gapped designs, secured through isolation. They run older software with limited upgrade paths and are not designed to stand up to the rigors of an untrusted environment. By marrying the ability to turn on a porch light for a house with the perceived need to control critical infrastructure from the comfort of an iPad on the back deck, our core utilities are more at risk than ever.

    As building engineers see the convenience of solutions offered through hardware stores, and push for these same integrations on a larger scale, the security community must be mindful of where the slide sits and assess the risks proportionally. It won’t be long before even the most mundane tasks are ported to the smart phone, but at what price to protection? There is an absolute benefit to access, immediate control, and newly discovered data points for analysis. These benefits need to be weighed against the risks of outages, loss of control, or even deliberate acts of malicious nature.
  • Oct 22, 2014
    With all the negative publicity this year surrounding major-league retailers and their staggering credit card breaches, it’s no surprise the question of PCI compliance has moved to the front of many a CISO’s mind. And while the PCI Data Security Standard (PCI DSS) has steadily evolved to meet emerging security concerns (it is currently in its 3rd major revision), many security professionals either eschew the standard altogether or have misconceptions about some of its key tenets. Here’s a quick look at three top PCI misconceptions:
    #1 -- Since I don't store credit card information, I don't have to worry about being PCI compliant.
    The PCI DSS does not just apply to the storage of credit card data but also to the handling of data while it is processed or transmitted over networks, phone lines, faxes, etc. While not storing credit card data does eliminate some compliance requirements, the majority of the controls dictated by the DSS remain in effect. The only way to avoid PCI compliance is to transfer the risk entirely to someone else, such as with PayPal's Website Payments Standard service, where customers interact with the PayPal software directly and credit card information never traverses your own servers.

    #2 -- I don't process a large number of credit cards (e.g., too small, only level 3, only level 4), so I don't have to be compliant
    While merchants processing fewer than 20,000 total transactions a year are generally not required to seek compliance validation, the obligation for PCI compliance is still there, as are the consequences if the data you store or process is compromised. For merchants processing between 20,000 and 1 million total transactions -- a large majority of small businesses -- the requirement to fill out a self-assessment questionnaire, or "SAQ", means that many if not all of the full PCI DSS requirements must be met and attested to.


    #3 -- We’ve just made it through a PCI audit and received our ROC (Report on Compliance)
  • May 02, 2014
    A great many organizations implement various layers and tools within their security management program – IPS, malware intelligence, spam filtering, firewall logs, Active Directory events, and more – that feed into centralized event correlation engines. This is a great start to a security management program. But how can a security team with limited resources and time manage to prioritize streams consisting of thousands of events a second? Here are two basic strategies that can help:
    1)    Implement an asset value filter that prioritizes critical assets over less critical assets. For example, a security incident involving a server should take priority over one involving a workstation, and a server containing highly sensitive data should take priority over one without sensitive data. An upfront analysis of these assets must be performed and the prioritization implemented.
    2)    Implement a filter based upon available time. All too often incidents pile up and, given the large number of elements to tackle, items do not get addressed in a timely manner. To limit the incidents created, a series of threshold changes is needed. For example, if a security engineer has 4 hours a day allotted to investigating events, and each event takes 30 minutes to investigate, the prioritized incidents should be capped at approximately 8 per day. A small sketch combining both strategies follows this list.
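    Assuming hypothetical asset values, event counts, and the 4-hour time budget above, such a combined filter might look like this:

        # Sketch: prioritize events by asset criticality, then cap the number of
        # incidents opened per day based on available analyst time. All numbers
        # and asset names are hypothetical.
        MINUTES_AVAILABLE = 4 * 60        # analyst time budget per day
        MINUTES_PER_INCIDENT = 30
        DAILY_CAP = MINUTES_AVAILABLE // MINUTES_PER_INCIDENT   # 8 incidents/day

        ASSET_VALUE = {"db-server-sensitive": 10, "app-server": 7, "workstation": 3}

        events = [
            {"asset": "workstation", "severity": 5},
            {"asset": "db-server-sensitive", "severity": 4},
            {"asset": "app-server", "severity": 3},
            # ...thousands more per day in practice
        ]

        def priority(event: dict) -> int:
            return ASSET_VALUE.get(event["asset"], 1) * event["severity"]

        incidents = sorted(events, key=priority, reverse=True)[:DAILY_CAP]
        for inc in incidents:
            print(f"open incident: {inc['asset']} (priority {priority(inc)})")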

    This is a departure from many current security management program implementations where criticality and workload are not approached up front. It is a more realistic implementation that can help analysts to cover the critical events quickly vs. wasting precious time sifting through events of less importance. This is also a great method to help management understand workload, and scale event analysis with additional resources.


  • Feb 06, 2013
    The number of articles on cloud computing security risks is growing daily. However, is cloud computing any different than traditional architectures in terms of risk exposure?

    First, let’s clarify the definition of cloud computing to mean a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Let’s also note that this offering comes in several flavors, two of the fastest growing being SaaS and IaaS.

    Taking away the dynamic scalability, neither of these is a new offering. Spam filtering (SaaS) has been accepted as a commoditized activity for some time now and is almost always done in the cloud today. Admins feel comfortable with this decision even though email may contain some of the most sensitive data sent within an organization. Infrastructure as a Service (IaaS) has been the mainstay of hosting companies, co-los, and third-party data centers for decades. So how is today different?

    The difference is that administrators are putting more reliance on the cloud service and, in turn, losing some control. It is important to keep in mind that the premise of cloud computing is not flawed. By thoroughly vetting a vendor before conducting business with them, a cloud computing solution is oftentimes more efficient, cost effective, and reliable than a traditional deployment. The right questions need to be asked as they relate to your environment and risk threshold: How is the data protected from other customers? Who has access to the information? What does the architecture look like? Discussions that would have happened internally now shift to the hosting provider.

    Administrators should take care now to begin understanding the risks posed to their environment by cloud computing and what they can do to minimize them. At the same time they should start looking for ways to leverage the benefits of SaaS and IaaS. In the long term, this is a trend that is here to stay and one that we can all benefit from.