Friday, May 29, 2020

Content Conundrums for the SOC: Part II

If you survived the first SOC content conundrum, then your organization is well on its way toward building new and exciting content without getting stuck on the basic questions that arise when new threats emerge, such as whether a threat is even relevant to your organization. Instead, your organization should be able to quickly determine threat applicability and move forward with a well-defined, streamlined content development process built on build, attack, and defend methodologies. If you aren't at this point yet, you can refer back to my previous post, part 1 of this series, here.

With this stream of new, applicable content flowing into your SIEM for consumption by your Incident Response team, you now have several new detection queries actively looking for threats within your environment. But how are you going to manage this growing content repository when each piece of code was developed individually with no sense of reusability? And how are you going to ensure that content developed in the past is still applicable now that time has passed? Herein lie the next content conundrums for the SOC.

Let’s start with a few issues that need to be addressed:
1.     How is your organization going to handle a situation where a critical component of these detection queries changes?
2.     How is your organization going to ensure that logic blocks within your queries are repeatable and can continue to be refined over time?
3.     How is your organization going to validate that content previously produced remains applicable and working correctly over time?

The idea that your organization just produced a large quantity of content, at a painstaking cost in time and resources, only to have it rendered fully useless by a change in a data storage schema, or partially useless to newly onboarded business units whose data needs to be incorporated into the logic, is a nightmare. No one wants to go through each individual piece of content and update components one by one, especially when there is so much room for human error. So how do you make sure that manually revisiting all of your past content never becomes a reality? Easy: modularize your content to its fullest extent. This can be done via functions, macros, or whatever reusable logic grouping your organization's SIEM supports. The idea is to modularize every piece of logic that can be reused by another piece of content; even the way you call a particular data set should be modularized, to account for data storage schema changes and new data storage objects created for newly onboarded business units.

Conveniently, this idea resolves the second issue as well. With complex logic blocks modularized into these functions or macros, any statistical aggregation, machine learning algorithm, or other shared logic can be continuously refined, and all of your older content automatically benefits from each update. Sounds great not having to go back and update everything one by one every time you improve a particular statistical analysis, or even something as simple as a data call, right? It is, and it will save your analysts an incredible amount of time while reassuring them that everything is running on the latest and greatest logic. With that, however, we have also introduced the third issue stated above: how exactly do you know that those great new updates to those code blocks didn't break anything?
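To make this concrete, here is a minimal sketch, written in Python rather than any particular SIEM's query language, of what modularized content could look like: a single data-source helper and a shared aggregation block are reused by every detection, so a schema change or a newly onboarded business unit only requires updating one function. All names, field names, and query syntax are illustrative assumptions, not tied to a specific product.

```python
# Minimal sketch: modular query fragments shared across detections.
# Every name here (data_source, rare_process_aggregation, index layout)
# is illustrative; adapt to your SIEM's macro/function mechanism.

def data_source(sourcetype: str, business_units: list[str]) -> str:
    """Single place that knows where the data lives and how it is addressed.
    If the storage schema changes or a new business unit is onboarded,
    only this function needs to change."""
    indexes = " OR ".join(f"index={bu}_security" for bu in business_units)
    return f"({indexes}) sourcetype={sourcetype}"

def rare_process_aggregation(field: str = "process_name") -> str:
    """Reusable logic block, e.g. a statistical aggregation shared by many detections."""
    return f" | stats count by host, {field} | where count < 5"

def suspicious_powershell(business_units: list[str]) -> str:
    """Individual detections become thin compositions of the shared blocks above."""
    return (
        data_source("wineventlog:process", business_units)
        + ' process_name="powershell.exe"'
        + rare_process_aggregation()
    )

print(suspicious_powershell(["corp", "newly_onboarded_bu"]))
```

Because each detection is only a composition of shared blocks, improving the aggregation logic or the data call improves every detection that uses it in one edit.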

In a worst-case scenario, those modularized logic block updates would force you to go back and do exactly what you have been trying to remove from your workload, just to have peace of mind that everything still functions. You saved all of this time by ensuring every piece of content runs on your most up-to-date logic, only to add that time right back due to a lack of validation and regression testing. Here, the best method is to automate regression testing of your rules whenever updates are made. This can be done via custom scripts or via playbooks in a SOAR platform, as long as the SIEM you are testing against supports some sort of API call to run remote searches. Of course, there are a couple of things you will need to incorporate into your B.A.D. processes referenced in Part I of this series: have your Red/Attack Team record the time frames of their exploit simulations so that, in conjunction with your detection queries, those time frames can be passed into remote searches over windows where you know results must appear. If your automation comes back with certain detection queries returning null results after a modularized logic block update, you know something went wrong. At the same time, you can use this same automation to check whether certain detection queries are no longer applicable by periodically reviewing the outputs and metadata associated with every search run in your automated regression tests. This helps keep system utilization as low as possible, making way for the detection content that is still applicable to your environment.
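As an illustration, here is a minimal sketch of what that regression-testing automation could look like. The API endpoint, payload shape, and detection registry file are assumptions; substitute your SIEM's actual remote-search API or an equivalent SOAR playbook action.

```python
# Minimal sketch of automated regression testing after a logic-block update.
# SIEM_SEARCH_URL, the request payload, and detections.json are hypothetical;
# replace them with your SIEM's real remote-search interface.
import json
import requests

SIEM_SEARCH_URL = "https://siem.example.com/api/search"  # hypothetical endpoint
API_TOKEN = "..."  # retrieve from a secrets store in practice

def run_remote_search(query: str, earliest: str, latest: str) -> list:
    """Run a detection query over the time window recorded by the Red/Attack Team."""
    resp = requests.post(
        SIEM_SEARCH_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"query": query, "earliest": earliest, "latest": latest},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json().get("results", [])

def regression_test(registry_path: str = "detections.json") -> None:
    # Each entry pairs a detection query with the simulation window in which
    # the Red Team confirmed exploit activity was generated.
    with open(registry_path) as f:
        detections = json.load(f)
    for det in detections:
        results = run_remote_search(det["query"], det["sim_start"], det["sim_end"])
        if not results:
            print(f"[FAIL] {det['name']}: no results after logic update")
        else:
            print(f"[PASS] {det['name']}: {len(results)} results")

if __name__ == "__main__":
    regression_test()
```

The same loop can log result counts and search metadata over time, which is what feeds the periodic applicability review described above.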

We hope these insights are helping your organization tackle its SOC content conundrums head on. If there are other difficult problems your organization is facing today, feel free to reach out; we would love to hear from you. In the meantime, stay tuned for Content Conundrums for the SOC: Part III.

At Anvilogic, we take SOC content development seriously, putting every piece of content through several rigorous cycles to ensure it is as robust, efficient, and applicable as possible, and we hope to help your SOC's detection content reach the next level. Let us know how we can help you or visit us at www.anvilogic.com.

Friday, May 22, 2020

Becoming a Mature SOC: Part 1 – Why aren’t we flattening the curve?

If there is one thing everybody can agree on, it's that cybersecurity breaches have risen significantly over the last 15 years, and so have the monetary and reputational impacts that follow.  One interesting trend that we have noticed over this time period is that there are 5 major factors that continue to increase at a rapid pace:
  1. The number of security breaches reported is on the rise each year, as is the dwell time to detect and respond
  2. Enterprise financial and reputational impacts of breaches continue to rise
  3. Cybersecurity technology market has grown 35X in the last 15 years
  4. Enterprise spending in the security market continues to rise, currently exceeding $115 billion
  5. The volume of data created within organizations has increased by 700% over the last 10 years
What is interesting to note here is that the security market does not appear to have properly "flattened the curve" on improving enterprises' ability to adequately prevent, detect, and respond to cyber threats.  There can be many reasons behind why this is, but regardless of the reasons, we know companies are spending billions of dollars on security and we are still seeing record increases in reported breaches.

Are enterprises receiving a return on their security investments, or are they experiencing a false sense of security?  Why haven’t we flattened the curve?

Before we get into the reasons why, let’s just look at some of the industry numbers that justify the basis for these thoughts.


Breaches Reported:
  • According to Verizon’s 2020 Breach report, they investigated 3,950 confirmed breaches across multiple industries, up 87% from their 2019 report of 2,103.
  • According to Tech Republic, data breaches increased by 54% in 2019.
  • According to IBM, breaches caused by malicious or criminal attacks are a growing threat and have increased by 21% over the past six years.

Enterprise Impacts:
  • According to IBM research, the cost of a data breach has risen 12% over the last five years, averaging close to $4 million per breach.
  • According to IBM, the amount of time it took for organizations to detect a security breach (dwell time) was 279 days in 2019, up 5% from 266 days in 2018.

Enterprise Spending:
  • According to Gartner, worldwide spending on information security products and services exceeded $114 billion in 2018, increasing 12% from the previous year.  They forecast the market to grow to $170 billion by 2022.  

Cybersecurity Market:
  • According to Wired.com, the cybersecurity market has grown 35X over the last 13 years, and they anticipate a 12-15 percent year-over-year cybersecurity market growth through 2021.    

Data Volumes:
  • According to Statista.com, the volume of data/information created worldwide from 2010 to 2020 has increased by 700%.

Let’s Flatten the Curve

Based on all this information, let’s make this basic assumption – organizations are spending billions on cybersecurity technology and prevention, but still struggle to properly assess, quantify and measure their overall cybersecurity maturity/risk posture.  As a result, even after all this money is spent, they still have difficulty quantifying the value this all provides to the business and whether or not they are better off today than they were yesterday.   

As you begin to think about how to improve your security maturity posture, we encourage you to take a step back and think about ways you can improve using the technology, data, and resources you already have.  Instead of chasing the next shiny security tool, focus on the improvements you can make using your existing technology, because sometimes you may already have the extra visibility you need or want.

The successful model for running an effective security organization rests on your ability to properly prevent, detect, analyze, and contain/mitigate cyber threats.

To do this effectively, you need to do two things very well: intelligence and data.  Intelligence should be driven by threat research and an understanding of the core business processes that those threats can impact.  The data coming from your technology, the security event logs and application feeds, needs to be structured, enriched, concise, and readily available to your security teams so they can use it to build detections and effectively respond to incidents.

Over the next couple of weeks, we will be posting on how Anvilogic can help you flatten your curve using the technology, data, and resources you already have. 

Think about your maturity…


Intelligence
  • Have you prioritized the threats you need to prevent?
  • Do you have an understanding of the core infrastructure you need to protect?
  • Do you understand the critical business processes that use this core infrastructure?  Do you know how the business actually works/operates?
Data
  • Do you have the data sources needed to detect those threats and protect that core infrastructure and business process? 
  • Is that data structured, enriched, normalized and usable? 
Content
  • Do you have the detection content necessary to adequately respond to those threats in that core infrastructure and business process? 
  • Does that content have a combination of threat intelligence and business intelligence?
  • Do you actually correlate activity across different data domains?
  • What is your detection efficacy?  
  • Is your SOC/SOAR looking at the right activity?
Productivity
  • Do you have the technology and controls necessary to mitigate and prevent those threats in your core infrastructure?
  • What is your response time?
  • Does your SOC have the access and capabilities necessary to respond to threats as they are occurring?
  • Do you perform proper testing and health monitoring of all of these security controls to ensure everything is operating as expected? 

At Anvilogic, our core mission is helping you improve your SOC’s overall maturity.  Let us know how we can help you.

Sunday, May 17, 2020

From the SOC Frontlines: Post-Breach Detection Content In the SOC

As we work with our enterprise customers to help them develop new detection content and improve existing content for post-breach detection, we are observing a few common patterns driving the development of new detection content in the SOC.

I have a well-developed security program. Do I need detection content in the SOC?


Most enterprises have an anti-malware detection program spanning email, endpoint, and, in some cases, network. Most attacks targeting endpoints are delivered over email, and therefore the Secure Email Gateway (SEG) has the first crack at detection.  SEGs use a mix of static signature-based analysis and dynamic analysis using sandboxes. Next up is the endpoint protection platform (EPP), which can observe the behavior of the payload, perform more detailed analysis, and gets multiple cracks at the payload as it moves through its kill chain cycle. Finally, network security has another crack at detecting malicious traffic, including Command and Control.  This set of technologies works really well at blocking a significant volume of incoming attacks, particularly known attacks.

But attacks get through; we all know that. Why? For a number of reasons:
  1. Prevention products are very sensitive to false positives and therefore may let marginally suspicious behavior through.
  2. There is a lag between when an exploit is delivered and when it becomes known and a detection is developed and rolled out by security product vendors.
  3. A new generation of malware that uses Living off the Land (LotL) techniques tends to appear similar to legitimate software, resulting in higher false positives if the product is aggressive, or false negatives if it is not.
  4. Supply chain attacks: a trusted vendor of yours is compromised and a safe-listed executable acts maliciously.

The SOC As The Last Layer Of Defense

The purpose of the SOC is to catch the attacks that get through these layers of protection and become breaches. How are SOCs prioritizing their efforts toward detecting breach activity? These are the drivers we have observed amongst our customers for developing detection use cases in the SOC.

Detection Use Cases Identified by Red Teams 
Mature SOCs have red teams that are constantly testing their protections and trying to break through them. They are often very specific and precise sources for identifying which behaviors are getting through the layers of protection.  Often the TTPs are described in terms of the MITRE ATT&CK framework, along with the specific procedures (the P in TTP) that got through.  The SOC in turn will perform threat hunting to verify whether an actual adversary got through using these procedures, and develop detection content based on the log sources collected in their SIEM. Making sure you have the right program in place for collecting, parsing, and normalizing logs is critical (see: https://anvilogic.blogspot.com/2020/03/getting-back-to-basics.html). Further, collaboration between the red team/threat intel team and the content team is critical for this purpose, as described here: https://anvilogic.blogspot.com/2020/02/patterns-of-collaboration-in-enterprise.html
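As a rough illustration, a sketch like the following could track how a red-team finding becomes a detection use case, including a check that the required log sources are actually onboarded before content is built. The finding structure, log-source names, and hunt query below are all hypothetical.

```python
# Minimal sketch: turning a red-team finding into a tracked detection use case.
# The finding shape, log-source names, and query string are illustrative assumptions.
AVAILABLE_LOG_SOURCES = {"wineventlog:process", "sysmon:process_create", "proxy"}

red_team_finding = {
    "technique": "T1059.001",  # ATT&CK: Command and Scripting Interpreter: PowerShell
    "procedure": "encoded command launched from a macro-enabled document",
    "required_sources": {"sysmon:process_create"},
    "hunt_query": (
        'sourcetype=sysmon:process_create parent_image="*winword.exe" '
        'image="*powershell.exe" command_line="*-enc*"'
    ),
}

def triage_finding(finding: dict) -> str:
    """Decide whether the SOC can hunt this procedure with current log coverage."""
    missing = finding["required_sources"] - AVAILABLE_LOG_SOURCES
    if missing:
        return f"{finding['technique']}: onboard {missing} before building content"
    return f"{finding['technique']}: ready to hunt -> {finding['hunt_query']}"

print(triage_finding(red_team_finding))
```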

Detection of Newly Emerging Exploits and Adversaries
Another source of SOC detection use cases is threat research indicating new adversary tactics targeting an industry vertical, a geography, or the specific company. Security product vendors often have a lag while they identify the exploit, develop a detection, verify it for FPs and FNs, and roll out the detection in their products.  Mature SOCs choose to be proactive and roll out detection content for these newly emerging exploits themselves, using the log feeds they are already collecting in their SIEM. We have helped our customers' SOCs rapidly develop detections for targeted attacks involving phishing and business email compromise (BEC) exploits.

Detection of Malicious Usage of Existing System Tools For LotL attacks
A third set of use cases is around technologies and behaviors that are known to be used by adversaries, but also by legitimate software. For example, in recent years we have frequently observed fileless attacks using existing system tools running on endpoints such as WMI, VBScript, and PowerShell.  Endpoint protection products struggle to distinguish between legitimate use and malicious exploits of these technologies, so SOCs develop and deploy detection content for malicious behaviors that use these Living off the Land techniques.  For example, detection content for suspicious PowerShell behaviors is a common category of use cases that SOCs deploy. These detections also have the problem of being noisy; mature SOCs are finding ways to generate high-fidelity detections by combining an array of these low-fidelity detections in specific sequences.
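One hedged sketch of that sequencing approach: treat each low-fidelity detection as a tagged event and only alert when a specific ordered combination appears on the same host within a short window. The tags, sequence, and window below are illustrative assumptions, not a prescription.

```python
# Minimal sketch of raising fidelity by sequencing low-fidelity LotL detections.
# Event fields, tag names, and the 30-minute window are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

# Each low-fidelity detection tags an event, e.g.:
# {"host": "wks-042", "time": <datetime>, "tag": "encoded_powershell"}
SUSPICIOUS_SEQUENCE = ["office_spawned_shell", "encoded_powershell", "outbound_beacon"]
WINDOW = timedelta(minutes=30)

def correlate(events: list) -> list:
    """Return hosts where the low-fidelity tags occur in order within the window."""
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_host[e["host"]].append(e)
    alerts = []
    for host, host_events in by_host.items():
        idx, start = 0, None
        for e in host_events:
            if e["tag"] == SUSPICIOUS_SEQUENCE[idx]:
                start = start or e["time"]
                if e["time"] - start <= WINDOW:
                    idx += 1
                    if idx == len(SUSPICIOUS_SEQUENCE):
                        alerts.append(host)
                        break
                else:
                    # window expired; start looking for a fresh sequence (sketch-level reset)
                    idx, start = 0, None
        # hosts matching only one or two tags stay below the alert threshold
    return alerts
```

Individually, each tag would be far too noisy to page an analyst; it is the ordered combination on a single host that carries the fidelity.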

Are there other drivers in your SOC for creating detection content for post-breach activity? We would like to know!

At Anvilogic, we are helping SOCs develop content for all of the above use cases using our SOC Content Platform (https://anvilogic.blogspot.com/2020/04/the-future-state-of-siems-part-1-what.html). Let us know if we can help you.