Monday, December 7, 2020

The Pervasive Problem of Inferior Detection in your SOC!

Enterprise security operations centers (SOCs) have existed for the sole purpose of detecting and responding to threats to an enterprise – external or insider. In the cybersecurity realm, the protectors have never been ahead of the adversaries; more often than not, they have fallen significantly behind and have struggled to recover from cyber attacks. The challenge centers on the efficacy and relevance of detection algorithms and methodologies. Everything is dependent on detection, and yet many enterprises almost exclusively continue to treat the symptoms – e.g., alert volume/noise, triage/response automation etc. – without addressing, let alone solving, the core ailment: inferior detection.

For data breaches and cybersecurity threats, SOC processes haven’t changed much in a decade. Logging common data sources (e.g., raw events like domain controller logs and processed events like firewall alerts) into a SIEM (e.g., Splunk) is typically the starting point. Then come threat detection rules – typically hundreds of them – written on the SIEM by threat-intel and sometimes IR analysts to produce somewhat noteworthy ‘incidents’ to investigate, followed by ad-hoc threat hunting by professionals using tools, human observation, scripts etc. to find other noteworthy ‘incidents’ that the mundane rules in a SIEM may have missed or not raised as an ‘incident’.

Finally, there is incident response – a process to investigate a fraction of the ‘incidents’ and respond appropriately, and, if lucky, perhaps even add a bit of automation. From a security leader and CISO’s perspective, reporting is important, but it’s almost completely a manual collection of non-standard metrics, rarely in repeatable form, for the purpose of visually understanding the level of security preparedness of an organization.




While detection tools have seen modest improvements, those gains are offset by the fast-evolving complexity of the threat landscape and the increased volume and frequency of both new and recurring threats. According to the 2020 Verizon Breach Report, there has been an 87% increase in enterprise attacks year over year. This percentage has been similarly high for a number of years and shows no signs of slowing down.

Solving the detection puzzle

One significant challenge SOC professionals face will not, and likely cannot, change: there will never be enough proactive detection or preventative content to mitigate security threats before they occur. Put more simply, there will never be a world in which the protectors are ahead of the aggressors – the best we can manage is early detection and mitigation. The average time to identify and contain a breach is a whopping 280 days, according to IBM’s Cost of a Data Breach Report, 2020.

But the bigger problem is actually a confluence of separate issues that face all security professionals. Insufficient detection capability, compounded by a lack of augmentation from machine learning algorithms, not only fails to detect threats adequately but also generates significant noise and fatigue for overworked incident responders (IRs). This perfect storm of intersecting issues results in an inability to intelligently address attack patterns, utilize discrete indicators for threat hunting, and massively augment humans with machine learning.

Despite significant improvements to detection, problems persist. Some improvements are merely incremental, while others are piecemeal solutions. The most effective detection method is building a graph across relevant and potentially indicative signals, and forming detections in the form of attack patterns, not discrete signals. This is a simple enough concept to understand, yet it has not been successfully implemented in most enterprise SOCs. Today it takes smart, deeply trained security professionals to connect these dots – a graph, if you will – and identify potential attack patterns. Such efforts have to be augmented, and hence significantly expedited, by machines – this is the power AI algorithms and frameworks can bring to enterprise SOCs, a force-multiplier effect.
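To make the pattern idea concrete, here is a minimal sketch, assuming an invented signal schema (the entity, tactic and timestamp fields are illustrative, not any product’s): it groups discrete signals by entity and checks for an ordered chain of tactics – the simplest, machine-assisted form of the dot-connecting described above.

```python
from collections import defaultdict

# Discrete signals, each tying a detection to an entity (host/user).
# The schema and values are invented for illustration.
signals = [
    {"entity": "host-17", "tactic": "initial_access",   "ts": 100},
    {"entity": "host-17", "tactic": "execution",        "ts": 160},
    {"entity": "host-42", "tactic": "lateral_movement", "ts": 200},
    {"entity": "host-17", "tactic": "lateral_movement", "ts": 220},
]

# An attack pattern expressed as an ordered chain of tactics.
PATTERN = ["initial_access", "execution", "lateral_movement"]

def match_pattern(signals, pattern):
    """Group signals by entity; flag entities showing the tactic chain in time order."""
    by_entity = defaultdict(list)
    for s in sorted(signals, key=lambda s: s["ts"]):
        by_entity[s["entity"]].append(s["tactic"])
    hits = []
    for entity, tactics in by_entity.items():
        it = iter(tactics)
        if all(step in it for step in pattern):  # ordered subsequence match
            hits.append(entity)
    return hits

print(match_pattern(signals, PATTERN))  # -> ['host-17']
```

A real implementation would join across multiple entity types (user, host, process) and bound each chain by time windows, but the force-multiplier effect is the same: the machine proposes the connected pattern, and the analyst judges it.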

Detection is hard

To begin with, most enterprise SOCs do not understand, in a tangible and measurable way, their state of security preparedness vis-à-vis adversary tactics and techniques; in other words, there is no comprehensive scoring that allows them to assess their own state of security preparedness from a detection defense standpoint.

The MITRE ATT&CK framework is a perfect example of a template to measure against. Such scoring needs to be driven by data source logging, enterprise topology, enterprise priorities, the current threat landscape and other relevant factors such as skills and current coverage. The next important aspect of this problem’s complexity is intelligent detection that can connect the dots in a graph manner, detect entire patterns and make it easy to take comprehensive mitigation action in reasonably short time-frames. Security threat analysts must be able to visualize detection scenarios for attack patterns and implement them easily, not be burdened by complex code suited to the underlying execution engine, e.g., a SIEM. Such detection scenario building doesn’t have to be done from first principles – chances are someone else has solved the problem already, so it would be infinitely more efficient to be able to collaborate and share such solutions, at least amongst trusted partners.
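As an illustration of what such preparedness scoring might look like, here is a minimal sketch; the technique IDs are real ATT&CK IDs, but the priorities, detection counts and coverage target are invented levers, not a standard scoring model.

```python
# Hypothetical per-technique inventory: how important each ATT&CK technique
# is to this enterprise, and how many deployed detections address it.
techniques = {
    "T1078": {"name": "Valid Accounts",    "priority": 0.9, "detections": 3},
    "T1021": {"name": "Remote Services",   "priority": 0.8, "detections": 1},
    "T1566": {"name": "Phishing",          "priority": 1.0, "detections": 4},
    "T1055": {"name": "Process Injection", "priority": 0.7, "detections": 0},
}

def preparedness_score(techniques, target_per_technique=2):
    """Priority-weighted share of techniques with enough deployed detections."""
    total = sum(t["priority"] for t in techniques.values())
    covered = sum(
        t["priority"] * min(t["detections"] / target_per_technique, 1.0)
        for t in techniques.values()
    )
    return round(100 * covered / total, 1)

print(preparedness_score(techniques))  # -> 67.6 for this sample inventory
```

The levers a CISO can pull are visible in the inputs: add detections to under-covered, high-priority techniques (here, T1055) and the score moves accordingly.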

Of course, in order to share any implementable code, there needs to be standardization in the code as well as unification of the underlying data models, which are hard problems to solve. Finally, incident responders need enriched alerts to eliminate painful, manual triage and investigation, which is yet another area of trial and error.

In summary, below are the main reasons why this is a hard problem to solve:

  1. There is no single metric for assessing your environment.
  2. Generating robust detection code to address sophisticated attack patterns is difficult.
  3. Threat detection tools lack the necessary standardization, a problem further aggravated by their proliferation.
  4. A lack of structured collaboration between peers relegates would-be partners to operating on a first-principles basis.
  5. There is a lack of unified, enriched, actionable alerts for hunting, triage & response.

A more connected experience tailored to security personnel will make the human experience less tedious and more productive.

Ideal solution for a better detection capability in the SOC

The ideal solution needs to offer:

  1. An automated, comprehensive and continuous assessment and scoring model that provides CISOs and SOC Managers a consistent view of their threat detection preparedness, and levers to adjust in order to improve and maintain the score.
  2. An AI-assisted recommendation engine that presents SOC users with relevant and personalized threat use cases they need to worry about, based on their environment, history of susceptibility and current threat landscape.
  3. An intelligent, preferably no-code, detection pattern-building environment that deploys highly relevant logic, and generates a low volume of high efficacy alerts for triage/response.
  4. A peer collaboration capability to share code, context, best practices etc. amongst teams as well as amongst enterprises, with the ability to share code that can be deployed across environments (unlike today’s inefficient and noisy IOC signal sharing through ISACs).
  5. A persona-friendly, visual, end-to-end, unifying experience for security professionals, aimed at reducing fatigue and improving the quality and satisfaction of their daily lives.

The overall solution must give security professionals and the enterprise a consistent view of security preparedness, and the necessary implementations to keep their coverage high and their alerts rich.

Fostering collaboration amongst peers is a must in order to keep abreast, if not ahead, of the threat landscape, and allowing security domain experts to build content/logic quickly and efficiently in a no-code manner is above all key to the success of the future SOC. A collaborative, no-code approach to security threat detection will allow the experts to protect the house rather than rely on inefficient, faulty translation through developers and programming tools.

The future SOC will marry multi-product, best-of-breed, tool-agnostic threat detection with a response environment where security professionals enjoy the best support available. By developing a robust layer of relevant, custom and code-free threat detection, SOC teams can be more agile, more accurate and automate their response actions more precisely. Investment in this type of collaborative detection will help find complex and relevant attack patterns and threat scenarios, and ultimately help ensure the success of SOCs.

Monday, October 19, 2020

No-code in the SOC!

The traditional SOC is essentially controlled, in most cases, by a SIEM, e.g., Splunk. The language and inner workings of the SIEM are of paramount importance to the SOC team, and often hiring decisions are made based on proficiency with the existing SIEM and other SOC tools. In other words, SOC teams are often forced to hire programmers rather than security professionals because of their dependencies on underlying SOC tools.

What if SOC professionals were magically given the capability to build detection logic without ever needing to write a single line of code? Wouldn’t SOC managers rather hire security experts instead of programmers? Yes, they absolutely would. That’s exactly how the new, future SOC is going to have to transform if it aims to protect enterprises from threats rather than just keep up with complex tools that suffer from poor detection capability and noisy alert generation. SOC managers must demand this of security vendors.

 

Introducing the concepts of low-code and no-code. These self-explanatory terms describe, respectively, the relatively low coding effort and the absence of any coding effort needed to build business apps/logic. We believe in total disruption of app/logic building, particularly in the SOC: there is enough complex work to be done in keeping up with, let alone moving ahead of, the threat landscape, and in building algorithms (logic) to detect complex threats that often move fast through an enterprise. A no-code approach to the SOC is therefore desperately needed, not only to make SOCs efficient but also to let security experts, rather than programmers/developers, take the reins of security threat detection and response.

 

According to Gartner, by 2024, low-code/no-code application development will be responsible for more than two-thirds of application development activity across the industry. This is a 165% growth from today, according to Salesforce’s Enterprise Technology Trends Report, 2020.

 

What might this look like? A no-code environment in a SOC would allow a threat analyst to model detection scenarios against actual threat attack patterns – e.g., those described in the MITRE ATT&CK framework – instead of modeling detection based on what the underlying tool can or cannot do. A security expert, not a programmer, could develop a complex attack-pattern detection model by moving atomic blocks of logic and associating operands between them – a detection model that could span days of activity – all without writing a single line of code. Imagine the world of Lego blocks in the SOC: all the blocks, with the right colors, shapes, sizes and functional value, are there; the artist simply needs to put them together, no manufacturing or fabrication needed. They automatically interlock with one another, and the expert may define the kind of interlocking needed.
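As a sketch of what such Lego-style composition might serialize to behind the scenes – the block names, join fields and windows below are invented for illustration, not any product’s actual format:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str    # an atomic detection block, e.g. one behavior/technique
    window: str  # how long this behavior stays "live" for chaining

# What a no-code builder might emit when an analyst snaps blocks together.
detection = {
    "name": "Possible lateral movement campaign",
    "blocks": [
        Block("suspicious_logon", window="24h"),
        Block("new_service_installed", window="24h"),
        Block("smb_admin_share_access", window="7d"),
    ],
    # The "interlocking" between blocks: same host, ordered in time.
    "join_on": "host",
    "ordering": "sequence",
}

# A runtime (SIEM, data lake, endpoint engine) would compile this structure
# into its native query language; the analyst never touches that code.
for b in detection["blocks"]:
    print(f"{b.name} (window {b.window})")
```

The point of the structure is portability: because the analyst composes blocks rather than writing engine-specific code, the same detection can be compiled for whichever execution engine sits underneath.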

 

A no-code implementation within a security threat detection builder, aimed at a threat/intel analyst in the SOC, would render a not-too-complex logic for detecting lateral movement like this, and would take a security domain expert minutes to compose:

[Figure: the lateral-movement detection logic composed visually in the no-code builder]
Whereas the equivalent code written in SPL (Splunk’s search language) might start to look like this, stretching well over 200 lines and likely nowhere near as readable:

[Figure: an excerpt of the 200+ line SPL implementation]
Not to mention the long time it takes to write such code, the scarcity of proficient programmers, the complexity of testing, and the lack of reusability (and modularity) of the resulting code.

 

The example illustrates the importance of no-code in the SOC: the team can focus on the task at hand – threat detection and response – without getting bogged down by security tools. Threat detection not only becomes easier but also much more efficient because of reusability, which introduces the ability to cascade improvements in one technique to all scenario detections that use it – hundreds of detections could be upgraded with a few clicks, massively improving the scalability and currency of a SOC organization. Imagine the boost this can provide SOC teams in both threat detection preparedness and efficiency.

 

No-code is soon going to shatter the monolithic, slow detection (and thus, response) process of the SOC!

Monday, September 7, 2020

The Emergence of Security-Oriented Silos: A Perspective on Gartner’s 2020 Security & Risk Trends – Part 2 (of 2)

As a follow-up to Part 1 of this topic, and to the XDR topic posted by our CTO, let’s discuss how we must deal with the decentralization of security operations alongside the need for a unified view of the state of security and of the ways to secure the enterprise.


How do we deal with it?

As said before, we must embrace the next-gen cyber-security operations of an enterprise which shall be run by security domain experts rather than the traditional IT/developer persona. This certainly means the end of a traditional, central SIEM as we know it, and augmentation of the security infrastructure with a federated, content platform which operates as a fabric across all security silos – this is the ONLY way to embrace the next generation of enterprise security.

 

Security silos are not necessarily a bad thing – in fact, we see the next-gen SOC being quite decentralized and operated in a best-of-breed fashion. The key is to bring the intelligence (detections and response playbooks) together such that a unified coverage and action plan emerges for the enterprise. In order to achieve that, a few key paradigms need to be shattered:

 

1. The concept of a central, primary SIEM at the heart of a SOC, into which all security data sources feed, is disappearing – a more distributed model is emerging

2. No more developer/IT skills needed to program rules/logic – security experts will be able to author and implement detection content without code and without needing to tie into a specific underlying run-time engine

3. Content will no longer be developed only within the confines of a SOC – a more collaborative approach will emerge, with other business units and control points, as well as outside the enterprise

4. By virtue of the above, content will no longer be in a single platform or SIEM-specific “language” – it will be more of a framework-led logic construct easily portable to any runtime environment

5. Data prep – normalization, unified data model etc. – will no longer be an after-thought; it becomes a first-class citizen in building out the next-gen SOC, where content and data interlock from the design stage onwards

 

Needless to say, much of this new world is going to be cloud-based; this offers the optimal path to collaboration and maximizes the CI/CD-style rate of innovation for the SOC. Naturally, private-cloud, hosted environments and hybrid deployments will be supported but the brains will be in the cloud.

 

In short, think of the new security infrastructure world as more inclusive and connected yet distributed. This is our view of the world, at Anvilogic. We’d be happy to engage anyone who’s interested in learning more about how we implement solutions that embrace the next generation “SOC”.



Monday, July 20, 2020

Tying Together The SOC Visibility Triad for Improved Threat Hunting



The SOC Visibility Triad has emerged as a concept over the last few years. The triad consists of EDR and NDR solutions running at the endpoint and the network respectively, with their alert feeds pulled into the SOC (the third leg of the triad) for improved threat detection and hunting.

We have been working with customers to help them detect adversarial behaviors by correlating these alerts with their other alert and log feeds. Bringing EDR and NDR alerts into the SIEM provides rich context for threat hunting. A common theme we have observed in SOCs is the application of consistent data normalization and enrichment to these feeds, which enables rapid development and deployment of automated detection and threat hunting content.

The Drivers

Traditional prevention technologies, whether at the endpoint, the network or in the cloud, are evolving into detection and response technologies. At the endpoint, traditional anti-malware is evolving into endpoint detection and response (EDR) solutions; at the network, traditional IDS/IPS are evolving into network detection and response (NDR) solutions; further, some vendors are integrating EDR and NDR into a single XDR solution.

What is driving the evolution of these technologies? Adversary sophistication, and adversaries’ success in getting new variations of known exploits past traditional prevention technologies, are the key drivers for these solution categories.

Analysts are noticing it; Gartner called this out as a top trend for 2020. This trend bleeds into another we have observed, where enterprise SOCs adopt multiple SIEMs and data lakes for log and alert aggregation and correlation.

EDR/NDR/XDR Configuration and Alert Management

These solutions are characterized by their deployment of a dynamic collection of signature, behavioral and statistical techniques to detect malicious behavior. Care and feeding of these tools involves:
  1. Detection policy configuration. Product admins choose which out-of-the-box detection content to turn on or off, and can even author and deploy new detections into these solutions.
  2. Alert investigation and triage. Because the behaviors these detections flag are not always known-bad, they are lower fidelity than detections for known-bad behaviors. These lower-fidelity alerts require action by the security ops team:
    1. Investigate the alert and mark it as a false positive, or promote it to an incident for remediation and containment.
    2. In some cases, these solutions come with a managed services component, where the vendor offers a service to investigate alerts and resolve them.

SOC Use Cases: Threat Hunting and Automated Detection

So how are security operations centers (SOCs) effectively using the alerts generated by these EDR/NDR/XDR solutions? In our engagements with enterprise SOCs, we see the following patterns emerging for making the best use of these alerts.
  1. Basic investigation and response. Alerts are ingested and investigated, often in isolation.
  2. Threat investigation. These alerts can be indicative of an adversary campaign, so the SOC investigates them across EDR, NDR and XDR solutions, combining them with other security product alerts and with correlations from raw logs to detect adversary tactics, techniques and procedures. A core foundational requirement is standardized data models, including data normalization and standardized enrichment. This can happen through:
    1. Ad hoc threat hunting. Experienced threat hunters look at the set of alerts coming from these solutions and other security products and look for patterns of adversary behavior. The wider the aperture for analysis, the higher the fidelity of the resulting detections. This requires highly skilled personnel and knowledge of behaviors that malicious actors are known to use.
    2. Standardized detection and hunt procedures. This requires data normalization and enrichment, plus detection content that can be applied to these alerts to detect adversary behavior. Alerts from a wide variety of sources are combined to obtain high-fidelity detections of adversary behaviors. This environment has higher levels of automation and repeatability than ad-hoc threat hunting.

SIEM Enablers: Data and Content 

Mature SOCs apply standardized detection and hunt procedures to the alert streams ingested and normalized from EDR/NDR/XDR technologies, combining them with alerts from their other security products and with raw log streams. There are two foundational capabilities that must be in place for this to be successful (see the sketch below):
  1. Data normalization and enrichment. If all of your alerts are stored in a normalized format, developing and using threat hunting and automated detection queries becomes much simpler.
  2. Customized detection content that can hunt for the adversaries targeting you and your verticals.
At Anvilogic, we offer a SOC content platform with a wide variety of data parsers, normalizers and enrichments that you can quickly adopt, and a wide set of behavioral detections that you can assemble into your own adversary detection content to hunt for adversary behaviors across your alert and log sources.
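Here is a minimal sketch of what that first foundational capability can look like, assuming invented vendor field names and an illustrative asset database; real EDR/NDR products each have their own schemas.

```python
# Map each vendor's field names onto a common data model.
FIELD_MAP = {
    "edr_vendor_a": {"device_name": "host", "user_name": "user", "sig": "signature"},
    "ndr_vendor_b": {"src_host": "host", "account": "user", "rule": "signature"},
}

# Illustrative asset context used for enrichment.
ASSET_DB = {"host-17": {"owner": "finance", "criticality": "high"}}

def normalize(raw_alert, source):
    """Rename vendor-specific fields to the common data model."""
    mapping = FIELD_MAP[source]
    return {common: raw_alert[vendor] for vendor, common in mapping.items()}

def enrich(alert):
    """Attach asset context so triage doesn't have to look it up manually."""
    alert["asset"] = ASSET_DB.get(alert["host"], {"criticality": "unknown"})
    return alert

raw = {"src_host": "host-17", "account": "svc-backup", "rule": "SMB lateral movement"}
print(enrich(normalize(raw, "ndr_vendor_b")))
# -> {'host': 'host-17', 'user': 'svc-backup', 'signature': 'SMB lateral movement',
#     'asset': {'owner': 'finance', 'criticality': 'high'}}
```

Once every alert lands in this common shape, a single hunt query or behavioral detection works across all of the EDR/NDR/XDR feeds at once, which is what makes the standardized procedures above repeatable.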

Tuesday, July 14, 2020

The Emergence of Security-Oriented Silos: A Perspective on Gartner’s 2020 Security & Risk Trends – Part 1 (of 2)

This Gartner post was published in June, after COVID-19 struck the world, so the perspective of a new world is already factored into the posting. Responses to COVID-19-related changes in work habits have driven cyber-security priorities since March 2020. But there is an uber-trend that has been underway for a few years now and that I expect will emerge as a high-priority element in cyber-security planning at the CISO level – the emergence of several silos of security threat detection and analytics, run by different domain experts, for different workloads.

This is captured in trend #4 in Gartner’s post, about how enterprise-level (centralized) Chief Security Officers are arising in order to merge security-oriented silos. I fully agree with this; we at Anvilogic have started to see it ourselves as we engage with Fortune 1000 companies, and I first saw signs of these silos emerging a few years back while at Splunk.

Let’s try and understand why this is happening, and how we must embrace and optimize operations to accommodate this phenomenon.

Why is it happening?
As enterprises grow in operations, workforce habits vary, and new application workloads arrive, security organizations tend to decentralize, and clusters of expertise arise that govern their own areas. This is, in general, a good thing: those specific application/area owners know their environments best, so allowing them to govern those areas for security vulnerabilities and attacks is the most viable strategy in the long haul. This is not unlike how the server world got disrupted by VMware’s virtual machines, the business application world by companies like Salesforce, and the infrastructure world by AWS – the commonality being that there ceased to be one central IT organization servicing the needs of server groups, business application areas and infrastructure projects, in favor of domain experts producing the necessary value for the business in a ‘best-of-breed’ approach. Similarly, we are seeing this forward-progress trend starting in cyber-security, with subject matter experts operating to deliver value for the areas they know best and own.

Enterprises are considering Microsoft Sentinel to address cloud AD and Azure security needs, Google Chronicle for GCP workloads, XDR technologies for end-point and related detection & response, and so on. This is in addition to the multiple (at least two) SIEMs many enterprises already operate today. As a result, there is a growing separation of data, analytics and detection in the enterprise, and it goes beyond the capacity and governance reach of a traditional SOC. This trend must continue for the betterment of the overall security posture of enterprises. However, the downside, which we have not yet addressed but must, is bringing the knowledge of these disparate silos together to provide a centralized view of the cyber-security posture of the enterprise. This is true next-gen value, but we are not there yet.

How do we deal with it?
As mentioned above, we have not yet addressed how to bring concerted & correlated value from across these silos to address overall enterprise cyber-security and maturity. But it is important we look at the role of a SOC and consider the value of domain expert-run security methods carefully, and embrace the next-gen cyber-security operations of an enterprise which shall be run by security domain experts rather than the traditional IT/developer persona. This certainly means the end of a traditional, central SIEM as we know it, and augmentation of the security infrastructure with a federated, content platform which operates as a fabric across all security silos – this is the ONLY way to embrace the next generation of enterprise security. We shall address this further in an upcoming blog post soon – watch this space!

In the meantime, look out for our CTO’s post on the related trend, #1 in Gartner’s posting, about how XDR technology is gaining traction in enterprises.

Tuesday, June 23, 2020

The Future State of SIEMs - Part 3 ("The How")

If you read Part 1, https://medium.com/@Anvilogic/the-future-state-of-siems-part-1-the-what-149056482fef, and Part 2, https://medium.com/@Anvilogic/the-future-state-of-siems-part-2-the-why-efffc64ffb6f?sk=81c7396126ab6cd0b5822408f05d51b9, of this topic series, then you are ready to learn how the revolution should happen in the SIEM and surrounding SOC stack, such that relevant, high-efficacy, ready-to-deploy content streams into the SIEM and results in highly actionable alerts, leading to high rates of automation in downstream systems. This is not an evolutionary “how”; rather, it introduces a new paradigm that not only makes highly accurate detection content available to SOCs, thereby increasing the rate of orchestration and automation, but also future-proofs SOCs against the changing threat landscape and changing security architectures, in that they will no longer be centrally dependent on a single SIEM.

There are several key elements in this new Content Platform architecture, including a content repository and frameworks, but the most important is the capability to empower security experts to build the necessary content (i.e., detection logic) without needing to be tool experts or code developers. Such a flexible, code-less, UI wizard-driven content builder utilizes content objects that have gone through the frameworks and are ready to be linked together into high-efficacy scenario detections, which result in fewer but more accurate, actionable alerts for SOC teams to triage.

The above architecture will be underpinned by a secure collaboration channel, which allows SOC teams to collaborate with one another, both internally within the SOC and, optionally, externally with peers in other enterprises. Collaboration is possible at the code level, wherein actual code is exchanged, or at the comments and best-practice level, which is a more free-form text exchange. Code-level exchanges are only possible because of the embedded standardization frameworks in this architecture.

That is, in concise form, the next-gen SOC content platform architecture. It will split the monolithic SIEM stack such that content will no longer be part of the SIEM; rather, it will be supplied by the framework-led, collaborative content platform, which will serve all enterprise rules engines – a central SIEM, several micro data lakes, end-points etc. – resulting in the future discussed here: https://medium.com/@Anvilogic/being-a-soc-content-platform-4ccd27c2472a

For more on how this will work in your SOC, sign up for our free trial at www.anvilogic.com

Monday, June 1, 2020

The Future State of SIEMs - Part 2 ("The Why")

If you read Part 1 of this topic series, https://medium.com/@Anvilogic/the-future-state-of-siems-part-1-the-what-149056482fef, then you are likely wondering why there needs to be a future state of SIEMs other than the usual reason that anything must evolve/improve over time. However, it’s more dire and revolutionary than that.

SIEMs have long been the ‘go-to’ system for data collection, alerting and triage from a technology standpoint, but they have always been reliant on the knowledge, priorities and intellect of the individuals running the SOC for core content – the detection and triage logic that sets everything in motion. This content is the core value of the central part of a SOC; it needs domain expertise, not just technology. Unfortunately, the bulk of resources is spent on-boarding data, conforming to ‘data models’ (though that’s an overloaded, flattering term) and coding basic rules that generate noisy alerts. These tasks are supposed to be table stakes – a means to an end, not the end itself. But too often, SIEM implementations start and end with these necessary-but-not-sufficient tasks, burning out analysts and making downstream triage/response automation impossible.


It’s time to elevate the game to better detection and, hence, better response methods. This is not necessarily a ding on SIEM vendors; they do what they do best – ingest data, help analyze/investigate, help automate – but the core detection content needs to come from subject matter and domain experts who’ve ‘been there and done that’, not from technology vendors who are good at engineering but not necessarily at the dynamic security landscape. Therefore, the state of the SIEM needs to change in order to strengthen the core – detection content – and thereby deliver significant productivity gains to the SOC, where budgets for systems and people are not increasing commensurate with the needs.

Three phenomena are happening to compensate for the content weakness of SIEMs:

1. The lack of better detection leading to precise, truly actionable incidents is why EDRs are implementing their own siloed detection/response logic at the endpoints, and other similar technologies are doing the same – mainly for lack of efficient and precise logic in the SIEM, and sometimes for licensing-cost reasons. The world is getting decentralized for convenience, but at the cost of not correlating across enterprise data to get the richest signals and thus the most precise incident alerts.

2. The problem doesn’t end there – it continues downstream to the orchestration and automation side of the SOC too. This is why SOAR systems’ playbooks are mostly enrichment (adding contextual data) to better understand an alert and either dismiss it as a false positive or accept it as an action-worthy incident. SOAR systems are thus making up for the lack of efficacy of the SIEM, which is not a wise use of resources and leads to inadequate automation. In a nutshell, other systems are compensating for missing SIEM functionality: the core detection logic that would drive better rates of predictable, repeatable and successful automation.

3. Finally, if there is no core system of record that factually assesses maturity, it’s impossible to truly know the state of security preparedness of an enterprise. How can a CISO effectively answer the question, “What does our security coverage look like?” To answer it correctly, there needs to be a system that dispenses detection content across enterprise priorities and frameworks such as MITRE ATT&CK, and that can generate metrics on efficacy, coverage, gaps, peer comparisons, maturity scores etc. SIEMs cannot perform all these functions – they just need to perform their core function well: implement relevant, high-efficacy detection logic that will come from elsewhere (discussed in the next part of this blog post series), and truly achieve a high rate of automation in the SOC.

Humans should not be chasing data and noisy alerts and triaging the basics, nor should other systems be compensating. Neither should SOC executives be struggling to assess their level of maturity and security coverage. A SOC’s domain experts should not be making up for the lack of expert content in the SIEM; rather, they must spend their time elevating and customizing that content for their environments – easily, without needing to be programming or threat-landscape experts. There needs to be a dominant change in the SOC to bring strong, relevant, high-efficacy content into the SIEM – this meets the unspoken, implied need for the productivity gains the SOC so desperately needs today. This is also described in a previous blog post, https://medium.com/@Anvilogic/being-a-soc-content-platform-4ccd27c2472a


These are the true answers to the “Why do SIEMs need a future state” question. In a nutshell, SIEMs need way better content driving detection (alerts), and that must not continue to come in a manual, ad-hoc, difficult manner from SOC teams – a Content Platform needs to drive this change. And this is the basis for the next part of this series that will discuss the “How” …

Friday, May 29, 2020

Content Conundrums for the SOC: Part II

If you survived the first SOC content-development conundrum, then your organization is well on its way toward building new and exciting content without having to worry about those senseless questions that arise when new threats emerge, such as whether the threat is even relevant to your organization. Instead, your organization should be quick to determine threat applicability and move forward with a well-defined, streamlined content development process using build, attack, and defend methodologies. If you aren’t at this point, you can refer back to my previous post, part 1 of this series, here.

With this stream of new and applicable content being developed into your SIEM for consumption by your incident response team, you now have several new detection queries actively looking for threats within your environment. But how in the world are you going to manage this growing content repository when each piece of code was developed individually with no sense of reusability? And how exactly are you planning to ensure that content developed in the past is still applicable now that time has passed? Herein lie the next content conundrums for the SOC.

Let’s start with a few issues that need to be addressed:
1. How is your organization going to handle a situation where a critical component of these detection queries changes?
2. How is your organization going to ensure that logic blocks within your queries are repeatable and can continue to be refined over time?
3. How is your organization going to validate that content previously produced remains applicable and working correctly over time?

The idea that your organization just produced a large quantity of content, at a painstaking cost in time and resources, only to have it rendered fully useless by a change in a data storage schema – or partially useless for newly onboarded business units whose data needs to be incorporated into the logic – is a nightmare. No one wants to go through each individual piece of content and update components one by one, especially when there is so much room for human error. So how would you make sure that manually revisiting all your past content never becomes a reality? Easy: modularize your content to its fullest extent. This can be done via functions or macros or whatever reusable logic grouping your organization’s SIEM may offer. The idea is to modularize every piece of logic that can be reused in another piece of content – even how you call a certain data set should be modularized, just to account for data storage schema changes and for new data storage objects created for new business units.

Luckily, this idea resolves the second issue above as well: by modularizing complex logic blocks into these functions or macros, you can ensure that any statistical aggregation, machine learning algorithm, or anything else can be continuously improved, with all of your old content automatically improved by each update. Sounds great not having to go back and update everything one by one every time you refine a particular statistical analysis or even something as simple as a data call, right? It is, and it will save your analysts an incredible amount of time while reassuring them that everything is running with the latest and greatest. With that, however, we have also introduced the third issue stated above: how exactly do you know that those great new updates to those code blocks didn’t break anything?
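A minimal sketch of this modularization, using SPL-style backtick macro references over plain query strings; the macro names, indexes and query fragments here are invented for illustration:

```python
# Reusable logic blocks: each is defined once and referenced everywhere.
MACROS = {
    # One place to change when a data storage schema or index changes.
    "auth_data": "index=auth_prod OR index=auth_emea",
    # One place to improve a shared statistical technique.
    "rare_by_user": "eventstats count by user | where count < 5",
}

# Detections reference the blocks instead of inlining the logic.
DETECTIONS = {
    "suspicious_logon": "`auth_data` action=success | `rare_by_user`",
    "service_abuse":    "`auth_data` service=* | `rare_by_user`",
}

def expand(query, macros):
    """Expand `macro` references, so every detection picks up macro updates."""
    for name, body in macros.items():
        query = query.replace(f"`{name}`", body)
    return query

for name, q in DETECTIONS.items():
    print(name, "->", expand(q, MACROS))
```

Update `auth_data` once – say, when a new business unit’s index comes online – and every detection that references it is current on its next run; no one touches the individual detections.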

In the worst case, those modularized logic-block updates would require you to go back and do the very thing you have been trying to remove from your workload, just to have peace of mind that everything still functions. You saved all that time by making updates propagate automatically, only to add it right back for lack of validation and regression testing. Here, the best method is to automate the regression testing of your rules whenever updates are made. This can be done via custom scripts, or via playbooks within a SOAR system, as long as the SIEM you are testing against supports some sort of API call to run remote searches. Of course, there are a couple of things you will have to incorporate into your B.A.D. processes referenced in Part I of this series: essentially, have your Red/Attack Team take note of exploit simulation times so that, in conjunction with your detection queries, those variables can be passed to remote searches over the very particular time frames in which you can ensure results will appear. If your automation comes back with certain detection queries returning null results after a modularized logic-block update, then you know something went wrong.

At the same time, you can use this same automation to check whether certain detection queries are no longer applicable, by periodically reviewing the outputs and metadata associated with every search run in your automated regression tests. This helps keep system utilization as low as possible, making way for the detection content that is still applicable to your environment.
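Here is a sketch of such automated regression testing, assuming a hypothetical SIEM HTTP search API – the URL, auth scheme and request/response fields are invented – and reusing the known-good simulation windows recorded by the Red/Attack Team:

```python
import requests  # assumes a SIEM that exposes an HTTP API for remote searches

SIEM_SEARCH_URL = "https://siem.example.com/api/search"  # hypothetical endpoint
API_TOKEN = "REDACTED"

# Time windows (epoch seconds) in which the Red/Attack Team ran exploit
# simulations, recorded during B.A.D. exercises.
KNOWN_GOOD_WINDOWS = {
    "suspicious_logon": (1598000000, 1598003600),
    "service_abuse":    (1598100000, 1598103600),
}

def run_search(query, earliest, latest):
    """Run a remote search over a fixed window and return the result count."""
    resp = requests.post(
        SIEM_SEARCH_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"query": query, "earliest": earliest, "latest": latest},
        timeout=60,
    )
    resp.raise_for_status()
    return len(resp.json().get("results", []))

def regression_test(detections):
    """After a logic-block update, every detection should still fire in its window."""
    failures = []
    for name, query in detections.items():
        earliest, latest = KNOWN_GOOD_WINDOWS[name]
        if run_search(query, earliest, latest) == 0:
            failures.append(name)  # null results where hits are guaranteed
    return failures

# Run after each modularized logic-block update, e.g. from a SOAR playbook:
# broken = regression_test(DETECTIONS); alert the content team if broken != []
```

The same harness, run periodically while logging result counts and search metadata, doubles as the applicability check described above: detections that never return results outside their simulation windows become candidates for retirement.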

We hope that these insights are helping your organization tackle your SOC content conundrums head on. If there are any other difficult problems your organization is faced with today, feel free to reach out as we would love to hear you out. In the meantime, stay tuned for Content Conundrums for the SOC: Part III.

At Anvilogic, we take SOC content development seriously, putting all content through several rigorous cycles to ensure it is as robust, efficient, and applicable as possible, and we hope to help your SOC’s detection content reach the next level. Let us know how we can help you, or visit us at www.anvilogic.com.

Friday, May 22, 2020

Becoming a Mature SOC: Part 1 – Why aren’t we flattening the curve?

If there is one thing everybody can agree on, it’s that cybersecurity breaches have risen significantly over the last 15 years, and so have the monetary and reputational impacts that follow.  One interesting trend we have noticed over this period is that five major factors continue to increase at a rapid pace:
  1. The number of security breaches reported is rising each year, as is the dwell time to respond
  2. Enterprise financial and reputational impacts of breaches continue to rise
  3. The cybersecurity technology market has grown 35X in the last 13 years
  4. Enterprise spending in the security market continues to rise, currently exceeding $115 billion
  5. The volume of data created within organizations has increased by 700% over the last 10 years
What is interesting to note here is that the security market does not appear to have been able to properly “flatten the curve” – that is, to improve enterprises’ ability to adequately prevent, detect and respond to cyber threats.  There can be many reasons why, but regardless of them, we know companies are spending billions of dollars on security and we are still seeing record increases in reported breaches.

Are enterprises receiving a return on their security investments, or are they experiencing a false sense of security?  Why haven’t we flattened the curve?




Before we get into the reasons why, let’s just look at some of the industry numbers that justify the basis for these thoughts.


Breaches Reported:
  • According to Verizon’s 2020 Breach report, they investigated 3,950 confirmed breaches across multiple industries, up 87% from their 2019 report of 2,103.
  • According to Tech Republic, data breaches increased by 54% in 2019.
  • According to IBM, breaches caused by malicious or criminal attacks are a growing threat and have increased by 21% over the past six years.

Enterprise Impacts:
  • According to IBM research, the cost of a data breach has risen 12% over the last five years, averaging close to $4 million per breach
  • According to IBM, the amount of time it took for organizations to detect a security breach (dwell time) was 279 days in 2019, up 5% from 266 days in 2018.

Enterprise Spending:
  • According to Gartner, worldwide spending on information security products and services exceeded $114 billion in 2018, increasing 12% from the previous year.  They forecast the market to grow to $170 billion by 2022.  

Cybersecurity Market:
  • According to Wired.com, the cybersecurity market has grown 35X over the last 13 years, and they anticipate a 12-15 percent year-over-year cybersecurity market growth through 2021.    

Data Volumes:
  • According to Statista.com, the volume of data/information created worldwide from 2010 to 2020 has increased by 700%.

Let’s Flatten the Curve

Based on all this information, let’s make a basic assumption – organizations are spending billions on cybersecurity technology and prevention but still struggle to properly assess, quantify and measure their overall cybersecurity maturity/risk posture.  As a result, even after all this money is spent, they still have difficulty quantifying the value it provides to the business and whether they are better off today than they were yesterday.

As you begin to think about how to improve your security maturity posture, we encourage you to take a step back and think about ways to improve using the technology, data, and resources you already have.  Instead of chasing the next shiny security tool, focus on the improvements you can make with your existing technology – sometimes you already have the extra visibility you need or want.

The successful model for running an effective security organization is your ability to properly prevent, detect, analyze, and contain/mitigate cyber threats.

To do this effectively, you need to do two things very well: intelligence and data.  Intelligence should be driven by threat research and an understanding of the core business processes that can be impacted by those threats.  Your data coming from technology – the security event logs and application feeds – needs to be structured, enriched, concise, and readily available to your security teams so they can use it to build detections and respond effectively to incidents.

Over the next couple of weeks, we will be posting on how Anvilogic can help you flatten your curve using the technology, data, and resources you already have. 

Think about your maturity…


Intelligence
  • Have you prioritized the threats you need to prevent?
  • Do you have an understanding of the core infrastructure you need to protect?
  • Do you understand the critical business processes that use this core infrastructure?  Do you know how the business actually works/operates?
Data
  • Do you have the data sources needed to detect those threats and protect that core infrastructure and business process? 
  • Is that data structured, enriched, normalized and usable? 
Content
  • Do you have the detection content necessary to adequately respond to those threats in that core infrastructure and business process? 
  • Does that content have a combination of threat intelligence and business intelligence?
  • Do you actually correlate activity across different data domains?
  • What is your detection efficacy?  
  • Is your SOC/SOAR looking at the right activity?
Productivity
  • Do you have the technology and controls necessary to mitigate and prevent those threats in your core infrastructure?
  • What is your response time?
  • Does your SOC have the access and capabilities necessary to respond to threats as they are occurring?
  • Do you perform proper testing and health monitoring of all of these security controls to ensure everything is operating as expected? 

At Anvilogic, our core mission is helping you improve your SOC’s overall maturity.  Let us know how we can help you.
