
Investigating a Behavior Alert

When a behavior alert is triggered, more information is often needed to determine whether the alert is a real hit or a false positive. The Fluency interface makes this investigation straightforward, as you can pivot between the summary, timeline, and event views to dig deep into an alert.

A behavior alert is viewed from the Behavior Summary page. From here, you can see key information such as the key, the risk score, and the risks associated with the alert, as well as the time range in which the associated events occurred.

Above is a sample behavior alert with a high risk score, which assigns it a risk level of “Critical.” Below this, we can see the risk alerts that were triggered by the behavior alert, and to the right of that are the behavior models that were triggered. From these two lists, we can gather that an O365 user logged in from a new country and performed admin-restricted operations. During the login process, they also failed multiple MFA challenges.
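As a rough mental model of how a numeric risk score translates into the level shown on the card, here is a minimal sketch. The thresholds below are assumptions chosen for illustration, not Fluency's documented cutoffs.

```python
# Illustrative sketch only: the scoring bands below are assumptions for
# demonstration, not Fluency's actual thresholds.
def risk_level(score: int) -> str:
    """Map a numeric behavior risk score to a coarse severity label."""
    if score >= 90:
        return "Critical"
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"

print(risk_level(95))  # Critical
```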

This collection of behavior model triggers could indicate a compromised account, but we won’t know for sure until we have the full story, such as where the user was logging in from before this new location appeared.

Clicking the alert expands it so you can view all the behavior models that were triggered in relation to it. Next to each model, you can see its individual risk score and the number of events that triggered it.

Beside both the key value on the main card and each behavior model in the table, there is a magnifying glass icon. These icons allow you to pivot to the Behavior Timeline page and dig deeper into the occurrences associated with the behavior alert.

Pivoting to the Behavior Timeline page lets you view all the behavior model hits that occurred in relation to either the key value or a specific behavior model, depending on which magnifying glass you use. Each of these summary hits can be expanded to view all the field data associated with it.

For this alert, we’re going to click the magnifying glass next to the key, which in this case is the username, so we can view all the timeline hits associated with this user. In the search bar, you can see the query is filled in with both the keyType (username) and the key value, which is the user who triggered this alert.
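If it helps to picture the shape of that auto-filled filter, here is a minimal sketch that composes one from the keyType and key value. The field names and the username are illustrative assumptions; the exact query syntax in your Fluency deployment may differ.

```python
# Minimal sketch of the kind of filter the pivot auto-fills. The field names
# "keyType" and "key" mirror the labels shown in the UI and are assumptions;
# the real query syntax may differ.
def behavior_timeline_query(key_type: str, key: str) -> str:
    """Build a search filter for every timeline hit tied to one key."""
    return f'keyType:"{key_type}" AND key:"{key}"'

# Hypothetical username for illustration.
print(behavior_timeline_query("username", "jdoe@example.com"))
# keyType:"username" AND key:"jdoe@example.com"
```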

Now that we’re on the Behavior Timeline page, we can see that this user has logged in multiple times.

Scrolling down, we can see one of the triggered behavior models that indicates an action was performed: Exchange_Uncommon_Operations. Expanding it, we can see that there is no IP address for this event, so we cannot determine whether it was one of the actions performed from the new location.

Using the facet, we can filter the timeline hits by risk, since we are interested in the events that occurred at the new location. This shows that the triggered behavior models directly associated with the new location are O365_File_Access and O365_AzureAD_UserLoggedIn. Using the magnifying glass next to the behavior name, we’ll pivot to the events page to view the logins in the surrounding window.

By expanding the search timeframe, we can see information about the other logins, not just the suspicious one. This will help us determine whether this login is valid or indicative of more threatening behavior.

In the facet, we can see that there have been logins from multiple cities in the last day. Three of them are relatively close together, while one is in a completely different country. The logins from that country are the suspicious ones that triggered the behavior model hits. While it’s possible these logins are legitimate, we need to look at the timeframe in which they occurred in order to verify this.

Select “Seoul” in the facet and search. There are 11 total logins from Seoul within the specified timeframe. When you scroll down, you can see that the last login that occurred from Seoul was at 11:53 AM. Now that we know this, we can view all the logins within this window to determine if it was possible for this user to have logged in from Seoul.

Remove the checkmark next to Seoul to view all the logins within the timeframe again. Once this query has loaded, we can clearly see that there are logins within a very short time both before and after the logins from Seoul, all originating from cities in the United States. This means there is no way the user could have legitimately logged in from Seoul. Now that we know this activity came from someone other than the account owner, we want to learn what other actions this unauthorized user performed with the account.
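The reasoning here is a classic “impossible travel” check: the time between a United States login and a Seoul login is far shorter than any realistic travel time. Below is a minimal sketch of that check, assuming each login record carries a UTC timestamp and geolocation coordinates from IP enrichment; the field names, sample coordinates, and speed ceiling are illustrative assumptions.

```python
# Minimal "impossible travel" sketch. Record fields ("time", "lat", "lon"),
# the sample coordinates, and the 900 km/h speed ceiling are assumptions
# for illustration, not values taken from Fluency.
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_feasible(login_a, login_b, max_speed_kmh=900):
    """Could one person plausibly have produced both logins?"""
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600
    km = distance_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return km <= max_speed_kmh * hours

# Hypothetical records: a US login shortly before the last Seoul login.
us_login = {"time": datetime(2021, 11, 8, 11, 20), "lat": 38.9, "lon": -77.0}
seoul_login = {"time": datetime(2021, 11, 8, 11, 53), "lat": 37.57, "lon": 126.98}
print(is_feasible(us_login, seoul_login))  # False: ~11,000 km in 33 minutes
```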

Going back up to the search bar, remove @behaviors:"O365_AzureAD_UserLoggedIn" from the query, then recheck “Seoul” in the facet. This allows you to view all the actions that have been performed from the Seoul geolocation. In the facet, we can see a list of the operations that were performed. Not including UserLoggedIn, these operations are: FileAccessed, FilePreviewed, and UserLoginFailed.
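If you export the matching events, the same triage step can be reproduced outside the UI. Here is a minimal sketch that tallies the operations performed from one geolocation while ignoring routine logins; the field names and sample records are illustrative assumptions, not Fluency's export format.

```python
# Illustrative sketch: tally operations from one geolocation, excluding
# routine logins. "Operation" and "City" are assumed field names.
from collections import Counter

def operations_from_city(events, city, exclude=("UserLoggedIn",)):
    """Count operations performed from one city, minus excluded ones."""
    return Counter(
        e["Operation"]
        for e in events
        if e.get("City") == city and e["Operation"] not in exclude
    )

# Hypothetical exported events for illustration.
events = [
    {"Operation": "UserLoggedIn", "City": "Seoul"},
    {"Operation": "FileAccessed", "City": "Seoul"},
    {"Operation": "FilePreviewed", "City": "Seoul"},
    {"Operation": "UserLoginFailed", "City": "Seoul"},
    {"Operation": "FileAccessed", "City": "Ashburn"},
]
print(operations_from_city(events, "Seoul"))
# Counter({'FileAccessed': 1, 'FilePreviewed': 1, 'UserLoginFailed': 1})
```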

The FileAccessed and FilePreviewed operations are of particular interest: they indicate that the malicious user potentially had access to sensitive files, depending on the access rights of the compromised account. Follow-up actions for this incident include changing the compromised user’s password and determining which files were accessed, in order to establish whether sensitive information was leaked and, if so, what that information was.

Page last updated: 2021 Nov 08