LoFP / update_known_false_positives

Techniques

Sample rules

Description

The following analytic identifies instances where Azure AD has blocked a user's attempt to grant consent to an application deemed risky or potentially malicious, suggesting the application exhibits behaviors or characteristics commonly associated with malicious intent or that it poses a security risk. The detection leverages Azure AD audit logs, specifically events related to user consent actions and system-driven blocks. By filtering for blocked consent actions associated with applications, the analytic highlights instances where Azure's built-in security measures have intervened.

Applications that Azure flags and blocks typically exhibit suspicious characteristics or behaviors. Monitoring these blocked consent attempts helps security teams identify potential threats early and can reveal which users are being targeted by, or are susceptible to, such risky applications. It is an essential layer of defense in ensuring that malicious or risky applications do not gain access to organizational data.

If the detection is a true positive, it indicates that the built-in security measures of O365 successfully prevented a potentially harmful application from gaining access. The attempt itself, however, suggests that a user may be targeted or that malicious applications are trying to infiltrate the organization. Immediate investigation is required to understand the context of the block and to take further preventive measures.
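At its core, the detection filters audit events by operation name and result. A minimal sketch of that filter in Python, using a simplified, assumed event shape rather than the exact Azure Monitor schema:

```python
# Sketch only: the event dictionaries below are a simplified assumption,
# not the full Azure AD audit log schema.

def blocked_consent_events(events):
    """Return events where a 'Consent to application' action failed."""
    return [
        e for e in events
        if e.get("operationName") == "Consent to application"
        and e.get("properties", {}).get("result") == "failure"
    ]

sample = [
    {"operationName": "Consent to application",
     "properties": {"result": "failure"},
     "user": "alice@example.com"},
    {"operationName": "Consent to application",
     "properties": {"result": "success"},
     "user": "bob@example.com"},
]

print([e["user"] for e in blocked_consent_events(sample)])
# → ['alice@example.com']
```

In the real search this filtering happens server-side in Splunk; the sketch just makes the selection criteria explicit.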

Detection logic

`azure_monitor_aad` operationName="Consent to application" properties.result=failure 
| rename properties.* as *  
| eval reason_index = if(mvfind('targetResources{}.modifiedProperties{}.displayName', "ConsentAction.Reason") >= 0, mvfind('targetResources{}.modifiedProperties{}.displayName', "ConsentAction.Reason"), -1) 
| eval permissions_index = if(mvfind('targetResources{}.modifiedProperties{}.displayName', "ConsentAction.Permissions") >= 0, mvfind('targetResources{}.modifiedProperties{}.displayName', "ConsentAction.Permissions"), -1) 
| search reason_index >= 0  
| eval reason = mvindex('targetResources{}.modifiedProperties{}.newValue',reason_index) 
| eval permissions = mvindex('targetResources{}.modifiedProperties{}.newValue',permissions_index) 
| search reason = "\"Risky application detected\"" 
| rex field=permissions "Scope: (?<Scope>[^,]+)" 
| stats count min(_time) as firstTime max(_time) as lastTime by operationName, user, reason, Scope 
| `security_content_ctime(firstTime)` 
| `security_content_ctime(lastTime)` 
| `azure_ad_user_consent_blocked_for_risky_application_filter`
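The `mvfind`/`mvindex` pairing above relies on `displayName` and `newValue` being aligned multivalue fields: the position of `ConsentAction.Reason` in one selects the matching entry in the other. A rough Python equivalent of that lookup and the `Scope` extraction follows; the sample property values are invented for illustration:

```python
import re

def extract_consent_fields(modified_properties):
    """Mirror the SPL's parallel-array lookup over modifiedProperties."""
    names = [p["displayName"] for p in modified_properties]
    values = [p["newValue"] for p in modified_properties]

    reason = permissions = None
    if "ConsentAction.Reason" in names:
        reason = values[names.index("ConsentAction.Reason")]
    if "ConsentAction.Permissions" in names:
        permissions = values[names.index("ConsentAction.Permissions")]

    # Equivalent of: rex field=permissions "Scope: (?<Scope>[^,]+)"
    scope = None
    if permissions:
        m = re.search(r"Scope: ([^,]+)", permissions)
        scope = m.group(1) if m else None
    return reason, scope

# Invented sample matching the shape the query expects.
props = [
    {"displayName": "ConsentAction.Reason",
     "newValue": "\"Risky application detected\""},
    {"displayName": "ConsentAction.Permissions",
     "newValue": "[..., Scope: Mail.Read, ...]"},
]

reason, scope = extract_consent_fields(props)
print(reason, scope)
# → "Risky application detected" Mail.Read
```

Note that the SPL compares `reason` against the literal string `"Risky application detected"` including the embedded quotes, which is why the sample `newValue` keeps them.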