Phase 1️⃣ Sunrise 🌅

Sunrise is the first phase of the detection lifecycle. It marks the inception, development, and deployment of the detection. During this phase, six core functions should be addressed:

  • Research
  • Prepare (Logging)
  • Build (Detection Content)
  • Validate
  • Automate
  • Share (Knowledge)

High-level goals for the Sunrise phase

  • Build high-fidelity detections
  • Ensure detection validation
  • Create documentation
  • Integrate and automate in the environment
  • Socialize the detection with the security organization

Research

Opportunity Identification
Opportunity identification can be triggered by analyzing threat intelligence reports, OSINT, or internal knowledge of a particular security gap. Document the use case and the goals of the detection as part of the opportunity identification process.
  • Document the use case that you're building and set goals.
  • Is the TTP already covered by an existing alert or detection?
  • Is there sufficient knowledge to start building, or is additional research required?
  • What sources of information will assist the research?
Prioritize
Detection engineering work has to be prioritized and tracked. Prioritization can be based on urgency and impact. A backlog of detections and security posture activities is desirable and recommended.
Prioritization criteria:
  • Criticality of the affected system
  • Level of threat to the organization
  • Ease of exploitation
  • Past incidents
Develop Research Questions
Write research questions that, once answered, will give you an understanding of the topic.
Examples:
  • Write down what you already know or don't know about the topic.
  • Use that information to develop probing questions ("why?", "what if?").
  • Avoid "yes" and "no" questions.
Information Gathering
Research and collect sufficient information to start understanding the detection. This provides a good overview of the topic if you are unfamiliar with it.
  • Identify important facts, dates, events, history, organizations, etc. (in case the detection is a response to a past incident).
  • Find bibliographies that provide additional sources of information (include them in the Appendix section of the detection document).
Technical Context
Create and understand the technical context around the detection.
  • Start a technical write-up by summarizing the most important information from a technical perspective.
  • Research the technology associated with the technique to understand the use cases, related data sources, and detection opportunities.
  • Note: Defenders often create superficial detections because they lack an understanding of the technology involved. In case of uncertainty, it is best to engage the team or engineer responsible for managing the technology.
Prepare

Identify Dataset
Identify the log source that will be used for the detection. Know your environment.
  • Understand the data source and document it by creating a data dictionary.
  • The data dictionary should grow to contain data sources and their corresponding schemas, so it can later be used as a quick reference.
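A data dictionary can be kept as structured data in the detection repository so it is easy to query and extend. This is a minimal sketch; the field names and the example source (Sysmon process-creation events) are illustrative assumptions, not a required schema.

```python
# Minimal data dictionary sketch: one entry per data source, with its schema.
DATA_DICTIONARY = {
    "sysmon_process_creation": {
        "description": "Windows Sysmon Event ID 1 - process creation",
        "index": "endpoint",        # where the events land in the SIEM (assumed)
        "retention_days": 90,
        "schema": {
            "utc_time": "timestamp of the event",
            "image": "full path of the executed binary",
            "command_line": "full process command line",
            "parent_image": "full path of the parent process",
            "user": "account that started the process",
        },
    },
}

def fields_for(source: str) -> list:
    """Quick reference: list the available fields for a data source."""
    return sorted(DATA_DICTIONARY[source]["schema"])
```

Keeping the dictionary in the repository means every new detection can reference the same schemas instead of rediscovering them.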
Visibility Check
Ensure there is sufficient logging, retention, and visibility to successfully build the detection and satisfy the use case.
  • Use the accumulated technical knowledge to identify the source and the events required to build the detection.
  • Use any historical events to validate that there is sufficient visibility.
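One simple way to use historical events for a visibility check is to count ingested events per day and flag empty days, which would point at a logging or retention gap. A minimal sketch, assuming events are represented as dicts with a "date" field:

```python
from collections import Counter
from datetime import date, timedelta

def visibility_gaps(events, start: date, end: date):
    """Return the days in [start, end] for which no events were ingested."""
    per_day = Counter(e["date"] for e in events)
    day, gaps = start, []
    while day <= end:
        if per_day[day] == 0:
            gaps.append(day)
        day += timedelta(days=1)
    return gaps

# Illustrative data: January 2nd has no events, so it is reported as a gap.
events = [{"date": date(2024, 1, 1)}, {"date": date(2024, 1, 3)}]
gaps = visibility_gaps(events, date(2024, 1, 1), date(2024, 1, 3))
```

In practice the per-day counts would come from a SIEM aggregation query rather than an in-memory list, but the gap logic is the same.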
Improve (optional)
Once the data is explored, we can identify opportunities for improvement, such as:
  • Collecting additional logs or changing logging levels
  • Creating additional attributes (parsing of raw logs)
  • Consolidating distinct logs
Improvement initiatives and requests should be communicated to the team responsible for the dataset in question. For that purpose, it makes sense to maintain a contact list that maps each technology to its support/engineering team and contact details.
Build & Enrich

Detection Creation
Create a detection query against the identified dataset. With a good understanding of the technical context and the data source, begin building queries that narrow the data down to actionable insight.
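The actual query will be written in your SIEM's language, but the narrowing logic can be sketched in Python. The example TTP (an Office application spawning a shell) and the field names are illustrative assumptions:

```python
# Illustrative detection logic: flag Office applications spawning a shell.
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe"}
OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def detect(events):
    """Narrow raw process-creation events down to actionable results."""
    return [
        e for e in events
        if e["parent_image"].lower() in OFFICE_PARENTS
        and e["image"].lower() in SUSPICIOUS_CHILDREN
    ]

# Sample events for illustration; only the Word -> PowerShell event matches.
events = [
    {"parent_image": "WINWORD.EXE", "image": "powershell.exe", "user": "jdoe"},
    {"parent_image": "explorer.exe", "image": "cmd.exe", "user": "jdoe"},
]
hits = detect(events)
```

The goal at this step is exactly this shape: start from the full dataset and reduce it to a small set of events that an analyst can act on.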
Manual Testing
Perform manual testing and ensure the query works from both a syntax and a logic perspective.
  • Ensure the query does not have any syntax errors.
  • If the detection is built in response to a past incident, ensure that the query indeed catches the true positive events.
Baseline Development
Develop a baseline (if needed) that will improve detection fidelity.
  • Baselines are sets of known and verified good behaviors and events present in the organization. These events are normally excluded from the detection logic.
  • Baseline decisions and considerations should be documented and clearly stated in the ADS.
  • Baselines are included in the hunt.yml/tf/hcl or alert.yml/tf/hcl files.
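A baseline is conceptually a filter applied on top of the detection output. A minimal sketch, where the known good (parent, child) process pairs are illustrative assumptions that would, per the point above, be documented in the ADS:

```python
# Known good behaviors to exclude from the detection output (illustrative).
BASELINE = {
    ("winword.exe", "cmd.exe"),  # e.g. a verified document-macro workflow
}

def apply_baseline(results):
    """Drop results that match a known and verified good behavior."""
    return [
        r for r in results
        if (r["parent_image"].lower(), r["image"].lower()) not in BASELINE
    ]

results = [
    {"parent_image": "winword.exe", "image": "cmd.exe"},         # baselined out
    {"parent_image": "winword.exe", "image": "powershell.exe"},  # kept
]
filtered = apply_baseline(results)
```

Keeping the baseline as data (rather than hard-coding it into the query) matches the practice above of shipping it in the hunt/alert definition files.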
Unittest Development
Unittest development depends on the type of DevOps pipeline in use, so only high-level goals are provided. Goals for unittesting:
  • Catch changes or missing data
  • Catch syntax errors
  • Confirm the detection logic by performing a true positive detection
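The goals above can be sketched with the standard unittest module. The detection function and the sample events here are illustrative assumptions; in a real pipeline the tests would run the actual query against a test dataset, and syntax errors would surface when the pipeline compiles or lints the query.

```python
import unittest

def detect(events):
    # Placeholder for the real detection logic under test (assumed).
    return [e for e in events if e.get("image") == "powershell.exe"]

class DetectionTest(unittest.TestCase):
    TRUE_POSITIVE = {"image": "powershell.exe", "user": "jdoe"}

    def test_required_fields_present(self):
        # Goal: catch schema changes or missing data.
        for field in ("image", "user"):
            self.assertIn(field, self.TRUE_POSITIVE)

    def test_true_positive_detected(self):
        # Goal: confirm the detection logic on a known true positive.
        self.assertEqual(detect([self.TRUE_POSITIVE]), [self.TRUE_POSITIVE])

    def test_benign_event_not_detected(self):
        self.assertEqual(detect([{"image": "notepad.exe", "user": "jdoe"}]), [])
```

Wired into the pipeline, a failing test blocks deployment of a broken detection.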
Enrich
Enrich with additional data sources if required.
  • Each hunt can have different enrichment requirements. In some cases an HR database could be used to understand whether a person is on vacation; other, more trivial cases could be the lookup of a hash, IP, or domain in a threat intelligence repository.
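As a minimal sketch of the hash-lookup case, enrichment attaches a verdict from a threat intelligence source to each result. The local indicator set stands in for a real TI repository and is an illustrative assumption:

```python
# Stand-in for a threat intelligence repository (illustrative).
KNOWN_BAD_HASHES = {
    "44d88612fea8a8f36de82e1278abb02f",  # EICAR test file MD5, for illustration
}

def enrich(results):
    """Attach a TI verdict to each result based on its file hash."""
    for r in results:
        r["ti_verdict"] = (
            "malicious" if r.get("md5") in KNOWN_BAD_HASHES else "unknown"
        )
    return results

results = enrich([
    {"md5": "44d88612fea8a8f36de82e1278abb02f"},
    {"md5": "0" * 32},
])
```

The same pattern applies to the HR-database case: the lookup source changes, but the detection output is annotated in place so the analyst has the context up front.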
Document
  • Create a KB document.
  • Complete the ADS.
  • Update the MITRE minefield.
  • A central knowledge base repository is required in order to mature the detection engineering program. This can be a GitHub repository with controlled access that grants security team members access on a need-to-know basis.
  • Each hunt should have a corresponding README.MD file that provides sufficient information and context. Consider an SOC analyst or incident responder handling an event from your detection: by looking at the documentation, they should be easily briefed on the premise and technicalities of the detection.
Validate

Confirm Unittests
Confirm that the unittests are working. This can be done by inspecting the implemented DevOps pipeline and ensuring that the actions for the unittests (in the case of GitHub) are running.
True Positive Validation
Validate a true positive event against the real dataset using the query developed earlier. True positive validation can be achieved by:
  • Using a historical event that exists in the central data repository
  • Emulating the TTP by executing it in a controlled environment
False Positive Validation
Ensure no false positives are produced by the query when it is run against the production dataset.
  • False positive events are known good events which appear in the output of the detection/hunt query.
  • If a baseline is used, validate that the baseline catches those known good events. Splunk example: you can use the makeresults command to create fake results and test your baseline and how it handles false positives.
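In the spirit of the Splunk makeresults example, false positive validation can be sketched generically: fabricate known good events, run them through the detection plus baseline, and require an empty result. The field names and the baselined pair are illustrative assumptions:

```python
# Known good behavior excluded by the baseline (illustrative).
BASELINE = {("winword.exe", "cmd.exe")}

def detect(events):
    """Detection plus baseline: Office parents spawning a shell, minus known good."""
    return [
        e for e in events
        if e["parent_image"] in {"winword.exe", "excel.exe"}
        and (e["parent_image"], e["image"]) not in BASELINE
    ]

# Fake, known good events (the Python analogue of | makeresults in Splunk).
known_good = [
    {"parent_image": "winword.exe", "image": "cmd.exe"},
    {"parent_image": "explorer.exe", "image": "cmd.exe"},
]
false_positives = detect(known_good)  # must be empty for the check to pass
```

If this check produces any results, either the baseline is incomplete or the detection logic is too broad.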
Automate

Automation & Deployment
This step is entirely dependent on the environment and should follow the organization's standard CI/CD or automation practices. Integrate with the DevOps pipeline and enable continuous deployment.
Share

Socialize the New Detection
Follow a process to communicate newly created detections to the security teams. A notification process is required and should be created; it can take the form of a newsletter or a Slack channel notification, preferably an automated one.
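For the automated Slack variant, the notification boils down to building a message payload and POSTing it to the organization's incoming-webhook URL. Only the payload construction is sketched here; the detection name and documentation URL are hypothetical:

```python
import json

def new_detection_message(name: str, ttp: str, readme_url: str) -> str:
    """Build the JSON payload announcing a newly deployed detection."""
    text = (
        f":rotating_light: New detection deployed: *{name}*\n"
        f"Covers: {ttp}\n"
        f"Docs: {readme_url}"
    )
    return json.dumps({"text": text})

payload = new_detection_message(
    "Office spawning shell",  # hypothetical detection name
    "T1059 - Command and Scripting Interpreter",
    "https://example.internal/detections/office-shell/README.md",
)
```

Triggering this from the same pipeline that deploys the detection keeps the notification step from being forgotten.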
Update Sec Dependency Tree
Update the organization-wide document showing dependencies for the detections. This document is part of the repository and can be shared with data engineering and security teams. Sharing it promotes a "check before you change" mentality: if a data engineer is about to rename an index, they should first check whether the index is in use by a detection. Having the dependency document in the repository makes that check easy and seamless.
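One possible shape for the dependency document, following the repository's existing yml convention, is an entry per detection mapping it to the indexes and data sources it relies on. The file name and fields below are illustrative assumptions, not a prescribed schema:

```yaml
# dependencies.yml - illustrative sketch
detections:
  - name: office-spawning-shell      # hypothetical detection
    index: endpoint
    data_sources:
      - sysmon_process_creation
    owner: detection-engineering
```

With this in place, a data engineer renaming the `endpoint` index can grep one file to find every detection the change would break.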

Process Flow

```mermaid
graph TD;
  Research1(Opportunity Identification) --> Research2(Prioritize);
  Research2 --> Research3(Develop Research Questions);
  Research3 --> Research4(Information Gathering);
  Research4 --> Research5(Collect Technical Context);
  Research5 --> Prepare1(Identify Dataset);
  Prepare1 --> Prepare2(Visibility Check);
  Prepare2 --> Prepare3{Improve};
  Prepare3 --> |yes| cis[Start security improvement initiative];
  Prepare3 --> |no| Build1(Detection Query Creation);
  Build1 --> Build2(Manual Testing);
  Build2 --> Build3(Baseline development);
  Build3 --> Build4(Automated Unittest Development);
  Build4 --> Build5(Enrich);
  Build5 --> Build6(Document);
  Build6 --> Validate1(Confirm unittests);
  Validate1 --> val2(True/False Positive validation);
  val2 --> automate(Automation & deployment);
  automate --> share(Socialize the new detection);
  share --> share1(Update Sec Dependency Tree);
```