Process

The on-boarding process for a new client consists of the following phases, which are detailed below:

Phase 1: Client creation and user registration

Creating clients and users is the first step the Partner must complete in The Platform before integrating data sources or activating dashboards. This phase establishes the organizational structure that allows information to be segregated among different end-clients, contractually agreed-upon SLAs (Service Level Agreements) to be applied, and appropriate permissions to be assigned to the users who will interact with the platform.

As this is a process with both technical and legal implications (identity and access management, regulatory compliance, personal data protection), it is recommended that the Partner execute it in a planned and documented manner.

Before integrating data sources, the Partner must manually register a new client in The Platform. This registration process involves:

  • Recording basic client information, such as the organization name, contractual ID, and primary contact information.

  • Defining service parameters, including agreed-upon SLAs (handle times, resolution times, accepted criticality). These parameters are linked to the client's service and will subsequently be used in managing alerts and tickets in the platform.

Client registration not only has an administrative component; it also creates a segregated space within the platform, ensuring that each end client's information is isolated from the rest.
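For illustration only, the information captured during client registration could be represented as in the sketch below. The structure, field names, and SLA values are hypothetical and do not correspond to any specific Platform API; they simply show the kind of data the Partner should gather before on-boarding.

```python
from dataclasses import dataclass, field

@dataclass
class ClientRecord:
    """Hypothetical example of the data gathered when registering a new client."""
    organization_name: str
    contractual_id: str
    primary_contact: str
    # Agreed service parameters (SLAs); values are illustrative only.
    sla: dict = field(default_factory=lambda: {
        "handle_time_minutes": {"critical": 30, "high": 120},
        "resolution_time_hours": {"critical": 4, "high": 24},
        "accepted_criticality": ["critical", "high", "medium"],
    })

# Example: the data a Partner would collect from the end client before on-boarding.
acme = ClientRecord(
    organization_name="ACME Corp",
    contractual_id="CT-2024-001",
    primary_contact="secops@acme.example",
)
```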

Registering users associated with a client

Once the client is created, the Partner can begin registering users associated with that organization. Each user represents a natural person with authorized access to the platform and requires the registration of minimal personal information to ensure proper identification and traceability of their activity.

Credential delivery and two-factor authentication

Once the user is registered on the platform, their initial access credentials are automatically generated. These credentials are delivered in two distinct phases:

  • Sending basic credentials: The Partner is responsible for providing the user with their access identifier (usually their email address) and the initial password or activation link.

  • Activating two-factor authentication (2FA): For security reasons, information related to the second factor (activation codes, QR codes, temporary keys) is sent directly by the platform to the end user, without going through the Partner. This ensures that only the user has access to their second factor, avoiding the risk of interception or impersonation.

In this way, control over the user's digital identity is strengthened, complying with the strong authentication principles recommended by regulatory frameworks and cybersecurity best practices.
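For readers unfamiliar with how a time-based second factor works, the sketch below uses the pyotp library to illustrate the general mechanism: a per-user secret delivered directly to the user (for example, encoded in a QR code) and a short-lived code verified at login. This is an illustration only, not the Platform's implementation.

```python
import pyotp

# The platform generates a per-user secret and delivers it directly to the user
# (e.g., encoded in a QR code), never through the Partner.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI that an authenticator app reads from the QR code.
uri = totp.provisioning_uri(name="user@client.example", issuer_name="The Platform")

# At login, the user submits the 6-digit code shown by their authenticator app.
code = totp.now()           # what the app would display right now
assert totp.verify(code)    # the platform validates it against the shared secret
```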

Principles of information segregation

The Platform's multi-client architecture is based on the principle of logical information isolation. This means that:

  • Users of a client can only access data, logs, dashboards, and alerts belonging to their own organization.

  • There is no cross-client visibility, ensuring confidentiality and avoiding the risk of information leakage.

  • Each client has its own management space, independent of other clients hosted on the same platform instance.

This design meets both technical needs and legal requirements, ensuring that each client's data is managed exclusively and in accordance with applicable contractual and regulatory clauses.
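Conceptually, logical isolation means that every query or API call is scoped to the requesting user's client. A minimal sketch of the idea, with hypothetical field names, is shown below.

```python
def visible_alerts(all_alerts, requesting_user):
    """Hypothetical illustration of logical isolation: a user only ever sees
    alerts belonging to the client(s) they are assigned to."""
    allowed_clients = set(requesting_user["clients"])
    return [a for a in all_alerts if a["client_id"] in allowed_clients]

alerts = [
    {"id": 1, "client_id": "acme", "title": "Brute force on VPN"},
    {"id": 2, "client_id": "globex", "title": "Malware on endpoint"},
]
analyst = {"name": "alice", "clients": ["acme"]}

print(visible_alerts(alerts, analyst))  # only the 'acme' alert is returned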

Flexibility in user assignment

Although the general rule is for a user to belong to a single client, the platform offers advanced permission management mechanisms that allow administrators to assign multiple clients to the same user.

This model is especially useful in scenarios such as:

  • Global or multinational clients, where a single person (e.g., a corporate CISO) needs visibility over multiple subsidiaries or business units managed as separate clients on the platform.

  • Business groups with multiple organizational units, where specific security officers need to be able to oversee the posture of all of them.

  • Partners with a complex hierarchical structure who wish to grant an internal user (e.g., a SOC team leader) aggregated visibility over multiple managed clients.

In these cases, platform administrators must carefully design the permissions and roles structure, ensuring that appropriate segregation is maintained and that each user only accesses the contractually defined scope.
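A simplified sketch of such a multi-client assignment, with hypothetical identifiers and roles, might look like the following; the real permission model is configured in the Platform itself.

```python
# Hypothetical multi-client assignment: one user, several clients, a role per client.
assignments = {
    "ciso@group.example": {"subsidiary-es": "viewer", "subsidiary-fr": "viewer"},
    "soclead@partner.example": {"acme": "manager", "globex": "manager"},
}

def can_access(user: str, client_id: str, required_role: str = "viewer") -> bool:
    """Return True only if the user is explicitly assigned to the client."""
    role_order = {"viewer": 0, "manager": 1, "admin": 2}
    role = assignments.get(user, {}).get(client_id)
    return role is not None and role_order[role] >= role_order[required_role]

print(can_access("ciso@group.example", "subsidiary-es"))   # True
print(can_access("ciso@group.example", "acme"))            # False: outside contractual scope
```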

Operational recommendations for the Partner

  • Pre-planning: Gather the necessary data (users, roles, SLAs) from the end customer before initiating on-boarding.

  • Process documentation: Maintain a formal record of customers and users created, including on-boarding dates, assigned roles, and approvals received.

  • Compliance with GDPR and local regulations: Ensure that the collection and storage of personal data is carried out legally and with appropriate protection measures in place.

  • Periodic access review: Regularly audit active users, disable inactive accounts, and adjust roles based on changes in the client's organization.

  • Clear communication with the client: Explain to the end customer the credential delivery process and the mandatory use of two-factor authentication, reinforcing the security culture.

Phase 2: Choosing the sources to integrate

Selecting the sources to integrate is one of the most critical steps in the process of on-boarding a new client to the Partner's SOC services. In this phase, the Partner and the end client work together to determine which systems, devices, and services should be connected to the platform, thus establishing the initial scope of visibility and security coverage.

The correct selection of sources is crucial for the following reasons:

  1. Security: Devices and services that generate events are the raw material for threat detection. If the appropriate sources are not integrated, the organization may be exposed to risks that will go unnoticed. For example, integrating only firewalls but not authentication systems can prevent the detection of credential stuffing attacks or lateral movement. The selection of sources must, therefore, prioritize those systems that are critical for protection against advanced threats and that provide relevant indicators of compromise.

  2. Observability and context: In addition to detecting attacks, it is essential for the SOC to understand the overall behavior of the client's infrastructure. This involves capturing not only security logs, but also events from key applications, cloud services, and network systems that add context to the activity of key elements. A good definition of sources expands the ability to observe and analyze, providing additional context that helps differentiate between a false positive and a real incident.

In this phase, the Partner provides essential added value. Thanks to their expert knowledge of cybersecurity and the operation of the Platform, the Partner must advise the client on:

  • Which sources are essential to ensure threat detection in their environment.

  • What additional integrations may be necessary to comply with applicable regulatory frameworks (e.g., ISO 27001, NIST CSF, PCI DSS, ENS in Spain, GDPR, etc.).

  • How to prioritize sources based on their criticality, business impact, and associated risks.

  • What is the best integration strategy to balance security coverage with operational efficiency (e.g., avoiding overloading the service with low-value or difficult-to-maintain sources).

The result of this definition exercise is a clear integration map that will serve as a roadmap for deployment. This map should include:

  • The inventory of critical systems to be integrated (security, network, endpoints, cloud).

  • The regulatory and compliance framework that governs the integration.

  • The prioritization order and recommended phases for addressing data ingestion.

A rigorous definition of sources ensures that the investment in The Platform translates into real visibility into the client's attack surface, improving both early detection and response and reporting capabilities in compliance audits.

Below is an example of the source integration map required for successful provisioning and subsequent service to a new client:

| Source Category | Specific Examples | Suggested Priority | Security Value | Compliance Value | Comments |
| --- | --- | --- | --- | --- | --- |
| Perimeter security devices | Firewalls (Palo Alto, Fortinet, Check Point), WAFs, IDS/IPS | High | Critical: allows detection of network attacks, unauthorized access, and exploit attempts | Required by PCI DSS, ISO 27001, ENS | Always integrate all perimeter elements; review log volume |
| Authentication and access control systems | Active Directory, LDAP, Azure AD, RADIUS, Okta | High | Vital: detects unauthorized access, brute-force attacks, privilege abuse | GDPR (access control), SOX, ISO 27001 | Ensure centralized authentication and authorization logs |
| Endpoint systems and servers | Windows Event Logs, Linux syslog, EDR/XDR (CrowdStrike, Defender ATP, SentinelOne) | High | Enables threat correlation across end users and critical servers | ENS (endpoint monitoring), NIST CSF | Prioritize business-critical servers and VIP endpoints |
| Network and connectivity | Switches, routers, VPN concentrators, load balancers | Medium | Important for detecting traffic anomalies and unauthorized access | ISO 27001 (network security), PCI DSS | Include devices that support rich logs (NetFlow, sFlow) |
| Business applications | SAP, ERPs, databases, financial applications, industry applications | Medium/High depending on the client | Essential for detecting fraud and unauthorized access to sensitive data | GDPR (personal data), financial sector (EBA, PCI) | Requires specific parsers; define scope with the client |
| Cloud services | AWS (CloudTrail, GuardDuty, VPC Flow Logs), Azure (Event Hub, Security Center), GCP (Pub/Sub), SaaS (O365, Salesforce) | High | Essential for monitoring hybrid and cloud-native environments | Required by cloud security frameworks (CIS, CSA CCM) | Validate minimum permissions and secure credentials for integration |
| External sources | Threat intelligence feeds, IP/DNS reputation, blacklists | Medium | Provides context to enrich detection and correlations | Not always required by regulations, but recommended | Integrate sources aligned with the client's industry and geographic area |

Phase 3: Integration of data sources

The Data Source Integration phase is one of the fundamental pillars of the on-boarding process. It is at this stage that end customers connect their systems, devices, and services to the Partner's Platform instance so that events from the data sources are continuously collected, processed, and analyzed.

This integration ensures that the Platform has a stream of events feeding the data lake, detection engines, detectors, AI, and the processors that generate security indicators. Without a complete and properly designed source integration, SOC visibility is significantly reduced, limiting the Partner's ability to generate indicators and to deliver the contracted services.

The main objective of this phase is to establish the continuous, secure, and structured collection of security events from the customers' technological infrastructures. This involves:

  • Configuring on-premises devices (firewalls, proxies, IDS/IPS, servers, endpoints, etc.) to send their logs to the Platform.

  • Integrating the client's cloud services (AWS, Azure, Google Cloud, or other SaaS services) using connectors native to the Integrations tool, which retrieve events via APIs or log streaming services.

  • Ensuring that the information received is properly processed and enriched so that it can be used by the platform's components and aligned with the objectives of the services to be provided to the client.

Note: The end customer is responsible for configuring their equipment (e.g., firewalls, servers, proxies, switches, endpoints) to send logs using the protocols, agents, or connectors supported for each data source. The Partner assists in the configuration of these elements, adding value to the service offered to the customer.
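As an example of API-based collection from a cloud source, the sketch below pulls recent management events from AWS CloudTrail using boto3. It is a simplified illustration of what a cloud connector does, not the Platform's own connector; credentials, least-privilege permissions, and scheduling are assumed to be handled elsewhere.

```python
import datetime as dt
import boto3

# Read-only CloudTrail client; assumes credentials with minimal permissions
# (cloudtrail:LookupEvents) are already configured in the environment.
cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")

end = dt.datetime.now(dt.timezone.utc)
start = end - dt.timedelta(minutes=15)

# Pull the latest management events; a real connector would run this on a
# schedule, paginate with NextToken, and forward each event to the platform.
response = cloudtrail.lookup_events(StartTime=start, EndTime=end, MaxResults=50)
for event in response.get("Events", []):
    print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```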

Phase 4: Log formatting and processing

Once connectivity between the data source and the Platform has been established, before proceeding to validate the ingestion, it is essential to perform a formatting and processing phase. This stage aims to ensure that the events received from the source are correctly interpreted by the MDR Platform, so that the fields of interest can be extracted and enriched for subsequent analysis and processing by the detection engines and AI Framework.

The devices and services that generate logs produce information in very heterogeneous formats: from plain text messages to complex JSON structures or proprietary vendor formats. This diversity requires an analysis and adaptation process to ensure that the information contained in the events is usable by the detection engines, indicator generation, and dashboards of the MDR Platform.

At this point, the Partner plays a critical role: they must analyze the content and structure of the received logs and configure the appropriate transformation rules using the Transformation Pipelines tool included in the Platform. This functionality allows the Partner to define parsing processes and extract key information, such as the following (a parsing sketch follows this list):

  • Field identification and separation (timestamp, source, destination, user, action, result, severity, etc.).

  • Field extraction based on the event format, whether structured (e.g., JSON) or unstructured (using Grok patterns or regular expressions).

  • Normalization of time formats and values (e.g., converting different date formats to a single standard).

  • Enrichment of events with additional information (IP geolocation, user categorization, etc.).

  • Filtering of irrelevant or redundant information to optimize storage capacity and analysis efficiency.
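To make the idea concrete, the sketch below shows the kind of transformation a pipeline performs on a raw, unstructured firewall line: field extraction with a regular expression, timestamp normalization, and a simple enrichment. The sample log line and field names are illustrative; the actual rules are configured in the Transformation Pipelines tool, not in code.

```python
import re
from datetime import datetime, timezone

RAW = "Oct 02 2024 14:31:07 fw01 DENY src=203.0.113.7 dst=10.0.0.5 dport=3389 user=jdoe"

# Grok-style extraction expressed as a regular expression (illustrative fields).
PATTERN = re.compile(
    r"(?P<ts>\w{3} \d{2} \d{4} \d{2}:\d{2}:\d{2}) (?P<host>\S+) (?P<action>\w+) "
    r"src=(?P<src_ip>\S+) dst=(?P<dst_ip>\S+) dport=(?P<dst_port>\d+) user=(?P<user>\S+)"
)

def parse(line: str) -> dict:
    event = PATTERN.match(line).groupdict()
    # Normalize the timestamp to a single standard (ISO 8601, UTC).
    ts = datetime.strptime(event.pop("ts"), "%b %d %Y %H:%M:%S")
    event["timestamp"] = ts.replace(tzinfo=timezone.utc).isoformat()
    event["dst_port"] = int(event["dst_port"])
    # Simple enrichment: tag events that hit an administrative port.
    event["tags"] = ["remote-admin"] if event["dst_port"] in (22, 3389) else []
    return event

print(parse(RAW))
```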

The formatting and processing phase is crucial for the following reasons:

  • Data quality: A poorly parsed log can miss key information or generate false positives/negatives in security use cases.

  • Operational efficiency: Proper normalization reduces the effort of SOC analysts by presenting information in a uniform and consistent manner, regardless of the source.

The expected result of this stage is a flow of events structured according to a common Platform data model, ready for validation in the subsequent ingestion phase. This ensures that the information provided by each data source is not only collected but also converted into actionable intelligence for threat detection and regulatory compliance.

Phase 5: Use Cases

This phase ensures that the SOC Platform processes the integrated data to generate actionable indicators and detections, as well as security metrics. The Platform has an extensive catalog of rules, associations with the MITRE ATT&CK framework, CIS/NIST controls, fraud patterns, exfiltration, privilege abuse, identity issues, network anomalies, API abuse, and signals specific to SaaS and cloud environments (AWS, Azure, GCP, O365, etc.).

For automatic activation to be possible, the following points must be met:

  • Source coverage: the data lake must reflect events from the integrated sources.

  • Field quality: events must be correctly parsed and their key information properly extracted, such as timestamps, IP addresses/ports, identity (user/role), results (success/failure), and objects (file, process, API), together with any specific mappings or geo-IP enrichments. This requirement is essential, since the activation of use cases depends on the characteristics of the information entering the data lake.

In addition to the platform's own use case library, which is continually expanded to cover the detection of new threats and the generation of new KPIs, Partner analysts can create their own rules using the rules engine's configurator. To do so, analysts can use the documentation in Section X of the Platform's operating manual.
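Custom rules are built in the rules engine's configurator; the snippet below only sketches, in plain Python, the logic a typical custom rule encodes (a brute-force detection based on repeated authentication failures within a time window). All thresholds and field names are illustrative.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # failed logins from the same source within the window

def detect_bruteforce(events):
    """events: parsed authentication logs with 'timestamp', 'src_ip', 'result'."""
    failures = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["result"] != "failure":
            continue
        ts = datetime.fromisoformat(e["timestamp"])
        bucket = failures[e["src_ip"]]
        bucket.append(ts)
        # Keep only the failures inside the sliding window.
        bucket[:] = [t for t in bucket if ts - t <= WINDOW]
        if len(bucket) >= THRESHOLD:
            alerts.append({"rule": "auth-bruteforce", "src_ip": e["src_ip"], "at": ts})
            bucket.clear()  # avoid re-alerting on the same burst
    return alerts
```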

Partner Responsibilities: SLA and Analyst Staffing

Automatic activation means that alerts begin to be generated as soon as valid signals exist. From that moment on, the Partner must ensure response within the SLAs agreed upon with the client. Recommendations (an SLA calculation sketch follows this list):

  • Sizing: Estimate alert volume per client (based on source coverage, risk profile, and initial baseline) and adjust shifts (24x7/8x5), skill mix (L1/L2/L3), and super-automation tools.

  • Typical minimum SLAs:

    • MTTA (Mean Time to Acknowledge): e.g., 15–30 min for critical severity, 60–120 min for high severity.

    • MTTR (Mean Time to Resolution): Based on playbooks and containment capacity with super-automation.

    • Coverage: % of alerts responded to within the SLA window (target ≥ 95% for critical/high).

  • Escalation flows: objective criteria for moving from L1 to L2 to L3 (additional evidence required, impact, scope).

  • Runbooks: Use-case guides with triage, validation, containment, communication, and post-incident steps.
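As an aid to sizing and SLA reporting, the sketch below computes MTTA and SLA coverage from a small list of alerts. The timestamps, severities, and thresholds are illustrative only; in practice these figures would come from the platform's own metrics.

```python
from datetime import datetime

SLA_MINUTES = {"critical": 30, "high": 120}  # illustrative acknowledgement SLAs

alerts = [
    {"severity": "critical", "created": "2024-10-02T10:00:00", "acknowledged": "2024-10-02T10:12:00"},
    {"severity": "high",     "created": "2024-10-02T11:00:00", "acknowledged": "2024-10-02T13:30:00"},
]

def minutes_between(a: str, b: str) -> float:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 60

ack_times = [minutes_between(a["created"], a["acknowledged"]) for a in alerts]
mtta = sum(ack_times) / len(ack_times)

within_sla = [t <= SLA_MINUTES[a["severity"]] for a, t in zip(alerts, ack_times)]
coverage = 100 * sum(within_sla) / len(within_sla)

print(f"MTTA: {mtta:.1f} min, SLA coverage: {coverage:.0f}%")  # target >= 95% for critical/high
```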

Phase 6: Dashboards

The Dashboards phase ensures that all information processed by the MDR Platform is transformed into operational and strategic visibility for the Partner and its clients.

Thanks to a predefined library of dashboards, the Platform automatically deploys relevant dashboards based on the integrated data sources and activated use cases. This way, security teams not only receive individual alerts but also have visual panels that consolidate trends, key metrics, and correlations, providing a comprehensive view of each client's security and compliance status.

The dashboard deployment logic follows a similar approach to that for use cases:

  1. The Platform evaluates the characteristics of the received logs (available fields, time granularity, volume, enrichment applied).

  2. It identifies which dashboards from the library meet the requirements for activation (for example, network activity dashboards require source/destination/port fields, identity dashboards require user information, and compliance dashboards require logs with information on specific controls).

  3. It automatically activates the relevant dashboards, ensuring each client receives visualizations aligned with their actual sources and risks.

This process is completely unattended and prevents the Partner from having to invest time manually configuring dashboards or queries. At the same time, it ensures that the dashboards are always up-to-date when new data sources are integrated.
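Conceptually, this activation logic is a requirements match between the fields available in the ingested data and the fields each dashboard needs. A simplified sketch, with hypothetical dashboard names and requirements, is shown below.

```python
# Hypothetical dashboard catalog: each entry lists the fields it requires.
CATALOG = {
    "Network activity": {"src_ip", "dst_ip", "dst_port"},
    "Identity overview": {"user", "result"},
    "PCI DSS controls": {"control_id", "result"},
}

def activated_dashboards(available_fields: set) -> list:
    """Return the dashboards whose field requirements are met by the ingested data."""
    return [name for name, required in CATALOG.items() if required <= available_fields]

# Fields observed in the client's events after parsing and enrichment.
fields = {"timestamp", "src_ip", "dst_ip", "dst_port", "user", "result"}
print(activated_dashboards(fields))  # ['Network activity', 'Identity overview']
```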

In addition to the catalog of dashboards included in the platform, users can create their own visualizations, indicators, and scorecards, adding information from multiple sources, not only security but also from any other business dimension, such as marketing, IT, commercial information, or user behavior. To do this, simply consult the Dashboard Creation section of the Platform's operations manual.

Dashboards not only serve a visual purpose but also constitute a strategic tool for the Partner:

  • They allow the Partner to provide end customers with continuous, objective reporting on the service delivered.

  • They facilitate SLA management through indicators of incident response and resolution times.

  • They help identify anomalous trends or emerging risks early, even before they generate critical alerts.

  • They provide material for security committees and regulatory audits, demonstrating the SOC's monitoring and response capabilities.

Although dashboard deployment is automatic, the Partner is responsible for:

  • Reviewing the activated dashboards and validating that they align with the client's security objectives.

  • Customizing dashboards if the client requires specific visualizations (e.g., a financial KPI or a specific sector control).

  • Explaining the information contained in the dashboards to different audiences (SOC analysts, IT managers, executive committees).

  • Ensuring operational continuity, so that the dashboards continue to receive data after infrastructure changes or the incorporation of new sources.

Upon completing this phase, each client has a set of dynamic dashboards tailored to their sources, allowing analysts and security managers to monitor their security posture in real time, assess risks, and comply with regulatory obligations.

With this, the Partner not only enables detection and response but also provides the client with observability capabilities, which translate technical complexity into understandable and actionable indicators.
