Workgroup:
NMOP
Internet-Draft:
draft-netana-nmop-network-anomaly-lifecycle-01
Published:
March 2024
Intended Status:
Experimental
Expires:
16 September 2024
Authors:
V. Riccobene
Huawei
A. Roberto
Huawei
T. Graf
Swisscom
W. Du
Swisscom
A. Huang Feng
INSA-Lyon

Experiment: Network Anomaly Lifecycle

Abstract

Accurately detecting network anomalies is very challenging for network operators in production networks. Good results require a lot of expertise and knowledge about both the involved network technologies and the specific services provided to consumers, in addition to a proper monitoring infrastructure. In order to facilitate the detection of network anomalies, novel techniques are being introduced, including AI-based ones, with the promise of improved scalability and the hope of maintaining high detection accuracy. To guarantee acceptable results, the process needs to be properly designed, adopting well-defined stages to accurately collect evidence of anomalies, validate their relevance and improve the detection systems over time.

This document describes a lifecycle process to iteratively improve network anomaly detection accuracy. Three key stages are proposed, along with a YANG data model specifying the metadata required for network anomaly detection across the different stages of the lifecycle.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 16 September 2024.

Table of Contents

1.  Status of this document
2.  Introduction
3.  Terminology
4.  Defining Desired States
5.  Lifecycle of a Network Anomaly
  5.1.  Network Anomaly Detection
  5.2.  Network Anomaly Validation
  5.3.  Network Anomaly Refinement
6.  Network Anomaly State Machine
  6.1.  Overview of the Model for the Network Anomaly Metadata
7.  Implementation status
  7.1.  Antagonist
8.  Security Considerations
9.  Acknowledgements
10. Normative References
Authors' Addresses

1. Status of this document

This document is experimental. Its main goal is to propose an iterative lifecycle process for network anomaly detection, together with a data model for the metadata to be exchanged at the different lifecycle stages.

The experiment consists of verifying whether the approach is usable in real use-case scenarios to support proper refinement and adjustment of network anomaly detection algorithms. The experiment can be deemed successful if the approach is validated by at least one open-source implementation successfully applied in real production networks.

2. Introduction

In [Ahf23] network anomalies are defined as "Whatever would let an operator frown and investigate when looking at the collected forwarding plane, control plane and management plane network data relative to a customer".

In [I-D.netana-nmop-network-anomaly-semantics], semantics for the annotation of network anomalies are defined in order to support the exchange of related metadata between different actors, formalizing a semantically consistent representation of the behaviors worth investigating. In the same document, symptoms are defined as the essential pieces of information for analyzing network anomalies and incidents.

The intention is to enable operators to detect network incidents in a timely manner. A network incident can be defined as "An event that has a negative effect that is not as required/desired" (see [I-D.davis-nmop-incident-terminology]), or even more broadly, as "An unexpected interruption of a network service, degradation of network service quality, or sub-health of a network service" [TMF724A].

With all this in mind, this document starts from the assumption that it is still remarkably difficult to gain a full understanding and a complete perspective of "if" and "how" the network is deviating from the desired state: on the one side, symptoms are not necessarily a guarantee of an incident happening (false positives); on the other side, the lack of symptoms is not a guarantee of the absence of an incident (false negatives). The concept of a network anomaly in this document plays the role of a bridge between symptoms and incidents: a network anomaly is defined as a collection of symptoms, but without the guarantee that the observed symptoms are impacting existing services. This leads to the necessity of further validating network anomalies to understand whether the detected symptoms are actually impacting services, which requires different actors (both human and algorithmic) to step in during the process and refine their understanding across the network anomaly lifecycle.

Performing network anomaly detection is a process that requires continuous learning and continuous improvement. Network anomalies are detected by collecting and understanding symptoms, then validated by confirming that they actually were service impacting, and eventually further analyzed through postmortem analysis to identify any potential adjustment that improves the detection capability. Each of these stages is an opportunity to learn and refine the process, and since these stages might also be provided by different parties and/or products, this document contributes a formal structure to capture and exchange symptom information across the lifecycle.

3. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

This document makes use of the terms defined in [I-D.davis-nmop-incident-terminology].

The following terms are used as defined in [RFC9417].

The following terms are defined in this document.

4. Defining Desired States

The above definitions of a network incident provide the scope for what to look for when detecting network anomalies. Concepts like "desirable state" and "required state" are introduced. This draws attention to a significant problem that network operators have to face: the definition of what is to be considered "desirable" or "undesirable". It is not always easy to detect whether a network is operating in an undesired state at a given point in time. To approach this, network operators can rely on different methodologies, more or less deterministic and more or less sensitive: on the one side, the definition of intents (including Service Level Objectives and Service Level Agreements), which approaches the problem top-down; on the other side, the definition of symptoms, by means of solutions like SAIN [RFC9417], [RFC9418] and Daisy [Ahf23], which approaches the problem bottom-up. At the center of these approaches are the so-called symptoms, defined as reasons explaining what is not working as expected in the network, sometimes also providing hints towards issues and their causes.

One of the more deterministic approaches is to rely on symptoms based on measurable service-based KPIs, for example, by using Service Level Indicators, Objectives and Agreements:

Service Level Agreement (SLA)
An SLA is an agreement that a service provider makes with its customers about the behavior of the provided service. SLAs are a tool to define exactly what customers can expect from the service provided to them. In many cases, SLA breaches also come with contractual penalties.
Service Level Objectives (SLOs)
An SLO is a threshold above which the service provider acts to prevent a breach of an SLA. SLOs are a tool for service providers to know when they should start becoming concerned about a service not behaving as expected. SLOs are rarely connected to penalties as they usually are internal metrics for the service providers.
Service Level Indicators (SLIs)
An SLI is an observable metric that describes the state of a monitored subsystem. SLIs are a tool to gain measurable visibility about the behavior of a subsystem in the network. SLIs are usually the basis for SLOs, as the main difference between an SLI and SLO is that SLOs usually are defined as thresholds applied to SLIs.
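
As a purely illustrative example (the metric and the thresholds below are hypothetical and not defined by this document): for a VPN connectivity service, an SLI could be the measured packet loss ratio; an SLO could require that ratio to stay below 0.1% over any 5-minute window, triggering internal alerting when crossed; and the SLA could commit to a monthly packet loss ratio below 0.5%, with contractual penalties applied if breached.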

However, the definition of these KPIs turns out to be very challenging in some cases, as collecting accurate KPIs could require computationally expensive techniques or substantial modifications to existing network protocols.

Alternative methodologies rely on symptoms as the way to generate analytical data out of operational data. For instance:

SAIN
introduces the definition and exposure of symptoms as a mechanism for detecting concerning behaviors in a more deterministic way. Moreover, SAIN introduces the concept of an "impact score" to indicate the expected degree of impact that a given symptom will have on the services relying on the subservice to which the symptom is attached.
Daisy
introduces the concept of a "concern score" to indicate the degree of concern that a given symptom could cause a degradation of a service.

In general, accurately defining the boundary between desirable and undesirable behavior requires continuous iterations and improvements coming from all the stages of the network anomaly detection lifecycle, by which network engineers can transfer what they learn through the process into new symptom definitions or refinements of the detection algorithms.

5. Lifecycle of a Network Anomaly

The lifecycle of a network anomaly can be articulated in three phases, structured as a loop: Detection, Validation, Refinement.

                        +-------------+
            +--------> |  Detection  | ---------+
Adjustments |          +-------------+          | Symptoms
            |                                   |
            |                                   v
    +------------+                       +------------+
    | Refinement |<--------------------- | Validation |
    +------------+        Incident       +------------+
                        Confirmation
Figure 1: Anomaly Detection Refinement Lifecycle

Each of these phases can be performed by a network expert, by an algorithm, or by both complementing each other.

The network anomaly metadata is generated by an author, which can be either a human expert or an algorithm. The author can produce metadata for a network anomaly at each stage of the cycle, and even multiple versions for the same stage. In each version of the network anomaly metadata, the author indicates the list of symptoms that are part of the network anomaly under consideration. The iterative process is about identifying the right set of symptoms.

5.1. Network Anomaly Detection

The Network Anomaly Detection stage is about the continuous monitoring of the network through Network Telemetry [RFC9232] and the identification of symptoms. One of the main requirements that operators have on network anomaly detection systems is high accuracy. This means a small number of false negatives (symptoms causing service impact are not missed) and false positives (symptoms that are actually innocuous are not picked up).

As the detection stage is becoming more and more automated for production networks, the identified symptoms might point towards three potential kinds of behaviors:

i. those that surely correspond to an impact on services (e.g., the breach of an SLO),

ii. those that will cause problems in the future (e.g., a rising trend on a timeseries metric heading towards saturation),

iii. those for which the impact on services cannot be confirmed (e.g., a sudden increase/decrease of a timeseries metric, anomalous amounts of log entries, etc.).

The first category requires immediate intervention (a.k.a. the incident is "confirmed"), the second one provides pointers towards early signs of an incident potentially happening in the near future (a.k.a. the incident is "forecasted"), and the third one requires further analysis to confirm whether the detected symptoms require any attention or immediate intervention (a.k.a. the incident is "potential"). As part of the iterative improvement required at this stage, a very relevant one is the gradual conversion of the third category into one of the first two, which would make the network anomaly detection system more deterministic. The main objective is to reduce uncertainty around the raised alarms by refining the detection algorithms, for instance by generating new symptom definitions, adjusting the weights of automated algorithms, or other similar approaches.
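
In terms of the state machine defined in Section 6, these three categories correspond to the "incident-confirmed", "incident-forecasted" and "incident-potential" states of a network anomaly, respectively.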

5.2. Network Anomaly Validation

The key objective of the validation stage is to decide whether the detected symptoms signal a real incident (i.e., immediate action is required) or whether they are to be treated as false positives (i.e., the alarm is suppressed). For those symptoms that surely have an impact on services, 100% confidence that a network incident is happening can be assumed. For the other two categories, "forecasted" and "potential", further analysis and validation is required.

5.3. Network Anomaly Refinement

After validation of an incident, the service provider has to perform troubleshooting and resolution of the incident. Although the network might be back in a desired state at this point, network operators can perform a detailed postmortem analysis of the network incident with the objective of identifying useful adjustments to the prevention and detection mechanisms (for instance, improving or extending the definition of SLIs and SLOs, refining concern/impact scores, etc.) and of improving the accuracy of the validation stage (e.g., automating parts of the validation, implementing automated root cause analysis and automation for remediation actions). In this stage of the lifecycle, the incident is assumed to be under analysis.

After the adjustments are applied to the network anomaly detection methods, the cycle starts again: the network anomaly is "replayed" to check whether there is any measurable improvement in the ability to detect incidents using the updated methods.

6. Network Anomaly State Machine

From a network anomaly detection point of view, a network anomaly is defined as a collection of interrelated symptoms. From this perspective, a network anomaly can go through the following states (Figure 2).

                                             +---------+
                                             | Initial |-----------------+
                                             +---------+                 |
                                                  |                      |
                                            +-----+---------+            |
                                   +--------|---------------|------+     |
                                   | +------v-----+  +------v----+ |     |
                                   | |  Incident  |  |  Incident | |     |
                             +---->| | Forecasted |  | Potential | |     |
                             |     | +------------+  +-----------+ |     |
                             |     +--------|--Detection---|-------+     |
                             |              |              |             |
         +-------+            |              +-------+------+             |
        | Final |            |                      |                    |
        +---^---+            |                      |                    |
            |                |                      |                    |
            |                |                      v                    |
            |                |     +-----------Validation------------+   |
+-----------------------+    |     |  +-----------+                  |   |
|           |           |    |     |  |  Network  |   +-----------+  |   |
|  +-----------------+  |    |     |  |  Anomaly  |   |  Incident |  |   |
|  | Network Anomaly |  |    |     |  | Discarded |   | Confirmed |<-|---+
|  |     Adjusted    |-------+     |  +-----|-----+   +-----------+  |
|  +--------^--------+  |          +---------------------------------+
|           |           |                   |               |
|           |           |               +---v---+           |
|           |           |               | Final |           |
|           |           |               +-------+           |
| +---------|--------+  |                                   |
| | Network Anomaly  |  |                                   |
| |     Analyzed     |<-|-----------------------------------+
| +------------------+  |
+-------Refinement------+
Figure 2: Network Anomaly State Machine

6.1. Overview of the Model for the Network Anomaly Metadata
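
The tree diagram in Figure 3 uses the graphical notation for YANG data models defined in [RFC8340].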

module: ietf-network-anomaly-metadata
  +--rw network-anomalies
     +--rw network-anomaly* [id author-name version state]
        +--rw id             yang:uuid
        +--rw description?   string
        +--rw author-name    string
        +--rw author
        |  +--rw author-type?    identityref
        |  +--rw algo-version?   uint8
        +--rw version        uint8
        +--rw state          identityref
        +--rw symptoms* [symptom_id]
           +--rw symptom_id    yang:uuid
Figure 3: YANG tree diagram for ietf-network-anomaly-metadata
<CODE BEGINS> file "ietf-network-anomaly-metadata@2024-02-26.yang"

module ietf-network-anomaly-metadata {
  yang-version 1.1;
  namespace "urn:ietf:params:xml:ns:yang:ietf-network-anomaly-metadata";
  prefix network_anomaly_metadata;

  import ietf-yang-types {
    prefix yang;
    reference "RFC 6991: Common YANG Data Types";
  }

  organization
    "IETF NMOP Working Group";
  contact
    "WG Web:   <https://datatracker.ietf.org/wg/nmop/>
     WG List:  <mailto:nmop@ietf.org>

     Authors:  Vincenzo Riccobene
               <mailto:vincenzo.riccobene@huawei-partners.com>
               Antonio Roberto
               <mailto:antonio.roberto@huawei.com>
               Thomas Graf
               <mailto:thomas.graf@swisscom.com>
               Wanting Du
               <mailto:wanting.du@swisscom.com>
               Alex Huang Feng
               <mailto:alex.huang-feng@insa-lyon.fr>";
  description
    "This module defines objects for the description of network anomalies.
      Network anomalies are a collection of symptoms observed on
      the network nodes.

      Copyright (c) 2024 IETF Trust and the persons identified as
      authors of the code.  All rights reserved.

      Redistribution and use in source and binary forms, with or
      without modification, is permitted pursuant to, and subject
      to the license terms contained in, the Revised BSD License
      set forth in Section 4.c of the IETF Trust's Legal Provisions
      Relating to IETF Documents
      (https://trustee.ietf.org/license-info).

      This version of this YANG module is part of RFC XXXX; see the RFC
      itself for full legal notices.";

  revision 2024-02-26 {
    description
      "Initial version";
    reference
      "RFCXXXX: Experiment: Network Anomaly Postmortem Lifecycle";
  }

  identity author-type {
    description
      "Type of the author of the network anomaly metadata";
  }

  identity user {
    base author-type;
    description
      "A real user (person) generated the network anomaly metadata";
  }

  identity algorithm {
    base author-type;
    description
      "An algorithm generated the network anomaly metadata";
  }

  identity network-anomaly-state {
    description
      "Base identity for representing the state of the network anomaly";
  }
  identity incident-forecasted {
    base network-anomaly-state;
    description
      "An incident has been forecasted, as it is expected that
      the indicated list of symptoms will impact a service
      in the near future";
  }
  identity incident-potential {
    base network-anomaly-state;
    description
      "An incident has been detected with a confidence
      lower than 100%. In order to confirm that this set of
      symptoms are generating service impact, it requires further
      validation";
  }
  identity incident-confirmed {
    base network-anomaly-state;
    description
      "After validation, the incident has been confirmed";
  }
  identity discarded {
    base network-anomaly-state;
    description
      "After validation, the network anomaly has been
      discarded, as there is no evidence that it is causing an
      incident";
  }
  identity analysed {
    base network-anomaly-state;
    description
      "The anomaly detection went through analysis to identify
      potential ways to further improve the detection process
      for future anomalies";
  }
  identity adjusted {
    base network-anomaly-state;
    description
      "The network anomaly has been solved and analysed.
      No further action is required.";
  }

  container network-anomalies {
    description "Container having the network anomalies";
    list network-anomaly {
      key "id author-name version state";
      description "A network anomaly identified by an id, author-name, version
        and state.";
      leaf id {
        type yang:uuid;
        description
            "Unique ID of the network network anomaly";
      }
      leaf description {
        type string;
        description
          "Textual description of the network anomaly";
      }
      leaf author-name {
        type string;
        description
          "Name of the user (person) or of the algorithm that
          generated the network anomaly metadata";
      }
      container author {
        description
          "Additional information about the author of the network
          anomaly metadata: the author type and, if the anomaly was
          reported by an algorithm, the version of that algorithm.";
        leaf author-type {
          type identityref {
            base author-type;
          }
          description "The type of author who reported the anomaly.";
        }
        leaf algo-version {
          type uint8;
          description
            "Version of the algorithm used to produce the network
            anomaly metadata.  This is used only if the author
            type is an algorithm";
        }
      }
      leaf version {
        type uint8;
        description
          "Version of the incident metadata object.
          It allows multiple versions of the metadata to be
          generated in order to support the definition of
          multiple incindent objects from the same source to
          facilitate improvements overtime";
      }
      leaf state {
        type identityref {
          base network-anomaly-state;
        }
        mandatory true;
        description "State of the anomaly.";
      }
      list symptoms {
        key "symptom_id";
        description "List of symptoms identified by the symptom_id.";
        leaf symptom_id {
          type yang:uuid;
          description "UUID of the symptom that is part of this incident";
        }
      }
    }
  }
}

<CODE ENDS>
Figure 4: YANG module for ietf-network-anomaly-metadata
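
The following non-normative example illustrates how the module could be used across the lifecycle described in Section 5. It shows two versions of the metadata for the same network anomaly: version 1, produced by a detection algorithm and flagged as a potential incident, and version 2, produced by a human operator after validation, confirming the incident and extending the symptom list. The instance data uses the JSON encoding of YANG data defined in RFC 7951; all identifiers, names and values are purely illustrative.

{
  "ietf-network-anomaly-metadata:network-anomalies": {
    "network-anomaly": [
      {
        "id": "3f8d2b6a-5c41-4e9f-9b7a-1d2c3e4f5a6b",
        "description": "Sudden increase of BGP updates towards PE1",
        "author-name": "anomaly-detector-1",
        "author": {
          "author-type": "algorithm",
          "algo-version": 2
        },
        "version": 1,
        "state": "incident-potential",
        "symptoms": [
          { "symptom_id": "9a1b2c3d-4e5f-4a6b-8c7d-0e1f2a3b4c5d" }
        ]
      },
      {
        "id": "3f8d2b6a-5c41-4e9f-9b7a-1d2c3e4f5a6b",
        "description": "Sudden increase of BGP updates towards PE1",
        "author-name": "operator-1",
        "author": {
          "author-type": "user"
        },
        "version": 2,
        "state": "incident-confirmed",
        "symptoms": [
          { "symptom_id": "9a1b2c3d-4e5f-4a6b-8c7d-0e1f2a3b4c5d" },
          { "symptom_id": "7c6b5a4d-3e2f-4d1c-9b8a-5f4e3d2c1b0a" }
        ]
      }
    ]
  }
}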

7. Implementation status

This section provides pointers to existing open-source implementations of this draft. Note to the RFC Editor: Please remove this section before publication.

7.1. Antagonist

A tool called Antagonist was implemented during the IETF 119 Hackathon in order to validate the application of the YANG model defined in this draft. Antagonist provides visual support for two important use cases in the scope of this document:

  • the generation of a ground truth in relation to symptoms and incidents in timeseries data
  • the visual validation of results produced by automated network anomaly detection tools.

The open-source code is available at [Antagonist].

8. Security Considerations

The security considerations will have to be updated according to "https://wiki.ietf.org/group/ops/yang-security-guidelines".

9. Acknowledgements

The authors would like to thank xxx for their review and valuable comments.

10. Normative References

[Ahf23]
Huang Feng, A., "Daisy: Practical Anomaly Detection in large BGP/MPLS and BGP/SRv6 VPN Networks", Applied Networking Research Workshop (ANRW '23), IETF 117, DOI 10.1145/3606464.3606470, July 2023, <https://anrw23.hotcrp.com/doc/anrw23-paper8.pdf>.
[Antagonist]
Riccobene, V., Roberto, A., Du, W., Graf, T., and A. Huang Feng, "Antagonist: Anomaly tagging on historical data", <https://github.com/vriccobene/antagonist>.
[I-D.davis-nmop-incident-terminology]
Davis, N. and A. Farrel, "Some Key Terms for Incident Management", Work in Progress, Internet-Draft, draft-davis-nmop-incident-terminology-00, <https://datatracker.ietf.org/doc/html/draft-davis-nmop-incident-terminology-00>.
[I-D.netana-nmop-network-anomaly-semantics]
Graf, T., Du, W., Huang Feng, A., Riccobene, V., and A. Roberto, "Semantic Metadata Annotation for Network Anomaly Detection", Work in Progress, Internet-Draft, draft-netana-nmop-network-anomaly-semantics-01, <https://datatracker.ietf.org/api/v1/doc/document/draft-netana-nmop-network-anomaly-semantics/>.
[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/info/rfc8174>.
[RFC8340]
Bjorklund, M. and L. Berger, Ed., "YANG Tree Diagrams", BCP 215, RFC 8340, DOI 10.17487/RFC8340, March 2018, <https://www.rfc-editor.org/info/rfc8340>.
[RFC9232]
Song, H., Qin, F., Martinez-Julia, P., Ciavaglia, L., and A. Wang, "Network Telemetry Framework", RFC 9232, DOI 10.17487/RFC9232, May 2022, <https://www.rfc-editor.org/info/rfc9232>.
[RFC9417]
Claise, B., Quilbeuf, J., Lopez, D., Voyer, D., and T. Arumugam, "Service Assurance for Intent-Based Networking Architecture", RFC 9417, DOI 10.17487/RFC9417, July 2023, <https://www.rfc-editor.org/info/rfc9417>.
[RFC9418]
Claise, B., Quilbeuf, J., Lucente, P., Fasano, P., and T. Arumugam, "A YANG Data Model for Service Assurance", RFC 9418, DOI 10.17487/RFC9418, July 2023, <https://www.rfc-editor.org/info/rfc9418>.
[TMF724A]
TMF, "Incident Management API Profile v1.0.0", <https://www.tmforum.org/resources/standard/tmf724a-incident-management-api-profile-v1-0-0/>.

Authors' Addresses

Vincenzo Riccobene
Huawei
Dublin
Ireland
Antonio Roberto
Huawei
Dublin
Ireland
Thomas Graf
Swisscom
Binzring 17
CH-8045 Zurich
Switzerland
Wanting Du
Swisscom
Binzring 17
CH-8045 Zurich
Switzerland
Alex Huang Feng
INSA-Lyon
Lyon
France