Distributed Computing & Communications (DCC) Laboratory

Technologies and Protocols for Self-Managed & Self-Organizing Networks


An Architecture for Network Self-Management and Self-Organization

Contact: Alexander V. Konstantinou <akonstan@cs.columbia.edu>


Configuration management presently requires complex, labor-intensive processes by experts. A single configuration task, such as installing or reconfiguring a system or provisioning a service, typically involves a large number of activities fragmented among multiple network elements, each with its own proprietary configuration management instrumentation and tools. A change may cause configuration inconsistencies resulting in failures or inefficiencies; undoing changes to recover an operational state is often very difficult or even practically impossible. Configuration management is therefore very costly and error prone, and often results in unpredictable failures and slow recovery.

NESTOR seeks to replace labor-intensive configuration management with one that is automated and software-intensive. Configuration management is automated by policy rules that access and manipulate respective network elements via a Resource Directory Server (RDS). RDS provides a uniform object-relationship model of network resources and represents consistency in terms of constraints; it supports atomicity and recovery of configuration change transactions, and mechanisms to assure consistency through changes. RDS pushes configuration changes to network elements using a layer of adapters that translate operations on its object-relationship model to actions on respective elements. NESTOR has been implemented in two complementary versions and is now being applied to automate several configuration management scenarios of increasing complexity, with encouraging results.


Configuration management is primarily concerned with handling changes in networked systems, from installing or removing network elements to changing element configuration parameters. At present, configuration management tasks are conducted manually. They are very complex and require substantial expertise, typically acquired through apprenticeship and trial-and-error learning; they are costly and error prone, can result in unpredictable failures and inefficiencies, and may involve costly recovery. There are several reasons for these difficulties:

  1. A configuration management task typically requires changes in multiple interdependent elements at different network layers.
  2. Configuration changes may lead to inconsistent configuration states among elements; this can result in operational failures and inefficiencies.
  3. Undoing configuration changes to recover an operational state is a very difficult task.

The NESTOR project has been developing technologies that resolve these difficulties; it seeks to automate configuration management tasks while assuring predictable, error-free operation. NESTOR is concerned with several technical challenges:

  1. How to unify access to heterogeneous configuration databases and repositories so that configuration management tasks can be programmed and executed by software rather than performed manually;
  2. How to encode knowledge of configuration consistency rules in a composable form, and enforce these rules through configuration changes;
  3. How to support rollback and/or recovery of operational configuration states;
  4. How to detect and handle emergent inconsistencies between configuration states and states controlled by underlying built-in procedures.

The following sections describe the architecture, mechanisms, and operations of the NESTOR system, addressing these challenges.

Configuration Modeling

The goal of configuration modeling is to provide a unified view of all data and knowledge needed to support automated configuration management. Currently, configuration information is spread across different element-specific repositories. Relationships between different configuration elements are implicit, and require the development of special tools to be discovered. Gathering, correlating, and visualizing a system-wide picture of configuration is a daunting and sometimes impossible task. Different repositories contain replicated and interdependent configuration information, which can often be inconsistent.

Configuration models in the NESTOR system are expressed using the Resource Definition Language (RDL). RDL is an object-oriented interface language that supports the specification of resources as objects and of their relationships. Object orientation provides important clustering of configuration and behavior through interface inheritance and hierarchy mechanisms. Interfaces define generic behaviors of objects, and inheritance supports abstraction of common features. Relationships between objects capture interdependencies arising both from hierarchical structure and from distribution. Finally, objects encapsulate the methods for accessing the underlying element instrumentation.

interface nestor::IpHost {
  attribute String hostname "Name of host";
  relationshipset interfacedThrough, IpNetworkInterface, partOf;
  boolean restart() "reboot host";
}

interface nestor::IpNetworkInterface : netmate::Node {
  readonly attribute byte[] uniqueIdentifier "e.g. MAC Address";
  relationship partOf, IpHost, interfacedThrough;
}

The above depicts fragments of the model of an IP host expressed in RDL. Interfaces are pure abstract classes, which may be scoped in a package. Packages are a requirement in an environment where models are likely to be imported from external sources, such as vendors or standards bodies. Interface definitions may include attribute, method, and relationship declarations. In the IpHost example, the first statement declares a string attribute named "hostname", which represents the name of the modeled host. The second statement declares an association between this interface and classes implementing the interface IpNetworkInterface. Associations are declared by naming both ends (role names), the type of the association class, and the multiplicity of the association (one, or many). In the example, the association between IpHost and IpNetworkInterface is specified as one-to-many: the model reflects the fact that an IP host may have one or more IP interfaces. The relationship partOf goes in the other direction, from an IpNetworkInterface to an IpHost. The IpHost restart declaration illustrates a method declaration. The "netmate::" scope in the declaration of IpNetworkInterface denotes the NETMATE schema, which provides the base classes for the construction of NESTOR classes.

While object models capture structural (via inheritance) and dynamic (via associations) relationships, they do not make any statements on the values of the modeled objects. For example, the hostname attribute definition in IpHost does not state any restrictions on the value of the name attribute in one instance in relation to other instances (such as uniqueness). Another restriction on the host object may state that the configuration of its Internet Protocol (IP) network interfaces must match the configuration of the network to which they are connected. In the NESTOR system, these restrictions are expressed as constraints on the values of one or more objects. Constraints on configuration objects and relationships enrich the model, and can be used to automate detection and reaction to inconsistencies. For example, constraints may express the above IP network configuration prerequisite. Whenever a new IP interface is introduced, NESTOR will check the constraints and may force a change of the interface attributes in case of violation.

The Constraint Definition Language (CDL) is a declarative expression language for stating assertions over the valid values of objects in RDL. Statements in CDL cannot modify any attributes or relationships in the model and have no side effects. For example, a CDL statement may declare that "all IpHost hostname attributes must be unique". Constraints may also compose restrictions on the configuration of multiple component devices or services. For example, "all user home directories must be backed up" applies to two services that are usually separate: a network information service for user accounts, and the configuration of network backup services. Another example is "the IP interface configuration of every node connected to a switch must match the VLAN configuration active on its port".
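The backup constraint above might be written in CDL roughly as follows. This is only a sketch in OCL-style syntax; the UserAccount and BackupService interfaces and their attributes are illustrative assumptions, not part of the published model:

```
nestor::UserAccount.allInstances
  ->forAll(u | nestor::BackupService.allInstances
      ->exists(b | b.paths->includes(u.homeDirectory)));
```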

The current implementation of CDL is based on the Object Constraint Language (OCL). OCL was developed as part of the Unified Modeling Language (UML) standard in order to formally define the semantics of the UML. Unlike OCL statements, CDL separates the object model from the constraint definitions, for two reasons. First, the most interesting constraints are the ones that make statements about the configuration of multiple RDL interfaces. In such cases, it may not be clear which object should "own" the constraint. For example, the aforementioned backup constraint is as much a property of the user account as of the backup service. Second, model authoring and constraint authoring will not always be performed by the same manager. Device and service models will usually be obtained from the vendor, or may be bundled in some standard model package. Attaching domain-specific constraints to RDL interfaces would limit the sharing of these models.

nestor::IpHost.allInstances
  ->select(h | h.hostname <> null)
  ->forAll(h1, h2 | h1 <> h2 implies h1.hostname <> h2.hostname);

The simple CDL constraint mentioned earlier is shown in the above figure. The constraint states that among all object instances implementing the RDL interface nestor::IpHost, those that have a non-null name must all have different names. In the OCL syntax, the right arrow operator (->) operates on collections of objects (sets, bags, and sequences). The allInstances operator returns all instances of classes implementing a particular interface. Select is an operator that filters out elements of a collection that do not satisfy the boolean condition; in this case, select removes all IP hosts that have a null name. Finally, the forAll operator states that for every pair of remaining IP host instances, the following boolean expression must hold: "if two host objects are different (different instances), then their names must be different".
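The select/forAll pipeline maps naturally onto ordinary set operations. As an illustration only (in NESTOR the constraint is evaluated by RDS over repository objects, not by hand-written code), the uniqueness check could be sketched in Python as:

```python
from itertools import combinations

def hostnames_unique(hosts):
    """Mimic the CDL constraint: select hosts with non-null names,
    then require every distinct pair to differ in name."""
    named = [h for h in hosts if h.get("hostname") is not None]   # select
    return all(h1["hostname"] != h2["hostname"]                   # forAll
               for h1, h2 in combinations(named, 2))

hosts = [{"hostname": "alpha"}, {"hostname": "beta"}, {"hostname": None}]
print(hostnames_unique(hosts))  # True: null-named hosts are filtered out first
```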

NESTOR Architecture and Operations

The overall architecture of the NESTOR system is depicted in the figure below. In the top layer, Managers perform network configuration by accessing and manipulating data in a unified object-relationship network model. A systems administrator or a software agent may play the role of a Manager. Systems administrators may interactively access the repository through a graphical or text-based user interface tool, or they may execute scripts or programs tailored specifically for a particular task. NESTOR Managers access the repository using the Directory Access Protocol (DAP), a remote interface permitting Managers to execute either locally or remotely.

[Figure: NESTOR architecture]

The Resource Directory Server (RDS) maintains an object repository that stores and controls access to model object instances. Repository objects reflect configuration settings at the real network elements plus meta-information that is supplied or inferred from multiple sources. For example, a model object representing a network host may contain information instrumented from the host, such as network interface configuration, meta-information such as host ownership, and values such as the host's name which are replicated in various repositories. The DAP interface provides operations for creating, committing, and aborting transactions, supports simple object queries (based on type and exact attribute match), as well as operations for creating, updating, and deleting objects.
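The transactional flavor of the DAP interface can be illustrated with a minimal in-memory repository. This is a sketch only; the actual DAP is a remote protocol over a much richer object-relationship model, and the names here are hypothetical:

```python
import copy

class Repository:
    """Toy repository with begin/commit/abort transaction semantics."""
    def __init__(self):
        self.objects = {}      # committed state: object id -> attribute dict
        self._snapshot = None  # saved state for rollback

    def begin(self):
        self._snapshot = copy.deepcopy(self.objects)

    def commit(self):
        self._snapshot = None            # discard the rollback point

    def abort(self):
        self.objects = self._snapshot    # restore pre-transaction state
        self._snapshot = None

    def put(self, oid, **attrs):
        self.objects.setdefault(oid, {}).update(attrs)

    def query(self, **attrs):
        # simple exact-attribute-match query, as in DAP
        return [oid for oid, o in self.objects.items()
                if all(o.get(k) == v for k, v in attrs.items())]

repo = Repository()
repo.begin()
repo.put("host1", hostname="alpha")
repo.commit()
repo.begin()
repo.put("host1", hostname="bogus")
repo.abort()                          # undo the bad change
print(repo.query(hostname="alpha"))   # ['host1']
```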

RDS stores and enforces declarative constraint expressions on the values of the repository objects. Changes in real network element configuration, brought about by manager scripts or propagated back from the devices, may violate such constraints. In such cases, RDS uses policy scripts to guide the propagation of configuration changes among related resources and to ascertain that these changes meet the respective consistency constraints. Policy scripts are Manager programs that are invoked upon constraint violation.
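The interplay of constraints and policy scripts can be sketched as follows. All names are hypothetical; NESTOR's actual engine evaluates compiled CDL constraints inside RDS rather than Python predicates:

```python
class ConstrainedStore:
    """After every change, evaluate constraints; on a violation,
    invoke the registered policy script, which may repair the state."""
    def __init__(self):
        self.state = {}
        self.rules = []  # list of (predicate, policy) pairs

    def add_rule(self, predicate, policy):
        self.rules.append((predicate, policy))

    def set(self, key, value):
        self.state[key] = value
        for predicate, policy in self.rules:
            if not predicate(self.state):
                policy(self.state)  # policy script propagates a fix

store = ConstrainedStore()
# Constraint: an interface's netmask must match its network's netmask.
store.add_rule(
    lambda s: s.get("if_mask") == s.get("net_mask"),
    lambda s: s.__setitem__("if_mask", s.get("net_mask")),
)
store.set("net_mask", "255.255.255.0")
store.set("if_mask", "255.255.0.0")   # violates the constraint...
print(store.state["if_mask"])         # ...policy repairs it: 255.255.255.0
```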

The Directory Management Protocol (DMP) is used between NESTOR Resource Directory Servers to support distribution, replication, and caching of resource objects. Like directory services, NESTOR offers mission-critical services which must remain available even in the face of server or network failures. Distribution of NESTOR services is also important for several reasons. (1) Although similar repositories used in event correlation have been shown to scale well (to the order of hundreds of thousands of objects), there is ultimately a limit to the number of modeled objects that can be stored and maintained in a single server. (2) The wide geographical dispersion of some networks requires distribution for timely response. (3) Finally, the partitioning of networks into administrative domains in many cases forces the distribution of services that would not otherwise be technically required.

The protocol adapter layer simplifies implementation of objects in the repository. Adapters are responsible for propagating information, forward and backward, between the RDS repository and the managed element or service. Use of protocol adapters separates the task of mapping the unified model attributes to the real element attributes, from the protocols realizing that mapping.
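A protocol adapter is, in this design, essentially a bidirectional translator between model attributes and element-native settings. A minimal sketch, with hypothetical names and a dict standing in for a real SNMP or CLI session:

```python
class HostAdapter:
    """Translate between repository attribute names and a
    device-specific protocol (simulated here by a dict)."""
    # repository attribute -> element-native setting name (illustrative)
    ATTR_MAP = {"hostname": "sysName", "ip_address": "ifAddr"}

    def __init__(self, device):
        self.device = device  # stands in for an SNMP/CLI session

    def push(self, model_attrs):
        """Forward direction: propagate model changes to the element."""
        for attr, value in model_attrs.items():
            self.device[self.ATTR_MAP[attr]] = value

    def pull(self):
        """Backward direction: reflect element state into the model."""
        return {attr: self.device.get(native)
                for attr, native in self.ATTR_MAP.items()}

device = {}
adapter = HostAdapter(device)
adapter.push({"hostname": "alpha"})
print(device["sysName"])           # alpha
print(adapter.pull()["hostname"])  # alpha
```

Separating the attribute mapping (ATTR_MAP) from the transport keeps the unified model independent of the protocol realizing the mapping, as the text above describes.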


An initial prototype of the NESTOR system was built using the MODEL language and InCharge repository provided by SMARTS. The prototype employed the Event-Condition-Action (ECA) rules (which can be compiled from declarative constraints).

The current NESTOR prototype has been written in Java using Sun's Jini infrastructure (over 120K lines of Java, ANTLR grammar, and OCL code). The prototype consists of:

The images below show screen-shots of the NESTOR browser and the NESTOR topology visualization engine:

[Figure: NESTOR prototype browser; NESTOR prototype topology visualization]

Technology Transfer

The NESTOR prototype has been released to Telcordia Technologies and is being used as a basis for a DARPA-sponsored smart firewall project.

The prototype was also released to the DARPA Active Networks UCLA/UCB/Utah team for use in the Active Network Support Services integration demo held at the DARPA PI meeting in Atlanta (2000).


The manual process with which computer networks are currently managed is quickly reaching its limits as networks grow, add new services, become increasingly mission critical, and spread to new environments such as private homes. Network management automation is increasingly becoming a requirement in many different types of networks. Large networks are becoming too complex to manage; mission-critical networks cannot afford operator errors; and small home networks must minimize management due to limited resources. The NESTOR system addresses these needs by combining several techniques from object modeling, constraint systems, active databases, and distributed systems in a novel management architecture.

In the NESTOR system, managers operate on a unified object-relationship model of the network using a rich set of operations that support rollback and/or recovery of operational configuration states. Declarative constraints prevent known configuration inconsistencies and in conjunction with policy scripts may automatically propagate changes to maintain consistency. Protocol proxies are used to provide much of this functionality with little or no changes in the network clients. A protocol for replication and distribution of the directory assures availability and operational efficiency. NESTOR has been implemented in two complementary versions and is now being applied to automate several configuration management scenarios of increasing complexity, with encouraging results.


NESTOR Publications

NESTOR Presentations

This effort is sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Processing Technology Office (IPTO).

The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA), or the U.S. Government.

Last updated by akonstan on 2000/06/14 15:38:31.