Traffic Shaping Engine for Novell NetWare

API Documentation

Revision 2003-01-01

(C) 2003 www.TrafficShaper.com

Overview

This document describes an API ( Application Programming Interface ) for extending the functionality of, and integrating with, the Traffic Shaping Engine ( TSE ) for Novell NetWare.   TSE Beta 3 and future versions offer an API allowing software developers to write applications which control the TSE and thereby manage bandwidth and network traffic flowing through the NetWare server.

What Is The TSE?

The TSE is a special application which implements the low level functionality required to intercept and manage traffic flowing through a NetWare server.   The TSE works with NetWare 4.10 - 6.x and is delivered as an NLM which appears to the NOS, and to system administrators, as a protocol module, like TCPIP.   The TSE is bound to any desired network interface using standard binding commands from the console.   Once the TSE is bound to an interface, the traffic flowing through that interface can be managed in accordance with the ruleset used to configure the TSE.

A Win32 GUI configuration tool allows for the creation of static configurations which encompass a rule base for identification of desired traffic, QoS rate limiting queues, priority queues, and connection oriented rate limiting.   Rules can also forward or drop traffic.   This provides a basic set of capabilities combining filtering, stateful firewalling, and bandwidth management.   Significant capabilities are exposed by the configuration tool; however, placing the TSE under dynamic control by a 3rd party application enables vastly more sophisticated applications.

These applications include the integration and management of the TSE with a directory service such as eDirectory / NDS / LDAP.  In addition, the TSE serves as an easy route for developers to build custom filtering, monitoring, firewalling, and bandwidth management systems without the need to undertake the low level development required to directly code to the underlying OS APIs.

While the TSE is still fairly monolithic in design, it is coded to be easily modularized and is potentially portable to other platforms, including Windows or Unix / Linux variants.   ( Though in reality this is the goal of the next generation of the TSE, still in blueprint form, code named "Boa."   Boa is meant to paint us out of the numerous corners we ended up in as a result of the organic growth of the code base from what was basically a proof of concept effort. )

What is the TSE API?

Well... it is really three APIs collected as a group.   The Events API allows you to hook various TSE events to receive messages and post replies to those messages.   The Config API allows you to directly manipulate the TSE's static configuration, allowing your application to hijack the TSE for your own purposes without relying on the Win32 based GUI config tool.   The third, and possibly the most important, API is the Connection API.   The Connection API allows you to build, monitor, and tear down connections, which in turn tell the TSE how to process the traffic for each connection.

The ultimate goal of this API, at least according to my evil plans, is the construction of "agents" which manage the TSE, allowing intelligent management of the raw capabilities the TSE already provides.   For example, an agent which interacts with the TSE to implement policy based bandwidth management, or an authenticated firewall / bandwidth management solution.   The agent concept uses all three APIs, in concert, to perform sophisticated bandwidth management of any type desired.

Connection Oriented Processing

The TSE has a robust means of managing individual conversations and flows.   You can define what constitutes a conversation: from all traffic to / from a given Class B network down to individual TCP connections between two hosts, and anything in between.   Once a connection entry is built, the TSE handles the management of the traffic without the need for external assistance.   This "set it and forget it" operation means that the TSE need only call upon an Agent when new connections need to be built or torn down.   This greatly reduces the number of TSE / Agent interactions, allowing the Agent technology to be more robust without the risk of weighing down the server with superfluous activity.   For example, a server servicing 1000 workstations, NATing 3000 packets a second, might only turn over 50 to 100 connections a second.   This greatly decreases the number of times your code needs to be bothered, increasing efficiency.

The TSE is designed to handle in excess of 1,000,000 simultaneous connection entries and has been tested with up to 8,000,000 simultaneous connection entries.   The upper limit is based on available RAM, and with only 128 bytes per connection, you can see why no other product can manage traffic as finely as the TSE can.   The TSE does not suffer from appreciable performance loss when operating with these massive connection tables.

Each connection represents an arbitrary combination of packet fields, like source and destination IP and port.   The connection has an associated rate limiting FIFO queue ( which also gathers traffic stats ) and an associated "action."   The Action associated with a given connection is the magic.   You can, on an individual flow / conversation basis, drop, forward, priority queue, rate limit, audit, .... matching frames.   This is like having an infinitely expandable rule set.   The Agent technology can tell the TSE to "take all traffic for 192.168.100.200:80 <-> 10.10.234.123:2345 and send it to priority level 5" and do that for millions of flows without a performance penalty!   You can also have multiple connection tables with identical address keys, allowing for all sorts of fancy processing.

Connection tables can be used programmatically.   You can test to see if a packet belongs to a given connection table, selectively add entries, and so on.   For example, a stateful filter is one where an outbound packet creates a reverse inbound filter exception.   The TSE can easily be used to perform this same function.   So in a sense, the TSE can be used as the universal widget with regard to filtering and bandwidth management.

The Events API

The Events API is the means by which messages are passed back and forth between the TSE and its Agents.   For example, you can hook an event which corresponds to "new TCP conversation needs to be built."   This triggers a "work order" to be generated by the TSE and passed to the Agent.   The Agent can then inspect the conversation's endpoints, inspect the packets, whatever, and then build a connection with the appropriate action.   From that point on, the Agent can kick up its heels and take it easy - the TSE does all the work.

You hook an event by registering a "callback" function.   The callback function is passed a data structure which contains a Pending and a Completed queue.   These queues contain "Work Orders" generated by the TSE when events happen.   The TSE can simply let these work orders queue up and periodically poll your callback to give you a chance to process them, or you can have your callback executed as the events occur.

The Events API consists of only 4 functions:

[Un]RegisterForEvents() allows you to register / deregister a callback ( CB ) for a specific event.   All callbacks are polled every second or so, providing the TSE with a chance to execute your code on a periodic basis and to ensure you are warned about critical events, like a TSE unload request.   Your event callback routine can also be called INLINE.   The INLINE usage allows the TSE to execute your callback while the event is in progress and can happen at interrupt time or during packet transmission / reception.   The AllocEB() and FreeEB() functions allocate / return an empty Event Block ( Work Order ), though this is rarely necessary as the TSE handles the lifecycle of the WO.

The CB's job is to inspect its Pending queue and process the WO's found there.   This processing can be anything from simply inspecting the WO to performing significant work.   For example, an agent providing name service for the TSE would need to accept a workstation address / endpoint and return a name for it.   The agent would receive WO's containing the resolution requests, fill in the fields in the WO, and place the completed WO in the Completed queue.   Once in the Completed queue, the TSE posts the completed WO to an internal consumer.

Another example would be an event specified by a rule action.   If a connection entry were not found for a particular packet, the TSE generates an event which would trigger the agent to build / modify the connection per some policy information.   In such a situation, the TSE is merely providing hints to the policy Agent, giving it a heads-up on emergent connections.   An in-line invocation would allow the agent to immediately build / modify the connection prior to any further packets being processed.

Consider an authenticated firewall situation where a user on workstation 192.168.12.134 is only allowed HTTP and HTTPS outbound and SMTP inbound.   The default action for new connections can be set to "drop," so that packet is going nowhere fast.   The agent inspects the address info and sees the traffic is allowed.   Only through the intercession of the agent does the traffic pass, when the default action of the connection entry is modified to allow it.   Similarly, the agent could set DiffServ tags, send the traffic to a specific priority queue level, forward, drop, rate limit, or group the traffic in any way desired.   Once the connection is built, the TSE does the rest of the work.

The Grave Responsibilities of a Callback Function

Your callback function could be executed several thousand times a second.   It must be rock solid, quick, and resilient.   Since your callback runs as the TSE API thread or as part of TSE code paths, abends in callbacks will "appear" to be caused by TSE code.   So far the TSE has had an excellent track record in the ABEND department, and I want to enable you to keep it that way.   Callbacks should, preferably, be written in plain old C or assembler without any "fancy stuff."   Stack operations should be kept to a minimum.   Use of scratchpads is encouraged as they are a speedy and stackless way to deal with the problem of an unknown execution environment.

Events API Notes, Caveats, Gotchas...

In chained event callbacks, if a polled ( non in-line ) callback is registered prior to in-line callbacks, work orders are queued in the polled callback's Pending queue until a polling cycle is completed.   It is recommended that polled callbacks be registered with EVT_FLAG_WANT_LAST, and in-line callbacks with EVT_FLAG_WANT_FIRST ( or no position specified ), to ensure their appropriate placement in the chain.   Future versions of the TSEAPI may include a second chain for polled callbacks, but the same functionality is obtained using the placement flags already provided.

Callbacks of a given placement are ordered such that the first to be registered is closest to the desired location.   I.e., if 3 callbacks are registered with EVT_FLAG_WANT_LAST, the last callback registered will be towards the beginning of the chain.   Callbacks registered without any chain placement flags will end up in the "middle" of the chain, between those desiring first or last placement, and those will be ordered as registered.

Callbacks using the NEED_LAST and NEED_FIRST position flags should be aware that only a single callback can be registered, per chain / event type, in this position.   Developers should treat NEED_FIRST and NEED_LAST as reserved - and for "last resort" use only.    Since it is possible for other developers to attempt the same, your Agent could easily become unusable if another has already registered these chain positions.   If you want to use these positions, ALWAYS provide for a manual / automatic means of allowing the Agent to register as WANT_FIRST / WANT_LAST instead - this way your code will play well with others.